refactor(hwcodec): simplify the hwcodec library for the IP-KVM use case

Remove feature modules not needed in the IP-KVM scenario:
- Remove the VRAM module (direct GPU-memory encode/decode)
- Remove the Mux module (video muxing)
- Remove macOS/Android platform support
- Remove the external SDK dependencies (~9 MB)
- Remove the development tools and example programs

Simplify the decoder to MJPEG only (the capture-card output format)
Simplify the NVIDIA detection code (dlopen instead of the SDK)
Bump the version to 0.8.0
Update the related technical documentation
mofeng-git
2025-12-31 19:47:08 +08:00
parent a8a3b6c66b
commit d0e2e13b35
441 changed files with 467 additions and 143421 deletions

View File

@@ -2,26 +2,24 @@
## 1. Project Overview

hwcodec is an FFmpeg-based hardware video codec library, derived from the RustDesk project and heavily customized for One-KVM. The library focuses on the IP-KVM use case and provides GPU-accelerated video encoding on Windows and Linux.

### 1.1 Project Location

```
libs/hwcodec/
├── src/    # Rust source code
└── cpp/    # C++ source code
```

### 1.2 Core Features

- **Multiple codec formats**: H.264, H.265 (HEVC), VP8, VP9, MJPEG
- **Hardware acceleration**: NVENC, AMF, Intel QSV (Windows), VAAPI, RKMPP, V4L2 M2M (Linux)
- **Cross-platform**: Windows, Linux (x86_64, ARM64, ARMv7)
- **Low latency**: designed for real-time streaming
- **Mixed Rust/C++ architecture**: Rust provides a safe high-level API; C++ implements the low-level codec logic
- **IP-KVM specific**: decoding supports MJPEG only (the capture-card output format); encoding supports multiple hardware accelerators

## 2. Architecture
@@ -30,35 +28,31 @@ libs/hwcodec/
```
Rust API Layer
  └── ffmpeg_ram module (encode.rs + decode.rs)
              │
              ▼   FFI bindings (bindgen)
C++ Core Layer
  └── ffmpeg_ram (encode / decode)
              │
              ▼
FFmpeg Libraries
  libavcodec │ libavutil │ libavformat │ libswscale
              │
              ▼
Hardware Acceleration Backends
  NVENC │ AMF │ QSV │ VAAPI │ RKMPP │ V4L2 M2M
```
@@ -68,8 +62,6 @@ libs/hwcodec/
| Module | Responsibility | Key files |
|------|------|----------|
| `ffmpeg_ram` | RAM-based software/hardware encode and decode | `src/ffmpeg_ram/` |
| `common` | Shared definitions and GPU detection | `src/common.rs` |
| `ffmpeg` | FFmpeg logging and initialization | `src/ffmpeg.rs` |
@@ -82,17 +74,11 @@ libs/hwcodec/
pub mod common;
pub mod ffmpeg;
pub mod ffmpeg_ram;
```

**Features**:
- Exports all submodules
- Provides the C log callback function

### 3.2 Common Module (common.rs)
@@ -111,13 +97,11 @@ pub enum Driver {
| Platform | Detection function | Method |
|------|----------|----------|
| Linux | `linux_support_nv()` | Loads libcuda.so + libnvidia-encode.so |
| Linux | `linux_support_amd()` | Checks for `libamfrt64.so.1` |
| Linux | `linux_support_intel()` | Checks for `libvpl.so`/`libmfx.so` |
| Linux | `linux_support_rkmpp()` | Checks for `/dev/mpp_service` |
| Linux | `linux_support_v4l2m2m()` | Checks for `/dev/video*` devices |
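Read as a runtime probe, the table can be combined into one check. The sketch below is illustrative only: the `extern` declarations are written out by hand (the real bindings come from the bindgen-generated `common_ffi.rs`), and `detected_backends` is a hypothetical helper, not part of the public API.

```rust
// Hypothetical helper, for illustration: each C probe returns 0 when the
// corresponding backend is usable (see the table above).
extern "C" {
    fn linux_support_nv() -> i32;
    fn linux_support_amd() -> i32;
    fn linux_support_intel() -> i32;
    fn linux_support_rkmpp() -> i32;
    fn linux_support_v4l2m2m() -> i32;
}

fn detected_backends() -> Vec<&'static str> {
    let mut found = Vec::new();
    unsafe {
        if linux_support_nv() == 0 { found.push("nvenc"); }
        if linux_support_amd() == 0 { found.push("amf"); }
        if linux_support_intel() == 0 { found.push("qsv"); }
        if linux_support_rkmpp() == 0 { found.push("rkmpp"); }
        if linux_support_v4l2m2m() == 0 { found.push("v4l2m2m"); }
    }
    found
}
```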
### 3.3 FFmpeg RAM Encoding Module

@@ -129,7 +113,7 @@ pub enum Driver {
pub struct CodecInfo {
    pub name: String,             // encoder name, e.g. "h264_nvenc"
    pub mc_name: Option<String>,  // MediaCodec name (Android)
    pub format: DataFormat,       // H264/H265/VP8/VP9/MJPEG
    pub priority: i32,            // priority (Best=0, Good=1, Normal=2, Soft=3, Bad=4)
    pub hwdevice: AVHWDeviceType, // hardware device type
}
@@ -179,7 +163,7 @@ pub struct Encoder {
#### 3.3.2 C++ Layer (cpp/ffmpeg_ram/)

**FFmpegRamEncoder class** (ffmpeg_ram_encode.cpp):

```cpp
class FFmpegRamEncoder {
@@ -225,6 +209,8 @@ fill_frame() - 填充 AVFrame 数据指针
### 3.4 FFmpeg RAM Decoding Module

**IP-KVM-specific design**: the decoder only supports software MJPEG decoding, because the video capture card in an IP-KVM setup outputs MJPEG.

**Decoder struct**:

```rust
@@ -244,16 +230,27 @@ pub struct DecodeFrame {
}
```

**available_decoders()**: returns only the MJPEG software decoder
```rust
pub fn available_decoders() -> Vec<CodecInfo> {
vec![CodecInfo {
name: "mjpeg".to_owned(),
format: MJPEG,
hwdevice: AV_HWDEVICE_TYPE_NONE,
priority: Priority::Best as _,
..Default::default()
}]
}
```
**C++ implementation** (ffmpeg_ram_decode.cpp):

```cpp
class FFmpegRamDecoder {
  AVCodecContext *c_ = NULL;
  AVFrame *frame_ = NULL;  // decoded output frame
  AVPacket *pkt_ = NULL;

  int do_decode(const void *obj);
};
@@ -262,23 +259,16 @@ class FFmpegRamDecoder {
**Decode flow**:

```
Input MJPEG data
        │
        ▼
avcodec_send_packet()    - submit the packet to the decoder
        │
        ▼
avcodec_receive_frame()  - fetch the decoded frame (YUV420P)
        │
        ▼
callback()               - deliver the output frame
```
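At the Rust level this flow maps onto a short loop; a minimal usage sketch (with `capture_frame()` standing in for the MJPEG capture source) looks like:

```rust
use hwcodec::ffmpeg_ram::decode::{DecodeContext, Decoder};
use hwcodec::ffmpeg::AVHWDeviceType;

let mut decoder = Decoder::new(DecodeContext {
    name: "mjpeg".to_string(),
    device_type: AVHWDeviceType::AV_HWDEVICE_TYPE_NONE,
    thread_count: 4,
})?;

// `capture_frame()` is a placeholder for the capture-card read.
let mjpeg_frame: Vec<u8> = capture_frame();
for frame in decoder.decode(&mjpeg_frame)? {
    // Y/U/V planes of the decoded YUV420P frame
    let (y, u, v) = (&frame.data[0], &frame.data[1], &frame.data[2]);
    println!("decoded planes: {} / {} / {} bytes", y.len(), u.len(), v.len());
}
```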
## 4. Hardware Acceleration Support
@@ -293,27 +283,32 @@ avcodec_receive_frame() - 获取解码帧
| VAAPI | Generic | Linux | h264_vaapi, hevc_vaapi, vp8_vaapi, vp9_vaapi |
| RKMPP | Rockchip | Linux | h264_rkmpp, hevc_rkmpp |
| V4L2 M2M | ARM SoC | Linux | h264_v4l2m2m, hevc_v4l2m2m |

### 4.2 Hardware Detection Logic (Linux)
```cpp
// libs/hwcodec/cpp/common/platform/linux/linux.cpp

// NVIDIA detection - simplified dynamic-library probe
int linux_support_nv() {
  void *handle = dlopen("libcuda.so.1", RTLD_LAZY);
  if (!handle) handle = dlopen("libcuda.so", RTLD_LAZY);
  if (!handle) return -1;
  dlclose(handle);

  handle = dlopen("libnvidia-encode.so.1", RTLD_LAZY);
  if (!handle) handle = dlopen("libnvidia-encode.so", RTLD_LAZY);
  if (!handle) return -1;
  dlclose(handle);
  return 0;
}

// AMD detection - check for the AMF runtime library
int linux_support_amd() {
  void *handle = dlopen("libamfrt64.so.1", RTLD_LAZY);
  if (!handle) return -1;
  dlclose(handle);
  return 0;
}

// Intel detection - check for the VPL/MFX libraries
@@ -379,11 +374,6 @@ bool set_lantency_free(void *priv_data, const std::string &name) {
      name.find("vaapi") != std::string::npos) {
    av_opt_set(priv_data, "async_depth", "1", 0);
  }

  // libvpx: realtime mode
  if (name.find("libvpx") != std::string::npos) {
    av_opt_set(priv_data, "deadline", "realtime", 0);
@@ -394,86 +384,19 @@ bool set_lantency_free(void *priv_data, const std::string &name) {
}
```
## 5. Build System

### 5.1 Cargo.toml Configuration

```toml
[package]
name = "hwcodec"
version = "0.8.0"
edition = "2021"
description = "Hardware video codec for IP-KVM (Windows/Linux)"

[features]
default = []

[dependencies]
log = "0.4"
@@ -486,7 +409,7 @@ cc = "1.0" # C++ 编译
bindgen = "0.59"  # FFI binding generation
```
### 5.2 Build Flow (build.rs)

```
build.rs
@@ -494,57 +417,61 @@ build.rs
├── build_common()
│     ├── generate common_ffi.rs (bindgen)
│     ├── compile the platform-specific C++ code
│     └── link system libraries (stdc++)
│
└── ffmpeg::build_ffmpeg()
      ├── generate ffmpeg_ffi.rs
      ├── link the FFmpeg libraries (VCPKG or pkg-config)
      └── build_ffmpeg_ram()
            └── compile ffmpeg_ram_encode.cpp, ffmpeg_ram_decode.cpp
```
### 5.3 FFmpeg Linking

| Method | Platform | Condition |
|------|------|------|
| VCPKG static linking | Cross-platform | `VCPKG_ROOT` environment variable set |
| pkg-config dynamic linking | Linux | Default |
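For the pkg-config path, the build script only needs to emit `cargo:` link directives for the four FFmpeg libraries. The snippet below is a minimal sketch of that idea, not the project's actual build.rs (which additionally runs bindgen and compiles the C++ sources):

```rust
// build.rs sketch: dynamic linking against the system FFmpeg via pkg-config.
use std::process::Command;

fn main() {
    for lib in ["libavcodec", "libavutil", "libavformat", "libswscale"] {
        // `pkg-config --libs` verifies the library exists and prints -L/-l flags.
        let out = Command::new("pkg-config")
            .args(["--libs", lib])
            .output()
            .expect("pkg-config not found");
        assert!(out.status.success(), "{lib} not found by pkg-config");
        for flag in String::from_utf8_lossy(&out.stdout).split_whitespace() {
            if let Some(path) = flag.strip_prefix("-L") {
                println!("cargo:rustc-link-search=native={path}");
            } else if let Some(name) = flag.strip_prefix("-l") {
                println!("cargo:rustc-link-lib={name}");
            }
        }
    }
}
```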
## 6. Differences from Upstream hwcodec

For the One-KVM IP-KVM use case, the upstream RustDesk hwcodec was simplified as follows:

### 6.1 Removed Features

| Removed item | Reason |
|--------|------|
| VRAM module | IP-KVM does not need direct GPU-memory encode/decode |
| Mux module | IP-KVM does not need recording to files |
| macOS support | macOS is not an IP-KVM target platform |
| Android support | Android is not an IP-KVM target platform |
| External SDKs | Simplifies the build and reduces dependencies |
| Multi-format decoding | IP-KVM only needs MJPEG decoding |
### 6.2 Retained Features

| Retained item | Purpose |
|--------|------|
| FFmpeg RAM encoding | WebRTC video encoding |
| FFmpeg RAM decoding | MJPEG capture-card decoding |
| Hardware-accelerated encoding | Low-latency, efficient encoding |
| Software-encoding fallback | Fallback when no hardware acceleration is available |

### 6.3 Code Size Comparison

| Metric | Upstream | Simplified | Reduction |
|------|------|--------|------|
| External SDKs | ~9 MB | 0 | 100% |
| C++ files | ~30 | ~10 | ~67% |
| Rust modules | 6 | 3 | 50% |

## 7. Summary

Through its mixed Rust/C++ architecture, the hwcodec library delivers high-performance video encoding and decoding while preserving memory safety. The design choices made for the One-KVM IP-KVM scenario include:

1. **Lean codec API**: decoding supports only MJPEG; encoding supports multiple hardware accelerators
2. **Automatic hardware detection**: the best available hardware backend is detected and selected at runtime
3. **Priority system**: encoders are ranked by quality and performance
4. **Low-latency tuning**: optimized specifically for real-time streaming
5. **Simplified build system**: no external SDKs; only the system FFmpeg is required
6. **Windows/Linux cross-platform**: supports x86_64, ARM64, and ARMv7

View File

@@ -9,7 +9,7 @@
```rust
pub struct EncodeContext {
    pub name: String,             // encoder name
    pub mc_name: Option<String>,  // MediaCodec name (retained field)
    pub width: i32,               // video width (must be even)
    pub height: i32,              // video height (must be even)
    pub pixfmt: AVPixelFormat,    // pixel format
@@ -58,7 +58,6 @@ pub struct EncodeContext {
| `hevc_rkmpp` | H.265 | Rockchip MPP | Linux |
| `h264_v4l2m2m` | H.264 | V4L2 M2M | Linux |
| `hevc_v4l2m2m` | H.265 | V4L2 M2M | Linux |
| `h264` | H.264 | Software (x264) | All platforms |
| `hevc` | H.265 | Software (x265) | All platforms |
| `libvpx` | VP8 | Software | All platforms |
@@ -161,51 +160,44 @@ for encoder in available_encoders {
## 2. Decoder API

### 2.1 IP-KVM-Specific Design

In the One-KVM IP-KVM scenario the decoder only supports software MJPEG decoding: the video capture card outputs MJPEG, so no other decode formats or hardware decoders are needed.

### 2.2 Decoder Initialization

#### DecodeContext Fields

```rust
pub struct DecodeContext {
    pub name: String,                // decoder name ("mjpeg")
    pub device_type: AVHWDeviceType, // hardware device type (NONE)
    pub thread_count: i32,           // number of decoding threads
}
```

### 2.3 Creating a Decoder

```rust
use hwcodec::ffmpeg_ram::decode::{Decoder, DecodeContext};
use hwcodec::ffmpeg::AVHWDeviceType;

let ctx = DecodeContext {
    name: "mjpeg".to_string(),
    device_type: AVHWDeviceType::AV_HWDEVICE_TYPE_NONE,
    thread_count: 4,
};
let decoder = Decoder::new(ctx)?;
```
### 2.4 Decoding Frames

```rust
// Input MJPEG-encoded data
let mjpeg_data: Vec<u8> = receive_mjpeg_frame();

match decoder.decode(&mjpeg_data) {
    Ok(frames) => {
        for frame in frames.iter() {
            println!("Decoded: {}x{}, format={:?}, key={}",
@@ -214,7 +206,7 @@ match decoder.decode(&encoded_packet) {
            // Access the YUV data
            let y_plane = &frame.data[0];
            let u_plane = &frame.data[1];
            let v_plane = &frame.data[2];
        }
    }
    Err(code) => {
@@ -223,7 +215,7 @@ match decoder.decode(&encoded_packet) {
}
```
### 2.5 The DecodeFrame Struct

```rust
pub struct DecodeFrame {
@@ -246,7 +238,7 @@ pub struct DecodeFrame {
| `NV12` | 2 | Y | UV (interleaved) | - |
| `NV21` | 2 | Y | VU (interleaved) | - |
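As a concrete illustration of these layouts, the plane sizes of a tightly packed YUV420P frame can be derived from the resolution alone; the helper below is a sketch for illustration, not part of the hwcodec API:

```rust
/// Expected plane sizes (in bytes) of a tightly packed YUV420P frame.
/// Real frames may carry per-plane strides (linesize) larger than the width.
fn yuv420p_plane_sizes(width: usize, height: usize) -> (usize, usize, usize) {
    let y = width * height;                  // full-resolution luma plane
    let chroma = (width / 2) * (height / 2); // U and V are subsampled 2x2
    (y, chroma, chroma)
}

// Example: a 1920x1080 frame
let (y, u, v) = yuv420p_plane_sizes(1920, 1080);
assert_eq!((y, u, v), (2_073_600, 518_400, 518_400));
```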
### 2.6 Listing Available Decoders

```rust
use hwcodec::ffmpeg_ram::decode::Decoder;
@@ -256,6 +248,9 @@ for decoder in available_decoders {
    println!("Available: {} (format: {:?}, hwdevice: {:?})",
        decoder.name, decoder.format, decoder.hwdevice);
}

// Output:
// Available: mjpeg (format: MJPEG, hwdevice: AV_HWDEVICE_TYPE_NONE)
```
## 3. Rate Control Modes

@@ -287,7 +282,6 @@ pub enum RateControl {
| amf | ✓ | ✓ (low latency) | ✗ |
| qsv | ✓ | ✓ | ✗ |
| vaapi | ✓ | ✓ | ✗ |

## 4. Quality Levels

@@ -310,45 +304,9 @@ pub enum Quality {
| Medium | p4 | balanced | medium |
| Low | p1 | speed | veryfast |
## 5. Error Handling

### 5.1 Error Codes

| Error code | Constant | Description |
|--------|------|------|
@@ -356,7 +314,7 @@ muxer.write_tail()?;
| -1 | `HWCODEC_ERR_COMMON` | Generic error |
| -2 | `HWCODEC_ERR_HEVC_COULD_NOT_FIND_POC` | Missing HEVC reference frame (POC) during decode |
### 5.2 Common Error Handling

```rust
match encoder.encode(&yuv_data, pts) {
@@ -372,9 +330,9 @@ match encoder.encode(&yuv_data, pts) {
}
```
## 6. Best Practices

### 6.1 Encoder Selection Strategy

```rust
fn select_best_encoder(
@@ -399,7 +357,7 @@ fn select_best_encoder(
}
```
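The function body is elided by the diff; a sketch of the strategy it implements (pick the lowest `priority` value among encoders of the requested format) might look like the following. The signature and import paths are illustrative, based on the module layout described in this document, not the exact ones in the source tree:

```rust
use hwcodec::ffmpeg_ram::{CodecInfo, encode::{EncodeContext, Encoder}};
use hwcodec::common::DataFormat;

// Illustrative sketch, not the exact function from the source tree.
fn select_best_encoder(ctx: EncodeContext, format: DataFormat) -> Option<CodecInfo> {
    Encoder::available_encoders(ctx, None)
        .into_iter()
        .filter(|info| info.format == format)
        .min_by_key(|info| info.priority) // Best = 0, Soft = 3, ...
}
```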
### 6.2 Frame Memory Layout

```rust
// Get the NV12 frame layout information
@@ -417,7 +375,7 @@ let mut buffer = vec![0u8; length as usize];
// Fill the UV plane: buffer[offset[0]..length]
```
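For reference, the NV12 layout of a tightly packed frame can also be computed by hand; the arithmetic below is illustrative only (the real code obtains linesize/offset from FFmpeg, which may include alignment padding):

```rust
/// Tightly packed NV12: a full-resolution Y plane followed by an
/// interleaved UV plane at half vertical resolution.
fn nv12_layout(width: usize, height: usize) -> (usize, usize) {
    let uv_offset = width * height;                  // end of the Y plane
    let total_len = uv_offset + width * height / 2;  // plus the UV plane
    (uv_offset, total_len)
}

let (uv_offset, length) = nv12_layout(1920, 1080);
let mut buffer = vec![0u8; length];
// Y plane:  buffer[..uv_offset]
// UV plane: buffer[uv_offset..length]
assert_eq!((uv_offset, length), (2_073_600, 3_110_400));
```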
### 6.3 Keyframe Control

```rust
let mut frame_count = 0;
@@ -433,7 +391,7 @@ loop {
}
```
### 6.4 Thread Safety

```rust
// Decoder implements Send + Sync
@@ -443,3 +401,81 @@ unsafe impl Sync for Decoder {}
// Can be shared safely across threads
let decoder = Arc::new(Mutex::new(Decoder::new(ctx)?));
```
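A short illustration of that pattern using only the standard library (the decoder construction is assumed from the snippet above):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// `decoder` is the Arc<Mutex<Decoder>> from the snippet above;
// `mjpeg_frame` stands in for data handed to the worker thread.
let worker = {
    let decoder = Arc::clone(&decoder);
    let mjpeg_frame: Vec<u8> = Vec::new();
    thread::spawn(move || {
        if let Ok(mut guard) = decoder.lock() {
            let _ = guard.decode(&mjpeg_frame);
        }
    })
};
worker.join().unwrap();
```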
## 7. Typical IP-KVM Usage

### 7.1 Capture and Transcode Pipeline

```
USB capture card (MJPEG)
        │
        ▼
┌─────────────────┐
│  MJPEG Decoder  │ ◄── Decoder::new("mjpeg")
│   (software)    │
└────────┬────────┘
         │ YUV420P
         ▼
┌─────────────────┐
│  H264 Encoder   │ ◄── Encoder::new("h264_vaapi")
│ (hw accelerated)│
└────────┬────────┘
         │ H264 NAL
         ▼
   WebRTC transport
```

### 7.2 Complete Example
```rust
use hwcodec::ffmpeg_ram::decode::{Decoder, DecodeContext};
use hwcodec::ffmpeg_ram::encode::{Encoder, EncodeContext};
use hwcodec::ffmpeg::AVHWDeviceType;

// Create the MJPEG decoder
let decode_ctx = DecodeContext {
    name: "mjpeg".to_string(),
    device_type: AVHWDeviceType::AV_HWDEVICE_TYPE_NONE,
    thread_count: 4,
};
let mut decoder = Decoder::new(decode_ctx)?;

// Detect available encoders and pick the best one
let encode_ctx = EncodeContext {
    name: String::new(),
    width: 1920,
    height: 1080,
    // ...
};
let available = Encoder::available_encoders(encode_ctx.clone(), None);
let best_h264 = available.iter()
    .filter(|e| e.format == DataFormat::H264)
    .min_by_key(|e| e.priority)
    .expect("No H264 encoder available");

// Create the encoder with the best codec
let encode_ctx = EncodeContext {
    name: best_h264.name.clone(),
    ..encode_ctx
};
let mut encoder = Encoder::new(encode_ctx)?;

// Processing loop
loop {
    let mjpeg_frame = capture_frame();

    // Decode MJPEG -> YUV
    let decoded = decoder.decode(&mjpeg_frame)?;

    // Encode YUV -> H264
    for frame in decoded {
        let yuv_data = frame.data.concat();
        let encoded = encoder.encode(&yuv_data, pts)?;

        // Send the encoded data
        for packet in encoded {
            send_to_webrtc(packet.data);
        }
    }
}
```

View File

@@ -35,7 +35,7 @@
Each detected hardware encoder is verified with an actual encode test:

```rust
// libs/hwcodec/src/ffmpeg_ram/encode.rs

// Generate test YUV data
let yuv = Encoder::dummy_yuv(ctx.clone())?;
@@ -47,7 +47,7 @@ match Encoder::new(c) {
match encoder.encode(&yuv, 0) {
    Ok(frames) => {
        let elapsed = start.elapsed().as_millis();
        // Verify: exactly one frame, it is a keyframe, and it finished within the timeout
        if frames.len() == 1 && frames[0].key == 1
            && elapsed < TEST_TIMEOUT_MS {
            res.push(codec);
@@ -64,27 +64,35 @@ match Encoder::new(c) {
### 2.1 Detection Mechanism (Linux)

A simplified dynamic-library probe is used, with no CUDA SDK dependency:
```cpp
// libs/hwcodec/cpp/common/platform/linux/linux.cpp
int linux_support_nv() {
  // Probe the CUDA runtime library
  void *handle = dlopen("libcuda.so.1", RTLD_LAZY);
  if (!handle) {
    handle = dlopen("libcuda.so", RTLD_LAZY);
  }
  if (!handle) {
    LOG_TRACE("NVIDIA: libcuda.so not found");
    return -1;
  }
  dlclose(handle);

  // Probe the NVENC encoder library
  handle = dlopen("libnvidia-encode.so.1", RTLD_LAZY);
  if (!handle) {
    handle = dlopen("libnvidia-encode.so", RTLD_LAZY);
  }
  if (!handle) {
    LOG_TRACE("NVIDIA: libnvidia-encode.so not found");
    return -1;
  }
  dlclose(handle);

  LOG_TRACE("NVIDIA: driver support detected");
  return 0;
}
```
@@ -127,16 +135,15 @@ av_opt_set(priv_data, "rc", "cbr", 0); // 或 "vbr"
### 2.4 Required Libraries

- `libcuda.so` / `libcuda.so.1` - CUDA runtime
- `libnvidia-encode.so` / `libnvidia-encode.so.1` - NVENC encoder

## 3. AMD AMF

### 3.1 Detection Mechanism (Linux)
```cpp
// libs/hwcodec/cpp/common/platform/linux/linux.cpp
int linux_support_amd() {
#if defined(__x86_64__) || defined(__aarch64__)
@@ -186,26 +193,12 @@ av_opt_set(priv_data, "rc", "vbr_latency", 0); // 低延迟 VBR
- `libamfrt64.so.1` (64-bit) or `libamfrt32.so.1` (32-bit)

### 3.4 External SDK

```
externals/AMF_v1.4.35/
├── amf/
│   ├── public/common/       # shared code
│   │   ├── AMFFactory.cpp
│   │   ├── Thread.cpp
│   │   └── TraceAdapter.cpp
│   └── public/include/      # headers
│       ├── components/      # component definitions
│       └── core/            # core definitions
```
## 4. Intel QSV/MFX

### 4.1 Detection Mechanism (Linux)

```cpp
// libs/hwcodec/cpp/common/platform/linux/linux.cpp
int linux_support_intel() {
  const char *libs[] = {
@@ -262,18 +255,7 @@ c->strict_std_compliance = FF_COMPLIANCE_UNOFFICIAL;
### 4.3 Limitations

- QSV does not support the `YUV420P` pixel format; `NV12` must be used
- In the One-KVM simplified build, full support is only available on Windows

### 4.4 External SDK

```
externals/MediaSDK_22.5.4/
├── api/
│   ├── include/                 # MFX headers
│   ├── mfx_dispatch/            # MFX dispatcher
│   └── mediasdk_structures/     # data structures
└── samples/sample_common/       # sample code
```
## 5. VAAPI (Linux)
@@ -362,17 +344,20 @@ avcodec_receive_packet(c_, pkt_) // 获取编码数据
### 6.1 Detection Mechanism

```cpp
// libs/hwcodec/cpp/common/platform/linux/linux.cpp
int linux_support_rkmpp() {
  // Check for the MPP service device
  if (access("/dev/mpp_service", F_OK) == 0) {
    LOG_TRACE("RKMPP: Found /dev/mpp_service");
    return 0;  // MPP available
  }

  // Fallback: check for the RGA device
  if (access("/dev/rga", F_OK) == 0) {
    LOG_TRACE("RKMPP: Found /dev/rga");
    return 0;  // MPP probably available
  }

  LOG_TRACE("RKMPP: No Rockchip MPP device found");
  return -1;  // MPP not available
}
```
@@ -395,7 +380,7 @@ int linux_support_rkmpp() {
### 7.1 Detection Mechanism

```cpp
// libs/hwcodec/cpp/common/platform/linux/linux.cpp
int linux_support_v4l2m2m() {
  const char *m2m_devices[] = {
@@ -409,10 +394,12 @@ int linux_support_v4l2m2m() {
      int fd = open(m2m_devices[i], O_RDWR | O_NONBLOCK);
      if (fd >= 0) {
        close(fd);
        LOG_TRACE("V4L2 M2M: Found device " + m2m_devices[i]);
        return 0;  // V4L2 M2M available
      }
    }
  }

  LOG_TRACE("V4L2 M2M: No M2M device found");
  return -1;
}
```
@@ -429,75 +416,9 @@ int linux_support_v4l2m2m() {
- Generic ARM SoCs (Allwinner, Amlogic, etc.)
- Devices that support the V4L2 M2M API
## 8. Hardware Acceleration Priorities

### 8.1 Priority Definition
```rust
pub enum Priority {
@@ -509,7 +430,7 @@ pub enum Priority {
}
```

### 8.2 Encoder Priorities

| Priority | Encoders |
|--------|--------|
@@ -517,10 +438,10 @@ pub enum Priority {
| Good (1) | vaapi, v4l2m2m |
| Soft (3) | x264, x265, libvpx |

### 8.3 Selection Strategy

```rust
// libs/hwcodec/src/ffmpeg_ram/mod.rs
pub fn prioritized(coders: Vec<CodecInfo>) -> CodecInfos {
    // For each format, pick the encoder with the best (lowest) priority value
@@ -537,9 +458,9 @@ pub fn prioritized(coders: Vec<CodecInfo>) -> CodecInfos {
}
```
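The idea can be pictured as a small group-by-format reduction; the sketch below is illustrative only (it assumes `DataFormat` is a small `Copy + Eq + Hash` enum, uses illustrative import paths, and ignores the real `CodecInfos` return type):

```rust
use std::collections::HashMap;
use hwcodec::ffmpeg_ram::CodecInfo; // path illustrative
use hwcodec::common::DataFormat;    // path illustrative

// For each format, keep the candidate with the lowest priority value (Best = 0).
fn best_per_format(coders: Vec<CodecInfo>) -> HashMap<DataFormat, CodecInfo> {
    let mut best: HashMap<DataFormat, CodecInfo> = HashMap::new();
    for codec in coders {
        match best.get(&codec.format) {
            Some(current) if current.priority <= codec.priority => {} // keep existing
            _ => {
                best.insert(codec.format, codec);
            }
        }
    }
    best
}
```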
## 9. Troubleshooting

### 9.1 NVIDIA

```bash
# Check the NVIDIA driver
@@ -553,7 +474,7 @@ ldconfig -p | grep cuda
ldconfig -p | grep nvidia-encode
```

### 9.2 AMD

```bash
# Check the AMD driver
@@ -563,7 +484,7 @@ lspci | grep AMD
ldconfig -p | grep amf
```

### 9.3 Intel

```bash
# Check the Intel driver
@@ -574,7 +495,7 @@ ldconfig -p | grep mfx
ldconfig -p | grep vpl
```

### 9.4 VAAPI

```bash
# Install vainfo
@@ -593,7 +514,7 @@ vainfo
# ...
```

### 9.5 Rockchip MPP

```bash
# Check for the MPP device
@@ -604,7 +525,7 @@ ls -la /dev/rga
ldconfig -p | grep rockchip_mpp
```

### 9.6 V4L2 M2M

```bash
# List V4L2 devices
@@ -613,3 +534,28 @@ v4l2-ctl --list-devices
# Check device capabilities
v4l2-ctl -d /dev/video10 --all
```
## 10. Performance Tuning Recommendations

### 10.1 Encoder Selection

1. **Prefer hardware encoding**: NVENC > AMF > QSV > VAAPI > V4L2 M2M > software
2. **ARM devices**: probe RKMPP first, then V4L2 M2M
3. **x86 devices**: selected automatically based on the GPU vendor
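That preference order can be expressed directly over encoder names; the ranking function below is a rough illustration only (the library's own `priority` field already encodes a comparable ordering):

```rust
/// Lower rank = more preferred backend, following the order listed above
/// (on ARM devices RKMPP is probed before V4L2 M2M).
fn backend_rank(encoder_name: &str) -> usize {
    const ORDER: [&str; 6] = ["nvenc", "amf", "qsv", "vaapi", "rkmpp", "v4l2m2m"];
    ORDER
        .iter()
        .position(|backend| encoder_name.contains(backend))
        .unwrap_or(ORDER.len()) // software encoders (x264/x265/libvpx) rank last
}

// Example: choose the most preferred H.264 encoder from a candidate list.
let candidates = ["h264_vaapi", "h264_nvenc", "h264"];
let best = *candidates.iter().min_by_key(|name| backend_rank(name)).unwrap();
assert_eq!(best, "h264_nvenc");
```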
### 10.2 Low-Latency Configuration

All hardware encoders have low-latency optimizations enabled:

| Encoder | Options |
|--------|------|
| NVENC | `delay=0` |
| AMF | `query_timeout=1000` |
| QSV | `async_depth=1` |
| VAAPI | `async_depth=1` |
| libvpx | `deadline=realtime`, `cpu-used=6` |

### 10.3 Rate Control

- **Real-time streaming**: CBR is recommended for a stable bitrate
- **GOP size**: 30-60 frames (1-2 s) is recommended to balance latency and compression efficiency

View File

@@ -7,43 +7,35 @@ libs/hwcodec/
├── Cargo.toml          # package configuration
├── Cargo.lock          # dependency lockfile
├── build.rs            # build script
├── src/                # Rust sources
│   ├── lib.rs          # library entry point
│   ├── common.rs       # shared definitions
│   ├── ffmpeg.rs       # FFmpeg integration
│   └── ffmpeg_ram/     # RAM encode/decode
│       ├── mod.rs
│       ├── encode.rs
│       └── decode.rs
└── cpp/                # C++ sources
    ├── common/         # shared code
    │   ├── log.cpp
    │   ├── log.h
    │   ├── util.cpp
    │   ├── util.h
    │   ├── callback.h
    │   ├── common.h
    │   └── platform/
    │       ├── linux/
    │       │   ├── linux.cpp
    │       │   └── linux.h
    │       └── win/
    │           ├── win.cpp
    │           └── win.h
    ├── ffmpeg_ram/      # FFmpeg RAM implementation
    │   ├── ffmpeg_ram_encode.cpp
    │   ├── ffmpeg_ram_decode.cpp
    │   └── ffmpeg_ram_ffi.h
    └── yuv/             # YUV handling
        └── yuv.cpp
```

## 2. Cargo Configuration
@@ -53,12 +45,12 @@ libs/hwcodec/
```toml
[package]
name = "hwcodec"
version = "0.8.0"
edition = "2021"
description = "Hardware video codec for IP-KVM (Windows/Linux)"

[features]
default = []

[dependencies]
log = "0.4"         # logging
@@ -72,26 +64,23 @@ bindgen = "0.59" # FFI 绑定生成
[dev-dependencies]
env_logger = "0.10" # log output
```
### 2.2 Differences from Upstream

| Feature | Upstream (RustDesk) | Simplified (One-KVM) |
|------|-----------------|------------------|
| `vram` feature | ✓ | ✗ (removed) |
| External SDKs | Required | Not required |
| Version | 0.7.1 | 0.8.0 |
| Target platforms | Windows/Linux/macOS/Android | Windows/Linux |

### 2.3 Usage

```toml
# Used from the One-KVM project
[dependencies]
hwcodec = { path = "libs/hwcodec" }
```

## 3. The Build Script in Detail (build.rs)
@@ -109,11 +98,7 @@ fn main() {
    // 2. Build the FFmpeg-related modules
    ffmpeg::build_ffmpeg(&mut builder);

    // 3. Compile everything into a static library
    builder.static_crt(true).compile("hwcodec");
}
```
@@ -139,9 +124,6 @@ fn build_common(builder: &mut Build) {
    #[cfg(target_os = "linux")]
    builder.file(common_dir.join("platform/linux/linux.cpp"));

    // utility code
    builder.files([
        common_dir.join("log.cpp"),
@@ -168,11 +150,8 @@ mod ffmpeg {
        // link system libraries
        link_os();

        // build the FFmpeg RAM module
        build_ffmpeg_ram(builder);
    }
}
```
@@ -186,8 +165,6 @@ fn link_vcpkg(builder: &mut Build, path: PathBuf) -> PathBuf {
    // identify the target platform
    let target = match (target_os, target_arch) {
        ("windows", "x86_64") => "x64-windows-static",
        ("linux", arch) => format!("{}-linux", arch),
        _ => panic!("unsupported platform"),
    };
@@ -239,57 +216,12 @@ fn link_os() {
    let libs: Vec<&str> = match target_os.as_str() {
        "windows" => vec!["User32", "bcrypt", "ole32", "advapi32"],
        "linux" => vec!["drm", "X11", "stdc++", "z"],
        _ => panic!("unsupported os"),
    };
    for lib in libs {
        println!("cargo:rustc-link-lib={}", lib);
    }
}
```
### 3.6 SDK 模块构建 (Windows)
```rust
#[cfg(all(windows, feature = "vram"))]
mod sdk {
pub fn build_sdk(builder: &mut Build) {
build_amf(builder); // AMD AMF
build_nv(builder); // NVIDIA
build_mfx(builder); // Intel MFX
}
fn build_nv(builder: &mut Build) {
let sdk_path = externals_dir.join("Video_Codec_SDK_12.1.14");
// 包含 SDK 头文件
builder.includes([
sdk_path.join("Interface"),
sdk_path.join("Samples/Utils"),
sdk_path.join("Samples/NvCodec"),
]);
// 编译 SDK 源文件
builder.file(sdk_path.join("Samples/NvCodec/NvEncoder/NvEncoder.cpp"));
builder.file(sdk_path.join("Samples/NvCodec/NvEncoder/NvEncoderD3D11.cpp"));
builder.file(sdk_path.join("Samples/NvCodec/NvDecoder/NvDecoder.cpp"));
// 编译封装代码
builder.files([
nv_dir.join("nv_encode.cpp"),
nv_dir.join("nv_decode.cpp"),
]);
}
}
```
@@ -332,40 +264,10 @@ impl bindgen::callbacks::ParseCallbacks for CommonCallbacks {
| `common_ffi.rs` | `common.h`, `callback.h` | Enums, constants, callback types |
| `ffmpeg_ffi.rs` | `ffmpeg_ffi.h` | FFmpeg log levels and functions |
| `ffmpeg_ram_ffi.rs` | `ffmpeg_ram_ffi.h` | Encoder/decoder functions |
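Each of these files is produced with the same bindgen pattern that build.rs already uses; a trimmed sketch for `ffmpeg_ram_ffi.rs` (paths as documented above) is:

```rust
use std::{env, path::{Path, PathBuf}};

fn generate_ffmpeg_ram_ffi() {
    let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
    let header = manifest_dir.join("cpp").join("ffmpeg_ram").join("ffmpeg_ram_ffi.h");

    bindgen::builder()
        .header(header.to_string_lossy().to_string())
        .rustified_enum("*") // map C enums onto Rust enums
        .generate()
        .unwrap()
        .write_to_file(Path::new(&env::var_os("OUT_DIR").unwrap()).join("ffmpeg_ram_ffi.rs"))
        .unwrap();
}
```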
## 5. Platform Build Guide

### 5.1 Linux Build

```bash
# Install the FFmpeg development libraries
@@ -374,11 +276,14 @@ sudo apt install libavcodec-dev libavformat-dev libavutil-dev libswscale-dev
# Install other dependencies
sudo apt install libdrm-dev libx11-dev pkg-config

# Install clang (required by bindgen)
sudo apt install clang libclang-dev

# Build
cargo build --release -p hwcodec
```

### 5.2 Windows Build (VCPKG)

```powershell
# Install VCPKG
@@ -392,26 +297,11 @@ cd vcpkg
# Set the environment variable
$env:VCPKG_ROOT = "C:\path\to\vcpkg"

# Build
cargo build --release -p hwcodec
```

### 5.3 Cross-Compilation

```bash
# Install cross
@@ -424,9 +314,9 @@ cross build --release -p hwcodec --target aarch64-unknown-linux-gnu
cross build --release -p hwcodec --target armv7-unknown-linux-gnueabihf
```

## 6. Integration into One-KVM

### 6.1 Dependency Configuration

```toml
# Cargo.toml
@@ -434,12 +324,12 @@ cross build --release -p hwcodec --target armv7-unknown-linux-gnueabihf
hwcodec = { path = "libs/hwcodec" }
```

### 6.2 Usage Example

```rust
use hwcodec::ffmpeg_ram::encode::{Encoder, EncodeContext};
use hwcodec::ffmpeg_ram::decode::{Decoder, DecodeContext};
use hwcodec::ffmpeg::{AVPixelFormat, AVHWDeviceType};

// Detect available encoders
let encoders = Encoder::available_encoders(ctx, None);
@@ -458,31 +348,41 @@ let encoder = Encoder::new(EncodeContext {
// Encode
let frames = encoder.encode(&yuv_data, pts_ms)?;

// Create the MJPEG decoder (IP-KVM specific)
let decoder = Decoder::new(DecodeContext {
    name: "mjpeg".to_string(),
    device_type: AVHWDeviceType::AV_HWDEVICE_TYPE_NONE,
    thread_count: 4,
})?;

// Decode
let frames = decoder.decode(&mjpeg_data)?;
```
### 6.3 Logging Integration

```rust
// hwcodec uses the log crate and is compatible with the One-KVM logging system
use log::{debug, info, warn, error};

// C++-level logs are forwarded to Rust through a callback
#[no_mangle]
pub extern "C" fn hwcodec_av_log_callback(level: i32, message: *const c_char) {
    // Forward to the Rust log system
    match level {
        AV_LOG_ERROR => error!("{}", message),
        AV_LOG_WARNING => warn!("{}", message),
        AV_LOG_INFO => info!("{}", message),
        AV_LOG_DEBUG => debug!("{}", message),
        _ => {}
    }
}
```
## 7. Troubleshooting

### 7.1 Compile Errors

**FFmpeg not found**:
```
@@ -502,7 +402,7 @@ error: failed to run custom build command for `hwcodec`
sudo apt install clang libclang-dev
```

### 7.2 Link Errors

**Undefined symbols**:
```
@@ -521,7 +421,7 @@ sudo ldconfig
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
```

### 7.3 Runtime Errors

**Hardware encoder unavailable**:
```
@@ -537,3 +437,41 @@ Encoder h264_vaapi test failed
avcodec_receive_frame failed, ret = -11
```
Resolution: this usually means the codec needs more input data (EAGAIN) and is normal behavior
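When a hardware encoder fails its availability test like this, the usual remedy is to fall back to a software encoder. A hedged sketch of that fallback, using only API calls documented elsewhere in these notes:

```rust
// Prefer any H.264 encoder that passed the availability test,
// otherwise fall back to the software "h264" (x264) encoder.
let candidates = Encoder::available_encoders(ctx.clone(), None);
let chosen = candidates
    .iter()
    .filter(|c| c.format == DataFormat::H264)
    .min_by_key(|c| c.priority) // hardware encoders carry lower values
    .map(|c| c.name.clone())
    .unwrap_or_else(|| "h264".to_string());

let encoder = Encoder::new(EncodeContext { name: chosen, ..ctx })?;
```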
## 8. Build Differences from Upstream RustDesk hwcodec

### 8.1 Removed Build Steps

| Step | Reason |
|------|------|
| `build_mux()` | The Mux module was removed |
| `build_ffmpeg_vram()` | The VRAM module was removed |
| `sdk::build_sdk()` | The external SDK dependencies were removed |
| macOS framework linking | macOS support was removed |
| Android NDK linking | Android support was removed |

### 8.2 Simplified Build Flow

```
Upstream build flow:
build.rs
├── build_common()
├── ffmpeg::build_ffmpeg()
│   ├── build_ffmpeg_ram()
│   ├── build_ffmpeg_vram()   [removed]
│   └── build_mux()           [removed]
└── sdk::build_sdk()          [removed]

Simplified build flow:
build.rs
├── build_common()
└── ffmpeg::build_ffmpeg()
    └── build_ffmpeg_ram()
```

### 8.3 Advantages

1. **Faster compilation**: no external SDK code to compile
2. **Fewer dependencies**: no ~9 MB external SDK download
3. **Simpler maintenance**: roughly 67% less code
4. **Smaller binary**: unused features are not included

View File

@@ -1,3 +0,0 @@
[submodule "externals"]
path = externals
url = https://github.com/rustdesk-org/externals.git

View File

@@ -1,13 +1,11 @@
[package]
name = "hwcodec"
version = "0.8.0"
edition = "2021"
description = "Hardware video codec for IP-KVM (Windows/Linux)"

[features]
default = []

[dependencies]
log = "0.4"
@@ -21,9 +19,3 @@ bindgen = "0.59"
[dev-dependencies]
env_logger = "0.10"

View File

@@ -6,18 +6,13 @@ use std::{
fn main() {
let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
let cpp_dir = manifest_dir.join("cpp");
println!("cargo:rerun-if-changed=src");
println!("cargo:rerun-if-changed={}", cpp_dir.display());
let mut builder = Build::new();
build_common(&mut builder);
ffmpeg::build_ffmpeg(&mut builder);
builder.static_crt(true).compile("hwcodec");
}
@@ -25,6 +20,7 @@ fn build_common(builder: &mut Build) {
let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
let target_os = std::env::var("CARGO_CFG_TARGET_OS").unwrap();
let common_dir = manifest_dir.join("cpp").join("common");
bindgen::builder()
.header(common_dir.join("common.h").to_string_lossy().to_string())
.header(common_dir.join("callback.h").to_string_lossy().to_string())
@@ -44,32 +40,23 @@ fn build_common(builder: &mut Build) {
builder.include(&common_dir);
// platform
let platform_path = common_dir.join("platform");
#[cfg(windows)]
{
let win_path = platform_path.join("win");
builder.include(&win_path);
builder.file(win_path.join("win.cpp"));
}
#[cfg(target_os = "linux")]
{
let linux_path = platform_path.join("linux");
builder.include(&linux_path);
builder.file(linux_path.join("linux.cpp"));
}
// Unsupported platforms
if target_os != "windows" && target_os != "linux" {
panic!("Unsupported OS: {}. Only Windows and Linux are supported.", target_os);
}
// tool
@@ -93,9 +80,6 @@ impl bindgen::callbacks::ParseCallbacks for CommonCallbacks {
}
mod ffmpeg {
use super::*;
pub fn build_ffmpeg(builder: &mut Build) {
@@ -111,13 +95,6 @@ mod ffmpeg {
link_os();
build_ffmpeg_ram(builder);
}
/// Link system FFmpeg using pkg-config (for Linux development)
@@ -181,15 +158,7 @@ mod ffmpeg {
} else {
target_arch = "arm".to_owned();
}
let mut target = if target_os == "windows" {
"x64-windows-static".to_owned()
} else {
format!("{}-{}", target_arch, target_os)
@@ -241,27 +210,13 @@ mod ffmpeg {
v.push("z");
}
v
} else {
panic!("Unsupported OS: {}. Only Windows and Linux are supported.", target_os);
};
for lib in dyn_libs.iter() {
println!("cargo:rustc-link-lib={}", lib);
}
}
fn ffmpeg_ffi() {
@@ -299,223 +254,4 @@ mod ffmpeg {
["ffmpeg_ram_encode.cpp", "ffmpeg_ram_decode.cpp"].map(|f| ffmpeg_ram_dir.join(f)),
);
}
#[cfg(feature = "vram")]
fn build_ffmpeg_vram(builder: &mut Build) {
let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
let ffmpeg_ram_dir = manifest_dir.join("cpp").join("ffmpeg_vram");
let ffi_header = ffmpeg_ram_dir
.join("ffmpeg_vram_ffi.h")
.to_string_lossy()
.to_string();
bindgen::builder()
.header(ffi_header)
.rustified_enum("*")
.generate()
.unwrap()
.write_to_file(Path::new(&env::var_os("OUT_DIR").unwrap()).join("ffmpeg_vram_ffi.rs"))
.unwrap();
builder.files(
["ffmpeg_vram_decode.cpp", "ffmpeg_vram_encode.cpp"].map(|f| ffmpeg_ram_dir.join(f)),
);
}
fn build_mux(builder: &mut Build) {
let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
let mux_dir = manifest_dir.join("cpp").join("mux");
let mux_header = mux_dir.join("mux_ffi.h").to_string_lossy().to_string();
bindgen::builder()
.header(mux_header)
.rustified_enum("*")
.generate()
.unwrap()
.write_to_file(Path::new(&env::var_os("OUT_DIR").unwrap()).join("mux_ffi.rs"))
.unwrap();
builder.files(["mux.cpp"].map(|f| mux_dir.join(f)));
}
}
#[cfg(all(windows, feature = "vram"))]
mod sdk {
use super::*;
pub(crate) fn build_sdk(builder: &mut Build) {
build_amf(builder);
build_nv(builder);
build_mfx(builder);
}
fn build_nv(builder: &mut Build) {
let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
let externals_dir = manifest_dir.join("externals");
let common_dir = manifest_dir.join("common");
let nv_dir = manifest_dir.join("cpp").join("nv");
println!("cargo:rerun-if-changed=src");
println!("cargo:rerun-if-changed={}", common_dir.display());
println!("cargo:rerun-if-changed={}", externals_dir.display());
bindgen::builder()
.header(&nv_dir.join("nv_ffi.h").to_string_lossy().to_string())
.rustified_enum("*")
.generate()
.unwrap()
.write_to_file(Path::new(&env::var_os("OUT_DIR").unwrap()).join("nv_ffi.rs"))
.unwrap();
// system
#[cfg(target_os = "windows")]
[
"kernel32", "user32", "gdi32", "winspool", "shell32", "ole32", "oleaut32", "uuid",
"comdlg32", "advapi32", "d3d11", "dxgi",
]
.map(|lib| println!("cargo:rustc-link-lib={}", lib));
#[cfg(target_os = "linux")]
println!("cargo:rustc-link-lib=stdc++");
// ffnvcodec
let ffnvcodec_path = externals_dir
.join("nv-codec-headers_n12.1.14.0")
.join("include")
.join("ffnvcodec");
builder.include(ffnvcodec_path);
// video codc sdk
let sdk_path = externals_dir.join("Video_Codec_SDK_12.1.14");
builder.includes([
sdk_path.clone(),
sdk_path.join("Interface"),
sdk_path.join("Samples").join("Utils"),
sdk_path.join("Samples").join("NvCodec"),
sdk_path.join("Samples").join("NvCodec").join("NVEncoder"),
sdk_path.join("Samples").join("NvCodec").join("NVDecoder"),
]);
for file in vec!["NvEncoder.cpp", "NvEncoderD3D11.cpp"] {
builder.file(
sdk_path
.join("Samples")
.join("NvCodec")
.join("NvEncoder")
.join(file),
);
}
for file in vec!["NvDecoder.cpp"] {
builder.file(
sdk_path
.join("Samples")
.join("NvCodec")
.join("NvDecoder")
.join(file),
);
}
// crate
builder.files(["nv_encode.cpp", "nv_decode.cpp"].map(|f| nv_dir.join(f)));
}
fn build_amf(builder: &mut Build) {
let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
let externals_dir = manifest_dir.join("externals");
let amf_dir = manifest_dir.join("cpp").join("amf");
println!("cargo:rerun-if-changed=src");
println!("cargo:rerun-if-changed={}", externals_dir.display());
bindgen::builder()
.header(amf_dir.join("amf_ffi.h").to_string_lossy().to_string())
.rustified_enum("*")
.generate()
.unwrap()
.write_to_file(Path::new(&env::var_os("OUT_DIR").unwrap()).join("amf_ffi.rs"))
.unwrap();
// system
#[cfg(windows)]
println!("cargo:rustc-link-lib=ole32");
#[cfg(target_os = "linux")]
println!("cargo:rustc-link-lib=stdc++");
// amf
let amf_path = externals_dir.join("AMF_v1.4.35");
builder.include(format!("{}/amf/public/common", amf_path.display()));
builder.include(amf_path.join("amf"));
for f in vec![
"AMFFactory.cpp",
"AMFSTL.cpp",
"Thread.cpp",
#[cfg(windows)]
"Windows/ThreadWindows.cpp",
#[cfg(target_os = "linux")]
"Linux/ThreadLinux.cpp",
"TraceAdapter.cpp",
] {
builder.file(format!("{}/amf/public/common/{}", amf_path.display(), f));
}
// crate
builder.files(["amf_encode.cpp", "amf_decode.cpp"].map(|f| amf_dir.join(f)));
}
fn build_mfx(builder: &mut Build) {
let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
let externals_dir = manifest_dir.join("externals");
let mfx_dir = manifest_dir.join("cpp").join("mfx");
println!("cargo:rerun-if-changed=src");
println!("cargo:rerun-if-changed={}", externals_dir.display());
bindgen::builder()
.header(&mfx_dir.join("mfx_ffi.h").to_string_lossy().to_string())
.rustified_enum("*")
.generate()
.unwrap()
.write_to_file(Path::new(&env::var_os("OUT_DIR").unwrap()).join("mfx_ffi.rs"))
.unwrap();
// MediaSDK
let sdk_path = externals_dir.join("MediaSDK_22.5.4");
// mfx_dispatch
let mfx_path = sdk_path.join("api").join("mfx_dispatch");
// include headers and reuse static lib
builder.include(mfx_path.join("windows").join("include"));
let sample_path = sdk_path.join("samples").join("sample_common");
builder
.includes([
sdk_path.join("api").join("include"),
sample_path.join("include"),
])
.files(
[
"sample_utils.cpp",
"base_allocator.cpp",
"d3d11_allocator.cpp",
"avc_bitstream.cpp",
"avc_spl.cpp",
"avc_nal_spl.cpp",
]
.map(|f| sample_path.join("src").join(f)),
)
.files(
[
"time.cpp",
"atomic.cpp",
"shared_object.cpp",
"thread_windows.cpp",
]
.map(|f| sample_path.join("src").join("vm").join(f)),
);
// link
[
"kernel32", "user32", "gdi32", "winspool", "shell32", "ole32", "oleaut32", "uuid",
"comdlg32", "advapi32", "d3d11", "dxgi",
]
.map(|lib| println!("cargo:rustc-link-lib={}", lib));
builder
.files(["mfx_encode.cpp", "mfx_decode.cpp"].map(|f| mfx_dir.join(f)))
.define("NOMINMAX", None)
.define("MFX_DEPRECATED_OFF", None)
.define("MFX_D3D11_SUPPORT", None);
}
}

View File

@@ -1,34 +0,0 @@
#include "common.h"
#include <iostream>
#include <public/common/TraceAdapter.h>
#include <stdio.h>
#ifndef AMF_FACILITY
#define AMF_FACILITY L"AMFCommon"
#endif
static bool convert_api(amf::AMF_MEMORY_TYPE &rhs) {
// Always use DX11 since it's the only supported API
rhs = amf::AMF_MEMORY_DX11;
return true;
}
static bool convert_surface_format(SurfaceFormat lhs,
amf::AMF_SURFACE_FORMAT &rhs) {
switch (lhs) {
case SURFACE_FORMAT_NV12:
rhs = amf::AMF_SURFACE_NV12;
break;
case SURFACE_FORMAT_RGBA:
rhs = amf::AMF_SURFACE_RGBA;
break;
case SURFACE_FORMAT_BGRA:
rhs = amf::AMF_SURFACE_BGRA;
break;
default:
std::cerr << "unsupported surface format: " << static_cast<int>(lhs)
<< "\n";
return false;
}
return true;
}

View File

@@ -1,451 +0,0 @@
#include <public/common/AMFFactory.h>
#include <public/common/AMFSTL.h>
#include <public/common/ByteArray.h>
#include <public/common/Thread.h>
#include <public/common/TraceAdapter.h>
#include <public/include/components/VideoConverter.h>
#include <public/include/components/VideoDecoderUVD.h>
#include <cstring>
#include <iostream>
#include "callback.h"
#include "common.h"
#include "system.h"
#include "util.h"
#define LOG_MODULE "AMFDEC"
#include "log.h"
#define AMF_FACILITY L"AMFDecoder"
#define AMF_CHECK_RETURN(res, msg) \
if (res != AMF_OK) { \
LOG_ERROR(std::string(msg) + ", result code: " + std::to_string(int(res))); \
return res; \
}
namespace {
class AMFDecoder {
private:
// system
void *device_;
int64_t luid_;
std::unique_ptr<NativeDevice> nativeDevice_ = nullptr;
// amf
AMFFactoryHelper AMFFactory_;
amf::AMFContextPtr AMFContext_ = NULL;
amf::AMFComponentPtr AMFDecoder_ = NULL;
amf::AMF_MEMORY_TYPE AMFMemoryType_;
amf::AMF_SURFACE_FORMAT decodeFormatOut_ = amf::AMF_SURFACE_NV12;
amf::AMF_SURFACE_FORMAT textureFormatOut_;
amf::AMFComponentPtr AMFConverter_ = NULL;
int last_width_ = 0;
int last_height_ = 0;
amf_wstring codec_;
bool full_range_ = false;
bool bt709_ = false;
// buffer
std::vector<std::vector<uint8_t>> buffer_;
public:
AMFDecoder(void *device, int64_t luid, amf::AMF_MEMORY_TYPE memoryTypeOut,
amf_wstring codec, amf::AMF_SURFACE_FORMAT textureFormatOut) {
device_ = device;
luid_ = luid;
AMFMemoryType_ = memoryTypeOut;
textureFormatOut_ = textureFormatOut;
codec_ = codec;
}
~AMFDecoder() {}
AMF_RESULT decode(uint8_t *iData, uint32_t iDataSize, DecodeCallback callback,
void *obj) {
AMF_RESULT res = AMF_FAIL;
bool decoded = false;
amf::AMFBufferPtr iDataWrapBuffer = NULL;
res = AMFContext_->CreateBufferFromHostNative(iData, iDataSize,
&iDataWrapBuffer, NULL);
AMF_CHECK_RETURN(res, "CreateBufferFromHostNative failed");
res = AMFDecoder_->SubmitInput(iDataWrapBuffer);
if (res == AMF_RESOLUTION_CHANGED) {
iDataWrapBuffer = NULL;
LOG_INFO(std::string("resolution changed"));
res = AMFDecoder_->Drain();
AMF_CHECK_RETURN(res, "Drain failed");
res = AMFDecoder_->Terminate();
AMF_CHECK_RETURN(res, "Terminate failed");
res = AMFDecoder_->Init(decodeFormatOut_, 0, 0);
AMF_CHECK_RETURN(res, "Init failed");
res = AMFContext_->CreateBufferFromHostNative(iData, iDataSize,
&iDataWrapBuffer, NULL);
AMF_CHECK_RETURN(res, "CreateBufferFromHostNative failed");
res = AMFDecoder_->SubmitInput(iDataWrapBuffer);
}
AMF_CHECK_RETURN(res, "SubmitInput failed");
amf::AMFDataPtr oData = NULL;
auto start = util::now();
do {
res = AMFDecoder_->QueryOutput(&oData);
if (res == AMF_REPEAT) {
amf_sleep(1);
}
} while (res == AMF_REPEAT && util::elapsed_ms(start) < DECODE_TIMEOUT_MS);
if (res == AMF_OK && oData != NULL) {
amf::AMFSurfacePtr surface(oData);
AMF_RETURN_IF_INVALID_POINTER(surface, L"surface is NULL");
if (surface->GetPlanesCount() == 0)
return AMF_FAIL;
// convert texture
amf::AMFDataPtr convertData;
res = Convert(surface, convertData);
AMF_CHECK_RETURN(res, "Convert failed");
amf::AMFSurfacePtr convertSurface(convertData);
if (!convertSurface || convertSurface->GetPlanesCount() == 0)
return AMF_FAIL;
// For DirectX objects, when a pointer to a COM interface is returned,
// GetNative does not call IUnknown::AddRef on the interface being
// returned.
void *native = convertSurface->GetPlaneAt(0)->GetNative();
if (!native)
return AMF_FAIL;
switch (convertSurface->GetMemoryType()) {
case amf::AMF_MEMORY_DX11: {
{
ID3D11Texture2D *src = (ID3D11Texture2D *)native;
D3D11_TEXTURE2D_DESC desc;
src->GetDesc(&desc);
nativeDevice_->EnsureTexture(desc.Width, desc.Height);
nativeDevice_->next();
ID3D11Texture2D *dst = nativeDevice_->GetCurrentTexture();
nativeDevice_->context_->CopyResource(dst, src);
nativeDevice_->context_->Flush();
if (callback)
callback(dst, obj);
decoded = true;
}
break;
} break;
case amf::AMF_MEMORY_OPENCL: {
uint8_t *buf = (uint8_t *)native;
} break;
}
surface = NULL;
convertData = NULL;
convertSurface = NULL;
}
oData = NULL;
iDataWrapBuffer = NULL;
return decoded ? AMF_OK : AMF_FAIL;
}
AMF_RESULT destroy() {
// Terminate the converter before terminating the decoder, otherwise we get
// "[AMFDeviceDX11Impl] Warning: Possible memory leak detected: DX11 device is
// being destroyed, but has 6 surfaces associated with it. This is OK if there
// are references to the device outside AMF"
if (AMFConverter_ != NULL) {
AMFConverter_->Drain();
AMFConverter_->Terminate();
AMFConverter_ = NULL;
}
if (AMFDecoder_ != NULL) {
AMFDecoder_->Drain();
AMFDecoder_->Terminate();
AMFDecoder_ = NULL;
}
if (AMFContext_ != NULL) {
AMFContext_->Terminate();
AMFContext_ = NULL; // context is the last
}
AMFFactory_.Terminate();
return AMF_OK;
}
AMF_RESULT initialize() {
AMF_RESULT res;
res = AMFFactory_.Init();
AMF_CHECK_RETURN(res, "AMFFactory Init failed");
amf::AMFSetCustomTracer(AMFFactory_.GetTrace());
amf::AMFTraceEnableWriter(AMF_TRACE_WRITER_CONSOLE, true);
amf::AMFTraceSetWriterLevel(AMF_TRACE_WRITER_CONSOLE, AMF_TRACE_WARNING);
res = AMFFactory_.GetFactory()->CreateContext(&AMFContext_);
AMF_CHECK_RETURN(res, "CreateContext failed");
switch (AMFMemoryType_) {
case amf::AMF_MEMORY_DX11:
nativeDevice_ = std::make_unique<NativeDevice>();
if (!nativeDevice_->Init(luid_, (ID3D11Device *)device_, 4)) {
LOG_ERROR(std::string("Init NativeDevice failed"));
return AMF_FAIL;
}
res = AMFContext_->InitDX11(
nativeDevice_->device_.Get()); // can be DX11 device
AMF_CHECK_RETURN(res, "InitDX11 failed");
break;
default:
LOG_ERROR(std::string("unsupported memory type: ") +
std::to_string((int)AMFMemoryType_));
return AMF_FAIL;
}
res = AMFFactory_.GetFactory()->CreateComponent(AMFContext_, codec_.c_str(),
&AMFDecoder_);
AMF_CHECK_RETURN(res, "CreateComponent failed");
res = setParameters();
AMF_CHECK_RETURN(res, "setParameters failed");
res = AMFDecoder_->Init(decodeFormatOut_, 0, 0);
AMF_CHECK_RETURN(res, "Init decoder failed");
return AMF_OK;
}
private:
AMF_RESULT setParameters() {
AMF_RESULT res;
res =
AMFDecoder_->SetProperty(AMF_TIMESTAMP_MODE, amf_int64(AMF_TS_DECODE));
AMF_RETURN_IF_FAILED(
res, L"SetProperty AMF_TIMESTAMP_MODE to AMF_TS_DECODE failed");
res =
AMFDecoder_->SetProperty(AMF_VIDEO_DECODER_REORDER_MODE,
amf_int64(AMF_VIDEO_DECODER_MODE_LOW_LATENCY));
AMF_CHECK_RETURN(res, "SetProperty AMF_VIDEO_DECODER_REORDER_MODE failed");
// color
res = AMFDecoder_->SetProperty<amf_int64>(
AMF_VIDEO_DECODER_COLOR_RANGE,
full_range_ ? AMF_COLOR_RANGE_FULL : AMF_COLOR_RANGE_STUDIO);
AMF_CHECK_RETURN(res, "SetProperty AMF_VIDEO_DECODER_COLOR_RANGE failed");
res = AMFDecoder_->SetProperty<amf_int64>(
AMF_VIDEO_DECODER_COLOR_PROFILE,
bt709_ ? (full_range_ ? AMF_VIDEO_CONVERTER_COLOR_PROFILE_FULL_709
: AMF_VIDEO_CONVERTER_COLOR_PROFILE_709)
: (full_range_ ? AMF_VIDEO_CONVERTER_COLOR_PROFILE_FULL_601
: AMF_VIDEO_CONVERTER_COLOR_PROFILE_601));
AMF_CHECK_RETURN(res, "SetProperty AMF_VIDEO_DECODER_COLOR_PROFILE failed");
// res = AMFDecoder_->SetProperty<amf_int64>(
// AMF_VIDEO_DECODER_COLOR_TRANSFER_CHARACTERISTIC,
// bt709_ ? AMF_COLOR_TRANSFER_CHARACTERISTIC_BT709
// : AMF_COLOR_TRANSFER_CHARACTERISTIC_SMPTE170M);
// AMF_CHECK_RETURN(
// res,
// "SetProperty AMF_VIDEO_DECODER_COLOR_TRANSFER_CHARACTERISTIC
// failed");
// res = AMFDecoder_->SetProperty<amf_int64>(
// AMF_VIDEO_DECODER_COLOR_PRIMARIES,
// bt709_ ? AMF_COLOR_PRIMARIES_BT709 : AMF_COLOR_PRIMARIES_SMPTE170M);
// AMF_CHECK_RETURN(res,
// "SetProperty AMF_VIDEO_DECODER_COLOR_PRIMARIES failed");
return AMF_OK;
}
AMF_RESULT Convert(IN amf::AMFSurfacePtr &surface,
OUT amf::AMFDataPtr &convertData) {
if (decodeFormatOut_ == textureFormatOut_)
return AMF_OK;
AMF_RESULT res;
int width = surface->GetPlaneAt(0)->GetWidth();
int height = surface->GetPlaneAt(0)->GetHeight();
if (AMFConverter_ != NULL) {
if (width != last_width_ || height != last_height_) {
LOG_INFO(std::string("Convert size changed, (") + std::to_string(last_width_) + "x" +
std::to_string(last_height_) + ") -> (" +
std::to_string(width) + "x" + std::to_string(height) + ")");
AMFConverter_->Terminate();
AMFConverter_ = NULL;
}
}
if (!AMFConverter_) {
res = AMFFactory_.GetFactory()->CreateComponent(
AMFContext_, AMFVideoConverter, &AMFConverter_);
AMF_CHECK_RETURN(res, "Convert CreateComponent failed");
res = AMFConverter_->SetProperty(AMF_VIDEO_CONVERTER_MEMORY_TYPE,
AMFMemoryType_);
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_CONVERTER_MEMORY_TYPE failed");
res = AMFConverter_->SetProperty(AMF_VIDEO_CONVERTER_OUTPUT_FORMAT,
textureFormatOut_);
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_CONVERTER_OUTPUT_FORMAT failed");
res = AMFConverter_->SetProperty(AMF_VIDEO_CONVERTER_OUTPUT_SIZE,
::AMFConstructSize(width, height));
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_CONVERTER_OUTPUT_SIZE failed");
res = AMFConverter_->Init(decodeFormatOut_, width, height);
AMF_CHECK_RETURN(res, "Init converter failed");
// color
res = AMFConverter_->SetProperty<amf_int64>(
AMF_VIDEO_CONVERTER_INPUT_COLOR_RANGE,
full_range_ ? AMF_COLOR_RANGE_FULL : AMF_COLOR_RANGE_STUDIO);
AMF_CHECK_RETURN(
res, "SetProperty AMF_VIDEO_CONVERTER_INPUT_COLOR_RANGE failed");
res = AMFConverter_->SetProperty<amf_int64>(
AMF_VIDEO_CONVERTER_OUTPUT_COLOR_RANGE, AMF_COLOR_RANGE_FULL);
AMF_CHECK_RETURN(
res, "SetProperty AMF_VIDEO_CONVERTER_OUTPUT_COLOR_RANGE failed");
res = AMFConverter_->SetProperty<amf_int64>(
AMF_VIDEO_CONVERTER_COLOR_PROFILE,
bt709_ ? (full_range_ ? AMF_VIDEO_CONVERTER_COLOR_PROFILE_FULL_709
: AMF_VIDEO_CONVERTER_COLOR_PROFILE_709)
: (full_range_ ? AMF_VIDEO_CONVERTER_COLOR_PROFILE_FULL_601
: AMF_VIDEO_CONVERTER_COLOR_PROFILE_601));
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_CONVERTER_COLOR_PROFILE failed");
res = AMFConverter_->SetProperty<amf_int64>(
AMF_VIDEO_CONVERTER_INPUT_TRANSFER_CHARACTERISTIC,
bt709_ ? AMF_COLOR_TRANSFER_CHARACTERISTIC_BT709
: AMF_COLOR_TRANSFER_CHARACTERISTIC_SMPTE170M);
AMF_CHECK_RETURN(
res, "SetProperty AMF_VIDEO_CONVERTER_INPUT_TRANSFER_CHARACTERISTIC "
"failed");
res = AMFConverter_->SetProperty<amf_int64>(
AMF_VIDEO_CONVERTER_INPUT_COLOR_PRIMARIES,
bt709_ ? AMF_COLOR_PRIMARIES_BT709 : AMF_COLOR_PRIMARIES_SMPTE170M);
AMF_CHECK_RETURN(
res, "SetProperty AMF_VIDEO_CONVERTER_INPUT_COLOR_PRIMARIES failed");
}
last_width_ = width;
last_height_ = height;
res = AMFConverter_->SubmitInput(surface);
AMF_CHECK_RETURN(res, "Convert SubmitInput failed");
res = AMFConverter_->QueryOutput(&convertData);
AMF_CHECK_RETURN(res, "Convert QueryOutput failed");
return AMF_OK;
}
};
bool convert_codec(DataFormat lhs, amf_wstring &rhs) {
switch (lhs) {
case H264:
rhs = AMFVideoDecoderUVD_H264_AVC;
break;
case H265:
rhs = AMFVideoDecoderHW_H265_HEVC;
break;
default:
LOG_ERROR(std::string("unsupported codec: ") + std::to_string(lhs));
return false;
}
return true;
}
} // namespace
#include "amf_common.cpp"
extern "C" {
int amf_destroy_decoder(void *decoder) {
try {
AMFDecoder *dec = (AMFDecoder *)decoder;
if (dec) {
dec->destroy();
delete dec;
dec = NULL;
return 0;
}
} catch (const std::exception &e) {
LOG_ERROR(std::string("destroy failed: ") + e.what());
}
return -1;
}
void *amf_new_decoder(void *device, int64_t luid,
DataFormat dataFormat) {
AMFDecoder *dec = NULL;
try {
amf_wstring codecStr;
amf::AMF_MEMORY_TYPE memory;
amf::AMF_SURFACE_FORMAT surfaceFormat;
if (!convert_api(memory)) {
return NULL;
}
if (!convert_codec(dataFormat, codecStr)) {
return NULL;
}
dec = new AMFDecoder(device, luid, memory, codecStr, amf::AMF_SURFACE_BGRA);
if (dec) {
if (dec->initialize() == AMF_OK) {
return dec;
}
}
} catch (const std::exception &e) {
LOG_ERROR(std::string("new failed: ") + e.what());
}
if (dec) {
dec->destroy();
delete dec;
dec = NULL;
}
return NULL;
}
int amf_decode(void *decoder, uint8_t *data, int32_t length,
DecodeCallback callback, void *obj) {
try {
AMFDecoder *dec = (AMFDecoder *)decoder;
if (dec->decode(data, length, callback, obj) == AMF_OK) {
return HWCODEC_SUCCESS;
}
} catch (const std::exception &e) {
LOG_ERROR(std::string("decode failed: ") + e.what());
}
return HWCODEC_ERR_COMMON;
}
int amf_test_decode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum,
int32_t *outDescNum, DataFormat dataFormat,
uint8_t *data, int32_t length, const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount) {
try {
Adapters adapters;
if (!adapters.Init(ADAPTER_VENDOR_AMD))
return -1;
int count = 0;
for (auto &adapter : adapters.adapters_) {
int64_t currentLuid = LUID(adapter.get()->desc1_);
if (util::skip_test(excludedLuids, excludeFormats, excludeCount, currentLuid, dataFormat)) {
continue;
}
AMFDecoder *p = (AMFDecoder *)amf_new_decoder(
nullptr, currentLuid, dataFormat);
if (!p)
continue;
auto start = util::now();
bool succ = p->decode(data, length, nullptr, nullptr) == AMF_OK;
int64_t elapsed = util::elapsed_ms(start);
if (succ && elapsed < TEST_TIMEOUT_MS) {
outLuids[count] = currentLuid;
outVendors[count] = VENDOR_AMD;
count += 1;
}
p->destroy();
delete p;
p = nullptr;
if (count >= maxDescNum)
break;
}
*outDescNum = count;
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("test failed: ") + e.what());
}
return -1;
}
} // extern "C"

View File

@@ -1,611 +0,0 @@
#include <public/common/AMFFactory.h>
#include <public/common/AMFSTL.h>
#include <public/common/Thread.h>
#include <public/common/TraceAdapter.h>
#include <public/include/components/VideoEncoderAV1.h>
#include <public/include/components/VideoEncoderHEVC.h>
#include <public/include/components/VideoEncoderVCE.h>
#include <public/include/core/Platform.h>
#include <stdio.h>
#include <cstring>
#include <iostream>
#include <math.h>
#include "callback.h"
#include "common.h"
#include "system.h"
#include "util.h"
#define LOG_MODULE "AMFENC"
#include "log.h"
#define AMF_FACILITY L"AMFEncoder"
#define MILLISEC_TIME 10000
namespace {
#define AMF_CHECK_RETURN(res, msg) \
if (res != AMF_OK) { \
LOG_ERROR(std::string(msg) + ", result code: " + std::to_string(int(res))); \
return res; \
}
/** Encoder output packet */
struct encoder_packet {
uint8_t *data; /**< Packet data */
size_t size; /**< Packet size */
int64_t pts; /**< Presentation timestamp */
int64_t dts; /**< Decode timestamp */
int32_t timebase_num; /**< Timebase numerator */
int32_t timebase_den; /**< Timebase denominator */
bool keyframe; /**< Is a keyframe */
/* ---------------------------------------------------------------- */
/* Internal video variables (will be parsed automatically) */
/* DTS in microseconds */
int64_t dts_usec;
/* System DTS in microseconds */
int64_t sys_dts_usec;
};
class AMFEncoder {
public:
DataFormat dataFormat_;
amf::AMFComponentPtr AMFEncoder_ = NULL;
amf::AMFContextPtr AMFContext_ = NULL;
private:
// system
void *handle_;
// AMF Internals
AMFFactoryHelper AMFFactory_;
amf::AMF_MEMORY_TYPE AMFMemoryType_;
amf::AMF_SURFACE_FORMAT AMFSurfaceFormat_ = amf::AMF_SURFACE_BGRA;
std::pair<int32_t, int32_t> resolution_;
amf_wstring codec_;
// const
AMF_COLOR_BIT_DEPTH_ENUM eDepth_ = AMF_COLOR_BIT_DEPTH_8;
int query_timeout_ = ENCODE_TIMEOUT_MS;
int32_t bitRateIn_;
int32_t frameRate_;
int32_t gop_;
bool enable4K_ = false;
bool full_range_ = false;
bool bt709_ = false;
// Buffers
std::vector<uint8_t> packetDataBuffer_;
public:
AMFEncoder(void *handle, amf::AMF_MEMORY_TYPE memoryType, amf_wstring codec,
DataFormat dataFormat, int32_t width, int32_t height,
int32_t bitrate, int32_t framerate, int32_t gop) {
handle_ = handle;
dataFormat_ = dataFormat;
AMFMemoryType_ = memoryType;
resolution_ = std::make_pair(width, height);
codec_ = codec;
bitRateIn_ = bitrate;
frameRate_ = framerate;
gop_ = (gop > 0 && gop < MAX_GOP) ? gop : MAX_GOP;
enable4K_ = width > 1920 && height > 1080;
}
~AMFEncoder() {}
AMF_RESULT encode(void *tex, EncodeCallback callback, void *obj, int64_t ms) {
amf::AMFSurfacePtr surface = NULL;
amf::AMFComputeSyncPointPtr pSyncPoint = NULL;
AMF_RESULT res;
bool encoded = false;
switch (AMFMemoryType_) {
case amf::AMF_MEMORY_DX11:
// https://github.com/GPUOpen-LibrariesAndSDKs/AMF/issues/280
// AMF will not copy the surface during the CreateSurfaceFromDX11Native
// call
res = AMFContext_->CreateSurfaceFromDX11Native(tex, &surface, NULL);
AMF_CHECK_RETURN(res, "CreateSurfaceFromDX11Native failed");
{
amf::AMFDataPtr data1;
surface->Duplicate(surface->GetMemoryType(), &data1);
surface = amf::AMFSurfacePtr(data1);
}
break;
default:
LOG_ERROR(std::string("Unsupported memory type"));
return AMF_NOT_IMPLEMENTED;
break;
}
surface->SetPts(ms * AMF_MILLISECOND);
res = AMFEncoder_->SubmitInput(surface);
AMF_CHECK_RETURN(res, "SubmitInput failed");
amf::AMFDataPtr data = NULL;
res = AMFEncoder_->QueryOutput(&data);
if (res == AMF_OK && data != NULL) {
struct encoder_packet packet;
PacketKeyframe(data, &packet);
amf::AMFBufferPtr pBuffer = amf::AMFBufferPtr(data);
packet.size = pBuffer->GetSize();
if (packet.size > 0) {
if (packetDataBuffer_.size() < packet.size) {
size_t newBufferSize = (size_t)exp2(ceil(log2((double)packet.size)));
packetDataBuffer_.resize(newBufferSize);
}
packet.data = packetDataBuffer_.data();
std::memcpy(packet.data, pBuffer->GetNative(), packet.size);
if (callback)
callback(packet.data, packet.size, packet.keyframe, obj, ms);
encoded = true;
}
pBuffer = NULL;
}
data = NULL;
pSyncPoint = NULL;
surface = NULL;
return encoded ? AMF_OK : AMF_FAIL;
}
AMF_RESULT destroy() {
if (AMFEncoder_) {
AMFEncoder_->Terminate();
AMFEncoder_ = NULL;
}
if (AMFContext_) {
AMFContext_->Terminate();
AMFContext_ = NULL; // AMFContext_ is the last
}
AMFFactory_.Terminate();
return AMF_OK;
}
AMF_RESULT test() {
AMF_RESULT res = AMF_OK;
amf::AMFSurfacePtr surface = nullptr;
res = AMFContext_->AllocSurface(AMFMemoryType_, AMFSurfaceFormat_,
resolution_.first, resolution_.second,
&surface);
AMF_CHECK_RETURN(res, "AllocSurface failed");
if (surface->GetPlanesCount() < 1)
return AMF_FAIL;
void *native = surface->GetPlaneAt(0)->GetNative();
if (!native)
return AMF_FAIL;
int32_t key_obj = 0;
auto start = util::now();
res = encode(native, util_encode::vram_encode_test_callback, &key_obj, 0);
int64_t elapsed = util::elapsed_ms(start);
if (res == AMF_OK && key_obj == 1 && elapsed < TEST_TIMEOUT_MS) {
return AMF_OK;
}
return AMF_FAIL;
}
AMF_RESULT initialize() {
AMF_RESULT res;
res = AMFFactory_.Init();
if (res != AMF_OK) {
std::cerr << "AMF init failed, error code = " << res << "\n";
return res;
}
amf::AMFSetCustomTracer(AMFFactory_.GetTrace());
amf::AMFTraceEnableWriter(AMF_TRACE_WRITER_CONSOLE, true);
amf::AMFTraceSetWriterLevel(AMF_TRACE_WRITER_CONSOLE, AMF_TRACE_WARNING);
// AMFContext_
res = AMFFactory_.GetFactory()->CreateContext(&AMFContext_);
AMF_CHECK_RETURN(res, "CreateContext failed");
switch (AMFMemoryType_) {
case amf::AMF_MEMORY_DX11:
res = AMFContext_->InitDX11(handle_); // can be DX11 device
AMF_CHECK_RETURN(res, "InitDX11 failed");
break;
default:
LOG_ERROR(std::string("unsupported amf memory type"));
return AMF_FAIL;
}
// component: encoder
res = AMFFactory_.GetFactory()->CreateComponent(AMFContext_, codec_.c_str(),
&AMFEncoder_);
AMF_CHECK_RETURN(res, "CreateComponent failed");
res = SetParams(codec_);
AMF_CHECK_RETURN(res, "Could not set params in encoder.");
res = AMFEncoder_->Init(AMFSurfaceFormat_, resolution_.first,
resolution_.second);
AMF_CHECK_RETURN(res, "encoder->Init() failed");
return AMF_OK;
}
private:
AMF_RESULT SetParams(const amf_wstring &codecStr) {
AMF_RESULT res;
if (codecStr == amf_wstring(AMFVideoEncoderVCE_AVC)) {
// ------------- Encoder params usage---------------
res = AMFEncoder_->SetProperty(
AMF_VIDEO_ENCODER_USAGE,
AMF_VIDEO_ENCODER_USAGE_LOW_LATENCY_HIGH_QUALITY);
AMF_CHECK_RETURN(res, "SetProperty AMF_VIDEO_ENCODER_USAGE failed");
// ------------- Encoder params static---------------
res = AMFEncoder_->SetProperty(
AMF_VIDEO_ENCODER_FRAMESIZE,
::AMFConstructSize(resolution_.first, resolution_.second));
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_FRAMESIZE failed, (" +
std::to_string(resolution_.first) + "," +
std::to_string(resolution_.second) + ")");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_LOWLATENCY_MODE, true);
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_LOWLATENCY_MODE failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_QUALITY_PRESET,
AMF_VIDEO_ENCODER_QUALITY_PRESET_QUALITY);
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_QUALITY_PRESET failed");
res =
AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_COLOR_BIT_DEPTH, eDepth_);
AMF_CHECK_RETURN(res,
"SetProperty(AMF_VIDEO_ENCODER_COLOR_BIT_DEPTH failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD,
AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD_CBR);
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_RATE_CONTROL_METHOD");
if (enable4K_) {
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_PROFILE,
AMF_VIDEO_ENCODER_PROFILE_HIGH);
AMF_CHECK_RETURN(res, "SetProperty(AMF_VIDEO_ENCODER_PROFILE failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_PROFILE_LEVEL,
AMF_H264_LEVEL__5_1);
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_PROFILE_LEVEL failed");
}
// color
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_FULL_RANGE_COLOR,
full_range_);
AMF_CHECK_RETURN(res, "SetProperty AMF_VIDEO_ENCODER_FULL_RANGE_COLOR");
res = AMFEncoder_->SetProperty<amf_int64>(
AMF_VIDEO_ENCODER_OUTPUT_COLOR_PROFILE,
bt709_ ? (full_range_ ? AMF_VIDEO_CONVERTER_COLOR_PROFILE_FULL_709
: AMF_VIDEO_CONVERTER_COLOR_PROFILE_709)
: (full_range_ ? AMF_VIDEO_CONVERTER_COLOR_PROFILE_FULL_601
: AMF_VIDEO_CONVERTER_COLOR_PROFILE_601));
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_OUTPUT_COLOR_PROFILE");
// https://github.com/obsproject/obs-studio/blob/e27b013d4754e0e81119ab237ffedce8fcebcbbf/plugins/obs-ffmpeg/texture-amf.cpp#L924
res = AMFEncoder_->SetProperty<amf_int64>(
AMF_VIDEO_ENCODER_OUTPUT_TRANSFER_CHARACTERISTIC,
bt709_ ? AMF_COLOR_TRANSFER_CHARACTERISTIC_BT709
: AMF_COLOR_TRANSFER_CHARACTERISTIC_SMPTE170M);
AMF_CHECK_RETURN(
res, "SetProperty AMF_VIDEO_ENCODER_OUTPUT_TRANSFER_CHARACTERISTIC");
res = AMFEncoder_->SetProperty<amf_int64>(
AMF_VIDEO_ENCODER_OUTPUT_COLOR_PRIMARIES,
bt709_ ? AMF_COLOR_PRIMARIES_BT709 : AMF_COLOR_PRIMARIES_SMPTE170M);
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_OUTPUT_COLOR_PRIMARIES");
// ------------- Encoder params dynamic ---------------
AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_B_PIC_PATTERN, 0);
// do not check error for AMF_VIDEO_ENCODER_B_PIC_PATTERN
// - can be not supported - check Capability Manager
// sample
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_QUERY_TIMEOUT,
query_timeout_); // ms
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_QUERY_TIMEOUT failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_TARGET_BITRATE,
bitRateIn_);
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_TARGET_BITRATE failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_FRAMERATE,
::AMFConstructRate(frameRate_, 1));
AMF_CHECK_RETURN(res, "SetProperty AMF_VIDEO_ENCODER_FRAMERATE failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_IDR_PERIOD, gop_);
AMF_CHECK_RETURN(res, "SetProperty AMF_VIDEO_ENCODER_IDR_PERIOD failed");
} else if (codecStr == amf_wstring(AMFVideoEncoder_HEVC)) {
// ------------- Encoder params usage---------------
res = AMFEncoder_->SetProperty(
AMF_VIDEO_ENCODER_HEVC_USAGE,
AMF_VIDEO_ENCODER_HEVC_USAGE_LOW_LATENCY_HIGH_QUALITY);
AMF_CHECK_RETURN(res, "SetProperty AMF_VIDEO_ENCODER_HEVC_USAGE failed");
// ------------- Encoder params static---------------
res = AMFEncoder_->SetProperty(
AMF_VIDEO_ENCODER_HEVC_FRAMESIZE,
::AMFConstructSize(resolution_.first, resolution_.second));
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_HEVC_FRAMESIZE failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_HEVC_LOWLATENCY_MODE,
true);
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_LOWLATENCY_MODE failed");
res = AMFEncoder_->SetProperty(
AMF_VIDEO_ENCODER_HEVC_QUALITY_PRESET,
AMF_VIDEO_ENCODER_HEVC_QUALITY_PRESET_QUALITY);
AMF_CHECK_RETURN(
res, "SetProperty AMF_VIDEO_ENCODER_HEVC_QUALITY_PRESET failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_HEVC_COLOR_BIT_DEPTH,
eDepth_);
AMF_CHECK_RETURN(
res, "SetProperty AMF_VIDEO_ENCODER_HEVC_COLOR_BIT_DEPTH failed");
res = AMFEncoder_->SetProperty(
AMF_VIDEO_ENCODER_HEVC_RATE_CONTROL_METHOD,
AMF_VIDEO_ENCODER_HEVC_RATE_CONTROL_METHOD_CBR);
AMF_CHECK_RETURN(
res, "SetProperty AMF_VIDEO_ENCODER_HEVC_RATE_CONTROL_METHOD failed");
if (enable4K_) {
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_HEVC_TIER,
AMF_VIDEO_ENCODER_HEVC_TIER_HIGH);
AMF_CHECK_RETURN(res, "SetProperty(AMF_VIDEO_ENCODER_HEVC_TIER failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_HEVC_PROFILE_LEVEL,
AMF_LEVEL_5_1);
AMF_CHECK_RETURN(
res, "SetProperty AMF_VIDEO_ENCODER_HEVC_PROFILE_LEVEL failed");
}
// color
res = AMFEncoder_->SetProperty<amf_int64>(
AMF_VIDEO_ENCODER_HEVC_NOMINAL_RANGE,
full_range_ ? AMF_VIDEO_ENCODER_HEVC_NOMINAL_RANGE_FULL
: AMF_VIDEO_ENCODER_HEVC_NOMINAL_RANGE_STUDIO);
AMF_CHECK_RETURN(
res, "SetProperty AMF_VIDEO_ENCODER_HEVC_NOMINAL_RANGE failed");
res = AMFEncoder_->SetProperty<amf_int64>(
AMF_VIDEO_ENCODER_HEVC_OUTPUT_COLOR_PROFILE,
bt709_ ? (full_range_ ? AMF_VIDEO_CONVERTER_COLOR_PROFILE_FULL_709
: AMF_VIDEO_CONVERTER_COLOR_PROFILE_709)
: (full_range_ ? AMF_VIDEO_CONVERTER_COLOR_PROFILE_FULL_601
: AMF_VIDEO_CONVERTER_COLOR_PROFILE_601));
AMF_CHECK_RETURN(
res,
"SetProperty AMF_VIDEO_ENCODER_HEVC_OUTPUT_COLOR_PROFILE failed");
res = AMFEncoder_->SetProperty<amf_int64>(
AMF_VIDEO_ENCODER_HEVC_OUTPUT_TRANSFER_CHARACTERISTIC,
bt709_ ? AMF_COLOR_TRANSFER_CHARACTERISTIC_BT709
: AMF_COLOR_TRANSFER_CHARACTERISTIC_SMPTE170M);
AMF_CHECK_RETURN(
res, "SetProperty "
"AMF_VIDEO_ENCODER_HEVC_OUTPUT_TRANSFER_CHARACTERISTIC failed");
res = AMFEncoder_->SetProperty<amf_int64>(
AMF_VIDEO_ENCODER_HEVC_OUTPUT_COLOR_PRIMARIES,
bt709_ ? AMF_COLOR_PRIMARIES_BT709 : AMF_COLOR_PRIMARIES_SMPTE170M);
AMF_CHECK_RETURN(
res,
"SetProperty AMF_VIDEO_ENCODER_HEVC_OUTPUT_COLOR_PRIMARIES failed");
// ------------- Encoder params dynamic ---------------
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_HEVC_QUERY_TIMEOUT,
query_timeout_); // ms
AMF_CHECK_RETURN(
res, "SetProperty(AMF_VIDEO_ENCODER_HEVC_QUERY_TIMEOUT failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_HEVC_TARGET_BITRATE,
bitRateIn_);
AMF_CHECK_RETURN(
res, "SetProperty AMF_VIDEO_ENCODER_HEVC_TARGET_BITRATE failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_HEVC_FRAMERATE,
::AMFConstructRate(frameRate_, 1));
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_HEVC_FRAMERATE failed");
res = AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_HEVC_GOP_SIZE,
gop_); // todo
AMF_CHECK_RETURN(res,
"SetProperty AMF_VIDEO_ENCODER_HEVC_GOP_SIZE failed");
} else {
return AMF_FAIL;
}
return AMF_OK;
}
void PacketKeyframe(amf::AMFDataPtr &pData, struct encoder_packet *packet) {
if (AMFVideoEncoderVCE_AVC == codec_) {
uint64_t pktType;
pData->GetProperty(AMF_VIDEO_ENCODER_OUTPUT_DATA_TYPE, &pktType);
packet->keyframe = AMF_VIDEO_ENCODER_OUTPUT_DATA_TYPE_IDR == pktType ||
AMF_VIDEO_ENCODER_OUTPUT_DATA_TYPE_I == pktType;
} else if (AMFVideoEncoder_HEVC == codec_) {
uint64_t pktType;
pData->GetProperty(AMF_VIDEO_ENCODER_HEVC_OUTPUT_DATA_TYPE, &pktType);
packet->keyframe =
AMF_VIDEO_ENCODER_HEVC_OUTPUT_DATA_TYPE_IDR == pktType ||
AMF_VIDEO_ENCODER_HEVC_OUTPUT_DATA_TYPE_I == pktType;
}
}
};
bool convert_codec(DataFormat lhs, amf_wstring &rhs) {
switch (lhs) {
case H264:
rhs = AMFVideoEncoderVCE_AVC;
break;
case H265:
rhs = AMFVideoEncoder_HEVC;
break;
default:
LOG_ERROR(std::string("unsupported codec: ") + std::to_string((int)lhs));
return false;
}
return true;
}
} // namespace
#include "amf_common.cpp"
extern "C" {
int amf_destroy_encoder(void *encoder) {
try {
AMFEncoder *enc = (AMFEncoder *)encoder;
enc->destroy();
delete enc;
enc = NULL;
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("destroy failed: ") + e.what());
}
return -1;
}
void *amf_new_encoder(void *handle, int64_t luid,
DataFormat dataFormat, int32_t width, int32_t height,
int32_t kbs, int32_t framerate, int32_t gop) {
AMFEncoder *enc = NULL;
try {
amf_wstring codecStr;
if (!convert_codec(dataFormat, codecStr)) {
return NULL;
}
amf::AMF_MEMORY_TYPE memoryType;
if (!convert_api(memoryType)) {
return NULL;
}
enc = new AMFEncoder(handle, memoryType, codecStr, dataFormat, width,
height, kbs * 1000, framerate, gop);
if (enc) {
if (AMF_OK == enc->initialize()) {
return enc;
}
}
} catch (const std::exception &e) {
LOG_ERROR(std::string("new failed: ") + e.what());
}
if (enc) {
enc->destroy();
delete enc;
enc = NULL;
}
return NULL;
}
int amf_encode(void *encoder, void *tex, EncodeCallback callback, void *obj,
int64_t ms) {
try {
AMFEncoder *enc = (AMFEncoder *)encoder;
return -enc->encode(tex, callback, obj, ms);
} catch (const std::exception &e) {
LOG_ERROR(std::string("encode failed: ") + e.what());
}
return -1;
}
int amf_driver_support() {
try {
AMFFactoryHelper factory;
AMF_RESULT res = factory.Init();
if (res == AMF_OK) {
factory.Terminate();
return 0;
}
} catch (const std::exception &e) {
}
return -1;
}
int amf_test_encode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum, int32_t *outDescNum,
DataFormat dataFormat, int32_t width,
int32_t height, int32_t kbs, int32_t framerate,
int32_t gop, const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount) {
try {
Adapters adapters;
if (!adapters.Init(ADAPTER_VENDOR_AMD))
return -1;
int count = 0;
for (auto &adapter : adapters.adapters_) {
int64_t currentLuid = LUID(adapter.get()->desc1_);
if (util::skip_test(excludedLuids, excludeFormats, excludeCount, currentLuid, dataFormat)) {
continue;
}
AMFEncoder *e = (AMFEncoder *)amf_new_encoder(
(void *)adapter.get()->device_.Get(), currentLuid,
dataFormat, width, height, kbs, framerate, gop);
if (!e)
continue;
if (e->test() == AMF_OK) {
outLuids[count] = currentLuid;
outVendors[count] = VENDOR_AMD;
count += 1;
}
e->destroy();
delete e;
e = nullptr;
if (count >= maxDescNum)
break;
}
*outDescNum = count;
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("test ") + std::to_string(kbs) + " failed: " + e.what());
}
return -1;
}
int amf_set_bitrate(void *encoder, int32_t kbs) {
try {
AMFEncoder *enc = (AMFEncoder *)encoder;
AMF_RESULT res = AMF_FAIL;
switch (enc->dataFormat_) {
case H264:
res = enc->AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_TARGET_BITRATE,
kbs * 1000);
break;
case H265:
res = enc->AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_HEVC_TARGET_BITRATE,
kbs * 1000);
break;
}
return res == AMF_OK ? 0 : -1;
} catch (const std::exception &e) {
LOG_ERROR(std::string("set bitrate to ") + std::to_string(kbs) +
"k failed: " + e.what());
}
return -1;
}
int amf_set_framerate(void *encoder, int32_t framerate) {
try {
AMFEncoder *enc = (AMFEncoder *)encoder;
AMF_RESULT res = AMF_FAIL;
AMFRate rate = ::AMFConstructRate(framerate, 1);
switch (enc->dataFormat_) {
case H264:
res = enc->AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_FRAMERATE, rate);
break;
case H265:
res =
enc->AMFEncoder_->SetProperty(AMF_VIDEO_ENCODER_HEVC_FRAMERATE, rate);
break;
}
return res == AMF_OK ? 0 : -1;
} catch (const std::exception &e) {
LOG_ERROR(std::string("set framerate to ") + std::to_string(framerate) +
" failed: " + e.what());
}
return -1;
}
} // extern "C"

View File

@@ -1,39 +0,0 @@
#ifndef AMF_FFI_H
#define AMF_FFI_H
#include "../common/callback.h"
#include <stdbool.h>
int amf_driver_support();
void *amf_new_encoder(void *handle, int64_t luid,
int32_t data_format, int32_t width, int32_t height,
int32_t bitrate, int32_t framerate, int32_t gop);
int amf_encode(void *encoder, void *texture, EncodeCallback callback, void *obj,
int64_t ms);
int amf_destroy_encoder(void *encoder);
void *amf_new_decoder(void *device, int64_t luid,
int32_t dataFormat);
int amf_decode(void *decoder, uint8_t *data, int32_t length,
DecodeCallback callback, void *obj);
int amf_destroy_decoder(void *decoder);
int amf_test_encode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum, int32_t *outDescNum,
int32_t dataFormat, int32_t width,
int32_t height, int32_t kbs, int32_t framerate,
int32_t gop, const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount);
int amf_test_decode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum, int32_t *outDescNum,
int32_t dataFormat, uint8_t *data,
int32_t length, const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount);
int amf_set_bitrate(void *encoder, int32_t kbs);
int amf_set_framerate(void *encoder, int32_t framerate);
#endif // AMF_FFI_H

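For reference, the removed AMF path was driven entirely through this C interface. The sketch below shows a hypothetical caller-side lifecycle (create, encode one texture, reconfigure, destroy). It assumes the EncodeCallback shape inferred from the call sites in amf_encode.cpp (data, size, keyframe flag, user object, timestamp) and an illustrative DataFormat value; treat it as an illustration rather than library code.

// Hypothetical usage sketch of the removed amf_ffi.h API; not part of hwcodec.
// The EncodeCallback signature below is an assumption inferred from how
// amf_encode.cpp invokes it; the real typedef lives in callback.h.
#include <cstdint>
#include <cstdio>
#include "amf_ffi.h"

static void on_packet(const uint8_t *data, int32_t size, int32_t keyframe,
                      const void *obj, int64_t pts) {
  (void)data; (void)obj;
  std::printf("packet: %d bytes, keyframe=%d, pts=%lld\n", size, keyframe,
              (long long)pts);
}

static int encode_one_frame(void *d3d11_device, int64_t luid, void *bgra_tex) {
  if (amf_driver_support() != 0)
    return -1; // AMF runtime not installed
  // data_format value is illustrative (H264 from common.h); the bitrate is in
  // kbps, since the implementation multiplies it by 1000 (see amf_new_encoder).
  void *enc = amf_new_encoder(d3d11_device, luid, /*data_format=*/0,
                              /*width=*/1920, /*height=*/1080, /*bitrate=*/4000,
                              /*framerate=*/30, /*gop=*/60);
  if (!enc)
    return -1;
  int ret = amf_encode(enc, bgra_tex, (EncodeCallback)on_packet, nullptr, /*ms=*/0);
  amf_set_bitrate(enc, 2000); // dynamic reconfiguration, in kbps
  amf_destroy_encoder(enc);
  return ret;
}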
View File

@@ -2,83 +2,50 @@
 #include "../../log.h"
 #include <cstring>
 #include <dlfcn.h>
-#include <dynlink_cuda.h>
-#include <dynlink_loader.h>
 #include <errno.h>
-#include <exception> // Include the necessary header file
 #include <signal.h>
 #include <sys/prctl.h>
 #include <unistd.h>
 #include <fcntl.h>
-namespace
-{
-void load_driver(CudaFunctions **pp_cuda_dl, NvencFunctions **pp_nvenc_dl,
-                 CuvidFunctions **pp_cvdl)
-{
-  if (cuda_load_functions(pp_cuda_dl, NULL) < 0)
-  {
-    LOG_TRACE(std::string("cuda_load_functions failed"));
-    throw "cuda_load_functions failed";
-  }
-  if (nvenc_load_functions(pp_nvenc_dl, NULL) < 0)
-  {
-    LOG_TRACE(std::string("nvenc_load_functions failed"));
-    throw "nvenc_load_functions failed";
-  }
-  if (cuvid_load_functions(pp_cvdl, NULL) < 0)
-  {
-    LOG_TRACE(std::string("cuvid_load_functions failed"));
-    throw "cuvid_load_functions failed";
-  }
-}
-void free_driver(CudaFunctions **pp_cuda_dl, NvencFunctions **pp_nvenc_dl,
-                 CuvidFunctions **pp_cvdl)
-{
-  if (*pp_cvdl)
-  {
-    cuvid_free_functions(pp_cvdl);
-    *pp_cvdl = NULL;
-  }
-  if (*pp_nvenc_dl)
-  {
-    nvenc_free_functions(pp_nvenc_dl);
-    *pp_nvenc_dl = NULL;
-  }
-  if (*pp_cuda_dl)
-  {
-    cuda_free_functions(pp_cuda_dl);
-    *pp_cuda_dl = NULL;
-  }
-}
-} // namespace
+// Check for NVIDIA driver support by loading CUDA libraries
 int linux_support_nv()
 {
-  try
-  {
-    CudaFunctions *cuda_dl = NULL;
-    NvencFunctions *nvenc_dl = NULL;
-    CuvidFunctions *cvdl = NULL;
-    load_driver(&cuda_dl, &nvenc_dl, &cvdl);
-    free_driver(&cuda_dl, &nvenc_dl, &cvdl);
-    return 0;
-  }
-  catch (...)
-  {
-    LOG_TRACE(std::string("nvidia driver not support"));
-  }
-  return -1;
+  // Try to load NVIDIA CUDA runtime library
+  void *handle = dlopen("libcuda.so.1", RTLD_LAZY);
+  if (!handle)
+  {
+    handle = dlopen("libcuda.so", RTLD_LAZY);
+  }
+  if (!handle)
+  {
+    LOG_TRACE(std::string("NVIDIA: libcuda.so not found"));
+    return -1;
+  }
+  dlclose(handle);
+  // Also check for nvenc library
+  handle = dlopen("libnvidia-encode.so.1", RTLD_LAZY);
+  if (!handle)
+  {
+    handle = dlopen("libnvidia-encode.so", RTLD_LAZY);
+  }
+  if (!handle)
+  {
+    LOG_TRACE(std::string("NVIDIA: libnvidia-encode.so not found"));
+    return -1;
+  }
+  dlclose(handle);
+  LOG_TRACE(std::string("NVIDIA: driver support detected"));
+  return 0;
 }
 int linux_support_amd()
 {
 #if defined(__x86_64__) || defined(__aarch64__)
-#define AMF_DLL_NAME L"libamfrt64.so.1"
 #define AMF_DLL_NAMEA "libamfrt64.so.1"
 #else
-#define AMF_DLL_NAME L"libamfrt32.so.1"
 #define AMF_DLL_NAMEA "libamfrt32.so.1"
 #endif
   void *handle = dlopen(AMF_DLL_NAMEA, RTLD_LAZY);
@@ -160,4 +127,4 @@ int linux_support_v4l2m2m() {
   LOG_TRACE(std::string("V4L2 M2M: No M2M device found"));
   return -1;
 }

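The hunk above replaces the SDK-backed dynlink loaders with plain dlopen probes. A minimal sketch of that probe pattern, generalized into a helper (the helper itself is illustrative and not part of linux.cpp):

// Sketch of the dlopen-probe pattern used by the new linux_support_nv():
// a vendor path is considered usable if any of its candidate sonames loads.
#include <dlfcn.h>
#include <initializer_list>

static bool can_dlopen_any(std::initializer_list<const char *> names) {
  for (const char *name : names) {
    if (void *handle = dlopen(name, RTLD_LAZY)) {
      dlclose(handle); // we only care that the runtime is present
      return true;
    }
  }
  return false;
}

// Equivalent to the checks in linux_support_nv(): both the CUDA runtime and
// the NVENC library must be resolvable for NVENC encoding to be usable.
static bool nvidia_runtime_present() {
  return can_dlopen_any({"libcuda.so.1", "libcuda.so"}) &&
         can_dlopen_any({"libnvidia-encode.so.1", "libnvidia-encode.so"});
}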
View File

@@ -1,167 +0,0 @@
#include <AVFoundation/AVFoundation.h>
#include <CoreFoundation/CoreFoundation.h>
#include <CoreMedia/CoreMedia.h>
#include <MacTypes.h>
#include <VideoToolbox/VideoToolbox.h>
#include <cstdlib>
#include <pthread.h>
#include <ratio>
#include <sys/_types/_int32_t.h>
#include <sys/event.h>
#include <unistd.h>
#include "../../log.h"
#if defined(__APPLE__)
#include <TargetConditionals.h>
#endif
// ---------------------- Core: More Robust Hardware Encoder Detection ----------------------
static int32_t hasHardwareEncoder(bool h265) {
CMVideoCodecType codecType = h265 ? kCMVideoCodecType_HEVC : kCMVideoCodecType_H264;
// ---------- Path A: Quick Query with Enable + Require ----------
// Note: Require implies Enable, but setting both here makes it easier to bypass the strategy on some models that default to a software encoder.
CFMutableDictionaryRef spec = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(spec, kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder, kCFBooleanTrue);
CFDictionarySetValue(spec, kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder, kCFBooleanTrue);
CFDictionaryRef properties = NULL;
CFStringRef outID = NULL;
// Use 1280x720 for capability detection to reduce the probability of "no hardware encoding" due to resolution/level issues.
OSStatus result = VTCopySupportedPropertyDictionaryForEncoder(1280, 720, codecType, spec, &outID, &properties);
if (properties) CFRelease(properties);
if (outID) CFRelease(outID);
if (spec) CFRelease(spec);
if (result == noErr) {
// Explicitly found an encoder that meets the "hardware-only" specification.
return 1;
}
// Reaching here means either no encoder satisfying Require was found (common), or another error occurred.
// For all failure cases, continue with the safer "session-level confirmation" path to avoid misjudgment.
// ---------- Path B: Create Session and Read UsingHardwareAcceleratedVideoEncoder ----------
CFMutableDictionaryRef enableOnly = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(enableOnly, kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder, kCFBooleanTrue);
VTCompressionSessionRef session = NULL;
// Also use 1280x720 to reduce profile/level interference
OSStatus st = VTCompressionSessionCreate(kCFAllocatorDefault,
1280, 720, codecType,
enableOnly, /* encoderSpecification */
NULL, /* sourceImageBufferAttributes */
NULL, /* compressedDataAllocator */
NULL, /* outputCallback */
NULL, /* outputRefCon */
&session);
if (enableOnly) CFRelease(enableOnly);
if (st != noErr || !session) {
// Creation failed, considered no hardware available.
return 0;
}
// First, explicitly prepare the encoding process to give VideoToolbox a chance to choose between software/hardware.
OSStatus prepareStatus = VTCompressionSessionPrepareToEncodeFrames(session);
if (prepareStatus != noErr) {
VTCompressionSessionInvalidate(session);
CFRelease(session);
return 0;
}
// Query the session's read-only property: whether it is using a hardware encoder.
CFBooleanRef usingHW = NULL;
st = VTSessionCopyProperty(session,
kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder,
kCFAllocatorDefault,
(void **)&usingHW);
Boolean isHW = (st == noErr && usingHW && CFBooleanGetValue(usingHW));
if (usingHW) CFRelease(usingHW);
VTCompressionSessionInvalidate(session);
CFRelease(session);
return isHW ? 1 : 0;
}
// -------------- Your Public Interface: Unchanged ------------------
extern "C" void checkVideoToolboxSupport(int32_t *h264Encoder, int32_t *h265Encoder, int32_t *h264Decoder, int32_t *h265Decoder) {
// https://stackoverflow.com/questions/50956097/determine-if-ios-device-can-support-hevc-encoding
*h264Encoder = 0; // H.264 encoder support is disabled due to frequent reliability issues (see encode.rs)
*h265Encoder = hasHardwareEncoder(true);
*h264Decoder = VTIsHardwareDecodeSupported(kCMVideoCodecType_H264);
*h265Decoder = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC);
return;
}
extern "C" uint64_t GetHwcodecGpuSignature() {
int32_t h264Encoder = 0;
int32_t h265Encoder = 0;
int32_t h264Decoder = 0;
int32_t h265Decoder = 0;
checkVideoToolboxSupport(&h264Encoder, &h265Encoder, &h264Decoder, &h265Decoder);
return (uint64_t)h264Encoder << 24 | (uint64_t)h265Encoder << 16 | (uint64_t)h264Decoder << 8 | (uint64_t)h265Decoder;
}
static void *parent_death_monitor_thread(void *arg) {
int kq = (intptr_t)arg;
struct kevent events[1];
int ret = kevent(kq, NULL, 0, events, 1, NULL);
if (ret > 0) {
// Parent process died, terminate this process
LOG_INFO("Parent process died, terminating hwcodec check process");
exit(1);
}
return NULL;
}
extern "C" int setup_parent_death_signal() {
// On macOS, use kqueue to monitor parent process death
pid_t parent_pid = getppid();
int kq = kqueue();
if (kq == -1) {
LOG_DEBUG("Failed to create kqueue for parent monitoring");
return -1;
}
struct kevent event;
EV_SET(&event, parent_pid, EVFILT_PROC, EV_ADD | EV_ONESHOT, NOTE_EXIT, 0,
NULL);
int ret = kevent(kq, &event, 1, NULL, 0, NULL);
if (ret == -1) {
LOG_ERROR("Failed to register parent death monitoring on macOS\n");
close(kq);
return -1;
} else {
// Spawn a thread to monitor parent death
pthread_t monitor_thread;
ret = pthread_create(&monitor_thread, NULL, parent_death_monitor_thread,
(void *)(intptr_t)kq);
if (ret != 0) {
LOG_ERROR("Failed to create parent death monitor thread");
close(kq);
return -1;
}
// Detach the thread so it can run independently
pthread_detach(monitor_thread);
return 0;
}
}

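The removed mac.mm above guards the out-of-process capability check with a kqueue watch on the parent PID. On Linux the same guard is usually expressed with prctl(PR_SET_PDEATHSIG); linux.cpp includes <sys/prctl.h>, so its retained implementation presumably takes that route. The sketch below illustrates the mechanism only and is not the library's code:

// Minimal sketch of a Linux counterpart to the macOS kqueue-based
// setup_parent_death_signal(); illustrative, not hwcodec's implementation.
#include <signal.h>
#include <sys/prctl.h>
#include <unistd.h>

static int linux_parent_death_signal_sketch() {
  // Ask the kernel to deliver SIGKILL to this process when its parent exits.
  if (prctl(PR_SET_PDEATHSIG, SIGKILL) == -1)
    return -1;
  // Close the race: if the parent already exited before prctl took effect,
  // getppid() reports the reaper (PID 1), so bail out immediately.
  if (getppid() == 1)
    return -1;
  return 0;
}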
View File

@@ -1,410 +0,0 @@
// https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/hw_decode.c
// https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/decode_video.c
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>
#include <libavutil/log.h>
#include <libavutil/opt.h>
#include <libavutil/pixdesc.h>
}
#include <libavutil/hwcontext_d3d11va.h>
#include <memory>
#include <mutex>
#include <stdbool.h>
#include "callback.h"
#include "common.h"
#include "system.h"
#define LOG_MODULE "FFMPEG_VRAM_DEC"
#include <log.h>
#include <util.h>
namespace {
#define USE_SHADER
void lockContext(void *lock_ctx);
void unlockContext(void *lock_ctx);
class FFmpegVRamDecoder {
public:
AVCodecContext *c_ = NULL;
AVBufferRef *hw_device_ctx_ = NULL;
AVCodecParserContext *sw_parser_ctx_ = NULL;
AVFrame *frame_ = NULL;
AVPacket *pkt_ = NULL;
std::unique_ptr<NativeDevice> native_ = nullptr;
ID3D11Device *d3d11Device_ = NULL;
ID3D11DeviceContext *d3d11DeviceContext_ = NULL;
void *device_ = nullptr;
int64_t luid_ = 0;
DataFormat dataFormat_;
std::string name_;
AVHWDeviceType device_type_ = AV_HWDEVICE_TYPE_D3D11VA;
bool bt709_ = false;
bool full_range_ = false;
FFmpegVRamDecoder(void *device, int64_t luid, DataFormat dataFormat) {
device_ = device;
luid_ = luid;
dataFormat_ = dataFormat;
switch (dataFormat) {
case H264:
name_ = "h264";
break;
case H265:
name_ = "hevc";
break;
default:
LOG_ERROR(std::string("unsupported data format"));
break;
}
// Always use DX11 since it's the only API
device_type_ = AV_HWDEVICE_TYPE_D3D11VA;
}
~FFmpegVRamDecoder() {}
void destroy() {
if (frame_)
av_frame_free(&frame_);
if (pkt_)
av_packet_free(&pkt_);
if (c_)
avcodec_free_context(&c_);
if (hw_device_ctx_) {
av_buffer_unref(&hw_device_ctx_);
// AVHWDeviceContext takes ownership of d3d11 object
d3d11Device_ = nullptr;
d3d11DeviceContext_ = nullptr;
} else {
SAFE_RELEASE(d3d11Device_);
SAFE_RELEASE(d3d11DeviceContext_);
}
frame_ = NULL;
pkt_ = NULL;
c_ = NULL;
hw_device_ctx_ = NULL;
}
int reset() {
destroy();
if (!native_) {
native_ = std::make_unique<NativeDevice>();
if (!native_->Init(luid_, (ID3D11Device *)device_, 4)) {
LOG_ERROR(std::string("Failed to init native device"));
return -1;
}
}
if (!native_->support_decode(dataFormat_)) {
LOG_ERROR(std::string("unsupported data format"));
return -1;
}
d3d11Device_ = native_->device_.Get();
d3d11Device_->AddRef();
d3d11DeviceContext_ = native_->context_.Get();
d3d11DeviceContext_->AddRef();
const AVCodec *codec = NULL;
int ret;
if (!(codec = avcodec_find_decoder_by_name(name_.c_str()))) {
LOG_ERROR(std::string("avcodec_find_decoder_by_name ") + name_ + " failed");
return -1;
}
if (!(c_ = avcodec_alloc_context3(codec))) {
LOG_ERROR(std::string("Could not allocate video codec context"));
return -1;
}
c_->flags |= AV_CODEC_FLAG_LOW_DELAY;
hw_device_ctx_ = av_hwdevice_ctx_alloc(device_type_);
if (!hw_device_ctx_) {
LOG_ERROR(std::string("av_hwdevice_ctx_create failed"));
return -1;
}
AVHWDeviceContext *deviceContext =
(AVHWDeviceContext *)hw_device_ctx_->data;
AVD3D11VADeviceContext *d3d11vaDeviceContext =
(AVD3D11VADeviceContext *)deviceContext->hwctx;
d3d11vaDeviceContext->device = d3d11Device_;
d3d11vaDeviceContext->device_context = d3d11DeviceContext_;
d3d11vaDeviceContext->lock = lockContext;
d3d11vaDeviceContext->unlock = unlockContext;
d3d11vaDeviceContext->lock_ctx = this;
ret = av_hwdevice_ctx_init(hw_device_ctx_);
if (ret < 0) {
LOG_ERROR(std::string("av_hwdevice_ctx_init failed, ret = ") + av_err2str(ret));
return -1;
}
c_->hw_device_ctx = av_buffer_ref(hw_device_ctx_);
if (!(pkt_ = av_packet_alloc())) {
LOG_ERROR(std::string("av_packet_alloc failed"));
return -1;
}
if (!(frame_ = av_frame_alloc())) {
LOG_ERROR(std::string("av_frame_alloc failed"));
return -1;
}
if ((ret = avcodec_open2(c_, codec, NULL)) != 0) {
LOG_ERROR(std::string("avcodec_open2 failed, ret = ") + av_err2str(ret) +
", name=" + name_);
return -1;
}
return 0;
}
int decode(const uint8_t *data, int length, DecodeCallback callback,
const void *obj) {
int ret = -1;
if (!data || !length) {
LOG_ERROR(std::string("illegal decode parameter"));
return -1;
}
pkt_->data = (uint8_t *)data;
pkt_->size = length;
ret = do_decode(callback, obj);
return ret;
}
private:
int do_decode(DecodeCallback callback, const void *obj) {
int ret;
bool decoded = false;
bool locked = false;
ret = avcodec_send_packet(c_, pkt_);
if (ret < 0) {
LOG_ERROR(std::string("avcodec_send_packet failed, ret = ") + av_err2str(ret));
return ret;
}
auto start = util::now();
while (ret >= 0 && util::elapsed_ms(start) < DECODE_TIMEOUT_MS) {
if ((ret = avcodec_receive_frame(c_, frame_)) != 0) {
if (ret != AVERROR(EAGAIN)) {
LOG_ERROR(std::string("avcodec_receive_frame failed, ret = ") + av_err2str(ret));
}
goto _exit;
}
if (frame_->format != AV_PIX_FMT_D3D11) {
LOG_ERROR(std::string("only AV_PIX_FMT_D3D11 is supported"));
goto _exit;
}
lockContext(this);
locked = true;
if (!convert(frame_, callback, obj)) {
LOG_ERROR(std::string("Failed to convert"));
goto _exit;
}
if (callback)
callback(native_->GetCurrentTexture(), obj);
decoded = true;
}
_exit:
if (locked) {
unlockContext(this);
}
av_packet_unref(pkt_);
return decoded ? 0 : -1;
}
bool convert(AVFrame *frame, DecodeCallback callback, const void *obj) {
ID3D11Texture2D *texture = (ID3D11Texture2D *)frame->data[0];
if (!texture) {
LOG_ERROR(std::string("texture is NULL"));
return false;
}
D3D11_TEXTURE2D_DESC desc2D;
texture->GetDesc(&desc2D);
if (desc2D.Format != DXGI_FORMAT_NV12) {
LOG_ERROR(std::string("only DXGI_FORMAT_NV12 is supported"));
return false;
}
if (!native_->EnsureTexture(frame->width, frame->height)) {
LOG_ERROR(std::string("Failed to EnsureTexture"));
return false;
}
native_->next(); // comment out to remove picture shaking
#ifdef USE_SHADER
native_->BeginQuery();
if (!native_->Nv12ToBgra(frame->width, frame->height, texture,
native_->GetCurrentTexture(),
(int)frame->data[1])) {
LOG_ERROR(std::string("Failed to Nv12ToBgra"));
native_->EndQuery();
return false;
}
native_->EndQuery();
native_->Query();
#else
native_->BeginQuery();
// nv12 -> bgra
D3D11_VIDEO_PROCESSOR_CONTENT_DESC contentDesc;
ZeroMemory(&contentDesc, sizeof(contentDesc));
contentDesc.InputFrameFormat = D3D11_VIDEO_FRAME_FORMAT_PROGRESSIVE;
contentDesc.InputFrameRate.Numerator = 60;
contentDesc.InputFrameRate.Denominator = 1;
// TODO: aligned width, height or crop width, height
contentDesc.InputWidth = frame->width;
contentDesc.InputHeight = frame->height;
contentDesc.OutputWidth = frame->width;
contentDesc.OutputHeight = frame->height;
contentDesc.OutputFrameRate.Numerator = 60;
contentDesc.OutputFrameRate.Denominator = 1;
DXGI_COLOR_SPACE_TYPE colorSpace_out =
DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709;
DXGI_COLOR_SPACE_TYPE colorSpace_in;
if (bt709_) {
if (full_range_) {
colorSpace_in = DXGI_COLOR_SPACE_YCBCR_FULL_G22_LEFT_P709;
} else {
colorSpace_in = DXGI_COLOR_SPACE_YCBCR_STUDIO_G22_LEFT_P709;
}
} else {
if (full_range_) {
colorSpace_in = DXGI_COLOR_SPACE_YCBCR_FULL_G22_LEFT_P601;
} else {
colorSpace_in = DXGI_COLOR_SPACE_YCBCR_STUDIO_G22_LEFT_P601;
}
}
if (!native_->Process(texture, native_->GetCurrentTexture(), contentDesc,
colorSpace_in, colorSpace_out, (int)frame->data[1])) {
LOG_ERROR(std::string("Failed to process"));
native_->EndQuery();
return false;
}
native_->context_->Flush();
native_->EndQuery();
if (!native_->Query()) {
LOG_ERROR(std::string("Failed to query"));
return false;
}
#endif
return true;
}
};
void lockContext(void *lock_ctx) { (void)lock_ctx; }
void unlockContext(void *lock_ctx) { (void)lock_ctx; }
} // namespace
extern "C" int ffmpeg_vram_destroy_decoder(FFmpegVRamDecoder *decoder) {
try {
if (!decoder)
return 0;
decoder->destroy();
delete decoder;
decoder = NULL;
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("ffmpeg_ram_free_decoder exception:") + e.what());
}
return -1;
}
extern "C" FFmpegVRamDecoder *ffmpeg_vram_new_decoder(void *device,
int64_t luid,
DataFormat dataFormat) {
FFmpegVRamDecoder *decoder = NULL;
try {
decoder = new FFmpegVRamDecoder(device, luid, dataFormat);
if (decoder) {
if (decoder->reset() == 0) {
return decoder;
}
}
} catch (std::exception &e) {
LOG_ERROR(std::string("new decoder exception:") + e.what());
}
if (decoder) {
decoder->destroy();
delete decoder;
decoder = NULL;
}
return NULL;
}
extern "C" int ffmpeg_vram_decode(FFmpegVRamDecoder *decoder,
const uint8_t *data, int length,
DecodeCallback callback, const void *obj) {
try {
int ret = decoder->decode(data, length, callback, obj);
if (DataFormat::H265 == decoder->dataFormat_ && util_decode::has_flag_could_not_find_ref_with_poc()) {
return HWCODEC_ERR_HEVC_COULD_NOT_FIND_POC;
} else {
return ret == 0 ? HWCODEC_SUCCESS : HWCODEC_ERR_COMMON;
}
} catch (const std::exception &e) {
LOG_ERROR(std::string("ffmpeg_ram_decode exception:") + e.what());
}
return HWCODEC_ERR_COMMON;
}
extern "C" int ffmpeg_vram_test_decode(int64_t *outLuids, int32_t *outVendors,
int32_t maxDescNum, int32_t *outDescNum,
DataFormat dataFormat,
uint8_t *data, int32_t length,
const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount) {
try {
int count = 0;
struct VendorMapping {
AdapterVendor adapter_vendor;
int driver_vendor;
};
VendorMapping vendors[] = {
{ADAPTER_VENDOR_INTEL, VENDOR_INTEL},
{ADAPTER_VENDOR_NVIDIA, VENDOR_NV},
{ADAPTER_VENDOR_AMD, VENDOR_AMD}
};
for (auto vendorMap : vendors) {
Adapters adapters;
if (!adapters.Init(vendorMap.adapter_vendor))
continue;
for (auto &adapter : adapters.adapters_) {
int64_t currentLuid = LUID(adapter.get()->desc1_);
if (util::skip_test(excludedLuids, excludeFormats, excludeCount, currentLuid, dataFormat)) {
continue;
}
FFmpegVRamDecoder *p = (FFmpegVRamDecoder *)ffmpeg_vram_new_decoder(
nullptr, LUID(adapter.get()->desc1_), dataFormat);
if (!p)
continue;
auto start = util::now();
bool succ = ffmpeg_vram_decode(p, data, length, nullptr, nullptr) == 0;
int64_t elapsed = util::elapsed_ms(start);
if (succ && elapsed < TEST_TIMEOUT_MS) {
outLuids[count] = LUID(adapter.get()->desc1_);
outVendors[count] = (int32_t)vendorMap.driver_vendor; // Map adapter vendor to driver vendor
count += 1;
}
p->destroy();
delete p;
p = nullptr;
if (count >= maxDescNum)
break;
}
if (count >= maxDescNum)
break;
}
*outDescNum = count;
return 0;
} catch (const std::exception &e) {
std::cerr << e.what() << '\n';
}
return -1;
}

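do_decode() above is the standard FFmpeg send/receive loop specialized for D3D11 output and wrapped in a timeout. Stripped of those specifics, the control flow it follows looks roughly like this (standard libavcodec API; error handling abbreviated):

// Minimal sketch of the avcodec_send_packet / avcodec_receive_frame control
// flow that do_decode() wraps; D3D11 texture conversion and the timeout guard
// are omitted.
extern "C" {
#include <libavcodec/avcodec.h>
}

static int decode_packet_sketch(AVCodecContext *ctx, AVPacket *pkt,
                                AVFrame *frame) {
  int ret = avcodec_send_packet(ctx, pkt); // feed one compressed packet
  if (ret < 0)
    return ret;
  int got = 0;
  while (ret >= 0) {
    ret = avcodec_receive_frame(ctx, frame); // drain every frame it produced
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
      break; // decoder needs more input (or has been flushed)
    if (ret < 0)
      return ret; // real decoding error
    got++; // hand the frame off here (the VRAM path copies the D3D11 texture)
  }
  av_packet_unref(pkt);
  return got > 0 ? 0 : -1;
}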
View File

@@ -1,558 +0,0 @@
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>
#include <libavutil/imgutils.h>
#include <libavutil/log.h>
#include <libavutil/opt.h>
}
#ifdef _WIN32
#include <libavutil/hwcontext_d3d11va.h>
#endif
#include <memory>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "callback.h"
#include "common.h"
#include "system.h"
#define LOG_MODULE "FFMPEG_VRAM_ENC"
#include <log.h>
#include <util.h>
namespace {
void lockContext(void *lock_ctx);
void unlockContext(void *lock_ctx);
enum class EncoderDriver {
NVENC,
AMF,
QSV,
};
class Encoder {
public:
Encoder(EncoderDriver driver, const char *name, AVHWDeviceType device_type,
AVHWDeviceType derived_device_type, AVPixelFormat hw_pixfmt,
AVPixelFormat sw_pixfmt) {
driver_ = driver;
name_ = name;
device_type_ = device_type;
derived_device_type_ = derived_device_type;
hw_pixfmt_ = hw_pixfmt;
sw_pixfmt_ = sw_pixfmt;
};
EncoderDriver driver_;
std::string name_;
AVHWDeviceType device_type_;
AVHWDeviceType derived_device_type_;
AVPixelFormat hw_pixfmt_;
AVPixelFormat sw_pixfmt_;
};
class FFmpegVRamEncoder {
public:
AVCodecContext *c_ = NULL;
AVBufferRef *hw_device_ctx_ = NULL;
AVFrame *frame_ = NULL;
AVFrame *mapped_frame_ = NULL;
ID3D11Texture2D *encode_texture_ = NULL; // no free
AVPacket *pkt_ = NULL;
std::unique_ptr<NativeDevice> native_ = nullptr;
ID3D11Device *d3d11Device_ = NULL;
ID3D11DeviceContext *d3d11DeviceContext_ = NULL;
std::unique_ptr<Encoder> encoder_ = nullptr;
void *handle_ = nullptr;
int64_t luid_;
DataFormat dataFormat_;
int32_t width_ = 0;
int32_t height_ = 0;
int32_t kbs_;
int32_t framerate_;
int32_t gop_;
const int align_ = 0;
const bool full_range_ = false;
const bool bt709_ = false;
FFmpegVRamEncoder(void *handle, int64_t luid, DataFormat dataFormat,
int32_t width, int32_t height, int32_t kbs,
int32_t framerate, int32_t gop) {
handle_ = handle;
luid_ = luid;
dataFormat_ = dataFormat;
width_ = width;
height_ = height;
kbs_ = kbs;
framerate_ = framerate;
gop_ = gop;
}
~FFmpegVRamEncoder() {}
bool init() {
const AVCodec *codec = NULL;
int ret;
native_ = std::make_unique<NativeDevice>();
if (!native_->Init(luid_, (ID3D11Device *)handle_)) {
LOG_ERROR(std::string("NativeDevice init failed"));
return false;
}
d3d11Device_ = native_->device_.Get();
d3d11Device_->AddRef();
d3d11DeviceContext_ = native_->context_.Get();
d3d11DeviceContext_->AddRef();
AdapterVendor vendor = native_->GetVendor();
if (!choose_encoder(vendor)) {
return false;
}
LOG_INFO(std::string("encoder name: ") + encoder_->name_);
if (!(codec = avcodec_find_encoder_by_name(encoder_->name_.c_str()))) {
LOG_ERROR(std::string("Codec ") + encoder_->name_ + " not found");
return false;
}
if (!(c_ = avcodec_alloc_context3(codec))) {
LOG_ERROR(std::string("Could not allocate video codec context"));
return false;
}
/* resolution must be a multiple of two */
c_->width = width_;
c_->height = height_;
c_->pix_fmt = encoder_->hw_pixfmt_;
c_->sw_pix_fmt = encoder_->sw_pixfmt_;
util_encode::set_av_codec_ctx(c_, encoder_->name_, kbs_, gop_, framerate_);
if (!util_encode::set_lantency_free(c_->priv_data, encoder_->name_)) {
return false;
}
// util_encode::set_quality(c_->priv_data, encoder_->name_, Quality_Default);
util_encode::set_rate_control(c_, encoder_->name_, RC_CBR, -1);
util_encode::set_others(c_->priv_data, encoder_->name_);
hw_device_ctx_ = av_hwdevice_ctx_alloc(encoder_->device_type_);
if (!hw_device_ctx_) {
LOG_ERROR(std::string("av_hwdevice_ctx_create failed"));
return false;
}
AVHWDeviceContext *deviceContext =
(AVHWDeviceContext *)hw_device_ctx_->data;
AVD3D11VADeviceContext *d3d11vaDeviceContext =
(AVD3D11VADeviceContext *)deviceContext->hwctx;
d3d11vaDeviceContext->device = d3d11Device_;
d3d11vaDeviceContext->device_context = d3d11DeviceContext_;
d3d11vaDeviceContext->lock = lockContext;
d3d11vaDeviceContext->unlock = unlockContext;
d3d11vaDeviceContext->lock_ctx = this;
ret = av_hwdevice_ctx_init(hw_device_ctx_);
if (ret < 0) {
LOG_ERROR(std::string("av_hwdevice_ctx_init failed, ret = ") + av_err2str(ret));
return false;
}
if (encoder_->derived_device_type_ != AV_HWDEVICE_TYPE_NONE) {
AVBufferRef *derived_context = nullptr;
ret = av_hwdevice_ctx_create_derived(
&derived_context, encoder_->derived_device_type_, hw_device_ctx_, 0);
if (ret) {
LOG_ERROR(std::string("av_hwdevice_ctx_create_derived failed, err = ") +
av_err2str(ret));
return false;
}
av_buffer_unref(&hw_device_ctx_);
hw_device_ctx_ = derived_context;
}
c_->hw_device_ctx = av_buffer_ref(hw_device_ctx_);
if (!set_hwframe_ctx()) {
return false;
}
if (!(pkt_ = av_packet_alloc())) {
LOG_ERROR(std::string("Could not allocate video packet"));
return false;
}
if ((ret = avcodec_open2(c_, codec, NULL)) < 0) {
LOG_ERROR(std::string("avcodec_open2 failed, ret = ") + av_err2str(ret) +
", name: " + encoder_->name_);
return false;
}
if (!(frame_ = av_frame_alloc())) {
LOG_ERROR(std::string("Could not allocate video frame"));
return false;
}
frame_->format = c_->pix_fmt;
frame_->width = c_->width;
frame_->height = c_->height;
frame_->color_range = c_->color_range;
frame_->color_primaries = c_->color_primaries;
frame_->color_trc = c_->color_trc;
frame_->colorspace = c_->colorspace;
frame_->chroma_location = c_->chroma_sample_location;
if ((ret = av_hwframe_get_buffer(c_->hw_frames_ctx, frame_, 0)) < 0) {
LOG_ERROR(std::string("av_frame_get_buffer failed, ret = ") + av_err2str(ret));
return false;
}
if (frame_->format == AV_PIX_FMT_QSV) {
mapped_frame_ = av_frame_alloc();
if (!mapped_frame_) {
LOG_ERROR(std::string("Could not allocate mapped video frame"));
return false;
}
mapped_frame_->format = AV_PIX_FMT_D3D11;
ret = av_hwframe_map(mapped_frame_, frame_,
AV_HWFRAME_MAP_WRITE | AV_HWFRAME_MAP_OVERWRITE);
if (ret) {
LOG_ERROR(std::string("av_hwframe_map failed, err = ") + av_err2str(ret));
return false;
}
encode_texture_ = (ID3D11Texture2D *)mapped_frame_->data[0];
} else {
encode_texture_ = (ID3D11Texture2D *)frame_->data[0];
}
return true;
}
int encode(void *texture, EncodeCallback callback, void *obj, int64_t ms) {
if (!convert(texture))
return -1;
return do_encode(callback, obj, ms);
}
void destroy() {
if (pkt_)
av_packet_free(&pkt_);
if (frame_)
av_frame_free(&frame_);
if (mapped_frame_)
av_frame_free(&mapped_frame_);
if (c_)
avcodec_free_context(&c_);
if (hw_device_ctx_) {
av_buffer_unref(&hw_device_ctx_);
// AVHWDeviceContext takes ownership of d3d11 object
d3d11Device_ = nullptr;
d3d11DeviceContext_ = nullptr;
} else {
SAFE_RELEASE(d3d11Device_);
SAFE_RELEASE(d3d11DeviceContext_);
}
}
int set_bitrate(int kbs) {
return util_encode::change_bit_rate(c_, encoder_->name_, kbs) ? 0 : -1;
}
int set_framerate(int framerate) {
c_->time_base = av_make_q(1, framerate);
c_->framerate = av_inv_q(c_->time_base);
return 0;
}
private:
bool choose_encoder(AdapterVendor vendor) {
if (ADAPTER_VENDOR_NVIDIA == vendor) {
const char *name = nullptr;
if (dataFormat_ == H264) {
name = "h264_nvenc";
} else if (dataFormat_ == H265) {
name = "hevc_nvenc";
} else {
LOG_ERROR(std::string("Unsupported data format: ") + std::to_string(dataFormat_));
return false;
}
encoder_ = std::make_unique<Encoder>(
EncoderDriver::NVENC, name, AV_HWDEVICE_TYPE_D3D11VA,
AV_HWDEVICE_TYPE_NONE, AV_PIX_FMT_D3D11, AV_PIX_FMT_NV12);
return true;
} else if (ADAPTER_VENDOR_AMD == vendor) {
const char *name = nullptr;
if (dataFormat_ == H264) {
name = "h264_amf";
} else if (dataFormat_ == H265) {
name = "hevc_amf";
} else {
LOG_ERROR(std::string("Unsupported data format: ") + std::to_string(dataFormat_));
return false;
}
encoder_ = std::make_unique<Encoder>(
EncoderDriver::AMF, name, AV_HWDEVICE_TYPE_D3D11VA,
AV_HWDEVICE_TYPE_NONE, AV_PIX_FMT_D3D11, AV_PIX_FMT_NV12);
return true;
} else if (ADAPTER_VENDOR_INTEL == vendor) {
const char *name = nullptr;
if (dataFormat_ == H264) {
name = "h264_qsv";
} else if (dataFormat_ == H265) {
name = "hevc_qsv";
} else {
LOG_ERROR(std::string("Unsupported data format: ") + std::to_string(dataFormat_));
return false;
}
encoder_ = std::make_unique<Encoder>(
EncoderDriver::QSV, name, AV_HWDEVICE_TYPE_D3D11VA,
AV_HWDEVICE_TYPE_QSV, AV_PIX_FMT_QSV, AV_PIX_FMT_NV12);
return true;
} else {
LOG_ERROR(std::string("Unsupported vendor: ") + std::to_string(vendor));
return false;
}
return false;
}
int do_encode(EncodeCallback callback, const void *obj, int64_t ms) {
int ret;
bool encoded = false;
frame_->pts = ms;
if ((ret = avcodec_send_frame(c_, frame_)) < 0) {
LOG_ERROR(std::string("avcodec_send_frame failed, ret = ") + av_err2str(ret));
return ret;
}
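// Standard FFmpeg drain loop: keep calling avcodec_receive_packet() until it
// returns EAGAIN or an error, bounded by ENCODE_TIMEOUT_MS.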
auto start = util::now();
while (ret >= 0 && util::elapsed_ms(start) < ENCODE_TIMEOUT_MS) {
if ((ret = avcodec_receive_packet(c_, pkt_)) < 0) {
if (ret != AVERROR(EAGAIN)) {
LOG_ERROR(std::string("avcodec_receive_packet failed, ret = ") + av_err2str(ret));
}
goto _exit;
}
if (!pkt_->data || !pkt_->size) {
LOG_ERROR(std::string("avcodec_receive_packet failed, pkt size is 0"));
goto _exit;
}
encoded = true;
if (callback)
callback(pkt_->data, pkt_->size, pkt_->flags & AV_PKT_FLAG_KEY, obj,
pkt_->pts);
}
_exit:
av_packet_unref(pkt_);
return encoded ? 0 : -1;
}
bool convert(void *texture) {
if (frame_->format == AV_PIX_FMT_D3D11 ||
frame_->format == AV_PIX_FMT_QSV) {
ID3D11Texture2D *texture2D = (ID3D11Texture2D *)encode_texture_;
D3D11_TEXTURE2D_DESC desc;
texture2D->GetDesc(&desc);
if (desc.Format != DXGI_FORMAT_NV12) {
LOG_ERROR(std::string("convert: texture format mismatch, ") +
std::to_string(desc.Format) +
" != " + std::to_string(DXGI_FORMAT_NV12));
return false;
}
DXGI_COLOR_SPACE_TYPE colorSpace_in =
DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709;
DXGI_COLOR_SPACE_TYPE colorSpace_out;
if (bt709_) {
if (full_range_) {
colorSpace_out = DXGI_COLOR_SPACE_YCBCR_FULL_G22_LEFT_P709;
} else {
colorSpace_out = DXGI_COLOR_SPACE_YCBCR_STUDIO_G22_LEFT_P709;
}
} else {
if (full_range_) {
colorSpace_out = DXGI_COLOR_SPACE_YCBCR_FULL_G22_LEFT_P601;
} else {
colorSpace_out = DXGI_COLOR_SPACE_YCBCR_STUDIO_G22_LEFT_P601;
}
}
if (!native_->BgraToNv12((ID3D11Texture2D *)texture, texture2D, width_,
height_, colorSpace_in, colorSpace_out)) {
LOG_ERROR(std::string("convert: BgraToNv12 failed"));
return false;
}
return true;
} else {
LOG_ERROR(std::string("convert: unsupported format, ") +
std::to_string(frame_->format));
return false;
}
}
bool set_hwframe_ctx() {
AVBufferRef *hw_frames_ref;
AVHWFramesContext *frames_ctx = NULL;
int err = 0;
bool ret = true;
if (!(hw_frames_ref = av_hwframe_ctx_alloc(hw_device_ctx_))) {
LOG_ERROR(std::string("av_hwframe_ctx_alloc failed."));
return false;
}
frames_ctx = (AVHWFramesContext *)(hw_frames_ref->data);
frames_ctx->format = encoder_->hw_pixfmt_;
frames_ctx->sw_format = encoder_->sw_pixfmt_;
frames_ctx->width = width_;
frames_ctx->height = height_;
frames_ctx->initial_pool_size = 0;
if (encoder_->device_type_ == AV_HWDEVICE_TYPE_D3D11VA) {
frames_ctx->initial_pool_size = 1;
AVD3D11VAFramesContext *frames_hwctx =
(AVD3D11VAFramesContext *)frames_ctx->hwctx;
frames_hwctx->BindFlags = D3D11_BIND_RENDER_TARGET;
frames_hwctx->MiscFlags = 0;
}
if ((err = av_hwframe_ctx_init(hw_frames_ref)) < 0) {
LOG_ERROR(std::string("av_hwframe_ctx_init failed."));
av_buffer_unref(&hw_frames_ref);
return false;
}
c_->hw_frames_ctx = av_buffer_ref(hw_frames_ref);
if (!c_->hw_frames_ctx) {
LOG_ERROR(std::string("av_buffer_ref failed"));
ret = false;
}
av_buffer_unref(&hw_frames_ref);
return ret;
}
};
void lockContext(void *lock_ctx) { (void)lock_ctx; }
void unlockContext(void *lock_ctx) { (void)lock_ctx; }
} // namespace
extern "C" {
FFmpegVRamEncoder *ffmpeg_vram_new_encoder(void *handle, int64_t luid,
DataFormat dataFormat, int32_t width,
int32_t height, int32_t kbs,
int32_t framerate, int32_t gop) {
FFmpegVRamEncoder *encoder = NULL;
try {
encoder = new FFmpegVRamEncoder(handle, luid, dataFormat, width,
height, kbs, framerate, gop);
if (encoder) {
if (encoder->init()) {
return encoder;
}
}
} catch (const std::exception &e) {
LOG_ERROR(std::string("new FFmpegVRamEncoder failed, ") + std::string(e.what()));
}
if (encoder) {
encoder->destroy();
delete encoder;
encoder = NULL;
}
return NULL;
}
int ffmpeg_vram_encode(FFmpegVRamEncoder *encoder, void *texture,
EncodeCallback callback, void *obj, int64_t ms) {
try {
return encoder->encode(texture, callback, obj, ms);
} catch (const std::exception &e) {
LOG_ERROR(std::string("ffmpeg_vram_encode failed, ") + std::string(e.what()));
}
return -1;
}
void ffmpeg_vram_destroy_encoder(FFmpegVRamEncoder *encoder) {
try {
if (!encoder)
return;
encoder->destroy();
delete encoder;
encoder = NULL;
} catch (const std::exception &e) {
LOG_ERROR(std::string("free encoder failed, ") + std::string(e.what()));
}
}
int ffmpeg_vram_set_bitrate(FFmpegVRamEncoder *encoder, int kbs) {
try {
return encoder->set_bitrate(kbs);
} catch (const std::exception &e) {
LOG_ERROR(std::string("ffmpeg_ram_set_bitrate failed, ") + std::string(e.what()));
}
return -1;
}
int ffmpeg_vram_set_framerate(FFmpegVRamEncoder *encoder, int32_t framerate) {
try {
return encoder->set_framerate(framerate);
} catch (const std::exception &e) {
LOG_ERROR(std::string("ffmpeg_vram_set_framerate failed, ") + std::string(e.what()));
}
return -1;
}
int ffmpeg_vram_test_encode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum,
int32_t *outDescNum, DataFormat dataFormat,
int32_t width, int32_t height, int32_t kbs,
int32_t framerate, int32_t gop,
const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount) {
try {
int count = 0;
struct VendorMapping {
AdapterVendor adapter_vendor;
int driver_vendor;
};
VendorMapping vendors[] = {
{ADAPTER_VENDOR_INTEL, VENDOR_INTEL},
{ADAPTER_VENDOR_NVIDIA, VENDOR_NV},
{ADAPTER_VENDOR_AMD, VENDOR_AMD}
};
for (auto vendorMap : vendors) {
Adapters adapters;
if (!adapters.Init(vendorMap.adapter_vendor))
continue;
for (auto &adapter : adapters.adapters_) {
int64_t currentLuid = LUID(adapter.get()->desc1_);
if (util::skip_test(excludedLuids, excludeFormats, excludeCount, currentLuid, dataFormat)) {
continue;
}
FFmpegVRamEncoder *e = (FFmpegVRamEncoder *)ffmpeg_vram_new_encoder(
(void *)adapter.get()->device_.Get(), currentLuid,
dataFormat, width, height, kbs, framerate, gop);
if (!e)
continue;
if (e->native_->EnsureTexture(e->width_, e->height_)) {
e->native_->next();
int32_t key_obj = 0;
auto start = util::now();
bool succ = ffmpeg_vram_encode(e, e->native_->GetCurrentTexture(), util_encode::vram_encode_test_callback,
&key_obj, 0) == 0 && key_obj == 1;
int64_t elapsed = util::elapsed_ms(start);
if (succ && elapsed < TEST_TIMEOUT_MS) {
outLuids[count] = currentLuid;
outVendors[count] = (int32_t)vendorMap.driver_vendor; // Map adapter vendor to driver vendor
count += 1;
}
}
e->destroy();
delete e;
e = nullptr;
if (count >= maxDescNum)
break;
}
if (count >= maxDescNum)
break;
}
*outDescNum = count;
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("test failed: ") + e.what());
}
return -1;
}
} // extern "C"

View File

@@ -1,32 +0,0 @@
#ifndef FFMPEG_VRAM_FFI_H
#define FFMPEG_VRAM_FFI_H
#include "../common/callback.h"
#include <stdbool.h>
void *ffmpeg_vram_new_decoder(void *device, int64_t luid,
int32_t codecID);
int ffmpeg_vram_decode(void *decoder, uint8_t *data, int len,
DecodeCallback callback, void *obj);
int ffmpeg_vram_destroy_decoder(void *decoder);
int ffmpeg_vram_test_decode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum,
int32_t *outDescNum,
int32_t dataFormat, uint8_t *data, int32_t length,
const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount);
void *ffmpeg_vram_new_encoder(void *handle, int64_t luid,
int32_t dataFormat, int32_t width, int32_t height,
int32_t kbs, int32_t framerate, int32_t gop);
int ffmpeg_vram_encode(void *encoder, void *tex, EncodeCallback callback,
void *obj, int64_t ms);
int ffmpeg_vram_destroy_encoder(void *encoder);
int ffmpeg_vram_test_encode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum,
int32_t *outDescNum,
int32_t dataFormat, int32_t width, int32_t height,
int32_t kbs, int32_t framerate, int32_t gop,
const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount);
int ffmpeg_vram_set_bitrate(void *encoder, int32_t kbs);
int ffmpeg_vram_set_framerate(void *encoder, int32_t framerate);
#endif // FFMPEG_VRAM_FFI_H
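
For context, a minimal caller-side sketch of the encoder half of this removed FFI. The device pointer, adapter LUID, texture and `data_format` value are hypothetical placeholders, and a real caller would supply an `EncodeCallback` from `common/callback.h` to receive the packets:

```cpp
// Hypothetical sketch of driving the removed ffmpeg_vram encoder FFI.
#include "ffmpeg_vram_ffi.h"

void vram_encode_once(void *d3d11_device, int64_t adapter_luid,
                      void *bgra_texture, int32_t data_format) {
  void *enc = ffmpeg_vram_new_encoder(d3d11_device, adapter_luid, data_format,
                                      1920, 1080, /*kbs=*/4000,
                                      /*framerate=*/30, /*gop=*/0xFFFF);
  if (!enc)
    return;
  // A nullptr callback is accepted and simply drops the encoded packets.
  ffmpeg_vram_encode(enc, bgra_texture, nullptr, nullptr, /*ms=*/0);
  ffmpeg_vram_set_bitrate(enc, 2000); // dynamic bitrate change
  ffmpeg_vram_destroy_encoder(enc);
}
```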

View File

@@ -1,481 +0,0 @@
#include <cstring>
#include <d3d11_allocator.h>
#include <libavutil/pixfmt.h>
#include <sample_defs.h>
#include <sample_utils.h>
#include "callback.h"
#include "common.h"
#include "system.h"
#include "util.h"
#define LOG_MODULE "MFXDEC"
#include "log.h"
#define CHECK_STATUS(X, MSG) \
{ \
mfxStatus __sts = (X); \
if (__sts != MFX_ERR_NONE) { \
MSDK_PRINT_RET_MSG(__sts, MSG); \
LOG_ERROR(std::string(MSG) + "failed, sts=" + std::to_string((int)__sts)); \
return __sts; \
} \
}
#define USE_SHADER
namespace {
class VplDecoder {
public:
std::unique_ptr<NativeDevice> native_ = nullptr;
MFXVideoSession session_;
MFXVideoDECODE *mfxDEC_ = NULL;
std::vector<mfxFrameSurface1> pmfxSurfaces_;
mfxVideoParam mfxVideoParams_;
bool initialized_ = false;
D3D11FrameAllocator d3d11FrameAllocator_;
mfxFrameAllocResponse mfxResponse_;
void *device_;
int64_t luid_;
DataFormat codecID_;
bool bt709_ = false;
bool full_range_ = false;
VplDecoder(void *device, int64_t luid, DataFormat codecID) {
device_ = device;
luid_ = luid;
codecID_ = codecID;
ZeroMemory(&mfxVideoParams_, sizeof(mfxVideoParams_));
ZeroMemory(&mfxResponse_, sizeof(mfxResponse_));
}
~VplDecoder() {}
int destroy() {
if (mfxDEC_) {
mfxDEC_->Close();
delete mfxDEC_;
mfxDEC_ = NULL;
}
return 0;
}
mfxStatus init() {
mfxStatus sts = MFX_ERR_NONE;
native_ = std::make_unique<NativeDevice>();
if (!native_->Init(luid_, (ID3D11Device *)device_, 4)) {
LOG_ERROR(std::string("Failed to initialize native device"));
return MFX_ERR_DEVICE_FAILED;
}
sts = InitializeMFX();
CHECK_STATUS(sts, "InitializeMFX");
// Create Media SDK decoder
mfxDEC_ = new MFXVideoDECODE(session_);
if (!mfxDEC_) {
LOG_ERROR(std::string("Failed to create MFXVideoDECODE"));
return MFX_ERR_NOT_INITIALIZED;
}
memset(&mfxVideoParams_, 0, sizeof(mfxVideoParams_));
if (!convert_codec(codecID_, mfxVideoParams_.mfx.CodecId)) {
LOG_ERROR(std::string("Unsupported codec"));
return MFX_ERR_UNSUPPORTED;
}
mfxVideoParams_.IOPattern = MFX_IOPATTERN_OUT_VIDEO_MEMORY;
// AsyncDepth: Specifies how many asynchronous operations an
// application performs before the application explicitly synchronizes the
// result. If zero, the value is not specified
mfxVideoParams_.AsyncDepth = 1; // Not important.
// DecodedOrder: For AVC and HEVC, used to instruct the decoder
// to return output frames in the decoded order. Must be zero for all other
// decoders.
mfxVideoParams_.mfx.DecodedOrder = true; // Not important.
mfxVideoParams_.mfx.FrameInfo.FrameRateExtN = 30;
mfxVideoParams_.mfx.FrameInfo.FrameRateExtD = 1;
mfxVideoParams_.mfx.FrameInfo.AspectRatioW = 1;
mfxVideoParams_.mfx.FrameInfo.AspectRatioH = 1;
mfxVideoParams_.mfx.FrameInfo.FourCC = MFX_FOURCC_NV12;
mfxVideoParams_.mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
// Validate video decode parameters (optional)
sts = mfxDEC_->Query(&mfxVideoParams_, &mfxVideoParams_);
CHECK_STATUS(sts, "Query");
return MFX_ERR_NONE;
}
int decode(uint8_t *data, int len, DecodeCallback callback, void *obj) {
mfxStatus sts = MFX_ERR_NONE;
mfxSyncPoint syncp;
mfxFrameSurface1 *pmfxOutSurface = NULL;
bool decoded = false;
mfxBitstream mfxBS;
setBitStream(&mfxBS, data, len);
if (!initialized_) {
sts = initializeDecode(&mfxBS, false);
if (sts != MFX_ERR_NONE) {
LOG_ERROR(std::string("initializeDecode failed, sts=") + std::to_string((int)sts));
return -1;
}
initialized_ = true;
}
setBitStream(&mfxBS, data, len);
auto start = util::now();
do {
if (util::elapsed_ms(start) > DECODE_TIMEOUT_MS) {
LOG_ERROR(std::string("decode timeout"));
break;
}
int nIndex =
GetFreeSurfaceIndex(pmfxSurfaces_.data(),
pmfxSurfaces_.size()); // Find free frame surface
if (nIndex >= pmfxSurfaces_.size()) {
LOG_ERROR(std::string("GetFreeSurfaceIndex failed, nIndex=") +
std::to_string(nIndex));
break;
}
sts = mfxDEC_->DecodeFrameAsync(&mfxBS, &pmfxSurfaces_[nIndex],
&pmfxOutSurface, &syncp);
if (MFX_ERR_NONE == sts) {
if (!syncp) {
LOG_ERROR(std::string("should not happen, syncp is NULL while error is none"));
break;
}
sts = session_.SyncOperation(syncp, 1000);
if (MFX_ERR_NONE != sts) {
LOG_ERROR(std::string("SyncOperation failed, sts=") + std::to_string((int)sts));
break;
}
if (!pmfxOutSurface) {
LOG_ERROR(std::string("pmfxOutSurface is null"));
break;
}
if (!convert(pmfxOutSurface)) {
LOG_ERROR(std::string("Failed to convert"));
break;
}
if (callback)
callback(native_->GetCurrentTexture(), obj);
decoded = true;
break;
} else if (MFX_WRN_DEVICE_BUSY == sts) {
LOG_INFO(std::string("Device busy"));
Sleep(1);
continue;
} else if (MFX_ERR_INCOMPATIBLE_VIDEO_PARAM == sts) {
// https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.md#multiple-sequence-headers
LOG_INFO(std::string("Incompatible video param, reset decoder"));
// https://github.com/FFmpeg/FFmpeg/blob/f84412d6f4e9c1f1d1a2491f9337d7e789c688ba/libavcodec/qsvdec.c#L736
setBitStream(&mfxBS, data, len);
sts = initializeDecode(&mfxBS, true);
if (sts != MFX_ERR_NONE) {
LOG_ERROR(std::string("initializeDecode failed, sts=") + std::to_string((int)sts));
break;
}
Sleep(1);
continue;
} else if (MFX_WRN_VIDEO_PARAM_CHANGED == sts) {
LOG_TRACE(std::string("new sequence header"));
sts = mfxDEC_->GetVideoParam(&mfxVideoParams_);
if (sts != MFX_ERR_NONE) {
LOG_ERROR(std::string("GetVideoParam failed, sts=") + std::to_string((int)sts));
}
continue;
} else if (MFX_ERR_MORE_SURFACE == sts) {
LOG_INFO(std::string("More surface"));
Sleep(1);
continue;
} else {
LOG_ERROR(std::string("DecodeFrameAsync failed, sts=") + std::to_string(sts));
break;
}
// double confirm, check continue
} while (MFX_ERR_NONE == sts || MFX_WRN_DEVICE_BUSY == sts ||
MFX_ERR_INCOMPATIBLE_VIDEO_PARAM == sts ||
MFX_WRN_VIDEO_PARAM_CHANGED == sts || MFX_ERR_MORE_SURFACE == sts);
if (!decoded) {
LOG_ERROR(std::string("decode failed, sts=") + std::to_string(sts));
}
return decoded ? 0 : -1;
}
private:
mfxStatus InitializeMFX() {
mfxStatus sts = MFX_ERR_NONE;
mfxIMPL impl = MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11;
mfxVersion ver = {{0, 1}};
D3D11AllocatorParams allocParams;
sts = session_.Init(impl, &ver);
CHECK_STATUS(sts, "session Init");
sts = session_.SetHandle(MFX_HANDLE_D3D11_DEVICE, native_->device_.Get());
CHECK_STATUS(sts, "SetHandle");
allocParams.bUseSingleTexture = false; // important
allocParams.pDevice = native_->device_.Get();
allocParams.uncompressedResourceMiscFlags = 0;
sts = d3d11FrameAllocator_.Init(&allocParams);
CHECK_STATUS(sts, "init D3D11FrameAllocator");
sts = session_.SetFrameAllocator(&d3d11FrameAllocator_);
CHECK_STATUS(sts, "SetFrameAllocator");
return MFX_ERR_NONE;
}
bool convert_codec(DataFormat dataFormat, mfxU32 &CodecId) {
switch (dataFormat) {
case H264:
CodecId = MFX_CODEC_AVC;
return true;
case H265:
CodecId = MFX_CODEC_HEVC;
return true;
}
return false;
}
mfxStatus initializeDecode(mfxBitstream *mfxBS, bool reinit) {
mfxStatus sts = MFX_ERR_NONE;
mfxFrameAllocRequest Request;
memset(&Request, 0, sizeof(Request));
mfxU16 numSurfaces;
mfxU16 width, height;
mfxU8 bitsPerPixel = 12; // NV12
mfxU32 surfaceSize;
mfxU8 *surfaceBuffers;
// mfxExtVideoSignalInfo got MFX_ERR_INVALID_VIDEO_PARAM
// mfxExtVideoSignalInfo video_signal_info = {0};
// https://spec.oneapi.io/versions/1.1-rev-1/elements/oneVPL/source/API_ref/VPL_func_vid_decode.html#mfxvideodecode-decodeheader
sts = mfxDEC_->DecodeHeader(mfxBS, &mfxVideoParams_);
MSDK_IGNORE_MFX_STS(sts, MFX_WRN_PARTIAL_ACCELERATION);
MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
sts = mfxDEC_->QueryIOSurf(&mfxVideoParams_, &Request);
MSDK_IGNORE_MFX_STS(sts, MFX_WRN_PARTIAL_ACCELERATION);
MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
numSurfaces = Request.NumFrameSuggested;
// Request.Type |= WILL_READ; // This line is only required for Windows
// DirectX11 to ensure that surfaces can be retrieved by the application
// Allocate surfaces for decoder
if (reinit) {
sts = d3d11FrameAllocator_.FreeFrames(&mfxResponse_);
MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
}
sts = d3d11FrameAllocator_.AllocFrames(&Request, &mfxResponse_);
MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
// Allocate surface headers (mfxFrameSurface1) for decoder
pmfxSurfaces_.resize(numSurfaces);
for (int i = 0; i < numSurfaces; i++) {
memset(&pmfxSurfaces_[i], 0, sizeof(mfxFrameSurface1));
pmfxSurfaces_[i].Info = mfxVideoParams_.mfx.FrameInfo;
pmfxSurfaces_[i].Data.MemId =
mfxResponse_
.mids[i]; // MID (memory id) represents one video NV12 surface
}
// Initialize the Media SDK decoder
if (reinit) {
// https://github.com/FFmpeg/FFmpeg/blob/f84412d6f4e9c1f1d1a2491f9337d7e789c688ba/libavcodec/qsvdec.c#L181
sts = mfxDEC_->Close();
MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
}
sts = mfxDEC_->Init(&mfxVideoParams_);
MSDK_IGNORE_MFX_STS(sts, MFX_WRN_PARTIAL_ACCELERATION);
MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
return MFX_ERR_NONE;
}
void setBitStream(mfxBitstream *mfxBS, uint8_t *data, int len) {
memset(mfxBS, 0, sizeof(mfxBitstream));
mfxBS->Data = data;
mfxBS->DataLength = len;
mfxBS->MaxLength = len;
mfxBS->DataFlag = MFX_BITSTREAM_COMPLETE_FRAME;
}
bool convert(mfxFrameSurface1 *pmfxOutSurface) {
mfxStatus sts = MFX_ERR_NONE;
mfxHDLPair pair = {NULL};
sts = d3d11FrameAllocator_.GetFrameHDL(pmfxOutSurface->Data.MemId,
(mfxHDL *)&pair);
if (MFX_ERR_NONE != sts) {
LOG_ERROR(std::string("Failed to GetFrameHDL"));
return false;
}
ID3D11Texture2D *texture = (ID3D11Texture2D *)pair.first;
D3D11_TEXTURE2D_DESC desc2D;
texture->GetDesc(&desc2D);
if (!native_->EnsureTexture(pmfxOutSurface->Info.CropW,
pmfxOutSurface->Info.CropH)) {
LOG_ERROR(std::string("Failed to EnsureTexture"));
return false;
}
native_->next(); // comment out to remove picture shaking
#ifdef USE_SHADER
native_->BeginQuery();
if (!native_->Nv12ToBgra(pmfxOutSurface->Info.CropW,
pmfxOutSurface->Info.CropH, texture,
native_->GetCurrentTexture(), 0)) {
LOG_ERROR(std::string("Failed to Nv12ToBgra"));
native_->EndQuery();
return false;
}
native_->EndQuery();
native_->Query();
#else
native_->BeginQuery();
// nv12 -> bgra
D3D11_VIDEO_PROCESSOR_CONTENT_DESC contentDesc;
ZeroMemory(&contentDesc, sizeof(contentDesc));
contentDesc.InputFrameFormat = D3D11_VIDEO_FRAME_FORMAT_PROGRESSIVE;
contentDesc.InputFrameRate.Numerator = 60;
contentDesc.InputFrameRate.Denominator = 1;
// TODO: aligned width, height or crop width, height
contentDesc.InputWidth = pmfxOutSurface->Info.CropW;
contentDesc.InputHeight = pmfxOutSurface->Info.CropH;
contentDesc.OutputWidth = pmfxOutSurface->Info.CropW;
contentDesc.OutputHeight = pmfxOutSurface->Info.CropH;
contentDesc.OutputFrameRate.Numerator = 60;
contentDesc.OutputFrameRate.Denominator = 1;
DXGI_COLOR_SPACE_TYPE colorSpace_out =
DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709;
DXGI_COLOR_SPACE_TYPE colorSpace_in;
if (bt709_) {
if (full_range_) {
colorSpace_in = DXGI_COLOR_SPACE_YCBCR_FULL_G22_LEFT_P709;
} else {
colorSpace_in = DXGI_COLOR_SPACE_YCBCR_STUDIO_G22_LEFT_P709;
}
} else {
if (full_range_) {
colorSpace_in = DXGI_COLOR_SPACE_YCBCR_FULL_G22_LEFT_P601;
} else {
colorSpace_in = DXGI_COLOR_SPACE_YCBCR_STUDIO_G22_LEFT_P601;
}
}
if (!native_->Process(texture, native_->GetCurrentTexture(), contentDesc,
colorSpace_in, colorSpace_out, 0)) {
LOG_ERROR(std::string("Failed to process"));
native_->EndQuery();
return false;
}
native_->context_->Flush();
native_->EndQuery();
if (!native_->Query()) {
LOG_ERROR(std::string("Failed to query"));
return false;
}
#endif
return true;
}
};
} // namespace
extern "C" {
int mfx_destroy_decoder(void *decoder) {
VplDecoder *p = (VplDecoder *)decoder;
if (p) {
p->destroy();
delete p;
p = NULL;
}
return 0;
}
void *mfx_new_decoder(void *device, int64_t luid, DataFormat codecID) {
VplDecoder *p = NULL;
try {
p = new VplDecoder(device, luid, codecID);
if (p) {
if (p->init() == MFX_ERR_NONE) {
return p;
}
}
} catch (const std::exception &e) {
LOG_ERROR(std::string("new failed: ") + e.what());
}
if (p) {
p->destroy();
delete p;
p = NULL;
}
return NULL;
}
int mfx_decode(void *decoder, uint8_t *data, int len, DecodeCallback callback,
void *obj) {
try {
VplDecoder *p = (VplDecoder *)decoder;
if (p->decode(data, len, callback, obj) == 0) {
return HWCODEC_SUCCESS;
}
} catch (const std::exception &e) {
LOG_ERROR(std::string("decode failed: ") + e.what());
}
return HWCODEC_ERR_COMMON;
}
int mfx_test_decode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum,
int32_t *outDescNum, DataFormat dataFormat,
uint8_t *data, int32_t length, const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount) {
try {
Adapters adapters;
if (!adapters.Init(ADAPTER_VENDOR_INTEL))
return -1;
int count = 0;
for (auto &adapter : adapters.adapters_) {
int64_t currentLuid = LUID(adapter.get()->desc1_);
if (util::skip_test(excludedLuids, excludeFormats, excludeCount, currentLuid, dataFormat)) {
continue;
}
VplDecoder *p = (VplDecoder *)mfx_new_decoder(
nullptr, currentLuid, dataFormat);
if (!p)
continue;
auto start = util::now();
bool succ = mfx_decode(p, data, length, nullptr, nullptr) == 0;
int64_t elapsed = util::elapsed_ms(start);
if (succ && elapsed < TEST_TIMEOUT_MS) {
outLuids[count] = currentLuid;
outVendors[count] = VENDOR_INTEL;
count += 1;
}
p->destroy();
delete p;
p = nullptr;
if (count >= maxDescNum)
break;
}
*outDescNum = count;
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("test failed: ") + e.what());
}
return -1;
}
} // extern "C"

View File

@@ -1,709 +0,0 @@
#include <cstring>
#include <iostream>
#include <libavutil/pixfmt.h>
#include <limits>
#include <sample_defs.h>
#include <sample_utils.h>
#include "callback.h"
#include "common.h"
#include "system.h"
#include "util.h"
#define LOG_MODULE "MFXENC"
#include "log.h"
// #define CONFIG_USE_VPP
#define CONFIG_USE_D3D_CONVERT
#define CHECK_STATUS(X, MSG) \
{ \
mfxStatus __sts = (X); \
if (__sts != MFX_ERR_NONE) { \
LOG_ERROR(std::string(MSG) + " failed, sts=" + std::to_string((int)__sts)); \
return __sts; \
} \
}
namespace {
mfxStatus MFX_CDECL simple_getHDL(mfxHDL pthis, mfxMemId mid, mfxHDL *handle) {
mfxHDLPair *pair = (mfxHDLPair *)handle;
pair->first = mid;
pair->second = (mfxHDL)(UINT)0;
return MFX_ERR_NONE;
}
mfxFrameAllocator frameAllocator{{}, NULL, NULL, NULL,
NULL, simple_getHDL, NULL};
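// Pass-through allocator: GetHDL simply returns the MemId (an ID3D11Texture2D*)
// as the frame handle, so no surfaces are allocated by the SDK itself.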
mfxStatus InitSession(MFXVideoSession &session) {
mfxInitParam mfxparams{};
mfxIMPL impl = MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11;
mfxparams.Implementation = impl;
mfxparams.Version.Major = 1;
mfxparams.Version.Minor = 0;
mfxparams.GPUCopy = MFX_GPUCOPY_OFF;
return session.InitEx(mfxparams);
}
class VplEncoder {
public:
std::unique_ptr<NativeDevice> native_ = nullptr;
MFXVideoSession session_;
MFXVideoENCODE *mfxENC_ = nullptr;
std::vector<mfxFrameSurface1> encSurfaces_;
std::vector<mfxU8> bstData_;
mfxBitstream mfxBS_;
mfxVideoParam mfxEncParams_;
mfxExtBuffer *extbuffers_[4] = {NULL, NULL, NULL, NULL};
mfxExtCodingOption coding_option_;
mfxExtCodingOption2 coding_option2_;
mfxExtCodingOption3 coding_option3_;
mfxExtVideoSignalInfo signal_info_;
ComPtr<ID3D11Texture2D> nv12Texture_ = nullptr;
// vpp
#ifdef CONFIG_USE_VPP
MFXVideoVPP *mfxVPP_ = nullptr;
mfxVideoParam vppParams_;
mfxExtBuffer *vppExtBuffers_[1] = {NULL};
mfxExtVPPDoNotUse vppDontUse_;
mfxU32 vppDontUseArgList_[4];
std::vector<mfxFrameSurface1> vppSurfaces_;
#endif
void *handle_ = nullptr;
int64_t luid_;
DataFormat dataFormat_;
int32_t width_ = 0;
int32_t height_ = 0;
int32_t kbs_;
int32_t framerate_;
int32_t gop_;
bool full_range_ = false;
bool bt709_ = false;
VplEncoder(void *handle, int64_t luid, DataFormat dataFormat,
int32_t width, int32_t height, int32_t kbs, int32_t framerate,
int32_t gop) {
handle_ = handle;
luid_ = luid;
dataFormat_ = dataFormat;
width_ = width;
height_ = height;
kbs_ = kbs;
framerate_ = framerate;
gop_ = gop;
}
~VplEncoder() {}
mfxStatus Reset() {
mfxStatus sts = MFX_ERR_NONE;
if (!native_) {
native_ = std::make_unique<NativeDevice>();
if (!native_->Init(luid_, (ID3D11Device *)handle_)) {
LOG_ERROR(std::string("failed to init native device"));
return MFX_ERR_DEVICE_FAILED;
}
}
sts = resetMFX();
CHECK_STATUS(sts, "resetMFX");
#ifdef CONFIG_USE_VPP
sts = resetVpp();
CHECK_STATUS(sts, "resetVpp");
#endif
sts = resetEnc();
CHECK_STATUS(sts, "resetEnc");
return MFX_ERR_NONE;
}
int encode(ID3D11Texture2D *tex, EncodeCallback callback, void *obj,
int64_t ms) {
mfxStatus sts = MFX_ERR_NONE;
int nEncSurfIdx =
GetFreeSurfaceIndex(encSurfaces_.data(), encSurfaces_.size());
if (nEncSurfIdx >= encSurfaces_.size()) {
LOG_ERROR(std::string("no free enc surface"));
return -1;
}
mfxFrameSurface1 *encSurf = &encSurfaces_[nEncSurfIdx];
#ifdef CONFIG_USE_VPP
mfxSyncPoint syncp;
sts = vppOneFrame(tex, encSurf, syncp);
syncp = NULL;
if (sts != MFX_ERR_NONE) {
LOG_ERROR(std::string("vppOneFrame failed, sts=") + std::to_string((int)sts));
return -1;
}
#elif defined(CONFIG_USE_D3D_CONVERT)
DXGI_COLOR_SPACE_TYPE colorSpace_in =
DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709;
DXGI_COLOR_SPACE_TYPE colorSpace_out;
if (bt709_) {
if (full_range_) {
colorSpace_out = DXGI_COLOR_SPACE_YCBCR_FULL_G22_LEFT_P709;
} else {
colorSpace_out = DXGI_COLOR_SPACE_YCBCR_STUDIO_G22_LEFT_P709;
}
} else {
if (full_range_) {
colorSpace_out = DXGI_COLOR_SPACE_YCBCR_FULL_G22_LEFT_P601;
} else {
colorSpace_out = DXGI_COLOR_SPACE_YCBCR_STUDIO_G22_LEFT_P601;
}
}
if (!nv12Texture_) {
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
tex->GetDesc(&desc);
desc.Format = DXGI_FORMAT_NV12;
desc.MiscFlags = 0;
HRI(native_->device_->CreateTexture2D(
&desc, NULL, nv12Texture_.ReleaseAndGetAddressOf()));
}
if (!native_->BgraToNv12(tex, nv12Texture_.Get(), width_, height_,
colorSpace_in, colorSpace_out)) {
LOG_ERROR(std::string("failed to convert to NV12"));
return -1;
}
encSurf->Data.MemId = nv12Texture_.Get();
#else
encSurf->Data.MemId = tex;
#endif
return encodeOneFrame(encSurf, callback, obj, ms);
}
void destroy() {
if (mfxENC_) {
// - It is recommended to close Media SDK components first, before
// releasing allocated surfaces, since
// some surfaces may still be locked by internal Media SDK resources.
mfxENC_->Close();
delete mfxENC_;
mfxENC_ = NULL;
}
#ifdef CONFIG_USE_VPP
if (mfxVPP_) {
mfxVPP_->Close();
delete mfxVPP_;
mfxVPP_ = NULL;
}
#endif
// session closed automatically on destruction
}
private:
mfxStatus resetMFX() {
mfxStatus sts = MFX_ERR_NONE;
sts = InitSession(session_);
CHECK_STATUS(sts, "InitSession");
sts = session_.SetHandle(MFX_HANDLE_D3D11_DEVICE, native_->device_.Get());
CHECK_STATUS(sts, "SetHandle");
sts = session_.SetFrameAllocator(&frameAllocator);
CHECK_STATUS(sts, "SetFrameAllocator");
return MFX_ERR_NONE;
}
#ifdef CONFIG_USE_VPP
mfxStatus resetVpp() {
mfxStatus sts = MFX_ERR_NONE;
memset(&vppParams_, 0, sizeof(vppParams_));
vppParams_.IOPattern =
MFX_IOPATTERN_IN_VIDEO_MEMORY | MFX_IOPATTERN_OUT_VIDEO_MEMORY;
vppParams_.vpp.In.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;
vppParams_.vpp.In.FrameRateExtN = framerate_;
vppParams_.vpp.In.FrameRateExtD = 1;
vppParams_.vpp.In.Width = MSDK_ALIGN16(width_);
vppParams_.vpp.In.Height =
(MFX_PICSTRUCT_PROGRESSIVE == vppParams_.vpp.In.PicStruct)
? MSDK_ALIGN16(height_)
: MSDK_ALIGN32(height_);
vppParams_.vpp.In.CropX = 0;
vppParams_.vpp.In.CropY = 0;
vppParams_.vpp.In.CropW = width_;
vppParams_.vpp.In.CropH = height_;
vppParams_.vpp.In.Shift = 0;
memcpy(&vppParams_.vpp.Out, &vppParams_.vpp.In, sizeof(vppParams_.vpp.Out));
vppParams_.vpp.In.FourCC = MFX_FOURCC_RGB4;
vppParams_.vpp.Out.FourCC = MFX_FOURCC_NV12;
vppParams_.vpp.In.ChromaFormat = MFX_CHROMAFORMAT_YUV444;
vppParams_.vpp.Out.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
vppParams_.AsyncDepth = 1;
vppParams_.ExtParam = vppExtBuffers_;
vppParams_.NumExtParam = 1;
vppExtBuffers_[0] = (mfxExtBuffer *)&vppDontUse_;
vppDontUse_.Header.BufferId = MFX_EXTBUFF_VPP_DONOTUSE;
vppDontUse_.Header.BufferSz = sizeof(vppDontUse_);
vppDontUse_.AlgList = vppDontUseArgList_;
vppDontUse_.NumAlg = 4;
vppDontUseArgList_[0] = MFX_EXTBUFF_VPP_DENOISE;
vppDontUseArgList_[1] = MFX_EXTBUFF_VPP_SCENE_ANALYSIS;
vppDontUseArgList_[2] = MFX_EXTBUFF_VPP_DETAIL;
vppDontUseArgList_[3] = MFX_EXTBUFF_VPP_PROCAMP;
if (mfxVPP_) {
mfxVPP_->Close();
delete mfxVPP_;
mfxVPP_ = NULL;
}
mfxVPP_ = new MFXVideoVPP(session_);
if (!mfxVPP_) {
LOG_ERROR(std::string("Failed to create MFXVideoVPP"));
return MFX_ERR_MEMORY_ALLOC;
}
sts = mfxVPP_->Query(&vppParams_, &vppParams_);
CHECK_STATUS(sts, "vpp query");
mfxFrameAllocRequest vppAllocRequest;
ZeroMemory(&vppAllocRequest, sizeof(vppAllocRequest));
memcpy(&vppAllocRequest.Info, &vppParams_.vpp.In, sizeof(mfxFrameInfo));
sts = mfxVPP_->QueryIOSurf(&vppParams_, &vppAllocRequest);
CHECK_STATUS(sts, "vpp QueryIOSurf");
vppSurfaces_.resize(vppAllocRequest.NumFrameSuggested);
for (int i = 0; i < vppAllocRequest.NumFrameSuggested; i++) {
memset(&vppSurfaces_[i], 0, sizeof(mfxFrameSurface1));
memcpy(&vppSurfaces_[i].Info, &vppParams_.vpp.In, sizeof(mfxFrameInfo));
}
sts = mfxVPP_->Init(&vppParams_);
MSDK_IGNORE_MFX_STS(sts, MFX_WRN_PARTIAL_ACCELERATION);
CHECK_STATUS(sts, "vpp init");
return MFX_ERR_NONE;
}
#endif
mfxStatus resetEnc() {
mfxStatus sts = MFX_ERR_NONE;
memset(&mfxEncParams_, 0, sizeof(mfxEncParams_));
// Basic
if (!convert_codec(dataFormat_, mfxEncParams_.mfx.CodecId)) {
LOG_ERROR(std::string("unsupported dataFormat: ") + std::to_string(dataFormat_));
return MFX_ERR_UNSUPPORTED;
}
// mfxEncParams_.mfx.LowPower = MFX_CODINGOPTION_ON;
mfxEncParams_.mfx.BRCParamMultiplier = 0;
// Frame Info
mfxEncParams_.mfx.FrameInfo.FrameRateExtN = framerate_;
mfxEncParams_.mfx.FrameInfo.FrameRateExtD = 1;
#ifdef CONFIG_USE_VPP
mfxEncParams_.mfx.FrameInfo.FourCC = MFX_FOURCC_NV12;
mfxEncParams_.mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
#elif defined(CONFIG_USE_D3D_CONVERT)
mfxEncParams_.mfx.FrameInfo.FourCC = MFX_FOURCC_NV12;
mfxEncParams_.mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
#else
mfxEncParams_.mfx.FrameInfo.FourCC = MFX_FOURCC_BGR4;
mfxEncParams_.mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV444;
#endif
mfxEncParams_.mfx.FrameInfo.BitDepthLuma = 8;
mfxEncParams_.mfx.FrameInfo.BitDepthChroma = 8;
mfxEncParams_.mfx.FrameInfo.Shift = 0;
mfxEncParams_.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;
mfxEncParams_.mfx.FrameInfo.CropX = 0;
mfxEncParams_.mfx.FrameInfo.CropY = 0;
mfxEncParams_.mfx.FrameInfo.CropW = width_;
mfxEncParams_.mfx.FrameInfo.CropH = height_;
// Width must be a multiple of 16
// Height must be a multiple of 16 in case of frame picture and a multiple
// of 32 in case of field picture
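// e.g. a 1920x1080 progressive frame gets Width = 1920 and Height = MSDK_ALIGN16(1080) = 1088.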
mfxEncParams_.mfx.FrameInfo.Width = MSDK_ALIGN16(width_);
mfxEncParams_.mfx.FrameInfo.Height =
(MFX_PICSTRUCT_PROGRESSIVE == mfxEncParams_.mfx.FrameInfo.PicStruct)
? MSDK_ALIGN16(height_)
: MSDK_ALIGN32(height_);
// Encoding Options
mfxEncParams_.mfx.EncodedOrder = 0;
mfxEncParams_.IOPattern = MFX_IOPATTERN_IN_VIDEO_MEMORY;
// Configuration for low latency
mfxEncParams_.AsyncDepth = 1; // 1 is best for low latency
mfxEncParams_.mfx.GopRefDist =
1; // 1 is best for low latency, I and P frames only
mfxEncParams_.mfx.GopPicSize = (gop_ > 0 && gop_ < 0xFFFF) ? gop_ : 0xFFFF;
// quality
// https://www.intel.com/content/www/us/en/developer/articles/technical/common-bitrate-control-methods-in-intel-media-sdk.html
mfxEncParams_.mfx.TargetUsage = MFX_TARGETUSAGE_BEST_SPEED;
mfxEncParams_.mfx.RateControlMethod = MFX_RATECONTROL_VBR;
mfxEncParams_.mfx.InitialDelayInKB = 0;
mfxEncParams_.mfx.BufferSizeInKB = 512;
mfxEncParams_.mfx.TargetKbps = kbs_;
mfxEncParams_.mfx.MaxKbps = kbs_;
mfxEncParams_.mfx.NumSlice = 1;
mfxEncParams_.mfx.NumRefFrame = 0;
if (H264 == dataFormat_) {
mfxEncParams_.mfx.CodecLevel = MFX_LEVEL_AVC_51;
mfxEncParams_.mfx.CodecProfile = MFX_PROFILE_AVC_MAIN;
} else if (H265 == dataFormat_) {
mfxEncParams_.mfx.CodecLevel = MFX_LEVEL_HEVC_51;
mfxEncParams_.mfx.CodecProfile = MFX_PROFILE_HEVC_MAIN;
}
resetEncExtParams();
// Create Media SDK encoder
if (mfxENC_) {
mfxENC_->Close();
delete mfxENC_;
mfxENC_ = NULL;
}
mfxENC_ = new MFXVideoENCODE(session_);
if (!mfxENC_) {
LOG_ERROR(std::string("failed to create MFXVideoENCODE"));
return MFX_ERR_NOT_INITIALIZED;
}
// Validate video encode parameters (optional)
// - In this example the validation result is written to same structure
// - MFX_WRN_INCOMPATIBLE_VIDEO_PARAM is returned if some of the video
// parameters are not supported,
// instead the encoder will select suitable parameters closest matching
// the requested configuration
sts = mfxENC_->Query(&mfxEncParams_, &mfxEncParams_);
MSDK_IGNORE_MFX_STS(sts, MFX_WRN_INCOMPATIBLE_VIDEO_PARAM);
CHECK_STATUS(sts, "Query");
mfxFrameAllocRequest EncRequest;
memset(&EncRequest, 0, sizeof(EncRequest));
sts = mfxENC_->QueryIOSurf(&mfxEncParams_, &EncRequest);
CHECK_STATUS(sts, "QueryIOSurf");
// Allocate surface headers (mfxFrameSurface1) for encoder
encSurfaces_.resize(EncRequest.NumFrameSuggested);
for (int i = 0; i < EncRequest.NumFrameSuggested; i++) {
memset(&encSurfaces_[i], 0, sizeof(mfxFrameSurface1));
memcpy(&encSurfaces_[i].Info, &mfxEncParams_.mfx.FrameInfo,
sizeof(mfxFrameInfo));
}
// Initialize the Media SDK encoder
sts = mfxENC_->Init(&mfxEncParams_);
CHECK_STATUS(sts, "Init");
// Retrieve video parameters selected by encoder.
// - BufferSizeInKB parameter is required to set bit stream buffer size
sts = mfxENC_->GetVideoParam(&mfxEncParams_);
CHECK_STATUS(sts, "GetVideoParam");
// Prepare Media SDK bit stream buffer
memset(&mfxBS_, 0, sizeof(mfxBS_));
mfxBS_.MaxLength = mfxEncParams_.mfx.BufferSizeInKB * 1024;
bstData_.resize(mfxBS_.MaxLength);
mfxBS_.Data = bstData_.data();
return MFX_ERR_NONE;
}
#ifdef CONFIG_USE_VPP
mfxStatus vppOneFrame(void *texture, mfxFrameSurface1 *out,
mfxSyncPoint syncp) {
mfxStatus sts = MFX_ERR_NONE;
int surfIdx =
GetFreeSurfaceIndex(vppSurfaces_.data(),
vppSurfaces_.size()); // Find free frame surface
if (surfIdx >= vppSurfaces_.size()) {
LOG_ERROR(std::string("No free vpp surface"));
return MFX_ERR_MORE_SURFACE;
}
mfxFrameSurface1 *in = &vppSurfaces_[surfIdx];
in->Data.MemId = texture;
for (;;) {
sts = mfxVPP_->RunFrameVPPAsync(in, out, NULL, &syncp);
if (MFX_ERR_NONE < sts &&
!syncp) // repeat the call if warning and no output
{
if (MFX_WRN_DEVICE_BUSY == sts)
MSDK_SLEEP(1); // wait if device is busy
} else if (MFX_ERR_NONE < sts && syncp) {
sts = MFX_ERR_NONE; // ignore warnings if output is available
break;
} else {
break; // not a warning
}
}
if (MFX_ERR_NONE == sts) {
sts = session_.SyncOperation(
syncp, 1000); // Synchronize. Wait until encoded frame is ready
CHECK_STATUS(sts, "SyncOperation");
}
return sts;
}
#endif
int encodeOneFrame(mfxFrameSurface1 *in, EncodeCallback callback, void *obj,
int64_t ms) {
mfxStatus sts = MFX_ERR_NONE;
mfxSyncPoint syncp;
bool encoded = false;
auto start = util::now();
do {
if (util::elapsed_ms(start) > ENCODE_TIMEOUT_MS) {
LOG_ERROR(std::string("encode timeout"));
break;
}
mfxBS_.DataLength = 0;
mfxBS_.DataOffset = 0;
mfxBS_.TimeStamp = ms * 90; // ms to 90 kHz clock
mfxBS_.DecodeTimeStamp = mfxBS_.TimeStamp;
sts = mfxENC_->EncodeFrameAsync(NULL, in, &mfxBS_, &syncp);
if (MFX_ERR_NONE == sts) {
if (!syncp) {
LOG_ERROR(std::string("should not happen, error is none while syncp is null"));
break;
}
sts = session_.SyncOperation(
syncp, 1000); // Synchronize. Wait until encoded frame is ready
if (MFX_ERR_NONE != sts) {
LOG_ERROR(std::string("SyncOperation failed, sts=") + std::to_string(sts));
break;
}
if (mfxBS_.DataLength <= 0) {
LOG_ERROR(std::string("mfxBS_.DataLength <= 0"));
break;
}
int key = (mfxBS_.FrameType & MFX_FRAMETYPE_I) ||
(mfxBS_.FrameType & MFX_FRAMETYPE_IDR);
if (callback)
callback(mfxBS_.Data + mfxBS_.DataOffset, mfxBS_.DataLength, key, obj,
ms);
encoded = true;
break;
} else if (MFX_WRN_DEVICE_BUSY == sts) {
LOG_INFO(std::string("device busy"));
Sleep(1);
continue;
} else if (MFX_ERR_NOT_ENOUGH_BUFFER == sts) {
LOG_ERROR(std::string("not enough buffer, size=") +
std::to_string(mfxBS_.MaxLength));
if (mfxBS_.MaxLength < 10 * 1024 * 1024) {
mfxBS_.MaxLength *= 2;
bstData_.resize(mfxBS_.MaxLength);
mfxBS_.Data = bstData_.data();
Sleep(1);
continue;
} else {
break;
}
} else {
LOG_ERROR(std::string("EncodeFrameAsync failed, sts=") + std::to_string(sts));
break;
}
// double confirm, check continue
} while (MFX_WRN_DEVICE_BUSY == sts || MFX_ERR_NOT_ENOUGH_BUFFER == sts);
if (!encoded) {
LOG_ERROR(std::string("encode failed, sts=") + std::to_string(sts));
}
return encoded ? 0 : -1;
}
void resetEncExtParams() {
// coding option
memset(&coding_option_, 0, sizeof(mfxExtCodingOption));
coding_option_.Header.BufferId = MFX_EXTBUFF_CODING_OPTION;
coding_option_.Header.BufferSz = sizeof(mfxExtCodingOption);
coding_option_.NalHrdConformance = MFX_CODINGOPTION_OFF;
extbuffers_[0] = (mfxExtBuffer *)&coding_option_;
// coding option2
memset(&coding_option2_, 0, sizeof(mfxExtCodingOption2));
coding_option2_.Header.BufferId = MFX_EXTBUFF_CODING_OPTION2;
coding_option2_.Header.BufferSz = sizeof(mfxExtCodingOption2);
coding_option2_.RepeatPPS = MFX_CODINGOPTION_OFF;
extbuffers_[1] = (mfxExtBuffer *)&coding_option2_;
// coding option3
memset(&coding_option3_, 0, sizeof(mfxExtCodingOption3));
coding_option3_.Header.BufferId = MFX_EXTBUFF_CODING_OPTION3;
coding_option3_.Header.BufferSz = sizeof(mfxExtCodingOption3);
extbuffers_[2] = (mfxExtBuffer *)&coding_option3_;
// signal info
memset(&signal_info_, 0, sizeof(mfxExtVideoSignalInfo));
signal_info_.Header.BufferId = MFX_EXTBUFF_VIDEO_SIGNAL_INFO;
signal_info_.Header.BufferSz = sizeof(mfxExtVideoSignalInfo);
signal_info_.VideoFormat = 5;
signal_info_.ColourDescriptionPresent = 1;
signal_info_.VideoFullRange = !!full_range_;
signal_info_.MatrixCoefficients =
bt709_ ? AVCOL_SPC_BT709 : AVCOL_SPC_SMPTE170M;
signal_info_.ColourPrimaries =
bt709_ ? AVCOL_PRI_BT709 : AVCOL_PRI_SMPTE170M;
signal_info_.TransferCharacteristics =
bt709_ ? AVCOL_TRC_BT709 : AVCOL_TRC_SMPTE170M;
// https://github.com/GStreamer/gstreamer/blob/651dcb49123ec516e7c582e4a49a5f3f15c10f93/subprojects/gst-plugins-bad/sys/qsv/gstqsvh264enc.cpp#L1647
extbuffers_[3] = (mfxExtBuffer *)&signal_info_;
mfxEncParams_.ExtParam = extbuffers_;
mfxEncParams_.NumExtParam = 4;
}
bool convert_codec(DataFormat dataFormat, mfxU32 &CodecId) {
switch (dataFormat) {
case H264:
CodecId = MFX_CODEC_AVC;
return true;
case H265:
CodecId = MFX_CODEC_HEVC;
return true;
}
return false;
}
};
} // namespace
extern "C" {
int mfx_driver_support() {
MFXVideoSession session;
return InitSession(session) == MFX_ERR_NONE ? 0 : -1;
}
int mfx_destroy_encoder(void *encoder) {
VplEncoder *p = (VplEncoder *)encoder;
if (p) {
p->destroy();
delete p;
p = NULL;
}
return 0;
}
void *mfx_new_encoder(void *handle, int64_t luid,
DataFormat dataFormat, int32_t w, int32_t h, int32_t kbs,
int32_t framerate, int32_t gop) {
VplEncoder *p = NULL;
try {
p = new VplEncoder(handle, luid, dataFormat, w, h, kbs, framerate,
gop);
if (!p) {
return NULL;
}
mfxStatus sts = p->Reset();
if (sts == MFX_ERR_NONE) {
return p;
} else {
LOG_ERROR(std::string("Init failed, sts=") + std::to_string(sts));
}
} catch (const std::exception &e) {
LOG_ERROR(std::string("Exception: ") + e.what());
}
if (p) {
p->destroy();
delete p;
p = NULL;
}
return NULL;
}
int mfx_encode(void *encoder, ID3D11Texture2D *tex, EncodeCallback callback,
void *obj, int64_t ms) {
try {
return ((VplEncoder *)encoder)->encode(tex, callback, obj, ms);
} catch (const std::exception &e) {
LOG_ERROR(std::string("Exception: ") + e.what());
}
return -1;
}
int mfx_test_encode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum, int32_t *outDescNum,
DataFormat dataFormat, int32_t width,
int32_t height, int32_t kbs, int32_t framerate,
int32_t gop, const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount) {
try {
Adapters adapters;
if (!adapters.Init(ADAPTER_VENDOR_INTEL))
return -1;
int count = 0;
for (auto &adapter : adapters.adapters_) {
int64_t currentLuid = LUID(adapter.get()->desc1_);
if (util::skip_test(excludedLuids, excludeFormats, excludeCount, currentLuid, dataFormat)) {
continue;
}
VplEncoder *e = (VplEncoder *)mfx_new_encoder(
(void *)adapter.get()->device_.Get(), currentLuid,
dataFormat, width, height, kbs, framerate, gop);
if (!e)
continue;
if (e->native_->EnsureTexture(e->width_, e->height_)) {
e->native_->next();
int32_t key_obj = 0;
auto start = util::now();
bool succ = mfx_encode(e, e->native_->GetCurrentTexture(), util_encode::vram_encode_test_callback, &key_obj,
0) == 0 && key_obj == 1;
int64_t elapsed = util::elapsed_ms(start);
if (succ && elapsed < TEST_TIMEOUT_MS) {
outLuids[count] = currentLuid;
outVendors[count] = VENDOR_INTEL;
count += 1;
}
}
e->destroy();
delete e;
e = nullptr;
if (count >= maxDescNum)
break;
}
*outDescNum = count;
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("test failed: ") + e.what());
}
return -1;
}
// https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.md#dynamic-bitrate-change
// https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.md#mfxinfomfx
// https://spec.oneapi.io/onevpl/2.4.0/programming_guide/VPL_prg_encoding.html#configuration-change
int mfx_set_bitrate(void *encoder, int32_t kbs) {
try {
VplEncoder *p = (VplEncoder *)encoder;
mfxStatus sts = MFX_ERR_NONE;
// https://github.com/GStreamer/gstreamer/blob/e19428a802c2f4ee9773818aeb0833f93509a1c0/subprojects/gst-plugins-bad/sys/qsv/gstqsvencoder.cpp#L1312
p->kbs_ = kbs;
p->mfxENC_->GetVideoParam(&p->mfxEncParams_);
p->mfxEncParams_.mfx.TargetKbps = kbs;
p->mfxEncParams_.mfx.MaxKbps = kbs;
sts = p->mfxENC_->Reset(&p->mfxEncParams_);
if (sts != MFX_ERR_NONE) {
LOG_ERROR(std::string("reset failed, sts=") + std::to_string(sts));
return -1;
}
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("Exception: ") + e.what());
}
return -1;
}
int mfx_set_framerate(void *encoder, int32_t framerate) {
LOG_WARN("not support change framerate");
return -1;
}
}

View File

@@ -1,39 +0,0 @@
#ifndef MFX_FFI_H
#define MFX_FFI_H
#include "../common/callback.h"
#include <stdbool.h>
int mfx_driver_support();
void *mfx_new_encoder(void *handle, int64_t luid,
int32_t dataFormat, int32_t width, int32_t height,
int32_t kbs, int32_t framerate, int32_t gop);
int mfx_encode(void *encoder, void *tex, EncodeCallback callback, void *obj,
int64_t ms);
int mfx_destroy_encoder(void *encoder);
void *mfx_new_decoder(void *device, int64_t luid,
int32_t dataFormat);
int mfx_decode(void *decoder, uint8_t *data, int len, DecodeCallback callback,
void *obj);
int mfx_destroy_decoder(void *decoder);
int mfx_test_encode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum, int32_t *outDescNum,
int32_t dataFormat, int32_t width,
int32_t height, int32_t kbs, int32_t framerate,
int32_t gop, const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount);
int mfx_test_decode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum, int32_t *outDescNum,
int32_t dataFormat, uint8_t *data,
int32_t length, const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount);
int mfx_set_bitrate(void *encoder, int32_t kbs);
int mfx_set_framerate(void *encoder, int32_t framerate);
#endif // MFX_FFI_H
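
A corresponding minimal sketch for the decoder half of this removed FFI. The adapter LUID, `data_format` value and bitstream buffer are hypothetical; a real caller would pass a `DecodeCallback` from `common/callback.h` to receive the decoded `ID3D11Texture2D`:

```cpp
// Hypothetical sketch of driving the removed mfx decoder FFI.
#include "mfx_ffi.h"

int mfx_decode_once(int64_t adapter_luid, int32_t data_format,
                    uint8_t *bitstream, int len) {
  // A null device lets the decoder create its own D3D11 device from the
  // adapter LUID, as the mfx_test_decode() path above does.
  void *dec = mfx_new_decoder(/*device=*/nullptr, adapter_luid, data_format);
  if (!dec)
    return -1;
  // A nullptr callback is accepted; the decoded texture is simply dropped.
  int ret = mfx_decode(dec, bitstream, len, nullptr, nullptr);
  mfx_destroy_decoder(dec);
  return ret;
}
```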

View File

@@ -1,188 +0,0 @@
// https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/muxing.c
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <libavutil/timestamp.h>
}
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define LOG_MODULE "MUX"
#include <log.h>
namespace {
typedef struct OutputStream {
AVStream *st;
AVPacket *tmp_pkt;
} OutputStream;
class Muxer {
public:
OutputStream video_st;
AVFormatContext *oc = NULL;
int framerate;
int64_t start_ms;
int64_t last_pts;
int got_first;
Muxer() {}
void destroy() {
OutputStream *ost = &video_st;
if (ost && ost->tmp_pkt)
av_packet_free(&ost->tmp_pkt);
if (oc && oc->pb && !(oc->oformat->flags & AVFMT_NOFILE))
avio_closep(&oc->pb);
if (oc)
avformat_free_context(oc);
}
bool init(const char *filename, int width, int height, int is265,
int framerate) {
OutputStream *ost = &video_st;
ost->st = NULL;
ost->tmp_pkt = NULL;
int ret;
if ((ret = avformat_alloc_output_context2(&oc, NULL, NULL, filename)) < 0) {
LOG_ERROR(std::string("avformat_alloc_output_context2 failed, ret = ") +
std::to_string(ret));
return false;
}
ost->st = avformat_new_stream(oc, NULL);
if (!ost->st) {
LOG_ERROR(std::string("avformat_new_stream failed"));
return false;
}
ost->st->id = oc->nb_streams - 1;
ost->st->codecpar->codec_id = is265 ? AV_CODEC_ID_H265 : AV_CODEC_ID_H264;
ost->st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
ost->st->codecpar->width = width;
ost->st->codecpar->height = height;
if (!(oc->oformat->flags & AVFMT_NOFILE)) {
ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
if (ret < 0) {
LOG_ERROR(std::string("avio_open failed, ret = ") + std::to_string(ret));
return false;
}
}
ost->tmp_pkt = av_packet_alloc();
if (!ost->tmp_pkt) {
LOG_ERROR(std::string("av_packet_alloc failed"));
return false;
}
ret = avformat_write_header(oc, NULL);
if (ret < 0) {
LOG_ERROR(std::string("avformat_write_header failed"));
return false;
}
this->framerate = framerate;
this->start_ms = 0;
this->last_pts = 0;
this->got_first = 0;
return true;
}
int write_video_frame(const uint8_t *data, int len, int64_t pts_ms, int key) {
OutputStream *ost = &video_st;
AVPacket *pkt = ost->tmp_pkt;
AVFormatContext *fmt_ctx = oc;
int ret;
if (framerate <= 0)
return -3;
if (!got_first) {
if (key != 1)
return -2;
start_ms = pts_ms;
}
int64_t pts = (pts_ms - start_ms); // use write timestamp
if (pts <= last_pts && got_first) {
pts = last_pts + 1000 / framerate;
}
got_first = 1;
pkt->data = (uint8_t *)data;
pkt->size = len;
pkt->pts = pts;
pkt->dts = pkt->pts; // no B-frame
int64_t duration = pkt->pts - last_pts;
last_pts = pkt->pts;
pkt->duration = duration > 0 ? duration : 1000 / framerate; // predict
AVRational rational;
rational.num = 1;
rational.den = 1000;
av_packet_rescale_ts(pkt, rational,
ost->st->time_base); // ms -> stream timebase
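// e.g. with a 1/90000 stream time base, a 40 ms pts rescales to 3600.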
pkt->stream_index = ost->st->index;
if (key == 1) {
pkt->flags |= AV_PKT_FLAG_KEY;
} else {
pkt->flags &= ~AV_PKT_FLAG_KEY;
}
ret = av_write_frame(fmt_ctx, pkt);
if (ret < 0) {
LOG_ERROR(std::string("av_write_frame failed, ret = ") + std::to_string(ret));
return -1;
}
return 0;
}
};
} // namespace
extern "C" Muxer *hwcodec_new_muxer(const char *filename, int width, int height,
int is265, int framerate) {
Muxer *muxer = NULL;
try {
muxer = new Muxer();
if (muxer) {
if (muxer->init(filename, width, height, is265, framerate)) {
return muxer;
}
}
} catch (const std::exception &e) {
LOG_ERROR(std::string("new muxer exception: ") + std::string(e.what()));
}
if (muxer) {
muxer->destroy();
delete muxer;
muxer = NULL;
}
return NULL;
}
extern "C" int hwcodec_write_video_frame(Muxer *muxer, const uint8_t *data,
int len, int64_t pts_ms, int key) {
try {
return muxer->write_video_frame(data, len, pts_ms, key);
} catch (const std::exception &e) {
LOG_ERROR(std::string("write_video_frame exception: ") + std::string(e.what()));
}
return -1;
}
extern "C" int hwcodec_write_tail(Muxer *muxer) {
return av_write_trailer(muxer->oc);
}
extern "C" void hwcodec_free_muxer(Muxer *muxer) {
try {
if (!muxer)
return;
muxer->destroy();
delete muxer;
muxer = NULL;
} catch (const std::exception &e) {
LOG_ERROR(std::string("free_muxer exception: ") + std::string(e.what()));
}
}

View File

@@ -1,15 +0,0 @@
#ifndef MUX_FFI_H
#define MUX_FFI_H
#include <stdint.h>
void *hwcodec_new_muxer(const char *filename, int width, int height, int is265,
int framerate);
int hwcodec_write_video_frame(void *muxer, const uint8_t *data, int len,
int64_t pts_ms, int key);
int hwcodec_write_tail(void *muxer);
void hwcodec_free_muxer(void *muxer);
#endif // MUX_FFI_H
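
And a minimal sketch for this removed muxer FFI. The output path, frame buffer and timing values are hypothetical; note that `write_video_frame` returns -2 until the first key frame arrives and -3 if the frame rate is not positive:

```cpp
// Hypothetical sketch of driving the removed hwcodec mux FFI.
#include "mux_ffi.h"

bool dump_one_frame(const uint8_t *h264_frame, int len, int64_t pts_ms, int key) {
  void *muxer = hwcodec_new_muxer("/tmp/record.mp4", 1920, 1080,
                                  /*is265=*/0, /*framerate=*/30);
  if (!muxer)
    return false;
  int ret = hwcodec_write_video_frame(muxer, h264_frame, len, pts_ms, key);
  hwcodec_write_tail(muxer); // writes the container trailer
  hwcodec_free_muxer(muxer);
  return ret == 0;
}
```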

View File

@@ -1,693 +0,0 @@
#define FFNV_LOG_FUNC
#define FFNV_DEBUG_LOG_FUNC
#include <DirectXMath.h>
#include <Samples/NvCodec/NvDecoder/NvDecoder.h>
#include <Samples/Utils/NvCodecUtils.h>
#include <algorithm>
#include <array>
#include <d3dcompiler.h>
#include <directxcolors.h>
#include <iostream>
#include <libavutil/pixfmt.h>
#include <thread>
#include "callback.h"
#include "common.h"
#include "system.h"
#include "util.h"
#define LOG_MODULE "CUVID"
#include "log.h"
#define NUMVERTICES 6
using namespace DirectX;
namespace {
#define succ(call) ((call) == 0)
class CUVIDAutoUnmapper {
CudaFunctions *cudl_ = NULL;
CUgraphicsResource *pCuResource_ = NULL;
public:
CUVIDAutoUnmapper(CudaFunctions *cudl, CUgraphicsResource *pCuResource)
: cudl_(cudl), pCuResource_(pCuResource) {
if (!succ(cudl->cuGraphicsMapResources(1, pCuResource, 0))) {
LOG_TRACE(std::string("cuGraphicsMapResources failed"));
NVDEC_THROW_ERROR("cuGraphicsMapResources failed", CUDA_ERROR_UNKNOWN);
}
}
~CUVIDAutoUnmapper() {
if (!succ(cudl_->cuGraphicsUnmapResources(1, pCuResource_, 0))) {
LOG_TRACE(std::string("cuGraphicsUnmapResources failed"));
// NVDEC_THROW_ERROR("cuGraphicsUnmapResources failed",
// CUDA_ERROR_UNKNOWN);
}
}
};
class CUVIDAutoCtxPopper {
CudaFunctions *cudl_ = NULL;
public:
CUVIDAutoCtxPopper(CudaFunctions *cudl, CUcontext cuContext) : cudl_(cudl) {
if (!succ(cudl->cuCtxPushCurrent(cuContext))) {
LOG_TRACE(std::string("cuCtxPushCurrent failed"));
NVDEC_THROW_ERROR("cuCtxPopCurrent failed", CUDA_ERROR_UNKNOWN);
}
}
~CUVIDAutoCtxPopper() {
if (!succ(cudl_->cuCtxPopCurrent(NULL))) {
LOG_TRACE(std::string("cuCtxPopCurrent failed"));
// NVDEC_THROW_ERROR("cuCtxPopCurrent failed", CUDA_ERROR_UNKNOWN);
}
}
};
void load_driver(CudaFunctions **pp_cudl, CuvidFunctions **pp_cvdl) {
if (cuda_load_functions(pp_cudl, NULL) < 0) {
LOG_TRACE(std::string("cuda_load_functions failed"));
NVDEC_THROW_ERROR("cuda_load_functions failed", CUDA_ERROR_UNKNOWN);
}
if (cuvid_load_functions(pp_cvdl, NULL) < 0) {
LOG_TRACE(std::string("cuvid_load_functions failed"));
NVDEC_THROW_ERROR("cuvid_load_functions failed", CUDA_ERROR_UNKNOWN);
}
}
void free_driver(CudaFunctions **pp_cudl, CuvidFunctions **pp_cvdl) {
if (*pp_cvdl) {
cuvid_free_functions(pp_cvdl);
*pp_cvdl = NULL;
}
if (*pp_cudl) {
cuda_free_functions(pp_cudl);
*pp_cudl = NULL;
}
}
typedef struct _VERTEX {
DirectX::XMFLOAT3 Pos;
DirectX::XMFLOAT2 TexCoord;
} VERTEX;
class CuvidDecoder {
public:
CudaFunctions *cudl_ = NULL;
CuvidFunctions *cvdl_ = NULL;
NvDecoder *dec_ = NULL;
CUcontext cuContext_ = NULL;
CUgraphicsResource cuResource_[2] = {NULL, NULL}; // r8, r8g8
ComPtr<ID3D11Texture2D> textures_[2] = {NULL, NULL};
ComPtr<ID3D11RenderTargetView> RTV_ = NULL;
ComPtr<ID3D11ShaderResourceView> SRV_[2] = {NULL, NULL};
ComPtr<ID3D11VertexShader> vertexShader_ = NULL;
ComPtr<ID3D11PixelShader> pixelShader_ = NULL;
ComPtr<ID3D11SamplerState> samplerLinear_ = NULL;
std::unique_ptr<NativeDevice> native_ = nullptr;
void *device_;
int64_t luid_;
DataFormat dataFormat_;
bool prepare_tried_ = false;
bool prepare_ok_ = false;
int width_ = 0;
int height_ = 0;
CUVIDEOFORMAT last_video_format_ = {};
public:
CuvidDecoder(void *device, int64_t luid, DataFormat dataFormat) {
device_ = device;
luid_ = luid;
dataFormat_ = dataFormat;
ZeroMemory(&last_video_format_, sizeof(last_video_format_));
load_driver(&cudl_, &cvdl_);
}
~CuvidDecoder() {}
bool init() {
if (!succ(cudl_->cuInit(0))) {
LOG_ERROR(std::string("cuInit failed"));
return false;
}
CUdevice cuDevice = 0;
native_ = std::make_unique<NativeDevice>();
if (!native_->Init(luid_, (ID3D11Device *)device_, 4)) {
LOG_ERROR(std::string("Failed to init native device"));
return false;
}
if (!succ(cudl_->cuD3D11GetDevice(&cuDevice, native_->adapter_.Get()))) {
LOG_ERROR(std::string("Failed to get cuDevice"));
return false;
}
if (!succ(cudl_->cuCtxCreate(&cuContext_, 0, cuDevice))) {
LOG_ERROR(std::string("Failed to create cuContext"));
return false;
}
if (!create_nvdecoder()) {
LOG_ERROR(std::string("Failed to create nvdecoder"));
return false;
}
return true;
}
// ref: HandlePictureDisplay
int decode(uint8_t *data, int len, DecodeCallback callback, void *obj) {
int nFrameReturned = decode_and_recreate(data, len);
if (nFrameReturned == -2) {
nFrameReturned = dec_->Decode(data, len, CUVID_PKT_ENDOFPICTURE);
}
if (nFrameReturned <= 0) {
return -1;
}
last_video_format_ = dec_->GetLatestVideoFormat();
cudaVideoSurfaceFormat format = dec_->GetOutputFormat();
int width = dec_->GetWidth();
int height = dec_->GetHeight();
if (prepare_tried_ && (width != width_ || height != height_)) {
LOG_INFO(std::string("resolution changed, (") + std::to_string(width_) + "," +
std::to_string(height_) + ") -> (" + std::to_string(width) +
"," + std::to_string(height) + ")");
reset_prepare();
width_ = width;
height_ = height;
}
if (!prepare()) {
LOG_ERROR(std::string("prepare failed"));
return -1;
}
bool decoded = false;
for (int i = 0; i < nFrameReturned; i++) {
uint8_t *pFrame = dec_->GetFrame();
native_->BeginQuery();
if (!copy_cuda_frame(pFrame)) {
LOG_ERROR(std::string("copy_cuda_frame failed"));
native_->EndQuery();
return -1;
}
if (!native_->EnsureTexture(width, height)) {
LOG_ERROR(std::string("EnsureTexture failed"));
native_->EndQuery();
return -1;
}
native_->next();
if (!set_rtv(native_->GetCurrentTexture())) {
LOG_ERROR(std::string("set_rtv failed"));
native_->EndQuery();
return -1;
}
if (!draw()) {
LOG_ERROR(std::string("draw failed"));
native_->EndQuery();
return -1;
}
native_->EndQuery();
if (!native_->Query()) {
LOG_ERROR(std::string("Query failed"));
}
if (callback)
callback(native_->GetCurrentTexture(), obj);
decoded = true;
}
return decoded ? 0 : -1;
}
void destroy() {
if (dec_) {
delete dec_;
dec_ = nullptr;
}
if (cudl_ && cuContext_) {
cudl_->cuCtxPushCurrent(cuContext_);
for (int i = 0; i < 2; i++) {
if (cuResource_[i]) {
cudl_->cuGraphicsUnregisterResource(cuResource_[i]);
cuResource_[i] = NULL;
}
}
cudl_->cuCtxPopCurrent(NULL);
cudl_->cuCtxDestroy(cuContext_);
cuContext_ = NULL;
}
free_driver(&cudl_, &cvdl_);
}
private:
void reset_prepare() {
prepare_tried_ = false;
prepare_ok_ = false;
if (cudl_ && cuContext_) {
cudl_->cuCtxPushCurrent(cuContext_);
for (int i = 0; i < 2; i++) {
if (cuResource_[i])
cudl_->cuGraphicsUnregisterResource(cuResource_[i]);
}
cudl_->cuCtxPopCurrent(NULL);
}
for (int i = 0; i < 2; i++) {
textures_[i].Reset();
SRV_[i].Reset();
}
RTV_.Reset();
vertexShader_.Reset();
pixelShader_.Reset();
samplerLinear_.Reset();
}
bool prepare() {
if (prepare_tried_) {
return prepare_ok_;
}
prepare_tried_ = true;
if (!set_srv())
return false;
if (!set_view_port())
return false;
if (!set_sample())
return false;
if (!set_shader())
return false;
if (!set_vertex_buffer())
return false;
if (!register_texture())
return false;
prepare_ok_ = true;
return true;
}
bool copy_cuda_frame(unsigned char *dpNv12) {
int width = dec_->GetWidth();
int height = dec_->GetHeight();
int chromaHeight = dec_->GetChromaHeight();
CUVIDAutoCtxPopper ctxPoper(cudl_, cuContext_);
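// NV12 layout: plane 0 is the width x height luma plane, plane 1 is the
// width x chromaHeight interleaved UV plane located at offset width * height.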
for (int i = 0; i < 2; i++) {
CUarray dstArray;
CUVIDAutoUnmapper unmapper(cudl_, &cuResource_[i]);
if (!succ(cudl_->cuGraphicsSubResourceGetMappedArray(
&dstArray, cuResource_[i], 0, 0)))
return false;
CUDA_MEMCPY2D m = {0};
m.srcMemoryType = CU_MEMORYTYPE_DEVICE;
m.srcDevice = (CUdeviceptr)(dpNv12 + (width * height) * i);
m.srcPitch = width; // pitch
m.dstMemoryType = CU_MEMORYTYPE_ARRAY;
m.dstArray = dstArray;
m.WidthInBytes = width;
m.Height = i == 0 ? height : chromaHeight;
if (!succ(cudl_->cuMemcpy2D(&m)))
return false;
}
return true;
}
bool draw() {
native_->context_->Draw(NUMVERTICES, 0);
native_->context_->Flush();
return true;
}
// return:
// >=0: nFrameReturned
// -1: failed
// -2: recreated, please decode again
int decode_and_recreate(uint8_t *data, int len) {
try {
int nFrameReturned = dec_->Decode(data, len, CUVID_PKT_ENDOFPICTURE);
if (nFrameReturned <= 0)
return -1;
CUVIDEOFORMAT video_format = dec_->GetLatestVideoFormat();
auto d1 = last_video_format_.display_area;
auto d2 = video_format.display_area;
// reconfigure may cause wrong display area
if (last_video_format_.coded_width != 0 &&
(d1.left != d2.left || d1.right != d2.right || d1.top != d2.top ||
d1.bottom != d2.bottom)) {
LOG_INFO(
std::string("recreate, display area changed from (") + std::to_string(d1.left) +
", " + std::to_string(d1.top) + ", " + std::to_string(d1.right) +
", " + std::to_string(d1.bottom) + ") to (" +
std::to_string(d2.left) + ", " + std::to_string(d2.top) + ", " +
std::to_string(d2.right) + ", " + std::to_string(d2.bottom) + ")");
if (create_nvdecoder()) {
return -2;
} else {
LOG_ERROR(std::string("create_nvdecoder failed"));
}
return -1;
} else {
return nFrameReturned;
}
} catch (const std::exception &e) {
unsigned int maxWidth = dec_->GetMaxWidth();
unsigned int maxHeight = dec_->GetMaxHeight();
CUVIDEOFORMAT video_format = dec_->GetLatestVideoFormat();
// https://github.com/NVIDIA/DALI/blob/4f5ee72b287cfbbe0d400734416ff37bd8027099/dali/operators/reader/loader/video/frames_decoder_gpu.cc#L212
if (maxWidth > 0 && (video_format.coded_width > maxWidth ||
video_format.coded_height > maxHeight)) {
LOG_INFO(std::string("recreate, exceed maxWidth/maxHeight: (") +
std::to_string(video_format.coded_width) + ", " +
std::to_string(video_format.coded_height) + ") > (" +
std::to_string(maxWidth) + ", " + std::to_string(maxHeight) +
")");
if (create_nvdecoder()) {
return -2;
} else {
LOG_ERROR(std::string("create_nvdecoder failed"));
}
} else {
LOG_ERROR(std::string("Exception decode_and_recreate: ") + e.what());
}
}
return -1;
}
bool set_srv() {
int width = dec_->GetWidth();
int height = dec_->GetHeight();
int chromaHeight = dec_->GetChromaHeight();
LOG_TRACE(std::string("width:") + std::to_string(width) +
", height:" + std::to_string(height) +
", chromaHeight:" + std::to_string(chromaHeight));
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8_UNORM;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.MiscFlags = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = 0;
HRB(native_->device_->CreateTexture2D(
&desc, nullptr, textures_[0].ReleaseAndGetAddressOf()));
desc.Format = DXGI_FORMAT_R8G8_UNORM;
desc.Width = width / 2;
desc.Height = chromaHeight;
HRB(native_->device_->CreateTexture2D(
&desc, nullptr, textures_[1].ReleaseAndGetAddressOf()));
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc = CD3D11_SHADER_RESOURCE_VIEW_DESC(textures_[0].Get(),
D3D11_SRV_DIMENSION_TEXTURE2D,
DXGI_FORMAT_R8_UNORM);
HRB(native_->device_->CreateShaderResourceView(
textures_[0].Get(), &srvDesc, SRV_[0].ReleaseAndGetAddressOf()));
srvDesc = CD3D11_SHADER_RESOURCE_VIEW_DESC(textures_[1].Get(),
D3D11_SRV_DIMENSION_TEXTURE2D,
DXGI_FORMAT_R8G8_UNORM);
HRB(native_->device_->CreateShaderResourceView(
textures_[1].Get(), &srvDesc, SRV_[1].ReleaseAndGetAddressOf()));
// set SRV
std::array<ID3D11ShaderResourceView *, 2> const textureViews = {
SRV_[0].Get(), SRV_[1].Get()};
native_->context_->PSSetShaderResources(0, textureViews.size(),
textureViews.data());
return true;
}
bool set_rtv(ID3D11Texture2D *texture) {
D3D11_RENDER_TARGET_VIEW_DESC rtDesc;
rtDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
rtDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
rtDesc.Texture2D.MipSlice = 0;
HRB(native_->device_->CreateRenderTargetView(
texture, &rtDesc, RTV_.ReleaseAndGetAddressOf()));
const float clearColor[4] = {0.0f, 0.0f, 0.0f, 0.0f}; // clear as black
native_->context_->ClearRenderTargetView(RTV_.Get(), clearColor);
native_->context_->OMSetRenderTargets(1, RTV_.GetAddressOf(), NULL);
return true;
}
bool set_view_port() {
int width = dec_->GetWidth();
int height = dec_->GetHeight();
D3D11_VIEWPORT vp;
vp.Width = (FLOAT)(width);
vp.Height = (FLOAT)(height);
vp.MinDepth = 0.0f;
vp.MaxDepth = 1.0f;
vp.TopLeftX = 0;
vp.TopLeftY = 0;
native_->context_->RSSetViewports(1, &vp);
return true;
}
bool set_sample() {
D3D11_SAMPLER_DESC sampleDesc = CD3D11_SAMPLER_DESC(CD3D11_DEFAULT());
HRB(native_->device_->CreateSamplerState(
&sampleDesc, samplerLinear_.ReleaseAndGetAddressOf()));
native_->context_->PSSetSamplers(0, 1, samplerLinear_.GetAddressOf());
return true;
}
bool set_shader() {
// https://gist.github.com/RomiTT/9c05d36fe339b899793a3252297a5624
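// Precompiled HLSL bytecode: the vertex shader passes the full-screen quad
// through; the pixel shader samples the Y/UV planes and converts them to BGRA
// with BT.601 coefficients.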
#include "pixel_shader_601.h"
#include "vertex_shader.h"
native_->device_->CreateVertexShader(
g_VS, ARRAYSIZE(g_VS), nullptr, vertexShader_.ReleaseAndGetAddressOf());
native_->device_->CreatePixelShader(g_PS, ARRAYSIZE(g_PS), nullptr,
pixelShader_.ReleaseAndGetAddressOf());
// set InputLayout
constexpr std::array<D3D11_INPUT_ELEMENT_DESC, 2> Layout = {{
{"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
D3D11_INPUT_PER_VERTEX_DATA, 0},
{"TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12,
D3D11_INPUT_PER_VERTEX_DATA, 0},
}};
ComPtr<ID3D11InputLayout> inputLayout = NULL;
HRB(native_->device_->CreateInputLayout(Layout.data(), Layout.size(), g_VS,
ARRAYSIZE(g_VS),
inputLayout.GetAddressOf()));
native_->context_->IASetInputLayout(inputLayout.Get());
native_->context_->VSSetShader(vertexShader_.Get(), NULL, 0);
native_->context_->PSSetShader(pixelShader_.Get(), NULL, 0);
return true;
}
bool set_vertex_buffer() {
UINT Stride = sizeof(VERTEX);
UINT Offset = 0;
FLOAT blendFactor[4] = {0.f, 0.f, 0.f, 0.f};
native_->context_->OMSetBlendState(nullptr, blendFactor, 0xffffffff);
native_->context_->IASetPrimitiveTopology(
D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
// set VertexBuffers
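// Full-screen quad drawn as two triangles (6 vertices), mapping NDC positions
// to texture coordinates.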
VERTEX Vertices[NUMVERTICES] = {
{XMFLOAT3(-1.0f, -1.0f, 0), XMFLOAT2(0.0f, 1.0f)},
{XMFLOAT3(-1.0f, 1.0f, 0), XMFLOAT2(0.0f, 0.0f)},
{XMFLOAT3(1.0f, -1.0f, 0), XMFLOAT2(1.0f, 1.0f)},
{XMFLOAT3(1.0f, -1.0f, 0), XMFLOAT2(1.0f, 1.0f)},
{XMFLOAT3(-1.0f, 1.0f, 0), XMFLOAT2(0.0f, 0.0f)},
{XMFLOAT3(1.0f, 1.0f, 0), XMFLOAT2(1.0f, 0.0f)},
};
D3D11_BUFFER_DESC BufferDesc;
RtlZeroMemory(&BufferDesc, sizeof(BufferDesc));
BufferDesc.Usage = D3D11_USAGE_DEFAULT;
BufferDesc.ByteWidth = sizeof(VERTEX) * NUMVERTICES;
BufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
BufferDesc.CPUAccessFlags = 0;
D3D11_SUBRESOURCE_DATA InitData;
RtlZeroMemory(&InitData, sizeof(InitData));
InitData.pSysMem = Vertices;
ComPtr<ID3D11Buffer> VertexBuffer = nullptr;
// Create vertex buffer
HRB(native_->device_->CreateBuffer(&BufferDesc, &InitData, &VertexBuffer));
native_->context_->IASetVertexBuffers(0, 1, VertexBuffer.GetAddressOf(),
&Stride, &Offset);
return true;
}
bool register_texture() {
CUVIDAutoCtxPopper ctxPoper(cudl_, cuContext_);
bool ret = true;
for (int i = 0; i < 2; i++) {
if (!succ(cudl_->cuGraphicsD3D11RegisterResource(
&cuResource_[i], textures_[i].Get(),
CU_GRAPHICS_REGISTER_FLAGS_NONE))) {
ret = false;
break;
}
if (!succ(cudl_->cuGraphicsResourceSetMapFlags(
cuResource_[i], CU_GRAPHICS_REGISTER_FLAGS_WRITE_DISCARD))) {
ret = false;
break;
}
}
return ret;
}
bool dataFormat_to_cuCodecID(DataFormat dataFormat, cudaVideoCodec &cuda) {
switch (dataFormat) {
case H264:
cuda = cudaVideoCodec_H264;
break;
case H265:
cuda = cudaVideoCodec_HEVC;
break;
default:
return false;
}
return true;
}
bool create_nvdecoder() {
LOG_TRACE(std::string("create nvdecoder"));
bool bUseDeviceFrame = true;
bool bLowLatency = true;
bool bDeviceFramePitched = false; // width=pitch
cudaVideoCodec cudaCodecID;
if (!dataFormat_to_cuCodecID(dataFormat_, cudaCodecID)) {
return false;
}
if (dec_) {
delete dec_;
dec_ = nullptr;
}
dec_ = new NvDecoder(cudl_, cvdl_, cuContext_, bUseDeviceFrame, cudaCodecID,
bLowLatency, bDeviceFramePitched);
return true;
}
};
} // namespace
extern "C" {
int nv_decode_driver_support() {
try {
CudaFunctions *cudl = NULL;
CuvidFunctions *cvdl = NULL;
load_driver(&cudl, &cvdl);
free_driver(&cudl, &cvdl);
return 0;
} catch (const std::exception &e) {
}
return -1;
}
int nv_destroy_decoder(void *decoder) {
try {
CuvidDecoder *p = (CuvidDecoder *)decoder;
if (p) {
p->destroy();
delete p;
p = NULL;
}
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("destroy failed: ") + e.what());
}
return -1;
}
void *nv_new_decoder(void *device, int64_t luid,
DataFormat dataFormat) {
CuvidDecoder *p = NULL;
try {
p = new CuvidDecoder(device, luid, dataFormat);
if (!p) {
goto _exit;
}
if (p->init())
return p;
} catch (const std::exception &ex) {
LOG_ERROR(std::string("destroy failed: ") + ex.what());
goto _exit;
}
_exit:
if (p) {
p->destroy();
delete p;
p = NULL;
}
return NULL;
}
int nv_decode(void *decoder, uint8_t *data, int len, DecodeCallback callback,
void *obj) {
try {
CuvidDecoder *p = (CuvidDecoder *)decoder;
if (p->decode(data, len, callback, obj) == 0) {
return HWCODEC_SUCCESS;
}
} catch (const std::exception &e) {
LOG_ERROR(std::string("decode failed: ") + e.what());
}
return HWCODEC_ERR_COMMON;
}
int nv_test_decode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum,
int32_t *outDescNum, DataFormat dataFormat,
uint8_t *data, int32_t length, const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount) {
try {
Adapters adapters;
if (!adapters.Init(ADAPTER_VENDOR_NVIDIA))
return -1;
int count = 0;
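// Probe each NVIDIA adapter: skip excluded LUIDs, attempt a real decode of
// the supplied test bitstream, and record the LUID only if it succeeds within
// the timeout.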
for (auto &adapter : adapters.adapters_) {
int64_t currentLuid = LUID(adapter.get()->desc1_);
if (util::skip_test(excludedLuids, excludeFormats, excludeCount, currentLuid, dataFormat)) {
continue;
}
CuvidDecoder *p = (CuvidDecoder *)nv_new_decoder(
nullptr, currentLuid, dataFormat);
if (!p)
continue;
auto start = util::now();
bool succ = nv_decode(p, data, length, nullptr, nullptr) == 0;
int64_t elapsed = util::elapsed_ms(start);
if (succ && elapsed < TEST_TIMEOUT_MS) {
outLuids[count] = currentLuid;
outVendors[count] = VENDOR_NV;
count += 1;
}
p->destroy();
delete p;
p = nullptr;
if (count >= maxDescNum)
break;
}
*outDescNum = count;
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("test failed: ") + e.what());
}
return -1;
}
} // extern "C"

View File

@@ -1,464 +0,0 @@
#define FFNV_LOG_FUNC
#define FFNV_DEBUG_LOG_FUNC
#include <Samples/NvCodec/NvEncoder/NvEncoderD3D11.h>
#include <Samples/Utils/Logger.h>
#include <Samples/Utils/NvCodecUtils.h>
#include <Samples/Utils/NvEncoderCLIOptions.h>
#include <dynlink_cuda.h>
#include <dynlink_loader.h>
#include <fstream>
#include <iostream>
#include <libavutil/pixfmt.h>
#include <memory>
#include <d3d11.h>
#include <d3d9.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;
#include "callback.h"
#include "common.h"
#include "system.h"
#include "util.h"
#define LOG_MODULE "NVENC"
#include "log.h"
simplelogger::Logger *logger =
simplelogger::LoggerFactory::CreateConsoleLogger();
namespace {
// #define CONFIG_NV_OPTIMUS_FOR_DEV
#define succ(call) ((call) == 0)
void load_driver(CudaFunctions **pp_cuda_dl, NvencFunctions **pp_nvenc_dl) {
if (cuda_load_functions(pp_cuda_dl, NULL) < 0) {
LOG_TRACE(std::string("cuda_load_functions failed"));
NVENC_THROW_ERROR("cuda_load_functions failed", NV_ENC_ERR_GENERIC);
}
if (nvenc_load_functions(pp_nvenc_dl, NULL) < 0) {
LOG_TRACE(std::string("nvenc_load_functions failed"));
NVENC_THROW_ERROR("nvenc_load_functions failed", NV_ENC_ERR_GENERIC);
}
}
void free_driver(CudaFunctions **pp_cuda_dl, NvencFunctions **pp_nvenc_dl) {
if (*pp_nvenc_dl) {
nvenc_free_functions(pp_nvenc_dl);
*pp_nvenc_dl = NULL;
}
if (*pp_cuda_dl) {
cuda_free_functions(pp_cuda_dl);
*pp_cuda_dl = NULL;
}
}
class NvencEncoder {
public:
std::unique_ptr<NativeDevice> native_ = nullptr;
NvEncoderD3D11 *pEnc_ = nullptr;
CudaFunctions *cuda_dl_ = nullptr;
NvencFunctions *nvenc_dl_ = nullptr;
void *handle_ = nullptr;
int64_t luid_;
DataFormat dataFormat_;
int32_t width_;
int32_t height_;
int32_t kbs_;
int32_t framerate_;
int32_t gop_;
bool full_range_ = false;
bool bt709_ = false;
NV_ENC_CONFIG encodeConfig_ = {0};
NvencEncoder(void *handle, int64_t luid, DataFormat dataFormat,
int32_t width, int32_t height, int32_t kbs, int32_t framerate,
int32_t gop) {
handle_ = handle;
luid_ = luid;
dataFormat_ = dataFormat;
width_ = width;
height_ = height;
kbs_ = kbs;
framerate_ = framerate;
gop_ = gop;
load_driver(&cuda_dl_, &nvenc_dl_);
}
~NvencEncoder() {}
bool init() {
GUID guidCodec;
switch (dataFormat_) {
case H264:
guidCodec = NV_ENC_CODEC_H264_GUID;
break;
case H265:
guidCodec = NV_ENC_CODEC_HEVC_GUID;
break;
default:
LOG_ERROR(std::string("dataFormat not support, dataFormat: ") +
std::to_string(dataFormat_));
return false;
}
if (!succ(cuda_dl_->cuInit(0))) {
LOG_TRACE(std::string("cuInit failed"));
return false;
}
native_ = std::make_unique<NativeDevice>();
#ifdef CONFIG_NV_OPTIMUS_FOR_DEV
if (!native_->Init(luid_, nullptr))
return false;
#else
if (!native_->Init(luid_, (ID3D11Device *)handle_)) {
LOG_ERROR(std::string("d3d device init failed"));
return false;
}
#endif
CUdevice cuDevice = 0;
if (!succ(cuda_dl_->cuD3D11GetDevice(&cuDevice, native_->adapter_.Get()))) {
LOG_ERROR(std::string("Failed to get cuDevice"));
return false;
}
int nExtraOutputDelay = 0;
pEnc_ = new NvEncoderD3D11(cuda_dl_, nvenc_dl_, native_->device_.Get(),
width_, height_, NV_ENC_BUFFER_FORMAT_ARGB,
nExtraOutputDelay, false, false); // no delay
NV_ENC_INITIALIZE_PARAMS initializeParams = {0};
ZeroMemory(&initializeParams, sizeof(initializeParams));
ZeroMemory(&encodeConfig_, sizeof(encodeConfig_));
initializeParams.encodeConfig = &encodeConfig_;
pEnc_->CreateDefaultEncoderParams(
&initializeParams, guidCodec,
NV_ENC_PRESET_P3_GUID /*NV_ENC_PRESET_LOW_LATENCY_HP_GUID*/,
NV_ENC_TUNING_INFO_LOW_LATENCY);
// no delay
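// frameIntervalP = 1 disables B-frames and lookaheadDepth = 0 disables
// rate-control lookahead, keeping the encoder one-frame-in/one-frame-out.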
initializeParams.encodeConfig->frameIntervalP = 1;
initializeParams.encodeConfig->rcParams.lookaheadDepth = 0;
// bitrate
initializeParams.encodeConfig->rcParams.averageBitRate = kbs_ * 1000;
// framerate
initializeParams.frameRateNum = framerate_;
initializeParams.frameRateDen = 1;
// gop
initializeParams.encodeConfig->gopLength =
(gop_ > 0 && gop_ < MAX_GOP) ? gop_ : NVENC_INFINITE_GOPLENGTH;
// rc method
initializeParams.encodeConfig->rcParams.rateControlMode =
NV_ENC_PARAMS_RC_CBR;
// color
if (dataFormat_ == H264) {
setup_h264(initializeParams.encodeConfig);
} else {
setup_hevc(initializeParams.encodeConfig);
}
pEnc_->CreateEncoder(&initializeParams);
return true;
}
int encode(void *texture, EncodeCallback callback, void *obj, int64_t ms) {
bool encoded = false;
std::vector<NvPacket> vPacket;
const NvEncInputFrame *pEncInput = pEnc_->GetNextInputFrame();
// TODO: the SDK ensures inputPtr's width/height match width_/height_; can the
// captured frame be guaranteed to have the same width/height as
// width_/height_?
ID3D11Texture2D *pBgraTexture =
reinterpret_cast<ID3D11Texture2D *>(pEncInput->inputPtr);
#ifdef CONFIG_NV_OPTIMUS_FOR_DEV
copy_texture(texture, pBgraTexture);
#else
native_->context_->CopyResource(
pBgraTexture, reinterpret_cast<ID3D11Texture2D *>(texture));
#endif
NV_ENC_PIC_PARAMS picParams = {0};
picParams.inputTimeStamp = ms;
pEnc_->EncodeFrame(vPacket);
for (NvPacket &packet : vPacket) {
int32_t key = (packet.pictureType == NV_ENC_PIC_TYPE_IDR ||
packet.pictureType == NV_ENC_PIC_TYPE_I)
? 1
: 0;
if (packet.data.size() > 0) {
if (callback)
callback(packet.data.data(), packet.data.size(), key, obj, ms);
encoded = true;
}
}
return encoded ? 0 : -1;
}
void destroy() {
if (pEnc_) {
pEnc_->DestroyEncoder();
delete pEnc_;
pEnc_ = nullptr;
}
free_driver(&cuda_dl_, &nvenc_dl_);
}
void setup_h264(NV_ENC_CONFIG *encodeConfig) {
NV_ENC_CODEC_CONFIG *encodeCodecConfig = &encodeConfig->encodeCodecConfig;
NV_ENC_CONFIG_H264 *h264 = &encodeCodecConfig->h264Config;
NV_ENC_CONFIG_H264_VUI_PARAMETERS *vui = &h264->h264VUIParameters;
vui->videoFullRangeFlag = !!full_range_;
vui->colourMatrix = bt709_ ? NV_ENC_VUI_MATRIX_COEFFS_BT709 : NV_ENC_VUI_MATRIX_COEFFS_SMPTE170M;
vui->colourPrimaries = bt709_ ? NV_ENC_VUI_COLOR_PRIMARIES_BT709 : NV_ENC_VUI_COLOR_PRIMARIES_SMPTE170M;
vui->transferCharacteristics =
bt709_ ? NV_ENC_VUI_TRANSFER_CHARACTERISTIC_BT709 : NV_ENC_VUI_TRANSFER_CHARACTERISTIC_SMPTE170M;
vui->colourDescriptionPresentFlag = 1;
vui->videoSignalTypePresentFlag = 1;
h264->sliceMode = 3;
h264->sliceModeData = 1;
h264->repeatSPSPPS = 1;
// Specifies the chroma format. Should be set to 1 for yuv420 input, 3 for
// yuv444 input
h264->chromaFormatIDC = 1;
h264->level = NV_ENC_LEVEL_AUTOSELECT;
encodeConfig->profileGUID = NV_ENC_H264_PROFILE_MAIN_GUID;
}
void setup_hevc(NV_ENC_CONFIG *encodeConfig) {
NV_ENC_CODEC_CONFIG *encodeCodecConfig = &encodeConfig->encodeCodecConfig;
NV_ENC_CONFIG_HEVC *hevc = &encodeCodecConfig->hevcConfig;
NV_ENC_CONFIG_HEVC_VUI_PARAMETERS *vui = &hevc->hevcVUIParameters;
vui->videoFullRangeFlag = !!full_range_;
vui->colourMatrix = bt709_ ? NV_ENC_VUI_MATRIX_COEFFS_BT709 : NV_ENC_VUI_MATRIX_COEFFS_SMPTE170M;
vui->colourPrimaries = bt709_ ? NV_ENC_VUI_COLOR_PRIMARIES_BT709 : NV_ENC_VUI_COLOR_PRIMARIES_SMPTE170M;
vui->transferCharacteristics =
bt709_ ? NV_ENC_VUI_TRANSFER_CHARACTERISTIC_BT709 : NV_ENC_VUI_TRANSFER_CHARACTERISTIC_SMPTE170M;
vui->colourDescriptionPresentFlag = 1;
vui->videoSignalTypePresentFlag = 1;
hevc->sliceMode = 3;
hevc->sliceModeData = 1;
hevc->repeatSPSPPS = 1;
// Specifies the chroma format. Should be set to 1 for yuv420 input, 3 for
// yuv444 input
hevc->chromaFormatIDC = 1;
hevc->level = NV_ENC_LEVEL_AUTOSELECT;
hevc->outputPictureTimingSEI = 1;
hevc->tier = NV_ENC_TIER_HEVC_MAIN;
encodeConfig->profileGUID = NV_ENC_HEVC_PROFILE_MAIN_GUID;
}
private:
#ifdef CONFIG_NV_OPTIMUS_FOR_DEV
int copy_texture(void *src, void *dst) {
ComPtr<ID3D11Device> src_device = (ID3D11Device *)handle_;
ComPtr<ID3D11DeviceContext> src_deviceContext;
src_device->GetImmediateContext(src_deviceContext.ReleaseAndGetAddressOf());
ComPtr<ID3D11Texture2D> src_tex = (ID3D11Texture2D *)src;
ComPtr<ID3D11Texture2D> dst_tex = (ID3D11Texture2D *)dst;
HRESULT hr;
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
src_tex->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.BindFlags = 0;
desc.MiscFlags = 0;
ComPtr<ID3D11Texture2D> staging_tex;
src_device->CreateTexture2D(&desc, NULL,
staging_tex.ReleaseAndGetAddressOf());
src_deviceContext->CopyResource(staging_tex.Get(), src_tex.Get());
D3D11_MAPPED_SUBRESOURCE map;
src_deviceContext->Map(staging_tex.Get(), 0, D3D11_MAP_READ, 0, &map);
std::unique_ptr<uint8_t[]> buffer(
new uint8_t[desc.Width * desc.Height * 4]);
memcpy(buffer.get(), map.pData, desc.Width * desc.Height * 4);
src_deviceContext->Unmap(staging_tex.Get(), 0);
D3D11_BOX Box;
Box.left = 0;
Box.right = desc.Width;
Box.top = 0;
Box.bottom = desc.Height;
Box.front = 0;
Box.back = 1;
native_->context_->UpdateSubresource(dst_tex.Get(), 0, &Box, buffer.get(),
desc.Width * 4,
desc.Width * desc.Height * 4);
return 0;
}
#endif
};
} // namespace
extern "C" {
int nv_encode_driver_support() {
try {
CudaFunctions *cuda_dl = NULL;
NvencFunctions *nvenc_dl = NULL;
load_driver(&cuda_dl, &nvenc_dl);
free_driver(&cuda_dl, &nvenc_dl);
return 0;
} catch (const std::exception &e) {
LOG_TRACE(std::string("driver not support, ") + e.what());
}
return -1;
}
int nv_destroy_encoder(void *encoder) {
try {
NvencEncoder *e = (NvencEncoder *)encoder;
if (e) {
e->destroy();
delete e;
e = NULL;
}
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("destroy failed: ") + e.what());
}
return -1;
}
void *nv_new_encoder(void *handle, int64_t luid, DataFormat dataFormat,
int32_t width, int32_t height, int32_t kbs,
int32_t framerate, int32_t gop) {
NvencEncoder *e = NULL;
try {
e = new NvencEncoder(handle, luid, dataFormat, width, height, kbs,
framerate, gop);
if (!e->init()) {
goto _exit;
}
return e;
} catch (const std::exception &ex) {
LOG_ERROR(std::string("new failed: ") + ex.what());
goto _exit;
}
_exit:
if (e) {
e->destroy();
delete e;
e = NULL;
}
return NULL;
}
int nv_encode(void *encoder, void *texture, EncodeCallback callback, void *obj,
int64_t ms) {
try {
NvencEncoder *e = (NvencEncoder *)encoder;
return e->encode(texture, callback, obj, ms);
} catch (const std::exception &e) {
LOG_ERROR(std::string("encode failed: ") + e.what());
}
return -1;
}
// ref: Reconfigure API
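// RECONFIGURE_HEAD snapshots the encoder's current initialize params into a
// reconfigure request; RECONFIGURE_TAIL submits it via NvEncoder::Reconfigure.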
#define RECONFIGURE_HEAD \
NvencEncoder *enc = (NvencEncoder *)e; \
NV_ENC_CONFIG sEncodeConfig = {0}; \
NV_ENC_INITIALIZE_PARAMS sInitializeParams = {0}; \
sInitializeParams.encodeConfig = &sEncodeConfig; \
enc->pEnc_->GetInitializeParams(&sInitializeParams); \
NV_ENC_RECONFIGURE_PARAMS params = {0}; \
params.version = NV_ENC_RECONFIGURE_PARAMS_VER; \
params.reInitEncodeParams = sInitializeParams;
#define RECONFIGURE_TAIL \
if (enc->pEnc_->Reconfigure(&params)) { \
return 0; \
}
int nv_test_encode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum, int32_t *outDescNum,
DataFormat dataFormat, int32_t width,
int32_t height, int32_t kbs, int32_t framerate,
int32_t gop, const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount) {
try {
Adapters adapters;
if (!adapters.Init(ADAPTER_VENDOR_NVIDIA))
return -1;
int count = 0;
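// Probe each NVIDIA adapter: skip excluded LUIDs, encode one test texture,
// and record the LUID only if a keyframe is produced within the timeout.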
for (auto &adapter : adapters.adapters_) {
int64_t currentLuid = LUID(adapter.get()->desc1_);
if (util::skip_test(excludedLuids, excludeFormats, excludeCount, currentLuid, dataFormat)) {
continue;
}
NvencEncoder *e = (NvencEncoder *)nv_new_encoder(
(void *)adapter.get()->device_.Get(), currentLuid,
dataFormat, width, height, kbs, framerate, gop);
if (!e)
continue;
if (e->native_->EnsureTexture(e->width_, e->height_)) {
e->native_->next();
int32_t key_obj = 0;
auto start = util::now();
bool succ = nv_encode(e, e->native_->GetCurrentTexture(), util_encode::vram_encode_test_callback, &key_obj,
0) == 0 && key_obj == 1;
int64_t elapsed = util::elapsed_ms(start);
if (succ && elapsed < TEST_TIMEOUT_MS) {
outLuids[count] = currentLuid;
outVendors[count] = VENDOR_NV;
count += 1;
}
}
e->destroy();
delete e;
e = nullptr;
if (count >= maxDescNum)
break;
}
*outDescNum = count;
return 0;
} catch (const std::exception &e) {
LOG_ERROR(std::string("test failed: ") + e.what());
}
return -1;
}
int nv_set_bitrate(void *e, int32_t kbs) {
try {
RECONFIGURE_HEAD
params.reInitEncodeParams.encodeConfig->rcParams.averageBitRate =
kbs * 1000;
RECONFIGURE_TAIL
} catch (const std::exception &e) {
LOG_ERROR(std::string("set bitrate to ") + std::to_string(kbs) +
"k failed: " + e.what());
}
return -1;
}
int nv_set_framerate(void *e, int32_t framerate) {
try {
RECONFIGURE_HEAD
params.reInitEncodeParams.frameRateNum = framerate;
params.reInitEncodeParams.frameRateDen = 1;
RECONFIGURE_TAIL
} catch (const std::exception &e) {
LOG_ERROR(std::string("set framerate failed: ") + e.what());
}
return -1;
}
} // extern "C"

View File

@@ -1,40 +0,0 @@
#ifndef NV_FFI_H
#define NV_FFI_H
#include "../common/callback.h"
#include <stdbool.h>
int nv_encode_driver_support();
int nv_decode_driver_support();
void *nv_new_encoder(void *handle, int64_t luid,
int32_t dataFormat, int32_t width, int32_t height,
int32_t bitrate, int32_t framerate, int32_t gop);
int nv_encode(void *encoder, void *tex, EncodeCallback callback, void *obj,
int64_t ms);
int nv_destroy_encoder(void *encoder);
void *nv_new_decoder(void *device, int64_t luid, int32_t codecID);
int nv_decode(void *decoder, uint8_t *data, int len, DecodeCallback callback,
void *obj);
int nv_destroy_decoder(void *decoder);
int nv_test_encode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum, int32_t *outDescNum,
int32_t dataFormat, int32_t width,
int32_t height, int32_t kbs, int32_t framerate, int32_t gop,
const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount);
int nv_test_decode(int64_t *outLuids, int32_t *outVendors, int32_t maxDescNum, int32_t *outDescNum,
int32_t dataFormat, uint8_t *data,
int32_t length, const int64_t *excludedLuids, const int32_t *excludeFormats, int32_t excludeCount);
int nv_set_bitrate(void *encoder, int32_t kbs);
int nv_set_framerate(void *encoder, int32_t framerate);
#endif // NV_FFI_H

View File

@@ -1,13 +0,0 @@
[package]
name = "capture"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
log = "0.4"
[build-dependencies]
cc = "1.0"
bindgen = "0.59"

View File

@@ -1,51 +0,0 @@
use cc::Build;
use std::{
env,
path::{Path, PathBuf},
};
fn main() {
let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
let externals_dir = manifest_dir
.parent()
.unwrap()
.parent()
.unwrap()
.join("externals");
println!("cargo:rerun-if-changed=src");
println!("cargo:rerun-if-changed={}", externals_dir.display());
let ffi_header = "src/dxgi_ffi.h";
bindgen::builder()
.header(ffi_header)
.rustified_enum("*")
.generate()
.unwrap()
.write_to_file(Path::new(&env::var_os("OUT_DIR").unwrap()).join("capture_ffi.rs"))
.unwrap();
let mut builder = Build::new();
// system
#[cfg(windows)]
["d3d11", "dxgi"].map(|lib| println!("cargo:rustc-link-lib={}", lib));
#[cfg(target_os = "linux")]
println!("cargo:rustc-link-lib=stdc++");
#[cfg(windows)]
{
// dxgi
let dxgi_path = externals_dir.join("nvEncDXGIOutputDuplicationSample");
builder.include(&dxgi_path);
for f in vec!["DDAImpl.cpp"] {
builder.file(format!("{}/{}", dxgi_path.display(), f));
}
builder.file("src/dxgi.cpp");
}
// crate
builder
.cpp(false)
.static_crt(true)
.warnings(false)
.compile("capture");
}

View File

@@ -1,42 +0,0 @@
#include <DDA.h>
#include <Windows.h>
#include <string>
extern "C" void *dxgi_new_capturer(int64_t luid) {
DemoApplication *d = new DemoApplication(luid);
HRESULT hr = d->Init();
if (FAILED(hr)) {
delete d;
d = NULL;
return NULL;
}
return d;
}
extern "C" void *dxgi_device(void *capturer) {
DemoApplication *d = (DemoApplication *)capturer;
return d->Device();
}
extern "C" int dxgi_width(const void *capturer) {
DemoApplication *d = (DemoApplication *)capturer;
return d->width();
}
extern "C" int dxgi_height(const void *capturer) {
DemoApplication *d = (DemoApplication *)capturer;
return d->height();
}
extern "C" void *dxgi_capture(void *capturer, int wait_ms) {
DemoApplication *d = (DemoApplication *)capturer;
void *texture = d->Capture(wait_ms);
return texture;
}
extern "C" void destroy_dxgi_capturer(void *capturer) {
DemoApplication *d = (DemoApplication *)capturer;
if (d)
delete d;
}

View File

@@ -1,42 +0,0 @@
#![allow(non_upper_case_globals)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
use std::os::raw::c_void;
include!(concat!(env!("OUT_DIR"), "/capture_ffi.rs"));
pub struct Capturer {
inner: *mut c_void,
}
impl Capturer {
pub fn new(luid: i64) -> Result<Self, ()> {
let inner = unsafe { dxgi_new_capturer(luid) };
if inner.is_null() {
Err(())
} else {
Ok(Self { inner })
}
}
pub unsafe fn device(&mut self) -> *mut c_void {
dxgi_device(self.inner)
}
pub unsafe fn width(&self) -> i32 {
dxgi_width(self.inner)
}
pub unsafe fn height(&self) -> i32 {
dxgi_height(self.inner)
}
pub unsafe fn capture(&mut self, wait_ms: i32) -> *mut c_void {
dxgi_capture(self.inner, wait_ms)
}
pub unsafe fn drop(&mut self) {
destroy_dxgi_capturer(self.inner);
}
}

View File

@@ -1,13 +0,0 @@
#ifndef FFI_H
#define FFI_H
#include <stdint.h>
void *dxgi_new_capturer(int64_t luid);
void *dxgi_device(void *capturer);
int dxgi_width(const void *capturer);
int dxgi_height(const void *capturer);
void *dxgi_capture(void *capturer, int wait_ms);
void destroy_dxgi_capturer(void *capturer);
#endif // FFI_H

View File

@@ -1,2 +0,0 @@
#[cfg(windows)]
pub mod dxgi;

View File

@@ -1,13 +0,0 @@
[package]
name = "render"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
log = "0.4"
[build-dependencies]
cc = "1.0"
bindgen = "0.59"

View File

@@ -1,50 +0,0 @@
use cc::Build;
use std::{
env,
path::{Path, PathBuf},
};
fn main() {
let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
let externals_dir = manifest_dir
.parent()
.unwrap()
.parent()
.unwrap()
.join("externals");
println!("cargo:rerun-if-changed=src");
println!("cargo:rerun-if-changed={}", externals_dir.display());
let ffi_header = "src/render_ffi.h";
bindgen::builder()
.header(ffi_header)
.rustified_enum("*")
.generate()
.unwrap()
.write_to_file(Path::new(&env::var_os("OUT_DIR").unwrap()).join("render_ffi.rs"))
.unwrap();
let mut builder = Build::new();
// system
#[cfg(windows)]
["d3d11", "dxgi", "User32"].map(|lib| println!("cargo:rustc-link-lib={}", lib));
#[cfg(target_os = "linux")]
println!("cargo:rustc-link-lib=stdc++");
#[cfg(windows)]
{
let sdl_dir = externals_dir.join("SDL");
builder.include(sdl_dir.join("include"));
let sdl_lib_path = sdl_dir.join("lib").join("x64");
builder.file(manifest_dir.join("src").join("dxgi_sdl.cpp"));
println!("cargo:rustc-link-search=native={}", sdl_lib_path.display());
println!("cargo:rustc-link-lib=SDL2");
}
// crate
builder
.cpp(false)
.static_crt(true)
.warnings(false)
.compile("render");
}

Binary file not shown.

Binary file not shown.

View File

@@ -1,581 +0,0 @@
#include <atomic>
#include <chrono>
#include <cstdio>
#include <list>
#include <mutex>
#include <thread>
#include <vector>
#include <DirectXMath.h>
#include <SDL.h>
#include <SDL_syswm.h>
#include <d3d11.h>
#include <d3d11_1.h>
#include <d3d11_2.h>
#include <d3d11_3.h>
#include <d3d11_4.h>
#include <dxgi.h>
#include <dxgi1_2.h>
#include <iostream>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;
#define SAFE_RELEASE(p) \
{ \
if ((p)) { \
(p)->Release(); \
(p) = nullptr; \
} \
}
#define LUID(desc) \
(((int64_t)desc.AdapterLuid.HighPart << 32) | desc.AdapterLuid.LowPart)
#define HRB(f) MS_CHECK(f, return false;)
#define HRI(f) MS_CHECK(f, return -1;)
#define HRP(f) MS_CHECK(f, return nullptr;)
#define MS_CHECK(f, ...) \
do { \
HRESULT __ms_hr__ = (f); \
if (FAILED(__ms_hr__)) { \
std::clog \
<< #f " ERROR@" << __LINE__ << __FUNCTION__ << ": (" << std::hex \
<< __ms_hr__ << std::dec << ") " \
<< std::error_code(__ms_hr__, std::system_category()).message() \
<< std::endl \
<< std::flush; \
__VA_ARGS__ \
} \
} while (false)
#define MS_THROW(f, ...) MS_CHECK(f, throw std::runtime_error(#f);)
#ifndef CSO_DIR
#define CSO_DIR "dev/render/res"
#endif
struct AdatperOutputs {
IDXGIAdapter1 *adapter;
DXGI_ADAPTER_DESC1 desc;
AdatperOutputs() : adapter(nullptr){};
AdatperOutputs(AdatperOutputs &&src) noexcept {
adapter = src.adapter;
src.adapter = nullptr;
desc = src.desc;
}
AdatperOutputs(const AdatperOutputs &src) {
adapter = src.adapter;
adapter->AddRef();
desc = src.desc;
}
~AdatperOutputs() {
if (adapter)
adapter->Release();
}
};
bool get_first_adapter_output(IDXGIFactory2 *factory2,
IDXGIAdapter1 **adapter_out,
IDXGIOutput1 **output_out, int64_t luid) {
UINT num_adapters = 0;
AdatperOutputs curent_adapter;
IDXGIAdapter1 *selected_adapter = nullptr;
IDXGIOutput1 *selected_output = nullptr;
HRESULT hr = S_OK;
bool found = false;
while (factory2->EnumAdapters1(num_adapters, &curent_adapter.adapter) !=
DXGI_ERROR_NOT_FOUND) {
++num_adapters;
DXGI_ADAPTER_DESC1 desc = DXGI_ADAPTER_DESC1();
curent_adapter.adapter->GetDesc1(&desc);
if (LUID(desc) != luid) {
continue;
}
selected_adapter = curent_adapter.adapter;
selected_adapter->AddRef();
IDXGIOutput *output;
if (curent_adapter.adapter->EnumOutputs(0, &output) !=
DXGI_ERROR_NOT_FOUND) {
IDXGIOutput1 *temp;
hr = output->QueryInterface(IID_PPV_ARGS(&temp));
if (SUCCEEDED(hr)) {
selected_output = temp;
}
}
found = true;
break;
}
*adapter_out = selected_adapter;
*output_out = selected_output;
return found;
}
class dx_device_context {
public:
dx_device_context(int64_t luid) {
// CreateDXGIFactory1 (the "1") is what matters here; requesting only the
// IDXGIFactory2 GUID will not work.
HRESULT hr = CreateDXGIFactory1(IID_PPV_ARGS(&factory2));
if (FAILED(hr))
exit(hr);
if (!get_first_adapter_output(factory2, &adapter1, &output1, luid)) {
std::cout << "no render adapter found" << std::endl;
exit(-1);
}
D3D_FEATURE_LEVEL levels[]{D3D_FEATURE_LEVEL_11_0};
hr = D3D11CreateDevice(
adapter1, D3D_DRIVER_TYPE_UNKNOWN, NULL,
D3D11_CREATE_DEVICE_VIDEO_SUPPORT | D3D11_CREATE_DEVICE_BGRA_SUPPORT,
levels, 1, D3D11_SDK_VERSION, &device, NULL, &context);
if (FAILED(hr))
exit(hr);
hr = device->QueryInterface(IID_PPV_ARGS(&video_device));
if (FAILED(hr))
exit(hr);
hr = context->QueryInterface(IID_PPV_ARGS(&video_context));
if (FAILED(hr))
exit(hr);
hr = context->QueryInterface(IID_PPV_ARGS(&hmt));
if (FAILED(hr))
exit(hr);
// This is required for MFXVideoCORE_SetHandle
hr = hmt->SetMultithreadProtected(TRUE);
if (FAILED(hr))
exit(hr);
}
~dx_device_context() {
if (hmt)
hmt->Release();
if (video_context)
video_context->Release();
if (video_device)
video_device->Release();
if (context)
context->Release();
if (device)
device->Release();
if (output1)
output1->Release();
if (adapter1)
adapter1->Release();
if (factory2)
factory2->Release();
}
IDXGIFactory2 *factory2 = nullptr;
IDXGIAdapter1 *adapter1 = nullptr;
IDXGIOutput1 *output1 = nullptr;
ID3D11Device *device = nullptr;
ID3D11DeviceContext *context = nullptr;
ID3D11VideoDevice *video_device = nullptr;
ID3D11VideoContext *video_context = nullptr;
ID3D10Multithread *hmt = nullptr;
HMODULE debug_mod = nullptr;
};
class simplerenderer {
public:
simplerenderer(HWND in_window, dx_device_context &dev_ctx)
: ctx(dev_ctx), window(in_window) {
ctx.factory2->MakeWindowAssociation(in_window, 0);
sampler_view = nullptr;
D3D11_SAMPLER_DESC desc = CD3D11_SAMPLER_DESC(CD3D11_DEFAULT());
HRESULT hr = ctx.device->CreateSamplerState(&desc, &sampler_interp);
if (FAILED(hr))
exit(hr);
init_fbo0(window);
init_vbo();
init_shaders();
}
~simplerenderer() {
// render_thread.join();
if (sampler_interp)
sampler_interp->Release();
if (sampler_view)
sampler_view->Release();
if (vbo)
vbo->Release();
if (vao)
vao->Release();
if (frag)
frag->Release();
if (vert)
vert->Release();
if (fbo0)
fbo0->Release();
if (fbo0rbo)
fbo0rbo->Release();
if (swapchain)
swapchain->Release();
}
private:
void init_fbo0(HWND window) {
DXGI_SWAP_CHAIN_DESC1 swp_desc{
1280, 720, DXGI_FORMAT_B8G8R8A8_UNORM, FALSE, DXGI_SAMPLE_DESC{1, 0},
DXGI_USAGE_RENDER_TARGET_OUTPUT, 3, DXGI_SCALING_STRETCH,
// DXGI_SWAP_EFFECT_DISCARD,
DXGI_SWAP_EFFECT_FLIP_DISCARD, DXGI_ALPHA_MODE_UNSPECIFIED, 0};
HRESULT hr = ctx.factory2->CreateSwapChainForHwnd(
ctx.device, window, &swp_desc, NULL, NULL, &swapchain);
if (FAILED(hr))
exit(hr);
fbo0 = nullptr;
fbo0rbo = nullptr;
RECT rect;
GetClientRect(window, &rect);
hr = swapchain->GetBuffer(0, IID_PPV_ARGS(&fbo0rbo));
if (FAILED(hr))
exit(hr);
hr = ctx.device->CreateRenderTargetView(fbo0rbo, nullptr, &fbo0);
if (FAILED(hr))
exit(hr);
D3D11_VIEWPORT VP{};
VP.Width = static_cast<FLOAT>(rect.right - rect.left);
VP.Height = static_cast<FLOAT>(rect.bottom - rect.top);
VP.MinDepth = 0.0f;
VP.MaxDepth = 1.0f;
VP.TopLeftX = 0;
VP.TopLeftY = 0;
ctx.context->RSSetViewports(1, &VP);
}
void init_vbo() {
struct vertex {
DirectX::XMFLOAT3 Pos;
DirectX::XMFLOAT2 TexCoord;
};
vertex points[4]{{DirectX::XMFLOAT3(-1, 1, 0), DirectX::XMFLOAT2(0, 0)},
{DirectX::XMFLOAT3(1, 1, 0), DirectX::XMFLOAT2(1, 0)},
{DirectX::XMFLOAT3(-1, -1, 0), DirectX::XMFLOAT2(0, 1)},
{DirectX::XMFLOAT3(1, -1, 0), DirectX::XMFLOAT2(1, 1)}};
D3D11_BUFFER_DESC vbo_desc{};
vbo_desc.ByteWidth = sizeof(vertex) * 4;
vbo_desc.Usage = D3D11_USAGE_IMMUTABLE;
vbo_desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
D3D11_SUBRESOURCE_DATA initial_data{};
initial_data.pSysMem = points;
HRESULT hr = ctx.device->CreateBuffer(&vbo_desc, &initial_data, &vbo);
if (FAILED(hr))
exit(hr);
vbo_stride = sizeof(vertex);
vbo_offset = 0;
}
void init_shaders() {
uint8_t *shader_bytecode = nullptr;
size_t bytecode_len = 0;
// read file
FILE *shader_file = fopen(CSO_DIR "/frag.cso", "rb");
fseek(shader_file, 0, SEEK_END);
bytecode_len = ftell(shader_file);
fseek(shader_file, 0, SEEK_SET);
shader_bytecode = (uint8_t *)malloc(bytecode_len);
fread(shader_bytecode, 1, bytecode_len, shader_file);
HRESULT hr = ctx.device->CreatePixelShader(shader_bytecode, bytecode_len,
nullptr, &frag);
if (FAILED(hr))
exit(hr);
// free(shader_bytecode);
shader_file = freopen(CSO_DIR "/vert.cso", "rb", shader_file);
fseek(shader_file, 0, SEEK_END);
bytecode_len = ftell(shader_file);
fseek(shader_file, 0, SEEK_SET);
shader_bytecode = (uint8_t *)malloc(bytecode_len);
fread(shader_bytecode, 1, bytecode_len, shader_file);
// fclose(shader_file);
hr = ctx.device->CreateVertexShader(shader_bytecode, bytecode_len, nullptr,
&vert);
if (FAILED(hr))
exit(hr);
D3D11_INPUT_ELEMENT_DESC input_desc[]{
// semantic name, semantic index, format, input slot, aligned byte offset,
// input slot class, instance data step rate
{"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
D3D11_INPUT_PER_VERTEX_DATA, 0},
{"TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12,
D3D11_INPUT_PER_VERTEX_DATA, 0},
};
hr = ctx.device->CreateInputLayout(input_desc, 2, shader_bytecode,
bytecode_len, &vao);
if (FAILED(hr))
exit(hr);
// free(shader_bytecode);
ctx.context->VSSetShader(vert, nullptr, 0);
ctx.context->IASetInputLayout(vao);
ctx.context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
ctx.context->IASetVertexBuffers(0, 1, &vbo, &vbo_stride, &vbo_offset);
ctx.context->PSSetShader(frag, nullptr, 0);
ctx.context->PSSetSamplers(0, 1, &sampler_interp);
}
void bind_texture(ID3D11Texture2D *texture) {
D3D11_TEXTURE2D_DESC desc;
texture->GetDesc(&desc);
D3D11_SHADER_RESOURCE_VIEW_DESC shader_resource_desc{};
shader_resource_desc.Format = desc.Format;
shader_resource_desc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shader_resource_desc.Texture2D = {0, 1};
if (sampler_view)
sampler_view->Release();
HRESULT hr = ctx.device->CreateShaderResourceView(
texture, &shader_resource_desc, &sampler_view);
if (FAILED(hr))
exit(hr);
ctx.context->PSSetShaderResources(0, 1, &sampler_view);
}
void resize_swapchain(uint32_t width, uint32_t height) {
if (fbo0)
fbo0->Release();
if (fbo0rbo)
fbo0rbo->Release();
HRESULT hr =
swapchain->ResizeBuffers(0, width, height, DXGI_FORMAT_UNKNOWN, 0);
hr = swapchain->GetBuffer(0, IID_PPV_ARGS(&fbo0rbo));
if (FAILED(hr))
exit(hr);
hr = ctx.device->CreateRenderTargetView(fbo0rbo, nullptr, &fbo0);
if (FAILED(hr))
exit(hr);
D3D11_VIEWPORT VP{};
VP.Width = static_cast<FLOAT>(width);
VP.Height = static_cast<FLOAT>(height);
VP.MinDepth = 0.0f;
VP.MaxDepth = 1.0f;
VP.TopLeftX = 0;
VP.TopLeftY = 0;
ctx.context->RSSetViewports(1, &VP);
}
std::chrono::high_resolution_clock::time_point last_fps_time =
std::chrono::high_resolution_clock::now();
public:
void render_frame(ID3D11Texture2D *texture) {
if (!occluded) {
bind_texture(texture);
if (need_resize.load(std::memory_order_acquire)) {
atomic_packed_32x2 temp;
temp.packed.store(client_size.packed.load(std::memory_order_relaxed),
std::memory_order_relaxed);
resize_swapchain(temp.separate.width, temp.separate.height);
}
ctx.context->OMSetRenderTargets(1, &fbo0, nullptr);
ctx.context->Draw(4, 0);
HRESULT hr = swapchain->Present(0, 0);
if (FAILED(hr))
exit(hr);
if (hr == DXGI_STATUS_OCCLUDED) {
occluded = true;
}
frame_count++;
std::chrono::high_resolution_clock::time_point current =
std::chrono::high_resolution_clock::now();
if (current - last_fps_time >= std::chrono::seconds(1)) {
int fps = frame_count - last_frame_count;
last_frame_count = frame_count;
last_fps_time = current;
std::cout << fps << " Hz" << std::endl;
}
} else {
HRESULT hr = swapchain->Present(0, DXGI_PRESENT_TEST);
if (FAILED(hr))
exit(hr);
if (hr != DXGI_STATUS_OCCLUDED) {
occluded = false;
}
}
}
void set_size(uint32_t width, uint32_t height) {
atomic_packed_32x2 temp;
temp.separate.width = width;
temp.separate.height = height;
client_size.packed.store(temp.packed.load(std::memory_order_relaxed),
std::memory_order_relaxed);
}
HWND window;
dx_device_context &ctx;
IDXGISwapChain1 *swapchain;
ID3D11Texture2D *fbo0rbo;
ID3D11RenderTargetView *fbo0;
ID3D11VertexShader *vert;
ID3D11PixelShader *frag;
ID3D11InputLayout *vao;
ID3D11Buffer *vbo;
ID3D11ShaderResourceView *sampler_view;
ID3D11SamplerState *sampler_interp;
UINT vbo_stride;
UINT vbo_offset;
std::thread render_thread;
std::atomic_bool running;
std::atomic_bool need_resize;
bool occluded = false;
int frame_count = 0;
int last_frame_count = 0;
struct atomic_packed_32x2 {
union {
struct detail {
uint32_t width;
uint32_t height;
} separate;
std::atomic_uint64_t packed;
};
atomic_packed_32x2();
} client_size;
};
simplerenderer::atomic_packed_32x2::atomic_packed_32x2(void) {}
class Render {
public:
Render(int64_t luid, bool inputSharedHandle);
int Init();
int RenderTexture(ID3D11Texture2D *);
std::unique_ptr<std::thread> message_thread;
std::unique_ptr<simplerenderer> renderer;
bool running = false;
std::unique_ptr<dx_device_context> ctx;
// dx_device_context ctx;
int64_t luid;
bool inputSharedHandle;
};
Render::Render(int64_t luid, bool inputSharedHandle) {
// ctx.reset(new dx_device_context());
this->luid = luid;
this->inputSharedHandle = inputSharedHandle;
ctx = std::make_unique<dx_device_context>(luid);
};
static void run(Render *self) {
SetProcessDPIAware();
SDL_Init(SDL_INIT_VIDEO);
SDL_Window *window = SDL_CreateWindow(
"test window", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 1280,
720, SDL_WINDOW_RESIZABLE | SDL_WINDOW_ALLOW_HIGHDPI);
SDL_SysWMinfo info{};
SDL_GetWindowWMInfo(window, &info);
{
self->renderer.reset(new simplerenderer(info.info.win.window, *self->ctx));
MONITORINFOEX monitor_info{};
monitor_info.cbSize = sizeof(monitor_info);
DXGI_OUTPUT_DESC screen_desc;
if (self->ctx->output1) {
HRESULT hr = self->ctx->output1->GetDesc(&screen_desc);
GetMonitorInfo(screen_desc.Monitor, &monitor_info);
}
self->running = true;
bool maximized = false;
while (self->running) {
SDL_Event event;
SDL_WaitEvent(&event);
switch (event.type) {
case SDL_WINDOWEVENT:
switch (event.window.event) {
case SDL_WINDOWEVENT_CLOSE:
// capturer.Stop();
self->running = false;
break;
case SDL_WINDOWEVENT_MAXIMIZED:
if (self->ctx->output1) {
int border_l, border_r, border_t, border_b;
SDL_GetWindowBordersSize(window, &border_t, &border_l, &border_b,
&border_r);
int max_w = monitor_info.rcWork.right - monitor_info.rcWork.left;
int max_h =
monitor_info.rcWork.bottom - monitor_info.rcWork.top - border_t;
SDL_SetWindowSize(window, max_w, max_h);
SDL_SetWindowPosition(window, monitor_info.rcWork.left, border_t);
maximized = true;
}
break;
case SDL_WINDOWEVENT_RESTORED:
maximized = false;
break;
case SDL_WINDOWEVENT_RESIZED:
if (self->ctx->output1) {
int max_w, max_h;
SDL_GetWindowMaximumSize(window, &max_w, &max_h);
double aspect = double(screen_desc.DesktopCoordinates.right -
screen_desc.DesktopCoordinates.left) /
(screen_desc.DesktopCoordinates.bottom -
screen_desc.DesktopCoordinates.top);
int temp = event.window.data1 * event.window.data2;
int width = sqrt(temp * aspect) + 0.5;
int height = sqrt(temp / aspect) + 0.5;
int pos_x, pos_y;
SDL_GetWindowPosition(window, &pos_x, &pos_y);
int ori_w, ori_h;
SDL_GetWindowSize(window, &ori_w, &ori_h);
SDL_SetWindowPosition(window, pos_x + ((ori_w - width) / 2),
pos_y + ((ori_h - height) / 2));
SDL_SetWindowSize(window, width, height);
}
break;
case SDL_WINDOWEVENT_SIZE_CHANGED:
self->renderer->set_size(event.window.data1, event.window.data2);
break;
default:
break;
}
break;
default:
break;
}
}
}
SDL_DestroyWindow(window);
SDL_Quit();
exit(0);
}
int Render::Init() {
message_thread.reset(new std::thread(run, this));
return 0;
}
int Render::RenderTexture(ID3D11Texture2D *texture) {
renderer->render_frame(texture);
return 0;
}
extern "C" void *CreateDXGIRender(int64_t luid, bool inputSharedHandle) {
Render *p = new Render(luid, inputSharedHandle);
p->Init();
return p;
}
extern "C" int DXGIRenderTexture(void *render, HANDLE handle) {
Render *self = (Render *)render;
if (!self->running)
return 0;
ComPtr<ID3D11Texture2D> texture = nullptr;
if (self->inputSharedHandle) {
ComPtr<IDXGIResource> resource = nullptr;
ComPtr<ID3D11Texture2D> tex_ = nullptr;
MS_THROW(self->ctx->device->OpenSharedResource(
handle, __uuidof(ID3D11Texture2D),
(void **)resource.ReleaseAndGetAddressOf()));
MS_THROW(resource.As(&tex_));
texture = tex_.Get();
} else {
texture = (ID3D11Texture2D *)handle;
}
self->RenderTexture(texture.Get());
return 0;
}
extern "C" void DestroyDXGIRender(void *render) {
Render *self = (Render *)render;
self->running = false;
if (self->message_thread)
self->message_thread->join();
}
extern "C" void *DXGIDevice(void *render) {
Render *self = (Render *)render;
return self->ctx->device;
}

View File

@@ -1,39 +0,0 @@
#![allow(non_upper_case_globals)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
use std::os::raw::c_void;
include!(concat!(env!("OUT_DIR"), "/render_ffi.rs"));
pub struct Render {
inner: *mut c_void,
}
impl Render {
pub fn new(luid: i64, input_shared_handle: bool) -> Result<Self, ()> {
let inner = unsafe { CreateDXGIRender(luid, input_shared_handle) };
if inner.is_null() {
Err(())
} else {
Ok(Self { inner })
}
}
pub unsafe fn render(&mut self, tex: *mut c_void) -> Result<(), i32> {
let result = DXGIRenderTexture(self.inner, tex);
if result == 0 {
Ok(())
} else {
Err(result)
}
}
pub unsafe fn device(&mut self) -> *mut c_void {
DXGIDevice(self.inner)
}
pub unsafe fn drop(&mut self) {
DestroyDXGIRender(self.inner);
}
}

View File

@@ -1,11 +0,0 @@
#ifndef RENDER_FFI_H
#define RENDER_FFI_H
#include <stdbool.h>
void *CreateDXGIRender(long long luid, bool inputSharedHandle);
int DXGIRenderTexture(void *render, void *tex);
void DestroyDXGIRender(void *render);
void *DXGIDevice(void *render);
#endif // RENDER_FFI_H

View File

@@ -1,13 +0,0 @@
[package]
name = "tool"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
log = "0.4"
[build-dependencies]
cc = "1.0"
bindgen = "0.59"

View File

@@ -1,39 +0,0 @@
use cc::Build;
use std::{
env,
path::{Path, PathBuf},
};
fn main() {
let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
println!("cargo:rerun-if-changed=src");
let ffi_header = "src/tool_ffi.h";
bindgen::builder()
.header(ffi_header)
.rustified_enum("*")
.generate()
.unwrap()
.write_to_file(Path::new(&env::var_os("OUT_DIR").unwrap()).join("tool_ffi.rs"))
.unwrap();
let mut builder = Build::new();
builder.include(
manifest_dir
.parent()
.unwrap()
.parent()
.unwrap()
.join("cpp")
.join("common"),
);
builder.file("src/tool.cpp");
// crate
builder
.cpp(false)
.static_crt(true)
.warnings(false)
.compile("tool");
}

View File

@@ -1,44 +0,0 @@
#![allow(non_upper_case_globals)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
use std::os::raw::c_void;
include!(concat!(env!("OUT_DIR"), "/tool_ffi.rs"));
pub struct Tool {
inner: *mut c_void,
}
impl Tool {
pub fn new(luid: i64) -> Result<Self, ()> {
let inner = unsafe { tool_new(luid) };
if inner.is_null() {
Err(())
} else {
Ok(Self { inner })
}
}
pub fn device(&mut self) -> *mut c_void {
unsafe { tool_device(self.inner) }
}
pub fn get_texture(&mut self, width: i32, height: i32) -> *mut c_void {
unsafe { tool_get_texture(self.inner, width, height) }
}
pub fn get_texture_size(&mut self, texture: *mut c_void) -> (i32, i32) {
let mut width = 0;
let mut height = 0;
unsafe { tool_get_texture_size(self.inner, texture, &mut width, &mut height) }
(width, height)
}
}
impl Drop for Tool {
fn drop(&mut self) {
unsafe { tool_destroy(self.inner) }
self.inner = std::ptr::null_mut();
}
}

View File

@@ -1,67 +0,0 @@
#include <memory.h>
#include "common.h"
#include "system.h"
namespace {
class Tool {
public:
std::unique_ptr<NativeDevice> native_;
bool initialized_ = false;
public:
Tool(int64_t luid) {
native_ = std::make_unique<NativeDevice>();
initialized_ = native_->Init(luid, nullptr, 1);
}
ID3D11Texture2D *GetTexture(int width, int height) {
native_->EnsureTexture(width, height);
return native_->GetCurrentTexture();
}
void getSize(ID3D11Texture2D *texture, int *width, int *height) {
D3D11_TEXTURE2D_DESC desc;
texture->GetDesc(&desc);
*width = desc.Width;
*height = desc.Height;
}
};
} // namespace
extern "C" {
void *tool_new(int64_t luid) {
Tool *t = new Tool(luid);
if (t && !t->initialized_) {
delete t;
return nullptr;
}
return t;
}
void *tool_device(void *tool) {
Tool *t = (Tool *)tool;
return t->native_->device_.Get();
}
void *tool_get_texture(void *tool, int width, int height) {
Tool *t = (Tool *)tool;
return t->GetTexture(width, height);
}
void tool_get_texture_size(void *tool, void *texture, int *width, int *height) {
Tool *t = (Tool *)tool;
t->getSize((ID3D11Texture2D *)texture, width, height);
}
void tool_destroy(void *tool) {
Tool *t = (Tool *)tool;
if (t) {
delete t;
t = nullptr;
}
}
} // extern "C"

View File

@@ -1,12 +0,0 @@
#ifndef TOOL_FFI_H
#define TOOL_FFI_H
#include <stdint.h>
void *tool_new(int64_t luid);
void *tool_device(void *tool);
void *tool_get_texture(void *tool, int width, int height);
void tool_get_texture_size(void *tool, void *texture, int *width, int *height);
void tool_destroy(void *tool);
#endif // TOOL_FFI_H

View File

@@ -1,31 +0,0 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 17
VisualStudioVersion = 17.5.33627.172
MinimumVisualStudioVersion = 10.0.40219.1
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "AMFTest", "AMFTest.vcxproj", "{59599E6A-52F7-44DD-9EC5-487342FF33F8}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|x64 = Debug|x64
Debug|x86 = Debug|x86
Release|x64 = Release|x64
Release|x86 = Release|x86
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{59599E6A-52F7-44DD-9EC5-487342FF33F8}.Debug|x64.ActiveCfg = Debug|x64
{59599E6A-52F7-44DD-9EC5-487342FF33F8}.Debug|x64.Build.0 = Debug|x64
{59599E6A-52F7-44DD-9EC5-487342FF33F8}.Debug|x86.ActiveCfg = Debug|Win32
{59599E6A-52F7-44DD-9EC5-487342FF33F8}.Debug|x86.Build.0 = Debug|Win32
{59599E6A-52F7-44DD-9EC5-487342FF33F8}.Release|x64.ActiveCfg = Release|x64
{59599E6A-52F7-44DD-9EC5-487342FF33F8}.Release|x64.Build.0 = Release|x64
{59599E6A-52F7-44DD-9EC5-487342FF33F8}.Release|x86.ActiveCfg = Release|Win32
{59599E6A-52F7-44DD-9EC5-487342FF33F8}.Release|x86.Build.0 = Release|Win32
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {E1168B32-184A-4268-AC43-21E66A82F076}
EndGlobalSection
EndGlobal

View File

@@ -1,154 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug|Win32">
<Configuration>Debug</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|Win32">
<Configuration>Release</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug|x64">
<Configuration>Debug</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|x64">
<Configuration>Release</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
</ItemGroup>
<PropertyGroup Label="Globals">
<VCProjectVersion>16.0</VCProjectVersion>
<Keyword>Win32Proj</Keyword>
<ProjectGuid>{59599e6a-52f7-44dd-9ec5-487342ff33f8}</ProjectGuid>
<RootNamespace>AMFTest</RootNamespace>
<WindowsTargetPlatformVersion>10.0</WindowsTargetPlatformVersion>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Label="Shared">
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>_DEBUG;_CONSOLE;_CRT_SECURE_NO_WARNINGS;CSO_DIR="../../render/res";%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
<AdditionalIncludeDirectories>..\..\externals\AMF_v1.4.29\amf;..\..\externals\AMF_v1.4.29\amf\public\common;..\..\common\src;..\..\common\src\platform\win;..\..\externals\nvEncDXGIOutputDuplicationSample;..\..\codec\src;..\..\externals\SDL\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalLibraryDirectories>..\..\externals\SDL\lib\x64;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>SDL2.lib;dxgi.lib;d3d11.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<ClCompile Include="..\..\amf\src\common.cpp" />
<ClCompile Include="..\..\amf\src\decode.cpp" />
<ClCompile Include="..\..\amf\src\encode.cpp" />
<ClCompile Include="..\..\capture\src\dxgi.cpp" />
<ClCompile Include="..\..\codec\src\data.c" />
<ClCompile Include="..\..\codec\src\utils.c" />
<ClCompile Include="..\..\common\src\log.cpp" />
<ClCompile Include="..\..\common\src\platform\win\win.cpp" />
<ClCompile Include="..\..\externals\AMF_v1.4.29\amf\public\common\AMFFactory.cpp" />
<ClCompile Include="..\..\externals\AMF_v1.4.29\amf\public\common\AMFSTL.cpp" />
<ClCompile Include="..\..\externals\AMF_v1.4.29\amf\public\common\Thread.cpp" />
<ClCompile Include="..\..\externals\AMF_v1.4.29\amf\public\common\TraceAdapter.cpp" />
<ClCompile Include="..\..\externals\AMF_v1.4.29\amf\public\common\Windows\ThreadWindows.cpp" />
<ClCompile Include="..\..\externals\nvEncDXGIOutputDuplicationSample\DDAImpl.cpp" />
<ClCompile Include="..\..\externals\nvEncDXGIOutputDuplicationSample\Preproc.cpp" />
<ClCompile Include="..\..\render\src\dxgi_sdl.cpp" />
<ClCompile Include="main.cpp" />
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<ImportGroup Label="ExtensionTargets">
</ImportGroup>
</Project>

@@ -1,83 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<Filter Include="source">
<UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>
<Extensions>cpp;c;cc;cxx;c++;cppm;ixx;def;odl;idl;hpj;bat;asm;asmx</Extensions>
</Filter>
<Filter Include="source\externals">
<UniqueIdentifier>{a09eabb2-ceac-4062-b19e-244b29176c2b}</UniqueIdentifier>
</Filter>
<Filter Include="source\externals\AMF_v1.4.29">
<UniqueIdentifier>{8adb5a7a-3fba-4330-9a43-dc4251f2ff2f}</UniqueIdentifier>
</Filter>
<Filter Include="source\externals\nvEncDXGIOutputDuplicationSample">
<UniqueIdentifier>{2bc8fd1a-5082-40fb-a6b5-ac8c3b84d450}</UniqueIdentifier>
</Filter>
<Filter Include="source\render">
<UniqueIdentifier>{6edc9023-4c96-4474-b544-20059b2e32ac}</UniqueIdentifier>
</Filter>
<Filter Include="source\codec">
<UniqueIdentifier>{ccee2176-d5f2-46b3-b138-538ac9bd7d13}</UniqueIdentifier>
</Filter>
<Filter Include="source\common">
<UniqueIdentifier>{8a1b8e43-9bdb-4e28-98d9-854c8e8bf107}</UniqueIdentifier>
</Filter>
<Filter Include="source\capture">
<UniqueIdentifier>{a65898d7-8992-40d0-9141-7b4f10415e1f}</UniqueIdentifier>
</Filter>
</ItemGroup>
<ItemGroup>
<ClCompile Include="main.cpp">
<Filter>source</Filter>
</ClCompile>
<ClCompile Include="..\..\externals\AMF_v1.4.29\amf\public\common\AMFFactory.cpp">
<Filter>source\externals\AMF_v1.4.29</Filter>
</ClCompile>
<ClCompile Include="..\..\externals\AMF_v1.4.29\amf\public\common\AMFSTL.cpp">
<Filter>source\externals\AMF_v1.4.29</Filter>
</ClCompile>
<ClCompile Include="..\..\externals\AMF_v1.4.29\amf\public\common\Thread.cpp">
<Filter>source\externals\AMF_v1.4.29</Filter>
</ClCompile>
<ClCompile Include="..\..\externals\AMF_v1.4.29\amf\public\common\Windows\ThreadWindows.cpp">
<Filter>source\externals\AMF_v1.4.29</Filter>
</ClCompile>
<ClCompile Include="..\..\externals\AMF_v1.4.29\amf\public\common\TraceAdapter.cpp">
<Filter>source\externals\AMF_v1.4.29</Filter>
</ClCompile>
<ClCompile Include="..\..\externals\nvEncDXGIOutputDuplicationSample\DDAImpl.cpp">
<Filter>source\externals\nvEncDXGIOutputDuplicationSample</Filter>
</ClCompile>
<ClCompile Include="..\..\externals\nvEncDXGIOutputDuplicationSample\Preproc.cpp">
<Filter>source\externals\nvEncDXGIOutputDuplicationSample</Filter>
</ClCompile>
<ClCompile Include="..\..\common\src\platform\win\win.cpp">
<Filter>source\common</Filter>
</ClCompile>
<ClCompile Include="..\..\capture\src\dxgi.cpp">
<Filter>source\capture</Filter>
</ClCompile>
<ClCompile Include="..\..\render\src\dxgi_sdl.cpp">
<Filter>source\render</Filter>
</ClCompile>
<ClCompile Include="..\..\codec\src\data.c">
<Filter>source\codec</Filter>
</ClCompile>
<ClCompile Include="..\..\codec\src\utils.c">
<Filter>source\codec</Filter>
</ClCompile>
<ClCompile Include="..\..\amf\src\common.cpp">
<Filter>source\codec</Filter>
</ClCompile>
<ClCompile Include="..\..\amf\src\decode.cpp">
<Filter>source\codec</Filter>
</ClCompile>
<ClCompile Include="..\..\amf\src\encode.cpp">
<Filter>source\codec</Filter>
</ClCompile>
<ClCompile Include="..\..\common\src\log.cpp">
<Filter>source\common</Filter>
</ClCompile>
</ItemGroup>
</Project>

@@ -1,4 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="Current" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup />
</Project>

@@ -1,109 +0,0 @@
#include <Windows.h>
#include <callback.h>
#include <common.h>
#include <iostream>
#include <stdint.h>
#include <system.h>
extern "C" {
void *dxgi_new_capturer(int64_t luid);
void *dxgi_device(void *self);
int dxgi_width(const void *self);
int dxgi_height(const void *self);
void *dxgi_capture(void *self, int wait_ms);
void destroy_dxgi_capturer(void *self);
void *amf_new_encoder(void *hdl, int64_t luid, API api, DataFormat dataFormat,
int32_t width, int32_t height, int32_t kbs,
int32_t framerate, int32_t gop);
int amf_encode(void *e, void *tex, EncodeCallback callback, void *obj);
int amf_destroy_encoder(void *e);
void *amf_new_decoder(void *device, int64_t luid, int32_t api,
int32_t dataFormat, bool outputSharedHandle);
int amf_decode(void *decoder, uint8_t *data, int32_t length,
DecodeCallback callback, void *obj);
int amf_destroy_decoder(void *decoder);
void *CreateDXGIRender(long long luid, bool inputSharedHandle);
int DXGIRenderTexture(void *render, HANDLE shared_handle);
void DestroyDXGIRender(void *render);
void *DXGIDevice(void *render);
}
static const uint8_t *encode_data;
static int32_t encode_len;
static void *decode_shared_handle;
extern "C" static void encode_callback(const uint8_t *data, int32_t len,
int32_t key, const void *obj) {
encode_data = data;
encode_len = len;
std::cerr << "encode len" << len << std::endl;
}
extern "C" static void decode_callback(void *shared_handle, const void *obj) {
decode_shared_handle = shared_handle;
}
extern "C" void log_gpucodec(int level, const char *message) {
std::cout << message << std::endl;
}
int main() {
Adapters adapters;
adapters.Init(ADAPTER_VENDOR_AMD);
if (adapters.adapters_.size() == 0) {
std::cout << "no amd adapter" << std::endl;
return -1;
}
int64_t luid = LUID(adapters.adapters_[0].get()->desc1_);
DataFormat dataFormat = H264;
void *dup = dxgi_new_capturer(luid);
if (!dup) {
std::cerr << "create duplicator failed" << std::endl;
return -1;
}
int width = dxgi_width(dup);
int height = dxgi_height(dup);
std::cout << "width: " << width << " height: " << height << std::endl;
void *device = dxgi_device(dup);
void *encoder = amf_new_encoder(device, luid, API_DX11, dataFormat, width,
height, 4000, 30, 0xFFFF);
if (!encoder) {
std::cerr << "create encoder failed" << std::endl;
return -1;
}
void *render = CreateDXGIRender(luid, false);
if (!render) {
std::cerr << "create render failed" << std::endl;
return -1;
}
void *decoder =
amf_new_decoder(DXGIDevice(render), luid, API_DX11, dataFormat, false);
if (!decoder) {
std::cerr << "create decoder failed" << std::endl;
return -1;
}
while (true) {
void *texture = dxgi_capture(dup, 100);
if (!texture) {
std::cerr << "texture is NULL" << std::endl;
continue;
}
if (0 != amf_encode(encoder, texture, encode_callback, NULL)) {
std::cerr << "encode failed" << std::endl;
continue;
}
if (0 != amf_decode(decoder, (uint8_t *)encode_data, encode_len,
decode_callback, NULL)) {
std::cerr << "decode failed" << std::endl;
continue;
}
if (0 != DXGIRenderTexture(render, decode_shared_handle)) {
std::cerr << "render failed" << std::endl;
continue;
}
}
// no release temporarily
}

@@ -1,31 +0,0 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 17
VisualStudioVersion = 17.5.33627.172
MinimumVisualStudioVersion = 10.0.40219.1
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "MFXTest", "MFXTest.vcxproj", "{1FBDACB6-9142-40DA-9B50-5F591F1DD0AD}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|x64 = Debug|x64
Debug|x86 = Debug|x86
Release|x64 = Release|x64
Release|x86 = Release|x86
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{1FBDACB6-9142-40DA-9B50-5F591F1DD0AD}.Debug|x64.ActiveCfg = Debug|x64
{1FBDACB6-9142-40DA-9B50-5F591F1DD0AD}.Debug|x64.Build.0 = Debug|x64
{1FBDACB6-9142-40DA-9B50-5F591F1DD0AD}.Debug|x86.ActiveCfg = Debug|Win32
{1FBDACB6-9142-40DA-9B50-5F591F1DD0AD}.Debug|x86.Build.0 = Debug|Win32
{1FBDACB6-9142-40DA-9B50-5F591F1DD0AD}.Release|x64.ActiveCfg = Release|x64
{1FBDACB6-9142-40DA-9B50-5F591F1DD0AD}.Release|x64.Build.0 = Release|x64
{1FBDACB6-9142-40DA-9B50-5F591F1DD0AD}.Release|x86.ActiveCfg = Release|Win32
{1FBDACB6-9142-40DA-9B50-5F591F1DD0AD}.Release|x86.Build.0 = Release|Win32
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {0B559C12-4403-4530-9A4E-7F4DF6235164}
EndGlobalSection
EndGlobal

@@ -1,175 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug|Win32">
<Configuration>Debug</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|Win32">
<Configuration>Release</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug|x64">
<Configuration>Debug</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|x64">
<Configuration>Release</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
</ItemGroup>
<PropertyGroup Label="Globals">
<VCProjectVersion>16.0</VCProjectVersion>
<Keyword>Win32Proj</Keyword>
<ProjectGuid>{1fbdacb6-9142-40da-9b50-5f591f1dd0ad}</ProjectGuid>
<RootNamespace>MFXTest</RootNamespace>
<WindowsTargetPlatformVersion>10.0</WindowsTargetPlatformVersion>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Label="Shared">
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>_DEBUG;_CONSOLE;_CRT_SECURE_NO_WARNINGS;CSO_DIR="../../render/res";MFX_D3D11_SUPPORT;NOMINMAX=1;MFX_DEPRECATED_OFF;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
<AdditionalIncludeDirectories>D:\rustdesk\gpucodec\externals\libvpl_v2023.4.0\tools\legacy\media_sdk_compatibility_headers;..\..\..\externals\libvpl_v2023.4.0\libvpl;..\..\..\externals\libvpl_v2023.4.0\api;..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src;..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\include;..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\include\vm;..\..\..\externals\nvEncDXGIOutputDuplicationSample;..\..\..\native\common;..\..\..\native\gpucodec;..\..\..\externals\SDL\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalLibraryDirectories>..\..\..\externals\SDL\lib\x64;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>SDL2.lib;dxgi.lib;d3d11.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_config_interface\mfx_config_interface.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_config_interface\mfx_config_interface_string_api.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl_config.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl_loader.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl_log.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl_lowlatency.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl_msdk.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_critical_section.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_dispatcher.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_dispatcher_log.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_dispatcher_main.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_driver_store_loader.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_dxva2_device.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_function_table.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_library_iterator.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_load_dll.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_win_reg_key.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\avc_bitstream.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\avc_nal_spl.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\avc_spl.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\base_allocator.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\d3d11_allocator.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\sample_utils.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\vm\atomic.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\vm\shared_object.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\vm\thread_windows.cpp" />
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\vm\time.cpp" />
<ClCompile Include="..\..\..\externals\nvEncDXGIOutputDuplicationSample\DDAImpl.cpp" />
<ClCompile Include="..\..\..\native\common\log.cpp" />
<ClCompile Include="..\..\..\native\common\platform\win\win.cpp" />
<ClCompile Include="..\..\..\native\gpucodec\data.c" />
<ClCompile Include="..\..\..\native\gpucodec\gpucodec_utils.c" />
<ClCompile Include="..\..\..\native\vpl\vpl_decode.cpp" />
<ClCompile Include="..\..\..\native\vpl\vpl_encode.cpp" />
<ClCompile Include="..\..\capture\src\dxgi.cpp" />
<ClCompile Include="..\..\render\src\dxgi_sdl.cpp" />
<ClCompile Include="main.cpp" />
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<ImportGroup Label="ExtensionTargets">
</ImportGroup>
</Project>

@@ -1,172 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<Filter Include="Source Files">
<UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>
<Extensions>cpp;c;cc;cxx;c++;cppm;ixx;def;odl;idl;hpj;bat;asm;asmx</Extensions>
</Filter>
<Filter Include="Header Files">
<UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>
<Extensions>h;hh;hpp;hxx;h++;hm;inl;inc;ipp;xsd</Extensions>
</Filter>
<Filter Include="Resource Files">
<UniqueIdentifier>{67DA6AB6-F800-4c08-8B7A-83BB121AAD01}</UniqueIdentifier>
<Extensions>rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms</Extensions>
</Filter>
<Filter Include="Source Files\capture">
<UniqueIdentifier>{bd78eb0e-0894-4f11-a2f5-2cc02731bc52}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\codec">
<UniqueIdentifier>{dd2d68d6-6559-413b-948b-f718b34cde0a}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\external">
<UniqueIdentifier>{fd52358a-9b99-43b4-b717-f3a77580a52b}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\external\nvEncDXGIOutputDuplicationSample">
<UniqueIdentifier>{bdbd71f9-8ed4-4bdd-a8c3-612f5858504d}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\render">
<UniqueIdentifier>{2089080f-479d-4d89-8cf5-7ac8e52b11d1}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\common">
<UniqueIdentifier>{3514d21d-4a74-44b9-b21b-bea909cda0b2}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\external\vpl">
<UniqueIdentifier>{ad9c7b01-cd0a-40a2-afe6-f81133ac0e15}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\external\vpl\libvpl">
<UniqueIdentifier>{ad00df31-d9af-432b-b452-b3ab9c2b381c}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\external\vpl\libvpl\src">
<UniqueIdentifier>{b5498eaa-1ff8-4c2b-b5c3-7c71a0a2efcc}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\external\vpl\libvpl\src\windows">
<UniqueIdentifier>{bbca452b-a73f-4d37-a393-627c0db2617c}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\external\vpl\libvpl\src\mfx_config_interface">
<UniqueIdentifier>{cb497035-08af-4b38-9abd-e4616861d366}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\external\vpl\sample_common">
<UniqueIdentifier>{85b55a79-fab7-428c-bc6c-fdf0a58b4704}</UniqueIdentifier>
</Filter>
<Filter Include="Source Files\external\vpl\sample_common\vm">
<UniqueIdentifier>{a3824aee-0800-467f-964b-61a76000e160}</UniqueIdentifier>
</Filter>
</ItemGroup>
<ItemGroup>
<ClCompile Include="main.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="..\..\capture\src\dxgi.cpp">
<Filter>Source Files\capture</Filter>
</ClCompile>
<ClCompile Include="..\..\render\src\dxgi_sdl.cpp">
<Filter>Source Files\render</Filter>
</ClCompile>
<ClCompile Include="..\..\..\native\common\log.cpp">
<Filter>Source Files\common</Filter>
</ClCompile>
<ClCompile Include="..\..\..\native\common\platform\win\win.cpp">
<Filter>Source Files\common</Filter>
</ClCompile>
<ClCompile Include="..\..\..\native\gpucodec\data.c">
<Filter>Source Files\codec</Filter>
</ClCompile>
<ClCompile Include="..\..\..\native\gpucodec\gpucodec_utils.c">
<Filter>Source Files\codec</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_critical_section.cpp">
<Filter>Source Files\external\vpl\libvpl\src\windows</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_dispatcher.cpp">
<Filter>Source Files\external\vpl\libvpl\src\windows</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_dispatcher_log.cpp">
<Filter>Source Files\external\vpl\libvpl\src\windows</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_dispatcher_main.cpp">
<Filter>Source Files\external\vpl\libvpl\src\windows</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_driver_store_loader.cpp">
<Filter>Source Files\external\vpl\libvpl\src\windows</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_dxva2_device.cpp">
<Filter>Source Files\external\vpl\libvpl\src\windows</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_function_table.cpp">
<Filter>Source Files\external\vpl\libvpl\src\windows</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_library_iterator.cpp">
<Filter>Source Files\external\vpl\libvpl\src\windows</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_load_dll.cpp">
<Filter>Source Files\external\vpl\libvpl\src\windows</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\windows\mfx_win_reg_key.cpp">
<Filter>Source Files\external\vpl\libvpl\src\windows</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_config_interface\mfx_config_interface.cpp">
<Filter>Source Files\external\vpl\libvpl\src\mfx_config_interface</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_config_interface\mfx_config_interface_string_api.cpp">
<Filter>Source Files\external\vpl\libvpl\src\mfx_config_interface</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl.cpp">
<Filter>Source Files\external\vpl\libvpl\src</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl_config.cpp">
<Filter>Source Files\external\vpl\libvpl\src</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl_loader.cpp">
<Filter>Source Files\external\vpl\libvpl\src</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl_log.cpp">
<Filter>Source Files\external\vpl\libvpl\src</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl_lowlatency.cpp">
<Filter>Source Files\external\vpl\libvpl\src</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\libvpl\src\mfx_dispatcher_vpl_msdk.cpp">
<Filter>Source Files\external\vpl\libvpl\src</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\avc_bitstream.cpp">
<Filter>Source Files\external\vpl\sample_common</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\avc_nal_spl.cpp">
<Filter>Source Files\external\vpl\sample_common</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\avc_spl.cpp">
<Filter>Source Files\external\vpl\sample_common</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\base_allocator.cpp">
<Filter>Source Files\external\vpl\sample_common</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\sample_utils.cpp">
<Filter>Source Files\external\vpl\sample_common</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\vm\atomic.cpp">
<Filter>Source Files\external\vpl\sample_common\vm</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\vm\shared_object.cpp">
<Filter>Source Files\external\vpl\sample_common\vm</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\vm\thread_windows.cpp">
<Filter>Source Files\external\vpl\sample_common\vm</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\vm\time.cpp">
<Filter>Source Files\external\vpl\sample_common\vm</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\nvEncDXGIOutputDuplicationSample\DDAImpl.cpp">
<Filter>Source Files\external\nvEncDXGIOutputDuplicationSample</Filter>
</ClCompile>
<ClCompile Include="..\..\..\native\vpl\vpl_decode.cpp">
<Filter>Source Files\codec</Filter>
</ClCompile>
<ClCompile Include="..\..\..\native\vpl\vpl_encode.cpp">
<Filter>Source Files\codec</Filter>
</ClCompile>
<ClCompile Include="..\..\..\externals\libvpl_v2023.4.0\tools\legacy\sample_common\src\d3d11_allocator.cpp">
<Filter>Source Files\external\vpl\sample_common</Filter>
</ClCompile>
</ItemGroup>
</Project>

@@ -1,4 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="Current" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup />
</Project>

@@ -1,111 +0,0 @@
#include <Windows.h>
#include <callback.h>
#include <common.h>
#include <iostream>
#include <stdint.h>
#include <system.h>
extern "C" {
void *dxgi_new_capturer(int64_t luid);
void *dxgi_device(void *self);
int dxgi_width(const void *self);
int dxgi_height(const void *self);
void *dxgi_capture(void *self, int wait_ms);
void destroy_dxgi_capturer(void *self);
void *vpl_new_encoder(void *hdl, int64_t luid, API api, DataFormat dataFormat,
int32_t width, int32_t height, int32_t kbs,
int32_t framerate, int32_t gop);
int vpl_encode(void *e, void *tex, EncodeCallback callback, void *obj);
int vpl_destroy_encoder(void *e);
void *vpl_new_decoder(void *device, int64_t luid, int32_t api,
int32_t dataFormat, bool outputSharedHandle);
int vpl_decode(void *decoder, uint8_t *data, int32_t length,
DecodeCallback callback, void *obj);
int vpl_destroy_decoder(void *decoder);
void *CreateDXGIRender(long long luid, bool inputSharedHandle);
int DXGIRenderTexture(void *render, HANDLE shared_handle);
void DestroyDXGIRender(void *render);
void *DXGIDevice(void *render);
}
static const uint8_t *encode_data;
static int32_t encode_len;
static void *decode_shared_handle;
extern "C" static void encode_callback(const uint8_t *data, int32_t len,
int32_t key, const void *obj) {
encode_data = data;
encode_len = len;
}
extern "C" static void decode_callback(void *shared_handle, const void *obj) {
decode_shared_handle = shared_handle;
}
extern "C" void log_gpucodec(int level, const char *message) {
std::cout << message << std::endl;
}
int main() {
Adapters adapters;
adapters.Init(ADAPTER_VENDOR_INTEL);
if (adapters.adapters_.size() == 0) {
std::cout << "no intel adapter" << std::endl;
return -1;
}
int64_t luid = LUID(adapters.adapters_[0].get()->desc1_);
DataFormat dataFormat = H264;
void *dup = dxgi_new_capturer(luid);
if (!dup) {
std::cerr << "create duplicator failed" << std::endl;
return -1;
}
int width = dxgi_width(dup);
int height = dxgi_height(dup);
std::cout << "width: " << width << " height: " << height << std::endl;
void *device = dxgi_device(dup);
void *encoder = vpl_new_encoder(device, luid, API_DX11, dataFormat, width,
height, 4000, 30, 0xFFFF);
if (!encoder) {
std::cerr << "create encoder failed" << std::endl;
return -1;
}
void *render = CreateDXGIRender(luid, false);
if (!render) {
std::cerr << "create render failed" << std::endl;
return -1;
}
void *decoder =
vpl_new_decoder(DXGIDevice(render), luid, API_DX11, dataFormat, false);
if (!decoder) {
std::cerr << "create decoder failed" << std::endl;
return -1;
}
while (true) {
void *texture = dxgi_capture(dup, 100);
if (!texture) {
std::cerr << "texture is NULL" << std::endl;
continue;
}
if (0 != vpl_encode(encoder, texture, encode_callback, NULL)) {
std::cerr << "encode failed" << std::endl;
continue;
}
if (0 != vpl_decode(decoder, (uint8_t *)encode_data, encode_len,
decode_callback, NULL)) {
std::cerr << "decode failed" << std::endl;
continue;
}
if (0 != DXGIRenderTexture(render, decode_shared_handle)) {
std::cerr << "render failed" << std::endl;
continue;
}
}
// no release temporarily
}

@@ -1,31 +0,0 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 17
VisualStudioVersion = 17.5.33627.172
MinimumVisualStudioVersion = 10.0.40219.1
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "ShaderCompileTool", "ShaderCompileTool.vcxproj", "{AB626EFE-F38C-4587-B79C-29FC898FBC96}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|x64 = Debug|x64
Debug|x86 = Debug|x86
Release|x64 = Release|x64
Release|x86 = Release|x86
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{AB626EFE-F38C-4587-B79C-29FC898FBC96}.Debug|x64.ActiveCfg = Debug|x64
{AB626EFE-F38C-4587-B79C-29FC898FBC96}.Debug|x64.Build.0 = Debug|x64
{AB626EFE-F38C-4587-B79C-29FC898FBC96}.Debug|x86.ActiveCfg = Debug|Win32
{AB626EFE-F38C-4587-B79C-29FC898FBC96}.Debug|x86.Build.0 = Debug|Win32
{AB626EFE-F38C-4587-B79C-29FC898FBC96}.Release|x64.ActiveCfg = Release|x64
{AB626EFE-F38C-4587-B79C-29FC898FBC96}.Release|x64.Build.0 = Release|x64
{AB626EFE-F38C-4587-B79C-29FC898FBC96}.Release|x86.ActiveCfg = Release|Win32
{AB626EFE-F38C-4587-B79C-29FC898FBC96}.Release|x86.Build.0 = Release|Win32
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {9A483A95-1588-48EF-A2C9-2B4731AFCAB2}
EndGlobalSection
EndGlobal

@@ -1,150 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug|Win32">
<Configuration>Debug</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|Win32">
<Configuration>Release</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug|x64">
<Configuration>Debug</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|x64">
<Configuration>Release</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
</ItemGroup>
<PropertyGroup Label="Globals">
<VCProjectVersion>16.0</VCProjectVersion>
<Keyword>Win32Proj</Keyword>
<ProjectGuid>{ab626efe-f38c-4587-b79c-29fc898fbc96}</ProjectGuid>
<RootNamespace>ShaderCompileTool</RootNamespace>
<WindowsTargetPlatformVersion>10.0</WindowsTargetPlatformVersion>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Label="Shared">
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<FxCompile Include="nv_vertex_shader.hlsl">
<ObjectFileOutput Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
</ObjectFileOutput>
<EntryPointName Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">VS</EntryPointName>
<DisableOptimizations Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">false</DisableOptimizations>
<ShaderType Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">Vertex</ShaderType>
<HeaderFileOutput Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">%(Filename).h</HeaderFileOutput>
</FxCompile>
<FxCompile Include="nv_pixel_shader_601.hlsl">
<ObjectFileOutput Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
</ObjectFileOutput>
<EntryPointName Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">PS</EntryPointName>
<DisableOptimizations Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">false</DisableOptimizations>
<ShaderType Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">Pixel</ShaderType>
<HeaderFileOutput Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">%(Filename).h</HeaderFileOutput>
</FxCompile>
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<ImportGroup Label="ExtensionTargets">
</ImportGroup>
</Project>

@@ -1,21 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<Filter Include="Source Files">
<UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>
<Extensions>cpp;c;cc;cxx;c++;cppm;ixx;def;odl;idl;hpj;bat;asm;asmx</Extensions>
</Filter>
<Filter Include="Header Files">
<UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>
<Extensions>h;hh;hpp;hxx;h++;hm;inl;inc;ipp;xsd</Extensions>
</Filter>
<Filter Include="Resource Files">
<UniqueIdentifier>{67DA6AB6-F800-4c08-8B7A-83BB121AAD01}</UniqueIdentifier>
<Extensions>rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms</Extensions>
</Filter>
</ItemGroup>
<ItemGroup>
<FxCompile Include="nv_pixel_shader_601.hlsl" />
<FxCompile Include="nv_vertex_shader.hlsl" />
</ItemGroup>
</Project>

@@ -1,4 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="Current" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup />
</Project>

@@ -1,20 +0,0 @@
Texture2D g_txFrame0 : register(t0);
Texture2D g_txFrame1 : register(t1);
SamplerState g_Sam : register(s0);
struct PS_INPUT
{
float4 Pos : SV_POSITION;
float2 Tex : TEXCOORD0;
};
float4 PS(PS_INPUT input) : SV_TARGET{
float y = g_txFrame0.Sample(g_Sam, input.Tex).r;
y = 1.164383561643836 * (y - 0.0625);
float2 uv = g_txFrame1.Sample(g_Sam, input.Tex).rg - float2(0.5f, 0.5f);
float u = uv.x;
float v = uv.y;
float r = saturate(y + 1.596026785714286 * v);
float g = saturate(y - 0.812967647237771 * v - 0.391762290094914 * u);
float b = saturate(y + 2.017232142857142 * u);
return float4(r, g, b, 1.0f);
}

@@ -1,15 +0,0 @@
struct VS_INPUT
{
float4 Pos : POSITION;
float2 Tex : TEXCOORD;
};
struct VS_OUTPUT
{
float4 Pos : SV_POSITION;
float2 Tex : TEXCOORD;
};
VS_OUTPUT VS(VS_INPUT input)
{
return input;
}

@@ -1,214 +0,0 @@
use env_logger::{init_from_env, Env, DEFAULT_FILTER_ENV};
#[cfg(feature = "vram")]
use hwcodec::{
common::MAX_GOP,
vram::{DynamicContext, FeatureContext},
};
use hwcodec::{
common::{DataFormat, Quality::*, RateControl::*},
ffmpeg::AVPixelFormat::*,
ffmpeg_ram::{
decode::{DecodeContext, Decoder},
encode::{EncodeContext, Encoder},
ffmpeg_linesize_offset_length, CodecInfo,
},
};
#[cfg(feature = "vram")]
use tool::Tool;
fn main() {
init_from_env(Env::default().filter_or(DEFAULT_FILTER_ENV, "info"));
let max_align = 16;
setup_ram(max_align);
#[cfg(feature = "vram")]
setup_vram(max_align);
}
fn setup_ram(max_align: i32) {
let encoders = Encoder::available_encoders(
EncodeContext {
name: String::from(""),
mc_name: None,
width: 1920,
height: 1080,
pixfmt: AV_PIX_FMT_NV12,
align: 0,
fps: 30,
gop: 60,
rc: RC_CBR,
quality: Quality_Default,
kbs: 0,
q: -1,
thread_count: 1,
},
None,
);
let decoders = Decoder::available_decoders();
let h264_encoders = encoders
.iter()
.filter(|info| info.name.contains("h264"))
.cloned()
.collect::<Vec<_>>();
let h265_encoders = encoders
.iter()
.filter(|info| info.name.contains("hevc"))
.cloned()
.collect::<Vec<_>>();
let h264_decoders = decoders
.iter()
.filter(|info| info.format == DataFormat::H264)
.cloned()
.collect::<Vec<_>>();
let h265_decoders = decoders
.iter()
.filter(|info| info.format == DataFormat::H265)
.cloned()
.collect::<Vec<_>>();
let start_width = 1920;
let start_height = 1080;
let step = 2;
for width in (start_width..=(start_width + max_align)).step_by(step) {
for height in (start_height..=(start_height + max_align)).step_by(step) {
for encode_info in &h264_encoders {
test_ram(width, height, encode_info.clone(), h264_decoders[0].clone());
}
for decode_info in &h264_decoders {
test_ram(width, height, h264_encoders[0].clone(), decode_info.clone());
}
for encode_info in &h265_encoders {
test_ram(width, height, encode_info.clone(), h265_decoders[0].clone());
}
for decode_info in &h265_decoders {
test_ram(width, height, h265_encoders[0].clone(), decode_info.clone());
}
}
}
}
fn test_ram(width: i32, height: i32, encode_info: CodecInfo, decode_info: CodecInfo) {
println!(
"Test {}x{}: {} -> {}",
width, height, encode_info.name, decode_info.name
);
let encode_ctx = EncodeContext {
name: encode_info.name.clone(),
mc_name: None,
width,
height,
pixfmt: AV_PIX_FMT_NV12,
align: 0,
kbs: 0,
fps: 30,
gop: 60,
quality: Quality_Default,
rc: RC_CBR,
thread_count: 1,
q: -1,
};
let decode_ctx = DecodeContext {
name: decode_info.name.clone(),
device_type: decode_info.hwdevice,
thread_count: 4,
};
let (_, _, len) = ffmpeg_linesize_offset_length(
encode_ctx.pixfmt,
encode_ctx.width as _,
encode_ctx.height as _,
encode_ctx.align as _,
)
.unwrap();
let mut video_encoder = Encoder::new(encode_ctx).unwrap();
let mut video_decoder = Decoder::new(decode_ctx).unwrap();
let buf: Vec<u8> = vec![0; len as usize];
let encode_frames = video_encoder.encode(&buf, 0).unwrap();
assert_eq!(encode_frames.len(), 1);
let docode_frames = video_decoder.decode(&encode_frames[0].data).unwrap();
assert_eq!(docode_frames.len(), 1);
assert_eq!(docode_frames[0].width, width);
assert_eq!(docode_frames[0].height, height);
println!(
"Pass {}x{}: {} -> {} {:?}",
width, height, encode_info.name, decode_info.name, decode_info.hwdevice
)
}
#[cfg(feature = "vram")]
fn setup_vram(max_align: i32) {
let encoders = hwcodec::vram::encode::available(DynamicContext {
device: None,
width: 1920,
height: 1080,
kbitrate: 1000,
framerate: 30,
gop: MAX_GOP as _,
});
let decoders = hwcodec::vram::decode::available();
let start_width = 1920;
let start_height = 1080;
let step = 2;
for width in (start_width..=(start_width + max_align)).step_by(step) {
for height in (start_height..=(start_height + max_align)).step_by(step) {
for encode_info in &encoders {
if let Some(decoder) = decoders.iter().find(|d| {
d.luid == encode_info.luid && d.data_format == encode_info.data_format
}) {
test_vram(width, height, encode_info.clone(), decoder.clone());
}
}
for decode_info in &decoders {
if let Some(encoder) = encoders.iter().find(|e| {
e.luid == decode_info.luid && e.data_format == decode_info.data_format
}) {
test_vram(width, height, encoder.clone(), decode_info.clone());
}
}
}
}
}
#[cfg(feature = "vram")]
fn test_vram(
width: i32,
height: i32,
encode_info: FeatureContext,
decode_info: hwcodec::vram::DecodeContext,
) {
println!(
"Test {}x{}: {:?} {:?} -> {:?}",
width, height, encode_info.data_format, encode_info.driver, decode_info.driver
);
let mut tool = Tool::new(encode_info.luid).unwrap();
let encode_ctx = hwcodec::vram::EncodeContext {
f: encode_info.clone(),
d: hwcodec::vram::DynamicContext {
device: Some(tool.device()),
width,
height,
kbitrate: 1000,
framerate: 30,
gop: MAX_GOP as _,
},
};
let mut encoder = hwcodec::vram::encode::Encoder::new(encode_ctx).unwrap();
let mut decoder = hwcodec::vram::decode::Decoder::new(hwcodec::vram::DecodeContext {
device: Some(tool.device()),
..decode_info.clone()
})
.unwrap();
let encode_frames = encoder.encode(tool.get_texture(width, height), 0).unwrap();
assert_eq!(encode_frames.len(), 1);
let decoder_frames = decoder.decode(&encode_frames[0].data).unwrap();
assert_eq!(decoder_frames.len(), 1);
let (decoded_width, decoded_height) = tool.get_texture_size(decoder_frames[0].texture);
assert_eq!(decoded_width, width);
assert_eq!(decoded_height, height);
println!(
"Pass {}x{}: {:?} {:?} -> {:?}",
width, height, encode_info.data_format, encode_info.driver, decode_info.driver
);
}

@@ -1,67 +0,0 @@
use env_logger::{init_from_env, Env, DEFAULT_FILTER_ENV};
use hwcodec::{
common::{get_gpu_signature, Quality::*, RateControl::*},
ffmpeg::AVPixelFormat,
ffmpeg_ram::{
decode::Decoder,
encode::{EncodeContext, Encoder},
},
};
fn main() {
let start = std::time::Instant::now();
init_from_env(Env::default().filter_or(DEFAULT_FILTER_ENV, "info"));
ram();
#[cfg(feature = "vram")]
vram();
log::info!(
"signature: {:?}, elapsed: {:?}",
get_gpu_signature(),
start.elapsed()
);
}
fn ram() {
println!("ram:");
println!("encoders:");
let ctx = EncodeContext {
name: String::from(""),
mc_name: None,
width: 1280,
height: 720,
pixfmt: AVPixelFormat::AV_PIX_FMT_NV12,
align: 0,
kbs: 1000,
fps: 30,
gop: i32::MAX,
quality: Quality_Default,
rc: RC_CBR,
q: -1,
thread_count: 1,
};
let encoders = Encoder::available_encoders(ctx.clone(), None);
encoders.iter().map(|e| println!("{:?}", e)).count();
println!("decoders:");
let decoders = Decoder::available_decoders();
decoders.iter().map(|e| println!("{:?}", e)).count();
}
#[cfg(feature = "vram")]
fn vram() {
use hwcodec::common::MAX_GOP;
use hwcodec::vram::{decode, encode, DynamicContext};
println!("vram:");
println!("encoders:");
let encoders = encode::available(DynamicContext {
width: 1920,
height: 1080,
kbitrate: 5000,
framerate: 30,
gop: MAX_GOP as _,
device: None,
});
encoders.iter().map(|e| println!("{:?}", e)).count();
println!("decoders:");
let decoders = decode::available();
decoders.iter().map(|e| println!("{:?}", e)).count();
}

@@ -1,147 +0,0 @@
use env_logger::{init_from_env, Env, DEFAULT_FILTER_ENV};
use hwcodec::{
common::{Quality::*, RateControl::*},
ffmpeg::AVPixelFormat,
ffmpeg_ram::{
decode::{DecodeContext, Decoder},
encode::{EncodeContext, Encoder},
CodecInfo, CodecInfos,
},
};
use rand::random;
use std::io::Write;
use std::time::Instant;
fn main() {
init_from_env(Env::default().filter_or(DEFAULT_FILTER_ENV, "info"));
let ctx = EncodeContext {
name: String::from(""),
mc_name: None,
width: 1920,
height: 1080,
pixfmt: AVPixelFormat::AV_PIX_FMT_NV12,
align: 0,
kbs: 5000,
fps: 30,
gop: 60,
quality: Quality_Default,
rc: RC_DEFAULT,
thread_count: 4,
q: -1,
};
let yuv_count = 10;
println!("benchmark");
let yuvs = prepare_yuv(ctx.width as _, ctx.height as _, yuv_count);
let encoders = Encoder::available_encoders(ctx.clone(), None);
log::info!("encoders: {:?}", encoders);
let best = CodecInfo::prioritized(encoders.clone());
for info in encoders {
test_encoder(info.clone(), ctx.clone(), &yuvs, is_best(&best, &info));
}
let (h264s, h265s) = prepare_h26x(best, ctx.clone(), &yuvs);
let decoders = Decoder::available_decoders();
log::info!("decoders: {:?}", decoders);
let best = CodecInfo::prioritized(decoders.clone());
for info in decoders {
let h26xs = if info.name.contains("h264") {
&h264s
} else {
&h265s
};
if h26xs.len() == yuv_count {
test_decoder(info.clone(), h26xs, is_best(&best, &info));
}
}
}
fn test_encoder(info: CodecInfo, ctx: EncodeContext, yuvs: &Vec<Vec<u8>>, best: bool) {
let mut ctx = ctx;
ctx.name = info.name;
let mut encoder = Encoder::new(ctx.clone()).unwrap();
let start = Instant::now();
for yuv in yuvs {
let _ = encoder
.encode(yuv, start.elapsed().as_millis() as _)
.unwrap();
}
println!(
"{}{}: {:?}",
if best { "*" } else { "" },
ctx.name,
start.elapsed() / yuvs.len() as _
);
}
fn test_decoder(info: CodecInfo, h26xs: &Vec<Vec<u8>>, best: bool) {
let ctx = DecodeContext {
name: info.name,
device_type: info.hwdevice,
thread_count: 4,
};
let mut decoder = Decoder::new(ctx.clone()).unwrap();
let start = Instant::now();
let mut cnt = 0;
for h26x in h26xs {
let _ = decoder.decode(h26x).unwrap();
cnt += 1;
}
let device = format!("{:?}", ctx.device_type).to_lowercase();
let device = device.split("_").last().unwrap();
println!(
"{}{} {}: {:?}",
if best { "*" } else { "" },
ctx.name,
device,
start.elapsed() / cnt
);
}
fn prepare_yuv(width: usize, height: usize, count: usize) -> Vec<Vec<u8>> {
let mut ret = vec![];
for index in 0..count {
let linesize = width * 3 / 2;
let mut yuv = vec![0u8; linesize * height];
for y in 0..height {
for x in 0..linesize {
yuv[linesize * y + x] = random();
}
}
ret.push(yuv);
print!("\rprepare {}/{}", index + 1, count);
std::io::stdout().flush().ok();
}
println!();
ret
}
fn prepare_h26x(
best: CodecInfos,
ctx: EncodeContext,
yuvs: &Vec<Vec<u8>>,
) -> (Vec<Vec<u8>>, Vec<Vec<u8>>) {
let f = |info: Option<CodecInfo>| {
let mut h26xs = vec![];
if let Some(info) = info {
let mut ctx = ctx.clone();
ctx.name = info.name;
let mut encoder = Encoder::new(ctx).unwrap();
for yuv in yuvs {
let h26x = encoder.encode(yuv, 0).unwrap();
for frame in h26x {
h26xs.push(frame.data.to_vec());
}
}
}
h26xs
};
(f(best.h264), f(best.h265))
}
fn is_best(best: &CodecInfos, info: &CodecInfo) -> bool {
Some(info.clone()) == best.h264 || Some(info.clone()) == best.h265
}

@@ -1,117 +0,0 @@
use env_logger::{init_from_env, Env, DEFAULT_FILTER_ENV};
use hwcodec::{
common::{Quality::*, RateControl::*},
ffmpeg::{AVHWDeviceType::*, AVPixelFormat::*},
ffmpeg_ram::{
decode::{DecodeContext, Decoder},
encode::{EncodeContext, Encoder},
ffmpeg_linesize_offset_length,
},
};
use std::{
fs::File,
io::{Read, Write},
};
fn main() {
init_from_env(Env::default().filter_or(DEFAULT_FILTER_ENV, "info"));
let encode_ctx = EncodeContext {
name: String::from("h264_nvenc"),
mc_name: None,
width: 1920,
height: 1080,
pixfmt: AV_PIX_FMT_NV12,
align: 0,
kbs: 0,
fps: 30,
gop: 60,
quality: Quality_Default,
rc: RC_DEFAULT,
thread_count: 4,
q: -1,
};
let decode_ctx = DecodeContext {
name: String::from("hevc"),
device_type: AV_HWDEVICE_TYPE_D3D11VA,
thread_count: 4,
};
let _ = std::thread::spawn(move || test_encode_decode(encode_ctx, decode_ctx)).join();
}
fn test_encode_decode(encode_ctx: EncodeContext, decode_ctx: DecodeContext) {
let size: usize;
if let Ok((_, _, len)) = ffmpeg_linesize_offset_length(
encode_ctx.pixfmt,
encode_ctx.width as _,
encode_ctx.height as _,
encode_ctx.align as _,
) {
size = len as _;
} else {
return;
}
let mut video_encoder = Encoder::new(encode_ctx).unwrap();
let mut video_decoder = Decoder::new(decode_ctx).unwrap();
let mut yuv_file = File::open("input/1920_1080_decoded.yuv").unwrap();
let mut encode_file = File::create("output/1920_1080.265").unwrap();
let mut decode_file = File::create("output/1920_1080_decode.yuv").unwrap();
let mut buf = vec![0; size + 64];
let mut encode_sum = 0;
let mut decode_sum = 0;
let mut encode_size = 0;
let mut counter = 0;
let mut f = |data: &[u8]| {
let now = std::time::Instant::now();
if let Ok(encode_frames) = video_encoder.encode(data, 0) {
log::info!("encode:{:?}", now.elapsed());
encode_sum += now.elapsed().as_micros();
for encode_frame in encode_frames.iter() {
encode_size += encode_frame.data.len();
encode_file.write_all(&encode_frame.data).unwrap();
encode_file.flush().unwrap();
let now = std::time::Instant::now();
if let Ok(docode_frames) = video_decoder.decode(&encode_frame.data) {
log::info!("decode:{:?}", now.elapsed());
decode_sum += now.elapsed().as_micros();
counter += 1;
for decode_frame in docode_frames {
log::info!("decode_frame:{}", decode_frame);
for data in decode_frame.data.iter() {
decode_file.write_all(data).unwrap();
decode_file.flush().unwrap();
}
}
}
}
}
};
loop {
match yuv_file.read(&mut buf[..size]) {
Ok(n) => {
if n > 0 {
f(&buf[..n]);
} else {
break;
}
}
Err(e) => {
log::info!("{:?}", e);
break;
}
}
}
log::info!(
"counter:{}, encode_avg:{}us, decode_avg:{}us, size_avg:{}",
counter,
encode_sum / counter,
decode_sum / counter,
encode_size / counter as usize,
);
}

@@ -1,78 +0,0 @@
use capture::dxgi;
use env_logger::{init_from_env, Env, DEFAULT_FILTER_ENV};
use hwcodec::common::{DataFormat, Driver, MAX_GOP};
use hwcodec::vram::{
decode::Decoder, encode::Encoder, DecodeContext, DynamicContext, EncodeContext, FeatureContext,
};
use render::Render;
use std::{
io::Write,
path::PathBuf,
time::{Duration, Instant},
};
fn main() {
init_from_env(Env::default().filter_or(DEFAULT_FILTER_ENV, "trace"));
let luid = 69524; // 63444; // 59677
unsafe {
// one luid create render failed on my pc, wouldn't happen in rustdesk
let data_format = DataFormat::H265;
let mut capturer = dxgi::Capturer::new(luid).unwrap();
let mut render = Render::new(luid, false).unwrap();
let en_ctx = EncodeContext {
f: FeatureContext {
driver: Driver::FFMPEG,
vendor: Driver::NV,
data_format,
luid,
},
d: DynamicContext {
device: Some(capturer.device()),
width: capturer.width(),
height: capturer.height(),
kbitrate: 5000,
framerate: 30,
gop: MAX_GOP as _,
},
};
let de_ctx = DecodeContext {
device: Some(render.device()),
driver: Driver::FFMPEG,
vendor: Driver::NV,
data_format,
luid,
};
let mut dec = Decoder::new(de_ctx).unwrap();
let mut enc = Encoder::new(en_ctx).unwrap();
let filename = PathBuf::from(".\\1.264");
let mut file = std::fs::File::create(filename).unwrap();
let mut dup_sum = Duration::ZERO;
let mut enc_sum = Duration::ZERO;
let mut dec_sum = Duration::ZERO;
let mut pts_instant = Instant::now();
loop {
let start = Instant::now();
let texture = capturer.capture(100);
if texture.is_null() {
continue;
}
dup_sum += start.elapsed();
let start = Instant::now();
let frame = enc
.encode(texture, pts_instant.elapsed().as_millis() as _)
.unwrap();
enc_sum += start.elapsed();
for f in frame {
file.write_all(&f.data).unwrap();
let start = Instant::now();
let frames = dec.decode(&f.data).unwrap();
dec_sum += start.elapsed();
for f in frames {
render.render(f.texture).unwrap();
}
}
}
}
}

View File

@@ -1,128 +0,0 @@
use env_logger::{init_from_env, Env, DEFAULT_FILTER_ENV};
use hwcodec::{
common::{Quality::*, RateControl::*, MAX_GOP},
ffmpeg::{
AVHWDeviceType::{self, *},
AVPixelFormat::*,
},
ffmpeg_ram::{
decode::{DecodeContext, Decoder},
encode::{EncodeContext, Encoder},
},
};
use std::{
fs::File,
io::{Read, Write},
};
fn main() {
let gpu = true;
let h264 = true;
let hw_type = if gpu { "gpu" } else { "hw" };
let file_type = if h264 { "h264" } else { "h265" };
let codec = if h264 { "h264" } else { "hevc" };
init_from_env(Env::default().filter_or(DEFAULT_FILTER_ENV, "info"));
let device_type = AV_HWDEVICE_TYPE_CUDA;
let decode_ctx = DecodeContext {
name: String::from(codec),
device_type,
thread_count: 4,
};
let mut video_decoder = Decoder::new(decode_ctx).unwrap();
decode_encode(
&mut video_decoder,
0,
hw_type,
file_type,
1600,
900,
h264,
device_type,
);
decode_encode(
&mut video_decoder,
1,
hw_type,
file_type,
1440,
900,
h264,
device_type,
);
}
fn decode_encode(
video_decoder: &mut Decoder,
index: usize,
hw_type: &str,
file_type: &str,
width: usize,
height: usize,
h264: bool,
device_type: AVHWDeviceType,
) {
let input_enc_filename = format!("input/data_and_line/{hw_type}_{width}_{height}.{file_type}");
let len_filename = format!("input/data_and_line/{hw_type}_{width}_{height}_{file_type}.txt");
let enc_ctx = EncodeContext {
name: if h264 {
"h264_nvenc".to_owned()
} else {
"hevc_nvenc".to_owned()
},
mc_name: None,
width: width as _,
height: height as _,
pixfmt: if device_type == AV_HWDEVICE_TYPE_NONE {
AV_PIX_FMT_YUV420P
} else {
AV_PIX_FMT_NV12
},
align: 0,
kbs: 1_000,
fps: 30,
gop: MAX_GOP as _,
quality: Quality_Default,
rc: RC_DEFAULT,
thread_count: 4,
q: -1,
};
let mut video_encoder = Encoder::new(enc_ctx).unwrap();
let mut encode_file =
File::create(format!("output/{hw_type}_{width}_{height}.{file_type}")).unwrap();
let mut yuv_file =
File::create(format!("output/{hw_type}_{width}_{height}_decode.yuv")).unwrap();
let mut file_lens = File::open(len_filename).unwrap();
let mut file = File::open(input_enc_filename).unwrap();
let mut file_lens_buf = Vec::new();
file_lens.read_to_end(&mut file_lens_buf).unwrap();
let file_lens_str = String::from_utf8_lossy(&file_lens_buf).to_string();
let lens: Vec<usize> = file_lens_str
.split(",")
.filter(|e| !e.is_empty())
.map(|e| e.parse().unwrap())
.collect();
for i in 0..lens.len() {
let mut buf = vec![0; lens[i]];
file.read_exact(&mut buf).unwrap();
let mut frames = video_decoder.decode(&buf).unwrap();
println!(
"file{}, w:{}, h:{}, fmt:{:?}, linesize:{:?}",
index, frames[0].width, frames[0].height, frames[0].pixfmt, frames[0].linesize
);
assert!(frames.len() == 1);
let mut encode_buf = Vec::new();
for d in &mut frames[0].data {
encode_buf.append(d);
}
yuv_file.write_all(&encode_buf).unwrap();
let frames = video_encoder.encode(&encode_buf, 0).unwrap();
assert_eq!(frames.len(), 1);
for f in frames {
encode_file.write_all(&f.data).unwrap();
}
}
}

View File

@@ -1,32 +0,0 @@
cmake_minimum_required(VERSION 3.15)
project(amf)
set(CMAKE_CXX_STANDARD 11)
cmake_policy(SET CMP0091 NEW)
set(CMAKE_MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug>:Debug>")
set(AMF_DIR ${CMAKE_SOURCE_DIR})
set(AMF_COMMON_DIR ${AMF_DIR}/amf/public/common)
# Source files
set(SOURCES
${AMF_COMMON_DIR}/AMFFactory.cpp
${AMF_COMMON_DIR}/AMFSTL.cpp
${AMF_COMMON_DIR}/Thread.cpp
${AMF_COMMON_DIR}/TraceAdapter.cpp
${AMF_COMMON_DIR}/Windows/ThreadWindows.cpp
)
# Include directories
set(AMF_INCLUDE_DIRS
${AMF_DIR}/amf
${AMF_COMMON_DIR}
)
include_directories(${AMF_INCLUDE_DIRS})
# Build target
add_library(amf STATIC ${SOURCES})
target_link_libraries(amf
ole32
)

View File

@@ -1,262 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#include "AMFFactory.h"
#include "Thread.h"
#ifdef __clang__
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wexit-time-destructors"
#pragma clang diagnostic ignored "-Wglobal-constructors"
#endif
AMFFactoryHelper g_AMFFactory;
#ifdef __clang__
#pragma clang diagnostic pop
#endif
#ifdef AMF_CORE_STATIC
extern "C"
{
extern AMF_CORE_LINK AMF_RESULT AMF_CDECL_CALL AMFInit(amf_uint64 version, amf::AMFFactory **ppFactory);
}
#endif
//-------------------------------------------------------------------------------------------------
AMFFactoryHelper::AMFFactoryHelper() :
m_hDLLHandle(NULL),
m_pFactory(NULL),
m_pDebug(NULL),
m_pTrace(NULL),
m_AMFRuntimeVersion(0),
m_iRefCount(0)
{
}
//-------------------------------------------------------------------------------------------------
AMFFactoryHelper::~AMFFactoryHelper()
{
Terminate();
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMFFactoryHelper::Init(const wchar_t* dllName)
{
dllName;
#ifndef AMF_CORE_STATIC
if (m_hDLLHandle != NULL)
{
amf_atomic_inc(&m_iRefCount);
return AMF_OK;
}
const wchar_t* dllName_ = dllName == NULL ? AMF_DLL_NAME : dllName;
#if defined (_WIN32) || defined (__APPLE__)
m_hDLLHandle = amf_load_library(dllName_);
#else
m_hDLLHandle = amf_load_library1(dllName_, false); //load with local flags
#endif
if(m_hDLLHandle == NULL)
{
return AMF_FAIL;
}
AMFInit_Fn initFun = (AMFInit_Fn)::amf_get_proc_address(m_hDLLHandle, AMF_INIT_FUNCTION_NAME);
if(initFun == NULL)
{
return AMF_FAIL;
}
AMF_RESULT res = initFun(AMF_FULL_VERSION, &m_pFactory);
if(res != AMF_OK)
{
return res;
}
AMFQueryVersion_Fn versionFun = (AMFQueryVersion_Fn)::amf_get_proc_address(m_hDLLHandle, AMF_QUERY_VERSION_FUNCTION_NAME);
if(versionFun == NULL)
{
return AMF_FAIL;
}
res = versionFun(&m_AMFRuntimeVersion);
if(res != AMF_OK)
{
return res;
}
#else
AMF_RESULT res = AMFInit(AMF_FULL_VERSION, &m_pFactory);
if (res != AMF_OK)
{
return res;
}
m_AMFRuntimeVersion = AMF_FULL_VERSION;
#endif
m_pFactory->GetTrace(&m_pTrace);
m_pFactory->GetDebug(&m_pDebug);
amf_atomic_inc(&m_iRefCount);
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMFFactoryHelper::Terminate()
{
if(m_hDLLHandle != NULL)
{
amf_atomic_dec(&m_iRefCount);
if(m_iRefCount == 0)
{
amf_free_library(m_hDLLHandle);
m_hDLLHandle = NULL;
m_pFactory= NULL;
m_pDebug = NULL;
m_pTrace = NULL;
}
}
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
amf::AMFFactory* AMFFactoryHelper::GetFactory()
{
return m_pFactory;
}
//-------------------------------------------------------------------------------------------------
amf::AMFDebug* AMFFactoryHelper::GetDebug()
{
return m_pDebug;
}
//-------------------------------------------------------------------------------------------------
amf::AMFTrace* AMFFactoryHelper::GetTrace()
{
return m_pTrace;
}
//-------------------------------------------------------------------------------------------------
amf_uint64 AMFFactoryHelper::AMFQueryVersion()
{
return m_AMFRuntimeVersion;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMFFactoryHelper::LoadExternalComponent(amf::AMFContext* pContext, const wchar_t* dll, const char* function, void* reserved, amf::AMFComponent** ppComponent)
{
// check passed in parameters
if (!pContext || !dll || !function)
{
return AMF_INVALID_ARG;
}
// check if DLL has already been loaded
amf_handle hDll = NULL;
for (std::vector<ComponentHolder>::iterator it = m_extComponents.begin(); it != m_extComponents.end(); ++it)
{
#if defined(_WIN32)
if (wcsicmp(it->m_DLL.c_str(), dll) == 0) // ignore case on Windows
#elif defined(__linux) // Linux
if (wcscmp(it->m_DLL.c_str(), dll) == 0) // case sensitive on Linux
#endif
{
if (it->m_hDLLHandle != NULL)
{
hDll = it->m_hDLLHandle;
amf_atomic_inc(&it->m_iRefCount);
break;
}
return AMF_UNEXPECTED;
}
}
// DLL wasn't loaded before so load it now and
// add it to the internal list
if (hDll == NULL)
{
ComponentHolder component;
component.m_iRefCount = 0;
component.m_hDLLHandle = NULL;
component.m_DLL = dll;
#if defined(_WIN32) || defined(__APPLE__)
hDll = amf_load_library(dll);
#else
hDll = amf_load_library1(dll, false); //global flag set to true
#endif
if (hDll == NULL)
return AMF_FAIL;
// since LoadLibrary succeeded add the information
// into the internal list so we can properly free
// the DLL later on, even if we fail to get the
// required information from it...
component.m_hDLLHandle = hDll;
amf_atomic_inc(&component.m_iRefCount);
m_extComponents.push_back(component);
}
// look for function we want in the dll we just loaded
typedef AMF_RESULT(AMF_CDECL_CALL *AMFCreateComponentFunc)(amf::AMFContext*, void* reserved, amf::AMFComponent**);
AMFCreateComponentFunc initFn = (AMFCreateComponentFunc)::amf_get_proc_address(hDll, function);
if (initFn == NULL)
return AMF_FAIL;
return initFn(pContext, reserved, ppComponent);
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMFFactoryHelper::UnLoadExternalComponent(const wchar_t* dll)
{
if (!dll)
{
return AMF_INVALID_ARG;
}
for (std::vector<ComponentHolder>::iterator it = m_extComponents.begin(); it != m_extComponents.end(); ++it)
{
#if defined(_WIN32)
if (wcsicmp(it->m_DLL.c_str(), dll) == 0) // ignore case on Windows
#elif defined(__linux) // Linux
if (wcscmp(it->m_DLL.c_str(), dll) == 0) // case sensitive on Linux
#endif
{
if (it->m_hDLLHandle == NULL)
{
return AMF_UNEXPECTED;
}
amf_atomic_dec(&it->m_iRefCount);
if (it->m_iRefCount == 0)
{
amf_free_library(it->m_hDLLHandle);
m_extComponents.erase(it);
}
break;
}
}
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
//-------------------------------------------------------------------------------------------------
//-------------------------------------------------------------------------------------------------

View File

@@ -1,89 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#ifndef AMF_AMFFactory_h
#define AMF_AMFFactory_h
#pragma once
#include "../include/core/Factory.h"
#include <string>
#include <vector>
class AMFFactoryHelper
{
public:
AMFFactoryHelper();
virtual ~AMFFactoryHelper();
AMF_RESULT Init(const wchar_t* dllName = NULL);
AMF_RESULT Terminate();
AMF_RESULT LoadExternalComponent(amf::AMFContext* pContext, const wchar_t* dll, const char* function, void* reserved, amf::AMFComponent** ppComponent);
AMF_RESULT UnLoadExternalComponent(const wchar_t* dll);
amf::AMFFactory* GetFactory();
amf::AMFDebug* GetDebug();
amf::AMFTrace* GetTrace();
amf_uint64 AMFQueryVersion();
amf_handle GetAMFDLLHandle() { return m_hDLLHandle; }
protected:
struct ComponentHolder
{
amf_handle m_hDLLHandle;
amf_long m_iRefCount;
std::wstring m_DLL;
ComponentHolder()
{
m_hDLLHandle = NULL;
m_iRefCount = 0;
}
};
amf_handle m_hDLLHandle;
amf::AMFFactory* m_pFactory;
amf::AMFDebug* m_pDebug;
amf::AMFTrace* m_pTrace;
amf_uint64 m_AMFRuntimeVersion;
amf_long m_iRefCount;
std::vector<ComponentHolder> m_extComponents;
};
extern ::AMFFactoryHelper g_AMFFactory;
#endif // AMF_AMFFactory_h

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,362 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#ifndef AMF_AMFSTL_h
#define AMF_AMFSTL_h
#pragma once
#if defined(__GNUC__)
//disable gcc warinings on STL code
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Weffc++"
#include <memory> //default stl allocator
#else
#include <xmemory> //default stl allocator
#endif
#include <algorithm>
#include <string>
#include <vector>
#include <list>
#include <deque>
#include <queue>
#include <map>
#include <set>
#include "../include/core/Interface.h"
#if defined(__cplusplus)
extern "C"
{
#endif
// allocator
void* AMF_STD_CALL amf_alloc(amf_size count);
void AMF_STD_CALL amf_free(void* ptr);
void* AMF_STD_CALL amf_aligned_alloc(size_t count, size_t alignment);
void AMF_STD_CALL amf_aligned_free(void* ptr);
#if defined(__cplusplus)
}
#endif
namespace amf
{
#pragma warning(push)
#pragma warning(disable: 4996) // was declared deprecated
//-------------------------------------------------------------------------------------------------
// STL allocator redefined - will allocate all memory in "C" runtime of Common.DLL
//-------------------------------------------------------------------------------------------------
template<class _Ty>
class amf_allocator : public std::allocator<_Ty>
{
public:
amf_allocator() : std::allocator<_Ty>()
{}
amf_allocator(const amf_allocator<_Ty>& rhs) : std::allocator<_Ty>(rhs)
{}
template<class _Other> amf_allocator(const amf_allocator<_Other>& rhs) : std::allocator<_Ty>(rhs)
{}
template<class _Other> struct rebind // convert an allocator<_Ty> to an allocator <_Other>
{
typedef amf_allocator<_Other> other;
};
void deallocate(_Ty* const _Ptr, const size_t _Count)
{
_Count;
amf_free((void*)_Ptr);
}
_Ty* allocate(const size_t _Count, const void* = static_cast<const void*>(0))
{ // allocate array of _Count elements
return static_cast<_Ty*>(amf_alloc(_Count * sizeof(_Ty)));
}
};
//-------------------------------------------------------------------------------------------------
// STL container templates with changed memory allocation
//-------------------------------------------------------------------------------------------------
template<class _Ty>
class amf_vector
: public std::vector<_Ty, amf_allocator<_Ty> >
{
public:
typedef std::vector<_Ty, amf_allocator<_Ty> > _base;
amf_vector() : _base() {}
explicit amf_vector(size_t _Count) : _base(_Count) {} //MM GCC has strange compile error. to get around replaced size_type with size_t
amf_vector(size_t _Count, const _Ty& _Val) : _base(_Count,_Val) {}
};
template<class _Ty>
class amf_list
: public std::list<_Ty, amf_allocator<_Ty> >
{};
template<class _Ty>
class amf_deque
: public std::deque<_Ty, amf_allocator<_Ty> >
{};
template<class _Ty>
class amf_queue
: public std::queue<_Ty, amf_deque<_Ty> >
{};
template<class _Kty, class _Ty, class _Pr = std::less<_Kty> >
class amf_map
: public std::map<_Kty, _Ty, _Pr, amf_allocator<std::pair<const _Kty, _Ty>> >
{};
template<class _Kty, class _Pr = std::less<_Kty> >
class amf_set
: public std::set<_Kty, _Pr, amf_allocator<_Kty> >
{};
template<class _Ty>
class amf_limited_deque
: public amf_deque<_Ty> // circular queue of pointers to blocks
{
public:
typedef amf_deque<_Ty> _base;
amf_limited_deque(size_t size_limit) : _base(), _size_limit(size_limit)
{ // construct empty deque
}
size_t size_limit()
{
return _size_limit;
}
void set_size_limit(size_t size_limit)
{
_size_limit = size_limit;
while(_base::size() > _size_limit)
{
_base::pop_front();
}
}
_Ty push_front(const _Ty& _Val)
{ // insert element at beginning
_Ty ret;
if(_size_limit > 0)
{
_base::push_front(_Val);
if(_base::size() > _size_limit)
{
ret = _base::back();
_base::pop_back();
}
}
return ret;
}
void push_front_ex(const _Ty& _Val)
{ // insert element at beginning
_base::push_front(_Val);
}
_Ty push_back(const _Ty& _Val)
{ // insert element at beginning
_Ty ret;
if(_size_limit > 0)
{
_base::push_back(_Val);
if(_base::size() > _size_limit)
{
ret = _base::front();
_base::pop_front();
}
}
return ret;
}
protected:
size_t _size_limit;
};
#pragma warning(pop)
//---------------------------------------------------------------
#if defined(__GNUC__)
//disable gcc warinings on STL code
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Weffc++"
#endif
template<class _Interf>
class AMFInterfacePtr_TAdapted : public AMFInterfacePtr_T<_Interf>
{
public:
AMFInterfacePtr_TAdapted* operator&()
{
return this;
}
AMFInterfacePtr_TAdapted()
: AMFInterfacePtr_T<_Interf>()
{}
AMFInterfacePtr_TAdapted(_Interf* pOther)
: AMFInterfacePtr_T<_Interf>(pOther)
{}
AMFInterfacePtr_TAdapted(const AMFInterfacePtr_T<_Interf>& other)
: AMFInterfacePtr_T<_Interf>(other)
{}
};
template<class _Interf>
class amf_vector<AMFInterfacePtr_T<_Interf> >
: public std::vector<AMFInterfacePtr_TAdapted<_Interf>, amf_allocator<AMFInterfacePtr_TAdapted<_Interf> > >
{
public:
typedef AMFInterfacePtr_T<_Interf>& reference;
typedef std::vector<AMFInterfacePtr_TAdapted<_Interf>, amf_allocator<AMFInterfacePtr_TAdapted<_Interf> > > baseclass;
reference operator[](size_t n)
{
return baseclass::operator[](n);
}
};
template<class _Interf>
class amf_deque<AMFInterfacePtr_T<_Interf> >
: public std::deque<AMFInterfacePtr_TAdapted<_Interf>, amf_allocator<AMFInterfacePtr_TAdapted<_Interf> > >
{};
template<class _Interf>
class amf_list<AMFInterfacePtr_T<_Interf> >
: public std::list<AMFInterfacePtr_TAdapted<_Interf>, amf_allocator<AMFInterfacePtr_TAdapted<_Interf> > >
{};
#if defined(__GNUC__)
// restore gcc warnings
#pragma GCC diagnostic pop
#endif
}
//-------------------------------------------------------------------------------------------------
// string classes
//-------------------------------------------------------------------------------------------------
typedef std::basic_string<char, std::char_traits<char>, amf::amf_allocator<char> > amf_string;
typedef std::basic_string<wchar_t, std::char_traits<wchar_t>, amf::amf_allocator<wchar_t> > amf_wstring;
template <class TAmfString>
std::size_t amf_string_hash(TAmfString const& s) noexcept
{
#if defined(_WIN64) || defined(__x86_64__)
constexpr size_t fnvOffsetBasis = 14695981039346656037ULL;
constexpr size_t fnvPrime = 1099511628211ULL;
#else // defined(_WIN64) || defined(__x86_64__)
constexpr size_t fnvOffsetBasis = 2166136261U;
constexpr size_t fnvPrime = 16777619U;
#endif // defined(_WIN64) || defined(__x86_64__)
const unsigned char* const pStr = reinterpret_cast<const unsigned char*>(s.c_str());
const size_t count = s.size() * sizeof(typename TAmfString::value_type);
size_t value = fnvOffsetBasis;
for (size_t i = 0; i < count; ++i)
{
value ^= static_cast<size_t>(pStr[i]);
value *= fnvPrime;
}
return value;
}
template<>
struct std::hash<amf_wstring>
{
std::size_t operator()(amf_wstring const& s) const noexcept
{
return amf_string_hash<amf_wstring>(s);
}
};
template<>
struct std::hash<amf_string>
{
std::size_t operator()(amf_string const& s) const noexcept
{
return amf_string_hash<amf_string>(s);
}
};
namespace amf
{
//-------------------------------------------------------------------------------------------------
// string conversion
//-------------------------------------------------------------------------------------------------
amf_string AMF_STD_CALL amf_from_unicode_to_utf8(const amf_wstring& str);
amf_wstring AMF_STD_CALL amf_from_utf8_to_unicode(const amf_string& str);
amf_string AMF_STD_CALL amf_from_unicode_to_multibyte(const amf_wstring& str);
amf_wstring AMF_STD_CALL amf_from_multibyte_to_unicode(const amf_string& str);
amf_string AMF_STD_CALL amf_from_string_to_hex_string(const amf_string& str);
amf_string AMF_STD_CALL amf_from_hex_string_to_string(const amf_string& str);
amf_string AMF_STD_CALL amf_string_to_lower(const amf_string& str);
amf_wstring AMF_STD_CALL amf_string_to_lower(const amf_wstring& str);
amf_string AMF_STD_CALL amf_string_to_upper(const amf_string& str);
amf_wstring AMF_STD_CALL amf_string_to_upper(const amf_wstring& str);
amf_string AMF_STD_CALL amf_from_unicode_to_url_utf8(const amf_wstring& data, bool bQuery = false); // converts to UTF8 and replace fobidden symbols
amf_wstring AMF_STD_CALL amf_from_url_utf8_to_unicode(const amf_string& data);
amf_wstring AMF_STD_CALL amf_convert_path_to_os_accepted_path(const amf_wstring& path);
amf_wstring AMF_STD_CALL amf_convert_path_to_url_accepted_path(const amf_wstring& path);
//-------------------------------------------------------------------------------------------------
// string helpers
//-------------------------------------------------------------------------------------------------
amf_wstring AMF_STD_CALL amf_string_format(const wchar_t* format, ...);
amf_string AMF_STD_CALL amf_string_format(const char* format, ...);
amf_wstring AMF_STD_CALL amf_string_formatVA(const wchar_t* format, va_list args);
amf_string AMF_STD_CALL amf_string_formatVA(const char* format, va_list args);
amf_int AMF_STD_CALL amf_string_ci_compare(const amf_wstring& left, const amf_wstring& right);
amf_int AMF_STD_CALL amf_string_ci_compare(const amf_string& left, const amf_string& right);
amf_size AMF_STD_CALL amf_string_ci_find(const amf_wstring& left, const amf_wstring& right, amf_size off = 0);
amf_size AMF_STD_CALL amf_string_ci_rfind(const amf_wstring& left, const amf_wstring& right, amf_size off = amf_wstring::npos);
//-------------------------------------------------------------------------------------------------
} // namespace amf
#if defined(__GNUC__)
// restore gcc warnings
#pragma GCC diagnostic pop
#endif
#endif // AMF_AMFSTL_h

View File

@@ -1,136 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#ifndef AMF_ByteArray_h
#define AMF_ByteArray_h
#pragma once
#include "../include/core/Platform.h"
#define INIT_ARRAY_SIZE 1024
#define ARRAY_MAX_SIZE (1LL << 60LL) // extremely large maximum size
//------------------------------------------------------------------------
class AMFByteArray
{
protected:
amf_uint8 *m_pData;
amf_size m_iSize;
amf_size m_iMaxSize;
public:
AMFByteArray() : m_pData(0), m_iSize(0), m_iMaxSize(0)
{
}
AMFByteArray(const AMFByteArray &other) : m_pData(0), m_iSize(0), m_iMaxSize(0)
{
*this = other;
}
AMFByteArray(amf_size num) : m_pData(0), m_iSize(0), m_iMaxSize(0)
{
SetSize(num);
}
virtual ~AMFByteArray()
{
if (m_pData != 0)
{
delete[] m_pData;
}
}
void SetSize(amf_size num)
{
if (num == m_iSize)
{
return;
}
if (num < m_iSize)
{
memset(m_pData + num, 0, m_iMaxSize - num);
}
else if (num > m_iMaxSize)
{
// This is done to prevent the following error from surfacing
// for the pNewData allocation on some compilers:
// -Werror=alloc-size-larger-than=
amf_size newSize = (num / INIT_ARRAY_SIZE) * INIT_ARRAY_SIZE + INIT_ARRAY_SIZE;
if (newSize > ARRAY_MAX_SIZE)
{
return;
}
m_iMaxSize = newSize;
amf_uint8 *pNewData = new amf_uint8[m_iMaxSize];
memset(pNewData, 0, m_iMaxSize);
if (m_pData != NULL)
{
memcpy(pNewData, m_pData, m_iSize);
delete[] m_pData;
}
m_pData = pNewData;
}
m_iSize = num;
}
void Copy(const AMFByteArray &old)
{
if (m_iMaxSize < old.m_iSize)
{
m_iMaxSize = old.m_iMaxSize;
if (m_pData != NULL)
{
delete[] m_pData;
}
m_pData = new amf_uint8[m_iMaxSize];
memset(m_pData, 0, m_iMaxSize);
}
memcpy(m_pData, old.m_pData, old.m_iSize);
m_iSize = old.m_iSize;
}
amf_uint8 operator[] (amf_size iPos) const
{
return m_pData[iPos];
}
amf_uint8& operator[] (amf_size iPos)
{
return m_pData[iPos];
}
AMFByteArray& operator=(const AMFByteArray &other)
{
SetSize(other.GetSize());
if (GetSize() > 0)
{
memcpy(GetData(), other.GetData(), GetSize());
}
return *this;
}
amf_uint8 *GetData() const { return m_pData; }
amf_size GetSize() const { return m_iSize; }
};
#endif // AMF_ByteArray_h

View File

@@ -1,275 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#include <iostream>
#include <vector>
#include <bitset>
#include <array>
#include <string>
#include <cstring>
#if defined(_WIN32)
#include <intrin.h>
#else
#include <stdint.h>
#endif
class InstructionSet
{
// forward declarations
class InstructionSet_Internal;
public:
// getters
static std::string Vendor(void) { return CPU_Rep.vendor_; }
static std::string Brand(void) { return CPU_Rep.brand_; }
static bool SSE3(void) { return CPU_Rep.f_1_ECX_[0]; }
static bool PCLMULQDQ(void) { return CPU_Rep.f_1_ECX_[1]; }
static bool MONITOR(void) { return CPU_Rep.f_1_ECX_[3]; }
static bool SSSE3(void) { return CPU_Rep.f_1_ECX_[9]; }
static bool FMA(void) { return CPU_Rep.f_1_ECX_[12]; }
static bool CMPXCHG16B(void) { return CPU_Rep.f_1_ECX_[13]; }
static bool SSE41(void) { return CPU_Rep.f_1_ECX_[19]; }
static bool SSE42(void) { return CPU_Rep.f_1_ECX_[20]; }
static bool MOVBE(void) { return CPU_Rep.f_1_ECX_[22]; }
static bool POPCNT(void) { return CPU_Rep.f_1_ECX_[23]; }
static bool AES(void) { return CPU_Rep.f_1_ECX_[25]; }
static bool XSAVE(void) { return CPU_Rep.f_1_ECX_[26]; }
static bool OSXSAVE(void) { return CPU_Rep.f_1_ECX_[27]; }
static bool AVX(void) { return CPU_Rep.f_1_ECX_[28]; }
static bool F16C(void) { return CPU_Rep.f_1_ECX_[29]; }
static bool RDRAND(void) { return CPU_Rep.f_1_ECX_[30]; }
static bool MSR(void) { return CPU_Rep.f_1_EDX_[5]; }
static bool CX8(void) { return CPU_Rep.f_1_EDX_[8]; }
static bool SEP(void) { return CPU_Rep.f_1_EDX_[11]; }
static bool CMOV(void) { return CPU_Rep.f_1_EDX_[15]; }
static bool CLFSH(void) { return CPU_Rep.f_1_EDX_[19]; }
static bool MMX(void) { return CPU_Rep.f_1_EDX_[23]; }
static bool FXSR(void) { return CPU_Rep.f_1_EDX_[24]; }
static bool SSE(void) { return CPU_Rep.f_1_EDX_[25]; }
static bool SSE2(void) { return CPU_Rep.f_1_EDX_[26]; }
static bool FSGSBASE(void) { return CPU_Rep.f_7_EBX_[0]; }
static bool BMI1(void) { return CPU_Rep.f_7_EBX_[3]; }
static bool HLE(void) { return CPU_Rep.isIntel_ && CPU_Rep.f_7_EBX_[4]; }
static bool AVX2(void) { return CPU_Rep.f_7_EBX_[5]; }
static bool BMI2(void) { return CPU_Rep.f_7_EBX_[8]; }
static bool ERMS(void) { return CPU_Rep.f_7_EBX_[9]; }
static bool INVPCID(void) { return CPU_Rep.f_7_EBX_[10]; }
static bool RTM(void) { return CPU_Rep.isIntel_ && CPU_Rep.f_7_EBX_[11]; }
static bool AVX512F(void) { return CPU_Rep.f_7_EBX_[16]; }
static bool RDSEED(void) { return CPU_Rep.f_7_EBX_[18]; }
static bool ADX(void) { return CPU_Rep.f_7_EBX_[19]; }
static bool AVX512PF(void) { return CPU_Rep.f_7_EBX_[26]; }
static bool AVX512ER(void) { return CPU_Rep.f_7_EBX_[27]; }
static bool AVX512CD(void) { return CPU_Rep.f_7_EBX_[28]; }
static bool SHA(void) { return CPU_Rep.f_7_EBX_[29]; }
static bool AVX512BW(void) { return CPU_Rep.f_7_EBX_[30]; }
static bool AVX512VL(void) { return CPU_Rep.f_7_EBX_[31]; }
static bool PREFETCHWT1(void) { return CPU_Rep.f_7_ECX_[0]; }
static bool LAHF(void) { return CPU_Rep.f_81_ECX_[0]; }
static bool LZCNT(void) { return CPU_Rep.isIntel_ && CPU_Rep.f_81_ECX_[5]; }
static bool ABM(void) { return CPU_Rep.isAMD_ && CPU_Rep.f_81_ECX_[5]; }
static bool SSE4a(void) { return CPU_Rep.isAMD_ && CPU_Rep.f_81_ECX_[6]; }
static bool XOP(void) { return CPU_Rep.isAMD_ && CPU_Rep.f_81_ECX_[11]; }
static bool TBM(void) { return CPU_Rep.isAMD_ && CPU_Rep.f_81_ECX_[21]; }
static bool SYSCALL(void) { return CPU_Rep.isIntel_ && CPU_Rep.f_81_EDX_[11]; }
static bool MMXEXT(void) { return CPU_Rep.isAMD_ && CPU_Rep.f_81_EDX_[22]; }
static bool RDTSCP(void) { return CPU_Rep.isIntel_ && CPU_Rep.f_81_EDX_[27]; }
static bool _3DNOWEXT(void) { return CPU_Rep.isAMD_ && CPU_Rep.f_81_EDX_[30]; }
static bool _3DNOW(void) { return CPU_Rep.isAMD_ && CPU_Rep.f_81_EDX_[31]; }
private:
static const InstructionSet_Internal CPU_Rep;
class InstructionSet_Internal
{
protected:
void GetCpuID
(
int32_t registers[4], //out
int32_t functionID,
int32_t subfunctionID = 0
)
{
#ifdef _WIN32
if(!subfunctionID)
{
__cpuid((int *)registers, (int)functionID);
}
else
{
__cpuidex((int *)registers, (int)functionID, subfunctionID);
}
#else
asm volatile
(
"cpuid":
"=a" (registers[0]),
"=b" (registers[1]),
"=c" (registers[2]),
"=d" (registers[3]):
"a" (functionID),
"c" (subfunctionID)
);
#endif
}
public:
InstructionSet_Internal()
: nIds_( 0 ),
nExIds_( 0 ),
isIntel_( false ),
isAMD_( false ),
f_1_ECX_( 0 ),
f_1_EDX_( 0 ),
f_7_EBX_( 0 ),
f_7_ECX_( 0 ),
f_81_ECX_( 0 ),
f_81_EDX_( 0 )
{
//int cpuInfo[4] = {-1};
std::array<int, 4> cpui;
// Calling __cpuid with 0x0 as the function_id argument
// gets the number of the highest valid function ID.
//todo: verify
//__cpuid(cpui.data(), 0);
GetCpuID(cpui.data(), 0);
nIds_ = cpui[0];
for (int i = 0; i <= nIds_; ++i)
{
//todo: verify
//__cpuidex(cpui.data(), i, 0);
GetCpuID(cpui.data(), i, 0);
data_.push_back(cpui);
}
// Capture vendor string
char vendor[0x20];
std::memset(vendor, 0, sizeof(vendor));
*reinterpret_cast<int*>(vendor) = data_[0][1];
*reinterpret_cast<int*>(vendor + 4) = data_[0][3];
*reinterpret_cast<int*>(vendor + 8) = data_[0][2];
vendor_ = vendor;
if (vendor_ == "GenuineIntel")
{
isIntel_ = true;
}
else if (vendor_ == "AuthenticAMD")
{
isAMD_ = true;
}
// load bitset with flags for function 0x00000001
if (nIds_ >= 1)
{
f_1_ECX_ = data_[1][2];
f_1_EDX_ = data_[1][3];
}
// load bitset with flags for function 0x00000007
if (nIds_ >= 7)
{
f_7_EBX_ = data_[7][1];
f_7_ECX_ = data_[7][2];
}
// Calling __cpuid with 0x80000000 as the function_id argument
// gets the number of the highest valid extended ID.
//todo: verify
//__cpuid(cpui.data(), 0x80000000);
GetCpuID(cpui.data(), 0x80000000);
nExIds_ = cpui[0];
char brand[0x40];
memset(brand, 0, sizeof(brand));
for (int i = 0x80000000; i <= nExIds_; ++i)
{
//todo: verify
//__cpuidex(cpui.data(), i, 0);
GetCpuID(cpui.data(), i, 0);
extdata_.push_back(cpui);
}
// load bitset with flags for function 0x80000001
if (nExIds_ >= 0x80000001)
{
f_81_ECX_ = extdata_[1][2];
f_81_EDX_ = extdata_[1][3];
}
// Interpret CPU brand string if reported
if (nExIds_ >= 0x80000004)
{
memcpy(brand, extdata_[2].data(), sizeof(cpui));
memcpy(brand + 16, extdata_[3].data(), sizeof(cpui));
memcpy(brand + 32, extdata_[4].data(), sizeof(cpui));
brand_ = brand;
}
};
virtual ~InstructionSet_Internal()
{
int i = 0;
++i;
}
int nIds_;
int nExIds_;
std::string vendor_;
std::string brand_;
bool isIntel_;
bool isAMD_;
std::bitset<32> f_1_ECX_;
std::bitset<32> f_1_EDX_;
std::bitset<32> f_7_EBX_;
std::bitset<32> f_7_ECX_;
std::bitset<32> f_81_ECX_;
std::bitset<32> f_81_EDX_;
std::vector<std::array<int, 4>> data_;
std::vector<std::array<int, 4>> extdata_;
};
};

View File

@@ -1,71 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#include "CurrentTimeImpl.h"
namespace amf
{
//-------------------------------------------------------------------------------------------------
AMFCurrentTimeImpl::AMFCurrentTimeImpl()
: m_timeOfFirstCall(-1)
{
}
//-------------------------------------------------------------------------------------------------
AMFCurrentTimeImpl::~AMFCurrentTimeImpl()
{
m_timeOfFirstCall = -1;
}
//-------------------------------------------------------------------------------------------------
amf_pts AMF_STD_CALL AMFCurrentTimeImpl::Get()
{
amf::AMFLock lock(&m_sync);
// We want pts time to start at 0 and subsequent
// times to be relative to that
if (m_timeOfFirstCall < 0)
{
m_timeOfFirstCall = amf_high_precision_clock();
return 0;
}
return (amf_high_precision_clock() - m_timeOfFirstCall); // In nanoseconds
}
//-------------------------------------------------------------------------------------------------
void AMF_STD_CALL AMFCurrentTimeImpl::Reset()
{
m_timeOfFirstCall = -1;
}
}

View File

@@ -1,69 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#ifndef AMF_CurrentTimeImpl_h
#define AMF_CurrentTimeImpl_h
#include "../include/core/CurrentTime.h"
#include "InterfaceImpl.h"
#include "Thread.h"
namespace amf
{
class AMFCurrentTimeImpl : public AMFInterfaceImpl<AMFCurrentTime>
{
public:
AMFCurrentTimeImpl();
~AMFCurrentTimeImpl();
AMF_BEGIN_INTERFACE_MAP
AMF_INTERFACE_ENTRY(AMFCurrentTime)
AMF_END_INTERFACE_MAP
virtual amf_pts AMF_STD_CALL Get();
virtual void AMF_STD_CALL Reset();
private:
amf_pts m_timeOfFirstCall;
mutable AMFCriticalSection m_sync;
};
//----------------------------------------------------------------------------------------------
// smart pointer
//----------------------------------------------------------------------------------------------
typedef AMFInterfacePtr_T<AMFCurrentTime> AMFCurrentTimePtr;
//----------------------------------------------------------------------------------------------}
}
#endif // AMF_CurrentTimeImpl_h

View File

@@ -1,109 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
/**
***************************************************************************************************
* @file DataStream.h
* @brief AMFDataStream declaration
***************************************************************************************************
*/
#ifndef AMF_DataStream_h
#define AMF_DataStream_h
#pragma once
#include "../include/core/Interface.h"
namespace amf
{
// currently supports only
// file://
// memory://
// eventually can be extended with:
// rtsp://
// rtmp://
// http://
// etc
//----------------------------------------------------------------------------------------------
enum AMF_STREAM_OPEN
{
AMFSO_READ = 0,
AMFSO_WRITE = 1,
AMFSO_READ_WRITE = 2,
AMFSO_APPEND = 3,
};
//----------------------------------------------------------------------------------------------
enum AMF_FILE_SHARE
{
AMFFS_EXCLUSIVE = 0,
AMFFS_SHARE_READ = 1,
AMFFS_SHARE_WRITE = 2,
AMFFS_SHARE_READ_WRITE = 3,
};
//----------------------------------------------------------------------------------------------
enum AMF_SEEK_ORIGIN
{
AMF_SEEK_BEGIN = 0,
AMF_SEEK_CURRENT = 1,
AMF_SEEK_END = 2,
};
//----------------------------------------------------------------------------------------------
// AMFDataStream interface
//----------------------------------------------------------------------------------------------
class AMF_NO_VTABLE AMFDataStream : public AMFInterface
{
public:
AMF_DECLARE_IID(0xdb08fe70, 0xb743, 0x4c26, 0xb2, 0x77, 0xa5, 0xc8, 0xe8, 0x14, 0xda, 0x4)
// interface
virtual AMF_RESULT AMF_STD_CALL Open(const wchar_t* pFileUrl, AMF_STREAM_OPEN eOpenType, AMF_FILE_SHARE eShareType) = 0;
virtual AMF_RESULT AMF_STD_CALL Close() = 0;
virtual AMF_RESULT AMF_STD_CALL Read(void* pData, amf_size iSize, amf_size* pRead) = 0;
virtual AMF_RESULT AMF_STD_CALL Write(const void* pData, amf_size iSize, amf_size* pWritten) = 0;
virtual AMF_RESULT AMF_STD_CALL Seek(AMF_SEEK_ORIGIN eOrigin, amf_int64 iPosition, amf_int64* pNewPosition) = 0;
virtual AMF_RESULT AMF_STD_CALL GetPosition(amf_int64* pPosition) = 0;
virtual AMF_RESULT AMF_STD_CALL GetSize(amf_int64* pSize) = 0;
virtual bool AMF_STD_CALL IsSeekable() = 0;
static AMF_RESULT AMF_STD_CALL OpenDataStream(const wchar_t* pFileUrl, AMF_STREAM_OPEN eOpenType, AMF_FILE_SHARE eShareType, AMFDataStream** str);
};
//----------------------------------------------------------------------------------------------
// smart pointer
//----------------------------------------------------------------------------------------------
typedef AMFInterfacePtr_T<AMFDataStream> AMFDataStreamPtr;
//----------------------------------------------------------------------------------------------
} //namespace amf
#endif // AMF_DataStream_h

View File

@@ -1,86 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#include "DataStream.h"
#include "DataStreamMemory.h"
#include "DataStreamFile.h"
#include "TraceAdapter.h"
#include <string>
using namespace amf;
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL amf::AMFDataStream::OpenDataStream(const wchar_t* pFileUrl, AMF_STREAM_OPEN eOpenType, AMF_FILE_SHARE eShareType, AMFDataStream** str)
{
AMF_RETURN_IF_FALSE(pFileUrl != NULL, AMF_INVALID_ARG);
AMF_RESULT res = AMF_NOT_SUPPORTED;
std::wstring url(pFileUrl);
std::wstring protocol;
std::wstring path;
std::wstring::size_type found_pos = url.find(L"://", 0);
if(found_pos != std::wstring::npos)
{
protocol = url.substr(0, found_pos);
path = url.substr(found_pos + 3);
}
else
{
protocol = L"file";
path = url;
}
AMFDataStreamPtr ptr = NULL;
if(protocol == L"file")
{
ptr = new AMFDataStreamFileImpl;
res = AMF_OK;
}
if(protocol == L"memory")
{
ptr = new AMFDataStreamMemoryImpl();
res = AMF_OK;
}
if( res == AMF_OK )
{
res = ptr->Open(path.c_str(), eOpenType, eShareType);
if( res != AMF_OK )
{
return res;
}
*str = ptr.Detach();
return AMF_OK;
}
return res;
}
//-------------------------------------------------------------------------------------------------

View File

@@ -1,271 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#include "TraceAdapter.h"
#include "DataStreamFile.h"
#pragma warning(disable: 4996)
#if defined(_WIN32)
#include <io.h>
#endif
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#if defined(_WIN32)
#define amf_close _close
#define amf_read _read
#define amf_write _write
#define amf_seek64 _lseeki64
#elif defined(__linux)// Linux
#include <unistd.h>
#define amf_close close
#define amf_read read
#define amf_write write
#define amf_seek64 lseek64
#elif defined(__APPLE__)
#include <unistd.h>
#define amf_close close
#define amf_read read
#define amf_write write
#define amf_seek64 lseek
#endif
using namespace amf;
#define AMF_FACILITY L"AMFDataStreamFileImpl"
#define AMF_FILE_PROTOCOL L"file"
//-------------------------------------------------------------------------------------------------
AMFDataStreamFileImpl::AMFDataStreamFileImpl()
: m_iFileDescriptor(-1), m_Path()
{}
//-------------------------------------------------------------------------------------------------
AMFDataStreamFileImpl::~AMFDataStreamFileImpl()
{
Close();
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamFileImpl::Close()
{
AMF_RESULT err = AMF_OK;
if(m_iFileDescriptor != -1)
{
const int status = amf_close(m_iFileDescriptor);
if(status != 0)
{
err = AMF_FAIL;
}
m_iFileDescriptor = -1;
}
return err;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamFileImpl::Read(void* pData, amf_size iSize, amf_size* pRead)
{
AMF_RETURN_IF_FALSE(m_iFileDescriptor != -1, AMF_FILE_NOT_OPEN, L"Read() - File not open");
AMF_RESULT err = AMF_OK;
int ready = amf_read(m_iFileDescriptor, pData, (amf_uint)iSize);
if(pRead != NULL)
{
*pRead = ready;
}
if(ready == 0) // eof
{
err = AMF_EOF;
}
else if(ready == -1)
{
err = AMF_FAIL;
}
return err;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamFileImpl::Write(const void* pData, amf_size iSize, amf_size* pWritten)
{
AMF_RETURN_IF_FALSE(m_iFileDescriptor != -1, AMF_FILE_NOT_OPEN, L"Write() - File not Open");
AMF_RESULT err = AMF_OK;
amf_uint32 written = amf_write(m_iFileDescriptor, pData, (amf_uint)iSize);
if(pWritten != NULL)
{
*pWritten = written;
}
if(written != iSize) // check errors
{
err = AMF_FAIL;
}
return err;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamFileImpl::Seek(AMF_SEEK_ORIGIN eOrigin, amf_int64 iPosition, amf_int64* pNewPosition)
{
AMF_RETURN_IF_FALSE(m_iFileDescriptor != -1, AMF_FILE_NOT_OPEN, L"Seek() - File not Open");
int org = 0;
switch(eOrigin)
{
case AMF_SEEK_BEGIN:
org = SEEK_SET;
break;
case AMF_SEEK_CURRENT:
org = SEEK_CUR;
break;
case AMF_SEEK_END:
org = SEEK_END;
break;
}
amf_int64 new_pos = 0;
new_pos = amf_seek64(m_iFileDescriptor, iPosition, org);
if(new_pos == -1L) // check errors
{
return AMF_FAIL;
}
if(pNewPosition != NULL)
{
*pNewPosition = new_pos;
}
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamFileImpl::GetPosition(amf_int64* pPosition)
{
AMF_RETURN_IF_FALSE(pPosition != NULL, AMF_INVALID_POINTER);
AMF_RETURN_IF_FALSE(m_iFileDescriptor != -1, AMF_FILE_NOT_OPEN, L"GetPosition() - File not Open");
*pPosition = amf_seek64(m_iFileDescriptor, 0, SEEK_CUR);
if(*pPosition == -1L)
{
return AMF_FAIL;
}
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamFileImpl::GetSize(amf_int64* pSize)
{
AMF_RETURN_IF_FALSE(pSize != NULL, AMF_INVALID_POINTER);
AMF_RETURN_IF_FALSE(m_iFileDescriptor != -1, AMF_FILE_NOT_OPEN, L"GetSize() - File not open");
amf_int64 cur_pos = amf_seek64(m_iFileDescriptor, 0, SEEK_CUR);
*pSize = amf_seek64(m_iFileDescriptor, 0, SEEK_END);
amf_seek64(m_iFileDescriptor, cur_pos, SEEK_SET);
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
bool AMF_STD_CALL AMFDataStreamFileImpl::IsSeekable()
{
return true;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamFileImpl::Open(const wchar_t* pFilePath, AMF_STREAM_OPEN eOpenType, AMF_FILE_SHARE eShareType)
{
if(m_iFileDescriptor != -1)
{
Close();
}
AMF_RETURN_IF_FALSE(pFilePath != NULL, AMF_INVALID_ARG);
m_Path = pFilePath;
#if defined(_WIN32)
int access = _O_BINARY;
#else
int access = 0;
#endif
switch(eOpenType)
{
case AMFSO_READ:
access |= O_RDONLY;
break;
case AMFSO_WRITE:
access |= O_CREAT | O_TRUNC | O_WRONLY;
break;
case AMFSO_READ_WRITE:
access |= O_CREAT | O_TRUNC | O_RDWR;
break;
case AMFSO_APPEND:
access |= O_CREAT | O_APPEND | O_RDWR;
break;
}
#ifdef _WIN32
int shflag = 0;
switch(eShareType)
{
case AMFFS_EXCLUSIVE:
shflag = _SH_DENYRW;
break;
case AMFFS_SHARE_READ:
shflag = _SH_DENYWR;
break;
case AMFFS_SHARE_WRITE:
shflag = _SH_DENYRD;
break;
case AMFFS_SHARE_READ_WRITE:
shflag = _SH_DENYNO;
break;
}
#endif
#ifdef O_BINARY
access |= O_BINARY;
#endif
#ifdef _WIN32
m_iFileDescriptor = _wsopen(m_Path.c_str(), access, shflag, 0666);
#else
amf_string str = amf_from_unicode_to_utf8(m_Path);
m_iFileDescriptor = open(str.c_str(), access, 0666);
#endif
if(m_iFileDescriptor == -1)
{
return AMF_FAIL;
}
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
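
For orientation, the removed file-stream helper above is driven through a plain open/write/close contract, each step reporting an `AMF_RESULT`. A minimal sketch, assuming the SDK headers that this commit deletes from `externals/` (the `DumpBuffer` name and `dump.bin` path are illustrative):

```cpp
// Sketch only: exercising the removed AMFDataStreamFileImpl (assumes the old SDK headers).
#include "DataStreamFile.h"

using namespace amf;

static AMF_RESULT DumpBuffer(const void* data, amf_size size)
{
    AMFDataStreamFileImpl stream;
    // AMFSO_WRITE creates/truncates the file; AMFFS_EXCLUSIVE only matters on Windows.
    AMF_RESULT res = stream.Open(L"dump.bin", AMFSO_WRITE, AMFFS_EXCLUSIVE);
    if (res != AMF_OK)
    {
        return res;
    }
    amf_size written = 0;
    res = stream.Write(data, size, &written);  // AMF_FAIL if written != size
    stream.Close();                            // the destructor would also close it
    return res;
}
```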


@@ -1,67 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#ifndef AMF_DataStreamFile_h
#define AMF_DataStreamFile_h
#pragma once
#include "DataStream.h"
#include "InterfaceImpl.h"
#include "AMFSTL.h"
#include <string>
namespace amf
{
class AMFDataStreamFileImpl : public AMFInterfaceImpl<AMFDataStream>
{
public:
AMFDataStreamFileImpl();
virtual ~AMFDataStreamFileImpl();
// interface
virtual AMF_RESULT AMF_STD_CALL Close();
virtual AMF_RESULT AMF_STD_CALL Read(void* pData, amf_size iSize, amf_size* pRead);
virtual AMF_RESULT AMF_STD_CALL Write(const void* pData, amf_size iSize, amf_size* pWritten);
virtual AMF_RESULT AMF_STD_CALL Seek(AMF_SEEK_ORIGIN eOrigin, amf_int64 iPosition, amf_int64* pNewPosition);
virtual AMF_RESULT AMF_STD_CALL GetPosition(amf_int64* pPosition);
virtual AMF_RESULT AMF_STD_CALL GetSize(amf_int64* pSize);
virtual bool AMF_STD_CALL IsSeekable();
// local
// always pass full URL just in case
virtual AMF_RESULT AMF_STD_CALL Open(const wchar_t* pFilePath, AMF_STREAM_OPEN eOpenType, AMF_FILE_SHARE eShareType);
protected:
int m_iFileDescriptor;
amf_wstring m_Path;
};
} //namespace amf
#endif // AMF_DataStreamFile_h


@@ -1,175 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#include "Thread.h"
#include "TraceAdapter.h"
#include "DataStreamMemory.h"
using namespace amf;
#define AMF_FACILITY L"AMFDataStreamMemoryImpl"
//-------------------------------------------------------------------------------------------------
AMFDataStreamMemoryImpl::AMFDataStreamMemoryImpl()
: m_pMemory(NULL),
m_uiMemorySize(0),
m_uiAllocatedSize(0),
m_pos(0)
{}
//-------------------------------------------------------------------------------------------------
AMFDataStreamMemoryImpl::~AMFDataStreamMemoryImpl()
{
Close();
}
//-------------------------------------------------------------------------------------------------
// interface
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamMemoryImpl::Close()
{
if(m_pMemory != NULL)
{
amf_virtual_free(m_pMemory);
}
m_pMemory = NULL;
m_uiMemorySize = 0;
m_uiAllocatedSize = 0;
m_pos = 0;
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMFDataStreamMemoryImpl::Realloc(amf_size iSize)
{
if(iSize > m_uiMemorySize)
{
amf_uint8* pNewMemory = (amf_uint8*)amf_virtual_alloc(iSize);
if(pNewMemory == NULL)
{
return AMF_OUT_OF_MEMORY;
}
m_uiAllocatedSize = iSize;
if(m_pMemory != NULL)
{
memcpy(pNewMemory, m_pMemory, m_uiMemorySize);
amf_virtual_free(m_pMemory);
}
m_pMemory = pNewMemory;
}
m_uiMemorySize = iSize;
if(m_pos > m_uiMemorySize)
{
m_pos = m_uiMemorySize;
}
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamMemoryImpl::Read(void* pData, amf_size iSize, amf_size* pRead)
{
AMF_RETURN_IF_FALSE(pData != NULL, AMF_INVALID_POINTER, L"Read() - pData==NULL");
AMF_RETURN_IF_FALSE(m_pMemory != NULL, AMF_NOT_INITIALIZED, L"Read() - Stream is not allocated");
amf_size toRead = AMF_MIN(iSize, m_uiMemorySize - m_pos);
memcpy(pData, m_pMemory + m_pos, toRead);
m_pos += toRead;
if(pRead != NULL)
{
*pRead = toRead;
}
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamMemoryImpl::Write(const void* pData, amf_size iSize, amf_size* pWritten)
{
AMF_RETURN_IF_FALSE(pData != NULL, AMF_INVALID_POINTER, L"Write() - pData==NULL");
AMF_RETURN_IF_FAILED(Realloc(m_pos + iSize), L"Write() - Stream is not allocated");
amf_size toWrite = AMF_MIN(iSize, m_uiMemorySize - m_pos);
memcpy(m_pMemory + m_pos, pData, toWrite);
m_pos += toWrite;
if(pWritten != NULL)
{
*pWritten = toWrite;
}
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamMemoryImpl::Seek(AMF_SEEK_ORIGIN eOrigin, amf_int64 iPosition, amf_int64* pNewPosition)
{
switch(eOrigin)
{
case AMF_SEEK_BEGIN:
m_pos = (amf_size)iPosition;
break;
case AMF_SEEK_CURRENT:
m_pos += (amf_size)iPosition;
break;
case AMF_SEEK_END:
m_pos = m_uiMemorySize - (amf_size)iPosition;
break;
}
if(m_pos > m_uiMemorySize)
{
m_pos = m_uiMemorySize;
}
if(pNewPosition != NULL)
{
*pNewPosition = m_pos;
}
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamMemoryImpl::GetPosition(amf_int64* pPosition)
{
AMF_RETURN_IF_FALSE(pPosition != NULL, AMF_INVALID_POINTER, L"GetPosition() - pPosition==NULL");
*pPosition = m_pos;
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT AMF_STD_CALL AMFDataStreamMemoryImpl::GetSize(amf_int64* pSize)
{
AMF_RETURN_IF_FALSE(pSize != NULL, AMF_INVALID_POINTER, L"GetPosition() - pSize==NULL");
*pSize = m_uiMemorySize;
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
bool AMF_STD_CALL AMFDataStreamMemoryImpl::IsSeekable()
{
return true;
}
//-------------------------------------------------------------------------------------------------
//-------------------------------------------------------------------------------------------------
//-------------------------------------------------------------------------------------------------
//-------------------------------------------------------------------------------------------------
//-------------------------------------------------------------------------------------------------


@@ -1,77 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#ifndef AMF_DataStreamMemory_h
#define AMF_DataStreamMemory_h
#pragma once
#include "DataStream.h"
#include "InterfaceImpl.h"
namespace amf
{
class AMFDataStreamMemoryImpl : public AMFInterfaceImpl<AMFDataStream>
{
public:
AMFDataStreamMemoryImpl();
virtual ~AMFDataStreamMemoryImpl();
// interface
virtual AMF_RESULT AMF_STD_CALL Open(const wchar_t* /*pFileUrl*/, AMF_STREAM_OPEN /*eOpenType*/, AMF_FILE_SHARE /*eShareType*/)
{
//pFileUrl;
//eOpenType;
//eShareType;
return AMF_OK;
}
virtual AMF_RESULT AMF_STD_CALL Close();
virtual AMF_RESULT AMF_STD_CALL Read(void* pData, amf_size iSize, amf_size* pRead);
virtual AMF_RESULT AMF_STD_CALL Write(const void* pData, amf_size iSize, amf_size* pWritten);
virtual AMF_RESULT AMF_STD_CALL Seek(AMF_SEEK_ORIGIN eOrigin, amf_int64 iPosition, amf_int64* pNewPosition);
virtual AMF_RESULT AMF_STD_CALL GetPosition(amf_int64* pPosition);
virtual AMF_RESULT AMF_STD_CALL GetSize(amf_int64* pSize);
virtual bool AMF_STD_CALL IsSeekable();
protected:
AMF_RESULT Realloc(amf_size iSize);
amf_uint8* m_pMemory;
amf_size m_uiMemorySize;
amf_size m_uiAllocatedSize;
amf_size m_pos;
private:
AMFDataStreamMemoryImpl(const AMFDataStreamMemoryImpl&);
AMFDataStreamMemoryImpl& operator=(const AMFDataStreamMemoryImpl&);
};
} //namespace amf
#endif // AMF_DataStreamMemory_h


@@ -1,250 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#include "IOCapsImpl.h"
namespace amf
{
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
AMFIOCapsImpl::SurfaceFormat::SurfaceFormat() :
m_Format(AMF_SURFACE_UNKNOWN),
m_Native(false)
{
}
AMFIOCapsImpl::SurfaceFormat::SurfaceFormat(AMF_SURFACE_FORMAT format, amf_bool native) :
m_Format(format),
m_Native(native)
{
}
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
AMFIOCapsImpl::MemoryType::MemoryType() :
m_Type(AMF_MEMORY_UNKNOWN),
m_Native(false)
{
}
AMFIOCapsImpl::MemoryType::MemoryType(AMF_MEMORY_TYPE type, amf_bool native) :
m_Type(type),
m_Native(native)
{
}
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
AMFIOCapsImpl::AMFIOCapsImpl() :
m_MinWidth(-1),
m_MaxWidth(-1),
m_MinHeight(-1),
m_MaxHeight(-1),
m_VertAlign(-1),
m_InterlacedSupported(false)
{
}
AMFIOCapsImpl::AMFIOCapsImpl(amf_int32 minWidth, amf_int32 maxWidth,
amf_int32 minHeight, amf_int32 maxHeight,
amf_int32 vertAlign, amf_bool interlacedSupport,
amf_int32 numOfNativeFormats, const AMF_SURFACE_FORMAT* nativeFormats,
amf_int32 numOfNonNativeFormats, const AMF_SURFACE_FORMAT* nonNativeFormats,
amf_int32 numOfNativeMemTypes, const AMF_MEMORY_TYPE* nativeMemTypes,
amf_int32 numOfNonNativeMemTypes, const AMF_MEMORY_TYPE* nonNativeMemTypes)
{
m_MinWidth = minWidth;
m_MaxWidth = maxWidth;
m_MinHeight = minHeight;
m_MaxHeight = maxHeight;
m_VertAlign = vertAlign;
m_InterlacedSupported = interlacedSupport;
PopulateSurfaceFormats(numOfNativeFormats, nativeFormats, true);
PopulateSurfaceFormats(numOfNonNativeFormats, nonNativeFormats, false);
PopulateMemoryTypes(numOfNativeMemTypes, nativeMemTypes, true);
PopulateMemoryTypes(numOfNonNativeMemTypes, nonNativeMemTypes, false);
}
void AMFIOCapsImpl::PopulateSurfaceFormats(amf_int32 numOfFormats, const AMF_SURFACE_FORMAT* formats, amf_bool native)
{
if (formats != NULL)
{
for (amf_int32 i = 0; i < numOfFormats; i++)
{
bool found = false;
for(amf_size exists_idx = 0; exists_idx < m_SurfaceFormats.size(); exists_idx++)
{
if(m_SurfaceFormats[exists_idx].GetFormat() == formats[i])
{
found = true;
}
}
if(!found)
{
m_SurfaceFormats.push_back(SurfaceFormat(formats[i], native));
}
}
}
}
void AMFIOCapsImpl::PopulateMemoryTypes(amf_int32 numOfTypes, const AMF_MEMORY_TYPE* memTypes, amf_bool native)
{
if (memTypes != NULL)
{
for (amf_int32 i = 0; i < numOfTypes; i++)
{
bool found = false;
for(amf_size exists_idx = 0; exists_idx < m_MemoryTypes.size(); exists_idx++)
{
if(m_MemoryTypes[exists_idx].GetType() == memTypes[i])
{
found = true;
}
}
if(!found)
{
m_MemoryTypes.push_back(MemoryType(memTypes[i], native));
}
}
}
}
// Get supported resolution ranges in pixels/lines:
void AMF_STD_CALL AMFIOCapsImpl::GetWidthRange(amf_int32* minWidth, amf_int32* maxWidth) const
{
if (minWidth != NULL)
{
*minWidth = m_MinWidth;
}
if (maxWidth != NULL)
{
*maxWidth = m_MaxWidth;
}
}
void AMF_STD_CALL AMFIOCapsImpl::GetHeightRange(amf_int32* minHeight, amf_int32* maxHeight) const
{
if (minHeight != NULL)
{
*minHeight = m_MinHeight;
}
if (maxHeight != NULL)
{
*maxHeight = m_MaxHeight;
}
}
// Get memory alignment in lines:
// Vertical alignment should be a multiple of this number
amf_int32 AMF_STD_CALL AMFIOCapsImpl::GetVertAlign() const
{
return m_VertAlign;
}
// Enumerate supported surface pixel formats:
amf_int32 AMF_STD_CALL AMFIOCapsImpl::GetNumOfFormats() const
{
return (amf_int32)m_SurfaceFormats.size();
}
AMF_RESULT AMF_STD_CALL AMFIOCapsImpl::GetFormatAt(amf_int32 index, AMF_SURFACE_FORMAT* format, bool* native) const
{
if (index >= 0 && index < static_cast<amf_int32>(m_SurfaceFormats.size()))
{
SurfaceFormat curFormat(m_SurfaceFormats.at(index));
if (format != NULL)
{
*format = curFormat.GetFormat();
}
if (native != NULL)
{
*native = curFormat.IsNative();
}
return AMF_OK;
}
else
{
return AMF_INVALID_ARG;
}
}
// Enumerate supported surface formats:
amf_int32 AMF_STD_CALL AMFIOCapsImpl::GetNumOfMemoryTypes() const
{
return (amf_int32)m_MemoryTypes.size();
}
AMF_RESULT AMF_STD_CALL AMFIOCapsImpl::GetMemoryTypeAt(amf_int32 index, AMF_MEMORY_TYPE* memType, bool* native) const
{
if (index >= 0 && index < static_cast<amf_int32>(m_MemoryTypes.size()))
{
MemoryType curType(m_MemoryTypes.at(index));
if (memType != NULL)
{
*memType = curType.GetType();
}
if (native != NULL)
{
*native = curType.IsNative();
}
return AMF_OK;
}
else
{
return AMF_INVALID_ARG;
}
}
// interlaced support:
amf_bool AMF_STD_CALL AMFIOCapsImpl::IsInterlacedSupported() const
{
return m_InterlacedSupported;
}
void AMFIOCapsImpl::SetResolution(amf_int32 minWidth, amf_int32 maxWidth, amf_int32 minHeight, amf_int32 maxHeight)
{
m_MinWidth = minWidth;
m_MaxWidth = maxWidth;
m_MinHeight = minHeight;
m_MaxHeight = maxHeight;
}
void AMFIOCapsImpl::SetVertAlign(amf_int32 vertAlign)
{
m_VertAlign = vertAlign;
}
void AMFIOCapsImpl::SetInterlacedSupport(amf_bool interlaced)
{
m_InterlacedSupported = interlaced;
}
}
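
The caps object above is purely an enumeration surface; consumers walk the resolution ranges and format lists through the `AMFIOCaps` interface it implements. A hedged sketch of that walk, assuming the AMF component headers removed by this commit (`PrintCaps` is an illustrative name):

```cpp
// Sketch only: enumerating an AMFIOCaps object such as the removed AMFIOCapsImpl.
#include <cstdio>

using namespace amf;

static void PrintCaps(AMFIOCaps* caps)
{
    amf_int32 minW = 0, maxW = 0, minH = 0, maxH = 0;
    caps->GetWidthRange(&minW, &maxW);
    caps->GetHeightRange(&minH, &maxH);
    std::printf("resolution %dx%d .. %dx%d, vertical align %d, interlaced %d\n",
                (int)minW, (int)minH, (int)maxW, (int)maxH,
                (int)caps->GetVertAlign(), (int)caps->IsInterlacedSupported());

    for (amf_int32 i = 0; i < caps->GetNumOfFormats(); i++)
    {
        AMF_SURFACE_FORMAT format = AMF_SURFACE_UNKNOWN;
        amf_bool native = false;
        if (caps->GetFormatAt(i, &format, &native) == AMF_OK)
        {
            std::printf("  surface format %d%s\n", (int)format, native ? " (native)" : "");
        }
    }
}
```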


@@ -1,132 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#ifndef AMF_IOCapsImpl_h
#define AMF_IOCapsImpl_h
#pragma once
#include "InterfaceImpl.h"
#include "../include/components/ComponentCaps.h"
#include <vector>
namespace amf
{
class AMFIOCapsImpl : public AMFInterfaceImpl<AMFIOCaps>
{
protected:
class SurfaceFormat
{
public:
typedef std::vector<SurfaceFormat> Collection;
public:
SurfaceFormat();
SurfaceFormat(AMF_SURFACE_FORMAT format, amf_bool native);
inline AMF_SURFACE_FORMAT GetFormat() const throw() { return m_Format; }
inline amf_bool IsNative() const throw() { return m_Native; }
private:
AMF_SURFACE_FORMAT m_Format;
amf_bool m_Native;
};
class MemoryType
{
public:
typedef std::vector<MemoryType> Collection;
public:
MemoryType();
MemoryType(AMF_MEMORY_TYPE type, amf_bool native);
inline AMF_MEMORY_TYPE GetType() const throw() { return m_Type; }
inline amf_bool IsNative() const throw() { return m_Native; }
private:
AMF_MEMORY_TYPE m_Type;
amf_bool m_Native;
};
struct Resolution
{
amf_int32 m_Width;
amf_int32 m_Height;
};
protected:
AMFIOCapsImpl();
AMFIOCapsImpl(amf_int32 minWidth, amf_int32 maxWidth,
amf_int32 minHeight, amf_int32 maxHeight,
amf_int32 vertAlign, amf_bool interlacedSupport,
amf_int32 numOfNativeFormats, const AMF_SURFACE_FORMAT* nativeFormats,
amf_int32 numOfNonNativeFormats, const AMF_SURFACE_FORMAT* nonNativeFormats,
amf_int32 numOfNativeMemTypes, const AMF_MEMORY_TYPE* nativeMemTypes,
amf_int32 numOfNonNativeMemTypes, const AMF_MEMORY_TYPE* nonNativeMemTypes);
public:
// Get supported resolution ranges in pixels/lines:
virtual void AMF_STD_CALL GetWidthRange(amf_int32* minWidth, amf_int32* maxWidth) const;
virtual void AMF_STD_CALL GetHeightRange(amf_int32* minHeight, amf_int32* maxHeight) const;
// Get memory alignment in lines:
// Vertical alignment should be a multiple of this number
virtual amf_int32 AMF_STD_CALL GetVertAlign() const;
// Enumerate supported surface pixel formats:
virtual amf_int32 AMF_STD_CALL GetNumOfFormats() const;
virtual AMF_RESULT AMF_STD_CALL GetFormatAt(amf_int32 index, AMF_SURFACE_FORMAT* format, amf_bool* native) const;
// Enumerate supported surface formats:
virtual amf_int32 AMF_STD_CALL GetNumOfMemoryTypes() const;
virtual AMF_RESULT AMF_STD_CALL GetMemoryTypeAt(amf_int32 index, AMF_MEMORY_TYPE* memType, amf_bool* native) const;
// interlaced support:
virtual amf_bool AMF_STD_CALL IsInterlacedSupported() const;
protected:
void SetResolution(amf_int32 minWidth, amf_int32 maxWidth, amf_int32 minHeight, amf_int32 maxHeight);
void SetVertAlign(amf_int32 alignment);
void SetInterlacedSupport(amf_bool interlaced);
void PopulateSurfaceFormats(amf_int32 numOfFormats, const AMF_SURFACE_FORMAT* formats, amf_bool native);
void PopulateMemoryTypes(amf_int32 numOfTypes, const AMF_MEMORY_TYPE* memTypes, amf_bool native);
protected:
amf_int32 m_MinWidth;
amf_int32 m_MaxWidth;
amf_int32 m_MinHeight;
amf_int32 m_MaxHeight;
amf_int32 m_VertAlign;
amf_bool m_InterlacedSupported;
SurfaceFormat::Collection m_SurfaceFormats;
MemoryType::Collection m_MemoryTypes;
};
}
#endif // AMF_IOCapsImpl_h


@@ -1,214 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#ifndef AMF_InterfaceImpl_h
#define AMF_InterfaceImpl_h
#pragma once
#include "../include/core/Interface.h"
#include "Thread.h"
#pragma warning(disable : 4511)
namespace amf
{
#define AMF_BEGIN_INTERFACE_MAP \
virtual AMF_RESULT AMF_STD_CALL QueryInterface(const amf::AMFGuid & interfaceID, void** ppInterface) \
{ \
AMF_RESULT err = AMF_NO_INTERFACE; \
#define AMF_INTERFACE_ENTRY(T) \
if(AMFCompareGUIDs(interfaceID, T::IID())) \
{ \
*ppInterface = (void*)static_cast<T*>(this); \
this->Acquire(); \
err = AMF_OK; \
} \
else \
#define AMF_INTERFACE_ENTRY_THIS(T, _TI) \
if(AMFCompareGUIDs(interfaceID, T::IID())) \
{ \
*ppInterface = (void*)static_cast<T*>(static_cast<_TI*>(this)); \
this->Acquire(); \
err = AMF_OK; \
} \
else \
#define AMF_INTERFACE_MULTI_ENTRY(T) \
if(AMFCompareGUIDs(interfaceID, T::IID())) \
{ \
*ppInterface = (void*)static_cast<T*>(this); \
AcquireInternal(); \
err = AMF_OK; \
} \
else \
#define AMF_INTERFACE_CHAIN_ENTRY(T) \
if(static_cast<T&>(*this).T::QueryInterface(interfaceID, ppInterface) == AMF_OK) \
{err = AMF_OK;} \
else \
// good as an example, but we should not use the aggregate pattern without a strong reason - it is very hard to debug
#define AMF_INTERFACE_AGREGATED_ENTRY(T, _Ptr) \
if(AMFCompareGUIDs(interfaceID, T::IID())) \
{ \
T* ptr = static_cast<T*>(_Ptr); \
*ppInterface = (void*)ptr; \
ptr->Acquire(); \
err = AMF_OK; \
} \
else \
#define AMF_INTERFACE_CHAIN_AGREGATED_ENTRY(T, _Ptr) \
if(err = static_cast<T*>(_Ptr)->QueryInterface(interfaceID, ppInterface)) { \
} \
else \
#define AMF_END_INTERFACE_MAP \
{} \
return err; \
} \
//---------------------------------------------------------------
class AMFInterfaceBase
{
protected:
amf_long m_refCount;
virtual ~AMFInterfaceBase()
#if __GNUC__ == 11 //WORKAROUND for gcc-11 bug
__attribute__ ((noinline))
#endif
{}
public:
AMFInterfaceBase() : m_refCount(0)
{}
virtual amf_long AMF_STD_CALL AcquireInternal()
{
amf_long newVal = amf_atomic_inc(&m_refCount);
return newVal;
}
virtual amf_long AMF_STD_CALL ReleaseInternal()
{
amf_long newVal = amf_atomic_dec(&m_refCount);
if(newVal == 0)
{
delete this;
}
return newVal;
}
virtual amf_long AMF_STD_CALL RefCountInternal()
{
return m_refCount;
}
};
//---------------------------------------------------------------
template<class _Base , typename _Param1 = int, typename _Param2 = int, typename _Param3 = int>
class AMFInterfaceImpl : public _Base, public AMFInterfaceBase
{
protected:
virtual ~AMFInterfaceImpl()
{}
public:
AMFInterfaceImpl(_Param1 param1, _Param2 param2, _Param3 param3) : _Base(param1, param2, param3)
{}
AMFInterfaceImpl(_Param1 param1, _Param2 param2) : _Base(param1, param2)
{}
AMFInterfaceImpl(_Param1 param1) : _Base(param1)
{}
AMFInterfaceImpl()
{}
virtual amf_long AMF_STD_CALL Acquire()
{
return AMFInterfaceBase::AcquireInternal();
}
virtual amf_long AMF_STD_CALL Release()
{
return AMFInterfaceBase::ReleaseInternal();
}
virtual amf_long AMF_STD_CALL RefCount()
{
return AMFInterfaceBase::RefCountInternal();
}
AMF_BEGIN_INTERFACE_MAP
AMF_INTERFACE_ENTRY(AMFInterface)
AMF_INTERFACE_ENTRY(_Base)
AMF_END_INTERFACE_MAP
};
//---------------------------------------------------------------
template<class _Base, class _BaseInterface, typename _Param1 = int, typename _Param2 = int, typename _Param3 = int, typename _Param4 = int, typename _Param5 = int, typename _Param6 = int>
class AMFInterfaceMultiImpl : public _Base
{
protected:
virtual ~AMFInterfaceMultiImpl()
{}
public:
AMFInterfaceMultiImpl(_Param1 param1, _Param2 param2, _Param3 param3, _Param4 param4, _Param5 param5, _Param6 param6) : _Base(param1, param2, param3, param4, param5, param6)
{}
AMFInterfaceMultiImpl(_Param1 param1, _Param2 param2, _Param3 param3, _Param4 param4, _Param5 param5) : _Base(param1, param2, param3, param4, param5)
{}
AMFInterfaceMultiImpl(_Param1 param1, _Param2 param2, _Param3 param3, _Param4 param4) : _Base(param1, param2, param3, param4)
{}
AMFInterfaceMultiImpl(_Param1 param1, _Param2 param2, _Param3 param3) : _Base(param1, param2, param3)
{}
AMFInterfaceMultiImpl(_Param1 param1, _Param2 param2) : _Base(param1, param2)
{}
AMFInterfaceMultiImpl(_Param1 param1) : _Base(param1)
{}
AMFInterfaceMultiImpl()
{}
virtual amf_long AMF_STD_CALL Acquire()
{
return AMFInterfaceBase::AcquireInternal();
}
virtual amf_long AMF_STD_CALL Release()
{
return AMFInterfaceBase::ReleaseInternal();
}
virtual amf_long AMF_STD_CALL RefCount()
{
return AMFInterfaceBase::RefCountInternal();
}
AMF_BEGIN_INTERFACE_MAP
AMF_INTERFACE_ENTRY_THIS(AMFInterface, _BaseInterface)
AMF_INTERFACE_CHAIN_ENTRY(_Base)
AMF_END_INTERFACE_MAP
};
} // namespace amf
#endif // AMF_InterfaceImpl_h


@@ -1,318 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#pragma once
#include "public/include/core/Interface.h"
#include "public/include/core/Variant.h"
#include <string>
#include <stdint.h>
namespace amf
{
class JSONParser : public amf::AMFInterface
{
public:
//-----------------------------------------------------------------------------------------
enum Result
{
OK,
MISSING_QUOTE,
MISSING_BRACE,
MISSING_BRACKET,
MISSING_DELIMITER,
MISSING_VALUE,
UNEXPECTED_END,
DUPLICATE_NAME,
INVALID_ARG,
INVALID_VALUE
};
//-----------------------------------------------------------------------------------------
typedef amf::AMFInterfacePtr_T<JSONParser> Ptr;
AMF_DECLARE_IID(0x14aefb78, 0x80af, 0x4ee1, 0x82, 0x9f, 0xa2, 0xfc, 0xc7, 0xae, 0xab, 0x33)
//-----------------------------------------------------------------------------------------
class Error
{
public:
Error(JSONParser::Result error) :
m_Ofs(0),
m_Error(error)
{
}
Error(size_t ofs, JSONParser::Result error) :
m_Ofs(ofs),
m_Error(error)
{
}
inline size_t GetOffset() const { return m_Ofs; }
inline JSONParser::Result GetResult() const { return m_Error; }
private:
size_t m_Ofs;
JSONParser::Result m_Error;
};
//-----------------------------------------------------------------------------------------
struct OutputFormatDesc
{
bool bHumanReadable;
bool bNewLineBeforeBrace;
char cOffsetWith;
uint8_t nOffsetSize;
};
//-----------------------------------------------------------------------------------------
class Element : public amf::AMFInterface
{
public:
typedef amf::AMFInterfacePtr_T<Element> Ptr;
AMF_DECLARE_IID(0xd2d71993, 0xbbcb, 0x420f, 0xbc, 0xdd, 0xd8, 0xd6, 0xb6, 0x2e, 0x46, 0x5e)
virtual Error Parse(const std::string& str, size_t start, size_t end) = 0;
virtual std::string Stringify() const = 0;
virtual std::string StringifyFormatted(const OutputFormatDesc& format, int indent) const = 0;
};
//-----------------------------------------------------------------------------------------
class Value : public Element
{
public:
typedef amf::AMFInterfacePtr_T<Value> Ptr;
AMF_DECLARE_IID(0xba0e44d4, 0xa487, 0x4d64, 0xa4, 0x94, 0x93, 0x9b, 0xfd, 0x76, 0x72, 0x32)
virtual void SetValue(const std::string& val) = 0;
virtual void SetValueAsInt32(int32_t val) = 0;
virtual void SetValueAsUInt32(uint32_t val) = 0;
virtual void SetValueAsInt64(int64_t val) = 0;
virtual void SetValueAsUInt64(uint64_t val) = 0;
virtual void SetValueAsDouble(double val) = 0;
virtual void SetValueAsFloat(float val) = 0;
virtual void SetValueAsBool(bool val) = 0;
virtual void SetValueAsTime(time_t date, bool utc) = 0;
virtual void SetToNull() = 0;
virtual const std::string& GetValue() const = 0;
virtual int32_t GetValueAsInt32() const = 0;
virtual uint32_t GetValueAsUInt32() const = 0;
virtual int64_t GetValueAsInt64() const = 0;
virtual uint64_t GetValueAsUInt64() const = 0;
virtual double GetValueAsDouble() const = 0;
virtual float GetValueAsFloat() const = 0;
virtual bool GetValueAsBool() const = 0;
virtual time_t GetValueAsTime() const = 0;
virtual bool IsNull() const = 0;
};
//-----------------------------------------------------------------------------------------
class Node : public Element
{
public:
typedef amf::AMFInterfacePtr_T<Node> Ptr;
AMF_DECLARE_IID(0x6623d6b8, 0x533d, 0x4824, 0x9d, 0x3b, 0x45, 0x1a, 0xa8, 0xc3, 0x7b, 0x5d)
virtual size_t GetElementCount() const = 0;
virtual JSONParser::Element* GetElementByName(const std::string& name) const = 0;
virtual JSONParser::Result AddElement(const std::string& name, Element* element) = 0;
virtual JSONParser::Element* GetElementAt(size_t idx, std::string& name) const = 0;
};
//-----------------------------------------------------------------------------------------
class Array : public Element
{
public:
typedef amf::AMFInterfacePtr_T<Array> Ptr;
AMF_DECLARE_IID(0x8c066a6d, 0xb377, 0x44e8, 0x8c, 0xf5, 0xf8, 0xbf, 0x88, 0x85, 0xbb, 0xe9)
virtual size_t GetElementCount() const = 0;
virtual JSONParser::Element* GetElementAt(size_t idx) const = 0;
virtual void AddElement(Element* element) = 0;
};
//-----------------------------------------------------------------------------------------
virtual Result Parse(const std::string& str, Node** root) = 0; // Parse a JSON string into a tree of DOM elements
virtual std::string Stringify(const Node* root) const = 0; // Convert a DOM to a JSON string
virtual std::string StringifyFormatted(const Node* root, const OutputFormatDesc& format, int indent = 0) const = 0;
virtual Result CreateNode(Node** node) const = 0;
virtual Result CreateValue(Value** value) const = 0;
virtual Result CreateArray(Array** array) const = 0;
virtual size_t GetLastErrorOffset() const = 0; // Returns the offset of the last syntax error (same as what is passed in the exception if thrown)
};
extern "C"
{
// Helpers
#define TAG_JSON_VALUE "Val"
void SetBoolValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, bool val);
void CreateBoolValue(amf::JSONParser* parser, amf::JSONParser::Value** node, bool val);
bool GetBoolValue(const amf::JSONParser::Node* root, const char* name, bool& val);
bool GetBoolFromJSON(const amf::JSONParser::Value* element, bool& val);
void SetDoubleValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, double val);
void CreateDoubleValue(amf::JSONParser* parser, amf::JSONParser::Value** node, double val);
bool GetDoubleValue(const amf::JSONParser::Node* root, const char* name, double& val);
bool GetDoubleFromJSON(const amf::JSONParser::Value* element, double& val);
void SetFloatValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, float val);
void CreateFloatValue(amf::JSONParser* parser, amf::JSONParser::Value** node, float val);
bool GetFloatValue(const amf::JSONParser::Node* root, const char* name, float& val);
bool GetFloatFromJSON(const amf::JSONParser::Value* element, float& val);
void SetInt64Value(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, int64_t val);
void CreateInt64Value(amf::JSONParser* parser, amf::JSONParser::Value** node, const int64_t val);
bool GetInt64Value(const amf::JSONParser::Node* root, const char* name, int64_t& val);
bool GetInt64FromJSON(const amf::JSONParser::Value* element, int64_t& val);
void SetUInt64Value(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, uint64_t val);
bool GetUInt64Value(const amf::JSONParser::Node* root, const char* name, uint64_t& val);
bool GetUInt64FromJSON(const amf::JSONParser::Value* element, uint64_t& val);
void SetInt32Value(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, int32_t val);
bool GetInt32Value(const amf::JSONParser::Node* root, const char* name, int32_t& val);
bool GetInt32FromJSON(const amf::JSONParser::Value* element, int32_t& val);
void SetUInt32Value(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, uint32_t val);
bool GetUInt32Value(const amf::JSONParser::Node* root, const char* name, uint32_t& val);
bool GetUInt32FromJSON(const amf::JSONParser::Value* element, uint32_t& val);
void SetUInt32Array(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const uint32_t* val, size_t size);
void CreateUInt32Array(amf::JSONParser* parser, amf::JSONParser::Array** array, const uint32_t* val, size_t size);
bool GetUInt32Array(const amf::JSONParser::Node* root, const char* name, uint32_t* val, size_t& size);
bool GetUInt32ArrayFromJSON(const amf::JSONParser::Array* element, uint32_t* val, size_t& size);
void SetInt32Array(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const int32_t* val, size_t size);
void CreateInt32Array(amf::JSONParser* parser, amf::JSONParser::Array** array, const int32_t* val, size_t size);
bool GetInt32Array(const amf::JSONParser::Node* root, const char* name, int32_t* val, size_t& size);
bool GetInt32ArrayFromJSON(const amf::JSONParser::Array* element, int32_t* val, size_t& size);
void SetInt64Array(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const int64_t* val, size_t size);
bool GetInt64Array(const amf::JSONParser::Node* root, const char* name, int64_t* val, size_t& size);
bool GetInt64ArrayFromJSON(const amf::JSONParser::Array* element, int64_t* val, size_t& size);
void SetFloatArray(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const float* val, size_t size);
void CreateFloatArray(amf::JSONParser* parser, amf::JSONParser::Array** array, const float* val, size_t size);
bool GetFloatArray(const amf::JSONParser::Node* root, const char* name, float* val, size_t& size);
bool GetFloatArrayFromJSON(const amf::JSONParser::Array* element, float* val, size_t& size);
void SetDoubleArray(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const double* val, size_t size);
bool GetDoubleArray(const amf::JSONParser::Node* root, const char* name, double* val, size_t& size);
bool GetDoubleArrayFromJSON(const amf::JSONParser::Array* element, double* val, size_t& size);
void SetSizeValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const AMFSize& val);
void CreateSizeValue(amf::JSONParser* parser, amf::JSONParser::Array** array, const AMFSize& val);
bool GetSizeValue(const amf::JSONParser::Node* root, const char* name, AMFSize& val);
bool GetSizeFromJSON(const amf::JSONParser::Element* element, AMFSize& val);
void SetRectValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const AMFRect& val);
void CreateRectValue(amf::JSONParser* parser, amf::JSONParser::Array** array, const AMFRect& val);
bool GetRectValue(const amf::JSONParser::Node* root, const char* name, AMFRect& val);
bool GetRectFromJSON(const amf::JSONParser::Element* element, AMFRect& val);
void SetPointValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const AMFPoint& val);
void CreatePointValue(amf::JSONParser* parser, amf::JSONParser::Array** array, const AMFPoint& val);
bool GetPointValue(const amf::JSONParser::Node* root, const char* name, AMFPoint& val);
bool GetPointFromJSON(const amf::JSONParser::Element* element, AMFPoint& val);
void SetRateValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const AMFRate& val);
void CreateRateValue(amf::JSONParser* parser, amf::JSONParser::Array** array, const AMFRate& val);
bool GetRateValue(const amf::JSONParser::Node* root, const char* name, AMFRate& val);
bool GetRateFromJSON(const amf::JSONParser::Element* element, AMFRate& val);
void SetRatioValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const AMFRatio& val);
void CreateRatioValue(amf::JSONParser* parser, amf::JSONParser::Array** array, const AMFRatio& val);
bool GetRatioValue(const amf::JSONParser::Node* root, const char* name, AMFRatio& val);
bool GetRatioFromJSON(const amf::JSONParser::Element* element, AMFRatio& val);
void SetColorValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const AMFColor& val);
void CreateColorValue(amf::JSONParser* parser, amf::JSONParser::Array** array, const AMFColor& val);
bool GetColorValue(const amf::JSONParser::Node* root, const char* name, AMFColor& val);
bool GetColorFromJSON(const amf::JSONParser::Element* element, AMFColor& val);
void SetFloatSizeValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const AMFFloatSize& val);
void CreateFloatSizeValue(amf::JSONParser* parser, amf::JSONParser::Array** array, const AMFFloatSize& val);
bool GetFloatSizeValue(const amf::JSONParser::Node* root, const char* name, AMFFloatSize& val);
bool GetFloatSizeFromJSON(const amf::JSONParser::Element* element, AMFFloatSize& val);
void SetFloatPoint2DValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const AMFFloatPoint2D& val);
void CreateFloatPoint2DValue(amf::JSONParser* parser, amf::JSONParser::Array** array, const AMFFloatPoint2D& val);
bool GetFloatPoint2DValue(const amf::JSONParser::Node* root, const char* name, AMFFloatPoint2D& val);
bool GetFloatPoint2DFromJSON(const amf::JSONParser::Element* element, AMFFloatPoint2D& val);
void SetFloatPoint3DValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const AMFFloatPoint3D& val);
void CreateFloatPoint3DValue(amf::JSONParser* parser, amf::JSONParser::Array** array, const AMFFloatPoint3D& val);
bool GetFloatPoint3DValue(const amf::JSONParser::Node* root, const char* name, AMFFloatPoint3D& val);
bool GetFloatPoint3DFromJSON(const amf::JSONParser::Element* element, AMFFloatPoint3D& val);
void SetFloatVector4DValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const AMFFloatVector4D& val);
void CreateFloatVector4DValue(amf::JSONParser* parser, amf::JSONParser::Array** array, const AMFFloatVector4D& val);
bool GetFloatVector4DValue(const amf::JSONParser::Node* root, const char* name, AMFFloatVector4D& val);
bool GetFloatVector4DFromJSON(const amf::JSONParser::Element* element, AMFFloatVector4D& val);
void SetStringValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const std::string& val);
void CreateStringValue(amf::JSONParser* parser, amf::JSONParser::Value** node, const std::string& val);
bool GetStringValue(const amf::JSONParser::Node* root, const char* name, std::string& val);
bool GetStringFromJSON(const amf::JSONParser::Value* element, std::string& val);
void SetInterfaceValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, /*const*/ AMFInterface* pVal);
void CreateInterfaceValue(amf::JSONParser* parser, amf::JSONParser::Node** node, /*const*/ AMFInterface* pval);
bool GetInterfaceValue(const amf::JSONParser::Node* root, const char* name, AMFInterface* ppVal);
bool GetInterfaceFromJSON(const amf::JSONParser::Element* element, AMFInterface* ppVal);
void SetVariantValue(amf::JSONParser* parser, amf::JSONParser::Node* root, const char* name, const amf::AMFVariant& value);
void SetVariantToJSON(amf::JSONParser* parser, amf::JSONParser::Node** node, const amf::AMFVariant& value);
bool GetVariantValue(const amf::JSONParser::Node* root, const char* name, amf::AMFVariant& val);
bool GetVariantFromJSON(const amf::JSONParser::Node* element, amf::AMFVariant& val);
// variant value only; variant type assumed to be pre-set
void CreateVariantValue(amf::JSONParser* parser, amf::JSONParser::Element** el, const amf::AMFVariant& value);
bool GetVariantValueFromJSON(const amf::JSONParser::Element* element, amf::AMFVariant& val);
}
class AMFInterfaceJSONSerializable : public amf::AMFInterface
{
public:
// {EC40A26C-1345-4281-9B6C-362DDD6E05B5}
AMF_DECLARE_IID(0xec40a26c, 0x1345, 0x4281, 0x9b, 0x6c, 0x36, 0x2d, 0xdd, 0x6e, 0x5, 0xb5)
//
virtual AMF_RESULT AMF_STD_CALL ToJson(amf::JSONParser* parser, amf::JSONParser::Node* node) const = 0;
//
virtual AMF_RESULT AMF_STD_CALL FromJson(const amf::JSONParser::Node* node) = 0;
};
typedef AMFInterfacePtr_T<AMFInterfaceJSONSerializable> AMFInterfaceJSONSerializablePtr;
}
extern "C"
{
AMF_RESULT AMF_CDECL_CALL CreateJSONParser(amf::JSONParser** parser);
#define AMF_JSON_PARSER_FACTORY "CreateJSONParser"
}
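
The interface above pairs a DOM-style parser with C helpers for pulling typed values out of a node. A rough usage sketch, assuming the `CreateJSONParser` factory declared above; the `"Codec"`/`"Bitrate"` keys are illustrative and reference counting is elided:

```cpp
// Sketch only: parsing a JSON string with the removed JSONParser interface.
#include <cstdint>
#include <string>

void ParseSettings(const std::string& text)
{
    amf::JSONParser* parser = nullptr;
    if (CreateJSONParser(&parser) != AMF_OK || parser == nullptr)
    {
        return;
    }
    amf::JSONParser::Node* root = nullptr;
    if (parser->Parse(text, &root) == amf::JSONParser::OK && root != nullptr)
    {
        std::string codec;
        int64_t bitrate = 0;
        GetStringValue(root, "Codec", codec);    // helpers from the extern "C" block above
        GetInt64Value(root, "Bitrate", bitrate);
    }
    // Acquire/Release bookkeeping (see InterfaceImpl.h) is omitted in this sketch.
}
```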

File diff suppressed because it is too large


@@ -1,185 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#pragma once
#include "Json.h"
#include "InterfaceImpl.h"
#include <map>
#include <ctime>
namespace amf
{
//-----------------------------------------------------------------------------------------
class JSONParserImpl :
public AMFInterfaceImpl<JSONParser>
{
public:
//-----------------------------------------------------------------------------------------
class ElementHelper
{
protected:
ElementHelper();
Error CreateElement(const std::string& str, size_t start, size_t& valueStart, size_t& valueEnd, JSONParser::Element** val);
size_t FindClosure(const std::string& str, char opener, char closer, size_t start);
void InsertTabs(std::string& target, int count, const OutputFormatDesc& format) const;
protected:
};
//-----------------------------------------------------------------------------------------
class ValueImpl :
public AMFInterfaceImpl<JSONParser::Value>,
public ElementHelper
{
public:
ValueImpl();
AMF_BEGIN_INTERFACE_MAP
AMF_INTERFACE_ENTRY(JSONParser::Element)
AMF_INTERFACE_ENTRY(JSONParser::Value)
AMF_END_INTERFACE_MAP
virtual JSONParser::Error Parse(const std::string& str, size_t start, size_t end);
virtual std::string Stringify() const;
virtual std::string StringifyFormatted(const OutputFormatDesc& format, int indent) const;
virtual void SetValue(const std::string& val);
virtual void SetValueAsInt32(int32_t val);
virtual void SetValueAsUInt32(uint32_t val);
virtual void SetValueAsInt64(int64_t val);
virtual void SetValueAsUInt64(uint64_t val);
virtual void SetValueAsDouble(double val);
virtual void SetValueAsFloat(float val);
virtual void SetValueAsBool(bool val);
virtual void SetValueAsTime(time_t date, bool utc);
virtual void SetToNull();
virtual const std::string& GetValue() const;
virtual int32_t GetValueAsInt32() const;
virtual uint32_t GetValueAsUInt32() const;
virtual int64_t GetValueAsInt64() const;
virtual uint64_t GetValueAsUInt64() const;
virtual double GetValueAsDouble() const;
virtual float GetValueAsFloat() const;
virtual bool GetValueAsBool() const;
virtual time_t GetValueAsTime() const;
virtual bool IsNull() const;
private:
enum VALUE_TYPE
{
VT_Unknown = 0,
VT_Null = 1,
VT_Bool = 2,
VT_String = 3,
VT_Numeric = 4,
};
VALUE_TYPE m_eType;
std::string m_Value;
};
//-----------------------------------------------------------------------------------------
class NodeImpl :
public AMFInterfaceImpl<JSONParser::Node>,
public ElementHelper
{
public:
typedef std::map<std::string, Element::Ptr> ElementMap;
AMF_BEGIN_INTERFACE_MAP
AMF_INTERFACE_ENTRY(JSONParser::Element)
AMF_INTERFACE_ENTRY(JSONParser::Node)
AMF_END_INTERFACE_MAP
NodeImpl();
virtual JSONParser::Error Parse(const std::string& str, size_t start, size_t end);
virtual std::string Stringify() const;
virtual std::string StringifyFormatted(const OutputFormatDesc& format, int indent) const;
virtual size_t GetElementCount() const;
virtual JSONParser::Element* GetElementByName(const std::string& name) const;
virtual JSONParser::Result AddElement(const std::string& name, JSONParser::Element* element);
virtual JSONParser::Element* GetElementAt(size_t idx, std::string& name) const;
const ElementMap& GetElements() const { return m_Elements; }
private:
ElementMap m_Elements;
};
//-----------------------------------------------------------------------------------------
class ArrayImpl :
public AMFInterfaceImpl<JSONParser::Array>,
public ElementHelper
{
public:
typedef std::vector<Element::Ptr> ElementVector;
AMF_BEGIN_INTERFACE_MAP
AMF_INTERFACE_ENTRY(JSONParser::Element)
AMF_INTERFACE_ENTRY(JSONParser::Array)
AMF_END_INTERFACE_MAP
ArrayImpl();
virtual JSONParser::Error Parse(const std::string& str, size_t start, size_t end);
virtual std::string Stringify() const;
virtual std::string StringifyFormatted(const OutputFormatDesc& format, int indent) const;
virtual size_t GetElementCount() const;
virtual JSONParser::Element* GetElementAt(size_t idx) const;
virtual void AddElement(Element* element);
private:
ElementVector m_Elements;
};
//-----------------------------------------------------------------------------------------
JSONParserImpl();
virtual JSONParser::Result Parse(const std::string& str, Node** root);
virtual std::string Stringify(const Node* root) const;
virtual std::string StringifyFormatted(const Node* root, const OutputFormatDesc& format, int indent) const;
virtual size_t GetLastErrorOffset() const;
virtual Result CreateNode(Node** node) const;
virtual Result CreateValue(Value** value) const;
virtual Result CreateArray(Array** array) const;
private:
size_t m_LastErrorOfs;
};
}


@@ -1,90 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
///-------------------------------------------------------------------------
/// @file CairoImportTable.cpp
/// @brief cairo import table
///-------------------------------------------------------------------------
#include "CairoImportTable.h"
#include "public/common/TraceAdapter.h"
#include "../Thread.h"
using namespace amf;
#define GET_SO_ENTRYPOINT(m, h, f) m = reinterpret_cast<decltype(&f)>(amf_get_proc_address(h, #f)); \
AMF_RETURN_IF_FALSE(nullptr != m, AMF_FAIL, L"Failed to acquire entrypoint %S", #f);
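The GET_SO_ENTRYPOINT macro above wraps the runtime-linking pattern (dlopen plus dlsym) that the simplified hwcodec build also relies on instead of bundling vendor SDKs. Below is a minimal, self-contained sketch of that pattern; the library name, symbol name and function-pointer signature are illustrative assumptions, not taken from this file:

```cpp
#include <dlfcn.h>
#include <cstdio>

// Illustrative sketch only: resolve one symbol at runtime, mirroring what
// amf_load_library()/amf_get_proc_address() do for the import table above.
int main()
{
    void* handle = dlopen("libcairo.so.2", RTLD_NOW | RTLD_GLOBAL);
    if (handle == nullptr)
    {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    // Resolve through a generic function-pointer type; the real signature
    // takes a cairo_surface_t* and returns int. We only resolve it here.
    using get_width_fn = int (*)(void*);
    auto get_width = reinterpret_cast<get_width_fn>(
        dlsym(handle, "cairo_image_surface_get_width"));
    if (get_width == nullptr)
    {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
    }
    dlclose(handle);
    return 0;
}
```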
//-------------------------------------------------------------------------------------------------
CairoImportTable::CairoImportTable()
{}
//-------------------------------------------------------------------------------------------------
CairoImportTable::~CairoImportTable()
{
UnloadFunctionsTable();
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT CairoImportTable::LoadFunctionsTable()
{
if (nullptr == m_hLibCairoSO)
{
m_hLibCairoSO = amf_load_library(L"libcairo.so.2");
AMF_RETURN_IF_FALSE(nullptr != m_hLibCairoSO, AMF_FAIL, L"Failed to load libcairo.so.2");
}
GET_SO_ENTRYPOINT(m_cairo_image_surface_create_from_png_stream, m_hLibCairoSO, cairo_image_surface_create_from_png_stream);
GET_SO_ENTRYPOINT(m_cairo_surface_destroy, m_hLibCairoSO, cairo_surface_destroy);
GET_SO_ENTRYPOINT(m_cairo_image_surface_get_width, m_hLibCairoSO, cairo_image_surface_get_width);
GET_SO_ENTRYPOINT(m_cairo_image_surface_get_height, m_hLibCairoSO, cairo_image_surface_get_height);
GET_SO_ENTRYPOINT(m_cairo_image_surface_get_stride, m_hLibCairoSO, cairo_image_surface_get_stride);
GET_SO_ENTRYPOINT(m_cairo_image_surface_get_format, m_hLibCairoSO, cairo_image_surface_get_format);
GET_SO_ENTRYPOINT(m_cairo_image_surface_get_data, m_hLibCairoSO, cairo_image_surface_get_data);
return AMF_OK;
}
void CairoImportTable::UnloadFunctionsTable()
{
if (nullptr != m_hLibCairoSO)
{
amf_free_library(m_hLibCairoSO);
m_hLibCairoSO = nullptr;
}
m_cairo_image_surface_create_from_png_stream = nullptr;
m_cairo_surface_destroy = nullptr;
m_cairo_image_surface_get_width = nullptr;
m_cairo_image_surface_get_height = nullptr;
m_cairo_image_surface_get_stride = nullptr;
m_cairo_image_surface_get_format = nullptr;
m_cairo_image_surface_get_data = nullptr;
}


@@ -1,63 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
///-------------------------------------------------------------------------
/// @file CairoImportTable.h
/// @brief Cairo import table
///-------------------------------------------------------------------------
#pragma once
#include "../../include/core/Result.h"
#include <memory>
#include <cairo.h>
struct CairoImportTable{
CairoImportTable();
~CairoImportTable();
AMF_RESULT LoadFunctionsTable();
void UnloadFunctionsTable();
decltype(&cairo_image_surface_create_from_png_stream) m_cairo_image_surface_create_from_png_stream = nullptr;
decltype(&cairo_surface_destroy) m_cairo_surface_destroy = nullptr;
decltype(&cairo_image_surface_get_width) m_cairo_image_surface_get_width = nullptr;
decltype(&cairo_image_surface_get_height) m_cairo_image_surface_get_height = nullptr;
decltype(&cairo_image_surface_get_stride) m_cairo_image_surface_get_stride = nullptr;
decltype(&cairo_image_surface_get_format) m_cairo_image_surface_get_format = nullptr;
decltype(&cairo_image_surface_get_data) m_cairo_image_surface_get_data = nullptr;
amf_handle m_hLibCairoSO = nullptr;
};
typedef std::shared_ptr<CairoImportTable> CairoImportTablePtr;


@@ -1,309 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; AV1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#include "DRMDevice.h"
#include <drm.h>
#include <drm_fourcc.h>
#include <drm_mode.h>
#include <amdgpu_drm.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <dirent.h>
#include <fcntl.h>
#include <unistd.h>
#define AMF_FACILITY L"DRMDevice"
struct FormatMapEntry
{
amf::AMF_SURFACE_FORMAT formatAMF;
uint32_t formatDRM;
};
static const FormatMapEntry formatMap [] =
{
#ifdef DRM_FORMAT_R8
{ amf::AMF_SURFACE_GRAY8, DRM_FORMAT_R8 },
#endif
#ifdef DRM_FORMAT_R16
// { , DRM_FORMAT_R16 },
// { , DRM_FORMAT_R16 | DRM_FORMAT_BIG_ENDIAN },
#endif
// { , DRM_FORMAT_BGR233 },
// { , DRM_FORMAT_XRGB1555 },
// { , DRM_FORMAT_XRGB1555 | DRM_FORMAT_BIG_ENDIAN },
// { , DRM_FORMAT_XBGR1555 },
// { , DRM_FORMAT_XBGR1555 | DRM_FORMAT_BIG_ENDIAN },
// { , DRM_FORMAT_RGB565 },
// { , DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN },
// { , DRM_FORMAT_BGR565 },
// { , DRM_FORMAT_BGR565 | DRM_FORMAT_BIG_ENDIAN },
// { , DRM_FORMAT_RGB888 },
// { , DRM_FORMAT_BGR888 },
{ amf::AMF_SURFACE_BGRA, DRM_FORMAT_BGRX8888 },
{ amf::AMF_SURFACE_RGBA, DRM_FORMAT_RGBX8888 },
{ amf::AMF_SURFACE_BGRA, DRM_FORMAT_XBGR8888 },
{ amf::AMF_SURFACE_BGRA /*AMF_SURFACE_ARGB*/, DRM_FORMAT_XRGB8888 },
{ amf::AMF_SURFACE_RGBA, DRM_FORMAT_BGRA8888 },
{ amf::AMF_SURFACE_ARGB, DRM_FORMAT_ARGB8888 },
{ amf::AMF_SURFACE_YUY2, DRM_FORMAT_YUYV },
// { , DRM_FORMAT_YVYU },
{ amf::AMF_SURFACE_UYVY, DRM_FORMAT_UYVY },
};
amf::AMF_SURFACE_FORMAT AMF_STD_CALL FromDRMtoAMF(uint32_t formatDRM)
{
for(int i = 0; i < amf_countof(formatMap); i++)
{
if(formatMap[i].formatDRM == formatDRM)
{
return formatMap[i].formatAMF;
}
}
return amf::AMF_SURFACE_UNKNOWN;
}
drmModeFB2Ptr AMF_STD_CALL AMFdrmModeGetFB2(int fd, uint32_t fb_id)
{
struct drm_mode_fb_cmd2 get = {
.fb_id = fb_id,
};
drmModeFB2Ptr ret;
int err;
err = drmIoctl(fd, DRM_IOCTL_MODE_GETFB2, &get);
if (err != 0)
return NULL;
ret = (drmModeFB2Ptr)drmMalloc(sizeof(drmModeFB2));
if (!ret)
return NULL;
ret->fb_id = fb_id;
ret->width = get.width;
ret->height = get.height;
ret->pixel_format = get.pixel_format;
ret->flags = get.flags;
ret->modifier = get.modifier[0];
memcpy(ret->handles, get.handles, sizeof(uint32_t) * 4);
memcpy(ret->pitches, get.pitches, sizeof(uint32_t) * 4);
memcpy(ret->offsets, get.offsets, sizeof(uint32_t) * 4);
return ret;
}
void AMF_STD_CALL AMFdrmModeFreeFB2(drmModeFB2Ptr ptr)
{
drmFree(ptr);
}
DRMDevice::DRMDevice() {}
DRMDevice::~DRMDevice()
{
Terminate();
}
AMF_RESULT AMF_STD_CALL DRMDevice::InitFromVulkan(int pciDomain, int pciBus, int pciDevice, int pciFunction)
{
int dirfd = open("/dev/dri/by-path", O_RDONLY);
AMF_RETURN_IF_FALSE(dirfd != -1, AMF_FAIL, L"Couldn't open /dev/dri/by-path")
DIR *pDir = fdopendir(dirfd);
if (pDir == nullptr)
{
close(dirfd);
return AMF_FAIL;
}
struct dirent *entry;
while ((entry = readdir(pDir)) != NULL)
{
int entryDomain = -1, entryBus = -1, entryDevice = -1, entryFunction = -1, length = -1;
int res = sscanf(entry->d_name, "pci-%x:%x:%x.%x-card%n",
&entryDomain, &entryBus, &entryDevice, &entryFunction, &length);
//check if matches pattern
if (res != 4 || length != strlen(entry->d_name))
{
continue;
}
if (entryDomain == pciDomain && entryBus == pciBus && entryDevice == pciDevice && entryFunction == pciFunction)
{
m_fd = openat(dirfd, entry->d_name, O_RDWR | O_CLOEXEC);
m_pathToCard = entry->d_name;
break;
}
}
closedir(pDir); //implicitly closes dirfd
if (m_fd < 0)
{
return AMF_FAIL;
}
return SetupDevice();
}
AMF_RESULT AMF_STD_CALL DRMDevice::InitFromPath(const char* pathToCard)
{
m_fd = open(pathToCard, O_RDWR | O_CLOEXEC);
m_pathToCard = pathToCard;
if (m_fd < 0)
{
return AMF_FAIL;
}
return SetupDevice();
}
AMF_RESULT DRMDevice::SetupDevice()
{
drmVersionPtr version = drmGetVersion(m_fd);
AMF_RETURN_IF_FALSE(version != nullptr, AMF_FAIL, L"drmGetVersion() failed from %S", m_pathToCard.c_str());
AMFTraceDebug(AMF_FACILITY, L"Opened DRM device %S: driver name %S version %d.%d.%d", m_pathToCard.c_str(), version->name,
version->version_major, version->version_minor, version->version_patchlevel);
drmFreeVersion(version);
uint64_t valueExport = 0;
int err = drmGetCap(m_fd, DRM_PRIME_CAP_EXPORT, &valueExport);
err = drmSetClientCap(m_fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1);
if (err < 0)
{
AMFTraceWarning(AMF_FACILITY, L"drmSetClientCap(DRM_CLIENT_CAP_UNIVERSAL_PLANES) Failed with %d", err);
}
drmSetClientCap(m_fd, DRM_CLIENT_CAP_ATOMIC, 1);
return AMF_OK;
}
AMF_RESULT AMF_STD_CALL DRMDevice::Terminate()
{
if (m_fd >= 0)
{
close(m_fd);
m_fd = -1;
}
m_pathToCard = "";
return AMF_OK;
}
int AMF_STD_CALL DRMDevice::GetFD() const
{
return m_fd;
}
std::string AMF_STD_CALL DRMDevice::GetPathToCard() const
{
return m_pathToCard;
}
AMF_RESULT AMF_STD_CALL DRMDevice::GetCRTCs(std::vector<DRMCRTC>& crtcs) const
{
AMF_RETURN_IF_FALSE(m_fd >= 0, AMF_FAIL, L"Not Initialized");
AMFdrmModeResPtr resources = drmModeGetResources(m_fd);
AMF_RETURN_IF_FALSE(resources.p != nullptr, AMF_FAIL, L"drmModeGetResources() returned nullptr");
crtcs.clear();
for(int i = 0; i < resources.p->count_crtcs; i ++)
{
AMFdrmModeCrtcPtr crtc = drmModeGetCrtc(m_fd, resources.p->crtcs[i]);
AMFRect crop = {};
amf::AMF_SURFACE_FORMAT formatAMF = amf::AMF_SURFACE_UNKNOWN;
int formatDRM = 0;
int handle = 0;
if(GetCrtcInfo(crtc, crop, formatDRM, formatAMF, handle) != AMF_OK)
{
continue;
}
AMFTraceDebug(AMF_FACILITY, L" CRTC id=%d fb=%d crop(%d,%d,%d,%d)", crtc.p->crtc_id, crtc.p->buffer_id, crop.left, crop.top, crop.right, crop.bottom);
DRMCRTC drmCrtc = {};
drmCrtc.crtcID = crtc.p->crtc_id;
drmCrtc.fbID = crtc.p->buffer_id;
drmCrtc.crop = crop;
drmCrtc.formatDRM = formatDRM;
drmCrtc.formatAMF = formatAMF;
drmCrtc.handle = handle;
crtcs.push_back(drmCrtc);
}
return AMF_OK;
}
AMF_RESULT AMF_STD_CALL DRMDevice::GetCrtcInfo(const AMFdrmModeCrtcPtr& crtc, AMFRect &crop, int& formatDRM, amf::AMF_SURFACE_FORMAT& formatAMF, int& handle) const
{
if(crtc.p == nullptr)
{
return AMF_FAIL;
}
if(crtc.p->buffer_id == 0)
{
return AMF_FAIL;
}
// check if active
AMFdrmModeObjectPropertiesPtr properties = drmModeObjectGetProperties (m_fd, crtc.p->crtc_id, DRM_MODE_OBJECT_CRTC);
if(properties.p == nullptr)
{
return AMF_FAIL;
}
for(int k = 0; k < properties.p->count_props; k++)
{
AMFdrmModePropertyPtr prop = drmModeGetProperty(m_fd, properties.p->props[k]);
if(std::string(prop.p->name) == "ACTIVE" && properties.p->prop_values[k] == 0)
{
return AMF_FAIL;
}
}
// check FB
AMFdrmModeFB2Ptr fb2 = AMFdrmModeGetFB2(m_fd, crtc.p->buffer_id);
if(fb2.p == nullptr)
{
return AMF_FAIL;
}
crop.left = crtc.p->x;
crop.top = crtc.p->y;
crop.right = crtc.p->x + crtc.p->width;
crop.bottom = crtc.p->y + crtc.p->height;
formatDRM = fb2.p->pixel_format;
formatAMF= FromDRMtoAMF(fb2.p->pixel_format);
handle = fb2.p->handles[0];
return AMF_OK;
}


@@ -1,123 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; AV1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#pragma once
#include <string>
#include <vector>
#include "public/common/TraceAdapter.h"
#include "public/include/core/Surface.h"
#include <drm.h>
#include <drm_fourcc.h>
#include <drm_mode.h>
#include <amdgpu_drm.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
// These classes provide a nice interface to a DRM card using libdrm
amf::AMF_SURFACE_FORMAT AMF_STD_CALL FromDRMtoAMF(uint32_t formatDRM);
drmModeFB2Ptr AMF_STD_CALL AMFdrmModeGetFB2(int fd, uint32_t fb_id);
void AMF_STD_CALL AMFdrmModeFreeFB2(drmModeFB2Ptr ptr);
template <class type, void function(type)>
class AMFAutoDRMPtr
{
public:
AMFAutoDRMPtr() : p(nullptr){}
AMFAutoDRMPtr(type ptr) : p(ptr){}
~AMFAutoDRMPtr()
{
Clear();
}
AMFAutoDRMPtr<type, function>& operator=(type ptr)
{
if(p != ptr)
{
Clear();
p = ptr;
}
return *this;
}
void Clear()
{
if(p != nullptr)
{
function(p);
p = nullptr;
}
}
type p;
private:
AMFAutoDRMPtr<type, function>& operator=(const AMFAutoDRMPtr<type, function>& other);
};
typedef AMFAutoDRMPtr<drmModePlanePtr, drmModeFreePlane> AMFdrmModePlanePtr;
typedef AMFAutoDRMPtr<drmModeFBPtr, drmModeFreeFB> AMFdrmModeFBPtr;
typedef AMFAutoDRMPtr<drmModeFB2Ptr, AMFdrmModeFreeFB2> AMFdrmModeFB2Ptr;
typedef AMFAutoDRMPtr<drmModePlaneResPtr, drmModeFreePlaneResources> AMFdrmModePlaneResPtr;
typedef AMFAutoDRMPtr<drmModeObjectPropertiesPtr, drmModeFreeObjectProperties> AMFdrmModeObjectPropertiesPtr;
typedef AMFAutoDRMPtr<drmModePropertyPtr, drmModeFreeProperty> AMFdrmModePropertyPtr;
typedef AMFAutoDRMPtr<drmModeCrtcPtr, drmModeFreeCrtc> AMFdrmModeCrtcPtr;
typedef AMFAutoDRMPtr<drmModeResPtr, drmModeFreeResources> AMFdrmModeResPtr;
struct DRMCRTC {
int crtcID;
int fbID;
AMFRect crop;
int formatDRM;
amf::AMF_SURFACE_FORMAT formatAMF;
int handle;
};
class DRMDevice {
public:
DRMDevice();
~DRMDevice();
AMF_RESULT AMF_STD_CALL InitFromVulkan(int pciDomain, int pciBus, int pciDevice, int pciFunction);
AMF_RESULT AMF_STD_CALL InitFromPath(const char* pathToCard);
AMF_RESULT AMF_STD_CALL Terminate();
int AMF_STD_CALL GetFD() const;
std::string AMF_STD_CALL GetPathToCard() const;
AMF_RESULT AMF_STD_CALL GetCRTCs(std::vector<DRMCRTC>& crtcs) const;
AMF_RESULT AMF_STD_CALL GetCrtcInfo(const AMFdrmModeCrtcPtr& crtc, AMFRect &crop, int& formatDRM, amf::AMF_SURFACE_FORMAT& formatAMF, int& handle) const;
private:
AMF_RESULT SetupDevice();
int m_fd = -1;
std::string m_pathToCard;
};
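A hedged usage sketch of the DRMDevice wrapper declared above: open a card node, enumerate the active CRTCs and print their geometry. The card path is an example value and error handling is reduced to early returns:

```cpp
#include "DRMDevice.h"
#include <cstdio>
#include <vector>

// Illustrative sketch only, using just the public API declared above.
void ListCrtcsSketch()
{
    DRMDevice device;
    if (device.InitFromPath("/dev/dri/card0") != AMF_OK)   // example path
    {
        std::fprintf(stderr, "failed to open DRM device\n");
        return;
    }
    std::vector<DRMCRTC> crtcs;
    if (device.GetCRTCs(crtcs) == AMF_OK)
    {
        for (const DRMCRTC& crtc : crtcs)
        {
            std::printf("CRTC %d: fb=%d, %dx%d\n", crtc.crtcID, crtc.fbID,
                        crtc.crop.right - crtc.crop.left,
                        crtc.crop.bottom - crtc.crop.top);
        }
    }
    device.Terminate();   // the destructor would also call this
}
```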


@@ -1,155 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
///-------------------------------------------------------------------------
/// @file PulseAudioImportTable.cpp
/// @brief pulseaudio import table
///-------------------------------------------------------------------------
#include "PulseAudioImportTable.h"
#include "public/common/TraceAdapter.h"
#include "../Thread.h"
using namespace amf;
#define GET_SO_ENTRYPOINT(m, h, f) m = reinterpret_cast<decltype(&f)>(amf_get_proc_address(h, #f)); \
AMF_RETURN_IF_FALSE(nullptr != m, AMF_FAIL, L"Failed to acquire entrypoint %S", #f);
//-------------------------------------------------------------------------------------------------
PulseAudioImportTable::PulseAudioImportTable()
{}
//-------------------------------------------------------------------------------------------------
PulseAudioImportTable::~PulseAudioImportTable()
{
UnloadFunctionsTable();
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT PulseAudioImportTable::LoadFunctionsTable()
{
// Load pulseaudio simple api shared library and pulseaudio shared library.
if (nullptr == m_hLibPulseSimpleSO)
{
m_hLibPulseSimpleSO = amf_load_library(L"libpulse-simple.so.0");
AMF_RETURN_IF_FALSE(nullptr != m_hLibPulseSimpleSO, AMF_FAIL, L"Failed to load libpulse-simple.so.0");
}
if (nullptr == m_hLibPulseSO)
{
m_hLibPulseSO = amf_load_library(L"libpulse.so.0");
AMF_RETURN_IF_FALSE(nullptr != m_hLibPulseSO, AMF_FAIL, L"Failed to load libpulse.so.0");
}
// Load pulseaudio mainloop functions.
GET_SO_ENTRYPOINT(m_pPA_Mainloop_Free, m_hLibPulseSO, pa_mainloop_free);
GET_SO_ENTRYPOINT(m_pPA_Mainloop_New, m_hLibPulseSO, pa_mainloop_new);
GET_SO_ENTRYPOINT(m_pPA_Mainloop_Quit, m_hLibPulseSO, pa_mainloop_quit);
GET_SO_ENTRYPOINT(m_pPA_Mainloop_Get_API, m_hLibPulseSO, pa_mainloop_get_api);
GET_SO_ENTRYPOINT(m_pPA_Mainloop_Run, m_hLibPulseSO, pa_mainloop_run);
// Load pulseaudio context functions.
GET_SO_ENTRYPOINT(m_pPA_Context_Unref, m_hLibPulseSO, pa_context_unref);
GET_SO_ENTRYPOINT(m_pPA_Context_Load_Module, m_hLibPulseSO, pa_context_load_module);
GET_SO_ENTRYPOINT(m_pPA_Context_Unload_Module, m_hLibPulseSO, pa_context_unload_module);
GET_SO_ENTRYPOINT(m_pPA_Context_New, m_hLibPulseSO, pa_context_new);
GET_SO_ENTRYPOINT(m_pPA_Context_Get_State, m_hLibPulseSO, pa_context_get_state);
GET_SO_ENTRYPOINT(m_pPA_Context_Set_State_Callback, m_hLibPulseSO, pa_context_set_state_callback);
GET_SO_ENTRYPOINT(m_pPA_Context_Get_Server_Info, m_hLibPulseSO, pa_context_get_server_info);
GET_SO_ENTRYPOINT(m_pPA_Context_Connect, m_hLibPulseSO, pa_context_connect);
GET_SO_ENTRYPOINT(m_pPA_Context_Disconnect, m_hLibPulseSO, pa_context_disconnect);
GET_SO_ENTRYPOINT(m_pPA_Context_Get_Sink_Info_By_Name, m_hLibPulseSO, pa_context_get_sink_info_by_name);
GET_SO_ENTRYPOINT(m_pPA_Context_Get_Sink_Info_List, m_hLibPulseSO, pa_context_get_sink_info_list);
GET_SO_ENTRYPOINT(m_pPA_Context_Get_Source_Info_List, m_hLibPulseSO, pa_context_get_source_info_list);
// Load other pulse audio functions.
GET_SO_ENTRYPOINT(m_pPA_Operation_Unref, m_hLibPulseSO, pa_operation_unref);
GET_SO_ENTRYPOINT(m_pPA_Strerror, m_hLibPulseSO, pa_strerror);
// Load pulse audio simple api functions.
GET_SO_ENTRYPOINT(m_pPA_Simple_New, m_hLibPulseSimpleSO, pa_simple_new);
GET_SO_ENTRYPOINT(m_pPA_Simple_Free, m_hLibPulseSimpleSO, pa_simple_free);
GET_SO_ENTRYPOINT(m_pPA_Simple_Write, m_hLibPulseSimpleSO, pa_simple_write);
GET_SO_ENTRYPOINT(m_pPA_Simple_Read, m_hLibPulseSimpleSO, pa_simple_read);
GET_SO_ENTRYPOINT(m_pPA_Simple_Flush, m_hLibPulseSimpleSO, pa_simple_flush);
GET_SO_ENTRYPOINT(m_pPA_Simple_Get_Latency, m_hLibPulseSimpleSO, pa_simple_get_latency);
return AMF_OK;
}
void PulseAudioImportTable::UnloadFunctionsTable()
{
if (nullptr != m_hLibPulseSimpleSO)
{
amf_free_library(m_hLibPulseSimpleSO);
m_hLibPulseSimpleSO = nullptr;
}
if (nullptr != m_hLibPulseSO)
{
amf_free_library(m_hLibPulseSO);
m_hLibPulseSO = nullptr;
}
m_pPA_Mainloop_Free = nullptr;
m_pPA_Mainloop_Quit = nullptr;
m_pPA_Mainloop_New = nullptr;
m_pPA_Mainloop_Get_API = nullptr;
m_pPA_Mainloop_Run = nullptr;
// Context functions.
m_pPA_Context_Unref = nullptr;
m_pPA_Context_Load_Module = nullptr;
m_pPA_Context_Unload_Module = nullptr;
m_pPA_Context_New = nullptr;
m_pPA_Context_Get_State = nullptr;
m_pPA_Context_Set_State_Callback = nullptr;
m_pPA_Context_Get_Server_Info = nullptr;
m_pPA_Context_Connect = nullptr;
m_pPA_Context_Disconnect = nullptr;
m_pPA_Context_Get_Sink_Info_By_Name = nullptr;
m_pPA_Context_Get_Sink_Info_List = nullptr;
m_pPA_Context_Get_Source_Info_List = nullptr;
// Others
m_pPA_Operation_Unref = nullptr;
m_pPA_Strerror = nullptr;
// PulseAudio Simple API functions.
m_pPA_Simple_New = nullptr;
m_pPA_Simple_Free = nullptr;
m_pPA_Simple_Write = nullptr;
m_pPA_Simple_Read = nullptr;
m_pPA_Simple_Flush = nullptr;
m_pPA_Simple_Get_Latency = nullptr;
}


@@ -1,91 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
///-------------------------------------------------------------------------
/// @file PulseAudioImportTable.h
/// @brief pulseaudio import table
///-------------------------------------------------------------------------
#pragma once
#include "../../include/core/Result.h"
#include <memory>
#include <pulse/simple.h>
#include <pulse/pulseaudio.h>
struct PulseAudioImportTable{
PulseAudioImportTable();
~PulseAudioImportTable();
AMF_RESULT LoadFunctionsTable();
void UnloadFunctionsTable();
// PulseAudio functions.
// Mainloop functions.
decltype(&pa_mainloop_free) m_pPA_Mainloop_Free = nullptr;
decltype(&pa_mainloop_quit) m_pPA_Mainloop_Quit = nullptr;
decltype(&pa_mainloop_new) m_pPA_Mainloop_New = nullptr;
decltype(&pa_mainloop_get_api) m_pPA_Mainloop_Get_API = nullptr;
decltype(&pa_mainloop_run) m_pPA_Mainloop_Run = nullptr;
// Context functions.
decltype(&pa_context_unref) m_pPA_Context_Unref = nullptr;
decltype(&pa_context_load_module) m_pPA_Context_Load_Module = nullptr;
decltype(&pa_context_unload_module) m_pPA_Context_Unload_Module = nullptr;
decltype(&pa_context_new) m_pPA_Context_New = nullptr;
decltype(&pa_context_get_state) m_pPA_Context_Get_State = nullptr;
decltype(&pa_context_set_state_callback) m_pPA_Context_Set_State_Callback = nullptr;
decltype(&pa_context_get_server_info) m_pPA_Context_Get_Server_Info = nullptr;
decltype(&pa_context_connect) m_pPA_Context_Connect = nullptr;
decltype(&pa_context_disconnect) m_pPA_Context_Disconnect = nullptr;
decltype(&pa_context_get_sink_info_by_name) m_pPA_Context_Get_Sink_Info_By_Name = nullptr;
decltype(&pa_context_get_sink_info_list) m_pPA_Context_Get_Sink_Info_List = nullptr;
decltype(&pa_context_get_source_info_list) m_pPA_Context_Get_Source_Info_List = nullptr;
// Others
decltype(&pa_operation_unref) m_pPA_Operation_Unref = nullptr;
decltype(&pa_strerror) m_pPA_Strerror = nullptr;
// PulseAudio Simple API functions.
decltype(&pa_simple_new) m_pPA_Simple_New = nullptr;
decltype(&pa_simple_free) m_pPA_Simple_Free = nullptr;
decltype(&pa_simple_write) m_pPA_Simple_Write = nullptr;
decltype(&pa_simple_read) m_pPA_Simple_Read = nullptr;
decltype(&pa_simple_flush) m_pPA_Simple_Flush = nullptr;
decltype(&pa_simple_get_latency) m_pPA_Simple_Get_Latency = nullptr;
amf_handle m_hLibPulseSO = nullptr;
amf_handle m_hLibPulseSimpleSO = nullptr;
};
typedef std::shared_ptr<PulseAudioImportTable> PulseAudioImportTablePtr;


@@ -1,757 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#include "../Thread.h"
#if defined (__linux) || (__APPLE__)
#if defined(__GNUC__)
//disable gcc warnings on STL code
#pragma GCC diagnostic ignored "-Weffc++"
#endif
#define POSIX
#include <locale>
#include <algorithm>
#include <dirent.h>
#include <fnmatch.h>
#include <pwd.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdlib.h>
#include <errno.h>
#include <time.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <dlfcn.h>
#include <sys/time.h>
#include <fstream>
#if !defined(__APPLE__)
#include <malloc.h>
#endif
#if defined(__ANDROID__)
#include <android/log.h>
#endif
#include <sys/types.h>
#include <semaphore.h>
#include <pthread.h>
#include "../AMFSTL.h"
using namespace amf;
extern "C" void AMF_STD_CALL amf_debug_trace(const wchar_t* text);
void perror(const char* errorModule)
{
char buf[128];
#if defined(__ANDROID__) || (__APPLE__)
strerror_r(errno, buf, sizeof(buf));
fprintf(stderr, "%s: %s", buf, errorModule);
#else
char* err = strerror_r(errno, buf, sizeof(buf));
fprintf(stderr, "%s: %s", err, errorModule);
#endif
exit(1);
}
#if defined(__APPLE__)
amf_uint64 AMF_STD_CALL get_current_thread_id()
{
return reinterpret_cast<amf_uint64>(pthread_self());
}
#else
amf_uint32 AMF_STD_CALL get_current_thread_id()
{
return static_cast<amf_uint32>(pthread_self());
}
#endif
// int clock_gettime(clockid_t clk_id, struct timespec *tp);
//----------------------------------------------------------------------------------------
// threading
//----------------------------------------------------------------------------------------
amf_long AMF_STD_CALL amf_atomic_inc(amf_long* X)
{
return __sync_add_and_fetch(X, 1);
}
//----------------------------------------------------------------------------------------
amf_long AMF_STD_CALL amf_atomic_dec(amf_long* X)
{
return __sync_sub_and_fetch(X, 1);
}
//----------------------------------------------------------------------------------------
amf_handle AMF_STD_CALL amf_create_critical_section()
{
pthread_mutex_t* mutex = new pthread_mutex_t;
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
pthread_mutex_init(mutex, &attr);
return (amf_handle)mutex;
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_delete_critical_section(amf_handle cs)
{
if(cs == NULL)
{
return false;
}
pthread_mutex_t* mutex = (pthread_mutex_t*)cs;
int err = pthread_mutex_destroy(mutex);
delete mutex;
return err == 0;
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_enter_critical_section(amf_handle cs)
{
if(cs == NULL)
{
return false;
}
pthread_mutex_t* mutex = (pthread_mutex_t*)cs;
return pthread_mutex_lock(mutex) == 0;
}
//----------------------------------------------------------------------------------------
bool AMF_CDECL_CALL amf_wait_critical_section(amf_handle cs, amf_ulong ulTimeout)
{
if(cs == NULL)
{
return false;
}
return amf_wait_for_mutex(cs, ulTimeout);
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_leave_critical_section(amf_handle cs)
{
if(cs == NULL)
{
return false;
}
pthread_mutex_t* mutex = (pthread_mutex_t*)cs;
return pthread_mutex_unlock(mutex) == 0;
}
//----------------------------------------------------------------------------------------
struct MyEvent
{
bool m_manual_reset;
pthread_cond_t m_cond;
pthread_mutex_t m_mutex;
bool m_triggered;
};
//----------------------------------------------------------------------------------------
amf_handle AMF_STD_CALL amf_create_event(bool initially_owned, bool manual_reset, const wchar_t* name)
{
MyEvent* event = new MyEvent;
// Linux does not natively support Named Condition variables
// so raise an error.
// Implement this using boost (NamedCondition), Qt, or some other framework.
if(name != NULL)
{
perror("Named Events not supported under Linux yet");
exit(1);
}
event->m_manual_reset = manual_reset;
pthread_cond_t cond_tmp = PTHREAD_COND_INITIALIZER;
event->m_cond = cond_tmp;
pthread_mutex_t mutex_tmp = PTHREAD_MUTEX_INITIALIZER;
event->m_mutex = mutex_tmp;
event->m_triggered = false;
if(initially_owned)
{
amf_set_event((amf_handle)event);
}
return (amf_handle)event;
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_delete_event(amf_handle hevent)
{
if(hevent == NULL)
{
return false;
}
MyEvent* event = (MyEvent*)hevent;
int err1 = pthread_mutex_destroy(&event->m_mutex);
int err2 = pthread_cond_destroy(&event->m_cond);
delete event;
return err1 == 0 && err2 == 0;
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_set_event(amf_handle hevent)
{
if(hevent == NULL)
{
return false;
}
MyEvent* event = (MyEvent*)hevent;
pthread_mutex_lock(&event->m_mutex);
event->m_triggered = true;
int err1 = pthread_cond_broadcast(&event->m_cond);
pthread_mutex_unlock(&event->m_mutex);
return err1 == 0;
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_reset_event(amf_handle hevent)
{
if(hevent == NULL)
{
return false;
}
MyEvent* event = (MyEvent*)hevent;
pthread_mutex_lock(&event->m_mutex);
event->m_triggered = false;
int err = pthread_mutex_unlock(&event->m_mutex);
return err == 0;
}
//----------------------------------------------------------------------------------------
static bool AMF_STD_CALL amf_wait_for_event_int(amf_handle hevent, unsigned long timeout, bool bTimeoutErr)
{
if(hevent == NULL)
{
return false;
}
bool ret = true;
int err = 0;
MyEvent* event = (MyEvent*)hevent;
pthread_mutex_lock(&event->m_mutex);
timespec ts;
clock_gettime(CLOCK_REALTIME, &ts);
amf_uint64 start_time = ((amf_uint64)ts.tv_sec) * 1000 + ((amf_uint64)ts.tv_nsec) / 1000000; //to msec
if(event->m_manual_reset)
{
while(!event->m_triggered)
{
if(timeout == AMF_INFINITE)
{
err = pthread_cond_wait(&event->m_cond, &event->m_mutex); //MM todo - timeout is not supported
ret = err == 0;
}
else
{
clock_gettime(CLOCK_REALTIME, &ts);
amf_uint64 current_time = ((amf_uint64)ts.tv_sec) * 1000 + ((amf_uint64)ts.tv_nsec) / 1000000; //to msec
if(current_time - start_time > (amf_uint64)timeout)
{
ret = bTimeoutErr ? false : true;
break;
}
amf_uint64 to_wait = start_time + timeout;
timespec abstime;
abstime.tv_sec = (time_t)(to_wait / 1000); // timeout is in millisec
abstime.tv_nsec = (time_t)((to_wait - ((amf_uint64)abstime.tv_sec) * 1000) * 1000000); // the rest to nanosec
err = pthread_cond_timedwait(&event->m_cond, &event->m_mutex, &abstime);
ret = err == 0;
}
}
}
else
{
if(event->m_triggered)
{
ret = true;
}
else
{
if (timeout == AMF_INFINITE) {
err = pthread_cond_wait(&event->m_cond, &event->m_mutex);
} else {
start_time += timeout;
timespec abstime;
abstime.tv_sec = (time_t) (start_time / 1000); // timeout is in millisec
abstime.tv_nsec = (time_t) ((start_time - (amf_uint64) (abstime.tv_sec) * 1000) *
1000000); // the rest to nanosec
err = pthread_cond_timedwait(&event->m_cond, &event->m_mutex, &abstime);
}
if (bTimeoutErr) {
ret = (err == 0);
} else {
ret = (err == 0 || err == ETIMEDOUT);
}
}
if(ret == true)
{
event->m_triggered = false;
}
}
pthread_mutex_unlock(&event->m_mutex);
return ret;
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_wait_for_event(amf_handle hevent, unsigned long timeout)
{
return amf_wait_for_event_int(hevent, timeout, true);
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_wait_for_event_timeout(amf_handle hevent, amf_ulong ulTimeout)
{
return amf_wait_for_event_int(hevent, ulTimeout, false);
}
//----------------------------------------------------------------------------------------
amf_handle AMF_STD_CALL amf_create_mutex(bool initially_owned, const wchar_t* name)
{
pthread_mutex_t* mutex = new pthread_mutex_t;
pthread_mutex_t mutex_tmp = PTHREAD_MUTEX_INITIALIZER;
*mutex = mutex_tmp;
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
pthread_mutex_init(mutex, &attr);
if(initially_owned)
{
pthread_mutex_lock(mutex);
}
return (amf_handle)mutex;
}
//----------------------------------------------------------------------------------------
amf_handle AMF_STD_CALL amf_open_mutex(const wchar_t* pName)
{
assert(false);
return 0;
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_delete_mutex(amf_handle hmutex)
{
if(hmutex == NULL)
{
return false;
}
pthread_mutex_t* mutex = (pthread_mutex_t*)hmutex;
int err = pthread_mutex_destroy(mutex);
delete mutex;
return err == 0;
}
//----------------------------------------------------------------------------------------
#if defined(__APPLE__)
int sem_timedwait1(sem_t* semaphore, const struct timespec* timeout)
{
struct timeval timenow;
struct timespec sleepytime;
int retcode;
/// This is just to avoid a completely busy wait
sleepytime.tv_sec = 0;
sleepytime.tv_nsec = 10000000; // 10ms
while((retcode = sem_trywait(semaphore)) != 0)
{
gettimeofday (&timenow, NULL);
if((timenow.tv_sec >= timeout->tv_sec) && ((timenow.tv_usec * 1000) >= timeout->tv_nsec))
{
return retcode;
}
nanosleep (&sleepytime, NULL);
}
return retcode;
}
#endif
#if defined(__ANDROID__) || defined(__APPLE__)
int pthread_mutex_timedlock1(pthread_mutex_t* mutex, const struct timespec* timeout)
{
struct timeval timenow;
struct timespec sleepytime;
int retcode;
/// This is just to avoid a completely busy wait
sleepytime.tv_sec = 0;
sleepytime.tv_nsec = 10000000; // 10ms
while((retcode = pthread_mutex_trylock (mutex)) == EBUSY)
{
gettimeofday (&timenow, NULL);
if((timenow.tv_sec >= timeout->tv_sec) && ((timenow.tv_usec * 1000) >= timeout->tv_nsec))
{
return ETIMEDOUT;
}
nanosleep (&sleepytime, NULL);
}
return retcode;
}
#endif
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_wait_for_mutex(amf_handle hmutex, unsigned long timeout)
{
if(hmutex == NULL)
{
return false;
}
pthread_mutex_t* mutex = (pthread_mutex_t*)hmutex;
if(timeout == AMF_INFINITE)
{
return pthread_mutex_lock(mutex) == 0;
}
// ulTimeout is in milliseconds
long timeout_sec = timeout / 1000; /* Seconds */;
long timeout_nsec = (timeout - (timeout / 1000) * 1000) * 1000000;
timespec wait_time; //absolute time
clock_gettime(CLOCK_REALTIME, &wait_time);
wait_time.tv_sec += timeout_sec;
wait_time.tv_nsec += timeout_nsec;
if (wait_time.tv_nsec >= 1000000000)
{
wait_time.tv_sec++;
wait_time.tv_nsec -= 1000000000;
}
#if defined(__ANDROID__) || defined (__APPLE__)
return pthread_mutex_timedlock1(mutex, &wait_time) == 0;
#else
return pthread_mutex_timedlock(mutex, &wait_time) == 0;
#endif
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_release_mutex(amf_handle hmutex)
{
if(hmutex == NULL)
{
return false;
}
pthread_mutex_t* mutex = (pthread_mutex_t*)hmutex;
return pthread_mutex_unlock(mutex) == 0;
}
//----------------------------------------------------------------------------------------
amf_handle AMF_STD_CALL amf_create_semaphore(amf_long iInitCount, amf_long iMaxCount, const wchar_t* /*pName*/)
{
if(iMaxCount == 0 || iInitCount > iMaxCount)
{
return NULL;
}
sem_t* semaphore = new sem_t;
if(sem_init(semaphore, 0, iInitCount) != 0)
{
delete semaphore;
return NULL;
}
return (amf_handle)semaphore;
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_delete_semaphore(amf_handle hsemaphore)
{
if(hsemaphore == NULL)
{
return false;
}
bool ret = true;
sem_t* semaphore = (sem_t*)hsemaphore;
ret = (0==sem_destroy(semaphore)) ? 1:0;
delete semaphore;
return ret;
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_wait_for_semaphore(amf_handle hsemaphore, amf_ulong timeout)
{
if(hsemaphore == NULL)
{
return true;
}
// ulTimeout is in milliseconds
long timeout_sec = timeout / 1000; /* Seconds */;
long timeout_nsec = (timeout - (timeout / 1000) * 1000) * 1000000;
timespec wait_time; //absolute time
clock_gettime(CLOCK_REALTIME, &wait_time);
wait_time.tv_sec += timeout_sec;
wait_time.tv_nsec += timeout_nsec;
if (wait_time.tv_nsec >= 1000000000)
{
wait_time.tv_sec++;
wait_time.tv_nsec -= 1000000000;
}
sem_t* semaphore = (sem_t*)hsemaphore;
if(timeout != AMF_INFINITE)
{
#if defined(__APPLE__)
return sem_timedwait1 (semaphore, &wait_time) == 0; // errno=ETIMEDOUT
#else
return sem_timedwait (semaphore, &wait_time) == 0; // errno=ETIMEDOUT
#endif
}
else
{
return sem_wait(semaphore) == 0;
}
}
//----------------------------------------------------------------------------------------
bool AMF_STD_CALL amf_release_semaphore(amf_handle hsemaphore, amf_long iCount, amf_long* iOldCount)
{
if(hsemaphore == NULL)
{
return false;
}
sem_t* semaphore = (sem_t*)hsemaphore;
if(iOldCount != NULL)
{
int iTmp = 0;
sem_getvalue(semaphore, &iTmp);
*iOldCount = iTmp;
}
for(int i = 0; i < iCount; i++)
{
sem_post(semaphore);
}
return true;
}
//------------------------------------------------------------------------------
/*
* Delay is specified in milliseconds.
* Function will return prematurely if msDelay value is invalid.
*
* */
void AMF_STD_CALL amf_sleep(amf_ulong msDelay)
{
#if defined(NANOSLEEP_DONTUSE)
struct timespec sts, sts_remaining;
int iErrorCode;
sts.tv_sec = msDelay / 1000;
sts.tv_nsec = (msDelay - sts.tv_sec * 1000) * 1000000; // nanosec
// put in code to measure sleep clock jitter
do
{
iErrorCode = nanosleep(&sts, &sts_remaining);
if(iErrorCode)
{
switch(errno)
{
case EINTR:
sts = sts_remaining;
break;
case EFAULT:
case EINVAL:
default:
perror("amf_sleep");
return;
/* TODO: how to log errors? */
}
}
} while(iErrorCode);
#else
usleep(msDelay * 1000);
#endif
}
//----------------------------------------------------------------------------------------
//----------------------------------------------------------------------------------------
// memory
//----------------------------------------------------------------------------------------
//----------------------------------------------------------------------------------------
void AMF_STD_CALL amf_debug_trace(const wchar_t* text)
{
#if defined(__ANDROID__)
__android_log_write(ANDROID_LOG_DEBUG, "AMF_TRACE", amf_from_unicode_to_multibyte(text).c_str());
#else
fprintf(stderr, "%ls", text);
#endif
}
void* AMF_STD_CALL amf_virtual_alloc(size_t size)
{
void* mem = NULL;
#if defined(__ANDROID__)
mem = memalign(sysconf(_SC_PAGESIZE), size);
if(mem == NULL)
{
amf_debug_trace(L"Failed to alloc memory using memalign() function.");
}
#else
int exitCode = posix_memalign(&mem, sysconf(_SC_PAGESIZE), size);
if(exitCode != 0)
{
amf_debug_trace(L"Failed to alloc memory using posix_memaling() function.");
}
#endif
return mem;
}
//-------------------------------------------------------------------------------------------------------
void AMF_STD_CALL amf_virtual_free(void* ptr)
{
free(ptr); // according to linux help memory allocated by memalign() must be freed by free()
}
//----------------------------------------------------------------------------------------
amf_handle AMF_STD_CALL amf_load_library(const wchar_t* filename)
{
void *ret = dlopen(amf_from_unicode_to_multibyte(filename).c_str(), RTLD_NOW | RTLD_GLOBAL);
if(ret ==0 )
{
const char *err = dlerror();
int a=1;
}
return ret;
}
amf_handle AMF_STD_CALL amf_load_library1(const wchar_t* filename, bool bGlobal)
{
void *ret;
if (bGlobal) {
ret = dlopen(amf_from_unicode_to_multibyte(filename).c_str(), RTLD_NOW | RTLD_GLOBAL);
} else {
#if defined(__ANDROID__) || (__APPLE__)
ret = dlopen(amf_from_unicode_to_multibyte(filename).c_str(), RTLD_NOW | RTLD_LOCAL);
#else
ret = dlopen(amf_from_unicode_to_multibyte(filename).c_str(), RTLD_NOW | RTLD_LOCAL| RTLD_DEEPBIND);
#endif
}
if(ret == 0)
{
const char *err = dlerror();
int a=1;
}
return ret;
}
void* AMF_STD_CALL amf_get_proc_address(amf_handle module, const char* procName)
{
return dlsym(module, procName);
}
//-------------------------------------------------------------------------------------------------
int AMF_STD_CALL amf_free_library(amf_handle module)
{
return dlclose(module) == 0;
}
void AMF_STD_CALL amf_increase_timer_precision()
{
}
void AMF_STD_CALL amf_restore_timer_precision()
{
}
//----------------------------------------------------------------------------------------
double AMF_STD_CALL amf_clock()
{
//MM: clock() Win32 - returns time from beginning of the program
//MM: clock() works differently on Linux - returns consumed processor time
timespec ts;
clock_gettime(CLOCK_REALTIME, &ts);
double cur_time = ((double)ts.tv_sec) + ((double)ts.tv_nsec) / 1000000000.; //to sec
return cur_time;
}
//----------------------------------------------------------------------------------------
amf_int64 AMF_STD_CALL get_time_in_seconds_with_fraction()
{
struct timeval tv;
gettimeofday(&tv, NULL);
amf_int64 ntp_time = ((tv.tv_sec * 1000) + (tv.tv_usec / 1000));
return ntp_time;
}
//---------------------------------------------------------------------------------------
amf_pts AMF_STD_CALL amf_high_precision_clock()
{
timespec ts;
clock_gettime(CLOCK_REALTIME, &ts);
return ts.tv_sec * 10000000LL + ts.tv_nsec / 100.; // to 100-nanosecond units (amf_pts)
}
//---------------------------------------------------------------------------------------
// Returns number of physical cores
amf_int32 AMF_STD_CALL amf_get_cpu_cores()
{
// NOTE: get_nprocs is the preferred way to get online cores on Linux but it will
// return the number of logical cores. Uncomment the line below if that's the behaviour needed
//return get_nprocs();
const char CPUINFO_CORES_COUNT[] = "cpu cores";
std::ifstream cpuinfo("/proc/cpuinfo");
std::string line;
while (std::getline(cpuinfo, line))
{
if (line.compare(0, strlen(CPUINFO_CORES_COUNT), CPUINFO_CORES_COUNT) == 0)
{
size_t pos = line.rfind(':');
if (pos == std::string::npos)
{
continue;
}
std::string tmp = line.substr(pos + 1);
const char* value = tmp.c_str();
int cores_online = std::atoi(value);
// Make sure we always return at least 1
return std::max(1, cores_online);
}
}
// Failure, return default
return 1;
}
//--------------------------------------------------------------------------------
// the end
//--------------------------------------------------------------------------------
#endif


@@ -1,75 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; AV1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#pragma once
#include <memory>
#include <X11/X.h>
#include <X11/Xlib.h>
//this pattern makes it impossible to use the x11 Display* pointer without first calling XLockDisplay
class XDisplay {
public:
typedef std::shared_ptr<XDisplay> Ptr;
XDisplay()
: m_pDisplay(XOpenDisplay(nullptr))
, m_shouldClose(true)
{}
XDisplay(Display* dpy)
: m_pDisplay(dpy)
, m_shouldClose(false)
{}
~XDisplay() { if(IsValid() && m_shouldClose) XCloseDisplay(m_pDisplay); }
bool IsValid() { return m_pDisplay != nullptr; }
private:
Display* m_pDisplay;
bool m_shouldClose = false;
friend class XDisplayPtr;
};
class XDisplayPtr {
public:
XDisplayPtr() = delete;
XDisplayPtr(const XDisplayPtr&) = delete;
XDisplayPtr& operator=(const XDisplayPtr&) =delete;
explicit XDisplayPtr(std::shared_ptr<XDisplay> display) : m_pDisplay(display) { XLockDisplay(m_pDisplay->m_pDisplay); }
~XDisplayPtr() { XUnlockDisplay(m_pDisplay->m_pDisplay); }
//XDisplayPtr acts like a normal Display* pointer, but the only way to obtain it is by locking the Display
operator Display*() { return m_pDisplay->m_pDisplay; }
private:
XDisplay::Ptr m_pDisplay;
};
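A short hedged sketch of how the locking wrapper above is intended to be used: the raw Display* is only reachable through XDisplayPtr, so XLockDisplay()/XUnlockDisplay() bracket every access automatically. The function name is illustrative:

```cpp
#include <X11/Xlib.h>

// Illustrative sketch only: the implicit Display* conversion is available
// only while the XDisplayPtr (and therefore the X display lock) is held.
int ScreenWidthSketch(XDisplay::Ptr shared)
{
    XDisplayPtr dpy(shared);                         // XLockDisplay() in the ctor
    return XDisplayWidth(dpy, XDefaultScreen(dpy));  // implicit Display* use
}                                                    // XUnlockDisplay() in the dtor
```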


@@ -1,11 +0,0 @@
///-------------------------------------------------------------------------
/// Copyright © 2020-2022 Advanced Micro Devices, Inc. All rights reserved.
///-------------------------------------------------------------------------
#pragma once
#include <memory>
#include <X11/extensions/Xrandr.h>
typedef std::shared_ptr<XRRScreenResources> XRRScreenResourcesPtr;
typedef std::shared_ptr<XRRCrtcInfo> XRRCrtcInfoPtr;


@@ -1,144 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
/**
***************************************************************************************************
* @file ObservableImpl.h
* @brief AMFObservableImpl common template declaration
***************************************************************************************************
*/
#ifndef AMF_ObservableImpl_h
#define AMF_ObservableImpl_h
#pragma once
#include "Thread.h"
#include <list>
namespace amf
{
template<typename Observer>
class AMFObservableImpl
{
private:
typedef std::list<Observer*> ObserversList;
ObserversList m_observers;
public:
AMFObservableImpl() : m_observers()
{}
virtual ~AMFObservableImpl()
{
assert(m_observers.size() == 0);
}
virtual void AMF_STD_CALL AddObserver(Observer* pObserver)
{
if (pObserver == nullptr)
{
return;
}
amf_bool found = false;
AMFLock lock(&m_sc);
for (typename ObserversList::iterator it = m_observers.begin(); it != m_observers.end(); it++)
{
if (*it == pObserver)
{
found = true;
break;
}
}
if (found == false)
{
m_observers.push_back(pObserver);
}
}
virtual void AMF_STD_CALL RemoveObserver(Observer* pObserver)
{
AMFLock lock(&m_sc);
m_observers.remove(pObserver);
}
protected:
void AMF_STD_CALL ClearObservers()
{
AMFLock lock(&m_sc);
m_observers.clear();
}
void AMF_STD_CALL NotifyObservers(void (AMF_STD_CALL Observer::* pEvent)())
{
ObserversList tempList;
{
AMFLock lock(&m_sc);
tempList = m_observers;
}
for (typename ObserversList::iterator it = tempList.begin(); it != tempList.end(); ++it)
{
Observer* pObserver = *it;
(pObserver->*pEvent)();
}
}
template<typename TArg0>
void AMF_STD_CALL NotifyObservers(void (AMF_STD_CALL Observer::* pEvent)(TArg0), TArg0 arg0)
{
ObserversList tempList;
{
AMFLock lock(&m_sc);
tempList = m_observers;
}
for (typename ObserversList::iterator it = tempList.begin(); it != tempList.end(); ++it)
{
Observer* pObserver = *it;
(pObserver->*pEvent)(arg0);
}
}
template<typename TArg0, typename TArg1>
void AMF_STD_CALL NotifyObservers(void (AMF_STD_CALL Observer::* pEvent)(TArg0, TArg1), TArg0 arg0, TArg1 arg1)
{
ObserversList tempList;
{
AMFLock lock(&m_sc);
tempList = m_observers;
}
for (typename ObserversList::iterator it = tempList.begin(); it != tempList.end(); it++)
{
Observer* pObserver = *it;
(pObserver->*pEvent)(arg0, arg1);
}
}
private:
AMFCriticalSection m_sc;
};
}
#endif //AMF_ObservableImpl_h


@@ -1,563 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
///-------------------------------------------------------------------------
/// @file OpenGLImportTable.cpp
/// @brief OpenGL import table
///-------------------------------------------------------------------------
#include "OpenGLImportTable.h"
#include "public/common/TraceAdapter.h"
#include "public/common/Thread.h"
using namespace amf;
#define AMF_FACILITY L"OpenGLImportTable"
//-------------------------------------------------------------------------------------------------
#define TRY_GET_DLL_ENTRY_POINT_CORE(w) \
w = reinterpret_cast<w##_fn>(amf_get_proc_address(m_hOpenGLDll, #w));
#define GET_DLL_ENTRY_POINT_CORE(w)\
TRY_GET_DLL_ENTRY_POINT_CORE(w)\
AMF_RETURN_IF_FALSE(w != nullptr, AMF_NOT_FOUND, L"Failed to acquire entry point %S", #w);
// On Windows, some functions are defined in the core opengl32.dll (especially the old core ones)
// and some are not, so we have to use wglGetProcAddress. It's a problem because the ones defined in
// opengl32.dll are not available through wglGetProcAddress and vice versa.
#if defined(_WIN32)
#define TRY_GET_DLL_ENTRY_POINT(w) \
{\
void const * const p = (void*)wglGetProcAddress(#w);\
if (p == nullptr || p == (void*)0x1 || p == (void*)0x2 || p == (void*)0x3 || p == (void*)-1)\
{\
TRY_GET_DLL_ENTRY_POINT_CORE(w);\
}\
else\
{\
w = reinterpret_cast<w##_fn>(p);\
}\
}
#else
#define TRY_GET_DLL_ENTRY_POINT(w) TRY_GET_DLL_ENTRY_POINT_CORE(w)
#endif
#define GET_DLL_ENTRY_POINT(w)\
TRY_GET_DLL_ENTRY_POINT(w)\
AMF_RETURN_IF_FALSE(w != nullptr, AMF_NOT_FOUND, L"Failed to acquire entry point %S", #w);
OpenGLImportTable::OpenGLImportTable() :
m_hOpenGLDll(nullptr),
glGetError(nullptr),
glGetString(nullptr),
// glGetStringi(nullptr),
glEnable(nullptr),
glClear(nullptr),
glClearAccum(nullptr),
glClearColor(nullptr),
glClearDepth(nullptr),
glClearIndex(nullptr),
glClearStencil(nullptr),
glDrawArrays(nullptr),
glViewport(nullptr),
glFinish(nullptr),
#if defined(_WIN32)
wglCreateContext(nullptr),
wglDeleteContext(nullptr),
wglGetCurrentContext(nullptr),
wglGetCurrentDC(nullptr),
wglMakeCurrent(nullptr),
wglGetProcAddress(nullptr),
wglGetExtensionsStringARB(nullptr),
wglSwapIntervalEXT(nullptr),
wndClass{},
hDummyWnd(nullptr),
hDummyDC(nullptr),
hDummyOGLContext(nullptr),
#elif defined(__ANDROID__)
eglInitialize(nullptr),
eglGetDisplay(nullptr),
eglChooseConfig(nullptr),
eglCreateContext(nullptr),
eglDestroyImageKHR(nullptr),
eglCreateImageKHR(nullptr),
eglSwapInterval(nullptr),
glEGLImageTargetTexture2DOES(nullptr),
glReadPixels(nullptr),
#elif defined(__linux)
glXDestroyContext(nullptr),
glXDestroyWindow(nullptr),
glXSwapBuffers(nullptr),
glXQueryExtension(nullptr),
glXChooseFBConfig(nullptr),
glXCreateWindow(nullptr),
glXCreateNewContext(nullptr),
glXMakeCurrent(nullptr),
glXGetCurrentContext(nullptr),
glXGetCurrentDrawable(nullptr),
glXQueryExtensionsString(nullptr),
glXSwapIntervalEXT(nullptr),
#endif
glBindTexture(nullptr),
glDeleteTextures(nullptr),
glGenTextures(nullptr),
glGetTexImage(nullptr),
glGetTexLevelParameteriv(nullptr),
glTexParameteri(nullptr),
glTexImage2D(nullptr),
glActiveTexture(nullptr),
glBindFramebuffer(nullptr),
// glBindRenderbuffer(nullptr),
glBlitFramebuffer(nullptr),
glCheckFramebufferStatus(nullptr),
glDeleteFramebuffers(nullptr),
// glDeleteRenderbuffers(nullptr),
// glFramebufferRenderbuffer(nullptr),
// glFramebufferTexture1D(nullptr),
glFramebufferTexture2D(nullptr),
// glFramebufferTexture3D(nullptr),
glFramebufferTextureLayer(nullptr),
glGenFramebuffers(nullptr),
// glGenRenderbuffers(nullptr),
// glGenerateMipmap(nullptr),
// glGetFramebufferAttachmentParameteriv(nullptr),
// glGetRenderbufferParameteriv(nullptr),
// glIsFramebuffer(nullptr),
// glIsRenderbuffer(nullptr),
// glRenderbufferStorage(nullptr),
// glRenderbufferStorageMultisample(nullptr),
glGenBuffers(nullptr),
glBindBuffer(nullptr),
glBufferData(nullptr),
glBufferSubData(nullptr),
glDeleteBuffers(nullptr),
glVertexAttribPointer(nullptr),
// glVertexAttribLPointer(nullptr),
// glVertexAttribIPointer(nullptr),
glBindVertexBuffer(nullptr),
glDisableVertexAttribArray(nullptr),
glEnableVertexAttribArray(nullptr),
glBindVertexArray(nullptr),
glDeleteVertexArrays(nullptr),
glGenVertexArrays(nullptr),
glIsVertexArray(nullptr),
glCreateShader(nullptr),
glShaderSource(nullptr),
glCompileShader(nullptr),
glGetShaderInfoLog(nullptr),
glGetShaderSource(nullptr),
glGetShaderiv(nullptr),
glCreateProgram(nullptr),
glAttachShader(nullptr),
glLinkProgram(nullptr),
glGetProgramInfoLog(nullptr),
glGetProgramiv(nullptr),
glValidateProgram(nullptr),
glUseProgram(nullptr),
glDeleteShader(nullptr),
glDeleteProgram(nullptr),
glGetUniformLocation(nullptr),
// glUniform1f(nullptr),
// glUniform1fv(nullptr),
glUniform1i(nullptr),
// glUniform1iv(nullptr),
// glUniform2f(nullptr),
// glUniform2fv(nullptr),
// glUniform2i(nullptr),
// glUniform2iv(nullptr),
// glUniform3f(nullptr),
// glUniform3fv(nullptr),
// glUniform3i(nullptr),
// glUniform3iv(nullptr),
// glUniform4f(nullptr),
glUniform4fv(nullptr),
// glUniform4i(nullptr),
// glUniform4iv(nullptr),
// glUniformMatrix2fv(nullptr),
// glUniformMatrix3fv(nullptr),
// glUniformMatrix4fv(nullptr),
glBindBufferBase(nullptr),
glBindBufferRange(nullptr),
glGetUniformBlockIndex(nullptr),
glUniformBlockBinding(nullptr),
glBindSampler(nullptr),
glDeleteSamplers(nullptr),
glGenSamplers(nullptr),
// glGetSamplerParameterIiv(nullptr),
// glGetSamplerParameterIuiv(nullptr),
// glGetSamplerParameterfv(nullptr),
// glGetSamplerParameteriv(nullptr),
// glIsSampler(nullptr),
// glSamplerParameterIiv(nullptr),
// glSamplerParameterIuiv(nullptr),
glSamplerParameterf(nullptr),
glSamplerParameterfv(nullptr),
glSamplerParameteri(nullptr)
// glSamplerParameteriv(nullptr)
{
}
OpenGLImportTable::~OpenGLImportTable()
{
if (m_hOpenGLDll != nullptr)
{
amf_free_library(m_hOpenGLDll);
}
m_hOpenGLDll = nullptr;
#if defined(_WIN32)
DestroyDummy();
#endif
}
AMF_RESULT OpenGLImportTable::LoadFunctionsTable()
{
if (m_hOpenGLDll != nullptr)
{
return AMF_OK;
}
#if defined(_WIN32)
m_hOpenGLDll = amf_load_library(L"opengl32.dll");
#elif defined(__ANDROID__)
m_hOpenGLDll = amf_load_library1(L"libGLES.so", true);
#elif defined(__linux__)
m_hOpenGLDll = amf_load_library1(L"libGL.so.1", true);
#endif
if (m_hOpenGLDll == nullptr)
{
AMFTraceError(L"OpenGLImportTable", L"amf_load_library() failed to load opengl dll!");
return AMF_FAIL;
}
// Core
GET_DLL_ENTRY_POINT_CORE(glGetError);
GET_DLL_ENTRY_POINT_CORE(glGetString);
GET_DLL_ENTRY_POINT_CORE(glEnable);
GET_DLL_ENTRY_POINT_CORE(glClear);
GET_DLL_ENTRY_POINT_CORE(glClearAccum);
GET_DLL_ENTRY_POINT_CORE(glClearColor);
GET_DLL_ENTRY_POINT_CORE(glClearDepth);
GET_DLL_ENTRY_POINT_CORE(glClearIndex);
GET_DLL_ENTRY_POINT_CORE(glClearStencil);
GET_DLL_ENTRY_POINT_CORE(glDrawArrays);
GET_DLL_ENTRY_POINT_CORE(glViewport);
GET_DLL_ENTRY_POINT_CORE(glFinish);
// Core (platform-dependent)
#if defined(_WIN32)
GET_DLL_ENTRY_POINT_CORE(wglCreateContext);
GET_DLL_ENTRY_POINT_CORE(wglDeleteContext);
GET_DLL_ENTRY_POINT_CORE(wglGetCurrentContext);
GET_DLL_ENTRY_POINT_CORE(wglGetCurrentDC);
GET_DLL_ENTRY_POINT_CORE(wglMakeCurrent);
GET_DLL_ENTRY_POINT_CORE(wglGetProcAddress);
#elif defined(__ANDROID__)
GET_DLL_ENTRY_POINT_CORE(eglInitialize);
GET_DLL_ENTRY_POINT_CORE(eglGetDisplay);
GET_DLL_ENTRY_POINT_CORE(eglChooseConfig);
GET_DLL_ENTRY_POINT_CORE(eglCreateContext);
GET_DLL_ENTRY_POINT_CORE(eglDestroyImageKHR);
GET_DLL_ENTRY_POINT_CORE(eglCreateImageKHR);
GET_DLL_ENTRY_POINT_CORE(glEGLImageTargetTexture2DOES);
GET_DLL_ENTRY_POINT_CORE(glReadPixels);
#elif defined(__linux)
GET_DLL_ENTRY_POINT_CORE(glXDestroyContext);
GET_DLL_ENTRY_POINT_CORE(glXDestroyWindow);
GET_DLL_ENTRY_POINT_CORE(glXSwapBuffers);
GET_DLL_ENTRY_POINT_CORE(glXQueryExtension);
GET_DLL_ENTRY_POINT_CORE(glXChooseFBConfig);
GET_DLL_ENTRY_POINT_CORE(glXCreateWindow);
GET_DLL_ENTRY_POINT_CORE(glXCreateNewContext);
GET_DLL_ENTRY_POINT_CORE(glXMakeCurrent);
GET_DLL_ENTRY_POINT_CORE(glXGetCurrentContext);
GET_DLL_ENTRY_POINT_CORE(glXGetCurrentDrawable);
#endif
// Textures
GET_DLL_ENTRY_POINT_CORE(glBindTexture);
GET_DLL_ENTRY_POINT_CORE(glDeleteTextures);
GET_DLL_ENTRY_POINT_CORE(glGenTextures);
GET_DLL_ENTRY_POINT_CORE(glGetTexImage);
GET_DLL_ENTRY_POINT_CORE(glGetTexLevelParameteriv);
GET_DLL_ENTRY_POINT_CORE(glTexParameteri);
GET_DLL_ENTRY_POINT_CORE(glTexImage2D);
// On Windows, we need to use wglGetProcAddress to get some
// addresses; however, that requires a current context. We can just create
// a small dummy context/window and then delete it when we are done.
#if defined(_WIN32)
{
AMF_RESULT res = CreateDummy();
if (res != AMF_OK)
{
DestroyDummy();
AMF_RETURN_IF_FAILED(res, L"CreateDummy() failed");
}
}
#endif
AMF_RESULT res = LoadContextFunctionsTable();
AMF_RETURN_IF_FAILED(res, L"LoadContextFunctionsTable() failed");
#if defined(_WIN32)
DestroyDummy();
#endif
return AMF_OK;
}
AMF_RESULT OpenGLImportTable::LoadContextFunctionsTable()
{
if (m_hOpenGLDll == nullptr)
{
AMF_RETURN_IF_FAILED(LoadFunctionsTable());
}
#if defined(_WIN32)
HGLRC context = wglGetCurrentContext();
AMF_RETURN_IF_FALSE(context != nullptr, AMF_NOT_INITIALIZED, L"LoadContextFunctionsTable() - context is not initialized");
#endif
// Core
// GET_DLL_ENTRY_POINT(glGetStringi);
#if defined(_WIN32)
TRY_GET_DLL_ENTRY_POINT(wglGetExtensionsStringARB);
TRY_GET_DLL_ENTRY_POINT(wglSwapIntervalEXT);
#elif defined(__ANDROID__)
TRY_GET_DLL_ENTRY_POINT(eglSwapInterval);
#elif defined(__linux)
TRY_GET_DLL_ENTRY_POINT(glXQueryExtensionsString);
TRY_GET_DLL_ENTRY_POINT(glXSwapIntervalEXT);
#endif
// Textures
GET_DLL_ENTRY_POINT(glActiveTexture);
// Frame buffer and render buffer objects
GET_DLL_ENTRY_POINT(glBindFramebuffer);
// GET_DLL_ENTRY_POINT(glBindRenderbuffer);
GET_DLL_ENTRY_POINT(glBlitFramebuffer);
GET_DLL_ENTRY_POINT(glCheckFramebufferStatus);
GET_DLL_ENTRY_POINT(glDeleteFramebuffers);
// GET_DLL_ENTRY_POINT(glDeleteRenderbuffers);
// GET_DLL_ENTRY_POINT(glFramebufferRenderbuffer);
// GET_DLL_ENTRY_POINT(glFramebufferTexture1D);
GET_DLL_ENTRY_POINT(glFramebufferTexture2D);
// GET_DLL_ENTRY_POINT(glFramebufferTexture3D);
GET_DLL_ENTRY_POINT(glFramebufferTextureLayer);
GET_DLL_ENTRY_POINT(glGenFramebuffers);
// GET_DLL_ENTRY_POINT(glGenRenderbuffers);
// GET_DLL_ENTRY_POINT(glGenerateMipmap);
// GET_DLL_ENTRY_POINT(glGetFramebufferAttachmentParameteriv);
// GET_DLL_ENTRY_POINT(glGetRenderbufferParameteriv);
// GET_DLL_ENTRY_POINT(glIsFramebuffer);
// GET_DLL_ENTRY_POINT(glIsRenderbuffer);
// GET_DLL_ENTRY_POINT(glRenderbufferStorage);
// GET_DLL_ENTRY_POINT(glRenderbufferStorageMultisample);
// Buffers
GET_DLL_ENTRY_POINT(glGenBuffers);
GET_DLL_ENTRY_POINT(glBindBuffer);
GET_DLL_ENTRY_POINT(glBufferData);
GET_DLL_ENTRY_POINT(glBufferSubData);
GET_DLL_ENTRY_POINT(glDeleteBuffers);
// Vertex buffer attributes
GET_DLL_ENTRY_POINT(glVertexAttribPointer);
// GET_DLL_ENTRY_POINT(glVertexAttribLPointer);
// GET_DLL_ENTRY_POINT(glVertexAttribIPointer);
GET_DLL_ENTRY_POINT(glBindVertexBuffer);
GET_DLL_ENTRY_POINT(glDisableVertexAttribArray);
GET_DLL_ENTRY_POINT(glEnableVertexAttribArray);
GET_DLL_ENTRY_POINT(glBindVertexArray);
GET_DLL_ENTRY_POINT(glDeleteVertexArrays);
GET_DLL_ENTRY_POINT(glGenVertexArrays);
GET_DLL_ENTRY_POINT(glIsVertexArray);
// Shaders
GET_DLL_ENTRY_POINT(glCreateShader);
GET_DLL_ENTRY_POINT(glShaderSource);
GET_DLL_ENTRY_POINT(glCompileShader);
GET_DLL_ENTRY_POINT(glGetShaderInfoLog);
GET_DLL_ENTRY_POINT(glGetShaderSource);
GET_DLL_ENTRY_POINT(glGetShaderiv);
GET_DLL_ENTRY_POINT(glCreateProgram);
GET_DLL_ENTRY_POINT(glAttachShader);
GET_DLL_ENTRY_POINT(glLinkProgram);
GET_DLL_ENTRY_POINT(glGetProgramInfoLog);
GET_DLL_ENTRY_POINT(glGetProgramiv);
GET_DLL_ENTRY_POINT(glValidateProgram);
GET_DLL_ENTRY_POINT(glUseProgram);
GET_DLL_ENTRY_POINT(glDeleteShader);
GET_DLL_ENTRY_POINT(glDeleteProgram);
// Uniforms
GET_DLL_ENTRY_POINT(glGetUniformLocation);
// GET_DLL_ENTRY_POINT(glUniform1f);
// GET_DLL_ENTRY_POINT(glUniform1fv);
GET_DLL_ENTRY_POINT(glUniform1i);
// GET_DLL_ENTRY_POINT(glUniform1iv);
// GET_DLL_ENTRY_POINT(glUniform2f);
// GET_DLL_ENTRY_POINT(glUniform2fv);
// GET_DLL_ENTRY_POINT(glUniform2i);
// GET_DLL_ENTRY_POINT(glUniform2iv);
// GET_DLL_ENTRY_POINT(glUniform3f);
// GET_DLL_ENTRY_POINT(glUniform3fv);
// GET_DLL_ENTRY_POINT(glUniform3i);
// GET_DLL_ENTRY_POINT(glUniform3iv);
// GET_DLL_ENTRY_POINT(glUniform4f);
GET_DLL_ENTRY_POINT(glUniform4fv);
// GET_DLL_ENTRY_POINT(glUniform4i);
// GET_DLL_ENTRY_POINT(glUniform4iv);
// GET_DLL_ENTRY_POINT(glUniformMatrix2fv);
// GET_DLL_ENTRY_POINT(glUniformMatrix3fv);
// GET_DLL_ENTRY_POINT(glUniformMatrix4fv);
// Uniform buffer objects
GET_DLL_ENTRY_POINT(glBindBufferBase);
GET_DLL_ENTRY_POINT(glBindBufferRange);
GET_DLL_ENTRY_POINT(glGetUniformBlockIndex);
GET_DLL_ENTRY_POINT(glUniformBlockBinding);
// Sampler objects
GET_DLL_ENTRY_POINT(glBindSampler);
GET_DLL_ENTRY_POINT(glDeleteSamplers);
GET_DLL_ENTRY_POINT(glGenSamplers);
// GET_DLL_ENTRY_POINT(glGetSamplerParameterIiv);
// GET_DLL_ENTRY_POINT(glGetSamplerParameterIuiv);
// GET_DLL_ENTRY_POINT(glGetSamplerParameterfv);
// GET_DLL_ENTRY_POINT(glGetSamplerParameteriv);
// GET_DLL_ENTRY_POINT(glIsSampler);
// GET_DLL_ENTRY_POINT(glSamplerParameterIiv);
// GET_DLL_ENTRY_POINT(glSamplerParameterIuiv);
GET_DLL_ENTRY_POINT(glSamplerParameterf);
GET_DLL_ENTRY_POINT(glSamplerParameterfv);
GET_DLL_ENTRY_POINT(glSamplerParameteri);
// GET_DLL_ENTRY_POINT(glSamplerParameteriv);
return AMF_OK;
}
#if defined(_WIN32)
AMF_RESULT OpenGLImportTable::CreateDummy()
{
DestroyDummy();
wndClass = { 0 };
wndClass.cbSize = sizeof(wndClass);
wndClass.style = CS_HREDRAW | CS_VREDRAW | CS_OWNDC;
wndClass.lpfnWndProc = DefWindowProcW;
wndClass.hInstance = GetModuleHandle(0);
wndClass.lpszClassName = L"OpenGL_Dummy_Class";
int ret = RegisterClassExW(&wndClass);
AMF_RETURN_IF_FALSE(ret != 0, AMF_FAIL, L"CreateDummy() - RegisterClassExW() failed, error=%d", GetLastError());
hDummyWnd = CreateWindowExW(0, wndClass.lpszClassName, L"Dummy OpenGL Window", 0, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, 0, 0, wndClass.hInstance, 0);
AMF_RETURN_IF_FALSE(hDummyWnd != nullptr, AMF_FAIL, L"CreateDummy() - CreateWindowExW() failed to create window");
hDummyDC = GetDC(hDummyWnd);
PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.cColorBits = 32;
pfd.cAlphaBits = 8;
pfd.iLayerType = PFD_MAIN_PLANE;
pfd.cDepthBits = 24;
pfd.cStencilBits = 8;
int pixel_format = ChoosePixelFormat(hDummyDC, &pfd);
AMF_RETURN_IF_FALSE(pixel_format != 0, AMF_FAIL, L"CreateDummy() - ChoosePixelFormat() failed to find a suitable pixel format.");
ret = SetPixelFormat(hDummyDC, pixel_format, &pfd);
AMF_RETURN_IF_FALSE(ret != 0, AMF_FAIL, L"CreateDummy() - SetPixelFormat() failed");
hDummyOGLContext = wglCreateContext(hDummyDC);
AMF_RETURN_IF_FALSE(hDummyOGLContext != nullptr, AMF_FAIL, L"CreateDummy() - wglCreateContext() failed");
ret = wglMakeCurrent(hDummyDC, hDummyOGLContext);
AMF_RETURN_IF_FALSE(ret != 0, AMF_FAIL, L"CreateDummy() - wglMakeCurrent() failed");
return AMF_OK;
}
AMF_RESULT OpenGLImportTable::DestroyDummy()
{
if (hDummyOGLContext != nullptr)
{
wglDeleteContext(hDummyOGLContext);
hDummyOGLContext = nullptr;
}
if (hDummyWnd != nullptr || hDummyDC != nullptr)
{
if (wglMakeCurrent != nullptr)
{
wglMakeCurrent(hDummyDC, 0);
}
ReleaseDC(hDummyWnd, hDummyDC);
DestroyWindow(hDummyWnd);
hDummyWnd = nullptr;
hDummyDC = nullptr;
}
if (wndClass.lpszClassName != nullptr)
{
UnregisterClassW(wndClass.lpszClassName, wndClass.hInstance);
wndClass = {};
}
return AMF_OK;
}
#endif
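
The table above exists because of a Windows quirk: legacy GL 1.1 entry points are exported directly by opengl32.dll, while newer ones are only reachable through wglGetProcAddress, which in turn requires a current GL context (hence the dummy window/context above). A stripped-down, Windows-only sketch of that dual lookup, independent of the AMF macros (error handling omitted):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Resolve a GL entry point by name, mirroring TRY_GET_DLL_ENTRY_POINT above.
static void* GetGLProc(HMODULE opengl32, const char* name)
{
    // Extension / post-1.1 functions: wglGetProcAddress only, and it reports
    // failure with small sentinel values rather than just NULL.
    void* p = (void*)wglGetProcAddress(name);
    if (p == nullptr || p == (void*)0x1 || p == (void*)0x2 ||
        p == (void*)0x3 || p == (void*)-1)
    {
        // GL 1.1 core functions: exported straight from opengl32.dll.
        p = (void*)GetProcAddress(opengl32, name);
    }
    return p;
}

// Usage (a GL context must already be current, e.g. the dummy one above):
//   HMODULE dll = LoadLibraryW(L"opengl32.dll");
//   auto glActiveTexture_p =
//       reinterpret_cast<void (APIENTRY*)(GLenum)>(GetGLProc(dll, "glActiveTexture"));
```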


@@ -1,540 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
///-------------------------------------------------------------------------
/// @file OpenGLImportTable.h
/// @brief OpenGL import table
///-------------------------------------------------------------------------
#pragma once
#include "public/include/core/Result.h"
#include "public/common/AMFSTL.h"
#if defined(_WIN32)
#include <Wingdi.h>
#include <gl/GL.h>
#include <gl/GLU.h>
#elif defined(__ANDROID__)
////todo:AA #include <android/native_window.h> // requires ndk r5 or newer
#define GL_GLEXT_PROTOTYPES
#define EGL_EGLEXT_PROTOTYPES
#include <EGL/egl.h> // requires ndk r5 or newer
#include <EGL/eglext.h>
#include <GLES/gl.h> // requires ndk r5 or newer
#include <GLES/glext.h> // requires ndk r5 or newer
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
////todo:AA #include <ui/FramebufferNativeWindow.h>
// #include <gralloc_priv.h>
#include <time.h>
#if !defined(CLOCK_MONOTONIC_RAW)
#define CLOCK_MONOTONIC_RAW 4
#endif
////todo:AA #include <amdgralloc.h>
////todo:AA using namespace android;
#if !defined(GL_CLAMP)
#define GL_CLAMP GL_CLAMP_TO_EDGE
#endif
#elif defined(__linux)
#include <GL/glx.h>
#include <GL/glu.h>
#endif
#ifndef AMF_GLAPI
#if defined(_WIN32)
#define AMF_GLAPI WINGDIAPI
#elif defined(__ANDROID__)
#define AMF_GLAPI GL_API
#else // __linux
#define AMF_GLAPI
#endif
#endif
#ifndef AMF_GLAPIENTRY
#if defined(_WIN32)
#define AMF_GLAPIENTRY APIENTRY
#elif defined(__ANDROID__)
#define AMF_GLAPIENTRY GL_APIENTRY
#elif defined(__linux)
#define AMF_GLAPIENTRY GLAPIENTRY
#else
#define AMF_GLAPIENTRY
#endif
#endif
typedef char GLchar;
#if defined(__ANDROID__)
typedef double GLclampd;
#define GL_TEXTURE_BORDER_COLOR 0x1004
#else
typedef ptrdiff_t GLintptr;
#endif
#ifdef _WIN32
typedef size_t GLsizeiptr; // Defined in glx.h on linux
#endif
// Core
typedef AMF_GLAPI GLenum (AMF_GLAPIENTRY* glGetError_fn) (void);
typedef AMF_GLAPI const GLubyte* (AMF_GLAPIENTRY* glGetString_fn) (GLenum name);
typedef AMF_GLAPI const GLubyte* (AMF_GLAPIENTRY* glGetStringi_fn) (GLenum name, GLuint index);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glEnable_fn) (GLenum cap);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glClear_fn) (GLbitfield mask);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glClearAccum_fn) (GLfloat red, GLfloat green, GLfloat blue, GLfloat alpha);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glClearColor_fn) (GLclampf red, GLclampf green, GLclampf blue, GLclampf alpha);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glClearDepth_fn) (GLclampd depth);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glClearIndex_fn) (GLfloat c);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glClearStencil_fn) (GLint s);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glDrawArrays_fn) (GLenum mode, GLint first, GLsizei count);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glViewport_fn) (GLint x, GLint y, GLsizei width, GLsizei height);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glFinish_fn) (void);
// Textures
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBindTexture_fn) (GLenum target, GLuint texture);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glDeleteTextures_fn) (GLsizei n, const GLuint* textures);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGenTextures_fn) (GLsizei n, GLuint* textures);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetTexImage_fn) (GLenum target, GLint level, GLenum format, GLenum type, GLvoid* pixels);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetTexLevelParameteriv_fn) (GLenum target, GLint level, GLenum pname, GLint* params);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glTexParameteri_fn) (GLenum target, GLenum pname, GLint param);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glTexImage2D_fn) (GLenum target, GLint level, GLint internalformat, GLsizei width, GLsizei height,
GLint border, GLenum format, GLenum type, const GLvoid* pixels);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glActiveTexture_fn) (GLenum texture);
// Framebuffer and Renderbuffer objects - EXT: GL_ARB_framebuffer_object
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBindFramebuffer_fn) (GLenum target, GLuint framebuffer);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBindRenderbuffer_fn) (GLenum target, GLuint renderbuffer);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBlitFramebuffer_fn) (GLint srcX0, GLint srcY0, GLint srcX1, GLint srcY1, GLint dstX0, GLint dstY0, GLint dstX1, GLint dstY1, GLbitfield mask, GLenum filter);
typedef AMF_GLAPI GLenum (AMF_GLAPIENTRY* glCheckFramebufferStatus_fn) (GLenum target);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glDeleteFramebuffers_fn) (GLsizei n, const GLuint* framebuffers);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glDeleteRenderbuffers_fn) (GLsizei n, const GLuint* renderbuffers);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glFramebufferRenderbuffer_fn) (GLenum target, GLenum attachment, GLenum renderbuffertarget, GLuint renderbuffer);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glFramebufferTexture1D_fn) (GLenum target, GLenum attachment, GLenum textarget, GLuint texture, GLint level);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glFramebufferTexture2D_fn) (GLenum target, GLenum attachment, GLenum textarget, GLuint texture, GLint level);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glFramebufferTexture3D_fn) (GLenum target, GLenum attachment, GLenum textarget, GLuint texture, GLint level, GLint layer);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glFramebufferTextureLayer_fn) (GLenum target, GLenum attachment, GLuint texture, GLint level, GLint layer);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGenFramebuffers_fn) (GLsizei n, GLuint* framebuffers);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGenRenderbuffers_fn) (GLsizei n, GLuint* renderbuffers);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGenerateMipmap_fn) (GLenum target);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetFramebufferAttachmentParameteriv_fn) (GLenum target, GLenum attachment, GLenum pname, GLint* params);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetRenderbufferParameteriv_fn) (GLenum target, GLenum pname, GLint* params);
typedef AMF_GLAPI GLboolean (AMF_GLAPIENTRY* glIsFramebuffer_fn) (GLuint framebuffer);
typedef AMF_GLAPI GLboolean (AMF_GLAPIENTRY* glIsRenderbuffer_fn) (GLuint renderbuffer);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glRenderbufferStorage_fn) (GLenum target, GLenum internalformat, GLsizei width, GLsizei height);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glRenderbufferStorageMultisample_fn) (GLenum target, GLsizei samples, GLenum internalformat, GLsizei width, GLsizei height);
// Buffers
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGenBuffers_fn) (GLsizei n, GLuint* buffers);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBindBuffer_fn) (GLenum target, GLuint buffer);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBufferData_fn) (GLenum target, GLsizeiptr size, const void* data, GLenum usage);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBufferSubData_fn) (GLenum target, GLintptr offset, GLsizeiptr size, const void* data);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glDeleteBuffers_fn) (GLsizei n, const GLuint* buffers);
// Vertex attributes
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glVertexAttribPointer_fn) (GLuint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const void* pointer);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glVertexAttribLPointer_fn) (GLuint index, GLint size, GLenum type, GLsizei stride, const void* pointer);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glVertexAttribIPointer_fn) (GLuint index, GLint size, GLenum type, GLsizei stride, const void* pointer);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBindVertexBuffer_fn) (GLuint bindingindex, GLuint buffer, GLintptr offset, GLsizei stride);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glDisableVertexAttribArray_fn) (GLuint index);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glEnableVertexAttribArray_fn) (GLuint index);
// Vertex array objects
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBindVertexArray_fn) (GLuint array);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glDeleteVertexArrays_fn) (GLsizei n, const GLuint* arrays);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGenVertexArrays_fn) (GLsizei n, GLuint* arrays);
typedef AMF_GLAPI GLboolean (AMF_GLAPIENTRY* glIsVertexArray_fn) (GLuint array);
// Shaders
typedef AMF_GLAPI GLuint (AMF_GLAPIENTRY* glCreateShader_fn) (GLenum type);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glShaderSource_fn) (GLuint shader, GLsizei count, const GLchar* const* string, const GLint* length);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glCompileShader_fn) (GLuint shader);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetShaderInfoLog_fn) (GLuint shader, GLsizei bufSize, GLsizei* length, GLchar* infoLog);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetShaderSource_fn) (GLuint obj, GLsizei maxLength, GLsizei* length, GLchar* source);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetShaderiv_fn) (GLuint shader, GLenum pname, GLint* param);
typedef AMF_GLAPI GLuint (AMF_GLAPIENTRY* glCreateProgram_fn) (void);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glAttachShader_fn) (GLuint program, GLuint shader);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glLinkProgram_fn) (GLuint program);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetProgramInfoLog_fn) (GLuint program, GLsizei bufSize, GLsizei* length, GLchar* infoLog);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetProgramiv_fn) (GLuint program, GLenum pname, GLint* param);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glValidateProgram_fn) (GLuint program);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUseProgram_fn) (GLuint program);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glDeleteShader_fn) (GLuint shader);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glDeleteProgram_fn) (GLuint program);
// Uniforms
typedef AMF_GLAPI GLint (AMF_GLAPIENTRY* glGetUniformLocation_fn) (GLuint program, const GLchar* name);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform1f_fn) (GLint location, GLfloat v0);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform1fv_fn) (GLint location, GLsizei count, const GLfloat* value);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform1i_fn) (GLint location, GLint v0);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform1iv_fn) (GLint location, GLsizei count, const GLint* value);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform2f_fn) (GLint location, GLfloat v0, GLfloat v1);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform2fv_fn) (GLint location, GLsizei count, const GLfloat* value);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform2i_fn) (GLint location, GLint v0, GLint v1);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform2iv_fn) (GLint location, GLsizei count, const GLint* value);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform3f_fn) (GLint location, GLfloat v0, GLfloat v1, GLfloat v2);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform3fv_fn) (GLint location, GLsizei count, const GLfloat* value);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform3i_fn) (GLint location, GLint v0, GLint v1, GLint v2);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform3iv_fn) (GLint location, GLsizei count, const GLint* value);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform4f_fn) (GLint location, GLfloat v0, GLfloat v1, GLfloat v2, GLfloat v3);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform4fv_fn) (GLint location, GLsizei count, const GLfloat* value);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform4i_fn) (GLint location, GLint v0, GLint v1, GLint v2, GLint v3);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniform4iv_fn) (GLint location, GLsizei count, const GLint* value);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniformMatrix2fv_fn) (GLint location, GLsizei count, GLboolean transpose, const GLfloat* value);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniformMatrix3fv_fn) (GLint location, GLsizei count, GLboolean transpose, const GLfloat* value);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniformMatrix4fv_fn) (GLint location, GLsizei count, GLboolean transpose, const GLfloat* value);
// Uniform block objects - EXT: ARB_Uniform_Buffer_Object
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBindBufferBase_fn) (GLenum target, GLuint index, GLuint buffer);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBindBufferRange_fn) (GLenum target, GLuint index, GLuint buffer, GLintptr offset, GLsizeiptr size);
typedef AMF_GLAPI GLuint (AMF_GLAPIENTRY* glGetUniformBlockIndex_fn) (GLuint program, const GLchar* uniformBlockName);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glUniformBlockBinding_fn) (GLuint program, GLuint uniformBlockIndex, GLuint uniformBlockBinding);
// Sampler Objects - EXT: GL_ARB_sampler_objects
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glBindSampler_fn) (GLuint unit, GLuint sampler);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glDeleteSamplers_fn) (GLsizei count, const GLuint* samplers);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGenSamplers_fn) (GLsizei count, GLuint* samplers);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetSamplerParameterIiv_fn) (GLuint sampler, GLenum pname, GLint* params);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetSamplerParameterIuiv_fn) (GLuint sampler, GLenum pname, GLuint* params);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetSamplerParameterfv_fn) (GLuint sampler, GLenum pname, GLfloat* params);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glGetSamplerParameteriv_fn) (GLuint sampler, GLenum pname, GLint* params);
typedef AMF_GLAPI GLboolean (AMF_GLAPIENTRY* glIsSampler_fn) (GLuint sampler);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glSamplerParameterIiv_fn) (GLuint sampler, GLenum pname, const GLint* params);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glSamplerParameterIuiv_fn) (GLuint sampler, GLenum pname, const GLuint* params);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glSamplerParameterf_fn) (GLuint sampler, GLenum pname, GLfloat param);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glSamplerParameterfv_fn) (GLuint sampler, GLenum pname, const GLfloat* params);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glSamplerParameteri_fn) (GLuint sampler, GLenum pname, GLint param);
typedef AMF_GLAPI void (AMF_GLAPIENTRY* glSamplerParameteriv_fn) (GLuint sampler, GLenum pname, const GLint* params);
#if defined(_WIN32)
typedef WINGDIAPI HGLRC (WINAPI* wglCreateContext_fn) (HDC);
typedef WINGDIAPI BOOL (WINAPI* wglDeleteContext_fn) (HGLRC);
typedef WINGDIAPI HGLRC (WINAPI* wglGetCurrentContext_fn) (VOID);
typedef WINGDIAPI HDC (WINAPI* wglGetCurrentDC_fn) (VOID);
typedef WINGDIAPI BOOL (WINAPI* wglMakeCurrent_fn) (HDC, HGLRC);
typedef WINGDIAPI PROC (WINAPI* wglGetProcAddress_fn) (LPCSTR func);
typedef WINGDIAPI const char* (WINAPI* wglGetExtensionsStringARB_fn) (HDC hdc);
typedef WINGDIAPI BOOL (WINAPI* wglSwapIntervalEXT_fn) (int interval);
#elif defined(__ANDROID__)
typedef EGLAPI EGLBoolean (EGLAPIENTRY* eglInitialize_fn) (EGLDisplay dpy, EGLint* major, EGLint* minor);
typedef EGLAPI EGLDisplay (EGLAPIENTRY* eglGetDisplay_fn) (EGLNativeDisplayType display_id);
typedef EGLAPI EGLBoolean (EGLAPIENTRY* eglChooseConfig_fn) (EGLDisplay dpy, const EGLint* attrib_list, EGLConfig* configs, EGLint config_size, EGLint* num_config);
typedef EGLAPI EGLContext (EGLAPIENTRY* eglCreateContext_fn) (EGLDisplay dpy, EGLConfig config, EGLContext share_context, const EGLint* attrib_list);
typedef EGLAPI EGLBoolean (EGLAPIENTRY* eglDestroyImageKHR_fn) (EGLDisplay dpy, EGLImageKHR image);
typedef EGLAPI EGLImageKHR (EGLAPIENTRY* eglCreateImageKHR_fn) (EGLDisplay dpy, EGLContext ctx, EGLenum target, EGLClientBuffer buffer, const EGLint* attrib_list);
typedef EGLAPI EGLBoolean (EGLAPIENTRY* eglSwapInterval_fn) (EGLDisplay dpy, EGLint interval);
typedef GL_API void (GL_APIENTRY* glEGLImageTargetTexture2DOES_fn) (GLenum target, GLeglImageOES image);
typedef GL_API void (GL_APIENTRY* glReadPixels_fn) (GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid* pixels);
#elif defined(__linux)
typedef void (GLAPIENTRY* glXDestroyContext_fn) (Display* dpy, GLXContext ctx);
typedef void (GLAPIENTRY* glXDestroyWindow_fn) (Display* dpy, GLXWindow window);
typedef void (GLAPIENTRY* glXSwapBuffers_fn) (Display* dpy, GLXDrawable drawable);
typedef Bool (GLAPIENTRY* glXQueryExtension_fn) (Display* dpy, int* errorb, int* event);
typedef GLXFBConfig* (GLAPIENTRY* glXChooseFBConfig_fn) (Display* dpy, int screen, const int* attribList, int* nitems );
typedef GLXWindow (GLAPIENTRY* glXCreateWindow_fn) (Display* dpy, GLXFBConfig config, Window win, const int* attribList );
typedef GLXContext (GLAPIENTRY* glXCreateNewContext_fn) (Display* dpy, GLXFBConfig config, int renderType, GLXContext shareList, Bool direct );
typedef Bool (GLAPIENTRY* glXMakeCurrent_fn) (Display* dpy, GLXDrawable drawable, GLXContext ctx);
typedef GLXContext (GLAPIENTRY* glXGetCurrentContext_fn) (void);
typedef GLXDrawable (GLAPIENTRY* glXGetCurrentDrawable_fn) (void);
typedef const char* (GLAPIENTRY* glXQueryExtensionsString_fn) (Display* dpy, int screen);
typedef void (GLAPIENTRY* glXSwapIntervalEXT_fn) (Display* dpy, GLXDrawable drawable, int interval);
#endif
// Target
#define GL_DEPTH_BUFFER 0x8223
#define GL_STENCIL_BUFFER 0x8224
#define GL_ARRAY_BUFFER 0x8892
#define GL_ELEMENT_ARRAY_BUFFER 0x8893
#define GL_PIXEL_PACK_BUFFER 0x88EB
#define GL_PIXEL_UNPACK_BUFFER 0x88EC
#define GL_UNIFORM_BUFFER 0x8A11
#define GL_TEXTURE_BUFFER 0x8C2A
#define GL_TRANSFORM_FEEDBACK_BUFFER 0x8C8E
#define GL_READ_FRAMEBUFFER 0x8CA8
#define GL_DRAW_FRAMEBUFFER 0x8CA9
#define GL_FRAMEBUFFER 0x8D40
#define GL_RENDERBUFFER 0x8D41
#define GL_COPY_READ_BUFFER 0x8F36
#define GL_COPY_WRITE_BUFFER 0x8F37
#define GL_DRAW_INDIRECT_BUFFER 0x8F3F
#define GL_SHADER_STORAGE_BUFFER 0x90D2
#define GL_DISPATCH_INDIRECT_BUFFER 0x90EE
#define GL_QUERY_BUFFER 0x9192
#define GL_ATOMIC_COUNTER_BUFFER 0x92C0
// Attachments
#define GL_COLOR_ATTACHMENT0 0x8CE0
#define GL_COLOR_ATTACHMENT_UNIT(x) (GL_COLOR_ATTACHMENT0 + x)
#define GL_DEPTH_ATTACHMENT 0x8D00
#define GL_STENCIL_ATTACHMENT 0x8D20
// Frame Buffer Status
#define GL_FRAMEBUFFER_UNDEFINED 0x8219
#define GL_FRAMEBUFFER_COMPLETE 0x8CD5
#define GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT 0x8CD6
#define GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT 0x8CD7
#define GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER 0x8CDB
#define GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER 0x8CDC
#define GL_FRAMEBUFFER_UNSUPPORTED 0x8CDD
#define GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE 0x8D56
#define GL_FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS 0x8DA8
// Texture unit
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE_UNIT(x) (GL_TEXTURE0 + x)
// Usage
#define GL_STREAM_DRAW 0x88E0
#define GL_STREAM_READ 0x88E1
#define GL_STREAM_COPY 0x88E2
#define GL_STATIC_DRAW 0x88E4
#define GL_STATIC_READ 0x88E5
#define GL_STATIC_COPY 0x88E6
#define GL_DYNAMIC_DRAW 0x88E8
#define GL_DYNAMIC_READ 0x88E9
#define GL_DYNAMIC_COPY 0x88EA
// Shader Type
#define GL_FRAGMENT_SHADER 0x8B30
#define GL_VERTEX_SHADER 0x8B31
#define GL_GEOMETRY_SHADER 0x8DD9
#define GL_TESS_EVALUATION_SHADER 0x8E87
#define GL_TESS_CONTROL_SHADER 0x8E88
#define GL_COMPUTE_SHADER 0x91B9
// Shader Info
#define GL_DELETE_STATUS 0x8B80
#define GL_COMPILE_STATUS 0x8B81
#define GL_LINK_STATUS 0x8B82
#define GL_VALIDATE_STATUS 0x8B83
#define GL_INFO_LOG_LENGTH 0x8B84
// Sampler params
#define GL_TEXTURE_MIN_LOD 0x813A
#define GL_TEXTURE_MAX_LOD 0x813B
#define GL_TEXTURE_WRAP_R 0x8072
#define GL_TEXTURE_COMPARE_MODE 0x884C
#define GL_TEXTURE_COMPARE_FUNC 0x884D
struct OpenGLImportTable
{
OpenGLImportTable();
~OpenGLImportTable();
AMF_RESULT LoadFunctionsTable();
AMF_RESULT LoadContextFunctionsTable();
amf_handle m_hOpenGLDll;
// Core
glGetError_fn glGetError;
glGetString_fn glGetString;
// glGetStringi_fn glGetStringi;
glEnable_fn glEnable;
glClear_fn glClear;
glClearAccum_fn glClearAccum;
glClearColor_fn glClearColor;
glClearDepth_fn glClearDepth;
glClearIndex_fn glClearIndex;
glClearStencil_fn glClearStencil;
glDrawArrays_fn glDrawArrays;
glViewport_fn glViewport;
glFinish_fn glFinish;
// Core (platform-dependent)
#if defined(_WIN32)
wglCreateContext_fn wglCreateContext;
wglDeleteContext_fn wglDeleteContext;
wglGetCurrentContext_fn wglGetCurrentContext;
wglGetCurrentDC_fn wglGetCurrentDC;
wglMakeCurrent_fn wglMakeCurrent;
wglGetProcAddress_fn wglGetProcAddress;
wglGetExtensionsStringARB_fn wglGetExtensionsStringARB;
wglSwapIntervalEXT_fn wglSwapIntervalEXT;
#elif defined(__ANDROID__)
eglInitialize_fn eglInitialize;
eglGetDisplay_fn eglGetDisplay;
eglChooseConfig_fn eglChooseConfig;
eglCreateContext_fn eglCreateContext;
eglDestroyImageKHR_fn eglDestroyImageKHR;
eglCreateImageKHR_fn eglCreateImageKHR;
eglSwapInterval_fn eglSwapInterval;
glEGLImageTargetTexture2DOES_fn glEGLImageTargetTexture2DOES;
glReadPixels_fn glReadPixels;
#elif defined(__linux)
glXDestroyContext_fn glXDestroyContext;
glXDestroyWindow_fn glXDestroyWindow;
glXSwapBuffers_fn glXSwapBuffers;
glXQueryExtension_fn glXQueryExtension;
glXChooseFBConfig_fn glXChooseFBConfig;
glXCreateWindow_fn glXCreateWindow;
glXCreateNewContext_fn glXCreateNewContext;
glXMakeCurrent_fn glXMakeCurrent;
glXGetCurrentContext_fn glXGetCurrentContext;
glXGetCurrentDrawable_fn glXGetCurrentDrawable;
glXQueryExtensionsString_fn glXQueryExtensionsString;
glXSwapIntervalEXT_fn glXSwapIntervalEXT;
#endif
// Textures
glBindTexture_fn glBindTexture;
glDeleteTextures_fn glDeleteTextures;
glGenTextures_fn glGenTextures;
glGetTexImage_fn glGetTexImage;
glGetTexLevelParameteriv_fn glGetTexLevelParameteriv;
glTexParameteri_fn glTexParameteri;
glTexImage2D_fn glTexImage2D;
glActiveTexture_fn glActiveTexture;
// Frame buffer and render buffer objects
glBindFramebuffer_fn glBindFramebuffer;
// glBindRenderbuffer_fn glBindRenderbuffer;
glBlitFramebuffer_fn glBlitFramebuffer;
glCheckFramebufferStatus_fn glCheckFramebufferStatus;
glDeleteFramebuffers_fn glDeleteFramebuffers;
// glDeleteRenderbuffers_fn glDeleteRenderbuffers;
// glFramebufferRenderbuffer_fn glFramebufferRenderbuffer;
// glFramebufferTexture1D_fn glFramebufferTexture1D;
glFramebufferTexture2D_fn glFramebufferTexture2D;
// glFramebufferTexture3D_fn glFramebufferTexture3D;
glFramebufferTextureLayer_fn glFramebufferTextureLayer;
glGenFramebuffers_fn glGenFramebuffers;
// glGenRenderbuffers_fn glGenRenderbuffers;
// glGenerateMipmap_fn glGenerateMipmap;
// glGetFramebufferAttachmentParameteriv_fn glGetFramebufferAttachmentParameteriv;
// glGetRenderbufferParameteriv_fn glGetRenderbufferParameteriv;
// glIsFramebuffer_fn glIsFramebuffer;
// glIsRenderbuffer_fn glIsRenderbuffer;
// glRenderbufferStorage_fn glRenderbufferStorage;
// glRenderbufferStorageMultisample_fn glRenderbufferStorageMultisample;
// Buffers
glGenBuffers_fn glGenBuffers;
glBindBuffer_fn glBindBuffer;
glBufferData_fn glBufferData;
glBufferSubData_fn glBufferSubData;
glDeleteBuffers_fn glDeleteBuffers;
// Vertex attributes
glVertexAttribPointer_fn glVertexAttribPointer;
// glVertexAttribLPointer_fn glVertexAttribLPointer;
// glVertexAttribIPointer_fn glVertexAttribIPointer;
glBindVertexBuffer_fn glBindVertexBuffer;
glDisableVertexAttribArray_fn glDisableVertexAttribArray;
glEnableVertexAttribArray_fn glEnableVertexAttribArray;
// Vertex array objects
glBindVertexArray_fn glBindVertexArray;
glDeleteVertexArrays_fn glDeleteVertexArrays;
glGenVertexArrays_fn glGenVertexArrays;
glIsVertexArray_fn glIsVertexArray;
// Shaders
glCreateShader_fn glCreateShader;
glShaderSource_fn glShaderSource;
glCompileShader_fn glCompileShader;
glGetShaderInfoLog_fn glGetShaderInfoLog;
glGetShaderSource_fn glGetShaderSource;
glGetShaderiv_fn glGetShaderiv;
glCreateProgram_fn glCreateProgram;
glAttachShader_fn glAttachShader;
glLinkProgram_fn glLinkProgram;
glGetProgramInfoLog_fn glGetProgramInfoLog;
glGetProgramiv_fn glGetProgramiv;
glValidateProgram_fn glValidateProgram;
glUseProgram_fn glUseProgram;
glDeleteShader_fn glDeleteShader;
glDeleteProgram_fn glDeleteProgram;
// Uniforms
glGetUniformLocation_fn glGetUniformLocation;
// glUniform1f_fn glUniform1f;
// glUniform1fv_fn glUniform1fv;
glUniform1i_fn glUniform1i;
// glUniform1iv_fn glUniform1iv;
// glUniform2f_fn glUniform2f;
// glUniform2fv_fn glUniform2fv;
// glUniform2i_fn glUniform2i;
// glUniform2iv_fn glUniform2iv;
// glUniform3f_fn glUniform3f;
// glUniform3fv_fn glUniform3fv;
// glUniform3i_fn glUniform3i;
// glUniform3iv_fn glUniform3iv;
// glUniform4f_fn glUniform4f;
glUniform4fv_fn glUniform4fv;
// glUniform4i_fn glUniform4i;
// glUniform4iv_fn glUniform4iv;
// glUniformMatrix2fv_fn glUniformMatrix2fv;
// glUniformMatrix3fv_fn glUniformMatrix3fv;
// glUniformMatrix4fv_fn glUniformMatrix4fv;
// Uniform buffer objects
glBindBufferBase_fn glBindBufferBase;
glBindBufferRange_fn glBindBufferRange;
glGetUniformBlockIndex_fn glGetUniformBlockIndex;
glUniformBlockBinding_fn glUniformBlockBinding;
// Sampler objects
glBindSampler_fn glBindSampler;
glDeleteSamplers_fn glDeleteSamplers;
glGenSamplers_fn glGenSamplers;
// glGetSamplerParameterIiv_fn glGetSamplerParameterIiv;
// glGetSamplerParameterIuiv_fn glGetSamplerParameterIuiv;
// glGetSamplerParameterfv_fn glGetSamplerParameterfv;
// glGetSamplerParameteriv_fn glGetSamplerParameteriv;
// glIsSampler_fn glIsSampler;
// glSamplerParameterIiv_fn glSamplerParameterIiv;
// glSamplerParameterIuiv_fn glSamplerParameterIuiv;
glSamplerParameterf_fn glSamplerParameterf;
glSamplerParameterfv_fn glSamplerParameterfv;
glSamplerParameteri_fn glSamplerParameteri;
// glSamplerParameteriv_fn glSamplerParameteriv;
private:
#if defined(_WIN32)
WNDCLASSEX wndClass;
HWND hDummyWnd;
HDC hDummyDC;
HGLRC hDummyOGLContext;
AMF_RESULT CreateDummy();
AMF_RESULT DestroyDummy();
#endif
};


@@ -1,472 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#include <climits>
#include "PropertyStorageExImpl.h"
#include "PropertyStorageImpl.h"
#include "TraceAdapter.h"
#pragma warning(disable: 4996)
using namespace amf;
#define AMF_FACILITY L"AMFPropertyStorageExImpl"
#ifdef __clang__
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wexit-time-destructors"
#pragma clang diagnostic ignored "-Wglobal-constructors"
#endif
amf::AMFCriticalSection amf::ms_csAMFPropertyStorageExImplMaps;
#ifdef __clang__
#pragma clang diagnostic pop
#endif
//-------------------------------------------------------------------------------------------------
AMF_RESULT amf::CastVariantToAMFProperty(amf::AMFVariantStruct* pDest, const amf::AMFVariantStruct* pSrc, amf::AMF_VARIANT_TYPE eType,
amf::AMF_PROPERTY_CONTENT_TYPE /*contentType*/,
const amf::AMFEnumDescriptionEntry* pEnumDescription)
{
AMF_RETURN_IF_INVALID_POINTER(pDest);
AMF_RESULT err = AMF_OK;
switch (eType)
{
case AMF_VARIANT_INTERFACE:
if (pSrc->type == eType)
{
err = AMFVariantCopy(pDest, pSrc);
}
else
{
pDest->type = AMF_VARIANT_INTERFACE;
pDest->pInterface = nullptr;
}
break;
case AMF_VARIANT_INT64:
{
if(pEnumDescription)
{
const AMFEnumDescriptionEntry* pEnumDescriptionCache = pEnumDescription;
err = AMFVariantChangeType(pDest, pSrc, AMF_VARIANT_INT64);
bool found = false;
if(err == AMF_OK)
{
// a numeric value came in; validate it against the enum table
while(pEnumDescriptionCache->name)
{
if(pEnumDescriptionCache->value == AMFVariantGetInt64(pDest))
{
AMFVariantAssignInt64(pDest, pEnumDescriptionCache->value);
found = true;
break;
}
pEnumDescriptionCache++;
}
err = found ? AMF_OK : AMF_INVALID_ARG;
}
if(!found)
{
pEnumDescriptionCache = pEnumDescription;
err = AMFVariantChangeType(pDest, pSrc, AMF_VARIANT_WSTRING);
if(err == AMF_OK)
{
// a string name came in; validate it and assign the matching numeric value
found = false;
while(pEnumDescriptionCache->name)
{
if(amf_wstring(pEnumDescriptionCache->name) == AMFVariantGetWString(pDest))
{
AMFVariantAssignInt64(pDest, pEnumDescriptionCache->value);
found = true;
break;
}
pEnumDescriptionCache++;
}
err = found ? AMF_OK : AMF_INVALID_ARG;
}
}
}
else
{
err = AMFVariantChangeType(pDest, pSrc, AMF_VARIANT_INT64);
}
}
break;
default:
err = AMFVariantChangeType(pDest, pSrc, eType);
break;
}
return err;
}
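
CastVariantToAMFProperty above accepts an enum-backed INT64 property either as a raw number or as its string name, validating both forms against the terminator-ended table it is given. A minimal sketch of such a table and the two accepted inputs, assuming the AMF public headers are available; the QUALITY_* values and names are hypothetical, only the AMFEnumDescriptionEntry layout and the call signature come from the code above:

```cpp
// Hypothetical enum description table (value/name pairs, nullptr-name terminated).
static const amf::AMFEnumDescriptionEntry QUALITY_ENUM[] =
{
    { 0, L"Speed"    },
    { 1, L"Balanced" },
    { 2, L"Quality"  },
    { 0, nullptr     }   // terminator: the validation loops stop at name == nullptr
};

// Both inputs below resolve to the same INT64 value 1 in `out`;
// anything not in the table yields AMF_INVALID_ARG.
//
//   AMFVariantStruct in, out;
//   AMFVariantInit(&out);
//
//   AMFVariantAssignInt64(&in, 1);                 // numeric form
//   // ...or...
//   AMFVariantAssignWString(&in, L"Balanced");     // by name
//
//   amf::CastVariantToAMFProperty(&out, &in, AMF_VARIANT_INT64,
//                                 AMF_PROPERTY_CONTENT_DEFAULT, QUALITY_ENUM);
```
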
//-------------------------------------------------------------------------------------------------
AMFPropertyInfoImpl::AMFPropertyInfoImpl(const wchar_t* name, const wchar_t* desc, AMF_VARIANT_TYPE type, AMF_PROPERTY_CONTENT_TYPE contentType,
AMFVariantStruct defaultValue, AMFVariantStruct minValue, AMFVariantStruct maxValue, bool allowChangeInRuntime,
const AMFEnumDescriptionEntry* pEnumDescription) : m_name(), m_desc()
{
AMF_PROPERTY_ACCESS_TYPE accessTypeTmp = allowChangeInRuntime ? AMF_PROPERTY_ACCESS_FULL : AMF_PROPERTY_ACCESS_READ_WRITE;
Init(name, desc, type, contentType, defaultValue, minValue, maxValue, accessTypeTmp, pEnumDescription);
}
//-------------------------------------------------------------------------------------------------
AMFPropertyInfoImpl::AMFPropertyInfoImpl(const wchar_t* name, const wchar_t* desc, AMF_VARIANT_TYPE type, AMF_PROPERTY_CONTENT_TYPE contentType,
AMFVariantStruct defaultValue, AMFVariantStruct minValue, AMFVariantStruct maxValue, AMF_PROPERTY_ACCESS_TYPE accessType,
const AMFEnumDescriptionEntry* pEnumDescription) : m_name(), m_desc()
{
Init(name, desc, type, contentType, defaultValue, minValue, maxValue, accessType, pEnumDescription);
}
//-------------------------------------------------------------------------------------------------
AMFPropertyInfoImpl::AMFPropertyInfoImpl() : m_name(), m_desc()
{
AMFVariantInit(&this->defaultValue);
AMFVariantInit(&this->minValue);
AMFVariantInit(&this->maxValue);
name = L"";
desc = L"";
type = AMF_VARIANT_EMPTY;
contentType = AMF_PROPERTY_CONTENT_TYPE(-1);
accessType = AMF_PROPERTY_ACCESS_FULL;
}
//-------------------------------------------------------------------------------------------------
void AMFPropertyInfoImpl::Init(const wchar_t* name_, const wchar_t* desc_, AMF_VARIANT_TYPE type_, AMF_PROPERTY_CONTENT_TYPE contentType_,
AMFVariantStruct defaultValue_, AMFVariantStruct minValue_, AMFVariantStruct maxValue_, AMF_PROPERTY_ACCESS_TYPE accessType_,
const AMFEnumDescriptionEntry* pEnumDescription_)
{
m_name = name_;
name = m_name.c_str();
m_desc = desc_;
desc = m_desc.c_str();
type = type_;
contentType = contentType_;
accessType = accessType_;
AMFVariantInit(&defaultValue);
AMFVariantInit(&minValue);
AMFVariantInit(&maxValue);
pEnumDescription = pEnumDescription_;
switch(type)
{
case AMF_VARIANT_BOOL:
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignBool(&defaultValue, false);
}
}
break;
case AMF_VARIANT_RECT:
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignRect(&defaultValue, AMFConstructRect(0, 0, 0, 0));
}
}
break;
case AMF_VARIANT_SIZE:
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignSize(&defaultValue, AMFConstructSize(0, 0));
}
if (CastVariantToAMFProperty(&minValue, &minValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignSize(&minValue, AMFConstructSize(INT_MIN, INT_MIN));
}
if (CastVariantToAMFProperty(&maxValue, &maxValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignSize(&maxValue, AMFConstructSize(INT_MAX, INT_MAX));
}
}
break;
case AMF_VARIANT_POINT:
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignPoint(&defaultValue, AMFConstructPoint(0, 0));
}
if (CastVariantToAMFProperty(&minValue, &minValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignPoint(&minValue, AMFConstructPoint(INT_MIN, INT_MIN));
}
if (CastVariantToAMFProperty(&maxValue, &maxValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignPoint(&maxValue, AMFConstructPoint(INT_MAX, INT_MAX));
}
}
break;
case AMF_VARIANT_RATE:
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignRate(&defaultValue, AMFConstructRate(0, 0));
}
if (CastVariantToAMFProperty(&this->minValue, &minValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignRate(&this->minValue, AMFConstructRate(0, 1));
}
if (CastVariantToAMFProperty(&this->maxValue, &maxValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignRate(&this->maxValue, AMFConstructRate(INT_MAX, INT_MAX));
}
}
break;
case AMF_VARIANT_RATIO:
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignRatio(&defaultValue, AMFConstructRatio(0, 0));
}
}
break;
case AMF_VARIANT_COLOR:
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignColor(&defaultValue, AMFConstructColor(0, 0, 0, 255));
}
}
break;
case AMF_VARIANT_INT64:
{
if(pEnumDescription)
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignInt64(&defaultValue, pEnumDescription->value);
}
}
else //AMF_PROPERTY_CONTENT_DEFAULT
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignInt64(&defaultValue, 0);
}
if(CastVariantToAMFProperty(&minValue, &minValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignInt64(&minValue, INT_MIN);
}
if(CastVariantToAMFProperty(&maxValue, &maxValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignInt64(&maxValue, INT_MAX);
}
}
}
break;
case AMF_VARIANT_DOUBLE:
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignDouble(&defaultValue, 0);
}
if(CastVariantToAMFProperty(&minValue, &minValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignDouble(&minValue, DBL_MIN);
}
if(CastVariantToAMFProperty(&maxValue, &maxValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignDouble(&maxValue, DBL_MAX);
}
}
break;
case AMF_VARIANT_STRING:
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignString(&defaultValue, "");
}
}
break;
case AMF_VARIANT_WSTRING:
{
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignWString(&defaultValue, L"");
}
}
break;
case AMF_VARIANT_INTERFACE:
if(CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignWString(&maxValue, L"");
}
break;
case AMF_VARIANT_FLOAT:
{
if (CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloat(&defaultValue, 0);
}
if (CastVariantToAMFProperty(&minValue, &minValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloat(&minValue, FLT_MIN);
}
if (CastVariantToAMFProperty(&maxValue, &maxValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloat(&maxValue, FLT_MAX);
}
}
break;
case AMF_VARIANT_FLOAT_SIZE:
{
if (CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatSize(&defaultValue, AMFConstructFloatSize(0, 0));
}
if (CastVariantToAMFProperty(&minValue, &minValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatSize(&minValue, AMFConstructFloatSize(FLT_MIN, FLT_MIN));
}
if (CastVariantToAMFProperty(&maxValue, &maxValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatSize(&maxValue, AMFConstructFloatSize(FLT_MAX, FLT_MAX));
}
}
break;
case AMF_VARIANT_FLOAT_POINT2D:
{
if (CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatPoint2D(&defaultValue, AMFConstructFloatPoint2D(0, 0));
}
if (CastVariantToAMFProperty(&minValue, &minValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatPoint2D(&minValue, AMFConstructFloatPoint2D(FLT_MIN, FLT_MIN));
}
if (CastVariantToAMFProperty(&maxValue, &maxValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatPoint2D(&maxValue, AMFConstructFloatPoint2D(FLT_MAX, FLT_MAX));
}
}
break;
case AMF_VARIANT_FLOAT_POINT3D:
{
if (CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatPoint3D(&defaultValue, AMFConstructFloatPoint3D(0, 0, 0));
}
if (CastVariantToAMFProperty(&minValue, &minValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatPoint3D(&minValue, AMFConstructFloatPoint3D(FLT_MIN, FLT_MIN, FLT_MIN));
}
if (CastVariantToAMFProperty(&maxValue, &maxValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatPoint3D(&maxValue, AMFConstructFloatPoint3D(FLT_MAX, FLT_MAX, FLT_MAX));
}
}
break;
case AMF_VARIANT_FLOAT_VECTOR4D:
{
if (CastVariantToAMFProperty(&defaultValue, &defaultValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatVector4D(&defaultValue, AMFConstructFloatVector4D(0, 0, 0, 0));
}
if (CastVariantToAMFProperty(&minValue, &minValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatVector4D(&minValue, AMFConstructFloatVector4D(FLT_MIN, FLT_MIN, FLT_MIN, FLT_MIN));
}
if (CastVariantToAMFProperty(&maxValue, &maxValue_, type, contentType, pEnumDescription) != AMF_OK)
{
AMFVariantAssignFloatVector4D(&maxValue, AMFConstructFloatVector4D(FLT_MAX, FLT_MAX, FLT_MAX, FLT_MAX));
}
}
break;
default:
break;
}
value = defaultValue;
}
AMFPropertyInfoImpl::AMFPropertyInfoImpl(const AMFPropertyInfoImpl& propertyInfo) : AMFPropertyInfo(), m_name(), m_desc()
{
Init(propertyInfo.name, propertyInfo.desc, propertyInfo.type, propertyInfo.contentType, propertyInfo.defaultValue, propertyInfo.minValue, propertyInfo.maxValue, propertyInfo.accessType, propertyInfo.pEnumDescription);
}
//-------------------------------------------------------------------------------------------------
AMFPropertyInfoImpl& AMFPropertyInfoImpl::operator=(const AMFPropertyInfoImpl& propertyInfo)
{
// store name and desc inside this instance in m_name and m_desc respectively;
// the name and desc pointers are then pointed at these local copies
this->m_name = propertyInfo.name;
this->m_desc = propertyInfo.desc;
this->name = m_name.c_str();
this->desc = m_desc.c_str();
this->type = propertyInfo.type;
this->contentType = propertyInfo.contentType;
this->accessType = propertyInfo.accessType;
AMFVariantCopy(&this->defaultValue, &propertyInfo.defaultValue);
AMFVariantCopy(&this->minValue, &propertyInfo.minValue);
AMFVariantCopy(&this->maxValue, &propertyInfo.maxValue);
this->pEnumDescription = propertyInfo.pEnumDescription;
this->value = propertyInfo.value;
this->userModified = propertyInfo.userModified;
return *this;
}
//-------------------------------------------------------------------------------------------------
AMFPropertyInfoImpl::~AMFPropertyInfoImpl()
{
AMFVariantClear(&this->defaultValue);
AMFVariantClear(&this->minValue);
AMFVariantClear(&this->maxValue);
}
//-------------------------------------------------------------------------------------------------


@@ -1,617 +0,0 @@
//
// Notice Regarding Standards. AMD does not provide a license or sublicense to
// any Intellectual Property Rights relating to any standards, including but not
// limited to any audio and/or video codec technologies such as MPEG-2, MPEG-4;
// AVC/H.264; HEVC/H.265; AAC decode/FFMPEG; AAC encode/FFMPEG; VC-1; and MP3
// (collectively, the "Media Technologies"). For clarity, you will pay any
// royalties due for such third party technologies, which may include the Media
// Technologies that are owed as a result of AMD providing the Software to you.
//
// MIT license
//
// Copyright (c) 2018 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
///-------------------------------------------------------------------------
/// @file PropertyStorageExImpl.h
/// @brief AMFPropertyStorageExImpl header
///-------------------------------------------------------------------------
#ifndef AMF_PropertyStorageExImpl_h
#define AMF_PropertyStorageExImpl_h
#pragma once
#include "../include/core/PropertyStorageEx.h"
#include "Thread.h"
#include "InterfaceImpl.h"
#include "ObservableImpl.h"
#include "TraceAdapter.h"
#include <limits.h>
#include <float.h>
#include <memory>
namespace amf
{
AMF_RESULT CastVariantToAMFProperty(AMFVariantStruct* pDest, const AMFVariantStruct* pSrc, AMF_VARIANT_TYPE eType,
AMF_PROPERTY_CONTENT_TYPE contentType,
const AMFEnumDescriptionEntry* pEnumDescription = 0);
//---------------------------------------------------------------------------------------------
class AMFPropertyInfoImpl : public AMFPropertyInfo
{
private:
amf_wstring m_name;
amf_wstring m_desc;
void Init(const wchar_t* name, const wchar_t* desc, AMF_VARIANT_TYPE type, AMF_PROPERTY_CONTENT_TYPE contentType,
AMFVariantStruct defaultValue, AMFVariantStruct minValue, AMFVariantStruct maxValue, AMF_PROPERTY_ACCESS_TYPE accessType,
const AMFEnumDescriptionEntry* pEnumDescription);
public:
AMFVariant value;
amf_bool userModified = false;
public:
AMFPropertyInfoImpl(const wchar_t* name, const wchar_t* desc, AMF_VARIANT_TYPE type, AMF_PROPERTY_CONTENT_TYPE contentType,
AMFVariantStruct defaultValue, AMFVariantStruct minValue, AMFVariantStruct maxValue, bool allowChangeInRuntime,
const AMFEnumDescriptionEntry* pEnumDescription);
AMFPropertyInfoImpl(const wchar_t* name, const wchar_t* desc, AMF_VARIANT_TYPE type, AMF_PROPERTY_CONTENT_TYPE contentType,
AMFVariantStruct defaultValue, AMFVariantStruct minValue, AMFVariantStruct maxValue, AMF_PROPERTY_ACCESS_TYPE accessType,
const AMFEnumDescriptionEntry* pEnumDescription);
AMFPropertyInfoImpl();
AMFPropertyInfoImpl(const AMFPropertyInfoImpl& propertyInfo);
AMFPropertyInfoImpl& operator=(const AMFPropertyInfoImpl& propertyInfo);
virtual ~AMFPropertyInfoImpl();
virtual void OnPropertyChanged() { }
};
typedef amf_map<amf_wstring, std::shared_ptr<AMFPropertyInfoImpl> > PropertyInfoMap;
//---------------------------------------------------------------------------------------------
template<typename _TBase> class AMFPropertyStorageExImpl :
public _TBase,
public AMFObservableImpl<AMFPropertyStorageObserver>
{
protected:
PropertyInfoMap m_PropertiesInfo;
AMFCriticalSection m_Sync; //thread-safety lock.
public:
AMFPropertyStorageExImpl()
{
}
virtual ~AMFPropertyStorageExImpl()
{
}
// interface access
AMF_BEGIN_INTERFACE_MAP
AMF_INTERFACE_ENTRY(AMFPropertyStorage)
AMF_INTERFACE_ENTRY(AMFPropertyStorageEx)
AMF_END_INTERFACE_MAP
using _TBase::GetProperty;
using _TBase::SetProperty;
// interface
//-------------------------------------------------------------------------------------------------
virtual AMF_RESULT AMF_STD_CALL Clear()
{
ResetDefaultValues();
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
virtual AMF_RESULT AMF_STD_CALL AddTo(AMFPropertyStorage* pDest, bool overwrite, bool /*deep*/) const
{
AMF_RETURN_IF_INVALID_POINTER(pDest);
if (pDest != this)
{
AMFLock lock(const_cast<AMFCriticalSection*>(&m_Sync));
for (PropertyInfoMap::const_iterator it = m_PropertiesInfo.begin(); it != m_PropertiesInfo.end(); it++)
{
if (!overwrite && pDest->HasProperty(it->first.c_str()))
{
continue;
}
AMF_RESULT err = pDest->SetProperty(it->first.c_str(), it->second->value);
if (err != AMF_INVALID_ARG) // not validated - skip it
{
AMF_RETURN_IF_FAILED(err, L"AddTo() - failed to copy property=%s", it->first.c_str());
}
}
}
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
virtual AMF_RESULT AMF_STD_CALL CopyTo(AMFPropertyStorage* pDest, bool deep) const
{
AMF_RETURN_IF_INVALID_POINTER(pDest);
if (pDest != this)
{
pDest->Clear();
return AddTo(pDest, true, deep);
}
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
virtual AMF_RESULT AMF_STD_CALL SetProperty(const wchar_t* name, AMFVariantStruct value)
{
AMF_RETURN_IF_INVALID_POINTER(name);
const AMFPropertyInfo* pParamInfo = NULL;
AMF_RESULT err = GetPropertyInfo(name, &pParamInfo);
if (err != AMF_OK)
{
return err;
}
if (pParamInfo && !pParamInfo->AllowedWrite())
{
return AMF_ACCESS_DENIED;
}
return SetPrivateProperty(name, value);
}
//-------------------------------------------------------------------------------------------------
virtual AMF_RESULT AMF_STD_CALL GetProperty(const wchar_t* name, AMFVariantStruct* pValue) const
{
AMF_RETURN_IF_INVALID_POINTER(name);
AMF_RETURN_IF_INVALID_POINTER(pValue);
const AMFPropertyInfo* pParamInfo = NULL;
AMF_RESULT err = GetPropertyInfo(name, &pParamInfo);
if (err != AMF_OK)
{
return err;
}
if (pParamInfo && !pParamInfo->AllowedRead())
{
return AMF_ACCESS_DENIED;
}
return GetPrivateProperty(name, pValue);
}
//-------------------------------------------------------------------------------------------------
virtual bool AMF_STD_CALL HasProperty(const wchar_t* name) const
{
const AMFPropertyInfo* pParamInfo = NULL;
AMF_RESULT err = GetPropertyInfo(name, &pParamInfo);
return (err != AMF_OK) ? false : true;
}
//-------------------------------------------------------------------------------------------------
virtual amf_size AMF_STD_CALL GetPropertyCount() const
{
return m_PropertiesInfo.size();
}
//-------------------------------------------------------------------------------------------------
virtual AMF_RESULT AMF_STD_CALL GetPropertyAt(amf_size index, wchar_t* name, amf_size nameSize, AMFVariantStruct* pValue) const
{
AMF_RETURN_IF_INVALID_POINTER(name);
AMF_RETURN_IF_INVALID_POINTER(pValue);
AMF_RETURN_IF_FALSE(nameSize != 0, AMF_INVALID_ARG);
AMF_RETURN_IF_FALSE(index < m_PropertiesInfo.size(), AMF_INVALID_ARG);
PropertyInfoMap::const_iterator found = m_PropertiesInfo.begin();
for (amf_size i = 0; i < index; i++)
{
found++;
}
size_t copySize = AMF_MIN(nameSize-1, found->first.length());
memcpy(name, found->first.c_str(), copySize * sizeof(wchar_t));
name[copySize] = 0;
AMFLock lock(const_cast<AMFCriticalSection*>(&m_Sync));
AMFVariantCopy(pValue, &found->second->value);
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
virtual amf_size AMF_STD_CALL GetPropertiesInfoCount() const
{
return m_PropertiesInfo.size();
}
//-------------------------------------------------------------------------------------------------
virtual AMF_RESULT AMF_STD_CALL GetPropertyInfo(amf_size szInd, const AMFPropertyInfo** ppParamInfo) const
{
AMF_RETURN_IF_INVALID_POINTER(ppParamInfo);
AMF_RETURN_IF_FALSE(szInd < m_PropertiesInfo.size(), AMF_INVALID_ARG);
PropertyInfoMap::const_iterator it = m_PropertiesInfo.begin();
for (; szInd > 0; --szInd)
{
it++;
}
*ppParamInfo = it->second.get();
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
virtual AMF_RESULT AMF_STD_CALL GetPropertyInfo(const wchar_t* name, const AMFPropertyInfo** ppParamInfo) const
{
AMF_RETURN_IF_INVALID_POINTER(name);
AMF_RETURN_IF_INVALID_POINTER(ppParamInfo);
PropertyInfoMap::const_iterator it = m_PropertiesInfo.find(name);
if (it != m_PropertiesInfo.end())
{
*ppParamInfo = it->second.get();
return AMF_OK;
}
return AMF_NOT_FOUND;
}
//-------------------------------------------------------------------------------------------------
virtual AMF_RESULT AMF_STD_CALL ValidateProperty(const wchar_t* name, AMFVariantStruct value, AMFVariantStruct* pOutValidated) const
{
AMF_RETURN_IF_INVALID_POINTER(name);
AMF_RETURN_IF_INVALID_POINTER(pOutValidated);
AMF_RESULT err = AMF_OK;
const AMFPropertyInfo* pParamInfo = NULL;
AMF_RETURN_IF_FAILED(GetPropertyInfo(name, &pParamInfo), L"Property=%s", name);
AMF_RETURN_IF_FAILED(CastVariantToAMFProperty(pOutValidated, &value, pParamInfo->type, pParamInfo->contentType, pParamInfo->pEnumDescription), L"Property=%s", name);
switch(pParamInfo->type)
{
case AMF_VARIANT_INT64:
if((pParamInfo->minValue.type != AMF_VARIANT_EMPTY && AMFVariantGetInt64(pOutValidated) < AMFVariantGetInt64(&pParamInfo->minValue)) ||
(pParamInfo->maxValue.type != AMF_VARIANT_EMPTY && AMFVariantGetInt64(pOutValidated) > AMFVariantGetInt64(&pParamInfo->maxValue)) )
{
err = AMF_OUT_OF_RANGE;
}
break;
case AMF_VARIANT_DOUBLE:
if((AMFVariantGetDouble(pOutValidated) < AMFVariantGetDouble(&pParamInfo->minValue)) ||
(AMFVariantGetDouble(pOutValidated) > AMFVariantGetDouble(&pParamInfo->maxValue)) )
{
err = AMF_OUT_OF_RANGE;
}
break;
case AMF_VARIANT_FLOAT:
if ((AMFVariantGetFloat(pOutValidated) < AMFVariantGetFloat(&pParamInfo->minValue)) ||
(AMFVariantGetFloat(pOutValidated) > AMFVariantGetFloat(&pParamInfo->maxValue)))
{
err = AMF_OUT_OF_RANGE;
}
break;
case AMF_VARIANT_RATE:
{
// NOTE: denominator can't be 0
const AMFRate& validatedSize = AMFVariantGetRate(pOutValidated);
AMFRate minSize = AMFConstructRate(0, 1);
AMFRate maxSize = AMFConstructRate(INT_MAX, INT_MAX);
if (pParamInfo->minValue.type != AMF_VARIANT_EMPTY)
{
minSize = AMFVariantGetRate(&pParamInfo->minValue);
}
if (pParamInfo->maxValue.type != AMF_VARIANT_EMPTY)
{
maxSize = AMFVariantGetRate(&pParamInfo->maxValue);
}
if (validatedSize.num < minSize.num || validatedSize.num > maxSize.num ||
validatedSize.den < minSize.den || validatedSize.den > maxSize.den)
{
err = AMF_OUT_OF_RANGE;
}
}
break;
case AMF_VARIANT_SIZE:
{
AMFSize validatedSize = AMFVariantGetSize(pOutValidated);
AMFSize minSize = AMFConstructSize(0, 0);
AMFSize maxSize = AMFConstructSize(INT_MAX, INT_MAX);
if (pParamInfo->minValue.type != AMF_VARIANT_EMPTY)
{
minSize = AMFVariantGetSize(&pParamInfo->minValue);
}
if (pParamInfo->maxValue.type != AMF_VARIANT_EMPTY)
{
maxSize = AMFVariantGetSize(&pParamInfo->maxValue);
}
if (validatedSize.width < minSize.width || validatedSize.height < minSize.height ||
validatedSize.width > maxSize.width || validatedSize.height > maxSize.height)
{
err = AMF_OUT_OF_RANGE;
}
}
break;
case AMF_VARIANT_FLOAT_SIZE:
{
AMFFloatSize validatedSize = AMFVariantGetFloatSize(pOutValidated);
AMFFloatSize minSize = AMFConstructFloatSize(0, 0);
AMFFloatSize maxSize = AMFConstructFloatSize(FLT_MAX, FLT_MAX);
if (pParamInfo->minValue.type != AMF_VARIANT_EMPTY)
{
minSize = AMFVariantGetFloatSize(&pParamInfo->minValue);
}
if (pParamInfo->maxValue.type != AMF_VARIANT_EMPTY)
{
maxSize = AMFVariantGetFloatSize(&pParamInfo->maxValue);
}
if (validatedSize.width < minSize.width || validatedSize.height < minSize.height ||
validatedSize.width > maxSize.width || validatedSize.height > maxSize.height)
{
err = AMF_OUT_OF_RANGE;
}
}
break;
default: // GK: Clang issues a warning when not every value of an enum is handled in a switch-case
break;
}
return err;
}
//-------------------------------------------------------------------------------------------------
virtual void AMF_STD_CALL OnPropertyChanged(const wchar_t* /*name*/){ }
//-------------------------------------------------------------------------------------------------
virtual void AMF_STD_CALL AddObserver(AMFPropertyStorageObserver* pObserver)
{
AMFLock lock(&m_Sync);
AMFObservableImpl<AMFPropertyStorageObserver>::AddObserver(pObserver);
}
//-------------------------------------------------------------------------------------------------
virtual void AMF_STD_CALL RemoveObserver(AMFPropertyStorageObserver* pObserver)
{
AMFLock lock(&m_Sync);
AMFObservableImpl<AMFPropertyStorageObserver>::RemoveObserver(pObserver);
}
//-------------------------------------------------------------------------------------------------
protected:
//-------------------------------------------------------------------------------------------------
AMF_RESULT SetAccessType(const wchar_t* name, AMF_PROPERTY_ACCESS_TYPE accessType)
{
AMF_RETURN_IF_INVALID_POINTER(name);
PropertyInfoMap::iterator found = m_PropertiesInfo.find(name);
AMF_RETURN_IF_FALSE(found != m_PropertiesInfo.end(), AMF_NOT_FOUND);
if (found->second->accessType == accessType)
{
return AMF_OK;
}
found->second->accessType = accessType;
OnPropertyChanged(name);
NotifyObservers<const wchar_t*>(&AMFPropertyStorageObserver::OnPropertyChanged, name);
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT SetPrivateProperty(const wchar_t* name, AMFVariantStruct value)
{
AMF_RETURN_IF_INVALID_POINTER(name);
AMFVariant validatedValue;
AMF_RESULT validateResult = ValidateProperty(name, value, &validatedValue);
if (validateResult != AMF_OK)
{
return validateResult;
}
PropertyInfoMap::iterator found = m_PropertiesInfo.find(name);
if (found == m_PropertiesInfo.end())
{
return AMF_NOT_FOUND;
}
{
AMFLock lock(&m_Sync);
if (found->second->value == validatedValue)
{
return AMF_OK;
}
found->second->value = validatedValue;
}
found->second->OnPropertyChanged();
OnPropertyChanged(name);
NotifyObservers<const wchar_t*>(&AMFPropertyStorageObserver::OnPropertyChanged, name);
return AMF_OK;
}
//-------------------------------------------------------------------------------------------------
AMF_RESULT GetPrivateProperty(const wchar_t* name, AMFVariantStruct* pValue) const
{
AMF_RETURN_IF_INVALID_POINTER(name);
AMF_RETURN_IF_INVALID_POINTER(pValue);
PropertyInfoMap::const_iterator found = m_PropertiesInfo.find(name);
if (found != m_PropertiesInfo.end())
{
AMFLock lock(const_cast<AMFCriticalSection*>(&m_Sync));
AMFVariantCopy(pValue, &found->second->value);
return AMF_OK;
}
// NOTE: needed for internal components that don't automatically
// expose their properties in the main map...
const AMFPropertyInfo* pParamInfo;
if (GetPropertyInfo(name, &pParamInfo) == AMF_OK)
{
AMFLock lock(const_cast<AMFCriticalSection*>(&m_Sync));
AMFVariantCopy(pValue, &pParamInfo->defaultValue);
return AMF_OK;
}
return AMF_NOT_FOUND;
}
//-------------------------------------------------------------------------------------------------
template<typename _T>
AMF_RESULT AMF_STD_CALL SetPrivateProperty(const wchar_t* name, const _T& value)
{
AMF_RESULT err = SetPrivateProperty(name, static_cast<const AMFVariantStruct&>(AMFVariant(value)));
return err;
}
//-------------------------------------------------------------------------------------------------
template<typename _T>
AMF_RESULT AMF_STD_CALL GetPrivateProperty(const wchar_t* name, _T* pValue) const
{
AMFVariant var;
AMF_RESULT err = GetPrivateProperty(name, static_cast<AMFVariantStruct*>(&var));
if(err == AMF_OK)
{
*pValue = static_cast<_T>(var);
}
return err;
}
//-------------------------------------------------------------------------------------------------
bool HasPrivateProperty(const wchar_t* name) const
{
return m_PropertiesInfo.find(name) != m_PropertiesInfo.end();
}
//-------------------------------------------------------------------------------------------------
bool IsRuntimeChange(const wchar_t* name) const
{
PropertyInfoMap::const_iterator it = m_PropertiesInfo.find(name);
return (it != m_PropertiesInfo.end()) ? it->second->AllowedChangeInRuntime() : false;
}
//-------------------------------------------------------------------------------------------------
void ResetDefaultValues()
{
AMFLock lock(&m_Sync);
// copy defaults to property storage
for (PropertyInfoMap::iterator it = m_PropertiesInfo.begin(); it != m_PropertiesInfo.end(); ++it)
{
AMFPropertyInfoImpl* info = it->second.get();
info->value = info->defaultValue;
info->userModified = false;
}
}
//-------------------------------------------------------------------------------------------------
private:
AMFPropertyStorageExImpl(const AMFPropertyStorageExImpl&);
AMFPropertyStorageExImpl& operator=(const AMFPropertyStorageExImpl&);
};
extern AMFCriticalSection ms_csAMFPropertyStorageExImplMaps;
//---------------------------------------------------------------------------------------------
#define AMFPrimitivePropertyInfoMapBegin \
{ \
amf::AMFPropertyInfoImpl* s_PropertiesInfo[] = \
{
#define AMFPrimitivePropertyInfoMapEnd \
}; \
for (amf_size i = 0; i < sizeof(s_PropertiesInfo) / sizeof(s_PropertiesInfo[0]); ++i) \
{ \
amf::AMFPropertyInfoImpl* pPropInfo = s_PropertiesInfo[i]; \
m_PropertiesInfo[pPropInfo->name].reset(pPropInfo); \
} \
}
#define AMFPropertyInfoBool(_name, _desc, _defaultValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_BOOL, 0, amf::AMFVariant(_defaultValue), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, 0)
#define AMFPropertyInfoEnum(_name, _desc, _defaultValue, pEnumDescription, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_INT64, 0, amf::AMFVariant(amf_int64(_defaultValue)), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, pEnumDescription)
#define AMFPropertyInfoInt64(_name, _desc, _defaultValue, _minValue, _maxValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_INT64, 0, amf::AMFVariant(amf_int64(_defaultValue)), \
amf::AMFVariant(amf_int64(_minValue)), amf::AMFVariant(amf_int64(_maxValue)), _AccessType, 0)
#define AMFPropertyInfoDouble(_name, _desc, _defaultValue, _minValue, _maxValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_DOUBLE, 0, amf::AMFVariant(amf_double(_defaultValue)), \
amf::AMFVariant(amf_double(_minValue)), amf::AMFVariant(amf_double(_maxValue)), _AccessType, 0)
#define AMFPropertyInfoFloat(_name, _desc, _defaultValue, _minValue, _maxValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_FLOAT, 0, amf::AMFVariant(amf_float(_defaultValue)), \
amf::AMFVariant(amf_float(_minValue)), amf::AMFVariant(amf_float(_maxValue)), _AccessType, 0)
#define AMFPropertyInfoRect(_name, _desc, defaultLeft, defaultTop, defaultRight, defaultBottom, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_RECT, 0, amf::AMFVariant(AMFConstructRect(defaultLeft, defaultTop, defaultRight, defaultBottom)), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, 0)
#define AMFPropertyInfoPoint(_name, _desc, defaultX, defaultY, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_POINT, 0, amf::AMFVariant(AMFConstructPoint(defaultX, defaultY)), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, 0)
#define AMFPropertyInfoSize(_name, _desc, _defaultValue, _minValue, _maxValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_SIZE, 0, amf::AMFVariant(AMFSize(_defaultValue)), \
amf::AMFVariant(AMFSize(_minValue)), amf::AMFVariant(AMFSize(_maxValue)), _AccessType, 0)
#define AMFPropertyInfoFloatSize(_name, _desc, _defaultValue, _minValue, _maxValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_FLOAT_SIZE, 0, amf::AMFVariant(AMFFloatSize(_defaultValue)), \
amf::AMFVariant(AMFFloatSize(_minValue)), amf::AMFVariant(AMFFloatSize(_maxValue)), _AccessType, 0)
#define AMFPropertyInfoRate(_name, _desc, defaultNum, defaultDen, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_RATE, 0, amf::AMFVariant(AMFConstructRate(defaultNum, defaultDen)), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, 0)
#define AMFPropertyInfoRateEx(_name, _desc, _defaultValue, _minValue, _maxValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_RATE, 0, amf::AMFVariant(_defaultValue), \
amf::AMFVariant(_minValue), amf::AMFVariant(_maxValue), _AccessType, 0)
#define AMFPropertyInfoRatio(_name, _desc, defaultNum, defaultDen, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_RATIO, 0, amf::AMFVariant(AMFConstructRatio(defaultNum, defaultDen)), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, 0)
#define AMFPropertyInfoColor(_name, _desc, defaultR, defaultG, defaultB, defaultA, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_COLOR, 0, amf::AMFVariant(AMFConstructColor(defaultR, defaultG, defaultB, defaultA)), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, 0)
#define AMFPropertyInfoString(_name, _desc, _defaultValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_STRING, 0, amf::AMFVariant(_defaultValue), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, 0)
#define AMFPropertyInfoWString(_name, _desc, _defaultValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_WSTRING, 0, amf::AMFVariant(_defaultValue), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, 0)
#define AMFPropertyInfoInterface(_name, _desc, _defaultValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_INTERFACE, 0, amf::AMFVariant(amf::AMFInterfacePtr(_defaultValue)), \
amf::AMFVariant(amf::AMFInterfacePtr()), amf::AMFVariant(amf::AMFInterfacePtr()), _AccessType, 0)
#define AMFPropertyInfoXML(_name, _desc, _defaultValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_STRING, AMF_PROPERTY_CONTENT_XML, amf::AMFVariant(_defaultValue), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, 0)
#define AMFPropertyInfoPath(_name, _desc, _defaultValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_WSTRING, AMF_PROPERTY_CONTENT_FILE_OPEN_PATH, amf::AMFVariant(_defaultValue), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, 0)
#define AMFPropertyInfoSavePath(_name, _desc, _defaultValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_WSTRING, AMF_PROPERTY_CONTENT_FILE_SAVE_PATH, amf::AMFVariant(_defaultValue), \
amf::AMFVariant(), amf::AMFVariant(), _AccessType, 0)
#define AMFPropertyInfoFloatVector4D(_name, _desc, _defaultValue, _minValue, _maxValue, _AccessType) \
new amf::AMFPropertyInfoImpl(_name, _desc, amf::AMF_VARIANT_FLOAT_VECTOR4D, 0, amf::AMFVariant(_defaultValue), \
amf::AMFVariant(_minValue), amf::AMFVariant(_maxValue), _AccessType, 0)
} // namespace amf
#endif // #ifndef AMF_PropertyStorageExImpl_h
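
For context on the property-declaration macros defined in this removed header, below is a minimal illustrative sketch (not part of the SDK sources above) of how a component inheriting `AMFPropertyStorageExImpl` would typically register and set properties; the class name `MyEncoder` and the property names are hypothetical.

```
// Illustrative sketch only; MyEncoder and the property names are hypothetical.
// A component that inherits AMFPropertyStorageExImpl<...> declares its properties
// in its constructor with the macros above, then reads/writes them through the
// validated SetProperty/GetProperty path.
MyEncoder::MyEncoder()
{
    AMFPrimitivePropertyInfoMapBegin
        AMFPropertyInfoInt64(L"Bitrate",    L"Target bitrate (bps)",     4000000, 1, INT_MAX, true),
        AMFPropertyInfoRate (L"FrameRate",  L"Target frame rate",        30, 1,               true),
        AMFPropertyInfoBool (L"LowLatency", L"Enable low-latency mode",  true,                true),
    AMFPrimitivePropertyInfoMapEnd

    // Values are validated against the declared type and range before being stored.
    SetProperty(L"Bitrate", amf_int64(8000000));
}
```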

Some files were not shown because too many files have changed in this diff.