From 36ea29f397e567cfd05aa9708e251913f3ff9640 Mon Sep 17 00:00:00 2001
From: jaswu51 <506346013@qq.com>
Date: Tue, 9 Dec 2025 20:14:50 -0500
Subject: [PATCH] Add GStreamer Jetson real-time video processing article

---
 _data/navigation.yml                               |   2 +
 .../gstreamer-jetson-realtime-video.md             | 579 ++++++++++++++++++
 wiki/networking/index.md                           |   3 +
 3 files changed, 584 insertions(+)
 create mode 100644 wiki/networking/gstreamer-jetson-realtime-video.md

diff --git a/_data/navigation.yml b/_data/navigation.yml
index 6600d2a4..8537a258 100644
--- a/_data/navigation.yml
+++ b/_data/navigation.yml
@@ -232,6 +232,8 @@ wiki:
           url: /wiki/networking/ros-distributed/
         - title: Setting up WiFi hotspot at the boot up for Linux devices
           url: /wiki/networking/wifi-hotspot/
+        - title: Real-Time Video Processing on Jetson with GStreamer
+          url: /wiki/networking/gstreamer-jetson-realtime-video/
   - title: Simulation
     url: /wiki/simulation/
     children:
diff --git a/wiki/networking/gstreamer-jetson-realtime-video.md b/wiki/networking/gstreamer-jetson-realtime-video.md
new file mode 100644
index 00000000..3b7abb78
--- /dev/null
+++ b/wiki/networking/gstreamer-jetson-realtime-video.md
@@ -0,0 +1,579 @@
---
date: 2025-12-09
title: Real-Time Video Processing on Jetson with GStreamer
---

## 1. Why GStreamer on Jetson Orin?

Robotics video demands three competing goals: **High-Fidelity Capture** for sensors, **Low-Latency Perception** for AI, and **Efficient Streaming** for teleoperation. Traditional approaches often fail to meet these requirements simultaneously, especially when dealing with high-resolution video streams from modern camera sensors.

**The Problem:** Handling these sequentially in a Python script (Capture $\rightarrow$ CPU Copy $\rightarrow$ Process $\rightarrow$ Encode) creates massive bottlenecks. Each step introduces latency: copying frames from camera buffers to system RAM, converting formats on the CPU, processing with OpenCV or similar libraries, and finally encoding for transmission. For a 4K camera at 30 FPS, this can consume 80-90% of CPU resources, leaving little room for critical robot control algorithms. Additionally, multiple memory copies (camera buffer $\rightarrow$ user space $\rightarrow$ processing $\rightarrow$ encoder) waste bandwidth and introduce frame drops.

**The Solution:** GStreamer on Jetson utilizes **`memory:NVMM`** (NVIDIA Memory Map). This "Zero-Copy" design allows video frames to flow directly from **Camera $\rightarrow$ ISP $\rightarrow$ GPU $\rightarrow$ Encoder**. The CPU is almost entirely bypassed, leaving it free to run high-level robot logic like SLAM or path planning. NVMM is a unified memory architecture that allows the camera ISP (Image Signal Processor), GPU, and hardware encoders to share the same physical memory without expensive copies. This means a 4K video stream can be captured, processed, and encoded using less than 5% CPU, with latency reduced from hundreds of milliseconds to tens of milliseconds.
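The difference shows up in the caps string alone: `video/x-raw(memory:NVMM)` keeps buffers in hardware memory, while plain `video/x-raw` forces a copy into system RAM. A minimal sketch of the contrast (the elements are introduced in Section 2; the display sinks here just stand in for whatever actually consumes your frames):

```bash
# CPU path: nvvidconv copies frames out of NVMM into system RAM,
# and videoconvert then burns CPU on every frame
gst-launch-1.0 nvarguscamerasrc ! nvvidconv ! 'video/x-raw, format=BGRx' ! \
  videoconvert ! autovideosink

# Zero-copy path: frames stay in NVMM from capture to display
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12' ! nv3dsink
```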
**Performance Benefits:** In real-world robotics applications, this translates to:
- **Latency Reduction:** From 200-500 ms (traditional pipeline) to 10-30 ms (GStreamer with NVMM)
- **CPU Utilization:** From 80-90% down to 5-10% for video processing
- **Frame Rate Stability:** Consistent 30 FPS even under heavy computational load
- **Power Efficiency:** Lower CPU usage means reduced thermal throttling and longer battery life for mobile robots

---

### 1.1 What GStreamer Is

Think of GStreamer not as a media player, but as a circuit board for data. You connect specific "elements" to form a pipeline where data flows from sources through filters to sinks. Each element is a self-contained processing unit that performs one specific task, and elements communicate through "pads" (input/output ports) using standardized data formats called "caps" (capabilities).

**Core Concepts:**

* **Source:** The Generator (Camera, File, Network). Sources produce data and push it downstream. Examples include `nvarguscamerasrc` for CSI cameras, `v4l2src` for USB cameras, and `udpsrc` for network streams.

* **Filter:** The Transformer (Resize, Convert, Encode, Decode). Filters receive data, process it, and output modified data. They can change format, resolution, or frame rate, or apply effects. Examples include `nvvidconv` for hardware-accelerated scaling, `videoconvert` for format conversion, and `nvv4l2h264enc` for encoding.

* **Sink:** The Consumer (Network, Screen, File, Application). Sinks consume data and either display it, save it, or make it available to your application code. Examples include `udpsink` for network transmission, `nvdrmvideosink` for display, and `appsink` for integration with Python/C++ code.

**The "Tee" Architecture**

To achieve all three goals at once, use a **Tee** element to split one camera source into two concurrent, hardware-accelerated branches. The Tee element acts like a splitter, duplicating the video stream so each branch can process it independently without interfering with the other. This is crucial for robotics, where you need real-time perception and remote monitoring simultaneously; a minimal sketch of the topology follows the list below.

1. **Perception Branch (AI):** Uses `nvvidconv` (backed by the VIC, the hardware Video Image Compositor) to downscale 4K images for inference. This branch typically outputs lower-resolution frames (e.g., 640x480 or 1280x720) at high frame rates (30-60 FPS) optimized for neural network inference. The downscaling happens entirely in hardware, maintaining zero-copy semantics.

   * *Tradeoff:* Sacrifices resolution for raw speed (FPS). Lower resolution means faster inference but less detail for object detection.

2. **Streaming Branch (Base Station):** Uses `nvv4l2h264enc` (NVENC) to compress video for transmission. This branch encodes the video stream using hardware H.264 encoding, optimizing for bandwidth efficiency while maintaining acceptable visual quality for human operators.

   * *Tradeoff:* Sacrifices visual perfection for low bandwidth and latency. Compression artifacts may be visible, but the stream can be transmitted over limited network connections.
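A minimal sketch of the tee topology (resolutions, bitrate, and the host address are placeholders to adapt to your sensor and network; `nv3dsink` stands in for your inference-side consumer, e.g., an `appsink`):

```bash
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM), width=3840, height=2160, framerate=30/1' ! \
  tee name=t \
  t. ! queue ! nvvidconv ! \
       'video/x-raw(memory:NVMM), width=1280, height=720' ! nv3dsink \
  t. ! queue ! nvv4l2h264enc bitrate=2500000 insert-sps-pps=true ! \
       h264parse ! rtph264pay pt=96 ! udpsink host=192.168.1.10 port=5000 sync=false
```

Note the `queue` at the head of each branch: it decouples the branches so that a slow consumer on one side cannot stall the other.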
**Essential Hardware Plugins:**

* **`nvarguscamerasrc`:** Accesses the ISP (Image Signal Processor) for hardware auto-exposure, auto-white-balance, and debayering. This plugin provides direct access to CSI camera sensors on Jetson platforms, bypassing V4L2 and enabling full control over camera parameters. It supports multiple camera sensors and can handle high-resolution streams (up to 4K) with minimal CPU overhead.

* **`nvvidconv`:** Handles hardware scaling and format conversion using the GPU's video processing units. It can convert between various color formats (NV12, I420, BGRx, etc.) and resize images without copying data to CPU memory. This is essential for format conversion between camera output and neural network input requirements.

* **`nvv4l2h264enc`:** Dedicated hardware H.264 encoding using NVIDIA's NVENC engine. This encoder can handle multiple simultaneous streams, supports various encoding profiles (baseline, main, high), and provides fine-grained control over bitrate, quality, and latency. It is capable of encoding 4K video at 30 FPS with minimal CPU usage.

## 2. Getting GStreamer Working on Jetson Orin

### 2.1 Installation & Environment Check

**Jetson Linux / JetPack Expectations**

Ensure you are running **JetPack 6.0, 6.1, or 6.2**. These releases correspond to Jetson Linux R36.x (Ubuntu 22.04), which ships GStreamer 1.20.x. Earlier JetPack versions (5.x) use GStreamer 1.18.x, which may have different plugin capabilities and API differences. To check your JetPack version, run:

```bash
cat /etc/nv_tegra_release
```

The first number indicates the major release (e.g., R36 for JetPack 6.x). If you're using an older version, consider upgrading to access the latest hardware acceleration features and bug fixes. Note that upgrading JetPack requires flashing the entire system, so plan accordingly.

**Installing GStreamer-1.0 and Standard Plugins**

Run the following commands to install the core GStreamer tools and plugins:

```bash
sudo apt-get update
sudo apt-get install gstreamer1.0-tools gstreamer1.0-alsa \
  gstreamer1.0-plugins-base gstreamer1.0-plugins-good \
  gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly \
  gstreamer1.0-libav
sudo apt-get install libgstreamer1.0-dev \
  libgstreamer-plugins-base1.0-dev \
  libgstreamer-plugins-good1.0-dev \
  libgstreamer-plugins-bad1.0-dev
```

**Installing Accelerated GStreamer Plugins**

To install the NVIDIA-specific hardware-accelerated plugins (required for `nvarguscamerasrc`, `nvv4l2decoder`, etc.):

```bash
sudo apt-get update
sudo apt-get install nvidia-l4t-gstreamer
sudo ldconfig
rm -rf ~/.cache/gstreamer-1.0/
```

**Checking the Version**

Verify the installation was successful by checking the version:

```bash
gst-inspect-1.0 --version
```

**Verifying NVIDIA Plugins**

After installation, verify that NVIDIA-specific plugins are available. Run the following command to list all installed plugins:

```bash
gst-inspect-1.0 | grep nv
```

You should see plugins like `nvarguscamerasrc`, `nvvidconv`, `nvv4l2h264enc`, `nvv4l2decoder`, and others. If these are missing, the `nvidia-l4t-gstreamer` package may not have installed correctly. Try reinstalling it or check for package conflicts.

**Testing Plugin Capabilities**

To inspect a specific plugin's capabilities and properties, use:

```bash
gst-inspect-1.0 nvarguscamerasrc
```

This will show you all available properties, their types, default values, and allowed ranges. This is invaluable when configuring pipelines, as it shows exactly what parameters each element supports.

**Common Installation Issues**

- **Plugin not found errors:** If you see "No such element" errors, ensure `nvidia-l4t-gstreamer` is installed and run `sudo ldconfig` to refresh library paths.

- **Permission errors:** Some camera access may require additional user permissions. Add your user to the `video` group: `sudo usermod -a -G video $USER` (requires logout/login).

- **Cache issues:** If plugins aren't detected after installation, clear the GStreamer cache: `rm -rf ~/.cache/gstreamer-1.0/`
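After working through any of these fixes, a quick scripted sanity check confirms that the key NVIDIA elements are visible (a minimal sketch; extend the list with whatever elements your pipelines use):

```bash
# gst-inspect-1.0 --exists returns success only if the element is found
for e in nvarguscamerasrc nvvidconv nvv4l2h264enc nvv4l2decoder; do
  gst-inspect-1.0 --exists "$e" && echo "OK: $e" || echo "MISSING: $e"
done
```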
---

### 2.2 Minimal Test Pipelines

#### Pipeline 1: Camera to Screen

This pipeline captures from the CSI camera using the Argus API (`nvarguscamerasrc`) and displays it directly on the screen using the DRM video sink (`nvdrmvideosink`), which renders through DRM/KMS and therefore works even without a desktop session. This is the simplest pipeline to verify that your camera and display setup are working correctly. The `nvarguscamerasrc` element automatically handles camera initialization, exposure control, and debayering in hardware. The `queue` element provides buffering to prevent frame drops during display updates. The `-e` flag ensures the pipeline stops cleanly when interrupted (Ctrl+C). Note that the caps string must stay inside one quoted token; splitting a quoted string across continuation lines would pass literal backslashes to the caps parser.

```bash
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)10/1' ! \
  queue ! nvdrmvideosink -e
```

#### Pipeline 2: Camera Sensor to Downstream C++/CUDA

This pipeline demonstrates the "zero-copy" path. It captures from the camera, keeps the data in hardware memory (`NVMM`), passes it through `nvivafilter` (which loads a custom CUDA library, `libnvsample_cudaprocess.so`), and renders the output. This is the foundation for integrating custom CUDA processing into your video pipeline. The `nvivafilter` element allows you to inject custom CUDA kernels that process video frames entirely in GPU memory, which is ideal for computer vision algorithms like object detection, tracking, or image enhancement.

**Key Features:**
- All data remains in NVMM throughout the pipeline
- Custom CUDA processing can be applied without CPU involvement
- Supports high-resolution input (4K in this example)
- Output can be fed to other pipeline elements or applications

*Note: This relies on the NVIDIA sample library `libnvsample_cudaprocess.so` being present (typically found in `/usr/lib/aarch64-linux-gnu/` or installed via the BSP sources). For production use, you would compile your own CUDA library with your specific processing algorithms. The library must export specific functions that GStreamer expects; refer to NVIDIA's documentation for the exact interface requirements.*

```bash
gst-launch-1.0 \
  nvarguscamerasrc ! \
  "video/x-raw(memory:NVMM),width=3840,height=2160,format=NV12,framerate=10/1" ! \
  nvivafilter cuda-process=true \
    customer-lib-name="libnvsample_cudaprocess.so" ! \
  "video/x-raw(memory:NVMM),format=NV12" ! \
  nv3dsink -e
```
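A third quick variant covers the high-fidelity-capture goal by recording to disk through the same hardware encoder. This is a minimal sketch; the bitrate, filename, and `num-buffers=300` (roughly ten seconds at 30 FPS, to bound the test run) are placeholders:

```bash
# -e sends EOS on shutdown so qtmux can finalize a playable MP4
gst-launch-1.0 -e nvarguscamerasrc num-buffers=300 ! \
  'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! \
  nvv4l2h264enc bitrate=8000000 ! h264parse ! qtmux ! \
  filesink location=test_capture.mp4
```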
### 2.3 Camera Sensor → Encode → Network Transmission

This pipeline demonstrates a complete flow: capturing from a V4L2 device (like a USB webcam), regulating the frame rate, uploading to hardware memory (NVMM), hardware encoding, and streaming over UDP. This is the most common use case for robotics teleoperation, where video needs to be transmitted to a remote base station with minimal latency and bandwidth usage.

**Pipeline Breakdown:**

1. **`v4l2src`:** Captures from V4L2-compatible devices (USB cameras, some CSI cameras). The `device` parameter specifies which camera to use (check available devices with `v4l2-ctl --list-devices`).

2. **`videorate`:** Ensures a consistent frame rate by dropping or duplicating frames as needed. This is important for stable encoding and network transmission.

3. **`nvvidconv`:** Converts the video format to NV12 (required for NVENC) and uploads to NVMM memory. This is where the zero-copy path begins.

4. **`nvv4l2h264enc`:** Hardware H.264 encoder with several important parameters:
   - `maxperf-enable=1`: Maximizes encoding performance (may increase power consumption)
   - `control-rate=1`: Constant bitrate mode (`control-rate=0` selects variable bitrate)
   - `bitrate=2500000`: Target bitrate in bits per second (2.5 Mbps)
   - `iframeinterval=3`: Insert an I-frame every 3 frames; note the unit is frames, not seconds, so at 10 FPS this yields several I-frames per second, favoring fast error recovery over compression efficiency
   - `insert-sps-pps=true`: Include sequence and picture parameter sets in the stream (required for some decoders)

5. **`h264parse`:** Parses the H.264 stream and ensures proper structure for RTP transmission.

6. **`rtph264pay`:** Packages H.264 NAL units into RTP packets for network transmission. The `config-interval` parameter controls how often SPS/PPS are sent.

7. **`udpsink`:** Transmits RTP packets over UDP. The `sync=false` and `async=false` parameters disable synchronization to minimize latency (important for real-time robotics applications).

**Pipeline Command:**

```bash
gst-launch-1.0 -e v4l2src device=/dev/video4 ! \
  video/x-raw,format=YUY2,width=1920,height=1080,framerate=10/1 ! \
  videorate ! video/x-raw,format=YUY2,framerate=10/1 ! \
  nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
  nvv4l2h264enc maxperf-enable=1 control-rate=1 bitrate=2500000 \
    iframeinterval=3 idrinterval=1 insert-sps-pps=true preset-level=1 ! \
  h264parse ! rtph264pay config-interval=1 pt=96 ! \
  udpsink host=10.3.1.10 port=5000 sync=false async=false
```

**Matching Receiver Pipeline (PC/Client Side)**

To view this stream on a receiving base station (a Linux computer), use the following command. This receiver pipeline reverses the encoding process: it receives RTP packets, extracts H.264 data, decodes it, and displays it. The pipeline can run on any Linux system with GStreamer installed (it does not require Jetson hardware).

**Receiver Pipeline Breakdown:**

1. **`udpsrc`:** Receives UDP packets on the specified port. Make sure your firewall allows UDP traffic on this port.

2. **`rtph264depay`:** Extracts H.264 NAL units from RTP packets.

3. **`h264parse`:** Parses the H.264 stream structure.

4. **`avdec_h264`:** Software H.264 decoder (hardware decoding is available on some systems with `vaapidecodebin`).

5. **`videoconvert`:** Converts the decoded video format for display.

6. **`autovideosink`:** Automatically selects the best video output (X11, Wayland, etc.) for your system.

**Network Considerations:**

- **Bandwidth:** Ensure your network can handle the bitrate (2.5 Mbps in this example). For wireless connections, consider lower bitrates (1-1.5 Mbps) for stability.

- **Latency:** UDP provides low latency but no reliability. For critical applications, consider adding error correction or using TCP (with higher latency).

- **Multicast:** For multiple receivers, use multicast addresses (e.g., `udpsink host=224.1.1.1`) instead of unicast.

```bash
gst-launch-1.0 udpsrc port=5000 ! \
  'application/x-rtp, encoding-name=H264, payload=96' ! \
  rtph264depay ! h264parse ! avdec_h264 ! \
  videoconvert ! autovideosink sync=false
```
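If the received video stutters over lossy Wi-Fi links, one common refinement is an `rtpjitterbuffer` in front of the depayloader. This is a sketch; the 50 ms `latency` value is an assumption to tune against your network (larger values smooth more at the cost of added delay):

```bash
gst-launch-1.0 udpsrc port=5000 ! \
  'application/x-rtp, encoding-name=H264, payload=96' ! \
  rtpjitterbuffer latency=50 ! rtph264depay ! h264parse ! avdec_h264 ! \
  videoconvert ! autovideosink sync=false
```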
## 3. Integrating into Code (Python & C++)

### 3.1 Python

Use the `Gst.parse_launch()` function to copy your working terminal command into Python verbatim. This is the quickest way to get started: test your pipeline with `gst-launch-1.0` first, then use the exact same pipeline string in your Python code. For production applications, however, consider building pipelines programmatically from individual elements for better error handling and dynamic configuration.

**Prerequisites:**

```bash
sudo apt-get install python3-gi python3-gst-1.0
```

**Python Script:**

```python
import sys

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

def main():
    # 1. Initialize GStreamer
    Gst.init(sys.argv)

    # 2. Define the pipeline (exact same string as the terminal!)
    # Note: we use 'appsink' to get the data into Python (e.g., for OpenCV)
    cmd = (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=10/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! "
        "appsink name=mysink emit-signals=True"
    )

    # 3. Create the pipeline
    pipeline = Gst.parse_launch(cmd)

    # 4. Capture data for OpenCV-style processing: map each buffer and
    # hand it to your own code. (The NumPy conversion is left as a
    # comment so the script runs without extra dependencies.)
    appsink = pipeline.get_by_name("mysink")

    def on_new_sample(sink):
        sample = sink.emit("pull-sample")
        if sample is None:
            return Gst.FlowReturn.ERROR
        buf = sample.get_buffer()
        structure = sample.get_caps().get_structure(0)
        width = structure.get_value("width")
        height = structure.get_value("height")
        ok, mapinfo = buf.map(Gst.MapFlags.READ)
        if not ok:
            return Gst.FlowReturn.ERROR
        try:
            # frame = np.ndarray((height, width, 3), dtype=np.uint8,
            #                    buffer=mapinfo.data)  # BGR, ready for OpenCV
            pass  # ... your per-frame processing here ...
        finally:
            buf.unmap(mapinfo)
        return Gst.FlowReturn.OK

    appsink.connect("new-sample", on_new_sample)

    # 5. Start playing
    pipeline.set_state(Gst.State.PLAYING)

    # 6. Run the main loop (required for GStreamer events)
    loop = GLib.MainLoop()
    try:
        loop.run()
    except KeyboardInterrupt:
        pass

    # 7. Cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    main()
```
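When the script misbehaves, run it with a modest GStreamer debug level to surface caps-negotiation and state-change problems (the script filename here is simply whatever you saved the example as):

```bash
# level 2 prints warnings; raise to 3 or 4 for element-level detail
GST_DEBUG=2 python3 gst_camera.py
```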
### 3.2 C++

C++ offers robust error handling and lower overhead. For high-performance robotics applications, C++ is often preferred due to its deterministic performance characteristics and direct access to GStreamer's C API. The C++ bindings provide a more object-oriented interface while maintaining full compatibility with the underlying C library.

**Best Practices for C++ Integration:**

- Always check the `GError` after `gst_parse_launch()`; the call can report an error while still returning a partially constructed pipeline, so an unchecked result can fail in confusing ways later.

- Use `gst_bus_set_sync_handler()` for thread-safe message handling in multi-threaded applications.

- For dynamic pipeline construction, use `gst_element_factory_make()` instead of `gst_parse_launch()` for better control and error reporting.

- Consider using GStreamer's `gst_parse_bin_from_description()` for creating sub-pipelines that can be easily modified at runtime.

**CMakeLists.txt Dependencies:**

You need `gstreamer-1.0` and `gobject-2.0`.

```cmake
find_package(PkgConfig REQUIRED)
pkg_check_modules(GST REQUIRED gstreamer-1.0)

include_directories(${GST_INCLUDE_DIRS})
link_directories(${GST_LIBRARY_DIRS})

add_executable(my_robot_vision main.cpp)
target_link_libraries(my_robot_vision ${GST_LIBRARIES})
```

**C++ Code:**

```cpp
#include <gst/gst.h>

#include <iostream>

int main(int argc, char *argv[]) {
  // 1. Initialize GStreamer
  gst_init(&argc, &argv);

  // 2. Create the pipeline
  // "playbin" is an easy example, but for robotics, build manually or use parse_launch
  GError *error = nullptr;
  const gchar *pipeline_str =
      "nvarguscamerasrc ! "
      "video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=10/1 ! "
      "queue ! nv3dsink";

  GstElement *pipeline = gst_parse_launch(pipeline_str, &error);
  if (error) {
    std::cerr << "Pipeline error: " << error->message << std::endl;
    g_error_free(error);
    return -1;
  }

  // 3. Start playing
  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  // 4. Wait until error or EOS (End Of Stream)
  // In a real application, handle messages in a separate thread or use
  // async message handling to avoid blocking the main loop
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(
      bus, GST_CLOCK_TIME_NONE,
      (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

  // For production code, consider using gst_bus_add_watch() with a callback
  // to handle messages asynchronously, allowing your main loop to continue
  // processing robot control logic while monitoring the pipeline

  // 5. Cleanup
  if (msg != nullptr) {
    gst_message_unref(msg);
  }
  gst_object_unref(bus);
  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}
```
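To try it without wiring up a full CMake project, a direct `pkg-config` build works as well (assuming the code above is saved as `main.cpp`):

```bash
g++ main.cpp -o my_robot_vision $(pkg-config --cflags --libs gstreamer-1.0)
./my_robot_vision
```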
## References

### Official Documentation

- **[GStreamer Official Documentation](https://gstreamer.freedesktop.org/documentation/)**
  Comprehensive documentation for the GStreamer framework, including tutorials, API references, and plugin documentation.

- **[GStreamer Application Development Manual](https://gstreamer.freedesktop.org/documentation/application-development/)**
  Detailed guide for developing applications with GStreamer, covering pipeline construction, element management, and event handling.

- **[NVIDIA Jetson Linux Developer Guide](https://docs.nvidia.com/jetson/l4t/index.html)**
  Official documentation for Jetson Linux, including GStreamer integration, hardware acceleration, and platform-specific features.

- **[NVIDIA DeepStream SDK Documentation](https://developer.nvidia.com/deepstream-sdk)**
  Advanced video analytics SDK built on GStreamer for Jetson platforms, useful for AI inference pipelines.

### GStreamer Plugins and Elements

- **[GStreamer Plugin Reference](https://gstreamer.freedesktop.org/documentation/plugins/index.html)**
  Complete reference for all GStreamer plugins, including NVIDIA-specific plugins like `nvarguscamerasrc`, `nvvidconv`, and `nvv4l2h264enc`.

- **[GStreamer Tools Documentation](https://gstreamer.freedesktop.org/documentation/tools/index.html)**
  Documentation for command-line tools like `gst-launch-1.0`, `gst-inspect-1.0`, and `gst-discoverer-1.0`.

- **[NVIDIA GStreamer Plugins](https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/gstreamer.html)**
  NVIDIA-specific GStreamer plugin documentation for Jetson platforms, including hardware-accelerated video processing.

### Python and C++ Bindings

- **[PyGObject (GStreamer Python Bindings)](https://pygobject.readthedocs.io/)**
  Python bindings for GStreamer using GObject Introspection, enabling Python integration with GStreamer pipelines.

- **[GStreamer Core API Reference](https://gstreamer.freedesktop.org/documentation/gstreamer/index.html)**
  Core API documentation for GStreamer (a C API, used directly from C++), covering the classes and functions needed for application development.

### Tutorials and Examples

- **[GStreamer Basic Tutorials](https://gstreamer.freedesktop.org/documentation/tutorials/index.html)**
  Step-by-step tutorials covering basic concepts, pipeline construction, and common use cases.

- **[NVIDIA Jetson GStreamer Examples](https://github.com/NVIDIA/jetson-gstreamer-examples)**
  Official GitHub repository with GStreamer examples specifically for Jetson platforms.

- **[GStreamer Pipeline Examples](https://gstreamer.freedesktop.org/documentation/tools/gst-launch.html)**
  Collection of `gst-launch-1.0` pipeline examples for various use cases.

### Network Streaming

- **[RTP/RTSP Streaming with GStreamer](https://gstreamer.freedesktop.org/documentation/tutorials/basic/network-streaming.html)**
  Tutorial on network streaming protocols (RTP, RTSP, UDP) with GStreamer.

- **[GStreamer RTSP Server](https://github.com/GStreamer/gst-rtsp-server)**
  RTSP server implementation for GStreamer, useful for advanced streaming scenarios.

### Troubleshooting and Debugging

- **[GStreamer Debugging Guide](https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html)**
  Tools and techniques for debugging GStreamer pipelines, including `GST_DEBUG` environment variables.

- **[GStreamer FAQ](https://gstreamer.freedesktop.org/documentation/frequently-asked-questions/index.html)**
  Frequently asked questions and common issues with GStreamer.

### Related Resources

- **[JetsonHacks GStreamer Tutorials](https://www.jetsonhacks.com/)**
  Community tutorials and guides for Jetson platforms, including GStreamer examples.

- **[OpenCV with GStreamer](https://docs.opencv.org/master/d0/d86/tutorial_gstreamer_pipeline.html)**
  Tutorial on integrating OpenCV with GStreamer pipelines for computer vision applications.

- **[ROS 2 GStreamer Integration](https://github.com/ros-drivers/gscam)**
  ROS 2 camera driver using GStreamer for video capture and streaming.

diff --git a/wiki/networking/index.md b/wiki/networking/index.md
index aeccbc2b..9f6f49df 100644
--- a/wiki/networking/index.md
+++ b/wiki/networking/index.md
@@ -24,6 +24,9 @@ This section focuses on **networking techniques, tools, and protocols** commonly
 - **[XBee Pro DigiMesh 900](/wiki/networking/xbee-pro-digimesh-900/)**
   A tutorial for configuring XBee Pro DigiMesh modules to create low-bandwidth mesh networks. Includes instructions for using X-CTU for module setup and a Python-based script for serial communication over a mesh network. Covers advanced configurations like multi-transmit, broadcast radius, and mesh retries.
 
+- **[Real-Time Video Processing on Jetson with GStreamer](/wiki/networking/gstreamer-jetson-realtime-video/)**
+  A comprehensive guide to using GStreamer on Jetson Orin for hardware-accelerated video streaming over networks. Covers zero-copy video processing with NVMM, UDP/RTP streaming pipelines, H.264 encoding, and integration with Python/C++ for teleoperation and remote monitoring applications.
+
 ## Resources
 
 ### General Networking