{"id":8648,"date":"2023-06-21T12:21:13","date_gmt":"2023-06-21T12:21:13","guid":{"rendered":"http:\/\/canopyvisionai.com\/?p=8648"},"modified":"2023-07-13T09:33:27","modified_gmt":"2023-07-13T09:33:27","slug":"canopy-vision-camera-latency-testing","status":"publish","type":"post","link":"http:\/\/canopyvisionai.com\/index.php\/2023\/06\/21\/canopy-vision-camera-latency-testing\/","title":{"rendered":"Canopy Vision &#8211; Camera Latency Testing"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"8648\" class=\"elementor elementor-8648\">\n\t\t\t\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-6b0959a9 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"6b0959a9\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-45eb61a\" data-id=\"45eb61a\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t\t\t<div class=\"elementor-element elementor-element-d8d4cbf elementor-widget elementor-widget-heading\" data-id=\"d8d4cbf\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<style>\/*! 
elementor - v3.14.0 - 26-06-2023 *\/\n.elementor-heading-title{padding:0;margin:0;line-height:1}.elementor-widget-heading .elementor-heading-title[class*=elementor-size-]>a{color:inherit;font-size:inherit;line-height:inherit}.elementor-widget-heading .elementor-heading-title.elementor-size-small{font-size:15px}.elementor-widget-heading .elementor-heading-title.elementor-size-medium{font-size:19px}.elementor-widget-heading .elementor-heading-title.elementor-size-large{font-size:29px}.elementor-widget-heading .elementor-heading-title.elementor-size-xl{font-size:39px}.elementor-widget-heading .elementor-heading-title.elementor-size-xxl{font-size:59px}<\/style><h2 class=\"elementor-heading-title elementor-size-default\">Canopy Vision - Camera Latency Testing<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2e6ff10c elementor-widget elementor-widget-text-editor\" data-id=\"2e6ff10c\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<style>\/*! 
elementor - v3.14.0 - 26-06-2023 *\/\n.elementor-widget-text-editor.elementor-drop-cap-view-stacked .elementor-drop-cap{background-color:#69727d;color:#fff}.elementor-widget-text-editor.elementor-drop-cap-view-framed .elementor-drop-cap{color:#69727d;border:3px solid;background-color:transparent}.elementor-widget-text-editor:not(.elementor-drop-cap-view-default) .elementor-drop-cap{margin-top:8px}.elementor-widget-text-editor:not(.elementor-drop-cap-view-default) .elementor-drop-cap-letter{width:1em;height:1em}.elementor-widget-text-editor .elementor-drop-cap{float:left;text-align:center;line-height:1;font-size:50px}.elementor-widget-text-editor .elementor-drop-cap-letter{display:inline-block}<\/style>\t\t\t\t<!-- wp:heading {\"level\":1} -->\n<h2><span style=\"color: #000000;\">Overview<\/span><\/h2>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>This report outlines a series of tests to understand the end-to-end latency of an A.I. vision pipeline using the Canopy Vision application. The goal of these tests is to measure the true delay between when an image is presented to a camera sensor and when the output of a deep neural network is displayed. The tests below vary model architectures, compute hardware, networking architectures, camera framerates, and more. Many factors affect the overall end-to-end latency of a vision pipeline \u2014 these test results are meant to establish reasonable expectations and to help guide initial project planning for production deployments.<\/p>\n<!-- \/wp:paragraph --><!-- wp:paragraph -->\n<p>\u00a0<\/p>\n<!-- \/wp:paragraph --><!-- wp:heading -->\n<h2><span style=\"color: #000000;\">Test Setup<\/span><\/h2>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>To truly test end-to-end latency, we must know the exact time an image is displayed as well as the exact time the inference output is displayed. To do this, we set up a computer with an LED monitor. 
The LED monitor should ideally have as high a refresh rate as possible; most monitors run at 60 Hz. In our test setup, we used a Samsung Odyssey monitor, which supports up to a 240 Hz refresh rate. Due to other hardware limitations, we were only able to achieve an actual refresh rate of 120 Hz.<\/p>\n<!-- \/wp:paragraph --><!-- wp:image {\"id\":8649,\"sizeSlug\":\"large\",\"linkDestination\":\"none\"} -->\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"1024\" height=\"575\" class=\"wp-image-8649\" src=\"http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/image2033-1024x575.png\" alt=\"\" \/>\n<figcaption>To find the actual refresh rate, you can use the following tool:\u00a0<a href=\"https:\/\/www.testufo.com\/\">https:\/\/www.testufo.com<\/a><\/figcaption>\n<\/figure>\n<!-- \/wp:image --><!-- wp:paragraph -->\n<p>The computer connected to this LED monitor will need to run three application windows:<\/p>\n<!-- \/wp:paragraph --><!-- wp:list -->\n<ul>\n<li>Time Looping script\n<ul>\n<li>Continuously looping the current time (with milliseconds)<\/li>\n<li><span style=\"color: #ff0000;\"> time_loop.py<\/span><\/li>\n<\/ul>\n<\/li>\n<li>SSH Terminal Window to the compute resource\n<ul>\n<li>When a button is pressed on the keyboard, an image of a woman wearing a hardhat will appear on the monitor<\/li>\n<li><span style=\"color: #ff0000;\">display_image.py<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<!-- \/wp:list --><!-- wp:paragraph -->\n<p>A camera will then be positioned to observe this LED monitor, making sure it can clearly see the image displayed by the display script.<\/p>\n<!-- \/wp:paragraph --><!-- wp:paragraph -->\n<p>In a separate window, it may be helpful to display a live RTSP stream for camera positioning. 
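The report names time_loop.py without showing its source. A minimal sketch of such a millisecond clock might look like the following (the function names and update interval are assumptions, not the original script):

```python
# Hypothetical sketch of a "time_loop.py"-style script: the original
# source is not included in the post, so the details here are assumptions.
import time

def current_time_ms() -> str:
    """Wall-clock time formatted as HH:MM:SS.mmm."""
    now = time.time()
    millis = int((now % 1) * 1000)
    return time.strftime("%H:%M:%S", time.localtime(now)) + f".{millis:03d}"

def run_clock(duration_s: float = 5.0) -> None:
    """Redraw the timestamp in place so the monitor always shows a live clock."""
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        print(current_time_ms(), end="\r", flush=True)
        time.sleep(0.001)  # update roughly once per millisecond
```

In practice the clock only needs to update faster than the monitor refresh (120 Hz here) for the recording to capture a fresh timestamp each frame.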
Additionally, a window showing the details of each specific test will help keep track of each test configuration.<\/p>\n<!-- \/wp:paragraph --><!-- wp:paragraph -->\n<p>Finally, the test will be recorded from a separate camera. For these tests, we used an iPhone with 240FPS slow-motion video recording.<\/p>\n<!-- \/wp:paragraph --><!-- wp:paragraph -->\n<p>An image of the test setup is shown below. You can see the three application windows on the right.<\/p>\n<!-- \/wp:paragraph --><!-- wp:image {\"id\":8650,\"sizeSlug\":\"large\",\"linkDestination\":\"none\"} -->\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" width=\"1024\" height=\"768\" class=\"wp-image-8650\" src=\"http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-1024x768.jpg\" alt=\"\" \/>\n<figcaption>Our setup with the camera pointed at the monitor<\/figcaption>\n<\/figure>\n<!-- \/wp:image --><!-- wp:paragraph -->\n<p>When the setup is ready to test:<\/p>\n<!-- \/wp:paragraph --><!-- wp:list -->\n<ul>\n<li>Begin recording on the iPhone in slo-mo<\/li>\n<li>Press the key on the keyboard to display the image<\/li>\n<li>Wait for the output to appear on the SSH window<\/li>\n<li>Stop the iPhone recording<\/li>\n<\/ul>\n<!-- \/wp:list --><!-- wp:paragraph -->\n<p>End-to-end latency can then be calculated as the time difference between the first frame where the image appears and the first frame where the inference output appears.<\/p>\n<!-- \/wp:paragraph --><!-- wp:heading {\"level\":1} -->\n<h2><span style=\"color: #000000;\">Test Results<\/span><\/h2>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>Top-level results are shown in the table below. Each row indicates a separate test. Three individual measurements were taken for each test. Specific details on each test setup are provided below. 
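The frame-difference analysis described above reduces to simple arithmetic over frame indices in the 240FPS recording. A sketch (the function name and example indices are illustrative, not from the report):

```python
# Sketch of the frame-difference latency calculation: count frames in the
# slow-motion video between the image appearing and the output appearing.
SLOMO_FPS = 240  # iPhone slow-motion framerate used in these tests

def latency_ms(image_frame: int, output_frame: int, fps: int = SLOMO_FPS) -> float:
    """End-to-end latency implied by two frame indices in the recording."""
    if output_frame < image_frame:
        raise ValueError("output cannot appear before the image")
    return (output_frame - image_frame) * 1000.0 / fps
```

Note that at 240FPS each recorded frame spans about 4.2 ms, which is the resolution of a single measurement.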
The specific slow-mo video files are available upon request.<\/p>\n<table>\n<thead>\n<tr>\n<th>Test Setup Description<\/th>\n<th>1<\/th>\n<th>2<\/th>\n<th>3<\/th>\n<th>Average<\/th>\n<th>FPS<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Canopy Vision Edge Camera, Jetson Nano, 5W, PeopleNet (14.0MB FP16), 500&#215;500, 60FPS camera sensor<\/td>\n<td>1150 ms<\/td>\n<td>1160 ms<\/td>\n<td>1160 ms<\/td>\n<td>1157 ms<\/td>\n<td>17.3<\/td>\n<\/tr>\n<tr>\n<td>Canopy Vision Edge Camera, Jetson Nano, 10W, PeopleNet (14.0MB FP16), 500&#215;500, 60FPS camera sensor<\/td>\n<td>890 ms<\/td>\n<td>900 ms<\/td>\n<td>930 ms<\/td>\n<td>907 ms<\/td>\n<td>24.1<\/td>\n<\/tr>\n<tr>\n<td>Canopy Vision Edge Camera, Jetson Nano, 10W, Custom Hard Hat Model (2.5MB FP16), 500&#215;500, 60FPS camera sensor<\/td>\n<td>350 ms<\/td>\n<td>340 ms<\/td>\n<td>340 ms<\/td>\n<td>343 ms<\/td>\n<td>60+<\/td>\n<\/tr>\n<tr>\n<td>General IP Network Camera (SV3C), 1920&#215;1080, RTSP w\/H.264, 25FPS on LAN to Jetson Nano, 10W, Custom Hard Hat Model (2.5MB FP16), 500&#215;500<\/td>\n<td>536 ms<\/td>\n<td>409 ms<\/td>\n<td>533 ms<\/td>\n<td>493 ms<\/td>\n<td>25+<\/td>\n<\/tr>\n<tr>\n<td>Canopy IMX477 RTSP w\/H.264, 60FPS located in Tampa on 500\/500mbps internet to Google Cloud VM instance with A100 GPU located in us-east1, Docker Container, Custom Hard Hat Model (856kb FP16), 500&#215;500<\/td>\n<td>204 ms<\/td>\n<td>342 ms<\/td>\n<td>331 ms<\/td>\n<td>292 ms<\/td>\n<td>60+<\/td>\n<\/tr>\n<tr>\n<td>General IP Network Camera (SV3C), 1920&#215;1080, RTSP w\/H.264, 25FPS located in Tampa on 500\/500mbps internet to Google Cloud VM instance with A100 GPU located in us-east1, Docker Container, Custom Hard Hat Model (856kb FP16), 500&#215;500<\/td>\n<td>255 ms<\/td>\n<td>236 ms<\/td>\n<td>250 ms<\/td>\n<td>247 ms<\/td>\n<td>25+<\/td>\n<\/tr>\n<tr>\n<td>General IP Network Camera (SV3C), 1920&#215;1080, RTSP w\/H.264, 25FPS located in Tampa on 500\/500mbps internet to Google Cloud VM instance 
with A100 GPU located in us-east1, Docker Container, Custom Hard Hat Model (856kb FP16), 500&#215;500, 10 containers running concurrently<\/td>\n<td>285 ms<\/td>\n<td>251 ms<\/td>\n<td>315 ms<\/td>\n<td>284 ms<\/td>\n<td>25+<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<!-- \/wp:paragraph --><!-- wp:group -->\n<div class=\"wp-block-group\"><!-- wp:group -->\n<div class=\"wp-block-group\"><!-- wp:group {\"layout\":{\"type\":\"default\"}} -->\n<div class=\"wp-block-group\"><!-- wp:group -->\n<div class=\"wp-block-group\"><!-- wp:group -->\n<div class=\"wp-block-group\"><!-- wp:table {\"fontSize\":\"small\"} -->\n<figure class=\"wp-block-table has-small-font-size\">\n<figcaption>A top-level overview of the measured latency for each test configuration.<\/figcaption>\n<\/figure>\n<!-- \/wp:table --><\/div>\n<!-- \/wp:group --><\/div>\n<!-- \/wp:group --><\/div>\n<!-- \/wp:group --><\/div>\n<!-- \/wp:group --><\/div>\n<!-- \/wp:group --><!-- wp:heading {\"level\":3} -->\n<h3><span style=\"color: #000000;\">Canopy Vision Edge Camera, Jetson Nano, 5W, PeopleNet (14.0MB FP16), 500&#215;500, 60FPS camera sensor<\/span><\/h3>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>This test utilizes the Canopy Vision Integrated Edge Compute Camera, where a Sony IMX477 sensor is connected directly to a Jetson Nano compute module. For this test, the module was run in reduced power mode (5W). A standard off-the-shelf PeopleNet model was deployed to the device with FP16 precision. The inference engine file was 14.0MB. This model was pruned by NVIDIA to an unknown extent. The camera sensor was set to capture frames at 60FPS at a resolution of 1920&#215;1080. The video was downsized to 500&#215;500 resolution just prior to model inference. This setup had an average latency of 1157ms. 
Although the camera framerate was 60FPS, the actual framerate was only 17.3 FPS, with model inference being the bottleneck.<\/p>\n<p>\u00a0<\/p>\n<!-- \/wp:paragraph --><!-- wp:heading {\"level\":3} -->\n<h3><span style=\"color: #000000;\">Canopy Vision Edge Camera, Jetson Nano, 10W, PeopleNet (14.0MB FP16), 500&#215;500, 60FPS camera sensor<\/span><\/h3>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>The difference between this setup and the previous one was the increase in the Jetson Nano module's power mode from 5W to 10W. The overall framerate increased by 40% while the overall latency decreased by 22%.<\/p>\n<p>\u00a0<\/p>\n<!-- \/wp:paragraph --><!-- wp:heading {\"level\":3} -->\n<h3><span style=\"color: #000000;\">Canopy Vision Edge Camera, Jetson Nano, 10W, Custom Hard Hat Model (2.5MB FP16), 500&#215;500, 60FPS camera sensor<\/span><\/h3>\n<p>\u00a0<\/p>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>The difference between this setup and the previous one was the model deployed. In this configuration, the PeopleNet model was replaced with a Custom Hard Hat model trained by Monomer Software for demonstration purposes. This model was trained with a ResNet18 model architecture and was pruned heavily to reduce the model size. The inference engine file is only 2.5MB compared to the 14.0MB PeopleNet model. The overall latency was 343 ms. This model runs at the full 60FPS of the camera sensor and is therefore not a bottleneck in the vision pipeline. 
With different camera sensor settings, a higher overall FPS may be attainable, which would likely lead to a lower overall latency.<\/p>\n<p>\u00a0<\/p>\n<!-- \/wp:paragraph --><!-- wp:heading {\"level\":3} -->\n<h3><span style=\"color: #000000;\">General IP Network Camera (SV3C), 1920&#215;1080, RTSP w\/H.264, 25FPS on LAN to Jetson Nano, 10W, Custom Hard Hat Model (2.5MB FP16), 500&#215;500<\/span><\/h3>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>The difference between this setup and the previous one was the replacement of the integrated IMX477 camera sensor with a standard consumer-grade off-the-shelf IP Network Security Camera. This camera is a typical security camera, often purchased for consumer or light commercial use, with a price-point around $60. The camera outputs a multicast RTSP stream with a resolution of 1920&#215;1080 at 25FPS with H.264 compression. The camera was placed on the same LAN as a Canopy Edge Jetson Nano device. The Canopy Edge device was ingesting the RTSP stream of the IP camera and running the Hard Hat model. The average latency with this setup was 493ms, with the overall framerate limited by the IP camera. Just like in the previous example, if a camera with a higher framerate were utilized, the overall latency would likely be lower.<\/p>\n<p>\u00a0<\/p>\n<!-- \/wp:paragraph --><!-- wp:heading {\"level\":3} -->\n<h3><span style=\"color: #000000;\">Canopy IMX477 RTSP w\/H.264, 60FPS located in Tampa on 500\/500mbps internet to Google Cloud VM instance with A100 GPU located in us-east1, Docker Container, Custom Hard Hat Model (856kb FP16), 500&#215;500<\/span><\/h3>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>With this configuration, the computing was changed from being performed on a Canopy Edge Jetson Nano device to a datacenter GPU. The compute hardware was an NVIDIA A100 GPU running on a Google Cloud Compute Engine instance. The Compute Engine was located in the US-East region (South Carolina). 
The camera stream was a Canopy IMX477 camera streaming 60FPS from a Jetson Nano using the onboard video encoder. No inference was being performed on the Jetson Nano device. The Jetson Nano device was connected to the Monomer Software office LAN in Tampa, FL via Ethernet to a 500\/500mbps connection to the external internet. For this pipeline, the image captured by the camera in the Monomer Software office was transmitted via RTSP to the Google Cloud datacenter computer, analyzed with the Canopy Vision application running as a Docker container on the A100, and the inference output was returned back to the Monomer Software office, all with a total round-trip latency of only 292 ms. Multiple options are available to bring this latency even lower.<\/p>\n<p>\u00a0<\/p>\n<!-- \/wp:paragraph --><!-- wp:heading {\"level\":3} -->\n<h3><span style=\"color: #000000;\">General IP Network Camera (SV3C), 1920&#215;1080, RTSP w\/H.264, 25FPS located in Tampa on 500\/500mbps internet to Google Cloud VM instance with A100 GPU located in us-east1, Docker Container, Custom Hard Hat Model (856kb FP16), 500&#215;500<\/span><\/h3>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>The difference between this setup and the previous one was the replacement of the Canopy IMX477 camera with the same off-the-shelf RTSP camera used in other previous tests. This test configuration demonstrates the lowest average end-to-end latency at only 247 ms. The test video with the lowest reported latency is provided below. The image was displayed at 09.462. 
The output was displayed at 09.698, resulting in a total end-to-end latency of 236 ms.<\/p>\n<!-- \/wp:paragraph --><!-- wp:video {\"id\":8651} -->\n<figure class=\"wp-block-video\"><video src=\"http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3023.MOV.mov\" controls=\"controls\" width=\"300\" height=\"150\"><\/video><\/figure>\n<!-- \/wp:video --><!-- wp:heading {\"level\":3} -->\n<h3>\u00a0<\/h3>\n<h3><span style=\"color: #000000;\">General IP Network Camera (SV3C), 1920&#215;1080, RTSP w\/H.264, 25FPS located in Tampa on 500\/500mbps internet to Google Cloud VM instance with A100 GPU located in us-east1, Docker Container, Custom Hard Hat Model (856kb FP16), 500&#215;500, 10 containers running concurrently<\/span><\/h3>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>The difference between this setup and the previous one was adding additional Docker container instances on the same A100 VM. 10 Canopy Vision Docker Container instances were run in parallel reading the same RTSP camera stream from the Monomer Software office. This test was conducted to determine what impact multiple Docker container instances might have on an individual stream\u2019s latency. The resulting latency for this configuration was 37ms longer than the latency with only 1 Docker instance. Additional performance improvements could likely be made to reduce this additional latency overhead. During this test the GPU utilization and CPU utilization were measured, with screenshots provided below. 
Even at 10 instances, the camera stream was still steady at 25FPS, indicating that the additional streams on the same A100 GPU did not create a bottleneck.<\/p>\n<!-- \/wp:paragraph --><!-- wp:image {\"id\":8652,\"sizeSlug\":\"full\",\"linkDestination\":\"none\"} -->\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"982\" height=\"363\" class=\"wp-image-8652\" src=\"http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/image2034.png\" alt=\"\" \/><\/figure>\n<!-- \/wp:image --><!-- wp:image {\"id\":8653,\"sizeSlug\":\"full\",\"linkDestination\":\"none\"} -->\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"655\" height=\"479\" class=\"wp-image-8653\" src=\"http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/image2035.png\" alt=\"\" \/><\/figure>\n<!-- \/wp:image --><!-- wp:image {\"id\":8654,\"sizeSlug\":\"full\",\"linkDestination\":\"none\"} -->\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"742\" height=\"650\" class=\"wp-image-8654\" src=\"http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/image2036.png\" alt=\"\" \/><\/figure>\n<p>\u00a0<\/p>\n<!-- \/wp:image --><!-- wp:heading -->\n<h2><span style=\"color: #000000;\">Additional Notes and Observations<\/span><\/h2>\n<!-- \/wp:heading --><!-- wp:heading {\"level\":3} -->\n<h3><span style=\"color: #000000;\">Test setup limitations<\/span><\/h3>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>During previous testing with a 60Hz monitor and a Linux computer, we observed a latency measurement on the Canopy Edge Jetson Nano with the integrated IMX477 camera that was below 160ms. For the test results presented in this report, a 120Hz monitor was used with a Windows laptop, and no configuration was able to achieve &lt;235ms latency. This discrepancy challenges some of our assumptions about the test methodology. We intend to find the best combination of LED monitors and computers to provide the most accurate measurement of latency. 
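One way to reason about this limitation is to bound the quantization error of a single measurement: each displayed event snaps to the monitor's refresh period, and the slow-motion camera samples at its own frame period. The following is a rough, illustrative error budget, not the authors' analysis; only the 120 Hz and 240 FPS figures come from the report, while the two-hop model is an assumption:

```python
# Rough, illustrative error budget for the measurement rig: the stimulus
# monitor refreshes at 120 Hz and the slow-motion camera records at 240 FPS
# (figures from the report). The two-display-hop model is an assumption.
def frame_period_ms(hz: float) -> float:
    """Time between successive frames/refreshes, in milliseconds."""
    return 1000.0 / hz

monitor_ms = frame_period_ms(120)  # ~8.33 ms per monitor refresh
slomo_ms = frame_period_ms(240)    # ~4.17 ms between recorded frames

# Two display hops (the image appearing, the output appearing) plus one
# recording interval bound the quantization error of one measurement:
worst_case_error_ms = 2 * monitor_ms + slomo_ms  # ~20.8 ms
```

A ~21 ms bound cannot by itself explain the ~75 ms gap between the earlier sub-160 ms observation and the ~235 ms floor seen here, which supports the report's suspicion that other display or OS factors are involved.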
Knowing this, the reported latency numbers above are likely conservative estimates: delays in monitor refresh, application window drawing, and other factors outside the vision pipeline are likely to inflate the latency as measured in these tests.<\/p>\n<p>\u00a0<\/p>\n<!-- \/wp:paragraph --><!-- wp:heading {\"level\":3} -->\n<h3><span style=\"color: #000000;\">Off-The-Shelf IP Camera vs Canopy IMX477<\/span><\/h3>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>Surprisingly, the off-the-shelf IP camera with a 25FPS framerate showed lower latency than the IMX477 RTSP stream from the Jetson Nano, which was running at 60FPS. This is likely due to added latency on the Jetson Nano, either at the hardware encoding step or the networking step onboard the device. Additional testing may uncover why the Jetson Nano has higher latency for RTSP streaming.<\/p>\n<p>\u00a0<\/p>\n<!-- \/wp:paragraph --><!-- wp:heading {\"level\":3} -->\n<h3><span style=\"color: #000000;\">Post-Processing considerations<\/span><\/h3>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>These test results do not include any post-processing computation time. For most projects, additional post-processing business logic is necessary to determine if the inference output is in an alarm state or to generate other calculated metrics from the inference output. Because every project is different, the goal of this test was to determine latency prior to post-processing. 
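As an illustration of the kind of post-processing business logic described above, a hard-hat alarm check might look like the following. The detection format, labels, and threshold are hypothetical, chosen for this sketch; they are not Canopy Vision's actual output format:

```python
# Hypothetical post-processing step: raise an alarm when a confident
# "person" detection has no matching "hardhat" detection. The detection
# dict format and threshold are assumptions for illustration only.
from typing import Dict, List

def alarm_state(detections: List[Dict], conf_threshold: float = 0.5) -> bool:
    """True when more confident persons than hardhats are detected."""
    persons = [d for d in detections
               if d["label"] == "person" and d["confidence"] >= conf_threshold]
    hardhats = [d for d in detections
                if d["label"] == "hardhat" and d["confidence"] >= conf_threshold]
    return len(persons) > len(hardhats)
```

Logic at this level of complexity adds negligible compute time; heavier post-processing (tracking, zone intersection, aggregation) is what would meaningfully add to the latencies reported above.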
This test can easily be repeated with specific post-processing logic applied.<\/p>\n<p>\u00a0<\/p>\n<!-- \/wp:paragraph --><!-- wp:heading -->\n<h2><span style=\"color: #000000;\">Proposed Future Test Configurations<\/span><\/h2>\n<!-- \/wp:heading --><!-- wp:paragraph -->\n<p>Based on these tests and Monomer Software\u2019s expertise in the field, the following variations should be tested:<\/p>\n<!-- \/wp:paragraph --><!-- wp:list -->\n<ul>\n<li>A100 vs T4 vs other GPU Hardware<\/li>\n<li>Cloud Compute Region and other Network Adjustments<\/li>\n<li>Multicast vs Unicast RTSP<\/li>\n<li>H.264 vs H.265 vs MJPEG RTSP compression<\/li>\n<li>RTSP vs other streaming protocols<\/li>\n<li>Jetson Nano vs Jetson TX2 NX vs Jetson Xavier NX<\/li>\n<li>Model Inference Data communicated via TCP vs UDP<\/li>\n<li>INT8 vs FP16 vs FP32<\/li>\n<\/ul>\n<!-- \/wp:list -->\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Canopy Vision &#8211; Camera Latency Testing Overview This report outlines a series of tests to understand the end-to-end latency of an A.I. vision pipeline using the Canopy Vision application. 
The goal of these tests is to understand the true delay between when an image is presented to a camera sensor and when the output of&#8230;<\/p>\n","protected":false},"author":1,"featured_media":8650,"comment_status":"open","ping_status":"open","sticky":false,"template":"elementor_header_footer","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0},"categories":[59],"tags":[],"acf":[],"rttpg_featured_image_url":{"full":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-scaled.jpg",2560,1920,false],"landscape":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-scaled.jpg",2560,1920,false],"portraits":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-scaled.jpg",2560,1920,false],"thumbnail":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-150x150.jpg",150,150,true],"medium":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-300x225.jpg",300,225,true],"large":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-1024x768.jpg",1024,768,true],"saasland_370x300":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x300.jpg",370,300,true],"saasland_85x70":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-85x70.jpg",85,70,true],"saasland_228x405":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-228x405.jpg",228,405,true],"saasland_370x280":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x280.jpg",370,280,true],"saasland_370x700":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x700.jpg",370,700,true],"saasland_370x190":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x190.jpg",370,190,true],"saasland_80x80":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/
2023\/06\/IMG_3019-80x80.jpg",80,80,true],"saasland_70x70":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-70x70.jpg",70,70,true],"saasland_83x88":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-83x88.jpg",83,88,true],"saasland_100x100":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-100x100.jpg",100,100,true],"saasland_85x90":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-85x90.jpg",85,90,true],"saasland_960x500":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-960x500.jpg",960,500,true],"saasland_370x400":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x400.jpg",370,400,true],"saasland_270x350":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-270x350.jpg",270,350,true],"saasland_570x400":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-570x400.jpg",570,400,true],"saasland_640x450":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-640x450.jpg",640,450,true],"saasland_480x450":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-480x450.jpg",480,450,true],"saasland_240x220":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-240x220.jpg",240,220,true],"saasland_240x250":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-240x250.jpg",240,250,true],"saasland_450x420":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-450x420.jpg",450,420,true],"saasland_80x90":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-80x90.jpg",80,90,true],"saasland_350x360":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-350x360.jpg",350,360,true],"saasland_350x400":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-350x400.jpg",350,400,true],"saasland_370x440":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x440.jpg",370,440
,true],"saasland_560x400":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-560x400.jpg",560,400,true],"saasland_370x320":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x320.jpg",370,320,true],"saasland_250x320":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x320.jpg",370,320,true],"saasland_270x330":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-270x330.jpg",270,330,true],"saasland_700x480":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-700x480.jpg",700,480,true],"saasland_370x480":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x480.jpg",370,480,true],"saasland_1170x675":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-1170x675.jpg",1170,675,true],"saasland_370x418":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x418.jpg",370,418,true],"saasland_480x480":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-480x480.jpg",480,480,true],"saasland_634x480":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-634x480.jpg",634,480,true],"saasland_960x670":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-960x670.jpg",960,670,true],"saasland_470x520":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-470x520.jpg",470,520,true],"saasland_670x670":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-670x670.jpg",670,670,true],"saasland_370x370":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x370.jpg",370,370,true],"saasland_170x120":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-170x120.jpg",170,120,true],"1536x1536":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-1536x1152.jpg",1536,1152,true],"2048x2048":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-2048x1536.jpg",2048,1536,true],"saasla
nd_370x360":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x360.jpg",370,360,true],"saasland_770x480":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-770x480.jpg",770,480,true],"saasland_570x340":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-570x340.jpg",570,340,true],"saasland_110x80":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-110x80.jpg",110,80,true],"saasland_800x400":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-800x400.jpg",800,400,true],"saasland_455x600":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-455x600.jpg",455,600,true],"saasland_520x300":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-520x300.jpg",520,300,true],"saasland_75x75":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-75x75.jpg",75,75,true],"saasland_240x200":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-240x200.jpg",240,200,true],"saasland_370x350":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-370x350.jpg",370,350,true],"saasland_350x365":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-350x365.jpg",350,365,true],"saasland_670x450":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-670x450.jpg",670,450,true],"saasland_1170x600":["http:\/\/canopyvisionai.com\/wp-content\/uploads\/2023\/06\/IMG_3019-1170x600.jpg",1170,600,true]},"rttpg_author":{"display_name":"Harry Helmrich","author_link":"http:\/\/canopyvisionai.com\/index.php\/author\/harrycanopy-vision-ai\/"},"rttpg_comment":5,"rttpg_category":"<a href=\"http:\/\/canopyvisionai.com\/index.php\/category\/technical\/\" rel=\"category tag\">Technical<\/a>","rttpg_excerpt":"Canopy Vision &#8211; Camera Latency Testing Overview This report outlines a series of tests to understand the end-to-end latency of an A.I. 
vision pipeline using the Canopy Vision application. The goal of these tests is to understand the true delay between when an image is presented to a camera sensor and when the output of...","_links":{"self":[{"href":"http:\/\/canopyvisionai.com\/index.php\/wp-json\/wp\/v2\/posts\/8648"}],"collection":[{"href":"http:\/\/canopyvisionai.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/canopyvisionai.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/canopyvisionai.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/canopyvisionai.com\/index.php\/wp-json\/wp\/v2\/comments?post=8648"}],"version-history":[{"count":19,"href":"http:\/\/canopyvisionai.com\/index.php\/wp-json\/wp\/v2\/posts\/8648\/revisions"}],"predecessor-version":[{"id":9246,"href":"http:\/\/canopyvisionai.com\/index.php\/wp-json\/wp\/v2\/posts\/8648\/revisions\/9246"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/canopyvisionai.com\/index.php\/wp-json\/wp\/v2\/media\/8650"}],"wp:attachment":[{"href":"http:\/\/canopyvisionai.com\/index.php\/wp-json\/wp\/v2\/media?parent=8648"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/canopyvisionai.com\/index.php\/wp-json\/wp\/v2\/categories?post=8648"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/canopyvisionai.com\/index.php\/wp-json\/wp\/v2\/tags?post=8648"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}