Check out this license plate reader running two separate pre-trained machine learning models. The first model identifies whether there are license plates in frame and, if so, where they are located. The second model identifies all symbols in the license plate and converts them to text. Multiple plates can be detected and decoded at the same time, with a total inference time, including pre- and post-processing of image data, of only 200 ms!
🔷 Both models are executed in place from external OSPI-connected NVM
🔷 Tensor arena RAM use is approximately 2 MB
🔷 Quantized model sizes are 5.2 MB for detection and 11 MB for decoding
🔷 Helium-accelerated ISP pipeline executing on the Arm Cortex-M55 core
ipXchange Jerome Schang #cortexm55 #cortex #arm Arm #machinelearning #artificialintelligence
Alif Semiconductor’s Post
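The two-stage flow described above can be sketched in a few lines of Python. This is a minimal sketch, not Alif's implementation: `detect_plates` and `decode_plate` are hypothetical stand-ins for the quantized detection (5.2 MB) and decoding (11 MB) models, returning fixed dummy values so the pipeline structure is visible.

```python
import numpy as np

def detect_plates(frame):
    # Stand-in for the detection model: returns plate bounding
    # boxes as (x, y, w, h). Hypothetical fixed output for illustration.
    return [(40, 60, 120, 30)]

def decode_plate(crop):
    # Stand-in for the decoding model: maps a plate crop to text.
    return "ABC123"

def read_plates(frame):
    """Two-stage pipeline: locate every plate, then decode each crop."""
    results = []
    for (x, y, w, h) in detect_plates(frame):
        crop = frame[y:y + h, x:x + w]
        results.append(decode_plate(crop))
    return results

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy camera frame
print(read_plates(frame))  # -> ['ABC123'] with these stand-ins
```

The point of the structure is that stage two runs once per detected box, which is what lets multiple plates be decoded in the same frame.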
DLSS 3.5 improves ray tracing with "ray reconstruction". There isn't a ton of info on the technical details of how it works, but by training their AI model on a better ground truth, they can now achieve a more accurate scene than the prior implementation, which relied more on temporal ray accumulation. That approach gives stable results but can be blurry at times. The new method appears to retain quality much better, and with no performance hit. Very impressive 👏
NVIDIA DLSS 3.5 | New Ray Reconstruction Enhances Ray Tracing with AI
https://www.youtube.com/
Presenting the #Mantra BioNIC Xtreme series - the ultimate MULTI-MODAL BIOMETRIC SYSTEM for #Identity Verification & Access Control. Mantra takes pride in presenting our latest embedded #Linux-based device for the cutting-edge Identity Access Management System. Discover new heights of AI-driven innovation with BioNIC Xtreme! BioNIC Xtreme is a powerful edge device with an AI processor (GPU & NPU) optimized for deep learning, delivering highly accurate and fast Biometric #Identification. Embrace the emerging revolution! Learn more about BioNIC Xtreme by watching the whole video! #biometrics #biometricsecurity #facialrecognition #identitysecurity #fingerprint #contactless #accesscontrol #identitymanagement #accessmanagement #identityverification #identityandaccessmanagement
🤯 I did not expect #ai could solve #linearalgebra with a locally running #opensource #llm on small, GPU-less hardware. 🤩 The quality of the reasoning & explanations is just amazing https://lnkd.in/ghwX36xf #mathematics #aiadoption #learnbydoing
At NVIDIA #GTC23, check out the latest developments in the newest accelerated framework on the block. It is doubly exciting for me, as my current org, Vector Institute, and my previous org, Shell, are both represented in this talk!! Vector Institute's very own Matthew Choi will be talking about our work on benchmarking and scaling #llms using #JAX. Register now to see what's in store for JAX support on GPUs, and learn how it delivers the latest performance and capabilities for #GenerativeAI. #AI #ML #acceleration https://lnkd.in/dY_YA3MH
GTC 2023: #1 AI Conference
nvidia.com
Save the date: September 26th at 5:00pm CET / 8:00am PT. Join me for a talk about AI optimization for the Arm Cortex-M85 with Helium vector extensions. I’ll show how we run our people detection AI at a stunning 13 FPS on a microcontroller, a 3.7x speed boost vs. Cortex-M7. We will dive into Helium MVE, compare it to the traditional Cortex-M instruction set, and show Helium code for 8-bit integer matrix multiplications, the core operation of deep learning models. There will be a demo on a Renesas board with the Arm Cortex-M85, a preview of our AI on Arm’s Ethos-U accelerator, and more Helium-accelerated AI apps by Plumerai. Don't miss it! #arm #peopledetection #tinyml #microcontrollers cc: Tobias McBride Sign up now: https://lnkd.in/eAChpPXU Promo video below:
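The core operation the talk targets, an 8-bit integer matrix multiplication with 32-bit accumulation, can be written out as a plain reference in Python. This is not Plumerai's Helium code; it only shows the arithmetic that Helium MVE vectorizes by multiply-accumulating multiple int8 lanes per instruction over the inner `k` loop.

```python
import numpy as np

def int8_matmul(A, B):
    """Reference int8 matrix multiply with int32 accumulation.

    A: (M, K) int8, B: (K, N) int8 -> C: (M, N) int32.
    The inner dot product over k is the loop that Helium MVE
    executes in vectorized form on Cortex-M55/M85.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=np.int32)
    for i in range(M):
        for j in range(N):
            acc = np.int32(0)  # widened accumulator avoids int8 overflow
            for k in range(K):
                acc += np.int32(A[i, k]) * np.int32(B[k, j])
            C[i, j] = acc
    return C
```

Keeping the accumulator at 32 bits is the essential detail: products of two int8 values do not fit in int8, which is why quantized inference kernels accumulate in a wider type before requantizing.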
🚀 AI and LLM Performance Soars on the Rockchip RK3588 and Mixtile Blade 3 Combination In a recent showcase, the Rockchip RK3588's abilities were put to the test on artificial intelligence (AI) and large language model (LLM) workloads. Utilizing the Mixtile Blade 3 SBC, which boasts 32GB of RAM, researchers explored a range of AI tasks, from object detection with Yolo v5 to LLM exercises featuring models like RedPajama-INCITE-Chat-3B-v1-q4f16_1 and Llama-series chat models. The results indicated that the Mixtile Blade 3 was adept at handling YoloV5 object detection in real time, and revealed the power of the device's Arm Mali-G610 GPU when running complex LLM models. RedPajama-INCITE-Chat-3B-v1-q4f16_1 stood out, displaying commendable speed and accuracy, while even the slower Llama-2-13b-chat-hf-q4f16_1 model delivered precision. These outcomes suggest that the Arm architecture, when paired with capable hardware like the Mixtile Blade 3, could serve as a significant platform for future AI and LLM ventures. Sources: https://lnkd.in/gZDCq2Md #AI #LLM #Rockchip #MixtileBlade3 #SBC #TechnologyUpdate #MachineLearning #ArtificialIntelligence #GPU #ARM
Highly recommended: a simple guide for people interested in learning Generative AI by running experiments on their own PC! 🤖 In this case I played with image generation using Stable Diffusion, speeding up generation as explained by the folks at Nvidia here: https://lnkd.in/dUyFeFGC using models published on HuggingFace.co This small experiment showed me the following result using only my laptop's hardware 😎 😲
Outstanding visualisation of tokens, parameters, layers and transformers: how Gen AI and LLMs came down to matrix weight calculations, and how that moved computing needs to GPUs. A few of the questions are answered visually. What is the tally on GPT-3's 175 billion parameters - word embedding, unembedding and softmax... pretty fascinating. Around 30 minutes, and every second is worth it. https://lnkd.in/g5nJYwsa
But what is a GPT? Visual intro to transformers | Chapter 5, Deep Learning
https://www.youtube.com/
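The embedding/unembedding part of that parameter tally is simple arithmetic. Assuming the commonly cited GPT-3 figures (a vocabulary of 50,257 tokens and an embedding dimension of 12,288 - the numbers used in the video), the two lookup matrices account for:

```python
# GPT-3 embedding / unembedding parameter tally.
# Assumed figures: vocab size 50,257 and embedding dimension 12,288.
vocab_size = 50_257
d_model = 12_288

embedding_params = vocab_size * d_model    # token id -> vector lookup
unembedding_params = d_model * vocab_size  # vector -> logits before softmax

print(embedding_params)  # 617,558,016 parameters in each matrix
```

That is roughly 1.2 billion parameters between the two matrices; the remaining bulk of the ~175 billion sits in the attention and MLP weight matrices of the stacked transformer layers.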
Cloud Platform Enabler | Spearheading GCP Engineering at Verizon | Orchestrating Large-scale public cloud Migrations and Analytics Enablement | EX-Deloitte Manager
The new Blackwell GPU is here to tackle the AI industry’s challenges. Its advanced NVLink and resilience technologies, new Tensor Cores, and the TensorRT-LLM compiler are revolutionizing the AI landscape by slashing LLM inference operating costs and energy consumption by an astonishing 25 times. Will this efficiency gain be pivotal in addressing the current GPU shortage? Time will tell. #AI #BlackwellGPU #Innovation
Nvidia reveals the ‘world’s most powerful chip’ for AI: Blackwell B200 GPU #NVDA #NvidiaStock
https://www.youtube.com/
Great one, Mike. Iterate.ai has a full-stack software platform that runs license plate detection on edge hardware and helps QSRs and convenience stores improve customer service and upsell products. Mike Yousef and Brian Sathianathan may want to connect and discuss potential collaborations.