Merged
content/install-guides/streamline-cli.md (2 changes: 1 addition & 1 deletion)
@@ -52,7 +52,7 @@ Streamline CLI tools are supported on the following Arm CPUs:

Use the Arm Sysreport utility to determine whether your system configuration supports hardware-assisted profiling. Follow the instructions in [Get ready for performance analysis with Sysreport][1] to discover how to download and run this utility.

- [1]: https://learn.arm.com/learning-paths/servers-and-cloud-computing/sysreport/
+ [1]: /learning-paths/servers-and-cloud-computing/sysreport/

The `perf counters` entry in the generated report indicates how many CPU counters are available. The `perf sampling` entry indicates whether SPE is available. You will achieve the best profiles on systems with at least 6 available CPU counters and SPE.
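As a quick cross-check outside of Sysreport, you can query the kernel's perf event sources directly; on a Linux system with SPE support, an `arm_spe` entry appears under sysfs (a minimal check, assuming a reasonably recent kernel):

```bash
# An arm_spe_0 entry means the kernel exposes SPE as a perf event source
ls /sys/bus/event_source/devices/ | grep arm_spe
```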

@@ -15,7 +15,7 @@ The example has been tested on [AWS EC2](https://aws.amazon.com/ec2/) and an [Am

## Installation

- You need Docker to run Open AD Kit. Refer to the [Docker install guide](https://learn.arm.com/install-guides/docker/) to learn how to install Docker on an Arm platform.
+ You need Docker to run Open AD Kit. Refer to the [Docker install guide](/install-guides/docker/) to learn how to install Docker on an Arm platform.

First, verify Docker is installed on your development computer by running:
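A minimal check looks like this; any successful version output confirms the Docker client is installed:

```bash
docker --version
```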

@@ -17,7 +17,7 @@ In modern vehicles, multiple sensors such as LiDAR, radar, and cameras must cont
DDS ensures these components share data seamlessly and in real time, both within the vehicle and across infrastructure such as V2X systems, including traffic lights and road sensors.

{{% notice Tip %}}
- To get started with open-source DDS on Arm platforms, see the [Installation Guide for CycloneDDS](https://learn.arm.com/install-guides/cyclonedds).
+ To get started with open-source DDS on Arm platforms, see the [Installation Guide for CycloneDDS](/install-guides/cyclonedds).
{{% /notice %}}
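One quick way to see DDS working between two Arm hosts is the `ddsperf` utility that ships with CycloneDDS (assuming it is on your `PATH` after installation):

```bash
# On the first machine: reply to pings
ddsperf pong

# On the second machine: send pings and report round-trip latency
ddsperf ping
```

If both machines sit on the same network segment, DDS discovery is automatic and `ping` starts reporting latencies within a few seconds.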


@@ -10,7 +10,7 @@ layout: learningpathall

Now that you’ve explored the concept of a safety island, a dedicated subsystem responsible for executing safety-critical control logic, and learned how DDS (Data Distribution Service) enables real-time, distributed communication, you’ll refactor the original OpenAD Kit architecture into a multi-instance deployment.

- The predecessor Learning Path, [Deploy Open AD Kit containerized autonomous driving simulation on Arm Neoverse](http://learn.arm.com/learning-paths/automotive/openadkit1_container/), showed how to deploying three container components on a single Arm-based instance, to handle:
+ The predecessor Learning Path, [Deploy Open AD Kit containerized autonomous driving simulation on Arm Neoverse](/learning-paths/automotive/openadkit1_container/), showed how to deploy three container components on a single Arm-based instance to handle:
- The simulation environment
- Visualization
- Planning and control
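For the containers to find each other once they are split across instances, each instance needs matching DDS discovery settings. A minimal sketch (`ROS_DOMAIN_ID` and `RMW_IMPLEMENTATION` are standard ROS 2 variables; the domain ID value here is illustrative):

```bash
# Use the same domain ID on every instance so all DDS participants
# join the same discovery domain
export ROS_DOMAIN_ID=42

# Select CycloneDDS as the ROS 2 middleware implementation
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
```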
@@ -86,7 +86,7 @@ On the Simulation and Visualization node, execute:

Once both machines are running their launch scripts, the Visualizer container exposes a web-accessible interface at http://localhost:6080/vnc.html.

- Open this link in your browser to observe the simulation in real time. The demo closely resembles the output in the [previous Learning Path, Deploy Open AD Kit containerized autonomous driving simulation on Arm Neoverse](http://learn.arm.com/learning-paths/automotive/openadkit1_container/4_run_openadkit/).
+ Open this link in your browser to observe the simulation in real time. The demo closely resembles the output in the [previous Learning Path, Deploy Open AD Kit containerized autonomous driving simulation on Arm Neoverse](/learning-paths/automotive/openadkit1_container/4_run_openadkit/).
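To confirm the endpoint is reachable before opening a browser, you can probe it from the command line (replace `localhost` with the Visualization node's address when connecting from another machine):

```bash
curl -I http://localhost:6080/vnc.html
```

A `200 OK` response indicates the Visualizer is serving the page.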

![Distributed OpenAD Kit simulation running on two Arm-based instances with visualizer and simulator coordination over DDS alt-text#center](split_aws_run.gif "Visualizer output from a distributed OpenAD Kit simulation showing ROS 2 modules running across two cloud instances using DDS communication.")

@@ -11,7 +11,7 @@ learning_objectives:
- Run a basic C++ matrix multiplication example to showcase the speedup that KleidiAI micro-kernels can deliver.

prerequisites:
- - An Arm-based Linux machine that implements the Int8 Matrix Multiplication (*i8mm*) architecture feature. The example in this Learning Path is run on an AWS Graviton 3 instance. Instructions on setting up an Arm-based server are [found here](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/aws/).
+ - An Arm-based Linux machine that implements the Int8 Matrix Multiplication (*i8mm*) architecture feature. The example in this Learning Path is run on an AWS Graviton 3 instance. Instructions on setting up an Arm-based server are [found here](/learning-paths/servers-and-cloud-computing/csp/aws/).
- A basic understanding of linear algebra terminology, such as dot product and matrix multiplication.

author: Zach Lasiuk
@@ -19,7 +19,7 @@ There are essentially two types of KleidiAI micro-kernels today:
![KleidiAI src directory](KleidiAI-src.JPG "KleidiAI src directory")

### What are the quantization levels that KleidiAI supports?
- KleidiAI has multiple matrix multiplication micro-kernels, and dynamic quantization routines, to optimally support all model quantization levels. To learn more about model quantization and how selecting the right quantization level affects your AI-based application, refer to [this Learning Path](https://learn.arm.com/learning-paths/servers-and-cloud-computing/llama-cpu/llama-chatbot#quantization-format).
+ KleidiAI has multiple matrix multiplication micro-kernels and dynamic quantization routines to optimally support all model quantization levels. To learn more about model quantization and how selecting the right quantization level affects your AI-based application, refer to [this Learning Path](/learning-paths/servers-and-cloud-computing/llama-cpu/llama-chatbot#quantization-format).

KleidiAI currently has three matrix multiplication directories, each handling input/output types differently, and their coverage will continue to broaden:

@@ -25,7 +25,7 @@ This reference code is functionally-identical to KleidiAI's micro-kernels, and i
Follow these steps to build and run the KleidiAI library and example script:

1. Create an Ubuntu 24.04 Arm Linux machine on an AWS EC2 instance.
- For more details view the Learning Path on [setting up AWS EC2 Graviton instances](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/aws/). Use an M7g-medium instance type, which uses the Graviton 3 SoC supporting the *i8mm* Arm architecture feature. The 1 CPU and 4 GB of RAM in the M7g-medium are sufficient for this basic example run.
+ For more details, view the Learning Path on [setting up AWS EC2 Graviton instances](/learning-paths/servers-and-cloud-computing/csp/aws/). Use an m7g.medium instance type, which uses the Graviton 3 SoC supporting the *i8mm* Arm architecture feature. The 1 vCPU and 4 GB of RAM in the m7g.medium are sufficient for this basic example run.
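Once logged in, you can confirm that the *i8mm* feature is exposed to user space (a quick sanity check on any Arm Linux system):

```bash
# Prints "i8mm" once if the CPU reports the feature; no output otherwise
grep -o -w i8mm /proc/cpuinfo | head -n 1
```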

2. Initialize your system by installing essential packages:
```bash
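# Assumed package set: a C/C++ toolchain, CMake, and Git are the
# minimum needed to fetch and build the KleidiAI example
sudo apt-get update
sudo apt-get install -y build-essential cmake git
```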
@@ -27,8 +27,8 @@ layout: learningpathall
--output_name="llama3_kv_sdpa_xnn_qe_4_32.pte"
```

- - Build the Llama Runner binary for [Android](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/5-run-benchmark-on-android/).
- - Build and Run [Android](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/6-build-android-chat-app/).
+ - Build the Llama Runner binary for [Android](/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/5-run-benchmark-on-android/).
+ - Build and run the [Android chat app](/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/6-build-android-chat-app/).
- Open Android Studio, choose "Open an existing Android Studio project", navigate to examples/demo-apps/android/LlamaDemo, and press Run (^R) to build and launch the app on your phone.
- Tap the Settings widget to select a model, configure its parameters, and set any prompts.
- After choosing the model, tokenizer, and model type, click "Load Model" to load it into the app and return to the main Chat activity.
@@ -46,7 +46,7 @@ Each component in the diagram plays a distinct role in enabling AI agents to int
- The **Remote services** are external APIs the server can call on the host’s behalf.

{{% notice Learning Tip %}}
- Learn more about AI Agents in the Learning Path [Deploy an AI Agent on Arm with llama.cpp and llama-cpp-agent using KleidiAI](https://learn.arm.com/learning-paths/servers-and-cloud-computing/ai-agent-on-cpu/).
+ Learn more about AI Agents in the Learning Path [Deploy an AI Agent on Arm with llama.cpp and llama-cpp-agent using KleidiAI](/learning-paths/servers-and-cloud-computing/ai-agent-on-cpu/).
{{% /notice %}}

## Section summary
@@ -38,7 +38,7 @@ shared_between:
further_reading:
- resource:
title: Write a Dynamic Memory Allocator
- link: https://learn.arm.com/learning-paths/cross-platform/dynamic-memory-allocator/
+ link: /learning-paths/cross-platform/dynamic-memory-allocator/
type: website
- resource:
title: Memory Latency
@@ -52,7 +52,7 @@ further_reading:
type: documentation
- resource:
title: Port Code to Arm Scalable Vector Extension (SVE)
- link: https://learn.arm.com/learning-paths/servers-and-cloud-computing/sve
+ link: /learning-paths/servers-and-cloud-computing/sve
type: website
- resource:
title: Introducing the Scalable Matrix Extension for the Armv9-A Architecture
@@ -72,7 +72,7 @@ further_reading:
type: blog
- resource:
title: Build adaptive libraries with multiversioning
- link: https://learn.arm.com/learning-paths/cross-platform/function-multiversioning/
+ link: /learning-paths/cross-platform/function-multiversioning/
type: website
- resource:
title: SME Programmer's Guide
@@ -47,4 +47,4 @@ On SVE and SME:
- [Arm Scalable Matrix Extension (SME) Introduction (Part 1) - Zenon Xiu](https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/arm-scalable-matrix-extension-introduction)
- [Arm Scalable Matrix Extension (SME) Introduction (Part 2) - Zenon Xiu](https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/arm-scalable-matrix-extension-introduction-p2)
- [Matrix-matrix multiplication. Neon, SVE, and SME compared (Part 3)](https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/.matrix-matrix-multiplication-neon-sve-and-sme-compared)
- - [Learn about function multiversioning - Alexandros Lamprineas, Arm](https://learn.arm.com/learning-paths/cross-platform/function-multiversioning/)
+ - [Learn about function multiversioning - Alexandros Lamprineas, Arm](/learning-paths/cross-platform/function-multiversioning/)
@@ -47,7 +47,7 @@ If Python3 is not installed, you can download and install it from [python.org](h

Alternatively, you can install Python3 using a package manager such as Homebrew or APT.
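For example (a sketch of the two package-manager routes; exact package names can vary by platform):

```bash
# macOS with Homebrew
brew install python

# Ubuntu or Debian with APT
sudo apt-get update && sudo apt-get install -y python3 python3-pip
```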

- If you are using Windows on Arm, see the [Python install guide](https://learn.arm.com/install-guides/py-woa/).
+ If you are using Windows on Arm, see the [Python install guide](/install-guides/py-woa/).

Next, if you do not already have it, download and install [Visual Studio Code](https://code.visualstudio.com/download).

content/learning-paths/cross-platform/simd-loops/_index.md (4 changes: 2 additions & 2 deletions)
@@ -49,7 +49,7 @@ further_reading:
type: documentation
- resource:
title: Port Code to Arm Scalable Vector Extension (SVE)
- link: https://learn.arm.com/learning-paths/servers-and-cloud-computing/sve
+ link: /learning-paths/servers-and-cloud-computing/sve
type: website
- resource:
title: Introducing the Scalable Matrix Extension for the Armv9-A Architecture
@@ -69,7 +69,7 @@ further_reading:
type: blog
- resource:
title: Build adaptive libraries with multiversioning
- link: https://learn.arm.com/learning-paths/cross-platform/function-multiversioning/
+ link: /learning-paths/cross-platform/function-multiversioning/
type: website
- resource:
title: SME Programmer's Guide
@@ -51,7 +51,7 @@ further_reading:
type: documentation
- resource:
title: How to use the Arm Performance Monitoring Unit and System Counter
- link: https://learn.arm.com/learning-paths/servers-and-cloud-computing/arm_pmu/
+ link: /learning-paths/servers-and-cloud-computing/arm_pmu/
type: website


@@ -42,7 +42,7 @@ further_reading:
type: documentation
- resource:
title: Port code to Arm Scalable Vector Extension (SVE)
- link: https://learn.arm.com/learning-paths/servers-and-cloud-computing/sve
+ link: /learning-paths/servers-and-cloud-computing/sve
type: website
- resource:
title: Introducing the Scalable Matrix Extension for the Armv9-A Architecture
@@ -54,7 +54,7 @@ further_reading:
type: blog
- resource:
title: Build adaptive libraries with multiversioning
- link: https://learn.arm.com/learning-paths/cross-platform/function-multiversioning/
+ link: /learning-paths/cross-platform/function-multiversioning/
type: website
- resource:
title: SME Programmer's Guide
@@ -12,7 +12,7 @@ learning_objectives:
- Control LEDs by turning them on and off based on model predictions.

prerequisites:
- - Completion of [Embedded programming with Arduino on the Raspberry Pi Pico](https://learn.arm.com/learning-paths/embedded-and-microcontrollers/arduino-pico/) if you're an absolute beginner.
+ - Completion of [Embedded programming with Arduino on the Raspberry Pi Pico](/learning-paths/embedded-and-microcontrollers/arduino-pico/) if you're an absolute beginner.
- An [Edge Impulse Studio](https://studio.edgeimpulse.com/signup) account.
- The [Arduino IDE](/install-guides/arduino-pico/) with the RP2040 board support package installed on your computer.
- An [Arduino Nano RP2040 Connect board](https://store.arduino.cc/products/arduino-nano-rp2040-connect-with-headers).
@@ -26,8 +26,7 @@ You use [llama.cpp](https://github.com/ggerganov/llama.cpp), an open source C/C+

Memory size is an important factor to consider when selecting an LLM because many LLMs have memory requirements that are too large for edge devices, such as the Raspberry Pi 5. You can estimate the required memory from the model's parameter count: more parameters means more memory.
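As a rule of thumb, the required memory is roughly the parameter count multiplied by the bytes per parameter, ignoring runtime overhead such as the KV cache. An 8-billion-parameter model, for example, needs about 16 GB at FP16 (2 bytes per parameter) but only about 4 GB at 4-bit quantization (0.5 bytes per parameter), which is why quantized models are the practical choice on a Raspberry Pi 5.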

- You can also use the [Memory Model Calculator](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator
- ) from Hugging Face to estimate memory size.
+ You can also use the [Memory Model Calculator](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator) from Hugging Face to estimate memory size.

Copy the string below:

@@ -10,4 +10,4 @@ layout: learningpathall

The following section is only relevant to users of Keil MDK v5. It explains how to migrate from a uvprojx-based project file to the new Open-CMSIS-Pack csolution project file format.

- The learning path [Convert uvprojx-based projects to csolution](https://learn.arm.com/learning-paths/embedded-and-microcontrollers/uvprojx-conversion/) explains how to import, convert, and build uvprojx-based projects in [Keil Studio for VS Code](https://learn.arm.com/install-guides/keilstudio_vs/). It also shows how to convert and build uvprojx-based projects on the command line.
+ The Learning Path [Convert uvprojx-based projects to csolution](/learning-paths/embedded-and-microcontrollers/uvprojx-conversion/) explains how to import, convert, and build uvprojx-based projects in [Keil Studio for VS Code](/install-guides/keilstudio_vs/). It also shows how to convert and build uvprojx-based projects on the command line.
@@ -9,7 +9,7 @@ layout: learningpathall
This Learning Path has been validated on Ubuntu 22.04 LTS and macOS.

{{% notice %}}
- If you are running Windows, you can use Ubuntu through Windows subsystem for Linux 2 (WSL2). Check out [Get started with Windows Subsystem for Linux (WSL) on Arm](https://learn.arm.com/learning-paths/laptops-and-desktops/wsl2/setup/) to learn more.
+ If you are running Windows, you can use Ubuntu through Windows Subsystem for Linux 2 (WSL2). Check out [Get started with Windows Subsystem for Linux (WSL) on Arm](/learning-paths/laptops-and-desktops/wsl2/setup/) to learn more.
{{% /notice %}}

## Install software tools
@@ -46,7 +46,7 @@ further_reading:
type: documentation
- resource:
title: Profile llama.cpp performance with Arm Streamline and KleidiAI LLM kernels Learning Path
- link: https://learn.arm.com/learning-paths/servers-and-cloud-computing/llama_cpp_streamline/
+ link: /learning-paths/servers-and-cloud-computing/llama_cpp_streamline/
type: blog
- resource:
title: Arm-Powered NVIDIA DGX Spark Workstations to Redefine AI
@@ -42,7 +42,7 @@ further_reading:
type: documentation
- resource:
title: Unlock quantized LLM performance on Arm-based NVIDIA DGX Spark
- link: https://learn.arm.com/learning-paths/laptops-and-desktops/dgx_spark_llamacpp/
+ link: /learning-paths/laptops-and-desktops/dgx_spark_llamacpp/
type: Learning Path


@@ -39,7 +39,7 @@ further_reading:
type: blog
- resource:
title: Learn about function multiversioning
- link: https://learn.arm.com/learning-paths/cross-platform/function-multiversioning/
+ link: /learning-paths/cross-platform/function-multiversioning/
type: website


@@ -10,7 +10,7 @@ layout: "learningpathall"

In this Learning Path, you discover how to configure and use an Arm64 runner that builds a .NET application for Arm64. Additionally, the CI/CD pipeline you create generates an Arm64 Docker image of the application and then pushes the image to a Docker Hub repository.

- Before completing this Learning Path you can complete the Hello World [example](https://learn.arm.com/learning-paths/laptops-and-desktops/windows_cicd_github/), which provides a basic "hello world" scenario.
+ Before starting this Learning Path, you can work through the Hello World [example](/learning-paths/laptops-and-desktops/windows_cicd_github/), which covers a basic "hello world" scenario.

You will extend that knowledge here with a comprehensive set of operations critical for real-world application deployment:
