https://ilgpu.net/

 

ILGPU - A Modern GPU Compiler for .Net Programs

A modern, lightweight & fast GPU compiler for high-performance .NET programs.


 

https://ilgpu.net/docs/

 

Documentation

ILGPU Tutorials


 

How a GPU works

00 Setting up ILGPU

ILGPU should work on any 64-bit platform that .NET supports.

I have even used it on the inexpensive NVIDIA Jetson Nano with pretty decent CUDA performance.

 

1. Install the most recent .NET SDK for your chosen platform.

2. Create a new C# project: dotnet new console

3. Add the ILGPU package: dotnet add package ILGPU
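
To sanity-check the install, here is a minimal sketch (assuming the ILGPU 1.x API, where the Context enumerates the available devices):

using System;
using ILGPU;
using ILGPU.Runtime;

// List every accelerator ILGPU can see on this machine.
using Context context = Context.CreateDefault();
foreach (Device device in context)
    Console.WriteLine($"{device.AcceleratorType}: {device.Name}");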

 

01 A GPU is not a CPU

CPU | SIMD: Single Instruction Multiple Data

CPUs have a trick for parallel programs called SIMD: a set of instructions that lets a single instruction operate on multiple pieces of data at once.

 

GPU |  SIMT: Single Instruction Multiple Threads

SIMT is the same idea as SIMD, taken further: instead of just doing the math instructions in parallel, why not do all the instructions in parallel?

 

using System;
using System.Threading.Tasks;

static void TestParallelFor()
{
    // Load the data
    int[] data = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    int[] output = new int[10_000];
    
    // Load the action and execute
    Parallel.For(0, output.Length,
        (int i) =>
        {
            output[i] = data[i % data.Length];
        });
}

-

using ILGPU;
using ILGPU.Runtime;
using ILGPU.Runtime.CPU;

static void TestILGPU_CPU()
{
    // Initialize ILGPU.
    Context context = Context.CreateDefault();
    Accelerator accelerator = context.CreateCPUAccelerator(0);
    
    // Load the data.
    var deviceData = accelerator.Allocate1D(new int[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 });
    var deviceOutput = accelerator.Allocate1D<int>(10_000);
    
    // load / compile the kernel
    var loadedKernel = accelerator.LoadAutoGroupedStreamKernel(
        (Index1D i, ArrayView<int> data, ArrayView<int> output) =>
        {
            output[i] = data[i % data.Length];
        });
    
    // tell the accelerator to start computing the kernel
    loadedKernel((int)deviceOutput.Length, deviceData.View, deviceOutput.View);
    
    // wait for the accelerator to be finished with whatever it's doing
    // in this case it just waits for the kernel to finish.
    accelerator.Synchronize();
    
    accelerator.Dispose();
    context.Dispose();
}
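
The same kernel runs on an NVIDIA GPU by swapping only the accelerator. A sketch, assuming the ILGPU.Runtime.Cuda backend and a CUDA-capable device at index 0:

using ILGPU;
using ILGPU.Runtime;
using ILGPU.Runtime.Cuda;

static void TestILGPU_Cuda()
{
    using Context context = Context.CreateDefault();
    using Accelerator accelerator = context.CreateCudaAccelerator(0);
    
    using var deviceData = accelerator.Allocate1D(new int[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 });
    using var deviceOutput = accelerator.Allocate1D<int>(10_000);
    
    // Identical kernel; ILGPU compiles it to PTX instead of CPU code.
    var loadedKernel = accelerator.LoadAutoGroupedStreamKernel(
        (Index1D i, ArrayView<int> data, ArrayView<int> output) =>
        {
            output[i] = data[i % data.Length];
        });
    
    loadedKernel((int)deviceOutput.Length, deviceData.View, deviceOutput.View);
    accelerator.Synchronize();
}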

 

 

02 Memory and bandwidth and threads

Computers need memory, and memory is slow.

 

GPUs also have memory, and that memory is slow.

 

 

 

 

https://mangkyu.tistory.com/85

 

[CUDA] Advanced CUDA Programming Examples (2/2)

 

CUDA is a parallel computing platform and API model created by NVIDIA.

 

GPGPU (General-Purpose Computing on Graphics Processing Units)

 

The CUDA platform is a software layer that gives access to the GPU's virtual instruction set and parallel processing elements.

 

 

Host: CPU and its memory (Host Memory)

Device: GPU and its memory (Device Memory)

 

1. Data copied from CPU to GPU

2. Launch VectorAdd kernel on the GPU

3. Resulting data copied from GPU to CPU

 

 

Device code is compiled by nvcc (NVIDIA's compiler), which requires a few extra keywords.

The __global__ keyword marks a function that is called from the host and runs on the device.

 

Blocks and Threads

The units used to process GPU code in parallel.

One block consists of N threads.

 

kernel <<< BlockCount, Threads-per-Block >>>(...)

 

#include <stdio.h>
#include <stdlib.h>
#include "device_launch_parameters.h"

#define N                   (2048 * 2048)
#define THREADS_PER_BLOCK   512

__global__ void dot(int *a, int *b, int *c)
{
    __shared__ int temp[THREADS_PER_BLOCK];
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    temp[threadIdx.x] = a[index] * b[index];
    
    __syncthreads();
    
    if (threadIdx.x == 0)
    {
        int sum = 0;
        for (int i = 0; i < THREADS_PER_BLOCK; ++i)
            sum += temp[i];
        atomicAdd(c, sum);
    }
}

int main(void)
{
    int *p_a, *p_b, *p_c;
    int *p_dev_a, *p_dev_b, *p_dev_c;
    int cbSize = N * sizeof(int);
    
    // allocate host memories
    p_a = (int *)malloc(cbSize);
    p_b = (int *)malloc(cbSize);
    p_c = (int *)malloc(sizeof(int));
    
    // allocate device memories
    cudaMalloc(&p_dev_a, cbSize);
    cudaMalloc(&p_dev_b, cbSize);
    cudaMalloc(&p_dev_c, sizeof(int));
    cudaMemset(p_dev_c, 0, sizeof(int));    // atomicAdd needs a zeroed accumulator
    
    // initialize variables
    for (int i = 0; i < N; ++i)
    {
        p_a[i] = i;
        p_b[i] = i;
    }
    
    // copy host memories to device memories
    cudaMemcpy(p_dev_a, p_a, cbSize, cudaMemcpyHostToDevice);
    cudaMemcpy(p_dev_b, p_b, cbSize, cudaMemcpyHostToDevice);
    
    // run dot with N threads
    dot<<< N / THREADS_PER_BLOCK, THREADS_PER_BLOCK >>>(p_dev_a, p_dev_b, p_dev_c);
    
    // copy device memories sum result(p_dev_c) to host memories(p_c)
    cudaMemcpy(p_c, p_dev_c, sizeof(int), cudaMemcpyDeviceToHost);
    
    printf("Total Sum: %d\n", *p_c);
    
    free(p_a);
    free(p_b);
    free(p_c);
    cudaFree(p_dev_a);
    cudaFree(p_dev_b);
    cudaFree(p_dev_c);
    
    return 0;
}

 

 

Atomic Operations

  • atomicAdd()
  • atomicSub()
  • atomicMin()
  • atomicMax()
  • atomicInc()
  • atomicDec()
  • atomicExch()
  • atomicCAS()
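
For comparison, ILGPU exposes the same primitives to C# kernels through its Atomic class. A minimal sketch of the dot-product accumulation step (Atomic.Add plays the role of CUDA's atomicAdd):

static void DotKernel(Index1D i, ArrayView<int> a, ArrayView<int> b, ArrayView<int> c)
{
    // Every thread adds its product into a single accumulator,
    // mirroring the atomicAdd(c, sum) step in the CUDA example above.
    Atomic.Add(ref c[0], a[i] * b[i]);
}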

 

1. Basic HttpClientFactory Usage

 

// 1. Register HttpClientFactory
public void ConfigureServices(IServiceCollection services)
{
    // Other service registrations
    services.AddHttpClient();
    
    // Register your custom services
    services.AddScoped<IMyApiService, MyApiService>();
}

// 2. Create a Service Interface
public interface IMyApiService
{
    Task<string> GetApiResponseAsync();
}

// 3. Implement the Service Using HttpClientFactory
public class MyApiService : IMyApiService
{
    readonly IHttpClientFactory m_httpClientFactory;
    
    public MyApiService(IHttpClientFactory httpClientFactory)
    {
        m_httpClientFactory = httpClientFactory;
    }
    
    public async Task<string> GetApiResponseAsync()
    {
        // Create a named client or use the default client
        HttpClient client = m_httpClientFactory.CreateClient();
        
        // Make an HTTP GET request
        HttpResponseMessage response = await client.GetAsync("https://api.example.com/api/");
        
        if (response.IsSuccessStatusCode)
        {
            var sRespBody = await response.Content.ReadAsStringAsync();
            return sRespBody;
        }
        
        // Handle error
        return "Error occurred.";
    }
}

// Use the Service in a Controller
public class MyController : Controller
{
    readonly IMyApiService m_myApiService;
    
    public MyController(IMyApiService myApiService)
    {
        m_myApiService = myApiService;
    }
    
    public async Task<IActionResult> Index()
    {
        var sApiResponse = await m_myApiService.GetApiResponseAsync();
        return View(sApiResponse);
    }
}
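
On the .NET 6+ minimal hosting model, the same registrations move into Program.cs (a sketch; the AddHttpClient and AddScoped calls are unchanged):

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHttpClient();
builder.Services.AddScoped<IMyApiService, MyApiService>();

var app = builder.Build();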

 

2. Using Named HttpClient Instances

private static void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient("WebApiClient", config =>
    {
        config.BaseAddress = new Uri("https://localhost:5001/api/");
        config.Timeout = TimeSpan.FromSeconds(30);
        config.DefaultRequestHeaders.Clear();
    });
    
    services.AddScoped<IHttpClientFactoryService, HttpClientFactoryService>();
}

private async Task<List<TItem>> GetItemsWithHttpClientFactory<TItem>()
{
    var httpClient = m_httpClientFactory.CreateClient("WebApiClient");
    var response = await httpClient.GetAsync("items",
        HttpCompletionOption.ResponseHeadersRead);
    using (response)
    {
        response.EnsureSuccessStatusCode();
        
        var stream = await response.Content.ReadAsStreamAsync();
        var items = await JsonSerializer.DeserializeAsync<List<TItem>>(stream, m_jsopts);
        return items;
    }
}

 

3. Using Typed HttpClient Instances

// 1. Create the custom client
public class ItemsClient
{
    public HttpClient Client { get; }
    
    public ItemsClient(HttpClient client)
    {
        this.Client = client;
        this.Client.BaseAddress = new Uri("https://localhost:5001/api/");
        this.Client.Timeout = TimeSpan.FromSeconds(30);
        this.Client.DefaultRequestHeaders.Clear();
    }
}

// 2. Register
private static void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient<ItemsClient>();
}


public class HttpClientFactoryService : IHttpClientFactoryService
{
    readonly IHttpClientFactory m_httpClientFactory;
    readonly ItemsClient m_itemsClient;
    readonly JsonSerializerOptions m_jsopts;
    
    public HttpClientFactoryService(
        IHttpClientFactory httpClientFactory,
        ItemsClient itemsClients)
    {
        m_httpClientFactory = httpClientFactory;
        m_itemsClient = itemsClients;
        
        m_jsopts = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
    }
    
    private async Task<List<TItem>> GetItemsWithTypedClient<TItem>()
    {
        var response = await m_itemsClient.Client.GetAsync("items",
            HttpCompletionOption.ResponseHeadersRead);
        using (response)
        {
            response.EnsureSuccessStatusCode();
            
            var stream = await response.Content.ReadAsStreamAsync();
            var items = await JsonSerializer.DeserializeAsync<List<TItem>>(stream, m_jsopts);
            return items;
        }
    }
}

 


https://download.tek.com/document/37W_17249_6_Fundamentals_of_Real-Time_Spectrum_Analysis1.pdf

 

GLOSSARY

Acquisition  An integer number of time-contiguous samples.
Acquisition Time  The length of time represented by one acquisition.
Amplitude  The magnitude of an electrical signal.
Amplitude Modulation (AM)  The process in which the amplitude of a sine wave (the carrier) is varied in accordance with the instantaneous voltage of a second electrical signal (the modulating signal).
Analysis Time  A subset of time-contiguous samples from one block, used as input to an analysis view.
Analysis View  The flexible window used to display real-time measurement results.
Carrier  The RF signal upon which modulation resides.
Carrier Frequency  The frequency of the CW component of the carrier signal.
Center Frequency  The frequency corresponding to the center of a frequency span of the spectrum analyzer display.
CZT (Chirp-Z Transform)  A computationally efficient method of computing a Discrete Fourier Transform (DFT). CZTs offer more flexibility than the conventional FFT, for example in selecting the number of output frequency points, at the expense of additional computations.
CW Signal  Continuous wave signal; a sine wave.
dBfs  A unit to express power level in decibels referenced to full scale. Depending on the context, this is either the full scale of the display screen or the full scale of the ADC.
dBm  A unit to express power level in decibels referenced to 1 milliwatt.
dBmV  A unit to express voltage levels in decibels referenced to 1 millivolt.
Decibel (dB)  Ten times the logarithm of the ratio of one electrical power to another (see the worked example after this glossary).
Discrete Fourier Transform (DFT)  A mathematical process to calculate the frequency spectrum of a sampled time domain signal.
Display Line  A horizontal or vertical line on a waveform display, used as a reference for visual (or automatic) comparison with a given level, time, or frequency.
Distortion  Degradation of a signal, often a result of nonlinear operations, resulting in unwanted frequency components. Harmonic and intermodulation distortions are common types.
DPX  Digital Phosphor analysis - a signal analysis and compression methodology that allows the live view of time-changing signals, enabling the discovery of rare transient events.
DPX Spectrum  DPX technology applied to spectrum analysis. DPX Spectrum provides a Live RF view as well as the observation of frequency domain transients.
Dynamic Range  The maximum ratio of the levels of two signals simultaneously present at the input which can be measured to a specified accuracy.
Fast Fourier Transform (FFT)  A computationally efficient method of computing a Discrete Fourier Transform (DFT). A common FFT algorithm requires that the number of input and output samples be equal and a power of 2 (2, 4, 8, 16, ...).
Frequency  The rate at which a signal oscillates, expressed as hertz or number of cycles per second.
Frequency Domain View  The representation of the power of the spectral components of a signal as a function of frequency; the spectrum of the signal.
Frequency Drift  Gradual shift or change in a signal frequency over the specified time, where other conditions remain constant. Expressed in hertz per second.
Frequency Mask Trigger  A flexible real-time trigger based on specific events that occur in the frequency domain. The triggering parameters are defined by a graphical mask.
Frequency Modulation (FM)  The process in which the frequency of an electrical signal (the carrier) is varied according to the instantaneous voltage of a second electrical signal (the modulating signal).
Frequency Range  The range of frequencies over which a device operates, with lower and upper bounds.
Frequency Span  A continuous range of frequencies extending between two frequency limits.
Marker  A visually identifiable point on a waveform trace, used to extract a readout of domain and range values represented by that point.
Modulate  To vary a characteristic of a signal, typically in order to transmit information.
Noise  Unwanted random disturbances superimposed on a signal which tend to obscure that signal.
Noise Floor  The level of noise intrinsic to a system that represents the minimum limit at which input signals can be observed; ultimately limited by thermal noise (kTB).
Noise Bandwidth (NBW)  The exact bandwidth of a filter that is used to calculate the absolute power of noise or noise-like signals in dBm/Hz.
Probability of Intercept  The certainty to which a signal can be detected within defined parameters.
Real-Time Bandwidth  The frequency span over which real-time seamless capture can be performed, which is a function of the digitizer and the IF bandwidth of a Real-Time Spectrum Analyzer.
Real-Time Seamless Capture  The ability to acquire and store an uninterrupted series of time domain samples that represent the behavior of an RF signal over a long period of time.
Real-Time Spectrum Analysis  A spectrum analysis technique based on Discrete Fourier Transforms (DFT) that is capable of continuously analyzing a bandwidth of interest without time gaps. Real-Time Spectrum Analysis provides 100% probability of display and trigger of transient signal fluctuations within the specified span, resolution bandwidth, and time parameters.
Real-Time Spectrum Analyzer  An instrument capable of measuring elusive RF events in RF signals, triggering on those events, seamlessly capturing them into memory, and analyzing them in the frequency, time, and modulation domains.
Reference Level  The signal level represented by the uppermost graticule line of the analyzer display.
Resolution Bandwidth (RBW)  The width of the narrowest measurable band of frequencies in a spectrum analyzer display. The RBW determines the analyzer's ability to resolve closely spaced signal components.
Sensitivity  A measure of a spectrum analyzer's ability to display minimum level signals, usually expressed as Displayed Average Noise Level (DANL).
Spectrogram  A frequency vs. time vs. amplitude display where frequency is represented on the x-axis and time on the y-axis. The power is expressed by the color.
Spectrum  The frequency domain representation of a signal showing the power distribution of its spectral components versus frequency.
Spectrum Analysis  Measurement technique for determining the frequency content of an RF signal.
Vector Signal Analysis  Measurement technique for characterizing the modulation of an RF signal. Vector analysis takes both magnitude and phase into account.
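
Worked example for the decibel entries: a power ratio in dB is 10 * log10(P1 / P2), and dBm fixes the reference power at 1 milliwatt, so

    P_dBm = 10 * log10(P / 1 mW),  e.g.  1 W  ->  10 * log10(1000) = 30 dBm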

 

Acronym Reference

ACP Adjacent Channel Power
ADC Analog-to-Digital Converter
AM Amplitude Modulation
BH4B Blackman-Harris 4B Window
BW Bandwidth
CCDF Complementary Cumulative Distribution Function
CDMA Code Division Multiple Access
CW Continuous Wave
dB Decibel
dBfs dB Full Scale
DDC Digital Downconverter
DFT Discrete Fourier Transform
DPX Digital Phosphor Display, Spectrum, etc.
DSP Digital Signal Processing
EVM Error Vector Magnitude
FFT Fast Fourier Transform 
FM Frequency Modulation 
FSK Frequency Shift Keying 
IF Intermediate Frequency
IQ In-Phase Quadrature
LO Local Oscillator
NBW Noise Bandwidth
OFDM Orthogonal Frequency Division Multiplexing
PAR Peak-Average Ratio
PM Phase Modulation
POI Probability of Intercept
PRBS Pseudorandom Binary Sequence
PSK Phase Shift Keying
QAM Quadrature Amplitude Modulation
QPSK Quadrature Phase Shift Keying
RBW Resolution Bandwidth
RF Radio Frequency
RMS Root Mean Square
RTSA Real-Time Spectrum Analyzer
SA Spectrum Analyzer
VSA Vector Signal Analyzer

Searching Docker images

$ sudo docker search tpm2

https://github.com/starlab-io/docker-tpm2-emulator

 

GitHub - starlab-io/docker-tpm2-emulator: TPM 2.0 emulator


 

Pull the image

$ docker pull starlabio/tpm2-emulator
Using default tag: latest
latest: Pulling from starlabio/tpm2-emulator
eeacba527962: Pull complete
25405ed4f245: Pull complete
22752cd61bd5: Pull complete
:
Digest: sha256:1dc4ea06a8061225251461dfa9ce535eb4ba682276ba6aca00185831b35e97cb
Status: Downloaded newer image for starlabio/tpm2-emulator:latest
docker.io/starlabio/tpm2-emulator:latest

Run the container

$ docker run -it starlabio/tpm2-emulator
root@219e4ab48689:/source# ls
ibmtpm  tpm2-tools-2.1.0  tpm2-tss-1.2.0

 

Running the emulator

$ tpm_server -rm &

If you want to start with a fresh state, run it with -rm as an option.

Before any TPM command will work you must send it a startup command; with a real TPM it is apparently the job of the BIOS to do this.

$ tpm2_startup --clear

 

 

tpm2-software-container

https://github.com/tpm2-software/tpm2-software-container

 

GitHub - tpm2-software/tpm2-software-container: Container building stuff


 

Auto builds

https://github.com/orgs/tpm2-software/packages

 


$ docker run -it --rm ghcr.io/tpm2-software/ubuntu-22.04 /bin/bash

 

 

 

https://github.com/greatscottgadgets/hackrf

 

GitHub - greatscottgadgets/hackrf: low cost software radio platform


 

Prerequisites for Linux (Debian/Ubuntu):

$ sudo apt-get install build-essential cmake libusb-1.0-0-dev pkg-config libfftw3-dev

 

Build host software on Linux:

cmake ..

~/dev/hackrf$ mkdir host/build
~/dev/hackrf$ cd host/build
~/dev/hackrf/host/build$ cmake ..

CMake Deprecation Warning at CMakeLists.txt:3 (cmake_minimum_required):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


-- The C compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
CMake Deprecation Warning at libhackrf/CMakeLists.txt:24 (cmake_minimum_required):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.2")
-- Checking for module 'libusb-1.0'
--   Found libusb-1.0, version 1.0.25
CMake Warning (dev) at /usr/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
  The package name passed to `find_package_handle_standard_args` (LIBUSB)
  does not match the name of the calling package (USB1).  This can lead to
  problems in calling code that expects `find_package` result variables
  (e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
  cmake/modules/FindUSB1.cmake:39 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
  libhackrf/CMakeLists.txt:48 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found LIBUSB: /usr/lib/x86_64-linux-gnu/libusb-1.0.so
-- Looking for include file pthread.h
-- Looking for include file pthread.h - found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Setting udev rule group to - plugdev
-- HackRF udev rules will be installed to '/etc/udev/rules.d' upon running 'make install'
CMake Deprecation Warning at hackrf-tools/CMakeLists.txt:24 (cmake_minimum_required):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


-- Found FFTW: /usr/lib/x86_64-linux-gnu/libfftw3f.so
-- Configuring done
-- Generating done
-- Build files have been written to: /home/{user}/dev/hackrf/host/build

make

~/dev/hackrf/host/build$ make
[  5%] Building C object libhackrf/src/CMakeFiles/hackrf.dir/hackrf.c.o
[ 10%] Linking C shared library libhackrf.so
[ 10%] Built target hackrf
[ 15%] Building C object libhackrf/src/CMakeFiles/hackrf-static.dir/hackrf.c.o
[ 20%] Linking C static library libhackrf.a
[ 20%] Built target hackrf-static
[ 25%] Building C object hackrf-tools/src/CMakeFiles/hackrf_transfer.dir/hackrf_transfer.c.o
[ 30%] Linking C executable hackrf_transfer
[ 30%] Built target hackrf_transfer
[ 35%] Building C object hackrf-tools/src/CMakeFiles/hackrf_spiflash.dir/hackrf_spiflash.c.o
[ 40%] Linking C executable hackrf_spiflash
[ 40%] Built target hackrf_spiflash
[ 45%] Building C object hackrf-tools/src/CMakeFiles/hackrf_cpldjtag.dir/hackrf_cpldjtag.c.o
[ 50%] Linking C executable hackrf_cpldjtag
[ 50%] Built target hackrf_cpldjtag
[ 55%] Building C object hackrf-tools/src/CMakeFiles/hackrf_info.dir/hackrf_info.c.o
[ 60%] Linking C executable hackrf_info
[ 60%] Built target hackrf_info
[ 65%] Building C object hackrf-tools/src/CMakeFiles/hackrf_debug.dir/hackrf_debug.c.o
[ 70%] Linking C executable hackrf_debug
[ 70%] Built target hackrf_debug
[ 75%] Building C object hackrf-tools/src/CMakeFiles/hackrf_clock.dir/hackrf_clock.c.o
[ 80%] Linking C executable hackrf_clock
[ 80%] Built target hackrf_clock
[ 85%] Building C object hackrf-tools/src/CMakeFiles/hackrf_sweep.dir/hackrf_sweep.c.o
[ 90%] Linking C executable hackrf_sweep
[ 90%] Built target hackrf_sweep
[ 95%] Building C object hackrf-tools/src/CMakeFiles/hackrf_operacake.dir/hackrf_operacake.c.o
[100%] Linking C executable hackrf_operacake
[100%] Built target hackrf_operacake

sudo make install

~/dev/hackrf/host/build$ sudo make install
Consolidate compiler generated dependencies of target hackrf
[ 10%] Built target hackrf
Consolidate compiler generated dependencies of target hackrf-static
[ 20%] Built target hackrf-static
Consolidate compiler generated dependencies of target hackrf_transfer
[ 30%] Built target hackrf_transfer
Consolidate compiler generated dependencies of target hackrf_spiflash
[ 40%] Built target hackrf_spiflash
Consolidate compiler generated dependencies of target hackrf_cpldjtag
[ 50%] Built target hackrf_cpldjtag
Consolidate compiler generated dependencies of target hackrf_info
[ 60%] Built target hackrf_info
Consolidate compiler generated dependencies of target hackrf_debug
[ 70%] Built target hackrf_debug
Consolidate compiler generated dependencies of target hackrf_clock
[ 80%] Built target hackrf_clock
Consolidate compiler generated dependencies of target hackrf_sweep
[ 90%] Built target hackrf_sweep
Consolidate compiler generated dependencies of target hackrf_operacake
[100%] Built target hackrf_operacake
Install the project...
-- Install configuration: ""
-- Installing: /usr/local/lib/pkgconfig/libhackrf.pc
-- Installing: /etc/udev/rules.d/53-hackrf.rules
-- Installing: /usr/local/lib/libhackrf.so.0.8.0
-- Installing: /usr/local/lib/libhackrf.so.0
-- Installing: /usr/local/lib/libhackrf.so
-- Installing: /usr/local/lib/libhackrf.a
-- Installing: /usr/local/include/libhackrf/hackrf.h
-- Installing: /usr/local/bin/hackrf_transfer
-- Set runtime path of "/usr/local/bin/hackrf_transfer" to ""
-- Installing: /usr/local/bin/hackrf_spiflash
-- Set runtime path of "/usr/local/bin/hackrf_spiflash" to ""
-- Installing: /usr/local/bin/hackrf_cpldjtag
-- Set runtime path of "/usr/local/bin/hackrf_cpldjtag" to ""
-- Installing: /usr/local/bin/hackrf_info
-- Set runtime path of "/usr/local/bin/hackrf_info" to ""
-- Installing: /usr/local/bin/hackrf_debug
-- Set runtime path of "/usr/local/bin/hackrf_debug" to ""
-- Installing: /usr/local/bin/hackrf_clock
-- Set runtime path of "/usr/local/bin/hackrf_clock" to ""
-- Installing: /usr/local/bin/hackrf_sweep
-- Set runtime path of "/usr/local/bin/hackrf_sweep" to ""
-- Installing: /usr/local/bin/hackrf_operacake
-- Set runtime path of "/usr/local/bin/hackrf_operacake" to ""

sudo ldconfig

~/dev/hackrf/host/build$ sudo ldconfig
/sbin/ldconfig.real: /usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link

By default this will attempt to install a udev rule to /etc/udev/rules.d to provide the usb or plugdev group access to HackRF.

If your setup requires the udev rule to be installed elsewhere, you can modify the path with -DUDEV_RULES_PATH=/path/to/udev.

Note: The udev rule is not installed by default for PyBOMBS installs, as they do not usually get installed with root privileges.

 

Clean CMake temporary files/dirs:

$ cd host/build
$ rm -rf *

 

hackrf_transfer

~/dev/hackrf$ hackrf_transfer -h
Usage:
        -h # this help
        [-d serial_number] # Serial number of desired HackRF.
        -r <filename> # Receive data into file (use '-' for stdout).
        -t <filename> # Transmit data from file (use '-' for stdin).
        -w # Receive data into file with WAV header and automatic name.
           # This is for SDR# compatibility and may not work with other software.
        [-f freq_hz] # Frequency in Hz [1MHz to 6000MHz supported, 0MHz to 7250MHz forceable].
        [-i if_freq_hz] # Intermediate Frequency (IF) in Hz [2170MHz to 2740MHz supported, 2000MHz to 3000MHz forceable].
        [-o lo_freq_hz] # Front-end Local Oscillator (LO) frequency in Hz [84MHz to 5400MHz].
        [-m image_reject] # Image rejection filter selection, 0=bypass, 1=low pass, 2=high pass.
        [-a amp_enable] # RX/TX RF amplifier 1=Enable, 0=Disable.
        [-p antenna_enable] # Antenna port power, 1=Enable, 0=Disable.
        [-l gain_db] # RX LNA (IF) gain, 0-40dB, 8dB steps
        [-g gain_db] # RX VGA (baseband) gain, 0-62dB, 2dB steps
        [-x gain_db] # TX VGA (IF) gain, 0-47dB, 1dB steps
        [-s sample_rate_hz] # Sample rate in Hz (2-20MHz supported, default 10MHz).
        [-F force] # Force use of parameters outside supported ranges.
        [-n num_samples] # Number of samples to transfer (default is unlimited).
        [-S buf_size] # Enable receive streaming with buffer size buf_size.
        [-B] # Print buffer statistics during transfer
        [-c amplitude] # CW signal source mode, amplitude 0-127 (DC value to DAC).
        [-R] # Repeat TX mode (default is off)
        [-b baseband_filter_bw_hz] # Set baseband filter bandwidth in Hz.
        Possible values: 1.75/2.5/3.5/5/5.5/6/7/8/9/10/12/14/15/20/24/28MHz, default <= 0.75 * sample_rate_hz.
        [-C ppm] # Set Internal crystal clock error in ppm.
        [-H] # Synchronize RX/TX to external trigger input.


 

Prerequisites for Visual Studio:

 

Download CMake: https://cmake.org/download/

  • libusbx-1.0.18 or later
PS C:\dev\vcpkg> .\vcpkg install libusb:x64-windows

Computing installation plan...
The following packages will be built and installed:
    libusb[core]:x64-windows -> 1.0.26.11791#3
  * pkgconf[core]:x64-windows -> 1.8.0#5
  * vcpkg-msbuild[core]:x64-windows -> 2023-08-08
  * vcpkg-pkgconfig-get-modules[core]:x64-windows -> 2023-02-25
Additional packages (*) will be modified to complete this operation.
Detecting compiler hash for triplet x64-windows...
:
:
:
libusb provides CMake targets:

    find_package(libusb CONFIG REQUIRED)
    target_include_directories(main PRIVATE ${LIBUSB_INCLUDE_DIRS})
    target_link_libraries(main PRIVATE ${LIBUSB_LIBRARIES})
  • fftw-3.3.5 or later
PS C:\dev\vcpkg> .\vcpkg search fftw3
dlib[fftw3]                               fftw3 support for dlib
fftw3                    3.3.10#8         FFTW is a C subroutine library for computing the discrete Fourier transfor...
fftw3[avx]                                Builds part of the library with avx, sse2, sse
fftw3[avx2]                               Builds part of the library with avx2, fma, avx, sse2, sse
fftw3[openmp]                             Builds openmp enabled lib
fftw3[sse]                                Builds part of the library with sse
fftw3[sse2]                               Builds part of the library with sse2, sse
fftw3[threads]                            Enable threads in fftw3
matplotplusplus[fftw]                     fftw3 support for Matplot++

PS C:\dev\vcpkg> .\vcpkg install fftw3:x64-windows
Computing installation plan...
:
fftw3 provides CMake targets:

    # this is heuristically generated, and may not be correct
    find_package(FFTW3 CONFIG REQUIRED)
    target_link_libraries(main PRIVATE FFTW3::fftw3)

    find_package(FFTW3f CONFIG REQUIRED)
    target_link_libraries(main PRIVATE FFTW3::fftw3f)

    find_package(FFTW3l CONFIG REQUIRED)
    target_link_libraries(main PRIVATE FFTW3::fftw3l)
 

https://github.com/GerHobbelt/pthread-win32

GitHub - GerHobbelt/pthread-win32: clone / cvs-import of pthread-win32 + local tweaks (including MSVC2008 - MSVC2022 project files)

cmake

hackrf\host\build> cmake ../ -G "Visual Studio 17 2022" -A x64 \
	-DLIBUSB_INCLUDE_DIR=C:\Dev\vcpkg\packages\libusb_x64-windows\include\libusb-1.0 \
    -DLIBUSB_LIBRARIES=C:\Dev\vcpkg\packages\libusb_x64-windows\lib\libusb-1.0.lib \
    -DTHREADS_PTHREADS_INCLUDE_DIR=D:\Dev\HackRF\pthread-win32 \
    -DTHREADS_PTHREADS_WIN32_LIBRARY=D:\Dev\HackRF\pthread-win32\windows\VS2022\bin\Release-Unicode-64bit-x64\pthread.lib \
    -DFFTW_INCLUDES=C:\Dev\vcpkg\packages\fftw3_x64-windows\include \
    -DFFTW_LIBRARIES=C:\Dev\vcpkg\packages\fftw3_x64-windows\lib\fftw3f.lib

CMake Deprecation Warning at CMakeLists.txt:3 (cmake_minimum_required):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.19045.
CMake Deprecation Warning at libhackrf/CMakeLists.txt:24 (cmake_minimum_required):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


CMake Deprecation Warning at hackrf-tools/CMakeLists.txt:24 (cmake_minimum_required):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


-- Configuring done
-- Generating done
-- Build files have been written to: D:/Dev/Defense/HackRF/hackrf/host/build

...

2023-08-30  02:46            18,307 ALL_BUILD.vcxproj
2023-08-30  02:06               288 ALL_BUILD.vcxproj.filters
2023-08-30  02:11               168 ALL_BUILD.vcxproj.user
2023-08-30  02:46            15,235 CMakeCache.txt
2023-08-30  02:48    <DIR>          CMakeFiles
2023-08-30  02:06             1,716 cmake_install.cmake
2023-08-30  02:06             1,486 cmake_uninstall.cmake
2023-08-30  02:46    <DIR>          hackrf-tools
2023-08-30  02:06            16,620 HackRF.sln
2023-08-30  02:06            10,139 INSTALL.vcxproj
2023-08-30  02:06               530 INSTALL.vcxproj.filters
2023-08-30  02:46    <DIR>          libhackrf
2023-08-30  02:46            19,742 uninstall.vcxproj
2023-08-30  02:06               722 uninstall.vcxproj.filters
2023-08-30  02:11    <DIR>          x64
2023-08-30  02:46            20,128 ZERO_CHECK.vcxproj
2023-08-30  02:06               531 ZERO_CHECK.vcxproj.filters

 

Open the HackRF.sln file with Microsoft Visual Studio Community 2022 and build!

Collect the compiled files from the three output folders into a single folder and it runs!

TODO: It would be good to try installing pthread through vcpkg as well.

 

 


https://www.oreilly.com/library/view/concurrency-in-c/9781492054498/

 

Concurrency in C# Cookbook, 2nd Edition


 

TPL (Task Parallel Library) Dataflow

NuGet package: System.Threading.Tasks.Dataflow

try
{
    var multiplyBlock = new TransformBlock<int, int>(item =>
    {
        if (item == 1)
            throw new InvalidOperationException("Blech.");
        return item * 2;
    });
    
    var subtractBlock = new TransformBlock<int, int>(item => item - 2);
    
    multiplyBlock.LinkTo(subtractBlock,
        new DataflowLinkOptions { PropagateCompletion = true });
    
    multiplyBlock.Post(1);
    subtractBlock.Completion.Wait();
}
catch (AggregateException ex)
{
    AggregateException agex = ex.Flatten();
    Trace.WriteLine(agex.InnerException);
}
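
For reference, the happy path through a small pipeline looks like this (a minimal sketch; ActionBlock serves as the terminal stage):

var multiplyBlock = new TransformBlock<int, int>(item => item * 2);
var subtractBlock = new TransformBlock<int, int>(item => item - 2);
var printBlock = new ActionBlock<int>(item => Trace.WriteLine(item));

var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };
multiplyBlock.LinkTo(subtractBlock, linkOptions);
subtractBlock.LinkTo(printBlock, linkOptions);

for (int i = 0; i < 5; ++i)
    multiplyBlock.Post(i);      // prints -2, 0, 2, 4, 6

multiplyBlock.Complete();       // completion propagates down the links
printBlock.Completion.Wait();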


https://www.oreilly.com/library/view/concurrency-in-c/9781492054498/

 

Concurrency in C# Cookbook, 2nd Edition


 

NuGet package: System.Reactive

 

  • Event streams can be treated like data streams.
  • Based on the concept of observable streams.
  • When you subscribe to an observable stream, you receive any number of data items (OnNext), and then the stream ends with either a single error (OnError) or an end-of-stream notification (OnCompleted).
  • Some observable streams never end.

The actual interfaces; the System.Reactive library contains all the implementations.

interface IObserver<in T>
{
    void OnNext(T item);
    void OnCompleted();
    void OnError(Exception error);
}

interface IObservable<out T>
{
    IDisposable Subscribe(IObserver<T> observer);
}

Example:

Observable.Interval(TimeSpan.FromSeconds(1))
    .Timestamp()
    .Where(ev => ev.Value % 2 == 0)
    .Select(ev => ev.Timestamp)
    .Subscribe(
        ts => Trace.WriteLine(ts),
        ex => Trace.WriteLine(ex));

IObservable<DateTimeOffset> timestamps = Observable
    .Interval(TimeSpan.FromSeconds(1))
    .Timestamp()
    .Where(ev => ev.Value % 2 == 0)
    .Select(ev => ev.Timestamp);
timestamps.Subscribe(
    ts => Trace.WriteLine(ts),
    ex => Trace.WriteLine(ex));

  • A System.Reactive subscription is also a resource.
  • The Subscribe operator returns an IDisposable that represents the subscription.
    • Always pass an error-handling parameter as well.
  • When your code is done receiving from the observable stream, dispose the subscription (see the sketch below).
  • Subscriptions behave differently for hot and cold observables:
    • hot observable: a stream of events that can happen at any time;
      if no subscription exists when an event occurs, that event is lost.
    • cold observable: no events occur automatically;
      it starts producing its events sequentially in response to a subscription.
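
A sketch of treating a subscription as a resource; disposing it stops the notifications:

IDisposable subscription = Observable
    .Interval(TimeSpan.FromSeconds(1))
    .Subscribe(
        x => Trace.WriteLine($"OnNext: {x}"),
        ex => Trace.WriteLine($"OnError: {ex}")); // always pass an error handler

// ... later, when no more notifications are wanted:
subscription.Dispose();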

http://www.introtorx.com 

 

Introduction to Rx


 

Converting .NET events

var progress = new Progress<int>();
IObservable<EventPattern<int>> progressReports =
    Observable.FromEventPattern<int>(
        handler => progress.ProgressChanged += handler,
        handler => progress.ProgressChanged -= handler);
progressReports.Subscribe(data => Trace.WriteLine("OnNext: " + data.EventArgs));


var timer = new System.Timers.Timer(interval: 1000) { Enabled = true };
IObservable<EventPattern<ElapsedEventArgs>> ticks =
    Observable.FromEventPattern<ElapsedEventHandler, ElapsedEventArgs>(
        handler => (s, a) => handler(s, a),	// converts EventHandler<ElapsedEventArgs> to ElapsedEventHandler
        handler => timer.Elapsed += handler,
        handler => timer.Elapsed -= handler);
ticks.Subscribe(data => Trace.WriteLine("OnNext: " + data.EventArgs.SignalTime));


var timer = new System.Timers.Timer(interval: 1000) { Enabled = true };
IObservable<EventPattern<object>> ticks =
    Observable.FromEventPattern(timer, nameof(Timer.Elapsed));
ticks.Subscribe(data => Trace.WriteLine("OnNext: "
    + (data.EventArgs as ElapsedEventArgs).SignalTime));

 

Delivering notifications with a context

Each OnNext notification is raised sequentially, but not necessarily on the same thread.

private void Button_Click(object sender, RoutedEventArgs e)
{
    SynchronizationContext uiContext = SynchronizationContext.Current;
    Trace.WriteLine($"UI Thread is {Environment.CurrentManagedThreadId}");
    Observable.Interval(TimeSpan.FromSeconds(1))
        .ObserveOn(uiContext)
        .Subscribe(x => Trace.WriteLine(
            $"Interval {x} on thread {Environment.CurrentManagedThreadId}"));
}

Using it to get off the UI thread

SynchronizationContext uiContext = SynchronizationContext.Current;
Trace.WriteLine($"UI thread is {Environment.CurrentManagedThreadId}");
Observable.FromEventPattern<MouseEventHandler, MouseEventArgs>(
    handler => (s, a) => handler(s, a),
    handler => MouseMove += handler,
    handler => MouseMove -= handler)
    .Select(evt => evt.EventArgs.GetPosition(this))
    .ObserveOn(Scheduler.Default)
    .Select(position =>
    {
        // complex calculation (placeholder: derive a value from the position)
        double rv = position.X + position.Y;
        Trace.WriteLine($"Calculated result {rv} on thread {Environment.CurrentManagedThreadId}");
        return rv;
    })
    .ObserveOn(uiContext)
    .Subscribe(x => Trace.WriteLine(
        $"Result {x} on thread {Environment.CurrentManagedThreadId}"));

 

Grouping event data with Window and Buffer

  • Buffer: waits until a group of incoming events is complete, then delivers all of them at once as a collection. Return type: IObservable<IList<T>>.
  • Window: groups incoming events logically, but delivers them as they arrive. Return type: IObservable<IObservable<T>>.
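
For example, Buffer(2) below delivers the interval events in pairs, while Window(2) would instead push each group as a nested observable as soon as the group starts (a sketch):

Observable.Interval(TimeSpan.FromSeconds(1))
    .Buffer(2)
    .Subscribe(pair => Trace.WriteLine($"Got {pair[0]} and {pair[1]}"));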

 


https://www.oreilly.com/library/view/concurrency-in-c/9781492054498/

 

Concurrency in C# Cookbook, 2nd Edition


 

Data parallelism

void RotateMatrices(IEnumerable<Matrix> matrices, float degrees)
{
    Parallel.ForEach(matrices, matrix => matrix.Rotate(degrees));
}

IEnumerable<bool> PrimalityTest(IEnumerable<int> values)
{
    return values.AsParallel()
        .Select(value => IsPrime(value));
}

 

Task parallelism (parallel invocation)

void ProcessArray(double[] array)
{
    Parallel.Invoke(
        () => ProcessPartialArray(array, 0, array.Length / 2),
        () => ProcessPartialArray(array, array.Length / 2, array.Length)
    );
}

void ProcessPartialArray(double[] array, int begin, int end)
{
    // CPU-intensive processing
}

With task parallelism, be especially careful with variables captured in closures.

Closures capture references, not values, so sharing can happen that isn't obvious.

try
{
    Parallel.Invoke(
        () => { throw new Exception(); },
        () => { throw new Exception(); });
}
catch (AggregateException ex)
{
    ex.Handle(exception =>
    {
        Trace.WriteLine(exception);
        return true; // handled
    });
}

Stopping a loop

void InvertMatrices(IEnumerable<Matrix> matrices)
{
    Parallel.ForEach(matrices, (matrix, state) =>
    {
        if (matrix.IsInvertible)
            matrix.Invert();
        else
            state.Stop();
    });
}

Stopping happens from inside the loop; cancellation happens from outside the loop.

void RotateMatrices(IEnumerable<Matrix> matrices, float degrees, CancellationToken token)
{
    Parallel.ForEach(matrices,
        new ParallelOptions { CancellationToken = token },
        matrix => matrix.Rotate(degrees));
}

Example of using a lock to protect shared state

int InvertMatrices(IEnumerable<Matrix> matrices)
{
    object mutex = new object();
    int nonInvertibleCount = 0;
    Parallel.ForEach(matrices, matrix =>
    {
        if (matrix.IsInvertible)
            matrix.Invert();
        else
        {
            lock (mutex)
            {
                ++nonInvertibleCount;
            }
        }
    });
    return nonInvertibleCount;
}

PLINQ provides almost the same features as Parallel. The differences:

  • PLINQ assumes it can use all the cores on the machine.
  • Parallel reacts dynamically to changing CPU conditions.

Parallel aggregation

int ParallelSum1(IEnumerable<int> values)
{
    object mutex = new object();
    int result = 0;    
    Parallel.ForEach(source: values,
        localInit: () => 0,
        body: (item, state, localValue) => localValue + item,
        localFinally: localValue =>
        {
            lock (mutex)
                result += localValue;
        });
    return result;
}

int ParallelSum2(IEnumerable<int> values)
{
    return values.AsParallel().Sum();
}

int ParallelSum3(IEnumerable<int> values)
{
    return values.AsParallel().Aggregate(
        seed: 0,
        func: (sum, item) => sum + item
    );
}

Parallel invocation

void DoAction20Times(Action action, CancellationToken token)
{
    Action[] actions = Enumerable.Repeat(action, 20).ToArray();
    Parallel.Invoke(
        new ParallelOptions { CancellationToken = token },
        actions);
}

 

The core of the TPL is the Task type.

The Parallel class and PLINQ are just convenient wrappers around the powerful Task type.

If you need dynamic parallelism, using the Task type directly is easiest.

 

Example: each node of a binary tree needs expensive processing

void Traverse(Node current)
{
    DoExpensiveActionOnNode(current);
    
    if (current.Left is not null)
        ProcessNode(current.Left);
    if (current.Right is not null)
        ProcessNode(current.Right);
}

Task ProcessNode(Node node,
    TaskCreationOptions options = TaskCreationOptions.AttachedToParent)
{
    return Task.Factory.StartNew(
        () => Traverse(node),
        CancellationToken.None,
        options,
        TaskScheduler.Default);
}

void ProcessTree(Node root)
{
    var task = ProcessNode(root, TaskCreationOptions.None);
    task.Wait();
}

Continuations

Task task = Task.Factory.StartNew(
    () => Thread.Sleep(TimeSpan.FromSeconds(2)),
    CancellationToken.None,
    TaskCreationOptions.None,
    TaskScheduler.Default);

Task continuation = task.ContinueWith(
    t => Trace.WriteLine("Task is done"),
    CancellationToken.None,
    TaskContinuationOptions.None,
    TaskScheduler.Default);

PLINQ

IEnumerable<int> MultiplyBy2(IEnumerable<int> values)
{
    return values.AsParallel().Select(value => value * 2);
}

IEnumerable<int> MultiplyBy2Ordered(IEnumerable<int> values)
{
    return values.AsParallel()
        .AsOrdered()
        .Select(value => value * 2);
}

int ParallelSum(IEnumerable<int> values)
{
    return values.AsParallel().Sum();
}

 


https://www.oreilly.com/library/view/concurrency-in-c/9781492054498/

 

Concurrency in C# Cookbook, 2nd Edition


Example

async Task DoSomethingAsync()
{
    int value = 1;
    
    // Saves the context:
    // SynchronizationContext if non-null, otherwise TaskScheduler.
    // ASP.NET Core uses the thread-pool context rather than a per-request context.
    await Task.Delay(TimeSpan.FromSeconds(1));
    
    value *= 2;
    
    await Task.Delay(TimeSpan.FromSeconds(1));
    
    Trace.WriteLine(value);
}

It's best to always call ConfigureAwait in core 'library' methods and to resume the context only when needed, in the outer 'user interface' methods.

async Task DoSomethingAsync()
{
    int value = 1;
    
    await Task.Delay(TimeSpan.FromSeconds(1))
        .ConfigureAwait(false);
    // Resumes execution on a thread-pool thread.
    
    value *= 2;
    
    await Task.Delay(TimeSpan.FromSeconds(1))
        .ConfigureAwait(false);
    
    Trace.WriteLine(value);
}

 

ValueTask<T>

  • A type that can reduce memory allocations, e.g., when the result can be read from an in-memory cache.

 

How to create Task instances

  1. For computational tasks, code the CPU actually executes, use Task.Run.
  2. If the work must run on a specific scheduler, use TaskFactory.StartNew.
  3. For event-based tasks, use TaskCompletionSource<TResult>
    (most I/O is implemented with TaskCompletionSource<TResult>; see the sketch below).
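
A sketch of the third option: wrapping an event source with TaskCompletionSource<TResult> so callers can await the next event. The timer is just a stand-in for any event-based or I/O operation:

Task<ElapsedEventArgs> NextTickAsync(System.Timers.Timer timer)
{
    var tcs = new TaskCompletionSource<ElapsedEventArgs>();
    ElapsedEventHandler handler = null;
    handler = (s, a) =>
    {
        timer.Elapsed -= handler;   // one-shot: unsubscribe after the first tick
        tcs.TrySetResult(a);        // complete the task the caller is awaiting
    };
    timer.Elapsed += handler;
    return tcs.Task;
}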

 

Error handling

async Task TrySomethingAsync()
{
    // The exception is raised in the Task.
    var task = PossibleExceptionAsync();
    
    try
    {
        // The exception is rethrown here, where the task is awaited.
        await task;
    }
    catch (NotSupportedException ex)
    {
        // An exception caught this way preserves its proper stack trace
        // and is not wrapped in a TargetInvocationException or AggregateException.
        LogException(ex);
        throw;
    }
}

 

Deadlock

async Task WaitAsync()
{
    // 3. Saves the current context
    await Task.Delay(TimeSpan.FromSeconds(1));
    // ...
    // 4. Tries to resume inside the saved context, but the thread there
    //    is blocked at step 2 in task.Wait(); the context allows only one
    //    thread at a time, so execution can never resume
}

void Deadlock()
{
    // 1. Starts the delay
    var task = WaitAsync();
    
    // 2. Blocks synchronously, waiting for the async method to complete
    task.Wait();
}

The code above deadlocks when called from a UI context or the classic ASP.NET context.

Fix with ConfigureAwait(false)

async Task WaitAsync()
{
    // 3. The current context is not captured because of ConfigureAwait(false)
    await Task.Delay(TimeSpan.FromSeconds(1))
        .ConfigureAwait(false);
    // ...
    // 4. Resumes on a thread-pool thread, so there is no deadlock
}

void Deadlock()
{
    // 1. Starts the delay
    var task = WaitAsync();
    
    // 2. Blocks synchronously, waiting for the async method to complete
    task.Wait();
}

https://learn.microsoft.com/en-us/dotnet/csharp/asynchronous-programming/

 

Asynchronous programming in C#

An overview of the C# language support for asynchronous programming using async, await, Task, and Task<T>.

https://learn.microsoft.com/en-us/dotnet/standard/asynchronous-programming-patterns/task-based-asynchronous-pattern-tap

 

Task-based Asynchronous Pattern (TAP): Introduction and overview

Learn about the Task-based Asynchronous Pattern (TAP), and compare it to the legacy patterns: Asynchronous Programming Model (APM) and Event-based Asynchronous Pattern (EAP).


 

Exponential backoff

async Task<string> DownloadStringWithRetries(HttpClient client, string uri)
{
    TimeSpan nextDelay = TimeSpan.FromSeconds(1);
    for (int i = 0; i < 3; ++i)
    {
        try
        {
            return await client.GetStringAsync(uri);
        }
        catch { }
        
        await Task.Delay(nextDelay);
        nextDelay = nextDelay + nextDelay;
    }
    // Try one last time, letting the error propagate
    return await client.GetStringAsync(uri);
}

 

Soft timeout

async Task<string> DownloadStringWithTimeout(HttpClient client, string uri)
{
    using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(3));
    Task<string> downloadTask = client.GetStringAsync(uri);
    Task timeoutTask = Task.Delay(Timeout.InfiniteTimeSpan, cts.Token);
    
    Task completedTask = await Task.WhenAny(downloadTask, timeoutTask);
    if (completedTask == timeoutTask)
    {
        // WARNING: downloadTask is still running here.
        return null;
    }
    return await downloadTask;
}

When execution should actually be canceled once the timeout elapses

async Task IssueTimeoutAsync()
{
    using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(3));
    CancellationToken token = cts.Token;
    await Task.Delay(TimeSpan.FromSeconds(10), token);
}

async Task IssueTimeoutAsync()
{
    using var cts = new CancellationTokenSource();
    CancellationToken token = cts.Token;
    cts.CancelAfter(TimeSpan.FromSeconds(3));
    await Task.Delay(TimeSpan.FromSeconds(10), token);
}

 

Implementing synchronous methods with an asynchronous signature

interface IMyAsync
{
    Task<int> GetValueAsync(CancellationToken token);
    Task DoSomethingAsync();
}

class MySync : IMyAsync
{
    // If a task result is used frequently, create the task once and reuse it.
    private static readonly Task<int> ZeroTask = Task.FromResult(0);

    public Task<int> GetValueAsync(CancellationToken token)
    {
        if (token.IsCancellationRequested)
            return Task.FromCanceled<int>(token);
        return Task.FromResult(10);
    }
    
    public Task<T> NotImplementedAsync<T>()
    {
        return Task.FromException<T>(new NotImplementedException());
    }
    
    protected void DoSomethingSynchronously()
    {
    }
    
    public Task DoSomethingAsync()
    {
        try
        {
            DoSomethingSynchronously();
            return Task.CompletedTask;
        }
        catch (Exception ex)
        {
            return Task.FromException(ex);
        }
    }
}

 

Progress reporting

async Task MyMethodAsync(IProgress<double> progress = null)
{
    bool done = false;
    double percentComplete = 0;
    while (!done)
    {
        ...
        progress?.Report(percentComplete);
    }
}

async Task CallMyMethodAsync()
{
    var progress = new Progress<double>();
    progress.ProgressChanged += (sender, args) =>
    {
        ...
    };
    await MyMethodAsync(progress);
}

 

Waiting for all tasks to complete

Task task1 = Task.Delay(TimeSpan.FromSeconds(1));
Task task2 = Task.Delay(TimeSpan.FromSeconds(2));
Task task3 = Task.Delay(TimeSpan.FromSeconds(1));

await Task.WhenAll(task1, task2, task3);


Task<int> task1 = Task.FromResult(1);
Task<int> task2 = Task.FromResult(3);
Task<int> task3 = Task.FromResult(5);

int[] results = await Task.WhenAll(task1, task2, task3);
// results = [1, 3, 5]

Example:

async Task<string> DownloadAllAsync(HttpClient client,
    IEnumerable<string> urls)
{
    var downloads = urls.Select(url => client.GetStringAsync(url));
    // No tasks have actually started yet.
    
    // Start the downloads from all URLs at the same time.
    Task<string>[] downloadTasks = downloads.ToArray();
    
    // Asynchronously wait for all the downloads to complete.
    string[] htmlPages = await Task.WhenAll(downloadTasks);
    
    return string.Concat(htmlPages);
}

If one of the tasks throws an exception, Task.WhenAll fails its returned task with that exception.

If several tasks throw exceptions, all of them are placed on the Task returned by Task.WhenAll.

But when that task is awaited, only one of the exceptions is thrown.

async Task ThrowNotImplementedExceptionAsync()
{
    throw new NotImplementedException();
}

async Task ThrowInvalidOperationExceptionAsync()
{
    throw new InvalidOperationException();
}

async Task ObserveOneExceptionAsync()
{
    var task1 = ThrowNotImplementedExceptionAsync();
    var task2 = ThrowInvalidOperationExceptionAsync();
    
    try
    {
        await Task.WhenAll(task1, task2);
    }
    catch (Exception ex)
    {
        // ex is either the NotImplementedException or the InvalidOperationException
        Trace.WriteLine(ex);
    }
}

async Task ObserveAllExceptionsAsync()
{
    var task1 = ThrowNotImplementedExceptionAsync();
    var task2 = ThrowInvalidOperationExceptionAsync();

    Task allTasks = Task.WhenAll(task1, task2);
    try
    {
        await allTasks;
    }
    catch
    {
        AggregateException allExceptions = allTasks.Exception;
        ...
    }
}

 

Processing tasks as they complete

async Task<int> DelayAndReturnAsync(int value)
{
    await Task.Delay(TimeSpan.FromSeconds(value));
    return value;
}

async Task AwaitAndProcessAsync(Task<int> task)
{
    int rv = await task;
    Trace.WriteLine(rv);
}

async Task ProcessTasksAsync(int flag)
{
    Task<int> taskA = DelayAndReturnAsync(2);
    Task<int> taskB = DelayAndReturnAsync(3);
    Task<int> taskC = DelayAndReturnAsync(1);
    var tasks = new[] { taskA, taskB, taskC };
    
    Task[] processingTasks;
    
    if (flag == 1)
    {
        IEnumerable<Task> taskQuery =
            from t in tasks select AwaitAndProcessAsync(t);
        processingTasks = taskQuery.ToArray();

        await Task.WhenAll(processingTasks);
    }
    else if (flag == 2)
    {
        processingTasks = tasks.Select(async t =>
        {
            var rv = await t;
            Trace.WriteLine(rv);
        }).ToArray();

        await Task.WhenAll(processingTasks);
    }
    else if (flag == 3)
    {
        foreach (Task<int> task in tasks.OrderByCompletion()) // OrderByCompletion comes from the Nito.AsyncEx package
        {
            int rv = await task;
            Trace.WriteLine(rv);
        }
    }
}

 

Exception handling for async void methods

sealed class MyAsyncCommand : ICommand
{
    async void ICommand.Execute(object parameter)
    {
        await Execute(parameter);
    }
    
    public async Task Execute(object parameter)
    {
        // implement the asynchronous operation
    }
    
    // implement CanExecute, etc.
}

 

Creating and consuming ValueTask

  • Use it as a return type when there is usually a synchronous result to return and asynchronous behavior is rare.
  • Consider it only when profiling shows a measurable performance improvement for the application.
  • Use it when implementing IAsyncDisposable, whose DisposeAsync method returns a ValueTask.

private Task<int> SlowMethodAsync();

public ValueTask<int> MethodAsync()
{
    if (CanBehaveSynchronously)
        return new ValueTask<int>(1);
    
    return new ValueTask<int>(SlowMethodAsync());
}

async Task ConsumingMethodAsync()
{
    ValueTask<int> valueTask = MethodAsync();
    // ... other concurrent work ...
    int value = await valueTask;
}

A ValueTask can be awaited only once.
For anything more complex, convert the ValueTask<T> to a Task<T> by calling AsTask.
Reading the result of a ValueTask synchronously can be done only once, after the ValueTask has completed.

async Task ConsumingTaskAsync()
{
    Task<int> task = MethodAsync().AsTask();
    // ... other concurrent work ...
    int value = await task;
    // A Task<T> is perfectly safe to await multiple times.
    int anotherValue = await task;
}

async Task ConsumingTaskAsync()
{
    Task<int> task1 = MethodAsync().AsTask();
    Task<int> task2 = MethodAsync().AsTask();
    int[] results = await Task.WhenAll(task1, task2);
}

 

Asynchronous Stream

async IAsyncEnumerable<string> GetValuesAsync(HttpClient client)
{
    const int limit = 10;
    for (int offset = 0; true; offset += limit)
    {
        string result = await client.GetStringAsync(
            $"https://example.com/api/values?offset={offset}&limit={limit}");
        string[] valuesOnThisPage = result.Split('\n');

        // Yield the results on the current page.
        foreach (string value in valuesOnThisPage)
            yield return value;

        // If this was the last page, we're done.
        if (valuesOnThisPage.Length != limit)
            break;
    }
}

public async Task ProcessValuesAsync(HttpClient client)
{
    await foreach (string value in GetValuesAsync(client))
    {
        Console.WriteLine(value);
    }
}

public async Task ProcessValuesAsync(HttpClient client)
{
    await foreach (string value in GetValuesAsync(client).ConfigureAwait(false))
    {
        await Task.Delay(100).ConfigureAwait(false); // asynchronous work
        Console.WriteLine(value);
    }
}

 

Using asynchronous streams with LINQ

IEnumerable<T> has LINQ to Objects, and IObservable<T> has LINQ to Events.

IAsyncEnumerable<T> also has LINQ support, via the System.Linq.Async NuGet package.

IAsyncEnumerable<int> values = SlowRange().WhereAwait(
    async value =>
    {
        // perform asynchronous work to decide whether to include this element
        await Task.Delay(10);
        return value % 2 == 0;
    })
    .Where(value => value % 4 == 0); // the result is an asynchronous stream

await foreach (int result in values)
    Console.WriteLine(result);


// produces a sequence that slows down as it progresses
async IAsyncEnumerable<int> SlowRange()
{
    for (int i = 0; i < 10; ++i)
    {
        await Task.Delay(i * 100);
        yield return i;
    }
}

The Async suffix is attached only to operators that extract a value or perform a computation and return an asynchronous scalar value rather than an asynchronous sequence.

int count = await SlowRange().CountAsync(
    value => value % 2 == 0);

// when the predicate is asynchronous, use the operators with the AwaitAsync suffix
int count = await SlowRange().CountAwaitAsync(
    async value =>
    {
        await Task.Delay(10);
        return value % 2 == 0;
    });

Cancelling asynchronous streams

using var cts = new CancellationTokenSource(500);
CancellationToken token = cts.Token;

await foreach (int result in SlowRange(token))
{
    Console.WriteLine(result);
}

// generate a sequence that slows down as it progresses
async IAsyncEnumerable<int> SlowRange(
    [EnumeratorCancellation] CancellationToken token = default)
{
    for (int i = 0; i < 10; ++i)
    {
        await Task.Delay(i * 100, token);
        yield return i;
    }
}

The WithCancellation extension method attaches a CancellationToken to the enumeration of an asynchronous stream.

async Task ConsumeSequence(IAsyncEnumerable<int> items)
{
    using var cts = new CancellationTokenSource(500);
    CancellationToken token = cts.Token;
    await foreach (int result in items.WithCancellation(token))
    {
        Console.WriteLine(result);
    }
}

await ConsumeSequence(SlowRange());

 

https://www.oreilly.com/library/view/concurrency-in-c/9781492054498/

 

Concurrency in C# Cookbook, 2nd Edition


 

Concurrency

  • Doing more than one thing at a time

Multithreading

  • A form of concurrency that uses multiple threads of execution

Parallel Processing

  • Doing lots of work by dividing it among multiple threads that run at the same time
  • A way to use multithreading to get the most out of multicore processors

Asynchronous Programming

  • A form of concurrency that uses futures or callbacks to avoid unnecessary threads

Reactive Programming

  • A declarative style of programming in which the application reacts to events
  • Based on asynchronous events rather than asynchronous operations
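A rough sketch of the difference between the parallel and asynchronous forms (items, ProcessItem, urls, and client are illustrative placeholders, not from the source):

async Task DemoAsync(int[] items, string[] urls, HttpClient client)
{
    // Parallel processing: CPU-bound work divided among multiple threads
    Parallel.ForEach(items, item => ProcessItem(item));

    // Asynchronous programming: I/O-bound work that holds no thread while waiting
    string[] pages = await Task.WhenAll(urls.Select(url => client.GetStringAsync(url)));
}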

 

 

https://cocodataset.org/

 


COCO (Common Objects in Context)

is a large-scale object detection, segmentation, and captioning dataset.

Features:

  • Object segmentation
  • Recognition in context
  • Superpixel stuff segmentation
  • 330K images (> 200K labeled)
  • 1.5 million object instances
  • 80 object categories
  • 91 stuff categories
  • 5 captions per image
  • 250,000 people with keypoints

 

Data format

All annotations share the same basic data structure below:

{
  "info": info,
  "images": [image],
  "annotations": [annotation],
  "licenses": [license],
}

info: {
  "year": int,
  "version": str,
  "description": str,
  "contributor": str,
  "url": str,
  "date_created": datetime,
}

image: {
  "id": int,
  "width": int,
  "height": int,
  "file_name": str,
  "license": int,
  "flickr_url": str,
  "coco_url": str,
  "date_captured": datetime,
}

license: {
  "id": int,
  "name": str, "url": str,
}
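As a rough illustration, the top-level structure can be deserialized with System.Text.Json; a minimal C# sketch (class and file names are illustrative, the datetime fields are kept as strings, and the task-specific annotations are left untyped):

using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using System.Text.Json.Serialization;

public class CocoImage
{
    [JsonPropertyName("id")] public int Id { get; set; }
    [JsonPropertyName("width")] public int Width { get; set; }
    [JsonPropertyName("height")] public int Height { get; set; }
    [JsonPropertyName("file_name")] public string FileName { get; set; }
    [JsonPropertyName("date_captured")] public string DateCaptured { get; set; }
}

public class CocoDataset
{
    [JsonPropertyName("images")] public List<CocoImage> Images { get; set; }
    // the annotation schema differs per task (detection, keypoints, ...), so keep it untyped here
    [JsonPropertyName("annotations")] public List<JsonElement> Annotations { get; set; }
}

CocoDataset dataset = JsonSerializer.Deserialize<CocoDataset>(
    File.ReadAllText("annotations/instances_val2017.json"));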

 

1. Object Detection

Each object instance annotation contains a series of fields,
including the category id and segmentation mask of the object.

annotation: {
  "id": int,
  "image_id": int,
  "category_id": int,
  "segmentation": RLE or [polygon],
  "area": float,
  "bbox": [x,y,width,height],
  "iscrowd": 0 or 1,
}

categories: [
  {
    "id": int,
    "name": str,
    "supercategory": str,
  }
]
  • iscrowd: large groups of objects (e.g. a crowd of people)
    • 0 = single object, polygons used
    • 1 = collection of objects, RLE used
  • segmentation
    • RLE (Run Length Encoding)
      • counts: [ ... ]
      • size: [ height, width ]
    • [ polygon ]
      • polygon: [ x1, y1, x2, y2, ... ]
  • bbox: enclosing bounding box, in [x, y, width, height] format
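Note that bbox stores the top-left corner plus width/height rather than two corners; a short sketch of the conversion (the values are illustrative):

// bbox = [x, y, width, height]; (x, y) is the top-left corner
float[] bbox = { 473.07f, 395.93f, 38.65f, 28.67f };
float x1 = bbox[0], y1 = bbox[1];                      // top-left corner
float x2 = bbox[0] + bbox[2], y2 = bbox[1] + bbox[3];  // bottom-right corner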

 

2. Keypoint Detection

annotation: {
  :
  : object detection annotations
  :
  "keypoints": [x1, y1, v1, ...],
  "num_keypoints": int,
  : ,
}

categories: [
  {
    :
    : object detection annotations
    :
    "keypoints": [str],
    "skeleton": [edge]
  }
]
  • annotation
    • keypoints: a length 3k array where k is the total number of keypoints defined for the category
      • each keypoint has a 0-indexed location x, y and a visibility flag v
      • v=0: not labeled (in which case x=y=0)
      • v=1: labeled but not visible
      • v=2: labeled and visible
    • num_keypoints: the number of labeled keypoints (v > 0) for a given object
      (many objects, e.g. crowds and small objects, will have num_keypoints=0).
  • categories
    • keypoints: a length k array of keypoint names
    • skeleton: defines connectivity via a list of keypoint edge pairs
    • edge: [ index1, index2 ]
"categories": [
  {
    "supercategory": "person",
    "id": 1,
    "name": "person",
    "keypoints": [
      "nose",
      "left_eye",       "right_eye",
      "left_ear",       "right_ear",
      "left_shoulder",  "right_shoulder",
      "left_elbow",     "right_elbow",
      "left_wrist",     "right_wrist",
      "left_hip",       "right_hip",
      "left_knee",      "right_knee",
      "left_ankle",     "right_ankle"
    ],
    "skeleton": [
      [16, 14], [14, 12], [17, 15], [15, 13],
      [12, 13], [6, 12], [7, 13], [6, 7], [6, 8],
      [7, 9], [8, 10], [9, 11], [2, 3], [1, 2],
      [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]
    ]
  }
]
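A short sketch of walking the flat keypoints array in (x, y, v) triplets; keypoints is assumed to come from the annotation and names from the category's keypoints list (note that the skeleton edges above index keypoints starting from 1, not 0):

static void PrintKeypoints(double[] keypoints, string[] names)
{
    for (int k = 0; k < keypoints.Length / 3; ++k)
    {
        double x = keypoints[3 * k];
        double y = keypoints[3 * k + 1];
        int v = (int)keypoints[3 * k + 2];
        if (v == 0)
            continue; // not labeled (x = y = 0)
        Console.WriteLine($"{names[k]}: ({x}, {y}), visible: {v == 2}");
    }
}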

 

3. Stuff Segmentation

The stuff annotation format is identical to and fully compatible with the object detection format above

(except iscrowd is unnecessary and set to 0 by default).

...

 

4. Panoptic Segmentation

For the panoptic task, each annotation struct is a per-image annotation rather than a per-object annotation.

Each per-image annotation has two parts:

(1) a PNG that stores the class-agnostic image segmentation and

(2) a JSON struct that stores the semantic information for each image segmentation.

...

  1. annotation.image_id == image.id
  2. For each annotation, per-pixel segment ids are stored as a single PNG at annotation.file_name.
  3. For each annotation, per-segment info is stored in annotation.segments_info.
    • segment_info.id stores the unique id of the segment and is used to retrieve the corresponding mask from the PNG.
    • category_id gives the semantic category
    • iscrowd indicates the segment encompasses a group of objects (relevant for thing categories only).
    • The bbox and area fields provide additional info about the segment.
  4. The COCO panoptic task has the same thing categories as the detection task,
    whereas the stuff categories differ from those in the stuff task.
    Finally, each category struct has two additional fields:
    • isthing: distinguishes stuff and thing categories
    • color: is useful for consistent visualization
annotation: {
  "image_id": int,
  "file_name": str,
  "segments_info": [segment_info],
}

segment_info: {
  "id": int,
  "category_id": int,
  "area": int,
  "bbox": [x, y, width, height],
  "iscrowd": 0 or 1,
}

categories: [
  {
    "id": int,
    "name": str,
    "supercategory": str,
    "isthing": 0 or 1,
    "color": [R,G,B],
  }
]
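To map PNG pixels back to segment_info.id, the panoptic API encodes the id in the RGB channels; a one-line sketch of that convention (panopticapi's rgb2id):

// id = R + 256 * G + 256^2 * B
static int Rgb2Id(byte r, byte g, byte b) => r + 256 * g + 256 * 256 * b;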

 

5. Image Captioning

These annotations are used to store image captions.

Each caption describes the specified image and each image has at least 5 captions (some images have more).

annotation: {
  "id": int,
  "image_id": int,
  "caption": str,
}

 

6. DensePose

Each annotation contains a series of fields, including category id, bounding box, body part masks and parametrization data for selected points.

DensePose annotations are stored in dp_* fields:

 

Annotated masks:

  • dp_masks: RLE encoded dense masks.
    • All part masks are of size 256x256.
    • They correspond to 14 semantically meaningful parts of the body:
      Torso, Left/Right Hand, Left/Right Foot, Upper Leg Left/Right, Lower Leg Left/Right, Upper Arm Left/Right, Lower Arm Left/Right, Head;

Annotated points:

  • dp_x, dp_y: spatial coordinates of collected points on the image.
    • The coordinates are scaled such that the bounding box size is 256x256.
  • dp_I: The patch index that indicates which of the 24 surface patches the point is on.
    • Patches correspond to the body parts described above.
      Some body parts are split into 2 patches:
      1. Torso
      2. Torso
      3. Right Hand
      4. Left Hand
      5. Left Foot
      6. Right Foot
      7. Upper Leg Right
      8. Upper Leg Left
      9. Upper Leg Right
      10. Upper Leg Left
      11. Lower Leg Right
      12. Lower Leg Left
      13. Lower Leg Right
      14. Lower Leg Left
      15. Upper Arm Left
      16. Upper Arm Right
      17. Upper Arm Left
      18. Upper Arm Right
      19. Lower Arm Left
      20. Lower Arm Right
      21. Lower Arm Left
      22. Lower Arm Right
      23. Head
      24. Head
  • dp_U, dp_V: Coordinates in the UV space.
    Each surface patch has a separate 2D parameterization.
annotation: {
  "id": int,
  "image_id": int,
  "category_id": int,
  "is_crowd": 0 or 1,
  "area": int,
  "bbox": [x, y, width, height],
  "dp_I": [float],
  "dp_U": [float],
  "dp_V": [float],
  "dp_x": [float],
  "dp_y": [float],
  "dp_masks": [RLE],
}
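Since dp_x/dp_y are expressed in the 256x256 bounding-box frame described above, mapping a collected point back to image coordinates is a scale-and-offset; a sketch under that assumption (the reference implementation may divide by 255 rather than 256, so verify before relying on it):

// bbox = [x, y, width, height]; dp_x/dp_y live in a 256x256 box-relative frame
static (double X, double Y) ToImageCoords(double dpX, double dpY, float[] bbox)
{
    double imageX = bbox[0] + dpX / 256.0 * bbox[2];
    double imageY = bbox[1] + dpY / 256.0 * bbox[3];
    return (imageX, imageY);
}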

http://host.robots.ox.ac.uk/pascal/VOC/

 

The PASCAL Visual Object Classes Homepage


PASCAL(Pattern Analysis, Statistical Modeling and Computational Learning)

VOC(Visual Object Classes)

 

Pascal VOC Challenges 2005-2012

 

Classification/Detection Competitions

  1. Classification
    For each of the twenty classes, predicting presence/absence of an example of that class in the test image.
  2. Detection
    Predicting the bounding box and label of each object from the twenty target classes in the test image.

20 classes

Segmentation Competition

  • Segmentation
    Generating pixel-wise segmentations giving the class of the object visible at each pixel, or "background" otherwise.

Action Classification Competition

  • Action Classification
    Predicting the action(s) being performed by a person in a still image.

10 action classes + "other"

 

ImageNet Large Scale Visual Recognition Competition

To estimate the content of photographs for the purpose of retrieval and automatic annotation using a subset of the large hand-labeled ImageNet dataset (10,000,000 labeled images depicting 10,000+ object categories) as training.

 

 

Person Layout Tester Competition

  • Person Layout
    Predicting the bounding box and label of each part of a person (head, hands, feet).

 

Data

Folder hierarchy

VOC20XX
 ├─ Annotations
 ├─ ImageSets
 ├─ JPEGImages
 ├─ SegmentationClass
 └─ SegmentationObject
  • Annotations: xml files with the same names as the source images in the JPEGImages folder; the ground-truth data
  • ImageSets: image group information by purpose (test, train, trainval, val), including which images contain a given class
  • JPEGImages: image files with the *.jpg extension; the input data
  • SegmentationClass: label images for semantic segmentation training
  • SegmentationObject: label images for instance segmentation training

XML file structure

<annotation>
  <folder>VOC2012</folder>
  <filename>2007_000027.jpg</filename>
  <source>
    <database>The VOC2007 Database</database>
    <annotation>PASCAL VOC2007</annotation>
    <image>flickr</image>
  </source>
  <size>
    <width>486</width>
    <height>500</height>
    <depth>3</depth>
  </size>
  <segmented>0</segmented>
  <object>
    <name>person</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>174</xmin>
      <ymin>101</ymin>
      <xmax>349</xmax>
      <ymax>351</ymax>
    </bndbox>
    <part>
      <name>head</name>
      <bndbox>
        <xmin>169</xmin>
        <ymin>104</ymin>
        <xmax>209</xmax>
        <ymax>146</ymax>
      </bndbox>
    </part>
    <part>
      <name>hand</name>
      <bndbox>
        <xmin>278</xmin>
        <ymin>210</ymin>
        <xmax>297</xmax>
        <ymax>233</ymax>
      </bndbox>
    </part>
    <part>
      <name>foot</name>
      <bndbox>
        <xmin>273</xmin>
        <ymin>333</ymin>
        <xmax>297</xmax>
        <ymax>354</ymax>
      </bndbox>
    </part>
    <part>
      <name>foot</name>
      <bndbox>
        <xmin>319</xmin>
        <ymin>307</ymin>
        <xmax>340</xmax>
        <ymax>326</ymax>
      </bndbox>
    </part>
  </object>
</annotation>
  • size: the width, height, and depth (channels) of the image corresponding to the xml file
    • width
    • height
    • depth
  • segmented:
  • object
    • name: class name
    • pose: used only for person
    • truncated: 0 = fully visible, 1 = partially visible
    • difficult: 0 = easy to recognize, 1 = difficult to recognize
    • bndbox
      • xmin: x coordinate of the top-left corner
      • ymin: y coordinate of the top-left corner
      • xmax: x coordinate of the bottom-right corner
      • ymax: y coordinate of the bottom-right corner
    • part: used only for person
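A minimal System.Xml.Linq sketch for reading the objects out of one of these annotation files (the file name is illustrative):

using System;
using System.Xml.Linq;

XDocument doc = XDocument.Load("Annotations/2007_000027.xml");
foreach (XElement obj in doc.Root.Elements("object"))
{
    string name = (string)obj.Element("name");
    XElement box = obj.Element("bndbox");
    int xmin = (int)box.Element("xmin");
    int ymin = (int)box.Element("ymin");
    int xmax = (int)box.Element("xmax");
    int ymax = (int)box.Element("ymax");
    Console.WriteLine($"{name}: ({xmin}, {ymin}) - ({xmax}, {ymax})");
}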

ASP.NET Core 7 / Blazor Server App / Dockerfile

 

Refer to the linux-worker-docker project under qiime2 on GitHub.

https://github.com/qiime2/linux-worker-docker

 


Add the Dockerfile contents of the project above to the Blazor Server App's Dockerfile.

#See https://aka.ms/customizecontainer to learn how to customize your debug container
# and how Visual Studio uses this Dockerfile to build your images for faster debugging.

FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base

WORKDIR /app
EXPOSE 80
EXPOSE 443

# Locale for click
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8

# Utilities
RUN apt-get update -q
RUN apt-get install -yq wget unzip bzip2 git build-essential

# Install conda
RUN wget -q https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/miniconda3.sh
RUN /bin/bash /tmp/miniconda3.sh -bp /opt/miniconda3
RUN rm /tmp/miniconda3.sh

# Update conda and install conda-build
RUN /opt/miniconda3/bin/conda update -yq conda
RUN /opt/miniconda3/bin/conda install -yq conda-build wget

RUN wget -q https://data.qiime2.org/distro/core/qiime2-2023.5-py38-linux-conda.yml
RUN /opt/miniconda3/bin/conda env create -yq -n qiime2-2023.5 --file qiime2-2023.5-py38-linux-conda.yml

# Install any other goodies
#RUN /opt/miniconda3/bin/conda run pip install -q https://github.com/qiime2/q2lint/archive/master.zip
#RUN /opt/miniconda3/bin/conda install -yq -c conda-forge nodejs

# Set conda environment
RUN echo "export PATH=/opt/miniconda3/bin:$PATH" > /etc/profile
ENV PATH /opt/miniconda3/bin:$PATH


FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY ["TestBlazorServerApp/TestBlazorServerApp.csproj", "TestBlazorServerApp/"]
RUN dotnet restore "TestBlazorServerApp/TestBlazorServerApp.csproj"
COPY . .
WORKDIR "/src/TestBlazorServerApp"
RUN dotnet build "TestBlazorServerApp.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "TestBlazorServerApp.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "TestBlazorServerApp.dll"]

 

https://www.tutorialspoint.com/unix/unix-io-redirections.htm

 

Unix / Linux - Shell Input/Output Redirections


Output Redirection

$ who > users
$ cat users
user	tty01	Sep 12 07:30
:

Example

$ echo line 1 > lines
$ cat lines
line 1
$

$ echo line 2 >> lines
$ cat lines
line 1
line 2
$

 

Input Redirection

The commands that normally take their input from the standard input can have their input redirected from a file in this manner.

For example, to count the number of lines in the file lines generated above,
you can execute the command as follows -

$ wc -l lines
2 lines
$

$ wc -l < lines
2
$

$ wc -l << EOF
> This is a simple lookup program
> for good (and bad) restaurants
> in Cape Town.
> EOF
3
$

 

Here Document

A here document is used to redirect input into an interactive shell script or program.

We can run an interactive program within a shell script without user action by supplying the required input for the interactive program, or interactive shell script.

The general form for a here document is -

command << delimiter
document
delimiter

 

Here the shell interprets the << operator as an instruction to read input until it finds a line containing the specified delimiter. All the input lines up to the line containing the delimiter are then fed into the standard input of the command.

 

The delimiter tells the shell that the here document has completed.

Without it, the shell continues to read the input forever.

The delimiter must be a single word that does not contain spaces or tabs.

 

Following is the input to the command wc -l to count the total number of lines -

$ wc -l << EOF
> This is a simple lookup program
> for good (and bad) restaurants
> in Cape Town.
> EOF
3
$

You can use the here document to print multiple lines using your script as follows -

#!/bin/sh

cat << EOF
This is a simple lookup program 
for good (and bad) restaurants
in Cape Town.
EOF

Result

This is a simple lookup program
for good (and bad) restaurants
in Cape Town.

 

Discard the output

Sometimes you will need to execute a command, but you don't want the output displayed on the screen.

In such cases, you can discard the output by redirecting it to the file /dev/null -

$ command > /dev/null

The file /dev/null is a special file that automatically discards all its input.

 

To discard both output of a command and its error output,
use standard redirection to redirect STDERR to STDOUT -

$ command > /dev/null 2>&1

Here 2 represents STDERR and 1 represents STDOUT.

 

You can display a message on STDERR by redirecting STDOUT into STDERR as follows -

$ echo message 1>&2

 

Redirection Commands

Sequence  Command      Description
1         pgm > file   Output of pgm is redirected to file
2         pgm >> file  Output of pgm is appended to file
3         pgm < file   Program pgm reads its input from file
4         n > file     Output from stream with descriptor n redirected to file
5         n >> file    Output from stream with descriptor n appended to file
6         n >& m       Merges output from stream n with stream m
7         n <& m       Merges input from stream n with stream m
8         << tag       Standard input comes from here through next tag at the start of file
9         |            Takes output from one program, or process, and sends it to another

The file descriptors

  • 0 = normally standard input (STDIN)
  • 1 = standard output (STDOUT)
  • 2 = standard error output (STDERR)

https://www.tutorialspoint.com/unix/shell_scripting.htm

 

Shell Scripting Tutorial


 

Basic structure

File name: test.sh

#!/bin/sh

echo "Hello, World"
  • Extension: .sh
  • The first line declares which shell to use
    • #!/bin/sh
    • #!/bin/bash
  • To run a shell script file, it must be given execute permission
$ chmod 755 test.sh

Execution

$ ./test.sh
or
$ sh test.sh
or
$ bash test.sh

 

Basic syntax

Comments

Start with #

 

Input/Output

  • Input: read
  • Output: echo
#!/bin/sh

read NAME
echo "Hello, $NAME!"

Result

$ ./test.sh
dozob
Hello, dozob!

 

※ Tip

In Bash, the -e flag lets echo interpret backslash escape sequences

#!/bin/bash

echo -e "Hello\n$NAME!"	# prints a newline ('\n')

 

Variables

  • Variable names may use letters, digits, and underscores ('_')
  • There must be no spaces around '=' when assigning a value
  • Strings must be wrapped in double quotes (")
  • Prefix a variable with $ to use it; the name can be wrapped in {}
  • Prefix with the readonly keyword to define a read-only variable
  • Variables can be deleted with unset, except readonly variables
#!/bin/sh

var1=1234
var2="text"

echo "var2=$var2"

readonly var3="initialized readonly variable"
var3="try to assign readonly variable"

unset var2

Result

$ ./test.sh
var2=text
./test.sh: 9: var3: is read only

 

Special Variables

Variable  Function
$0        Script name
$1 ~ $9   Nth argument
$#        Number of arguments passed to the script
$*        All arguments treated as a single unit
$@        All arguments treated individually
$?        Exit status of the last executed command
          (success: 0, failure: 1)
$$        Process ID of the shell script
$!        Process ID of the last background process

 

Metacharacters (special characters)

* ? [ ] ' " \ $ ; & ( ) | ^ < > new-line space tab

When used inside a string, they must be escaped with a leading '\'.

Sequence  Quoting       Description
1         Single quote  All special characters between these quotes lose their special meaning.
2         Double quote  Most special characters between these quotes lose their special meaning,
                        with these exceptions - $, `, \$, \', \", \\
3         Backslash     Any character immediately following the backslash loses its special meaning.
4         Back quote    Anything in between back quotes would be treated as a command and would be executed.

Example

#!/bin/bash

echo <-$1500.**>; (update?) [y|n]
# -bash: syntax error near unexpected token `;'
echo \<-\$1500.\*\*\>\; \(update\?\) \[y\|n\]
echo '<-$1500.**>; (update?) [y|n]'

echo 'It\'s Shell Programming
# It\s Shell Programming

echo 'It\'s Shell Programming'
# Syntax error: Unterminated quoted string

VAR=ZARA
echo "$VAR owes <-\$1500.**>; [ as of (`date +%m/%d`) ]"
# ZARA owes <-$1500.**>; [ as of (07/02) ]

 

Variable value substitution

Syntax           Description
${var}           Substituted with the value of var.
${var:-word}     If var is null or unset, word is substituted for var.
                 The value of var does not change.
${var:=word}     If var is null or unset, var is set to the value of word.
${var:+word}     If var is set, word is substituted for var.
                 The value of var does not change.
${var:?message}  If var is null or unset, message is printed to standard error.
                 This checks that variables are set correctly.
#!/bin/sh

echo "1. \${var:-default value1}: \"${var:-default value1}\", var=${var}"
echo "2. \${var:=default value2}: \"${var:=default value2}\", var=${var}"

var="assigned value"
echo "var=\"assigned value\""
echo "3. \${var:+default value3}: \"${var:+default value3}\", var=${var}"
echo "4. \${var:?default value4}: \"${var:?default value4}\", var=${var}"

unset var
echo "unset var"
echo "5. \${var:+default value5}: \"${var:+default value5}\", var=${var}"
echo "6. \${var:?default value6}:"
echo " \"${var:?default value6}\", var=${var}"

Result

$ ./test.sh
1. ${var:-default value1}: "default value1", var=
2. ${var:=default value2}: "default value2", var=default value2
var="assigned value"
3. ${var:+default value3}: "default value3", var=assigned value
4. ${var:?default value4}: "assigned value", var=assigned value
unset var
5. ${var:+default value5}: "", var=
6. ${var:?default value6}:
./test.sh: line 15: var: default value6

 

Arrays

#!/bin/bash

# bash shell
ARRAY=(item item2 item3 item4)
ARRAY[0]="ITEM1"
ARRAY[2]="ITEM3"

echo "ARRAY[0]: ${ARRAY[0]}"
echo "ARRAY[2]: ${ARRAY[2]}"

echo "ARRAY[*]: ${ARRAY[*]}"
echo "ARRAY[@]: ${ARRAY[@]}"

Result

$ ./test.sh
ARRAY[0]: ITEM1
ARRAY[2]: ITEM3
ARRAY[*]: ITEM1 item2 ITEM3 item4
ARRAY[@]: ITEM1 item2 ITEM3 item4

 

Arithmetic Operators

Operator            Description                                               Example: a=10, b=20
+ (Addition)        Adds values on either side of the operator                echo `expr $a + $b` → 30
- (Subtraction)     Subtracts right hand operand from left hand operand       echo `expr $a - $b` → -10
\* (Multiplication) Multiplies values on either side of the operator          echo `expr $a \* $b` → 200
/ (Division)        Divides left hand operand by right hand operand           echo `expr $b / $a` → 2
% (Modulus)         Divides left hand operand by right hand operand           echo `expr $b % $a` → 0
                    and returns the remainder
= (Assignment)      Assigns right operand to left operand                     a=$b
== (Equality)       Compares two numbers; returns true if both are the same   [ $a == $b ] → false
!= (Not Equality)   Compares two numbers; returns true if they are different  [ $a != $b ] → true

※ Tip

for example, [ $a == $b ] is correct

whereas, [$a==$b] is incorrect

 

 

Relational / Boolean / String Operators

Operator  Description                                           Example
-eq       equal                                                 [ $a -eq $b ]
-ne       not equal                                             [ $a -ne $b ]
-gt       greater than                                          [ $a -gt $b ]
-lt       less than                                             [ $a -lt $b ]
-ge       greater than or equal to                              [ $a -ge $b ]
-le       less than or equal to                                 [ $a -le $b ]
Boolean Operators
!         logical negation                                      [ ! false ] → true
-o        logical OR                                            [ $a -lt 20 -o $b -gt 100 ] → true
-a        logical AND                                           [ $a -lt 20 -a $b -gt 100 ] → false
String Operators
=         equal                                                 [ $a = $b ] → not true
!=        not equal                                             [ $a != $b ] → true
-z        checks if the given string operand size is zero;      [ -z $a ] → not true
          if it is zero length, then it returns true
-n        checks if the given string operand size is non-zero;  [ -n $a ] → not false
          if it is nonzero length, then it returns true
str       checks if str is not the empty string;                [ $a ] → not false
          if it is empty, then it returns false

 

File Test Operators

Assume a variable file holds an existing file name "test" the size of which is 100 bytes and has read, write and execute permission on -

Operator  Checks if file ~                                                Example
-e file   exists; is true even if file is a directory but exists          [ -e $file ] → true
-s file   has size greater than 0                                         [ -s $file ] → true

-b file   is a block special file                                         [ -b $file ] → false
-c file   is a character special file                                     [ -c $file ] → false
-d file   is a directory                                                  [ -d $file ] → not true
-p file   is a named pipe                                                 [ -p $file ] → false
-f file   is an ordinary file as opposed to a directory or special file   [ -f $file ] → true
-t file   has its file descriptor open and associated with a terminal     [ -t $file ] → false

-g file   has its Set Group ID (SGID) bit set                             [ -g $file ] → false
-k file   has its Sticky bit set                                          [ -k $file ] → false
-u file   has its Set User ID (SUID) bit set                              [ -u $file ] → false

-r file   is readable                                                     [ -r $file ] → true
-w file   is writable                                                     [ -w $file ] → true
-x file   is executable                                                   [ -x $file ] → true

 

Decision Making

if ... else

  • if ... fi
  • if ... else ... fi
  • if ... elif ... else ... fi

 

case ... esac

  • case ... esac

case VARIABLE in CONDITION/VALUE) Command ;; esac

#!/bin/sh

DRINK="coffee"
case "$DRINK" in
    "beer") echo "맥주"
    ;;
    "juice") echo "주스"
    ;;
    "coffee") echo "커피"
    ;;
esac

 

 

Loops

  • while
  • for
  • until
  • select
while command1 ;	# this is loop1, the outer loop
do
    Statement(s) to be executed if command1 is true
    
    while command2 ;	# this is loop2, the inner loop
    do
        Statement(s) to be executed if command2 is true
    done
    
    Statement(s) to be executed if command1 is true
done

 

Example

#!/bin/sh

a=0
while [ "$a" -lt 10 ]	# this is loop1
#until [ "$a" -ge 10 ]
do
    b="$a"
    
    while [ "$b" -ge 0 ]	# this is loop2
    #until [ "$b" -lt 0 ]
    do
        echo -n "$b "		# -n option lets echo avoid printing a new line character
        b=`expr $b - 1`
    done
    
    echo
    
    a=`expr $a + 1`
done

Result

0
1 0
2 1 0
3 2 1 0
4 3 2 1 0
5 4 3 2 1 0
6 5 4 3 2 1 0
7 6 5 4 3 2 1 0
8 7 6 5 4 3 2 1 0
9 8 7 6 5 4 3 2 1 0

 

Example

#!/bin/sh

for var1 in 1 2 3
do
   for var2 in 0 5
   do
      if [ $var1 -eq 2 -a $var2 -eq 0 ]
      then
         break 2
      else
         echo "$var1 $var2"
      fi
   done
done

Result

1 0
1 5

 

Example

#!/bin/sh

NUMS="1 2 3 4 5 6 7"

for NUM in $NUMS
do
   Q=`expr $NUM % 2`
   if [ $Q -eq 0 ]
   then
      echo "$NUM: Number is an even number!"
      continue
   fi
   echo "$NUM: Found odd number"
done

bash

#!/bin/bash

# using array
NUMS=(1 2 3 4 5 6 7)

for NUM in ${NUMS[*]}
do
   Q=`expr $NUM % 2`
   if [ $Q -eq 0 ]
   then
      echo "$NUM: Number is an even number!"
      continue
   fi
   echo "$NUM: Found odd number"
done

 

Result

1: Found odd number
2: Number is an even number!
3: Found odd number
4: Number is an even number!
5: Found odd number
6: Number is an even number!
7: Found odd number

 

Substitution

Sequence  Escape  Description
1         \\      backslash
2         \a      alert (BEL)
3         \b      backspace
4         \c      suppress trailing newline
5         \f      form feed
6         \n      new line
7         \r      carriage return
8         \t      horizontal tab
9         \v      vertical tab

You can use the -E option to disable the interpretation of the backslash escapes (default).

You can use the -e option to enable the interpretation of the backslash escapes.

You can use the -n option to disable the insertion of a new line.

 

Command Substitution

Command substitution is the mechanism by which the shell performs a given set of commands

and then substitutes their output in the place of the commands.

Syntax

`command`

When performing the command substitution make sure that you use the backquote, not the single quote character.

 

Example

#!/bin/sh

DATE=`date`
echo "Date is $DATE"

USERS=`who | wc -l`
echo "Logged in user are $USERS"

UP=`date ; uptime`
echo "Uptime is $UP"

Result

Date is Wed Aug  9 16:54:06 KST 2023
Logged in user are 0
Uptime is Wed Aug  9 16:54:06 KST 2023
 16:54:06 up 10 days, 14:15,  0 users,  load average: 0.03, 0.01, 0.00

 

Running the bash shell on Linux

public static class ShellHelper
{
    public static ShellOutput Bash(this string cmd, string sWorkingDir)
    {
        var sEscapedArgs = cmd.Replace("\"", "\\\"");
        var process = new Process
        {
            StartInfo = new ProcessStartInfo
            {
                FileName = "/bin/bash",
                Arguments = $"-c \"{sEscapedArgs}\"",
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                UseShellExecute = false,
                CreateNoWindow = true,
                WorkingDirectory = sWorkingDir
            }
        };

        ShellOutput rv = null;
        try
        {
            process.Start();
            // NOTE: stdout/stderr are only read after the process exits (see
            // ShellOutput below); a command that fills the pipe buffer can
            // deadlock here, so read the streams before WaitForExit when
            // large output is expected.
            process.WaitForExit();

            rv = new ShellOutput(process);
        }
        catch (Exception ex)
        {
            rv = new ShellOutput(null, ex);
            LogEx.Logger.LogError(ex, nameof(ShellHelper));
        }
        return rv;
    }
}

public class ShellOutput
{
    public ShellOutput(Process process, Exception ex = null)
    {
        this.Process = process;
        m_sStdErr = ex?.Message;
    }

    #region Fields

    string m_sStdOut;
    string m_sStdErr;

    #endregion Fields

    public Process Process { get; }

    public bool IsSuccessful => this.Process?.ExitCode == 0;

    public string StdOutText => m_sStdOut ??= this.Process?.StandardOutput.ReadToEnd();

    public string StdErrText => m_sStdErr ??= this.Process?.StandardError.ReadToEnd();
}

 

Writing a script to run qiime inside the Python virtual environment where QIIME 2 is installed

#!/bin/bash

_CONDA_ROOT="/home/user/miniconda3"

export PATH=$_CONDA_ROOT/bin:$_CONDA_ROOT/condabin

source $_CONDA_ROOT/etc/profile.d/conda.sh

#echo conda activate qiime2-2023.5
conda activate qiime2-2023.5

if [ $# == 0 ]
then
    qiime info
else
    qiime $*
fi

#echo conda deactivate
conda deactivate

 

 

Running qiime and getting the result

string sResult = "Running...";
var rv = ShellHelper.Bash("~/qiime2/q2run.sh info", null);
//var rv = ShellHelper.Bash("~/qiime2/q2run.sh tools list-types", null);
//var rv = ShellHelper.Bash("~/qiime2/q2run.sh tools list-formats", null);
if (rv.IsSuccessful)
{
    sResult = rv.StdOutText;
}
else
{
    sResult = rv.StdErrText;
}

 

A script that runs multiple commands inside the QIIME 2 Python virtual environment

q2env.sh

#!/bin/bash

_CONDA_ROOT="/home/user/miniconda3"

export PATH=$_CONDA_ROOT/bin:$_CONDA_ROOT/condabin:$PATH

source $_CONDA_ROOT/etc/profile.d/conda.sh

#echo conda activate qiime2-2023.5
conda activate qiime2-2023.5

if [ $# == 0 ]
then
    qiime info
else
    tid=1
    for i
    do
        echo "### q2-task-$tid"
        echo "$i"
        echo ">>>"
        $i
        echo "<<<"
        tid=`expr $tid + 1`
    done
fi

#echo conda deactivate
conda deactivate

Example:

$ ./q2env.sh "qiime info" "qiime tools list-types"
### q2-task-1
qiime info
>>>
System versions
:
<<<
### q2-task-2
qiime tools list-types
>>>
Bowtie2Index
        No description
:
<<<
$

qiime tools

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime tools --help
Usage: qiime tools [OPTIONS] COMMAND [ARGS]...

  Tools for working with QIIME 2 files.

Options:
  --help      Show this message and exit.

Commands:
  cache-create              Create an empty cache at the given location.
  cache-fetch               Fetches an artifact out of a cache into a .qza.
  cache-garbage-collection  Runs garbage collection on the cache at the
                            specified location.
  cache-remove              Removes a given key from a cache.
  cache-status              Checks the status of the cache.
  cache-store               Stores a .qza in the cache under a key.
  cast-metadata             Designate metadata column types.
  citations                 Print citations for a QIIME 2 result.
  export                    Export data from a QIIME 2 Artifact or a
                            Visualization
  extract                   Extract a QIIME 2 Artifact or Visualization
                            archive.
  import                    Import data into a new QIIME 2 Artifact.
  inspect-metadata          Inspect columns available in metadata.
  list-formats              List the available formats.
  list-types                List the available semantic types.
  peek                      Take a peek at a QIIME 2 Artifact or
                            Visualization.
  validate                  Validate data in a QIIME 2 Artifact.
  view                      View a QIIME 2 Visualization.
  • export
  • import
  • list-formats
  • list-types

 

qiime tools list-types

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime tools list-types
Bowtie2Index
        No description

DeblurStats
        No description

DistanceMatrix
        A symmetric matrix representing distances between entities.

EMPPairedEndSequences
        No description

EMPSingleEndSequences
        No description

ErrorCorrectionDetails
        No description

FeatureData[AlignedProteinSequence]
        Aligned protein sequences associated with a set of feature
        identifiers. Exactly one sequence is associated with each
        feature identfiier.

FeatureData[AlignedRNASequence]
        Aligned RNA sequences associated with a set of feature
        identifiers. Exactly one sequence is associated with each
        feature identfiier.

FeatureData[AlignedSequence]
        Aligned DNA sequences associated with a set of feature
        identifiers (e.g., aligned ASV sequences or OTU representative
        sequence). Exactly one sequence is associated with each feature
        identfiier.

FeatureData[BLAST6]
        BLAST results associated with a set of feature identifiers.

FeatureData[DecontamScore]
        No description

FeatureData[DifferentialAbundance]
        No description

FeatureData[Differential]
        No description

FeatureData[Importance]
        No description

FeatureData[PairedEndRNASequence]
        No description

FeatureData[PairedEndSequence]
        No description

FeatureData[ProteinSequence]
        Unaligned protein sequences associated with a set of feature
        identifiers. Exactly one sequence is associated with each
        feature identfiier.

FeatureData[RNASequence]
        Unaligned RNA sequences associated with a set of feature
        identifiers. Exactly one sequence is associated with each
        feature identfiier.

FeatureData[Sequence]
        Unaligned DNA sequences associated with a set of feature
        identifiers (e.g., ASV sequences or OTU representative
        sequence). Exactly one sequence is associated with each feature
        identfiier.

FeatureData[Taxonomy]
        Hierarchical metadata or annotations associated with a set of
        features. This can contain one or more hierarchical levels, and
        annotations can be anything (e.g., taxonomy of organisms,
        functional categorization of gene families, ...) as long as it
        is strictly hierarchical.

FeatureTable[Balance]
        No description

FeatureTable[Composition]
        A feature table (e.g., samples by ASVs) where each value in the
        matrix is a whole number greater than 0 representing the
        frequency or count of a feature in the corresponding sample.
        These data are typically not raw counts, having been
        transformed in some way to exclude zero counts.

FeatureTable[Design]
        No description

FeatureTable[Frequency]
        A feature table (e.g., samples by ASVs) where each value in the
        matrix is a whole number greater than or equal to 0
        representing the frequency or count of a feature in the
        corresponding sample. These data should be raw (not normalized)
        counts.

FeatureTable[PercentileNormalized]
        No description

FeatureTable[PresenceAbsence]
        A feature table (e.g., samples by ASVs) where each value
        indicates is a boolean indication of whether the feature is
        observed in the sample or not.

FeatureTable[RelativeFrequency]
        A feature table (e.g., samples by ASVs) where each value in the
        matrix is a real number greater than or equal to 0.0 and less
        than or equal to 1.0 representing the proportion of the sample
        that is composed of that feature. The feature values for each
        sample should sum to 1.0.

Hierarchy
        No description

ImmutableMetadata
        Immutable sample or feature metadata.

MultiplexedPairedEndBarcodeInSequence
        Multiplexed sequences (i.e., representing multiple difference
        samples), which are paired-end reads, and which contain the
        barcode (i.e., index) indicating the source sample as part of
        the sequence read.

MultiplexedSingleEndBarcodeInSequence
        Multiplexed sequences (i.e., representing multiple difference
        samples), which are single-end reads, and which contain the
        barcode (i.e., index) indicating the source sample as part of
        the sequence read.

PCoAResults
        The results of running principal coordinate analysis (PCoA).

Phylogeny[Rooted]
        A phylogenetic tree containing a defined root.

Phylogeny[Unrooted]
        A phylogenetic tree not containing a defined root.

Placements
        No description

ProcrustesStatistics
        The results of running Procrustes analysis.

QualityFilterStats
        No description

RawSequences
        No description

SampleData[AlphaDiversity]
        Alpha diversity values, each associated with a single sample
        identifier.

SampleData[ArtificialGrouping]
        No description

SampleData[BooleanSeries]
        No description

SampleData[ClassifierPredictions]
        No description

SampleData[DADA2Stats]
        No description

SampleData[FirstDifferences]
        No description

SampleData[JoinedSequencesWithQuality]
        Collections of joined paired-end sequences with quality scores
        associated with specified samples (i.e., demultiplexed
        sequences).

SampleData[PairedEndSequencesWithQuality]
        Collections of unjoined paired-end sequences with quality
        scores associated with specified samples (i.e., demultiplexed
        sequences).

SampleData[Probabilities]
        No description

SampleData[RegressorPredictions]
        No description

SampleData[SequencesWithQuality]
        Collections of sequences with quality scores associated with
        specified samples (i.e., demultiplexed sequences).

SampleData[Sequences]
        Collections of sequences associated with specified samples
        (i.e., demultiplexed sequences).

SampleData[TrueTargets]
        No description

SampleEstimator[Classifier]
        No description

SampleEstimator[Regressor]
        No description

SeppReferenceDatabase
        No description

TaxonomicClassifier
        No description

UchimeStats
        No description
  • SampleData[PairedEndSequencesWithQuality]
    • Collections of unjoined paired-end sequences with quality scores
      associated with specified samples (i.e., demultiplexed sequences).

 

qiime tools list-formats --importable / --exportable

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime tools list-formats --importable
AlignedDNAFASTAFormat
        No description

AlignedDNASequencesDirectoryFormat
        No description

AlignedProteinFASTAFormat
        No description

AlignedProteinSequencesDirectoryFormat
        No description

AlignedRNAFASTAFormat
        No description

AlignedRNASequencesDirectoryFormat
        No description

AlphaDiversityDirectoryFormat
        No description

AlphaDiversityFormat
        No description

ArtificialGroupingDirectoryFormat
        No description

ArtificialGroupingFormat
        No description

BIOMV100DirFmt
        No description

BIOMV100Format
        No description

BIOMV210DirFmt
        No description

BIOMV210Format
        No description

BLAST6DirectoryFormat
        No description

BLAST6Format
        No description

BooleanSeriesDirectoryFormat
        No description

BooleanSeriesFormat
        No description

Bowtie2IndexDirFmt
        No description

CasavaOneEightLanelessPerSampleDirFmt
        No description

CasavaOneEightSingleLanePerSampleDirFmt
        No description

DADA2StatsDirFmt
        No description

DADA2StatsFormat
        No description

DNAFASTAFormat
        No description

DNASequencesDirectoryFormat
        No description

DataLoafPackageDirFmt
        No description

DeblurStatsDirFmt
        No description

DeblurStatsFmt
        No description

DecontamScoreDirFmt
        No description

DecontamScoreFormat
        No description

DifferentialDirectoryFormat
        No description

DifferentialFormat
        No description

DistanceMatrixDirectoryFormat
        No description

EMPPairedEndCasavaDirFmt
        No description

EMPPairedEndDirFmt
        No description

EMPSingleEndCasavaDirFmt
        No description

EMPSingleEndDirFmt
        No description

ErrorCorrectionDetailsDirFmt
        No description

FastqGzFormat
        A gzipped fastq file.

FirstDifferencesDirectoryFormat
        No description

FirstDifferencesFormat
        No description

HeaderlessTSVTaxonomyDirectoryFormat
        No description

HeaderlessTSVTaxonomyFormat
        Format for a 2+ column TSV file without a header.

ImmutableMetadataDirectoryFormat
        No description

ImmutableMetadataFormat
        No description

ImportanceDirectoryFormat
        No description

ImportanceFormat
        No description

LSMatFormat
        No description

MixedCaseAlignedDNAFASTAFormat
        No description

MixedCaseAlignedDNASequencesDirectoryFormat
        No description

MixedCaseAlignedRNAFASTAFormat
        No description

MixedCaseAlignedRNASequencesDirectoryFormat
        No description

MixedCaseDNAFASTAFormat
        No description

MixedCaseDNASequencesDirectoryFormat
        No description

MixedCaseRNAFASTAFormat
        No description

MixedCaseRNASequencesDirectoryFormat
        No description

MultiplexedFastaQualDirFmt
        No description

MultiplexedPairedEndBarcodeInSequenceDirFmt
        No description

MultiplexedSingleEndBarcodeInSequenceDirFmt
        No description

NewickDirectoryFormat
        No description

NewickFormat
        No description

OrdinationDirectoryFormat
        No description

OrdinationFormat
        No description

PairedDNASequencesDirectoryFormat
        No description

PairedEndFastqManifestPhred33
        No description

PairedEndFastqManifestPhred33V2
        No description

PairedEndFastqManifestPhred64
        No description

PairedEndFastqManifestPhred64V2
        No description

PairedRNASequencesDirectoryFormat
        No description

PlacementsDirFmt
        No description

PlacementsFormat
        No description

PredictionsDirectoryFormat
        No description

PredictionsFormat
        No description

ProbabilitiesDirectoryFormat
        No description

ProbabilitiesFormat
        No description

ProcrustesStatisticsDirFmt
        No description

ProcrustesStatisticsFmt
        No description

ProteinFASTAFormat
        No description

ProteinSequencesDirectoryFormat
        No description

QIIME1DemuxDirFmt
        No description

QIIME1DemuxFormat
        QIIME 1 demultiplexed FASTA format.

QualityFilterStatsDirFmt
        No description

QualityFilterStatsFmt
        No description

RNAFASTAFormat
        No description

RNASequencesDirectoryFormat
        No description

SampleEstimatorDirFmt
        No description

SampleIdIndexedSingleEndPerSampleDirFmt
        Single-end reads in fastq.gz files where base filename is the
        sample id

SeppReferenceDirFmt
        No description

SingleEndFastqManifestPhred33
        No description

SingleEndFastqManifestPhred33V2
        No description

SingleEndFastqManifestPhred64
        No description

SingleEndFastqManifestPhred64V2
        No description

SingleLanePerSamplePairedEndFastqDirFmt
        No description

SingleLanePerSampleSingleEndFastqDirFmt
        No description

TSVTaxonomyDirectoryFormat
        No description

TSVTaxonomyFormat
        Format for a 2+ column TSV file with an expected minimal
        header.

TaxonomicClassiferTemporaryPickleDirFmt
        No description

TrueTargetsDirectoryFormat
        No description

UchimeStatsDirFmt
        No description

UchimeStatsFmt
        No description
  • PairedEndFastqManifestPhred33V2

 

qiime tools import

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime tools import --help
Usage: qiime tools import [OPTIONS]

  Import data to create a new QIIME 2 Artifact. See https://docs.qiime2.org/
  for usage examples and details on the file types and associated semantic
  types that can be imported.

Options:
  --type TEXT             The semantic type of the artifact that will be
                          created upon importing. Use --show-importable-types
                          to see what importable semantic types are available
                          in the current deployment.                [required]
  --input-path PATH       Path to file or directory that should be imported.
                                                                    [required]
  --output-path ARTIFACT  Path where output artifact should be written.
                                                                    [required]
  --input-format TEXT     The format of the data to be imported. If not
                          provided, data must be in the format expected by the
                          semantic type provided via --type.
  --show-importable-types Show the semantic types that can be supplied to
                          --type to import data into an artifact.
  --show-importable-formats
                          Show formats that can be supplied to --input-format
                          to import data into an artifact.
  --help                  Show this message and exit.

 

$ qiime tools import \
    --type 'SampleData[PairedEndSequencesWithQuality]' \
    --input-path sample-metadata.tsv \
    --input-format PairedEndFastqManifestPhred33V2 \
    --output-path demux-paired-end.qza

 

qiime demux summarize

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime demux summarize --help
Usage: qiime demux summarize [OPTIONS]

  Summarize counts per sample for all samples, and generate interactive
  positional quality plots based on `n` randomly selected sequences.

Inputs:
  --i-data ARTIFACT SampleData[SequencesWithQuality |
    PairedEndSequencesWithQuality | JoinedSequencesWithQuality]
                       The demultiplexed sequences to be summarized.
                                                                    [required]
Parameters:
  --p-n INTEGER        The number of sequences that should be selected at
                       random for quality score plots. The quality plots will
                       present the average positional qualities across all of
                       the sequences selected. If input sequences are paired
                       end, plots will be generated for both forward and
                       reverse reads for the same `n` sequences.
                                                              [default: 10000]
Outputs:
  --o-visualization VISUALIZATION
                                                                    [required]
Miscellaneous:
  --output-dir PATH    Output unspecified results to a directory
  --verbose / --quiet  Display verbose output to stdout and/or stderr during
                       execution of this action. Or silence output if
                       execution is successful (silence is golden).
  --example-data PATH  Write example data and exit.
  --citations          Show citations and exit.
  --help               Show this message and exit.

Examples:
  # ### example: demux
  qiime demux summarize \
    --i-data demux.qza \
    --o-visualization visualization.qzv

 

$ qiime demux summarize \
    --i-data demux-paired-end.qza \
    --o-visualization demux-paired-end.qzv

 

 

qiime dada2 denoise-paired

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime dada2 denoise-paired --help
Usage: qiime dada2 denoise-paired [OPTIONS]

  This method denoises paired-end sequences, dereplicates them, and filters
  chimeras.

Inputs:
  --i-demultiplexed-seqs ARTIFACT SampleData[PairedEndSequencesWithQuality]
                         The paired-end demultiplexed sequences to be
                         denoised.                                  [required]
Parameters:
  --p-trunc-len-f INTEGER
                         Position at which forward read sequences should be
                         truncated due to decrease in quality. This truncates
                         the 3' end of the of the input sequences, which will
                         be the bases that were sequenced in the last cycles.
                         Reads that are shorter than this value will be
                         discarded. After this parameter is applied there must
                         still be at least a 12 nucleotide overlap between the
                         forward and reverse reads. If 0 is provided, no
                         truncation or length filtering will be performed
                                                                    [required]
  --p-trunc-len-r INTEGER
                         Position at which reverse read sequences should be
                         truncated due to decrease in quality. This truncates
                         the 3' end of the of the input sequences, which will
                         be the bases that were sequenced in the last cycles.
                         Reads that are shorter than this value will be
                         discarded. After this parameter is applied there must
                         still be at least a 12 nucleotide overlap between the
                         forward and reverse reads. If 0 is provided, no
                         truncation or length filtering will be performed
                                                                    [required]
  --p-trim-left-f INTEGER
                         Position at which forward read sequences should be
                         trimmed due to low quality. This trims the 5' end of
                         the input sequences, which will be the bases that
                         were sequenced in the first cycles.      [default: 0]
  --p-trim-left-r INTEGER
                         Position at which reverse read sequences should be
                         trimmed due to low quality. This trims the 5' end of
                         the input sequences, which will be the bases that
                         were sequenced in the first cycles.      [default: 0]
  --p-max-ee-f NUMBER    Forward reads with number of expected errors higher
                         than this value will be discarded.     [default: 2.0]
  --p-max-ee-r NUMBER    Reverse reads with number of expected errors higher
                         than this value will be discarded.     [default: 2.0]
  --p-trunc-q INTEGER    Reads are truncated at the first instance of a
                         quality score less than or equal to this value. If
                         the resulting read is then shorter than `trunc-len-f`
                         or `trunc-len-r` (depending on the direction of the
                         read) it is discarded.                   [default: 2]
  --p-min-overlap INTEGER
    Range(4, None)       The minimum length of the overlap required for
                         merging the forward and reverse reads.  [default: 12]
  --p-pooling-method TEXT Choices('independent', 'pseudo')
                         The method used to pool samples for denoising.
                         "independent": Samples are denoised indpendently.
                         "pseudo": The pseudo-pooling method is used to
                         approximate pooling of samples. In short, samples are
                         denoised independently once, ASVs detected in at
                         least 2 samples are recorded, and samples are
                         denoised independently a second time, but this time
                         with prior knowledge of the recorded ASVs and thus
                         higher sensitivity to those ASVs.
                                                      [default: 'independent']
  --p-chimera-method TEXT Choices('consensus', 'none', 'pooled')
                         The method used to remove chimeras. "none": No
                         chimera removal is performed. "pooled": All reads are
                         pooled prior to chimera detection. "consensus":
                         Chimeras are detected in samples individually, and
                         sequences found chimeric in a sufficient fraction of
                         samples are removed.           [default: 'consensus']
  --p-min-fold-parent-over-abundance NUMBER
                         The minimum abundance of potential parents of a
                         sequence being tested as chimeric, expressed as a
                         fold-change versus the abundance of the sequence
                         being tested. Values should be greater than or equal
                         to 1 (i.e. parents should be more abundant than the
                         sequence being tested). This parameter has no effect
                         if chimera-method is "none".           [default: 1.0]
  --p-allow-one-off / --p-no-allow-one-off
                         Bimeras that are one-off from exact are also
                         identified if the `allow-one-off` argument is TrueIf
                         True, a sequence will be identified as bimera if it
                         is one mismatch or indel away from an exact bimera.
                                                              [default: False]
  --p-n-threads INTEGER  The number of threads to use for multithreaded
                         processing. If 0 is provided, all available cores
                         will be used.                            [default: 1]
  --p-n-reads-learn INTEGER
                         The number of reads to use when training the error
                         model. Smaller numbers will result in a shorter run
                         time but a less reliable error model.
                                                            [default: 1000000]
  --p-hashed-feature-ids / --p-no-hashed-feature-ids
                         If true, the feature ids in the resulting table will
                         be presented as hashes of the sequences defining each
                         feature. The hash will always be the same for the
                         same sequence so this allows feature tables to be
                         merged across runs of this method. You should only
                         merge tables if the exact same parameters are used
                         for each run.                         [default: True]
Outputs:
  --o-table ARTIFACT FeatureTable[Frequency]
                         The resulting feature table.               [required]
  --o-representative-sequences ARTIFACT FeatureData[Sequence]
                         The resulting feature sequences. Each feature in the
                         feature table will be represented by exactly one
                         sequence, and these sequences will be the joined
                         paired-end sequences.                      [required]
  --o-denoising-stats ARTIFACT SampleData[DADA2Stats]
                                                                    [required]
Miscellaneous:
  --output-dir PATH      Output unspecified results to a directory
  --verbose / --quiet    Display verbose output to stdout and/or stderr
                         during execution of this action. Or silence output if
                         execution is successful (silence is golden).
  --example-data PATH    Write example data and exit.
  --citations            Show citations and exit.
  --help                 Show this message and exit.

Examples:
  # ### example: denoise paired
  qiime dada2 denoise-paired \
    --i-demultiplexed-seqs demux-paired.qza \
    --p-trunc-len-f 150 \
    --p-trunc-len-r 140 \
    --o-representative-sequences representative-sequences.qza \
    --o-table table.qza \
    --o-denoising-stats denoising-stats.qza

 

$ qiime dada2 denoise-paired \
    --i-demultiplexed-seqs demux-paired-end.qza \
    --p-trim-left-f 23 \
    --p-trim-left-r 23 \
    --p-trunc-len-f 240 \
    --p-trunc-len-r 226 \
    --o-representative-sequences rep-seqs-dada2.qza \
    --o-table table-dada2.qza \
    --o-denoising-stats stats-dada2.qza

 

 

qiime metadata tabulate

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime metadata tabulate --help
Usage: qiime metadata tabulate [OPTIONS]

  Generate a tabular view of Metadata. The output visualization supports
  interactive filtering, sorting, and exporting to common file formats.

Parameters:
  --m-input-file METADATA...
    (multiple            The metadata to tabulate.
     arguments will be
     merged)                                                        [required]
  --p-page-size INTEGER  The maximum number of Metadata records to display
                         per page                               [default: 100]
Outputs:
  --o-visualization VISUALIZATION
                                                                    [required]
Miscellaneous:
  --output-dir PATH      Output unspecified results to a directory
  --verbose / --quiet    Display verbose output to stdout and/or stderr
                         during execution of this action. Or silence output if
                         execution is successful (silence is golden).
  --example-data PATH    Write example data and exit.
  --citations            Show citations and exit.
  --help                 Show this message and exit.

 

$ qiime metadata tabulate \
    --m-input-file stats-dada2.qza \
    --o-visualization stats-dada2.qzv

 

 

qiime feature-table summarize

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime feature-table summarize --help
Usage: qiime feature-table summarize [OPTIONS]

  Generate visual and tabular summaries of a feature table.

Inputs:
  --i-table ARTIFACT FeatureTable[Frequency | RelativeFrequency |
    PresenceAbsence]   The feature table to be summarized.          [required]
Parameters:
  --m-sample-metadata-file METADATA...
    (multiple          The sample metadata.
     arguments will
     be merged)                                                     [optional]
Outputs:
  --o-visualization VISUALIZATION
                                                                    [required]
Miscellaneous:
  --output-dir PATH    Output unspecified results to a directory
  --verbose / --quiet  Display verbose output to stdout and/or stderr during
                       execution of this action. Or silence output if
                       execution is successful (silence is golden).
  --example-data PATH  Write example data and exit.
  --citations          Show citations and exit.
  --help               Show this message and exit.

Examples:
  # ### example: feature table summarize
  qiime feature-table summarize \
    --i-table feature-table.qza \
    --o-visualization table.qzv

 

$ qiime feature-table summarize \
    --i-table table-dada2.qza \
    --o-visualization table.qzv \
    --m-sample-metadata-file sample-metadata.tsv

 

 

qiime feature-table tabulate-seqs

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime feature-table tabulate-seqs --help
Usage: qiime feature-table tabulate-seqs [OPTIONS]

  Generate tabular view of feature identifier to sequence mapping, including
  links to BLAST each sequence against the NCBI nt database.

Inputs:
  --i-data ARTIFACT FeatureData[Sequence | AlignedSequence]
                       The feature sequences to be tabulated.       [required]
Outputs:
  --o-visualization VISUALIZATION
                                                                    [required]
Miscellaneous:
  --output-dir PATH    Output unspecified results to a directory
  --verbose / --quiet  Display verbose output to stdout and/or stderr during
                       execution of this action. Or silence output if
                       execution is successful (silence is golden).
  --example-data PATH  Write example data and exit.
  --citations          Show citations and exit.
  --help               Show this message and exit.

Examples:
  # ### example: feature table tabulate seqs
  qiime feature-table tabulate-seqs \
    --i-data rep-seqs.qza \
    --o-visualization rep-seqs.qzv

 

$ qiime feature-table tabulate-seqs \
    --i-data rep-seqs-dada2.qza \
    --o-visualization rep-seqs.qzv

 

 

qiime phylogeny align-to-tree-mafft-fasttree

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime phylogeny align-to-tree-mafft-fasttree --help
Usage: qiime phylogeny align-to-tree-mafft-fasttree [OPTIONS]

  This pipeline will start by creating a sequence alignment using MAFFT, after
  which any alignment columns that are phylogenetically uninformative or
  ambiguously aligned will be removed (masked). The resulting masked alignment
  will be used to infer a phylogenetic tree and then subsequently rooted at
  its midpoint. Output files from each step of the pipeline will be saved.
  This includes both the unmasked and masked MAFFT alignment from q2-alignment
  methods, and both the rooted and unrooted phylogenies from q2-phylogeny
  methods.

Inputs:
  --i-sequences ARTIFACT FeatureData[Sequence]
                          The sequences to be used for creating a fasttree
                          based rooted phylogenetic tree.           [required]
Parameters:
  --p-n-threads VALUE Int % Range(1, None) | Str % Choices('auto')
                          The number of threads. (Use `auto` to automatically
                          use all available cores) This value is used when
                          aligning the sequences and creating the tree with
                          fasttree.                               [default: 1]
  --p-mask-max-gap-frequency PROPORTION Range(0, 1, inclusive_end=True)
                          The maximum relative frequency of gap characters in
                          a column for the column to be retained. This
                          relative frequency must be a number between 0.0 and
                          1.0 (inclusive), where 0.0 retains only those
                          columns without gap characters, and 1.0 retains all
                          columns  regardless of gap character frequency. This
                          value is used when masking the aligned sequences.
                                                                [default: 1.0]
  --p-mask-min-conservation PROPORTION Range(0, 1, inclusive_end=True)
                          The minimum relative frequency of at least one
                          non-gap character in a column for that column to be
                          retained. This relative frequency must be a number
                          between 0.0 and 1.0 (inclusive). For example, if a
                          value of  0.4 is provided, a column will only be
                          retained  if it contains at least one character that
                          is present in at least 40% of the sequences. This
                          value is used when masking the aligned sequences.
                                                                [default: 0.4]
  --p-parttree / --p-no-parttree
                          This flag is required if the number of sequences
                          being aligned is larger than 1000000. Disabled by
                          default.                            [default: False]
Outputs:
  --o-alignment ARTIFACT FeatureData[AlignedSequence]
                          The aligned sequences.                    [required]
  --o-masked-alignment ARTIFACT FeatureData[AlignedSequence]
                          The masked alignment.                     [required]
  --o-tree ARTIFACT       The unrooted phylogenetic tree.
    Phylogeny[Unrooted]                                             [required]
  --o-rooted-tree ARTIFACT
    Phylogeny[Rooted]     The rooted phylogenetic tree.             [required]
Miscellaneous:
  --output-dir PATH       Output unspecified results to a directory
  --verbose / --quiet     Display verbose output to stdout and/or stderr
                          during execution of this action. Or silence output
                          if execution is successful (silence is golden).
  --recycle-pool TEXT     Use a cache pool for pipeline resumption. QIIME 2
                          will cache your results in this pool for reuse by
                          future invocations. These pools are retained until
                          deleted by the user. If not provided, QIIME 2 will
                          create a pool which is automatically reused by
                          invocations of the same action and removed if the
                          action is successful. Note: these pools are local to
                          the cache you are using.
  --no-recycle            Do not recycle results from a previous failed
                          pipeline run or save the results from this run for
                          future recycling.
  --parallel              Execute your action in parallel. This flag will use
                          your default parallel config.
  --parallel-config FILE  Execute your action in parallel using a config at
                          the indicated path.
  --use-cache DIRECTORY   Specify the cache to be used for the intermediate
                          work of this pipeline. If not provided, the default
                          cache under $TMP/qiime2/<uname> will be used.
                          IMPORTANT FOR HPC USERS: If you are on an HPC system
                          and are using parallel execution it is important to
                          set this to a location that is globally accessible
                          to all nodes in the cluster.
  --example-data PATH     Write example data and exit.
  --citations             Show citations and exit.
  --help                  Show this message and exit.

Examples:
  # ### example: align to tree mafft fasttree
  qiime phylogeny align-to-tree-mafft-fasttree \
    --i-sequences rep-seqs.qza \
    --o-alignment aligned-rep-seqs.qza \
    --o-masked-alignment masked-aligned-rep-seqs.qza \
    --o-tree unrooted-tree.qza \
    --o-rooted-tree rooted-tree.qza

 

$ qiime phylogeny align-to-tree-mafft-fasttree \
    --i-sequences rep-seqs-dada2.qza \
    --o-alignment aligned-rep-seqs.qza \
    --o-masked-alignment masked-aligned-rep-seqs.qza \
    --o-tree unrooted-tree.qza \
    --o-rooted-tree rooted-tree.qza

 

 

qiime feature-classifier classify-sklearn

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime feature-classifier classify-sklearn --help
Usage: qiime feature-classifier classify-sklearn [OPTIONS]

  Classify reads by taxon using a fitted classifier.

Inputs:
  --i-reads ARTIFACT FeatureData[Sequence]
                         The feature data to be classified.         [required]
  --i-classifier ARTIFACT
    TaxonomicClassifier  The taxonomic classifier for classifying the reads.
                                                                    [required]
Parameters:
  --p-reads-per-batch VALUE Int % Range(1, None) | Str % Choices('auto')
                         Number of reads to process in each batch. If "auto",
                         this parameter is autoscaled to min( number of query
                         sequences / n-jobs, 20000).         [default: 'auto']
  --p-n-jobs INTEGER     The maximum number of concurrent worker processes.
                         If -1 all CPUs are used. If 1 is given, no parallel
                         computing code is used at all, which is useful for
                         debugging. For n-jobs below -1, (n_cpus + 1 + n-jobs)
                         are used. Thus for n-jobs = -2, all CPUs but one are
                         used.                                    [default: 1]
  --p-pre-dispatch TEXT  "all" or expression, as in "3*n_jobs". The number of
                         batches (of tasks) to be pre-dispatched.
                                                         [default: '2*n_jobs']
  --p-confidence VALUE Float % Range(0, 1, inclusive_end=True) | Str %
    Choices('disable')   Confidence threshold for limiting taxonomic depth.
                         Set to "disable" to disable confidence calculation,
                         or 0 to calculate confidence but not apply it to
                         limit the taxonomic depth of the assignments.
                                                                [default: 0.7]
  --p-read-orientation TEXT Choices('same', 'reverse-complement', 'auto')
                         Direction of reads with respect to reference
                         sequences. same will cause reads to be classified
                         unchanged; reverse-complement will cause reads to be
                         reversed and complemented prior to classification.
                         "auto" will autodetect orientation based on the
                         confidence estimates for the first 100 reads.
                                                             [default: 'auto']
Outputs:
  --o-classification ARTIFACT FeatureData[Taxonomy]
                                                                    [required]
Miscellaneous:
  --output-dir PATH      Output unspecified results to a directory
  --verbose / --quiet    Display verbose output to stdout and/or stderr
                         during execution of this action. Or silence output if
                         execution is successful (silence is golden).
  --example-data PATH    Write example data and exit.
  --citations            Show citations and exit.
  --help                 Show this message and exit.

 

$ qiime feature-classifier classify-sklearn \
    --i-reads rep-seqs.qza \
    --i-classifier silva-138-99-515-806-nb-classifier.qza \
    --o-classification taxonomy.qza

 

 

qiime diversity core-metrics-phylogenetic

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime diversity core-metrics-phylogenetic --help
Usage: qiime diversity core-metrics-phylogenetic [OPTIONS]

  Applies a collection of diversity metrics (both phylogenetic and non-
  phylogenetic) to a feature table.

Inputs:
  --i-table ARTIFACT FeatureTable[Frequency]
                          The feature table containing the samples over which
                          diversity metrics should be computed.     [required]
  --i-phylogeny ARTIFACT  Phylogenetic tree containing tip identifiers that
    Phylogeny[Rooted]     correspond to the feature identifiers in the table.
                          This tree can contain tip ids that are not present
                          in the table, but all feature ids in the table must
                          be present in this tree.                  [required]
Parameters:
  --p-sampling-depth INTEGER
    Range(1, None)        The total frequency that each sample should be
                          rarefied to prior to computing diversity metrics.
                                                                    [required]
  --m-metadata-file METADATA...
    (multiple arguments   The sample metadata to use in the emperor plots.
     will be merged)                                                [required]
  --p-with-replacement / --p-no-with-replacement
                          Rarefy with replacement by sampling from the
                          multinomial distribution instead of rarefying
                          without replacement.                [default: False]
  --p-n-jobs-or-threads VALUE Int % Range(1, None) | Str % Choices('auto')
                          [beta/beta-phylogenetic methods only] - The number
                          of concurrent jobs or CPU threads to use in
                          performing this calculation. Individual methods will
                          create jobs/threads as implemented in
                          q2-diversity-lib dependencies. May not exceed the
                          number of available physical cores. If
                          n-jobs-or-threads = 'auto', one thread/job will be
                          created for each identified CPU core on the host.
                                                                  [default: 1]
Outputs:
  --o-rarefied-table ARTIFACT FeatureTable[Frequency]
                          The resulting rarefied feature table.     [required]
  --o-faith-pd-vector ARTIFACT SampleData[AlphaDiversity]
                          Vector of Faith PD values by sample.      [required]
  --o-observed-features-vector ARTIFACT SampleData[AlphaDiversity]
                          Vector of Observed Features values by sample.
                                                                    [required]
  --o-shannon-vector ARTIFACT SampleData[AlphaDiversity]
                          Vector of Shannon diversity values by sample.
                                                                    [required]
  --o-evenness-vector ARTIFACT SampleData[AlphaDiversity]
                          Vector of Pielou's evenness values by sample.
                                                                    [required]
  --o-unweighted-unifrac-distance-matrix ARTIFACT
    DistanceMatrix        Matrix of unweighted UniFrac distances between
                          pairs of samples.                         [required]
  --o-weighted-unifrac-distance-matrix ARTIFACT
    DistanceMatrix        Matrix of weighted UniFrac distances between pairs
                          of samples.                               [required]
  --o-jaccard-distance-matrix ARTIFACT
    DistanceMatrix        Matrix of Jaccard distances between pairs of
                          samples.                                  [required]
  --o-bray-curtis-distance-matrix ARTIFACT
    DistanceMatrix        Matrix of Bray-Curtis distances between pairs of
                          samples.                                  [required]
  --o-unweighted-unifrac-pcoa-results ARTIFACT
    PCoAResults           PCoA matrix computed from unweighted UniFrac
                          distances between samples.                [required]
  --o-weighted-unifrac-pcoa-results ARTIFACT
    PCoAResults           PCoA matrix computed from weighted UniFrac
                          distances between samples.                [required]
  --o-jaccard-pcoa-results ARTIFACT
    PCoAResults           PCoA matrix computed from Jaccard distances between
                          samples.                                  [required]
  --o-bray-curtis-pcoa-results ARTIFACT
    PCoAResults           PCoA matrix computed from Bray-Curtis distances
                          between samples.                          [required]
  --o-unweighted-unifrac-emperor VISUALIZATION
                          Emperor plot of the PCoA matrix computed from
                          unweighted UniFrac.                       [required]
  --o-weighted-unifrac-emperor VISUALIZATION
                          Emperor plot of the PCoA matrix computed from
                          weighted UniFrac.                         [required]
  --o-jaccard-emperor VISUALIZATION
                          Emperor plot of the PCoA matrix computed from
                          Jaccard.                                  [required]
  --o-bray-curtis-emperor VISUALIZATION
                          Emperor plot of the PCoA matrix computed from
                          Bray-Curtis.                              [required]
Miscellaneous:
  --output-dir PATH       Output unspecified results to a directory
  --verbose / --quiet     Display verbose output to stdout and/or stderr
                          during execution of this action. Or silence output
                          if execution is successful (silence is golden).
  --recycle-pool TEXT     Use a cache pool for pipeline resumption. QIIME 2
                          will cache your results in this pool for reuse by
                          future invocations. These pools are retained until
                          deleted by the user. If not provided, QIIME 2 will
                          create a pool which is automatically reused by
                          invocations of the same action and removed if the
                          action is successful. Note: these pools are local to
                          the cache you are using.
  --no-recycle            Do not recycle results from a previous failed
                          pipeline run or save the results from this run for
                          future recycling.
  --parallel              Execute your action in parallel. This flag will use
                          your default parallel config.
  --parallel-config FILE  Execute your action in parallel using a config at
                          the indicated path.
  --use-cache DIRECTORY   Specify the cache to be used for the intermediate
                          work of this pipeline. If not provided, the default
                          cache under $TMP/qiime2/<uname> will be used.
                          IMPORTANT FOR HPC USERS: If you are on an HPC system
                          and are using parallel execution it is important to
                          set this to a location that is globally accessible
                          to all nodes in the cluster.
  --example-data PATH     Write example data and exit.
  --citations             Show citations and exit.
  --help                  Show this message and exit.

 

$ qiime diversity core-metrics-phylogenetic \
    --i-phylogeny rooted-tree.qza \
    --i-table table.qza \
    --p-sampling-depth 27286 \
    --m-metadata-file sample-metadata.tsv \
    --output-dir core-metrics-results

 

 

qiime diversity alpha-group-significance

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime diversity alpha-group-significance --help
Usage: qiime diversity alpha-group-significance [OPTIONS]

  Visually and statistically compare groups of alpha diversity values.

Inputs:
  --i-alpha-diversity ARTIFACT SampleData[AlphaDiversity]
                       Vector of alpha diversity values by sample.  [required]
Parameters:
  --m-metadata-file METADATA...
    (multiple          The sample metadata.
     arguments will
     be merged)                                                     [required]
Outputs:
  --o-visualization VISUALIZATION
                                                                    [required]
Miscellaneous:
  --output-dir PATH    Output unspecified results to a directory
  --verbose / --quiet  Display verbose output to stdout and/or stderr during
                       execution of this action. Or silence output if
                       execution is successful (silence is golden).
  --example-data PATH  Write example data and exit.
  --citations          Show citations and exit.
  --help               Show this message and exit.

Examples:
  # ### example: alpha group significance faith pd
  qiime diversity alpha-group-significance \
    --i-alpha-diversity alpha-div-faith-pd.qza \
    --m-metadata-file metadata.tsv \
    --o-visualization visualization.qzv

 

$ qiime diversity alpha-group-significance \
    --i-alpha-diversity core-metrics-results/faith_pd_vector.qza \
    --m-metadata-file sample-metadata.tsv \
    --o-visualization core-metrics-results/faith-pd-group-significance.qzv

$ qiime diversity alpha-group-significance \
    --i-alpha-diversity core-metrics-results/evenness_vector.qza \
    --m-metadata-file sample-metadata.tsv \
    --o-visualization core-metrics-results/evenness-group-significance.qzv

 

 

qiime diversity beta-group-significance

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime diversity beta-group-significance --help
Usage: qiime diversity beta-group-significance [OPTIONS]

  Determine whether groups of samples are significantly different from one
  another using a permutation-based statistical test.

Inputs:
  --i-distance-matrix ARTIFACT
    DistanceMatrix     Matrix of distances between pairs of samples.
                                                                    [required]
Parameters:
  --m-metadata-file METADATA
  --m-metadata-column COLUMN  MetadataColumn[Categorical]
                       Categorical sample metadata column.          [required]
  --p-method TEXT Choices('permanova', 'anosim', 'permdisp')
                       The group significance test to be applied.
                                                        [default: 'permanova']
  --p-pairwise / --p-no-pairwise
                       Perform pairwise tests between all pairs of groups in
                       addition to the test across all groups. This can be
                       very slow if there are a lot of groups in the metadata
                       column.                                [default: False]
  --p-permutations INTEGER
                       The number of permutations to be run when computing
                       p-values.                                [default: 999]
Outputs:
  --o-visualization VISUALIZATION
                                                                    [required]
Miscellaneous:
  --output-dir PATH    Output unspecified results to a directory
  --verbose / --quiet  Display verbose output to stdout and/or stderr during
                       execution of this action. Or silence output if
                       execution is successful (silence is golden).
  --example-data PATH  Write example data and exit.
  --citations          Show citations and exit.
  --help               Show this message and exit.

 

$ qiime diversity beta-group-significance \
    --i-distance-matrix core-metrics-results/unweighted_unifrac_distance_matrix.qza \
    --m-metadata-file sample-metadata.tsv \
    --m-metadata-column Group \
    --o-visualization core-metrics-results/unweighted-unifrac-group-significance.qzv \
    --p-pairwise

 

 

qiime taxa filter-table

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime taxa filter-table --help
Usage: qiime taxa filter-table [OPTIONS]

  This method filters features from a table based on their taxonomic
  annotations. Features can be retained in the resulting table by specifying
  one or more include search terms, and can be filtered out of the resulting
  table by specifying one or more exclude search terms. If both include and
  exclude are provided, the inclusion criteria will be applied before the
  exclusion criteria. Either include or exclude terms (or both) must be
  provided. Any samples that have a total frequency of zero after filtering
  will be removed from the resulting table.

Inputs:
  --i-table ARTIFACT FeatureTable[Frequency]
                         Feature table to be filtered.              [required]
  --i-taxonomy ARTIFACT FeatureData[Taxonomy]
                         Taxonomic annotations for features in the provided
                         feature table. All features in the feature table must
                         have a corresponding taxonomic annotation. Taxonomic
                         annotations for features that are not present in the
                         feature table will be ignored.             [required]
Parameters:
  --p-include TEXT       One or more search terms that indicate which taxa
                         should be included in the resulting table. If
                         providing more than one term, terms should be
                         delimited by the query-delimiter character. By
                         default, all taxa will be included.        [optional]
  --p-exclude TEXT       One or more search terms that indicate which taxa
                         should be excluded from the resulting table. If
                         providing more than one term, terms should be
                         delimited by the query-delimiter character. By
                         default, no taxa will be excluded.         [optional]
  --p-query-delimiter TEXT
                         The string used to delimit multiple search terms
                         provided to include or exclude. This parameter should
                         only need to be modified if the default delimiter (a
                         comma) is used in the provided taxonomic annotations.
                                                                [default: ',']
  --p-mode TEXT Choices('exact', 'contains')
                         Mode for determining if a search term matches a
                         taxonomic annotation. "contains" requires that the
                         annotation has the term as a substring; "exact"
                         requires that the annotation is a perfect match to a
                         search term.                    [default: 'contains']
Outputs:
  --o-filtered-table ARTIFACT FeatureTable[Frequency]
                         The taxonomy-filtered feature table.       [required]
Miscellaneous:
  --output-dir PATH      Output unspecified results to a directory
  --verbose / --quiet    Display verbose output to stdout and/or stderr
                         during execution of this action. Or silence output if
                         execution is successful (silence is golden).
  --example-data PATH    Write example data and exit.
  --citations            Show citations and exit.
  --help                 Show this message and exit.

 

$ qiime taxa filter-table \
    --i-table filtered-sequences/table-with-phyla-no-mitochondria-chloroplast.qza \
    --i-taxonomy taxonomy.qza \
    --p-exclude "k__Archaea" \
    --o-filtered-table filtered-sequences/table-with-phyla-no-mitochondria-chloroplasts-archaea.qza

 

 

qiime tools export

(qiime2-2023.5) user@user-DT:~/qiime2$ qiime tools export --help
Usage: qiime tools export [OPTIONS]

  Exporting extracts (and optionally transforms) data stored inside an
  Artifact or Visualization. Note that Visualizations cannot be transformed
  with --output-format

Options:
  --input-path ARTIFACT/VISUALIZATION
                        Path to file that should be exported        [required]
  --output-path PATH    Path to file or directory where data should be
                        exported to                                 [required]
  --output-format TEXT  Format which the data should be exported as. This
                        option cannot be used with Visualizations
  --help                Show this message and exit.

 

$ qiime tools export \
    --input-path rep-seqs.qza \
    --output-path rep-seqs.fna

$ qiime tools export \
    --input-path table.qza \
    --output-path feature-table.biom
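The --output-format option mentioned above can pin the exported format explicitly. A hedged sketch (the format name BIOMV210Format comes from q2-types and is an assumption for a FeatureTable[Frequency] artifact):

$ qiime tools export \
    --input-path table.qza \
    --output-path feature-table.biom \
    --output-format BIOMV210Format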

 

 

 

 

 

 

 

 

 

Native conda installation

Installing Miniconda

https://docs.conda.io/en/latest/miniconda.html

 


Miniconda3 Linux 64-bit

$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
:
$ chmod +x Miniconda3-latest-Linux-x86_64.sh
$ ./Miniconda3-latest-Linux-x86_64.sh
:
Miniconda3 will now be installed into this location:
/home/user/miniconda3

  - Press ENTER to confirm the location
  - Press CTRL-C to abort the installation
  - Or specify a different location below

[/home/user/miniconda3] >>>
PREFIX=/home/user/miniconda3
Unpacking payload ...

Installing base environment...

Downloading and Extracting Packages
Downloading and Extracting Packages

Preparing transaction: done
Executing transaction: done
installation finished.
Do you wish the installer to initialize Miniconda3
by running conda init? [yes|no]
[no] >>> yes
no change     /home/user/miniconda3/condabin/conda
no change     /home/user/miniconda3/bin/conda
no change     /home/user/miniconda3/bin/conda-env
no change     /home/user/miniconda3/bin/activate
no change     /home/user/miniconda3/bin/deactivate
no change     /home/user/miniconda3/etc/profile.d/conda.sh
no change     /home/user/miniconda3/etc/fish/conf.d/conda.fish
no change     /home/user/miniconda3/shell/condabin/Conda.psm1
no change     /home/user/miniconda3/shell/condabin/conda-hook.ps1
no change     /home/user/miniconda3/lib/python3.11/site-packages/xontrib/conda.xsh
no change     /home/user/miniconda3/etc/profile.d/conda.csh
modified      /home/user/.bashrc

==> For changes to take effect, close and re-open your current shell. <==

If you'd prefer that conda's base environment not be activated on startup,
   set the auto_activate_base parameter to false:

conda config --set auto_activate_base false

Thank you for installing Miniconda3!

Updating Miniconda

$ conda update conda

Installing wget

$ conda install wget

 

Install QIIME 2 within a conda environment

These instructions are identical to the Linux instructions and are intended for users of the Windows Subsystem for Linux.

$ wget https://data.qiime2.org/distro/core/qiime2-2023.5-py38-linux-conda.yml
$ conda env create -n qiime2-2023.5 --file qiime2-2023.5-py38-linux-conda.yml

OPTIONAL CLEANUP

$ rm qiime2-2023.5-py38-linux-conda.yml

 

Activate the conda environment

$ conda activate qiime2-2023.5
:
$ conda deactivate

Test your installation

$ qiime --help

 

 

How do I update to the newest version of QIIME 2?

In order to update/upgrade to the newest release, you simply install the newest version in a new conda environment by following the instructions above. Then you will have two conda environments, one with the older version of QIIME 2 and one with the newer version.

 

 

※ To stop the conda base environment from being activated by default every time a new terminal is opened:

$ conda config --set auto_activate_base false


https://docs.qiime2.org/2023.5/concepts/#

 


Data files: QIIME 2 artifacts

Data produced by QIIME 2 exists as QIIME 2 artifacts.

A QIIME 2 artifact contains data and metadata.

The metadata describes things about the data, such as its type, format and how it was generated (provenance).

A QIIME 2 artifact typically has the .qza file extension when stored in a file.

 

Since QIIME 2 works with artifacts instead of data files (e.g. FASTA files),

you must create a QIIME 2 artifact by importing data.

You can import data at any step in an analysis, though typically you will start by importing raw sequence data.
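As a hedged sketch of what importing might look like (the directory name and the Casava layout are assumptions; the semantic type and --input-format must match your data):

$ qiime tools import \
    --type 'SampleData[PairedEndSequencesWithQuality]' \
    --input-path casava-reads/ \
    --input-format CasavaOneEightSingleLanePerSampleDirFmt \
    --output-path demux-paired.qza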

QIIME 2 also has tools to export data from an artifact.

 

By using QIIME 2 artifacts instead of simple data files, QIIME 2 can automatically track the type, format, and provenance of data for researchers. Using artifacts instead of data files enables researchers to focus on the analyses they want to perform, instead of the particular format the data needs to be in for an analysis.

 

Artifacts enable QIIME 2 to track, in addition to the data itself, the provenance of how the data came to be.

With an artifact's provenance, you can trace back to all previous analyses that were run to produce the artifact,
including the input data used at each step.

This automatic, integrated, and decentralized provenance tracking of data enables a researcher to archive artifacts, or for example, send an artifact to a collaborator, with the ability to understand exactly how the artifact was created.

This enables replicability and reproducibility of analyses, as well as generation of diagrams and text that can be used in the methods section of a paper.

Provenance also supports and encourages the proper attribution of underlying tools (e.g. FastTree to build a phylogenetic tree) used to generate the artifact.

 

Data files: Visualizations

Visualizations are another type of data generated by QIIME 2.

When written to disk, visualization files typically have the .qzv file extension.

Visualizations contain similar types of metadata as QIIME 2 artifacts, including provenance information.

Similar to QIIME 2 artifacts, visualizations are standalone information that can be archived or shared with collaborators.

 

In contrast to QIIME 2 artifacts, visualizations are terminal outputs of an analysis, and can represent, for example, a statistical results table, an interactive visualization, static images, or really any combination of visual data representations.

Since visualizations are terminal outputs, they cannot be used as input to other analyses in QIIME 2.

 

Tip
Use https://view.qiime2.org to easily view QIIME 2 artifacts and visualization files (generally .qza and .qzv files) without requiring a QIIME 2 installation. This is helpful for sharing QIIME 2 data with collaborators who may not have QIIME 2 installed. https://view.qiime2.org also supports viewing data provenance.

 

Semantic types

Every artifact generated by QIIME 2 has a semantic type associated with it.

Semantic types enable QIIME 2 to identify artifacts that are suitable inputs to an analysis.

For example, if an analysis expects a distance matrix as input,
QIIME 2 can determine which artifacts have a distance matrix semantic type
and prevent incompatible artifacts from being used in the analysis
(e.g. an artifact representing a phylogenetic tree).

 

Semantic types also help users avoid semantically incorrect analyses.

For example, a feature table could contain presence/absence data
(i.e., a 1 to indicate that an OTU was observed at least one time in a given sample,
and a 0 to indicate that an OTU was not observed at least one time in a given sample).

However, if that feature table were provided to an analysis computing a quantitative diversity metric
where OTU abundances are included in the calculation (e.g., weighted UniFrac),
the analysis would complete successfully, but the result would not be meaningful.

 

Check out the semantic types page for more information about semantic types and what types are currently available.

 

Plugins

QIIME 2 microbiome analyses are made available to users via plugins.

To perform analyses with QIIME 2, you will install one or more plugins
that provide the specific analyses you are interested in.

For example, if you want to demultiplex your raw sequence data, you might use the q2-demux QIIME 2 plugin,
or if you want to perform alpha- or beta-diversity analyses, you could use the q2-diversity plugin.

 

Plugins are software packages that can be developed by anyone.

The QIIME 2 team has developed several plugins for an initial end-to-end microbiome analysis pipeline,
but third-party developers are encouraged to create their own plugins to provide additional analyses.

Third-party developers will define these plugins in the same way that the QIIME 2 team has defined the "official" plugins.

This decentralized development of microbiome analysis functionality means
that many more analyses and tools will be accessible to QIIME 2 users, including the latest techniques and protocols.

Plugins also allow users to choose and customize analysis pipelines for their specific needs.

 

Check out the plugin availability page to see what plugins are currently available
and the future plugins page for those that are being developed.

 

Methods and Visualizers

QIIME 2 plugins define methods and visualizers that are used to perform analyses.

 

A method accepts some combination of QIIME 2 artifacts and parameters as input,
and produces one or more QIIME 2 artifacts as output.

These output artifacts could subsequently be used as input to other QIIME 2 methods or visualizers.

Methods can produce intermediate or terminal outputs in a QIIME 2 analysis.

For example, the rarefy method defined in the q2-feature-table plugin
accepts a feature table artifact and sampling depth as input
and produces a rarefied feature table artifact as output.

This rarefied feature table artifact could then be used in another analysis,
such as alpha diversity calculations provided by the alpha method in q2-diversity.
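As a concrete sketch of that rarefy step on the command line (the file names and the depth of 1000 are placeholders):

$ qiime feature-table rarefy \
    --i-table table.qza \
    --p-sampling-depth 1000 \
    --o-rarefied-table rarefied-table.qza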

 

A visualizer is similar to a method in that it accepts some combination of QIIME 2 artifacts and parameters as input.

In contrast to a method, a visualizer produces exactly one visualization as output.

Visualizations, by definition, cannot be used as input to other QIIME 2 methods or visualizers.

Thus, visualizers can only produce terminal output in a QIIME 2 analysis.

https://github.com/tfussell/xlnt

 

xlnt: a cross-platform, user-friendly xlsx library for C++11+.

 

Windows

> cd vcpkg
> ./vcpkg install xlnt:x64-windows

Linux

$ git clone https://github.com/microsoft/vcpkg.git
$ cd vcpkg
$ ./bootstrap-vcpkg.sh
$ sudo ./vcpkg integrate install
$ ./vcpkg install xlnt
...
-- Configuring x64-linux
-- Building x64-linux-dbg
-- Building x64-linux-rel
-- Installing: /home/jym/dev/vcpkg/packages/xlnt_x64-linux/share/xlnt/copyright
-- Fixing pkgconfig file: /home/jym/dev/vcpkg/packages/xlnt_x64-linux/lib/pkgconfig/xlnt.pc
-- Fixing pkgconfig file: /home/jym/dev/vcpkg/packages/xlnt_x64-linux/debug/lib/pkgconfig/xlnt.pc
...
xlnt provides CMake targets:

    # this is heuristically generated, and may not be correct
    find_package(Xlnt CONFIG REQUIRED)
    target_link_libraries(main PRIVATE xlnt::xlnt)

 

Specify the toolchain as a CMake option:

-DCMAKE_TOOLCHAIN_FILE=/home/jym/dev/vcpkg/scripts/buildsystems/vcpkg.cmake

But this won't work if you already specify a toolchain, such as when cross-compiling.

To avoid this problem, include it.

include(/home/jym/dev/vcpkg/scripts/buildsystems/vcpkg.cmake)

 

Example: CMakeLists.txt

cmake_minimum_required(VERSION 3.16)
project(test-xlnt CXX)

set(CMAKE_VERBOSE_MAKEFILE true)
set(CMAKE_CXX_STANDARD 14)

include(/home/jym/dev/vcpkg/scripts/buildsystems/vcpkg.cmake)

list(APPEND CMAKE_PREFIX_PATH "/home/jym/dev/vcpkg/installed/x64-linux/share")

find_package(Xlnt CONFIG REQUIRED)

message("XLNT_CMAKE_DIR:   ${XLNT_CMAKE_DIR}")
message("xlnt_INCLUDE_DIR: ${xlnt_INCLUDE_DIR}")

set(SRC_FILES main.cpp)

add_executable(${PROJECT_NAME} ${SRC_FILES})

target_include_directories(${PROJECT_NAME} PRIVATE
    ${xlnt_INCLUDE_DIR})
target_link_libraries(${PROJECT_NAME} PRIVATE
    xlnt::xlnt)
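
For completeness, a minimal main.cpp to smoke-test the build (essentially the example from the xlnt README; the output file name is arbitrary):

#include <xlnt/xlnt.hpp>

int main()
{
    // Create a workbook, write a few cells, and save it to disk.
    xlnt::workbook wb;
    xlnt::worksheet ws = wb.active_sheet();
    ws.cell("A1").value(5);              // a number
    ws.cell("B2").value("string data");  // a string
    ws.cell("C3").formula("=RAND()");    // a formula
    wb.save("sample.xlsx");
    return 0;
}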

 

An alternative approach:

Findxlnt.cmake

# Findxlnt.cmake
#
# Finds the xlnt library
#
# This will define the following variables
#
#   xlnt_FOUND
#   xlnt_LIBRARY
#   xlnt_LIBRARIES
#   xlnt_LIBRARY_DEBUG
#   xlnt_LIBRARY_RELEASE
#
# and the following imported targets
#
#   xlnt::xlnt
#
# Author: John Coffey - johnco3@gmail.com
#

find_path(xlnt_INCLUDE_DIR NAMES xlnt/xlnt.hpp)

if (NOT xlnt_LIBRARIES)
    find_library(xlnt_LIBRARY_RELEASE NAMES xlnt DOC "xlnt release library")
    find_library(xlnt_LIBRARY_DEBUG NAMES xlntd DOC "xlnt debug library")
    include(SelectLibraryConfigurations)
    select_library_configurations(xlnt)
endif()

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(xlnt
    REQUIRED_VARS xlnt_INCLUDE_DIR xlnt_LIBRARY)
mark_as_advanced(
    xlnt_INCLUDE_DIR
    xlnt_LIBRARY)

if(xlnt_FOUND AND NOT (TARGET xlnt::xlnt))
    # debug output showing the located libraries
    message(STATUS "xlnt_INCLUDE_DIR=${xlnt_INCLUDE_DIR}")
    message(STATUS "xlnt_LIBRARY=${xlnt_LIBRARY}")
    message(STATUS "xlnt_LIBRARIES=${xlnt_LIBRARIES}")
    message(STATUS "xlnt_LIBRARY_DEBUG=${xlnt_LIBRARY_DEBUG}")
    message(STATUS "xlnt_LIBRARY_RELEASE=${xlnt_LIBRARY_RELEASE}")
    # Add a blank imported library
    add_library(xlnt::xlnt UNKNOWN IMPORTED)

    # add the transitive includes property
    set_target_properties(xlnt::xlnt PROPERTIES
        INTERFACE_INCLUDE_DIRECTORIES "${xlnt_INCLUDE_DIR}")

    # Optimized library
    if(xlnt_LIBRARY_RELEASE)
        set_property(TARGET xlnt::xlnt APPEND PROPERTY
            IMPORTED_CONFIGURATIONS RELEASE)
        set_target_properties(xlnt::xlnt PROPERTIES
            IMPORTED_LOCATION_RELEASE "${xlnt_LIBRARY_RELEASE}")
    endif()

    # Debug library
    if(xlnt_LIBRARY_DEBUG)
        set_property(TARGET xlnt::xlnt APPEND PROPERTY
            IMPORTED_CONFIGURATIONS DEBUG)
        set_target_properties(xlnt::xlnt PROPERTIES
            IMPORTED_LOCATION_DEBUG "${xlnt_LIBRARY_DEBUG}")
    endif()

    # some other configuration
    if(NOT xlnt_LIBRARY_RELEASE AND NOT xlnt_LIBRARY_DEBUG)
        set_property(TARGET xlnt::xlnt APPEND PROPERTY
            IMPORTED_LOCATION "${xlnt_LIBRARY}")
    endif()
endif()

Modify CMakeLists.txt:

...
# For finding Findxlnt.cmake
list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/../cmake")
find_package(xlnt REQUIRED)

message("xlnt_INCLUDE_DIR: ${xlnt_INCLUDE_DIR}")
message("xlnt_LIBRARY:     ${xlnt_LIBRARY}")
message("xlnt_LIBRARIES:   ${xlnt_LIBRARIES}")
...

 

 

 


Shared libraries can be referenced via the LD_LIBRARY_PATH environment variable:

 

$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/nvidia/deepstream/deepstream-6.1/lib

This setting disappears when the terminal is closed.

Therefore, the "~/.bashrc" file must be edited directly.

 

.bashrc

A per-user configuration file that controls aliases and the functions executed when bash starts.

It is sourced before any other program runs in the shell session.

Put the LD_LIBRARY_PATH export statement in a suitable place in this file:

 

$ nano ~/.bashrc
...
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/nvidia/deepstream/deepstream-6.1/lib
...
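
To apply the change to the current shell without reopening the terminal:

$ source ~/.bashrc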

 

ldconfig

The default configuration file is /etc/ld.so.conf.

Its contents are as follows:

include /etc/ld.so.conf.d/*.conf

To add a search path, create a file containing that path inside the ld.so.conf.d directory.
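
For example (the file name deepstream.conf is arbitrary), then rebuild the cache with ldconfig:

$ echo "/opt/nvidia/deepstream/deepstream-6.1/lib" | sudo tee /etc/ld.so.conf.d/deepstream.conf
$ sudo ldconfig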

 


Confusion Matrix

Prediction \ Ground-Truth | True                           | False
--------------------------+--------------------------------+------------------------------
Positive                  | True Positive (TP)             | False Positive (FP)
                          | Correct Hit                    | False Alarm (Type I Error)
Negative                  | False Negative (FN)            | True Negative (TN)
                          | Missing Result (Type II Error) | Correct Rejection

Type I Error: False Alarm

e.g., predicting that a used car is good and buying it, when the car is actually bad.

 

Type II Error: Missing Result

e.g., classifying a cancer patient as healthy.

 

1. Accuracy

The proportion of all predictions that are correct.

Of all classified cases, the fraction classified correctly (both True and False correctly classified).

 

Accuracy = (TP + TN) / (TP + TN + FP + FN)

         = (TP + TN) / (All Data)

 

Accuracy Paradox:

When predicting rare outcomes (the real data is overwhelmingly Negative),

Accuracy comes out high even when TP is low, as long as TN is high. => Evaluate with Recall instead.

Example: heavy-snowfall prediction; even if every result is a TN, Accuracy is high.
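
A hypothetical count makes this concrete: over 1000 days with only 10 snow days, a model that always predicts "no snow" scores Accuracy = 990/1000 = 0.99 while Recall = 0/10 = 0.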

 

2. Recall

How well does the model find the actual objects?

Of the cases whose ground truth is True (GT = True), the fraction the classifier predicted as TP.

(Used when True outcomes occur rarely.)

Of the cases that should be classified as positive, the fraction classified correctly.

 

Recall = TP / (TP + FN) = TP / (All Ground-Truths)

Drawback: the reverse of the Accuracy paradox. Recall comes out high as long as the TPs are found, even when TN is low, because false positives are not penalized.

 

3. Precision

Are the detected objects actually correct?

Of the cases predicted Positive (TP + FP), the fraction that are actually TP.

Of the cases the classifier predicted as positive, the fraction that are actually correct.

 

Precision = TP / (TP + FP) = TP / (All Detections)

Drawback: the mirror image of Recall's. Precision can stay high even when many true objects are missed (high FN), because false negatives are not penalized.

 

Reference: https://darkpgmr.tistory.com/162

 


Note that in measurement contexts, accuracy and precision mean different things:

                 Accuracy                                  | Precision
-------------------------------------------+---------------------------------
Formula:         (TP + TN) / (TP + TN + FP + FN)           | TP / (TP + FP)
Meaning:         How close the results are to the truth    | How consistent the outputs are
Corresponds to:  The system's bias                         | Repeatability (repeat precision)

Example: measuring the weight of a person who actually weighs 50 kg. If the readings come out

near 60 kg (60, 60.12, 59.99, ...), the scale's accuracy is very low but its precision is high.

 

4. F1-score (harmonic mean)

 

F1-score = 2 x (Precision x Recall) / (Precision + Recall)
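
A worked example with hypothetical counts (TP = 8, FN = 2, FP = 4, TN = 86; 100 cases total):

Accuracy  = (8 + 86) / 100 = 0.94
Recall    = 8 / (8 + 2)    = 0.80
Precision = 8 / (8 + 4)    ≈ 0.67
F1-score  = 2 x (0.67 x 0.80) / (0.67 + 0.80) ≈ 0.73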

 

5. IoU (Intersection over Union)

 

IoU = TP / (TP + FP + FN)

       = Area(GT ∩ Prediction) / Area(GT ∪ Prediction)

A threshold of 0.5 is commonly used.
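
A worked example with hypothetical boxes: if the ground-truth and predicted boxes each cover 100 px² and their intersection is 60 px², the union is 100 + 100 - 60 = 140 px², so IoU = 60/140 ≈ 0.43, below the usual 0.5 threshold, and the detection would be rejected.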

 

6. AP (Average Precision)

The approximate area under the PR (precision-recall) curve.

PR curve: evaluates a detector's performance as the threshold on the confidence level is varied.

Reference:  https://bskyvision.com/465

 


 

7. mAP (mean Average Precision)

The mean of the per-class APs when there are multiple classes.

Reference: https://github.com/Cartucho/mAP

 


 

 

 

 

 

 

 

 


Reference: https://www.baeldung.com/linux/ffmpeg-cutting-videos

1. Clipping with Re-Encoding

Video encoding is the process of compressing and preparing a video for output,

thus making video sizes reasonably small and quick to process.

By default, the re-encoding will use the codec used in the original video.

$ ffmpeg -i input.mp4 -ss 00:00:15 -t 00:00:10 -async 1 output.mp4
  • -i input.mp4 : used for specifying input files
  • -ss 00:00:15 : seeks to the timestamp specified
  • -t 00:00:10 : used to specify the duration of the clip
  • -async 1 : specifies whether to contract or stretch the audio to match the timestamp.
    The value 1 will correct the start of the stream without any later correction.

Alternatively, if we need a more time-accurate cut,

we can manually add the keyframes to the start and end of the clipped video:

$ ffmpeg -i input.mp4 -force_key_frames 00:00:15,00:00:25 output.mp4
  • -force_key_frames : video clipping occurs at keyframes
    However, if the first frame is not a keyframe,
    then the frames before the first keyframe in the clip will not be playable.
    Therefore, we forced FFmpeg to add keyframes at the first and last frames to ensure we encode a perfect clip.
    Moreover, to limit errors, we should avoid adding lots of keyframes.

 

2. Clipping Instantly via Stream Copy

FFmpeg allows for copying the codec from the original video to the trimmed video,
which takes only a few seconds.

$ ffmpeg -i input.mp4 -ss 00:00:15 -to 00:00:25 -c copy output.mp4
  • -to : specifies the end of the clip. (from 00:00:15 to 00:00:25)
  • -c : to copy both audio and video codecs to the output.mp4 container

If we're using different containers,

we'll be presented with a container mismatch error.

If we have two different containers, we can specify the copying options separately.

$ ffmpeg -i input.mkv -ss 00:00:15 -to 00:00:25 -acodec copy -vcodec copy output.mp4

 

3. Clipping Using the trim Filter

It's useful when we have a short video, preferably less than a minute,

and we want to cut a small portion of it:

$ ffmpeg -i input -vf trim=10:25,setpts=PTS-STARTPTS output.mp4
  • -vf : specifies that we're using a video filter
  • We provided the trim filter with the value 10:25,
    which will slice the video from 00:00:10 to 00:00:25.
  • The setpts filter sets the presentation timestamp for each video frame in the clip.
    We set its value to be PTS-STARTPTS to make sure our clip doesn't delay or halt at the start,
    and the frames are synchronized relative to the setpts value, which is 0.
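
If the input also has an audio stream, it can be trimmed the same way with the audio counterparts of these filters (atrim and asetpts):

$ ffmpeg -i input.mp4 -vf trim=10:25,setpts=PTS-STARTPTS -af atrim=10:25,asetpts=PTS-STARTPTS output.mp4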

 

 

 


CUDA Support for WSL 2

https://docs.nvidia.com/cuda/wsl-user-guide/index.html#cuda-support-for-wsl2

 


The latest NVIDIA Windows GPU Driver fully supports WSL 2.

With CUDA support in the driver, existing applications (compiled elsewhere on a Linux system for the same target GPU)

can run unmodified within the WSL environment.

 

To compile new CUDA applications, a CUDA Toolkit for Linux x86 is needed.

CUDA Toolkit support for WSL is still in preview stage as developer tools such as profilers are not available yet.

However, CUDA application development is fully supported in the WSL 2 environment; as a result, users should be able to compile new CUDA Linux applications with the latest CUDA Toolkit for x86 Linux.

 

Once a Windows NVIDIA GPU driver is installed on the system, CUDA becomes available within WSL 2.

The CUDA driver installed on Windows host will be stubbed inside the WSL 2 as libcuda.so,

therefore users must not install any NVIDIA GPU Linux driver within WSL 2.

One has to be very careful here as the default CUDA Toolkit comes packaged with a driver,

and it is easy to overwrite the WSL 2 NVIDIA driver with the default installation.

We recommend developers to use a separate CUDA Toolkit for WSL 2 (Ubuntu) available here to avoid this overwriting.

 

https://docs.nvidia.com/cuda/archive/11.7.1/cuda-installation-guide-linux/index.html#wsl-installation

 


 

0. Remove CUDA files

$ sudo apt-get remove --purge '^nvidia-.*'
 
$ sudo apt-get remove --purge 'cuda*'
$ sudo apt-get autoremove --purge 'cuda*'
 
$ sudo rm -rf /usr/local/cuda
$ sudo rm -rf /usr/local/cuda-#.#

 

1. Prepare WSL

1.1. Remove Outdated Signing key:

$ sudo apt-key del 7fa2af80

 

2. Local Repo Installation for WSL

2.1. Install local repository on file system:

$ wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
$ sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600

$ wget https://developer.download.nvidia.com/compute/cuda/11.7.1/local_installers/cuda-repo-wsl-ubuntu-11-7-local_11.7.1-1_amd64.deb
$ sudo dpkg -i cuda-repo-wsl-ubuntu-11-7-local_11.7.1-1_amd64.deb

2.2. Enroll ephemeral public GPG key:

$ sudo cp /var/cuda-repo-wsl-ubuntu-11-7-local/cuda-96193861-keyring.gpg /usr/share/keyrings/

 

3. Network Repo Installation for WSL

The new GPG public key for the CUDA repository (Debian-base distros) is 3bf863cc.

(https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/3bf863cc.pub)

This must be enrolled on the system, either using the cuda-keyring package or manually;

the apt-key command is deprecated and not recommended.

3.1. Install the new cuda-keyring package:

$ wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.0-1_all.deb
$ sudo dpkg -i cuda-keyring_1.0-1_all.deb

 

4. Common Installation Instructions for WSL

These instructions apply to both local and network installation for WSL.

4.1. Update the Apt repository cache:

$ sudo apt-get update

4.2. Install the CUDA SDK. Under WSL 2, install only the toolkit meta-package (see the note below):

$ sudo apt-get -y install cuda-toolkit-11-7

The installation instructions for the CUDA Toolkit can be found in the CUDA Toolkit download page for each installer.

But DO NOT choose the "cuda", "cuda-11-8", "cuda-drivers" meta-packages under WSL 2

as these packages will result in an attempt to install the Linux NVIDIA driver under WSL 2.

Install the cuda-toolkit-11-x metapackage only.

 

4.3. Perform the post-installation actions.

The post-installation actions must be manually performed.

These actions are split into mandatory, recommended, and optional sections.

 

5. Post-installation Actions

5.1. Mandatory Actions

Some actions must be taken after the installation before the CUDA Toolkit and Driver can be used.

5.1.1. Environment Setup

The PATH variable needs to include /usr/local/cuda-11.7/bin.

Note that Nsight Compute has moved to /opt/nvidia/nsight-compute/ in the rpm/deb installation method;

when using the .run installer, it is still located under /usr/local/cuda-11.7/.

 

To add this path to the PATH variable:

$ export PATH=/usr/local/cuda-11.7/bin${PATH:+:${PATH}}

In addition, when using the runfile installation method,

the LD_LIBRARY_PATH variable needs to contain

  • /usr/local/cuda-11.7/lib64 on a 64-bit system, or
  • /usr/local/cuda-11.7/lib on a 32-bit system

To change the environment variables for 64-bit operating systems:

$ export LD_LIBRARY_PATH=/usr/local/cuda-11.7/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
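These exports last only for the current shell session. One common way to make them permanent (a sketch, assuming bash) is to append them to ~/.bashrc and then verify that nvcc resolves:

$ echo 'export PATH=/usr/local/cuda-11.7/bin${PATH:+:${PATH}}' >> ~/.bashrc
$ echo 'export LD_LIBRARY_PATH=/usr/local/cuda-11.7/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc
$ source ~/.bashrc
$ nvcc --version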

5.1.2. POWER9 Setup

Because of the addition of new features specific to the NVIDIA POWER9 CUDA driver,

there are some additional setup requirements in order for the driver to function properly.

These additional steps are not handled by the installation of CUDA packages,

and failure to ensure these extra requirements are met will result in a non-functional CUDA driver installation.

 

There are two changes that need to be made manually after installing the NVIDIA CUDA driver to ensure proper operation:

1. The NVIDIA Persistence Daemon should be automatically started for POWER9 installations.

    Check that it is running with the following command:

$ sudo systemctl status nvidia-persistenced
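If the daemon is not running, it can usually be started and enabled at boot via systemd (a sketch; the unit name assumes NVIDIA's standard packaging):

$ sudo systemctl enable --now nvidia-persistenced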

 

2. Disable a udev rule installed by default in some Linux distributions

that causes hot-pluggable memory to be automatically onlined when it is physically probed.

This behavior prevents NVIDIA software from bringing NVIDIA device memory online with non-default settings.

This udev rule must be disabled in order for the NVIDIA CUDA driver to function properly on POWER9 systems.

 

 

 

 

 


Reference:

https://www.it-note.kr/173

 

stat(2) - a function that retrieves file status and information

 

#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <time.h>
#include <stdbool.h>    /* for bool in C */


#if 0
struct stat {
    __dev_t     st_dev;     /* Device. */
    __ino_t     st_ino;     /* File serial number */
    __nlink_t   st_nlink;   /* Link count. */
    __mode_t    st_mode;    /* File mode. */
    __uid_t     st_uid;     /* User ID of the file's owner. */
    __gid_t     st_gid;     /* Group ID of the file's group. */
    __dev_t     st_rdev;    /* Device number, if device. */
    __off_t     st_size;    /* Size of file, in bytes. */
    __blksize_t st_blksize; /* Optimal block size for I/O */
    __blkcnt_t  st_blocks;  /* Number of 512-byte blocks allocated. */
    /**
     * Nanosecond resolution timestamps are stored in a format
     * equivalent to 'struct timespec'.
     * This is the type used whenever possible,
     * but the Unix namespace rules do not allow the identifier 'timespec'
     * to appear in the <sys/stat.h> header.
     * Therefore we have to handle the use of this header
     * in strictly standard-compliant sources specially.
     */
    time_t      st_atime;   /* time of last access */
    time_t      st_mtime;   /* time of last modification */
    time_t      st_ctime;   /* time of last status change */
};

int stat(const char *path, struct stat *buf);
#endif


/* Returns true if 'path' exists (regular file, directory, or otherwise). */
static inline bool fs_file_exists(const char *path)
{
    struct stat buffer;
    return stat(path, &buffer) == 0;
}

int example(const char *path)
{
    struct stat sb;
    
    if (stat(path, &sb) == -1)
    {
        // Error; the detailed error code is stored in the errno global variable
        perror("stat");
        return errno;
    }
    
    switch (sb.st_mode & S_IFMT)
    {
    case S_IFBLK:  printf("block device\n");     break;
    case S_IFCHR:  printf("character device\n"); break;
    case S_IFDIR:  printf("directory\n");        break;
    case S_IFIFO:  printf("FIFO/pipe\n");        break;
    case S_IFLNK:  printf("symlink\n");          break;
    case S_IFREG:  printf("regular file\n");     break;
    case S_IFSOCK: printf("socket\n");           break;
    default:       printf("unknown?\n");         break;
    }
    
    printf("I-node number:            %ld\n", (long) sb.st_ino);
    printf("Mode:                     %lo (octal)\n", (unsigned long) sb.st_mode);
    printf("Link count:               %ld\n", (long) sb.st_nlink);
    printf("Ownership:                UID=%ld   GID=%ld\n", (long) sb.st_uid, (long) sb.st_gid);
    printf("Preferred I/O block size: %ld bytes\n",         (long) sb.st_blksize);
    printf("File size:                %lld bytes\n",        (long long) sb.st_size);
    printf("Blocks allocated:         %lld\n",              (long long) sb.st_blocks);
    printf("Last status change:       %s", ctime(&sb.st_ctime));
    printf("Last file access:         %s", ctime(&sb.st_atime));
    printf("Last file modification:   %s", ctime(&sb.st_mtime));
    
    return 0;
}
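To try the example, add a small main() that forwards a command-line argument to example(), then build and run it as usual (stat_example.c is a hypothetical file name):

$ gcc -Wall -o stat_example stat_example.c
$ ./stat_example /etc/hostname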

 

#include <iostream>
#include <string>
#include <sys/stat.h>   // stat
#include <errno.h>      // errno, ENOENT, EEXIST
#if defined(_WIN32)
#   include <direct.h>  // _mkdir
#endif


bool fs_file_exists(const std::string& path)
{
#if defined(_WIN32)
    struct _stat info;
    return !_stat(path.c_str(), &info);
#else
    struct stat info;
    return !stat(path.c_str(), &info);
#endif
}

bool fs_dir_exists(const std::string& path)
{
#if defined(_WIN32)
    struct _stat info;
    return _stat(path.c_str(), &info)
        ? false
        : (info.st_mode & _S_IFDIR) != 0;
#else
    struct stat info;
    return stat(path.c_str(), &info)
        ? false
        : (info.st_mode & S_IFDIR) != 0;
#endif
}

// Create the directory at 'path', creating any missing parent directories (like `mkdir -p`).
bool fs_mkdir(const std::string& path)
{
#if defined(_WIN32)
    int rv = _mkdir(path.c_str());
#else
    mode_t mode = 0755;
    int rv = mkdir(path.c_str(), mode);
#endif

    if (rv)
    {
        switch (errno)
        {
        case ENOENT:
            // parent didn't exist, try to create it
            {
                std::string::size_type pos = path.find_last_of('/');  // not int: npos does not fit
                if (pos == std::string::npos)
#if defined(_WIN32)
                    pos = path.find_last_of('\\');
                if (pos == std::string::npos)
#endif
                    return false;
                
                if (!fs_mkdir(path.substr(0, pos)))
                    return false;
            }
            // now, try to create again
#if defined(_WIN32)
            return !_mkdir(path.c_str());
#else
            return !mkdir(path.c_str(), mode);
#endif
        case EEXIST:
            // done!
            return fs_dir_exists(path);

        default:
            return false;
        }
    }

    return true;
}
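The helpers above have no main(), so a syntax-only compile is a cheap sanity check (fs_util.cpp is a hypothetical file name). Note that fs_mkdir is essentially the in-process equivalent of the shell's mkdir -p:

$ g++ -Wall -c fs_util.cpp
$ mkdir -p some/nested/dir    # shell equivalent of fs_mkdir("some/nested/dir")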


DeepStream 6.1.1 is the release with support for Ubuntu 20.04 LTS.

 

Install "Ubuntu 20.04.5 LTS" from the Microsoft Store.

Note that WSL installs into the user's profile directory by default:

(C:\Users\user\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu20.04LTS_79rhkp1fndgsc\LocalState\ext4.vhdx)

 

Map the WSL filesystem as a network drive in Windows (it is exposed at \\wsl$).

Install DeepStream

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html

 

Quickstart Guide — DeepStream 6.1.1 Release documentation


dGPU Setup for Ubuntu


NOTE:

This document uses the term dGPU (“discrete GPU”) to refer to NVIDIA GPU expansion card products such as NVIDIA Tesla® T4 , NVIDIA GeForce® GTX 1080, NVIDIA GeForce® RTX 2080 and NVIDIA GeForce® RTX 3080. This version of DeepStream SDK runs on specific dGPU products on x86_64 platforms supported by NVIDIA driver 515.65.01 and NVIDIA TensorRT™ 8.4.1.5 and later versions.

 

You must install the following components:

  • GStreamer 1.16.2
  • NVIDIA driver 515.65.01
  • CUDA 11.7 update 1
  • TensorRT 8.4.1.5

Remove all previous DeepStream installations

To remove DeepStream 4.0 or later installations:

  1. Open the uninstall.sh file in /opt/nvidia/deepstream/deepstream/
  2. Set PREV_DS_VER to 4.0 (see the sketch below)
  3. Run the script:
$ sudo ./uninstall.sh
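One way to perform these steps non-interactively (a sketch; it assumes the version variable is assigned at the top of uninstall.sh as PREV_DS_VER=...):

$ cd /opt/nvidia/deepstream/deepstream/
$ sudo sed -i 's/^PREV_DS_VER=.*/PREV_DS_VER=4.0/' uninstall.sh
$ sudo ./uninstall.sh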

Install Dependencies

$ sudo apt -y install \
    libssl1.1 \
    libgstreamer1.0-0   \
    libgstreamer1.0-dev \
    gstreamer1.0-tools        \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad  \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav        \
    libgstreamer-plugins-base1.0-dev \
    libgstrtspserver-1.0-0   \
    libgstrtspserver-1.0-dev \
    libjansson4 \
    libjson-glib-dev \
    libyaml-cpp-dev \
    gcc \
    make \
    git \
    python3 python-is-python3

 

https://docs.nvidia.com/cuda/archive/11.7.1/wsl-user-guide/index.html

 

CUDA on WSL :: CUDA Toolkit Documentation


 

[OS/Linux] - CUDA 11.7.1 on WSL2

 

Install TensorRT 8.4.1.5

$ sudo apt-get -y install \
    libnvinfer8=8.4.1-1+cuda11.6 \
    libnvinfer-plugin8=8.4.1-1+cuda11.6 \
    libnvparsers8=8.4.1-1+cuda11.6 \
    libnvonnxparsers8=8.4.1-1+cuda11.6 \
    libnvinfer-bin=8.4.1-1+cuda11.6 \
    libnvinfer-dev=8.4.1-1+cuda11.6 \
    libnvinfer-plugin-dev=8.4.1-1+cuda11.6 \
    libnvparsers-dev=8.4.1-1+cuda11.6 \
    libnvonnxparsers-dev=8.4.1-1+cuda11.6 \
    libnvinfer-samples=8.4.1-1+cuda11.6 \
    libcudnn8=8.4.1.50-1+cuda11.6 \
    libcudnn8-dev=8.4.1.50-1+cuda11.6 \
    python3-libnvinfer=8.4.1-1+cuda11.6 \
    python3-libnvinfer-dev=8.4.1-1+cuda11.6

NOTE:

$ sudo dpkg -i cudnn-local-repo-ubuntu2004-8.4.1.50_1.0-1_amd64.deb
$ sudo apt-get update
$ sudo apt install libcudnn8=8.4.1.50-1+cuda11.6 libcudnn8-dev=8.4.1.50-1+cuda11.6

 

Install librdkafka (to enable Kafka protocol adaptor for message broker)

1. Clone the librdkafka repository from GitHub:

$ git clone https://github.com/edenhill/librdkafka.git

2. Configure and build the library:

$ cd librdkafka
$ git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
$ ./configure
$ make
make[1]: Entering directory '/home/ym/dev/librdkafka/src'
...
Generating linker script librdkafka.lds from rdkafka.h
/usr/bin/env: ‘python’: No such file or directory
make[1]: *** [Makefile:79: librdkafka.lds] Error 127
make[1]: Leaving directory '/home/ym/dev/librdkafka/src'
make: *** [Makefile:20: libs] Error 2

The build fails because the scripts invoke python, which Ubuntu 20.04 no longer provides by default. Check where Python 3 lives:

$ whereis python3
python3: /usr/bin/python3.8 /usr/bin/python3 /usr/lib/python3.8 /usr/lib/python3.9 /usr/lib/python3 /etc/python3.8 /etc/python3 /usr/local/lib/python3.8 /usr/share/python3 /mnt/c/Users/jylee/AppData/Local/Microsoft/WindowsApps/python3.exe /mnt/c/msys64/mingw64/bin/python3.10-config /mnt/c/msys64/mingw64/bin/python3.exe /usr/share/man/man1/python3.1.gz

# Either create the symlink by hand:
#$ sudo ln -s /usr/bin/python3 /usr/bin/python
# or, preferably, install the compatibility package:
$ sudo apt install python-is-python3
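A quick check that the fix took effect before re-running make:

$ python --version    # should now report Python 3.x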

$ make
make[1]: Entering directory '/home/ym/dev/librdkafka/src'
...
Updating
CONFIGURATION.md CONFIGURATION.md.tmp differ: byte 345, line 6
Checking  integrity
CONFIGURATION.md               OK
examples/rdkafka_example       OK
examples/rdkafka_performance   OK
examples/rdkafka_example_cpp   OK
make[1]: Entering directory '/home/ym/dev/librdkafka/src'
Checking librdkafka integrity
librdkafka.so.1                OK
librdkafka.a                   OK
Symbol visibility              OK
make[1]: Leaving directory '/home/ym/dev/librdkafka/src'
make[1]: Entering directory '/home/ym/dev/librdkafka/src-cpp'
Checking librdkafka++ integrity
librdkafka++.so.1              OK
librdkafka++.a                 OK
make[1]: Leaving directory '/home/ym/dev/librdkafka/src-cpp'

$ sudo make install

(Equivalently: if Python 3 is already installed, locate it with whereis python3 and create a symlink to it with sudo ln -s /usr/bin/python3 /usr/bin/python instead of installing python-is-python3.)

 

3. Copy the generated libraries to the deepstream directory:

$ sudo mkdir -p /opt/nvidia/deepstream/deepstream-6.1/lib
$ sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-6.1/lib
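A quick check that the libraries landed where DeepStream expects them:

$ ls /opt/nvidia/deepstream/deepstream-6.1/lib/librdkafka*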

 

Install the DeepStream SDK

Method 1: Using the DeepStream Debian package

Download the DeepStream 6.1.1 dGPU Debian package deepstream-6.1_6.1.1-1_amd64.deb:

https://developer.nvidia.com/deepstream-6.1_6.1.1-1_amd64.deb

$ sudo apt-get install ./deepstream-6.1_6.1.1-1_amd64.deb
...
---------------------------------------------------------------------------------------
NOTE: sources and samples folders will be found in /opt/nvidia/deepstream/deepstream-6.1
---------------------------------------------------------------------------------------
Processing triggers for libc-bin (2.31-0ubuntu9.9) ...

 

Run the deepstream-app (the reference application)

Go to the samples directory and enter this command:

$ deepstream-app -c <path_to_config_file>
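For example, using one of the reference configurations that ship with the SDK (the exact directory and file name may vary by version):

$ cd /opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app
$ deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt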

 

Run precompiled sample applications

1. Navigate to the chosen application directory inside sources/apps/sample_apps.

2. Follow that directory's README file to run the application.


NOTE:

If the application encounters errors and cannot create Gst elements, remove the GStreamer cache, then try again. To remove the GStreamer cache, enter this command:

$ rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin

When the application is run for a model which does not have an existing engine file, it may take up to a few minutes (depending on the platform and the model) for the file generation and application launch. For later runs, these generated engine files can be reused for faster loading.

 
