Ensure compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to prevent version conflicts and the need for binding redirects. (A minimal installation sketch appears at the end of this article.)

Transcribing Audio Files

One of the core features of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);

For local files, similar code can be used to perform transcription:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for receiving audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

A concrete microphone-capture sketch appears at the end of this article.

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to enable developers to build large language model (LLM) apps on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);

Audio Intelligence Models

Additionally, the SDK has built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

To learn more, visit the official AssemblyAI blog.

Image source: Shutterstock
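Installation sketch (referenced in the compatibility notes at the top of this article): the SDK is distributed via NuGet. Assuming the package ID is AssemblyAI (confirm the exact ID and current version on NuGet before installing), it can be added to a project with the dotnet CLI:

dotnet add package AssemblyAI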
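Microphone-capture sketch (referenced in the real-time section above): the GetAudio call in that example is pseudocode. The following is one possible concretization, not taken from the AssemblyAI documentation. It assumes the third-party NAudio package for microphone input and assumes SendAudioAsync accepts a raw PCM byte buffer, as the pseudocode above implies.

using System;
using AssemblyAI.Realtime;
using NAudio.Wave; // assumption: NAudio added via NuGet for audio capture

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Capture 16 kHz, 16-bit, mono PCM from the default microphone,
// matching the SampleRate passed to the transcriber above.
using var waveIn = new WaveInEvent
{
    WaveFormat = new WaveFormat(16_000, 16, 1)
};

waveIn.DataAvailable += async (_, args) =>
{
    // Forward each captured buffer to the streaming transcriber
    // (assumption: SendAudioAsync takes a byte buffer, as in the pseudocode).
    await transcriber.SendAudioAsync(args.Buffer[..args.BytesRecorded]);
};

waveIn.StartRecording();
Console.WriteLine("Recording... press Enter to stop.");
Console.ReadLine();
waveIn.StopRecording();

await transcriber.CloseAsync();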