Voice recognition technology has become an integral part of modern computing, enabling users to interact with their devices through voice commands. Apple has been at the forefront of this technology with Siri, its intelligent personal assistant, and the Speech framework, which allows developers to integrate voice recognition into their apps. In this article, we will explore how to implement voice recognition on Apple devices using Siri and the Speech framework.
Siri is Apple's built-in voice-controlled personal assistant, available on iOS, macOS, watchOS, and tvOS devices. It allows users to perform a variety of tasks through voice commands, such as sending messages, setting reminders, and controlling smart home devices.
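Apps expose their own functionality to Siri through intents. As a minimal sketch, assuming iOS 16+ and the App Intents framework (the intent name and notification used here are illustrative, not part of this article's project):

```swift
import AppIntents
import Foundation

// Hypothetical intent: lets the user say "Start Dictation" to Siri,
// which then notifies the app to begin listening.
struct StartDictationIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Dictation"

    func perform() async throws -> some IntentResult {
        // Illustrative notification name; the app would observe it
        // and call its recording routine.
        NotificationCenter.default.post(name: Notification.Name("StartDictation"), object: nil)
        return .result()
    }
}
```

With an intent like this registered, Siri and Shortcuts can trigger the app's voice-recognition flow without the user opening it first.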
The Speech framework, on the other hand, provides developers with the tools needed to incorporate speech recognition into their apps. It supports both on-device and server-based speech recognition, making it versatile for various use cases.
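When privacy matters, you can ask the framework to keep recognition on the device. A short sketch, assuming iOS 13+ where these properties are available:

```swift
import Speech

// Prefer on-device recognition when the hardware and locale support it;
// audio and transcriptions then never leave the device.
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
let request = SFSpeechAudioBufferRecognitionRequest()
if recognizer.supportsOnDeviceRecognition {
    request.requiresOnDeviceRecognition = true
}
```

If the device or locale does not support on-device recognition, the framework falls back to Apple's servers, which generally offer broader language coverage.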
Before diving into the implementation, ensure you have the following: a Mac with Xcode installed, basic familiarity with Swift and UIKit, and a physical iOS device for testing, since the microphone-driven code below behaves most reliably on real hardware.
You need to request the user's permission to use speech recognition, the microphone, and Siri. Add the following keys to your Info.plist file:

<key>NSSpeechRecognitionUsageDescription</key>
<string>We need access to speech recognition for voice commands.</string>
<key>NSMicrophoneUsageDescription</key>
<string>We need access to the microphone to capture your voice.</string>
<key>NSSiriUsageDescription</key>
<string>We need access to Siri for voice commands.</string>

NSMicrophoneUsageDescription is mandatory here: without it, the system terminates the app the first time AVAudioEngine tries to access the microphone.
Create a new Swift file and import the necessary frameworks (AVFoundation is required for AVAudioEngine):

import UIKit
import Speech
import AVFoundation

class ViewController: UIViewController, SFSpeechRecognizerDelegate {
    private let speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?
    private let audioEngine = AVAudioEngine()
    override func viewDidLoad() {
        super.viewDidLoad()
        requestSpeechAuthorization()
    }

    private func requestSpeechAuthorization() {
        SFSpeechRecognizer.requestAuthorization { authStatus in
            switch authStatus {
            case .authorized:
                print("Speech recognition authorized")
            case .denied:
                print("Speech recognition denied")
            case .restricted:
                print("Speech recognition restricted")
            case .notDetermined:
                print("Speech recognition not determined")
            @unknown default:
                fatalError()
            }
        }
    }
    func startRecording() throws {
        // Cancel any in-flight recognition task before starting a new one.
        recognitionTask?.cancel()
        recognitionTask = nil

        // Configure the audio session for recording.
        let audioSession = AVAudioSession.sharedInstance()
        try audioSession.setCategory(.record, mode: .measurement, options: .duckOthers)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

        recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
        let inputNode = audioEngine.inputNode
        guard let recognitionRequest = recognitionRequest else {
            fatalError("Unable to create a recognition request")
        }
        // Report partial results so the transcription updates live as the user speaks.
        recognitionRequest.shouldReportPartialResults = true

        recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest) { result, error in
            var isFinal = false
            if let result = result {
                print("Transcription: \(result.bestTranscription.formattedString)")
                isFinal = result.isFinal
            }
            if error != nil || isFinal {
                // Stop capturing audio and release the request and task.
                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)
                self.recognitionRequest = nil
                self.recognitionTask = nil
            }
        }

        // Feed microphone buffers into the recognition request.
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
            self.recognitionRequest?.append(buffer)
        }

        audioEngine.prepare()
        try audioEngine.start()
        print("Say something, I'm listening!")
    }
    @IBAction func startButtonTapped(_ sender: UIButton) {
        do {
            try startRecording()
        } catch {
            print("Audio engine could not start: \(error)")
        }
    }
}
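The article only shows how to start listening. A possible companion method to end a session cleanly, sketched under the assumption that it lives in the same ViewController as the code above:

```swift
import AVFoundation
import Speech

// Hypothetical helper: stops audio capture and tells the recognizer
// that no more audio is coming, so it can deliver a final result.
extension ViewController {
    func stopRecording() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        recognitionRequest?.endAudio()
    }
}
```

Calling endAudio() (rather than simply cancelling the task) lets the recognizer finish processing buffered audio and fire the result handler one last time with isFinal set to true.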
Implementing voice recognition on Apple devices is straightforward with the help of Siri and the Speech framework. By following the steps outlined in this article, you can add powerful voice recognition capabilities to your apps, enhancing user interaction and accessibility.