Alexa.NET is a helper library for working with Alexa skill requests/responses in C#. Whether you are using the AWS Lambda service or hosting your own service on your server, this library simply aims to make working with the Alexa API more natural for a C# developer, using a strongly-typed object model.
Alexa.NET also serves as a base foundation for a set of further Alexa skill development extensions from Steven Pears:
- Management GitHub / NuGet
- In-skill Pricing GitHub / NuGet
- Messaging GitHub / NuGet
- Gadgets GitHub / NuGet
- Customer and Person Profile API GitHub / NuGet
- Settings API GitHub / NuGet
- APL Support GitHub / NuGet
- Reminders API GitHub / NuGet
- Proactive Events API GitHub / NuGet
- CanFulfillIntent Request Support GitHub / NuGet
- Response Assertions GitHub / NuGet
- SkillFlow support (experimental)
- Timers API GitHub / NuGet
- Web API for Games GitHub / NuGet
- Shopping Kit GitHub / NuGet
- Conversations API (Beta) GitHub / NuGet
- Pin Confirmation (Beta) GitHub / NuGet
Regardless of your architecture, your function for Alexa will accept a SkillRequest and return a SkillResponse. The deserialization of the incoming request into a SkillRequest object will depend on your framework.
public SkillResponse FunctionHandler(SkillRequest input, ILambdaContext context)
{
// your function logic goes here
return new SkillResponse("OK");
}
Use the Amazon.Lambda.Serialization.Json package. The default serializer may be different depending on how you created your project.
In your project file:
<Project Sdk="Microsoft.NET.Sdk">
<!-- ... -->
<ItemGroup>
<PackageReference Include="Alexa.NET" Version="1.15.0" />
<PackageReference Include="Amazon.Lambda.Core" Version="1.2.0" />
<PackageReference Include="Amazon.Lambda.Serialization.Json" Version="1.8.0" />
</ItemGroup>
</Project>
In any .cs file:
// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
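If you host the skill endpoint yourself rather than on Lambda, you are responsible for turning the incoming POST body into a SkillRequest. A minimal sketch, assuming Newtonsoft.Json (which Alexa.NET's types are attributed for) and a hypothetical requestJson string holding the raw request body:

```csharp
using Alexa.NET;
using Alexa.NET.Request;
using Alexa.NET.Response;
using Newtonsoft.Json;

public class SelfHostedSkillEndpoint
{
    // requestJson is assumed to be the raw JSON body Alexa POSTed to your endpoint
    public string Handle(string requestJson)
    {
        // deserialize into the same SkillRequest type used in the Lambda handler above
        var skillRequest = JsonConvert.DeserializeObject<SkillRequest>(requestJson);

        // build a response with the helpers described later in this document
        SkillResponse response = ResponseBuilder.Tell("Hello from a self-hosted skill.");

        // note: a production endpoint must also verify the request signature and timestamp per Amazon's requirements
        return JsonConvert.SerializeObject(response);
    }
}
```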
- Contributors
- Request Types
- Responses
- Session Variables
- Responses Without Helpers
- Progressive Responses
Alexa will send different types of requests depending on the events you should respond to. Below are all of the types of requests:
- AccountLinkSkillEventRequest
- AudioPlayerRequest
- DisplayElementSelectedRequest
- IntentRequest
- LaunchRequest
- PermissionSkillEventRequest
- PlaybackControllerRequest
- SessionEndedRequest
- SkillEventRequest
- SystemExceptionRequest
This request is used for linking Alexa to another account. The request will come with the access token needed to interact with the connected service. These events are only sent if they have been subscribed to.
var accountLinkReq = input.Request as AccountLinkSkillEventRequest;
var accessToken = accountLinkReq.AccessToken;
Audio Player Requests will be sent when a skill is supposed to play audio, or if an audio state change has occurred on the device.
// do some audio response stuff
var audioRequest = input.Request as AudioPlayerRequest;
if (audioRequest.AudioRequestType == AudioRequestType.PlaybackNearlyFinished)
{
// queue up another audio file
}
Each AudioPlayerRequest will also come with a request type to describe the state change (a sketch handling each one follows this list):
- PlaybackStarted
- PlaybackFinished
- PlaybackStopped
- PlaybackNearlyFinished
- PlaybackFailed
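A minimal sketch of branching on each of these state changes, assuming the AudioRequestType enum mirrors the names above (only PlaybackNearlyFinished appears in the snippet earlier):

```csharp
var audioRequest = input.Request as AudioPlayerRequest;
switch (audioRequest.AudioRequestType)
{
    case AudioRequestType.PlaybackStarted:
        // the stream has started playing
        break;
    case AudioRequestType.PlaybackFinished:
        // the stream played to the end
        break;
    case AudioRequestType.PlaybackStopped:
        // playback was stopped or paused by the user or a directive
        break;
    case AudioRequestType.PlaybackNearlyFinished:
        // a good moment to enqueue the next stream
        break;
    case AudioRequestType.PlaybackFailed:
        // the stream could not be played
        break;
}
```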
Display Element Selected Requests will be sent when a skill has a GUI and one of the buttons was selected by the user. This request comes with a token that will tell you which GUI element was selected.
var elemSelReq = input.Request as DisplayElementSelectedRequest;
var buttonID = elemSelReq.Token;
This is the type that will likely be used most often. IntentRequest will also come with an Intent object and a DialogState of either STARTED, IN_PROGRESS or COMPLETED.
Each intent is defined by the name configured in the Alexa Developer Console. If you have included slots in your intent, they will be included in this object, along with a confirmation status.
var intentRequest = input.Request as IntentRequest;
// check the name to determine what you should do
if (intentRequest.Intent.Name.Equals("MyIntentName"))
{
if(intentRequest.DialogState.Equals("COMPLETED"))
{
// get the slots
var firstValue = intentRequest.Intent.Slots["FirstSlot"].Value;
}
}
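The confirmation status mentioned above is exposed on the intent and on each slot; a minimal sketch of reading it, assuming it surfaces as the raw status string from the request (NONE, CONFIRMED or DENIED):

```csharp
var slot = intentRequest.Intent.Slots["FirstSlot"];
if (slot.ConfirmationStatus == "CONFIRMED")
{
    // the user explicitly confirmed this value during the dialog
    var confirmedValue = slot.Value;
}
```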
This type of request is sent when your skill is opened with no intents triggered. You should respond and expect an IntentRequest to follow.
if(input.Request is LaunchRequest)
{
return ResponseBuilder.Ask("How can I help you today?");
}
This event is sent when a customer grants or revokes permissions. This request will include a SkillEventPermissions object with the permission changes. These events are only sent if they have been subscribed to.
var permissionReq = input.Request as PermissionSkillEventRequest;
var firstPermission = permissionReq.Body.AcceptedPermissions[0];
This event is sent to control playback for an audio player skill.
var playbackReq = input.Request as PlaybackControllerRequest;
switch(playbackReq.PlaybackRequestType)
{
case PlaybackControllerRequestType.Next:
break;
case PlaybackControllerRequestType.Pause:
break;
case PlaybackControllerRequestType.Play:
break;
case PlaybackControllerRequestType.Previous:
break;
}
This event is sent if the user requests to exit, their response takes too long, or an error has occurred on the device.
var sessEndReq = input.Request as SessionEndedRequest;
switch(sessEndReq.Reason)
{
case Reason.UserInitiated:
break;
case Reason.Error:
break;
case Reason.ExceededMaxReprompts:
break;
}
This event is sent when a user enables or disables the skill. These events are only sent if they have been subscribed to.
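A minimal sketch of telling the two apart by checking the raw request type string (the AlexaSkillEvent names are the standard skill event types):

```csharp
var skillEvent = input.Request as SkillEventRequest;
switch (skillEvent.Type)
{
    case "AlexaSkillEvent.SkillEnabled":
        // the customer enabled the skill
        break;
    case "AlexaSkillEvent.SkillDisabled":
        // the customer disabled the skill - a good time to clean up stored state
        break;
}
```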
When an error occurs, whether as the result of a malformed event or too many requests, AVS will return a message to your client that includes an exception code and a description.
var sysException = input.Request as SystemExceptionRequest;
string message = sysException.Error.Message;
string reqID = sysException.ErrorCause.requestId;
switch(sysException.Error.Type)
{
case ErrorType.InvalidResponse:
break;
case ErrorType.DeviceCommunicationError:
break;
case ErrorType.InternalError:
break;
case ErrorType.MediaErrorUnknown:
break;
case ErrorType.InvalidMediaRequest:
break;
case ErrorType.MediaServiceUnavailable:
break;
case ErrorType.InternalServerError:
break;
case ErrorType.InternalDeviceError:
break;
}
There are two helper methods for forming a speech response with ResponseBuilder:
var finalResponse = ResponseBuilder.Tell("We are done here.");
var openEndedResponse = ResponseBuilder.Ask("Are we done here?");
Using Tell sets ShouldEndSession to true. Using Ask sets ShouldEndSession to false. Use the appropriate function depending on whether you expect to continue dialog or not.
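Both helpers return a complete SkillResponse, so the flag can also be inspected or adjusted on the built object afterwards; a minimal sketch:

```csharp
var response = ResponseBuilder.Tell("We are done here.");
// Tell has set this to true; override it only if you decide the session should stay open after all
response.Response.ShouldEndSession = false;
```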
SSML can be used to customize the way Alexa speaks. Below is an example of using SSML with the helper functions:
// build the speech response
var speech = new SsmlOutputSpeech();
speech.Ssml = "<speak>Today is <say-as interpret-as=\"date\">????0922</say-as>.<break strength=\"x-strong\"/>I hope you have a good day.</speak>";
// create the response using the ResponseBuilder
var finalResponse = ResponseBuilder.Tell(speech);
return finalResponse;
In your response you can also have a 'Card' response, which presents UI elements in the Alexa app. ResponseBuilder presently builds Simple cards only, which contain titles and plain text.
// create the speech response - cards still need a voice response
var speech = new SsmlOutputSpeech();
speech.Ssml = "<speak>Today is <say-as interpret-as=\"date\">????0922</say-as>.</speak>";
// create the card response
var finalResponse = ResponseBuilder.TellWithCard(speech, "Your Card Title", "Your card content text goes here, no HTML formatting honored");
return finalResponse;
If you want to reprompt the user, use the Ask helpers. A reprompt can be useful if you would like to continue the conversation, or if you would like to remind the user to answer the question.
// create the speech response
var speech = new SsmlOutputSpeech();
speech.Ssml = "<speak>Today is <say-as interpret-as=\"date\">????0922</say-as>.</speak>";
// create the speech reprompt
var repromptMessage = new PlainTextOutputSpeech();
repromptMessage.Text = "Would you like to know what tomorrow is?";
// create the reprompt
var repromptBody = new Reprompt();
repromptBody.OutputSpeech = repromptMessage;
// create the response
var finalResponse = ResponseBuilder.Ask(speech, repromptBody);
return finalResponse;
If your skill is registered as an audio player, you can send directives (instructions to play, enqueue, or stop an audio stream).
// the audio directives below are usually paired with a speech response like the ones above
string audioUrl = "http://mydomain.com/myaudiofile.mp3";
string audioToken = "a token to describe the audio file";
var audioResponse = ResponseBuilder.AudioPlayerPlay(PlayBehavior.ReplaceAll, audioUrl, audioToken);
return audioResponse;
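Stopping playback, mentioned above alongside play and enqueue, goes through the same builder; a minimal sketch:

```csharp
// instruct the device to stop the audio it is currently playing
var stopResponse = ResponseBuilder.AudioPlayerStop();
return stopResponse;
```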
Session variables can be saved into a response, and will be sent back and forth as long as the session remains open.
string speech = "The time is twelve twenty three.";
Session session = input.Session;
if(session.Attributes == null)
session.Attributes = new Dictionary<string, object>();
session.Attributes["real_time"] = DateTime.Now;
return ResponseBuilder.Tell(speech, session);
Session session = input.Session;
// the attribute value round-trips through JSON, so convert it back to a DateTime rather than casting
DateTime lastTime = Convert.ToDateTime(session.Attributes["real_time"]);
return ResponseBuilder.Tell("The last day you asked was on " + lastTime.DayOfWeek.ToString());
If you do not want to use the helper Tell/Ask functions, you can build up the response manually using the Response and IOutputSpeech objects. If you would like to include a StandardCard or LinkAccountCard in your response, you could add it like this onto the response body:
// create the speech response
var speech = new SsmlOutputSpeech();
speech.Ssml = "<speak>Today is <say-as interpret-as=\"date\">????0922</say-as>.</speak>";
// create the reprompt speech
var repromptMessage = new PlainTextOutputSpeech();
repromptMessage.Text = "Would you like to know what tomorrow is?";
// create the reprompt object
var repromptBody = new Reprompt();
repromptBody.OutputSpeech = repromptMessage;
// create the response
var responseBody = new ResponseBody();
responseBody.OutputSpeech = speech;
responseBody.ShouldEndSession = false; // this triggers the reprompt
responseBody.Reprompt = repromptBody;
responseBody.Card = new SimpleCard {Title = "Test", Content = "Testing Alexa"};
var skillResponse = new SkillResponse();
skillResponse.Response = responseBody;
skillResponse.Version = "1.0";
return skillResponse;
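The Card property accepts the other card types mentioned above in the same way; for example, a LinkAccountCard (which takes no additional properties and prompts the user to link their account in the Alexa app) could be swapped in for the SimpleCard, as a minimal sketch:

```csharp
// prompt the user to link their account from the Alexa app instead of showing a simple card
responseBody.Card = new LinkAccountCard();
```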
Your skill can send progressive responses to keep the user engaged while your skill prepares a full response to the user's request. Below is an example of sending a progressive response:
var progressiveResponse = new ProgressiveResponse(skillRequest);
await progressiveResponse.SendSpeech("Please wait while I gather your data.");
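SendSpeech calls the Progressive Response API over HTTP, which is why it is awaited; a minimal sketch of using it inside an asynchronous handler (the handler shape mirrors the Lambda example earlier, and the slow work is a placeholder):

```csharp
public async Task<SkillResponse> FunctionHandler(SkillRequest input, ILambdaContext context)
{
    // let the user know work is underway before the full response is ready
    var progressiveResponse = new ProgressiveResponse(input);
    await progressiveResponse.SendSpeech("Please wait while I gather your data.");

    // ... do the slower work that produces the real answer ...

    return ResponseBuilder.Tell("Here is your data.");
}
```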
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!
This project has adopted the .NET Foundation Code of Conduct. For more information see the Code of Conduct itself or contact project maintainers with any additional questions or comments or to report a violation.