
Saturday, April 8, 2023

Deploying an ASP.NET core application with Elastic Beanstalk

 In this tutorial, you will walk through the process of building a new ASP.NET Core application and deploying it to AWS Elastic Beanstalk.

First, you will use the .NET Core SDK's dotnet command line tool to generate a basic .NET Core command line application, install dependencies, compile code, and run the application locally. Next, you will replace the default Program.cs class and add an ASP.NET Startup.cs class and configuration files to make an application that serves HTTP requests with ASP.NET and IIS.

Finally, Elastic Beanstalk uses a deployment manifest to configure deployments for .NET Core applications, custom applications, and multiple .NET Core or MSBuild applications on a single server. To deploy a .NET Core application to a Windows Server environment, you add a site archive to an application source bundle with a deployment manifest. The dotnet publish command generates compiled classes and dependencies that you can bundle with a web.config file to create a site archive. The deployment manifest tells Elastic Beanstalk the path at which the site should run and can be used to configure application pools and run multiple applications at different paths.
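
As a sketch of that last point, a manifest along the following lines (the names and archive files are illustrative, not part of this tutorial) could run two site archives at different IIS paths:

```json
{
  "manifestVersion": 1,
  "deployments": {
    "aspNetCoreWeb": [
      {
        "name": "main-app",
        "parameters": { "appBundle": "site.zip", "iisPath": "/" }
      },
      {
        "name": "admin-app",
        "parameters": { "appBundle": "admin-site.zip", "iisPath": "/admin" }
      }
    ]
  }
}
```

This tutorial itself deploys a single application at the root path.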

The application source code is available here: dotnet-core-tutorial-source.zip

The deployable source bundle is available here: dotnet-core-tutorial-bundle.zip

Prerequisites

This tutorial uses the .NET Core SDK to generate a basic .NET Core application, run it locally, and build a deployable package.

Requirements
  • .NET Core (x64) 1.0.1, 2.0.0, or later

To install the .NET Core SDK
  1. Download the installer from microsoft.com/net/core. Choose Windows. Choose Download .NET SDK.

  2. Run the installer and follow the instructions.

This tutorial uses a command line ZIP utility to create a source bundle that you can deploy to Elastic Beanstalk. To use the zip command in Windows, you can install UnxUtils, a lightweight collection of useful command line utilities like zip and ls. Alternatively, you can use Windows Explorer or any other ZIP utility to create source bundle archives.

To install UnxUtils
  1. Download UnxUtils.

  2. Extract the archive to a local directory. For example, C:\Program Files (x86).

  3. Add the path to the binaries to your Windows PATH user variable. For example, C:\Program Files (x86)\UnxUtils\usr\local\wbin.

    1. Press the Windows key, and then enter environment variables.

    2. Choose Edit environment variables for your account.

    3. Choose PATH, and then choose Edit.

    4. Add paths to the Variable value field, separated by semicolons. For example: C:\item1\path;C:\item2\path

    5. Choose OK twice to apply the new settings.

    6. Close any running Command Prompt windows, and then reopen a Command Prompt window.

  4. Open a new command prompt window and run the zip command to verify that it works.

    > zip -h
    Copyright (C) 1990-1999 Info-ZIP
    Type 'zip "-L"' for software license.
    ...

Generate a .NET Core project

Use the dotnet command line tool to generate a new C# .NET Core project and run it locally. The default .NET Core application is a command line utility that prints Hello World! and then exits.

To generate a new .NET Core project
  1. Open a new command prompt window and navigate to your user folder.

    > cd %USERPROFILE%
  2. Use the dotnet new command to generate a new .NET Core project.

    C:\Users\username> dotnet new console -o dotnet-core-tutorial
    Content generation time: 65.0152 ms
    The template "Console Application" created successfully.
    C:\Users\username> cd dotnet-core-tutorial
  3. Use the dotnet restore command to install dependencies.

    C:\Users\username\dotnet-core-tutorial> dotnet restore
    Restoring packages for C:\Users\username\dotnet-core-tutorial\dotnet-core-tutorial.csproj...
    Generating MSBuild file C:\Users\username\dotnet-core-tutorial\obj\dotnet-core-tutorial.csproj.nuget.g.props.
    Generating MSBuild file C:\Users\username\dotnet-core-tutorial\obj\dotnet-core-tutorial.csproj.nuget.g.targets.
    Writing lock file to disk. Path: C:\Users\username\dotnet-core-tutorial\obj\project.assets.json
    Restore completed in 1.25 sec for C:\Users\username\dotnet-core-tutorial\dotnet-core-tutorial.csproj.
    NuGet Config files used:
        C:\Users\username\AppData\Roaming\NuGet\NuGet.Config
        C:\Program Files (x86)\NuGet\Config\Microsoft.VisualStudio.Offline.config
    Feeds used:
        https://api.nuget.org/v3/index.json
        C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\
  4. Use the dotnet run command to build and run the application locally.

    C:\Users\username\dotnet-core-tutorial> dotnet run
    Hello World!

Launch an Elastic Beanstalk environment

Use the Elastic Beanstalk console to launch an Elastic Beanstalk environment. For this example, you will launch with a .NET platform. After you launch and configure your environment, you can deploy new source code at any time.

To launch an environment (console)
  1. Open the Elastic Beanstalk console using this preconfigured link: console.aws.amazon.com/elasticbeanstalk/home#/newApplication?applicationName=tutorials&environmentType=LoadBalanced

  2. For Platform, select the platform and platform branch that match the language used by your application.

  3. For Application code, choose Sample application.

  4. Choose Review and launch.

  5. Review the available options. Choose the available option you want to use, and when you're ready, choose Create app.

Environment creation takes about 10 minutes. During this time you can update your source code.

Update the source code

Modify the default application into a web application that uses ASP.NET and IIS.

  • ASP.NET is the website framework for .NET.

  • IIS is the web server that runs the application on the Amazon EC2 instances in your Elastic Beanstalk environment.

The source code examples to follow are available here: dotnet-core-tutorial-source.zip

Note

The following procedure shows how to convert the project code into a web application. To simplify the process, you can instead generate the project as a web application right from the start: in the previous section, Generate a .NET Core project, replace the command in the dotnet new step with the following command.

C:\Users\username> dotnet new web -o dotnet-core-tutorial
To add ASP.NET and IIS support to your code
  1. Copy Program.cs to your application directory to run as a web host builder.

    Example c:\users\username\dotnet-core-tutorial\Program.cs
    using System;
    using System.IO;
    using Microsoft.AspNetCore.Hosting;

    namespace aspnetcoreapp
    {
        public class Program
        {
            public static void Main(string[] args)
            {
                var host = new WebHostBuilder()
                    .UseKestrel()
                    .UseContentRoot(Directory.GetCurrentDirectory())
                    .UseIISIntegration()
                    .UseStartup<Startup>()
                    .Build();

                host.Run();
            }
        }
    }
  2. Add Startup.cs to run an ASP.NET website.

    Example c:\users\username\dotnet-core-tutorial\Startup.cs
    using System;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Http;

    namespace aspnetcoreapp
    {
        public class Startup
        {
            public void Configure(IApplicationBuilder app)
            {
                app.Run(context =>
                {
                    return context.Response.WriteAsync("Hello from ASP.NET Core!");
                });
            }
        }
    }
  3. Add the web.config file to configure the IIS server.

    Example c:\users\username\dotnet-core-tutorial\web.config
    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <system.webServer>
        <handlers>
          <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
        </handlers>
        <aspNetCore processPath="dotnet" arguments=".\dotnet-core-tutorial.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false" />
      </system.webServer>
    </configuration>
  4. Add dotnet-core-tutorial.csproj, which references the Kestrel and IIS integration packages and includes the web.config file in the output of dotnet publish.

    Note

    The following example was developed using .NET Core Runtime 2.2.1. You might need to modify the TargetFramework or the Version attribute values in the PackageReference elements to match the version of .NET Core Runtime that you are using in your custom projects.

    Example c:\users\username\dotnet-core-tutorial\dotnet-core-tutorial.csproj
    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>netcoreapp2.2</TargetFramework>
      </PropertyGroup>
      <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="2.2.0" />
      </ItemGroup>
      <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore.Server.IISIntegration" Version="2.2.0" />
      </ItemGroup>
      <ItemGroup>
        <None Include="web.config" CopyToPublishDirectory="Always" />
      </ItemGroup>
    </Project>

Next, install the new dependencies and run the ASP.NET website locally.

To run the website locally
  1. Use the dotnet restore command to install dependencies.

  2. Use the dotnet run command to build and run the app locally.

  3. Open http://localhost:5000 in a browser to view the site.

To run the application on a web server, you need to bundle the compiled source code with a web.config configuration file and runtime dependencies. The dotnet tool provides a publish command that gathers these files in a directory based on the configuration in dotnet-core-tutorial.csproj.

To build your website
  • Use the dotnet publish command to output compiled code and dependencies to a folder named site.

    C:\users\username\dotnet-core-tutorial> dotnet publish -o site

To deploy the application to Elastic Beanstalk, bundle the site archive with a deployment manifest. This tells Elastic Beanstalk how to run it.

To create a source bundle
  1. Add the files in the site folder to a ZIP archive.

    Note

    If you use a different ZIP utility, be sure to add all files to the root folder of the resulting ZIP archive. This is required for a successful deployment of the application to your Elastic Beanstalk environment.

    C:\users\username\dotnet-core-tutorial> cd site
    C:\users\username\dotnet-core-tutorial\site> zip ../site.zip *
      adding: dotnet-core-tutorial.deps.json (164 bytes security) (deflated 84%)
      adding: dotnet-core-tutorial.dll (164 bytes security) (deflated 59%)
      adding: dotnet-core-tutorial.pdb (164 bytes security) (deflated 28%)
      adding: dotnet-core-tutorial.runtimeconfig.json (164 bytes security) (deflated 26%)
      adding: Microsoft.AspNetCore.Authentication.Abstractions.dll (164 bytes security) (deflated 49%)
      adding: Microsoft.AspNetCore.Authentication.Core.dll (164 bytes security) (deflated 57%)
      adding: Microsoft.AspNetCore.Connections.Abstractions.dll (164 bytes security) (deflated 51%)
      adding: Microsoft.AspNetCore.Hosting.Abstractions.dll (164 bytes security) (deflated 49%)
      adding: Microsoft.AspNetCore.Hosting.dll (164 bytes security) (deflated 60%)
      adding: Microsoft.AspNetCore.Hosting.Server.Abstractions.dll (164 bytes security) (deflated 44%)
      adding: Microsoft.AspNetCore.Http.Abstractions.dll (164 bytes security) (deflated 54%)
      adding: Microsoft.AspNetCore.Http.dll (164 bytes security) (deflated 55%)
      adding: Microsoft.AspNetCore.Http.Extensions.dll (164 bytes security) (deflated 50%)
      adding: Microsoft.AspNetCore.Http.Features.dll (164 bytes security) (deflated 50%)
      adding: Microsoft.AspNetCore.HttpOverrides.dll (164 bytes security) (deflated 49%)
      adding: Microsoft.AspNetCore.Server.IISIntegration.dll (164 bytes security) (deflated 46%)
      adding: Microsoft.AspNetCore.Server.Kestrel.Core.dll (164 bytes security) (deflated 63%)
      adding: Microsoft.AspNetCore.Server.Kestrel.dll (164 bytes security) (deflated 46%)
      adding: Microsoft.AspNetCore.Server.Kestrel.Https.dll (164 bytes security) (deflated 44%)
      adding: Microsoft.AspNetCore.Server.Kestrel.Transport.Abstractions.dll (164 bytes security) (deflated 56%)
      adding: Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.dll (164 bytes security) (deflated 51%)
      adding: Microsoft.AspNetCore.WebUtilities.dll (164 bytes security) (deflated 55%)
      adding: Microsoft.Extensions.Configuration.Abstractions.dll (164 bytes security) (deflated 48%)
      adding: Microsoft.Extensions.Configuration.Binder.dll (164 bytes security) (deflated 47%)
      adding: Microsoft.Extensions.Configuration.dll (164 bytes security) (deflated 46%)
      adding: Microsoft.Extensions.Configuration.EnvironmentVariables.dll (164 bytes security) (deflated 46%)
      adding: Microsoft.Extensions.Configuration.FileExtensions.dll (164 bytes security) (deflated 47%)
      adding: Microsoft.Extensions.DependencyInjection.Abstractions.dll (164 bytes security) (deflated 54%)
      adding: Microsoft.Extensions.DependencyInjection.dll (164 bytes security) (deflated 53%)
      adding: Microsoft.Extensions.FileProviders.Abstractions.dll (164 bytes security) (deflated 46%)
      adding: Microsoft.Extensions.FileProviders.Physical.dll (164 bytes security) (deflated 47%)
      adding: Microsoft.Extensions.FileSystemGlobbing.dll (164 bytes security) (deflated 49%)
      adding: Microsoft.Extensions.Hosting.Abstractions.dll (164 bytes security) (deflated 47%)
      adding: Microsoft.Extensions.Logging.Abstractions.dll (164 bytes security) (deflated 54%)
      adding: Microsoft.Extensions.Logging.dll (164 bytes security) (deflated 48%)
      adding: Microsoft.Extensions.ObjectPool.dll (164 bytes security) (deflated 45%)
      adding: Microsoft.Extensions.Options.dll (164 bytes security) (deflated 53%)
      adding: Microsoft.Extensions.Primitives.dll (164 bytes security) (deflated 50%)
      adding: Microsoft.Net.Http.Headers.dll (164 bytes security) (deflated 53%)
      adding: System.IO.Pipelines.dll (164 bytes security) (deflated 50%)
      adding: System.Runtime.CompilerServices.Unsafe.dll (164 bytes security) (deflated 43%)
      adding: System.Text.Encodings.Web.dll (164 bytes security) (deflated 57%)
      adding: web.config (164 bytes security) (deflated 39%)
    C:\users\username\dotnet-core-tutorial\site> cd ../
  2. Add a deployment manifest that points to the site archive.

    Example c:\users\username\dotnet-core-tutorial\aws-windows-deployment-manifest.json
    {
      "manifestVersion": 1,
      "deployments": {
        "aspNetCoreWeb": [
          {
            "name": "test-dotnet-core",
            "parameters": {
              "appBundle": "site.zip",
              "iisPath": "/",
              "iisWebSite": "Default Web Site"
            }
          }
        ]
      }
    }
  3. Use the zip command to create a source bundle named dotnet-core-tutorial.zip.

    C:\users\username\dotnet-core-tutorial> zip dotnet-core-tutorial.zip site.zip aws-windows-deployment-manifest.json
      adding: site.zip (164 bytes security) (stored 0%)
      adding: aws-windows-deployment-manifest.json (164 bytes security) (deflated 50%)

Deploy your application

Deploy the source bundle to the Elastic Beanstalk environment that you created.

You can download the source bundle here: dotnet-core-tutorial-bundle.zip

To deploy a source bundle
  1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region.

  2. In the navigation pane, choose Environments, and then choose the name of your environment from the list.

    Note

    If you have many environments, use the search bar to filter the environment list.

  3. On the environment overview page, choose Upload and deploy.

  4. Use the on-screen dialog box to upload the source bundle.

  5. Choose Deploy.

  6. When the deployment completes, you can choose the site URL to open your website in a new tab.

The application simply writes Hello from ASP.NET Core! to the response and returns.

Launching an environment creates the following resources:

  • EC2 instance – An Amazon Elastic Compute Cloud (Amazon EC2) virtual machine configured to run web apps on the platform that you choose.

    Each platform runs a specific set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination of these. Most platforms use either Apache or NGINX as a reverse proxy that sits in front of your web app, forwards requests to it, serves static assets, and generates access and error logs.

  • Instance security group – An Amazon EC2 security group configured to allow inbound traffic on port 80. This resource lets HTTP traffic from the load balancer reach the EC2 instance running your web app. By default, traffic isn't allowed on other ports.

  • Load balancer – An Elastic Load Balancing load balancer configured to distribute requests to the instances running your application. A load balancer also eliminates the need to expose your instances directly to the internet.

  • Load balancer security group – An Amazon EC2 security group configured to allow inbound traffic on port 80. This resource lets HTTP traffic from the internet reach the load balancer. By default, traffic isn't allowed on other ports.

  • Auto Scaling group – An Auto Scaling group configured to replace an instance if it is terminated or becomes unavailable.

  • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts that are created when you use Elastic Beanstalk.

  • Amazon CloudWatch alarms – Two CloudWatch alarms that monitor the load on the instances in your environment and that are triggered if the load is too high or too low. When an alarm is triggered, your Auto Scaling group scales up or down in response.

  • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to launch the resources in your environment and propagate configuration changes. The resources are defined in a template that you can view in the AWS CloudFormation console.

  • Domain name – A domain name that routes to your web app in the form subdomain.region.elasticbeanstalk.com.

All of these resources are managed by Elastic Beanstalk. When you terminate your environment, Elastic Beanstalk terminates all the resources that it contains.

Note

The Amazon S3 bucket that Elastic Beanstalk creates is shared between environments and isn't deleted during environment termination. For more information, see Using Elastic Beanstalk with Amazon S3.

Cleanup

When you finish working with Elastic Beanstalk, you can terminate your environment. Elastic Beanstalk terminates all AWS resources associated with your environment, such as Amazon EC2 instances, database instances, load balancers, security groups, and alarms.

To terminate your Elastic Beanstalk environment
  1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region.

  2. In the navigation pane, choose Environments, and then choose the name of your environment from the list.

    Note

    If you have many environments, use the search bar to filter the environment list.

  3. Choose Actions, and then choose Terminate environment.

  4. Use the on-screen dialog box to confirm environment termination.

With Elastic Beanstalk, you can easily create a new environment for your application at any time.

Next steps

As you continue to develop your application, you'll probably want to manage environments and deploy your application without manually creating a .zip file and uploading it to the Elastic Beanstalk console. The Elastic Beanstalk Command Line Interface (EB CLI) provides easy-to-use commands for creating, configuring, and deploying applications to Elastic Beanstalk environments from the command line.

If you use Visual Studio to develop your application, you can also use the AWS Toolkit for Visual Studio to deploy changes, manage your Elastic Beanstalk environments, and manage other AWS resources. See The AWS Toolkit for Visual Studio for more information.

For developing and testing, you might want to use the Elastic Beanstalk functionality for adding a managed DB instance directly to your environment. For instructions on setting up a database inside your environment, see Adding a database to your Elastic Beanstalk environment.

Finally, if you plan to use your application in a production environment, configure a custom domain name for your environment and enable HTTPS for secure connections.

Wednesday, June 2, 2021

Transform Your ASP.NET Core API into AWS Lambda Functions

In a recent article (Discovering AWS for .NET Developers), you read about my first foray into using .NET in the Amazon Web Services (AWS) ecosystem. I explored the AWS Relational Database Service (RDS) and created an ASP.NET Core API using Entity Framework Core (EF Core) to connect to a SQL Server Express database hosted in RDS. In the end, I deployed my API to run on AWS Elastic Beanstalk with my database credentials stored securely in Amazon's Parameter Store to continue interacting with that same database.

Interacting with the database was a great first step for me and hopefully for readers as well. And it gave me enough comfort with AWS to set my sights on their serverless offering, AWS Lambda functions. Some of the most critical differences between hosting a full application in the cloud and rendering your logic as functions are:

  • Rather than paying for an application that's constantly running and available, you only pay for individual requests to a function. In fact, the first one million requests each month are free, along with a generous amount of compute time. (Details at aws.amazon.com/Lambda/pricing)

  • Serverless functions are also stateless, meaning that you can run many instances of the same function without worrying about conflicting state across those instances.

  • Most of the management of serverless functions is taken care of by the function host, leaving you to focus on the logic you care about.

In this article, I'll evolve the ASP.NET Core API from the previous article to a Serverless Application Model (SAM) application which is a form of Lambda function.

Moving an Existing ASP.NET Core API to a Serverless App

This was such an interesting journey. And an educational one. Amazon has created what I'll refer to as a lot of “shims” to seamlessly host an ASP.NET Core API behind a Lambda function. The beauty of this is that you can write an ASP.NET Core API using the skills you already have and AWS's logic will provide a bridge that runs each controller method as needed. That way, you get the benefits of serverless functions such as the on-demand billing but continue to build APIs the way you already know how. It took a bit of time (and some repeated explanations and reading) to wrap my head around this. I hope this article provides a quicker learning path for you.

If you installed the AWS Toolkit for Visual Studio as per the previous article, then you already have the project template needed to create the basis for the new API. I'll start by creating a new project using the template and then copy the classes and some code from the existing API into the new project. The project template contains part of the “bridge” I just referred to, and it also has logic that calls into some additional tooling in AWS that provides more of the bridging. Although I do think it's important to have some understanding about how your tools work, there's a point where it's okay to say “okay, it just works.”

Let's walk through the steps that I performed to transform my API. While this article is lengthy, most of the details are here to provide a deeper understanding of the choices I've made and how things are working. But the actual steps are not that many. If you want to follow along, I've included the previous solution in the downloads for this article.

Creating the New Project

Start by creating a new project and in the template finder, filter on AWS and C#. This gives you four templates and the one to choose is AWS Serverless Application (.NET Core - C#). After naming the new project, you'll get a chance to choose a "Blueprint", i.e., a sample template for a particular type of app. From the available blueprint options, choose ASP.NET Core Web API. This is the template that includes the plumbing to ensure that your controller methods can be run behind a Lambda function. The project that's generated (shown in Figure 1) looks similar to the one created by the ASP.NET Core Web API template, with a few exceptions.

                                                 Figure 1: The project generated from the selected template

  1. One exception is the introduction of the S3ProxyController. This is just a sample controller that I'll remove from my project. I'm keeping the values controller so that I can validate my API if needed.
  2. Another is the aws-lambda-tools-defaults.json file. This file holds settings used for publishing, whether you are doing so via the tooling or at the command line using AWS's dotnet CLI extensions.
  3. LambdaEntryPoint.cs replaces Program.cs for the deployed application.
  4. LocalEntryPoint.cs replaces Program.cs for running or debugging locally.
  5. The serverless.template contains configuration information for deploying the application. Specifically, this uses the AWS SAM specification, which is an AWS CloudFormation extension used for serverless applications.

Copying Assets from the Original API

Before looking at the Lambda-specific files, let's pull in the logic from the original API. In the downloads that accompany this article, you'll find a BEFORE folder that contains the solution from the previous article.

First, you'll need to add the NuGet references for the EF Core packages (SqlServer and Tools for migrations) as well as the SystemsManager extension you used for the deployed API to read the secured parameters stored in AWS. You can see the packages in the csproj file shown in Figure 2.


Figure 2: The EF Core and SystemsManager package references added to the project file

Next, I'll copy files from the previous application into the project and remove the S3ProxyController file. The files I copied in, highlighted in Figure 3, are the AuthorsController, the BookContext, the Author and Book classes, and the contents of the Migrations folder. As a reminder, the AuthorsController was originally created using the controller template that generates actions using Entity Framework.

Note that in the previous article, I created a SQL Server database instance in Amazon's RDS, let EF Core migrations create the database and tables, and then manually added some data via SQL Server Object Explorer. The “Before” solution that comes with this article has two changes related to the data. Its BookContext class now includes HasData methods to seed some data into the Authors and Books tables. Also, there is a second migration file, seeddata, that has the logic to insert the seed data defined in the BookContext. If you don't have the database yet, you'll be able to use the update-database migrations command to create the database and its seed data in your database instance. But you will have to create the database instance in advance.

                                                   Figure 3: Files copied into the project from my original API

The new Startup class has some extra logic to interact with an AWS S3 Proxy, which is then used by the S3ProxyController that you just deleted. Because you don't need that, it's safe to completely replace this startup.cs file with the one from the original solution instead of copying various pieces of that file. Most important in there is the logic to build a connection string by combining details you'll add in shortly. The final bit of that logic attaches the concatenated connection string to the DbContext dependency injection configuration with this code:

 services.AddDbContext<BookContext>(options => options.UseSqlServer(connection));
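
For context, the surrounding Startup logic might look like the following sketch (an assumption, not the article's exact code; it presumes Startup exposes an IConfiguration property named Configuration and uses the "BooksConnection", "DbUser", and "DbPassword" keys shown later in this section):

```csharp
// Sketch only: combine the partial connection string from configuration with
// the separately supplied credentials, then register the DbContext.
// Assumes: using System.Data.SqlClient; using Microsoft.EntityFrameworkCore;
public void ConfigureServices(IServiceCollection services)
{
    var builder = new SqlConnectionStringBuilder(
        Configuration.GetConnectionString("BooksConnection"))
    {
        UserID = Configuration["DbUser"],
        Password = Configuration["DbPassword"]
    };
    var connection = builder.ConnectionString;

    services.AddDbContext<BookContext>(
        options => options.UseSqlServer(connection));
}
```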

My Startup class (and the others) from the earlier project uses the namespace VisublogEFCoreAPI. My new project's namespace is different. Therefore, I needed to add

 using VisublogEFCoreAPI;

Add this using statement to both the LambdaEntryPoint and LocalEntryPoint classes. The compiler will remind you about this.

The next assets you need from the earlier solution are the connection string and its credentials.

The connection string to the database goes into appsettings.json, to be read by the startup class code that builds the connection string. My setting looks like this, although I've hidden my server name:

 "ConnectionStrings": {
    "BooksConnection":
      "Server=Visublogmicro.***.us-east-2.rds.amazonaws.com,1433;
       Database=BookDatabase"
 }

Keep in mind that JSON doesn't like wrapped lines. They are only wrapped here for the sake of this article's formatting. Also don't forget that very important comma to separate the Logging section from the ConnectionStrings section. I can attest to how easy it is to make that mistake.
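
Unwrapped, the setting is a single JSON string, for example:

```json
"ConnectionStrings": {
  "BooksConnection": "Server=Visublogmicro.***.us-east-2.rds.amazonaws.com,1433;Database=BookDatabase"
}
```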

The ASP.NET Core Secret Manager will supply the DbPassword and DbUser values for the connection string at design time; they won't get stored in the project, which means that you don't have to worry about accidentally deploying them. As a reminder, right-click the project in Solution Explorer and choose Manage User Secrets, which will open a JSON file for the secrets. Add your secret credentials to this file, for example:

 {
   "DbPassword": "myfancypassword",
   "DbUser": "myusername"
 }

These values will be available to the Configuration API. With all of this in place, I'm now able to run the new API locally on my computer, hosted by .NET's Kestrel server, by choosing the project name in the Debug Toolbar. The app reads the password and user ID from the ASP.NET secrets and, with those, is able to interact with my AWS-hosted database.

                                         Figure 4: Targeting the project to run locally using .NET's Kestrel server

Running Locally Isn't Using Any Lambda Logic

At this point, Visual Studio isn't doing anything more than running the API in the same way as it would for an ASP.NET Core API, ignoring all of the Lambda-specific logic added by the template. The new API runs locally and the puzzle pieces are in place for this application to run as a Lambda function, but they aren't being used yet.

So far, you're seeing that your existing skills for building ASP.NET Core apps remain 100% relevant, even for testing and debugging your apps. You don't have to worry about issues related to the Lambda function getting in the way of building and debugging the app. All of that logic stays out of your way for this part of the application building. That's because the project knows to run locally from the LocalEntryPoint class, which avoids all of the Lambda infrastructure. Other than the class name, LocalEntryPoint.cs is exactly the same as program.cs in a typical ASP.NET Core API project. And by default, the debugger will start by calling its Main method.
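
For reference, the pattern looks like this (a sketch, not the template's exact file; apart from the class name it's the standard ASP.NET Core bootstrap):

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

// Sketch: the familiar Program/Main pattern that builds and runs the
// Kestrel-hosted web app for local running and debugging.
public class LocalEntryPoint
{
    public static void Main(string[] args)
    {
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build()
            .Run();
    }
}
```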

Understanding How Your API Will Transform into a Lambda Function

With my API now running successfully, it's time to trigger the special logic included in the template to run all of this as a Lambda function.

I think it's important to understand some of the “magic” that is happening for this scenario. Of course, it's not magic. It's some very clever architecture on the part of the AWS Lambda team. Keep in mind that the biggest difference between running the API as a regular Web application and running it as a serverless application is that the Web application is always running and consuming resources, whereas the serverless application is a Lambda function that acts as a wrapper to your controller methods. And by running on demand, that means you are only paying for the resources used in the moments that function is running, not while it's sitting around waiting for a request to come in.

When the app is deployed (using some of the special assets added by the template) it doesn't just push your application to the cloud, it builds a full Lambda function infrastructure. Because this function is meant to be accessed through HTTP, it's shielded by an API Gateway (the default), but you have the option to switch to an Application Load Balancer instead. Unlike a regular ASP.NET Core API, the controller methods aren't exposed directly through URIs (or routing). A Lambda function wraps your controllers and runs only on demand when something calls your API. If nothing calls, nothing is running.

Figure 5: How your hosted API works after being transformed during deployment

What's in between the gateway and your controller is the Amazon.Lambda.AspNetCoreServer, which contains its own Lambda function that translates the API Gateway request into an ASP.NET Core request. The requests to that Lambda function are all that you pay for in this setup, not the controller activity; that is, after the monthly free allocation. There's more to how this works but for the purposes of this article, this should be enough to have a high-level understanding of what appears to be magic.
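To make that translation concrete, here is a heavily trimmed sketch of the kind of proxy event that API Gateway hands to the Lambda function. The field values are illustrative, and the real payload carries many more properties; Amazon.Lambda.AspNetCoreServer maps these fields onto an ASP.NET Core request before invoking your controllers.

```json
{
  "resource": "/{proxy+}",
  "path": "/api/values",
  "httpMethod": "GET",
  "headers": { "Accept": "application/json" },
  "queryStringParameters": null,
  "body": null,
  "isBase64Encoded": false
}
```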

Engaging the SAM and Lambda Logic in Your API

So now let's take a look at some of the assets shown in Figure 1 that were created by the template. Your friend here is the Readme markdown file included in the project. It gives some insight into the assets and I will highlight some of the relevant descriptions here for you:

  1. serverless.template: an AWS CloudFormation Serverless Application Model template file for declaring your serverless functions and other AWS resources.
  2. aws-lambda-tools-defaults.json: default argument settings for use with Visual Studio and the AWS command line deployment tools.
  3. LambdaEntryPoint.cs: a class that derives from Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction. The code in this file bootstraps the ASP.NET Core hosting framework. The Lambda function itself is defined in the base class. When you ran the app locally, it started with the LocalEntryPoint class. When you run it within the Lambda service in the cloud, this LambdaEntryPoint is what will be used.

In addition to the using directive mentioned above, there is one more change to make in the LambdaEntryPoint class. If you read the earlier article, you may recall that there was also a lesson in there on storing the database UserId and password as secured parameters in AWS. I was able to leverage the SystemsManager extension to read from AWS Systems Manager, where the parameters are stored. You'll need to ensure that the deployed app can do that by adding the following builder.ConfigureAppConfiguration code into the Init method of the LambdaEntryPoint class. This will also require a using directive for Microsoft.Extensions.Configuration.

protected override void Init(IWebHostBuilder builder)
{
    builder.ConfigureAppConfiguration(
        (c, b) => b.AddSystemsManager("/visublogapi"));
    builder.UseStartup<Startup>();
}

In the serverless.template file, you also need to make a simple change to the policies controlling what the function can access, so that it can read the parameters.

By default, AWS' AWSLambdaFullAccess policy is defined directly in the serverless.template without using roles. You can see this in the Properties section of the AspNetCoreFunction resource in the file:

"Role": null,
"Policies": [
    "AWSLambdaFullAccess"
],

You just need to add two more policies, AmazonSSMReadOnlyAccess and AWSLambdaVPCAccessExecutionRole. The Role property is not needed at all, so I removed it.

"Policies": [
    "AWSLambdaFullAccess",
    "AmazonSSMReadOnlyAccess",
    "AWSLambdaVPCAccessExecutionRole"
],

The SSM policy gives the deployed function permission to access the parameters in the Systems Manager. The VPCAccess policy gives the function permission to wire up a connection to the VPC that's hosting the database. I'll point out when this comes into play after deploying the function.

There is some more cleanup you can do in the serverless template. Many settings in there are related to the S3Proxy controller that you deleted. You can delete the related sections.

The sections you can delete, starting from the top are:

- Parameters
- Conditions
- the Environment section within Resources:AspNetCoreFunction
- Bucket within Resources
- S3ProxyBucket within Outputs

Take care to select the correct start and end points when deleting sections from this JSON file, including any trailing commas.

A related setting in appsettings.json is the AppS3Bucket property. You can delete that as well.
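After all of this cleanup, the trimmed serverless.template looks roughly like the following sketch. The handler string, runtime, and sizing values are illustrative; they will vary with your project name and target framework, so check them against what the template generated for you.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Resources": {
    "AspNetCoreFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "MyApi::MyApi.LambdaEntryPoint::FunctionHandlerAsync",
        "Runtime": "dotnetcore3.1",
        "CodeUri": "",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [
          "AWSLambdaFullAccess",
          "AmazonSSMReadOnlyAccess",
          "AWSLambdaVPCAccessExecutionRole"
        ],
        "Events": {
          "ProxyResource": {
            "Type": "Api",
            "Properties": { "Path": "/{proxy+}", "Method": "ANY" }
          },
          "RootResource": {
            "Type": "Api",
            "Properties": { "Path": "/", "Method": "ANY" }
          }
        }
      }
    }
  },
  "Outputs": {
    "ApiURL": {
      "Description": "API endpoint URL for the Prod environment",
      "Value": { "Fn::Sub": "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/" }
    }
  }
}
```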

Deploying the Serverless Application

Although you can run the non-Lambda version of the app locally as I did earlier, you can't just install the Lambda service on your computer to check out how it works with the infrastructure. You need to deploy the application to AWS. This is a simple task, thanks again to the toolkit. Let's walk through that process.

The aws-lambda-tools-defaults.json file contains configuration information for publishing the function. In fact, the file also has configuration information for creating the S3 proxy used by the controller that you've now deleted. Deleting the “template-parameters” property from the file clears that extraneous information.
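With “template-parameters” removed, the defaults file is left with entries along these lines. The profile, region, and names shown here are placeholders standing in for your own values, and your file may contain a few additional settings.

```json
{
  "profile": "default",
  "region": "us-east-2",
  "configuration": "Release",
  "template": "serverless.template",
  "s3-prefix": "MyApi/",
  "s3-bucket": "my-deployment-bucket",
  "stack-name": "MyApiStack"
}
```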

The context menu for the serverless project has the option "Publish to AWS Lambda…". This opens a form where you can specify settings for your deployed application. The profile and region are pre-populated from your AWS Explorer settings.

You'll need to name the CloudFormation stack and the bucket for the deployment. All of the resources for your application will be bundled into a single unit and managed by CloudFormation; this is what the stack refers to. The S3 bucket (different from the S3 proxy used by the deleted controller) stores the application's compiled code for your function. Any existing buckets in your account are listed in the drop-down, and you can create a new one with the New button. Note that bucket names must be all lower case. Also note that if you select an existing bucket, it needs to be in the same region as the one where you're deploying the Lambda function. My settings are shown in Figure 6.

Figure 6: Publishing the serverless application

Now you're ready to publish the application, so just click Publish. You'll immediately start to see log information about the steps being taken to build and push the application to the cloud. After that, a log is displayed showing what's happening in the cloud to create all of the infrastructure to run the application. When it's all done, the status shows CREATE_COMPLETE and the final logs indicate the same (Figure 7).

Figure 7: The final logs when the deployment has completed

The URL of the application is shown on the form. Mine is https://hfsw7u3sk5.execute-api.us-east-2.amazonaws.com/Prod.

Sending a request to the values controller will succeed, in my case at https://hfsw7u3sk5.execute-api.us-east-2.amazonaws.com/Prod/api/values, but the authors controller will fail with a timeout. That's because you have a bit more security configuration to perform on the newly deployed function.

Let's first look in the portal to see what was created and then address the last bits of security for the authors controller in the AWS cloud to access the authors data in the database.

Examining What Got Created in the Cloud

If you refresh the AWS Lambda node in the AWS Explorer, you should see your new function app listed. Its name will start with the CloudFormation stack you specified in the publish wizard, concatenated with “AspNetCoreFunction” and a randomly generated string. You can update some of the function's configuration, look at logs, and more. You might notice the Mock Lambda Test Tool in the toolkit. But this is not for debugging the cloud-based Lambda from Visual Studio. It's for performing an advanced form of testing to debug problems in the deployed function. You can learn more about that tool here: Mock Lambda Test Tool. I'll come back to the configuration page shortly.

When I'm learning, I also like to see the function in the portal. It feels more real and more interesting to me. Here's how to do that.

Log into the portal and be sure to set your view to the region where you published the function. Select Lambda by dropping down the Services menu at the top. From the AWS Lambda dashboard, select the Functions view. Here, you'll see the same list of Lambda functions in your account, filtered by whatever region is selected at the top of the browser page.

Click on the function to open its configuration page. At the top, there's a note that the function belongs to an application with a link to see some information about the application: the API endpoint and a view of the various resources (your Lambda Function, an IAM role associated with the function, and the API gateway). There are other details to explore in the application view, such as a log of deployments and monitoring.

Back in the function's overview page, the first section shows a visual representation of the function with an API gateway block and the function itself. Click on the API gateway to see the two REST endpoints that were created: one with a proxy and one without. Next, click on the block for the function and you'll notice that the display below changes. If your app was using a scripting language, there would be a code editor available. Because you're using a language that requires a compiler, uploading a zip file is the only option – and that's what the Publish wizard did for you – so the code editor is hidden. Keep scrolling down to see more sections: Environment variables, Tags, and a few others.

The block of interest is the currently empty VPC area. VPC is an acronym for Virtual Private Cloud, a logically isolated section of the AWS cloud. The VPC settings are the key to giving the function permission to access the database instance. Currently, the lack of that access is why the authors controller is failing.

Understanding and Affecting What Permissions the Function Has

Thanks to the AmazonSSMReadOnlyAccess policy you added to the function in the serverless.template file, the function is able to access the UserId and Password parameters you stored in the Systems Manager as part of the previous article. However, even though it can read the connection string credentials for the database, it isn't able to connect to the VPC where the database lives. Everything is secure by default here, even from other services attached to the same AWS account.

The database instance is inside the default VPC in my AWS account. That's most likely the case for you if you followed the demo in the earlier article. The function itself isn't inside a VPC. As I explained earlier, it was deployed to a CloudFormation stack, the one you named in the Publish wizard. What you need to do next is tie the function to the VPC that contains the database instance. You can do that through the portal or using the Function configuration page of the Toolkit in Visual Studio. I'll show you how to do this back in Visual Studio.

The Toolkit's function configuration page has a VPC section and in there, a drop-down to select one or more VPC Subnets to which you can tie the function and a drop-down for security groups. The latter is disabled because it only shows security groups for the VPC(s) you've selected.

A subnet is essentially a part of an IP range exposed through the AWS cloud. A VPC can have one or more subnets associated with it. By default, the default VPC has three subnets and each of those is a public subnet, meaning that it's allowed to receive inbound requests from the Internet (and can make outbound calls), e.g., a Web server. To connect the Lambda function to this VPC, you can select any one of the subnets from the default VPC. If the VPC has private subnets, connecting to one of those will work as well. Based on all of my experiments (and guidance specifically for this article from experts at AWS), you can randomly choose any subnet attached to the VPC as I've done in Figure 8. Note that the “Map Public IP” column isn't an indication of whether the subnet is public or private.

Figure 8: Selecting a subnet within the database's VPC

Having selected that subnet, the Security Groups drop-down now gives me two security group options – these are the only groups tied to that VPC. If you have more, they'll all be available in the drop-down. There will always be a default security group, so you can select that one.

Once you've specified the subnet and security group, save the settings by clicking the “Apply Changes” icon at the top of the Function page. Unfortunately, the toolkit doesn't report status. So, I flipped back to the portal view, refreshed the Web page, and waited for the message “Updating the function” to change to “Updated”. The message is displayed right at the top of the page in a blue banner, so it shouldn't be hard to find. This took about one minute.

Remember adding the AWSLambdaVPCAccessExecutionRole to the serverless.template policies earlier? That policy is what gave the Lambda function permission to perform this action of attaching to the VPC.

One Last Hook: VPC, Meet Systems Manager

Now, if you test api/values again, or api/authors, you may think you've broken everything! Both controllers time out. But you haven't broken the function. The function still has permission to read the parameters, but the VPC gives it no network path to reach them: now that the function runs attached to my VPC, it can't reach back to the Parameter Store over the Internet. Recall the logic you added to the LambdaEntryPoint class:

 builder.ConfigureAppConfiguration((c, b) => b.AddSystemsManager("/visublogapi"));

The final piece of the puzzle is to allow the VPC access to the Systems Manager. There are two options. One is to configure the VPC to allow the Lambda function to go out to the Internet and then to the service for the Parameter Store. The other is to configure a channel (called an endpoint) on the VPC that allows the function to call the Systems Manager without ever leaving the AWS network. The latter is the simplest path and the one I chose.

I'll create an endpoint on the default VPC, giving the endpoint permissions to call the Systems Manager. Endpoints aren't available in the toolkit, so you'll do this in the portal; luckily, it's just a few steps where you can rely mostly on default settings. It's not a bad idea to get a little more experience interacting with the portal. Alternatively, you could do this using the AWS CLI or AWS' PowerShell tools.

In the portal, start by selecting VPC from the AWS Services list. From the VPC menu on the left, select Endpoints, then Create Endpoint. Filter the available service names by typing ssm into the search box, then select com.amazonaws.[region].ssm.

From the VPC drop-down, select the relevant VPC. It's handy to know the ID of your VPC, or its name, if you've assigned one in the console. Once selected, all of that VPC's public subnets are preselected, which is fine. In fact, all of the rest of the defaults on this page are correct, so you can scroll to the bottom of the page and click the Create endpoint button.

That's it! The endpoint should be ready right away. The application now has access to the parameters, and it's able to use those parameters to build the connection string and access the database. Finally, the api/values and api/authors should successfully return their expected output.

A Journey, But in the End, Not a Lot of Work

Although this article has been a long one, so much of what you read was to be sure you understood how things work and why you were performing certain steps. In fact, the journey to modernize your ASP.NET Core API to AWS Lambda functions doesn't entail a lot of work and the value can be significant.

You created a project from a template, copied over files from the original API and made a few small changes to a handful of files. With this, the API was already able to run locally in Visual Studio.

Then you used a wizard to publish the API to AWS as a Lambda function and because the API interacts with a SQL Server database in Amazon RDS (using Entity Framework Core), you needed to enable a few more permissions. That was only two steps: Connect the database's VPC to the function and create an endpoint so that VPC was able to access the credentials that are stored as AWS parameters.

Although this exercise focused on an existing API, you can also create a function app from scratch with this template and build a new ASP.NET Core API in that project, using all of the knowledge you already have, without having to learn how to build Lambda functions. Surely, like me, once you've whetted your appetite with this, you'll be curious and ready to explore building straight-up Lambda functions next.

Let Visual Studio Access Your Database

Note that if you're using the database you created in the previous article, you may need to re-grant access from the IP address of your PC. I hit this snag myself. The easiest way I found to do this was to view the RDS instances in the AWS Explorer, right-click on the desired instance, and then choose “Add to Server Explorer.” If your IP address doesn't have access, a window will open with your IP address listed and a prompt to grant access.

Watch Out for Publicly Available Production Databases

Keep in mind that when originally creating the database instance (in the earlier article), I specified that it should be publicly available which, combined with setting accessibility to my development computer's IP address, allows me to debug the API in Visual Studio while connecting to the database on AWS. I can also connect through Visual Studio's database tools, SSMS, Azure Data Studio or other tools. However, for a production database that's being accessed by another AWS service (e.g., this Lambda function app, or my API deployed to AWS Elastic Beanstalk) you should disable the public availability for the instance. For testing, you could have one test database in its own publicly available instance (still limited to select IP addresses) and leave the production database locked down.

The Relationship Between the Function and the Database's VPC

At first, I thought that specifying a VPC for the application meant that I would be pushing the app into the same VPC as the database. This misunderstanding led me on a wild goose chase. I started by creating a separate VPC and could never get it to communicate with the database. It's important to understand that you aren't creating a VPC for hosting the application; you're identifying the existing VPC with the subnet(s) that allow access to the RDS instance. It's like you're making an introduction between the function and the VPC.

 

  Author : Julie Lerman

