Introducing Amazon Bedrock, a fully managed service that provides a range of high-performing foundation models (FMs) from leading AI companies. It lets you experiment with these FMs, customize them with your own data through techniques such as fine-tuning and retrieval-augmented generation (RAG), and build managed agents that execute complex tasks, whether for business or private projects.
As a fully managed service, it eliminates server management entirely: no provisioning, scaling, or maintenance to worry about.
In the context of serverless computing, services like AWS Lambda enable you to run code without managing servers. When you trigger a Lambda function, AWS automatically handles the infrastructure, scaling, and execution of your code. It's a pay-as-you-go model where you're only charged for the compute time your code actually uses, making it a cost-efficient and hassle-free option for running code in the cloud.
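To make this concrete, a Lambda handler is just an exported function that receives an event and returns a response. Here's a minimal sketch (the event shape follows API Gateway's proxy format; the greeting logic is purely illustrative):

```typescript
// Minimal Lambda-style handler: parse the JSON body, return an HTTP-shaped response.
export async function handler(event: { body: string }) {
  const { name } = JSON.parse(event.body);
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` })
  };
}
```

AWS invokes this function on demand and bills only for the milliseconds it runs.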
Today, we'll be building two Lambda functions for text and image retrieval from Amazon Bedrock. We'll deploy these using Infrastructure as Code, make them accessible via API Gateway, and ensure everything remains Serverless.
You can use these functions to create applications similar to the ones I've showcased below:
Let's start!
You can choose whichever tool you prefer for deploying your Lambda functions; I'll provide the code for both Lambdas:
A few important points: install the `@aws-sdk/client-bedrock-runtime` package before deploying, and make sure the model IDs used below are enabled for your AWS account in the Bedrock console. Here is the text handler (note that the CORS headers belong on the Lambda response, not on the Bedrock request):

```typescript
import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

const client = new BedrockRuntimeClient({ region: 'us-east-1' });

// CORS headers go on the Lambda response, not on the Bedrock request.
const corsHeaders = {
  'Access-Control-Allow-Headers': 'Content-Type',
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Credentials': true,
  'Access-Control-Allow-Methods': 'POST'
};

export async function handler(event: any) {
  const prompt = JSON.parse(event.body).prompt;
  const input = {
    modelId: 'ai21.j2-mid-v1',
    contentType: 'application/json',
    accept: '*/*',
    body: JSON.stringify({
      prompt,
      maxTokens: 200,
      temperature: 0.7,
      topP: 1,
      stopSequences: [],
      countPenalty: { scale: 0 },
      presencePenalty: { scale: 0 },
      frequencyPenalty: { scale: 0 }
    })
  };

  try {
    const data = await client.send(new InvokeModelCommand(input));
    const parsedData = JSON.parse(Buffer.from(data.body).toString('utf8'));
    const text = parsedData.completions[0].data.text;
    return { statusCode: 200, headers: corsHeaders, body: JSON.stringify({ text }) };
  } catch (error) {
    console.error(error);
    return { statusCode: 500, headers: corsHeaders, body: JSON.stringify({ error: 'Invocation failed' }) };
  }
}
```
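For reference, the reason the handler reads `completions[0].data.text` is the shape of the Jurassic-2 response body. A simplified sketch (the assumed shape below is trimmed down; the real payload carries additional metadata fields):

```typescript
// Simplified, assumed shape of an AI21 Jurassic-2 response body.
const sampleResponse = JSON.stringify({
  completions: [{ data: { text: 'Hello from Jurassic-2!' } }]
});

// Extract the generated text the same way the handler does.
function extractText(jsonString: string): string {
  const parsed = JSON.parse(jsonString);
  return parsed.completions[0].data.text;
}

console.log(extractText(sampleResponse)); // prints 'Hello from Jurassic-2!'
```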
And the image handler, which invokes Stable Diffusion XL and returns the generated image as a base64 string:

```typescript
import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

const client = new BedrockRuntimeClient({ region: 'us-east-1' });

// CORS headers go on the Lambda response, not on the Bedrock request.
const corsHeaders = {
  'Access-Control-Allow-Headers': 'Content-Type',
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Credentials': true,
  'Access-Control-Allow-Methods': 'POST'
};

export async function handler(event: any) {
  const prompt = JSON.parse(event.body).text_prompts;
  const input = {
    modelId: 'stability.stable-diffusion-xl-v0',
    contentType: 'application/json',
    accept: 'application/json',
    body: JSON.stringify({
      text_prompts: prompt,
      cfg_scale: 10,
      seed: 0,
      steps: 50
    })
  };

  try {
    const response = await client.send(new InvokeModelCommand(input));
    // response.body is a byte array; decode it directly rather than through
    // .buffer, which can include bytes outside the view's offset.
    const jsonString = new TextDecoder('utf-8').decode(response.body);
    const parsedData = JSON.parse(jsonString);
    return {
      statusCode: 200,
      headers: corsHeaders,
      body: parsedData.artifacts[0].base64
    };
  } catch (error) {
    console.error(error);
    return { statusCode: 500, headers: corsHeaders, body: 'Invocation failed' };
  }
}
```
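On the consuming side, the base64 string this function returns can be decoded back into binary and written to disk, for example from a small Node script. A minimal sketch (the helper and file names are my own):

```typescript
import { writeFileSync } from 'node:fs';

// Decode the base64 payload returned by the image Lambda into raw bytes.
function decodeImage(base64Image: string): Buffer {
  return Buffer.from(base64Image, 'base64');
}

// Persist the decoded bytes, e.g. as a PNG.
function saveImage(base64Image: string, path: string): void {
  writeFileSync(path, decodeImage(base64Image));
}

// Usage (hypothetical): saveImage(lambdaResponseBody, 'generated.png');
```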
Now deploy your Lambdas. If you're using the Serverless Framework, you can use the following configuration:
```yaml
service: aws-bedrock-ts
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs18.x
  iam:
    role:
      statements:
        - Effect: 'Allow'
          Action:
            - 'bedrock:InvokeModel'
          Resource: '*'

functions:
  bedrockText:
    handler: src/bedrock/text.handler
    name: 'aws-bedrock-text'
    events:
      - httpApi:
          path: /bedrock/text
          method: post
  bedrockImage:
    handler: src/bedrock/image.handler
    name: 'aws-bedrock-image'
    events:
      - httpApi:
          path: /bedrock/image
          method: post
```
Let's test our functions in Postman:
For the text endpoint (`/bedrock/text`), create a new POST request with the following body:

```json
{
  "prompt": "Your search text"
}
```
For the image endpoint (`/bedrock/image`), create a new POST request with the following body:

```json
{
  "text_prompts": [
    {
      "text": "Your search text"
    }
  ],
  "cfg_scale": 10,
  "seed": 0,
  "steps": 50
}
```
Now that your functions are prepared to be utilized with API Gateway, you can begin integrating them into your applications, much like the example I presented in the beginning of this article.
As this is a local app for testing, I've set `Access-Control-Allow-Origin` to `*`. Additionally, you may need to adjust the CORS settings in API Gateway. Please be aware that there is a small cost associated with your API calls; for detailed pricing information, refer to the Amazon Bedrock pricing page.
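If you'd rather handle CORS at the API level, the Serverless Framework can enable it on the HTTP API directly (a config fragment for the serverless.yml above; tighten the allowed origins for production):

```yaml
provider:
  httpApi:
    cors: true
```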
Amazon Bedrock provides a robust selection of high-performing foundation models from top AI companies. Integrating it into your application is straightforward and enhances its capabilities. If you haven't tried it yet, I highly recommend doing so!
Interested in the complete project? Feel free to let me know, and I'll create a part two that covers the UI aspect.
Source: dev.to
aws ai lambda serverless