Purpose: The HTTP API allows you to expose the Lambda function via a public HTTP endpoint that you can call from external applications (e.g., Postman, Jupyter Notebook).
🔹 Screenshot Tip:
🔹 Screenshot of the API Gateway landing page → Create API
Name the API generate-text-lamini.
🔹 Screenshot Tip:
🔹 Screenshot of the Add Integration screen showing your Lambda function selected
Create a route with the method POST and the path /generate.
🔹 Screenshot Tip:
🔹 Screenshot of the Route configuration screen showing POST and /generate
🔹 Screenshot Tip:
🔹 Screenshot of the Stage configuration screen with $default and Auto deploy enabled
Your Invoke URL will look like:
https://your-api-id.execute-api.us-east-2.amazonaws.com
Append /generate to the Invoke URL to get the full endpoint:
https://your-api-id.execute-api.us-east-2.amazonaws.com/generate
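If you script against multiple APIs or regions, the full endpoint can be assembled from its parts. A minimal sketch, assuming the standard HTTP API URL format (the helper name and the sample API id are illustrative, not values from your console):

```python
def invoke_url(api_id: str, region: str, route: str = "/generate") -> str:
    """Build an HTTP API invoke URL from the id API Gateway assigns,
    the deployment region, and the route path."""
    return f"https://{api_id}.execute-api.{region}.amazonaws.com{route}"

# Example with a made-up API id:
print(invoke_url("abc123xyz", "us-east-2"))
```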
🔹 Screenshot Tip:
🔹 Screenshot of the API Invoke URL and the Routes tab showing /generate
Test the route with the following JSON request body:
{
    "prompt": "Write a poem about the ocean."
}
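On the Lambda side, an HTTP API delivers this POST body as a JSON-encoded string in event["body"]. A minimal sketch of how the prompt is extracted — the handler below is illustrative, not your deployed function, and the SageMaker call is elided:

```python
import json

def lambda_handler(event, context):
    # API Gateway HTTP APIs pass the request body as a JSON string
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "")
    # ... forward `prompt` to the SageMaker endpoint here ...
    return {"statusCode": 200, "body": json.dumps({"received": prompt})}

# Simulate an API Gateway event for local testing:
event = {"body": json.dumps({"prompt": "Write a poem about the ocean."})}
print(lambda_handler(event, None))
```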
🔹 Screenshot Tip:
🔹 Screenshot of the API Gateway testing console showing request/response
You can now interact with your model remotely using a simple Python script.

import requests

# Replace with your actual invoke URL
url = "https://your-api-id.execute-api.us-east-2.amazonaws.com/generate"
data = {"prompt": "Write an article about deploying LLM models to AWS services"}

response = requests.post(url, json=data, timeout=60)
response.raise_for_status()  # fail fast on 4xx/5xx errors
print(response.json())
🔹 Screenshot Tip:
🔹 Optional: Screenshot of the API call from a Jupyter Notebook or terminal
🔹 Summary Checklist:
- POST /generate route linked to your Lambda function.
- $default stage with Auto deploy enabled.
🔹 Important Notes:
- The Lambda execution role must have the sagemaker:InvokeEndpoint permission (covered in Part 2).
🔹 Next Step: You are now ready to send HTTP requests to your SageMaker-powered LLM model through API Gateway!
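For reference, the policy statement granting that permission looks roughly like the sketch below. The Resource ARN is a placeholder — in practice, scope it to your actual SageMaker endpoint rather than a wildcard:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sagemaker:InvokeEndpoint",
      "Resource": "arn:aws:sagemaker:us-east-2:*:endpoint/*"
    }
  ]
}
```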