Generative AI (GenAI) is experiencing explosive growth, with major cloud providers releasing new services that empower developers to easily leverage large language models (LLMs). ChatGPT, DeepSeek, Claude, Llama, Copilot, and Gemini are widely recognized and often used as simple Q&A chatbots or enhanced search tools.
While these applications are excellent for individuals seeking to learn new skills or challenge their own thinking, we want to explore building advanced AI agents that do the work themselves, using cloud services and workflows to automate and accelerate decision cycles.
Additionally, these services can transcribe videos in almost any language, generate an audio version or podcast of a document or website, and automatically create documentation from code or design documents.
This article, the first in a three-part series, explores the top generative AI offerings on Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and Oracle Cloud Infrastructure (OCI), focusing on their potential applications for government and military use cases.
The Value Proposition of GenAI
GenAI is revolutionizing government operations, streamlining service delivery by shortening timelines and optimizing the workforce needed to complete the mission effectively.
Automate administrative tasks
GenAI can analyze documents, route inquiries, pre-populate forms, and handle repetitive tasks, freeing human capital for strategic work.
Enhance public communications
Models can conduct fact-checking analysis, create executive summaries, produce boilerplate acquisition language, and ensure compliance, improving efficiency and communication quality.
Improve citizen experience
AI assistants can provide 24/7 support for common inquiries, increasing accessibility and reducing strain on call centers.
Support policy research and development
GenAI can analyze legislation, summarize key issues, suggest research materials, and even draft policy documents, supplementing the work of policy analysts.
Strengthen cybersecurity and logistics
AI software agents can monitor network activity and logistics operations for anomalies and threats, and, when authorized, take autonomous action to mitigate disruptions or alert decision-makers.
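The guardrail idea in that last point, where an agent acts autonomously only within pre-authorized limits and otherwise escalates to a human, can be sketched in a few lines. This is a minimal illustration, not any provider's implementation; the action names, thresholds, and policy list are assumptions.

```python
# Hypothetical sketch: an agent proposes a mitigation for an anomaly, but only
# pre-authorized, lower-risk actions run autonomously; everything else is
# escalated to a human decision-maker.
AUTHORIZED_ACTIONS = {"block_ip", "quarantine_host"}  # assumption: policy-approved list

def decide(anomaly_score: float, proposed_action: str) -> str:
    """Return 'execute', 'escalate', or 'ignore' for a proposed mitigation."""
    if anomaly_score < 0.5:                    # below alerting threshold: no action
        return "ignore"
    if proposed_action in AUTHORIZED_ACTIONS and anomaly_score < 0.9:
        return "execute"                       # autonomous, within guardrails
    return "escalate"                          # high risk or unapproved action
```

In practice the LLM proposes `proposed_action` and a scoring model supplies `anomaly_score`; the deterministic gate above is what keeps the agent inside human-set limits.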
GenAI Cloud Service Providers
Below are AI services offered by popular cloud providers that support building AI agents. These agents leverage LLMs for decision logic while operating within established guardrails and limits, keeping their actions human-authorized and ethical.
NOTE: Some services may not have dedicated public pages, especially if they are integrated into broader offerings. In such cases, the provided links direct to a relevant/related resource.
Amazon Web Services (AWS)
NT Concepts leverages Amazon Bedrock to simplify building and managing production workflows for ML and GenAI. Bedrock’s drag-and-drop interface facilitates integration with services like Amazon SageMaker, AWS Lambda, and AWS CodeCommit. The service’s CI/CD, testing, monitoring, and governance capabilities free up AI talent to focus on higher-level tasks like model refinement. Amazon Q Developer (formerly Amazon CodeWhisperer), a SaaS-based coding partner, generates code recommendations in multiple languages, accelerating development.
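As a minimal sketch of the Bedrock workflow described above, the snippet below builds a request body for an Amazon Titan text model and invokes it through the `bedrock-runtime` API via `boto3`. The region, model ID, and generation parameters are illustrative choices, and AWS credentials with Bedrock model access are assumed.

```python
import json

def titan_request(prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON request body expected by Amazon Titan text models."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens, "temperature": 0.2},
    })

def summarize(prompt: str) -> str:
    """Invoke a Titan model through Bedrock (requires AWS credentials
    and Bedrock model access in the target region)."""
    import boto3
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = client.invoke_model(
        modelId="amazon.titan-text-express-v1",
        body=titan_request(prompt),
    )
    return json.loads(resp["body"].read())["results"][0]["outputText"]
```

The same `invoke_model` call pattern works for other Bedrock-hosted foundation models; only the model ID and request body schema change.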
| GenAI Key Service | Description |
| --- | --- |
| Amazon Bedrock | Provides access to Foundation Models (FMs) from various providers, including Amazon’s Titan FMs. |
| Amazon Titan Models | Amazon’s family of LLMs for tasks such as text summarization, question answering, and code generation. (No public information available on FedRAMP status.) |
| Amazon Q Developer | AI-powered coding companion. |
| Amazon SageMaker | A fully managed service for building, training, and deploying machine learning models. |
| Amazon EC2 | Scalable compute capacity for training large models, including GPU instances. |
| Amazon S3 | Object storage for large datasets and model artifacts. |
| Amazon ECR (Elastic Container Registry) | Stores Docker images. |
| AWS Lambda | Serverless compute for running inference. |
| Amazon API Gateway | Exposes models via APIs. |
| AWS CloudTrail | Monitors API calls for logging and auditing. |
| Amazon CloudWatch | Monitors training jobs and inference metrics. |
Microsoft Azure
| GenAI Key Service | Description |
| --- | --- |
| Azure OpenAI Service | Provides access to a range of OpenAI models. (FedRAMP High P-ATO.) |
| Azure AI Foundry | A web-based environment for building, training, and deploying AI models. |
| Azure Machine Learning | A cloud service for building and deploying machine learning models. |
| Azure AI Services | Pre-built APIs for cognitive tasks like text analytics and computer vision. |
| Azure AI Bot Service | Facilitates chatbot development. |
| Azure Kubernetes Service (AKS) | Managed Kubernetes for deploying models at scale. |
| Azure GPU VMs | Virtual machines with GPUs for accelerated training. |
| Azure Blob Storage | Object storage for datasets and model artifacts. |
| Azure Functions | Serverless compute for running inference. |
| Azure Container Registry | Stores Docker containers. |
| Azure Monitor | Provides monitoring and analytics for model training and deployment. |
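To illustrate how the Azure OpenAI Service listed above is typically consumed, here is a hedged sketch using the `openai` Python package (v1+). The endpoint, key, API version, and deployment name are placeholders you would replace with values from your own Azure resource.

```python
def build_messages(system: str, user: str) -> list[dict]:
    """Build a chat-completions message list (system instruction + user turn)."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

def summarize(document: str) -> str:
    """Call a deployed model through Azure OpenAI Service
    (requires the endpoint and key from your Azure resource)."""
    from openai import AzureOpenAI
    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
        api_key="YOUR-KEY",                                       # placeholder
        api_version="2024-02-01",
    )
    resp = client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME",  # deployment name, not the base model name
        messages=build_messages("You summarize policy documents.", document),
    )
    return resp.choices[0].message.content
```

Note that Azure addresses models by your deployment name rather than the underlying model identifier, which is a common stumbling block when porting code from the standard OpenAI API.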
Google Cloud
| GenAI Key Service | Description |
| --- | --- |
| Vertex AI | A fully managed ML platform for building, training, and deploying models. Provides access to Google’s foundation models including PaLM 2, Imagen, and Codey. (FedRAMP High P-ATO for some Vertex AI services.) |
| Generative AI Studio (Vertex AI Studio) | A no-code environment within Vertex AI for experimenting with and deploying generative AI models. |
| Model Garden | A collection of Google’s pre-trained models. |
| Vertex AI Workbench (Vertex AI Notebooks) | A collaborative IDE for building ML projects. |
| Compute Engine | Scalable VMs with GPUs and TPUs for training large models. |
| Cloud Storage | Object storage for datasets and model artifacts. |
| BigQuery | Serverless data warehouse for analyzing training data. |
| Cloud Run | Serverless containers for deploying models. |
| Cloud TPUs | Hardware accelerators optimized for ML workloads. |
| Cloud Logging & Monitoring (Cloud Observability) | Tools for monitoring model runs. |
| Cloud Functions | Serverless compute for deploying models as microservices. |
| Container Registry (Artifact Registry) | Stores Docker images. |
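As a sketch of consuming a Vertex AI foundation model such as PaLM 2 from the table above, the snippet below uses the `google-cloud-aiplatform` SDK's `TextGenerationModel`. The project ID, region, prompt wording, and generation parameters are illustrative assumptions.

```python
def summarization_prompt(document: str) -> str:
    """Wrap a document in a simple summarization instruction."""
    return f"Summarize the following document in three bullet points:\n\n{document}"

def summarize_with_palm(document: str, project: str) -> str:
    """Call a PaLM 2 text model on Vertex AI (requires the
    google-cloud-aiplatform package and application-default credentials)."""
    import vertexai
    from vertexai.language_models import TextGenerationModel
    vertexai.init(project=project, location="us-central1")  # assumed region
    model = TextGenerationModel.from_pretrained("text-bison")  # PaLM 2 text model
    return model.predict(summarization_prompt(document), max_output_tokens=256).text
```

Swapping in a model from Model Garden generally means changing only the model name and, for newer model families, the SDK class used to load it.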
Oracle Cloud Infrastructure (OCI)
| GenAI Key Service | Description |
| --- | --- |
| OCI Language | Provides pre-trained models for various NLP tasks. (No public information available on FedRAMP status.) |
| OCI Data Science | A platform for building, training, and deploying machine learning models. |
| OCI AI Services | Includes services for vision, speech, and anomaly detection. |
| OCI Compute (BM and VM GPU Instances) | Compute instances with GPUs for accelerated training. |
| Oracle Object Storage | Storage for large datasets and artifacts. |
| OCI Functions | Serverless compute for deploying models. |
| Container Engine for Kubernetes (OKE) | Managed Kubernetes for deploying models. |
| Container Registry (OCIR) | Stores Docker images. |
| Logging and Monitoring | Tools for monitoring model runs. |
| Streaming and Notifications | Services for real-time data processing. |
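As an example of the pre-trained NLP capabilities listed for OCI Language, the sketch below batches documents and requests sentiment analysis through the `oci` Python SDK. It assumes a valid `~/.oci/config` profile; the batching helper is illustrative, while the client and model class names follow the Language service SDK.

```python
def batch_documents(texts: list[str]) -> list[dict]:
    """Assign a stable string key to each document, since the Language
    API expects keyed input documents."""
    return [{"key": str(i), "text": t} for i, t in enumerate(texts)]

def analyze_sentiment(texts: list[str]):
    """Run batch sentiment detection with OCI Language
    (requires the oci package and a valid ~/.oci/config profile)."""
    import oci
    client = oci.ai_language.AIServiceLanguageClient(oci.config.from_file())
    docs = [oci.ai_language.models.TextDocument(**d) for d in batch_documents(texts)]
    details = oci.ai_language.models.BatchDetectLanguageSentimentsDetails(documents=docs)
    return client.batch_detect_language_sentiments(details).data
```

The same keyed-batch pattern applies to the service's other pre-trained tasks, such as entity and key-phrase extraction.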
Be sure to check back for Part 2 of this series, where we will delve further into specific government and military use cases for generative AI, exploring practical applications and demonstrating how these cloud services can be effectively deployed to address real-world challenges. We will also examine the critical considerations for responsible AI implementation, including ethical implications and security best practices.
Nicholas Chadwick
Cloud Migration & Adoption Technical Lead Nick Chadwick is obsessed with creating data-driven government enterprises. With an impressive certification stack (CompTIA A+, Network+, Security+, Cloud+, Cisco, Nutanix, Microsoft, GCP, AWS, and CISSP), Nick is our resident expert on cloud computing, data management, and cybersecurity.