Generative AI (GenAI) is experiencing explosive growth, with major cloud providers releasing new services that empower developers to easily leverage large language models (LLMs). Tools such as OpenAI’s ChatGPT, DeepSeek, Claude, Llama, Microsoft Copilot, and Gemini are widely recognized and often used as simple Q&A chatbots or enhanced search tools.


While these applications are excellent for individuals seeking to learn new skills or challenge their own thinking, we want to explore building advanced AI agents that actually do the work, using cloud services and workflows to automate and accelerate decision cycles.

Additionally, these services can transcribe videos in almost any language, generate an audio version or podcast of a document or website, and automatically create documentation from code or design documents.

This article, the first in a three-part series, explores the top generative AI offerings on Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and Oracle Cloud Infrastructure (OCI), focusing on their potential applications for government and military use cases.

The Value Proposition of GenAI

GenAI is revolutionizing government operations, streamlining service delivery by reducing timelines and optimizing the workforce needed to complete the mission effectively.

Automate administrative tasks

GenAI can analyze documents, route inquiries, pre-populate standard (pro forma) forms, and handle repetitive tasks, freeing up human capital for strategic work.

Enhance public communications

Models can conduct fact-checking analysis, create executive summaries, produce boilerplate acquisition language, and ensure compliance, improving efficiency and communication quality.

Improve citizen experience

AI assistants can provide 24/7 support for common inquiries, increasing accessibility and reducing strain on call centers.

Support policy research and development

GenAI can analyze legislation, summarize key issues, suggest research materials, and even draft policy documents, supplementing the work of policy analysts.

Strengthen cybersecurity and logistics

AI software agents can monitor network activity and logistics operations for anomalies and threats and, when authorized, take autonomous action to mitigate disruptions or alert decision-makers.

By leveraging cloud-hosted generative AI services, developers can rapidly prototype and build intelligent applications without requiring deep machine learning (ML) expertise. These services abstract away the complexities of model training and deployment, offering scalable solutions on demand. With the right guidance, any organization can begin exploring the transformative potential of generative AI – from accelerating software delivery to analyzing text at scale and enhancing decision-making.

GenAI Cloud Service Providers

Below are AI services from popular cloud service providers that support the development of AI agents, which use LLMs for decision logic while operating within established guardrails and limits so that actions remain human-authorized and ethical.
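
To make the guardrail idea concrete, here is a minimal, hypothetical sketch of an agent loop: the LLM only proposes an action, an allow-list constrains what it may do, and a human must approve before anything executes. The function and action names are placeholders, not any specific vendor API.

```python
# Hypothetical sketch of a guardrailed agent loop: the LLM proposes, a human disposes.

APPROVED_ACTIONS = {"summarize_report", "draft_response"}  # guardrail: explicit allow-list


def llm_propose_action(task: str) -> str:
    """Placeholder for a call to any hosted LLM that returns a proposed action name."""
    return "summarize_report"


def human_authorizes(action: str, task: str) -> bool:
    """Placeholder for an approval step (ticket, chat prompt, or console confirmation)."""
    return input(f"Authorize '{action}' for task '{task}'? [y/N] ").strip().lower() == "y"


def run_agent(task: str) -> None:
    action = llm_propose_action(task)
    if action not in APPROVED_ACTIONS:      # guardrail: reject anything off the allow-list
        print(f"Blocked unapproved action: {action}")
        return
    if not human_authorizes(action, task):  # guardrail: human stays in the loop
        print("Action declined by operator.")
        return
    print(f"Executing {action} for: {task}")  # the only path that performs work


run_agent("Summarize last week's logistics anomalies")
```

In practice, the allow-list, authorization step, and audit logging would be backed by the cloud services described in the sections that follow.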

NOTE: Some services may not have dedicated public pages, especially if they are integrated into broader offerings. In such cases, the provided links direct to a relevant/related resource.

Amazon Web Services (AWS)

NT Concepts leverages Amazon Bedrock to simplify building and managing production workflows for ML and GenAI. Bedrock’s drag-and-drop interface facilitates integration with services like Amazon SageMaker, AWS Lambda, and AWS CodeCommit. The service’s CI/CD, testing, monitoring, and governance capabilities free up AI talent to focus on higher-level tasks like model refinement. Amazon Q Developer (formerly CodeWhisperer), an AI-powered coding companion delivered as SaaS, generates code recommendations in multiple languages, accelerating development.

| GenAI Key Service | Description |
| --- | --- |
| Amazon Bedrock | Provides access to Foundation Models (FMs) from various providers, including Amazon’s Titan FMs. |
| Amazon Titan Models | Amazon’s family of LLMs for tasks such as text summarization, question answering, and code generation. (No public information available on FedRAMP status.) |
| Amazon Q Developer | AI-powered coding companion. |
| Amazon SageMaker | A fully managed service for building, training, and deploying machine learning models. |
| Amazon EC2 | Scalable compute capacity for training large models, including GPU instances. |
| Amazon S3 | Object storage for large datasets and model artifacts. |
| Amazon ECR (Elastic Container Registry) | Stores Docker images. |
| AWS Lambda | Serverless compute for running inference. |
| Amazon API Gateway | Exposes models via APIs. |
| AWS CloudTrail | Monitors API calls for logging and auditing. |
| Amazon CloudWatch | Monitors training jobs and inference metrics. |
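
As a concrete illustration, the following minimal sketch invokes a Bedrock-hosted foundation model from Python with boto3. It assumes AWS credentials are configured, Bedrock model access has been granted in the account, and that the Amazon Titan Text Express model is available in the chosen region; request and response schemas vary by model family.

```python
# Minimal sketch: invoking an Amazon Bedrock foundation model with boto3.
# Assumes configured AWS credentials and Bedrock model access in this region.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = {
    "inputText": "Summarize the key obligations in the attached acquisition clause.",
    "textGenerationConfig": {"maxTokenCount": 512, "temperature": 0.2},
}

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",  # swap for any Bedrock model you have access to
    body=json.dumps(request_body),
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```

The same invoke_model call works for other Bedrock models once their model IDs and request formats are swapped in.
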
Microsoft Azure

NT Concepts utilizes Azure’s OpenAI Service to access powerful LLMs like GPT-3.5, GPT-4, Codex, and DALL-E 2, enabling the development of engaging data search and discovery applications. Azure’s diverse AI services offer pre-trained models and tools for various tasks.

| GenAI Key Service | Description |
| --- | --- |
| Azure OpenAI Service | Provides access to a range of OpenAI models. (FedRAMP High P-ATO.) |
| Azure AI Foundry | A web-based environment for building, training, and deploying AI models. |
| Azure Machine Learning | A cloud service for building and deploying machine learning models. |
| Azure AI Services | Pre-built APIs for cognitive tasks like text analytics and computer vision. |
| Azure AI Bot Service | Facilitates chatbot development. |
| Azure Kubernetes Service (AKS) | Managed Kubernetes for deploying models at scale. |
| Azure GPU VMs | Virtual machines with GPUs for accelerated training. |
| Azure Blob Storage | Object storage for datasets and model artifacts. |
| Azure Functions | Serverless compute for running inference. |
| Azure Container Registry | Stores Docker containers. |
| Azure Monitor | Provides monitoring and analytics for model training and deployment. |
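
A minimal sketch of calling a deployed model through the Azure OpenAI Service with the openai Python SDK (v1+) is shown below. The endpoint, API key variable, and deployment name are placeholders for values from your own Azure OpenAI resource.

```python
# Minimal sketch: chat completion against an Azure OpenAI deployment.
# Endpoint, key variable, and deployment name are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4-deployment",  # the *deployment* name you created, not the base model name
    messages=[
        {"role": "system", "content": "You summarize policy documents for analysts."},
        {"role": "user", "content": "Summarize the main provisions of this draft memo: ..."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```
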
Google Cloud

NT Concepts leverages Google Cloud’s Vertex AI, a unified platform for building and deploying machine learning models, including generative AI solutions. Vertex AI provides access to pre-trained models and supports custom model training. The Model Garden provides a collection of pre-trained models, accelerating development.

| GenAI Key Service | Description |
| --- | --- |
| Vertex AI | A fully managed ML platform for building, training, and deploying models. Provides access to Google’s foundation models including PaLM 2, Imagen, and Codey. (FedRAMP High P-ATO for some Vertex AI services.) |
| Generative AI Studio (Vertex AI Studio) | A no-code environment within Vertex AI for experimenting with and deploying generative AI models. |
| Model Garden | A collection of Google’s pre-trained models. |
| Vertex AI Workbench (Vertex AI Notebooks) | A collaborative IDE for building ML projects. |
| Compute Engine | Scalable VMs with GPUs and TPUs for training large models. |
| Cloud Storage | Object storage for datasets and model artifacts. |
| BigQuery | Serverless data warehouse for analyzing training data. |
| Cloud Run | Serverless containers for deploying models. |
| Cloud TPUs | Hardware accelerators optimized for ML workloads. |
| Cloud Logging & Monitoring (Cloud Observability) | Tools for monitoring model runs. |
| Cloud Functions | Serverless compute for deploying models as microservices. |
| Container Registry (Artifact Registry) | Stores Docker images. |
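
The sketch below shows a minimal text-generation call with the Vertex AI Python SDK. It assumes the google-cloud-aiplatform package is installed, application default credentials are configured, and the PaLM 2 text model (text-bison) is still available to the project and region; model names and availability change over time, so treat the project ID and model ID as placeholders.

```python
# Minimal sketch: text generation with the Vertex AI SDK (PaLM 2 text-bison).
# Project, region, and model ID are placeholders and may need updating.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project/region

model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(
    "Draft a one-paragraph summary of the attached logistics report.",
    temperature=0.2,
    max_output_tokens=256,
)

print(response.text)
```
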
Oracle Cloud Infrastructure (OCI)

AI teams can use OCI’s AI services, including OCI Language, for tasks like text classification, entity extraction, and sentiment analysis. These services enable the development of applications such as analyzing customer feedback and building chatbots. OCI also offers GPU instances for training large models.

| GenAI Key Service | Description |
| --- | --- |
| OCI Language | Provides pre-trained models for various NLP tasks. (No public information available on FedRAMP status.) |
| OCI Data Science | A platform for building, training, and deploying machine learning models. |
| OCI AI Services | Includes services for vision, speech, and anomaly detection. |
| OCI Compute (BM and VM GPU Instances) | Compute instances with GPUs for accelerated training. |
| Oracle Object Storage | Storage for large datasets and artifacts. |
| OCI Functions | Serverless compute for deploying models. |
| Container Engine for Kubernetes (OKE) | Managed Kubernetes for deploying models. |
| Container Registry (OCIR) | Stores Docker images. |
| Logging and Monitoring | Tools for monitoring model runs. |
| Streaming and Notifications | Services for real-time data processing. |
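
As an illustration, the following minimal sketch runs sentiment analysis with OCI Language through the OCI Python SDK. It assumes a valid ~/.oci/config profile; exact request and response model names can differ slightly across SDK versions, so treat this as a starting point rather than a definitive implementation.

```python
# Minimal sketch: sentiment analysis with OCI Language via the OCI Python SDK.
# Assumes a valid ~/.oci/config profile; field names may vary by SDK version.
import oci

config = oci.config.from_file()  # reads the DEFAULT profile from ~/.oci/config
language_client = oci.ai_language.AIServiceLanguageClient(config)

documents = [
    oci.ai_language.models.TextDocument(
        key="doc-1",
        text="The new routing process cut our response time in half.",
        language_code="en",
    )
]

details = oci.ai_language.models.BatchDetectLanguageSentimentsDetails(documents=documents)
response = language_client.batch_detect_language_sentiments(details)

for doc in response.data.documents:
    # Aspect-level results; the exact response fields vary slightly by SDK version.
    print(doc.key, [(aspect.text, aspect.sentiment) for aspect in doc.aspects])
```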

Be sure to check back for Part 2 of this series, where we will delve further into specific government and military use cases for generative AI, exploring practical applications and demonstrating how these cloud services can be effectively deployed to address real-world challenges. We will also examine the critical considerations for responsible AI implementation, including ethical implications and security best practices.

Nicholas Chadwick

Cloud Migration & Adoption Technical Lead Nick Chadwick is obsessed with creating data-driven government enterprises. With an impressive certification stack (CompTIA A+, Network+, Security+, Cloud+, Cisco, Nutanix, Microsoft, GCP, AWS, and CISSP), Nick is our resident expert on cloud computing, data management, and cybersecurity.