Note: this page is currently available for review purposes only. Our EKS AWS Marketplace offering is not yet available to the public.
Using gdotv on AWS Marketplace (EKS)
gdotv is available as a Kubernetes application on the AWS Marketplace. It is a graph database client, ideal for developers starting a new graph project or supporting an existing one. It is compatible with Amazon Neptune's Gremlin and Cypher APIs, as well as Apache TinkerPop-enabled graph databases such as JanusGraph, Gremlin Server and Aerospike Graph. It is also compatible with Google Cloud Spanner Graph, Dgraph, Oracle Graph and many more graph databases.
We provide state-of-the-art development tools with advanced autocomplete, syntax checking and graph visualization.
With gdotv you can:
- View your graph database's schema in one click
- Write and run Gremlin, Cypher, SPARQL, GQL and DQL queries against your database
- Visualize query results across a variety of formats such as graph visualization, JSON and tables
- Explore your data interactively with our no-code graph database browser
- Debug Gremlin queries step by step, and access profiling tools for Gremlin and Cypher
It is deployed on an Amazon EKS cluster via Helm, running five containerized services behind a Network Load Balancer with TLS enabled by default.
Deploying a gdotv instance
There are two ways to deploy gdotv on EKS:
- One-click deploy via CloudFormation - AWS creates the EKS cluster and deploys gdotv automatically. Recommended for first-time deployments.
- Manual deploy via Helm - Deploy into an existing EKS cluster using the Helm chart directly. Recommended when you already have an EKS cluster and want full control over the configuration.
Pricing
gdotv on AWS Marketplace (EKS) is usage-based. Charges are billed through your AWS account via the AWS Marketplace Metering Service. For pricing details, refer to the Marketplace listing page.
Architecture
gdotv is deployed as a set of Kubernetes workloads on EKS. The deployment includes five containers managed by Helm:
- gdotv-developer: The gdotv web application (Spring Boot + Vue.js)
- gdotv-keycloak: A Keycloak instance providing user authentication, federation and SSO capabilities
- gdotv-postgres: A PostgreSQL database storing gdotv application data
- gdotv-keycloak-postgres: A PostgreSQL database storing Keycloak configuration and realm data
- gdotv-nginx: An NGINX reverse proxy fronting the application over port 443, with TLS enabled by default
The architecture of the application is as shown below:

Sizing
The EKS node group should be provisioned with sufficient resources to run all five containers. The following table provides sizing guidance:
| Node Instance Type | Concurrent Users | Notes |
|---|---|---|
| t3.large | Up to 3 users | Minimum viable for evaluation |
| t3.xlarge | Up to 10 users | Recommended for most teams |
| t3.2xlarge | Up to 20 users | For larger teams |
| m5.2xlarge | Over 20 users | For enterprise deployments |
For production use, we recommend at least 3 nodes across two availability zones for high availability.
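As a rough rule of thumb, the sizing table above can be expressed as a small helper script (a sketch only; the thresholds simply mirror the table and are not enforced by gdotv):

```shell
# Suggest a node instance type for an expected number of concurrent users,
# mirroring the sizing table above.
suggest_instance_type() {
  users="$1"
  if [ "$users" -le 3 ]; then
    echo "t3.large"      # minimum viable for evaluation
  elif [ "$users" -le 10 ]; then
    echo "t3.xlarge"     # recommended for most teams
  elif [ "$users" -le 20 ]; then
    echo "t3.2xlarge"    # larger teams
  else
    echo "m5.2xlarge"    # enterprise deployments
  fi
}

suggest_instance_type 8    # prints: t3.xlarge
```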
One-click deploy via CloudFormation
The CloudFormation stack creates a complete EKS environment - VPC, cluster, node group, EBS CSI driver - and deploys gdotv automatically. No prior Kubernetes experience is required.
Accept the Marketplace agreement first
Before launching the stack, you must subscribe to gdotv on the AWS Marketplace and accept the terms. Navigate to the gdotv AWS Marketplace listing, click View purchase options, and accept the terms. The CloudFormation deployment will fail if this step is skipped.
Or deploy manually via the AWS CLI:
aws cloudformation create-stack \
--stack-name gdotv \
--template-url https://gdotv-cloudformation.s3.amazonaws.com/gdotv-eks.yaml \
--capabilities CAPABILITY_IAM \
--region <your-region> \
--parameters \
ParameterKey=ClusterName,ParameterValue=gdotv-eks \
ParameterKey=NodeInstanceType,ParameterValue=t3.xlarge \
ParameterKey=NodeGroupDesiredSize,ParameterValue=3
Parameters
| Parameter | Default | Description |
|---|---|---|
| ClusterName | gdotv-eks | Name of the EKS cluster |
| KubernetesVersion | 1.32 | Kubernetes version |
| NodeInstanceType | t3.xlarge | EC2 instance type for worker nodes |
| NodeGroupDesiredSize | 3 | Initial number of worker nodes |
| NodeGroupMinSize | 2 | Minimum number of nodes (autoscaling) |
| NodeGroupMaxSize | 6 | Maximum number of nodes (autoscaling) |
| NodeDiskSize | 100 | EBS volume size in GB per node |
| Hostname | (empty) | Custom hostname (leave empty to auto-detect the NLB hostname) |
| GdotvReplicaCount | 2 | Number of gdotv application replicas |
| ExistingVpcId | (empty) | ID of an existing VPC. Leave empty to create a new one |
| ExistingPublicSubnet1Id | (empty) | Existing public subnet in AZ 1 (required if using existing VPC) |
| ExistingPublicSubnet2Id | (empty) | Existing public subnet in AZ 2 (required if using existing VPC) |
| ExistingPrivateSubnet1Id | (empty) | Existing private subnet in AZ 1 (required if using existing VPC) |
| ExistingPrivateSubnet2Id | (empty) | Existing private subnet in AZ 2 (required if using existing VPC) |
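For example, to deploy into an existing VPC, the four Existing* subnet parameters must be supplied together with ExistingVpcId. A sketch (all IDs below are placeholders; the create-stack call is commented out because it requires AWS credentials):

```shell
# Build the parameter list for deploying into an existing VPC.
# The VPC and subnet IDs below are placeholders - substitute your own.
PARAMS="
ParameterKey=ClusterName,ParameterValue=gdotv-eks
ParameterKey=ExistingVpcId,ParameterValue=vpc-0123456789abcdef0
ParameterKey=ExistingPublicSubnet1Id,ParameterValue=subnet-aaaa1111
ParameterKey=ExistingPublicSubnet2Id,ParameterValue=subnet-bbbb2222
ParameterKey=ExistingPrivateSubnet1Id,ParameterValue=subnet-cccc3333
ParameterKey=ExistingPrivateSubnet2Id,ParameterValue=subnet-dddd4444
"
echo "$PARAMS"

# aws cloudformation create-stack \
#   --stack-name gdotv \
#   --template-url https://gdotv-cloudformation.s3.amazonaws.com/gdotv-eks.yaml \
#   --capabilities CAPABILITY_IAM \
#   --region <your-region> \
#   --parameters $PARAMS
```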
What the stack creates
- A VPC with public and private subnets across two availability zones (or reuses an existing one)
- NAT Gateway for outbound internet access from private subnets
- EKS cluster with a managed node group
- OIDC provider for IRSA
- EBS CSI driver add-on for persistent volume support
- An IAM role with the marketplacemetering:RegisterUsage permission bound to the gdotv service account
- A temporary EC2 deployer instance that runs helm install and signals CloudFormation on completion
Deployment duration
- New VPC + new cluster: approximately 20–30 minutes
- Existing VPC + new cluster: approximately 15–20 minutes
Retrieving credentials after deployment
Once the stack reaches CREATE_COMPLETE, retrieve the initial passwords and NLB hostname from the deployer instance logs:
INSTANCE_ID=$(aws cloudformation describe-stack-resource \
--stack-name gdotv \
--logical-resource-id DeployerInstance \
--query 'StackResourceDetail.PhysicalResourceId' \
--output text)
aws ssm start-session \
--target "$INSTANCE_ID" \
--document-name AWS-StartNonInteractiveCommand \
--parameters 'command=["cat /var/log/gdotv-deploy.log"]'
The log contains the NLB hostname, the gdotv user password and the Keycloak admin password.
Manual deploy via Helm
Use this method to deploy gdotv into an existing EKS cluster.
Prerequisites
Install the following CLI tools if you haven't already:
- AWS CLI
- kubectl
- eksctl
- helm
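The steps below rely on the AWS CLI, kubectl, eksctl and helm. A quick way to check which of them are already on your PATH:

```shell
# Report which of the required CLI tools are installed.
for tool in aws kubectl eksctl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```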
Step 1 - Configure kubectl
aws eks update-kubeconfig --name <CLUSTER_NAME> --region <REGION>
kubectl get nodes # verify connectivity
Step 2 - Create the IRSA role
The gdotv pod must call the AWS Marketplace Metering API (RegisterUsage) to validate its license. This requires an IAM role bound to the Kubernetes service account via IRSA.
WARNING
EKS Pod Identity is not supported by the AWS Marketplace Metering API. IRSA is required.
2a - Ensure your cluster has an OIDC provider:
eksctl utils associate-iam-oidc-provider \
--cluster <CLUSTER_NAME> \
--region <REGION> \
--approve
2b - Create the IAM role and attach the metering policy:
eksctl create iamserviceaccount \
--cluster <CLUSTER_NAME> \
--region <REGION> \
--namespace gdotv \
--name gdotv-developer \
--attach-policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage \
--approve \
--override-existing-serviceaccounts
This creates an IAM role with the marketplacemetering:RegisterUsage permission and annotates the gdotv-developer Kubernetes service account in the gdotv namespace with the role ARN.
Note the role ARN output - you will need it in the next step:
ROLE_ARN=$(aws iam list-roles \
--query "Roles[?contains(RoleName, 'gdotv')].Arn" \
--output text)
echo "$ROLE_ARN"
Step 3 - Add Amazon Neptune IAM permissions (optional)
If you plan to connect gdotv to Amazon Neptune with IAM authentication enabled, add the necessary Neptune permissions to the role created above. We recommend using the following fine-grained inline policy rather than NeptuneFullAccess:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowPassRoleForNeptune",
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:passedToService": "rds.amazonaws.com"
}
}
},
{
"Sid": "AllowDataAccessForNeptune",
"Effect": "Allow",
"Action": [
"neptune-db:*",
"neptune-graph:*"
],
"Resource": [
"<ENTER COMMA SEPARATED LIST OF AMAZON NEPTUNE ARNs>"
]
}
]
}
TIP
You can find your Amazon Neptune DB cluster ARN or Neptune Analytics graph ARN from the Neptune console under the Configuration section.
ARN formats:
- arn:aws:rds:{region}:{account}:cluster:{cluster-name} (Amazon Neptune)
- arn:aws:neptune-graph:{region}:{account}:graph/{graph-id} (Amazon Neptune Analytics)
To grant access to all Neptune resources, use "Resource": ["*"]. We recommend following the principle of least privilege.
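To illustrate, the ARN formats above can be assembled from their parts (the region, account ID and resource names below are placeholders):

```shell
# Assemble Neptune ARNs from their components (placeholder values).
REGION="us-east-1"
ACCOUNT="123456789012"

# Amazon Neptune DB cluster ARN
NEPTUNE_CLUSTER_ARN="arn:aws:rds:${REGION}:${ACCOUNT}:cluster:my-neptune-cluster"
# Amazon Neptune Analytics graph ARN
NEPTUNE_GRAPH_ARN="arn:aws:neptune-graph:${REGION}:${ACCOUNT}:graph/g-abc123"

echo "$NEPTUNE_CLUSTER_ARN"
echo "$NEPTUNE_GRAPH_ARN"
```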
You can optionally also add CloudWatchLogsReadOnlyAccess to enable our Slow Query and Audit Logs functionality.
Step 4 - Authenticate Helm with ECR
aws ecr get-login-password --region us-east-1 \
| helm registry login \
--username AWS \
--password-stdin \
709825985650.dkr.ecr.us-east-1.amazonaws.com
Step 5 - Install the Helm chart
helm upgrade --install gdotv-developer \
oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
--version <CHART_VERSION> \
--create-namespace \
--namespace gdotv \
--set gdotv.env.licensingMode="aws-marketplace-eks" \
--set gdotv.env.deploymentMode="eks" \
--set gdotv.env.awsMarketplaceProductCode="akyagndqcrxxwssbchcyjon9i" \
--set gdotv.env.awsRegion="<REGION>" \
--set serviceAccount.roleArn="$ROLE_ARN" \
--set gdotvPostgres.auth.password="$(openssl rand -base64 24 | tr -d '/+=')" \
--set keycloakPostgres.auth.password="$(openssl rand -base64 24 | tr -d '/+=')" \
--set keycloak.auth.adminPassword="$(openssl rand -base64 24 | tr -d '/+=')" \
--set keycloak.client.secret="$(openssl rand -base64 32 | tr -d '/+=')" \
--set gdotv.bootstrap.defaultUserPassword="$(openssl rand -base64 24 | tr -d '/+=')" \
--set gdotvPostgres.persistence.storageClass="gp3" \
--set keycloakPostgres.persistence.storageClass="gp3" \
--set nginx.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb" \
--set nginx.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-scheme"="internet-facing" \
--set gdotv.env.hostname="<YOUR_HOSTNAME_OR_LEAVE_EMPTY>"
TIP
The gp3 storage class requires the EBS CSI driver add-on. If it is not installed on your cluster, run:
eksctl create addon --name aws-ebs-csi-driver --cluster <CLUSTER_NAME> --region <REGION>
Once deployed, retrieve the initial passwords from the Kubernetes secrets:
# gdotv user password
kubectl get secret gdotv-developer-gdotv-secrets \
-n gdotv \
-o jsonpath='{.data.gdotv-user-password}' | base64 -d
# Keycloak admin password
kubectl get secret gdotv-developer-keycloak-secrets \
-n gdotv \
-o jsonpath='{.data.admin-password}' | base64 -d
Connecting to your EKS cluster
To run kubectl or helm commands against your gdotv deployment, authenticate to the EKS cluster first.
aws eks update-kubeconfig --name <CLUSTER_NAME> --region <REGION>
kubectl get pods -n gdotv
You should see the gdotv pods listed with their status.
Accessing gdotv
Once the deployment is complete, find the NLB hostname assigned to the LoadBalancer service:
kubectl get svc -n gdotv -l app.kubernetes.io/component=nginx
Navigate to https://<NLB_HOSTNAME> in your browser. Since gdotv uses a self-signed TLS certificate by default, you will see a browser warning. Click Advanced, then Proceed to continue.

Authenticating to gdotv
gdotv uses Keycloak for authentication. On first deployment, a default user is created automatically:
- Username: gdotv
- Password: Auto-generated during deployment
To retrieve the default user password:
kubectl get secret gdotv-developer-gdotv-secrets \
-n gdotv \
-o jsonpath='{.data.gdotv-user-password}' | base64 -d
When navigating to gdotv while unauthenticated, you will be presented with the Keycloak login screen:

We recommend changing the default password after your first login. To do so, click on the username in the top-right menu bar, then select Change Password. You will be redirected to the Keycloak profile management page where you can update your password.
Authenticating to the gdotv Keycloak realm
The gdotv Keycloak realm is where all gdotv users are stored and managed. New authentication flows, such as Single Sign On, can be configured from the gdotv Keycloak realm admin console.
The gdotv Keycloak realm admin console can be accessed at:
https://<HOSTNAME>/kc/admin/gdotv/console/
The default realm admin credentials are:
- Username: gdotv
- Password: Same as the default gdotv user password (see above)
Authenticating to the master Keycloak realm
The master Keycloak realm provides access to the master administration interface of Keycloak. Under normal circumstances, it should rarely need to be accessed. However, we recommend logging in after initial deployment to change the master admin password.
The master Keycloak realm admin console can be accessed at:
https://<HOSTNAME>/kc/admin/master/console/
To retrieve the Keycloak admin credentials:
kubectl get secret gdotv-developer-keycloak-secrets \
-n gdotv \
-o jsonpath='{.data.admin-password}' | base64 -d
The default master admin username is admin.
Configuring a TLS certificate
By default, gdotv uses a self-signed certificate to serve its web interface over HTTPS. You may wish to configure your own trusted certificate against a domain name that you own.
WARNING
When changing the hostname, you must also update the TLS certificate in the same helm upgrade command. Running separate helm upgrade commands will cause values to be overwritten due to how --reuse-values works.
Renewing or replacing the certificate for the current hostname
HOSTNAME="<your-current-hostname>"
# Generate a self-signed certificate (or use your CA-issued .crt and .key files)
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
-keyout tls.key \
-out tls.crt \
-subj "/CN=gdotv/O=gdotv" \
-addext "subjectAltName=DNS:${HOSTNAME}"
TLS_CRT=$(base64 -w0 tls.crt)
TLS_KEY=$(base64 -w0 tls.key)
helm upgrade gdotv-developer \
oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
--namespace gdotv \
--reuse-values \
--wait \
--set nginx.tls.certificate="$TLS_CRT" \
--set nginx.tls.privateKey="$TLS_KEY"
kubectl rollout restart deployment/gdotv-developer-nginx -n gdotv
Confirm the new certificate is active:
echo | openssl s_client -connect <HOSTNAME>:443 2>/dev/null | openssl x509 -noout -dates
Using a Kubernetes TLS secret
Alternatively, you can store your certificate as a Kubernetes TLS secret:
kubectl create secret tls my-tls-cert \
-n gdotv \
--cert=path/to/tls.crt \
--key=path/to/tls.key
helm upgrade gdotv-developer \
oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
--namespace gdotv \
--reuse-values \
--wait \
--set nginx.tls.existingSecret=my-tls-cert
kubectl rollout restart deployment/gdotv-developer-nginx -n gdotv
Reverting to the default self-signed certificate
# Get the current NLB hostname
NLB=$(kubectl get svc gdotv-developer-nginx -n gdotv \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Generate a self-signed certificate for the NLB hostname
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout tls.key -out tls.crt \
-subj "/CN=gdotv" \
-addext "subjectAltName=DNS:${NLB}"
TLS_CRT=$(base64 -w0 tls.crt)
TLS_KEY=$(base64 -w0 tls.key)
helm upgrade gdotv-developer \
oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
--namespace gdotv \
--reuse-values \
--wait \
--set gdotv.env.hostname="$NLB" \
--set nginx.tls.existingSecret="" \
--set nginx.tls.certificate="$TLS_CRT" \
--set nginx.tls.privateKey="$TLS_KEY"
kubectl rollout restart deployment/gdotv-developer-keycloak -n gdotv
kubectl rollout restart deployment/gdotv-developer -n gdotv
kubectl rollout restart deployment/gdotv-developer-nginx -n gdotv
Configuring a custom hostname
If you deployed gdotv without a hostname (using the auto-detected NLB hostname) and later want to configure a custom domain:
TIP
If you have an active session on the old hostname, your browser may have cached session data that causes redirects to the old address. We recommend logging out of gdotv before changing the hostname, or testing the new hostname in a private/incognito browser window.
- Fetch the NLB hostname of your gdotv deployment:
kubectl get svc gdotv-developer-nginx -n gdotv \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
- Create a DNS CNAME record on your domain manager, pointing your domain (e.g. gdotv.example.com) to the NLB hostname.
- Update the hostname and TLS certificate together in a single helm upgrade command:
With a self-signed certificate:
HOSTNAME="gdotv.example.com"
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
-keyout tls.key -out tls.crt \
-subj "/CN=gdotv/O=gdotv" \
-addext "subjectAltName=DNS:${HOSTNAME}"
TLS_CRT=$(base64 -w0 tls.crt)
TLS_KEY=$(base64 -w0 tls.key)
helm upgrade gdotv-developer \
oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
--namespace gdotv \
--reuse-values \
--wait \
--set gdotv.env.hostname="$HOSTNAME" \
--set nginx.tls.certificate="$TLS_CRT" \
--set nginx.tls.privateKey="$TLS_KEY"
With a Kubernetes TLS secret:
kubectl create secret tls my-tls-cert \
-n gdotv \
--cert=path/to/tls.crt \
--key=path/to/tls.key
helm upgrade gdotv-developer \
oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
--namespace gdotv \
--reuse-values \
--wait \
--set gdotv.env.hostname=gdotv.example.com \
--set nginx.tls.existingSecret=my-tls-cert
- Restart all services to pick up the changes (Keycloak first, then gdotv):
kubectl rollout restart deployment/gdotv-developer-keycloak -n gdotv
kubectl rollout restart deployment/gdotv-developer -n gdotv
kubectl rollout restart deployment/gdotv-developer-nginx -n gdotv
Upgrading to a new version
gdotv receives frequent updates with new features and improvements. Upgrading only updates the gdotv application container - Keycloak, nginx and the databases are unaffected.
First, authenticate Helm with ECR (tokens expire after 12 hours):
aws ecr get-login-password --region us-east-1 \
| helm registry login \
--username AWS \
--password-stdin \
709825985650.dkr.ecr.us-east-1.amazonaws.com
Then upgrade:
helm upgrade gdotv-developer \
oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
--namespace gdotv \
--reuse-values \
--wait \
--set gdotv.image.tag=<NEW_VERSION>
--reuse-values preserves all previously set values (passwords, hostname, certificates) so they do not need to be passed again.
Monitor the rollout:
kubectl rollout status deployment/gdotv-developer -n gdotv
If the new version fails to start, roll back to the previous release:
helm rollback gdotv-developer -n gdotv
Backup and Restore
Recommended: AWS Backup for EKS
For production deployments, we recommend using AWS Backup, which supports EKS namespace-level backups including persistent volume data.
To set up AWS Backup for EKS:
- Enable the EKS backup feature and install the Velero-based agent on your cluster:
aws eks create-addon \
--cluster-name <CLUSTER_NAME> \
--addon-name aws-efs-csi-driver \
--region <REGION>
- Create a backup vault and backup plan targeting the gdotv namespace via the AWS Backup console. Refer to the AWS Backup for EKS documentation for full steps.
Manual backup with pg_dump
For one-off backups, you can back up the PostgreSQL databases directly. gdotv uses two PostgreSQL instances: one for the application and one for Keycloak.
Back up the gdotv application database:
kubectl exec -n gdotv statefulset/gdotv-developer-gdotv-postgres -- \
pg_dump -U postgres --clean --if-exists postgres \
> gdotv-backup-$(date +%Y%m%d-%H%M%S).sql
Back up the Keycloak database:
kubectl exec -n gdotv statefulset/gdotv-developer-keycloak-postgres -- \
pg_dump -U postgres --clean --if-exists postgres \
> keycloak-backup-$(date +%Y%m%d-%H%M%S).sql
Restore the gdotv application database:
WARNING
Restoring overwrites current data. Stop the dependent application pod first to avoid write conflicts.
kubectl scale deployment/gdotv-developer -n gdotv --replicas=0
cat gdotv-backup-<timestamp>.sql | kubectl exec -i -n gdotv \
statefulset/gdotv-developer-gdotv-postgres -- \
psql -U postgres postgres
kubectl scale deployment/gdotv-developer -n gdotv --replicas=2
Restore the Keycloak database:
kubectl scale deployment/gdotv-developer-keycloak -n gdotv --replicas=0
cat keycloak-backup-<timestamp>.sql | kubectl exec -i -n gdotv \
statefulset/gdotv-developer-keycloak-postgres -- \
psql -U postgres postgres
kubectl scale deployment/gdotv-developer-keycloak -n gdotv --replicas=1
Stop / Start Services
Stop all services (scale to zero)
This preserves PVCs and Secrets - data is not lost.
kubectl scale deployment/gdotv-developer -n gdotv --replicas=0
kubectl scale deployment/gdotv-developer-keycloak -n gdotv --replicas=0
kubectl scale deployment/gdotv-developer-nginx -n gdotv --replicas=0
kubectl scale statefulset/gdotv-developer-gdotv-postgres -n gdotv --replicas=0
kubectl scale statefulset/gdotv-developer-keycloak-postgres -n gdotv --replicas=0
Start all services
Start order matters: databases first, then Keycloak, then gdotv, then nginx.
kubectl scale statefulset/gdotv-developer-gdotv-postgres -n gdotv --replicas=1
kubectl scale statefulset/gdotv-developer-keycloak-postgres -n gdotv --replicas=1
kubectl scale deployment/gdotv-developer-keycloak -n gdotv --replicas=1
kubectl scale deployment/gdotv-developer -n gdotv --replicas=2
kubectl scale deployment/gdotv-developer-nginx -n gdotv --replicas=2
Restart a single service
kubectl rollout restart deployment/gdotv-developer -n gdotv
kubectl rollout restart deployment/gdotv-developer-keycloak -n gdotv
kubectl rollout restart deployment/gdotv-developer-nginx -n gdotv
Watch rollout progress:
kubectl rollout status deployment/gdotv-developer -n gdotv
Check Logs
gdotv application
# Live logs
kubectl logs -f -n gdotv deployment/gdotv-developer
# Logs from the previous (crashed) container
kubectl logs -n gdotv deployment/gdotv-developer --previous
Keycloak
kubectl logs -f -n gdotv deployment/gdotv-developer-keycloak
nginx
kubectl logs -f -n gdotv deployment/gdotv-developer-nginx
Bootstrap job
kubectl get pods -n gdotv -l app.kubernetes.io/component=bootstrap
kubectl logs -n gdotv -l app.kubernetes.io/component=bootstrap
All pods
kubectl get pods -n gdotv -l app.kubernetes.io/instance=gdotv-developer
Pod events (useful for crash diagnosis)
kubectl describe pod -n gdotv -l app.kubernetes.io/component=gdotv-app
Scaling
Scale the gdotv application
The gdotv application is stateless (session state is stored in PostgreSQL). It can safely run multiple replicas.
# Via kubectl
kubectl scale deployment/gdotv-developer -n gdotv --replicas=3
# Or persistently via helm upgrade
helm upgrade gdotv-developer \
oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
--namespace gdotv \
--reuse-values \
--wait \
--set gdotv.replicaCount=3
Scale nginx
nginx is stateless and can be scaled freely:
kubectl scale deployment/gdotv-developer-nginx -n gdotv --replicas=3
Keycloak and databases
Keycloak and both PostgreSQL instances run as single replicas. Scaling them requires additional configuration (Keycloak clustering, PostgreSQL replication) that is outside the scope of this deployment. For production HA requirements, consider using Amazon RDS for PostgreSQL and a Keycloak Operator.
Cluster-level scaling
If pods cannot be scheduled due to insufficient node resources, scale the EKS managed node group:
aws eks update-nodegroup-config \
--cluster-name <CLUSTER_NAME> \
--nodegroup-name <NODEGROUP_NAME> \
--scaling-config minSize=2,maxSize=10,desiredSize=5 \
--region <REGION>
AWS: IRSA for Amazon Neptune and other AWS services
To connect gdotv to Amazon Neptune with IAM authentication, or other AWS services, without storing credentials in the cluster, use IAM Roles for Service Accounts (IRSA). The gdotv service account is already annotated with an IAM role during installation (see Step 2 - Create the IRSA role). You can extend that role with additional permissions as needed.
Kubernetes service account
gdotv runs under the following Kubernetes service account:
- Namespace: gdotv
- Service account name: gdotv-developer
When configuring IAM trust policies manually - for example to grant access to Amazon Neptune or other AWS services - reference this service account as the principal:
system:serviceaccount:gdotv:gdotv-developer
Adding Neptune permissions to the existing IRSA role
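For reference, the trust policy on an IRSA role pins it to this service account via the cluster's OIDC provider. A sketch (the account ID and OIDC provider ID below are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:gdotv:gdotv-developer",
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

eksctl generates an equivalent trust policy automatically; you only need to write one yourself when creating the role manually.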
- Find the role ARN annotated on the gdotv service account:
kubectl get serviceaccount gdotv-developer -n gdotv \
-o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
- Attach an inline policy granting Neptune access (see the policy JSON in Step 3 - Add Amazon Neptune IAM permissions above):
aws iam put-role-policy \
--role-name <ROLE_NAME> \
--policy-name gdotv-neptune-access \
--policy-document file://neptune-policy.json
- Restart the gdotv pod to pick up the new permissions:
kubectl rollout restart deployment/gdotv-developer -n gdotv
Uninstalling
To completely remove gdotv from your cluster:
helm uninstall gdotv-developer -n gdotv
kubectl delete namespace gdotv
Note that persistent volumes (PostgreSQL data) are not automatically deleted. To remove them:
kubectl delete pvc --all -n gdotv
To delete the EKS cluster entirely (if it was created by the CloudFormation stack):
aws cloudformation delete-stack --stack-name gdotv --region <REGION>
Troubleshooting
The application is not accessible over HTTPS
- Check that the NLB has a hostname assigned: kubectl get svc -n gdotv
- Verify that EKS security group rules allow inbound traffic on port 443
- Check that all pods are running: kubectl get pods -n gdotv
gdotv pods are in CrashLoopBackOff
This typically indicates a hostname or Keycloak connectivity issue. Check the logs:
kubectl logs -n gdotv deployment/gdotv-developer
Common causes:
- Hostname mismatch between the configured hostname and the actual NLB endpoint
- Keycloak not fully ready when gdotv starts (init containers should handle this, but check events)
The application is stating that AWS Marketplace License Validation failed
License validation failures are caused by missing or incorrect IRSA configuration. Check the following:
- The gdotv-developer service account has the eks.amazonaws.com/role-arn annotation: kubectl get serviceaccount gdotv-developer -n gdotv -o yaml
- The annotated IAM role has the marketplacemetering:RegisterUsage permission
- The AWS_REGION environment variable is set on the gdotv pod: kubectl exec -n gdotv deployment/gdotv-developer -- env | grep AWS_REGION
- The OIDC provider is correctly associated with the cluster: aws eks describe-cluster --name <CLUSTER_NAME> --region <REGION> --query 'cluster.identity.oidc.issuer'
Pods stuck in Pending state
The cluster may not have sufficient node resources. Check events:
kubectl get events -n gdotv --sort-by='.lastTimestamp' | tail -20
If nodes are the issue, scale the node group (see Cluster-level scaling above).
Bootstrap job failed
The bootstrap job creates the initial Keycloak user. If it failed, check its logs:
kubectl logs -n gdotv -l app.kubernetes.io/component=bootstrap
You can re-trigger it by running a Helm upgrade:
helm upgrade gdotv-developer \
oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
--namespace gdotv \
--reuse-values
Additional support
For any support queries, email us at support@gdotv.com. Support is free and we answer all queries within one business day.