
Note: this page is currently available for review purposes only. Our EKS AWS Marketplace offering is not yet available to the public

Using gdotv on AWS Marketplace (EKS)

gdotv is available as a Kubernetes application on the AWS Marketplace. It is a graph database client, ideal for developers starting a graph project or supporting an existing one. It is compatible with Amazon Neptune's Gremlin and Cypher APIs, as well as Apache TinkerPop-enabled graph databases such as JanusGraph, Gremlin Server and Aerospike Graph. It is also compatible with Google Cloud Spanner Graph, Dgraph, Oracle Graph and many more graph databases.

We provide state-of-the-art development tools with advanced autocomplete, syntax checking and graph visualization.

With gdotv you can:

  • View your graph database's schema in 1 click
  • Write and run Gremlin, Cypher, SPARQL, GQL and DQL queries against your database
  • Visualize query results across a variety of formats such as graph visualization, JSON and tables
  • Explore your data interactively with our no-code graph database browser
  • Debug Gremlin queries step by step, and access profiling tools for Gremlin and Cypher

It is deployed on an Amazon EKS cluster via Helm, running five containerized services behind a Network Load Balancer with TLS enabled by default.

Deploying a gdotv instance

There are two ways to deploy gdotv on EKS:

  • One-click deploy via CloudFormation - AWS creates the EKS cluster and deploys gdotv automatically. Recommended for first-time deployments.
  • Manual deploy via Helm - Deploy into an existing EKS cluster using the Helm chart directly. Recommended when you already have an EKS cluster and want full control over the configuration.

Pricing

gdotv on AWS Marketplace (EKS) is usage-based. Charges are billed through your AWS account via the AWS Marketplace Metering Service. For pricing details, refer to the Marketplace listing page.

Architecture

gdotv is deployed as a set of Kubernetes workloads on EKS. The deployment includes five containers managed by Helm:

  • gdotv-developer: The gdotv web application (Spring Boot + Vue.js)
  • gdotv-keycloak: A Keycloak instance providing user authentication, federation and SSO capabilities
  • gdotv-postgres: A PostgreSQL database storing gdotv application data
  • gdotv-keycloak-postgres: A PostgreSQL database storing Keycloak configuration and realm data
  • gdotv-nginx: An NGINX reverse proxy fronting the application over port 443, with TLS enabled by default

The architecture of the application is as shown below:

gdotv EKS Architecture

Sizing

The EKS node group should be provisioned with sufficient resources to run all five containers. The following table provides sizing guidance:

Node Instance Type   Concurrent Users   Notes
t3.large             Up to 3 users      Minimum viable for evaluation
t3.xlarge            Up to 10 users     Recommended for most teams
t3.2xlarge           Up to 20 users     For larger teams
m5.2xlarge           Over 20 users      For enterprise deployments

For production use, we recommend at least 3 nodes across two availability zones for high availability.
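The sizing table above amounts to a simple lookup. As a convenience, here is a minimal sketch of it in shell (the `instance_for_users` helper is hypothetical, not part of gdotv):

```shell
# Map an expected concurrent-user count to the instance tier from the sizing table.
instance_for_users() {
  if   [ "$1" -le 3 ];  then echo "t3.large"     # evaluation
  elif [ "$1" -le 10 ]; then echo "t3.xlarge"    # most teams
  elif [ "$1" -le 20 ]; then echo "t3.2xlarge"   # larger teams
  else                       echo "m5.2xlarge"   # enterprise
  fi
}

instance_for_users 8   # t3.xlarge
```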

One-click deploy via CloudFormation

The CloudFormation stack creates a complete EKS environment - VPC, cluster, node group, EBS CSI driver - and deploys gdotv automatically. No prior Kubernetes experience is required.

Accept the Marketplace agreement first

Before launching the stack, you must subscribe to gdotv on the AWS Marketplace and accept the terms. Navigate to the gdotv AWS Marketplace listing, click View purchase options, and accept the terms. The CloudFormation deployment will fail if this step is skipped.

Launch Stack

Or deploy manually via the AWS CLI:

bash
aws cloudformation create-stack \
  --stack-name gdotv \
  --template-url https://gdotv-cloudformation.s3.amazonaws.com/gdotv-eks.yaml \
  --capabilities CAPABILITY_IAM \
  --region <your-region> \
  --parameters \
    ParameterKey=ClusterName,ParameterValue=gdotv-eks \
    ParameterKey=NodeInstanceType,ParameterValue=t3.xlarge \
    ParameterKey=NodeGroupDesiredSize,ParameterValue=3

Parameters

Parameter                  Default      Description
ClusterName                gdotv-eks    Name of the EKS cluster
KubernetesVersion          1.32         Kubernetes version
NodeInstanceType           t3.xlarge    EC2 instance type for worker nodes
NodeGroupDesiredSize       3            Initial number of worker nodes
NodeGroupMinSize           2            Minimum number of nodes (autoscaling)
NodeGroupMaxSize           6            Maximum number of nodes (autoscaling)
NodeDiskSize               100          EBS volume size in GB per node
Hostname                   (empty)      Custom hostname (leave empty to auto-detect the NLB hostname)
GdotvReplicaCount          2            Number of gdotv application replicas
ExistingVpcId              (empty)      ID of an existing VPC. Leave empty to create a new one
ExistingPublicSubnet1Id    (empty)      Existing public subnet in AZ 1 (required if using existing VPC)
ExistingPublicSubnet2Id    (empty)      Existing public subnet in AZ 2 (required if using existing VPC)
ExistingPrivateSubnet1Id   (empty)      Existing private subnet in AZ 1 (required if using existing VPC)
ExistingPrivateSubnet2Id   (empty)      Existing private subnet in AZ 2 (required if using existing VPC)

What the stack creates

  • A VPC with public and private subnets across two availability zones (or reuses an existing one)
  • NAT Gateway for outbound internet access from private subnets
  • EKS cluster with a managed node group
  • OIDC provider for IRSA
  • EBS CSI driver add-on for persistent volume support
  • An IAM role with marketplacemetering:RegisterUsage permission bound to the gdotv service account
  • A temporary EC2 deployer instance that runs helm install and signals CloudFormation on completion

Deployment duration

  • New VPC + new cluster: approximately 20–30 minutes
  • Existing VPC + new cluster: approximately 15–20 minutes

Retrieving credentials after deployment

Once the stack reaches CREATE_COMPLETE, retrieve the initial passwords and NLB hostname from the deployer instance logs:

bash
INSTANCE_ID=$(aws cloudformation describe-stack-resource \
  --stack-name gdotv \
  --logical-resource-id DeployerInstance \
  --query 'StackResourceDetail.PhysicalResourceId' \
  --output text)

aws ssm start-session \
  --target "$INSTANCE_ID" \
  --document-name AWS-StartNonInteractiveCommand \
  --parameters 'command=["cat /var/log/gdotv-deploy.log"]'

The log contains the NLB hostname, gdotv user password and Keycloak admin password.

Manual deploy via Helm

Use this method to deploy gdotv into an existing EKS cluster.

Prerequisites

Install the following CLI tools if you haven't already (all four are used in the steps below):

  • AWS CLI v2
  • kubectl
  • eksctl
  • Helm

Step 1 - Configure kubectl

bash
aws eks update-kubeconfig --name <CLUSTER_NAME> --region <REGION>
kubectl get nodes  # verify connectivity

Step 2 - Create the IRSA role

The gdotv pod must call the AWS Marketplace Metering API (RegisterUsage) to validate its license. This requires an IAM role bound to the Kubernetes service account via IRSA.

WARNING

EKS Pod Identity is not supported by the AWS Marketplace Metering API. IRSA is required.

2a - Ensure your cluster has an OIDC provider:

bash
eksctl utils associate-iam-oidc-provider \
  --cluster <CLUSTER_NAME> \
  --region <REGION> \
  --approve

2b - Create the IAM role and attach the metering policy:

bash
eksctl create iamserviceaccount \
  --cluster <CLUSTER_NAME> \
  --region <REGION> \
  --namespace gdotv \
  --name gdotv-developer \
  --attach-policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage \
  --approve \
  --override-existing-serviceaccounts

This creates an IAM role with the marketplacemetering:RegisterUsage permission and annotates the gdotv-developer Kubernetes service account in the gdotv namespace with the role ARN.

Note the role ARN output - you will need it in the next step:

bash
# If multiple roles contain 'gdotv' in their name, pick the one created by
# eksctl in the previous step (its name begins with eksctl-).
ROLE_ARN=$(aws iam list-roles \
  --query "Roles[?contains(RoleName, 'gdotv')].Arn" \
  --output text)
echo "$ROLE_ARN"

Step 3 - Add Amazon Neptune IAM permissions (optional)

If you plan to connect gdotv to Amazon Neptune with IAM authentication enabled, add the necessary Neptune permissions to the role created above. We recommend using the following fine-grained inline policy rather than NeptuneFullAccess:

json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPassRoleForNeptune",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:passedToService": "rds.amazonaws.com"
                }
            }
        },
        {
            "Sid": "AllowDataAccessForNeptune",
            "Effect": "Allow",
            "Action": [
                "neptune-db:*",
                "neptune-graph:*"
            ],
            "Resource": [
                "<ENTER COMMA SEPARATED LIST OF AMAZON NEPTUNE ARNs>"
            ]
        }
    ]
}

TIP

You can find your Amazon Neptune DB cluster ARN or Neptune Analytics graph ARN from the Neptune console under the Configuration section.

ARN formats:

  • arn:aws:rds:{region}:{account}:cluster:{cluster-name} (Amazon Neptune)
  • arn:aws:neptune-graph:{region}:{account}:graph/{graph-id} (Amazon Neptune Analytics)

To grant access to all Neptune resources, use "Resource": ["*"]. We recommend following the principle of least privilege.
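Before pasting an ARN into the policy, you can sanity-check it against the two formats above. A minimal sketch (the `is_neptune_arn` helper is hypothetical, not part of gdotv):

```shell
# Returns success if the ARN matches the Neptune cluster or Neptune Analytics graph format.
is_neptune_arn() {
  case "$1" in
    arn:aws:rds:*:*:cluster:*)         return 0 ;;  # Amazon Neptune DB cluster
    arn:aws:neptune-graph:*:*:graph/*) return 0 ;;  # Neptune Analytics graph
    *)                                 return 1 ;;
  esac
}

is_neptune_arn "arn:aws:rds:us-east-1:123456789012:cluster:my-neptune" && echo "valid"
```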

You can optionally also add CloudWatchLogsReadOnlyAccess to enable our Slow Query and Audit Logs functionality.

Step 4 - Authenticate Helm with ECR

bash
aws ecr get-login-password --region us-east-1 \
  | helm registry login \
    --username AWS \
    --password-stdin \
    709825985650.dkr.ecr.us-east-1.amazonaws.com

Step 5 - Install the Helm chart

bash
helm upgrade --install gdotv-developer \
  oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
  --version <CHART_VERSION> \
  --create-namespace \
  --namespace gdotv \
  --set gdotv.env.licensingMode="aws-marketplace-eks" \
  --set gdotv.env.deploymentMode="eks" \
  --set gdotv.env.awsMarketplaceProductCode="akyagndqcrxxwssbchcyjon9i" \
  --set gdotv.env.awsRegion="<REGION>" \
  --set serviceAccount.roleArn="$ROLE_ARN" \
  --set gdotvPostgres.auth.password="$(openssl rand -base64 24 | tr -d '/+=')" \
  --set keycloakPostgres.auth.password="$(openssl rand -base64 24 | tr -d '/+=')" \
  --set keycloak.auth.adminPassword="$(openssl rand -base64 24 | tr -d '/+=')" \
  --set keycloak.client.secret="$(openssl rand -base64 32 | tr -d '/+=')" \
  --set gdotv.bootstrap.defaultUserPassword="$(openssl rand -base64 24 | tr -d '/+=')" \
  --set gdotvPostgres.persistence.storageClass="gp3" \
  --set keycloakPostgres.persistence.storageClass="gp3" \
  --set nginx.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb" \
  --set nginx.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-scheme"="internet-facing" \
  --set gdotv.env.hostname="<YOUR_HOSTNAME_OR_LEAVE_EMPTY>"
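The `openssl rand -base64 N | tr -d '/+='` pipelines above generate random passwords and strip the base64 symbol characters, leaving purely alphanumeric strings. A standalone check of what that produces:

```shell
# Generate a password the same way the helm command above does.
PW=$(openssl rand -base64 24 | tr -d '/+=')

# 24 random bytes encode to 32 base64 characters; removing '/', '+' and '='
# leaves an alphanumeric string of roughly that length.
echo "length: ${#PW}"
case "$PW" in
  *[!A-Za-z0-9]*) echo "unexpected character" ;;
  *)              echo "alphanumeric ok" ;;
esac
```

Stripping the symbol characters costs a little entropy, but a 30-odd character alphanumeric secret remains far beyond brute force.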

TIP

The gp3 storage class requires the EBS CSI driver add-on. If it is not installed on your cluster, run:

bash
eksctl create addon --name aws-ebs-csi-driver --cluster <CLUSTER_NAME> --region <REGION>

Once deployed, retrieve the initial passwords from the Kubernetes secrets:

bash
# gdotv user password
kubectl get secret gdotv-developer-gdotv-secrets \
  -n gdotv \
  -o jsonpath='{.data.gdotv-user-password}' | base64 -d

# Keycloak admin password
kubectl get secret gdotv-developer-keycloak-secrets \
  -n gdotv \
  -o jsonpath='{.data.admin-password}' | base64 -d
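The trailing `base64 -d` is needed because Kubernetes stores Secret values base64-encoded. The encoding is lossless, as a local round trip shows (`hunter2` is just a stand-in value):

```shell
# Encode a sample value the way Kubernetes stores it, then decode it back.
ENCODED=$(printf 'hunter2' | base64)
echo "$ENCODED"                      # aHVudGVyMg==
printf '%s' "$ENCODED" | base64 -d   # hunter2
```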

Connecting to your EKS cluster

To run kubectl or helm commands against your gdotv deployment, authenticate to the EKS cluster first.

bash
aws eks update-kubeconfig --name <CLUSTER_NAME> --region <REGION>
kubectl get pods -n gdotv

You should see the gdotv pods listed with their status.

Accessing gdotv

Once the deployment is complete, find the NLB hostname assigned to the LoadBalancer service:

bash
kubectl get svc -n gdotv -l app.kubernetes.io/component=nginx

Navigate to https://<NLB_HOSTNAME> in your browser. Since gdotv uses a self-signed TLS certificate by default, you will see a browser warning. Click Advanced, then Proceed to continue.

Browser self-signed certificate warning

Authenticating to gdotv

gdotv uses Keycloak for authentication. On first deployment, a default user is created automatically:

  • Username: gdotv
  • Password: Auto-generated during deployment

To retrieve the default user password:

bash
kubectl get secret gdotv-developer-gdotv-secrets \
  -n gdotv \
  -o jsonpath='{.data.gdotv-user-password}' | base64 -d

When navigating to gdotv while unauthenticated, you will be presented with the Keycloak login screen:

gdotv Login Screen

We recommend changing the default password after your first login. To do so, click on the username in the top-right menu bar, then select Change Password. You will be redirected to the Keycloak profile management page where you can update your password.

Authenticating to the gdotv Keycloak realm

The gdotv Keycloak realm is where all gdotv users are stored and managed. New authentication flows, such as Single Sign On, can be configured from the gdotv Keycloak realm admin console.

The gdotv Keycloak realm admin console can be accessed at:

https://<HOSTNAME>/kc/admin/gdotv/console/

The default realm admin credentials are:

  • Username: gdotv
  • Password: Same as the default gdotv user password (see above)

Authenticating to the master Keycloak realm

The master Keycloak realm provides access to the master administration interface of Keycloak. Under normal circumstances, it should rarely need to be accessed. However, we recommend logging in after initial deployment to change the master admin password.

The master Keycloak realm admin console can be accessed at:

https://<HOSTNAME>/kc/admin/master/console/

To retrieve the Keycloak admin credentials:

bash
kubectl get secret gdotv-developer-keycloak-secrets \
  -n gdotv \
  -o jsonpath='{.data.admin-password}' | base64 -d

The default master admin username is admin.

Configuring a TLS certificate

By default, gdotv uses a self-signed certificate to serve its web interface over HTTPS. You may wish to configure your own trusted certificate against a domain name that you own.

WARNING

When changing the hostname, you must also update the TLS certificate in the same helm upgrade command. Running separate helm upgrade commands will cause values to be overwritten due to how --reuse-values works.

Renewing or replacing the certificate for the current hostname

bash
HOSTNAME="<your-current-hostname>"

# Generate a self-signed certificate (or use your CA-issued .crt and .key files)
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout tls.key \
  -out tls.crt \
  -subj "/CN=gdotv/O=gdotv" \
  -addext "subjectAltName=DNS:${HOSTNAME}"

TLS_CRT=$(base64 -w0 tls.crt)
TLS_KEY=$(base64 -w0 tls.key)

helm upgrade gdotv-developer \
  oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
  --namespace gdotv \
  --reuse-values \
  --wait \
  --set nginx.tls.certificate="$TLS_CRT" \
  --set nginx.tls.privateKey="$TLS_KEY"

kubectl rollout restart deployment/gdotv-developer-nginx -n gdotv

Confirm the new certificate is active:

bash
echo | openssl s_client -connect <HOSTNAME>:443 2>/dev/null | openssl x509 -noout -dates
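You can also verify locally, before the cluster is involved at all, that a certificate generated with the flags above actually carries the expected SAN (a sketch; `gdotv.example.com` is a stand-in hostname and requires OpenSSL 1.1.1+ for `-addext`/`-ext`):

```shell
HOST="gdotv.example.com"   # stand-in hostname

# Generate a throwaway self-signed certificate with the same flags used above.
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -keyout /tmp/check.key -out /tmp/check.crt \
  -subj "/CN=gdotv/O=gdotv" \
  -addext "subjectAltName=DNS:${HOST}" 2>/dev/null

# Print the SAN extension and confirm the hostname is present.
openssl x509 -in /tmp/check.crt -noout -ext subjectAltName
```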

Using a Kubernetes TLS secret

Alternatively, you can store your certificate as a Kubernetes TLS secret:

bash
kubectl create secret tls my-tls-cert \
  -n gdotv \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key

helm upgrade gdotv-developer \
  oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
  --namespace gdotv \
  --reuse-values \
  --wait \
  --set nginx.tls.existingSecret=my-tls-cert

kubectl rollout restart deployment/gdotv-developer-nginx -n gdotv

Reverting to the default self-signed certificate

bash
# Get the current NLB hostname
NLB=$(kubectl get svc gdotv-developer-nginx -n gdotv \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Generate a self-signed certificate for the NLB hostname
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=gdotv" \
  -addext "subjectAltName=DNS:${NLB}"

TLS_CRT=$(base64 -w0 tls.crt)
TLS_KEY=$(base64 -w0 tls.key)

helm upgrade gdotv-developer \
  oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
  --namespace gdotv \
  --reuse-values \
  --wait \
  --set gdotv.env.hostname="$NLB" \
  --set nginx.tls.existingSecret="" \
  --set nginx.tls.certificate="$TLS_CRT" \
  --set nginx.tls.privateKey="$TLS_KEY"

kubectl rollout restart deployment/gdotv-developer-keycloak -n gdotv
kubectl rollout restart deployment/gdotv-developer -n gdotv
kubectl rollout restart deployment/gdotv-developer-nginx -n gdotv

Configuring a custom hostname

If you deployed gdotv without a hostname (using the auto-detected NLB hostname) and later want to configure a custom domain:

TIP

If you have an active session on the old hostname, your browser may have cached session data that causes redirects to the old address. We recommend logging out of gdotv before changing the hostname, or testing the new hostname in a private/incognito browser window.

  1. Fetch the NLB hostname of your gdotv deployment:
bash
kubectl get svc gdotv-developer-nginx -n gdotv \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
  2. Create a DNS CNAME record on your domain manager, pointing your domain (e.g. gdotv.example.com) to the NLB hostname.

  3. Update the hostname and TLS certificate together in a single helm upgrade command:

With a self-signed certificate:

bash
HOSTNAME="gdotv.example.com"

openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=gdotv/O=gdotv" \
  -addext "subjectAltName=DNS:${HOSTNAME}"

TLS_CRT=$(base64 -w0 tls.crt)
TLS_KEY=$(base64 -w0 tls.key)

helm upgrade gdotv-developer \
  oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
  --namespace gdotv \
  --reuse-values \
  --wait \
  --set gdotv.env.hostname="$HOSTNAME" \
  --set nginx.tls.certificate="$TLS_CRT" \
  --set nginx.tls.privateKey="$TLS_KEY"

With a Kubernetes TLS secret:

bash
kubectl create secret tls my-tls-cert \
  -n gdotv \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key

helm upgrade gdotv-developer \
  oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
  --namespace gdotv \
  --reuse-values \
  --wait \
  --set gdotv.env.hostname=gdotv.example.com \
  --set nginx.tls.existingSecret=my-tls-cert
  4. Restart all services to pick up the changes (Keycloak first, then gdotv):
bash
kubectl rollout restart deployment/gdotv-developer-keycloak -n gdotv
kubectl rollout restart deployment/gdotv-developer -n gdotv
kubectl rollout restart deployment/gdotv-developer-nginx -n gdotv

Upgrading to a new version

gdotv receives frequent updates with new features and improvements. Upgrading only updates the gdotv application container - Keycloak, nginx and the databases are unaffected.

First, authenticate Helm with ECR (tokens expire after 12 hours):

bash
aws ecr get-login-password --region us-east-1 \
  | helm registry login \
    --username AWS \
    --password-stdin \
    709825985650.dkr.ecr.us-east-1.amazonaws.com

Then upgrade:

bash
helm upgrade gdotv-developer \
  oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
  --namespace gdotv \
  --reuse-values \
  --wait \
  --set gdotv.image.tag=<NEW_VERSION>

--reuse-values preserves all previously set values (passwords, hostname, certificates) so they do not need to be passed again.

Monitor the rollout:

bash
kubectl rollout status deployment/gdotv-developer -n gdotv

If the new version fails to start, roll back to the previous release:

bash
helm rollback gdotv-developer -n gdotv

Backup and Restore

For production deployments, we recommend using AWS Backup, which supports EKS namespace-level backups including persistent volume data.

To set up AWS Backup for EKS:

  1. Enable the EKS backup feature and install the Velero-based agent on your cluster:
bash
aws eks create-addon \
  --cluster-name <CLUSTER_NAME> \
  --addon-name aws-efs-csi-driver \
  --region <REGION>
  2. Create a backup vault and backup plan targeting the gdotv namespace via the AWS Backup console. Refer to the AWS Backup for EKS documentation for full steps.

Manual backup with pg_dump

For one-off backups, you can back up the PostgreSQL databases directly. gdotv uses two PostgreSQL instances: one for the application and one for Keycloak.

Back up the gdotv application database:

bash
kubectl exec -n gdotv statefulset/gdotv-developer-gdotv-postgres -- \
  pg_dump -U postgres --clean --if-exists postgres \
  > gdotv-backup-$(date +%Y%m%d-%H%M%S).sql

Back up the Keycloak database:

bash
kubectl exec -n gdotv statefulset/gdotv-developer-keycloak-postgres -- \
  pg_dump -U postgres --clean --if-exists postgres \
  > keycloak-backup-$(date +%Y%m%d-%H%M%S).sql
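A note on the filename pattern used above: the `date +%Y%m%d-%H%M%S` suffix is fixed-width, so plain `sort` (or `ls`) lists backups in chronological order:

```shell
# Same timestamp format as the dump commands above: YYYYMMDD-HHMMSS, always 15 characters.
STAMP=$(date +%Y%m%d-%H%M%S)
echo "gdotv-backup-${STAMP}.sql"

# Fixed-width timestamps mean alphabetical order is also chronological order.
printf '%s\n' \
  "gdotv-backup-20250102-090000.sql" \
  "gdotv-backup-20241231-235959.sql" | sort
```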

Restore the gdotv application database:

WARNING

Restoring overwrites current data. Stop the dependent application pod first to avoid write conflicts.

bash
kubectl scale deployment/gdotv-developer -n gdotv --replicas=0

cat gdotv-backup-<timestamp>.sql | kubectl exec -i -n gdotv \
  statefulset/gdotv-developer-gdotv-postgres -- \
  psql -U postgres postgres

kubectl scale deployment/gdotv-developer -n gdotv --replicas=2

Restore the Keycloak database:

bash
kubectl scale deployment/gdotv-developer-keycloak -n gdotv --replicas=0

cat keycloak-backup-<timestamp>.sql | kubectl exec -i -n gdotv \
  statefulset/gdotv-developer-keycloak-postgres -- \
  psql -U postgres postgres

kubectl scale deployment/gdotv-developer-keycloak -n gdotv --replicas=1

Stop / Start Services

Stop all services (scale to zero)

This preserves PVCs and Secrets - data is not lost.

bash
kubectl scale deployment/gdotv-developer -n gdotv --replicas=0
kubectl scale deployment/gdotv-developer-keycloak -n gdotv --replicas=0
kubectl scale deployment/gdotv-developer-nginx -n gdotv --replicas=0
kubectl scale statefulset/gdotv-developer-gdotv-postgres -n gdotv --replicas=0
kubectl scale statefulset/gdotv-developer-keycloak-postgres -n gdotv --replicas=0

Start all services

Start order matters: databases first, then Keycloak, then gdotv, then nginx.

bash
kubectl scale statefulset/gdotv-developer-gdotv-postgres -n gdotv --replicas=1
kubectl scale statefulset/gdotv-developer-keycloak-postgres -n gdotv --replicas=1
kubectl scale deployment/gdotv-developer-keycloak -n gdotv --replicas=1
kubectl scale deployment/gdotv-developer -n gdotv --replicas=2
kubectl scale deployment/gdotv-developer-nginx -n gdotv --replicas=2

Restart a single service

bash
kubectl rollout restart deployment/gdotv-developer -n gdotv
kubectl rollout restart deployment/gdotv-developer-keycloak -n gdotv
kubectl rollout restart deployment/gdotv-developer-nginx -n gdotv

Watch rollout progress:

bash
kubectl rollout status deployment/gdotv-developer -n gdotv

Check Logs

gdotv application

bash
# Live logs
kubectl logs -f -n gdotv deployment/gdotv-developer

# Logs from the previous (crashed) container
kubectl logs -n gdotv deployment/gdotv-developer --previous

Keycloak

bash
kubectl logs -f -n gdotv deployment/gdotv-developer-keycloak

nginx

bash
kubectl logs -f -n gdotv deployment/gdotv-developer-nginx

Bootstrap job

bash
kubectl get pods -n gdotv -l app.kubernetes.io/component=bootstrap
kubectl logs -n gdotv -l app.kubernetes.io/component=bootstrap

All pods

bash
kubectl get pods -n gdotv -l app.kubernetes.io/instance=gdotv-developer

Pod events (useful for crash diagnosis)

bash
kubectl describe pod -n gdotv -l app.kubernetes.io/component=gdotv-app

Scaling

Scale the gdotv application

The gdotv application is stateless (session state is stored in PostgreSQL). It can safely run multiple replicas.

bash
# Via kubectl
kubectl scale deployment/gdotv-developer -n gdotv --replicas=3

# Or persistently via helm upgrade
helm upgrade gdotv-developer \
  oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
  --namespace gdotv \
  --reuse-values \
  --wait \
  --set gdotv.replicaCount=3

Scale nginx

nginx is stateless and can be scaled freely:

bash
kubectl scale deployment/gdotv-developer-nginx -n gdotv --replicas=3

Keycloak and databases

Keycloak and both PostgreSQL instances run as single replicas. Scaling them requires additional configuration (Keycloak clustering, PostgreSQL replication) that is outside the scope of this deployment. For production HA requirements, consider using Amazon RDS for PostgreSQL and a Keycloak Operator.

Cluster-level scaling

If pods cannot be scheduled due to insufficient node resources, scale the EKS managed node group:

bash
aws eks update-nodegroup-config \
  --cluster-name <CLUSTER_NAME> \
  --nodegroup-name <NODEGROUP_NAME> \
  --scaling-config minSize=2,maxSize=10,desiredSize=5 \
  --region <REGION>

AWS: IRSA for Amazon Neptune and other AWS services

To connect gdotv to Amazon Neptune with IAM authentication, or other AWS services, without storing credentials in the cluster, use IAM Roles for Service Accounts (IRSA). The gdotv service account is already annotated with an IAM role during installation (see Step 2 - Create the IRSA role). You can extend that role with additional permissions as needed.

Kubernetes service account

gdotv runs under the following Kubernetes service account:

  • Namespace: gdotv
  • Service account name: gdotv-developer

When configuring IAM trust policies manually - for example to grant access to Amazon Neptune or other AWS services - reference this service account as the principal:

system:serviceaccount:gdotv:gdotv-developer
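For reference, that principal appears in the `sub` condition of the IAM role's trust relationship. A sketch of what an IRSA trust policy for this service account looks like (the account ID, region and OIDC provider ID are placeholders specific to your cluster):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:gdotv:gdotv-developer"
                }
            }
        }
    ]
}
```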

Adding Neptune permissions to the existing IRSA role

  1. Find the role ARN annotated on the gdotv service account:
bash
kubectl get serviceaccount gdotv-developer -n gdotv \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
  2. Attach an inline policy granting Neptune access (see the policy JSON in Step 3 - Add Amazon Neptune IAM permissions above):
bash
aws iam put-role-policy \
  --role-name <ROLE_NAME> \
  --policy-name gdotv-neptune-access \
  --policy-document file://neptune-policy.json
  3. Restart the gdotv pod to pick up the new permissions:
bash
kubectl rollout restart deployment/gdotv-developer -n gdotv

Uninstalling

To completely remove gdotv from your cluster:

bash
helm uninstall gdotv-developer -n gdotv
kubectl delete namespace gdotv

Note that persistent volumes (PostgreSQL data) are not automatically deleted. To remove them:

bash
kubectl delete pvc --all -n gdotv

To delete the EKS cluster entirely (if it was created by the CloudFormation stack):

bash
aws cloudformation delete-stack --stack-name gdotv --region <REGION>

Troubleshooting

The application is not accessible over HTTPS

  • Check that the NLB has a hostname assigned: kubectl get svc -n gdotv
  • Verify that EKS security group rules allow inbound traffic on port 443
  • Check that all pods are running: kubectl get pods -n gdotv

gdotv pods are in CrashLoopBackOff

This typically indicates a hostname or Keycloak connectivity issue. Check the logs:

bash
kubectl logs -n gdotv deployment/gdotv-developer

Common causes:

  • Hostname mismatch between the configured hostname and the actual NLB endpoint
  • Keycloak not fully ready when gdotv starts (init containers should handle this, but check events)

The application reports that AWS Marketplace license validation failed

License validation failures are usually caused by missing or incorrect IRSA configuration. Check the following:

  • The gdotv-developer service account has the eks.amazonaws.com/role-arn annotation:
    bash
    kubectl get serviceaccount gdotv-developer -n gdotv -o yaml
  • The annotated IAM role has the marketplacemetering:RegisterUsage permission
  • The AWS_REGION environment variable is set on the gdotv pod:
    bash
    kubectl exec -n gdotv deployment/gdotv-developer -- env | grep AWS_REGION
  • The OIDC provider is correctly associated with the cluster:
    bash
    aws eks describe-cluster --name <CLUSTER_NAME> --region <REGION> \
      --query 'cluster.identity.oidc.issuer'

Pods stuck in Pending state

The cluster may not have sufficient node resources. Check events:

bash
kubectl get events -n gdotv --sort-by='.lastTimestamp' | tail -20

If nodes are the issue, scale the node group (see Cluster-level scaling above).

Bootstrap job failed

The bootstrap job creates the initial Keycloak user. If it failed, check its logs:

bash
kubectl logs -n gdotv -l app.kubernetes.io/component=bootstrap

You can re-trigger it by running a Helm upgrade:

bash
helm upgrade gdotv-developer \
  oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/gdotv/gdotv-developer \
  --namespace gdotv \
  --reuse-values

Additional support

For any support queries, email us at support@gdotv.com. Support is free and we answer all queries within one business day.