
Compute

American Cloud compute resources.
By Dane and 2 others
5 articles

Cloud Compute

Cloud Compute - Getting Started

Cloud compute refers to the use of remote computing resources delivered over the internet, such as virtual machines (VMs) or containers, provided by cloud service providers. These resources can be configured and managed remotely, allowing users to run applications, store data, and perform computing tasks without having to invest in and maintain their own physical infrastructure. Cloud compute offers scalability, flexibility, and cost-effectiveness, as users pay only for the resources they need and can easily adjust their computing capacity as requirements change.

Deploy a Compute Instance

1. Log in to the Web Portal with a valid American Cloud account.
2. On the left navigation column, choose 'Cloud Compute'.
3. In Manage Instance, select 'Create An Instance'.
4. In the Create New Instance popup, select the desired project from the dropdown or create a new project.
5. Fill out the creation form, which allows full customization of the new instance. The following fields are available:
   - Location - Choose the geographic region in which the instance will reside.
   - Network - American Cloud provides two networking options: Elastic Cloud or a VPC. For more information, see the Elastic/VPC networking documentation.
   - Server Image - Select the desired image or marketplace app. For more information, see the marketplace apps documentation.
   - Server Size - Select from four prebuilt size configurations or specify a custom size.
   - Personalize Instance - Personalize the instance with options to add a startup script, add a new SSH key, or generate a new SSH key. When using the 'Generate New SSH Key' toggle, be sure to select the key once it is generated and select 'Add New SSH Key'. More support for adding and using startup scripts is coming soon. For more support generating or adding an SSH key, see the Managing SSH Keys article.
   - Hostname & Label - Assign a name to the instance. A unique name is suggested in order to easily differentiate between instances.
   - Billing Method & Coupon - Choose between hourly and monthly billing. If a coupon has been redeemed, be sure to select it here so it is applied.
   - Review & Deploy - Once all fields are filled in correctly, select 'Review and Deploy'.
6. Review the selections and cost in the Cost Breakdown window and select 'Deploy'.

Note: The first instance may take up to a minute to deploy. Following successful creation, you will be redirected to the server information page.

Managing Instances

Instance Dashboard

The instance dashboard provides controls for each instance. To reach it:

1. From the 'Home Screen', on the left navigation column choose 'Cloud Compute'.
2. Identify the instance to manage; each instance has its own controls located to the right. Each toggle is described below:
   - Console - Opens a console session into the instance using a username and password.
   - Power - Powers the respective instance on or off.
   - Reboot Server - Power cycles the respective instance.
   - Reinstall Server - Performs a complete reset of the selected instance. Note: when resetting an instance, all unsaved data will be lost.
   - Destroy Server - Permanently deletes the selected VM.

Specific Instance Management

Instance management provides a detailed overview and settings for each individual instance. Navigate to an instance using the following steps:

1. On the navigation panel to the left, select 'Cloud Compute'.
2. On the Manage Instance page, select the instance to manage.

Inside the instance are the Overview, Usage, Settings, Snapshots, and SSH Keys sections, broken down below:

- Overview - Provides a detailed view of the instance, most importantly the IP address, username, and default password used to SSH into the instance. The username and password are configured during the initial build but can be changed afterward.
- Usage - Provides a breakdown of monthly usage in numerical as well as graphical form. Graphs cover CPU Usage, CPU Load, Memory Usage, Network Interface, and Disk Operations.
- Settings - Basic configurations are available in the Settings section. Before executing any of these changes, identify and read the warning banners; several changes require the instance to be in a stopped state and may impact stored data. For more information on Firewalls and Port Forwarding Rules, see the networking documentation.
- Snapshots - Snapshots are complete images of a drive's data at a specific moment, allowing for data recovery, rollback, or backup. They capture the drive's entire contents, including files, folders, and system settings. When naming snapshots, use naming conventions that are easy to track and organize.
- SSH Keys - Keys applied to the instance are listed under this section. For more information on adding and using SSH keys, see the Managing SSH Keys article.
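Once the instance is deployed, the Overview section provides everything needed to connect over SSH. A minimal example, assuming a hypothetical public IP of 203.0.113.10 and a username of acuser taken from the Overview page (substitute your instance's actual values):

ssh acuser@203.0.113.10

If an SSH key was added during creation, point ssh at the matching private key instead of using the password:

ssh -i ~/.ssh/id_ed25519 acuser@203.0.113.10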

Last updated on Sep 26, 2024

Managing SSH Keys

Generate and Add SSH Key

About SSH Keys

An SSH key, or Secure Shell key, is a cryptographic key pair used for securely authenticating and encrypting communication between two entities in a Secure Shell (SSH) protocol-based system, such as remote access to a server or a Git repository. SSH is a widely used protocol for securely connecting to and managing remote servers over a network.

An SSH key pair consists of two keys: a private key and a public key. The private key is kept secret and is known only to the owner, while the public key is shared with other parties. When a client initiates an SSH connection to a server, the server requests that the client authenticate using a key pair. The client uses its private key to generate a digital signature, which is sent to the server along with the public key. The server then uses the public key to verify the digital signature, and if it matches, the client is granted access.

RSA vs Ed25519

RSA and Ed25519 are two different types of cryptographic key pairs used in SSH for secure communication and authentication. Here are the key differences between them:

- Algorithm: RSA (Rivest-Shamir-Adleman) is a widely used asymmetric encryption algorithm, while Ed25519 is a newer elliptic curve cryptography (ECC) algorithm.
- Key Size: RSA key pairs typically have larger key sizes, such as 2048 or 4096 bits, while Ed25519 key pairs have a fixed key size of 256 bits. This means RSA keys are generally larger and require more computational resources for key generation, signing, and verification than Ed25519 keys.
- Security: Both RSA and Ed25519 are considered secure for most purposes. However, Ed25519 is generally considered to provide stronger security at smaller key sizes than RSA, because elliptic curve cryptography offers high security levels with shorter key lengths. RSA is susceptible to attacks such as factorization, while Ed25519 is designed to be resistant to a variety of cryptographic attacks.
- Performance: Ed25519 is known for faster performance than RSA, as it requires fewer computational resources for key generation, signing, and verification. This makes Ed25519 more efficient in resource-constrained environments, such as embedded systems or high-traffic networks.
- Compatibility: RSA is more widely supported and compatible with older systems and software, as it has been in use for a longer time. Ed25519, being a newer algorithm, may not be supported by all SSH implementations or older systems. However, most modern SSH clients and servers support Ed25519, and it has gained wider adoption in recent years.
- Key Management: RSA keys are typically managed using the ssh-keygen tool, which is available on most operating systems. Ed25519 keys can also be generated using ssh-keygen, though this may require a newer version of the tool. Additionally, RSA keys often require key size increases over time to maintain strong security, while Ed25519 keys are fixed at 256 bits.

In summary, RSA and Ed25519 are both commonly used for SSH key-based authentication, but they differ in terms of algorithm, key size, security, performance, compatibility, and key management. The choice between RSA and Ed25519 depends on the specific use case, security requirements, and compatibility considerations of the system or network being used.
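If you already have keys on your machine and want to check which algorithm and size they use before deciding, ssh-keygen can report a key's length, fingerprint, and type. A quick check, assuming keys at the default paths:

ssh-keygen -l -f ~/.ssh/id_rsa.pub
ssh-keygen -l -f ~/.ssh/id_ed25519.pub

The first number in each output line is the key size (for example, 4096 for RSA or 256 for Ed25519), and the key type appears in parentheses at the end.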
Generating SSH Keys

Generating SSH Keys using Terminal/CMD Prompt

There are two ways to generate an SSH key for use within the American Cloud Cloud Management Platform (CMP). To generate a key within a terminal or command prompt:

1. Open a terminal or command prompt on your local machine.
2. Run the command to generate an RSA and/or Ed25519 key (the email address is only a comment used to label the key):
   - RSA: ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
   - Ed25519: ssh-keygen -t ed25519 -C "your_email@example.com"

Using the AC Key Generator

The American Cloud CMP offers a convenient toggle that automatically generates and saves an SSH key. This feature simplifies the process of creating an SSH key for use within the platform. Here's how it typically works:

1. In the left pane, select 'Account'.
2. On the user dashboard, select the 'Security' tab.
3. Select 'SSH Key'.
4. Select 'Generate Keypair'.
5. Save the newly generated key pair to the local PC.

Once the key has been generated and saved, it will be displayed within the profile section and is ready for use with new instances.

Placing Pre-generated Keys

The American Cloud CMP also offers the ability to add pre-built keys. Below are the steps to accomplish this:

1. In the left pane, select 'Account'.
2. Choose the 'Security' tab and then select 'SSH Key'.
3. Select 'Upload SSH Key'.
4. Add the new SSH public key and select 'Add New SSH Key'.

Once uploaded, the key will populate within the list of available keys and is ready for use.
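When uploading a pre-generated key, the CMP expects the public half of the key pair. A quick way to print it for copy-and-paste, and then to confirm the key works against an instance, assuming the default file name and a hypothetical username and instance IP (replace with your own values):

cat ~/.ssh/id_ed25519.pub

ssh -i ~/.ssh/id_ed25519 acuser@203.0.113.10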

Last updated on Aug 30, 2024

OpenBSD

OpenBSD is a free and open-source Unix-like operating system known for its emphasis on security, correctness, and a strong commitment to free and open-source software principles. Developed by a community of volunteers, OpenBSD prioritizes code quality and rigorous security auditing, making it a preferred choice for security-conscious users and organizations. The system's proactive security features, such as privilege separation and a focus on secure coding practices, contribute to its reputation as one of the most secure operating systems available. OpenBSD supports various hardware architectures and includes a range of built-in utilities and services. Its commitment to simplicity, clarity, and a secure default configuration has established OpenBSD as a reliable platform for network infrastructure, firewalls, and security-focused applications.

OpenBSD with American Cloud

American Cloud offers two OpenBSD options:

- OpenBSD (Beta) - Predefined: ships with a 25GB startup disk and cloud-init, allowing interaction with CMP functions.
- OpenBSD (Beta) - Self-Install: requires more robust technical knowledge to utilize all the resources within the CMP.

This documentation covers both.

OpenBSD (Beta) - Predefined

Build Instance

1. Log in to the Web Portal with a valid American Cloud account.
2. On the left navigation column, choose 'Cloud Compute'.
3. Click 'Create an Instance', select your project, and click 'Proceed'.
4. Select your location and network. Under 'Choose Server Image', select the 'Other Services' tab and choose 'OpenBSD 7.4 Beta'.
5. Choose the server size. Important: the base template provides a 25GB root disk no matter which SSD is selected, so only choose a 25GB root disk for pricing purposes. More storage will be allocated later in this documentation.
6. Click 'Review and Deploy'; once reviewed, click 'Deploy Now'.

Upon initial boot, the American Cloud CMP will show a running status moments before the full boot is complete. The boot process can be observed by opening the console.

Add Additional Storage

As discussed previously, additional storage may be required. American Cloud provides this through block storage. For additional documentation on building block storage, see the Block Storage documentation. When building the block storage volume, ensure the newly built instance is selected.

OpenBSD lacks a hot-add function, so a reboot is required after the block storage volume is built. Reboot the system either through the CMP or with the command sudo reboot.

Retrieve Disk Information

1. Upon reboot, use the command sysctl -a | grep -i disk. The disk count will be printed to the screen.

ac-openbsd$ sysctl -a | grep -i disk
hw.disknames=cd0:,sd0:5ca267e7629f19b2,sd1:,fd0:
hw.diskcount=4
machdep.bios.diskinfo.128=bootdev = 0xa0000204, cylinders = 1023, heads = 255, sectors = 63

2. Access root using sudo -i and run the command disklabel sd1 to print the drive information.

ac-openbsd# disklabel sd1
# /dev/rsd1c:
type: SCSI
disk: SCSI disk
label: Block Device
duid: 0000000000000000
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 6527
total sectors: 104857600
boundstart: 0
boundend: 104857600

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  c:        104857600                0  unused
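Before partitioning, the kernel message buffer is another place to confirm that the block storage volume was attached on reboot. A quick check, assuming the new volume shows up as sd1 as in the example above:

ac-openbsd$ dmesg | grep sd1

The output should include the attach line for sd1 along with its reported size.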
Partition the Drive

fdisk provides the following commands at its edit prompt:

Command   Function
help      Display summary of available commands
manual    Display fdisk man page
reinit    Initialize the partition table
setpid    Set identifier of table entry
edit      Edit table entry
flag      Set flag value of table entry
update    Update MBR bootcode
select    Select MBR extended table entry
swap      Swap two table entries
print     Print partition table
write     Write partition table to disk
exit      Discard changes and exit edit level
quit      Save changes and exit edit level
abort     Discard changes and terminate fdisk

1. Partition the drive using fdisk -e sd1.

ac-openbsd# fdisk -e sd1
Enter 'help' for information
sd1: 1>

2. Use the print command to list the available partitions.

sd1: 1> p
Disk: sd1       geometry: 6527/255/63 [104857600 Sectors]
Offset: 0       Signature: 0x0
            Starting         Ending         LBA Info:
 #: id      C   H   S -      C   H   S [       start:        size ]
-------------------------------------------------------------------------------
 0: 00      0   0   0 -      0   0   0 [           0:           0 ] Unused
 1: 00      0   0   0 -      0   0   0 [           0:           0 ] Unused
 2: 00      0   0   0 -      0   0   0 [           0:           0 ] Unused
 3: 00      0   0   0 -      0   0   0 [           0:           0 ] Unused

3. Determine how to partition the drive and use the edit command to make adjustments. For this example, a single 50GB partition will be built on partition 3.
4. Select the partition id. To print a list of identifiers, type ?. For an OpenBSD identifier, use A6.
5. Determine whether to edit in CHS mode.
6. Determine the partition offset and size.

sd1: 1> edit 3
Partition id ('0' to disable) [01 - FF]: [00] (? for help) A6
Do you wish to edit in CHS mode? [n] n
Partition offset [0 - 104857599]: [0] 0
Partition size [1 - 104857600]: [1] 104857600

7. Use the print command to ensure the partition has been created. If satisfied, use the quit command to save and exit fdisk.
8. Build the new file system using the command newfs sd1c.

Mount Drive

1. Using either nano or vim, open /etc/fstab and insert the drive info (DUID and partition, mount location, filesystem, and options), for example:

786d2bc033bfc8ff.c /mnt/test1 ffs rw,wxallowed 1 1

2. Using the mkdir command, create the mount point for the drive:

ac-openbsd# mkdir /mnt/test1

3. The mount -a command will mount all drives listed in the /etc/fstab file.
4. Finally, list the drives using df -h:

ac-openbsd# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd0a     24.2G    1.4G   21.6G     7%    /
/dev/sd1c     48.4G    2.0K   46.0G     1%    /mnt/test1

/dev/sd1c has been added at /mnt/test1.
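As a quick sanity check that the new filesystem is mounted read-write before putting data on it, a test file can be created and removed (the path assumes the /mnt/test1 mount point used above):

ac-openbsd# touch /mnt/test1/write-test
ac-openbsd# ls -l /mnt/test1
ac-openbsd# rm /mnt/test1/write-test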
OpenBSD (Beta) - Self-Install

Build Instance

1. Log in to the Web Portal with a valid American Cloud account.
2. On the left navigation column, choose 'Cloud Compute'.
3. Click 'Create an Instance', select your project, and click 'Proceed'.
4. Select your location and network. Under 'Choose Server Image', select the 'Other Services' tab and choose 'OpenBSD 7.4 Beta'.
5. Choose the server size.
6. Click 'Review and Deploy'; once reviewed, click 'Deploy Now'.

Upon initial boot, the American Cloud CMP will show a running status moments before the full boot is complete. The boot process can be observed by opening the console.

Finalize Install

At first build, the OpenBSD instance will not receive input from the CMP, so the console should be used to interact with the instance.

1. On the instance overview page, launch the console using the console toggle.
2. An initial install page will be displayed. Hit the enter key to continue.
3. Select the desired boot mode. This documentation focuses on normal boot mode. While conducting the initial setup, options surrounded by "[ ]" are the defaults.
4. Provide a keyboard layout and hostname for the instance. The hostname will be adjusted later in the documentation.

Welcome to the OpenBSD/amd64 7.4 installation program.
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell? I
Choose your keyboard layout ('?' or 'L' for list) [default]
System hostname? (short form, e.g. 'foo') ac_openbsd

5. Establish the network configuration.

Available network interfaces are: em0 vlan0.
Network interface to configure? (name, lladdr, '?', or 'done') [em0]
IPv4 address for em0? (or 'autoconf' or 'none') [autoconf]
IPv6 address for em0? (or 'autoconf' or 'none') [none]
Available network interfaces are: em0 vlan0.
Network interface to configure? (name, lladdr, '?', or 'done') [done]

6. Configure the root account information. This documentation uses prohibit-password for SSH login; adjust to fit your requirements.

Password for root account? (will not echo)
Password for root account? (again)
Start sshd(8) by default? [yes]
Do you expect to run the X Window System? [yes] no
Change the default console to com0? [no]
Setup a user? (enter a lower-case loginname, or 'no') [no]
Allow root ssh login? (yes, no, prohibit-password) [no] prohibit-password
What timezone are you in? ('?' for list) [US/Eastern] UTC

7. Configure the required disk space. There are several options; in this documentation a custom layout is used. The available disk space was determined by step 5 in the Build Instance section.

Available disks are: sd0.
Which disk is the root disk? ('?' for details) [sd0]
Encrypt the root disk with a passphrase? [no]
Use (W)hole disk MBR, whole disk (G)PT or (E)dit? [whole]
Use (A)uto layout, (E)dit auto layout, or create (C)ustom layout? [a] C
Label editor (enter '?' for help at any prompt)
sd0> a
partition to add: [a]
offset: [64]
size: [104857536]
FS type: [4.2BSD]
mount point: [none] /

8. Once the drive has been built and mounted, use the p command to review the disk. When satisfied, use 'q' to quit and write the label.

sd0*> p
sd0*> q
Write new label?: [y]

9. Install the sets using the information below.

Let's install the sets!
Location of sets? (cd0 disk http nfs or 'done') [http]
HTTP proxy URL? (e.g. 'http://proxy:8080' or 'none') [none]
HTTP Server? (hostname, list#, 'done' or '?') [ftp.usa.openbsd.org]
Server directory? [pub/OpenBSD/7.4/amd64]

10. Next, select the sets. Upon initial population all sets will be selected. Use -all to unselect all, type the desired set names, and reboot the instance.

Select sets by entering a set name, a file name pattern or 'all'.
Set name(s)? (or 'abort' or 'done') [done] -all
Set name(s)? (or 'abort' or 'done') [done] bsd bsd.rd base74.tgz bsd.mp man74.tgz
Set name(s)? (or 'abort' or 'done') [done]
Location of sets? (cd0 disk http nfs or 'done') [done]
Exit to (S)hell, (H)alt or (R)eboot? [reboot]
Install Cloud-init

In order for the machine to interact with the American Cloud CMP, cloud-init is required. Below are the necessary steps.

1. Install python and git using pkg_add python git.
2. When prompted, choose option 3: python-3.10.13.

vm-play-w-9b186d# pkg_add python git
quirks-6.160 signed on 2023-12-14T11:48:02Z
Ambiguous: choose package for python
a	0: <None>
	1: python-2.7.18p11
	2: python-3.9.18
	3: python-3.10.13
	4: python-3.11.5
Your choice:

3. Clone cloud-init using git clone https://github.com/canonical/cloud-init.git
4. Navigate to the cloud-init directory using cd cloud-init/
5. Install the tools by running ./tools/build-on-openbsd from within the cloud-init directory.
6. Install the preferred editor using pkg_add vim or pkg_add nano.
7. Edit the rc.local file by running vim /etc/rc.local and inserting the lines below under line number two. Also, comment out or remove the /usr/local/lib/cloud-init/ds-identify line.

rm -f var/run/.instance-id
rm -f var/run/instance-data
#/usr/local/lib/cloud-init/ds-identify

8. Edit the cloud.cfg file using the command vim /etc/cloud/cloud.cfg to match the below. Ensure the datasource_list is added.

datasource_list: [ CloudStack ]
datasource:
  CloudStack: {}
  None: {}

# The modules that run in the 'init' stage
cloud_init_modules:
  - seed_random
  - bootcmd
  - write_files
  - [set_hostname, always]
  - update_hostname
  - update_etc_hosts
  - ca_certs
  - rsyslog
  - users_groups
  - ssh

# The modules that run in the 'config' stage
cloud_config_modules:
  - ssh_import_id
  - keyboard
  - locale
  - [set_passwords, always]
  - ntp
  - timezone
  - disable_ec2_metadata
  - [runcmd, always]

# System and/or distro specific settings
# (not accessible to handlers/transforms)
system_info:
  # This will affect which distro class gets used
  distro: openbsd
  # Default user name + that default users groups (if added/used)
  default_user:
    name: cloud
    lock_passwd: False
    gecos: cloud
    groups: [sudo, wheel]
    doas:
      - permit nopass cloud
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/ksh
  network:
    renderers: ['openbsd']

9. Reboot the system.

The instance can now be managed completely through the American Cloud CMP. This can be tested by changing the hostname or password.
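To confirm the installation succeeded before relying on it, cloud-init's version and log can be checked from the instance. A minimal check, assuming build-on-openbsd placed the cloud-init command on the PATH and the default log location is used:

cloud-init --version
tail /var/log/cloud-init.log

After changing the hostname or password in the CMP and rebooting, the log should show the corresponding modules running.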

Last updated on Aug 30, 2024

American Cloud & Cloudflare Tunnels

Cert-less with American Cloud & Cloudflare Tunnels

Imagine you have just finished the first working version of your new NextJS app, which connects to a MongoDB database to store data for your users. There's also a database management layer in the form of a mongo-express container, which is just for administrators to manage your database. You've provisioned an American Cloud VM which is running your application nicely as a set of Docker containers, but now it's time to make it available to your users. The next steps will involve something like this:

1. Set up strong admin credentials to protect your mongo-express app with Basic Auth
2. Configure your firewall and port forwarding rules to allow TCP traffic over ports 80 and 443
3. Add a reverse proxy container like nginx or traefik to route HTTP traffic to your NextJS and mongo-express containers
4. Set up DNS rules to point your domain to your VM's public IP address
5. Set up certbot to obtain SSL certificates for your apps and enable secure HTTPS connections
6. Configure ACL rules to only allow known good IPs to connect to the mongo-express app

For many engineers, this is not troublesome. But others don't want to handle the cognitive burden of managing the networking and security of their applications. This can lead to unintentional misconfiguration and holes in security. Cloudflare Tunnels (or similar tunneling solutions) can simplify this.

Why Tunnels?

Using a tunnel to route traffic into your containers gives you the following benefits:

- No inbound ports need to be exposed
- No reverse proxy required
- No need to manage DNS records
- Built-in SSL/TLS
- Complex access rules for various situations
- Much simpler than a traditional port forwarding/firewall setup

In the above example, we can use Tunnels to:

1. Allow public access to your NextJS app
2. Restrict access to mongo-express to only users in your organization (emails ending in @mydomain.com)

Let's Get Started

1. First, make sure you have a Cloudflare account and have signed up for Zero Trust features. This may require a credit card for a free account, but that's all you will need.
2. Make sure you have at least one domain set up in Cloudflare, or you will not be able to route traffic into your VM once your tunnel is up.
3. Set up a tunnel by going to Networks -> Tunnels -> Create a Tunnel. Select Cloudflared as the tunnel type, and make a note of the token that gets generated for the next step.
4. Update your docker-compose.yaml file on your VM. Remove all port mappings, and ensure that you have a network configured that allows your containers to talk to each other:
services:
  nextjs-app:
    image: your-nextjs-app:latest
    environment:
      MONGODB_URI: "mongodb://mongouser:mysecurepassword@mongo:27017/"
    depends_on:
      - mongo
    networks:
      - app-network

  mongo:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongouser
      MONGO_INITDB_ROOT_PASSWORD: mysecurepassword
    volumes:
      - mongo-data:/data/db
    networks:
      - app-network

  mongo-express:
    image: mongo-express:latest
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: mongouser
      ME_CONFIG_MONGODB_ADMINPASSWORD: mysecurepassword
      ME_CONFIG_MONGODB_SERVER: mongo
    depends_on:
      - mongo
    networks:
      - app-network

  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    environment:
      - TUNNEL_TOKEN=MY-SECRET-TOKEN
    networks:
      - app-network
    command: tunnel run

volumes:
  mongo-data:

networks:
  app-network:
    driver: bridge

Be sure to replace placeholders like the database username and password, and "MY-SECRET-TOKEN" with the token you get from Cloudflare.

5. Add a Public Hostname to your Tunnel. Route app.mydomain.com as the hostname to the service http://nextjs-app:3000
6. Add another Public Hostname to your Tunnel. Route mongo-express.mydomain.com as the hostname to the service http://mongo-express:8081
7. To restrict access to mongo-express, go to Access -> Applications in Cloudflare. Add an application and call it mongo-express
8. Create an Access Group in Access -> Access groups and call it my-employees
9. In your my-employees Access Group, define the group criteria by adding an Include rule. Set that rule to "Emails ending in @mydomain.com"
10. Connect the Access rule. Go back to Access -> Applications, select mongo-express, add a Policy called mongo-express-employees, and inside that policy, click the checkbox to assign it to the group my-employees

That's it! Now, when you visit https://mongo-express.mydomain.com, you will be met with a Cloudflare page that will verify your email address by sending you a code.
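Before moving on, it is worth confirming that the tunnel actually connected. A quick check from the VM, assuming the compose file above is in the current directory (use docker-compose instead of docker compose on older installs):

docker compose up -d
docker compose logs cloudflared

The cloudflared logs should show the tunnel registering connections to Cloudflare's edge; once they do, the public hostnames configured above will begin serving traffic.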
Let's Take it a Step Further

Now, the only thing that could make this better would be if we could push code to a branch in GitHub or GitLab, have that kick off a CI pipeline, automatically build an image and host it on a private image registry, and somehow tell the VM to rebuild the container using the latest image. We can do that!

First, follow the steps to set up a GitHub workflow or GitLab CI pipeline that performs the build actions and pushes the resulting image to a private container registry. Then, we modify the above configuration to add a watchtower container, and use a webhook to notify it when a new version of your app is available.

Note: Make sure that you have performed a docker login on your VM if you are using a private container registry, so that watchtower can use the ~/.docker/config.json file to gain access to it.

services:
  nextjs-app:
    image: your-nextjs-app:latest
    environment:
      MONGODB_URI: "mongodb://mongouser:mysecurepassword@mongo:27017/"
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
    depends_on:
      - mongo
    networks:
      - app-network

  mongo:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongouser
      MONGO_INITDB_ROOT_PASSWORD: mysecurepassword
    volumes:
      - mongo-data:/data/db
    networks:
      - app-network

  mongo-express:
    image: mongo-express:latest
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: mongouser
      ME_CONFIG_MONGODB_ADMINPASSWORD: mysecurepassword
      ME_CONFIG_MONGODB_SERVER: mongo
    depends_on:
      - mongo
    networks:
      - app-network

  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    environment:
      - TUNNEL_TOKEN=MY-SECRET-TOKEN
    networks:
      - app-network
    command: tunnel run

  watchtower:
    image: containrrr/watchtower
    environment:
      WATCHTOWER_HTTP_API_TOKEN: MY-WATCHTOWER-TOKEN
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ~/.docker/config.json:/config.json:ro
    command: --interval 300 --cleanup --http-api-update --label-enable
    networks:
      - app-network

volumes:
  mongo-data:

networks:
  app-network:
    driver: bridge

Notice in particular that we have added a watchtower service and added a label to the NextJS app that allows watchtower to manage it.

Next, add a new Public Hostname to your tunnel. Create it as watchtower.mydomain.com and map it to http://watchtower:8080 as the service.

Then add a CI step in your GitHub workflow or GitLab CI that makes the following HTTP request against watchtower's update endpoint (replace MY-WATCHTOWER-TOKEN with the token you configured):

curl -H "Authorization: Bearer MY-WATCHTOWER-TOKEN" https://watchtower.mydomain.com/v1/update

That's it! Now, when you push to your CI-enabled branch or branches, the final step will reach out to your watchtower container and trigger a rebuild based on the latest image.
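For reference, the final CI step can be a couple of shell commands. A minimal sketch, assuming a hypothetical private registry at registry.example.com and a CI secret named WATCHTOWER_HTTP_API_TOKEN holding the watchtower token (both names are placeholders, not part of the American Cloud or Cloudflare setup):

docker build -t registry.example.com/your-nextjs-app:latest .
docker push registry.example.com/your-nextjs-app:latest
curl -fsS -H "Authorization: Bearer $WATCHTOWER_HTTP_API_TOKEN" https://watchtower.mydomain.com/v1/update

The push makes the new image available, and the curl call (the same request shown above) tells watchtower to pull it and restart the labeled containers.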

Last updated on Nov 08, 2024