American Cloud Tutorials

Deploying Web Applications with Kubernetes on American Cloud Kubernetes Service (ACKS)

Ensure No Other Proxies are Running on the local machine. Deploying Web Applications with Kubernetes on American Cloud Kubernetes Service (ACKS) Prerequisites - Install kubectl by following these instructions - Install helm by following these instructions - Owned domain with the ability to manage DNS - Dockerized application images in a public or private registry (extra steps in section Connecting to Private Image Repositories) 1. Provisioning Kubernetes Cluster - Choose a name, project, version, region, and node plan for your ACKS cluster. 2. Connecting to Kubernetes Cluster - Once the cluster is in "Running" state: - Download the cluster config file by clicking on "Download Config File" - Move the kube.conf file to a new directory. You'll be creating more files alongside it in order to set up your app. Note: kube.conf contains connection details on how your machine will connect and dispatch commands to the cluster. Every action will be of the form: kubectl --kubeconfig kube.conf unless you set it as the global kube config. - Set kube.conf as the default config by running export KUBECONFIG=kube.conf, or by copying the file to ~/.kube/config Note: Example 1-create-admin-user.yaml apiVersion: v1 kind: ServiceAccount metadata: name: admin-user namespace: kubernetes-dashboard --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: admin-user roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: admin-user namespace: kubernetes-dashboard This generates one user (the ServiceAccount) and gives it the permissions necessary to access the Dashboard (the ClusterRoleBinding) - Run kubectl apply -f 1-create-admin-user.yaml to create a user profile in order to generate access tokens to log in to the Dashboard. ac-demo % kubectl apply -f 1-create-admin-user.yaml serviceaccount/admin-user created clusterrolebinding.rbac.authorization.k8s.io/admin-user created - Run kubectl proxy in a new terminal to start the Dashboard UI locally. Leave this running in the background. - Open this url in your browser: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ - Run kubectl -n kubernetes-dashboard create token admin-user to get a fresh token, and paste it in the input field of the Dashboard login. - You will be met with an empty dashboard, and the namespace default selected. 3. Creating App Resources First we want to get our app running in its own pods. Then we can expose it. We are going to create: 1 Deployment (a prescriptive model of your application including environment variables, port mappings, and scaling details) 1 Service (a way of allowing external access into your application) If your images are hosted in a private repository, you will need to create 1 Secret as well (a protected resource containing repository access information, assuming your images are in a private registry) Connecting to Private Image Repositories Let's continue our example for now by pulling a public image which will run on internal port 8080. Note: Example 2-demo-app-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: demo-app namespace: default spec: replicas: 2 selector: matchLabels: app: demo-app strategy: type: RollingUpdate template: metadata: labels: app: demo-app spec: containers: - image: paulbouwer/hello-kubernetes:1.8 imagePullPolicy: IfNotPresent name: demo-app env: - name: MESSAGE value: Hello world! 
ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: name: demo-svc spec: type: ClusterIP ports: - port: 80 targetPort: 8080 selector: app: demo-app Deploy by running kubectl apply -f 2-demo-app-deployment.yaml You can check on your resources by running kubectl get pods and kubectl get svc, or by checking in your Dashboard: Congratulations! Your application is running in Kubernetes. 4. Exposing Your App Next, we must create LoadBalancer and Ingress resources to allow external access. We start by installing the Kubernetes Nginx Ingress Controller ac-demo % helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx "ingress-nginx" has been added to your repositories ac-demo % helm repo update Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "ingress-nginx" chart repository Update Complete. ⎈Happy Helming!⎈ ac-demo % helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.publishService.enabled=true NAME: nginx-ingress LAST DEPLOYED: Tue Oct 25 20:40:16 2022 NAMESPACE: default STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: The ingress-nginx controller has been installed. It may take a few minutes for the LoadBalancer IP to be available. You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-ingress-nginx-controller' An example Ingress that makes use of the controller: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example namespace: foo spec: ingressClassName: nginx rules: - host: www.example.com http: paths: - pathType: Prefix backend: service: name: exampleService port: number: 80 path: / # This section is only required if TLS is to be enabled for the Ingress tls: - hosts: - www.example.com secretName: example-tls If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided: apiVersion: v1 kind: Secret metadata: name: example-tls namespace: foo data: tls.crt: <base64 encoded cert> tls.key: <base64 encoded key> type: kubernetes.io/tls Take note of the new public ip after a couple minutes by running kubectl --namespace default get services -o wide -w nginx-ingress-ingress-nginx-controller Now we create an Ingress to point traffic to the LoadBalancer: Note: Example 3-nginx-ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: demo-ingress annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: "demo.your_domain_name" http: paths: - pathType: Prefix path: "/" backend: service: name: demo-svc port: number: 80 Before we apply it, we need to ensure that we have a DNS A record pointing your domain to the new public ip of your LoadBalancer. Apply the Ingress: kubectl apply -f 3-nginx-ingress.yaml Go to https://demo.your_domain_name and see the Hello Kubernetes app! 5. Securing Your App Now we need to get SSL / HTTPS playing nicely. ac-demo % kubectl create namespace cert-manager namespace/cert-manager created ac-demo % helm repo add jetstack https://charts.jetstack.io "jetstack" has been added to your repositories ac-demo % helm repo update Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "jetstack" chart repository ...Successfully got an update from the "ingress-nginx" chart repository Update Complete. 
⎈Happy Helming!⎈ ac-demo % helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.6.0 --set installCRDs=true NAME: cert-manager LAST DEPLOYED: Tue Oct 25 21:05:28 2022 NAMESPACE: cert-manager STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: cert-manager v1.6.0 has been deployed successfully! In order to begin issuing certificates, you will need to set up a ClusterIssuer or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer). More information on the different types of issuers and how to configure them can be found in our documentation: https://cert-manager.io/docs/configuration/ For information on how to configure cert-manager to automatically provision Certificates for Ingress resources, take a look at the `ingress-shim` documentation: https://cert-manager.io/docs/usage/ingress/ Note: Example 4-production-issuer.yaml apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-prod spec: acme: # Email address used for ACME registration email: your_email_address server: https://acme-v02.api.letsencrypt.org/directory privateKeySecretRef: # Name of a secret used to store the ACME account private key name: letsencrypt-prod-private-key # Add a single challenge solver, HTTP01 using nginx solvers: - http01: ingress: class: nginx ac-demo % kubectl apply -f 4-production-issuer.yaml clusterissuer.cert-manager.io/letsencrypt-prod created Update the Ingress by using a new config file: Note: Example 5-nginx-ingress-secured.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: demo-ingress annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: letsencrypt-prod spec: tls: - hosts: - demo.your_domain secretName: demo-tls rules: - host: "demo.your_domain" http: paths: - pathType: Prefix path: "/" backend: service: name: demo-svc port: number: 80 ac-demo % kubectl apply -f 5-nginx-ingress-secured.yaml ingress.networking.k8s.io/demo-ingress configured Connecting to Private Image Repositories In order to connect to a private image or package repository, a token with sufficient access to pull images needs to be encoded and stored in Kubernetes as a Secret. In this example, we will be connecting to a private registry (GHCR: GitHub Container Registry) which contains a Docker image with a NextJS web application. We create a new personal access token with scope read:packages by visiting https://github.com/settings/tokens/new?scopes=read:packages We are granted a token, in this example: ghp_vMutK7pgmY1d6hOpF9vGeVpcUB34fd0i7O0j We need a base64 encoded string which contains the username and the token: ac-demo % echo -n "github-username:ghp_vMutK7pgmY1d6hOpF9vGeVpcUB34fd0i7O0j" | base64< Z2l0aHViLXVzZXJuYW1lOmdocF92TXV0SzdwZ21ZMWQ2aE9wRjl2R2VWcGNVQjM0ZmQwaTdPMGo= Create a new file, .dockerconfigjson, with the following content: { "auths": { "https://ghcr.io/ORGANIZATION_NAME/IMAGE_REPOSITORY_NAME":{ "username":"github-username", "password":"ghp_vMutK7pgmY1d6hOpF9vGeVpcUB34fd0i7O0j", "email":"YOUR_EMAIL", "auth":"Z2l0aHViLXVzZXJuYW1lOmdocF92TXV0SzdwZ21ZMWQ2aE9wRjl2R2VWcGNVQjM0ZmQwaTdPMGo=" } } } Note: This docker config format can be used to authenticate any Docker image repository, not just GHCR Now encode this entire file, which we will save as the secret. 
ac-demo % cat .dockerconfigjson | base64 ewogICAgImF1dGhzIjogewogICAgICAgICJodHRwczovL2doY3IuaW8vT1JHQU5JWkFUSU9OX05BTUUvSU1BR0VfUkVQT1NJVE9SWV9OQU1FIjp7CiAgICAgICAgICAgICJ1c2VybmFtZSI6ImdpdGh1Yi11c2VybmFtZSIsCiAgICAgICAgICAgICJwYXNzd29yZCI6ImdocF92TXV0SzdwZ21ZMWQ2aE9wRjl2R2VWcGNVQjM0ZmQwaTdPMGoiLAogICAgICAgICAgICAiZW1haWwiOiJZT1VSX0VNQUlMIiwKICAgICAgICAgICAgImF1dGgiOiJaMmwwYUhWaUxYVnpaWEp1WVcxbE9tZG9jRjkyVFhWMFN6ZHdaMjFaTVdRMmFFOXdSamwyUjJWV2NHTlZRak0wWm1Rd2FUZFBNR289IgogICAgCX0KICAgIH0KfQ== This is the configuration file which will be used to create the Secret, along with a Deployment which uses it to connect to the image repository. apiVersion: v1 kind: Secret metadata: name: registry-credentials namespace: default type: kubernetes.io/dockerconfigjson data: .dockerconfigjson: ewogICAgImF1dGhzIjogewogICAgICAgICJodHRwczovL2doY3IuaW8vT1JHQU5JWkFUSU9OX05BTUUvSU1BR0VfUkVQT1NJVE9SWV9OQU1FIjp7CiAgICAgICAgICAgICJ1c2VybmFtZSI6ImdpdGh1Yi11c2VybmFtZSIsCiAgICAgICAgICAgICJwYXNzd29yZCI6ImdocF92TXV0SzdwZ21ZMWQ2aE9wRjl2R2VWcGNVQjM0ZmQwaTdPMGoiLAogICAgICAgICAgICAiZW1haWwiOiJZT1VSX0VNQUlMIiwKICAgICAgICAgICAgImF1dGgiOiJaMmwwYUhWaUxYVnpaWEp1WVcxbE9tZG9jRjkyVFhWMFN6ZHdaMjFaTVdRMmFFOXdSamwyUjJWV2NHTlZRak0wWm1Rd2FUZFBNR289IgogICAgCX0KICAgIH0KfQ== --- apiVersion: apps/v1 kind: Deployment metadata: name: demo-app namespace: default spec: replicas: 2 selector: matchLabels: app: demo-app strategy: type: RollingUpdate template: metadata: labels: app: demo-app spec: containers: - image: ghcr.io/ORGANIZATION_NAME/IMAGE_REPOSITORY_NAME imagePullPolicy: IfNotPresent name: demo-app env: - name: REACT_APP_ENVIRONMENT value: PROD ports: - containerPort: 8080 imagePullSecrets: - name: registry-credentials Use Traefik Ingress (Instead of NGINX) In order to use traefik as an ingress controller, simply run these commands and apply this traefik ingress file instead of using nginx. Note: You still need to configure an A record to point to your domain. helm repo add traefik https://helm.traefik.io/traefik helm repo update helm install traefik traefik/traefik traefik-ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: demo-ingress annotations: kubernetes.io/ingress.class: traefik spec: rules: - host: "demo.your_domain" http: paths: - pathType: Prefix path: "/" backend: service: name: demo-svc port: number: 80 ac-demo % kubectl apply -f traefik-ingress.yaml ingress.networking.k8s.io/demo-ingress created Enable Autoscaling for your App In order to enable Kubernetes autoscaling follow our Kubernetes Autoscaling Guide.
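Once DNS, the Ingress, and the ClusterIssuer are in place, a few quick checks can confirm that the app is reachable over HTTPS and that cert-manager issued the certificate. This is a minimal verification sketch, not part of the original walkthrough; demo.your_domain_name is the placeholder hostname used throughout this guide, and demo-tls is the secretName from the secured Ingress.

# Confirm the certificate was issued (READY should show True)
kubectl get certificate demo-tls

# Confirm the Ingress picked up the LoadBalancer address
kubectl get ingress demo-ingress

# Hit the app over HTTPS; expect an HTTP 200 from the hello-kubernetes page
curl -I https://demo.your_domain_name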

Last updated on Feb 27, 2025

Cockpit Installation

How to Install Cockpit on Linux Cockpit is a tool for server administration that provides you with real-time information about your server's status. It displays data on CPU usage, filesystem statistics, processes, and other relevant details. One of the advantages of using Cockpit is that it does not consume any server resources until you log in to the control panel. The service is only activated when you access the control panel. Cockpit enables you to perform various server administration tasks, such as managing users and addressing network issues. It also allows you to access a terminal from your computer or phone's browser. To log in and manage the system, Cockpit utilizes your system's users and sudo for privilege escalation. As a result, it does not introduce an additional layer of security considerations by creating a separate set of Cockpit-only users for your server. Instructions Using the below guides you can install Cockpit on various different Linux OS's. Ubuntu Ubuntu 17.04 and later: 1. Install cockpit: . /etc/os-release sudo apt install -t ${VERSION_CODENAME}-backports cockpit 2. Enable cockpit: sudo systemctl enable --now cockpit.socket Fedora 1. Install cockpit: sudo dnf install cockpit 2. Enable cockpit: sudo systemctl enable --now cockpit.socket 3. Ensure that the firewall is open: sudo firewall-cmd --add-service=cockpit sudo firewall-cmd --add-service=cockpit --permanent CentOS CentOS 7 and later: 1. Install cockpit: sudo yum install cockpit 2. Enable cockpit: sudo systemctl enable --now cockpit.socket 3. Open the firewall: sudo firewall-cmd --permanent --zone=public --add-service=cockpit sudo firewall-cmd --reload Debian Debian 10 and later: 1. To get the latest version, we recommend to enable the backports repository (as root): . /etc/os-release echo "deb http://deb.debian.org/debian ${VERSION_CODENAME}-backports main" > \ /etc/apt/sources.list.d/backports.list apt update 2. Install or update the package: apt install -t ${VERSION_CODENAME}-backports cockpit Rocky Linux Rocky Linux 8 and later: 1. Install cockpit sudo yum install cockpit 2. Enable cockpit: sudo systemctl enable --now cockpit.socket 3. Allow port through firewall: sudo firewall-cmd --add-service=cockpit --permanent sudo firewall-cmd --reload
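After enabling cockpit.socket on any of the distributions above, the web console listens on port 9090 by default. The short sketch below shows one way to confirm the service is up before opening the browser; it uses standard systemd and ss commands and is not part of the original instructions.

# The socket should be active and listening on port 9090
sudo systemctl status cockpit.socket
sudo ss -tlnp | grep 9090

# Then browse to the console and log in with a system user:
#   https://<server-ip>:9090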

Last updated on Aug 01, 2024

Connecting Juice FS to American Cloud A2 Object Storage

1. Log in to the Web Portal with a valid American Cloud account.
2. On the left navigation column choose 'Object Storage'.
3. Click on 'Create A2 Storage Unit' to create object storage.
4. Choose and fill out the following information: location, project, A2 storage name, and volume size.
5. Obtain the A2 access key and secret key, which will be needed to configure JuiceFS.
6. Under the newly created A2 storage, click on 'Create Bucket'.
7. Enter a custom name in 'Bucket Name'.
8. From your JuiceFS machine, use the JuiceFS CLI tool to create a new filesystem backed by the American Cloud A2 bucket you created in step 7. You can use the following command, replacing the values in brackets with your own information:
juicefs create --backend s3 \
  --bucket [your-bucket-name] \
  --access-key [your-access-key] \
  --secret-key [your-secret-key] \
  --endpoint https://a2-west.americancloud.com/juicefsa2bucket \
  --region a2-west
Note: Replace the --endpoint and --region values with the region where you created your American Cloud A2 bucket.
9. Once your new filesystem is created, you can mount it on your JuiceFS machine and use it like any other filesystem. For example, you can use the following command to mount the filesystem:
sudo juicefs mount [local-mount-point] [juicefs-mount-point]
Note: Replace [local-mount-point] with the path where you want to mount the filesystem on your JuiceFS machine, and replace [juicefs-mount-point] with the name of the JuiceFS filesystem you created in step 8.
10. You can also share the American Cloud A2-backed filesystem with other machines by exporting the mount point over NFS or another network filesystem protocol.
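To confirm the mount is working, you can write and read a small test file on the mounted path. This is a minimal sketch and not part of the original steps; /mnt/jfs is only an illustrative mount point and should match whatever [local-mount-point] you chose above.

# Hypothetical mount point used for illustration; substitute your own
echo "hello from JuiceFS" | sudo tee /mnt/jfs/hello.txt
cat /mnt/jfs/hello.txt
df -h /mnt/jfs    # the filesystem should appear with its configured capacity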

Last updated on Aug 30, 2024

s3cmd (Simple Storage Service Command Line Tool and API)

Install s3cmd The S3 API allows developers to interact with S3 storage resources programmatically, enabling them to create, manage, and manipulate objects (files) in S3 buckets (containers) using various operations. The S3 API is RESTful (Representational State Transfer) in nature, which means it follows the principles of the REST architecture and uses standard HTTP methods, such as GET, PUT, POST, DELETE, etc., to perform operations on S3 objects and buckets. The S3 API supports both synchronous and asynchronous operations, allowing developers to interact with S3 in real-time or perform batch operations as needed. The s3 API requires install and configuration. Follow the steps below to get started: Install To install run the apt install as shown below. sudo apt install s3cmd - The system will provide warning of packages that'll be installed and size requirements. When prompted type 'y -> enter' to continue. The following NEW packages will be installed: python3-dateutil s3cmd 0 upgraded, 2 newly installed, 0 to remove and 1 not upgraded. Need to get 199 kB of archives. After this operation, 861 kB of additional disk space will be used. Do you want to continue? [Y/n] - If running homebrew for Mac use brew install. brew install s3cmd Configure s3cmd URL and API Keys The URL and API keys are required for configuration. Follow the steps below to configure s3cmd with the American Cloud CMP. 1. Sign into American Cloud CMP 2. On the left navigation column choose 'Storage' 3. Choose the A2 Object Storage header tab select 'Manage' Endpoint URL The S3 API endpoint URL is the web address used to interact with Amazon S3 programmatically. It specifies the S3 service region and provides a RESTful interface for performing operations on S3 objects and buckets via HTTP/HTTPS requests. The endpoint should follow the below configuration. 1. Select Object Storage 'Settings' 2. Select 'Keys' inside settings menu API Keys S3 API keys are access credentials that enable programmatic interaction with Amazon S3. They consist of an access key and a secret access key, and are used to authenticate requests to perform operations on S3 objects and buckets via the S3 API. Configure Using s3 Wizard The S3 configuration wizard is a tool that guides users through the process of setting up and configuring an Object Storage bucket. It provides step-by-step instructions for configuring access permissions, encryption, and other settings for the bucket.- Start the configurer by entering the below command. Below are the steps to configure with American Cloud's A2 Object Storage. Start the Wizard Once install, s3 commands will be recognized. To start the configuration wizard run the following: s3cmd --configure - Once started a readout like below will populate asking for the access key. cloud@Compute-1:~$ s3cmd --configure Enter new values or accept defaults in brackets with Enter. Refer to user manual for detailed description of all options. Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables. Access Key: Key Placement Identification of American Object Storage Keys was explained in previous steps. Easily copy/paste the keys in the respective location as requested by the configuration wizard. [US] for connecting can be left as default. Access Key: EXAMPLEJYJGRYBV6X Secret Key: EXAMPLEIuxqWjBad31hjQi3Eo97YM4 Default Region [US]: Endpoint URL Identification of American Object Storage endpoint URL was explained in previous steps. Easily copy/paste the American Cloud endpoint. 
Below example is for buckets within American Cloud West region. This may change for some. Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3. S3 Endpoint [s3.amazonaws.com]: a2-west.americancloud.com DNS The S3 configuration wizard prompts for a URL template to access the bucket. Using the variable %(bucket)s. For this example in American Cloud object storage place 'n' for NO and press enter. Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used if the target S3 system supports dns based buckets. DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: Encryption Password The S3 configuration wizard may prompt for an optional encryption password. GPG encryption protects files both in transit and while stored on American Cloud's A2 Object Storage, unlike HTTPS which only protects files in transit. Encryption password is used to protect your files from reading by unauthorized persons while in transfer to S3 Encryption password: Path to GPG If using GPG a path to is required in the next prompt. On linux machines leave the default. Path to GPG program: HTTPS Next the wizard will prompt to use HTTPS which will protect traffic while being transmitted. Press enter to except the default of [Yes]. When using secure HTTPS protocol all communication with Amazon S3 servers is protected from 3rd party eavesdropping. This method is slower than plain HTTP, and can only be proxied with Python 2.7 or newer Use HTTPS protocol [Yes]: Proxy Leave blank and press enter unless running proxy. If so place IP or domain here. On some networks all internet access must go through a HTTP proxy. Try setting it here if you can't connect to S3 directly HTTP Proxy server name: Validate and Test The wizard will provide an overview of all newly assigned settings. Validate all settings are correct and input 'Y' to run test with provided settings. New settings: Access Key: Secret Key: Default Region: US S3 Endpoint: s3.amazonaws.com DNS-style bucket+hostname: n Encryption password: Path to GPG program: None Use HTTPS protocol: True HTTP Proxy server name: HTTP Proxy server port: 0 Test access with supplied credentials? [Y/n] Save Next prompt will ask if saving is desired. If yes type 'Y' press enter. If settings are saved the settings will be placed within the .s3cfg file for further use. If no the settings will be deleted. Save settings? [y/N] Retry The wizard next provides the option to retry the configuration. Select between Y/n. Retry configuration? [Y/n] - As previously mentioned, if chosen to save the configuration settings in the wizard, the settings will be stored in the .s3cfg file. If changes to the configuration settings are neccessary, such as generating new keys, easily access the .s3cfg file by running the command vi .s3cfg in your terminal or command prompt, and then edit the document accordingly. 
[default] access_key = ACCESS KEY HERE secret_key = SECRET KEY HERE access_token = add_encoding_exts = add_headers = bucket_location = US ca_certs_file = cache_file = check_ssl_certificate = True check_ssl_hostname = True cloudfront_host = cloudfront.amazonaws.com connection_max_age = 5 connection_pooling = True content_disposition = content_type = default_mime_type = binary/octet-stream delay_updates = False delete_after = False delete_after_fetch = False delete_removed = False dry_run = False enable_multipart = True encoding = UTF-8 encrypt = False expiry_date = expiry_days = expiry_prefix = follow_symlinks = False force = False get_continue = False gpg_command = None gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s gpg_passphrase = guess_mime_type = True host_base = a2-west.americancloud.com host_bucket = a2-west.americancloud.com human_readable_sizes = True invalidate_default_index_on_cf = False invalidate_default_index_root_on_cf = True invalidate_on_cf = False kms_key = limit = -1 limitrate = 0 list_allow_unordered = False list_md5 = False log_target_prefix = long_listing = False max_delete = -1 mime_type = multipart_chunk_size_mb = 500 multipart_copy_chunk_size_mb = 2048 multipart_max_chunks = 10000 preserve_attrs = True progress_meter = True proxy_host = proxy_port = 0 public_url_use_https = True put_continue = False recursive = False recv_chunk = 65536 reduced_redundancy = False requester_pays = False restore_days = 1 restore_priority = Standard send_chunk = 65536 server_side_encryption = False signature_v2 = False signurl_use_https = True simpledb_host = sdb.amazonaws.com skip_existing = False socket_timeout = 300 ssl_client_cert_file = ssl_client_key_file = stats = False stop_on_error = False storage_class = throttle_max = 100 upload_id = urlencoding_mode = normal use_http_expect = False use_https = True use_mime_magic = True verbosity = WARNING website_endpoint = https://a2-west.americancloud.com/ website_error = website_index = index.html`^` Add Buckets Object storage buckets are containers for storing and organizing large volumes of unstructured data, such as files, images, and videos, in the cloud. They provide scalable, durable, and cost-effective storage solutions, allowing users to upload, retrieve, and manage data using APIs or web interfaces. Below are list of commands for adding buckets. Make Bucket Command # Use mb (make bucket) command s3cmd mb s3://americancloud-1 cloud@Compute-1:~$ s3cmd mb s3://americancloud-1 Bucket 's3://americancloud-1/' created - Using the ls command list buckets. s3cmd ls - The new bucket is listed. cloud@Compute-1:~$ s3cmd ls 2023-04-19 23:46 s3://americancloud-1 2023-04-19 21:02 s3://bucketac2 - As expected the bucket has been placed inside AC CMP. Removing a Bucket Removing buckets is a process that permanently deletes a bucket and all its objects. To remove a bucket, the user must have appropriate permissions, and all objects within the bucket must be deleted first. Once a bucket is removed, its data cannot be recovered. It is important to exercise caution and ensure backups are in place before deleting any buckets in S3. Remove Bucket Command s3cmd rb cloud@Compute-1:~$ s3cmd rb s3://ac-123 Bucket 's3://ac-123/' removed - CMP side the bucket has been removed as well. 
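Before removing a bucket, it can be useful to check what it holds and how much space it uses. This is a short sketch using standard s3cmd subcommands; americancloud-1 is the example bucket name used above.

# Show total size and object count for the bucket
s3cmd du s3://americancloud-1

# Review bucket details such as location and policy
s3cmd info s3://americancloud-1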
List Buckets and Files Listing files in S3 involves retrieving a list of objects (files) stored within a specific bucket. The list typically includes information such as object names, sizes, and metadata. It can be useful for navigating and managing objects in S3, including copying, deleting, or downloading files. Proper access permissions and authentication are required to list files in S3, ensuring data security and privacy. List command - The ls command will list the buckets within Object Storage. s3cmd ls cloud@Compute-1:~$ s3cmd ls 2023-04-19 23:46 s3://americancloud-1 2023-04-20 17:51 s3://americancloud-2 - List files within a bucket by running the ls s3://*bucketname. s3cmd ls s3://americancloud-1 cloud@Compute-1:~$ s3cmd ls s3://americancloud-1 2023-04-20 03:16 89.6904296875k s3://americancloud-1/AC is Awesome.pages 2023-04-20 00:39 148.197265625k s3://americancloud-1/Screenshot 2023-04-19 at 6.52.20 PM-20230420123958.png 2023-04-20 00:20 148.90234375k s3://americancloud-1/Screenshot 2023-04-19 at 7.19.54 PM-20230420122021.png 2023-04-20 17:15 0 s3://americancloud-1/americancloudisawesome.txt 2023-04-20 17:14 0 s3://americancloud-1/sample.txt - Additionally, list all files within all buckets by executing s3cmd la. s3cmd la cloud@Compute-1:~$ s3cmd la 2023-04-20 03:16 89.6904296875k s3://americancloud-1/AC is Awesome.pages 2023-04-20 00:39 148.197265625k s3://americancloud-1/Screenshot 2023-04-19 at 6.52.20 PM-20230420123958.png 2023-04-20 00:20 148.90234375k s3://americancloud-1/Screenshot 2023-04-19 at 7.19.54 PM-20230420122021.png 2023-04-20 17:15 0 s3://americancloud-1/americancloudisawesome.txt 2023-04-20 17:14 0 s3://americancloud-1/sample.txt 2023-04-20 17:52 0 s3://americancloud-2/americancloudisawesome.txt Add Files The "put" command in S3 is a command-line operation that allows users to upload (put) objects (files) from their local system to an S3 bucket. The "put" command requires specifying the source file path, destination S3 bucket name, and object key (file name) to store the object in S3. Proper permissions and authentication are necessary for successful object uploads. PUT Command - Single file upload s3cmd put /file s3://americancloud-1 s3cmd put americancloudisawesome.txt s3://americancloud-1 upload: 'americancloudisawesome.txt' -> 's3://americancloud-1/americancloudisawesome.txt' [1 of 1] 0 of 0 0% in 0s 0.00 B/s done - Multiple file upload s3cmd put ac1.txt ac2.txt path/to/ac3.txt s3://americancloud-1 s3cmd put acisawesome.txt americancloudisawesome.txt s3://bucketac4 upload: 'acisawesome.txt' -> 's3://bucketac4/acisawesome.txt' [1 of 2] 0 of 0 0% in 0s 0.00 B/s done upload: 'americancloudisawesome.txt' -> 's3://bucketac4/americancloudisawesome.txt' [2 of 2] 0 of 0 0% in 0s 0.00 B/s done - Change name during upload s3cmd put ac1.txt s3://americancloud-1/newname.txt s3cmd put test.txt s3://bucketac4/ac-2.txt upload: 'test.txt' -> 's3://bucketac4/ac-2.txt' [1 of 1] 0 of 0 0% in 0s 0.00 B/s done - If desired an entire director can be moved using 'sync' command. 
Idea for backup scenarios cloud@Compute-AC-9:~$ s3cmd sync /home/cloud s3://bucketac4 upload: '/home/cloud/.bash_history' -> 's3://bucketac4/cloud/.bash_history' [1 of 12] 0 of 0 0% in 0s 0.00 B/s done upload: '/home/cloud/.bash_logout' -> 's3://bucketac4/cloud/.bash_logout' [2 of 12] 220 of 220 100% in 0s 7.99 KB/s done upload: '/home/cloud/.bashrc' -> 's3://bucketac4/cloud/.bashrc' [3 of 12] Retrieving Files To retrieve files in S3, a cloud-based object storage service, you can use the S3 API or S3 console. First, authenticate and authorize access, then specify the S3 bucket and object key to identify the file. Use the appropriate method, such as GET, to retrieve the file from S3. Optionally, you can configure access control and encryption settings for added security. GET Command Single file download s3cmd get s3://[bucketname]/filename s3cmd get s3://bucketac4/ac-2.txt download: 's3://bucketac4/ac-2.txt' -> './ac-2.txt' [1 of 1] 0 of 0 0% in 0s 0.00 B/s done Multiple file download s3cmd get s3://bucketac4/test1.txt s3://bucketac4/test2.txt download: 's3://bucketac4/test1.txt' -> './test1.txt' [1 of 2] 0 of 0 0% in 0s 0.00 B/s done download: 's3://bucketac4/test2.txt' -> './test2.txt' [2 of 2] 0 of 0 0% in 0s 0.00 B/s done Change file name s3cmd get s3://[bucketname]/filename newfilename s3cmd get s3://bucketac4/ac-4.txt ac-5.txt --recursive download: 's3://bucketac4/ac-4.txt' -> 'ac-5.txt' [1 of 1] 0 of 0 0% in 0s 0.00 B/s done Use of --recursive. To pull all files from a bucket use the recursive flag. s3cmd get s3://[bucketname]/ --recursive s3cmd get s3://bucketac4/ --recursive download: 's3://bucketac4/Screenshot 2023-04-18 at 11.03.46 PM-20230423120714.png' -> './Screenshot 2023-04-18 at 11.03.46 PM-20230423120714.png' [1 of 10] 512226 of 512226 100% in 0s 1229.82 KB/s done download: 's3://bucketac4/Screenshot 2023-04-21 at 5.20.19 PM-20230423120721.png' -> './Screenshot 2023-04-21 at 5.20.19 PM-20230423120721.png' [2 of 10] 42789 of 42789 100% in 0s 432.86 KB/s done Removing Files Deleting a file in S3 is a straightforward process. Deleted files cannot be retrieved. Remove Command Remove files s3cmd rm s3://[bucketname]/filename s3cmd rm s3://bucketac4/ac-5.txt delete: 's3://bucketac4/ac-5.txt' Remove multiple files s3cmd rm s3://bucketac4/ac-2.txt s3://bucketac4/ac-4.txt delete: 's3://bucketac4/ac-2.txt' delete: 's3://bucketac4/ac-4.txt' Remove all files from a bucket use the recursive and force flag. s3cmd rm s3://[bucketname]/ --recursive --force s3cmd get s3://bucketac4/ --recursive download: 's3://bucketac4/Screenshot 2023-04-18 at 11.03.46 PM-20230423120714.png' -> './Screenshot 2023-04-18 at 11.03.46 PM-20230423120714.png' [1 of 10] 512226 of 512226 100% in 0s 1229.82 KB/s done download: 's3://bucketac4/Screenshot 2023-04-21 at 5.20.19 PM-20230423120721.png' -> './Screenshot 2023-04-21 at 5.20.19 PM-20230423120721.png' [2 of 10] 42789 of 42789 100% in 0s 432.86 KB/s done
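The sync command shown above can also keep a local directory and a bucket in step on a schedule, which is a common pattern for simple backups. This is a hedged sketch, not part of the original guide: the paths, the bucket name, and the use of --delete-removed (which deletes remote objects that no longer exist locally) are illustrative choices.

# Mirror a local directory into the bucket, removing remote copies of deleted files
s3cmd sync --delete-removed /home/cloud/backups/ s3://bucketac4/backups/

# Example cron entry (run daily at 02:00) - add with 'crontab -e'
# 0 2 * * * /usr/bin/s3cmd sync --delete-removed /home/cloud/backups/ s3://bucketac4/backups/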

Last updated on Aug 30, 2024

Setting Domain Registrar's Nameservers to American Cloud Nameservers

Setting your Domain Registrar's Nameservers to American Cloud's Nameservers

Although American Cloud is not a domain registrar, our free DNS Manager will work with any domain registrar. To use the American Cloud DNS Manager, you will need to set your domain registrar to use American Cloud's nameservers. Below are step-by-step guides on how to find your domain registrar and how to change nameservers for popular registrars. American Cloud's nameservers can be found in the American Cloud App in your DNS Management portal.

Looking up your Domain Registrar
To look up your domain's registrar, go to https://www.whois.com/whois/, enter your domain, and click search. The results will provide your domain registrar information.

Changing your Nameservers
Now that you know your domain registrar, log in to your registrar account. Once you are logged in, use the guide below for your registrar to change the nameservers.

easyDNS
1. Log into your easyDNS account.
2. Click on WHOIS.
3. Under NAME SERVERS click on EDIT.
4. Enter your name servers in the spaces provided. You can also click on the link to use the default easyDNS name servers for your domain.
5. Click NEXT.
6. Confirm your changes.

NameCheap
1. Sign in to your Namecheap account.
2. Select Domain List from the left sidebar and click the Manage button next to your domain.
3. Find the Nameservers section and select your preferred option from the drop-down menu. Click on the green checkmark to save the changes.

GoDaddy
1. Sign in to your GoDaddy Domain Portfolio.
2. Select the checkbox for the domain being changed.
3. Select Nameservers from the action menu.
4. Choose the nameserver setting "I'll use my own nameservers".
5. Enter your custom nameservers.
6. Select Save, then Continue to complete your updates.

HostGator
1. Sign in to your HostGator Customer Portal.
2. Click on Domains in the left menu.
3. Click on the More button for the domain to be updated.
4. Click on the Change link under Name Servers.
5. Enter American Cloud's nameservers.

WARNING: After changing nameservers at the registrar, DNS propagation can take 24-48 hours, during which your website and email may be unavailable.
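Once the registrar update has propagated, you can verify from a terminal that the domain is delegated to the new nameservers. This is a small sketch; example.com stands in for your own domain, and the expected output depends on the nameserver hostnames shown in your DNS Management portal.

# Query the authoritative nameservers published for the domain
dig NS example.com +short

# Or check the registrar's WHOIS record directly
whois example.com | grep -i "name server"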

Last updated on Aug 30, 2024

Installing Ubuntu GUI

Ubuntu GUI Install

American Cloud VMs are virtualized computing resources, allowing users to run virtual instances of operating systems and applications in the cloud. GUI integration can make it easier for users to interact with these VMs, especially for tasks that require visual feedback or interaction, such as managing the VM's configuration, accessing file systems, or installing software with graphical installers.

Update & Upgrade System
sudo apt update && sudo apt upgrade
American Cloud utilizes a specific cloud.cfg file so customers can manage their VMs via their CMP dashboard. While running the update and upgrade, the system will ask whether to use the currently configured cloud.cfg or the standard Ubuntu version. Choose 'N' so management through the CMP remains possible.

Install xrdp
xrdp is a software tool that allows remote desktop protocol (RDP) connections to Linux-based operating systems such as Ubuntu. It enables remote access to Ubuntu through a graphical user interface (GUI) from another device over a network connection.
- To establish a remote connection to the Ubuntu OS, xrdp will be utilized. Install xrdp on the Virtual Machine with the command below.
sudo apt-get install xrdp
Newly installed packages and space requirements will be listed by the system. Select 'y' and press enter when prompted to continue.
0 upgraded, 266 newly installed, 0 to remove and 0 not upgraded. Need to get 130 MB of archives. After this operation, 489 MB of additional disk space will be used. Do you want to continue? [Y/n] y
- Enable the service with systemctl:
sudo systemctl enable xrdp

Firewall Settings
- In the American Cloud CMP create a new firewall rule allowing port 3389. For a detailed firewall explanation, Click Here.
- If utilizing UFW on Linux, ensure port 3389 is open for communication with the following command:
sudo ufw allow 3389/tcp

Install Ubuntu Desktop
Ubuntu Desktop is a popular Linux-based operating system designed for desktop and laptop computers. It provides a user-friendly interface with a graphical desktop environment, offering a wide range of pre-installed applications for productivity, web browsing, multimedia, and more. Ubuntu Desktop is known for its stability, security, and open-source nature, making it a popular choice for individuals, businesses, and educational institutions seeking a free and powerful operating system.
- The command below installs Ubuntu Desktop. This will take several minutes to finish.
sudo apt-get install ubuntu-desktop
- Reboot the Virtual Machine to ensure everything gets saved properly.
sudo reboot

Mac: Connect to GUI On Mac
Note: For this tutorial Microsoft Remote Desktop will be used. In the App Store, search for and install Microsoft Remote Desktop. Several other remote desktop applications may work as well.
Download Microsoft Remote Desktop
In the App Store search field type 'Microsoft Remote Desktop' and press enter. The first application listed will be Microsoft Remote Desktop; select 'GET'. After a few seconds the application will be downloaded and installed on the system.
Using Microsoft Remote Desktop
Microsoft Remote Desktop is a software application that allows users to remotely access and control Windows-based computers or servers from another device, such as a computer, tablet, or mobile device. It uses the Remote Desktop Protocol (RDP) to establish a secure connection between the local device and the remote computer, enabling users to interact with the remote desktop as if they were physically present at that computer. Microsoft Remote Desktop is widely used for remote work, technical support, and server administration, among other purposes.
- Follow the steps below to connect to the Ubuntu Desktop previously installed:
1. Select 'Launchpad' from the toolbar.
2. Select 'Microsoft Remote Desktop'.
Input Connection Information
- The application will launch with a single 'Add PC' button. Select 'Add PC', or select the '+' icon in the toolbar at the top.
- A new window will appear requesting PC information. In the PC name field, input the Public IP of the virtual machine Ubuntu Desktop is running on. Additionally, if desired, add and save the user account information. Once complete select Add. For help finding the Public IP within the AC CMP, Click Here.
Connect To The Desktop
- In the main application window the new machine will be added.
- Now select the three-dot toggle on the lower right of the machine and select 'Connect'.
- A warning window will appear. Select 'Connect'.
- In the popup provide the account credentials.
- The new connection has been made.

Windows: Connect to GUI on Windows
- In Windows this tutorial will utilize the built-in 'Remote Desktop Connection' software.
Open Remote Desktop Connection
- In the Windows toolbar search field type 'Remote desktop connection'.
- In the Windows popup window select 'Remote Desktop Connection'.
Connect to Ubuntu Desktop
- Once the software starts, place the public IP in the Computer name text box. For help identifying the public IP, Click Here.
- If desired, select the grey arrow in the lower left for more options and add user information.
- Select 'Connect'. A warning window will appear; select 'Yes' to continue.
- Next sign in to the account to continue to the Ubuntu Desktop.
- Now the Ubuntu Desktop sign-in will appear. Sign in using the appropriate credentials.
- That's it. It's connected.
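Before connecting from a remote desktop client, it can help to confirm on the VM that xrdp is running and the firewall allows RDP. This is a minimal verification sketch using standard commands, not part of the original walkthrough; port 3389 matches the firewall rule described above.

sudo systemctl status xrdp          # service should be active (running)
sudo ss -tlnp | grep 3389           # xrdp should be listening on port 3389
sudo ufw status | grep 3389         # confirm the UFW rule, if UFW is in use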

Last updated on Nov 21, 2024

Using SSH (Secured Shell)

What is Secure Shell

SSH stands for Secure Shell, and it is a secure network protocol that allows remote access and control of a computer or server over an unsecured network. It is commonly used by system administrators and developers to securely manage and transfer data between computers over the internet. When connecting to a remote server using SSH, the connection is encrypted, which means that no one can eavesdrop on the communication or steal the login credentials. The encryption ensures that all data, including passwords and other sensitive information, is transmitted securely over the network.

To use SSH, it's necessary to have an SSH client installed on the computer, and the remote server must have an SSH server installed. Also needed is a username and password or a public/private key pair to authenticate to the remote server. Once authenticated, a command-line interface can be used to execute commands on the remote server or transfer files securely between computers and the remote server. SSH also allows for the creation of encrypted tunnels to forward other network services such as HTTP or FTP, making it an essential tool for secure remote access and administration.

Basic Usage

Locate Required Credentials
- In order to begin the connection an IP address/hostname, username, and password are required. In the American Cloud CMP this information can be found in the compute section. Follow the steps below to acquire the information:
1. Login to the Web Portal with a valid American Cloud account
2. On the left navigation column choose 'Cloud Compute'
3. In Manage Instance select the desired instance to SSH into
- Inside the 'Server Information' page retrieve the public IP address, username (default cloud), and copy the password (default is a randomly selected password)

SSH The Machine
- Open a terminal or cmd prompt and type the following command:
ssh cloud@[IPAddress]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ED25519 key fingerprint is SHA256:EXAMPLEp01iD6zXvKCF+QdF5VLl3MiFrITEXAMPLE.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
- If this is the first login, a message asking to save the fingerprint will appear. Type 'yes' to continue
- Next enter the password for the user being logged into:
cloud@[IPAddress]'s password:

SSH with Keys

When using SSH keys, authentication to a remote server is possible without having to enter a password while logging in. Instead, a pair of cryptographic keys is generated: a public key and a private key. The public key is uploaded to the remote server, while the private key is stored securely on the local computer. When connecting to the remote server using SSH, the server checks the public key against a list of authorized keys. If the public key is on the list, the server uses it to encrypt a message that can only be decrypted with the paired private key. The server sends this encrypted message back to the local computer, and the local SSH client uses the private key to decrypt the message and authenticate to the server.

Using SSH keys has several advantages over using a password for authentication. First, it is more secure, because it is much harder for an attacker to guess or steal a private key than it is to crack your password. Second, it is more convenient, because typing a password at every login isn't necessary. And third, it is easier to automate scripts or other processes that require remote access, since the private key can be used by the scripts without having to store a password in plain text.

To use SSH keys, first generate a key pair using a tool like ssh-keygen. Then copy the public key to the remote server using a command like ssh-copy-id or by manually appending the public key to the authorized_keys file on the remote server. Finally, configure the SSH client to use the private key when connecting to the remote server.

- Follow the steps below to SSH into a server:
1. Generate the SSH key pair (a ssh-keygen sketch follows at the end of this article) - For more information on generating key pairs Click Here.
2. Save the newly generated SSH key pair to the '~/.ssh' directory
3. Place the public key in the '~/.ssh/authorized_keys' file on the remote server
- There are two primary ways to accomplish step 3, discussed below

ssh-copy-id
- The ssh-copy-id command is an easy way to add the local machine's public key to the remote server's ~/.ssh/authorized_keys file. To accomplish this run the command below:
ssh-copy-id cloud@[IPAddress]
- After pressing enter the remote server will begin receiving ssh keys from the local machine. As shown, the user's password will be required for completion:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/Users/work/.ssh/id_ed25519.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cloud@[IPAddress]'s password:
- Following an accurate password the system will show the number of keys imported and log out:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'cloud@[IPAddress]'" and check to make sure that only the key(s) you wanted were added.
- Next log back into the remote server using the standard ssh command. If a passphrase was established during generation it will be requested:
ssh cloud@[IPAddress]
Enter passphrase for key '/Users/joeevans/.ssh/id_ed25519':
- A connection not requiring the user password will be made

Placing the Public Key in the authorized_keys File
- Another way to place a public key into the ~/.ssh/authorized_keys file is described below. Follow these steps:
1. On the local machine navigate to the ~/.ssh directory.
2. Copy the desired public key.
3. Log into the remote server using the username/password:
ssh cloud@[IPAddress]
4. Edit ~/.ssh/authorized_keys using the preferred editor:
vi ~/.ssh/authorized_keys
5. Paste the copied public key from the local machine inside the file:
ssh-ed25519 Example333lZDI1aaaAAAAIxxxghuGkFSh4256QQoDC+DI5vMwi2EXAMPLE
6. Log out of the remote server using the 'exit' command
- It is now possible to log in without needing the user's password. Again, if a passphrase was used while generating the key pair, input it here.
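The key pair referenced in step 1 can be generated locally with ssh-keygen. This is a small sketch assuming an Ed25519 key stored in the default ~/.ssh location; the comment string is only an example.

# Generate an Ed25519 key pair (press Enter to accept the default path ~/.ssh/id_ed25519)
ssh-keygen -t ed25519 -C "my-laptop"

# Print the public key so it can be copied into ~/.ssh/authorized_keys on the server
cat ~/.ssh/id_ed25519.pub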

Last updated on Aug 30, 2024

Mount or Unmount Drives

Mount/Unmount List All Partitions Running the lsblk the available drives will be provided lsblk Once command is ran a read-out will be provided showing available drives similar to below. In this example the volume vdb size 50G is block-storage_1 inside the American Cloud CMP. Additionally, below we can see vdb is not mounted. cloud@Compute-1:~$ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 63.3M 1 loop /snap/core20/1822 loop1 7:1 0 91.9M 1 loop /snap/lxd/24061 loop2 7:2 0 49.9M 1 loop /snap/snapd/18357 loop3 7:3 0 63.3M 1 loop /snap/core20/1852 sr0 11:0 1 1024M 0 rom vda 252:0 0 25G 0 disk ├─vda1 252:1 0 24.9G 0 part / ├─vda14 252:14 0 4M 0 part └─vda15 252:15 0 106M 0 part /boot/efi vdb 252:16 0 50G 0 disk Partition Drive - Partitioning a drive involves dividing it into one or more logical sections, each of which acts as a separate drive with its own file system. This can be useful for various reasons, such as isolating data for backup or security purposes, installing multiple operating systems on a single drive, or organizing files and folders more efficiently. Partitioning can be done using various tools, such as Disk Management in Windows, Disk Utility in macOS, or fdisk in Linux. - In this example fdisk command will be utilized 1. Identify the drive to partition using fdisk -l sudo fdisk -l Disk /dev/loop0: 49.84 MiB, 52260864 bytes, 102072 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk /dev/loop1: 111.95 MiB, 117387264 bytes, 229272 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes 2. Use fdisk command to partition the drive. - Identify the drives path. For this example /dev/vdb1 will be placed in fdisk command. This information was retrieved running fdisk -l above sudo fdisk /dev/vdb1 - A readout similar to the below will be displayed confirming drive is open using fdisk command Welcome to fdisk (util-linux 2.37.2). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. The device contains 'ext4' signature and it will be removed by a write command. See fdisk(8) man page and --wipe option for more details. Device does not contain a recognized partition table. Created a new DOS disklabel with disk identifier 0x11b600de. Command (m for help): - fdisk command is a letter based operation where a letter is assigned to a command. Notice (m for help). Press 'm' and enter to enter help mode and print command layout. The first couple of columns are printed below Command (m for help): m Help: DOS (MBR) a toggle a bootable flag b edit nested BSD disklabel c toggle the dos compatibility flag Generic d delete a partition F list free unpartitioned space l list known partition types n add a new partition p print the partition table t change a partition type v verify the partition table i print information about a partition - Create new partition using 'n' command Command (m for help): n Partition type p primary (0 primary, 0 extended, 4 free) e extended (container for logical partitions) Select (default p): p - In the about command once the 'n' command has been given fdisk command request information on the partition type. Here a primary partition will be built identified by the 'p' command. Next we'll be asked the sector in which to build the new partition. 
The first (1) sector will be selected Partition number (1-4, default 1): 1 - Following the above we'll determine the size of the sector/partition. Last sector, +/-sectors or +/-size{K,M,G,T,P} (10000-97654783, default 97654783): +10G - The new partition of 10G has been built by fdisk command Created a new partition 1 of type 'Linux' and of size 10 GiB. - Notice partition type defaulted to 'Linux in the above read out. fdisk will automatically defualt to 'Linux' In order to change this use the 't' command. Following the 't' command a 'L' command can be given to list all available types 1 EFI System C12A7328-F81F-11D2-BA4B-00A0C93EC93B 2 MBR partition scheme 024DEE41-33E7-11D3-9D69-0008C781F39F 3 Intel Fast Flash D3BFE2DE-3DAF-11DF-BA40-E3A556D89593 4 BIOS boot 21686148-6449-6E6F-744E-656564454649 5 Sony boot partition F4019732-066E-4E12-8273-346C5641494F 6 Lenovo boot partition BFBFAFE7-A34F-448A-9A5B-6213EB736C22 7 PowerPC PReP boot 9E1A2D38-C612-4316-AA26-8B49521E5A8B 8 ONIE boot 7412F7D5-A156-4B13-81DC-867174929325 9 ONIE config D4E6E2CD-4469-46F3-B5CB-1BFF57AFC149 10 Microsoft reserved E3C9E316-0B5C-4DB8-817D-F92DF00215AE 11 Microsoft basic data EBD0A0A2-B9E5-4433-87C0-68B6B72699C7 12 Microsoft LDM metadata 5808C8AA-7E8F-42E0-85D2-E1E90434CFB3 13 Microsoft LDM data AF9B60A0-1431-4F62-BC68-3311714A69AD 14 Windows recovery environment DE94BBA4-06D1-4D40-A16A-BFD50179D6AC 15 IBM General Parallel Fs 37AFFC90-EF7D-4E96-91C3-2D7AE055B174 16 Microsoft Storage Spaces E75CAF8F-F680-4CEE-AFA3-B001E56EFC2D 17 HP-UX data 75894C1E-3AEB-11D3-B7C1-7B03A0000000 18 HP-UX service E2A1E728-32E3-11D6-A682-7B03A0000000 19 Linux swap 0657FD6D-A4AB-43C4-84E5-0933C84B4F4F 20 Linux filesystem 0FC63DAF-8483-4772-8E79-3D69D8477DE4 21 Linux server data 3B8F8425-20E0-4F3B-907F-1A25A76F98E8 22 Linux root (x86) 44479540-F297-41B2-9AF7-D131D5F0458A 23 Linux root (x86-64) 4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709 24 Linux root (ARM) 69DAD710-2CE4-4E3C-B16C-21A1D49ABED3 : - Now the new partition is saved in memory and waiting to be written to disk. To review the newly built partition use the 'p' command Command (m for help): p Disk /dev/vdb: 50 GiB, 53687091200 bytes, 104857600 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier: 0831ABEB-082B-4EF1-AA79-E22EE04FFF74 Device Start End Sectors Size Type /dev/vdb1 2048 20973567 20971520 10G Linux filesystem - To write the changes use the 'w' command. This will write the newly developed partition to the disk Command (m for help): w The partition table has been altered. Calling ioctl() to re-read partition table. Syncing disks. - Using sudo fdisk -l double check the build of the new partition Format Drive There are different types of Linux format like btrfs, ext2, ext4, xfs, cramfs, ext3 and minix that are compatible with the Linux operating system BTRFS - Btrfs: A modern file system for Linux operating systems that provides features such as snapshots, compression, and checksums for data integrity. It is designed to improve performance, scalability, and manageability of file storage on modern systems. EXT2 - Ext2: A traditional file system for Linux operating systems that was introduced in the early 1990s. It provides support for basic file and directory operations and has been widely used in Linux distributions. However, it lacks some modern features such as journaling and dynamic resizing. 
EXT4 - Ext4: A modern file system for Linux operating systems that provides features such as journaling, support for large files and directories, and improved performance and scalability. It is the default file system in many Linux distributions and is widely used in production environments. XFS - XFS: A high-performance file system for Linux and other Unix-like operating systems. It was designed for scalability, supporting file systems up to 16 exabytes in size, and is optimized for handling large files and high-volume data throughput. XFS is widely used in enterprise and cloud environments. CRAMFS - Cramfs (Compressed ROM File System): A read-only file system commonly used in embedded systems such as routers, set-top boxes, and smartphones. It is designed to save storage space by compressing the file system and is loaded into memory at boot time for fast access. EXT3 - Ext3: A journaled file system for Linux operating systems that was introduced in 2001. It provides support for basic file and directory operations and also includes a journaling system for improved reliability and faster recovery from crashes. Ext3 is widely used in Linux distributions but has been largely replaced by Ext4. MINIX - MINIX: A file systems using a simple structure consisting of a boot block, superblock, and inode block. - To format the drive follow the following steps 1. Identify drive to format. If partitioning occured in the above step select the partition. 2. Run the below command to format drive. If desired change ext4 to different format. sudo mkfs.ext4 /dev/vdb1 - Readout should look similar to: mke2fs 1.46.5 (30-Dec-2021) Discarding device blocks: done Creating filesystem with 12206848 4k blocks and 3055616 inodes Filesystem UUID: a86a8d51-0ed0-4818-9c81-b7afb8c77309 Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424 Allocating group tables: done Writing inode tables: done Creating journal (65536 blocks): done Writing superblocks and filesystem accounting information: done - The drive is now formatted to ext4 Create Mount Point A mount point directory is a directory in a file system that serves as a reference point for accessing a storage device or a partition. When a storage device is connected to a computer or server, it must be mounted to be accessed by the system. Create the directory within /mnt by running the following command. sudo mkdir /mnt/vdb1 - To check creation run: ls /mnt Mount the Partition - Now that the new partition has been built, formatted, and created a mount point. Mount the partition. The below commands will be ran sudo mount /dev/vdb1 /mnt/vdb1 - There will not be a readout from this command. To check mounting use command lsblk as described in previous steps cloud@Compute-AC-9:~$ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS loop0 7:0 0 49.8M 1 loop /snap/snapd/18357 loop1 7:1 0 111.9M 1 loop /snap/lxd/24322 loop2 7:2 0 63.3M 1 loop /snap/core20/1828 loop3 7:3 0 63.3M 1 loop /snap/core20/1852 loop4 7:4 0 53.2M 1 loop /snap/snapd/18933 sr0 11:0 1 1024M 0 rom vda 252:0 0 25G 0 disk ├─vda1 252:1 0 24.9G 0 part / ├─vda14 252:14 0 4M 0 part └─vda15 252:15 0 106M 0 part /boot/efi vdb 252:16 0 50G 0 disk └─vdb1 252:17 0 10G 0 part /mnt/vdb1 - In the above, vdb1 has been mounted to /mnt/vdb1 as depicted in the MOUNTPOINTS column Note The example uses partitions and drives on the local machine. Ensure to use accurate [paths] on your local machine. 
Unmount Partition
- A drive can be unmounted using the 'umount' command.
sudo umount /dev/vdb1
- There will be no readout from this command. To check the success of the operation, use the 'lsblk' command.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 49.8M 1 loop /snap/snapd/18357
loop1 7:1 0 111.9M 1 loop /snap/lxd/24322
loop2 7:2 0 63.3M 1 loop /snap/core20/1828
loop3 7:3 0 63.3M 1 loop /snap/core20/1852
loop4 7:4 0 53.2M 1 loop /snap/snapd/18933
sr0 11:0 1 1024M 0 rom
vda 252:0 0 25G 0 disk
├─vda1 252:1 0 24.9G 0 part /
├─vda14 252:14 0 4M 0 part
└─vda15 252:15 0 106M 0 part /boot/efi
vdb 252:16 0 50G 0 disk
└─vdb1 252:17 0 10G 0 part
- Notice the mountpoint has been removed from vdb1.
Note: If desired, use fdisk to remove the partition from the drive, as sketched below. All the examples have been built on an American Cloud CMP Block Storage drive.
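A minimal sketch of that optional cleanup, assuming the partition created earlier is /dev/vdb1 on /dev/vdb:
# Optional cleanup sketch: delete the partition created above.
sudo umount /dev/vdb1 2>/dev/null   # ignore the error if it is already unmounted
sudo fdisk /dev/vdb                 # at the fdisk prompt: d (delete), then w (write)
sudo fdisk -l /dev/vdb              # confirm the partition is gone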

Last updated on Aug 01, 2024

Using Node.js to upload files to A2 Storage

Login and connect to VM
1. Log in to the Web Portal with a valid American Cloud account.
2. Go to Cloud Compute and select the VM to install Node.js on. If no VM is created yet, Click Here.
3. Get the password and public IP for the cloud user of the VM in order to SSH into it.
4. SSH into the VM:
ssh cloud@"PublicIP"
Install and configure Node.js
1. Run sudo apt-get update to ensure repositories are up to date.
2. Install Node.js onto the VM using sudo apt install nodejs.
3. Verify Node.js installed using node -v.
4. Run sudo apt install npm to be able to install dependencies.
5. Once Node.js is installed, install the required dependency:
npm install aws-sdk
6. Create a Node.js script to upload a file:
sudo nano upload-to-a2.js
const AWS = require('aws-sdk');
const fs = require('fs');
// Configure AWS SDK with your A2 endpoint and credentials
const s3 = new AWS.S3({
  endpoint: 'YOUR_A2_ENDPOINT', // Replace with your A2 endpoint. Don't include https://
  accessKeyId: 'YOUR_ACCESS_KEY',
  secretAccessKey: 'YOUR_SECRET_KEY',
  s3ForcePathStyle: true,
  region: 'a2-west', // This doesn't need to be specific, it can be anything
});
// Define the bucket name and file name
const bucketName = 'your-bucket-name';
const fileName = 'file-to-upload.txt'; // Rename, or add code to automatically generate names for files
const tenant = 'YOUR_TENANT_ID';
// Read the file
const fileContent = fs.readFileSync(fileName);
// Construct the URL with the endpoint preceding the bucket name
const fileURL = `https://${s3.config.endpoint}/${tenant}:${bucketName}/${fileName}`;
// Create parameters for the A2 upload
const params = {
  Bucket: bucketName,
  Key: fileName, // The name you want to give to the file in A2
  Body: fileContent,
  ACL: 'public-read', // Set to different permissions if needed
};
// Upload file to A2 Storage
s3.upload(params, (err, data) => {
  if (err) {
    console.error('Error uploading file:', err);
  } else {
    console.log('File uploaded successfully. File URL:', fileURL);
  }
});
Replace the placeholders in the script:
- "YOUR_A2_ENDPOINT": copy the bucket URL; the only part needed for the endpoint is the "region.americancloud.com" portion.
- "your-bucket-name": the name of your A2 bucket.
- "YOUR_TENANT_ID": your A2 tenant ID.
For testing, create a file to upload:
touch file-to-upload.txt
Create Object Storage
1. To create the A2 Storage and get the information needed from it, Click Here.
Final Step
Once Node.js is installed and configured and the A2 storage is set up, run the script with:
node upload-to-a2.js
A successful run prints "File uploaded successfully." along with the file URL.
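Because the script uploads the file with ACL: 'public-read', the URL it prints should be reachable from any machine. A minimal sketch of that check, using the same placeholder values assumed in upload-to-a2.js above:
# Hypothetical check that the upload is publicly readable; substitute the same
# endpoint, tenant ID, and bucket name used in the script.
curl -I "https://YOUR_A2_ENDPOINT/YOUR_TENANT_ID:your-bucket-name/file-to-upload.txt"
# An HTTP 200 response indicates the object is available.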

Last updated on Aug 30, 2024

Deploy a simple web app with Kamal

Alert: American Cloud doesn't allow 'root' as default. Only users that have a full understanding of their environments and the security implications should use this guide.
- Navigate to https://app.americancloud.com/login
Create an Instance with Start Up Script
1.) In the left navigation pane select "Cloud Compute"
2.) Select "CREATE AN INSTANCE"
3.) Select the project to build in and select "Proceed"
4.) Select between "US-West-0" and "US-West-1"
- If wishing to build on our premium stack, select "US-West-0"
5.) Select between Standard or Premium in "US-West-0", and Standard in "US-West-1"
6.) Select the desired OS/Marketplace App.
7.) Provision your VM with our custom or default options.
8.) Select "Add a new Startup Script"
9.) Add this startup script to the block. Ensure you replace "mypubkey" with your actual pubkey, leaving the quotations in place.
#!/bin/bash
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
SSH_KEY_CONTENT="mypubkey"
echo "$SSH_KEY_CONTENT" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
chown root:root /root/.ssh/authorized_keys
systemctl restart sshd
echo "SSH configuration for root updated. Root login now permitted with specified key."
10.) Select "Add startup script"
11.) Give the instance a customized hostname and label.
12.) Select "Deploy Now".
Confirm Root SSH is enabled
- SSH in as root to confirm access:
ssh root@'publicip'
Install Kamal
13.) Now that root SSH access is established on the VM, it's time to install Kamal. There are a couple of prerequisites first.
- Docker and buildx are required on your machine. This tutorial is built on Mac, so brew install docker and brew install docker-buildx were utilized. If Docker and buildx are not installed, a failure will occur during kamal setup. This step may differ depending on OS; it's a relatively quick lookup.
- The private key matching the public one on your VM should be added to your ssh-agent. You can ensure this is the case by running ssh-add ~/.ssh/kamal_privkey (whatever your key is).
14.) Install Kamal locally by running gem install kamal, or set up an alias to run it in Docker.
- If issues arise in step 14, you'll probably need to update Ruby and set the Ruby environment.
15.) Choose your container registry (it can be public or private), and create a personal access token with write:packages scope in order to push images to it. We are going to use ghcr.io and a private registry for this example.
16.) Select the user menu in the top right corner.
17.) Select "Settings"
18.) Scroll to the bottom of the menu and select "Developer settings"
19.) In the next menu select "Personal access tokens". Then in the dropdown select "Tokens (classic)"
20.) Select "Generate New Token" followed by "Generate new token (classic)" from the dropdown.
21.) In the section provide a name for the token and, at a minimum, select "write:packages"
22.) Select "Generate token"
23.) Copy the key to be utilized in the next steps.
24.) Set your personal access token as KAMAL_REGISTRY_PASSWORD using the export command below:
export KAMAL_REGISTRY_PASSWORD=ghp_12345abcde
25.) Create a directory for Kamal in the location you'd like to run it from. For this example it is simply created on ~/Desktop using mkdir kamal
26.) Set up your code, if you haven't already. Make sure you include a Dockerfile and that your app returns a 200 OK on the path /up. To test, we can use some sample code: inside the kamal directory create two files, Dockerfile and server.ts, as shown in the next steps.
27.) server.ts
const server = Deno.listen({ port: 80 });
console.log("Server running on http://localhost:80");
for await (const conn of server) {
  handleConnection(conn);
}
async function handleConnection(conn: Deno.Conn) {
  for await (const requestEvent of Deno.serveHttp(conn)) {
    const url = new URL(requestEvent.request.url);
    requestEvent.respondWith(new Response("Hello, Kamal!", { status: 200 }));
  }
}
28.) Dockerfile
FROM denoland/deno:latest
WORKDIR /app
COPY server.ts .
EXPOSE 80
CMD ["deno", "run", "--allow-net", "server.ts"]
29.) (Optional) Skip this step if you are already using git. If your code is not already committed with git, you can continue by simply using git locally and running these commands:
git init
git add .
git commit -m "Initial commit"
30.) Initialize Kamal by running kamal init from the kamal directory.
31.) Update your newly created config/deploy.yml file, located in the kamal directory, with the code below. Consult the Kamal docs for more options. Ensure you change lines 4 and 13 to reflect the username of the repository; line 8 will change to the server's public IP.
# Name of your application. Used to uniquely configure containers.
service: kamal-demo
# Name of the container image.
image: github-username/kamal-demo
# Deploy to these servers.
servers:
  web:
    - 192.168.0.0 #<-- Put your VM's public IP here
# Credentials for your image host.
registry:
  # Specify the registry server, if you're not using Docker Hub
  server: ghcr.io
  username: github-username
  # Always use an access token rather than real password (pulled from .kamal/secrets).
  password:
    - KAMAL_REGISTRY_PASSWORD
# Configure builder setup. Make sure you use this if you are building on a Mac.
builder:
  arch: amd64
This mapping should already be present, but double-check that your .kamal/secrets file includes the line:
KAMAL_REGISTRY_PASSWORD=$KAMAL_REGISTRY_PASSWORD
32.) Commit your file changes to git.
- That's it! You're ready to deploy your app.
33.) Run kamal setup to begin the build and deploy to the host machine.
34.) Verify your app is running on your VM(s) by logging into your VM and checking the app:
curl -X GET "http://localhost:80/"
This should return Hello, Kamal!
It's important, before making your app public, to utilize system hardening techniques, as they're not applied by default. Click here for a good article of reference.
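After the initial kamal setup, later code changes are shipped with kamal deploy. A brief sketch of that loop, assuming the config above (the 192.168.0.0 placeholder stands in for your VM's public IP):
# Commit the change, then build and roll out a new release.
git add . && git commit -m "Update app"
kamal deploy
# The /up health-check path can also be hit from outside the VM.
curl -i "http://192.168.0.0/up"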

Last updated on Feb 06, 2025

How to use MySQL Workbench with Coolify

MySQL Coolify to MySQL Workbench
The documentation below outlines the necessary steps to create an SSH connection between a Coolify MySQL resource and MySQL Workbench.
Set up Instance
1. Create the MySQL resource within Coolify by selecting the project for the resource. The default 'My first project' is used for this document.
2. Select '+Add New Resource'.
3. Select the desired server for the resource.
4. Once the resource list is presented, navigate down the page to the databases section and select 'New MySQL'.
5. Identify the destination for the resource, either by selecting a previously built destination or by adding a new one.
6. SSH into the Coolify machine and run the command:
sudo docker run --rm -ti --name=ctop -v /var/run/docker.sock:/var/run/docker.sock quay.io/vektorlab/ctop:latest
This lists the containers running on the machine. Using the arrow keys, scroll to the MySQL resource and press enter. The container needs to be in a running state to be accessed.
7. The container's listening ports will be listed. This port will be used in the next step.
8. Once the resource is complete: 1) add the desired port to communicate on (in the example, port 3000 for the local machine and 3306 for the container), then 2) restart the machine to put the new configs in place.
9. Once restarted, check the container to ensure the ports are configured appropriately by repeating steps 6 and 7 above. As an example, port 3000 is mapped from the local Coolify instance to port 3306 of the MySQL container.
Set up Local Host
1. Set up an SSH tunnel on the local machine that is running the MySQL client (i.e. mysql or MySQL Workbench) by running the command below.
ssh -4 -f -N -T -L 3131:127.0.0.1:3000 cloud@coolify_public_ip_here
2. Open MySQL Workbench and select the '+' toggle to add a new connection.
3. In the pop-up, add a name for the connection. Set the hostname to 127.0.0.1 (localhost) and the port to 3131, as set previously. Select "Test Connection".
4. Provide the MySQL root password from Coolify in the MySQL Workbench pop-up. Optionally, save the password to the keychain for quick launch.
5. Select the newly built connection. If prompted, provide the root password.
6. Select 'Server Status' from the left navigation menu.
7. Ensure the connection and server are running. MySQL Workbench is now connected to the Coolify MySQL resource and is editable.
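The tunnel itself can also be verified from the local machine before opening Workbench. A minimal sketch, assuming the 3131 -> 3000 -> 3306 mapping configured above and that the mysql command-line client is installed locally:
# Connect through the local end of the SSH tunnel; enter the MySQL root
# password from Coolify when prompted.
mysql -h 127.0.0.1 -P 3131 -u root -p
# A successful login confirms the tunnel and the container port mapping.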

Last updated on Feb 28, 2025