terraform-equinix-infra

Overview

This module deploys a bare-metal server in Equinix Metal and uses it as the hypervisor for the VMs.

On top of this hypervisor, it creates two virtual networks (public and private) and two VMs (a router and a testbox).

The public virtual network sits in the same physical public network as the hypervisor, so machines attached to it receive public IP addresses and use the Equinix gateway as the next hop to reach the Internet. The bare-metal hypervisor, the router, and the testbox all get public IP addresses so you can reach them over the Internet.

In the private virtual network, the router acts as the IPv4/IPv6 L3 gateway, DHCP server, and DNS server, so you can deploy additional VMs in this network for a subsequent k8s cluster deployment. The testbox also has an interface in this network and receives its IP address from the DHCP server for testing purposes.

The module asks for the MAC address prefixes, hostname prefixes, and node counts of your k8s nodes because it populates the static DHCP mappings, FQDN records, and HAProxy configuration for those nodes on the DHCP/DNS server and HAProxy. When you deploy your k8s nodes, simply reuse the MAC addresses from this module's output so each node gets a reliable IP address from the DHCP server.
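A minimal usage sketch is shown below. The module source address and the `var.equinix_api_key` variable are assumptions for illustration only; adjust them to your environment and see the Inputs table further down for the full list of variables.

```hcl
module "equinix_infra" {
  # Assumed source address; point this at wherever you consume the module from.
  source = "github.com/isovalent/terraform-equinix-infra"

  # Required inputs.
  api_key    = var.equinix_api_key # hypothetical variable holding your Equinix API key
  infra_name = "my-lab"

  # k8s node info used to pre-populate the static DHCP mappings, FQDN records,
  # and HAProxy configuration on the router.
  k8s_cluster_name           = "my-cluster"
  k8s_master_count           = 3
  k8s_master_hostname_prefix = "k8s-masters"
  k8s_master_mac_prefix      = "52:54:00:aa:bb:a"
  k8s_worker_count           = 2
  k8s_worker_hostname_prefix = "k8s-workers"
  k8s_worker_mac_prefix      = "52:54:00:aa:bb:b"

  # Optional overrides (defaults are listed in the Inputs table).
  metro       = "ny"
  server_type = "m3.small.x86"
}
```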

Requirements

  • You need to install the xsltproc and mkisofs packages for the VM deployment.
  • You need something like libvirt-daemon-system installed on the local machine that runs the module, for reasons that are not entirely clear (see dmacvicar/terraform-provider-libvirt#1089).

Notes

  • The router source-NATs traffic from the private virtual network to its public IP address so that private-network VMs can reach the Internet.
  • The router forwards traffic on ports 6443/443/80 arriving on its public/private interfaces to the k8s master nodes (6443) and worker nodes (443/80).
  • All VMs in the private network receive both IPv4 and IPv6 addresses.
  • The testbox uses the KVM Ubuntu VM image, so its kernel configuration can differ from vanilla Ubuntu.

Output of this module

After running this module, a private key is generated in the output folder; you can use it to SSH to the Equinix host (username: root) and the testbox (username: testbox). The router's default username is vyos and its default password is defined in variables.tf. Some DNS entries and HAProxy configuration are pre-populated for the OpenShift platform; check templates/router-user-data.yam to see what has been configured.
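One way to keep those connection details handy is for the calling configuration to combine the key path and public IP outputs into ready-to-use SSH commands. This is only a sketch: the `ssh_to_*` output names are hypothetical and `module.equinix_infra` reuses the assumed module label from the usage example above.

```hcl
output "ssh_to_host" {
  description = "SSH command for the Equinix bare-metal host"
  value       = "ssh -i ${module.equinix_infra.ssh_private_key_file_path} root@${module.equinix_infra.host-public-ip-address}"
}

output "ssh_to_testbox" {
  description = "SSH command for the testbox VM"
  value       = "ssh -i ${module.equinix_infra.ssh_private_key_file_path} testbox@${module.equinix_infra.testbox-public-ip-address}"
}
```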

Requirements

| Name | Version |
|------|---------|
| terraform | >=1.6.5 |
| equinix | >=1.39 |
| libvirt | >=0.7.6 |

Providers

| Name | Version |
|------|---------|
| equinix | >=1.39 |
| libvirt | >=0.7.6 |
| local | n/a |
| null | n/a |
| tls | n/a |

Modules

No modules.

Resources

| Name | Type |
|------|------|
| equinix_metal_device.libvirt_host | resource |
| equinix_metal_project_ssh_key.public_key | resource |
| equinix_metal_reserved_ip_block.public_ip_block | resource |
| libvirt_cloudinit_disk.router | resource |
| libvirt_cloudinit_disk.testbox | resource |
| libvirt_domain.router | resource |
| libvirt_domain.testbox | resource |
| libvirt_network.private_network | resource |
| libvirt_network.public_network | resource |
| libvirt_pool.main | resource |
| libvirt_volume.router | resource |
| libvirt_volume.router_base | resource |
| libvirt_volume.testbox | resource |
| libvirt_volume.testbox_base | resource |
| local_file.host-username | resource |
| local_file.private_key_file | resource |
| local_file.public_key_file | resource |
| null_resource.libvirt_host_provisioner | resource |
| tls_private_key.private_key | resource |
| equinix_metal_project.demos | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| api_key | The Equinix API key. | string | n/a | yes |
| dns_base_domain | base domain for the LAN network so the k8s nodes get unique FQDNs that the router can resolve | string | "local" | no |
| equinix_metal_project | The Equinix project to use. | string | "Demos" | no |
| infra_name | The name of the Equinix Metal infrastructure. | string | n/a | yes |
| k8s_cluster_name | used to create the FQDNs in the DNS records, following the OCP guide (https://docs.openshift.com/container-platform/4.11/installing/installing_bare_metal/installing-bare-metal.html#installation-dns-user-infra-example_installing-bare-metal) | string | "liyi-test" | no |
| k8s_master_count | number of master nodes; only 1 or 3 are supported for now | number | "3" | no |
| k8s_master_hostname_prefix | hostname prefix for the master nodes | string | "k8s-masters" | no |
| k8s_master_mac_prefix | MAC address prefix for the master nodes | string | "52:54:00:aa:bb:a" | no |
| k8s_worker_count | number of worker nodes; only values less than 9 are supported for now | number | "2" | no |
| k8s_worker_hostname_prefix | hostname prefix for the worker nodes | string | "k8s-workers" | no |
| k8s_worker_mac_prefix | MAC address prefix for the worker nodes | string | "52:54:00:aa:bb:b" | no |
| metro | The Equinix metro to use. | string | "ny" | no |
| private_network_ipv4_cidr | the private network IPv4 CIDR block for VMs; only /24 is supported for now | string | "192.168.1.0/24" | no |
| private_network_ipv6_cidr | the private network IPv6 CIDR block for VMs; only /112 is supported for now | string | "fd03::/112" | no |
| router_password | The login password for vyos. | string | "R0uter123!" | no |
| server_type | The server type to use. | string | "m3.small.x86" | no |

Outputs

| Name | Description |
|------|-------------|
| dns_base_domain | base domain for the LAN network so the k8s nodes get unique FQDNs that the router can resolve |
| host-public-ip-address | the public IP address of the Equinix host |
| k8s_cluster_name | used to create the FQDNs in the DNS records, following the OCP guide (https://docs.openshift.com/container-platform/4.11/installing/installing_bare_metal/installing-bare-metal.html#installation-dns-user-infra-example_installing-bare-metal) |
| k8s_master_count | number of master nodes; only 1 or 3 are supported for now |
| k8s_master_ip_mac_hostname_map | map of k8s masters including ipv4/mac/hostname/ipv6 |
| k8s_worker_count | number of worker nodes; only values less than 9 are supported for now |
| k8s_worker_ip_mac_hostname_map | map of k8s workers including ipv4/mac/hostname/ipv6 |
| libvirt_pool_main_name | name of the main libvirt storage pool |
| libvirt_private_network_id | libvirt private network id |
| libvirt_public_network_id | libvirt public network id |
| private_network_ipv4_cidr | the private network IPv4 CIDR block for VMs; only /24 is supported for now |
| private_network_ipv6_cidr | the private network IPv6 CIDR block for VMs; only /112 is supported for now |
| router-public-ip-address | the public IP address of the router |
| router_password | The login password for vyos. |
| ssh_private_key_file_path | private key file path for this project |
| ssh_public_key_file_path | public key file path for this project |
| testbox-public-ip-address | the public IP address of the testbox |
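The per-node map outputs are intended to feed your own k8s node definitions, as described in the overview. Below is a sketch of one way to consume them with the libvirt provider: the inner key name (`mac`) and the sizing values are assumptions, so inspect `terraform output k8s_master_ip_mac_hostname_map` before relying on them, and note that disks and cloud-init are omitted for brevity. The `module.equinix_infra` label again refers to the hypothetical module block from the usage example above.

```hcl
resource "libvirt_domain" "k8s_master" {
  # Iterate over the map emitted by this module (assumed to be keyed per node).
  for_each = module.equinix_infra.k8s_master_ip_mac_hostname_map

  name   = each.key
  memory = 8192 # MiB, illustrative sizing
  vcpu   = 4

  network_interface {
    # Attach to the private network created by this module.
    network_id = module.equinix_infra.libvirt_private_network_id
    # Reuse the MAC address pre-registered in the router's static DHCP mappings
    # so the node receives the expected IP address and FQDN.
    mac = each.value.mac # assumed map key
  }

  # Disk and cloud-init configuration are intentionally omitted from this sketch.
}
```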
