Fix some typos (#212)
simcod committed Sep 9, 2024
1 parent 11e4b36 commit 096b107
Showing 3 changed files with 61 additions and 71 deletions.
126 changes: 58 additions & 68 deletions docs/src/installation/deployment.md
@@ -4,7 +4,7 @@ We are bootstrapping the [metal control plane](../overview/architecture.md#Metal

In order to build up your deployment, we recommend making use of the same Ansible roles that we use ourselves to deploy metal-stack. You can find them in the repository called [metal-roles](https://github.com/metal-stack/metal-roles).
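One possible way to wire these roles into your own playbooks is to pin the repository to a release and let Ansible resolve the roles from the checkout. The following is only a sketch: the release tag is a placeholder and the role directories assume the repository layout of the release you check out.

```bash
# Sketch only: fetch a pinned release of metal-roles and make its roles resolvable.
# <release-tag> is a placeholder; the roles_path entries assume the repository's
# control-plane/ and partition/ layout and may differ between releases.
git clone --depth 1 --branch <release-tag> https://github.com/metal-stack/metal-roles.git
export ANSIBLE_ROLES_PATH="$PWD/metal-roles/control-plane/roles:$PWD/metal-roles/partition/roles"
```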

In order to wrap up deployment dependencies, there is a special [deployment base image](https://hub.docker.com/r/metalstack/metal-deployment-base) hosted on Docker Hub that you can use for running the deployment. Using this Docker image eliminates a lot of moving parts in the deployment and should keep the footprint on your system fairly small and maintainable.
In order to wrap up deployment dependencies, there is a special [deployment base image](https://github.com/metal-stack/metal-deployment-base/pkgs/container/metal-deployment-base) hosted on GitHub that you can use for running the deployment. Using this Docker image eliminates a lot of moving parts in the deployment and should keep the footprint on your system fairly small and maintainable.

This document will from now on assume that you want to use our Ansible deployment roles for setting up metal-stack. We will also use the deployment base image, so you should have [Docker](https://docs.docker.com/get-docker/) installed. It is in the nature of software deployments to differ from site to site, company to company, user to user. Therefore, we can only describe the way the deployment works for us. It is up to you to tweak the deployment described in this document to your requirements.
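As an illustration, a deployment run inside the base image could look roughly like the following. The image tag, the inventory and the playbook name are assumptions and have to be adapted to your setup; in practice you would pin the image to a released tag instead of `latest`.

```bash
# Sketch only: run ansible-playbook inside the deployment base image so that the
# required tooling comes from the container instead of the host.
docker run --rm -it \
  -v "$PWD:/workdir" \
  -w /workdir \
  ghcr.io/metal-stack/metal-deployment-base:latest \
  ansible-playbook -i inventory.yaml deploy-metal-control-plane.yaml
```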

@@ -290,38 +290,30 @@ Also define the following configurations for `cfssl`:
- `files/certs/ca-config.json`
```json
{
"signing": {
"default": {
"expiry": "43800h"
},
"profiles": {
"server": {
"expiry": "43800h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"client": {
"expiry": "43800h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
},
"client-server": {
"expiry": "43800h",
"usages": [
"signing",
"key encipherment",
"client auth",
"server auth"
]
}
}
"signing": {
"default": {
"expiry": "43800h"
},
"profiles": {
"server": {
"expiry": "43800h",
"usages": ["signing", "key encipherment", "server auth"]
},
"client": {
"expiry": "43800h",
"usages": ["signing", "key encipherment", "client auth"]
},
"client-server": {
"expiry": "43800h",
"usages": [
"signing",
"key encipherment",
"client auth",
"server auth"
]
}
}
}
}
```
- `files/certs/ca-csr.json`
@@ -335,9 +327,9 @@ Also define the following configurations for `cfssl`:
},
"names": [
{
"C": "DE",
"L": "Munich",
"O": "Metal-Stack",
"C": "DE",
"L": "Munich",
"O": "Metal-Stack",
"OU": "DevOps",
"ST": "Bavaria"
}
@@ -355,9 +347,9 @@ Also define the following configurations for `cfssl`:
},
"names": [
{
"C": "DE",
"L": "Munich",
"O": "Metal-Stack",
"C": "DE",
"L": "Munich",
"O": "Metal-Stack",
"OU": "DevOps",
"ST": "Bavaria"
}
@@ -380,9 +372,9 @@ Also define the following configurations for `cfssl`:
},
"names": [
{
"C": "DE",
"L": "Munich",
"O": "Metal-Stack",
"C": "DE",
"L": "Munich",
"O": "Metal-Stack",
"OU": "DevOps",
"ST": "Bavaria"
}
@@ -400,9 +392,9 @@ Also define the following configurations for `cfssl`:
},
"names": [
{
"C": "DE",
"L": "Munich",
"O": "Metal-Stack",
"C": "DE",
"L": "Munich",
"O": "Metal-Stack",
"OU": "DevOps",
"ST": "Bavaria"
}
@@ -413,18 +405,16 @@ Also define the following configurations for `cfssl`:
```json
{
"CN": "metal-api",
"hosts": [
"<your-metal-api-dns-ingress-domain>"
],
"hosts": ["<your-metal-api-dns-ingress-domain>"],
"key": {
"algo": "rsa",
"size": 4096
},
"names": [
{
"C": "DE",
"L": "Munich",
"O": "Metal-Stack",
"C": "DE",
"L": "Munich",
"O": "Metal-Stack",
"OU": "DevOps",
"ST": "Bavaria"
}
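With these definitions in place, the CA and the leaf certificates can be generated with the `cfssl` and `cfssljson` command line tools. This is only a sketch: it assumes the JSON files from the listings above live in `files/certs`, and the CSR file name for the metal-api certificate is a placeholder.

```bash
cd files/certs

# Create the CA certificate and key from the CA CSR definition.
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Issue a combined client/server certificate using the client-server profile
# from ca-config.json. metal-api.json is a placeholder for the CSR file shown above.
cfssl gencert \
  -ca=ca.pem -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=client-server \
  metal-api.json | cfssljson -bare metal-api
```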
@@ -660,24 +650,24 @@ You can find installation instructions for Gardener on the Gardener website bene
1. Register the [os-extension-provider-metal](https://github.com/metal-stack/os-metal-extension) controller by deploying the [controller-registration](https://github.com/metal-stack/os-metal-extension/blob/v0.4.1/example/controller-registration.yaml) into your Gardener cluster; this controller transforms the operating system configuration from Gardener into Ignition user data
1. You need to use Gardener's [networking-calico](https://github.com/gardener/gardener-extension-networking-calico) controller for setting up the shoot CNI; you will have to put specific provider configuration into the shoot spec to make it work with metal-stack:
```yaml
networking:
type: calico
# we can peer with the frr within 10.244.0.0/16, which we do with the metallb
# the networks for the shoot need to be disjoint from the networks of the seed, otherwise the VPN connection will not work properly
# the seeds are typically deployed with podCIDR 10.244.128.0/18 and serviceCIDR 10.244.192.0/18
# the shoots are typically deployed with podCIDR 10.244.0.0/18 and serviceCIDR 10.244.64.0/18
pods: 10.244.0.0/18
services: 10.244.64.0/18
providerConfig:
apiVersion: calico.networking.extensions.gardener.cloud/v1alpha1
kind: NetworkConfig
backend: vxlan
ipv4:
pool: vxlan
mode: Always
autoDetectionMethod: interface=lo
typha:
enabled: false
networking:
type: calico
# we can peer with the frr within 10.244.0.0/16, which we do with the metallb
# the networks for the shoot need to be disjoint from the networks of the seed, otherwise the VPN connection will not work properly
# the seeds are typically deployed with podCIDR 10.244.128.0/18 and serviceCIDR 10.244.192.0/18
# the shoots are typically deployed with podCIDR 10.244.0.0/18 and serviceCIDR 10.244.64.0/18
pods: 10.244.0.0/18
services: 10.244.64.0/18
providerConfig:
apiVersion: calico.networking.extensions.gardener.cloud/v1alpha1
kind: NetworkConfig
backend: vxlan
ipv4:
pool: vxlan
mode: Always
autoDetectionMethod: interface=lo
typha:
enabled: false
```
1. For your seed cluster, you will need to provide the provider secret for metal-stack containing the key `metalAPIHMac`, which is the API HMAC that grants editor access to the metal-api (see the sketch after this list)
1. Check out our current provider configuration for [infrastructure](https://github.com/metal-stack/gardener-extension-provider-metal/blob/master/pkg/apis/metal/v1alpha1/types_infrastructure.go) and [control-plane](https://github.com/metal-stack/gardener-extension-provider-metal/blob/master/pkg/apis/metal/v1alpha1/types_controlplane.go) before deploying your shoot
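The controller registration and the provider secret from the steps above could be applied roughly as follows. The raw manifest URL is derived from the linked controller-registration file, and the secret name and namespace are assumptions; only the key `metalAPIHMac` is prescribed by the provider.

```bash
# Sketch only: register the os-metal-extension controller in the garden cluster.
kubectl apply -f https://raw.githubusercontent.com/metal-stack/os-metal-extension/v0.4.1/example/controller-registration.yaml

# Sketch only: provider secret for metal-stack; secret name and namespace are
# assumptions, <your-metal-api-hmac> is the editor HMAC of your metal-api.
kubectl -n garden create secret generic metal-provider-credentials \
  --from-literal=metalAPIHMac=<your-metal-api-hmac>
```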
4 changes: 2 additions & 2 deletions docs/src/overview/architecture.md
@@ -29,10 +29,10 @@ One more word towards determining the location for your metal control plane: It

The foundation of the metal-stack is what we call the _metal control plane_.

The control plane contains of a couple of essential microservices for the metal-stack including:
The control plane contains a couple of essential microservices for the metal-stack including:

- **[metal-api](https://github.com/metal-stack/metal-api)**
The API to manage and control plane resources like machines, switches, operating system images, machine sizes, networks, IP addresses and more. The exposed API is an old-fashioned REST API with different authentication methods. The metal-api stores the state of these entities in a [RethinkDB](https://rethinkdb.com/) database. The metal-api also has its own IP address management ([go-ipam](https://github.com/metal-stack/go-ipam)), which writes IP address and network allocations into a PostgreSQL backend.
The API to manage control plane resources like machines, switches, operating system images, machine sizes, networks, IP addresses and more. The exposed API is an old-fashioned REST API with different authentication methods. The metal-api stores the state of these entities in a [RethinkDB](https://rethinkdb.com/) database. The metal-api also has its own IP address management ([go-ipam](https://github.com/metal-stack/go-ipam)), which writes IP address and network allocations into a PostgreSQL backend.
- **[masterdata-api](https://github.com/metal-stack/masterdata-api)**
Manages tenant and project entities, which can be described as entities used for company-specific resource separation and grouping. Having these "higher level entities" managed by a separate microservice was a design choice that allows other microservices to re-use the information without having to know the metal-api at all. The masterdata is persisted in a dedicated PostgreSQL database.
- **[metal-console](https://github.com/metal-stack/metal-console)**
2 changes: 1 addition & 1 deletion docs/src/overview/networking.md
@@ -94,7 +94,7 @@ In BGP, ASN is how BGP peers know each other.

Within the data center, each BGP router is identified by a private autonomous system number (ASN). This ASN is used for internal communication. The default is to have a 2-byte ASN. To avoid having to find workarounds in case the ASN address space is exhausted, a 4-byte ASN that supports up to 95 million ASNs (4200000000–4294967294) is used from the beginning.

ASN numbering in a CLOS topology should follow a model to avoid routing problems (path hunting) due to its redundant nature. Within a CLOS topology the following ANS numbering model is suggested to solve path hunting problems:
ASN numbering in a CLOS topology should follow a model to avoid routing problems (path hunting) due to its redundant nature. Within a CLOS topology the following ASN numbering model is suggested to solve path hunting problems:

- Leaves have unique ASN
- Spines share an ASN
