[doc] First work on the new documentation, to be continued...
Samuel Hassine committed Oct 24, 2019
1 parent 9b14d39 commit 07254e5
Showing 10 changed files with 74 additions and 82 deletions.
Binary file modified opencti-documentation/docs/assets/getting-started/screenshot.png
28 changes: 3 additions & 25 deletions opencti-documentation/docs/development/installation.md
Original file line number Diff line number Diff line change
Expand Up @@ -48,7 +48,7 @@ $ cd opencti
### Install the API dependencies

```bash
$ cd opencti-graphql
$ cd opencti-platform/opencti-graphql
$ yarn install
```

Expand All @@ -61,12 +61,7 @@ $ yarn install
### Install the worker dependencies

```bash
$ pip3 install -r requirements.txt
```

### Install the integration dependencies

```bash
$ cd ../../opencti-worker
$ pip3 install -r requirements.txt
```

Expand Down Expand Up @@ -111,24 +106,7 @@ Change the *config.yml* file according to your <admin token>
#### Start

```bash
$ python3 worker_export.py &
$ python3 worker_import.py &
```

### Integration

#### Configure

```bash
$ cd opencti-integration
$ cp config.yml.sample config.yml
```
Change the *config.yml* file according to your <admin token>

#### Start

```bash
$ python3 connectors_scheduler.py
$ python3 worker.py &
```

### Frontend
Expand Down
8 changes: 8 additions & 0 deletions opencti-documentation/docs/getting-started/requirements.md
Original file line number Diff line number Diff line change
Expand Up @@ -36,6 +36,14 @@ ElasticSearch is also a JAVA process that needs a minimal amount of memory and C

> In order to set up the JAVA memory allocation, you can use the environment variable `ES_JAVA_OPTS`. You can find more information in the [official ElasticSearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html).
### Minio

Minio has a very small footprint, but depending on what you intend to store in OpenCTI, it may require additional disk space:

| CPU | RAM | Disk type | Disk space |
| ------------- |---------------| ---------------------------|-----------------------------------|
| 1 core | 128MB | Normal | 1GB |

### Redis

Redis has a very small footprint and only needs a tiny configuration:
Expand Down
59 changes: 42 additions & 17 deletions opencti-documentation/docs/installation/connectors.md
Original file line number Diff line number Diff line change
Expand Up @@ -6,15 +6,38 @@ sidebar_label: Enable connectors

## Introduction

Connectors are standalone processes that are independant of the rest of the platform. They are using RabbitMQ to push data to OpenCTI, through a dedicated queue for each instance of connector. Depending on your deployment, you can enable connectors by using the connectors Docker images or launch them manually.
Connectors are standalone processes that are independent of the rest of the platform. They use RabbitMQ to consume data from or push data to OpenCTI, through a dedicated queue for each connector instance. Depending on your deployment, you can enable connectors by using the connector Docker images or by launching them manually.

## Connector configurations

All connectors have 2 mandatory configuration parameters, the `name` and the `confidence_level`. The `name` is the name of the instance of the connector. For instance, for the MISP connector, you can launch as many MISP connectors as you need, if you need to pull data from multiple MISP instances.
All connectors have to be able to access the OpenCTI API. To allow this connection, they have 2 mandatory configuration parameters, the `OPENCTI_URL` and the `OPENCTI_TOKEN`. In addition to these 2 parameters, connectors have 5 other mandatory parameters that need to be set in order for them to work.

> The `name` of each instance of connector must be unique.
```
- CONNECTOR_ID=ChangeMe
- CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
- CONNECTOR_NAME=ExportFileStix2
- CONNECTOR_SCOPE=application/json
- CONNECTOR_CONFIDENCE_LEVEL=3
- CONNECTOR_LOG_LEVEL=info
```

> The `CONNECTOR_ID` must be a valid UUIDv4
> The `CONNECTOR_TYPE` must be a valid type, the possible types are:
> - EXTERNAL_IMPORT: from remote sources to OpenCTI stix2
> - INTERNAL_IMPORT_FILE: from OpenCTI file system to OpenCTI stix2
> - INTERNAL_ENRICHMENT: from OpenCTI stix2 to OpenCTI stix2
> - INTERNAL_EXPORT_FILE: from OpenCTI stix2 to OpenCTI file system
> The `CONNECTOR_NAME` is an arbitrary name
> The `CONNECTOR_SCOPE` is the scope handled by the connector:
> - EXTERNAL_IMPORT: entity types that have to be imported by the connector; if the connector provides more, they will be ignored
> - INTERNAL_IMPORT_FILE: files mime types to support (application/json, ...)
> - INTERNAL_ENRICHMENT: entity types to support (Report, Hash, ...)
> - INTERNAL_EXPORT_FILE: files mime types to generate (application/pdf, ...)
> The `confidence_level` of the connector will be used to set the `confidence_level` of the relationships created by the connector. If a connector needs to create a relationship that already exists, it will check the current `confidence_level` and if it is lower than its own, it will update the relationship with the new information. If it is higher, it will do nothing and keep the existing relationship.
> The `CONNECTOR_CONFIDENCE_LEVEL` of the connector will be used to set the confidence level of the relationships created by the connector. If a connector needs to create a relationship that already exists, it will check the current confidence level: if it is lower than its own, it will update the relationship with the new information; if it is higher, it will do nothing and keep the existing relationship.
## Docker activation

Expand All @@ -26,21 +49,23 @@ For instance, to enable the MISP connector, you can add a new service to your `d

```
connector-misp:
image: opencti/connector-misp:{RELEASE_VERSION}
image: opencti/connector-misp:2.0.0
environment:
- RABBITMQ_HOSTNAME=localhost
- RABBITMQ_PORT=5672
- RABBITMQ_USERNAME=guest
- RABBITMQ_PASSWORD=guest
- MISP_NAME=MISP\ Circle
- MISP_CONFIDENCE_LEVEL=3
- MISP_URL=http://localhost
- MISP_KEY=ChangeMe
- MISP_TAG=OpenCTI:\ Import
- MISP_UNTAG_EVENT=true
- MISP_IMPORTED_TAG=OpenCTI:\ Imported
- OPENCTI_URL=http://localhost
- OPENCTI_TOKEN=ChangeMe
- CONNECTOR_ID=ChangeMe
- CONNECTOR_TYPE=EXTERNAL_IMPORT
- CONNECTOR_NAME=MISP
- CONNECTOR_SCOPE=misp
- CONNECTOR_CONFIDENCE_LEVEL=3
- CONNECTOR_LOG_LEVEL=info
- MISP_URL=http://localhost # Required
- MISP_KEY=ChangeMe # Required
- MISP_TAG=OpenCTI:\ Import # Optional, tags of events to be ingested (if not provided, import all!)
- MISP_UNTAG_EVENT=true # Optional, remove the tag after import
- MISP_IMPORTED_TAG=OpenCTI:\ Imported # Required, tag event after import
- MISP_FILTER_ON_IMPORTED_TAG=true # Required, use the imported tag to know which events not to ingest
- MISP_INTERVAL=1 # Minutes
- MISP_LOG_LEVEL=info
restart: always
```
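
Once the service is declared, it can be started along with the rest of the stack. A possible invocation, assuming the `connector-misp` service name used in the example above and a standard `docker-compose.yml` layout:

```bash
$ docker-compose up -d connector-misp
```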

Expand Down
11 changes: 10 additions & 1 deletion opencti-documentation/docs/installation/docker.md
Original file line number Diff line number Diff line change
Expand Up @@ -23,7 +23,7 @@ Before running the docker-compose command, please change the admin token (this t
- APP__ADMIN__TOKEN=ChangeMe
```
And change the variable `OPENCTI_TOKEN` (for `worker-import` and `worker-export`) according to the value of `APP__ADMIN__TOKEN`
Then change the variable `OPENCTI_TOKEN` (for the `worker` and all connectors) to match the value of `APP__ADMIN__TOKEN`:

```yaml
- OPENCTI_TOKEN=ChangeMe
Expand Down Expand Up @@ -80,6 +80,11 @@ volumes:
driver_opts:
o: bind
type: none
s3data:
driver: local
driver_opts:
o: bind
type: none
```

## Memory configuration
Expand Down Expand Up @@ -118,6 +123,10 @@ Redis has a very small footprint and only provides an option to limit the maximu

You can find more information in the [Redis docker hub](https://hub.docker.com/r/bitnami/redis/).
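
As an illustration, the memory cap can be passed directly to the Redis server in `docker-compose.yml`. This is a minimal sketch with placeholder values; the image tag and the 128MB limit are assumptions, and the bitnami image mentioned above may expose this setting differently:

```yaml
  redis:
    image: redis:5.0
    # --maxmemory caps the memory used for data; adjust to your sizing
    command: redis-server --maxmemory 128mb
    restart: always
```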

### Minio

Minio is a small process and does not require a large amount of memory. More information for Linux is available in the [kernel tuning guide](https://github.com/minio/minio/tree/master/docs/deployment/kernel-tuning).

### RabbitMQ

The RabbitMQ memory configuration can be found in the [RabbitMQ official documentation](https://www.rabbitmq.com/memory.html). Basically, RabbitMQ will consume memory until a specific threshold is reached, so it should be configured along with the Docker memory limitation.
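
A possible sketch combining both levels of limitation in `docker-compose.yml` (illustrative values; the `RABBITMQ_VM_MEMORY_HIGH_WATERMARK` variable is an assumption about the official image, the same threshold can otherwise be set through `vm_memory_high_watermark.relative` in `rabbitmq.conf`, and `mem_limit` requires the Compose v2 file format):

```yaml
  rabbitmq:
    image: rabbitmq:3.7-management
    environment:
      # RabbitMQ-level threshold (fraction of available RAM), assumed to be read by the image
      - RABBITMQ_VM_MEMORY_HIGH_WATERMARK=0.4
    # Docker-level memory limitation (Compose file format v2)
    mem_limit: 2g
    restart: always
```
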
46 changes: 8 additions & 38 deletions opencti-documentation/docs/installation/manual.md
Original file line number Diff line number Diff line change
Expand Up @@ -7,12 +7,12 @@ sidebar_label: Manual deployment
## Prerequisites

- Node.JS (>= 10)
- Grakn (>= 1.5.7)
- Grakn (>= 1.5.8)
- Redis (>= 3.0)
- ElasticSearch (>= 7.x)
- Minio (>= 20191012)
- RabbitMQ (>= 3.7)


## Prepare the installation

### Installation of dependencies
Expand Down Expand Up @@ -65,49 +65,19 @@ $ node dist/server.js &

The default username and password are those you put in the `config/production.json` file.
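
For reference, these credentials live under the `app.admin` keys, mirroring the `APP__ADMIN__TOKEN` environment variable used in the Docker deployment. A minimal sketch of that part of `config/production.json`, with placeholder values and assumed key names for the email and password fields:

```json
{
  "app": {
    "admin": {
      "email": "admin@opencti.io",
      "password": "ChangeMe",
      "token": "ChangeMe"
    }
  }
}
```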

## Install the workers

2 different workers must be configured to allow the platform to import and export data. One is for import and the other for export.

### Install the import worker
## Install the worker

#### Configure the import worker
The OpenCTI worker is used to write the data coming from the RabbitMQ message broker.

Just copy the worker directory to a new one, named `worker-import`.
#### Configure the worker

```bash
$ cp -a worker worker-import
$ cd worker-import
$ cd worker
$ pip3 install -r requirements.txt
$ cp config.yml.sample config.yml
```

Change the *config.yml* file according to your OpenCTI token and RabbitMQ configuration.

> The worker type must be set to "import"
#### Start as many workers as you need
```bash
$ python3 worker.py &
$ python3 worker.py &
```

### Install the export worker

#### Configure the export worker

Just copy the worker directory to a new one, named `worker-export`.

```bash
$ cd ..
$ cp -a worker worker-export
$ cd worker-export
$ cp config.yml.sample config.yml
```

Change the *config.yml* file according to your OpenCTI token and RabbitMQ configuration.

> The worker type must be set to "export"
Change the *config.yml* file according to your OpenCTI token.
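
A rough sketch of what the resulting file can look like; the exact keys come from `config.yml.sample`, and the URL, token and log level below are placeholder assumptions:

```yaml
opencti:
  # URL and token of your OpenCTI platform (placeholders)
  url: 'http://localhost:4000'
  token: 'ChangeMe'

worker:
  # Assumed logging key
  log_level: 'info'
```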

#### Start as many workers as you need
```bash
Expand All @@ -124,4 +94,4 @@ $ npm run schema
$ npm run migrate
```

Then start the platform.
Then start the platform.
2 changes: 1 addition & 1 deletion opencti-documentation/website/package.json
Original file line number Diff line number Diff line change
Expand Up @@ -10,6 +10,6 @@
"rename-version": "docusaurus-rename-version"
},
"devDependencies": {
"docusaurus": "^1.9.0"
"docusaurus": "^1.14.0"
}
}
1 change: 1 addition & 0 deletions opencti-documentation/website/sidebars.json
Original file line number Diff line number Diff line change
Expand Up @@ -14,6 +14,7 @@
"Usage": [
"usage/overview",
"usage/model",
"usage/knowledge-create",
"usage/reports-create",
"usage/report-knowledge"
],
Expand Down
1 change: 1 addition & 0 deletions opencti-documentation/website/versions.json
Original file line number Diff line number Diff line change
@@ -1,4 +1,5 @@
[
"2.0.0",
"1.1.2",
"1.1.1",
"1.1.0",
Expand Down
