Introduction
Full-text search with Elasticsearch typically involves three components: Elasticsearch (the search engine), Logstash (data synchronization), and Kibana (data visualization). In development and test environments it is convenient to bring the whole stack up with a single docker-compose command, so this post records how to deploy Elasticsearch, Logstash, and Kibana together with docker-compose.
Deployment
Port overview
- Elasticsearch: 9200 (HTTP) and 9300 (TCP transport)
- Logstash: 5044 (Beats input) and 9600 (monitoring API)
- Kibana: 5601
Configuration files
docker-compose.yml
```yaml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.2
    container_name: elasticsearch_server
    restart: unless-stopped
    environment:
      - discovery.type=single-node
      - discovery.zen.minimum_master_nodes=1
      - ES_JAVA_OPTS=-Xms3g -Xmx3g
    volumes:
      - "/home/docker_data/myproj/elasticsearch/data:/data"
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      default:
      common:
        aliases:
          - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:7.16.2
    depends_on:
      - elasticsearch
    container_name: kibana_server
    restart: unless-stopped
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - SERVER_NAME=kibana
    volumes:
      - "/home/docker_data/myproj/config/kibana.yml:/config/kibana.yml"
    ports:
      - "5601:5601"
    networks:
      default:
      common:
        aliases:
          - kibana
  logstash:
    image: docker.elastic.co/logstash/logstash:7.16.2
    depends_on:
      - elasticsearch
    container_name: logstash_server
    restart: unless-stopped
    environment:
      - LS_JAVA_OPTS=-Xmx256m -Xms256m
    volumes:
      - "/home/docker_data/myproj/config/logstash.conf:/config/logstash.conf"
    networks:
      default:
      common:
        aliases:
          - logstash
    entrypoint:
      - logstash
      - -f
      - /config/logstash.conf
    logging:
      driver: "json-file"
      options:
        max-size: "200m"
        max-file: "3"
networks:
  common:
    external:
      name: nginx-bridge
```
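The volume mounts above expect the host-side paths to exist before the containers start. A quick way to prepare them — the paths are copied from the compose file, so adjust `BASE` to your own layout:

```shell
# Create the host directories and empty config files referenced by
# the volume mounts in docker-compose.yml. BASE mirrors the paths
# used above; change it if your layout differs.
BASE=/home/docker_data/myproj
mkdir -p "$BASE/elasticsearch/data" "$BASE/config"
touch "$BASE/config/kibana.yml" "$BASE/config/logstash.conf"
ls "$BASE/config"
```

The actual contents of kibana.yml and logstash.conf are filled in below.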
elasticsearch.yml
```yaml
cluster.name: "es-server"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
```
kibana.yml
```yaml
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "123456"
```
logstash.yml
```yaml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: 123456
```
logstash.conf
```conf
input {
  stdin { }
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://127.0.0.1:5432/postgres"
    jdbc_user => "postgres"
    jdbc_password => "123456"
    jdbc_driver_library => "pgsql\postgresql-42.5.1.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "300000"
    use_column_value => "true"
    tracking_column => "id"
    statement_filepath => "pgsql\logstash-pgsql1.sql"
    schedule => "* 4 * * *"
    type => "postgres_SystemName"
    jdbc_default_timezone => "Asia/Shanghai"
  }
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://127.0.0.1:5432/postgres"
    jdbc_user => "postgres"
    jdbc_password => "123456"
    jdbc_driver_library => "pgsql\postgresql-42.5.1.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "300000"
    use_column_value => "true"
    tracking_column => "id"
    statement_filepath => "pgsql\logstash-pgsql2.sql"
    schedule => "* 4 * * *"
    type => "postgres_SystemDetail"
    jdbc_default_timezone => "Asia/Shanghai"
  }
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://127.0.0.1:5432/postgres"
    jdbc_user => "postgres"
    jdbc_password => "123456"
    jdbc_driver_library => "pgsql\postgresql-42.5.1.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "300000"
    use_column_value => "true"
    tracking_column => "id"
    statement_filepath => "pgsql\logstash-pgsql3.sql"
    schedule => "* 4 * * *"
    type => "postgres_ProblemList"
    jdbc_default_timezone => "Asia/Shanghai"
  }
}
filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
}
output {
  if [type] == "postgres_SystemName" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "test"
      template => "pgsql\es-template.json"
      template_name => "t-statistic-out-logstash"
      template_overwrite => true
      document_type => "text"
      document_id => "%{id}"
    }
  }
  if [type] == "postgres_SystemDetail" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "test"
      template => "pgsql\es-template.json"
      template_name => "t-statistic-out-logstash"
      template_overwrite => true
      document_type => "text"
      document_id => "%{id}"
    }
  }
  if [type] == "postgres_ProblemList" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "test"
      template => "pgsql\es-template.json"
      template_name => "t-statistic-out-logstash"
      template_overwrite => true
      document_type => "text"
      document_id => "%{id}"
    }
  }
  stdout {
    codec => json_lines
  }
}
```
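The SQL files referenced by `statement_filepath` are not shown above; their contents are project-specific. As a hypothetical sketch, each one selects rows newer than the last synchronized id — Logstash substitutes `:sql_last_value` with the stored value of `tracking_column` before each scheduled run, so only new rows are fetched:

```shell
# Hypothetical example of logstash-pgsql1.sql. The table and column
# names are placeholders; only :sql_last_value and the id tracking
# column come from the config above.
cat > logstash-pgsql1.sql <<'SQL'
SELECT id, system_name, updated_at
FROM t_system_name
WHERE id > :sql_last_value
ORDER BY id
SQL
grep -c 'sql_last_value' logstash-pgsql1.sql   # prints 1
```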
Setting passwords
elasticsearch
After the containers are created with the configuration above, enter the elasticsearch container and run the following command to interactively set passwords for the built-in elastic, kibana, logstash_system, and other accounts:
```shell
elasticsearch-setup-passwords interactive
```
Once the passwords are set, the built-in kibana account is the one Kibana uses to connect to Elasticsearch, while elastic is the superuser account for Elasticsearch itself (and is what the kibana.yml above uses).
One-command deployment
Once the deployment directory is laid out and the files above are in place, start the containers with `docker-compose up -d`.
When Elasticsearch and Logstash are only reachable on an internal network, their passwords can be omitted, but Kibana's should still be set.