Orient Me, Elasticsearch And Disk Space by Ttacy341(f): 8:39am On Sep 26, 2017
With IBM Connections 6 you can deploy the additional component Orient Me, which provides the first microservices that will build the new IBM Connections Pink. Orient Me is installed on top of IBM Spectrum Conductor for Containers (CFC), a new product that helps with clustering and orchestrating the Docker containers.

A few weeks ago Klaus Bild showed in a blog post how to add a Kibana container and use the deployed Elasticsearch for visualizing the environment.

I found two issues with the deployed Elasticsearch container, but let me explain from the beginning.

On Monday I checked my demo server and the disk was full. After a little searching I found that Elasticsearch was using around 50 GB of disk space for its indices. On my server the data path for Elasticsearch is /var/lib/elasticsearch/data. With du -hs /var/lib/* you can check the used space.

You will see something like this. For your CFC/Orient Me server I would recommend creating a separate mount point for /var/lib, or two separate ones for /var/lib/docker and /var/lib/elasticsearch (see the sketch after the output below):

du -hs /var/lib/*
...
15G /var/lib/docker
0 /var/lib/docker.20170425072316
6,8G /var/lib/elasticsearch
451M /var/lib/etcd
...
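If you want to give these paths their own mount points, the /etc/fstab entries could look roughly like this. This is only a sketch: the device names are placeholders and assume you have created dedicated volumes for them beforehand:

/dev/vg_data/lv_docker          /var/lib/docker          xfs   defaults   0 0
/dev/vg_data/lv_elasticsearch   /var/lib/elasticsearch   xfs   defaults   0 0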
So I searched how to show and delete Elasticsearch indices.

On your CFC host run:

curl localhost:9200/_aliases
or

[root@cfc ~]# curl http://localhost:9200/_aliases?pretty=1
{
  "logstash-2017.06.01" : {
    "aliases" : { }
  },
  "logstash-2017.05.30" : {
    "aliases" : { }
  },
  "logstash-2017.05.31" : {
    "aliases" : { }
  },
  ".kibana" : {
    "aliases" : { }
  },
  "heapster-2017.06.01" : {
    "aliases" : {
      "heapster-cpu-2017.06.01" : { },
      "heapster-filesystem-2017.06.01" : { },
      "heapster-general-2017.06.01" : { },
      "heapster-memory-2017.06.01" : { },
      "heapster-network-2017.06.01" : { }
    }
  }
}
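If you just want to see the indices together with their size on disk, the _cat API is handy as well. This is a general Elasticsearch command, nothing CFC specific; the v parameter adds column headers:

curl 'http://localhost:9200/_cat/indices?v'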
On my first try, the list was “a little bit” longer. Since it is a test server, I just deleted the indices with:

curl -XDELETE 'http://localhost:9200/logstash-*'
curl -XDELETE 'http://localhost:9200/heapster-*'
For this post, I checked these commands from my local machine, and curl -XDELETE ... with an IP address or hostname works too! Elasticsearch provides no real security for index handling, so best practice is to put an Nginx server in front and only allow GET and POST on the URL. In a production environment you should think about securing port 9200 (Nginx, iptables), or anybody could delete the indices. It is only logs and performance data, but I don't want to allow that.
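Just to illustrate the idea, a minimal Nginx sketch could look like the following. The ports are placeholders and assume Elasticsearch itself is rebound to a local-only port, so only the proxy is reachable from outside:

server {
    listen 9200;

    location / {
        # allow only read/search style requests from outside
        limit_except GET POST {
            deny all;
        }
        # placeholder: Elasticsearch listening on a local-only port
        proxy_pass http://127.0.0.1:9201;
    }
}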

Now the server was running again and I dug a little bit deeper. I found that there is a container called indices-cleaner running on the server:

[root@cfc ~]# docker ps | grep clean
6c1a52fe0e0e ibmcom/indices-cleaner:0.1 "cron && tail -f /..." 51 minutes ago Up 51 minutes k8s_indices-cleaner.a3303a57_k8s-elasticsearch-10.10.10.215_kube-system_62f659ecf9bd14948b6b4ddcf96fb5a3_0b3aeb84
So I checked this container:

docker logs 6c1a52fe0e0e
shows nothing. Normally it should show us the curator log. The container command is not chosen in the best way.

cron && tail -f /var/log/curator-cron.log
is supposed to show the log file of curator (a tool to delete Elasticsearch indices), but with && tail only starts when cron has ended with status true. That is the reason why docker logs shows nothing.
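A small entrypoint script like the following would keep tail in the foreground and make the curator output visible via docker logs. This is only a sketch of how it could be done, not the command IBM ships:

#!/bin/bash
# make sure the log file exists before tail starts
touch /var/log/curator-cron.log
# the cron daemon forks into the background
cron
# tail stays in the foreground, so docker logs shows the curator output
exec tail -F /var/log/curator-cron.log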

I started a bash in the container with docker exec -it 6c1a52fe0e0e bash and checked the settings there.

cat /etc/cron.d/curator-cron
59 23 * * * root /bin/bash /clean-indices.sh
# An empty line is required at the end of this file for a valid cron file.
There is a cron job which runs each day at 23:59. The script it starts runs:

/usr/local/bin/curator --config /etc/curator.yml /action.yml
Within /action.yml the configuration shows that logstash-* indices should be deleted after 5 days and heapster-* indices after 1 day.
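I will not copy the whole file here, but a curator action file for this kind of cleanup typically looks roughly like the following sketch. It uses the retention values mentioned above and standard curator syntax; it is not necessarily the exact file IBM ships:

actions:
  1:
    action: delete_indices
    description: "Delete logstash- prefixed indices older than 5 days"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 5
  2:
    action: delete_indices
    description: "Delete heapster prefixed indices older than 1 day"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: heapster-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 1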

I checked /var/log/curator-cron.log, but it was empty! So the cronjob never ran. To test if the script works as expected, I just started /clean-indices.sh and the log file shows:

cat /var/log/curator-cron.log
2017-05-31 08:17:01,654 INFO Preparing Action ID: 1, "delete_indices"
2017-05-31 08:17:01,663 INFO Trying Action ID: 1, "delete_indices": Delete logstash- prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-05-31 08:17:01,797 INFO Deleting selected indices: [u'logstash-2017.05.08', u'logstash-2017.05.09', u'logstash-2017.05.03', u'logstash-2017.04.28', u'logstash-2017.04.27', u'logstash-2017.04.26', u'logstash-2017.05.18', u'logstash-2017.05.15', u'logstash-2017.05.12', u'logstash-2017.05.11']
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.08
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.09
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.03
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.04.28
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.04.27
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.04.26
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.18
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.15
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.12
2017-05-31 08:17:01,797 INFO ---deleting index logstash-2017.05.11
2017-05-31 08:17:02,130 INFO Action ID: 1, "delete_indices" completed.
2017-05-31 08:17:02,130 INFO Preparing Action ID: 2, "delete_indices"
2017-05-31 08:17:02,133 INFO Trying Action ID: 2, "delete_indices": Delete heapster prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-05-31 08:17:02,161 INFO Deleting selected indices: [u'heapster-2017.04.26', u'heapster-2017.04.27', u'heapster-2017.04.28', u'heapster-2017.05.03', u'heapster-2017.05.15', u'heapster-2017.05.12', u'heapster-2017.05.11', u'heapster-2017.05.09', u'heapster-2017.05.08']
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.04.26
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.04.27
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.04.28
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.03
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.15
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.12
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.11
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.09
2017-05-31 08:17:02,161 INFO ---deleting index heapster-2017.05.08
2017-05-31 08:17:02,366 INFO Action ID: 2, "delete_indices" completed.
2017-05-31 08:17:02,367 INFO Job completed.
I checked the log file daily after this research, and since running the task manually once, the cron job is working as expected and curator does its job. No full disk since last week.

CFC uses Kubernetes, so stopping the indices-cleaner container immediately creates a new one! All changes then disappear and the cron job stops working again. I didn't want to wait until IBM provides a container update, so I searched for a way to run curator on a regular basis even when a new container appears.

I created a script:

#!/bin/bash
# find the currently running indices-cleaner container (the ID changes whenever Kubernetes recreates it)
id=$(docker ps | grep indices-cleaner | awk '{print $1}')
# run the cleanup script inside the container and print the curator log
docker exec -t "$id" /clean-indices.sh
docker exec -t "$id" tail /var/log/curator-cron.log
and added it to my crontab on the CFC server with crontab -e (where script stands for the path of the script above):

59 23 * * * script >> /var/log/curator.log
When you use Kibana to analyse the logs, you may want to have more indices available. docker inspect containerid shows us:

"Mounts": [
{
"Type": "bind",
"Source": "/etc/cfc/conf/curator-action.yml",
"Destination": "/action.yml",
"Mode": "",
"RW": true,
"Propagation": ""
},

source : https://www.stoeps.de
