WDQS CRASHES
Look at the wdqs log: $ docker-compose logs -f --tail 200 wdqs
Show which containers are running: $ docker ps
Dumping Wikibase out with Krusty::wd_to_neo4j.py makes wdqs crash when it is run from AWS, so the dump fails. There is no problem when it is run from our servers. The wdqs log points to a memory error:
wdqs_1 | Caused by: com.bigdata.rwstore.sector.MemoryManagerClosedException: null
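To check whether the crash coincides with the host running out of memory, it helps to watch container memory usage while the dump runs. A minimal sketch, assuming the default compose project name so the container is called wikibase_wdqs_1 (adjust to whatever docker ps reports):
$ # live memory/CPU usage of the wdqs container while wd_to_neo4j.py is running
$ docker stats wikibase_wdqs_1
$ # check whether the kernel OOM killer terminated anything around the crash
$ dmesg -T | grep -i 'out of memory\|killed process'
$ # search the wdqs log for the memory manager error itself
$ docker-compose logs --tail 1000 wdqs | grep -i MemoryManager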
WDQS RESTART
Solution 1: restart the query service via docker-compose:
cd /home/ubuntu/wikibase
docker-compose restart wdqs
docker-compose restart wdqs-updater
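After the restart, check that the updater resumed polling for changes:
docker-compose logs --tail 20 wdqs-updater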
Solution 2 (harder crash): restart Docker and then restart the query service:
docker-compose down
service docker restart
docker-compose up -d
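In either case, a quick way to verify that the query service answers again is to send it a trivial SPARQL query. A sketch, assuming the default wikibase-docker setup where the wdqs proxy is published on port 8989 (adjust the port and path to your docker-compose.yml):
curl -s 'http://localhost:8989/bigdata/namespace/wdq/sparql?query=SELECT%20*%20WHERE%20%7B%20%3Fs%20%3Fp%20%3Fo%20%7D%20LIMIT%201'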
THE PROBLEM
It seems that we are suffering from RAM problems. According to https://github.com/wmde/wikibase-docker/blob/master/README-compose.md, the docker-compose setup requires more than 2GB of available RAM to start, and the machine it was developed on had 4GB of RAM.
AWS RAM:
$ free -mh
              total        used        free      shared  buff/cache   available
Mem:           7.8G        5.7G        589M        121M        1.5G        1.6G
Swap:            0B          0B          0B
Our servers' RAM:
$ free -mh
              total        used        free      shared  buff/cache   available
Mem:            62G        5.9G        9.3G        2.2M         47G         56G
Swap:           63G          0B         63G
Possible solutions:
- limit Neo4j RAM usage on AWS (see the sketch below)
- run the dump on another machine or an on-demand platform
- increase the RAM of the AWS instance
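For the first option, a hedged sketch: if Neo4j (or any other memory-hungry service) runs as a container on the same AWS host, Docker can cap its memory so it cannot starve wdqs. The container name neo4j and the 2g limit are assumptions, not values taken from this setup:
# cap the container's RAM and swap (name and limit are placeholders)
docker update --memory 2g --memory-swap 2g neo4j
Neo4j's own JVM heap and page cache can also be bounded via dbms.memory.heap.max_size and dbms.memory.pagecache.size in neo4j.conf.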