r? @phrawzty - I need to figure out how to get this associated with bugs properly (it fixes several), so don't merge yet :) I think we should land them all together to minimize breakage for external users - I think the docs aren't quite good enough yet in particular, but the following all WFM:
- removes all config files except alembic and django local.py
- local.py can go away when PR #2684 lands
- adds systemd service files for all socorro services
- provides working nginx sample configs
- updates docs
- splits Socorro initial setup out into `/usr/bin/setup-socorro.sh`
I've been testing this out on our AMI and it seems to work! Now that some of the "socorro lite" work has landed, you can run collection+processing without any additional services running, except for consul which is now required for all Socorro services:
```
sudo yum install consul envconsul
consul agent -bootstrap-expect 1 -server -data-dir=./consul-data/
```
First you must configure the collector to use WSGI rather than the default built-in web.py server:
```
curl -X PUT -d 'socorro.webapi.servers.ApacheModWSGI' http://localhost:8500/v1/kv/socorro/collector/web_server__wsgi_server_class
```
(@twobraids we should really change the name of that config key to remove the `ApacheMod` part and just call it `socorro.webapi.servers.WSGI`)
Then just start the services:
```
sudo yum install socorro
sudo systemctl start socorro-collector
sudo systemctl start socorro-processor
```
This will store both raw and processed crashes in `~socorro/crashes`, and will scan the filesystem rather than using a queue. Storing crashes to ES/S3/PG and enabling RabbitMQ are just a matter of setting the right keys/values in consul.
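As a sketch of what that looks like (the key path and class name below are illustrative guesses following the same `socorro/<service>/<section>__<option>` pattern as the collector key above; check the actual config sections before using them):

```
# Illustrative only: switch the processor's crash storage destination.
# Key and class names here are assumptions, not verified values.
curl -X PUT -d 'socorro.external.es.crashstorage.ESCrashStorage' \
  http://localhost:8500/v1/kv/socorro/processor/destination__crashstorage_class
```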
You should be able to submit crashes, and they should be processed successfully (the raw `.json` and `.dump` files and the processed `.jsonz` files are stored in `~socorro/crashes`):
```
# from a Socorro checkout w/ activated virtualenv
socorro submitter -u http://crash-reports/submit -s testcrash/raw
```
For a distributed setup we don't want to share a filesystem, so you'll need to turn RabbitMQ and S3 on via consul.
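Hypothetically, enabling RabbitMQ would look something like this (the key name is an illustrative guess following the same pattern as the WSGI key above, not a verified option name):

```
# Illustrative only: point the processor at a RabbitMQ host instead of
# filesystem scanning. The exact key path is an assumption.
curl -X PUT -d 'rabbitmq.example.com' \
  http://localhost:8500/v1/kv/socorro/processor/resource__rabbitmq__host
```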
The webapp has more dependencies before it'll work:
```
sudo /usr/pgsql-9.3/bin/postgresql93-setup initdb
# set local connections to "trust"
vi /var/lib/pgsql/9.3/data/pg_hba.conf
sudo systemctl start postgresql-9.3
sudo systemctl start elasticsearch
sudo yum install memcached
sudo systemctl start memcached
```
PG and ES need to be set up (NOTE: this script assumes they are on localhost, and it should fail gracefully if that's not the case. It's also safe to re-run the script; it won't destroy anything already set up):
```
sudo setup-socorro.sh
# configure middleware to use WSGI rather than the default built-in web.py server
curl -X PUT -d 'socorro.webapi.servers.ApacheModWSGI' http://localhost:8500/v1/kv/socorro/middleware/web_server__wsgi_server_class
sudo systemctl start socorro-middleware
sudo systemctl start socorro-webapp
```
The RPM drops nginx sample configs into `/etc/nginx/conf.d` that listen on the vhosts `crash-reports` (collector), `crash-stats` (webapp), and `socorro-middleware` (this one only listens on localhost since it's not safe, and we don't want anyone to accidentally expose it).
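If you're curious what those look like, here's a minimal sketch of the collector vhost (simplified; the shipped configs in `/etc/nginx/conf.d` are the source of truth, and the upstream port here is an assumption):

```
server {
    listen 80;
    server_name crash-reports;

    location / {
        # proxy to the local socorro-collector WSGI service
        # (port is illustrative; match it to your collector config)
        proxy_pass http://127.0.0.1:8888;
        proxy_set_header Host $host;
    }
}
```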
The webapp will give you 404s for the default `WaterWolf` product unless you either use `socorro setupdb`'s `--fakedata` option, or set up a new product via the admin UI per http://socorro.readthedocs.org/en/latest/configuring-socorro.html (maybe we should have the `setup-socorro` script do some/all of this?)
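For the first option, something along these lines (the `--database_name` flag is from memory; double-check against `socorro setupdb --help`, and note this needs PG running):

```
socorro setupdb --database_name=breakpad --fakedata
```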