Dear All,
We managed to deploy the Mist Docker image on DC/OS via Marathon using the following JSON configuration:
{ "volumes": null, "id": "/mist-job-server", "cmd": "/usr/share/mist/bin/mist-master start --config /config/docker.conf --router-config /config/router.conf --debug true", "args": null, "user": null, "env": null, "instances": 1, "cpus": 1, "mem": 2048, "disk": 500, "gpus": 0, "executor": null, "constraints": null, "fetch": null, "storeUrls": null, "backoffSeconds": 1, "backoffFactor": 1.15, "maxLaunchDelaySeconds": 3600, "container": { "docker": { "image": "hydrosphere/mist:0.12.3-2.1.1", "forcePullImage": true, "privileged": false, "portMappings": [ { "containerPort": 2004, "protocol": "tcp", "servicePort": 10106 } ], "network": "BRIDGE" }, "type": "DOCKER", "volumes": [ { "containerPath": "/config", "hostPath": "/nfs/mist/config", "mode": "RW" }, { "containerPath": "/jobs", "hostPath": "/nfs/mist/jobs", "mode": "RW" }, { "containerPath": "/var/run/docker.sock", "hostPath": "/var/run/docker.sock", "mode": "RW" } ] }, "healthChecks": null, "readinessChecks": null, "dependencies": null, "upgradeStrategy": { "minimumHealthCapacity": 1, "maximumOverCapacity": 1 }, "labels": { "HAPROXY_GROUP": "external" }, "acceptedResourceRoles": null, "residency": null, "secrets": null, "taskKillGracePeriodSeconds": null, "portDefinitions": [ { "port": 10106, "protocol": "tcp", "labels": {} } ], "requirePorts": false }
Now we want to switch Spark from local mode to cluster mode on Mesos.
Our docker.conf file looks as follows:
mist {
  context-defaults.spark-conf = {
    spark.master = "local[4]"
    spark.jars.packages = "com.datastax.spark:spark-cassandra-connector_2.11:2.0.3"
    spark.cassandra.connection.host = "node-0.cassandra.mesos"
  }

  context.test.spark-conf = {
    spark.cassandra.connection.host = "node-0.cassandra.mesos"
    spark.jars.packages = "com.datastax.spark:spark-cassandra-connector_2.11:2.0.3"
  }

  http {
    on = true
    host = "0.0.0.0"
    port = 2004
  }

  workers.runner = "local"
}
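With this configuration everything works; we call our jobs through the HTTP API with something like the request below (the route name comes from our router.conf, the parameters are made up, and the hostname/port depend on how the service port is reached through marathon-lb in your setup):

curl -H "Content-Type: application/json" \
     -X POST http://marathon-lb.marathon.mesos:10106/api/my-cassandra-job \
     --data '{"someParameter": "someValue"}'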
To make Spark run in cluster mode on Mesos, we changed the configuration as follows:
mist {
  context-defaults.spark-conf = {
    spark.master = "mesos://spark.marathon.mesos:31921"
    spark.submit.deployMode = "cluster"
    spark.mesos.executor.docker.image = "mesosphere/spark:1.1.0-2.1.1-hadoop-2.6"
    spark.mesos.executor.home = "/opt/spark/dist"
    spark.jars.packages = "com.datastax.spark:spark-cassandra-connector_2.11:2.0.3"
    spark.cassandra.connection.host = "node-0.cassandra.mesos"
  }

  context.test.spark-conf = {
    spark.cassandra.connection.host = "node-0.cassandra.mesos"
    spark.jars.packages = "com.datastax.spark:spark-cassandra-connector_2.11:2.0.3"
  }

  http {
    on = true
    host = "0.0.0.0"
    port = 2004
  }

  workers.runner = "local"   // ????
}
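As far as we understand it, the spark-conf block above corresponds to what we would otherwise pass to spark-submit by hand, roughly like this (the jar path is just a placeholder):

spark-submit \
  --master mesos://spark.marathon.mesos:31921 \
  --deploy-mode cluster \
  --conf spark.mesos.executor.docker.image=mesosphere/spark:1.1.0-2.1.1-hadoop-2.6 \
  --conf spark.mesos.executor.home=/opt/spark/dist \
  --conf spark.cassandra.connection.host=node-0.cassandra.mesos \
  --packages com.datastax.spark:spark-cassandra-connector_2.11:2.0.3 \
  /jobs/my-cassandra-job.jar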
Now we get an exception saying that the Mesos native library libmesos.so could not be found.
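Our first guess is that the Mist master container itself needs libmesos.so so that the Spark driver can talk to Mesos. If so, something along the lines of the fragments below might be needed in the Marathon JSON (the /opt/mesosphere/lib path is a guess based on where DC/OS keeps the library on the agents, and mounting it from the host may well be the wrong approach), but we have not confirmed this:

"env": {
  "MESOS_NATIVE_JAVA_LIBRARY": "/opt/mesosphere/lib/libmesos.so"
},
"volumes": [
  { "containerPath": "/opt/mesosphere/lib", "hostPath": "/opt/mesosphere/lib", "mode": "RO" }
]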
Does anybody know what we are missing?
Also, can anybody tell us what the valid values for workers.runner are? Do we have to change it for cluster mode?
Best regards,
Sriraman.