## Example 6 - Multi-traj

### 0. Imports

In [1]:

import sys, os


Alright, let's load the package and import the Project class, since we want to start a project.

In [2]:

from adaptivemd import Project


Let's open a project with a UNIQUE name. This name will be used in the DB, so make sure it is new and not too short. Opening a project will create the project if it does not exist and reopen it if it does. You cannot choose between opening modes as you would with a file; this is a precaution against accidentally deleting your project.
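This open-or-create behaviour can be sketched roughly as follows. This is a minimal stand-in, assuming a plain dict in place of the MongoDB backend; `open_project` is a hypothetical helper, not part of adaptivemd:

```python
# Minimal sketch of open-or-create semantics, using a dict as a
# stand-in for the project database. `open_project` is a
# hypothetical helper, not part of adaptivemd.
_db = {}

def open_project(name):
    # create the record if the name is new, otherwise reopen it
    if name not in _db:
        _db[name] = {'name': name, 'generators': [], 'tasks': []}
    return _db[name]

p1 = open_project('tutorial-multi')
p2 = open_project('tutorial-multi')
assert p1 is p2  # reopening yields the same stored project
```

The point is that there is a single code path: the name alone decides whether you get a fresh project or an existing one.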

In [3]:

# Use this to completely remove the tutorial-multi project from the database.
Project.delete('tutorial-multi')

In [4]:

project = Project('tutorial-multi')


Now we have a handle for our project. First thing is to set it up to work on a resource.

### 1. Set the resource

What is a resource? A Resource specifies a shared filesystem with one or more clusters attached to it. This can be your local machine, a regular cluster, or even a group of clusters that can access the same filesystem (as Titan, Eos and Rhea do).

Once you have chosen a place to store your results, it is fixed for the project and should not be altered, since all file references are made relative to this resource. Currently you can use the FU Berlin Allegro cluster or run locally. There are two specific local adaptations that already include the path to your conda installation, which simplifies the use of openmm or pyemma.

Let us pick a local resource on a laptop for now.

In [5]:

from adaptivemd import LocalCluster, AllegroCluster


First, pick your resource – where you want to run your simulation: locally or on Allegro.

In [6]:

resource = LocalCluster()

In [7]:

project.initialize(resource)


### 2. Add TaskGenerators

TaskGenerators are instances whose purpose is to create tasks to be executed. This is similar to the way Kernels work. A TaskGenerator will generate Task objects for you which will be translated into a ComputeUnitDescription and executed. In simple terms:

The task generator creates the bash scripts for you that run a simulation or run pyemma.

A task generator is initialized with all the parameters needed to make it work, and it knows what needs to be staged in order to be used.
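As a rough illustration of the idea, a task generator is an object configured once with its parameters that then stamps out runnable task scripts on demand. The class and the command-line flags below are illustrative assumptions, not the actual adaptivemd implementation:

```python
# Hypothetical sketch of the task-generator idea: configure once,
# then generate the shell command a worker would execute.
class ToyEngine(object):
    def __init__(self, pdb_file, args):
        self.pdb_file = pdb_file  # staged input structure
        self.args = args          # engine command-line options

    def task_run_trajectory(self, length):
        # return the bash command for one trajectory of `length` frames
        return 'run_md.py %s -t %s --length %d' % (
            self.args, self.pdb_file, length)

engine = ToyEngine('alanine.pdb', '-p CPU')
print(engine.task_run_trajectory(100))
```

The real OpenMMEngine additionally handles file staging and output-type bookkeeping, but the factory pattern is the same.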

In [8]:

from adaptivemd.engine.openmm import OpenMMEngine


#### The engine

In [9]:

from adaptivemd import File

pdb_file = File('file://../files/alanine/alanine.pdb').named('initial_pdb').load()

In [10]:

engine = OpenMMEngine(
pdb_file=pdb_file,
args='-r --report-interval 1 -p CPU'
).named('openmm')

In [11]:

engine.add_output_type('master', 'master.dcd', 10)

In [12]:

engine.types

Out[12]:

{'master': <adaptivemd.engine.engine.OutputTypeDescription at 0x10f7254d0>,
 'protein': <adaptivemd.engine.engine.OutputTypeDescription at 0x10f725510>}

In [13]:

project.generators.add(engine)

In [14]:

s = engine._create_output_str()
print s

--types="{'protein':{'stride':1,'filename':'protein.dcd'},'master':{'stride':10,'filename':'master.dcd'}}"
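The string above is just the output-type dictionary serialized without whitespace. A sketch of how such a `--types` argument might be assembled (this is not the actual implementation of `_create_output_str`, only an illustration that reproduces the same format):

```python
# Sketch (assumption): build a --types argument like the one printed
# above from an output-type dictionary. Not the real implementation.
types = {
    'protein': {'stride': 1, 'filename': 'protein.dcd'},
    'master': {'stride': 10, 'filename': 'master.dcd'},
}

def create_output_str(types):
    # serialize each entry without spaces, single-quoted, then wrap
    parts = []
    for name, spec in types.items():
        spec_str = "'stride':%d,'filename':'%s'" % (
            spec['stride'], spec['filename'])
        parts.append("'%s':{%s}" % (name, spec_str))
    return '--types="{%s}"' % ','.join(parts)

print(create_output_str(types))
```

Each entry pairs a filename with a stride, so the engine can write several trajectories of the same run at different time resolutions.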

In [15]:

task = project.new_trajectory(pdb_file, 100, engine=engine).run()


### 3. Create one initial trajectory

#### Create a Trajectory object

In [16]:

project.queue(task)  # shortcut for project.tasks.add(task)


That is all we can do from here. To execute the tasks you need to run a worker using

adaptivemdworker -l tutorial-multi --verbose

In [17]:

print project.tasks

<StoredBundle for with 2 file(s) @ 0x10f6e3e90>

In [18]:

task.trajectory

Out[18]:

Trajectory('alanine.pdb' >> [0..100])

In [21]:

task.state

Out[21]:

u'success'

In [22]:

t = project.trajectories.one

In [24]:

t.types['protein']

Out[24]:

<adaptivemd.engine.engine.OutputTypeDescription at 0x10f725510>


Once this is done, come back here and check your results. If you want, you can execute the next cell, which will block until the task has been completed.
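Blocking until completion can be sketched as a simple polling loop on the task state. This is a hand-rolled stand-in, not adaptivemd's own waiting mechanism; `'success'` appears in the outputs above, while `'failed'` and `'cancelled'` are assumed terminal states:

```python
import time

def wait_for(task, poll_interval=1.0,
             terminal=('success', 'failed', 'cancelled')):
    # poll the task state until it reaches a terminal value
    while task.state not in terminal:
        time.sleep(poll_interval)
    return task.state

# Demo with a fake task that finishes after a few polls.
class FakeTask(object):
    def __init__(self):
        self._polls = 0

    @property
    def state(self):
        self._polls += 1
        return 'running' if self._polls < 3 else 'success'

print(wait_for(FakeTask(), poll_interval=0.0))
```

In a real session you would pass the `task` object from above instead of `FakeTask`.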

In [25]:

print project.files
print project.trajectories

<StoredBundle for with 5 file(s) @ 0x10f6e3e50>
<ViewBundle for with 1 file(s) @ 0x10f6e3e10>


and close the project.

In [25]:

project.close()


The final project.close() will close the DB connection.
