Jobs loaded and executed from XML rather than loaded from XML and executed from DB
Reported by John Wyles | April 26th, 2011 @ 09:59 AM
A surprising discrepancy was found between the on-disk XML defined for a job and the version loaded into the DB for RunDeck. Ideally the XML would either be polled and loaded into the DB at a regular interval, or parsed every time the job is loaded/executed.
Comments and changes to this ticket
-
Greg Schueler April 26th, 2011 @ 10:52 AM
I suppose something like this could be set up with Jenkins/Hudson: the job.xml files are kept in source control, and when they are updated a Jenkins job runs that imports them into Rundeck.
In other words, a normal source control workflow could be used, rather than baking it into the RunDeck system. This is just an observation. We have also talked about ways to make the job XML on disk the authoritative source and sync changes made via the GUI (as Jenkins/Hudson does for jobs).
-
Deleted User April 26th, 2011 @ 11:04 AM
As a convention, I've been placing them in /var/rundeck/projects/<project>/jobs.d and then rd-jobs load'ing them from there.
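Roughly like this (a sketch only; the paths and project name are just my convention, not a RunDeck default, and the rd-jobs load options should be checked against your version):

    #!/bin/sh
    # Sketch: load every job definition kept under a project's jobs.d
    # directory into the RunDeck DB. Paths below are illustrative.
    PROJECT=myproject                          # hypothetical project name
    JOBS_DIR=/var/rundeck/projects/$PROJECT/jobs.d

    for f in "$JOBS_DIR"/*.xml; do
        [ -e "$f" ] || continue                # skip if the glob matched nothing
        rd-jobs load -f "$f"                   # import the definition from the XML file
    done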
-
John Wyles April 26th, 2011 @ 11:57 AM
@Greg - I agree there are a number of ways to resolve the drift that happens between the RunDeck DB and the job XML, and the approach you mention is probably a good practice. Thinking about it, perhaps this request now falls into two parts: adding a new job should export an XML file to a standard path in addition to adding it to the DB, and the XML should be the authoritative source for the job. I think this makes much more sense and leads to fewer surprises when you are managing your jobs through XML.
@Noah - yes, duly noted, perhaps that is something that could be cron'ed, but even that still feels a bit hackish
-
Alex-SF April 27th, 2011 @ 02:21 PM
+1 on @John's point. Possible next steps:
1) decide sync model/life-cycle for job defs between file system and DB
2) discuss configuration settings specifying where job defs live on the FS
3) reconcile changes made through the GUI to (versioned?) files on the FS
-
Deleted User June 8th, 2011 @ 11:50 AM
- Tag set to customer request
-
Hasan February 24th, 2012 @ 02:28 PM
I'd definitely like the ability to tell RunDeck where my git-managed directory is located, such that any change to any manually altered RunDeck element would be kept there: jobs, resources, JAAS policies, etc.
Data generated from RunDeck actions should NOT go there.
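In the meantime, the export side of that can be approximated with something like the following (a sketch only; the repository layout, git usage, and exact rd-jobs list options are assumptions to verify against your version):

    #!/bin/sh
    # Sketch: snapshot current job definitions into a git-managed directory
    # so that GUI edits end up versioned. All names/paths are illustrative.
    PROJECT=myproject
    REPO=/var/rundeck/config-repo              # hypothetical git working copy

    rd-jobs list -p "$PROJECT" --file "$REPO/jobs/$PROJECT.xml" --format xml

    cd "$REPO" || exit 1
    git add jobs/
    # commit only if the exported XML actually changed
    git diff --cached --quiet || git commit -m "Snapshot RunDeck job definitions for $PROJECT"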
-
Esa February 28th, 2012 @ 07:09 AM
- Assigned user set to Greg Schueler
This request should really be considered in the larger context of 'make rundeck distributable over N boxes'.
By this I mean:
Provide a plugin for each kind of 'backend type' (getting node resources, storing job defs, storing job execution results, storing projects, node executors, etc.).
Then, for each type, provide two implementations: one where everything works pretty much as it does now, and one where everything lives on the filesystem (on the RD box).
Then it would be possible to have community contributions that are more useful in the context of a larger enterprise (for example, I would fairly quickly write a set of plugins that use ZooKeeper to make it really possible to run two or more in-sync, hot/hot instances of Rundeck).
-
Greg Schueler February 28th, 2012 @ 09:31 AM
Yes, making this pluggable would definitely be a requirement here. I think the crucial questions for this issue are:
- how/when should job definitions be "synched" between the RD datastore and any plugins that provide the definitions?
- Should it be two-way, such that any change to a job definition gets written back through the plugins? Or should each plugin get to decide whether it can do this?
- How can a plugin tell Rundeck that something has changed? i.e. push a new definition into Rundeck? (this can already be done with the API)
Some of this overlaps with the existing API: you can already read and write jobs to Rundeck that way.
If we consider the issue in light of a Webhook model, then other web services could perhaps 'subscribe' to job definition data and receive changes via HTTP.
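For reference, the read/write part can already be sketched against the current API roughly like this (the URL, API version, token handling and dupeOption behavior below are assumptions to check against the API docs for your install):

    #!/bin/sh
    # Sketch: pull and push job definitions over the existing HTTP API.
    # RDURL, TOKEN and PROJECT are placeholders for a real install.
    RDURL=http://rundeck.example.com:4440
    TOKEN=XXXXXXXX                             # API token for an authorized user
    PROJECT=myproject

    # read: export all job definitions for a project as XML
    curl -s -H "X-RunDeck-Auth-Token: $TOKEN" \
         "$RDURL/api/1/jobs/export?project=$PROJECT&format=xml" > jobs.xml

    # write: push (possibly edited) definitions back, updating duplicates
    curl -s -H "X-RunDeck-Auth-Token: $TOKEN" \
         -F xmlBatch=@jobs.xml \
         "$RDURL/api/1/jobs/import?dupeOption=update"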
-
Esa February 29th, 2012 @ 04:39 AM
- how/when should job definitions be "synched" between the RD datastore and any plugins that provide the definitions?
Shouldn't the 'RD datastore' itself be a pluggable thing? In the simple 'stand-alone' scenario, it would not need to be synched, ever. In a distributed version, the provider of the plugin should be the only one that needs to worry about this. The API calls (with the RESTful interface) would just forward all writes (and reads) to the plugin. Similarly, at run time, don't cache anything; always ask the plugin. If the plugin wants to, it can cache results for quick reading (and obviously invalidate the cache if, for example, some outside actor changed the data; an example of such an outside actor would be another 'hot/hot' instance of Rundeck).
I think the above answers the other two bullets too. If the plugin is the authority, then the simple case is simple, and more complicated cases are up to the plugin writer(s).
For us in particular, this needs to work for a scenario where we have N Rundeck instances behind load balancers, and an executing job must show up in all instances.