About This Data

We manufacture records typical of large-scale software development. Real sources are rarely conveniently formatted, so we pick up the flow by simulating the output of an extraction layer that reformats each source independently.

In our simulation a single script, regenerate.rb, invents objects in stages similar to the processes that make up typical production software. Saved output files are then reposted to wiki on realistic schedules. See github for the source.
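A minimal sketch of how a script like regenerate.rb might invent objects in stages. The stage names, fields, and helper methods here are assumptions for illustration, not the actual regenerate.rb code.

```ruby
require 'json'

# Hypothetical stage one: invent commit records.
def make_commit(author, n)
  { 'type' => 'commit', 'author' => author, 'message' => "change #{n}" }
end

# Hypothetical stage two: invent build records derived from commits,
# mimicking how one production process feeds the next.
def make_build(commit)
  { 'type' => 'build', 'status' => 'pass', 'commit' => commit['message'] }
end

commits = (1..3).map { |n| make_commit('alice', n) }
builds  = commits.map { |c| make_build(c) }
records = commits + builds

# Saved output would later be pushed to wiki on a schedule.
puts JSON.pretty_generate(records.first)
```

Each stage consumes the records of the previous one, which is what lets the simulation resemble a real pipeline rather than a single random dump.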

See Organization Chart for the start of a sequence.

See Neo4j Online Meetup for overview.

Extraction scripts and wiki share a secret key granting write access to specific pages. Misconfiguration is reported back to the script, which is where one would begin debugging if data isn't recorded as expected.
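A sketch of what the push side of this arrangement could look like. The endpoint path, the `X-Secret` header name, and the `push_page` helper are assumptions; the point is that the shared key travels with the request and a failure status comes back to the script for debugging.

```ruby
require 'net/http'
require 'json'
require 'uri'

# Build the target URL for a page upload (path shape is assumed).
def push_uri(base_url, slug)
  URI.join(base_url, "/plugin/json/#{slug}")
end

# Post JSON to a wiki page protected by a shared secret key.
def push_page(base_url, slug, secret, json)
  uri = push_uri(base_url, slug)
  req = Net::HTTP::Post.new(uri)
  req['X-Secret'] = secret            # assumed header carrying the shared key
  req['Content-Type'] = 'application/json'
  req.body = JSON.generate(json)
  res = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
  # A wrong key or wrong page comes back as a non-2xx status that the
  # extraction script logs, so debugging starts at the sender.
  warn "push failed: #{res.code}" unless res.is_a?(Net::HTTPSuccess)
  res
end
```

Because the script initiates every transfer, wiki never holds credentials into the source systems.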

See About JSON Plugin where we accept uploads.

We require broad but limited visibility into systems maintained by parts of an organization that are properly concerned about the security of the data they manage. We choose push transactions here because wiki then has no more access than necessary to do its work.

We may require extraction scripts to write an expressive but easily validated subset of JSON we call Regular JSON.
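The page doesn't spell out the rules of Regular JSON, so this validator sketch assumes one plausible "easily validated" subset: a flat object whose values are strings, numbers, or flat arrays of strings and numbers.

```ruby
require 'json'

# True for the scalar types the assumed subset allows.
SCALAR = ->(v) { v.is_a?(String) || v.is_a?(Numeric) }

# Accept only a flat object of scalars or flat arrays of scalars.
def regular_json?(text)
  data = JSON.parse(text)
  return false unless data.is_a?(Hash)
  data.values.all? do |v|
    SCALAR.call(v) || (v.is_a?(Array) && v.all?(&SCALAR))
  end
rescue JSON::ParserError
  false
end
```

A subset like this stays expressive enough for extracted records while letting wiki reject malformed uploads with a few lines of checking.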

Wiki users will want to know how old the data they find here is and how often updates are pushed. Recent push history is reported in the fine print on each page.

Custom plugins used on this site can be installed by administrators from npm using this Plugmatic plugin.

wiki-plugin-json
wiki-plugin-metamodel

See Metamodel Specification for current experiments.

See TopicQuests Nodes for our second test dataset.

See Live El Dorado Demo where we incorporate this data into a read-only docker build from a Neo4j image.