Porting repositories between GitHub servers with octokit.rb

The project I'm working on, PsychTaskFramework, was initially developed on Yale's private git instance. This made perfect sense at the time: the project had no non-Yale collaborators and git.yale.edu is easily accessible to anyone with a Yale ID. And if we need to move later, no big deal, right? GitHub has to have a simple mechanism of porting repositories.

git repositories, yes. GitHub repositories - with labels, milestones, issues, pull requests, and comments? Not so much. The official GitHub support response was that I should avail myself of the API. So I did.

Note 1: My approach makes some avoidable compromises that I note below. Other shortcomings, however, are inherent to the process. The main one is the loss of all GitHub event metadata: all actions and events will appear to have been done at the time of the upload, by the uploading user account. (Luckily, this doesn't apply to git metadata.)

Note 2: I use Ruby and octokit.rb, but this approach should generalize easily to other languages for which Octokit is available.

Existing solutions

I'm not the first to run into this problem. Here are the three alternative solutions I found most easily. None of them ports milestones or pull requests, and none is actively maintained, but they might just get the job done, or at least form the backbone of your own solution.

  • If your repositories are not confined to the internal network, you might like github-issue-mover.
  • github-issue-migrate is an easily extensible Ruby class.
  • github-issue-import is a configurable tool written in Python. It makes certain choices about indicating issue state, e.g. ports closed issues as open issues that start with the word "[CLOSED]". It doesn't guarantee issue / milestone number equality, but if you're porting to an empty repository and you've never deleted a milestone, that might not be a problem.

Since I wanted to preserve milestones and pull requests -- and, in most regards, to essentially make a carbon copy of the original repository -- I had to roll my own. Here's how I did it. (If you're impatient, here are the scripts as gists.)

Step 1: Copy the commits, branches, and the wiki

This one is easy, because each git repository is a full copy. Just initialize a GitHub repo and push the bare Enterprise repository to it. GitHub has a step-by-step approach here; it includes moving the wiki, too.

Step 2: Get personal access tokens for both systems

For password-less authentication, go to Settings > Developer settings > Personal access tokens (/settings/tokens on each GitHub instance) and generate one. I was liberal with the scopes I allowed the tokens to have; the repo scope should be sufficient, but I haven't tested it.

You will want to revoke these tokens after you're done.

Alternatively, you can use any of the other forms of authentication that Octokit works with.
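For example, Octokit also takes plain credentials. A minimal sketch (fine for accounts without two-factor authentication; the username and password are placeholders):

require 'octokit'

# Basic authentication instead of a personal access token.
client = Octokit::Client.new(:login => 'your-username', :password => 'your-password')
puts client.user.login  # quick sanity check that authentication worked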

Step 3: Retrieve every GitHub object from the source repo

(Technically, you could retrieve each object in Step 4, as needed. I wanted to investigate the structure of the retrieved objects, though, and do it offline.)

This is the more straightforward part. Download labels, milestones, issues, pull requests, and comments; do so in the order in which they were created. This will make things a little easier later.

require 'octokit'
require 'json'

# Part 1: Extract issues & everything else from the source repo
## Setup
Octokit.configure do |c|
  c.api_endpoint = 'https://git.yale.edu/api/v3/'
  c.auto_paginate = true
end
# set ENTERPRISE_TOKEN prior to this line
yalegit = Octokit::Client.new(:access_token => ENTERPRISE_TOKEN)
repoName = 'levylab/RNA_PTB_task'

## Action
opts = {:state => :all, :sort => :created, :direction => :asc}
labels = yalegit.labels(repoName, {:state => :all})
issuesAndPRs = yalegit.issues(repoName, opts)
pulls = yalegit.pull_requests(repoName, opts)
milestones = yalegit.milestones(repoName, opts)
comments = yalegit.issues_comments(repoName, opts)

## Intermediate save
# Returned objects are Sawyer resources; we need
# `sawyer_resource.map(&:to_h)` to serialize them.
File.open('labels.json', 'w') do |f|
  f.write(labels.map(&:to_h).to_json)
end
# (...and so on for every element)
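If you'd rather not repeat the File.open block, you could write out every collection in one loop. This is a small variation on the script above, not the original gist; the file names match what the upload step expects:

# Serialize every retrieved collection to its own JSON file.
{ 'labels' => labels, 'milestones' => milestones, 'issuesAndPRs' => issuesAndPRs,
  'pulls' => pulls, 'comments' => comments }.each do |name, collection|
  File.open("#{name}.json", 'w') do |f|
    f.write(collection.map(&:to_h).to_json)
  end
end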

Why did we name a variable issuesAndPRs and then also retrieve pull requests? The Issues API treats pull requests as if they were issues. The Pull Request API returns additional information that will be useful later.
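You can tell which of those items are really pull requests by checking for the :pull_request key, which is exactly how the upload step will tell them apart later:

# Issues API results that are actually PRs carry a :pull_request key.
prs_via_issues = issuesAndPRs.select { |i| i.key?(:pull_request) }
puts "#{prs_via_issues.length} of #{issuesAndPRs.length} issues are pull requests"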

Step 4: Push to the target repo -- in good order

This is where things get a little tricky. Here's why.

  1. GitHub doesn't let you delete issues. To preserve links to issue numbers, you need to add the issues in the right order.
  2. GitHub does allow you to delete a milestone, but it will only re-use its number if no newer milestone has been created since. Consequently, you will need to create placeholder milestones to fill any gaps in the original numbering.
  3. GitHub doesn't allow you to set the numerical identifier for an object.
  4. GitHub only allows links to objects that already exist. Consequently, we need to make sure that if we create an issue with a label, the label is already there.

The order we're going with is labels-milestones-issues-pulls-comments. Don't forget to adjust Octokit configuration for the target GitHub server:

require 'octokit'
require 'json'

# Part 3: Upload everything to target repo on GitHub
## Setup
Octokit.configure do |c|
  c.api_endpoint = 'https://api.github.com/'
  c.auto_paginate = true
end
# set GITHUB_TOKEN prior to this line
github = Octokit::Client.new(:access_token => GITHUB_TOKEN)
repo = 'YaleDecisionNeuro/PsychTaskFramework'

Labels

The main gotcha here is that GitHub pre-populates new repositories with default labels, which your source repository may or may not be using. If it is, we'll re-upload them in a moment; if it isn't, they shouldn't be there at all. Either way, let's start by removing them:

github.labels(repo).each do |l|
  github.delete_label!(repo, l[:name])
end

In no particular order, read and upload your original labels:

labels = JSON.parse(File.read('labels.json'), {symbolize_names: true})
labels.each do |l|
  begin
    github.add_label(repo, l[:name], l[:color])
    puts "Added #{l[:name]} - ##{l[:color]}"
  rescue Octokit::UnprocessableEntity
    # The label already exists (e.g. a default label we didn't delete), so update it instead.
    puts "#{l[:name]} already exists, updating:"
    github.update_label(repo, l[:name], {color: l[:color]})
  end
end

Milestones

As explained above, GitHub insists on numbering milestones by itself, but also allows milestone deletions. So we just need to pay attention to any milestones that are missing in our original data.

milestones = JSON.parse(File.read('milestones.json'), {symbolize_names: true}).sort_by {|m| m[:number]}
current_milestone = 0
fake_milestones = []
milestones.each do |m|
  current_milestone = current_milestone + 1
  while m[:number] > current_milestone
    github.create_milestone(repo, "fake #{current_milestone}")
    fake_milestones << current_milestone
    current_milestone = current_milestone + 1
  end
  github.create_milestone(repo, m[:title], {state: m[:state], description: m[:description]})
end

After that, it's trivial to remove the placeholders:

fake_milestones.each do |fake|
  github.delete_milestone(repo, fake)
end

Issues, PRs, and comments

We'll do all of issues, pull requests and comments in a single loop through the issues.

(This strikes some compromises that are harder to defend. The most complete approach, at least with the objects we'd retrieved thus far, would take separate passes for issue / PR creation, adding comments in the right order, and closing the issues if appropriate. The Octokit comment object does not include a direct reference to the issue number, though, and while extracting it is trivial, I just wanted to be done.)
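(For the record, extracting the issue number from a comment is a one-liner, since the number is the last path segment of the comment's issue_url. Here, comment stands for one element of the comments array we load below:)

# e.g. ".../repos/levylab/RNA_PTB_task/issues/42" -> 42
issue_number_of_comment = comment[:issue_url].split('/').last.to_i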

First, we'll load the files, extract useful identifiers, and create the issue. Since issues are also auto-numbered but cannot be deleted, we'll also guard against the possibility of duplicating issues we had already added:

issuesAndPRs = JSON.parse(File.read('issuesAndPRs.json'), {symbolize_names: true}).sort_by { |p| p[:number] }
pulls = JSON.parse(File.read('pulls.json'), {symbolize_names: true}).sort_by { |p| p[:number] }
comments = JSON.parse(File.read('comments.json'), {symbolize_names: true}).sort_by { |p| p[:id] }

# In case uploading was interrupted, note the uploaded issues
issues_uploaded = github.issues(repo, {state: :all, sort: :created, direction: :desc})

issuesAndPRs.each do |i|
  # Extract identifiers from the issue
  # Skip existing issues
  issue_number = i[:number]
  unless issues_uploaded.empty?
    last_issue_id = issues_uploaded[0][:number]
    if issue_number <= last_issue_id
      next
    end
  end

  issue_url = i[:url]
  issue_labels = i[:labels].map { |l| l[:name] }
  issue_milestone = i[:milestone] ? i[:milestone][:number] : nil

  # Create issue
  sleep(3) # to avoid rate limiting
  github.create_issue(repo, i[:title], i[:body], {milestone: issue_milestone, labels: issue_labels})
  # (We don't actually close the loop here; everything below runs inside it.)

But instead of closing the loop and going to the next issue, we'll do three more things. First, if the original issue was actually a pull request, we'll convert it into a PR or at least note the origin:

if i.key?(:pull_request)
  current_pull = pulls.select { |p| p[:number] == issue_number }[0]
  base = current_pull[:base][:ref]
  head = current_pull[:head][:ref]
  if i[:state] == "open"
    github.create_pull_request_for_issue(repo, base, head, issue_number)
  else
    merge_commit_sha = current_pull[:merge_commit_sha]
    base_sha = current_pull[:base][:sha]
    head_sha = current_pull[:head][:sha]
    pull_note = "**Migration note**: This was a pull request to merge "
    pull_note << "`#{head}` at #{head_sha} into `#{base}` at #{base_sha}. "
    pull_note << "It was merged in #{merge_commit_sha}.\n\n"
    new_body = pull_note + current_pull[:body]
    github.update_issue(repo, issue_number, { body: new_body })
  end
end

Second, we'll add the original comments to the issue:

comments.select { |c| c[:issue_url] == issue_url }.each do |c|
  github.add_comment(repo, issue_number, c[:body])
end

Finally, we'll close the issue if appropriate:

if i[:state] != 'open'
  github.close_issue(repo, issue_number)
end

This is a little confusing, so I'm noting again that the upload script is also available as a gist.

Step 5: Start working with the new copy of the repository

Add remotes to your working copies. Lock or remove the existing issues. Hang a big banner saying "Work has moved to a new location." Set up a post-receive hook that will automatically re-push commits to their new home.

Omissions, shortcomings, compromises

I was going for a good-enough facsimile, not the perfect replica. Here's what I skipped, and how you could preserve it if you cared to.

  • I didn't preserve complex issue timelines -- multiple closings and re-openings, changes of labels and milestones, and the like. You could retrieve the events and the comments via source.issue_timeline(repo, issueNumber), sort by :created_at, and add them to the target repository in the right order using the requisite API command. (In fact, you could retrieve everything via source.repository_events(repo) and then use the strategy pattern to walk the entire repo history. If I were making a fully general solution, that's what I'd go for.)
  • I haven't ported merged pull requests. For the GitHub API to create a pull request from an issue, there needs to be a difference between the base and head refs. Since the merge definitionally removed this difference, the API will refuse the conversion. To get around this, you'd have to find a way to "replay" the commits along with the repository events. Leaving a quick note about the historical origin of the repository seemed like a reasonable compromise.
  • In comments and issue descriptions, GitHub automagically creates links to existing issues. Automagic issue linking doesn't happen if the issue doesn't exist yet. You can get most of this by adding comments in the order in which they appeared, but even that can occasionally fail -- e.g. if you edited the checklist in the issue OP to link to a relevant issue created later. (You can hack this by iterating through all target issues and comments and making an invisible change like adding a space.)
  • Hard-coded links still point to the source repo (e.g. the README.md linking to a wiki page, or a comment pointing to the canonical URL of a particular file at a particular commit). A content filter that replaces the source URL with the target URL before pushing milestones, issues / PRs, and issue comments would be a clean way to fix this.
  • There's no issue locking, because I hadn't locked any issues. It is trivial to add, though: check the boolean i[:locked].
  • Reactions to comments are lost. I'm not sure it would make sense for the uploader to add them.

Adventures with Qualtrics, part 2: exporting the latest response via API

(In Part 1, I wrote about the role of Piped Text and building a custom web service that Qualtrics will recognize.)

For the feature I was trying to implement in December, I needed to evaluate a batch of responses the subject answered earlier in the survey. Luckily, Qualtrics has an API that allows for response export! While the documentation has an example of a response export workflow, I found their per-format export pages more informative. Here's the CSV export documentation page. Still, I ran into some issues that merit documenting.

Requesting a single response? You can't

Since one of the embedded fields that Qualtrics creates is ResponseID, can't we just pass that and let our external service use it to grab our current participant's set of responses? Sadly, no. Qualtrics doesn't allow you to query at the level of a response, only at the level of a survey. (There is an optional lastResponseId parameter in the export query, but that will only get you the responses entered after the response you're calling the service from. This could be useful if we were building a dataset incrementally, but in my case, I needed the data almost immediately.)

Instead, I assign the subject a unique ID early in the survey. This can be either pre-assigned or generated in the survey - perhaps with the random number generator web service I mentioned above. I pass this ID to my web service, which will use it to pick out the right response.

But we can't select on any response-level variable, so to limit our queries we'll have to do some guessing. If we're sure that there are no race conditions -- i.e. only one person ever takes the survey at a time -- we can use limit = 1 to get only the last response. Alternatively, if you know that the external service will be called immediately after the participant fills out the survey, you can set startDate to a few hours before the current time. (NB: the parameter value takes ISO 8601 format.)

The Nitty Gritty

Now, let's look at an example of the inquiry logic. In the abstract, there are three steps: get the response, unzip it, and load it into an appropriate data structure.

# Excerpt from a Sinatra helper function
response_zip = getResponseFromQualtrics()
response_string = unzip(response_zip)
csv_table = rawToTable(response_string)

Step 1: Get the data

Getting the data is a two-step process. First, I request a CSV file from Qualtrics and wait until it's ready. Second, I download it.

Instead of implementing the handshake myself, I took advantage of the qualtrics_api Ruby gem made by Yurui Zhang. (There's also sunkev's qualtrics gem, which I haven't tried.)

def getResponseFromQualtrics
  start_time = getStartTime(settings.prior_hours)

  QualtricsAPI.configure do |config|
    config.api_token = settings.token
  end

  survey = QualtricsAPI.surveys[settings.survey]
  export_service = survey.export_responses({start_date: start_time})
  export = export_service.start

  while not export.completed?
    sleep(5)
    export.status
  end

  require 'open-uri'
  return open(export.file_url, "X-API-TOKEN" => settings.token).read
end

def getStartTime(hours_offset)
  require 'time'
  start_time = Time.now.utc - (60 * 60 * hours_offset)
  return start_time.iso8601
end

(These are Sinatra helpers. settings is a Sinatra-wide global that reads in secrets specified in the environment, along with other configuration. The dotenv gem is excellent for secret storage in development; as for production, here's how to set secrets on Heroku.)

Steps 2 & 3: Unzip and convert

unzip is just rubyzip; no magic there. There is a bit of a trick to getting a compressed stream to a CSV with headers, though. That's because some of the Ruby CSV methods can only deal with files, not streams.
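For completeness, the unzip helper can be a few lines with the rubyzip gem. This is a sketch rather than the exact code from my app; it assumes the export archive contains a single CSV file:

def unzip(zip_string)
  require 'zip'
  require 'stringio'
  # The export arrives as a zip archive with one CSV inside.
  Zip::InputStream.open(StringIO.new(zip_string)) do |io|
    io.get_next_entry   # position at the first (and only) entry
    return io.read      # hand its contents back as a string
  end
end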

def rawToTable(response_string)
  require 'csv'
  response_csv = CSV.new(response_string, headers: true)
  response_csv = response_csv.read
  response_csv.delete_if do |row|
    # Remove the row with descriptions & internal IDs
    /^R_/ !~ row['ResponseID'] 
  end
  return response_csv
end

And done!

After this, I select the row that contains the subject ID I had passed in the Qualtrics redirect, pick a choice and evaluate it, and visualize it with an assist from the wonderful animate.css library at an endpoint created by Sinatra and deployed to Heroku. Unlike Qualtrics features, all are well-documented elsewhere.
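As a rough sketch of that selection step (the route, the subjectID column name, and the rendering are hypothetical stand-ins, not my actual endpoint):

# Hypothetical Sinatra route: the subject ID comes in via the Qualtrics
# redirect and picks out the matching row of the exported responses.
get '/evaluate/:subject_id' do
  csv_table = rawToTable(unzip(getResponseFromQualtrics()))
  row = csv_table.find { |r| r['subjectID'] == params['subject_id'] }
  halt 404, 'No response found for this subject yet' if row.nil?
  # ...pick out the relevant answers from `row`, evaluate, and visualize...
end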

Approach 2: Avoid the API, pass the values

The API approach has a number of problems. For one, Qualtrics API is a paid feature. Worse, API calls lag -- at least once, the call and processing took over 30 seconds and caused a request timeout. While I could re-write the interface so that the API call and processing are done by a background process that the front-end checks for periodically, it's a pain that might not be worth it.

The obvious alternative: instead of a subject identifier, pass the responses that the survey has readily available via URL. I write about this in part 1.

There are limits. Because Qualtrics uses GET for everything, you might have to keep your URI under 2,000 characters. Basically, don't try to transmit essay responses. (I was worried that Qualtrics itself might throw a fit if I told it to store a 56k-character URI, because piped text is obviously longer than the response it denotes. I shouldn't have worried. Qualtrics managed even a 100k-character URI without a hiccup -- and that's way past the 2,000 characters that your browser and your server can handle. In other words, Qualtrics isn't going to be your constraint.)

As usual, the trade-off for speed is maintainability. You refer to many piped text variables instead of just one or two, so you will likely have to develop a pipeline to generate the URI. You might have named your questions for clearer data manipulation, but for the purposes of piped text, you'll have to replace them with the internal question IDs (QID#). And while you can maintain the order of values in one place, you have to explicitly plan for that.
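Such a pipeline can be short. A hypothetical sketch -- the service URL, question IDs, and piped-text field type are placeholders:

# Build the redirect-URL template from an ordered list of question IDs.
base_url = 'https://your.service.example/evaluate'
question_ids = %w[QID1 QID4 QID7]
pipes = question_ids.map { |qid| "${q://#{qid}/ChoiceTextEntryValue}" }
puts "#{base_url}?values=#{pipes.join(',')}"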

Bonus Approach: No API is best API

Finally, I should note that custom web services and APIs are an extra overhead. For simpler problems, there are at least two steps to attempt first.

1. Abusing Survey Flow

Basic Survey Flow building blocks are quite powerful, making many problems tractable with stock Qualtrics. To pick randomly from a bag of option sets, you can use Randomization to pick exactly one of n embedded data blocks underneath it. Branches, of course, offer basic if conditionals (although not else -- you'll have to take care to make their triggering conditions mutually exclusive).

2. JavaScript

You can do some things with Qualtrics JavaScript. (For instance, if you can use it to get arbitrary piped text, that could make things easier.) You will need to weigh how much crucial logic you want to embed in JavaScript -- if you don't control the survey-taking environment, you cannot guarantee that the client has JS enabled, and you might have to take extra steps to either degrade functionality gracefully or detect its absence.

Other approaches?

It is very possible that other approaches exist; they were not necessary for my purposes. In one of my next articles, I hope to talk about what they were.


Adventures with Qualtrics, part 1: Custom Web Services and Piped Text

To create a feature in a pilot study I was running in December, I took a dive into Qualtrics API and custom web service building. In the process, I discovered a couple of workarounds and little-documented properties of both. The key to integrating them: piped text.

Piped Text: The Qualtrics Variable

With piped text, you can insert any embedded data and any answer your subject gave into (almost) any Qualtrics context.

If this doesn't excite you, it should.

Let me rephrase. Piped text references the content of variables you can set. It can do this in conditional validation, display logic and survey flow. (You can't make it into a GOTO, but that might be a good thing.) The documentation undersells this; this Qualtrics blog article does it a little more justice.

For my purposes, the most important insight goes unmentioned: you can use piped text to pass data to an external web service. That way, you can use data from an in-progress session as input for arbitrarily complex logic implemented in a programming language of your choice.

The approach

How does this work? First, you identify the shortcode for an answer or embedded field. Then, you insert it into the URL, like so:

http://your.service.URL/${e://Field/Identifier}/${q://QID1783/ChoiceTextEntryValue}

This will substitute the value of Field and the answer to question QID1783 in time for the redirect.

Qualtrics can call an external service in two ways.

  1. End-of-survey redirect. Qualtrics simply passes the torch to your service, which wraps up the session for your participant.
  2. Web Service step in Survey Flow. Your service will pass results back to Qualtrics, and they'll be available as embedded data for the following Qualtrics questions in that session. (With the "Fire and Forget" setting, this can be asynchronous.)

In the latter case, the external service needs to pass results back to Qualtrics.

What's the pass-back format?

"Pass results back to Qualtrics" glides over a big issue: Qualtrics documentation does not provide a list of valid return formats. The documentation and the only StackOverflow answer I could find both mention RSS as the only example of an acceptable format. The random number generator everyone uses for MTurk compensation, however, has a much simpler outcome: random=7. That's hopeful, but what if you want to pass multiple values back? Docs don't say.

I decided to test this out on a dummy web service I wrote in Sinatra. It turns out that Qualtrics will take data from JSON, XML, and a URI query string. (That's ?a=b&c=d -- I owe this insight to Andrew Long at the Behavioral Lab.) You can try this out for yourself -- just put down https://salty-meadow-86558.herokuapp.com/ as your Web Service in Qualtrics.
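A dummy service of that sort takes only a few lines of Sinatra. This sketch is illustrative rather than the code behind the Heroku URL above, and the field names are arbitrary:

require 'sinatra'

get '/' do
  content_type 'text/plain'
  # Qualtrics picks these up as embedded data fields named `random` and `group`.
  "random=#{rand(1..100)}&group=#{%w[control treatment].sample}"
end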

Pulling the API in

My project required passing more data to the custom web service than piped text could conveniently carry, which meant that I needed to tangle with the API. For that, see part two.


Executing nested rules with dragonfly

Rule nesting makes context-free grammars very powerful. It allows for brevity while preserving complexity — and dragonfly, the unofficial Python extension to Dragon Professional Individual, seems to promise that functionality with RuleRef, which "allows a rule to include (i.e. reference) another rule".

But using RuleRef is less obvious than it would appear. How do you actually refer to the rules? How do you execute the actions that are associated with the referenced rules? And how do you ensure that dragonfly does not complain about rule duplication if you do this multiple times?

I will proceed step-by-step, but if you want to jump ahead to the solution, you can read it on GitHub.

If you're unfamiliar with Dragonfly, do read this introduction to basic Dragonfly concepts in the Caster documentation.

Step 1: Include the rule with RuleRef

Let's start with a toy grammar. In this grammar, we will have two rules that are not exported: that is to say, you can't invoke them directly. We'll call them simply RuleA and RuleB. (I will refer to them as "subrules" from here on out.)

# Rules proper
class RuleA(MappingRule):
    exported = False
    mapping = {
        "add <n>": Text('RuleA %(n)s'),
    }
    extras = [
        IntegerRef("n", 1, 10),
    ]

class RuleB(MappingRule):
    exported = False
    mapping = {
        "bun <n>": Text("RuleB %(n)s") ,
    }
    extras = [
        IntegerRef("n", 1, 10),
    ]

We'll call the top-level rule RuleMain and include Rules A and B in the extras.

class RuleMain(MappingRule):
    name = "rule_main"
    exported = True
    mapping = {
        "boo <rule_b> and <rule_a>": Text("Rule matched: B and A!"),
        "fair <rule_a> and <rule_b>": Text("Rule matched: A and B!"),
    }
    extras = [
        RuleRef(rule = RuleA(), name = "rule_a"),
        RuleRef(rule = RuleB(), name = "rule_b")
    ]

The name argument of RuleRef takes care of the correspondence between the spec and the subrule. To get recognized, you do actually have to match the subrule's spec by saying e.g. "boo bun three add five".

This only carries out the Text("Rule matched: ...") action defined in the MainRule, though. To actually execute the subrules, we'll need to add the Function action.

Step 2: Use (and mass-produce) Function

Dragonfly's Function allows arbitrary code execution. However, you can only pass in a function reference, to which Function passes the right extras (seemingly) automagically. The caster documentation gives a useful but incomplete example:

def my_fn(my_key):
  '''some custom logic here'''

class MyRule(MappingRule):
  mapping = {
    "press <my_key>":     Function(my_fn),
  }
  extras = [
    Choice("my_key", {
      "arch": "a",
      "brav": "b",
      "char": "c"
    })
  ]

When you say "press arch", my_fn gets called with the value of the my_key extra. But what if the mapping contained a reference to another rule in another extra? Would that also be passed to my_fn? It turns out that Function actually passes keyword arguments. If you name the argument to my_fn the same as the name of your extra, then my_fn will be called with the value of that extra. You're not limited to one extra, either: for example, if we added an extra called towel to MyRule.extras, then def my_fn(towel, my_key) would receive both.

(If you define my_fn with **kwargs, it will receive all extras in a dict, including the default _node, _rule, and _grammar. This does lose the order in which the subrules were invoked, so you can't just pass a general function that invokes all rules unless you're happy with them being invoked alphabetically / in an arbitrary order. That was my first approach:

def execute_rule(**kwargs): # NOTE: don't use
    defaultKeys = ['_grammar', '_rule', '_node']
    for propName, possibleAction in kwargs.iteritems():
        if propName in defaultKeys:
            continue
        if isinstance(possibleAction, ActionBase):
            possibleAction.execute()

In this case, Rule A will be executed before Rule B, no matter the optionality or the order of utterance, just because of the kwargs key order. I played around with exploring the default extras, but I haven't managed to figure out how to extract the order from the actual utterance to reorder the subrules automagically; that might require a deeper dive into Dragonfly than I'm ready for.)

You could write executeRuleA(rule_a) to run rule_a.execute(), then add Function(executeRuleA) to be executed alongside Text when the rule is matched. Unless you want to do different things for different rules, though, it is easiest to define a factory for functions that simply execute whatever extras you specify:

from dragonfly import Function, ActionBase

def _executeRecursive(executable):
    if isinstance(executable, ActionBase):
        executable.execute()
    elif hasattr(executable, '__iter__'):
        for item in executable:
            _executeRecursive(item)
    else:
        print "Neither executable nor a list: ", executable

def execute_rule(*rule_names):
    def _exec_function(**kwargs):
        for name in rule_names:
            executable = kwargs.get(name)
            _executeRecursive(executable)

    return Function(_exec_function)

This way, if you want to execute rule B before rule A, you can add execute_rule('rule_b', 'rule_a') to the action. Equivalently, you could use execute_rule('rule_b') + execute_rule('rule_a'). (Since each call returns a Function, the results can be added together and to other dragonfly Action elements.)

Step 3: Reusing subrule references in other rules

Let's say you want to reuse your subrules in another rule, like so:

# Note: This doesn't execute the sub-actions at all
class CompoundMain(CompoundRule):
    spec = "did (<rule_a1> and <rule_b1> | <rule_b1> and <rule_a1>)"
    exported = True
    extras = [
        RuleRef(rule = RuleB(), name = "rule_b1"),
        RuleRef(rule = RuleA(), name = "rule_a1"),
    ]

If you add this to your grammar, though, dragonfly will fail to load it with the following error:

GrammarError: Two rules with the same name 'RuleA' not allowed.

How did this happen? We even renamed the extras! It turns out that each subrule instantiated in RuleRef is registered as a separate rule. By default, each instance will assign name = SubRule.__name__. Consequently, you'll have to instantiate the subrules with unique names each time you re-use them. Fun fact: those names don't have to bear any relation to anything else.

    extras = [
        RuleRef(rule = RuleB(name = "Sweeney Todd"), name = "rule_b1"),
        RuleRef(rule = RuleA(name = "Les Miserables"), name = "rule_a1"),
    ]

There are many like it, but this one is mine

I'm sure this is not the only way to do it: one could override the _process_recognition method of your MainRule, or perhaps caster, aenea, or dragonfluid implement equivalent nesting functionality in ways that I have overlooked. I would be very excited to learn about other approaches!

For now, I'm looking forward to applying this in my vim-grammar for dragonfly project. I'm hoping to write about the reasons why vim is excellent for voice programming later.
