Here’s the most recent episode of @msatokotsubi’s podcast Field Notes with Richard Griscom and Andrew Harvey (both are here, hi @rgriscom and @Andrew_Harvey!).
Today’s episode is with Andrew Harvey and Richard Griscom from Leiden University. Andrew and Richard have just returned from their most recent field trip to Tanzania, and in this episode they discuss their current projects (documenting Gorwaa, Hadza and Ihanzu) and teamwork in the field.
There is a lot to be learned from what this group is doing! @rgriscom or @Andrew_Harvey, there’s one thing I wanted to ask you about that was mentioned in one of the (three!) episodes you were involved in, but your term for it has slipped my mind — a methodology for recording and re-recording audio that proved very successful? I would love to learn more about that. (My brain’s memory capacity of late is hovering somewhere around that of a flashlight… one bit…)
Hi!
@rgriscom has been putting a lot of thought into optimizing workflows, both in and out of the field (I’m lucky to be working so closely with him, as I’m a bit more analogue in my approach).
I think the method you’re probably referring to is his and Manuel Otero’s Digital Notebook Method, which I’ve used (in a very limited capacity) and found to have a lot of nice benefits, as well as limitations, depending on the type of language you’re working on and the sort of data you’re looking for.
Give it a look! I bet Richard would appreciate some discussion about it.
Ah yes, that’s it! Thanks for the pointers; bedtime reading for tonight.

Yes, the basic idea behind “re-recording” is that most elicitation involves an exploratory phase during which the “eliciter” and “elicitee” both become acquainted with the target data. That exploratory phase is important, but a recording of it will feature a lot of meta-linguistic dialogue in addition to productions of the elicited data. From both a data processing and accessibility perspective, such a recording is not ideal, assuming the goal is to produce time-aligned digital data.
If you create a second recording that contains only the targeted data, ideally produced in a consistent way, then you will have a recording that can easily be paired with time-aligned annotations. You can either use automated audio segmentation (e.g. Annotate to silences… in Praat), more generally known as “voice activity detection” (VAD), or you can create timecode data during the recording session by using a timer app on a mobile device or a timer macro in a spreadsheet on a computer. You can then integrate the timecode data with the text data using a script, such as the one I made for the Digital Notebook Method, and produce outputs like .TextGrid for Praat, .EAF for ELAN, and CSV/TSV.
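As a rough illustration of that integration step, here is a minimal Python sketch (not the actual Digital Notebook Method script) that turns a CSV of utterance start/end timecodes plus transcriptions into a single-tier Praat TextGrid. The file names and the column headers (“start”, “end”, “text”) are assumptions for the example, not part of the method itself.

```python
# Minimal sketch: convert a CSV of (start, end, text) rows into a Praat TextGrid.
# File names and column headers are illustrative assumptions.
import csv

def rows_to_textgrid(rows, total_duration, tier_name="utterance"):
    """Build a long-format TextGrid string with one interval tier.

    `rows` is a list of (start, end, text) tuples sorted by start time.
    Gaps between utterances are padded with empty intervals so the tier
    tiles the whole recording, which is what Praat expects.
    """
    intervals, cursor = [], 0.0
    for start, end, text in rows:
        if start > cursor:                       # empty interval before this utterance
            intervals.append((cursor, start, ""))
        intervals.append((start, end, text))
        cursor = end
    if cursor < total_duration:                  # empty interval up to the end of the recording
        intervals.append((cursor, total_duration, ""))

    lines = [
        'File type = "ooTextFile"',
        'Object class = "TextGrid"',
        "",
        "xmin = 0",
        f"xmax = {total_duration}",
        "tiers? <exists>",
        "size = 1",
        "item []:",
        "    item [1]:",
        '        class = "IntervalTier"',
        f'        name = "{tier_name}"',
        "        xmin = 0",
        f"        xmax = {total_duration}",
        f"        intervals: size = {len(intervals)}",
    ]
    for i, (xmin, xmax, text) in enumerate(intervals, start=1):
        escaped = text.replace('"', '""')        # Praat escapes quotes by doubling them
        lines += [
            f"        intervals [{i}]:",
            f"            xmin = {xmin}",
            f"            xmax = {xmax}",
            f'            text = "{escaped}"',
        ]
    return "\n".join(lines) + "\n"

with open("timecodes.csv", newline="", encoding="utf-8") as f:
    rows = [(float(r["start"]), float(r["end"]), r["text"]) for r in csv.DictReader(f)]

with open("session.TextGrid", "w", encoding="utf-8") as f:
    f.write(rows_to_textgrid(rows, total_duration=rows[-1][1]))
```

The same interval list could just as easily be written out as a TSV or an ELAN .eaf; the TextGrid is shown here only because its plain-text format is straightforward to generate.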