R datasets of modest size are routinely stored as flat files and retrieved as data frames. The classic storage formats (comma delimited, tab delimited) do not have obvious mechanisms for storing data about the data: i.e., metadata such as column labels, units, and meanings of categorical codes. In many cases we hold such information in our heads and hard-code it in our scripts as axis labels, figure legends, or table enhancements. That’s probably fine for simple cases but does not scale well in production settings where the same metadata is re-used extensively. Is there a better way to store, retrieve, and bind table metadata for consistent reuse?
Yamlet is a YAML-based mechanism for working with table metadata. It supports a compact syntax for creating, modifying, viewing, importing, exporting, displaying, and plotting metadata coded as column attributes. It provides a flexible environment for storing and applying metadata such as column labels, units, and meanings of categorical codes, so that the same information can be reused consistently in tabulation and plotting.
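As a quick illustration (a hedged sketch, not taken verbatim from the package documentation; the column names, values, and file path below are invented for the example), metadata can be written in yamlet's compact YAML form and bound to a data frame as column attributes, roughly like this:

    # Sketch only: 'time', 'conc', and 'race' are invented example columns.
    library(yamlet)

    # Compact yamlet form: a label first, then units or decoded categorical codes.
    path <- tempfile(fileext = ".yaml")
    writeLines(c(
      "time: [ time since dose, h ]",
      "conc: [ plasma concentration, ng/mL ]",
      "race: [ race, [ white: 0, black: 1, asian: 2 ]]"
    ), path)

    # Bind the metadata to a data frame; it is stored as column attributes.
    x <- data.frame(time = 1:3, conc = c(0.5, 1.2, 2.0), race = c(0, 1, 2))
    x <- decorate(x, meta = path)

    decorations(x)          # view the bound metadata
    attr(x$conc, "label")   # metadata travels with the column itself

Once bound this way, the same labels, units, and category decodes can be drawn on downstream for tables and plots instead of being hard-coded in each script.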
Or discover more of our tools:
Argo
Argo is a command line tool that works from the command prompt, PowerShell, or a terminal window on Windows, Linux, and macOS. Argo manages and runs Docker images to provide access to customized R environments. The images come with preconfigured R […]
PMDatR
An R package for building pharmacometric datasets. PMDatR distills decades of expert knowledge regarding the intricate needs of pharmacometric analysis datasets. PMDatR builds on top of popular R packages such as dplyr[1] and tidyr[2], so much of the syntax is well known […]
qpNCA
Non-compartmental analysis (NCA) is a subdivision of pharmacokinetics (PK) that calculates PK parameters without committing to a particular compartmental model and with minimal prior assumptions. qPharmetra has developed the open-source R package qpNCA. It performs all essential PK […]
qpToolkit
An R package for exploring and reporting pharmacometric analyses. The toolkit showcases our emphasis on programmatic information processing for source-to-finish traceability. It reflects decades of accumulated expertise in the efficient transformation of model inputs and outputs. Especially relevant is our experience with PsN, a […]