One of the most time consuming – and critical – exercises in data migration and integration is data mapping between source and target systems. Extracting the schema details from legacy systems and mapping them to xCBL, or directly to other systems, is always a challenge for many reasons – old technologies, highly normalized databases, lack of documentation, inaccessible back-ends – sometimes all of them together – requiring a mix of reverse engineering and business analysis.
When it comes to FinOps, the metadata for data entities is much more accessible and easy to extract, but collecting it requires some manual effort, as the information is presented in different pieces from different places. To help with that, I wrote some time ago a simple script to extract extended field metadata for every data entity available in the system – both public and non-public.
The output is a CSV file that has been quite helpful in supporting migration and integration teams: mapping fields, converting data types, understanding enum values, checking mandatory fields, and so on.
The script is available on GitHub; it's a raw runnable class that can be executed manually in FinOps via a URL call, or easily attached to a menu item.
The available columns are:
- Entity – Data entity object name
- Name – Entity label
- Public – Is the entity public (Y/N)
- Public Name – Entity name used for public access (via OData API)
- Category – Data category classification
- Shared – Shared cross-company (Y/N)
- Field Name – Field system name
- Field Label – Field label name
- Field Type – Data type used by the field
- Enum Values – All available values with labels for enum data types
- Mandatory – Mandatory field (Y/N)
- Foreign Key – Is the field a foreign key (Y/N)
- Allow Edit – Field is allowed to be edited (Y/N)
- Allow Edit On Create – Field is allowed to be edited on record insert (Y/N)
- Origin Field – Target table field name
- Origin Table – Target table name
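Once exported, the CSV can be consumed with any scripting language. As a minimal sketch, here is how the file could be filtered in Python, e.g. to list mandatory fields or to break an enum cell into value/label pairs. The sample rows and the `value:label;value:label` enum encoding are illustrative assumptions, not taken from a real FinOps export – adjust the parsing to match the actual file.

```python
import csv
import io

# Illustrative sample using the column names documented above.
# The row values and the "0:No;1:Invoice;..." enum encoding are assumptions.
sample_csv = """Entity,Name,Public,Public Name,Category,Shared,Field Name,Field Label,Field Type,Enum Values,Mandatory,Foreign Key,Allow Edit,Allow Edit On Create,Origin Field,Origin Table
CustCustomerV3Entity,Customers V3,Y,CustomersV3,Master,N,CustomerAccount,Customer account,String,,Y,N,Y,Y,AccountNum,CustTable
CustCustomerV3Entity,Customers V3,Y,CustomersV3,Master,N,Blocked,Invoicing and delivery on hold,Enum,0:No;1:Invoice;2:All;3:Never,N,N,Y,Y,Blocked,CustTable
"""

def mandatory_fields(rows):
    """Return (Entity, Field Name) pairs flagged as mandatory."""
    return [(r["Entity"], r["Field Name"]) for r in rows if r["Mandatory"] == "Y"]

rows = list(csv.DictReader(io.StringIO(sample_csv)))
print(mandatory_fields(rows))  # [('CustCustomerV3Entity', 'CustomerAccount')]

# Split an enum cell into a {value: label} mapping.
enum_cell = rows[1]["Enum Values"]
pairs = dict(item.split(":", 1) for item in enum_cell.split(";"))
print(pairs)  # {'0': 'No', '1': 'Invoice', '2': 'All', '3': 'Never'}
```

In practice you would replace the in-memory sample with `open("entities.csv")` (hypothetical filename) and point the same `csv.DictReader` at it.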
Sample output file: