Tuesday, November 15, 2022

BYOD vs Data Lake in D365 FO

 


BYOD : 

With a BYOD approach, native table-level access does not exist in D365 F&O. To approximate it, a set of custom entities that "replicates" the raw table structures is typically built to mimic the underlying tables in the D365 F&O database.


The custom entities are built in X++ and require specialized skills to build and maintain. Complicating matters further, the custom entities then need to be deployed through the D365 F&O development lifecycle (development, Tier 1, Tier 2, etc.) to the production environment.


The typical ERP development lifecycle is a slow and deliberate process designed to ensure the stability and robustness of the system. As a result, making any change to the data environment can be painstakingly slow, impacting the data and analytics team's ability to deliver updates and changes to the business.

The BYOD method can therefore hamper the data and analytics team's speed of response, system enhancements, and any further development of the data warehouse, downstream semantic models, and reports. Any new fields or tables needed for an enhancement can take weeks or longer to become available because of the dependency on code promotion in the ERP development lifecycle.

Once the BYOD entities are published, data is then “synchronized” on a scheduled basis to an Azure SQL database, which forms the typical input point for the data warehousing process. While this seems like a simple process, it is filled with several technical and process-related challenges. 
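To make the scheduled synchronization concrete, here is a minimal, hypothetical sketch of what each sync run effectively does to the target database: the latest export batch is upserted by key into a staging table. It uses an in-memory SQLite database as a stand-in for Azure SQL, and the table and column names are illustrative, not the actual BYOD schema.

```python
import sqlite3

# In-memory SQLite as a stand-in for the Azure SQL target database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE CustTrans (RecId INTEGER PRIMARY KEY, Amount REAL, SyncVersion INTEGER)"
)

def sync_batch(conn, rows, version):
    """Upsert one scheduled export batch of (RecId, Amount) rows into the target table."""
    conn.executemany(
        "INSERT INTO CustTrans (RecId, Amount, SyncVersion) VALUES (?, ?, ?) "
        "ON CONFLICT(RecId) DO UPDATE SET Amount = excluded.Amount, "
        "SyncVersion = excluded.SyncVersion",
        [(rec_id, amount, version) for rec_id, amount in rows],
    )
    conn.commit()

sync_batch(conn, [(1, 100.0), (2, 250.0)], version=1)  # initial export run
sync_batch(conn, [(2, 300.0), (3, 75.0)], version=2)   # next scheduled run

rows = conn.execute("SELECT RecId, Amount FROM CustTrans ORDER BY RecId").fetchall()
print(rows)  # [(1, 100.0), (2, 300.0), (3, 75.0)]
```

The sketch also hints at one of the process challenges: the target only ever reflects the last completed batch run, so data freshness is bounded by the sync schedule.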



EXPORT TO DATA LAKE : 

By contrast, the Export to Data Lake functionality is designed and built with robustness and simplicity in mind. It allows you to select raw tables for export to the data lake directly in the D365 F&O front-end application, which greatly simplifies the process of adding to and enhancing the downstream data warehouse and analytics environment.


At the core of the process is the Change Data Capture (CDC) process that constantly synchronizes data between the D365 F&O environment and a predetermined Azure Data Lake folder/directory structure.
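The consuming side of a CDC feed can be pictured as folding a stream of change records into the current table state. The sketch below is purely illustrative, assuming a simplified change record with an operation type, a key, and a row payload; it is not the actual Export to Data Lake change schema.

```python
# Hypothetical CDC consumer: fold a stream of change records into table state.
# Field names ("op", "RecId", "row") are assumptions for illustration only.
def apply_changes(state, changes):
    """Apply INSERT/UPDATE/DELETE change records to a dict keyed by RecId."""
    for change in changes:
        key = change["RecId"]
        if change["op"] == "DELETE":
            state.pop(key, None)
        else:  # INSERT or UPDATE both replace the row payload
            state[key] = change["row"]
    return state

state = {}
feed = [
    {"op": "INSERT", "RecId": 1, "row": {"Account": "1001", "Amount": 50.0}},
    {"op": "UPDATE", "RecId": 1, "row": {"Account": "1001", "Amount": 75.0}},
    {"op": "INSERT", "RecId": 2, "row": {"Account": "1002", "Amount": 20.0}},
    {"op": "DELETE", "RecId": 2},
]
print(apply_changes(state, feed))  # {1: {'Account': '1001', 'Amount': 75.0}}
```

Because the feed is applied continuously rather than on a batch schedule, the lake copy stays close to the source without the sync windows the BYOD approach imposes.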


In the Azure Data Lake, data is stored as CSV files, one folder per table, and changed data is continuously fed to the lake in the same structure.


Using the data in the Azure Data Lake as the starting point for your typical data warehouse process allows you to:


Align the architecture of your analytical environment with the recognized best practices for a Modern Data Architecture through the use of Azure Data Lake and Azure Synapse Analytics.

Get access to data in a much more timely manner, improving the accuracy of decision-making and the insight provided by operational analytics and reporting.

Provide internal data customers with near real-time access to data.

Create a robust and reliable, enterprise-grade method of extracting data from your ERP system, thereby minimizing unplanned downtime and improving how data is made available to internal customers.


Data in the lake is stored as CSV files in a folder structure maintained by the system. The folder structure is based on data organization in finance and operations apps. For example, you will see folders with names such as Finance, Supply Chain, and Commerce, and within these folders, you will see sub-folders with names such as Accounts Receivable or Accounts Payable. Further down the hierarchy, you will see folders that contain the actual data for each table. Within a table-level folder, you will see one or more CSV files as well as metadata files that describe the format of the data.
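The layout described above can be sketched as follows. This is a minimal local mock-up of the hierarchy (module, sub-area, table folder, CSV part files plus a metadata file); the specific folder names, file names, and metadata format here are assumptions for illustration, not the exact artifacts the Export to Data Lake feature emits.

```python
import csv
import json
import tempfile
from pathlib import Path

# Build a mock lake layout: Finance/AccountsReceivable/CustTrans/
# (names and metadata format are illustrative assumptions).
root = Path(tempfile.mkdtemp())
table_dir = root / "Finance" / "AccountsReceivable" / "CustTrans"
table_dir.mkdir(parents=True)

# Headerless CSV part file plus a metadata file describing the columns.
(table_dir / "part-0001.csv").write_text("1,1001,50.0\n2,1002,20.0\n")
(table_dir / "metadata.json").write_text(
    json.dumps({"columns": ["RecId", "AccountNum", "Amount"]})
)

def read_table(table_dir):
    """Read every CSV part file in a table folder, taking headers from the metadata."""
    columns = json.loads((table_dir / "metadata.json").read_text())["columns"]
    rows = []
    for part in sorted(table_dir.glob("*.csv")):
        with part.open(newline="") as f:
            rows += [dict(zip(columns, rec)) for rec in csv.reader(f)]
    return rows

table = read_table(table_dir)
print(table)
```

Because the data files are headerless, the metadata files are what make the table folders self-describing for downstream consumers.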


Once the features are enabled and working, the first significant difference between BYOD and Export to Data Lake lies in the way entities and tables are handled.


As mentioned, a key principle of building a data & analytics environment is to use the lowest-level data structures (tables) from the source system.


Conclusion : 

Data lake: 1

BYOD    : 0