Channel: fishing with FDMEE

BBT for FDMEE #1 - Target Accounts as Source in Mappings

Hola!
Working for customers, preparing training and conferences and, most importantly, Francisco Jr running around have kept me busy during the last few months.

One of the presentations I've been working on is "Black Belt Techniques for FDMEE" (aka BBT for FDMEE). I thought it would be interesting for people to learn how to meet requirements of different complexity with some techniques which, of course, aren't in the books :-)

Although I can't go into too much detail (I don't want to spoil the presentation), this is a foretaste of what you will enjoy if you attend Kscope17.

The Requirement
As you know, FDMEE can pull data from very heterogeneous source systems. Once data has been extracted, it has to be mapped into our target system (let's say HFM). Usually, the people responsible for maintaining mappings (aka mappers) are more familiar with the target system than with the source.
This is not always the case, but it's a common scenario when financial departments are split. How often do you hear "Not sure about this, we need to ask our ERP guy..."?

Another common scenario is that ICP/Custom dimension mappings use the source ERP account as a driver, either by importing the source account into the source ICP/Custom dimensions or by using Multi-dim/Conditional maps.

Have you ever asked the mapper: would it be easier for you to define ICP/Custom dimension mappings using the HFM account rather than the source account?

In my case, I always do. And what I found is that if they can define mappings using the target HFM account, maintenance tasks are much simpler and the number of mapping rules is greatly reduced.

Of course, the immediate question is: Can we do that? Yes we can. How? 

Lookup dimensions as a Bridge, that's the answer
Lookup dimensions can be used in FDMEE for different purposes. How can they help us to meet our requirement?
  • They don't have an impact on the target application
  • We can define a #SQL mapping to copy our target values into other source dimension values, including the lookup dimension itself
  • We can define the order in which the lookup dimension is mapped
Have a look at this flow: Source Account > Target Account > Lookup > Source C1 > Target C1
Did you get the flow above? Let's shed some light on it.

Let's start defining our lookup dimension "HFM Account":
In this example, we are going to use the lookup dimension to copy the target HFM account into the source lookup. For that purpose, we need to make sure that the lookup dimension is mapped after the Account dimension. As you can see above, the sequence number for the lookup is 2 while Account is assigned 1.

Besides, column UD5 of TDATASEG(_T) has been assigned to HFM Account (I could have used any other such as UD10, so I leave some UDx columns free in case we get new HFM custom dimensions).

Copying the Target HFM Account into other/lookup dimensions
As for any other dimension, we can create maps for lookup dimensions. Our main goal is to copy a target value into other dimensions, so why not use a SQL mapping?
The good thing about mapping scripts is that we can include multiple columns:
  • Set target HFM Account to "Lookup"
  • Set source HFM Account (UD5) to target Account (ACCOUNTX)
Done, our target account has been copied and it's now available as a source value in our lookup dimension.
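Since the mapping screenshot isn't reproduced here, this is a minimal sketch of what the #SQL script for the lookup dimension could look like. It assumes UD5/UD5X are the lookup columns as described above; FDMEE injects the script into an UPDATE of TDATASEG (SET UD5X = <script>), which is why the extra assignment can copy the target account into the source lookup column:

-- Sketch of a #SQL mapping script for the lookup dimension "HFM Account" (target column UD5X)
'Lookup',
UD5 = ACCOUNTX   -- copies the already-mapped target account (sequence 1) into the source lookup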

A picture is worth a thousand words
Let's create a multi-dimensional mapping to show how this works:
Mapping says: when source Product is "'001" and HFM Account is "Price" then target Product is "P7000_Phones"

Thanks to our lookup dimension, we can use the HFM account as a source. The mapping rule is clear and easy to create. There is no need to change the SQL map we created; that one is static.

What happens in Data Load Workbench?
  1. Source ERP account "10100" has been mapped to "Price"
  2. "Price" has been copied to source HFM Account
  3. Product has been mapped using source Product and HFM Account
At some point, I expect Oracle to enhance multi-dim maps to support target dimensions too, so let's see another example where this approach is quite useful as well.

Another example: write-back to SAP using SAP accounts as Source in Explicit maps
In this case, we are extracting data from HFM in order to generate report files for SAP (write-back).
The requirement is to map SAP IFRS using SAP Accounts. 

Following our approach:
  1. Map HFM Account to SAP Account
  2. Copy SAP Account to source SAP IFRS
  3. Map SAP IFRS based on SAP Accounts
Let's see the workflow:
As you can see, we have now copied the SAP Account into another dimension rather than the lookup. That allows us to create our Explicit mappings using SAP accounts in a very easy way.

Cloud-Friendly
One nice thing is that this solution is cloud-friendly. Data Management for the Cloud allows creating lookup dimensions and #SQL mapping scripts, so you can implement it even if you are not using my beloved on-premises FDMEE.

I'm going to leave it here for today. I hope you found this BBT useful.

More BBTs soon!

Replacing source files on the fly - Playing around with XML files

The following post came up after seeing that a lot of people in the FDMEE community were asking "how can we manipulate the source file and replace it with a new one on the fly?"
In other words, how can we replace the source file selected by the end user with a new file we create during the import process... or, let's be clear... how can we cheat on FDMEE? :-)

I thought it was a good idea to share with you a real case study we had with a customer. Their ERP system had a built-in process which extracted data in XML format. Hold on, can FDMEE import XML files? Not out of the box; yes, with some imagination and scripting.

The Requirement
As stated above, FDMEE does not support all kinds of formats out of the box. We usually have to ask our ERP admin (or IT) to create a file in a format that FDMEE can easily read, mainly delimited files such as CSV.

But what about Web Services like SOAP or REST? They mainly return XML or JSON responses. We need to be prepared for that in case we want our FDMEE integration to consume a WS. This is quite useful in FDMEE on-premise, as I guess Data Management for the Cloud will include "some kind of JSON adapter" sooner or later in order to integrate with non-Oracle web services.

And what about any other integration where the source files are in a format other than fixed/delimited?

Luckily, I had one of those: we get an XML file from IT and we need to convert it to CSV so we can import it through FDMEE.

High Level Solution
The main idea is to convert the selected XML file to a CSV file that FDMEE can understand. Now the questions are: where and how?

  • Where? It makes sense that we do the conversion before the actual import happens. BefImport event script?
  • How? FDMEE will be expecting a CSV file, how do we convert the XML to CSV? There are multiple methods: Python modules, Java libraries for XPath... I will show one of them.
The XML File
Image below doesn't show the real XML (confidentiality) but a basic one:
As you can see, data is enclosed in a <data> tag and lines are enclosed in <dataRow> tags. Besides, each dimension has a different tag.
As an extra for this post, I usually use the Notepad++ plugin XML Tools, which allows me to perform multiple operations including XPath queries:
Before we move into more details, what do you think would happen if we tried to import the XML file with no customization?
FDMEE rejects all records in the file. What were you expecting? That's the reason I'm blogging about this (lol)

Import Format, Location and DLR for the CSV File
In this case, our source type is File. However, I usually define instances of File when I want FDMEE admins/users to see the real source system (this is optional):
The Import Format (IF)  has been defined to import a semicolon delimited file having numeric data only (you can use any delimiter):
I'm going to keep it simple. One-to-one mapping between XML dimension tags and HFM dimensions:
The Data Load Rule is using the IF we just defined. As you may know, we can have one location with multiple DLRs using different IFs when source is File.

BefImport Event Script
The conversion will be done in the BefImport event script, which is triggered before FDMEE imports the file the end user selected when running the DLR.

We can split this script into two main steps:
  1. Create the new CSV file in the location's inbox folder
  2. Update database tables to replace the original file selected with the new one created in step 1
The final solution could be more sophisticated (create the CSV based on IF definition, parse null values, etc.). Today we will go for the simple one.

Let's dive into details.

Converting XML to CSV
There are multiple ways of converting an XML to CSV. To simplify, we could group them as:
  • Method A: parses the entire XML and convert to CSV
  • Method B: converts nodes into CSV lines as we iterate them
Method A would be good for small files. It's also quite useful if our XML structure is complex. However, for big files we may want to avoid loading the whole file into memory before converting it, which is more efficient. Therefore, I have decided to implement Method B. Among all the different options we have, I will show the event-style method using the xml Python module.

Which Python modules am I using?
  • xml module to iterate the XML nodes (iterparse)
  • csv module to create the CSV file
  • os to build the file paths
Let's import the modules:

# Import Section
try:
    from xml.etree.ElementTree import iterparse
    import csv
    import os
    fdmAPI.logInfo("Modules successfully imported")
except ImportError, err:
    fdmAPI.logFatal("Error importing libraries: %s" % err)

Then we need to build the different paths for the XML and CSV files. We will also create a file object for the CSV file. This object will be used to create the csv writer.
The XML file is automatically uploaded to the location's inbox folder when import begins. The CSV file will be created in the same folder.

# Get Context details
inboxDir = fdmContext["INBOXDIR"]
locName = fdmContext["LOCNAME"]
fileName = fdmContext["FILENAME"]
loadId = fdmContext["LOADID"]

# XML File
xmlFile = os.path.join(inboxDir, locName, fileName)
fdmAPI.logInfo("Source XML file: %s" % xmlFile)

# CSV file will be created in the inbox folder
csvFilename = fileName.replace(".xml", ".csv")
csvFilepath = os.path.join(inboxDir, locName, csvFilename)

# To avoid blank lines in between lines: the csv file
# must be opened with the "b" flag on platforms where
# that makes a difference (like Windows)
csvFile = open(csvFilepath, "wb")
fdmAPI.logInfo("New CSV file: %s" % csvFilepath)

The writer object for the CSV file must use a semicolon as delimiter so it matches our IF definition. We also enclose non-numeric values in quotes to avoid issues in case you define your import format as comma delimited:

try:
    # Writer
    writer = csv.writer(csvFile, delimiter=';', quoting=csv.QUOTE_NONNUMERIC)
except Exception, err:
    fdmAPI.logDebug("Error creating the writer: %s" % err)

Once the writer is ready, it's time to iterate the nodes and build our CSV. Before seeing the code, I'd like to highlight some points:
  • We just want to capture start tags, so we only capture the start event in iterparse
  • We can include event in the for statement for debugging purposes (we can print how the XML file is read)
  • Property tag returns the XML node name (<entity>...)
  • Property text returns the XML node text (<entity>EastSales</entity>)
  • We know amount is the last XML tag, so we will write the CSV line when it's found
  • The CSV writer generates the delimited line from the list of node texts (row)
try:
    # Iterate the XML file to build lines for the CSV file
    for (event, node) in iterparse(xmlFile, events=['start']):

        # Ignore anything not being dimension tags
        if node.tag in ["data", "dataRow"]:
            continue

        # For other nodes, get the node value based on its tag
        if node.tag == "entity":
            entity = node.text
        elif node.tag == "account":
            account = node.text
        elif node.tag == "icp":
            icp = node.text
        elif node.tag == "custom1":
            c1 = node.text
        elif node.tag == "custom2":
            c2 = node.text
        elif node.tag == "custom3":
            c3 = node.text
        elif node.tag == "custom4":
            c4 = node.text
        elif node.tag == "amount":
            amount = node.text

        # Build CSV row as a list (only when amount is reached)
        if node.tag == "amount":
            row = [entity, account, icp, c1, c2, c3, c4, amount]
            fdmAPI.logInfo("Row parsed: %s" % ";".join(row))
            # Output a data row
            writer.writerow(row)

except Exception, err:
    fdmAPI.logDebug("Error parsing the XML file: %s" % err)

The result of this step is the CSV file created in the same folder as the XML one:
If we open the file, we can see the 3 lines generated from the 3 XML dataRows:
Cool, first challenge completed. Now we need to make FDMEE import the new file. Let's move forward.

Replacing the Source File on the fly
FDMEE stores the name of the file to be imported in several tables. It took me some time and several tests to figure out which tables I had to update. Finally, I got them:
  • AIF_PROCESS_DETAILS: to show the new file name in Process Details page
  • AIF_BAL_RULE_LOADS: to set the new file name for the current process
  • AIF_PROCESS_PERIODS: the file name is also used in the table where FDMEE stores periods processed by the current process
To update the tables we need 2 parameters: CSV file name and current Load Id (Process Id)

# ********************************************************************
# Replace source file in FDMEE tables
# ********************************************************************

# Table AIF_PROCESS_DETAILS
sql = "UPDATE AIF_PROCESS_DETAILS SET ENTITY_NAME = ? WHERE PROCESS_ID = ?"
params = [csvFilename, loadId]
fdmAPI.executeDML(sql, params, True)

# Table AIF_BAL_RULE_LOADS
sql = "UPDATE AIF_BAL_RULE_LOADS SET FILE_NAME_STATIC = ? WHERE LOADID = ?"
params = [csvFilename, loadId]
fdmAPI.executeDML(sql, params, True)

# Table AIF_PROCESS_PERIODS
sql = "UPDATE AIF_PROCESS_PERIODS SET IMP_ENTITY_NAME = ? WHERE PROCESS_ID = ?"
params = [csvFilename, loadId]
fdmAPI.executeDML(sql, params, True)

Let's have a look at the tables after they have been updated:
  • AIF_BAL_RULE_LOADS
  •  AIF_PROCESS_DETAILS
  •  AIF_PROCESS_PERIODS
At this point, FDMEE doesn't know anything about the original XML file. Maybe some references in the process log, but nothing important.

Let's give it a try!
Ready to go. The FDMEE user selects the XML file when running the DLR:
Import is happening... and... data imported! XML with 3 dataRows = 3 lines imported.
Process details show the new file (although it's not mandatory to change it if you don't want to).

I'm going to leave it here for today. Processing XML files can be very useful, not only when we have to import data but also in other scenarios. For example, I'm sure some of you have had solutions in mind where the Intersection Check Report (FDMEE generates an XML file which is converted to PDF) had to be processed...

I hope you enjoyed this post and find it useful for your current or future requirements.

Have a good weekend!

FDMEE and PBJ, together hand in hand

Do you know Jason Jones? I guess you do, but in case you don't, I'm sure you have been playing around with some of his developments.

Personally, I've been following Jason for years. I remember what I thought when I attended one of his presentations at Kscope: "This guy really knows what he's talking about and has put a lot of effort into helping the EPM community. Definitely an EPM rock star."

One day, I found something quite interesting on his blog: PBJ. I thought it could be very useful to improve and simplify something that I had already built using a different solution. Why not use something he was offering to the community as open source? It was good for me and also good for him. I guess that seeing something you've built being useful for others must make you proud.
When I told him that I was going to integrate FDMEE on-prem with PBCS using PBJ, he was very enthusiastic. The library was not fully tested, so I made sure I provided continuous feedback. Some days ago he published a post about our "joint venture". Now it's my turn.

FDMEE Hybrid Integrations
We have already covered Hybrid integrations in some posts.
In a few words, FDMEE on-prem PSU200+ can be used to extract data from and load data to Oracle EPM Cloud Services (E/PBCS and FCCS so far).

I suggest you also visit John's blog to know more about hybrid integrations in FDMEE:
PBJ - The Java Library for PBCS
REST Web Services, what's that? I'll let you google and read about it. For us, REST is how the EPM Cloud Services open up to the external world. Oracle provides different REST APIs for the different EPM cloud services.

Luckily, Jason has gone one step further. He built a Java Library to use the REST API for PBCS:

PBJ is a Java library for working with the Planning and Budgeting Cloud Service (PBCS) REST API. It is open source software.

Why would we need PBJ in our solutions? Currently, hybrid integrations have some missing functionality, like working with metadata, among others. For example, we recently built a solution in FDMEE to load exchange rates from HFM into PBCS.

FDMEE offered seamless extracts from HFM. Rates are data in HFM but not in PBCS, where they are treated as metadata. We used the REST APIs for PBCS from FDMEE scripts, which worked perfectly. However, we built the code using modules available in Jython 2.5.1. That gave rise to much head-scratching... Working with HTTP requests and JSON was not an easy task.
We noticed everything was much easier from Python 2.7 (Jython 2.7), but there was nothing we could do here as we were stuck with what FDMEE can use :-(

TBH, we had a further ace up our sleeve: building our own Java library. But we delayed that development for different reasons. It was then that PBJ appeared :-)

Why reinvent the wheel? PBJ is open source and makes coding easier. We can collaborate with Jason on GIT and he is quite receptive to feedback.

Using PBJ from FDMEE Scripts
When I first started testing it, I noticed that there were multiple JAR dependencies which had to be added to the sys.path in my FDMEE script. That was causing some conflicts with other JARs used by FDMEE, so Jason came up with an uber-JAR:

uber-JAR—also known as a fat JAR or JAR with dependencies—is a JAR file that contains not only a Java program, but embeds its dependencies as well. This means that the JAR functions as an "all-in-one" distribution of the software, without needing any other Java code. (You still need a Java run-time, and an underlying operating system, of course.)

One of my concerns was the fact that FDMEE uses Java 1.6. That's usually a problem when using external Jars from FDMEE scripts. Luckily, PBJ is also built using Java 1.6 so the current versions of FDMEE and PBJ are good friends.

Before using any PBJ class we have to add the Jar to the sys.path which contains a list of strings that specifies the search path for modules:


# -------------------------------------------------------------------
# Add library path to sys.path
# -------------------------------------------------------------------
import sys
import os.path as osPath

# List of jar files
listPBJdep = ["pbj-pbcs-client-1.0.3-SNAPSHOT.jar"]

# Add jars
pbjDepPath = r"E:\PBJ\uber_jar"
# Debug
fdmAPI.logInfo("Adding PBJ dependencies from %s" % pbjDepPath)
for jar in listPBJdep:
    pbjPathJar = osPath.join(pbjDepPath, jar)
    if pbjPathJar not in sys.path:
        sys.path.append(pbjPathJar)
        fdmAPI.logDebug("PBJ dependency appended to sys path: %s" % jar)


Once the Jar file is added to the path we can import the different classes we want to use:


# -------------------------------------------------------------------
# Import section
# -------------------------------------------------------------------
from com.jasonwjones.pbcs.client.impl import PbcsConnectionImpl
from com.jasonwjones.pbcs import PbcsClientFactory
from com.jasonwjones.pbcs.client.exceptions import PbcsClientException
import time


We are now ready to connect to our PBCS instance.

Example: loading new Cost Centers into PBCS
I have created a custom script in FDMEE to keep it simple. The script basically performs the following actions:
  1. Import PBJ Jar file
  2. Connect to PBCS
  3. Upload a CSV file with new Cost Centers
  4. Execute a Job to add new Cost Centers
Our CSV file with new metadata is simple, just three new members:
PBJ has the class PbcsClientException to capture and handle exceptions. You can use this class in addition to Python's built-in exceptions:


try:
    # Your code...
    pass
except PbcsClientException, exPBJ:
    fdmAPI.logInfo("Error in PBJ: %s" % exPBJ)
except Exception, exPy:
    fdmAPI.logInfo("Error: %s" % exPy)


Connecting to PBCS
We just need 4 parameters to create a PBCS connection:


# -------------------------------------------------------------------
# Credentials
# -------------------------------------------------------------------
server = "fishingwithfdmee.pbcs.em2.oraclecloud.com"
identityDomain = "fishingwithfdmee"
username = "franciscoamores@fishingwithfdmee.com"
password = "LongliveOnPrem"


Note: I'm currently working with Jason on using an encrypted password instead of a hard-coded one. I'll update this post soon.
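The screenshot with the actual connection creation is not reproduced here, so this is a minimal sketch of how the connection object used below can be built with the PbcsConnectionImpl class imported earlier (the parameter order is taken from PBJ examples, so double-check it against the PBJ version you download):

# Create the PBCS connection object (assumed constructor: server, identity domain, user, password)
connection = PbcsConnectionImpl(server, identityDomain, username, password)
fdmAPI.logInfo("PbcsConnection object created")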

Creating the PBJ Client (PbcsClient)
PBJ can be seen as a PBCS client built in Java, so the next step is to create the client object:


# Create client
clientFactory = PbcsClientFactory()
fdmAPI.logInfo("PbcsClientFactory object created")
client = clientFactory.createClient(connection)  # PbcsClient
fdmAPI.logInfo("PbcsClient object created")

With the client object we can upload the file with new metadata to the PBCS Inbox/Outbox folder. This is done with uploadFile method:

# Upload metadata file to PBCS Inbox
csvFilepath = r"E:\FDMEE_CC\FDMEE_CostCenter.csv"
client.uploadFile(csvFilepath)
fdmAPI.logInfo("File successfully uploaded to PBCS Inbox")


The file is then uploaded to PBCS so the Job can process it:


Creating the Application object (PbcsApplication) 
Once the file is uploaded, we need to create an application object to import the new metadata. In my case, my PBCS application is DPEU.


# Set PBCS application
appName = "DPEU"
app = client.getApplication(appName)  # PbcsApplication


Executing the Job and Checking Job Status
I have created a Job in PBCS to upload new cost centers from a CSV file (PBJ also supports zip files):

One thing you need to know about the REST API is that jobs run asynchronously. In other words, we need to check the job status until it is completed (or a predefined timeout is reached).

So we first execute the job by calling the importMetadata method and then check the job status with the getJobStatus method. The status will be checked every 10 seconds while the job is running.

In order to check the job status, we need to know the job id. This is obtained with the getJobId method:


# Execute Job to import new metadata (Cost Center)
result = app.importMetadata("FDMEE_Import_CC")  # PbcsJobLaunchResult
fdmAPI.logInfo("Result: %s" % result)

# Check Job Status and loop while executing (may add timeout)
jobStatus = app.getJobStatus(result.getJobId())  # PbcsJobStatus
fdmAPI.logInfo("Job status: %s" % jobStatus)
statusCode = jobStatus.getStatus()
while (statusCode == -1):
    time.sleep(10)  # sleep 10 seconds
    jobStatus = app.getJobStatus(result.getJobId())  # PbcsJobStatus
    fdmAPI.logInfo("Job status: %s" % jobStatus)
    statusCode = jobStatus.getStatus()

# Show Message
if statusCode == 0:
    fdmAPI.showCustomMessage("New Cost Centers added!")
else:
    fdmAPI.showCustomMessage("Some errors happened! %s" % jobStatus)

Once the job is completed we can see the results in the PBCS Job console:
Job was executed with no errors. By navigating to the Cost Center dimension we can see the new hierarchy added:
I have also added some code to write debug entries in the FDMEE process log. This is always useful and can help you to find and fix issues easily:

Conclusion and Feedback
In this post, my main goal has been to show you how to use the PBJ library in FDMEE. I'm sure this can be very useful for implementing different requirements in hybrid integrations.

Jason did a great job and the ball is now in our court. The best way of contributing is to keep testing PBJ and providing feedback.
Let me highlight that PBJ is not his only project. There are a few others that you can check out on his site.

Enjoy FDMEE and PBJ together hand in hand!

Universal Data Adapter for SAP HANA Cloud

Some time ago I covered SAP HANA integration through the Universal Data Adapter (UDA). You can see details in the 3 parts I posted:
Now that everything is heading into the Cloud, why not play around with SAP HANA Cloud?

SAP HANA Cloud
When I first tried to get a SAP ECC training environment, I noticed that SAP offered nothing for free. Nowadays, things have changed a little bit. Luckily, they noticed that you need to offer a trial/training sandbox if you want people to get closer to you.
For those who want to be part of the game, you can visit their Cloud site.

Why the Universal Data Adapter?
SAP HANA Cloud brings something called the SAP Cloud Connector. Too complicated for me :-)
Luckily, I googled an easier way of extracting data from the Cloud. There is something called database tunnels, which allows on-premise systems to connect to the HANA DB in the cloud through a secure connection. It doesn't sound straightforward, but it didn't take too long to configure.

There are different ways of opening the tunnel. I have used the SAP Cloud Console Client, which you can download from SAP for free.

Once the database tunnel is opened from the FDMEE server(s) to the SAP HANA Cloud DB, the Universal Data Adapter can be used in the same way as with the on-premise HANA DB.
Please note that, as I'm not using a productive cloud environment, I had to open the tunnel via command line. This is fair enough to complete my POC.

My data in SAP HANA Cloud
I'm keeping this simple so I have a table in HANA Cloud with some dummy data:
Let's go through the configuration steps to bring that data into my application.

Importing data through FDMEE
As with any UDA configuration, we need to:
  • Configure ODI Topology for the physical connection, logical schema and context
  • Configure FDMEE (source system, source adapter, period mapping, etc.)
ODI
Data Server needs to point to the DB tunnel:
We use the same JDBC driver as for HANA on-premise:
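For reference, the Data Server settings typically look like this once the tunnel is open (the driver class is the standard SAP HANA JDBC driver; the local host/port below are illustrative, as the tunnel tells you the actual port it exposes):

JDBC Driver: com.sap.db.jdbc.Driver
JDBC URL:    jdbc:sap://localhost:30015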
As usual, I create a dedicated context for this new source system. That gives me more flexibility:

FDMEE
In FDMEE, nothing different. 
We first create the source system with the context we created in ODI:
Then, add the source adapter for the table we want to extract data from:
Time now to import the table definition, classify the columns and generate the template package in ODI:
As you can see above, FDMEE was able to reverse the HANA Cloud table, so I can now assign the columns to my dimensions and regenerate the ODI scenario:

I'm not going to show how to create a location and data load rule as I assume you are familiar with that process.

The final step is to run our data load rule and see how data is pulled from the SAP cloud and loaded into the on-premise HFM app through FDMEE :-)
I'm going to leave it here for today. As you can see, the Universal Data Adapter provides a simple and transparent way of connecting our on-premise system with heterogeneous source systems, including SAP HANA Cloud!

Cheers

Code Snippet: merging files by writing chunks

Do you have multiple source files that have to be merged so you can import them as a single one? This requirement is quite common, especially in automated data loads.

For example, our ERP system is exporting two files for Balance Sheet and Profit & Loss. We want to import them as a single file under the same POV. Merging the source files is the solution.

The approach taken is to merge the files by writing chunks into the target file. In this way, we avoid memory issues when handling large source files.

Let's have a look!

Merging a list of files by writing chunks


'''
Snippet:       Merge a list of files
Author:        Francisco Amores
Date:          23/05/2016
Blog:          http://fishingwithfdmee.blogspot.com

Notes:         This snippet can be pasted in any event script.
               Log entries are written to the FDMEE process log
               (...\outbox\logs\)

Instructions:  Set log level (global or application settings) to > 4
Hints:         Use this snippet to merge multiple single files into
               one. It writes chunks to avoid memory issues with
               large files.

FDMEE Version: 11.1.2.3 and later
----------------------------------------------------------------------
Change:
Author:
Date:
'''

# Initialize
srcFolder = r"C:\temp"
tgtFolder = r"C:\temp"
listSrcFilename = ["file1.txt", "file2.txt", "file3.txt"]
tgtFilename = "merge.txt"

# Import section
import os
import shutil

try:
    # Open target file in write mode
    tgtFilepath = os.path.join(tgtFolder, tgtFilename)
    tgtFile = open(tgtFilepath, "w")
    # Log
    fdmAPI.logInfo("File created: %s" % tgtFilepath)

    # Loop source files to merge
    for srcFilename in listSrcFilename:

        # File path
        filepath = os.path.join(srcFolder, srcFilename)
        # Log
        fdmAPI.logInfo("Merging file: %s" % filepath)
        # Open file in read mode
        srcFile = open(filepath, "r")
        # Copy source file into target
        # 10 MB per writing chunk to avoid loading big files into memory
        shutil.copyfileobj(srcFile, tgtFile, 1024 * 1024 * 10)
        # Add new line char in the target file
        # to avoid issues if source files don't have end-of-line chars
        tgtFile.write(os.linesep)
        # Close source file
        srcFile.close()
        # Debug
        fdmAPI.logInfo("File merged: %s" % srcFilename)

    # Close target file
    tgtFile.close()

except (IOError, OSError), err:
    raise RuntimeError("Error concatenating source files: %s" % err)

Code snippets for FDMEE can be downloaded from GitHub.

Universal Data Adapter - Making it Simple for Multiple Databases

Hi folks!

It has been a long time since my last post, but several events have happened lately. Anyway, sorry about that.

Before I dive into today's topic, I'd like to summarize my life in the last few weeks:
  • Kscope17 was a great event as always. It's a very good opportunity for those of us living on the other side of the pond. Meeting lots of people, partners and customers is always great. San Antonio was impressive. I spent one day visiting the city with my colleague Henri (The Finnish Hyperion Guy). We had sun and rain. And what do you do when it's raining? Shopping! In addition, I got the "EPM Data Integration Top Speaker" award! That was awesome. I didn't expect it, so I can only say thanks to the whole community.
  • Heading towards a large family. This is an easy one: if all goes fine, next year we will be two more in the family :-)
  • New apartment. I've been very busy assembling IKEA furniture. For those who would like to visit Malaga (Spain), we bought a new apartment for rental. Feel free to visit it!
OK, now that you know what I have been doing...time for some FDMEE content!

Universal Data Adapter (UDA)
If you are not familiar with the UDA yet, it may be a good idea to visit my previous entries about it:
The Requirement - Multiple Source Databases with same Source View Layout
One of the drawbacks of the UDA is the configuration and maintenance. That's something we cannot change; it has been designed like that.

Why configuration?
UDA requires configuration in both FDMEE and Oracle Data Integrator (ODI).

In FDMEE, UDA requires typical artifacts plus some specific ones
  • Source System with an ODI context assigned
  • Source Adapter
  • Source Period Mappings
  • Import Formats
  • Location and Data Load Rule
In ODI, UDA requires new artifacts to be created in addition to the ones imported during initial configuration (Projects and Model Folders)
  • Manually in ODI
    • Data Server
    • Physical Schema
    • Context
  • Generated in ODI from FDMEE
    • Datastore for Table/View definition
    • Package and Interface
    • Scenario for the Import Format
Why maintenance? There are many events that require re-generating the ODI objects created from FDMEE. I'm not going to list all of them, but I will explain the main ones.
  • Migrating across environments. LCM doesn't migrate the ODI objects. Besides, you can't do a native ODI export/import: FDMEE has been designed to have the same ODI repository IDs across environments, so the export/import of objects will fail.
  • Applying patches. Some patches may require re-importing some default ODI objects. This will probably delete the objects you generated from FDMEE.
  • Changes in Tables/Views. Think about adding a new column to a Table/View. You have to re-import the table definition, regenerate the package and interface, adjust the Import Format and regenerate the ODI scenario.
What about multiple databases? All the configuration mentioned above multiplies as well. Why? Everything is chained. Each Source System has an ODI Context assigned. If you have multiple sources of the same database type, you can't use the Global context, as it can only be assigned once to the Logical Schema UDA_MSSQL. Then, as you need multiple Source Systems and Source Adapters, you will need multiple Import Formats as they are assigned to adapters, and so on...

I know what you are thinking... a lot of manual work to do!

Today, I will show you the solution we implemented for an integration with Microsoft Navision (SQL Server). The customer had 30 different source databases on the same server (including different collations).

The Solution - Moving complexity to the DB makes UDA simpler!
As part of the Analysis & Design workshop, we explained the drawbacks of having 30 source databases. They understood immediately that we had to find another solution.
For 2 databases, the solution architecture would look like this:
Then I told them that I could simplify the design a lot, but we needed some additional SQL work. They wanted things to be simple in FDMEE, so they were happy to go for it. Actually, they did :-)

Basically, my advice was to move the complexity to SQL. That would make the UDA configuration and maintenance simpler.

In a nutshell:
  1. Create a DB Link (Oracle)/Linked Server (SQL Server) from FDMEE database server to the source database server
  2. Create a view in the FDMEE database. This view has an additional column for the source database. It queries the source views using remote queries (e.g. OPENQUERY in SQL Server), which perform quite well as they leverage the source DB engine (see the sketch after this list)
  3. Configure UDA in ODI
  4. Configure UDA FDMEE
  5. In addition to the required parameters, define an additional one for the column having the source database
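To illustrate step 2, here is a hedged T-SQL sketch of what such a view could look like for two Navision databases; the linked server, database and view names (NAV_SERVER, NAV_DB1/NAV_DB2, V_GL_DATA) are purely illustrative:

-- Hypothetical view in the FDMEE database merging two source databases via a linked server
CREATE VIEW DBO.V_UDA_NAV_GL AS
SELECT 'NAV_DB1' AS SOURCE_DB, T.*
FROM OPENQUERY(NAV_SERVER, 'SELECT * FROM NAV_DB1.dbo.V_GL_DATA') AS T
UNION ALL
SELECT 'NAV_DB2' AS SOURCE_DB, T.*
FROM OPENQUERY(NAV_SERVER, 'SELECT * FROM NAV_DB2.dbo.V_GL_DATA') AS T;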

Note that if the source databases are on different servers, the solution would be slightly different but still doable.

Also, a similar approach could be taken if you have multiple views with different layouts. You could merge them into one with a common layout.

I know you may have concerns about performance, but if the views are correctly designed and network delay is not a bottleneck, everything should be fine. Indeed, ODI usually executes the SQL extract queries in the source database. Don't forget to play a bit with the Data Server settings to get the best performance.

I hope you found this solution interesting as it may help you to simplify your design.

Have a good summer!

FDMEE & Java APIs, more than friends

Hi folks!

Finally the day arrived.

Some years ago, FDMEE was introduced into our lives with a lot of nice new functionality. Jython is probably one of the most important additions. Why? That's an easy one. With Jython, FDMEE opened itself to the Java world.
If you haven't read it yet, there is a must-read chapter about Jython and Java integration on Jython's site. I'd like to highlight the following phrase:

Java integration is the heart of Jython application development... The fact is that most Jython developers are using it so that they can take advantage of the vast libraries available to the Java world, and in order to do so there needs to be a certain amount of Java integration in the application.

Most of the key Oracle EPM and non-EPM products have their own Java API (JAPI). During this blog series, I'm going to focus on the EPM ones. In a nutshell, integrating FDMEE with the Java APIs of products like Essbase or HFM gives us the freedom to implement additional functionality and enhance our EPM integration solutions.

Using Java from within Jython
One of the goals of Jython is to make using existing Java libraries straightforward. It's as simple as using external Jython modules (PY files) within a Jython script:
  1. Import the required Java classes and use them directly in your code
  2. Call the Java methods or functions you need
What about types? Well, here is where the good stuff comes in. Usually, you don't need to worry about them at all. There is automatic type coercion (conversion of one type of object to a new object of a different type with similar content), both for the parameters passed and for the value returned by the Java method.

Basically, when Jython gets a Java numeric type or a Java string, it automatically converts it into one of its primitive types.

Let's have a look at the following example:
As you can see, the ArrayList object (which is an object from the Java Collections Framework) has been coerced into a Jython list. We can use methods from the ArrayList class (like add) and iterate the object as if it were a proper Jython list.
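Since the screenshot with the original example isn't reproduced here, this is a minimal Jython sketch of the same idea:

# Minimal sketch of the coercion described above
from java.util import ArrayList

fruits = ArrayList()      # a java.util.ArrayList instance
fruits.add("apple")       # Java method calls
fruits.add("banana")
fruits.add("orange")

# Jython lets us treat it like a native list
for fruit in fruits:
    print fruit

print len(fruits)         # size() is exposed through len() -> 3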

We will see more examples for coercion when using the Essbase and HFM JAPIs.

BTW, what is Foo?

Using Java from within FDMEE Scripts (Jython 2.5.1 and Java 1.6)
When writing your Jython scripts, don't forget that:
  • The latest FDMEE version (11.1.2.4) uses Jython 2.5.1
  • FDMEE uses Java 1.6 (as does the rest of the EPM system)
In other words, you are restricted to classes available in Java 1.6. Also, if you use 3rd-party Java libraries, they must be compatible with Java 1.6.

Regarding the different approaches to implementing the Jython script:
  • Build the custom functionality in a Java library that you can later import into your scripts
  • Write the Java code as Jython within your script
Option 1 requires deeper knowledge of Java programming. I'd recommend this option only if you know Java programming and your customization is a good candidate for being reused in other implementations. On the other hand, option 2 is quicker and probably a better option for one-time customizations.

Essbase Java API
FDMEE comes with functionality that is commonly used:
  • Extract data 
  • Run calculation scripts before/after loading data
  • Pass parameters to the scripts
  • Create drill regions
  • Among others...
But, what about?
  • Run calculation scripts before extracting data
  • Validate target data before it is loaded
  • Load new metadata before loading data
  • Execute MaxL scripts
  • Using substitution variables in FDMEE artifacts like period mappings 
  • Among others...
I wish the product provided this functionality, but unfortunately it doesn't. However, it provides a powerful scripting engine which enables us to extend it.

Going back to the list above, you have probably met some of these requirements in one of your projects. What did you do? Create a MaxL script and run it from a script using the subprocess module? Or did you leverage the Essbase JAPI?

That probably depends on many other factors... Do we have time for implementation? Do we know how to do it? Do they have existing batches doing the work?...

To me, using the Essbase JAPI is not only about having seamless integration but also about capturing errors in an elegant way, something you can hardly get by running batches from external scripts.

Spoiler!!! See how simple it would be to execute a MaxL script or statement:
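The screenshot isn't reproduced here, so this is a hedged Jython sketch of the idea using the Essbase JAPI (server, credentials, provider URL and the MaxL statement are placeholders; check the signOn signature against your JAPI version):

from com.essbase.api.session import IEssbase

ess = None
olapSvr = None
try:
    # Create the JAPI instance and sign on via the provider services URL
    ess = IEssbase.Home.create(IEssbase.JAPI_VERSION)
    olapSvr = ess.signOn("admin", "password", False, None,
                         "http://epmserver:13080/aps/JAPI", "EssbaseCluster-1")
    # Open a MaxL session and execute a statement
    maxl = olapSvr.openMaxlSession("FDMEE MaxL Session")
    maxl.execute("alter database Sample.Basic unlock all objects")
    fdmAPI.logInfo("MaxL statement executed")
except Exception, err:
    fdmAPI.logError("Error executing MaxL: %s" % err)
finally:
    if olapSvr is not None and olapSvr.isConnected():
        olapSvr.disconnect()
    if ess is not None and ess.isSignedOn():
        ess.signOff()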
I will cover more details about using the Essbase JAPI and some examples in upcoming posts.

HFM Java API
What about HFM?
  • How can we extract Cell Texts?
  • Extract and Load Metadata?
  • Translate data before extracting it?
  • Run custom consolidation/calculation/translation logic?
  • Among others
HFM also has a JAPI! Actually, in the same way as with the Essbase integration, FDMEE uses these APIs behind the scenes.

Spoiler again!!! extracting cell texts:
Other Java APIs
Besides the HFM and Essbase JAPIs, there are other products and components with their own APIs. Some of them, such as LCM's, are documented; some others are not, for example the OLU (Outline Load Utility) API.

In the next posts, I will show some examples of customizations implemented with the Essbase and HFM APIs. If you can't wait, my colleague John has already published a very cool one.

I haven't forgotten about Planning. It does not have a published Java API, but you should have a look at its REST API.

Take care!

Code Snippet: Executing PL/SQL Stored Procedures with IN/OUT Parameters

Do you need to execute a stored procedure from your script? Maybe to populate the Open Interface Table? You can make use of the FDMEE API method executePLSQL, but only if the stored procedure does not return OUT parameters. If you need to return any value, then you can use the Java classes for SQL.

The following snippet shows how to execute a procedure remotely through a dblink. Executing a procedure in the database you connect to follows the same approach.

Let's have a look!

Executing a stored procedure with IN/OUT parameters

'''
Snippet:       Execute a PL/SQL stored procedure with IN/OUT params
Author:        Francisco Amores
Date:          24/11/2017
Blog:          http://fishingwithfdmee.blogspot.com

Notes:         This snippet can be pasted in any event script.
               Log entries are written to the FDMEE process log
               (...\outbox\logs\)

               This snippet executes the stored procedure via dblink.
               Local stored procedures are executed in a similar way.

Instructions:  Set log level (global or application settings) to > 4
Hints:         You can also implement code to get db connection details
               instead of hard-coding them.

FDMEE Version: 11.1.2.3 and later
----------------------------------------------------------------------
Change:
Author:
Date:
'''
try:
    # Import Java libraries
    import java.sql.SQLException as SQLException
    import java.sql.DriverManager as SQLDriverMgr
    import java.sql.CallableStatement as SQLCallableStmt
    import java.sql.Types as SQLTypes
    import java.sql.Date as SQLDate  # needed for DATE parameters
    import java.text.SimpleDateFormat as SimpleDateFormat

    # Note: import any other class you need

except ImportError, err:
    errMsg = "Error importing libraries: %s" % err
    fdmAPI.logFatal(errMsg)
    raise RuntimeError(errMsg)

# ----------------------------------------
# Connect to FDMEE or External database
# ----------------------------------------

# Connection details
dbConn = "the jdbc url"
dbUser = "the db user"
dbPasswd = "the db password"

try:
    # Get connection to database for callable statements
    conn = SQLDriverMgr.getConnection(dbConn, dbUser, dbPasswd)
    fdmAPI.logInfo("Connected to the database")
except SQLException, ex:
    errMsg = "Error executing SQL: %s" % (ex)
    raise RuntimeError("Error generated from FDMEE script\n%s" % errMsg)

# ----------------------------------------
# Execute PL/SQL Stored Procedure
# ----------------------------------------

# Get dblink
dbLink = "your dblink"

# PL/SQL Block Code (via DBLINK)
'''
Procedure implemented as:
PROCEDURE CARGA_TABLA(P1 OUT VARCHAR2,
                      P2 OUT NUMBER,
                      P3 IN NUMBER,
                      P4 IN DATE,
                      P5 IN VARCHAR2,
                      P6 IN VARCHAR2,
                      P7 IN VARCHAR2,
                      P8 IN VARCHAR2,
                      P9 IN VARCHAR2)
'''

# Each ? represents one stored proc parameter
# Ex: schema.package.storedproc if your stored proc is in a package
plSqlBlock = "{CALL schema.package.storedproc@%s(?, ?, ?, ?, ?, ?, ?, ?, ?)}" % dbLink

# Get parameters for the statement
p3 = "valuep3"
# Parameter p4 must be passed as java.sql.Date
sdf = SimpleDateFormat("dd/MM/yyyy")
dtParsed = sdf.parse("date value")
p4 = SQLDate(dtParsed.getTime())
p5 = "this param is passed as null"
p6 = "valuep6"
p7 = "valuep7"
p8 = "valuep8"
p9 = "valuep9"

# Prepare and execute call
try:
    # Callable Statement
    callableStmt = conn.prepareCall(plSqlBlock)
    fdmAPI.logInfo("Callable statement successfully prepared")

    # Set IN parameters
    callableStmt.setBigDecimal("p3", p3)
    callableStmt.setDate("p4", p4)
    callableStmt.setNull("p5", SQLTypes.VARCHAR)  # NULL
    callableStmt.setString("p6", p6)
    callableStmt.setString("p7", p7)
    callableStmt.setString("p8", p8)
    callableStmt.setString("p9", p9)
    fdmAPI.logInfo("Parameters IN set")

    # Register OUT parameters
    callableStmt.registerOutParameter("p1", SQLTypes.VARCHAR)
    callableStmt.registerOutParameter("p2", SQLTypes.NUMERIC)
    fdmAPI.logInfo("Parameters OUT registered")

    # Execute PL/SQL Stored Procedure
    result = callableStmt.execute()
    conn.commit()
    fdmAPI.logInfo("Stored procedure successfully executed: %s" % result)

    # Get OUT parameters
    p1 = callableStmt.getString("p1")
    p2 = callableStmt.getInt("p2")

    # Log OUT parameters
    fdmAPI.logInfo("OUT p1: %s" % p1)
    fdmAPI.logInfo("OUT p2: %s" % p2)

except (Exception, SQLException), ex:
    errMsg = "Error when executing the stored procedure: %s" % ex
    fdmAPI.logFatal(errMsg)
    if len(errMsg) <= 1000:
        fdmAPI.showCustomMessage(errMsg)
    raise RuntimeError(errMsg)

# ----------------------------------------
# Close connection
# ----------------------------------------
if callableStmt is not None:
    callableStmt.close()
if conn is not None:
    conn.close()
fdmAPI.logInfo("DB connection closed")



Code snippets for FDMEE can be downloaded from GitHub.

Data Protection with Multiple Global Application Users

Dear colleagues,
I'm back! It has been a hard few months with a lot of work, especially at home, where the large family has required 200% of my attention.

Many people asked me where I was, although those who know me know that I have not been to the Caribbean :-)

That being said...
We all know that the Cloud continues to grow with great force and that the role of data integration is fundamental in the architecture of any solution. Other people, including my colleague John Goodwin, have covered many topics of maximum interest. I strongly recommend visiting the different blogs out there, although I'm sure you already do :-)

Today, I come back to show you a solution that we have implemented for multiple customers. How many of you have had data protection problems in HFM when loading from FDMEE? Do you know all the solutions? I'm not going to cover all of them, but I will introduce one that is not fully documented.

As usual, I'm not stating this is the best solution for your requirement. I just want to share a new idea that I found very useful in some of my implementations.

The requirement
Let's start with a common question from customers:
When we load HFM data with FDMEE in Replace mode, some accounts are wiped out. Controllers type them, so we need to protect that data. Can we?
Then, you start thinking about different approaches.
  • Maybe, we can use Merge mode instead...
  • Or Replace by Security...
  • FDMEE has built-in functionality for Data Protection...
  • Etc.
During your analysis, all options should be evaluated. You need to understand the pros and cons and if they have any impact on existing integration flows.

Let's now take that requirement to another level of complexity: multiple FDMEE interfaces loading different sets of data for the same HFM sub-cube (Entity, Scenario, Year, and Value). For example:
  • ERP data / Supplemental data
  • Statutory accounts / IFRS16 accounts
The two examples above have something in common. If you execute the Data Load Rules (same category) in Replace export mode, each DLR will delete the data of the other one. This is how the Replace load method works in HFM: data for the sub-cube is deleted before the new data set is loaded. For example:
  1. DLR GL_DATA loads actual data in Replace mode for Entity NY, Period Mar-2019
  2. DLR IFRS16_DATA loads actual data in Replace mode for Entity NY, Period Mar-2019
There can be many different sequences and scenarios, but if we focus on the one above, the second execution will delete all the data previously loaded for NY/Mar-2019 (assuming there is no data protection mechanism).

Can I use HFM Data Protection functionality available in FDMEE?
I must admit that I have not been a big fan of the Data Protection functionality. The main reasons are the way it works, its limitations and the performance impact it can have. Basically, FDMEE (and legacy FDM Classic too) extracts the data to be protected and then appends it to the DAT file that will be loaded.
For the sub-cube being loaded, you can protect either all data that has a specific member name in any dimension of the data intersections, or all data that does not have that member.
Some of the limitations I refer to... you cannot protect multiple dimension members OOTB. Also, protecting data with the operator "<>" can result in FDMEE extracting big volumes of data. Definitely something to be taken into account.
Therefore, yes, data protection is an option to be evaluated but, in addition to understanding how it works, you need to take it into account in your HFM application design. If you want to protect manual data inputs or different data sets loaded through FDMEE, you may want to consider using an HFM custom dimension (typically Custom4), so the different data sets use different custom dimension members. Each FDMEE interface should then protect all data different from the custom protection member it is loading into. As you can imagine, simply extracting the HFM data requires time and resources. Also, several re-loads can happen depending on our solution design.
BTW, I haven't mentioned Cell Texts, but they are included in the data protection process.

What about Merging data instead of Replacing?
If you are thinking that changing your load method can be the solution for your data protection issue, you may be introducing more issues. As we usually say, the cure is worse than the disease. E.g. if you have re-allocations in your data, you might be protecting your data but also leaving wrong data in the system. There is a well-known custom solution called Smart-Merge, but it is not in the scope of this post, so I leave that for your research.

Replace by Security Load Method and Global Application Users
Let's have a look at a load method other than Merge and Replace. The HFM documentation defines Replace by Security as:
In a nutshell: write and clear only the data cells that you have access to. This seems to be a good way of protecting data, doesn't it? As you know, FDMEE leverages the HFM load methods, so Replace by Security is available when exporting data:
So if the user loading data has specific security classes assigned, the secured data will be protected, as it will not be cleared. Good, we are on the right path.

Now let's go with users. Let's say the user loading into HFM is an admin user. Restricting access for the admin might not be a good idea. The same applies to non-admin users: assigning security classes for data protection requirements only might conflict with their role.

What about having specific Shared Services native users for data protection only? That would fit, but we don't want everyone using the same user in FDMEE, as that is not compatible with other FDMEE features such as Security by Location.
Luckily, FDMEE has something called Global Application Users. This feature is not new, as it was already available in legacy FDM Classic as the "Global Logon" option in the target adapter.

In FDMEE, this option is available in the Target Application Options:
Therefore, if we join the two concepts, we have global users loading data in Replace by Security mode. Still on the right path. However, this option is only available at target application level. We cannot override it at Data Load Rule level as we can with other target options:
One target application can only have one global user assigned. This is not good if we want multiple global users for multiple data flows.
You may come up with the option of registering target applications multiple times, which is possible from FDMEE 220. In my opinion, that's not a good idea, as most of the FDMEE artifacts are defined at target application level and you would end up duplicating them as well.

This post makes sense when we must find a solution to work around this limitation :-)

Multiple Global Application Users at Data Load Rule level
If we cannot have global users at DLR level OOTB, how can we get them?
Global Users, like many other options, are target application options. Therefore, as we already showed in other blog posts, they are stored at run-time in the table AIF_BAL_RULE_LOAD_PARAMS when the DLR is executed. The technical name for the global user option is GLOBAL_USER_FOR_APP_ACCESS:
Then, this little hack is easy. We can update the table at run-time based on the logic we define to get the right global user. The script below inserts the global application user for the current DLR only if it does not already exist (this is done in case you want to place the script in different event scripts, as shown in the section "At which step do we update the global user?").
As you can see, the SQL statement above includes an INSERT/UPDATE combination. Why? Because if you define a global user at Target Application level, there will be an existing line in the table, so we just have to update it. If you don't define it, we have to insert it. It also avoids issues if you want to use the global user in the Validate step to run the intersection check report and then try to insert it again in the Load step (as long as you execute the DLR in one go).

The code I show is valid for Oracle Database, but it is similar for SQL Server. You might also consider a MERGE statement. Whatever you implement, the result must be the same :-)
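Since the screenshot of the script isn't reproduced here, this is a hedged Jython sketch of the idea for Oracle (it assumes AIF_BAL_RULE_LOAD_PARAMS stores one row per parameter with LOADID, PARAMETER_NAME and PARAMETER_VALUE columns; derive the global user from your own logic):

# Hedged sketch: set the global user for the current process at run-time
loadId = fdmContext["LOADID"]
globalUser = "IntegrationUser_IFRS16_FDM"  # e.g. derived from the DLR or category

# Update the parameter if it already exists (global user defined at target app level)
sqlUpd = """UPDATE AIF_BAL_RULE_LOAD_PARAMS
            SET PARAMETER_VALUE = ?
            WHERE LOADID = ? AND PARAMETER_NAME = 'GLOBAL_USER_FOR_APP_ACCESS'"""
fdmAPI.executeDML(sqlUpd, [globalUser, loadId], True)

# Insert it if it does not exist yet
sqlIns = """INSERT INTO AIF_BAL_RULE_LOAD_PARAMS (LOADID, PARAMETER_NAME, PARAMETER_VALUE)
            SELECT ?, 'GLOBAL_USER_FOR_APP_ACCESS', ? FROM DUAL
            WHERE NOT EXISTS (SELECT 1 FROM AIF_BAL_RULE_LOAD_PARAMS
                              WHERE LOADID = ? AND PARAMETER_NAME = 'GLOBAL_USER_FOR_APP_ACCESS')"""
fdmAPI.executeDML(sqlIns, [loadId, globalUser, loadId], True)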

At which step do we update the global user?
The key question is: when do we update the global user in our process? The answer depends on your specific requirements, but the table below summarizes the main scenarios:
How do we configure the global users?
It depends on the requirements. For our example, we need two global users:
  • User for GL data: IntegrationUser_INP_FDM
  • User for IFRS16 data: IntegrationUser_IFRS16_FDM
We have to perform the following steps:
  1. Create the security classes in HFM: C4_INP_FDM and C4_IFRS16_FDM
  2. Activate the security class for Custom4 (App settings in HFM metadata)
  3. Assign the security classes to the Custom4 members: INP_FDM and IFRS16_FDM
  4. Assign role Default to the global user so he shows up in the matrix below
  5. Assign user access to the security classes
Let's see an end-to-end example
We will first show the data protection issue:
  1. Load GL data first in Replace mode
  2. Input data manually
  3. Load IFRS16 data in Replace mode.
To show how the solution works, we will repeat the process but load IFRS16 data in Replace by Security mode with the new global user IntegrationUser_IFRS16_FDM.
  1. As a starting point, we load the GL data with Replace Mode. Data is loaded to Custom4 member INP_FDM

  2. The HFM data form shows the right data being loaded:
  3. After loading GL data, we input value 500 manually (Custom4 member INP):
  4. Finally, we load IFRS data in Replace mode:
As you can see, all data previously loaded has been wiped out after loading IFRS16 data in Replace mode. Therefore, neither the manual data nor the GL data has been protected.
Now we have a data protection issue and a solution for it. Once we apply our event script with the code to update the global user at run-time, the data looks good after loading in Replace by Security mode. The three values have been protected as expected:
Conclusion
As usually happens with any custom solution, you need to take different considerations into account:
  • If you run the data load rule manually, you can still select Replace mode, as there is no functionality to hide load methods. To avoid that issue, you can always have a BefLoad script which checks the export mode before loading data (see the sketch after this list).
  • As you are loading with the global user, that's the user you will see in the HFM Audit.
  • The solution applies to many different requirements. For example, we had a customer with different security requirements at HFM scenario level. In that solution, we used global users at category level.
  • Consider location security. If user A has access to location A, note that the HFM security applied will be that of the global user and not of user A. Therefore, if user A tries to load data for an entity that he does not have access to in HFM but the global user does, data will be loaded for that entity. As a good practice, I usually build a control when locations/data load rules are defined for unique entities. In this way, I prevent users from loading data for other entities, especially when they can manipulate the source files :-)
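For the first bullet, this is a hedged sketch of what such a BefLoad check could look like (it assumes the export mode and rule name are exposed as fdmContext["EXPORTMODE"] and fdmContext["RULENAME"]; log the fdmContext content to confirm the exact value strings in your version):

# Hedged sketch for a BefLoad event script: block plain Replace for a protected DLR
exportMode = fdmContext["EXPORTMODE"]
dlrName = fdmContext["RULENAME"]

if dlrName == "IFRS16_DATA" and exportMode == "Replace":
    errMsg = "Replace mode is not allowed for %s. Please use Replace by Security." % dlrName
    fdmAPI.showCustomMessage(errMsg)
    raise RuntimeError(errMsg)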
Today, I tried to show you a new solution for data protection that leverages global users, HFM security classes and Replace by Security mode.

I hope you found it useful. Now it is your turn to be creative.

That's all, folks.

Kscope19 - there we go!

Every year it gets more and more complicated to get a slot in this amazing event.
I'm more than happy to be there again.

Sharing knowledge is something I really enjoy, so I will try to do my best as always.
This is the summary for you:
We all know that the administration guides are a good starting point to learn how to use a product. But, is everything written in them? Dark arts are not taught in books. I have been writing my own magic potions for many years, and it is a pleasure to be able to show you some dark magic in FDMEE.

The cloud is already a reality which increasingly takes center stage in our data integration solutions. That's why we won't forget about it, and we will share the best tips and tricks with you. You will also see the new Data Integration SUI!

If you would like to discover the best spells for your data integration requirements, this is undoubtedly your platform 9¾. We wait for you!



Looking forward to seeing you in Seattle!

Code Snippet: Getting ODI Details for Source/Target System

There are different scenarios where you might need to interact with your source system. For example, say we want to perform a delta extract. For that purpose, we have a timestamp column in our source table/view, and we need to execute a query against the source system to get only the entities which have data generated after a specific timestamp.
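As a rough illustration of that delta-extract idea, the query pushed to the source system could look like the sketch below. The view, column and timestamp format are made up for the example:

# Hypothetical delta-extract query against a source view (all names are illustrative only)
lastExtractTs = "2019-05-01 00:00:00"   # e.g. read from a control table or a parameter

sqlDeltaEntities = """SELECT DISTINCT ENTITY
                        FROM V_GL_BALANCES
                       WHERE LAST_UPDATE_TS > TO_TIMESTAMP(?, 'YYYY-MM-DD HH24:MI:SS')"""
params = [lastExtractTs]
# Executing this query requires a JDBC connection to the source system,
# which is exactly what the ODI details snippet below helps you build.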

Also, you may want to export your data to a target table. You will need to connect and execute an insert statement.

For source systems, the connection details are stored in the ODI tables of the FDMEE database; basically, what you set up in the ODI Topology. If you use a relational database as a target system, you may also want to store its connection details in ODI to avoid hard-coding and make the solution more dynamic.

This version uses the current context you set up in the source system page. We also wanted to have the logical schema as a parameter.

Let's have a look!

Here is the function, built around a SQL query that gets the ODI details for a specific source system:


def get_odi_source_details(fdmAPI, fdmContext, sourceSystemName, logicalSchema):
    '''
    Snippet:       Get ODI Source details for Source System Name
    Author:        Francisco Amores
    Date:          21/05/2019

    Parameters:
    - fdmAPI: FDMEE API object
    - fdmContext: FDM Context object
    - sourceSystemName: Source System name
    - logicalSchema: ODI Logical Schema

    Notes: This snippet can be pasted in any event script. The function
           returns a map object with the different properties

    FDMEE Version: 11.1.2.3 and later

    ----------------------------------------------------------------------
    Change:
    Author:
    Date:
    '''

    # *******************************************
    # Import section
    # *******************************************
    from java.sql import SQLException

    # *******************************************
    # Get ODI Details for Source System
    # *******************************************

    # log
    logMsg = "Getting ODI details for Source System %s" % sourceSystemName
    fdmAPI.logInfo(logMsg)

    # join the FDMEE source system with the ODI topology tables (context,
    # logical/physical schema and data server) to get the connection details
    sqlOdiDetails = """SELECT
                         S.SOURCE_SYSTEM_NAME,
                         C.CONTEXT_CODE AS ODI_CONTEXT,
                         CO.CON_NAME AS DATA_SERVER_NAME,
                         L.LSCHEMA_NAME AS LOGICAL_SCHEMA,
                         P.SCHEMA_NAME AS PHYSICAL_SCHEMA,
                         TXT.FULL_TXT AS JAVA_URL,
                         CO.JAVA_DRIVER,
                         CO.USER_NAME,
                         CO.PASS AS ENCRYPTED_PWD
                       FROM
                         AIF_SOURCE_SYSTEMS S INNER JOIN SNP_CONTEXT C
                           ON S.ODI_CONTEXT_CODE = C.CONTEXT_CODE
                         INNER JOIN SNP_LSCHEMA L
                           ON L.LSCHEMA_NAME = ?
                         INNER JOIN SNP_PSCHEMA_CONT PC
                           ON PC.I_CONTEXT = C.I_CONTEXT AND
                              PC.I_LSCHEMA = L.I_LSCHEMA
                         INNER JOIN SNP_PSCHEMA P
                           ON P.I_PSCHEMA = PC.I_PSCHEMA
                         INNER JOIN SNP_CONNECT CO
                           ON P.I_CONNECT = CO.I_CONNECT
                         INNER JOIN SNP_MTXT TXT
                           ON CO.I_TXT_JAVA_URL = TXT.I_TXT
                         LEFT OUTER JOIN SNP_CONNECT_PROP CP
                           ON CP.I_CONNECT = CO.I_CONNECT
                       WHERE S.SOURCE_SYSTEM_NAME = ?"""

    # bind variables: logical schema and source system name
    params = [logicalSchema, sourceSystemName]

    try:
        # execute SQL query
        rsOdiDetails = fdmAPI.executeQuery(sqlOdiDetails, params)

        # initialize map
        mapOdiDetails = {}

        # loop
        if rsOdiDetails.isBeforeFirst():
            while rsOdiDetails.next():
                # get ODI details
                mapOdiDetails["ODI_CONTEXT"] = rsOdiDetails.getString("ODI_CONTEXT")
                mapOdiDetails["DATA_SERVER_NAME"] = rsOdiDetails.getString("DATA_SERVER_NAME")
                mapOdiDetails["PHYSICAL_SCHEMA"] = rsOdiDetails.getString("PHYSICAL_SCHEMA")
                mapOdiDetails["JAVA_URL"] = rsOdiDetails.getString("JAVA_URL")
                mapOdiDetails["JAVA_DRIVER"] = rsOdiDetails.getString("JAVA_DRIVER")
                mapOdiDetails["USER_NAME"] = rsOdiDetails.getString("USER_NAME")
                mapOdiDetails["ENCRYPTED_PWD"] = rsOdiDetails.getString("ENCRYPTED_PWD")

            # log
            fdmAPI.logInfo("ODI Details: %s" % mapOdiDetails)
        else:
            # ODI Details not found
            errMsg = "ODI Details not found for Source System name %s (Logical Schema %s)" % (sourceSystemName, logicalSchema)
            fdmAPI.logInfo(errMsg)
            raise RuntimeError(errMsg)

        # close rs
        fdmAPI.closeResultSet(rsOdiDetails)
    except SQLException, ex:
        errMsg = "Error executing the SQL Statement: %s" % ex
        raise RuntimeError(errMsg)

    # return
    return mapOdiDetails
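For completeness, this is roughly how the snippet could be called from an event script. The source system and logical schema names are placeholders, and note that the returned password is still encrypted (decrypting it is a topic for another post):

# Hypothetical usage from a BefImport/BefLoad event script
odiDetails = get_odi_source_details(fdmAPI, fdmContext, "ERP_SOURCE", "LS_GL_SOURCE")

# the map can then be used to build a JDBC connection to the source system
javaUrl = odiDetails["JAVA_URL"]
javaDriver = odiDetails["JAVA_DRIVER"]
userName = odiDetails["USER_NAME"]
# ENCRYPTED_PWD still needs to be decrypted before you can actually connect
fdmAPI.logInfo("Source JDBC URL: %s (driver %s, user %s)" % (javaUrl, javaDriver, userName))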

Code snippets for FDMEE can be downloaded from GitHub.

Building your own Monitoring Solution for Data Changes in HFM Phase Submission Loads - Part 1

Back to blog!

From time to time I get some cool requirements from customers. That makes me think and do what I like to do: analyze, design, build and make everyone happy :-)

In this case, I got a challenging requirement for an automated integration between several source systems and HFM. Of course, using our favorite fishing tool, FDMEE. Yes, I know, everybody is now thinking about Cloud and more Cloud. However, FDMEE is still a very strong EPM integration tool that supports Hybrid and survives in 11.2.

I won't go into much detail so I can keep it simple.

The customer already had all the automation built using open batch files and different direct data extracts. However, there was no email notification solution in place, so it was hard for them to see the results without accessing FDMEE.

We started to gather the requirements and we came up with the following list:
  • Bypass missing mappings and send summary by email
  • Bypass invalid HFM intersections and send the Intersection Check report by email (html format)
  • Identify the CANNOT WRITE intersections (the blue ones) and include them in the email body
  • Show Process Management information about the HFM Process Units where data cannot be written. Basically, show Review Level so the users are aware of what is happening
  • Build a data-tracking solution for HFM Phased Submission
The goal is to cover all requirements in future blog posts, but today I will start with the key one, the data tracking. Personally, I think this would be a great functionality for Cloud Data Management as well. It fills a gap in the FDMEE data audit (you can see how cell data changes in HFM).

Requirement - Data Tracking for HFM Phased Submission
This is the requirement word by word:
To maintain control over the phased close process, a data tracking service is required to track data changes impacting Gross Margin and EBITDA from WD2 and WD4 respectively.  
We need the following key controls:
  • The phased submission process should allow for a soft Gross Margin and EBITDA close whilst tracking changes to the underlying data impacting Gross Margin and EBITDA from WD2 and WD4 respectively
  • The data load monitoring service must be relevant i.e. we DO NOT want to track all financial statement data changes and report them to users
  • Appropriate stakeholders must be notified of the data changes once the deadlines have been passed
  • This process must not disrupt the source trial balance data load to HFM i.e. the trial balance loaded must be complete and accurate
  • Sufficient provision should be built into the design to switch over to a hard WD2/WD4 close if desired
Is it clear? I know it is not a conventional solution but it will help establish the control that is desired over the phased close process without "hard" locking components of the income statement. This is what the customer wants to achieve.

Let's give some shape to these words as we want to keep everything simple. HFM will be using phased submissions to manage the data flow with review levels acting as a trigger event for data tracking.
  • Review Level 1 (RL1): Both the automated and manual FDMEE load processes are running and exporting data to HFM.
  • Review Level 2 (RL2) Soft Close - the promotion to RL2 will now act as a trigger to activate the data monitoring script that will track data changes when data is exported to HFM.
  • Review Level 3 (RL3) Hard Close - at RL3 the FDMEE global user system account no longer has access to load data to HFM.
Phased Submissions at a glance
The term phased submissions in HFM is synonymous with managing data through stages of the close cycle. In a typical organization this would involve:
  • Phase 1 – Promote inter-company data
  • Phase 2 – Promote Trial Balance data
  • Phase 3 – Promote Balance Sheet movement data/cash-flow
  • Phase 4 – Promote disclosure notes 
The process encourages a phased hard close and is designed for locking data submissions defined by accounting processes (i.e. when Phase 1 is promoted to Review Level 3, inter-company data cannot be changed by the trial balance loads in Phase 2).

Our Customer
The following summary shows the different phases and their details:
  • Phase 1 (WD2): Promote Gross Margin data. Soft close - RL2. Enable data monitoring on accounts that impact Gross Margin. Automated notifications informing users of changes to Gross Margin data.
  • Phase 2 (WD4): Promote EBITDA data. Soft close - RL2. Enable data monitoring on accounts that impact EBITDA. Automated notifications informing users of changes to EBITDA data.
  • Phase 3 (WD6): Promote Trial Balance data. Hard close - RL3. Lock trial balance data. Stop all email notifications.
  • Phase 4: Promote USGAAP/Disclosure Notes. Hard close - RL3. Lock USGAAP adjustments and changes to disclosure notes. Journals/Offline (Excel) submissions.

Using Custom HFM dimension to Link Accounts and Phases
A conscious decision was made to avoid tagging submission groups on the account dimension: invariably, accounts end up incorrectly tagged (or Group Finance changes their mind!), which then requires an application metadata update to correct the submission group assignment (used to profile phased submissions).

Instead, a spare custom dimension was leveraged, in this case Custom6 (Custom5 is already used to evaluate constant currency), with the intention of using FDMEE SQL mapping scripts to load data to the corresponding Custom6 dimension members. This means we can adjust the FDMEE mappings for phased submission purposes without applying HFM metadata updates.
Please note that this approach only works dynamically where the chart of accounts adheres to a rational numbering sequence, e.g. Revenue accounts are the 1* series, Cost of Sales accounts are the 2* series, and OPEX accounts are the 3* series. Other approaches are also valid.
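To make that idea more concrete, here is a minimal sketch of the kind of CASE logic behind such a SQL mapping. The account series and Custom6 member names are invented for the example, and how you plug it in (a #SQL mapping script or an event script updating TDATASEG) depends on your own design:

# Sketch only: derive the Custom6 member from the account series.
# Account ranges and member names are hypothetical.
sqlCustom6Case = """CASE
                      WHEN ACCOUNT LIKE '1%' OR ACCOUNT LIKE '2%' THEN 'GM_TRACK'   -- Revenue / Cost of Sales
                      WHEN ACCOUNT LIKE '3%' THEN 'EBITDA_TRACK'                    -- OPEX
                      ELSE 'NO_TRACK'
                    END"""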

Translating the Requirements for the Integration Guy
Although the solution seems complex, design is the key. Basically, we need FDMEE to:
  • Get the HFM Process Management details (Review Level for the different phases)
  • In Soft Close, compare the new data set against the last data set loaded. That will monitor all data load changes
  • Auto-map missing mappings so the workflow can be completed
  • Auto-map for invalid HFM intersections and identify the CANNOT WRITE intersections (Soft Close)
  • Keep audit in FDMEE so missing mappings and invalid intersections can be easily filtered in the Data Load Workbench
  • Send email notification with detailed results 
In the next chapter we will go through the solution design in FDMEE, and then we will continue with some examples where you can see how it works and the outcome of this data monitoring solution.

Merry Xmas!

Building your own Monitoring Solution for Data Changes in HFM Phase Submission Loads - Part 2

$
0
0
In the last blog post, Building your own Monitoring Solution for Data Changes in HFM Phase Submission Loads - Part 1, we described the FDMEE automation requested by our customer. One of the requirements was to include a data change tracker to monitor the data changes along the different HFM submission phases.

The MUST-HAVE list for this custom functionality is:
  • Get the HFM Process Management details (Review Level for the different phases)
  • In Soft Close, compare the new data set against the last data set loaded. That will monitor all data load changes
  • Auto-map missing mappings so the workflow can be completed
  • Auto-map for invalid HFM intersections and identify the CANNOT WRITE intersections (Soft Close)
  • Keep audit in FDMEE so missing mappings and invalid intersections can be easily filtered in the Data Load Workbench
  • Send email notification with detailed results 
Top-Down Solution Design, that's the key
There are multiple ways of addressing the design of a solution. If you ask me, my preferred approach is to start by drawing the solution at a high level. Why? Basically, with this Top-Down methodology, I can decompose the solution into smaller parts to better understand its different components. I can then refine each part in greater detail.
Definitely, with this "Divide and Conquer" approach, I will have a better understanding of how my solution will look and will reduce the impact of additional requirements.
Let's start drawing the solution!

Solution Diagram
As I usually say, "A picture paints a thousand words". There will always be a solution design diagram in every project I work on:
Basically:
  • We get the HFM Process Management details before data is loaded into HFM
  • We perform the data monitoring after data is loaded into HFM
Let's use reverse engineering to add more explanation:
  • After data has been loaded into HFM, the data monitoring is initiated for all HFM entities in Soft Close for any of the different phases. 
  • As we first need to know the status for all phases, FDMEE will get this information from HFM and store it in the TDATASEG table. This action happens before data is loaded into HFM (BefLoad event script)
  • Then, the data monitoring will compare the new data being loaded with the data previously loaded. But where is the data previously loaded? It's not in FDMEE, because data is wiped out when the same POV (DLR and Period) is re-processed. Data must be accessible somewhere else. To work around this, FDMEE data is copied to a custom table (Z_BALANCES_PRIOR) every time it is loaded into HFM with no errors. This action happens after data is loaded into HFM (AftLoad event script), as sketched below.
In a nutshell, the first time data is loaded for a specific POV (DATA 0), there won't be data to compare against. The second time data is loaded into HFM (DATA 1), it will be compared against DATA 0, and so on.
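To give an idea of that AftLoad copy, the sketch below shows the kind of INSERT that could populate Z_BALANCES_PRIOR from TDATASEG. The column list is a simplified assumption, and a real implementation would probably clear the previous snapshot for the POV first:

# AftLoad.py - sketch of the copy into the custom snapshot table (assumed structure of Z_BALANCES_PRIOR)
loadId = fdmContext["LOADID"]

sqlCopyPrior = """INSERT INTO Z_BALANCES_PRIOR (LOADID, ENTITY, ACCOUNT, UD6, AMOUNT)
                  SELECT LOADID, ENTITYX, ACCOUNTX, UD6X, AMOUNTX
                  FROM TDATASEG
                  WHERE LOADID = ?"""
params = [loadId]

# executeDML runs DML against the FDMEE database; depending on the release the
# commit flag may not be required, so check the API guide for your version
fdmAPI.executeDML(sqlCopyPrior, params, False)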

What do users get?
A very nice email notification with results :-)
The email above shows that one intersection has been updated. Results are attached as a CSV file where users can see the data differences for all phases. They can also see when previously loaded data is missing from the new load:

Taking the notifications to the Next Level
This solution for the automated process followed the pattern "This is very cool! Could we also have...?" I like challenges and delivering solutions to meet requirements. That makes everyone happy, doesn't it?

There are missing maps
We want to see which source values have not been mapped:

There are invalid intersections
We want to see the invalid intersections in the same way we did in FDM Classic!
There are invalid intersections, but why?
Most of the time, the CANNOT WRITE intersections are due to the process level. Can we get the process level for the different HFM process units with invalid intersections?


What's next?
The next post will cover the implementation details:
  • How to get the HFM Process Management details
  • SQL query to get data differences (new, updated, or deleted data) - see the rough sketch below
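While the details are for the next post, the comparison idea is roughly a FULL OUTER JOIN between the current load in TDATASEG and the prior snapshot, flagging rows as new, updated or deleted. The table and column names below follow the assumptions of the earlier sketch, not the final implementation:

# Rough sketch: compare the current load (TDATASEG) with the prior snapshot (Z_BALANCES_PRIOR)
sqlDataDiff = """SELECT
                   COALESCE(cur.ENTITYX, pri.ENTITY) AS ENTITY,
                   COALESCE(cur.ACCOUNTX, pri.ACCOUNT) AS ACCOUNT,
                   pri.AMOUNT AS PRIOR_AMOUNT,
                   cur.AMOUNTX AS NEW_AMOUNT,
                   CASE
                     WHEN pri.ENTITY IS NULL THEN 'NEW'
                     WHEN cur.ENTITYX IS NULL THEN 'DELETED'
                     WHEN pri.AMOUNT <> cur.AMOUNTX THEN 'UPDATED'
                     ELSE 'UNCHANGED'
                   END AS CHANGE_TYPE
                 FROM (SELECT ENTITYX, ACCOUNTX, AMOUNTX
                         FROM TDATASEG WHERE LOADID = ?) cur         -- current load
                 FULL OUTER JOIN
                      (SELECT ENTITY, ACCOUNT, AMOUNT
                         FROM Z_BALANCES_PRIOR WHERE LOADID = ?) pri -- prior snapshot for the same POV
                   ON cur.ENTITYX = pri.ENTITY
                  AND cur.ACCOUNTX = pri.ACCOUNT"""
# executed with fdmAPI.executeQuery(sqlDataDiff, [currentLoadId, priorLoadId]) as in the ODI snippet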
Cheers

FDMEE 11.2.x...I can't believe my eyes! that script has a hard-coded password!


Ey there!

A long time ago, I showed one of the different methods to avoid hard-coding passwords in FDMEE scripts. You can see the original post here.

I have upgraded FDMEE to 11.2.x, and now what?
If you run the same code in 11.2.x, you will get the following error:

TypeError: snpsDecypher(): expected 2 args; got 1

Why? FDMEE 11.1.2.x uses ODI 11g while FDMEE 11.2.x uses ODI 12c, and the ODI APIs have slightly changed. Some methods are now deprecated.

Luckily, there is a workaround. Let's use the ODI 12c API :-)
The code below also shows you how to use different EPM APIs such as the EPM Registry API.

Let me highlight something that I have seen in different implementations. If you want to execute a SQL statement against the FDMEE database, you don't need to open the connection via jdbc. There are API methods to execute SQL statements against it. Those methods will manage the DB connection for you.
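For example, a simple lookup against the FDMEE repository needs nothing more than the API methods you have already seen in this blog. The location name below is just an illustration:

# Query the FDMEE database through the API - no DriverManager, no manual connection handling
sqlLoc = """SELECT PARTITIONKEY, PARTNAME FROM TPOVPARTITION WHERE PARTNAME = ?"""
rsLoc = fdmAPI.executeQuery(sqlLoc, ["MY_LOCATION"])
while rsLoc.next():
    fdmAPI.logInfo("Location key: %s" % rsLoc.getString("PARTITIONKEY"))
fdmAPI.closeResultSet(rsLoc)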

Enjoy the code!

# ***************************************
# ODI 12c Password Decrypter
# ***************************************

'''
ODI 11g Code
--------------------------------------------
# Import ODI API class
    from com.sunopsis.dwg import DwgObject
# Decrypt pwd
connPwdDec = DwgObject.snpsDecypher(connPwdEnc)

Execution in ODI 12c
--------------------------------------------
Traceback (most recent call last):
File "<string>", line 529, in executeJythonScript
File "\\WIN19\EPMSHARE\FDMEE/data/scripts/custom/odi12c_decrypy_pwd.py", line 20, in <module>
    connPwdDec = DwgObject.snpsDecypher(connPwdEnc)
TypeError: snpsDecypher(): expected 2 args; got 1

Cause
--------------------------------------------
Function (deprecated) is defined as:
public static String snpsDecypher(String pPass, OdiInstance pOdiInstance)

'''

# Import libraries
from oracle.odi.core.config import MasterRepositoryDbInfo
from oracle.odi.core.config import OdiInstanceConfig
from oracle.odi.core.config import PoolingAttributes
from oracle.odi.core.config import WorkRepositoryDbInfo
from oracle.odi.core import OdiInstance
from com.sunopsis.dwg import DwgObject

from com.hyperion.aif.util import RegistryUtilCore
from com.hyperion.hit.registry import DBTypeComponentImpl
from com.hyperion.hit.registry import ComponentType

# Encrypted password (you can get it from ODI SNP tables as you would do in 11g)
connPwdEnc = "xxxxxxxxxxxxxxxxxxxx"

# Get MR Connection details
# ----------------------------
aifDbComponent = RegistryUtilCore.getAIFDatabaseComponent()
jdbcDriver = aifDbComponent.getPropertyValue("dbJDBCDriverProperty")
jdbcUrl = aifDbComponent.getJdbcUrl()
jdbcUserName = aifDbComponent.getUserName()
jdbcPwd = aifDbComponent.getPassword()
fdmAPI.logInfo("Jdbc Driver -> " + str(jdbcDriver))
fdmAPI.logInfo("Jdbc Url -> " + str(jdbcUrl))
fdmAPI.logInfo("Jdbc User -> " + str(jdbcUserName))
fdmAPI.logInfo("Jdbc Pwd -> " + str(jdbcPwd))

# Create MR/WR Info
# ----------------------------
workRep = "FDMEE"
masterInfo = MasterRepositoryDbInfo(jdbcUrl, jdbcDriver, jdbcUserName, jdbcPwd, PoolingAttributes())
workInfo = WorkRepositoryDbInfo(workRep, PoolingAttributes())

# Create ODI instance
# ----------------------------
odiInstance = OdiInstance.createInstance(OdiInstanceConfig(masterInfo, workInfo))

# Decrypt password
# ----------------------------
connPwdDec = DwgObject.snpsDecypher(connPwdEnc, odiInstance)

# Log decrypted password
fdmAPI.logInfo("Jdbc Decrypted Pwd -> " + str(connPwdDec))

# Destroy objects
odiInstance.close()
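Once decrypted, the password can be used to open a JDBC connection to the source or target data server, for example with plain java.sql. The URL and user below are placeholders; in practice they come from the ODI details (see the earlier snippet), and the JDBC driver must be available on the FDMEE classpath:

# Hypothetical usage: connect to the data server with the decrypted password
from java.sql import DriverManager

connUrl = "jdbc:oracle:thin:@dbserver:1521/ORCLPDB"   # placeholder, normally read from the ODI details
connUser = "GL_STAGE"                                 # placeholder
conn = DriverManager.getConnection(connUrl, connUser, connPwdDec)
fdmAPI.logInfo("Connected to: %s" % conn.getMetaData().getURL())
conn.close()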

Have a good weekend!