
Oracle Fusion Applications : Diagnostics – Log file locations


Introduction

Oracle Fusion Applications is a suite that delivers business functionality through User Interface (UI) screens.  When there is a need to look under the covers of these business functions, the first places to check are the various system logs made readily available to the administrator.  This article introduces the key logs and their locations.

Main Article

While Fusion Applications presents a mosaic of features in web-based screens, under the hood this is achieved with a combination of multiple Oracle products and technologies.  The products can be visualized as a multi-layer stack according to their functionality and role in transaction processing, as shown in the figure below.  Since Layer 0 and Layer 5 are outside the scope of the Fusion Applications suite, this article is limited to Layers 1 to 4.

Given the variety of the products and their stacking, the best way to view information for monitoring and diagnostics is to use Oracle Enterprise Manager Cloud Control (EM 12c).  EM 12c can manage and monitor not only the Fusion Applications layer but the entire stack, including the OS, VM, and DB layers, and it can manage multiple environments.  If Cloud Control is not installed or configured in your environment, the good news is that Fusion Applications comes with a built-in Oracle Enterprise Manager that provides similar functionality.  The built-in EM control works within each FA domain, so it must be accessed individually and requires EM administrator access.  One great feature of EM Cloud Control is the ability to track a transaction as a whole thread instead of peering at individual log file lines.

Even with EM, there is a need to review logs directly at the filesystem level.  Quite often, logs must be reviewed to diagnose problems and make configuration changes when the FA system (and therefore EM) is down.  Also, some savvy administrators may simply prefer traditional tools like vi and grep for this work.

Thus, knowing which logs to seek and where they are is a must for system administrators.  Since Fusion Applications involves a very broad set of products, it is often difficult for administrators to know them all in depth.

The table below summarizes the key log locations across the stack, layer by layer as in the figure above, so that administrators can review them quickly.

 

Product(s) | Log Location and How to Find Logs in Non-Default Installs
Databases (Layer 1)
Hosts   : All DB hosts
Log dir : <ORACLE_DB_BASE>/diag/rdbms/<DB_NAME>/<DB_NAME>/trace/

OVM default examples :
/u01/app/oracle/diag/rdbms/fusiondb/fusiondb/trace/
/u01/app/oracle/diag/rdbms/oidb/oiddb/trace
/u01/app/oracle/diag/rdbms/oimdb/oimdb/trace

To find DB_NAME values use : ps -ef | grep pmon
As an admin user in SQL*Plus, use the command "show parameter dump_dest" to see the trace dir.
The file alert_<DB_NAME>.log is the key file to review for DB errors.
Additional files such as *.trc are useful for further diagnostics.
Listener, SQLNet (Layer 2)
Hosts   : All DB hosts
Log dir : <ORACLE_DB_BASE>/diag/tnslsnr/<HOSTNAME>/<LISTENER_NAME>/trace/
OVM default example :
/u01/app/oracle/diag/tnslsnr/<FUSIONDB_HOSTNAME>/listener/trace/

To get the trace dir, as the DB software owner on the DB host use : lsnrctl status <optional_listenername> | grep Log
The file listener.log is the key file to review for SQL*Net errors.
Node Manager (Layer 3)
Hosts : All WLS hosts
<CONFIG_BASE>/nodemanager/<hostname>/

OVM default examples :
/u01/IDMTOP/config/nodemanager/oimfa.us.oracle.com/
/u01/APPLTOP/instance/nodemanager/admin-apps.oracleoutsourcing.com/
/u01/APPLTOP/instance/nodemanager/primary.oracleoutsourcing.com/
/u01/APPLTOP/instance/nodemanager/secondary.oracleoutsourcing.com/
/u01/APPLTOP/instance/nodemanager/bi.oracleoutsourcing.com/
/u01/APPLTOP/instance/nodemanager/primary-ha1.oracleoutsourcing.com/

Using "ps -ef | grep weblogic.NodeManager" the parameter -DNodeManagerHome
shows the nodemanager home dir and in this dir, using
"grep LogFile nodemanager.properties" shows the log dir location.

NodeManager helps to start weblogic servers and is crucial to monitoring
the health of weblogic servers and its logs explain issues seen during
startup or stability of weblogic servers.
OID (IdM) (Layer 3)
Hosts  : All OID hosts
Log dir: <OID_CONFIG_DIR>/instances/<OID_InstanceName>/diagnostics/logs/OID/<OID_InstanceName>/
OVM default example :
/u01/IDMTOP/config/instances/oid1/diagnostics/logs/OID/oid1

If OID is running, "ps -ef | grep ldapd" shows the process id (PID) of one of the procs,
and "cat /proc/<PID>/environ |tr "\0"  "\n" |grep COMPONENT_LOG_PATH" shows the log dir.

If OID is down, the config dir location is in the original install oraInventory/logs dir
and the location of the oraInventory will be in the OID_HOME/oraInst.loc file.
All security requests go through the LDAP server, so these logs are useful.
However, if there are no errors and a call trace is needed, log levels need to be increased accordingly.
OVD (IdM) (Layer 3)
Hosts  : All OVD hosts
Log dir: <OID_CONFIG_DIR>/instances/<OID_InstanceName>/diagnostics/logs/OVD/<OVD_InstanceName>/
OVM default example :
/u01/IDMTOP/config/instances/oid1/diagnostics/logs/OVD/ovd/

If OVD is running, "ps -ef | grep oracle.component.type=OVD" shows the parameter
-Doracle.component.logpath which is the log dir.

OVD is an aggregator of multiple authenticators, so FA is often installed with OVD in front
of the LDAP servers; OVD logs provide crucial information on LDAP errors in addition to the OID logs.
OPMN (IdM) (Layer 3)
Hosts  : All OID / OVD hosts
Log dir: <OID_CONFIG_DIR>/instances/<OID_InstanceName>/diagnostics/logs/OPMN/<OPMN_InstanceName>/
OVM default example :
/u01/IDMTOP/config/instances/oid1/diagnostics/logs/OPMN/opmn/

OPMN log dir will be located under same tree as oid log dir (see above).

OPMN is a monitoring agent for OID and OVD; its logs are reviewed if OID or OVD
have any issues with starting or stability.
Weblogic (IdM) (AdminServer, OIM, OAM, SOA, ODS, OIF) (Layer 3)
Hosts  : All IdM Weblogic Hosts
Log dir: <IDM_SERVER_CONFIG_DIR>/logs
OVM default example :
/u01/IDMTOP/config/domains/IDMDomain/servers/AdminServer/logs/

If a WebLogic server is running, use "ps -ef | grep <servername>"
(where servername is wls_oim, wls_oam, etc.) to see the parameter -Doracle.server.config.dir,
which is the value of IDM_SERVER_CONFIG_DIR above.

If a server is not running, the file <IDM_BASE_DIR>/domain-registry.xml
(in OVM default example, the file is /u01/IDMTOP/products/app/domain-registry.xml ) shows
IDM_DOMAIN_DIR and <IDM_SERVER_CONFIG_DIR> = <IDM_DOMAIN_DIR>/servers/<servername>

The log files named access*.log show the request/response to the servers.
The files named *.out give the health of the server itself - especially issues with starting.
The files named *.log give details on the various FA deployments running on the server.
The files named *diagnostic.log provide detailed diagnostics and stack dumps.
Weblogic (FA) (AdminServer, Managed Servers including BI, SOA, and ESS in various domains) (Layer 3)
Hosts  : All FA Weblogic hosts
Log dir: <FA_SERVER_CONFIG_DIR>/logs
OVM default example :
/u01/APPLTOP/instance/domains/bi.oracleoutsourcing.com/BIDomain/config/servers/AdminServer/logs

If a server is running, use "ps -ef | grep <servername>" to see the WLS server process;
the parameter -Doracle.server.config.dir shows the FA_SERVER_CONFIG_DIR above.

If a server is not running, the file <FA_BASE_DIR>/domain-registry.xml
(in the OVM default example, the file is /u01/APPLTOP/fusionapps/app/domain-registry.xml )
shows the value of FA_DOMAIN_DIR for the different domains,
and <FA_SERVER_CONFIG_DIR> = <FA_DOMAIN_DIR>/servers/<servername>

The log files named access*.log show the request/response to the servers.
The files named *.out give the health of the server itself - especially issues with starting.
The files named *.log give details on the various FA deployments running on the server.
The files named *diagnostic.log provide detailed diagnostics and stack dumps.
BI (non-WLS) (Cluster Controller, Essbase, Java Host, Presentation, Scheduler, BIServer, and OPMN) (Layer 3)
Host   : BI Hosts
Log dir: <FA_CONFIG_DIR>/BIInstance/diagnostics/logs/<servername>
OVM default examples :
/u01/APPLTOP/instance/BIInstance/diagnostics/logs/OPMN/opmn/
 /u01/APPLTOP/instance/BIInstance/diagnostics/logs/Essbase/
 /u01/APPLTOP/instance/BIInstance/diagnostics/logs/OracleBISchedulerComponent/
 /u01/APPLTOP/instance/BIInstance/diagnostics/logs/OracleBIServerComponent/
 /u01/APPLTOP/instance/BIInstance/diagnostics/logs/OracleBIJavaHostComponent/
 /u01/APPLTOP/instance/BIInstance/diagnostics/logs/OracleBIPresentationServicesComponent/
 /u01/APPLTOP/instance/BIInstance/diagnostics/logs/OracleBIClusterControllerComponent/

The various logs within each component provide detailed information on the requests,
queries and connections made, including parameters and values.
These help diagnose issues in these components.
The OPMN logs show issues with component start-up or stability.
FA (non-WLS) (GOP and OPMN) (Layer 3)
Hosts  : Host running SCM Advanced Procurement, if it is configured.
Log dir: <FA_CONFIG_DIR>/<GOP_INSTANCE_NAME>/diagnostics/logs/
OVM default example :
/u01/APPLTOP/instance/gop_1/diagnostics/logs/

This is a rare case of an OPMN-based FA server used with SCM, i.e., outside of WebLogic.
Webtier, Webgate and OPMN (Layer 4)
Hosts : All OHS hosts

Log dirs :
OHS, Webgate logs : <OHS_CONFIG_HOME>/diagnostics/logs/OHS/<instance_name>/
OPMN logs : <OHS_CONFIG_HOME>/diagnostics/logs/OPMN/<instance_name>/
OVM default examples :
IdM OHS  : /u01/IDMTOP/config/instances/ohs1/diagnostics/logs/OHS/ohs1/
IdM OPMN : /u01/IDMTOP/config/instances/ohs1/diagnostics/logs/OPMN/opmn/
FA  OHS  : /u02/instance/CommonDomain_webtier_local/diagnostics/logs/OHS/ohs1/
FA  OPMN : /u02/instance/CommonDomain_webtier_local/diagnostics/logs/OPMN/opmn/
If OHS is running, use "ps -ef | grep httpd" to get the PID of any of the processes and then
"cat /proc/<PID>/environ | tr "\0" "\n" | grep COMPONENT_LOG_PATH" to get the log dir.

If OHS is down, the config dir can be seen in the original install log if it exists.
"grep INSTANCE_HOME <WEBTIER_HOME>/cfgtoollogs/oui/*.log" will show the instance dir
and the logs will be under that dir, similar to above.

All HTTP requests go through the OHS web servers, so the access_log files
are useful to trace the first incoming call and then track it down to
further calls in the inner layers using the same ID (typically the ECID) seen in that
request log line (see the sketch after this table).

The oblog_log files are useful to track the requests of the webgate plugin.

The log level needs to be set right to get the right diagnostic information.

OPMN logs are used to review OHS starting or stability issues.
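
As a minimal sketch of this cross-layer tracing (the paths are the placeholders used in the table above, and the request URI and ECID value are purely hypothetical, to be substituted for your environment):

# Hedged sketch: follow one request from the OHS access log into the WebLogic logs by its ID.
# Substitute the <...> placeholders with the real paths found using the table above.
OHS_LOGS=<OHS_CONFIG_HOME>/diagnostics/logs/OHS/<instance_name>
WLS_LOGS=<FA_SERVER_CONFIG_DIR>/logs

# 1. Locate the incoming request in the access log and note the ID field on that line
grep "GET /homePage" $OHS_LOGS/access_log* | tail -5

# 2. Search the same ID in the inner-layer logs to follow the request further
grep "0000Kx2EXAMPLEECID" $OHS_LOGS/*.log $WLS_LOGS/*diagnostic.log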

 

Log file sizes are curtailed by periodic roll-overs, so old logs can accumulate; it is important to sort the log files by time to make sure you review the right ones.

Apart from the above, there are a few other critical tasks involved in creating and maintaining Fusion Applications, and the log files related to them are listed below.

Phase | Log Location
Initial Install
Tools  : Fusion Applications Provisioning Utility
Log dir: <FA_BASE>/logs/provisioning/<hostname>

These are created during the initial provisioning of the system and are not used after that.
The logs here are useful during the installation; later upgrades and patching use different locations.
Keeping these logs may help if there are any questions later about the provisioning process.
Install / Upgrade
Tools  : Oracle Universal Installer (OUI)
Log dir: <INVENTORY_DIR>/logs/

At initial install and later updates, product files are updated by the Oracle Universal Installer (OUI).
This uses the traditional location known as the OraInventory dir to save information on the install
as well as the installer logs. Normally this directory is preserved in case there is any need to
review the install process later.  The location of INVENTORY_DIR is in each product home dir
in a file named oraInst.loc.  In some rare cases of manual installs,  due to the flexibility offered
by the installer, users may have chosen different oraInventory directories for different products
- so it helps to check the oraInst.loc files for each product home.  For a FA system upgraded
from one release to another, there is a requirement to consolidate all FA inventory location files.
Patching
Tools  : Oracle Patch Utility (OPatch)
Log dir: <ORACLE_HOME>/cfgtoollogs/opatch/

OVM default examples :
/u01/IDMTOP/products/dir/oid/cfgtoollogs/opatch
/u01/IDMTOP/products/ohs/ohs/cfgtoollogs/opatch
/u01/APPLTOP/fusionapps/applications/cfgtoollogs/opatch/
/u01/APPLTOP/webtier_mwhome/webtier/cfgtoollogs/opatch/

Patching of Fusion Applications involves patches applied to :
(a) traditional Fusion Middleware (FMW) or Database products, and
(b) Fusion Applications that run over the above products.
The products of type (a) are patched using the Oracle OPatch utility, and
the products of type (b) are updated using the Applications Patch Manager.
The OPatch utility saves its files and logs in <ORACLE_HOME>/cfgtoollogs/opatch

Tools  : Oracle Fusion Applications Patch Manager
Log dir: <FA_ORACLE_HOME>/admin/FUSION/log*/
OVM default example :
/u01/APPLTOP/fusionapps/applications/admin/FUSION/log*/

The Fusion Applications Patch Manager saves its logs in the directories
<FA_ORACLE_HOME>/admin/log and <FA_ORACLE_HOME>/admin/FUSION/logarchive .

Upgrade
Tools  : Oracle Fusion Applications Upgrade Utility
Log dir: <APPLICATIONS_CONFIG>/lcm/logs/

OVM default example :
/u01/APPLTOP/instance/lcm/logs/

The Fusion Applications Upgrade Utility performs several upgrade and patching tasks, often in
parallel, by calling the Oracle installer and patch tools and orchestrating them across hosts.
Since Release 7 it keeps all its work under the dir <APPLICATIONS_CONFIG>/lcm/logs/.
(Earlier releases had the upgrade logs located under the APPLICATIONS_BASE dir.)

 

Conclusion

For FA administrators, having all the log locations handy is useful at any time, but crucial when there is an urgent need to troubleshoot or monitor the system, especially when EM Cloud Control is not readily available or not geared for a specific task.


Fusion Applications User, Role Identity Flow and Initial Bulk Load


Introduction

As customers work towards implementing Fusion Applications (FA) in their enterprise and prepare for go-live, the enterprise user and role identity data from various HR applications needs to be migrated to FA so that the users become part of the FA system and can use the application. There are a number of steps involved in this process, and performing only some of them in lower environments does not sufficiently prepare you for go-live in a production environment. In this article, we will cover the important steps involved and some helpful tips for a successful user and role load.

High-level Steps of Initial Bulk Load

To migrate user and role identities to FA, user and role information can be collected from various enterprise applications and brought into FA using tools offered with the FA release the customer is using. For Release 7 and Release 8 of FA, Human Capital Management (HCM) File-Based Loader (FBL) is the recommended option for bulk load. Release 9 includes an evolution of FBL that is not covered in detail in this article; FBL can still be used with Release 9.

FBL enables you to bulk-load data from any data source to HCM, both for the initial load and for ongoing maintenance of the data by means of periodic batch loading. Please refer to the documents listed in the reference section below for the full list of FBL capabilities and limitations and detailed steps of execution. The following diagram depicts the high-level steps of using FBL:

FBL Flow Diagram

As noted in the diagram above, data gathered from various sources is transformed for import using FBL. The tool helps with everything from data validation through loading the data into FA. After an FBL run, you will see the users and roles reflected in the FA product families. However, the users and roles loaded using FBL cannot be used yet; further down-stream provisioning is needed before we can log in with the user ID of any of the users just loaded. Before we look at how to provision down-stream, it helps to understand the user and role identity containers in FA. Let’s look at that next.

User and Role Identities in FA

User and Role identity information is stored primarily in 3 containers in a typical FA deployment:

1. FA-HCM (even if you intend to use other product families, the HCM module is always used in FA to manage user and role identity)

2. Oracle Identity Manager (OIM)

3. Oracle Internet Directory (OID).

During run-time, the identity information in all of these 3 containers is applied to the users and used in enforcing security and entitlement policies of FA. For more details, please refer to the ‘Implementing Workforce Deployment’ document: https://docs.oracle.com/cd/E48434_01/fusionapps.1118/e49577/toc.htm

The following diagram gives an overview of the flow of user and role identity information between them:

VNBlogpic3

We discussed FBL and the process of loading users and roles into FA HCM. Now, as per the flow in the diagram above, we need the users and roles provisioned in OIM and, in turn, provisioned to the OID server, so that when a user tries to log in to the FA system, Oracle Access Manager (OAM) can authenticate the user by looking up the user in OID. Likewise, much of the FA security policy enforcement is performed based on the user and role information available in OID. Hence, after step 1 (FBL), step 2, ‘Send Pending LDAP Requests’, must be manually invoked to provision the users downstream. Scheduled jobs take care of step 3, provisioning the users into OID. Once steps 1, 2 and 3 have been run, step 4, ‘Retrieve Latest LDAP Changes’, can be invoked manually to complete the full cycle. Step 4 brings updates made in the IDM systems (OIM, OID and APM) back to FA HCM. The step marked ‘1.1 Person Synchronization’ helps to ensure user information is complete and also covers dated actions. For the initial load, this 1.1 step must always be run at the beginning of the cycle, followed by step 1.2, ‘Retrieve Latest LDAP Changes’.

If a customer performs step 1 (FBL) and does not continue through the rest of the steps, assuming they are not significant, they will neither fully understand the importance of the remaining steps nor have the users and roles fully provisioned in the system. So, it is important to complete all four steps in at least one lower environment before attempting this in the production environment for the first time. This gives you a full appreciation of the complete process and the ability to plan the go-live activity.

Completing All the Important Steps

Assuming step 1 (FBL) is completed, let’s look at completing the ‘Send Pending LDAP Requests’ job to provision the users downstream from FA HCM. This is an important and resource-intensive step that needs to be carefully executed with proper planning.

First, the FA system must be properly tuned. There is good guidance available in the FA tuning document. Please review the FA tuning guide at the link below and make sure you apply the recommended parameters for your FA environment needs:

http://docs.oracle.com/cd/E36909_01/fusionapps.1111/e16686/toc.htm#BEGIN

If we were to point to one primary component that must be tuned, it would be the OID processes. This component is the security hub of the overall FA application and must be tuned properly for good FA system performance at all times. It is very easy to tune, as detailed in the above document. Next, tuning the OIM and SOA components certainly helps with the next step. The tuning document above details the necessary JVM heap sizes and data source tuning parameters, among others.

Once tuning is completed and you are ready to invoke the step 2 ‘Send Pending LDAP Requests’ job, be prepared with a few SQL queries to track the progress of the load, as shown below. At the end of the step 1 FBL load, you will see multiple LDAP requests generated in CREATE and REQUEST status. You may query their status as they get processed by Send Pending LDAP Requests. A sample SQL query such as the one below gives a snapshot of the requests:

select plr.ldap_request_id, plr.request_status, plu.request_status user_request_status, plu.request_type,
ppn.last_name, ppn.first_name, plu.username requested_username, pu.username, plr.request_id, plr.active_flag,
      plr.error_code, plr.error_description,
      plr.last_update_date, plr.request_date, plr.requesting_reference_id
from fusion.per_ldap_requests plr, fusion.per_ldap_users plu, fusion.per_person_names_f ppn, fusion.per_users pu
where trunc(plr.last_update_date) >= trunc(sysdate-:dayOffset)
and plr.requesting_reference_id = ppn.person_id(+)
and plr.requesting_reference_id = pu.person_id(+)
and plr.ldap_request_id = plu.ldap_request_id
and ppn.name_type(+) = 'GLOBAL'
and sysdate between ppn.effective_start_date(+) and ppn.effective_end_date(+)
--and nvl(plr.error_code, 'X') <> 'HCM-SPML-0001'
--and plr.request_status in ('FAULTED','REJECTED')
--and plr.request_status = 'COMPLETE'
--and plr.request_id is not null
--and plu.request_type = 'TERMINATE'
order by plr.last_update_date desc;

Once step 2 is finished, you may export the output to a spreadsheet to process successes and failures and resolve any issues for a re-run. There can be legitimate errors, such as a user ID that already exists or invalid data, that prevent individual requests from completing; you may correct those errors and a re-run can complete them. For example, error IAM-3076036 – ‘User with the attribute mail, value <email_id> already exists’ can be corrected by updating the user record with a correct email ID. Error code IAM-3071004 means the user is not correctly set up with a manager record. For more details on the error codes, please refer to Doc ID 1509644.1 in support.oracle.com.
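
As a minimal companion sketch (the connect string and day offset are placeholders to adapt to your environment), a quick count of requests by status helps gauge overall progress and the volume of failures before digging into the detailed output:

# Hedged sketch: summarize LDAP request volumes by status for the last day
sqlplus -s <fusion_reader_user>/<password>@<fa_db> <<'EOF'
SELECT request_status, COUNT(*) AS request_count
FROM   fusion.per_ldap_requests
WHERE  TRUNC(last_update_date) >= TRUNC(SYSDATE - 1)
GROUP  BY request_status
ORDER  BY request_status;
EOF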

Also, since the provisioning to OIM and OID (steps 2 and 3) is resource intensive, it is a good idea to split the load and load users in small batches. When the users are brought in with FBL, keep each batch at a manageable size that performs well for your environment. Say you are bringing in a total of 100,000 users; then it may be good to load them in batches of 25,000. Load 25,000 users in the first batch using step 1 (FBL) and complete steps 2, 3 and 4, then bring in the next 25,000 with FBL, and so on.

There are pros and cons to running smaller batches, as with any batch processing, but finding the right size is important. Smaller batches help reduce the stress on the system and also help with error handling. Depending on the performance you get, you may adjust the size of the load.

It is a good idea to plan dedicated time for the system to complete the step 2 ‘Send Pending LDAP Requests’ process, since general slowness may be observed in user login performance during this load. Hence, you may plan for one batch overnight or a few batches over a weekend. This helps avoid conflicts if the system is shared by other work streams. Allowing sufficient dedicated load time lets this process complete relatively quickly.

There are several good documents in My Oracle Support, listed in the reference section, that help you prepare for and monitor the progress of this phase, as well as troubleshooting articles that you will find valuable.

Besides the initial load, it is important to plan and schedule the ongoing incremental processing once the system is in use. Based on the number of users and roles created or updated in the source systems, the set of jobs we looked at above needs to be scheduled. Step 1.1 must always be executed at the beginning of the scheduled cycle. Step 3, OIM LDAP synchronization, can also be manually invoked using OIM scheduled jobs; ‘LDAP Role Create and Update Full Reconciliation’ and ‘LDAP Role Create and Update Reconciliation’ are some important ones. For further details on how these programs work, and when to schedule them, see ‘Synchronization of User and Role Information with Oracle Identity Management: How It Is Processed’ in the Oracle® Fusion Applications Coexistence for HCM Implementation Guide.

Summary

A few take away points from this article for initial bulk-load:

1. Load the initial bulk of users and roles in batches sized appropriately for your environment.

2. Proper tuning of the OID, OIM and SOA components is key to the success of this process.

3. Run the full process discussed here in at least one environment before production load and plan to allow dedicated system resources for the job.

4. Prepare and gather the necessary monitoring and validation scripts beforehand for easy monitoring of progress during the load.

5. Plan for and run scheduled jobs discussed above on an ongoing basis to keep the information flow current.

Reference

My Oracle Support Docs:

File-Based Loader for Release 7 & 8 (Doc ID 1595283.1)

File-Based Loader Diagnostics Release 7 & 8 (Doc ID 1594500.1)

User and Role Provisioning – Troubleshooting Guide (Doc ID 1459830.1)

Fusion HCM: Common BPEL and OIM error messages in User and Roles Provisioning (Doc ID 1509644.1)

Important log files and their location in Fusion Applications


Overview

Log files are critical for maintaining system health and for debugging performance, functional, or technical issues. Fusion Applications (FA) is an integrated suite of products, and hence it is important to know the log file locations for all the products and components to effectively troubleshoot a problem. A single transaction can span various components across the application, identity and database tiers. The best way to look at and gather all the log files is through Enterprise Manager (EM) – you can view them via the individual domains’ EM Applications Control and Database Control for the DB, or you can view them all from a single centralized UI using EM Cloud Control (EMCC), which provides a centralized view of the entire FA stack.

Having said that, it is important to be able to view the files from the command prompt for various reasons – the most important being the ability to script, and the possible non-availability of EMCC. This post provides the list of the most important log files within the entire FA stack. Viewing and gathering log files from EM will be handled in a separate post.

Main Article

A fully provisioned Fusion Applications environment contains 2-3 databases (1 for FA and 1 or 2 for IDM, depending on whether you chose to have IDM and IAM share the same database or not); a full IDM stack comprising system components – Oracle Internet Directory (OID) / Oracle Virtual Directory (OVD), Oracle HTTP Server (OHS) protected by WebGate – and java components – OAM, OIM, SOA and OIF in a WebLogic domain named IDMDomain; and a Fusion Applications application tier comprising various WebLogic domains for product families like HCM, CRM etc. and system components for BI, OHS and GOP. Here is a listing of all the log files for these components.

This post refers to FA Release 9, but most of it will apply to FA installations of lower releases. Sometimes, minor tweaking may be required, but the concepts presented in this post will help you find the alternate locations.

Database:

Alert log and background processes logs:

alert.log is the most important database log file. It lists important messages and errors reported by the database. Other background process logs are less important, and you may need to view them only under rare circumstances. These log files go in a location defined by the database initialization parameter background_dump_dest. The parameter value can be found by querying the v$parameter database view or by using the SQL*Plus “show parameter” command, as shown in the following screenshot:

SELECT name, value FROM v$parameter WHERE name like 'background_dump_dest';

DB_Dump_Destination

Session trace files:

Session trace files are generated either automatically – when a DB error like ORA-600 or ORA-07445 occurs – or on demand by turning tracing on for a DB session, which is usually done to identify performance issues and sometimes to get an errorstack. These logs go in a location defined by the database initialization parameter user_dump_dest. The parameter value can be found by querying the v$parameter database view or by using the SQL*Plus “show parameter” command, as shown in the following screenshot:

SELECT name, value FROM v$parameter WHERE name like 'user_dump_dest';

DB_User_Dump_Destination

Install Logs:

Software install log files are located under the logs directory, which is under the central inventory. If you use a single central inventory for all Oracle products, then this directory can be located using the central inventory location file /etc/oraInst.loc (AIX and Linux) or /var/opt/oracle/oraInst.loc (Solaris). If you use different central inventory locations for various products, then the system oraInst.loc may not be a definitive source of the inventory location. In such a case, it is best to look at the local Oracle inventory location pointer file $ORACLE_HOME/oraInst.loc

DB_OraInst_Loc
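
As a brief sketch of those steps on a Linux host (the inventory path shown is a hypothetical value):

# Hedged sketch: locate the central inventory, then list the installer logs in it
cat /etc/oraInst.loc                   # or $ORACLE_HOME/oraInst.loc for a per-product inventory
# suppose it reports inventory_loc=/u01/app/oraInventory (hypothetical)
ls -ltr /u01/app/oraInventory/logs     # installer logs, newest last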

OPatch logs:

OPatch is the tool used to patch Oracle binaries. OPatch logs are required to troubleshoot issues encountered during patching or for analyzing patch conflicts. These logs can be found under the $ORACLE_HOME/cfgtoollogs/opatch directory.

Listener logs:

Listener log files can be found using the command “lsnrctl status <listener_name>”

The following screenshot shows an example:

DB_Listener_Log

Identity Management:

Identity Management logs are stored in the shared configuration location which is specified during IDM provisioning. We will refer to this location as IDM_CONFIG_HOME in the rest of the post.

Similarly, the install base for all the products under IDM is specified during provisioning and we will refer to this location as IDM_INSTALL_BASE in the rest of the post.

IDM_Provisioning_Install_Config_Location

Domain Logs:

The java components of the IDM stack, namely OIM, OAM, SOA and optionally OIF, run under individual managed servers within a WebLogic domain, which by default is named IDMDomain. Each WebLogic domain has a single domain log and log files for each Admin and Managed Server. The four important log files for each WLS server (Admin or Managed) are:

1. The diagnostic log file – <server_name>-diagnostic.log

2. The container log file to which WLS logs messages are written – <server_name>.log

3. The standard error and output file (commonly referred to as the out file) to which all standard output and error messages are redirected – <server_name>.out

4. Access log file to which all incoming HTTP requests for that server are logged. This file is typically named access.log or access.log.<timestamp>

 

To identify these log files, you first need to identify the DOMAIN_HOME for IDMDomain on the particular server. For instructions on identifying the DOMAIN_HOME for any WebLogic domain on a particular server, please refer to Appendix A.

The location for various log files is:

Domain Log: <DOMAIN_HOME>/servers/AdminServer/logs/IDMDomain.log

Diagnostic Log for each server: <DOMAIN_HOME>/servers/<server_name>/logs/<server_name>-diagnostic.log

WLS Log: <DOMAIN_HOME>/servers/<server_name>/logs/<server_name>.log

Out File: <DOMAIN_HOME>/servers/<server_name>/logs/<server_name>.out

An example of these files for AdminServer where DOMAIN_HOME is set to /u01/app/r8/idm/config/domains/IDMDomain is shown in the following screenshot:

IDM_Domain_Logs
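
A small sketch of how the four files for one managed server could be checked from the command line, assuming DOMAIN_HOME is already set (see Appendix A) and using wls_oim1 purely as an example server name:

# Hedged example: quick look at the four key logs of one WLS server
SERVER=wls_oim1                              # example server name
LOGDIR=$DOMAIN_HOME/servers/$SERVER/logs
tail -50 $LOGDIR/$SERVER-diagnostic.log      # diagnostic log
tail -50 $LOGDIR/$SERVER.log                 # WLS container log
tail -50 $LOGDIR/$SERVER.out                 # stdout/stderr - startup issues show up here
ls -ltr  $LOGDIR/access.log*                 # incoming HTTP requests (file may be timestamped)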

As a general rule, system component logs are located in <INSTANCE_HOME>/diagnostics/logs/<component_type>/<component_name>, where the instance home for each component is located in the <IDM_CONFIG_HOME>/instances/<instance_name> directory.

OHS Logs:

OPMN logs list messages written by the Oracle Process Monitor which is used to start and monitor OHS processes. These log files are located in <IDM_CONFIG_HOME>/instances/ohs1/diagnostics/logs/OPMN/opmn

OHS logs contain the OHS error and access logs. These log files are located in <IDM_CONFIG_HOME>/instances/ohs1/diagnostics/logs/OHS/ohs1

OID/OVD Logs:

OPMN logs: <IDM_CONFIG_HOME>/instances/oid1/diagnostics/logs/OPMN/opmn

OID logs: <IDM_CONFIG_HOME>/instances/oid1/diagnostics/logs/OID/oid1

OVD logs: <IDM_CONFIG_HOME>/instances/oid1/diagnostics/logs/OVD/ovd1/

OID Audit logs: <IDM_CONFIG_HOME>/instances/oid1/auditlogs/OID/oid1

OVD Audit logs: <IDM_CONFIG_HOME>/instances/oid1/auditlogs/OVD/ovd1

Patch Logs:

There are various products installed in the IDM stack. Opatch is used for applying patches to various Oracle Homes. All Oracle Homes are installed under various subdirectories within <IDM_INSTALL_BASE>. Here is the information about a few of these homes:

OID/OVD Home: <IDM_INSTALL_BASE>/products/dir/oid

OHS Home: <IDM_INSTALL_BASE>/products/ohs/ohs

IAM Home: <IDM_INSTALL_BASE>/products/app/iam

 

OPatch logs for each of these products are stored in their respective Oracle Homes. The location for these logs is $ORACLE_HOME/cfgtoollogs/opatch

Fusion Applications:

Fusion Applications logs are stored in the shared configuration location which is specified during FA provisioning. We will refer to this location as FA_CONFIG_HOME in the rest of the post.

Similarly, the install base for all the products under FA is specified during provisioning and we will refer to this location as FA_INSTALL_BASE in the rest of the post.

FA_Provisioning_Install_Config_Location

Domain Logs:

A provisioned FA environment contains up to 9 WebLogic domains; the exact number depends on the products you provisioned. Each WebLogic domain has a single domain log and log files for each Admin and Managed Server. The four important log files for each WLS server (Admin or Managed) are:

1. The diagnostic log file – <server_name>-diagnostic.log

2. The container log file to which WLS logs messages are written – <server_name>.log

3. The standard error and output file (commonly referred to as the out file) to which all standard output and error messages are redirected – <server_name>.out

4. Access log file to which all incoming HTTP requests for that server are logged. This file is typically named access.log or access.log.<timestamp>

 

To identify these log files, you first need to identify the DOMAIN_HOME of the relevant FA domain on the particular server. For instructions on identifying the DOMAIN_HOME for any WebLogic domain on a particular server, please refer to Appendix A.

The location for various log files is:

Domain Log: <DOMAIN_HOME>/servers/AdminServer/logs/<domain_name>.log

Diagnostic Log for each server: <DOMAIN_HOME>/servers/<server_name>/logs/<server_name>-diagnostic.log

WLS Log: <DOMAIN_HOME>/servers/<server_name>/logs/<server_name>.log

Out File: <DOMAIN_HOME>/servers/<server_name>/logs/<server_name>.out

ESS Logs:

Enterprise Scheduler Service (ESS) is responsible for scheduling and running batch jobs. The ESS component itself runs in a WLS managed server named ess_server<n> in each domain that requires scheduling services; its server logs can be found using the procedure defined under the section “Domain Logs.” The log and output files for individual ESS jobs reside in the following locations:

ESS Logs: <FA_CONFIG_HOME>/ess/rfd/<requestid>/log

ESS Output: <FA_CONFIG_HOME>/ess/rfd/<requestid>/out

For individual requests, this information can be viewed in the database using the following query:

SELECT
      processgroup,
      logworkdirectory,
      outputworkdirectory
FROM  fusion_ora_ess.request_history
WHERE requestid = <request_id>;
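
Once the request id is known, the job's log and output can be read directly from the locations above; a minimal sketch, using 12345 as a purely hypothetical request id:

# Hedged example: inspect the log and output of one ESS request
REQ=12345                                    # hypothetical request id
ls -l <FA_CONFIG_HOME>/ess/rfd/$REQ/log <FA_CONFIG_HOME>/ess/rfd/$REQ/out
cat   <FA_CONFIG_HOME>/ess/rfd/$REQ/log/*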

 

Patching Logs:

FA Patch Manager is a tool to apply patches to FA. It calls OPatch to patch middleware components and adpatch to apply database artifacts to FA database.

FA Patch log files are located in <FA_CONFIG_HOME>/atgpf/logs directory.

 

Oftentimes, middleware patches may be applied directly to individual Oracle Homes using OPatch. OPatch logs are located under $ORACLE_HOME/cfgtoollogs/opatch. Each ORACLE_HOME is listed in the central inventory, whose location is specified during provisioning. The inventory.xml file under the ContentsXML sub-directory of the central inventory lists each Oracle Home.
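
As a short sketch of how the registered Oracle Homes could be listed from that file (assuming the inventory location recorded in /etc/oraInst.loc):

# Hedged example: list the Oracle Homes registered in the central inventory
INV=$(grep inventory_loc /etc/oraInst.loc | cut -d= -f2)
grep "HOME NAME" $INV/ContentsXML/inventory.xml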

 

OHS Logs:

OPMN log files are located in <FA_CONFIG_HOME>/CommonDomain_webtier/diagnostics/logs/OPMN/opmn

OHS logs files are located in <FA_CONFIG_HOME>/CommonDomain_webtier/diagnostics/logs/OHS/ohs1

 

BI Logs:

BI is composed of java components (running in WLS domain) and system components (managed by OPMN).

The WLS domain housing the BI java components is named BIDomain; you can follow the same procedure used to identify the other domain logs to get these BI logs.

System components reside under an OPMN instance. The instance home is <FA_CONFIG_HOME>/BIInstance. Log files for each BI system component such as Presentation Server, Cluster controller, Scheduler etc. are located in the following location:

<FA_CONFIG_HOME>/BIInstance/diagnostics/logs/<process-type>/<ias-component>

Here are examples of two such components:

Presentation Server: <FA_CONFIG_HOME>/BIInstance/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1

Cluster Controller: <FA_CONFIG_HOME>/BIInstance/diagnostics/logs/OracleBIClusterControllerComponent/coreapplication_obiccs1

 

Appendix A: Identifying DOMAIN_HOME for a particular WebLogic Domain

To identify DOMAIN_HOME for a particular Weblogic domain, you must first identify the MIDDLEWARE_HOME. Middleware home is the top level directory under which WLS Server and various other Oracle products are installed.

For IDM, it is <IDM_INSTALL_BASE>/products/app

And for FA, it is <FA_INSTALL_BASE>/fusionapps

Each domain is registered in the domain-registry.xml file under these middleware homes. A single domain can span multiple servers and can optionally be scaled out, so a single domain may be listed multiple times. Each listing represents a distinct domain home under which various managed servers may be running. Below are the screenshots of domain-registry for IDM and FA where each domain resides completely on a single server; a command-line sketch follows them:

IDMDomain:

IDM_Domain_Registry

Various FA Domains:

FA_Domain_Registry
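
As a quick sketch, the registered domain homes can also be listed directly from the registry file (shown here for FA, assuming the usual <domain location="..."/> entries; the IDM file works the same way):

# Hedged example: list the domain homes registered under the FA middleware home
grep "location=" <FA_INSTALL_BASE>/fusionapps/domain-registry.xml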

Understanding listen ports and addresses to effectively troubleshoot Fusion Applications


Introduction:

To communicate with any process, you need to know at least 3 things about it – IP address, port number and protocol. Fusion Applications comprises many running processes. End users communicate directly with only a handful of them, but all processes communicate with other processes to provide necessary services. Understanding various IP addresses and listen ports is very important to effectively troubleshoot communication between various components and identifying where the problem lies.

Main Article:

We will start by describing some of the key concepts used in this post. We will then demonstrate their relationships and a way to get details about them so that you can use them while troubleshooting.

IP address – an identifier for a device on a TCP/IP network. Servers usually have one or more Network Interface Cards (NICs). In the simplest configuration, each NIC has one IP address. This is usually referred to as the physical IP address and is also the IP address mapped to the network hostname in the Domain Name System (DNS).

Virtual IP (VIP) – an IP address that does not correspond to a physical network interface. You can assign multiple IP addresses to a single NIC card. These virtual interfaces show up as eth0:1, eth0:2 etc. The reason to use VIP instead of physical IP is easier portability. In case of hardware failure, you can assign the VIP to a different server and bring your application there.

Host Name – the name assigned to a server. This name is usually defined in corporate and optionally in public DNS and maps to the physical IP address of the server. In FA, we refer to two different types of hostnames – physical and abstract.
Physical hostname is the one defined in DNS and recognized across the network.
Abstract hostname is like a nickname or an alias – you can assign multiple nicknames to the same IP address. For example, a person may officially be recognized as Richard, but his friends may call him Rick, Dick or even Richie. In FA, abstract hostname to IP address mapping is defined in the hosts file instead of DNS so that the alias to IP address mapping can be kept private to only a particular instance of Fusion Application.

Listen Port – serves as an endpoint in an operating system for many types of communication. It is not a hardware device, but a logical construct that identifies a service or process. As an example, an HTTP server listens on port 80. A port number is unique to a process on a device. No two processes can share the same port number at the same time.

Ephemeral Port – a short-lived port for communications allocated automatically from a predefined range by the IP software. When a process makes a connection to another process on its listen port, the originating process is assigned a temporary port. It is also called the foreign port.

Think of a telephone communication. If you have to reach an individual and you do not wish to use your personal number since you want it to be available for incoming calls, you can use any available public telephone to make the call. This public phone also has a telephone number. It remains busy so long as you are on the call. As soon as you hang up, it becomes available for use by any other individual. Ephemeral ports are the equivalent of public telephones in this example.

Listen Address - the combination of IP address and listen port. A process can listen on various IP addresses, but usually on only one port. E.g., DB listener listens on port 1521 by default. In the most common configuration, a process either listens on a single IP address, or all available IP addresses.

Let us see how to use a useful and simple command “netstat” to identify and understand the addresses and their relationships.

In the simplest usage, netstat prints the following columns:

 

Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name

 

The important ones for this post are the following:

Proto: Protocol used for this port
Local Address: The listen address. The first part is the IP address; second is the port number
Foreign Address: The address of the remote end point of this socket. Think of it as the phone number of the public telephone you used to call your contact. Similar to the listen address, the first part is the IP address; second is the port number
State: State can have various values. The important ones are:

LISTEN: The socket is listening for incoming connections. Foreign address is not relevant for this line
ESTABLISHED: The socket has an established connection. Foreign address is the address of the remote end point of the socket.
CLOSE_WAIT: The remote end has shut down, waiting for the socket to close.

Let us now look at some example outputs of the netstat command. In the first example below, let us look at some listen sockets

 

     Proto Recv-Q Send-Q Local Address          Foreign Address State   PID/Program name
1.   tcp        0      0 :::10214               :::*            LISTEN  4229/java
2.   tcp        0      0 0.0.0.0:10206          0.0.0.0:*       LISTEN  4230/nqsserver
3.   tcp        0      0 10.228.136.10:10622    0.0.0.0:*       LISTEN  3501/httpd.worker

 

All 3 of the lines above show that the respective processes are listening for an incoming request on a particular address. This is evident from the value under the “State” column.

In lines 1 and 2, the processes are listening on all available IP addresses. This shows up in netstat as either “0.0.0.0” or a string of empty delimiters – “:::”. The listen port is 10214 and 10206 for java and nqsserver respectively. This means you can connect to these programs using any IP address that is bound to any interface on that server, so long as you use the right port number.

In line 3, the process is listening on port 10622 only on IP address 10.228.136.10. This means that any request coming on a different IP address will not reach the process, even if the IP address is of the same host.

This is particularly important when you use VIP for enabling dynamic failover of various components, such as SOA Server. In this case, telnet to physical_ip:port_number will fail, but to virtual_ip:port_number will succeed.

Let’s now look at another example of some established connections:

 

     Proto Recv-Q Send-Q Local Address              Foreign Address          State       PID/Program name
1.   tcp        0      0 10.228.136.10:10212        10.228.136.10:39709      ESTABLISHED 4232/nqsclustercont
2.   tcp        0      0 10.228.136.10:12927        10.233.24.88:5575        ESTABLISHED 3524/httpd.worker
3.   tcp        0      0 ::ffff:10.228.136.10:29024 ::ffff:10.233.24.88:1521 ESTABLISHED 19757/java

 

 

Established connections show details of both ends of the socket. Based on our analogy above, “Local Address” and “Foreign Address” are 2 ends of a telephone connection. However, just by looking at a single line of netstat output for a socket, it cannot be determined which one is the listen port and which one is the connection requestor.

Please note that “Foreign address” doesn’t have to be a different host, or even a different IP address. It is simply the other end of socket connection and hence is a different process, which may be running on the same host or a different one. The local address always refers to an IP address on the same host.

To determine which one is the listen port and which one is ephemeral port, you need to determine if there is a LISTEN socket for that particular IP address. To do that, make sure you are on the host that the IP address is tied to, and run the following command:

netstat -anp | grep <local_address_port_number> | grep LISTEN

If it returns a LISTEN socket, then that is the process to which a client is connected. The client information can be found by running a similar command on the host referred to in “Foreign Address.”

If a LISTEN socket is not returned, the Foreign Address refers to the LISTEN socket and this line of netstat output refers to a client connecting to a remote process.
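
A short sketch of the commands involved (port 8888 and the PID are hypothetical values):

# Hedged example: determine whether local port 8888 is a LISTEN port and which process owns it
netstat -anp | grep ":8888 " | grep LISTEN   # if a line is returned, 8888 is a listen port on this host
lsof -i :8888                                # shows the process bound to that port (run as the owner or root)
ps -fp 4321                                  # confirm what program the reported PID (4321 here) belongs to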

 

Troubleshooting scenario:

Let’s now try to apply this understanding to troubleshoot a simple problem – unable to access a web page.

You are trying to access the WebLogic administration console of CommonDomain and are unable to do so. To access the page, you type a URL similar to:

http://common-internal.mycompany.com:80/console

Note: The values used in this scenario are for demonstration purpose. You should substitute the appropriate values based on your environment

The first step is to identify which component has a problem. For that you need to understand the components involved. In a typical enterprise deployment of FA, a Load Balancer (LBR) sits in front of the HTTP Server, which in turn communicates with the WebLogic servers. Since the console application is deployed on the AdminServer, HTTP Server in turn talks to AdminServer of CommonDomain.

Here is a graphical representation of this flow:

 

01_ListenAddress

As you can see, the request flows through the LBR and the HTTP Server before reaching the AdminServer.

When you are unable to access a web page, the following are some of the common reasons:

1. Problem with name resolution
2. Problem with network layer preventing communication between 2 components
3. One or more components are down or unresponsive

We can use some basic tools to identify where the problem is. These tools are ping, telnet, netstat, lsof and ps. Once we have identified which component has a problem, we can figure out what is causing it. In this post, we will keep our focus on finding which component has a problem.

So let us walk through the request flow:

1. Check the name resolution to hostname/VIP in the URL (in our example common-internal.mycompany.com). Use the ping utility to determine if you can resolve the name, and contact the IP address. Since your browser is trying to contact the server, you will run ping utility on the desktop or device which is accessing this URL

ping common-internal.mycompany.com

Ping may not work if ICMP is disabled, but it will return the IP address that the name resolves to. Make sure this is the correct IP address. If it is not, the problem is with name resolution. If it returns the correct IP address, please move to the next step.

2. Make sure the port you are trying to reach on the hostname/VIP is reachable. Similar to #1 above, you will run telnet utility on the device you are trying to access the URL from.

telnet common-internal.mycompany.com 80

If telnet is unsuccessful, it means LBR cannot be reached on the HTTP port. This could be due to 2 reasons:

a. LBR is not listening or is down
b. Firewall or network issues are stopping this communication

If telnet is successful, please move to the next step.
3. The next step is to make sure LBR is configured properly and is routing requests to the HTTP Server(s) on the right port. Since LBR is a component outside of the FA stack and is usually managed by the network/security team, troubleshooting it is outside of the scope of this document. Make sure the team managing LBR confirms the configuration as well as reachability of the web server.

 

4. Another way to eliminate LBR and figure out if the problem is within FA stack and/or network communication between the components of the FA stack only is to directly access the URL from the web server. The initial URL we used resolved to LBR. We now need to figure out how to directly access this URL from the HTTP server. This can be done by changing the hostname in the URL to the hostname of one of the HTTP servers. Also change the port number in the URL to that of the listen address of the appropriate Virtual Host of the HTTP Server.

The virtual host configuration is stored in one of the files under $INSTANCE_HOME/config/OHS/<component_name>/moduleconf directory. These files are named FusionVirtualHost_<domain_short_form>.conf, where domain_short_form is fs for CommonDomain, hcm for HCMDomain, and so on.

Since common-internal is used for CommonDomain, we will look at FusionVirtualHost_fs.conf. The first few lines in this file specify the listen addresses (one for HTTP requests and one for HTTPS):

## Fusion Applications Virtual Host Configuration
Listen fusionhost.mycompany.com:10614
Listen fusionhost.mycompany.com:10613

 

 

The <VirtualHost> section specifies the VirtualHost configuration and this can be used to identify the mapping between LBR port and HTTP server port:

#Internal virtual host for fs
<VirtualHost fusionhost.mycompany.com:10613 >
ServerName http://common-internal.mycompany.com:80

 

So common-internal.mycompany.com maps to fusionhost.mycompany.com:10613
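
A minimal sketch of how this mapping can be found from the command line (using the same hypothetical hostnames, with ohs1 as an example component name):

# Hedged example: find which OHS virtual host serves common-internal.mycompany.com
cd $INSTANCE_HOME/config/OHS/ohs1/moduleconf
grep -l "common-internal.mycompany.com" FusionVirtualHost_*.conf      # identifies the file (fs here)
grep -B2 "ServerName http://common-internal.mycompany.com" FusionVirtualHost_fs.conf   # the <VirtualHost> listen address appears just above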

Now we can change the URL and try to access it directly. The new URL will be http://fusionhost.mycompany.com:10613/console. Please note that your organization may block direct access to servers on ports other than SSH from the desktops. In this case, you can access this URL from a browser running on the HTTP server itself.

If this URL works, the problem is with components before the HTTP server, namely, LBR and Desktop and the network between them.

If this URL also doesn’t work, please move to the next step.

5. Now we need to check whether Oracle HTTP Server (OHS) is working or not.

a. Check if it is running. Use “opmnctl status -l”
b. Check if it is listening on the port of interest – in this case, port 10613.

-bash-3.2$ netstat -anp | grep 10613 | grep LISTEN
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp   0   0   10.228.136.10:10613   0.0.0.0:*   LISTEN   3501/httpd.worker

If the above commands are unsuccessful, we need to troubleshoot why OHS is not running.

If the above commands are successful, please move to the next step.

6. Now let’s focus our attention on the final component of the flow – the WebLogic AdminServer. The first step is to identify its listen address. By default, the FA provisioning engine configures the AdminServer of a particular domain to listen on the IP address tied to the hostname (physical or abstract) specified in the response file created using the provisioning wizard. The Enterprise Deployment Guide for FA recommends changing this listen address to a VIP so that the AdminServer can be manually failed over in a highly available environment. Similar steps are recommended for automatic migration of the SOA Servers of each domain and the BI Server. Once the change to the listen address is made, the HTTP Server configuration needs to be edited to point to the new address.

So let’s determine if HTTP Server is configured properly and can communicate with the WebLogic Server – in this case AdminServer of the CommonDomain.

First, determine where the Location “/console” is configured to route. For CommonDomain, open FusionVirtualHost_fs.conf and look for “/console” under the internal virtual host section.

## Context roots for application consoleapp
<Location /console >
    SetHandler weblogic-handler
    WebLogicCluster fusionhost.mycompany.com:7001
</Location>

 

In our example, we have configured HTTP Server to direct incoming requests for “/console” to WLS running on host fusionhost.mycompany.com and port 7001.

First, we will verify connectivity to AdminServer.

Please make sure that this hostname maps to the right IP address.

Now use telnet to connect with AdminServer

telnet fusionhost.mycompany.com 7001

If telnet succeeds, make sure port 7001 is not accidentally in use by a different process. To do so, login to the host where AdminServer is supposed to run and issue the following command:

netstat -anp | grep 7001 | grep LISTEN

Example Output:

-bash-3.2$ netstat -anp | grep 7001 | grep LISTEN
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp   0   0   ::ffff:10.228.136.10:7001   :::*   LISTEN   14853/java

 

Make sure the process returned by the above command is actually the AdminServer.
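
For example (14853 being the PID from the hypothetical netstat output above):

# Hedged example: confirm that PID 14853 really is the AdminServer JVM
ps -p 14853 -o args= | tr ' ' '\n' | grep weblogic.Name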

If telnet fails, then it could point to the following:

a. HTTP Server is not configured to point to the correct listen address. Look at AdminServer configuration and make sure the Listen Address matches with HTTP server configuration.
b. AdminServer is either not running or not responding. Look at the AdminServer logs to determine whether it is healthy.
c. Network issues or firewall is blocking the communication.

 

Conclusion:

This concludes our troubleshooting of a failure of a web page request. As you can see, understanding listen addresses and rudimentary network tools is important in troubleshooting communication issues in Fusion Applications. This knowledge can be applied to non-HTTP requests; to products other than FA – even non-Oracle products.

Fusion HCM Cloud – Bulk Integration Automation Using Managed File Transfer (MFT) and Node.js


Introduction

Fusion HCM Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the bulk integration to load and extract data to/from the cloud.

The inbound tool is File-Based Loader (FBL), which is evolving into HCM Data Loader (HDL). HDL is a powerful tool for bulk-loading data from any source to Oracle Fusion Human Capital Management (Oracle Fusion HCM). HDL supports one-time data migration as well as incremental loads to support co-existence with Oracle applications such as E-Business Suite (EBS) and PeopleSoft (PSFT).

HCM Extracts is an outbound integration tool that lets you choose HCM data, gathers it from the HCM database, and archives it as XML. This archived raw XML data can be converted into a desired format and delivered to recipients over the supported channels.

HCM cloud implements Oracle WebCenter Content, a component of Fusion Middleware, to store and secure data files for both inbound and outbound bulk integration patterns.

Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal systems and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can leverage more resources to manage the transfer of files. The extensive built-in reporting capabilities allow you to get a quick status of a file transfer and resubmit it as required.

Node.js is a programming platform that allows you to execute server-side code similar to JavaScript in the browser. It enables real-time, two-way connections in web applications with push capability, using a non-blocking, event-driven I/O paradigm. Node.js is built on an event-driven, asynchronous model: incoming requests are non-blocking, and each request is passed off to an asynchronous callback handler. This frees up the main thread to respond to more requests.
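
As a small illustration of this model (the file name and port below are arbitrary examples, not part of any HCM integration):

var fs = require('fs');
var http = require('http');

// The request handler returns immediately; the file read completes later and
// its callback sends the response, so the single main thread stays free to
// accept further requests in the meantime.
http.createServer(function(req, res) {
  fs.readFile('status.json', 'utf8', function(err, data) {
    if (err) {
      res.writeHead(500);
      res.end('error reading file');
      return;
    }
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(data);
  });
}).listen(3000, function() {
  console.log('listening on port 3000');
});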

This post focuses on how to automate HCM Cloud batch integration using MFT (Managed File Transfer) and Node.js. MFT can receive files, decrypt/encrypt files and invoke Service Oriented Architecture (SOA) composites for various HCM integration patterns.

 

Main Article

Managed File Transfer (MFT)

Oracle Managed File Transfer (MFT) is a high performance, standards-based, end-to-end managed file gateway. It features design, deployment, and monitoring of file transfers using a lightweight web-based design-time console that includes file encryption, scheduling, and embedded FTP and sFTP servers.

Oracle MFT provides built-in compression, decompression, encryption and decryption actions for transfer pre-processing and post-processing. You can create new pre-processing and post-processing actions, which are called callouts.

The callouts can be associated with either the source or the target. The sequence of processing action execution during a transfer is as follows:

  1. Source pre-processing actions
  2. Target pre-processing actions
  3. Payload delivery
  4. Target post-processing actions
Source Pre-Processing

Source pre-processing is triggered right after a file has been received and has identified a matching Transfer. This is the best place to do file validation, compression/decompression, encryption/decryption and/or extend MFT.

Target Pre-Processing

Target pre-processing is triggered just before the file is delivered to the Target by the Transfer. This is the best place to send files to external locations and protocols not supported in MFT.

Target Post-Processing

Post-processing occurs after the file is delivered. This is the best place for notifications, analytics/reporting, or remote endpoint file renaming.

For more information, please refer to the Oracle MFT documentation.

 

HCM Inbound Flow

This is a typical Inbound FBL/HDL process flow:

inbound_mft

The FBL/HDL process for HCM is a two-phase web services process, as follows (a Node.js sketch of the two calls appears after the list):

  • Upload the data file to WCC/UCM using WCC GenericSoapPort web service
  • Invoke “LoaderIntegrationService” or “HCMDataLoader” to initiate the loading process.
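
Because both phases are plain SOAP-over-HTTPS calls, they can be scripted directly; the mft2hcm utility described later wraps this for you. The following is only a minimal sketch of the shape of the two calls: the host, the loader service path, the credentials and both envelope bodies are placeholders that must be taken from your own environment and from the WCC GenericSoapPort and HCM loader WSDLs.

// Two-phase inbound load, sketched with Node's https module.
// HOST, the loader path and both SOAP envelopes below are placeholders.
var https = require('https');

var HOST = 'hcm-host.oraclecloud.com'; // placeholder
var AUTH = 'Basic ' + Buffer.from('username:password').toString('base64');

function soapCall(path, envelope, callback) {
  var req = https.request({
    host: HOST,
    path: path,
    method: 'POST',
    headers: {
      'Content-Type': 'text/xml; charset=UTF-8',
      'Content-Length': Buffer.byteLength(envelope),
      'Authorization': AUTH
    }
  }, function(res) {
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() { callback(null, body); });
  });
  req.on('error', callback);
  req.write(envelope);
  req.end();
}

// Phase 1: upload the data file to WCC/UCM, then Phase 2: initiate the load.
soapCall('/idcws/GenericSoapPort', '<soapenv:Envelope>...upload request...</soapenv:Envelope>',
  function(err, uploadResponse) {
    if (err) { return console.error(err); }
    soapCall('/path/to/LoaderIntegrationService', '<soapenv:Envelope>...loader request...</soapenv:Envelope>',
      function(err2, loaderResponse) {
        if (err2) { return console.error(err2); }
        console.log('Load initiated: ' + loaderResponse);
      });
  });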

The following diagram illustrates the MFT steps with respect to “Integration” for FBL/HDL:

inbound_mft_2

HCM Outbound Flow

This is a typical outbound batch Integration flow using HCM Extracts:

extractflow

 

The “Extract” process for HCM has the following steps:

  • An Extract report is generated in HCM, either by a user or through the Enterprise Scheduler Service (ESS); the report is stored in WCC under the hcm/dataloader/export account.
  • The MFT scheduler can pull files from WCC.
  • The data file(s) are either uploaded to the customer’s sFTP server as a pass-through, or to integration tools such as Service Oriented Architecture (SOA) for orchestrating and processing data to target applications in the cloud or on-premise.

The following diagram illustrates the MFT orchestration steps in “Integration” for Extract:

 

outbound_mft

 

The extracted file could be delivered to the WebCenter Content server. HCM Extracts can generate an encrypted output file. In the Extract delivery options, ensure the following are correctly configured:

  • Set the HCM Delivery Type to “HCM Connect”
  • Select an Encryption Mode from the four supported encryption types, or select None
  • Specify the Integration Name – this value is used to build the title of the entry in WebCenter Content

 

Extracted File Naming Convention in WebCenter Content

The file will have the following properties:
Author: FUSION_APPSHCM_ESS_APPID
Security Group: FAFusionImportExport
Account: hcm/dataloader/export
Title: HEXTV1CON_{IntegrationName}_{EncryptionType}_{DateTimeStamp}

 

Fusion Applications Security

The content in WebCenter Content is secured through users, roles, privileges and accounts. The user could be any valid user with a role such as “Integration Specialist.” The role may have privileges such as read, write and delete. The accounts are predefined by each application. For example, HCM uses hcm/dataloader/import and hcm/dataloader/export for inbound and outbound integrations respectively.
The FBL/HDL web services are secured through Oracle Web Service Manager (OWSM) using the following policy: oracle/wss11_saml_or_username_token_with_message_protection_service_policy.

The client must satisfy the message protection policy to ensure that the payload is encrypted or sent over the SSL transport layer.

A client policy that can be used to meet this requirement is: “oracle/wss11_username_token_with_message_protection_client_policy”

To use this policy, the message must be encrypted using a public key provided by the server. When the message reaches the server it can be decrypted by the server’s private key. A KeyStore is used to import the certificate and it is referenced in the subsequent client code.

The public key can be obtained from the certificate provided in the service WSDL file.

Encryption of Data File using Pretty Good Privacy (PGP)

All data files transit over a network via SSL. In addition, HCM Cloud supports encryption of data files at rest using PGP.
Fusion HCM supports the following types of encryption:

  • PGP Signed
  • PGP Unsigned
  • PGPX509 Signed
  • PGPX509 Unsigned

To use this PGP encryption capability, a customer must exchange encryption keys with Fusion so that:

  • Fusion can decrypt inbound files
  • Fusion can encrypt outbound files
  • Customer can encrypt files sent to Fusion
  • Customer can decrypt files received from Fusion

MFT Callout using Node.js

 

Prerequisites

To automate HCM batch integration patterns, the following components, described below, must be installed and configured:

 

Node.js Utility

A simple Node.js utility, “mft2hcm”, has been developed to be invoked from an MFT server callout; it uploads files to or downloads files from the Oracle WebCenter Content server and initiates the HCM SaaS loader service. It utilizes the node “mft-upload” package and provides SOAP substitution templates for WebCenter (UCM) and the Oracle HCM Loader service.

Please refer to the “mft2hcm” node package for installation and configuration.

RunScript

RunScript is a callout that can be injected into MFT pre- or post-processing; here it is configured as “Run Script Pre 01”. This callout always sends the following default parameters to the script:

  • Filename
  • Directory
  • ECID
  • Filesize
  • Targetname (not for source callouts)
  • Sourcename
  • Createtime

Please refer to “PreRunScript” for more information on installation and configuration.
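
How these parameters reach the script depends on how the callout and script are registered; assuming, purely for illustration, that they arrive as name=value pairs on the command line, a callout script could read them as follows.

// Sketch of an MFT callout script reading its parameters.
// Assumes, for illustration only, that MFT passes the defaults as
// name=value command-line arguments, for example:
//   node callout.js Filename=Worker.zip Directory=/in ECID=abc123
var params = {};
process.argv.slice(2).forEach(function(arg) {
  var idx = arg.indexOf('=');
  if (idx > 0) {
    params[arg.slice(0, idx)] = arg.slice(idx + 1);
  }
});

console.log('Processing file : ' + params.Filename);
console.log('From directory  : ' + params.Directory);
console.log('Transfer ECID   : ' + params.ECID);
// ...decide here whether to upload to UCM, invoke the HCM loader, send a notification, etc.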

MFT Design

MFT Console enables the following tasks depending on your user roles:

Designer: Use this page to create, modify, delete, rename, and deploy sources, targets, and transfers.

Monitoring: Use this page to monitor transfer statistics, progress, and errors. You can also use this page to disable, enable, and undeploy transfer deployments and to pause, resume, and resubmit instances.

Administration: Use this page to manage the Oracle Managed File Transfer configuration, including embedded server configuration.

Please refer to the MFT Users Guide for more information.

 

HCM FBL/HDL MFT Transfer

This is a typical MFT transfer design and configuration for FBL/HDL:

MFT_FBL_Transfer

The transfer could be designed for additional steps such as compress file and/or encrypt/decrypt files using PGP, depending on the use cases.

 

HCM FBL/HDL (HCM-MFT) Target

The MFT server receives files from any source protocol such as SFTP, SOAP, the local file system or a back-end integration process. The file can be decrypted, uncompressed or validated before a Source or Target pre-processing callout uploads it to UCM and then notifies HCM to initiate the batch load. Finally, the original file is backed up to the local file system, a remote SFTP server or a cloud-based storage service. An optional notification can also be delivered to the caller by a Target post-processing callout upon successful completion.

This is a typical target configuration in the MFT-HCM transfer:

Click on target Pre-Processing Action and select “Run Script Pre 01”:

MFT_RunScriptPre01

 

In “scriptLocation”, enter the location where the node package “mft2hcm” is installed. For example: <Node.js-Home>/hcm/node_modules/mft2hcm/mft2hcm.js

MFTPreScriptUpload

 

Do not check “UseFileFromScript”. This property replaces the inbound (source) file of MFT with the file returned by the target execution. In FBL/HDL, the response from the target execution does not contain a file.

 

HCM Extract (HCM-MFT) Transfer

An external event or scheduler triggers the MFT server to search for a file in WCC using a search query. Once a document id is identified, it is retrieved using a “Source Pre-Processing” callout, which injects the retrieved file into the MFT Transfer. The file can then be decrypted, validated or decompressed before being sent to an MFT Target of any protocol such as SFTP, file system, SOAP Web Service or a back-end integration process. Finally, the original file is backed up to the local file system, a remote SFTP server or a cloud-based storage service. An optional notification can also be delivered to the caller by a Target post-processing callout upon successful completion. The MFT server can live either on-premise or in a cloud iPaaS hosted environment.

This is a typical configuration of HCM-MFT Extract Transfer:

MFT_Extract_Transfer

 

In the Source definition, add “Run Script Pre 01” processing action and enter the location of the script:

MFTPreScriptDownload

 

“UseFileFromScript” must be checked because the source scheduler is triggered with the mft2hcm payload (UCM-PAYLOAD-SEARCH) to initiate WCC’s search and get operations. Once the file is retrieved from WCC, this flag tells the MFT engine to substitute the source file with the one downloaded from WCC.

 

Conclusion

This post demonstrates how to automate HCM inbound and outbound patterns using MFT and Node.js. The Node.js package could be replaced with WebCenter Content native APIs and SOA for orchestration. This process can also be replicated for other Fusion Applications pillars such as Oracle Enterprise Resource Planning (ERP).

Integrating Oracle Fusion Applications – WebCenter / Universal Content Management (UCM) with Oracle Business Intelligence Cloud Service (BICS)


Introduction

 

This article describes how to integrate Oracle Fusion Applications – WebCenter / Universal Content Management (UCM) with Oracle Business Intelligence Cloud Service (BICS). The integration pattern covered shares similarities to those addressed in the previously published A-Team blog on: “Integrating Oracle Fusion Sales Cloud with Oracle Business Intelligence Cloud Service (BICS)”. The motivation behind this article is to provide a fresh perspective on this subject, and to offer an alternative for use cases unable to use OTBI web services to extract Fusion data.

The solution uses PL/SQL and Soap Web Services to retrieve the Fusion data. It was written and tested on Fusion Sales Cloud R10. That said, it is also relevant to any other Oracle product that has access to WebCenter / UCM – provided that idcws/GenericSoapPort?wsdl is publicly available. The article is geared towards BICS installations on an Oracle Schema Service Database. However, it may also be useful for DbaaS environments.

The artifacts provided can be used as a starting point to build a custom BICS – WebCenter / UCM adapter. The examples were tested against a small test data-set, and it is anticipated that code changes will be required before applying to a Production environment.

The article is divided into four steps:

 

Step One – Create and Activate the Schedule Export Process

Describes the data staging process – which is configured through “Schedule Export” (accessed via “Setup and Maintenance”). “Schedule Export” provides a variety of export objects for each Fusion module / product family. It walks through creating, editing, scheduling, and activating the “Scheduled Export”. The results of the “Scheduled Export” are saved to a CSV file stored in WebCenter / UCM.

Step Two – Confirm data is available in WebCenter / UCM

Verifies that the user can log into Webcenter / UCM and access the CSV file that was created in Step One. The “UCM ID” associated with the CSV file is visible from the WebCenter / UCM Content Server Search. The id is then used in Step Three to programmatically search for the object.

Step Three – Test GET_SEARCH_RESULTS and GET_FILE Soap Requests

Outlines how to build the Soap requests that utilize the public idcws/GenericSoapPort?wsdl (available in Fusion R10). “GET_SEARCH_RESULTS” is used to retrieve the “dID” based on the “UCM ID” of the CSV file (gathered from Step Two). “GET_FILE” is then used to retrieve the file associated with the given “dID”. The file is returned as a SOAP attachment.

Step Four – Code the Stored Procedure

Provides PL/SQL samples that can be used as a starting point to build out the integration solution. The database artifacts are created through Apex SQL Workshop SQL Commands. The Soap requests are called using apex_web_service.make_rest_request.

Generally speaking, make_rest_request is reserved for RESTful Web Services and apex_web_service.make_request for Soap Web Services. However, in this case it was not possible to use apex_web_service.make_request as the data returned was not compatible with the mandatory output of XMLTYPE. Apex_web_service.make_rest_request has been used as a workaround as it offers additional flexibility, allowing the data to be retrieved as a CLOB.

For “GET_SEARCH_RESULTS” the non-XML components of the file are removed, and the data is saved as XMLTYPE so that the namespace can be used to retrieve the “dID”.

For “GET_FILE” the data is kept as a CLOB. The non-CSV components of the file are removed from the CLOB. Then the data is parsed to the database using “csv_util_pkg.clob_to_csv” that is installed from the Alexandria PL/SQL Utility Library.

 

Main Article

 

Step One – Create and Activate the Schedule Export Process

 

1)    Click Setup and Maintenance.

Snap0

 2)    Enter “Schedule Export” into the “Search: Tasks” search box.

Click the arrow to search.

Snap2

3)    Click Go to Task.

Snap3

 

4)    Click Create.

Snap4

 

 

 

 

 

 

5)    Type in Name and Description.

Click Next.

Snap1

 

6)    Click Actions -> Create.

Snap6

 

 

7)    Select the desired export object.

Click Done.

Snap2

 

 

 

 

 

8)    Click the arrow to expand the attributes.

Snap9

 

9)    Un-check unwanted attributes.

For this example the following five attributes have been selected:

a)    Territory Name
b)    Status Code
c)    Status
d)    Type
e)    Forecast Participation Code

Snap3 Snap4

10)   Select Schedule Type = Immediate.

Click Next.

Snap10

11)   Click Activate.

Snap11

 

12)   Click refresh icon (top right) until status shows as “Running”.

Confirm process completed successfully.

Snap12a

 

Snap6

Snap7

13)   Once the process is complete – Click on “Exported data file” CSV file link.

Confirm CSV contains expected results.

Snap8

Step Two – Confirm data is available in WebCenter / UCM

 

1)    Login to WebCenter / UCM.

https://hostname.fs.em2.oraclecloud.com/cs

2)    Search by Title.

Search by Title for the CSV file.

Snap1

 

3)    Click the ID to download the CSV file.

4)    Confirm the CSV file contains the expected data-set.

Snap2

Step Three – Test GET_SEARCH_RESULTS and GET_FILE Soap Requests

 

1)    Confirm that the idcws/GenericSoapPort?wsdl is accessible. (Note this is only public in Fusion Applications R10.)

https://hostname.fs.em2.oraclecloud.com/idcws/GenericSoapPort?wsdl

Snap15

 

2)    Launch SOAPUI.

Enter the idcws/GenericSoapPort?wsdl in the Initial WSDL box.

Click OK.

https://hostname.fs.em2.oraclecloud.com/idcws/GenericSoapPort?wsdl

Snap16

 

3)    Right Click on the Project.

Select “Show Project View”.

Snap17

4)    Add a new outgoing WSS Configuration called “Outgoing” and a new WSS Entry for “Username”

a)    Click on the “WS-Security Configurations” tab.

b)    Click on the + (plus sign) located in the top left.

c)    Type “Outgoing” in Name.

d)    Click OK.

e)    Click on the + (plus sign) located in the middle left. Select Username. Click OK.

f)    Type in the WebCenter / UCM  user name and password.

g)    In the Password Type drop down box select “PasswordText”

Snap18

 

5)    Add a new WSS Entry for Timestamp

a)    Click on the + (plus sign) again located in middle left.

b)    Select Timestamp. Put in a very large number. This is the timeout in milliseconds.

c)    Close the window.

Snap19

 

6)    Click on Request

Delete the default envelope and replace it with below:

Replace the highlighted UCM ID with that found in “Step Two – Confirm data is available in WebCenter / UCM”.

Do not remove the [``] tick marks around the UCM ID.

For a text version of this code click here.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ucm="http://www.oracle.com/UCM">
<soapenv:Header>
Right Click here … then remove this text
</soapenv:Header>
<soapenv:Body>
<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_SEARCH_RESULTS">
<ucm:Document>
<ucm:Field name="QueryText">dDocName &lt;starts> `UCMFA001069`</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>
</soapenv:Body>
</soapenv:Envelope>

7)    Place the cursor in between the <soapenv:Header> tags (i.e. “Right Click here … then remove this text”).

Right Click -> Select Outgoing WSS -> Select Apply “Outgoing”.

Snap4

8)    The previously defined header containing the user name, password, and timeout settings should now be added to the request.

Remove the “Right Click here … then remove this text” comment.

Confirm Outgoing WSS has been applied to the correct position.

Snap5

9)    Submit the Request (by hitting the green arrow in the top left).

Snap6

10)   The Request should return XML containing the results of GET_SEARCH_RESULTS.

Ctrl F -> Find: dID

Snap7

 

 

 

 

11)   Make note of the dID. For example the dID below is “1011”.

Snap8

 

 

12)   Right Click on the Request.

Rename it GET_SEARCH_RESULTS for later use.

Snap25a

 

 

Snap25b

 

 

13)   Right Click on GenericSoapOperation -> Select New Request

Snap26

 

 

14)   Name it GET_FILE.

Snap27

 

 

15)   Delete the default request envelope and replace with below:

For a text version of the code click here.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ucm="http://www.oracle.com/UCM">
<soapenv:Header>
Right Click here … then remove this text
</soapenv:Header>
<soapenv:Body>
<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_FILE">
<ucm:Document>
<ucm:Field name="dID">1011</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>
</soapenv:Body>
</soapenv:Envelope>

16)  (a)   Repeat process of adding Outgoing WSS

Place the cursor in between the <soapenv:Header> tags (i.e. “Right Click here … then remove this text”).

Right Click -> Select Outgoing WSS -> Select Apply “Outgoing”.

The previously defined header containing the user name, password, and timeout settings should now be added to the request.

Remove the “Right Click here … then remove this text” comment.

Confirm Outgoing WSS has been applied to the correct position.

(b)   Submit the Request (green arrow top left).

An attachment should be generated.

Click on the Attachments tab (at bottom).

Double click to open the attachment.

Snap9

 

17)   Confirm results are as expected.

Snap10

Step Four – Code the Stored Procedure

1)    Test GET_SEARCH_RESULTS PL/SQL

(a)    Copy the below PL/SQL code into the Apex -> SQL Workshop -> SQL Commands.

(b)    Replace:

(i) Username

(ii) Password

(iii) Hostname

(iv) dDocName i.e. UCMFA001069

(c)    For a text version of the PL/SQL click here.

DECLARE
l_user_name VARCHAR2(100) := 'username';
l_password VARCHAR2(100) := 'password';
l_ws_url VARCHAR2(500) := 'https://hostname.fs.us2.oraclecloud.com/idcws/GenericSoapPort?wsdl';
l_ws_action VARCHAR2(500) := 'urn:GenericSoap/GenericSoapOperation';
l_ws_response_clob CLOB;
l_ws_response_clob_clean CLOB;
l_ws_envelope CLOB;
l_http_status VARCHAR2(100);
v_dID VARCHAR2(100);
l_ws_resp_xml XMLTYPE;
l_start_xml PLS_INTEGER;
l_end_xml PLS_INTEGER;
l_resp_len PLS_INTEGER;
l_xml_len PLS_INTEGER;
clob_l_start_xml PLS_INTEGER;
clob_l_resp_len PLS_INTEGER;
clob_l_xml_len PLS_INTEGER;
clean_clob_l_end_xml PLS_INTEGER;
clean_clob_l_resp_len PLS_INTEGER;
clean_clob_l_xml_len PLS_INTEGER;
v_cdata VARCHAR2(100);
v_length INTEGER;
BEGIN
l_ws_envelope :=
'<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ucm="http://www.oracle.com/UCM">
<soapenv:Body>
<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_SEARCH_RESULTS">
<ucm:Document>
<ucm:Field name="QueryText">dDocName &lt;starts> `UCMFA001069`</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>
</soapenv:Body>
</soapenv:Envelope>';
apex_web_service.g_request_headers(1).name := 'SOAPAction';
apex_web_service.g_request_headers(1).value := l_ws_action;
apex_web_service.g_request_headers(2).name := 'Content-Type';
apex_web_service.g_request_headers(2).value := 'text/xml; charset=UTF-8';
l_ws_response_clob := apex_web_service.make_rest_request(
p_url => l_ws_url,
p_http_method => 'POST',
p_body => l_ws_envelope,
p_username => l_user_name,
p_password => l_password);
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,24000,1));
--Tested on a very small CLOB. Less than 32767. If larger may need to slice.
--dbms_output.put_line(length(l_ws_response_clob));
--Remove header as it is not XML
clob_l_start_xml := INSTR(l_ws_response_clob,'<?xml',1,1);
clob_l_resp_len := LENGTH(l_ws_response_clob);
clob_l_xml_len := clob_l_resp_len - clob_l_start_xml + 1;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob,clob_l_xml_len,clob_l_start_xml);
--dbms_output.put_line(l_ws_response_clob_clean);
--Remove the tail as it is not XML
clean_clob_l_end_xml := INSTR(l_ws_response_clob_clean,'------=',1,1);
clean_clob_l_resp_len := LENGTH(l_ws_response_clob_clean);
clean_clob_l_xml_len := clean_clob_l_end_xml - 1;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob_clean,clean_clob_l_xml_len,1);
--dbms_output.put_line(l_ws_response_clob_clean);
--Convert CLOB to XMLTYPE
l_ws_resp_xml := XMLTYPE.createXML(l_ws_response_clob_clean);
select (cdata_section)
into v_cdata
from
xmltable
(
xmlnamespaces
(
'http://schemas.xmlsoap.org/soap/envelope/' as "env",
'http://www.oracle.com/UCM' as "ns2"
),
'//env:Envelope/env:Body/ns2:GenericResponse/ns2:Service/ns2:Document/ns2:ResultSet/ns2:Row/ns2:Field[@name="dID"]'
passing l_ws_resp_xml
columns
cdata_section VARCHAR2(100) path 'text()'
) dat;
dbms_output.put_line('dID:' || v_cdata);
END;

 (d)    The Results should show the corresponding dID.

Snap11

2)    Install the relevant alexandria-plsql-utils

(a)    Go to: https://github.com/mortenbra/alexandria-plsql-utils

Snap2

(b)    Click Download ZIP

Snap1

(c)    Run these three sql scripts / packages in this order:

\plsql-utils-v170\setup\types.sql

\plsql-utils-v170\ora\csv_util_pkg.pks

\plsql-utils-v170\ora\csv_util_pkg.pkb

3)    Create table in Apex to insert data into.

For a text version of the SQL click here.

CREATE TABLE TERRITORY_INFO(
Territory_Name VARCHAR(100),
Status_Code VARCHAR(100),
Status VARCHAR(100),
Type VARCHAR(100),
Forecast_Participation_Code VARCHAR(100)
);

4)    Test UCM_GET_FILE STORED PROCEDURE

(a)    Copy the below PL/SQL code into the Apex -> SQL Workshop -> SQL Commands.

(b)    Replace:

(I) Column names in SQL INSERT as needed

(Ii) Header name of first column. i.e. “Territory Name”

(c)    For a text version of the PL/SQL click here.

CREATE OR REPLACE PROCEDURE UCM_GET_FILE
(
p_ws_url VARCHAR2,
p_user_name VARCHAR2,
p_password VARCHAR2,
p_dID VARCHAR2
) IS
l_ws_envelope CLOB;
l_ws_response_clob CLOB;
l_ws_response_clob_clean CLOB;
l_ws_url VARCHAR2(500) := p_ws_url;
l_user_name VARCHAR2(100) := p_user_name;
l_password VARCHAR2(100) := p_password;
l_ws_action VARCHAR2(500) := 'urn:GenericSoap/GenericSoapOperation';
l_ws_resp_xml XMLTYPE;
l_start_xml PLS_INTEGER;
l_end_xml PLS_INTEGER;
l_resp_len PLS_INTEGER;
clob_l_start_xml PLS_INTEGER;
clob_l_resp_len PLS_INTEGER;
clob_l_xml_len PLS_INTEGER;
clean_clob_l_end_xml PLS_INTEGER;
clean_clob_l_resp_len PLS_INTEGER;
clean_clob_l_xml_len PLS_INTEGER;
BEGIN
l_ws_envelope :=
'<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ucm="http://www.oracle.com/UCM">
<soapenv:Body>
<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_FILE">
<ucm:Document>
<ucm:Field name="dID">'|| p_dID ||'</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>
</soapenv:Body>
</soapenv:Envelope>
';
apex_web_service.g_request_headers(1).name := 'SOAPAction';
apex_web_service.g_request_headers(1).value := l_ws_action;
apex_web_service.g_request_headers(2).name := 'Content-Type';
apex_web_service.g_request_headers(2).value := 'text/xml; charset=UTF-8';
l_ws_response_clob := apex_web_service.make_rest_request(
p_url => l_ws_url,
p_http_method => 'POST',
p_body => l_ws_envelope,
p_username => l_user_name,
p_password => l_password);
--Note: This was tested with a very small result-set
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,24000,1));
--Tested on a very small CLOB. Less than 32767. If larger may need to slice.
--dbms_output.put_line(length(l_ws_response_clob));
--Remove junk header
clob_l_start_xml := INSTR(l_ws_response_clob,'"Territory Name"',1,1);
clob_l_resp_len := LENGTH(l_ws_response_clob);
clob_l_xml_len := clob_l_resp_len - clob_l_start_xml + 1;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob,clob_l_xml_len,clob_l_start_xml);
--dbms_output.put_line(l_ws_response_clob_clean);
--Remove junk footer
clean_clob_l_end_xml := INSTR(l_ws_response_clob_clean,CHR(13),-3)-2;
clean_clob_l_resp_len := LENGTH(l_ws_response_clob_clean);
clean_clob_l_xml_len := clean_clob_l_end_xml;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob_clean,clean_clob_l_xml_len,1);
--dbms_output.put_line(l_ws_response_clob_clean);
--Insert into database
DELETE FROM TERRITORY_INFO;
INSERT INTO TERRITORY_INFO (Territory_Name,Status_Code,Status,Type,Forecast_Participation_Code)
select C001,C002,C003,C004,C005 FROM table(csv_util_pkg.clob_to_csv(l_ws_response_clob_clean,',',1));
END;

(d)    To run the stored procedure – Copy the below PL/SQL code into the Apex -> SQL Workshop -> SQL Commands.

(e)    For a text version of the PL/SQL click here.

(f)    Replace:

(i) Username

(ii) Password

(iii) Hostname

(iv) dID i.e. 1011

BEGIN
UCM_GET_FILE('https://hostname.fs.us2.oraclecloud.com/idcws/GenericSoapPort?wsdl','username','password','dID');
END;

(g)    Confirm data was loaded successfully

SELECT * FROM TERRITORY_INFO;

Snap3

5) Combine GET_SEARCH_RESULTS and GET_FILE

(a)    Copy the below PL/SQL code into the Apex -> SQL Workshop -> SQL Commands.

(b)    Replace:

(i) Username

(ii) Password

(iii) Hostname

(iv) dDocName i.e. UCMFA001069

(c)    Note: The only change made to the original GET_SEARCH_RESULTS code is the final call to UCM_GET_FILE (highlighted in green in the original post).

(d)    For a text version of the PL/SQL click here.

(e)    Note: This code would also be converted to a stored procedure with parameters should it be implemented in production.

DECLARE
l_user_name VARCHAR2(100) := 'username';
l_password VARCHAR2(100) := 'password';
l_ws_url VARCHAR2(500) := 'https://hostname.fs.us2.oraclecloud.com/idcws/GenericSoapPort?wsdl';
l_ws_action VARCHAR2(500) := 'urn:GenericSoap/GenericSoapOperation';
l_ws_response_clob CLOB;
l_ws_response_clob_clean CLOB;
l_ws_envelope CLOB;
l_http_status VARCHAR2(100);
v_dID VARCHAR2(100);
l_ws_resp_xml XMLTYPE;
l_start_xml PLS_INTEGER;
l_end_xml PLS_INTEGER;
l_resp_len PLS_INTEGER;
l_xml_len PLS_INTEGER;
clob_l_start_xml PLS_INTEGER;
clob_l_resp_len PLS_INTEGER;
clob_l_xml_len PLS_INTEGER;
clean_clob_l_end_xml PLS_INTEGER;
clean_clob_l_resp_len PLS_INTEGER;
clean_clob_l_xml_len PLS_INTEGER;
v_cdata VARCHAR2(100);
v_length INTEGER;
BEGIN
l_ws_envelope :=
'<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ucm="http://www.oracle.com/UCM">
<soapenv:Body>
<ucm:GenericRequest webKey="cs">
<ucm:Service IdcService="GET_SEARCH_RESULTS">
<ucm:Document>
<ucm:Field name="QueryText">dDocName &lt;starts> `UCMFA001069`</ucm:Field>
</ucm:Document>
</ucm:Service>
</ucm:GenericRequest>
</soapenv:Body>
</soapenv:Envelope>';
apex_web_service.g_request_headers(1).name := 'SOAPAction';
apex_web_service.g_request_headers(1).value := l_ws_action;
apex_web_service.g_request_headers(2).name := 'Content-Type';
apex_web_service.g_request_headers(2).value := 'text/xml; charset=UTF-8';
l_ws_response_clob := apex_web_service.make_rest_request(
p_url => l_ws_url,
p_http_method => 'POST',
p_body => l_ws_envelope,
p_username => l_user_name,
p_password => l_password);
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,24000,1));
--Tested on a very small CLOB. Less than 32767. If larger may need to slice.
--dbms_output.put_line(length(l_ws_response_clob));
--Remove header as it is not XML
clob_l_start_xml := INSTR(l_ws_response_clob,'<?xml',1,1);
clob_l_resp_len := LENGTH(l_ws_response_clob);
clob_l_xml_len := clob_l_resp_len - clob_l_start_xml + 1;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob,clob_l_xml_len,clob_l_start_xml);
--dbms_output.put_line(l_ws_response_clob_clean);
--Remove the tail as it is not XML
clean_clob_l_end_xml := INSTR(l_ws_response_clob_clean,'------=',1,1);
clean_clob_l_resp_len := LENGTH(l_ws_response_clob_clean);
clean_clob_l_xml_len := clean_clob_l_end_xml - 1;
l_ws_response_clob_clean := dbms_lob.substr(l_ws_response_clob_clean,clean_clob_l_xml_len,1);
--dbms_output.put_line(l_ws_response_clob_clean);
--Convert CLOB to XMLTYPE
l_ws_resp_xml := XMLTYPE.createXML(l_ws_response_clob_clean);
select (cdata_section)
into v_cdata
from
xmltable
(
xmlnamespaces
(
'http://schemas.xmlsoap.org/soap/envelope/' as "env",
'http://www.oracle.com/UCM' as "ns2"
),
'//env:Envelope/env:Body/ns2:GenericResponse/ns2:Service/ns2:Document/ns2:ResultSet/ns2:Row/ns2:Field[@name="dID"]'
passing l_ws_resp_xml
columns
cdata_section VARCHAR2(100) path 'text()'
) dat;
--dbms_output.put_line('dID:' || v_cdata);
UCM_GET_FILE(l_ws_url,l_user_name,l_password,v_cdata);
END;

Further Reading

Click here for the Application Express API Reference Guide –  MAKE_REST_REQUEST Function

Click here for the Alexandria-plsql-utils

Click here for related A-Team BICS blogs

Summary

This article described how to integrate Oracle Fusion Applications – WebCenter / Universal Content Management (UCM) with Oracle Business Intelligence Cloud Service (BICS). It covered the functional and technical steps necessary to automate exporting of data from WebCenter / UCM and load it into a BICS database.

In order to implement this solution the WebCenter / UCM idcws/GenericSoapPort?wsdl must be publicly available. At the time of writing this was available in Fusion Applications R10.

The solution did not cover creating the BICS Data Model or Dashboards. Information on this topic can be found on other A-Team blogs published by the same author.

The SQL scripts provided are for demonstration purposes only. They were tested on a small sample data-set. It is anticipated that code adjustments would be needed to accommodate larger Production data-sets.

Simplified Role Hierarchy in R10


Introduction

Our teammate Jack Desai published an article last year about the Fusion Applications roles concept. It gives you a great overview of the design that grants specific users access to certain functionality. His article familiarizes you with the concepts of Abstract Roles, Duty Roles, Job Roles and Data Roles and how they are used in the Role Based Access Control (RBAC) model in Fusion Applications.

Starting with the current Fusion Apps Release 10, further improvements to the role concept have been implemented, resulting in flatter role hierarchies. More detailed information about the Simplified Reference Role Models can be found in Oracle Support DocId 2016990.1.

The article below gives a short introduction to and overview of the changes and briefly compares role hierarchy features between R10 and pre-R10.

Role Hierarchy Changes in R10

As shown in the drawing below the main change is a simplification of complex structures in pre-R10 releases. For this purpose the old Duty Roles have been deprecated and replaced by newer versions with different and leaner assignments. These assignments can be other roles or even privileges. The previous model sometimes had many Duty Roles assigned to Job Roles. As a result creating custom Job Roles could become a challenge.

1_Overview_Slide_Role_Model_1

By introducing a simplified reference model the following benefits were achieved in R10:

  • New model provides a clean reference
  • For customers not using any customized roles the new features have been seamlessly activated, so customers won’t even notice these changes as the new features were installed during the R10 upgrade or provisioning process.
  • For Oracle Applications Cloud some new capabilities have been implemented to allow new features in an isolated model
  • Last but not least with R10 we’re having a reduced number of roles compared to the role model pre-R10

The most visible aspect of the Release 10 simplification is the introduction of an Application Job Role and a reduced number of duty roles when compared to the role model in the previous release. Also worth mentioning is that newly created Application Role names start with the prefix “ORA_”.

2_Overview_Slide_Role_Model_2

As shown in the figure above, the R10 changes also include an evolution of the privilege assignments to duty and/or application roles. However, the new role reference model doesn’t necessarily affect the setup details of these privileges (i.e. Entitlements, Resources, etc.). Where privileges have changed in R10, the changes were driven by functional and technical requirements in Fusion Applications. Naturally, these R10 changes apply to the entire Fusion Applications suite and work the same way for on-premise and Oracle Cloud instances.

Comparison of sample roles Pre-R10 vs R10

When comparing the two role models, we should start with the similarities between pre-R10 and R10, which are:

  • The set of privileges to control assignment of features or functionality is the same in both role models
  • If an existing privilege has been modified by Oracle Applications Cloud to improve existing functionality, then both models benefit from the improvement.
  • The duty roles are organized by application sources (i.e. Product Pillars), such as Customer Relationship Management (CRM), Human Capital Management (HCM) and Financial Supply Chain Management (FSCM) in both role models.

As mentioned above the main differences are the hierarchical organization of roles under the Enterprise Job Role level. The figure below illustrates these changes for the sample of Sales Enterprise Job Role:

  • Every Duty Role named “Sales Representative Duty” in pre-R10 is an individual implementation per Application Source, such as CRM, FSCM, etc.
  • It is also possible to assign different duty roles in the context of an Application Source to the Enterprise Job Role

As shown in our sample below, the Sales Representative Duty role exists for various application sources, which could give the impression that they refer to the same role because they share almost the same display name in pre-R10 releases. In reality, these are different roles! For instance, the Sales Representative Duty role for CRM can be found under the role name ZBS_ENT_SALES_REPRESENTATIVE_DUTY, while the role with the same display name exists in HCM with the name ZBS_ENT_SALES_REPRESENTATIVE_DUTY_HCM.

In R10 we see that there are also Sales Representative roles assigned to Sales Representative Enterprise Role, but we notice the following differences:

  • The suffix “Duty” as part of the displayed name is gone as these are Application Job Roles assigned at top level now
  • All these same visible names refer also to the same role ORA_ZBS_SALES_REPRESENTATIVE_JOB
  • As mentioned above the internal names of these new roles start with a prefix “ORA” and can be easily identified as a new role by its name
  • By the suffix “JOB” these roles can be identified as an Application Job Role while Duty Roles are registered with a suffix “DUTY”

3_Overview_Slide_Role_Model_3

As noted, the Application Job Role with the display name “Sales Representative” refers to the same role, ORA_ZBS_SALES_REPRESENTATIVE_JOB, across all Application Sources. Furthermore, Role Categories have been introduced. These role category assignments put roles into a dedicated context and can be used for isolation in the Cloud. In our sample, the “Sales Representative” role is assigned to a Role Category named “CRM_ENTERPRISE_DUTY”. The Application Role Hierarchy might differ for roles with the same name (like “Sales Representative”) across the various Application Sources. For instance, the “Sales Representative” role for HCM has the following Application Role Hierarchy assigned:

  • “Sales Party Management” (ORA_ZCM_SALES_PARTY_MANAGEMENT_DUTY_HCM)

The same Application Job Role for FSCM contains a different Application Role Hierarchy used in that specific application context:

  • “Sales Party Management” (ORA_ZCM_SALES_PARTY_MANAGEMENT_DUTY_FSCM)
  • “Sales Party Review” (ORA_ZCM_SALES_PARTY_REVIEW_DUTY_FSCM)

To better illustrate the differences between the pre-R10 and R10 role models, sample screenshots of Fusion Apps Authorization Policy Management (APM) are shown below.

4_Channel_Account_Manager_ExtRole_GeneralTab_R10

The starting point is the External Role “Channel Account Manager” in a pre-R10 instance, with the internal name ZPM_CHANNEL_ACCOUNT_MANAGER_JOB, as shown above. The screenshot below highlights the Application Role Mapping with the various Duty Roles assigned.

5_Channel_Account_Manager_ExtRole_AppRoleMapping_old

In R10 this External Role is unchanged (i.e. it keeps the same name), but the Application Role Mapping is different, as shown in the screenshot below. As mentioned previously, Application Job Roles are assigned in R10. Only a few duty roles for OBI are reused for technical reasons to provide continuity for some features. These are roles used by internal systems, such as role codes ending with _PRIV_OBI, which are used to secure reports. These roles don’t have the ORA_ prefix in the role code and must never be modified.

6_Channel_Account_Manager_ExtRole_AppRoleMapping_R10

In pre-R10 releases, Duty Roles had the term “DUTY” included in both the internal name and the display name (see screenshot below).

7_Channel_Account_Manager_AppRole_GeneralTab_old

This role setup changes in R10 (below), as the prefix ORA_ has been added for newly created roles. As the screenshot below indicates, the term JOB is added as a suffix for Application Job Roles.

8_Channel_Account_Manager_AppRole_GeneralTab_R10

Another new feature is the assignment of Job and Duty Roles to specific Role Categories, as shown below. For standard roles, the existing assignment must not be changed by a user.

9_Role_Categorization_Usage_R10

As shown in the screenshot below, the pre-R10 Duty Role has a dedicated layout in the Application Role Hierarchy.

10_Channel_Account_Manager_AppRole_ItemInquiry_old

In R10 the Application Role Hierarchy looks different as it has been reworked and flattened.

11_Channel_Account_Manager_AppRole_ItemInquiry_R10

Duty Roles in R10 follow the name scheme with ORA as prefix and DUTY as suffix as shown below.

13_Item_Inquiry_General_R10

Policies assigned to these Duty Roles in R10 might be the same as those in pre-R10. Any changes between releases will have been caused by functional or technical requirements in the various Application Sources and are applied automatically when upgrading to a given release.

14_Item_Inquiry_Duty_Policies_Overview_R10

The screenshot below shows the detailed privileges (Entitlements) at the lowest level, granting access to pages, page flows, data or interfaces (all of them called “Resources”). As noted, these structures are not affected in R10 by the switch to the simplified role reference model.

15_ViewItem_Entitlement_R10

Upgrade of role customization in R10

If any custom job roles were created in pre-R10 instances, a migration to the simplified reference role model is necessary. As shown in the figure below, the links to the old duty roles must be removed and a new link to the corresponding R10 Application Job Role created. Detailed instructions can be found in various documents under Oracle Support DocID 2016990.1.

16_Overview_Slide_Role_Model_2

Conclusion

This post provides an introduction to the recent changes in the role model in R10. It should be read in the context of the previous post about RBAC in Fusion Applications. R10 also brings further security-related changes, and more posts on this site will follow to cover those.

Executing a Stored Procedure from Oracle Business Intelligence Cloud Service (BICS)


Introduction

 

This article describes how to configure a link on an Oracle Business Intelligence Cloud Service (BICS) Dashboard that allows a BICS consumer to execute a stored procedure from the Dashboard. In this particular example the stored procedure inserts a record into a table located in the Oracle Schema Service database. However, the provided steps can also be applied to a BICS DbaaS environment.

The final Dashboard created in this article is displayed below. The BICS consumer clicks the refresh link, which calls a database function (using EVALUATE), which in turn executes the stored procedure. The stored procedure contains the logic to update / refresh the given table. The example provided inserts a single row into a one-column table. However, this solution can easily be modified for much more complex use cases.

Snap25

 

The article is divided into seven steps:

 

Step One: Create Table (to store data)

Step Two: Create Stored Procedure (to load data)

Step Three: Create Function (to execute Stored Procedure)

Step Four: Create Dummy Table (to reference the EVALUATE function)

Step Five: Create Expression in Data Modeler (that references EVALUATE function)

Step Six: Create Analysis (that executes EVALUATE function)

Step Seven: Create Dashboard (to display Analysis)

Main Article

 

Step One – Create Table (to store data)

For a text version of SQL Scripts in Steps One, Two, and Three click here

Note: All SQL Statements have been run through Apex -> SQL Workshop -> SQL Commands

Snap4

Snap5

1)    Create Table

CREATE TABLE STORE_DATA
(DATA_FIELD TIMESTAMP);

Step Two – Create Stored Procedure (to load data)

1)    Create Stored Procedure

CREATE OR REPLACE PROCEDURE LOAD_DATA AS
BEGIN
INSERT INTO STORE_DATA (DATA_FIELD)
VALUES(SYSDATE);
END;

2)    Test that executing the stored procedure inserts into the table without errors.

BEGIN
LOAD_DATA();
END;

3)    Confirm that data loaded as expected.

SELECT * FROM STORE_DATA;

 Snap6

Step Three – Create Function (to execute Stored Procedure)

1)    The Functions main purpose is to execute the stored procedure. At the time of writing it was found that an input

parameter, PRAGMA AUTONOMOUS_TRANSACTION, and return value was required.

CREATE OR REPLACE FUNCTION FUNC_LOAD_DATA (
p_input_value VARCHAR2
) RETURN INTEGER
IS PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
LOAD_DATA();
COMMIT;
RETURN 1;
END;

2)    Confirm that the function can be ran successfully.

SELECT FUNC_LOAD_DATA('Hello')
FROM DUAL;

Snap7

3)    Confirm that each time the function is referenced the table is updated.

SELECT * FROM STORE_DATA;

Snap10

 

Step Four – Create Dummy Table (to reference EVALUATE function)

For a text version of SQL Scripts in Step Four click here

1)    Create Table

CREATE TABLE DUMMY_REFRESH
(REFRESH_TEXT VARCHAR2(255));

2)    Insert descriptive text into table

INSERT INTO DUMMY_REFRESH (REFRESH_TEXT)
VALUES ('Hit Refresh to Update Data');

3)    Confirm insert was successful

SELECT * FROM DUMMY_REFRESH;

Snap1

 

 

 

Step Five: Create Expression in Data Modeler (that references EVALUATE function)

1)    Lock to Edit the Data Model

2)    Add the “DUMMY_REFRESH” table as a dimension table.

Snap11

 Snap12

3)    Join the DUMMY_REFRESH table to a Fact Table. This does not have to be a true join.

However, the data types must match.

Snap13

4)    Click on DUMMY_REFRESH in the Dimension Table list

5)    Click on Add Column

6)    In the Expression Builder type:

For a text version of the expression click here

EVALUATE('FUNC_LOAD_DATA(%1)','Hello')

7)    In Name and Description type: Run_Func

8)    Validate the Expression

Snap15

9)    Click Done

Snap16

10)   Click Done

11)   Publish the Model

Step Six: Create Analysis (that executes EVALUATE function)

1)    Create a new Analysis

2)    It may be necessary to Refresh -> Reload Server Metadata in order to see the new expression created in the previous step

Snap17

3)    Add both columns

Snap18

4)    From the Results tab set the Run_Func column to be hidden

Snap19

5)    Remove the Title

6)    Go to the Column Properties of DUMMY_REFRESH and click “Custom Headings”.

Type a space for the heading name.

Snap21

7)    Save the  Analysis. It should look something like below.

Snap22

 

Step Seven: Create Dashboard (to display Analysis)

1)    Add the Analysis to a Dashboard

2)    Set custom Report Links

Snap23

 

 

3)    Customize -> Refresh

Snap24

4)    The Analysis should look something like the below on the Dashboard

Snap25

 

5)    To run the function that updates the data – Hit Refresh.

6)    Confirm data was updated in the STORE_DATA table.

SELECT * FROM STORE_DATA;

7)    For certain use cases it may also be beneficial to display the table that is being updated (in this example, the STORE_DATA results) in a separate Analysis on the same dashboard. This allows the BICS consumer to view the new results immediately after they are refreshed.

Further Reading

Click here for related A-Team BICS blogs

Click here for more information on the EVALUATE function. This link is for the “Logical SQL Reference” – for “Oracle® Fusion Middleware Metadata Repository Builder’s Guide for Oracle Business Intelligence Enterprise Edition”. Therefore, not all commands in this guide are applicable to BICS. The relevant section on the EVALUATE function has been provided below. Note for BICS environments EVALUATE_SUPPORT_LEVEL is enabled by default.

Snap26

Summary

 

This article described how to configure a link on an Oracle Business Intelligence Cloud Service (BICS) Dashboard that allows a BICS consumer to execute a stored procedure from BICS Dashboard.

The key components to the solution are the use of EVALUATE in the Data Modeler and referencing PRAGMA AUTONOMOUS_TRANSACTION in the database function.

In this example the stored procedure is executed by clicking the refresh link. An alternative approach would be to invoke the stored procedure through a Dashboard Prompt.

The example shown can be easily modified to:

1) Pass values from the Dashboard to the Stored Procedure.

2) Return output values such as “number of records inserted” or “number of records failed”.

When using this solution to load / refresh data in BICS, it is important to also remember to add logic to clear the cache.


Protecting users and their emails after FA-P2T in On-Prem Environments


Introduction

The P2T (Production to Test) procedure is a very popular feature that FA customers utilize. It allows them to have their production data copied to another environment. Nowadays, P2T is a very common procedure for both cloud SaaS and on-premise deployments. An important aspect that is not discussed frequently is the post-processing of P2T. This step is very important to avoid security issues, such as production passwords and emails being available in a different environment.

Main Article

This article covers the post-processing of FA-P2T: changing information copied from production that is unwanted in the test environment. For this blog, the specific test case is the email attribute. Note: there is no requirement to change this attribute, but if it is not changed you may inadvertently send notifications from a non-production environment to valid production email addresses. This can cause confusion and frustration for users trying to identify which environment these e-mails are coming from. Hence, this article provides step-by-step instructions to change the e-mail addresses, making sure your end users’ emails are not available in lower environments.

NOTE: ALL THESE STEPS HAVE TO BE DONE ONLY IN THE TEST ENVIRONMENT

Step 1: FROM OID SIDE
1.1)Do a backup first:

/u01/oid/oid_home/ldap/bin/ldifwrite connect=oiddb basedn="cn=users,dc=us,dc=oracle,dc=com" thread=3 verbose=true ldiffile=/tmp/backup-ENVXXX-[DATE].dat

1.2)Change e-mail address(And avoid special characters issues) :

/u01/oid/oid_home/ldap/bin/bulkmodify connect=oiddb attribute="mail" value="Global-Fusion.alerts@oracle.com" replace="TRUE" basedn="cn=users,dc=us,dc=oracle,dc=com" threads=4 size=1000 verbose=true

1.3) NOTE: If you want to protect admin e-mails or change them to something else, add those values into an LDIF file and run the ldapmodify command below (instead of 1.2):

ldapmodify -p 3060 -D cn=orcladmin -w **** -a -f return-SYSADMINEMAILS.ldif (file attached)

Step 2: FROM OIM SIDE:
2.1) Update or create the system property in OIM (Web UI) that controls e-mail uniqueness so that duplicate e-mail addresses are allowed: set ‘OIM.EmailUniqueCheck’ to FALSE.

OIM11G-UniqueEmailProperty

2.2)Run SQL:

 update usr set usr_email='Global-Fusion.alerts@oracle.com' where (usr_login not in('XELSYSADM', 'OAMADMINUSER','FUSION_APPS_HCM_SOA_SPML_APPID','WEBCHATADMIN','XELOPERATOR','WEBLOGIC','OIMINTERNAL','POLICYROUSER','POLICYRWUSER',
'OBLIXANONYMOUS','WEBLOGIC_IDM','IDROUSER','IDRWUSER','OAMSOFTWAREUSER','FAADMIN','OIM_ADMIN','OAMADMINUSER','OCLOUD9_OSN_APPID','OSN_LDAP_BIND_USER','HCM.USER','OIMADMINUSER'))

Step 3: FROM FUSION SIDE
3.1)Updating per_email_addresses table

 
UPDATE fusion.per_email_addresses pea set pea.email_address = 'Global-Fusion.alerts@oracle.com' where pea.email_address like '%oracle.com'

3.2)Updating hz_parties table

 
update fusion.hz_parties hp set hp.email_address = 'Global-Fusion.alerts@oracle.com' where hp.email_address like '%oracle.com'

3.3)Updating hz_contact_points table

update fusion.hz_contact_points hcp set hcp.email_address = 'Global-Fusion.alerts@oracle.com' where hcp.email_address like '%oracle.com'

Step 4: RUN ESS JOB to update FA from OID data:
4.1) Log in to FA (Navigator –> Setup & Maintenance –> search for the Schedule Person Keyword Crawler task (named ‘Update Person Search Keywords’) –> Submit).

Conclusion

Protecting email addresses for an organization should be done carefully, and a full environment backup must be taken before starting. Proper planning and an understanding of the various dimensions of this solution allow an organization to decide how to handle email data. It also highlights how far the enterprise is willing to go to protect end-user data in copied environments, and how best to apply this protection in an integrated and effective manner.

HCM Atom Feed Subscriber using Node.js


Introduction

HCM Atom feeds provide notifications of Oracle Fusion Human Capital Management (HCM) events and are tightly integrated with REST services. When an event occurs in Oracle Fusion HCM, the corresponding Atom feed is delivered automatically to the Atom server. The feed contains details of the REST resource on which the event occurred. Subscribers who consume these Atom feeds use the REST resources to retrieve additional information about the resource.

For more information on Atom, please refer to this.

This post focuses on consuming and processing HCM Atom feeds using Node.js. The assumption is that the reader has some basic knowledge on Node.js. Please refer to this link to download and install Node.js in your environment.

Node.js is a programming platform that allows you to execute server-side code similar to JavaScript in the browser. It enables real-time, two-way connections in web applications with push capability, using a non-blocking, event-driven I/O paradigm. It runs on a single-threaded event loop and leverages asynchronous calls for operations such as I/O. This is an evolution from the stateless web, which is based on the stateless request-response paradigm. For example, when a request is sent to invoke a service such as REST or a database query, Node.js continues serving new requests; when the response comes back, it jumps back to the respective requestor. Node.js is lightweight and provides a high level of concurrency. However, it is not suitable for CPU-intensive operations as it is single-threaded.

Node.js is built on an event-driven, asynchronous model. The in-coming requests are non-blocking. Each request is passed off to an asynchronous callback handler. This frees up the main thread to respond to more requests.

For more information on Node.js, please refer this.

 

Main Article

Atom feeds enable you to keep track of any changes made to feed-enabled resources in Oracle HCM Cloud. For any updates that may be of interest for downstream applications, such as new hire, terminations, employee transfers and promotions, Oracle HCM Cloud publishes Atom feeds. Your application will be able to read these feeds and take appropriate action.

Atom Publishing Protocol (AtomPub) allows software applications to subscribe to changes that occur on REST resources through published feeds. Updates are published when changes occur to feed-enabled resources in Oracle HCM Cloud. The primary Atom feeds are:

Employee Feeds

  • New hire
  • Termination
  • Employee update
  • Assignment creation, update, and end date

Work Structures Feeds (creation, update, and end date)

  • Organizations
  • Jobs
  • Positions
  • Grades
  • Locations

The above feeds can be consumed programmatically. In this post, Node.js is implemented as one possible solution for consuming the “Employee New Hire” feed, but the design and development are similar for all the supported objects in HCM.

 

Refer to my blog on how to invoke secured REST services using Node.js.

Security

The RESTFul services in Oracle HCM Cloud are protected with Oracle Web Service Manager (OWSM). The server policy allows the following client authentication types:

  • HTTP Basic Authentication over Secure Socket Layer (SSL)
  • Oracle Access Manager(OAM) Token-service
  • Simple and Protected GSS-API Negotiate Mechanism (SPNEGO)
  • SAML token

The client must provide one of the above policies in the security headers of the invocation call for authentication. The sample in this post uses the HTTP Basic Authentication over SSL policy.
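
With that policy, the subscriber only needs to add an HTTP Basic Authorization header to an HTTPS request. A minimal Node.js sketch is shown below; the host, resource path and credentials are placeholders.

var https = require('https');

var options = {
  host: 'hclg-test.hcm.us2.oraclecloud.com', // placeholder host
  path: '/hcmCoreApi/Atomservlet/employee/newhire',
  method: 'GET',
  headers: {
    // HTTP Basic Authentication over SSL
    'Authorization': 'Basic ' + Buffer.from('username:password').toString('base64')
  }
};

https.get(options, function(res) {
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() {
    console.log('Atom feed received, ' + body.length + ' bytes');
  });
}).on('error', function(err) {
  console.error(err);
});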

 

Fusion Security Roles

REST and Atom Feed Roles

To use Atom feed, a user must have any HCM Cloud role that inherits the following roles:

  • “HCM REST Services and Atom Feeds Duty” – for example, Human Capital Management Integration Specialist
  • “Person Management Duty” – for example, Human Resource Specialist

REST/Atom Privileges

 

Privileges (Privilege Name, followed by Resource and Method):

  • PER_REST_SERVICE_ACCESS_EMPLOYEES_PRIV: emps (GET, POST, PATCH)
  • PER_REST_SERVICE_ACCESS_WORKSTRUCTURES_PRIV: grades (get), jobs (get), jobFamilies (get), positions (get), locations (get), organizations (get)
  • PER_ATOM_WORKSPACE_ACCESS_EMPLOYEES_PRIV: employee/newhire (get), employee/termination (get), employee/empupdate (get), employee/empassignment (get)
  • PER_ATOM_WORKSPACE_ACCESS_WORKSTRUCTURES_PRIV: workstructures/grades (get), workstructures/jobs (get), workstructures/jobFamilies (get), workstructures/positions (get), workstructures/locations (get), workstructures/organizations (get)

 

 

Atom Payload Response Structure

The Atom feed response is in XML format. Please see the following diagram to understand the feed structure:

 

AtomFeedSample_1

 

A feed can have multiple entries. The entries are ordered by “updated” timestamp of the <entry> and the first one is the latest. There are two critical elements that will provide information on how to process these entries downstream.

Content

The <content> element contains critical attributes such as Employee Number, Phone, Suffix, CitizenshipLegislation, EffectiveStartDate, Religion, PassportNumber, NationalIdentifierType, EventDescription, LicenseNumber, EmployeeName, WorkEmail, and NationalIdentifierNumber. It is in JSON format, as you can see from the above diagram.
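
For illustration, the JSON inside <content> looks similar to the following (the field values shown are sample data, not real employee information):

{
  "Context": [
    {
      "EmployeeNumber": "212",
      "PersonId": "300000006013981",
      "EffectiveStartDate": "2015-10-08",
      "EffectiveDate": "2015-10-08",
      "WorkEmail": "phil.davey@mycompany.com",
      "EmployeeName": "Davey, Phillip"
    }
  ]
}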

Resource Link

If the data provided in <content> is not sufficient, a REST resource link is provided to get more details. Please refer to the employee resource link for each entry in the above diagram. Node.js can invoke this REST resource link.

 

Avoid Duplicate Atom Feed Entries

To avoid consuming duplicate entries, provide one of the following parameters so that only entries published since the last poll are returned:

1. updated-min: Returns entries in the collection where Atom:updated > updated-min

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-min=2015-09-16T09:16:00.000Z – returns entries published after “2015-09-16T09:16:00.000Z”.

2. updated-max: Returns entries in the collection where Atom:updated <= updated-max

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-max=2015-09-16T09:16:00.000Z – returns entries published at or before “2015-09-16T09:16:00.000Z”.

3. updated-min and updated-max combined: Returns entries in the collection where (Atom:updated > updated-min && Atom:updated <= updated-max)

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-min=2015-09-11T10:03:35.000Z&updated-max=2015-09-16T09:16:00.000Z – returns entries published between “2015-09-11T10:03:35.000Z” and “2015-09-16T09:16:00.000Z”.

Node.js Implementation

Refer to my blog on how to invoke secured REST services using Node.js. Consider the following when consuming feeds:

Initial Consumption

When you subscribe for the first time, you can invoke the resource without query parameters to get all published entries, or use the updated-min or updated-max arguments to filter the initial set of entries.

For example, the invocation path could be /hcmCoreApi/Atomservlet/employee/newhire or /hcmCoreApi/Atomservlet/employee/newhire?updated-min=<some-timestamp>

After the first consumption, the “updated” element of the first entry must be persisted so it can be used in the next call to avoid duplication. In this prototype, the “/entry/updated” timestamp value is persisted in a file.

For example:

//persist timestamp for the next call
if (i == 0) {
  fs.writeFile('updateDate', updateDate[0].text, function(fserr) {
    if (fserr) throw fserr;
  });
}

 

Next Call

In the next call, read the updated timestamp value from the persisted file to generate the path as follows:

//Check if updateDate file exists and is not empty
var lastFeedUpdateDate = '';
try {
  lastFeedUpdateDate = fs.readFileSync('updateDate');
  console.log('Last Updated Date is: ' + lastFeedUpdateDate);
} catch (e) {
  // handle error (for example, the file does not exist yet on the first run)
}

if (lastFeedUpdateDate.length > 0) {
  pathUri = '/hcmCoreApi/Atomservlet/employee/newhire?updated-min=' + lastFeedUpdateDate;
} else {
  pathUri = '/hcmCoreApi/Atomservlet/employee/newhire';
}

 

Parsing Atom Feed Response

The Atom feed response is in XML format as shown previously in the diagram. In this prototype, the “node-elementtree” package is implemented to parse the XML. You can use any library as long as the following data are extracted for each entry in the feed for downstream processing.

var et = require('elementtree');
//Request call
var request = http.get(options, function(res){
var body = "";
res.on('data', function(data) {
body += data;
});
res.on('end', function() {

//Parse Feed Response - the structure is defined in section: Atom Payload Response Structure
feed = et.parse(body);

//Identify if feed has any entries
var numberOfEntries = feed.findall('./entry/').length;

//if there are entries, extract data for downstream processing
if (numberOfEntries > 0) {
console.log('Get Content for each Entry');

//Get Data based on XPath Expression
var content = feed.findall('./entry/content/');
var entryId = feed.findall('./entry/id');
var updateDate = feed.findall('./entry/updated');

for ( var i = 0; i < content.length; i++ ) {

//get Resource link for the respective entry
console.log(feed.findall('./entry/link/[@rel="related"]')[i].get('href'));

//get Content data of the respective entry, which is in JSON format
console.log(content[i].text);

//persist timestamp for the next call
if (i == 0) {
  fs.writeFile('updateDate', updateDate[0].text, function(fserr) {
    if (fserr) throw fserr;
  });
}
} // end of the for loop; the enclosing callbacks are closed in the full sample below

One and Only One Entry

Each entry in an Atom feed has a unique ID. For example: <id>Atomservlet:newhire:EMP300000005960615</id>

In target applications, this ID can be used as one of the keys or lookups to prevent reprocessing. The logic can be implemented in your downstream applications or in the integration space to avoid duplication.
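
As a minimal sketch of such a lookup (the in-memory object and file name here are illustrative; a real implementation would typically use a table in the target system):

var fs = require('fs');

// Load the set of already-processed entry IDs (one ID per line), if any
var processedIds = {};
try {
  fs.readFileSync('processedIds', 'utf8').split('\n').forEach(function(id) {
    if (id) { processedIds[id] = true; }
  });
} catch (e) {
  // first run: no file yet
}

function processEntryOnce(entryId, handler) {
  if (processedIds[entryId]) {
    console.log('Skipping already processed entry: ' + entryId);
    return;
  }
  handler();
  processedIds[entryId] = true;
  fs.appendFileSync('processedIds', entryId + '\n');
}

// Example usage with an ID in the format shown above
processEntryOnce('Atomservlet:newhire:EMP300000005960615', function() {
  console.log('processing new hire entry');
});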

 

Downstream Processing Pattern

A Node.js scheduler can be implemented to consume feeds periodically (a minimal sketch follows the diagram below). Once the message is parsed, there are several patterns to support various use cases. In addition, you could have multiple subscribers, such as employee new hire, employee termination, locations, jobs, positions, and so on. For guaranteed transactions, each feed entry can be published to Messaging Cloud or an Oracle Database to stage all the feeds. This pattern provides global transaction support and recovery when downstream applications are unavailable or throw errors. The following diagram shows the high-level architecture:

nodejs_soa_atom_pattern
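
A minimal sketch of the polling scheduler mentioned above, assuming the feed-consumption logic from the sample code is wrapped in a function named consumeNewHireFeed (a hypothetical name) and a 15-minute interval:

// Poll the Atom feed on a fixed interval; each run reuses the persisted
// updateDate file so only new entries are processed
var POLL_INTERVAL_MS = 15 * 60 * 1000;

function consumeNewHireFeed() {
  // build pathUri from the persisted timestamp, invoke the REST resource,
  // parse the entries, and hand them to the downstream publisher
  // (see the full sample code in this post)
}

setInterval(consumeNewHireFeed, POLL_INTERVAL_MS);
consumeNewHireFeed(); // run once immediately at startup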

 

Conclusion

This post demonstrates how to consume HCM Atom feeds and process them for downstream applications. It provides details on how to consume only new feed entries (avoiding duplication) since the last poll. Finally, it provides an enterprise integration pattern from feed consumption to downstream application processing.

 

Sample Prototype Code

var et = require('elementtree');

var uname = 'username';
var pword = 'password';
var http = require('https'),
fs = require('fs');

var XML = et.XML;
var ElementTree = et.ElementTree;
var element = et.Element;
var subElement = et.SubElement;

var lastFeedUpdateDate = '';
var pathUri = '';

//Check if updateDate file exists and is not empty
try {
var lastFeedUpdateDate = fs.readFileSync('updateDate');
console.log('Last Updated Date is: ' + lastFeedUpdateDate);
} catch (e) {
// add error logic
}

//get last feed updated date to get entries since that date
if (lastFeedUpdateDate.length > 0) {
pathUri = '/hcmCoreApi/atomservlet/employee/newhire?updated-min=' + lastFeedUpdateDate;
} else {
pathUri = '/hcmCoreApi/atomservlet/employee/newhire';
}

// Generate Request Options
var options = {
ca: fs.readFileSync('HCM Cert'), //get HCM Cloud certificate - either through openssl or export from web browser
host: 'HCMHostname',
port: 443,
path: pathUri,
"rejectUnauthorized" : false,
headers: {
'Authorization': 'Basic ' + new Buffer(uname + ':' + pword).toString('base64')
}
};

//Invoke REST resource for Employee New Hires
var request = http.get(options, function(res){
var body = "";
res.on('data', function(data) {
body += data;
});
res.on('end', function() {

//Parse Atom Payload response 
feed = et.parse(body);

//Get Entries count
var numberOfEntries = feed.findall('./entry/').length;

console.log('...................Feed Extracted.....................');
console.log('Number of Entries: ' + numberOfEntries);

//Process each entry
if (numberOfEntries > 0) {

console.log('Get Content for each Entry');

var content = feed.findall('./entry/content/');
var entryId = feed.findall('./entry/id');
var updateDate = feed.findall('./entry/updated');

for ( var i = 0; i < content.length; i++ ) {
console.log(feed.findall('./entry/link/[@rel="related"]')[i].get('href'));
console.log(content[i].text);

//persist timestamp for the next call
if (i == 0) {
fs.writeFile('updateDate', updateDate[0].text, function(fserr) {
if (fserr) throw fserr; } );
}

fs.writeFile(entryId[i].text,content[i].text, function(fserr) {
if (fserr) throw fserr; } );
}
}

})
res.on('error', function(e) {
console.log("Got error: " + e.message);
});
});

 

 

HCM Atom Feed Subscriber using SOA Cloud Service


Introduction

HCM Atom feeds provide notifications of Oracle Fusion Human Capital Management (HCM) events and are tightly integrated with REST services. When an event occurs in Oracle Fusion HCM, the corresponding Atom feed is delivered automatically to the Atom server. The feed contains details of the REST resource on which the event occurred. Subscribers who consume these Atom feeds use the REST resources to retrieve additional information about the resource.

For more information on Atom, please refer to this.

This post focuses on consuming and processing HCM Atom feeds using Oracle Service Oriented Architecture (SOA) Cloud Service. Oracle SOA Cloud Service provides a PaaS computing platform solution for running Oracle SOA Suite, Oracle Service Bus, and Oracle API Manager in the cloud. For more information on SOA Cloud Service, please refer to this.

Oracle SOA is the industry’s most complete and unified application integration and SOA solution. It transforms complex application integration into agile and re-usable service-based connectivity to speed time to market, respond faster to business requirements, and lower costs. SOA facilitates the development of enterprise applications as modular business web services that can be easily integrated and reused, creating a truly flexible, adaptable IT infrastructure.

For more information on getting started with Oracle SOA, please refer to this. For developing SOA applications using SOA Suite, please refer to this.

 

Main Article

Atom feeds enable you to keep track of any changes made to feed-enabled resources in Oracle HCM Cloud. For any updates that may be of interest for downstream applications, such as new hire, terminations, employee transfers and promotions, Oracle HCM Cloud publishes Atom feeds. Your application will be able to read these feeds and take appropriate action.

Atom Publishing Protocol (AtomPub) allows software applications to subscribe to changes that occur on REST resources through published feeds. Updates are published when changes occur to feed-enabled resources in Oracle HCM Cloud. The following are the primary Atom feeds:

Employee Feeds

New hire
Termination
Employee update

Assignment creation, update, and end date

Work Structures Feeds (Creation, update, and end date)

Organizations
Jobs
Positions
Grades
Locations

The above feeds can be consumed programmatically. In this post, SOA Cloud Service is implemented as one of the solutions for consuming “Employee New Hire” feeds, but the design and development are similar for all the supported objects in HCM.

 

HCM Atom Introduction

For Atom “security, roles and privileges”, please refer to my blog HCM Atom Feed Subscriber using Node.js.

 

Atom Feed Response Template

 

AtomFeedSample_1

SOA Cloud Service Implementation

Refer to my blog on how to invoke secured REST services using SOA. The following diagram shows the pattern for subscribing to HCM Atom feeds and processing them for downstream applications that may have either web service or file-based interfaces. Optionally, all entries from the feeds can be staged in a database or messaging cloud before processing, for situations where a downstream application is unavailable or returns system errors. This provides the ability to consume the feeds but hold the processing until downstream applications are available. Enterprise Scheduler Service (ESS), a component of SOA Suite, is leveraged to invoke the subscriber composite periodically.

 

soacs_atom_pattern

The following diagram shows the implementation of the above pattern for Employee New Hire:

soacs_atom_composite

 

Feed Invocation from SOA

Although the HCM Cloud feed is an XML representation, the media type of the payload response is “application/atom+xml”. This media type is not supported by the built-in REST Adapter at this time. Once the REST Adapter supports the Atom media type, the Java embedded activity shown below can be replaced, which will further simplify the solution. For now, use the following Java embedded activity in your BPEL component:

try {

String url = "https://mycompany.oraclecloud.com";
String lastEntryTS = (String)getVariableData("LastEntryTS");
String uri = "/hcmCoreApi/atomservlet/employee/newhire";

//Generate URI based on last entry timestamp from previous invocation
if (!(lastEntryTS.isEmpty())) {
uri = uri + "?updated-min=" + lastEntryTS;
}

java.net.URL obj = new URL(null,url+uri, new sun.net.www.protocol.https.Handler());

javax.net.ssl.HttpsURLConnection conn = (HttpsURLConnection) obj.openConnection();
conn.setRequestProperty("Content-Type", "application/vnd.oracle.adf.resource+json");
conn.setDoOutput(true);
conn.setRequestMethod("GET");

String userpass = "username" + ":" + "password";
String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userpass.getBytes("UTF-8"));
conn.setRequestProperty ("Authorization", basicAuth);

String response="";
int responseCode=conn.getResponseCode();
System.out.println("Response Code is: " + responseCode);

if (responseCode == HttpsURLConnection.HTTP_OK) {

BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));

String line;
String contents = "";

while ((line = reader.readLine()) != null) {
contents += line;
}

setVariableData("outputVariable", "payload", "/client:processResponse/client:result", contents);

reader.close();

}

} catch (Exception e) {
e.printStackTrace();
}

 

Consider the following when consuming feeds:

Initial Consumption

When you subscribe for the first time, you can invoke the resource without query parameters to get all published entries, or use the updated-min or updated-max arguments to filter the initial set of entries.

For example, the invocation path could be /hcmCoreApi/Atomservlet/employee/newhire or /hcmCoreApi/Atomservlet/employee/newhire?updated-min=<some-timestamp>

After the first consumption, the “updated” element of the first entry must be persisted so it can be used in the next call to avoid duplication. In this prototype, the “/entry/updated” timestamp value is persisted in Database Cloud Service (DBaaS).

This is the sample database table:

create table atomsub (
id number,
feed_ts varchar2(100) );

For initial consumption, keep the table empty, or add a row with a feed_ts value to control which feeds are consumed first. For example, a feed_ts value of “2015-09-16T09:16:00.000Z” retrieves all feeds published after this timestamp.

In the SOA composite, you will update the above table to persist the latest “/entry/updated” timestamp in the feed_ts column of the “atomsub” table, as sketched below.
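
A minimal sketch of the statements involved, assuming a single row keyed by id = 1 (the seed timestamp and bind name are illustrative):

-- optional seed row for the first run
INSERT INTO atomsub (id, feed_ts) VALUES (1, '2015-09-16T09:16:00.000Z');

-- read the last persisted timestamp before invoking the feed
SELECT MAX(feed_ts) FROM atomsub;

-- persist the latest /entry/updated value after a successful run
UPDATE atomsub SET feed_ts = :latest_entry_updated WHERE id = 1;
COMMIT;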

 

Next Call

In the next call, read the updated timestamp value from the database and generate the URI path as follows:

String uri = "/hcmCoreApi/atomservlet/employee/newhire";
String lastEntryTS = (String)getVariableData("LastEntryTS");
if (!(lastEntryTS.isEmpty())) {
uri = uri + "?updated-min=" + lastEntryTS;
}

The above step is done in the Java embedded activity, but it could also be done in SOA using <assign> expressions.

Parsing Atom Feed Response

The Atom feed response is in XML format, as shown previously in the diagram. In this prototype, the feed response is stored in the output variable as a string. The following expression in an <assign> activity converts it to XML:

oraext:parseXML($outputVariable.payload/client:result)


Parsing Each Atom Entry for Downstream Processing

Each entry has two major elements, as mentioned in the Atom payload response structure.

Resource Link

This contains the REST employee resource link used to get the Employee object. This is a typical REST invocation from SOA using the REST Adapter. For more information on invoking REST services from SOA, please refer to my blog.

 

Content Type

This contains selected resource data in JSON format. For example:

{"Context":[{"EmployeeNumber":"212","PersonId":"300000006013981","EffectiveStartDate":"2015-10-08","EffectiveDate":"2015-10-08","WorkEmail":"phil.davey@mycompany.com","EmployeeName":"Davey, Phillip"}]}

In order to use the above data, it must be converted to XML. The BPEL component provides a Translate activity to transform JSON to XML. Please refer to the SOA Development document, section B1.8 – doTranslateFromNative.

 

The <Translate> activity syntax to convert the above JSON string from <content> is as follows:

<assign name="TranslateJSON">
<bpelx:annotation>
<bpelx:pattern>translate</bpelx:pattern>
</bpelx:annotation>
<copy>
 <from>ora:doTranslateFromNative(string($FeedVariable.payload/ns1:entry/ns1:content), 'Schemas/JsonToXml.xsd', 'Root-Element', 'DOM')</from>
 <to>$JsonToXml_OutputVar_1</to>
 </copy>
</assign>

This is the output:

jsonToXmlOutput

The following provides detailed steps on how to use Native Format Builder in JDeveloper:

In the Native Format Builder, select the JSON format and use the above <content> as a sample to generate a schema. Please see the following diagrams:

JSON_nxsd_1JSON_nxsd_2JSON_nxsd_3

JSON_nxsd_5

 

One and Only One Entry

Each entry in an Atom feed has a unique ID. For example: <id>Atomservlet:newhire:EMP300000005960615</id>

In target applications, this ID can be used as one of the keys or lookups to prevent reprocessing. The logic can be implemented in your downstream applications or in the integration space to avoid duplication.
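
One straightforward way to enforce this in the staging database is a unique key on the entry ID (the table and column names below are illustrative, and the binds are placeholders):

-- staging table keyed by the Atom entry ID
CREATE TABLE atom_entry_stage (
  entry_id   VARCHAR2(200) PRIMARY KEY,
  entry_json CLOB,
  created_on TIMESTAMP DEFAULT SYSTIMESTAMP
);

-- re-delivered entries are ignored instead of being reprocessed
INSERT INTO atom_entry_stage (entry_id, entry_json)
SELECT :entry_id, :entry_json FROM dual
WHERE NOT EXISTS (SELECT 1 FROM atom_entry_stage WHERE entry_id = :entry_id);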

 

Scheduler and Downstream Processing

Oracle Enterprise Scheduler Service (ESS) is configured to invoke the above composite periodically. At present, SOA Cloud Service is not provisioned with ESS, but refer to this to extend your domain. Once the feed response message is parsed, you can process it for downstream applications based on your requirements or use cases. For guaranteed transactions, each feed entry can be published to Messaging Cloud or an Oracle Database to stage all the feeds. This provides global transaction support and recovery when downstream applications are unavailable or throw errors.

The following diagram shows how to create a job definition for a SOA composite. For more information on ESS, please refer to this.

ess_3

SOA Cloud Service Instance Flows

First invocation without updated-min argument to get all the feeds

 

soacs_atom_instance_json

Atom Feed Response from above instance

AtomFeedResponse_1

 

Next invocation with updated-min argument based on last entry timestamp

soacs_atom_instance_noentries

 

Conclusion

This post demonstrates how to consume HCM Atom feeds and process them for downstream applications. It provides details on how to consume only new feed entries (avoiding duplication) since the last poll. Finally, it provides an enterprise integration pattern from feed consumption to downstream application processing.

 

Sample Prototype Code

The sample prototype code is available here.

 

soacs_atom_composite_1

 

 

Integrating Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS) – Part 2


Introduction

 

This article expands on “Integrating Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS) – Part 1” published in June 2015.

Part 1 described how to integrate BICS with Oracle Service Cloud (RightNow) Connect Web Services for Simple Object Access Protocol (SOAP).

Part 2 covers using the Connect REST API which allows integration with Oracle Service Cloud (RightNow) using representational state transfer (REST) web services.

REST is currently the recommended method for integrating BICS with Oracle Service Cloud (RightNow). The Connect REST API has been available since the May 2015 release of Oracle Service Cloud (RightNow); however, support for ROQL object and tabular queries only became available in the August 2015 release. Therefore, Oracle Service Cloud (RightNow) August 2015 or higher is required to implement the examples provided.

Additionally, this article showcases the new APEX_JSON package that is available as of Oracle Apex 5.0 for parsing and generating JSON. The following three APEX_JSON functions are utilized in the solution: apex_json.parse, apex_json.get_number, and apex_json.get_varchar2.
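
As a minimal illustration of these three calls (the JSON string below is made up, but shaped like the queryResults payload used later; note that APEX_JSON paths are 1-based):

DECLARE
  l_json CLOB := '{"items":[{"count":1,"rows":[["12278","Sample subject"]]}]}';
BEGIN
  apex_json.parse(l_json);
  dbms_output.put_line(apex_json.get_number(p_path => 'items[1].count'));        -- 1
  dbms_output.put_line(apex_json.get_varchar2(p_path => 'items[1].rows[1][1]')); -- 12278
  dbms_output.put_line(apex_json.get_varchar2(p_path => 'items[1].rows[1][2]')); -- Sample subject
END;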

The eight steps below explain how to load data from Oracle Service Cloud (RightNow) into BICS using PL/SQL run through Oracle Apex SQL Workshop – using an Oracle Schema Service Database. The code snippets may then be incorporated into a stored procedure or web service and scheduled / triggered. (Such topics have been covered in past BICS blogs.)

1)    Construct the ROQL Query

2)    Test the Connect REST API Query URL

3)    Run the apex_web_service.make_rest_request

4)    Formulate the JSON Path Expression

5)    Create the BICS Database Table

6)    Add the APEX_JSON parse code to the PL/SQL

7)    Execute the PL/SQL

8)    Review the Results

Main Article

 

Step One – Construct the ROQL Query

 

See Step Two of “Integrating Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS) – Part 1” on how to construct the ROQL Query. For this example the ROQL Query is:

select id, subject from incidents where id <12500

Step Two – Test the Connect REST API Query URL

 

This section covers building and testing the URL for ROQL queries.

The ROQL query URL syntax for the queryResults (tabular query) resource, which is used in this example, is as follows:

https://<your_site_interface>/services/rest/connect/<version>/queryResults/?query=<ROQL Statement>

1)    Take the ROQL Query from Step One and replace all the spaces with %20.

For example: select%20id,%20subject%20from%20incidents%20where%20id%20<12500

2)    Append the ROQL string to the REST API URL. For example:

https://yoursite.rightnowdemo.com/services/rest/connect/v1.3/queryResults/?query=select%20id,%20subject%20from%20incidents%20where%20id%20<12500

3)    Place the URL into a browser. It should prompt for a username and password. Enter the username and password.

4)    The browser will then prompt to save or open the results. Save the results locally. View the results in a text editor.

The text file should contain the results from the ROQL query in JSON format.

Snap1
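
For reference, the portion of the response used in the following steps has roughly this shape (abbreviated; attributes other than count and rows are omitted, and the values match those used in Step Four):

{
  "items": [
    {
      "count": 3,
      "rows": [
        [ "12278", "How long until I receive my refund on my credit card?" ],
        [ "12279", "Do you ship outside the US?" ],
        [ "12280", "How can I order another product manual?" ]
      ]
    }
  ]
}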

 

Step Three – Run the apex_web_service.make_rest_request

1)    Open SQL Workshop from Oracle Application Express

Snap2

2)    Launch SQL Commands

Snap3

3)    Use the snippet below as a starting point to build your PL/SQL.

Run the final PL/SQL in the SQL Commands Window.

Replace the URL site, username, password, and ROQL query.

For a text version of the code snippet click here.

DECLARE
l_ws_response_clob CLOB;
l_ws_url VARCHAR2(500) := 'https://yoursite.rightnowdemo.com/services/rest/connect/v1.3/queryResults/?query=select%20id,%20subject%20from%20incidents%20where%20id%20<12500';
BEGIN
apex_web_service.g_request_headers(1).name := 'Accept';
apex_web_service.g_request_headers(1).value := 'application/json; charset=utf-8';
apex_web_service.g_request_headers(2).name := 'Content-Type';
apex_web_service.g_request_headers(2).value := 'application/json; charset=utf-8';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_username => 'username',
p_password => 'password',
p_http_method => 'GET'
);
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,1));
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,12001));
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,24001));
END;

4)    Run the query. A subset of the JSON results should be displayed in the Results section of SQL Commands.

Additional dbms_output.put_line’s may be added should further debugging be required.

It is not necessary at this stage to view the entire result set. The key to this exercise is to prove that the URL is correct and can successfully be run through apex_web_service.make_rest_request.

Snap4

Step Four – Formulate the JSON Path Expression

 

1)    When formulating the JSON path expression, it may be useful to use an online JSON Path Expression Tester.

There are many different free JSON tools available online. The one below is: https://jsonpath.curiousconcept.com/

When testing, reduce the JSON output to something more manageable.

For a text version of the JSON used in this example click here.

Snap5

2)    For this exercise the following needs to be extracted from the JSON: count, all the ids, and all the subject data.

Test various path scenarios to confirm that the required data can be extracted from the JSON.

(It is much easier to debug the path in an online JSON editor than in SQL Developer.)

a)    COUNT

JSONPath Expression:

items[0].count

Returns:

[
3
]

b)    ID

id #1

JSONPath Expression:

items[0]["rows"][0][0]

Returns:

[
"12278"
]

id #2

JSONPath Expression:

items[0]["rows"][1][0]

Returns:

[
"12279"
]

id #3

JSONPath Expression:

items[0]["rows"][2][0]

Returns:

[
"12280"
]

c)    SUBJECT

subject #1

JSONPath Expression:

items[0]["rows"][0][1]

Returns:

[
"How long until I receive my refund on my credit card?"
]

subject #2

JSONPath Expression:

items[0]["rows"][1][1]

Returns:

[
"Do you ship outside the US?"
]

subject #3

JSONPath Expression:

items[0]["rows"][2][1]

Returns:

[
"How can I order another product manual?"
]

 

Step Five –  Create the BICS Database Table

 

1)    Open SQL Workshop from Oracle Application Express

Snap2

 

2)    Launch SQL Commands

Snap3

3)    Create the RIGHT_NOW_REST table in the BICS database.

To view the SQL in plain text click here.

CREATE TABLE "RIGHT_NOW_REST"
(ID NUMBER,
SUBJECT VARCHAR2(500)
);

Snap7

Step Six – Add the APEX_JSON parse code to the PL/SQL

 

For a text version of the PL/SQL snippet click here.

The code highlighted in orange is what was tested in ‘Step Three – Run the apex_web_service.make_rest_request’.

Replace the URL site, username, password, and ROQL query, as previously described in ‘Step Three – Run the apex_web_service.make_rest_request’.

The objective of this step is to add the JSON Path Expression’s formulated and tested in Step Four into the PL/SQL.

Note: Three debug lines have been left in the code – as they will most likely be needed. These can be removed if desired.

The key parts to the APEX_JSON code are:

1)    Identify a value that can be used to loop through the records.

Apex PL/SQL: apex_json.get_number(p_path => 'items[1].count')

In this example it is possible to use “count” to loop through the records.

However, different use cases may require another parameter to be used.

Note the different numbering in the JSONPath Expression vs. the apex_json function.

In the online JSONPath Expression builder position 1 = [0]. However, in Apex position 1 = [1].

Snap8

2)    Map the fields to be selected from the JSON.

ID:      apex_json.get_varchar2(p_path => 'items[1].rows[' || i || '][1]'),

SUBJECT: apex_json.get_varchar2(p_path => 'items[1].rows[' || i || '][2]')

Change the get_datatype accordingly. For example: get_varchar2, get_number, get_date etc. See APEX_JSON function for syntax. It may be easier to return the JSON as varchar2 and convert it in a separate procedure to avoid datatype errors.

Revisit the JSONPath Expression’s created in ‘Step Four – Formulate the JSON Path Expression’ and map them to APEX_JSON.

Convert the JSONPath to APEX_JSON. Taking into consideration the LOOP / Count ‘i’ variable.

Repeat the process of adding one position to the starting value. i.e. [0] -> [1], [1] -> [2].

ID:

items[0]["rows"][0][0]   ---------------->
items[0]["rows"][1][0]   ---------------->   apex_json.get_varchar2(p_path => 'items[1].rows[' || i || '][1]')
items[0]["rows"][2][0]   ---------------->

SUBJECT:

items[0]["rows"][0][1]   ---------------->
items[0]["rows"][1][1]   ---------------->   apex_json.get_varchar2(p_path => 'items[1].rows[' || i || '][2]')
items[0]["rows"][2][1]   ---------------->

Additional Code Advice (p_path):

Keep in mind that p_path is expecting a string.

Therefore, it is necessary to concatenate any dynamic variables such as the LOOP / Count ‘i’ variable.

DECLARE
l_ws_response_clob CLOB;
l_ws_url VARCHAR2(500) := 'https://yoursite.rightnowdemo.com/services/rest/connect/v1.3/queryResults/?query=select%20id,%20subject%20from%20incidents%20where%20id%20<12500';
BEGIN
apex_web_service.g_request_headers(1).name := 'Accept';
apex_web_service.g_request_headers(1).value := 'application/json; charset=utf-8';
apex_web_service.g_request_headers(2).name := 'Content-Type';
apex_web_service.g_request_headers(2).value := 'application/json; charset=utf-8';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_username => 'username',
p_password => 'password',
p_http_method => 'GET'
);
apex_json.parse(l_ws_response_clob);
--sys.dbms_output.put_line('found '||apex_json.get_varchar2(p_path => 'items[1].rows[3][2]'));
--sys.dbms_output.put_line('Number of Incidents: '||apex_json.get_varchar2(p_path => 'items[1].count'));
for i in 1..apex_json.get_number(p_path => 'items[1].count') LOOP
INSERT INTO RIGHT_NOW_REST(ID, SUBJECT)
VALUES
(
apex_json.get_varchar2(p_path => 'items[1].rows['|| i || '][1]'),
apex_json.get_varchar2(p_path => 'items[1].rows['|| i || '][2]')
);
end loop;
END;

Step Seven – Execute the PL/SQL

 

Run the PL/SQL in Apex SQL Commands.

“1 row(s) inserted.” should appear in the results.

This PL/SQL code snippet could also be incorporated through many other mechanisms such as stored procedure, web service etc. Since these topics have been covered in past blogs this will not be discussed here.

Step Eight – Review the Results

 

Confirm data was inserted as expected.

select * from RIGHT_NOW_REST;

Snap9

Further Reading

 

Click here for the Application Express API Reference Guide – MAKE_REST_REQUEST Function.

Click here for the Application Express API Reference Guide – APEX_JSON Package.

Click here for the Oracle Service Cloud Connect REST API Developers Guide – August 2015.

Click here for more A-Team BICS Blogs.

Summary

 

This article provided a set of examples that leverage the APEX_WEB_SERVICE_API to integrate Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS) using the Connect REST API web services.

The use case shown was for BICS and Oracle Service Cloud (RightNow) integration. However, many of the techniques referenced could be used to integrate Oracle Service Cloud (RightNow) with other Oracle and non-Oracle applications.

Similarly, the Apex MAKE_REST_REQUEST and APEX_JSON examples can be easily modified to integrate BICS or standalone Oracle Apex with any other REST web service that can be accessed via a URL and returns JSON data.

Techniques referenced in this blog may be useful for those building BICS REST ETL connectors and plug-ins.

Key topics covered in this article include: Oracle Business Intelligence Cloud Service (BICS), Oracle Service Cloud (RightNow), ROQL, Connect REST API, Oracle Apex API, APEX_JSON, apex_web_service.make_rest_request, PL/SQL, and Cloud to Cloud integration.

Integrating Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS)


Introduction

 

This article outlines how to integrate Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS). Two primary data movement patterns are described:

(a) JSON data is retrieved from an external cloud source using REST Web Services.

(b) Data is inserted into the Schema Service database with Apex PL/SQL functions.

The above topics have been discussed in past A-Team BICS blogs. What makes this article unique is that it retrieves and displays the results in real time. These results are stored temporarily in the database while viewed via a Dashboard. This data could then be permanently archived if desired.

The eighteen steps below have been broken into two parts. Part A follows a similar pattern to that covered in “Integrating Oracle Service Cloud (RightNow) with Oracle Business Intelligence Cloud Service (BICS) – Part 2”. Part B incorporates ideas covered in “Executing a Stored Procedure from Oracle Business Intelligence Cloud Service” and “Using the Oracle Business Intelligence Cloud Service (BICS) REST API to Clear Cached Data”.


PART A – Retrieve & Load Data


1)    Review REST API Documentation

2)    Build REST Search Endpoint URL

3)    Run apex_web_service.make_rest_request

4)    Formulate JSON Path

5)    Create BICS Tables

6)    Parse JSON

7)    Execute PL/SQL

8)    Review Results


PART B – Trigger Results Real-Time


9)    Add Clear BI Server Cache Logic

10)  Create Function – to execute Stored Procedure

11) Test Function – that executes Stored Procedure

12)  Create Dummy Table (to reference the EVALUATE function)

13)  Create Repository Variables

14)  Create Expression in Data Modeler (that references EVALUATE function)

15)  Create Analysis – that executes EVALUATE function

16)  Create Analysis – to display results

17)  Create Dashboard Prompt

18)  Create Dashboard

Main Article

 

Part A – Retrieve & Load Data

 

Step 1 – Review REST API Documentation

 

Begin by reviewing the REST APIs for Oracle Social Data and Insight Cloud Service documentation. This article only covers the /v2/search endpoint, which is used to retrieve a list of companies or contacts that match given criteria. There are many other endpoints available in the API that may be useful for various integration scenarios.

Step 2 – Build REST Search Endpoint URL

 

Access Oracle Social Data and Insight Cloud Service from Oracle Cloud My Services.

The Service REST Endpoint (Company and Contact Data API) will be listed.

For Example: https://datatrial1234-IdentityDomain.data.us9.oraclecloud.com/data/api

Append v2/search to the URL.

For Example: https://datatrial1234-IdentityDomain.data.us9.oraclecloud.com/data/api/v2/search

Step 3 – Run apex_web_service.make_rest_request

 

1)    Open SQL Workshop from Oracle Application Express

Snap2

2)    Launch SQL Commands

Snap3

3)    Use the code snippet below as a starting point to build your PL/SQL.

Run the final PL/SQL in the SQL Commands Window.

Replace the URL, username, password, identity domain, and body parameters.

For a text version of the code snippet click here.

For detailed information on all body parameters available click here.

DECLARE
l_ws_response_clob CLOB;
l_ws_url VARCHAR2(500) := 'YourURL/data/api/v2/search';
l_body CLOB;
BEGIN
l_body := '{"objectType":"People","limit":"100","filterFields":
[{"name":"company.gl_ult_dun","value":"123456789"},
{"name":"person.management_level","value":"0"},
{"name":"person.department","value":"3"}],"returnFields":
["company.gl_ult_dun","person.parent_duns","person.first_name",
"person.last_name","person.department","person.management_level",
"person.gender_code","person.title","person.standardized_title",
"person.age_range","person.company_phone","person.company_phone_extn",
"name.mail","person.co_offical_id"]}';
--use REST to retrieve the Data Service Cloud - Social Data
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
apex_web_service.g_request_headers(2).name := 'X-ID-TENANT-NAME';
apex_web_service.g_request_headers(2).value := 'Identity Domain';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_username => 'Username',
p_password => 'Password',
p_body => l_body,
p_http_method => 'POST'
);
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,1));
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,12001));
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob,12000,24001));
END;

4)    Run the query. A subset of the JSON results should be displayed in the Results section of SQL Commands.

Additional dbms_output.put_line’s may be added should further debugging be required.

It is not necessary at this stage to view the entire result set. The key to this exercise is to prove that the URL is correct and can successfully be run through apex_web_service.make_rest_request.

5)    Currently the body parameter filterFields only accepts “value” and not “DisplayValue”; thus, it may be necessary to create dimension lookup tables. For this exercise two dimension look-up tables are used.

Look-up values may change over time and should be re-confirmed prior to table creation.

“Step 5 –  Create the BICS Database Tables” describes how to create the two look-up tables below.

Department – Lookup Values

Value    DisplayValue
0        Administration
1        Consulting
2        Education
3        Executive
4        Facilities
5        Finance
6        Fraternal Organizations
7        Government
8        Human Resources
9        Operations
10       Other
11       Purchasing
12       Religion
13       Research & Development
14       Sales & Marketing
15       Systems

Management Level – Lookup Values

Value    DisplayValue
0        C-Level
1        Vice-President
2        Director
3        Manager
4        Other

 

Step 4 – Formulate JSON Path


1)    When formulating the JSON path expression, it may be useful to use an online JSON Path Expression Tester.

There are many different free JSON tools available online. The one below is: https://jsonpath.curiousconcept.com

2)    For this exercise the values below will be extracted from the JSON.

Each path was tested in the JSON Path Expression Tester.

The attribute numbers 1-12 are associated with the order in which returnFields have been specified in the body parameter. Thus, attribute numbers may differ from the example if:

a) Fields are listed in a different sequence.

b) An alternative number or combination of fields is defined.

Value                             JSON Path Expression 
totalHits                         ‘totalHits’
company.gl_ult_dun                 parties[*].attributes[1].value
person.parent_duns                 parties[*].attributes[2].value
person.first_name                  parties[*].attributes[3].value
person.last_name                   parties[*].attributes[4].value
person.department                  parties[*].attributes[5].value
person.management_level            parties[*].attributes[6].value
person.gender_code                 parties[*].attributes[7].value
person.title                       parties[*].attributes[8].value
person.standardized_title          parties[*].attributes[9].value
person.age_range                   parties[*].attributes[10].value
person.company_phone               parties[*].attributes[11].value
person.company_phone_extn          parties[*].attributes[12].value
name.mail                          parties[*].email
person.co_offical_id               parties[*].id

Step 5 –  Create BICS Tables

 

1)    Open SQL Workshop from Oracle Application Express

Snap2

2)    Launch SQL Commands

Snap3

3)    Create the SOCIAL_DATA_CONTACTS table in the BICS database.

To view the SQL in plain text click here.

CREATE TABLE SOCIAL_DATA_CONTACTS(
COMPANY_DUNS_NUMBER VARCHAR2(500),
CONTACT_DUNS_NUMBER VARCHAR2(500),
FIRST_NAME VARCHAR2(500),
LAST_NAME VARCHAR2(500),
DEPARTMENT VARCHAR2(500),
MANAGEMENT_LEVEL VARCHAR2(500),
GENDER VARCHAR2(500),
JOB_TITLE VARCHAR2(500),
STANDARDIZED_TITLE VARCHAR2(500),
AGE_RANGE VARCHAR2(500),
COMPANY_PHONE VARCHAR2(500),
COMPANY_PHONE_EXT VARCHAR2(500),
EMAIL_ADDRESS VARCHAR2(500),
INDIVIDUAL_ID VARCHAR2(500));

4)    Create and populate the DEPARTMENT_PROMPT look-up table in the BICS database.

CREATE TABLE DEPARTMENT_PROMPT(DEPT_NUM VARCHAR2(500), DEPT_NAME VARCHAR2(500));

INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('0','Administration');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('1','Consulting');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('2','Education');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('3','Executive');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('4','Facilities');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('5','Finance');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('6','Fraternal Organizations');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('7','Government');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('8','Human Resources');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('9','Operations');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('10','Other');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('11','Purchasing');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('12','Religion');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('13','Research & Development');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('14','Sales & Marketing');
INSERT INTO DEPARTMENT_PROMPT(DEPT_NUM, DEPT_NAME) VALUES('15','Systems');

5)    Create and populate the MANAGEMENT_LEVEL_PROMPT look-up table in the BICS database.

CREATE TABLE MANAGEMENT_LEVEL_PROMPT(ML_NUM VARCHAR2(500), ML_NAME VARCHAR2(500));

INSERT INTO MANAGEMENT_LEVEL_PROMPT(ML_NUM, ML_NAME) VALUES('0','C-Level');
INSERT INTO MANAGEMENT_LEVEL_PROMPT(ML_NUM, ML_NAME) VALUES('1','Vice-President');
INSERT INTO MANAGEMENT_LEVEL_PROMPT(ML_NUM, ML_NAME) VALUES('2','Director');
INSERT INTO MANAGEMENT_LEVEL_PROMPT(ML_NUM, ML_NAME) VALUES('3','Manager');
INSERT INTO MANAGEMENT_LEVEL_PROMPT(ML_NUM, ML_NAME) VALUES('4','Other');

Step 6 – Parse JSON


For a text version of the PL/SQL snippet click here.

Replace URL, username, password, identity domain, and body parameters.

The code snippet is grouped into the following logical components, identified below by the colors used in the original post.

Blue

Rest Request that retrieves the data in JSON format as a clob.

Yellow

Lookup “Value” codes based on users selection of “DisplayValue” descriptions.

Purple

Logic to handle ‘All Column Values’ when run from BICS. (This could be handled in many other ways … and is just a suggestion.)

Green

Convert JSON clob to readable list -> Parse JSON values and insert into database.

Code advice: Keep in mind that p_path is expecting a string. Therefore, it is necessary to concatenate any dynamic variables such as the LOOP / Count ‘i’ variable.

Red

Array to handle entering multiple duns numbers.

Grey

Left pad Duns numbers with zeros – as this is how they are stored in Oracle Social Data and Insight Cloud Service.
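
For example, the padding can be verified quickly in SQL Workshop (the literal below is a made-up 7-digit Duns number):

SELECT LPAD('1234567', 9, '0') FROM dual;  -- returns 001234567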

CREATE OR REPLACE PROCEDURE SP_LOAD_SOCIAL_DATA_CONTACTS(
p_company_duns varchar2
,p_department varchar2
,p_management_level varchar2
) IS
l_ws_response_clob CLOB;
l_ws_url VARCHAR2(500) := 'YourURL/data/api/v2/search';
l_body CLOB;
l_num_contacts NUMBER;
v_array apex_application_global.vc_arr2;
l_filter_fields VARCHAR2(500);
l_pad_duns VARCHAR2(9);
l_department_num VARCHAR2(100);
l_management_level_num VARCHAR2(100);
BEGIN
DELETE FROM SOCIAL_DATA_CONTACTS;
--lookup department code
IF p_department != 'All Column Values' THEN
SELECT MAX(DEPT_NUM) into l_department_num
FROM DEPARTMENT_PROMPT
WHERE DEPT_NAME = p_department;
END IF;
--lookup management level code
IF p_management_level != 'All Column Values' THEN
SELECT MAX(ML_NUM) into l_management_level_num
FROM MANAGEMENT_LEVEL_PROMPT
WHERE ML_NAME = p_management_level;
END IF;
--loop through company duns numbers
v_array := apex_util.string_to_table(p_company_duns, ',');
for j in 1..v_array.count LOOP
--pad duns numbers with zeros - as they are stored in the system this way
l_pad_duns := LPAD(v_array(j),9,'0');
--logic to handle All Column Values
IF p_department != 'All Column Values' AND p_management_level != 'All Column Values' THEN
l_filter_fields := '"filterFields":[{"name":"company.gl_ult_dun","value":"' || l_pad_duns || '"},{"name":"person.department","value":"'|| l_department_num || '"},{"name":"person.management_level","value":"'|| l_management_level_num || '"}]';
ELSE
IF p_department = 'All Column Values' AND p_management_level != 'All Column Values' THEN
l_filter_fields := '"filterFields":[{"name":"company.gl_ult_dun","value":"' || l_pad_duns || '"},{"name":"person.management_level","value":"'|| l_management_level_num || '"}]';
ELSE
IF p_department != 'All Column Values' AND p_management_level = 'All Column Values' THEN
l_filter_fields := '"filterFields":[{"name":"company.gl_ult_dun","value":"' || l_pad_duns || '"},{"name":"person.department","value":"'|| l_department_num || '"}]';
ELSE
l_filter_fields := '"filterFields":[{"name":"company.gl_ult_dun","value":"' || l_pad_duns || '"}]';
END IF;
END IF;
END IF;
--build dynamic body
l_body := '{"objectType":"People","limit":"100",' || l_filter_fields || ',"returnFields":["company.gl_ult_dun","person.parent_duns","person.first_name","person.last_name","person.department","person.management_level","person.gender_code","person.title","person.standardized_title","person.age_range","person.company_phone","person.company_phone_extn","name.mail","person.co_offical_id"]}';
--use REST to retrieve the Data Service Cloud - Social Data
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
apex_web_service.g_request_headers(2).name := 'X-ID-TENANT-NAME';
apex_web_service.g_request_headers(2).value := 'identity domain';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_username => 'UserName',
p_password => 'Password',
p_body => l_body,
p_http_method => 'POST'
);
--parse the clob as JSON
apex_json.parse(l_ws_response_clob);
--get total hits
l_num_contacts := CAST(apex_json.get_varchar2(p_path => 'totalHits') AS NUMBER);
--loop through total hits and insert JSON data into database
IF l_num_contacts > 0 THEN
for i in 1..l_num_contacts LOOP

INSERT INTO SOCIAL_DATA_CONTACTS(COMPANY_DUNS_NUMBER, CONTACT_DUNS_NUMBER,FIRST_NAME,LAST_NAME,DEPARTMENT,MANAGEMENT_LEVEL,GENDER,JOB_TITLE,STANDARDIZED_TITLE,AGE_RANGE,COMPANY_PHONE,COMPANY_PHONE_EXT,EMAIL_ADDRESS,INDIVIDUAL_ID)
VALUES
(
v_array(j),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[2].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[3].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[4].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[5].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[6].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[7].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[8].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[9].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[10].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[11].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[12].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].email'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].id')
);
end loop; --l_num_contacts
END IF;    --greater than 0

end loop; --v_array.count
commit;
END SP_LOAD_SOCIAL_DATA_CONTACTS;

Step 7 – Execute PL/SQL

 

Run the PL/SQL in Apex SQL Commands. Test various combinations. Test with duns numbers with less than 9 digits.

SP_LOAD_SOCIAL_DATA_CONTACTS('123456789','All Column Values','All Column Values');

SP_LOAD_SOCIAL_DATA_CONTACTS('123456789','Administration','All Column Values');

SP_LOAD_SOCIAL_DATA_CONTACTS('123456789','All Column Values','Manager');

SP_LOAD_SOCIAL_DATA_CONTACTS('123456789','Administration','Manager');

SP_LOAD_SOCIAL_DATA_CONTACTS('1234567','All Column Values','All Column Values');

Step 8 – Review Results

 

Confirm data was inserted as expected.

SELECT * FROM SOCIAL_DATA_CONTACTS;

Part B – Trigger Results Real-Time

 

Oracle Social Data and Insight Cloud Service data will be retrieved by the following sequence of events.

 

a)    A BICS Consumer selects required input parameters via a Dashboard Prompt.

b)    Selected Dashboard Prompt values are passed to Request Variables.

(The Request Variable temporarily changes the state of the Repository Variable for that Session.)

c)    Session Variables (NQ_SESSION) are read by the Modeler Expression using the VALUEOF function.

d)    EVALUATE is used to call a Database Function and pass the Session Variable values to the Database Function.

e)    The Database Function calls a Stored Procedure – passing parameters from the Function to the Stored Procedure

f)    The Stored Procedure uses apex_web_service to call the Rest API and retrieve data in JSON format as a clob.

g)    The clob is parsed and results are returned and inserted into a BICS database table.

h)    Results are displayed on a BICS Analysis Request, and presented to the BICS Consumer via a Dashboard.


Step 9 – Add Clear BI Server Cache Logic


For a text version of the PL/SQL snippets in step 9-11 click here.

This step is optional depending on how cache is refreshed / recycled.

Replace BICS URL, BICS User, BICS Pwd, and BICS Identity Domain.

Add cache code after insert commit (after all inserts / updates are complete).

Repeat testing undertaken in “Step 7 – Execute PL/SQL”.

DECLARE
l_bics_response_clob CLOB;
BEGIN
--clear BICS BI Server Cache
apex_web_service.g_request_headers(1).name := 'X-ID-TENANT-NAME';
apex_web_service.g_request_headers(1).value := 'BICS Identity Domain';
l_bics_response_clob := apex_web_service.make_rest_request
(
p_url => 'https://BICS_URL/bimodeler/api/v1/dbcache',
p_http_method => 'DELETE',
p_username => 'BICS UserName',
p_password => 'BICS Pwd'
);
--dbms_output.put_line('Status:' || apex_web_service.g_status_code);
END;


Step 10 – Create Function – to execute Stored Procedure

 

CREATE OR REPLACE FUNCTION LOAD_SOCIAL_DATA_CONTACTS
(
p_company_duns IN VARCHAR2,
p_department IN VARCHAR2,
p_management_level IN VARCHAR2
) RETURN INTEGER
IS PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
SP_LOAD_SOCIAL_DATA_CONTACTS(p_company_duns,p_department,p_management_level);
COMMIT;
RETURN 1;
END LOAD_SOCIAL_DATA_CONTACTS;

Step 11 – Test Function – that executes Stored Procedure

 

SELECT LOAD_SOCIAL_DATA_CONTACTS('123456789','All Column Values','All Column Values') FROM DUAL;

 

Step 12 – Create Dummy Table – to reference the EVALUATE function


For a text version of the PL/SQL snippet click here


1)    Create Table

CREATE TABLE DUMMY_REFRESH
(REFRESH_TEXT VARCHAR2(255));

2)    Insert descriptive text into table

INSERT INTO DUMMY_REFRESH (REFRESH_TEXT)
VALUES ('Hit Refresh to Update Data');

3)    Confirm insert was successful

SELECT * FROM DUMMY_REFRESH;

 

Step 13 – Create Repository Variables


Create a Repository Variable in the BICS Modeler tool for each parameter that needs to be passed to the function and stored procedure.

Snap12

Snap13

Snap14

 

Step 14 –  Create Expression in Data Modeler – that references EVALUATE function

 

Create the expression in the BICS Modeler tool using EVALUATE to call the function and pass necessary parameters to the function and stored procedure.

EVALUATE('LOAD_SOCIAL_DATA_CONTACTS(%1,%2,%3)',VALUEOF(NQ_SESSION."r_company_duns"),VALUEOF(NQ_SESSION."r_department"),VALUEOF(NQ_SESSION."r_management_level"))

 

Snap2

 

Step 15 – Create Analysis – that executes EVALUATE function


Create an Analysis and add both fields from the DUMMY_REFRESH table. Hide both fields so that nothing is returned.

 Snap4

Snap5

Snap6

Step 16 – Create Analysis – to display results

 
Add all fields, or just the desired fields, from the SOCIAL_DATA_CONTACTS table.

 Snap16

Step 17 – Create Dashboard Prompt


For each Prompt set the corresponding Request Variable.

*** These must exactly match the names of the repository variables created in “Step 13 – Create Repository Variables” ***

Snap9

Snap10

Snap11

Snap15

For each prompt, manually add the text for 'All Column Values' and exclude NULLs.

Snap19

Snap20

The Dashboard also contains a workaround to deal with multiple Duns numbers. Currently VALUELISTOF is not available in BICS; therefore, it is not possible to pass multiple values from a Prompt to a request / session variable, since VALUEOF can only handle a single value.

A suggested workaround is to put the multi-selection list into a single comma-delimited string using LISTAGG. The single string can then be read by VALUEOF, and logic in the stored procedure can loop through the resulting array.

CAST(EVALUATE_AGGR('LISTAGG(%1,%2) WITHIN GROUP (ORDER BY %1 DESC)',"DUNS_NUMBERS"."COMPANY_DUNS_NUMBER",',') AS VARCHAR(500))

 
Step 18 – Create Dashboard


There are many ways to design the Dashboard for the BICS Consumer. One suggestion is below:

The Dashboard is processed in seven clicks.

1)    Select Duns Number(s).

2)    Select Confirm Duns Number (only required for multi-select workaround described in Step 17).

3)    Select Department or run for ‘All Column Values’.

4)    Select Management Level or run for ‘All Column Values’.

5)    Click Apply. *** This is a very important step as Request Variables are only read once Apply is hit ***

6)    Click Refresh – to kick off Refresh Analysis Request (built in Step 15).

7)    Click Get Contact Details to display Contact Analysis Request (built in Step 16).

Snap7

Ensure the Refresh Report Link is made available on the Refresh Analysis Request – to allow the BICS Consumer to override cache.

Snap17

Optional: Make use of Link – Within the Dashboard on Contact Analysis Request to create “Get Contact Details” link.

Snap18

Further Reading


Click here for the Application Express API Reference Guide – MAKE_REST_REQUEST Function.

Click here for the Application Express API Reference Guide – APEX_JSON Package.

Click here for the REST APIs for Oracle Social Data and Insight Cloud Service guide.

Click here for more A-Team BICS Blogs.

Summary


This article provided a set of examples that leverage the APEX_WEB_SERVICE_API to integrate Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS) using the Connect REST API web services.

The use case shown was for BICS and Oracle Social Data and Insight Cloud Service integration. However, many of the techniques referenced could be used to integrate Oracle Social Data and Insight Cloud Service with other Oracle and non-Oracle applications.

Similarly, the Apex MAKE_REST_REQUEST and APEX_JSON examples could be easily modified to integrate BICS or standalone Oracle Apex with any other REST web service that is accessed via a URL and returns JSON data.

Techniques referenced in this blog could be useful for those building BICS REST ETL connectors and plug-ins.

Key topics covered in this article include: Oracle Business Intelligence Cloud Service (BICS), Oracle Social Data and Insight Cloud Service, Oracle Apex API, APEX_JSON, apex_web_service.make_rest_request, PL/SQL, BICS Variables (Request, Repository, Session), BICS BI Server Cache, BICS Functions (EVALUATE, VALUEOF, LISTAGG), and Cloud to Cloud integration.

Oracle HCM Cloud – Bulk Integration Automation Using SOA Cloud Service


Introduction

Oracle Human Capital Management (HCM) Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the batch integration to load and extract data to and from the HCM cloud. HCM provides the following bulk integration interfaces and tools:

HCM Data Loader (HDL)

HDL is a powerful tool for bulk-loading data from any source to Oracle Fusion HCM. It supports important business objects belonging to key Oracle Fusion HCM products, including Oracle Fusion Global Human Resources, Compensation, Absence Management, Performance Management, Profile Management, Global Payroll, Talent and Workforce Management. For detailed information on HDL, please refer to this.

HCM Extracts

HCM Extract is an outbound integration tool that lets you select HCM data elements, extracting them from the HCM database and archiving these data elements as XML. This archived raw XML data can be converted into a desired format and delivered to supported channels and recipients.

Oracle Fusion HCM provides the above tools with comprehensive user interfaces for initiating data uploads, monitoring upload progress, and reviewing errors, with real-time information provided for both the import and load stages of upload processing. Fusion HCM provides the tools, but additional orchestration is still required, such as generating the FBL or HDL file, uploading the file to WebCenter Content, and initiating the FBL or HDL web services. This post describes how to design and automate these steps by leveraging Oracle Service Oriented Architecture (SOA) Cloud Service deployed on Oracle’s cloud Platform as a Service (PaaS) infrastructure. For more information on SOA Cloud Service, please refer to this.

Oracle SOA is the industry’s most complete and unified application integration and SOA solution. It transforms complex application integration into agile and re-usable service-based components to speed time to market, respond faster to business requirements, and lower costs. SOA facilitates the development of enterprise applications as modular business web services that can be easily integrated and reused, creating a truly flexible, adaptable IT infrastructure. For more information on getting started with Oracle SOA, please refer to this. For developing SOA applications using SOA Suite, please refer to this.

These bulk integration interfaces and patterns are not applicable to Oracle Taleo.

Main Article

 

HCM Inbound Flow (HDL)

Oracle WebCenter Content (WCC) acts as the staging repository for files to be loaded and processed by HDL. WCC is part of the Fusion HCM infrastructure.

The loading process for FBL and HDL consists of the following steps:

  • Upload the data file to WCC/UCM using WCC GenericSoapPort web service
  • Invoke the “LoaderIntegrationService” or the “HCMDataLoader” to initiate the loading process.

However, the above steps assume the existence of an HDL file and do not provide a mechanism to generate an HDL file for the respective objects. In this post we will use a sample use case in which we receive the data file from the customer, transform the data to generate an HDL file, and then initiate the loading process.

The following diagram illustrates the typical orchestration of the end-to-end HDL process using SOA cloud service:

 

hcm_inbound_v1

HCM Outbound Flow (Extract)

The “Extract” process for HCM has the following steps:

  • An Extract report is generated in HCM either by user or through Enterprise Scheduler Service (ESS)
  • Report is stored in WCC under the hcm/dataloader/export account.

 

However, the report must then be delivered to its destination depending on the use cases. The following diagram illustrates the typical end-to-end orchestration after the Extract report is generated:

hcm_outbound_v1

 

For HCM bulk integration introduction including security, roles and privileges, please refer to my blog Fusion HCM Cloud – Bulk Integration Automation using Managed File Trasnfer (MFT) and Node.js. For introduction to WebCenter Content Integration services using SOA, please refer to my blog Fusion HCM Cloud Bulk Automation.

 

Sample Use Case

Assume that a customer periodically receives benefits data from a partner in a CSV (comma-separated value) file. This data must be converted into HDL format for the “ElementEntry” object, and the loading process must then be initiated in the Fusion HCM cloud.

This is a sample source data:

E138_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,23,Reason,Corrected all entry value,Date,2013-01-10
E139_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,33,Reason,Corrected one entry value,Date,2013-01-11

This is the HDL format of the ElementEntry object that needs to be generated from the above sample file:

METADATA|ElementEntry|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|EntryType|CreatorType
MERGE|ElementEntry|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|E|H
MERGE|ElementEntry|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|E|H
METADATA|ElementEntryValue|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|InputValueName|ScreenEntryValue
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Amount|23
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected all entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-10
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Amount|33
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected one entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-11

SOA Cloud Service Design and Implementation

A canonical schema pattern has been implemented to design the end-to-end inbound bulk integration process – from the source data file to generating the HDL file and initiating the loading process in the HCM cloud. An XML schema of the HDL object “ElementEntry” is created, the source data is mapped to this HDL schema, and SOA activities generate the HDL file.

Having a canonical pattern automates the generation of the HDL file and makes it a reusable asset for various interfaces. The developer or business user only needs to focus on mapping the source data to this canonical schema. All other activities – generating the HDL file, compressing and encrypting it, uploading it to WebCenter Content, and invoking the web services – need to be developed only once and then become reusable assets.

Please refer to Wikipedia for the definition of the Canonical Schema Pattern.

The design addresses the following considerations:

1. Convert source data file from delimited format to XML

2. Generate Canonical Schema of ElementEntry HDL Object

3. Transform source XML data to HDL canonical schema

4. Generate and compress HDL file

5. Upload the HDL file to WebCenter Content and invoke the HDL web service

 

Please refer to SOA Cloud Service Develop and Deploy for introduction and creating SOA applications.

SOA Composite Design

This is the composite based on the above implementation principles:

hdl_composite

Convert Source Data to XML

“GetEntryData” in the above composite is a File Adapter service. It is configured to use Native Format Builder to convert CSV data to XML format. For more information on the File Adapter, refer to this. For more information on Native Format Builder, refer to this.

The following provides detailed steps on how to use Native Format Builder in JDeveloper:

In Native Format Builder, select the delimited format type and use the source data as a sample to generate an XML schema. Please see the following diagrams:

FileAdapterConfig

nxsd1

nxsd2_v1 nxsd3_v1 nxsd4_v1 nxsd5_v1 nxsd6_v1 nxsd7_v1

Generate XML Schema of ElementEntry HDL Object

A similar approach is used to generate the ElementEntry schema. It has two main objects: ElementEntry and ElementEntryValue.

ElementEntry Schema generated using Native Format Builder

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://TargetNamespace.com/GetEntryHdlData" targetNamespace="http://TargetNamespace.com/GetEntryHdlData" elementFormDefault="qualified" attributeFormDefault="unqualified" nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
<xsd:element name="Root-Element">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Entry" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="METADATA" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementEntry" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveStartDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveEndDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="AssignmentNumber" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="MultipleEntryCount" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="LegislativeDataGroupName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EntryType" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="CreatorType" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:annotation>
<xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
<xsd:appinfo>USEHEADER=false</xsd:appinfo>
</xsd:annotation>
</xsd:schema>

ElementEntryValue Schema generated using Native Format Builder

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://TargetNamespace.com/GetEntryValueHdlData" targetNamespace="http://TargetNamespace.com/GetEntryValueHdlData" elementFormDefault="qualified" attributeFormDefault="unqualified" nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
<xsd:element name="Root-Element">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="EntryValue" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="METADATA" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementEntryValue" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveStartDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveEndDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="AssignmentNumber" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="MultipleEntryCount" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="LegislativeDataGroupName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="InputValueName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ScreenEntryValue" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:annotation>
<xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
<xsd:appinfo>USEHEADER=false</xsd:appinfo>
</xsd:annotation>
</xsd:schema>

In Native Format Builder, change the “|” separator to “,” in the sample file before generating the schema, and then change the terminator back to “|” for each element in the generated schema.

Transform Source XML Data to HDL Canonical Schema

Since we are using a canonical schema, all we need to do is map the source data appropriately and Native Format Builder will convert each object into the HDL output file. The transformation could be complex depending on the source data format and the organization of data values. In our sample use case, each row has one ElementEntry object and three ElementEntryValue sub-objects.

The following provides the organization of the data elements in a single row of the source:

Entry_Desc_v1

The main ElementEntry attributes are mapped from each respective row, but the ElementEntryValue attributes are located at the end of each row. In this sample, that results in three ElementEntryValue entries per row. This can be achieved easily by splitting and transforming each row with different mappings, as follows:

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns “1” from above diagram

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns “2” from above diagram

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – map pair of columns “3” from above diagram

 

Metadata Attribute

The most common use case is the “merge” action, used for creating and updating objects. In this example it is hard-coded to “merge”, but the action could be made dynamic if the source data row carries this information. The “delete” action removes the entire record and must not be used with a “merge” instruction for the same record, as HDL cannot guarantee the order in which the instructions will be processed. It is highly recommended to correct the data rather than to delete and recreate it using the “delete” action. The deleted data cannot be recovered.

 

This is the sample XSLT transformation developed in JDeveloper to split each row into three rows for the ElementEntryValue object:

<xsl:template match="/">
<tns:Root-Element>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C9"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C10"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C11"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C12"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C13"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C14"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
</tns:Root-Element>
</xsl:template>

BPEL Design – “ElementEntryPro…”

This is the BPEL component where all the major orchestration activities are defined. In this sample, all the activities after the transformation are reusable and can be moved to a separate composite. A separate composite could then be developed just for transformation and data enrichment, invoking the reusable composite at the end to complete the loading process.

 

hdl_bpel_v2

 

 

SOA Cloud Service Instance Flows

The following diagram shows an instance flow:

ElementEntry Composite Instance

instance1

BPEL Instance Flow

audit_1

Receive Input Activity – receives the delimited data converted to XML format through Native Format Builder using the File Adapter

audit_2

Transformation to Canonical ElementEntry data

Canonical_entry

Transformation to Canonical ElementEntryValue data

Canonical_entryvalue

Conclusion

This post demonstrates how to automate HCM inbound and outbound patterns using SOA Cloud Service. It shows how to convert a customer’s data to HDL format and then initiate the loading process. This process can also be replicated for other Fusion Applications pillars such as Oracle Enterprise Resource Planning (ERP).

Integrating Social Relationship Management (SRM) with Oracle Business Intelligence Cloud Service (BICS)


Introduction

 

This article outlines how to integrate Social Relationship Management (SRM) with Oracle Business Intelligence Cloud Service (BICS).

Bringing the SRM records into BICS enables the data consumer to refine and focus on relevant data through prompts and filters. Additionally, once in BICS, the SRM data can be mashed with complementary datasets.

Three patterns are covered:

(a) Managing SRM authentication, authorization, and token refresh.

(b) Retrieving SRM data in JSON format using REST Web Services.

(c) Parsing and Inserting SRM data into the Schema Service database with Apex PL/SQL functions.


The article concludes at the PL/SQL code snippet stage. Suggestions on how to schedule, trigger, and display results in BICS can be found in past A-Team BICS Blogs.

SRM screen prints in this article have been translated to English from Spanish, so text may differ slightly from the originals.

For those very familiar with the SRM API and the BICS API, it is possible to jump straight ahead to Step 4 – #4 “Final Code Snippet – add apex_json.parse code to PL/SQL”. That said, Steps 1-3 are valuable for understanding the solution and will be needed should debugging be required.

 

Main Article

 

Step 1 – Review SRM Developer Platform API Documentation

 

Begin by reviewing the SRM Developer Platform API documentation. This article covers Refreshing Tokens (oauth/token) and the List Messages Method (engage/v1/messages). There are many other objects and methods available in the API that may be useful for various integration scenarios.

Step 2 – Authentication

 

Detailed documented steps for Authentication can be found here. The following is a summary of what was found to be the minimal steps required for BICS.


Prerequisites:

* SRM Account must include message routing from Listen to Engage within Social Engagement and Monitoring (SEM) for these messages to be available in the API.

* SRM Account must have administrator privileges to access the Account link.


1)    Register the Client Application

Go to: https://accounts.vitrue.com/#api_apps

Click: Create New Application

Enter Application Name & URL(URI) Call Back.

Make a note of the Application Name, Callback URL (URI), Client ID (Customer ID), and Client Secret.

Snap1

Snap2


2)    Request User Authorization and Obtain the Authorization Code

Authorize URL:

https://gatekeeper.vitrue.com/oauth/authorize?client_id=abc123&scope=engage&redirect_uri=https://accounts.vitrue.com&response_type=code

Replace client_id and redirect_uri with those associated with the API Application.

There are 3 different scopes:

engage = Access endpoints for messages and replies
publish = Access endpoints for posts
admin = Access endpoints for accounts, bundles, and resources

To determine what scope is required, review the URL of the method being used. This example uses engage/v1/messages, thus scope = engage is required. When using admin/v1/accounts, scope = admin is required. If the incorrect scope is referenced when running the method, the following error will be received:

{"Unauthorized.","detail":"The user credentials provided with the request are invalid or do not provide the necessary level of access.","status":401}

Copy the Authorize URL into a browser. Log into SRM.

Untitled1

Click “Authorize” to grant access. It is very important that the correct scope is displayed. In this case “Engage”.

Untitled3

If successful the browser will re-direct to the URL (URI) Call Back / Redirect URI. The code will be appended. Make a note of the code.

Untitled2


3)    Exchange the Authorization Code for Access Tokens

If necessary install Curl. Download Curl from here.

Run the following replacing client_id, client_secret, redirect_uri, code, and scope.

It is very important that the scope matches what was used to generate the code.

curl -X POST -d grant_type=authorization_code -d client_id=abc123 -d client_secret=abc123 -d redirect_uri="https://accounts.vitrue.com" -d code=abc123 -d scope=engage https://gatekeeper.vitrue.com/oauth/token -k

If successful the access_token and refresh_token will be returned in JSON format. Copy the JSON containing the tokens and save it to notepad for future reference.

{"access_token":"abc123","token_type":"bearer","expires_in":7200,"refresh_token":"abc123","scope":"engage"}

Tokens expire after two hours. If tokens expire generate new tokens by running grant_type=refresh_token.

curl -X POST -d grant_type=refresh_token -d refresh_token=abc123 -d client_id=abc123 -d client_secret=abc123 -d redirect_uri="https://accounts.vitrue.com" https://gatekeeper.vitrue.com/oauth/token -k

If tokens get lost or out of sync obtain a new “authorization code” and repeat the process to get a new code and new access / refresh tokens. If tokens get out of sync the following error will be received:

{"error":"invalid_request","error_description":"The request is missing a required parameter, includes an unsupported parameter value, or is otherwise malformed."}

If an attempt is made to re-authorize a code that is still active the following error will be received. Thus, the need to get a fresh code.

{"error":"invalid_grant","error_description":"The provided authorization grant is invalid, expired, revoked, does not match the redirection URI used in the authorization request, or was issued to another client."}


4)    Save the refresh tokens to the BICS Database

CREATE TABLE REFRESH_TOKEN(CREATE_DATE TIMESTAMP, access_token VARCHAR(500), token_type VARCHAR(10), expires_in VARCHAR(10), refresh_token varchar(500), scope varchar(10));

Replace 'abc123' with the current refresh token. Only the refresh token is needed at this stage.

INSERT INTO REFRESH_TOKEN(CREATE_DATE,refresh_token) VALUES (SYSDATE,'abc123');

5)    Refresh token from BICS

This code snippet provides a very basic example of how to store the refresh token in BICS. It should be used for demo purposes only. In production systems, more secure options for storing refresh tokens and linking them to the user’s record or profile should be considered.

Open SQL Workshop from Oracle Application Express

Snap2

Launch SQL Commands

Snap3

Use the code snippet below as a starting point to build the refresh token PL/SQL.

For a text version of the code snippet click here.

Replace the redirect_uri, client_id, and client_secret values.

Re-run the code snippet to confirm that a new refresh token gets inserted into the REFRESH_TOKEN table each time.

The REFRESH_TOKEN table should only ever contain one record.

DECLARE
l_ws_response_clob CLOB;
l_refresh_token VARCHAR(500);
l_body VARCHAR(500);
BEGIN
SELECT MAX(refresh_token) INTO l_refresh_token FROM REFRESH_TOKEN;
dbms_output.put_line('Old Refresh Token: ' || dbms_lob.substr(l_refresh_token,12000,1));
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
l_body := '{
"refresh_token": "' || l_refresh_token || '",
"grant_type": "refresh_token",
"redirect_uri": "https://accounts.vitrue.com",
"client_id": "abc123",
"client_secret": "abc123"
}';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => 'https://gatekeeper.vitrue.com/oauth/token',
p_body => l_body,
p_http_method => 'POST'
);
apex_json.parse(l_ws_response_clob);
DELETE FROM REFRESH_TOKEN;
INSERT INTO REFRESH_TOKEN(CREATE_DATE, access_token, token_type, expires_in, refresh_token, scope)
VALUES (
SYSDATE,
apex_json.get_varchar2(p_path => 'access_token'),
apex_json.get_varchar2(p_path => 'token_type'),
apex_json.get_varchar2(p_path => 'expires_in'),
apex_json.get_varchar2(p_path => 'refresh_token'),
apex_json.get_varchar2(p_path => 'scope')
);
dbms_output.put_line('New Refresh Token: ' || apex_json.get_varchar2(p_path => 'refresh_token'));
COMMIT;
END;
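
The snippet above refreshes the token unconditionally on every run. As a variation – a minimal sketch only, assuming the CREATE_DATE and expires_in values written by the snippet are populated – the refresh call could be skipped while the stored access token is still fresh:

DECLARE
l_create_date REFRESH_TOKEN.CREATE_DATE%TYPE;
l_expires_in NUMBER;
BEGIN
SELECT MAX(CREATE_DATE), MAX(TO_NUMBER(expires_in)) INTO l_create_date, l_expires_in FROM REFRESH_TOKEN;
--treat the token as fresh if it was stored less than (expires_in - 300) seconds ago
IF l_create_date > SYSDATE - NUMTODSINTERVAL(NVL(l_expires_in,0) - 300, 'SECOND') THEN
dbms_output.put_line('Stored access token is still fresh - the refresh call can be skipped');
ELSE
dbms_output.put_line('Stored access token is missing or expired - run the refresh snippet above');
END IF;
END;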

 

Step 3 – Run List Messages Method (engage/v1/messages)


For a text version of the code snippet click here.


Code Breakdown


Orange – Refresh token code (from Step 2).

Blue – Reads Access Token from REFRESH_TOKEN table

Light Green – Rest Web Service for List Messages Method (engage/v1/messages). Returns data as a clob.

Bright Green – bundleId and resourceId parameters. These were identified through the “List Accounts Method” (admin/v1/accounts) and “List Bundles Method” (admin/v1/accounts/:account_id/bundles); a lookup sketch is shown after the code block below.

DECLARE
l_ws_response_clob CLOB;
l_refresh_token VARCHAR(500);
l_body VARCHAR(500);
l_access_token VARCHAR(500);
l_ws_response_clob2 CLOB;
l_ws_url VARCHAR2(500) := 'https://public-api.vitrue.com/engage/v1/messages?bundleId=3876&resourceId=108641';
BEGIN
SELECT MAX(refresh_token) INTO l_refresh_token FROM REFRESH_TOKEN;
--dbms_output.put_line('Old Refresh Token: ' || dbms_lob.substr(l_refresh_token,12000,1));
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
l_body := '{
"refresh_token": "' || l_refresh_token || '",
"grant_type": "refresh_token",
"redirect_uri": "https://accounts.vitrue.com",
"client_id": "abc123",
"client_secret": "abc123"
}';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => 'https://gatekeeper.vitrue.com/oauth/token',
p_body => l_body,
p_http_method => 'POST'
);
apex_json.parse(l_ws_response_clob);
DELETE FROM REFRESH_TOKEN;
INSERT INTO REFRESH_TOKEN(CREATE_DATE, access_token, token_type, expires_in, refresh_token, scope)
VALUES (
SYSDATE,
apex_json.get_varchar2(p_path => 'access_token'),
apex_json.get_varchar2(p_path => 'token_type'),
apex_json.get_varchar2(p_path => 'expires_in'),
apex_json.get_varchar2(p_path => 'refresh_token'),
apex_json.get_varchar2(p_path => 'scope')
);
--dbms_output.put_line('New Refresh Token: ' || apex_json.get_varchar2(p_path => 'refresh_token'));
--Get Access Token
SELECT MAX(access_token) INTO l_access_token FROM REFRESH_TOKEN;
dbms_output.put_line(dbms_lob.substr(l_access_token,12000,1));
--Set Headers
apex_web_service.g_request_headers(1).name := 'Authorization';
apex_web_service.g_request_headers(1).value := 'Bearer ' || l_access_token;
apex_web_service.g_request_headers(2).name := 'Accept';
apex_web_service.g_request_headers(2).value := 'application/json';
--Get Message
l_ws_response_clob2 := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_http_method => 'GET'
);
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,1));
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,12001));
dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,24001));
COMMIT;
END;
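
The bundleId and resourceId parameters hard-coded above were identified up front with the admin-scoped List Accounts and List Bundles methods. The following is a minimal sketch of such a lookup call; it assumes an access token obtained with scope = admin is already stored in REFRESH_TOKEN (the engage-scoped token used above will not work for admin endpoints) and that the admin methods are served from the same public-api.vitrue.com host. It simply prints the raw JSON so the account, bundle, and resource ids can be read from the output:

DECLARE
l_access_token VARCHAR(500);
l_accounts_clob CLOB;
BEGIN
SELECT MAX(access_token) INTO l_access_token FROM REFRESH_TOKEN;
apex_web_service.g_request_headers(1).name := 'Authorization';
apex_web_service.g_request_headers(1).value := 'Bearer ' || l_access_token;
apex_web_service.g_request_headers(2).name := 'Accept';
apex_web_service.g_request_headers(2).value := 'application/json';
--List Accounts Method (admin/v1/accounts); the account ids returned here feed the List Bundles Method (admin/v1/accounts/:account_id/bundles)
l_accounts_clob := apex_web_service.make_rest_request
(
p_url => 'https://public-api.vitrue.com/admin/v1/accounts',
p_http_method => 'GET'
);
dbms_output.put_line(dbms_lob.substr(l_accounts_clob,12000,1));
END;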

 

Step 4 – Parse and Insert SRM messages into BICS


1. Validate the JSON


Only valid JSON can be parsed by Apex. If the code fails at the parsing stage, it is recommended to validate it.

There are many free online JSON validating tools such as: https://jsonformatter.curiousconcept.com/

For a sample SRM JSON payload click here.
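
A quick programmatic check is also possible inside BICS: apex_json.parse raises an error when given an invalid document, so wrapping it in an exception handler turns it into a simple validator. The following is a minimal sketch; the inline l_json value is just a stand-in for whatever response CLOB is being tested:

DECLARE
l_json CLOB := '{"count":1,"items":[{"id":"123"}]}'; --substitute the response CLOB to be validated
BEGIN
apex_json.parse(l_json);
dbms_output.put_line('Payload is valid JSON');
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Payload is NOT valid JSON: ' || SQLERRM);
END;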

Viewing the JSON in the JSON formatter allows for easy expanding and collapsing of the different elements – assisting with choosing the desired fields to bring into BICS.

For this example id, type, resourceName, resourceType, and body will be selected.

Snap3

2. Test JSON Path Expressions


When formulating the JSON path expression, it may be useful to use an online JSON Path Expression Tester such as https://jsonpath.curiousconcept.com.

The below example shows testing the “id” path.

Snap4

Snap5

JSON Path Expressions for all required fields:

Value                       JSON Path Expression 
count                       count
id                          items[*].id
type                        items[*].type
resourceName                items[*].resource.resourceName
resourceType                items[*].resource.resourceType
body                        items[*].body
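
As a preview of how these path expressions translate into APEX_JSON calls in Step 4, the items[*] wildcard becomes a loop index built into the path string. The following is a minimal sketch; the inline sample document is a simplified stand-in for the real response CLOB fetched in Step 3:

DECLARE
l_ws_response_clob2 CLOB := '{"count":1,"items":[{"id":"1","type":"message","resource":{"resourceName":"Demo","resourceType":"FACEBOOK"},"body":"Hello"}]}';
BEGIN
apex_json.parse(l_ws_response_clob2);
--'count' drives the loop; each path in the table above is addressed per item as items[i].<field>
FOR i IN 1..TO_NUMBER(apex_json.get_varchar2(p_path => 'count')) LOOP
dbms_output.put_line(
apex_json.get_varchar2(p_path => 'items[' || i || '].id') || ' | ' ||
apex_json.get_varchar2(p_path => 'items[' || i || '].resource.resourceName') || ' | ' ||
apex_json.get_varchar2(p_path => 'items[' || i || '].body'));
END LOOP;
END;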

3. Create SRM_MESSAGE table in BICS

Create the table from Apex SQL Workshop -> SQL Commands.

CREATE TABLE SRM_MESSAGES(ID VARCHAR(100),TYPE VARCHAR(100),RESOURCE_NAME VARCHAR(100),RESOURCE_TYPE VARCHAR(100),BODY VARCHAR(1000));

 4. Final Code Snippet – add apex_json.parse code to PL/SQL

For a text version of the code snippet click here.

DECLARE
l_ws_response_clob CLOB;
l_refresh_token VARCHAR(500);
l_body VARCHAR(500);
l_access_token VARCHAR(500);
l_ws_response_clob2 CLOB;
l_ws_url VARCHAR2(500) := 'https://public-api.vitrue.com/engage/v1/messages?bundleId=3876&resourceId=108641';
BEGIN
SELECT MAX(refresh_token) INTO l_refresh_token FROM REFRESH_TOKEN;
--dbms_output.put_line('Old Refresh Token: ' || dbms_lob.substr(l_refresh_token,12000,1));
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
l_body := '{
"refresh_token": "' || l_refresh_token || '",
"grant_type": "refresh_token",
"redirect_uri": "https://accounts.vitrue.com",
"client_id": "abc123",
"client_secret": "abc123"
}';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => 'https://gatekeeper.vitrue.com/oauth/token',
p_body => l_body,
p_http_method => 'POST'
);
apex_json.parse(l_ws_response_clob);
DELETE FROM REFRESH_TOKEN;
INSERT INTO REFRESH_TOKEN(CREATE_DATE, access_token, token_type, expires_in, refresh_token, scope)
VALUES (
SYSDATE,
apex_json.get_varchar2(p_path => 'access_token'),
apex_json.get_varchar2(p_path => 'token_type'),
apex_json.get_varchar2(p_path => 'expires_in'),
apex_json.get_varchar2(p_path => 'refresh_token'),
apex_json.get_varchar2(p_path => 'scope')
);
--dbms_output.put_line('New Refresh Token: ' || apex_json.get_varchar2(p_path => 'refresh_token'));
--Get Access Token
SELECT MAX(access_token) INTO l_access_token FROM REFRESH_TOKEN;
--dbms_output.put_line('Access Token' || dbms_lob.substr(l_access_token,12000,1));
--Set Headers
apex_web_service.g_request_headers(1).name := 'Authorization';
apex_web_service.g_request_headers(1).value := 'Bearer ' || l_access_token;
apex_web_service.g_request_headers(2).name := 'Accept';
apex_web_service.g_request_headers(2).value := 'application/json';
--Get Message
l_ws_response_clob2 := apex_web_service.make_rest_request
(
p_url => l_ws_url,
p_http_method => 'GET'
);
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,1));
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,12001));
--dbms_output.put_line(dbms_lob.substr(l_ws_response_clob2,12000,24001));
--Delete Messages
DELETE FROM SRM_MESSAGES;
--Parse Clob to JSON
apex_json.parse(l_ws_response_clob2);
--Insert data
IF apex_json.get_varchar2(p_path => 'count') > 0 THEN
for i in 1..apex_json.get_varchar2(p_path => 'count') LOOP
INSERT INTO SRM_MESSAGES(ID,TYPE,RESOURCE_NAME,RESOURCE_TYPE,BODY)
VALUES
(
apex_json.get_varchar2(p_path => 'items['|| i || '].id'),
apex_json.get_varchar2(p_path => 'items['|| i || '].type'),
apex_json.get_varchar2(p_path => 'items['|| i || '].resource.resourceName'),
apex_json.get_varchar2(p_path => 'items['|| i || '].resource.resourceType'),
apex_json.get_varchar2(p_path => 'items['|| i || '].body')
);
end loop;
END IF;
COMMIT;
END;

Further Reading


Click here for the Application Express API Reference Guide – MAKE_REST_REQUEST Function.

Click here for the Application Express API Reference Guide – APEX_JSON Package.

Click here for the SRM Developer Platform API Guide.

Click here for more A-Team BICS Blogs.

Summary


This article provided a set of examples that leverage the APEX_WEB_SERVICE_API to integrate Social Relationship Management (SRM) with Oracle Business Intelligence Cloud Service (BICS) using the SRM Developer Platform.

The use case shown was for BICS and SRM. However, many of the techniques referenced could be used to integrate SRM with other Oracle and non-Oracle applications.

Similarly, the Apex MAKE_REST_REQUEST and APEX_JSON examples could be easily modified to integrate BICS or standalone Oracle Apex with any other REST web service that is accessed via a URL and returns JSON data.

Techniques referenced in this blog could be useful for those building BICS REST ETL connectors and plug-ins.


Behind the Delete Trigger in Sales Cloud Application Composer


Cautionary Details on Delete Trigger Behavior with TCA Objects

Developers and technically-inclined users who have ever needed to extend Oracle Sales Cloud are probably familiar with Application Composer (known as App Composer to its friends) — the built-in, browser-based collection of tools that makes it possible to extend Sales Cloud safely without requiring a level of system access that would be inconsistent with and unsafe for the cloud infrastructure. Likewise, many who have built App Composer extensions probably know about object triggers and how to add custom Groovy scripts to these events. Object trigger logic is a major part of most Sales Cloud extensions, especially when there is a need to communicate with external systems. With current Sales Cloud releases (Rel9 and Rel10), harnessing these trigger events and arming them with Groovy scripts arguably has become one of the more effective strategies for developing point-to-point data synchronization extensions.

Existing documentation on trigger usage in App Composer is located in two places: the Groovy Scripting Reference  and the guide for Customizing Sales. But due to the scope of these documents and the extent of topics that require coverage, the authors were unable to provide detailed information on triggers.  These reference guides were never meant to offer best practice recommendations on the use of specific triggers for different purposes. Given this need — that Sales Cloud extension developers need more guidance in order to be more proficient when using object triggers — there are numerous areas requiring deeper technical investigation.  These topics can, and probably should, be covered in detail in the blog arena.

One area requiring additional clarification and vetting of options is a somewhat obscure anomaly in the behavior of delete triggers across different objects in Sales Cloud. By design, objects belonging to Oracle’s Trading Community Architecture (TCA) data model – for example Accounts, Addresses, Contacts, Households, and more – are never deleted physically from the database, at least not through the native Sales Cloud application UI. Therefore, delete triggers do not fire as expected for these objects. In other words, any piece of App Composer extension logic touching TCA objects that includes a delete trigger as one of its components will probably fail. For non-TCA objects, triggers specific to delete actions work as designed. This post will explore differences in delete trigger behavior between TCA and non-TCA data objects in Sales Cloud. The illustrative use case used for this post is a requirement to keep a rudimentary audit trail of deleted object activity in Sales Cloud, tracking the object deleted along with user and timestamp values.

Prerequisites for the Use Case

A custom object, “DeleteAuditLog”, will act as the container for storing the archived object, user, and timestamp details. It has the following attributes:

Field Name       Field Type   Standard/Custom   Additional Info
CreatedBy        Text         Standard          value used to determine who performed the delete
CreationDate     Date         Standard          date of deletion
RecordName       Text         Standard
LastUpdateDate   Date         Standard
LastUpdatedBy    Text         Standard
Id               Number       Standard
ObjectId         Number       Custom            holds id of object deleted
ObjectType       Text         Custom            holds type of object deleted
ObjectDetail     Text         Custom            holds details for object deleted

Granted, the data elements that will be saved to the custom object do not represent an extremely meaningful audit trail; the intent is not to implement a complete solution but rather to demonstrate the potential of what is possible with trigger scripts.

Although not absolutely required, a global function that takes care of the DeleteAuditLog record creation and assignment of attribute values eliminates duplicate code.  Having it in place as a global function is consistent with modular coding best practices. Here are the details for this global function:

triggers8

Trigger Overview

For readers who have not yet ventured into the world of App Composer triggers, a short introduction is in order. Creating Groovy scripts for event triggers is a way to extend Sales Cloud by telling the system to do something extra when object-related system events occur. That “something extra” can take a variety of forms: validating a field, populating a custom field, calling out to an external web service, creating new instances of objects (standard or custom), or reacting to a given value in an object field and doing some extra processing. Different sequences of triggers fire when objects are created, modified, or deleted. Some triggers fire and are shared across multiple UI actions; others are single-purpose.

There are up to fifteen different object trigger events exposed in App Composer. Not all of these fifteen trigger events are exposed across all objects, however.  For example, the “Before Commit to the Database” trigger is not exposed for objects belonging to the Sales application container. To access and extend trigger events, navigate to the object of interest in the left navigation frame after selecting the correct application container, and then expand the object accordion, which will expose the “Server Scripts” link.

triggers1

Clicking the Server Scripts link populates the App Composer content window with the Server Scripts navigator for the selected object, one component of which is the tab for Triggers. (There are additional tabs in the content window for Validation Rules and Object Functions.) Selecting the Trigger tab exposes areas for Object Triggers and Field Triggers. New, edit, and delete actions for triggers are available through icons are through the Action drop-down menus for Object Triggers and Field Triggers.

triggers2

The majority of triggers are related to database events: before/after insert, before/after update, before/after delete, before/after commit, before/after rollback, and after changes are posted to the database. There are several remaining triggers related to ADF page life cycle events: after object create, before modify, before invalidate, and before remove.

For investigative purposes, it can be instructive to create Groovy scripts for these trigger events in order to reveal firing order and other trigger behaviors; in fact, that was the strategy used here to clarify trigger behavior across different types of objects. Thus, a typical trigger script that does nothing other than log the trigger event might consist of the following two lines:

println 'Entering AfterCreateTrigger ' + adf.util.AddMilliTimestamp()
println 'Exiting AfterCreateTrigger ' + adf.util.AddMilliTimestamp()

(NOTE: AddMilliTimestamp is a global function that displays time formatted with an added milliseconds component.)

After Groovy trigger scripts are in place for the object trigger events in focus, it then becomes possible to test multiple actions (e.g. object creates, updates, deletes) across different objects in the Sales Cloud user interface. This results in debug-style statements getting written to server logs, which can then be examined in the App Composer Run Time Messages applet to discover end-to-end trigger behavior for various UI actions. The logs for commonly-performed UI actions on Sales Cloud objects follow below. (Run Time Messages content was exported to spreadsheet format to allow selecting the subset of messages applicable to each UI action).

Object create followed by Save and Close:

triggers3

Object modify followed by Save and Close:

triggers4

Object (non-TCA) delete followed by User Verification of the Delete Action:

triggers5

Normal Delete Trigger Behavior

From the above listings, delete triggers work pretty much as expected, at least for non-TCA objects. BeforeRemove, BeforeInvalidate, and BeforeModify events occur before the database-related events – Before/After Delete and AfterCommit – fire. Given this sequence of events, if the goal of the Sales Cloud extension is to log details of the delete event, then it probably makes the most sense to target the unique trigger that fires as soon as the object deletion becomes a known sure thing but before the transaction commit in order to get the log record created in the same database transaction. In this case, therefore, focusing on the AfterDelete event should be optimal; it only fires for the delete action and it occurs, by definition, after the delete event occurs in the database.

There is a behavior unique to the delete action and its chain of triggers, however, that makes implementation of this straightforward approach a bit more complicated.  After the BeforeModify trigger event, which occurs fairly early in the event chain, getting a handle to the to-be-deleted record becomes impossible.  If a need exists, therefore, to read any of the record’s attribute values, it has to be done during or before the BeforeModify event.  After that event the system treats the record as deleted so effectively it is no longer available.

Because the audit log use case requires reading the value of an object id and then writing that to the audit trail record, hooking into the BeforeModify event is required.  But because the BeforeModify trigger is not unique to the delete UI action, the script would somehow have to include a check to make sure that the trigger is firing as part of the delete chain of events and not for a normal update action.  There does not seem to be a way to perform this check using any native field values, so one option might be to push field values onto the session stack in the BeforeModify trigger, and then pull them off the session stack in the AfterDelete trigger.

Script for the BeforeModify trigger event:

println 'Entering BeforeModify ' + adf.util.AddMilliTimestamp()
adf.userSession.userData.put('ObjectId', SalesAccountPartyId)
adf.userSession.userData.put('ObjectDetail', 'Name: ' + Name + ', Org: ' + OrganizationName)
println 'Exiting BeforeModify ' + adf.util.AddMilliTimestamp()

Script for the AfterDelete trigger event:

println 'Entering AfterDelete ' + adf.util.AddMilliTimestamp()
println 'Creating Delete Audit Log Record'
def objType = 'Sales Lead'
def objId = adf.userSession.userData.ObjectId
def objDtl = adf.userSession.userData.ObjectDetail
def logResult = adf.util.CreateDeleteAuditLog(objType, objId, objDtl) ? 
  'Delete Log Record created OK' : 'Delete Log Record create failure'
println logResult
println 'Exiting AfterDelete ' + adf.util.AddMilliTimestamp()

TCA Objects and the Delete Trigger

Implementing a similar trigger for a TCA object (e.g. Account) leads to a far different outcome. The failure to log the Account UI delete event becomes clear when the debug Run Time Messages associated with the event are examined.

The delete action on the Account object results in the following series of triggers getting fired:

triggers6

The above listing of fired triggers is representative of what takes place when a TCA object record is “deleted” in the UI.  By design, soft deletes occur instead of physical deletes, so the sequence of trigger events looks more like the objects are being modified than deleted. And actually, record updates are indeed what occur.  For TCA objects, instead of being physically deleted they are marked as inactive by changing the value of the PartyStatus (or similar) field to ‘I’. This value tells the system to treat records as if they no longer exist in the database.

Therefore, hooking into delete trigger events for TCA objects will never have the desired effect.  What can be done about the audit log use case?  Now that this behavior is known for TCA objects, and knowing that the value of the PartyStatus (or equivalent) field can be used to check for the delete UI action, all of the audit log logic can be contained in the BeforeModify trigger event.  There is no need to push and pull values off of the session.  Hooking into the BeforeModify trigger event remains viable for TCA objects even though the chain of triggers is far different.

Here is a sample script, which shares some pieces of the delete script for Sales Lead above, for the Account (TCA) object:

println 'Entering BeforeModifyTrigger ' + adf.util.AddMilliTimestamp()
// check for delete activity
if (isAttributeChanged('PartyStatus') && PartyStatus == 'I') {
  println 'Creating Delete Audit Log Record'
  def objType = 'Account'
  def objId = PartyId
  def objDtl = OrganizationName
  def logResult = adf.util.CreateDeleteAuditLog(objType, objId, objDtl) ? 
    'Delete Log Record created OK' : 'Delete Log Record create failure'
  println logResult
} else {
  println 'Record Change other than a delete attempt'
}
println 'Exiting BeforeModifyTrigger ' + adf.util.AddMilliTimestamp()

Short-Term Workarounds and Long-Term Prognosis

Multiple customer service requests related to the delete trigger anomaly for TCA objects have been filed across various versions of Sales Cloud, and this activity has resulted in at least one KnowledgeBase article (Unable To Restrict Deletion Of Contacts Via Groovy (Doc ID 2044073.1)) getting published about the behavior. A workaround entirely consistent with what was presented here for TCA objects is discussed in the article. For the longer term, enhancement request #21557055 has been filed and approved for a future release of Sales Cloud.

Customizing Fusion Application Expenses with Descriptive Flex Fields


Introduction

This article describes how to customize the Oracle Fusion Applications (FA) Expenses application with Descriptive Flexfields (DFF). We will look at a use case that needs to capture more detail about an expense item and show how to implement it. This use case is meant to showcase a capability of the application that can be further developed for specific needs.

Use Case

Let’s take a use case where a company has decided that employees are required to get pre-approval before claiming internet expenses and enter an approval code each time an internet expense is claimed.

The purpose of this use case is to look at how to capture more detail about an expense item when it is claimed. This can be achieved by using a context-sensitive Descriptive Flexfield (DFF) segment that must be entered when a certain type of expense is selected and claimed. This additional data is stored in the system along with the expense report for future reporting and analysis.

Scope

In this blog we capture details of the solution and setup. Please consider this a reference that gives you a sufficient idea of the setup, not the sole guide for your complete setup. Functional setup of the FA Finance module is a prerequisite and, as a result, is not discussed in this blog. Because the setup can change from environment to environment depending on your functional setup, so can the steps documented here. If you have previously set up DFFs and contexts, please refer to the product documents (links are given in the reference section below) and adjust as necessary. Prior experience with the Oracle Fusion Finance and Expenses modules is desirable to follow the content discussed in this article.

Implementation of Use Case

As this use case is about capturing the approval code when the user is entering the internet expense report, we will first look at how to enable a user to enter the approval code upon selection of ‘Internet’ for expense type. We expect the approval code has been obtained outside of the expense system and prior to creating the expense report.

Let’s first discuss flexfields in FA briefly and then look at the steps to implement them. Generally speaking, flexfields offer a way to add custom fields that are specific to a customer need and a way to extend the Fusion application for customer-specific use cases. There are three types of flexfields available in FA: Key, Descriptive, and Extensible flexfields. Key flexfields are configurable multipart intelligent keys that can represent part numbers and account numbers. Descriptive and Extensible flexfields allow capturing additional information declaratively without custom development. Please refer to the Oracle documentation for more details on each of the flexfields at the link here.

We will be using Descriptive Flexfields (DFF) for this use case, as they provide the simple building blocks sufficient for it. Generally, flexfields are considered part of the base Fusion Apps product, so the benefit of using the ‘sandbox’ feature is limited for the use case discussed here. However, a ‘sandbox’ is recommended for heavier customizations where page component properties are used.

Descriptive Flex Field Setup

Log in to Fusion Applications as a user with implementation privileges, such as Application Implementation Consultant, and go to Setup and Maintenance as indicated by the green arrow in the picture below.

ExpB1Nav

In the ‘Setup and Maintenance’ screen shown below, under ‘Search Tasks’, search for ‘expense’ to list the expense-related tasks. We will be using two of the tasks listed in the search result – ‘Manage Expenses System Options’ and ‘Manage Descriptive FlexFields for Expense Reports’. First, let’s do a quick check of whether Descriptive Flexfields are enabled in the system by selecting the ‘Manage Expenses System Options’ task.

ExpB1SM2

This will launch a window as shown below, and you will need to make sure the highlighted ‘Enable Descriptive Flexfields’ option is set to ‘Yes’.

ExpB1SO3

Once the field is confirmed to have been set correctly, ‘Save and Close’ the window to move to the next step.

Back in the list of ‘expense’ tasks, select the ‘Manage Descriptive FlexFields for Expense Reports’ task. In the ‘Manage Descriptive FlexFields for Expense Reports’ screen, select ‘Expenses’ in the Module pull-down and search as shown below. This returns two lines; edit the ‘Expense’ line as highlighted below.

ExpB1SM4

Once you select the pencil for edit, the following screen shows up where you can manage contexts and various other expense related fields.

ExpB1SM5

As shown in the ‘Edit Descriptive Flexfield: Expenses’ screen, select the ‘Manage Context’ button highlighted at the top and create a context ‘Internet’ for this use case, along with the context-sensitive segment to hold the value of the approval code.

ExpB1SM6

The below screen shows the ‘Context Sensitive Segments’ and the options entered to ‘Create Segment’.

ExpB1SM7

Once done entering values, ‘Save and Close’ at ‘Create Segment’ and then ‘Save and Close’ at ‘Create Context’ screen.

‘Save and Close’ the ‘Edit Descriptive Flexfield: Expense’ window and select ‘Deploy Flexfield’. The progress is as shown below:

ExpB1SM8

This completes the setup of DFF and the next step is to create an expense report and test the DFF field.

Now, we log in to FA as a user with the necessary privileges to launch ‘Expenses’ and select the ‘Expenses’ application icon under ‘My Information’. Then, the user can see the options ‘Create Report’ and ‘Create Expense Item’ to fill in the details of the expense as shown below:

ExpB1SM9

Here we see that selecting ‘Internet’ under ‘Expense Type’ makes the ‘ApprovalCodeForInternet’ field show up as a mandatory field. The values entered here are stored in the expense database in column ‘ATTRIBUTE_CHAR1’, as we set up in the segment.

With what we have done so far, we have used the DFF to capture a specific expense attribute in the expense tables that can be used for runtime validations and future audits and analysis. This approval code value can be accessed in BPM Worklist as ‘ExpenseItem.attributeChar1’ and also be accessed in reports.
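
For reporting directly against the database – for example, to list all expense items that carry an internet approval code – a query along the following lines could be used. This is a minimal sketch only: EXM_EXPENSES and its columns are assumed table and column names here and should be verified against the tables reference for your Fusion Applications release before use.

--Hypothetical reporting query; verify the table and column names for your release
SELECT expense_id,
       attribute_char1 AS internet_approval_code
FROM   exm_expenses
WHERE  attribute_char1 IS NOT NULL;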

Summary

In this article, we have looked at how to customize Fusion Apps Expenses using a DFF to capture additional data for an expense item depending on the expense type/context selected by the user when making a claim.

References

Web links and Oracle Documents:

  • Information Center: Oracle Internet Expenses (Doc ID 1381228.2)
  • Troubleshooting Assistant: Fusion Expenses (EXM) (Doc ID 1574035.2) is a nice menu driven tool.
  • Oracle Cloud Expenses Co-existence and Integration Options (Doc ID 2046956.1)
  • How to Assign Fusion Expenses to A User (Doc ID 1571884.1)

Integration with Fusion Application Expenses over Web Services


Introduction

This blog is about how to integrate Fusion Application (FA) Expenses with external applications over web services. We will provide an outline of the steps for integration along with sample payloads that can be adapted for specific needs.

Use Case

In this use case, a customer has an application such as Customer Relationship Management (CRM) or a custom application that limits an expense amount according to local company policy defined in that application. Let’s use an example in which the policy evaluation has determined that the entertainment expense limit is $40 for a particular event, and this information needs to be posted to an expense report along with the event details. So, after incurring the expense, the user can go to the FA Expenses module and pick up the report, which was created for this event with the set limits and is waiting for the user in ‘saved’ mode.

This integration of posting the information from the customer application to FA Expenses can be achieved by invoking the web services offered by the FA Expenses module. Let’s first look at the components of an expense report and then look at how to integrate with it.

Scope

Please consider this article a reference guide that gives you an idea of the integration steps, not the sole guide for your integration work. Functional setup of the FA Finance module is a prerequisite that is not covered in this blog. Because the setup can change from environment to environment depending on your functional setup, so can the steps documented here. Prior experience consuming SOAP/WSDL web services and working with the Oracle Fusion Finance and FA Expenses modules is desirable to follow the content discussed in this article.

Components of an FA Expense Report

An expense report is a container for expense items and other data points useful to the approver for applying policies and rules, as well as for auditing and reporting. An expense report can have one or more expense items. The report total is the sum of the expense items claimed, and the ‘purpose’ gives a meaningful message to the approver and others involved in processing the expense report, including the filer. An expense report, as shown in the picture below, is created for the purpose of ‘Taking Govt customer for XYZ event EventCode:ABC123’:

ExpWSBl-1

Per our use case, this event information has originated in CRM and an expense has been pre-approved there with a Per Diem for entertainment according to local laws or policy.

Our use case requires that this information needs to be populated in an expense item as shown in the picture below:

ExpWSBl-2

So, our plan is to create an expense report in the FA expense system saved with such details for the user to go in and review, which will look like the following when the user logs in:

ExpWSBl-3

Now, let’s look at the steps to perform this integration and transfer of information.

Preparing for Integration

To perform this integration over web services, we need to first identify the SOAP WSDL end-points to invoke. This information is listed at the site https://fusionappsoer.oracle.com (accessible with an Oracle Technology Network login). This is your starting point to discover services that are exposed for external customer consumption. The web page allows you to search for the module you are looking for within a product family and the version you are interested in. The following picture shows such a query for the ‘expenses’ related web services. We are looking to use the service ‘Expense Item and Expense Report’ as selected here:

ExpWSBl-4

The ‘Detail’ tab has all the methods offered by this service and towards the end of ‘Detail’ tab content, you’ll see these URLs:

Service Path: https://<fin_server:PortNumber>/finExmSharedCommon/ExpenseService?WSDL

Abstract WSDL URL: rep://R10_FUSIONAPPS_HOSTEDREP/oracle/apps/financials/expenses/shared/common/expenseExternalService/ExpenseService.wsdl

With the help of these, you can map to your own WSDL service URL, for example:

https://fin-external.mycompany.com:12604/finExmSharedCommon/ExpenseService?wsdl

The ‘Documentation’ tab has further details on the methods and service XSDs.

Integration with FA Expense Web Services

Now that we have identified the WSDL end-point to integrate with FA Expenses, let’s look at the invocation of the methods to create an expense report and expense item. We will use the SoapUI tool to demonstrate the client invocation. First, note down the WSDL endpoint of the expense application of your Fusion Application, as discussed above. Let’s take the following URL for our discussion:

https://fin-external.mycompany.com:12604/finExmSharedCommon/ExpenseService?wsdl

Again, this is only a sample URL; the host and port entries need to be updated according to your FA setup. Now, we will use the SoapUI tool to create a SOAP client project, as shown in the picture below, to perform the operations. You may use your tool of choice.

ExpWSBl-5

Now, let’s look at creating an expense report over a web-service call. In the SoapUI project that we just created, locate ‘createExpenseReport’ under ‘ExpenseServiceSoapHttp’. Double-click ‘Request 1’ and a window opens for running the operation, as shown below:

ExpWSBl-6.1

Please set the necessary credentials, then edit the request body with the desired input values. A sample payload is given below.

Payload Sample to Create Expense Report

<soapenv:Envelope xmlns:com="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/companykff/" xmlns:cos="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/" xmlns:dff="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/dff/" xmlns:exp="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/" xmlns:pjc="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/pjcdff/" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/types/">
   <soapenv:Header><wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"><wsu:Timestamp wsu:Id="TS-16"><wsu:Created>2015-11-11T23:30:25.998Z</wsu:Created><wsu:Expires>2015-11-11T23:40:25.998Z</wsu:Expires></wsu:Timestamp><wsse:UsernameToken wsu:Id="UsernameToken-15"><wsse:Username>finuser1</wsse:Username><wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">Welcome1</wsse:Password><wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">zmye0rbowzoQFQGtL1H/jg==</wsse:Nonce><wsu:Created>2015-11-11T23:30:25.998Z</wsu:Created></wsse:UsernameToken></wsse:Security></soapenv:Header>
   <soapenv:Body>
      <typ:createExpenseReport>
         <typ:expenseReport>
             
          <!--  <exp:PersonId>?</exp:PersonId>
           -->
		<exp:PersonId>300100051411339</exp:PersonId>
		<exp:AssignmentId>300100051411351</exp:AssignmentId>
		<exp:ExpenseReportNumber>0099140154</exp:ExpenseReportNumber>
           <exp:Purpose>Taking Govt customer for XYZ event EventCode:ABC123 
</exp:Purpose>
             
            <exp:ReimbursementCurrencyCode>USD</exp:ReimbursementCurrencyCode>

            <exp:ExpenseStatusCode>SAVED</exp:ExpenseStatusCode>
             
            <exp:OrgId>204</exp:OrgId>
   
         </typ:expenseReport> 
          
      </typ:createExpenseReport>
   </soapenv:Body>
</soapenv:Envelope>

This will create an expense report in ‘Saved’ mode, as set by the property ‘ExpenseStatusCode’ in the payload above. Then, when the user logs in to the expense system, he/she will see that there is a report waiting in ‘Saved’ mode.
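If you prefer to script this call rather than use SoapUI, the same envelope can be posted with any HTTP client. Below is a minimal sketch in Python using the requests library; the endpoint, the payload file name and the SOAPAction value are assumptions to adapt to your environment, and the WS-Security username token travels inside the envelope shown above.

# Minimal sketch: post the createExpenseReport envelope shown above.
# Endpoint, file name and SOAPAction value are assumptions to adapt to your setup.
import requests

endpoint = "https://fin-external.mycompany.com:12604/finExmSharedCommon/ExpenseService"

with open("create_expense_report.xml", "r", encoding="utf-8") as f:
    envelope = f.read()  # the WS-Security header is already part of the envelope

headers = {
    "Content-Type": "text/xml; charset=UTF-8",
    "SOAPAction": "createExpenseReport",  # check the WSDL for the exact action value
}

response = requests.post(endpoint, data=envelope.encode("utf-8"), headers=headers, timeout=60)
print(response.status_code)  # 200 on success
print(response.text)         # the createExpenseReportResponse shown in the next section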

A successful createExpenseReport call returns the following response from the server.

Response from Server to Create Expense Report Call

<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsa="http://www.w3.org/2005/08/addressing" xmlns:typ="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/types/">
   <env:Header>
      <wsa:Action>http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService//ExpenseService/createExpenseReportResponse</wsa:Action>
      <wsa:MessageID>urn:uuid:fa9adc34-90e6-4fcf-b358-bce24fa3f7d8</wsa:MessageID>
      <wsse:Security env:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
         <wsu:Timestamp wsu:Id="Timestamp-Lh548KZH8MY1FHz1M1OQMQ22" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
            <wsu:Created>2015-11-11T23:30:55.998Z</wsu:Created>
            <wsu:Expires>2015-11-11T23:35:55.998Z</wsu:Expires>
         </wsu:Timestamp>
      </wsse:Security>
   </env:Header>
   <env:Body>
      <ns0:createExpenseReportResponse xmlns:ns0="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/types/">
         <ns1:result xsi:type="ns4:ExpenseReport" xmlns:ns1="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/types/" xmlns:ns4="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/" xmlns:ns0="http://xmlns.oracle.com/adf/svc/types/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <ns4:ExpenseReportId>300100051444933</ns4:ExpenseReportId>
            <ns4:ParentExpenseReportId xsi:nil="true"/>
            <ns4:PreparerId xsi:nil="true"/>
            <ns4:PersonId>300100051411339</ns4:PersonId>
            <ns4:AssignmentId>300100051411351</ns4:AssignmentId>
            <ns4:ExpenseReportDate xsi:nil="true"/>
            <ns4:ExpenseReportTotal xsi:nil="true"/>
            <ns4:ExpenseReportNumber>0099140154</ns4:ExpenseReportNumber>
            <ns4:Purpose>Taking Govt customer for XYZ event EventCode:ABC123
</ns4:Purpose>
            <ns4:ReimbursementCurrencyCode>USD</ns4:ReimbursementCurrencyCode>
            <ns4:ExchangeRateType xsi:nil="true"/>
            <ns4:OverrideApproverId xsi:nil="true"/>
            <ns4:CurrentApproverId xsi:nil="true"/>
            <ns4:ReportSubmitDate xsi:nil="true"/>
            <ns4:ExpenseStatusCode>SAVED</ns4:ExpenseStatusCode>
            <ns4:ExpenseStatusDate xsi:nil="true"/>
            <ns4:ExpReportProcessingId xsi:nil="true"/>
            <ns4:ReceiptsStatusCode>NOT_REQUIRED</ns4:ReceiptsStatusCode>
            <ns4:ReceiptsReceivedDate xsi:nil="true"/>
            <ns4:BothpayFlag xsi:nil="true"/>
            <ns4:ExportRejectCode xsi:nil="true"/>
            <ns4:ExportRequestId xsi:nil="true"/>
            <ns4:OrgId>204</ns4:OrgId>
            <ns4:ObjectVersionNumber>1</ns4:ObjectVersionNumber>
            <ns4:PaymentMethodCode>CHECK</ns4:PaymentMethodCode>
            <ns4:CashExpensePaidDate xsi:nil="true"/>
            <ns4:FinalApprovalDate xsi:nil="true"/>
         </ns1:result>
      </ns0:createExpenseReportResponse>
   </env:Body>
</env:Envelope>

Now we take the newly created report to our next step: creating the expense item that adds the specific Per Diem for the ‘Entertainment’ expense. For this we need to parse the response of ‘createExpenseReport’ and extract a set of values to build the payload for the Create Expense Item call, ‘createExpense’, as shown below. As an example, <ns4:ExpenseReportId>300100051444933</ns4:ExpenseReportId> is parsed from the ‘createExpenseReport’ response above and is used as the value of ‘ExpenseReportId’ in our next payload. A small sketch of this parsing step follows.
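As a rough illustration of that step, the sketch below uses Python’s xml.etree.ElementTree to pull ExpenseReportId out of the response and substitute it into a createExpense payload template. The namespace URI comes from the response above; the file names and the @EXPENSE_REPORT_ID@ placeholder token are hypothetical.

# Rough sketch of chaining the calls: extract ExpenseReportId from the saved
# createExpenseReportResponse and drop it into the createExpense payload template.
# File names and the @EXPENSE_REPORT_ID@ token are hypothetical.
import xml.etree.ElementTree as ET

ns = {"ns4": "http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/"}

with open("create_expense_report_response.xml", "r", encoding="utf-8") as f:
    response_xml = f.read()

root = ET.fromstring(response_xml)
report_id = root.find(".//ns4:ExpenseReportId", ns).text
print(report_id)  # e.g. 300100051444933

with open("create_expense_item_template.xml", "r", encoding="utf-8") as f:
    item_envelope = f.read().replace("@EXPENSE_REPORT_ID@", report_id)
# item_envelope can now be posted to the same endpoint, this time invoking createExpense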

Payload Sample to Create Expense Item

<soapenv:Envelope xmlns:com="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/companykff/" xmlns:cos="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/" xmlns:dff="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/dff/" xmlns:exp="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/" xmlns:pjc="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/pjcdff/" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/financials/expenses/shared/common/expenseExternalService/types/">
  <soapenv:Header><wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"><wsu:Timestamp wsu:Id="TS-43DC3896F929810A1D14429866338154"><wsu:Created>2015-11-11T23:35:55.998Z</wsu:Created><wsu:Expires>2015-11-11T23:45:55.998Z</wsu:Expires></wsu:Timestamp><wsse:UsernameToken wsu:Id="UsernameToken-43DC3896F929810A1D14429866338143"><wsse:Username>finuser1</wsse:Username><wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">Welcome1</wsse:Password><wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">JUlms0V81jjx5yCY04TCcA==</wsse:Nonce><wsu:Created>2015-11-11T23:35:55.998Z</wsu:Created></wsse:UsernameToken></wsse:Security></soapenv:Header>
  <soapenv:Body>
     <typ:createExpense>
        <typ:expense>
           <exp:ExpenseReportId>300100051444933</exp:ExpenseReportId>
           <!--Optional:-->
           <exp:ReimbursableAmount currencyCode="USD">40</exp:ReimbursableAmount>
           <!--Optional:-->
           <exp:Description>Entertainment Per Diem - do not change.</exp:Description>
           <!--Optional:-->
           <exp:StartDate>2015-08-11</exp:StartDate>
           <!--Optional:-->
           <exp:EndDate>2015-08-11</exp:EndDate>
           <!--Optional:-->
           <exp:ReceiptCurrencyCode>USD</exp:ReceiptCurrencyCode>
           <!--Optional:-->
           <exp:ExchangeRate>1</exp:ExchangeRate>
           <!--Optional:-->
           <exp:ReceiptAmount currencyCode="USD">0</exp:ReceiptAmount>
           <!--Optional:-->
           <exp:ExpenseSource>CASH</exp:ExpenseSource>
           <!--Optional:-->
           <exp:ExpenseCategoryCode>BUSINESS</exp:ExpenseCategoryCode>
           <!--Optional:-->
           <exp:ExpenseTypeCategoryCode>Entertainment</exp:ExpenseTypeCategoryCode>
           <!--Optional:-->
           <exp:FuncCurrencyAmount currencyCode="USD">0</exp:FuncCurrencyAmount>
           <exp:LocationId>300100001957889</exp:LocationId>
           <!--Optional:-->
           <exp:ReceiptRequiredFlag>false</exp:ReceiptRequiredFlag>
           <exp:OrgId>204</exp:OrgId>
           <exp:Location>Abbeville, Dodge, Georgia, United States</exp:Location>
           <!--Optional:-->
           <exp:ReimbursementCurrencyCode>USD</exp:ReimbursementCurrencyCode>
           <exp:ExpenseTypeId>100000010094106</exp:ExpenseTypeId>
           <exp:ExpenseTemplateId>10024</exp:ExpenseTemplateId>
               <exp:ExpenseDistribution xmlns:com="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/" xmlns:cos="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/companykff/">
                 <exp:CodeCombinationId>13799</exp:CodeCombinationId>
                 <exp:ReimbursableAmount xmlns:tns="http://xmlns.oracle.com/adf/svc/errors/">40</exp:ReimbursableAmount>
                 <exp:CostCenter>740</exp:CostCenter>
                 <exp:Segment1>01</exp:Segment1>
                 <exp:Segment2>740</exp:Segment2>
<exp:CompanyKFF xmlns:type="com:CompanyKffOPERATIONS_5FACCOUNTING_5FFLEX">
                 <com:ExpenseDistId xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/">300100051444944</com:ExpenseDistId>
                 <com:_FLEX_StructureInstanceCode xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/">OPERATIONS_ACCOUNTING_FLEX</com:_FLEX_StructureInstanceCode>
                 <com:_FLEX_StructureInstanceId xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/">246</com:_FLEX_StructureInstanceId>
                 <com:_GL_5FGL_23_StructureInstanceNumber xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/">101</com:_GL_5FGL_23_StructureInstanceNumber>
                 <com:FND_ACFF_ConcatenatedStorage xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/" xmlns:nil="true"/>
                 <com:FND_ACFF_Delimiter xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/" xmlns:nil="true"/>
                 <com:_Company xmlns:cos1="http://xmlns.oracle.com/apps/financials/expenses/entry/shared/flex/costcenterkff/">01</com:_Company>
              </exp:CompanyKFF>
              <exp:CostCenterKFF xmlns:type="cos:CostCenterKffOPERATIONS_5FACCOUNTING_5FFLEX">
                 <cos:ExpenseDistId>300100051444944</cos:ExpenseDistId>
                 <cos:_FLEX_StructureInstanceCode>OPERATIONS_ACCOUNTING_FLEX</cos:_FLEX_StructureInstanceCode>
                 <cos:_FLEX_StructureInstanceId>246</cos:_FLEX_StructureInstanceId>
                 <cos:_GL_5FGL_23_StructureInstanceNumber>101</cos:_GL_5FGL_23_StructureInstanceNumber>
                 <cos:FND_ACFF_ConcatenatedStorage xmlns:nil="true"/>
                 <cos:FND_ACFF_Delimiter xmlns:nil="true"/>
                 <cos:_Department>520</cos:_Department>
              </exp:CostCenterKFF>
              <exp:ProjectDFF xmlns:type="pjc:PJCDFFEXM_5FExpense_5FReport_5FLine">
                 <pjc:ExpenseDistId>300100051444944</pjc:ExpenseDistId>
                 <pjc:__FLEX_Context>EXM_Expense_Report_Line</pjc:__FLEX_Context>
                 <pjc:__FLEX_Context_DisplayValue>EXM: Expense Report Line</pjc:__FLEX_Context_DisplayValue>
                 <pjc:_FLEX_NumOfSegments>10</pjc:_FLEX_NumOfSegments>
                 <pjc:FLEX_PARAM_BusinessUnit>204</pjc:FLEX_PARAM_BusinessUnit>
              </exp:ProjectDFF>
           </exp:ExpenseDistribution>
           <!--Optional:-->
           <exp:ExpenseDFF xmlns:type="dff:Internet">
           </exp:ExpenseDFF>
        </typ:expense>
     </typ:createExpense>
  </soapenv:Body>
</soapenv:Envelope>

As a result of the steps we carried out so far, the user should be able to see the below report in his/her expense application:

ExpWSBl-7

Now the user can add any other expenses incurred, review the details and submit for further processing.

Summary

We looked at the steps to integrate with the Fusion Applications Expenses module over web services and how to locate the WSDL URL, along with sample payloads that can be adapted for specific needs.

References

Web links and Oracle Documents:

• http://www.oracle.com/us/products/applications/fusion/financial-management/financials/expenses/resources/index.html

• Information Center: Oracle Internet Expenses (Doc ID 1381228.2)

• Troubleshooting Assistant: Fusion Expenses (EXM) (Doc ID 1574035.2) is a nice menu driven tool.

• Oracle Cloud Expenses Co-existence and Integration Options (Doc ID 2046956.1)

• How to Assign Fusion Expenses to A User (Doc ID 1571884.1)

Pipelined Table Functions in Oracle Business Intelligence Cloud Service (BICS)


Introduction

 

This article outlines how to use a pipelined table function in Oracle Business Intelligence Cloud Service (BICS).


Using a pipelined table function makes it possible to display data in BICS without having to load the data to a physical table.


Two possible use cases for pipelined table functions in BICS are:

1)    Drill-to-deal scenarios where data is only required momentarily and temporarily.

2)    Situations where corporate security may restrict sensitive data being physically saved to the cloud.


Pipelined table functions are best suited to small data-sets. Latency issues may occur on large data volumes.


The code snippets provided use Oracle Social Data from Oracle Social Data and Insight Cloud Service as the data source. Data is retrieved using the Social Data and Insight REST APIs. That said, any data source accessible by either SOAP or REST web services may be referenced in the pipelined table function.


Since the current version of BICS does not support opaque views, it is not possible to reference variables in the model view. This means any parameters that need to be passed to the pipelined table function must be table driven. Therefore, in order to pass the selected value of a prompt to the pipelined table function, it must first be saved to a physical table. Unfortunately, this adds extra complexity and overhead to the solution when using prompts.


The article covers the four steps required to create, configure, and execute pipelined table functions from BICS:


1)    Create Pipelined Table Function

2)    Display Pipelined Data in BICS

3)    Pass Dashboard Prompt

4)    Consume Dashboard


Due to the sensitive nature of the Social Data and Insight Cloud Service data, limited screenshots of output are available.

In the code snippet examples, the DUNS number is used in the Dashboard Prompt to retrieve the Social Data. The Data Universal Numbering System (DUNS) number is assigned and maintained by Dun & Bradstreet (D&B). A DUNS number is a unique nine-character number used to identify each physical location of a business. Additionally, the US federal government uses this number to track how federal money is allocated.


Main Article

 

Step 1 – Create Pipelined Table Function

 

Step 1 outlines how to create the SQL artifacts needed for the pipelined table function to run in Oracle Application Express SQL Workshop.


Five artifacts are created:


Object Type: TYPE DUNS_OT

Table Type: DUNS_TYPE

Table: SELECTED_DUNS

Function: FUNC_DUNS_PIPELINE

View: DUNS_VIEW


Steps:


From Oracle Application Express -> SQL Workshop -> SQL Commands


For a text file containing ALL the code snippets click here.

 

a)    Create an Object Type – specifying all columns to return with relevant data types.

CREATE TYPE DUNS_OT AS OBJECT
(
FIRST_NAME VARCHAR(500),
LAST_NAME VARCHAR(500),
TITLE VARCHAR(500)
);

b)    Create Table Type DUNS_TYPE based on DUNS_OT.

CREATE TYPE DUNS_TYPE AS TABLE OF DUNS_OT;

c)    Create table to store values selected from Dashboard Prompts. In this case the Prompt is a DUNS Number.

CREATE TABLE SELECTED_DUNS(
COMPANY_DUNS_NUMBER VARCHAR2(9)
);

d)    Insert a sample DUNS Number into the table (to test with).

INSERT INTO SELECTED_DUNS(COMPANY_DUNS_NUMBER)
VALUES('123456789');

e)    Create Pipelined Table Function.

Code Snippet Breakdown:

Gold: Retrieves Social Data via the REST API. For more info see: Integrating Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS).

Green: Pipelines the results from the REST API and parses them into DUNS_OT [created in step a]. It may help to compare the syntax used in this snippet to that used in Integrating Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS), which inserts into a physical table vs. the virtual select shown here.

Aqua: Defines RETURN type DUNS_TYPE [created in step b].

Purple: Selects DUNS Number from table driven prompt [created in step c and populated in step d].

create or replace FUNCTION FUNC_DUNS_PIPELINE
RETURN DUNS_TYPE PIPELINED AS
l_ws_response_clob CLOB;
l_num_contacts NUMBER;
l_selected_duns VARCHAR2(9);
l_pad_duns VARCHAR2(9);
l_body CLOB;
BEGIN
SELECT MAX(COMPANY_DUNS_NUMBER) INTO l_selected_duns FROM SELECTED_DUNS;
l_pad_duns := LPAD(l_selected_duns,9,'0');
l_body := '{"objectType":"People","limit":"10","filterFields":[{"name":"company.gl_ult_dun","value":"' || l_pad_duns || '"},{"name":"person.management_level","value":"0"},{"name":"person.department","value":"3"}],"returnFields":["person.first_name","person.last_name","person.title"]}';
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
apex_web_service.g_request_headers(2).name := 'X-ID-TENANT-NAME';
apex_web_service.g_request_headers(2).value := 'TenantName';
l_ws_response_clob := apex_web_service.make_rest_request
(
p_url => 'https://SocialDataURL/data/api/v2/search',
p_username => 'User',
p_password => 'Pwd',
p_body => l_body,
p_http_method => 'POST'
);
--parse the CLOB response as JSON
apex_json.parse(l_ws_response_clob);
--get total hits
l_num_contacts := CAST(apex_json.get_varchar2(p_path => 'totalHits') AS NUMBER);
--loop through total hits and pipe each contact row back to the caller
IF l_num_contacts > 0 THEN
for i in 1..l_num_contacts LOOP
PIPE ROW ( DUNS_OT (apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[1].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[2].value'),
apex_json.get_varchar2(p_path => 'parties['|| i || '].attributes[3].value')
));
end loop;
END IF;
RETURN;
END;

f)    Create the view DUNS_VIEW based on the function and test it from SQL Workshop.

CREATE VIEW DUNS_VIEW AS
SELECT * FROM TABLE(FUNC_DUNS_PIPELINE);

SELECT * FROM DUNS_VIEW;

 

Step 2 – Display Pipelined Data in BICS


Step 2 outlines how to display the data retrieved from the pipelined table function in BICS.


This is a two part process:


1)   Reference the pipelined table function in a data modeler view.

2)   Build an Analysis based on the view.


Steps:


a)    Lock to Edit the Model.

b)    Click on the cog / wheel next to “Database” then “Create View” to create a new View.

Untitled

c)    Click on SQL Query

Use the example below as a starting point for the syntax needed to call the pipelined table function from the view.

Confirm that your data is returned in the data tab.

SELECT
FIRST_NAME,
LAST_NAME,
TITLE
FROM
(
SELECT
*
FROM TABLE(FUNC_DUNS_PIPELINE)
)

Snap4

d)    Create an Analysis based on the custom view. Confirm that the Analysis returns results.

Snap5

e)    Place the Analysis on a Dashboard. Confirm the results are visible.


Step 3 – Pass Dashboard Prompt


Step 3 outlines how to use a dashboard prompt on a dashboard to pass a user selected value from BICS to the pipelined table function in the database.

Currently BICS does not support opaque views; therefore, it is not possible to pass variables to the pipelined table function. At this stage the best approach is to use a physical table to store the prompt selection, and drive the pipelined table function from that table.

Remember the pipelined table function has no parameters. It is driven from the values contained in the SELECTED_DUNS table.

i.e. SELECT MAX(COMPANY_DUNS_NUMBER) INTO l_selected_duns FROM SELECTED_DUNS

The diagram below shows the steps required to save the selected dashboard prompt to the SELECTED_DUNS table. (Detailed steps follow.)

BICS_Prompts

Steps:


a)    Create procedure SP_LOAD_SELECTED_DUNS

create or replace PROCEDURE SP_LOAD_SELECTED_DUNS(p_selected_duns VARCHAR2)
IS
BEGIN
DELETE FROM SELECTED_DUNS;
INSERT INTO SELECTED_DUNS(COMPANY_DUNS_NUMBER)
VALUES(p_selected_duns);
END;

b)    Create function FUNC_EXEC_SELECTED_DUNS

create or replace FUNCTION FUNC_EXEC_SELECTED_DUNS
(p_selected_duns VARCHAR2) RETURN INTEGER
IS PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
SP_LOAD_SELECTED_DUNS(p_selected_duns);
COMMIT;
RETURN 1;
END;

c)    Test Function

SELECT FUNC_EXEC_SELECTED_DUNS('123456789') FROM DUAL;

SELECT * FROM SELECTED_DUNS;

d)    Create a dummy table that will be used in the trigger analysis. Insert any descriptive text into the table.

This should be a one-row-only table, as the trigger will run for every record in the table.

CREATE TABLE DUMMY_REFRESH(REFRESH_TEXT VARCHAR2(100));

INSERT INTO DUMMY_REFRESH(REFRESH_TEXT)
VALUES ('Hit Return once complete');

e)    In the Modeler create session variable r_selected_duns.

Snap7

f)     Add the DUMMY_REFRESH table in the Modeler as a Dimension table and join it to a fact table.

Add a Column Expression called UPDATE_DUNS.

This Expression calls the FUNC_EXEC_SELECTED_DUNS and passes the r_selected_duns request variable to it.

Snap8

EVALUATE('FUNC_EXEC_SELECTED_DUNS(%1)',VALUEOF(NQ_SESSION."r_selected_duns"))

Snap9

g)   Create an Analysis based on the DUMMY_REFRESH table, containing the REFRESH_TEXT field and the UPDATE_DUNS expression.

Snap10

h)    Add an Action Link to trigger the Analysis from the Dashboard (through Navigate to BI Content).

Set Action Options. Run Confirmation may be useful.

Snap11

Snap12 Snap13

i)     Add a Prompt to the Dashboard for DUNS Number.

Either manually add DUNS Numbers to the choice list or drive your Prompt from a table.

Set a Request Variable on the Prompt. This will be used to pass the selected value to the pipelined table function.

Because BICS does not support VALUELISTOF, "Enable user to select multiple values" has been unchecked.

A workaround for passing multiple values to session variables has been previously discussed in Integrating Oracle Social Data and Insight Cloud Service with Oracle Business Intelligence Cloud Service (BICS).

Snap6

Step 4 – Consume Dashboard


Step 4 outlines how to use an action link to trigger a data refresh that reflects the selection made on the dashboard prompt.


Steps: 


The final dashboard will contain three components: a Dashboard Prompt, the Action Link, and an Analysis to display the results.

Snap14
Snap15

In order to prevent accidental data refreshes, the Dashboard Consumer must confirm the action.

To refresh results, it may be necessary to clear the BI Server Cache (through the modeler) or hit Refresh on the Dashboard (to clear the Presentation cache).

Snap16

 

 

Further Reading


Click here for the Application Express API Reference Guide – MAKE_REST_REQUEST Function.

Click here for the SRM Developer Platform API Guide.

Click here for more A-Team BICS Blogs.

Summary


This article described how to create a pipelined table function in BICS. Additionally, it described how to display the results and pass a dashboard prompt to the pipelined function.

Pipelined table functions may be useful in situations where it is not feasible or desirable to physically save data to the BICS database.

The data-source example provided was for Social Data and Insight Cloud Service data. However, this article can easily be modified to apply to any other data source accessible via REST or SOAP web service APIs.

Tuning Asynchronous Web Services in Fusion Applications Part II


Introduction

In a series of earlier blogs we covered in detail several aspects of asynchronous web services in Fusion Applications, including the general implementation and how to monitor them, and we also discussed tuning considerations and options. This article discusses an additional tuning option for how Fusion Applications consumes waiting messages in the underlying queues.

Main Article

In a recent engagement tuning a high-volume environment, we discovered a situation where a large Fusion Applications cluster can lead to contention on the database when a high number of application threads concurrently attempt to dequeue messages from the Advanced Queues (AQ) used for the asynchronous web service implementation. We were lucky to have the database experts from the Real World Performance group with us supporting the database tuning.

A detailed analysis revealed that the default MDB implementation (detailed discussion) leverages the AQjmsConsumer.receive operation, which can result in database-side contention if the number of MDB threads, and hence the corresponding database sessions, is increased in order to achieve higher parallelism on the application tier. AQjmsConsumer.receive is implemented in a way that delegates the active polling for new messages from the queue to the database tier, and the database session will repeatedly try to lock a message and deliver this message back to the waiting application tier thread. With a significant number of such database sessions effectively competing for messages, a large portion of these dequeue attempts cannot succeed, as only a single session can succeed in locking a record. This logic can cause considerable overhead on the database tier, observed as a high number of ‘select for update skip locked’ invocations in the usual database performance reports while only a small portion of the invocations actually deliver a record.

In order to address this situation, there is a new patch available for the WebLogic MDB implementation which allows for leveraging the non-blocking operation AQjmsConsumer.receiveNoWait instead of AQjmsConsumer.receive. With that, control is passed back from the database to the application tier immediately if the process cannot get hold of a message (i.e. no looping happens on the database tier), and the application thread is put to sleep for a defined duration before attempting the next dequeue. Overall this results in a significant reduction of database contention and unsuccessful row-locking attempts while not compromising the overall throughput.

In order to activate the non-blocking dequeue logic, the corresponding patch 21561271 delivered through P4FA patch bundles must be available on the system. Secondly, there is a new Java system property to turn on the new logic: weblogic.ejb.container.AQMDBReceiveNoWait=true. By default this switch is set to false for backward compatibility reasons, i.e. it needs to be explicitly turned on to use the non-blocking implementation.
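How the property is supplied depends on how your managed servers are started; one common approach (an assumption about your setup, not a documented FA step) is to pass it as a JVM start argument for the affected managed servers, for example:

-Dweblogic.ejb.container.AQMDBReceiveNoWait=true

The affected servers then need to be restarted for the setting to take effect.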

Finally, in this analysis there was a parallel effort to optimise the database-side implementation as well, resulting in patch 16739157, which is recommended if a high number of row lock attempts is observed on Advanced Queues.

Conclusion

This article presented an additional tuning option for decreasing database side contention for deployments with a larger number of application tier processes dequeuing from the same Advanced Queue.
