Channel: SCN : All Content - Data Services and Data Quality

Data is not picked up correctly even after successful completion of the job


Hi all,

The job completes successfully, but the data is not picked up correctly. When we run the job manually a second time, it pulls the correct data. This is a scheduled job.

Please find attached images from before and after the job runs:

 

[Attachment: before running.jpg]
[Attachment: after running.jpg]

 

Can anyone help identify what the issue is?

 

Thanks in advance.

Regards,
Vijay

 



Splitting a string onto a new line after every 132 characters in Data Services


Hi Guys,

I want to split a string onto a new line after every 132 characters.

 

[Attachment: Line_Splitting1.jpg]

 

Here, the description for the product Pencil is longer than 132 characters, so the string should wrap onto a new line at the last complete word before the 132-character limit.

For a better understanding of the problem, I am sharing the screenshot below:

[Attachment: Line_Splitting2.jpg]
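
What I have in mind is something like the following custom-function sketch in Data Services script (the variable and parameter names are hypothetical, and $P_DELIMITER is assumed to already hold the target's row delimiter):

# Hypothetical custom function body; $P_IN (the input string) and
# $P_DELIMITER (the row delimiter) are assumed input parameters.
$L_REST = $P_IN;
$L_OUT = '';
while (length($L_REST) > 132)
begin
   $L_POS = 132;
   # walk back to the nearest space so words are not cut in half
   while (($L_POS > 1) AND (substr($L_REST, $L_POS, 1) <> ' '))
   begin
      $L_POS = $L_POS - 1;
   end
   $L_OUT = $L_OUT || rtrim_blanks(substr($L_REST, 1, $L_POS)) || $P_DELIMITER;
   $L_REST = substr($L_REST, $L_POS + 1, length($L_REST));
end
$L_OUT = $L_OUT || $L_REST;
Return $L_OUT;

Would a custom function like this, called from a Query transform mapping on the description column, be the right way to do it, or is there a built-in approach?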

 

Kindly suggest.

 

Regards,

DS_Beginner

Data Flow Error - Named pipe error occurred


Hi Friends,

We get the following error intermittently in a few of our data flows: "Named pipe error occurred: <The pipe has been ended>".

We just upgraded from BODS 4.0 to BODS 4.2, and since then this error has started appearing and the data flow fails with it. It happens only with our long-running data flows (2-3 hours); the shorter jobs run fine. The error was rare on 4.0, but on 4.2 it happens much more often. I have already reviewed KBA 1305751, and we are using the correct Oracle client.

 

 

 

Error Snapshot:

(14.2.5) 10-20-15 11:33:14 (W) (000:000) ABT-200201: Job server killed job with pid <11260>, execName <10_20_2015_11_09_20_116__b6cdf656_b004_4ee3_b334_19940cbd88ed> due to job server shutdown.

(14.2) 10-20-15 11:33:14 (E) (8036:6340) FIL-080134: |Dataflow Adhoc_DF_IML_Previous_Day_Shipment|Pipe Listener for Adhoc_DF_IML_Previous_Day_Shipment_1_3

                                                     Named pipe error occurred: <The pipe has been ended.

                                                     >

Monitor Log Snapshot:

@FIL-080134, ERROR, 0, 0.000, 1432.382, 0.000, 0.000, 0:0, 0.000, 0, FIL-080134: |Dataflow Adhoc_DF_IML_Previous_Day_Shipment|Pipe Listener for Adhoc_DF_IML_Previous_Day_Shipment_1_3: Named pipe error occurred: <The pipe has been ended.

>

 

 

Any inputs will be very much appreciated.

 

Thanks,

SA

Problems with special characters when loading data into an Oracle database


Greetings,

I have a problem loading data into Oracle tables from a Sybase data source. The data I want to load is in Spanish; for example, when a varchar field contains the value 'Consultoría', it arrives in the Oracle table with question marks in place of the accented characters.

 

I have my source and target datastores configured as follows.

 

Any suggestions? Thank you for your time.

BODS 4.1 integration/installation on BO 4.0 distributed and clustered CMC


Hello Community,

I'm trying to install BO Data Services 4.1 and integrate it into the CMC (BO 4.0 with a complex architecture, distributed and clustered).

 

Our BO architecture is as follows:

=> Release installed: SAP BO 4.0 SP5 FP5.
=> Platform: Windows 2008 R2 Enterprise 64-bit (all servers) with failover clustering.
=> CMS repository: SQL Server 2008 Enterprise (64-bit), clustered.
=> Server distribution:

- One load balancer (F5 Big-IP pointing to both Tomcat servers).
- Two Tomcat servers (clustered, each pointing to the two CMS servers), each installed on its own Windows 2008 server.
- Two CMS servers (clustered), each installed on its own Windows 2008 server.

 

We have installed BODS as follows:

1. Web tier installation - Tomcat servers

- Successful, installing only the Data Services Management Console feature. This way, the Data Services application is deployed on both Tomcat servers. We think no other feature is necessary on this tier.

2. CMS tier installation - CMS servers

Installed the remaining features (everything except the Data Services Management Console) on both CMS servers. In the CMC, two new servers are created (NODExx.EIM Adaptive Processing Server). We have tried several installation types:

- Installed the BODS software on each CMS server (all features except the Data Services Management Console) with a single DS 4.1 repository. The second installation warned about an existing repository and asked whether to overwrite it. For the CMS integration we used the CMS host name, not the cluster name. We cancelled the installation and uninstalled the software.

- Installed the software on each CMS server (all features except the Data Services Management Console) with two separate DS 4.1 repositories. For the CMS integration we used the CMS host name, not the cluster name. The installation was successful, but the BO servers on one of the CMC nodes are not working properly, and neither are the new NODExx.EIM Adaptive Processing Servers (they are very slow, and when we check the properties of some of them, a warning says the SIA service for the server cannot be found).

 

Has anybody installed BODS 4.1 in this BO scenario? Any ideas?

 

For reference, we want DS 4.1 installed and running on the first CMS node, and installed but stopped on the second CMS node.

 

Regards and thanks in advance,

 

Francisco Ortiz.

Use of the comparison column in the Table Comparison transform


Hi Team,

I have a data flow like:

source table -----> Query transform ------> Table Comparison ----> target table

 

The fields present in the source:

 

cost_centre_id, Outlet name, revenue

===================================

M102345     ,  outlet1, 10000$

M102346     ,  Outlet2, 20000$

 

 

In the Table Comparison transform I have cost_centre_id as the key and left the comparison columns blank.

I loaded the data to the target, and it came through fine:

 

cost_centre_id, Outlet name, revenue

===================================

M102345     ,  outlet1, 10000$

M102346     ,  Outlet2, 20000$

 

For the second run, I changed the second record to Outlet23 instead of Outlet2:

 

cost_centre_id, Outlet name, revenue

===================================

M102345     ,  outlet1, 10000$

M102346     ,  Outlet23, 20000$

 

Since no fields are listed in the comparison columns, which columns will it compare against?

 

Please help...

 

Many thanks,

Praveen.

How to kill a second instance of a Data Services job


Hi All,

Is there a way to automatically kill a second instance of a job that has been kicked off accidentally or by the scheduler?

Our situation: the job needs to be killed if it is kicked off while the first instance is still running. One idea we are considering is a guard script at the start of the job, sketched below.
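
A minimal guard-script sketch, assuming a datastore DS_REPO that points at the local repository. AL_HISTORY is the repository's run-history table, but its layout can differ between versions, so the column names here are an assumption to verify:

# Guard script at the start of the job. Assumes AL_HISTORY has SERVICE and
# END_TIME columns (verify for your version). The current run has its own
# open row (END_TIME is null), hence the > 1 test.
$G_JOB = job_name();
$G_RUNNING = sql('DS_REPO', 'select count(*) from AL_HISTORY where SERVICE = {$G_JOB} and END_TIME is null');
if ($G_RUNNING > 1)
begin
   raise_exception('Another instance of ' || $G_JOB || ' is already running - aborting this run.');
end

This aborts the second instance rather than killing the first; would that cover the requirement, or is there a cleaner way?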

 

 

Thanks 

Scheduled job did not start


Hi,

I scheduled some batch jobs using the Data Services Management Console scheduler, but the jobs did not start at the scheduled date and time.

What could be the reason?

 

Regards,

Sachin


Delete records using a script, variables, and multiple values


Hi,

I have a source table (sourcefile) in one database and a target table (targetfile) in another database.

I want to delete the records in the target table where targetfile.month = sourcefile.month.

How can I achieve this?

 

I tried to use a script with variables, for example:

$variable1 = sql('DB1', 'select month from sourcefile');
sql('DB2', 'delete from targetfile where month = {$variable1}');

 

But since $variable1 can hold only one value, I am not getting the results I want: the first statement returns multiple values, and the number of values is not fixed. How do I store multiple values in a variable or create an array? Please explain the format of any function that can do this. The workaround I am experimenting with is sketched below.
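
A sketch of that workaround, using the same DB1/DB2 datastores and table names as above. It avoids arrays entirely: since sql() returns only the first row of a result set, it walks the distinct months one at a time in ascending order:

# Walks the distinct months one at a time, smallest first. Assumes month is
# a character column, so the { } substitution quotes the value for SQL.
$variable1 = sql('DB1', 'select min(month) from sourcefile');
while ($variable1 is not null)
begin
   sql('DB2', 'delete from targetfile where month = {$variable1}');
   # fetch the next distinct month, if any; null ends the loop
   $variable1 = sql('DB1', 'select min(month) from sourcefile where month > {$variable1}');
end

An alternative without any script would be a data flow that reads the distinct months from the source and feeds them through a Map_Operation transform with Normal mapped to Delete on the target table, which handles any number of months.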

 

Cheers.

 

Using DS Designer 14.2.1.568

SAP BODS - SAP ECC: issues with importing tables


Hello Gurus,

My source system is SAP ECC, and I have successfully established a connection between SAP ECC and SAP BODS. I am able to see all the ECC tables, but when I try to import them into the SAP BODS local repository, there is no error and yet the tables are not imported.

Has anyone come across a similar issue?

 

Regards

Suresh

Slow insertion in Amazon Redshift


I'm testing BO Data Services as an ETL tool for data extraction with Amazon Redshift as the destination. However, insert performance into Redshift is very poor.

On average, Data Services inserts about 5 records per second. (In tests with Talend I reached about 22,000 records per second.)

[Attachment: Tempo DataServices - Redshift.jpg]


We run BODS on Linux x64.

For the connection, we use an ODBC connection created and configured per Amazon's recommendations:

http://docs.aws.amazon.com/redshift/latest/mgmt/install-odbc-driver-linux.html

Does anyone have an idea how I can get better insert performance into Redshift using BODS? My current suspicion is sketched below.
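
I suspect the row-by-row ODBC inserts themselves are the bottleneck: Amazon's documented fast path into Redshift is bulk loading flat files from S3 with the COPY command, roughly like the hypothetical sketch below (bucket, path, and credentials are placeholders), but I don't know the best way to drive that from Data Services.

-- Hypothetical sketch: stage the extract as CSV in S3, then bulk load it
-- in one statement. Bucket, path, and credentials are placeholders.
COPY target_table
FROM 's3://my-bucket/extracts/data.csv'
CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
CSV;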

 

 

Thanks

Customer Virtual Coffee Corner for SAP Data Services


Hello All,

 

I wanted to be certain you are aware that starting at the end of October 2015, the EIM Support Team will be holding Customer Virtual Coffee Corner sessions for SAP Data Services.

 

These sessions are for licensed SAP customers who have received an email invitation from the EIM Support Team.

 

If you'd like an invite as well as further information, please refer to the following:

 

Customer Virtual Coffee Corner for SAP Data Services

 

Cheers!
Julie

The EIM Bulletin


Purpose

 

The Enterprise Information Management Bulletin (this page) is a timely and regularly-updated information source providing links to hot issues, new documentation, and upcoming events of interest to users and administrators of SAP Data Quality Management (DQM), SAP Data Services (DS), and SAP Information Steward (IS).

 

To subscribe to The EIM Bulletin, click the "Follow" link you see to the right.

 

Hot Topics

(updated 2015-10-21)

  • Are you a licensed user of SAP Data Services? Please join the EIM Support Team for a learning session:

Customer Virtual Coffee Corner for SAP Data Services - Session on Tuesday, October 27th

  • Finally upgrading your old version of Data Integrator or Data Services? Please reference the guide below:

Best Practices for upgrading older Data Integrator or Data Services repositories to Data Services 4.2

 

 

Latest Release Notes

(updated 2015-10-01)

  • 2224623 - Release Notes for DQM SDK 4.2 Support Pack 5 Patch 2 (14.2.5.957)
  • 2192027 - Release Notes for SAP Data Services 4.2 Support Pack 4 Patch 3 (14.2.4.873)
  • 2223961 - Release Notes for SAP Data Services 4.2 Support Pack 5 Patch 2 (14.2.5.947)
  • 2192015 - Release Notes for SAP Information Steward 4.2 Support Pack 4 Patch 3 (14.2.4.836)
  • 2223957 - Release Notes for SAP Information Steward 4.2 Support Pack 5 Patch 2 (14.2.5.903)

 

New Product Support Features
(2015-09-29)

 

 

Selected New KB Articles and SAP Notes

(updated 2015-10-01)

  • 2217082 - French Address Assignment Differences
  • 2215891 - Realtime services are not starting - Data Services 4.x
  • 2215896 - Cannot load HANA Global Temporary Table from Data Services
  • 2218041 - Support for Microsoft Internet Explorer 11 - Information Steward 4.x
  • 2210650 - Error: "ORA-20002: ORA-01741: illegal zero-length identifier ORA-06513" - Data Services
  • 2211607 - XML Parser failed: Error no declaration found for element 'SUGGESTION LIST' - Data Services 4.x

 

Your Product Ideas Realised!

(new section 2015-06-25)

 

This section lists enhancements for EIM products suggested via the SAP Idea Place, where you can vote for your favorite enhancement requests or enter new ones.

 

Events

(2015-10-21)


What are we planning here in EIM? Please check out the following opportunities:

 

 

New Resources

(To be determined)


Didn't find what you were looking for? Please see:


Note: To stay up-to-date on EIM Product information, subscribe to this living document by clicking "Follow", which you can see in the upper right-hand corner of this page.

Loading multiple sheets from an Excel workbook


1. Define two global variables: one to store the sheet name and one to store the total number of sheets. We will also have one local variable as a counter.

 

[Attachment: variables.png]

 

Initialize $L_SHEET_COUNT = 1 in initializing_SCR.

 

2. Drag in a While loop and add the data flow that uses the Excel sheet as a source. Add two more scripts, one before the DF and one after it.

 

[Attachment: while.jpg]

As you can see, the While condition is $L_SHEET_COUNT <= $G_Total_Sheet, where $L_SHEET_COUNT starts at 1 and $G_Total_Sheet = 4 (because I have 4 sheets).

 

3. In Sheet_Name_SCR, write the code below:

 

$G_SHEET_NAME = 'sheet'||$L_SHEET_COUNT;

print('Loading '|| 'Sheet'||$L_SHEET_COUNT);

 

4. Increment the counter in increment_SCR: $L_SHEET_COUNT = $L_SHEET_COUNT + 1;

 

5. Define the file format.

 

[Attachment: file_format.jpg]

Right-click on Excel Workbooks, select New, and create the file format. The screen above shows how to do it.

 

Make sure you have checked "Use first row values as column names", chosen the worksheet option, and passed it the global variable ($G_SHEET_NAME) that is set in Sheet_Name_SCR. The complete loop is summarized below.
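
Putting the three scripts together, the loop looks like this (sheet names 'sheet1' through 'sheet4', as in the example above):

# initializing_SCR: set the counter and the total number of worksheets
$L_SHEET_COUNT = 1;
$G_Total_Sheet = 4;

# While condition: $L_SHEET_COUNT <= $G_Total_Sheet

# Sheet_Name_SCR (inside the loop, before the DF): build the worksheet name
$G_SHEET_NAME = 'sheet'||$L_SHEET_COUNT;
print('Loading '|| 'Sheet'||$L_SHEET_COUNT);

# increment_SCR (inside the loop, after the DF): move on to the next sheet
$L_SHEET_COUNT = $L_SHEET_COUNT + 1;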

 

Please like if this helps.

 

Thanks,

Imran

How to join related records for IDOC when using RowGeneration?


I have a situation where I want to create IDOCs for customers (one IDOC per customer). Each customer may have multiple associated accounts, and each account becomes an IDOC segment. Each account has a DOCUMENT-NUMBER that links it to its customer.

 

  • Cust1
    • Account1
  • Cust2
    • Account1
    • Account2

 

I first created the job to generate one IDOC per customer by adding a Row_Generation transform and selecting from it. This works: I get one IDOC per customer. The segments select only from the Row_Generation transform (not from Customer). This is how my input schema looks for each segment:

 

http://i.imgur.com/7mzk1EH.png

 

However, I now want to add segments to each of these IDOCs (for the accounts).

When I build the join criteria, I have to join Customer to Account. This means I must select from Customer/Account, which returns all accounts for all customers and makes each row its own IDOC (so in the situation above I get 3 IDOCs, each with 3 segments, instead of one IDOC with 1 segment and another with 2).

http://i.imgur.com/0L1ESU2.png

I am not sure how to modify these schema selections so that the segments come out correctly. It seems I need to avoid selecting from Customer here, but in that case I am unsure how to set up the join so that each set of accounts is linked to the proper customer.


Reason for unexpected exception


[Attachment: Exception1.jpg]

 

Hi,

My Designer gets logged out many times a day, and I need to reopen it each time. Any suggestions or documents on this would be appreciated. Please let me know.

 

 

Thanks,

Pradeep

Multiple left outer joins...


Hello All,

Can someone please help me achieve the following in a Query transform? My reading of the ANSI equivalent is shown after the snippet.

 

and h.coverage_id = c.coverage_id(+)

and c.payor_id = epm.payor_id(+)

AND c.plan_id = epp.benefit_plan_id(+)
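
For reference, my reading of the equivalent ANSI syntax is below (aliases h, c, epm, and epp as in the snippet); I assume that in the Designer this is modeled on the Query transform's FROM tab by setting each join pair's type to Left Outer Join, rather than by typing SQL:

-- ANSI equivalent of the Oracle (+) outer-join conditions above (a sketch)
FROM h
LEFT OUTER JOIN c   ON h.coverage_id = c.coverage_id
LEFT OUTER JOIN epm ON c.payor_id    = epm.payor_id
LEFT OUTER JOIN epp ON c.plan_id     = epp.benefit_plan_id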



Thanks.


Error when triggering BODS jobs from BW process chains


Hi,

I'm getting the error below when executing BODS jobs from a BW process chain:

"Error while accessing repository: XXXXX: Implicit conversion from datatype 'VARCHAR' to 'INT' is not allowed. Use the CONVERT function to run this query."

I entered all the necessary parameters: repository name, job server name, and job name. The job has global variables without default values, so I left them blank when passing from BW.

The job executes fine from BODS itself, so I am not sure what causes the error when it is triggered from BW. Has anybody faced this issue? Please help.

 

Thanks,

Rohith

BODS data load into Salesforce


Hi,

We are required to load an inbound file into Salesforce. Can someone point me to documentation on this?

We are on BODS 14.2.3.5XX on Windows, loading into Salesforce.

I see documentation on using Salesforce objects as sources, but not as targets.

Earlier (in BODS 12.x), we loaded data into Salesforce using third-party APIs. Any lead would be highly appreciated.

 

Thanks and Regards.

Kumar.

Updating records in Salesforce is taking a few days


I am the Salesforce Administrator trying to work with our SAP Data Services team, who have installed SAP Data Services with the Salesforce adapter. Inserting records is rather quick; however, updating about 400,000 records takes 4-5 days. We would eventually like a daily update into Salesforce from our data warehouse, which collects information from six or seven different locations; on average 150,000 records a day will be updated.

My SAP team says the only thing they can do is run a daily query in the data warehouse for all the updates and put them into a CSV file for me to load manually. That seems laborious for something I would expect to happen automatically. Does the SFDC adapter use the Bulk API for updates, and is there something I can enable to speed it up? I have tried to get the SAP team to look into it, but all they will say is: "On a daily basis, we can provide a single file with all the rows that need updating to the Salesforce team, but they will have to figure out how to divide the file into 10,000-row chunks, parse them into separate files, and upload them into Salesforce, as this is not something the data warehouse team does."

 

Any help would be greatly appreciated.


