Channel: SCN : All Content - Data Services and Data Quality

SQL Transform uses varchar but not nvarchar


Hi all,

 

I am using DB2 as the source database and for the local repository, and I am on DS 4.1 SP5.

 

Some of my data contains Chinese characters, and I store it in the database with the VARGRAPHIC data type.

 

When I select that data in a SQL transform, SAP Data Services automatically creates a temporary table with "varchar" instead of "nvarchar".

As a result, the job fails because varchar is not compatible with VARGRAPHIC for Chinese characters.

 

There is a "Use NVARCHAR for VARCHAR columns in supported databases" option for temporary tables.

However, there is no such option in the SQL transform.

 

How can I fix this issue?

 

Regards,

Gordon Lo


Unable to create WSDL for service.


Hi all

 

I want to expose a real-time job as a web service. The real-time job runs fine without any errors or warnings, but I cannot generate the WSDL.

The error shown is "Unable to create WSDL for service:Doc".

Please help me solve this error.

 

Thanks in advance

PrasannaKumar

How to use lookup and lookup_seq in SAP BODS with examples.


Dear All,

 

I am new to BODS and have some doubts about using lookup and lookup_seq in SAP BODS. What is the purpose of these two functions when we already have lookup_ext to return the columns that match a condition? I understand how to use lookup_ext; I have created two batch jobs with it and they work fine. I am a little confused about how to use lookup and lookup_seq, so please show me how to use these two functions in my batch jobs, with screenshots if possible.
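In case it is useful while you wait for detailed screenshots, here is a minimal sketch of a lookup() mapping expression; the datastore, table, and column names are made up for illustration. lookup_seq() follows the same idea but adds a sequence column/value pair (typically an effective date), so please confirm its exact argument order in the Reference Guide.

# lookup(lookup_table, result_column, default_value, cache_spec, compare_column, expression)
lookup(DS_DWH.DBO.CUSTOMER_DIM, CUSTOMER_NAME, 'UNKNOWN', 'PRE_LOAD_CACHE', CUSTOMER_ID, SRC.CUSTOMER_ID)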

 

Thanks in Advance.

Break Data into Groups for SAP BAPI


Hello Experts,

 

We are using Data Services to load data from different source systems into SAP ECC via SAP BAPIs. We cannot push all the data from the source to ECC in one stretch, so I need to break the data into groups.

 

For example: we have 1 million rows in the source that need to be loaded into SAP, and we need to break them into groups of, say, 50,000 records.

 

Could you please let me know how to handle this in DS?
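One common pattern (a sketch only; the datastore, table, and variable names and the batch size are assumptions, not a confirmed design) is to tag each staged row with a batch number and then loop over the batches in the job:

# 1) In a staging Query transform, tag each row with a batch number:
#       BATCH_NO = ceil(gen_row_num() / 50000)
#
# 2) Initialization script in the job, before a While Loop object:
$G_Batch_No  = 1;
$G_Max_Batch = sql('DS_STAGE', 'select max(BATCH_NO) from STG_SOURCE');

# 3) While Loop condition:  $G_Batch_No <= $G_Max_Batch
#    Inside the loop, the BAPI data flow reads only the rows where BATCH_NO = $G_Batch_No,
#    followed by a small increment script:
$G_Batch_No = $G_Batch_No + 1;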

 

Thanks,

Nishith

Loading a flat file to HANA is running out of physical memory on server


Dear SAP forum,

 

I'm running a test at my client site: loading a 150 million row flat file into a HANA table to gather performance metrics. I have created the simplest possible data flow, connecting the flat file format directly to a HANA table (no transformations at all, just a straight mapping). What happens is that I eventually run out of virtual memory about halfway through the file. My first reaction is: why is anything being written to memory at all for a straight migration? Watching the Windows task monitor, I can see that when I start the job the virtual memory gets used up little by little, and the job eventually crashes when I hit about 90 million rows. When I use SQL Server as the target, I do not get any virtual memory issues at all; usage stays flat, as I would expect.

 

The csv file is located on the job server. 

 

The real question to me is: why is Data Services writing to physical memory when the target is HANA and not when the target is SQL Server? 

 

After 30 million rows I run out of physical memory (99%) with HANA, whereas at the same point with SQL Server memory utilization does not move from 35%.

 

Does this mean that all data loaded into HANA using Data Services gets staged in physical memory on the server?

 

Does anyone have a clue as to why this is happening?

 

My Data Services version is the new 4.2 SP01, and this also happened with version 4.1.

 

I appreciate any ideas.

 

Best regards,

 

Michael Ardizzone

itelligence group

Is there any way to parameterize DataStore configuration


My main issue is code deployment from DEV to multiple environments: if I deploy directly from DEV, the datastore configurations get replaced. I am looking to parameterize the datastore configurations so that I can use a different set of values in each environment.

 

I know an alternative solution:

 

Configure the respective environment's credentials initially and skip the datastore configuration when exporting from DEV.

 

I am looking for a more robust way.

 

Thanks in advance.

File Watcher in BODS 4.1?



Hi Experts,

 

I'm using Data Services 4.1.

My requirement is that every day at 5 AM my ETL should check for a file (CSV); if it is there, it should start loading into the target and, once done, move the file to another location.

The file is available on the same server at a specified location.

 

Please provide the code if there is any script I need to use.
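A minimal sketch of such a script, assuming the job itself is scheduled for 5 AM, a Linux job server, and made-up file paths (please check the exact wait_for_file() arguments and return codes against the Reference Guide):

# Poll for the file for up to 30 minutes, checking every 60 seconds (times in milliseconds)
$G_File_Found = wait_for_file('/data/inbound/daily_load_*.csv', 1800000, 60000);

if ($G_File_Found = 1)
begin
   print('File found - the load data flow can run next');
end
else
begin
   print('No file arrived - nothing to load today');
end

# In a script placed after the load data flow: move the processed file to an archive folder
# (on a Windows job server, use cmd.exe with a move command instead of sh/mv)
$G_Move_Result = exec('sh', '-c "mv /data/inbound/daily_load_*.csv /data/archive/"', 8);
print('Move result: [$G_Move_Result]');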

 

Thanks & Regards,

Mahesh babu k

Facing issue in deploying jobs in production.


We export jobs from Development to Production by exporting and importing XML.


All the datastore fields are overridden by the development parameters, such as 'TNS name', 'User Name', and 'Password'.
Every time, we have to reset the password and connection properties of the datastore.

 

Is there any method of export or import so that the datastore properties do not get overridden?

 

Regards

Anupam


View Data services job server error


Hi all

 

I am configuring Information Steward in the CMC. I get the following error when I select the "View Data Services job server" option by right-clicking the Information Steward application in the CMC:

 

 

[Screenshot attachment: IS.png]

 

I also cannot find the Data Services option under integrator type when creating an integrator source on the Information Steward tab in the CMC.

(Information Steward -> Manage -> New -> Integrator Source -> Integrator Type)

 

Please help me solve this issue.

 

Thanks in advance

PrasannaKumar

Effective_Date transform in SAP BODS.


Hi All,

 

Can anybody explain, with a scenario, what the Effective_Date transform in SAP BODS is used for? I understand the concept of an effective date from the technical manual, but not how to get the effective date into the target table: do we have to write a custom function based on the available dates and the client requirements, or is there another way to get it? In my transform I am using the Effective_Date transform and selecting a date column from the table in the Effective Date option. I am leaving the Effective Sequence Column blank, and I just get the default date that I specified in my target table. How do I use the Effective_Date transform to derive effective dates based on warranty or similar requirements? Please provide a scenario with screenshots; it will be very helpful for many people, not only for me.

 

 

Thanks in Advance.

Subbu.

BODS alternatives and considerations


1. SQL transform

When the underlying table is altered in the database (columns added or deleted), the SQL transform must be refreshed with "Update schema".

If it is not, the transform will not pull any records from the table, and it neither errors nor warns when we validate the job from the Designer.

 

2. Date format

to_date() function

The 'YYYYDDD' format is not supported by BODS.

The documentation does not provide any information on converting 7-digit Julian dates (legacy COBOL dates).

We may need to write a custom function to convert these dates, or get the date from the underlying database if the database supports this format, as in the sample function below.

------ Sample custom function (input: $julian_dt as varchar) ----------

# Dates starting with '2000' or '18' are converted by the source database, which supports the YYYYDDD format
if (substr($julian_dt, 1, 4) = 2000 or substr($julian_dt, 1, 2) = 18)
begin
   return(sql('DS_Legacy', 'select to_char(to_date(' || $julian_dt || ',\'YYYYDDD\'),\'YYYY.MM.DD\') from dual'));
end

# Otherwise treat the value as a JDE-style CYYDDD date (jde_date() is a separate custom function)
return decode((substr($julian_dt, 1, 2) = 19), jde_date(substr($julian_dt, 3, 5)),
              (substr($julian_dt, 1, 2) = 20), add_months(jde_date(substr($julian_dt, 3, 5)), 1200),
              NULL);

------------- Sample function END ---------------

 

 

3. Getting a timestamp column from a SQL transform

In a SQL transform, a timestamp field cannot be pulled directly; instead, we can convert it to text (or another custom format) in the SQL, pull it, and convert it back to the desired datetime format downstream, as in the sketch below.
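A sketch of this pattern (assuming an Oracle source; the table, column, and format strings are illustrative):

# In the SQL transform's SQL text, return the timestamp as plain text:
#    SELECT ORDER_ID,
#           TO_CHAR(LAST_UPDATE_TS, 'YYYY-MM-DD HH24:MI:SS') AS LAST_UPDATE_TXT
#    FROM   ORDERS
#
# In a downstream Query transform, convert the text back to a datetime:
to_date(SQL_Orders.LAST_UPDATE_TXT, 'yyyy-mm-dd hh24:mi:ss')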

 

4. When a character field is mapped to a numeric field, any value with no numeric equivalent is converted to NULL.

If the value is numeric-equivalent, it is typecast to numeric.

Alternative: add nvl() after you map it to the numeric field if you do not want to populate NULL for that field in the target (see the sketch below).
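For example (a sketch; the column names and the default value 0 are just illustrations):

# Query_1 maps the character column to a decimal output column; non-numeric values become NULL
# Query_2 then replaces those NULLs with a default:
nvl(Query_1.AMOUNT_NUM, 0)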

 

5. For character fields, when a longer field is mapped to a shorter field, the value is truncated and propagated.

If the source value is all blanks, NULL is propagated to the next transform

(similar to the previous point).

 

6. When using the gen_row_num() function:

If this function is used in a Query transform that also performs a join, it can generate duplicate values.

The issue is not with the function itself but with how the Query transform combines the join with gen_row_num().

Reason: for every transform, BODI generates a SQL query and pushes it down to the database to fetch/compute the result data.

- When joining, BODI caches one table, then fetches the other table, performs the join, and returns the data.

- The row numbers are generated while caching, and that is where the problem lies.

- Example: when joining a table with 100 records (cached) to a table with 200 records (assuming all 200 match the join criteria), the join output is 200 records; since the row numbers were already generated against the 100-record table, there will be 100 duplicate values in the output.

  

7. BODS version 14 allows multiple users to operate simultaneously on a single local repository.

This leads to code inconsistency: if the same object (datastore/job/workflow/dataflow) is modified by two different users at the same time, the last saved version is what ends up stored in the repository.

 

Solution: it is mandatory to use the central repository concept so code can be checked out and checked in safely.

 

 

8. "Enable recovery" is one of the best feature of BODS, when we use Try-Catch approach in the job automatic recovery option will not recover in case of job failure.

- Must be careful to choose try-catch blocks, when used this BODS expects developer to handle exceptions.

 

9. Bulk loader option for target tables:

Data is written directly to the database data files (skipping the SQL layer). When the constraints are re-enabled, even the PK may become invalid because of duplicate values in the column, since the data is not validated during the load.

- This error is shown at the end of the job, and the job completes successfully but with an error saying "UNIQUE constraint is in unusable state".

 

9a. While enabling/rebuilding a UNIQUE index on the table, if any Oracle error occurs when enabling the index, the BODS log still reports "duplicate values found, cannot enable UNIQUE index".

Actually, the issue may not be with the data at all but with the Oracle database temp segment.

 

When the API bulk load option is used, the data load is faster, and all the constraints on the table are automatically disabled and then re-enabled after the data has been loaded.

 

10. Lookup, target, and source objects from a datastore carry hard-coded schema names.

Updating the schema name at the datastore level is not sufficient to point these objects to the new schema.

Instead, we need to use an "Alias" in the datastore.

 

11. Static repository instance:

Multiple users can log in to the same local repository and work simultaneously. When one user updates a repository object, the change is not immediately visible to the other logged-in users; they must log in to the repository again to see it.

 

12. Error BODI-1270032

This error appears when we try to execute the job: the job will not start and does not even reach the "Execution Properties" window.

It simply says it cannot execute such-and-such job.

If you validate the job, it validates successfully without any errors or issues.

 

This may be caused by the following issues:

1. An invalid Merge transform (go to the Merge transform, validate it, and take care of the warnings)

2. Invalid Validation transform columns (check each validation rule)

 

Best alternative:

Go to Tools > Options > Designer > General and uncheck the option "Perform complete validation before job execution", then start the job.

The job now fails with a proper error message in the error log.

 

13. How to use global variables in a SQL transform:

You can use global variables in the SQL statement of a SQL transform.

You will not be able to import the schema while the SQL statement references a global variable, so when importing the schema use constant values instead of the global variable. Once the schema is imported, you can replace the constant with the global variable; it will be substituted with the value set for that variable when the job is executed.

The other thing: I don't think you will be able to retain the value of a global variable outside the data flow. To verify this, add a script after the first data flow and print the value of the variable - is it the same as the value set inside the data flow?

If the data type of the column is varchar, enclose the variable in { }, for example WHERE BATCH_ID = {$CurrentBatchID}; if it is numeric, use WHERE BATCH_ID = $CurrentBatchID. A fuller sketch follows.
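Putting it together, the SQL text of the SQL transform might look like the sketch below (the table, column, and variable names are illustrative; import the schema first with constants in place of the variables):

SELECT ORDER_ID, CUSTOMER_ID, ORDER_AMOUNT
FROM   STG_ORDERS
WHERE  BATCH_ID  = $G_BatchID        -- numeric column: no braces needed
  AND  LOAD_DATE = {$G_LoadDate}     -- varchar column: braces so the substituted value is quoted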

How could we write the data to SAP


Dear all,

 

I am a beginner with Data Services and would like to know the procedure for writing data to SAP. We create a datastore to read the data, we also use extractors, and we use IDocs to read; how do we connect these pieces, and how does the procedure for writing to SAP work?

 

Your help and response will be highly appreciated.

 

Regards,

Amjad.

Incomplete log in DataServices 4.1 SP1


Hello,

 

 

My issue is this: I have a very simple job containing only one script.

I use a script to call a procedure.

sql(<DataStore>,<CommandeSQL>)
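For reference, a concrete form of that call might look like the sketch below (the datastore name is made up; Test_BODS is the procedure named in the SQL Server message further down):

# Sketch: call the stored procedure and push whatever it returns into the trace log
$G_Result = sql('DS_MSSQL', 'exec dbo.Test_BODS');
print('Procedure returned: [$G_Result]');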

 

 

When I run the job in DataServices, I get this error message:

"<[Microsoft][SQL ServerNative Client 11.0][SQL Server]Avertissement : la valeur NULL est éliminée par un agrégat ou par une autre opération SET.>."

 

When I execute the procedure in MS-SQLServer2012, I have this error message:

"Avertissement : la valeur NULL est éliminée par un agrégat ou par une autre opération SET.

Msg 8134, Level 16, State 1, Procedure Test_BODS, Line 17

Division par zéro."

 

How can I display the complete log in Data Services?

 

For information:

DataServices 4.1 SP1

MS-SQLServer 2012

 

Thanks in advance for your help.

Kind regards,

Carole

Security for Job execution only


Hello All

 

I am trying to create an access level that grants the rights to view the repository and only execute jobs. I have tried the standard access level View together with the execute-job right, but this didn't work. Can you please advise on whether this is possible?

 

Thanks

Pavan

BODS jobs related to database tables are not changing the status from ‘PROCEED’ to ‘STOP’


Hello,

 

All BODS jobs which involve DB tables do not show as 'Completed successfully' after execution, even though the data is getting loaded into the tables. I have checked table-level locks in the DB, and there are no locks in my SQL Server.

 

But jobs which do not involve SQL tables are giving a 'Completed successfully' status.

 

This issue is not allowing me to add another data flow after the first one, as the second never gets executed. I have attached the monitor screenshot.

 

Regards,

Vipin


Did Table comparison have some performance changes from BODS 3.2 to 4.1 ?


I found that the performance of the Table Comparison transform degraded after upgrading from 3.2 to 4.1.

Can someone confirm this?

 

I have an ETL program; I describe its function below:

Table A is compared with table B. If some records differ from the records in table B, those records are merged into table B.

Currently, tables A and B both contain 16,000 rows, and the data is exactly the same.

 

I use "table comparison" to achive this function .

The "input primary key columns" are "MP_CP_ID" and "MD_BUSINESS_KEY".

In tables , all MP_CP_ID are the same, and all MD_BUSINESS_KEY are different .

 

In BODS 4.1:

If the order of the "input primary key columns" is "md_business_key", "md_cp_id", the program runs very slowly [I have not waited for it to finish].

If the order is "md_cp_id", "md_business_key", it runs quickly [around 5 seconds].

 

In BODS 3.2:

No matter which column comes first in the "input primary key columns", it runs fast either way.

 

I have not found an explanation for this phenomenon. Could someone tell me why?

usage of base64_decode function on BLOB data


Hi,

 

I have a DS 4.1 data flow that reads base64-encoded Word documents from a source.

The encoded Word docs are stored in a varchar(200000) field and look, for example, like this:

 

0M8R4KGxGuEAAAAAAAAAAAAAAAAAAAAAPgADAP7/CQAGAAAAAAAAAAAAAAABAAAAUwAAAAAAAAA [...]

 

I'd like to decode the field and store it in a BLOB/ST_MEMORY_LOB column in SAP HANA, but I'm struggling with base64_decode() in DS.

 

Approach 1:

base64_decode(<encoded_doc>, 'UTF-8') seems to work only if <encoded_doc> is an encoded UTF-8 string, which is not the case here - the Word docs are encoded binary data.

 

Approach 2:

Step 1: apply varchar_to_long(<encoded_doc>) and map it to a BLOB field in DS.

Step 2: in a downstream query, call base64_decode(<encoded_doc_as_BLOB>) and map the output to a LONG field, which becomes a BLOB/ST_MEMORY_LOB in my SAP HANA table.

 

When I do this, I get wrong data in the SAP HANA BLOB that has nothing to do with the original Word doc.

 

Does anyone have experience with base64_encode/decode on binary data stored in BLOBs?

Thanks for your response!

 

Regards, Martin

 

PS: I can write the encoded data into SAP HANA if I map the varchar_to_long(<encoded_doc>) output to a LONG field in DS and map it to a SAP HANA BLOB field, but I don't have a base64 decode function in SAP HANA.

Different Data Loading Methods from BODS


Hello All,

 

Scenario: While working on an SAP data migration project, I have the following target systems: SAP ECC, EWM & APO.

Understanding: When posting data from BODS to SAP, I may need to use the following methods directly in SAP ECC: IDocs, BAPI, or LSMW. EWM & APO are different boxes which reside within SAP ECC only.

Doubt: Is some different method or terminology used for loading data into the EWM & APO systems? Also, are there any other ways of loading the data apart from the traditional ones?

 

If someone has worked on this kind of scenario or knows about it, please suggest some inputs. Thanks in advance!

 

 

Best !

Deep

SAP AIO contents for DM


Hello Experts,

 

I am not able to locate the SAP AIO (All-in-One) content documents/.ATL files for data migration objects on the SMP. I have already seen some links, but they do not direct me to the right location. Please help with this.

 

Thanks in advance.

 

+ Deep

Unknown symbols in BODS error log


Dear All,

We are connecting to SQL Server with the DataDirect ODBC driver 6.1 (part of the BODS installation package) and are able to extract/load data into SQL Server. However, if any error occurs we cannot see the SQL Server error messages in the BODS error log; it shows some unknown symbols, as in the attached screenshots.


BODS version : 4.1

Job server OS : Linux

ODBC Driver: /dataservices/DataDirect/odbc/lib/DAsqls25.so [DataDirect 6.1 SQL Server Wire Protocol]

 

Can anyone help with this issue?

 

[Screenshot attachments: msg2.png, msg3.png, msg4.png]

 

Thank You,

Tirupal

