Channel: SCN : All Content - Data Services and Data Quality

What is the difference between SCD type 2 & 3? How to implement SCD type 3 in BODS?


Hi All,

 

I'm new to BODS. Can anyone please help me implement SCD Type 3 in BODS?


Slowly Changing Dimensions


SCD- Slowly Changing Dimensions

SCDs are dimensions that have data that changes over time. The following methods of handling SCDs are available:

  • Type 1: No history preservation
      - Natural consequence of normalization
  • Type 2: Unlimited history preservation and new rows
      - New rows are generated for significant changes
      - Requires use of a unique (surrogate) key
      - New fields are generated to store history data
      - Requires an Effective_Date field
  • Type 3: Limited history preservation
      - Two states of data are preserved: current and old

 

Slowly Changing Dimension Type 1(SCD Type1)

For an SCD Type 1 change, you find and update the appropriate attributes on a specific dimension record. The new information simply overwrites the original information; in other words, no history is kept.

Example

Customer Key | Name      | State
-------------|-----------|---------
1001         | Christina | Illinois

After Christina moved from Illinois to California, the new information replaces the original record, and we have the following table:

Customer Key | Name      | State
-------------|-----------|-----------
1001         | Christina | California

 

 

Advantages:

This is the easiest way to handle the Slowly Changing Dimension problem, since there is no need to keep track of the old information.

Disadvantages:

   All history is lost. By applying this methodology, it is not possible to trace back in history. For example, in this case, the company would not be able to know that Christina lived in Illinois before.
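To make the overwrite behaviour concrete, below is a minimal Python sketch of Type 1 handling on an in-memory dimension table. This is purely illustrative and not BODS-specific; the key and column names are taken from the example above.

    # SCD Type 1 sketch: overwrite the changed attributes in place; no history is kept.
    dimension = {
        1001: {"name": "Christina", "state": "Illinois"},
    }

    def scd_type1_update(dim, customer_key, **changed_attrs):
        """Overwrite the changed attributes on the existing record."""
        dim[customer_key].update(changed_attrs)

    scd_type1_update(dimension, 1001, state="California")
    print(dimension[1001])  # {'name': 'Christina', 'state': 'California'} -- Illinois is gone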

 

Slowly Changing Dimension Type 2(SCD Type2)

With a Type 2 change, we don’t make structural changes in the table. Instead we add a record. A new record is added to the table to represent the new information. Therefore, both the original and the new record will be present. The new record gets its own primary key.

In our example, recall we originally have the following table:

Customer Key | Name      | State
-------------|-----------|---------
1001         | Christina | Illinois

 

After Christina moved from Illinois to California, we add the new information as a new row into the table.

Customer Key | Name      | State
-------------|-----------|-----------
1001         | Christina | Illinois
1005         | Christina | California

Advantages:

This allows us to accurately keep all historical information.

Disadvantages:

This will cause the size of the table to grow fast. In cases where the number of rows for the table is very high to start with, storage and performance can become a concern.

This necessarily complicates the ETL process.
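As a rough illustration of the Type 2 mechanics, here is a plain Python sketch (again not BODS-specific; the surrogate key value 1005, the current flag and the valid_from column are assumptions made for this example):

    from datetime import date

    # SCD Type 2 sketch: retire the current version and append a new row with its own key.
    dimension = [
        {"customer_key": 1001, "name": "Christina", "state": "Illinois",
         "valid_from": date(2000, 1, 1), "current": True},
    ]

    def scd_type2_change(dim, name, effective_date, next_key, **new_attrs):
        """Mark the current version historical and append the new version."""
        current_row = next(r for r in dim if r["name"] == name and r["current"])
        current_row["current"] = False            # the old row stays in the table
        new_row = {**current_row, **new_attrs,
                   "customer_key": next_key,      # new surrogate key for the new version
                   "valid_from": effective_date,
                   "current": True}
        dim.append(new_row)

    scd_type2_change(dimension, "Christina", date(2003, 1, 15),
                     next_key=1005, state="California")
    for row in dimension:
        print(row)  # both the Illinois row (1001) and the California row (1005) remain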

 

Slowly Changing Dimension Type 3(SCD Type3)

With a Type 3 change, we change the dimension structure itself: the existing attribute is renamed to hold the original value, and two attributes are added, one to record the new (current) value and one to record the date of change.

       In our example, recall we originally have the following table:

Customer Key | Name      | State
-------------|-----------|---------
1001         | Christina | Illinois

 

After Christina moved from Illinois to California, the original record is updated in place, and we have the following table (assuming the effective date of change is January 15, 2003):

Customer Key | Name      | Original State | Current State | Effective Date
-------------|-----------|----------------|---------------|---------------
1001         | Christina | Illinois       | California    | 15-JAN-2003

Advantages:

This does not increase the size of the table, since the new information is stored by updating the existing row.

This allows us to keep some part of history.

Disadvantages:

Type 3 will not be able to keep all history where an attribute is changed more than once. For example, if Christina later moves to Texas on December 15, 2003, the California information will be lost.
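A corresponding Type 3 sketch in plain Python, using the Original State / Current State / Effective Date columns from the example above (illustrative only). Note how the second change overwrites the first, which is exactly the limitation just described:

    from datetime import date

    # SCD Type 3 sketch: one extra column holds the current value, one the date of change.
    dimension = {
        1001: {"name": "Christina",
               "original_state": "Illinois",
               "current_state": "Illinois",
               "effective_date": None},
    }

    def scd_type3_change(dim, customer_key, new_state, effective_date):
        row = dim[customer_key]
        row["current_state"] = new_state        # only the latest value is kept here
        row["effective_date"] = effective_date  # original_state is never touched again

    scd_type3_change(dimension, 1001, "California", date(2003, 1, 15))
    scd_type3_change(dimension, 1001, "Texas", date(2003, 12, 15))
    print(dimension[1001])  # California is lost: only Illinois (original) and Texas survive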

Data Services InfoPackages process issues


We integrated two external systems through DI Data Services, and process chains were developed for all the Data Services InfoPackage jobs. The process chain fails with a timeout when the source system does not return any data.

 

I would like to find out whether anybody has been able to resolve this issue.

Cleansing Package XI3.2


Hi all

 

I just installed the BODS XI 3.2 cleansing package. I tried to create a repository with it on MySQL, but got the error "The cleansing package repository does not exist."

unable to see repositories in DS management console


Hi guys,

 

I have installed BODS 4.0 SP2. After installation I created central repositories from the Repository Manager and assigned a job server to the repository; it showed as successful.

 

Everything was fine up to this point, but when I try to register repositories from the BODS Management Console I am unable to see the central repository option under Administrator.

And when I try to do the same task from the CMC, there is no Data Services option in the CMC Home drop-down; instead I see the option below.

 

 

Please check the error screenshot file I attached here.

 

 

Please help me out.

Sequencing using Webservices


HI All,

 

Can anybody give me input on how to schedule jobs in sequence using web services?

 

Thanks in Advance

Rajee

Africa SAP User Group Data Services 4.1


A few months ago I was asked by one of the SAP Mentors, Zimkhita Buwa, if I would do an online session on Data Services 4.1.

 

 

I had never done one before and was a bit sceptical at first, but I said yes to give it a try.

 

 

On the day of the web session everything went according to plan, apart from a slight sound echo.

 

 

Here is a replay for anyone interested in having a look at the new features in Data Services 4.1, or who would like to see the new Workbench in action.

Follow me on twitter for more info. @louisdegouveia

Data Services 4.2 What's New Overview


There was a WebEx session on 28 May 2013 to reveal some of the new features in Data Services 4.2. I must say there are some nice new features coming.

 

The session was presented by Connie Chan, who works in SAP IM Project Management.

 

[Image: Data_Services_4.2.jpg]

The session began with Connie discussing what SAP is trying to achieve with Data Services. In the picture below you can see the high-level points.

[Image: SAP_Data_Integration_Mission.jpg]

The features for 4.2 focus on providing functionality around:

  • Smart & Simple
  • Information Governance
  • Big Data
  • Enterprise Ready

 

[Image: InforManagement.jpg]

With regard to Big Data, the following features were a focus:

  • Real-time Data Provisioning
  • Harness the Power of HANA
  • Enhance Adapter SDK
  • Unlock SAP ECC data asset

[Image: Big_Data.jpg]

With regard to real-time with Data Services, SAP is trying to bring together Sybase Replication, SAP SLT and Data Services. When I heard this I was really excited.

[Image: Real-Time Data Provisioning.jpg]

In order to achieve real-time replication, SAP will make use of Sybase Replication Server, which will replicate the data into Sybase ASE. From there Data Services will pull the changes from Sybase ASE via CDC.

 

What is unclear here is whether Sybase ASE and Replication Server will come bundled with SAP Data Services, or whether they are separate installs.

 

[Image: Real-Time CDC.jpg]

Connie then proceeded to demonstrate how this will be achieved by using the new datastore option for Sybase RepServer. This is Step 1.

 

[Image: Step1.jpg]

Step 2 is to set up how regularly this replication should run. This can be set up on the workflow by choosing the appropriate option.

[Image: continous.jpg]

Some further settings can be found on a second tab; these relate more to how frequently resources are released.

 

[Image: Continous2.jpg]

Connie then moved on to how Data Services has been optimized to work better with HANA. We all know that when developing with Data Services we should push down as much work as possible to the database, but to do this effectively with HANA, Data Services can't just push down standard SQL. It needs to do more than that to harness the power of HANA.

[Image: HANA.jpg]

Previously Data Services would have executed the dataflow below by issuing simple SQL at the points where we see the red bubbles, with the remainder of the work done by Data Services itself. Now that Data Services has been optimized, those queries fall away. Data Services will instead create a calculation view that not only fetches the data but also does a lot of the transformation, transformation that would previously have been done by Data Services.

[Image: HANA_Optimization.jpg]

Below you can see the SQL that would previously have been generated versus the calculation view that will be generated in Data Services 4.2. One can clearly see that we will gain significant performance improvements when extracting from HANA.

[Image: OptimizedSQL_HANA.jpg]

Connie also discussed some enhancements that have been made to the SDK.

Shortly after, Connie elaborated on how more extractors have been exposed to Data Services, over 5,000 of them.

[Image: ECC_Unlock.jpg]

From the smart and simple aspect, Connie discussed the Data Services Workbench. This was first released with 4.1; it was very simple and could not handle complex transformations. We can now see that more functionality has been made available within the Workbench, including the visual part of the dataflow. In the screenshot below you can see that when clicking on a transform you can apply the column transforms at the bottom of the screen. This gives you a complete overview, whereas previously, while editing a transform, you would lose sight of the complete dataflow.

 

[Image: Data_Services_WorkBench.jpg]

With Data Services 4.2 you will now be able to export and import content from the respective repositories. This will make transporting content between environments easier.

[Image: Export.jpg]

Those are most of the points that, in my opinion, make 4.2 an interesting release.

 

Hope you enjoyed reading the blog.

 

Follow me on twitter for more info. @louisdegouveia


Candidate Selection Option in Match Transform


How to work with candidate selection in Match Transform

The candidate selection process helps when a new data set from a source system has to be compared and matched against an existing data collection in the data warehouse. Without candidate selection, the newly matched data would simply get appended, and there would not be any logical grouping between the new data set and the existing data in the database.

To achieve grouping between the new source data set and the existing DB data, we have an option in the Match transform: Candidate Selection.

[Image: Pic1.jpeg.jpg]

Open the Match transform, for example FirmAddress_MatchBatch.

[Image: Pic2.jpg]

Mapping Input:

  • Map the default fields: FIRM, ADDRESS_PRIMARY_NAME, ADDRESS_PRIMARY_NUMBER, ADDRESS_SECONDARY_NUMBER and POSTCODE
  • Apart from the default fields, map all columns that are required to be matched against DB columns

Options:

Right-click the "Group Forming" option and select Candidate Selection. In the candidate selection editor, provide the details below.

Datastore: Select the valid DW connection where existing data collection is available

  • If the secondary source containing the existing data collection is static, a persistent cache datastore can also be used as the datastore. This is also advantageous when the secondary source is not an RDBMS.

 

Cache Type: Pre-Load Cache or No-Cache

Auto Generate SQL or Custom SQL: Use either auto-generated SQL or custom SQL to fetch the columns from the database table that are to be mapped to the input data.

Break Key: The break key on the input data must match the break key used in the database table.

  • Use Break Key column from Database
  • Or Match the break key fields with DB columns

 

Use Constant Source Value: In an MDM scenario, for example, you would be matching data from multiple sources. If you want to keep track of the input source data set, provide a constant value (Physical Source Value) and map it to the DB column (Physical Source Field).

Column Mapping: Map fields from the input data set to your DB table columns. For break key fields, the Break Key column is automatically set to "YES".

Output: Map all required fields to the output

 

[Image: Pic3.jpg]

Group Prioritization: Priority can be set using the Group Forming > Group Prioritization operation option. This ensures that high-priority records become the master (driver) record. Without this prioritization, records from the original source are always considered the driver.

Update DB with new data set: If data from the new data set has to be matched and merged into the DB table again, use the Insert Else Update option in the target table's update control properties.
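To illustrate the idea behind candidate selection outside the tool, here is a small conceptual Python sketch (this is not the Match transform itself; the firm/postcode columns and the postcode break key are assumptions). For each incoming record, only the existing warehouse rows that share its break key are pulled in as match candidates, which is what keeps new and existing data in the same logical group.

    # Conceptual sketch of break-key based candidate selection.
    existing_dw_rows = [
        {"firm": "ACME Corp", "postcode": "10001", "address": "5 Main St"},
        {"firm": "Acme Corporation", "postcode": "10001", "address": "5 Main Street"},
        {"firm": "Globex", "postcode": "94105", "address": "1 Market St"},
    ]

    new_source_rows = [
        {"firm": "ACME Corp.", "postcode": "10001", "address": "5 Main St"},
    ]

    def break_key(row):
        # Must be built the same way on the input data and on the DB table.
        return row["postcode"]

    def candidate_groups(new_rows, dw_rows):
        """Group each new record with the existing rows that share its break key."""
        groups = []
        for new_row in new_rows:
            candidates = [r for r in dw_rows if break_key(r) == break_key(new_row)]
            groups.append({"input": new_row, "candidates": candidates})
        return groups

    for group in candidate_groups(new_source_rows, existing_dw_rows):
        print(group["input"]["firm"], "->", [c["firm"] for c in group["candidates"]])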

 

Hope you find this information useful.

 

Thanks,

Sukanya Krishnan

Configure CDC in DS 4.2 for Sybase replication server


Hi ,

 

I am using BODS 4.2.

 

I have configured CDC using Sybase Replication Server (I configured Sybase Replication Server, Sybase ASE and Sybase PowerDesigner). As per the configuration, I have a source database (I used the pubs2 database from Sybase ASE) and a staging (CDC) database that contains the source table structures along with the SAP CDC tables, which have additional fields (DS_SEQUENCE_NUMBER, DS_SRCDB, DS_SRCDB_SRV, DS_SRCDB_ID, DS_SRCDB_COMMIT_TIME, DS_SRCDB_ORIGIN_QID, DS_SRCDB_OP_TYPE, DS_CDCDB_PUBLISH_TIME) for each source table.

 

Here I created a datastore of type SybaseRepSrvr. In it I can only see the metadata of the source tables, with two additional fields, DI_SEQUENCE_NUMBER and DI_OPERATION_TYPE. I couldn't see the SAP CDC tables from the staging (CDC) database.

 

I want to build the flow for change data capture. Can I build a separate flow for the initial load? If so, what is the source and what is the target?

 

Here I have the following databases and three datastores:

 

1. Source Database

2. CDC Database(Staging database)

3. Target Database.

 

What are the source and target databases for the initial load?

What are the source and target databases for the delta load?

 

I tried to run the initial load from the source to the CDC database tables using a datastore other than the CDC datastore. After loading the data, I couldn't see the data in the same table through the CDC datastore.

 

The technical manual has complete information on the setup, but not enough information from the BODS side.

 

Please help me understand how to build the CDC flows for Sybase replication.

 

Thanks & Regards,

Ramana.

Blueprints Data quality USA LoadInitial error


Hello experts,

 

I'm trying to test the Data Quality USA blueprints. I imported the .atl file successfully, removed and added back the datastore, and all the text and XML files are in the tutorial directory. I can view data in the datastore. But when I run the LoadInitial job I get the large number of errors below. What do these errors mean, and did I miss anything?

 

 

109049672 VAL-030910 6/2/2013 8:23:13 PM |Session DqBlueprintUSA_LoadInitial Transform <USARegulatoryNonCertified_AddressCleanse>: Option Error (Option: [Non Certified Options/Disable Certification]): <URC2500>: PSFORM 3553 will not be generated because some Non-Certified options are enabled.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[Cleansing Package/Cleansing Package Name]]): The option has valid choices of . The value specified was PERSON_FIRM_EN.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [Options/Parser Configuration]): <UDC0044>: Data Cleanse is unable to retrieve cleansing package PERSON_FIRM_EN from Cleansing Package Builder. The cleansing package does not exist.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[Options/Parser Configuration]]): Error in invoking the Verifier <UDC0024>: to determine valid parsers for transform <UDC0023>: Data Cleanse.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was PERSON1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was GIVEN_NAME1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was PERSON1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was GIVEN_NAME1_MATCH_STD1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was PERSON1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was GIVEN_NAME1_MATCH_STD2.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was PERSON1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was GIVEN_NAME1_MATCH_STD3.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was PERSON1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was GIVEN_NAME2.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was PERSON1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was GIVEN_NAME2_MATCH_STD1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was PERSON1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was FAMILY_NAME1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was PERSON1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was MATURITY_POSTNAME.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was PERSON1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was PERSON.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was PERSON1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was TITLE.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was NORTH_AMERICAN_PHONE1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was NORTH_AMERICAN_PHONE.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/PARENT_COMPONENT]]): The option has valid choices of . The value specified was EMAIL1.
109049672 VAL-030910 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Option Error (Option: [[OUTPUT_FIELDS/FIELD/GENERATED_FIELD_NAME]]): The option has valid choices of . The value specified was EMAIL.
109049672 VAL-030900 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Verifier DLL <datacleanseverifieru.dll> function <Verify> failed due to previous error.
109049672 VAL-030900 6/2/2013 8:23:14 PM |Session DqBlueprintUSA_LoadInitial Transform <EnglishNorthAmerica_DataCleanse>: Verifier DLL <datacleanseverifieru.dll> function <Verify> failed due to previous error.

Program /FLDQ/AD_REPT_LOAD_COUNTRIES doesn't exist and Region_short.txt not found


Hi Folks,

 

I am in the process of carrying out a DQM installation and I am facing two issues.

Here are a few details about the installation:

DQM installation for SAP 4.1 SP1 (the upgrade from 4.0 to 4.1 SP1 was done recently).

The address engine that is enabled is Address Engine USA.

The issues are as follows:

1. The program /FLDQ/AD_REPT_LOAD_COUNTRIES, which loads the supported countries, doesn't exist in SAP, so I am unable to carry out the next task, which is to run the Quarterly Adjustments report.

2. When I try to execute the real-time job Job_Realtime_DQ_SAP_Get_Region_Data in Data Services, I get an error that the Region_short.txt file was not found.

If anyone has any suggestions along these lines it would be most helpful.

 

                Thanks and Regards,

                 Lakshmi.

    

Can Data Services Connect to SAP Business One to Import Data?


There don't appear to be any native capabilities to import data into Business One using Data Services. However, it does appear that the Data Transfer Workbench (DTW) allows import via generic ODBC.

 

So my question is: would connecting Data Services via ODBC to DTW work for importing data into Business One? Is there a better approach?

 

Regards,

Chuck

Real-time jobs in Management Console toggle their state and can't start their services


Hello Everyone,

 

I have configured real-time jobs as services in the Management Console (for DQM), but all of the configured jobs keep changing their state between started, starting and error. I have run all the jobs in Data Services and all run successfully. Can anyone please guide me on what I should do to make these jobs stop toggling and start their services successfully?

 

Thanks,

Shubhangi.

MAP_CDC insert issue/SCD2 implementation


Hi ,

 

I am using the MAP_CDC transform for the first time. After MAP_CDC I used a target table and loaded the data. It captures only the changes from the source table, assigns opcodes and loads the data into the target table.

 

When I capture the inserts the first time, they are inserted correctly, but when I execute the job a second time the same insert is attempted again because of the MAP_CDC opcode, and I get a primary key violation error.

 

After MAP_CDC I tried to use a Table_Comparison transform, but it gives a warning that the MAP_CDC output consists of delete, update and insert opcodes, whereas Table_Comparison needs NORMAL opcodes as input. How can I handle this issue with the inserts on the second execution?

 

If I want to build SCD Type 2, how do I handle the MAP_CDC opcodes together with Table_Comparison?

 

Please help me resolve this issue.

 

Thanks & Regards,

Ramana.


Datetime2 is recognized as VARCHAR(27)


Hi Experts,

 

I am using SAP Data Services 4.1 SP1 with a SQL Server 2008 backend. The problem is that I have a table with a datetime2 column in the SQL Server backend, but this datetime2 is recognized as varchar(27) in Data Services. I have gone through the DS reference guides, and they say that SQL Server datetime and datetime2 should map to datetime in DS. I need help urgently.

 

Can anyone help me overcome this problem? Why is datetime2 recognized as varchar(27)? Is this the expected DS behaviour?

 

Regards,

Deepa.

Return long datatype in real-time svcs


The Data Services job we are designing is required to run as a web service, and we do not know how big the input text is going to be, so we are using a CLOB (long datatype) in a simple XML structure. This is accepted by the real-time service, gets processed, and the result is returned in the same simple XML format as a CLOB.

 

The accepting part works fine, but the returning of results is not working as expected: Data Services returns the result in a separate file. Is there a way the long datatype can be returned inline in the XML message result?

 

Thanks

 

Vijay Ragavan

Error while installing SAP BODS 3.2


Hi Guys,

 

I am getting an error while installing SAP BODS 3.2, when creating a local repository. The error is "unknown error occurred while testing the database connection". I am using an Oracle database and the database credentials are correct, but I don't understand why I am getting this type of error. I have installed BODS 3.2 several times before and never got this error. I hope you guys can provide a solution to this problem.

 

 

 

Thanks in advance.

HTTP access to a real-time job web service


Hi,

 

Does anyone know how a web service created with a real-time Data Services job can be accessed via HTTP POST? According to the documentation (sbo411_ds_integrate_en.pdf), the web service can be accessed via HTTP by using the URL: http://ds_host_name:access_server_port/admin/servlet/com.acta.adapter.http.server.HTTPActaServlet?ServiceName=ServiceName

 

But it doesn't seem to work (accessing the URL just takes forever, and I can't find any file corresponding to the servlet name in the BODS installation path). Are there any specific steps needed to enable this feature?

 

The documentation discusses ways to access other services using the HTTP adapter, but I couldn't find information on how a BODS web service itself can be accessed via HTTP.

 

Thanks in advance for answering....

 

Vijay

SAP BusinessObjects Data Services XI 3.2 SP2 - End of Life


Hello!,

 

I would like to know if SAP BusinessObjects Data Services XI 3.2 SP2 (version 12.2.2.0, http://help.sap.com/bods32sp2) follows the same lifecycle policy as SAP BusinessObjects Data Services XI 3.2.

 

If SAP has a support policy for Service Packs, could you please point me to any web link/source that explains the lifecycle of the product?

 

Regards,

Sampath
