I have a pretty much straightforward Dataflow, in which I am just trying to do a group-by on a source table and load the result into the target.
The source table has around 200 million records in total.
When I run this job, I see the following in the trace log, but it is not fetching any records. The job completes successfully with 0 rows.
" Cache statistics determined that data flow <DF_XXXXXXX> uses <0> caches with a total size of <0> bytes. This is less than(or equal to) the virtual memory <1609564160> bytes available for caches. Statistics is switching the cache type to IN MEMORY. "
When I run the SQL generated by the Data Flow directly on the database, I am able to see the data.
Did anyone come across this before?
Any thoughts / comments would be helpful.
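For reference, a dataflow of this shape that gets fully pushed down boils down to a single INSERT ... SELECT ... GROUP BY statement. A minimal sketch using Python with SQLite as a stand-in database (table and column names are invented, just to show the shape):

```python
import sqlite3

# In-memory database standing in for source and target tables.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE src (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO src VALUES (?, ?)",
                [("EU", 10), ("EU", 5), ("US", 7)])
con.execute("CREATE TABLE tgt (region TEXT, total INTEGER)")

# The kind of single statement a full push-down generates:
# aggregate on the source and load straight into the target.
con.execute("""
    INSERT INTO tgt (region, total)
    SELECT region, SUM(amount)
    FROM src
    GROUP BY region
""")
print(con.execute("SELECT region, total FROM tgt ORDER BY region").fetchall())
# [('EU', 15), ('US', 7)]
```

With a full push-down like this, the engine never pulls rows into its own cache, which is consistent with the trace message reporting 0 caches and 0 bytes.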
Just a few questions:
1) Do you see the magnifying glass on the source, and can you see the records when you click on it?
2) You said it is a simple dataflow... there could be a push-down.
- What happens when you try with a target in the datastore of the source (same database)? Does that work?
- In both cases, when you click on the Data Flow and choose Validation > Display Optimized SQL, what does it look like?
- When you execute that SQL directly on the database, do you get the rows you are looking for? (At least the update one?)
To give you more clarity on the issue:
1. Yes, I can see the records in the source table by clicking on the magnifying glass.
2. Yes, there is a push-down with Insert/Append.
- I did try with a target in the same datastore on the same database. It didn't work, still the same error.
- In both cases, the target is the same database, just a different schema. It looks like Insert/Append.
- Yes, I see the data directly on the database.
Do let me know if you have any more questions.
Thank you for your time.
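Since the push-down here is an Insert/Append across two schemas in the same database, both schemas end up referenced in one statement, so the session that runs it must be able to see (and have privileges on) both sides. A rough sketch of that, using SQLite attached databases as stand-in schemas (all names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("ATTACH ':memory:' AS source_s")  # stand-in for the source schema
con.execute("ATTACH ':memory:' AS target_s")  # stand-in for the target schema

con.execute("CREATE TABLE source_s.orders (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO source_s.orders VALUES (?, ?)",
                [("EU", 10), ("US", 7)])
con.execute("CREATE TABLE target_s.totals (region TEXT, total INTEGER)")

# With a cross-schema push-down, source and target appear in ONE statement;
# whichever user executes it needs access to both schemas.
con.execute("""
    INSERT INTO target_s.totals (region, total)
    SELECT region, SUM(amount)
    FROM source_s.orders
    GROUP BY region
""")
print(con.execute("SELECT * FROM target_s.totals ORDER BY region").fetchall())
# [('EU', 10), ('US', 7)]
```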
You mentioned two different datastores and a push-down. Can you execute the SQL under both schemas?
When you right-click on the Job, there should be a checkbox mentioning the use of implicit database links (or similar). Does the result of the SQL change when you uncheck that?
If I remember correctly, a push-down across different datastores results in one SQL query that has to be executed by one of the two schemas. That user will execute the SQL. What happens if the other user executes it on your system? (Or did you try the SQL with both?)
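This is exactly why running the SQL as each user matters: if the generated statement uses unqualified table names, each session resolves them against its own default schema first, so the "wrong" user can run the statement successfully against an empty table of the same name and report 0 rows with no error. A sketch of that resolution behaviour, again using SQLite attached databases as stand-in schemas (names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")          # "main" plays the executing user's own schema
con.execute("ATTACH ':memory:' AS other")  # "other" plays the schema that holds the data

# Same table name in both schemas, but only "other" has rows.
con.execute("CREATE TABLE main.src (amount INTEGER)")
con.execute("CREATE TABLE other.src (amount INTEGER)")
con.execute("INSERT INTO other.src VALUES (1), (2), (3)")

# An unqualified name resolves to the session's default schema first:
# the query succeeds but sees the empty table -> 0 rows, no error.
print(con.execute("SELECT COUNT(*) FROM src").fetchall())        # [(0,)]
print(con.execute("SELECT COUNT(*) FROM other.src").fetchall())  # [(3,)]
```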