We are seeing a strange problem in our Data Services environment, which processes Twitter feeds, performs text analysis on them using Text Data Processing, and does a number of other things, including sentiment analysis.
The problem is that the Text Data Processing transform seems to be eating a small number of our tweets without spitting anything out for them!
For example, the job processes a new batch of 50,000 tweets, but the output table only contains 48,226 unique tweet IDs. By putting template tables before and after the TDP transform, we have determined that 1,774 unique IDs go into the transform but never come out!
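In case it helps, the comparison itself is just a set difference between the IDs captured in the pre- and post-transform template tables; something along these lines (the table/column names TDP_INPUT, TDP_OUTPUT and TWEET_ID, and the connection string, are placeholders for our actual objects):

```python
# Minimal sketch: diff the tweet IDs captured in the template tables before and
# after the TDP transform. Table, column and connection details below are
# placeholders for our environment.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=SocialMedia;Trusted_Connection=yes;"
)
cur = conn.cursor()

cur.execute("SELECT DISTINCT TWEET_ID FROM dbo.TDP_INPUT")
input_ids = {row[0] for row in cur.fetchall()}

cur.execute("SELECT DISTINCT TWEET_ID FROM dbo.TDP_OUTPUT")
output_ids = {row[0] for row in cur.fetchall()}

missing = input_ids - output_ids
print(f"{len(missing)} tweet IDs went into the transform but never came out")

conn.close()
```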
Thinking that it could be caused by some rule/dictionary filtering, we reconfigured the Text Data Processing transform to use none of our custom dictionaries and none of the supplied rule files. We have also turned advanced processing OFF and set the processing time-out to -1 (i.e. no time-out at all).
Still, no matter what we try, we end up with the same 1,774 tweets missing from our output. I have looked at the tweets themselves, but I can't see anything in the text that would give any particular reason to drop them. I have also looked at the position of these records within the batch, and there is no pattern there either; it seems entirely random.
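For anyone who wants to look at the same thing, the dropped rows can be pulled with a LEFT JOIN between the two template tables; this sketch just prints each missing tweet's batch position, text length and a snippet of its text (again, names such as LOAD_SEQ and TWEET_TEXT are placeholders for our own columns):

```python
# Pull the rows that went into the transform but never came out, together with
# their position in the batch and their text, to look for anything they share.
# Table/column names are placeholders for our environment.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=SocialMedia;Trusted_Connection=yes;"
)
cur = conn.cursor()

cur.execute("""
    SELECT i.TWEET_ID, i.LOAD_SEQ, i.TWEET_TEXT
    FROM dbo.TDP_INPUT AS i
    LEFT JOIN dbo.TDP_OUTPUT AS o ON o.TWEET_ID = i.TWEET_ID
    WHERE o.TWEET_ID IS NULL
    ORDER BY i.LOAD_SEQ
""")

for tweet_id, load_seq, text in cur.fetchall():
    # Batch position, text length and the first 60 characters, so any common
    # trait (position, length, content) stands out.
    print(tweet_id, load_seq, len(text), repr(text[:60]))

conn.close()
```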
This is Data Services 4.1 SP2 running in a Windows Server and SQL Server environment.
Has anyone else ever seen this problem?