Hi,

While trying to debug some performance issues, we narrowed the problem down to calls to Entity Beans taking a long time. The original design is nice and generic, but it does not scale in this case.
Here are the details:
- J2EE application.
- Oracle DB
- Entity Beans used for DB access.

While creating 50000 instances of a particular entity in the DB, performance is terrible because of the 50000 calls to bean.create().
Could someone point me to some solutions? Here are a few we thought of (a rough sketch of the CSV option follows the list):
- Create SQL in a text file and then execute it.
- Create a csv file and import it.
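
For illustration, a minimal sketch of the CSV option in Java; the class and column layout are made up, and the resulting file would then be imported with a database tool such as Oracle's SQL*Loader:

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.List;

    public class CsvExporter {

        // Writes one CSV line per record; the resulting file can then be
        // bulk-loaded (e.g. with Oracle's SQL*Loader) instead of issuing
        // 50000 bean.create() calls.
        public static void writeCsv(String fileName, List<String[]> rows) throws IOException {
            BufferedWriter out = new BufferedWriter(new FileWriter(fileName));
            try {
                for (String[] row : rows) {
                    StringBuilder line = new StringBuilder();
                    for (int i = 0; i < row.length; i++) {
                        if (i > 0) line.append(',');
                        line.append(row[i]); // naive: assumes no commas/quotes in values
                    }
                    out.write(line.toString());
                    out.newLine();
                }
            } finally {
                out.close();
            }
        }
    }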

thanks..

If all you want to do is import/insert some bulk data into your database, you are better off using a specialized/optimized approach rather than plain old inserts wrapped in managed bean method invocations. Given that you are OK with solutions like using a CSV file, I'd assume that using Java is not a requirement here. In that case, you can follow the tips mentioned here and see if that works out for you.

Use of Java is mandatory, but given the amount of data to be loaded, we can switch to something else (like a call from Java code to do a CSV load) during the flow.
I will check out what you suggested; it looks interesting.

One other question if you know:
Does Hibernate help in improving "insert" performance, or is it only for read/update?

Thanks.

> Does Hibernate help in improving "insert" performance, or is it only for
> read/update?

Any persistence framework you use, be it iBATIS or Hibernate, would only impact your performance when it comes to bulk inserts. Still, if you are stuck with using Hibernate, I'd recommend checking out the Hibernate mailing list for more suggestions.
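
For what it's worth, here's a minimal sketch of the batching pattern described in the Hibernate reference documentation, assuming a hypothetical mapped Customer entity, hibernate.jdbc.batch_size set to 50 in the configuration, and Session/Transaction being the org.hibernate types:

    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    for (int i = 0; i < 50000; i++) {
        session.save(new Customer("name" + i)); // hypothetical mapped entity
        if (i % 50 == 0) {       // same interval as the configured batch size
            session.flush();     // push the queued inserts to the database
            session.clear();     // detach entities so the session cache stays small
        }
    }
    tx.commit();
    session.close();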

Also make sure you turn off auto-commit if you're doing this from Java; a commit after each insert would end up killing performance here.
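
To make that concrete, a rough sketch using plain JDBC batching with auto-commit turned off; the table, column names, and batch interval are just placeholders:

    // conn is an open java.sql.Connection
    conn.setAutoCommit(false);                   // one commit at the end, not 50000
    PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO CUSTOMER (ID, NAME) VALUES (?, ?)");
    try {
        for (int i = 0; i < 50000; i++) {
            ps.setInt(1, i);
            ps.setString(2, "name" + i);
            ps.addBatch();                       // queue the insert
            if (i % 1000 == 0) {
                ps.executeBatch();               // ship a chunk to the database
            }
        }
        ps.executeBatch();                       // flush whatever is left
        conn.commit();
    } finally {
        ps.close();
    }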

If things still don't work out after trying the various options, explain the problem space in depth along with the methods you've tried, and together we might be able to come up with something.

>> Any persistence framework you use, be it iBATIS or Hibernate, would only impact your performance when it comes to bulk inserts.
I suppose the impact is positive. Could you confirm?

About the commits: AFAIK, given the way the code is currently written, switching auto-commit off is not an option. But we will look into it.

Again, the link you provided is quite interesting; we'll see if something from there can be used. Will post back with updates.

> I suppose the impact is positive. Could you confirm?

Not sure what you mean by positive here; given that yet another layer of abstraction is being used, it would only end up hurting performance. When it comes to plain and simple inserts without any business logic, nothing beats the bulk load capabilities offered by your database.
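
If Java has to stay in the picture, the database's own bulk loader can still be driven from it. Here's a rough sketch that invokes Oracle's sqlldr as an external process; the userid string and control file are placeholders, and the control file describing the CSV layout is assumed to exist:

    // Uses java.io.BufferedReader and java.io.InputStreamReader;
    // assumes sqlldr is on the PATH.
    ProcessBuilder pb = new ProcessBuilder(
            "sqlldr", "userid=scott/tiger@orcl", "control=bulk.ctl", "log=bulk.log");
    pb.redirectErrorStream(true);                // merge stdout and stderr
    Process p = pb.start();
    BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
    String line;
    while ((line = in.readLine()) != null) {
        System.out.println(line);                // echo the loader's output
    }
    int exitCode = p.waitFor();                  // 0 means the load succeeded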

I remember reading once that a piece of functionality which took around 10-15 minutes in pure Java took only 10 seconds when executed as a stored procedure. :-)

To use Java:
1) create a stored procedure in the database that schedules a bulk update batch and takes the data to be used as a parameter
2) use Java to call that stored procedure.

That's how we do it here to drive bulk inserts through uploads of data files from a web application; it works a charm.
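
For the record, a rough sketch of step 2 using JDBC's CallableStatement, assuming a hypothetical BULK_LOAD procedure that takes a handle to the staged data as its parameter:

    // conn is an open java.sql.Connection; BULK_LOAD and its parameter are
    // hypothetical, so adapt to whatever the procedure actually expects.
    CallableStatement cs = conn.prepareCall("{call BULK_LOAD(?)}");
    try {
        cs.setString(1, "customers.csv");        // e.g. the staged data file to load
        cs.execute();                            // the heavy lifting happens in the DB
    } finally {
        cs.close();
    }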
