Slow MySQL execution over Network under Java

If an application and its database are developed and tested on a single machine, the design of queries and code may suffer, because access is instant and there is virtually no transfer time between database and application. Once application and database are deployed on different machines, however, such design flaws can lead to horrible processing times. Several things can cause problems in that setup. One is transferring many records from or to the database. Another is executing hundreds of independent SQL statements in a very short time, where the application has to wait for the database's response after every single statement.
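To see why the second problem hurts, a back-of-the-envelope model helps: each statement pays the full network round trip when sent individually, but only once when sent as a batch. The latency figures below are hypothetical examples, not measurements.

```java
// Back-of-the-envelope model of network round-trip overhead. The numbers
// are hypothetical: ~2 ms latency per round trip between separate
// machines, ~0.1 ms execution time per statement.
public class RoundTripCost {

    /** Estimated total time in ms for n statements sent one by one. */
    static double oneByOne(int n, double latencyMs, double execMs) {
        // every statement pays the full round-trip latency
        return n * (latencyMs + execMs);
    }

    /** Estimated total time in ms when all statements go in one batch. */
    static double batched(int n, double latencyMs, double execMs) {
        // a single round trip; execution cost stays the same
        return latencyMs + n * execMs;
    }

    public static void main(String[] args) {
        int n = 500; // e.g. 500 independent INSERTs
        double latency = 2.0, exec = 0.1;
        System.out.printf("one by one: %.0f ms%n", oneByOne(n, latency, exec));
        System.out.printf("batched:    %.0f ms%n", batched(n, latency, exec));
    }
}
```

In JDBC, the batched variant corresponds to collecting statements with `Statement.addBatch()` and sending them in one go with `executeBatch()` instead of calling `executeUpdate()` in a loop.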

Make Databases accessible for the Semantic Web - Solution Proposal

First, it is important to determine how much of the relational structure needs to be preserved.

A first approach could be:

  • Find a URI for every row of every table (called <tuple> from now on).
  • For every column of every row, create an RDF triple:
    <tuple> <columnname> <value> .
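The mapping above can be sketched in a few lines of Java. The class, method, and example URIs are made up for illustration; in practice the column values would come from a JDBC ResultSet rather than a Map.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Naive row-to-RDF mapping: one triple per column of a row,
// emitted in N-Triples syntax. URIs are hypothetical examples.
public class RowToRdf {

    /** Emits one N-Triples line per column for the given row URI. */
    static String triples(String tupleUri, Map<String, String> row) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> col : row.entrySet()) {
            sb.append('<').append(tupleUri).append("> ")
              .append("<http://example.org/col/").append(col.getKey()).append("> ")
              .append('"').append(col.getValue()).append("\" .\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("name", "Alice");
        row.put("city", "Freiburg");
        System.out.print(triples("http://example.org/person/1", row));
    }
}
```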

But if relations between tables also have to be "transported" to RDF, this concept has to be implemented in the parser. A first approach would be to point to other entities with RDF's built-in vocabulary, using rdf:Bag for 1:n and n:m relations.
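The rdf:Bag idea can be sketched for a 1:n relation (say, one customer with n orders). The subject, predicate, and member URIs are hypothetical; only the rdf:Bag type and the container membership properties rdf:_1, rdf:_2, … come from the RDF vocabulary.

```java
// Sketch: expressing a 1:n relation as an rdf:Bag in N-Triples.
// All example.org URIs are made-up placeholders.
public class RelationToBag {

    static final String RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#";

    static String bag(String subjectUri, String predicateUri, String... memberUris) {
        String bagNode = "_:bag1"; // blank node for the container
        StringBuilder sb = new StringBuilder();
        // link the subject to the container
        sb.append('<').append(subjectUri).append("> <").append(predicateUri)
          .append("> ").append(bagNode).append(" .\n");
        // type the container as rdf:Bag
        sb.append(bagNode).append(" <").append(RDF).append("type> <")
          .append(RDF).append("Bag> .\n");
        // container membership properties rdf:_1, rdf:_2, ...
        for (int i = 0; i < memberUris.length; i++) {
            sb.append(bagNode).append(" <").append(RDF).append('_').append(i + 1)
              .append("> <").append(memberUris[i]).append("> .\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(bag("http://example.org/customer/7",
                "http://example.org/rel/hasOrder",
                "http://example.org/order/1", "http://example.org/order/2"));
    }
}
```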

Another thing not covered by this schema are constraints such as unique columns. To implement them, a new vocabulary has to be defined.

For more information see G. Lausen, M. Meier, M. Schmidt: "SPARQLing Constraints for RDF", in Proceedings of the 11th International Conference on Extending Database Technology (EDBT), 2008, pp. 499-509.

Make Databases accessible for the Semantic Web

Besides (re)creating data manually, information that already exists can be generated or extracted automatically. One specific source are databases that already contain huge amounts of highly structured data, for example content management systems or the so-called deep web. But how can we make this data part of the Semantic Web?

Searching Entities with Hibernate Entity Manager

Hibernate is used to handle database operations in Java applications; it makes mapping between Java objects and the database a lot easier. It is also possible to use the Java Persistence API (JPA) to store entities in databases. However, if you search for entities that you want to use in your application, the path to those entities must not contain whitespace. Otherwise you get a "File not found" exception, even though the file actually exists.
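A plausible mechanism behind this (an assumption, not confirmed by the post): a space in the path is URL-encoded as "%20" in the classpath URL, and if the encoded form is then used as a literal file name, no such file exists. The paths below are made-up examples.

```java
import java.net.URI;
import java.net.URLDecoder;

// Illustrates how a space in a path becomes %20 in a file URL, and how
// decoding recovers the real path. The path is a hypothetical example.
public class WhitespacePath {

    /** The path as it appears in the URL: spaces encoded as %20. */
    static String rawPath(String path) throws Exception {
        // the multi-argument URI constructor percent-encodes the space
        return new URI("file", null, path, null).toURL().getFile();
    }

    /** Decoding turns %20 back into a real space. */
    static String decoded(String rawPath) throws Exception {
        return URLDecoder.decode(rawPath, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        String raw = rawPath("/opt/my app/entities.jar");
        System.out.println(raw);          // encoded form, not a real file name
        System.out.println(decoded(raw)); // the actual path on disk
    }
}
```

The simplest workaround, as the post suggests, is to keep whitespace out of the deployment path altogether.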

Cheap Yet Secure Daily Backups For Database And Files

I need to make backups of a web application's MySQL database and some files for a client. Backups should run daily and the whole system should be as cheap as possible - "ok, backups are important, but I don't want to spend any money on them anyway...". The backup should be fully automated. Furthermore, installing an FTP server on one of the client's local machines (to upload the backups there) is not wanted due to security concerns. In general, nothing about the existing setup should be changed if possible. Finally, security is a major concern - the backups' content is sensitive, so making the data publicly available would be nearly as bad as losing it.

PostgreSQL Database Backup Takes Too Long

While I was giving a training course on PostgreSQL, the problem came up that a pg_dump of the customer's database would take too long. The database contains almost nothing but BLOBs (PDF files). Currently that is only 500 MB, but there are 45 GB of archived files (from the last 10 years or so) and new ones will be added daily, so at least 50 GB (older files could be removed) will end up in the database. A test dump with pg_dump takes approximately 3 minutes for 500 MB on the given system, which means 50 GB will take somewhere around 5 hours. That is definitely too long: the backup has to run every night, and additional file system backups (with IBM Tivoli) need to be performed as well.
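The 5-hour figure follows from simple linear extrapolation of the measured sample, assuming dump time scales with data size:

```java
// Linear extrapolation of the dump time from the figures in the text:
// 500 MB take ~3 minutes, so 50 GB (about 100x the data) take ~300+ minutes.
public class DumpEstimate {

    static double estimateMinutes(double sampleMb, double sampleMinutes, double targetMb) {
        return targetMb / sampleMb * sampleMinutes;
    }

    public static void main(String[] args) {
        double minutes = estimateMinutes(500, 3, 50 * 1024);
        System.out.printf("~%.0f minutes (~%.1f hours)%n", minutes, minutes / 60);
    }
}
```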

