database

Connect remotely located Java application and database using JBoss, EJB

The Java application that serves as the client-side software should be implemented in the form of JavaBeans that are serializable. The server-side software should be built on EJB (Enterprise JavaBeans) and use the JBoss application server to communicate with the database and with the client-side software components.

What is serialization: In computer science, in the context of data storage and transmission, serialization is the process of converting an object into a sequence of bits so that it can be persisted on a storage medium (such as a file or a memory buffer) or transmitted across a network connection, to be "resurrected" later in the same or another computer environment. When the resulting series of bits is reread according to the serialization format, it can be used to create a semantically identical clone of the original object. For many complex objects, such as those that make extensive use of references, this process is not straightforward.

What is JBoss: JBoss Application Server (JBoss AS) is a free, open-source Java EE-based application server. Because it is Java-based, JBoss AS operates cross-platform and is usable on any operating system that Java supports. It was developed by JBoss, now a division of Red Hat.
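A minimal sketch of what this contract could look like, assuming a hypothetical Book bean and BookSearch remote interface (neither name is given in the original task):

    // Book.java - client-side value object: a plain JavaBean that implements
    // Serializable so instances can be marshalled over the network to/from the EJB.
    import java.io.Serializable;

    public class Book implements Serializable {
        private static final long serialVersionUID = 1L;

        private String title;
        private String isbn;

        public Book() {}  // no-arg constructor, as the JavaBean convention requires

        public String getTitle() { return title; }
        public void setTitle(String title) { this.title = title; }

        public String getIsbn() { return isbn; }
        public void setIsbn(String isbn) { this.isbn = isbn; }
    }

    // BookSearch.java - server-side EJB 3 business interface, deployed on JBoss;
    // the client looks it up via JNDI and receives serialized Book instances.
    import java.util.List;
    import javax.ejb.Remote;

    @Remote
    public interface BookSearch {
        List<Book> findByTitle(String titlePart);
    }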

Taggings:

Speeding up project using Zend_Db

An existing project realized with the Zend Framework should be optimized in terms of performance. The use of Zend_Db slows down database access because of its automatic table scanning and general overhead. The SQL queries as well as the table structures are already optimized and should not be changed. The server environment is configured as follows:

  • PHP 5.2.6
  • MySQL Server 5.0.75
  • Zend Framework 1.5
  • Apache 2.6.28-16
  • Suhosin-Patch 0.9.6.2

An update to a newer version of the Zend Framework is currently not possible because of migration problems. The changes in the code should be measurable and viewable with Xdebug/KCachegrind.

Connect remotely located Java applications and database

A library has management software implemented in Java. This software is going to be used as a server application located in Vienna. A database containing all book information of the same library is located in London. Find an appropriate technology by which a client Java application located in Paris can search the library's book information remotely.
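One candidate technology is the EJB/JBoss approach from the tagging above. The sketch below shows how the Paris client could obtain a remote proxy via JNDI and then invoke it like a local object; the host name, port, and JNDI name are illustrative and follow the legacy JBoss AS (JNP) conventions, so they must be adapted to the concrete server version:

    import java.util.Properties;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class BookClient {
        public static void main(String[] args) throws Exception {
            Properties env = new Properties();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
            env.put(Context.PROVIDER_URL, "jnp://vienna.example.org:1099"); // illustrative host/port
            Context ctx = new InitialContext(env);

            // "BookSearch" is the hypothetical remote interface sketched earlier.
            BookSearch search = (BookSearch) ctx.lookup("BookSearchBean/remote");
            for (Book b : search.findByTitle("Databases")) {
                System.out.println(b.getIsbn() + "  " + b.getTitle());
            }
        }
    }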

BitTorrent tracker optimization

A BitTorrent tracker is server software based on the BitTorrent protocol. Download clients first have to contact the tracker to obtain other peers that have already begun downloading the same file the clients want to get. During the download, the tracker randomly and periodically provides clients with newer peers that have joined the same download group. Optimize the tracker using GeoTool so that it can provide clients with the nearest peers based on geographical coordinates and thereby reduce the total download time.
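The selection step could look like the sketch below, assuming the peers' coordinates have already been resolved (for example via GeoTool or another GeoIP lookup, whose API is not shown here): instead of returning a random subset, the tracker sorts the swarm by great-circle (haversine) distance from the requesting client and returns the closest entries.

    import java.util.Comparator;
    import java.util.List;
    import java.util.stream.Collectors;

    public class NearestPeers {

        // A peer together with its resolved coordinates in degrees.
        public static class Peer {
            final String address;
            final double lat, lon;
            Peer(String address, double lat, double lon) {
                this.address = address; this.lat = lat; this.lon = lon;
            }
        }

        // Great-circle distance in kilometres between two lat/lon points (haversine formula).
        static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 6371.0 * 2.0 * Math.atan2(Math.sqrt(a), Math.sqrt(1.0 - a));
        }

        // Return the n peers of the swarm that are closest to the requesting client.
        static List<Peer> nearest(List<Peer> swarm, double clientLat, double clientLon, int n) {
            return swarm.stream()
                    .sorted(Comparator.comparingDouble(
                            p -> distanceKm(clientLat, clientLon, p.lat, p.lon)))
                    .limit(n)
                    .collect(Collectors.toList());
        }
    }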

Slow MySQL execution over Network under Java

If an application and the database it connects to are developed and tested on a single machine, code optimization and query design may suffer, because access is instant and there is almost no transfer time between database and application. However, if you put the application and the database on different machines, such design flaws can lead to horrible processing times. Several patterns can cause problems in that case. One is transferring many records from or to the database. Another is executing hundreds of single, independent SQL statements in a very short time, where the application always has to wait for the database's response.
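One common remedy for the second pattern is to batch the independent statements so they travel to the server in far fewer round trips, as in the sketch below. Table, column, and credential values are made up for illustration; with MySQL Connector/J, adding rewriteBatchedStatements=true to the JDBC URL additionally lets the driver rewrite the batch into multi-row INSERTs.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsert {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mysql://dbhost/test?rewriteBatchedStatements=true"; // illustrative host/db
            try (Connection con = DriverManager.getConnection(url, "user", "secret")) {
                con.setAutoCommit(false); // one commit for the whole batch, not one per statement

                String sql = "INSERT INTO measurements (sensor_id, value) VALUES (?, ?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (int i = 0; i < 1000; i++) {
                        ps.setInt(1, i % 10);
                        ps.setDouble(2, Math.random());
                        ps.addBatch();       // queue locally instead of a network round trip each
                    }
                    ps.executeBatch();       // send all queued statements at once
                }
                con.commit();
            }
        }
    }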

Make Databases accessible for the Semantic Web - Solution Proposal

First, it is very important to find out how much structure needs to be maintained.

A first approach could be:

  • Find a URI for every row of every table (the row will be called <tuple> from now on).
  • For every column and for every row, create an RDF triple (see the sketch after this list):
    <tuple> <columnname> <value> .
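A minimal sketch of this row-to-triples mapping, assuming Apache Jena on the RDF side and JDBC on the database side; the base URI, connection string, and table name are placeholders:

    import java.sql.*;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Resource;

    public class TableToRdf {
        public static void main(String[] args) throws Exception {
            String base = "http://example.org/db/";  // placeholder base URI
            Model model = ModelFactory.createDefaultModel();

            try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost/library", "user", "secret");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT * FROM book")) {  // illustrative table

                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    // One URI per row: the <tuple>, built here from table name and first column.
                    Resource tuple = model.createResource(base + "book/" + rs.getString(1));
                    // One triple per column: <tuple> <columnname> <value> .
                    for (int c = 1; c <= md.getColumnCount(); c++) {
                        String value = rs.getString(c);
                        if (value != null) {  // skip SQL NULLs, which have no literal form
                            tuple.addProperty(
                                model.createProperty(base + "column/", md.getColumnName(c)),
                                value);
                        }
                    }
                }
            }
            model.write(System.out, "TURTLE");
        }
    }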

But if relations between tables also have to be "transported" to RDF, this concept has to be implemented in the parser as well. A first approach would be to point to the other entities with RDF's built-in predicates, using rdf:Bag for 1:n and n:m relations.

Another concept that is not considered in this schema is constraints such as unique columns. To implement these, a new vocabulary has to be defined.

For more information see G. Lausen, M. Meier, M. Schmidt: "SPARQLing Constraints for RDF", in Proceedings of the 11th International Conference on Extending Database Technology (EDBT), 2008, pp. 499-509.

Make Databases accessible for the Semantic Web

Besides (re)creating data manually, automatically generating or extracting information that already exists is a possibility. One specific source is databases that already contain huge amounts of highly structured data (for example, content management systems or the so-called deep web). But how can we make this data part of the Semantic Web?

Searching Entities with Hibernate Entity Manager

Hibernate is used to handle database operations in Java applications easily. It makes mapping between Java objects and the database a lot easier. It is also possible to use the Java Persistence API (JPA) to store entities in databases. However, if you search for entities that you want to use in your application, the path to those entities should not contain whitespace. Otherwise you will get a "File not Found" exception (even though that message is not actually correct).
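A minimal sketch of such an entity search via the JPA EntityManager; the persistence-unit name ("libraryPU") and the Book entity class are illustrative, not part of the original problem:

    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class BookLookup {
        public static void main(String[] args) {
            // "libraryPU" is a placeholder persistence-unit name from persistence.xml.
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("libraryPU");
            EntityManager em = emf.createEntityManager();
            try {
                // JPQL query against a hypothetical mapped entity class Book.
                List<Book> books = em
                    .createQuery("SELECT b FROM Book b WHERE b.title LIKE :t", Book.class)
                    .setParameter("t", "%Database%")
                    .getResultList();
                for (Book b : books) {
                    System.out.println(b.getTitle());
                }
            } finally {
                em.close();
                emf.close();
            }
        }
    }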

Cheap Yet Secure Daily Backups For Database And Files

I need to make backups of a web application's MySQL database and some files for a client. Backups should be done on a daily basis, and the whole system should be as cheap as possible - "ok, backups are important, but I don't want to spend any money on it anyway...". The backup should be fully automated. Furthermore, installing an FTP server on one of the client's local machines (to upload the backups there) is not wanted, due to security concerns. In general, nothing in the existing setup should be changed, if possible. Finally, security is a major concern: the backups' content is sensitive, so making the data publicly available is nearly as bad as losing it.

PostgreSQL Database Backup Takes Too Long

While giving a training course on PostgreSQL, the problem arose that a pg_dump of their database would take too long. The database contains more or less nothing but BLOBs (PDF files). Currently that is only 500 MB, but there are 45 GB of archived files (from the last 10 years or so), and new ones will be added on a daily basis, so at least 50 GB (older files could be removed) will be stored in the database. A test dump with pg_dump takes approximately 3 minutes for 500 MB on the given system, which extrapolates to around 5 hours for 50 GB. That is definitely too long, as the backup will need to run every night, and additional file system backups (with IBM Tivoli) need to be performed as well.
