Store and share files across the internet

In modern life, people switch between devices many times a day, yet they often work with the same files. When transferring files via hard disks, USB sticks, other external storage, or e-mail, you can easily end up with many different fragmented versions. Nowadays, however, there are services that solve this problem by storing your data in the cloud. This way, it is available on every device, wherever you are, as long as you are connected to the internet. <strong>Challenge:</strong>
<ul>
<li>Find an online hosting service for your files.</li>
<li>Find out how to upload your files and upload at least 3.</li>
<li>Change one of your files and save it. Now look into the version history and try to undo your change by restoring the older version.</li>
<li>Is there an integration of the hosting service into your operating system? How can you make use of it?</li>
<li>Now try to find your uploaded files on a different device (e.g. a smartphone).</li>
<li>Share one of the files with a friend of yours.</li>
</ul>

Easy-to-use backup tool for Linux

Create an incremental backup of a Debian Linux server with an easy-to-use tool. The backup server is reachable via FTP and SFTP. The backup should be encrypted and compressed to be space- and bandwidth-efficient. There should be an easy way to restore either parts of the backup or the full system after a failure.
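One way the requirements above can be met, sketched with duplicity, which produces incremental, GPG-encrypted, compressed backups over SFTP or FTP. The hostname, GPG key ID, and paths below are placeholders, not part of the original task.

```shell
# Sketch, assuming duplicity is installed and backup.example.com /
# key ID ABCD1234 are replaced with real values.

# Incremental backup of /etc; a fresh full backup is forced once a month.
# duplicity encrypts with GPG and compresses by default.
duplicity --full-if-older-than 1M \
    --encrypt-key ABCD1234 \
    /etc sftp://backupuser@backup.example.com/backups/etc

# Restore a single file from the latest backup:
duplicity --encrypt-key ABCD1234 \
    --file-to-restore apache2/apache2.conf \
    sftp://backupuser@backup.example.com/backups/etc /tmp/apache2.conf
```

Running the first command from a daily cron job gives incremental backups automatically; restoring the full system is the same restore invocation without `--file-to-restore`.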

Use FlickrFaves to back up your flickr favorites

Despite the large number of tools built on the flickr photo sharing site, the only convenient tool I found for downloading all your flickr favorites is FlickrFaves (see attached screenshot). It is a small tool written in Java that uses flickrj (a Java wrapper library for the REST-based flickr API) for the single purpose of downloading someone's flickr favorites. Furthermore, it is GPL-licensed open source software and provides an interesting entry point into the world of the flickr API.

Its use is completely straightforward: 1) Start FlickrFaves. 2) Authorize the program with flickr. 3) Set the parameters. 4) Download your favorites.

Please note that FlickrFaves does not preserve any metadata. The downloaded photos will have cryptic filenames and won't carry the tags they were assigned on flickr.

wget & curl

Use wget or curl from the command line (almost every Linux distro and the like ships them).

wget -mk https://example.com/

or use

curl -O https://example.com/page.html

download whole website

A small company has a pretty old website. They outsourced the administration of the site and only have access to the content via a web interface; no database, FTP, or other login data is available anymore, because the company to which the administration was outsourced no longer exists and the login data was lost. A new website will be put in place, and they want the old website to remain available in a subdomain; static HTML pages are enough for this purpose. The goal is to download the whole website incl. all graphics, documents etc., without following external links to other sites, for backup and archival reasons. Scriptable solutions are preferred (no GUI apps).
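A scriptable approach along these lines can be sketched with wget alone; the URL and domain below are placeholders for the company's site.

```shell
# Mirror one site as static HTML, without leaving its domain (a sketch;
# replace example.com with the real site).
#   --mirror           recursion + timestamping
#   --convert-links    rewrite links so the copy browses locally
#   --page-requisites  also fetch images, CSS, JS needed to render pages
#   --adjust-extension save pages with an .html extension
#   --no-parent        don't ascend above the start directory
#   --domains          never follow links to external sites
wget --mirror \
     --convert-links \
     --page-requisites \
     --adjust-extension \
     --no-parent \
     --domains example.com \
     https://example.com/
```

The resulting directory tree can then be served as-is from the subdomain.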

Cheap Yet Secure Daily Backups For Database And Files

I need to make backups of a web application's MySQL database and some files for a client. Backups should be done daily, and the whole system should be as cheap as possible - "ok, backups are important, but I don't want to spend any money on them anyway...". The backup should be fully automated. Furthermore, installing an FTP server on one of the client's local machines (to upload the backups there) is not wanted, due to security concerns. In general, nothing in the existing setup should be changed, if possible. Finally, security is a major concern: the backups' content is sensitive, so making the data publicly available would be nearly as bad as losing it.
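A minimal sketch of such a setup: dump and archive on the web server, encrypt with the client's public GPG key before anything leaves the machine, then push over SSH so no FTP server is needed on the client side. Database name, file paths, GPG recipient, and target host are all placeholders.

```shell
#!/bin/sh
# Daily backup sketch, intended to run from cron on the web server.
# Assumptions: MySQL credentials live in ~/.my.cnf, and webapp_db,
# /var/www/webapp/uploads, client@example.com and offsite.example.com
# are placeholders to replace.

DATE=$(date +%Y-%m-%d)

# Dump the database and archive the files, compressed.
mysqldump webapp_db | gzip > "/tmp/db-$DATE.sql.gz"
tar czf "/tmp/files-$DATE.tar.gz" /var/www/webapp/uploads

# Encrypt with the client's public key; the plaintext never leaves
# the server, so even cheap untrusted storage is acceptable.
gpg --batch --yes --recipient client@example.com \
    --encrypt "/tmp/db-$DATE.sql.gz"
gpg --batch --yes --recipient client@example.com \
    --encrypt "/tmp/files-$DATE.tar.gz"

# Push (rather than pull) the encrypted archives offsite over SSH.
scp "/tmp/db-$DATE.sql.gz.gpg" "/tmp/files-$DATE.tar.gz.gpg" \
    backupuser@offsite.example.com:backups/

# Clean up local temporaries.
rm -f "/tmp/db-$DATE.sql.gz" "/tmp/db-$DATE.sql.gz.gpg" \
      "/tmp/files-$DATE.tar.gz" "/tmp/files-$DATE.tar.gz.gpg"
```

Since only public-key encryption happens on the server, a compromise of the server or the storage does not expose old backups; the private key stays with the client.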

PostgreSQL Database Backup Takes Too Long

While giving a training course on PostgreSQL, the problem arose that a pg_dump of their database would take too long. The database contains more or less nothing but BLOBs (PDF files). Currently it holds only 500 MB, but there are 45 GB of archived files (from the last 10 years or so), and new ones will be added on a daily basis. So at least 50 GB (older files could be removed) will be stored in the database. A test dump with pg_dump takes approximately 3 minutes for 500 MB (on the given system), which means 50 GB will take around 5 hours. That is definitely too long, as the backup will need to run every night, and additional file system backups (with IBM Tivoli) need to be performed as well.
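The extrapolation in the text can be sanity-checked with shell arithmetic, using only the figures given above:

```shell
# Rough check of the numbers in the text: 3 minutes per 500 MB,
# scaled up to 50 GB (integer arithmetic, so slightly rounded down).
chunks=$(( 50 * 1024 / 500 ))   # 500 MB chunks in 50 GB -> 102
minutes=$(( chunks * 3 ))       # -> 306 minutes
echo "$minutes minutes, about $(( minutes / 60 )) hours"
```

306 minutes is just over 5 hours, so the stated estimate is consistent.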
