curl

Using the Linux command-line tool "curl" to access REST endpoints

So here is one way to do it (on an Ubuntu box):

1. Open a terminal
2. Install "curl": sudo apt-get install curl
3. Find out the web addresses of the REST services you want to call and the inputs they expect
4. Issue a command like the following example (a commented version follows after this list):
curl -v -s -H "X-Requested-Auth: Digest" --digest -u your_remotesystem_account:theverygoodpassword --form-string "workflowDefinitionId=full" -F "file=@media.zip" http://webappserver.tuwien.ac.at/ingest/addZippedMediaPackage &
5. Experiment a bit until the syntax and inputs are right; check whether the state of your web application has changed, and retry if it has not
6. Be happy ;)
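
For reference, here is the same call from step 4 with a comment on each option; the host name, account, password and file name are just the placeholders used above:

# -v              show verbose request/response details
# -s              silent mode, no progress meter
# -H ...          send an extra request header announcing Digest auth
# --digest        authenticate with HTTP Digest
# -u user:pass    the credentials for that authentication
# --form-string   add a form field with a literal value
# -F "file=@..."  upload the file as a multipart form part
# the trailing & runs the (possibly long) upload in the background
curl -v -s \
     -H "X-Requested-Auth: Digest" --digest \
     -u your_remotesystem_account:theverygoodpassword \
     --form-string "workflowDefinitionId=full" \
     -F "file=@media.zip" \
     http://webappserver.tuwien.ac.at/ingest/addZippedMediaPackage &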

Command-line access to REST endpoints

There is a web application that offers an interface via REST endpoints for remote access and control. I have a Linux box as my main working environment, and I am used to the command-line way of working. How would I access the web application from the command line?
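
A minimal sketch of what such access looks like, assuming a reachable endpoint (the URLs below are only placeholders):

# Read a resource; GET is curl's default method, -s hides the progress meter
curl -s http://webappserver.example.org/api/status

# Send data with an explicit method and content type
curl -s -X POST -H "Content-Type: application/json" \
     -d '{"key":"value"}' http://webappserver.example.org/api/items

The solution above shows a complete, authenticated variant of the same idea.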

download whole website

A small company has a pretty old website. They outsourced the administration of the site and only have access to the content via a web interface; no database or FTP login data is available anymore, because the company to which the administration was outsourced no longer exists and the credentials were lost. A new website will be put in place, and they want the old website to stay available in a subdomain; static HTML pages are enough for this purpose. The goal is to download the whole website, including all graphics, documents etc., without following external links to other sites, for backup and archive purposes. Scriptable solutions are preferred (no GUI applications).
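
One possible sketch for this task, assuming GNU wget is available and the old site lives at www.example.com (a placeholder); wget, rather than curl, is used here because it supports recursive downloads out of the box:

# Mirror the site recursively, fetch the images/CSS/JS needed to render each page,
# rewrite links so the local copy works offline, give pages an .html extension,
# and stay on the original host instead of following external links
wget --mirror \
     --page-requisites \
     --convert-links \
     --adjust-extension \
     --no-parent \
     --domains=example.com \
     http://www.example.com/

The result is a static copy that can be served from a subdomain as plain HTML.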