Converting HTTP to FTP URL

Jeffrey Massung massung at gmail.com
Mon Aug 8 02:37:48 EDT 2011


On 08/07/2011 09:49 PM, J. Landman Gay wrote:
> On 8/7/11 10:05 PM, Jeffrey Massung wrote:
>
>> So, the only way you can begin to solve this issue is if you have direct
>> access to the web server and know how it's routing and serving static
>> content.
>
> I'll answer you and Mark Wieder at the same time. I'm trying to add 
> auto-upload to AirLaunch. It mostly works. I can only upload via ftp 
> (http isn't usually accepted, I don't think, even if you provide a 
> name and password. At least, it doesn't work on my hosted account.) 
> There is already a place in the stack for the user to enter the http 
> URL where their installation files will be downloaded from. That URL 
> is required for setting up the wi-fi installation, so AirLaunch 
> already has that.
>
> I could ask the user for a separate FTP upload URL but I'd rather not 
> if I don't have to. What would be better is to take their HTTP URL and 
> convert it to a valid FTP URL which will only be used internally by 
> AirLaunch to get their files onto their server.

In my experience, trying to infer the FTP location from the HTTP URL 
will, down the road, lead to an unhappy customer. No one will remember 
the inference and it will break: the same http://example.com/downloads/ 
might map to ftp://example.com/public_html/downloads/ on one host and 
to ftp://example.com/htdocs/downloads/ (or something else entirely) on 
another. I completely sympathise with what you want to do and why, but 
I would advise against it.

> It sounds like I may have to ask the user to provide both paths. But 
> that makes everything more complex because the two paths are slightly 
> different; the user needs to know the distinction between web-relative 
> paths and absolute paths and what their web root folder is called, and 
> I'd rather avoid the support issues around that if I can.

If you wouldn't mind another option: use the HTTP PUT method along with 
authentication (see 
http://www.w3.org/QA/2008/10/understanding-http-put.html). This will 
still require some work from the customer to enable on the server, but 
it'll make life easier for everyone.
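
Just to sketch what the client side of a PUT might look like (this is 
Python rather than LiveCode, and the URL, file name, and credentials 
below are made up; the idea is the same from any language):

    import base64
    import urllib.request

    # Hypothetical endpoint and credentials -- substitute real values.
    url = "http://www.example.com/uploads/MyApp.zip"
    username = "airlaunch"
    password = "secret"

    with open("MyApp.zip", "rb") as f:
        payload = f.read()

    request = urllib.request.Request(url, data=payload, method="PUT")
    request.add_header("Content-Type", "application/octet-stream")

    # Standard HTTP Basic authentication header.
    token = base64.b64encode((username + ":" + password).encode()).decode()
    request.add_header("Authorization", "Basic " + token)

    with urllib.request.urlopen(request) as response:
        print(response.status, response.reason)

The server still has to be configured to accept PUT for that path, 
which is the work from the customer I mentioned above.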

HTTP PUT has an added benefit: it lets the server do whatever it wants 
with the incoming data. This is important because it means the server 
can put the file into a different location and act on it later. 
Consider the following setup:

1. The actual site is stored in a git repository somewhere.

2. The /var/www folder is actually a clone of the git repo.

3. Each "upload" being done (likely per user) is staged somewhere else 
(/var/www-staged-username), which is another clone of the repo (see the 
sketch after this list).

4. When all the uploads are completed, the staged clone is pushed back 
to the main repository.

5. A build process takes over, checks data integrity, and makes sure 
the upload won't bring down the live site.

6. Later on, the /var/www site pulls down all changes from the repo.
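
A minimal sketch of what steps 3 and 4 could look like on the receiving 
end (Python again; the directory layout, remote name, and branch are 
assumptions, not a recipe):

    import subprocess
    from pathlib import Path

    def stage_upload(username: str, relative_path: str, data: bytes) -> None:
        # Per-user staging clone of the site repo (hypothetical layout).
        staging = Path("/var/www-staged-" + username)

        # Write the uploaded bytes into the staging working tree.
        target = staging / relative_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(data)

        # Commit and push so the build process (step 5) and the live
        # site's later pull (step 6) can pick the change up.
        subprocess.run(["git", "add", relative_path], cwd=staging, check=True)
        subprocess.run(
            ["git", "commit", "-m", "Upload from " + username + ": " + relative_path],
            cwd=staging, check=True)
        subprocess.run(["git", "push", "origin", "master"], cwd=staging, check=True)

How the PUT request gets routed into something like that (a small CGI, 
a framework handler, whatever) is up to whoever runs the server; the 
point is only that the server decides where the data lands.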

This entire process can be fully automated and done behind the scenes 
without any intervention or pathing "voodoo." It can also run on a 
single machine or across many machines, ensuring that if one goes down 
or a drive fails, the others still have all the data. And with an SCM 
system in place, you'll also have a full history of who did what and 
when. Very handy at times. ;-)

That might be very heavyweight for your customer [now], but it could be 
a direction they'd like to take things down the road. Flesh out the 
above as a plan and let them know why you'd like to support HTTP PUT and 
how it would work. Perhaps for now all it does is upload directly to the 
live site, but explain how it sets them up for later.

Jeff M.




