WebDAV - Anybody know if it's meant to cache files during transfer?

Jayce

I'm running several systems here: laptop, desktop, server, all Ubuntu 12.04 64-bit. The server is running OwnCloud, which is basically like Dropbox with additional functionality, but on your own self-hosted server. I noticed something about WebDAV and I'm trying to understand whether this is by design or if there's an actual issue elsewhere.

The question: Is WebDAV supposed to cache/buffer the file you're sending during transmission?

The reason I ask is this... I mounted the WebDAV server from Nautilus (which I assume utilizes .gvfs) and sent a 6.0GB file to my WebDAV server. It eventually tanked the connection when my RAM maxed out (I have 8GB of RAM). Factor in system and application RAM and that accounts for the remaining ~2.0GB. By contrast, I sent 8,000 pictures amounting to 30GB to the server via WebDAV in Nautilus and it worked great. But sending a single 6GB file is a different story. I assume the overhead of processing the 8k files one by one let the system "let go" of memory quickly enough that I never even saw an increase, whereas a single 6GB file streamed over gigabit LAN has no such breathing room. That's my assumption, at least.
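If anyone wants to reproduce this, here's a small sketch for watching where the memory goes during a transfer. It assumes the gvfs WebDAV backend process is named gvfsd-dav (that's what it's called on my Ubuntu systems, but adjust if yours differs); the process only exists while a dav:// location is mounted, so the ps line may print nothing.

```shell
#!/bin/sh
# Snapshot overall memory plus the gvfs WebDAV backend's resident size.
# Run this repeatedly (or under `watch -n 1`) while the large copy is going.
free -m
# RSS/VSZ of the gvfs dav backend, if it's running; harmless if it isn't.
ps -C gvfsd-dav -o pid,rss,vsz,cmd 2>/dev/null || true
```

If the gvfsd-dav RSS climbs roughly in step with the bytes transferred, that points at client-side buffering rather than anything on the server.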

I found this bug here and thought, oh dang, it's a Nautilus/.gvfs bug. But then I installed davfs2 and mounted my WebDAV server at a local folder I created at the filesystem root, simply named /webdav. I used the following command:

sudo mount.davfs -o file_mode=775,dir_mode=775,uid=jason http://myserverurl.com/owncloud/files/webdav.php /webdav

I also duplicated the same steps but changing the mount point from /webdav to /home/jason/webdav.

In both scenarios, it still eats up space on my root partition (I have root and home split), likely because the /tmp directory (I'd have to guess, anyway) takes the brunt of the caching.
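For what it's worth, if it is the davfs2 disk cache that's filling the root partition, its location and size are configurable. A sketch of the relevant davfs2.conf settings; the path here is just an example, not a recommendation:

```
# /etc/davfs2/davfs2.conf (system-wide) or ~/.davfs2/davfs2.conf (per-user)
cache_dir   /home/jason/.davfs2/cache   # move the cache onto the /home partition
cache_size  1024                        # soft cap on cache size, in MiByte
```

You'd need to unmount and remount for a change like this to take effect, and note cache_size is a soft limit: a single file bigger than the cap can still blow past it during the transfer.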

So I guess this raises the obvious question: is it normal for WebDAV to cache in any way? Is it expected behavior that Nautilus/.gvfs utilize system memory for transferring files, or would that still be considered a definite bug?
 
According to some people I spoke to, PHP has nothing to do with it. They explained why, but I wasn't exactly following. When you said run dmesg, is that it? I ran dmesg while I pushed a 1GB file over... saw nothing.

I just couldn't help but think that with davfs2 (terminal based) acting the same as Nautilus/.gvfs (HDD caching vs. RAM caching, but still caching somehow), maybe that's just the design of WebDAV? If so, I wonder what you're supposed to do if you're pushing a file significantly larger than your RAM over... perhaps there's a way to throttle it so it can release RAM as it goes and not get overloaded?
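One workaround worth trying for files bigger than RAM: skip the mounted filesystem entirely and push with a client that streams straight from disk. curl's -T reads the file as it sends it rather than buffering the whole thing, and --limit-rate can throttle the transfer. This is just a sketch; the URL is the one from my first post with a placeholder filename, and you'd be prompted for the password:

```shell
# Stream a large file to the WebDAV endpoint without client-side caching.
# -T uploads (HTTP PUT) directly from disk; --limit-rate caps bandwidth.
# URL, user, and filename are placeholders for your own setup.
curl -u jason --limit-rate 10M \
  -T /home/jason/bigfile.iso \
  "http://myserverurl.com/owncloud/files/webdav.php/bigfile.iso"
```

That sidesteps the gvfs/davfs2 caching layers entirely, so if this works for a 6GB file it would confirm the caching is on the client side.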

EDIT - davfs man page:

Caching

mount.davfs tries to reduce HTTP traffic by caching and reusing data. Information about directories and file attributes is held in memory, while downloaded files are cached on disk.

mount.davfs will consider cached information about directories and file attributes valid for a configurable time and look up this information on the server only after this time has expired (or there is other evidence that this information is stale). So if somebody else creates or deletes files on the server it may take some time before the local file system reflects this.

Hmm, looks like there's my answer. I wasn't able to find anything conclusive about the Nautilus way of using WebDAV, but if davfs (a terminal-based method) does it too, I can't help but assume it may be a by-design thing.
 
If the upload is going via PHP, then the file will be cached in memory until the upload is complete. I'm not too familiar with WebDAV; it's a protocol somewhat similar to FTP for file upload/download and manipulation, right? Have you got swap space set up on this server?

I don't know anything about WebDAV, to be completely honest. I kind of thought WebDAV was a "backend" way to upload files to my server. That said, WebDAV does seem to be HTTP-based, and HTTP is of course also how one would access the site and run into PHP.

I suppose it's possible, but unless somebody who's a wizard at PHP can explain how PHP would be involved even when using WebDAV as what seems to be a backend, I'm not sure I understand how it's part of the equation. That said, it's the first thing my buddy said (he does a lot of PHP programming for web sites), though he wasn't sure of an answer in regard to the caching.
 
Jayce, so is there a web page hosted on the server that you access to perform the uploads, or do you have to use a client to connect? If it's a PHP page, then yes, you'd be working in PHP memory and limited by the amount of memory available to PHP for caching. You'd likely also have to modify your default php.ini settings to give it large enough values for upload_max_filesize, post_max_size and memory_limit, and perhaps raise max_execution_time to stop it from timing out. You've jogged my memory, though: you're right, WebDAV is an extension of the HTTP protocol and was intended as an alternative to FTP. It's used as part of the support for Frontpage on some of our servers (yes, people actually still use it for some reason).
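For reference, if it did turn out to be a PHP upload page, the settings I mean live in php.ini and would look something like this. The values are purely illustrative, sized for a ~6GB upload, not a recommendation:

```
; php.ini -- illustrative values only, size them to your largest expected upload
upload_max_filesize = 8G
post_max_size       = 8G      ; must be >= upload_max_filesize
memory_limit        = 512M
max_execution_time  = 3600    ; seconds, so long uploads don't time out
```

You'd restart the web server after changing these. But since you're mounting WebDAV directly, this probably isn't your bottleneck.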

EDIT: Derp, reread your first post and realised you're using it as a mount point. I'm not too sure then, sorry; it could be a protocol issue. I don't know how WebDAV handles large transfers, or what sort of resume support there is, if any.
 
I went back and forth with Gnome devs a little bit. They're looking into the issue, but the one guy I spoke to said he thought he remembered WebDAV being built like this for some reason. Something about HTTP based protocols that do this sort of caching during file transmission. We'll wait and see though.
 