Dropzone JS - Chunking

I think I'm pretty close with this. I have the following Dropzone configuration:

```js
Dropzone.options.myDZ = {
  chunking: true,
  chunkSize: 500000,
  retryChunks: true,
  retryChunksLimit: 3,
  chunksUploaded: function(file, done) {
   done();
  }
};
```

However, because of the done() call, it finishes after one chunk. I believe that at this point I need to check whether all of the chunks have been uploaded, and only call done() if they have.

Here is the wiki for chunking: https://gitlab.com/meno/dropzone/wikis/faq#chunked-uploads

And here are the configuration options: http://www.dropzonejs.com/#configuration

Has anyone used Dropzone for this before?

Asked by Lee

After working on this for a while, I can confirm (using the latest version of Dropzone, 5.4.0) that chunksUploaded is only called once all chunks of a file have been uploaded. Calling done() then finishes processing the file successfully. How large is the file you are trying to upload? If it is smaller than chunkSize, the file is not actually chunked (because of the default `forceChunking = false;`), so chunksUploaded is never called (source).
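For example, adding forceChunking to the question's configuration guarantees that even a file smaller than chunkSize is sent through the chunking path, so chunksUploaded always fires (a minimal sketch; the url value is just a placeholder for your own endpoint):

```js
Dropzone.options.myDZ = {
  url: "/upload",            // placeholder endpoint for this sketch
  chunking: true,
  chunkSize: 500000,
  forceChunking: true,       // chunk even when file.size < chunkSize, so chunksUploaded always runs
  retryChunks: true,
  retryChunksLimit: 3,
  chunksUploaded: function (file, done) {
    // only reached after every chunk of the file has been uploaded
    done();
  }
};
```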

Below I've included my working front-end implementation of chunking with dropzone.js. A few notes: myDropzone and currentFile are global variables declared outside of $(document).ready(), like so:

```js
var currentFile = null;
var myDropzone = null;
```

This is because they need to be in scope when handling errors from the PUT request inside the chunksUploaded function (the done() passed in there does not accept an error message as an argument, so we have to handle errors ourselves, which is what requires these globals; I can elaborate further if needed).
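Isolated from the full listing that follows, that error-handling pattern looks roughly like this (a sketch only; mergeChunksOnServer is a hypothetical stand-in for the $.ajax PUT request shown below):

```js
chunksUploaded: function (file, done) {
    currentFile = file;                  // keep a reference for the error path
    mergeChunksOnServer(file)            // hypothetical helper returning a promise
        .then(function () {
            done();                      // success: done() takes no arguments
        })
        .catch(function (errorText) {
            // done() cannot signal failure, so fail the file via the instance instead
            currentFile.accepted = false;
            myDropzone._errorProcessing([currentFile], errorText);
        });
}
```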

```js
$(function () {
    myDropzone = new Dropzone("#attachDZ", {
        url: "/api/ChunkedUpload",  
        params: function (files, xhr, chunk) {
            if (chunk) {
                return {
                    dzUuid: chunk.file.upload.uuid,
                    dzChunkIndex: chunk.index,
                    dzTotalFileSize: chunk.file.size,
                    dzCurrentChunkSize: chunk.dataBlock.data.size,
                    dzTotalChunkCount: chunk.file.upload.totalChunkCount,
                    dzChunkByteOffset: chunk.index * this.options.chunkSize,
                    dzChunkSize: this.options.chunkSize,
                    dzFilename: chunk.file.name,
                    userID: <%= UserID %>,
                };
            }
        },
        parallelUploads: 1,  // since we're using a global 'currentFile', we could have issues if parallelUploads > 1, so we'll make it = 1
        maxFilesize: 1024,   // max individual file size 1024 MB
        chunking: true,      // enable chunking
        forceChunking: true, // forces chunking even when file.size < chunkSize
        parallelChunkUploads: true, // allows chunks to be uploaded in parallel (this is independent of the parallelUploads option)
        chunkSize: 1000000,  // chunk size 1,000,000 bytes (~1MB)
        retryChunks: true,   // retry chunks on failure
        retryChunksLimit: 3, // retry maximum of 3 times (default is 3)
        chunksUploaded: function (file, done) {
            // All chunks have been uploaded. Perform any other actions
            currentFile = file;

            // This calls server-side code to merge all chunks for the currentFile
            $.ajax({
                type: "PUT",
                url: "/api/ChunkedUpload?dzIdentifier=" + currentFile.upload.uuid
                    + "&fileName=" + encodeURIComponent(currentFile.name)
                    + "&expectedBytes=" + currentFile.size
                    + "&totalChunks=" + currentFile.upload.totalChunkCount
                    + "&userID=" + <%= UserID %>,
                success: function (data) {
                    // Must call done() if successful
                    done();
                },
                error: function (msg) {
                    currentFile.accepted = false;
                    myDropzone._errorProcessing([currentFile], msg.responseText);
                }
             });
        },
        init: function() {

            // This calls server-side code to delete temporary files created if the file failed to upload
            // This also gets called if the upload is canceled
            this.on('error', function(file, errorMessage) {
                $.ajax({
                    type: "DELETE",
                    url: "/api/ChunkedUpload?dzIdentifier=" + file.upload.uuid
                        + "&fileName=" + encodeURIComponent(file.name)
                        + "&expectedBytes=" + file.size
                        + "&totalChunks=" + file.upload.totalChunkCount
                        + "&userID=" + <%= UserID %>,
                    success: function (data) {
                        // nothing
                    }
                });
            });
        }
    });
});
```
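If you want to confirm what each chunk request actually carries, one option (not part of the original answer) is to log the form data from Dropzone's sending event, which in Dropzone 5.x fires for every chunk request; the keys below match the custom params function above. Attach the handler inside init, or after the instance has been created:

```js
myDropzone.on("sending", function (file, xhr, formData) {
    // Log the chunk metadata appended by the params option above
    console.log("chunk", formData.get("dzChunkIndex"),
                "of", formData.get("dzTotalChunkCount"),
                "at byte offset", formData.get("dzChunkByteOffset"));
});
```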

If anyone is interested in my server-side code, let me know and I'll post it. I'm using C#/ASP.NET.

EDIT: Added the server-side code.

ChunkedUploadController.cs

```csharp
public class ChunkedUploadController : ApiController
{
    private class DzMeta
    {
        public int intChunkNumber = 0;
        public string dzChunkNumber { get; set; }
        public string dzChunkSize { get; set; }
        public string dzCurrentChunkSize { get; set; }
        public string dzTotalSize { get; set; }
        public string dzIdentifier { get; set; }
        public string dzFilename { get; set; }
        public string dzTotalChunks { get; set; }
        public string dzCurrentChunkByteOffset { get; set; }
        public string userID { get; set; }

        public DzMeta(Dictionary<string, string> values)
        {
            dzChunkNumber = values["dzChunkIndex"];
            dzChunkSize = values["dzChunkSize"];
            dzCurrentChunkSize = values["dzCurrentChunkSize"];
            dzTotalSize = values["dzTotalFileSize"];
            dzIdentifier = values["dzUuid"];
            dzFilename = values["dzFileName"];
            dzTotalChunks = values["dzTotalChunkCount"];
            dzCurrentChunkByteOffset = values["dzChunkByteOffset"];
            userID = values["userID"];
            int.TryParse(dzChunkNumber, out intChunkNumber);
        }

        public DzMeta(NameValueCollection values)
        {
            dzChunkNumber = values["dzChunkIndex"];
            dzChunkSize = values["dzChunkSize"];
            dzCurrentChunkSize = values["dzCurrentChunkSize"];
            dzTotalSize = values["dzTotalFileSize"];
            dzIdentifier = values["dzUuid"];
            dzFilename = values["dzFileName"];
            dzTotalChunks = values["dzTotalChunkCount"];
            dzCurrentChunkByteOffset = values["dzChunkByteOffset"];
            userID = values["userID"];
            int.TryParse(dzChunkNumber, out intChunkNumber);
        }
    }

    [HttpPost]
    public async Task<HttpResponseMessage> UploadChunk()
    {
        HttpResponseMessage response = new HttpResponseMessage { StatusCode = HttpStatusCode.Created };

        try
        {
            if (!Request.Content.IsMimeMultipartContent("form-data"))
            {
                //No Files uploaded
                response.StatusCode = HttpStatusCode.BadRequest;
                response.Content = new StringContent("No file uploaded or MIME multipart content not as expected!");
                throw new HttpResponseException(response);
            }

            var meta = new DzMeta(HttpContext.Current.Request.Form);
            var chunkDirBasePath = tSysParm.GetParameter("CHUNKUPDIR"); 
            var path = string.Format(@"{0}\{1}", chunkDirBasePath, meta.dzIdentifier);
            var filename = string.Format(@"{0}.{1}.{2}.tmp", meta.dzFilename, (meta.intChunkNumber + 1).ToString().PadLeft(4, '0'), meta.dzTotalChunks.PadLeft(4, '0'));
            Directory.CreateDirectory(path);

            Request.Content.LoadIntoBufferAsync().Wait();

            await Request.Content.ReadAsMultipartAsync(new CustomMultipartFormDataStreamProvider(path, filename)).ContinueWith((task) =>
            {
                if (task.IsFaulted || task.IsCanceled)
                {
                    response.StatusCode = HttpStatusCode.InternalServerError;
                    response.Content = new StringContent("Chunk upload task is faulted or canceled!");
                    throw new HttpResponseException(response);
                }
            });
        }
        catch (HttpResponseException ex)
        {
            LogProxy.WriteError(ex.Response.Content.ToString(), ex);
        }
        catch (Exception ex)
        {
            LogProxy.WriteError("Error uploading/saving chunk to filesystem", ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
            response.Content = new StringContent(string.Format("Error uploading/saving chunk to filesystem: {0}", ex.Message));
        }

        return response;
    }

    [HttpPut]
    public HttpResponseMessage CommitChunks([FromUri]string dzIdentifier, [FromUri]string fileName, [FromUri]int expectedBytes, [FromUri]int totalChunks, [FromUri]int userID)
    {
        HttpResponseMessage response = new HttpResponseMessage { StatusCode = HttpStatusCode.OK };
        string path = "";

        try
        {
            var chunkDirBasePath = tSysParm.GetParameter("CHUNKUPDIR");
            path = string.Format(@"{0}\{1}", chunkDirBasePath, dzIdentifier);
            var dest = Path.Combine(path, HttpUtility.UrlDecode(fileName));
            FileInfo info = null;

            // Get all files in directory and combine in filestream
            var files = Directory.EnumerateFiles(path).Where(s => !s.Equals(dest)).OrderBy(s => s);
            // Check that the number of chunks is as expected
            if (files.Count() != totalChunks)
            {
                response.Content = new StringContent(string.Format("Total number of chunks: {0}. Expected: {1}!", files.Count(), totalChunks));
                throw new HttpResponseException(response);
            }

            // Merge chunks into one file
            using (var fStream = new FileStream(dest, FileMode.Create))
            {
                foreach (var file in files)
                {
                    using (var sourceStream = System.IO.File.OpenRead(file))
                    {
                        sourceStream.CopyTo(fStream);
                    }
                }
                fStream.Flush();
            }

            // Check that merged file length is as expected.
            info = new FileInfo(dest);
            if (info != null)
            {
                if (info.Length == expectedBytes)
                {
                    // Save the file in the database
                    tTempAtt file = tTempAtt.NewInstance();
                    file.ContentType = MimeMapping.GetMimeMapping(info.Name);
                    file.File = System.IO.File.ReadAllBytes(info.FullName);
                    file.FileName = info.Name;
                    file.Title = info.Name;
                    file.TemporaryID = userID;
                    file.Description = info.Name;
                    file.User = userID;
                    file.Date = SafeDateTime.Now;
                    file.Insert();
                }
                else
                {
                    response.Content = new StringContent(string.Format("Total file size: {0}. Expected: {1}!", info.Length, expectedBytes));
                    throw new HttpResponseException(response);
                }
            }
            else
            {
                response.Content = new StringContent("Chunks failed to merge and file not saved!");
                throw new HttpResponseException(response);
            }
        }
        catch (HttpResponseException ex)
        {
            LogProxy.WriteError(ex.Response.Content.ToString(), ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
        }
        catch (Exception ex)
        {
            LogProxy.WriteError("Error merging chunked upload!", ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
            response.Content = new StringContent(string.Format("Error merging chunked upload: {0}", ex.Message));
        }
        finally
        {
            // No matter what happens, we need to delete the temporary files if they exist
            if (!path.IsNullOrWS() && Directory.Exists(path))
            {
                Directory.Delete(path, true);
            }
        }

        return response;
    }

    [HttpDelete]
    public HttpResponseMessage DeleteCanceledChunks([FromUri]string dzIdentifier, [FromUri]string fileName, [FromUri]int expectedBytes, [FromUri]int totalChunks, [FromUri]int userID)
    {
        HttpResponseMessage response = new HttpResponseMessage { StatusCode = HttpStatusCode.OK };

        try
        {
            var chunkDirBasePath = tSysParm.GetParameter("CHUNKUPDIR");
            var path = string.Format(@"{0}\{1}", chunkDirBasePath, dzIdentifier);

            // Delete abandoned chunks if they exist
            if (!path.IsNullOrWS() && Directory.Exists(path))
            {
                Directory.Delete(path, true);
            }
        }
        catch (Exception ex)
        {
            LogProxy.WriteError("Error deleting canceled chunks", ex);
            response.StatusCode = HttpStatusCode.InternalServerError;
            response.Content = new StringContent(string.Format("Error deleting canceled chunks: {0}", ex.Message));
        }

        return response;
    }
}
```

And finally, CustomMultipartFormDataStreamProvider.cs:

```csharp
public class CustomMultipartFormDataStreamProvider : MultipartFormDataStreamProvider
{
    public readonly string _filename;
    public CustomMultipartFormDataStreamProvider(string path, string filename) : base(path)
    {
        _filename = filename;
    }

    public override string GetLocalFileName(HttpContentHeaders headers)
    {
        return _filename;
    }
}
```
Answered by Cory