Our support team is at your disposal for any software questions you may have.
Below is the FAQ for the FileCatalyst solution, covering basic questions and configuration.
FileCatalyst uses multiple techniques to yield exceptional results; in many cases, the effective transfer speed exceeds the actual line speed. These techniques include:
On-the-fly compression, one method used by FileCatalyst to accelerate file transfers, allows digital files to be reduced in size as they are sent. It uses the same principles as WinZip, Gzip and other compression utilities.
What differentiates FileCatalyst is that compression occurs as the file is being transferred, saving preparation time. As the files reach the recipient, they are decompressed and automatically stored in their original formats.
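FileCatalyst's actual codec is proprietary, but the principle can be sketched with the JDK's standard gzip stream: bytes are compressed into the socket as the file is read, so no archive is ever prepared on disk. A minimal sketch; the class name and 64 KB buffer size are illustrative assumptions:

```java
import java.io.*;
import java.net.Socket;
import java.util.zip.GZIPOutputStream;

public class CompressedSend {
    // Stream a file through gzip directly into a socket: no temporary
    // archive is written to disk, so compression adds no preparation step.
    public static void send(File file, String host, int port) throws IOException {
        try (Socket socket = new Socket(host, port);
             GZIPOutputStream out = new GZIPOutputStream(socket.getOutputStream());
             InputStream in = new FileInputStream(file)) {
            byte[] buf = new byte[64 * 1024];   // illustrative buffer size
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);           // compressed as it is produced
            }
            out.finish();                       // flush the final gzip block
        }
    }
}
```

The receiving side simply wraps its socket input stream in a GZIPInputStream, so the file lands in its original format with no separate decompression step.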
Less setup and teardown is required with on-the-fly compression, which becomes important when transferring a large number of files. Imagine the overhead involved in trying to send 1,000 files, as each file is individually created and closed by the server. Standard compression techniques interrupt the data flow, adding to the total time and making the file transfer appear slower. When sending one large archive, there is only one setup and teardown involved, which greatly speeds up the overall transfer process.
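To put rough numbers on that overhead, assume (purely for illustration) that opening and closing each file costs one network round trip apiece at 100 ms RTT:

```java
public class SetupOverhead {
    public static void main(String[] args) {
        int files = 1000;              // number of files to transfer
        double rttSeconds = 0.100;     // assumed round-trip time: 100 ms
        int roundTripsPerFile = 2;     // assumption: one RTT to open, one to close

        // Per-file protocol chatter, independent of how fast the data moves:
        double perFileOverhead = files * roundTripsPerFile * rttSeconds;
        System.out.printf("1,000 individual files: %.0f s of overhead%n",
                perFileOverhead);                           // 200 s

        // A single streamed archive pays that cost exactly once:
        System.out.printf("One archive: %.1f s of overhead%n",
                roundTripsPerFile * rttSeconds);            // 0.2 s
    }
}
```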
FileCatalyst uses the UDP protocol for data transport and TCP for control commands and retransmission requests. FileCatalyst also provides a secondary firewall-friendly transfer method that enhances the performance of TCP by opening up multiple concurrent streams of data.
The UDP-based protocol used in FileCatalyst is proprietary. It is a highly efficient, patent-pending retransmission and congestion control mechanism that adds a reliability layer to UDP. The flow of data can achieve full line speed with an amazingly low 0.25% overhead.
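Because the FileCatalyst protocol is proprietary and patent-pending, its wire format is not public; the sketch below is only a generic illustration of the concept. Datagrams carry a sequence number, the receiver tracks gaps, and the resulting NAK list would travel back to the sender over the TCP control connection. The 4-byte header and the timeout are assumptions for the sketch:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.util.BitSet;

public class NakReceiver {
    // Receive sequence-numbered UDP datagrams and record which block
    // numbers never arrived; the caller would send that NAK list back
    // over the TCP control connection to request retransmission.
    public static BitSet missingBlocks(int port, int totalBlocks) throws Exception {
        BitSet seen = new BitSet(totalBlocks);
        byte[] buf = new byte[1472];                   // fits a 1500-byte MTU
        try (DatagramSocket socket = new DatagramSocket(port)) {
            socket.setSoTimeout(2000);                 // stop after 2 s of silence
            while (seen.cardinality() < totalBlocks) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                try {
                    socket.receive(packet);
                } catch (SocketTimeoutException e) {
                    break;                             // sender finished this pass
                }
                int seq = ByteBuffer.wrap(packet.getData()).getInt(); // 4-byte header (assumed)
                seen.set(seq);                         // remaining payload -> file
            }
        }
        BitSet missing = (BitSet) seen.clone();
        missing.flip(0, totalBlocks);                  // the gaps are the NAK list
        return missing;
    }
}
```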
Both FTP and HTTP use TCP as the transport protocol. The inherent characteristics of TCP make it highly susceptible to network latency and packet loss. Even on a relatively stable network, TCP goodput is always lower than the actual available line speed. For example, on a T3 network (45 Mbps) with packet loss of 0.1% and a delay of 10 ms, FTP transfers can peak at only 30 Mbps.
In contrast, FileCatalyst yields goodput of 44 Mbps, only slightly less than the maximum available line speed. When network conditions deteriorate to 2% packet loss and a delay of 150 ms, FTP transfers can be expected to perform at 450 Kbps, or 1% of the actual available bandwidth. FileCatalyst maintains its 44 Mbps goodput.
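Those FTP figures line up with the well-known Mathis approximation for steady-state TCP throughput, rate ≈ MSS / (RTT · √loss). A quick check, assuming the quoted delays are round-trip times and a typical 1460-byte MSS:

```java
public class TcpThroughputBound {
    // Mathis et al. approximation for steady-state TCP throughput:
    //   rate <= (MSS / RTT) * (1 / sqrt(lossRate))
    static double mathisMbps(int mssBytes, double rttSeconds, double lossRate) {
        return (mssBytes * 8 / rttSeconds) / Math.sqrt(lossRate) / 1e6;
    }

    public static void main(String[] args) {
        // Mild degradation: 0.1% loss, 10 ms round trip
        System.out.printf("0.1%% loss, 10 ms RTT: %.1f Mbps%n",
                mathisMbps(1460, 0.010, 0.001));       // ~36.9 Mbps
        // Severe degradation: 2% loss, 150 ms round trip
        System.out.printf("2%% loss, 150 ms RTT: %.2f Mbps%n",
                mathisMbps(1460, 0.150, 0.02));        // ~0.55 Mbps
    }
}
```

Both estimates are in the same ballpark as the FTP numbers quoted above, which is why TCP alone cannot fill the pipe once loss and latency climb.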
Rather than reading and sending the file sequentially, multiple threads read from the file and transfer pieces over their own TCP streams. These pieces are received and reassembled on the fly by an equivalent number of receiver threads. There is no latency during reconstruction as the pieces are written using random access. The number of streams can be tuned to achieve the desired throughput. This method of transferring files is effective when network degradation is at a reasonable level. Due to the large number of concurrent threads that must run in order to sustain high speeds, this method is not as scalable as FileCatalyst UDP-based transfers.
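A minimal sketch of the receiving side, assuming each chunk arrives tagged with its byte offset: positional writes on a shared FileChannel are thread-safe in Java, which is what allows pieces to be written in whatever order they arrive with no reassembly buffer:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ParallelReassembly {
    // Open the destination file once; all receiver threads share the channel.
    public static FileChannel openDestination(Path path) throws IOException {
        return FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
    }

    // Each receiver thread calls this with the chunk it pulled from its own
    // TCP stream; the positional write lands the bytes at the right offset
    // regardless of arrival order.
    public static void writeChunk(FileChannel out, long offset, byte[] chunk)
            throws IOException {
        ByteBuffer buf = ByteBuffer.wrap(chunk);
        while (buf.hasRemaining()) {
            offset += out.write(buf, offset);   // random-access write
        }
    }
}
```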
FileCatalyst supports an advanced “delta” transfer algorithm. Once a file has been transferred in full, any new revision requires only the incremental changes to be sent rather than the whole new file. Imagine, for example, a large database file. Sometimes only small portions of the database change, such as a single name or location field. FileCatalyst calculates these modifications as “deltas” and transfers only the new data. At the destination, FileCatalyst automatically applies the changes, bringing the file back in sync with the source. This keeps bandwidth usage to a minimum and results in a very high effective goodput.
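FileCatalyst's own delta algorithm is not public, so the sketch below is only a simplified illustration of the idea: hash fixed-size blocks of both revisions and retransmit only the blocks whose hashes differ. Real delta algorithms also handle insertions that shift block boundaries, and a production implementation would not read whole files into memory as this sketch does:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BlockDelta {
    static final int BLOCK = 64 * 1024;   // assumed fixed block size

    // Return the indices of blocks that differ between the previously
    // transferred copy and the new revision; only these need to be sent.
    public static List<Integer> changedBlocks(Path oldFile, Path newFile)
            throws Exception {
        byte[] oldBytes = Files.readAllBytes(oldFile);   // sketch only: in-memory
        byte[] newBytes = Files.readAllBytes(newFile);
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        List<Integer> changed = new ArrayList<>();
        int blocks = (Math.max(oldBytes.length, newBytes.length) + BLOCK - 1) / BLOCK;
        for (int i = 0; i < blocks; i++) {
            if (!Arrays.equals(md.digest(slice(oldBytes, i)),
                               md.digest(slice(newBytes, i)))) {
                changed.add(i);           // this block's new bytes cross the wire
            }
        }
        return changed;
    }

    private static byte[] slice(byte[] data, int block) {
        int from = Math.min(block * BLOCK, data.length);
        int to = Math.min(from + BLOCK, data.length);
        return Arrays.copyOfRange(data, from, to);
    }
}
```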
Your firewall may be blocking incoming TCP requests on port 21 (or whatever port you set for the control connection).
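A quick way to verify this from the client side is a plain TCP connect test against the control port; if it times out from outside but succeeds locally, a firewall or NAT rule is the likely culprit. A minimal sketch:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port can be established
    // within five seconds; run it from the client machine.
    public static boolean canConnect(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5000);
            return true;
        } catch (IOException e) {
            return false;                 // blocked, filtered, or unreachable
        }
    }
}
```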
Make sure the user account under which the FileCatalyst server is running has sufficient permissions to write files to the specified data directory.
Your firewall or router must allow incoming UDP traffic so data can be received from the client applications. Further, if the server is behind a NAT, the packets must be forwarded to the proper IP address. Another possible issue is that the client side is behind a firewall and doesn’t allow outgoing UDP data.
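To confirm whether UDP actually reaches the server, a bare listener on the data port is usually enough: run the sketch below on the server, then fire a datagram at it from the client with any UDP tool (netcat, for instance: nc -u host port). If nothing arrives, a firewall or NAT rule along the path is dropping the traffic:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpProbe {
    // Listens on the given UDP port and reports the first datagram received.
    public static void main(String[] args) throws Exception {
        int port = Integer.parseInt(args[0]);
        try (DatagramSocket socket = new DatagramSocket(port)) {
            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);       // blocks until a datagram gets through
            System.out.println("UDP received from " + packet.getAddress());
        }
    }
}
```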
It is possible that you have set the encoding unit size to greater than 1472 bytes, which results in fragmented UDP packets. Some routers and firewalls will automatically drop fragmented packets. If this is the case, try lowering the packet size to 1472 or less. Windows operating systems tend to perform better with a value of 1024.
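The 1472 figure falls straight out of the Ethernet MTU: 1500 bytes minus the 20-byte IP header and the 8-byte UDP header:

```java
public class PayloadLimit {
    // Largest UDP payload that still fits a single Ethernet frame.
    public static void main(String[] args) {
        int mtu = 1500;           // standard Ethernet MTU
        int ipHeader = 20;        // IPv4 header, no options
        int udpHeader = 8;        // fixed UDP header
        System.out.println("Max unfragmented payload: "
                + (mtu - ipHeader - udpHeader) + " bytes");   // 1472
    }
}
```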
The answer is likely that your disk is not able to keep up with the transfers, and the only remedy is faster storage. To receive at hundreds of Mbps, you need a storage device efficient enough to sustain those write rates: a fast SATA drive at 10K or 15K RPM, a fast RAID array, or a fast SAN or NAS connected over GigE.
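A crude sequential-write benchmark can confirm the diagnosis: if the figure it prints is below your target transfer rate, the disk is the bottleneck. A sketch that writes a 512 MB scratch file named bench.tmp (delete it afterwards):

```java
import java.io.FileOutputStream;
import java.io.IOException;

public class DiskWriteBench {
    public static void main(String[] args) throws IOException {
        byte[] block = new byte[1024 * 1024];   // 1 MB of zeros
        int totalMb = 512;
        long start = System.nanoTime();
        try (FileOutputStream out = new FileOutputStream("bench.tmp")) {
            for (int i = 0; i < totalMb; i++) {
                out.write(block);
            }
            out.getFD().sync();                 // force the bytes to disk
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%.0f MB/s (about %.0f Mbps)%n",
                totalMb / seconds, totalMb * 8 / seconds);
    }
}
```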
Hubs are more susceptible to collisions at high speeds, which results in additional packet loss. As you increase the packet size, the UDP packets FileCatalyst sends will no longer fit in an Ethernet frame. The OS will fragment these jumbo packets into many smaller fragments that match the MTU of your network (usually 1500 bytes); the larger the packet, the more pieces it is broken into.
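Fragment counts grow quickly with payload size, because every fragment repeats the 20-byte IP header and therefore carries at most MTU - 20 bytes of the original datagram:

```java
public class FragmentCount {
    // Number of IP fragments for a UDP payload: the datagram is the
    // payload plus the 8-byte UDP header, split into (MTU - 20)-byte pieces.
    public static void main(String[] args) {
        int mtu = 1500;
        int perFragment = mtu - 20;              // 1480 data bytes per fragment
        int[] payloads = {1472, 4096, 8972};     // bytes of UDP payload
        for (int p : payloads) {
            int fragments = (p + 8 + perFragment - 1) / perFragment;
            System.out.println(p + " bytes -> " + fragments + " fragment(s)");
        }
    }
}
```

Losing any one fragment discards the whole datagram, which is why oversized packets can quietly multiply the effective loss rate on a congested hub.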
If your connection speed is 100 Mbps and the FileCatalyst server is storing data to network storage (NAS or SAN) through a 100 Mbps or slower switch, FileCatalyst will compete with itself for bandwidth while receiving and writing. The transfer rate with this network topology would be approximately 50 Mbps or lower.
Performance may be further affected if you are connected through a 100 Mbps or slower hub, as this is a potential source of packet loss due to congestion. If replacing the hub with a fast switch is not an option, do not attempt to send at the full capacity of the hub with a packet size greater than 1472 (or whatever setting does not cause fragmentation), or significant performance degradation could occur.
Windows machines with mapped network drives may find those drives are not visible to the Server due to Microsoft's implementation of User Account Control (UAC), introduced in Windows Vista.
To implement a workaround, consult the examples available online showing how to disable this behavior or how to make mapped drives visible to services.
On Windows or Linux, the logs can be found in the logs directory inside the application's install directory. On OS X, the logs are in the logs folder of the appropriate application folder under "Library/Application Support/FileCatalyst" for the user running the application, or in the root Library if the application is being run as a service. The path to the logs can also be found in the configuration file as log.location.
FileCatalyst software components must be installed on systems meeting certain recommended requirements.
The following recommended settings are for a typical server deployment of up to 1 Gbps and 20 concurrent connections.
The following recommended settings are for server deployments at speeds over 1 Gbps and up to 50 concurrent connections.
Recommended VM sizes for public cloud services
Cloud Service | VM Size         | Max. Possible Throughput
------------- | --------------- | ------------------------
AWS           | M5.XLarge       | 500 Mbps
AWS           | T3.XLarge       | 250 Mbps
Azure         | Standard D4s v3 | 400 Mbps
Azure         | Standard D8s v3 | 600 Mbps
The following recommended settings are for a typical deployment of up to 25 concurrent connections. Virtual server configurations are supported.
The following recommended settings are for a typical deployment of up to 40 monitored nodes. Virtual server configurations are supported.
Contact our support. Our technical team will be happy to help you.