The more general-purpose -Wconversion has many false positives, often around int-to-char conversions. Some functions, like int toupper(int), have an unexpected int return type so they can represent out-of-band values like EOF.
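A minimal example of the kind of warning this produces (the function name is just for illustration):

    #include <ctype.h>

    /* With -Wconversion, the assignment below warns about an int-to-char
     * conversion that "may change value", because toupper() returns an int
     * (so that it can also return EOF), even though the result always
     * fits in a char here. */
    void upcase(char *s)
    {
        for (; *s; ++s)
            *s = toupper((unsigned char)*s);   /* -Wconversion warning */
    }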
This is not a NOP; it explicitly clears the upper 32 bits of EDI, since the compiler does not know that they are zero in this situation. If you change cc from an int to a size_t (long on x86-64), the compiler will generate:
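For context, the sort of code where this comes up looks roughly like the following. This is a stand-in, not the article's snippet, and it uses an unsigned count so the widening is the zero-extension described above:

    #include <stdlib.h>

    /* With a 32-bit cc, the compiler must widen it before handing it to a
     * size_t parameter; on x86-64 that widening is e.g. "mov edi, edi",
     * which zeroes the upper half of RDI.  If cc were already a size_t it
     * would arrive in RDI and no extra instruction would be needed. */
    void *alloc_records(unsigned int cc)
    {
        return malloc(cc);   /* malloc takes a size_t */
    }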
The article mentions 32-bit Raspbian, which is moving to 64-bit to better support the Raspberry Pi 4 with 8 GB of RAM. Both the RPi 3 and 4 have 64-bit support, but earlier models, including the popular and still-on-sale Zero, do not. This platform will likely stick around for a long time; we need to continue to fix issues like this size_t/off_t one in s3fs:
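The general shape of that class of bug, for anyone who hasn't hit it (a generic illustration, not the actual s3fs code):

    #include <sys/types.h>
    #include <stdio.h>

    /* On 32-bit Raspbian with _FILE_OFFSET_BITS=64, off_t is 64 bits but
     * size_t is only 32, so storing a file offset in a size_t silently
     * truncates it for files of 4 GiB and larger. */
    void show_truncation(off_t offset)
    {
        size_t truncated = (size_t)offset;   /* drops the high bits on 32-bit targets */
        printf("off_t %lld becomes size_t %zu\n", (long long)offset, truncated);
    }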
We just got a Raspberry Pi 400 in the mail yesterday to use as a media center PC. From what I understand from reading around, Netflix will only work on 32-bit because ChromeOS is 32-bit and they steal the Widevine blob out of ChromeOS to make Netflix and Amazon Prime work.
The other kind of lame consequence of that is I have to use Chromium rather than Firefox, which irks me just a little.
Other than that, the Raspberry Pi 400 is a really nicely put-together little machine.
Not that it matters tons for a media center, but, for general knowledge:
Chromium variants with the Google integration removed or deactivated are both more secure AND more private.
I think 32-bit is still recommended on most RPi 4 installs, because of greater software compatibility (across the established ecosystem). So it's going to change slowly.
This is great news for application developers! And probably hard work for the AWS implementors. I will try to update this over the weekend: https://github.com/gaul/are-we-consistent-yet
Could you expand on this comment? s3fs uses multiple threads for uploading and populating readdir metadata via S3fsMultiCurl::MultiPerform. Earlier versions used curl_multi_perform which may have hidden some of the parallelism from you.
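For anyone unfamiliar: the curl multi interface drives many transfers from a single thread, so the concurrency doesn't show up as extra threads in top. A rough sketch of that pattern, simplified from whatever s3fs actually did:

    #include <curl/curl.h>

    /* One thread multiplexes many in-flight transfers on a CURLM handle,
     * so the I/O is concurrent even though only one thread is running. */
    void perform_all(CURLM *multi)
    {
        int still_running = 0;
        do {
            curl_multi_perform(multi, &still_running);
            curl_multi_wait(multi, NULL, 0, 1000, NULL);   /* block until activity or timeout */
        } while (still_running);
    }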
Huh, right you are. I did a cursory search, but whatever I saw clearly was wrong.
That said, something is going on that I can't explain:
    # Default settings for aws
    $ time aws s3 cp /dev/shm/1GB.bin s3://test-kihaqtowex/a
    upload: ../../dev/shm/1GB.bin to s3://test-kihaqtowex/a
    real    0m10.312s
    user    0m5.909s
    sys     0m4.204s

    $ aws configure set default.s3.max_concurrent_requests 4
    $ time aws s3 cp /dev/shm/1GB.bin s3://test-kihaqtowex/b
    upload: ../../dev/shm/1GB.bin to s3://test-kihaqtowex/b
    real    0m26.732s
    user    0m4.989s
    sys     0m2.741s

    $ time cp /dev/shm/1GB.bin s3fs_mount_-kihaqtowex/c
    real    0m26.368s
    user    0m0.006s
    sys     0m0.699s
(I did multiple runs, though not hundreds so I can't say with certainty this would hold)
I swear the difference was worse in the past, so maybe things have improved. Still not worth the added slowness to me for my use cases.
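For completeness, the aws-cli knobs involved here are set the same way as above; I believe max_concurrent_requests defaults to 10 and the multipart chunk size to 8 MB, so for example:

    # put concurrency back to the default and try larger multipart chunks
    $ aws configure set default.s3.max_concurrent_requests 10
    $ aws configure set default.s3.multipart_chunksize 16MB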
Thanks for testing! You may improve performance by increasing -o parallel_count (default: 5). Newer versions of s3fs have improved performance, but please report any cliffs at https://github.com/s3fs-fuse/s3fs-fuse/issues .
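For example (bucket and mountpoint names are placeholders):

    $ s3fs test-kihaqtowex /path/to/mountpoint -o parallel_count=20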
This is not expected behavior. An older version had pessimized locking that prevented adding a file to a directory while simultaneously listing that directory, which is the closest matching symptom. I recommend upgrading to the latest version, running with the debug flags `-f -d -o curldbg`, and observing what happens. Please open an issue at https://github.com/s3fs-fuse/s3fs-fuse/issues if your symptoms persist.
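i.e. something along the lines of (bucket and mountpoint names are placeholders):

    # run in the foreground with debug output and libcurl tracing
    $ s3fs test-kihaqtowex /path/to/mountpoint -f -d -o curldbg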