
Tune for Performance: Do you really need a big internal SSD?

By: hoakley
12 December 2024 at 15:30

The most common economy made when specifying a new Mac is to opt for just 512 GB of internal storage, saving the $/€/£600 or so it would cost to increase that to 2 TB. After all, if you can buy a 2 TB Thunderbolt 5 SSD for around two-thirds of the price, why pay more? And who needs the blistering speed of that expensive internal storage when your apps work perfectly well with a far cheaper SSD? This article considers whether that choice matters in terms of performance.

To look at this, I’m going to use the same compression task that I used in this previous article, my app Cormorant relying on the AppleArchive framework in macOS to do all the heavy lifting. As results are going to differ considerably when using other apps and other tasks, I’d like to make it clear that this can’t reach general conclusions that apply to every task and every app: your mileage will vary. My purpose here is to show how you can work out whether using a slower disk will affect your app’s performance.
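
As orientation, here's a minimal sketch of folder compression using the AppleArchive framework, along the general lines an app like Cormorant follows; the paths are hypothetical, and this isn't Cormorant's actual code:

```swift
import AppleArchive
import System

// Compress the contents of a folder into an Apple Archive (.aar) using LZFSE.
// AppleArchive distributes the compression work across threads itself.
let sourceFolder = FilePath("/Volumes/Test/Source")     // hypothetical input folder
let archivePath  = FilePath("/Volumes/Test/test.aar")   // hypothetical output archive

guard let fileStream = ArchiveByteStream.fileStream(
        path: archivePath,
        mode: .writeOnly,
        options: [.create, .truncate],
        permissions: FilePermissions(rawValue: 0o644)),
      let compressionStream = ArchiveByteStream.compressionStream(
        using: .lzfse,
        writingTo: fileStream),
      let encodeStream = ArchiveStream.encodeStream(
        writingTo: compressionStream)
else {
    fatalError("Couldn't open the archive streams")
}
defer {
    try? encodeStream.close()
    try? compressionStream.close()
    try? fileStream.close()
}

do {
    // Walk the source folder and write its contents into the archive.
    try encodeStream.writeDirectoryContents(
        archiveFrom: sourceFolder,
        keySet: .defaultForArchive)
} catch {
    fatalError("Couldn't compress \(sourceFolder): \(error)")
}
```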

Methods

In that previous article, I concluded that compression speed was unlikely to be determined by disk performance, as even with all 10 P cores running compression threads, that rate only reached 2.28 GB/s, around the same as write speed to a good Thunderbolt 3 SSD. For today’s tests, I therefore set Cormorant to use the default number of threads (all 14 on my M4 Pro) so it would run as fast as the CPU cores would allow when compressing my standard 15.517 GB IPSW test file.

I’m fortunate to have a range of SSDs to test, and here use the 2 TB internal SSD of my Mac mini M4 Pro, an external USB4 enclosure (OWC Express 1M2) with a 2 TB Samsung 990 Pro SSD, and an external 2 TB Thunderbolt 3 SSD (OWC Envoy Pro SX). As few are likely to have access to such a range, I included two disk images stored on the internal SSD, a 100 GB sparse bundle, and a 50 GB read-write disk image, to see if they could be used to model external storage. All tests used unencrypted APFS file systems.

The first step with each was to measure its write speed using Stibium. Unlike more popular benchmarking apps for the Mac, Stibium measures the speed across a wide range of file sizes, from 2 MB to 2 GB, providing more insight into performance. After those measurements, those test files were removed, the large test file copied to the volume, and compressed by Cormorant at high QoS with the default number of threads.
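
The principle behind such measurements is simple enough to sketch: write test files of a range of sizes and time each write. This only illustrates the idea, it isn't Stibium's code, and the path and sizes are arbitrary:

```swift
import Foundation

// Crude write-speed test: write zero-filled files of increasing size and time
// each write. A real benchmark uses many more sizes, randomised order, and
// takes care over caching; this only shows the principle.
let testFolder = URL(fileURLWithPath: "/Volumes/Test/SpeedTest")   // hypothetical volume
let sizesMB = [2, 20, 200, 2_000]

for sizeMB in sizesMB {
    let data = Data(count: sizeMB * 1_000_000)
    let fileURL = testFolder.appendingPathComponent("test\(sizeMB)MB.dat")
    let start = Date()
    do {
        try data.write(to: fileURL)
        let seconds = Date().timeIntervalSince(start)
        let gbPerSecond = Double(data.count) / 1e9 / seconds
        print("\(sizeMB) MB written at \(String(format: "%.2f", gbPerSecond)) GB/s")
    } catch {
        print("Write of \(sizeMB) MB failed: \(error)")
    }
}
```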

Disk write speeds

[Chart: write speed against test file size for each type of storage.]

Results in each test followed a familiar pattern, with rapidly increasing write speeds to a peak at a file size of about 200 MB, then a steady rate or slow decline up to the 2 GB tested. The read-write disk image was a bit more erratic, though, with high write speeds at 800 and 1000 MB.

At 2 GB file size, write speeds were:

  • 7.69 GB/s for the internal SSD
  • 7.35 GB/s for the sparse bundle
  • 3.61 GB/s for the USB4 SSD
  • 2.26 GB/s for the Thunderbolt 3 SSD
  • 1.35 GB/s for the disk image.

Those are in accord with my many previous measurements of write speeds for those types of storage.

Compression rates

Times to compress the 15.517 GB test file ranked differently:

  • 5.57 s for the internal SSD
  • 5.84 s for the USB4 SSD
  • 9.67 s for the sparse bundle
  • 10.49 s for the Thunderbolt 3 SSD
  • 16.87 s for the disk image.
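
Dividing the 15.517 GB test file by those times gives compression rates of roughly 2.8, 2.7, 1.6, 1.5 and 0.9 GB/s respectively.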

When converted to compression rates, sparse bundle results are even more obviously an outlier, as shown in the chart below.

[Chart: compression rate for each type of storage.]

There’s a roughly linear relationship between measured write speed and compression rate in the disk image, Thunderbolt 3 SSD, and USB4 SSD, and little difference between the latter and the internal SSD. These suggest that disk write speed becomes the rate-limiting factor for compression when write speed falls below about 3 GB/s, but above that faster disks make little difference to compression rate.

Poor performance of the sparse bundle was a surprise, given how close its write speeds are to those of the host internal SSD. This is probably the result of compression writing a single very large file across its 14 threads: as the sparse bundle stores file data in a large number of band files, their overhead appears to have got the better of it. I will return to look at this in more detail in the near future, as sparse bundles have become popular largely because of their perceived superior performance.

The difference in compression rates between USB4 and Thunderbolt 3 SSDs is also surprisingly large. Of course, a Mac with fewer cores to run compression threads might not show any significant difference: a base M3 chip with 4 P and 4 E cores is unlikely to achieve a compression rate much in excess of 1.5 GB/s on its internal SSD because of its limited cores, so the restricted write speed of a Thunderbolt 3 SSD may not become the rate-limiting factor in that case.

Conclusions

  • The rate-limiting step in task performance will change according to multiple factors, including the effective use of multiple threads on multiple cores, and disk performance.
  • There’s no simple model you can apply to assess the effects of disk performance, and tests using disk images can be misleading.
  • You can’t predict whether a task will be disk-bound from disk benchmark performance.
  • Even expensive high-performance external SSDs can result in noticeably poor task performance. Maybe that money would be better spent on a larger internal SSD after all.

How Thunderbolt 5 can be faster or not

By: hoakley
9 December 2024 at 15:30

Early reports of Thunderbolt 5 performance have been mixed. This article starts from Thunderbolt 3 and dives deeper into the TB5 specification to explain why it might not deliver the performance you expect.

Origins

Thunderbolt 3 claims to deliver up to 40 Gb/s transfer speeds, but can’t actually deliver that to a peripheral device. That’s because TB3 includes both 4 lanes of PCIe data at 32.4 Gb/s and 4 lanes of DisplayPort 1.4 at up to 32.4 Gb/s, coming to a total of nearly 65 Gb/s. But that’s constrained within its maximum of 40 Gb/s.

The end result is that the best you can expect from a Thunderbolt 3 SSD is a read/write speed of around 3 GB/s. Although that would be sufficient for many, Macs don’t come with arrays of half a dozen Thunderbolt 3 ports. If you need to connect external displays, the only practical solution might be to feed them through a Thunderbolt 4 dock or hub. As that’s fed by a single port on the Mac, its total capacity is still limited to 40 Gb/s, and that connection becomes the bottleneck.

Thunderbolt 5 isn’t a direct descendant of Thunderbolt 3 or 4, but is aligned with the second version of USB4. This might appear strange, but USB4 in its original version includes support for Thunderbolt 3. What most obviously changes with USB4 2.0 is its maximum transfer rate has doubled to 80 Gb/s. But even that’s not straightforward.

Architecture

[Diagram: the two-lane architecture of a Thunderbolt 5/USB4 2.0 connection.]

The basic architecture of a Thunderbolt 5 or USB4 2.0 connection is shown above. It consists of two lanes, each of which has two transmitter-receiver pairs operating in one direction at a time and each transferring data at up to 40 Gb/s. This provides a simultaneous total transfer rate of 80 Gb/s in each direction.

These lanes and transmitter-receiver pairs can be operated in several modes, including three for Thunderbolt 5 and USB4 2.0.

[Diagram: single-lane USB4 mode.]

Single-lane USB4 is the same as the original USB4 already supported by all Apple silicon Macs, and in OWC’s superb Express 1M2 USB4 enclosure, and in practice its full 40 Gb/s comfortably outperforms Thunderbolt 3 for data transfers.

[Diagram: Symmetric USB4 mode.]

The first of the new high-speed Thunderbolt 5 and USB4 2.0 modes is known as Symmetric USB4, with both lanes bonded together to provide a total of 80 Gb/s in each direction. This is the mode that an external TB5 SSD operates in, to achieve claimed transfer rates of ‘up to’ 6 GB/s.

[Diagram: Asymmetric USB4 mode.]

The other new mode is Asymmetric USB4. To achieve this, one of the lanes has one of its transmitter-receiver pairs reversed. This provides a total of 120 Gb/s in one direction and 40 Gb/s in the other, and is referred to in Thunderbolt 5 as Bandwidth Boost. This can be used upstream, from the peripheral to the Mac host, or more commonly downstream, where it could provide sufficient bandwidth to support high-res displays, for example.
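
The nominal figures for the three modes can be summarised in a small sketch (my own labels, simply restating the bandwidths given above):

```swift
// Nominal per-direction bandwidth of the three Thunderbolt 5/USB4 2.0 modes
// described above; each transmitter-receiver pair carries up to 40 Gb/s.
enum USB4LinkMode {
    case singleLane   // original USB4
    case symmetric    // TB5/USB4 2.0, two lanes bonded
    case asymmetric   // TB5 Bandwidth Boost, three pairs in one direction

    var downstreamGbps: Int {
        switch self {
        case .singleLane: return 40
        case .symmetric: return 80
        case .asymmetric: return 120
        }
    }

    var upstreamGbps: Int {
        switch self {
        case .singleLane: return 40
        case .symmetric: return 80
        case .asymmetric: return 40
        }
    }
}
```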

Direct host connections

In practice, these modes should work transparently and to your advantage when connecting a peripheral direct to your Mac. If it’s a display, then the connection can switch to Asymmetric USB4 with its three transmitters in the host Mac, to deliver 120 Gb/s to that display. If it’s for storage or another device moving data in both directions, then Symmetric USB4 is good for 80 Gb/s in each direction, and should deliver those promised 6 GB/s read/write speeds.

Docks and hubs

When it comes to docks and hubs, though, there's potential for disappointment. Connect three high-res displays to your dock, and you want them to benefit from Asymmetric USB4 coming downstream from the host Mac. Even if the dock correctly switches to that mode for its connections to the displays, that won't work properly (if at all) if the connection between the dock and the host is Symmetric USB4, as it will act as an 80 Gb/s bottleneck on the dock's 120 Gb/s downstream.

Using a Thunderbolt 5 dock or hub is thus a great enabler, as it takes just one port on your Mac to feed up to three demanding peripherals, but it requires careful coordination of modes, and even then could fall short of your expectations.

Consider a TB5 Mac with a dock connected to one large high-res display, and a TB5 SSD. If modes are coordinated correctly, the Mac and dock will connect using Asymmetric mode to deliver 120 Gb/s downstream to the dock, then Asymmetric again from the dock to the display. But that leaves the SSD with what’s left over from that downstream bandwidth, although it still has 40 Gb/s on the return from the SSD through the dock to the Mac. That TB5 drive is then likely to perform as if it was an old USB4 drive, with perhaps half its normal read/write speeds. Of course, that’s still better than you’d get from a TB4 dock, but not what you paid for.

There’s also the potential for bugs and errors. I wonder if reported problems in getting three 6K displays working through a TB5 hub might come down to a failure to connect from Mac to dock in Asymmetric mode. What if the Mac and dock agree to operate in Asymmetric mode when the sole connected display doesn’t require that bandwidth, thus preventing an SSD from achieving an acceptable read speed?

Who needs TB5?

There will always be those who work with huge amounts of data and need as much speed as they can get. But for many, the most important use for Thunderbolt 5 is in the connection between Mac and dock or hub, as that’s the bottleneck that limits everyday performance. MacBook Pro and Mac mini models that now support TB5 come with three Thunderbolt 5 ports and one HDMI display port. You don’t have to indulge in excess to fill those up: add just one Studio Display and external storage for backups, and that Mac is down to its last Thunderbolt port.

As a result, Thunderbolt docks and hubs have proved popular, despite their bottleneck connection to the Mac. For many, that will be where Thunderbolt 5 proves its worth, provided it can get its modes straight and deliver the better performance we’re paying for.

Which M4 chip and model?

By: hoakley
7 November 2024 at 15:30

In the light of recent news, you might now be wondering whether you can afford to wait until next year in the hope that Apple then releases the M4 Mac of your dreams. To help guide you in your decision-making, this article explains what chip options are available in this month’s new M4 models, and how to choose between them.

CPU core types

Intel CPUs in modern Macs have several cores, all of them identical. Whether your Mac is running a background task like indexing for Spotlight, or running code for a time-critical user task, code is run across any of the available cores. In an Apple silicon chip like those in the M4 family, background tasks are normally constrained to efficiency (E) cores, leaving the performance (P) cores for your apps and other pressing user tasks. This brings significant energy economy for background tasks, and keeps your Mac more responsive to your demands.

Some tasks are normally constrained to run only on E cores. These include scheduled background tasks like Spotlight indexing, Time Machine backups, and some encoding of media. Game Mode is perhaps a more surprising E core user, as explained below.

Most user tasks are run preferentially on P cores, when they’re available. When there are more high-priority threads to be run than there are available P cores, then macOS will normally send them to be run on E cores instead. This also applies to threads running a Virtual Machine (VM) using lightweight virtualisation, whose threads will be preferentially scheduled on P cores when they’re available, even when code being run in the VM would normally be allocated to E cores.
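
In code, that behaviour follows the Quality of Service given to the work: threads at background QoS are confined to the E cores, while higher QoS work is scheduled on the P cores first. A minimal sketch, not drawn from any particular app:

```swift
import Dispatch
import Foundation

// Background QoS work is normally confined to the E cores on Apple silicon;
// higher QoS work is scheduled on P cores first, spilling over to E cores
// (run at raised frequency) only when the P cores are fully occupied.
let group = DispatchGroup()

DispatchQueue.global(qos: .background).async(group: group) {
    // housekeeping-style work: expected to run on E cores at low frequency
    let sum = (0..<10_000_000).reduce(0, &+)
    print("background work done, sum \(sum)")
}

DispatchQueue.global(qos: .userInitiated).async(group: group) {
    // time-critical work: expected to run on P cores when any are free
    let sum = (0..<10_000_000).reduce(0, &+)
    print("userInitiated work done, sum \(sum)")
}

group.wait()
```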

macOS also controls the clock speed or frequency of cores. For background tasks running on E cores, their frequency is normally held relatively low, for best energy efficiency. When high-priority threads overspill onto E cores, they’re normally run at higher frequency, which is less energy-efficient but brings their performance closer to that of a P core. macOS goes to great lengths to schedule threads and control core frequencies to strike the best balance between energy efficiency and performance.

Unfortunately, it's normally hard to see the effects of frequency in apps like Activity Monitor. Its CPU % figures only show the percentage of cycles used for processing, and make no allowance for core frequency. It will therefore show a background thread running at low frequency as 100%, just the same as a thread overspilt from the P cores and running at that E core's maximum frequency. So when you see Spotlight indexing apparently taking 200% of CPU % on your Mac's E cores, that might be only a small fraction of their maximum capacity had they been running at full frequency.

There are no differences between chips in the M4 family when it comes to each type of CPU core: each P core in a Base variant is the same as each in an M4 Pro or Max, with the same maximum frequency, and the same applies to E cores. macOS also allocates threads to different types of core using the same rules, and their frequencies are controlled the same as well. What differs between them is the number of each type of core, ranging from 4 P and 4 E in the 8-core variant of the Base M4, up to 12 P and 4 E in the 16-core variant of the M4 Max. Thus, their single-core benchmark results should be almost identical, although their multi-core results should vary according to the number of cores.

Game Mode

This mode is an exception to normal CPU and GPU core use, as it:

  • gives preferential access to the E cores,
  • gives highest priority access to the GPU,
  • uses low-latency Bluetooth modes for input controllers and audio output.

However, my previous testing didn’t demonstrate that apps running in Game Mode were given exclusive access to E cores. But for gamers, it now appears that the more E cores, the better.

GPU cores

These are also used for tasks other than graphics, such as some of the more demanding calculations required for Machine Learning and AI. However, experience so far with Writing Tools in Sequoia 15.1 is that macOS currently offloads their heavy lifting to be run off-device on one of Apple's dedicated servers. Although having plenty of GPU cores might well be valuable for non-graphics purposes in the future, for now there seems little advantage for many.

Thunderbolt 5

M4 Pro and Max, but not Base variants, come equipped with Thunderbolt ports that not only support Thunderbolt 3 and 4, but 5, as well as USB4. Thunderbolt 5 should effectively double the speed of connected TB5 SSDs, but to see that benefit, you’ll need to buy a TB5 SSD. Not only are they more expensive than TB3/4 models, but at present I know of only one range that’s due to ship this year. There will also be other peripherals with TB5 support, including at least one dock and one hub, although neither is available yet. The only TB5 accessories that are already available are cables, and even they are expensive.

TB5 also brings increased video bandwidth and support for DisplayPort 2.1, although even the M4 Max can’t make full use of that. If you’re looking to drive a combination of high-res displays, consult Apple’s Tech Specs carefully, as they’re complicated.

Although TB5 will become increasingly important over the next few years, TB3/4 and USB4 are far from dead yet and are supported by all M4 models.

Which M4 chip?

The table below summarises key figures for each of the variants in the M4 family that have now been released. It’s likely that next year Apple will release an Ultra, consisting of two M4 Max chips joined in tandem, in case you feel the burning desire for 24 P and 8 E cores.

[Table: key figures for each M4 chip variant; models available are marked in green at the right.]

Models available next week featuring each M4 chip are shown with green rectangles at the right.

There are two variants of the Base M4, one with 4P + 4E and 8 GPU cores, the same as Base variants in the M1 to M3 families. There's also a more capable variant, for the first time with 4P + 6E, which promises to be a better all-rounder, particularly in Game Mode. It also has an extra couple of GPU cores.

The M4 Pro also comes in two variants, this time differing in the number of P cores, 8 or 10, and GPU cores, 16 or 20. Those overlap with the M4 Max, with 10 or 12 P cores and 32 or 40 GPU cores. Thus the gap between M4 Pro and Max isn’t as great as in the M3, with the GPUs in the M4 Max being aimed more at those working with high-res video, for instance. For more general use, there’s little difference between the 14-core Pro and Max.

Memory and storage

Chips in the M4 family also determine the maximum memory and internal SSD capacity. Apple has at last eliminated base models with only 8 GB of memory, and all now start with at least 16 GB. Base M4 chips are limited to a maximum of 32 GB, while the M4 Pro can go up to 64 GB, and the 16-core Max up to 128 GB, although in its 14-core variant, the Max is only available with 36 GB (I’m very grateful to Thomas for pointing this out below).

Unfortunately, Apple hasn’t increased the minimum size of internal SSD, which remains at 256 GB for some base models. Smaller SSDs may be cheaper, but they are also likely to have shorter lives, as under heavy use their small number of blocks will be erased for reuse more frequently. That may shorten their life expectancy to much less than the normal period of up to 10 years, as was seen in some of the first M1 models. This is more likely to occur when swap space is regularly used for virtual memory. I for one would have preferred 512 GB as a starting point.

While Base M4 chips come with SSDs up to 2 TB in size, both Pro and Max can be supplied with internal SSDs of up to 8 TB.

I hope this proves useful in guiding your decision.

Disk Images: Performance

By: hoakley
16 October 2024 at 14:30

Over the last few years, the performance of disk images has been keenly debated. In some cases, writing to a disk image proceeds at a snail’s pace, but this has appeared unpredictable. Over two years ago, I reported sometimes dismal write performance to disk images, summarised in the table below.

[Table: write performance of disk images from tests over two years ago.]

This article presents new results from tests performed using macOS 15.0.1 Sequoia, which should give a clearer picture of what performance to expect now.

Methods

Previous work highlighted discrepancies depending on how tests were performed, whether on freshly made disk images, or on those that had already been mounted and written to. The following protocol was used throughout these new tests:

  1. A 100 GB APFS disk image was created using Disk Utility, which automatically mounts the disk image on completion.
  2. A single folder was created on the mounted disk image, then it was unmounted.
  3. After a few seconds, the disk image was mounted again by double-clicking it in the Finder, and was left mounted for at least 10 seconds before performing any tests. That should ensure read-write disk images are converted into sparse file format, and allows time for Trimming.
  4. My utility Stibium then wrote 160 test files ranging in size from 2 MB to 2 GB in randomised order, a total of just over 53 GB, to the test folder.
  5. Stibium then read those files back, in the same randomised order.
  6. The disk image was then unmounted, its size checked, and it was trashed.

All tests were performed on a Mac Studio M1 Max, using its 2 TB internal SSD, and an external Samsung 980 Pro 2 TB SSD in an OWC Express 1M2 enclosure running in USB4 mode.

Results

These are summarised in the table below.

[Table: read and write speeds for each type of disk image in macOS 15.0.1, on internal and external SSDs, encrypted and unencrypted.]

Read speeds for sparse bundles and read-write disk images were high, whether the container was encrypted or not. On the internal SSD, encryption resulted in read speeds about 1 GB/s lower than those for unencrypted tests, but differences for the external SSD were small and not consistent.

Write speeds were only high for sparse bundles, where encryption had similar effects. Read-write disk images showed write speeds of consistently about 1 GB/s, whether on the internal or external SSD, and regardless of encryption.

Read and write speeds for unencrypted sparse (disk) images were also slower. Although they wrote faster than read-write disk images on the internal SSD, read speeds were around 2.2 GB/s for both. Results for encrypted sparse images were by far the worst of all the tests performed, ranging between 0.08 and 0.5 GB/s.

Surprisingly good results were obtained from a new-style virtual machine with FileVault enabled in its disk image. Although previous tests had found read and write speeds of 4.4 and 0.7 GB/s respectively, the Sequoia VM achieved 5.9 and 4.5 GB/s.

Which disk image?

On grounds of performance only, the fastest and most consistent type of disk image is a sparse bundle (UDSB). Although on fast internal SSDs there is a reduction in read and write speeds of about 1 GB/s when encrypted using 256-bit AES, no such reduction should be seen on fast external SSDs.

On read speed alone, a read-write disk image is slightly faster than a sparse bundle, but its write speed is limited to 1 GB/s. For disk images that are more frequently read from than written to, read-write disk images should be almost as performant as sparse bundles.

However, sparse (disk) images delivered weakest performance, being particularly slow when encrypted. Compared with previous results from 2022, unencrypted write performance has improved, from 0.9 to 2.0 GB/s, but their use still can’t be recommended.

Performance range

It’s hard to explain how three different types of disk image can differ so widely in their performance. Using the same container encryption, write speeds ranged from 0.08 to 3.2 GB/s, for a sparse image and sparse bundle, on an external SSD with a native write speed of 3.2 GB/s. It’s almost as if sparse images are being deprecated by neglect.

Currently, excellent performance is also delivered by FileVault images used by Apple’s lightweight virtualisation on Apple silicon. The contrast is great between its 4.5 GB/s write speed and that of an encrypted sparse image at 0.1 GB/s, a factor of 45 when running on identical hardware.

Recommendations

  • For general use, sparse bundles (UDSB) are to be preferred for their consistently good read and write performance, whether encrypted or unencrypted.
  • When good write speed is less important, read-write disk images (UDRW) can be used, although their write performance is comparable to that of USB 3.2 Gen 2 at 10 Gb/s and no faster.
  • Sparse (disk) images (UDSP) are to be avoided, particularly when encrypted, as they’re likely to give disappointing performance.
  • Encrypting UDSB or UDRW disk images adds little if any performance overhead, and should be used whenever needed.

Previous articles

Introduction
Tools
How read-write disk images have gone sparse

A Personal Guide to Backups

By: Lucky
14 June 2022 at 22:30

Why back up

I'm sure everyone has at some point had a device freeze, suffer water damage, get stolen, or even be held to ransom by hackers. Statistically, losing data is only a matter of time. That's why you back up; otherwise the memories of your youth and everything else you treasure could be gone for good, and that hurts, perhaps more than a breakup.

'Only when the same data exists on different devices is it a backup; simply moving it to another device to store is not a backup.'[1]

I've talked to some very capable people online and found that even they don't do real backups. They dump their data onto a hard drive, or into assorted cloud drives and services; even programmers often just keep their code on GitHub. None of that is a backup. Leaving aside the Chinese cloud drives, services such as iCloud and Google Drive are essentially sync drives, and both iCloud and Google Drive have had outages in the past. Don't put all your eggs in one basket; a wily rabbit keeps three burrows.

The purpose of a backup is to allow a lossless restore. If it can't be restored, storing any amount of data is pointless.

Types of backup

  • Full backup (including the system): a large amount of data, and a long time to back up.
  • Incremental backup: contains only data changed since the most recent backup of any kind.
  • Differential backup: contains all files changed since the last full backup.

Take the first, a full-disk backup: it lets you move everything from one computer to another exactly as it was. Some backups don't include system settings, so after restoring the data you have to set application preferences, font sizes, system language and so on all over again… The benefit of preparing a system backup is that it includes a bootable system, such as WinPE for Windows or a bootable installer for macOS.

The difference between differential and incremental backups is that a differential backup contains everything changed since the last full backup, whereas an incremental backup contains only what has changed since the most recent backup, whether that was full, differential or incremental. It's a little confusing… For example, with a full backup on Sunday, Wednesday's incremental backup holds only the changes since Tuesday's backup, while Wednesday's differential backup holds everything changed since Sunday.

The 3-2-1-1-0 rule

  • Keep at least three copies of your data (the original plus two copies)
  • Store them on at least two different types of storage media
  • Keep one backup copy off-site

To the 3-2-1 rule, add two more points[2]:

  • Keep one copy of the media offline or air-gapped
  • Make sure every recovery solution has zero errors (remember the purpose of a backup, above)

In this age of advanced technology, the safest form of storage is actually still magnetic tape, buried in a corner somewhere (like the film GitHub buried in the Arctic[3]).

Cloud backup

Of course we don't need to go to such extremes; a reliable cloud backup will do, and it also counts as off-site backup. As mentioned above, ordinary cloud drives don't count as backups, so choose a cloud storage service made for backup. Which backup services are reliable? There aren't many to choose from.

The one everyone agrees on is Backblaze Personal Backup: $7 a month for unlimited backup capacity, in operation since 2007, and stable. It's the first choice for the lazy: leave the client running in the background and it backs up without you noticing, and there's a mobile app as well.

For users in mainland China, the network blocking means it consumes proxy traffic, so keep an eye on data usage during the first backup. The stability of access from China also has to be considered, and for now there's no good answer to that.

There's also a piece of software that strictly speaking doesn't provide cloud storage itself: Arq, in operation since 2009, $49.99 as a one-off purchase for a single device, or Premium at $59.99/year (which includes 1 TB of cloud space; not recommended).

It's used together with third-party cloud storage. The official description is 'Use it for encrypted, versioned backups of your important files', and the encryption is a welcome feature.

In effect it's an integrated manager for cloud services.

Hard drive backup

I recommend hard drives for storage. They come as HDDs (mechanical hard drives) and SSDs (solid-state drives); there's a great deal to know about drives that I won't go into here, and if you're interested you can search this blog for SSD.

By comparison, the advantages of an SSD are:

  1. SSDs read and write fast, and speed is king
  2. SSDs have a long lifespan in terms of writes and erases
  3. SSDs can be moved while in use

Disadvantages:

  1. They're expensive
  2. Data recovery is difficult

Whether HDD or SSD, 'any storage device is a consumable' that could die on you any day, so get yourself a spare drive. Data can sometimes be rescued from an SSD, but recovery is difficult (it depends on which part of the drive has failed), and above all data recovery is expensive (upwards of ¥2,000), troublesome, and not exactly private (remember the Edison Chen photo scandal).

Time Machine

If you use a Mac you can try this. It's free, but not that good: its configuration logic is too simplistic and Apple hasn't put much effort into it, hence the software recommendations below.

Backup software to use with a hard drive

  • SuperDuper ($27.95 one-off purchase, since 2004; there's a free version with fewer features that's still usable). Its strengths are that it can make an SSD into a bootable drive, and it supports differential backups.

  • Carbon Copy Cloner 6, $39.99. An even longer-established app whose official slogan is 'A leader in Mac backups since 2002'; better speed and stability than Time Machine, and a good reputation.

The backup software above and Backblaze all offer trials, so pick the one that suits you.

NAS backup

I don't use a NAS, so I can't say much about it; choose according to your needs. Even with a NAS you should have a second line of defence and keep an off-site backup. You may not be worried about a drive failing, but there are still forces beyond your control.

What to back up

Besides a full-disk backup, pay attention to data kept in the cloud, such as the passwords I store in Bitwarden. I can't think of anything else for now.

Having read all this, have you backed up today? Just do it!

References

PrimoCache: using an SSD as a cache to speed up a mechanical hard drive

By: 胡中元
29 May 2018 at 13:22

For a computer's storage, an SSD is unquestionably better than a mechanical hard drive in every respect, but in the spirit of Marxist contradiction theory there is a contradiction between 'slow HDDs and expensive SSDs'. My laptop currently uses a 128 GB + 1 TB combination, and is, and will long remain, caught in this 'basic contradiction of personal computer storage'.

Then I came across PrimoCache, and I recommend it.

PrimoCache is a piece of software that can turn physical memory, an SSD, or a flash drive into a cache for your hard disks. It automatically stores data read from the hard disk on faster devices such as physical memory; when the system needs that data again it can be read quickly from the cache device, without having to access the slower hard disk again, which effectively improves the disk's access performance.

Official site (Chinese): http://www.romexsoftware.com/zh-cn/primo-cache/index.html
Platform: Windows (similar tools exist for *nix)
Licence: shareware

An update, two months later:

After two months of real-world use, the software isn't as perfect as advertised. A few programs hard-freeze the machine as soon as they run (KartRider, and I've confirmed this software is the cause), and the whole system feels somewhat less stable (occasional error messages of unclear meaning pop up). There's also the extra memory it uses.

In short, I don't recommend using it to accelerate the system drive, nor for most other uses. It's only worth using if you have games you play often that are too large, at tens of GB, to fit on the SSD.

Caching

I think the idea is excellent, and caching is used very widely across computer hardware and software. It's similar to Intel's earlier Rapid Storage Technology (RST) and to Intel Optane: a small amount of fast SSD is used as a cache to accelerate a slow HDD, so the computer has the capacity of an HDD with speeds approaching those of an SSD.

Which data gets cached on the SSD? That's decided by an algorithm, which automatically selects the most frequently used data on the HDD.

The difference between PrimoCache and RST or Optane is that this software doesn't require a recent Intel motherboard or an Intel Optane memory module; it works with any existing SSD.

PrimoCache can also use RAM as a level-1 cache, with the SSD as the level-2 cache.

Yes, this is a feature specific to PrimoCache. RAM read/write speeds are measured in GB per second, an order of magnitude faster than an SSD, so it can effectively accelerate the SSD as well. (That said, I haven't directly noticed the difference; by that point the bottleneck is probably no longer I/O.)
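
As a rough illustration of the two-tier idea (a toy sketch only, with nothing to do with PrimoCache's actual algorithm), a read tries the small fast tier first, then the larger tier, and only then the slow backing store, promoting whatever it finds along the way:

```swift
// Toy two-tier cache: a small, fast level-1 tier (think RAM) in front of a
// larger level-2 tier (think SSD), in front of a slow backing store (the HDD).
// Real cache software uses smarter eviction; this just evicts a random entry.
struct TwoTierCache<Key: Hashable, Value> {
    private var level1: [Key: Value] = [:]
    private var level2: [Key: Value] = [:]
    private let level1Capacity: Int
    private let level2Capacity: Int
    private let backingStore: (Key) -> Value   // the slow read we want to avoid

    init(level1Capacity: Int, level2Capacity: Int, backingStore: @escaping (Key) -> Value) {
        self.level1Capacity = level1Capacity
        self.level2Capacity = level2Capacity
        self.backingStore = backingStore
    }

    mutating func read(_ key: Key) -> Value {
        if let hit = level1[key] { return hit }                     // fastest path: RAM tier
        if let hit = level2[key] {                                  // SSD tier: promote to RAM
            Self.store(&level1, key: key, value: hit, capacity: level1Capacity)
            return hit
        }
        let value = backingStore(key)                               // slow path: go to the HDD
        Self.store(&level2, key: key, value: value, capacity: level2Capacity)
        Self.store(&level1, key: key, value: value, capacity: level1Capacity)
        return value
    }

    private static func store(_ tier: inout [Key: Value], key: Key, value: Value, capacity: Int) {
        if tier.count >= capacity, let victim = tier.keys.randomElement() {
            tier.removeValue(forKey: victim)                        // crude eviction
        }
        tier[key] = value
    }
}

// Only the first read of each block pays the cost of the slow closure.
var cache = TwoTierCache<Int, String>(level1Capacity: 2, level2Capacity: 8) { block in
    "data for block \(block)"   // stand-in for a slow HDD read
}
print(cache.read(1))   // misses both tiers, reads from the 'HDD'
print(cache.read(1))   // level-1 hit
```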

Results

I can now happily keep games of tens of GB on the mechanical drive, and let PrimoCache give them a satisfying read speed.

I used 12 GB of SSD as the level-2 cache and 1 GB of RAM as the level-1 cache; benchmarking the mechanical drive gave the following results.

Without the cache:

With the cache:

Note that because caching works by keeping frequently used data in the SSD and RAM so it can be fetched quickly when needed, a benchmark's random reads and writes don't get that warm-up, so the figures don't reflect the real-world effect. Even so, a clear improvement is visible.

Caveats

Drawbacks found so far:

  • Using an SSD as the level-2 cache takes a certain amount of RAM to store the mapping.
  • It's paid software, although cracked versions exist.
  • My graphics card was down-clocked on one occasion and recovered after I closed the software, although it didn't happen again when the software was turned back on.

Also, although I have 16 GB of RAM, I use less than 2 GB of it as disk cache: most large applications already use RAM to speed themselves up, so there's no need for us to duplicate that, and plenty of free RAM is itself a way to keep a computer responsive.
