Tuesday, January 22, 2019

Slow Firebase querying

Regarding your follow-up question, I would like to add more context about load spikes.
The percentage shown roughly represents the maximum I/O operations per second that your database can handle, with the caveat that the throughput you actually get also depends on the size of the individual operations. During such a peak, some of your operations will not be executed at the time you expect them to trigger. In other words, once the load reaches 100%, your database will start queueing additional requests in a backlog until they can be executed. This surfaces as latency, and in severe cases (if you stay at 100% for an extended period of time) the database can even appear unresponsive, although the requests are ultimately served, just very slowly. The higher the load percentage your project is incurring, the greater the latency you will experience.

These load peaks are usually caused by the following (which also applies to database operations performed inside Cloud Functions):
  • Large individual reads, typically a value listener high up in the tree (see the listener sketch after this list)
  • Accessing a specific node in a huge collection
  • Queries against large collections
  • Too frequent writes
  • Deleting or exporting large amounts of data
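To make the first cause above concrete, here is a minimal sketch (TypeScript with the Node.js Admin SDK) contrasting a value listener attached high up in the tree with one scoped to a single child. The /users path and the user id are hypothetical and purely for illustration:

    import * as admin from 'firebase-admin';

    admin.initializeApp(); // assumes default credentials and databaseURL are configured
    const db = admin.database();

    // Anti-pattern: a value listener high up in the tree forces a large initial read
    // of the whole subtree and keeps all of it synchronized, which drives load spikes.
    db.ref('/users').on('value', snapshot => {
      console.log('users changed, child count:', snapshot.numChildren());
    });

    // Narrower alternative: listen only to the node the client actually needs.
    const uid = 'some-user-id'; // hypothetical id
    db.ref(`/users/${uid}/profile`).on('value', snapshot => {
      console.log('profile changed:', snapshot.val());
    });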
If I may, you could check the following suggestions, which may help:
  • Archive or delete nodes in any list that contains more than 100k items (or more than 10k nodes if the list is queried hundreds of times per second). This matters even when the query limit is set to a small value, e.g. limitToFirst(10): the Firebase Realtime Database first needs to traverse all of the data and build an index in memory, and only then can it fetch the results.
  • Ensure queries are efficient and only retrieve what users can actually consume, i.e. fetch 20 records initially and load more as needed rather than trying to preload 200 (see the pagination sketch after this list).
  • If you have a backup process or server script downloading large chunks of your database daily/hourly/etc., switch to our automated backups and use those for analysis, backups, et al.
  • Throttle the number of bytes being deleted at once by navigating deeper into your nodes and deleting them in chunks, to avoid unresponsiveness (see the chunked-delete sketch after this list).
  • For write operations (see the write-smearing sketch after this list):
      • If the writes come from many clients, we suggest randomizing when each client performs its operation.
      • If the writes come from your backend, we advise smearing the requests over a longer period of time.
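On the pagination suggestion, a rough sketch of the "fetch 20, load more as needed" pattern with the Node.js Admin SDK could look like the following; the /items path, the page size, and the fetchPage helper are assumptions for illustration:

    import * as admin from 'firebase-admin';

    admin.initializeApp();
    const itemsRef = admin.database().ref('/items');

    // Fetch one page of results, optionally continuing after the last key of the previous page.
    async function fetchPage(pageSize: number, lastKey?: string) {
      // startAt is inclusive, so when continuing we request one extra item and skip the boundary key.
      let query = itemsRef.orderByKey();
      if (lastKey !== undefined) {
        query = query.startAt(lastKey);
      }
      const snapshot = await query
        .limitToFirst(lastKey !== undefined ? pageSize + 1 : pageSize)
        .once('value');

      const items: Array<{ key: string; value: unknown }> = [];
      snapshot.forEach(child => {
        if (child.key !== lastKey) {
          items.push({ key: String(child.key), value: child.val() });
        }
      });
      const nextKey = items.length > 0 ? items[items.length - 1].key : undefined;
      return { items, nextKey };
    }

    // Example usage: load the first page, then the next one only when the user asks for it.
    // const first = await fetchPage(20);
    // const second = await fetchPage(20, first.nextKey);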
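For the deletion suggestion, one possible way to delete a large node in small batches (rather than issuing a single remove() on the parent) is sketched below; the /logs/2018 path, the chunk size, and the deleteInChunks helper are again hypothetical:

    import * as admin from 'firebase-admin';

    admin.initializeApp();
    const db = admin.database();

    // Delete the children of a large node in small batches so that each write stays
    // small and the database never has to process one enormous delete.
    async function deleteInChunks(path: string, chunkSize = 100): Promise<void> {
      const ref = db.ref(path);
      while (true) {
        // Read a small batch of children (limitToFirst bounds the size of the read).
        const snapshot = await ref.orderByKey().limitToFirst(chunkSize).once('value');
        if (!snapshot.exists()) {
          break; // nothing left to delete
        }
        // Build a multi-path update that sets each child to null, i.e. deletes it.
        const updates: Record<string, null> = {};
        snapshot.forEach(child => {
          updates[String(child.key)] = null;
        });
        await ref.update(updates);
      }
    }

    // Example usage: deleteInChunks('/logs/2018').catch(console.error);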
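Finally, for smoothing out backend writes, one way to smear a batch of updates over a time window with some random jitter is sketched below; the window length, the helper names, and the shape of the updates array are assumptions rather than a prescribed API:

    import * as admin from 'firebase-admin';

    admin.initializeApp();
    const db = admin.database();

    const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

    // Spread a batch of writes over a time window instead of firing them all at once,
    // with random jitter so many workers do not end up aligned on the same schedule.
    async function smearWrites(
      updates: Array<{ path: string; value: unknown }>,
      windowMs = 60_000,
    ): Promise<void> {
      const spacing = windowMs / Math.max(updates.length, 1);
      for (const { path, value } of updates) {
        await db.ref(path).set(value);
        await sleep(spacing * (0.5 + Math.random())); // jittered delay between writes
      }
    }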