How does TiKV handle read requests?

Recently, I was reading the TiKV source code, and I have some questions.
How does TiKV handle read requests through the RawGet API?
Data can be read through the send_command() method provided by the RaftStoreRouter interface, but the RawGet API looks like it reads from a snapshot.
How does this work in general? For example, how is the snapshot created, and when is it updated? Thank you very much.

First, I would suggest reading this great article to get an idea of how TiKV works overall: https://en.pingcap.com/blog/how-tikv-reads-and-writes/.

For RawGet, the overall steps are as follows:
Upon a RawGet request from gRPC, the Storage layer creates a task in the read pool: first it creates a snapshot (via a RaftCmdRequest), and then it uses that snapshot to read the data. The corresponding code is in src/storage/mod.rs (fn raw_get).
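The two-step shape of this flow (get a snapshot, then read from it) can be sketched as below. This is an illustrative model, not TiKV's actual API: the Engine and Snapshot types here are hypothetical stand-ins for the real engine behind RaftCmdRequest and the RocksDB snapshot.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a RocksDB snapshot: a frozen view of the data.
struct Snapshot {
    data: HashMap<String, String>,
}

impl Snapshot {
    // Stand-in for reading a key from the frozen view.
    fn get(&self, key: &str) -> Option<String> {
        self.data.get(key).cloned()
    }
}

// Hypothetical stand-in for the storage engine.
struct Engine {
    data: HashMap<String, String>,
}

impl Engine {
    // Stand-in for the snapshot-creation step.
    fn snapshot(&self) -> Snapshot {
        Snapshot { data: self.data.clone() }
    }
}

// The two-step RawGet: create a snapshot, then read through it.
fn raw_get(engine: &Engine, key: &str) -> Option<String> {
    let snap = engine.snapshot();
    snap.get(key)
}

fn main() {
    let mut data = HashMap::new();
    data.insert("k1".to_string(), "v1".to_string());
    let engine = Engine { data };
    assert_eq!(raw_get(&engine, "k1"), Some("v1".to_string()));
    assert_eq!(raw_get(&engine, "missing"), None);
}
```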

The snapshot is basically the current view of that TiKV node. To create a snapshot, it must go through a RaftCmdRequest, which first makes sure the current leader's lease is still valid (which means it really is the leader of that region). Note that if the leader is not in lease, it first has to go through the ReadIndex process via the Raft protocol to confirm that it is still the leader.
Once it knows the TiKV node really is the leader, it calls the local RocksDB engine to create a snapshot (the code is get_snapshot in components/raftstore/src/store/peer.rs). With that snapshot, it simply calls get_value_cf_opt to read the data.
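The lease check described above amounts to a branch between a cheap local read and the slower ReadIndex path. A minimal sketch, with hypothetical names and clock values (this is not TiKV's actual code):

```rust
#[derive(Debug, PartialEq)]
enum ReadPath {
    // Lease still valid: serve the snapshot read locally.
    LocalRead,
    // Lease expired: confirm leadership via Raft ReadIndex first.
    ReadIndex,
}

// Hypothetical clock values in milliseconds; in TiKV the lease is
// tracked with monotonic timestamps tied to Raft heartbeats.
fn choose_read_path(now_ms: u64, lease_expiry_ms: u64) -> ReadPath {
    if now_ms < lease_expiry_ms {
        ReadPath::LocalRead
    } else {
        ReadPath::ReadIndex
    }
}

fn main() {
    // Within the lease: fast local read.
    assert_eq!(choose_read_path(100, 200), ReadPath::LocalRead);
    // Lease expired: must go through the ReadIndex procedure.
    assert_eq!(choose_read_path(300, 200), ReadPath::ReadIndex);
}
```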

A snapshot is read-only, so it never needs to be updated. That said, all changes made after the snapshot is created are invisible to the snapshot's caller; RocksDB guarantees that.

Hope it helps.


That is helpful, thanks very much!
But I still have a question.
Does TiKV need to create a snapshot for each RawGet?
Is it expensive to create a snapshot?

For each gRPC API call (e.g. raw get, or batch raw get), it will create a new snapshot; that is, for each new ThreadReadId, it will create a new snapshot.
The snapshot itself is not expensive: in RocksDB it is just a global sequence number (you can think of it as a kind of timestamp; all changes made after that point are invisible to that snapshot).
However, to create the snapshot, TiKV must make sure the node is the leader, which is sometimes expensive. When the leader's lease has expired, confirming that the TiKV node really is the leader is quite costly, since it needs to go through the ReadIndex procedure.
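The sequence-number idea can be shown with a tiny model. This is an illustrative sketch of RocksDB-style snapshot semantics, not RocksDB's implementation: every write is tagged with an increasing sequence number, a snapshot is just the number recorded at creation time, and a read through the snapshot only sees versions at or below that number.

```rust
use std::collections::HashMap;

struct Db {
    seq: u64,
    // key -> list of (sequence number, value) versions, newest last.
    versions: HashMap<String, Vec<(u64, String)>>,
}

impl Db {
    fn new() -> Self {
        Db { seq: 0, versions: HashMap::new() }
    }

    // Each write bumps the global sequence number.
    fn put(&mut self, key: &str, value: &str) {
        self.seq += 1;
        self.versions
            .entry(key.to_string())
            .or_insert_with(Vec::new)
            .push((self.seq, value.to_string()));
    }

    // Creating a "snapshot" is just recording the current number: cheap.
    fn snapshot(&self) -> u64 {
        self.seq
    }

    // Read the newest version visible at the snapshot's sequence number.
    fn get_at(&self, snap: u64, key: &str) -> Option<&String> {
        self.versions
            .get(key)?
            .iter()
            .rev()
            .find(|(s, _)| *s <= snap)
            .map(|(_, v)| v)
    }
}

fn main() {
    let mut db = Db::new();
    db.put("k", "v1");
    let snap = db.snapshot(); // records seq = 1
    db.put("k", "v2");        // seq = 2, invisible through `snap`
    assert_eq!(db.get_at(snap, "k"), Some(&"v1".to_string()));
    assert_eq!(db.get_at(db.snapshot(), "k"), Some(&"v2".to_string()));
}
```

Because the snapshot holds no copied data, only a number, creating one is O(1); that is why the expensive part of a read is the leadership check, not the snapshot.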
In v5.4 and later, there is a feature that automatically renews the lease to mitigate the problem above.