What does the GC actually clean up?

In Go, temporary variables defined inside a function that stay on the stack are reclaimed automatically when the function returns, with no GC involvement. Values that escape to the heap — for example because a pointer to them outlives the function — are not reclaimed when the function returns; they have to be swept by the global GC.

A cache component stores a large number of objects and is essentially one big hash table. If the GC had to scan all of those objects, it could introduce significant latency, so the goal is to keep the GC from scanning them at all.

This relies on an optimization introduced in Go 1.5: when neither the keys nor the values of a map contain pointers (i.e. they are basic, pointer-free types), the GC does not scan the map's contents.

For the key: a Go string is backed by a pointer, so the key is hashed into an integer first. Hashing can lose information, and keys must still be distinguishable when hashes collide, so the original key is stored together with the value.

For the value: a very large byte buffer buf is allocated. The original key and value are serialized into a []byte and written into buf, and the offset of that entry is recorded as the map value, i.e. map[hash(key)] = offset. To keep memory consumption in check, buf is organized as a ring queue.

BigCache: https://github.com/allegro/bigcache

https://syslog.ravelin.com/further-dangers-of-large-heaps-in-go-7a267b57d487

to keep the amount of GC work down you essentially have two choices as follows.
1. Make sure the memory you allocate contains no pointers. That means no slices, no strings, no time.Time, and definitely no pointers to other allocations. If an allocation has no pointers it gets marked as such and the GC does not scan it.
2. Allocate the memory off-heap by directly calling the mmap syscall yourself. Then the GC knows nothing about the memory. This has upsides and downsides. The downside is that this memory can't really be used to reference objects allocated normally, as the GC may think they are no longer in-use and free them.

How it works (from the BigCache README):

BigCache relies on optimization presented in 1.5 version of Go (issue-9477). This optimization states that if map without pointers in keys and values is used then GC will omit its content. Therefore BigCache uses map[uint64]uint32 where keys are hashed and values are offsets of entries. Entries are kept in byte slices, to omit GC again. Byte slices size can grow to gigabytes without impact on performance because GC will only see single pointer to it.

Bigcache vs Freecache (from the README):

Both caches provide the same core features but they reduce GC overhead in different ways. Bigcache relies on map[uint64]uint32, freecache implements its own mapping built on slices to reduce number of pointers.

Test data from the bigcache team's blog:

With an empty cache, this endpoint had maximum responsiveness latency of 10ms for 10k rps. When the cache was filled, it had more than a second latency for 99th percentile. Metrics indicated that there were over 40 mln objects in the heap and GC mark and scan phase took over four seconds.

In the end they chose map[uint64]uint64 as the core index inside each cacheShard: the key is the uint64 hash computed during sharding, and the value stores only an offset. The entries themselves live in a FIFO bytes queue, which also fits the requirement of evicting in insertion order.

After this optimization, bigcache's GC behavior with 20 million entries:

```
go version
go version go1.13 linux/arm64

go run caches_gc_overhead_comparison.go
Number of entries: 20000000
GC pause for bigcache: 22.382827ms
GC pause for freecache: 41.264651ms
GC pause for map: 72.236853ms
```
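These figures come from bigcache's own caches_gc_overhead_comparison.go. As a rough sketch of how such a comparison can be taken — this is not the actual benchmark; the entry count, value shapes, and the gcPause helper below are illustrative assumptions — one can fill a pointer-heavy map and a pointer-free map and compare the GC pauses reported by runtime.ReadMemStats:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// gcPause forces a GC cycle and returns its most recent stop-the-world pause.
func gcPause() time.Duration {
	runtime.GC()
	var stats runtime.MemStats
	runtime.ReadMemStats(&stats)
	// PauseNs is a circular buffer of recent pauses; the latest is at (NumGC+255)%256.
	return time.Duration(stats.PauseNs[(stats.NumGC+255)%256])
}

func main() {
	const entries = 2_000_000 // arbitrary, smaller than the 20M used in the article

	// A map whose keys and values contain pointers: the GC must scan every entry.
	withPointers := make(map[string][]byte, entries)
	for i := 0; i < entries; i++ {
		withPointers[fmt.Sprint(i)] = make([]byte, 8)
	}
	fmt.Println("GC pause, map with pointers:", gcPause())

	withPointers = nil // drop the pointer-heavy map before the next measurement
	runtime.GC()

	// A pointer-free map (the shape bigcache relies on): the GC skips its contents.
	noPointers := make(map[uint64]uint32, entries)
	for i := 0; i < entries; i++ {
		noPointers[uint64(i)] = uint32(i)
	}
	fmt.Println("GC pause, map[uint64]uint32:", gcPause())

	runtime.KeepAlive(noPointers)
}
```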
Initialization

```go
// BigCache is fast, concurrent, evicting cache created to keep big number of entries without impact on performance.
// It keeps entries on heap but omits GC for them. To achieve that, operations take place on byte arrays,
// therefore entries (de)serialization in front of the cache will be needed in most use cases.
type BigCache struct {
	shards     []*cacheShard // cache shards
	lifeWindow uint64        // expiration window
	clock      clock
	hash       Hasher
	config     Config
	shardMask  uint64
	close      chan struct{}
}

type Config struct {
	// Number of cache shards, value must be a power of two
	Shards int
	// Time after which entry can be evicted
	LifeWindow time.Duration
	// Interval between removing expired entries (clean up).
	// If set to <= 0 then no action is performed.
	// Setting to < 1 second is counterproductive — bigcache has a one second resolution.
	CleanWindow time.Duration
	// Max number of entries in life window. Used only to calculate initial size for cache shards.
	// When proper value is set then additional memory allocation does not occur.
	MaxEntriesInWindow int
	// Max size of entry in bytes. Used only to calculate initial size for cache shards.
	MaxEntrySize int
	// StatsEnabled if true calculate the number of times a cached resource was requested.
	StatsEnabled bool
	// Verbose mode prints information about new memory allocation
	Verbose bool
	// Hasher used to map between string keys and unsigned 64bit integers, by default fnv64 hashing is used.
	Hasher Hasher
	// HardMaxCacheSize is a limit for BytesQueue size in MB.
	// It can protect application from consuming all available memory on machine, therefore from running OOM Killer.
	// Default value is 0 which means unlimited size. When the limit is higher than 0 and reached then
	// the oldest entries are overridden for the new ones. The max memory consumption will be bigger than
	// HardMaxCacheSize due to Shards' additional memory. Every Shard consumes additional memory for map of keys
	// and statistics (map[uint64]uint32); the size of this map is equal to number of entries in
	// cache ~ 2×(64+32)×n bits + overhead of map itself.
	HardMaxCacheSize int
	// OnRemove is a callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called.
	// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
	// Ignored if OnRemoveWithMetadata is specified.
	OnRemove func(key string, entry []byte)
	// OnRemoveWithMetadata is a callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A structure representing details about that specific entry is passed.
	// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
	OnRemoveWithMetadata func(key string, entry []byte, keyMetadata Metadata)
	// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time or no space left
	// for the new entry, or because delete was called. A constant representing the reason will be passed through.
	// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
	// Ignored if OnRemove is specified.
	OnRemoveWithReason func(key string, entry []byte, reason RemoveReason)

	onRemoveFilter int

	// Logger is a logging interface and used in combination with Verbose
	// Defaults to DefaultLogger()
	Logger Logger
}
```
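As a hedged illustration of how these fields are typically filled in and how the cache is constructed — this assumes bigcache v3's bigcache.New(ctx, config) constructor (older releases expose bigcache.NewBigCache(config) instead), and the field values are arbitrary examples, not recommendations:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/allegro/bigcache/v3"
)

func main() {
	// Illustrative values only — tune them for your own workload.
	config := bigcache.Config{
		Shards:             1024,             // must be a power of two
		LifeWindow:         10 * time.Minute, // every entry shares this lifetime
		CleanWindow:        5 * time.Minute,  // background cleanup interval; 0 disables it
		MaxEntriesInWindow: 1000 * 10 * 60,   // only used to size the initial allocation
		MaxEntrySize:       500,              // bytes; also only used for initial sizing
		Verbose:            true,
		HardMaxCacheSize:   8192, // MB; 0 means no hard limit
	}

	cache, err := bigcache.New(context.Background(), config)
	if err != nil {
		log.Fatal(err)
	}

	// Entries are plain []byte values.
	if err := cache.Set("my-unique-key", []byte("value")); err != nil {
		log.Fatal(err)
	}
	log.Println("cache ready")
}
```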
```go
// DefaultConfig initializes config with default values.
// When load for BigCache can be predicted in advance then it is better to use custom config.
func DefaultConfig(eviction time.Duration) Config {
	return Config{
		Shards:             1024,
		LifeWindow:         eviction,
		CleanWindow:        1 * time.Second,
		MaxEntriesInWindow: 1000 * 10 * 60,
		MaxEntrySize:       500,
		StatsEnabled:       false,
		Verbose:            true,
		Hasher:             newDefaultHasher(),
		HardMaxCacheSize:   0,
		Logger:             DefaultLogger(),
	}
}
```

The number of shards (Shards) must be a power of two.

Writes go through:

```go
func (c *BigCache) Set(key string, entry []byte) error
```

The entry type can only be []byte; to store a struct you first need to serialize it with a tool such as json.Marshal.

The key is hashed:

```go
func (f fnv64a) Sum64(key string) uint64 {
	var hash uint64 = offset64
	for i := 0; i < len(key); i++ {
		hash ^= uint64(key[i])
		hash *= prime64
	}
	return hash
}
```

The hash is then used to locate a shard:

```go
// set during initialization
shardMask: uint64(config.Shards - 1),

func (c *BigCache) getShard(hashedKey uint64) (shard *cacheShard) {
	return c.shards[hashedKey&c.shardMask]
}
```

With the defaults, shards = 1024 (binary 10000000000) and shardMask = 1023 (binary 1111111111). Because Shards is a power of two, the shard can be selected with a bitwise AND, which is cheaper than a modulo operation.

The entry is then written into the shard:

```go
type cacheShard struct {
	hashmap     map[uint64]uint64 // index: hashedKey -> index of entry in the queue
	entries     queue.BytesQueue  // actual data storage
	lock        sync.RWMutex
	entryBuffer []byte
	onRemove    onRemoveCallback

	isVerbose    bool
	statsEnabled bool
	logger       Logger
	clock        clock
	lifeWindow   uint64

	hashmapStats map[uint64]uint32
	stats        Stats
	cleanEnabled bool
}

func (s *cacheShard) set(key string, hashedKey uint64, entry []byte) error {
	currentTimestamp := uint64(s.clock.Epoch())

	s.lock.Lock()

	// Collision check: blank out the previous entry for this hash,
	// a blunt but simple way to deal with hash collisions.
	if previousIndex := s.hashmap[hashedKey]; previousIndex != 0 {
		if previousEntry, err := s.entries.Get(int(previousIndex)); err == nil {
			resetHashFromEntry(previousEntry)
			// remove hashkey
			delete(s.hashmap, hashedKey)
		}
	}

	if !s.cleanEnabled {
		// On every insert, bigCache peeks at the entry at the head of the BytesQueue
		if oldestEntry, err := s.entries.Peek(); err == nil {
			// and evicts it if it has expired.
			s.onEvict(oldestEntry, currentTimestamp, s.removeOldestEntry)
		}
	}

	// Wrap the data into a single []byte.
	w := wrapEntry(currentTimestamp, hashedKey, key, entry, &s.entryBuffer)

	for {
		// Push succeeded: record the returned index in the hashmap.
		if index, err := s.entries.Push(w); err == nil {
			s.hashmap[hashedKey] = uint64(index)
			s.lock.Unlock()
			return nil
		}
		// Push failed because there is no space left, so evict the oldest entry and retry.
		if s.removeOldestEntry(NoSpace) != nil {
			s.lock.Unlock()
			return errors.New("entry is bigger than max shard size")
		}
	}
}

func wrapEntry(timestamp uint64, hash uint64, key string, entry []byte, buffer *[]byte) []byte {
	keyLength := len(key)
	blobLength := len(entry) + headersSizeInBytes + keyLength

	if blobLength > len(*buffer) {
		*buffer = make([]byte, blobLength)
	}
	blob := *buffer

	binary.LittleEndian.PutUint64(blob, timestamp)
	binary.LittleEndian.PutUint64(blob[timestampSizeInBytes:], hash)
	binary.LittleEndian.PutUint16(blob[timestampSizeInBytes+hashSizeInBytes:], uint16(keyLength))
	copy(blob[headersSizeInBytes:], key)
	copy(blob[headersSizeInBytes+keyLength:], entry)

	return blob[:blobLength]
}

// BytesQueue is a non-thread safe queue type of fifo based on bytes array.
// For every push operation index of entry is returned. It can be used to read the entry later
type BytesQueue struct {
	full         bool
	array        []byte // where the data actually lives
	capacity     int
	maxCapacity  int
	head         int
	tail         int
	count        int
	rightMargin  int
	headerBuffer []byte
	verbose      bool
}
```

cacheShard.entryBuffer is a reusable scratch container for wrapping entries. It is initialized as entryBuffer: make([]byte, config.MaxEntrySize+headersSizeInBytes), which represents the maximum space a single entry is expected to occupy. If an entry needs more space than entryBuffer provides, the buffer is replaced with a larger one.

bigCache stores its data in a BytesQueue whose underlying storage is a single []byte, so the cache adds only one extra object for the GC to track. Since a byte slice contains no pointers beyond the slice header itself, the GC can mark the whole object in O(1).
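To make the layout that wrapEntry produces concrete, here is a self-contained sketch — pack and unpack are hypothetical helper names, not part of bigcache — that writes the same timestamp | hash | key length | key | value layout into a []byte and reads the key and value back:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const (
	timestampSizeInBytes = 8 // uint64 timestamp
	hashSizeInBytes      = 8 // uint64 key hash
	keyLengthSizeInBytes = 2 // uint16 key length
	headersSizeInBytes   = timestampSizeInBytes + hashSizeInBytes + keyLengthSizeInBytes
)

// pack mirrors wrapEntry's layout: header | key | value.
func pack(timestamp, hash uint64, key string, value []byte) []byte {
	blob := make([]byte, headersSizeInBytes+len(key)+len(value))
	binary.LittleEndian.PutUint64(blob, timestamp)
	binary.LittleEndian.PutUint64(blob[timestampSizeInBytes:], hash)
	binary.LittleEndian.PutUint16(blob[timestampSizeInBytes+hashSizeInBytes:], uint16(len(key)))
	copy(blob[headersSizeInBytes:], key)
	copy(blob[headersSizeInBytes+len(key):], value)
	return blob
}

// unpack reads the original key and value back out of a packed entry.
func unpack(blob []byte) (key string, value []byte) {
	keyLength := int(binary.LittleEndian.Uint16(blob[timestampSizeInBytes+hashSizeInBytes:]))
	key = string(blob[headersSizeInBytes : headersSizeInBytes+keyLength])
	value = blob[headersSizeInBytes+keyLength:]
	return key, value
}

func main() {
	blob := pack(1700000000, 0xdeadbeef, "user:42", []byte(`{"name":"gopher"}`))
	key, value := unpack(blob)
	fmt.Println(key, string(value)) // user:42 {"name":"gopher"}
}
```

Because the original key is stored inside the entry, a reader can always check it against the requested key, which is exactly how collisions are detected in the get path below.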
Hash collisions

```go
func (s *cacheShard) get(key string, hashedKey uint64) ([]byte, error) {
	s.lock.RLock()
	wrappedEntry, err := s.getWrappedEntry(hashedKey)
	if err != nil {
		s.lock.RUnlock()
		return nil, err
	}
	if entryKey := readKeyFromEntry(wrappedEntry); key != entryKey {
		s.lock.RUnlock()
		s.collision()
		if s.isVerbose {
			s.logger.Printf("Collision detected. Both %q and %q have the same hash %x", key, entryKey, hashedKey)
		}
		return nil, ErrEntryNotFound
	}
	entry := readEntry(wrappedEntry)
	s.lock.RUnlock()
	s.hit(hashedKey)

	return entry, nil
}
```

Combined with the Set method above, we can see that on write a colliding key simply overwrites the previous entry, and on read the displaced key is reported as not found. The README states this explicitly:

BigCache does not handle collisions. When new item is inserted and its hash collides with previously stored item, new item overwrites previously stored value.

Deletion

Deleting a key just zeroes out the entry at the corresponding itemIndex in the entries array. The data is not truly removed — it is only zeroed and the memory is not returned — but the key's index is removed from s.hashmap. In other words, this is a soft delete: the user can no longer look up the data.

Cache expiration

bigCache lets you attach an expiration time to inserted data, but the drawback is that every entry shares the same lifetime. bigCache removes expired data automatically in two situations: when inserting new data, the oldest entry is checked and evicted if it has expired; and when CleanWindow is set, a background goroutine periodically removes expired entries in batches.
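Putting the pieces together, here is a minimal end-to-end usage sketch — assuming bigcache v3's API, with DefaultConfig, values serialized via json.Marshal as discussed above, and the exported ErrEntryNotFound sentinel that Get returns on a miss (whether the entry expired, was deleted, or was displaced by a hash collision):

```go
package main

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"log"
	"time"

	"github.com/allegro/bigcache/v3"
)

type User struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

func main() {
	cache, err := bigcache.New(context.Background(), bigcache.DefaultConfig(10*time.Minute))
	if err != nil {
		log.Fatal(err)
	}

	// Entries must be []byte, so structs are serialized first, e.g. with json.Marshal.
	raw, err := json.Marshal(User{ID: 42, Name: "gopher"})
	if err != nil {
		log.Fatal(err)
	}
	if err := cache.Set("user:42", raw); err != nil {
		log.Fatal(err)
	}

	data, err := cache.Get("user:42")
	if errors.Is(err, bigcache.ErrEntryNotFound) {
		// Returned when the entry expired, was deleted, or was displaced by a colliding hash.
		fmt.Println("cache miss")
		return
	}
	if err != nil {
		log.Fatal(err)
	}

	var u User
	if err := json.Unmarshal(data, &u); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", u)
}
```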