TTL does one thing, and only one thing, well: it changes the flash power to keep the subject exposure roughly the same as the subject-to-flash distance changes. That makes TTL great for on-camera flash in run-and-gun situations where the subject-to-flash distance is constantly changing. TTL rarely gets the exposure exactly right, but it usually gets close enough that you can fix it in post. Once you learn to SEE non-normal situations (e.g. a bride in white against a white wall) and make Flash Exposure Compensation changes, your percentage of usable exposures increases.

On the database side, TTL means something entirely different: time-to-live, the lifespan of a record. Here is a data-modeling question that hinges on it, and how I would answer it.

The question: I am looking for database solutions to solve an existing problem with our current dataset. I have 90 million users, and each user has a collection of attributes; there are 1,000+ separate attributes, but each user probably has 40-80 assigned. Each attribute has its own TTL, some for hours, some for months. I'm looking for a solution that gives me all attributes for a given user, factoring out expired data. The ideal pattern would seem to be one record per user with the attributes attached as bins; however, TTL applies only at the record level, so that method will not work. I see two options:

1. Create a unique set for each user, with one record for each attribute, each record having a single bin with the name of the attribute. This would create 90 million sets with 40-80 records each, about 4.5 billion rows.
2. Create records with two bins, a unique userID and a single attribute, then place a secondary index on the userID bin. This would create one set and an index with 4.5 billion rows.

What's better for performance? I cannot find any details on query performance when specifying a set alone, on whether having something like 90 million sets would cripple the system, or on whether an index of 4.5 billion rows is too big. Some background, if it helps: we are currently using a 5-node Cassandra cluster, and we do have it working, sort of, but the queries to get the data are complicated, and our biggest issue is that to make it work we needed to translate everything to a string, so we lost all typing on the data.

The answer: this is commonly done in audience segmentation use cases in ad-tech, for example. Using record-level TTL is not a great option, due to some of the challenges mentioned above. How I would model the data is to use the power of the Aerospike Maps API.
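Here is a minimal sketch of that map-based model using the Aerospike Python client. The specifics are my assumptions for illustration, not from the question above: namespace "test", set "users", a single map bin "attrs", and each attribute's expiry stored as an epoch-seconds map value. One record per user; expired entries are pruned and filtered with value-range map operations in a single operate() call.

```python
import time

import aerospike
from aerospike import exception as ex
from aerospike_helpers.operations import map_operations as map_ops

# Hypothetical names for the sketch; adjust to your cluster.
NS, SET, BIN = "test", "users", "attrs"

client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()


def set_attribute(user_id, attr, ttl_seconds):
    """Upsert one attribute; its expiry rides along as the map value."""
    key = (NS, SET, user_id)
    expire_at = int(time.time()) + ttl_seconds
    client.operate(key, [map_ops.map_put(BIN, attr, expire_at)])


def live_attributes(user_id):
    """Fetch all non-expired attributes for a user in one round trip:
    drop map entries whose expiry (the map value) is already in the past,
    then return the keys of whatever survives."""
    key = (NS, SET, user_id)
    now = int(time.time())
    ops = [
        # Prune: remove entries whose expiry falls in [0, now).
        map_ops.map_remove_by_value_range(BIN, 0, now, aerospike.MAP_RETURN_NONE),
        # Read: attribute names whose expiry is still in the future.
        map_ops.map_get_by_value_range(BIN, now, 1 << 62, aerospike.MAP_RETURN_KEY),
    ]
    try:
        _, _, bins = client.operate(key, ops)
    except ex.RecordNotFound:
        return []
    return bins[BIN]
```

If attributes carry values rather than just membership, the map value can become a two-element list like [expire_at, value] and the same range trick works on the first element. Note also that since the whole user lives in one record, this stays at 90 million records in a single set, with no 4.5-billion-row secondary index to worry about; one further option would be to set the record's own TTL to the farthest attribute expiry on each write, so fully idle users are eventually reclaimed.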