Searched refs:batch (Results 1 – 5 of 5) sorted by relevance
68   hidl_vec<hidl_memory> batch;                 in batchAllocate() local
69   batch.resize(count);                         in batchAllocate()
73   batch[allocated] = allocateOne(size);        in batchAllocate()
75   if (batch[allocated].handle() == nullptr) {  in batchAllocate()
87   _hidl_cb(true /* success */, batch);         in batchAllocate()
91   cleanup(std::move(batch[i]));                in batchAllocate()
39   * @return batch Unmapped memory objects.
41   batchAllocate(uint64_t size, uint64_t count) generates (bool success, vec<memory> batch);
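The implementation snippets above show the contract of `batchAllocate`: allocate `count` buffers of `size` bytes each, and if any single allocation fails, release everything allocated so far before reporting failure through the callback. The following is a standalone sketch of that pattern, not the actual HIDL code: `Memory`, `allocateOne`, and `cleanup` here are plain C++ stand-ins mirroring the names in the snippet, with a `valid()` check in place of the real `handle() == nullptr` test.

```cpp
// Sketch of the batchAllocate all-or-nothing pattern (stand-in types, not HIDL).
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

struct Memory {
    std::vector<uint8_t> data;                     // stand-in for hidl_memory
    bool valid() const { return !data.empty(); }   // stand-in for handle() != nullptr
};

// Simulated single allocation; size 0 models an allocation failure.
static Memory allocateOne(uint64_t size) {
    if (size == 0) return {};
    return Memory{std::vector<uint8_t>(size)};
}

static void cleanup(Memory&& m) { m.data.clear(); }

using BatchCallback =
    std::function<void(bool success, const std::vector<Memory>& batch)>;

void batchAllocate(uint64_t size, uint64_t count, const BatchCallback& _hidl_cb) {
    std::vector<Memory> batch(count);
    uint64_t allocated = 0;
    for (; allocated < count; ++allocated) {
        batch[allocated] = allocateOne(size);
        if (!batch[allocated].valid()) break;      // one failure aborts the batch
    }
    if (allocated == count) {
        _hidl_cb(true /* success */, batch);       // whole batch succeeded
        return;
    }
    for (uint64_t i = 0; i < allocated; ++i) {
        cleanup(std::move(batch[i]));              // roll back partial allocations
    }
    _hidl_cb(false /* success */, {});
}
```

The rollback loop matches the `cleanup(std::move(batch[i]))` line in the search results: a partially filled batch is never handed to the caller.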
17 // C2H: Data payload to be validated. This is a batch of data exactly as it
964  [&](bool success, const hidl_vec<hidl_memory>& batch) {  in TEST_F() argument
966  EXPECT_EQ(kBatchSize, batch.size());                     in TEST_F()
968  for (uint64_t i = 0; i < batch.size(); i++) {            in TEST_F()
969  sp<IMemory> memory = mapMemory(batch[i]);                in TEST_F()
976  EXPECT_EQ(memory->getSize(), batch[i].size());           in TEST_F()
983  batchCopy = batch;                                       in TEST_F()
151  required, nanoapps are encouraged to batch their messages and opportunistically
252  with the longest batch interval that still meets the latency requirement.
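The guideline in the last result, choosing the longest batch interval that still meets the latency requirement, reduces to taking the minimum of the clients' maximum tolerated latencies. A minimal sketch, assuming a hypothetical helper (this is not a CHRE API):

```cpp
// Hypothetical helper: pick the longest batch interval that still satisfies
// every client's maximum tolerated delivery latency (in milliseconds).
#include <algorithm>
#include <cstdint>
#include <vector>

uint64_t longestBatchIntervalMs(const std::vector<uint64_t>& maxLatenciesMs) {
    if (maxLatenciesMs.empty()) {
        return 0;  // no constraint registered; caller picks its own default
    }
    // The batch interval may not exceed any client's latency bound, so the
    // longest admissible interval is the tightest (smallest) bound.
    return *std::min_element(maxLatenciesMs.begin(), maxLatenciesMs.end());
}
```

For example, with clients tolerating 100 ms, 250 ms, and 50 ms, the batch interval is 50 ms: any longer interval would violate the strictest client's requirement.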