Say if there are n groups, then the expected cpu time of a group A is:

    expected time of A = (A's shares / total shares of all n groups) * total cpu time
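
As a minimal sketch of that arithmetic (the names and numbers here are ours,
not from the test suite):

    #include <stdio.h>

    /* expected cpu time of one group, per the ratio above */
    static double expected_time(double shares, double total_shares,
                                double total_time)
    {
        return shares / total_shares * total_time;
    }

    int main(void)
    {
        /* e.g. 2048 of 6144 total shares over a 60s run -> ~20s */
        printf("%.1f\n", expected_time(2048, 6144, 60));
        return 0;
    }
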
First of all mount the cpu controller on /dev/cpuctl and create n groups.
The number of groups should be > the number of cpus for checking scheduling
fairness (as we will run 1 task per group). By default each group gets 1024
shares, and the cpu controller divides cpu time among the tasks in different
groups on the basis of the shares assigned to each group.

So unless and until this ratio (group A's shares / total shares of all
groups) changes, the expected cpu time of group A does not change.

Let us say we have 3 groups (1 task each) A, B and C, having 2, 4 and 6
shares respectively.
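
With 2, 4 and 6 shares the total is 12, so A, B and C should get roughly
1/6, 1/3 and 1/2 of the total cpu time. The setup can be sketched as below;
this is a hypothetical standalone sketch assuming a cgroup-v1 kernel, not
the actual LTP test code (/dev/cpuctl comes from the text above, the group
names and helper are ours):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mount.h>
    #include <sys/stat.h>

    /* write a string to a cgroup control file, dying on error */
    static void write_val(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f || fputs(val, f) == EOF) {
            perror(path);
            exit(1);
        }
        fclose(f);
    }

    int main(void)
    {
        const char *shares[] = { "2", "4", "6" };   /* groups A, B, C */
        char path[64];
        int i;

        /* mount the cpu controller (cgroup v1) on /dev/cpuctl */
        mkdir("/dev/cpuctl", 0755);
        if (mount("cpuctl", "/dev/cpuctl", "cgroup", 0, "cpu")) {
            perror("mount");
            return 1;
        }

        /* create one group per task and assign its shares */
        for (i = 0; i < 3; i++) {
            snprintf(path, sizeof(path), "/dev/cpuctl/group_%d", i + 1);
            mkdir(path, 0755);
            snprintf(path, sizeof(path), "/dev/cpuctl/group_%d/cpu.shares",
                     i + 1);
            write_val(path, shares[i]);
        }
        return 0;
    }
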
In this test the shares values of some groups are increased and those of
other groups are decreased. Accordingly, the expected cpu time of each task
is recalculated.
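
For instance, with hypothetical numbers (assuming the kernel's default of
1024 shares per group), raising group A to 3072 shares and lowering group B
to 512 while leaving group C at 1024 changes the expected split like this:

    #include <stdio.h>

    int main(void)
    {
        double a = 3072, b = 512, c = 1024;
        double total = a + b + c;

        printf("expected cpu share of A: %.1f%%\n", a / total * 100); /* ~66.7 */
        printf("expected cpu share of B: %.1f%%\n", b / total * 100); /* ~11.1 */
        printf("expected cpu share of C: %.1f%%\n", c / total * 100); /* ~22.2 */
        return 0;
    }
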
Renice all tasks of one group to -20 and let tasks in all other groups run
with normal priority. The aim is to test that the nice effect stays within
the group and does not disturb the share-based split of cpu time between
groups.
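
Renicing every task currently in a group could look roughly like the sketch
below (the group path is assumed, and the real test's mechanics may differ):

    #include <stdio.h>
    #include <sys/resource.h>

    /* renice every task listed in an (assumed) group's tasks file */
    int main(void)
    {
        FILE *f = fopen("/dev/cpuctl/group_1/tasks", "r");
        int pid;

        if (!f) {
            perror("tasks file");
            return 1;
        }
        while (fscanf(f, "%d", &pid) == 1) {
            if (setpriority(PRIO_PROCESS, pid, -20))
                perror("setpriority");
        }
        fclose(f);
        return 0;
    }
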
In this test the first run is done with, say, n tasks in m groups.

Test 06-08: NUM GROUPS vs NUMBER of TASKS TEST

These tests mainly check the effect of running more tasks vs. creating
more groups on fairness (however, a latency check will be done in the
future). A per-task sanity check is sketched after the test list below.

Test 06: N X M (N groups with M tasks each)
Test 07: N*M X 1 (N*M groups with 1 task each)
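
With equal shares everywhere both layouts should give every task the same
expected slice of the cpu; a quick check of that arithmetic (the N and M
values are made up):

    #include <stdio.h>

    int main(void)
    {
        int n = 4, m = 8;       /* hypothetical N groups x M tasks */

        /* Test 06: each group gets 1/N of the cpu, each task 1/M of that */
        double per_task_06 = 1.0 / n / m;

        /* Test 07: each of the N*M single-task groups gets 1/(N*M) */
        double per_task_07 = 1.0 / (n * m);

        printf("%.4f vs %.4f\n", per_task_06, per_task_07);
        return 0;
    }
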
The next two testcases put stress on the system and create a large number
of groups.

Creates 4 windows with different NICE values. Each window runs n groups.

In case of test 12 the tasks run under different groups created by the test
(NUM_GROUPS = NUM_CPUS). The tasks migrate to their groups automatically
before they start hogging the cpu. The latency check task also runs under
any one of the groups.
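
In cgroup v1 this migration is just a write of the pid into the target
group's tasks file; a minimal sketch (the group path is assumed, not taken
from the test):

    #include <stdio.h>
    #include <unistd.h>

    /* move the calling process into an (assumed) pre-created group by
       writing its pid to the cgroup v1 tasks file, then hog the cpu */
    int main(void)
    {
        FILE *f = fopen("/dev/cpuctl/group_1/tasks", "w");

        if (!f) {
            perror("tasks file");
            return 1;
        }
        fprintf(f, "%d\n", (int)getpid());
        fclose(f);

        for (;;)        /* cpu hog */
            ;
    }
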
is mounted, and has tasks in different groups.