#! /usr/bin/env perl
# Copyright 2010-2016 The OpenSSL Project Authors. All Rights Reserved.
#
# Licensed under the OpenSSL license (the "License").  You may not use
# this file except in compliance with the License.  You can obtain a copy
# in the file LICENSE in the source distribution or at
# https://www.openssl.org/source/license.html

#
# ====================================================================
# Written by Andy Polyakov <appro@openssl.org> for the OpenSSL
# project. The module is, however, dual licensed under OpenSSL and
# CRYPTOGAMS licenses depending on where you obtain it. For further
# details see http://www.openssl.org/~appro/cryptogams/.
# ====================================================================
#
# March, May, June 2010
#
# The module implements the "4-bit" GCM GHASH function and the
# underlying single multiplication operation in GF(2^128). "4-bit"
# means that it uses a 256-byte per-key table [+64/128 bytes of fixed
# table]. It has two code paths: vanilla x86 and vanilla SSE. The
# former is executed on 486 and Pentium, the latter on all others.
# SSE GHASH features the so-called "528B" variant of the "4-bit"
# method, utilizing additional 256+16 bytes of per-key storage
# [+512 bytes of shared table]. Performance results are for the
# streamed GHASH subroutine and are expressed in cycles per processed
# byte, less is better:
#
#		gcc 2.95.3(*)	SSE assembler	x86 assembler
#
# Pentium	105/111(**)	-		50
# PIII		68 /75		12.2		24
# P4		125/125		17.8		84(***)
# Opteron	66 /70		10.1		30
# Core2		54 /67		8.4		18
# Atom		105/105		16.8		53
# VIA Nano	69 /71		13.0		27
#
# (*)	gcc 3.4.x was observed to generate a few percent slower code,
#	which is one of the reasons why 2.95.3 results were chosen;
#	another reason is the lack of 3.4.x results for older CPUs.
#	Comparison with SSE results is not completely fair, because C
#	results are for the vanilla "256B" implementation, while
#	assembler results are for "528B";-)
# (**)	second number is the result for code compiled with the -fPIC
#	flag, which is actually more relevant, because assembler code
#	is position-independent;
# (***)	see comment in the non-MMX routine for further details;
#
# To summarize, it's >2-5 times faster than gcc-generated code. To
# anchor it to something else, SHA1 assembler processes one byte in
# ~7 cycles on contemporary x86 cores. As for the choice of MMX/SSE
# in particular, see the comment at the end of the file...
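#
# For orientation, a minimal C sketch of the "4-bit" (Shoup) method
# implemented below, loosely following gcm128.c. All names here are
# illustrative, not this file's ABI; Htable[i] holds the product of H
# with the 4-bit polynomial i, and rem_4bit[] holds the reduction
# constants [pre-shifted into bits 48..63] that fold the four bits
# shifted out on the right back in modulo the field polynomial:
#
#	#include <stdint.h>
#
#	typedef struct { uint64_t hi, lo; } u128;
#
#	static void gmult_4bit_sketch(uint8_t Xi[16],
#	                              const u128 Htable[16],
#	                              const uint64_t rem_4bit[16])
#	{
#	    u128 Z = { 0, 0 };
#	    int cnt;
#
#	    /* walk the 32 nibbles of Xi starting at the low nibble
#	     * of the last byte [bits are reflected in GCM] */
#	    for (cnt = 31; cnt >= 0; cnt--) {
#	        uint8_t b = Xi[cnt >> 1];
#	        unsigned nibble = (cnt & 1) ? (b & 0xf) : (b >> 4);
#
#	        if (cnt != 31) {		/* Z >>= 4, then reduce */
#	            unsigned rem = (unsigned)Z.lo & 0xf;
#	            Z.lo = (Z.hi << 60) | (Z.lo >> 4);
#	            Z.hi >>= 4;
#	            Z.hi ^= rem_4bit[rem];
#	        }
#	        Z.hi ^= Htable[nibble].hi;	/* Z ^= nibble*H */
#	        Z.lo ^= Htable[nibble].lo;
#	    }
#	    /* byte-swap Z back into Xi [omitted] */
#	}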

# May 2010
#
# Add PCLMULQDQ version performing at 2.10 cycles per processed byte.
# The question is how close it is to the theoretical limit. The
# pclmulqdq instruction latency appears to be 14 cycles and there
# can't be more than 2 of them executing at any given time. This means
# that a single Karatsuba multiplication would take 28 cycles *plus* a
# few cycles for pre- and post-processing. The multiplication then has
# to be followed by modulo-reduction. Given that the aggregated
# reduction method [see "Carry-less Multiplication and Its Usage for
# Computing the GCM Mode" white paper by Intel] allows one to perform
# reduction only once in a while, asymptotic performance can be
# estimated as (28+Tmod/Naggr)/16, where Tmod is the time to perform
# reduction and Naggr is the aggregation factor.
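#
# Plugging in the two parameter sets discussed below makes the
# arithmetic explicit: Intel's Tmod~19 and Naggr=4 give
# (28+19/4)/16 = 32.75/16 ~= 2.05, while this module's Tmod~13 and
# Naggr=2 give (28+13/2)/16 = 34.5/16 ~= 2.16 cycles per byte.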
#
# Before we proceed to this implementation, let's have a closer look
# at the best-performing code suggested by Intel in their white paper.
# By tracing inter-register dependencies, Tmod is estimated as ~19
# cycles and the Naggr chosen by Intel is 4, resulting in 2.05 cycles
# per processed byte. As implied, this is a quite optimistic estimate,
# because it does not account for Karatsuba pre- and post-processing,
# which for a single multiplication is ~5 cycles. Unfortunately Intel
# does not provide performance data for GHASH alone. But benchmarking
# AES_GCM_encrypt ripped out of Fig. 15 of the white paper with aadt
# alone resulted in 2.46 cycles per byte out of a 16KB buffer. Note
# that the result accounts even for pre-computing of degrees of the
# hash key H, but its portion is negligible at 16KB buffer size.
#
# Moving on to the implementation in question. Tmod is estimated as
# ~13 cycles and Naggr is 2, giving asymptotic performance of ...
# 2.16. How is it possible that measured performance is better than
# the optimistic theoretical estimate? There is one thing Intel failed
# to recognize. By serializing GHASH with CTR in the same subroutine,
# the former's performance is really limited by the above
# (Tmul+Tmod/Naggr) equation. But if the GHASH procedure is detached,
# the modulo-reduction can be interleaved with Naggr-1 multiplications
# at the instruction level and under ideal conditions even disappear
# from the equation. So the optimistic theoretical estimate for this
# implementation is ... 28/16=1.75, and not 2.16. Well, that's
# probably way too optimistic, at least for such a small Naggr. I'd
# argue that (28+Tproc/Naggr)/16, where Tproc is the time required for
# Karatsuba pre- and post-processing, is a more realistic estimate. In
# this case it gives ... 1.91 cycles. In other words, depending on how
# well we can interleave the reduction and one of the two
# multiplications, the performance should be between 1.91 and 2.16. As
# already mentioned, this implementation processes one byte out of an
# 8KB buffer in 2.10 cycles, while the x86_64 counterpart does it in
# 2.02. x86_64 performance is better because the larger register bank
# allows reduction and multiplication to be interleaved better.
#
# Does it make sense to increase Naggr? To start with, it's virtually
# impossible in 32-bit mode, because of the limited register bank
# capacity. Otherwise the improvement has to be weighed against slower
# setup, as well as the code size and complexity increase. As even the
# optimistic estimate doesn't promise a 30% performance improvement,
# there are currently no plans to increase Naggr.
#
# Special thanks to David Woodhouse for providing access to a
# Westmere-based system on behalf of Intel Open Source Technology Centre.
# January 2011
#
# Tweaked to optimize transitions between integer and FP operations
# on the same XMM register, the PCLMULQDQ subroutine was measured to
# process one byte in 2.07 cycles on Sandy Bridge, and in 2.12 - on
# Westmere. The minor regression on Westmere is outweighed by the ~15%
# improvement on Sandy Bridge. Strangely enough, an attempt to modify
# the 64-bit code in a similar manner resulted in almost 20%
# degradation on Sandy Bridge, where the original 64-bit code
# processes one byte in 1.95 cycles.
#####################################################################
# For reference, AMD Bulldozer processes one byte in 1.98 cycles in
# 32-bit mode and 1.89 in 64-bit.

# February 2013
#
# Overhaul: aggregate Karatsuba post-processing, improve ILP in
# reduction_alg9. Resulting performance is 1.96 cycles per byte on
# Westmere, 1.95 - on Sandy/Ivy Bridge, 1.76 - on Bulldozer.
$0 =~ m/(.*[\/\\])[^\/\\]+$/; $dir=$1;
push(@INC,"${dir}","${dir}../../../perlasm");
require "x86asm.pl";

$output=pop;
open STDOUT,">$output";

&asm_init($ARGV[0],$x86only = $ARGV[$#ARGV] eq "386");

$sse2=0;
for (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }

($Zhh,$Zhl,$Zlh,$Zll) = ("ebp","edx","ecx","ebx");
$inp  = "edi";
$Htbl = "esi";

$unroll = 0;	# Affects x86 loop. Folded loop performs ~7% worse
		# than unrolled, which has to be weighed against
		# 2.5x x86-specific code size reduction.

sub x86_loop {
    my $off = shift;
    my $rem = "eax";

	&mov	($Zhh,&DWP(4,$Htbl,$Zll));
	&mov	($Zhl,&DWP(0,$Htbl,$Zll));
	&mov	($Zlh,&DWP(12,$Htbl,$Zll));
	&mov	($Zll,&DWP(8,$Htbl,$Zll));
	&xor	($rem,$rem);	# avoid partial register stalls on PIII
	# shrd practically kills P4, 2.5x deterioration, but P4 has
	# an MMX code-path to execute. shrd runs a tad faster [than
	# twice as many shifts, movs and ors] on pre-MMX Pentium (as
	# well as on PIII and Core2), *but* minimizes code size, spares
	# a register and thus allows the loop to be folded...
	if (!$unroll) {
	my $cnt = $inp;
	&mov	($cnt,15);
	&jmp	(&label("x86_loop"));
	&set_label("x86_loop",16);
	    for($i=1;$i<=2;$i++) {
		&mov	(&LB($rem),&LB($Zll));
		&shrd	($Zll,$Zlh,4);
		&and	(&LB($rem),0xf);
		&shrd	($Zlh,$Zhl,4);
		&shrd	($Zhl,$Zhh,4);
		&shr	($Zhh,4);
		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));

		&mov	(&LB($rem),&BP($off,"esp",$cnt));
		if ($i&1) {
			&and	(&LB($rem),0xf0);
		} else {
			&shl	(&LB($rem),4);
		}

		&xor	($Zll,&DWP(8,$Htbl,$rem));
		&xor	($Zlh,&DWP(12,$Htbl,$rem));
		&xor	($Zhl,&DWP(0,$Htbl,$rem));
		&xor	($Zhh,&DWP(4,$Htbl,$rem));

		if ($i&1) {
			&dec	($cnt);
			&js	(&label("x86_break"));
		} else {
			&jmp	(&label("x86_loop"));
		}
	    }
	&set_label("x86_break",16);
	} else {
	    for($i=1;$i<32;$i++) {
		&comment($i);
		&mov	(&LB($rem),&LB($Zll));
		&shrd	($Zll,$Zlh,4);
		&and	(&LB($rem),0xf);
		&shrd	($Zlh,$Zhl,4);
		&shrd	($Zhl,$Zhh,4);
		&shr	($Zhh,4);
		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));

		if ($i&1) {
			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
			&and	(&LB($rem),0xf0);
		} else {
			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
			&shl	(&LB($rem),4);
		}

		&xor	($Zll,&DWP(8,$Htbl,$rem));
		&xor	($Zlh,&DWP(12,$Htbl,$rem));
		&xor	($Zhl,&DWP(0,$Htbl,$rem));
		&xor	($Zhh,&DWP(4,$Htbl,$rem));
	    }
	}
	&bswap	($Zll);
	&bswap	($Zlh);
	&bswap	($Zhl);
	if (!$x86only) {
		&bswap	($Zhh);
	} else {
		&mov	("eax",$Zhh);
		&bswap	("eax");
		&mov	($Zhh,"eax");
	}
}

if ($unroll) {
    &function_begin_B("_x86_gmult_4bit_inner");
	&x86_loop(4);
	&ret	();
    &function_end_B("_x86_gmult_4bit_inner");
}

sub deposit_rem_4bit {
    my $bias = shift;

	&mov	(&DWP($bias+0, "esp"),0x0000<<16);
	&mov	(&DWP($bias+4, "esp"),0x1C20<<16);
	&mov	(&DWP($bias+8, "esp"),0x3840<<16);
	&mov	(&DWP($bias+12,"esp"),0x2460<<16);
	&mov	(&DWP($bias+16,"esp"),0x7080<<16);
	&mov	(&DWP($bias+20,"esp"),0x6CA0<<16);
	&mov	(&DWP($bias+24,"esp"),0x48C0<<16);
	&mov	(&DWP($bias+28,"esp"),0x54E0<<16);
	&mov	(&DWP($bias+32,"esp"),0xE100<<16);
	&mov	(&DWP($bias+36,"esp"),0xFD20<<16);
	&mov	(&DWP($bias+40,"esp"),0xD940<<16);
	&mov	(&DWP($bias+44,"esp"),0xC560<<16);
	&mov	(&DWP($bias+48,"esp"),0x9180<<16);
	&mov	(&DWP($bias+52,"esp"),0x8DA0<<16);
	&mov	(&DWP($bias+56,"esp"),0xA9C0<<16);
	&mov	(&DWP($bias+60,"esp"),0xB5E0<<16);
}
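
# The 16 constants deposited above (before the <<16) are GF(2)-linear
# in the table index: rem_4bit[1<<b] equals 0x1C20<<b, and every other
# entry is the XOR of such single-bit entries. An illustrative Perl
# sketch that reproduces them (not used by this module):
#
#	for my $i (0..15) {
#	    my $v = 0;
#	    for my $b (0..3) { $v ^= 0x1C20<<$b if (($i>>$b)&1); }
#	    printf("0x%04X\n", $v);	# the 16-bit value for index $i
#	}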

if (!$x86only) {{{

&static_label("rem_4bit");

if (!$sse2) {{	# pure-MMX "May" version...

    # This code was removed since SSE2 is required for BoringSSL. The
    # outer structure of the code was retained to minimize future merge
    # conflicts.

}} else {{	# "June" MMX version...
		# ... has the slower "April" gcm_gmult_4bit_mmx with a
		# folded loop. This is done to conserve code size...
$S=16;		# shift factor for rem_4bit

sub mmx_loop {
# The MMX version performs 2.8 times better on P4 (see comment in the
# non-MMX routine for further details), 40% better on Opteron and
# Core2, 50% better on PIII... In other words the effort is considered
# to be well spent...
    my $inp = shift;
    my $rem_4bit = shift;
    my $cnt = $Zhh;
    my $nhi = $Zhl;
    my $nlo = $Zlh;
    my $rem = $Zll;

    my ($Zlo,$Zhi) = ("mm0","mm1");
    my $tmp = "mm2";

	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
	&mov	($nhi,$Zll);
	&mov	(&LB($nlo),&LB($nhi));
	&mov	($cnt,14);
	&shl	(&LB($nlo),4);
	&and	($nhi,0xf0);
	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
	&movd	($rem,$Zlo);
	&jmp	(&label("mmx_loop"));

    &set_label("mmx_loop",16);
	&psrlq	($Zlo,4);
	&and	($rem,0xf);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
	&mov	(&LB($nlo),&BP(0,$inp,$cnt));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&dec	($cnt);
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
	&mov	($nhi,$nlo);
	&pxor	($Zlo,$tmp);
	&js	(&label("mmx_break"));

	&shl	(&LB($nlo),4);
	&and	($rem,0xf);
	&psrlq	($Zlo,4);
	&and	($nhi,0xf0);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
	&pxor	($Zlo,$tmp);
	&jmp	(&label("mmx_loop"));

    &set_label("mmx_break",16);
	&shl	(&LB($nlo),4);
	&and	($rem,0xf);
	&psrlq	($Zlo,4);
	&and	($nhi,0xf0);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
	&pxor	($Zlo,$tmp);

	&psrlq	($Zlo,4);
	&and	($rem,0xf);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
	&pxor	($Zlo,$tmp);

	&psrlq	($Zlo,32);	# lower part of Zlo is already there
	&movd	($Zhl,$Zhi);
	&psrlq	($Zhi,32);
	&movd	($Zlh,$Zlo);
	&movd	($Zhh,$Zhi);

	&bswap	($Zll);
	&bswap	($Zhl);
	&bswap	($Zlh);
	&bswap	($Zhh);
}

&function_begin("gcm_gmult_4bit_mmx");
	&mov	($inp,&wparam(0));	# load Xi
	&mov	($Htbl,&wparam(1));	# load Htable

	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop("eax");
	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));

	&movz	($Zll,&BP(15,$inp));

	&mmx_loop($inp,"eax");

	&emms	();
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(0,$inp),$Zhh);
&function_end("gcm_gmult_4bit_mmx");

######################################################################
# The subroutine below is the "528B" variant of the "4-bit" GCM GHASH
# function (see gcm128.c for details). It provides a further 20-40%
# performance improvement over the above-mentioned "May" version.

&static_label("rem_8bit");

&function_begin("gcm_ghash_4bit_mmx");
{ my ($Zlo,$Zhi) = ("mm7","mm6");
  my $rem_8bit = "esi";
  my $Htbl = "ebx";

    # parameter block
    &mov	("eax",&wparam(0));		# Xi
    &mov	("ebx",&wparam(1));		# Htable
    &mov	("ecx",&wparam(2));		# inp
    &mov	("edx",&wparam(3));		# len
    &mov	("ebp","esp");			# original %esp
    &call	(&label("pic_point"));
    &set_label	("pic_point");
    &blindpop	($rem_8bit);
    &lea	($rem_8bit,&DWP(&label("rem_8bit")."-".&label("pic_point"),$rem_8bit));

    &sub	("esp",512+16+16);		# allocate stack frame...
    &and	("esp",-64);			# ...and align it
    &sub	("esp",16);			# place for (u8)(H[]<<4)
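
    # Resulting frame layout relative to %esp (a sketch inferred from
    # the offsets used below):
    #
    #	[0,16)		(u8)(H[i]<<4), one byte per 4-bit index
    #	[16,272)	decomposed Htable: lo halves at +16, hi at +144
    #	[272,528)	Htable>>4:        lo halves at +272, hi at +400
    #	[528,544)	inp^Xi scratch block
    #	[544,560)	saved Xi pointer, inp, inp+len, original %esp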

    &add	("edx","ecx");			# pointer to the end of input
    &mov	(&DWP(528+16+0,"esp"),"eax");	# save Xi
    &mov	(&DWP(528+16+8,"esp"),"edx");	# save inp+len
    &mov	(&DWP(528+16+12,"esp"),"ebp");	# save original %esp

    { my @lo  = ("mm0","mm1","mm2");
      my @hi  = ("mm3","mm4","mm5");
      my @tmp = ("mm6","mm7");
      my ($off1,$off2,$i) = (0,0);

      &add	($Htbl,128);			# optimize for size
      &lea	("edi",&DWP(16+128,"esp"));
      &lea	("ebp",&DWP(16+256+128,"esp"));

      # decompose Htable (low and high parts are kept separately),
      # generate Htable[]>>4, (u8)(Htable[]<<4), save to stack...
      for ($i=0;$i<18;$i++) {

	&mov	("edx",&DWP(16*$i+8-128,$Htbl))		if ($i<16);
	&movq	($lo[0],&QWP(16*$i+8-128,$Htbl))	if ($i<16);
	&psllq	($tmp[1],60)				if ($i>1);
	&movq	($hi[0],&QWP(16*$i+0-128,$Htbl))	if ($i<16);
	&por	($lo[2],$tmp[1])			if ($i>1);
	&movq	(&QWP($off1-128,"edi"),$lo[1])		if ($i>0 && $i<17);
	&psrlq	($lo[1],4)				if ($i>0 && $i<17);
	&movq	(&QWP($off1,"edi"),$hi[1])		if ($i>0 && $i<17);
	&movq	($tmp[0],$hi[1])			if ($i>0 && $i<17);
	&movq	(&QWP($off2-128,"ebp"),$lo[2])		if ($i>1);
	&psrlq	($hi[1],4)				if ($i>0 && $i<17);
	&movq	(&QWP($off2,"ebp"),$hi[2])		if ($i>1);
	&shl	("edx",4)				if ($i<16);
	&mov	(&BP($i,"esp"),&LB("edx"))		if ($i<16);

	unshift	(@lo,pop(@lo));			# "rotate" registers
	unshift	(@hi,pop(@hi));
	unshift	(@tmp,pop(@tmp));
	$off1 += 8	if ($i>0);
	$off2 += 8	if ($i>1);
      }
    }

    &movq	($Zhi,&QWP(0,"eax"));
    &mov	("ebx",&DWP(8,"eax"));
    &mov	("edx",&DWP(12,"eax"));		# load Xi

&set_label("outer",16);
  { my $nlo = "eax";
    my $dat = "edx";
    my @nhi = ("edi","ebp");
    my @rem = ("ebx","ecx");
    my @red = ("mm0","mm1","mm2");
    my $tmp = "mm3";

    &xor	($dat,&DWP(12,"ecx"));		# merge input data
    &xor	("ebx",&DWP(8,"ecx"));
    &pxor	($Zhi,&QWP(0,"ecx"));
    &lea	("ecx",&DWP(16,"ecx"));		# inp+=16
    #&mov	(&DWP(528+12,"esp"),$dat);	# save inp^Xi
    &mov	(&DWP(528+8,"esp"),"ebx");
    &movq	(&QWP(528+0,"esp"),$Zhi);
    &mov	(&DWP(528+16+4,"esp"),"ecx");	# save inp

    &xor	($nlo,$nlo);
    &rol	($dat,8);
    &mov	(&LB($nlo),&LB($dat));
    &mov	($nhi[1],$nlo);
    &and	(&LB($nlo),0x0f);
    &shr	($nhi[1],4);
    &pxor	($red[0],$red[0]);
    &rol	($dat,8);			# next byte
    &pxor	($red[1],$red[1]);
    &pxor	($red[2],$red[2]);

    # Just like in the "May" version, modulo-schedule the critical
    # path in 'Z.hi ^= rem_8bit[Z.lo&0xff^((u8)H[nhi]<<4)]<<48'. The
    # final 'pxor' is scheduled so late that rem_8bit[] has to be
    # shifted *right* by 16, which is why the last argument to pinsrw
    # is 2, which corresponds to <<32=<<48>>16...
    for ($j=11,$i=0;$i<15;$i++) {

      if ($i>0) {
	&pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
	&rol	($dat,8);				# next byte
	&pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));

	&pxor	($Zlo,$tmp);
	&pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
	&xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)
      } else {
	&movq	($Zlo,&QWP(16,"esp",$nlo,8));
	&movq	($Zhi,&QWP(16+128,"esp",$nlo,8));
      }

	&mov	(&LB($nlo),&LB($dat));
	&mov	($dat,&DWP(528+$j,"esp"))		if (--$j%4==0);

	&movd	($rem[0],$Zlo);
	&movz	($rem[1],&LB($rem[1]))			if ($i>0);
	&psrlq	($Zlo,8);				# Z>>=8

	&movq	($tmp,$Zhi);
	&mov	($nhi[0],$nlo);
	&psrlq	($Zhi,8);

	&pxor	($Zlo,&QWP(16+256+0,"esp",$nhi[1],8));	# Z^=H[nhi]>>4
	&and	(&LB($nlo),0x0f);
	&psllq	($tmp,56);

	&pxor	($Zhi,$red[1])				if ($i>1);
	&shr	($nhi[0],4);
	&pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2)	if ($i>0);

	unshift	(@red,pop(@red));			# "rotate" registers
	unshift	(@rem,pop(@rem));
	unshift	(@nhi,pop(@nhi));
    }

    &pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
    &pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));
    &xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)

    &pxor	($Zlo,$tmp);
    &pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
    &movz	($rem[1],&LB($rem[1]));

    &pxor	($red[2],$red[2]);			# clear 2nd word
    &psllq	($red[1],4);

    &movd	($rem[0],$Zlo);
    &psrlq	($Zlo,4);				# Z>>=4

    &movq	($tmp,$Zhi);
    &psrlq	($Zhi,4);
    &shl	($rem[0],4);				# rem<<4

    &pxor	($Zlo,&QWP(16,"esp",$nhi[1],8));	# Z^=H[nhi]
    &psllq	($tmp,60);
    &movz	($rem[0],&LB($rem[0]));

    &pxor	($Zlo,$tmp);
    &pxor	($Zhi,&QWP(16+128,"esp",$nhi[1],8));

    &pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2);
    &pxor	($Zhi,$red[1]);

    &movd	($dat,$Zlo);
    &pinsrw	($red[2],&WP(0,$rem_8bit,$rem[0],2),3);	# last is <<48

    &psllq	($red[0],12);				# correct by <<16>>4
    &pxor	($Zhi,$red[0]);
    &psrlq	($Zlo,32);
    &pxor	($Zhi,$red[2]);

    &mov	("ecx",&DWP(528+16+4,"esp"));	# restore inp
    &movd	("ebx",$Zlo);
    &movq	($tmp,$Zhi);			# 01234567
    &psllw	($Zhi,8);			# 1.3.5.7.
    &psrlw	($tmp,8);			# .0.2.4.6
    &por	($Zhi,$tmp);			# 10325476
    &bswap	($dat);
    &pshufw	($Zhi,$Zhi,0b00011011);		# 76543210
    &bswap	("ebx");

    &cmp	("ecx",&DWP(528+16+8,"esp"));	# are we done?
    &jne	(&label("outer"));
  }

    &mov	("eax",&DWP(528+16+0,"esp"));	# restore Xi
    &mov	(&DWP(12,"eax"),"edx");
    &mov	(&DWP(8,"eax"),"ebx");
    &movq	(&QWP(0,"eax"),$Zhi);

    &mov	("esp",&DWP(528+16+12,"esp"));	# restore original %esp
    &emms	();
}
&function_end("gcm_ghash_4bit_mmx");
}}

if ($sse2) {{
######################################################################
# PCLMULQDQ version.

$Xip="eax";
$Htbl="edx";
$const="ecx";
$inp="esi";
$len="ebx";

($Xi,$Xhi)=("xmm0","xmm1");	$Hkey="xmm2";
($T1,$T2,$T3)=("xmm3","xmm4","xmm5");
($Xn,$Xhn)=("xmm6","xmm7");

&static_label("bswap");

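# The 64x64->128 carry-less multiplications below use one level of
# Karatsuba. With X = Xh*2^64 + Xl and H = Hh*2^64 + Hl (all GF(2)
# polynomials, so "+" is XOR), the identity being exploited is:
#
#	X*H = Xh*Hh*2^128 + Xl*Hl
#	    + ((Xh+Xl)*(Hh+Hl) + Xh*Hh + Xl*Hl)*2^64
#
# i.e. three pclmulqdq instead of four, at the cost of a few XORs of
# pre- and post-processing.
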
sub clmul64x64_T2 {	# minimal "register" pressure
my ($Xhi,$Xi,$Hkey,$HK)=@_;

	&movdqa		($Xhi,$Xi);		#
	&pshufd		($T1,$Xi,0b01001110);
	&pshufd		($T2,$Hkey,0b01001110)	if (!defined($HK));
	&pxor		($T1,$Xi);		#
	&pxor		($T2,$Hkey)		if (!defined($HK));
			$HK=$T2			if (!defined($HK));

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T1,$HK,0x00);		#######
	&xorps		($T1,$Xi);		#
	&xorps		($T1,$Xhi);		#

	&movdqa		($T2,$T1);		#
	&psrldq		($T1,8);
	&pslldq		($T2,8);		#
	&pxor		($Xhi,$T1);
	&pxor		($Xi,$T2);		#
}

sub clmul64x64_T3 {
# Even though this subroutine offers visually better ILP, it was
# empirically found to be a tad slower than the above version.
# At least in gcm_ghash_clmul context. But it's just as well,
# because loop modulo-scheduling is possible only thanks to
# minimized "register" pressure...
my ($Xhi,$Xi,$Hkey)=@_;

	&movdqa		($T1,$Xi);		#
	&movdqa		($Xhi,$Xi);
	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pshufd		($T2,$T1,0b01001110);	#
	&pshufd		($T3,$Hkey,0b01001110);
	&pxor		($T2,$T1);		#
	&pxor		($T3,$Hkey);
	&pclmulqdq	($T2,$T3,0x00);		#######
	&pxor		($T2,$Xi);		#
	&pxor		($T2,$Xhi);		#

	&movdqa		($T3,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T3,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T3);		#
}

if (1) {		# Algorithm 9 with <<1 twist.
			# Reduction is shorter and uses only two
			# temporary registers, which makes it a better
			# candidate for interleaving with 64x64
			# multiplication. The pre-modulo-scheduled loop
			# was found to be ~20% faster than Algorithm 5
			# below. Algorithm 9 was therefore chosen for
			# further optimization...

sub reduction_alg9 {	# 17/11 times faster than Intel version
my ($Xhi,$Xi) = @_;

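	# Both phases below reduce modulo the bit-reflected GCM
	# polynomial [the 0x1c2_polynomial constant near the "bswap"
	# label]. As a sketch of what the shift sequences compute: the
	# 1st phase forms X<<57 ^ X<<62 ^ X<<63 from the input and
	# splits it across the 64-bit halves, the 2nd phase then XORs
	# together X, X>>1, X>>2 and X>>7 to complete the reduction.
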
	# 1st phase
	&movdqa		($T2,$Xi);		#
	&movdqa		($T1,$Xi);
	&psllq		($Xi,5);
	&pxor		($T1,$Xi);		#
	&psllq		($Xi,1);
	&pxor		($Xi,$T1);		#
	&psllq		($Xi,57);		#
	&movdqa		($T1,$Xi);		#
	&pslldq		($Xi,8);
	&psrldq		($T1,8);		#
	&pxor		($Xi,$T2);
	&pxor		($Xhi,$T1);		#

	# 2nd phase
	&movdqa		($T2,$Xi);
	&psrlq		($Xi,1);
	&pxor		($Xhi,$T2);		#
	&pxor		($T2,$Xi);
	&psrlq		($Xi,5);
	&pxor		($Xi,$T2);		#
	&psrlq		($Xi,1);		#
	&pxor		($Xi,$Xhi)		#
}

&function_begin_B("gcm_init_clmul");
	&mov		($Htbl,&wparam(0));
	&mov		($Xip,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Hkey,&QWP(0,$Xip));
	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap

	# <<1 twist
	&pshufd		($T2,$Hkey,0b11111111);	# broadcast uppermost dword
	&movdqa		($T1,$Hkey);
	&psllq		($Hkey,1);
	&pxor		($T3,$T3);		#
	&psrlq		($T1,63);
	&pcmpgtd	($T3,$T2);		# broadcast carry bit
	&pslldq		($T1,8);
	&por		($Hkey,$T1);		# H<<=1

	# magic reduction
	&pand		($T3,&QWP(16,$const));	# 0x1c2_polynomial
	&pxor		($Hkey,$T3);		# if(carry) H^=0x1c2_polynomial

	# calculate H^2
	&movdqa		($Xi,$Hkey);
	&clmul64x64_T2	($Xhi,$Xi,$Hkey);
	&reduction_alg9	($Xhi,$Xi);

	&pshufd		($T1,$Hkey,0b01001110);
	&pshufd		($T2,$Xi,0b01001110);
	&pxor		($T1,$Hkey);		# Karatsuba pre-processing
	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
	&pxor		($T2,$Xi);		# Karatsuba pre-processing
	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2
	&palignr	($T2,$T1,8);		# low part is H.lo^H.hi
	&movdqu		(&QWP(32,$Htbl),$T2);	# save Karatsuba "salt"

	&ret		();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movups		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);
	&movups		($T2,&QWP(32,$Htbl));

	&clmul64x64_T2	($Xhi,$Xi,$Hkey,$T2);
	&reduction_alg9	($Xhi,$Xi);

	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);

	&ret	();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));
	&mov		($inp,&wparam(2));
	&mov		($len,&wparam(3));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);

	&sub		($len,0x10);
	&jz		(&label("odd_tail"));

	#######
	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&movdqu		($T3,&QWP(32,$Htbl));
	&pxor		($Xi,$T1);		# Ii+Xi

	&pshufd		($T1,$Xn,0b01001110);	# H*Ii+1
	&movdqa		($Xhn,$Xn);
	&pxor		($T1,$Xn);		#
	&lea		($inp,&DWP(32,$inp));	# i+=2

	&pclmulqdq	($Xn,$Hkey,0x00);	#######
	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
	&pclmulqdq	($T1,$T3,0x00);		#######
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
	&nop		();

	&sub		($len,0x20);
	&jbe		(&label("even_tail"));
	&jmp		(&label("mod_loop"));

&set_label("mod_loop",32);
	&pshufd		($T2,$Xi,0b01001110);	# H^2*(Ii+Xi)
	&movdqa		($Xhi,$Xi);
	&pxor		($T2,$Xi);		#
	&nop		();

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T2,$T3,0x10);		#######
	&movups		($Hkey,&QWP(0,$Htbl));	# load H

	&xorps		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&movdqa		($T3,&QWP(0,$const));
	&xorps		($Xhi,$Xhn);
	 &movdqu	($Xhn,&QWP(0,$inp));	# Ii
	&pxor		($T1,$Xi);		# aggregated Karatsuba post-processing
	 &movdqu	($Xn,&QWP(16,$inp));	# Ii+1
	&pxor		($T1,$Xhi);		#

	 &pshufb	($Xhn,$T3);
	&pxor		($T2,$T1);		#

	&movdqa		($T1,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T1,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T1);		#
	 &pshufb	($Xn,$T3);
	 &pxor		($Xhi,$Xhn);		# "Ii+Xi", consume early

	&movdqa		($Xhn,$Xn);		#&clmul64x64_TX	($Xhn,$Xn,$Hkey); H*Ii+1
	  &movdqa	($T2,$Xi);		#&reduction_alg9($Xhi,$Xi); 1st phase
	  &movdqa	($T1,$Xi);
	  &psllq	($Xi,5);
	  &pxor		($T1,$Xi);		#
	  &psllq	($Xi,1);
	  &pxor		($Xi,$T1);		#
	&pclmulqdq	($Xn,$Hkey,0x00);	#######
	&movups		($T3,&QWP(32,$Htbl));
	  &psllq	($Xi,57);		#
	  &movdqa	($T1,$Xi);		#
	  &pslldq	($Xi,8);
	  &psrldq	($T1,8);		#
	  &pxor		($Xi,$T2);
	  &pxor		($Xhi,$T1);		#
	&pshufd		($T1,$Xhn,0b01001110);
	  &movdqa	($T2,$Xi);		# 2nd phase
	  &psrlq	($Xi,1);
	&pxor		($T1,$Xhn);
	  &pxor		($Xhi,$T2);		#
	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
	  &pxor		($T2,$Xi);
	  &psrlq	($Xi,5);
	  &pxor		($Xi,$T2);		#
	  &psrlq	($Xi,1);		#
	  &pxor		($Xi,$Xhi)		#
	&pclmulqdq	($T1,$T3,0x00);		#######

	&lea		($inp,&DWP(32,$inp));
	&sub		($len,0x20);
	&ja		(&label("mod_loop"));

&set_label("even_tail");
	&pshufd		($T2,$Xi,0b01001110);	# H^2*(Ii+Xi)
	&movdqa		($Xhi,$Xi);
	&pxor		($T2,$Xi);		#

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T2,$T3,0x10);		#######
	&movdqa		($T3,&QWP(0,$const));

	&xorps		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&xorps		($Xhi,$Xhn);
	&pxor		($T1,$Xi);		# aggregated Karatsuba post-processing
	&pxor		($T1,$Xhi);		#

	&pxor		($T2,$T1);		#

	&movdqa		($T1,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T1,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T1);		#

	&reduction_alg9	($Xhi,$Xi);

	&test		($len,$len);
	&jnz		(&label("done"));

	&movups		($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&pshufb		($T1,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg9	($Xhi,$Xi);

&set_label("done");
	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");

} else {		# Algorithm 5. Kept for reference purposes.

sub reduction_alg5 {	# 19/16 times faster than Intel version
my ($Xhi,$Xi)=@_;

	# <<1
	&movdqa		($T1,$Xi);		#
	&movdqa		($T2,$Xhi);
	&pslld		($Xi,1);
	&pslld		($Xhi,1);		#
	&psrld		($T1,31);
	&psrld		($T2,31);		#
	&movdqa		($T3,$T1);
	&pslldq		($T1,4);
	&psrldq		($T3,12);		#
	&pslldq		($T2,4);
	&por		($Xhi,$T3);		#
	&por		($Xi,$T1);
	&por		($Xhi,$T2);		#

	# 1st phase
	&movdqa		($T1,$Xi);
	&movdqa		($T2,$Xi);
	&movdqa		($T3,$Xi);		#
	&pslld		($T1,31);
	&pslld		($T2,30);
	&pslld		($Xi,25);		#
	&pxor		($T1,$T2);
	&pxor		($T1,$Xi);		#
	&movdqa		($T2,$T1);		#
	&pslldq		($T1,12);
	&psrldq		($T2,4);		#
	&pxor		($T3,$T1);

	# 2nd phase
	&pxor		($Xhi,$T3);		#
	&movdqa		($Xi,$T3);
	&movdqa		($T1,$T3);
	&psrld		($Xi,1);		#
	&psrld		($T1,2);
	&psrld		($T3,7);		#
	&pxor		($Xi,$T1);
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T3);		#
	&pxor		($Xi,$Xhi);		#
}

&function_begin_B("gcm_init_clmul");
	&mov		($Htbl,&wparam(0));
	&mov		($Xip,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Hkey,&QWP(0,$Xip));
	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap

	# calculate H^2
	&movdqa		($Xi,$Hkey);
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2

	&ret		();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($Xn,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$Xn);

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&pshufb		($Xi,$Xn);
	&movdqu		(&QWP(0,$Xip),$Xi);

	&ret	();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));
	&mov		($inp,&wparam(2));
	&mov		($len,&wparam(3));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);

	&sub		($len,0x10);
	&jz		(&label("odd_tail"));

	#######
	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub		($len,0x20);
	&lea		($inp,&DWP(32,$inp));	# i+=2
	&jbe		(&label("even_tail"));

&set_label("mod_loop");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	#######
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub		($len,0x20);
	&lea		($inp,&DWP(32,$inp));
	&ja		(&label("mod_loop"));

&set_label("even_tail");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	&movdqa		($T3,&QWP(0,$const));
	&test		($len,$len);
	&jnz		(&label("done"));

	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&pshufb		($T1,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg5	($Xhi,$Xi);

	&movdqa		($T3,&QWP(0,$const));
&set_label("done");
	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");

}

&set_label("bswap",64);
	&data_byte(15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);
	&data_byte(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xc2);	# 0x1c2_polynomial
&set_label("rem_8bit",64);
	&data_short(0x0000,0x01C2,0x0384,0x0246,0x0708,0x06CA,0x048C,0x054E);
	&data_short(0x0E10,0x0FD2,0x0D94,0x0C56,0x0918,0x08DA,0x0A9C,0x0B5E);
	&data_short(0x1C20,0x1DE2,0x1FA4,0x1E66,0x1B28,0x1AEA,0x18AC,0x196E);
	&data_short(0x1230,0x13F2,0x11B4,0x1076,0x1538,0x14FA,0x16BC,0x177E);
	&data_short(0x3840,0x3982,0x3BC4,0x3A06,0x3F48,0x3E8A,0x3CCC,0x3D0E);
	&data_short(0x3650,0x3792,0x35D4,0x3416,0x3158,0x309A,0x32DC,0x331E);
	&data_short(0x2460,0x25A2,0x27E4,0x2626,0x2368,0x22AA,0x20EC,0x212E);
	&data_short(0x2A70,0x2BB2,0x29F4,0x2836,0x2D78,0x2CBA,0x2EFC,0x2F3E);
	&data_short(0x7080,0x7142,0x7304,0x72C6,0x7788,0x764A,0x740C,0x75CE);
	&data_short(0x7E90,0x7F52,0x7D14,0x7CD6,0x7998,0x785A,0x7A1C,0x7BDE);
	&data_short(0x6CA0,0x6D62,0x6F24,0x6EE6,0x6BA8,0x6A6A,0x682C,0x69EE);
	&data_short(0x62B0,0x6372,0x6134,0x60F6,0x65B8,0x647A,0x663C,0x67FE);
	&data_short(0x48C0,0x4902,0x4B44,0x4A86,0x4FC8,0x4E0A,0x4C4C,0x4D8E);
	&data_short(0x46D0,0x4712,0x4554,0x4496,0x41D8,0x401A,0x425C,0x439E);
	&data_short(0x54E0,0x5522,0x5764,0x56A6,0x53E8,0x522A,0x506C,0x51AE);
	&data_short(0x5AF0,0x5B32,0x5974,0x58B6,0x5DF8,0x5C3A,0x5E7C,0x5FBE);
	&data_short(0xE100,0xE0C2,0xE284,0xE346,0xE608,0xE7CA,0xE58C,0xE44E);
	&data_short(0xEF10,0xEED2,0xEC94,0xED56,0xE818,0xE9DA,0xEB9C,0xEA5E);
	&data_short(0xFD20,0xFCE2,0xFEA4,0xFF66,0xFA28,0xFBEA,0xF9AC,0xF86E);
	&data_short(0xF330,0xF2F2,0xF0B4,0xF176,0xF438,0xF5FA,0xF7BC,0xF67E);
	&data_short(0xD940,0xD882,0xDAC4,0xDB06,0xDE48,0xDF8A,0xDDCC,0xDC0E);
	&data_short(0xD750,0xD692,0xD4D4,0xD516,0xD058,0xD19A,0xD3DC,0xD21E);
	&data_short(0xC560,0xC4A2,0xC6E4,0xC726,0xC268,0xC3AA,0xC1EC,0xC02E);
	&data_short(0xCB70,0xCAB2,0xC8F4,0xC936,0xCC78,0xCDBA,0xCFFC,0xCE3E);
	&data_short(0x9180,0x9042,0x9204,0x93C6,0x9688,0x974A,0x950C,0x94CE);
	&data_short(0x9F90,0x9E52,0x9C14,0x9DD6,0x9898,0x995A,0x9B1C,0x9ADE);
	&data_short(0x8DA0,0x8C62,0x8E24,0x8FE6,0x8AA8,0x8B6A,0x892C,0x88EE);
	&data_short(0x83B0,0x8272,0x8034,0x81F6,0x84B8,0x857A,0x873C,0x86FE);
	&data_short(0xA9C0,0xA802,0xAA44,0xAB86,0xAEC8,0xAF0A,0xAD4C,0xAC8E);
	&data_short(0xA7D0,0xA612,0xA454,0xA596,0xA0D8,0xA11A,0xA35C,0xA29E);
	&data_short(0xB5E0,0xB422,0xB664,0xB7A6,0xB2E8,0xB32A,0xB16C,0xB0AE);
	&data_short(0xBBF0,0xBA32,0xB874,0xB9B6,0xBCF8,0xBD3A,0xBF7C,0xBEBE);
}}	# $sse2

&set_label("rem_4bit",64);
	&data_word(0,0x0000<<$S,0,0x1C20<<$S,0,0x3840<<$S,0,0x2460<<$S);
	&data_word(0,0x7080<<$S,0,0x6CA0<<$S,0,0x48C0<<$S,0,0x54E0<<$S);
	&data_word(0,0xE100<<$S,0,0xFD20<<$S,0,0xD940<<$S,0,0xC560<<$S);
	&data_word(0,0x9180<<$S,0,0x8DA0<<$S,0,0xA9C0<<$S,0,0xB5E0<<$S);
}}}	# !$x86only

&asciz("GHASH for x86, CRYPTOGAMS by <appro\@openssl.org>");
&asm_finish();

close STDOUT;

# A question was raised about the choice of vanilla MMX. Or rather why
# wasn't SSE2 chosen instead? In addition to the fact that MMX runs on
# legacy CPUs such as PIII, the "4-bit" MMX version was observed to
# provide better performance than a *corresponding* SSE2 one even on
# contemporary CPUs. SSE2 results were provided by Peter-Michael Hager.
# He maintains an SSE2 implementation featuring a full range of
# lookup-table sizes, but with per-invocation lookup table setup. The
# latter means that the table size is chosen depending on how much
# data is to be hashed in every given call; more data - larger table.
# The best reported result for Core2 is ~4 cycles per processed byte
# out of a 64KB block. This number accounts even for the 64KB table
# setup overhead. As discussed in gcm128.c, we choose to be more
# conservative in respect to lookup table sizes, but how do the
# results compare? The minimalistic "256B" MMX version delivers ~11
# cycles on the same platform. As also discussed in gcm128.c, the next
# in line "8-bit Shoup's" or "4KB" method should deliver twice the
# performance of the "256B" one, in other words not worse than ~6
# cycles per byte. It should also be noted that in the SSE2 case the
# improvement can be "super-linear," i.e. more than twice, mostly
# because >>8 maps to a single instruction on an SSE2 register. This
# is unlike the "4-bit" case, where >>4 maps to the same number of
# instructions in both the MMX and SSE2 cases. The bottom line is that
# a switch to SSE2 is considered justifiable only if we choose to
# implement the "8-bit" method...