
Searched refs:our (Results 1 – 25 of 890) sorted by relevance


/external/curl/tests/data/
test1080
33 http://%HOSTIP:%HTTPPORT/we/want/our/1080 http://%HOSTIP:%HTTPPORT/we/want/our/1080 -w '%{redirect_…
43 GET /we/want/our/1080 HTTP/1.1
47 GET /we/want/our/1080 HTTP/1.1
59 http://%HOSTIP:%HTTPPORT/we/want/our/data/10800002.txt?coolsite=yes
66 http://%HOSTIP:%HTTPPORT/we/want/our/data/10800002.txt?coolsite=yes
test1081
41 http://%HOSTIP:%HTTPPORT/we/want/our/1081 http://%HOSTIP:%HTTPPORT/we/want/our/10810002 -w '%{redir…
51 GET /we/want/our/1081 HTTP/1.1
55 GET /we/want/our/10810002 HTTP/1.1
67 http://%HOSTIP:%HTTPPORT/we/want/our/data/10810099.txt?coolsite=yes
test1029
33 http://%HOSTIP:%HTTPPORT/we/want/our/1029 -w '%{redirect_url}\n'
43 GET /we/want/our/1029 HTTP/1.1
55 http://%HOSTIP:%HTTPPORT/we/want/our/data/10290002.txt?coolsite=yes
/external/skia/site/dev/testing/
index.md
4 Skia relies heavily on our suite of unit and Golden Master (GM) tests, which
5 are served by our Diamond Master (DM) test tool, for correctness testing.
6 Tests are executed by our trybots, for every commit, across most of our
16 See the individual subpages for more details on our various test tools.
/external/llvm/docs/tutorial/
LangImpl8.rst
28 our program down to something small and standalone. As part of this
73 First we make our anonymous function that contains our top level
74 statement be our "main":
147 our piece of Kaleidoscope language down to an executable program via this
162 construct one for our fib.ks file.
176 of our IR level descriptions. Construction for it takes a module so we
177 need to construct it shortly after we construct our module. We've left it
180 Next we're going to create a small container to cache some of our frequent
181 data. The first will be our compile unit, but we'll also write a bit of
182 code for our one type since we won't have to worry about multiple typed
[all …]
LangImpl2.rst
14 `parser <http://en.wikipedia.org/wiki/Parsing>`_ for our Kaleidoscope
99 expressions. One thing that is nice about our AST is that it captures
104 For our basic language, these are all of the expression nodes we'll
173 in our parser will assume that CurTok is the current token that needs to
189 The ``Error`` routines are simple helper routines that our parser will
190 use to handle errors. The error recovery in our parser will not be the
191 best and is not particular user-friendly, but it will be enough for our
196 our grammar: numeric literals.
202 process. For each production in our grammar, we'll define a function
249 they happened: in our parser, we return null on an error.
[all …]
/external/skia/site/dev/contrib/
c++11.md
4 Skia uses C++11. But as a library, we are technically limited by what our
5 clients support and what our build bots support.
49 Most of our bots are pretty up-to-date: the Windows bots use MSVC 2013, the Mac
51 bots use a recent toolchain from Android (see above), and our Chrome bots use
52 Chrome's toolchains (see above). I'm not exactly sure what our Chrome OS bots
53 are using. They're probably our weak link right now, though problems are rare.
55 I believe our bots' ability to use C++11 matches Mozilla's list nearly identically.
/external/opencv3/doc/tutorials/imgproc/
table_of_content_imgproc.markdown
60 Where we learn to design our own filters by using OpenCV functions
68 Where we learn how to pad our images!
124 Where we learn how to rotate, translate and scale our images
132 Where we learn how to improve the contrast in our images
172 Where we learn how to find contours of objects in our image
188 Where we learn how to obtain bounding boxes and circles for our contours.
196 Where we learn how to obtain rotated bounding boxes and ellipses for our contours.
/external/skia/site/user/sample/
building.md
66 With the repo created we can go ahead and create our src/DEPS file. The DEPS
67 file is used by gclient to checkout the dependent repositories of our
87 `Var()` accessor. In this case, we define our root directory, a shorter name
94 The `deps` section defines our dependencies. Currently we have one dependency
98 Once done, we can use gclient to checkout our dependencies.
152 First, we need to add GYP to our project. We'll do that by adding a new entry
203 main configuration file for our application is `src/using_skia.gyp`.
239 `configurations` section allows us to have different build flags for our `Debug`
244 The dependencies section lists our build dependencies. These will be built
245 before our sources are built. In this case, we depend on the `skia_lib` target
[all …]
/external/opencv3/doc/py_tutorials/py_calib3d/py_pose/
py_pose.markdown
17 how camera is placed in space to see our pattern image. So, if we know how the object lies in the
20 Our problem is, we want to draw our 3D coordinate axis (X, Y, Z axes) on our chessboard's first
22 should feel like it is perpendicular to our chessboard plane.
48 our X axis is drawn from (0,0,0) to (3,0,0), so for Y axis. For Z axis, it is drawn from (0,0,0) to
59 **cv2.solvePnPRansac()**. Once we those transformation matrices, we use them to project our **axis
62 to each of these points using our draw() function. Done !!!
/external/opencv3/doc/tutorials/imgproc/histograms/back_projection/
back_projection.markdown
29 histogram besides is going to be our *model histogram* (which we know represents a sample of
40 - What we want to do is to use our *model histogram* (that we know represents a skin tonality) to
41 detect skin areas in our Test Image. Here are the steps
42 -# In each pixel of our Test Image (i.e. \f$p(i,j)\f$ ), collect the data and find the
48 -# Applying the steps above, we get the following BackProjection image for our Test Image:
54 use. For instance in our Test image, the brighter areas are more probable to be skin area
88 -# Declare the matrices to store our images and initialize the number of bins to be used by our
99 -# For this tutorial, we will use only the Hue value for our 1-D histogram (check out the fancier
/external/opencv3/doc/tutorials/imgproc/imgtrans/sobel_derivatives/
sobel_derivatives.markdown
126 -# As usual we load our source image *src*:
133 -# First, we apply a @ref cv::GaussianBlur to our image to reduce the noise ( kernel size = 3 )
137 -# Now we convert our filtered image to grayscale:
154 - *src_gray*: In our example, the input image. Here it is *CV_8U*
164 -# We convert our partial results back to *CV_8U*:
170 this is not an exact calculation at all! but it is good for our purposes).
174 -# Finally, we show our result:
182 -# Here is the output of applying our basic detector to *lena.jpg*:
/external/clang/cmake/modules/
ClangConfig.cmake
1 # This file allows users to call find_package(Clang) and pick up our targets.
7 # Provide all our library targets to users.
/external/llvm/docs/HistoricalNotes/
2001-06-20-.NET-Differences.txt
4 Subject: .NET vs. our VM
6 One significant difference between .NET CLR and our VM is that the CLR
23 compiled by the same compiler, whereas our approach allows us to link and
/external/opencv3/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_backprojection/
py_histogram_backprojection.markdown
17 that of our input image, where each pixel corresponds to the probability of that pixel belonging to
18 our object. In more simpler worlds, the output image will have our object of interest in more white
22 **How do we do it ?** We create a histogram of an image containing our object of interest (in our
26 "back-project" this histogram over our test image where we need to find the object, ie in other
/external/opencv3/doc/tutorials/photo/hdr_imaging/
hdr_imaging.markdown
21 In this tutorial we show how to generate and display HDR image from an exposure sequence. In our
48 For our image sequence the list is following:
80 … Since we want to see our results on common LDR display we have to map our HDR image to 8-bit range
90 There is an alternative way to merge our exposures in case when we don't need HDR image. This
/external/libmicrohttpd/doc/chapters/
sessions.inc
4 this is a network protocol, our session mechanism must support having many users with
22 Here, FIXME is the name we chose for our session cookie.
28 cookies. In order to generate a unique cookie, our example creates a random
35 Given this cookie value, we can then set the cookie header in our HTTP response
/external/opencv3/doc/tutorials/imgproc/histograms/histogram_calculation/
histogram_calculation.markdown
27 of information value for this case is 256 values, we can segment our range in subparts (called
45 -# **dims**: The number of parameters you want to collect data of. In our example, **dims = 1**
47 -# **bins**: It is the number of **subdivisions** in each dim. In our example, **bins = 16**
93 our input is the image to be divided (this case with three channels) and the output is a vector
97 with the B, G and R planes, we know that our values will range in the interval \f$[0,255]\f$
108 -# We want our bins to have the same size (uniform) and to clear the histograms in the
113 -# Finally, we create the Mat objects to save our histograms. Creating 3 (one for each plane):
192 -# Finally we display our histograms and wait for the user to exit:
/external/tlsdate/
HARDENING
13 As such, we prefer to be explicit rather than implicit in our casting or other
19 autotools bootstrapping on all of our supported platforms. This is not possible
32 switch to our normal unprivileged account. These users are defined at
36 In addition to the above hardening options, we have tried to extend our
/external/opencv3/doc/py_tutorials/py_ml/py_knn/py_knn_opencv/
py_knn_opencv.markdown
8 - We will use our knowledge on kNN to build a basic OCR application.
17 a 20x20 image. So our first step is to split this image into 5000 different digits. For each digit,
18 we flatten it into a single row with 400 pixels. That is our feature set, ie intensity values of all
56 So our basic OCR app is ready. This particular example gave me an accuracy of 91%. One option
81 like garbage. Actually, in each row, first column is an alphabet which is our label. Next 16 numbers
/external/webrtc/webrtc/
supplement.gypi
40 # Replace Chromium's LSan suppressions with our own for WebRTC.
49 # Replace Chromium's TSan v2 suppressions with our own for WebRTC.
/external/llvm/test/CodeGen/AArch64/
sibling-call.ll
38 ; This should reuse our stack area for the 42
48 ; Shouldn't be a tail call: we can't use SP+8 because our caller might
60 ; Reuse our area, putting "42" at incoming sp
/external/opencv3/doc/tutorials/imgproc/imgtrans/warp_affine/
warp_affine.markdown
65 …-# We know both \f$X\f$ and T and we also know that they are related. Then our job is to find \f$…
98 -# Declare some variables we will use, such as the matrices to store our results and 2 arrays of
99 points to store the 2D points that define our Affine Transform.
149 We just got our first transformed image! We will display it in one bit. Before that, we also
169 -# We now apply the found rotation to the output of our previous Transformation.
173 -# Finally, we display our results in two windows plus the original image for good measure:
/external/skia/gyp/
common_variables.gypi
9 # - We have to nest our variables dictionaries multiple levels deep, so that
20 # which we currently import into our build, uses the value of 'os_posix'
32 # within our ridiculous matryoshka doll of 'variable' dicts. That's why
47 # We use 'skia_os' instead of 'OS' throughout our gyp files, to allow
263 # These are referenced by our .gypi files that list files (e.g. core.gypi)
/external/opencv3/doc/tutorials/introduction/load_save_image/
load_save_image.markdown
60 -# Now we are going to convert our image from BGR to Grayscale format. OpenCV has a really nice
73 -# So now we have our new *gray_image* and want to save it on disk (otherwise it will get lost
79 Which will save our *gray_image* as *Gray_Image.jpg* in the folder *images* located two levels
