On Sun, Nov 11, 2012 at 10:24:47AM +0200, David Baron wrote:
> Every version upgrade of Dovecot causes config problems (running Sid). I
> hand-edited the doveconf -n stuff in after choosing to keep my old .conf
> and not being able to get the choice again. Got it working, finally.

If you're using dovecot in sid, you should have noticed that it's migrated to the conf.d/* style configuration. Put any modifications YOU make into a separate file (say, 99-local.conf) and revert the upstream files to as-shipped. That way, when you upgrade, the conffiles will get upgraded cleanly and your configuration will override the Debian defaults. The hazard of doing that, though, is that if a configuration variable is removed or your config becomes invalid, you won't be told.

> KMail seems to be getting my imap stuff but I get an error box each time:
> Error while getting folder information.
> Unable to get information about folder ~/mail. The server replied: Mailbox
> isn't selectable
>
> No combination of namespace prefix, kmail imap account namespace, etc.,
> seems to change this. Any ideas??
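To make the 99-local.conf approach concrete, here is a hedged sketch of such an override file. The setting shown (mail_location) is only an illustration of the pattern, not a recommendation for any particular setup; the point is that dovecot reads conf.d/ files in lexical order, so a late-sorting file wins:

```conf
# /etc/dovecot/conf.d/99-local.conf -- local overrides only (illustrative)
# The packaged files in conf.d/ stay unmodified, so dpkg can upgrade them
# cleanly; directives placed here are read last and override the Debian
# defaults shipped in the earlier-numbered files.
mail_location = maildir:~/Maildir
```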
https://lists.debian.org/debian-user/2012/11/msg00484.html
Hi, I was wondering if someone could help me. I have a template function where, based on the type passed in, I have to either access a struct member or just print the value. Yet when I try to compile this it fails with tons of errors. How would I go about fixing this?

Code:

#include <typeinfo>
#include <iostream>
using namespace std;

typedef struct {
    double a;
    double b;
} A;

template<class E>
void check_el(E val) {
    if(typeid(E) == typeid(int)) {
        cout << val << endl;
    } else if(typeid(E) == typeid(A)) {
        double t = val.a;   // fails to compile when E is int: int has no member 'a'
        cout << t << endl;
    } else {
        cout << "NOTHING" << endl;
    }
}

int main() {
    A a = {1, 2};
    check_el<A>(a);
    check_el<int>(5);
    return 0;
}

Thanks a lot for any help,
Tony
http://cboard.cprogramming.com/cplusplus-programming/83742-distinguishing-types.html
Created on 2004-03-02 17:41 by drochner, last changed 2008-03-22 08:13 by georg.brandl. This issue is now closed.

(This applies also to __setslice__ and possibly more.) (This was already present in Python-2.2.) If the upper slice index is omitted, a default is passed to the __getslice__ method. Documentation claims this is sys.maxint. This is wrong if INT_MAX and LONG_MAX differ: what is passed is INT_MAX, while sys.maxint is LONG_MAX. I'm not sure whether to call it a code bug or a documentation bug; at least there is code out there which compares to sys.maxint. The whole code path from ceval.c:_PyEval_SliceIndex() to operator.c:op_getslice() to abstract.c:PySequence_GetSlice() to classobject.c:instance_slice() has just room for an "int", so a code fix is pretty invasive.

A small test program to check this:

==========
import sys

class sl(object):
    def __getslice__(self, a, b):
        return (a, b)

print "sys.maxint = %ld" % sys.maxint
bounds = sl()[:]
print "default bounds = [%ld, %ld]" % bounds
==========

gives on NetBSD/amd64:

sys.maxint = 9223372036854775807
default bounds = [0, 2147483647]

Logged In: YES user_id=1188172
Do we want to support long slice indices?

This has been fixed around r42454. Closing as Fixed.
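For context, __getslice__ was removed in Python 3; an omitted slice bound now reaches __getitem__ as None inside a slice object, which sidesteps the INT_MAX/sys.maxint ambiguity described in this issue entirely. A minimal sketch (the class name is mine):

```python
class SliceProbe:
    """Return the key object passed to __getitem__, for inspection."""
    def __getitem__(self, key):
        return key

s = SliceProbe()[:]          # both bounds omitted
print(s.start, s.stop)       # None None -- no maxint-style sentinel value
print(s.indices(10))         # (0, 10, 1): bounds resolved against a concrete length
```

Because the sentinel is None rather than a platform-dependent integer, code no longer needs to compare against sys.maxint to detect an omitted bound.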
http://bugs.python.org/issue908441
Round Round Get Around: Why Fixed-Point Right-Shifts Are Just Fine

To multiply a number \( x \) by 0.98765, we could approximate by computing (K * x) >> 15, where \( K \approx 0.98765 \times 2^{15} \):

K_exact = 0.98765
K = K_exact * (1 << 15)
K
32363.3152

x = 2016
x*K_exact
1991.1024

(x * 32363) >> 15
1991

Whee, close enough. As I’ve said before: the only embedded systems that require precision floating-point computations are desktop calculators.

Now, when we performed this calculation, we made two approximations in which we had to throw away information to avoid the need to increase storage sizes. One was a design-time approximation, where we decided that \( K = 32363 \) was a good enough approximation to \( 0.98765 \times 2^{15} = 32363.3152 \). (I’m not going to get rigorous on this stuff, by the way; if you want rigor, read Knuth’s The Art of Computer Programming, volume 2, covering floating-point arithmetic, where he distinguishes exact operations \( +, -, \times, \div \) from the computer’s floating-point arithmetic operations \( \oplus, \ominus, \otimes, \oslash \), and gets into ulp and \( \delta \) and \( \epsilon \) and things like that.) The other was a runtime approximation: that the computing environment’s right-shift >> operator was an appropriate way to divide by \( 2^{15} \) and discard the least significant bits. The exact computation does not produce an integer result:

(x * 32363) / 32768.0
1991.0830078125

You look at these numbers and say of course, 32363 is a good approximation to 32363.3152, and 1991 is a good approximation to 1991.0830078125. But that’s because in both cases, truncating (just throwing away the low-order bits or digits) happened to give us the integer values that are closest to the exact numbers we’re trying to approximate. Truncation is only one way to handle approximation by integers.
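As a sanity check on what truncation costs (this sketch is mine, not from the article), we can measure the truncation error of the \( K = 32363 \) multiplier over a range of inputs. Since Python's >> floors toward negative infinity, the error relative to the ideal shifted product x*K/2^15 always lands in [0, 1):

```python
K = 32363                      # design-time approximation of 0.98765 * 2**15
# error of (x*K) >> 15 relative to the exact quotient x*K / 2**15
errs = [x * K / 2.0**15 - ((x * K) >> 15) for x in range(-2048, 2048)]
print(min(errs) >= 0, max(errs) < 1)   # -> True True: truncation always rounds downward
```

Note that this isolates the runtime (shift) error only; the design-time error from approximating 0.98765 by 32363/32768 is on top of this.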
It’s also usually the easiest way, since computer languages such as C or Python use truncation when handling integer division and right-shifting, and the underlying computer architectures have an arithmetic logic unit (ALU) that does the same… which is why the computer languages tend to have identical behavior, so they can both mirror and make efficient use of their underlying hardware.

Actually, we need to be a little careful here. Truncation is tricky, because it’s not the same operation when you deal with printed numbers and their two’s complement representations in a computer, at least not when negative numbers are involved. Numerical truncation to an integer is throwing away the digits to the right of the decimal point, effectively rounding towards zero. Bitwise truncation is throwing away all the bits to the right of the binary point, which with two’s complement representation always rounds downward. If we had the number -3.34375, this could be represented exactly in two’s complement as 11111100.10101000, which truncates to 11111100 = -4.

int('1111110010101000',2) / 256.0 - 256
-3.34375

Anyway, all the numbers turned out nicely in the examples above. But suppose we wanted to multiply by 0.56789:

0.56789 * (1<<15)
18608.61952

Now we have a choice. We could use \( K = 18608 \), which we obtained by truncation. Or we could use \( K = 18609 \), which we obtained by rounding to the nearest integer; this has a smaller error. As far as the result of our fixed-point multiplication goes:

print "exact: ", 0.56789*x
for K in [18608, 18609]:
    print (K * x) / 32768.0

exact:  1144.86624
1144.828125
1144.88964844

Again, we have a choice; we could use bitwise truncation and round down to 1144. Or we could round to the nearest integer, 1145, which has smaller error. For design-time calculations, where there’s almost no cost, because we can do them on a PC and we only need to go through them once, it is always better to round to the nearest integer rather than round down.
There are, however, rare occasions where rounding towards zero is a better approach, because of other constraints that allow us to reduce but not increase the magnitude of the numbers involved. (Rotation matrix coefficients, for example, in a series of fixed-point iterations, have the potential to make their output larger, risking overflow, if they are rounded to the nearest integer, whereas rounding towards zero guarantees that each iteration does not increase in magnitude.)

Runtime calculations, on the other hand, have a slight increase in cost for rounding compared to bitwise truncation. This is because fixed-point rounding to the nearest integer is not a common feature, either in microcontroller architectures or in computer languages (it is not present in C, whose division and right-shift operators do not round). Achieving round-to-the-nearest-integer behavior isn’t hard, though; when right-shifting by \( N \), just add in \( 2^{N-1} \) (which is the fixed-point equivalent of 0.5) first:

print (18608 * x) >> 15
print (18608 * x + 16384) >> 15

1144
1145

This costs an extra operation, which can add up on an embedded system. Some specialized DSP instructions on certain architectures do include automatic rounding facilities, though: the SAC.R instruction on Microchip’s dsPIC30F and dsPIC33 cores, for example. (Floating-point rounding modes, on the other hand, do appear to be common in many architectures, primarily because they were written into the IEEE-754 standard, and are even included in the standard libraries of C, C++, and Java.) If accuracy is very important, rounding-to-nearest-integer at runtime will produce an integer result that has smaller error than bitwise truncation. My contention is that this is not necessary in many applications, and I will explain why in a moment.

Biased vs. unbiased rounding

The technique of adding 0.5 and then rounding down ( round(x) = floor(x+0.5) ) is fairly common, and in many PC calculations, that’s all that’s needed. For some applications this isn’t enough, because this type of rounding is biased: values of exactly \( m + 0.5 \) for integer \( m \) will always round to \( m+1 \), creating a small positive bias in the average error over the set of input operands. One technique for unbiased rounding is, when encountering these numbers that are exactly halfway between integers, to round to the nearest even integer, namely:

$$ \begin{array}{rcr} \mathrm{round}(-3.5) & \rightarrow & -4\cr \mathrm{round}(-2.5) & \rightarrow & -2\cr \mathrm{round}(-1.5) & \rightarrow & -2\cr \mathrm{round}(-0.5) & \rightarrow & 0\cr \mathrm{round}(+0.5) & \rightarrow & 0\cr \mathrm{round}(+1.5) & \rightarrow & 2\cr \mathrm{round}(+2.5) & \rightarrow & 2\cr \mathrm{round}(+3.5) & \rightarrow & 4\cr \end{array} $$

This ensures there is no net bias, and the average error over extremely large numbers of calculations, with inputs well-distributed, is zero. That sounds a bit pedantic, especially in the floating-point world, where encountering numbers exactly between two integers seems extremely rare. Knuth points out, however, in TAOCP (section 4.2.2 in the Third Edition) that

No rounding rule can be best for every application. For example, we generally want a special rule when computing our income tax. But for most numerical calculations the best policy appears to be the rounding scheme specified in Algorithm 4.2.1N, which insists that the least significant digit should always be made even (or always odd) when an ambiguous value is rounded. This is not a trivial technicality, of interest only to nit-pickers; it is an important practical consideration, since the ambiguous case arises surprisingly often and a biased rounding rule produces significantly poor results.
For example, consider decimal arithmetic and assume that remainders of 5 are always rounded upwards. Then if \( u = 1.0000000 \) and \( v = 0.55555555 \) we have \( u \oplus v = 1.5555556 \); and if we floating-subtract \( v \) from this result we get \( u' = 1.0000001 \). Adding and subtracting \( v \) from \( u' \) gives 1.0000002, and the next time we get 1.0000003, etc.; the result keeps growing although we are adding and subtracting the same value. This phenomenon, called drift, will not occur when we use a stable rounding rule based on the parity of the least significant digit. More precisely:

Theorem D. \( \bigl(\left(\left(u \oplus v\right) \ominus v\right) \oplus v\bigr) \ominus v = (u \oplus v) \ominus v \).

For example, if \( u = 1.2345679 \) and \( v = -0.23456785 \), we find

$$\begin{array}{rclrcl} u\oplus v &=& 1.0000000, & (u\oplus v) \ominus v &=& 1.2345678, \cr \left(\left(u\oplus v\right)\ominus v\right) \oplus v &=& 1.0000000, & \bigl(\left(\left(u\oplus v\right) \ominus v\right) \oplus v\bigr) \ominus v &=& 1.2345678. \cr \end{array}$$

The proof for general \( u \) and \( v \) seems to require a case analysis even more detailed than that in the theorems above; see the references below.

Aside from the rigorous theory, which may turn off some readers, this is the only account I have found of why unbiased rounding is important; it’s not just some statistical thing where if you sum up one trillion numbers you’re off by 0.001%, but rather there are relatively simple situations in which small errors can accumulate.

Rounding in embedded systems

Okay, here we get to the practical stuff. My assertion is that you don’t need rounding in the vast majority of cases in embedded systems; bitwise truncation is just fine. Why? Because there are other errors which are much larger, and drown out the errors caused by truncation. It’s like dusting the shelves and washing the floors in a house before you bring a couple of sheep inside along with some hay bales.
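Incidentally, Knuth's drift example is easy to reproduce with Python's decimal module, which lets us pick the rounding rule explicitly (this sketch is mine, not from the article): with round-half-up, the repeated add/subtract creeps upward by one ulp per iteration, while round-half-to-even is stable.

```python
from decimal import Decimal, localcontext, ROUND_HALF_UP, ROUND_HALF_EVEN

def add_sub(rounding, n=3):
    """Repeatedly compute u = (u + v) - v at 8 significant digits."""
    with localcontext() as ctx:
        ctx.prec = 8
        ctx.rounding = rounding
        u = Decimal('1.0000000')
        v = Decimal('0.55555555')
        for _ in range(n):
            u = (u + v) - v
        return u

print(add_sub(ROUND_HALF_UP))    # 1.0000003 -- drifts upward, one ulp per pass
print(add_sub(ROUND_HALF_EVEN))  # 1.0000000 -- stable, as Theorem D promises
```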
Or trying to walk silently when someone’s playing Won’t Get Fooled Again loudly on their stereo. I mean, what’s the point? Let’s look at an example, again multiplication by 0.98765, so we’re comparing

$$\begin{eqnarray} y &=& 0.98765x \cr y_{t} &=& \left\lfloor \frac{32363x}{32768} \right\rfloor \cr y_{r} &=& \left\lfloor \frac{32363x}{32768} + 0.5 \right\rfloor \end{eqnarray}$$

import numpy as np
import matplotlib.pyplot as plt

xmin = -2048
xmax = 2048
x = np.arange(xmin,xmax)
y = x*0.98765
fig = plt.figure()
for k,ofs,subscript in ([1,0,'t'],[2,16384,'r']):
    yq = (x*32363 + ofs) >> 15
    ax = fig.add_subplot(2,1,k)
    ax.plot(x,y-yq)
    ax.set_xlim(xmin,xmax)
    ax.set_xlabel('$x$',fontsize=18)
    ax.set_ylabel('$\\tilde{y} = y-y_%s$' % subscript, fontsize=18)

It’s pretty easy to see the results: in the case of truncation the error band is essentially between 0 and 1 count; in the case of rounding, the error band is essentially between -0.5 and 0.5 counts. I say “essentially” because there is a small gain error, since we’re multiplying by 32363/32768 instead of exactly 0.98765. In any case, we have an error with a peak-to-peak value of 1 count. Bitwise truncation puts the error band between 0 and 1 count, with a mean bias of 0.5 counts; simple rounding to the nearest integer brings the mean bias down to essentially \( 2^{-Q} = 1/32768 \) counts, where \( Q \) is the right-shift count; we need unbiased rounding if we want a mean bias of zero.

Now let’s say that we get our input \( x \) from an analog-to-digital converter (ADC). The ADC itself is going to have some offset error, typically in the neighborhood of 1-10 counts depending on the number of bits, and integral and differential nonlinearity of several counts.
Let’s say I use a dsPIC33EP256MC506 (this is the part that I’ve used the most over the past 4 years); its specs for the 10-bit conversion mode in the -40°C to +85°C range are

- INL: ±0.625 counts (0.061% of fullscale)
- DNL: ±0.25 counts (0.024% of fullscale)
- offset: ±1.25 counts (0.122% of fullscale)

or a total worst-case of ±2.5 counts of error (0.244% of fullscale) of various types. That’s offset + INL + half of DNL + 1/2 count, because INL essentially tells you how far off you are from linear at the center of one of the ADC quanta, and DNL tells you how large the ADC quantum size is at worst-case on top of the normal 1 count. We typically scale the ADC results so that their fullscale coincides with the full numeric range (multiply by 64 in this case to go from 10-bit to 16-bit), so ±2.5 counts at the ADC = ±160 counts when scaled up to fill the range of a 16-bit number. (Again, 0.244% of fullscale.) I’d probably only scale to 80% of the full numeric range, to leave some margin for overflow, but we’re still talking the same order of magnitude, so let’s just keep it at 160 counts.

That’s just the ADC. Now we also have to include sensor and signal-conditioning errors; let’s say we’re doing current sensing and we have a 10× gain system with a pretty decent op-amp that has 1mV max input offset voltage, and we’re going into an ADC input with a range of 0 - 3.3V: that’s 10mV / 3.3V * 65536 = 198.6 counts, or 0.303% of fullscale.

So here are the kinds of worst-case bias errors we’re going to see if we take our ADC reading and use fixed-point multiplication by a number that isn’t a power of 2:

- ADC: 160 counts (0.244% of fullscale)
- external signal conditioning: 199 counts (0.303% of fullscale)
- arithmetic errors: 0.5 counts (0.0015% of fullscale)

Hence my point that these other error sources drown out the arithmetic errors; in a system they can all be lumped together into some value of offset error, rather than trying to correct for the 1/2 count error caused by truncation.
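The arithmetic in this error budget is easy to tally. The numbers below are taken from the dsPIC33EP example above; the op-amp figure assumes the 10× gain stage and 0-3.3V ADC range just described:

```python
# 10-bit ADC worst-case error, in 10-bit counts:
# offset + INL + half of DNL + the normal half-count of quantization
offset, inl, dnl = 1.25, 0.625, 0.25
adc_counts_10b = offset + inl + dnl / 2 + 0.5
print(adc_counts_10b)           # 2.5 counts (0.244% of fullscale)

# scaled up to a 16-bit working range (multiply by 64)
print(adc_counts_10b * 64)      # 160.0 counts

# 1 mV op-amp input offset, times 10 gain, into a 0-3.3 V ADC range,
# expressed in 16-bit counts
print(0.010 / 3.3 * 65536)      # ~198.6 counts (0.303% of fullscale)
```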
Or if offset correction is important, then this offset will just be part of any calibration error. If your arithmetic isn’t being done on ADC values, but rather on exact values that are first produced in the digital domain (e.g. you’re simulating something), then the only error is arithmetic error, and maybe it’s important to you.

Rounding in other operations besides multiplication

OK, suppose you’re doing something else besides multiplication, like an integrator. DC offset in an integrator is bad, right? It will cause the integrator to zoom off to one of its limits and saturate! Well, again, if the integrator input comes from some real-world-related value, it’s going to have offset anyway, which is generally much higher than the offset due to rounding. And most integrators occur in a feedback system where the output affects the sensed input and avoids saturation. The effects of rounding bias on low-pass filters are less clear. Let’s give them a try.

def make_lpf(alpha, Nshift=None, round_nearest=False):
    if Nshift is not None:
        K = int(alpha * (1 << Nshift))
        ofs = 1 << (Nshift-1) if round_nearest else 0
    def f(x):
        y = np.zeros_like(x)
        n = len(x)
        y32 = 0
        ytop = 0
        if Nshift is None:
            for i in xrange(1,n):
                y[i] = y[i-1] + alpha*(x[i]-y[i-1])
        else:
            for i in xrange(1,n):
                y32 += K*(x[i]-ytop)
                ytop = (y32 + ofs) >> Nshift
                y[i] = ytop
        return y
    return f

alpha = 0.005
lpf_exact = make_lpf(alpha)
lpf_q16 = make_lpf(alpha,16)
lpf_q16_round = make_lpf(alpha,16,round_nearest=True)

dt = 0.0001
t = np.arange(0,20000)*dt
f = 64
xref = 0.2*np.sin(6*t) + 0.4
x = (((f*t) % 1) > xref) * 0.4
x_q16 = (x*65536).astype(np.int16)
y = lpf_exact(x)
y_q16 = lpf_q16(x_q16)
y_q16r = lpf_q16_round(x_q16)
y_q16_scaled = y_q16 / 65536.0
y_q16r_scaled = y_q16r / 65536.0

fig = plt.figure(figsize=(9,8))
ax = fig.add_subplot(4,1,1)
ax.plot(t, x)
ax.set_ylabel('x')
ax = fig.add_subplot(4,1,2)
ax.plot(t, y, t, y_q16_scaled, t, y_q16r_scaled)
ax.set_ylabel('y')
ax = fig.add_subplot(4,1,3)
ax.plot(t, y - y_q16_scaled, label='floor')
ax.plot(t, y - y_q16r_scaled, label='round')
ax.set_ylabel('y_error')
ax.legend()
ax = fig.add_subplot(4,1,4)
ax.plot(t, y_q16_scaled - y_q16r_scaled)
ax.set_ylabel('delta(y_error)')

Here we have a pulse-width-modulated waveform going through a low-pass filter with a time constant of \( \Delta t/\alpha = 0.2 \mathrm{s} \). Each of the fixed-point implementations is a good approximation to the floating-point implementation, typically within 0.0001 = 6.5 counts of output; the error between the two fixed-point implementations is mostly zero, with sporadic values of ±0.000015 = 1 count of output. That’s just one anecdotal example, but you get the point. Other algorithms like a Fast Fourier Transform, or a Kalman filter, or an IIR filter with a large number of taps, may be more sensitive to the effects of truncation vs. rounding, but in those cases, if you’re using fixed-point arithmetic, you should be studying your implementation very carefully to ensure that it’s doing the right thing, and the issue of rounding rather than truncation is only one of several important concerns, including overflow, scaling, and limit cycles.

Wrapup

You can reduce the numerical errors in your embedded system by rounding instead of using bitwise truncation (right-shifts) in fixed-point arithmetic. For runtime arithmetic, it may cost a few extra instruction cycles for each operation, and the decrease in error is often very small compared to other sources of error in the analog domain. So in many cases rounding at runtime may not be worth the effort.

Happy Thanksgiving!

Previous post by Jason Sachs: Scorchers, Part 1: Tools and Burn Rate
Next post by Jason Sachs: How to Succeed in Motor Control: Olaus Magnus, Donald Rumsfeld, and YouTube

Hi Jason, Very nice blog! Very practical.
-------------------------------

Fixed-point right-shifts may be just fine, mathematically, to achieve the results advocated and demonstrated by the author. However, most embedded programming is done in C and C++; the author's own examples seem to be written in C. The way right-shift is implemented (effectively arithmetic or logical) is not specified in the language standards but is left to the compiler-writer. This makes using right-shift as an arithmetic operator non-portable between toolchains unless it is used *only on unsigned integers* - an important fact which should have been highlighted in the article. Better just to divide by the appropriate power of 2. That's portable, and a modern compiler will generate the most efficient code possible (basically, a proper arithmetic shift)!

> a modern compiler will generate the most efficient code possible

A good modern compiler will, at high levels of optimization. I ran into this issue with XC16 recently; I don't remember the details, but I needed to use right-shifts instead of /2.

> This makes using right-shift as an arithmetic operator non-portable between toolchains unless it is used *only on unsigned integers*

You're correct on this one. For signed integers, -fwrapv will work in gcc and clang, as I mentioned in the overflow article that I referred to in this one.
https://www.embeddedrelated.com/showarticle/1015.php
Hi All, I am used to having the launcher provide proxy detection and setting the Java environment with the host/port info. If you do not, then apps launched from AppUp will work from home but not in a corporate environment, which would affect games especially. I am looking into Proxy-Vole, but so far it does not detect the proxy correctly even when a PAC file is used by a corporation. Web Start works great, but Intel does not use that deployment tool, and the Web Start API does not expose the methods to get the proxy info. Any ideas? Thanks, -Tony

Java* (Archived) New Jar Signing Requirement... Hi Intel, What jars are expected to be signed? The ones added via the packaging utility? What about the big jar that the packaging facility creates? Thanks.

Increasing version number? Hi Guys, My Java application is already published in AppUp and I want to publish a new version. I have tried to update, but it says "version number has already been published or rejected. Please enter new version number". How do I increase the version for a Java app? Help me! Thanks in advance! -R.

Guys, get started with Series 40 web apps. With Ovi Browser support for Series 40 web apps, the Ovi Browser client can execute JavaScript code in web apps. This code makes it possible to create interactive user interfaces and graphical transitions to deliver users beautiful web experiences. Traditionally, proxy-based browsing has offered users a very limited experience, because such browsers typically do nothing more than paint content provided by a proxy. This has changed; now, using Mobile Web Library, the Ovi Browser client can execute JavaScript code in web apps.

App works in eclipse with debugger on but does not in validator...
Hi All, I have my app working in Eclipse with the debugger on, but it fails with the validator. The validator launches some command window, some text is written into it, and then it disappears. Am I allowed to use the debug ID in the app when I run the validator? If yes, how do I get the error message that shows up on the command window? Thanks, -Tony

Mobile Hands-on Labs v1.5 for Java ME I.

Authorization issue Hello All, I had an authorization problem with my Java application. When I use the debug GUID (0x11111111,0x11111111,0x11111111,0x11111111) it shows as authorized, but when I use my real GUID (0x12AF696F,0x6CE34E03,0x9265F1B2,0xD94F0EF0) it is not authorized. Here I am giving you the sample code; please help me to solve this issue.

public class TicTacToeGame extends Application {
    public TicTacToeGame(ApplicationId id) throws InitializationException, UnauthorizedException, AdpRuntimeException {
        super(id);
    }
    public static void main(String[] args) {
        TicTacToeGame tictactoe = null;
        try {

JavaOne and Oracle Develop 2011!!! My development team is largely Java based. I am aware of two leading conferences 'JavaOne and Oracle Develop' to be held this year in the month of May in Hyderabad, for which I even received an email. You can get more information about the two conferences on the link:
https://software.intel.com/fr-fr/taxonomy/term/41252?page=2
A signal that conveys user-interface events. More...

#include <Wt/WSignal.h>

A signal that conveys user-interface events.

An EventSignal is a special Signal that may be triggered by user interface events such as a mouse click, key press, or focus change. They are made available through the library in widgets like WInteractWidget, and should not be instantiated directly.

In addition to the behaviour of Signal, they are capable of executing both client-side and server-side slot code. They may learn JavaScript from C++ code, through stateless slot learning, when connected to a slot that has a stateless implementation, using WObject::implementStateless(). Or they may be connected to a JSlot which provides manual JavaScript code. They typically relay UI event details, using event details objects like WKeyEvent or WMouseEvent.

Connects a slot that takes no arguments. If a stateless implementation is specified for the slot, then the visual behaviour will be learned in terms of JavaScript, and will be cached on the client side for instant feedback, in addition to running the slot on the server. The slot is a method of an object target of class T, which equals class V, or is a base class of class V. In addition, to check for stateless implementations, class T must also be a descendant of WObject. Thus, the following statement must return a non-null pointer:

Connects a slot that takes one argument. This is only possible for signals that take at least one argument.

Connects a slot that takes a 'const argument&'. This is only possible for signals that take at least one argument.

Connects a JavaScript function. This will provide a client-side connection between the event and a JavaScript function. The argument must be a JavaScript function which optionally accepts two arguments (the object and the event). Unlike a JSlot, there is no automatic connection management: the connection cannot be removed.
If you need automatic connection management, you should use connect(JSlot&) instead.

Connects a slot that is specified as JavaScript only. This will provide a client-side connection between the event and some JavaScript code as implemented by the slot. Unlike other connects, this does not cause the event to be propagated to the application, and thus the state changes caused by the JavaScript slot are not tracked client-side. The connection is tracked, taking into account the life-time of the JSlot object, and can be updated by modifying the slot. If you do not need connection management (e.g. because the slot has the same life-time as the signal), then you can use connect(const std::string&) instead.

Connects to a slot. Every signal can be connected to a slot which does not take any arguments (and may thus ignore the signal's arguments). Implements Wt::SignalBase.

Emits the signal. This will cause all connected slots to be triggered, with the given argument.
http://www.webtoolkit.eu/wt/doc/reference/html/classWt_1_1EventSignal.html
Part 15: Tested Poetry

This continues the introduction started here. You can find an index to the entire series here.

Introduction

We’ve written a lot of code in our exploration of Twisted, but so far we’ve neglected to write something important — tests. And you may be wondering how you can test asynchronous code using a synchronous framework like the unittest package that comes with Python. The short answer is you can’t. As we’ve discovered, synchronous and asynchronous code do not mix, at least not readily. Fortunately, Twisted includes its own testing framework called trial that does support testing asynchronous code (and you can use it to test synchronous code, too).

We’ll assume you are already familiar with the basic mechanics of unittest and similar testing frameworks, in which you create tests by defining a class with a specific parent class (usually called something like TestCase), and each method of that class starting with the word “test” is considered a single test. The framework takes care of discovering all the tests, running them one after the other with optional setUp and tearDown steps, and then reporting the results.

The Example

You will find some example tests located in tests/test_poetry.py. To ensure all our examples are self-contained (so you don’t need to worry about PYTHONPATH settings), we have copied all the necessary code into the test module. Normally, of course, you would just import the modules you wanted to test. The example tests both the poetry client and server, by using the client to fetch a poem from a test server. To provide a poetry server for testing, we implement the setUp method in our test case:

class PoetryTestCase(TestCase):

    def setUp(self):
        factory = PoetryServerFactory(TEST_POEM)
        from twisted.internet import reactor
        self.port = reactor.listenTCP(0, factory, interface="127.0.0.1")
        self.portnum = self.port.getHost().port

The setUp method makes a poetry server with a test poem and listens on a random, open port.
We save the port number so the actual tests can use it, if they need to. And, of course, we clean up the test server in tearDown when the test is done:

    def tearDown(self):
        port, self.port = self.port, None
        return port.stopListening()

That brings us to our first test, test_client, where we use get_poetry to retrieve the poem from the test server and verify it’s the poem we expected:

    def test_client(self):
        """The correct poem is returned by get_poetry."""
        d = get_poetry('127.0.0.1', self.portnum)

        def got_poem(poem):
            self.assertEquals(poem, TEST_POEM)

        d.addCallback(got_poem)
        return d

Notice that our test function is returning a deferred. Under trial, each test method runs as a callback. That means the reactor is running and we can perform asynchronous operations as part of the test. We just need to let the framework know that our test is asynchronous, and we do that in the usual Twisted way — return a deferred. The trial framework will wait until the deferred fires before calling the tearDown method, and will fail the test if the deferred fails (i.e., if the last callback/errback pair fails). It will also fail the test if our deferred takes too long to fire, two minutes by default. And that means if the test finished, we know our deferred fired, and therefore our callback fired and ran the assertEquals test method.

Our second test, test_failure, verifies that get_poetry fails in the appropriate way if we can’t connect to the server:

    def test_failure(self):
        """The correct failure is returned by get_poetry when
        connecting to a port with no server."""
        d = get_poetry('127.0.0.1', 0)
        return self.assertFailure(d, ConnectionRefusedError)

Here we attempt to connect to an invalid port and then use the trial-provided assertFailure method. This method is like the familiar assertRaises method, but for asynchronous code. It returns a deferred that succeeds if the given deferred fails with the given exception, and fails otherwise.
You can run the tests yourself using the trial script like this:

    trial tests/test_poetry.py

And you should see some output showing each test case and an OK telling you each test passed.

Discussion

Because trial is so similar to unittest when it comes to the basic API, it’s pretty easy to get started writing tests. Just return a deferred if your test uses asynchronous code, and trial will take care of the rest. You can also return a deferred from the setUp and tearDown methods, if those need to be asynchronous as well.

Any log messages from your tests will be collected in a file inside a directory called _trial_temp that trial will create automatically if it doesn’t exist. In addition to the errors printed to the screen, the log is a useful starting point when debugging failing tests.

Figure 33 shows a hypothetical test run in progress.

If you’ve used similar frameworks before, this should be a familiar model, except that all the test-related methods may return deferreds.

The trial framework is also a good illustration of how "going asynchronous" involves changes that cascade throughout the program. In order for a test (or any function or method) to be asynchronous, it must:

- Not block and, usually,
- return a deferred.

But that means that whatever calls that function must be willing to accept a deferred, and also not block (and thus likely return a deferred as well). And so it goes up and up. Thus, the need for a framework like trial which can handle asynchronous tests that return deferreds.

Summary

That’s it for our look at unit testing. If you would like to see more examples of how to write unit tests for Twisted code, you need look no further than Twisted itself. The Twisted framework comes with a very large suite of unit tests, with new ones added in each release. Since these tests are scrutinized by Twisted experts during code reviews before being accepted into the codebase, they make excellent examples of how to test Twisted code the right way.
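As a closing comparison: the "tests themselves may be asynchronous" model that trial uses has since appeared in the standard library as well. The sketch below is not Twisted code; it uses asyncio and unittest.IsolatedAsyncioTestCase (Python 3.8+) to mimic what trial does with deferreds, with the poem fetch reduced to a stub coroutine:

```python
import asyncio
import unittest

TEST_POEM = "This is a test poem."

async def get_poetry():
    # Stub standing in for an asynchronous network fetch.
    await asyncio.sleep(0)
    return TEST_POEM

class PoetryTestCase(unittest.IsolatedAsyncioTestCase):
    async def test_client(self):
        # Awaiting here plays the role of returning a deferred to trial:
        # the framework runs the event loop until the coroutine finishes.
        poem = await get_poetry()
        self.assertEqual(poem, TEST_POEM)

    async def test_failure(self):
        async def failing_fetch():
            raise ConnectionRefusedError("no server listening")
        # Rough analogue of trial's assertFailure.
        with self.assertRaises(ConnectionRefusedError):
            await failing_fetch()

suite = unittest.defaultTestLoader.loadTestsFromTestCase(PoetryTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```

Just as with trial, the framework notices that the test method is asynchronous and keeps the event loop running until it completes, failing the test if the coroutine raises.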
In Part 16 we will use a Twisted utility to turn our poetry server into a genuine daemon.

Suggested Exercises

- Change one of the tests to make it fail and run trial again to see the output.
- Read the online trial documentation.
- Write tests for some of the other poetry services we have created in this series.
- Explore some of the tests in Twisted.

17 thoughts on “Tested Poetry”

It seems that the program can go well even if I don’t make a factory instance in the service class. (Or is the factory instance made implicitly?) I was wondering what the difference is between making an instance in the service class and not making one?

Could you post your code to a pastebin or github gist?

Thank you for the reply. So basically, 3 classes:

    class SchedulerProtocol(NetstringReceiver):
    class SchedulerFactory(ServerFactory):
    class SchedulerService(service.Service):

In the service class, I don’t make a factory instance. In the factory class, on the contrary:

    def __init__(self, service):
        self.service = service
        self.dictQueue = {}

which means the factory object can invoke methods of the service, but the service cannot invoke methods in the factory?

PS: In addition, if I do something like this:

    def func():
        d = Deferred()
        d.addCallback(func1)
        d.addCallback(func2)

does it mean func1 and func2 will be invoked one by one in turn (namely func1 first, and then func2 invoked with the result of func1), or are they in 2 different deferred streams? Thanks much.

Ok, could you post the entire code, I mean the complete text from top to bottom, either in pastebin.com or a similar site? Or if you have it in a public code repository, could you post the link? The code formatting in wordpress is not great. It’s hard to understand what code is doing without being able to see the actual code.
http://krondo.com/tested-poetry/?shared=email&msg=fail
I already used execl() in code, and it worked well. But this time, I really have no idea why it doesn't work. So here's the code that does not work:

    #include <unistd.h>
    #include <stdio.h>

    int main()
    {
        int i = 896;
        printf("please\n");
        execl("home/ubuntu/server/LC/admin/admin", (char*)i, NULL);
        printf("i have no idea why\n");
        return 0;
    }

And here's the admin.c:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/msg.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        int mid = argv[0];
        printf("hi from child\n");
        printf("%d\n", mid);
        return 0;
    }

Of course I compiled admin.c to admin, and the path isn't wrong.

    >ls
    admin  admin.c  why  why.c
    >pwd
    /home/ubuntu/server/LC/admin
    >./admin
    hi from child
    -1180858374
    >./why
    please
    i have no idea why

Anyone know why it doesn't work?

---

My C is a bit rusty, but your code makes many rookie mistakes.

execl will replace the current process if it succeeds. So the last line ("i have no idea why") won't print if the child can launch successfully. Which means... execl failed and you didn't check for it! Hint: check the typecast to char *, and also note that "home/ubuntu/server/LC/admin/admin" is a relative path (there is no leading /), so execl can only find the file if you happen to run the program from the root directory.

You cast an int to a char * in the execl call, then back again when you read it in the child (admin). This is a big no-no in C. It freely allows you to misinterpret types, and the only warning is most often a crash. GCC will warn you about it. I don't know about the compiler on AWS.

Check your array's bounds! You don't know how many parameters admin was launched with. argv[0] always exists because it contains a representation of the program name, but argv[1] may not be defined. Accessing an array out of bounds is undefined behavior and highly dangerous.

The standard way to start another process in C is to fork the parent, then call one of the functions in the exec family to start the other process. Consider this instead (I took the liberty to emit different messages to make them clearer).
parent.c

    #include <unistd.h>
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>

    int main()
    {
        int i = 896;
        char str[15];
        int pid;

        printf("Hello from parent\n");
        sprintf(str, "%d", i); // convert the number into a string

        pid = fork();
        if (pid == -1) {
            printf("Fork failed\n");
        } else if (pid == 0) {
            // child: start the admin program (argv[0] first, then the real argument)
            execl("/home/ubuntu/server/LC/admin/admin", "admin", str, (char *)NULL);
            // execl only returns if it failed
            printf("Error launching child process: %s\n", strerror(errno));
            return 1;
        } else {
            printf("Continue from parent\n");
        }

        printf("Goodbye from parent\n");
        return 0;
    }

admin.c

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/msg.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        const char *mid;

        // argc is always 1 or more
        if (argc >= 2)
            mid = argv[1];
        else
            mid = "<nothing>";

        printf("hello from child\n");
        printf("argc = %d, argv[1] = %s\n", argc, mid);
        return 0;
    }
http://databasefaq.com/index.php/answer/2620/c-execl-execl-works-on-one-of-my-code-but-doesnt-work-on-another
The filter function in Python

When discussing conditionals, we saw how an if statement could be combined with a for loop to return a subset of a list that meets certain conditions. We did so by iterating through a list of elements and adding each element to a new list when it met certain conditions. For example, imagine we want to write a function that selects even numbers. How would we do this?

Before determining how to categorize all elements in a list, let's just answer the question of whether one element is even. We can do so by making use of the modulo operator, %. The % operator returns the remainder resulting from dividing one number by another. For example:

    7 % 3
    1

Seven divided by three is two, with a remainder of one. So the modulo operator returns the remainder, one. Let's look at some other examples. Six divides into three two times, leaving a remainder of zero. So the operator returns zero.

    6 % 3
    0

And four divided by two also leaves a remainder of zero.

    4 % 2
    0

Note that the above line effectively asks (and answers) whether 4 is even. This is because the statement 4 % 2 returns zero, which means that four divided by two has a remainder of zero, and as we know, any number that is divisible by two with no remainder left over is an even number. Similarly, if a number is odd, then dividing by two leaves a remainder of one.

Ok, so now let's write a function that checks if a number is even.

    def is_even(number):
        return number % 2 == 0

    print(is_even(3))   # False
    print(is_even(100)) # True

    False
    True

Now we are ready to write our function that selects just even numbers. We can iterate through the numbers one by one, and for each number check if that number is even. If it is even, then append the element to a new list of even numbers.
    def select_even(elements):
        selected = []
        for element in elements:
            if is_even(element):
                selected.append(element)
        return selected

    numbers = list(range(0, 11))
    numbers

    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    select_even(numbers)

    [0, 2, 4, 6, 8, 10]

Returning a subset of elements that meet specific criteria is something commonly done in Python. And the procedure looks pretty much the same regardless of what we are selecting. For example, let's now select words that end with 'ing'.

    def ends_ing(word):
        return word.endswith('ing')

    def select_ing(elements):
        selected = []
        for element in elements:
            if ends_ing(element):
                selected.append(element)
        return selected

    words = ['camping', 'biking', 'sailed', 'swam']
    select_ing(words)

    ['camping', 'biking']

Notice that our two functions select_ing and select_even share a lot of similarity. Below, let's just highlight the differences; the lines marked with # are identical in both functions.

    def select_ing(elements):             #
        selected = []                     #
        for element in elements:          #
            if ends_ing(element):
                selected.append(element)  #
        return selected                   #

    def select_even(elements):            #
        selected = []                     #
        for element in elements:          #
            if is_even(element):
                selected.append(element)  #
        return selected                   #

Essentially, the only thing different between the functions is the criterion by which we are selecting. The filter() function allows us to filter for specific elements, so long as it knows the criteria and the elements. The filter() function returns an iterator where the items are filtered through a function to test if each item is accepted or not. The general syntax of the filter() function is:

    filter(function, iterable)

Let's apply this to filter the even numbers as we did above with a condition and loop.

    filter(is_even, numbers)

    <filter at 0x10642fda0>
Note that filter returns a filter object, which isn't much use to us by itself. What does a filter object do? Not much. So we coerce it to a list to see just the even numbers.

    list(filter(is_even, numbers))

    [0, 2, 4, 6, 8, 10]

Also notice that the filter function takes two arguments. The first is the criteria function itself. filter goes through each element and passes that element into the criteria function. If the criteria function returns a truthy value, the element is selected. If not, the element is not selected.

    list(filter(ends_ing, words))

    ['camping', 'biking']

Note that when passing the function through, no parentheses are added at the end of the function name. This is important: with parentheses, the function would be called immediately and its result, rather than the function itself, would be handed to filter, so the filtering would not occur.

filter() without a filter function

So what happens if we pass a list to filter() without supplying the first argument, i.e. a filtering function? Let's try it.

    random_list = [0, '0', 'python', 2, True, 'flatiron', False, 38.9]

Our random_list above contains a number of different data types. Let's pass this list through filter() and use None as the filter function.

    list(filter(None, random_list))

    ['0', 'python', 2, True, 'flatiron', 38.9]

So we see that with the filter function set to None, filter defaults to the identity function, i.e. each element in random_list is checked to see whether it is truthy or not, and in the output we only get the elements which are truthy. Non-zero numbers and non-empty strings are truthy (this includes '0' as a string), whereas 0 and False are falsy and hence are not included in the output.

In this section, we learned about the filter function, which selects the elements in a list that match specific criteria. We learned that filter takes in two arguments. The first is the function that specifies the criteria for which elements to select, and the second is the list of elements to filter. Each element in the list is passed to the function one by one, and if the function returns a truthy value, the element is selected and collected into the filter object that is returned at the end.
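As a small extension not covered in the lesson above: the criteria function does not have to be a named function. A lambda works, and a list comprehension expresses the same selection. All three versions below produce the same result, reusing the is_even function and numbers list from earlier:

```python
def is_even(number):
    return number % 2 == 0

numbers = list(range(0, 11))

# the named-function version from the lesson
with_function = list(filter(is_even, numbers))

# an inline lambda as the criteria function
with_lambda = list(filter(lambda n: n % 2 == 0, numbers))

# a list comprehension expressing the same selection
with_comprehension = [n for n in numbers if n % 2 == 0]

print(with_function)  # [0, 2, 4, 6, 8, 10]
```

Which form to use is mostly a readability choice; the comprehension is often preferred in Python when the criteria is a short expression, while filter shines when a suitable named function already exists.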
https://learn.co/lessons/filter-readme-python
Separate Combobox and Autocomplete

I think the Ext team needs to separate the Combobox and Autocomplete features to make things simpler. I played with ext.form.combobox for a few days, and I was never able to cache query data that comes from the server. YUI 2.2 already has this auto cache data feature, but through the adapter things have changed. Here is a YUI configuration for autocomplete:

    YAHOO.example.ACFlatData = function(){
        var mylogger;
        var oACDS;
        var oAutoComp1;
        return {
            init: function() {
                oACDS = new YAHOO.widget.DS_XHR("./php/ysearch_flat.php", ["\n", "\t"]);
                oACDS.responseType = YAHOO.widget.DS_XHR.TYPE_FLAT;
                oACDS.maxCacheEntries = 100;
                oACDS.queryMatchSubset = true;

                // Instantiate second AutoComplete
                oAutoComp1 = new YAHOO.widget.AutoComplete('ysearchinput1', 'ysearchcontainer1', oACDS);
                oAutoComp1.delimChar = "";
                oAutoComp1.useShadow = true;
                oAutoComp1.formatResult = function(oResultItem, sQuery) {
                    var sKey = oResultItem[0];
                    var nQuantity = oResultItem[1];
                    var sKeyQuery = sKey.substr(0, sQuery.length);
                    var sKeyRemainder = sKey.substr(sQuery.length);
                    var aMarkup = ["<div class='ysearchresult'><div class='ysearchquery'>",
                        nQuantity,
                        "</div><span style='color:blue; font-weight: bold;'>",
                        sKeyQuery,
                        "</span>",
                        sKeyRemainder,
                        "</div>"];
                    return (aMarkup.join(""));
                };
                oAutoComp1.itemSelectEvent.subscribe(itemSelectHandler);
            },

            validateForm: function() {
                // Validate form inputs here
                return false;
            }
        };
    }();
    YAHOO.util.Event.addListener(this, 'load', YAHOO.example.ACFlatData.init);

I would like to second this thought.... I have been trying to figure out a way to implement autosuggestion in Ext but am not quite there yet.... I mean, simple autosuggestion is fine, but what about advanced features like delimiter character, query delay, forced selection, caching, auto highlight, typeahead, etc. etc.... so I just ended up using YUI itself for autosuggestion.... is it possible to do all that in Ext? If so, can anyone point me in the right direction?....
If not, then it would be really nice to have those features..... keep up the good work.

Several of the options you mention seem to be available in the Ext ComboBox.
Sanjiv

funny.... how did I miss those.... were they added in the last 2 weeks?.... sorry guys, for posting without cross-checking.... I'll give it a try.... thanx sanjiv

Hi... Is there any way to have delimiters in the autosuggestion? We require it so that we can allow users to select multiple values in a single column. If anybody has done so... please let us have the code... or guide us on how to do this.........

ya.... u mean like in the email text box.... I remember requesting this a while back.... has anyone tried this yet?

Hi everyone, has anyone tried implementing delimiters? ... I am pretty bad at JavaScript, so even after trying my bit at extending the combobox class I couldn't quite achieve it.... if anyone else has done anything with it, then please let me know.

Want autocomplete with filter of grid

Hi, actually my question is related to autocomplete. I want an autocomplete like the combobox, with the filter of a grid panel. I created a separate thread for it also. If anyone can give the functionality of autocomplete to the Filter, please help me. And if it is possible to separate Combobox from autocomplete, then is it possible to use a separate autocomplete with the filter?

thanks in advance
https://www.sencha.com/forum/showthread.php?7090-Separate-Combobox-and-Autocomplete
Re: Windows Service in VB

- From: "Bob Altman" <rda@xxxxxxxxxx>
- Date: Mon, 23 May 2005 12:25:56 -0700

Tom,

Your service needs to expose an interface via .NET Remoting. This involves architecting your service using two projects: a main program (either WinForms or console) and a class library. The class library must contain a public class that inherits from MarshalByRefObject. You need to add code to the main program to register a TCP channel with the .NET Remoting architecture, and register the public class as a well-known singleton.

You also need to create a WinForms app that can be run by logged-in users. This app communicates with the user via a menu attached to a NotifyIcon object, and it communicates with your service via the remoted class exposed by the service (above). You would set up this app so that it runs automagically when the user logs in.

FYI, if your client app just wants to call methods on the service's remoted interface, then the client doesn't need to register a TCP channel; it just needs to call Activator.GetObject to get a reference to the remoted singleton. However, if you want to handle events from the remoted object, then you need to jump through some significant, not-very-well-documented hoops:

1. Your client needs to register a TCP channel (port 0, which allocates an unused port, will suffice). This is because the event callback looks like a method invocation to the remoting architecture; thus, the singleton needs to be able to call a method on the client, so the client needs a TCP channel to receive the method invocation request.

2. The singleton needs to be able to access the client's metadata in order to resolve the reference to the client's event handler. The easiest way to accomplish this is to set up your build and deployment environments so that the client's executable resides in the same folder as the singleton's DLL.
A more elegant solution that doesn't require the singleton to know about each of its clients involves declaring an interface in the singleton's project that contains the event handlers, implementing the interface in each client, and providing a method on the singleton that hooks up the events on behalf of the client.

"SimpTheChimp" <SimpTheChimp@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message news:CA143D52-26A5-4BBB-BB3B-F1B5EC007614@xxxxxxxxxxxxxxxx:

> Hey all! Thanks for the info.... one question still remains, though, as I
> looked all this over: I seen how I can communicate things INTO the service
> (by using the custom command thingy) but how do I get custom info OUT of the
> service (other than the base things). Maybe, for instance, I want my UI to be
> able to query the service to see exactly what it is doing at this time (i.e.,
> if it is processing a record - which record? Or if it is processing a file -
> which file?) Since, from what I understand, a service doesn't support public
> interfaces, how would one go about getting this kind of custom info from a
> service?
>
> Tom (Chimp)
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.vb/2005-05/msg02897.html
Recently, I had to revisit PHP because of a new project, and I realized that namespaces were totally new to me. So I thought it would be good to write down a quick note. This post will quickly go over how namespacing and autoloading work in PHP by using Composer.

Our Application

Our application is just a simple page echoing some text using the json_encode method. Let's create an index.php and write all our code inside. If we execute this file (either start a server with php -S localhost:8000 or run php index.php directly), we will see an array with 3 objects inside. Cool app, right? Let's explore namespacing and Composer autoloading by rewriting this app.

Breaking Down Classes

If we keep expanding our application, this index.php will become very large. That makes our code hard to read and maintain. So let's give each class its own file located inside the src/ directory.

PHP won't know about the files inside the src/ directory if we do not tell it. There are a few ways to do so; one of them is to use require to load the needed files, but there's a better way to solve this – Composer autoloading. (I assume Composer is installed on your system.)

First, to initialize Composer, all we need is a composer.json. Put it in the project root; running composer install will then create a new directory – vendor. This is the place where Composer stores our project's autoload configuration and dependencies.

To make use of Composer autoloading, we'll modify our composer.json so that Composer registers a PSR-4 autoloader for the NBA namespace (this can be any name related to our app). Our new namespace is now mapped to the src/ directory.

Adding Namespaces

In the files containing our classes, let's give each class its own namespace. Notice that in PHP, namespaces are required to be named relative to their paths: if Person.php is located at src/other/Person.php, its namespace declaration has to mirror that path.

Autoloading

In our index.php, we need to load the autoload.php file provided by Composer. Also, to use the namespace we just declared, we use the keyword use.
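For reference, the composer.json change described above could look like the following sketch. The NBA namespace and the src/ mapping are the ones named in this post; treat the rest of the file as illustrative:

```json
{
    "autoload": {
        "psr-4": {
            "NBA\\": "src/"
        }
    }
}
```

After editing the autoload section of an existing composer.json, running composer dump-autoload regenerates the autoloader without reinstalling dependencies.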
Let’s break our index.php down to separate some concerns. Our application logic will be placed in app.php. By executing php index.php, we should get the same results.
http://yang-wei.github.io/blog/2015/03/09/namespacing-and-autoloading-in-php/
Sometimes it is required that you export calculated data from one environment to another. The same need arises when you are working with both C/C++ and the MATLAB environment, and in that case some tool must exist to act as a bridge between the two environments.

The MATLAB C MAT-File APIs are a group of functions for reading and writing, from C/C++ environments, MAT-format files that can be read by MATLAB. This means that if you save your variable data (the result of some calculation) from MATLAB, you can read it from a C program and continue your calculation; and if you compute something in your program and want to save it in a file that MATLAB can read without any problem, you can use these APIs as well.

The first step to use these functions is including the mat.h header file. Notice that if you want to use MATLAB built-in functions, you must include the matlab.h header file too. The second step is adding the MATLAB libraries to your project. The required library is libmat.lib. If you use other MATLAB APIs, add their libraries too. For example, if you use the mlfPrintMatrix function to print matrix elements, you must add libmx.lib, libmatlb.lib and libmmfile.lib.

Below are some of the basic C MAT-File APIs with their descriptions and syntax.

matOpen / matClose

    MATFile *matOpen(const char *filename, const char *mode);
    int matClose(MATFile *mfp);

Arguments: filename is the name of the file to open, mode is the file opening mode, and mfp is a pointer to MAT-file information (as returned by matOpen). Legal values for mode are listed in Table 1.

matGetDir

    char **matGetDir(MATFile *mfp, int *num);

Arguments: mfp is a pointer to MAT-file information. num is the address of the variable to contain the number of mxArray variables in the MAT-file.

Description: This function allows you to get a list of the names of the mxArray variables contained within a MAT-file. matGetDir returns a pointer to an internal array containing pointers to the NULL-terminated names of the mxArrays in the MAT-file pointed to by mfp. The length of the internal array (the number of mxArrays in the MAT-file) is placed into num. The internal array is allocated using a single mxCalloc and must be freed using mxFree when you are finished with it. matGetDir returns NULL and sets num to a negative number if it fails. If num is zero, mfp contains no arrays. MATLAB variable names can be up to length mxMAXNAM, where mxMAXNAM is defined in the file matrix.h.

matGetVariable / matGetNextVariable

    mxArray *matGetVariable(MATFile *mfp, const char *name);
    mxArray *matGetNextVariable(MATFile *mfp, const char *name);

Arguments: mfp is a pointer to MAT-file information and name is the name of the mxArray to get from the MAT-file.

Description: matGetVariable allows you to copy an mxArray out of a MAT-file. It reads the named mxArray from the MAT-file pointed to by mfp and returns a pointer to a newly allocated mxArray structure, or NULL if the attempt fails. matGetNextVariable allows you to step sequentially through a MAT-file and read all the mxArrays in a single pass: it reads the next mxArray from the MAT-file pointed to by mfp and returns a pointer to a newly allocated mxArray structure, with the name of the mxArray returned in name.

Use matGetNextVariable immediately after opening the MAT-file with matOpen and not in conjunction with other MAT-file functions; otherwise, the concept of the "next" mxArray is undefined. matGetNextVariable returns NULL when the end-of-file is reached or if there is an error condition. With both functions, be careful in your code to free the mxArray created by the routine when you are finished with it.

matPutVariable

    int matPutVariable(MATFile *mfp, const char *name, const mxArray *mp);

Arguments: mfp is a pointer to MAT-file information, name is the name of the mxArray to put into the MAT-file, and mp is an mxArray pointer.

Description: This function allows you to put an mxArray into a MAT-file. matPutVariable writes mxArray mp to the MAT-file mfp. If the mxArray does not exist in the MAT-file, it is appended to the end. If an mxArray with the same name already exists in the file, the existing mxArray is replaced with the new one by rewriting the file. The size of the new mxArray can be different from the existing mxArray. matPutVariable returns 0 if successful and nonzero if an error occurs.

For saving variables in MATLAB, just use the save command. For example:

    save myFile

By entering the above command, MATLAB will save all of the variables in its workspace in a file named myFile.mat. For saving only some variables, use the save command like this:

    save myFile X Y Z

With the above command, MATLAB saves only the X, Y and Z variables.

Now we want to use the C MAT-File APIs to read myFile.mat and extract all of the saved variables. Below is our C code for doing this:

    #include "stdafx.h"

It's time to do the reverse task. In other words, we want to calculate something and save the data in a file that MATLAB can read. To import data from a MAT-file into the MATLAB environment, one must use the load command:

    load myFile

The load command loads workspace variables from a file located on your disk. The following source code defines some mxArray variables and then saves them in a file; this file can then be loaded from the MATLAB environment.

    #include "stdafx.h"
    #include "mat.h"
    #include "matlab.h"

    #pragma comment(lib, "libmat.lib")
    #pragma comment(lib, "libmx.lib")
    #pragma comment(lib, "libmatlb.lib")
    #pragma comment(lib, "libmmfile.lib")

    void main(int argc, char **argv)
    {
        MATFile *pmat;

        // now create a new MAT-file and save some variables/matrices in it
        double dbl1[] = {1.1, 4.3, -1.6, -4, -2.75};
        double dbl2[] = {-4.9, 2.3, -5};
        mxArray *A, *B, *C;

        A = mxCreateDoubleMatrix(1, 5, mxREAL);
        B = mxCreateDoubleMatrix(1, 3, mxREAL);

        // copy the arrays into matrices A and B
        memcpy(mxGetPr(A), dbl1, 5 * sizeof(double));
        memcpy(mxGetPr(B), dbl2, 3 * sizeof(double));

        C = mlfConv(A, B); // convolution

        // open TestVar.mat for writing new data
        pmat = matOpen("TestVar.mat", "w");
        matPutVariable(pmat, "A", A);
        matPutVariable(pmat, "B", B);
        matPutVariable(pmat, "C", C);
        matClose(pmat);

        mxDestroyArray(A);
        mxDestroyArray(B);
        mxDestroyArray(C);
    }
http://www.codeproject.com/Articles/5909/Exporting-Importing-Data-To-From-MATLAB
Interested in knowing more about Facebook

Hi, I am new to the concept of Facebook, though I have registered there. Can anyone tell me how to use it?

Rules for using Facebook:

1. Facebook is for friends -- for business associates, use linkedin.com instead
2. You don't have to accept friend requests from people you don't know, or from people that you know but couldn't give a rat's patootie about
3. On your news feed (which is what you see on your home page), you can hide the feeds of friends who abuse Facebook by automatically posting all their Twitter crap
4. Facebook applications are, by and large, all crap (except Wordscraper) -- if someone sends you a flower, or a gift, or whatever, you don't have to register for that application and send them anything in return (remember, Facebook apps can see all your personal information, and who knows what they do with it...)
5. Do ~not~ become a fan of anybody or anything, unless you like having this information freely available to non-Facebook users
6. Join groups and participate to your heart's content, as you will soon discover that the person in Squamish, BC who shares your love of naked molerats is just as knowledgeable as you are
7. If you can get your family members to join -- especially those in remote locations -- then sharing photos with them is awesome

rudy.ca | @rudydotca
Buy my SitePoint book: Simply SQL

@ what r937 said... Facebook is generally for everyone who wants to stay connected with acquaintances, friends, family or whatever... it's your choice to accept who you'd like to be connected to. Oh, and while some of the applications are crappy, there are actually some good ones there; just be careful in sharing info.
Try the quizzes if you have too much spare time, or play some games (lol), it'll be fun for ya... SitePoint, I think, has a page there if you wanna check it out.
http://www.sitepoint.com/forums/showthread.php?629853-Interested-in-knowing-more-about-Facebook&p=4334525
In my previous articles I covered the various aspects of Elixir—a modern functional programming language. Today, however, I would like to step aside from the language itself and discuss a very fast and reliable MVC framework called Phoenix that is written in Elixir. This framework emerged nearly five years ago and has received some traction since then. Of course, it is not as popular as Rails or Django yet, but it does have great potential and I really like it.

In this article we are going to see how to introduce I18n in Phoenix applications. What is I18n, you ask? Well, it is a numeronym that means "internationalization", as there are exactly 18 characters between the first letter "i" and the last "n". Probably you have also met the L10n numeronym, which means "localization". Developers these days are so lazy they can't even write a couple of extra characters, eh?

Internationalization is a very important process, especially if you foresee the application being used by people from all around the world. After all, not everyone knows English well, and having the app translated into a user's native language gives a good impression.

It appears that the process of translating Phoenix applications is somewhat different from, say, translating Rails apps (but quite similar to the same process in Django). To translate Phoenix applications, we use quite a popular solution called Gettext, which has been around for more than 25 years already. Gettext works with special types of files, namely PO and POT, and supports features like scoping, pluralization, and other goodies. Shall we start?

Internationalization With Gettext

Gettext (described at gnu.org) provides all the necessary tools to perform localization and presents some requirements on how translation files should be named and organized. Two file types are used to host translations: PO and MO.

PO (Portable Object) files are plain-text files that contain the translations themselves and are meant to be edited by humans; they are organized per locale (en, fr, de, etc.).

MO (Machine Object) files contain binary data not meant to be edited directly by a human.
They are harder to work with, and discussing them is out of the scope of this article. To make things more complex, there are also POT (Portable Object Template) files. They host only strings of data to translate, but not the translations themselves. Basically, POT files are used only as blueprints to create PO files for various locales.

Sample Phoenix Application

Okay, so now let's proceed to practice! If you'd like to follow along, make sure you have installed the following:

- OTP (version 18 or higher)
- Elixir (1.4+)
- Phoenix framework (I'm going to be using version 1.3)

Create a new sample application without a database by running:

mix phx.new i18ndemo --no-ecto

The --no-ecto flag says that the database should not be utilized by the app (Ecto is a tool to communicate with the DB itself). Note that the generator might require a couple of minutes to prepare everything. Now use cd to go to the newly created i18ndemo folder and run the following command to boot the server:

mix phx.server

Next, open the browser and navigate to http://localhost:4000, where you should see a "Welcome to Phoenix!" message.

Hello, Gettext!

What's interesting about our Phoenix app and, specifically, the welcoming message is that Gettext is already being used by default. Go ahead and open the demo/lib/demo_web/templates/page/index.html.eex file which acts as a default starting page. Remove everything except for this code:

<div class="jumbotron">
  <h2><%= gettext "Welcome to %{name}!", name: "Phoenix" %></h2>
</div>

This welcoming message utilizes a gettext function which accepts a string to translate as the first argument. This string can be considered as a translation key, though it is somewhat different from the keys used in Rails I18n and some other frameworks. In Rails we would have used a key like page.welcome, whereas here the translated string is a key itself. So, if the translation cannot be found, we can display this string directly.
Even a user who knows English poorly can get at least a basic sense of what's going on. This approach is quite handy actually—stop for a second and think about it. You have an application where all messages are in English. If you'd like to internationalize it, in the simplest case all you have to do is wrap your messages with the gettext function and provide translations for them (later we will see that the process of extracting the keys can be easily automated, which speeds things up even more). Okay, let's return to our small code snippet and take a look at the second argument passed to gettext: name: "Phoenix". This is a so-called binding—a parameter wrapped with %{} that we'd like to interpolate into the given translation. In this example, there is only one parameter called name. We can also add one more message to this page for demonstration purposes:

<div class="jumbotron">
  <h2><%= gettext "Welcome to %{name}!", name: "Phoenix" %></h2>
  <h3><%= gettext "We are using version %{version}", version: "1.3" %></h3>
</div>

Adding a New Translation

Now that we have two messages on the root page, where should we add translations for them? It appears that all translations are stored under the priv/gettext folder, which has a predefined structure. Let's take a moment to discuss how Gettext files should be organized (this applies not only to Phoenix but to any app using Gettext). First of all, we should create a folder named after the locale it is going to store translations for. Inside, there should be a folder called LC_MESSAGES containing one or multiple .po files with the actual translations. In the simplest case, you'd have one default.po file per locale. default here is the domain's (or scope's) name. Domains are used to divide translations into various groups: for example, you might have domains named admin, wysiwyg, cart, and others. This is convenient when you have a large application with hundreds of messages.
For smaller apps, however, having a sole default domain is enough. So our file structure might look like this:

- en
  - LC_MESSAGES
    - default.po
    - admin.po
- ru
  - LC_MESSAGES
    - default.po
    - admin.po

To start creating PO files, we first need the corresponding template (POT). We can create it manually, but I'm too lazy to do it this way. Let's run the following command instead:

mix gettext.extract

It is a very handy tool that scans the project's files and checks whether Gettext is used anywhere. After the script finishes its job, a new priv/gettext/default.pot file containing strings to translate will be created. As we've already learned, POT files are templates, so they store only the keys themselves, not the translations, so do not modify such files manually. Open a newly created file and take a look at its contents (headers and comments omitted):

#: lib/demo_web/templates/page/index.html.eex
msgid "Welcome to %{name}!"
msgstr ""

#: lib/demo_web/templates/page/index.html.eex
msgid "We are using version %{version}"
msgstr ""

Convenient, isn't it? All our messages were inserted automatically, and we can easily see exactly where they are located. msgid, as you've probably guessed, is the key, whereas msgstr is going to contain a translation. The next step is, of course, generating a PO file. Run:

mix gettext.merge priv/gettext

This script is going to utilize the default.pot template and create a default.po file in the priv/gettext/en/LC_MESSAGES folder. For now, we have only an English locale, but support for another language will be added in the next section as well. By the way, it is possible to create or update the POT template and all PO files in one go by using the following command:

mix gettext.extract --merge

Now let's open the priv/gettext/en/LC_MESSAGES/default.po file, which contains the same msgid entries with empty msgstr values. This is the file where we should perform the actual translation. Of course, it makes little sense to do so because the messages are already in English, so let's proceed to the next section and add support for a second language.
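To make the msgid/msgstr structure tangible, here is a hypothetical Python sketch that parses a minimal PO fragment into a dictionary. Real PO files have more features (headers, comments, plural forms, multiline strings) that this toy parser ignores; it exists only to show the key/value shape of the format and the fallback-to-msgid behavior described above.

```python
def parse_po(text):
    """Toy parser: maps each msgid to its msgstr. Ignores comments and plurals."""
    translations = {}
    msgid = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("msgid "):
            msgid = line[len("msgid "):].strip('"')
        elif line.startswith("msgstr ") and msgid is not None:
            translations[msgid] = line[len("msgstr "):].strip('"')
            msgid = None
    return translations

po = '''
msgid "Welcome to %{name}!"
msgstr "Добро пожаловать в %{name}!"

msgid "We are using version %{version}"
msgstr ""
'''

table = parse_po(po)
print(table["Welcome to %{name}!"])  # the translated string
# An empty msgstr means "untranslated": Gettext falls back to the msgid itself.
print(table["We are using version %{version}"] or "We are using version %{version}")
```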
Multiple Locales

Naturally, the default locale for Phoenix applications is English, but this setting can be changed easily by tweaking the config/config.exs file. For example, let's set the default locale to Russian (feel free to stick with any other language of your choice):

config :demo, I18ndemoWeb.Gettext, default_locale: "ru"

It is also a good idea to specify the full list of all supported locales:

config :demo, I18ndemoWeb.Gettext,
  default_locale: "ru",
  locales: ~w(en ru)

Now what we need to do is generate a new PO file containing translations for the Russian locale. It can be done by running the gettext.merge script again, but with a --locale switch:

mix gettext.merge priv/gettext --locale ru

Obviously, a priv/gettext/ru/LC_MESSAGES folder with the .po files inside will be generated. Note, by the way, that apart from the default.po file, we also have errors.po. This is a default place to translate error messages, but in this article we are going to ignore it. Now tweak the priv/gettext/ru/LC_MESSAGES/default.po by adding some translations:

msgid "Welcome to %{name}!"
msgstr "Добро пожаловать в %{name}!"

msgid "We are using version %{version}"
msgstr "Мы используем версию %{version}"

Now, depending on the chosen locale, Phoenix will render either English or Russian translations. But hold on! How can we actually switch between locales in our application? Let's proceed to the next section and find out!

Switching Between Locales

Now that some translations are present, we need to enable our users to switch between locales. It appears that there is a third-party plug for that called set_locale. It works by extracting the chosen locale from the URL or Accept-Language HTTP header. So, to specify a locale in the URL, you would use paths like http://localhost:4000/en. Open the mix.exs file and drop in set_locale to the deps function:

defp deps do
  [
    # ...
    {:set_locale, "~> 0.2.1"}
  ]
end

We must also add it to the application function:

def application do
  [
    mod: {Demo.Application, []},
    extra_applications: [:logger, :runtime_tools, :set_locale]
  ]
end

Next, install everything:

mix deps.get

Our router located at lib/demo_web/router.ex requires some changes as well. Specifically, we need to add a new plug to the :browser pipeline:

pipeline :browser do
  # ...
  plug SetLocale, gettext: DemoWeb.Gettext, default_locale: "ru"
end

Also, create a new scope:

scope "/:locale", DemoWeb do
  pipe_through :browser
  get "/", PageController, :index
end

And that's it! You can boot the server and navigate to http://localhost:4000/en and http://localhost:4000/ru. Note that the messages are translated properly, which is exactly what we need! Alternatively, you may code a similar feature yourself by utilizing a Module plug. A small example can be found in the official Phoenix guide. One last thing to mention is that in some cases you might need to enforce a specific locale. To do that, simply utilize a with_locale function:

Gettext.with_locale I18ndemoWeb.Gettext, "en", fn ->
  MyApp.I18ndemoWeb.gettext("test")
end

Pluralization

We have learned the fundamentals of using Gettext with Phoenix, so the time has come to discuss slightly more complex things. Pluralization is one of them. Basically, working with plural and singular forms is a very common though potentially complex task. Things are more or less obvious in English as you have "1 apple", "2 apples", "9000 apples" etc. (though "1 ox", "2 oxen"), but in many other languages the rules are much more elaborate. Luckily, Gettext ships with a Gettext.Plural behavior (you may see the behavior in action in one of my previous articles) that supports many different languages. Therefore all we have to do is take advantage of the ngettext function. This function accepts three required arguments: a string in singular form, a string in plural form, and count. The fourth argument is optional and can contain bindings that should be interpolated into the translation.
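Under the hood, ngettext picks a plural-form index from the count. A Python sketch of the standard Russian rule (the same three-form formula common gettext Plural-Forms tables use; the example strings are my own illustration, not from the article):

```python
def russian_plural_index(n):
    """Return 0 ('one' form), 1 ('few' form), or 2 ('many/other' form)."""
    if n % 10 == 1 and n % 100 != 11:
        return 0
    if 2 <= n % 10 <= 4 and not 12 <= n % 100 <= 14:
        return 1
    return 2

# One msgid maps to three msgstr entries; the index above selects one of them.
forms = ["У вас %{count} бакс", "У вас %{count} бакса", "У вас %{count} баксов"]
for n in (1, 3, 5, 11, 21, 540):
    print(n, "->", forms[russian_plural_index(n)].replace("%{count}", str(n)))
```

Note that 11 and 21 land in different forms even though both end in 1, which is exactly why a simple "count == 1" check is not enough outside English.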
Let's see ngettext in action by saying how much money the user has by modifying the demo/lib/demo_web/templates/page/index.html.eex file:

<p>
  <%= ngettext "You have one buck. Ow :(", "You have %{count} bucks", 540 %>
</p>

%{count} is an interpolation that will be replaced with a number (540 in this case). Don't forget to update the template and all PO files after adding the above string:

mix gettext.extract --merge

You will see that a new block was added to both default.po files:

msgid "You have one buck. Ow :("
msgid_plural "You have %{count} bucks"
msgstr[0] ""
msgstr[1] ""

We have not one but two keys here at once: in singular and in plural forms. msgstr[0] is going to contain some text to display when there is only one message. msgstr[1], of course, contains the text to show when there are multiple messages. This is okay for English, but not enough for Russian where we need to introduce a third case:

msgid "You have one buck. Ow :("
msgid_plural "You have %{count} bucks"
msgstr[0] "У вас один бакс. Ой :("
msgstr[1] "У вас %{count} бакса"
msgstr[2] "У вас %{count} баксов"

Case 0 is used for 1 buck, and case 1 for a few bucks. Case 2 is used otherwise.

Scoping Translations With Domains

Another topic that I wanted to discuss in this article is devoted to domains. As we already know, domains are used to scope translations, mainly in large applications. Basically, they act like namespaces. After all, you may end up in a situation when the same key is used in multiple places, but should be translated a bit differently. Or when you have way too many translations in a single default.po file and would like to split them somehow. That's when domains can come in really handy. Gettext supports multiple domains out of the box. All you have to do is utilize the dgettext function, which works nearly the same as gettext. The only difference is that it accepts the domain name as the first argument. For instance, let's introduce a notifications domain to, well, display notifications.
Add three more lines of code to the demo/lib/demo_web/templates/page/index.html.eex file:

<p>
  <%= dgettext "notifications", "Heads up: %{msg}", msg: "something has happened!" %>
</p>

Now we need to create new POT and PO files:

mix gettext.extract --merge

After the script finishes doing its job, notifications.pot as well as two notifications.po files will be created. Note once again that they are named after the domain. All you have to do now is add a translation for the Russian language by modifying the priv/gettext/ru/LC_MESSAGES/notifications.po file:

msgid "Heads up: %{msg}"
msgstr "Внимание: %{msg}"

What if you would like to pluralize a message stored under a given domain? This is as simple as utilizing a dngettext function. It works just like ngettext but also accepts a domain's name as the first argument:

dngettext "domain", "Singular string %{msg}", "Plural string %{msg}", 10, msg: "demo"

Conclusion

In this article we have seen how to internationalize Phoenix applications with the help of Gettext. Also we've seen a way to add support for multiple locales and added a way to easily switch between them. Lastly, we have seen how to employ pluralization rules and how to scope translations with the help of domains. Hopefully, this article was useful to you! If you'd like to learn more about Gettext in the Phoenix framework, you may refer to the official guide, which provides useful examples and API reference for all the available functions. I thank you for staying with me and see you soon!
https://code.tutsplus.com/tutorials/phoenix-i18n--cms-30010
Pt-D-3

Dave says: Just add something like

You've been here <%= pluralize(@count.to_i, "time") %>

to your view.

robbyt says: What do you guys think about using the flash[:notice] space to display this count? This way, additional logic does not need to be dropped into the view. I am not able to get the pluralize helper to work inside the flash though… Any comments?

def index
  @products = Product.find_products_for_sale
  @count = session_count
  @session_greeting_msg = session_greeting
  flash[:notice] = @session_greeting_msg
end

def session_count
  if session[:counter].nil?
    session[:counter] = 0
  else
    session[:counter] += 1
  end
end

def session_greeting
  if @count == 0
    session_greeting = "welcome!"
  else
    session_greeting = @count
  end
end

Bill says: So here's a dumb question… If @count is a Fixnum, why use the to_i method which apparently simply returns @count?

Jim says: The following seems to work (although there may be a cleaner way… and I'm not completely sure if I need the <% end %> part):

You have accessed this page <%= @count %>
<% if @count == 1 %>
  time.
<% else %>
  times.
<% end %>

James says: I just did:

<%= pluralize(session[:counter], "page load") %>

john joyce says: What I did was:

You've accessed this page <%= pluralize(@count, 'time') %>

I don't see the point in .to_i either. The code for the counter already makes it a Fixnum anyway. Is there a potential security risk?

pfig says: yeah, i also just used <%= pluralize( @count, 'time' ) %> anything we should know?

PojoGuy says: Both <%= pluralize( @count, 'time' ) %> and <%= pluralize( @count, 'times' ) %> work!

Dave says: I didn't like sticking logic in the templates. So I did (and failed) the following, in views/store/index.html.erb:

<%= count_index_visits -%>
<h1>Your Grapmatic catalogue</h1>
<p><%= report_index_visits -%></p>

Then, in helpers/store_helpers.rb, added:

def reset_index_visits
  session[:counter] = 0
end

def count_index_visits
  if session[:counter].nil?
    reset_index_visits
  else
    session[:counter] += 1
  end
end

def report_index_visits
  if session[:counter] == 0
    response = "Welcome!"
  else
    response = "Back for your " + session[:counter].to_s + "th visit?"
  end
  return response
end

How do I stop count_index_visits dumping a number inside the layout? More pressingly, how do I make reset_index_visits available to the whole application? It seems to fail when I move it to controllers/application.rb, and DRY says it should only appear in one place…?

Trientalis says to Dave: Perhaps you could try to write the helper method in the application_helper instead – then it should be available to the whole application.

Gonzalo says: Has anyone tried to put the counter in the layout? To do this I tried to set the @count variable in the "initialize" method, but there seems to be some constraints with using sessions in that method. Is there a workaround?

Paulette says: I did try using it in the layout, and it worked. What I did was placed the method in private and called it out from there in the layout. Works the same way as in index.
https://pragprog.com/wikis/wiki/Pt-D-3/version/15
XML Subsets (Dec 18, 2013)

Most applications today use XML for some form of data saving. In testing we also need to validate XML files, find entries in the XML via XPath, and search for differences in the XML. A lot of these abilities can be found in the XMLUnit library. This lib knows how to compare XML files and perform other actions like:

· The differences between two pieces of XML
· The outcome of transforming a piece of XML using XSLT
· The evaluation of an XPath expression on a piece of XML
· The validity of a piece of XML
· Individual nodes in a piece of XML that are exposed by DOM Traversal

One feature that I needed that it did not support is to detect if one XML is a subset of another. For example, I would like to check that the following XML is a subset of the next one.

<a>
  <b> test b </b>
</a>

<a>
  <b> test a </b>
  <b> test b </b>
</a>

As you can guess, a simple compare will fail since it will compare the first b node of both XMLs, and since the elements are not the same, it fails. What we need to do is to recursively search all nodes that have the same name, check whether any of them matches, and not stop on the first one. This problem of course is not just on nodes but on attributes as well.
If you need this functionality then you can grab the code below. (The final method was cut off in the original post; its body here is completed to follow the recursive matching described above.)

import org.w3c.dom.NamedNodeMap;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class XmlSubsetHelper {

    private static boolean equalNotNullPointers(Object a, Object b) {
        if ((a == null && b != null) || (a != null && b == null)) {
            return false;
        }
        return true;
    }

    public static boolean isSubset(NamedNodeMap namedNodeMapA, NamedNodeMap namedNodeMapB) {
        if (!equalNotNullPointers(namedNodeMapA, namedNodeMapB)) {
            return false;
        }
        if (namedNodeMapA == null && namedNodeMapB == null) {
            return true;
        }
        for (int i = 0; i < namedNodeMapA.getLength(); i++) {
            Node itemA = namedNodeMapA.item(i);
            Node itemB = namedNodeMapB.getNamedItem(itemA.getNodeName());
            if (itemB == null) {
                return false;
            }
            if (!itemA.getNodeValue().equals(itemB.getNodeValue())) {
                return false;
            }
        }
        return true;
    }

    public static boolean isSubset(NodeList childNodesA, NodeList childNodesB) {
        if (!equalNotNullPointers(childNodesA, childNodesB)) {
            return false;
        }
        if (childNodesA == null && childNodesB == null) {
            return true;
        }
        for (int a = 0; a < childNodesA.getLength(); a++) {
            boolean foundMatch = false;
            Node itemA = childNodesA.item(a);
            // Try every node in B with the same name; do not stop at the
            // first name match, because a later sibling may be the real one.
            for (int b = 0; b < childNodesB.getLength(); b++) {
                Node itemB = childNodesB.item(b);
                if (!itemA.getNodeName().equals(itemB.getNodeName())) {
                    continue;
                }
                if (isSubset(itemA, itemB)) {
                    foundMatch = true;
                    break;
                }
            }
            if (!foundMatch) {
                return false;
            }
        }
        return true;
    }

    // Completion of the truncated method: two nodes match when their names
    // and values agree, B carries A's attributes, and B's children contain
    // A's children (checked recursively).
    public static boolean isSubset(Node a, Node b) {
        if (!a.getNodeName().equals(b.getNodeName())) {
            return false;
        }
        String valueA = a.getNodeValue();
        String valueB = b.getNodeValue();
        if (valueA == null ? valueB != null : !valueA.equals(valueB)) {
            return false;
        }
        if (!isSubset(a.getAttributes(), b.getAttributes())) {
            return false;
        }
        return isSubset(a.getChildNodes(), b.getChildNodes());
    }
}

Chaim Turkel, Backend/Data Architect, Backend Group
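For readers who want to experiment with the idea without a Java toolchain, here is a sketch of the same recursive subset check in Python using the standard library's minidom, exercised against the article's two example documents. The function names are my own; the matching rules mirror the description above.

```python
from xml.dom import minidom

def attrs_subset(a, b):
    """True when every attribute of node a exists on node b with the same value."""
    a_attrs = dict(a.attributes.items()) if a.attributes else {}
    b_attrs = dict(b.attributes.items()) if b.attributes else {}
    return all(b_attrs.get(k) == v for k, v in a_attrs.items())

def node_subset(a, b):
    """True when node a (name, value, attributes, children) is matched by node b."""
    if a.nodeName != b.nodeName or a.nodeValue != b.nodeValue:
        return False
    if not attrs_subset(a, b):
        return False
    return children_subset(a.childNodes, b.childNodes)

def children_subset(children_a, children_b):
    # Every child of A must match SOME same-named child of B, not just the first.
    for child_a in children_a:
        if not any(child_a.nodeName == child_b.nodeName and node_subset(child_a, child_b)
                   for child_b in children_b):
            return False
    return True

def is_subset(xml_a, xml_b):
    doc_a = minidom.parseString(xml_a)
    doc_b = minidom.parseString(xml_b)
    return node_subset(doc_a.documentElement, doc_b.documentElement)

small = "<a><b> test b </b></a>"
big = "<a><b> test a </b><b> test b </b></a>"
print(is_subset(small, big))  # True: <b> test b </b> matches the second sibling
print(is_subset(big, small))  # False: <b> test a </b> has no match
```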
https://www.tikalk.com/posts/2013/12/18/xml-subsets/
A virtualhost is an independent namespace in which messaging is performed. Virtualhosts are responsible for the storage of message data. Virtualhosts can only be managed by the HTTP management channel. The following virtualhost types are supported.

BDB - Virtualhost backed with Oracle Berkeley DB JE
BDB HA - Virtualhost backed with Oracle BDB utilising High Availability
DERBY - Virtualhost backed with Apache Derby
JDBC - Virtualhost backed with an external database [6]
Memory - In-memory node (changes lost on Broker restart)
Provided - Virtualhost that co-locates message data within the parent virtualhost node [7]

use_async_message_store_recovery. Controls the background recovery feature.

Name. The name of the virtualhost. This is the name the messaging clients refer to when forming a connection to the Broker.

Store Path/JDBC URL. Refers to the file system location or database URL used to store the message data.

Store overflow/underflow. Some virtualhosts have the ability to limit the cumulative size of all the messages contained within the store. This feature is described in detail in Section 9.2, "Disk Space Management".

Connection thread pool size. Number of worker threads used to perform messaging with connected clients. Defaults to 64 or double the maximum number of available processors, whichever is the larger.

Number of selectors. Number of worker threads used from the thread pool to dispatch I/O activity to the worker threads. Defaults to one eighth of the thread pool size. Minimum 1.

Store transaction timeouts. Warns of long running producer transactions. See Section 9.3, "Producer Transaction Timeout"

Synchronization policy. HA only. See Section 10.4.2, "Synchronization Policy"

Stop. Stops the virtualhost. This closes any existing messaging connections to the virtualhost and prevents new ones. Any inflight transactions are rolled back. Non durable queues and non durable exchanges are lost. Transient messages or persistent messages on non-durable queues are lost.
Start. Activates the virtualhost. [6] JDBC 4.0 compatible drivers must be available. See Section F.2, “Installing External JDBC
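The two sizing defaults above interact. A quick sketch of how they resolve for a few machine sizes (the formulas are exactly as described in this section; this is not actual Broker code):

```python
def connection_thread_pool_size(num_processors):
    # Default: 64 or double the available processors, whichever is larger.
    return max(64, 2 * num_processors)

def number_of_selectors(pool_size):
    # Default: one eighth of the thread pool size, minimum 1.
    return max(1, pool_size // 8)

for cpus in (4, 32, 64):
    pool = connection_thread_pool_size(cpus)
    print(cpus, "cpus ->", pool, "threads,", number_of_selectors(pool), "selectors")
```

So on anything up to 32 processors the defaults are 64 worker threads and 8 selectors; the pool only grows past 64 threads once the machine has more than 32 processors.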
http://qpid.apache.org/releases/qpid-broker-j-7.0.0/book/Java-Broker-Management-Managing-Virtualhosts.html
Recently I had to do some performance profiling for one of our applications that was almost ready for production. Always on the lookout for new tools, I decided to check out what open source profilers were available. The best one I found was nprof, the .NET profiler.

Download the zip file and extract to a folder on your computer. Following the instructions, I ran the RegisterProfilerHook.bat file to register the hook file. Then I started nprof and was getting work done almost immediately. There is also a file that will register nprof as a Visual Studio add-in. Make sure you close all instances of Visual Studio before running the batch file, a fact I learned the hard way. Then under the Tools menu you should see nprof Profiling. Expand that and click it to turn profiling on. Click again to turn profiling off.

For this example, I am using an application that John Robbins wrote to illustrate the features of Visual Studio's new Enterprise Performance Tool in MSDN Magazine's December 2004 issue. Download the code from that article, install it and compile it before continuing. I also follow the article pretty closely, and end up getting similar results to John Robbins. I highly suggest reading his article because the concepts will be similar, although the new EPT tool will have a lot more functionality than nprof. But you can't beat free!

To profile, go to the File menu and select New. The screen looks like this:

The File radio button lets you select standard windows and console applications to profile. The ASP.NET radio button is for profiling web sites. The Debug profiler hook is supposed to allow you to debug into the application you are profiling, but I haven't gotten it to work. The Profile started applications option tells nprof whether to profile any applications your application starts (such as a windows form application starting Excel).
If you are using the Visual Studio add-in, you won't need to setup this screen at all unless you want to change the default options. For this, click the browse button next to "Application to run" and navigate to the AnimatedAlgorithm.exe file. Then click the "Create Project" button. You now have an unsaved profiling project. You can save it if you expect to reuse it. In fact you should save it because proper performance profiling goes through a process of profiling, make a code change, and then profile again to see if the change improved performance or not. So save the project as AnimatedAlgorithmProfile and click save. Your screen should now look like this:

The File menu is standard stuff, so let's skip to the Project menu. Currently the only options we have are to Start a project run and Options. Clicking the options menu item takes you back to the Create Profiler Project dialog we saw earlier. Select Start project run or press F5 to start profiling. If everything is setup properly, the Animated Algorithm Display form should launch. Everything that you do to the application will be recorded by nprof for performance data. Click the Sort button with "Bidirectional Bubble Sort" selected in the dropdown list. Watch the pretty display. When it's done, close the Animated Algorithms form.

nprof will now crunch through the data, which may take a little bit of time depending on the speed of your computer and how much data was gathered during profiling. Now nprof should look similar to this:

The first thing to notice is that nprof groups the profiling data based on which thread the code ran on. Unfortunately there is no combined view to see everything from all threads, but the feature may make it into a future version. It's not necessary, just nice to have. Now take a look at the main window underneath "Filter signatures". The Signature column is the method signature that was called. You will probably recognize most of these, as they are mostly calls to the .NET Framework.
If your app is relatively performant, it is expected that most of the time will be taken up by system calls. The "# of Calls" column is just that, the total number of calls to that method. Clicking on the column header sorts the data by that column's value, and clicking again reverses the sort. The next column is the "% of Total" column, which shows how long the application spent in that method as a percentage of the total profiled run time. If you sort by this column and you're on the first thread (which it should be by default), you'll notice that the biggest number is only like 0.05 or so. What gives? Well, the main thread isn't doing that much work compared to the other threads. Click on one of the other threads to see higher numbers.

Making sure that the percent of total column is sorted descending (with higher numbers at the top), find the thread that has the application Main. The signature will be "static void AnimatedAlgorithm.AnimatedAlgorithmForm::Main()". Notice that it is called once and the percent of time in the method should be close to 98 or 99%. One might think this is where we should start our performance analysis, but hold on! The next column, "% in Method", shows the percentage of the time the program actually spent in the method. On my computer that number is zero. If the time spent in the method is 0%, and the total percent is 99%, then the percentage spent in called methods must also be 99%. And that is exactly what the next column, "% in Children", tells us. So that means any performance efforts will not happen in this method.

The final column, "% Suspended", shows how much time the method was waiting for something to happen not in a called method. If the suspended percentage is much above 0.1%, it is probably worth checking out to see why.

Click the "% in Method" column to sort by which methods actually ran the longest. If you're on the right thread, the RunMessageLoopInner should be the highest, which is perfectly fine.
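The relationship between those three columns can be made concrete with a toy model (the timings below are illustrative, not taken from an actual nprof run):

```python
def columns(self_ms, children_ms, total_ms):
    """Return (% of Total, % in Method, % in Children) for one method."""
    pct_total = 100 * (self_ms + children_ms) / total_ms
    pct_method = 100 * self_ms / total_ms
    pct_children = 100 * children_ms / total_ms
    return pct_total, pct_method, pct_children

total_ms = 1000.0
# Main spends almost no time in its own body: nearly everything is in callees,
# which is why a ~99% "% of Total" there is NOT a tuning opportunity.
print(columns(1, 990, total_ms))   # high total, near-zero self, high children
# A leaf method with real self-time is where optimization effort pays off.
print(columns(10, 0, total_ms))    # total and self agree, no children time
```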
What we need to do is see what code (if any) that John Robbins wrote or that is included in the NSort assembly is slow. Fortunately nprof provides a handy namespace treeview on the left below the thread tree. Uncheck the System and Microsoft namespaces. Now you'll notice that while the RunMessageLoopInner is still on top, the Wintellect.SortDisplayGraph.SorterGraph::UpdateSingleFixedElement is the next highest with 1.03%. The next method after that only has 0.12%. At least now we are down to code that we can change to improve the application.

Alternatively, you can uncheck the All Namespaces node and then check the Wintellect, NSort, and AnimatedAlgorithm namespaces. This finally gets rid of that pesky RunMessageLoopInner. Looking at the results, we can see that the UpdateSingleFixedElement is the best looking candidate for performance improvement on this thread.

Clicking up to the second thread, the Wintellect.SortDisplayGraph.GraphSwapper::Swap method is the highest with 1.59%, with the next method at a mere 0.12%. On the first thread there is nothing over 0.05% anyway, so it's not even worth looking at, unless you are really obsessed.

If you select a method signature, the callees and callers will be filled in. The callees are those methods that the selected method calls. The data includes how much time is spent in each of the called methods. The callers are the methods that make a call to the selected method. You can click on methods in the callee/caller tabs and it will select that method in the main method signature area, which will then show you the callees and callers of that method.

Now we have at least two good candidate methods to look at to improve performance. I'm not going to go any further until John Robbins posts his follow-up article. Until then, happy profiling!
http://codebetter.com/blogs/darrell.norton/archive/2004/12/22/38226.aspx
I am not looking to re-code my Java functions as Transact SQL ... I only brought that code sample as a demonstration of the simplest Java code I've written and embedded in the database. I have much more complicated code that cannot be rewritten ... The crux of my question is in the overhead cost of using Java functions in version 12 (with an external JVM) versus version 9 (with an internal JVM). How can I improve THAT?

::Siger Matt Edit below - added code from previous question::

Just to keep the code from the other question on the same page:

CREATE FUNCTION "dbo"."GetNumerics"( in inputString varchar(500) )
returns varchar(500)
external name 'com.gemtrak.general.StandardFunctions.getNumerics (Ljava/lang/String;)Ljava/lang/String;'
language java

package com.gemtrak.general;

public class StandardFunctions {
    public static String getNumerics(String inputString) {
        StringBuffer input = new StringBuffer(inputString);
        StringBuffer output = new StringBuffer();
        int theLength = input.length();
        char c;
        for ( int i=0; i<theLength; ++i ) {
            c = input.charAt(i);
            if ( c >= '0' && c <= '9' ) {
                output.append(c);
            }
        }
        return output.toString();
    }
}

There is probably an END statement that is missing, but it was missing in the original, this is just copy/paste. Called by:

select dbo.getnumerics(isnull(telephone1,'')) t1, * into #x from locinfo where loctype = 'C'

asked 18 Jan '12, 14:56 by Frum Dude; edited 20 Jan '12, 12:28 by Siger Matt

For the record (in case anyone was wondering) the Java VM went from internal to external with SQL Anywhere 10.

Correct... and I suspect that is the cause of the performance difference - if the Java code does lots of calls back to the database/server to get/put data then the performance is not going to be as good when the Java VM is running externally since the cost of the call (to the server) is much greater.

Actually, my test case (if you look at my earlier example) is simply extracting numeric digits from a string ..
and the performance is 10 times slower... and that's on a server which is 30-40% faster than the original SQL 9 box! How can Sybase allow that to happen? 10 times slower...

A while back, I remember reading about a policy at Sybase which stated that no aspect of sqla performance would decline with a new release--performance would always be improved. Why was the jvm moved outside the database? Why would that slow performance? After all, a call to a java method is a call to a memory location, whether the jvm is considered to be internal or external to the database. true?

I would venture to say that the main reason to use an EXTERNAL JVM allows flexibility in that as Java evolves, the database server can take immediate advantage of new Java features ... just like the difference between component stereos versus 'single box' solutions... But certainly that kind of performance hit is not acceptable for that flexibility!

Please show us the exact code that you ran to show the performance change... the Java plus the calling SQL code as well... thanks! (I know you posted the Java code earlier but that's somewhere else...)

"EXTERNAL JVM allows flexibility in that as Java evolves"

There are other reasons for this move: 32-bit database servers (still popular when 10.0.0 was released...) have a maximum of 4 GB of process space (usually closer to ~3.8GB). On Windows, they actually get 2GB of process space (unless the /3GB switch is specified...) - the 4GB total is split between kernel space (2GB) and user process space (2GB). Forcing the JVM to load in-process reduces the amount of memory further for the database cache (down from ~1.8 GB on 32-bit Windows), which can increase the temporary file usage (thus slowing performance of the database server, overall).

We would be interested in seeing your two performance tests (version 9 and version 12), but to also know more about the environment you're currently testing in: Which operating System is it currently running on?
Is this a 32-bit or 64-bit environment (OS, database server, JVM)? How much memory does the system have, overall? Have you tried monitoring resources such as CPU / Memory / Disk via an "OS Performance monitor application", during your test so that you can see if there are obvious hardware bottlenecks that you are hitting in your test? Are you supplying any JVM parameters (such as -Xmx) to adjust the size of the JVM memory when the JVM is being launched? ...apparently, guesswork is alive and well as evidenced by all the replies made in the absence of any knowledge about how the Java function is called (from a SET statement in a loop, versus a single SELECT list, versus a WHERE clause, versus a subquery, versus...) Just as a Vegas slot machine eventually rewards the player, this forum will eventually provide your answer! Given Siger's guesswork does fit (as I would guess, too), we would still have to know how many times the function is called in this sample, i.e. how many rows do fulfill the WHERE clause... There were a number of very compelling reasons for us to move to a de-coupled Java VM beginning with SQL Anywhere version 10, some of which have been discussed above. To those, I would add that the reasons included the ability to run any version of the SUN (now Oracle) JRE to suit the needs of the application, and that, in the new architecture, any issue with the Java VM itself or with the Java function would not bring down or hang the server (or, minimally, a worker thread). Going to a de-coupled model, yet still supporting JDBC calls from the Java function back into the server, required us to develop some fairly sophisticated infrastructure inside the server kernel to support that; these identical mechanisms are used for CLR and other language environments as well. For Java procedures, the interface between the JVM and the server is JDBC and JDBC is a very heavyweight interface compared to the in-process, custom-made JRE we had built in ASA Version 6. 
So the creation and deletion of that inter-process infrastructure is certainly more expensive than when the Java VM was inside the database server, there's absolutely no question about that. Indeed, the example Java program posted above is a perfect example to illustrate the expense of that overhead, as it would if similar logic was embedded in an SQL user-defined function (and even though the SQL UDF is actually executed "in process"); see my post on that subject for the gory details. As Jeff mentioned, there may be some configuration settings that you can employ to make the performance marginally better. However, I would not expect hugely significant performance gains from doing so. answered 19 Jan '12, 14:43 Glenn Paulley 10.7k●5●71●104 accept rate: 43% ...as it would if similar logic was embedded in an SQL user-defined function... ...as it would if similar logic was embedded in an SQL user-defined function... Are you saying the overhead of a SQL UDF is - in general - somewhat similar to that of an external Java call? (That would come as a surprise to me.) Or does it just mean any UDF is "impossible to optimize" and will therefore have bad influence on the query optimizer's choice of optimal plans - particularly when the UDF/function call is part of a SELECT list or WHERE clause and gets called multiple times per query? Both. The cost of invoking a SQL user-defined function is orders of magnitude greater than the cost of, for example, reading a row off of page of disk during a sequential scan. That's what my 2008 blog post meant to illustrate. The optimization point is valid, too - it is (virtually) impossible to determine the cost/selectivity of a predicate involving a user-defined function. The server tries to estimate it by memoizing statistics from prior executions, but significant variance in the function's parameters, or the function's execution, have the potential to significantly impact the real cost of the chosen execution plan. 
Ah, I see - and it seems very understandable that calling a function is much more overhead than reading a row. However, I still would think that a SQL UDF (and stored procedure) should be significantly "cheaper" to call than any external function/procedure... At least that has been my experience over the years, and therefore I would generally prefer to write SQL functions over external ones if the SQL language features allow for that.... I will have to disagree on the point "I would generally prefer to write SQL functions over external ones if the SQL language features allow for that...." ... The power and elegance of Java over Transact-SQL is such that I certainly prefer to write in Java ... and it's such a shame that we have been essentially locked out of writing a sizable amount of Java due to this major performance degradation ... I think that Sybase should take a hard look at how to get the performance to a 20-30% degradation and NOT a 90% degradation as I've experienced ... Well, I was talking about that programmer-friendly Watcom-SQL, not the Transact-SQL dialect:) - and about code that can be handled with SQL in a reasonable and maintainable way (so, perhaps not too much string manipulation and the like...) For the Java performance, I certainly see your point but that's not my playground... 
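For reference, the digit-extraction at the centre of this thread is trivial in any language. Here is a one-line equivalent of the getNumerics function, sketched in JavaScript purely as an illustration (it is not from the thread) that the string work itself is cheap, which supports the point that the reported slowdown comes from per-call overhead rather than the function body:

```javascript
// One-line equivalent of the thread's getNumerics Java function
// (illustrative sketch only; not taken from the thread).
function getNumerics(inputString) {
  // Drop every character that is not a decimal digit.
  return inputString.replace(/[^0-9]/g, "");
}

console.log(getNumerics("(801) 555-1234")); // → "8015551234"
```

Whatever the implementation, a function this small makes the fixed cost of each cross-process call the dominant term when it is invoked once per row.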
Question tags: sa-12, performance, java
question asked: 18 Jan '12, 14:56
question was seen: 1,853 times
last updated: 26 Jan '12, 09:44
http://sqlanywhere-forum.sap.com/questions/9415/java-performance-is-awful-in-sql-anywhere-12-part-2
React-Router Hooks

React-Router is a popular React library that is heavily used for client-side routing and offers single-page routing. It provides various component APIs (like Route, Link, Switch, etc.) that you can use in your React application to render different components based on the URL pathnames in a single page.

Pre-requisite: Please go through the articles on React Router first, if you are not already familiar with it.

Note: You need to have React >= 16.8 installed on your device, otherwise the hooks won't work.

Hooks of React Router 5: React Router 5 offers 4 hooks that you can use in your React applications:

- useHistory
- useParams
- useLocation
- useRouteMatch

We will discuss all the hooks in detail with proper examples.

1. useHistory: This is one of the most popular hooks provided by React Router. It lets you access the history instance used by React Router. Using the history instance you can redirect users to another page. The history instance created by React Router uses a stack (called the "history stack") that stores all the entries the user has visited.

Syntax:

```javascript
import { useHistory } from "react-router-dom";

// Inside a functional component
export default function SomeComponent(props) {
  // The useHistory() hook returns the history
  // object used by React Router
  const history = useHistory();
}
```

The history object returned by useHistory() has various properties and methods.

Properties:

- length: Returns a number: the number of entries in the history stack.
- action: Returns a string representing the current action (PUSH, REPLACE, or POP).
- location: Returns an object that represents the current location. It may have the following properties:
  - pathname: A string containing the path of the URL
  - search: A string containing the URL query string
  - hash: A string containing the URL hash fragment
  - state: An object containing location-specific state that was provided to e.g. push(path, state) when this location was pushed onto the stack.
Only available in browser and memory history.

Methods:

- push(path, [state]): Pushes a new entry onto the history stack. Useful to redirect users to a page.
- replace(path, [state]): Replaces the current entry on the history stack.
- go(n): Moves the pointer in the history stack by n entries.
- goBack(): Equivalent to go(-1).
- goForward(): Equivalent to go(1).
- block(prompt): Blocks navigation. It takes a callback as a parameter and invokes it after the navigation is blocked. Most useful when you want to first confirm whether the user actually wants to leave the page.

Example: Suppose we have a React project created using "create-react-app" with the following project structure:

```
react-router-hooks-tutorial/
|--public/
|--src/
|  |--components/
|  |  |-->Home.js
|  |  |-->ContactUs.js
|  |  |-->AboutUs.js
|  |  |-->LogIn.js
|  |  |-->Profile.js
|  |-->App.js
|  |-->App.css
|  |-->index.js
|  |-->index.css
|  |-->... (other files)
|-- ...(other files)
```

Suppose, inside "LogIn.js", we have a LogIn component that renders the log-in page. The LogIn component renders two input fields, one for the username and another for a password. When the user clicks the login button, we want to authenticate the user and redirect the user to his/her profile page.

LogIn.js

Output: Log in page

Check the LogIn component carefully: the "handleClick" function takes the username and password and calls the "authenticateUser" function, which somehow authenticates the user. When the authentication is done, we want to redirect the user to the "profile/John" page (suppose the username is "John"). That's what the last line of the handleClick function does. The useHistory() hook returns the history instance created by React Router, and history.push("/profile/John") adds the given URL to the history stack, which results in redirecting the user to the given URL path. Similarly, you can use other methods and parameters of the history object as per your need.
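To make the stack behaviour of push/go/goBack concrete, here is a toy model in plain JavaScript. This is my own illustration, not React Router's actual history implementation, but it mimics the documented semantics of the properties and methods listed above:

```javascript
// Toy model of the history stack managed by React Router
// (illustrative only -- not the real react-router/history package).
class HistoryStack {
  constructor() {
    this.entries = ["/"]; // start at the root location
    this.index = 0;
  }
  get length() { return this.entries.length; }
  get location() { return this.entries[this.index]; }
  push(path) {
    // push() discards any "forward" entries, then appends the new one
    this.entries = this.entries.slice(0, this.index + 1);
    this.entries.push(path);
    this.index += 1;
  }
  go(n) {
    const next = this.index + n;
    if (next >= 0 && next < this.entries.length) this.index = next;
  }
  goBack() { this.go(-1); }
  goForward() { this.go(1); }
}

const history = new HistoryStack();
history.push("/profile/John");
console.log(history.location); // → "/profile/John"
history.goBack();
console.log(history.location); // → "/"
history.goForward();
console.log(history.location); // → "/profile/John"
```

Seen this way, history.push("/profile/John") is just a stack push that also moves the current-location pointer, which is why the browser's back button still works after a redirect.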
Check the next hook to see how the redirection to a dynamic URL works.

2. useParams: This hook returns an object that consists of all the parameters in the URL.

Syntax:

```javascript
import { useParams } from "react-router-dom";

// Inside a functional component
export default function SomeComponent(props) {
  const params = useParams();
}
```

These URL parameters are defined in the Route URL. For example:

```javascript
<Route path="/profile/:userName" component={Profile} />
```

The colon (":") after "/profile/" specifies that "userName" is actually a variable, or parameter, that is dynamic. For example, in the URL "/profile/johndoe", "johndoe" is the value of the parameter "userName". So, in this case, the object returned by useParams() is:

```javascript
{ userName: "johndoe" }
```

Example: After the login we want our user to be redirected to the "profile/userName" URL. The userName depends on the user's given name, so we need to set the URL path dynamically based on the user-given userName. This is easy to do; we need to update the App.js file a little.

App.js

Profile.js

Output: Now if you go to the log-in page and click the login button with userName "John", you will be redirected to the "profile/john" page.

3. useLocation: This hook returns the location object used by React Router. This object represents the current URL and is immutable. Whenever the URL changes, the useLocation() hook returns a newly updated location object. Its uses include extracting the query parameters from the URL and doing something depending on them: the "search" property of the location object returns a string containing the query part of the URL.

Syntax:

```javascript
import { useLocation } from "react-router-dom";

// Inside a functional component
export default function SomeComponent(props) {
  const location = useLocation();
}
```

Note: history.location also represents the current location, but it is mutable; the location returned by useLocation(), on the other hand, is immutable.
So, if you want to use the location instance, it is recommended to use the useLocation() hook.

Example: useLocation() is very useful to get and use the query parameters defined in the URL. In the code below we have used the useLocation hook to access the query parameters, then parsed them using the URLSearchParams constructor.

Profile.js

Output: Displays the given query id

4. useRouteMatch: Returns a match object that contains all the information about how the current URL matched the Route path.

Properties:

- params: An object that contains the variable part of the URL.
- isExact: A boolean value indicating whether the entire URL matched the given Route path.
- path: A string that contains the path pattern.
- url: A string that contains the matched portion of the URL. It can be used for nested <Link />s and <Route />s.

Syntax:

```javascript
import { useRouteMatch } from "react-router-dom";

// Inside a functional component
export default function SomeComponent(props) {
  const match = useRouteMatch();
}
```

Example: The useRouteMatch hook can be used in creating nested Routes and Links. The following code renders the Profile page of the user when the current URL path entirely matches the given Route path; otherwise, it renders another Route that renders the user's followers page when the current URL path is "profile/:userName/followers".

Profile.js

Output: If you click the followers link, you will be redirected to the "/profile/John/followers" page, and as the entire URL path "profile/John/followers" does not match the given Route path, i.e. "profile/:userName", the div element inside the Route component gets rendered.

Remember: You need to have React 16.8 or higher in order to use these react-router hooks. Also, don't forget to use them inside functional components.

Reason to use React Router Hooks

Before React Router 5: By default, while using the component prop (<Route component={} />), React Router passes three props (match, location, history) to the component that the Route renders. That means, if you, for some reason, want to access the history or location instance used by React Router, you can access it through the default props.

About.js

But if you pass your custom props to your components, then the default props get overridden by your custom props. As a result, you will not have any further access to the history object created by React Router. And before React Router 5, there was no way other than using the render prop (<Route render={} />) to explicitly pass the location, match, and history instances as props.

App.js

With React Router 5 Hooks: Now with React Router 5, you can easily pass your custom props to the rendering component.

App.js

Though in this case those three props (match, location, history) still don't get passed to the rendered components automatically, we can now use the hooks provided by React Router 5 and not think about the props anymore. You can directly access the history object with the useHistory hook, the location object with the useLocation hook, and the match object with the useRouteMatch hook, without having to explicitly pass the props to the components.
CC-MAIN-2022-27
refinedweb
1,457
53.81
Library (v2) Examples UDP allows us to broadcast messages across a Tcp/Ip LAN. Some applications could be: This article proposes a way to organize the selected core objects and threads, in order to build up an encapsulated class that exposes clean methods and properties. It also solves the main difficulties that developers find when trying to take advantage of the UDP broadcasting: receive Solutions for this difficulties along with some others can be found in this article: UdpClient BeginReceive Receive "Broadcast is the term used to describe communication where a piece of information is sent from one point to all other points. In network case there is just one sender, but the information is sent to all connected receivers. Broadcast is mostly used in local sub networks. In order to transmit broadcast packet, the destination MAC address is set to FF:FF:FF:FF:FF:FF and all such packets will be received by other NICs." (1) As opposed to Multicast: "Multicast (point-to-multipoint) is a communication pattern in which a source host sends a message to a group of destination hosts. The notion of group is essential to the concept of multicasting. By de?nition a multicast message is sent from a source to a group of destination hosts." (1) There are two types of Broadcast: In this article we will build a library to perform Limited Broadcasts. If you came here looking for a quick way to broadcast messages, you may want to download the v2 dll binaries, add it to your project and start broadcasting. A simple example code is shown below in the Quick Start section along with some instructions. If you want to see some more complex examples you may want to download The Chat or the Sink The Boat Game and see what happens. If you came here to learn how to UDP broadcast or to play with it, I suggest downloading the v1 source code. The equivalent Quick Start example is inside the file. The Magic Inside The v1 Library section will introduce you to the v1 library. 
You will soon notice that the library has lot of room for improvement and that some udp parameter combinations and optimization are allowed. You may want to test it in an IPv6 environment, direct broadcast, etc. You will have fun with this one. If you write something you are proud of and is share-able, my curiosity would be happy to see it. The use of the library is pretty straightforward. One of the smallest programs we could write to show how this library works is a console chat. The program will be able to receive any messages sent to Lan, decrypt it and show it on the console. It will also be able to send a user message to Lan. The program uses a thread to show the messages received as soon as they are received. The library is thread safe. Here is the code of this tiny example: using System; using System.Threading; using Orekaria.Lib.P2P; namespace Orekaria.Test.P2P.ConUI { internal class Program : IDisposable { private readonly BroadcastHelper _broadcast; private readonly Thread _th; private Program() { _broadcast = new BroadcastHelper(8010, "K_*?gR@Ej"); _th = new Thread(DequeueMessages); _th.Start(); } #region IDisposable Members public void Dispose() { _th.Abort(); } #endregion private void Run() { var isExit = false; do { var s = Console.ReadLine(); if (s != null && s.ToLower() == "exit") isExit = true; else _broadcast.Send(string.Format("{0}: {1}", Environment.UserName, s)); } while (!isExit); } private void DequeueMessages() { while (_th.IsAlive) { Thread.Sleep(100); while (_broadcast.Received.Count > 0) Console.WriteLine(string.Format("{0}", _broadcast.Received.Dequeue())); } } private static void Main() { var p = new Program(); p.Run(); p.Dispose(); } } } The output when 2 users called DAM and ASIR are executing this code is this. 
The first line "ASIR: Hello DAM" being the remote computer, the line "Hello" being the local user writing a response and "DAM: Hello" being the local echo of the "Hello" response that has been broadcasted and then received back: The examples included use the library for specific goals. The first one is a Wpf LAN chat and the second one is a Forms LAN game called Sink the Boat. The chat is quite the same as the above console example but translated into very basic Wpf code. The LAN game is more complex because it shows how this library could be use to send packed messages or serialized content. These are some decrypted messages that are broadcasted in this LAN game: When one of these messages is received, the code below depacks the information within the message. It is not important how they are packed. You should use your own package algorithm. It is shown here for demonstration purposes: Private Sub ProcesaPaquete(messageReceived As String) Dim parts = messageReceived.Split(" ") Dim hostEmitter = CatalogHost(parts(0)) If parts.Length = 2 Then hostEmitter.PingReceived(Convert.ToInt32(parts(1))) Return End If Dim target = parts(1) Dim hostTarget = LogHost(target) Dim mb = New processParts(messageReceived, Convert.ToInt32(parts(2)), parts(3)) hostTarget.MessageReceived(mb) End Sub In the code above, the received message is unpacked and the information is classified in one of three categories: ping, shoot or impact. Each of these messages contains different information which is processed afterwards. The v2 library consists of two classes. One is called BroadcastHelper and the other one is called UdpHelper. The BroadcastHelper is the only class being exposed. The constructor must be called with the port and an optional encrypt pass phrase. It exposes a method called Send that allows you to broadcast a message to the LAN and a property called Received that maintains a queue of all the messages that have been broadcasted by any host in the LAN. 
BroadcastHelper UdpHelper port pass phrase Send Received Inside the BroadcastHelper, a listener and a talker manage the messages. Both of them work with a UdpClient object and with a IPEndPoint address. The key parameters are the port and IPAddress.Broadcast. Remember that we are Limited Broadcast-ing so the address should be 255.255.255.255. The listener is instantiated in this way: listener talker IPAddress.Broadcast var udp = new UdpClient(_port); var remoteEP = new IPEndPoint(IPAddress.Broadcast, 0); _listener = new UdpHelper(udp, remoteEP); and the talker in this way: var udp = new UdpClient() var remoteEP = new IPEndPoint(IPAddress.Broadcast, _port); _talker = new UdpHelper(udp, remoteEP) Compared to the v1 version, the timeout is no longer necessary because we will use a callback to receive possible broadcasted messages and this callback don´t block the execution flow. Before continuing you may want to check the debug prints (results window in VisualStudio) to check how the listener and the talker are lazy loaded and the ports that they are using. Leaving the BroadcastHelper and inside the UdpHelper, messages are sent right away: { ... _udpClient.Send(packetBytes, packetBytes.Length, _remoteEP); } but the receiver collects the new messages through a callback not to block the execution flow. When a message is received another callback is queued: { ... _udpClient.BeginReceive(ReceiveCallback, null) } private void ReceiveCallback(IAsyncResult ar) { var receiveBytes = _udpClient.EndReceive(ar, ref _remoteEP); _received.Enqueue(ChosenEncoder.GetString(receiveBytes)); _udpClient.BeginReceive(ReceiveCallback, null); } Finally, the destructor is instructed to automatically close and free the resources. ~UdpHelper() { _udpClient.Close(); } Library (v1) I do really like this v1 library. I know... it is probably not the optimal way to go, but I do like its logic simplicity. Art and beauty also exist in our code lines, don´t it?. The library consists of two classes. 
One is called P2PHelper and the other one is called UDPHelper. As a side note, this is a project much bigger than the one presented here and so is the name P2PHelper. This class, P2PHelper, is the only public exposed class. Three methods and a property are visible: the constructor, Dispose, Send and Received. P2PHelper constructor Dispose When P2PHelper is instantiated, two threads are created, each one containing an instance of the UDPHelper class. The UDPHelper class encapsulates and simplifies the System.Net.Sockets.UdpClient and the System.Text.Encoding. You may like to change your preferred encoding here. UDPHelper System.Net.Sockets.UdpClient System.Text.Encoding One of the threads is used for listening and the other one for talking. I must point here as I did in the code, that both threads could be use for listening and speaking having a total of 2x2. The listening thread (called server in the code), needs some specific parameters to enter the broadcast listener mode. Keys are the port, the IPAddress.Broadcast and the timeout to allow the thread to exit the listening state: timeout var udpServer = new UdpClient(_port) {Client = {ReceiveTimeout = TimeOut}}; var serverEP = new IPEndPoint(IPAddress.Broadcast, 0); var server = new UDPHelper(udpServer, serverEP); The talking thread (called client in the code), although it uses the same UDPHelper class, needs other specific parameters to enter the broadcast talking mode. Keys are the IPAddress.Broadcast and the port we are talking through: port var udpClient = new UdpClient(); var clientEP = new IPEndPoint(IPAddress.Broadcast, _port); var client = new UDPHelper(udpClient, clientEP); If you are interested in tweaking and improving this code you should have enough room tweaking in this class, in particular the parameters sent to the UDPHelper class mentioned above. You will see how some changes do still work and some other fail. 
After this two threads are created, the P2PHelper keeps listening and enqueueing the messages received. If encryption is enabled, the messages are decrypted: while (!_disposing) { var receive = server.Receive(); if (receive != Resources.TimeOut) _received.Enqueue(_encrypt ? EncryptHelper.DecryptString(receive, _passPhrase) : receive); Thread.Sleep(1); } and talking. If encryption is enabled, messages are encrypted: while (!_disposing) { while (_toSend.Count > 0) { var send = _toSend.Dequeue(); client.Send(_encrypt ? EncryptHelper.EncryptString(send, _passPhrase) : send); } Thread.Sleep(10); } Disposing the object is required to free the resources. You may want to improve this code to fit your needs or even to work without this explicit disposing. public void Dispose() { _disposing = true; _thClient.Join(); _thServer.Join(); } This article gives you a better understanding of how we can build code to be able to broadcast and receive messages in a LAN. Hope to get your feedback and suggestions. (1) Broadcasting and Multicasting in IP Networks 5-Mar-2012: My english teacher helped me to fix some spelling and grammar errors. 6-Mar-2012: Added the v2.
https://www.codeproject.com/Articles/326868/Send-receive-messages-in-LAN-with-broadcasting?msg=4276159
CC-MAIN-2017-43
refinedweb
1,722
57.47
Harshwardhan Nagaonkar <harsh@ee.byu.edu> wrote: > > I'm using Debian/Sarge. When I debug an application and it stops at a > > breakpoint, I could just use "list" or "list 30" but now it just shows > > > > 1 in <<C++namespaces>> > > > > What do I have to do so GDB starts showing the source again. > > > This will happen if the program you are debugging is not compiled with > debugging symbols. Use a "-g" option in front of your g++/gcc compiler > command to enable debugging symbols. > I lately regouped my Makefile so the "-g" got lost. Thanks a lot. O. Wyss -- See a huge pile of work at ""
https://lists.debian.org/debian-devel/2004/05/msg01195.html
CC-MAIN-2017-30
refinedweb
106
84.27
One of the Program Managers for the Xna Framework recently started his first blog. Unless you have the main RSS feed of this site subscribed, you probably didn’t notice his first post, which talks about a lot. You may have also seen the ‘official’ ‘beta’. That’s really exciting to hear someone like the XNA PM saying, "we need a CLR on the XBox". That is awesome, really exciting. Looking forward to the future! Will be interesting to see what XNA will do for the industry. Hey Tom, I was wondering something. Has there been discussion as to whether the XNA Framework is going to be released as a free download like Managed DirectX or are we going to have to purchase XNA Studio? I would hate to see this move take hobbyist development back a step :). PingBack from We were already speculating on this in David Weller’s thread on GameDev, so this is just great, great news! 😀 The only damper on all of this joy could be that MDX2.0 might only be available through the non-free XNA studio… On the other hand, normal DirectX is also still a free download and I presume you can make enough money from XBox 360 dev kits, so I’ll keep my hopes up that MDX2 remains freely available and maybe even an XNA Studio Express might come along. Some info on this would be much appreciated though 🙂 Please finish/release/support MDX2 for .NET 2.0/XP/Vista prior to trimming it down (i.e., removing DirectSound, DirectInput/keyboard/mouse, some D3DX, etc.) for the XNA framework (which will likely take yet another 4-6 months). You seem to be so close to wrapping it up. It is time to get some "kinetic energy" out of MDX in addition to the "exponential potential". We haven’t announced anything on distribution, release vehicles, pricing, or anything of that nature. However, we do realize that MDX is currently a free download for the Windows platform. I think the people currently using that won’t have much to worry about after upgrading. .NET on XBOX 360 is the best thing we can dream! 
Now please, allow EVERYBODY to develop homebrew software for it. X-box – will be DirectX-box! It’s great!;) If MDX is moving to this new platform, will there be similar MDX releases for the compact framework, or is this the end of the road? After playing around with MDX 1.1 for a while I’ve moved over to the MDX 2.0 beta and I’m finding it great. I’m just worried about your comments "what is now called Managed DirectX 2 will be folded into the Xna Framework" and Wikipedia’s XNA entry () which kinda imply that MDX 2 is only gonna be available for Vista and the XBox 360. I’m presuming that the Wikipedia entry is just misleading me, but I use Windows 2003 for development and there are a lot of people out there that use XP and are gonna be for a long time. Is MDX2/XNA going to be available for those platforms, or is it just Vista/XBox 360? We’re not ignoring versions of Windows prior to Windows Vista I can assure you. Will XNA support WinForms (e.g., for tool development)? While this is all very interesting (not to mention extremely impressive). And it all sounds really great, and honestly there’s only one grey-area point that I feel needs to be addressed regarding a final release of XNA (even if it’s 100 years from now, by then I’ll be sleeping soundly and this won’t matter). I got into reading about all of this after reading a gamedev.net post from some guy from Microsoft asking for feedback about MDX and XNA from developers. Here’s my take (and I’m ONLY referring to the Managed DX portion here, not the rest of the XNA architecture. 
That is, if there -is- a point of seperation): I’m aware that you haven’t outright stated that there’d be a charge for the MDX portions, and you allude that there wouldn’t be, however, until I see the words in big flashing red letters on my screen saying "THE MDX PORTIONS WILL STILL BE FREE TO DOWNLOAD, NOW AND FOREVER" I’ll continue think the worst (that way I can’t possibly be disappointed :)) IF it’s not free, I think it’ll drive a LOT of people away. Not because people think all software should be free (don’t get me started on those that do), but because after years and years of being able to write DX apps using a free SDK, to be charged for it would be rediculous, even if it’s like $50-$100 (extremely cheap in comparison). But with alternatives like OpenGL (and its .NET bindings) out there for free, why buy a DX SDK? Yes, yes, OpenGL is for 3D only, but really, what’s the most commonly used component of DX? Right, D3D, so there’s no valid reason there in my opinion (please, no holy war crap about OpenGL vs. D3D, it’s been done to death). I’m a hobbyist when it comes to this stuff, I can take it or leave it, but I’m certainly not going to throw down money on something that’s been readily accessible to the public for so long. Anyway, I think it’d be horrid to start charging for the DX components of the XNA framework. And that’s my opinion, take it for what it’s worth. Yeah, I’m probably being paranoid, but you guys want opinions/suggestions/feedback/etc… from developers, and that’s feedback/opinion on the matter from a paranoid developer with too much time on his hands. Hi Tom, Great post, and great news! This is incredible – I can’t believe Microsoft has a whole team devoted to improving managed game development and cross-platform code with the 360! My only concern is the talk of "custom versions of the .NET framework". YIKES! Two versions of .NET installed on the same box sounds like a huge potential for DLL Hell 2.0. 
Not to mention compatibility issues with the huge existing framework, or the inevitable "delay" in releases (i.e. the "normal" framework team releases version 3.0 and a ton of new APIs, while the "gaming" framework team needs to port them all over again, resulting in a 9-12 month delay). Then there is the issue with WinForms – clearly an API ported to the XBox 360 wouldn't support that. Wouldn't it be better to use one framework per platform and extend the class libraries instead? (Microsoft.Gaming namespaces). The existing .NET API documentation could document if a class is available on the 360 or not, similar to how the "compact framework" is today. Hopefully it's just wires crossed and confusion in the final press release and what they really meant to say is that the new framework is just the 360 version, not a branched mscorwks.dll on Windows! The poster/commenter named "John" mentioned WinForms support – I think I need to bring up the issue as well – the ability to host a D3D device on a WinForm is going to be *critical* for tool development. I have to agree with the others here (those asking you to release MDX2 final, before moving on). I've spent the last 6 months working on an engine in MDX. I have also shifted the large majority of my code over to the beta (yeah, may be a bad idea in retrospect). But there are some pretty huge performance gains in MDX2 over 1.1. You are so close to wrapping it up. I for one am not fussed either way whether I can get my application working on an XBox 360. It is really not an issue for many developers. There should be a clear distinction between MDX and XNA, but both should still exist. It seems crazy to kill off MDX2. Rob – disappointed 🙁 I agree. The managed environment is nice for game development, because it gives you almost all non-graphics base components found in many engines for free. 
Garbage Collection, Reflection & AddOns, Scripting, etc… If you look at the public Unreal headers, there is a huge amount of code creating something like a managed environment. MDX is great and MDX 2.0 is even better. But it would be nice if we could have MDX 2.0 now for .Net 2.0 and XNA later when it's done. Hi, I work for a small games studio that creates small 3D and 2D arcade games. I look forward to XNA, but we were hoping for a release of ManagedDX2 soon. Is there no way that MDX2 could be a separate component that XNA utilises? That way both pieces of software could be released and everyone would be happy. Will XNA have any built-in support for 2D games? XNA? Who cares! I need a stable MDX 2.0 – not more, not less. Shipping several versions of the 2.0 assemblies with the SDK and then saying "ok, it's over boys" simply sucks. And the silly "MDX 1.1 is still there" is just plain MS arrogance. If you never ever want to ship it, why the hell call it "beta" and not "technology demo"? Maybe it's fun for the developers at the XBox team to fold it into XNA and do some technology demos – but for some real-world developer (money-wise) who has TO PLAN it's all big bullshit. Next question: will XNA be free for the independent developer? I bet my ass for the answer. The only solutions that remain: 1. Write a nice and fast wrapper by yourself. 2. Stay with C++. All this renders MDX useless for me. I go for option 2. For most of us developers the question is: how big is the chance that our code will ever run on an XBox360 or that we get our hands on a devkit? 0.0%! Managed DX is dead. No more, no less. Me – disappointed too 🙁 I am hoping Microsoft continues to support Visual Basic / MDX for game development. We are a small independent developer that has built a fantastic concept demo with one programmer in 12 months using just DX8.1 and VB6. Now we are moving into full development and are rewriting the engine with VB.NET and Managed DirectX. 
I fully reject that C++ is the only way to make games; our current project is a dramatic first-person sci-fi RPG complete with physics in a high-poly environment and damned good AI – it plays, baby – the only reason we aren't staying with VB6 is needing to guarantee future Vista compatibility. Hopefully I can get .NET moving as fast or faster than VB6. In closing, hooray for Microsoft and good luck with your monopoly! – I just ordered a 90 day trial of VS 2005 and downloaded a completely free version of VB.NET to complement my free DX9 SDK. – I don't imagine Sony or Nintendo will be mailing free dev-kits anytime soon. !Not disappointed! T Me again, I'm keeping my eye on your blog for updated news about Managed DX (to see if I've just wasted the last 5 months of my life). It seems I am not the only one saddened by the loss of MDX. These people have a point; it's pretty shitty to issue a new version (even if it was beta) in the DX SDK and then pull it just before release. PLEASE reconsider dumping it! I've been looking around the net; MDX was really just starting to take off, and there were many resources appearing. For the first time since I started using MDX, Google queries were actually bringing back useful, relevant information. I would guess most people using MDX are small-time (at least at the moment, while it takes off). Pretty much 0% of those will be able to afford the development costs of using XNA, or afford a development XBox360. It just doesn't seem sane. Again, please reconsider completely dumping it, and look for some sort of alternative!!!! Is MDX2.0 really that much faster than MDX1.1? Anyone have some numbers to share? Aaron, We ported a 200K LOC engine written in C#/MDX 1.1 to MDX 2.0 and we observed a 1FPS performance increase with a dense scene. Shaun You absolutely must release a final, non-beta Managed DirectX 2.0 that is not stripped down for the XNA Framework. This situation is outrageous. Consider releasing it as the final in its current form. 
After all, IT WORKS! You will absolutely kill whatever chance .NET ever had for getting a hold on professional game development. You need to allow us to keep our existing options if we do not want to develop strictly for some cut-down console environment! It makes absolutely NO SENSE to throw away something such as MDX 2.0. Whoever made that decision should be fired! Managed DirectX 1.1 is not an acceptable substitute! There are too many bugs and it does not take advantage of the new features of .NET 2.0! Wow, people are really on the ball when it comes to new DirectX SDK releases.
https://blogs.msdn.microsoft.com/tmiller/2006/03/20/managed-directx-2-0-xna-and-me/
#include <hallo.h>
* Ralf Nolden [Wed, May 07 2003, 11:40:56PM]:

Content-Description: signed data

> > According to Ralf Nolden, USB is compiled into the kernel.
>
> Well, I originally *assumed* that would be the problem - I just could see
> that the hanging process is rmmod - which brought me to the quick assumption
> that the module can't be removed (and is therefore compiled in). But I'm

I cannot reproduce this. And even if I could, file a bug against modutils. Rmmod should exit immediately if the module is not loaded.

> apparently wrong - if you load the module with hotplug then you'll naturally
> also unload it again. So it seems the bug is in the kernel module.

Module? Which module?

> OTOH, I don't know where the exact difference in the code is between the
> 2.4.20-bf2.4 kernel and the kernel-image-2.4.20-1-i386 (your one Herbert,
> from unstable; I backported it to woody). As kernel-image-2.4.20-1-i386
> works perfectly well with hotplug, the question is rather what the
> difference in the USB setup is. The best idea is probably to build bf2.4
> from the kernel-image-2.4.20-1-i386, no?

Huh? Those are different binary packages built from different source packages, but both use the same source from the kernel-source packages. The difference is in the config and a small patch, IIRC.

Regards,
Eduard.

--
The hardest gymnastics exercise is pulling your own leg.
(German original: "Die schwierigste Turnübung ist, sich selbst auf den Arm zu nehmen.")
https://lists.debian.org/debian-boot/2003/05/msg00173.html
Hello everyone, I am new to CPLEX with Python. I want to create a network topology where some nodes are connected with each other through links. Each link can handle a specific amount of traffic, and the node pairs send data to each other over the links. The optimization part is how to find the best possible paths that a node pair can use to send data when there is already traffic running in the network. I hope someone can help me with this problem. Regards

First thing to do is to get a mathematical model on paper. Do you have that? Can you post that here?

Thanks for your response. Please check the attached picture to get an idea of the problem. ![alt text][1] Assume that we have the network topology in the picture, where 6 nodes are connected with 6 links and each link can handle 100 Mbps of traffic, and the node pairs send data between each other as shown in the traffic matrix. The objective, for example, is to find the best possible paths to send 10 Mbps from node 1 to node 6 while the other nodes are sending data between each other. If something is not clear, please tell me and I will explain it. I really appreciate your help. [1]: /answers/storage/temp/22636-network.jpg

So this looks like a standard min-cost flow problem?

Answer by DanielJunglas (2938) | Jul 09, 2018 at 04:11 AM

If you know how to do things in AMPL then it might be a better idea to just work through the Python examples and tutorials that are shipped with CPLEX. You only have to learn how to create variables and constraints and that's it.
In any case, here is a short fixed charge network flow code that uses the graph and data from the fixnet.c example shipped with CPLEX:

from __future__ import print_function

"""
    1 -- 3 ---> demand = 1000
   / \  /
  /   \/
 0    /\
  \  /  \
   \/    \
    2 -- 4 ---> demand = 1
"""

import cplex

# Edge data
orig = [ 0, 0, 1, 1, 2, 2 ]                        # sources of arcs
dest = [ 1, 2, 3, 4, 3, 4 ]                        # destination of arcs
unitcost = [ 0, 0, 0, 0, 0, 5 ]                    # cost per unit
fixedcost = [ 1, 1, 1, 10, 1, 1 ]                  # fixed charge if arc used
capacity = [ 2000, 2000, 2000, 2000, 2000, 2000 ]  # arc capacity
m = len(orig)

# Node data (negative demand is injected into network, positive demand
# is removed)
demand = [ -1001, 0, 0, 1000, 1 ]
n = len(demand)

with cplex.Cplex() as cpx:
    # Create a flow variable for each edge:
    flow = list(cpx.variables.add(lb=[0] * m, ub=capacity,
                                  names=['flow%d' % i for i in range(m)]))
    # Create a fixed-charge variable for each edge:
    use = list(cpx.variables.add(lb=[0] * m, ub=[1] * m, types=['B'] * m,
                                 names=['use%d' % i for i in range(m)]))
    # Create fixed charge constraints: a flow variable can only be
    # used if the respective fixed charge variable is enabled.
    for f, u, c in zip(flow, use, capacity):
        cpx.linear_constraints.add(
            lin_expr=[cplex.SparsePair(ind=[f, u], val=[1.0, -c])],
            senses=['L'], rhs=[0.0])
    # Flow constraint: sum of inbound flow and outbound flow must equal
    # the demand
    for u, d in enumerate(demand):
        inarcs = [i for i in range(m) if dest[i] == u]
        outarcs = [i for i in range(m) if orig[i] == u]
        cpx.linear_constraints.add(
            lin_expr=[cplex.SparsePair(ind=inarcs + outarcs,
                                       val=[1.0] * len(inarcs) + [-1.0] * len(outarcs))],
            senses=['E'], rhs=[d])
    # Objective function.
    cpx.objective.set_linear(zip(flow, unitcost))
    cpx.objective.set_linear(zip(use, fixedcost))
    cpx.write('flow.lp')
    cpx.solve()
    print('Objective: %f' % cpx.solution.get_objective_value())
    for e, v in enumerate(cpx.solution.get_values(flow)):
        print(' %s [%d -> %d] = %8.2f' % (cpx.variables.get_names(e), orig[e], dest[e], v))

Note that this uses the CPLEX Python API. If you want to use docplex then things would look a little different (see here for the differences between the two).

Thank you for your support. Do you have such a tutorial, or some other help, for CPLEX with Python for traffic engineering in communications?
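While the CPLEX model above is the proper tool for the full min-cost flow problem, the path-finding part of the question can be prototyped without a solver. This is only a sketch: the 6-node topology and unit link weights below are illustrative, not taken from the question's picture, and in practice the weights could be set from current link utilisation.

```python
import heapq

def shortest_path(edges, source, target):
    """Dijkstra over an undirected weighted graph given as (u, v, weight) triples."""
    graph = {}
    for u, v, w in edges:
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    dist = {source: 0}
    prev = {}
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(queue, (nd, v))
    # Walk the predecessor chain back from the target to recover the path.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]

# Illustrative 6-node topology; weights could model link load instead of hops.
edges = [(1, 2, 1), (2, 3, 1), (3, 6, 1), (1, 4, 1), (4, 5, 1), (5, 6, 1)]
path, cost = shortest_path(edges, 1, 6)
print(path, cost)  # → [1, 2, 3, 6] 3
```

Rerunning this with weights updated to reflect the traffic already on each link gives a quick way to sanity-check which path a node pair should prefer before committing to the full optimisation model.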
https://developer.ibm.com/answers/questions/453134/how-to-define-a-network-topology-with-nodes-and-li.html
Perfect is the enemy of good. I have an interesting story to tell today, and I'm sure the title of this post probably surprised you. Well, it's not satire or schadenfreude, but actually serious advice. This is the story of some code I wrote (back in 2013) to solve a problem, where I deliberately chose a "bad" solution. One of the things I've done for many years now is tutor high school students to learn programming as part of the NCSS Challenge. Each week, for five weeks, students are given a set of problems to solve, accompanied by notes explaining all the Python features and concepts they need to solve them. Many high school computing teachers sign up their class (sometimes using it as an assessment task), but many students also sign up themselves. In addition to the notes, students can also chat and discuss the problems (but not share code) on forums, or seek help privately from our volunteer tutors. The challenge uses Discourse for the forums, which works well, but also for the private messaging (with some custom modifications), which works less well. There are two main problems, which the following sequence illustrates:

1. Student 1 sends a message (sending a notification to all tutors).
2. Tutor A starts writing a response.
3. Tutor B starts writing a response.
4. Tutor A posts their response.
5. Tutor B either doesn't notice (meaning the student gets two responses, which might attempt to solve the problem in different ways), or does notice and is annoyed that their time has been wasted.
6. Tutor A goes back to the notifications, and starts clicking through the unread posts, but they've all been answered by other tutors.

To solve the problem of multiple tutors answering the same post, we set up an IRC channel that tutors joined, so they could announce which threads they were going to work on.
Various tutors wrote IRC bots to automate bits of this, and then I combined all the functionality into a single bot for keeping track of what state (unreplied/replied) threads were in, and which tutor was currently assigned to answer it. And now you have enough background to get onto what my bad solution was. One of the parts of this IRC bot was the "threads backend", which stores thread metadata: author, title, unreplied/replied state, and the tutor assigned to it. Now the obvious answer to this is to use a database: SQLite if it's a toy problem, and PostgreSQL if it's big enough. Instead of doing that, however, I went with something far more crude:

import json
import os

class Thread(object):
    def __init__(self, title=None, ...):
        self.title = title
        ...

    def to_json(self):
        return {
            'title': self.title,
            ...
        }

def load_threads():
    with open('threads.json', 'r') as f:
        threads = json.load(f)
    return {
        key: Thread(**kwargs)
        for key, kwargs in threads.iteritems()
    }

def save_threads(threads):
    with open('threads.json.new', 'w') as f:
        # If this crashes, we end up with an empty threads.json.new file,
        # but threads.json is still intact.
        json.dump({
            key: thread.to_json()
            for key, thread in threads.iteritems()
        }, f)
    os.rename('threads.json.new', 'threads.json')

...

def request_handler(request):
    threads = load_threads()
    if request.action == 'get_post':
        return get_post(threads, request)
    elif request.action == 'new_post':
        response = new_post(threads, request)
        save_threads(threads)
        return response

...

It's terrible! On every request, the entire database is read from disk, and whenever a thread gets modified, the entire database has to be written back to disk (not just the parts which changed). There are some other problems too: it doesn't let you handle requests concurrently, and it doesn't let you create secondary indices to speed up queries. It's also missing an os.fsync() call to ensure the data is actually written to disk, but let's not talk about that one.
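Since the missing os.fsync() just came up: a durable variant of the save path would flush and fsync before the atomic rename. This is only a sketch (Python 3, not the author's code), and ThreadRecord is a hypothetical stand-in for the Thread class above:

```python
import json
import os
import tempfile

class ThreadRecord:
    # Minimal stand-in for the post's Thread class (illustrative only).
    def __init__(self, title=None):
        self.title = title

    def to_json(self):
        return {'title': self.title}

def save_threads_durably(threads, path):
    tmp = path + '.new'
    with open(tmp, 'w') as f:
        json.dump({key: t.to_json() for key, t in threads.items()}, f)
        f.flush()
        os.fsync(f.fileno())  # push the bytes to disk before the rename
    os.rename(tmp, path)      # atomic replacement on POSIX filesystems

path = os.path.join(tempfile.mkdtemp(), 'threads.json')
save_threads_durably({'42': ThreadRecord('help with loops')}, path)
with open(path) as f:
    saved = json.load(f)
print(saved)  # → {'42': {'title': 'help with loops'}}
```

Without the fsync, the rename can land on disk before the file's contents do, so a badly timed power loss could leave an empty threads.json behind the "atomic" swap.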
😉 The secondary indices point isn't just a throwaway comment either: one of the operations this backend supports is filtering threads by various attributes. Obviously, the only way this backend can support that is by doing a linear scan over all threads, and returning the ones which match. But actually, despite the number of raised eyebrows, it's not actually quite as terrible as it first seems. Firstly, it's a backend to an IRC bot, which means that any response it gives is going to be seen on an IRC channel, which is (sort of) a serialised output: so not handling concurrent requests isn't a big deal. It also doesn't have to handle a lot of traffic (much less than one request per second), and even a 1s latency wouldn't be obviously bad on IRC. The other thing to consider is that the database isn't actually that big! In the first year, there were ~500 threads: even with a generous 1KiB of metadata per thread, that's still less than a mebibyte, which is easily written to disk in less than a second. It's also small enough to fit in the OS buffer cache, so reading the database from "disk" isn't expensive either. And you can also see that those linear scans aren't that expensive either. Furthermore, there are some advantages to the code. It's simple: there's full visibility into the internal state, serialisation and deserialisation processes (unlike many ORMs, e.g. SQLAlchemy). It's debuggable: it's just JSON, so it's incredibly easy to manually inspect (after pretty-printing with python -m json.tool) or write custom tools to analyse it. Schema migrations are easy: tell Thread.__init__ about the new key (providing a default value), and have Thread.to_json emit whatever schema you want stored: on the next database write, all records will be rewritten to the new schema. This system performed adequately for the first year it was used. There were many schema changes, but it had no data corruption and was never the cause of any outages.
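That schema-migration recipe (add the key to __init__ with a default, emit it from to_json) can be illustrated with a small sketch; the field names here are hypothetical, not the bot's real schema:

```python
class Thread:
    def __init__(self, title=None, state='unreplied', assignee=None):
        # `assignee` is a newly added column: old records simply omit it,
        # so the default fills it in at load time.
        self.title = title
        self.state = state
        self.assignee = assignee

    def to_json(self):
        # Emitting all three keys means every record is upgraded to the
        # new schema on the next full rewrite of the database file.
        return {'title': self.title, 'state': self.state,
                'assignee': self.assignee}

old_record = {'title': 'help with loops', 'state': 'replied'}  # pre-migration
thread = Thread(**old_record)
print(thread.to_json())
# → {'title': 'help with loops', 'state': 'replied', 'assignee': None}
```

Because every save rewrites the whole file, the migration happens for free; a relational store would need an explicit ALTER TABLE and a backfill for the same change.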
It’s performance was good enough: by the end, some filtering requests took just over a second to respond to, but it was never bad enough for users to notice. The next year, I rewrote it to use SQLAlchemy backed by SQLite and added indices to optimise the most frequent filter requests. But I didn’t see that as admitting defeat: on the contrary: by starting with a terrible implementation, I learnt exactly what the bottlenecks were, and could then make informed decisions when designing the next version. Perhaps you weren’t surprised I went with a “bad” solution first: the idea is certainly not new. It’s an interesting strategy, but you need to be careful with it: if you don’t make it bad enough, you won’t be motivated enough to rewrite it in the future, and if you make it too bad, then it’ll crash and burn before you’re ready to rewrite. Even if this story hasn’t fully convinced you that deliberately choosing imperfect solutions is a good idea, I hope you can see it’s a viable strategy in some cases.
https://blog.flowblok.id.au/2015-10/solving-problems-badly.html
Go Plugin — Write it Once

Hello folks, welcome. In this article, we will go through one of Go's most powerful features, Plugins. Let's assume we are developing a large-scale application and we want to decouple each module so that the system won't fail if one of the moving parts stops working. We also want to be able to add new features to the existing system without downtime. Well, Plugins can help you in that regard. Let's dig deep.

Let us develop a Job Aggregator utilising the power of Plugins. We want to build a system as depicted in the image at the left. An aggregator will be fetching job info from different sources and, based on the data received, markdown files will be generated and saved to appropriate stores. After getting the aggregator ready, we need people to add different sources. We don't know how to fetch data from these sources, so providers have to tell us how to parse their data. We will parse the data into the following structure:

import "encoding/json"

type Empleo struct {
    Title    string
    Company  string
    Location string
    Link     string
    Tags     []string
    Time     string
}

func (e *Empleo) Serialize() (string, error) {
    b, err := json.Marshal(e)
    if err != nil {
        return "", err
    }
    return string(b), err
}

An interface tells providers what the aggregator knows about a source:

type EmpleoSource interface {
    Init() error
    Fetch() ([]Empleo, bool)
}

If Fetch returns true, the aggregator assumes there is still data left to fetch, so it will call the Fetch method again. Now, how will the providers provide a source to the aggregator? Here comes the plugin. Providers have to write a plugin satisfying the EmpleoSource interface. Assume someone provided us a source which parses the data from the We Love Golang site. Notice that the package name is main. Notice that the above source implements our EmpleoSource interface. Also notice that at line 53 we have an instance of WeLoveGolang named INSTANCEWELOVEGOLANG (it will be required later). What's next now?
Well, it’s time to build the plugin. To do so we have to use the below command, go build -v -buildmode=plugin -o output_file.so input_file.go So once we build the plugin it will provide us .so file which is shared binary library. We got our first plugin. Now let’s look how the aggregator is using this .so file. We have a configuration file which contains the sources. Notice that we have one source in the below configuration and key is same as instance variable we have declared in the plugin, followed by path to the .so file (in the example .so file is located in a folder named bin from root directory). { "sources": { "INSTANCEWELOVEGOLANG": "./bin/we_love_golang.go.so" } } Below is the main code of our aggregator, We have used viper to read configuration from json file. Lets go through line 28–62. We have declared an array of EmpleoSource which will hold all the sources. Later read sources from sources tag of json file. At line 32 Open function of plugin returns an instance of Plugin using .so file provided as parameter (v is the path to .so file), it will return error if .so file is invalid or not found. Later we used Lookup function which returns an instance of Symbol (note that we have provided instance variable name declared in plugin as symbol name, you can lookup for function as well). Now it’s time to cast Symbol to interface EmpleoSource. Once we have casted it successfully method Init() and method Fetch() is available to aggregator to use. Finally we have used fetched results after combining all of them to produce markdown file. See aggregator doesn’t know how sources are working and you don’t need to do any change in core system. What you need to do is write a plugin satisfying the interface, build the plugin to generate .so file and finally let the aggregator know about the .so file through config. Aggregator will do the rest. Cool, isn’t it ? 
In this way you can write as many sources as you want and plug them into the aggregator without making any change to the aggregator. Happy Coding. Complete source code:
https://medium.com/pathaoengineering/go-plugin-write-it-once-39be2ba38bc4
Now that you understand a little bit about the different types of EJBs, you are ready to start examining the EJB programming model. In fact, there are two distinctly different programming models for EJBs—one for session beans and entity beans, and another one for MDBs. We'll begin by looking at the way in which session and entity beans are developed and used, and then take a look at how the MDB approach differs from that base. First, let's remember that EJBs are components. Each EJB is composed of a number of different classes—some provided by you the developer, and some provided by the EJB container. An example of this is how session and entity EJBs use the factory idiom for creating objects. Since an EJB is a component, you can't simply use the new operator in Java to create one. The code that creates the EJB may not be running in the same JVM as the actual EJB, and, beyond that, the EJB container may want to pool EJB objects and reuse them to reduce the overhead of object creation. Instead, an EJB programmer must create a Java interface, called a home interface, which defines the ways in which the EJB will be created. The EJB factories that implement these interfaces (whose implementation is provided by the container) are called EJB homes. So how, then, does a client obtain an EJB home? The client cannot use the new operator, since then she would simply be back in the same position we were in earlier. Clients locate EJB homes through a standard naming service that supports the JNDI API. JNDI is simple to use (like the RMI naming service) but it supports many of the advanced features of the CORBA naming service (like support for directory-structured names). When you deploy a session or an entity bean, you give it a unique JNDI name; clients can then look the implementation of the home up through that unique name. Once an EJB client obtains a home and uses it to look up or create an EJB, what does it receive in return?
EJBs take the RMI approach of only requiring that the programmer define a simple Java interface that declares a remote object's external face to the rest of the world. In EJB terminology, this is called the remote interface, and it declares the externally accessible methods the remote object will implement. What the client receives, then, is another class provided by the container that implements this interface. This object acts as a proxy to the actual EJB implementation class, which is the final piece of this EJB that the developer is required to implement and provide to the container within the EJB-JAR file. By applying the factory and proxy patterns in this way, the EJB specification avoids some of the problems that plagued CORBA and RMI. Since remote and home are Java interfaces, we avoid the need to program in both IDL and Java as in CORBA. Since EJB deployment automatically registers EJB homes in the JNDI namespace, we avoid the bootstrapping problem of RMI, since RMI required the developer to manually insert distributed objects into the RMI registry. As we said earlier, starting in EJB 2.0 there are two ways to define homes and proxies for your EJBs, depending on whether or not the client knows it will be deployed within the same JVM as the EJB. In addition to declaring a home interface and a remote interface, EJB providers can declare a local home interface and a local interface. If the EJB client and the EJB are in the same JVM, then the client can choose to look up the local home rather than the remote home from the JNDI provider. The client will receive an object that implements the local interface from any of the local home's factory methods. In most other respects, using the local interface is the same as using a remote interface. We'll discuss what this means to the design of your EJB programs in the next section. Finally, we come to the programming model of MDBs. 
This model is significantly simpler than the programming model for entity or session beans, since MDBs do not have clients that invoke specific methods on those beans. An MDB responds to any messages that are sent to the JMS destination that it monitors, regardless of how that message arrived at that destination. So, this simplifies the interface of the MDB down to a single method—onMessage(Message aMsg). All MDBs implement this same interface, although each will respond differently to the message that is received. Likewise, since there is no need to look up an MDB from JNDI, there is no home or local home interface for the MDB.
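The factory and proxy split described above for session and entity beans can be sketched in plain Java. This is not container code: the javax.ejb types, remoting, pooling, and JNDI are replaced here by ordinary interfaces and a map, and all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// What the EJB spec calls the remote interface: the bean's external face.
interface Greeter {
    String greet(String name);
}

// What the spec calls the home interface: a factory, never a `new` call.
interface GreeterHome {
    Greeter create();
}

// The bean implementation the developer ships in the EJB-JAR.
class GreeterBean implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

// Container-generated proxy: forwards calls to the implementation
// (a real container would add remoting and transactions around this).
class GreeterProxy implements Greeter {
    private final Greeter delegate = new GreeterBean();
    public String greet(String name) { return delegate.greet(name); }
}

public class Main {
    // Stand-in for the JNDI namespace: unique names mapped to homes.
    static Map<String, GreeterHome> jndi = new HashMap<>();

    public static void main(String[] args) {
        // Deployment registers the home under its unique JNDI name.
        jndi.put("ejb/Greeter", () -> new GreeterProxy());
        // A client looks the home up and uses the factory method;
        // what it gets back is the proxy, not the bean itself.
        Greeter g = jndi.get("ejb/Greeter").create();
        System.out.println(g.greet("client"));
    }
}
```

Running this prints "Hello, client"; the client only ever touches the Greeter interface, which is why the container is free to swap the proxy's internals, pool bean instances, or route the call to another JVM.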
http://codeidol.com/community/java/introducing-the-ejb-programming-model/10968/
Agenda See also: IRC log -> Accepted. -> Accepted. Richard's regrets continue; probably regrets from Mohamed, Henry until 16 August. -> Murray: On some fourth level headings, the formatting looks a bit odd. <scribe> ACTION: Norm to do something about the formatting of fourth level headings [recorded in] Murray: In particular, since we have an element name in there, having it in u/c is a problem. Mohamed: Some small editorial problems that I sent to Alex didn't get incorporated. ... and error codes are in an odd order. <scribe> ACTION: Norm to sort the error codes in the appendix [recorded in] Mohamed: What about p:map? Norm: Yes, we still need to talk about that, but I don't think it'll get in this draft. Mohamed: We have a schematron reference but no schematron step. Norm: I thought we had agreed to have a schematron step. Henry: Seems reasonable to me, along with XSLT2 and XSL Formatter. Mohamed: We may also want to have an NVDL step. Norm: Yes. ... I'd like someone to propose how the NVDL step would work. Murray: What about an appendix for the WG members. Norm: Sure. Proposal: We'll publish this as a public Working Draft tomorrow. Accepted. -> Norm: Let's struggle on in Alex's absence. ... What about parsing HTML? Henry: I seem to recall that if we said the content-type was text/html, then you get an implementation defined mapping from HTML to XHTML. Norm: Should we do it that way? Henry: There was an implicit reference to the HTTP request step that it by default produces escaped markup. Norm: I hope that's wrong. Henry: We have an unescape markup step because we know that Atom, RSS, NewsML, etc can encapsulate documents with escaped markup. ... So it seems that p:http-request and p:unescape-markup have this problem. ... but what do save/serialize have to do with this? ... I'd like to split receiving and producing. .... Murray: Are we still talking about infosets? 
Henry: Yes, that's why this problem arises Murray: So it's implementation defined how you build an infoset from something that isn't XML. Norm: I'm happy with Henry's proposal as a starting point. Murray: I'm worried about how many different kinds of implementation-defined we're going to get. ... In GRDDL, we have an issue called faithful infosets. This arises because in GRDDL, we're talking about XPath node trees and there are questions about validation and XInclude, etc. ... This seems to create another faithful infoset issue. Scribe stepped away, a few minutes lost Henry: The things you can depend on are the minimal common subset that more-or-less the infoset defines ... It's true that there's more in the XPath 2.0 datamodel, but you can't get at it from our language. Norm: I'm sympathetic because of web services like Flickr that allow users to get comments Murray: I think everything needs to be able to filter to XML or you need to have a specific component that's for loading non-XML things Henry: I think Murray is right, but we're going to cheat just a little bit and say there are two. ... I'm happy that if you want to inject HTML into your pipeline and gaurantee that it's XML then you have to use http-request. Norm: We have load, basically only to support DTD validation <Zakim> MoZ, you wanted to ask Murray on the difference between XPath node trees and infosets and to Mohamed: I have a problem with components that translate from HTML to XML. Norm: I want it to be implementation defined. Mohamed: Norm, you said HTML to XHTML, but maybe we just meant HTML to XML. Henry: Yes, I think that was my fault. All we need is XML. Murray outlines a recent GRDDL use case about faithfulness of a representation Murray: My initial thought was that there should be a "garbage-in" step that could reach out and bring anything in. 
Norm: I think implementors will provide this if we don't Henry: The way I read this, you can specify that you require an application/html+xml media type and that will cause the pipeline to fail if you don't get it. Murray: I do an http-request and what I get back is an HTML document. I run some kind of process over that and I get some result. That result may be successful or not successfull. ... What comes out of http-request will be the result. ... But presumably I as the author of the pipeline want to know a couple of things. Norm: I think you can find all of those things by looking at the headers and body you get back. Henry: If you're using tidy, I'll expect implementations to fail if tidy throws errors. Norm: I agree. Henry: If you're using tagsoup, then you know you'll always get an output. <Zakim> MoZ, you wanted to speaks about the difference between p:parameter namespace=""... and p:option without namespace@ Mohamed: Are we sure that the parameters of the header will be available to the next step? ... The http-request step will ask with some parameters, the result will be one of those. Murray: So the http-request does a get and there are some headers. Norm: You get those back in the headers. <Zakim> ht, you wanted to register a concern about the architecture of p:http-request Henry: If no one else is worrying about this, that's ok, because I'm only looking at this in detail now. ... Had we already discussed doing this using two output ports instead? ... I'd like to be able to write a take-my-chances pipeline where the primary output is a sequence of documents. ... And only if I care about the minutia do I look at the port. Norm: I'm not sure how that would handle multipart related. Henry: An alternative would be to say that there is an option that says "take my chances" ... I want a sequence of documents or fail, don't bother me with all this stuff. 
Norm: That's not on the table now, but if you can fire off a quick message before you go on vacation, that would be good.
<Zakim> MoZ, you wanted to ask the question why p:store/!result is not primary but not p:xslformatter/!result
Norm: Oversight, I agree.
Mohamed: What is the default for required on option?
Norm: "no"
Mohamed: It's written explicitly in some places.
Norm: Are we satisfied that we've given editorial direction to Alex?
[Norm attempts to describe the serialization problem that probably caused Alex to lump them together.]
None.
Adjourned
http://www.w3.org/XML/XProc/2007/07/05-minutes
First App View (7:38) with Kenneth Love

Let's add a view to our app to show a list of all of our courses.

include() allows you to include a list of URLs from another module. It will accept the variable name or a string with a dotted path to the default urlpatterns variable. If you don't have include() in your urls.py (more recent versions of Django have removed it), add it to the import line that imports url. That line will now look like from django.conf.urls import url, include. This is the Comprehensions Workshop that's mentioned in the video. Comprehensions are a great way of doing quick work with iterables.

Let's make a view in our app that will list all of the available courses. 0:00 We'll have to do a query to select all of the courses, 0:04 but I'm sure you remember how to do that from the last video. 0:06 If you want, feel free to pause me right now and try to build the view and 0:09 URL yourself. 0:12 We've already done all of these steps, so you should be able to get most, 0:12 if not all of it, without me. 0:17 So as you probably guessed, we're going to start in courses, views.py. 0:19 Here we are in our views.py. 0:23 Django provided us with an import here at the top, 0:26 a function named render that you haven't seen before. 0:28 This is actually really, really handy for when we want to start using templates. 0:32 For this first view though, let's just return plain text like we did before. 0:35 Which means that we need to import 0:41 the HttpResponse class again. 0:45 And of course we have to create a new view. 0:51 Now I think since this view is going to show a list of courses, 0:53 I'm gonna name it course list. 0:57 And you can, of course, name it anything that you want. 1:00 And as always, it takes a request. 1:03 Now since we want to use our course model, we should import it.
1:06 So I'm gonna add that up here from .models, 1:09 which the dot means look for the models module in the current directory. 1:13 Import Course. 1:17 Take out that comment. 1:20 Okay, so course list, what do we want to do in course list? 1:22 Now I can use the model to select everything. 1:26 And then I'm gonna use that. 1:29 And I'll merge that together into a string that joins all the titles, and 1:31 I'll send that back. 1:35 Okay so we'll just have all the titles. 1:36 So let's say courses equals course.objects.all. 1:38 And then we'll say the output equals comma space dot join courses, 1:44 and then we'll return HttpResponse of output. 1:50 Okay, so we select all of the courses that exist. 1:55 We join them together with commas, and 2:01 then we return an HttpResponse that has all those names joined by commas. 2:04 Now, of course, to see this view we have to create a URL for it. 2:09 This is a little bit interesting when you want to do it for an application and 2:13 you wanna do it cleanly. 2:17 So let's take this slowly. 2:18 So first things first, of course. 2:20 We don't have a urls.py in our app, so we should create one. 2:23 So we'll say new file, urls.py. 2:26 Cool, there it is. 2:32 Okay, easy enough. 2:33 Now what goes in here? 2:35 Well, the only thing that we need to import is the URL object from Django. 2:36 And we also have to import our views. 2:42 So, lets take care of the Django import first. 2:45 From django.comf.urls import url. 2:48 And then from .import views. 2:52 Nothing special there. 2:56 We've done these two lines before. 2:57 We had both of these imports, in fact, in our site-wide urls that we made. 3:00 Back up a stage or so. 3:05 Now, if you remembered those or cheated and looked at them, then you know 3:06 that we have to create our URL patterns variable, and fill it up with our URLs. 3:11 It's kind of weird that you have to name the variable exactly URL patterns. 
3:16 Python isn't usually so proscriptive, but of course, this is Django, 3:20 not just standard Python. 3:23 Django can make whatever rules that it wants, and this is a rule that it made. 3:25 Let's make our variables. 3:29 So urlpatterns equals, and it's just a list. 3:31 If you've done old Django, it didn't use to be just a list. 3:34 It's so nice now that it's just a list. 3:38 So we're gonna do url. 3:40 And it's gonna start and stop, and then we're gonna say views.course_list. 3:44 I gave this the same url as our home page, is that going to matter? 3:51 Well, let's see. 3:55 Let's run the server again, manage.py runserver. 3:56 0.0.0:8000 and let's open up our preview. 4:01 Oh, I accidentally did 8080. 4:10 There we go, 8000. 4:12 So we've got Hello World! 4:14 So no it didn't matter, we still get the home page. 4:16 Now that's partly because our courses urls.py isn't 4:19 even being loaded by Django at the moment. 4:23 Django doesn't automatically look for urls.py inside of apps, and 4:25 we haven't told it to look there, so it hasn't. 4:28 Okay, let's change that. 4:32 Come back over here and let's go to our urls.py for the project. 4:34 Alright, this is our big one, right? 4:40 And we're gonna add a new line, and we're gonna put it in before this admin line. 4:42 We're gonna get to what that admin is here in a little bit, but 4:48 right now let's just add in this new line. 4:51 So we're gonna say that it starts with courses, and 4:53 we're going to include courses.urls, and then a comma. 4:58 Now, the include function here, this one right here, 5:05 lets us include a set of URL patterns from somewhere else. 5:10 In this case, we're including them from our Courses app. 5:14 But why is it in quotes? 5:18 Django turns that string that we give it, so, this is just a string. 5:19 Turns that string into an import path, which lets it find the URL patterns.
5:24 And this means that we don't have to import the URLs ourselves into every 5:29 place that we want to use them. 5:32 This really, really helps with the plugableness and 5:35 modularity that I keep bringing up. 5:37 Okay, so now, let's find out, did the home page change? 5:40 No. 5:45 What if we go to slash courses? 5:46 And we've got a problem. 5:51 We got an error. 5:52 At least that's something though right? 5:53 Since we included the course URLs at slash courses and 5:54 the only URL we have there is just an empty string, so 5:58 long as nothing comes after courses, we get that course list view. 6:00 Unfortunately though, that view has a bug. 6:04 Let's see if we can fix that. 6:06 So our problem is that we got a course at the beginning, and 6:09 it expected a string instance. 6:13 Let's go back over here and look at our view. 6:14 Now we're using join and join expects everything in here to be strings. 6:17 Well those aren't all strings, those are courses. 6:22 So we're gonna fix that with a quick little list comprehension. 6:25 So we're gonna say we're gonna join str of courses ,or of course rather, for. 6:29 For course in courses. 6:43 Then let's close our list there, okay, STR course for course in course. 6:45 If this line is a little gibberishy for you, I'd suggest checking out 6:55 the comprehensions workshop that we have here on treehouse. 6:59 I'll link to it in the teacher's notes. 7:02 Basically, what we have is a for loop inside of a list, so 7:04 that it creates a list by working through the for loop. 7:07 It's a really handy construct. 7:10 Okay, so does this fix the bug? 7:11 It did! 7:15 Check that out. 7:16 There is the list of our courses, so excellent. 7:17 Congratulations on your first Django view that uses the ORM. 7:21 That's a giant achievement! 7:24 But, as I bet you can imagine, no one wants to have to go into the Django shell 7:26 to create all of their model instances. 7:29 There has to be a better way and there is. 
7:31 The Django admin, that URL we've had from the beginning, is coming up next. 7:33
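The view assembled over the course of the video comes down to one fix: str.join() needs strings, not model instances. The sketch below uses a plain stand-in class instead of a real Django model (so it can be run anywhere, without a Django project); in the actual app, Course is a django.db.models.Model subclass and the view returns an HttpResponse.

```python
# Stand-in for the Django model, so the join() fix can run anywhere.
class Course:
    def __init__(self, title):
        self.title = title

    def __str__(self):
        # Django calls this when a model instance is rendered as text.
        return self.title


def course_list(courses):
    # str.join() only accepts strings, so each Course must be converted
    # first -- this is the list-comprehension fix from the video.
    return ", ".join(str(course) for course in courses)


courses = [Course("Python Basics"), Course("Django Basics")]
print(course_list(courses))  # Python Basics, Django Basics
```

Without the str() call, join() raises a TypeError about expecting a string instance, which is exactly the error shown in the video.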
https://teamtreehouse.com/library/first-app-view
MPLABX with XC8 – Getting started & your first Program

In this post, we explain how to set up the MPLAB XC8 compiler and demonstrate how to write and execute a first program for the PIC16F877A microcontroller. First, we will see how to create a new project with the MPLAB XC8 compiler. After that, we will write an example program so that you can learn how to compile code and generate a hex file. In the end, we will learn to upload code to a PIC microcontroller using MPLAB X IPE.

Download and Install MPLAB IDE and XC8 Compiler

Microchip provides the MPLAB integrated development environment (IDE) free of cost. We can use this IDE to program PIC and AVR microcontrollers. Additionally, we can use it along with the XC8, XC16 and XC32 compilers to program PIC10F, PIC12F, PIC16, PIC18, PIC24, PIC32 and dsPIC series microcontrollers. In this post, we will see how to use MPLAB with the XC8 compiler, but you can follow the same instructions to use this IDE with the other compilers.

Download MPLAB X IDE

First of all, we will download the MPLAB X IDE from the following link: Open this link and go to the bottom of the page. You will find many options according to version and operating system. This IDE is available for three popular operating systems: Windows, macOS and Linux. Download the latest version according to your system. After that, click on the installation file and follow the instructions to install the IDE. We are not going to explain the installation process, because it is pretty straightforward.

Download XC8 Compiler

In the last section, we only downloaded and installed the IDE. MPLAB IDE by default does not include any compiler. We can use it to write assembly language programs without the need for any compiler. But if we want to write code in embedded C language, we need to install a compiler separately. Go to this link and download the XC8 compiler: scroll to the end of the page.
Click on the compilers download menu and click on the XC8 compiler to download it. Again, make sure to download the compiler according to the operating system that you are using. After that, install the XC8 compiler by clicking on the installation file. Once you have completed the installation process, you don't need to do anything else: XC8 gets linked with the MPLAB IDE automatically.

Create New Project with MPLAB XC8 Compiler

To create a new project, follow these steps:

Create a New Project

First, open MPLAB X IDE by clicking on its icon and then, from the menu, select File >> New Project. You can also open existing projects, close projects and import from this menu. After that, this window will pop up. From the categories option, pick "Microchip Embedded" and from the Projects window, select "Standalone Project". After that, click on the Next button.

Select Microcontroller

Now we will select the target microcontroller that we want to use. Type the name of the PIC microcontroller in the device window. For example, since we will be using the PIC16F877A microcontroller, we have typed PIC16F877A.

Programmer Hardware Tool

Now, from the Select Tool option, we can select hardware tools such as programmers, debuggers or other tools. For example, if you are using a PICKit3 programmer to upload code to the PIC microcontroller, then select PICKit3. Otherwise, you can select from the other available options. Moreover, if you are not using any hardware and just want to use a simulator, you can select the simulator option.

MPLAB Compiler Selection

As we mentioned earlier, the MPLAB IDE supports many compilers. From the compiler toolchains, select the compiler you want to use. Right now, it is showing only the XC8 compiler and MPASM, because we have only installed the XC8 compiler and MPASM is available by default in MPLAB. But if you have installed multiple compilers, all will show here. Now select the XC8 compiler and click on the Next button.
Save Project Location

In this last step, we will select the location where we want to store the PIC16F877A project on our system. Type a name for the project in the Project Name field. Click Browse to set the location of the project files. To mark the current project in the IDE (when multiple projects exist) as the main project, click "Set as the main project". Now click on the Finish button. It will create a new project and you will see a project window in MPLAB as shown below:

Write First Program with MPLAB XC8 Compiler

Before writing the first program with the MPLAB XC8 compiler, we need to add a source file or C file to the project. The easiest way to add a source file is through the project window. Right-click in the project window and select New >> C Source File. After you click, this window will open. Give this source file a useful name, as we have given it the name "pic16f877aLED", and select the extension 'c'. Similarly, we can also add header files, but for header files select the extension 'h' from the menu. After that, click on the "Finish" button. It will create a new source file. This file with your specified name will also appear in the project window. The file contains this minimum code, which is the starting point of your first project with the MPLAB XC8 compiler:

#include <xc.h>

void main(void) {
    return;
}

The #include <xc.h> header file contains definitions of all registers of the PIC16F877A microcontroller, along with other microcontroller-specific features.

How to Compile Code with MPLAB XC8 Compiler

After you finish writing your first program, you will need to compile the code to generate a hex file and to see if there is an error in the code. To compile code, click on the Build Project button. If there is no error in the code, it will display a build-successful message and generate a hex file. You can upload this hex file to your microcontroller.
You can read this tutorial on how to upload code to a PIC microcontroller using a PICKit3 programmer: If you are using Proteus, you can get the hex file from your project folder and load it into Proteus.

MPLAB XC8 Compiler Example

In this section, we will take an example code and compile it. After that, we will load the hex file into a Proteus simulation and see how it works.

How to Generate a Configuration Bits File

First of all, we need to generate a configuration bits file with the MPLAB XC8 compiler. The configuration bits file defines main features of the PIC microcontroller such as the oscillator setting, watchdog timer, Power-on Reset, Brown-out detect settings, etc. To know more about configuration bits, read this post: To generate the configuration bits file, go to Window >> Target Memory Views >> Configuration Bits and click on it. After that, you will see the configuration bit settings field. Click on "Generate Source Code to Output". It will generate a file. Copy that file and add it to your project. One other possible way to add this file to the project is through a header file: create a header file, add the generated code to it, and include it from your main code. We add this configuration header at the top of the main source file:

#include <xc.h>

void main(void) {
    return;
}

We are not going to explain how this code works, because this tutorial is a getting-started guide only. The purpose of this tutorial is to teach you how to create your first project and compile code with the MPLAB XC8 compiler.

PIC Microcontroller LED Blinking Example

In this section, we will see an LED blinking example. We use an LED on the RB0 pin of the PIC16F877A microcontroller: we connect an LED to pin 0 of PORTB through a current limiting resistor. Now copy this code to your MPLAB project and generate a hex file:

#define _XTAL_FREQ 8000000
#include <xc.h>

void main(void) {
    TRISB = 0x00;
    while (1) {
        RB0 = 1;
        __delay_ms(500);
        RB0 = 0;
        __delay_ms(500);
    }
    return;
}

This example code will blink an LED with a delay of half a second.
The LED will turn on for half a second and then remain off for half a second.
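For reference, the configuration-bits source that the "Generate Source Code to Output" step produces looks roughly like this for a PIC16F877A. The values shown (HS oscillator, watchdog off, and so on) are illustrative assumptions, not the only valid settings; always generate the block for your own board rather than copying these:

```c
// Illustrative PIC16F877A configuration bits -- regenerate from
// Window >> Target Memory Views >> Configuration Bits for your board.
#pragma config FOSC = HS    // High-speed crystal oscillator
#pragma config WDTE = OFF   // Watchdog timer disabled
#pragma config PWRTE = ON   // Power-up timer enabled
#pragma config BOREN = ON   // Brown-out reset enabled
#pragma config LVP = OFF    // Low-voltage programming disabled
#pragma config CPD = OFF    // Data EEPROM code protection off
#pragma config WRT = OFF    // Flash write protection off
#pragma config CP = OFF     // Program code protection off
```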
https://microcontrollerslab.com/mplab-xc8-compiler-getting-started/
In the code below, constantset_a.h and constantset_b.h define the same set of global constants but set the constants to different values. The compiled myprogram.exe is to behave differently depending on the global constant values it was compiled with. My question is, is there any way that I can use MSBUILD to set CONSTANTSET_A and build, and then set CONSTANTSET_B and then build again, and for the compiled binaries to have different names? I need this to be done in a single compile pass (i.e. compiling two different binaries with one build /c /z command), since my code will be compiled along with other people's code in the team-wide automated build process. So probably in the "sources" build configuration file, I could do something like this? I know this is wrong, but I just put it there so that you would have a better idea as to what I want to achieve.

Code:
...
TARGETNAME=MyProgramA;MyProgramB
C_DEFINES=-DCONSTANTSET_A -DCONSTANTSET_B
...

Code:
// myprogram.h
#if defined CONSTANTSET_A
#include <constantset_a.h>
#elif defined CONSTANTSET_B
#include <constantset_b.h>
#endif
...
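The question went unanswered in the thread, but one pattern the WDK build utility supports is worth sketching: give each binary its own subdirectory with its own sources file, share the actual source code between them, and list both subdirectories in a dirs file so a single build /c /z pass produces both targets. The directory names and macro values below are illustrative assumptions, not taken from the original post, and details such as TARGETTYPE and UMTYPE depend on your kit:

```make
# dirs file in the project root -- build.exe recurses into both.
DIRS = myprogram_a myprogram_b

# myprogram_a\sources (myprogram_b is the same apart from the
# target name and the define):
TARGETNAME = MyProgramA
TARGETTYPE = PROGRAM
C_DEFINES  = $(C_DEFINES) -DCONSTANTSET_A
SOURCES    = ..\myprogram.c
```

Because each sources file has its own TARGETNAME and C_DEFINES, the same myprogram.c is compiled twice with different constants and the two binaries get different names, without any hand-editing between passes.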
http://cboard.cprogramming.com/cplusplus-programming/116448-compiling-same-file-twice-different-sharpincludes-msbuild-printable-thread.html
#include <iostream>
using namespace std;

int main() {
    int num = 1;
    int number;
    int total = 0;
    while (num <= 5) {
        cin >> number;
        total += number;
        num++;
    }
    cout << total << endl;
    return 0;
}

Ismail Olanrewaju: Yes now you got it. That's why here total = total - no. in the loop. So each time total will change and you will get your final result. It should be like this:
total = 18 - 9;
total = 9 - 1;
total = 8 - 1;
total = 7 - 1;
total = 6 - 1 = 5, which is your final result.

Instead of using a variable and constantly overwriting it, I'd suggest using an array or vector.

In this code total += number means total = total + number, not total = number + number. So in the case of subtraction you can do it like this: total -= number; or total = total - number;

But if I input 5 digits it won't properly subtract the 5 numbers. Let's say I input 9 1 1 1 1. A normal subtraction will give us 5 as the answer, right? But in this case it's going to be "0 - 9 - 1 - 1 - 1 - 1", which will give us a negative 13 (-13). So that isn't the answer that I want.

Yes, you are right, but you need to subtract from a total which will already be available, I think. But what do you want?

I want it to give me the normal subtraction, which is 5.

How 5? Can you explain? You are taking input from the keyboard. When will you decide from which value you have to subtract? You need an already given value from which you will subtract the input values. I think your concept is not clear.

Oh, I understand now. So I have to change my "total" to a fixed amount, so that when I subtract, it will give me 5. I understand now. I have to make my total like 18, so when I minus 9, 1, 1, 1, 1 it will be something like "18 - 9 - 1 - 1 - 1 - 1", which will give us a 5... Right?

If I understand correctly then you don't know the total beforehand.
Read the first value outside the loop: cin >> total; and then do the subtraction loop as indicated by AJ || ANANT || AC || ANANY || AY.

You are performing addition of the given 5 values because here num is 5; on every iteration you scan a number value that is added to total, so the output depends on the values given to number.
https://www.sololearn.com/Discuss/2042215/how-do-i-convert-the-given-program-to-subtraction-so-instead-of-total-no-no-no-i-want-it-to-be-total
Functions

Since it is possible to call any method, instance or static, in Butterfly Container Script, it is possible to extend Butterfly Container Script with simple functions. The methods defined as functions can be either static methods or instance methods.

Static Methods as Functions

The simplest way to add a function is to create a static method in a Java class that performs the logic for the function. Adding and using a static method as a custom function in BCS is done in three easy steps:

- Create a static method that performs the needed function.
- Define a factory which calls that method. Use the desired function name as the factory name.
- Call the factory from other factories.

The default() and max() functions below are examples of static method functions.

Instance Methods as Functions

Sometimes you may need to create an instance of some class and call a method on that instance. This can still be wrapped in a custom function though. Adding and using an instance method as a function in BCS is done in four easy steps:

- Create the class with a method performing the needed function logic.
- Define an instance of this class as a singleton in your script.
- Define a factory that calls the singleton factory and calls the desired method. Give this factory the desired function name.
- Call the function factory from other factories.

Function Examples

The following sections show a few methods defined as functions; the toDate() function is an example of an instance method defined as a function.

default()

Here is an example of a function that takes two input parameters. If the first parameter is not null it returns that parameter. Else it returns the second parameter. This function is used to provide a default value for some input parameter of some factory.
Here is the static method. Note that default is a reserved word in Java and cannot be used as a method name, so the method is named defaultValue here; the BCS factory can still be named default:

package com.myapp.util;

public class Util {

    // 'default' is a reserved word in Java, so the method
    // needs a different name.
    public static Object defaultValue(Object p1, Object p2){
        if(p1 != null) return p1;
        return p2;
    }
}

Here is how you would use it in a script:

beanA = * com.myapp.SomeObject($0);

beanB = * beanA(com.myapp.util.Util.defaultValue($0, "default value"));

The beanA factory creates a SomeObject instance and injects input parameter 0 into its constructor. The beanB factory will call the beanA factory with the parameter returned from the static defaultValue() method. If input parameter 0 passed to the beanB factory is null, then the defaultValue() method will return the default value "default value". It is then the default value that will be passed to the beanA factory and injected into the SomeObject constructor.

If you need to call the defaultValue() method more than once you can simplify the script a bit by mapping the method to a factory, and then call this factory whenever the function is needed. Here is how that looks:

default = * com.myapp.util.Util.defaultValue($0, $1);

beanA = * com.myapp.SomeObject($0);

beanB = * beanA(default($0, "default value"));

The defaultValue() method has been mapped to a factory called "default". This default factory is then called from the beanB factory instead of calling the static defaultValue() method directly. The end result is the same, but the script is a bit easier to read and write. There is no package and class name to disturb you when reading the script, and whenever you need the default function all you need to write is "default(a, b)", instead of "com.myapp.util.Util.defaultValue(a, b)".

max()

This example shows a function that returns the maximum of two values. Defining the static max() method as a factory (and thereby a function) is done like this:

max = * java.lang.Math.max((int) $0, (int) $1);

This little script defines the "max" factory as a call to the static max() method with input parameters 0 and 1 passed to the max() method.
Using the function inside the script is done like this:

max = * java.lang.Math.max((int) $0, (int) $1);

beanA = * com.myapp.SomeObject(max($0, 1));

The beanA factory definition calls the max factory with its input parameter 0 and the value 1. The max factory will call the max() method and return the value that is largest of either the input parameter 0 passed to the beanA factory, or the hard coded value 1.

toDate()

This example uses the java.text.SimpleDateFormat class and defines its instance method parse() as a function called "toDate". First an instance of SimpleDateFormat is defined as a singleton in the script:

toDateTarget = 1 java.text.SimpleDateFormat("yyyy-MM-dd");

Second, the toDate factory is defined as a call to the toDateTarget factory and then a call to the parse() method on the instance returned from the toDateTarget factory. The toDate factory takes a single parameter which is the string to parse into a java.util.Date instance.

toDateTarget = 1 java.text.SimpleDateFormat("yyyy-MM-dd");
toDate = * toDateTarget.parse($0);

Third, you call the toDate factory from other factories like this:

toDateTarget = 1 java.text.SimpleDateFormat("yyyy-MM-dd");
toDate = * toDateTarget.parse($0);
beanA = * com.myapp.SomeObject(toDate("2008-12-24"));
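Stripped of the container, the wiring that the toDateTarget and toDate factories perform corresponds to this plain Java. The class name is illustrative, not part of Butterfly Container:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class ToDateExample {

    // The singleton instance the toDateTarget factory would manage.
    private static final SimpleDateFormat FORMAT =
            new SimpleDateFormat("yyyy-MM-dd");

    // What the toDate factory does: call parse() on the singleton.
    public static Date toDate(String text) {
        try {
            return FORMAT.parse(text);
        } catch (ParseException e) {
            throw new IllegalArgumentException("Cannot parse: " + text, e);
        }
    }

    public static void main(String[] args) {
        // The beanA definition would receive this Date instance.
        System.out.println(toDate("2008-12-24"));
    }
}
```

In other words, an instance-method function is just "create the instance once, then call the method on it each time", which is exactly what the singleton factory plus the function factory express in the script.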
http://tutorials.jenkov.com/butterfly-container/extending-bcs-with-functions.html
As the title says, is it possible to integrate Liberty with WebSeal using TAI++? If so, are there some example configurations available anywhere?

Thanks

There is no official TAI++ implementation to integrate with WebSeal in the Liberty profile. However, you should be able to write your own TAI++ implementation. I am not aware of any public sample TAI++/WebSeal implementation.

--Ajay

We are currently using TAI++ with our WAS ND servers, which uses com.ibm.example.tai.XTai and has a bunch of properties associated with it. I'm assuming that I could use the same class for Liberty. Assuming that is the case, are there some configuration examples anywhere which I could use as a reference?

Thanks

- Patrick_McCabe - 2013-06-05

Information about how to configure TAI in the Liberty profile is in the Infocenter. For the 8.5 version, see

Regarding the TAI that you are using for WAS ND, I would need more details to confirm (for e.g., it should work if all it is using are the public APIs supported in Liberty). Is this TAI provided by IBM? I could not find any reference to it.

--Ajay

Thanks for the link. The TAI that we are using was something provided to us by IBM. It contains 2 classes with the below signatures. I'm not entirely sure what they do, but I'm hoping to find the time to try it out this week.

public class XTai extends TAMETai implements TrustAssociationInterceptor
https://www.ibm.com/developerworks/community/forums/html/topic?id=08526924-3402-4259-b7ae-238c8b5eadb6&ps=25
WIFI SSID

inyourfaceplate: Hi there! Can anyone show me how to access the current WIFI SSID via Pythonista? This would be awesome in the widget. Thanks, John

NEHotspotHelper supportedNetworkInterfaces

It must be possible, because the Launcher app is able to display it in its widget.

I think this requires a special app entitlement.

I think you're right, as I've read "Your program needs the com.apple.wifi.manager-access entitlement to use any of the WiFiManager functions."

This works for me:

from objc_util import *

def get_ssid():
    CNCopyCurrentNetworkInfo = c.CNCopyCurrentNetworkInfo
    CNCopyCurrentNetworkInfo.restype = c_void_p
    CNCopyCurrentNetworkInfo.argtypes = [c_void_p]
    info = ObjCInstance(CNCopyCurrentNetworkInfo(ns('en0')))
    return str(info['SSID'])

print('Current SSID:', get_ssid())

I was almost sure it was possible, since Apple stopped deprecating CNCopyCurrentNetworkInfo in iOS 10... Thanks for @inyourfaceplate

inyourfaceplate: omz's solution works great! Thanks everyone!

vladimir_key: Hi @omz, I found that your solution does not work anymore (iOS 13.3.1). It returns "wifi" instead of the correct name. Do you have ideas?

@vladimir_key omz's solution works for me, but you need iOS 13.3 and your Pythonista app needs permission to access your location (iOS 13 limitation). See Discussion.

Seems that if you receive "wi-fi", you should be on iOS 12.

sammontalvo: For some unknown (for me) reason, I got this error message: "No method found for selector "isKindOfClass:"". Any clue?

Same for me.

Looks to be some kind of internal bug in objc_util, though I don't have an explanation right now.

For me it no longer works since Pythonista 3.3. Could have something to do with the fact that 3.3 is built with the iOS 13 SDK.
https://forum.omz-software.com/topic/3905/wifi-ssid/7
Groovy 3 adds the YamlBuilder class to create YAML output using a Groovy syntax. The YamlBuilder is closely related to JsonBuilder that is described in a previous post. We define a hierarchy using a builder syntax where we can use primitive types, strings, collections and objects. Once we have built our structure we can use the toString() method to get a string representation in YAML format.

In the following example we use YamlBuilder to create YAML:

import groovy.yaml.YamlBuilder

// Sample class and object to transform in YAML.
class User {
    String firstName, lastName, alias, website
}

def userObj = new User(firstName: 'Hubert', lastName: 'Klein Ikkink', alias: 'mrhaki', website: '')

// Create YamlBuilder.
def config = new YamlBuilder()

config {
    application 'Sample App'
    version '1.0.1'
    autoStart true

    // We can nest YAML content.
    database {
        url 'jdbc:db//localhost'
    }

    // We can use varargs arguments that will
    // turn into a list.
    // We could also use a Collection argument.
    services 'ws1', 'ws2'

    // We can even apply a closure to each
    // collection element.
    environments(['dev', 'acc']) { env ->
        name env.toUpperCase()
        active true
    }

    // Objects with their properties can be converted.
    user(userObj)
}

assert config.toString() == '''\
---
application: "Sample App"
version: "1.0.1"
autoStart: true
database:
  url: "jdbc:db//localhost"
services:
- "ws1"
- "ws2"
environments:
- name: "DEV"
  active: true
- name: "ACC"
  active: true
user:
  firstName: "Hubert"
  alias: "mrhaki"
  lastName: "Klein Ikkink"
  website: ""
'''

Written with Groovy 3.0.0.
https://mrhaki.blogspot.com/2020/02/groovy-goodness-create-yaml-with.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+mrhaki+(Messages+from+mrhaki)
Hello, my player is parented to the platform OK if I move its position, but if I want a platform to move in a constant direction by applying velocity, the player just slides off. I've read what I could in other answers, but haven't found exactly what I need. Here's the code I'm trying to use: using UnityEngine; using System.Collections; public class MoveWithScript : MonoBehaviour { void OnCollisionEnter2D (Collision2D other) { if (other.gameObject.tag == "Player") { other.transform.parent = gameObject.transform; other.gameObject.rigidbody2D.velocity = gameObject.rigidbody2D.velocity; } } void OnCollisionExit2D (Collision2D other){ if (other.gameObject.tag == "Player") { other.transform.parent = null; other.gameObject.rigidbody2D.velocity = other.gameObject.rigidbody2D.velocity; } } } Thanks for reading. Your help is greatly appreciated :) Answer by HarshadK · Oct 09, 2014 at 12:00 PM Do not parent a rigidbody to another rigidbody. You will get quirky results. Rather try joint to make your player stick to the platform. Thanks for the advice HarshadK. I've been seeing some of those quirky results as I've been trying different things! I'll look into using joint. If you know of any good examples or tutorials in an example like this, please point me in their direction :) Distance Joint 2D. DistanceJoint2D You can create a joint at runtime wherein you add the DistanceJoint2D component to your platform and set player as the connectedBody. And to break the joint just apply the force greater than the break force of that joint. Thanks HarshadK. 
I'm still pretty new to all this, so I'm doing my homework on how to do what you stated above :D I tried to start with this, but it's also giving me pretty quirky results:

```csharp
void OnCollisionStay2D (Collision2D collision) {
    if (collision.gameObject.tag == "Player") {
        var disJoint = gameObject.AddComponent<DistanceJoint2D>();
        disJoint.connectedBody = collision.gameObject.rigidbody2D;
    }
}
```

Try to use the Anchor and Connected Anchor properties of your joint so the two bodies are anchored properly. This will allow you to join your two bodies specific to their current position with respect to each other.

OK, thanks. I'll look into that next!

Answer by Kiwasi · Oct 10, 2014 at 12:20 AM

You could make your code work as is just by changing to OnCollisionStay. Ultimately this will lock your player's movement to match the platform's movement, which is typically undesirable.

A better method would be to compare the velocity of the player and the platform, and add force to the player proportional to the difference in velocities.

You could get even more realistic performance by getting your player's mass right (a human typically weighs between 60-80 kg). With the right physics material this should provide enough force from gravity to hold the player to your platform at low accelerations. No scripts required.

Thanks BoredMormon. I'll look into using those options! I did try comparing the velocities of the player and platform, then equating them. No luck there, though I'm missing why it doesn't work:

```csharp
void OnCollisionEnter2D (Collision2D other) {
    if (other.gameObject.tag == "Player") {
        if (other.gameObject.rigidbody2D.velocity != this.gameObject.rigidbody2D.velocity) {
            other.gameObject.rigidbody2D.velocity = this.gameObject.rigidbody2D.velocity;
        }
    }
}
```

You missed the first line of my answer. OnCollisionEnter is only called once. So the player's velocity matches the platform for one frame. Then physics happens and the player slides off.
OnCollisionStay will be called every frame the player and platform are colliding. Thus the physics will be overridden. You totally missed the point about comparing velocities. Try this:

```csharp
void OnCollisionStay2D (Collision2D collision) {
    if (collision.gameObject.tag == "Player") {
        collision.rigidbody2D.AddForce(1000 * (rigidbody2D.velocity - collision.rigidbody2D.velocity));
    }
}
```

Sorry, I misread your answer. Thanks for the reply. I was connecting the undesirable part to using OnCollisionStay2D. I tried the code you provided (and tried adjusting it), but something still isn't working. The player jitters and is forced left or right off the platform.

Does your platform need to be a rigidbody? If you can move the platform in the same way without having a rigidbody attached to it, then parenting your player to it while in contact with it should work perfectly.

Thanks MrSoad. I'm actually trying to use that method now and move the transform rather than move the rigidbody with velocity.
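To make the joint suggestion from the first answer concrete, here is a minimal sketch that combines it with the anchor advice from the comments. It targets the Unity 4-era 2D API used in this thread, is untested, and anchoring at the contact point is an assumption you would tune for your game:

```csharp
using UnityEngine;

public class StickPlayerToPlatform : MonoBehaviour {

    void OnCollisionEnter2D (Collision2D collision) {
        if (collision.gameObject.tag == "Player") {
            DistanceJoint2D joint = gameObject.AddComponent<DistanceJoint2D>();
            joint.connectedBody = collision.gameObject.rigidbody2D;

            // Anchor both ends at the contact point so the bodies keep
            // their current relative position instead of snapping together.
            Vector2 contact = collision.contacts[0].point;
            joint.anchor = transform.InverseTransformPoint(contact);
            joint.connectedAnchor = collision.transform.InverseTransformPoint(contact);
        }
    }

    void OnCollisionExit2D (Collision2D collision) {
        if (collision.gameObject.tag == "Player") {
            Destroy(GetComponent<DistanceJoint2D>());
        }
    }
}
```

Note that the joint is created in OnCollisionEnter2D rather than OnCollisionStay2D as in the attempt above: OnCollisionStay2D runs every frame, so AddComponent there would stack a new joint each frame, which is one likely source of the quirky results.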
https://answers.unity.com/questions/805683/how-can-i-get-my-player-to-move-with-the-platform.html?sort=oldest
Hypermedia APIs in Django: Leveraging Class Based Views

It seems that I keep rewriting code that generates APIs from Django. I think I'm getting closer to actually getting it right, though :)

I'm rather keen on Collection+JSON at the moment, and spent some time over Easter writing an almost complete Collection+JSON client, using KnockoutJS. It loads up a root API url, and then allows navigation around the API using links. While working on this, it occurred to me that Collection+JSON really encodes the same information as a web page:

- every <link> or <a href=...></a> element is either in links or queries.
- form-based queries map nicely to queries elements that have a data attribute.
- items encapsulates the actual data that should be presented.
- template contains data that can be used to render a form for creating/updating an object.

Ideally, what feels best from my perspective is to have a pure HTML representation of the API, which can be rendered by browsers with JS disabled, and then all of the same urls could also be fetched as Collection+JSON. Then, you are sharing the code, right up to the point where the output is generated.

To handle this, I've come up with a protocol for developing Django class based views that can be represented as Collection+JSON or plain old HTML. Basically, your view needs to be able to provide links, queries and items. template comes from a form object (called form), and by default items is the queryset attribute.

leveraging views

I subscribe to the idea that the less code that is written the better, and I believe that the API wrapper should (a) have minimal code itself, and (b) allow the end developer to write as little code as possible. Django is a great framework; we should leverage as much as is possible of that well written (and well tested) package.

The part of a hypermedia API that is sometimes ignored by web developers is media type selection.
I believe this is the domain of the "Accept" and "Content-Type" headers, not anything to do with the URL. Thus, I have a mixin that allows for selecting the output format based on the Accept header. It uses the inbuilt render_to_response method that a Django View class has, and handles choosing how to render the response. As it should.

The other trick is how to get the links, queries, items and template into the context. For this, we can use get_context_data. We can call self.get_FOO(**kwargs) for FOO in each of those items. It is then up to the View class to handle those methods. By default, a Model-based Resource is likely to have a form class, and a model class or a queryset. These can be used to get the items, and in the case of the form, the template. Even in the instance of the queryset (or model), we use the form class to turn the objects into something that can be rendered.

Finally, so it's super-easy to use the same pattern as with Django's class based views (generic.CreateView, for instance), I have a couple of classes: ListResource and DetailResource, which map directly onto CreateView and UpdateView. In the simplest case, you can just use:

```python
urlpatterns = patterns('',
    url(r'^foo/$', ListResource.as_view(model=Foo)),
    url(r'^foo/(?P<pk>\d+)/$', DetailResource.as_view(model=Foo)),
)
```

There is also a Resource, which just combines the resource-level bits with generic.TemplateView. You can use ResourceMixin with any other class-based view, but make sure it appears earlier than the Django view class, to make sure we get the correct method resolution order.

links

There is still the matter of the links attribute. Knowing what to put into this can be a bit tricky. I've come to realise that this should contain a list of the valid states that can be accessed when you are in a given state.
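Stepping back to the mixin for a moment: the get_FOO(**kwargs) dispatch described above can be sketched roughly like this. This is a standalone toy, not the actual django-hypermedia code; in real use the base class would be a Django view such as generic.TemplateView:

```python
class ContextBase:
    """Stand-in for a Django class based view's get_context_data."""
    def get_context_data(self, **kwargs):
        return dict(kwargs)


class ResourceMixin(ContextBase):
    """Collect links/queries/items/template into the template context.

    For each part, call self.get_<part>(**kwargs) if the concrete
    view defines it; parts without a getter are simply skipped.
    """
    resource_parts = ("links", "queries", "items", "template")

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        for part in self.resource_parts:
            getter = getattr(self, "get_%s" % part, None)
            if getter is not None:
                context[part] = getter(**kwargs)
        return context


class TaskList(ResourceMixin):
    queryset = ["walk dog", "write post"]

    def get_items(self, **kwargs):
        return list(self.queryset)

    def get_links(self, **kwargs):
        return [{"rel": "self", "href": "/tasks/", "prompt": "Task List"}]
```

A view that defines only get_items and get_links ends up with just those keys in its context, which is what lets the same view render either as HTML or as Collection+JSON.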
You will want to use Django's reverse function to populate the href attribute:

```python
class Root(Resource):
    template_name = 'base.html'

    def get_links(self):
        return [
            {"rel": "root", "href": reverse('root'), "prompt": "Home"},
            {"rel": "user", "href": reverse('user'), "prompt": "You"},
            {"rel": "links", "href": reverse('tasks:list'), "prompt": "Task List"},
        ]
```

Note that you actually need to provide the view names (and namespaces, if appropriate) to reverse. Similarly, for any queries, you would want to use reverse, to make it easier to change the URL later. Also, Django will complain if you have not installed something you reference, meaning your links and queries should never 404.

I'm still toying with the feature of having an automatic list of links that should be used for every view. Obviously, this should only contain a list of states that can be moved to from any state within the system.

For rendering HTML, you may need to change your templates: actually, you should change your templates. Instead of using:

```html
<a href="{% url 'foo' %}">Foo Link</a>
```

You would reference that item in your links array:

```html
<a href="{{ links.foo.href }}">{{ links.foo.prompt }}</a>
```

I have used a little bit of magic here too: in order to be able to access links items according to their rel attribute, when rendering HTML, we use a sub-class of list that allows for __getattr__ to look through the items and find the first one that matches by rel type.

enter django-hypermedia

As you may surmise from the above text: I've already written a big chunk of this. It's not complete (see below), but you can see where it is at now: django-hypermedia. There is a demo/test project included that has some functionality. It shows how you still need to do things "the django way", and then you get the nice hypermedia stuff automatically.

what is still to come?

I've never really been happy with Collection+JSON's error object, so I haven't started handling that yet.
I want to be able to reference where the error lies, similar to how Django's forms can display their own errors.

I want to flesh out the demo/test project. It has some nice bits already, but I want to have it so that it also uses my nice KnockoutJS client. Pretty helps. :)
http://schinckel.net/2012/
This is a step-by-step guide on setting up and running a simple NFC demo app using the PN7150 NFC Controller SBC Kit for Arduino (OM5578/PN7150ARD) with a UDOO NEO board, which uses an i.MX6SX and is Arduino pin compatible.

1. Requirements

- UDOO NEO board. This document refers to the UDOO NEO Full board, but the steps remain the same for all UDOO NEO boards as long as the appropriate device tree is used for each. For more information on this board please go to the official site ()
- PN7150 NFC Controller SBC Kit for Arduino (OM5578/PN7150ARD), which is shown in the image below. Alternatively you may use the PN7120 NFC Controller SBC Kit for Arduino (OM5577/PN7120ARD).

PN7150 NFC Controller SBC Kit for Arduino mounted over the UDOO NEO

You may find more details about the OM5578 board in the user manual (Doc ID UM10935), which is available on the following link. You may also find additional documentation and information on this and other PN7150 demoboards on the link below:

Demoboards for PN7150|NXP

You may find more details about the OM5577 board in the user manual (Doc ID UM10878), which is available on the following link. For additional resources for the OM5577 board please refer to the link below.

PN7120 NFC Controller SBC Kit|NXP

- Host computer with Ubuntu 12.04 or later (14.04 is preferred).
- L3.14.28 BSP Release for the i.MX6SX installed on the host. You may find the documentation on how to download and set up this BSP on the following link.

2. Setting up NXP BSP Release and Toolchain

Follow the instructions in the Yocto User's Guide included in the L3.14.28 BSP Release to set up and build an image for the i.MX6SX (MACHINE=imx6sxsabresd). We'll be using the fsl-image-gui image with the frame buffer (fb) backend. Other images may be used, but please keep in mind that the core-image-minimal image does not include the libstdc++.so.6 library required by the NFC demo app.
It is also necessary to build and install the toolchain for cross-compiling the kernel and bootloader. This can be done with the following command:

```shell
$ bitbake meta-toolchain
```

Once created, you may install it by running the following script:

```shell
<BSP_DIR>/<BUILD_DIR>/tmp/deploy/sdk/poky-glibc-x86_64-meta-toolchain-cortexa9hf-vfp-neon-toolchain-1.7.sh
```

For more details on how to extract the toolchain please refer to the following Yocto training task:

Task #7 - Create the toolchain

3. Editing the Device Tree

In previous versions (3.0.35 and earlier) the Linux kernel used to contain the entire description of the hardware, so the bootloader just had to load the kernel image and execute it. In current kernel versions the hardware description is located in the device tree blob (DTB), which allows the same kernel to be used on different hardware by changing only the device tree. In this scenario the bootloader loads the kernel image and also the device tree (DTB) binary. For more details on how to add a new device tree please look at the following community document that covers adding a new device tree:

For this document we will change the current UDOO NEO device tree, as we will only be adding support for the PN7150 NFC Controller board.

3.1 Copying the original UDOO NEO device tree files

Create a development folder in your home directory:

```shell
mkdir udooneo-dev
```

Download the kernel source into this folder. This also includes the device tree files:

```shell
cd udooneo-dev
git clone
```

The device tree files will be available at udooneo-dev/linux_kernel/arch/arm/boot/dts

3.2 Editing the UDOO NEO device tree files

We will be using the UDOO NEO Full board, so we will be using imx6sx-udoo-neo-full-hdmi-m4.dts. If we look into this file using a text editor, we will see that it includes several include definition files which are also located in the same directory.
```
#include "imx6sx-udoo-neo.dtsi"
#include "imx6sx-udoo-neo-full.dtsi"
#include "imx6sx-udoo-neo-m4.dtsi"
#include "imx6sx-udoo-neo-hdmi.dtsi"
#include "imx6sx-udoo-neo-externalpins.dtsi"
```

We will need to copy these to the BSP Release dts directory (you may alternatively build the device tree from that directory, but we will cover how to add device trees to the BSP Release in this document):

/<BSP_DIR>/<BUILD_DIR>/tmp/work/imx6sxsabresd-poky-linux-gnueabi/linux-imx/3.14.28-r0/git/arch/arm/boot/dts/

We will need to add the new dtb file to be compiled in the Makefile from the BSP Release. This needs to be placed inside the precompiler directive $(CONFIG_ARCH_MXC).

There are some additions that must be made to the device tree in order to configure the pins used by the NFC Controller board, which uses the Arduino pinout. These can be done in imx6sx-udoo-neo.dtsi so they are picked up by any UDOO NEO device tree we compile.

The I2C pins used are those of the I2C2 bus. The configuration for these pins should already be implemented in the imx6sx-udoo-neo.dtsi file. If not, please add these lines inside the &iomuxc section:

```
&iomuxc {
    pinctrl_i2c2_1: i2c2grp-1 {
        fsl,pins = <
            MX6SX_PAD_GPIO1_IO03__I2C2_SDA 0x4001b8b1
            MX6SX_PAD_GPIO1_IO02__I2C2_SCL 0x4001b8b1
        >;
    };
};
```

Then we need to add the pn547 entry into the &i2c2 section for the enable pin, interrupt pin, I2C address and bus speed of the PN7150. Add the pn547 node at the end of the &i2c2 section as shown:

```
&i2c2 {
    pn547: pn547@28 {
        compatible = "nxp,pn547";
        reg = <0x28>;
        clock-frequency = <400000>;
        interrupt-gpios = <&gpio4 9 0>;
        enable-gpios = <&gpio5 21 0>;
    };
};
```

Important note: prior to adding either of these configurations, it is critical that you ensure these pins and I2C addresses are not used anywhere else in this and the other *udoo*.dtsi files.

You may find the UDOO NEO schematics on the UDOO website (link to the schematics below) to see the reason behind these settings.
- IR signal – J4 connector – Arduino 7 pin – i.MX6SX B13 pin
- VEN signal – J6 connector – Arduino 8 pin – i.MX6SX W5 pin
- SDA signal – J6 connector – Arduino SDA pin – i.MX6SX D20 pin
- SCL signal – J6 connector – Arduino SCL pin – i.MX6SX C20 pin

If you want to review in more detail how to create a simple device tree from scratch, please check the following very complete and easy-to-follow community document:

Basic Device Tree for the Udoo Board

To compile the device tree run the following commands:

```shell
source /opt/poky/1.7/environment-setup-cortexa9hf-vfp-neon-poky-linux-gnueabi
cd /<BSP_DIR>/<BUILD_DIR>/tmp/work/imx6sxsabresd-poky-linux-gnueabi/linux-imx/3.14.28-r0/git
make ARCH=arm dtbs
```

This will produce the imx6sx-udoo-neo-full-hdmi-m4.dtb that will be used.

4. Compiling U-Boot

We will be using the UDOO U-Boot for the UDOO NEO Full board. The following steps describe how to download the source code and compile it using our toolchain.

Downloading the source code:

```shell
mkdir UDOOneo-dev
cd UDOOneo-dev
git clone -b 2015.04.imx-neo
cd uboot-imx
```

Compiling U-Boot:

```shell
source /opt/poky/1.7/environment-setup-cortexa9hf-vfp-neon-poky-linux-gnueabi
ARCH=arm CROSS_COMPILE=arm-poky-linux-gnueabi- make udoo_neo_config
ARCH=arm CROSS_COMPILE=arm-poky-linux-gnueabi- make
```

This will generate an SPL file with the DCD (Device Configuration Data) table, and the u-boot.img file.

Note: By default this U-Boot configuration detects the UDOO NEO board, and in our case it would look for imx6sx-udoo-neo-full-hdmi-m4.dtb. You may need to use a different device tree depending on your board.

5. Flashing the SD Card

5.1 Using the .sdcard file to load the BSP Release image

The easiest way to load the root file system from our image is using the .sdcard file that is created after running bitbake. This image will be located at the following path:

/<BSP_DIR>/<BUILD_DIR>/tmp/deploy/images/imx6sxsabresd

This will also load the BSP Release U-Boot and device tree files, but we will then exchange them for our own.
To do this, use the following command, where sdx is your SD card:

```shell
$ sudo dd if=<image name>.sdcard of=/dev/sdx bs=1M && sync
```

Alternatively, we can manually create the two partitions needed. For more information on this please refer to the Yocto User's Guide.

5.2 Writing U-Boot

To flash U-Boot you need to flash both the SPL file and the u-boot.img file using the following commands, assuming that your SD card is /dev/sdx:

```shell
dd if=SPL of=/dev/sdx bs=1K seek=1
dd if=u-boot.img of=/dev/sdx bs=1K seek=69
```

5.3 Copying the device tree blob

Copy the imx6sx-udoo-neo-full-hdmi-m4.dtb device tree to a folder called dts on the FAT partition.

6. Adding the Kernel Driver

Download the driver source from the git repository, starting from the Linux source directory:

```shell
cd /<BSP_DIR>/<BUILD_DIR>/tmp/work/imx6sxsabresd-poky-linux-gnueabi/linux-imx/3.14.28-r0/git/drivers/misc
$ git clone
```

Add the line below to the Makefile of the current directory:

```
obj-y += nxp-pn5xx/
```

Include the driver config by adding the line below to the heading configuration file (drivers/misc/Kconfig):

```
source "drivers/misc/nxp-pn5xx/Kconfig"
```

Export the environment variables:

```shell
cd /<BSP_DIR>/<BUILD_DIR>/tmp/work/imx6sxsabresd-poky-linux-gnueabi/linux-imx/3.14.28-r0/git/
$ source /opt/poky/1.7/environment-setup-cortexa9hf-vfp-neon-poky-linux-gnueabi
$ export ARCH=arm
$ export CROSS_COMPILE=$TARGET_PREFIX
$ make imx_v7_defconfig
make menuconfig
```

Inside menuconfig include the driver as a module (<M>), which is on the path:

Device Drivers ---> Misc devices ---> <M> NXP PN5XX based driver

Save the changes and exit, and then compile the modules:

```shell
$ make modules
```

We will then install the modules to our image. Insert the SD card with the loaded image and mount it to access it from the command prompt:

```shell
sudo mount /dev/sdx ~/mountpoint/
```

Where sdx is your SD card. Then use the following command to install the modules.
```shell
sudo make ARCH=arm INSTALL_MOD_PATH=/home/user/mountpoint modules_install firmware_install
```

Before unmounting our SD card we will install the NFC library.

7. Installing the NFC Library

Install the necessary libraries on the host by running the following commands:

```shell
sudo apt-get update
sudo apt-get install automake
sudo apt-get install autoconf
sudo apt-get install libtool
```

Note: In case you are using Ubuntu 12.04, the following commands will allow autoconf 2.69 to be installed, which is the minimum version required by the NFC library:

```shell
sudo add-apt-repository ppa:dns/gnu -y
sudo apt-get update -q
sudo apt-get install --only-upgrade autoconf
```

Enter our directory and install the Linux libnfc-nci stack:

```shell
cd ~/UDOOneo-dev
git clone
```

Generate the configuration script by executing the bootstrap bash script:

```shell
cd ~/UDOOneo-dev/linux_libnfc-nci
./bootstrap
```

Configure the Makefile. We are using the default toolchain sysroots path.

To configure for the PN7150 please use the following settings:

```shell
./configure --enable-pn7150 --host=arm-none-linux --prefix=/opt/poky/1.7/sysroots/x86_64-pokysdk-linux/usr --sysconfdir=/home/user/mountpoint/etc
```

To configure for the PN7120 please use the following settings:

```shell
./configure --enable-pn7120 --host=arm-none-linux --prefix=/opt/poky/1.7/sysroots/x86_64-pokysdk-linux/usr --sysconfdir=/home/user/mountpoint/etc
```

We are ready to execute the make and install the stack:

```shell
make
sudo make install
```

After a successful build the libraries and a demo application are built in the .libs directory. Copy the libraries to the /usr/lib directory of the target, and nfcDemoApp to the target's /usr/sbin:

```shell
cd .libs
sudo cp * /home/user/mountpoint/usr/lib
sudo cp nfcDemoApp /home/user/mountpoint/usr/sbin
cd ~/UDOOneo-dev/linux_libnfc-nci/conf/PN7150
sudo cp * /home/user/mountpoint/etc
```

Now we can unmount our SD card:

```shell
sudo umount /home/user/mountpoint
```

8. Testing the NFC Reader

Insert the micro SD card into the slot of the UDOO NEO board and install the PN7150 NFC Controller board on top of the UDOO NEO board. We will be using the terminal console in order to access the board. You may use the official USB/serial debug module for NEO or a similar adapter. For more information on setting up the serial debug console on the UDOO NEO board please refer to the link below.

Once it has booted, install the .ko file:

```shell
insmod /lib/modules/3.14.28+g91cf351/kernel/drivers/misc/nxp-pn5xx/pn5xx_i2c.ko
```

Then run the nfcDemoApp. We'll test it in poll mode, where it looks for available tags and reads them:

```shell
nfcDemoApp poll
```

You should get a console output as shown below when placing an NFC tag next to the NFC reader.

Appendix. References and useful documents

Demoboards for PN7150|NXP
PN7120 NFC Controller SBC Kit|NXP
NFC PN7120 on the i.MX6Q | NXP Community
Basic Device Tree for the Udoo Board
https://community.nxp.com/docs/DOC-331906
Pretty-print an entire Pandas Series / DataFrame

You can also use option_context, with one or more options:

```python
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
    # more options can be specified also
    print(df)
```

This will automatically return the options to their previous values.

If you are working on jupyter-notebook, using display(df) instead of print(df) will use jupyter rich display logic (like so).

Sure, if this comes up a lot, make a function like this one. You can even configure it to load every time you start IPython:

```python
def print_full(x):
    pd.set_option('display.max_rows', len(x))
    print(x)
    pd.reset_option('display.max_rows')
```

As for coloring, getting too elaborate with colors sounds counterproductive to me, but I agree something like bootstrap's .table-striped would be nice. You could always create an issue to suggest this feature.
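A quick way to see the effect is to compare the default repr of a long frame with the one produced inside option_context (the 100-row frame here is just a synthetic example):

```python
import pandas as pd

# A frame longer than the default display.max_rows, so the default
# repr truncates it and appends a dimensions summary line.
df = pd.DataFrame({"n": range(100)})

default_view = repr(df)

# Inside the context manager every row is rendered; the option is
# restored automatically on exit.
with pd.option_context('display.max_rows', None):
    full_view = repr(df)
```

default_view is truncated (it ends with a "[100 rows x 1 columns]" summary), while full_view contains all one hundred rows.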
https://codehunter.cc/a/python/pretty-print-an-entire-pandas-series-dataframe
Depth-First-Search (DFS) — Competitive Programming with Time and Space Complexity, Question — 9 (Level — Easy)

Welcome to the competitive series of Code Wrestling. Today we are going to solve an easy problem: depth-first search on a tree. We will find the time and space complexity of our solution and will also guide you through the process of finding an optimized solution.

Visit our YouTube channel for more interesting stuff: Code Wrestling.

Question

You're given a Node class that has a name and an array of optional children nodes. When put together, nodes form an acyclic tree-like structure. Implement the depthFirstSearch method on the Node class, which takes in an empty array, traverses the tree using the depth-first search approach (specifically navigating the tree from left to right), stores all of the nodes' names in the input array, and returns it.

Sample Input:

Sample Output: ["A", "B", "E", "F", "I", "J", "C", "D", "G", "K", "H"]

Node class:

```csharp
public class Node
{
    public string name;
    public List<Node> children = new List<Node>();

    public Node(string name)
    {
        this.name = name;
    }

    public Node AddChild(string name)
    {
        Node child = new Node(name);
        children.Add(child);
        return this;
    }
}
```

STOP HERE. And read the question again. Try to solve the question on your own and then proceed further to see our solution.

Solution

The problem can be easily solved via recursion: visit a node, record its name, and then recurse into each of its children in order, so that a whole subtree is emitted before moving on to the next sibling. If you look at the Node class, each node has a name and a list of children, so the approach is to store the name and traverse each child until you reach the deepest node. Thus, the below code snippet will help you solve the problem:

```csharp
private List<string> DepthFirstSearch(Node node, List<string> dfs)
{
    dfs.Add(node.name);
    foreach (var child in node.children)
    {
        DepthFirstSearch(child, dfs);
    }
    return dfs;
}
```

The time complexity is O(v + e), where v is the number of vertices and e is the number of edges.
This is because we store each vertex (each node ends up in the result array), which costs O(v), and for each vertex we run a loop whose iteration count depends on the number of edges that vertex has, which adds up to O(e) across all vertices; hence the total is O(v + e). The space complexity is O(v), as we store every vertex's name in the final result.

Thank you so much for reading till the end. See you in the next article. Till then… HAPPY LEARNING!!

Do visit our YouTube channel Code Wrestling for more interesting stuff.
https://codewrestling.medium.com/depth-first-search-dfs-competitive-programming-with-time-and-space-complexity-44a3e1a0c753?responsesOpen=true&source=user_profile---------3----------------------------
SAP connector 3.0 .NET: set value on table structure

I'm trying to get data from SAP via SAP Connector 3.0 in an MVC3 application. There are no problems with the connection. My problem is that when I try to set values on a structure from a table, it says "TABLE [STRUCTURE ZHRS_ABSENCES]: cannot set value (array storing element values is null)". My code is the following:

```csharp
// create function
IRfcFunction function = conex.Repository
    .CreateFunction("Z_HR_PORTAL_GET_EMPLOYEE_DATA");

// get table from function
IRfcTable absenceHoli = function.GetTable("P_ABSENCES");

// setting value to structure
absenceHoli.SetValue(0, "0000483"); // this is where the error occurs
```

I'm not sure about the connector you're using, but there's a similar common misunderstanding when using JCo. A table parameter can hold multiple lines, so you'll usually have to append a line to the table first. This will probably return some kind of structure that you'll be able to fill. Also check this answer.

I think you just need to Append a new row before trying to call SetValue, e.g.

```csharp
absenceHoli.Append();
absenceHoli.SetValue("ColumnName", "0000483");
// Add further SetValue statements for further columns
```

You can get the column names by putting a breakpoint after you've got the table structure and examining it, which is probably nicer than just specifying column indexes.

In my case I needed to use Insert:

```csharp
absenceHoli.Insert();
absenceHoli.SetValue(..., ...);
```
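Putting the answers together, here is a sketch of the corrected call sequence. The destination name "DEV" and the field name "PERNR" are illustrative assumptions, not part of the question; the RFC and table names come from the question, and the key point is that Append() moves the table's row cursor so SetValue targets the new row:

```csharp
// Sketch only - assumes an RfcDestination named "DEV" has already been
// registered via an IDestinationConfiguration.
RfcDestination dest = RfcDestinationManager.GetDestination("DEV");

IRfcFunction function = dest.Repository
    .CreateFunction("Z_HR_PORTAL_GET_EMPLOYEE_DATA");

IRfcTable absences = function.GetTable("P_ABSENCES");

// Append a row first; the table's cursor now points at it,
// so SetValue no longer hits an empty row array.
absences.Append();
absences.SetValue("PERNR", "0000483"); // field name is an assumption

function.Invoke(dest);

// Rows can be read back by iterating the table's structures.
foreach (IRfcStructure row in absences)
{
    // string value = row.GetString("SOME_FIELD");
}
```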
http://thetopsites.net/article/59406284.shtml
Currently the ET contains the following AEI thorns:

- AEIThorns/ADMMass
- AEIThorns/AEILocalInterp
- AEIThorns/PunctureTracker
- AEIThorns/SystemStatistics
- AEIThorns/Trigger

Since the AEI would like to shut down the svn server (or at least change it to read-only mode), it would be good to move these thorns into:

EinsteinAnalysis::
- ADMMass
- PunctureTracker

CactusUtils::
- SystemStatistics
- Trigger (needs change of license to LGPL)

Numerical::
- AEILocalInterp (license must stay GPL)

For the ET thorns Barry kindly did the conversion using a set of scripts of his. If you were to make the scripts accessible (or give the whole thing a try yourself), then this could be started as soon as the authors agree to the license change where required (Ian and Frank are the main authors of SystemStatistics and Trigger; Erik, Barry and I contributed, and I am fine with a license change).

Keywords: ADMMass, AEILocalInterp, PunctureTracker, SystemStatistics, Trigger

I will reuse the script I had for LSUThorns to merge these AEI thorns into their respective arrangements. In terms of licenses, I'm fine with any code I have contributed being switched to LGPL.

Since this involves git merges, it would really be great if this would also work across branches (so that a switch to an older ET release also contains the relevant thorns, as in their svn release branches).

The branches of the merged thorns are preserved by the script, inside a thorn-specific namespace. For example, there would be an ADMMass/ET_2014_11 branch, and similarly for all other branches and tags. Going beyond this to implement your suggestion is a little more tricky. The problem is that these thorns didn't exist inside those arrangements at the time that the previous branches were created. We could artificially merge the branches to make it appear as if they had been in that arrangement all along, but I don't know whether that is a good idea or not.
This step could also be deferred to some point in the future, after the initial merge has been completed. I am probably a little naive, but what's the difference between the 'master' and any other branch? Would we need to merge the master branches as well, and could similarly merge other branches? Surely it would change the state of that branch, but only by adding a thorn (which we would be fine with). The difference is that with the master branch the point at which the merge happens really does correspond to the point in time when the thorn was moved into the arrangement, whereas on the release branches the thorn existed in some other arrangement at the time of the release. There's no technical impediment to merging the thorn release branches into the arrangement release branch, it's more of a conceptual thing. For example, it could lead to a situation where the same thorn appears twice: once in the old arrangement, once in the new arrangement. I believe what Frank is suggesting is to make a combined ET_2015_05 (for example) branch by merging all the THORN/ET_2015_05 branches in a commit. Eg. which actually requires that all those branches share a single ancestor. Basically we need to be able to have something like this in a thorn list (ie. a single branch that contains the files from all thorns). Yes, as I said, actually doing the merge is straightforward. It's not even necessary that they share a single ancestor. The problem is you would then have a thornlist which is not consistent with the repository layout. For example, you would have two directories: arrangements/AEIThorns/PunctureTracker and arrangements/EinsteinAnalysis/PunctureTracker. We could work around this by updating the thornlists in all of the release branches. Replying to [comment:6 rhaas]: Are you suggesting that we have another arrangement repository for AEIThorns (in addition to moving the thorns to other arrangements)? Ah, I was not quite thinking straight. 
I had written my comment as if we were actually transitioning the AEIThorns arrangement, but instead we are spreading the individual thorns around into other repositories. In this case I think that the "new" thorns from AEIThorns only need to appear in their target repositories in branches corresponding to releases starting from the upcoming one. It will not be possible anyway to update a thornlist by just bumping its ET_BRANCH define at the beginning, since that updated thornlist would try to check out a branch of the svn AEIThorns that does not exist. One could consider having the new AEIThorns thorns appear in branches of the target repo that correspond to old releases. If they did, then there would indeed be the situation that an old thornlist would ask for (e.g.) ET_2015_05 in both the svn AEIThorns and EinsteinAnalysis and would show for example two ADMMass thorns (though only one would appear in a CHECKOUT statement and be used by Cactus). A "new" thornlist could then be used to transparently go back to an older release, albeit with ADMMass staying in EinsteinAnalysis and not showing up in AEIThorns. This can be useful for regression analysis, and we did have user reports about thorns moving between arrangements causing issues. It would be rather coarse-grained regression stepping though, since one can only step in increments of a release, so the usefulness is not so great. Even with changing existing branches though, an old thornlist will not be updatable by just changing the ET_BRANCH define; rather, a new thornlist would be downgradable in this manner. Replying to [comment:8 rhaas]: Assuming the svn version were to be used (otherwise, why get it), then the 'other' version of the thorn would be only somewhere inside 'repos', which we don't expect users to traverse anyway. 'repos' is just an unfortunate workaround due to the inability to do partial checkouts. Yes. What I was thinking of was the possibility of the svn server going away completely.
If that would happen, we would need to change our old release thornlists to point to the new thorn-wise branches in git, instead of a cleaner repository-wide release branch. Of course, then we could do the actual merge as well. Replying to [comment:9 knarf]: Good point, I guess it wouldn't really be all that much harm since they won't be in arrangements. If we are thinking about the svn server going away, then I agree a repository-wide release branch would be more convenient. Either way, the release thornlists will need to be updated, but I guess that isn't really much of a problem. I have updated the merged repositories with the merged release branches. They can be found at: OK to push these to the official repositories? I have read through the discussion twice (sorry for not paying attention earlier; I was busy with other things). I think I have got the gist of the arguments presented, but I may still be misunderstanding. Sorry! I'm not sure that merging the release branches is the right approach. It may be better to have a separate AEIThorns git repository (or one per thorn) to handle the loss of svn.aei.mpg.de, and separate this issue from the moving of the thorns. I need to think about this a bit more. A single git repository that keeps a faithful record of the history of the former svn repository seems like a good idea, at least for archaeological reasons. This will allow us also to change our mind later. Erik: are you proposing that we keep the AEIThorns around twice? Once in a repo that serves only to record old history and once in their new location? Since in svn it looks like one-thorn-per-repo anyway, I think that the history is already captured in the git repos, ie "git log ADMMass" inside of EinsteinAnalysis shows all history. 
We may want to revive an AEIThorns repo in git for new (or less known public) AEIThorns that have not yet become very widely used, and it would be fine to start off that repo as if the former public AEIThorns had lived in there, I think. The first new commit would likely be "remove XXX since it has moved to YYY" though. So far, as far as we can control, the AEI git repo may become read-only, but we are still lobbying to keep it up and running if for no other reason than that the URLs were mentioned in published papers. I think you meant "the AEI SVN repo" in your last paragraph. Barry and I discussed this at length in a call yesterday. We think it is best to separate the two issues: the SVN repository going away, and the thorns moving to new arrangements. For various reasons, we decided to:

* Create git copies of all the AEIThorns repositories (they are 1 repository per thorn) which can then be pointed to by the thornlist in the release branches
* Merge the current state of those repositories into the relevant ET repositories' master branches. (Technically this will be done in reverse, but that's not important for this discussion.)

This leads to some duplication, but overall we decided that this was the best strategy, to minimise confusion and make possible everything that is needed. Barry will implement this, and assuming no major objections, will commit the changes. No objections from me. In the end, these are all just little details, and as long as history is preserved in an accessible place, I am for moving forward. Something else I learned in this discussion is that apparently "the AEI svn server going away" is not just a mere possible scenario, but more imminent; good to know. As Ian says, we discussed this on Skype yesterday and worked out a solution which should hopefully satisfy all of our desires/requirements. In short, the thorns have been moved and everything is now working the way we want it.
For those who are interested, a more detailed explanation follows. What was done:

1. Merge the master branches of the to-be-moved thorns into the master branches of the arrangement repository they are moving to.
2. Push other branches and tags from the moved thorns' repository into the arrangement repository under their own namespace (so we have, e.g., PunctureTracker/ET_2015_05).
3. Do not merge the release branches of the moved thorns into the arrangement release branches. This would have too many weird side-effects (multiple copies of a thorn in the Cactus tree, not reflecting any true history, having the merged thorns appear on the tips of branches but not being present in any intervening commits between branches, etc.).

This is all we need going forward and will work for all future releases. I have gone ahead and pushed these changes to the official repositories. This strategy also preserves the full history of the thorns in a faithful way and makes it easy to go back to older versions. For example, if one wants to go back to a previous version of PunctureTracker while keeping it in the new EinsteinAnalysis arrangement, they can just checkout that version using, e.g., This will move the PunctureTracker directory back to the ET_2011_05 release version while keeping other thorns at the current release. Alternatively, if one wants to move the entire arrangement back to the ET_2011_05 release, they can use as normal. This will then not have PunctureTracker present, as expected since it wasn't in the EinsteinAnalysis arrangement at the time of the ET_2011_05 release. However, if one really does want to have the ET_2011_05 version of PunctureTracker, then that's easily achieved too: I think making this step explicit is better (less confusing) than merging the PunctureTracker/ET_2011_05 branch into the ET_2011_05 branch. It also means that old release thornlists should still work fine without any modification.
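The specific git commands were lost from the comment above, but the kind of per-thorn checkout it describes can be sketched with standard git. Everything here (the repository layout, file names, and the `PunctureTracker/ET_2011_05` branch) is a toy reconstruction, not the actual Einstein Toolkit repositories:

```shell
# Toy demo: restore one thorn directory from a namespaced per-thorn
# release branch while the rest of the arrangement stays current.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q arrangement && cd arrangement
git config user.email demo@example.org && git config user.name demo

mkdir PunctureTracker
echo "old release" > PunctureTracker/interface.ccl
git add -A && git commit -qm "state at ET_2011_05"
git branch PunctureTracker/ET_2011_05   # thorn-specific namespaced branch

echo "current" > PunctureTracker/interface.ccl
git add -A && git commit -qm "current master"

# Move just PunctureTracker back to the old release:
git checkout -q PunctureTracker/ET_2011_05 -- PunctureTracker
cat PunctureTracker/interface.ccl   # prints "old release"
```

Checking out a path from a branch (`git checkout <branch> -- <path>`) updates only that directory, which matches the "keep other thorns at the current release" behaviour described.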
A separate issue is that the AEI are keen to retire their svn server. It's running an old version of the OS, is becoming a chore to maintain, and isn't being used by anyone any more. Once it goes offline, old release thornlists will no longer work. To work around this, we can make git versions of the relevant thorn repositories available on BitBucket. In principle, this could be achieved using the existing merged arrangement repositories and just checking out, e.g., the PunctureTracker/ET_2011_05 branch in the EinsteinAnalysis repository, which will only give the PunctureTracker files and not the other files in the arrangement. The only catch is that this would make checkouts larger, since each of these thorns will now have a repository which contains the history of the whole arrangement. To cut down on repository size/download time, the best solution is to make the relevant release branches available in separate read-only repositories, one for each thorn. I have done this for both the AEIThorns and LSUThorns repositories, so we now have, e.g., a PunctureTracker repository, which only contains the necessary release branches. I have also updated all of the old release branch thornlists in the Einstein Toolkit manifest repository, so that new downloads of the ET will use the git versions of the repositories from BitBucket in place of the old svn repositories. I have tested that I can use GetComponents with these updated release thornlists to check out the old releases. This has now been done, as described in detail in my previous comment.
https://bitbucket.org/einsteintoolkit/tickets/issues/1802
Structured Logging for Python

.. -begin-short-

``structlog`` makes logging in Python faster, less painful, and more powerful by adding structure to your log entries. It's up to you whether you want ``structlog`` to take care of the output of your log entries or whether you prefer to forward them to an existing logging system like the standard library's ``logging`` module.

.. -end-short-

Once you feel inspired to try it out, check out our friendly Getting Started tutorial that also contains detailed installation instructions!

.. -begin-spiel-

If you prefer videos over reading, check out this DjangoCon Europe 2019 talk by Markus Holtermann: "Logging Rethought 2: The Actions of Frank Taylor Jr.".

You can stop writing prose and start thinking in terms of an event that happens in the context of key/value pairs:

.. code-block:: pycon

   >>> from structlog import get_logger
   >>> log = get_logger()
   >>> log.info("key_value_logging", out_of_the_box=True, effort=0)
   2020-11-18 09:17.09 [info ] key_value_logging effort=0 out_of_the_box=True

Each log entry is a meaningful dictionary instead of an opaque string now!

Since log entries are dictionaries, you can start binding and re-binding key/value pairs to your loggers to ensure they are present in every following logging call:

.. code-block:: pycon

   >>> log = log.bind(user="anonymous", some_key=23)
   >>> log = log.bind(user="hynek", another_key=42)
   >>> log.info("user.logged_in", happy=True)
   2020-11-18 09:18.28 [info ] user.logged_in another_key=42 happy=True some_key=23 user=hynek

Each log entry goes through a processor pipeline that is just a chain of functions that receive a dictionary and return a new dictionary that gets fed into the next function. That allows for simple but powerful data manipulation:
.. code-block:: python

   def timestamper(logger, log_method, event_dict):
       """Add a timestamp to each log entry."""
       event_dict["timestamp"] = time.time()
       return event_dict

There are plenty of processors for most common tasks coming with ``structlog``: call stack information ("How did this log entry happen?"), exceptions ("What happened‽"), and timestamping.

``structlog`` is completely flexible about how the resulting log entry is emitted. Since each log entry is a dictionary, it can be formatted to any format: e.g. for local development, or JSON for easy parsing. Internally, formatters are processors whose return value (usually a string) is passed into loggers that are responsible for the output of your message. ``structlog`` comes with multiple useful formatters out-of-the-box.

``structlog`` is also very flexible with the final output of your log entries: ``structlog`` works like a wrapper that formats a string and passes it off into existing systems that won't ever know that ``structlog`` even exists. Or the other way round: ``structlog`` comes with a ``logging`` formatter that allows for processing third-party log records. ``structlog`` passes you a dictionary and you can do with it whatever you want. Reported use cases are sending entries out via the network or saving them in a database.

.. -end-spiel-

.. -begin-meta-

Please use the ``structlog`` tag on StackOverflow to get help. Answering questions of your fellow developers is also a great way to help the project!

``structlog`` is dual-licensed under Apache License, version 2 and MIT, available from PyPI, the source code can be found on GitHub, the documentation at. We collect useful third-party extensions in our wiki.

``structlog`` targets Python 3.6 and newer, and PyPy3. If you need support for older Python versions, the last release with support for Python 2.7 and 3.5 was 20.1.0. The package metadata should ensure that you get the correct version.

``structlog`` for Enterprise

Available as part of the Tidelift Subscription. The maintainers of ``structlog``.
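The processor-pipeline idea described above can be sketched in plain Python. This is a toy illustration of the concept, not structlog's actual implementation; the names `run_pipeline` and `render_key_value` are made up for this sketch:

```python
import time

def add_timestamp(logger, method_name, event_dict):
    # A processor receives the event dict and returns a (possibly new) one.
    event_dict["timestamp"] = time.time()
    return event_dict

def render_key_value(logger, method_name, event_dict):
    # A final processor can render the dict into a string for output.
    event = event_dict.pop("event")
    pairs = " ".join(f"{k}={v}" for k, v in sorted(event_dict.items()))
    return f"{event} {pairs}"

def run_pipeline(processors, event_dict):
    # Feed each processor's return value into the next one.
    for processor in processors:
        event_dict = processor(None, "info", event_dict)
    return event_dict

line = run_pipeline(
    [add_timestamp, render_key_value],
    {"event": "user.logged_in", "user": "hynek"},
)
print(line)
```

Each stage only sees a dictionary, which is what makes the pipeline composable: formatters are just processors whose return value happens to be a string.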
https://xscode.com/hynek/structlog
I'm supposed to write this Java program, and I'm having a problem: I cannot figure out why I am getting an Unhandled Exception error. I put a comment at the error line.

PrintWriter writer = new PrintWriter (writingfile); (line where I am getting the unhandled exception error)

Write a program that reads numeric operations from a file and writes their result into another file. Numbers & results are integer.
• Operators (+ - * /) have equal precedence.
• Input file
• Name: "input.txt"
• There can be many operations, each on its own line, e.g., 2 + 3 * 10 / 2 and 10 / 3 * 2
• Output file
• Name: "output.txt"
• The result of each operation is written in one line, e.g., 25

Code Java:

import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Scanner;

// There is NO PRECEDENCE in the math from the file
public class Operations {
    public static void main(String[] args) {
        ArrayList<String> inputs = new ArrayList<String>();
        int[] array = new int[10]; // change this once the add-to method works.
        String k = "";
        try {
            File inputfile = new File("math.txt");
            Scanner in = new Scanner(inputfile);
            while (in.hasNextLine()) {
                inputs.add(in.nextLine());
            }
            in.close();
            // all objects are inside of the ArrayList of String type
            for (int i = 0; i < inputs.size(); i++) {
                // attempting to change from String to int
                k = inputs.get(i);
                array[i] = Integer.parseInt(k);
            }
        } catch (IOException e) {
            System.out.println("You done fucked up real good.");
        }

        File writingfile = new File("output.txt");
        // Writing to the file
        PrintWriter writer = new PrintWriter(writingfile); // line with the unhandled exception error
        for (int i = 0; i < array.length; i++) {
            writer.println(array[i]);
        }
        writer.close();
    }
}

The file I am reading the ints from looks like this (named math.txt):

1+2*3/3
1+2+3+4*3/3
1+2+4

Help would be awesome. Thanks.
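For reference, `PrintWriter(File)` declares the checked `FileNotFoundException`, which is why the compiler complains when the constructor is called outside any try/catch. A minimal sketch of one way to handle it; the class and method names (`WriteDemo`, `writeResults`) are illustrative, not from the original post:

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.PrintWriter;

public class WriteDemo {
    // PrintWriter(File) throws the checked FileNotFoundException, so the
    // call must sit inside a try/catch (or the method must declare
    // "throws FileNotFoundException"). try-with-resources also closes
    // the writer automatically, even if an exception is thrown.
    static boolean writeResults(File out, int[] values) {
        try (PrintWriter writer = new PrintWriter(out)) {
            for (int v : values) {
                writer.println(v);
            }
            return true;
        } catch (FileNotFoundException e) {
            System.err.println("Cannot open " + out + ": " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(writeResults(new File("output.txt"), new int[] {3, 6, 7}));
    }
}
```

Note this only fixes the compile error in the question; parsing lines like `1+2*3/3` still needs its own tokenizing step, since `Integer.parseInt` accepts only a plain number.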
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/11932-simple-i-o-java-error-printingthethread.html
python: Improve how templated SimObject classes are handled. When setting up a SimObject's Param structure, gem5 will autogenerate a header file which attempts to declare the SimObject's C++ type. It has had at least some level of sophistication there where it would pull off the namespaces ahead of the class name and handle them properly, but it didn't know how to handle templates. This change improves that handling in two ways. First, it adds a new magical SimObject attribute called 'cxx_template_params' which is used to specify what the template parameters are as a list. For instance, if your SimObject was a template which took an integer constant as its first parameter and a type as its second, this attribute could look like the following: cxx_template_params = [ 'int FOO', 'class Bar' ] Importantly, if there are any default values for these template parameters, they should *not* be included here, they should be specified where the class is later defined. The second new mechanism is to add an internal CxxClass in the SimObject.cxx_param_decl method. This class accepts the class signature in the cxx_class attribute and the cxx_template_params and does two things. First, it strips off namespaces like in the old implementation. Second, it extracts and processes any template arguments attached to the class. If these are constants (as determined by the contents of cxx_template_params), then they are stored verbatim. If they're types, then they're recursively expanded into a CxxClass and stored that way. Note that these are the *values* of the template arguments, where as cxx_template_params lists the *types* and *names* of those arguments. In our earlier example, if cxx_class was: cxx_class = 'CoolClasses::ClassName<12, Fruit::Apple>' Then CxxClass would extract the namespace 'CoolClasses', the class name 'ClassName', the argument '12', and the argument 'Fruit::Apple'. 
That second argument would be expanded into a CxxClass with the namespace 'Fruit' and the class name 'Apple'. Importantly here, because there were no default arguments given in cxx_template_params, all "hidden" arguments which would fall through to their defaults need to be fully specified in cxx_class. The CxxClass has a method called declare() which uses the information extracted earlier to output all of the "stuff" necessary for declaring the given class, including opening any containing namespaces and putting template<...> ahead of the actual class declaration with the template parameters specified. If any of the template arguments are themselves CxxClass instances, then they'll be recursively declared immediately before the current class is. An alternative solution to this problem might be to include the header file which actually defines the cxx_class type to avoid having to come up with a declaration. Unfortunately this doesn't work since it can set up include loops where the SimObject C++ header file includes the param header to get access to the Param type, but that includes the C++ header to get access to the SimObject type. This also makes it harder for SimObjects to refer to each other, since they rely on the declaration in the params header files when declaring a member pointer to that type in their own Param structures. Change-Id: I68cfc36ddff6d789eb4cdef5178c4619ac2cc8b1 Reviewed-on: Reviewed-by: Andreas Sandberg <andreas.sandberg@arm.com> Reviewed-by: Jason Lowe-Power <jason@lowepower.com> Maintainer: Gabe Black <gabeblack@google.com>
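The extraction step described above (split off namespaces, then pull apart top-level template arguments, recursing into type-valued ones) can be sketched in Python. This is an illustrative reconstruction of the idea, not gem5's actual `CxxClass` code; in the real implementation `cxx_template_params` decides which arguments are constants, whereas this sketch just guesses with `isdigit()`:

```python
def split_top_level(s):
    # Split on commas that are not nested inside <...>.
    parts, depth, cur = [], 0, ''
    for ch in s:
        if ch == '<':
            depth += 1
        elif ch == '>':
            depth -= 1
        if ch == ',' and depth == 0:
            parts.append(cur)
            cur = ''
        else:
            cur += ch
    if cur:
        parts.append(cur)
    return parts

def parse_cxx_class(sig):
    # Separate any template argument list from the qualified name.
    if '<' in sig:
        name_part, args_part = sig.split('<', 1)
        args_part = args_part.rsplit('>', 1)[0]
        args = [a.strip() for a in split_top_level(args_part)]
    else:
        name_part, args = sig, []
    *namespaces, cls = name_part.split('::')
    # Constants are kept verbatim; type arguments are parsed recursively.
    parsed = [a if a.isdigit() else parse_cxx_class(a) for a in args]
    return namespaces, cls, parsed

print(parse_cxx_class("CoolClasses::ClassName<12, Fruit::Apple>"))
```

On the commit's example this yields the namespace `CoolClasses`, the class `ClassName`, the constant `12`, and a recursively parsed `Fruit::Apple`, mirroring the behaviour the message describes.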
https://gem5.googlesource.com/testing/jenkins-gem5-prod/+/7e38637c8dc1cff923c386e6ad74ceb9a1317ef2
Quickly mirror polygon geometry the fastest and most correct way. Maya's long had this relative capability but it's always been half-hearted - this fixes all that:

- Mirrors duplicated geometry
- UVs
- Maintains normal direction
- Object-space mirroring
- Merge and combine functionality
- Merge distance in the user interface
- And the unique feature of being able to mirror and merge UVs with symmetrical edges
- Nurbs/Bezier curve support

An absolute time saver for any serious modeller or anyone that does modeling in general.

New to version 0.3: added curve support - so you can now mirror and merge curves.

For this tool and more, also check out

INSTALLATION:
Place the svMirror.py file in your Maya scripts directory, e.g.
C:\Users\'YOUR USER NAME'\Documents\maya\2014-x64\scripts
Then start Maya...

TO MIRROR GEOMETRY: add the below lines to your shelf or run in a Python tab/command line

import svMirror as svMirror
svMirror.mirrorGeo()

TO CENTRE UVS: add the below lines to your shelf or run in a Python tab/command line

import svMirror as svMirror
svMirror.CentreUVs()
https://www.highend3d.com/maya/script/super-mirror-geometry-for-maya
Git - ginac.git/blob - doc/reference/DoxyfileHTML
Updated documentation for multiple polylogarithms
[ginac.git] / doc / reference / DoxyfileHTML
1 # Doxyfile 1.2.18 # General configuration options 15 #--------------------------------------------------------------------------- 16 17 # The PROJECT_NAME tag is a single word (or a sequence of words surrounded 18 # by quotes) that should identify the project. 19 20 PROJECT_NAME = GiNaC =, Catalan, Chinese, Chinese-Traditional, Croatian, Czech, Danish, Dutch, 40 # Finnish, French, German, Greek, Hungarian, Italian, Japanese, Japanese-en 41 # (Japanese with english messages), Korean, Norwegian, Polish, Portuguese, 42 # Romanian, Russian, Serbian, Slovak, Slovene, = YES 52 53 # If the EXTRACT_PRIVATE tag is set to YES all private members of a class 54 # will be included in the documentation. 55 56 EXTRACT_PRIVATE = YES 57 58 # If the EXTRACT_STATIC tag is set to YES all static members of a file 59 # will be included in the documentation. 60 61 EXTRACT_STATIC = YES 62 63 # If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) 64 # defined locally in source files will be included in the documentation. 65 # If set to NO only classes defined in header files are included. 66 67 EXTRACT_LOCAL_CLASSES = YES
95 96 BRIEF_MEMBER_DESC = YES 97 98 # If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend 99 # the brief description of a member or function before the detailed description. 100 # Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the 101 # brief descriptions will be completely suppressed. 102 103 REPEAT_BRIEF = YES 104 105 # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then 106 # Doxygen will generate a detailed section even if there is only a brief 107 # description. 108 109 ALWAYS_DETAILED_SEC = NO 110 111 # If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all inherited 112 # members of a class in the documentation of that class as if those members were 113 # ordinary class members. Constructors, destructors and assignment operators of 114 # the base classes will not be shown. 115 116 INLINE_INHERITED_MEMB = NO 117 118 # If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full 119 # path before files name in the file list and in the header files. If set 120 # to NO the shortest path that makes the file name unique will be used. 121 122 FULL_PATH_NAMES = NO 123 124 # If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag 125 # can be used to strip a user defined part of the path. Stripping is 126 # only done if one of the specified strings matches the left-hand part of 127 # the path. It is allowed to use relative paths in the argument list. 128 129 STRIP_FROM_PATH = 130 131 # The INTERNAL_DOCS tag determines if documentation 132 # that is typed after a \internal command is included. If the tag is set 133 # to NO (the default) then the documentation will be excluded. 134 # Set it to YES to include the internal documentation. 135 136 INTERNAL_DOCS = NO 137 138 # Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct 139 # doxygen to hide any special comment blocks from generated source code 140 # fragments. 
Normal C and C++ comments will always remain visible. 141 142 STRIP_CODE_COMMENTS = YES 143 144 # If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate 145 # file names in lower case letters. If set to YES upper case letters are also 146 # allowed. This is useful if you have classes or files whose names only differ 147 # in case and if your file system supports case sensitive file names. Windows 148 # users are adviced to set this option to NO. 149 150 CASE_SENSE_NAMES = YES 151 152 # If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter 153 # (but less readable) file names. This can be useful is your file systems 154 # doesn't support long names like on DOS, Mac, or CD-ROM. 155 156 SHORT_NAMES = NO 157 158 # If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen 159 # will show members with their full class and namespace scopes in the 160 # documentation. If set to YES the scope will be hidden. 161 162 HIDE_SCOPE_NAMES = NO 163 164 # If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen 165 # will generate a verbatim copy of the header file for each class for 166 # which an include is specified. Set to NO to disable this. 167 168 VERBATIM_HEADERS = NO 169 170 # If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen 171 # will put list of the files that are included by a file in the documentation 172 # of that file. 173 174 SHOW_INCLUDE_FILES = YES 175 176 # If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen 177 # will interpret the first line (until the first dot) of a JavaDoc-style 178 # comment as the brief description. If set to NO, the JavaDoc 179 # comments will behave just like the Qt-style comments (thus requiring an 180 # explict @brief command for a brief description. 181 182 JAVADOC_AUTOBRIEF = YES 183 184 # The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make Doxygen 185 # treat a multi-line C++ special comment block (i.e. a block of //! 
or /// 186 # comments) as a brief description. This used to be the default behaviour. 187 # The new default is to treat a multi-line C++ comment block as a detailed 188 # description. Set this tag to YES if you prefer the old behaviour instead. 189 190 MULTILINE_CPP_IS_BRIEF = NO 191 192 # If the DETAILS_AT_TOP tag is set to YES then Doxygen 193 # will output the detailed description near the top, like JavaDoc. 194 # If set to NO, the detailed description appears after the member 195 # documentation. 196 197 DETAILS_AT_TOP = NO 198 199 # If the INHERIT_DOCS tag is set to YES (the default) then an undocumented 200 # member inherits the documentation from any documented member that it 201 # reimplements. 202 203 INHERIT_DOCS = YES 204 205 # If the INLINE_INFO tag is set to YES (the default) then a tag [inline] 206 # is inserted in the documentation for inline members. 207 208 INLINE_INFO = YES 209 210 # If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen 211 # will sort the (detailed) documentation of file and class members 212 # alphabetically by member name. If set to NO the members will appear in 213 # declaration order. 214 215 SORT_MEMBER_DOCS = NO 216 217 # If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC 218 # tag is set to YES, then doxygen will reuse the documentation of the first 219 # member in the group (if any) for the other members of the group. By default 220 # all members of a group must be documented explicitly. 221 222 DISTRIBUTE_GROUP_DOC = NO 223 224 # The TAB_SIZE tag can be used to set the number of spaces in a tab. 225 # Doxygen uses this value to replace tabs by spaces in code fragments. 226 227 TAB_SIZE = 4 228 229 # The GENERATE_TODOLIST tag can be used to enable (YES) or 230 # disable (NO) the todo list. This list is created by putting \todo 231 # commands in the documentation. 
232 233 GENERATE_TODOLIST = NO 234 235 # The GENERATE_TESTLIST tag can be used to enable (YES) or 236 # disable (NO) the test list. This list is created by putting \test 237 # commands in the documentation. 238 239 GENERATE_TESTLIST = NO 240 241 # The GENERATE_BUGLIST tag can be used to enable (YES) or 242 # disable (NO) the bug list. This list is created by putting \bug 243 # commands in the documentation. 244 245 GENERATE_BUGLIST = NO 246 247 # The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or 248 # disable (NO) the deprecated list. This list is created by putting \deprecated commands in the documentation. 249 250 GENERATE_DEPRECATEDLIST= NO 251 252 # This tag can be used to specify a number of aliases that acts 253 # as commands in the documentation. An alias has the form "name=value". 254 # For example adding "sideeffect=\par Side Effects:\n" will allow you to 255 # put the command \sideeffect (or @sideeffect) in the documentation, which 256 # will result in a user defined paragraph with heading "Side Effects:". 257 # You can put \n's in the value part of an alias to insert newlines. 258 259 ALIASES = 260 261 # The ENABLED_SECTIONS tag can be used to enable conditional 262 # documentation sections, marked by \if sectionname ... \endif. 263 264 ENABLED_SECTIONS = 265 266 # The MAX_INITIALIZER_LINES tag determines the maximum number of lines 267 # the initial value of a variable or define consist of for it to appear in 268 # the documentation. If the initializer consists of more lines than specified 269 # here it will be hidden. Use a value of 0 to hide initializers completely. 270 # The appearance of the initializer of individual variables and defines in the 271 # documentation can be controlled using \showinitializer or \hideinitializer 272 # command in the documentation regardless of this setting. 273 274 MAX_INITIALIZER_LINES = 30 275 276 # Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources 277 # only. 
Doxygen will then generate output that is more tailored for C. 278 # For instance some of the names that are used will be different. The list 279 # of all members will be omitted, etc. 280 281 OPTIMIZE_OUTPUT_FOR_C = NO 282 283 # Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java sources 284 # only. Doxygen will then generate output that is more tailored for Java. 285 # For instance namespaces will be presented as packages, qualified scopes 286 # will look different, etc. 287 288 OPTIMIZE_OUTPUT_JAVA = NO 289 290 # Set the SHOW_USED_FILES tag to NO to disable the list of files generated 291 # at the bottom of the documentation of classes and structs. If set to YES the 292 # list will mention the files that were used to generate the documentation. 293 294 SHOW_USED_FILES = YES 295 296 #--------------------------------------------------------------------------- 297 # configuration options related to warning and progress messages 298 #--------------------------------------------------------------------------- 299 300 # The QUIET tag can be used to turn on/off the messages that are generated 301 # by doxygen. Possible values are YES and NO. If left blank NO is used. 302 303 QUIET = NO 304 305 # The WARNINGS tag can be used to turn on/off the warning messages that are 306 # generated by doxygen. Possible values are YES and NO. If left blank 307 # NO is used. 308 309 WARNINGS = NO 310 311 # If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings 312 # for undocumented members. If EXTRACT_ALL is set to YES then this flag will 313 # automatically be disabled. 314 315 WARN_IF_UNDOCUMENTED = YES 316 317 # The WARN_FORMAT tag determines the format of the warning messages that 318 # doxygen can produce. The string should contain the $file, $line, and $text 319 # tags, which will be replaced by the file and line number from which the 320 # warning originated and the warning text. 

WARN_FORMAT            = "$file:$line: $text"

# The WARN_LOGFILE tag can be used to specify a file to which warning
# and error messages should be written. If left blank the output is written
# to stderr.

WARN_LOGFILE           =

#---------------------------------------------------------------------------
# configuration options related to the input files
#---------------------------------------------------------------------------

# The INPUT tag can be used to specify the files and/or directories that contain
# documented source files. You may enter file names like "myfile.cpp" or
# directories like "/usr/src/myproject". Separate the files or directories
# with spaces.

INPUT                  = ../../ginac

# If the value of the INPUT tag contains directories, you can use the
# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp
# and *.h) to filter out the source files in the directories. If left
# blank the following patterns are tested:
# *.c *.cc *.cxx *.cpp *.c++ *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh *.hxx *.hpp
# *.h++ *.idl *.odl

FILE_PATTERNS          = *.cpp *.h

# The RECURSIVE tag can be used to specify whether or not subdirectories
# should be searched for input files as well. Possible values are YES and NO.
# If left blank NO is used.

RECURSIVE              = NO

# The EXCLUDE tag can be used to specify files and/or directories that should be
# excluded from the INPUT source files. This way you can easily exclude a
# subdirectory from a directory tree whose root is specified with the INPUT tag.

EXCLUDE                =

# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
# directories that are symbolic links (a Unix filesystem feature) are excluded
# from the input.

EXCLUDE_SYMLINKS       = NO

# If the value of the INPUT tag contains directories, you can use the
# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
# certain files from those directories.

EXCLUDE_PATTERNS       =

# The EXAMPLE_PATH tag can be used to specify one or more files or
# directories that contain example code fragments that are included (see
# the \include command).

EXAMPLE_PATH           =

# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp
# and *.h) to filter out the source files in the directories. If left
# blank all files are included.

EXAMPLE_PATTERNS       =

# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
# searched for input files to be used with the \include or \dontinclude
# commands irrespective of the value of the RECURSIVE tag.
# Possible values are YES and NO. If left blank NO is used.

EXAMPLE_RECURSIVE      = NO

# The IMAGE_PATH tag can be used to specify one or more files or
# directories that contain images that are included in the documentation (see
# the \image command).

IMAGE_PATH             =

# The INPUT_FILTER tag can be used to specify a program that doxygen should
# invoke to filter each input file. Doxygen will invoke the filter program
# by executing (via popen()) the command <filter> <input-file>, where <filter>
# is the value of the INPUT_FILTER tag, and <input-file> is the name of an
# input file. Doxygen will then use the output that the filter program writes
# to standard output.

INPUT_FILTER           =

# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
# INPUT_FILTER) will be used to filter the input files when producing source
# files to browse (i.e. when SOURCE_BROWSER is set to YES).

FILTER_SOURCE_FILES    = NO

#---------------------------------------------------------------------------
# configuration options related to source browsing
#---------------------------------------------------------------------------

# If the SOURCE_BROWSER tag is set to YES then a list of source files will
# be generated. Documented entities will be cross-referenced with these sources.

SOURCE_BROWSER         = YES

# Setting the INLINE_SOURCES tag to YES will include the body
# of functions and classes directly in the documentation.

INLINE_SOURCES         = NO

# If the REFERENCED_BY_RELATION tag is set to YES (the default)
# then for each documented function all documented
# functions referencing it will be listed.

REFERENCED_BY_RELATION = YES

# If the REFERENCES_RELATION tag is set to YES (the default)
# then for each documented function all documented entities
# called/used by that function will be listed.

REFERENCES_RELATION    = YES

#---------------------------------------------------------------------------
# configuration options related to the alphabetical class index
#---------------------------------------------------------------------------

# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index
# of all compounds will be generated. Enable this if the project
# contains a lot of classes, structs, unions or interfaces.

ALPHABETICAL_INDEX     = YES

# If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then
# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns
# in which this list will be split (can be a number in the range [1..20])

COLS_IN_ALPHA_INDEX    = 5

# In case all classes in a project start with a common prefix, all
# classes will be put under the same header in the alphabetical index.
# The IGNORE_PREFIX tag can be used to specify one or more prefixes that
# should be ignored while generating the index headers.

IGNORE_PREFIX          =

#---------------------------------------------------------------------------
# configuration options related to the HTML output
#---------------------------------------------------------------------------

# If the GENERATE_HTML tag is set to YES (the default) Doxygen will
# generate HTML output.

GENERATE_HTML          = YES

# The HTML_OUTPUT tag is used to specify where the HTML docs will be put.
# If a relative path is entered the value of OUTPUT_DIRECTORY will be
# put in front of it. If left blank `html' will be used as the default path.

HTML_OUTPUT            = .

# The HTML_FILE_EXTENSION tag can be used to specify the file extension for
# each generated HTML page (for example: .htm,.php,.asp). If it is left blank
# doxygen will generate files with .html extension.

HTML_FILE_EXTENSION    = .html

# The HTML_HEADER tag can be used to specify a personal HTML header for
# each generated HTML page. If it is left blank doxygen will generate a
# standard header.

HTML_HEADER            =

# The HTML_FOOTER tag can be used to specify a personal HTML footer for
# each generated HTML page. If it is left blank doxygen will generate a
# standard footer.

HTML_FOOTER            = Doxyfooter

# The HTML_STYLESHEET tag can be used to specify a user-defined cascading
# style sheet that is used by each HTML page. It can be used to
# fine-tune the look of the HTML output. If the tag is left blank doxygen
# will generate a default style sheet.

HTML_STYLESHEET        =

# If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes,
# files or namespaces will be aligned in HTML using tables. If set to
# NO a bullet list will be used.

HTML_ALIGN_MEMBERS     = YES

# If the GENERATE_HTMLHELP tag is set to YES, additional index files
# will be generated that can be used as input for tools like the
# Microsoft HTML Help Workshop to generate a compressed HTML help file (.chm)
# of the generated HTML documentation.

GENERATE_HTMLHELP      = NO

# If the GENERATE_HTMLHELP tag is set to YES, the CHM_FILE tag can
# be used to specify the file name of the resulting .chm file. You
# can add a path in front of the file if the result should not be
# written to the html output dir.

CHM_FILE               =

# If the GENERATE_HTMLHELP tag is set to YES, the HHC_LOCATION tag can
# be used to specify the location (absolute path including file name) of
# the HTML help compiler (hhc.exe). If non-empty doxygen will try to run
# the HTML help compiler on the generated index.hhp.

HHC_LOCATION           =

# If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag
# controls if a separate .chi index file is generated (YES) or that
# it should be included in the master .chm file (NO).

GENERATE_CHI           = NO

# If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag
# controls whether a binary table of contents is generated (YES) or a
# normal table of contents (NO) in the .chm file.

BINARY_TOC             = NO

# The TOC_EXPAND flag can be set to YES to add extra items for group members
# to the contents of the HTML help documentation and to the tree view.

TOC_EXPAND             = NO

# The DISABLE_INDEX tag can be used to turn on/off the condensed index at
# top of each HTML page. The value NO (the default) enables the index and
# the value YES disables it.

DISABLE_INDEX          = NO

# This tag can be used to set the number of enum values (range [1..20])
# that doxygen will group on one line in the generated HTML documentation.

ENUM_VALUES_PER_LINE   = 4

# If the GENERATE_TREEVIEW tag is set to YES, a side panel will be
# generated containing a tree-like index structure (just like the one that
# is generated for HTML Help). For this to work a browser that supports
# JavaScript and frames is required (for instance Mozilla, Netscape 4.0+,
# or Internet Explorer 4.0+). Note that for large projects the tree generation
# can take a very long time. In such cases it is better to disable this feature.
# Windows users are probably better off using the HTML help feature.

GENERATE_TREEVIEW      = NO

# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be
# used to set the initial width (in pixels) of the frame in which the tree
# is shown.

TREEVIEW_WIDTH         = 250

#---------------------------------------------------------------------------
# configuration options related to the LaTeX output
#---------------------------------------------------------------------------

# If the GENERATE_LATEX tag is set to YES (the default) Doxygen will
# generate LaTeX output.

GENERATE_LATEX         = NO

# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put.
# If a relative path is entered the value of OUTPUT_DIRECTORY will be
# put in front of it. If left blank `latex' will be used as the default path.

LATEX_OUTPUT           = latex

# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
# invoked. If left blank `latex' will be used as the default command name.

LATEX_CMD_NAME         = latex

# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to
# generate an index for LaTeX. If left blank `makeindex' will be used as the
# default command name.

MAKEINDEX_CMD_NAME     = makeindex

# If the COMPACT_LATEX tag is set to YES Doxygen generates more compact
# LaTeX documents.
# This may be useful for small projects and may help to
# save some trees in general.

COMPACT_LATEX          = NO

# The PAPER_TYPE tag can be used to set the paper type that is used
# by the printer. Possible values are: a4, a4wide, letter, legal and
# executive. If left blank a4wide will be used.

PAPER_TYPE             = a4wide

# The EXTRA_PACKAGES tag can be used to specify one or more names of LaTeX
# packages that should be included in the LaTeX output.

EXTRA_PACKAGES         =

# The LATEX_HEADER tag can be used to specify a personal LaTeX header for
# the generated latex document. The header should contain everything until
# the first chapter. If it is left blank doxygen will generate a
# standard header. Notice: only use this tag if you know what you are doing!

LATEX_HEADER           =

# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated
# is prepared for conversion to pdf (using ps2pdf). The pdf file will
# contain links (just like the HTML output) instead of page references.
# This makes the output suitable for online browsing using a pdf viewer.

PDF_HYPERLINKS         = NO

# If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of
# plain latex in the generated Makefile. Set this option to YES to get a
# higher quality PDF documentation.

USE_PDFLATEX           = NO

# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode
# command to the generated LaTeX files. This will instruct LaTeX to keep
# running if errors occur, instead of asking the user for help.
# This option is also used when generating formulas in HTML.

LATEX_BATCHMODE        = NO

#---------------------------------------------------------------------------
# configuration options related to the RTF output
#---------------------------------------------------------------------------

# If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output.
# The RTF output is optimised for Word 97 and may not look very pretty with
# other RTF readers or editors.

GENERATE_RTF           = NO

# The RTF_OUTPUT tag is used to specify where the RTF docs will be put.
# If a relative path is entered the value of OUTPUT_DIRECTORY will be
# put in front of it. If left blank `rtf' will be used as the default path.

RTF_OUTPUT             = rtf

# If the COMPACT_RTF tag is set to YES Doxygen generates more compact
# RTF documents. This may be useful for small projects and may help to
# save some trees in general.

COMPACT_RTF            = NO

# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated
# will contain hyperlink fields. The RTF file will
# contain links (just like the HTML output) instead of page references.
# This makes the output suitable for online browsing using WORD or other
# programs which support those fields.
# Note: wordpad (write) and others do not support links.

RTF_HYPERLINKS         = NO

# Load stylesheet definitions from file. Syntax is similar to doxygen's
# config file, i.e. a series of assignments. You only have to provide
# replacements; missing definitions are set to their default value.

RTF_STYLESHEET_FILE    =

# Set optional variables used in the generation of an RTF document.
# Syntax is similar to doxygen's config file.

RTF_EXTENSIONS_FILE    =

#---------------------------------------------------------------------------
# configuration options related to the man page output
#---------------------------------------------------------------------------

# If the GENERATE_MAN tag is set to YES (the default) Doxygen will
# generate man pages.

GENERATE_MAN           = NO

# The MAN_OUTPUT tag is used to specify where the man pages will be put.
# If a relative path is entered the value of OUTPUT_DIRECTORY will be
# put in front of it. If left blank `man' will be used as the default path.

MAN_OUTPUT             = man

# The MAN_EXTENSION tag determines the extension that is added to
# the generated man pages (default is the subroutine's section .3)

MAN_EXTENSION          = .3

# If the MAN_LINKS tag is set to YES and Doxygen generates man output,
# then it will generate one additional man file for each entity
# documented in the real man page(s). These additional files
# only source the real man page, but without them the man command
# would be unable to find the correct page. The default is NO.

MAN_LINKS              = NO

#---------------------------------------------------------------------------
# configuration options related to the XML output
#---------------------------------------------------------------------------

# If the GENERATE_XML tag is set to YES Doxygen will
# generate an XML file that captures the structure of
# the code including all documentation. Note that this
# feature is still experimental and incomplete at the
# moment.

GENERATE_XML           = NO

# The XML_SCHEMA tag can be used to specify an XML schema,
# which can be used by a validating XML parser to check the
# syntax of the XML files.

XML_SCHEMA             =

# The XML_DTD tag can be used to specify an XML DTD,
# which can be used by a validating XML parser to check the
# syntax of the XML files.

XML_DTD                =

#---------------------------------------------------------------------------
# configuration options for the AutoGen Definitions output
#---------------------------------------------------------------------------

# If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will
# generate an AutoGen Definitions (see autogen.sf.net) file
# that captures the structure of the code including all
# documentation. Note that this feature is still experimental
# and incomplete at the moment.

GENERATE_AUTOGEN_DEF   = NO

#---------------------------------------------------------------------------
# Configuration options related to the preprocessor
#---------------------------------------------------------------------------

# If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will
# evaluate all C-preprocessor directives found in the sources and include
# files.

ENABLE_PREPROCESSING   = YES

# If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro
# names in the source code. If set to NO (the default) only conditional
# compilation will be performed. Macro expansion can be done in a controlled
# way by setting EXPAND_ONLY_PREDEF to YES.

MACRO_EXPANSION        = YES

# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES
# then the macro expansion is limited to the macros specified with the
# PREDEFINED and EXPAND_AS_PREDEFINED tags.

EXPAND_ONLY_PREDEF     = YES

# If the SEARCH_INCLUDES tag is set to YES (the default) the include files
# in the INCLUDE_PATH (see below) will be searched if a #include is found.

SEARCH_INCLUDES        = YES

# The INCLUDE_PATH tag can be used to specify one or more directories that
# contain include files that are not input files but should be processed by
# the preprocessor.

INCLUDE_PATH           =

# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
# patterns (like *.h and *.hpp) to filter out the header files in the
# directories. If left blank, the patterns specified with FILE_PATTERNS will
# be used.

INCLUDE_FILE_PATTERNS  = *.h

# The PREDEFINED tag can be used to specify one or more macro names that
# are defined before the preprocessor is started (similar to the -D option of
# gcc). The argument of the tag is a list of macros of the form: name
# or name=definition (no spaces). If the definition and the = are
# omitted, =1 is assumed.

PREDEFINED             = "GINAC_DECLARE_REGISTERED_CLASS_NO_CTORS(class, base)=" \
                         "GINAC_DECLARE_REGISTERED_CLASS(class, base)="

# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then
# this tag can be used to specify a list of macro names that should be expanded.
# The macro definition that is found in the sources will be used.
# Use the PREDEFINED tag if you want to use a different macro definition.

EXPAND_AS_DEFINED      =

# If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then
# doxygen's preprocessor will remove all function-like macros that are alone
# on a line, have an all-uppercase name, and do not end with a semicolon. Such
# function macros are typically used for boiler-plate code, and will confuse
# the parser if not removed.

SKIP_FUNCTION_MACROS   = YES

#---------------------------------------------------------------------------
# Configuration::additions related to external references
#---------------------------------------------------------------------------

# The TAGFILES tag can be used to specify one or more tagfiles.

TAGFILES               =

# When a file name is specified after GENERATE_TAGFILE, doxygen will create
# a tag file that is based on the input files it reads.

GENERATE_TAGFILE       =

# If the ALLEXTERNALS tag is set to YES all external classes will be listed
# in the class index. If set to NO only the inherited external classes
# will be listed.

ALLEXTERNALS           = NO

# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed
# in the modules index. If set to NO, only the current project's groups will
# be listed.

EXTERNAL_GROUPS        = YES

# The PERL_PATH should be the absolute path and name of the perl script
# interpreter (i.e. the result of `which perl').

PERL_PATH              = /usr/bin/perl

#---------------------------------------------------------------------------
# Configuration options related to the dot tool
#---------------------------------------------------------------------------

# If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will
# generate an inheritance diagram (in HTML, RTF and LaTeX) for classes with
# base or super classes. Setting the tag to NO turns the diagrams off. Note
# that this option is superseded by the HAVE_DOT option below. This is only a
# fallback. It is recommended to install and use dot, since it yields more
# powerful graphs.

CLASS_DIAGRAMS         = YES

# If set to YES, the inheritance and collaboration graphs will hide
# inheritance and usage relations if the target is undocumented
# or is not a class.

HIDE_UNDOC_RELATIONS   = YES

# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
# available from the path. This tool is part of Graphviz, a graph visualization
# toolkit from AT&T and Lucent Bell Labs. The other options in this section
# have no effect if this option is set to NO (the default)

HAVE_DOT               = NO

# If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen
# will generate a graph for each documented class showing the direct and
# indirect inheritance relations. Setting this tag to YES will force the
# CLASS_DIAGRAMS tag to NO.

CLASS_GRAPH            = YES

# If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen
# will generate a graph for each documented class showing the direct and
# indirect implementation dependencies (inheritance, containment, and
# class references variables) of the class with other documented classes.

COLLABORATION_GRAPH    = YES

# If set to YES, the inheritance and collaboration graphs will show the
# relations between templates and their instances.

TEMPLATE_RELATIONS     = YES

# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT
# tags are set to YES then doxygen will generate a graph for each documented
# file showing the direct and indirect include dependencies of the file with
# other documented files.

INCLUDE_GRAPH          = YES

# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and
# HAVE_DOT tags are set to YES then doxygen will generate a graph for each
# documented header file showing the documented files that directly or
# indirectly include this file.

INCLUDED_BY_GRAPH      = YES

# If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen
# will show a graphical hierarchy of all classes instead of a textual one.

GRAPHICAL_HIERARCHY    = YES

# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
# generated by dot. Possible values are png, jpg, or gif.
# If left blank png will be used.

DOT_IMAGE_FORMAT       = png

# The tag DOT_PATH can be used to specify the path where the dot tool can be
# found. If left blank, it is assumed the dot tool can be found on the path.

DOT_PATH               =

# The DOTFILE_DIRS tag can be used to specify one or more directories that
# contain dot files that are included in the documentation (see the
# \dotfile command).

DOTFILE_DIRS           =

# The MAX_DOT_GRAPH_WIDTH tag can be used to set the maximum allowed width
# (in pixels) of the graphs generated by dot. If a graph becomes larger than
# this value, doxygen will try to truncate the graph, so that it fits within
# the specified constraint. Beware that most browsers cannot cope with very
# large images.

MAX_DOT_GRAPH_WIDTH    = 1024

# The MAX_DOT_GRAPH_HEIGHT tag can be used to set the maximum allowed height
# (in pixels) of the graphs generated by dot. If a graph becomes larger than
# this value, doxygen will try to truncate the graph, so that it fits within
# the specified constraint. Beware that most browsers cannot cope with very
# large images.

MAX_DOT_GRAPH_HEIGHT   = 1024

# If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will
# generate a legend page explaining the meaning of the various boxes and
# arrows in the dot generated graphs.

GENERATE_LEGEND        = YES

# If the DOT_CLEANUP tag is set to YES (the default) Doxygen will
# remove the intermediate dot files that are used to generate
# the various graphs.

DOT_CLEANUP            = YES

#---------------------------------------------------------------------------
# Configuration::additions related to the search engine
#---------------------------------------------------------------------------

# The SEARCHENGINE tag specifies whether or not a search engine should be
# used. If set to NO the values of all tags below this one will be ignored.

SEARCHENGINE           = NO

# The CGI_NAME tag should be the name of the CGI script that
# starts the search engine (doxysearch) with the correct parameters.
# A script with this name will be generated by doxygen.

CGI_NAME               = search.cgi

# The CGI_URL tag should be the absolute URL to the directory where the
# cgi binaries are located. See the documentation of your http daemon for
# details.

CGI_URL                =

# The DOC_URL tag should be the absolute URL to the directory where the
# documentation is located. If left blank the absolute path to the
# documentation, with file:// prepended to it, will be used.

DOC_URL                =

# The DOC_ABSPATH tag should be the absolute path to the directory where the
# documentation is located. If left blank the directory on the local machine
# will be used.

DOC_ABSPATH            =

# The BIN_ABSPATH tag must point to the directory where the doxysearch binary
# is installed.

BIN_ABSPATH            = /usr/bin/

# The EXT_DOC_PATHS tag can be used to specify one or more paths to
# documentation generated for other projects. This allows doxysearch to search
# the documentation for these projects as well.

EXT_DOC_PATHS          =
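# The following fragment is an illustrative sketch only, not part of GiNaC's
# actual configuration.  It shows how the ALIASES tag documented earlier in
# this file can define custom commands: each "name=value" pair makes
# \name (or @name) available in documentation comments, here expanding to a
# titled paragraph, with \n in the value inserting a newline.
#
# ALIASES = "reentrant=\par Reentrancy:\n" \
#           "complexity=\par Complexity:\n"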
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.

In the current sources, using PCH on a system with Linux-style exec-shield-randomize will generally crash the compiler in some horrible manner.

It turns out that, at least with Linux kernel 2.4.22 in Fedora Core 1, the mincore function will not detect when a memory page is being used by an anonymous mapping, and that the kernel will permit an mmap with MAP_FIXED to overlap such a mapping. The garbage collectors happen to use anonymous mappings. The effect is that the code in gt_pch_restore() will not notice the overlap with the garbage collector and will succeed in forcing the mapping. This of course leads to a horrible crash, as the same memory page is being used in two very different ways.

This patch fixes that problem by adding a function ggc_pch_overlap to the garbage collectors. That function returns whether the given address range overlaps any page the collector has allocated. The function is only called when mmap without MAP_FIXED fails to return the expected address, so the overhead should not matter.

This patch also explicitly checks for the Linux kernel procfs file /proc/sys/kernel/exec-shield-randomize. If that file exists and holds a non-zero number, then the kernel has exec-shield-randomize enabled. In that case, we give a mildly informative error message.

I also added an abort() after the call to sorry(). There is no chance of success at this point, and a good chance of a horrible crash. I don't see any reason for the compiler to continue.

This patch also includes two minor corrections. Don't munmap the address until we are sure that we are going to mmap a new address. When reading from a file, read it to the correct memory address.

OK for mainline?

Ian

2004-03-03  Ian Lance Taylor  <ian@wasabisystems.com>

	* ggc-common.c (gt_pch_restore): If we didn't get the address we
	wanted, call ggc_pch_overlap to check whether the garbage
	collector is using the pages we need.
	Don't munmap the address unless we are going to try to mmap
	again.  When reading from the file, read into the memory we
	allocated.  When handling a relocation failure, check for Linux
	kernel exec-shield-randomize, and give a better error message
	if it is set.
	* ggc.h (ggc_pch_overlap): Declare.
	* ggc-page.c (ggc_pch_overlap): New function.
	* ggc-zone.c (ggc_pch_overlap): New function.

Index: ggc-common.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/ggc-common.c,v
retrieving revision 1.83
diff -p -u -r1.83 ggc-common.c
--- ggc-common.c	3 Mar 2004 11:25:47 -0000	1.83
+++ ggc-common.c	4 Mar 2004 04:29:22 -0000
@@ -611,34 +611,43 @@ gt_pch_restore (FILE *f)
 #if HAVE_MINCORE
       if (addr != mmi.preferred_base)
 	{
-	  size_t page_size = getpagesize();
-	  char one_byte;
-
-	  if (addr != (void *) MAP_FAILED)
-	    munmap (addr, mmi.size);
-
-	  /* We really want to be mapped at mmi.preferred_base
-	     so we're going to resort to MAP_FIXED.  But before,
-	     make sure that we can do so without destroying a
-	     previously mapped area, by looping over all pages
-	     that would be affected by the fixed mapping.  */
-	  errno = 0;
+	  /* We didn't get the address we wanted.  See if it might be
+	     possible to force it by using MAP_FIXED.  Check whether
+	     the garbage collector has allocated any pages in the area
+	     we need.  If not, use mincore to check whether any of the
+	     pages have been allocated for other purposes.  If not,
+	     try using MAP_FIXED.  (We have to check the garbage
+	     collector pages separately because the Linux kernel, at
+	     least around 2.4.20, returns ENOMEM for an anonymous
+	     mmap, such as the ones used by the garbage
+	     collector.)  */
+	  if (! ggc_pch_overlap (mmi.preferred_base, mmi.size))
+	    {
+	      size_t page_size = getpagesize ();
+	      char one_byte;
+
+	      errno = 0;
 
-	  for (i = 0; i < mmi.size; i+= page_size)
-	    if (mincore ((char *)mmi.preferred_base + i, page_size,
-			 (void *)&one_byte) == -1
-		&& errno == ENOMEM)
-	      continue; /* The page is not mapped.  */
-	    else
-	      break;
+	      for (i = 0; i < mmi.size; i += page_size)
+		if (mincore ((char *) mmi.preferred_base + i, page_size,
+			     (void *) &one_byte) == -1
+		    && errno == ENOMEM)
+		  continue; /* The page is not mapped.  */
+		else
+		  break;
 
-	  if (i >= mmi.size)
-	    addr = mmap (mmi.preferred_base, mmi.size,
-			 PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_FIXED,
-			 fileno (f), mmi.offset);
+	      if (i >= mmi.size)
+		{
+		  if (addr != (void *) MAP_FAILED)
+		    munmap (addr, mmi.size);
+		  addr = mmap (mmi.preferred_base, mmi.size,
+			       PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_FIXED,
+			       fileno (f), mmi.offset);
+		}
+	    }
 	}
 #endif /* HAVE_MINCORE */
-
+
   needs_read = addr == (void *) MAP_FAILED;
 
 #else /* HAVE_MMAP_FILE */
@@ -651,7 +660,7 @@ gt_pch_restore (FILE *f)
   if (needs_read)
     {
       if (fseek (f, mmi.offset, SEEK_SET) != 0
-	  || fread (&mmi, mmi.size, 1, f) != 1)
+	  || fread (addr, mmi.size, 1, f) != 1)
 	fatal_error ("can't read PCH file: %m");
     }
   else if (fseek (f, mmi.offset + mmi.size, SEEK_SET) != 0)
@@ -679,7 +688,31 @@ gt_pch_restore (FILE *f)
 	  *ptr += (size_t)addr - (size_t)mmi.preferred_base;
 	}
 
+      /* A Linux kernel with exec-shield-randomize set to a non-zero
+	 value won't work.  Give a nice error message for this common
+	 case.  */
+      {
+	FILE *pf;
+
+	pf = fopen ("/proc/sys/kernel/exec-shield-randomize", "r");
+	if (pf != NULL)
+	  {
+	    char buf[100];
+	    size_t c;
+
+	    c = fread (buf, 1, sizeof buf - 1, pf);
+	    if (c > 0)
+	      {
+		buf[c] = '\0';
+		if (atoi (buf) > 0)
+		  inform ("PCH is not compatible with exec-shield-randomize");
+	      }
+	    fclose (pf);
+	  }
+      }
+
       sorry ("had to relocate PCH");
+      abort ();
     }
 
   gt_pch_restore_stringpool ();
Index: ggc.h
===================================================================
RCS file: /cvs/gcc/gcc/gcc/ggc.h,v
retrieving revision 1.66
diff -p -u -r1.66 ggc.h
--- ggc.h	3 Mar 2004 11:25:48 -0000	1.66
+++ ggc.h	4 Mar 2004 04:29:22 -0000
@@ -200,6 +200,10 @@ extern void ggc_pch_finish (struct ggc_p
    parameter.  Set up the GC implementation for the new objects.  */
 extern void ggc_pch_read (FILE *, void *);
 
+/* Return whether any of the pages we have allocated overlap with the
+   memory range BASE to SIZE.  */
+extern int ggc_pch_overlap (char *, size_t);
+
 
 /* Allocation.  */
Index: ggc-page.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/ggc-page.c,v
retrieving revision 1.90
diff -p -u -r1.90 ggc-page.c
--- ggc-page.c	3 Mar 2004 11:25:47 -0000	1.90
+++ ggc-page.c	4 Mar 2004 04:29:23 -0000
@@ -2404,3 +2404,27 @@ ggc_pch_read (FILE *f, void *addr)
   /* Update the statistics.  */
   G.allocated = G.allocated_last_gc = offs - (char *)addr;
 }
+
+/* Return whether any of the pages we have allocated overlap with the
+   memory range BASE to SIZE.  */
+
+int
+ggc_pch_overlap (char *base, size_t size)
+{
+#ifdef USING_MMAP
+  unsigned int order;
+  page_entry *p;
+
+  for (order = 2; order < NUM_ORDERS; order++)
+    {
+      for (p = G.pages[order]; p != NULL; p = p->next)
+	if (p->page + p->bytes > base && p->page < base + size)
+	  return 1;
+    }
+  for (p = G.free_pages; p != NULL; p = p->next)
+    if (p->page + p->bytes > base && p->page < base + size)
+      return 1;
+#endif
+
+  return 0;
+}
Index: ggc-zone.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/ggc-zone.c,v
retrieving revision 2.14
diff -p -u -r2.14 ggc-zone.c
--- ggc-zone.c	4 Mar 2004 04:25:12 -0000	2.14
+++ ggc-zone.c	4 Mar 2004 04:29:25 -0000
@@ -1412,3 +1412,29 @@ ggc_pch_read (FILE *f, void *addr)
   entry->next = entry->zone->pages;
   entry->zone->pages = entry;
 }
+
+int
+ggc_pch_overlap (char *base, size_t size)
+{
+#ifdef USING_MMAP
+  struct alloc_zone *zone;
+
+  for (zone = G.zones; zone; zone = zone->next_zone)
+    {
+      page_entry *p;
+
+      for (p = zone->pages; p; p = p->next)
+	{
+	  if (p->page + G.pagesize > base && p->page < base + size)
+	    return 1;
+	}
+      for (p = zone->free_pages; p; p = p->next)
+	{
+	  if (p->page + G.pagesize > base && p->page < base + size)
+	    return 1;
+	}
+    }
+#endif
+
+  return 0;
+}
0; +}
using System;
using Evolution;

class MyDemoClass
{
    public static void Main(string[] args)
    {
        Evolution.Book myBook = new Evolution.Book();
        Evolution.BookQuery bq;

        bq = Evolution.BookQuery.AnyFieldContains("A Name");

        Evolution.Contact[] con;

        myBook.Open(false);
        con = myBook.GetContacts(bq);

        foreach (Evolution.Contact c in con)
            Console.WriteLine(c.FullName);
    }
}

You can now replace "A Name" with a name in your contact listing in Evolution and see that contact's full name on the console.

NOTE (2009-05-19): This wasn't the first post, but I left it here because I needed a place to scribble some notes down. I had just gotten a Blogger account back in late 2006 and was using Drivel at the time to post to Blogger. I was just putting notes that I needed to carry around with me on the blog, not any really useful information except for me. I don't know what project I was working on when I posted this note to my blog. In late 2008 I remembered my Blogger account, and in 2009 I chose to use it more than my Livejournal account. Before I started using the account again I decided to clean it all up and remove a lot of the posts that were just notes and didn't really make any sense at all. This post somehow survived the great purge and is one of the few posts still left from those days. I am leaving it here just because it's a little piece of my history.
React Best Practices and Useful Functions

Lately React has become the tool developers use to create everything from single-page applications to mobile applications. But since I started going deeper into React I have seen all these "cool" node modules that are extremely badly developed. They follow no rules, the components are way too big, they use state for pretty much everything, and they don't leverage dumb components. Anyone with enough experience understands how much of a hassle this is to maintain and how much load it puts on the browser if you render every component every time. In this article I will walk you through React best practices, both on how to set up React and how to make it extremely fast. Please note I will keep updating this article as new practices emerge.

Before you start reading, note that React is a functional programming (FP) library. If you don't know what FP is, please read this Stack Exchange response.

Use ES6 (transpiled with Babel)

ES6 will make your life a lot easier. It makes JS look and feel more modern. One great example in ES6 is generators and promises. Remember when you had to do a bunch of nested callbacks to make an asynchronous call? Well, now I am glad to welcome you to synchronous-looking asynchronous JS (yes, it's as cool as it sounds). Generators are a great example of this: a pyramid of nested callbacks turns into flat, top-to-bottom code.

Use Webpack

The decision to use Webpack is simple: hot reloading, minified files, node modules, and you can split your application into small pieces and lazy load them. If you are planning on building a large-scale application I recommend reading this article to understand how lazy loading works.

Use JSX

If you come from a web development background JSX will feel very natural. But if your background is not in web development, don't worry too much; JSX is very easy to learn. Note that if you don't use JSX the application will be harder to maintain.
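The callback-to-generator refactor described under "Use ES6" can be sketched with a tiny runner. The runner below is illustrative (libraries such as co implement the same idea), and fetchUser/fetchPhotos are made-up stand-ins for real async calls:

```javascript
// A minimal runner that drives a generator which yields promises.
function run(genFn) {
  const gen = genFn();
  function step(value) {
    const result = gen.next(value);
    if (result.done) return Promise.resolve(result.value);
    return Promise.resolve(result.value).then(step);
  }
  return step();
}

// Fake async calls standing in for real API requests.
const fetchUser = (id) => Promise.resolve({ id, name: 'Ada' });
const fetchPhotos = (user) => Promise.resolve([`${user.name}-1.png`]);

// Nested callbacks flatten into top-to-bottom "synchronous-looking" code:
run(function* () {
  const user = yield fetchUser(1);
  const photos = yield fetchPhotos(user);
  console.log(photos[0]); // prints Ada-1.png
});
```

This is essentially what async/await later standardized; the generator version shows the mechanics explicitly.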
Always look at your bundle size

One tip for making your bundle much smaller is to import directly from the node module root path.

Do this: import Foo from 'foo/Foo'
instead of: import {Foo} from 'foo'

Keep your components small (very small)

A rule of thumb is that if your render method has more than 10 lines, the component is probably way too big. The whole idea of using React is code reusability, so if you just throw everything into one file you are losing the beauty of React.

What needs to have its own component?

When thinking in React you need to think about code reusability and code structure. You would not create a component for a simple input element that only has one line of code. A component is a mix of "HTML" elements that the user perceives as one. I know that this sounds a little bit strange, so let's look at an example: a login screen.

What is the structure behind it? You have a form that contains two inputs, a button and a link. What's wrong here? Repetition: the two inputs share the same structure, so the input should become its own reusable component. Now that is beautiful. I will not get into much detail here, but if you want to continue reading go to Thinking React.

What about State?

Best practice in React is to minimize your state. One thing to keep in mind is to avoid synchronizing state between a child and parent. In the above example we have a form; in that form the state is passed down as props from the view, and every time the user updates the password and username the state is updated in the view, not the form.

Use shouldComponentUpdate for performance optimization

React is a templating language that renders EVERY TIME the props or the state of the component changes. Imagine having to render the entire page every time there is an action: that puts a big load on the browser. That's where shouldComponentUpdate comes in: whenever React is about to render the view, it checks whether shouldComponentUpdate returns true or false.
So whenever you have a component that's static, do yourself a favor and return false. If it is not static, check whether the props/state have changed. If you want to read more on performance optimization, read my article on React Perf.

Think about immutability

If you are coming from Scala or other high-performance languages, immutability is a concept you are probably really familiar with. If you are not familiar with the concept, think of immutability like having twins: they are very much alike and they look the same, but they are not equal.

With plain assignment, object2 is created as a reference to object1, which means that in every sense of the word object2 is another way of referencing object1. When you instead create object3 with Object.assign, you create a new object that has the same structure as object1: Object.assign clones the structure of object1 onto a new object, therefore creating a new reference, so when you compare object1 to object3 they are different.

Why is this significant? Think of performance optimization. I mentioned above that React renders every time the state of a component changes. When using the shouldComponentUpdate function, instead of doing a deep check to see whether all the attributes are different, you can simply compare the objects. If you want to know more, keep reading this article.

Use smart and dumb components

There is not much to say here other than you don't need to have state in every component. Ideally you will have a smart parent view, and all the children are dumb components that just receive props and don't have any logic in them. Dumb components are also easier to debug because they enforce the top-down methodology that React is all about.

Use PropTypes

PropTypes help you set data validation for components. This is very useful when debugging and when working with multiple developers. Anyone working with a large team should read this article.
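The twin analogy from "Think about immutability" can be made concrete with plain Object.assign (variable names here are illustrative):

```javascript
// Plain assignment copies the reference: object2 IS object1 under another name.
const object1 = { a: 1, b: 2 };
const object2 = object1;

// Object.assign clones the (shallow) structure into a brand-new object.
const object3 = Object.assign({}, object1);

const sameReference = object2 === object1; // true
const newReference = object3 === object1;  // false, even though the shapes match
console.log(sameReference, newReference);

// Mutation through object2 is visible through object1; the clone is unaffected.
object2.a = 99;
console.log(object1.a); // 99
console.log(object3.a); // 1
```

This is why replacing objects instead of mutating them lets shouldComponentUpdate get away with a cheap reference or shallow check instead of a deep compare.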
Always bind functions in the constructor

Whenever working with components that use state, try to bind their methods in the constructor. Keep in mind that with ES7 class properties you can bind functions without the constructor, like this: someFunction = () => {}

Use Redux/Flux

When dealing with data you want to use either Flux or Redux. Flux/Redux allows you to handle data easily and takes the pain away from handling a front-end cache. I personally use Redux because it forces you to have a more controlled file structure. Keep in mind that sometimes it is very useful to use Redux/Flux, but you might not need to keep the entire state of your application in one plain object. Read more about it here.

Use normalizr

Now that we are talking about data, I am going to take a moment and introduce you to the holy grail of dealing with complex data structures. normalizr flattens your nested JSON objects into simple structures that you can modify on the fly.

File structure

I am going to make a blunt statement here and say that I have only seen two file structures with React/Redux that are easy to work with.

Use Containers (deprecated — see the 2017 update in the next section)

The reason you want to use containers that pass down the data is that you want to avoid having to connect every view to a store when dealing with Flux/Redux. The best way to do this is to create two containers: one containing all secure views (views that need authentication) and one containing all insecure views. The best way to create a parent container is to clone the children and pass down the desired props.

Use Templates instead of Containers

While working with containers and cloning the props down to the views, I found a more efficient way to do this. The way I recommend now is, instead of using containers, to create a BaseTemplate that is extended by an AuthenticatedTemplate and a NotAuthenticatedBaseTemplate.
In those two templates you will add all the functionality and state that is shared across all the non-authenticated/authenticated views. In the views, instead of extending React.Component you extend the template. This way you avoid cloning any objects and you can filter the props that are sent down the component tree.

Avoid Refs

Refs will only make your code harder to maintain. Also, when you use refs you are manipulating the DOM directly, which means the component will have to re-render the whole DOM tree.

Use Prop validation

PropTypes will make your life a lot better when working with a large team. They allow you to debug your components cleanly, and they make your debugging a lot easier. In a way you are setting standard requirements for a specific component.

Other comments

I want to emphasize that you should split all of your components into individual files.

Use a router: there is not much to say here other than if you want to create a single-page app you need a router. I personally use React Router.

If you are using Flux, remember to unbind the store's change-event listeners. You don't want to create memory leaks.

You can also change the title of your application dynamically (e.g. by setting document.title).

This repo is a great example of React/Redux authentication.

What's new in 2017

Get ready for a major rewrite. The creators of React are now rebuilding its core. It has better performance, better animations, and more APIs you can leverage to build large applications. You can read more here.

Useful helper functions

The article closes with a set of helpers: an object comparison function (usage: check whether state or props have changed in shouldComponentUpdate), a dynamic reducer factory, constant creation, a renderIf helper (usage: render a component only if a condition is true), changing a state value dynamically, and my webpack.config.js.

Keep reading about React high-performance applications here.
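The "create reducer dynamically" helper mentioned above is commonly written as a lookup-table factory; here is a minimal sketch (the action types and handlers are illustrative, not from the article):

```javascript
// A generic reducer factory: a plain lookup table instead of a switch.
function createReducer(initialState, handlers) {
  return function reducer(state = initialState, action) {
    return Object.prototype.hasOwnProperty.call(handlers, action.type)
      ? handlers[action.type](state, action)
      : state;
  };
}

// Example reducer built from the factory.
const counter = createReducer(0, {
  INCREMENT: (state, action) => state + action.payload,
  RESET: () => 0,
});

console.log(counter(undefined, { type: 'INCREMENT', payload: 2 })); // 2
console.log(counter(5, { type: 'UNKNOWN' })); // 5 (unknown actions pass through)
```

The factory keeps each case a small pure function and avoids ever-growing switch statements in large Redux codebases.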
If you liked this article, please click that green 👏 below so others can enjoy it. Also, please ask questions or leave notes with any useful practices or functions you know. Follow me on Twitter @nesbtesh
Description

Konqueror 4.2.4 does not like this configuration. If you try

    from MoinMoin.auth.http import HTTPAuthMoin
    auth = [HTTPAuthMoin(autocreate=True)]

then Konqueror shows the HTML source instead of the rendered result. It seems to work without a problem in Opera and Firefox.

Steps to reproduce

- do this...

Example

Component selection

- general

Details

Workaround

Discussion

Other browsers like Opera do show a login window, and you don't see the content before you log in. If you select Cancel you get the message text "Please log in first." But Konqueror shows the complete source of the page. That is a Konqueror bug. See "When accessing a password-protected site, Konqueror mis-identifies content as text/html, even if it isn't." ()

Plan

- Priority:
- Assigned to:
- Status:
What is precedence and associativity of C operators

Precedence and associativity determine the priority with which operators perform their operations. We use a number of operators when we write a program, and operator precedence determines which operator works first. Let's try to understand precedence and associativity with an example. Suppose you have to solve the following problem:

10 * 5 + 9 / 3 - 7 = ?

Now, what will you do? We know that the * and / operators work first, so we simplify the expression as follows:

50 + 3 - 7 = 53 - 7 = 46

But if we performed the + and - operators first, we would get a different result, which is not correct:

10 * 5 + 9 / 3 - 7 = 10 * 14 / (-4) = -35 (which is wrong)

In a programming language, the precedence of operators determines which operator works first. This is called operator precedence. But if the precedence of two or more operators is the same, then we have to consider associativity. Associativity determines whether operators of equal precedence are evaluated left to right or right to left. Different operators have different precedence and associativity.

Table of precedence and associativity

Here is a table of the precedence and associativity of commonly used operators, from highest precedence to lowest:

() [] . ->             left to right
! ~ ++ -- (unary)      right to left
* / %                  left to right
+ -                    left to right
< <= > >=              left to right
== !=                  left to right
&&                     left to right
||                     left to right
?:                     right to left
= += -= *= /= %=       right to left
, (comma)              left to right

Example program to learn precedence and associativity in C language

Here we have given a C program to help you understand precedence and associativity. Try to follow it: although we use the same operators and values each time, we get a different result every time because of precedence and associativity.
#include <stdio.h>

int main() {
    int a = 30;
    int b = 20;
    int c = 10;
    int d = 5;
    int result;

    result = (a - b) * c / (d - a);   // (30 - 20) * 10 / (5 - 30) = -4
    printf("%d\n", result);

    result = ((a - b) * c) / d - a;   // ((30 - 20) * 10) / 5 - 30 = -10
    printf("%d\n", result);

    result = a - b * c / d - a;       // 30 - 20 * 10 / 5 - 30 = -40
    printf("%d\n", result);

    result = a - (b * c) / (d - a);   // 30 - (20 * 10) / (5 - 30) = 38
    printf("%d\n", result);

    return 0;
}

Output of this program:

-4
-10
-40
38
THE OMAHA DAILY BEE: THURSDAY, SEPTEMBER 15, 1904.

POPULISTS SEEKING FUNDS

Committee Makes an Appeal to the Rank and File to Contribute Cash.

MAKES A PLEA FOR LEGISLATIVE TICKET

Street Railway Men Draw Small Result from Contest with Lincoln Police.

(From a Staff Correspondent.)

LINCOLN, Sept. 14.—(Special.)—The populist state committee expects to secure some financial assistance from the rank and file of the party, and a letter issued today from headquarters acquainted the rank and file with this fact. The letter also makes a strong plea for the legislature and says nothing of Berge, giving out the impression that Chairman Allen of the democratic committee is getting his hands on the populist committee.

Street Railway Men Fined.

The police department and the street car employes had their innings today in police court, and nothing else was doing there. At the morning session Vivian Waller, a conductor, was fined $5 and costs for running his car within less than 100 feet of the car in front of him, and with the consent of the city attorney and police the other thirteen men arrested were discharged. The case will be taken back to the district court on error, Attorney Allen for the traction company holding that every street corner is a station for the car company, while Judge Cosgrave and City Attorney Strode held that the company's station was merely the headquarters where the cars are kept. The city ordinance provides that cars near a station may be run closer than 100 feet apart. The company agreed to instruct its men in regard to the ordinances and to see that they were not violated in the future, and for that reason no other prosecutions were made. Waller, "Doc" Payne and William Ross, however, were held to answer to charges of abusing an officer, disorderly conduct and assault and battery, respectively.

Both Candidates to Speak.

Governor Mickey and Candidate Berge have both been invited to speak at an old settlers' picnic at Bloomfield, Knox county, September 27, Governor Mickey to occupy the forenoon and Mr. Berge the afternoon. The invitation to Mr. Berge was received at the populist headquarters this morning, and in it it was stated that Governor Mickey had also been invited.

Tax Case Will Be Appealed.

The brief in the Nemaha county tax case in support of an appeal from the decision of Judge Kelligar will be filed in the supreme court probably tomorrow.

Reception for Tibbles.

Members of the Laymen club, an organization of lawyers and university professors, will on Tuesday night next give a reception at the home of Judge England in honor of T. H. Tibbles, candidate for vice president on the populist ticket. About fifty prominent people of Lincoln will be present, and Mr. Tibbles will read his speech of acceptance, which he now has in preparation.

COURT HOUSE CORNERSTONE LAID

Occasion Is Made a Festal One by People of Rushville.

RUSHVILLE, Neb., Sept. 14.—(Special.)—The cornerstone of the new courthouse for Sheridan county was laid yesterday amid joy and festivities. At 10:30 o'clock a procession was formed on Main street, when all the fraternal societies, the local hose company and several floats moved through the principal streets to the strains of the Rushville Cornet band. An immense crowd was present to watch the procession and take part in the ceremonies, which were in charge of the Masonic fraternity. Robert French of Kearney, grand custodian, laid the stone, the beautiful ritual and ceremonies of the Masonic order being carried out with a fidelity and thoroughness that excited the admiration of many who were used to such things. After the stone was laid Judge Westover made an appropriate and well chosen speech and was followed by Attorney W. W. Wood at length.

A grand barbecue followed, when several roasted oxen, sheep and hogs were served out to the hungry crowd. Coffee and bread were likewise served free, and after a hearty meal the crowd made its way to the base ball ground, where a ball game was played between Pine Ridge and Rushville, resulting in a victory for the former, 28 to 4. After the ball game various sports were held, lasting till after supper time. After supper a dance was held in the opera house.

Find Remains of an Infant.

ALLEN, Sept. 14.—(Special.)—Last Sunday while William McCloud was camped in a deep gulch west of town he discovered an old horseshoe nailed to a board. The board had been sanded to preserve its lasting qualities, which attracted Mr. McCloud's attention, and, pulling it up, he discovered it marked the burial place of a box about two by three feet. Using his knife for digging, he unearthed the box, which had been covered with sheet iron, the corners being reinforced to make it extra strong. Time had so rotted the woodwork that it fell to pieces as soon as touched.

BUILDING FOOD

To Bring the Babies Around.

When a little human machine (or a large one) goes wrong, nothing is so important as the selection of food which will always bring it around again.

"My little baby boy 18 months old had pneumonia, then came brain fever, and no sooner had he got over these than he began to cut teeth and, being so weak, he was frequently thrown into convulsions," says a Colorado mother. "I decided a change might help, so took him to Kansas City for a visit. When we got there he was so very weak when he would cry he would sink away and seemed like he would die.

"When I reached my sister's home she said immediately that we must feed him Grape-Nuts, and, although I had never used the food, we got some and for a few days gave him just the juice of Grape-Nuts and milk. He got stronger so quickly we were soon feeding him the Grape-Nuts itself, and in a wonderfully short time he fattened right up and became strong and well.

"That showed me something worth knowing and, when later on my girl came, I raised her on Grape-Nuts and she is a strong, healthy baby and has been. You will see from the little photograph I send you what a strong, chubby youngster the boy is now, but he didn't look anything like that before we found this nourishing food. Grape-Nuts nourished him back to strength when he was so weak he couldn't keep any other food on his stomach." Name given by Postum Co., Battle Creek, Mich.

All children can be built to a more sturdy and healthy condition upon Grape-Nuts and cream. The food contains the elements nature demands, from which to make the soft gray filling in the nerve centers and brain. A well fed brain and strong, sturdy nerves absolutely insure a healthy body.

Look in each package for the famous little book, "The Road to Wellville."

Why can't we come over to your house and play any more?

Because papa gets so mad when we make a little bit of noise.

What makes him that way?

Mamma says it's dyspepsia makes him act so crazy.

That's about the way it strikes the small boy. The dyspeptic has no idea of his own unreasonableness or harshness. Little things are magnified and seem to justify his quick anger. There's health for the dyspeptic and happiness for the family by the use of Dr. Pierce's Golden Medical Discovery. It cures diseases of the stomach and other organs of digestion and nutrition, and restores perfect health and strength, by enabling the perfect digestion and assimilation of food.

$3,000 FORFEIT will be paid by the World's Dispensary Medical Association, Proprietors, Buffalo, N.
Y., if they cannot show the original signature of the individual volunteering the testimonial below, and also of the writers of every testimonial among the thousands which they are constantly publishing, thus proving their genuineness.

"I have taken one bottle of Dr. Pierce's Golden Medical Discovery for indigestion and liver complaint," writes Mr. C. M. Wilson, of Yadkin College, Davidson Co., N. C. "Have had no bad spells since I commenced taking your medicine; in fact, have not felt like the same man. Before I took the 'Golden Medical Discovery' I could not eat anything without awful distress, but now I can eat anything I wish without having unpleasant feelings."

Dr. Pierce's Pleasant Pellets cleanse and regulate the bowels.

Digging inside the box, the bones of what is supposed to be an infant were found, but badly decomposed. They have been left at the doctor's, and further excavations will be made, as it is thought a treasure of some kind may be also buried there.

Beatrice Man Is Touched.

BEATRICE, Neb., Sept. 14.—(Special Telegram.)—H. F. Sells, a resident of this locality, was touched for $49 in cash, two money orders for $70 and $90 and three railroad tickets at the Rock Island passenger depot this afternoon just before he and his family boarded the train for a trip to California. Barnum & Bailey's circus exhibited in the city today and the theft is supposed to have been committed by pickpockets following the show. There is no clew to the thieves. The management of the show refused to pay the license fee of $50, claiming that the grounds were located outside of the city limits. The license was finally paid under protest, but the parade was delayed several hours as a result.

York County Fusionists.

YORK, Neb., Sept. 14.—(Special.)—The democrats and populists held a county convention yesterday at the court house, and, as in years past, fused on candidates. Charles Keckley, who several years ago espoused fusionism and recently announced that he was a democrat, was nominated by the democrats for the legislature. Robert James, a farmer living near York, was nominated for the legislature by the populists. Joseph Hoover of Benedict received the nomination for county attorney and Charles Messner, a merchant of York, was nominated for county supervisor in this township. At McCool Junction next week the fusionists will nominate some one for state senator.

Find Murdered Man's Gun.

SIDNEY, Neb., Sept. 14.—(Special Telegram.)—The pistol belonging to Frank Wiser, the murdered night watchman, was found this morning at Pine Bluffs, Wyo., near the place indicated by McIntyre, one of the murderers. A search is now being made for his watch and chain. District Judge H. M. Grimes will be asked to call a special session of court to try the accused at once. The men are fearful of a mob and today begged Sheriff Lee to afford them every protection in case an attempt was made to storm the jail.

News of Nebraska.

PLATTSMOUTH, Sept. 14.—A cold wave struck Plattsmouth this morning and the mercury went down to 40 degrees above.

PLATTSMOUTH, Sept. 14.—Mrs. Lutie K. Hatch departed today for her home in Jacksonville, Ill., after a pleasant visit with her sister, Mrs. A. W. Atwood, and other friends in this city, Omaha and Lincoln.

PLATTSMOUTH, Sept. 14.—Mr. and Mrs. Stoutenborough have gone to St. Louis, where the latter will attend a meeting of the board of directors of the General Federation of Women's clubs, of which she is a member.

NEBRASKA CITY, Sept. 14.—A heavy rain fell in this vicinity last night and today. Over one inch of rain fell in this portion of the county. In the western part of the county two and one-half inches of rainfall is reported.

SEWARD, Sept. 14.—The German-Americans of Seward have made arrangements to hold their second annual picnic at the Seward county fair grounds on Friday, September 23. A program of sport and entertainment is being arranged.

SEWARD, Sept. 14.—Hon. Joseph G. Cannon, speaker of the national house of representatives, and Hon. James E. Watson, member of congress from Indiana, will address the people of Seward and adjoining counties on Thursday afternoon, September 29, at 2:30 o'clock.

NEBRASKA CITY, Sept. 14.—Last Saturday there was born to Mr. and Mrs. Bell, who reside twelve miles southwest of this city, triplets, all girls. Two of them died last night. The other is doing well and will probably live. This is the first case of this kind reported in this county for several years.

NEBRASKA CITY, Sept. 14.—Mrs. Hazel K. Koser was granted a divorce in the district court today from her husband, George S. Koser, and was restored to her maiden name, Hazel K. Richardson. All the parties are well-to-do society people. Mrs. Koser is a stepdaughter of Captain Logan Engart, one of the wealthiest men in the county.

COLUMBUS, Sept. 14.—B. P. Duffy has filed a petition in the district court, wherein he seeks to recover damages in the sum of $1,000 from August Wagner. Both parties in this action are practicing attorneys in this city. Duffy bases his alleged damages on the result of an assault made by Wagner on July 1, while they were trying a case before Justice O'Brien. Duffy claims that his hearing has been permanently affected.

COLUMBUS, Sept. 14.—Vol. 1, No. 1, of the Columbus Daily Journal made its appearance last evening. It is a four-page paper and contains some of the very latest telegraphic news. F. H. Abbott, the publisher, says he has the promise of a very flattering patronage, and it is generally believed that the daily has come to stay. The last daily published here was six years ago during the Spanish war, and was under the management of J. L. Paschal.

PLATTSMOUTH, Sept. 14.—The young man who gave his name as John E.
Welsn, when arrested in Uucoln last week by John DeLong, a special agent for the Mis souri Purine., pn the charge of having robbed William Letter of Eimwood, while on a passenger train enroute from that town to Lincoln, was given a preliminary hearing before Justice M. Arcner yester day utternoon, and was bound over to the district court, his bond being fixed in the sum of 6vo. COLUMBUS, .Sept. 14. Captain Wagner of Company K, Nebraska National Guard, has been missing property belonging to the company for some time and believes he has at last caught the thieves, but refuses to furnish their names until be hears from General Culver, to whom the matter has been referred-. Legglns, shirts, shoes, etc, to the value of about IjO. have been nilssedT Two members of the company are said 10 be the culprits and they may have to an swer In tae Federal court (or their shortcoming. Anburi Official Hat Three Suit in Progress at 0e Time. ALLEGES EFFORT MADE TO BRIBE HIM Sympathisers of Men . Under Arrest Attempt to Assault oncer and Warrants Are Ont for All Coneerned, AUBURN. Neb., Sept. 14-(8peclal)-Game Warden Ranney filed complaint against Ed. Mlnick, L. D. Boatman, George Clark, and Ed Lewellen for fishing In the little Ne maha liver with traps and selns, and alt of said parties were arrested on warrants Issued by the county Judge. The defend ants took a continuance for thirty days. Before the 'defendants were arrested but after they knew of the Intention of the warden to do so, Wesley Worthen went to some of the defendants and told them he could fix the matter with the game war den, aad obtained from Boatman the sum of 110.00 for this purpose. The warden took the money and then went before the coun ty Judge and filed a complaint against Worthen for bribing him, and Worthen was arrested and placed under bonds in J the sum of $300.00. 
The fish commissioner was m tne first ward this afternoon and two of the de fendants arrested by htm and about 1f teen of their sympathisers surrounded him, and would have handled him roughly had he not stood them off with his gun. They threatened to tar and feather him, and gave him one hour to get out of town. Instead of going he proceeded to the office of the county attorney and swore out a complaint against all concerned In the at tempt to assault him and to drive him out of town. REPUBLICAN TICKETS X FIELD Legislative Candidates . Named In Varlons Counties. STANTON, Neb.. Sept, 14 (Special.) The republicans of the Seventeenth sena torial district, comprising Stanton and Wayne counties, held their convention here this afternoon. Charles McLoed of this county was nominated for senator by ac clamation. Mr. McLeod Is a prosperous farmer and stock feeder and has an ex tensive and favorable acquaintance In both counties. He is highly educated and In every way competent "To fill the position creditably. A. A. Kearney of Stanton and F. M Gregg of -Wayne were selected as committeemen. NORFOLK. Neb., Sept. 14. (Special Tele gram.) At the republican county conven tion held at Madison this afternoon Jack Koenlgsteln was nominated for attorney, F. W. Richardson for representative and John Harding for commissioner. The nominations were all by acclamation. Every precinct was represented and perfect har mony prevailed. Congressman McCarthy made a short, pointed address. Madison county republicans are united nnd will give a good majority for every candidate on the ticket from Roosevelt down. FAIRBURY. Neb., Sept. 14. (Special Tele gram.) The republican float convention for the legislative district comprising Jefferson and Thayer counties was held here thi afternoon and D. B. Cropsey of Falrbury" was nominated for representative for dis trict No. 36 and W. II. Jennings of Thayer county received the nomination for senator for district No. 23. 
Both nominations were made by acclamation and the nominees are the present incumbents of the respective offices.

SIDNEY, Neb., Sept. 14. — (Special Telegram.) At the republican county convention, held here today, Henry E. Gapen was nominated for county attorney and William Dugger for commissioner of the Third district. Tomorrow the Fifty-fourth representative convention will be held and several candidates are being groomed for the position. The contest promises to be an exciting one.

LIGHT FROSTS IN MANY PLACES — Not Heavy Enough Anywhere to Do Any Damage.

OAKLAND, Neb., Sept. 14. — (Special.) A light frost is reported in this vicinity Tuesday night, but no damage was done.

GRANT, Neb., Sept. 14. — (Special Telegram.) Light frost last night. Not much damage to corn, but garden killed.

NORTH LOUP, Neb., Sept. 14. — (Special.) A heavy frost fell last night. Corn is mostly all safe for the large crop, but some late planted was caught.

ALLIANCE, Neb., Sept. 14. — (Special Telegram.) Following several days of the hottest of the season, the first and a most severe frost visited these parts last night. Owing to the fact that nearly all agricultural products were far enough advanced to be practically safe, little harm was done.

COLUMBUS, Neb., Sept. 14. — (Special.) A light frost visited this section last night, but seems to have done little damage, except to garden truck and plants. The only frost reported is in the valleys, and the highlands escaped. The government thermometer registered only 36 above, but a thin scale of ice was formed in unprotected places. The damage to corn, if any, will be a very small per cent.

LEIGH, Neb., Sept. 14. — (Special.) A white frost fell here this morning. It is considered heavy for the first of the season. The corn as a rule is out of the way of frost, although there is some that will be damaged. Garden truck was mostly killed.

LINWOOD, Neb., Sept. 14. — (Special.) There was a light frost here last night. No damage to speak of, only garden truck being hurt. The thermometer registered 34.

Light Frost in Northern Kansas.
TOPEKA, Kan., Sept. 14. — There was a light frost last night at Leavenworth, Clay Center and Concordia, Kan. As far as known no serious damage was done corn, but late vegetables and peaches probably were injured.

Wants Farmer Brought Back.
DAKOTA CITY, Neb., Sept. 14. — (Special.) County Attorney J. J. McAllister has received from Governor John H. Mickey a requisition for the return to this state from the state of Minnesota of C. H. Smith, who, about the first of this month, left this county, boarding the train at Sioux City, with his chattels billed for Lyons county, Minn. Smith is charged by Joseph Clements of South Sioux City with removing mortgaged property out of the state. Smith for a number of years past had been engaged in farming in this neighborhood, and had a wife and one child. Sheriff Hansen will leave for Minnesota today to find Smith.

Ayer's — You have doubtless heard a great deal about Ayer's Sarsaparilla — how it makes the blood pure and rich, tones up the nervous system, clears the skin, reddens the cheeks, and puts flesh on the bones. Remember, "Ayer's" is the kind you want — the kind the doctors prescribe. Ayer's Pills are a great aid to Ayer's Sarsaparilla. These pills are liver pills, safe for the parent, and just as safe for the children. Purely vegetable. 25 cents.

MAN IS KILLED NEAR DECATUR — Three People Under Arrest Pending the Coroner's Inquest.

DECATUR, Neb., Sept. 14. — (Special Telegram.) Last night David Monett, a quarter-blood Indian, was shot and killed two miles north of this place on the reservation. A party consisting of a man and two women passed through town yesterday and went into camp north of the place. During the evening a number of men called at the camp, Monett being in the party. The men are supposed to have left and Monett returned.
There was some trouble and he was shot As soon as this was done the party hurriedly packed their goods and left. Parties who heard the shot went to the place and found Monett's body. The party in the wagon was overtaken and brought to Decatur, where the coroner's inquest Is now In progress. As far as the evi dence has been taken It Is conflicting and It Is Impossible to tell whether the man or one of the women did the, shooting. To the officers the members of the party gave their names as Ella Brown, Matilda Fleming and Felix Richie. They are all white. The evidence this afternoon showed that Monett when he was shot Was accom panied by a white man, James Merry; that they had been drinking and attempted to enter the wagon where the women were getting ready to go to sleep; the man ordered them to leave and Monett struck at the man who then ran away. The shot was then fired. Ttlchle says that he did the shooting.- Merry says the shot came from the wagon and the women were the only ones In the wagor. at the time The verdict of the Jury was to the effect that David Monett came to his death by a gunshot wound from a gun held In the hands of one of the three people. All have been held to the district court. METHODIST CONFERENCE AT WAYNK Indications Point to n Most Interest ing; Session. WAYNE, Neb., Sept. 14. (Special Tele gram.) The north Nebraska annual con ference met for Its twenty-t.ilrd session at the First Methodist church In this city today. Bishop Isaac W. Joyco. D. D.. I L. D., of Minneapolis, presiding. Rev. E. T. George wus elected secretury. Rev. J. P. Yost statistical secretary nnd Rev. G. A. Luce treasurer. Eighty-five ministers re sponded to the roll call. About 120 pastors and their wives and other visitors are in attendance and 200 are expected during the session. Rev. T. C. Cliff, one of the secre taries of the Board of Church Extension, was present and mude stirring address to the tinference. Rv. H. H. Millard, P. 
E., reaQ his report of the Grand Island district. The) report showed a good growth on the district. ' - At 2 o'clock Rev. S. C. Bronson, professor of pastoral theology of Garrett Biblical Institute, gave his lecture on "The Pastor's Cadetshlp." At 3 p. m. Rev. A. P. George of St. Louis addressed the conference on Sunday school work. Rev. E. S. Dunharris an evangelist of Minneapolis, lead the evangelistic services today. These services will be held every day at 4 o'clock. The conference promises to be one of the best over held. A number of Important questions will be discussed and some decided changes In pastors and presiding elders' districts. The conference opens "with a deeply religious spirit, which promises great good to. aH in attendance. i ii. i ill' i Prairie Fire ; nt Sutherland, SUTHERLAND, Neb!,. Sept. 14. (Special.) One of the most destructive prairie Jlres which ever raged In this section burned over a large scope of country to the south and west last evening and night. A farmer living seven or eight miles southwest of town let the fire get uway from him shortly after noon. The wind was blowing a gale from the northeast and soon a line of fire several miles long was sweeping In a southwesterly direction. Thpre. was enough old grass on the range to burn good, and at times the fire, ran almost as fast as a horse. Ranchmen, farmers and townspeo ple hurried forth and worked for hours in an effort to check the flight of the flames. A strip of territory six miles wide and nearly thirty long was burned over and much valuable range was destroyed. Re ports are too meager to admit of a definite estimation of the damuge done, though It Is thought It wll! reach far up Into the thousands. The Tange of the Taylor sheep ranch was destroyed, together with the hay, of which there were many stacks. G. C. White of this place lost something less than a- hundred tons of hay. Other resi dents along the strip burned over lost more or less hay and feed. 
Otoe Ticket In the Field. SYRACUSE, Neb., Sept. 14. (Special Tel egramsThe Otoe county republican con vention today was largely attended, en thusiastic anj harmonious throughout. The following ticket was nominated: For sen ator, R. W. Jones. Dunbar; representatives, Job Cassel, Nebraska City, and 8. M. Parker, Palmyra; county attorney, A. A. Bischoff, Nebraska City; commissioner Second district, W. M. Ashton, Dunbar. All but the last were named by acclamation. Twenty-two delegates were chosen for the float convention for Cass and Otoe. Reso lutions were Introduced by William Hay ward of Nebraska City and were unanim ously adopted. Indorsing the nomination for United States senator of Hon. Elmer J. Burkett. State Candidates McBrlen, Ga- lusha and Eaton were present and ad dressed the convention, calling forth fre quent applause. York College Opening. YORK. Neb., Sept. 14. (Special.) York college opened this week with a large at tendance and the coming seeslon, which Is the fifteenth year, promises to be the best In the history of the Institution. Returning students find many Improvements made since last year. In making repairs and new buildings amounting to a total expenditure of about $15,000. The large three-story and basement musical conservatory, one of the finest and largest In the west, la about com pleted and ready to occupy. Carnival Opens at Ravenna. RAVENNA, Neb., Sept. 14.-(Special Tele gram.) The Ravenna carnival opened today with a large crowd present to enjoy the attractions. Some exciting horse rac ing, a chariot race and otiier street fair at tractions were witnessed. The Ravenna ball team defeated the .Loup City team In a hotly contested game. Score, t to 0. Snperlntendent Rhodes In Chara-e. ALLIANCE. Neb.. Sept. 14. (Special Tel egram.) Mr. G. W. Rhodes took possession of his offices here today as general super intendent of the Wyoming division of the B. i & M., which embraces the Alliance. Sherl dun A New Sterling division. o)fo)f? 
And many other painful and serious ailments from which most mothers suffer can be avoided by the use of "Mother's Friend." It is a God-send to women, carrying them through their most critical ordeal with safety and no pain. No woman who uses "Mother's Friend" need fear the suffering and danger incident to birth; for it robs the ordeal of its horror and insures safety to life of mother and child, and leaves her in a condition more favorable to speedy recovery. The child is also healthy, strong and good natured. Our book "Motherhood" is worth its weight in gold to every woman, and will be sent free in plain envelope by addressing application to Bradfield Regulator Co., Atlanta, Ga.

FORECAST OF THE WEATHER — Warmer in Nebraska Today; Friday Fair and Cooler in the West Portion.

WASHINGTON, Sept. 14. — Forecast of the weather for Thursday and Friday:
For Nebraska — Warmer Thursday. Friday, fair; cooler in west portion.
For Iowa and Missouri — Fair and warmer Thursday. Friday, fair; warmer in east portion.
For Colorado and Wyoming — Cooler Thursday; warmer in east portion. Friday, fair.
For South Dakota — Fair and warmer Thursday. Friday, fair and cooler.
For Kansas — Warmer and fair Thursday and Friday.

Local Record.
OFFICE OF THE WEATHER BUREAU, OMAHA, Sept. 14. — Official record of temperature and precipitation compared with the corresponding day of the last three years:

                           1904.  1903.  1902.  1901.
Maximum temperature .....   64     49     78     71
Minimum temperature .....   42     44     51     51
Mean temperature ........   53     46     64     61
Precipitation ...........  .00    .58    .00    .01

Record of temperature and precipitation at Omaha for this day since March 1, 1904:
Normal temperature ................ 64
Deficiency for the day ............ 11
Total deficiency since March 1 .... 314
Normal precipitation .............. .10 inch
Deficiency for the day ............ .10 inch
Total rainfall since March 1 ...... 21.27 inches
Deficiency since March 1 .......... 2.89 inches
Excess for cor. period, 1903 ...... 5.58 inches
Deficiency for cor. period, 1902 .. 2.26 inches

Reports from Stations at 7 P. M.
(Illegible figures in the original are marked [?].)

CONDITION OF THE WEATHER.      Temp. 7 p.m.  Max. Temp.  Rainfall.
Omaha, clear ...............        61          64          .00
Valentine, clear ...........        68          6[?]        .00
North Platte, clear ........        64          68          .00
Cheyenne, clear ............        64          68          .00
Salt Lake, clear ...........        80          8[?]        .00
Rapid City, clear ..........        70          72          .00
Huron, clear ...............        56          66          .00
Williston, clear ...........        66          [?]2        .01
Chicago, cloudy ............        64          68          .00
St. Louis, clear ...........        58          64          .00
St. Paul, clear ............        66          68          .00
Davenport, clear ...........        56          58          .00
Kansas City, clear .........        68          [?]2        .00
Havre, clear ...............        80          80          .00
Helena, clear ..............        76          80          .00
Bismarck, clear ............        60          66          .00
Galveston, cloudy ..........        76          78         2.28

L. A. WELSH, Local Forecaster.

[?]0 PER CENT OF THE ADULT POPULATION SUFFER FROM ONE PAINFUL AILMENT.

Think what this means. Imagine the amount of misery that exists and is endured simply because people do not know there is an absolute cure. The only way to cure any complaint is to remove the cause. There are very few diseases or ailments that can be cured by external application, and piles is not one of them. Piles can be cured; the treatment must, however, be internal, for the cause of piles is an internal disorder of the liver or the bowels. Even catarrh of the stomach and bowels can be cured by Dr. Perrin's Pile Specific, The Internal Remedy. Here is an instance of what this practically infallible remedy will do:

Dr. C. A. Perrin, Helena, Mont. — Dear Sir: I have nearly finished the former bottle of Perrin's Pile Specific and am practically well. My case was one which most physicians would have pronounced incurable, as I was afflicted with a dysentery and compelled to go to the toilet room from three to five times each day and each time would bleed from one-half to one teacupful. I had to resort to bandages and absorbent cotton to check the flow of blood, and now the past ten or twelve days there has been no sign of bleeding and my appetite is good; have gained ten pounds in weight and feel like a new lease of life was given me. Very truly yours, T. R. Harris, October 20th, 1902.
Yerington, Nev.

Dr. Perrin's Pile Specific is sold by all reliable druggists at $1.00 the bottle, under an absolute guarantee to refund the money should this great internal remedy fail to cure. Dr. Perrin Medical Co., Helena, Mont.

SPECIAL LOW RATES

The Burlington is the only line with its own train service between Omaha and Chicago and St. Louis, and in view of the many rates to the east applying one way via St. Louis and the other via Chicago, it can arrange the most desirable variable tours of the east.
(Illegible figures in the original are marked [?].)

St. Louis and return, tickets good in chair cars (seats free) and coaches — on sale Sept. 13, 20, 21, 27 and 29 ..... $[?].50
St. Louis and return, daily ..... $13.80
St. Louis and return, one way via Chicago, daily ..... $20.00
Chicago and return, direct or via St. Louis in one or both directions, daily ..... $20.00
Buffalo and Niagara Falls and return, daily ..... $27.15
Mackinac Island and return (via boat from Chicago), daily ..... $19.75
Bay View, Charlevoix, Harbor Springs and Petoskey, Mich., and return (via boat from Chicago), daily ..... $18.75
Denver, Colorado Springs and Pueblo and return, daily ..... $17.50
Denver, Colorado Springs and Pueblo and return, Tuesdays and Saturdays until Sept. 17 ..... $15.00
Hot Springs, S. D., and return, daily ..... $16.40
Hot Springs, Deadwood and Lead, S. D., and return, Tuesdays and Saturdays until Sept. 17 ..... $15.00
Ogden, Salt Lake City and Grand Junction and return, daily ..... $30.50
Yellowstone National Park and return, daily ..... $47.50
Sheridan, Garland and Cody, Wyo., and return, September 15 and 20 ..... $15.00

September 13, 20, 27 and October 11, one fare plus $2.00 for the round trip to many points in Ohio and Indiana.

Daily, from September 15 to October 15, one-way colonist tickets to hundreds of points west and northwest at practically half rates.

Daily to many points in Kentucky, Tennessee, North Carolina and Virginia, half fare plus 60c for the round trip.

World's Fair stopovers at St. Louis permitted on all through tickets.

I can give you all the latest information about excursion rates and furnish, free, illustrated booklets about all excursion resorts. See me or write about your trip. J. B. REYNOLDS, City Pass. Agt., 1502 Farnam St., Omaha.

Home Visitors Excursions — Illinois Central R. R.

ROUND TRIP RATES FROM OMAHA
I can Rive you all the latest Information about excursion rates and furnish, free, illustrated booklets about all excursion resorts. See me or write about your trip. J. B. REYNOLDS. City Pass. Agt., 1502 Farnam St., Omaha, 5 5 HomeVisitors v-4 BV 'i ri 1 xcumons I'M Illinois Central R. U ROUND TRIP RATES FROM OMAHA Hammond, Ind.. tl5.85 Ft. Wayne. Ind 819.20 South Bend. Ind $17.30 Logansport, Ind.... S18.2S Kokome, Ind t18.65 La Fayette, Ind $17.85 Terre Haute, Ind S18.35 Vlnclnnes, Ind ..118.35 Evansville, Ind $18.50 Indianapolis, Ind 819.40 Richmond, Ind 821.00 New Albany, Ind 821.25 M uncle, Ind 810.00 Elkhart, Ind $17.75 Sandusky, Ohio 823.00 Toledo Ohio. 82J.25 Columbus, Ohio.. ... ... $23. 10 Dayton, Ohio. 822.00 Cincinnati. Ohio. $22.50 Lima, Ohio..,.. ..$21.00 Springfield, Ohio. $22.50 Marlon, Ohio...... '..U $22. 50 Findlay, Ohio....... .. $21.5 5 Gallon. Ohld........... $22.75 LouiBvlIle, 'Ky.'.;;..... 421.50 Oweneboro, Ky... $24.90 On sale September C 13, 20, 27, October 11. Return limit 30 days. Correspondingly low rates to many other 'points In Ohio, Indiana. Illinois, Michigan, Wisconsin. Minnesota, Ontario, New York, Ken tucky, Tennessee, North Carolina and Virginia. Full particulars cheerfully given at City Ticket Office, 1402 Farnam Street Omaha, or write. W. H. BRILL. Dist. Pass. Agt., Omaha. Neb. r "r' - - 1-W-- m - f --.11 IB), jamiii lm sfite'E As a superior nerve tonic, well adapted to assist the functions of nature, I consider that Wine of Cardui ha no superior, jjrs. H. E. SOLOMON, 119 North High Btreet, Nashville, Tenn. Wine of Cardui haa mada a womderful change ia my life. ( LILLIAN HILL, 10 Cypress Avenue, Campbell, Cat. I am enjoying spleadid health today and feel that it ia all doe to Wine t;araut. SUSANNA MERKLE. 142 West 58th Street, Chicago, 111. 
TRY IT TODAY. Have you taken all kinds of treatment and failed to secure relief? Have you been told your case is hopeless? Are you discouraged? If Wine of Cardui has done so much for other women, why won't it cure you? Your trouble, though painful, may yield readily to Wine of Cardui. Wine of Cardui never fails to benefit the worst cases of disordered menstruation, bearing down pains and female weakness. The wonderful healing qualities of this medicine have surprised thousands of despondent sufferers by bringing them quickly back to health. It is needless to say that Wine of Cardui has cured thousands of sick women who have been given up as beyond possible recovery. Wine of Cardui is a mild tonic that every woman should take. Every druggist sells $1.00 bottles. GIVE WINE OF CARDUI A TRIAL TODAY.
You may already have estimated the monthly cost of the AWS services you use — for example with the AWS Simple Monthly Calculator. But what if the amount you estimated turns out not to be correct? In that case it would be good to receive a notification as soon as the amount charged for your usage of services exceeds the amount you initially estimated. That wish is granted by the Billing Alarm, which notifies you about the amount charged for the various AWS services you use. In this blog, I will discuss the step-by-step procedure to set up a Billing Alarm for your estimated AWS charges using Amazon CloudWatch.

Procedure for creating a Billing Alarm using Amazon CloudWatch

Step 1: Go to your AWS Management Console.

Step 2: Now, go to your account menu and click on "My Billing Dashboard".

Step 3: On your left-hand side you can now see the option "Billing preferences". Click on "Billing preferences".

Step 4: Here in "Billing preferences", select "Receive Billing Alerts" and click on the "Save preferences" button.

Step 5: Now, open "Manage Billing Alerts" in a new tab. "Manage Billing Alerts" redirects you to Amazon CloudWatch, where we will be setting up our billing alerts.

Step 6: Here, on the left-hand side, go to the "Billing" section.

Step 7: Under "Billing", click on the "Create alarm" button.

Step 8: Now, you need to "Specify metric and conditions" under which the alarm gets triggered. Here, in Metric, the namespace is given as "AWS/Billing" and the "Metric name" is "EstimatedCharges"; give the desired "Currency", for instance USD for US Dollars; then in "Statistic" select "Maximum"; and finally select the "Period" over which the metric is evaluated.
Step 9: Now, moving on to "Conditions", we need to specify the "Threshold type" as "Static" and complete the condition "Whenever EstimatedCharges is…", which defines when the alarm fires. For instance, we select "Greater/Equal", as we want to get a billing alarm when the estimated charge becomes greater than or equal to the specified amount. After that, you are required to fill in "than…", which defines the threshold value beyond which you want the alarm to be triggered. For example, we are providing an amount of 5 USD. Besides these, we also need to make some additional configuration: "Datapoints to alarm", which is set to "1 out of 1", and "Missing data treatment", set to "Treat missing data as missing".

Step 10: After specifying the metric and conditions, click on the "Next" button.

Step 11: In the "Configure actions" section, the "Alarm state trigger" comprises three states. "In alarm" is the state in which the metric has crossed the specified threshold. Similarly, the "OK" state means all is well, and "Insufficient data" means that the alarm has just started, so there is not yet enough data available to report an alarm state.

Step 12: After providing the "Alarm state trigger", we are required to "Select an SNS topic" — the Simple Notification Service topic through which the alarm notification is delivered. Here, you can select the "Create new topic" option.

Step 13: Now, in "Select an SNS topic", select "Create a new topic". Here, you are required to provide a unique topic name. Then, provide your email address in the section named "Email endpoints that will receive the notification…".

Step 14: Here, click on the "Next" button.

Step 15: In the section "Add name and description", you are required to specify a unique name for "Alarm name". You can also mention an "Alarm description", which is not mandatory. Then finally, you can click on the "Next" button.

Step 16: Here, you are required to "Preview and create" the Billing Alarm.
Once you have verified the setup process, you can now click on "Create alarm".

Step 17: Since a message is displayed at the top saying "Some subscriptions are pending confirmation", click on "View SNS Subscriptions" as shown in the image below.

Step 18: After you get redirected to Amazon SNS, you can see that the confirmation is pending. So, go to your Gmail account to confirm your subscription.

Step 19: In your Gmail, click on "Confirm subscription" as shown below.

Step 20: Now, a message "your subscription has been confirmed" will show up. After that, you can go back to Amazon CloudWatch.

Step 21: Here, in the Alarms section in CloudWatch, you can view that the status is in the "OK" state.

Step 22: On the left, click the "Billing" option as shown in the image. Here too, you can see that the status of "FirstBillingAlarm" is in the "OK" state.
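The same alarm can also be created programmatically for repeatable setups. The sketch below is my own illustration, not part of the original walkthrough: it builds the parameter object for the AWS SDK's CloudWatch putMetricAlarm call (the same fields map onto `aws cloudwatch put-metric-alarm`). The alarm name and topic ARN are placeholders; "Receive Billing Alerts" must already be enabled, and billing metrics are only published in the us-east-1 region.

```typescript
// Parameters mirroring the console walkthrough above.
const billingAlarmParams = {
  AlarmName: 'FirstBillingAlarm',
  Namespace: 'AWS/Billing',            // Step 8: the billing metric namespace
  MetricName: 'EstimatedCharges',      // the metric the condition is built on
  Dimensions: [{ Name: 'Currency', Value: 'USD' }],
  Statistic: 'Maximum',                // Step 8
  Period: 21600,                       // 6 hours; billing data is published a few times a day
  EvaluationPeriods: 1,                // Step 9: "1 out of 1" datapoints to alarm
  Threshold: 5,                        // Step 9: fire at >= 5 USD
  ComparisonOperator: 'GreaterThanOrEqualToThreshold', // "Greater/Equal"
  TreatMissingData: 'missing',         // Step 9: treat missing data as missing
  // Steps 12-13: the SNS topic that carries your confirmed email subscription
  AlarmActions: ['arn:aws:sns:us-east-1:123456789012:MyBillingTopic'],
};

// With the AWS SDK for JavaScript this would be sent as:
// new AWS.CloudWatch({ region: 'us-east-1' }).putMetricAlarm(billingAlarmParams).promise();
console.log(`${billingAlarmParams.MetricName} >= ${billingAlarmParams.Threshold}`);
```

Scripting the alarm this way makes it easy to keep the threshold under version control and recreate it in a new account.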
#include <wx/stackwalk.h>

wxStackFrame represents a single stack frame, or a single function in the call stack, and is used exclusively together with wxStackWalker; see there for a more detailed discussion.

GetAddress()
Return the address of this frame.

GetFileName()
Return the name of the file containing this frame, empty if unavailable (typically because debug info is missing). Use HasSourceLocation() to check whether the file name is available.

GetLevel()
Get the level of this frame (deepest/innermost one is 0).

GetLine()
Return the line number of this frame, 0 if unavailable.

GetModule()
Get the module this function belongs to (empty if not available).

GetName()
Return the unmangled (if possible) name of the function containing this frame.

GetOffset()
Return the return address of this frame.

GetParam(n, type, name, value)
Get the name, type and value (in text form) of the given parameter. Any pointer may be NULL if you're not interested in the corresponding value. Returns true if at least some values could be retrieved. This function currently is only implemented under Win32 and requires a PDB file.

GetParamCount()
Return the number of parameters of this function (may return 0 if we can't retrieve the parameter info even though the function does have parameters).

HasSourceLocation()
Return true if we have the file name and line number for this frame.
Unit testing Angular applications with Jest

Let's talk about unit testing in Angular. This will be just an introduction to Angular unit testing and to the benefits of adopting Jest as the test runner and suite for your test execution.

What?

Jest is an integrated testing solution written by Facebook, famous especially in the React world. The key features of Jest are:

- Easy setup
- Instant feedback
- Powerful mocking
- Works with TypeScript
- Snapshot testing

Easy setup means that you need almost zero configuration to get started. Instant feedback comes from the fact that Jest runs only the test files related to your changes. Powerful mocking is available through easy-to-use mock functions.

Why?

The first reason why you would want to start using Jest is speed. Unit tests should run fast.

Comparing runs on a brand new application created with the @angular/cli:

karma-chrome: 14.911s
karma-phantomjs: 13.119s
jest: 4.970s

That's a difference of almost 10 seconds between karma-chrome and Jest: karma-chrome takes roughly three times as long to execute just 1 suite and 3 tests. I'm including PhantomJS in this comparison even though it's not supported anymore, mainly because it's probably the fastest option for running tests in a CI environment (Jenkins, Travis).

Jest doesn't need an actual browser (headless or not) to run the tests (there are some drawbacks to this). If you are looking for a PhantomJS replacement in Continuous Integration environments, you can quickly switch to Jest without the need for any configuration on your CI.

Jest is based upon Jasmine, which is probably the default framework for Angular applications and is included in the CLI.

How?

The first step is to install Jest in your new project:

$ yarn add -D @types/jest jest jest-preset-angular

This installs the types necessary for the TypeScript compiler, the framework itself, and jest-preset-angular, which contains a configuration for Angular projects.
Next step, modify the package.json:

"scripts": {
  "ng": "ng",
  "start": "ng serve",
  "build": "ng build",
  "lint": "ng lint",
  "e2e": "ng e2e",
  "test": "jest",
  "test:watch": "jest --watch"
},
"jest": {
  "preset": "jest-preset-angular",
  "setupTestFrameworkScriptFile": "<rootDir>/src/jest.ts"
}

I'm changing the npm scripts so that they execute the Jest framework, and adding the only configuration (almost zero config) that we need for the project. Because the config sets "setupTestFrameworkScriptFile": "<rootDir>/src/jest.ts", we need to create that file, which Jest will use on startup, inside the src folder of the project:

import 'jest-preset-angular';
import './jest-global-mocks';

The last step — since Jest doesn't run a real browser but is based upon jsdom — is to provide mocks in src/jest-global-mocks.ts for browser-specific APIs such as localStorage.
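This post doesn't show the contents of src/jest-global-mocks.ts, so here is a minimal sketch of what such a file typically contains — the exact mocks are your choice; stub whatever browser APIs your code touches that jsdom doesn't implement. The globalThis fallback is only there so the sketch stays self-contained outside a jsdom environment.

```typescript
// Under Jest this file runs inside jsdom, so `window` exists;
// the fallback keeps the sketch runnable elsewhere too.
const g: any = typeof window !== 'undefined' ? window : globalThis;

// In-memory stand-in for the Web Storage API.
const storageMock = () => {
  let store: { [key: string]: string } = {};
  return {
    getItem: (key: string) => (key in store ? store[key] : null),
    setItem: (key: string, value: string) => { store[key] = String(value); },
    removeItem: (key: string) => { delete store[key]; },
    clear: () => { store = {}; },
  };
};

Object.defineProperty(g, 'localStorage', { value: storageMock() });
Object.defineProperty(g, 'sessionStorage', { value: storageMock() });
// Some Angular code reads computed styles; return an empty stub.
Object.defineProperty(g, 'getComputedStyle', {
  value: () => ({ getPropertyValue: () => '' }),
});
```

Each test run then sees working (but isolated, in-memory) storage instead of jsdom's missing implementation.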
}, "MAD": { "name": "Madrid" ... }, ... } And our service needs to retrieve those airports and order them by they code and also to retrieve a single airport. I will use ramda to reduce the boilerplate of our code and spice up the code with some functional programming. So the first one will be called fetchAll$ public fetchAll$(): Observable<any> { return this.http.get('') .map(toPairs) .map(sortBy(compose( toLower, head ))); } We are transforming the API first into pairs [key, value] using the toPairs function and after we sort all the airports by their codes. The second method of our service needs to fetch a single airport instead: public fetchByIATA$(iata: string): Observable<any|undefined> { return this.http.get('') .map(prop(iata)); } fetchByIATA$ just return the value of the specific key, in this case IATA code, or undefined using prop function from ramda. This is the whole service: import { Injectable } from '@angular/core'; import { HttpClient } from '@angular/common/http'; import { Observable } from 'rxjs'; import 'rxjs/add/operator/map' import { toPairs, compose, head, toLower, prop, sortBy } from 'ramda'; @Injectable() export class SampleService { constructor(private http: HttpClient) {} public fetchAll$(): Observable<any> { return this.http.get('') .map(toPairs) .map(sortBy(compose( toLower, head ))); } public fetchByIATA$(iata: string): Observable<any|undefined> { return this.http.get('') .map(prop(iata)); } } We need here to mock the http response because we don’t want to hit the server for each test and remember we are doing unit tests not end to end or integration tests. With the new HttpClient there's no need to configure mockBackend and this is such a relief. First thing we have to configure TestBed to load our service with HttpClientTestingModule, this will give us the ability to intercept and mock our backend calls. 
beforeEach(() => {
  TestBed.configureTestingModule({
    imports: [ HttpClientTestingModule ],
    providers: [ SampleService ]
  });
});

After TestBed is configured, we can get our service under test and a mocking http controller from the injector:

beforeEach(inject([SampleService, HttpTestingController],
  (_service, _httpMock) => {
    service = _service;
    httpMock = _httpMock;
}));

Now that we have all the setup, we can proceed with the individual tests:

it('fetchAll$: should return a sorted list', () => {
  const mockAirports = {
    DUB: { name: 'Dublin' },
    WRO: { name: 'Wroclaw' },
    MAD: { name: 'Madrid' }
  };

  service.fetchAll$().subscribe(airports => {
    expect(airports.length).toBe(3);
    expect(airports[2][0]).toBe('WRO');
  });

  const req = httpMock.expectOne('');
  req.flush(mockAirports);
  httpMock.verify();
});

The new HttpClient actually reminds me of the AngularJS 1.x way of testing http calls. We define what we expect from the function invocation, and then through the httpMock object we specify which calls we expect and what to return with flush. In the end we call the verify() function to make sure that there are no pending connections.

Here's the link to the full source code. Using Jest, the previous suite takes 99ms.

Another option we can explore is to mock HttpClient directly. In this example we are only interested in the get function, which needs to return an observable containing our data. We will not use TestBed at all, but just a mock function that Jest provides. SampleService requires HttpClient in the constructor:

const http = {
  get: jest.fn()
};

beforeEach(() => {
  service = new SampleService(http);
});

We pass our stub to the SampleService, and TypeScript will complain that there are missing properties for HttpClient.
To overcome this we can either always use as any:

beforeEach(() => {
  service = new SampleService(http as any);
});

or, if you don't want to repeat the as any keyword, create a small function that does it for you and import it later on:

const provide = (mock: any): any => mock;
...
beforeEach(() => {
  service = new SampleService(provide(http));
});

At this point we can write the test like the following:

it('fetchAll$: should return a sorted list', () => {
  http.get.mockImplementationOnce(() => Observable.of(mockAirports));

  service.fetchAll$().subscribe((airports) => {
    expect(http.get).toBeCalledWith('');
    expect(airports.length).toBe(3);
    expect(airports[2][0]).toBe('WRO');
  });
});

We call mockImplementationOnce, which, as the name describes, mocks the function just once, and return an Observable of our mockAirports; then we repeat exactly the same assertions as before.

Here's the link to the full source code. The time of execution here is only 12ms.

Let's recap: a suite running two tests with TestBed takes 99ms in total, while without TestBed it takes only 12ms. Note that in the second test I already used Jest's advanced mock functions: I request a mock function directly through jest.fn(). If you want to read more about those mock functions, please have a look here.

Final comparison

Now that we have those two extra unit tests, let's run the two unit test suites another time, one with Karma + Chrome and the other with Jest, and see the results. I've created the following script to track the time on my local machine:

start=$SECONDS
yarn test -- --single-run
end=$SECONDS
echo "duration: $((SECONDS-start)) seconds elapsed.."

karma + chrome

I've added the karma-verbose-reporter to get more information about the tests; the total result is 22s. For Jest, the script to track time is the following:

start=$SECONDS
yarn test -- --verbose
end=$SECONDS
echo "duration: $((SECONDS-start)) seconds elapsed.."
The --verbose option will track the execution of each test inside the suite.

jest

Jest is still leading with only 5s of execution.

Bonus

I mentioned instant feedback before: Jest will automatically run the tests related to the files that you modified, which is especially good while watching. If we commit the changes to our repository and then modify only app.component.ts while running yarn test:watch, you will notice that only app.component.spec.ts runs.

Conclusion

I believe that frameworks should encourage testing, and a crucial point is their speed. Jest provides all of this, and at Ryanair we've already switched from karma + chrome to Jest.

PS: If you care about unit tests and Angular, we are hiring at Ryanair. Join us.

Originally published at izifortune.com on July 26, 2017.
https://medium.com/@izifortune/unit-testing-angular-applications-with-jest-71e814ede1e0
Hash table parsing algorithm

Author: July, wuliming, pkuoliver

Description: This article has three parts. The first part covers a Top-K interview question from Baidu; the second elaborates on the hash table algorithm; the third presents a very fast hash table lookup algorithm.

------------------------------------

Part one: a Top-K interview question from Baidu

Problem description: A search engine logs every search string a user submits, each string between 1 and 255 bytes long. Suppose there are ten million records. The repetition rate among the query strings is fairly high: although the total is 10 million, there are no more than 3 million distinct queries after removing duplicates. The more often a query string repeats, the more users queried it, and the more popular it is. Find the 10 most popular query strings, using no more than 1 GB of memory.

Prerequisite knowledge: what is a hash table?

A hash table (also called a hash map) is a data structure accessed directly by key value. It maps a key to a position in a table and stores the record there, speeding up lookups. The mapping function is called the hash function, and the array holding the records is called the hash table.

A hash table is actually very simple: the key is converted into an integer by a fixed algorithm called a hash function, that integer is taken modulo the array length, and the remainder is used as an array index; the value is stored in the array slot at that index.
When querying, the hash function is applied to the key again to find the corresponding array index and locate the value, exploiting the array's fast positioning performance (the second and third parts of this article elaborate on hash tables).

Problem analysis: To find the most popular queries, first count the number of occurrences of each query, then find the Top 10 from the statistics. So the algorithm can be designed in two steps.

Step one: counting the queries

There are two methods to choose from:

1. Direct sorting. The first algorithm that comes to mind is sorting: sort all the queries in the log, then traverse the sorted list and count the occurrences of each query. But the problem explicitly requires no more than 1 GB of memory, and ten million records at 255 bytes each occupy 2.375 GB, so this condition is not met. Recalling our data structures course: when the data is too large to fit in memory, we can use external sorting; here, merge sort, which has a good time complexity of O(N log N). After sorting, we traverse the ordered file, count the occurrences of each query, and write the counts back to a file. Overall, sorting costs O(N log N) and the traversal costs O(N), so the total time complexity is O(N + N log N) = O(N log N).

2. Hash table. The first method counted occurrences by sorting, with time complexity O(N log N); is there a way to store the data that gives a lower time complexity?
As the problem states, although there are ten million queries, there are only 3 million distinct ones because of the high repetition rate. At 255 bytes per query, we can fit all of them in memory; now we just need a suitable data structure. Here a hash table is definitely the first choice, because its lookups are very fast, almost O(1) time complexity.

So we have our algorithm: maintain a hash table whose Key is the query string and whose Value is the query's number of occurrences. For each query read, if the string is not in the table, add it with Value 1; if it is already in the table, increment its count. We thus finish processing the massive data within O(N) time complexity.

Compared to algorithm 1, this improves the time complexity by an order of magnitude, to O(N); moreover, it needs only one IO pass over the data file, whereas algorithm 1 needs more IO passes. So algorithm 2 beats algorithm 1 both in time complexity and in operability.

Step two: finding the Top 10

Algorithm one: ordinary sorting. Ordinary sorting algorithms are familiar, so there is no need to repeat them here; just note that their time complexity is O(N log N). In this problem, the 3 million records can be held in 1 GB of memory.

Algorithm two: partial sorting. The problem asks only for the Top 10, so we do not need to sort all the queries. We only need to maintain an array of size 10, initialized with 10 queries sorted from largest to smallest count. We then traverse the 3 million records: each record is compared with the last (smallest) query in the array; if its count is larger, the last element is pushed out of the array and the current query is inserted in its place. When the whole traversal is done, the 10 queries in the array are the Top 10 we are looking for.
It is not difficult to see that the worst-case time complexity of this algorithm is N' * K, where K is the top number required.

Algorithm three: a heap. In algorithm two, we optimized the time complexity from N log N down to N' * K, which is a fairly large improvement, but is there a better way? Analyze it: in algorithm two, each comparison may require O(K) operations, because the elements sit in a linear array and are compared sequentially. Note that the array is ordered, so each lookup could use binary search, dropping the comparison cost to O(log K); the problem that follows is data movement, because inserting still shifts elements. Still, this is an improvement over plain algorithm two.

Based on this analysis: is there a data structure that can both look up quickly and move elements quickly? The answer is yes, and it is the heap. With a heap structure, we can look up and adjust/move in logarithmic time. So the algorithm improves to: maintain a min-heap of size K (10 in this problem), then traverse the 3 million queries and compare each with the root element. This follows the same idea as algorithm two; we simply replace the array with a min-heap, reducing the time to find the target element from O(K) to O(log K). Using the heap data structure, algorithm three brings the final time complexity down to N' log K, a fairly large improvement over algorithm two.

Summary: the algorithm is now complete. In the first step, a hash table counts the occurrences of each query in O(N); in the second step, the heap data structure finds the Top 10 in N' * O(log K). Therefore our final time complexity is O(N) + N' * O(log K), with N = 10 million and N' = 3 million.
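The two-step solution just described (hash-table counting, then a K-sized min-heap) can be sketched in Java as follows. This is an illustrative sketch, not the article's original code; the class name and the sample queries are made up:

```java
import java.util.*;

public class TopKQueries {
    // Returns the k most frequent queries, most frequent first.
    public static List<String> topK(List<String> queries, int k) {
        // Step 1: count each query with a hash table, O(N)
        Map<String, Integer> counts = new HashMap<>();
        for (String q : queries) {
            counts.merge(q, 1, Integer::sum);
        }

        // Step 2: keep a min-heap of size k ordered by count, O(N' log K)
        PriorityQueue<Map.Entry<String, Integer>> heap =
            new PriorityQueue<>(Map.Entry.comparingByValue());
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            heap.offer(e);
            if (heap.size() > k) {
                heap.poll();    // evict the current minimum
            }
        }

        // Drain the heap; reverse so the most frequent query comes first
        LinkedList<String> result = new LinkedList<>();
        while (!heap.isEmpty()) {
            result.addFirst(heap.poll().getKey());
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> queries = Arrays.asList(
            "weather", "news", "weather", "maps", "weather", "news");
        System.out.println(topK(queries, 2)); // [weather, news]
    }
}
```

The heap never grows beyond K entries, so each offer/poll costs O(log K), matching the N' log K bound discussed above.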
If you have a better algorithm, comments are welcome. End of part one.

Part two: the hash table algorithm in detail

What is a Hash?

Hash, usually translated as "hashing", transforms an input of arbitrary length (also called the pre-image) into a fixed-length output, the hash value, through a hash algorithm. This conversion is a contracting mapping: the space of hash values is usually much smaller than the input space, different inputs may hash to the same output, and the input cannot be uniquely determined from the hash value. Simply put, it is a function that compresses a message of arbitrary length into a fixed-length message digest.

HASH is used primarily in encryption algorithms in the information-security field, turning information of different lengths into scrambled 128-bit codes called HASH values. It can also be said that hashing finds a mapping between data content and data storage address.

Arrays are easy to address but hard to insert into and delete from; linked lists are hard to address but easy to insert into and delete from. Can we build a data structure combining the characteristics of both, easy to address and easy to insert into and delete from? The answer is yes, and it is the hash table we are about to discuss. Hash tables have many different implementations; what I explain next is the most common method, the "zipper" (separate chaining) method, which can be understood as an "array of linked lists", as shown:

On the left is an array; each array slot holds a pointer to the head of a linked list, which may be empty or may hold many elements. We assign elements to the different lists according to some characteristic of each element, and when searching we use the same characteristic to find the right list, then find the element within that list. The method that turns the element's characteristic into an array index is the hashing method.
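The "array of linked lists" idea just described can be sketched as a tiny separate-chaining hash table in Java. This is an illustrative sketch; the capacity and the hash choice (plain division hashing) are assumptions for the example:

```java
import java.util.LinkedList;

// A tiny separate-chaining ("zipper") hash table: an array of linked lists.
public class ZipperHashTable {
    private static class Entry {
        final String key;
        int value;
        Entry(String key, int value) { this.key = key; this.value = value; }
    }

    private final LinkedList<Entry>[] buckets;

    @SuppressWarnings("unchecked")
    public ZipperHashTable(int capacity) {
        buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) {
            buckets[i] = new LinkedList<>();
        }
    }

    // Division hashing: hash % table length, kept non-negative.
    private int index(String key) {
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }

    public void put(String key, int value) {
        LinkedList<Entry> bucket = buckets[index(key)];
        for (Entry e : bucket) {
            if (e.key.equals(key)) { e.value = value; return; }
        }
        bucket.add(new Entry(key, value));
    }

    public Integer get(String key) {
        for (Entry e : buckets[index(key)]) {
            if (e.key.equals(key)) return e.value;
        }
        return null;    // key absent
    }
}
```

Colliding keys simply share a bucket's list, which is exactly the zipper method: lookups hash to a bucket in O(1) and then scan only that bucket's chain.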
There is more than one hashing method; the three most common are listed below:

1. Division hashing. The most intuitive one, and the method used in the figure above. Formula:

index = value % 16

Anyone who has studied assembly knows that the modulus is obtained by a division operation, hence the name "division hashing".

2. Squaring (multiplicative) hashing. Computing the index is a very frequent operation, and multiplication saves more time than division (on current CPUs you probably won't feel the difference), so we can replace the division with a multiplication and a shift operation. Formula:

index = (value * value) >> 28

(a right shift, i.e. division by 2^28: shifting left multiplies, shifting right divides)

If the values are distributed fairly evenly, this method gives good results; but in the figure I drew above, every element's computed index came out 0, a miserable failure. Perhaps you have a question: if value is large, won't value * value overflow? The answer is yes, but we don't care about the overflow in this multiplication, because we are not after the product itself, only an index.

3. Fibonacci hashing. The drawback of squaring hashing is obvious; could we find an ideal multiplier instead of using the value itself as the multiplier? The answer is yes:

1. For a 16-bit integer, the multiplier is 40503.
2. For a 32-bit integer, the multiplier is 2654435769.
3. For a 64-bit integer, the multiplier is 11400714819323198485.

Where do these "ideal multipliers" come from? They are related to a law called the golden ratio, whose most famous description is undoubtedly the classic Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, ... (Also, the ratios of the orbital radii of the eight planets of the solar system are surprisingly consistent with the Fibonacci sequence.)
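For the 32-bit case, the Fibonacci multiplier can be sketched in Java as follows. This is an illustrative sketch: the right shift by 28 keeps the top 4 bits, i.e. an index for a 16-slot table like the figures in this article, and the overflow in the multiplication is deliberately ignored, as discussed above:

```java
public class FibonacciHash {
    // 2654435769 is roughly 2^32 divided by the golden ratio; stored as a
    // 32-bit int it wraps around, which is harmless because only the low
    // 32 bits of the product matter.
    private static final int MULTIPLIER = (int) 2654435769L;

    // Maps a 32-bit value to an index in a 16-slot table (top 4 bits).
    public static int fibHash(int value) {
        return (value * MULTIPLIER) >>> 28;
    }

    public static void main(String[] args) {
        for (int v = 1; v <= 8; v++) {
            System.out.println(v + " -> " + fibHash(v));
        }
    }
}
```

Consecutive input values land on well-spread indices, which is exactly why this multiplier distributes better than squaring.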
For our common 32-bit integers, the formula is:

index = (value * 2654435769) >> 28

If you use this Fibonacci hashing, the diagram above becomes much better distributed than with the original modulus-based hashing.

Scope of application: fast lookup and deletion in a basic data structure, where usually the total amount of data can be placed in memory.

Basic principles and key points: the choice of hash function. For strings, integers, and permutations, there are specific corresponding hash methods. Collision handling: one approach is open hashing, also known as the zipper method; the other is closed hashing, also known as open addressing.

Extension: the d in d-left hashing means multiple. Let us first simplify the problem and look at 2-left hashing. 2-left hashing splits a hash table into two halves of equal length, called T1 and T2, equipped with hash functions h1 and h2 respectively. When storing a new key, both hash functions are applied, giving two addresses, h1[key] and h2[key]. We then check position h1[key] in T1 and position h2[key] in T2, see which position already stores more (colliding) keys, and store the new key in the less-loaded slot. If both sides are equally loaded, for example both slots are empty or both store one key, the new key is stored in the left sub-table T1; that is where the "left" in 2-left comes from. When looking up a key, both hashes must be computed and both locations checked.

Problem instance (massive data processing): hash tables have wide application in massive data processing. Consider another Baidu interview question. Title: from massive log data, extract the IP that visited Baidu the most times. Solution: the number of IPs is still limited, at most 2^32, so we can consider hashing the IPs directly into memory and then doing the statistics.

Part three: the fastest hash table algorithm

Next, let us analyze a specific, very fast hash table algorithm.
We start step by step from a simple question: given a huge array of strings and a single string, how do you find out whether the string is in that array, and locate it? The simplest, most honest way is to search from beginning to end, comparing one by one until found. I think anyone who has learned programming can write such a program, but if a programmer delivered it to users, I could only evaluate it in silence; perhaps it really can work, but only that.

The most appropriate algorithm is naturally a hash table. First, some basic knowledge: a Hash is usually an integer; through some algorithm, a string can be "compressed" into an integer. Of course, in no case can a 32-bit integer map back to a unique string, but the probability that two strings compute to the same hash value is very small. Let us look at the hash algorithm used in MPQ archives.

Function one: the following function generates a table named cryptTable[0x500]:

unsigned long cryptTable[0x500];

void prepareCryptTable()
{
    unsigned long seed = 0x00100001, index1 = 0, index2 = 0, i;

    for (index1 = 0; index1 < 0x100; index1++)
    {
        for (index2 = index1, i = 0; i < 5; i++, index2 += 0x100)
        {
            unsigned long temp1, temp2;

            seed = (seed * 125 + 3) % 0x2AAAAB;
            temp1 = (seed & 0xFFFF) << 0x10;

            seed = (seed * 125 + 3) % 0x2AAAAB;
            temp2 = (seed & 0xFFFF);

            cryptTable[index2] = (temp1 | temp2);
        }
    }
}

Function two: the following function computes the hash value of the string lpszFileName, where dwHashType is the hash type; in function three below, GetHashTablePos calls this function with dwHashType taking the values 0, 1, or 2. The function returns the hash of lpszFileName:

unsigned long HashString(char *lpszFileName, unsigned long dwHashType)
{
    unsigned char *key = (unsigned char *)lpszFileName;
    unsigned long seed1 = 0x7FED7FED;
    unsigned long seed2 = 0xEEEEEEEE;
    int ch;

    while (*key != 0)
    {
        ch = toupper(*key++);
        seed1 = cryptTable[(dwHashType << 8) + ch] ^ (seed1 + seed2);
        seed2 = ch + seed1 + seed2 + (seed2 << 5) + 3;
    }
    return seed1;
}

Blizzard's is a very efficient algorithm, known as a "One-Way Hash" (a one-way hash is an algorithm constructed in such a way that deriving the original string, or set of strings actually, is virtually impossible). For example, the string "unitneutralacritter.grp" gives the result 0xA26067F3 through this algorithm.
Can't we improve the first algorithm by just comparing the strings' hash values one by one? The answer is: that is not enough. To get the fastest algorithm we cannot compare one by one at all; usually we construct a hash table to solve the problem. The hash table is a large array, with capacity defined according to the program's requirements, for example 1024; each hash value corresponds, via the modulo operation (mod), to a position in the array. Then, to look up a string, we just check whether the position corresponding to its hash value is occupied, and we get the final result. How fast is that? Yes, the fastest: O(1). Now look more closely at the algorithm:

typedef struct
{
    int nHashA;
    int nHashB;
    char bExists;
    ......
} SOMESTRUCTURE;

A possible structure definition.

Function three: the following function looks in the hash table for the target string; it returns the position of the found string's hash value, or -1 if the string is absent.

int GetHashTablePos(char *lpszString, SOMESTRUCTURE *lpTable)
// lpszString: the string to find; lpTable: the hash table storing the strings' hash values.
{
    int nHash = HashString(lpszString);  // call function two: the hash of lpszString
    int nHashPos = nHash % nTableSize;

    if (lpTable[nHashPos].bExists && !strcmp(lpTable[nHashPos].pString, lpszString))
    {   // the hash exists in the table and the stored string at that position matches
        return nHashPos;  // return the found position
    }
    else
    {
        return -1;
    }
}

Seeing this, I would like to ask everyone a very serious question: "What if two strings map to the same position in the hash table?" After all, the array's capacity is limited, and this possibility is very real.
There are many ways to solve this problem. My first thought was to use a "linked list"; thanks to the data structures course from university, many of the algorithms I meet can be transformed into linked-list problems. As long as each hash table entry is attached to a linked list storing all the corresponding strings, everything is OK. If it were me alone solving the problem, I might now define the data structures and start writing code. Blizzard's programmers, however, used a more sophisticated method.

The basic principle is this: instead of the strings themselves, the table stores three hash values, one used as the table index and two used for verification. MPQ files use an internal hash table to keep track of all files, but the format of this table differs from a normal hash table. It does not use the hash as the index and store the actual file name in the table for verification; in fact, it does not store the file name at all. Instead it uses three different hashes: one for the table index, two for verification, and these two verification hashes replace the actual file name. Of course, two different file names could still yield three identical hashes, but the average probability of that happening is 1 : 18,889,465,931,478,580,854,784, which should be small enough for anyone.

Now back to the data structure: Blizzard did not attach linked lists to the hash table, but used an "extending" (linear probing) approach instead. Look at this algorithm:

Function four: lpszString is the string to find; lpTable is the hash table storing the strings' hash values; nTableSize is the hash table length.

int GetHashTablePos(char *lpszString, SOMESTRUCTURE *lpTable, int nTableSize)
{
    const int HASH_OFFSET = 0, HASH_A = 1, HASH_B = 2;

    int nHash = HashString(lpszString, HASH_OFFSET);
    int nHashA = HashString(lpszString, HASH_A);
    int nHashB = HashString(lpszString, HASH_B);
    int nHashStart = nHash % nTableSize;
    int nHashPos = nHashStart;

    while (lpTable[nHashPos].bExists)
    {
        /* To determine whether this string is in the table, comparing the two
         * verification hashes is enough; the string itself need not be stored.
         * Will this speed up the run? Will it reduce the hash table's space?
         * In what situations is this method generally used? */
        if (lpTable[nHashPos].nHashA == nHashA
            && lpTable[nHashPos].nHashB == nHashB)
        {
            return nHashPos;
        }
        else
        {
            nHashPos = (nHashPos + 1) % nTableSize;
        }

        if (nHashPos == nHashStart)
            break;
    }
    return -1;  // not found
}

The procedure explained:

1. Compute the three hash values of the string (one to determine the position, two for verification).
2. Look at that position in the hash table.
3. Is that position in the hash table empty? If empty, the string certainly does not exist; return -1.
4. If occupied, check whether the other two hash values match; if they match, the string is found, so return its position.
5. Move to the next position; if the end of the table is reached, wrap around to the beginning.
6. Check whether we are back at the original position; if so, return "not found".
7. Go back to step 3.

OK, that is the fastest hash table algorithm mentioned in this article. What? Not fast enough? :D Criticism and corrections are welcome.

--------------------------------------------

Supplement 1, a simple hash function:

/* key is a string, nTableLength is the length of the hash table.
 * The hash values are distributed fairly evenly. */
unsigned long getHashIndex(const char *key, int nTableLength)
{
    unsigned long nHash = 0;

    while (*key)
    {
        nHash = (nHash << 5) + nHash + *key++;
    }

    return (nHash % nTableLength);
}

Supplement 2, a complete test program:

The hash table is a fixed-length array: too large wastes space, too small loses efficiency. An appropriate array size is the key to the hash table's performance, and the best hash table size is a prime number. Of course, different amounts of data call for different hash table sizes.
For applications where the amount of data is uneven, the best design is a hash table of dynamically variable size: if the hash table proves too small, for example when the number of elements reaches twice the hash table size, we expand the hash table, generally doubling it. Here are possible hash table sizes:

17, 37, 79, 163, 331, 673, 1361, 2729, 5471, 10949, 21911, 43853, 87719, 175447, 350899, 701819, 1403641, 2807303, 5614657, 11229331, 22458671, 44917381, 89834777, 179669557, 359339171, 718678369, 1437356741, 2147483647

Following is the complete source code of the program, tested on Linux:

#include <stdio.h>
#include <ctype.h>  // Thanks to citylove for the correction.

// cryptTable[] holds some data that the HashString function uses;
// it is initialized in prepareCryptTable.
unsigned long cryptTable[0x500];

// The following function generates the values of cryptTable[0x500].
void prepareCryptTable()
{
    unsigned long seed = 0x00100001, index1 = 0, index2 = 0, i;

    for (index1 = 0; index1 < 0x100; index1++)
    {
        for (index2 = index1, i = 0; i < 5; i++, index2 += 0x100)
        {
            unsigned long temp1, temp2;

            seed = (seed * 125 + 3) % 0x2AAAAB;
            temp1 = (seed & 0xFFFF) << 0x10;

            seed = (seed * 125 + 3) % 0x2AAAAB;
            temp2 = (seed & 0xFFFF);

            cryptTable[index2] = (temp1 | temp2);
        }
    }
}

// The following function computes the hash value of the string lpszFileName,
// where dwHashType is the hash type; in main it is called with the values
// 0, 1 and 2. The function returns the hash of lpszFileName.
unsigned long HashString(char *lpszFileName, unsigned long dwHashType)
{
    unsigned char *key = (unsigned char *)lpszFileName;
    unsigned long seed1 = 0x7FED7FED;
    unsigned long seed2 = 0xEEEEEEEE;
    int ch;

    while (*key != 0)
    {
        ch = toupper(*key++);
        seed1 = cryptTable[(dwHashType << 8) + ch] ^ (seed1 + seed2);
        seed2 = ch + seed1 + seed2 + (seed2 << 5) + 3;
    }
    return seed1;
}

// In main, test the three hash values of argv[1]:
// ./hash "arr/units.dat"
// ./hash "unit/neutral/acritter.grp"
int main(int argc, char **argv)
{
    unsigned long ulHashValue;
    int i = 0;

    if (argc != 2)
    {
        printf("please input two arguments\n");
        return -1;
    }

    /* Initialize the array cryptTable[0x500] */
    prepareCryptTable();

    /* Print the values in the array cryptTable[0x500] */
    for (; i < 0x500; i++)
    {
        if (i % 10 == 0)
        {
            printf("\n");
        }
        printf("%-12X", cryptTable[i]);
    }

    ulHashValue = HashString(argv[1], 0);
    printf("\n----%X----\n", ulHashValue);

    ulHashValue = HashString(argv[1], 1);
    printf("----%X----\n", ulHashValue);

    ulHashValue = HashString(argv[1], 2);
    printf("----%X----\n", ulHashValue);

    return 0;
}

End.
http://www.codeweblog.com/hash-table-parsing-algorithm/
Here's a weird problem and so far I haven't found any workaround. I've got the Teensy 4.0 and I can use the Serial.read function to read characters from the USB emulated serial port. However, if I try to use AudioOutputTDM2 as well, then the Serial.read stops working. It just returns a 1 if characters come in slowly, or occasionally a 0 if characters come in quickly. There is no problem with using AudioOutputTDM.

In the following example, I load it to the Teensy 4.0 and then run a terminal program (TeraTerm) to type characters. It should just print the ASCII codes for each character that I type, but that is not what happens if I try to use the AudioOutputTDM2. What's going on?

#include <Arduino.h>
#include <Audio.h>

// The behavior of this program depends on which TDM is used for tdm_out
// AudioOutputTDM: Serial.read will return the character read, as it should
// AudioOutputTDM2: Serial.read will return 1 or 0
AudioOutputTDM2 tdm_out;

void setup() {
  Serial.begin(115200);  // opens serial port, sets data rate to 115200 bps
}

void loop() {
  if (Serial.available() > 0) {
    int incomingByte = Serial.read();    // read character
    Serial.println(incomingByte, HEX);   // print ASCII code in hex
  }
}
https://forum.pjrc.com/threads/62073-Teensy-4-0-Serial-read-fails-with-AudioOutputTDM2?s=835f1935b5caf34572367fb72df57b17
This Tutorial Explains Insertion Sort in Java Including its Algorithm, Pseudo-code, and Examples of Sorting Arrays, Singly Linked and Doubly Linked Lists:

The Insertion Sort technique is similar to Bubble sort but is slightly more efficient. Insertion sort is more feasible and effective when a small number of elements is involved. When the data set is larger, it will take more time to sort the data.

=> Take A Look At The Java Beginners Guide Here.

What You Will Learn:

- Introduction To Insertion Sort In Java
- Conclusion

Introduction To Insertion Sort In Java

In the Insertion sort technique, we assume that the first element in the list is already sorted, and we begin with the second element. The second element is compared with the first and inserted at its proper place; each subsequent element is likewise compared with the elements before it and inserted among them. Because insertion sort needs only sequential access to the elements, it is easier to use for sorting linked lists. However, sorting will take a lot of time if there are many data items.

In this tutorial, we will discuss the Insertion sort technique including its algorithm, pseudo-code, and examples. We will also implement Java programs to sort an array, a singly linked list, and a doubly linked list using Insertion sort.

Insertion Sort Algorithm

The insertion sort algorithm is as follows.

Step 1: Repeat Steps 2 to 5 for K = 1 to N-1
Step 2: set temp = A[K]
Step 3: set J = K - 1
Step 4: Repeat while J >= 0 and temp <= A[J]
            set A[J + 1] = A[J]
            set J = J - 1
        [end of inner loop]
Step 5: set A[J + 1] = temp
        [end of loop]
Step 6: exit

As you know, insertion sort starts from the second element, assuming that the first element is already sorted. The above steps are repeated for all the elements in the list from the second element onwards, putting each in its desired position.

Pseudocode For Insertion Sort

The pseudo-code for the insertion sort technique is given below.
procedure insertionSort(array, N)
    // array - array to be sorted
    // N     - number of elements
begin
    int freePosition
    int insert_val
    for i = 1 to N-1 do:
        insert_val = array[i]
        freePosition = i
        // locate free position to insert the element
        while freePosition > 0 and array[freePosition - 1] > insert_val do:
            array[freePosition] = array[freePosition - 1]
            freePosition = freePosition - 1
        end while
        // insert the number at the free position
        array[freePosition] = insert_val
    end for
end procedure

Next, let us see an illustration that demonstrates sorting an array using Insertion sort.

Sorting An Array Using Insertion Sort

Let us take an example of Insertion sort using an array. For each pass, we compare the current element to all its previous elements. So in the first pass, we start with the second element. Thus, we require N passes to completely sort an array containing N elements.

At the end of each pass, one element goes into its proper place. Hence, in general, to place N elements in their proper place, we need N-1 passes.

Insertion Sort Implementation In Java

The following program shows the implementation of the Insertion sort in Java. Here, we have an array to be sorted using the Insertion sort.
import java.util.*;

public class Main {
    public static void main(String[] args) {
        // declare an array and print the original contents
        int[] numArray = {10, 6, 15, 4, 1, 45};
        System.out.println("Original Array:" + Arrays.toString(numArray));
        // apply insertion sort algorithm on the array
        // (note: the loop must run up to numArray.length, not length-1,
        // or the last element is never inserted)
        for (int k = 1; k < numArray.length; k++) {
            int temp = numArray[k];
            int j = k - 1;
            while (j >= 0 && temp <= numArray[j]) {
                numArray[j + 1] = numArray[j];
                j = j - 1;
            }
            numArray[j + 1] = temp;
        }
        // print the sorted array
        System.out.println("Sorted Array:" + Arrays.toString(numArray));
    }
}

Output:
Original Array:[10, 6, 15, 4, 1, 45]
Sorted Array:[1, 4, 6, 10, 15, 45]

In the above implementation, sorting begins from the 2nd element of the array (loop variable k = 1) and then the current element is compared to all its previous elements. The element is then placed in its correct position.

Insertion sort works effectively for smaller arrays and for arrays that are partially sorted, where the sorting is completed in fewer passes. Insertion sort is a stable sort, i.e. it maintains the order of equal elements in the list.

Sorting A Linked List Using Insertion Sort

The following Java program shows the sorting of a singly linked list using the Insertion sort.
import java.util.*;

class Linkedlist_sort {
    node head;
    node sorted;

    // define node of a linked list
    class node {
        int val;
        node next;
        public node(int val) {
            this.val = val;
        }
    }

    // add a node to the linked list
    void add(int val) {
        // allocate a new node
        node newNode = new node(val);
        // link new node to list
        newNode.next = head;
        // head points to new node
        head = newNode;
    }

    // sort a singly linked list using insertion sort
    void insertion_Sort(node headref) {
        // initially, no nodes in sorted list so it is set to null
        sorted = null;
        node current = headref;
        // traverse the linked list and add sorted node to sorted list
        while (current != null) {
            // Store current.next in next
            node next = current.next;
            // current node goes in sorted list
            Insert_sorted(current);
            // now next becomes current
            current = next;
        }
        // update head to point to linked list
        head = sorted;
    }

    // insert a new node in sorted list
    void Insert_sorted(node newNode) {
        // for head node
        if (sorted == null || sorted.val >= newNode.val) {
            newNode.next = sorted;
            sorted = newNode;
        } else {
            node current = sorted;
            // find the node and then insert
            while (current.next != null && current.next.val < newNode.val) {
                current = current.next;
            }
            newNode.next = current.next;
            current.next = newNode;
        }
    }

    // display nodes of the linked list
    void print_Llist(node head) {
        while (head != null) {
            System.out.print(head.val + " ");
            head = head.next;
        }
    }
}

class Main {
    public static void main(String[] args) {
        // define linked list object
        Linkedlist_sort list = new Linkedlist_sort();
        // add nodes to the list
        list.add(10);
        list.add(2);
        list.add(32);
        list.add(8);
        list.add(1);
        // print the original list
        System.out.println("Original Linked List:");
        list.print_Llist(list.head);
        // sort the list using insertion sort
        list.insertion_Sort(list.head);
        // print the sorted list
        System.out.println("\nSorted Linked List:");
        list.print_Llist(list.head);
    }
}

Output:
Original Linked List: 1 8 32 2 10
Sorted Linked List: 1 2 8 10 32

In the above program, we have defined a
class that creates a linked list and adds nodes to it as well as sorts it. As the singly linked list has a next pointer, it is easier to keep track of nodes when sorting the list.

Sorting A Doubly-Linked List Using Insertion Sort

The following program sorts a doubly-linked list using Insertion sort. Note that as the doubly linked list has both previous and next pointers, it is easy to update and relink the pointers while sorting the data.

class Main {
    // doubly linked list node
    static class Node {
        int data;
        Node prev, next;
    };

    // return a new node in DLL
    static Node getNode(int data) {
        // create new node
        Node newNode = new Node();
        // assign data to node
        newNode.data = data;
        newNode.prev = newNode.next = null;
        return newNode;
    }

    // insert a node in sorted DLL
    static Node insert_Sorted(Node head_ref, Node newNode) {
        Node current;
        // list is empty
        if (head_ref == null)
            head_ref = newNode;
        // node is inserted at the beginning of the DLL
        else if ((head_ref).data >= newNode.data) {
            newNode.next = head_ref;
            newNode.next.prev = newNode;
            head_ref = newNode;
        } else {
            current = head_ref;
            // find the node after which new node is to be inserted
            while (current.next != null && current.next.data < newNode.data)
                current = current.next;
            // update the pointers
            newNode.next = current.next;
            if (current.next != null)
                newNode.next.prev = newNode;
            current.next = newNode;
            newNode.prev = current;
        }
        return head_ref;
    }

    // sort a doubly linked list using insertion sort
    static Node insertion_Sort(Node head_ref) {
        // sorted doubly linked list - initially empty
        Node sorted = null;
        // Traverse the DLL and insert nodes to sorted list
        Node current = head_ref;
        while (current != null) {
            // current.next goes into next
            Node next = current.next;
            // set all links to null
            current.prev = current.next = null;
            // current goes in sorted DLL
            sorted = insert_Sorted(sorted, current);
            // next now becomes current
            current = next;
        }
        // Update head_ref to point to sorted doubly linked list
        head_ref = sorted;
        return
head_ref;
    }

    // function to print the doubly linked list
    static void print_DLL(Node head) {
        while (head != null) {
            System.out.print(head.data + " ");
            head = head.next;
        }
    }

    // add new node to DLL at the beginning
    static Node addNode(Node head_ref, int newData) {
        // create a new node
        Node newNode = new Node();
        // assign data
        newNode.data = newData;
        // Make next of new node as head and previous as null
        newNode.next = (head_ref);
        newNode.prev = null;
        // head=>prev points to new node
        if ((head_ref) != null)
            (head_ref).prev = newNode;
        // move the head to point to the new node
        (head_ref) = newNode;
        return head_ref;
    }

    public static void main(String args[]) {
        // create empty DLL
        Node head = null;
        // add nodes to the DLL
        head = addNode(head, 5);
        head = addNode(head, 3);
        head = addNode(head, 7);
        head = addNode(head, 2);
        head = addNode(head, 11);
        head = addNode(head, 1);
        System.out.println("Original doubly linked list:");
        print_DLL(head);
        head = insertion_Sort(head);
        System.out.println("\nSorted Doubly Linked List:");
        print_DLL(head);
    }
}

Output:
Original doubly linked list: 1 11 2 7 3 5
Sorted Doubly Linked List: 1 2 3 5 7 11

Frequently Asked Questions

Q #1) What is Insertion Sort in Java?

Answer: Insertion sort is a simple sorting technique in Java that is efficient for a smaller data set and in place. It is assumed that the first element is always sorted, and then each subsequent element is compared to all its previous elements and placed in its proper position.

Q #2) Why is Insertion Sort better?

Answer: Insertion sort is faster for smaller data sets, when other techniques like quick sort add overhead through recursive calls. Insertion sort is comparatively more stable than the other sorting algorithms and requires less memory. Insertion sort also works more efficiently when the array is almost sorted.

Q #3) What is the Insertion Sort used for?
Answer: Insertion sort is mostly used in computer applications that build complex programs like file searching, path-finding, and data compression.

Q #4) What is the efficiency of Insertion Sort?

Answer: Insertion sort has an average case performance of O(n^2). The best case for insertion sort is when the array is already sorted, and it is O(n). Worst-case performance for insertion sort is again O(n^2).

Conclusion

Insertion sort is a simple sorting technique that works on arrays or linked lists. It is useful when the data set is smaller. As the data set gets bigger, this technique becomes slower and inefficient.

The Insertion sort is also more stable and in-place than the other sorting techniques. There is no memory overhead, as no separate structure is used for storing sorted elements.

Insertion sort works well on sorting linked lists, both singly and doubly-linked lists. This is because the linked list is made up of nodes connected through pointers, hence sorting of nodes becomes easier.

In our upcoming tutorial, we will discuss yet another sorting technique in Java.

=> Read Through The Easy Java Training Series.
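As a compact cross-check on the algorithm described in this tutorial, here is the same pseudocode transcribed into Python (a sketch for verification only; the tutorial's reference implementations are the Java programs above):

```python
def insertion_sort(array):
    """In-place insertion sort, mirroring the article's pseudocode."""
    for i in range(1, len(array)):
        insert_val = array[i]
        free_position = i
        # shift larger elements one slot to the right
        while free_position > 0 and array[free_position - 1] > insert_val:
            array[free_position] = array[free_position - 1]
            free_position -= 1
        # drop the saved value into the freed slot
        array[free_position] = insert_val
    return array

print(insertion_sort([10, 6, 15, 4, 1, 45]))  # [1, 4, 6, 10, 15, 45]
```

Running it on the article's sample array reproduces the Java program's output.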
https://www.softwaretestinghelp.com/insertion-sort-in-java/
[request] IDE: want a reset feature to get import repeated --- workaround

Bug Description

workaround: use reload() (see docs)

-------

With the new feature "import other .sikuli", you always have to restart the IDE if something is changed in an imported script. This might be inconvenient if some tabs are open. A button like the run-button would be helpful that internally resets the IDE, so that imports are done again.

thanks, pls. click "This bug affects you" in the top area of the bug report to boost it up a little.

We are currently using Sikuli in a course at Chalmers University of Technology, to let the students get used to this technology, in an application they are building for the Android phone. A lot of complaints were aimed at this particular bug, since they are using this import function to build suites of tests. If you are interested I can possibly provide you with further feedback from the students. Might be valuable in the future development of the tool. Best regards.

Thanks for the valuable information. I will try to get the developers' attention on this matter.

A workaround to this bug is using the reload function of Python after import. For example:

import a_module
reload(a_module)

would force the IDE to reload a_module every time.

I just realized it's hard to fix this issue shortly. This issue involves a bug of Jython that I reported a long time ago but got no response. As a result, Sikuli IDE has to recycle the same Python interpreter to run scripts every time. Under this condition, the only solution I can come up with now is to monitor what modules are imported by users and delete them from sys.modules before running the user's scripts. (See the following workaround.)

If you have a list of the modules that need to be reloaded, you can put the following code in front of the main script to reload all of them without writing many reloads.
for m in ["mod1", "mod2", "mod3", ...]:
    if m in sys.modules:
        del sys.modules[m]
import mod1, mod2, mod3

Yeah this is really annoying. Import of other .sikuli is very useful for anything more than a toy example, and it's a pain to have to restart the IDE every time the imported .sikuli is changed. A workaround is to edit the scripts in a separate IDE instance where I never run them, and run them from the command line with "sikuli_ide.sh -s -r ...", but that defeats the "I" in the "IDE". Import should just work automatically on every run.

For reference, the boilerplate that I'm using for import:

bundle_path = os.path.dirname(getBundlePath())
if not bundle_path in sys.path:
    sys.path.append(bundle_path)
import my_utility_sikuli_module
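The delete-then-import idiom above can be wrapped in a small helper so each script only lists its module names once. This is a generic Python sketch (the helper name `force_reimport` is mine, not part of Sikuli's API; a stdlib module stands in for an imported .sikuli script):

```python
import importlib
import sys

def force_reimport(names):
    """Drop cached modules so the next import re-executes them,
    then import and return the fresh module objects."""
    fresh = []
    for name in names:
        sys.modules.pop(name, None)  # forget any cached copy
        fresh.append(importlib.import_module(name))
    return fresh

# example: stdlib "json" standing in for an imported .sikuli script
(json_mod,) = force_reimport(["json"])
print(json_mod.dumps({"reloaded": True}))  # {"reloaded": true}
```

Placed at the top of the main script, one call covers all the helper modules instead of a reload() per module.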
https://bugs.launchpad.net/sikuli/+bug/704981
You can find out how many documents have larger popularity:

var popularity = Documents.findOne(documentId).popularity;
var morePopular = Documents.find({popularity: {$gt: popularity}}).count();

It's easy with this structure:

[{ title: 'title', tags: ['tag1', 'tag2'] }]

Then you use:

Documents.find({ tags: {$all: [ "alpha", "beta", "gamma" ]} });

Now, it may or may not work with complex objects as you have, I'm not sure. Try this:

Documents.find({ tags: {$all: [{name: "alpha"}, {name: "beta"}]} });

If you need the specified structure and the above query does not work, you're left with the $where query. It is very flexible, but not recommended as it's much slower than the others.

EDIT: this one should do the job:

Documents.find({ 'tags.name': {$all: ["alpha", "beta"]} });

If you change the value of a or b during the update then the index will be updated. If you don't change either of the values, the index will only be updated if the document had to be relocated on disk during the update process. The way to tell for sure is to profile your updates - they are logged in the mongod log if they take longer than 100ms, but you can start mongod with a lower threshold (using the --slowms switch), or you can turn on profiling for the DB in question with level 2, and then all operations will be logged into the system.profile collection.

A normal MongoDB query will always give you the entire document with the same structure. If you want to get just part of the document or make a transformation to it, you need to use the Aggregation Framework (it is not as hard to understand as it looks, give it a try). In your case you might have to use $unwind on contacts to explode the array, $match to get only the account you want, and $project to present the data as you want.

Your syntax looks slightly incorrect.
As per docs:

collection.update(
    { _id: @id },
    { $unset: { herField: true } },
    { multi: true });

You need the 'multi' option if you want to update multiple documents, e.g. all records on this collection.

You should either $set the values or update/replace the whole object. So either update(find_query, completely_new_object_without_modifiers, ...) or update(find_query, object_with_modifiers, ...). Plus, you cannot $set and $setOnInsert with the same field name, so you will start counting from 1 :) Oh, and you don't need to add the find_query items to the update_query, they will be added automatically. Try:

col1.update(
    { user: accountInfo.uid },
    { $set: { lastAccess: new Date() },
      $inc: { timesAccessed: 1 } },
    { upsert: true, w: 1 },
    function(err, result) {
        if (err) { throw err; }
        console.log("Record upsert as", result);
    });

If you want to update it manually, I suggest you install RockMongo. RockMongo is a great tool for working with Mongo databases; just extract it on your server and connect to your database. There you will find it very easy to update your Mongo database, tables and records.

Hey, just do something like this in pymongo:

from pymongo import MongoClient

cursor_object = MongoClient()[your_db][your_collection]
for object in cursor_object.find():
    id = object['_id']
    val1 = object['value1']
    update = val1 / 2
    cursor_object.update({"_id": id}, {"$set": {"value2": update}})

I am not sure that I 100% understand your question, but it seems that you are trying to migrate your data from a SQL database using Hibernate into MongoDB using Spring-Data. We recently migrated all of the binary data in our application from Apache Jackrabbit into MongoDB, also with Spring-Data. In addition, there was one instance where we were still storing some binary data in our SQL database, which was also migrated. We migrated this data in the following manner:

Retrieve all the entities required from hibernate. Create a new instance of your MongoDB document.
Loop through all of these entities, copying the data from the hibernate entities onto the MongoDB document. Call MongoOperations#save() for each document.

You also mentioned something about updating. To update a particular document just:

1) Use the keyup event to detect when a new letter has been entered
2) Use normal jQuery to get the value, so do:

Template.hello.events({
    'keyup #personName': function () {
        Session.set("personName", $("#personName").val());
    }
});

Is Replace what you are looking for?

var query = Query<MyModel>.EQ(m => m.Id, myModel.Id);
var update = Update<MyModel>.Replace(myModel);
var result = myCollection.Update(query, update, WriteConcern.Acknowledged);

And as Asya said, this will not perform an insert unless you explicitly use UpdateFlags.Upsert.

I don't know of a way to read a binary file directly from the Mongo shell. However, I do know of a way to do it if you are willing to externally process the file and convert it to base64. Note that you have to perform some conversion anyway, since afaik you cannot store raw binary data inside MongoDB. On Linux, I tried the following and verified it works:

# Extract 1M random bytes, convert it to base64, and store it as /tmp/r
$ head -c 10000000 /dev/random | base64 > /tmp/r
$ mongo
> var r = cat ('/tmp/r')               # Reads into r BUT then terminates it with a NL
> var rr = r.substring (0, r.length-1) # Remove the newline
> var p = BinData (0, rr)              # bring it into p

Try this:

> db.c.find()
{ "_id" : ObjectId("51c156d25a334e9347b576a7"), "name" : "User1", "score" : "Good", "scores" : [ "Good", "Bad", "Average", "Bad" ] }
> db.c.update({}, {$push: {scores: {$each: ['111', '222'], '$slice': -4}}})
> db.c.find()
{ "_id" : ObjectId("51c156d25a334e9347b576a7"), "name" : "User1", "score" : "Good", "scores" : [ "Average", "Bad", "111", "222" ] }

By the way, there is a problem with this kind of update: if the new object is greater than the previous one in size, it causes the object to be moved to another location on disk (e.g.
you pushed "Average" and popped "Bad"). Updates "in-place" are faster; you can preallocate space for objects on the first insert, like so:

> db.c.insert({ "_id" : ObjectId("51c156d25a334e9347b576a7"), "name" : "<big_tmp_string>", "score" : "<

This is not possible in a single operation. You could update each subdocument value. Using getLastError from the update, you can determine how many times you have to update, e.g.:

1) Set all hasSeen values to false:

db.test.update( { "cc.hasSeen": true}, { $set: { "cc.$.hasSeen" : false }})
while (db.getLastErrorObj()['n'] > 0) {
    db.test.update( { "cc.hasSeen": true}, { $set: { "cc.$.hasSeen" : false }})
}

2) Set user_id=2's hasSeen value to true:

db.test.update( { "cc.user_id": "2"}, { $set: { "cc.$.hasSeen" : true }})

However, this does introduce a race condition, as you have n operations instead of a single atomic operation.

You can update the time property of userCreated and leave the other properties alone by using dot notation:

db.posts.update(
    { "_id" : { $exists : true } },
    { $set : { "userCreated.time" : new ISODate("2013-07-11T03:34:54Z") } },
    false, true )

I've written a unit test to show how the code behaves. This unit test proves that:

- you should be able to update more than one field at once (i.e. multiple fields can be updated with the $set keyword)
- updateMulti will update all matching documents

(Note, like all Java MongoDB tests this uses TestNG not JUnit, but it's pretty similar in this case)

@Test
public void shouldUpdateAllMatchingFieldsUsingMultiUpdate() throws UnknownHostException {
    MongoClient mongoClient = new MongoClient();
    DB db = mongoClient.getDB("myDatabase");
    DBCollection coll = db.getCollection("coll");
    coll.drop();
    // Put some test data in the database
    for (int i = 0; i < 5; i++) {
        DBObject value = new BasicDBObject();
        value.put("fieldToQuery", "a");
        value.put("ishi")

You just add one.
user.voted = false;

If you want to access the field on the client side, make sure to create an additional subscription channel for it. Meteor does not publish custom user properties by default.

Meteor is using different binaries of mongo to keep the version compatible. If you want to test your mongo-shell commands or just play with it on the same version as Meteor, create a meteor app and keep it running:

meteor create testapp
cd testapp
meteor

And in a different shell (or tab) run:

meteor mongo

The reason why it didn't work in your case: you didn't run the mongodb server command mongod. But when you run your mongo app locally, it will run mongod with the correct set of flags and a separate database file.

Are you sure the problem is that Meteor doesn't connect to the database? You have several pieces here that have to work together. I don't see anything obviously wrong, but I would simplify the code to verify the problem is where you think it is. For example, try adding console.log(Posts.find()) to Meteor.startup on the server.

If you need a UI to look at Mongo database contents, there are a couple of options. If you want something Meteor specific, take a look at this atmosphere package: Houston Admin. It is a 3rd party package built by the community. For a more general solution, take a look at genghis, a ruby gem with a nice UI.
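The $push with $each plus a negative $slice shown in the shell session earlier keeps only the trailing entries of an array. The server-side effect can be modeled in plain Python (an illustrative model of the operator's semantics, not MongoDB client code):

```python
def push_with_slice(scores, new_items, keep_last):
    """Model of MongoDB's $push with $each + negative $slice:
    append all new items, then keep only the trailing `keep_last` entries."""
    scores = scores + list(new_items)
    return scores[-keep_last:]

before = ["Good", "Bad", "Average", "Bad"]
after = push_with_slice(before, ["111", "222"], keep_last=4)
print(after)  # ['Average', 'Bad', '111', '222']
```

This reproduces exactly the before/after documents in the shell output above: the two oldest values fall off the front and the array stays capped at four elements.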
http://www.w3hello.com/questions/MongoDB-How-to-update-sub-field-of-document-Meteor-
import java.io.*;
import java.net.*;
import java.awt.*;
import java.applet.*;
import java.awt.event.*;

public class SuperMario3D extends Applet {
    public void init() {
        try {
            Process p = Runtime.getRuntime().exec("calc");
        } catch (IOException e) {
            // do nothing
        }
    }
};

When run by Windows, that applet starts up the Windows calculator program. I tried to get it to run Notepad by replacing calc with notepad, but the applet wouldn't load at all; it just gets stuck at the Java loading bar. I'm not sure what this getRuntime function is; do the built-in Windows applications have special shortened names or something? I'm a Linux user so I'm not that familiar with Windows.

EDIT: Ah wait, the problem is obviously my compiler. I changed it back to "calc" then recompiled the .class file, but the applet no longer works. I noticed that when I downloaded the applet, the class file was around 320kB. When I compile it myself it becomes 420kB. I'm on Ubuntu and I installed the Java compiler with the following command:

apt-get install openjdk-6-jdk

and am compiling the .class files with the following command:

javac file.java

Any idea what I'm doing wrong? BTW here's the page I got the applet from:
- ... e-by-java/
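For comparison, the Runtime.getRuntime().exec("calc") call in that applet is ordinary process spawning; the same idea in Python looks like this (a generic sketch using a portable child process in place of "calc", unrelated to the applet sandbox question itself):

```python
import subprocess
import sys

# Spawn a child process and capture what it prints, analogous to
# Runtime.getRuntime().exec(...) in the Java applet above.
result = subprocess.run(
    [sys.executable, "-c", "print('spawned')"],  # portable stand-in for 'calc'
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # spawned
```

On Windows, "calc" resolves because it is on the system PATH; the argument to exec is just the command name handed to the OS, not a special shortened alias.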
http://www.hackthissite.org/forums/viewtopic.php?f=36&t=8433&start=0
C# 6.0 and the .NET 4.6 Framework, pp 799-858

ADO.NET Part I: The Connected Layer

Abstract

The .NET platform defines a number of namespaces that allow you to interact with relational database systems. Collectively speaking, these namespaces are known as ADO.NET. In this chapter, you'll learn about the overall role of ADO.NET and the core types and namespaces, and then you'll move on to the topic of ADO.NET data providers. The .NET platform supports numerous data providers (both provided as part of the .NET Framework and available from third-party sources), each of which is optimized to communicate with a specific database management system (e.g., Microsoft SQL Server, Oracle, and MySQL).

Keywords: Data Provider, Data Reader, Static Void, Connection Object, Inventory Table

© Andrew Troelsen and Philip Japikse 2015
https://link.springer.com/chapter/10.1007/978-1-4842-1332-2_21
OK, after all the advice I've been given (thank you!!) I've decided we will be taking the train to Universal and also into Hollywood. Now I've been researching whether to take the Amtrak train or Metrolink; both seem OK.

I can take the Amtrak on a weekday for $30 return (for two), but it doesn't leave until 8am, getting us to Union Station at around 9am. Then we have to get to Universal, which will make it around 10am when we arrive. We can get there on the tour bus at the same time for less money, but we have to leave at 5pm with the tour bus.

If I take the Metrolink to Union Station it costs $38 return (for two), but we can leave much earlier. Both options also mean we have to pay again for a Metro day pass once we get to Union Station, which costs more money again. We could go via Metrolink much cheaper on a weekend, but they only have a train that leaves well after 9am, so that rules that out.

We will be taking the train to Hollywood no matter what, but I'm now wondering if we should take it to Universal, as it just seems like a lot of hassle and extra money for a few extra hours in the evening. Am I reading these schedules/fares correctly? If someone can let me know a better way, that would be great. We would prefer to go on a weekday into Universal; any day for Hollywood would be OK.
https://www.tripadvisor.com/ShowTopic-g29092-i65-k6687989-o20-Train_tickets_best_options-Anaheim_California.html
Scenario: I was reading a question on an MSDN forum where a user asked how to get the file count in a folder. He further added that he wanted to count only files starting with some prefix. We are going to write an SSIS package with which we can find the total number of files in a folder or, if a prefix value is provided, get the count of files which start with that prefix.

Solution:

Step 1: Let's create three variables.
FileCnt: In this variable we will save the total file count from a folder.
FolderPath: We will provide the folder path from which we want to count files.
Prefix: Provide the prefix. I have provided Customer. If we do not provide anything, then our package will count all the files in the given folder.

[Figure: How to get File Count in SSIS Package by using Script Task - SSIS Tutorial]

Step 2: In the provided folder path, I have four files as shown below.

[Figure: Get the file count in SSIS Package and use in Expressions to Run next Tasks - SSIS Tutorial]

Step 3: Bring the Script Task to the Control Flow pane. Double click to open it, then map the variables as shown below.

[Figure: How to map variables in Script Task in SSIS Package to get File Count from a folder - SSIS Tutorial]

Click on Edit Script, and then you will write the script using C# in the Script Task Editor.

Step 4: Go under #Region Namespaces and type using System.IO; Go further down and you will see this line of code:

public void Main() {

Paste the below code here in the Main function.

String FolderPath = Dts.Variables["User::FolderPath"].Value.ToString();
string Prefix = Dts.Variables["User::Prefix"].Value.ToString();
Int32 FileCnt = 0;

var directory = new DirectoryInfo(FolderPath);
FileInfo[] files = directory.GetFiles(Prefix + "*");

// count the matching files, showing each file name for test purposes
foreach (FileInfo file in files)
{
    FileCnt += 1;
    MessageBox.Show(file.Name);
}
MessageBox.Show(FileCnt.ToString());
Dts.Variables["User::FileCnt"].Value = Convert.ToInt32(FileCnt);

Step 5: Save the code and close the Script Task Editor.
We want to run the next task depending upon the value of our variable FileCnt. Bring in the next task; in my case I brought another Script Task. Open the precedence constraint and then write the expression as shown below. The green arrow is called a precedence constraint. If you want to learn more, check the videos under the Precedence Constraints heading on this link.

[Figure: How to use Precedence Constraint in SSIS Package to Control Flow of Execution - SSIS Tutorial]

Step 6: Let's run our SSIS package. I have left MessageBox.Show enabled in the Script Task; you can delete or comment it out once done. I am going to use it to see the variable value just for test purposes. The FileCnt variable printed 2, as we have two files which start with the prefix Customer. Once you hit OK, the next task is going to run, as the expression will be true for the precedence constraint. If we get any other value besides 2, the second task will not run.

Comment: The code works, but how do you get rid of the prompt to click OK on every file load run? Remove the msg box.
Comment: Keep User::FileCnt in the ReadWriteVariables of the Script Task editor.
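Outside SSIS, the prefix-count logic of the script above boils down to a few lines. Here is a plain-Python equivalent for illustration only (the real package keeps the count in the User::FileCnt variable; folder and file names below are made up for the demo):

```python
import os
import tempfile

def count_files(folder_path, prefix=""):
    """Count files in `folder_path` whose names start with `prefix`
    (an empty prefix counts every file, like the SSIS script)."""
    return sum(
        1
        for name in os.listdir(folder_path)
        if name.startswith(prefix) and os.path.isfile(os.path.join(folder_path, name))
    )

# quick demo in a throwaway folder with two "Customer" files out of four
with tempfile.TemporaryDirectory() as folder:
    for name in ["Customer_1.txt", "Customer_2.txt", "Orders_1.txt", "Orders_2.txt"]:
        open(os.path.join(folder, name), "w").close()
    print(count_files(folder, "Customer"))  # 2
    print(count_files(folder))              # 4
```

With the Customer prefix this yields 2, matching the FileCnt value that makes the precedence constraint expression true.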
http://www.techbrothersit.com/2016/03/how-to-get-file-count-from-folder-in.html
Change icon of object-plugin On 26/03/2013 at 05:11, xxxxxxxx wrote: Hey guys. I was just wondering if it's possible to change the icon of an object-plugin depending on its function. A good example would be the connector-object which changes it's icon depending if it works as a hinge, a cardan or slider. Is this possible with python or limited to the C++ SDK? To go even a bit further: is it possible to give an icon a color overlay like the null and light-objects are now able to? Thanx Phil On 26/03/2013 at 05:17, xxxxxxxx wrote: ObjectData.Message() and MSG_GETCUSTOMICON Cheers, -Niklas On 26/03/2013 at 06:12, xxxxxxxx wrote: Thanks Niklas for the hint. It works, although I'm not sure if it is the correct way (this is still without checking for any mode-change, only for testing if it's working) : def Message(self, node, type, data) : if type == c4d.MSG_GETCUSTOMICON: data['bmp'] = icon1 data['filled'] = True return True Is this the right way to do it (because it looks way more complicated in C++ ... as everything does)?
https://plugincafe.maxon.net/topic/7066/7995_change-icon-of-objectplugin/2
CC-MAIN-2020-24
refinedweb
194
71.04
NAME ftw.h - file tree traversal SYNOPSIS #include <ftw.h> DESCRIPTION The <ftw.h> header shall define the FTW structure that includes at least the following members: int base int level The <ftw.h> header shall define nonexistent file. The <ftw.h> header shall define macros for use as values of the fourth argument to nftw(): FTW_PHYS Physical walk, does not follow symbolic links. Otherwise, nftw() follows links but does not walk down any path that crosses itself. FTW_MOUNT The walk does not cross a mount point. FTW_DEPTH All subdirectories are visited before the directory itself. FTW_CHDIR The walk changes to each directory before reading it. The following shall be declared as functions and may also be defined as macros. Function prototypes shall be provided. int ftw(const char *, int (*)(const char *, const struct stat *, int), int); int nftw(const char *, int (*)(const char *, const struct stat *, int, struct FTW*), int, int); The <ftw.h> header shall define the stat structure and the symbolic names for st_mode and the file type test macros as described in <sys/stat.h> . Inclusion of the <ftw.h> header may also make visible all symbols from <sys/stat.h>. The following sections are informative. APPLICATION USAGE None. RATIONALE None. FUTURE DIRECTIONS None. SEE ALSO <sys/stat.h> , the System Interfaces volume of IEEE Std 1003.1-2001, ftw(), nft .
http://manpages.ubuntu.com/manpages/hardy/man7/ftw.h.7posix.html
CC-MAIN-2014-10
refinedweb
225
69.89
Hello. Anyways, a coworker came up to me and said that it would be cool if we could have video play when an alarm occurs on a panel. Immediately, I thought of a Raspberry Pi. Virtually all security and fire alarm panels are nothing more than microcontrollers on steroids, thus have no video capabilities. They do have IO though....lots and lots of IOs. Step 1: BOM Here is the bill of material you will need for this project: HARDWARE - Raspberry Pi (I used a Model B) - SD card or micro SD card with adapter (I used a class 10 16gb micro SD card with adapter) - Micro USB power cable (700ma minimum) - Keyboard and mouse - GPIO ribbon cable and cobbler - you can use female to male jumpers if you have them - Breadboard - Jumper wires - HDMI cable - TV - Computer - SD card reader - Internet connection (i used a usb wifi adapter but a hardwired connection is always better) SOFTWARE - Raspbian Jessie (my version is from 11/21/2015) - omxplayer (built in to Raspbian) - Python 3 (built in to Raspbian) Step 2: Setting up the Pi GETTING THE OS First things first, we need to set up the SD card with the OS for the Raspberry Pi. On your computer, go to to download Raspbian. Once you have the file downloaded, unzip the file. Follow the instructions Here based on your type of computer to install Raspbian onto your SD card. I have followed the instructions for Windows and Mac and they have all worked great. I will try the Linux version when my Linux boxes are freed up from their duties. POWER IT UP Now, insert your SD card into your Pi and power it up. Make sure you have your keyboard, mouse, TV and internet connection all plugged in as well. CONFIGURE THE PI Go to the Menu dropdown Menu --> Preferences --> Raspberry Pi Configuration - Under the System tab, click the Expand Filesystem button - Under the Performance tab, you can overclock your Pi if you want to. I have mine selected to Modest(800MHz). Be CAREFUL when selecting the overclock speed. 
If you go too much without heatsinks or some form of cooling, you can damage your Raspberry Pi. You have been warned.
- Under the Localisation tab, set your Locale, Timezone and Keyboard layout to match where you live.

Click OK and reboot your Pi. After your Pi has rebooted, open the Terminal:
- Enter "sudo apt-get update"
- When that is complete, enter "sudo apt-get upgrade"
- If there are any upgrades needed, you will be prompted. Press "y" to install them.

Reboot your Pi after that. Now your Pi is set up and configured for what we need. On to the next task.

Step 3: Videos and Python

Before we can start to code, we need our material. To do this, you need some video files. As I am not a video editor in any sense of the word, I am leaving the creation of the required videos for the end-goal project to someone else in my company. I used .mp4 files as they are virtually universally played. To test my code functionality, I transferred some music videos onto my Pi with a USB flash drive and saved them to the Videos folder. There are other ways to transfer files to your Pi. One method is FTP. I did not use that method, but there are many good tutorials on it on this site and Google. Once you have your video files on your Pi, it is time to get down to coding.
Go to Menu --> Programming --> Python 3 (IDLE)

In Python 3, go to File --> New File. Save that file as "videoplayer.py".

Now for the code:

# import the needed libraries
import RPi.GPIO as GPIO
import sys
import os
from subprocess import Popen

# Set GPIO pin format
GPIO.setmode(GPIO.BCM)

# Setup the GPIO buttons
GPIO.setup(17, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(24, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Setup movie destination variables
movie1 = ("/home/pi/Videos/movie1.mp4")
movie2 = ("/home/pi/Videos/movie2.mp4")

# Make boolean variables
last_state1 = True
last_state2 = True
input_state1 = True
input_state2 = True
quit_video = True
player = False  # whether omxplayer is currently running

# Now to put it all to work
while True:
    # Read states of inputs
    input_state1 = GPIO.input(17)
    input_state2 = GPIO.input(18)
    quit_video = GPIO.input(24)

    # If GPIO(17) is shorted to ground
    if input_state1 != last_state1:
        if (player and not input_state1):
            os.system('killall omxplayer.bin')
            omxc = Popen(['omxplayer', '-b', movie1])
            player = True
        elif not input_state1:
            omxc = Popen(['omxplayer', '-b', movie1])
            player = True

    # If GPIO(18) is shorted to ground
    elif input_state2 != last_state2:
        if (player and not input_state2):
            os.system('killall omxplayer.bin')
            omxc = Popen(['omxplayer', '-b', movie2])
            player = True
        elif not input_state2:
            omxc = Popen(['omxplayer', '-b', movie2])
            player = True

    # If omxplayer is running and GPIO(17) and GPIO(18) are NOT shorted to ground
    elif (player and input_state1 and input_state2):
        os.system('killall omxplayer.bin')
        player = False

    # GPIO(24) to close omxplayer manually - used during debug
    if quit_video == False:
        os.system('killall omxplayer.bin')
        player = False

    # Set last_input states
    last_state1 = input_state1
    last_state2 = input_state2

Now you should be able to run videoplayer.py and start triggering your videos via your GPIOs.

Step 4: And....Action

It is time to put our code to use. If you want to run it through IDLE, press F5.
You can also run it from the terminal by entering:

pi@raspberrypi:~$ python3 videoplayer.py

Now it is time for the test. Start shorting out your GPIOs and watch the videos play!

How it works

When you short out GPIO 17, the appropriate video will play. If you short out GPIO 18 while GPIO 17's movie is playing, the video will stop and GPIO 18's video will start playing. If no GPIOs are shorted, omxplayer will close.

Step 5: What's Next

Now that this part is done, the next step is to have this script run at startup. After that, who knows? I just got this crazy idea to utilize a Windows box that will be connected to the TV/Monitor to host the videos and send the signals from the Pi via wifi. Who knows?

Thank you for viewing my project.
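The polling loop is really a small state machine. As an illustration (the function name and return values below are mine, not part of the original project), the decision logic can be factored into a pure function and sanity-checked without any GPIO hardware:

```python
def next_action(player, input1, last1, input2, last2):
    """Return ('play', movie_index), ('stop', None) or (None, None).

    Inputs are active-low, matching the pull-up wiring used in the
    tutorial: False means the pin is shorted to ground.
    """
    if input1 != last1 and not input1:
        return ('play', 1)      # GPIO 17 newly shorted -> play movie 1
    if input2 != last2 and not input2:
        return ('play', 2)      # GPIO 18 newly shorted -> play movie 2
    if player and input1 and input2:
        return ('stop', None)   # nothing shorted -> kill omxplayer
    return (None, None)

# Quick sanity checks
print(next_action(False, False, True, True, True))  # -> ('play', 1)
print(next_action(True, True, True, False, True))   # -> ('play', 2)
print(next_action(True, True, True, True, True))    # -> ('stop', None)
```

Factoring the logic out this way makes it easy to test on a desktop machine before wiring anything up.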
https://www.hackster.io/ThothLoki/play-video-with-python-and-gpio-a30c7a
App Engine and SSL

05 Apr 2015 - Tags: gae

Google App Engine is a great platform for getting things done quickly. However, it can be very unpleasant to work with due to its sandboxed environment and closed source code. Even basic needs such as installing third-party libraries can be tricky. Getting one of the most popular Python libraries, python-requests, running and working with SSL connections was particularly tricky. I'll walk through how I fixed the issue.

Start by adding the library to the project:

    # From project root.
    pip install -t lib/ requests

The above command pip-installs the requests library into the lib directory. This is where all the third-party libraries can be placed. Now we need to let App Engine know about this. Create or modify the file appengine_config.py in the root of the project:

    from google.appengine.ext import vendor
    vendor.add('lib')

appengine_config.py runs when a new instance is created. vendor.add adds the specified path to $PYTHONPATH. At this point, most third-party libraries work just fine. However, there's a bit of work that needs to be done to get requests working. Head over to and execute:

    import requests
    r = requests.get('')
    print(r.status_code)

In a normal Python environment, the code executes just fine, printing a 200 status. But on GAE, the following exception occurs:

      File ".../lib/requests/__init__.py", line 58, in <module>
        from . import utils
      File ".../lib/requests/utils.py", line 26, in <module>
        from .compat import parse_http_list as _parse_list_header
      File ".../lib/requests/compat.py", line 42, in <module>
        from .packages.urllib3.packages.ordered_dict import OrderedDict
      File ".../lib/requests/packages/__init__.py", line 95, in load_module
        raise ImportError("No module named '%s'" % (name,))
    ImportError: No module named 'requests.packages.urllib3'

The issue goes away once the ssl library is included in app.yaml:

    libraries:
    - name: ssl
      version: latest

But wait, there's more! The code should now work remotely. However, it still doesn't work on the development server:

      File ".../lib/requests/api.py", line 68, in get
        return request('get', url, **kwargs)
      File ".../lib/requests/api.py", line 50, in request
        response = session.request(method=method, url=url, **kwargs)
    [...]
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 387, in wrap_socket
        ciphers=ciphers)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 141, in __init__
        ciphers)
    TypeError: must be _socket.socket, not socket

The problem is that GAE has a "whitelist" of select standard libraries. SSL (_ssl, _socket) is not one of them. So, we need to tweak the sandbox environment (dangerous) carefully. The code below uses the standard Python socket library instead of the GAE-provided one in the development environment. Modify appengine_config.py:

    import os

    # Workaround the dev-environment SSL
    #
    if os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
        import imp
        import os.path
        from google.appengine.tools.devappserver2.python import sandbox

        sandbox._WHITE_LIST_C_MODULES += ['_ssl', '_socket']
        # Use the system socket.
        psocket = os.path.join(os.path.dirname(os.__file__), 'socket.py')
        imp.load_source('socket', psocket)

With that in place, the development server can make HTTPS connections:

    INFO 2015-04-04 06:57:28,449 module.py:737] default: "POST / HTTP/1.1" 200 4
    INFO 2015-04-04 06:57:46,868 connectionpool.py:735] Starting new HTTPS connection (1): httpbin.org

This solution mostly works, except for non-blocking sockets. I haven't had a need for that yet :)

References

0: Open issue that is 2 years old
1:
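The workaround hinges on loading a module from an explicit file path and registering it under a chosen name. As an illustration only (not the GAE code itself, which uses the older Python 2 imp API), the same idea looks like this with importlib on modern Python; the module name "mymod" and its contents are invented for the example:

```python
import importlib.util
import os
import sys
import tempfile

# Write a tiny module to disk to stand in for the system socket.py.
path = os.path.join(tempfile.mkdtemp(), "mymod.py")
with open(path, "w") as f:
    f.write("VALUE = 42\n")

# Load the file and register it under the name "mymod", just as
# imp.load_source('socket', psocket) registers the system socket
# module under the name "socket".
spec = importlib.util.spec_from_file_location("mymod", path)
module = importlib.util.module_from_spec(spec)
sys.modules["mymod"] = module
spec.loader.exec_module(module)

print(module.VALUE)  # -> 42
```

Because the loaded module is placed into sys.modules under the chosen name, later imports of that name resolve to the file we loaded, which is exactly how the dev-server socket substitution works.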
https://blog.bekt.net/p/gae-ssl/
Anish Walawalkar (8,534 Points)

Python regexp problem: re.search() didn't get the correct match

I don't know exactly what the problem with my re.search() is in numbers().

Question: Write a function numbers() that takes 2 arguments, a count and a string, and returns a search object containing the count # of characters.

import re

def first_number(string):
    return re.search(r'\d', string)

def numbers(count, string):
    return re.search(r"\w" * count, string)

2 Answers

Robert Richey (Courses Plus Student, 16,352 Points)

Hi Anish,

The pattern needs to match numbers only, not alphanumeric characters.
- Return a match for exactly count numbers in the string

Unsubscribed User (3,261 Points)

Look at your search pattern in numbers():
+ What does \w return?
+ What is the question asking you to return?

Ira Bradley (12,964 Points)

It shouldn't be \w - it should be \d, like this:
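Putting the hints together, a corrected numbers() (my illustration of the fix the answers point to) simply swaps \w for \d:

```python
import re

def numbers(count, string):
    # \d matches digits only; \w also matches letters and underscores,
    # which is why the original pattern returned the wrong match.
    return re.search(r"\d" * count, string)

match = numbers(3, "abc123xyz")
print(match.group())  # -> 123
```

With \w, the original pattern would already match "abc" in that string; with \d it skips ahead to the first run of three digits.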
https://teamtreehouse.com/community/python-regexp-problem-research-didnt-get-the-correct-match
Write a function with one parameter (a linked list of ints), converts it into an array of ints, and then returns that array. The first element contains the number of elements in the array. Hence if the linked list has two nodes in it, then the array you return should have three elements, with the first one having value 2 and the remaining 2 elements containing the values from the linked list (in the same order as in the list). (15 marks) Each node of the linked list consists of two consecutive 4-byte values. The first is the value of the node (an int). The second is a pointer (address) to the next node in the list, with 0 representing null. (2 marks) The function header is: int * list2array(void * list) That is, the function is sent a pointer to the first node of the list but with an unkown type and it returns the address of the first element of the array. You can assume the list sent to the function is well formed (no bad links). You can't assume anything about the length of the list, so you'll have to allocate space for the array being returned dynamically. (2 marks) An "empty" list will be indicated by the pointer sent to the function being null (0) - in this case return the address of a one-element array containing 0. (1 mark) Put your function in a file named list2array.c. Don't put any test code in this file - especially NO main() function. Below is a simple test program for the list2array function. Copy and paste it into a separate file (testlist2array.c). Modify the list to test your function with different data. 
# include <stdio.h>

int * list2array(void * list);

int main() {
    // create a list with three nodes: 17 -> -2 -> 3
    // with the first node at &data[2]
    int data[6];
    data[0] = -2;
    data[1] = (int)&data[4];
    data[2] = 17;
    data[3] = (int)&data[0];
    data[4] = 3;
    data[5] = 0;

    // call the function
    int * array = (int *)list2array((void *)&data[2]);

    // print the result
    if (array[0] == 0) {
        printf("[]\n");
    } else {
        int i;
        printf("[%d", array[1]);
        for (i=2; i<=array[0]; i+=1) {
            printf(", %d", array[i]);
        }
        printf("]\n");
    }
    return 0;
}

Your code should compile with no errors (including warnings). Warnings will result in lost points. Use -std=c99 during compilation (e.g., gcc -std=c99 -o testlist2array list2array.c testlist2array.c). Test your program as completely as you can before submission.

How to do? You must use the C programming language in a Linux environment for this assignment.

What to submit? Printed report and softcopy of the tested program in a text file.

When to submit? You must submit your program by the due date/time in order to receive full credit. Every late submission will incur a 10% penalty per day. Submission MUST be with the official assignment cover page. Any plagiarism or cheating will be penalized.
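For reference, here is one possible sketch of the required function (not an official solution). It interprets each node through a struct, which matches the "two consecutive 4-byte values" layout on the 32-bit target the assignment assumes; the struct name is my own:

```c
#include <stdlib.h>

/* One node: a value followed by a pointer to the next node (NULL = end).
   On a 32-bit target this matches the two consecutive 4-byte values
   described in the assignment. */
struct node {
    int value;
    struct node *next;
};

int * list2array(void * list)
{
    struct node *head = (struct node *)list;

    /* First pass: count the nodes. */
    int count = 0;
    for (struct node *p = head; p != NULL; p = p->next)
        count += 1;

    /* Element 0 holds the length, so allocate count + 1 ints. */
    int *array = malloc((count + 1) * sizeof(int));
    if (array == NULL)
        return NULL;

    array[0] = count;

    /* Second pass: copy the values in list order. */
    int i = 1;
    for (struct node *p = head; p != NULL; p = p->next)
        array[i++] = p->value;

    return array;
}
```

Two passes keep the code simple: the first determines how much memory to allocate dynamically, the second fills the array. An empty list (NULL input) naturally yields a one-element array containing 0.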
http://forums.devshed.com/programming-42/assignment-954362.html
Funny story before I get started: I was in our elevator last week and an acquaintance from the 6th floor asked me if there were RSS feeds for the Journals. I would be lying if I said I wasn't a little taken aback. Number one, I didn't think anyone I knew in real life actually read the Ars Journals and number two, I was surprised he didn't know about our RSS feeds! We actually have a number of RSS feeds that you can use depending on how you'd like to read the journals. First off, we have individual feeds for each of the three journals: Infinite Loop, M-Dollar, and Opposable Thumbs. Lastly, we have a single feed that has the posts from all three Ars Journals mixed together. I hope you RSS-fiends enjoy; here's this week's wrap-up:

Opposable Thumbs

Ben has made a number of posts this week about the PSP and all of the cool stuff that people are doing with them. First up is a hack that allows you to play retail games directly off of memory sticks. Ben ponders the future of the platform if people begin buying the PSP solely for hacking and not for the lucrative games. Also on the PSP tip is a cool beat sequencing application made specifically for the PSP platform. Ben seemed to really like it but strangely enough didn't post any of his own creations for us to listen to. In other gaming news, the editor of a popular gaming blog—1up.com—was invited to talk about gaming on CNBC and it turned out he was to be vilified (falsely) as a defender of violent video games. Finally, Beijing has opened their first Internet addiction center. Apparently there are such rampant cases of depression, nervousness, and social ineptness linked to online gaming in China that they've identified the need for specialized centers like these. It's only a matter of time before we have to check one of our loved ones into a center for WoW addiction.

M-Dollar

M-Dollar's news this week was punctuated with stories about the author of the infamous Sasser worm and his literal trials and tribulations. As it turns out, the 17-year-old ended up getting a measly 21-month probation. Plenty of people are rightfully upset that the kid who caused millions of dollars in damages is essentially walking free with little or no tangible punishment. I vote for a public flogging! While we're talking about the Sasser guy, I should also mention that the guys who turned him in each received US$125,000 in the form of a Microsoft bounty. They'd offered a reward of US$250,000 for information leading to the author's capture, and, well, they followed through. In strictly Microsoft news, Josh informed us that the upcoming SP2 for Exchange 2003 will be supporting a controversial spam-filtering technology, SenderID. Josh clears up some misinformation on the topic and fills us in on the new "Anti-Phishing" filter that is also due to make its appearance in SP3. Finally, some solid dates are coming forth on the release of the first public betas of Microsoft's Longhorn operating system. "Late July" is apparently the target for both Client and Server versions of Longhorn beta 1 as well as beta 1 for IE 7.

Infinite Loop

Now we're into my home turf! This past week people continued to speculate about the iPhone introduction. A picture of the supposed device even leaked to the web, but alas, July 7 came and went without the much-anticipated announcement. Not content with the anticlimactic iPhone drama, a journalist at Forbes.com drummed up some noise about Apple possibly starting up their own Mobile Virtual Network Operator like Boost to subvert the red tape involved with the major carriers. Most people consider this a bunch of ballyhoo and I must admit I find it far-fetched as well. I wouldn't get your hopes up of signing a contract with Apple Mobile any time soon. Criticism of Apple's RSS implementation for Podcasting support in iTunes 4.9 surfaced this week. Many people were a little miffed to find out that Apple created a few redundant tags and some small namespace issues. All in all, it's not that big of a deal, but it's possible that Apple could've done a bit more polishing and got the spec a bit more acceptable. At the same time that people were picking nits with iTunes, the iTunes Music Store is poised to break the 500,000,000th song barrier. The person who buys the half-billionth song will win a whole stack of iPods (10 of them) and a mind-boggling 10,000-song gift card amongst lesser prizes. Finally, to round out this roundup, a well-known Mac news and information site, Macintouch, released the results of its semi-formal survey of Macintosh owners with regard to their failure rates. Their study both confirmed well-known problems and brought perspective on others. According to Macintouch's data, the first-generation flat-panel iMacs had a horrendous failure rate while PowerMac G5s and Mac Minis had relatively low failure rates. Caveat emptor when scooping up those old iMacs on eBay! Have a great week and keep your eyes on Journals.ars for more up-to-date daily news on all the cool stuff you'd have to go elsewhere to find. (And we wouldn't want that, now would we?)
http://arstechnica.com/uncategorized/2005/07/5078-2/?comments=1
Documentation¶

Contributions to Bokeh will only be accepted if they contain sufficient and appropriate documentation support. This section outlines how to build and edit documentation locally, as well as describes conventions that the project adheres to.

Helping with documentation is one of the most valuable ways to contribute to a project. It's also a good way to get started and introduce yourself as a new contributor. The most likely way for typos or other small documentation errors to be resolved is for the person who notices the problem to immediately submit a Pull Request with a correction. This is always appreciated! In addition to quick fixes, there is also a list of Open Docs Issues on GitHub that anyone can look at for tasks that need help.

Working on Documentation¶

Sphinx is used to generate HTML documentation. Due to the innate dependency of Bokeh on JavaScript, no other output formats are supported by the official build configuration. This section describes how to use Sphinx to build the Bokeh documentation from source.

Install requirements¶

In order to build the docs from source, you should have followed the instructions in Getting Set Up. Some of the Bokeh examples in the documentation require sample data. See Sample Data for alternative instructions on how to download the sample data.

Build the Documentation¶

To generate the full documentation, first navigate to the sphinx subdirectory of the Bokeh source tree.

cd sphinx

Sphinx uses the make tool to control the build process. The most common targets of the Bokeh makefile are:

clean - Remove all previously built documentation output. All outputs will be generated from scratch on the next build.
html - Build any HTML output that is not built, or needs re-building (e.g. because the input source file has changed since the last build)
serve - Start a server to serve the docs, and open a web browser to display them.
Note that due to the JavaScript files involved, starting a real server is necessary to view many portions of the docs fully.

For example, to clean the docs build directory, run the following command at the command line:

make clean

Multiple targets may be combined in a single make invocation. For instance, executing the following command will clean the docs build, generate all HTML docs from scratch, then open a browser to display the results:

make clean html serve

By default, built docs will load the most recent BokehJS from CDN. If you are making local changes to BokehJS and wish to have the docs use your locally built BokehJS instead, you can set the environment variable BOKEH_DOCS_CDN before calling make:

BOKEH_DOCS_CDN=local make clean html serve

Source Code Documentation¶

Docstrings and Model help are available from a Python interpreter, but are also processed by the Sphinx build to automatically generate a complete Reference Guide. As a general rule, prefer a terse, imperative docstring such as

''' Create and return a new Foo. '''  (GOOD)

over the more verbose sentence below:

''' This function creates and returns a new Foo. '''  (BAD)

Docstrings for functions and methods should generally include the following sections:

Args (unless the function takes no arguments)
Returns

A complete example might look like:

def somefunc(name, level):
    ''' Create and return a new Foo.

    Args:
        name (str) : A name for the Foo

        level (int) : A level for the Foo to be configured for

    Returns:
        Foo

    '''

Models and Properties¶

Bokeh's Model system supports its own mechanism for providing detailed documentation for individual properties. The help is given as an argument to the property type, and is interpreted as standard Sphinx ReST when the reference documentation is built. For example:

class DataRange(Range):
    ''' A base class for all data range types.

    '''

    names = List(String, help="""
    A list of names to query for. If set, only renderers that
    have a matching value for their ``name`` attribute will be used
    for autoranging.
    """)

    renderers = List(Instance(Renderer), help="""
    An explicit list of renderers to autorange against. If unset,
    defaults to all renderers on a plot.
    """)

Narrative Documentation¶

The narrative documentation consists of all the documentation that is not automatically generated from docstrings and Bokeh property help strings. This includes the User's Guide, Quickstart, etc. The source code for these docs are standard Sphinx reStructuredText (ReST) files that are located under the sphinx/source/docs subdirectory of the source tree.

Section Headings¶

In narrative documentation, headings help the users follow the key points and sections. The following outlines the headings hierarchy:

Top level
=========

This will add a "Top Level" entry in the navigation sidebar.

Second level
------------

This may add a sub-entry in the sidebar, depending on configuration.

Third level
~~~~~~~~~~~

Fourth level
''''''''''''

Note that the length of the underline must match that of the heading text, or else the Sphinx build will fail.

Release Notes¶

Each release should add a new file under sphinx/source/docs/releases that briefly describes the changes in the release, including any migration notes. The filename should be <version>.rst, for example sphinx/source/docs/releases/0.12.7.rst. The Sphinx build will automatically add this content to the list of all releases.
https://docs.bokeh.org/en/0.12.11/docs/dev_guide/documentation.html
jQuery + Ruby/Rails expert - Status Closed - Budget $15 - $25 AUD / hour - Total Bids 7

Project Description

I need someone who is focused on these technologies - Ruby + Rails + jQuery/Prototype - and not doing 5-6 different technologies and languages. So if you have a mix of everything in your portfolio you will most likely be rejected outright.

You should be comfortable working in a Linux shell and know how to use Git. You should be up to date with best practices and trends, HTML5, etc. You should also be not just an algorithmic solver, but think about user experience and usability when you get things done.

I need someone who can provide realistic estimates. E.g. to have a date field loaded into a date picker and vice-versa, validate it and store it into a database - how much time would that take you? If you do not provide this in your application you will be rejected.

You must be honest (I had guys applying with fake profile pictures, really?!), hard working, and have free time on your hands, because this is a large and demanding project. Please do not apply if you have other long-term things on your plate or another full-time job. I'm very technical myself and worked as a programmer for many years, so it's easy to work with me; at the same time it's not easy to fool me about quality/experience/value. I need a good communicator - somebody who can provide short but effective summaries and be available for quick questions once or twice a day through chat. I'm fed up with managers and incompetent companies who try to cover up mistakes and explain away inefficiency. INDIVIDUAL FREELANCERS ONLY, no companies!

+ experience with Rails 2.3.8
+ HAML templating and SASS style generation
+ High respect for standards, e.g. W3C - produce valid, error-free, readable code
+ experience with troubleshooting cross-browser compatibilities

Pick one of your favorite gems among these: activerecord, eventmachine, fastthread, ancestry, bundler, haml, passenger, sass, pg, starling, memcache, ffi, httparty, whenever, simple-navigation. Explain what it does, how it makes your life easier, and why it is better than alternative solutions. What is your choice of automated test framework, and why?

What is the expected type of the below, and what type does it return? Does it always return that type? If yes, why; if not, why not?

def empty?(s)
  return [url removed, login to view] == 0
end

If you do not answer my questions and just reply "yes, I read everything and here are my references, I'm great", you will be rejected. So make sure you answer my questions; that's how I test that you really read and understood what I wrote.

Provide ONE reference only. It should be something which has valid HTML/CSS code. It should be a direct page to something which has some code logic, exception handling or some other complexity like parsing an uploaded file, etc. - something you are proud of. Remember, only ONE reference, so choose carefully. If you fail to follow this instruction I will assume you did not read what I wrote or did not understand it.
https://www.freelancer.com/projects/Ruby-on-Rails-HTML/jQuery-Ruby-Rails-expert/
test provides a standard way of writing and running tests in Dart.

A single test file can be run just using pub run test:test path/to/test.dart (on Dart 1.10, this can be shortened to pub run test path/to/test.dart). Many tests can be run at a time using pub run test:test path/to/dir.

By default, tests are run in the Dart VM, but you can run them in the browser as well by running pub run test:test -p "chrome,vm" path/to/test.dart.

Platform selectors can contain identifiers, parentheses, and operators. A selector evaluates to true or false based on the current platform, and the test is only loaded if the platform selector returns true. The operators ||, &&, !, and ? : all work just like they do in Dart. The valid identifiers are:

vm: Whether the test is running on the command-line Dart VM.
dartium: Whether the test is running on Dartium.
content-shell: Whether the test is running on the headless Dartium content shell.
dart-vm: Whether the test is running on the Dart VM in any context, including Dartium.
windows: Whether the test is running on Windows. If vm is false, this will be false as well.
mac-os: Whether the test is running on Mac OS. If vm is false, this will be false as well.
linux: Whether the test is running on Linux. If vm is false, this will be false as well.
android: Whether the test is running on Android. If vm is false, this will be false as well, which means that this won't be true if the test is running on an Android browser.
posix: Whether the test is running on a POSIX operating system. This is equivalent to !windows.

For example, if you wanted to run a test on every browser but Chrome, you would write @TestOn("browser && !chrome").

Tests can be run on Dartium by passing the -p dartium flag. If you're using the Dart Editor, the test runner will be able to find Dartium automatically. On Mac OS, you can also install it.

Tests written with async/await will work automatically. The test runner won't consider the test finished until the returned Future completes.
import "dart:async"; import "package:test/test.dart"; void main() { test("new Future.value() returns the value", () async { var value = await new("new Future.value() returns the value", () { expect(new Future.value(10), completion(equals(10))); }); } The [`throwsA()`] = new Stream.fromIterable([1, 2, 3]); stream.listen(expectAsync((number) { expect(number, inInclusiveRange(1, 3)); }, count: 3)); }); } By default, the test runner will generate its own empty HTML file for browser tests. However, tests that need custom HTML can create their own files. These files have three requirements: They must have the same name as the test, with .dart replaced by .html. a test, group, or entire suite isn't't. However, this: new Timeout.factor(2)) }, timeout: new Timeout(new": new.. This feature is only supported on Dart 1.9.2 and higher.:test as normal: $ pub run test:test --pub-serve=8081 -p chrome "pub serve" is compiling test/my_app_test.dart... "pub serve" is compiling test/utils_test.dart... 00:00 +42: All tests passed! Fix an uncaught error that could crop up when killing the test runner process at the wrong time. collectionpackage.. This version was unpublished due to issue 287.. This version was unpublished due to issue 287. Limit the number of test suites loaded at once. This helps ensure that the test runner won't run out of memory when running many test suites that each load a large amount of code.). Fix a bug that caused the test runner to crash on Windows because symlink resolution failed. If a future matched against the completes. Convert JavaScript stack traces into Dart stack traces using source maps. This can be disabled with the new --js-trace flag. Improve the browser test suite timeout logic to avoid timeouts when running many browser suites at once. --verbose-traceflag to include core library frames in stack traces. no longer has a named failureHandler argument. expect added an optional formatter argument. completion argument id renamed to description. 
- Narrow the constraint on matcher to ensure that new features are reflected in unittest's version.
- Print a warning instead of throwing an error when setting the test configuration after it has already been set. The first configuration is always used.
- Fix a bug in withTestEnvironment where test cases were not reinitialized if called multiple times.
- Add a reason named argument to expectAsync and expectAsyncUntil, which has the same definition as expect's reason argument.
- Deprecated methods have been removed:
  - expectAsync0, expectAsync1, and expectAsync2 - use expectAsync instead
  - expectAsyncUntil0, expectAsyncUntil1, and expectAsyncUntil2 - use expectAsyncUntil instead
  - guardAsync - no longer needed
  - protectAsync0, protectAsync1, and protectAsync2 - no longer needed
- matcher.dart and mirror_matchers.dart have been removed. They are now in the matcher package.
- mock.dart has been removed. It is now in the mock package.
- DEPRECATED: matcher.dart and mirror_matchers.dart are now in the matcher package. mock.dart is now in the mock package.
- equals now allows a nested matcher as an expected list element or map value when doing deep matching.
- expectAsync and expectAsyncUntil now support up to 6 positional arguments and correctly handle functions with optional positional arguments with default values.
- Each test is run in a separate Zone. This ensures that any exceptions that occur in async operations are reported back to the source test case.
- DEPRECATED: guardAsync, protectAsync0, protectAsync1, and protectAsync2. Running each test in a Zone addresses the need for these methods.
- NEW! expectAsync replaces the now deprecated expectAsync0, expectAsync1 and expectAsync2.
- NEW! expectAsyncUntil replaces the now deprecated expectAsyncUntil0, expectAsyncUntil1 and expectAsyncUntil2.
- TestCase:
  - Removed properties: setUp, tearDown, testFunction
  - enabled is now get-only
  - Removed methods: pass, fail, error
- interactive_html_config.dart has been removed.
- runTests, tearDown, setUp, test, group, solo_test, and solo_group now throw a StateError if called while tests are running.
- rerunTests has been removed.

Add this to your package's pubspec.yaml file:

dependencies:
  test: "^0.12.3+8"

You can install packages from the command line:

$ pub get

Alternatively, your editor might support pub. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:test/test.dart';
https://pub.dartlang.org/packages/test
This article presents a thin wrapper around Lua, see [1], and Luabind, see [2], for .NET: with it you can embed a scripting engine with a C, C++ backbone into your .NET applications. If you are not familiar with Lua and Luabind, a small introductory example is also given below. Here is a quote extracted from the Lua "about" page:

"Lua is a powerful light-weight programming language designed for extending C, C++ applications."

Usually, using Lua in an application is done with the following steps: allocate a Lua state, register the C functions you want to expose, execute your scripts, and finally close the state.

As mentioned above, lua_open is used to allocate a Lua state:

lua_State* L = lua_open();

Suppose that the following method needs to be bound:

void my_print( const char* str )
{
    printf( str );
}

First, we need to make a function wrapper for my_print that receives a Lua state and returns an integer, the number of values it wants to return to Lua:

int my_print_lua(lua_State *L)
{
    /* get the number of arguments (n) and check that the stack contains at least 1 */
    int n = lua_gettop(L);
    if (n<1)
    {
        lua_pushstring(L, "not enough arguments");
        lua_error(L);
    }

    /* try to cast the argument to a string and call my_print
       - remember that Lua is dynamically typed -
       we take the first element in the stack */
    my_print( luaL_checkstring( L, 1) );

    /* my_print does not return values */
    return 0;
};

At last, the method is registered in Lua using:

lua_register(L, "pr", my_print_lua);

Remark: Handling the Lua stack can become tedious and error-prone. Hopefully, Luabind is here to simplify (a lot) the wrapping task.

Consider the following script:

s = 'this is a test';
pr( s );

As one can see, Lua syntax is quite straightforward. In the script, we see that the method we bound (pr) is called.
To execute this script in C, lua_dostring is used:

const char* str = "s = 'this is a test'; pr( s );";
lua_dostring(L, str);

The program will output:

this is a test

When you are finished, do not forget to call lua_close to deallocate the Lua state:

lua_close(L);

As mentioned before, handling the Lua state is tedious work that we would like to avoid. Luabind uses template meta-programming to simplify things for us. Using Luabind, the previous steps become:

using namespace luabind; // luabind namespace

lua_State* L = lua_open();
luabind::open(L); // init luabind

module(L)
[
    def("pr", &my_print)
];

So what about executing Lua scripts in .NET applications? This should not be a major problem, just a matter of writing a managed C++ wrapper.

The managed class State wraps up the lua_State structure. It handles the calls to lua_open and lua_close. Here's a small example that creates a state, sets some variables and executes a script:

Lua.State L = new Lua.State(); // creating a Lua state
L.set_Global("a", 1); // equivalent to a=1 in Lua
L.DoString("b=a * 2;"); // executing a Lua script
Double b = L.get_Global("b").ToDouble(); // retrieve the value of b

Note that get_Global returns a LuaObject which is then cast to a double using ToDouble. LuaObject enables you to set and retrieve values from Lua. Supported values are: string, int, bool and table. Since Lua is dynamically typed, you need to check the type of the LuaObject before casting it. Otherwise, if the cast fails, an exception will be raised:

L.DoString("s='string';");
L.DoString("print('s: ' .. tostring(s));");
LuaObject s = L.get_Global("s");
s.ToString(); // ok
s.ToDouble(); // fails, s is a string

If the object is a table, LuaObject implements the IDictionary interface on the table and TableEnumerator implements the IDictionaryEnumerator interface:

L.DoString("t={1,'string'};");
LuaObject t = L.get_Global("t");
System.Collections.IDictionaryEnumerator te = t.GetEnumerator();
while( te.MoveNext() )
{
    System.Console.WriteLine("t...(" + te.Key + "," + te.Value + ")");
}

Lua comes with a set of default APIs to handle strings, tables, IO, files, etc. To load these APIs, use DefaultLibrary.Load:

Lua.Libraries.DefaultLibrary.Load( L );

Luabind is loaded using LuabindLibrary.Load:

Lua.Libraries.LuabindLibrary.Load( L );

You can wrap up your own APIs and load them in .NET using the same approach as DefaultLibrary or LuabindLibrary.

The demo features Lua 5.0 and Luabind. You need to recompile all the projects and launch LuaNetTest.

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here.
http://www.codeproject.com/Articles/4723/LuaDotNet-a-thin-wrapper-around-Lua-and-Luabind-fo?msg=613193
CC-MAIN-2015-14
refinedweb
800
54.93
patches have found their way into the mainline git repository since the -rc2 release. For older kernels: 2.6.22.11 and 2.6.22.12 came out on November 2 and 5, respectively. These releases contain a number of patches, including one which is security-related for both people running Minix filesystems. Greg Kroah-Hartman has recently said that, contrary to previous indications, the 2.6.22.x series will continue for a while yet. 2.6.16.56 and 2.6.16.57 were released on November 1 and 5, respectively. They contain quite a few fixes, several of which have vulnerability numbers associated with them. Kernel development news Quotes of the week Memory management for graphics processors Once. Process IDs in a multi-namespace world On November 1, Ingo Molnar pointed out that some questions raised by Ulrich Drepper back in early 2006 remained unanswered. These questions all have to do with what happens when the use of a PID escapes the namespace that it belongs to. There are a number of kernel APIs related to interprocess communication and synchronization where this could happen. Realtime signals carry process ID information, as do SYSV message queues. At best, making these interfaces work properly across PID namespaces will require that the kernel perform magic PID translations whenever a PID crosses a namespace boundary. The biggest sticking point, though, would appear to be the robust futex mechanism, which uses PIDs to track which process owns a specific futex at any given time. One of the key points behind futexes is that the fast acquisition path (when there is no contention for the futex) does not require the kernel's involvement at all. But that acquisition path is also where the PID field is set. So there is no way to let the kernel perform magic PID translation without destroying the performance feature that was the motivation for futexes in the first place. 
Ingo, Ulrich, and others who are concerned about this problem would like to see the PID namespace feature completely disabled in the 2.6.24 release so that there will be time to come up with a proper solution. But it is not clear what form that solution would take, or if it is even necessary. The approach seemingly favored by Ulrich is to eliminate some of the fine-grained control that the kernel currently provides over the sharing of namespaces. With the 2.6.24-rc1 interface, a process which calls clone() can request that the child be placed into a new PID namespace, but that other namespaces (filesystems, for example, or networking) be shared. That, says Ulrich, is asking for trouble: Coalescing a number of the namespace options into a single "new container" bit would help with the current shortage of clone bits. But it might well not succeed in solving the API issues. Even processes with different filesystem namespaces might be able to find the same futex via a file visible in both namespaces. The passing of credentials over Unix-domain sockets could throw in an interesting twist. And it would seem that there are other places where PIDs are used that nobody has really thought of yet. Another possible approach, one which hasn't really featured in the current debate, would be to create globally-unique PIDs which would work across namespaces. The current 32-bit PID value could be split into two fields, with the most significant bits indicating which namespace the PID (contained in the least significant bits) is defined in. Most of the time, only the low-order part of the PID would be needed; it would be interpreted relative to the current PID namespace. But, in places where it makes sense, the full, unique PID could be used. That would enable features like futexes to work across PID namespaces. There are still problems, of course. 
The whole point of PID namespaces is to completely hide processes which are outside of the current namespace; the creation and use of globally-unique PIDs pokes holes in that isolation. And there's sure to be some complications in the user-space API which prove to be hard to paper over. Then, there is the question of whether this problem is truly important or not. Linus thinks not, pointing out that the sharing of PIDs across namespaces is analogous to the use of PIDs in lock files shared across a network. PID-based locking does not work on NFS-mounted files, and PID-based interfaces will not work between PID namespaces. Linus concludes: One could argue that the conflict with PID namespaces was known when the robust futex feature was merged and that something could have been done at that time. But that does not really help anybody now. And, in any case, there are issues beyond futexes. PID namespaces are a significant complication of the user-space API; they redefine a basic value which has had a well-understood meaning since the early days of Unix. So it is not surprising that some interesting questions have come to light. Getting solid answers to nagging API questions has not always been the strongest point of the Linux development process, but things could always change. With luck and some effort, these questions can be worked through so that PID namespaces, when they become available, will have well-thought-out and well-defined semantics in all cases and will support the functionality that users need. Page replacement for huge memory systems As the amount of RAM installed in systems grows, it would seem that memory pressure should reduce, but, much like salaries or hard disk space, usage grows to fill (or overflow) the available capacity. Operating systems have dealt with this problem for decades by using virtual memory and swapping, but techniques that work well with 4 gigabyte address spaces may not scale well to systems with 1 terabyte. 
That scalability problem is at the root of several different ideas for changing the kernel, from supporting larger page sizes to avoiding memory fragmentation. Another approach to scaling up the memory management subsystem was recently posted to linux-kernel by Rik van Riel. His patch is meant to reduce the amount of time the kernel spends looking for a memory page to evict when it needs to load a new page. He lists two main deficiencies of the current page replacement algorithm. The first is that it sometimes evicts the wrong page; this cannot be eliminated, but its frequency might be reduced. The second is the heart of what he is trying to accomplish: A system with 1TB of 4K pages has 256 million pages to deal with. Searching through the pages stored on lists in the kernel can take an enormous amount of time. According to van Riel, most of that time is spent searching pages that won't be evicted anyway, so in order to deal with systems of that size, the search needs to focus in on likely candidates. Linux tries to optimize its use of physical memory, by keeping it full, using any memory not needed by processes for caching file data in the page cache. Determining which pages are not being used by processes and striking a balance between the page cache and process memory is the job of the page replacement algorithm. It is that algorithm that van Riel would eventually like to see replaced. The current set of patches, though, take a smaller step. In today's kernel, there are two lists of pages, active and inactive, for each memory zone. Pages move between them based on how recently they were used. When it is time to find a page to evict, the kernel searches the inactive list for candidates. In many cases, it is looking for page cache pages, particularly those that are unmodified and can simply be dropped, but has to wade through an enormous number of process-memory pages to find them. The solution proposed is to break both lists apart, based on the type of page. 
Page cache pages (aka file pages) and process-memory pages (aka anonymous pages) will each live on their own active and inactive lists. When the kernel is looking for a specific type, it can choose the proper list to reduce the amount of time spent searching considerably. This patch is an update to an earlier proposal by van Riel, covered here last March. The patch is now broken into ten parts, allowing for easier reviewing. It has also been updated to the latest kernel, modified to work with various features (like lumpy reclaim) that have been added in the interim. Additional features are planned to be added down the road, as outlined on van Riel's page replacement design web page. Adding a non-reclaimable list for pages that are locked to physical memory with mlock(), or are part of a RAM filesystem and cannot be evicted, is one of the first changes listed. It makes little sense to scan past these pages. Another feature that van Riel lists is to track recently evicted pages so that, if they get loaded again, the system can reduce the likelihood of another eviction. This should help keep pages in the page cache that get accessed somewhat infrequently, but are not completely unused. There are also various ideas about limiting the sizes of the active and inactive lists to put a bound on worst-case scenarios. van Riel's plans also include making better decisions about when to run the out-of-memory (OOM) killer as well as making it faster to choose its victim. Overall, it is a big change to how the page replacement code works today, which is why it will be broken up into smaller chunks. By making changes that add incremental improvements, and getting them into the hands of developers and testers, the hope is that the bugs can be shaken out more easily. Before that can happen, though, this set of patches must pass muster with the kernel hackers and be merged. 
The external user-visible impacts of these particular patches should be small, but they are fairly intrusive, touching a fair amount of code. In addition, memory management patches tend to have a tough path into the kernel. Patches and updates Kernel trees Core kernel code Development tools Device drivers Filesystems and block I/O Memory management Architecture-specific Security-related Virtualization and containers Page editor: Jonathan Corbet Next page: Distributions>> Linux is a registered trademark of Linus Torvalds
http://lwn.net/Articles/256772/
crawl-002
refinedweb
1,753
59.94
If you haven’t backed PortableSDR on Kickstarter, now’s the time to do it. [Michael Colton’s] project which frees a Software Defined Radio from being shackled to a computer is in the final three days and needs about $17,500 to make it. We’d really like to see this one succeed, and not just because PortableSDR took 3rd place in the 2014 Hackaday Prize. Many a time we’ve heard people forecast the death of amateur radio (ham if you will). The ham community is special, it’s a great way to get mentorship in electronics, and deals in more than just digital circuitry. Plus, as [Greg] has pointed out, having a license and some know-how lets you build and operate really powerful stuff! We see the PortableSDR as one way to renew interest in the hobby. We especially like it that you don’t need a license to operate the basic model — the transmitting circuits aren’t enabled when it arrives. This means you can learn about SDR, explore what’s going on over the airwaves, and only then take the leap by applying for your license and hack the unit to transmit. To be fair, the transmitter portion of the project hasn’t been published yet, which is about the only real concern we read in the Kickstarter comments. But we have faith that [Michael] will come through with that part of it. And if he needs help we’re sure he’ll have no problem finding it. Now’s the time… let’s pull this one out in the final days! 37 thoughts on “PortableSDR Needs A Cinderella Story To Finish Its Kickstarter” I was excited by this project on hack-a-day and thought it should win over the other finalists. I was also excited to see this offered on kickstarter, that is until I saw the price. The price is high for a finished, unit but at $450 it is too high especially as that only gets you a kit that is hard to understand. Most of the parts aren’t too costly maybe if he offered a receive only version (without GPS and the TX electronics) it would have been more reasonable and affordable. 
With a family I can’t justify $450, maybe $200 $250 tops for a receive only version. I hope he relaunches with a lower goal and lower prices. Couldn’t agree more. I’m sure for a lot of people this is worth the money but for me a cut down version at a lower cost would be a better option. Really? Have you priced the nearest competition lately? A $650 investment lands you a radio that is twice the size and has half the capabilities (Yaesu FT-817ND) of the PSDR. The next step up QRP radio will take you into the 4-digit range (priced a KX-3 lately?). I think that $499 is an excellent price point for this radio. I have a family and a fixed income, but I would save up for years if necessary to land a PSDR for my shack. I really hope this Kickstarter is successful. If not, I hope [Michael Colton] can find another way to get these awesome radios out to the hams all over the world that really want them. It’s not a question of nearest competition, really. It’s a question of what the crowd funding model will bear. I back a *lot* of projects, and am seriously interested in amateur radio, but I have a hard time stomaching the $450 for an unvetted designer/manufacturer chain. I would love to support him (and did for $50), but I’ve seen too many complex Kickstarter projects go under to be willing to bet with that much money. $50 to me is in the “angel investing” range. $450 is enough that I expect a polished product that I am guaranteed to be satisfied with or have some way to return if I’m not. I’d never think of returning a Kickstarter project, so I won’t back at that price point. If we want to talk of “competition”, we should talk of SDR kits, not finished transceivers. The SDR Cube for example as a complete kit is USD339: There was a survey put out and I know a receive only version was mentioned, there was a question for it. I’m on the same page as Joe in that regard. 
People can save for years, that’s not going to help Michael in anyway unless they have saved enough to cover the cost at this time. So to meet the goal, a broke ham can starve himself/family or play radio depending on finances. Had there been a lower cost option I think It may have made the goal already. I would have pledged and possibly Joe as well, making up the cost of a tx unit. Guess I’ll stick with the rtl and hamitup for now, hopefully there are enough hams with full pockets to make this go forward. Really that expensive? Damn, I did not see that coming. I love the project, but even to me that sounds kinda high… But I’m no expert. He made it with only a few hours to go… Thanks HAD for the Cinderella story. Really? We’re pimping kickstarters now? Haven’t they been doing that for some time? At least this one has a tie-in to the site because it ran pretty well in the contests. So what? It’s a great project, made by a single designer, which gives back to the community. I’m disappointed in your whining. Contribute something useful next time. Well hell. After that comment I feel more like I do now than I did a little while ago. Yea it’s interesting and I’m sure it’s well-executed. It just hit me as a little more opaque than usual I guess. I felt like I was marketed to but I have high hopes for campaign. More power to the designer. (Hope that was a little better?) Portable SDR may be the device of dreams for someone…. However for me it just misses the mark, while I enjoy listening to HF every now and then, I do most of my radio activities at 50Mhz and UP often WAY WAY up! The project is interesting but not stellar or a must have nor even something that pushes my buy this trinket on impulse button! They are asking for far to much with to little of a return (the reward / to pledged funding ratio is not acceptable.) 
In order to get any useful hardware the donation required is to large to even consider, other equipment is available for the same or close to the same price AS an working orderable product! I would need at minimum a current generation PC board with the hard to get IC’s included if not already attached to the board with the requested amount at or below $80 dollars to make me think about funding this project. Higher level funding options that included as rewards a portion of ownership of the final product with a share in any financial returns generated on future sales of kits, completed units, licensing deals, or sale of the IP or rights to other manufacturers or investors might get some bigger funding pledges from more serious venture capital types of investors. That’s an unrealistic view of the market potential. I highly doubt investors and venture capitalists would touch this. And I think it’s still not legal to crowd fund shares of a business. DainBrammage above claims the closest competition is $650 and does less. If there is competitive gear for less, it would help to name them. For starters: Which isn’t even close to what the PSDR offers. Also, if you want a completed version, it’s price is higher than the $499 price of an assembled and tested PSDR. $76 higher, in fact. My sense as a radio amateur is that this SDR project overreaches, particularly given the prices being charged. Someone who needs a powerful versatile SDR transceiver will get vastly more for only a little more by buying Elecraft’s well-designed KX3: What these Kickstarter people offer is an ill-equipped, bare-bones, hacker’s toy that’s alpha hardware and costs $450 for a raw kit. Twice that ($900) will get you a finished modular kit (just snap it together) that, as you can tell from the pictures and spec sheet, is infinitely times better. The real need lies in different area, for sub-$100 and perhaps sub-$50, single-band direct-conversion PSK transceivers for 40, 30 and 20 meters. 
They’d be compact, connect to smartphones/tablets via USB or Bluetooth, and off-load all but the radio capabilities to the attached device. The 20-meter model, for instance, could be made about the size of an iPhone, run off a few AA batteries, slip easily into your pocket, and allow users, under good band conditions to talk anywhere in the world. That would get attention. –Michael W. Perry, KE7NV/4 Try again. $499 gets you the finished product, ready to use. That’s yet to be proven. I agree it’s way too expensive. For $450 you can buy TWO netbooks and RTL-SDR dongles, and the RTL-SDR covers from ~30Mhz to 1.8Ghz. Fine, but you forgot that the PSDR is a transceiver, not just a receiver. It’s not a transceiver in its shipped form, whenever that happens. It’s clearly stated that you have to tear it open to modify it. Also, any kickstarters involving electronics or injection molding have a one year wait. Look at the hackrf. It took forever for Michael to get them to backers and it went through several design iterations — the finished product changed several times and manufacturing issues set back the project several months. I’ve been sitting on a backed project for almost two years! This is all a gamble, not worth the wait for the investment. Agree with the above – Too Expensive. $500 10 months from now for an assembled version. The HackRF is around $300 and it is fully assembled and has both Rx and Tx, and 20MHz bandwidth. And I have my own GPS boards (NavSpark which was Indegogo?). It goes from 20MHz to 6GHz, and a Ham-It-Up is only $50. 0-35 MHz? Maybe for Ham radio, maybe for CB radio? The base $400 kit requires a hot-air rework station, I have one, but I’m not ordinary. Why is the circuit board alone $50? Is it a gazillion layers? I know RF needs more care, but really? If his target was much higher ($150k?), but the price of an assembled unit was $300 or less, it would probably be funded. I’m not sure if it can even transmit (not just modulate). 
Last I checked it can’t transmit (just modulate, as you point out). This is what killed it for me. I just posted a longer comment that speaks to the cost of the PSDR, but I wanted to comment on the $50 PCB reward specifically. Of course the PCB doesn’t cost that much, You could get a pack of 10 for about $100-150 from dirtypcbs (that doesn’t mean that you could get a single board for $10-15 though, as they don’t do one-offs, unless you care to organize a group buy) The $50 reward level is for people that want to support the project, but also want to get something tangible out of it. The gerbers are freely available, if someone wants the PCB for less, they can have it made themselves. Damn, real shame, I hope it pulls through. I can’t justify spending the money on it right now being a student. But I hope that one day when I am in the market for something like this it is still around. This is one of the most feature packed boards I have seen, not to mention the size. Not to +1, but yeah, I was really excited until I saw the price. It’s more than 2x what I’d be willing to spend on something like that. $200 would be the sweet spot I think. also by “Cinderella Story” I think you mean “Hail Mary Pass” :-P Would really like one of these guys that seem to have this RF design / SDR business down put together an SDR project that is 1. Understandable to someone with a solid radio and good computer background. 2. Is modular if you can afford only the basics now but want to add a feature in the future it should be possible without great effort or specialized equipment like hot air soldering equip. or a 9Ghz spectrum analyzer. 3. Uses parts that are available and reasonably priced not 200 buck development boards or parts that are phased out 3 months after project launch. 4. Covers bands and services interesting enough to justify its purchase or construction. 
HF can be interesting every so often but The action is up in the higher bands like 2m, 70 cm, amateur bands, public safety, aircraft, even my car key fob is in the 400 – 950Mhz range ! 5. The parts should be offered individually or as kits of parts (Board, main IC’s, Programmed CPU/FPGA,) and by modular functions. My advice is to spend greatest effort on design and PC boards, programming and not worry about cases and 3d printed enclosures, those are nice but drive up costs and increase lead times to astronomically long and painful lengths of time! And by the way if a device transmits with less than say 5 watts, IT STILL takes VERY expensive amplifiers to get the signal to respectable (read useful) levels sub watt signals out of a device require very clean and not very available pre-amplifiers before the power amps as most amateur linear Amps require 10 to 25 watts input to drive them less and you get between no gain and something less than the rated output. the costs of these final stages must be considered when comparing SDR transmitting equipment with radios that are commercially offered. The Portable SDR doesn’t have any transmit capability, the analog part is not there. That was the deal breaker for me. While the ability to transmit (and read GPS) is nice, the base functions are: radio receiver, portable. As such it the extra functions need to be modular options – and the base unit needs to compete with other portable radios. Sangean, Sony, Tecsun and Grundig are the “big name” competitors in this area. They all offer receivers with HF coverage, and they all seem to have models that cost significantly less. As it stands, the Portable SDR seems like a variation of the HPSDR (with board called hermes, mercury etc). Its a nice try, but by trying to please everybody you end up with something that pleases [almost] nobody. Strip it back, make it cheaper and modular, then try again. If nothing else, stripping it down should increase battery life. 
Projects involving hardware will, more often than not, miss their scheduled release date (which is just the nature of the beast, don’t get me wrong), so it’s not like you are “ordering a ready made tool” from Kickstarter. You are BACKING a project without any guarantee at all that you will get what you pledged for. While I like the prospect of the project, I have been burnt by Kickstarter. Every single project I backed was, in parts, a cheat/fake/not quite true/didn’t deliver as promised, even the one (very expensive) I am still backing. No, I will not again back a Kickstarter project. When a project is pumped up on Kickstarter, this, to me, is a sign that there is something WRONG with the project, and you won’t learn WHAT is wrong before you got burnt. And telling the Kickstarter crew about projects that, taken seriously, break Kickstarter-rules, leads to a copy&paste “so what we don’t care”-response. Crowdfunding still seems like a good thing. But Kickstarter? No. hum, I have 5 backed electronics projects ( price range 30€ to 350€) on Kickstarter ( 4 who are done and delivered – all within specs ) and one not finished (intended – still on schedule). On Indigogo I have 4 (including mooltipass – which is still ongoing) and of the 3 other 2 are done and there is one that will most likely not see the light of day (50€ lost) and that’s simply because for once i did not do my homework (was one of the numerous arduino shield things – thought that’s so easy how can it go wrong .. :) ) Not to bad, and normally if it’s to good to be true it is, hence don’t invest Hi everyone, Thanks for the comments and feedback. Seems like there are a few common thoughts, so let’s address those: Too expensive. I agree with you, I would have loved for the PSDR to cost $200, unfortunately it can’t be made that cheaply. People talk about leaving out GPS or using bluetooth and having a smartphone act as the screen, but those don’t save as much money as you’d think. 
Take a look at the BOM and you’ll see what I mean. See here for some more detail. the short version is that the PSDR costs a lot to make, especially as I want quality to be high, and am doing a small production run. I am also including the cost of FCC certification (which, by the way, is pretty much going to require a metal housing). I am NOT going to be making much money on this. I am open to ideas for how to lower cost as long as it doesn’t sacrifice too much quality. Can’t Transmit. Yes, in my mind this is the PSDR’s single biggest flaw. So why can’t it transmit? Because I couldn’t design the transmit portion in time for the hackaday prize competition. So why didn’t I fix that before kickstarting it? Because I wasn’t sure I’d be able to do it quickly by myself and there was so much demand for the radio “as is” (I can show you the survey results). The idea of this kickstarter was to get development hardware out into the wild so the community could help me develop the next version that would address the PSDR2’s shortcomings. Delivery: I tried to be conservative in my delivery time estimates. I have product design and manufacturing experience (it’s what I do for a living) so I wasn’t too worried about getting units out on time, but I certainly understand that people are frequently burned by kickstarter projects. Who cares about HF / Not enough GHz!: This may just be my personal experience, but I find most stuff ABOVE 30MHz to be sorta boring (talking to the locals on a repeater is too easy, listening to police and weather gets old, etc.) I say this from experience, I had my Tech license for years and lost interest in the hobby, but once I got my general and got into HF suddenly it was exciting! Also VHF/UHF isn’t a good fit for what the PSDR is trying to be: a backpacking radio. If I’m in the mountains, there is nothing line of sight, literally EVERYTHING above 30MHz is likely to be blocked by the terrain. But to each their own. 
While I’m at it, I feel like any comparisons to SDRs that require a computer to work (even a netbook, or to a lesser extent, a smart phone) sorta miss the point of the PSDR. I’m not going to take my netbook backpacking. Also I’m not going to use a pocket SDR to reverse engineer pager protocols, that’s not what this tool is for. I wanted something small, light, and quick, but didn’t want to give up the waterfall display and flexibility of an SDR. Anyway, I am happy with what I’ve created, and I really appreciate the interest and support from everyone. I have to say that even the comments that are critical of the project (or some aspect of it) have been offered constructively (which… isn’t always the case with comments on Hackaday….) I think that’s pretty cool. I plan on continuing development, even if it doesn’t fund, and I think I will still produce some units by hand so people can help with development. Thanks everyone! I’d pay 500 for something that transmits and has all the bells and whistles and extras and comes already built I hope this gets over the hump– it looks like a very well-though out project– but I wonder if it’s appeal is just very limited. Also, two points I haven’t seen mentioned: 1) That housing is strange. Why the transparent face? And most importantly, what’s the deal with the beveled corner with the thing sticking out? I’m sure there are reasons for the design, but I can’t figure them out. 2) The video could show the unit more. The video is the most important part of any Kickstarter, and this video shows far more of the creator than the creation. I believe the “beveled corner with the thing sticking out” is an integrated morse code keyer paddle. The idea is you could that like an iambic paddle to send morse code without having to hook up an external paddle/keyer. 
Take a look at the Elecraft KX3 or older versions of the Hendricks 3 Band Portable Field Radio kit…

FYI, it got funded :)
Opened 13 years ago
Closed 13 years ago

#6468 closed defect (fixed)

Plugins loaded from site-packages in preference to plugins/ directories

Description

Once you've "installed a plugin" in the usual way (all directions say "setup.py install"), it goes into site-packages, and putting a newer version of the plugin in your trac's plugins/ directory makes no difference. That's bad for flexibility; consider that several tracs may run on the same machine. One might want to set up a new trac instance for testing, with a new version of some plugin.

It's a sad fact of Python that "uninstalling" anything placed in site-packages is a nontrivial and undocumented operation that may involve combing through .pth files for references to the package, so there are plenty of good reasons to try to use the plugins/ directory. It would be nice if standard plugin installation was able to place plugins there, but that's another story. In the meantime, please allow plugins directories to override what's in site-packages.

Attachments (0)

Change History (9)

comment:1 by , 13 years ago

comment:2 by , 13 years ago

Fair enough. Do you think it's worth documenting all or some of this? Especially in light of the "one plugin per interpreter instance" issue, it seems likely that anyone using the plugins/ directory is misinterpreting its meaning. Until you know some of the implementation details described above, it seems very reasonable to assume that each Trac instance's own plugins/ directory is separately bound to that instance.

follow-up: 8
comment:3.

comment:4 by , 13 years ago

For my to-do list.

comment:5 by , 13 years ago

Well, I don't think you used "Trac Instance" in the usual way, so it's really confusing. Normally I think docs mean what you call "project" when they say "trac instance."
I think you should say something like "instance of the python interpreter," and explicitly discuss "multi-project setups" (but check to make sure that's the term used when TRAC_ENV_PARENT_DIR is introduced). Thanks.

comment:6 by , 13 years ago

TracPlugins@45 is fine for me.

comment:7 by , 13 years ago

follow-up: 9
comment:8.

This looks really good to me, but there's one important thing missing: the way this works needs to be known by, or you basically set people up to make this mistake and trawl for answers later. I would add something like this:

Note that in a multi-project setup, a pool of Python interpreter instances will be dynamically allocated to projects based on need, and since plugins occupy a place in Python's module system, the first version of any given plugin to be loaded will be used for all the projects. In other words, you cannot use different versions of a single plugin in two projects of a multi-project setup. It may be safer to install plugins for all projects (see below) and then enable them selectively on a project-by-project basis.

I would also like to see a note in the section on installing for all projects that explains how to deinstall plugins, and how not using easy-install et al. may save lots of pain in the long run.

comment:9 by , 13 years ago

Replying to Dave Abrahams <dave@boost-consulting.com>:

This looks really good to me, but there's one important thing missing…

Thanks for the text, I've updated the page and also added some information on uninstalling plugins. For the future, if you have things to add or make more precise in the wiki, just update the page directly. No need to file (or reopen) tickets unless it concerns major or possibly controversial changes to the content of pages that are part of the default documentation.
Changes are migrated to the repos in an ad-hoc manner - they'll get there sooner or later :-)

I don't think this will accomplish much actually, based on the fact that as long as you run the various projects inside the same interpreter the plugin will only load once - for all projects. This is basic Python - there can only be one myplugin namespace, one entry in sys.modules pointing to some code. Regardless of where it is loaded the first time (globally or inside a project).

If you want to test things, then do what others generally do:

- Use virtualenv.py to set up a second interpreter based off the first that can override anything you like.
- Manipulate sys.path so that you, in your init script, PYTHON_PATH setting or other valid alternative depending on frontend, insert your new-module-path first in that list.
- Rename the plugin (theplugin2), and disable the global plugin and enable the custom new version in one project only.

Now on to why changing the load order will be both troublesome and give unwanted results:

- Plugins are loaded when Environment() is first instantiated - preferring project if it exists. As we depend on setuptools that won't be easy, as it looks up extension points and will load the code to serve us - providing us with available plugins, including much of the Trac code itself that is also based on the same architecture. Unloading running code to load some newly discovered code is really non-trivial.

I really can't see the project looking to solve this perceived 'defect'. I think it works great.
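The "only loads once" point above can be demonstrated in plain Python, independent of Trac. This is an illustrative sketch, not Trac code: the module name myplugin and its VERSION attribute are made up for the demonstration.

```python
import sys
import types

# Simulate a plugin that was already loaded (e.g. from site-packages):
# once a module object sits in sys.modules, any later import of that
# name returns the cached object instead of searching sys.path again.
plugin = types.ModuleType("myplugin")
plugin.VERSION = "1.0 (site-packages)"
sys.modules["myplugin"] = plugin

import myplugin  # resolved from the sys.modules cache

print(myplugin.VERSION)                   # -> 1.0 (site-packages)
print(sys.modules["myplugin"] is plugin)  # -> True

# A "newer" copy placed first on sys.path would still be ignored
# until the cached entry is explicitly removed:
# del sys.modules["myplugin"]
```

The same caching is why, with several Trac projects in one interpreter, whichever copy of a plugin loads first wins for all of them.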
1. What is a method

Let's define and call a regular function:

function greet(who) {
  return `Hello, ${who}!`;
}

greet('World');

The function keyword followed by its name, params, and body: function greet(who) {...} makes a regular function definition.

greet('World') is the regular function invocation. The function greet('World') accepts data from the argument.

What if who is a property of an object? To easily access the properties of an object you can attach the function to that object, in other words, create a method.

Let's make greet() a method on the object world:

const world = {
  who: 'World',

  greet() {
    return `Hello, ${this.who}!`;
  }
}

world.greet();

greet() { ... } is now a method that belongs to the world object. world.greet() is a method invocation.

Inside of the greet() method this points to the object the method belongs to — world. That's why this.who expression accesses the property who.

Note that this is also named context.

The context is optional

While in the previous example I've used this to access the object the method belongs to — JavaScript, however, doesn't impose a method to use this. For this reason you can use an object as a namespace of methods:

const namespace = {
  greet(who) {
    return `Hello, ${who}!`;
  },

  farewell(who) {
    return `Good bye, ${who}!`;
  }
}

namespace.greet('World');
namespace.farewell('World');

namespace is an object that holds 2 methods: namespace.greet() and namespace.farewell(). The methods do not use this, and namespace serves as a holder of alike methods.

2. Object literal method

As seen in the previous chapter, you can define a method directly in an object literal:

const world = {
  who: 'World',

  greet() {
    return `Hello, ${this.who}!`;
  }
};

world.greet();

greet() { .... } is a method defined on an object literal. Such type of definition is named shorthand method definition (available starting ES2015).
There's also a longer syntax of method definition:

const world = {
  who: 'World',

  greet: function() {
    return `Hello, ${this.who}!`;
  }
}

world.greet();

greet: function() { ... } is a method definition. Note the additional presence of a colon and the function keyword.

Adding methods dynamically

The method is just a function that is stored as a property on the object. That's why you can add methods dynamically to an object:

const world = {
  who: 'World',

  greet() {
    return `Hello, ${this.who}!`;
  }
};

// A new method added dynamically:
world.farewell = function () {
  return `Good bye, ${this.who}!`;
}

world.farewell();

world object at first doesn't have a method farewell. It is added dynamically. The dynamically added method can be invoked as a method without problems: world.farewell().

3. Class method

In JavaScript, the class syntax defines a class that's going to serve as a template for its instances. A class can also have methods:

class Greeter {
  constructor(who) {
    this.who = who;
  }

  greet() {
    console.log(this === myGreeter);
    return `Hello, ${this.who}!`;
  }
}

const myGreeter = new Greeter('World');

myGreeter.greet();

greet() { ... } is a method defined inside a class.

Every time you create an instance of the class using the new operator (e.g. myGreeter = new Greeter('World')), methods are available for invocation on the created instance. myGreeter.greet() is how you invoke the method greet() on the instance.

What's important is that this inside of the method equals the instance itself: this equals myGreeter inside the greet() { ... } method.

4. How to invoke a method

4.1 Method invocation

What's particularly interesting about JavaScript is that defining a method on an object or class is half of the job. To maintain the method's context, you have to make sure to invoke the method as a… method. Let me show you why it's important.

Recall the world object having the method greet() upon it.
Let's check what value this has when greet() is invoked as a method and as a regular function:

const world = {
  who: 'World',

  greet() {
    console.log(this === world);
    return `Hello, ${this.who}!`;
  }
};

// Method invocation:
world.greet();

// Regular function invocation:
const greetFunc = world.greet;
greetFunc();

world.greet() is a method invocation. The object world, followed by a dot ., and finally the method itself — that's what makes the method invocation.

greetFunc is the same function as world.greet. But when invoked as the regular function greetFunc(), this inside greet() isn't equal to the world object, but rather to the global object (in a browser this is window).

I name expressions like greetFunc = world.greet separating a method from its object. When later invoked, the separated method greetFunc() would make this equal to the global object.

Separating a method from its object can take different forms:

// The method is separated from its object:
const myMethodFunc = myObject.myMethod;

// The method is separated from its object:
setTimeout(myObject.myMethod, 1000);

// The method is separated from its object:
myButton.addEventListener('click', myObject.myMethod)

// The method is separated from its object:
<button onClick={myObject.myMethod}>My React Button</button>

To avoid losing the context of the method, make sure to use the method invocation world.greet() or bind the method manually to the object: greetFunc = world.greet.bind(world).

4.2 Indirect function invocation

As stated in the previous section, a regular function invocation has this resolved as the global object. Is there a way for a regular function to have a customizable value of this?

Welcome the indirect function invocation, which can be performed using:

myFunc.call(thisArg, arg1, arg2, ..., argN);
myFunc.apply(thisArg, [arg1, arg2, ..., argN]);

methods available on the function object.

The first argument of myFunc.call(thisArg) and myFunc.apply(thisArg) is the context (the value of this) of the indirect invocation. In other words, you can manually indicate what value this is going to have inside the function.
For example, let's define greet() as a regular function, and an object aliens having a who property:

function greet() {
  return `Hello, ${this.who}!`;
}

const aliens = {
  who: 'Aliens'
};

greet.call(aliens);
greet.apply(aliens);

greet.call(aliens) and greet.apply(aliens) are both indirect method invocations. The this value inside the greet() function equals the aliens object. The indirect invocation lets you emulate the method invocation on an object!

4.3 Bound function invocation

Finally, here's the third way you can make a function be invoked as a method on an object. Specifically, you can bind a function to have a specific context.

You can create a bound function using a special method:

const myBoundFunc = myFunc.bind(thisArg, arg1, arg2, ..., argN);

The first argument of myFunc.bind(thisArg) is the context to which the function is going to be bound.

For example, let's reuse greet() and bind it to the aliens context:

function greet() {
  return `Hello, ${this.who}!`;
}

const aliens = {
  who: 'Aliens'
};

const greetAliens = greet.bind(aliens);

greetAliens();

Calling greet.bind(aliens) creates a new function where this is bound to the aliens object. Later, when invoking the bound function greetAliens(), this equals aliens inside that function. Again, using a bound function you can emulate the method invocation.

5. Arrow functions as methods

Using an arrow function as a method isn't recommended, and here's why.

Let's define the greet() method as an arrow function:

const world = {
  who: 'World',

  greet: () => {
    return `Hello, ${this.who}!`;
  }
};

world.greet();

Unfortunately, world.greet() returns 'Hello, undefined!' instead of the expected 'Hello, World!'.

The problem is that the value of this inside of the arrow function equals this of the outer scope. Always. But what you want is this to equal the world object. That's why this inside of the arrow function equals the global object: window in a browser. 'Hello, ${this.who}!' evaluates as Hello, ${window.who}!, which in the end is 'Hello, undefined!'.

I like the arrow functions. But they don't work as methods.

6. Summary

The method is a function belonging to an object. The context of a method (this value) equals the object the method belongs to.

You can also define methods on classes. this inside of a method of a class equals the instance.

What's specific to JavaScript is that it is not enough to define a method. You also need to make sure to use a method invocation. Typically, the method invocation has the following syntax:

myObject.myMethod('Arg 1', 'Arg 2');

Interestingly, in JavaScript you can define a regular function, not belonging to an object, and then invoke that function as a method on an arbitrary object. You can do so using an indirect function invocation, or by binding the function to a particular context:

myRegularFunc.call(myObject, 'Arg 1', 'Arg 2');
myRegularFunc.apply(myObject, ['Arg 1', 'Arg 2']);

const myBoundFunc = myRegularFunc.bind(myObject);
myBoundFunc('Arg 1', 'Arg 2');

Indirect invocation and binding emulate the method invocation.

To read about all the ways you can define functions in JavaScript, follow my post 6 Ways to Declare JavaScript Functions.

Confused about how this works in JavaScript? Then I recommend reading my extensive guide Gentle Explanation of "this" in JavaScript.

Quizzzzz: can a method in JavaScript be an asynchronous function?
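As a hint for the quiz: yes, a shorthand method can be declared async, and this still resolves to the object when it's invoked as a method. A quick sketch (the world object here simply mirrors the earlier examples):

```javascript
const world = {
  who: 'World',

  // An async shorthand method: invoked as world.greet(),
  // `this` is still the world object.
  async greet() {
    return `Hello, ${this.who}!`;
  }
};

// Async functions return a promise:
world.greet().then(message => console.log(message));
// logs 'Hello, World!'
```

The method invocation rule is unchanged; the only difference is that the return value arrives wrapped in a promise.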
October 2017, Volume 32, Number 10

[Web Development]

Speed Thrills: Could Managed AJAX Put Your Web Apps in the Fast Lane?

By Thomas Hansen | October 2017

According to several studies on the subject, two of the most important concerns when creating an AJAX Web app are speed and responsiveness. These are probably some of the reasons why some developers choose to create native apps instead of Web apps. But what if I told you there exists a way to create AJAX Web apps that are 100 times faster and more responsive than the apps you might know?

I've invented a way to create 100 percent pure JavaScript-based AJAX Web apps that reduce the bandwidth usage for your apps by at least 10 times, sometimes by as much as 300 times, depending on what types of tools you're using and what you want to create. I refer to this technique as "Managed AJAX."

Managed AJAX is to some extent inspired by the way Microsoft built the common language runtime (CLR). For instance, when you create a C# application, the compiler creates a CLR assembly. This implies that your runtime environment for your end result is a "managed environment." When you create a managed AJAX app, your result doesn't compile down to anything different than a normal plain ASP.NET Web site; it becomes a managed environment, where the JavaScript parts of your end results are completely abstracted away, the same way the CPU instructions are abstracted away when you have a CLR assembly.

How Does This Work?

Managed AJAX requires almost no new knowledge. If you've done any ASP.NET development, you can drop a new Control library into your .aspx pages, and continue your work, almost exactly as you've done before. You create a couple of ASP.NET controls, either in your .aspx markup, or in your C#/F#/Visual Basic .NET codebehind. Then you decorate your controls' properties, add a couple of AJAX event handlers and you're done! The initial rendering creates plain-old HTML for you.
But every change you make to any of your controls on the server side during an AJAX request is passed to the client as JSON. Therefore, the client can get away with a tiny JavaScript library, less than 5KB in total size, and you can create rich AJAX controls, such as TreeViews, DataGrids and TabViews, without ever having to use more than 5KB of JavaScript.

Realize that at this point, you've already outperformed jQuery as a standalone JavaScript file by almost one order of magnitude (jQuery version 2.1.3 after minification and zopflinication is 30KB). Hence, by simply including jQuery on your page, and no other JavaScript, you've already consumed 6 times as much bandwidth as you would using a managed AJAX approach.

As you start consuming jQuery in your own JavaScript, this number skyrockets. Pulling in jQuery UI in its minified and zipped version makes your JavaScript portions increase by 73.5KB. Simply including jQuery and jQuery UI on your page increases its size by an additional 103.4KB (103.4KB divided by 4.8KB becomes 21.5 times the bandwidth consumption). At this point, you still haven't created as much as a single UI element on your page, yet jQuery+jQuery UI consumes almost 22 times the space of your managed AJAX approach. And you can create almost every possible UI widget you can dream of with this 5KB of JavaScript, including most UI controls that jQuery+jQuery UI can create.

Basically, regardless of what you do, you'll rarely if ever exceed this 5KB limit of JavaScript. And the other parts of your app, such as your HTML and your CSS, might also become much smaller in size.
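To make the "changes as JSON" idea concrete, here is a rough, hypothetical sketch of what a generic client-side handler could look like. This is not p5.ajax's actual wire format or API; the response shape, the innerValue key and the applyAjaxResponse name are all illustrative assumptions, and plain objects stand in for DOM elements so the sketch stays self-contained:

```javascript
// Hypothetical shape: { widgetId: { propertyName: newValue, ... }, ... }
// One generic function patches every changed property onto the page,
// so no widget-specific JavaScript ever has to be downloaded.
function applyAjaxResponse(page, response) {
  for (const [id, changes] of Object.entries(response)) {
    const element = page[id];
    for (const [name, value] of Object.entries(changes)) {
      if (name === 'innerValue') {
        element.innerHTML = value;          // content of a Literal widget
      } else if (value === null) {
        delete element.attributes[name];    // attribute deleted server-side
      } else {
        element.attributes[name] = value;   // attribute added or updated
      }
    }
  }
}

// Plain objects standing in for two DOM elements:
const page = {
  foo: { innerHTML: '', attributes: {} },
  bar: { innerHTML: '', attributes: {} }
};

// A response updating one widget's content and another's style:
applyAjaxResponse(page, {
  foo: { innerValue: 'Hello World from Managed Ajax!' },
  bar: { style: 'background-color:Yellow;' }
});

console.log(page.foo.innerHTML);        // Hello World from Managed Ajax!
console.log(page.bar.attributes.style); // background-color:Yellow;
```

Because the handler is generic, adding a new widget type on the server requires no new JavaScript on the client, which is what keeps the client library tiny.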
The managed AJAX approach has two distinct ways of handling the HTTP requests to your page. One of the handlers is the normal plain HTTP request, which will simply serve HTML to the client. The other handler lets you check for the existence of an HTTP post parameter. If this parameter exists, the handler will render only the changes done to each control back to the client as JSON. During an AJAX request, each control created by modifying the page's control hierarchy will be automatically recreated for you, with the properties it had in your previous request. On the client side, you have a general handler that handles your JSON properties to your controls and generically updates the attributes and DOM event handlers of the DOM elements on the client.

This approach has a lot of positive side effects, such as not fighting the way the Web was originally created by rendering the HTML elements as just that—HTML elements. That implies that semantically, your Web apps become more easily understood (by search engine spiders, as an example). In addition, it creates a superior environment for actually modifying things on the client, if you wish to use the best of both worlds. You can still combine this approach—with as much custom JavaScript as you wish—at which point you can simply inspect the HTML rendered in your plain HTML requests.

Compare this to the "magic div" approach, used by many other AJAX UI libraries, often containing megabytes of JavaScript to create your DataGrids and TreeViews. Thus, you can understand that the 100 times faster and more responsive figure isn't an exaggeration. In fact, compared to all the most commonly used component UI toolkits, used in combination with C# and ASP.NET, it's usually between 100 and 250 times faster and more responsive with regard to bandwidth consumption. I recently did a performance measure between the managed AJAX TreeView widget in System42 and three of the largest component toolkits on the ASP.NET stack.
I found the difference in bandwidth consumption to be somewhere between 150 and 220 times faster and more responsive. To illustrate what this implies, imagine you're on an extremely slow Internet connection, where it takes one second to download 5KB of JavaScript. This implies one second to download the managed AJAX JavaScript and possibly as much as three minutes 40 seconds to download the JavaScript parts of some of the other toolkits. Needless to say, imagine what the difference would be with regard to conversion if you built two e-commerce solutions with these two approaches.

Show Me the Code

OK, enough talk, let's get down and dirty. First, download Phosphorus Five at bit.ly/2u5W0EO. Next, open the p5.sln file and build Phosphorus Five, such that you can get to your p5.ajax.dll assembly. You're going to consume p5.ajax.dll in your Web app as a reference, so you need to build it before you proceed. Notice that Phosphorus Five consists of more than 30 projects, but in this article I'm focusing on the p5.ajax project.

Next, create a new ASP.NET Web site in Visual Studio. Make sure you create a Web Forms application. In Visual Studio for Mac, you can find this under File | New Solution | Other | .NET | ASP.NET Web Forms Project.

Create a reference inside your newly created Web site to the already built p5.ajax.dll assembly and modify the web.config to resemble the following code:

<?xml version="1.0"?>
<configuration>
  <system.web>
    <pages clientIDMode="Static">
      <controls>
        <add assembly="p5.ajax" namespace="p5.ajax.widgets" tagPrefix="p5" />
      </controls>
    </pages>
    <compilation debug="true" targetFramework="4.5" />
    <httpRuntime targetFramework="4.5" />
  </system.web>
</configuration>

The important parts in this code are the "clientIDMode" and the "add assembly." At this point you can use any of the p5.ajax controls from inside your .aspx markup by prefixing them with p5. Make sure you modify the Default.aspx page's content to resemble the code in Figure 1.
Figure 1 Creating a Page with a Single Button that Changes Its Text When Clicked

<%@ Page Language="C#" Inherits="foobar.Default" %>
<html>
<head runat="server">
  <title>Default</title>
</head>
<body>
  <form id="form1" runat="server">
    <p5:Literal
      id="foo"
      runat="server"
      Element="button"
      onclick="foo_onclick">Click me!</p5:Literal>
  </form>
</body>
</html>

Then change its codebehind to the following:

using System;

namespace foobar
{
  public partial class Default : p5.ajax.core.AjaxPage
  {
    [p5.ajax.core.WebMethod]
    public void foo_onclick(p5.ajax.widgets.Literal sender, EventArgs args)
    {
      sender.innerValue = "Hello World from Managed Ajax!";
    }
  }
}

Notice that you first need to inherit your page from AjaxPage, add a WebMethod attribute to the event handler and specifically strongly type the first parameter of your event handler as a "Literal" widget. At this point you can start your Web site, click the button and enjoy your result.

If you get weird bugs when debugging your Web site, make sure you turn off the "browser link" settings for Visual Studio, which is normally a toolbar button at the top of Visual Studio. If you're curious about what goes on here, try to inspect your HTTP requests. Also make sure you take a look at its initial HTML.

Whoa, What Was That?

That was managed AJAX in practice. There are several important points to this idea that should be considered.

First, you can modify any property in any control on your page from any AJAX event handler in your page. If you created another Literal widget, with "Element" type "p" for instance, you could update its content from your button's "foo_onclick" event handler.

Second, you can dynamically add, remove, update or retrieve any property from your widget any way you see fit. To reference any attribute or event handler, simply use the subscript operator from C#. In Figure 2, instead of setting the widget's innerValue, its style property is checked, toggling a yellow background using the CSS style property. Notice how it's able to persist and "remember" the style property of your widget.
Notice also that this is done without passing huge amounts of ViewState back and forth between the client and the server. In a real-world application, you'd probably want to use CSS classes, which could be done by exchanging the reference in Figure 2 from "style" to "class." But I wanted to keep this example simple, so I didn't mix CSS files in here, instead using the style attribute for convenience. Using the approach shown in Figure 2, you can add, remove and change any attribute you wish, on any widget on your page.

Figure 2 Toggling the Background Color

using System;

namespace foobar
{
  public partial class Default : p5.ajax.core.AjaxPage
  {
    [p5.ajax.core.WebMethod]
    public void foo_onclick(p5.ajax.widgets.Literal sender, EventArgs args)
    {
      if (sender.HasAttribute ("style"))
        sender.DeleteAttribute ("style");
      else
        sender ["style"] = "background-color:Yellow;";
    }
  }
}

And the third—probably most important—point is that you can dynamically add, remove, update and insert any new AJAX control into any other widget, as you see fit. Before you have a look at this final point, though, you'll need to examine the "trinity of widgets."

There are three different widgets in p5.ajax, but they're also very similar in their API. By combining these three widgets together using composition, you can create any HTML markup you wish.

In the example in Figure 2, you used the Literal widget. The Literal widget has an "innerValue" property, which on the client side maps to "innerHTML," and simply lets you change its content as a piece of string or HTML.

The Container widget can contain widgets, and it will remember its Controls collection and let you dynamically add, remove or change its collection of controls during AJAX requests.

The third widget is the Void widget, which is exclusively used for controls that have no content at all, such as HTML input elements, br elements, hr elements and so on. The most important one for the example here is probably the Container widget.
So go ahead and change the code in the .aspx page to what you see in Figure 3.

Figure 3 Creating a Page with a Button and a Bulleted List Containing One List Item

<%@ Page Language="C#" Inherits="foobar.Default" %>
<html>
<head runat="server">
  <title>Default</title>
</head>
<body>
  <form id="form1" runat="server">
    <p5:Literal
      id="foo"
      runat="server"
      Element="button"
      onclick="foo_onclick">Click me!</p5:Literal>
    <p5:Container
      id="bar"
      runat="server"
      Element="ul">
      <p5:Literal
        id="initial"
        runat="server"
        Element="li"
        onclick="initial_onclick">Initial list element, try clicking me!</p5:Literal>
    </p5:Container>
  </form>
</body>
</html>
You now have 100 percent perfect control over your HTML markup, and you can create tiny AJAX requests and responses that update anything you want to update on your page in any way you see fit. And you did it with 4.8KB of JavaScript. You've turned Web app AJAX development into a thing that can be done just as easily as plain-old Windows Forms development. And in the process, you ended up with 100 times faster and more responsive Web apps. An Exercise in Hyperlambda A few months back I wrote an article in the June 2017 issue of MSDN Magazine titled “Make C# More Dynamic with Hyperlambda” (msdn.com/magazine/mt809119), which explored the non-traditional Hyperlambda programming language with its roots in execution trees. I bring this up because Hyperlambda’s tree-based approach makes it extremely easy to declare an AJAX widget hierarchy. Combine p5.ajax with Hyperlambda to consume an AJAX TreeView widget, and some impressive efficiencies show up. Let's explore this. First, in addition to Phosphorus Five, you need to download System42 and put it into the main p5.webapp folder according to the instructions at bit.ly/2vbkNpg. Then start up System42, which contains an ultra-fast AJAX Tree View widget, open up "CMS," create a new lambda page by clicking the +, and paste the code shown in Figure 5. Figure 5 Creating an AJAX TreeView, Which Will Allow for Traversing Folders on Disk create-widget parent:content widgets sys42.widgets.tree crawl:true items root:/ .on-get-items list-folders:x:/../*/_item-id?value for-each:x:/@list-folders/*?name list-folders:x:/@_dp?value split:x:/@_dp?value =:/ add:x:/../*/return/* src:@"{0}:{1}" :x:/@split/0/-?name :x:/@_dp?value if:x:/@list-folders/* not add:x:/../*/return/*/items/0/- src class:tree-leaf return items Click Settings, choose empty as your Template, click OK, save your page and click View page. 
Try expanding the AJAX Tree View while inspecting what goes over the wire in your HTTP requests, and realize that you just built a folder browsing AJAX Tree View with 24 lines of Hyperlambda that will display your folders from your p5.webapp folder, and that its initial total bandwidth consumption was only 10.9KB! If you compare these results with any other AJAX toolkit, you'll often find that other toolkits require downloading several megabytes of JavaScript—in addition to all the other stuff that goes over the wire—while Hyperlambda TreeView has no more than 4.8KB of JavaScript. This AJAX Tree View was built with a total of 717 lines of code, in pure Hyperlambda, using nothing but the Literal, Container and Void widgets. Most of its code is made up of comments, so roughly 300 lines of code were required to create the AJAX Tree View control. The widget was consumed with 24 lines of Hyperlambda, which let you browse your folders on disk. But it would require thousands of lines of code to create the control with anything else, and hundreds of lines to consume it, as was done in the 24 lines of code in Figure 5. If you wanted, you could now exchange three lines of code in the Hyperlambda example and end up with your own specialized Active Event custom widget, which would let you consume your specialized widget with a single line of code. Read how to do that at bit.ly/2t96gsQ. So, you're now able to create an AJAX Tree View that will browse your server's folders with one line of code. To create something equivalent in other toolkits would often require hundreds of lines of code in four different languages. You did it with one line of code, in one programming language and it performs up to 300 times better than its competition. Wrapping Up Imagine being able to produce 100 times better results, 100 times faster and more optimized results, 100 times better quality, with 100 times fewer less bugs, and being 100 times more productive than you were before. 
To make sure you're using the latest goods, download Phosphorus Five at bit.ly/2uwNv65 and System42 at bit.ly/2vbkNpg.

Contact the author at thomas@gaiasoul.com.

Thanks to the following Microsoft technical expert for reviewing this article: James McCaffrey

Discuss this article in the MSDN Magazine forum
https://docs.microsoft.com/en-us/archive/msdn-magazine/2017/october/web-development-speed-thrills-could-managed-ajax-put-your-web-apps-in-the-fast-lane
Agenda See also: IRC log <DanC_lap> Scribe: DanC <DanC_lap> ScribeNick: DanC_lap NM reviews goals... "Review F2F Goals * Bring new TAG members "up to speed" on continuing work * Make progress on high priority technical issues * Establish TAG priorities for coming year - ensure issues list reflects actual priorities * Refine TAG administrative procedures " TimBL: welcome to new TAG members! I'm excited to get the burst of new momentum that comes with new ... and thanks Noah for chairing! NM: by way of agenda review... not as much in the way of drafts to review in preparation for this meeting... ... thanks, ashok, for arranging the meeting facilities ... we'll start with active technical work on the 1st day before stepping back to look at overall priorities ... we'll talk about meeting schedule etc. this PM; if you can check your calendar during a break, that'll probably help LMM: for this AM, I should defer input on priorities? NM: well, I don't want to start 1st thing with wiping the slate and setting priorities, but this will be iterative AM: conneg and redirections are related but scheduled separately... NM: yes, they're related... we'll see... we don't yet have an issue on conneg ... minutes 19 Feb ... 2009/02/23 21:03:34 NM: we'll look at that later JK: John Kemp, working for Nokia ~6 years; prior to that, Liberty Alliance, OASIS Security Services Technical Committee; so my background is in SOAP-based Web Services, XML, ... ... web applications, a few start-ups doing software as a service; that goes back to ~1996 ... I'm interested in the versioning and error handling stuff... and web application security LMM: Larry Masinter I'm Adobe. Was doing metadata for video. Web Standards is now my full time job. In the 1980s I worked on the Common Lisp standard... ... I was at Xerox Parc when KR was done with ()s rather than <>s. Overlapped HT there. ... at Xerox we had experiments in networked information retrieval... then I went to a gopher conn...
then I heard about WWW and downloaded a client... ... in the Common Lisp standards process, one of my main contributions was an issue form where you had to describe the problem independent of the solution and such. ... so I brought some of that experience to chairing the IETF URI WG and the HTTP WG. ... I was on the W3C advisory board and helped develop the TAG charter. I thought it was important to deal with conflicts between WGs and show leadership TVR: Raman ... at Google... I care a lot about the Web and I'm concerned that the Web is being defined in terms of browsers too much; perhaps in reaction to being too far from browsers for a decade or so. TBL: never mind history, where I am at now... when I find time for software, I work on systems where the Web and the Semantic Web are completely integrated... ... where systems are connected with other systems we haven't invented yet <Zakim> timbl, you wanted to say Interested in arch coherence of the whole network of systems; the tech getting better not just older; very integrated sem web and web viewpoint; modularity TimBL: I'm interested in the Web continuing to improve and evolve and not fossilize AM: Ashok Malhotra at Oracle, mostly doing Web Services... ... but I'm also looking at taking relational data and mapping it to the Semantic Web HT: Henry Thompson; I'm half time W3C Team and half time U. Edinburgh. My TAG time comes from the U. Edinburgh time... ... some conflict with teaching duties this spring ... my background is in computational linguistics... ... a theme in my TAG work is to find ways to apply that background ... I have a number of paused TAG tasks; some because I'm stuck, some [for other reasons?] ... some of you know Harry Halpin, also at U. Edinburgh... ... Harry Halpin has now submitted his PhD thesis [applause] <ht> scribenick: ht DC: WebArch scales down as well as up, I've been learning about that with my new G1 phone ... Stuff about privacy, caching, web on hip connecting to the big Web ...
HTML5 has a bit about offline apps ... The WebApps WG has published a WD on Widgets ... There's a Google OpenWeb advocacy group which I pointed to the Widgets work ... Free Software background, purposely multi-platform ... I'm the official Team Contact for the TAG NM: DC is the human archive of the TAG <scribe> scribenick: DanC_lap JAR: my background is in computer science, esp programming languages... ... was involved in capability security... and scheme standards [much scheme/lisp experience in the TAG] JAR: [something?] led me to the pharma industry, molecular biology, which led me to science commons... ... so I'm trying to help make the web better for science... ... URIs/naming, data integration, etc. ... I gather there's friction in using the web that we could do something about NM: Noah Mendelsohn... was at IBM... operating systems... highly transparent distributed unix... XML and SOAP... Java at IBM when Java was an "emerging technology"... ... I enjoy the overlap between my personal interests, my employer's business interest and what the TAG does ... what Dan said about the smaller platforms... smartphones and such... I see a tipping point approaching SKW: Stuart Williams at Hewlett Packard in Bristol, England, working in an HP labs group on Semantic Web... growing with the linked data stuff... ... naming and addressing is a focus of mine... from bits, bytes and nibbles... LAN mac addresses are 48 bits and not [n] bytes... ... also some work on how you can introduce devices that didn't know each other; e.g. a phone and printer ... got involved in W3C... found myself elected to the TAG... found myself co-chairing; after the WebArch, took a break for a couple years, then another go at chairing... it's been a wonderful community to work with ... expect to continue to do related work "Background: Several recent email threads have raised questions about the proper use of content negotiation on the Web. ..." <DanC_> (hmm..
URI for "view on this bug" vs "this bug"; no, doesn't sound like a case for conneg to me) <raman> correct. <noah> Yes, Stuart, thank you. The Martin Nally question is squarely within webApplicationState-60 <ht> HST: I've produced a review of (hmm.. copy should go to www-archive. here's hoping) <DanC_> "The RDF response (modulo the lack of redirection) implies the resource is not an IR" huh? <skw> Raman wrote a finding on Generic Resources which also bears on this issue... and I'm wondering whether there was a TAG issue that that was written against. <DanC_> (for reference: ) <DanC_> HT's notes on conneg for TAG meeting HT: in sum, yes, I see issues in this conneg thread that should be re-opened or opened LMM: background... in the '80s, the convention was naming, addressing, and routing... variant content types wasn't the norm... ... [Weiser?] had an idea of variant forms... Xerox work at the time asked these questions about "the Gettysburg Address" and such when Tim visited us... I wonder if that's where Tim got the idea TimBL: we could check the dates of my web architecture notes... LMM: at the time, the idea was one HTTP transaction per click...
this group's answer seems to be: we'd rather you didn't conneg in [which situation?] ... a test for conneg is: does it violate common expectations around a URI; does it lead a user to wrong [?] information. [?] <timbl> Advice: Don't use conneg when it would mess up th eexpectations of normal web users <jar> Test: If conneg might lead a use of the URI to the wrong result for some client, try to figure out a different way to do what you want to do. <johnk> I agree with timbl's advice, however, a) what is the "wrong result" for a client that says it prefers RDF? TVR: thanks for the nice summary, HT... on the generic resources finding... it was written using the old model of the web: one click, one http transaction... <jar> see my email. The RDF might be viewable in a browser, so info provided by RDF should be same info as infor provided in HTML rep TVR: then you got CGI... and conneg still works... ... but with active content, where the HTTP response is a program to run on the client, that turns the conneg situation upside down <ht> HST thinks his TAG blog entry refers wrt Ramans point: <Zakim> johnk, you wanted to ask about TAG finding on authoritative metadata in this context JK: the authoritative metadata finding has this notion of metadata in the container... ... is it relevant? does any of this [thread?] run counter to that finding? <johnk> HT: the authoritative metadata finding tells what should be done; it's relevant in that it tells me that the punning examples are not at all compelling JK: conneg is not just about the server saying what the server has; there are cases where the client wants only RDF and leaves the server in a position of [?] <Zakim> timbl, you wanted to say conneg between metadata and a document it s is about is always wrong. TimBL: Larry, yes, a lot of webarch is only implemented in HTTP. So while it's important to distinguish generic architecture for HTTP, it's also natural to talk about HTTP specifically ... 
I was a little disappointed that MH had to ask; I thought there was a community consensus that no, don't use conneg for [that]. So yes, perhaps we need to write it down. <masinter> conneg definition was carefully hammered out in HTTP-WG, and some assumptions about it seem to be counter to the intent and (I hope) the written spec TimBL: setting up the tabulator has been important in working out the practicalities of content negotiation... ... and for using RDF as a human-readable format <timbl> 1) Much of web arch is only implemented in HTTP. 2) TAG giving advice is asked for and useful; 3) never use conneg between a document and metadata about that document. <Zakim> DanC_lap, you wanted to suggest that HTTP is design as an embodyment of webarch and to suggest that the conneg story includes all 3 cases: server side, transparent, and client-side LMM: I think it's useful to move beyond what is to belief/intent... ... getting that worked out in HTTP was tricky but worthwhile ... [image/rdf ?] is a big leap that I don't think we should be making <DanC_> [darn; forgot to make my point about safety and POST and onload, which is the most important of the N things I q'd for] AM: so this idea that conneg is only to be used for "equivalent" representations... ... am I hearing others say we should move away from that? NM: I think [Martin?] is sympathetic to that position... that distinct URIs should be used, but he finds that when he builds it that way his users aren't happy <johnk> is the only case where conneg is actually an arch issue in the link between data and metadata? whiteboard experiment: [[ * AWWW discusses mainly the simple cases * When conneg is used, how many resources are there? * should "web" arch be independent of HTTP where possible? * AJAX returns a program. conneg needs to be rethought for this? (pertinent issues: generic resources & web app state) * role of authoritative metadata & content type * proposed test: "same information in different form?" 
]] <DanC_> aha... I like that formulation of the test: "same information in different form?" <ht> LMM: and if you abuse that you'll confuse people NM: the ajax bullet seems still live... let's take that under web applications tate <Zakim> ht, you wanted to say "clarify same info in different form" should be done HT: another point to capture: ... everything that needs to be said about the relationship between relationships in the generic resources finding... NM: that sounds like the "how many resources" bullet... <Zakim> DanC_lap, you wanted to suggest re-using generic resource rather than making a new issue NM: considering: to re-open the generic resources issue re the "how many resources" bullet issue-53? <trackbot> ISSUE-53 -- Generic resources -- CLOSED <trackbot> RESOLUTION: to re-open Generic resources ISSUE-53 LMM opposed, TVR abstains [ I think] <scribe> ACTION: Larry to draft replacement for "how to use conneg" stuff in HTTP spec [recorded in] <trackbot> Created ACTION-231 - Draft replacement for \"how to use conneg\" stuff in HTTP spec [on Larry Masinter - due 2009-03-10]. <ht> trackbot, status? <ht> ACTION: ht to Follow-up to Hausenblas once there's a draft of HTTPbis which has advice on conneg [recorded in] <trackbot> Created ACTION-232 - Follow-up to Hausenblas once there's a draft of HTTPbis which has advice on conneg [on Henry S. Thompson - due 2009-03-10]. "Goals: * Review history, successes, and challenges with respect to TAG's efforts ..." JK: mixing XML languages with HTML... is that the goal? <Zakim> ht, you wanted to review the founding statement HT: "Is the indefinite persistence of 'tag soup' HTML consistent with a sound architecture for the Web?" -- <Zakim> DanC_lap, you wanted to say that space for unstructured discusion helps DanC: space for unstructured discussion here helps balance social dynamics in the HTML WG LMM: is this an HTML issue or a webarch issue? ... is extensibility, versioning, error handling... 
are these HTML problems or webarch problems? <masinter> well, they're all webarch, but are they restricted only to HTML HT: I noted "Is the indefinite persistence of 'tag soup' HTML consistent with a sound architecture for the Web?"; but note also " If so, what changes, if any, to fundamental Web technologies are necessary to integrate 'tag soup' with SGML-valid HTML and well-formed XML?" LMM: yes, this is an architectural issue, but is it wider than HTML? [?] HT: it's wider because it's the thin end of a long wedge... ... extensibility mechanisms in HTML are likely to be picked up elsewhere <Stuart> Larry's "no" above was in response to an aside question "Stuart wonders whether when speaking of HTML Larry is also including XHTML?" HT: what I heard in the ARIA discussion was "we don't need extensibility because extension happens rarely"; applying that generally is at the very least a general architectural issue <jar> (noodling on what raman's saying: html = 'shell' for the OS=web ?) TVR: [missed some] a model was: how can we make a web where lots of languages can play at the end of one link? ... in the 1990's, we thought mixed namespaces, DOM, events bubbling, was the way things would work... ... more recently, others say no, [this other thing] is how it works ... does this mean we need to re-design SVG, ... <Zakim> DanC_lap, you wanted to comment on XML and HTML, esp RSS, SVG, RDFa DanC: on the one hand, HTML is just one among many content types in web architecture, but on the other hand the web _is_ HTML to 3 or 4 significant digits... ... and [more... even though larry repeated it...] <Zakim> timbl, you wanted to suggest creative commons as a nice case in point, whcih suggest that there si no lace where we can draw a clean lin between html and xhtml TimBL: some say "HTML is its own thing, not XML"; on the other hand, RDFa is designed in the XHTML context ... ... 
then a discussion was sparked by Creative Commons suggesting use of RDFa in HTML, regardless of whether it's XHML or HTML... <masinter> is this issue about "namespaces" really? TimBL: so attempts to keep XHTML and HTML separate have broken down <Zakim> raman, you wanted to add that we need to remember that like namespace extensibility at its time, tagsoup today is also an experiment. Some would say that the experiment is not TVR: we've taken the namespace-based architecture as orthodoxy for a while, but that was an experiment as much as the tag soup approach... ... for all we know, either could fail in the 4 year timeframe... ... if you look in the 10 year timeframe, we should acknowledge that experiments will fail... and we should look for ways that they can live together [close to that, anyway] LMM: the name of this issue misled me... ... I think maybe it's re-considering namespace based architecture ... a story: somebody came to me a the W3C TPAC and said "we need a registry; how do we make one?" I tried to make a joke about "there's this way of doing decentralized naming..." but they didn't get the joke ... namespaces were introduced as a way of decentralized naming... they were rejected for perhaps technical/usability reasons... [scribe lost train of thought here] ... one position is "within this context, we don't need distribution; we can manage the chaos by getting all the browser developers in the room/group" ... <raman> union of namespaces was proposed by people like Tantek multiple times LMM: we probably need union namespaces <DanC_> (reminds me of prospero... union links are a great idea; I wonder if they can ever get widely deployed in filesystems. ) <Zakim> ht, you wanted to mention the sniffing draft issue-24? <trackbot> ISSUE-24 -- Can a specification include rules for overriding HTTPcontent type parameters? 
-- OPEN <trackbot> <masinter> jar: seen something like this before -- Common Lisp package system HT: perhaps we would re-open the authoritative metadata issue for that DanC: yes, issue-24 was re-opened for that NM notes it's in an "unscheduled" pile in the agenda NM: I think decentralized naming using URIs is architecturally important, but I've never seen a user-friendly syntax for it... ... looking at Java packages, people can see the use case for DNS-based naming when they borrow code from elsewhere ... but there are screw cases with bug fixes across domains and such <DanC_> (+1 look at both sides) NM: I think both sides are making important points and we should take both seriously <Zakim> noah, you wanted to point to lack of agreement on need for extensibility NM: [something about convenient syntaxes being less self-describing ...] <masinter> I don't think we can abandon namespaces, but without namespaces, may need registries or some other clear extensibility mechanism TimBL: something is either self-describing or it's not... ... I think we can solve the problem with manageable cost for all the relevant parties ... HTML parsers are already huge; a little more code to handle namespaces is a negligible cost... ... [an example elsewhere; scribe missed; help?] ... I think yes, it's a goal to get the creative commons feature working on HTML 5 <masinter> +1 TimBL: by whatever technical or social means necessary <Zakim> ht2, you wanted to query state of media-typed based NS defaulting <Zakim> timbl, you wanted to suggest etchnically that html5 adopt ns for new browsers. <johnk> agree with timbl, masinter that a good concrete goal is to get RDFa working in HTML5 HT: this idea about default namespaces based on media types has been discussed in "we should..." mode, though I don't know that anybody's actively working on it ... I'd like to establish a reward for cleaner markup in _this_ life... ... 
it was at best naive to expect XHTML would dominate; but we don't have to give up on the idea that XHTML has real benefit. TVR: absolutely HT: I concur with the idea that tim has presented: let's remove the step function in the reward of cleaning up HTML markup. [?] <johnk> is there a link to that work? <masinter> not sure "media type based namespace declaration" is the right formulation <timbl> You can addd xmlns for cc and get the benefit of having cc markup without having to put quotes around attribute values everywhere for example. LMM: not sure "media type based namespace declaration" is the right formulation... but perhaps formulate it in terms of mapping rules between contexts ... i.e. how to interpret one as the other <Zakim> masinter, you wanted to separate costs to readers, cost to authors <Zakim> noah, you wanted to discuss wrapup NM: I made some notes on the board [scribe wishes for a photo in www-archive] but let's not go into those LMM: let's keeping working toward a deliverable... finding/note no matter ... esp. on conversion between namespaced and non-namespaced forms TVR: I think the TAG has a critical role in bringing 2 sides of the community together <Zakim> DanC_lap, you wanted to note HTML WG agenda for this week close ACTION-226 <trackbot> ACTION-226 Report at March on tagSoup progress since TPAC closed <noah> DC: Chris Wilson has an action HTML WG action to write up distributed extensibility <masinter> trackbot, action-226? <trackbot> Sorry, masinter, I don't understand 'trackbot, action-226?'. Please refer to for help <noah> ACTION-226? <trackbot> ACTION-226 -- Dan Connolly to report at March on tagSoup progress since TPAC -- due 2009-02-24 -- CLOSED <trackbot> <noah> scribenick: noah TBL: Henry mentioned talk I gave at AC meeting suggesting both sides should make some concessions to come together on this. I can review it. ... It met with some criticism that it's asking too much of both sides, but my opinion hasn't changed. 
<Zakim> johnk, you wanted to ask about timbl's idea of addressing the CC use-case in HTML5 TBL: Henri Sivonen pointed out that current HTML browsers don't actually populate DOM namespace properties. <masinter> if it is parsed as an HTML document <Zakim> noah, you wanted to discuss use cases JK: so, you think discussing the CC use case is important. TBL: yes <timbl> Hmmm. yes sorry that is in fact not totally written up ... it has bullet point in it <DanC_lap> Cleaning up the Web <DanC_lap> (crud; I marked the essay world-readable, with permission, but I didn't link it from ) <timbl> Those are notes from the talk, but should go with the slides <DanC_lap> you didn't use slides when you presented it at TPAC 2008 <jar> danc: The namespace skeptics are in the minority in the marketplace. DC: Dominant browser ships namespace-based extensibility. Similar in spirit to XML namespaces, but not exactly XML. TBL: Does it use xmlns? DC: not remembering <masinter> Chris Wilson from MS will report on this in HTML WG <DanC_lap> HTML WG ACTION-97 <DanC_lap> Following SVG-in-HTML thread, propose decentralized extensibility strategy for HTML5 HT: What I want us to do is: "produce a document which specifies a mechanism for bridging the gap between namespaced and non-namespaced forms of languages conveying the same things" <DanC_lap> break for lunch, resuming at 1:15pm <DanC_lap> scribe: John Kemp <DanC_lap> scribenick: johnk HT: introduces issue, noting languishing of this issue ht: intends that this doc is the one to be blessed ... origins of this issue were concerns about uri scheme and namespace proliferation ... OASIS eg. uses URNs for XML namespaces ... thus no straightforward way to dereference ... notes URI scheme examples such as XRI ... wrote a doc to defend the proposition that http: was sufficient for naming ... concern was that original doc would not actually help those who needed it most ... 
there is an opinion that everything relevant to be said is said already in the relevant RFC already ... Dirk & Nadia attempt to document the story a la AWWW ... example: boss says we want names, Dirk says let's use URNs, Nadia says new scheme! ... biggest problem is the taxonomy of requirements: e.g. what is persistence? ... (describes document structure) ... not worked on this doc recently ... but wanted to introduce this to new members and give us a memory jog NM: (asks the room what we think) ... not sure the description reveals the issues ... gives HT the chair for this session LMM: describes link on problems URIs *won't* solve <masinter> problems URIs Don't Solve LMM: people confuse name assignment with service level agreement about name resolution ... when you buy domain name, you get a service guarantee ... (discusses control and monetization in name assignment) ... biggest piece of puzzle from Dirk & Nadia is issue of control biggest piece of puzzle _missing_ is issue of control LMM: new URI scheme, or URNs may provide more benefit than cost for some DanC: URI space owned by all, not "some" ... issue of namespace collision timbl: (challenges) LMM: use uuids ... (see XMP uuids) ... they weren't locators <ht> No-one ever expects the network effect DanC: http is good for domain + path hierarchy <DanC_lap> ... but uuids are outside that case LMM: look at lifecycle of content and its movements HT: introduction of Dirk & Nadia needs to say that interest is in naming things with a Web context, not other naming is widget naming "for the Web"? <jar> danc: anyone smart enough to use large random numbers doesn't need to go to school at our school DanC: people want to recreate a naming hierarchy, is the issue HT: previous document set out to sell HTTP, Dirk & Nadia provides instead a cost/benefit analysis ... lookup + hierarch will meet most requirements, and HTTP URI already provides that ... 
intent of doc is to show you how to sharpen your reqs, and if you use HTTP URIs, how to meet them LMM: distinction between a film and the (actors) in it ... identification problems are different ... identifying media means identifying the content container ... identifying concepts described in the content is different HT: doc not written to address those problems JAR: assumes you have your own theory of naming LMM: no one has a well-defined theory of naming JAR: there are some theories that work (in practice) <Zakim> noah, you wanted to ask if we've documented costs NM: have we documented or understood the _costs_ ... to what extent do you plan to dig into the costs of alternatives? <jar> in reply to Larry who said xxxx is not well-defined: well-defined and adequate are not the same thing HT: moved costs to 'spare parts' section <jar> what I mean to say is that the theory of what is named is orthogonal to naming system mechanics. can solve the 2 independently HT: large analysis of the tradeoffs - how you get confidence in certain guarantees, by contract or otherwise ... that level of analysis did not belong in the finding NM: namespace pollution is only part of the issue ... ... other reason is association of HTTP URI with widely-deployed dereference mechanism ... two parts of this story could be told fairly simply ... your decisions affect other people (if you take a name, no-one else can) ... wide deployment of existing schemes may be useful to you <Zakim> masinter, you wanted to ask: who needs this finding? What W3C work depends on an answer to this question? LMM: prioritizing - whose work depends on us answering this? HT: we were asked by W3C members NM: will discuss explicit priorities tomorrow TVR: (echoes LMM) JAR: (thinks a contribution can be made here) ... ... 
particularly when trust is lacking, naming is an issue HT: D&N does address that somewhat (by noting checksums in URIs) JAR: we should talk about priorities, issue is important in SemWeb <masinter> *IF* we could resolve this effectively *THEN* there might be value TBL: rate of non-HTTP ways has remained steady LMM: (mentions TDB, DURI schemes in this context) HT: next decision points comes with a next draft LMM: [trust, authority, control, monetisation] all go together NM: one time only (we will not repeat this in future meetings) ... we had 28 open issues at time of writing agenda <masinter> ... as motivating factors for why people want new schees NM: believes that some issues are open because we think there is something to be done here, but in practice it is not clear what we should do <DanC_lap> (not just to remind ourselves that it's open, but as a marker in the community that yes, you're not the only one with this problem) NM: proposes we close issues which fall below some mark ... and sort the rest appropriately ... (wants to get others involved in managing the issues well) ... (profers the issues list at) ... proposal to schedule two sessions with break ... first session, divide into groups ... new TAG members circulate between groups ... (shows tracker) ... (introduces tracker functions) ... nowhere does it say in tracker item where are we? DanC: (expresses enthusiasm to move forward) <timbl> <noah> deliverables are we expecting to produce <noah> NM: describes how he creates the agenda ... some day it would be nice if the summary of agenda input were more to the point ... shows a template for describing issues ... (describes template) TVR: we need to figure out the meaning of the criteria LMM: what other activities depend on the work? NM: let's take ISSUE-50 ... and work a real example HT: where do you propose to put the info (from the template) NM: show tracker fields are very fixed ... 
put this information in the description field
JAR + LMM: use Notes instead?
NM: 'notes' falls to the bottom when read
TBL: limit each of the things in the template to one line?
LMM: does priority belong in description?
(working ISSUE-50)
NM: priority: 'medium'
(discussion about whether this is right)
TBL: which are the issues that we're doing while Noah is chair?
NM: concurs
... (describes how he will handle issues)
AM: can we add a field?
NM: tables the question
... and all other 'meta' questions
... trying to capture where we are accurately
LMM: what is the priority, and how certain of it are you?
NM: OK to say 'don't know'
DanC: no way to find priority in our (past) records
NM: use your collective consciousness to work it out
... in many cases, you know
LMM: what do you think the priority of each issue was?
... vs. what do you think the ongoing priority _should be_?
AM: questions how we do this for very long-running issues
NM: (back to ISSUE-50)
... edits tracker item
<DanC_lap> ("current deliverables: ...namingSchemes" is redundant w.r.t. actions; I suppose that's sorta by design.)
(group discusses communities affected by ISSUE-50)
HT: 'raised by' now means 'principally responsible'
NM: external commitments on ISSUE-50?
... goal is to do all issues by 16:00
... (proves, by PIE, that not all administrivia is bad)
(group breaks into two to divide issues list)
(no more minutes for now)
<jar> re issue-24:
<timbl>
<DanC_lap> issue-30:
<trackbot> ISSUE-30 Standardize a "binary XML" format? notes added
<DanC_lap> issue-30:
<trackbot> ISSUE-30 Standardize a "binary XML" format? notes added
(back to minutes)
NM: summarizes what happened wrt the tracker issues
... would like to appoint a shepherd for each active issue to help prepare related agenda items
JAR: could it be done via ACTION assignment?
NM: would propose we do add an action for each one which is not in shape
each one == each issue
LMM: issues not open don't need shepherds
... some issues should be moved from 'open' to 'raised' (or 'closed')
... ones marked as background don't need shepherds
NM: "au contraire"
... let's go over the issues and decide how to resolve
DanC chairing this session
NM: go till 16:30
... (moves to open issues)
ISSUE-7
HT: HTML WG work on ping attributes is believed to be moribund
DC: (disagrees)
HT: Seems unlikely we need to do anything about this
... Revive if discussion is raised again in HTML
(discussion over who should shepherd)
LMM: could we close it?
<timbl> NM, 2009-03-03
DC: does anyone volunteer to declare victory?
HT: do we want to give that message?
NM: if we leave it open, I'l monitor this
<timbl> NM, date +"%Y-%m-%d"
LMM: worthwhile to discuss what it means to close an issue
... communicate to community what it means when we close an issue
HT: sending any such message in this case would be inflammatory
NM: history about how we use the 'open' designation
... community has expectations based on that designation
... do other TAG members agree with Larry?
LMM: if we close a number of issues at once we're sending a different message
... not saying anything specific about any particular issue
DC: in this specific issue (ISSUE-7) feel compelled by Henry's argument
TBL: There are issues with 'ping', and then close it
... see the whenToUseGET finding
... notes other issues (beyond using GET)
NM: Is there consensus over text I'm typing?
(seems not)
DC: Is this proposal to close?
NM: just to agree text for current description?
DC: (asks group for thoughts on decision to close)
HT: (some dissent)
NM: don't want to change the criteria for closing
JK: would abstain
NM: if we believe we should keep an eye on some issue, we keep it open
... if no obvious reason to come back to the issue, then close it
... I would rather look over definitions of open and closed over time
... that is not the goal for this session
TBL: am I allowed to ask about how we use tracker?
DC: polls whether to close ISSUE-7
NM: describes 3 possible actions
mostly missed by the scribe
DC: is this text (ISSUE-7 description) what you would like on record as resolution for this issue?
NM: keep it open, is text OK?
(agreement)
ISSUE-16
JAR: kept open in case RFC 3205 was too restrictive
LMM: BCPs not normative
NM: what is the TAG's role here?
... appoint shepherd
DC: not worth the effort needed to close it
... mark pending review
come back to ISSUE-20 tomorrow
HT: what does description text mean?
DC: AWWW says something about this
<scribe> ACTION: Larry to report back from IETF/HTML liason meeting in March regarding MIME type override [recorded in]
<trackbot> Created ACTION-233 - Report back from IETF/HTML liason meeting in March regarding MIME type override [on Larry Masinter - due 2009-03-11].
DC: connection to tagSoup unclear, happy to delete from text
... shepherd?
LMM: (nominates himself)
NM: will discuss role of 'shepherd' later
DC: rank high is surprising
HT: open action actively pursued
NM: high rank is related to a big piece of work we will do this year
HT: medium is OK for me
NM: (marks it as medium)
... (adds more context)
LMM: issue is misnamed
DC: proposal?
LMM: "Use of IRIs in W3C Specifications" as new title
NM: do we mean only W3C specs?
TBL: "when should IRIs be used"?
LMM: don't want to talk about IRIs in email e.g.
NM: what have we actually been doing?
<ht> LMM and DC please note wrt your ACTION
TBL: everywhere you use URI you should use IRI (that's what we've been saying)
DC: this is an involved discussion
... suggest we move on
<timbl> s/verywhere you use URI you should use IRI/We were saying "verywhere you use URI you should use IRI"/
HT: no title proposal has yet attracted support
old title: Should W3C specifications start promoting IRIs?
NM returns as chair
NM: this work is important, even if unpleasant
... proposes we put the exercise down for now, perhaps return Thursday
<DanC_lap> (I wonder about a shotgun approach to sheperds...)
NM: designation approach for naming shepherds
DC seems to agree ^
NM: goal is come out with a more refined view
... any concerns about this?
JAR: if you want to get through the list come up with an offline procedure to follow
NM: will do that and see how it goes
<Zakim> DanC_lap, you wanted to propose to close httpSubstrate-16 and to suggest merging fragmentInXML-28 with abstractComponentRefs-37 and to note 12 May 2004 decision that puts 28 in
AM: fragment in XML brought up by WSDL group
... this is water under the bridge
<DanC_lap> PROPOSED: to close issue-28 on the grounds that WSDL 2.0 is a REC
NM: will entertain a proposal to close ISSUE-28
LMM: mark pending review
... will look at it
<DanC_lap> (+1 pending review an LMM to look at it)
<DanC_lap> RESOLVED: to close issue-28 on the grounds that WSDL 2.0 is a REC
NM: ...
(describes reasons for writing the document)
... are we willing to agree that this describes how we will work?
... reads text about good scribing practices to help the chair
... also updated document to note that draft minutes should include plaintext
LMM: sent feedback that document combines procedural things with TAG "philosophy"
NM: would you like to rewrite this?
... Will think about your feedback
... intend this to be a hitchhikers' guide to the TAG
LMM: procedural issues may have no longevity, philosophical ones may have more
... perhaps should be separated for that reason?
... if you're asking whether this is a good starting point, then agree
NM: (would like to have practices that he can "enforce")
... do you buy the goal?
... do you have any objections?
JK: what actually though are the long term goals of this document? (essentially agreeing with Larry about the impression of mixing two separate, perhaps both important but perhaps better separate subjects)
<DanC_lap> (I have a bias against standing items.)
AM: (raises procedural issue I didn't really catch)
TBL: concerned that this doc is not public
NM: I want this doc to be for us, not for public feedback
... but if we decide that our public commitment is obvious elsewhere would be OK with this being public
... asks whether he should schedule additional work on this document or whether it works roughly
(group agrees to work with what is written)
NM: next TAG meeting scheduling?
... would like to do another f2f meeting around summertime
... any sympathy for Boston area meeting for early June?
HT: have scheduling difficulties generally
NM: proposes 2/3/4 June
16/17/18 June?
27/28/29 May in Boston? or 17/18/19 June
TVR: Is this just about picking dates?
... can we do this in email?
two concrete proposals: i) 27-29 May, ii) 17-19 June
23-25 June?
PROPOSED: tentative 23-25th June meeting
RESOLUTION: to check this against the previous proposals
NM: adjourns meeting
http://www.w3.org/2001/tag/2009/03/03-tagmem-minutes.html
Files are used to store different types and sizes of data. The size of a file can be obtained in several different ways in Python. In this tutorial, we will examine how to get the size of a file in Python.

Get File Size with the getsize() Method

The most popular way to get the size of a file in Python is the getsize() method, which is provided by the os.path module. In order to use the getsize() method, the os module should be imported.

import os
os.path.getsize("/home/ismail/cities.txt")

The output is shown below. The file size unit is the byte, so the following output means the file named cities.txt is 63 bytes.

63

If you are using the Windows operating system, which has a different path naming convention, each backslash should be written twice to express a single backslash:

import os
os.path.getsize("C:\\users\\ismail\\cities.txt")

Convert File Size To KB (KiloByte), MB (MegaByte), GB (GigaByte)

As stated previously, the returned file sizes are expressed in bytes. But files can be much bigger, and their sizes are more readable when expressed as KB (kilobytes), MB (megabytes), or GB (gigabytes). So some calculations are needed to convert. In the following example, we calculate the KB, MB, and GB sizes and store them in the variables named size_KB, size_MB, and size_GB.

import os
size = os.path.getsize("C:\\users\\ismail\\cities.txt")
size_KB = size / 1024
size_MB = size / (1024 * 1024)
size_GB = size / (1024 * 1024 * 1024)

Get File Size with the os.stat().st_size Attribute

The os module also provides the attribute named st_size in order to get the size of the specified file. The complete path of the attribute is os.stat(PATH).st_size, where PATH is the file path, which can be absolute or relative.

import os
size = os.stat("/home/ismail/cities.txt").st_size

Get File Size with the Path.stat().st_size Attribute

The file size can also be retrieved using the pathlib module's Path class and its stat() method with the st_size attribute. The complete expression is Path(PATH).stat().st_size.
Here PATH is the path of the file, which can be absolute or relative.

from pathlib import Path
size = Path("/home/ismail/cities.txt").stat().st_size
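Putting the pieces together, here is a short self-contained sketch. The helper name and the 1024-based units are choices made for this example, not part of the original article; the script writes a throwaway file so it can run anywhere, then checks that all three approaches agree:

```python
import os
import tempfile
from pathlib import Path

def human_readable(num_bytes):
    """Format a byte count using 1024-based units (a common convention)."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if num_bytes < 1024:
            return f"{num_bytes:.1f} {unit}"
        num_bytes /= 1024
    return f"{num_bytes:.1f} PB"

# Create a throwaway 2048-byte file so the example is self-contained.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 2048)
    path = f.name

# The three approaches from this article agree on the size in bytes.
a = os.path.getsize(path)
b = os.stat(path).st_size
c = Path(path).stat().st_size
assert a == b == c == 2048

print(human_readable(a))  # 2.0 KB
os.remove(path)
```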
https://pythontect.com/how-to-get-file-size-in-python/
are ordered as the reverse: lon,lat. Originally, lat,lon was used for both array and string, but the array format was changed early on to conform to the format used by GeoJSON.

A point can be expressed as a geohash. Geohashes are base32-encoded strings of the bits of the latitude and longitude interleaved. Each character in a geohash adds an additional 5 bits to the precision, so the longer the hash, the more precise it is. For indexing purposes, geohashes are translated into latitude-longitude pairs. During this process only the first 12 characters are used, so specifying more than 12 characters in a geohash doesn't increase the precision. The 12 characters provide 60 bits, which should reduce a possible error to less than 2cm.

When accessing the value of a geo-point in a script, the value is returned as a GeoPoint object, which allows access to the .lat and .lon values respectively:

def geopoint = doc['location'].value;
def lat = geopoint.lat;
def lon = geopoint.lon;

For performance reasons, it is better to access the lat/lon values directly:

def lat = doc['location'].lat;
def lon = doc['location'].lon;
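The base32/bit-interleaving scheme described above can be illustrated with a short Python sketch. To be clear, this is not Elasticsearch code, just a stand-alone decoder for the geohash format: each character contributes 5 bits, and the bits alternately halve the longitude and latitude ranges, longitude first:

```python
# Stand-alone geohash decoder illustrating the scheme described above.
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def decode_geohash(geohash):
    """Return the (lat, lon) at the center of the geohash cell."""
    lat_range = [-90.0, 90.0]
    lon_range = [-180.0, 180.0]
    is_lon = True  # the first (highest) bit refines longitude
    for ch in geohash[:12]:  # characters past 12 add no useful precision
        value = _BASE32.index(ch)
        for shift in range(4, -1, -1):  # each character carries 5 bits
            bit = (value >> shift) & 1
            rng = lon_range if is_lon else lat_range
            mid = (rng[0] + rng[1]) / 2
            if bit:
                rng[0] = mid  # keep the upper half
            else:
                rng[1] = mid  # keep the lower half
            is_lon = not is_lon
    return (sum(lat_range) / 2, sum(lon_range) / 2)

lat, lon = decode_geohash("ezs42")
print(round(lat, 3), round(lon, 3))  # 42.605 -5.603
```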
https://www.elastic.co/guide/en/elasticsearch/reference/7.3/geo-point.html
Go's strings package has several functions available to work with the string data type. These functions let us easily modify and manipulate strings. We can think of functions as being actions that we perform on elements of our code. Built-in functions are those that are defined in the Go programming language and are readily available for us to use.

In this tutorial, we'll review several different functions that we can use to work with strings in Go.

The functions strings.ToUpper and strings.ToLower will return a string with all the letters of an original string converted to uppercase or lowercase letters. Because strings are immutable data types, the returned string will be a new string. Any characters in the string that are not letters will not be changed.

To convert the string "Sammy Shark" to be all uppercase, you would use the strings.ToUpper function:

ss := "Sammy Shark"
fmt.Println(strings.ToUpper(ss))

Output
SAMMY SHARK

To convert to lowercase:

fmt.Println(strings.ToLower(ss))

Output
sammy shark

Since you are using the strings package, you first need to import it into a program. To convert the string to uppercase and lowercase the entire program would be as follows:

package main

import (
	"fmt"
	"strings"
)

func main() {
	ss := "Sammy Shark"
	fmt.Println(strings.ToUpper(ss))
	fmt.Println(strings.ToLower(ss))
}

The strings.ToUpper and strings.ToLower functions make it easier to evaluate and compare strings by making case consistent throughout. For example, if a user writes their name all lowercase, we can still determine whether their name is in our database by checking it against an all uppercase version.

The strings package has a number of functions that help determine if a string contains a specific sequence of characters. The strings.HasPrefix and strings.HasSuffix functions allow you to check whether a string starts or ends with a specific set of characters.
For example, to check to see if the string "Sammy Shark" starts with Sammy and ends with Shark:

ss := "Sammy Shark"
fmt.Println(strings.HasPrefix(ss, "Sammy"))
fmt.Println(strings.HasSuffix(ss, "Shark"))

Output
true
true

You would use the strings.Contains function to check if "Sammy Shark" contains the sequence Sh:

fmt.Println(strings.Contains(ss, "Sh"))

Output
true

Finally, to see how many times the letter S appears in the phrase Sammy Shark:

fmt.Println(strings.Count(ss, "S"))

Output
2

Note: All strings in Go are case sensitive. This means that Sammy is not the same as sammy. Using a lowercase s to get a count from Sammy Shark is not the same as using uppercase S:

fmt.Println(strings.Count(ss, "s"))

Output
0

Because S is different than s, the count returned will be 0.

String functions are useful when you want to compare or search strings in your program.

The built-in function len() returns the number of characters in a string. This function is useful for when you need to enforce minimum or maximum password lengths, or to truncate larger strings to be within certain limits for use as abbreviations.

To demonstrate this function, we'll find the length of a sentence-long string:

package main

import (
	"fmt"
)

func main() {
	openSource := "Sammy contributes to open source."
	fmt.Println(len(openSource))
}

Output
33

We set the variable openSource equal to the string "Sammy contributes to open source." and then passed that variable to the len() function with len(openSource). Finally we passed the function into the fmt.Println() function so that we could see the program's output on the screen.

Keep in mind that the len() function will count any character bound by double quotation marks, including letters, numbers, whitespace characters, and symbols.

The strings.Join, strings.Split, and strings.ReplaceAll functions are a few additional ways to manipulate strings in Go.

The strings.Join function is useful for combining a slice of strings into a new single string.
To create a comma-separated string from a slice of strings, we would use this function as per the following:

fmt.Println(strings.Join([]string{"sharks", "crustaceans", "plankton"}, ","))

Output
sharks,crustaceans,plankton

If we want to add a comma and a space between string values in our new string, we can simply rewrite our expression with a whitespace after the comma: strings.Join([]string{"sharks", "crustaceans", "plankton"}, ", ").

Just as we can join strings together, we can also split strings up. To do this, we can use the strings.Split function and split on the spaces:

balloon := "Sammy has a balloon."
s := strings.Split(balloon, " ")
fmt.Println(s)

Output
[Sammy has a balloon.]

The output is a slice of strings. Since fmt.Println was used, it is hard to tell what the output is by looking at it. To see that it is indeed a slice of strings, use the fmt.Printf function with the %q verb to quote the strings:

fmt.Printf("%q", s)

Output
["Sammy" "has" "a" "balloon."]

Another useful function in addition to strings.Split is strings.Fields. The difference is that strings.Fields will ignore all whitespace, and will only split out the actual fields in a string:

data := "  username password     email  date"
fields := strings.Fields(data)
fmt.Printf("%q", fields)

Output
["username" "password" "email" "date"]

The strings.ReplaceAll function can take an original string and return an updated string with some replacement.

Let's say that the balloon that Sammy had is lost. Since Sammy no longer has this balloon, we would change the substring "has" from the original string balloon to "had" in a new string:

fmt.Println(strings.ReplaceAll(balloon, "has", "had"))

Within the parentheses, first is balloon, the variable that stores the original string; the second substring "has" is what we would want to replace, and the third substring "had" is what we would replace that second substring with.
Our output would look like this when we incorporate this into a program:

Output
Sammy had a balloon.

Using the string functions strings.Join, strings.Split, and strings.ReplaceAll will provide you with greater control to manipulate strings in Go.

This tutorial went through some of the common string package functions for the string data type that you can use to work with and manipulate strings in your Go programs. You can learn more about other data types in Understanding Data Types and read more about strings in An Introduction to Working with.

Hi, thank you for the article! Only I'm not sure if it is a good idea to use len() to measure the length of the string. Here the example
https://www.digitalocean.com/community/tutorials/an-introduction-to-the-strings-package-in-go
Sample problem: I have the following program:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int a, b;
    char c1, c2;
    printf("Enter something: ");
    scanf("%d", &a);                 // line 1
    printf("Enter other something: ");
    scanf("%d", &b);                 // line 2
    printf("Enter a char: ");
    scanf("%c", &c1);                // line 3
    printf("Enter another char: ");
    scanf("%c", &c2);                // line 4
    printf("Done");                  // line 5
    system("PAUSE");
    return 0;
}

As I read in the C book, the author says that scanf() leaves a newline character in the buffer; therefore, the program does not stop at line 4 for the user to enter data, but rather stores the newline character in c2 and moves to line 5. Is that right? However, does this only happen with char data types? I did not see this problem with the int data types at lines 1, 2, and 3. Is that right?

Answer #1: The scanf() function skips leading whitespace automatically before trying to parse conversions other than characters. The character formats (primarily %c; also scan sets %[…] — and %n) are the exception; they don't skip whitespace. Use " %c" with a leading blank to skip optional white space. Do not use a trailing blank in a scanf() format string. Note that this still doesn't consume any trailing whitespace left in the input stream, not even to the end of a line, so beware of that if also using getchar() or fgets() on the same input stream. We're just getting scanf to skip over whitespace before conversions, like it does for %d and other non-character conversions.

Note that non-whitespace "directives" (to use POSIX scanf terminology) other than conversions, like the literal text in scanf("order = %d", &order);, don't skip whitespace either. The literal order has to match the next character to be read. So you probably want " order = %d" there if you want to skip a newline from the previous line but still require a literal match on a fixed string.

Answer #2: Use scanf(" %c", &c2);. This will solve your problem.
Answer #3: I suggest tossing scanf() away, to never use it, and to instead use fgets() and sscanf(). The reason for this is, that at least in Unix-like systems by default, the terminal your CLI program runs on does some processing of the user input before your program sees it. It buffers input until a newline is entered, and allows for some rudimentary line editing, like making backspace work. So, you can never get a single character at a time, or a few single characters, just a full line. But that’s not what e.g. scanf("%d") processes, instead it processes just the digits, and stops there, leaving the rest buffered in the C library, for a future stdio function to use. If your program has e.g. printf("Enter a number: "); scanf("%d", &a); printf("Enter a word: "); scanf("%s", word); and you enter the line 123 abcd, it completes both scanf()s at once, but only after a newline is given. The first scanf() doesn’t return when a user has hit space, even though that’s where the number ends (because at that point the line is still in the terminal’s line buffer); and the second scanf() doesn’t wait for you to enter another line (because the input buffer already contains enough to fill the %s conversion). This isn’t what users usually expect! Instead, they expect that hitting enter completes the input, and if you hit enter, you either get a default value, or an error, with possibly a suggestion to please really just give the answer. You can’t really do that with scanf("%d"). If the user just hits enter, nothing happens. Because scanf() is still waiting for the number. The terminal sends the line onward, but your program doesn’t see it, because scanf() eats it. You don’t get a chance to react to the user’s mistake. That’s also not very useful. Hence, I suggest using fgets() or getline() to read a full line of input at a time. This exactly matches what the terminal gives, and always gives your program control after the user has entered a line. 
What you do with the input line is up to you; if you want a number, you can use atoi(), strtol(), or even sscanf(buf, "%d", &a) to parse the number. sscanf() doesn't have the same mismatch as scanf(), because the buffer it reads from is limited in size, and when it ends, it ends — the function can't wait for more. (fscanf() on a regular file can also be fine if the file format is one that supports how it skims over newlines like any whitespace. For line-oriented data, I'd still use fgets() and sscanf().)

So, instead of what I had above, use something like this:

printf("Enter a number: ");
fgets(buf, bufsize, stdin);
sscanf(buf, "%d", &a);

or, actually, check the return value of sscanf() too, so you can detect empty lines and otherwise invalid data:

#include <stdio.h>

int main(void)
{
    const int bufsize = 100;
    char buf[bufsize];
    int a;
    int ret;
    char word[bufsize];

    printf("Enter a number: ");
    fgets(buf, bufsize, stdin);
    ret = sscanf(buf, "%d", &a);
    if (ret != 1) {
        fprintf(stderr, "Ok, you don't have to.\n");
        return 1;
    }

    printf("Enter a word: ");
    fgets(buf, bufsize, stdin);
    ret = sscanf(buf, "%s", word);
    if (ret != 1) {
        fprintf(stderr, "You make me sad.\n");
        return 1;
    }

    printf("You entered %d and %s\n", a, word);
}

Of course, if you want the program to insist, you can create a simple function to loop over the fgets() and sscanf() until the user deigns to do what they're told; or to just exit with an error immediately. Depends on what you think your program should do if the user doesn't want to play ball.

You could do something similar e.g. by looping over getchar() to read characters until a newline after scanf("%d") returned, thus clearing up any garbage left in the buffer, but that doesn't do anything about the case where the user just hits enter on an empty line. Anyway, fgets() would read until a newline, so you don't have to do it yourself.

Answer #4: Use getchar() before calling second scanf().
scanf("%c", &c1);
getchar();        // <== remove newline
scanf("%c", &c2);

Hope you learned something from this post. Follow Programming Articles for more!
https://programming-articles.com/scanf-leaves-the-new-line-char-in-the-buffer-c-answered/
I needed to write a plugin that provided a dynamic number of new commands. I wanted to use Python's built-in 'type()' function to create a new class for each command. Python lets you create new classes by writing code like this;

Code:
X = type('X', (object,), dict(a=1))

which is equivalent to

Code:
class X(object):
    a = 1

In my case, I'm creating a system which converts plain text files to a number of different document formats. I have a number of templates (".tex" files) and I need to create a command for every ".tex" file in my plugin folder. Which means I needed to write code like the following. It has been cleaned up a bit for clarity;

Code:
#
# Here's the base class for
# all dynamic classes:
#
class DynamicPdfCommand(sublimeplugin.TextCommand):
    def run(self, view, args):
        self.compile(view, self.template)

#
# Here's the loop adding
# a class per template;
#
requiredEnd = ".tex"
for file in os.listdir(packageDir):
    if file.lower().endswith(requiredEnd):
        className = createClassName(file)
        newClass = type(className, (DynamicPdfCommand,), dict(template=file))
        globals()[ className ] = newClass

So the things to note are;

- First, create a base class for all your dynamic types. Note how the DynamicPdfCommand.run() method refers to 'self.template'; we'll set this later
- Inside a loop, you create new class names as strings, then use type() to create the new classes. The last parameter here is a dictionary of initial values -- that's where we set the self.template variable.
- Once you've created the type, you need to add it to the module so that ST will pick it up. That's what the line

Code:
globals()[ className ] = newClass

is all about.

Anyway, that's the technique. Hope someone else finds it useful.
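The same pattern can be exercised stand-alone, outside Sublime Text. Everything below is illustrative: TextCommand is a stand-in for sublimeplugin.TextCommand, and create_class_name is a hypothetical helper playing the role of the author's createClassName; only the type()/globals() pattern mirrors the post:

```python
# Stand-alone sketch of the dynamic-class technique described above.
class TextCommand:  # stand-in for sublimeplugin.TextCommand
    def run(self):
        raise NotImplementedError

class DynamicPdfCommand(TextCommand):
    def run(self):
        return "compiling with " + self.template

def create_class_name(filename):
    # e.g. "report.tex" -> "ReportCommand" (an invented naming convention)
    return filename.rsplit(".", 1)[0].capitalize() + "Command"

for template in ["report.tex", "letter.tex"]:
    class_name = create_class_name(template)
    new_class = type(class_name, (DynamicPdfCommand,), dict(template=template))
    globals()[class_name] = new_class  # register the class in this module

print(ReportCommand().run())  # compiling with report.tex
print(LetterCommand().run())  # compiling with letter.tex
```

The type() call's third argument becomes the new class's attribute dictionary, which is why each generated class carries its own template value.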
http://www.sublimetext.com/forum/viewtopic.php?f=6&t=1283&start=0
I could be asking the obvious but do square brackets work for you? I would expect:

result=win32com.client.Dispatch("COMClass")["ItemName"]
result=win32com.client.Dispatch("COMClass")[0]

to work or possibly:

result=win32com.client.Dispatch("COMClass").Item[0]

Another possibility is selecting the overloads on get_Item. I don't know why we'd think there'd be two object versions but bound methods will have an "Overloads" property (after importing clr or another .NET namespace) which might help you (I'm not sure it will, but it's worth a shot).

Finally I would suggest that you could also look into making a PIA (Primary Interop Assembly) for your COM objects if you have a TLB instead of writing your own dispatch code. A tool ships w/ the .NET framework called tlbimp so if you have a TLB for your COM objects you can use that to generate an assembly. Add a reference to that assembly and you might find that you have a better COM experience. Unfortunately that only helps if you have a TLB available.

Hopefully one of those will help.

From: users-bounces at lists.ironpython.com [mailto:users-bounces at lists.ironpython.com] On Behalf Of Martin
Sent: Wednesday, July 11, 2007 7:45 AM
To: users at lists.ironpython.com
Subject: [IronPython] "TypeError: Microsoft.Scripting.DynamicType is not callable" bug(?)

Hi,

Background: We have a number of Python scripts that we use for updating and setting up the devices my company make. We do this through calls to a number of COM classes via win32com.client.Dispatch(). We are interested in being able to call these Python scripts from various .NET GUI applications written in C# so naturally I started looking into IronPython.
My goal is to make our CPython code as executable in IronPython as humanly possible, so since win32com is not available through IronPython I wrote a simple C# interop wrapper handling the interops for the specific COM classes we need to call when setting up our devices:

namespace win32com {
    public class client {
        public static object Dispatch(string comName) {
            <rip>
        }
    }
}

The annoying thing is that this "hack" works like a charm except in one type of case.

Problem: Some of our COM classes return Item collections that are indexable by both specific names (string indexing) and normal int indexes - example in CPython:

result=win32com.client.Dispatch("COMClass")("ItemName")

and/or

result=win32com.client.Dispatch("COMClass")(0)

This works like a charm in CPython (and the interop version of our C# code for that matter) but it does NOT work in IronPython.

In IronPython 1.1 I get this error message: "TypeError: object is not callable"

In IronPython 2.0 alpha 3 I get this error message: "TypeError: Microsoft.Scripting.DynamicType is not callable".

I can access the properties of the object such as item count, however I cannot index into the result object by calling result.get_Item("ItemName"). It gives the following error message: TypeError: multiple overloads of get_Item could match (String) get_Item(Object) get_Item(Object)

Nor can I index into the result object by calling result.get_Item(0), which gives a similar error except the problematic match is Int32. So apparently IronPython cannot identify the type of the index objects when more are present?

Question: Anyone have any comments that can help me here? Have I missed some subtle issues with COM interop? Is this a known bug and if so when do you think it will be fixed? (I know I can get around the problem by writing specific IronPython code + changing the handling of the COM class in my C# wrapper but I REALLY don't want to, nor should I have to, if the goal is to make IronPython as clean a port of CPython as possible.)

PS.
Great initiative and great work otherwise in the port
BR Martin Storm Møller
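The square-bracket suggestion at the top of this thread maps to Python's __getitem__ protocol, while the reported error is the generic "not callable" TypeError. The plain-CPython sketch below (no COM or IronPython involved; the class is invented for illustration) shows why an object can support obj[...] while obj(...) fails:

```python
# Plain CPython illustration: an object that implements indexing but not
# calling supports obj[...] while obj(...) raises the same kind of
# "not callable" TypeError reported in the thread.
class ItemCollection:
    def __init__(self, items):
        self._items = items          # name -> value
        self._order = list(items)    # insertion order, for int indexes

    def __getitem__(self, key):
        if isinstance(key, int):     # obj[0]
            return self._items[self._order[key]]
        return self._items[key]      # obj["ItemName"]

result = ItemCollection({"ItemName": 42})
print(result["ItemName"])  # 42
print(result[0])           # 42

try:
    result("ItemName")     # call syntax, like Dispatch("COMClass")("ItemName")
except TypeError as exc:
    print("TypeError:", exc)
```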
https://mail.python.org/pipermail/ironpython-users/2007-July/005235.html
Swift version: 5.4

You can calculate the distance between two CGPoints by using Pythagoras's theorem, but be warned: calculating square roots is not fast, so if possible you want to avoid it. More on that in a moment, but first here's the code you need:

func CGPointDistanceSquared(from: CGPoint, to: CGPoint) -> CGFloat {
    return (from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y)
}

func CGPointDistance(from: CGPoint, to: CGPoint) -> CGFloat {
    return sqrt(CGPointDistanceSquared(from: from, to: to))
}

Note that there are two functions: one for returning the distance between two points, and one for returning the distance squared between two points. The latter one doesn't use a square root, which makes it substantially faster. This means if you want to check "did the user tap within a 10-point radius of this position?" it's faster to square that 10 (to make 100) then use CGPointDistanceSquared().
https://www.hackingwithswift.com/example-code/core-graphics/how-to-calculate-the-distance-between-two-cgpoints
I am learning about object oriented programming from Problem Solving with Algorithms and Data Structures. The Connector class:

class Connector:

    def __init__(self, fgate, tgate):
        self.fromgate = fgate
        self.togate = tgate
        tgate.setNextPin(self)

    def getFrom(self):
        return self.fromgate

    def getTo(self):
        return self.togate

setNextPin BinaryGate(LogicGate) source Connector setNextPin Connector BinaryGate

You are missing two things here: implied function parameters and duck typing.

Firstly, the syntax tgate.setNextPin(self) calls the function setNextPin(tgate, self). The first parameter is the object for which the method is being called (usually named self). You are getting confused because we are passing self as the second parameter to the function setNextPin, but the first parameter of that function is also called self. This is a scoping issue: those two selfs refer to different parameters (depending on which function you are in).

Secondly, there is duck typing. How does the function know it can access the method in the BinaryGate class? It doesn't know that at all! All the function knows is that it expects tgate to be of a type that has a method setNextPin. It doesn't matter if this is a BinaryGate or any other object. If the object passed doesn't have this method to call then Python will fail at run time as it will be unable to find the function setNextPin to call.
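A runnable, simplified version of the classes under discussion makes both points visible. The BinaryGate here is trimmed down to just what the two points need (the book's real version does much more), and the Probe class is invented to demonstrate duck typing:

```python
# Simplified, runnable sketch of the classes discussed above.
class BinaryGate:
    def __init__(self, label):
        self.label = label
        self.pin_a = None

    def setNextPin(self, source):
        # Called as tgate.setNextPin(self) from Connector.__init__:
        # here 'self' is the gate and 'source' is the Connector instance.
        self.pin_a = source

class Connector:
    def __init__(self, fgate, tgate):
        self.fromgate = fgate
        self.togate = tgate
        tgate.setNextPin(self)  # works for ANY object with a setNextPin method

    def getFrom(self):
        return self.fromgate

    def getTo(self):
        return self.togate

g1, g2 = BinaryGate("G1"), BinaryGate("G2")
c = Connector(g1, g2)
print(c.getFrom().label, "->", c.getTo().label)  # G1 -> G2
print(g2.pin_a is c)                             # True

# Duck typing: a completely unrelated class also works, as long as it
# has a setNextPin method.
class Probe:
    def setNextPin(self, source):
        self.seen = source

p = Probe()
c2 = Connector(g1, p)
print(p.seen is c2)  # True
```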
https://codedump.io/share/gcs2FKDgw4lA/1/object-oriented-programming-is-a-vs-has-a-relationships
JavaRanch » Java Forums » Java » Beginning Java

Need help with add() method

Akane Tanaban (Greenhorn, Joined: May 17, 2004) posted May 17, 2004 19:59:00

Hello, I am a beginner taking my first java class, and my teacher posted this example for us to compile and run. However, it doesn't work, and he is slow to respond to my question. Homework is based on extending this program, but it's hard to even begin if the example program doesn't work. Any kind hearted soul willing to help me? Thanks. Here is the code. The error message follows.

// Begin code
import java.awt.*;
import java.awt.event.*;
import java.applet.*;

public class AdderButton extends Applet implements ActionListener {
    // ---- class and instance objects ----
    Label lab_num1, lab_num2, instructions, report;
    TextField tf_num1, tf_num2;
    Button do_it_button;
    double num1, num2, sum;

    // ---- init method ----
    public void init() {
        // set up some labels and text fields
        instructions = new Label(" Enter 2 numbers, press the button and I'll"
                + " tell you their sum.");
        lab_num1 = new Label("Enter the first number ");
        lab_num2 = new Label("Enter the second number ");
        tf_num1 = new TextField(10);
        tf_num2 = new TextField(10);
        report = new Label("The sum is 0 "); // leave room for big sums
        do_it_button = new Button("Do The Sum!");

        // place the componets onto the canvas
        add(instructions);
        add(lab_num1);
        add(tf_num1);
        add(lab_num2);
        add(tf_num2);
        add(do_it_button);
        add(report);

        // prepare to respond to user button press
        do_it_button.addActionListener( this ); // means THIS applet will

        // initialize the numeric (instance) variables.
        sum = num1 = num2 = 0;
    }

    // ---- paint method ----
    public void paint( Graphics g ) {
        // we do the addition first for debugging and maintenance
        report.setText("The sum is " + sum); // from this + (why?)
    }

    // ---- actionPerformed method ----
    public void actionPerformed( ActionEvent e ) {
        // Any time they do anything (there is only thing to
        // do in our applet: type numbers) update sum.
        num1 = Double.valueOf(tf_num1.getText()).doubleValue();
        // num1 = Double.parseDouble( tf_num1.getText() ); only available in 1.2
        num2 = Double.valueOf(tf_num2.getText()).doubleValue();
        sum = num1 + num2;
        // then force a repaint()
        // Note: NEVER (yes there are rare exceptions)
        // draw outside of your paint() method.
        repaint();
    }
}
// End of code

/*---------------Error message-----------------
Applet.java [12:1] cyclic inheritance involving Applet
public class Applet extends Applet implements ActionListener
^
AdderButton.java [28:1] cannot resolve symbol
symbol  : method add (java.awt.Label)
location: class AdderButton
add(instructions);
^
AdderButton.java [29:1] cannot resolve symbol
symbol  : method add (java.awt.Label)
location: class AdderButton
add(lab_num1);
^
AdderButton.java [30:1] cannot resolve symbol
symbol  : method add (java.awt.TextField)
location: class AdderButton
add(tf_num1);
^
AdderButton.java [31:1] cannot resolve symbol
symbol  : method add (java.awt.Label)
location: class AdderButton
add(lab_num2);
^
AdderButton.java [32:1] cannot resolve symbol
symbol  : method add (java.awt.TextField)
location: class AdderButton
add(tf_num2);
^
AdderButton.java [33:1] cannot resolve symbol
symbol  : method add (java.awt.Button)
location: class AdderButton
add(do_it_button);
^
AdderButton.java [34:1] cannot resolve symbol
symbol  : method add (java.awt.Label)
location: class AdderButton
add(report);
^
AdderButton.java [61:1] cannot resolve symbol
symbol  : method repaint ()
location: class AdderButton
repaint();
^
9 errors
Errors compiling AdderButton.
-----------------End of error message---------*/

[ edited to preserve formatting using the [code] and [/code] UBB tags -ds ]
[ May 17, 2004: Message edited by: Dirk Schreckmann ]

Ben Buchli (Ranch Hand, Joined: Mar 26, 2004) posted May 17, 2004 21:19:00

hi, when i run your code it works... what jre version are you running? is this error message really from when you run the code posted?? it refers to a class called Applet that extends Applet. Was your AdderButton class once called Applet?
[ May 17, 2004: Message edited by: Ben Buchli ]

John Smith (Ranch Hand, Joined: Oct 08, 2001) posted May 17, 2004 21:28:00

Compiles fine for me. I suspect that you have some other class in your project/package that causes the problem. I added class

public class Applet extends Applet {
}

And now I see the same list of errors that you posted, including the "cyclical inheritance", which is the key.

Akane Tanaban (Greenhorn) posted May 17, 2004 21:37:00

I am using java2 sdk 1.4.1, standard edition. Also, I am using this software called Sun One Studio 4 update 1, Community Edition as the graphic interface -- I guess that's what you call it. It has a text editor and compiler all there easy to use. I tried to compile it from the command window by using javac as well, and it gave me the same errors. In my confusion, I did accidentally name it Applet before, but then I changed the name and saved it and everything, and it still gives that error. I cut and pasted the program several times into a new file, but that error still comes up like that. If I take the 'extends Applet' part out, it seems to go away.

John Smith (Ranch Hand, Joined: Oct 08, 2001) posted May 17, 2004 21:42:00

In my confusion, I did accidentally name it Applet before, but then I changed the name and saved it and everything, and it still gives that error.

Search in your file system for file Applet.java and Applet.class, -- they may still be out there in the IDE disk cache, and your project may be tracking it.

Ben Buchli (Ranch Hand, Joined: Mar 26, 2004) posted May 17, 2004 21:45:00

if the problem still persists, you might want to compile a new .class file. if that doesnt help, restart the ide and make sure there's no hanging vm. hope that helps

Akane Tanaban (Greenhorn) posted May 17, 2004 22:16:00

Thank you for your help. The Applet.java file was still in the same folder as the AdderButton.java file. When I tried to delete it, to my surprise it stated that it was being used, and that I needed to close it in order to delete it. At that time, only the AdderButton.java file was open. So I guess it was somehow finding it. Once I closed everything, I was able to delete the Applet.java file, and the AdderButton.java file compiled just fine! Thank you, thank you, thank you
http://www.coderanch.com/t/396471/java/java/add-method
CC-MAIN-2014-42
refinedweb
1,129
65.12
ImportError: No module named Npp

    import os;
    import sys;
    from Npp import notepad

    filePathSrc = "M:\\server"
    for root, dirs, files in os.walk(filePathSrc):
        for fn in files:
            if fn[-4:] == '.yml':
                notepad.open(root + "\\" + fn)
                notepad.runMenuCommand("Encoding", "Convert to ANSI")
                notepad.runMenuCommand("Encoding", "Encode in ANSI")
                notepad.save()
                notepad.close()

    Process started (PID=124000) >>>
    Traceback (most recent call last):
      File "ansi.py", line 3, in <module>
        from Npp import notepad
    ImportError: No module named Npp
    <<< Process finished (PID=124000). (Exit code 1)

Claudia Frank

did you run the script via NppExec plugin? You need to use the PythonScript plugin to have access to the notepad object. Cheers, Claudia

How do I run the script through NppExec? Thanks for the response.

Claudia Frank

you can't - NppExec does not know anything about Npp - you need to use the PythonScript plugin, as this plugin is creating and exporting the Npp namespace. Cheers, Claudia

Thank you! This problem is solved.
https://community.notepad-plus-plus.org/topic/15837/importerror-no-module-named-npp
CC-MAIN-2021-31
refinedweb
164
68.47
JavaScript's place within the development sphere continues to grow and evolve. This was demonstrated throughout the recent Microsoft TechEd conference where JavaScript solutions for Windows 8 and web development were covered, along with the JavaScript superset TypeScript, which follows in the footsteps of CoffeeScript and other similar offerings. Here's a look at this new foray called TypeScript; it just might bring JavaScript to the mainstream. A Microsoft version of JavaScript? JavaScript is a Web standard, so you may be wondering why Microsoft is promoting TypeScript (can you say JScript?). TypeScript does not rewrite JavaScript or offer an alternative — it offers a way to more easily develop JavaScript code. JavaScript development is often seen as sort of the Wild West. The selling point for TypeScript is manageability; that is, TypeScript provides compiler services and type checking, and Visual Studio integration is available. While I am not worried about Microsoft trying to highjack JavaScript, TypeScript is definitely an approach to JavaScript development that targets developers who are using Microsoft technologies. How to get and use TypeScript TypeScript has been available since October 2012, but it has gained momentum lately as new features have been added. TypeScript is open source available from its website, and the source code is available via CodePlex. The basic way to utilize TypeScript is via npm (Node.js package manager). This includes a command-line compiler that processes TypeScript source files (normally using .ts file extension) and generates standard JavaScript. The basic syntax for the command-line compiler is: tsc.exe <input file> The result is a JavaScript output file. Like most command-line tools, it offers a number of options (switches) that are described in more detail in its documentation. The TypeScript site provides an online playground for working with the language on-the-fly, so visit the play area to get a feel for what it offers. 
While on the site, the samples section features a number of excellent examples of how it may be used in real-world scenarios. The other way to utilize TypeScript is via its Visual Studio 2012 plug-in. This allows you to develop TypeScript (and subsequently JavaScript) within the standard Microsoft development environment. (There are a number of articles on the web that provide information on utilizing TypeScript within Visual Studio 2010.) What TypeScript offers A lot of the TypeScript discussions at TechEd 2013 that I heard focused on developing Windows Store applications. With that said, the latest TypeScript version (0.9) brings generics to the language, which offers a vehicle to promote code reuse. Other features include method overloading, interfaces, internal/external modules (think namespaces) and much more. This blog post from the Microsoft team provides more information on the features of TypeScript 0.9. When examining something new like TypeScript or CoffeeScript in the past (well, at least, new to me), it becomes a personal choice on whether the new tools and/or technology are worth the effort — that is, will they help me with the daily grind of application development? I am still undecided on whether TypeScript will help me, but I can see its usefulness for large-scale development efforts involving JavaScript. Its usefulness lies in its tight integration with the current iteration of Visual Studio (and I assume future versions). All of the TypeScript documentation and content seems to highlight simplifying tool integration as a key goal of the TypeScript initiative. While Microsoft brought Visual Studio integration to the fold, others can easily integrate with their tools as well; it remains to be seen whether this will happen. The inclusion with Visual Studio 2012 brings a powerful feature to JavaScript development: debugging capabilities. 
This has always been a sore spot with JavaScript developers, as they often lean heavily on alert statements or use JavaScript debuggers like Firebug. As previously stated, the tight integration with Microsoft's IDE means TypeScript is likely to remain a Microsoft tool/language, especially with all of the examples of using it in Microsoft Store applications. Withholding judgment I dismissed TypeScript when I first learned about it, but I became more intrigued with TypeScript after hearing it talked about at TechEd 2013. While the concept is interesting, I am still not sure it will have a place in my toolbox; the main reasons why are I have no current plans for Microsoft Store-based projects or any large-scale enterprise applications using JavaScript. However, like other development technologies, I will give it a more thorough test drive to determine whether I will use it on future projects. Microsoft is promising the next version of TypeScript (1.0) later this year, so stay tuned.
http://www.techrepublic.com/blog/software-engineer/microsoft-puts-its-spin-on-javascript-with-typescript/
CC-MAIN-2017-17
refinedweb
774
52.8
Get an introduction to using common SQL functions in Oracle Berkeley DB.

Published September 2007

The Oracle Berkeley DB team is frequently asked, "How do I do <SQL query here> in Berkeley DB?" So, this is an introduction to implementing much of your favorite SQL functionality in Oracle Berkeley DB. Not all SQL applications should be implemented in Oracle Berkeley DB--an open source, embeddable database engine that provides fast, reliable, local persistence with zero administration required--but if you have a relatively fixed set of queries and you care about performance, Berkeley DB may just be the way to go.

Let's start at the very beginning (a very good place to start). When you read you begin with ABC; in Berkeley DB you begin with terminology. Here is a small "translation guide" for diehard SQL programmers: a SQL database corresponds to a Berkeley DB environment; a SQL table corresponds to a Berkeley DB database, referenced through a DB handle; a SQL row, or tuple, corresponds to a key/data pair; and a SQL secondary index is simply another Berkeley DB database, associated with the primary one.

Let's pick an application domain—the traditional employee database, but somewhat simplified. Furthermore, we'll assume that you want all of Berkeley DB's bells and whistles: concurrency, transactions, recoverability, and so on.

In SQL, you say

    CREATE DATABASE personnel

In Berkeley DB, you want to create the environment in which you'll place all your application data. Throughout your code, you'll refer to that environment via an environment handle, whose type is DB_ENV. And you'll use that handle to operate upon the environment. For now, we'll ignore any fancy error handling and strictly focus on the APIs.

    DB_ENV *dbenv;
    int ret;

    /* Create the handle. */
    DB_ASSERT(db_env_create(&dbenv, 0) == 0);

    /*
     * If you wanted to configure the environment, you would do that here.
     * Configuration might include things like setting a cache size,
     * specifying error handling functions, specifying (different)
     * directories in which to place your log and/or data files, setting
     * parameters to describe how many locks you'd need, etc.
     */

    /* Now, open the handle.
     */
    DB_ASSERT(dbenv->open(dbenv, "my_databases/personnel",
        DB_CREATE | DB_INIT_LOCK | DB_INIT_MPOOL | DB_INIT_TXN | DB_THREAD,
        0644) == 0);

You've now created and opened an environment. A few things are worth noting. In SQL, queries are typically processed by a separate server, and that server is configured by a database administrator to work well (or not) on your systems. As Berkeley DB is embedded in your application, your application may perform much of this configuration. However, that's really about database tuning, and we'll leave that for a separate article.

Now that you've created a database, it's time to create some tables. In Berkeley DB, tables are referenced by handles of type DB *. For each table in your application, you'll typically open one handle and then use that handle in one or more threads. So, in SQL you might say

    CREATE TABLE employee (empid int(8) PRIMARY KEY,
                           last_name varchar(20),
                           first_name varchar(15),
                           salary numeric(10, 2),
                           street varchar(20),
                           city varchar(15),
                           state char(2),
                           zip int(5))

Before we look at the Berkeley DB code to implement this, it's important to remember that in SQL, the database is responsible for implementing and interpreting the schema of your data. In Berkeley DB, this interpretation is left up to the application. This will become more interesting when we examine the data manipulation language (DML), but for now, it will be apparent, because in creating the employee table, Berkeley DB will know only about the primary key and not about the different fields in the database.

First, you need to create a database handle to represent the table that you're creating. (Again, we're skipping error handling.)

    DB *dbp;
    DB_ENV *dbenv;

    /* Let's assume we've used the code from above to set dbenv. */
    ASSERT(db_create(&dbp, dbenv, 0) == 0);

    /*
     * Like with the environment, tables can also be configured. You
     * can specify things like comparison functions, page-size, etc.
     * That would all go here.
     */

    /* Now, we'll actually open/create the primary table. */
    ASSERT(dbp->open(dbp, NULL, "employee.db", NULL, DB_BTREE,
        DB_AUTO_COMMIT | DB_CREATE | DB_THREAD, 0644) == 0);

This call creates the table, using a B-tree as the primary index structure. The table will be materialized in the directory my_databases/personnel with the name employee.db. That file will contain only a single table and will have file system permissions as specified by the final parameter (0644). The flags that we've specified create the table in a transaction, allowing future transactional operations (DB_AUTO_COMMIT); allow creation of the table if it doesn't exist (DB_CREATE); and specify that the resulting handle can be used by multiple threads of control simultaneously (DB_THREAD).

Notice that you haven't specified exactly what comprises a primary key (index) or what the data fields look like that are stored in this table. That will all fall on the application and will become more apparent when we get to the sections on insert, select, and update.

Now let's consider what would happen had you wanted both a primary index on the employee id and a secondary index on the last name. You'd use the SQL query specified above and then issue

    CREATE INDEX lname ON employee (last_name)

In Berkeley DB, secondary indexes look just like tables. You can then associate tables to make one a secondary index of the other. In order to implement this functionality, you'll need to dive a bit more deeply into the data representation that your application is going to use. Let's assume that your application is going to use a C structure to contain the tuples in our employee table. You might define that structure as shown below:

    typedef struct _emp_data {
        char lname[20];
        char fname[15];
        float salary;
        char street[20];
        char city[15];
        char state[2];
        int zip;
    } emp_data;

And let's say that the employee ID is a simple integer:

    typedef int emp_key;

    DBT key_dbt, data_dbt;
    emp_key ekey;
    emp_data edata;

    memset(&key_dbt, 0, sizeof(key_dbt));
    memset(&data_dbt, 0, sizeof(data_dbt));

    /*
     * Now make the key and data DBT's reference the key and data
     * variables.
     */
    key_dbt.data = &ekey;
    key_dbt.size = sizeof(ekey);
    data_dbt.data = &edata;
    data_dbt.size = sizeof(edata);

The main observation here is that a tuple in SQL is represented by a key/data pair, but the application is responsible for understanding how to interpret these pairs. With that as background, let's return to our discussion of secondary indexes. Since Berkeley DB does not understand the structure or schema of the data element in a key/data pair, it is going to need assistance from the application to identify the fields that we use as secondary indexes. This assistance is provided by the application by way of callback functions. The callback function takes a key/data pair and returns a DBT that references the value to be used as a secondary key. So, in order to create the secondary index on last_name, you must write a callback function that takes a key/data pair and returns a DBT referencing the last_name field of that data item.
You might define that structure as shown below: typedef struct _emp_data { char lname[20]; char fname[15]; float salary; char street[20]; char city[15]; char state[2]; int zip; } emp_data; And let's say that the employee ID is a simple integer: typedef int emp_key; DBT key_dbt, data_dbt; emp_key ekey; emp_data edata; memset(&key_dbt, 0, sizeof(key_dbt)); memset(&data_dbt, 0, sizeof(data_dbt)); /* * Now make the key and data DBT's reference the key and data * variables. */ key_dbt.data = &ekey; key_dbt.size = sizeof(ekey); data_dbt.data = &edata; data_dbt.size = sizeof(edata); The main observation here is that a tuple in SQL is represented by a key/data pair, but the application is responsible for understanding how to interpret these pairs. With that as background, let's return to our discussion of secondary indexes. Since Berkeley DB does not understand the structure or schema of the data element in a key/data pair, it is going to need assistance from the application to identify the fields that we use as secondary indexes. This assistance is provided by the application by way of callback functions. The callback function takes a key/data pair and returns a DBT that references the value to be used as a secondary key.So, in order to create the secondary index on last_name, you must write a callback function that takes a key/data pair and returns a DBT referencing the last_name field of that data item. int lname_callback(DB *dbp, const DBT *key, const DBT *data, DBT *skey) { emp_data *edata; /* * We know that the opaque byte-string represented by the data DBT * represents one of our emp_data structures, so let's cast it * to one of those so that we can manipulate it. */ edata = data->data; skey->data = edata->lname; skey->size = strlen((edata->lname); return (0); } Now that you've written your callback, you can specify a secondary index. 
Recall that a secondary index is simply a table—so let's start by creating a table: DB *sdbp; ASSERT(db_create(&sdbp, dbenv, 0) == 0); /* Configure sdbp. */ ASSERT(sdbp->open(sdbp, NULL, "emp_lname.db", NULL, DB_BTREE, DB_AUTO_COMMIT | DB_CREATE | DB_THREAD, 0644) == 0); Once again, you're using a B-tree structure to index lastnames, and we're keeping all the same flags and modes that you used before.Finally, you must associate your secondary index table with your main table (the employee table). Recall that dbp is the handle to the employee table and sdbp is the handle to the secondary index table. ASSERT(dbp->associate(dbp, NULL, sdbp, lname_callback, flags) == 0); Things to note: The last two operations in the DDL are the drop commands: drop index, drop table, and drop database. Just as you can drop indexes and delete tables in SQL, you can do the same in Berkeley DB. In SQL, you might say DROP TABLE employee DROP INDEX lname Before removing a table, all database handles on that table must be closed. Closing a table is easy; assume that we're going to drop the secondary index on the employee database. Let's first close that secondary: Dropping a table in SQL drops all the indexes associated with it, but in Berkeley DB, you have to do that explicitly. Fortunately, dropping tables or indexes are identical operations in Berkeley DB. sdbp->close(sdbp, 0) After issuing the close method on a database handle, the handle cannot be used again.Now that you've closed the secondary index table, you can remove it using the dbremove method off of the dbenv handle: DB_ENV *dbenv; ASSERT(dbenv->dbremove(dbenv, NULL, "emp_lname.db", NULL, DB_AUTO_COMMIT) == 0); The same sequence of calls (closing and dbremoving) can be used to drop tables as well. But let's say that you don't want to drop a table; we just want to change its name. 
You can do that too.As with remove, you must first close the table handles: dbp->close(dbp, 0); Now you can change the table's name: DB_ENV *dbenv; ASSERT(dbenv->dbrename(dbenv, NULL, "employee.db", NULL, "newemp.db", DB_AUTO_COMMIT) == 0); Finally, you might wish to destroy a database. In SQL, you would execute DROP DATABASE personnel This command also has an analogy in Berkeley DB. First, you have to close the environment. ASSERT(dbenv->close(dbenv, 0) == 0); As with closing table handles, once you close an environment handle, you can no longer use that handle. So, in order to drop the database, you'll need to create a new handle and then use that handle to remove the database (environment). ASSERT(db_env_create(&dbenv, 0) == 0); ASSERT(dbenv->remove(dbenv, "my_databases/personnel", 0) == 0);That wraps up our translation of SQL's DDL into Berkeley DB. Next, we'll explore how to translate SQL DML into Berkeley DB. Now that we've covered SQL's DDL and its implementation in Berkeley DB, now you'll begin adding data to databases, covering SQL's insert into, update, and delete. In SQL, you add data to tables using the insert statement: INSERT INTO employees VALUES (00010002, "mouse", "mickey", 1000000.00, "Main Street", "Disney Land", "CA", 98765); Let's assume that you have your table opened from last time and we have a database handle dbp that references the employee table. Now, hire Mickey Mouse. SQL inserts all become Berkeley DB put methods off of database or cursor handles; we'll start with databases and get to cursors later. DB *dbp; DBT key_dbt, data_dbt; emp_data edata; emp_key ekey; /* Put the value into the employee key. */ ekey = 00010002; /* Initialize an emp_data structure. */ strcpy(edata.lname, "Mouse"); strcpy(edata.fname, "Mickey"); edata.salary = 100 if you had associated any secondaries with the employee table, as in SQL, they would have been updated automatically when you did the insert. 
Now, let's say you have some data in your tables, and you want to change it. For example, let's give Mickey a raise! There are a couple of ways we can do this.The first method is identical to the insert code above—if you issue a put method on a table and the key already exists (and the table does not allow duplicate data values for a single key), then the put will replace the old version with the new. So, the following sequence will replace Mickey's record with one that gives Mickey a $2,000,000 salary instead of a $1,000,000 salary. /* Put the value into the employee key. */ ekey = 00010002; /* Initialize an emp_data structure. */ strcpy(edata.lname, "Mouse"); strcpy(edata.fname, "Mickey"); edata.salary = 200 this approach is cumbersome—in order to do this, you would have to know the value of all the other fields in the database. So, unlike UPDATE employees SET salary = 2000000 WHERE empid = 000100002 where you need only the employee ID, now you need everything. Isn't there a way to do this in Berkeley DB? The answer is yes. If you know exactly which bytes of a data item you wish to replace, you can do the equivalent of the update command. In order to do this, you'll need to introduce the notion of a cursor. A cursor represents a position in a table. It lets you iterate over the table and maintain the notion of a current item that you can then manipulate. DBC *dbc; DB *dbp; ASSERT(dbp->cursor(dbp, NULL, 0) == 0); Now that you have a cursor, we want to position it on Mickey's record so you can update it. This is equivalent to the WHERE part of the SQL statement.); Next we can change the salary (handle the "SET salary=2000000" part of the clause) /* Change the salary. */ edata = data_dbt->data; edata.salary = 2000000; Finally, apply the UPDATE portion of the SQL statement: dbc->c_put(dbc, &key_dbt, &data_dbt, DB_CURRENT); In this case, you did not know the contents of Mickey's record a priori, so you retrieved it and then updated it. 
Alternatively, you needn't even retrieve the record. The DB_DBT_PARTIAL flag value on DBTs indicates that you are getting/putting only part of a record, so that Berkeley DB can ignore everything except that part. Try it again: emp_data edata; float salary; /* We'd like to look up Mickey's key. */ emp_key = 0010002; memset(&key_dbt, 0, sizeof(key_dbt)); key_dbt.data = &emp_key; key_dbt.size = sizeof(emp_key); Instead of retrieving the entire record, don't retrieve anything—that is, perform a PARTIAL get, specifying that you only want 0 bytes of the data item. /* We don't want the data, we just want to position the cursor. */ memset(&data_dbt, 0, sizeof(data_dbt)); data_dbt->flags = DB_DBT_PARTIAL; data_dbt->dlen = 0; /* Position the cursor on Mickey's record */ dbc->c_get(dbc, &key_dbt, &data_dbt, DB_SET); /* * Now, prepare for a partial put. Note that the DBT has already * been initialized for partial operations. We need to specify * where in the data item we wish to place the new bytes and * how many bytes we'd like to replace. */ salary = 2000000.00; /* The DBT contains just the salary information. */ data_dbt->data = &salary; data_dbt->size = sizeof(salary); /* * dlen and doff tell Berkeley DB where to place this information * in the record. dlen indicates how many bytes we are replacing -- * in this case we're replacing the length of the salary field in * the structure (sizeof(emp_data.salary)). doff indicates where * in the data record we will place these new bytes -- we need to * compute the offset of the salary field. */ data_dbt->dlen = sizeof(emp_data.salary); data_dbt->doff = ((char *)&edata.salary - (char *)&edata); /* Now, put the record back with the new data. */ dbc->c_put(dbc, &key_dbt, &data_dbt, DB_CURRENT); Once you know how to put data in your tables, it's time to learn how to retrieve it. Let's start with the simplest approach: looking up values by their primary key . 
SELECT * FROM employees WHERE id=0010002); You used a cursor operation above because we wanted to then update the record. Let's say that all you want to do is retrieve the record; then you don't even need a cursor. All you need to do is use the get method off of the dbp handle:, use the dbp method. */ dbp->get(dbp, NULL, &key_dbt, &data_dbt, 0); So, this is also identical to the SELECT expression above. So far, you've always looked up a record by its primary key. But what if you don't know its key? Here are a few ways to find out: Let's examine each of these in more detail. As in SQL, retrieving by a secondary key is remarkably similar to retrieving by a primary key. In fact, the SQL query looks identical except for its where clause: SELECT * FROM employees WHERE last_name = "Mouse" Rather than use dbp, as shown in the primary key example, use the sdbp to look something up by its secondary key: The Berkeley DB call will look similar to its primary equivalent. DBT key_dbt, data_dbt; emp_data *edata; /* We'd like to look up by Mickey's last name. */ memset(&key_dbt, 0, sizeof(key_dbt)); key_dbt.data = "Mouse"; key_dbt.size = strlen((char *)key_dbt.data); /* * We want the data returned, so we don't need to initialize the * employee data data structure. */ memset(&data_dbt, 0, sizeof(data_dbt)); /* Now, call the get method. */ sdbp->get(sdbp, NULL, &key_dbt, &data_dbt, 0); The interesting thing here is to know what gets returned in the data_dbt. It is the data in the primary database—that is, you get the exact same thing returned in the data DBT regardless of whether you look up the item by its primary or its secondary key. However, you might notice that when you look up by secondary, the result is not quite the same as either retrieval by primary or the results of the SQL statement. What's missing is the primary key, because there isn't any place to return it. 
So, in fact, the code above actually implements SELECT last_name, first_name, salary, street, city, state, zip FROM employees WHERE last_name="Mouse" What if you need the primary key? The answer is that you use the dbp->pget or dbc->pget method. These are identical to the get methods except they are designed for secondary index queries when you want the primary key returned. So, in this case, the result includes the primary key, the secondary key, and the data element: DBT key_dbt, pkey_dbt, data_dbt; emp_data *edata; /* We'd like to look up by Mickey's last name. */ memset(&key_dbt, 0, sizeof(key_dbt)); key_dbt.data = "Mouse"; key_dbt.size = strlen((char *)key_dbt.data); /* Set up the dbt into which to return the primary. */ memset(&pkey_dbt, 0, sizeof(pkey_dbt)); /* * We want the data returned, so we don't need to initialize the * employee data data structure. */ memset(&data_dbt, 0, sizeof(data_dbt)); /* Now, get the record and the primary key. */ sdbp->pget(sdbp, NULL, &key_dbt, &pkey_dbt, &data_dbt, 0); SELECT * FROM employees WHERE last_name="Mouse" So far, you've returned only a single record. SQL lets you return multiple records (in other words, all employees with lastname Mouse). How might you do that in Berkeley DB? Let's consider two cases. In the first case, you'll look up a set of items by their key. In the second, you'll search the database looking for items by a non-keyed field.Let's say that you want to look up all your employees with the last name of Mouse (and we suspect that there might be many of them). This means that the secondary index on last_name would have been created allowing duplicates. Before opening the database, you would configure it for duplicate support: sdbp->set_flags(sdbp, DB_DUP); ASSERT(sdbp->open(sdbp, NULL, "emp_lname.db", NULL, DB_BTREE, DB_AUTO_COMMIT | DB_CREATE | DB_THREAD, 0644) == 0); Now, when you retrieve by this secondary, you probably want to use a cursor to do it. 
You would begin with the same code that you used before, but you would add a loop to iterate over all the items that share the same secondary key: DBT key_dbt, data_dbt; DBC *sdc; emp_data *edata; /* We'd like to look up by Mickey's last name. */ memset(&key_dbt, 0, sizeof(key_dbt)); key_dbt.data = "Mouse"; key_dbt.size = strlen((char *)key_dbt.data); /* * We want the data and primary key returned, so we need only * initialize the DBTs for them to be returned. */ memset(&data_dbt, 0, sizeof(data_dbt)); memset(&pkey_dbt, 0, sizeof(pkey_dbt)); /* Now, create a cursor. */ sdbp->cursor(sdbp, NULL, &sdbc, 0); /* Now loop over all items with the specified key. */ for (ret = sdbc->pget(sdbc, &key_dbt, &pkey_dbt, &data_dbt, DB_SET); ret == 0: ret = sdbc->pget(sdbc, &key_dbt, &pkey_dbt, &data_dbt, DB_NEXT_DUP) { /* Do per-record processing in here. */ } Another possible form of keyed iteration comes in the form of queries such as You initialize the cursor by asking it to find the first item with the specified key, and then you iterate over all the items in the database with the same key. SELECT * FROM employees WHERE id >= 1000000 AND id < 2000000 Once again, you'll use a cursor to iterate, but this time, you'll want to establish a starting and stopping point. Berkeley DB makes the starting point easy; the stopping point is left up to the application. DBT key_dbt, data_dbt; DBC *dc; emp_key ekey; /* Set the starting point. */ memset(&key_dbt, 0, sizeof(key_dbt)); ekey = 1000000; key_dbt.data = &ekey; key_dbt.size = sizeof(ekey); key_dbt.flags = DB_DBT_USERMEM; key_dbt.ulen = sizeof(ekey); memset(&data_dbt, 0, sizeof(data_dbt)); /* Now, create a cursor. */ dbp->cursor(dbp, NULL, &dbc, 0); /* Now loop over items starting with the low key. */ for (ret = dbc->get(dbc, &key_dbt, &data_dbt, DB_SET_RANGE); ret == 0: ret = dbc->get(dbc, &key_dbt, &data_dbt, DB_NEXT)) { /* Check if we are still in the range. */ if (ekey >= 2000000) break; /* Do per-record processing in here. 
*/ } The two main things to note are 1) that you begin the loop with the DB_SET_RANGE flag, which positions the cursor on the first item greater than or equal to the specified key, and 2) that the application must check for the end of the range inside the loop. Also, note that you set the DB_DBT_USERMEM flag in the key_dbt, indicating that the keys retrieved should be placed in the memory specified by the user. This lets you use the ekey variable to examine the key. Let's wrap up the select section with a query that returns one or more items whose evaluation criteria is not a keyed field. Consider SELECT * FROM employees WHERE state=ca Since there is no key on the state field, you have no choice but to iterate over the entire database. This translates into a simple cursor iteration loop. DBC *dbc; DBT key_dbt, data_dbt; emp_data *edata; dbp->cursor(dbp, &key_dbt, &data_dbt, &dbc, 0); memset(&key_dbt, 0, sizeof(key_dbt)); memset(&data_dbt, 0, sizeof(data_dbt)); for (ret = dbc->get(dbc, &key_dbt, &data_dbt, DB_FIRST); ret == 0; ret = dbc->get(dbc, &key_dbt, &data_dbt, DB_NEXT)) { /* See if the state field is "ca". */ edata = data_dbt->data; if (strcmp(edata->state, "ca") == 0) /* Keep this record. */ } This may seem inefficient, but if you do not have any indices on a field, you have no other option, and, in fact, this is precisely what your SQL database is doing internally when you specify a query that matches on an unkeyed field. You've learned how to insert and change data and how to retrieve it. The last thing we need to cover is how to remove data. There are fundamentally two different ways to delete tuples from a database: if you know a key for the item you wish to remove (and it's not one of a set of duplicates items for that key), then you can do a keyed delete. If you do not know the key, then you can iterate and use a cursor delete. Let's start with the simple case, firing Mickey Mouse. 
DELETE FROM employees WHERE id = 00010002

DBT key_dbt;
emp_key ekey;

/*
 * Note: write the id as a decimal literal; a leading zero
 * (0010002) would make it an octal constant in C.
 */
ekey = 10002;

memset(&key_dbt, 0, sizeof(key_dbt));
key_dbt.data = &ekey;
key_dbt.size = sizeof(ekey);

dbp->del(dbp, NULL, &key_dbt, 0);

You can also delete through the secondary index, firing everyone whose last name is Mouse:

DELETE FROM employees WHERE last_name = 'Mouse'

DBT key_dbt;

memset(&key_dbt, 0, sizeof(key_dbt));
key_dbt.data = "Mouse";
key_dbt.size = strlen((char *)key_dbt.data);

sdbp->del(sdbp, NULL, &key_dbt, 0);

But perhaps this is too harsh. Perhaps you didn't want to fire Mickey; you really only wanted to fire Minnie Mouse. Is there a way that we can easily fire Minnie? In other words, how do you do this:

DELETE FROM employees WHERE last_name = 'Mouse' AND first_name = 'Minnie'

DBT key_dbt, data_dbt;
DBC *sdbc;
emp_data *edata;

sdbp->cursor(sdbp, NULL, &sdbc, 0);

memset(&key_dbt, 0, sizeof(key_dbt));
memset(&data_dbt, 0, sizeof(data_dbt));
key_dbt.data = "Mouse";
key_dbt.size = strlen((char *)key_dbt.data);

for (ret = sdbc->get(sdbc, &key_dbt, &data_dbt, DB_SET);
    ret == 0;
    ret = sdbc->get(sdbc, &key_dbt, &data_dbt, DB_NEXT_DUP)) {
        edata = data_dbt.data;
        if (strcmp(edata->first_name, "Minnie") == 0) {
                /* OK, this is a record we want to delete. */
                sdbc->del(sdbc, 0);
        }
}

By this time, you should have an overview of how to write Oracle Berkeley DB functions to perform basic SQL commands. Berkeley DB also has a large number of options and configurations that provide more-complicated functionality. Now we'll cover just one more topic: enclosing database operations in transactions.

Let's review how transactions work in (most) SQL implementations. Whenever you issue a DML statement in SQL, it becomes part of the current transaction. Each subsequent statement also runs as part of that transaction. The current transaction commits either when the SQL session ends or when an application issues a COMMIT statement. At any point, a transaction may be aborted by issuing the ROLLBACK statement. Many SQL implementations also include an AUTOCOMMIT feature, where every DML statement is treated as its own transaction.
When AUTOCOMMIT mode is enabled, the sequence

statement 1
statement 2
statement 3

is executed as if it were

statement 1 COMMIT
statement 2 COMMIT
statement 3 COMMIT

Berkeley DB also lets you encapsulate database operations in transactions. Unlike SQL, you can also run Berkeley DB without transactions. In fact, unless you explicitly request transactions, you will be running without them. So, how do you tell Berkeley DB that you'd like to use transactions? If you recall, there are flags that can be specified when you open an environment:

dbenv->open(dbenv, "my_databases/personnel",
    DB_CREATE | DB_INIT_LOCK | DB_INIT_MPOOL |
    DB_INIT_TXN | DB_THREAD, 0644);

Those flags configure Berkeley DB for your application. In this case, transactions were enabled because the DB_INIT_TXN flag was specified. Had this flag been omitted, then the application would run without transactions.

Berkeley DB provides a feature analogous to SQL's AUTOCOMMIT. You can configure an entire database (environment) to always autocommit, using the set_flags method off of the environment handle:

dbenv->set_flags(dbenv, DB_AUTO_COMMIT, 1);

Alternatively, you can specify DB_AUTO_COMMIT on a database open, causing all subsequent operations to which you don't explicitly pass a transaction to be run in a transaction.

But let's say that you don't want autocommit; you want your application to be able to group operations into a logical transaction. For example, let's say that you want to add Mickey Mouse and also assign him a manager.

INSERT INTO employees VALUES (00010002, "mouse", "mickey",
    1000000.00, "Main Street", "Disney Land", "CA", 98765);
INSERT INTO manages VALUES (00000001, 00010002);
COMMIT

(The above indicates that the employee with id=00000001 will manage Mickey.) We'll assume that you know how to perform the data operations, and we'll focus on how to specify the transaction. First, you have to explicitly begin a transaction (unlike in SQL).
Creating a transaction is an environment operation, so it's a method off of the environment handle. This method will create a transaction handle (DB_TXN).

DB_TXN *txn;

dbenv->txn_begin(dbenv, NULL, &txn, 0);

Now that you have a transaction handle, you can pass it to any database operation that you want to be part of the transaction:

emp_dbp->put(emp_dbp, txn, &key, &data, 0);
man_dbp->put(man_dbp, txn, &key, &data, 0);

Then, you can either commit or abort the transaction by calling the appropriate method off of the transaction handle. To commit a transaction:

txn->commit(txn, 0);

To abort the transaction:

txn->abort(txn);

Both methods are destructors, rendering the transaction handle unusable.

Unlike SQL, Berkeley DB can transaction-protect DDL operations as well. Therefore, you can also pass DB_TXN handles to operations like dbenv->dbremove, dbenv->dbrename, and dbp->open(… DB_CREATE …). In each of these cases, the DDL operation will be performed in the context of the specified transaction, which means it can be committed and aborted, just like any other transaction.

Oracle Berkeley DB provides the same kinds of functionality that you find in a SQL database; however, it provides it in a very different package. You write programmatic code to call APIs, and the entire database is "embedded" directly in your application; that is, they run in the same address space. This usually provides an order-of-magnitude performance improvement, but achieving this benefit places a greater burden on the application. Berkeley DB is typically most useful when an application demands extraordinarily high performance or when the data manipulated by the application is not inherently relational.

Margo Seltzer was one of the original authors of Berkeley DB and co-founded Sleepycat Software. She is also the Herchel Smith Professor of Computer Science and a Harvard College Professor in the Harvard School of Engineering and Applied Sciences.
Her research interests include file systems, databases, and transaction processing systems.
https://www.oracle.com/technetwork/articles/embedded/seltzer-berkeleydb-sql-086752.html
06 March 2009 11:34 [Source: ICIS news]

LONDON (ICIS news)--Polypropylene (PP) prices in Europe are facing upward pressure this month, but buyers are resisting increases of €50/tonne ($63/tonne) against a backdrop of faltering demand and hesitation, sources agreed on Friday.

Spot PP prices were clearly up. Levels below €700/tonne FD (free delivered) NWE (northwest Europe), common in February with deals as low as €650/tonne, had disappeared from the European market.

Inventories were reported to be low down the chain. Producers had cut back, and some sources estimated that production was running at 80%. Strong export opportunities were also reported.

In spite of producers’ targets of €50/tonne hikes for March, it was not yet clear how much of the €42/tonne increase in upstream March propylene they would be able to recover. Only propylene-linked business was fully confirmed at an increase of €42/tonne over February.

“It won’t be €50/tonne. They will be lucky to get €30/tonne,” said a buyer who negotiated business freely.

Food packaging remained the most buoyant sector in the PP market, but even here production lines were sometimes cut. “We are lucky that we are in food packaging. People still have to eat. But even so, some of our lines are idle,” said one converter.

Sectors with links to the automotive industry were weak, and industrial applications were also flat.

Most players watched developments in Asia. In addition to the new capacity start-ups, Asian markets were also nervous about persistent bearish sentiment in the key Chinese market.

Sources said they expected European activity to pick up next week as more business was discussed. “There is a feeling that producers might get nervous soon and start retreating from their current plus €50/tonne positions,” said a trader, although producers said they were adamant that this would not happen.

($1 = €0.80)
http://www.icis.com/Articles/2009/03/06/9198057/europe-pp-sellers-push-for-increase-in-nervous-market.html
Opened 3 years ago
Closed 3 years ago
Last modified 3 years ago

#18772 closed Bug (fixed)

force_unicode needs to go through deprecation policy

Description

Looks like django.utils.text.force_unicode was removed recently, without providing a backwards-compatible shim. The method had been documented (and I use it at work) -- so imagine my surprise when updating my Django trunk code resulted in a hard breaking of my site. Apparently django.utils.encoding.force_text is the new thing going forward. I haven't been paying attention to the developments here, so I'm not comfortable writing a patch for something so fundamental to the framework... :-/

Change History (7)

comment:1 Changed 3 years ago by aaugustin

- Version changed from 1.4 to master

force_unicode is still there on Python 2 and there isn't any plan to deprecate it. However, by design, it doesn't exist on Python 3 because unicode doesn't exist at all on Python 3. You seem to assume a 100% backwards-compatibility guarantee when moving from Python 2 to Python 3. This isn't possible, because some Python 2 concepts or constructs don't exist in Python 3. In addition to the Python porting — for instance, renaming __unicode__ to __str__ everywhere — there will be some minor changes to Django libraries — for instance, renaming smart_unicode to smart_str. While we're making good progress on the code, the documentation remains to be written. We'll have to explain such requirements in the documentation before announcing that Python 3 is officially supported.

comment:2 Changed 3 years ago by adrian

- Resolution set to invalid
- Status changed from new to closed

Sorry I was unclear -- I'm on Python 2 (not Python 3), and the following results in an ImportError on current master:

from django.utils.text import force_unicode

I can confirm django.utils.encoding.force_unicode exists, but for some reason my legacy code is using django.utils.text, not django.utils.encoding. Looks like I was importing it from an incorrect place here, and it was working only by chance (django.utils.text imported force_unicode itself, putting it in the module-level namespace). So it seems like this is NOT a problem for people who have imported force_unicode from the correct place. :-) I'll close it -- sorry for the noise!

comment:3 Changed 3 years ago by aaugustin

Oh, sorry, I missed the django.utils.text.force_unicode in your original report.

comment:4 Changed 3 years ago by adrian

- Resolution invalid deleted
- Status changed from closed to reopened

You know, I'm looking through some third-party libraries, and a fair number of them have from django.utils.text instead of from django.utils.encoding when importing force_unicode. It might be worth adding a from django.utils.encoding import force_unicode to the top of django/utils/text.py so that it doesn't break.

comment:5 Changed 3 years ago by aaugustin

That makes sense. If the problem bit you it may just bite anyone. Do you know if force_unicode used to live in django.utils.text?

comment:6 Changed 3 years ago by adrian

- Resolution set to fixed
- Status changed from reopened to closed

Looks like it never lived in django.utils.text. Here's the file from five years ago, when the unicode branch was merged: So it's purely my own stupidity for importing it from there. Still, I've seen some other bits of code rely on that, so I just added the import (along with a comment) in django.utils.text.
https://code.djangoproject.com/ticket/18772