Matplotlib: gridding irregularly spaced data

A commonly asked question on the matplotlib mailing lists is "how do I make a contour plot of my irregularly spaced data?". The answer is: first you interpolate it onto a regular grid. As of version 0.98.3, matplotlib provides a griddata function that behaves similarly to the MATLAB version. It performs "natural neighbor interpolation" of irregularly spaced data onto a regular grid, which you can then plot with contour, imshow or pcolor.

Example 1

This requires SciPy 0.9:

import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
from numpy.random import uniform, seed

# make up some randomly distributed data
seed(1234)
npts = 200
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x * np.exp(-x**2 - y**2)

# define the grid
xi = np.linspace(-2.1, 2.1, 100)
yi = np.linspace(-2.1, 2.1, 200)

# grid the data
zi = griddata((x, y), z, (xi[None, :], yi[:, None]), method='cubic')

# contour the gridded data
plt.contour(xi, yi, zi, 15, linewidths=0.5, colors='k')
plt.contourf(xi, yi, zi, 15)
plt.colorbar()
plt.scatter(x, y, marker='o', s=5)
plt.title('griddata test (%d points)' % npts)
plt.show()

Example 2

import numpy as np
from matplotlib.mlab import griddata
import matplotlib.pyplot as plt
from numpy.random import uniform, seed

# make up some randomly distributed data
seed(1234)
npts = 200
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x * np.exp(-x**2 - y**2)

# define the grid and grid the data
xi = np.linspace(-2.1, 2.1, 100)
yi = np.linspace(-2.1, 2.1, 200)
zi = griddata(x, y, z, xi, yi)

plt.contour(xi, yi, zi, 15, linewidths=0.5, colors='k')
plt.contourf(xi, yi, zi, 15)
plt.colorbar()
plt.show()

By default, griddata uses the scikits delaunay package (included in matplotlib) to do the natural neighbor interpolation. Unfortunately, the delaunay package is known to fail for some nearly pathological cases. If you run into one of those cases, you can install the matplotlib natgrid toolkit. Once that is installed, the griddata function will use it instead of delaunay to do the interpolation. The natgrid algorithm is a bit more robust, but cannot be included in matplotlib proper because of licensing issues.

The radial basis function module in the scipy sandbox can also be used to interpolate/smooth scattered data in n dimensions. See ["Cookbook/RadialBasisFunctions"] for details.

Example 3

A less robust but perhaps more intuitive method is presented in the code below. This function takes three 1D arrays, namely two independent data arrays and one dependent data array, and bins them into a 2D grid.
On top of that, the code also returns two other grids: one where each binned value represents the number of points in that bin, and another in which each bin contains the indices of the original dependent array which fall in that bin. These can be further used for interpolation between bins if necessary. This is essentially an Occam's razor approach to the matplotlib.mlab griddata function, as both produce similar results.

# griddata.py - 2010-07-11 ccampo
import numpy as np

def griddata(x, y, z, binsize=0.01, retbin=True, retloc=True):
    """
    Place unevenly spaced 2D data on a grid by 2D binning (nearest
    neighbor interpolation).

    Parameters
    ----------
    x : ndarray (1D)
        The independent data x-axis of the grid.
    y : ndarray (1D)
        The independent data y-axis of the grid.
    z : ndarray (1D)
        The dependent data in the form z = f(x,y).
    binsize : scalar, optional
        The full width and height of each bin on the grid.  If each
        bin is a square, then this is the x and y dimension.  This is
        the step in both directions, x and y.  Defaults to 0.01.
    retbin : boolean, optional
        Function returns `bins` variable (see below for description)
        if set to True.  Defaults to True.
    retloc : boolean, optional
        Function returns `wherebin` variable (see below for
        description) if set to True.  Defaults to True.

    Returns
    -------
    grid : ndarray (2D)
        The evenly gridded data.  The value of each cell is the median
        value of the contents of the bin.
    bins : ndarray (2D)
        A grid the same shape as `grid`, except the value of each cell
        is the number of points in that bin.  Returned only if
        `retbin` is set to True.
    wherebin : list (2D)
        A 2D list the same shape as `grid` and `bins` where each cell
        contains the indices of `z` which fall in the particular bin.
        Returned only if `retloc` is set to True.

    Revisions
    ---------
    2010-07-11  ccampo  Initial version
    """
    # get extrema values.
    xmin, xmax = x.min(), x.max()
    ymin, ymax = y.min(), y.max()

    # make coordinate arrays.
    xi = np.arange(xmin, xmax+binsize, binsize)
    yi = np.arange(ymin, ymax+binsize, binsize)
    xi, yi = np.meshgrid(xi, yi)

    # make the grid.
    grid = np.zeros(xi.shape, dtype=x.dtype)
    nrow, ncol = grid.shape
    if retbin:
        bins = np.copy(grid)

    # create list in same shape as grid to store indices
    if retloc:
        wherebin = np.copy(grid)
        wherebin = wherebin.tolist()

    # fill in the grid.
    for row in range(nrow):
        for col in range(ncol):
            xc = xi[row, col]    # x coordinate.
            yc = yi[row, col]    # y coordinate.

            # find the position that xc and yc correspond to.
            posx = np.abs(x - xc)
            posy = np.abs(y - yc)
            ibin = np.logical_and(posx < binsize/2., posy < binsize/2.)
            ind = np.where(ibin == True)[0]

            # fill the bin.
            bin = z[ibin]
            if retloc:
                wherebin[row][col] = ind
            if retbin:
                bins[row, col] = bin.size
            if bin.size != 0:
                binval = np.median(bin)
                grid[row, col] = binval
            else:
                grid[row, col] = np.nan   # fill empty bins with nans.

    # return the grid
    if retbin:
        if retloc:
            return grid, bins, wherebin
        else:
            return grid, bins
    else:
        if retloc:
            return grid, wherebin
        else:
            return grid

The following example demonstrates a usage of this method.

import numpy as np
import matplotlib.pyplot as plt
import griddata

npr = np.random
npts = 3000                               # the total number of data points.
x = npr.normal(size=npts)                 # create some normally distributed dependent data in x.
y = npr.normal(size=npts)                 # ... do the same for y.
zorig = x**2 + y**2                       # z is a function of the form z = f(x, y).
noise = npr.normal(scale=1.0, size=npts)  # add a good amount of noise
z = zorig + noise                         # z = f(x, y) = x**2 + y**2

# plot some profiles / cross-sections for some visualization.  our
# function is a symmetric, upward opening paraboloid z = x**2 + y**2.
# we expect it to be symmetric about x and y, attain a minimum at
# the origin and display minor Gaussian noise.

plt.ion()   # pyplot interactive mode on

# x vs z cross-section.  notice the noise.
plt.plot(x, z, '.')
plt.title('X vs Z=F(X,Y=constant)')
plt.xlabel('X')
plt.ylabel('Z')

# y vs z cross-section.
# notice the noise.
plt.plot(y, z, '.')
plt.title('Y vs Z=F(Y,X=constant)')
plt.xlabel('Y')
plt.ylabel('Z')

# now show the dependent data (x vs y).  we could represent the z data
# as a third axis by either a 3d plot or contour plot, but we need to
# grid it first....
plt.plot(x, y, '.')
plt.title('X vs Y')
plt.xlabel('X')
plt.ylabel('Y')

# enter the gridding.  imagine drawing a symmetrical grid over the
# plot above.  the binsize is the width and height of one of the grid
# cells, or bins in units of x and y.
binsize = 0.3
grid, bins, binloc = griddata.griddata(x, y, z, binsize=binsize)  # see this routine's docstring

# minimum values for colorbar.  filter out nans which are in the grid
zmin = grid[np.where(np.isnan(grid) == False)].min()
zmax = grid[np.where(np.isnan(grid) == False)].max()

# colorbar stuff
palette = plt.matplotlib.colors.LinearSegmentedColormap('jet3', plt.cm.datad['jet'], 2048)
palette.set_under(alpha=0.0)

# plot the results.  first plot is x, y vs z, where z is a filled level plot.
extent = (x.min(), x.max(), y.min(), y.max())  # extent of the plot
plt.subplot(1, 2, 1)
plt.imshow(grid, extent=extent, cmap=palette, origin='lower',
           vmin=zmin, vmax=zmax, aspect='auto', interpolation='bilinear')
plt.xlabel('X values')
plt.ylabel('Y values')
plt.title('Z = F(X, Y)')
plt.colorbar()

# now show the number of points in each bin.  since the independent data are
# Gaussian distributed, we expect a 2D Gaussian.
plt.subplot(1, 2, 2)
plt.imshow(bins, extent=extent, cmap=palette, origin='lower',
           vmin=0, vmax=bins.max(), aspect='auto', interpolation='bilinear')
plt.xlabel('X values')
plt.ylabel('Y values')
plt.title('X, Y vs The No. of Pts Per Bin')
plt.colorbar()

The binned data:

Raw data superimposed on top of binned data:

Section author: AndrewStraw, Unknown[95], Unknown[55], Unknown[96], Unknown[97], Unknown[13], ChristopherCampo, PauliVirtanen, WarrenWeckesser, Unknown[98]
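For a quick sanity check of the binning-median idea above, here is a tiny self-contained version. This is a sketch, not the cookbook function itself: it only returns the grid of medians, and the function name `bin_median_2d` is mine.

```python
import numpy as np

def bin_median_2d(x, y, z, binsize):
    # compact stand-in for the cookbook routine above: the median of z
    # over each cell of a regular 2D grid of (binsize x binsize) bins
    xi = np.arange(x.min(), x.max() + binsize, binsize)
    yi = np.arange(y.min(), y.max() + binsize, binsize)
    grid = np.full((len(yi), len(xi)), np.nan)
    for r, yc in enumerate(yi):
        for c, xc in enumerate(xi):
            sel = (np.abs(x - xc) < binsize / 2) & (np.abs(y - yc) < binsize / 2)
            if sel.any():
                grid[r, c] = np.median(z[sel])
    return grid

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = rng.normal(size=500)
z = x**2 + y**2
grid = bin_median_2d(x, y, z, binsize=0.5)
print(grid.shape)
```

Since z = x**2 + y**2 is non-negative, every binned median is non-negative, and bins near the origin should hold the smallest values, mirroring the paraboloid cross-sections shown earlier.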
http://scipy-cookbook.readthedocs.io/items/Matplotlib_Gridding_irregularly_spaced_data.html
Suppose you have a mass suspended by the combination of a spring and a rubber band. A spring resists being compressed but a rubber band does not. So the rubber band resists motion as the mass moves down but not as it moves up. In [1] the authors use this situation to motivate the following differential equation:

    y'' + 0.01 y' + a y⁺ − b y⁻ = 10 + λ sin(μt)

where

    y⁺ = max(y, 0),   y⁻ = max(−y, 0).

If a = b then we have a linear equation, an ordinary damped, driven harmonic oscillator. But the asymmetry of the behavior of the rubber band causes a and b to be unequal, and that's what makes the solutions interesting. For some parameters the system exhibits essentially sinusoidal behavior, but for other parameters the behavior can become chaotic.

Here's an example of complex behavior. Here's the Python code that produced the plot.

from numpy import linspace, sin
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def pos(x): return max(x, 0)
def neg(x): return max(-x, 0)

a, b, λ, μ = 17, 1, 15.4, 0.75

def system(t, z):
    y, yp = z   # yp = y'
    return [yp, 10 + λ*sin(μ*t) - 0.01*yp - a*pos(y) + b*neg(y)]

t = linspace(0, 100, 300)

sol = solve_ivp(system, [0, 100], [1, 0], t_eval=t)
plt.plot(sol.t, sol.y[0])
plt.xlabel("$t$")
plt.ylabel("$y$")

In a recent post I said that I never use non-ASCII characters in programming, but in the code above I did. In particular, it was nice to use λ as a variable; you can't use lambda as a variable name because it's a reserved keyword in Python.

Update: Here's a phase portrait for the same system.

More posts on differential equations
- Ten life lessons from differential equations
- Maximum principles for boundary value problems
- Self-curvature

[1] L. D. Humphreys and R. Shammas. Finding Unpredictable Behavior in a Simple Ordinary Differential Equation. The College Mathematics Journal, Vol. 31, No. 5 (Nov., 2000), pp. 338-346

2 thoughts on "A spring, a rubber band, and chaos"

What's funny is I didn't even notice the use of non-ascii until you pointed it out.

Hello Interesting post.
I was inspired to implement the model in EES Engineering Equation Solver. See.
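The update in the post mentions a phase portrait for the same system. A minimal sketch of how one might produce it, assuming the same parameters as the post and plotting y' against y (here λ and μ are spelled `lam` and `mu`; the trajectory is sampled more densely than in the post so the curve is smooth):

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, so this runs headless
import matplotlib.pyplot as plt

a, b, lam, mu = 17, 1, 15.4, 0.75

def system(t, z):
    y, yp = z  # yp = y'
    return [yp, 10 + lam*np.sin(mu*t) - 0.01*yp - a*max(y, 0) + b*max(-y, 0)]

t = np.linspace(0, 100, 2000)
sol = solve_ivp(system, [0, 100], [1, 0], t_eval=t)

# phase portrait: position on one axis, velocity on the other
plt.plot(sol.y[0], sol.y[1], linewidth=0.5)
plt.xlabel("$y$")
plt.ylabel("$y'$")
plt.savefig("phase_portrait.png")
```

For chaotic parameter choices the curve fills a region of the (y, y') plane instead of settling onto a closed loop.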
https://www.johndcook.com/blog/2020/04/26/spring-rubberband-chaos/
SCHED_SETSCHEDULER(2)        Linux Programmer's Manual        SCHED_SETSCHEDULER(2)

NAME
    sched_setscheduler, sched_getscheduler - set and get scheduling policy/parameters

SYNOPSIS
    #include <sched.h>

    int sched_setscheduler(pid_t pid, int policy, const struct sched_param *param);
    int sched_getscheduler(pid_t pid);

DESCRIPTION
    The sched_setscheduler() system call sets both the scheduling policy and parameters for the thread whose ID is specified in pid. If pid equals zero, the scheduling policy and parameters of the calling thread will be set.

    Since Linux 2.6.32, the SCHED_RESET_ON_FORK flag can be ORed in policy when calling sched_setscheduler(). As a result of including this flag, children created by fork(2) do not inherit privileged scheduling policies. See sched(7) for details.

    sched_getscheduler() returns the current scheduling policy of the thread identified by pid. If pid equals zero, the policy of the calling thread will be retrieved.

RETURN VALUE
    On success, sched_setscheduler() returns zero. On success, sched_getscheduler() returns the policy for the thread (a nonnegative integer). On error, both calls return -1, and errno is set appropriately.

CONFORMING TO
    POSIX.1-2001, POSIX.1-2008 (but see BUGS below). The SCHED_BATCH and SCHED_IDLE policies are Linux-specific.

NOTES
    Further details of the semantics of all of the above "normal" and "real-time" scheduling policies can be found in the sched(7) manual page. That page also describes an additional policy, SCHED_DEADLINE, which is settable only via sched_setattr(2).

BUGS
    POSIX.1 says that on success, sched_setscheduler() should return the previous scheduling policy. Linux sched_setscheduler() does not conform to this requirement, since it always returns 0 on success.

COLOPHON
    This page is part of release 4.16 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Linux 2017-09-15 SCHED_SETSCHEDULER(2) Pages that refer to this page: chrt(1), getrlimit(2), gettid(2), mlock(2), nanosleep(2), prctl(2), sched_get_priority_max(2), sched_setaffinity(2), sched_setattr(2), sched_setparam(2), syscalls(2), posix_spawn(3), proc(5), systemd.exec(5), capabilities(7), cpuset(7), credentials(7), sched(7)
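On Linux, the same pair of calls is exposed in Python's os module, which makes a quick experiment easy. This is a sketch: switching to a real-time policy such as SCHED_FIFO normally requires privilege (CAP_SYS_NICE), so the set call is expected to fail with EPERM for an ordinary user.

```python
import os

# pid 0 means "the calling thread/process", exactly as in the C API
policy = os.sched_getscheduler(0)
print("current policy:", policy, "(SCHED_OTHER is", os.SCHED_OTHER, ")")

try:
    # attempt to switch to SCHED_FIFO with static priority 1; an
    # unprivileged process typically gets PermissionError (EPERM)
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(1))
    print("now running under SCHED_FIFO")
except PermissionError:
    print("no privilege to set a real-time policy")
```

Note that, matching the BUGS section above, a successful set reports nothing useful about the previous policy; you must query it yourself first with sched_getscheduler().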
http://www.man7.org/linux/man-pages/man2/sched_setscheduler.2.html
I was implementing a distutils setup script when I ran into a problem defining permissions for the so-called data files. As I hadn't found anything in the documentation about this, the remaining alternative was to look at the source code (in the distutils.command.install_data and distutils.cmd modules) to see how these things were expected to work. To my surprise, permissions seemed not to be supported by distutils at all, and data files were supposed to be installed in a non-restricted mode, i.e., 0777. So good, this liberal mode could be my solution, although I don't like the idea of being so limited in this way. But here was the question: although mode 0777 was being used as the default, the data files were still being written with mode 0755 (actually, it depends on the umask of the user running the installer)... So my last attempt was to look in depth at the remaining related modules, and I found out that distutils implements its own "makedirs" in distutils.dir_util.mkpath, and this function was completely ignoring its "mode" parameter (the one which defaults to 0777)! Finally I had found the culprit. I moved ahead, created a patch, and reported an issue on Python's bug tracking system. Hmm, but the problem hadn't been fixed: even with mkpath using its mode parameter now, the files/directories were still being saved with modes other than 0777. The problem here is more complicated; it seems to be related to how Python uses the mkdir system call: depending on the compiler directives, it doesn't pass the mode parameter to mkdir... I don't know why it is this way, and I think we've gone too deep; the Python core developers should have some good reason, and it is out of the scope of this post. So I went with the less attractive solution (IMO): extending the distutils install_data command. To do this we need to know a bit about how the commands are structured and executed. The idea is quite simple: every distutils command is a Python module available through the distutils.command package.
In this package, there is a module for each command, and each module has a class with the same name, so we do:

from distutils.command.install_data import install_data

class MyInstallData(install_data):
    pass

Each class must implement a method run(), which is the place to look to see how the command does its work. For the install_data command, the operations come down to copying files and creating directories (through the copy_file() and mkpath() methods of the Command superclass). The mkpath() was the problem, so it is what needs to be extended:

import os
from distutils.cmd import Command
from distutils.command.install_data import install_data

class MyInstallData(install_data):
    def mkpath(self, name, mode=0777, verbose=0, dry_run=0):
        rv = Command.mkpath(self, name, mode, verbose, dry_run)
        os.chmod(name, mode)   # force the permissions we asked for
        return rv

When a path is created, I force the chmod to fix the permissions. Problem fixed. Not so good a solution, but "it works" (tm). Ah! To use our custom install_data command with setup, we just specify the cmdclass parameter:

setup(
    name="package name",
    author="foo",
    ...
    cmdclass={"install_data": MyInstallData},
)

that's it.
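The umask interaction described in the post is easy to demonstrate: even when mkdir is asked for mode 0777, the bits set in the process umask are cleared. A small sketch using a temporary directory:

```python
import os
import stat
import tempfile

old = os.umask(0o022)              # a typical default umask
try:
    base = tempfile.mkdtemp()
    path = os.path.join(base, "data")
    os.mkdir(path, 0o777)          # request 0777...
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))               # ...but the umask bits were masked off
finally:
    os.umask(old)                  # restore the original umask
```

With umask 0o022 the directory ends up 0o755, which is exactly the 0777-requested / 0755-observed mismatch the post investigates; the post-creation os.chmod() works because chmod is not filtered through the umask.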
http://www.advogato.org/person/henrique/diary/0.html
#include <openssl/buffer.h>

BUF_MEM *BUF_MEM_new(void);
void BUF_MEM_free(BUF_MEM *a);
int BUF_MEM_grow(BUF_MEM *str, int len);
char *BUF_strdup(const char *str);

The library uses the BUF_MEM structure defined in buffer.h:

typedef struct buf_mem_st {
    int length;   /* current number of bytes */
    char *data;
    int max;      /* size of buffer */
} BUF_MEM;

length is the current size of the buffer in bytes; max is the amount of memory allocated to the buffer. There are three functions which handle these and one "miscellaneous" function.

BUF_MEM_new() allocates a new buffer of zero size.

BUF_MEM_free() frees up an already existing buffer. The data is zeroed before freeing in case the buffer contains sensitive data.

BUF_MEM_grow() changes the size of an already existing buffer to len. Any data already in the buffer is preserved if it increases in size.

BUF_strdup() copies a null-terminated string into a block of allocated memory and returns a pointer to the allocated block. Unlike the standard C library strdup(), this function uses OPENSSL_malloc() and so should be used in preference to the standard library strdup() because it can be used for memory leak checking or replacing the malloc() function. The memory allocated by BUF_strdup() should be freed up using the OPENSSL_free() function.

BUF_MEM_free() has no return value. BUF_MEM_grow() returns zero on error or the new size (i.e. len).
http://www.linuxmanpages.com/man3/buffer.3.php
Well, the best way to solve this is to store the minutes separately as well. But you can get around this with the aggregation framework, although that is not going to be very fast:

db.so.aggregate( [
    { $project: {
        loc: 1, vid: 1, datetime_recorded: 1,
        minutes: { $add: [
            { $multiply: [ { $hour: '$datetime_recorded' }, 60 ] },
            { $minute: '$datetime_recorded' }
        ] }
    } },
    { $match: { 'minutes' : { $gte : 12 * 60, $lt : 16 * 60 } } }
] );

In the first step, $project, we calculate the minutes from hour * 60 + min, which we then match against in the second step, $match.

A persistent binary search tree may be used here.

Pre-processing: Create an empty persistent tree. It should store intervals sorted by their ending points. Sort the intervals by their starting points. For each interval, starting from the end of the sorted list, create a "copy" of the persistent tree and add this interval to this copy.

Search: Find the starting point of the query interval in the sorted list. Iterate the corresponding "copy" of the persistent tree from the smallest key to the end of the query interval.

Search time complexity is O(log(n) + m), where m is the number of elements in the output.

Use a ready-made interpolation routine. If you really want nearest neighbor behavior, I think it will have to be scipy's scipy.interpolate.interp1d, but linear interpolation seems a better option, and then you could use numpy's numpy.interp:

def trailing_diff(time, data, diff):
    ret = np.zeros_like(data)
    mask = (time - time[0]) >= diff
    ret[mask] = data[mask] - np.interp(time[mask] - diff, time, data)
    return ret

time = np.arange(10) + np.random.rand(10)/2
weight = 82 + np.random.rand(10)

>>> time
array([ 0.05920317,  1.23000929,  2.36399981,  3.14701595,  4.05128494,
        5.22100886,  6.07415922,  7.36161563,  8.37067107,  9.11371986])
>>> weight
array([ 82.14004969,  82.36214992,  82.25663272,  82.33764514,

You can implement the functionality that you need in a Lua script.
When evaluating Lua script Redis loads cjson library among others and this library allows you to parse your JSON to extract values from it. See the EVAL command. A code sample from: json_text = '[ true, { "foo": "bar" } ]' value = cjson.decode(json_text) -- Returns: { true, { foo = "bar" } } Mind that Redis evaluates scripts one at a time, and no other clients can run their commands while a script is running, so this might now be suitable for you. It's a difficult question and I am not sure if I can give a definite answer but I have experience with both HDF5/pyTables and some NoSQL databases. Here are some thoughts. HDF5 per se has no notion of index. It's only a hierarchical storage format that is well suited for multidimensional numeric data. It's possible to extend on top of HDF5 to implement an index (i.e. PyTables, HDF5 FastQuery) for the data. HDF5 (unless you are using the MPI version) does not support concurrent write access (read access is possible). HDF5 supports compression filters which can - unlike popular belief - make data access actually faster (however you have to think about proper chunk size which depends on the way you access the data). HDF5 is no database. MongoDB has ACID properties, HDF5 doesn't (might be A far as I know it's only possible to make such a query on the top-most model (in this case Course); I have not seen a way to directly obtain an embedded model like that yet. So this should work, but only gives you the Course: Course.where(name: 'Course1', 'subjects.short_name' => 'Maths', 'subjects.topics.name' => "Algebra 101") And this is the (unfortunately rather ugly) query that should give you what you want: Course.where(name: 'Course1').subjects.where(short_name: 'Maths').topics.where(name: 'Algebra 101') I suggest you to change the schema, so that this someIdAsString text becomes a value instead of a key, making the object in test become a list of objects. 
If you know every key, you can try

db.sample.find({$or: [
    {"test.someIdAsString.field1": value1, "test.someIdAsString.field2": value2},
    {"test.someOtherIdAsString.field1": value1, "test.someOtherIdAsString.field2": value2},
    ...
]})

for all your "someIdAsString" possibilities. If you change the structure to:

{
    _id : ObjectId("someObjectId"),
    test : [
        { _id : someIdAsString, field1 : value1, field2 : value2 },
        { _id : someOtherIdAsString, field1 : value3, field2 : value4 },
        ...
    ]
}

You()

Yes, you can query on the DbRef fields, but not the way you are doing it. DbRef is a small sub-document which contains two fields:

$ref - the referenced collection
$id - the _id value of a document in that referenced collection

(actually there is a third field, $db, if the reference is to a different db). So, using the shell you can only ask for contacter.$id (which returns the ObjectId in the users collection) or $ref, but you can't query on something such as contacter.isActive, as this is a field of the user, not the ref, and the shell doesn't fetch the user. If you are using the Java driver, both Contacter and Contactee are represented as com.mongodb.DBRef, which has a method fetch() to retrieve the DBObject (user). If using spring-data-mongodb, you might want to have a class such as: class

db.collection.find( {
    "query": { $elemMatch: {
        "filterId": "5215b40c0ff5fa111e000001",
        "subfilterId": "60728003610375795"
    } }
} );

You are most probably looking for elemMatch. Check out the docs.

I didn't understand why you're querying for data.channelId and data.data.topic if you want to find MoDBIAGlobals by id and channelId. And your data modeling is quite confusing too. Anyway, it seems that your query does not match your document structure. The fields data.data.topic and data.channelId do not exist.
Try to fix it by replacing with the code below:

MoDBIA_DAO dao = new MoDBIA_DAO(mongo, morphia, DB_Name);
Datastore dataStore = morphia.createDatastore(mongo, DB_Name);
Query<MoDBIAGlobals> query = dataStore.createQuery(MoDBIAGlobals.class).disableValidation();
query.field("data.FACEBOOK.channelId").equal("FB1234");
query.field("data.FACEBOOK.data.topic.NO_TOPIC").equal("NO_TOPIC");
QueryResults<MoDBIAGlobals> results = dao.find(query);
System.out.println("results: " + res

Off the top of my head. You can tweak as necessary.

$start = new DateTime('2013-08-14 09:00:00');
$end = new DateTime('2013-08-14 17:00:00');
$interval = new DateInterval('PT30M');
$period = new DatePeriod($start, $interval, $end);

foreach ($period as $dt) {
    // do something
    echo $dt->format('H:iA');
}

Links: DateTime, DateInterval, DatePeriod

You want to convert the cursor returned from the find() function to something json_encode can actually use, like so:

$cursor = $collection->find(array("field2.subfield2" => "value 2"));
echo json_encode(iterator_to_array($cursor, false));

This is because the query is not run until you iterate the cursor. iterator_to_array basically will exhaust the cursor, get all documents, and put them into an array for json_encode.

Edit: Specifying false as the second $use_keys argument to iterator_to_array() will ensure that the results are indexed numerically (instead of by the _id field of each document).

You are looking for projection, and this is it. It's an optional parameter that specifies the fields to return using projection operators, and it's boolean: 1 - show & 0 - hide.

db.customers.findOne({"users.mail": "mail@address.org"}, {users: 1})

It's not secure to use findOne({"users.mail": "mail"}) for authentication; there is a special Node.js module for that, PassportJS.

You can use the Aggregate method.
var sum = (from twh in db.MytimeMaster where ((twh.date >= lstsun && twh.date <= tilldate) && (twh.agentID == agentid)) select twh.totalworkinghours).Aggregate(TimeSpan.FromMinutes(0), (total, next) => total + next); p.s. assume used TimeSpan for time intervals. I'm pretty certain you are getting confused. Your findOne call will do an exact, case-sensitive match. To do partial matches or case-insensitive you need to use a RegExp instance not a basic string. My suggestion is to make a simple test case separate from your app, and my suspicion is the DB is not behaving the way you think it is. Just create a Comparator<Interval> which compares by start times: public class IntervalStartComparator implements Comparator<Interval> { @Override public int compare(Interval x, Interval y) { return x.getStart().compareTo(y.getStart()); } } Then sort using that: Collections.sort(intervals, new IntervalStartComparator()); I would avoid adding a timer to every object. Perhaps you can have a separate thread which is responsible for initiating updates on your objects. You can use Parallel.ForEach to run concurrent updates on all of your if you think this won't cause concurrency issues. For example, something like: Thread updateThread = new Thread(updateLoop); IEnumerable<Updateable> _updateableObjects; public static void Main() { updateThread.Start(); } private static function UpdateLoop() { while (true) { Parallel.ForEach(_updateableObjects, obj => obj.Update()); Thread.Sleep(1000); } } I would consider changing the data format to [{start: new Date(2013, 2, 4, 0), end: new Date(2013, 2, 4, 8)}, {start: new Date(2013, 2, 4, 22), end: new Date(2013, 2, 5, 2)}, {start: new Date(2013, 2, 5, 5), end: new Date(2013, 2, 7, 5)}] Since you have the start and end date, you don't really need a duration. Alternatively you could have just the start date and a duration. 
I'm not extremely familiar with the stack layout, but it might be sufficient (and easier) for this project to simply append rect elements at the right position. I made an example here: which doesn't take into account the fact that you need to wrap events that start one day and end the next. This just displays all events in the same column, with t

The problem with having your array in PHP is that only PHP can access it, i.e. JavaScript doesn't know what URLs to load. So if you're intent on keeping the PHP, go with option 1 below; otherwise scrap the PHP and go with option 2.

Option 1: You'll have to use ajax on the main view, and keep the PHP separate. On the PHP page, instead of outputting the image, output the image info as a JSON object, e.g.

$img = array_rand($img_rand);
header('Content-type: application/json');
echo json_encode($img);

and then on the page where you're displaying everything, use JavaScript (I'm using jQuery since it makes ajax way easier) to load the image:

<div class="random-img"></div>
<script type="text/javascript">
imgLoop = function() {
    $.get('randomimage.php', function(img) {

You can use the $exists operator and dot notation to do this, but you need to build up your query dynamically, like this (in the shell):

var user = 'abc';
var query = {};
query['user_details.' + user] = { $exists: true };
db.coll.find(query);

You just need to offset all of the dates before grouping.

TimeSpan offset = startTime.TimeOfDay;
TimeSpan interval = TimeSpan.FromMinutes(45);
var selected = from date in item.Dates
               group date by ((date.Ticks - offset.Ticks) / interval.Ticks) into g
               select g;

You need to self-join the table and look for conflicts between pairs:

SELECT t1.id, t1.start, t1.end
FROM table_name t1
JOIN table_name t2
WHERE t1.start < t2.end
  AND t2.start < t1.end
  AND t1.id <> t2.id
  AND t1.date = t2.date

You can use a SELECT DISTINCT or a GROUP BY to trim the duplicates in the output.
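The self-join condition above (t1.start < t2.end AND t2.start < t1.end) is the standard overlap test for half-open intervals; the same predicate in Python, as a sketch with made-up ids and times:

```python
def overlaps(a, b):
    # (start, end) pairs; two half-open intervals overlap iff each
    # starts before the other ends
    return a[0] < b[1] and b[0] < a[1]

# id -> (start, end), times as plain numbers for illustration
rows = {1: (0, 10), 2: (5, 15), 3: (20, 30)}

# every pair of distinct ids that conflict (i < j avoids duplicates,
# playing the role of the DISTINCT / GROUP BY in the SQL)
conflicts = [(i, j) for i in rows for j in rows
             if i < j and overlaps(rows[i], rows[j])]
print(conflicts)  # [(1, 2)]
```

Note that with strict `<` comparisons, an interval that ends exactly when another starts does not count as a conflict, which is usually what you want for bookings.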
A quick look at the Interval API gives this (UNTESTED): // SUPPOSED: the big interval is "bigInterval"; the list is "intervals" // Intervals returned List<Interval> ret = new ArrayList<>(); Interval gap, current, next; // First, compute the gaps between the elements in the list current = intervals.get(0); for (int i = 1; i < intervals.size(); i++) { next = intervals.get(i); gap = current.gap(next); if (gap != null) ret.add(gap); current = next; } // Now, compute the time difference between the starting time of the first interval // and the starting time of the "big" interval; add it at the beginning ReadableInstant start, end; start = bigInterval.getStart(); end = intervals.get(0).getStart(); if (start.isBefore(end)) ret.add(0, new Inter Conceptually I would approach this by first grouping by date, then iterate over all the date groups and place the different entries into 5 min time buckets. The time buckets could perhaps be implemented as a predefined Dictionary<int,List<DateTime>> that you add to Just to correct my post: $time = time(); $last_time = ($time - ($time % (15 * 60))); $in15mins = $last_time+ (15 * 60); echo 'Now: '. date('Y-m-d H:i') ." "; echo 'in 15 mins: '. date('Y-m-d H:i', $in15mins) ." "; I think it is quite self explaining. Optimize it, use it. What you need, is to use the same method you've been using before, but to prevent the browser's cache. One method of doing so, is on the server-side, add a no-cache header to to the HTTP response for that request. Another way of doing so, is to change the request's URL, so the browser won't look it up its cache. One example is: var url = "..."; // Your URL url += "&nocache=" + (new Date()).getTime(); Lets take an example: A = 0 B = 45 C = 100 N = 10 interval (10 interval = 11 bound) 1: Find the ratio X/N which is the closest to the ratio AB / AC 4/10 < 45/100 5/10 we will take X = 4 in this example (the result will vary depending on how you round it. 
2: Set the bound number taken from the previous calculation to have bounds from A to B.

A to B:
Interval number: 4 (from the previous value)
Bound number: 5
Average interval length is (45-0) / 4 = 11
Bound 0 = 0
Bound 1 = 11
Bound 2 = 22
Bound 3 = 33
Bound 4 = 45

3: Set the bound number taken from the previous calculation to have bounds from B to C.

B to C:
Interval number: 6 (the rest)
Bound number: 7
Average interval length is (100 - 45) / 6 = 9
Bound 4 = 45
Bound 5 = 54
Bound 6 = 63
Bound 7 = 72
Bound 8 = 81
Bound 9 = 90
Bound 1

I probably haven't got all the details, but to answer your question title, "Reliably select from a database table at fixed time intervals"... I don't think you could even hope for a query to be run with "second precise" timing. One key problem with that approach is that you will have to deal with concurrent access and locks. You might be able to send the query at a fixed time, maybe, but your query might be waiting on the DB server for several seconds (or be executed seeing a fairly outdated snapshot of the db), especially in your case, since the table is apparently "busy". As a suggestion, if I were you, I would spend some time thinking about queue messaging systems (like, just to cite one, not presaging it is somehow "your" solution). Anyway those kinds of tools are prob

If you were thinking of this as an N x N comparison, I would imagine that the answer would be some sort of ragged band matrix. (Look it up if "band matrix" is not a term you've seen before.)
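The two-segment bound construction worked through above (A = 0, B = 45, C = 100, N = 10) can be sketched directly. This version uses exact division for the interval lengths rather than the rounded 11 and 9 in the walk-through, and the function name is mine:

```python
def split_bounds(A, B, C, N):
    # step 1: choose X so that X/N is closest to (B - A) / (C - A)
    X = round(N * (B - A) / (C - A))
    # step 2: X equal intervals from A up to (but not including) B
    bounds = [A + i * (B - A) / X for i in range(X)]
    # step 3: the remaining N - X intervals from B to C, inclusive
    bounds += [B + i * (C - B) / (N - X) for i in range(N - X + 1)]
    return X, bounds

X, bounds = split_bounds(0, 45, 100, 10)
print(X)   # 4 here: round(10 * 45/100) = round(4.5) rounds to the even 4
```

With X = 4 the list holds 11 bounds for 10 intervals, bound X lands exactly on B = 45, and the last bound is C = 100, matching the structure of the example.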
This code should test for overlap at the high end of the second column being greater than the first column, i.e., overlapping: Times <- read.table(text=" Num Start End 1 00:09:41 00:25:25 2 00:11:21 00:41:32 3 00:34:39 00:58:01", stringsAsFactors=FALSE, header=TRUE) mdat <- outer(Times$Start, Times$End, function(x,y) y > x) mdat[upper.tri(mdat)|col(mdat)==row(mdat)] <- NA mdat #------------------ [,1] [,2] [,3] [1,] NA NA NA [2,] TRUE NA NA [3,] FALSE TRUE NA You are not interested in the diagonal since End is alw I think that there is a little bit misunderstanding of the type of X axis you actually need. That the results and the data you are representing graphically comes over time doesn’t mean necessary that you need to use the datetime axis. In general datetime type axis is used for showing dates. Setting the step to 1 for a datetime axis as I can see you have done, means that there shall be enough space for showing 86.400.000 points for the single day. Shall there be more days there will be an enormous amount of points which contradicts to the accuracy laboratory data implies. What I might suggest, is that you simply use a linear type of X axis. There are enough ways to show users that the points occur over intervals of 1 millisecond, or one nanosecond and so on. There are several factors that can influence the GPS accuracy. Before looking for bugs in your code I would suggest checking the following information: What factors influence GPS accuracy the most? 
Factors affecting GPS accuracy

public int longestNonOverLappingTI(TimeInterval[] tis){
    Arrays.sort(tis);
    int[] mt = new int[tis.length];
    mt[0] = tis[0].getTime();
    for(int j=1;j<tis.length;j++){
        for(int i=0;i<j;i++){
            int x = tis[j].overlaps(tis[i])?tis[j].getTime():mt[i] + tis[j].getTime();
            mt[j] = Math.max(x,mt[j]);
        }
    }
    return getMax(mt);
}

// Helper returning the largest value in mt.
private int getMax(int[] mt){
    int best = mt[0];
    for (int v : mt) best = Math.max(best, v);
    return best;
}

public class TimeInterval implements Comparable<TimeInterval> {
    public int start;
    public int end;

    public TimeInterval(int start, int end){
        this.start = start;
        this.end = end;
    }

    public boolean overlaps(TimeInterval that){
        return !(that.end < this.start || this.end < that.start);
    }

    // The interval's weight: its duration.
    public int getTime(){
        return end - start;
    }

    // Sort by end time so earlier-ending intervals are considered first.
    @Override
    public int compareTo(TimeInterval that){
        return Integer.compare(this.end, that.end);
    }
}

Create a single, sorted array of transitions. Each transition has a position, and a cumulative number based on how many intervals you're joining or leaving. As you pass through the list, keep track of how many intervals you are in. When you're in as many intervals as there are series, that's when you're in a common interval. For your example the transitions would be:

[2, 1], [6, -1], [7, 1], [11, -1]
[1, 1], [3, -1], [5, 1], [10, -1], [11, 1], [13, -1]
[2, 1], [5, -1], [6, 1], [8, -1]

which after sorting by position and merging collapses to:

[1, 1], [2, 2], [3, -1], [5, 0], [6, 0], [7, 1], [8, -1], [10, -1], [11, 0], [13, -1]

which gives you transitions for running totals of:

[1, 1], [2, 3], [3, 2], [7, 3], [8, 2], [10, 1], [13, 0]

And then we can read off the intervals where the running total equals the number of series (3): [2, 3] and [7, 8].

You're using the wrong format string for that date format. A %Y is for a four digit year, %y is for a two digit year. From the fine manual:

%Y - Year with century (can be negative, 4 digits at least) -0001, 0000, 1995, 2009, 14292, etc.
...
%y - year % 100 (00..99)

You can even see that the year isn't right in your console:

=> #<DateTime: 0013-12-02T13:25:21+00:00 ((1726142j,48321s,0n),+0s,2299161j)>

and the dt.year value:

irb(main):005:0> dt.year
=> 13

0013 and 2013 aren't quite the same.
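The transition-merging idea above can be sketched in Python. The names are my own; the sketch reproduces the worked example, reading off the spans where the running count equals the number of series:

```python
from collections import defaultdict

def common_intervals(series):
    """Return the intervals covered by *every* series.

    Each series is a list of (start, end) intervals. Intervals become
    +1/-1 transitions, which are merged by position; the spans where
    the running count equals len(series) are the common intervals.
    """
    deltas = defaultdict(int)
    for intervals in series:
        for start, end in intervals:
            deltas[start] += 1   # joining an interval
            deltas[end] -= 1     # leaving an interval

    count, result, open_at = 0, [], None
    for pos in sorted(deltas):
        count += deltas[pos]
        if count == len(series) and open_at is None:
            open_at = pos                  # all series overlap from here
        elif count < len(series) and open_at is not None:
            result.append((open_at, pos))  # the common span ends here
            open_at = None
    return result

series = [
    [(2, 6), (7, 11)],
    [(1, 3), (5, 10), (11, 13)],
    [(2, 5), (6, 8)],
]
print(common_intervals(series))  # [(2, 3), (7, 8)]
```

With the three series from the example, the running count reaches 3 on exactly the spans [2, 3] and [7, 8].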
You want to say: self.datetime = DateTime.strptime("12/02/13 13:25:21", "%m/%d/%y %H:%M:%S") # ------------------------------------------------------------^^
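Python's datetime.strptime makes the same %y/%Y distinction, which is a handy way to see the pitfall. Note this is a sketch, not Ruby: unlike Ruby's strptime, CPython's %Y requires exactly four digits, so the wrong format fails loudly rather than quietly parsing year 13:

```python
from datetime import datetime

# %y parses a two-digit year (CPython pivots 00-68 to 20xx),
# while %Y expects a full four-digit year.
dt = datetime.strptime("12/02/13 13:25:21", "%m/%d/%y %H:%M:%S")
print(dt.year)  # 2013

# Feeding a two-digit year to %Y raises instead of yielding year 13.
try:
    datetime.strptime("12/02/13 13:25:21", "%m/%d/%Y %H:%M:%S")
except ValueError as exc:
    print("mismatch:", exc)
```

Either way, the lesson is the same: match the directive to the number of year digits in the data.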
http://www.w3hello.com/questions/Storing-and-querying-time-intervals-in-MongoDB
Object oriented programming, or the OOPS concept in C#, is a style of programming built around a collection of objects. Each object contains data fields and data members. Unlike procedural programming languages such as C, FORTRAN, and BASIC, programs written in an object oriented language like C# can easily be upgraded. Here, we shall learn the concepts of C# OOPS (object oriented programming) such as class, object, encapsulation, inheritance, polymorphism, etc. Encapsulation, inheritance, and polymorphism are the mechanisms used to implement the C# object oriented programming model.

C# Class

A class is a user-defined type, and it is the main concept of C# OOPS or object oriented programming. A class is just like a blueprint or a template for creating an object. A C# class doesn't have any memory allocated to it; memory is allocated only for a created class instance. The syntax of a C# class is

class <classname>
{
}

C# Object

A C# object is said to be an instance of a class, and an object holds data fields and member functions (data members). Actually, in a real-time environment, everything is said to be an object. The same is applicable in C# object oriented programming.

- An object is an entity that can be seen or imagined.
- A C# object has some properties for identifying its current state and for the sake of validations.
- Data members or member functions describe its functions or behavior.
- Events represent the change of its state.

So we can say that C# objects can be distinguished from one another based on their state, functionality, and events. For example, if we consider a revolving fan as an object,

- A knob surrounded by blades, its brand, and its color depict the state of the revolving fan.
- The airflow created by the fan is said to be its functionality.
- Whereas a change in the position of the blades while revolving represents a change of its state.
In C# object oriented terminology, we say that to access the data fields and the member functions inside a C# class, an object is created from the class.

C# Class Instance syntax

<ClassName> <ObjectName> = new <ConstructorName>(<parameter list>);

Here, new is the keyword used while creating an object of a C# class.

Differences among C# Variable, Instance (Object) and Reference

This section covers the C# variable, instance (object), and reference. Seeing an example of each one will help you find the differences among them.

C# Variable

A C# variable is nothing but a copy of a type which is not initialized. For example:

int x;

x is a copy of the type int.

string s;

String is a class, and s is a copy of the class string. Here the data type int and the class string are blueprints that don't have any memory allocated to them, but for the variables x and s, memory is allocated. So a C# variable is a copy of a particular data type that holds some memory, whereas the C# data type is a logical one, or a blueprint, for which no memory is allocated. Say, for example, a house plan is just like the data type, and the physical house constructed based on that plan is the variable. That is, int or string in the above context doesn't have any memory, whereas a variable has physical memory.

using System;
using System.Collections.Generic;
using System.Text;

namespace CSharp_Tutorial
{
    class Class1
    {
        int x = 100;
        static void Main(string[] args)
        {
            Console.WriteLine(x);
        }
    }
}

OUTPUT

In the above C# OOPS program, when we try to print the variable x in the Main method, it shows an error. The C# error says that you are trying to print a non-static class variable in the static Main method. In C# object oriented programming, we have to create an instance of the class to initialize variables or print any value in the Main method, and use that instance to print the values, or else it throws an error, as shown in the following code.
using System;
using System.Collections.Generic;
using System.Text;

namespace CSharp_Tutorial
{
    class Class1
    {
        int x = 100;
        static void Main(string[] args)
        {
            Class1 c = new Class1();
            Console.WriteLine(c.x);
        }
    }
}

OUTPUT

The reason for this is that a C# variable in object oriented programming is a copy of the class that is not initialized.

C# Instance

The C# instance in the OOPS concept is nothing but a copy of the class, which helps initialize a variable using the keyword new. Every instance has its own memory. The memory allocated for one C# instance is never shared with another instance. It means that however many instances are created, separate memory is allocated for each of them, without any sharing.

using System;
using System.Collections.Generic;
using System.Text;

namespace CSharp_Tutorial
{
    class Class1
    {
        int x = 50;
        public Class1()
        {
            Console.WriteLine("value of x is " + x);
        }
        public Class1(int i)
        {
            Console.WriteLine("value of i is " + i);
        }
        static void Main(string[] args)
        {
            Console.WriteLine("Default Constructor is invoked");
            Class1 c = new Class1();
            Class1 c1 = new Class1();
            Console.WriteLine("Parameterized Constructor is invoked");
            Class1 c2 = new Class1(100);
            Class1 c3 = new Class1(200);
        }
    }
}

OUTPUT

In the above C# OOPS program, four instances are created for Class1. Instances c and c1 are created using the default (parameterless) constructor Class1(), whereas instances c2 and c3 are created using the parameterized constructor Class1(int i). All the instances created, irrespective of the constructor used, will have separate memory locations allocated.

C# Reference

A reference in C# is also a copy of the class, but it is initialized from an instance that already exists. A reference doesn't have any memory allocated of its own; instead, it is created to share the memory of the instance. A reference to a C# class can also be described as a pointer to the instance. Every action performed on the fields and member functions using the instance a reference points to is reflected in that reference.
And vice versa (changes made using the reference are reflected in the instance for which it was created).

using System;
using System.Collections.Generic;
using System.Text;

namespace CSharp_Tutorial
{
    class Class1
    {
        int x;
        public Class1(int i)
        {
            x = i;
        }
        static void Main(string[] args)
        {
            Class1 c = new Class1(100);
            Class1 c2 = c;
            Console.WriteLine("x value using instance c");
            Console.WriteLine(c.x);
            Console.WriteLine("x value using reference c2");
            Console.WriteLine(c2.x);
        }
    }
}

OUTPUT

In the C# OOPS code written above, c is an instance of Class1, whereas c2 is a reference created to c. When we try to print the value of x, instance c and reference c2 give the same result. The output, i.e., 100, is printed using reference c2 as well because it points to the instance c.
https://www.tutorialgateway.org/oops-concept-in-csharp/
Opened 2 years ago
Last modified 2 years ago

#9680 enhancement (new): twisted.logger.Logger creates too many new instances when declared within a class (Version 3)

Description (last modified by )

Situation

A new instance of the Logger is created each and every time it is accessed as a descriptor. This can result in a lot of unnecessary object creation.

Target

When a Logger instance is created within a class declaration, all subsequent attribute accesses should return the same Logger instance.

Proposal

As a first step, stop creating a new instance of the Logger, and instead set the namespace, source, and observer attributes on the existing Logger instance. This will be sufficient for the purpose of stemming the object-creation tide, but will still be doing pointless work, in that the namespace, source, and observer never change.

As a second step, add an internal flag to the Logger class, which the __get__ method can check to see if it has already configured the Logger instance.
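The two-step proposal could be sketched roughly like this. This is a hypothetical, heavily simplified stand-in, not Twisted's actual Logger (the real implementation also distinguishes class access from instance access and handles observers):

```python
class Logger:
    """Minimal stand-in illustrating the ticket's proposal: configure
    the class-level Logger once, on first descriptor access, instead
    of building a new Logger on every access."""

    def __init__(self, namespace=None, source=None):
        self.namespace = namespace
        self.source = source
        self._configured = False  # step two: remember the work was done

    def __get__(self, instance, owner):
        if not self._configured:
            # Step one: mutate the existing instance rather than
            # creating a new Logger per access.
            self.namespace = owner.__module__ + "." + owner.__qualname__
            self.source = owner
            self._configured = True
        return self

class Foo:
    log = Logger()

assert Foo.log is Foo.log  # same Logger object on every access
```

After the first access the flag short-circuits __get__, so repeated attribute lookups do no further work and allocate nothing.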
https://twistedmatrix.com/trac/ticket/9680?version=3
A TMultiGraph is a collection of TGraph (or derived) objects. A TMultiGraph makes it possible to manipulate a set of graphs as a single entity. In particular, when drawn, the X and Y axis ranges are automatically computed so that all the graphs will be visible. TMultiGraph::Add should be used to add a new graph to the list. The TMultiGraph owns the objects in the list. The number of graphs in a multigraph can be retrieved with: The drawing options are the same as for TGraph. Like for TGraph, the painting is performed by the TGraphPainter class. All details about the various painting options are given in that class. Example: The drawing option for each TGraph may be specified as an optional second argument of the Add function. If a draw option is specified, it will be used to draw the graph; otherwise the graph will be drawn with the option specified in TMultiGraph::Draw. The global title and the axis titles can be modified the following way: A special option, 3D, allows drawing the graphs in a 3D space. See the following example: The method TPad::BuildLegend is able to extract the graphs inside a multigraph. The following example demonstrates this. Automatic coloring according to the current palette is available, as shown in the following example: The following example shows how to fit a TMultiGraph. When the graphs in a TMultiGraph are fitted, the fit parameter boxes overlap. The following example shows how to make them all visible. The axis limits can be changed like for a TGraph. The same methods apply to the multigraph. Note the two different ways to change the limits on the X and Y axes. Definition at line 36 of file TMultiGraph.h. #include <TMultiGraph.h> Copy constructor. Definition at line 385 of file TMultiGraph.cxx. TMultiGraph default constructor. Definition at line 358 of file TMultiGraph.cxx. Constructor with name and title. Definition at line 371 of file TMultiGraph.cxx. TMultiGraph destructor. Definition at line 416 of file TMultiGraph.cxx.
Add a new graph to the list of graphs. Note that the graph is now owned by the TMultiGraph. Deleting the TMultiGraph object will automatically delete the graphs. You should not delete the graphs while the TMultiGraph is still active. Definition at line 451 of file TMultiGraph.cxx. Add all the graphs in "multigraph" to the list of graphs. Definition at line 467 of file TMultiGraph.cxx. Get iterator over internal graphs list. Definition at line 1687 of file TMultiGraph.cxx. Browse multigraph. Reimplemented from TObject. Definition at line 489 of file TMultiGraph.cxx. Compute distance from point px,py to each graph. Reimplemented from TObject. Definition at line 504 of file TMultiGraph.cxx. Draw this multigraph with its current attributes. Options to draw a graph are described in TGraphPainter. Reimplemented from TObject. Definition at line 541 of file TMultiGraph.cxx. Definition at line 72 of file TMultiGraph.h. Fit this graph with the function named fname; interface to TF1::Fit(TF1 *f1... Definition at line 559 of file TMultiGraph.cxx. The fit option may take the following values: Note that this function is called when calling TGraphErrors::Fit or TGraphAsymmErrors::Fit or TGraphBentErrors::Fit; see the discussion below on the error calculation. If parmin >= parmax, the parameter is fixed. Note that you are not forced to fix the limits for all parameters. For example, if you fit a function with 6 parameters, you can do: With this setup, parameters 0->3 can vary freely, parameter 4 has boundaries [-10,-4] with initial value -8, and parameter 5 is fixed at 100. The fit range can be specified in two ways: By default a chi2 fitting function is used for fitting the TGraphs. The function is implemented in FitUtil::EvaluateChi2.
In the case of TGraphErrors an effective chi2 is used (see TGraphErrors fit in TGraph::Fit), implemented in FitUtil::EvaluateChi2Effective. To specify a user-defined fitting function, specify option "U" and call the following function: where MyFittingFunction is of type: The function returns a TFitResultPtr which can hold a pointer to a TFitResult object. By default the TFitResultPtr contains only the status of the fit and it converts automatically to an integer. If the option "S" is used instead, TFitResultPtr contains the TFitResult and behaves as a smart pointer to it. For example one can do: The fit parameters, errors and chi2 (but not the covariance matrix) can also be retrieved from the fitted function. One or more objects (typically a TF1*) can be added to the list of functions (fFunctions) associated with each graph. When TGraph::Fit is invoked, the fitted function is added to this list. Given a graph gr, one can retrieve an associated function with: If the graph is made persistent, the list of associated functions is also persistent. Given a pointer (see above) to an associated function myfunc, one can retrieve the function/fit parameters with calls such as: You can change the statistics box to display the fit parameters with the TStyle::SetOptFit(mode) method. This mode has four digits. mode = pcev (default = 0111) For example: gStyle->SetOptFit(1011); prints the fit probability, parameter names/values, and errors. You can change the position of the statistics box with these lines (where g is a pointer to the TGraph): Definition at line 733 of file TMultiGraph.cxx. Display a panel with all histogram fit options. See the class TFitPanel for an example. Definition at line 750 of file TMultiGraph.cxx. Return pointer to function with name. Functions such as TGraph::Fit store the fitted function in the list of functions of this graph. Definition at line 1106 of file TMultiGraph.cxx. Return the draw option for the TGraph gr in this TMultiGraph.
The returned option is the one specified when calling TMultiGraph::Add(gr,option). Definition at line 774 of file TMultiGraph.cxx. Returns a pointer to the histogram used to draw the axis. Takes into account the following cases: if fHistogram exists, it is returned; if fHistogram doesn't exist and gPad exists, gPad is updated, which may trigger the creation of fHistogram; if fHistogram still does not exist but hframe does (if the user called TPad::DrawFrame), the pointer to the hframe histogram is returned; if fHistogram still doesn't exist, then it is created. Definition at line 1049 of file TMultiGraph.cxx. Return pointer to list of functions. If the pointer is null, create the list. Definition at line 1116 of file TMultiGraph.cxx. Definition at line 74 of file TMultiGraph.h. Definition at line 70 of file TMultiGraph.h. Get x axis of the graph. This method returns a valid axis only after the TMultiGraph has been drawn. Definition at line 1127 of file TMultiGraph.cxx. Get y axis of the graph. This method returns a valid axis only after the TMultiGraph has been drawn. Definition at line 1139 of file TMultiGraph.cxx. Compute initial values of parameters for an exponential. Definition at line 834 of file TMultiGraph.cxx. Compute initial values of parameters for a gaussian. Definition at line 789 of file TMultiGraph.cxx. Compute initial values of parameters for a polynomial. Definition at line 851 of file TMultiGraph.cxx. Return 1 if the point (x,y) is inside one of the graphs, 0 otherwise. Definition at line 1023 of file TMultiGraph.cxx. Least squares polynomial fitting without weights, based on the CERNLIB routine LSQ; translated to C++ by Rene Brun. Definition at line 875 of file TMultiGraph.cxx. Least squares linear fit without weights. Fit a straight line (a0 + a1*x) to the data in this graph; extracted from CERNLIB LLSQ, translated to C++ by Rene Brun. Definition at line 970 of file TMultiGraph.cxx. Assignment operator. Definition at line 399 of file TMultiGraph.cxx. Paint all the graphs of this multigraph.
Reimplemented from TObject. Definition at line 1150 of file TMultiGraph.cxx. Divides the active pad and draws all Graphs in the Multigraph separately. Definition at line 1405 of file TMultiGraph.cxx. Paint all the graphs of this multigraph as 3D lines. Definition at line 1454 of file TMultiGraph.cxx. Paint all the graphs of this multigraph, reverting values along the X and/or Y axis. New graphs are created. Definition at line 1558 of file TMultiGraph.cxx. Print the list of graphs. Reimplemented from TNamed. Definition at line 1593 of file TMultiGraph.cxx. Recursively remove this object from a list. Typically implemented by classes that can contain multiple references to the same object. Reimplemented from TObject. Definition at line 1609 of file TMultiGraph.cxx. Save primitive as a C++ statement(s) on output stream out. Reimplemented from TObject. Definition at line 1622 of file TMultiGraph.cxx. Set multigraph maximum. Definition at line 1667 of file TMultiGraph.cxx. Set multigraph minimum. Definition at line 1677 of file TMultiGraph.cxx. Definition at line 40 of file TMultiGraph.h. Definition at line 39 of file TMultiGraph.h. Definition at line 41 of file TMultiGraph.h. Definition at line 42 of file TMultiGraph.h. Definition at line 43 of file TMultiGraph.h.
https://root.cern.ch/doc/v622/classTMultiGraph.html
Automatic Differentiation and Gradients Automatic differentiation is useful for implementing machine learning algorithms such as backpropagation for training neural networks. In this guide, you will explore ways to compute gradients with TensorFlow, especially in eager execution. Setup import numpy as np import matplotlib.pyplot as plt import tensorflow as tf Computing gradients To differentiate automatically, TensorFlow needs to remember what operations happen in what order during the forward pass. Then, during the backward pass, TensorFlow traverses this list of operations in reverse order to compute gradients. Gradient tapes TensorFlow provides the tf.GradientTape API for automatic differentiation; that is, computing the gradient of a computation with respect to some inputs, usually tf.Variables. TensorFlow "records" relevant operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then uses that tape to compute the gradients of a "recorded" computation using reverse mode differentiation. Here is a simple example: x = tf.Variable(3.0) with tf.GradientTape() as tape: y = x**2 Once you've recorded some operations, use GradientTape.gradient(target, sources) to calculate the gradient of some target (often a loss) relative to some source (often the model's variables): # dy = 2x * dx dy_dx = tape.gradient(y, x) dy_dx.numpy() 6.0 The above example uses scalars, but tf.GradientTape works as easily on any tensor: w = tf.Variable(tf.random.normal((3, 2)), name='w') b = tf.Variable(tf.zeros(2, dtype=tf.float32), name='b') x = [[1., 2., 3.]] with tf.GradientTape(persistent=True) as tape: y = x @ w + b loss = tf.reduce_mean(y**2) To get the gradient of loss with respect to both variables, you can pass both as sources to the gradient method. The tape is flexible about how sources are passed and will accept any nested combination of lists or dictionaries and return the gradient structured the same way (see tf.nest). 
[dl_dw, dl_db] = tape.gradient(loss, [w, b]) The gradient with respect to each source has the shape of the source: print(w.shape) print(dl_dw.shape) (3, 2) (3, 2) Here is the gradient calculation again, this time passing a dictionary of variables: my_vars = { 'w': w, 'b': b } grad = tape.gradient(loss, my_vars) grad['b'] <tf.Tensor: shape=(2,), dtype=float32, numpy=array([-1.6920902, -3.2363236], dtype=float32)> Gradients with respect to a model It's common to collect tf.Variables into a tf.Module or one of its subclasses ( layers.Layer, keras.Model) for checkpointing and exporting. In most cases, you will want to calculate gradients with respect to a model's trainable variables. Since all subclasses of tf.Module aggregate their variables in the Module.trainable_variables property, you can calculate these gradients in a few lines of code: layer = tf.keras.layers.Dense(2, activation='relu') x = tf.constant([[1., 2., 3.]]) with tf.GradientTape() as tape: # Forward pass y = layer(x) loss = tf.reduce_mean(y**2) # Calculate gradients with respect to every trainable variable grad = tape.gradient(loss, layer.trainable_variables) for var, g in zip(layer.trainable_variables, grad): print(f'{var.name}, shape: {g.shape}') dense/kernel:0, shape: (3, 2) dense/bias:0, shape: (2,) Controlling what the tape watches The default behavior is to record all operations after accessing a trainable tf.Variable. The reasons for this are: - The tape needs to know which operations to record in the forward pass to calculate the gradients in the backwards pass. - The tape holds references to intermediate outputs, so you don't want to record unnecessary operations. - The most common use case involves calculating the gradient of a loss with respect to all a model's trainable variables. 
For example, the following fails to calculate a gradient because the tf.Tensor is not "watched" by default, and the tf.Variable is not trainable: # A trainable variable x0 = tf.Variable(3.0, name='x0') # Not trainable x1 = tf.Variable(3.0, name='x1', trainable=False) # Not a Variable: A variable + tensor returns a tensor. x2 = tf.Variable(2.0, name='x2') + 1.0 # Not a variable x3 = tf.constant(3.0, name='x3') with tf.GradientTape() as tape: y = (x0**2) + (x1**2) + (x2**2) grad = tape.gradient(y, [x0, x1, x2, x3]) for g in grad: print(g) tf.Tensor(6.0, shape=(), dtype=float32) None None None You can list the variables being watched by the tape using the GradientTape.watched_variables method: [var.name for var in tape.watched_variables()] ['x0:0'] tf.GradientTape provides hooks that give the user control over what is or is not watched. To record gradients with respect to a tf.Tensor, you need to call GradientTape.watch(x): x = tf.constant(3.0) with tf.GradientTape() as tape: tape.watch(x) y = x**2 # dy = 2x * dx dy_dx = tape.gradient(y, x) print(dy_dx.numpy()) 6.0 Conversely, to disable the default behavior of watching all tf.Variables, set watch_accessed_variables=False when creating the gradient tape. This calculation uses two variables, but only connects the gradient for one of the variables: x0 = tf.Variable(0.0) x1 = tf.Variable(10.0) with tf.GradientTape(watch_accessed_variables=False) as tape: tape.watch(x1) y0 = tf.math.sin(x0) y1 = tf.nn.softplus(x1) y = y0 + y1 ys = tf.reduce_sum(y) Since GradientTape.watch was not called on x0, no gradient is computed with respect to it: # dys/dx1 = exp(x1) / (1 + exp(x1)) = sigmoid(x1) grad = tape.gradient(ys, {'x0': x0, 'x1': x1}) print('dy/dx0:', grad['x0']) print('dy/dx1:', grad['x1'].numpy()) dy/dx0: None dy/dx1: 0.9999546 Intermediate results You can also request gradients of the output with respect to intermediate values computed inside the tf.GradientTape context. 
x = tf.constant(3.0) with tf.GradientTape() as tape: tape.watch(x) y = x * x z = y * y # Use the tape to compute the gradient of z with respect to the # intermediate value y. # dz_dy = 2 * y and y = x ** 2 = 9 print(tape.gradient(z, y).numpy()) 18.0 By default, the resources held by a GradientTape are released as soon as the GradientTape.gradient method is called. To compute multiple gradients over the same computation, create a gradient tape with persistent=True. This allows multiple calls to the gradient method as resources are released when the tape object is garbage collected. For example: x = tf.constant([1, 3.0]) with tf.GradientTape(persistent=True) as tape: tape.watch(x) y = x * x z = y * y print(tape.gradient(z, x).numpy()) # [4.0, 108.0] (4 * x**3 at x = [1.0, 3.0]) print(tape.gradient(y, x).numpy()) # [2.0, 6.0] (2 * x at x = [1.0, 3.0]) [ 4. 108.] [2. 6.] del tape # Drop the reference to the tape Notes on performance There is a tiny overhead associated with doing operations inside a gradient tape context. For most eager execution this will not be a noticeable cost, but you should still use tape context around the areas only where it is required. Gradient tapes use memory to store intermediate results, including inputs and outputs, for use during the backwards pass. For efficiency, some ops (like ReLU) don't need to keep their intermediate results and they are pruned during the forward pass. However, if you use persistent=Trueon your tape, nothing is discarded and your peak memory usage will be higher. Gradients of non-scalar targets A gradient is fundamentally an operation on a scalar. x = tf.Variable(2.0) with tf.GradientTape(persistent=True) as tape: y0 = x**2 y1 = 1 / x print(tape.gradient(y0, x).numpy()) print(tape.gradient(y1, x).numpy()) 4.0 -0.25 Thus, if you ask for the gradient of multiple targets, the result for each source is: - The gradient of the sum of the targets, or equivalently - The sum of the gradients of each target. 
x = tf.Variable(2.0) with tf.GradientTape() as tape: y0 = x**2 y1 = 1 / x print(tape.gradient({'y0': y0, 'y1': y1}, x).numpy()) 3.75 Similarly, if the target(s) are not scalar the gradient of the sum is calculated: x = tf.Variable(2.) with tf.GradientTape() as tape: y = x * [3., 4.] print(tape.gradient(y, x).numpy()) 7.0 This makes it simple to take the gradient of the sum of a collection of losses, or the gradient of the sum of an element-wise loss calculation. If you need a separate gradient for each item, refer to Jacobians. In some cases you can skip the Jacobian. For an element-wise calculation, the gradient of the sum gives the derivative of each element with respect to its input-element, since each element is independent: x = tf.linspace(-10.0, 10.0, 200+1) with tf.GradientTape() as tape: tape.watch(x) y = tf.nn.sigmoid(x) dy_dx = tape.gradient(y, x) plt.plot(x, y, label='y') plt.plot(x, dy_dx, label='dy/dx') plt.legend() _ = plt.xlabel('x') Control flow Because a gradient tape records operations as they are executed, Python control flow is naturally handled (for example, if and while statements). Here a different variable is used on each branch of an if. The gradient only connects to the variable that was used: x = tf.constant(1.0) v0 = tf.Variable(2.0) v1 = tf.Variable(2.0) with tf.GradientTape(persistent=True) as tape: tape.watch(x) if x > 0.0: result = v0 else: result = v1**2 dv0, dv1 = tape.gradient(result, [v0, v1]) print(dv0) print(dv1) tf.Tensor(1.0, shape=(), dtype=float32) None Just remember that the control statements themselves are not differentiable, so they are invisible to gradient-based optimizers. Depending on the value of x in the above example, the tape either records result = v0 or result = v1**2. The gradient with respect to x is always None. dx = tape.gradient(result, x) print(dx) None Getting a gradient of None When a target is not connected to a source you will get a gradient of None. x = tf.Variable(2.) y = tf.Variable(3.) 
with tf.GradientTape() as tape: z = y * y print(tape.gradient(z, x)) None Here z is obviously not connected to x, but there are several less-obvious ways that a gradient can be disconnected. 1. Replaced a variable with a tensor In the section on "controlling what the tape watches" you saw that the tape will automatically watch a tf.Variable but not a tf.Tensor. One common error is to inadvertently replace a tf.Variable with a tf.Tensor, instead of using Variable.assign to update the tf.Variable. Here is an example: x = tf.Variable(2.0) for epoch in range(2): with tf.GradientTape() as tape: y = x+1 print(type(x).__name__, ":", tape.gradient(y, x)) x = x + 1 # This should be `x.assign_add(1)` ResourceVariable : tf.Tensor(1.0, shape=(), dtype=float32) EagerTensor : None 2. Did calculations outside of TensorFlow The tape can't record the gradient path if the calculation exits TensorFlow. For example: x = tf.Variable([[1.0, 2.0], [3.0, 4.0]], dtype=tf.float32) with tf.GradientTape() as tape: x2 = x**2 # This step is calculated with NumPy y = np.mean(x2, axis=0) # Like most ops, reduce_mean will cast the NumPy array to a constant tensor # using `tf.convert_to_tensor`. y = tf.reduce_mean(y, axis=0) print(tape.gradient(y, x)) None 3. Took gradients through an integer or string Integers and strings are not differentiable. If a calculation path uses these data types there will be no gradient. Nobody expects strings to be differentiable, but it's easy to accidentally create an int constant or variable if you don't specify the dtype. x = tf.constant(10) with tf.GradientTape() as g: g.watch(x) y = x * x print(g.gradient(y, x)) WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int32 WARNING:tensorflow:The dtype of the target tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32 WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. 
tf.float32) when calling GradientTape.gradient, got tf.int32 None TensorFlow doesn't automatically cast between types, so, in practice, you'll often get a type error instead of a missing gradient. 4. Took gradients through a stateful object State stops gradients. When you read from a stateful object, the tape can only observe the current state, not the history that led to it. A tf.Tensor is immutable. You can't change a tensor once it's created. It has a value, but no state. All the operations discussed so far are also stateless: the output of a tf.matmul only depends on its inputs. A tf.Variable has internal state: its value. When you use the variable, the state is read. It's normal to calculate a gradient with respect to a variable, but the variable's state blocks gradient calculations from going farther back. For example: x0 = tf.Variable(3.0) x1 = tf.Variable(0.0) with tf.GradientTape() as tape: # Update x1 = x1 + x0. x1.assign_add(x0) # The tape starts recording from x1. y = x1**2 # y = (x1 + x0)**2 # This doesn't work. print(tape.gradient(y, x0)) #dy/dx0 = 2*(x1 + x0) None Similarly, tf.data.Dataset iterators and tf.queues are stateful, and will stop all gradients on tensors that pass through them. No gradient registered Some tf.Operations are registered as being non-differentiable and will return None. Others have no gradient registered. The tf.raw_ops page shows which low-level ops have gradients registered. If you attempt to take a gradient through a float op that has no gradient registered, the tape will throw an error instead of silently returning None. This way you know something has gone wrong.
For example, the tf.image.adjust_contrast function wraps raw_ops.AdjustContrastv2, which could have a gradient but the gradient is not implemented: image = tf.Variable([[[0.5, 0.0, 0.0]]]) delta = tf.Variable(0.1) with tf.GradientTape() as tape: new_image = tf.image.adjust_contrast(image, delta) try: print(tape.gradient(new_image, [image, delta])) assert False # This should not happen. except LookupError as e: print(f'{type(e).__name__}: {e}') LookupError: gradient registry has no entry for: AdjustContrastv2 If you need to differentiate through this op, you'll either need to implement the gradient and register it (using tf.RegisterGradient) or re-implement the function using other ops. Zeros instead of None In some cases it would be convenient to get 0 instead of None for unconnected gradients. You can decide what to return when you have unconnected gradients using the unconnected_gradients argument: x = tf.Variable([2., 2.]) y = tf.Variable(3.) with tf.GradientTape() as tape: z = y**2 print(tape.gradient(z, x, unconnected_gradients=tf.UnconnectedGradients.ZERO)) tf.Tensor([0. 0.], shape=(2,), dtype=float32)
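The "unconnected gradient" behaviour described above can be illustrated without TensorFlow. Below is a toy reverse-mode autodiff sketch (purely illustrative — this is not how tf.GradientTape is implemented): asking for the gradient of a target with respect to a variable the target never touched returns None, just as the guide's first example prints None for tape.gradient(z, x).

```python
class Var:
    """Toy reverse-mode autodiff value (illustration only, not TensorFlow)."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent Var, local gradient)

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

def gradient(target, source):
    """Return d(target)/d(source), or None if source is unconnected."""
    grads = {}
    def backprop(node, g):
        grads[id(node)] = grads.get(id(node), 0.0) + g
        for parent, local in node.parents:
            backprop(parent, g * local)
    backprop(target, 1.0)
    # None when the source was never reached, mirroring the tape's default
    return grads.get(id(source))

x = Var(2.0)
y = Var(3.0)
z = y * y                        # z never touches x
assert gradient(z, y) == 6.0     # dz/dy = 2*y
assert gradient(z, x) is None    # unconnected -> None, as in the guide
```

The same mechanism explains the "replaced a variable with a tensor" pitfall: once the graph of parents no longer leads back to the watched variable, the gradient lookup has nothing to return.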
https://www.tensorflow.org/guide/autodiff?hl=el
Re: array in PHP From: d (d_at_example.com) Date: 12/03/04 - ] Date: Fri, 03 Dec 2004 14:39:07 GMT Well, this line empties the array: $Lijst=Leesnamen($Moeder, ($n*2)+1); as you're putting the return value of Leesnamen() straight into the array, not into the element you're looping through. I can see from the function that it changes the $Lijst array, but I can't see how it can read that variable, as it's outside the scope of that function. Using a "global $Lijst;" in the Leesnamen() function, will let you see and alter that array. Now, you can change the $Lijst=Leesnamen() line, removing the '$Lijst=' part, as the function in question doesn't return anything. Try that! d. "Piet" <p.potjer@chello.nl> wrote in message news:8O_rd.1799$yn4.1409@amsnews03-serv.chello.com... >I let it print (see insertion) as a test > "d" <d@example.com> schreef in bericht > news:jD_rd.79663$38.17955@fe2.news.blueyonder.co.uk... >> Cool :) Where do you notice your array is missing variables? >> >> "Piet" <p.potjer@chello.nl> wrote in message >> news:Vz_rd.1736$yn4.972@amsnews03-serv.chello.com... >>> This is the exact code - with comments in Dutch. >>> "d" <d@example.com> schreef in bericht >>> news:_p_rd.79653$38.1588@fe2.news.blueyonder.co.uk... >>>> Is that the exact code you're using? There are quite a few problems >>>> with it, like the missing $ from variables, missing closing brackets, >>>> etc... 
it most definitely won't run :) >>> >>> Function MaakLijst ($idnr){ >>> global $Lijst; >>> $m=1; >>> $Lijst = Leesnamen($idnr, $m); > Print $Lijst[1]["Naam"]; // Result: Pete >>> for ($n=1; $n<=7;$n++){ >>> $Vader=$Lijst[$n]["Vader"]; >>> $Moeder=$Lijst[$n]["Moeder"]; >>> if ($Vader!="1"){ >>> $lijst=Leesnamen($Vader, $n*2); >>> $Lijst=Leesnamen($Moeder, ($n*2)+1); >>> } > Print $Lijst[1]["Naam"]; // Result: Empty > Print $Lijst[2]["Naam"]; // Result: Chris > >>> } >>> return $Lijst; >>> } >>> Function LeesNamen($idnr, $m) { >>> $x=False; // Variabele laat while lus niet onnodig lang doorlopen >>> //stel de variabelen voor de toegang tot de database in >>> $Host = "localhost"; >>> $Gebruiker = "User"; >>> $Wachtwoord = ""; >>> $DBNaam = "Parenteel"; >>> $Tabelnaam="familieleden"; >>> $Verbinding=mysql_connect($Host, $Gebruiker, $Wachtwoord); >>> $Opdracht = "Select * from $Tabelnaam"; >>> $Resultaat =mysql_db_query ($DBNaam, $Opdracht, $Verbinding); >>> //Haal de resultaten uit de database >>> while (($Rij = mysql_fetch_array ($Resultaat)) && ($x==FALSE)) { >>> if ($Rij[id]==$idnr) { >>> $Lijst[$m]["Naam"]=$Rij[Naam]; >>> $Lijst[$m]["Vader"]=$Rij[Vader]; >>> $Lijst[$m]["Moeder"]=$Rij[Moeder]; >>> $x=True; >>> } >>> } >>> $x=False; >>> mysql_close($Verbinding); >>> } >>> return ($Lijst); >>> }>> "d" <d@example.com> schreef in bericht >>>>> news:5c_rd.69942$F7.2426@fe1.news.blueyonder.co.uk... >>>>>> Can you copy and paste your exact script here? It seems something >>>>>> very subtle is happening :) >>>>>> >>>>>> "Piet" <p.potjer@chello.nl> wrote in message >>>>>> news:c1_rd.56485$lN.37579@amsnews05.chello.com... >>>>>>> You're right: in my message I did not mention the quotes. I did in >>>>>>> my script. Sorry. >>>>>>> So the problem remains: The content of List[1]["Name"} disappears >>>>>>> when I call the function again. >>>>>>> >>>>>>> "d" <d@example.com> schreef in bericht >>>>>>> news:PqZrd.79502$38.8284@fe2.news.blueyonder.co.uk... 
>>>>>>>> For one thing, you should put your textual keys in quotes, so >>>>>>>> $List[1][Name] becomes $List[1]["Name"]. PHP shouldn't support the >>>>>>>> method you use (as, technically, Name should be a constant in your >>>>>>>> script, not a textual key) - it just does to prevent people's code >>>>>>>> from breaking :) >>>>>>>> >>>>>>>> Try changing that and see if it helps. >>>>>>>> >>>>>>>> If not, put some values in your array then put this command at the >>>>>>>> very end of your script: >>>>>>>> >>>>>>>> var_dump($List); >>>>>>>> >>>>>>>> It will display the contents of the $List variable, letting you see >>>>>>>> if things are as they should be. >>>>>>>> >>>>>>>> d >>>>>>>> >>>>>>>> "Piet" <p.potjer@chello.nl> wrote in message >>>>>>>> news:jjZrd.56081$lN.17932@amsnews05.chello.com... >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I try to use a two-dimension array in a function. >>>>>>>>> It is declared as global (global $List). >>>>>>>>> When it is filled the first time everything seems to be okay: >>>>>>>>> $List[1][Name]="Pete"; >>>>>>>>> When I call the function again I fill it like >>>>>>>>> $List[2][Name]="Chris"; >>>>>>>>> The result seems good, but the foolish effect is that >>>>>>>>> $List[1][Name] is empty then. >>>>>>>>> What is going wrong? Can anybody help me? >>>>>>>>> (I use PHP 4.3.8.) >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> > > - ]
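The scope bug being debugged above translates directly into other languages: inside the loop, `$Lijst = Leesnamen(...)` replaces the whole array with the function's return value, discarding entries written earlier. Here is a Python sketch of the same mistake and its fix (`read_names` and its data are invented stand-ins for LeesNamen and the database rows):

```python
def read_names(idnr, m):
    # Stand-in for LeesNamen(): builds and returns ONE fresh entry.
    return {m: {"Naam": f"person-{idnr}"}}

# Buggy pattern: each call replaces the entire container,
# like `$Lijst = Leesnamen(...)` inside the loop.
lijst = read_names(1, 1)
lijst = read_names(2, 2)          # entry 1 is now gone
assert 1 not in lijst

# Fix: merge the new entries into the existing container instead,
# like writing into the shared array via `global $Lijst`.
lijst = read_names(1, 1)
lijst.update(read_names(2, 2))
assert lijst[1]["Naam"] == "person-1"
assert lijst[2]["Naam"] == "person-2"
```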
http://coding.derkeiler.com/Archive/PHP/alt.php/2004-12/0096.html
Before you grant a role in GCP, open the IAM page in the GCP Console.

COMMAND API

QueryGrantableRoles returns a list of all roles grantable on a resource. Request body:

{ "fullResourceName": [FULL-RESOURCE-NAME] }

Cloud IAM Quickstart Using Client Libraries. For more information, see the Cloud IAM Go API reference documentation.

import ( "context" "fmt" "io" "golang.org/x/oauth2/google" iam "google.golang.org/api/iam/v1" ) // viewGrantableRoles lists roles grantable on a resource. func viewGrantableRoles(w io.Writer, fullResourceName string) ([]*iam.Role, error) { client, err := google.DefaultClient(context.Background(), iam.CloudPlatformScope) if err != nil { return nil, fmt.Errorf("google.DefaultClient: %v", err) } service, err := iam.New(client) if err != nil { return nil, fmt.Errorf("iam.New: %v", err) } ... }

To learn more about how full resource names are constructed, see the article Resource Names and the reference documentation for the API service you want to get grantable roles for. What's next - Read about the available IAM roles. - Learn how to grant, change, and revoke access on project members.
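For illustration, the request body shown above can be built and sanity-checked like this ([FULL-RESOURCE-NAME] in the doc is a placeholder; the resource name below is an invented example, not one from this page):

```python
import json

# Build the queryGrantableRoles request body. The resource name here is a
# hypothetical example of a full resource name for a project.
full_resource_name = "//cloudresourcemanager.googleapis.com/projects/my-project"
body = json.dumps({"fullResourceName": full_resource_name})

# A full resource name always starts with a double slash and a service domain.
parsed = json.loads(body)
assert parsed["fullResourceName"].startswith("//")
```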
https://cloud.google.com/iam/docs/viewing-grantable-roles?hl=no
Greetings, I would like to assert that certain values are not present in the response of my REST request. I'm using ReadyAPI V2.8.0. When calling the endpoint I'm expecting only values for language='nl'. If the language is 'fr', 'de' or 'en' the test case should fail. I receive a lot of answers for that request, so using the Smart Assertions is not an option. I've tried regular expressions, but I don't seem to understand them. I've seen in the SmartBear documentation that I could use Script Assertions, but I can't find snippets to get me started. Does anyone have an idea please? The response is JSON and looks like this. You won't be able to test the endpoint as there is strong security on it. Thanks in advance Kind Regards, AboveAndBeyond Hey @AboveAndBeyond Here's the script assertion that might need a little tweak for multiple values but works to assert an attribute doesn't have a specific single value. //courtesy of Rao //this is when the JSONPATH to the relevant attribute is 'pdcMetadata.pdcData.concepts.description.language' //this checks that the language attribute does NOT contain the value 'fr' def json = new groovy.json.JsonSlurper().parseText(context.response) def languageVals = json.pdcMetadata.pdcData.concepts.description.language def checkFor = 'fr' //Negative check - use "!=" so the value must not match, //Positive check - use "==" to match the value with fr assert languageVals.every {it != checkFor}, "Not expecting value ${checkFor} for language, but found" As you can see above - this just verifies that the language attribute doesn't contain the value 'fr' - if you are trying to assert that language doesn't hold 'fr', nor 'en', nor 'de' - at a worst case (which is a rubbish option) you could just add multiple script assertions to cover each language option - but I think you can just stick them in an array on the checkFor variable definition - e.g. 
def json = new groovy.json.JsonSlurper().parseText(context.response) def languageVals = json.pdcMetadata.pdcData.concepts.description.language def checkFor = ['fr', 'en', 'de'] //here I've included the values in an array cos there's >1 value to assert against assert languageVals.every { !(it in checkFor) }, "Not expecting any of ${checkFor} for language, but found one" I haven't actually had a chance to set something up to test this yet - but hopefully it'll start you off - the 3 separate script assertions would do the job - but that is so inelegant I'm ashamed of suggesting it - also - getting it to work properly is just waaaaay more satisfying! Hope this helps fella! nice one rich Hey @richie Thanks for your time! I'll look at it in a few....doing other stuff first, but looks good! I'll see what groovy says about this 😉 Cheers! Hello @richie Thanks for the code and your time. It works like a charm for me. Thanks, Kr, @AboveAndBeyond If you're content with the response - can you please mark up the post and 'accept as solution', so people know the issue has been resolved? nice one fella,
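For comparison, here is the same negative check sketched in Python (the response fragment is invented, shaped like the JSONPath used in the thread):

```python
import json

# Hypothetical response fragment matching the thread's JSONPath
# pdcMetadata.pdcData.concepts.description.language
response = json.dumps({
    "pdcMetadata": {"pdcData": {"concepts": {"description": [
        {"language": "nl", "text": "hallo"},
        {"language": "nl", "text": "dag"},
    ]}}}
})

doc = json.loads(response)
languages = [d["language"]
             for d in doc["pdcMetadata"]["pdcData"]["concepts"]["description"]]

check_for = {"fr", "en", "de"}   # values that must NOT appear
assert all(lang not in check_for for lang in languages), \
    f"Not expecting any of {check_for} for language, but found some"
```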
https://community.smartbear.com/t5/SoapUI-Pro/ReadyAPI-script-assertion-is-not-equal-to-for-JSON-response/td-p/191940
EventLog.DeleteEventSource Method (String, String) Assembly: System (in system.dll) Parameters - source The name by which the application is registered in the event log system. - machineName The name of the computer to remove the registration from, or "." for the local computer. Use this overload to remove the registration of a Source from a remote computer. DeleteEventSource accesses the registry on the computer specified by machineName and removes the registration of your application as a valid source of events. You can remove your component as a valid source of events if you no longer need it to write entries to that log. For example, you might do this if you need to change your component from one log to another. Because a source can only be registered to one log at a time, changing the log requires you to remove the current registration. DeleteEventSource removes only the source registered to a log. If you want to remove the log itself, call Delete. If you only want to delete the log entries, call Clear. Delete and DeleteEventSource are static methods, so they can be called on the class itself. It is not necessary to create an instance of EventLog to call either method. Deleting a log through a call to Delete automatically deletes the sources registered to that log. This can make other applications using that log inoperative. The following example deletes a source from the specified computer. The example determines the log from its source, and then deletes the log. using System; using System.Diagnostics; using System.Threading; class MySample{ public static void Main(){."); } } } import System.*; import System.Diagnostics.*; import System.Threading.*; class MySample { public static void main(String[] args) {."); } } //main } //MySample - EventLogPermission for administering event log information on the computer. Associated enumeration: EventLogPermissionAccess.Administer.
http://msdn.microsoft.com/en-US/library/e3b4zfb5(v=vs.80).aspx
Red Hat Bugzilla – Bug 1260990 heat updates raw_template when there is no change in raw_template Last modified: 2016-04-26 14:52:36 EDT Description of problem: heat updates raw_template when there is no change in raw_template Looking into the Heat code, it looks like Heat should update the raw_template only when there is a change in the template (update stack). The UPDATEs do not seem to update anything - the same values are used over and over again. This led to disk explosion in the case of large templates (100K-200K for a line) on MariaDB. Version-Release number of selected component (if applicable): How reproducible: OSP5 RHEL 6 Steps to Reproduce: 1. 2. 3. Actual results: when there are no changes in raw_templates, the stack still updates. Expected results: Additional info: It's suspected that the logic code is not functioning correctly and is sending some extra updates. This is the Heat code that we suspect is sending extra updates: ----------------------- def raw_template_update(context, template_id, values): raw_template_ref = raw_template_get(context, template_id) # get only the changed values values = dict((k, v) for k, v in values.items() if getattr(raw_template_ref, k) != v) if values: raw_template_ref.update_and_save(values) return raw_template_ref ----------------------- Reference Logs: 10-heat-api.log.gz 10-MariaDB.txt.gz 20-heat-engine.log.gz I looked through the MariaDB log and confirmed that it appears to be rewriting the raw_template table with the exact same template and files that it already contained (i.e. the SET and WHERE sections are identical except for the modification time). I'm not sure yet why this is happening - the code you pasted above is intended to check for this possibility and avoid unnecessary updates. 
The most likely reason is that the template deserialised from the DB compares differently to the one in memory, even though they both serialise to the same JSON representation - but I can't see how that could occur (the obvious one - unicode vs str keys and values - compares correctly). The way to check that would be to compare the deserialised values instead: # get only the changed values values = dict((k, v) for k, v in values.items() if json.loads(getattr(raw_template_ref, k)) != json.loads(v)) I've reproduced locally and have filed an upstream bug. Please ignore comment 10. Created attachment 1072014 [details] raw_template_update patch I haven't reproduced locally yet, but can you please apply the attached patch then restart heat-engine and confirm if the updates are still occurring? I do have a local unit test which confirms that this patch doesn't cause a regression, but it sounds like you've reproduced in a test environment anyway. Hi Steve, also wanted to confirm if this patch applies to Juno too? The file looks similar. Regards, Jaison R Created attachment 1072461 [details] Make ResourceDefinition round-trip stable to avoid extra writes The part of a ResourceDefinition that lists explicit dependencies was not round-trip stable. As a result, when we copied a new resource definition into the existing template during a stack update, we would end up rewriting the template unnecessarily (i.e. even though we check for changes) every time if depends_on was not specified in the resource originally. At the end of each update, we write the new template to the DB in its entirety, which removes these extra lines again, ensuring that we will experience the same problem on every update. This was causing a *lot* of unnecessary writes. This change ensures that the definition remains stable across a round-trip, so that no unnecessary changes appear in the template. 
Created attachment 1073870 [details] Log all calls to raw_template_update() I'm attaching a patch to log all calls to raw_template_update(). I still can't reproduce this issue locally, but if we try it on the system where we are experiencing the problem, this should either give us a good idea of what the cause is if it's in this function or it will rule out this function as the source. Created attachment 1073871 [details] Work with copies of the DB contents My current best guess is that the problem is *not* caused by raw_template_update(). Most likely we are not calling update_and_save() on the DB object, but rather making some innocuous-seeming change to the DB object itself and committing it as part of some other transaction. (I don't see the larger transaction in the MariaDB logs, but then I don't see *any* operations other than writes to the raw_template table in the logs - not even reads.) In particular, I think we wrote the code with the assumption that the files dict is immutable, but in some cases (a TemplateResource where the the template is not available and we have to fetch it by URL) we do actually update that dict. The templates being used are full of TemplateResources. I've attached a completely untested patch that ensures that the Template class works only with copies of the template data, and not the ones retrieved directly from the DB proxy object. Steve, please have a play around with this and check that it doesn't crash and burn horribly, and see if you can come up with a reproducer. 
I believe I can make the following assertions about this bug now: - an UPDATE raw_template is triggered on heat resource-list for templates with template resources (likely for other calls too) - It is the files dict being modified which triggers the update (not the template dict) - It is caused by template data being written to the dict whether it has changed or not [1], which causes an UPDATE because the Json type is backed by a MutableDict [2] - This affects Juno and Kilo but not Liberty because we no longer use Mutable sqlalchemy types [3] I think the appropriate course would be to fix [1] in Juno, Kilo and Liberty. I will attach a WIP patch for Juno which solves the problem in my environment. [1] [2] [3] Created attachment 1074247 [details] Only write to files dict if template data has changed tested it better, thanks ther.
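The json.loads comparison suggested in the comments can be demonstrated in isolation: two serialisations of the same template can differ as strings (so a naive comparison reports a "change" and triggers an UPDATE) while comparing equal once parsed. The template strings below are invented stand-ins for the raw_template rows:

```python
import json

stored   = '{"a": 1, "b": 2}'   # hypothetical value held in the DB row
incoming = '{"b": 2, "a": 1}'   # same template, different key order

# Naive string comparison: the row looks changed...
assert stored != incoming
# ...but the parsed values are identical, so no write is actually needed.
assert json.loads(stored) == json.loads(incoming)

# Keep only values that really differ after parsing, mirroring the
# suggested fix to raw_template_update(); 'stored' stands in for
# getattr(raw_template_ref, k).
values = {"template": incoming}
changed = {k: v for k, v in values.items()
           if json.loads(stored) != json.loads(v)}
assert changed == {}            # nothing changed -> no DB write
```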
https://bugzilla.redhat.com/show_bug.cgi?id=1260990
. First, for purposes of clarity, . will be written as (dot), (period), (radix), etc. for the remainder of this writeup. Acceptable pronunciations of (dot) include "dot," "period," "point," "mark," and "radix" depending on context. In grammar, (dot) is referred to as a period and is used to end a non-interrogative, non-exclamatory sentence. For this reason, it is the punctuation mark of choice for most sentence writers. After all, most things written are neither questions nor exclamations. (period) is also used to signal the end of an abbreviation. A semi-notable exception to this rule is the soft drink Dr Pepper which, despite not ending its abbreviation with a (period), is still read as "Doctor Pepper." In the realm of mathematics, (dot) is known as radix or decimal point[1]. The (radix) signifies the division between whole parts and fractional parts of numbers. For example, in 3.5 (base 10), there are three whole parts and 5 tenths of whatever is being counted. It should be noted that German mathematics uses the comma in the same way American mathematics uses the (dot) (I can't speak for others), so the preceding point does not apply to them. Also in the realm of mathematics, (dot) is used as the multiplication operator. The transition from x to (dot) usually occurs when one begins learning algebra, as x is the most popular algebra variable name known to man. When one begins to learn higher level geometry, (dot) also becomes known as the dot operator. The dot operator is used to determine the dot product of two vectors. In both cases, when (dot) is used as a mathematical operator, it is centered vertically. In textbooks (and other printed documents), (dot) is often used as a section separator. Instead of forcing an author to number his/her sections 1, 2, 3, 4, etc., s/he can use the . to specify to the reader that sections are related to one another. Sections 1 and 2 may be related, but one can't be certain. Sections 1.1 and 1.2, however, are almost assuredly related. 
In product releases (most notably computer software), (dot) is known as (point) and separates major release numbers from minor release numbers. For example, version 4.2 of your favorite product has most likely undergone 4 major revisions (or rewrites) and had 2 minor patches applied. Version 4.3 may not be worth your time, but version 5 will be if this product is important to you and/or exceedingly useful. In computing, (dot) is used to separate a file name from its extension. In myprog.exe, myprog is the file name, and exe is the extension (signifying executable, in this case). In URLs/URIs, (dot) is used as a delimiter in the authority field. For example, in, is the authority field. In computer programming, (dot) means many things. It is used as the concatenation operator in PHP and Perl[2]. In C, C++, C#, Delphi, Jade, Java, JavaScript, Python, and VB, (dot) is used to access member variables and functions of classes. C# also uses (dot) to qualify namespaces. Pascal uses (dot) to access record members and end a program with the end. statement. Lua uses (dot) to access members of a table. Finally, in COBOL, (dot) is used to mark the end of a statement. 1 - It's only correctly referred to as decimal point when working in the base 10 number system. Radix is easier to remember when you switch bases a lot, and "octal point," "hexadecimal point," and "binary point" sound funny. 2 - Perl will cease to use (dot) as the concatenation operator as of version 6 because it will be used to access member variables and functions. (underscore) will be used for concatenation. Thanks to Jurph for help with the pronunciations and some grammatical fixes. Thanks to StrawberryFrog for fleshing out the list of languages that use (dot) for accessing class members and the namespace information for C#. Thanks to RPGeek for reminding me that mathematical (dot) is aligned differently than all other (dot)s. 
Thanks to BlackPawn for informing me that Jade also uses (dot) for accessing class members. Thanks to small for enlightening me about the German mathematical system's use of the comma. Thanks to OldMiner for the Lua information. Thanks to ariels for the Pascal information. It's amazing we didn't have a writeup on (dot) already. Everybody knows something about it that I didn't. This node has taught me a lot...and I wrote it! Thanks for all the help. You guys are fantastic. The languages listed are not meant to be an exhaustive list. If your language of choice is not represented and you wish it was, /msg me, and I'll add it to the list. 09 March 2005 - Oops, it turns out we do already have a writeup on (dot). It's here. 10 April 2005 - This writeup was moved here from its prior unsearchable home, ..
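A few of the programming roles of (dot) described above, shown in Python:

```python
# (radix): three whole parts and five tenths
value = 3.5
assert value == 3 + 5 / 10

# (dot) accesses a member -- here, a method on a string object
text = "hello"
assert text.upper() == "HELLO"

# (dot) separates a file name from its extension
filename = "myprog.exe"
name, _, ext = filename.rpartition(".")
assert (name, ext) == ("myprog", "exe")
```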
https://everything2.com/user/jclast/writeups/%2526%252346%253B
SQL Server XML- The Crib Sheet For things you need to know, rather than the things you want to know This is written with the modest ambition of providing a brief overview of XML as it now exists in SQL Server, and the reasons for its presence. It is designed to supplement articles such as Beginning SQL Server 2005 XML Programming. Contents - Introduction. - XML - XML Support in SQL Server - Querying XML Documents - Transforming XML data - XSL - XSLT - The Document Object Model - XML Web Services - Glossary - Happy reading Introduction XML has become the accepted way for applications to exchange information. It is an open standard that can be used on all technical platforms and it now underlies a great deal of the inter-process communication in multi-tiered and distributed architectures. XML is, like HTML, based on SGML. It is a meta-language, used to define new languages. Although it is not really a suitable means of storing information, it is ideal for representing data structures, for providing data with a context, and for communicating it in context Previous versions of SQL Server relied on delivering data to client applications as proprietary-format ‘Recordsets’, either using JDBC or ODBC/ ADO/ OLEDB. This limited SQL Server’s usefulness on platforms that could not support these technologies. Before XML, data feeds generally relied on ‘delimited’ ASCII data files, or fixed-format lists that were effective for tabular data, but limited in the information they could provide. SQL Server has now been enhanced to participate fully in XML-based data services, and allow XML to be processed, stored, indexed, converted, queried and modified on the server. This has made complex application areas, such as data feeds, a great deal easier to implement and has greatly eased the provision of web services based on XML technologies. XML has continued to develop and spawn new technologies. 
There are a number of powerful XML-based languages, such as XHTML, RSS, and XAML, that have been developed for particular purposes. A number of technologies have been developed for creating, modifying, and transforming XML data. Some of these have been short-lived, but there are signs that the standards are settling down. XML Extensible Markup Language (XML) is a simple, flexible, text-based representation of data, originally designed for large-scale electronic publishing. XML is related to HTML, but is data-centric rather than display-centric. It was developed from SGML (ISO 8879) by employees of Sun (notably Jon Bosak) and Microsoft working for W3C, starting in 1996. In XML, as in HTML, tags mark the start of data elements. Tags, at their simplest, are merely the name of the tag enclosed in ‘<‘ and ‘>’ chevrons, and the end tag adds a ‘/’ character after the ‘<‘, just like HTML. Attributes can be assigned to elements. The opening and closing tag enclose the value of the element. XML tags do not require values; they can be empty or contain just attributes. Unlike HTML, XML tag names are not predefined, and are case-sensitive. In XML, there are few restrictions on what can be used as a tag-name. They are used to name the element. By tradition, HTML documents can leave out parts of the structure. This is not true of XML. XML documents must be ‘well formed’, to remove any chance of ambiguity. By ‘well formed’ they must: - Have a root element - Have corresponding closing tags to every tag (e.g. <address></address>) - Have tags properly nested. - Have all attributes enclosed in quotes. - Have all restricted characters (‘<‘, ‘>’, ‘’’, ‘&’ and ‘”’) properly ‘escaped’ by character entities (&lt;, &gt;, &apos;, &amp; and &quot;). - Have end-tags that match their start-tags exactly (tag names are case-sensitive). A valid XML document is a well-formed document that is sensible, and conforms to the rules and criteria of the data structure being described in the document. 
An XML document can be validated against the schema provided by a separate XML Schema document, referenced by an attribute in the root element. This also assigns data types, and constraints, to the data in the document. XML Support in SQL Server SQL Server is fundamentally a relational database, conforming where it can to the SQL standards. XML has different standards, so that integration is made more difficult by the fact that the XML data type standards are not entirely the same as the relational data type standards. Mapping the two together is not always straightforward. XML has considerable attractions for the DBA or Database developer because it provides a way to pass a variety of data structures as parameters, to store them, query and modify them. It also simplifies the process of providing bulk data-feeds. The challenge is to do this without increasing complexity or obscuring the clarity of the relational data-model. XML’s major attraction for the programmer is that it can represent rowset (single table) and hierarchical (multiple-table) data, as well as relatively unstructured information such as text. This makes the creation, manipulation, and ‘persisting’ of objects far easier. XML can represent a complex Dataset consisting of several tables that are related through primary and foreign keys, in such a way that it can be entirely reconstructed after transmission. XML documents can represent one or more typed rowsets (XML Information Set or ‘Infoset’). To achieve this, a reference to the relevant XML Schema should be contained in every XML document, or fragment, in order to data-type the XML content. SQL Server now provides a schema repository, or library, for storing XML schemas, and it will use the appropriate schema to validate and store XML data. XML documents of any size are best loaded using the XML Bulk Load facility, which now has the ability to insert XML data from a flat file into an XML column. 
You can insert XML data from a file into base tables in SQL Server using the OPENROWSET table function, using the ‘bulk rowset Provider’, with an INSERT statement. The data can then be shredded to relational tables by using the xml.nodes function. (OpenXML can also be used. It is retained by SQL Server for compatibility with SQL Server 2000). Storing XML XML documents, XML fragments and top-level text nodes can be stored as XML. XML can be used like any other data type, as a table column, variable, parameter or function return-value. However, there are obvious restrictions due to the fact that, although stored as UTF-16, the XML data is encoded and cannot be directly compared with other XML data, neither can it be used as a primary or foreign key. It cannot have a unique constraint either. The XML data is stored in a binary format rather than ASCII. Unlike other data types, the XML data type has its own methods to Create, Read, Update or Delete the elements within the XML document. XML data can have default values, and can be checked by a variation of the RULE, where the validation is encapsulated within a user-defined function. XML data types can be allocated data by implicit conversion from the various CHAR formats, and TEXT, but no others. There are no implicit conversions from XML data to other formats. Checking XML (XML Schemas) To specify the data type for an element or an attribute in an XML document you use a schema. >XML documents are checked against XML Schemas. The XML Schema is a definition of the data structure used within an XML Document. This indicates, for example, whether a value such as “34.78” (which is stored as a text string within the XML) represents a character string, a currency value, or a numeric value. If, for example, the XML document represents an invoice, the XML Schema describes the relationship between the elements and attributes, and specifies the data types for that invoice. 
You can check, or validate, untyped XML, whether used in a column, variable or parameter, by associating it with an XML Schema. Once checked, it becomes ‘typed’. This ensures that the data types of the elements and attributes of the XML instance are contained, and defined, in the schema. These names are valid within the particular ‘namespace’ specified. An XML Schema definition is, itself, an XML document. These are catalogued in SQL Server as XML Schema collections, and shredded in order to optimise Schema validation. They are tied to specific SQL Schema within a database. Using typed XML introduces integrity checking and helps the performance of XQuery. Accessing Data in XML XML Data type columns can be indexed, and manipulated using XQuery and XML Data Manipulation Language (XML DML), which adds ‘Insert’, ‘delete’ and ‘replace’ to XQuery. To make data-access more effective, XML in SQL Server can be indexed. To be indexed, the XML must be a column in a table that already has a primary key. The index can be over the document structure, or for the values of the elements. The XML data type can be viewed or modified by a number of methods. One can determine whether a node exists, get its value, retrieve it as table-result of a query, or modify its value. XML can be read by the XML parser into a ‘Document Object Model’ (DOM, see below) and then accessed programmatically via methods and properties, but it is not really a suitable server-side technology due to the overhead of parsing the document into the model. Shredding XML The process of converting XML data into a format that can be used by a relational database is called ‘Shredding”, or decomposition. One can either use the NODES method on an XML data type or, from a Document Object Model (DOM), use the OpenXML function. OpenXML is retained in SQL 2005, but the NODES method is generally preferable because of its simplicity and performance. 
Converting relational data to XML

XML fragments, or documents, can be produced from SQL queries against relational tables, using the SELECT ... FOR XML syntax. An inline XSD-format schema can be produced and added to the beginning of the document. This is convenient but not covered by a W3C standard.

Converting XML to other formats

XML documents can be converted into other XML documents, or into formats such as HTML, using XSL stylesheets (see below). These are themselves XML documents that provide a mixture of commands and text. A stylesheet is applied to an XML document by processing the two together via a parser.

Querying XML Documents

XQuery

XQuery, derived in part from SQL, is the dominant standard for querying XML data. It is a declarative, functional query language that operates on instances of the XQuery/XPath Data Model (XDM), using a "tree-like" logical representation of the XML. With XQuery you can run queries against variables and columns of the XML data type using the latter's associated methods.

XQuery has been around for a while. It evolved from an XML query language called Quilt, which in turn was derived from XML Path Language (XPath) version 1.0, SQL, and XQL. XQuery has similarities with SQL, but is by no means the same; SQL is a more complete language. The SELECT statement is similar to XQuery's FLWOR expression, but XQuery has to deal with a more complex data model.

The XQuery specification currently contains syntax and semantics for querying, but not for modifying, XML documents. This is made good by extensions to XQuery, collectively called the XML Data Manipulation Language (XML DML), which allow you to modify the contents of an XML document. With XML DML one can insert child or sibling nodes into a document, delete one or more nodes, or replace values in nodes. Microsoft also provided extensions that allow T-SQL variables and columns to be used to bind relational data inside XML data.
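The XML DML operations just described, and the T-SQL binding extension, look like this in use (variable contents are illustrative):

```sql
DECLARE @x XML;
SET @x = '<invoice number="42"><total>34.78</total></invoice>';

-- XML DML: insert, replace and delete nodes in place
SET @x.modify('insert <paid>true</paid> as last into (/invoice)[1]');
SET @x.modify('replace value of (/invoice/total/text())[1] with 99.99');
SET @x.modify('delete /invoice/paid');

-- sql:variable() binds a T-SQL variable inside the XQuery expression
DECLARE @n INT;
SET @n = 7;
SET @x.modify('replace value of (/invoice/@number)[1]
               with sql:variable("@n")');
```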
SQL Server 2005 adds three keywords: insert, delete, and replace value of. Each of these is used within the modify() method of the XML data type.

The XDM that XQuery uses is unlike the Document Object Model (DOM). Each branch (or "node") of the XDM tree maintains a set of attributes describing the node. In the tree, each node has an XML node type, XDM data type information, node content (string and typed representations), parent/child information, and possibly some other information specific to the node type.

FLWOR

XQuery's FLWOR expression (For, Let, Where, Order by, and Return) iterates over XML nodes using the for clause, limits the results using the where clause, sorts the results using the order by clause, and returns the results via the return clause. These constructs greatly extend the versatility of XQuery and make it comparable to SQL.

XPath

XPath was designed to navigate an XML document to retrieve the document's elements and attributes. It also provides basic facilities for the manipulation of strings, numbers and Booleans. It represents the document as a tree of nodes, and allows reference to nodes by absolute or relative paths. One can specify criteria for the nodes that are returned in square brackets.

XML Template Queries

An XML template query is an XML document with one or more T-SQL or XPath queries embedded in it, allowing you to query an XML document. The results can be transformed with an XSLT stylesheet. Template queries are used in client code to update SQL Server data. They are templates with attributes and elements that specify the data that requires updating and how it is to be updated.

UpdateGram

An updategram is an XML template that is used to insert, update or delete data in a database. It contains an image of the data before and after the required modification. It is usually transmitted to the server by a client application. Each element usually represents one record in a table. The data is 'mapped' either implicitly or explicitly.
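An updategram of the kind described takes roughly this shape (the table, column and values are invented for illustration):

```xml
<!-- Before/after images drive the update; implicit mapping by element name -->
<ROOT xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
  <updg:sync>
    <updg:before>
      <Customers CustomerID="1" CompanyName="Old Name"/>
    </updg:before>
    <updg:after>
      <Customers CustomerID="1" CompanyName="New Name"/>
    </updg:after>
  </updg:sync>
</ROOT>
```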
One can pass parameters to them.

DiffGram

This is an XML document format that is used to synchronise offline changes in data with a database server. It is very similar to an updategram, but less complex. It is generally used for 'persisting' the data in data objects.

Transforming XML data

XSL

XSL is a stylesheet language for XML that is used to transform an XML document into a different format. It includes XSLT, and also an XML vocabulary for specifying formatting (XSL-FO). XSL specifies the styling of an XML document, describing how it is transformed into another document. Although the resulting document is often HTML, one can transform an XML document into formats such as text, CSV, RTF, TeX or PostScript. An application designer would use an XSL stylesheet to turn structured content into a presentable rendition of a layout; they can use XSL to specify how the source content should be styled, laid out, and paginated onto some presentation medium. This may not necessarily be a screen display, but might be a hand-held device, or a set of printed pages in a catalogue, price-list, directory, report, pamphlet, or book.

XSLT

XSLT (XSL Transformations), a language for transforming XML documents into other XML documents, is an intrinsic part of XSL. XSLT and XSL are often referred to as if they were synonymous. However, XSL is the combination of XSLT and XSL-FO (the XSL Formatting Objects).

The Document Object Model

The Document Object Model (DOM) is a platform- and language-neutral interface that enables programs and scripts to dynamically access and update the content, structure and style of XML documents. XML represents data in a tree structure, and any parser will try to convert the flat text-stream representation of an XML or HTML document into a structured model. The Document Object Model provides a standardised way of accessing data from XML, to query it with XPath/XQuery and manipulate it as an object.
This makes it a great deal easier for application languages to read or manipulate the data, using methods and objects. The DOM defines the logical structure of the documents, and the way they can be accessed; it provides a programming interface for XML documents. SQL Server's OpenXML function actually uses a DOM, previously created using the sp_xml_preparedocument stored procedure. This function is a 'shredder' that then provides rowsets from the DOM.

XML Web Services

SQL Server 2005 will support web services based on SOAP. SOAP is a lightweight, stateless, one-way message protocol for the exchange of information in a decentralized, distributed environment. SQL Server's support makes it much easier for SQL Server to participate in systems based on Unix, Linux or mobile devices. XML web services can be placed in the database tier, making SQL Server an HTTP listener. This provides a new type of data-access capability for applications that are centralized around web services, utilizing the lightweight web server HTTP.SYS that is now in the operating system, without Internet Information Services (IIS). SOAP can potentially be used in combination with a variety of protocols other than HTTP, but the HTTP-based service is the only one in current use. SQL Server exposes a web service interface to allow execution of SQL statements and invocation of functions and procedures. Query results are returned in XML format and can take advantage of the web services infrastructure of Visual Studio. Web service methods can be called from a .NET application almost like any other method.

A web service is created by:
- Establishing an HTTP endpoint on the SQL Server instance, to configure SQL Server to listen on a particular port for HTTP requests.
- Exposing stored procedures or user-defined functions as web methods.
- Creating the WSDL.

The web services can include SQL batches of ad-hoc queries separated by semicolons.
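The endpoint-creation step above might be sketched as follows (server, database and procedure names are invented, and the exact option list varies by configuration):

```sql
-- Expose a stored procedure as a SOAP web method over HTTP.SYS
CREATE ENDPOINT sql_invoices
    STATE = STARTED
AS HTTP (
    PATH = '/sql/invoices',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = 'MYSERVER'
)
FOR SOAP (
    WEBMETHOD 'GetInvoice' (NAME = 'Sales.dbo.GetInvoice'),
    WSDL = DEFAULT,
    DATABASE = 'Sales',
    NAMESPACE = 'http://tempuri.org/'
);
```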
Glossary
- Character entities - certain characters that are represented by multi-character codes so as not to conflict with the markup.
- Infoset - This is an XML document that represents a data structure and is associated with a schema.
- Namespace - Namespaces are designed to prevent clashes between data items that have the same name but belong to different data structures. A 'name', for example, may have different meanings in different parts of a data map. Namespaces are generally defined in XML Schemas; elements and attributes in an XML document can be given namespace prefixes. Namespaces are also part of SOAP messages and WSDL files.
- RSS - an RDF vocabulary used for site summaries.
- SGML - the Standard Generalised Markup Language. HTML and XML are applications of SGML.
- WSDL - Web Services Description Language, an XML format for describing network services as a set of endpoints operating on messages containing either document-oriented or procedure-oriented information.
- XDM - the data model used by XQuery to shred XML documents.
- XDR - XML-Data Reduced, a subset of the XML-Data schema method.
- XHTML - a language for rendering web pages; basically HTML that conforms to general XML rules, and can be processed as an XML document.
- XML - an acronym for Extensible Markup Language; a language that is used to describe data and how it should be displayed.
- XML Schema - an XML document that describes a data structure and metadata rather than the data itself.
- XQuery - a query language designed to query XML data in much the same way that SQL is used, but appropriate to the complex data structures possible in XML documents.
- XSD - a schema-definition vocabulary, used in XML Schemas.
- XSL - a transformation language for XML documents: XSLT. Originally intended to perform complex styling operations, like the generation of tables of contents and indexes, it is now used as a general-purpose XML processing language.
XSLT is thus widely used for purposes other than XSL, like generating HTML web pages from XML data.
- Well-formed XML document - a properly formatted document in which the syntax is correct and tags match and nest properly. This does not mean that the data within the document is valid or conforms to the data definition in the relevant XML Schema.
- XML fragment - well-formed XML that does not contain a root element.
- XQuery - an XML query language, geared to hierarchical data.

Happy reading
- XML Support in Microsoft SQL Server 2005
- Beginning SQL Server 2005 XML Programming
- The XML 1.0 standard
- XML 1.1 standard
- The XSL family of recommendations
- HTML Reference
- The W3C website
- The XQuery 1.0/XPath 2.0 Data Model (XDM)

This article leads on to the XML Jumpstart Cribsheet, which has practical examples.
https://www.simple-talk.com/sql/learn-sql-server/sql-server-xml-cribsheet/
On Tue, 06 Nov 2012 09:49:42 +0100, "Jakob Ovrum" <jakobovrum@gmail.com> wrote:
> But, I yield until someone comes up with actual examples of how
> these UDAs are useful, because I can't think of anything
> interesting at the moment. I guess I should go read over the old
> discussions you linked (I remember participating, but can't
> remember any specifics).

On 11/6/2012 7:55 AM, Jacob Carlborg wrote:
> On 2012-11-06 16:39, Walter Bright wrote:
>> On 11/6/2012 5:04 AM, Jacob Carlborg wrote:
>>> I agree, a syntax like this would have been nicer:
>>>
>>> @mtype("key" : "value") int a; or @mtype(key : "value") int a;
>>> @mtype("value") int b;
>>> @mtype int c;
>>
>> Part of what I was trying to do was minimizing inventing new syntaxes. The
>>
>> [ ArgumentList ]
>>
>> invents nothing new but the brackets. Your proposal is both a new
>> syntax, and it can only do key/value pairs - nothing else.
>
> It depends on how you look at it.
>
> * @mtype - is the same syntax as the current syntax for attributes
> * @mtype("key" : "value") - uses the above in combination with the syntax for
>   associative array literals
>
> How about this then:
>
> @mtype("foo", 3, "bar") int a;
>
> And have the argument list be optional? I really like to have a short nice
> looking syntax for the simple use cases, i.e.
>
> @mtype int b;

There's a lot more you can do with the ArgumentList syntax than associative arrays. Furthermore, there remains the problem of how mtype fits into the name scoping system.

On 11/6/2012 8:04 AM, Johannes Pfau wrote:

Consider that you can use a tuple generated elsewhere for a UDA:

[tp] void foo();

where tp is a tuple. You can even grab the attributes from another symbol, turn them into a tuple, and apply the tuple as an attribute to a new symbol. Tuples can, of course, be sliced and concatenated.
In other words, by using tuples, you can "encapsulate" what the attributes expand to, in the same way you can change target code by changing the definition of user-defined types.

For the syntax, maybe it's better something like @() instead of [], so it becomes more greppable and easier to tell apart visually from the array literals:

@(1, "xx", Foo) int x;

Supporting annotations for function arguments is probably an important sub-feature.

---------------------

Gor Gyolchanyan:
> @flags enum A { ... }
>
> the "flags" attribute would replace the declaration of A with
> another enum declaration with the same name and same members,
> but with replaced initialization and would static assert(false,
> "flags enum can't have initializers") if any initializers are
> given.

I appreciate your idea (I think of user-defined attributes also as ways to extend the type system), but I know the engineer in Walter prefers extra-simple ideas, so maybe your idea will not be accepted :-) But let's see.

Bye,
bearophile

On 11/6/2012 8:02 AM, Tove wrote:
> On Tuesday, 6 November 2012 at 15:19:53 UTC, Walter Bright wrote:
>> On 11/6/2012 7:14 AM, Tove wrote:
>>> Hmmm, actually it doesn't work in plain function/block scope either.
>>
>> Right, I don't see a compelling purpose for that, either.
>
> Hmm, what about library based GC annotations?
>
> ["GC.NoScan"] int* local_p;

I have no idea how you could make that work.

On 06/11/2012 08:55, Walter Bright wrote:
> References:
>
> Inspired by a gallon of coffee, I decided to get it implemented. It's
> simple, based on what D already does (CTFE and heterogeneous tuples),
> easy to implement, easy to understand, and doesn't break anything. It
> should do everything asked for in the above references (except it's not
> a type constructor).
>
> You can download it here and try it out:
>
> As a bonus, that beta also can generate Win64 executables, and you can
> even symbolically debug them with VS!
> (Thanks to Rainer Schütze for his invaluable help with that).
>
> Here's the rather skimpy and lame spec I banged out:
> =====================================================
> User Defined Attributes
> -----------------------
>
> User Defined Attributes (UDA) are compile time expressions that can be
> attached to a declaration. These attributes can then be queried, extracted,
> and manipulated at compile time. There is no runtime component to them.
>
> Grammatically, a UDA is a StorageClass:
>
> StorageClass:
>     UserDefinedAttribute
>
> UserDefinedAttribute:
>     [ ArgumentList ]
>
> And looks like:
>
> [ 3 ] int a;
> [ "string", 7 ] int b;
>
> template Tuple(T...) { alias T Tuple; }
>
> enum EEE = 7;
> ["hello"] struct SSS { }
> [3] { [4][EEE][SSS] int foo; }
>
> alias Tuple!(__traits(getAttributes, foo)) TP;
>
> pragma(msg, TP);
> pragma(msg, TP[2]);
>
> prints:
>
> tuple(3,4,7,(SSS))
> 7

OK, I may break all the happiness of that news, but... Tuple in D is notoriously known to be a badly designed feature. Basing more stuff on that just because we have them is short-sighted and will only result in D's tuples being broken forever, several tuple implementations for more user confusion, or future major breakage. We still don't have any scheme for a stable D, feature testing or whatever, so everybody should be prepared for many new ICEs (or even more fun, bugs). After all, we love them or we wouldn't be using D! Who needs a programming language to be stable or reliable? Surprise feature! Yaw, no wonder D's toolchain is so wonderful! Let's not talk about these awesome static code analysis tools; Java would become jealous. BTW, I don't really like that syntax, but really, that isn't important.

On 06/11/2012 09:20, Sönke Ludwig wrote:
> Wow, that's a surprise! Just yesterday I was thinking that it would be
> really nice to have them for a piece of code ;)
>
> But shouldn't we keep the syntax closer to normal attributes and other
> languages(*)?
> I see a lot of arguments for doing that, with the only
> counter-argument that they would be in the same namespace as the
> built-in attributes (which should not be that bad, as this is very low
> level language stuff).
>
> (*) i.e. @mytype or @("string") and without the '[]'

+1

In addition, this [] thing will require lookahead when parsing to detect if we have an expression (array literal) or a declaration.

On 11/6/2012 8:29 AM, deadalnix wrote:
> In addition, this [] thing will require lookahead when parsing to detect if
> we have an expression (array literal) or a declaration.

Not really, as an array literal starting an expression is kinda meaningless, like:

a*b;

is a declaration, not a multiply.

On 06/11/2012 10:07, Walter Bright wrote:
> On 11/6/2012 12:59 AM, Jakob Ovrum wrote:
> > Problem is that there's no way to do this without having the user
> > specify which modules it should work for, like:
> >
> > import attributes;
> > import a, b, c;
> >
> > static this() // This code cannot be automated.
> > {
> >     initAttributes!a();
> >     initAttributes!b();
> >     initAttributes!c();
> > }
>
> Is that really a problem? I'm not sure.

How can AOP be implemented on top of that?

On 11/6/2012 8:23 AM, bearophile wrote:
> Supporting annotations for function arguments is probably an important sub-feature.

It would be a significant extension, and so I'd like to see a compelling use case first.

User defined attributes cannot invent new semantics for the language. And besides, 'ref' already does what you suggest.
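For the record, the syntax D eventually adopted is the @(...) form argued for in this thread rather than the bracketed beta syntax. A minimal sketch (the name mtype is borrowed from the examples above):

```d
import std.stdio;

// An ordinary type pressed into service as an attribute
struct mtype { string value; }

// UDAs attach arbitrary compile-time expressions to a declaration
@mtype("foo") @(3, "bar") int a;

void main()
{
    // __traits(getAttributes, ...) yields the attributes as a tuple,
    // which can be iterated, sliced and re-applied elsewhere
    foreach (attr; __traits(getAttributes, a))
        writeln(attr);
}
```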
http://forum.dlang.org/thread/k7afq6$2832$1@digitalmars.com?page=7
We're back with another week of challenges! Today, let's get a little creative and develop a ranking system where we can sort points and calculate the position of an individual or a team competing in a game/competition.

Note: If two or more persons have the same number of points, they should have the same position number and be sorted by name (name is unique).

For example, given the input structure:

[
  { name: "John", points: 100 },
  { name: "Bob", points: 130 },
  { name: "Mary", points: 120 },
  { name: "Kate", points: 120 },
]

the output should be:

[
  { name: "Bob", points: 130, position: 1 },
  { name: "Kate", points: 120, position: 2 },
  { name: "Mary", points: 120, position: 2 },
  { name: "John", points: 100, position: 4 },
]

Good luck!

This challenge comes from user kzm. Thank you to CodeWars, who has licensed redistribution of this challenge under the 2-Clause BSD License!

Want to propose a challenge for a future post? Email yo+challenge@dev.to with your suggestions!

Discussion

JavaScript. Live demo on Codepen. This could be considerably reduced by using ternary operators.

Hi alvaro. Could u please explain what u did in this program by putting comments in ur codepen link. Im trying to understand😊

I updated the Codepen with comments. Let me know if more details are needed.

Thanku alvaro😊

My solution in clumsy Haskell:

Python. There's probably a better way which requires only one iteration?

i want to contribute my idea with your code. i hope u enjoy with that. Maybe it's shorter:

players.sort((a, b) => {
  return (a.points == b.points ? a.name > b.name : a.points < b.points) || -1;
}).map((e, i) => ({ ...e, position: (i + 1) }))

Going functional in Perl:

Just a short one in Javascript. Edit: somehow I hadn't noticed the position bit 😳, so I've added that as a map call. Here it goes! And the result:

Shouldn't John's position in the example be 3?
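One straightforward way to implement the "standard competition" ranking the example output uses is a single sort followed by one pass (function and variable names are my own):

```python
def rank(players):
    """Sort by points (desc) then name (asc) and assign competition ranks."""
    ordered = sorted(players, key=lambda p: (-p["points"], p["name"]))
    ranked = []
    for i, p in enumerate(ordered):
        # Ties share the position of the first player with that score;
        # the next distinct score skips ahead (1, 2, 2, 4, ...).
        if ranked and p["points"] == ranked[-1]["points"]:
            position = ranked[-1]["position"]
        else:
            position = i + 1
        ranked.append({**p, "position": position})
    return ranked
```

Applied to the sample input above, this yields Bob at 1, Kate and Mary sharing 2, and John at 4, which also answers the last comment: with competition ranking the position after a two-way tie at 2 is 4, not 3.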
https://practicaldev-herokuapp-com.global.ssl.fastly.net/thepracticaldev/daily-challenge-26-ranking-position-3c72
Introduction: Killer Candy Robot 3000

I am an 11-year-old boy who loves to make stuff with electronics and programming. This year for Halloween, I decided to make a robot costume. This robot costume took me about 4 weeks to make; most of the time was soldering and programming/testing the microcontroller code.

This costume has 4 microcontrollers inside (2 in the head, and 2 in the body). In the head, an Arduino Nano is controlling the voice-activated LEDs, and an mbed Nucleo is controlling the eyes, which are made from two MAX7219s. I programmed both the Nano and the mbed using C++. In the body, an Arduino Nano is controlling the 14 LEDs that randomly blink (two on each side of the body, and 12 in an array on the front to look like computer read-outs from old movies), and a LinkIt One from Mediatek Labs is using a Grove shield to control the LEDs and LED bar next to the candy drawer, the servo that opens and closes the candy drawer, and the touch-sensitive button for controlling the servo, and it plays sound from a speaker on the side of the body. I had to wear special gloves that are used for texting on touch-screen phones in order to be able to activate the candy drawer using the touch-sensitive button.

The eyes look left, right, and forward, and they blink. The arms are made from aluminum heating ducts as well as plastic paint buckets with the bottoms cut out and painted silver. The legs are made from aluminum heating ducts and two cardboard boxes. The head and body are made from three cardboard boxes painted silver.
The mouth of the robot is covered with a nylon so that I can see through it, yet it looks black (when I am not talking). I wore this costume to my local Halloween festival and came in second place, and then I wore it during Trick-or-Treating without the legs (because it was impossible to go up and down stairs wearing them).

Here is the full list of materials I used to create this costume:

- 1 14x10x8 box for head
- 1 14x20x18 box for body (top)
- 1 14x20x10 box for body (bottom)
- 1 custom sized box for candy drawer (used scraps from other boxes to make)
- 2 8x14x4 boxes for feet (these were boxes from Samsung Series 7 Slates, very heavy cardboard)
- 21 8mm Red LEDs
- 1 8mm Green LED
- 1 Nylon Stocking (for mouth)
- 14 Images for "stickers" printed
- 1 baseball cap with the brim removed
- 1 aluminum air duct 4"x25' (cut into two parts for arms) - had some left over
- 1 aluminum air duct 8"x50' (cut into two parts for legs) - had some left over
- 2 Paint buckets (small from Lowes) with the bottoms removed
- 1 LinkIt One from Mediatek (SeeedStudio)
- 1 Grove Shield for LinkIt One from Mediatek
- 2 custom made Grove Connectors for the LED near the candy drawer
- 1 Grove LED bar for LinkIt One
- 1 Grove Servo for the candy drawer
- 1 Grove touch-sensitive control
- 1 pair texting gloves
- 2 Arduino Nanos
- 1 mbed NucleoF411RE
- 1 light switch and metal cover
- 4 lithium ion batteries
- 1 SD card for storing sounds
- 1 rechargeable speaker with audio cable plugged into LinkIt One
- many wires soldered and heat-shrunk together

The programming for the mbed was done using their online IDE/compiler. The programming for the Arduinos was done using the Arduino IDE. The LinkIt One used the Grove libraries for LinkIt One as well as SD card and sound libraries.

Step 1: Making the Head

I started by making the head. I measured and cut out the eye slots (for the MAX7219s) and the mouth (which is actually where I looked out of the head).
I used hot glue to attach a baseball cap (with the brim removed) in the center of the top so that I could wear the head.

I programmed the sound-activated LEDs for the Arduino Nano:

    int sensorPin = 4;
    int sensorValue = 0;
    int LED_1 = 2;
    int LED_2 = 3;
    int LED_3 = 4;
    int LED_4 = 5;
    int LED_5 = 6;
    int LED_6 = 7;

    void setup() {
      pinMode(LED_1, OUTPUT);
      pinMode(LED_2, OUTPUT);
      pinMode(LED_3, OUTPUT);
      pinMode(LED_4, OUTPUT);
      pinMode(LED_5, OUTPUT);
      pinMode(LED_6, OUTPUT);
      Serial.begin(9600);
    }

    void loop() {
      // read the value from the sensor:
      sensorValue = analogRead(sensorPin);
      Serial.println(sensorValue);
      if (sensorValue < 100) { // the 'silence' sensor value is 509-511
        digitalWrite(LED_1, HIGH);
        digitalWrite(LED_2, HIGH);
        digitalWrite(LED_3, HIGH);
        digitalWrite(LED_4, HIGH);
        digitalWrite(LED_5, HIGH);
        digitalWrite(LED_6, HIGH);
        delay(5); // hold the red LEDs on briefly
      } else {
        digitalWrite(LED_1, LOW);
        digitalWrite(LED_2, LOW);
        digitalWrite(LED_3, LOW);
        digitalWrite(LED_4, LOW);
        digitalWrite(LED_5, LOW);
        digitalWrite(LED_6, LOW);
      }
    }

Then I programmed the eyes for the mbed:

    #include "mbed.h"
    #include <string>
    #include <cstdio>   /* printf, scanf, puts, NULL */
    #include <cstdlib>  /* srand, rand */
    #include <ctime>    /* time */

    using std::string;

    // p5: DIN, p7: CLK, p8: LOAD/CS
    SPI max72_spi(SPI_MOSI, NC, SPI_SCK);
    DigitalOut load(D5);
    Serial pc(SERIAL_TX, SERIAL_RX);
    InterruptIn mybutton(USER_BUTTON);

    int maxInUse = 2; // change this variable to set how many MAX7219's you'll use
    int lastMode = -1;
    int currMode = 0;

    // define max7219 registers
    #define max7219_reg_noop        0x00
    #define max7219_reg_digit0      0x01
    #define max7219_reg_digit1      0x02
    #define max7219_reg_digit2      0x03
    #define max7219_reg_digit3      0x04
    #define max7219_reg_digit4      0x05
    #define max7219_reg_digit5      0x06
    #define max7219_reg_digit6      0x07
    #define max7219_reg_digit7      0x08
    #define max7219_reg_decodeMode  0x09
    #define max7219_reg_intensity   0x0a
    #define max7219_reg_scanLimit   0x0b
    #define max7219_reg_shutdown    0x0c
    #define max7219_reg_displayTest 0x0f

    #define LOW 0
    #define HIGH 1
    #define MHZ 1000000

    void maxSingle(int reg, int col) {
        load = LOW;            // begin
        max72_spi.write(reg);  // specify register
        max72_spi.write(col);  // put data
        load = HIGH;           // make sure data is loaded (on rising edge of LOAD/CS)
    }

    void maxAll(int reg, int col) { // initialize all MAX7219's in the system
        load = LOW; // begin
        for (int c = 1; c <= maxInUse; c++) {
            max72_spi.write(reg); // specify register
            max72_spi.write(col); // put data
        }
        load = HIGH;
    }

    void maxOne(int maxNr, int reg, int col) {
        int c = 0;
        load = LOW;
        for (c = maxInUse; c > maxNr; c--) {
            max72_spi.write(0); // no-op
            max72_spi.write(0); // no-op
        }
        max72_spi.write(reg); // specify register
        max72_spi.write(col); // put data
        for (c = maxNr - 1; c >= 1; c--) {
            max72_spi.write(0); // no-op
            max72_spi.write(0); // no-op
        }
        load = HIGH;
    }

    void setup() { // initiation of the max 7219
        // SPI setup: 8 bits, mode 0
        max72_spi.format(8, 0);
        // going by the datasheet, min clk is 100ns so theoretically 10MHz should work...
        // max72_spi.frequency(10*MHZ);
        for (int e = 1; e <= 8; e++) { // empty registers, turn all LEDs off
            maxAll(e, 0);
        }
        maxAll(max7219_reg_intensity, 0x0f & 0x0f); // the first 0x0f is the value you can set
                                                    // range: 0x00 to 0x0f
    }

    int getBitValue(int bit) {
        pc.printf("bit = %d\n\r", bit);
        switch (bit) {
            case 0: return 1;
            case 1: return 2;
            case 2: return 4;
            case 3: return 8;
            case 4: return 16;
            case 5: return 32;
            case 6: return 64;
            case 7: return 128;
        }
        return 0;
    }

    void OpenEyes() {
        maxAll(7, 60);
        maxAll(6, 126);
        maxAll(5, 102);
        maxAll(4, 102);
        maxAll(3, 126);
        maxAll(2, 60);
    }

    void Blink() {
        maxAll(3, 0);
        maxAll(4, 0);
        maxAll(5, 0);
        maxAll(6, 0);
        maxOne(1, 7, 0);
        maxOne(2, 2, 0);
    }

    void LookAhead() {
        maxAll(4, 102);
        maxAll(5, 102);
    }

    void LookLeft() {
        maxOne(1, 4, 78);
        maxOne(1, 5, 78);
        maxOne(2, 4, 114);
        maxOne(2, 5, 114);
    }

    void LookRight() {
        maxOne(2, 4, 78);
        maxOne(2, 5, 78);
        maxOne(1, 4, 114);
        maxOne(1, 5, 114);
    }

    int looky = 0;
    int looking = 10;
    int lookMode = 0;

    int main() {
        srand(time(NULL));
        setup();
        OpenEyes();
        while (true) {
            if (looky > looking) {
                looky = 0;
                switch (lookMode) {
                    case 0: currMode = 1; break;
                    case 1: currMode = 0; break;
                    case 2: currMode = 1; break;
                    case 3: currMode = 2; break;
                    case 4: currMode = 1; break;
                    case 5: currMode = 0; break;
                    case 6: currMode = 1; break;
                    case 7: currMode = 3; break;
                    case 8: currMode = 1; break; // ahead
                    case 9: currMode = 0; break;
                }
                lookMode++;
                if (lookMode > 9) lookMode = 0;
                if (lastMode != currMode) {
                    lastMode = currMode;
                    switch (currMode) {
                        case 0: // blink
                            Blink();
                            wait(.25f);
                            OpenEyes();
                            break;
                        case 1: // look ahead
                            LookAhead();
                            break;
                        case 2:
                            LookLeft();
                            break;
                        case 3:
                            LookRight();
                            break;
                    }
                }
            }
            else looky++;
            wait(.25f);
        }
    }

Step 2: Body Functions

I put two boxes together in order to make a body that was big enough for me to wear, house the electronics, and have a candy drawer. I initially taped it together with duct tape, painted it silver, then added stickers and some metallic tape as well. The Grove bar and candy drawer indicators (LEDs) had to be taped onto the body using metallic tape so that it looked better. The Grove touch sensor I put on the top of the body so that I could reach it with my finger. The speaker had to be mounted using tape inside the body so that it could play sounds when I opened and closed the drawer.

I used an Arduino Nano to drive 14 LEDs arranged on the body to resemble computer displays seen in old movies my dad and I watch and riff on (like MST3K does).

    int demoMode = 0;

    void setup() {
      for (int l = 0; l < 15; l++) {
        pinMode(l, OUTPUT);
      }
      randomSeed(analogRead(0));
    }

    // the loop routine runs over and over again forever:
    void loop() {
      for (int LedIndex = 0; LedIndex < 15; LedIndex++) {
        if (demoMode == 1) {
          digitalWrite(LedIndex, HIGH);
          delay(1000);
        } else {
          int onOff = random(10);
          if (onOff % 2 == 0) { // on
            digitalWrite(LedIndex, HIGH);
          } else { // off
            digitalWrite(LedIndex, LOW);
          }
        }
      }
      delay(1000);
    }

The LinkIt One provided the majority of the robot's functions inside the body.
This took a while to figure everything out, especially how to attach the servo to the candy drawer so that it opened and closed when I pressed and released the touch-sensitive control. Here is the code for the LinkIt One.

    #include "Suli.h"
    // The header names below were lost when this page was archived;
    // these are the LinkIt One SDK and Arduino headers the code appears to use.
    #include <LAudio.h>
    #include <LSD.h>
    #include <LStorage.h>
    #include "Seeed_LED_Bar_Arduino.h"
    #include <Servo.h>

    const int ROBOT_START = 1;
    const int ROBOT_ON = 2;
    const int ROBOT_OFF = 3;
    const int TRICK_TREAT = 4;
    const int THANK_YOU = 5;
    // Used below but missing from the original listing; values assumed.
    const int OPEN_TRAY = 6;
    const int CLOSE_TRAY = 7;

    const int pinTouch = 4;
    const int pinLed = 8;
    const int REDLED = 8;
    const int GREENLED = 7;

    int lastState = LOW;
    int barLevel = 1;
    int maxOpenCount = 5;
    int openCount = 0;
    int tray;
    Servo myservo;
    int maxTray = 90;
    int minTray = 10;
    SeeedLedBar bar(6, 5); // CLK, DTA

    void PlaySound(int soundId) {
      switch (soundId) {
        case ROBOT_START:
          LAudio.playFile(storageSD, (char*)"RobotStart.mp3");
          break;
        case ROBOT_ON:
          LAudio.playFile(storageSD, (char*)"RobotOn.mp3");
          break;
        case ROBOT_OFF:
          LAudio.playFile(storageSD, (char*)"RobotOff.mp3");
          break;
        case OPEN_TRAY:
          LAudio.playFile(storageSD, (char*)"RobotCandyDrawerOpen.wav");
          break;
        case CLOSE_TRAY:
          LAudio.playFile(storageSD, (char*)"RobotCandyDrawerClose.wav");
          break;
      }
    }

    void setup() {
      tray = maxTray;
      LAudio.begin();
      LSD.begin(); // init SD card
      bar.begin(6, 5);
      pinMode(pinTouch, INPUT);
      pinMode(pinLed, OUTPUT);
      LAudio.setVolume(3);
      bar.setLevel(1);
      myservo.attach(3);
      myservo.write(tray);
      pinMode(REDLED, OUTPUT);
      pinMode(GREENLED, OUTPUT);
      // PlaySound(ROBOT_START);
    }

    void OpenTray() {
      PlaySound(OPEN_TRAY);
      tray = minTray;
      myservo.write(tray);
      digitalWrite(REDLED, LOW);
      digitalWrite(GREENLED, HIGH);
      openCount++;
      if (openCount > maxOpenCount) {
        openCount = 0;
        barLevel++;
        if (barLevel > 10) barLevel = 1;
        bar.setLevel(barLevel);
      }
    }

    void CloseTray() {
      PlaySound(CLOSE_TRAY);
      tray = maxTray;
      myservo.write(tray);
      digitalWrite(REDLED, HIGH);
      digitalWrite(GREENLED, LOW);
    }

    void toggleTray() {
      if (tray == minTray) CloseTray();
      else OpenTray();
    }

    void checkButton() {
      int state = digitalRead(pinTouch);
      if (state != lastState) {
        lastState = state;
        toggleTray();
      }
    }

    void loop() {
      checkButton();
    }

Step 3: Finishing Touches

I found a bunch of funny and cool images on the Internet and printed them on my dad's laser printer, then attached them all over the robot's body using rubber cement. We got numbers and letters from Lowes as well as the light switch (which provides no functionality other than letting others flip it up and down while I stood in line at houses to get candy).

I am happy with how this project turned out; however, there are some things that would have improved the costume:

1) shoulder padding inside - my arms got a bit numb and sore from wearing this all night
2) additional support in the head - the cap worked well, but the head wobbled about and eventually my microphone for the voice sensor broke off from being rubbed by the neck foam.
3) usable feet - although they looked really cool, I couldn't really wear my legs and feet during Trick-or-Treating because they were clumsy and difficult for me to walk up and down stairs while wearing.

Thanks for looking at my costume.

2 Discussions

Great robot!

Thanks! :)
https://www.instructables.com/id/Killer-Candy-Robot-3000/
Vadim Gritsenko wrote:
>
> Hi all,
>
> Preamble: I do remember we had talks about inward flows, etc... Then
> somebody (sorry, already forgot who it was) wrote about a transformer
> which writes a file as a side effect... Then Kinga proposed to add
> handlers reacting on input streams/documents... And recently I got
> the idea that it would be quite an elegant way to handle all this if
> Cocoon were designed symmetrically: whatever abstractions we have to
> handle the outgoing flow, let them apply to the in-flow.

Let me start by saying that I consider 'symmetry-driven design' the source of all FS. So, my FS alarms will be tuned to 'sensible' as we go along.

> First of all, we already (almost) have symmetry in pipelines: generators
> are symmetrical to serializers. Generators are able to produce XML
> content out of almost anything we can think of: a physical file, an HTTP
> resource, a SQL database, an XMLDB database, the incoming request, etc.
> Unfortunately, serializers are somewhat limited compared to generators:
> the only output option we have is the output stream. Let's expand
> this to the other sources as well. Then we can have, say, a file
> serializer. Coupled with the ability of Sources to handle input streams,
> one can save an XML stream into a file, XMLDB, and other sources where a
> protocol handler exists.

This asymmetric limitation is intentional and due to the architecture of the web: the response goes back to the requestor. Always. In SMTP, for example, this is different.

The above is the main reason why our (Pier's and mine) proposal for the Mailet API addition to the Servlet Framework was rejected: we should have abstracted the concept of 'where does the response go', which is now implicitly hardwired back to the requestor.

So, I perfectly see your point since I already hit that wall once.

> Second. Now Cocoon perfectly handles aggregation, both at the sitemap level
> and using aggregating transformers. But what do we have opposite to
> aggregation? Nothing.
> Let's add a separation notion: the result will be
> several SAX streams. Every stream will have a destination pipeline. This
> pipeline will be retrieved by the "separator", and the generator of the
> pipeline will be replaced by the "separator", which will feed SAX events
> into this pipeline. As you see, it is the same mechanism the aggregator
> employs, but reversed.

This is admittedly a cool concept, but only if it is designed to replace the functionality implemented by 'fragment extractors'. The idea is to make it possible for a pipeline to be 'separated' and its content injected into another pipeline, awaiting the requestor to make another request on another URI. This would allow an *easy* way to factor out and rasterize, say, MathML namespaces into included GIFs.

But yes, I think your 'separator' abstraction might be powerful enough to allow this.

The only problem I see is that if we go down this path, we have to explicitly indicate the 'destination' of the request, since this is not a property of the serializer. (I.e., I might use the PDFSerializer both to send the result back to the client and to send an email to somebody with it. It's not the serializer's concern; it's the sitemap's concern to attach the right outputStream to the serializer.)

> Third. To top all this, a symmetrical component is to be developed to the
> X/C Include transformers. As a "separator", it will extract parts of the
> stream and send them to the other pipelines.

Hmmm, what about X/C Fragment? That would give the parallel:

  aggregation <----> separation
  inclusion   <----> fragmentation

> At last, let's consider an example.
> Let it be some request from the user
> to perform a modification of an XML resource stored in a file (poor
> man's CMS ;)
>
> <!-- inflow internal -->
> <map:match >
>   <map:generate >
>   <map:serialize >
> </map:match>

Hmmm, the need for dummy components shows that we have some problems with the sitemap semantics (see below).

> <map:match >
>   <map:generate >
>   <map:transform >
>   <map:serialize >
> </map:match>
>
> <!-- main -->
> <map:match >
>   <map:act >
>     <map:aggregate>
>       <map:part >
>       <map:part >
>     </map:aggregate>
>     <map:transform >
>     <map:transform >
>     <map:transform >
>     <map:separate>
>       <map:part >
>       <map:part >
>       <map:part >
>       <map:part >
>     </map:separate>
>   </map:act>
> </map:match>
>
> <!-- outflow internal -->
> <map:match >
>   <map:generate >
>   <map:serialize >
> </map:match>
>
> <map:match >
>   <map:generate >
>   <map:transform >
>   <map:serialize >
> </map:match>
>
> <map:match >
>   <!-- ... -->
> </map:match>
>
> <map:match >
>   <map:generate >
>   <map:transform >
>   <map:serialize >
> </map:match>
>
> PS: /dev/null: Currently, the aggregator ignores the serializer. That's to
> show that this is a dummy serializer.

Ok, I think you are suggesting something good, but I see a few concepts that we must think about more:

1) serializers should have no notion of where their output goes. This is a property of the pipeline.

2) there are three types of pipeline:

 - complete:    G -> T* -> S
 - generating:  G -> T*
 - serializing: T* -> S

Interestingly enough, for the in-out flow, internal-only pipelines are 'generating pipelines' and views are 'serializing pipelines'. So we already have this semantic; we just have to expand it a little.

What do you
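The three pipeline shapes at the end of the message are essentially function composition; a small Python sketch (with trivial stand-in components, not Cocoon APIs) makes the G -> T* -> S idea concrete:

```python
def generator():
    return ["<doc>", "hello", "</doc>"]       # G: produce an event stream

def upper(events):
    return [e.upper() for e in events]        # T: transform the stream

def serializer(events):
    return "".join(events)                    # S: flatten to output

def complete(g, transforms, s):               # complete:   G -> T* -> S
    events = g()
    for t in transforms:
        events = t(events)
    return s(events)

def generating(g, transforms):                # generating: G -> T*
    events = g()
    for t in transforms:
        events = t(events)
    return events                             # handed to another pipeline

print(complete(generator, [upper], serializer))  # -> <DOC>HELLO</DOC>
```

A serializing pipeline (T* -> S) is then just `complete` with the generator replaced by a stream fed in from outside, which is exactly the role the proposed "separator" would play.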
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200202.mbox/%3C3C665E6B.ED2E6922@apache.org%3E
Wiki premake-dev / Extending Premake

We've moved to GitHub! The latest version of this page is now here.

Here is the quick five-minute version to get things started. I (or we) will refine it as we go.

Update: I started writing up more information about the new approach, which I am now calling "Modules". I will work on cleaning up and consolidating the information as I go, using that page for the user-facing "how do I use a module" and this page for "how do I create a module".

Currently, Premake modules don't exist, really. I'm trying to change that. I would like to get to the point where major new features can be developed as modules and shared with the community easily, without the need to involve the core developers and modify core code. Mature or popular modules can then be integrated into the core distribution easily, as everything will be well-tested and self-contained.

Introducing Modules

Modules are simply Lua scripts. You enable them for your project by including them at the top of your script. For instance, you could use Andrew Gough's excellent D language extension (which doesn't exist yet) by simply adding this line to the top of your script:

include 'd'

But that raises questions like: where do I get the d.lua file(s)? Where do I put them? How does Premake find them? Some work has been done on searching a set of well-known system paths for includes; can someone dig up that code and document its behavior here?

Writing Modules

Create a new Lua file for your extension. If your module depends on other modules, include those at the top of the file. If you plan to expose functions for other modules to use, create a namespace for them.

-- include any dependencies first
include 'monodevelop'

-- define a namespace
premake.modules.d = {}
local d = premake.modules.d

-- define a function for other extensions to use
function d.doSomethingCool()
    -- do something cool
end

Now you need to deep-dive the Premake source code.
Find the function that you want to replace or extend, and use premake.override() to replace it. Here is an example of overriding the Visual Studio C++ preprocessor definitions element:

premake.override(premake.vstudio.vc2010, 'preprocessorDefinitions', function(base, cfg, defines)
    -- if my conditions are met, add something to the list of defines
    if cfg.flags.AddCoolness then
        defines = table.join(defines, { "COOLNESS=11" })
    end
    -- call the base implementation to output the element
    base(cfg, defines)
end)

The first argument is the namespace of the function you would like to override. The second argument is the name of the function within that namespace, as a string (this is not ideal; I'm not sure how to specify it as a function and still have it work if someone else has previously overridden it. Suggestions welcome). The last argument is the replacement function. The replacement function will always receive the previous implementation as the first argument, followed by the function's regular list of arguments.

Tips

I am still trying to figure out how extensions are going to work. Feel free to raise questions, concerns, whatever over on the forums.

If you find yourself copy-and-pasting code from Premake core into your extension, stop. You've found an area of code that needs to be improved. Raise the issue on the forums and we'll try to find a better way.

If you find yourself directly accessing or modifying tables of values within Premake's core, stop. We should add an API to do that manipulation for you, or move the values into configuration blocks. Raise the issue on the forums.

Open Questions

How do you share the extension? How are dependencies managed/communicated? How are extensions unit tested? How should extensions be documented? When are extensions moved into the core distribution, and how is that done?
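The premake.override() contract - the replacement receives the previous implementation as its first argument - is a general pattern that works in any language with first-class functions. A Python sketch of the same mechanism (the registry and function names are made up for illustration):

```python
# a registry of named functions, playing the role of a Premake namespace
funcs = {"preprocessorDefinitions": lambda defines: list(defines)}

def override(registry, name, replacement):
    base = registry[name]
    # the new entry closes over the previous implementation and passes
    # it along as the first argument, like premake.override() does
    registry[name] = lambda *args: replacement(base, *args)

def add_coolness(base, defines):
    # extend the argument, then delegate to the base implementation
    return base(defines + ["COOLNESS=11"])

override(funcs, "preprocessorDefinitions", add_coolness)
print(funcs["preprocessorDefinitions"](["NDEBUG"]))  # -> ['NDEBUG', 'COOLNESS=11']
```

Because each override captures the previous registry entry, overrides stack naturally: a second call to override() would wrap this one in turn, which is why the string-keyed lookup still works after someone else has overridden the same function.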
https://bitbucket.org/premake/premake-dev/wiki/Extending%20Premake
» Web Component Certification (SCWCD/OCPJWCD)

HFS mock exam questions 9 and 17 - custom tag dev

jean-francois lepetitj, Greenhorn
Joined: Sep 01, 2004  Posts: 4
posted Dec 08, 2004 04:21:00

Hi,

- In question 9, p. 563, I don't understand why answer C is wrong. Given:

10. public class BufTag extends BodyTagSupport {
11. public int doStartTag() throws JspException {
12. // insert code here
13. }
14. }

Assume that the tag has been properly configured to allow body content. Which, if inserted at line 12, would cause the JSP code

<mytags:mytag> BodyContent </mytags:mytag>

to output BodyContent?

C. return EVAL_BODY_BUFFERED

-...

Anyway, this book is really great, especially for French guys who can "joindre l'utile a l'agreable" by also learning US English frickin' expressions such as: Boom, Gee, Dude, Uh-oh, Um, Uh, Yikes, Crap

[ December 08, 2004: Message edited by: jean-francois lepetitj ]

Krzysiek Hycnar, Ranch Hand
Joined: Jan 02, 2004  Posts: 74
posted Dec 13, 2004 05:13:00

Hello Jean

>Hi,
>- In question 9, p. 563, I don't understand why answer C is wrong:
>Given:
>10. public class BufTag extends BodyTagSupport {
>11. public int doStartTag() throws JspException {
>12. // insert code here
>13. }
>14. }
> more...

As for the first problem you mention, the answer is that if you return EVAL_BODY_BUFFERED you will not see the body content, because EVAL_BODY_BUFFERED is a call for the container to put the body in a BodyContent instance, which can be obtained later, when doAfterBody() is called. The problem is that in HFS (just a few pages before the mock exam) there's a picture that SUGGESTS that if you return EVAL_BODY_BUFFERED from doStartTag() the body will be evaluated (evaluated - to me it means that you will see the body content, or the output it produces), which is NOT TRUE, because you must print the body contents yourself to see them. You're not the only one who failed to answer this question anyway.

>-...
In the pre-2.0 JSP spec the default value for <body-content> was JSP, and common sense says that this should remain true for the 2.0 spec as well (for backward compatibility). And it does. In HFS the authors say (I cannot remember the page #) that the <body-content> tag is mandatory, but they failed to add that it is mandatory only for Simple Tags. So if, in a simple tag's TLD description, you don't specify it to be something other than JSP (it must be scriptless, empty or tagdependent), it gets the default value... guess what?? (starts with J and is 3 characters long), and this in turn will blow up your page.

To the authors... In the mock exam section (for Custom Tag Development) there are about 4 questions the reader has no chance to answer correctly based on the knowledge he/she gets from the chapter (ok... there are chances, but they're described by probability calculus). I may be wrong, but in the chapter on Custom Tag Development (I don't have a copy of the book with me right now)...

- <jsp:invoke> is never mentioned
- dynamic attributes (as well as the DynamicAttributes interface) are never mentioned
- the fact that for every attribute in a tag file a page-scoped variable is created is never mentioned
- the fact that not every directive that is legal in regular JSP pages is legal in tag files is never mentioned (I mean page - which is illegal in tag files)

All of the facts listed above are, however, mentioned in the mock questions for that chapter. I was a bit confused to see them there, but hmmm... maybe you just wanted to make us (the SCWCD candidates) start studying the specs or learn more... you did it (at least in my case).

I also found some minor mistakes in the book. I'm now doing revision, and will try to gather them all and publish them here for discussion - hope this helps to improve the quality of the book, which I think, despite the mistakes, is GREAT!!
Cheers
Chris

jean-francois lepetitj, Greenhorn
Joined: Sep 01, 2004  Posts: 4
posted Dec 13, 2004 07:59:00

Thank you very much, Krzysiek*, for your explanations. For question 9, p. 563, the HFS page that you mentioned is p. 533, so the same authors that you hail will have to fix it. For all the other book errata there is the html page, so glance at it before publishing your discoveries. However, while reading this O'Reilly errata page, some ambiguities remain. Here is an example that I am just studying, p. 634, about <security-constraint> rules.

The book:
Hand written - if there were NO <http-method> elements in the <web-resource-collection>, it would mean that NO HTTP methods are allowed, by ANYONE in any role.
Key point - if no HTTP methods are specified then ALL methods will be constrained.

Errata fix:
Hand written - if there are NO <http-method> elements in the <web-resource-collection>, it would mean that ALL HTTP methods are allowed, by ANYONE in any role.
Key point - if no HTTP methods are specified then ALL methods are allowed.

Pretty different, isn't it! But NO new erratum has been written for p. 635, 1st paragraph:

A resource is always constrained on an HTTP-method-by-HTTP-method basis, although you CAN configure the <web-resource-collection> in such a way that ALL methods are constrained, simply by not putting in ANY <http-method> elements.

So which is true? Indeed, is there a difference between nothing and <http-method></http-method> (<http-method> does not seem to be a mandatory element), as there is between nothing (all users are allowed) and <auth-constraint></auth-constraint> (no user is allowed)?

JF.

* Why Chris instead of Krzysiek or Krzys?

Nikhil Jain, Ranch Hand
Joined: May 15, 2005  Posts: 385
posted Jun 14, 2006 07:07:00

Hello there, I was just answering Question 9 for custom tags. I am really confused by the explanation given for this topic and by the explanation given in HFS. If you read the table on page 546 (key differences between simple & classic tags).
The category "How to cause the body to be processed" says: Return EVAL_BODY_INCLUDE from doStartTag(), or EVAL_BODY_BUFFERED if the class implements BodyTag. The class in question 9 implements BodyTag, and therefore the answer should be C.

SCJP 1.4, SCWCD 1.4, SCBCD 1.5

I agree. Here's the link:
http://www.coderanch.com/t/168674/java-Web-Component-SCWCD/certification/HFS-mock-exam-questions-custom
On Sat, Sep 25, 2010 at 11:15 PM, anatoly techtonik <techtonik at gmail.com> wrote:
> Hi,
>
> I wonder if the situation with relative imports in packages is improved in
> Python 3k, or are we still doomed to a chain of hacks?
>
> My user story:
> I am currently debugging a project which consists of many modules in one
> package. Each module has tests or other useful stuff for debugging in its
> main section, but it is a disaster to use, because I can't just execute
> the module file and expect it to find its relatives. All imports are like:
>
> from spyderlib.config import get_icon
> from spyderlib.utils.qthelpers import translate, add_actions, create_action
>
> PEP 328 proposes:
>
> from ... import config
> from ..utils.qthelpers import translate, add_actions, create_action

This fails for two reasons:

1. "__main__" is missing the module namespace information PEP 328 needs to do its thing
2. Even if 1 is resolved, PEP 328 will resolve to the same absolute imports you're already using and likely fail for the same reason (i.e. spyderlib not found on sys.path)

> But this doesn't work, and I couldn't find any short user-level explanation
> of why it is not possible to make this work, at least in Py3k, without
> additional magic.

If you use the -m switch to run your module from the directory that contains the spyderlib package directory, it will work. The use of -m provides the module namespace information that PEP 328 needs, while running from the directory that contains the spyderlib package ensures it is visible through sys.path.

The one caveat is that the specified module is run as "__main__" and hence does not exist in sys.modules under its normal name. If it gets imported by another module, then you will end up with two copies of the module (one as "__main__" and one under the normal name). This may or may not cause problems, depending on the nature of the module being executed.
While PEP 366 describes the boilerplate you can put at the top of your module to allow a directly executed module to try to find its siblings, it still requires that PYTHONPATH be set appropriately. And if you set PYTHONPATH appropriately, then direct execution with absolute imports will work. (The explanation of the failures applies for all Python versions that I am aware of, but the -m based fix only became available in 2.6) (The impact of various command line options and the PYTHONPATH environment variable on sys.path are described at) (The basic import search algorithm is described in the tutorial:) (And the full gory specification details are in PEP 302, with a few subsequent tweaks courtesy of PEP 328 and PEP 366). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
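Nick's diagnosis is easy to reproduce. The sketch below builds a throwaway package whose main module uses a relative import, then runs it both ways: directly (which fails, since "__main__" has no package context) and via -m from the directory containing the package (which works). The package and module names are invented for the demo:

```python
import os, subprocess, sys, tempfile

def demo():
    """Return (direct_run_failed, dash_m_output) for a throwaway package."""
    with tempfile.TemporaryDirectory() as root:
        pkg = os.path.join(root, "spam")
        os.mkdir(pkg)
        open(os.path.join(pkg, "__init__.py"), "w").close()
        with open(os.path.join(pkg, "config.py"), "w") as f:
            f.write("VALUE = 42\n")
        main = os.path.join(pkg, "main.py")
        with open(main, "w") as f:
            f.write("from . import config\nprint(config.VALUE)\n")

        # direct execution: __main__ has no package context, so the
        # relative import raises ImportError and the interpreter exits non-zero
        direct = subprocess.run([sys.executable, main],
                                capture_output=True, text=True)

        # -m execution from the directory that contains the package succeeds,
        # because -m supplies the namespace info and cwd lands on sys.path
        dash_m = subprocess.run([sys.executable, "-m", "spam.main"],
                                capture_output=True, text=True, cwd=root)
        return direct.returncode != 0, dash_m.stdout.strip()

print(demo())  # -> (True, '42')
```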
https://mail.python.org/pipermail/python-dev/2010-September/104123.html
ASP.NET provides a robust framework for your Web applications. However, at times it becomes necessary to go beyond the out-of-the-box functionality. For example, when you request a resource such as an HTML page or ASP page using the browser, IIS processes that resource based on its file extension. IIS processes ASP pages using a DLL named asp.dll; similarly, IIS processes ASP.NET (.aspx) pages with aspnet_isapi.dll. But sometimes you may need to process custom file extensions. In the past, to find analogous functionality you had to step outside into the world of ISAPI programming. HttpModules and HttpHandlers in ASP.NET are fairly similar to ISAPI filters, but they implement slightly different functionality. ISAPI is an important technology that allows us to enhance the capabilities of an ISAPI-compliant Web server (IIS is an ISAPI-compliant Web server). Two components serve this purpose in ASP.NET: HttpHandlers and HttpModules.

ASP.NET Request Processing

ASP.NET request processing is based on a pipeline model in which ASP.NET passes HTTP requests to all the HTTP modules in the pipeline. Each HTTP module receives the request and has full control over it. The request passes through the following stages:

1. BeginRequest
2. AuthenticateRequest
3. AuthorizeRequest
4. AcquireRequestState
5. ResolveRequestCache
6. Page Constructor
7. PreRequestHandlerExecute
8. Page.Init
9. Page.Load
10. PostRequestHandlerExecute
11. ReleaseRequestState
12. UpdateRequestCache
13. EndRequest
14. PreSendRequestHeaders
15. PreSendRequestContent

From a programmer's point of view, HttpHandlers and HttpModules are nothing but classes that implement certain interfaces. HttpHandlers must implement the IHttpHandler interface. Some built-in classes, such as HttpApplication and Page, already implement the IHttpHandler interface. Similarly, new HttpModule-derived classes must implement the IHttpModule interface.
Again, the Framework contains built-in classes such as FormsAuthenticationModule and WindowsAuthenticationModule that already implement the IHttpModule interface. Both interfaces reside in the System.Web namespace.

Http Handlers

HTTP handlers are the .NET components that implement the System.Web.IHttpHandler interface. Any class that implements the IHttpHandler interface can act as a target for incoming HTTP requests. HTTP handlers are somewhat similar to ISAPI extensions. The interface consists of a read-only property named IsReusable, which returns a Boolean value (typically true) indicating whether another request can use the IHttpHandler instance, and a ProcessRequest method, which takes a parameter of type HttpContext and performs the job of handling the extension.

I used VB.NET for the sample code, but you can use the .NET language of your choice. First, launch VS.NET, create an empty project (in the sample code), and add a new class to it. Add a reference to the System.Web namespace and add an Imports System.Web statement at the top of the class file. Add an Implements statement to add the IHttpHandler interface.

Imports System
Imports System.Web

Public Class ClsHttpHandlers
    Implements IHttpHandler

    Public ReadOnly Property IsReusable() As Boolean _
        Implements System.Web.IHttpHandler.IsReusable
        Get
            Return True
        End Get
    End Property

    Public Sub ProcessRequest(ByVal context As System.Web.HttpContext) _
        Implements System.Web.IHttpHandler.ProcessRequest
        context.Response.Write( _
            "<html><body><h3> Your request is handled by ClsHttpHandlers " & _
            "</h3></body></html>")
    End Sub
End Class

Configuring the HttpHandler in IIS

After you develop an HttpHandler or HttpModule, you must configure IIS and ASP.NET for the new code to take effect. There are actually two steps involved in running a custom HttpHandler. First, you use the IIS Application Configuration dialog to map the file extension to the ASP.NET engine.
Second, you modify the configuration sections in the application's web.config file to specify the namespace and class you want to use to handle that extension.

Since we are creating a handler for files of a new extension, we also need to tell IIS about this extension and map it to ASP.NET. If we don't perform this step and try to access the .hand file, IIS will simply return the file rather than pass it to the ASP.NET runtime; as a consequence, the HTTP handler will not be called. For IIS, you add the required file extension in the Application Configuration dialog, as shown in the following figure. You can add or remove handlers using this dialog. The dialog associates a specific file extension with a specific handler - in this case, it tells IIS to use the ASP.NET engine (aspnet_isapi.dll). The following figure shows how to fill in the dialog for the example. Set the Executable field to the location of the aspnet_isapi.dll file on your server (your path may vary from the path shown in the figure). Fill in the Extension field with .hand.

Configuring the HttpHandler in Web.config

Next, to configure ASP.NET to recognize new handlers and modules, you modify two configuration sections in the web.config file for your application: the <httpHandlers> section and the <httpModules> section. To test the sample HttpHandler, you first need to create a new Web project with VS.NET. Next, open the web.config file of the new application and locate the <httpHandlers> section. ASP.NET uses the <httpHandlers> section to specify mappings between file extensions and the appropriate HttpHandler.
The section will already contain some handlers. For example, the following fragment shows the <httpHandlers> section on my machine:

<httpHandlers>
  <add verb="*" path="*.chart" type="MyHttpHandler.ChartHandler,MyHttpHandler" />
  <add verb="*" path="*.vb" type="System.Web.HttpNotFoundHandler,System.Web" />
  <add verb="*" path="*.cs" type="System.Web.HttpNotFoundHandler,System.Web" />
  <add verb="*" path="*.vbproj" type="System.Web.HttpNotFoundHandler,System.Web" />
  <add verb="*" path="*.csproj" type="System.Web.HttpNotFoundHandler,System.Web" />
  <add verb="*" path="*.webinfo" type="System.Web.HttpNotFoundHandler,System.Web" />
</httpHandlers>

For our example, you need to add a handler mapping like this:

<httpHandlers>
  <add verb="*" path="*.hand" type="HttphandlersAsm.ClsHttpHandlers,HttphandlersAsm" />
</httpHandlers>

Save your changes. Next, add a new file with the extension .hand to the Web application. Navigate to this file from your Web browser; you should see the following page.

You can write your own HTTP modules for the events listed in the request-processing section above. An HTTP module implements the methods of the IHttpModule interface (Init and Dispose, shown in the code below) and can register for the events exposed by the System.Web.HttpApplication object. After we have implemented IHttpModule, we can get into doing the things that are specific to our task. In this example, we need to create event handlers for BeginRequest and EndRequest.
We do this by first creating our sub procedure like this:

Public Sub SubBeginReq(ByVal s As Object, ByVal e As EventArgs)

Next we need to wire them up in the Init method that is part of the IHttpModule interface, like this:

AddHandler context.BeginRequest, AddressOf SubBeginReq

The complete HttpModule is shown below:

Imports System
Imports System.Web
Imports System.Web.Caching

Public Class ClsHttpMod
    Implements IHttpModule

    ' Register event handlers
    Public Sub Init(ByVal context As System.Web.HttpApplication) _
        Implements System.Web.IHttpModule.Init
        AddHandler context.BeginRequest, AddressOf SubBeginReq
        AddHandler context.EndRequest, AddressOf SubEndReq
    End Sub

    Public Sub Dispose() Implements System.Web.IHttpModule.Dispose
    End Sub

    Public Sub SubBeginReq(ByVal s As Object, ByVal e As EventArgs)
        Dim app As HttpApplication
        app = CType(s, HttpApplication)
        app.Response.Write( _
            "<h4>Request Begins Here... (ClsHttpMod)</h4>")
    End Sub

    Public Sub SubEndReq(ByVal s As Object, ByVal e As EventArgs)
        Dim app As HttpApplication
        app = CType(s, HttpApplication)
        app.Response.Write( _
            "<h4>Request Ends Here...(ClsHttpMod)</h4>")
    End Sub
End Class

For our example, you need to add the following code:

<httpModules>
  <add type="HttpModulesAsm.ClsHttpMod,HttpModulesAsm" name="ClsHttpMod" />
</httpModules>

Save your changes. Next, add a new file with the extension .hand to the Web application. Navigate to this file from your Web browser; you should see the following page. Using the test form, any controls or HTML markup you add to the form would appear between the two lines written by the SubBeginReq and SubEndReq handlers. Of course, you might not want to have your HttpModule classes write output, but the example serves to show that the sample works as expected, trapping the BeginRequest and EndRequest events and processing the request.

Conclusion

As you might have realized, with HTTP handlers and HTTP modules ASP.NET has put a lot of power in the hands of developers.
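The Init/AddHandler wiring above is plain event subscription: the application exposes named events, a module attaches handlers in Init, and the runtime fires them around each request. A minimal Python model of that flow (class and member names are illustrative, not the real ASP.NET API):

```python
class Application:
    """Toy stand-in for HttpApplication: events are subscriber lists."""
    def __init__(self):
        self.begin_request = []
        self.end_request = []
        self.log = []

    def process(self, url):
        for handler in self.begin_request:
            handler(self, url)                 # BeginRequest stage
        self.log.append(f"handled {url}")      # the HttpHandler runs here
        for handler in self.end_request:
            handler(self, url)                 # EndRequest stage

class LoggingModule:
    """Toy stand-in for an IHttpModule: subscribes inside init()."""
    def init(self, app):
        app.begin_request.append(lambda a, u: a.log.append(f"begin {u}"))
        app.end_request.append(lambda a, u: a.log.append(f"end {u}"))

app = Application()
LoggingModule().init(app)
app.process("/test.hand")
print(app.log)  # -> ['begin /test.hand', 'handled /test.hand', 'end /test.hand']
```

The ordering in the log mirrors what the sample page shows: the module's output brackets whatever the handler itself produces.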
Plug your own components into the ASP.NET request processing pipeline and enjoy the benefits. This article should at least get you started with these components. Now you can build custom HttpHandlers to process requests with specific file extensions or to process one particular request, or create custom HttpModules to filter or manipulate requests, either before or after the ASP.NET engine processes the request.
http://www.microsoft.com/india/msdn/articles/57.aspx
freewind lee

freewind lee created a post: Is it possible to capture all the exceptions of my custom plugin, so I can send them to a server?
I'm writing an IDEA plugin; when my friends use it, it often throws all kinds of exceptions that are hard to reproduce on my machine. I'm thinking, can I capture all the exceptions in IDEA, and sub...

freewind lee created a post: Can't run plugin with JDK 1.6?
I'm writing an IDEA plugin, and was using an old version (from several months ago) of IDEA IC with JDK 1.6 to compile and run; everything was fine. Today, I cloned the [intellij-community](...

freewind lee created a post: How to test an action?
I defined a very simple action: public class MyAction extends AnAction { @Override public void actionPerformed(AnActionEvent event) { new MyLoginDialog(event.getProject()).show(); ...

freewind lee created a post: How to write a test for two opened IDEAs?
I'm writing an IDEA plugin which will allow two opened standalone IDEAs to communicate with each other (they will send messages to a shared server socket, which redirects them to the other). But I'm not sure h...

freewind lee created a post: How to get the in-memory content of a file?
I'm writing an IDEA plugin which needs to get the content of a file when it is renamed. I just use event.getFile.contentsToByteArray() to get the content. For plain text files, this works w...
https://intellij-support.jetbrains.com/hc/en-us/profiles/2148008845-freewind-lee
From: Brian Ravnsgaard Riis (brian_at_[hidden])
Date: 2005-09-21 10:56:36

Rob Stewart wrote:
> From: "Andrey Semashev" <andysem_at_[hidden]>
>
>>Rob Stewart wrote:
>>
>>>From: "Andrey Semashev" <andysem_at_[hidden]>
>>>
>>>I've not looked at anything else, but I thought I'd address these:
>>>
>>>>- The naming of the arm/disarm methods of scope guard. They are used to
>>>>change the activity status of the guard. Personally, I feel fine
>>>>with them, but the commonly used name for disabling the guard is
>>>>"dismiss" and I just can't figure out its suitable counterpart in
>>>>English. I wonder if anyone has a proposal about this.
>>>
>>>"Dismiss" would be the right word in English to tell the guard to
>>>go away and do nothing more.
>>
>>Yes, but what about its antipode - a function to enable the guard? Note that
>>the guard may even be initially disabled (that's another reason I didn't
>>like dismiss) and then it may be enabled in some place.
>
> I see. I thought you somehow knew of "dismiss" in another
> language and didn't know the English word for it.
>
> I think Markus is right: summon is the opposite of dismiss for a
> guard. The question is whether it reads well when used:
>
> guard g;
> if (something) g.dismiss();
> ...
> if (whatever) g.summon();

"Guard"? Somehow "Summon" doesn't read very well above. I'm sorta partial to guard, but this may cause confusion with both the class name and the namespace name?

scope_guard g;
if (cond) g.dismiss();
...
if (cond2) g.guard();

Just a thought...

--
 /Brian Riis

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
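For readers following along, the semantics being named here are easy to sketch: a guard runs its cleanup action at scope exit unless it has been dismissed, and may be re-enabled afterwards. A Python analog using a context manager (the method names are just the candidates from the thread, not Boost's API):

```python
class ScopeGuard:
    def __init__(self, action):
        self.action = action
        self.armed = True            # a guard could also start disarmed

    def dismiss(self):               # disable: cleanup will not run
        self.armed = False

    def summon(self):                # re-enable (or "arm", "guard", ...)
        self.armed = True

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        if self.armed:
            self.action()            # fires at scope exit unless dismissed

calls = []
with ScopeGuard(lambda: calls.append("rollback")) as g:
    g.dismiss()                      # success path: suppress the cleanup
with ScopeGuard(lambda: calls.append("rollback")) as g:
    g.dismiss()
    g.summon()                       # re-armed: cleanup fires after all
print(calls)  # -> ['rollback']
```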
https://lists.boost.org/Archives/boost/2005/09/93921.php
I have the Script Runner plugin. I have not found a recent answer to this question that works. I have a Jira server with a custom field that I want to remove, but before I remove this field I would like to copy the contents of that custom field into a comment whenever the field is not empty. How can I accomplish this?

Thanks,
Nick

Hey Nick,

You will have to write a custom script to do so. The logic would be something like this:

- foreach project
-- foreach issue
--- get customfield value
---- if not null
----- create new comment

To get projects, refer to
To get the custom field and its values, refer to
To create a comment, refer to

Hope that helps.

Cheers
Bhushan

Bhushan,

I've started working on implementing the pseudocode you have above. Here is what I have so far:

import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.project.ProjectManager
def projectMan = ComponentAccessor.getProjectManager();
String results = "";
List projectList = projectMan.getProjects();
for (Project p : projectList) {
    results += p.getName() + " ";
}
return results

I am getting an error at line 11 that the class Project is not resolvable. Is there something else I need to import to be able to work with objects of type Project?

Edit: Nevermind, I needed to import com.atlassian.jira.project.
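Bhushan's outline is just a nested scan with a null guard. Modeled in Python over hypothetical in-memory data (a real script would go through Jira's managers and comment service instead):

```python
# invented sample data: one issue with a field value, one with an empty field
projects = {
    "PROJ": [
        {"key": "PROJ-1", "field": "legacy value", "comments": []},
        {"key": "PROJ-2", "field": None, "comments": []},
    ],
}

def copy_field_to_comments(projects):
    """Copy the custom-field value into a comment wherever it is non-empty."""
    touched = []
    for issues in projects.values():              # foreach project
        for issue in issues:                      # foreach issue
            value = issue["field"]                # get customfield value
            if value:                             # if not null/empty
                issue["comments"].append(value)   # create new comment
                touched.append(issue["key"])
    return touched

print(copy_field_to_comments(projects))  # -> ['PROJ-1']
```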
https://community.atlassian.com/t5/Jira-questions/Bulk-copy-custom-field-values-into-comments/qaq-p/704896
The ++ operator always increments a variable's value, and the -- operator always decrements. Consider this statement:

a=b++;

If the value of variable b is 16, you know that its value will be 17 after the ++ operation. But because the ++ here is a postfix operator, the expression b++ yields b's original value; the increment happens after that value has been used. After the preceding statement executes, the value of variable a is therefore 16, and the value of variable b is 17.

#include <stdio.h>

int main()
{
    int a,b;

    b=16;
    printf("Before, a is unassigned and b=%d\n",b);
    a=b++;
    printf("After, a=%d and b=%d\n",a,b);
    return(0);
}
http://www.java2s.com/example/c-book/prefixing-the-and-operators.html
Using Solong in Serum Swap

Recently, Serum has released its Swap pool. For now it supports trading using the sollet.io and solflare wallets. Now let's see how we make the code support the Solong extension as well.

Let's see how we use Solong for swap first:

Very easy, huh? You do not need to open the wallet and input a password every time, keep the wallet window in the background, or open separate wallets for different sites. You can directly try it in our host of Swap, which has Solong integrated already. Before using it, make sure you have the Solong extension added to your Chrome and an account initiated. You can also find tutorials on using Solong here.

Authorize Account

The first step is to authorize our account in Solong. We can use window.solong to check whether the user has Solong installed or not. If it exists, you can call selectAccount to pop up Solong asking for the user's permission:

solong.selectAccount().then((account) => {
  console.log("connect account with ", account)
})

In order to allow the original code to interact with our extension, here we have created a SolongProvider for providing context.
export function SolongProvider({ children = undefined as any }) {
  //const solong = new SolongHelper();
  const solong = useMemo(() => new SolongHelper(), []);
  const [connected, setConnected] = useState(false);
  useEffect(() => {
    console.log('trying to connect');
    solong.onSelected = (pubKey) => {
      console.log('helper on select :', pubKey);
      setConnected(true);
    };
  }, [solong]);
  return (
    <SolongContext.Provider
      value={{
        solong,
        connected,
        wallet: solong,
      }}
    >
      {children}
    </SolongContext.Provider>
  );
}

SolongHelper acts as a bridge for communication between Solong and Swap. Let's add selectAccount to our helper:

selectAccount() {
  if (this._onProcess) {
    return
  }
  this._onProcess = true
  console.log('solong helper select account');
  (window as any).solong
    .selectAccount()
    .then((account: any) => {
      this._publicKey = new PublicKey(account);
      console.log('window solong select:', account, 'this:', this);
      notify({
        message: `Account ${account} connected`,
        type: 'success',
      });
      if (this._onSelected) {
        this._onSelected(account);
      }
    })
    .catch(() => {})
    .finally(()=>{this._onProcess=false});
}

Sign transaction

When we call swap, we need to construct a Transaction and call signTransaction to sign it, so here we ask the Solong extension to sign. We add it to the helper like this:

async signTransaction(transaction: any) {
  return (window as any).solong.signTransaction(transaction);
}

After we call solong's signTransaction API, it will pop up the extension to ask for the user's signature.

Rejection handling

Unlike sollet.io and solflare, which use JSON-RPC to return the rejection of a sign request, Solong returns a Promise rejection, so we can catch it and deal with it:

await wallet.signTransaction(transaction).catch(()=>{
  console.log("User reject the action")
  throw {"error":"User reject the action"}
})

Final words

Overall, it's very easy to integrate Solong into your Solana Dapp.
You can put this solong_helper.tsx in your project and import the useSolong hook from it, then use the wallet from the hook the same way you use sollet.io and solflare:

import { useSolong } from "./solong-helper"

const { wallet, connected } = useSolong();

For more examples, you can refer to our fork of serum swap here: solongwallet/oyster-swap (github.com)

So now, take a cup of solong, and happy coding :)
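As an aside, the rejection-handling pattern above can be exercised without the extension by mocking the signer. The names below (Signer, signOrExplain, rejectingWallet) are illustrative inventions for this sketch, not the real Solong API:

```typescript
// Minimal stand-in for a wallet whose signTransaction can reject,
// mimicking the Promise-rejection behavior described for Solong.
interface Signer {
  signTransaction(tx: string): Promise<string>;
}

const rejectingWallet: Signer = {
  // Simulates the user clicking "reject" in the extension popup.
  signTransaction: (_tx: string) => Promise.reject(new Error("user rejected")),
};

async function signOrExplain(wallet: Signer, tx: string): Promise<string> {
  try {
    return await wallet.signTransaction(tx);
  } catch {
    // Normalize the rejection into the same error object the article throws.
    throw { error: "User reject the action" };
  }
}

signOrExplain(rejectingWallet, "tx-bytes").catch((e) => {
  console.log((e as { error: string }).error); // logs "User reject the action"
});
```

Wrapping the catch this way gives the UI one uniform error shape regardless of which wallet produced the rejection.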
https://solongwallet.medium.com/using-solong-in-serum-swap-a01f8d075192
Has anyone tried to connect to a bluetooth printer and print? I bought a Eutron EP-100 printer which has bluetooth connectivity. I want to try printing. Please help.

I'm calling all experts; it seems that nobody has an idea of bluetooth printing for PyS60. Your help is very much appreciated.

hi rsf
this is like remotely accessing or controlling a device. i dont think any kind of such sort is available in PyS60. give a feedback
Gargi Das- Forum Nokia Python Wiki Learn Python at

It depends on the printer you are using. If the printer is programmable and listens to the files it receives through mobile, then you can try to send the file to the printer using bluetooth and it will print it when given the command. I think you can at least transfer the files, if the printer has some storage capacity.
Kandyfloss V 7.0642.0 18-10-06 RH-51 Nokia 7610

thanks for the reply. I was given documentation for the eutron ep-100 bluetooth printer on codes for programming printing. However, I dont know how to apply it. I will paste some information below:

LF - Print and line feed
[Format] ASCII LF Hex 0A Decimal 10
[Range] None
[Default] None
[Description] Prints the data in the print buffer and feeds one line.
[Notes]
1. The amount of paper fed per line is based on the value set using the line spacing command (ESC 2 or ESC 3).
2. After printing, the printing position moves to the beginning of the line.

I hope you guys can help me with this.

hi rsf
again the problem in ur case is that you have to make the printer understand, through PyS60, to print a document. i dont think the printer can understand it. What you can do, if ur phone and printer are both capable of communicating with each other, is make an application by which you can use the default printing of ur phone.
Second option: you can transfer the file to the printer by searching for the device, and then i think the printer will be able to print it. The second one seems easy. plz give a feedback
Gargi Das- Forum Nokia Python Wiki Learn Python at

thanks gaba88, My bluetooth printer is connecting with pys60. as a matter of fact, it recognizes "\n" as line feed. Im wondering how to try to print "HELLO WORLD". I really appreciate everybody participating in this discussion. this could be useful later for a beginner like me.

hi rsf
thanks for the feedback. just send a file to ur printer in which hello world is written. i think the printer will print it. best regards
Gargi Das- Forum Nokia Python Wiki Learn Python at

thanks for the reply gaba88, I tried your suggestion using the code below, but it shows "error (2, no such file or directory)". Although when i try the code with my N70, it runs well. Any advice again?

#-------------------------------
from appuifw import *
from e32socket import *

try:
    phone = bt_obex_discover()
    addr = phone[0]
    port = phone[1].values()[0]
    #file = query(u'File Selection', 'text')
    send_path = u"c:\\emgr.txt"
    bt_obex_send_file(addr, port, send_path)
    note(u'File Sent')
except Exception, error:
    note(unicode(error), 'error')
#-------------------------------

Hi rsf, If you get the error "error (2, no such file or directory)" then it means that the file specified doesn't exist. Next time you post any CODE, be sure to use the CODE tags to maintain the indentation in the CODE.
Best Regards
Croozeus Pankaj Nathani

thanks for the reply. Sorry I didnt indent my code. I dont know how to do it. I pasted it indented, but it appears with no indentation. Anyway, as i said, I ran the program to send the file from my 6600 to my N70, and it works fine. But when i try my 6600 with the bluetooth printer, it shows that error. By the way, I'm still experimenting. It appears that i got a good result now.
The printer moves and scrolls its paper when i try:

CODE: sock.send(chr(27) + "J" + "1" + u'Hello world')
WHERE: ESC J 1 >> is the print code to print and feed paper.

However, when I look at the printer paper, nothing was written. hmmmm..... any suggestion? It seems im near to success!

hi rsf
your code seems to be correct. Please check that you have put the txt file in the path which you have mentioned. best regards
Gargi Das- Forum Nokia Python Wiki Learn Python at

Thanks guys. I will try to experiment with things again.
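For reference, here is a hedged sketch of how the ESC J payload from the thread could be assembled. Note the command byte: in the EP-100 documentation style quoted above, "ESC J n" takes n as a raw byte (chr(1)), not the character "1" — which may be why the paper fed but nothing printed. The print_and_feed name is my own, the address/port are placeholders, and the socket part is untested guesswork shown only in comments:

```python
# Build an ESC/POS-style "print and feed" payload for the EP-100.
ESC = chr(27)

def print_and_feed(text, feed_units=1):
    # ESC J n : print the buffered data, then feed the paper n units.
    # feed_units is sent as a single raw byte, not as a digit character.
    return text + ESC + "J" + chr(feed_units)

payload = print_and_feed(u"HELLO WORLD")

if __name__ == "__main__":
    print(repr(payload))
    # On the phone, one would then send it over the RFCOMM socket,
    # roughly (PyS60-era API, untested here):
    #   sock.connect((printer_address, port))
    #   sock.send(payload)
```

If the printer buffers text until a feed command, sending the text bytes first and the ESC J afterwards (as above) matches the "prints the data in the print buffer" wording of the LF description in the thread.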
http://developer.nokia.com/community/discussion/showthread.php/131623-printing-thru-bluetooth
Hi all.

> To make things easier for development I'd like to suggest a few madwifi
> branches created:
>
> * madwifi-1152-openhal: based on madwifi-1152, patched with my patch,
> added openhal, old hal removed. Working free solution. Should exist
> just as a reference and to allow users to checkout a working free
> alternative in the mean time.
>
> * madwifi-dadwifi-openal: based on the latest dadwifi with the
> openhal, old hal removed. As dadwifi gets updated you can pull updates
> to this branch. As the openhal advances you can make updates to the
> openhal here.

I've set up the suggested branches, with slight changes to the proposed names:

* (based on r1142)
* (based on r1827)

Neither of them has received any modifications yet; they are basically copies of the mentioned revisions. I'm currently lacking the time to import Nick's work into the madwifi-old-openhal branch, for example - it would be nice if someone else could work on that.

The MadWifi project happily provides any interested party access to the resources we have at hand - including (but not limited to) r/w access to the repository, an account for our Trac (used to manage tickets for bugs, patches, ...), e-mail, .... Let me know if you need something in that regard and I'll try to get it done. We'd love to support these efforts where possible.

Bye, Mike
-
To unsubscribe from this list: send the line "unsubscribe netdev" in the body of a message to majordomo <at> vger.kernel.org
More majordomo info at
http://article.gmane.org/gmane.linux.drivers.madwifi.devel/3531
The builder pattern, as the name implies, is an alternative way to construct complex objects. This pattern should be used when we want to build different immutable objects using the same object-building process.

1. The GoF Builder Pattern

Before starting the discussion, I want to make it clear that the builder pattern which we are going to discuss in this post is slightly different from what is mentioned in the GangOfFour “Design Patterns” book. The book says:

The builder pattern is a design pattern that allows for the step-by-step creation of complex objects using the correct sequence of actions. The construction is controlled by a director object that only needs to know the type of object it is to create.

And the book gives examples like below:

I really find it hard to make use of the above example in real-life programming and applications. The above process is very similar (though not identical) to the abstract factory pattern, where we find a factory (or builder) for a specific type of object, and then the factory gives us a concrete instance of that object. The only big difference between the builder pattern and the abstract factory pattern is that the builder provides us more control over the object creation process, and that's it. Apart from that, there are no major differences.

In one sentence, the abstract factory pattern is the answer to "WHAT" and the builder pattern to "HOW".

Now from here, we will start discussing the builder pattern the way I find it useful, especially in practical cases.

2. Definition of Builder Pattern

Let's start by giving a definition of the builder pattern:

Builder pattern aims to “Separate the construction of a complex object from its representation so that the same construction process can create multiple different representations.”

A builder pattern should be more like a fluent interface. A fluent interface is normally implemented by using method cascading (or method chaining), as we see it in lambda expressions.

3. Where We Require Builder Pattern?
We already know the benefits of immutability and immutable instances in an application. If you have any questions about it, let me remind you of the String class in Java. As I already said, the builder pattern helps us in creating immutable classes with a large set of state attributes.

Let's discuss a common problem in our application. In any user management module, the primary entity is User, let's say. Ideally, and practically as well, once a User object is fully created, we will not want to change its state. It simply does not make sense, right?

Now, let's assume our User object has the following five attributes, i.e. firstName, lastName, age, phone and address. In normal practice, if we want to make an immutable User class, then we must pass all five values as parameters to the constructor. It will look like this:

public User (String firstName, String lastName, int age, String phone, String address){
    this.firstName = firstName;
    this.lastName = lastName;
    this.age = age;
    this.phone = phone;
    this.address = address;
}

Very good. Now what if only firstName and lastName are mandatory and the rest 3 fields are optional? Problem!! We need more constructors. This problem is called the telescoping constructors problem.

public User (String firstName, String lastName, int age, String phone){ ... }
public User (String firstName, String lastName, String phone, String address){ ... }
public User (String firstName, String lastName, int age){ ... }
public User (String firstName, String lastName){ ... }

We will need some more like the above. Still manageable? Now let's introduce our sixth attribute, i.e. salary. Now it is a problem.

One way is to create more constructors, and another is to lose the immutability and introduce setter methods. Whichever of the two options you choose, you lose something, right?

Here, the builder pattern will help you to consume additional attributes while retaining the immutability of the User class.

4.
Implementing Builder Pattern

Lombok's @Builder annotation is a useful technique to implement the builder pattern; here, though, let's solve the above problem in plain code. The given solution uses an additional class UserBuilder which helps us in building the desired User instance with all mandatory attributes and any combination of optional attributes, without losing the immutability.

public class User
{
    //All final attributes
    private final String firstName;   // required
    private final String lastName;    // required
    private final int age;            // optional
    private final String phone;       // optional
    private final String address;     // optional

    private User(UserBuilder builder) {
        this.firstName = builder.firstName;
        this.lastName = builder.lastName;
        this.age = builder.age;
        this.phone = builder.phone;
        this.address = builder.address;
    }

    //All getters, and NO setter to provide immutability
    public String getFirstName() {
        return firstName;
    }
    public String getLastName() {
        return lastName;
    }
    public int getAge() {
        return age;
    }
    public String getPhone() {
        return phone;
    }
    public String getAddress() {
        return address;
    }

    @Override
    public String toString() {
        return "User: "+this.firstName+", "+this.lastName+", "+this.age+", "+this.phone+", "+this.address;
    }

    public static class UserBuilder
    {
        private final String firstName;
        private final String lastName;
        private int age;
        private String phone;
        private String address;

        public UserBuilder(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }

        public UserBuilder age(int age) {
            this.age = age;
            return this;
        }

        public UserBuilder phone(String phone) {
            this.phone = phone;
            return this;
        }

        public UserBuilder address(String address) {
            this.address = address;
            return this;
        }

        //Return the finally constructed User object
        public User build() {
            User user = new User(this);
            validateUserObject(user);
            return user;
        }

        private void validateUserObject(User user) {
            //Do some basic validations to check
            //if user object does not break any assumption of system
        }
    }
}

And below is the way we will use the UserBuilder in our code:

public static void main(String[] args) {
    User user1 = new User.UserBuilder("Lokesh", "Gupta")
            .age(30)
            .phone("1234567")
            .address("Fake address 1234")
            .build();

    System.out.println(user1);

    User user2 = new User.UserBuilder("Jack", "Reacher")
            .age(40)
            .phone("5655")
            //no address
            .build();

    System.out.println(user2);

    User user3 = new User.UserBuilder("Super", "Man")
            //No age
            //No phone
            //no address
            .build();

    System.out.println(user3);
}

Please note that the above-created User object does not have any setter method, so its state cannot be changed once it has been built. This provides the desired immutability.

Sometimes developers may forget to add a few attributes to the User class.
When adding a new attribute, to keep the source code changes contained to a single class (SRP), we should enclose the builder inside the class (as in the above example). It makes the change more obvious to the developer that there is a relevant builder that needs to be updated too.

Sometimes I think there should be a destroyer pattern (opposite to builder) that should tear down certain attributes from a complex object in a systematic manner. What do you think?

5. Existing Implementations in JDK

All implementations of java.lang.Appendable are in fact good examples of the use of the Builder pattern in Java, e.g.

- java.lang.StringBuilder#append() [Unsynchronized class]
- java.lang.StringBuffer#append() [Synchronized class]
- java.nio.ByteBuffer#put() (also on CharBuffer, ShortBuffer, IntBuffer, LongBuffer, FloatBuffer and DoubleBuffer)
- Another use can be found in javax.swing.GroupLayout.Group#addComponent().

Look how similar these implementations are to what we discussed above.

StringBuilder builder = new StringBuilder("Temp");
String data = builder.append(1)
        .append(true)
        .append("friend")
        .toString();

6. Advantages

Undoubtedly, the number of lines of code at least doubles in the builder pattern, but the effort pays off in terms of design flexibility and much more readable code. The parameters to the constructor are reduced and are provided in highly readable chained method calls. This way there is no need to pass in null for optional parameters to the constructor while creating the instance of a class.

Another advantage is that an instance is always instantiated in a complete state, rather than sitting in an incomplete state until the developer calls (if ever) the appropriate “setter” method to set additional fields.

And finally, we can build immutable objects without much complex logic in the object building process.

7.
Disadvantages

Though the Builder pattern reduces some lines of code by eliminating the need for setter methods, it still doubles the total line count by introducing the builder object. Furthermore, although client code is more readable, the client code is also more verbose. Though for me, readability weighs more than lines of code.

That's the only disadvantage I can think of. Happy Learning !!

43 thoughts on “Builder Design Pattern”

Hey buddy, I have never found a better example of the builder pattern than yours. You explained it in a very easy way, while many tutorials make it complex & hard to understand, and your way is superb. Keep going.

Great article. Have a question about this block:

public UserBuilder(String firstName, String lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
}

Should we return this as well? like other funcs

Never mind, I figured it out. It's the constructor of UserBuilder, and we don't need to return.

I like the idea of a Destroyer pattern, but it sounds like an impossible task.

I have a class User (same as your User). When I try to inherit the User class, it gives a compilation error that “no default constructor available”.

Yes, because the constructor for the User class is private, and classes with private constructors cannot be inherited. It's an immutable class; they are not meant to be extended.

Great article! You have provided great insight here! I'm not aligned with the JDK examples of the Builder Pattern, StringBuilder#append and StringBuffer#append, because these two are not immutable, and also the Builder pattern is best suited for classes that have a large number of instance fields, whereas for StringBuilder#append and StringBuffer#append only a single value is required. Please correct me if I'm missing anything in my understanding.

I will suggest you look at the pattern from the perspective of the steps to create the full object, rather than the number of fields. That will make things more clear.

Thank you, it resolves my doubt. Very nicely explained.
I have a query in this pattern that as User class might have some fields as mandatory and some as optional so even if object is built using builder pattern that has only few optional fields set and others will be null then how does it make difference to create object using constructor in the same class that will have required and applicable fields set with some value and not applicable fields as null. Please help me understand this. Thanks. Very good reasoning. You are correct that with mentioned optionality, there seems no difference. In last, it all boils down to readability and ability to create immutable objects, which is not possible using constructors. That is the good point, thank you. Actually there is a clear difference. As stated before, once there is an optional object in the constructor, the developer is forced to initiate with a nullin cases where the value is not known. In the builder factory pattern, the optional fields are simply skipped and the code looks neater. The developer doesn’t have to instantiate them. Hi Lokesh! First of all, sincere thanks for your clear explanation! Can you please clarify my query below? With the example code that you provided, the immutable User instance is created for no doubt. But if the User wants to update some detail, let’s say the phone number, please explain clearly how the behaviour would be. Will it be again creating a new instance or update the existing instance. Thank you Regards Datt In such cases, I would not like to change the existing user object. Rather, I will define a method fromUser(User u)in userbuilder which will take an existing user instance and populate create a new User object with same data and a chance to update desired fields before constructing the object. Something like this: Thanks for your reply Lokesh! If we follow the above approach, 1. How about the first user object that was created before modification? 2. 
Also, if the new object is created with modified data as mentioned, there could be chance for two identical objects to exist with first name and last name as common in both of them and other details in different way. If this point is correct, Won’t it be a memory loss? Please correct if am going wrong in understanding. If possible please put this code patch in main program for better reference. Thanks In that case, you can provide methods for firstName and lastName fields also. Don’t make them final and treat as other fields. I am studying for my Java professional Exam and I really can’t see the huge advantage of this pattern. For example let’s say I have an Animal class and an AnimalBuilder class: Animal needs a new mandatory parameter. Not only do I have to change it in the Animal class, but in the AnimalBuilder class, plus all the classes using AnimalBuilder. To be honest all I can see in terms of advantages is more readable code, but even then it is a stretch. Please tell me I am wrong and show me the light 🙂 Adding a new mandatory parameter is a big design change. To accommodate this change, you may follow below steps: 1) Add attribute to Animal class, assign some default value to it, AND one more constructor which sets its value from parameter. 2) Modify AnimalBuilder. Add new builder method to use this new constructor. Till now, no change is needed in any other class. Now, you may use new builder method in new classes OR may use in older classes where its absolutely necessary. The GOF book actually says “Separate the construction of a complex object from its representation so that the same construction process can create different representations.”. Please correct your post. Hi Anijith, I clearly mentioned that this definition is slightly different from what is mentioned in GangOfFour. Your post says “The book says: The builder pattern is a design pattern that allows for the step-by-step creation of complex objects using the correct sequence of actions. 
The construction is controlled by a director object that only needs to know the type of object it is to create.” But, I don’t see the above description in the GOF book. Instead the book says” “Separate the construction of a complex object from its representation so that the same construction process can create different representations.” Brilliant! Thank you for sharing. Been Inspired, so I ventured to borrow your example and do another with the same goal but slightly different idea. Thanks for sharing this. awesome!! really I have gone through many sites but couldnt understand the builder pattern this much clearly… Thanks 🙂 I am trying to implement a similar problem,except that it has some nested builders.I have the builders created separately. Now,the problem is how do I write the client code . Also,I have seen implementations using Interfaces,when are they used? is it like interfaces give us the ability to have a control over the sequence in which we want to call out the methods? I have a user class: with similar fields as to what you have, Then I have an Address class with usual fields such as city,state,zip etc. How do I call the builder in Address inside the builder for User? And while creating the user I want to give the option of creating the address or not. Can this be done? DO you need the builder code here? its pretty big,hence did not add.But,assuming its on the similar lines of yours how would go ahead and do it? What I have tried is this.. public HomeAddressBuilder havingHomeAddressAs() { //HomeAddress is the other class,and it contains the static class for the builder inside it. return new HomeAddress.HomeAddressBuilder(); } is that the correct way? so if I had the user builder will it look like this: If I will be in your place, I will not build address using userbuilder reference. Address is another example of complex data having lot’s of possible fields with multiple permutations and combinations internationally. 
So I will create a separate builder object for address as well. //Build the address using addressbuilder //Build the user using userbuilder AND set homeaddress obtained in above step. This approach is good for two reasons: 1) Your code remain clean and easy to extend and modify. Always prefer simplicity in you code. Never make it unnecessarily complex. 2) There is no dependency between user class and address class builders. So any change in one class will not affect other class. Single Responsibility Principle. Hi Lokesh, Thank you very much for the reply.I got around that problem.I actually had created 2 separate classes for the purposes you mentined,sorry for mentioning the nested word.I also had 1 more problem. My test team wants to create a user even though no field has been set,so my approach was,when I am telling my builder to return the user object back,before that check if any fields have been set to null or are empty strings.If they are ,then recreate the user object with default-values.And then return the user.Is this approach correct? Also,another problem I see with this approach is,I will be getting the same method name appearing twice,meaning,if set the name of the user for the first time,and I want to set some other properties, i will still be able to select the setName method and set the name again,i.e. i am not able to restrict the fields/reduce the fields that will now be set,I have an approach of creating the interfaces and making return type of the next field,i.e. make each field mandatory.Is that a good approach? Thank you very much for the response. I will suggest to create a constructor with all mandatory fields as arguments. This way you can force the coder to pass all mandatory fields even before getting the builder object. This will also remove unnecessary checks for mandatory fields inside build() method. You can remove the setter methods for these mandatory fields to avoid overriding them wrongly. 
Note: Keep the number of mandatory fields to minimum. “java.lang.Appendable are in fact good example of use of Builder pattern” ??? 🙁 “All implementations of java.lang.Appendable are infact good example”.. Can you please tell me, why you disagree? Is above code is example of builder pattern? I am not able find any reason to say any piece of above code as builder pattern. Perhaps I am missing something you want to say. Can you please highlight, what and why in above code you see a possibility of builder pattern. I am just new to learn design pattern and I want to learn design pattern when in real world scenario I cannot think how to solve real world problem using design patterns. Please do not mind on my stupid question. I post above question because of I have Registration inner class and RegistrationDalHelper outer class. In registration class I have not create any getter and setter. I access these value inside the RegistrationDaoHelper class because of that I have create the Outer class and make inner registration class.I can create the getter for the registation private members varialbe to accesss these outside the class. I just post question because I have found that Inner and outer class inside the code. Sorry again for stupid questions. Please tell me why you have not find any reason for above code to be a builder patter.Please make just bullet points Thanks Hi Sohaib, no need to say sorry ever. And there are no stupid questions in discussions. Further, Builder pattern is used to build an object using simple steps; which otherwise will take a very complex logic to build. Breaking building process in steps keep it simpler and easy to use. An example I have already covered into the tutorial. Your example can be more related to Proxy pattern where a class functioning as an interface to something else. Thanks Really Awesome Article …… Getting lot of knowledge from your article Lokesh. 
Hi Lokesh, After your super duper blogs it made me to read more artcle ..after chetan bhagat ..I felt to read is only your blogs 🙂 … I liked your StringBuilder with any data type append method implemented using builder pattern , but in your case I am using inner class for object creation and it divert me instead of building my own class using some inner class…is not possible like StringBuilder. public class Pizza { private final int size; private final boolean cheese; private final boolean corn; public Pizza(int size){ //check the state if(0==size){ throw new IllegalStateException(“oop! size zero pizza not exist in world :)”); } this.size = size; } public Pizza cheese(final boolean cheese){ this.cheese = cheese; return this; } public Pizza corn(final boolean corn){ this.corn = corn; return this; } } May I wrong in this case? Thanks Vinod Vinod, thanks for the kind words. Next, your question about Pizza example. I strongly believe that design patterns are just concepts, and they do not enforce any particular implementation (literally any). So, if you are making the pizza building process easy by breaking the whole process into multiple small steps such that you can call PizzaBuilder().createBase().addStuff1().addStuff2()… or any such pattern, then you have implemented builder pattern. Whether you have used inner classes or not, doesn’t make any difference. It’s only a design solution, not implementation guideline. Hi Lokesh, Nicely explained. But I have one question. You stated the following line before explaining builder pattern: “Now what if only firstName and lastName are mandatory and rest 3 fields are optional.” How is this problem solved using Builder as I still need to include all fields and their values while object creation in the line: new User.UserBuilder(“Lokesh”, “Gupta”).age(30).phone(“1234567”).address(“Fake address 1234”).build(); Also, if additional fields are added, they also need to be included. 
Ujjawal, If you write new User.UserBuilder(“Lokesh”, “Gupta”).build(); then also you will get a User object with only first name and lastname set. That means other fields are optional. If want to set age then call “.age(24)” OR if you don’t want to set it, simply leave it. So essentially they are optional.
https://howtodoinjava.com/design-patterns/creational/builder-pattern-in-java/
As part of the GStreamer invasion of KDE, Ronald Bultje has implemented Kiss, a simple version of a Totem-like video player for KDE with a GStreamer backend (screenshot). I took a look at the source code and it looks fairly simple -- it's actually 23K worth of well-separated code in all, with the other 2.8M being makefile/configure overhead. Even if one doesn't understand the GStreamer parts, you could probably copy and paste them into your own KDE application. (Incidentally, this application seems to have been made with KDevelop, for those of you who need inspiration.)

Why cut and paste when it is just fine? What's wrong with KDevelop?

1. Maybe because you want to reuse the code? 2. Who said there was anything wrong with KDevelop?

Wait a second, 23 K of code and 28 000 K of overhead? What the hell?

I think you mean 2800 KB of overhead.

The price of source code portability, I suppose? Ever wonder how all that "./configure; make; make install" magic happens? *shrugs* It's been like this for a while; try looking at the source code for Ky, for instance.

The 2800k of overhead is lots of configure and makefile magic implemented in a generic way to cover most, if not all and then some, ways to get the build to work on different systems. You could write the configure and makefiles from scratch and the resulting overhead would become negligible, but the time you end up spending maintaining and debugging the build system for different targets is time you could spend writing code for the application. Or you could remove unneeded parts from the generic configure/makefiles, but the cost is again time. You trade size of the source tree against development time; since this has no effect on the compiled app, it's a good solution.

Weird. A PyQt app works on just as many systems and has about 500 bytes of overhead (the setup.py script). C/C++ really is lame in this area.

Hey, this is not a fair comparison.
And since you usually have to write 2-10 times the lines of code in C/C++ compared to Python to do the same thing, it's not fair at all :-)

But if the PyQt binding uses autoconf/automake (I don't remember, I'm not a Python fan) then it has that large overhead itself, so indirectly you still have it. Not to mention the overhead in the build system of KDEBase/Libs/... Probably the only solution would be a "runtime" for configure, so most of its stuff is in its own package and not in the source tarballs. But there are a lot of people (not me) claiming HD/RAM/BW are cheap and wanting self-contained packages, so I'm not sure how such an idea would be taken.

No, sorry, you can't make both arguments. Yes, PyQt (and Python) have some overhead like that. But a PyQt app doesn't. If you want to make the argument that it does, then you can't make the argument about the "runtime", because what Python gives you is exactly that (distutils) :-)

You are going too far; the point was about applications, not the whole framework. Following your reasoning you forgot to mention the build system for X, the compilers and the kernel, and that's ridiculous. The requirements for installing an application, either binary or from source, are that the needed infrastructure is in place (libraries etc.). Anything concerning the install of those has no bearing on this case, as they are already installed. Based on the quality they now have, I consider the Python, Ruby and JavaScript packages from kdebindings an integral part of KDE, and I rate it as a bug if they are not included in the distribution.

That's because PyQt needs Python to be installed already. Automake/Autoconf relies on only a few tools which are preinstalled on almost every Unix system (like HP-UX, Solaris, AIX). However these tools are implemented differently on each Unix, so Automake/Autoconf has to bend over backwards to be compatible with everybody.
Plus it has the capability to check several hundred compatibility issues that differ between Unixes (plus your own checks that you define) so that your source code can be compatible. Python is the same everywhere, so PyQt does none of that. It would be nice if C were the same everywhere too, but for historical reasons every Unix is different. Luckily Automake/Autoconf are here and do the compatibility work for you, at the price of a couple of MB of automatically generated files. It is a good trade-off.

The auto* files are hardly autogenerated. Every line of your configure file was written by someone as a macro. Also, pretty much no one understands the things, so if you need to test for, say, libmikmod, you just don't, and it seems to be portable, but isn't. As for auto* requiring only a few tools... well, that's their problem. The way they are written sucks. Mind you, I have nothing better to offer, except maybe publishing programs in ways that force platform vendors to wake up and offer a decent platform to work on, but that's just wishful thinking.

Python being the same everywhere is a Python *plus* as a platform. C and C++, as platforms, don't have it. That's where my point about C and C++ being lame in that area comes from. BTW, here's how you test for PyQt in a setup.py:

try:
    import qt
except ImportError:
    print "You need to install PyQt"
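The quoted snippet is Python 2. For reference, a hedged modern sketch of the same dependency check (the module and package names here are only examples) can probe for a module without importing it:

```python
import importlib.util

def require(module, install_hint):
    # Probe for the module without importing it; abort with a hint if absent.
    if importlib.util.find_spec(module) is None:
        raise SystemExit("You need to install " + install_hint)

require("json", "the Python standard library")  # always present, so this passes
```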
To give all those good C++ programmers an easier job publishing and sharing their great ideas and work with everybody, on a lot of different systems designed for a lot of different tasks, auto* was made. Very handy, with unfortunately a lot of overhead. Oh my... let's start a discussion about it, because WE Python guys don't have this overhead. Whoopsy, we are much better. Well, in that area. Okay, you made your useless point. A Ferrari IS faster but cannot transport the milk from a hundred cows. Wow, the milk tank is slow, but hey, it CAN transport a lot of milk. That's why they are both there. So if you compare those two completely different things, compare them fully, test them, write a report and publish it :) Or just code good programs :). Well anyway, it stays fun to make all these silly replies :).

Auke

Will anyone be creating C++/Qt bindings to GStreamer for KDE 4, similar to what's happened with D-BUS? Judging by GNOME it's a powerful backend, but it doesn't seem to mesh perfectly with KDE-style code. Also, does anyone know why the author is using greater-than and less-than instead of plain old equals when querying the GStreamer state? I'm not saying this is wrong; I've just never used GStreamer before and I'm curious to know what's going on there.

The greater-thans are there for no real reason. You could replace them with equals and it'd work just as well (state changes are guaranteed to be one level at a time). I guess I use greater-thans to make the purpose of the code (as in: checking a state shift) somewhat clearer.

Wow, could be a nice starting point for kmplayer having a GStreamer backend too... only I get segfaults in KissWindow::open... ah, I should use the file dialog (silly me, the Mac menubar).. hmm, no video, only sound (mplayer says [mpeg4 @ 0x84857a8] looks like this file was encoded with (divx4/(old)xvid/opendivx) -> forcing low_delay flag). Btw, anyone know if Debian's gstreamer-0.8 from testing is crippled or not (e.g.
can it play avi and such)? This is what I have:

$ apt-show-versions | grep gstreamer
libgstreamer-plugins0.8-0/testing uptodate 0.8.5-1
gstreamer0.8-a52dec/testing uptodate 0.8.5-1
gstreamer0.8-oss/testing uptodate 0.8.5-1
gstreamer0.8-hermes/testing uptodate 0.8.5-1
gstreamer0.8-alsa/testing uptodate 0.8.5-1
gstreamer0.8-vorbis/testing uptodate 0.8.5-1
gstreamer0.8-sdl/testing uptodate 0.8.5-1
gstreamer0.8-dv/testing uptodate 0.8.5-1
gstreamer0.8-dvd/testing uptodate 0.8.5-1
gstreamer0.8-esd/testing uptodate 0.8.5-1
gstreamer0.8-mpeg2dec/testing uptodate 0.8.5-1
gstreamer0.8-jpeg/testing uptodate 0.8.5-1
gstreamer0.8-doc/testing uptodate 0.8.7-1
gstreamer0.8-speex/testing uptodate 0.8.5-1
gstreamer0.8-swfdec/testing uptodate 0.8.5-1
gstreamer0.8-mikmod/testing uptodate 0.8.5-1
gstreamer0.8-flac/testing uptodate 0.8.5-1
libgstreamer-gconf0.8-0/testing uptodate 0.8.5-1
libgstreamer0.8-dev/testing uptodate 0.8.7-1
gstreamer0.8-jack/testing uptodate 0.8.5-1
gstreamer0.8-cdparanoia/testing uptodate 0.8.5-1
gstreamer0.8-x/testing uptodate 0.8.5-1
gstreamer0.8-sid/testing uptodate 0.8.5-1
gstreamer0.8-caca/testing uptodate 0.8.5-1
gstreamer0.8-gsm/testing uptodate 0.8.5-1
gstreamer0.8-plugin-apps/testing uptodate 0.8.5-1
gstreamer0.8-plugins/testing uptodate 0.8.5-1
libgstreamer-gconf0.8-dev/testing uptodate 0.8.5-1
gstreamer0.8-gnomevfs/testing uptodate 0.8.5-1
libgstreamer0.8-0/testing uptodate 0.8.7-1
libgstreamer-plugins0.8-dev/testing uptodate 0.8.5-1
gstreamer0.8-misc/testing uptodate 0.8.5-1
gstreamer0.8-artsd/testing uptodate 0.8.5-1
gstreamer0.8-aa/testing uptodate 0.8.5-1
gstreamer0.8-festival/testing uptodate 0.8.5-1
gstreamer0.8-tools/testing uptodate 0.8.7-1
gstreamer0.8-mad/testing uptodate 0.8.5-1
gstreamer0.8-audiofile/testing uptodate 0.8.5-1
gstreamer0.8-theora/testing uptodate 0.8.5-1

Hi Koos, you want to use at least gst-plugins 0.8.6 (as the website also mentions).
0.8.5 will work for audio, and with a small hack it'll work for video too, but I don't want those hacks in example code. ;-) The hack is that you need a different property for getting the video size in 0.8.5 than in 0.8.6. With the current code and 0.8.5, Kiss will always claim that the video is 0x0 pixels, and thus not show the video window (user experience: no video). For playback of DivX, you will additionally want gst-ffmpeg. This package is not yet available in Debian's main archives, but it's available in several unofficial repositories. Ask the Debian people for details. RPMs for both gst-plugins-0.8.6 and gst-ffmpeg are available all over the internet. If you want to add a backend to kmplayer, please do. I'll try to help if I can.

Got it working now for an MPEG-PS stream w/ 0.8.6 debian/unstable. Does crash on exit, but for kmplayer that's no problem :-) FFmpeg can wait... Did need to resize the window a bit before the video actually stayed, but I had that with Xine too. Give me a few days and I'll have something to start with.

It would be nice if Kaffeine added this functionality as well. It already has multiple backends (xine/arts/netscape), so adding another plugin should be easier than with kmplayer.

While I'm a fan of Kaffeine and not as much KMPlayer, this is totally inaccurate. Just read the first line on their page: it supports multiple backends (MPlayer/Xine/ffmpeg/ffserver/VDR); I think even more than Kaffeine. -//--- standsolid --->>

Kaffeine supports the kmplayer kpart, so... :)

Ok, you're right. It didn't have them last time I looked, but then neither did Kaffeine at that time. Easier is hard to measure, no? For me it's obviously easier to add it to kmplayer. Actually it already works, but only for a local file.

More and more applications arise that use a backend (GStreamer, NMM, ...) directly and in their own way. I think it's time for a KDE-standard way, an abstraction layer, so that there are not too many KDE applications fixed on one backend.
I, for example, would prefer using NMM. The abstraction layer should be agnostic of the multimedia backend used. Just my two cents... Florian

If you had read any of the kdemultimedia discussions or summaries of the last week, you would know that such a layer for basic audio functionality is planned.

I know that it's planned. But planning is just the first step. What I wanted to say is that they need to hurry with the planning. But not for complicated audio functionality. The layer that will exist will be suitable for things like the track-playing preview thingy in K3b, and possibly also in JuK. It won't be suitable for a dedicated media player. For that, GStreamer looks (to me) to be the best option, but what would be nice is an option "approved" by KDE. (Of course GStreamer has the approval of the, ah, "other people".)

I think the major disadvantage of GStreamer is the missing network transparency, which is included in NMM.

True, but the time needed to add network transparency to GStreamer would be smaller (15 days of work to design and implement NMM-style or better network transparency was our estimate when we discussed it last) than the time needed to create all the GStreamer plugins etc. for NMM.

Seriously, does a video player need a network-transparent multimedia backend? Are you seriously going to play this video in a distributed network fashion?

I am not talking about the backend for this specific player but the standard backend used for all KDE applications. KDE is often deployed on thin clients. The applications run on the server and display on the thin client. It wouldn't do to play Kiss on the server and have the sound play *on the server*. The sound has to play on the client.

The A is before the D, retard; first get GStreamer working well with the KDE environment and ask for network transparency later.

yeah, it's called gstreamer. people make you think that, but it's not really the case (yet).
The configure script doesn't insert the CXXFLAGS for the gstreamer and kde include paths.

Correct, it sets CPPFLAGS instead. Requires automake >= 1.7, though; it won't work with 1.6 (which you appear to be using). I've disted using 1.7, so you shouldn't see that unless you re-ran make -f Makefile.cvs.

Now if only GStreamer wouldn't crash and burn on startup, I could try some of these new KDE applications. But since the aRts-based apps work just fine, I don't feel any pressing need to debug GStreamer...

Exactly what I was thinking every time I tried GStreamer; let's wait until it's robust first. But this is a nice gift to at least start with. Remember, Xine was also a crashing beast not long ago.

I mean, if Totem had as many features as this example application, nobody, not even me, would use it.

I think I got that description from the original link submission to this site, and I guess Ronald himself described it as Totem-like at some point, although he did say "simple version". Sorry. :)

Well, I don't think that the classic GNOME app has got so many more features than this code-example app.

Find me any movie player other than Totem that has a telestrator mode! Is it your mission to troll other news sites and bash anything that can play video?

Where did you see bashing? Buy goggles. Your hostility is uncalled for and out of line, if I may say so.

Another KISS video player is Codeine. It uses Xine as its backend.

I still just use CLI mplayer. 9 times out of 10 any GUI just gets in the way for me, though I usually use something like kmplayer for DVDs.

I'm trying to compile the Kiss project, but the configure script keeps failing; it says I don't have GStreamer version 0.8 or higher, although I installed GStreamer 0.10. What could be the problem?
https://dot.kde.org/comment/70090
I have now started two separate projects using .NET Standard 2.0 and the blank template, and on both, at first everything is fine, but after 5-10 minutes, when I start to add some code and compile, the Android project suddenly loses the reference to the shared project. I get the following error when I try to run the project:

(1,1): error: Dependent project Savely.App.csproj failed to build, using old version.

But when I try to build the project, it builds fine, except this error is still there:

'App' is a namespace but is used like a type

The only changes I have had time to make were adding the Microsoft App Center NuGet packages to the shared project, but even after reverting those changes it's still not back to normal. Does anyone know what the issue could be?

Answers

Did you ever find a solution to your issue? It looks like it was getting confused with the namespace XXX.App because of the default App.xaml file. My MainActivity.cs looked like this: I renamed my shared .NET Standard project (and updated all namespaces) to XXX.MobileApp and now it seems to work.
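The "'App' is a namespace but is used like a type" clash is C#-specific, but the underlying failure mode (one identifier resolving to a namespace/module instead of the class you meant) can be illustrated with a small Python analogue; the app/App names here are hypothetical stand-ins for the forum's XXX.App, not taken from any real project:

```python
import types

app = types.ModuleType("app")   # stands in for the XXX.App *namespace*

class App:                      # stands in for the App *type*
    pass

# If name lookup resolves to the module, "instantiating" it fails,
# much like the C# error about using a namespace as a type:
try:
    app()
except TypeError as e:
    print(e)                    # 'module' object is not callable

# Renaming one side (the fix described in the answer) removes the ambiguity:
obj = App()
```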
https://forums.xamarin.com/discussion/comment/330267
CC-MAIN-2019-35
snellspace : a perfect world spoiled by reality

Judgement over how much overhead is in the .NET example is purely a matter of perspective. If I look at it strictly from a scripting point of view, there are about six items of extra information that may not be strictly necessary. I count in that number the following items:

Compare the .NET version to the Perl version and the differences stick out very clearly. And yes, it was much easier to write this simple example using Perl. And I don't even like Perl.

From the point of view of somebody who is used to working in non-scripting, type-safe languages, however, I see hardly any unnecessary overhead in the .NET sample. In fact, from this perspective, the Perl version leaves much to be desired.

So which is the best environment to create Web services? Scripting or non-scripting? Who cares, just write the code you need to write. If a scripting language does what you need it to do, knock your socks off. If a non-scripting environment works better for you, code away. Both have their respective advantages and disadvantages.

Wow, this is getting a much better discussion than I had imagined...

Simon Fell writes: Sjoerd's example doesn't specify the namespace URI; come on guys, post something that's (a) equivalent to the other examples, (b) callable by the sample client code. Some of those lines of overhead have nothing to do with web services, but just with the fact that C# is an OO language.

[Simon Fell] Because even I learn something new every day ;-) So the SoapRpcMethod attribute takes care of this? Great, ok, let me write that down. I must admit that when it comes to .NET, I'm not as familiar as I should be; my nose has been deep in Java Web Service architectural issues for the past year.
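The post's "just write the code you need to write" point is easy to demonstrate: a scripting-language web service really can fit in a dozen lines. Here is a minimal sketch using Python's standard-library XML-RPC support (the hello method name and greeting are invented for the example):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: one registered method, bound to an ephemeral localhost port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda name: "Hello, " + name, "hello")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy makes the remote method look like a local call.
client = ServerProxy("http://127.0.0.1:%d/" % port)
print(client.hello("world"))   # Hello, world
```

Whether this counts as less "overhead" than the typed alternatives is, as the post says, a matter of perspective: the type declarations the .NET version carries are doing work this sketch simply skips.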
http://radio.weblogs.com/0101915/2002/01/31.html
This article is a collection of tips and tricks for migrating projects from Visual Studio 2008 to Visual Studio 2012. Migrating a large application (around 120 projects, 4 MLOC) written in C++, C# and C++/CLI and involving different technologies and frameworks, I stumbled on various problems that I want to share, hopefully making the transition easier for others. This article is not about new features in Visual Studio 2012 and .NET 4.5; it is about problems you might encounter when you migrate. Of course, these are not all the problems, but some that I can share from experience. Some of the issues mentioned in this article are not specific to Visual Studio 2012 or .NET 4.5, but were actually introduced in Visual Studio 2010 and .NET 4.0. Therefore, if you are already familiar with those versions, you probably know at least some of them.

The first and most important issue for VC++ projects is that the format of the project files has changed. In Visual Studio 2010, VC++ moved away from VCBuild and started using MSBuild. With this switch, the project files also changed to an MSBuild format. The new files have the extension .vcxproj, and there is an automatic conversion from the old .vcproj files to the new format. This can, however, lead to some warnings or even errors in the build. One of the issues that I encountered was caused by the fact that I used to explicitly specify the Output File in the Linker settings. I used settings like $(OutDir)\MyAppD.exe in a Debug configuration and $(OutDir)\MyApp.exe in a Release configuration. That triggers warnings like this:

1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V110\Microsoft.CppBuild.targets(1137,5): warning MSB8012: TargetPath(D:\Marius\VC++\MFCApplication1\Debug\MFCApplication1.exe) does not match the Linker's OutputFile property value (D:\Marius\VC++\MFCApplication1\Debug\MyAppD.exe). This may cause your project to build incorrectly.
To correct this, please make sure that $(OutDir), $(TargetName) and $(TargetExt) property values match the value specified in %(Link.OutputFile).

1>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V110\Microsoft.CppBuild.targets(1139,5): warning MSB8012: TargetName(MFCApplication1) does not match the Linker's OutputFile property value (MyAppD). This may cause your project to build incorrectly. To correct this, please make sure that $(OutDir), $(TargetName) and $(TargetExt) property values match the value specified in %(Link.OutputFile).

The most appropriate fix is to use $(OutDir)$(TargetName)$(TargetExt) as the Output File and make the appropriate settings for these properties under the General page.

The RTM release of Visual Studio 2012 dropped support for targeting Windows XP and Windows Server 2003 for VC++ projects, requiring a minimum of Windows Vista and Windows Server 2008. After great demand, Microsoft brought back support for those operating systems in Update 1. By default, the toolset for VC++ projects is set to Visual Studio 2012 (v110), but after installing Update 1 a new toolset called Visual Studio 2012 - Windows XP (v110_xp) is available (installed side by side). Change to this toolset if you still want to target Windows XP/Windows Server 2003. Notice, however, that if your application is mixed and also uses .NET, version 4.5 no longer supports these older operating systems either. In that case you either drop support for them, or do not migrate your managed projects to .NET 4.5; .NET 4.0 is the last framework version that supports those operating systems.

To my surprise, some important routines that were using CDatabase for SQL Server access no longer worked and even crashed the application.
Investigating the problems, I discovered a couple of bugs in MFC. Until fixes are available, it is possible to work around these problems by deriving your own class from CDatabase.

For C++/CLI projects, it is not possible to specify the version of the .NET Framework you want to target from the IDE. The only available option is to manually set the desired value in the .vcxproj file. What you have to do is add a <TargetFrameworkVersion> element under <PropertyGroup Label="Globals">:

<PropertyGroup Label="Globals">
  <ProjectGuid>...</ProjectGuid>
  <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
</PropertyGroup>

Regardless of whether you use .NET 4.0 or .NET 4.5 (which share the same 4.0 CLR), you may run into a runtime error like this:

Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information.

That occurs when your application targeting CLR 4.0 tries to load a mixed-mode assembly (directly, or indirectly through one of the loaded modules) that was built with a previous version of the .NET Framework targeting CLR 2.0 (or even 1.x). The problem occurs because .NET 4.0 changed the way it binds to older mixed-mode assemblies. There are two possible fixes:

<startup useLegacyV2RuntimeActivationPolicy="true">
  <supportedRuntime version="v4.0"/>
</startup>

You can read more about this problem here.

You may have projects with references to COM servers, therefore using interop assemblies, and code that may look like this:

using SomeLib;

namespace ClientApp
{
    class Program
    {
        static void Main(string[] args)
        {
            var o = new SomeLib.DummyClass();
        }
    }
}

When you migrate this to .NET 4.0/4.5, you get the following error:

1>[…]: error CS1752: Interop type 'SomeLib.TestClass' cannot be embedded. Use the applicable interface instead.
1>[…]: error CS0143: The type 'SomeLib.TestClass' has no constructors defined

.NET 4.0 (and of course 4.5) allows embedding type information for COM types directly into managed assemblies instead of using an interop assembly (basically statically linking everything that is needed, instead of requiring an additional interop assembly) (see MSDN for details). However, this has some limitations, and one of them is that classes cannot be embedded. Detailed information about the problem can be found here. There are basically two solutions:

var o = new SomeLib.Dummy();

I would prefer the latter, since it still leverages the benefit of embedding the interop types and makes the interop assemblies unnecessary. (Notice that if you build COM components, primary interop assemblies are still necessary if you want your COM component to be consumed from applications using a previous version of the .NET Framework.)

Applications using WPF need to add a new reference (in addition to PresentationCore, PresentationFramework and WindowsBase): System.Xaml.dll.

The Visual Studio Setup Project is no longer available in Visual Studio 2012. If you have such projects, then you have several alternatives. Which one is the best option probably depends on various factors. I would personally prefer switching to WiX. The only feature WiX is missing is handling prerequisites, but you can use dotNetInstaller for that. A comparison of various deployment tools is available here.

Template assembly directives used to be resolved using project references. So if you had to reference an assembly called something.dll in a .tt file, you first had to add a reference to it and then use it in the assembly directive:

<#@ assembly name="something.dll" #>

In Visual Studio 2010, the way assembly directives are resolved changed (the complete list of changes for T4 in Visual Studio 2010 can be found here).
The T4 engine is now sandboxed to keep the T4 assemblies separate from the project assemblies. Therefore, migrating T4 projects to VS2010/VS2012 may result in compiler errors. You can learn more about the problem and the possible solutions here and here.

A first important aspect is that even though you can still target Windows XP for native projects, remote debugging only works for Windows 7/Server 2008 R2 and newer operating systems. For all the previous versions (Windows XP, Windows Vista, Windows Server 2003 and Windows Server 2008), this feature is no longer available. You can check the platform compatibility and system requirements. If this is still important to you, then you must use the debugger from a previous version of Visual Studio (together with the remote tools).

Another feature that affects the debugging experience is that Visual Studio 2012 by default tries to load symbols for all modules from the Microsoft symbol servers. The result is that when the debugger attaches to a process, it takes a long time to load everything. That can vary from several seconds to several minutes, depending on how many symbol files it must download (the bigger the application and the more modules it loads, the longer it takes). I recommend changing the default "All modules, unless excluded" to "Only specified modules". You can leave the list of specified modules empty, or you can specify the modules you want.

As mentioned in the beginning, this article is a collection of lessons learned migrating various projects from Visual Studio 2008 and .NET 3.5 to Visual Studio 2012 and .NET 4.5. You will probably encounter other problems too, but I hope this article can guide you in solving some of them.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

void CClientSocketS::OnReceive(int nErrorCode)
{
    CSocket::OnReceive(nErrorCode);
    _pDlg->ProcessPendingRead(this); // THIS will re-enable the message pump to post the OnReceive
    AsyncSelect();
}
http://www.codeproject.com/Articles/562386/Lessons-learned-migrating-to-Visual-Studio-an?msg=4790951
DS18B20 temperature sensor

I'd like to read the temperature from my LoPy using a DS18B20 sensor. My problem is that this is a digital sensor using the 1-wire protocol, and as far as I can tell, there's no 1-wire library on the LoPy? Looking at the MicroPython documentation () I see that they've got a onewire module, they've even got a ds18x20 module, but that's not available on my LoPy I think. I've been searching the Internet looking for a suitable 1-wire library, but so far without any luck. I can find some that require libusb or the like, but that won't work I assume. Does anyone know how I should go about reading from a 1-wire sensor on a LoPy? Best regards, Mads

@dchappel I followed your main.py with both onewire.py and ds18x20.py in my lib; this works for the first temp reading, but every following temp reading doesn't change from the first. I moved the sensor into the fridge and then the freezer and the readings stayed the same. Once I reset the device it will bring the new reading, but again only for the first reading. Did you find this with yours?

- what you got from (you must change P10 to the pin you use):

ow = OneWire(Pin('P10'))
ow.roms

if nothing, then test (double ow):

ow.ow.scan()

- which sensor do you have exactly? Waterproof or something different?

@livius said in DS18B20 temperature sensor:
The Atom IDE with Pymakr pluggin solution that was updated as a tutorial last week definitely helped. It seems to be good now. Thank you! but do you have upgraded firmware? current is 1.6.12.b1 what is yours? you can check firmware version by os.uname() if it is really old then go to next, is this how your onewire looks like? put it in the same place where you have main.py or in lib subfolder and then you can simply use it_convertion() time.sleep(1) @livius I get that. Copied the code into a onewire.py file. Where should I put it? I've tried to execute before the main.py file and I always get the "cannot import name OneWire" or "DS18X20". is the const() a problem in class OneWire ? @Guizmolux I do not know if you ask for this but fully working examlpe code you can find in official docs @dchappel I'm very new to this (and LoPy and python) : where is your lib directory ?(I'll adapt to my own config). (sysname='LoPy', nodename='LoPy', release='1.5.1.b1', version='v1.8.6-423-g18444a2 on 2017-02-07', machine='LoPy with ESP32', lorawan='1.0.0') The pin not going high message sounds more like a wiring issue? Are you sure your wiring is correct and the pullup resistor bridges +3.3V and data? I had same issue and realized that I had sensor wire colors mixed up... @dchappel i cant get this to work, i get # code block Running Traceback (most recent call last): File "<stdin>", line 7, in <module> File "ds18x20.py", line 47, in __init__ File "onewire.py", line 246, in scan File "onewire.py", line 269, in _search File "onewire.py", line 90, in reset OSError: OneWire pin didn't go high > MicroPython v1.8.6-464-g0f843911 on 2017-02-17; LoPy with ESP32 Type "help()" for more information. what firmware are you using? 
onewire.py and ds18x20.py are in my lib directory (available here, per @LoneTech). This works for me: import time from machine import Pin from ds18x20 import DS18X20 while True: d = DS18X20(Pin('P23', mode=Pin.OUT)) result = d.read_temps() if result: val = str(result[0] * 1.8 + 32) else: val = "-1" print(val) time.sleep(5) @iber how did you get it working? @Colateral I checked the code and the conv_temp function should be changed as follows: temp = (temp_msb << 8 | temp_lsb) / 16 if (temp_msb & 0xf8) == 0xf8: # for negative temperature temp -= 0x1000 return temp * 100 (the surrounding else: assert False branch of the sensor-family check stays as it is) @michal The temperature output (2525 = 25.25 degrees Celsius) is a 4-digit value. The strange thing is that if I read the same sensor with a Raspberry Pi or an Arduino I get the value as 5 digits (e.g. 25437). I got it working now. from machine import Pin from ds18x20 import DS18X20 d = DS18X20(Pin('G17', mode=Pin.OUT)) result = d.read_temps() print(result)
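To make the sign-handling discussed in this thread concrete, here is a small, pure-Python sketch of decoding the two DS18B20 scratchpad temperature bytes. The function name and the choice to return plain degrees Celsius (rather than the library's scaled integer) are mine, not part of the Pycom library:

```python
def ds18b20_celsius(temp_msb, temp_lsb):
    """Decode the two scratchpad temperature bytes of a DS18B20.

    The raw reading is a 16-bit two's-complement value in units of
    1/16 degree Celsius; the upper bits are sign-extension bits.
    """
    temp = (temp_msb << 8 | temp_lsb) / 16
    if (temp_msb & 0xF8) == 0xF8:  # sign bits set -> negative temperature
        temp -= 0x1000             # same as subtracting 0x10000 before /16
    return temp

# +25.0625 C encodes as raw 0x0191; -25.0625 C as raw 0xFE6F
print(ds18b20_celsius(0x01, 0x91))  # 25.0625
print(ds18b20_celsius(0xFE, 0x6F))  # -25.0625
```

Subtracting 0x1000 after the division by 16 is equivalent to subtracting 0x10000 from the raw 16-bit value first, which is why the fix quoted above works.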
https://forum.pycom.io/topic/303/ds18b20-temperature-sensor
-- | Return all the elements of a list except the last one.
-- The list must be non-empty.
init :: [a] -> [a]
#ifdef USE_REPORT_PRELUDE
init [x]    = []
init (x:xs) = x : init xs
init []     = errorEmptyList "init"
#endif

-- | Test whether a list is empty.
null :: [a] -> Bool
null []    = True
null (_:_) = False

-- We write foldl as a non-recursive thing, so that it
-- can be inlined, and then (often) strictness-analysed,
-- and hence the classic space leak on foldl (+) 0 xs
foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f z0 xs0 = lgo z0 xs0
  where lgo z []     = z
        lgo z (x:xs) = lgo (f z x) xs

scanl :: (a -> b -> a) -> a -> [b] -> [a]
scanl f q ls = q : (case ls of
                      []   -> []
                      x:xs -> scanl f (f q x) xs)

cycle :: [a] -> [a]
cycle [] = error "Prelude.cycle: empty list"

takeWhile :: (a -> Bool) -> [a] -> [a]
takeWhile _ []          = []
takeWhile p (x:xs)
  | p x       = x : takeWhile p xs
  | otherwise = []

-- | 'reverse' @xs@ returns the elements of @xs@ in reverse order.
-- @xs@ must be finite.
reverse :: [a] -> [a]
#ifdef USE_REPORT_PRELUDE
reverse = foldl (flip (:)) []
#endif

-- | 'or' returns the disjunction of a Boolean list. For the result to be
-- 'False', the list must be finite; 'True', however, results from a 'True'
-- value at a finite index of a finite or infinite list.
or :: [Bool] -> Bool
or []     = False
or (x:xs) = x || or xs

-- zipWithFB must have arity 2 since it gets two arguments in the "zipWith"
-- rule; it might not get inlined otherwise
{-# INLINE [0] zipWithFB #-}
zipWithFB :: (a -> b -> c) -> (d -> e -> a) -> d -> e -> b -> c
zipWithFB c f = \x y r -> (x `f` y) `c` r
https://downloads.haskell.org/~ghc/7.0-latest/docs/html/libraries/base-4.3.1.0/src/GHC-List.html
MemSQL Start[c]UP 3.0 Round 2 Editorial Pardon me if I am wrong, but can 2C be solved using some sort of a ternary search? Given x slices of type 1, and y slices of type 2, is there an optimal greedy way of distributing them? If yes, I believe ternary search can be applied (I may be wrong, of course). Any help will be appreciated, thank you in advance! It is not needed, you know the number of pizzas of type A is (sum of pizza slices eaten where A greater than B / slices per pizza) plus or minus one, or the (sum of pizza slices eaten where A greater than or equal to B / slices per pizza) plus or minus one. Yes, Ternary Search can be applied. The optimal strategy of distributing slices among contestants is same as mentioned in the editorial.You can see my submission using Ternary Search here.Although it is not needed as pointed out by farmersrice So is 865C solved in ? Can someone elaborate Div2 E? Um, this is actually the solution kfoozminus explained to me: Let's maintain two multisets, "unused" and "already sold". After taking input "x", I tried to sell this pairing with a previous element(which I'm gonna buy). Took the minimum valued element among "unused" + "already sold" stuffs, let's call it "y". If(y < x) { if(y came from unused set) buy y and sell x. else if(y came from "already sold" set) put y in unused set and sell x. } else put "x" in unused set The solution actually comes from the idea — If I see that there's an "already sold" stuff smaller than x, it'd be optimal to sell "x" and not use y, rather than use y and buy x. Could you explain some details about DIV1-C? which takes exactly K seconds to complete which takes exactly K seconds to complete To complete to whole game, or game suffix? If the optimal strategy is to immediately switch to the deterministic game, then the answer is greater than K. Otherwise it's less than K If the optimal strategy is to immediately switch to the deterministic game, then the answer is greater than K. 
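On the ternary-search question near the top of this thread: once the cost for a fixed split is computed greedily, an integer ternary search over a unimodal function can be sketched generically as below (the quadratic used in the demo call is only a stand-in for the real pizza objective):

```python
def ternary_max(f, lo, hi):
    """Return the argmax of a unimodal f over the integers [lo, hi]."""
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) < f(m2):
            lo = m1 + 1   # the maximum cannot be at or left of m1
        else:
            hi = m2       # the maximum cannot be right of m2
    # at most three candidates remain; scan them directly
    return max(range(lo, hi + 1), key=f)

print(ternary_max(lambda x: -(x - 37) ** 2, 0, 100))  # 37
```

Each iteration discards a third of the range, so the loop runs in O(log n) evaluations of f.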
Otherwise it's less than K Could you clarify it? Binary search should be made by K or by answer? And is there a code, which corresponds the idea from editorial? You assume that the answer is K and compute f(K) (using dynamic programming). The answer is such x that x = f(x). Since f(x) is monotone, binary search can find an answer. Another view is as follows: assume I tell you that the answer is K. How do you verify that? Take a look at my submission. Keep in mind that the upper bound is very coarse and the number of iterations in the binary search is unnecessarily large. if (i+S[j] <= R) { play += (S[j] + EXP[j+1][i+S[j]]) * (1-P[j]); } else { play += (S[j] + ans) * (1-P[j]); } Can you please explain the "else" part? I was not able to figure out why it would be S[j]+ans in this case If i + S[j] > R, it means that when you're unlucky and the current level taking requires the longer timeslot causes you to be over the limit, you have to restart the game after finishing this level. So in this case isn't play += ans * (1-P[j]) sufficient? No. You need to spend the S[j] seconds playing, and only after that you can restart. Quoting the statement: After completing a level, you may decide to either continue the game and play the next level, or reset the game and start again from the first level. After completing a level, you may decide to either continue the game and play the next level, or reset the game and start again from the first level. Thanks for clarifying :D What is the semantics of K and f(K) ? Did you mean, that the answer is such x that R = f(x)? And could you clarify about "upper bound coarse"? I thought, that the iterations count difference between upper_bound and binary_search is very low, at most 1-2 iterations. you calculate the answer using dp, but anytime you start from the beginning you use K instead of dp[starting state] which is not yet calculated. 
f(K) is the answer for dp in starting state Now first thing to understand that if K is answer, then you will get K as answer to your dp (that's easy) Now understand that if K is more then answer you'll get more then answer (because all number can just increase) but less then K (thats harder) So now function f(k) — k has one zero and you search for it with binary search How can we show that if K is more than answer, f(K) will never be equal to K? Got the idea, thanks! I am confused by the upper bound in this problem. How would the upper bound of the binary search be calculated? The worst case should be that R is the sum of Fi's, Fi's are 99, Si's are 100 and Pi's are 80. So one has to pass every stage quickly to make it in R seconds. If I am right, how does this lead to the upper bound? Oh, is that: Yup, it's around that. But in the contest I just went for somewhat more generous bound. You don't want to fail a task just because of this, and time limit wasn't an issue. I don't understand why the upper bound of expection is not R but . Can you explain? In the best case, you have to pass all 50 stages fast — any slow move will result in a replay. You pass the whole game in one shot with probability 0.8^50. So, the number of game required for such one shot exists is 1 over 0.8^50. For each such pass you may have to play at worst 50 stages each with no more than 100 units of time. (This is a rough upper bound, so it is not that tight, but also enough for a binary search) Finally, i've got the idea of the solution, thanks! Also, i've understood your notice about too big upper bound, it wasn't about std::upper_bound function :) I still have the only question about optimal progress reset strategy. If I've got you solution right, the optimal strategy is to reset a progress only if all levels completion with slow completion of the current level becomes greater than R. Why it is right? It doesn't take in account the relation between Fi/Si/Pi at the beginning and at the ending. 
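A compact sketch of the whole scheme described above (the dp plus the binary search on the fixed point f(K) = K), written from the editorial discussion rather than from any particular submission; the function name, the iteration count, and the generous upper bound are my own choices, and the two checks at the bottom are tiny instances whose answers (3.14 and 31.4) can be worked out by hand:

```python
def expected_time(levels, R):
    """levels: list of (fast, slow, prob_fast_percent); R: total time limit.

    Guess the answer K, charge every restart K seconds, compute
    f(K) = dp value at the start, and binary-search the fixed point
    f(K) = K (f is monotone, so the search is valid).
    """
    lo, hi = 0.0, 1e11
    for _ in range(200):
        K = (lo + hi) / 2
        # dp[t] = expected remaining time at the current level, t seconds used
        dp = [0.0] * (R + 1)              # past the last level: done if t <= R
        for F, S, P in reversed(levels):
            p = P / 100.0
            ndp = [0.0] * (R + 1)
            for t in range(R + 1):
                fast = F + (dp[t + F] if t + F <= R else K)
                slow = S + (dp[t + S] if t + S <= R else K)
                ndp[t] = min(K, p * fast + (1 - p) * slow)  # may also reset now
            dp = ndp
        if dp[0] < K:     # f(K) < K: the guess was too large
            hi = K
        else:
            lo = K
    return lo

print(expected_time([(2, 8, 81)], 8))                 # ~3.14
print(expected_time([(20, 30, 80), (3, 9, 85)], 30))  # ~31.4
```

For the one-level game, f(K) = min(K, 0.81·2 + 0.19·8) = min(K, 3.14), so the fixed point is 3.14; for the two-level game the fixed point equation is K = 25.12 + 0.2·K, giving 31.4.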
It seems, that in one situation it could be more profitable to reset a progress after slow completion of the first level, and in another situation it could be profitable to reset progress nearby before final level, but don't see it in DP calculation. Let's call a strategy a decision in which cells of the dp we will hit restart. Then for a fixed strategy P the function f_P produced by the dp is linear. Now f(x) = min f_P(x). So f in convex upward. Also f(0) > 0. Thus f(x) = x has at most one root. It is also possible to use Newton's method in problem C to make less iterations. My solution converges in 4 iterations. I believe the Pi ≥ 80 restriction is also unnecessary for this solution. 30890918 Can anybody elaborate Div2 C? EDIT: Missed part of statement. Can Div1 C be solved without bin search? Let's calculate p[i][t] — probability to finish game in at least R seconds if we are at the i-th level and t seconds have passed. Now, I think, it is optimal to go to the start every time when p[i][t] < p[0][0] (Is this correct?). Now let dp[i][t] be probability that we are at the i-th level with t seconds passed. Let's say answer is S. Then formula for S will be something like this S = dp[n][t1] * t1 + dp[n][t2] * t2 + ... + dp[i1][T1] * (S + T1) + dp[i2][T2] * (S + T2) + ... first we add runs which were successful (completed in less then R seconds) and then we add runs where we either chose to go back or got bad score and had to go back. We do simple dp and accumulate constant values and coefficients of S. In the end S = CONST / (1 - COEF) My code sometimes gives wrong answer, for example third pretest. In my code good[i][t] means probability to finish game with at least R seconds if we are at i-th level and t seconds have passed. 
#include <vector> #include <algorithm> #include <iostream> #include <cassert> #include <cstdlib> #include <cmath> #include <cstring> #include <cstdio> #include <ctime> #include <map> #include <set> #include <string> #include <cassert> #define INFLL 2000000000000000000 #define INF 2000000000 #define MOD 1000000007 #define PI acos(-1.0) using namespace std; typedef pair <int, int> pii; typedef long long ll; typedef vector <ll> vll; struct Level { int a; int b; double p; }; int n, r; Level arr[50]; double p[51][5001]; double good[51][5001]; double sums[51][5001]; double back, con; double dp[51][5001]; int main() { //freopen("input.txt", "r", stdin); //freopen("output.txt", "w", stdout); cin >> n >> r; for (int i = 0; i < n; i++) { cin >> arr[i].a >> arr[i].b >> arr[i].p; arr[i].p /= 100.0; } p[n][0] = 1; for (int i = n - 1; i >= 0; i--) { for (int j = 0; j <= 5000; j++) { if (j + arr[i].a <= 5000) p[i][j + arr[i].a] += p[i + 1][j] * arr[i].p; if (j + arr[i].b <= 5000) p[i][j + arr[i].b] += p[i + 1][j] * (1 - arr[i].p); } } for (int i = 0; i <= n; i++) { sums[i][0] = p[i][0]; for (int j = 1; j <= 5000; j++) sums[i][j] = sums[i][j - 1] + p[i][j]; } for (int i = 0; i <= n; i++) { for (int j = 0; j <= r; j++) { good[i][j] = sums[i][r - j]; } } dp[0][0] = 1; for (int i = 0; i < n; i++) { for (int j = 0; j <= 5000; j++) { if (j > r) { back += dp[i][j]; con += dp[i][j] * j; continue; } if (good[i][j] < good[0][0]) { back += dp[i][j]; con += dp[i][j] * j; continue; } if (j + arr[i].a <= 5000) dp[i + 1][j + arr[i].a] += dp[i][j] * arr[i].p; if (j + arr[i].b <= 5000) dp[i + 1][j + arr[i].b] += dp[i][j] * (1 - arr[i].p); } } for (int i = 0; i <= r; i++) con += dp[n][i] * i; for (int i = r + 1; i <= 5000; i++) { con += dp[n][i] * i; back += dp[n][i]; } printf("%.15f\n", con / (1 - back)); return 0; } According to me, your statement: it is optimal to go to the start every time when p[i][t] < p[0][0] is incorrect. We have to minimize the expected amount of time we play. 
it is optimal to go to the start every time when p[i][t] < p[0][0] Say dpi, j is the expected amount of time needed to play to finish in less than R seconds such that j seconds have already passed and we are at the ith level. It is optimal to reset if dpi, j + j > dp0, 0. But since we don't know what's dp0, 0, we say dp0, 0 = K for some K and verify if it's possible or not. If it's possible for K, it's possible for all values > K. So, we can apply binary search on K. Please elaborate the solution of 865B — Ordering Pizza. Can't able to understand from this point ->Then the first contestant should take the first s1 slices, then the second contestant should take the next s2.. Why the following approach is not valid? Let sumA be the sum of all slices of pizzas from the contestants where ai > bi. I sort the participants by (bi-ai), and then i distribute the pizzas starting from the index 0. When i take K slices of a participant that has ai > bi, i make sumA -= K. When i know that sumA < S, (i.e i can't eat more entire pizzas with sumA), i know that i have to share the remainder sumA slices. Submission: can anyone tell how to compute p(x^(-1))mod Q(x) in question G Hi, I have a question regarding 865B — Ordering Pizza. I've debugged for so many hours that I decided to ask for some insight. I always get runtime error for Test case 5. And I simply have no idea which part of my code, pasted below, could cause run-time error. Any insight is appreciated. Thanks!!!! 
#include <iostream> #include <algorithm> #include <cmath> #include <utility> // std::pair, std::make_pair #include <vector> using namespace std; typedef long long ll; typedef pair<ll, ll> pll; bool comp(pll a, pll b) { return a.first >= b.first; } // x is number of pieces to consume ll findBest(vector<pll> &ps, ll x) { ll res = 0; for (ll i = 0; i < ps.size(); i++) { ll num = ps[i].second; ll diff = ps[i].first; if (x - num >= 0) { res += (num * diff); x -= num; } else { res += (x * diff); break; } } return res; } int main() { ll n, s; cin >> n >> s; vector<pll> ps; ll total = 0; ll positives = 0; ll pieces = 0; // overall pieces for (ll i = 0; i < n; i++) { ll num, a, b; cin >> num >> a >> b; ps.push_back(make_pair(a-b, num)); total += num * b; pieces += num; if (a-b > 0) { positives += num; } } sort(ps.begin(), ps.end(), comp); ll pizzas = (pieces % s == 0) ? (pieces / s) : (pieces / s + 1); ll extra = pizzas * s - pieces; ll best = total; if (positives > 0) { best = max(best, findBest(ps, positives / s * s) + total); if (positives % s != 0) { ll x = max(positives, (positives/s + 1) * s - extra); best = max(best, findBest(ps, x) + total); } } cout << best << endl; return 0; } Your compare function is incorrect, as comp(a, a) should return false. OMG! You're amazing!!!!! That is indeed the error!! I've never encountered that as run-time error before (can only say that I'm not experienced enough I guess...). According to, the requirements are For all a, comp(a,a)==false If comp(a,b)==true then comp(b,a)==false if comp(a,b)==true and comp(b,c)==true then comp(a,c)==true Thank you again Michal!!!!! Would this logic work in Problem E of Div 2? Take maximum of 1 to N from segtree and let its index be x . We will sell a stock at this index and take minimum of 1 to x-1 from segtree and let its index be y. We will buy stock at index y. Now remove this values from segment tree. And keep on doing this till all values have been exhausted. 
I tried this solution but i am getting wrong answer submission:
http://codeforces.com/blog/entry/54888
SAP Connector User Guide Premium The SAP connector enables the integration of data to and from SAP NetWeaver-based and external systems. MuleSoft maintains this connector under the Premium support policy, and. SAP NetWeaver is an umbrella term for these technical components. The SAP connector uses the RFC protocol to connect to NetWeaver Application Servers (NWAS). ECC and CRM run on top of NWAS, as other SAP solutions do, hence any customer using the connector may access those systems. SAP NetWeaver runs on both Java and ABAP stacks.. Some fundamental knowledge of the ABAP language. Namespace and Schema The required namespace and schema location for the SAP connector should be included in the header area of your Mule application. <mule xmlns: ... <flow name="yourFlow"> ... </flow> </mule> Requirements This connector requires the following SAP libraries: Java Connector (JCo) library IDoc library Note: The JCo library depends on your hardware platform and operating system. Therefore, you need to download the proper version for the local drive running Anypoint Studio. Three files are required for both libraries: Two) The SAP JCo libraries are OS-dependent. Therefore, make sure to download the SAP libraries that correspond to the OS and hardware architecture of the host server on which Mule is running. If you deploy to a platform different from the one used for development, you must change the native library before generating the zip file. Dependencies There are four versions of the SAP connector that have been released, which depend on certain versions of Mule. Stateful transactions, involving multiple outbound endpoints, only work by setting the transactional scope. Read more about SAP Transactions. Every SAP customer or partner has access to the SAP Service Marketplace (SMP). There you can download both these files as well as the NetWeaver RFC Library and other connectors. 
Compatibility Matrix The SAP connector is compatible with any SAP NetWeaver-based system and supports SAP R/3 systems from release 3.0.11 and later. Note: With the exception of SAP 2.2.5, which is incompatible with IDoc 3.0.12, the rest of the JCo and IDoc libraries displayed in the above matrix have been tested with the connector. Note that there may be other SAP-compatible versions, which are not listed above. Installing and Configuring The SAP connector is bundled within Anypoint Studio: typically, the latest version of Studio comes with the latest version of the SAP connector. If you require another version of the connector in Anypoint Studio or must reinstall it: In Anypoint Studio, click the Exchange icon in the Studio taskbar. Click Login in Anypoint Exchange. Follow the prompts to install the connector. When Studio has an update, a message displays in the lower right corner, which you can click to install the update. The SAP connector needs JCo libraries to operate. The current section explains how to set up Mule so that you can use the SAP connector in your Mule applications. This procedure assumes that you already have a Mule runtime instance installed on your host machine. Note: Throughout this document, $MULE_HOME refers to the directory where Mule is installed. Download the SAP JCo and IDoc libraries from the SAP Service Marketplace (SMP). To do so, instance. JCo relies on a native library, which requires additional installation steps. If you plan to use SAP as an inbound endpoint, that is where Mule is called as a BAPI or receives IDocs, you must perform additional configurations within the services file at the OS level. A detailed explanation of the requirements can be found at SAP JCo Server Services Configuration. runtime get added automatically. If there is more than one SAP transport dependency for the Mule runtime configured in the project, then you are prompted to select the one you want to use, the newest, oldest, or select Choose manually. 
To add the SAP connector manually to the classpath, complete the following steps: Right-click the top of the project in the Package Explorer panel. Select Build Path > Add Libraries. Select the library type Anypoint Connectors Dependencies and click Next. From the list, check the SAP extension you require, noting the version of the connector and the Mule runtime version requirements. Configuring To use the SAP connector in your Mule application, you must first configure a global SAP element. Read more about Global Elements. Setting up the Global Element The SAP connector object holds the configuration properties that allow you to connect to the SAP server. When an SAP connector is defined in a Global Element, all SAP endpoints use its connection parameters; otherwise each SAP endpoint uses its own connection parameters to connect to the SAP server. To create a configuration for an SAP connector, complete the following steps: Click the Global Elements tab below the Message Flow canvas. Click Create, then click the. Tip: As a best practice, use property placeholder syntax to load the credentials in a more simple and reusable way. Read more about property placeholders at Configuring Properties. Finally, click the Test Connection button to verify that the connection to the SAP instance succeeded. If the credentials are correct you should receive a Test are automatically added to the project’s classpath. Important: If you are adding the JCo libraries and configuring the classpath manually using a version of SAP JCo later than SAP JCo 3.0.11, the sapjco3.jarand the corresponding native library must be in different directories for Datasense to work. If you are using a Mavenized app, the native library should be named libsapjco3followed by the extension according your OS. Extended Properties To define extended properties for the SAP connector global element, complete the following steps: Navigate to the Advanced tab on the Global Elements Properties pane. 
Locate the Extended Properties section at the bottom of the window. Click the plus icon next to the Extended Properties drop-down menu to define additional configuration properties. You can provide additional configuration properties by defining a Spring bean global element representing a Map ( java.util.Map) instance. This can be used to configure SCN (Secure Connections) or advanced pooling capabilities, among other properties. Important: For this to work you must set the property name, as defined by SAP, in your configuration. Check SAP JCo Extended Properties for the complete list of properties. Upgrading From 2.x.x to 3.0.0 The SAP Connector can be updated via the integrated Update function within Mule Studio. The main change introduced in SAP 3.0.0 is the removal of XML parser Version 1. From now on, Version 2 is the one and only supported format. Consequently, to move smoothly from V1 to V2, the following modifications are needed: In SAP Endpoints and Transformers Attribute xmlVersion is deprecated and no longer needed in SAP flows. Projects using xmlVersion="1" fail but those using xmlVersion="2" are still compatible. The same applies to SAP transformers such as SAP Object to XML, XML to SAP Function (BAPI) and XML to SAP IDoc. Details below: In XML Definitions Replace the jco node: With the Function/BAPI name: Elements import, export, tables and exceptions nodes remain the same. Replace field and structure nodes with their name attributes. To create: Replace child elements of table with its name attribue and remove the id from every row. Using the Connector Configurable Properties The <sap:connector/> element allows the configuration of JCo connection parameters that can be shared among <sap:inbound-endpoint/> and <sap:outbound-endpoint/> in the same application.. 
Typing "sap" in the filter input textbox above the palette should display both the SAP Connector and the SAP Transformers: Click and drag the SAP Object to XML transformer after an SAP inbound endpoint (or an SAP outbound endpoint if the endpoint is a function and expects a response). Important: With the option to enable DataSense on the SAP endpoint came a new attribute, outputXml. The default value, false, means the endpoint produces a Java object; set it to true to produce XML directly, and in that case avoid the subsequent use of an SAP Object to XML transformer. Click and drag the XML to SAP Function (BAPI) or the XML to SAP IDoc transformer before your SAP outbound endpoint within your Mule application flow. The input to the outbound endpoint can be either the SAP Object created by the XML to SAP Function (BAPI) or XML to SAP IDoc transformer, or any type (String, byte[] or InputStream) that represents the XML document. As mentioned before, to avoid using the SAP Object to XML transformer you can set the outputXml attribute to true at the endpoint level (this works for both inbound and outbound SAP endpoints). Note: With DataSense support, the recommended way to generate the XML definitions is using DataWeave. However, if you are using a Mule 3.3 application, see DataMapper. For BAPIs, the SAP Connector offers a proprietary format fully compatible with DataWeave and DataMapper. BAPI XML Structure Each of the main records (import, export and changing) supports fields, structures and/or tables: Field elements allow, since versions 1.4.1 and 2.1.0, a special attribute named trim, which holds a boolean value indicating whether the value of the field should be trimmed (remove leading and trailing space characters) or not. The default behavior is to trim the value ( trim="true"). Note: The trim attribute is valid in all XML versions. The example above uses XML version 2.
Exceptions are represented the same way in all XML versions as well. The result of a metadata retrieval method shows a list of exceptions a function module (BAPI) can throw. The exception element is also used when an ABAP exception needs to be returned to SAP by the inbound endpoint. In this case only one exception should be present. If more than one exception is returned, then the first one is thrown and the rest are ignored. There are two constructors for the ABAP exception and the XML varies depending on which one you want to call: new AbapException(String key, String message) new AbapException(String key, String messageClass, char messageType, String messageNumber, String[] messageParameters) You can use the SAP outbound endpoint with type function-metadatato retrieve the XML template for a given function module (BAPI): Here, functionNameholds a Mule Expression (MEL), which returns the name of the function module. For IDoc templates, use operation idoc-metadatainstead.. Important: XML version 2.0 is the default version since SAP connector v2.1.0, and it is the only supported version from SAP connector v3.0.0 onward. Use Cases and Demos Generally speaking, there are two main scenarios. Note: Some settings may vary in your SAP instance depending on how it has been customized. Values used in these demo scenarios are based on SAP ERP IDES (International Demonstration and Education System), which is a pre-configured system that covers the most common SAP deployment modules and scenarios. 1. Inbound Scenario - IDoc - Using the Studio Visual Editor Uses a SAP inbound endpoint that acts as an IDoc server. The JCo server needs to register against the SAP instance. For this reason, it requires both client and server configuration attributes. This example receives data in SAP IDoc format. Drag and drop the SAP Connector from the connector palette to the beginning of your flow. 
Double-click the SAP icon to open the Endpoint Properties pane and configure the following properties: Add a Logger component at the end of the flow to display the result data. 1. Inbound Scenario - IDoc - Using the Studio XML Editor Note: The complete XML code for this demo flow can be found in Example Code along with the other example flows. 2. Inbound Scenario - BAPI - Using the Studio Visual Editor Uses a SAP inbound endpoint that acts as. 2. Inbound Scenario - BAPI - Using the Studio XML Editor Note: The complete XML code for this demo flow can be found in Example Code along with the other example flows. Inbound - BAPI -: 3. Outbound Scenario - IDoc - Using the Studio Visual Editor: Add the missing fields by editing the mapping in the Transform Message component. For IDocs, always check the items @BEGINand @SEGMENTto properly build the final XML. Set the values of the required fields. The resulting XML should look like this: Add a Logger component to display the outcome of the processed IDoc. 3. Outbound Scenario - IDoc - Using the Studio XML Editor Note: The complete XML code for this demo flow can be found in Example Code along with the other example flows. 4. Outbound Scenario - BAPI - Using the Studio Visual Editor Uses the SAP outbound endpoint to send data from a Mule application to SAP where the data is processed by a BAPI function. . Since the IDoc is a nested structure, DataWeave may not display all fields, as in this example: The resulting XML should look like the following: Add a Logger component at the end of the flow to display the results obtained by the BAPI in a web browser. 4. Outbound Scenario - BAPI - Using the Studio XML Editor Note: The complete XML code for this demo flow can be found in Example Code along with the other example flows. Best Practices Read the following sections on best practices for designing and configuring your applications that use the SAP Connector. 
Design Tips When deploying multiple applications to the same server, keep the SAP libraries in a shared location so they can be shared among all applications running within the same Mule instance. Since the SAP JCo configuration is a singleton, if you go this way, all your applications share the same configuration, including the JCo destination repository. For this setup to work, you must also manually configure the wrapper.conf file to add support for the $MULE_HOME/lib/native directory. What you did so far is enough to run this in a Mule Standalone instance; however, to make this run properly in the Anypoint Studio runtime and be able to test your app while developing it, you must do the following: Add the following command line argument to the JRE Default VM Arguments: -Djava.library.path=PATH. This handles the native library. Modify your POM to include <scope>provided</scope> for supporting the file mule-transport-sap-{version}.jar About the Application CLASSPATH Your application lib directory is automatically enabled to support dynamic libraries. If you are not including them there, then you also need to tell Mule where the SAP JCo dynamic linked library resides. To accomplish this, you can do either of the following: Configure the LD_LIBRARY_PATH environment variable. Configure the Mule wrapper configuration file $MULE_HOME/conf/wrapper.conf by adding the line wrapper.java.library.path.{N}=PATH/TO/SAP-JCO/LIB-DIR. Do not combine both strategies, such as putting JCo libraries in the Mule instance shared lib directory (for example, $MULE_HOME/lib/user) and the SAP connector library inside your application (for example, $MULE_HOME/apps/YOUR_APP/lib). This causes classloader issues, since the JCo libraries hold configuration in static fields (singletons).
https://docs.mulesoft.com/mule-user-guide/v/3.9/sap-connector
I have an exception when I call SaveChangesAsync. My architecture is really simple: I have a Category class, which contains public class Category { public Guid ID {get; set;} public Guid? ParentId {get; set;} public Category Parent {get; set;} [...] } When I want to insert a new category into the database (connected to my ASP.NET MVC application), I set the GUID before doing the insert. The error occurs when my database is empty and I want to insert a parent category (so with a null ParentId and null Parent). This does NOT happen if I set a parent value. I can add a record manually with a null parent through Visual Studio. I get the following error: The INSERT statement conflicted with the FOREIGN KEY SAME TABLE constraint "FK_Category_Category_ParentId". The conflict occurred in database "HT_Root", table "dbo.Category", column 'ID'. The statement has been terminated. I searched Stack Overflow for a simple answer and did not find one. I tried the Fluent API: modelBuilder.Entity<Category>().HasOne(s => s.Parent).WithMany().HasForeignKey(s => s.ParentId); But nothing changed. What am I doing wrong? It seems to have changed in EF 7. See this GitHub issue. Try public class Category { public Guid ID {get; set;} public Guid? ParentId {get; set;} public Category Parent {get; set;} public ICollection<Category> Children {get; set;} } And modelBuilder.Entity<Category>() .HasOne(x => x.Parent) .WithMany(x => x.Children) .HasForeignKey(x => x.ParentId) .IsRequired(false); You should also always check that (if specified) the ParentId exists in the database. Watch out for adding Guid.Empty (00000000-0000-0000-0000-000000000000) instead of null, as this can cause issues.
https://entityframeworkcore.com/knowledge-base/34996155/ef-7---insert-statement-conflicted-with-the-foreign-key-same-table
CC-MAIN-2020-40
refinedweb
269
50.63
set the priority of a process

Synopsis:
#include <sys/sched.h>
int setprio( pid_t pid, int prio );

Description:
The setprio() function changes the priority of process pid to priority prio. If pid is zero, the priority of the calling process is set. The prio parameter must lie between 1 (lowest) and 29 (highest for the superuser) or 19 (highest for a non-superuser). By default, the process priority and scheduling algorithm are inherited from or explicitly set by the process that created it. Once running, the process may change its priority by using this function.

Returns:
The previous priority. If an error occurs, -1 is returned and errno is set. See qnx_scheduler().

Classification:
QNX

See also:
errno, getprio(), qnx_scheduler(), sched_getparam(), sched_getscheduler(), sched_setscheduler(), sched_yield()
https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/qnx/setprio.html
CC-MAIN-2022-33
refinedweb
115
58.89
If you want to analyze a CSV dataset that is larger than the space available in RAM, you can iteratively process each observation and store/calculate only what you need. There is a way to do this in standard Python as well as with the popular library Pandas.

Standard Library

import csv

with open('/path/to/data.csv', newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=',')
    for row in reader:
        for column in row:
            do_something()

Pandas

Pandas is slightly different in that you specify a chunksize, which is the number of rows per chunk, and you get a pandas dataframe with that many rows:

import pandas as pd

chunksize = 100
for chunk in pd.read_csv('/path/to/data.csv', chunksize=chunksize):
    do_something(chunk)
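The "store/calculate only what you need" idea can be made concrete. The sketch below computes a column mean one row at a time; an in-memory StringIO stands in for a file too large to load at once, and the column names are made up for the example:

```python
import csv
import io

# Stand-in for a large on-disk CSV file.
data = io.StringIO("id,value\n1,10\n2,20\n3,30\n")

total = 0.0
count = 0
reader = csv.DictReader(data)
for row in reader:            # one row at a time: constant memory
    total += float(row["value"])
    count += 1

print(total / count)          # mean of the "value" column -> 20.0
```

Only the running sum and count are kept in memory, so this works no matter how many rows the file has.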
https://brandonrozek.com/blog/iterativecsv/
CC-MAIN-2020-24
refinedweb
123
53
I want to store an object and its children into an external file using the following script.

import c4d
from c4d import gui

# Main function
def main():
    sel = doc.GetSelection()
    newDoc = c4d.documents.IsolateObjects(doc, sel)
    c4d.documents.SaveDocument(newDoc, "isolated_objects.c4d", 0, c4d.FORMAT_C4DEXPORT)
    c4d.EventAdd()

# Execute main()
if __name__=='__main__':
    main()

However, if I try to export the following object with its children (I selected all objects manually), the hierarchy is not correct and the original objects are also in the scene file. What am I doing incorrectly? Is it because the selection is incorrect, not taking into account the hierarchy? If so, how can I select the objects and their children with the correct hierarchy? -Pim

Hi,

sel = doc.GetSelection()

BaseDocument.GetSelection() will return all selected materials, tags and objects. Unless this was an intentional choice, BaseDocument.GetActiveObjects() would be a more appropriate choice.

newDoc = c4d.documents.IsolateObjects(doc, sel)

You are passing sel to IsolateObjects() here as the argument of the objects to be isolated. Since you selected your whole scene graph, this will result in the somewhat self-recursive output you have encountered. Copying Cube from your example also means copying all its descendants (Bend, Cylinder, Sphere and Cone). If you also pass all the descendants, it will result in the output shown above.
So, if you just want to get a copy of Cube and all its children, you just need to select Cube and execute the following (op is predefined as the selected object):

new_doc = c4d.documents.IsolateObjects(doc, op)

Cheers, zipit

hello, this is related to that thread I guess. (I also added an element there) As @zipit said, all the selected elements are passed to the isolate function, so that's the result you have: all elements and their children isolated one by one. You have to call isolate only on the cube. Cheers, Manuel

@zipit Thanks, just selecting the cube works. One remark: using op will give you an error "TypeError: 'c4d.BaseObject' object is not iterable". So, just select the cube and sel = doc.GetSelection() will do the trick.

@m_magalhaes Yes, it is related to the other thread, but let's continue there.

@pim said in IsolateObjects question: "using op will give you an error 'TypeError: 'c4d.BaseObject' object is not iterable'"

sorry, my bad, that was a typo. IsolateObjects() expects a list of BaseObjects, so the correct call would be:

new_doc = c4d.documents.IsolateObjects(doc, [op])

Aha, thanks.
https://plugincafe.maxon.net/topic/11872/isolateobjects-question
CC-MAIN-2021-31
refinedweb
492
51.75
11 December 2009 15:25 [Source: ICIS news] TORONTO (ICIS news)--Canada's chemical producers managed to maintain operating profits this year despite a sharp 35% decline in sales, mainly because of low natural gas prices, an industry group said on Friday. Canadian sales of basic chemicals and resins fell to Canadian dollar (C$) 16.6bn ($15.8bn), down 35% from 2008, in the wake of the global recession, the Canadian Chemical Producers Association said, citing the findings of its latest industry survey. However, chemical industry operating profits before interest, taxes and special write-offs were C$1.4bn, almost unchanged from 2008, because of the lower natural gas prices, the group said. Natural gas, the main feedstock for the industry, was down from C$13/m Btu in 2008 to C$4-5/m Btu in 2009. This cost advantage meant that Canadian-based chemical producers could keep their plants operating while many competitors in other parts of the world had to idle theirs. However, sales declined sharply year over year. Chemical sales to Canadian customers fell 47% while export sales fell 31%. Sales to the The Ottawa-based industry group said the sales decline was primarily due to the slump in sectors such as autos and housing, as well as pulp and paper. Meanwhile, Canada's chemical industry employment fell by 17% to 15,800. Major chemical firms with production in Canada include NOVA Chemicals, Dow Chemical, Shell, MEGlobal, DuPont Canada, LANXESS and Invista, among others. ($1 = C$1
http://www.icis.com/Articles/2009/12/11/9318717/canada-chems-remain-profitable-despite-sharp-sales-decline.html
CC-MAIN-2015-06
refinedweb
257
51.48
Re: iomanip - alignment
From: James Kanze <james.kanze@gmail.com>
Newsgroups: comp.lang.c++
Date: Fri, 19 Oct 2007 09:10:37 -0000
Message-ID: <1192785037.529929.138480@k35g2000prh.googlegroups.com>

On Oct 18, 11:04 pm, brekehan <cp...@austin.rr.com> wrote: I can't seem to get the alignment to switch back and forth combined with a set width. I tried a search on this group and a lot of googling, but still can't deduce the problem. It works fine for me, both with g++ and with Sun CC. However... #include <iostream> #include <iomanip> using namespace std; int main() { // using cout instead of ostream & operator << method, for example purposes // hardcoding values instead of class data for example purposes cout << setiosflags(ios::fixed); I'm not sure that this is defined behavior. I'd normally use std::cout << std::fixed ; or std::cout.setf( std::ios::fixed, std::ios::floatfield ) ; (In a real program, of course, you'll almost always be using custom manipulators, defined in accordance with the semantics of the data being output.) cout << left << setw(25) << "attribute name:"; cout << right << setw( 6) << 0.111; Note that the setw here doesn't do anything, since you're generating more than six characters anyway ("0.111000"). cout << endl; cout << left << setw(25) << "attribute name"; cout << right << setw( 6) << 0.123556; Same comment as above for the setw. This generates "0.123556". cout << endl; // etc etc cout << resetiosflags(ios::fixed); Again, this isn't guaranteed to do what you think. "std::cout << std::resetiosflags( ios::floatfield )" would do the trick, "std::cout << std::defaultfloat" is probably more readable, but normally, if the goal is to restore the previous context, you'll read the previous context before changing it, typically by means of some sort of RAII class.
(Of course, if you're using custom manipulators, you'll arrange for them to restore the original state at the end of the full expression, so this won't be necessary.) return 0; } The output seems to always be left aligned. Try putting markers around it, and I think you'll see what the problem is. Something like: std::cout << '|' << std::left << std::setw( 25) << "attr:" << '|' ; std::cout << '|' << std::right << std::setw( 6 ) << 1.23 << '|' ; This will show exactly how each field is being formatted. I've also tried using << setiosflags and << resetiosflags with the same results. Can anyone clear this up for me? I hate to admit that I've forgotten something so basic. What is the lifetime of the different iomanip members used here? How are right and left alignments represented internally? i.e. do you need to reset left before using right and vice versa? Or are they two states of the same bit? Alignment is a field in ios, consisting of at least 2 bits. To assign to it, you have to do something like: std::cout.setf( std::ios::left, std::ios::adjustfield ) ; The second argument tells setf to reset the bits first. This is exactly what std::left does, however, so there should be no problem using the manipulator.
https://preciseinfo.org/Convert/Articles_CPP/Set_Code/C++-VC-ATL-STL-Set-Code-071019121037.html
CC-MAIN-2022-05
refinedweb
560
65.32
Some tips to help with static linking of GCJ generated programs.

Normally java programs compiled with GCJ are dynamically linked against libgcj.so. While convenient, this makes it necessary to have libgcj.so available at runtime, and libgcj.so is quite large. An alternative is to statically link against libgcj. This causes its own set of problems, but in some cases it is worth the extra hassle to save space.

For versions of GCJ before 4.2

One way to link libgcj statically is to use a link line similar to this (on linux). Take a look at Foo:

public class Foo {
    public static void main(String args[]) {
        System.out.println("Hello.");
    }
}

Now let's statically link against libgcj...

gcj -c Foo.java
gcj --main=Foo -save-temps Foo.java
gcc -o Foo Foo.o Foomain.i -shared-libgcc -Wl,-non_shared -lgcj -Wl,-call_shared -lsupc++ -Wl,--as-needed -lz -lgcc_s -lpthread -lc -lm -ldl -Wl,--no-as-needed

Note that this is quite a bit more complicated than:

gcj -o Foo --main=Foo Foo.java

But the resulting executable does not need libgcj.so. Check it with readelf -d if you are in doubt.

New in GCJ 4.2

Starting in GCJ 4.2 static linking is supported from the gcj command line. The program above can be statically linked like this:

gcj -static-libgcj -o Foo --main=Foo Foo.java

Caution: Some parts of libgcj use reflection to load needed classes. When doing static linking, the linker will not find that some of these classes are needed and will omit them. If you get a ClassNotFoundException at runtime about some class in libgcj, you can force it to be loaded by adding some dummy code that refers to it. Something like this:

public static void forceLinkingOfLibGcjClasses() {
    Object dummy = new fully.qualified.name.of.missing.Class();
    // ...
}
http://gcc.gnu.org/wiki/Statically_linking_libgcj
crawl-002
refinedweb
305
68.47
biodome

Project description

biodome: Controlled environments

Reading environment variables with os.environ is pretty easy, but after a while one gets pretty tired of having to cast variables to the right types and handle fallback to defaults. This library provides a clean way to read environment variables and fall back to defaults in a sane way.

How you were doing it:

import os
try:
    TIMEOUT = int(os.environ.get('TIMEOUT', 10))
except ValueError:
    TIMEOUT = 10

Wordy, boilerplate, DRY violation, etc. How you will be doing it:

import biodome
TIMEOUT = biodome.environ.get('TIMEOUT', 10)

That's right, it becomes a single line. But there's a magic trick here: how does biodome know that TIMEOUT should be set to an int? It knows because it looks at the type of the default argument. This works for a bunch of different things:

# Lists
os.environ['IGNORE_KEYS'] = '[1, 2, 3]'
biodome.environ.get('IGNORE_KEYS', []) == [1, 2, 3]

# Dicts
os.environ['SETTINGS'] = '{"a": 1, "b": 2}'
biodome.environ.get('SETTINGS', {}) == dict(a=1, b=2)

If you look carefully at the above, you can see that we set the data via the stdlib os.environ dictionary; that's right, biodome.environ is a drop-in replacement for os.environ. You don't even have to switch out your entire codebase; you can do it piece by piece. And while we're on the subject of setting env vars, with biodome you don't have to cast them first, it does string casting internally automatically, unlike os.environ:

# Dicts
biodome.environ['SETTINGS'] = dict(b=2, a=1)  # No cast required
biodome.environ.get('SETTINGS', {}) == dict(a=1, b=2)

biodome also provides a function to load a file which specifies the values of environment variables. An example of such an env file:

# myconfig.env
# This sets the log level for all the loggers in the program
LOGGER_LEVEL=info
# Hourly backups are stored at this path and named with a timestamp.
BACKUP_PATH=/data/backups/
# The number of times to retry outgoing HTTP requests if the status
# code is > 500
RETRY_TIME=5

The name of the environment variable must be on the left and the value on the right. Each variable must be on its own line. Lines starting with a # are considered comments and are ignored. This env file can be loaded like this:

>>> import biodome
>>> biodome.load_env_file('myconfig.env')
>>> print(biodome.environ['RETRY_TIME'])
5

True and False

I don't know about you, but I use bool settings a LOT in environment variables, so handling this properly is really important to me. When calling biodome.environ.get('SETTING', default=<value>), the default value can also be a bool, i.e., True or False. In this case, any of the following values, and their upper- or mixed-case equivalents, will be recognized as True:

['1', 'y', 'yes', 'on', 'active', 'activated', 'enabled', 'true', 't', 'ok', 'yeah']

Anything not in this list will be considered as False. Do you have ideas for more things that should be considered as True? I take PRs!

Callables

For explicitness it is often convenient to declare and load environment variables at the top of the module in which they're used:

""" My new module """
import biodome

ENABLE_SETTING_XYZ = biodome.environ.get('ENABLE_SETTING_XYZ', True)

def blah():
    print(ENABLE_SETTING_XYZ)

You could call environ.get() inside the functions and methods where it is used, but then you would lose the convenience of documenting all the available environment variables at the top of the module. As a solution to this problem, biodome provides a way to produce a callable for a particular setting. An extra advantage of doing this is that it becomes quite easy to make use of changes in environment variables on the fly. Here's the modified example:

""" My new module """
import biodome

ENABLE_SETTING_XYZ = biodome.environ.get_callable(  # Same as before
    'ENABLE_SETTING_XYZ', True)

def blah():
    print(ENABLE_SETTING_XYZ())  # Now a callable!
How it works internally

The key theme here is that the type of the default value is used to determine how to cast the input value. This works for the following types:
- int
- float
- str
- list
- dict
- set (NOTE: only supported in Python 3+ due to ast.literal_eval())
- tuple

For the containers, we use ast.literal_eval(), which is much safer than using eval() because code is not evaluated. Safety first! (thanks to @nickdirienzo for the tip)
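The cast-to-the-default's-type mechanism described above can be sketched in a few lines. This is an illustration of the idea, not biodome's actual implementation; the function name env_get is mine:

```python
import ast
import os

TRUTHY = ("1", "y", "yes", "on", "active", "activated",
          "enabled", "true", "t", "ok", "yeah")

def env_get(name, default):
    """Read an env var, casting the raw string to the type of `default`."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    if isinstance(default, bool):          # must check bool before int
        return raw.strip().lower() in TRUTHY
    if isinstance(default, str):
        return raw
    if isinstance(default, (list, dict, set, tuple)):
        return ast.literal_eval(raw)       # safe: no code execution
    return type(default)(raw)              # int, float, ...

os.environ["TIMEOUT"] = "10"
os.environ["SETTINGS"] = '{"a": 1, "b": 2}'
print(env_get("TIMEOUT", 5))       # 10 (an int, because the default is an int)
print(env_get("SETTINGS", {}))     # {'a': 1, 'b': 2}
print(env_get("MISSING", 3))       # 3 (falls back to the default)
```

Note the bool check comes first, since in Python isinstance(True, int) is also true.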
https://pypi.org/project/biodome/
CC-MAIN-2021-21
refinedweb
750
65.32
I found something difficult in Python, which was a bit of a first, so I wrote a whole blog series about it, and now a whole video: Slides: Python Async Basics slides Blog posts: asyncio basics, large numbers in parallel, parallel HTTP requests, adding to stdlib

4 thoughts on "Python Async basics video (100 million HTTP requests)"

Hi, which version of python are you using in this video? I got an error like this in Python 3.7.0 and Python 3.7.3:

import asyncio

async def mycoro(number):
    print("Starting %d" % number)
    await asyncio.sleep(1)
    print("Finishing %d" % number)
    return str(number)

many = asyncio.gather(mycoro(1), mycoro(2), mycoro(3))
asyncio.run(many)

Traceback (most recent call last):
  File "C:/xx/study.py", line 13, in
    asyncio.run(many)
  File "C:\Users\xxxx\Python\Python37-32\lib\asyncio\runners.py", line 37, in run
    raise ValueError("a coroutine was expected, got {!r}".format(main))
ValueError: a coroutine was expected, got

Hi, it would have been 3.7.something. Try this: Note the extra "await" before asyncio.gather. If that works, I think it's a mistake I made translating 3.6 -> 3.7. Well spotted, and apologies!

Hi Andy, this is the best explanation so far I've been able to find for asyncio in a nutshell, and I've been able to implement it in my small project, all thanks to you. I've encountered an issue though, while trying to use https proxies with my requests. It says, "ValueError: Only http proxies are supported". After browsing through some github issues, I found the notorious issue #845, which states that https proxies are not supported by aiohttp. Do you have a way around this lil' hurdle?

Hi Ronnie, thank you for the kind words. I'm so glad it's been helpful. I'm afraid I have absolutely no idea how to make aiohttp work for https proxies, sorry!
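The fix hinted at in the reply (the snippet itself appears to be missing from the comment) amounts to awaiting gather() inside a coroutine handed to asyncio.run(). A reconstruction under that assumption, with the sleep shortened so it runs quickly:

```python
import asyncio

async def mycoro(number):
    print("Starting %d" % number)
    await asyncio.sleep(0.01)   # shortened from 1s for the example
    print("Finishing %d" % number)
    return str(number)

async def main():
    # asyncio.run() requires a coroutine, and gather() returns a Future,
    # so the await has to happen inside a coroutine function.
    return await asyncio.gather(mycoro(1), mycoro(2), mycoro(3))

results = asyncio.run(main())
print(results)  # ['1', '2', '3']
```

gather() preserves argument order in its results, so the returned list is always ['1', '2', '3'] even though the coroutines run concurrently.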
https://www.artificialworlds.net/blog/2019/02/26/python-async-basics-video-100-million-http-requests/
CC-MAIN-2020-05
refinedweb
323
74.49
In today’s Programming Praxis exercise, our goal is to implement a sorting algorithm for lists with three different elements that works in linear time. Let’s get started, shall we? import qualified Data.Vector as V We’re only allowed to use index and swap operations. Swap isn’t defined in the Vector package, but is easy to express as a bulk update. swap :: Int -> Int -> V.Vector a -> V.Vector a swap i j a = a V.// [(i,a V.! j), (j,a V.! i)] To sort the array, we keep track of where the next red and blue elements should be inserted, as well as the current index. When encountering red and blue elements, they are shifted to the correct location. If the element was blue, we have to test again since it could be anything. If it was red, the element that’s swapped to the current location will always be white, so we can move on. Once we reach the start of the group of blue elements at the end we can stop. flag :: V.Vector Char -> V.Vector Char flag xs = f (0, V.length xs - 1) xs 0 where f (r,b) a n = if n > b then a else case a V.! n of 'R' -> f (r+1,b ) (swap n r a) (n+1) 'B' -> f (r, b-1) (swap n b a) n _ -> f (r, b ) a (n+1) Some tests to see if everything is working properly: test :: String -> Bool test x = flag (V.fromList x) == V.fromList (filter (== 'R') x ++ filter (== 'W') x ++ filter (== 'B') x) main :: IO () main = do print $ test "" print $ test "W" print $ test "R" print $ test "B" print $ test "RWB" print $ test "BWR" print $ test "RWBR" print $ test "WRBRBRBWRWBWBRBWRBWRBWRBWBBBRBRWBRWB" Tags: algorithm, bonsai, code, dutch, flag, Haskell, kata, praxis, programming, sort, sorting
https://bonsaicode.wordpress.com/2013/03/05/dutch-national-flag/
CC-MAIN-2016-50
refinedweb
308
82.24
Xamarin Component Review Guidelines

last updated: 2017-03

This document provides the rules and requirements that your component should comply with to ensure it is reviewed and approved as quickly as possible.

Overview

We opened the Xamarin component store in March 2013 as part of our Xamarin 2.0 launch. The component store allows talented developers to package and publish their code for other developers to integrate into their own projects, improving the entire Xamarin ecosystem and making it an even better place to develop your mobile apps. We have published these component review guidelines so that you can see how we review your components and what we are looking for. By reading them you can improve the speed of review and increase your chances of getting approved first time.

1. Terms and Conditions

1.1 Xamarin pays 70% of net component revenue to vendors within 30 days of the end of each calendar month.
1.2 Vendors must provide 30-day notice on price changes and must honor Xamarin's 30-day return policy.
1.3 Vendor licenses must allow library distribution as part of end-user apps without hidden fees or royalties.
1.4 You can read the full agreement for complete details.

2. Legal Requirements

2.1 Your code must comply with all laws that apply to you in your location. It is up to you to know what they are, and whether your code complies or not.
2.2 You must own, or have full rights to, all of the intellectual property that you include in your component package.
2.3 We trust that your component does NOT misrepresent itself, or mislead people as to its purpose or its origin.
2.4 We won't approve components that induce or entice other people to commit crimes or endanger themselves or others.
2.5 Components with rude or offensive content, or that contain defamatory or libelous content, will be rejected.

3. Metadata (name, descriptions, icons, websites)

3.1 Component name must be descriptive.
3.2 Component name must not include "Xamarin" or imply a connection with Xamarin.
3.3 Publisher name must be provided and be relevant to the actual author.
3.4 Component summary must be correct and concise.
3.5 If the submission updates an existing component, ensure that release notes are provided. The notes should indicate what has changed since the last release. This information assists users in determining whether to update or not.
3.6 Make sure that the component is appropriately categorized and tagged. Icons should look good, be appropriate, and professionally represent the component in our store.
3.7 Icons should not apply rounded corners or glossy highlights.
3.8 If you charge for your component, make sure that the price is comparable to similar components.
3.9 The main website link must work and be related either to the component (such as the github repository) or to the author.
3.10 If a source link is provided, it must work and be related to the component (such as the github repository).
3.11 The component ID must not include the name "Xamarin" and must be specific to your product. Component IDs that are catch-all terms will be rejected.

4. Screenshots and Videos

4.1 If a link to a video is provided, it must work and be appropriate for the component.
4.2 If screenshots are provided, they must be appropriate for the component, be 740 wide by 400 high, and look clean and professional.
4.3 If the screenshots require attribution, such as with PlaceIt, this must be included on the details page.

5. Details

5.1 Components must include a short description of what the component does and why someone would want to put it in their apps.
5.2 Code snippets included in the page must work and be compilable.
5.3 Ensure that the information provides accurate and useful content, with proper spelling and grammar.
5.4 All links included in the details page must work.
5.5 Any features listed as available must be functional in the component.
5.6 If the screenshots require attribution, such as with PlaceIt, this must be included on the details page.
5.7 If any portion of the details has been copied from elsewhere, then attribution must be provided.
5.8 The formatting of the details page must render correctly.

6. Getting Started

6.1 An extended description of the component must be provided.
6.2 If a License Key or access token is required, this must be stated clearly, illustrating the steps involved to get one.
6.3 If a particular Android API is being targeted, rather than being Automatic, this should be documented on the getting started page.
6.4 Must demonstrate initializing the library and perhaps some useful code that the user might use.
6.5 Must show library-specific using statements or fully qualified class names.
6.6 Code snippets included in the page must work and be compilable.
6.7 Ensure that the information provides accurate and useful content, with proper spelling and grammar.
6.8 All links included in the getting started page must work.
6.9 Any features listed as available must be functional in the component.
6.10 If any portion of the details has been copied from elsewhere, then attribution must be provided.
6.11 The formatting of the details page must render correctly.

7. License

7.1 For open source components the license should be MIT X11, BSD, Apache2 or MS-PL.
7.2 The terms of the license should be included on the page.
7.3 Commercial or proprietary libraries must include the appropriate licensing, copyright, and attribution information.
7.4 The formatting of the license page must render correctly.
7.5 All links included in the license page must work.

8. Libraries

8.1 Components advertised as supporting iOS should comply with Apple's App Store Guidelines and Mobile Human Interface Guidelines.
8.2 Components advertised as supporting Android should comply with Google Play's publishing checklist, and we recommend adhering to Amazon's Mobile App Distribution Program as well.
8.3 Components will be reviewed in both simulators/emulators and real devices.
8.4 Any components that crash, or do not run as expected, will be rejected.
8.5 Must not target a pre-release build of Xamarin.iOS or Xamarin.Android, such as an Alpha or Beta channel release.
8.6 Any dependencies that are not available from the component store must be included with the component.
8.7 Nuget is not currently installed in Xamarin Studio by default and therefore cannot be guaranteed to be installed on the user's machine. Any component that has dependencies on Nuget packages will, currently, be rejected.
8.8 Libraries must have a namespace.
8.9 The namespace should not include Xamarin.
8.10 If source code is provided within the component package, then it should compile without modification by the user.

9. Samples - General

9.1 Component must include at least one sample for every supported platform.
9.2 Each sample should be provided inside its own separate solution per platform, though multiple projects can be included in each solution.
9.3 Each sample must build using the latest Stable channel release in both Release and Debug mode.
9.4 Each sample must build and compile without requiring modification by the user.
9.5 Each sample must deploy successfully to both the simulator and real devices.
9.6 Each sample must run successfully and not crash.
9.7 Each sample must behave as expected.
9.8 Samples must target the same platform API / SDK version as the library in the component.
9.9 Samples should not reference components that must be paid for in order to download and allow the sample to compile.
9.10 Comments in sample code must be clean, descriptive and concise and not contain foul or offensive language.

10. Samples - iOS

10.1 Project Options > Build > iOS Bundle Signing > Identity must be set to Developer (Automatic).
10.2 Project Options > Build > iOS Bundle Signing > Provisioning Profile must be set to Automatic.
10.3 Info.plist > Deployment Target must be set to 4.3 or above.

11.
Samples - Android

11.1 Project Options > Android Build > Advanced > ABI Settings must all be ticked to ensure that the sample will deploy to any and all simulators and devices.
11.2 Project Options > General > Target framework must be API Level 10 (Gingerbread) or above.
11.3 Project Options > Android Application > Target Android version should be set to Automatic.
https://docs.mono-android.net/guides/cross-platform/advanced/submitting_components/component_review_guidlines/
CC-MAIN-2017-30
refinedweb
1,446
56.86
This article describes how to shuffle the content of an array or a list in Java. After the shuffle, the elements in the array or the list are randomly ordered.

1. Shuffling an array or a list

An array or a java.util.List data structure contains an ordered list of values. Shuffling an array or a list means that you randomly re-arrange the content of that structure. Have you ever wondered how you could shuffle an array or a list without the collections framework? This article demonstrates how the shuffling works so that you can learn how the standard libraries might do this. The approach works independently of the content of the array or the list. The shuffle is random because the algorithm uniformly selects an element from those which have not yet been placed. For example, if position 2 is being filled, it can be exchanged with any element from position 2 up to position n-1 (as the list/array has positions 0 to n-1).

2. Implementation in Java

Create a Java project "de.vogella.algorithms.shuffle". Create the following program for shuffling arrays.

package de.vogella.algorithms.shuffle;

import java.util.Random;

public class ShuffleArray {
    public static void shuffleArray(int[] a) {
        int n = a.length;
        Random random = new Random();
        random.nextInt();
        for (int i = 0; i < n; i++) {
            int change = i + random.nextInt(n - i);
            swap(a, i, change);
        }
    }

    private static void swap(int[] a, int i, int change) {
        int helper = a[i];
        a[i] = a[change];
        a[change] = helper;
    }

    public static void main(String[] args) {
        int[] a = new int[] { 1, 2, 3, 4, 5, 6, 7 };
        shuffleArray(a);
        for (int i : a) {
            System.out.println(i);
        }
    }
}

Create the following program for shuffling lists.
package de.vogella.algorithms.shuffle;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class ShuffleList {
    public static void shuffleList(List<Integer> a) {
        int n = a.size();
        Random random = new Random();
        random.nextInt();
        for (int i = 0; i < n; i++) {
            int change = i + random.nextInt(n - i);
            swap(a, i, change);
        }
    }

    private static void swap(List<Integer> a, int i, int change) {
        int helper = a.get(i);
        a.set(i, a.get(change));
        a.set(change, helper);
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>();
        list.add(1);
        list.add(2);
        list.add(3);
        list.add(4);
        list.add(5);
        list.add(6);
        list.add(7);
        shuffleList(list);
        for (int i : list) {
            System.out.println(i);
        }
    }
}

Why does this shuffle all values randomly? The value at position 0 is exchanged with a randomly selected value (including the original value at position 0). This means that after the first loop, position 0 has a random value with an equal likelihood of any value. We continue this with position 1, but we do not need to consider position 0 again as this position already has a random value assigned. After the loop it is equally likely that each value is at any position.
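The uniformity argument in the last paragraph can be checked empirically. This small standalone program (the class name and counting setup are mine, not from the article) runs the same swap loop many times and counts how often each value lands at position 0; for a uniform shuffle each count should hover around trials / n:

```java
import java.util.Arrays;
import java.util.Random;

public class ShuffleCheck {
    // The same swap-with-a-random-later-element loop as the article's code.
    public static void shuffle(int[] a, Random random) {
        for (int i = 0; i < a.length; i++) {
            int change = i + random.nextInt(a.length - i);
            int helper = a[i];
            a[i] = a[change];
            a[change] = helper;
        }
    }

    public static void main(String[] args) {
        int n = 4;
        int trials = 100000;
        int[] countAtZero = new int[n]; // how often each value ends at index 0
        Random random = new Random();
        for (int t = 0; t < trials; t++) {
            int[] a = {0, 1, 2, 3};
            shuffle(a, random);
            countAtZero[a[0]]++;
        }
        // Each count should be near trials / n = 25000 if the shuffle is uniform.
        System.out.println(Arrays.toString(countAtZero));
    }
}
```

A biased variant (for example, always swapping with a fully random index instead of a not-yet-placed one) produces visibly skewed counts in the same experiment.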
https://www.vogella.com/tutorials/JavaAlgorithmsShuffle/article.html
CC-MAIN-2021-17
refinedweb
498
58.08
Asked by: Pipeline Error

Question

Hi, please help, as I get the below error on BizTalk:

There was a failure executing the receive pipeline: "@@@@@@@@@@, @@@@@@@@ Version=1.6.0.0, Culture=neutral, PublicKeyToken=0ba159bca7ecb7f3" Source: "XML disassembler" Receive Port: "IJS Services" URI: "" Reason: Finding the document specification by message type "" failed. Verify the schema deployed properly.

Lesibana Chokoe

All replies

Sorry, are we missing something? Reason: Finding the document specification by message type "" failed. Verify the schema deployed properly. The message is clear and should be taken at face value.

Hi,
1) Check the GAC and ensure that the schema is there; be sure the version numbers are the same.
2) You should also restart the host instance and try again. The message type has the form targetnamespace#rootnode.
3) Check the input message. Generate an instance from the schema and then try.

Rachit Sikroria (Microsoft Azure MVP)

The service you are consuming is utilising WS-Trust, a part of WS-Security. You will need to configure your WCF Port to handle these extensions. This is most likely to be a part of message security, but you will have to check the requirements with the service or provider directly.

If this is helpful or answers your question - please mark accordingly. Because I get points for it which gives my life purpose (also, it helps other people find answers quickly)

Let's keep it simple. You don't need to look in the database; where you should look is the Schemas section under All Artifacts. Based on the message, the caller is expecting you to support WS-Trust. Ug... So, the first thing you should do is talk to them to find out if this is really necessary and if they support a more direct, simpler and current authentication scheme, such as Certificates. If not, then your next step is to inform your management that because they are requiring WS-Trust, you may have to take extra time to implement the full setup on your side to accommodate them.
This is not the schema that i had costume made it looks like its a schema that the BizTalk system use. Yes am using WCF port and and its consuming a service. how do i configure it to handle these extension. PLEASE HELP Lesibana Chokoe Hi Hi Alastair Grant.. This is not on a live environment and i have access to the service as i am the one hosting it so i have privilege to change anything i want. Lesibana Chokoe I'd turn off WS-Trust/Security on the service if you don't have any need for it. In a standard .NET WCF service, it's not enabled by default and requires a fair bit of configuration, so I assume this isn't a service you've written. If you can't turn it off, then you will need to look at the accompanying documentation or speak to the author as we cannot help you without having a closer look at this service (and even then, that may be beyond what can be done on a forum).If this is helpful or answers your question - please mark accordingly. Because I get points for it which gives my life purpose (also, it helps other people find answers quickly) - Proposed as answer by Alastair Grant Thursday, May 31, 2018 6:30 PM
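One practical way to debug this class of error is to check what message type BizTalk will compute for an incoming document: it is the target namespace and root node of the XML joined with "#", as mentioned above. The helper below is purely illustrative (Python just for illustration; biztalk_message_type is my own name, not a BizTalk API) and reads the root element of a sample instance.

```python
import xml.etree.ElementTree as ET

def biztalk_message_type(xml_text):
    """Compute the BizTalk-style message type (targetNamespace#rootNode)
    for an XML instance: the root element's namespace and local name."""
    root = ET.fromstring(xml_text)
    if root.tag.startswith("{"):
        # ElementTree renders namespaced tags as {namespace}localname
        ns, local = root.tag[1:].split("}", 1)
        return "%s#%s" % (ns, local)
    return "#%s" % root.tag  # no namespace on the root element

print(biztalk_message_type('<r xmlns="http://example.org/ns"/>'))
# -> http://example.org/ns#r
```

Comparing this value against the message type named in the pipeline error (or against the deployed schema's target namespace and root) quickly shows whether the instance and the schema disagree.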
https://social.msdn.microsoft.com/Forums/en-US/94bc631f-8920-4b28-8c8c-4730bafbdd84/pipeline-error?forum=biztalkgeneral
fody weather station, wind sensor

Anyone heard of this before? For 400 SEK (~€40) you get temp, hum, wind and rain on the outdoor unit and temp, hum and pressure on the indoor unit. Worth its price. I will open up the outdoor unit and try to connect to it with an Arduino. Another name for this is Alecto WS-48000.

I have now opened the outdoor unit. Does anyone have code for collecting data from 8 IR receivers? I need to find out what voltage the IR emitter is using, so I don't burn it. EDIT: Google says 1.4-1.6 volts.

For wind speed it is simple to use an interrupt. For temperature and humidity I don't know what kind of sensor it is; that one I can easily change to e.g. an Si7021/SHT21. For wind direction it is using IR: a wheel controls where the IR is pointing. For wind speed it is a reed switch.

Control board

AWI (Hero Member): @flopp the IR 'emitter' is probably a LED, so you need a current of a few milliamps. Start low (1 kOhm in series) and find out when it starts to function.

EDIT: I use a digital input with pullup to see which sensor receives the light.

So far I have code for reading the direction and sending it to the controller. I will continue to add code for wind speed, temp, hum and rain.

It was an HT-01D sensor for measuring temp/hum. It is I2C and the address is 0x28. I found code for the HYT 221 that worked fine.

I was looking at another product but yours seems more easily hackable.

For anyone interested, I am in the process of designing a fully 3D-printable weather station. The plan for the 3D-printable parts:

- rain gauge
- combined wind speed and wind direction sensors
- radiation shield that will house sensors for temp, humidity, barometric pressure and possibly a lux sensor
- central brain box for holding the MySensors electronics to control everything

I am just finishing the wind speed and direction sensor parts today. I will post pics later. I have a post in this category for the rain gauge that will be part of the station. I am designing all the parts myself.
I still need to figure out a few more parts plus the MySensors electronics and code. I'll post everything as I get the different parts done.

@flopp are you still using the old library version? Are you going to make it a battery-powered node or something else? I noticed you didn't use any sleep in the code.

@gohan said in fody weather station, wind sensor:

> @flopp are you still using old library version?

Yes, this is for 1.5.

> Are you going to make it a battery powered node or else? I noticed you didn't use any sleep in the code

This is powered from a PC, not battery. I want data very often, that's why it is not battery powered. I can also change the code if something is wrong.

I had a problem with very high gusts, up to 30-50 m/s. New code again:

```cpp
#include <SPI.h>
#include <MySensor.h>
#include <Wire.h>

#define WIND_CHILD 0
#define TEMP_CHILD 1
#define HUM_CHILD 2
#define RAIN_CHILD 3

MySensor gw;
MyMessage WSMsg(WIND_CHILD, V_WIND);
MyMessage WGMsg(WIND_CHILD, V_GUST);
MyMessage WDMsg(WIND_CHILD, V_DIRECTION);
MyMessage TempMsg(TEMP_CHILD, V_TEMP);
MyMessage HumMsg(HUM_CHILD, V_HUM);
MyMessage RainMsg(RAIN_CHILD, V_RAIN);
MyMessage RainCounterMsg(RAIN_CHILD, V_VAR1);

// Wind speed
volatile unsigned long lastPulse = 0;
float WS = 0;
float WG = 0;
float WScount = 0;

// Wind direction
int WDarray[8] = {45, 90, 135, 180, 225, 270, 315, 0};
int WD;

// Rain
float hwRainVolume = 0;  // current rain volume, calculated in hardware
float bucketSize = 0.4;  // mm per tip
boolean pcReceived = false;
unsigned long lastSend = 0;

// Temperature/humidity
double hum = 0;
double temp = 0;

void setup()
{
  Wire.begin(); // start I2C
  gw.begin(incomingMessage, AUTO, false);

  // Send the sketch version information to the gateway and controller
  gw.sendSketchInfo("Wind test", "170421");
  gw.present(WIND_CHILD, S_WIND);
  gw.present(TEMP_CHILD, S_TEMP);
  gw.present(HUM_CHILD, S_HUM);
  gw.present(RAIN_CHILD, S_RAIN);
  gw.request(RAIN_CHILD, V_VAR1);
  gw.wait(5000);

  // Pin 2 is rain, pin 3 is wind speed.
  // Pins 4-8 and 14-16 are wind direction.

  // Configure pins 2-8 as inputs and enable the internal pull-up resistors
  for (int i = 2; i < 9; i++) {
    pinMode(i, INPUT_PULLUP);
  }
  // Configure pins 14-16 (A0-A2) as inputs and enable the internal pull-up resistors
  for (int i = 14; i < 17; i++) {
    pinMode(i, INPUT_PULLUP);
  }

  attachInterrupt(0, Rain, FALLING);      // rain
  attachInterrupt(1, WindSpeed, FALLING); // wind speed
}

void loop()
{
  readTempHum();       // read temperature and humidity
  readWindDirection(); // read wind direction
  resend((WSMsg.set(WS, 1)), 10);
  resend((WGMsg.set(WG, 1)), 10);
  WG = 0; // reset gust
  resend((WDMsg.set(WD, 1)), 10);
  resend((RainMsg.set((float)hwRainVolume, 2)), 10);
  resend((TempMsg.set(temp, 1)), 10);
  resend((HumMsg.set(hum, 1)), 10);
  gw.wait(60000);
}

// NOTE: the definitions of resend() and readWindDirection() were lost from
// the post as archived; the two functions below are plausible minimal
// reconstructions of what the sketch needs.
void resend(MyMessage &msg, int repeats)
{
  int repeat = 0;
  boolean sendOK = false;
  while (!sendOK && repeat < repeats) {
    sendOK = gw.send(msg);
    repeat++;
  }
}

void readWindDirection()
{
  // One IR receiver per direction on pins 4-8 and 14-16; the receiver that
  // sees the emitter reads LOW and selects an entry in WDarray.
  int pins[8] = {4, 5, 6, 7, 8, 14, 15, 16};
  for (int i = 0; i < 8; i++) {
    if (digitalRead(pins[i]) == LOW) {
      WD = WDarray[i];
    }
  }
}

void Rain()
{
  Serial.println("Rain");
  unsigned long currentTime = millis();
  if (!pcReceived && (currentTime - lastSend > 5000)) {
    gw.request(RAIN_CHILD, V_VAR1);
    Serial.println("Request rainCount");
    lastSend = currentTime;
    gw.process();
    return;
  }
  if (!pcReceived) {
    return;
  }
  hwRainVolume = hwRainVolume + bucketSize;
  resend((RainCounterMsg.set(hwRainVolume, 2)), 10);
  resend((RainMsg.set((float)hwRainVolume, 2)), 10);
  lastSend = currentTime;
}

void WindSpeed()
{
  WScount++;
  if (WScount >= 5) {
    detachInterrupt(1); // pause the wind-speed interrupt while calculating
    Serial.println(WScount);
    WScount = 0.4775 * WScount; // pulses -> metres of wind run
    Serial.println(WScount);
    unsigned long newPulse = micros();
    unsigned long interval2 = newPulse - lastPulse;
    Serial.print("newPulse=");
    Serial.println(newPulse);
    Serial.print("lastPulse=");
    Serial.println(lastPulse);
    Serial.print("interval2=");
    Serial.println(interval2);
    if (interval2 < (WScount * 12000L)) { // sometimes we get a wrong interrupt
      Serial.println("RETURN");
      return;
    }
    WS = (WScount / (interval2 / 1000000.0));
    Serial.print("Wind Speed ");
    Serial.print(WS);
    Serial.println(" m/s");
    lastPulse = newPulse;
    if (WS > WG) {
      WG = WS;
    }
    Serial.print("Gust ");
    Serial.print(WG);
    Serial.println(" m/s");
    WScount = 0;
    attachInterrupt(1, WindSpeed, FALLING);
  }
}

void readTempHum()
{
  Wire.beginTransmission(0x28); // begin transmission with the sensor on the I2C bus
  Wire.requestFrom(0x28, 4);    // request 4 bytes
  if (Wire.available() == 4) {
    int b1 = Wire.read();
    int b2 = Wire.read();
    int b3 = Wire.read();
    int b4 = Wire.read();
    Wire.endTransmission(); // end transmission and release the I2C bus

    // Combine humidity bytes and calculate humidity
    int rawHumidity = b1 << 8 | b2; // compound bitwise to get the 14-bit measurement;
                                    // the first two bits are status/stall bits
    rawHumidity = (rawHumidity &= 0x3FFF);
    hum = 100.0 / pow(2, 14) * rawHumidity;

    // Combine temperature bytes and calculate temperature
    b4 = (b4 >> 2); // mask away the 2 least significant bits (see the HYT 221 docs)
    int rawTemperature = b3 << 6 | b4;
    temp = 165.0 / pow(2, 14) * rawTemperature - 40;
  }
}

void incomingMessage(const MyMessage &message)
{
  if (message.type == V_VAR1) {
    hwRainVolume = message.getFloat();
    pcReceived = true;
    Serial.print("Received last pulse count from gw: ");
    Serial.println(hwRainVolume, 2);
  }
}
```
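As an aside on the gust debugging above: the unit conversions themselves are easy to sanity-check outside the Arduino. Below is the same arithmetic transcribed to plain Python (the 0.4775 m-per-pulse factor and the HYT221 bit masking are taken from the sketch; the helper names are mine).

```python
def wind_speed(pulses, interval_us, factor=0.4775):
    """Metres of wind run per pulse, divided by the elapsed time in seconds.

    interval_us is in microseconds, as returned by micros() on the Arduino.
    """
    metres = factor * pulses
    return metres / (interval_us / 1_000_000.0)

def hyt_convert(b1, b2, b3, b4):
    """HYT221-style conversion as in readTempHum(): 14-bit humidity and
    14-bit temperature packed into four I2C bytes."""
    raw_hum = ((b1 << 8) | b2) & 0x3FFF  # strip the two status bits
    hum = 100.0 / 2**14 * raw_hum
    raw_temp = (b3 << 6) | (b4 >> 2)     # drop the 2 least significant bits of b4
    temp = 165.0 / 2**14 * raw_temp - 40
    return hum, temp

# 5 pulses measured over exactly one second:
print(wind_speed(5, 1_000_000))  # 2.3875 m/s
```

If the formulas check out here, the 30-50 m/s spikes point at the interrupt timing (spurious pulses or a stale lastPulse) rather than the conversion maths.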
https://forum.mysensors.org/topic/6645/fody-weather-station-wind-sensor
On Sat, Jul 27, 2002 at 12:26:53PM +0100, Thomas Leonard wrote:
> On Fri, Jul 26, 2002 at 09:27:34PM +0200, Filip Van Raemdonck wrote:
> >.

Also, I'm not sure if the part describing the use of the mime database to
tell which programs can open what mime type belongs in that section.
Shouldn't that rather go in the section describing what the xml files are?

> > Next, I haven't seen any indication as to which file takes precedence when
> > two or more in the same directory provide the same information, only for
> > when they are in different directories or if one of them is Override.xml.
>
> ?

Or they don't agree about the magic. This may not be the best example since
the word document most likely is in the shared database already, but the
same can (and eventually will) happen with some new file type.

Regards,

Filip

--
/* Amuse the user. */
	  \|/ ____ \|/
	  "@'/ ,. \`@"
	  /_| \__/ |_\
	     \__U_/
  -- /usr/src/linux-2.4.2/arch/sparc/kernel/traps.c::die_if_kernel()

--- shared-mime-info-0.8/shared-mime-info-spec.xml.orig
+++ shared-mime-info-0.8/shared-mime-info-spec.xml
@@ -296,7 +296,7 @@
 Further, the existing databases have been merged into a single package
 <citation>SharedMIME</citation>.
 </para>
-  <sect2>
+  <sect2 id="s2_layout">
     <title>Directory layout</title>
     <para>
 There are two important requirements for the way the MIME database is stored:
@@ -567,14 +567,17 @@
 </para>
 </sect2>
 <sect2>
-  <title>User preferences</title>
+  <title>User modification</title>
   <para>
-The MIME database is NOT intended to store user preferences. Although users can edit the database,
-this is only to provide corrections and to allow them to install software themselves. Information such
-as "text/html files should be opened with Mozilla" should NOT go in the database. However, it may be
-used to store static information, such as "Mozilla can view text/html files",
-and even information such as "Galeon is the GNOME default text/html browser" (via an extension element
-with a GNOME namespace).
+The MIME database is NOT intended to store user preferences. Users should never
+edit the database. If they wish to make corrections or provide MIME entries for
+software that doesn't provide these itself, they should do so by means of the
+Override.xml mentioned in <xref linkend="s2_layout" />. Information such as
+"text/html files need to be opened with Mozilla" should NOT go in the database.
+However, using extension elements introduced by additional namespaces (like a
+GNOME namespace), the database may be used to store static information, such as
+"Mozilla can view text/html files", and even information such as "Galeon is the
+GNOME default text/html browser".
 </para>
 </sect2>
 </sect1>
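For reference, the Override.xml that the patch points users to is an ordinary shared-mime-info source file. A minimal, purely hypothetical example (the type name and glob pattern are invented):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <!-- hypothetical user correction: associate *.myext with a custom type -->
  <mime-type type="application/x-myformat">
    <comment>My custom data file</comment>
    <glob pattern="*.myext"/>
  </mime-type>
</mime-info>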
https://listman.redhat.com/archives/xdg-list/2002-July/msg00086.html
Firebase is a complete package when it comes to building mobile applications and integrating them with a serverless backend. Managed by Google, Firebase includes services like mobile analytics, push notifications, and crash reporting, and provides email as well as social authentication out of the box.

As a React Native developer, using Firebase lets you start building an MVP (minimum viable product) while keeping costs low, and lets you build the application considerably faster than the traditional approach of writing your own custom backend solution.

In this tutorial, you will learn how to integrate the Firestore cloud database in a React Native app. We will create a bare-minimum demo application from scratch with the help of Firebase and React Native to see how they work together, and understand the differences between a Firestore database and a Real-time Database in Firebase. To learn more about what Firestore is and how it differs from the Real-time Database offered by Firebase, please refer to the official documentation here.

The complete code for this tutorial is available in this GitHub repository.

Setup a Firebase Project

To get started, you need a Firebase account. To sign up or log in, visit the Firebase Console and click on the button "Add Project". In the next screen, fill in the name of the project, check both the boxes for now, and click on "Create Project". Once the Firebase project is created, you will be welcomed by the home screen like below.

Create a New React Native Project

To set up a new React Native project, make sure you have the react-native CLI installed. If not, you can run the command below.

```shell
npm i -g react-native-cli
```

Next, run the command to generate a new React Native app.

```shell
react-native init RNFirestoreDemo
```

When the above command is done running, traverse into the project directory using `cd RNFirestoreDemo`.
Now, let's check that everything is working correctly and our React Native application has been properly initialized by running one of the following commands.

```shell
# on macOS
react-native run-ios

# for Windows/Unix users
react-native run-android
```

Connecting Firebase with React Native App

To connect the Firebase SDK with a React Native app, you need an API key, which has to be stored in the client-side app somewhere (probably as environment variables when deploying the app in production). Click on the settings icon in the sidebar menu in the Firebase console and go to Project settings. Look for the "Your apps" section, where all the platforms (iOS, Android, and web) are available. Click on Web as shown below.

Next, copy only the config variable into a new file called config/firebase.js inside the React Native project. Initially, the file might look like the snippet below.

```js
// firebase.js
import firebase from "firebase/app"

const config = {
  apiKey: "AIzaXXXXXXXXXXXXXXXXXXXXXXX",
  authDomain: "rnfirebXXX-XXXX.firebaseapp.com",
  databaseURL: "rnfirebXXX-XXXX.firebaseapp.com",
  projectId: "rnfirebase-XXXX",
  storageBucket: "rnfirebase-XXXX.appspot.com",
  messagingSenderId: "XXXXXXX",
  appId: "XXXX"
}
```

Where all the XXXXs are the key values.

Now, you are required to add the Firebase SDK to the React Native app. You could have imported Firebase from just firebase in the above snippet. The reason we are using firebase/app is that /app only adds the core of the Firebase services. Right now, to integrate Firebase with our React app, we only need the initializeApp() method to pass all the keys required for configuration. To proceed, run the following command from your terminal. Do note that the React Native CLI by default uses yarn instead of npm as the JavaScript package manager to install dependencies.

```shell
yarn add firebase
```

Lastly, do not forget to export the Firebase object instance that you will be using in the React app later.

```js
// firebase.js
// ...
firebase.initializeApp(config)

export default firebase
```

Adding Environment Variables

To make sure sensitive data like API keys stay safe, there is the mechanism of .env files. This is a very common practice in JavaScript programming, at least in the case of web applications. In React Native, there is a way to save API keys and other sensitive information without integrating any native code. This is provided by react-native-dotenv. This package allows you to import environment variables from a .env file. To get started, install this dependency by running the command below.

```shell
yarn add --dev react-native-dotenv
```

After it has successfully installed, open the babel.config.js file and add the following.

```js
module.exports = {
  presets: [
    "module:metro-react-native-babel-preset",
    "module:react-native-dotenv"
  ]
}
```

Create a new file called .env in the root directory and add all the Firebase config values here (hint: the config values we added earlier in the firebase.js file).

```
API_KEY=XXXX
AUTH_DOMAIN=XXXX
DATABASE_URL=XXXX
PROJECT_ID=XXXX
STORAGE_BUCKET=XXXX
MESSAGING_SENDER_ID=XXXX
APP_ID=XXXX
```

Again, the Xs represent the actual values from your configuration object. When done with this step, open firebase.js and modify it accordingly.

```js
// firebase.js
import firebase from "firebase"
import {
  API_KEY,
  AUTH_DOMAIN,
  DATABASE_URL,
  PROJECT_ID,
  STORAGE_BUCKET,
  MESSAGING_SENDER_ID,
  APP_ID
} from "react-native-dotenv"

const config = {
  apiKey: API_KEY,
  authDomain: AUTH_DOMAIN,
  databaseURL: DATABASE_URL,
  projectId: PROJECT_ID,
  storageBucket: STORAGE_BUCKET,
  messagingSenderId: MESSAGING_SENDER_ID,
  appId: APP_ID
}

firebase.initializeApp(config)

export default firebase
```

Creating a Firestore Database

Go to the Database section in the Firebase console and create a Cloud Firestore database. For this demo you can start with open security rules, which allow anyone who has access to the config keys to read and write to the database. That said, this section should look like below.

```
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write;
    }
  }
}
```

Open firebase.js and import the firestore instance.
```js
// firebase.js
// after the other imports
import "firebase/firestore"

// just before the export default statement
export const firestore = firebase.firestore()
```

Exporting the Firestore instance will let you use it to query the database. Now, go back to the Firebase console and go to the Data tab under Firestore. You will notice that there is currently no data inside the database. Under the "Add Collection" column, you'll find each collection that you might have in the database. Click on "Add Collection" and create a collection called books. You can try to add the first document to this collection by pressing the save button.

Adding Data to Firestore

In this section, let us build a React Native component that will allow us to communicate with the Firestore database. In other words, it will allow us to perform CRUD operations. We are going to build a bare-minimum app. The UI will have two input fields and a button to add the value of each input field to the Firestore database. Each input field will represent the title and the author of a book (in the previous section, we created a collection in the database called "books").

Open the App.js file, which at this moment contains a lot of boilerplate code. Define an initial state for the App component. Then, inside the render function, this component has two input fields for the title and the author, and a button to add the book.
```jsx
import React, { Component } from "react"
import {
  StyleSheet,
  SafeAreaView,
  Text,
  View,
  TextInput,
  Button
} from "react-native"
import { firestore } from "./config/firebase"

export default class App extends Component {
  state = {
    title: "",
    author: ""
  }

  // NOTE: the snippet as archived was truncated here; the firestore import
  // above, the addBook handler, and the second input are a plausible
  // reconstruction of the tutorial's intent.
  addBook = () => {
    firestore.collection("books").add(this.state)
    this.setState({ title: "", author: "" })
  }

  render() {
    const { title, author } = this.state
    return (
      <SafeAreaView style={styles.container}>
        <View style={styles.inputContainer}>
          <TextInput
            value={title}
            style={styles.textInput}
            placeholder="Title"
            onChangeText={title => this.setState({ title })}
          />
          <TextInput
            value={author}
            style={styles.textInput}
            placeholder="Author"
            onChangeText={author => this.setState({ author })}
          />
          <Button title="Add book" onPress={this.addBook} />
        </View>
      </SafeAreaView>
    )
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: "#F5FCFF"
  },
  inputContainer: {
    margin: 30
  },
  textInput: {
    height: 30,
    textAlign: "center",
    color: "#333333",
    marginBottom: 10,
    fontSize: 24,
    borderWidth: 1,
    borderBottomColor: "#111111"
  }
})
```

Once you are done modifying the App component, open two terminal windows simultaneously and run the following commands in separate windows.

```shell
# first terminal window
npm start

# second terminal window (for iOS)
react-native run-ios

# second terminal window (for Android)
react-native run-android
```

Once the compilation is done, you will get the following result. Let us try to add a book. Since the application is not throwing an error, this means it did work. To verify, go to the Firebase console and the Firestore database section. You will see the name of the collection as well as a document with an auto-generated ID containing the input values we just entered above.

Conclusion

Congratulations! You have completed the tutorial. The demonstration in this tutorial is a bare minimum, but you can go ahead and write features like reading and deleting an item from the Firestore database in the React Native app. Think of it as a challenge.

Lastly, if you're building React Native apps with sensitive logic, be sure to protect them against code theft and reverse engineering by following our guide.
https://blog.jscrambler.com/getting-started-with-firestore-and-react-native
How to explore data using Elastic search and Kibana? Part – 1

Tushar Raut | Full Stack Developer, Ashnik

This article covers installing and configuring Elasticsearch and Kibana (with the x-pack basic license), indexing data using Python, and some use cases of the Elastic Stack's Graph feature, with sample dashboards.

Elastic Search

Elasticsearch is a highly scalable open-source full-text search and analytics engine based on Lucene. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is a RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data, so you can discover the expected and uncover the unexpected.

Some use cases:

1. A movie recommendation system.
2. A loan prediction system.
3. An online web store for various products, with a recommendation system based on purchase history.
4. Price alerting for various products, e.g. "I am interested in buying some mobile phone and I want to be notified if the price of the gadget falls below $X from any provider within the next month."

Installation of Elastic Search

1. Install Java (Elasticsearch requires a Java runtime).
2. Download the latest Elasticsearch 6.3.0 tar.
3. Then extract it as follows:

```shell
tar -xvf elasticsearch-6.3.0.tar.gz
```

4. This creates a bunch of files and folders in your current directory. We then go into the bin directory as follows:

```shell
cd elasticsearch-6.3.0/bin
```

And now we are ready to start our node and single cluster using the command:

```shell
./elasticsearch
```

The Elasticsearch instance should now be running; with the default configuration you can reach it at http://localhost:9200 in your browser. Keep the terminal where the elasticsearch command is running open to keep the instance running. You could also use nohup mode to run the instance in the background:

```shell
nohup ./elasticsearch &
```

Installation of Kibana

1. Download the latest Kibana for Windows () or for Linux ().
2. Unzip or untar the file and open config/kibana.yml in an editor. Set elasticsearch.url to point at your Elasticsearch instance; in our case it should be localhost:9200.
3. For Windows, run bin\kibana.bat; for Linux, run bin/kibana.
4. Open http://localhost:5601, which will show you the Kibana UI.

Note: If you are using x-pack in Kibana, the default username is elastic and the password is changeme. If you are not using x-pack, the Kibana URL will redirect you to the main page; if you have installed x-pack, the first page will look like the screenshot below. (In this article, I am using x-pack.)

Creating Dashboards

A Kibana dashboard displays a collection of saved visualizations.

The first page of the Kibana UI — Sample Dashboard:

Indexing Data

Elasticsearch indexes data into its internal data format and stores it in a basic data structure like a JSON object. Below is the Python code to insert data into Elasticsearch. Install the elasticsearch library as shown, for indexing through Python:

```shell
pip install elasticsearch
```

Note: The code assumes that Elasticsearch is running on localhost with the default configuration.

Creating a Simple Index Using Python

1. Create a .py file and copy the following code.

```python
from datetime import datetime
from elasticsearch import Elasticsearch

es = Elasticsearch()  # this line will change for x-pack: add the username and password

doc = {
    'author': 'tushar raut',
    'text': 'Elasticsearch: ELK stack is cool.',
    'timestamp': datetime.now(),
}

# (The indexing call was cut off in the article as archived; the standard
#  elasticsearch-py call looks like this.)
res = es.index(index="test-index", doc_type="tweet", id=1, body=doc)
print(res['result'])
```

2. Execute the above Python script:

```shell
python index_test.py
```

3. Go to Kibana → Management → Index Patterns; there you will see the index which was created in Elasticsearch.
4. Create an index pattern and click on the Discover menu to see the data within that index.

Creating Graphs

There are potential relationships living among the documents in your Elastic Stack: linkages between people, places, preferences, products, you name it.
Graph offers a relationship-oriented approach that lets you explore the connections in your data using the relevant capabilities of Elasticsearch. Graph is an API- and UI-driven tool that helps you surface relevant relationships in your data while leveraging Elasticsearch features like distributed query execution, real-time data availability, and indexing at any scale.

Use cases:

1. Fraud: Discover which vendor is responsible for a group of compromised credit cards by exploring the shops where purchases were made.
2. Recommendations: Suggest the next best song for a listener who digs Mozart, based on their preferences, and keep them engaged and happy.
3. Security: Identify potential bad actors and other unexpected associates by looking at external IPs that machines on your network are talking to.

Example: We have a very good example in a movie recommendation system. The source and data are available here: The simple graph just recommends movies based on parameters like the number of likes for a movie in the respective year.

1. Click on Graph, then select an index and click on the + icon to select fields.
2. Add a movie name in the search bar and click on the search icon.
3. The graph will be shown as follows:

Another example of a graph, based on security analysis:

So, from the above graph, it becomes very easy to understand which movies are highly liked by people – the movie "Rocky" was liked by people who also liked Rocky-II, Jaws, and some others. And this way, the graph makes it easy to understand the insights of the data by plotting and visualizing it using the Elastic Stack's Graph feature.

So, in this article, I wanted to cover the first step of data exploration, i.e. installation and configuration of Elasticsearch and Kibana, and indexing data using the Python module for Elasticsearch.
In the next part, I'll go through how to deal with a large dataset using Python, load that data into Elasticsearch for real-time search and analytics, and explore that data with the Elastic Stack's Machine Learning feature. Watch this space!
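As a small companion to the indexing example above, here is the shape of a Query DSL request body you could use to search the indexed document back out. This is a sketch only: the index name "test-index" is an assumption, and the actual es.search call is shown as a comment since it needs a running cluster.

```python
import json

# The Query DSL body for a full-text match on the "author" field of the
# document indexed by index_test.py above.
query = {
    "query": {
        "match": {
            "author": "tushar raut"
        }
    }
}

# With the client from index_test.py this would be run as:
#   res = es.search(index="test-index", body=query)
#   print(res["hits"]["hits"])
print(json.dumps(query, indent=2))
```

The same JSON body can also be sent directly to Elasticsearch with curl against the _search endpoint, which is a handy way to debug queries before wiring them into code.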
https://www.ashnik.com/how-to-explore-data-using-elastic-search-and-kibana-part-1/
You can dynamically load packages by using either of the following procedures:

- Choosing Link with Runtime packages in the Project Options dialog box (as described in this topic)

To load packages using the Project > Options dialog box:

- Open or create a project in the IDE.
- Choose Project > Options > Packages > Runtime Packages.
- Enable the Link with Runtime Packages check box.
- Review the list of known run-time packages in the Runtime package import libraries field. (Run-time packages associated with installed design-time packages are already listed.) Run-time packages are loaded implicitly only when needed (that is, when you refer to an object defined in one of the units in that package).
- To enter additional library names, click the Runtime package import libraries entry field and then either:
  - Click the ellipsis that appears at the right-hand end of the field. In the Runtime package import libraries dialog box, you can:
    - Enter the name of the new package and click Add.
    - Browse a list of available packages by clicking the Browse for Folder button on the Runtime package import libraries dialog box, then clicking the Browse button next to Package Name in the Add Runtime Package dialog box.
    - Note: If you edit the Search Path edit box in Add Runtime Package, you change the Global Library Path.
  - Enter one or more package names in the entry field.

You do not need to include file extensions with package names (or the version number representing the Delphi release); that is, vcl90.bpl in a VCL application is written as vcl. If you type directly into the Runtime Packages edit box, be sure to separate multiple names with semicolons. For example:

rtl;vcl;vcldb;vclado;vclbde;

Packages listed in the Runtime Packages edit box are automatically linked to your application when you compile. Duplicate package names are ignored, and if the Link with runtime packages check box is unchecked, the application is compiled without packages.
Run-time packages are selected for the current project only. To make the current choices into automatic defaults for new projects, select the Defaults check box at the bottom of the Project Options page.

Note: When you create an application with packages, you must include the names of the original Delphi units in the uses clause of your source files. For example, the source file for your main form might begin like this:

```delphi
unit MainForm;

interface

uses
  Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs;
```

In C++, the corresponding package header is included instead, for example:

```cpp
#include "vcldb.h"
```

The units referenced in this VCL example are contained in the vcl and rtl packages. Nonetheless, you must keep these references in the uses clause, even if you use vcl and rtl in your application, or you will get compiler errors. In generated source files, the Form Designer adds these units to the uses clause automatically.
https://docwiki.embarcadero.com/RADStudio/Alexandria/en/Loading_Packages_in_an_Application
Understanding cv2.imwrite() in OpenCV Python

To solve computer vision problems, we use one of the biggest open-source libraries, known as OpenCV. It contains a collection of computer vision and machine learning software that accelerates the use of machine perception in commercial products.

```python
cv2.imwrite(path, image)
```

cv2.imwrite() is a function of the OpenCV library that is used to save the resulting or transformed image to a specific file or folder. It takes two arguments:

- path: the destination, a specific file or folder where the image is to be saved.
- image: the image that is to be saved.

It returns True if the image is saved successfully.

Example:

```python
import cv2

img = cv2.imread(r'D:\Desktop Projects\hacker.png', cv2.IMREAD_GRAYSCALE)
status = cv2.imwrite(r'D:\Desktop Projects\grey_hacker.png', img)
print("Image status : ", status)
```

Output:

```
Image status :  True
```

Explanation

In the above example, we first read the image that is to be saved using the imread() function in grayscale. After that we used the imwrite() function to save the transformed image under a different name to a specified location. Finally we printed the status of the save.

The above code, when run, returns True, which means that the file was successfully saved in the format and at the path we want. Now check the status manually by navigating to that directory, or use the command prompt to find the new image.

(Note: While running the above code on your system, kindly use your own file path and not the one specified here.)
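One practical consequence of the boolean status: cv2.imwrite() does not raise on failure; it simply returns False, and a missing destination folder is the most common cause. The sketch below shows a defensive wrapper pattern. save_with_check is a hypothetical helper of mine, not an OpenCV API, and it defaults to plain file I/O so the sketch runs anywhere; with OpenCV you would pass writer=cv2.imwrite.

```python
import os

def save_with_check(path, image, writer=None):
    """Save `image` to `path`, creating the destination folder first and
    raising if the writer reports failure instead of returning False."""
    folder = os.path.dirname(path)
    if folder:
        os.makedirs(folder, exist_ok=True)
    if writer is None:
        # Fallback writer: plain byte output, mimicking imwrite's
        # True-on-success convention so the pattern is testable anywhere.
        def writer(p, data):
            with open(p, "wb") as f:
                f.write(data)
            return True
    status = writer(path, image)
    if not status:
        raise IOError("could not write image to %r" % path)
    return status
```

With OpenCV this becomes, for example, `save_with_check(r'D:\out\grey.png', img, writer=cv2.imwrite)`, which turns a silent False into an immediate, debuggable exception.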
https://www.codespeedy.com/understanding-cv2-imwrite-in-opencv-python/
- Minimize side effects -- Try writing Tcl apps without global or namespace variables. Pass state as proc parameters (localize your errors). E.g. Rewrite fileevent/after handlers to accumulate state... - Concurrency interaction through message passing -- The Tcl thread package does this beautifully. - Controllers/Monitors -- Layer your app with processes/threads that do logic, monitor the threads that do the logic, manage the threads that monitor the threads that do the logic ... etc - Handle errors in a higher layer -- Interesting... Don't waste valuable processing time wrapping catch { } around everything. Handle it in a separate (controlling) thread that is signaled with the error. - Threads/processes should do only one thing, etc. - Message Receivers in Erlang: Queue messages received and use pattern matching to select the ones you are interested in. - Guarded Function Clauses factorial(0) -> 1; factorial(N) when N > 0 -> N * factorial(N - 1).the expression N > 0 guards the function clause factorial(N) -> N * factorial(N-1) (the reserved word when introduces the guard). It essentially says "Don't evaluate this function clause if the guard expression fails". It basically translates to an if test, so I assume Todd's commenting about it being difficult to similarly express such things in Tcl. - Tail Recursion instead of iteration (KBK 2003-05-23 Tail call optimization has come up as a topic before.) - lexfiend Massive concurrency: Erlang spawns VM threads rather than OS threads, so applications using literally thousands of threads (actually processes, since they don't share data) are both amazingly performant and not unheard of. Erlang makes extensive use of dictionaries (like Tcl arrays). It has a fast and scalable implementation but Scott Lystig Fritchie discovered a way to incorporate something called Judy Arrays into Erlang. His paper NEM Erlang is a nice language. 
Guarded functions aren't too difficult to do (I'm sure this is probably on the wiki somewhere already):

    proc func {name args} {
        set params [lrange $args 0 end-1]
        set body "when [list [lindex $args end]]"
        proc $name $params $body
    }
    proc when cases {
        foreach {cond -> action} $cases {
            if {$cond eq "otherwise"} {
                return [uplevel 1 [list expr $action]]
            } elseif {[uplevel 1 [list expr $cond]]} {
                return [uplevel 1 [list expr $action]]
            }
        }
        error "undefined case"
    }
    func factorial n {
        {$n == 0} -> { 1 }
        {$n > 0}  -> { $n * [factorial [expr {$n - 1}]] }
    }
    puts [factorial 0]
    puts [factorial 5]
    puts [factorial -1] ;# Error case

It's not exactly like the Erlang (no pattern matching, etc) but you could make it nearer if you really want. You can also do more to "compile" the proc down to something sensible at definition time -- the version above delays processing to runtime, for neatness mainly.

Emulating Erlang's processes would be a neat trick; perhaps something useable could be built over the event loop (i.e., a lightweight process==proc type solution, rather than something more heavyweight).

lexfiend: I believe Factorial Using Event Loop describes the basic methodology for that.

NEM: Well, not exactly. Are Erlang's processes cooperative or preemptive?

lexfiend: Well, Erlang's scheduler is preemptive from the programmer's perspective (full story here: [1]), but I think there should be little perceptive difference using the Factorial Using Event Loop methodology provided you're careful about not writing huge linear procs.

NEM: I'm not sure what the Factorial Using Event Loop methodology is, in terms of comparison to Erlang's process model. Do you mean manually converting code to continuation-passing style (i.e., passing a result callback to each function) and then using after to schedule calls?

lexfiend: Yes, though I'll confess to not thinking too deeply about this topic.
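(A side note for readers following along in neither Tcl nor Erlang: the guard/action dispatch being emulated above can be sketched in Python as well. Everything below — the `guarded` helper and the clause tuples — is illustrative, not from the wiki page.)

```python
# Minimal guard-clause dispatch, loosely mirroring Erlang's
# "factorial(N) when N > 0 -> ..." style. Illustrative sketch only.

def guarded(*clauses):
    """Build a function from (guard, action) pairs; the first
    clause whose guard returns True is the one evaluated."""
    def dispatch(n):
        for guard, action in clauses:
            if guard(n):
                return action(n)
        # analogous to the Tcl version's  error "undefined case"
        raise ValueError("undefined case")
    return dispatch

factorial = guarded(
    (lambda n: n == 0, lambda n: 1),
    (lambda n: n > 0,  lambda n: n * factorial(n - 1)),
)

print(factorial(0))  # 1
print(factorial(5))  # 120
```

As in the Tcl version, calling it with a negative argument falls through all the guards and raises the "undefined case" error.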
8-)

DKF: Tcl can't really do massively (when I say "massive" I mean hundreds-of-thousands) threaded stuff as it uses OS threads for its threading model. On the other hand, Tcl's threading works with OS calls quite nicely (depending on the OS itself) without great gymnastics, and its event handling model is really great (I'm not aware offhand of any other language that is quite as event-oriented in practice.)

lexfiend: In fact, I'd avoid threads altogether (as far as possible) for the 100,000 "thread/event" scenario. The event queue seems to be limited only by memory, and looks to be very robust. Cooperative multi-tasking depends on, well, on all of the threads being cooperative. This can be a pain to program. Perl tries to do the event-queue (callback) approach for multi-tasking in POE (a nice attempt at doing what Tcl already has built in).

Erlang's (the language, not the math guy) philosophy is to do everything thru light-weight user level threading (processes). Rather than factor your code out into procedures/functions, an Erlanger can factor code out into processes. This is an interesting idea, but not one Tcl can easily emulate. Instead, a Tcler (with the thread extension) could factor computationally complex (or state-heavy) things into threads and factor cooperative stuff out into (continuable) events.

For example, Erlang could do a webserver as: 100 processes for 100 concurrent user connections plus 1 process for the database and N processes for backend computationally intensive apps (let's say N is 5). In Tcl, I would do this as: 4 threads (each handling 25 concurrent user connections) plus 1 thread for the database and N processes for backend apps (where N is 5). -- Todd Coram

I keep gushing on Tcl's event loop, but I can't help it.
Erlang does concurrency better than most languages I've seen (Haskell, Mozart and Concurrent ML do concurrency very well too, but they lack the industrial support of Erlang -- Erlang/OTP lives and breathes real-world concurrency while most other languages are more academic about it). Tcl's event loop support is stronger than that of any other mainstream language. And it was brilliant how the Tcl threading library took this into account! -- Todd Coram
http://wiki.tcl.tk/8966
ley riley, 3,159 Points

My query loop is not pulling my custom post types that I created using CPT UI. They don't display on any custom page. I'm not sure what I'm missing. I've looked at outside sources but cannot seem to find an answer. On my custom portfolio page, my custom post type, which is a portfolio piece, does not display. I don't get any errors, the page simply loads blank.

    <?php
    $args = array(
        'post_type' => 'entry',
        'posts_per_page' => 10
    );
    $the_query = new WP_Query( $args );
    ?>
    <section class="row no-max pad">
    <?php if ( $the_query->have_posts() ) : while ( $the_query->have_posts() ) : $the_query->the_post(); ?>
        <div class="small-6 medium-4 large-3 columns grid-item">
            <a href="<?php the_permalink(); ?>"><?php the_post_thumbnail('large'); ?></a>
        </div>
        <?php ?>
    <?php endwhile; endif; wp_reset_postdata(); ?>
    </section>
    <?php get_footer(); ?

Lyle Lewton, Courses Plus Student, 10,585 Points

In the code you pasted, there is a missing ">" at the very end. There is also an unnecessary "<?php ?>" above your second endwhile & endif statements that could be removed. However, if you are seeing a simple blank page these issues are not the reason! Your code worked fine for me. That said I think the issue is coming from either not having the correct slug for the custom post type you want to display, or not having your portfolio page set to the "Portfolio Page" template.

1st possible issue: in this line:

    'post_type' => 'entry',

the string 'entry' has to match the slug you've set in the custom post type you want to display. To make sure they match, go into your WP admin dashboard and click on CPT UI then click on the Edit Post Types tab. Make sure the post type you want to check is selected in the drop down menu immediately below the tabs and look at what is in the "Post Type Slug" field. Whatever is in that field should be put in the above line of code in place of 'entry'. If 'entry' is the correct slug then maybe it is issue number 2.
2nd possible issue: From the WP admin dashboard click on pages, then click on your portfolio page. On the right hand side there will be a dropdown with "Template" above it. Make sure that is set to "Portfolio Page" and the updates are saved. If neither of these are the issue let me know and I'll try to think of what else could be causing problems!
https://teamtreehouse.com/community/my-query-loop-is-not-pulling-my-custom-post-types-that-i-created-using-cpt-ui-they-dont-display-on-any-custom-page
Document Object Model

In Chapter 5, you wrote an XML file that contains slides for a presentation. You then used the SAX API to echo the XML to your display. In this chapter, you'll use the Document Object Model (DOM) to build a small application called SlideShow. You'll start by constructing and inspecting a DOM, then see how to write a DOM as an XML structure, display it in a GUI, and manipulate the tree structure.

A DOM is a garden-variety tree structure, where each node contains one of the components from an XML structure. The two most common types of nodes are element nodes and text nodes. Using DOM functions lets you create nodes, remove nodes, change their contents, and traverse the node hierarchy.

In this chapter, you'll parse an existing XML file to construct a DOM, display and inspect the DOM hierarchy, convert the DOM into a display-friendly JTree, and explore the syntax of namespaces. You'll also create a DOM from scratch, and see how to use some of the implementation-specific features in Sun's JAXP implementation to convert an existing data set to XML. First, though, we'll make sure that DOM is the most appropriate choice for your application.

Note: The examples in this chapter can be found in <INSTALL>/j2eetutorial14/examples/jaxp/dom/samples/.
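Although the SlideShow application is written in Java, the node model itself is language-neutral. As a quick aside (not part of the tutorial code), the same ideas — element nodes, text nodes, building and traversing a tree — can be tried in Python's standard-library xml.dom.minidom:

```python
# Build a tiny DOM from scratch: element nodes and text nodes,
# then traverse and serialize. (Python stdlib, side illustration only.)
from xml.dom.minidom import getDOMImplementation

impl = getDOMImplementation()
doc = impl.createDocument(None, "slideshow", None)
root = doc.documentElement

slide = doc.createElement("slide")                  # an element node
slide.appendChild(doc.createTextNode("Overview"))   # a text node
root.appendChild(slide)

# Traverse the node hierarchy
for node in root.childNodes:
    print(node.tagName, "->", node.firstChild.data)  # slide -> Overview

# Write the DOM back out as an XML structure
print(doc.documentElement.toxml())
# <slideshow><slide>Overview</slide></slideshow>
```

The Java DOM API in this chapter exposes the same operations (createElement, appendChild, childNodes) under nearly identical names.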
http://docs.oracle.com/javaee/1.4/tutorial/doc/JAXPDOM.html
Arne pointed out that all the code samples out there iterate through each feature in a shapefile and add them to the merged file. He says this method is slow. I agree to an extent (no pun intended). However, at some point the underlying shapefile library MUST iterate through each feature in order to generate the summary information, namely the bounding box, required to write a valid shapefile header. But it is theoretically slightly more efficient to wait until the merge is finished so there is only one iteration cycle. At the very least, waiting till the end requires less code. The following example merges all the shapefiles in the current directory into one file and it is quite fast.

To make the code you wrote work I had to modify a few things, because the writer needs something to extend before you can use extend:

    # Merge a bunch of shapefiles with attributes quickly!
    import glob
    import shapefile
    files = glob.glob("*.shp")
    w = shapefile.Writer()
    r = shapefile.Reader()
    w._shapes.append(shapefile.Reader(files[0]))
    for f in files[1:]:
        print f
        r = shapefile.Reader(f)
        w._shapes.extend(r.shapes())
        w.records.extend(r.records())
    w.fields = list(r.fields)
    w.save("merged")

I hope this can help.

gionata, That is strange... the writer initializes shapes as an empty array which is extendable:

    >>> import shapefile
    >>> w = shapefile.Writer()
    >>> w._shapes
    []
    >>> w._shapes.extend([1,2,3])
    >>> w._shapes
    [1, 2, 3]

What version of Python and what platform are you using? Could you post some sample code?

Very nice piece of code and very useful to merge a lot of tiled data. I however had to update a little bit the shapefile.py (date: 20110927, version: 1.1.4) to process my pointZ shapefiles. On line 699 I've changed s.points[0][2] to s.z[0] and on line 705 s.points[0][3] to s.m. Does it seem correct? Thanks a lot, David

hi, I wrote something with the same purpose but a little bit more sophisticated; any effort would be appreciated, thx

Wonderful work Joel. Thank you. Hope this gets to you Joel.
I like your code for merging and it works fine for me on up to 6 files, however if I exceed that I get the error message below. Any ideas? (Note I am only trying to merge 20 shp files of less than 2MB total)

    Traceback (most recent call last):
      File "C:\workspace_scratch\final data\shapemerger_01.py", line 21, in <module>
        w.save("merged")
      File "C:\Python27\ArcGIS10.1\Lib\site-packages\shapefile.py", line 1032, in save
        self.saveDbf(target)
      File "C:\Python27\ArcGIS10.1\Lib\site-packages\shapefile.py", line 1004, in saveDbf
        self.__dbfRecords()
      File "C:\Python27\ArcGIS10.1\Lib\site-packages\shapefile.py", line 891, in __dbfRecords
        assert len(value) == size
    AssertionError

Cheers, Peter

Peter, I haven't been able to recreate the problem yet. Are you using the latest version (1.2.0)? If you can make the shapefiles available for download I can see what's going on. I just merged 298 shapefiles into a single point shapefile. You can email me: jlawhead geospatialpython.com

Hmmm, 1.2.0?? I am using the code from above.

Peter, The error is related to the dbf files. That part of the library isn't as robust as it could be. Try changing the w.save("merged") line to w.saveShp("merged"). That will save just the shp file and hopefully avoid dealing with the dbf to see if we can isolate the problem. If you don't get an error change that line to w.saveShx("merged") and see if that works. Finally try w.saveDbf("merged") to see if something in the component dbf files are causing the issue.

I got the same error as Peter

Meysam - please see my comment above to Peter.

Hi, I have a similar error with that code. I use python 2.7.6 in Win7 64bits.
    Traceback (most recent call last):
      File "........MERGE.py", line 24, in <module>
        w.save("merged")
      File "C:\Python25\lib\shapefile.py", line 1028, in save
        self.saveShp(target)
      File "C:\Python25\lib\shapefile.py", line 985, in saveShp
        self.__shapefileHeader(self.shp, headerType='shp')
      File "C:\Python25\lib\shapefile.py", line 699, in __shapefileHeader
        f.write(pack(">i", self.__shpFileLength()))
      File "C:\Python25\lib\shapefile.py", line 607, in __shpFileLength
        size += nParts * 4
    UnboundLocalError: local variable 'nParts' referenced before assignment

my programming skills are very basic. Someone can help me? thank you very much

solved, do not know how, but it works

Thanks for the great pySHP library! However, I'm currently facing an error, I hope you can help?

    Traceback (most recent call last):
      File "/Users/daniel/Projects/shapefileimport/app.py", line 55, in <module>
        hectopunten_output_writer.saveShp("Wegvakken")
      File "/Users/daniel/.virtualenvs/shapefileimport/lib/python2.7/site-packages/shapefile.py", line 995, in saveShp
        self.__shapefileHeader(self.shp, headerType='shp')
      File "/Users/daniel/.virtualenvs/shapefileimport/lib/python2.7/site-packages/shapefile.py", line 709, in __shapefileHeader
        f.write(pack(">i", self.__shpFileLength()))
      File "/Users/daniel/.virtualenvs/shapefileimport/lib/python2.7/site-packages/shapefile.py", line 617, in __shpFileLength
        size += nParts * 4
    UnboundLocalError: local variable 'nParts' referenced before assignment

I have 200 cities shapefile for three different time period, means 1 city has 3 shapefile. I want to merge city wise (Total 600 shapefiles into 200).
Is it possible in python or within ArcGIS (script or model)?

Hi Joel, I just tried this with V1.2.3 on python 2.7.8 and got the following error:

    Traceback (most recent call last):
      File "mergeShapefiles.py", line 16, in <module>
        w.records.extend(r.records())
      File "/usr/lib/python2.7/site-packages/shapefile.py", line 539, in records
        self.__dbfHeader()
      File "/usr/lib/python2.7/site-packages/shapefile.py", line 457, in __dbfHeader
        fieldDesc = list(unpack("<11sc4xBB14x", dbf.read(32)))
    struct.error: unpack requires a string argument of length 32

Cheers, Peter
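As the top of the post notes, the one pass you cannot avoid is accumulating the merged bounding box for the shapefile header. That bookkeeping is independent of pyshp and can be sketched in a few lines of plain Python (the function name and sample boxes below are made up for illustration; bbox order follows the shapefile spec):

```python
# Combine per-file bounding boxes into the single bbox a merged
# shapefile header needs. Pure-Python sketch; tuple order follows
# the shapefile spec: (xmin, ymin, xmax, ymax).

def merge_bboxes(bboxes):
    xmin = min(b[0] for b in bboxes)
    ymin = min(b[1] for b in bboxes)
    xmax = max(b[2] for b in bboxes)
    ymax = max(b[3] for b in bboxes)
    return (xmin, ymin, xmax, ymax)

# e.g. the bboxes of three tiles being merged (made-up numbers)
tiles = [(0, 0, 10, 10), (10, 0, 20, 10), (0, 10, 20, 30)]
print(merge_bboxes(tiles))  # (0, 0, 20, 30)
```

Doing this once at save time, over the boxes each Reader already carries, is exactly why merging at the end needs only a single iteration cycle.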
http://geospatialpython.com/2011/02/merging-lots-of-shapefiles-quickly.html
Having a hard time trying to wrap my head around why my grid system fails; I'm assuming my train of thought is stuck at the moment (akin to writer's block). My understanding of grids is: while the current cell isn't the last one in the column/row (whichever is horizontal), do stuff then increment to the next; else if it's the last cell, go to the next "line" (row/column, whichever is vertical). I guess what I'm asking is: what is the problem with my logic and/or code? And is there a std:: container better suited for this task? (Also sorry if this is the wrong forum, but the header in C++ said to ask all game development questions here.)

I've tried several methods; if you look through my github history, at one point it was "working", though I've since started a rewrite of everything since my old code base was ugly to look at. Below is the grid header; it's mostly agnostic but does depend on SDL2. The rest of the code is at my github repo; it does include a Makefile (for Linux users). Windows users only have to tell the compiler to search src/ for includes and link against SDL2 and SDL2_image (drop the physics header if you don't want to link against Box2D, same with the filesystem one, tinyxml in this case).

    #ifndef __GRID_HPP__
    #define __GRID_HPP__

    #include <common.hpp>
    #include <gfx/gfx_common.hpp>

    // the *getSize()'s you see return a pointer to an SDL_Rect, which is a
    // struct containing 4 members (h, w, x, y).
    // *getImage() just returns an SDL_Texture *.
    // This is also templated so that it will be class agnostic.
    template <typename Element>
    class Grid
    {
    private:
    protected:
        std::vector<Element> grid;
        int padding, gx, gy;
        SDL_Rect position;
    public:
        Grid(int x = 0, int y = 0, int p = 50, int gw = 3, int gh = 3)
        {
            padding = p;
            position.x = x; // this is the position of the first element
            position.y = y;
            gx = gw;
            gy = gh;
        }

        void gridify()
        {
            typename std::vector<Element>::iterator i;
            SDL_Rect * current = grid.begin()->getSize();
            SDL_Rect * last = current;
            int count = 0, row = 0;
            for(i = grid.begin(); i != grid.end(); i++)
            {
                if(i == grid.begin())
                {
    #ifdef DEBUG
                    printf("First Element\n");
                    printf("X: %d Y: %d\n", i->getSize()->x, i->getSize()->y);
    #endif
                    i->getSize()->x = position.x;
                    i->getSize()->y = position.y;
    #ifdef DEBUG
                    printf("After Assignment(First Element)\n");
                    printf("X: %d Y: %d\n", i->getSize()->x, i->getSize()->y);
    #endif
                    last = i->getSize();
                }
                else
                {
                    if(count != gx)
                    {
    #ifdef DEBUG
                        printf("Before assign X: %d\n", i->getSize()->x);
    #endif
                        i->getSize()->x = last->x + padding;
    #ifdef DEBUG
                        printf("After assign X: %d\n", i->getSize()->x);
    #endif
                        count++;
                    }
                    else
                    {
                        count = 0;
                        row++;
                        i->getSize()->x = position.x;
                        i->getSize()->y = last->x + padding;
                    }
                }
            }
        }

        void update(SDL_Renderer * rnd)
        {
            typename std::vector<Element>::iterator i;
            for(i = grid.begin(); i != grid.end(); i++)
            {
                SDL_RenderCopy(rnd, i->getImage(), NULL, i->getSize());
            }
        }

        void append(Element& element)
        {
            grid.push_back(element);
        }
    };

    #endif // __GRID_HPP__

Edited by Xecantur: guess the code box for scrolling is gone :(
https://www.daniweb.com/programming/game-development/threads/482427/implementing-a-grid-system-for-sprites-c
Dear all, could you please advise me on the way I could use JMapViewer with stored map data (osm file or tiles extracted by Maperitive) for real-time GPS tracking? I have used the following code to set the tiles or osm source but this does not work:

    final JMapViewer map = new JMapViewer();
    map.setTileSource(new OfflineOsmTileSource("", 1, 12));

where the OfflineOsmTileSource class is:

    public class OfflineOsmTileSource extends AbstractOsmTileSource {

        private final int minZoom;
        private final int maxZoom;

        public OfflineOsmTileSource(String path, int minZoom, int maxZoom) {
            super("Offline from " + path, path);
            this.minZoom = minZoom;
            this.maxZoom = maxZoom;
        }

        @Override
        public int getMaxZoom() {
            return maxZoom;
        }

        @Override
        public int getMinZoom() {
            return minZoom;
        }

        @Override
        public TileUpdate getTileUpdate() {
            return TileUpdate.None;
        }
    }

Thanks in advance. Best wishes, George

asked 27 Aug '14, 13:31 by grccr; edited 27 Aug '14, 14:22 by scai

Can you please explain what exactly "does not work"? That URL definition looks wrong to me. Does this help?

Thank you for your reply. The problem is that it fails to load the osm file or the tiles.

osm file:

    failed loading 3/3/1 c
    failed loading 3/5/2 c
    .......

tiles:

    failed loading 3/5/1 c
    failed loading 3/5/2 c
    ......

I changed the argument to

    map.setTileSource(new OfflineOsmTileSource((new File("C:\\OpenStreetMap").toURI().toURL()).toString(), 1, 12));

but the problem remains. Output:

    failed loading 3/4/2 C:\OpenStreetMap\3\4\2.png (The system cannot find the path specified)
    .......
    failed loading 3/4/3 C:\OpenStreetMap\3\4\3.png (The system cannot find the path specified)

What is the correct path for the tiles on your system?

The path is C:\OpenStreetMap

That's not the correct path according to your error message or it doesn't contain any tiles. Please search for files named "2.png" in C:\OpenStreetMap and compare the path with the one passed to OfflineOsmTileSource.
Maybe there is another subdirectory you forgot to add?

Thank you for your time to help me. Maybe I am using the wrong data to load. I have made two attempts:

1) In the directory c:\openstreetmap I saved a map.osm file exported from openstreetmap.org.

2) In the directory c:\openstreetmap I saved tiles generated from the above mentioned map.osm by Maperitive. In this case I have three main folders 17, 18, 19; each of these folders contains other folders, and each of these contains a .png file. I have noticed that next to the three folders there is a tiles.json file. Additionally there is no 2.png file as indicated by the error message.

I have no experience with JMapViewer but it seems like you just got the wrong zoom level. You only have subdirectories for zoom levels 17, 18, 19 but you are trying to open tiles for zoom level 3 (the first number in 3/4/2). Try to generate tiles for all zoom levels (probably expensive) or set minZoom to 17 and maxZoom to 19.

Thanks again for your help. I just changed the zoom levels to 17-19 but the error remains. Output:

    failed loading 17/68800/44416 C:\OpenStreetMap\17\68800\44416.png (The system cannot find the path specified)
    failed loading 17/68799/44416 C:\OpenStreetMap\17\68799\44416.png (The system cannot find the path specified)

There is no subfolder 68800 or 68799. The available subfolders are: 74685, 74686, 74687.

    17/74685/51764.png
    17/74685/51765.png
    17/74686/61764.png
    . . .

Then I assume you are looking at the wrong part of the map. For the current location there are no tiles present. Go to a location for which you have tiles generated by Maperitive.

You are correct! At the command map.setDisplayPositionByLatLon(lat, lon, zoom); I have to put a location inside the tiles I am loading, and the correct zoom level (in my case 17/18/19) too. Thank you so much for your help!
Best wishes, George

For those that are interested in JMapViewer, the correct command for the tile source is:

    map.setTileSource(new OfflineOsmTileSource((new File("C:\\OpenStreetMap").toURI().toURL()).toString(), 17, 19));

Thank you all for your contribution! Best wishes, George
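Background for other readers: the jump from subfolder 68800 to 74685 above is explained by the slippy-map tile numbering, where the x/y in each tile path are functions of longitude, latitude, and zoom. A Python sketch of the standard OSM formula, independent of JMapViewer:

```python
import math

def deg2num(lat_deg, lon_deg, zoom):
    """Convert a lat/lon to the OSM slippy-map tile (x, y) at a zoom level."""
    n = 2 ** zoom                      # tiles per axis at this zoom
    lat_rad = math.radians(lat_deg)
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return (xtile, ytile)

# At zoom 1 the world is a 2x2 grid; lat/lon (0, 0) falls in tile (1, 1)
print(deg2num(0.0, 0.0, 1))  # (1, 1)
```

Pointing setDisplayPositionByLatLon at a lat/lon whose deg2num result matches your generated x/y folders is exactly the fix George describes.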
https://help.openstreetmap.org/questions/36271/offline-jmapviewer
    #include <Servo.h>

    Servo servo;

    void setup() {
      pinMode(13, OUTPUT);
      servo.attach(8);
      servo.write(90);
      Serial.begin(9600);
    }

    void loop() {
      servo.write(90);
      Serial.println("90");
      digitalWrite(13, HIGH);   // set the LED on
      delay(1000);              // wait for a second
      digitalWrite(13, LOW);    // set the LED off
      servo.write(45);
      Serial.println("45");
      delay(1000);              // wait for a second
    }

Arduino can't provide that kind of power? I won't check that until tomorrow but it's hard to believe for me. Especially since the specification of the servo (HS-645MG) says that its operational voltage is between 4.8V and 6V (changing the torque respectively) and the servo moves as it's supposed to; the problem is that it blocks the serial communication. Any other suggestions?

Now I'm just a simple man, but in MY WORLD a device (the Arduino) that can provide 40mA is NEVER GOING TO FUNCTION PROPERLY when trying to power a device (the HS-645MG servo) that requires AT LEAST 350mA when operating. That means that the servo requires 8.75 times MORE current than the Arduino can provide.
http://forum.arduino.cc/index.php?topic=126023.msg947566
Adding mathematical notations to R plots

[This article was first published on mages' blog, and kindly contributed to R-bloggers]. (You can report issue about the content on this page here) Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.

I have to admit that I find the plotmath expressions in R a little fiddly to annotate plots with mathematical notation. Apparently I am not the only one, but Stefano Meschiari did actually something about it. A few days ago his package latex2exp appeared on CRAN. The package provides the wonderful function latex2exp that translates LaTeX code into plotmath expressions. Brilliant! All I have to remember is to escape the "\" character, that is write "\\" instead of "\". Below is the first example from the plotmath help file and again using latex2exp. I think this is much easier to read and write. You find more information about latex2exp on Stefano's web site and his GitHub repository.

    latex2exp_0.3.1

    loaded via a namespace (and not attached):
    [1] magrittr_1.5 tools_3.2.1 Rcpp_0.11.6 stringi_0.5-5 stringr_1.0.
https://www.r-bloggers.com/2015/07/adding-mathematical-notations-to-r-plots/
Random assortment of Flex tips

The challenge in learning a new language is that you don't know what you don't know. Very often when I look at older code I've written I realize that there's a far better way of solving the problem, I just wasn't aware of it at the time. The purpose of this post is to cover a bunch of random things about Flex that I've learned along the way. For many of you I'm sure this list will be more of a review, but if you're new to Flex hopefully you'll find something which you didn't know you were looking for.

mx_internal

Essentially, it's a namespace (similar to public, private, protected and internal) used by the framework to protect code. Some properties and methods in the framework need to be accessible from other places while still being hidden from the casual user. Using 'protected' would require that the class is a subclass, and 'internal' requires that it's in the same package. This way any class can access a 'hidden' property or method of another class. While this can be used to hack the framework, in general using the mx_internal namespace to access hidden properties should be avoided as it's very easy to cause unexpected side effects (the property/method is probably hidden for a reason).

callLater()

Let me start by saying if using this function is fixing your problem but you're not sure why, you're probably doing something else wrong. The callLater function is used to tell the Flash Player to call the function on the next frame. Here's a valid use of this function: if you set the dataProvider property of a list and then want to set an item as being selected, you'd want to use callLater. This gives the Flash Player a chance to set up the new dataProvider before you select your item. So… why is this? When you set most properties the values aren't actually set right away. The component will generally store the new value and mark that it's been changed. It will then wait until the next validation to actually set the new value.
Using callLater gives the component a chance to validate itself first. Because of this, you can often call validateNow instead of using a callLater, but by doing so you're forcing the class to validate sooner than it'd like, which could introduce performance issues. This topic touches upon a key Flex idea: work with the framework, not against it. The framework has a specific order in which things happen; the more familiar you are with how the framework operates, the better off you'll be.

initialize vs creationComplete

This topic is a good follow up to the last one. To really understand the difference you need to study up on the component lifecycle. In a nutshell, initialize is called when the component's children exist but aren't yet measured/positioned, while creationComplete is called once the component has finished setting itself up. The take away point here is you want to avoid placing code which will change the size/layout of a component's children in the creationComplete handler. Doing so will force the Flash Player to size/layout the children a second time (which decreases the performance). It's good to be able to spot this issue by looking at how your application renders itself. Ideally, when the window is shown it should already be fully created, but if you use a creationComplete event handler on a slower machine you can visibly see the application resize its children.

masks

When I started to learn Flex I read all of the Flex books I could get my hands on; what I was missing, however, were the ActionScript books. Being an expert in Flex requires two sets of knowledge: the Flex Framework as well as a good understanding of ActionScript. I obviously can't cover all the cool things that you can do with ActionScript in this post, but one item that I find handy is the mask property. This can be used to define a section of a component to be visible. Before I knew about masks I was adding white boxes to my applications to cover up the parts that I wanted to hide.
Masks are a much more elegant solution to this problem.

suspendBackgroundProcessing

One of the greatest aspects of Flex is how easy it is to create really cool effects (as a side point, remember to use them where they make sense… every item on the screen doesn't need to slide into place). The catch is sometimes the effects can get slowed down by other things happening in the background. I find that hurky-jurky effects can be somewhat painful to watch. A possible solution is to set the suspendBackgroundProcessing property to true. This will tell the Flash Player to focus all of its resources on making the effect run smoothly. Another technique which I've found sometimes helps is to delay the effect slightly.

relatedObject

There are a number of different event types in Flex which each have their own special properties. It's a good idea to glance at the full list to see what's out there, but here's one property which I find particularly useful. The FocusEvent class has a property called relatedObject which will tell you where the focus is going to/coming from. This can be really handy when creating focus event handlers.

ObjectUtil.toString()

Next to the trace function, this is the single most important function for debugging. This will dump an object to a string. While debugging I'll very often write something like:

    trace( ObjectUtil.toString( myVariable ) );

DefaultListEffect

This one's just eye candy. Using this effect will cause the list to fade in/out items which are added or removed. When using this effect you need to remember to set the variableRowHeight property on the List component to true.

That about wraps it up. Hopefully you learned something new while reading the post, or at the very least feel good about yourself because you were already familiar with everything mentioned here. Best, Hillel

Cool article, I found initialize vs creationComplete very useful. Thanks for this helpful set of tips.
https://hillelcoren.com/2009/06/07/random-assortment-of-flex-tips/
Last week I showed you how you can capture the remote codes for cheap radio controlled electrical outlets and this week the theme is MOTION DETECTORS. With a properly configured motion detector you can then trigger that outlet. For example……..when you open the pantry door the light comes on………when you walk in the laundry room, the light comes on……..when someone presses the smart doorbell, the lights come on. Pretty handy stuff. Most home automation motion sensors send TWO signals. One when they are tripped and one when they reset. Most of them will stay tripped for a predetermined amount of time. Usually for 2-4 minutes or so. Good idea to know the state of the motion detector BEFORE you buy it. For example I have a motion detector with a 4 minute reset on it in my garage and laundry closet. That means that both of those lights that get triggered are staying on for 4 minutes whether I like it or not (unless I write some crazy code). But some of these cheap sensors send ONE signal. “I’m ON” and that’s it. They don’t reset. That provides a challenge. This is one such sensor I bought (and DEFEATED). It costs about $10 and you can get them even cheaper. I got this for an outside sensor. Anyway it does have some user control as inside there are two DIP switches. One sets the sensor state for 5 seconds or 5 minutes and the other one turns the LED indication it has been tripped on or off. If you want to be stealthy, turn off the LED. I personally like the bad guys to know they got got. Anyway before you can do this you must capture the code with a program called RTL_433. I discussed this in another blog. Once your hardware is set up (an RTL-SDR device) you run this command on that pi to send THIS code below. The last part would only be necessary if your mosquitto MQTT server is on another device. Mine is. 
rtl_433 -F json -M utc | mosquitto_pub -t home/rtl_433 -l -h 192.168.XX.X

Now that will take all the messages it receives from the SDR device (on 433.920 MHz) and publish each one on your MQTT server as a message that looks like this:

{"time" : "2019-01-24 00:26:07", "model" : "Kerui Security", "id" : 924442, "cmd" : 10, "state" : "motion"}

Now we have to extract that data to make sure we home in on this device (because I have multiple devices on this same MQTT topic). In your Home Assistant configuration.yaml file you add the following to create a binary sensor:

binary_sensor:
  - platform: mqtt
    qos: 1
    state_topic: home/rtl_433
    name: Pantry Motion
    value_template: >
      {% if value_json is defined and value_json.id == 924442 %}
        {{ value_json.state }}
      {% endif %}
    payload_on: 'motion'
    off_delay: 10
    optimistic: true
    retain: false

Whew! You can see I have it so it only takes information from sensor id #924442, and when the "state" equals "motion" it triggers the binary sensor. Then the line

off_delay: 10

sets the sensor back to the OFF position after 10 seconds. So with a ONE-signal sensor I can turn the state to OFF after however many seconds I want. That garage light will now go out after 1 minute or 2 minutes or 2 seconds… whatever I want. Well… it works PERFECTLY. Here’s what the tripped state looks like:

Now I can start making some automations to use with my $10 motion sensor. Open the pantry door… the light comes on… hence the name “Pantry Motion”. So what I did here was to make an automation to turn on one of my inexpensive outlets. Eventually this will be tied to a smart light or smart switch; right at this moment I have no smart light, nor have I installed a smart switch in the pantry. Changing the entity_id line in the action part of the code can turn basically any device on or off. Because the code above leaves the sensor state on for 10 seconds, running the automation means the “light” (in this case an outlet) will be on for 1 minute. Pretty cool, huh?
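The value_template above just gates on the device id before handing the state to Home Assistant. The same filtering logic, written in plain Python rather than Jinja (a sketch for illustration only, not Home Assistant code), looks like this:

```python
import json

SENSOR_ID = 924442  # the id rtl_433 reported for this particular sensor

def payload_for_sensor(mqtt_message: str):
    """Return the 'state' field only when the message comes from our sensor id."""
    try:
        data = json.loads(mqtt_message)
    except json.JSONDecodeError:
        return None  # ignore anything that is not valid JSON
    if data.get("id") == SENSOR_ID:
        return data.get("state")  # 'motion' is what flips the binary sensor on
    return None  # some other 433 MHz device sharing the topic

msg = '{"time": "2019-01-24 00:26:07", "model": "Kerui Security", "id": 924442, "cmd": 10, "state": "motion"}'
print(payload_for_sensor(msg))  # motion
```

Any message whose id does not match simply yields nothing, which is exactly why several devices can share the single home/rtl_433 topic.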
automation 21:
  alias: Den Outlet Motion Sensor
  trigger:
    platform: state
    entity_id: binary_sensor.pantry_motion
    to: 'on'
#  condition:
#    condition: state
#    entity_id: sun.sun
#    state: below_horizon
  action:
    service: switch.turn_on
    entity_id: switch.den_outlet

automation 22:
  - alias: Den Outlet Motion Sensor Off
    trigger:
      platform: state
      entity_id: binary_sensor.pantry_motion
      to: 'off'
      for: '00:00:50'
    action:
      service: switch.turn_off
      entity_id: switch.den_outlet

Notice that I have # marks in front of the condition statements. In many cases you wouldn't want a light to come on until after dark. I left that in there so I can easily change it back if I find I don't need the light on in the daytime, but hey, it is essentially a windowless closet.

Here's a video of how it all works. I shortened the sensor time for the purpose of the video. And Bob is your Uncle.

Really useful! Thanks for sharing. Quick question. Is there any way to detect if the device is running out of batteries? Does it emit any kind of heartbeat?
https://www.hagensieker.com/wordpress/2019/01/24/configuring-inexpensive-433-mhz-motion-sensors-for-use-with-homeassistant/
CC-MAIN-2021-21
refinedweb
869
75.81
Ext.DOMQuery and XML namespaces

Hi,

Are there plans to support XML namespaces in the CSS3 selectors as described in? This is relevant for Ajax responses that are formatted in XML, but also for the document tree in the browser, because there can be parts in other namespaces. For this to work the Ext.DOMQuery class would need a way to declare namespace prefixes, analogous to the @namespace rules in CSS3. This way the selectors can use the prefixes defined in the application, regardless of the prefixes used in the document. Currently the only safe way to query a DOM tree with namespaces is to strip all prefixes first. This can of course lead to local name clashes, but this is still more robust than relying on prefixes, because these can be chosen freely by a server.

Best regards,
Werner Donné.

I have seen most of those, but they are about parsing prefixes. People seem to try to use the namespace prefixes that are in the document in their selectors. This is wrong. Namespace URIs should be matched instead, which is why the application should be able to declare its own prefix-to-namespace-URI mapping and use that in its selectors. There also seems to be confusion about the prefix syntax in CSS3, which uses "|" as a separator instead of ":".
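The point being made here, that queries should match namespace URIs under an application-declared prefix mapping, not the document's own prefixes, is easy to demonstrate outside of Ext. Python's standard ElementTree works exactly this way (a sketch of the principle only, not Ext code; the namespace URI is made up):

```python
import xml.etree.ElementTree as ET

# Two documents using the SAME namespace URI under DIFFERENT prefixes,
# as a server is free to choose.
doc_a = '<a:root xmlns:a="http://example.org/ns"><a:item>42</a:item></a:root>'
doc_b = '<z:root xmlns:z="http://example.org/ns"><z:item>42</z:item></z:root>'

# The application declares its OWN prefix -> namespace-URI mapping...
ns = {"app": "http://example.org/ns"}

for doc in (doc_a, doc_b):
    root = ET.fromstring(doc)
    # ...and queries through it, so the document's prefix is irrelevant.
    item = root.find("app:item", ns)
    print(item.text)  # 42 in both cases
```

Because matching is done on the URI, neither prefix stripping nor guessing the server's prefixes is needed, which is precisely the behavior being requested for Ext.DOMQuery.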
http://www.sencha.com/forum/showthread.php?124807-Ext.DOMQuery-and-XML-namespaces&p=573967
CC-MAIN-2015-11
refinedweb
317
73.78
A Python wrapper for tracking delivery!

Project description

Sheepped 🚚

A Python wrapper for tracking delivery (e.g. USPS).

Getting Started

First, install the package from PyPI using pip.

$ pip install sheepped

Now, register at USPS to get your USPS_USER_ID.

Usage

Suppose you have set an environment variable USPS_USER_ID with your USPS ID and your tracking number is 42:

from sheepped import USPS

usps = USPS()
usps.track("42")

If you have a bunch of tracking numbers, you might want to use the async API:

import asyncio

from sheepped import USPS

usps = USPS()

async def main():
    tracking_numbers = ("1", "2", "3", "5", "8", "13", "21")
    tasks = tuple(usps.aiotrack(n) for n in tracking_numbers)
    return await asyncio.gather(*tasks)

asyncio.run(main())

Tests

$ python setup.py test
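When fanning out many lookups like the async example above, one bad tracking number would normally make asyncio.gather raise and discard the rest; its return_exceptions flag avoids that. Here is a sketch of the pattern with a stand-in coroutine (fake_track is invented for illustration, so sheepped itself is not required to run it):

```python
import asyncio

async def fake_track(number: str):
    """Stand-in for a tracker coroutine: fails on an obviously bad number."""
    if not number.isdigit():
        raise ValueError(f"bad tracking number: {number}")
    return {"number": number, "status": "delivered"}

async def main():
    numbers = ("1", "2", "oops", "5")
    # return_exceptions=True collects failures in-place instead of aborting.
    return await asyncio.gather(
        *(fake_track(n) for n in numbers), return_exceptions=True
    )

results = asyncio.run(main())
print([type(r).__name__ for r in results])  # ['dict', 'dict', 'ValueError', 'dict']
```

Each failed lookup shows up as the exception object in its slot, so the successful results survive and the errors can be inspected afterwards.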
https://pypi.org/project/sheepped/
CC-MAIN-2019-47
refinedweb
151
65.83
- Comparison function in qsort() - K&R exercises - TCP/IP UDP Protocol Development Library and Development Resources - Deleting linked list - fgetc() vs. fread() - Regarding STL-Map - Assignment: get integers and display in words - Assignment: Print reverse order - Assignment: Print 2D array - query: Windows exploits (random stack frame pointer) - how to create sub-directory if it doesn't exists? - Com object using c++ related question. - Problem in release mode - atof approximates the String, I need exact value of the string passed - Help: need non-const first_type - novice template query - How can I know if a string is BiDi string? - C++ and db - Number Fomatting - Plz help me with the characters showing. - one thread hanging in RWTPtrSlist<POSIXThread>::append - ..and now it doesnt compile? - Inserting a space in C++ - Connect to a cbase program running on Linux from WinXP - life cycle - Variable and typename naming - Help with non-const first_type in std::map - Help with Parser - operators and namespaces - a[++j]=a[j]+a[j+1]; - Why does this fail? - help on using <list> <vector> in a simple VC program - Input to enum - mutable - Able to call member function with a pointer assigned NULL - system function - smart pointers for c++ - fwrite() fails when called after fread() - Core dump with string functions - Some Questions - question - sine wave generation - Malloc and recursion - What kind of macros are permissable? - malloc - quick sort - String array - How does assert benefit your code really? - problem with dll instances - parsing float number - Code quality and examples from open source C code - How to encrypt a password - Actual C++ Library Source Code - LISP generalized lists in C - Translator to edit string table - stack trace not working independently? - [Compiler Error C2564] - realloc(): invalid next size - [Compiler Error C2564] - empty class - Can I integrate .EXE files into a new C program? 
- parsing XML file with SAX in C++ - Can I integrate .EXE files into a new C program? - parsing XML file with SAX in C++ - parsing XML file with SAX in C++ - Tru64: Using stringstream to convert inbt/long to string causes application to crash - Problem Taking input:( - Tru64: passing integer to a stringstream causes crash - Converting a byte[] to a bitmap - Cannot Open #include<...> files Why?? - how to clear a stringstream object's data, not state. - Can I integrate .EXE files into a new C program? - how can I integrate a .EXE file into a C program? - parsing XML file with SAX in C++ - Accessing members of a struct - Please tell me some useful website about STL - Argv - HP-UX 11.23 Assertion failed: __thread_init == NULL - How to update a file - Design of singleton sub-type? - question about "while" - how to check write failure in ofstream?? - reinterpret_cast ? bad? good? - how to print extended ASCII codes in vc? - Ambig - searching, creating new file from old using c++ - searching, creating new file from old using c++ - virtual binding surviving stream transport of objects - C header files--Urgent - Another way of adding to a string (or output iterator)? - test - Curious probabilities - template specialization - recursive control path issue. - Defining globals inside structs - formatted input/output question - best practices - question - Strip String in 2 blocks - Strip String in 2 blocks - Using this objdct - Question about std:string and std:cout - Audio streaming -? - using return in main to check memory leaks - implementing constructors given class declarations - is assigning a substr to the parent str OK? - Set a pointer to null when deleting? - counting nesting level in template classes - convert an encoded string - Question on static - implementation question - where to find a complete C++ library reference? - Structure type definition in different files? 
- Using sizeof without parentheses - 500 C language Sample Programs - Mutithread monitoring - OPENING OF .chm or .html file in C? - Running a process in Background - Signal handling in VC++ - How to extract data of a Blitz++ Array ? - How Can I link a .cp file in the "main" header? - how to concatenate the string - how to concatenate the string - logic for converting data obtained from input - static member variable in a DLL... - C/C++ Hardware modelling - test - tower of hanoi - tower of hanoi - Convert int to *char. - Bad Access - Memory Problem? - COM in PHP - How to read strings from a file with comments - Name + number - Template files and compiling them - Efficient searching through stl map? - C++ vector of pointers - ICQ 'channel' for C questions ? - classobject->name(); - find_if algorithm on multimaps - the use of the :: operator - Debugging Help - char string - FAQ 1.12--auto keyword - ICQ 'room' for C communication? - Could a struct with size 44 bytes point always points to a char array with size 2024 bytes? - Packed structs vs. unpacked structs: what's the difference? - Could a struct with size 44 bytes point always points to a char array with size 2048 bytes? - Microsoft Visual C++ 2005 Express Edition - matrix 2x2 - VERY URGENT C PROGRAM - 500 C sample Programs - complex <double> as return type or parameter - C compiler for Windows? - C++ compiler "return" behavior (guru question ;) - Help for a simple program! - keywords "export" - Error when overloading << in template - cachegrind output help - Can I pass a type name to a function? - few topics to refer back - undefined refrence to a function - few topics to refer back - few topics to refer back - how to serialise objects in c++ - Need help on template - Program Required - "Timeout while waiting for connection" - C parser yielding syntax tree data structure? - updating a file at the same time - Unnamed namespace predicament... - Can this conversion code be simplified? 
- overload on return type - Which design pattern to choose in this situation? - Standalone Executables - loop question - article - Guidelines for writing efficient C code - how can I send the error message to the user by email? - how to sendmail - global array defined by parameters passed. protoyping in header? - Catching return value in a const reference needed? - How to generate 52 different random number?? - Template class inheritance problem - Best book for C++ progrmming in web! - value or refernce -) - Need to restore your photo??? We have solution for you! - Affordable Graphic Design - Mock objects and testing - Using Inheritance -- clarification needed ? - read registry keys - restrictions with malloc??? - lint and Makefiles - .h? - visual c++ error - I can't regain control of my form - key mapping for command-line tools - linking error - Changing access specifier for virtual function - pgm without std library functions - memory leak using mtrace to find - While Loop Question - C Statistics library? - mov or avi transparency - accessing an array outside its bounds - Accessing private member of a class through type-casting - UCHAR undeclared - output - pgm without std library functions - Trouble resizing a pointer of pointers - How to hide subclasses? - Programmer's Avenue - Variables are accepted as the subscription when declare a array - How to find out the names of uncalled functions from object files that are dynamically linked? - Can't figure this out - under/over flow problem? - remove index - "C" Callbacks: Static method vs. extern "C" - Intel c/c++ on linux system. - Intel c/c++ on linux system. - Reading a matrix file and storing it in array - str().c_str() question - Vector using shared memory - This is My HW,PLS Help me! - is this true? 
sort algo on STL Lists - C++ vectors ...encountered a problem and need to close - parsing c++ declarations - casting a struct to a class - portable sha1 hmac sources - How to generate warnings when How generate a warning when int is converted to bool or vice versa? - biblioteka do gif/png - Detecting typedef at preprocessing time - loop variable - wrong about fstream file(s.c_str(),std::ios_base::in | std::ios_base::app); - Invoking member functions of objects - What is the defined behavior of past bound iterator - unsigned char and char print - int main(void) Why void? - cons int manipulation - static variable:declare and define - open file in c / create if file doesn't exist - Operator override when using STL - Anyone know how to get the files in one directory? - New Joiners : Pls. Read this - filling static vars from different file - Token-pasting trouble - throwing exception from constructor . - overload of operator= - wrong about fstream file(s.c_str(),std::ios_base::in | std::ios_base::app); - GUI Issues - Accessing array by a pointer - C++ Builder 6 - Printing - how to compose the two expression to one? - [lib] pgp/gpg - Can abstract base class have V-table?, Will the pointer to virtual destructor be entered into the virtual table? - Floating Point Accuracy - Comparing the values of two vectors - Inheriting only interface - float f=0.7; f < 0.7 f<0.7f - factory method - What is the use of keyword __cdel in function signature. - Hi, has anyone tried to use C to realize some data structure like cells in Matlab? - C++ Function Signature string Parser - Is it usful to write a class like "Do_When_Return" so I can make it easy to delete some objects when I exit from a code block? - using pointers to map to data - fstream and number conversions - Reading a Text file - Model View Controller? - Destroying STL Strings - template accessors for class? 
- I can't seem to use pow() - C++ Questions - GUI in C++ - Using parenthesis with defined (#if defined(...)) - find length of unsigned char *input? - abstract classes and virtual deconstructors assistance - QuantLib or more general C++ - non-aggregate type error assistance needed - Managing variable argument call - Question on static attributes in inherited classes - Copy Constructor segmentation fault - n x n matrix transposition recursive algorithm. - Access problem - getchar() behaviour on gcc 4.1 - strange struct behaviour - Debugger not reaching the first line of code - problem with mysql c API and c - how to define/initialise static template class data members of classtype - Construtors - Collecting different execution statistics of C++ programs - how to make gcc know #inlcude<framework/filename.h> ? - code dl - Memory-resident message queue - passing class pointer to other objects - hash_set in STL - c++ callback functor question - int representation incorrect - About partially specify template member function - improve c++ by exercises/projects - is anyone here who made text vesion tetris? - I'm pretty sure I have some out of control Undefined Behavior - Linear search - Is this legal (templates) - boost::weak_ptr and shared_ptr pointers from "this" - 2nd Underhanded C Contest Begins - Inputting text from a text file into a 2 -D array - include .h files in .h - Separating scope of try block? - Fast and safe method for XOR folding - Fastest way to read from a file into a vector<unsigned char> - using placement new to re-initialize part of an object - using placement new to re-initialize part of an object - using placement new to re-initialize part of an object - Macro to iteratively generate variable names - a dumb question - signals - Nesting if statements in switch structures - more help please - Can a class have a non-static const array as a data member? 
- source-code viewer - anderson's equivalence method - 3rd party tool - abt void pointer - can the array size be ZERO - __FILE__ and __LINE__ within macros - Non Recursive In-Order Traversal without using stack - What's the tilde in a &= ~b ? - error C2296: '.*' : illegal, left operand has type 'cStepCurveEvaluator *const ' - Why is the behavior different - Address - why is wcschr so slow??? - C++ standards - why is wcschr is so slow??? - SHFileOperation Help - variation with repetitions in C++ - How to implement an async-function? - C linkage problem with ACE on windows - is any body having cp implementaion - is any body having cp implementaion - Regarding McCabe's Cyclomatic Complexity - Is it possible to write a struct to binary file? - popen cannot allocate memory - insert an elem into a link list - Undeclared identifier - ifsteam - Hash Table Implementation in C++ - Double Clock Experiment - memcpy() question in C++ - Trying to understand character arrays. - Design pattern question - Greta and basic_string<char, ignore_case_traits> - Java Interview questions and answers - Refactoring-Feature Article in Better Software Magazine - difference between pass by address and pass by reference!! - Help wiht VC 7 - Error in OnFinish: Failed to return new Code Element. Possibly synta - getch() and getche() - getch() and getche() - Code fails under vc8 - clearing a structure - Has any C library to parse JAVA serialized object string? - how to check the reference of C++on Linux - Today I buy a new disk of computer but it run not fast - templates question - variable declaration in for() - executing code in a String! - plottting graphs in C++ - Ncurses - STL inherit from container<T>::iterator - windows.h - Problem finding key in STL map with gcc 4.1.0 - C++ Builder 6 - Opening a File - system and user time - Multiple dispatch - can operator << accept two parameters? - how to resolve this problem - design pattern .. 
factory i suspect - How to change the value of PS1 from within a c program - In-place function - In-place function - What does 'restrict' mean? - Difference between including a header file in .h and .cpp - factory design question.. - reading a file - help with recursion please - Q: stl, boost smart pointer to prevent memory leaking - Command Line Argument with & and ^ - String Parser using BOOST.Spirit - Constness of the container of shared_ptr and the constness of it elements - using native C code in a C# application - ring iterator adaptor for vector interator - copy elements from a pq to a vector - operator new() and new[] - design pattern .. factory i suspect - CGI C++ problem - tournament tree implementation - Obscure Syntax - Name conflict with windows.h define - size of a function - scanning UTF-8 characters - Life of temporaries - Double Bubble Sort Algorithm Fails to Sort AT ALL - how to make bool class with custom output? - How to amend this code? - Where I can find itoa()? - problem named walk - swap pointers - when to use private inheritance? - convert int to store in binary file in 1byte - convert int to store in binary file in 1byte - convert int to store in binary file in 1byte - GUI compatibility in C++ - struct reference? - Question on class private member - 'New' operator - need to call mfc application from C program - Manipulating Bitmaps - C++ Visual Studios 2003 GUI Help - error C2664: 'TextOutA' - modem code - file problem - C++ FileIO problem. - how to copy a file into an array? - reading file and copying it into array... - Getting C++ data into an excel graph - How to store words? - How to avoid the use of copy constructor when using STL container function (push_back, etc.) - How to write game such like this? - How to write the variables of a structure from an array? - extern needed or not? - private within this context ... more - N -ROOM LIGHTS PROBLEM - Flow chart for C code - *p++ = *q++ undefined? why? 
- When do we want to declare static object - twice(twice(x)) - Can I avoid rewriting this function in a subclass? - data mining - n00b input help - compilation wrning in altix350:(Typecasting from int to void *) - Deque Item Deletion Problem - Behaviour of custom allocator for vector - I got one compliation error. - get one character from input at a time? - Template specialization problems - Dummies-books-related java/programming/SQL - Runtime polymorphism - public inheritance - How to invoke the operator of the private base class? - unions with structures of common initial sequence - Chess - Parsing two formatted text files - try/finally implementation in c++ - reference vs. pointer - When to use null and when to use static_cast<some_pointer_type>(0)? - scope and linkage - A question on variable defination - private inheritance - STL Queue Data Lifetime - Initialise elsewhere -- as simple as possible... - Significance of trigraph in C preprocessor - What will happen if main called in side main function? - RReeaallyy long function -- inlining? - HI, a question on #include - best way to index numerical text data ? - best way to index numerical text data ? - Text mode fseek/ftell - Globally declared arrays - need help with COM in C - I need a container to hold grid positions and the objects on the grid? - Data structures or STL - Help us celebrate the Embedded Tools Web Shop's second birthday - Help us celebrate the Embedded Tools Web Shop's second birthday - Copying fread behaviour on char array - coding standard - STL containers and managing memory allocation in embedded systems - passing a pointer to a struct to a function that uses ncurses - How to Identify functions in a .C File?? - [CFP] WLPE'06 - Workshop on (logic-based) Programming Environments - How to fix this error? - open source xml parser in C++ - Some doubts on variable-length array - How to get over this error? 
- address of return value - new & null - How to Define High-precision Date type - Something About Date Type - value_type of back_inserter? - Illegal Immigration, the Non-Issue of the Week?????????????? - Illegal Immigration, the Non-Issue of the Week?????????????? - iostream and boolean - More on pointers to pointers. - Deleting multiple elements from STL hash_multiset - why cann't I access protected var from a inherited class? - How to use gdb to show the type of a variable? - How to use gdb to show the type of a variable? - Complicating things? - Binary or Ascii Text? - Convert Java <long> to C <?> - gui design - signal trapping in C++ - Any Good Freebie Text Editors With Auto Indent Capability? - How to format the screen output? - Inheritance Puzzler - streambuf filter - fread fwrite struct - Silly STL question... - how to schedule the compiler? - operator overloading - forcing compiler to consider inline function inline. - Create c++ library for use in C - best practice for global variables? - switch case with vector problem - sizeof(non_static_member_variable) in a static function - new & null - Problem: using this pointer - Er Sorry About That - warning: implicit declaration of function 'printf' - Sizewell B++ - Rods - Lowering Of Graphite Rods Into Reactor Core - Performance measurement of a single method - size in kbytes? - What is the replacement in C++ for _aligned_malloc in C? - Template and typename - Loki-like functors with boost::function? - to malloc or not to malloc?? - what all a class has by default - How to convert image formats? - what's wrong with my code of linked list? - 100% CPU - Can 'C' program be run as a background service? - When to use Boehm-Demers-Weiser Garbage Collector? - using exceptions as a "deep" return - how to check the function free() - using std::string; string("hello") vs std::string("hello") in header file. - std::istringstream and ignore .. - Gotta love it - C++ Linux Programming Job At UC Riverside - Telephone bill
https://bytes.com/sitemap/f-308-p-87.html
CC-MAIN-2020-45
refinedweb
2,971
55.44
Assign CPU Resources to Containers and Pods

This page shows how to assign a CPU request and a CPU limit to a container. Containers cannot use more CPU than the configured limit. Provided the system has CPU time free, a container is guaranteed to be allocated as much CPU as it requests.

Your cluster must have at least 1 CPU available for use to run the task examples.

A few of the steps on this page require you to run the metrics-server service in your cluster. If you have the metrics-server running, you can skip those steps.

If you are running Minikube, run the following command to enable metrics-server:

minikube addons enable metrics-server

To see whether metrics-server (or another provider of the resource metrics API, metrics.k8s.io) is running, type the following command:

kubectl get apiservices

If the resource metrics API is available, the output will include a reference to metrics.k8s.io.

NAME
v1beta1.metrics.k8s.io

Create a namespace

Create a Namespace so that the resources you create in this exercise are isolated from the rest of your cluster.

kubectl create namespace cpu-example

Specify a CPU request and a CPU limit

To specify a CPU request for a container, include the resources:requests field in the Container resource manifest. To specify a CPU limit, include resources:limits.

In this exercise, you create a Pod that has one container. The container has a request of 0.5 CPU and a limit of 1 CPU. Here is the configuration file for the Pod:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
    - "2"

The args section of the configuration file provides arguments for the container when it starts. The -cpus "2" argument tells the Container to attempt to use 2 CPUs.
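The manifest above requests half a CPU, which Kubernetes treats as identical to 500m (500 millicores). As a quick illustration of the quantity notation, and not part of any Kubernetes client library, a helper that normalizes both spellings to millicores could look like this:

```python
def cpu_to_millicores(quantity: str) -> int:
    """Convert a Kubernetes CPU quantity such as '1', '0.5' or '500m' to millicores."""
    if quantity.endswith("m"):
        # Already expressed in millicores.
        return int(quantity[:-1])
    # Plain numbers are whole (or fractional) CPUs.
    return round(float(quantity) * 1000)

print(cpu_to_millicores("0.5"))   # 500
print(cpu_to_millicores("500m"))  # 500
print(cpu_to_millicores("1"))     # 1000
```

This mirrors what you will see later in this page: the request written as "0.5" in the manifest is reported back by kubectl as 500m.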
Create the Pod:

kubectl apply -f --namespace=cpu-example

Verify that the Pod is running:

kubectl get pod cpu-demo --namespace=cpu-example

View detailed information about the Pod:

kubectl get pod cpu-demo --output=yaml --namespace=cpu-example

The output shows that the one container in the Pod has a CPU request of 500 milliCPU and a CPU limit of 1 CPU.

resources:
  limits:
    cpu: "1"
  requests:
    cpu: 500m

Use kubectl top to fetch the metrics for the pod:

kubectl top pod cpu-demo --namespace=cpu-example

This example output shows that the Pod is using 974 milliCPU, which is slightly less than the limit of 1 CPU specified in the Pod configuration.

NAME      CPU(cores)  MEMORY(bytes)
cpu-demo  974m        <something>

Recall that by setting -cpus "2", you configured the Container to attempt to use 2 CPUs, but the Container is only being allowed to use about 1 CPU. The container's CPU use is being throttled, because the container is attempting to use more CPU resources than its limit.

CPU units

The CPU resource is measured in CPU units. One CPU, in Kubernetes, is equivalent to:

- 1 AWS vCPU
- 1 GCP Core
- 1 Azure vCore
- 1 Hyperthread on a bare-metal Intel processor with Hyperthreading

Fractional values are allowed. A Container that requests 0.5 CPU is guaranteed half as much CPU as a Container that requests 1 CPU. You can use the suffix m to mean milli. For example 100m CPU, 100 milliCPU, and 0.1 CPU are all the same. Precision finer than 1m is not allowed.

CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.

Delete your Pod:

kubectl delete pod cpu-demo --namespace=cpu-example

Specify a CPU request that is too big for your Nodes

CPU requests and limits are associated with Containers, but it is useful to think of a Pod as having a CPU request and limit. The CPU request for a Pod is the sum of the CPU requests for all the Containers in the Pod.
Likewise, the CPU limit for a Pod is the sum of the CPU limits for all the Containers in the Pod.

Pod scheduling is based on requests. A Pod is scheduled to run on a Node only if the Node has enough CPU resources available to satisfy the Pod CPU request.

In this exercise, you create a Pod that has a CPU request so big that it exceeds the capacity of any Node in your cluster. Here is the configuration file for a Pod that has one Container. The Container requests 100 CPU, which is likely to exceed the capacity of any Node in your cluster.

apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo-2
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr-2
    image: vish/stress
    resources:
      limits:
        cpu: "100"
      requests:
        cpu: "100"
    args:
    - -cpus
    - "2"

Create the Pod:

kubectl apply -f --namespace=cpu-example

View the Pod status:

kubectl get pod cpu-demo-2 --namespace=cpu-example

The output shows that the Pod status is Pending. That is, the Pod has not been scheduled to run on any Node, and it will remain in the Pending state indefinitely:

NAME         READY  STATUS   RESTARTS  AGE
cpu-demo-2   0/1    Pending  0         7m

View detailed information about the Pod, including events:

kubectl describe pod cpu-demo-2 --namespace=cpu-example

The output shows that the Container cannot be scheduled because of insufficient CPU resources on the Nodes:

Events:
  Reason            Message
  ------            -------
  FailedScheduling  No nodes are available that match all of the following predicates:: Insufficient cpu (3).

Delete your Pod:

kubectl delete pod cpu-demo-2 --namespace=cpu-example

If you do not specify a CPU limit

If you do not specify a CPU limit for a Container, then one of these situations applies:

- The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running.
- The Container is running in a namespace that has a default CPU limit, and the Container is automatically assigned the default limit.
Cluster administrators can use a LimitRange to specify a default value for the CPU limit.

If you specify a CPU limit but do not specify a CPU request

If you specify a CPU limit for a Container but do not specify a CPU request, Kubernetes automatically assigns a CPU request that matches the limit. Similarly, if a Container specifies its own memory limit, but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit.

Motivation for CPU requests and limits

By configuring the CPU requests and limits of the Containers that run in your cluster, you can make efficient use of the CPU resources available on your cluster Nodes. By keeping a Pod CPU request low, you give the Pod a good chance of being scheduled. By having a CPU limit that is greater than the CPU request, you accomplish two things:

- The Pod can have bursts of activity where it makes use of CPU resources that happen to be available.
- The amount of CPU resources a Pod can use during a burst is limited to some reasonable amount.

Clean up

Delete your namespace:

kubectl delete namespace cpu-example

What's next

For app developers

For cluster administrators
https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/
So at the end of last week I blogged about my experiments to get a styled text viewer for (Java) source using the new TextFlow support in JavaFX 8. I did a short screencast, and there you noticed that for larger source files the rendering speed was not acceptable anymore. I've worked a bit on it last night -- take a look at this result:

The really cool thing is that the API of my StyledTextArea is fairly the same as the one from SWT StyledText (in fact many of the inner workings are a shameless copy from SWT as well!). This will make it super easy to use and adjust the current text-parsing infrastructure used by the Eclipse IDE.

public class StyledTextArea extends Control {

  public void setContent(StyledTextContent content) { // ...
  }

  public StyledTextContent getContent() { // ...
  }

  public ObjectProperty<StyledTextContent> contentProperty() { // ...
  }

  public void setStyleRange(StyleRange range) { // ...
  }

  public void setStyleRanges(int start, int length, int[] ranges, StyleRange[] styles) { // ...
  }

  public void setStyleRanges(int[] ranges, StyleRange[] styles) { // ...
  }

  public void setStyleRanges(StyleRange[] ranges) { // ...
  }

  public void replaceStyleRanges(int start, int length, StyleRange[] ranges) { // ...
  }

  public StyleRange[] getStyleRanges(int start, int length, boolean includeRanges) { // ...
  }
}

Ok. This was the easy part. Next is to somehow add editing functionality, and I have no freaking idea how to do it. I don't know:

Often, when you say you have no idea how to solve a problem, you miraculously come up with a solution only a short time later! Whatever: this is really nice to see, and my hopes that the Eclipse UI *might* one day be ported to JavaFX are rising by the day… Keep up the great work!

Guess you are right -- today in the evening I had the idea how to make it work! See
http://tomsondev.bestsolution.at/2013/02/19/speeding-up-textflow-for-javafx8-by-a-factor-of-100/
Hey Alex, I'm a big fan of your tutorials. I wanted to ask why you told us to code using 32-bit on Visual Studio. Is there a specific reason for this? Why not the x64 configuration? Thanks in advance for your time and consideration.

32-bit programs tend to use less memory and are smaller, which also makes them more performant by default. 64-bit is useful if you need more than 4 GB of memory or some of the 64-bit instructions, but you probably don't.

#include <iostream>

// coming from python
int askUser()
{
    std::cout << "Please enter an Integer: " << std::endl;
    int a;
    std::cin >> a;
    return a;
}

int doubleNumber(int x)
{
    return x * 2;
}

int main()
{
    std::cout << doubleNumber(askUser()) << std::endl;
}

Seems to be working just fine.

Hi Alex, Thanks for such good material! My doubt is related to quiz number 5. I wrote my code like this:

#include <iostream>
using namespace std;

int doublenumber(int x)
{
    return 2 * x;
}

void main()
{
    cout << "enter the value to double" << endl;
    int a;
    cin >> a;
    cout << "the result is: " << doublenumber(a) << endl;
    return 0;
}

I got an error for the above code saying that main should return int. But my main function is not returning any value, so why do we have to make it int main() instead of void main()?

Function main() always needs to return an int back to the operating system -- and your main() is returning the value 0 (via the "return 0" statement). Changing your void main() to int main() should resolve your issue.

Hi Alex, I am sorry, but I am still not satisfied. Maybe I am not being clear, but I have read previously that when we don't want to return any value we can use void. I have seen many examples previously where we are not returning any value, hence we use void main(). The system will automatically return 0 even if we don't return any value. If you are saying that main always returns an integer, does that mean we can never use void main()?
I already know that changing from void main to int main will resolve the issue; I already tried that the first time. Sorry to argue about this, but I am still confused.

You're conflating a couple of related things.

For non-main functions:
* We can use a void return type if we don't want to return anything.
* Functions with void return types do not return any value to the caller.
* Functions with non-void return types must return a value.

For main:
* We are required to use an int return type (some compilers will let you use a void return type, but this is invalid C++ and so should be avoided).
* If you do not return a value, C++ will return 0 on your behalf.

Ok Alex. Thank you very much for the clarification.

Hello, first of all, thanks for your awesome website. I just wanna know about the code I wrote here (I know this is a dumb question): can I use it in real programs this way, or does it take much more time and resources to do the same process? I mean, isn't less code better?

I'm not sure I understand what you're asking. If you're asking why we break things out into functions like this instead of putting everything into a single function, the main reason here is to start getting you used to the idea of using functions, and how to structure them. For programs any more complicated than this, functions start to become very useful for organizing your code and enabling re-use (which leads to less code).

My dear c++ Teacher, Please let me point out that in the quiz, questions 1) and 2), you speak about program fragments. The American Heritage Dictionary's first meaning for "fragment" is "A small part broken off or detached". My view is that they are not program fragments but complete programs, though very small. Is that correct? By the way, let me state my understanding of "argument" in C++ jargon: it is a value passed from the caller function to the called function's parameter. With regards and friendship.
I use the term "fragment" when I'm presenting a piece of code that can not be compiled by itself (such as a single function that isn't main, or a few statements). In C++, an argument is the value being passed into a function by the caller. It may be passed by value, but there are other ways to pass arguments too (by address and by reference), which I cover in later chapters.

For the 5th question, the input of the variable "x" should come from the user, but you didn't do it.

Yes he has. The "std::cin >> x;" line waits for you to give the input. Copy the code and try to run the program; the console will wait for you to give the input. The program does not proceed until you give the input number.

Good stuff. Quiz questions 1 and 2 should probably include: #include <iostream>

Updated. Thanks for pointing that out.

Alex mate, you are a patient man. Sometimes I get the feeling you are doing people's homework/actual real work for them. I would be inclined to tell folk where to go if they were asking questions that didn't refer to the lesson at hand. What a nice guy you are.

int doubleNumber(int x)
{
    return 2 * x;
}

build fails every time, idk why:

Undefined symbols for architecture x86_64:
  "_main", referenced from: implicit entry/start for main executable
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

keeps saying this

Do you have a main() function in your program?

My dear c++ Teacher, Please let me say I asked question at and the answer was that "All implementations shall allow both of the following definitions of main:

int main() { /* … */ }

and

int main(int argc, char* argv[]) { /* … */ }"

Because the answerer declined to explain main()'s arguments to me, on the grounds that he cannot teach me C++ like that, I ask you to explain it to me "like that". With regards and friendship.

I talk more about this subject in lesson 7.13 (command line arguments). Have a read, and if you still have any questions, ask there. Hi.
For quiz 5, is it preferable to create one function to grab the number and one function to double the number, or could I just leave the number-grabbing in the main function? Or, more generally, is it better to keep the main function as lean as possible, distributing most of the heavy work to extra functions?

Edit: Guess this is answered in the next lesson.

With short programs like these it doesn't matter too much. With longer programs, I find it helpful to keep the main() function lean (just connecting a few functions together) and do all the heavy lifting inside other functions. The reasons for this are simple: other functions are potentially reusable. main() is not.

My dear c++ Teacher, Please let me send you the following program as an example of functions with other functions as parameters. I have put the function call currYear() as the second parameter of the call to age() so that it is executed first, because online compilers execute the second function-call parameter first. With regards and friendship.
The fact that the variable and function have the same name is confusing the compiler. Rename your birthYear and currYear variable to anything else and it should be fine. Sorry about that. My dear c++ Teacher, Please accept my many thanks for you responded and for your helpful answer. I did it and program works fine. However your editor still sometimes does not accept "\". With regards and friendship. Hi Alex, Great tutorial !! I have started learning C++ few days back. Pls. advice about below program… #include "stdafx.h" // Visual Studio users need to uncomment this line #include <iostream> int cyear() { int c; std::cout << "Enter current year: "; std::cin >> c; return c; } int byear() { int b; std::cout << "Enter birth year: "; std:: cin >> b; return b; } int age() { int a; a = cyear() - byear(); return a; } void output() { std::cout << "Your age is : " << age() << std::endl; } int main() { output(); return 0; } A few thoughts: 1) The output() function is needless -- the std::cout line can go straight into main(). 2) age() is a calculation function which calls two input functions. That means when you call age(), you always have to enter input. Tying these two things together isn’t great. It’s better to have the program first as for the inputs, THEN do the calculation with those inputs, not have the calculation function do the asking for inputs itself (indirectly). Consequently, it would be better if main called byear() and cyear() and passed the results of those functions into age() as arguments. Understood 🙂 Thanks for quick reply and nice explanation. Regards… !bad at all 🙂 thx a lot for the tutorials man, really helpful!! My dear c++ Teacher, Please let me ask your help on my problem. Following program returns twice: Enter current year: Enter year of birth: and after I heve entered twice corresponding values outputs result. With regards and friendship. OutputAge() is using age() as a argument, so age() is called first. 
age() calls currYear() and yearOfBirth(), which ask the user for an initial set of inputs. The return value of age() is then passed into OutputAge(). However, because you haven’t given the function parameter a name (only a type), the parameter is ignored. OutputAge() calls age(), which prints the second set of inputs. If you update outputAge() like this, I think it will do what you want: My dear c++ Teacher, Please let me express my sincere gratitude for you answered my question, moreover as you use to do. Although it looks small problem given me headache for 24 hours. I send you my code corrected according to your suggestion and also for new line. Unfortunately your editor does not accept character and also makes more problems. With regards and friendship. I don’t get it… I guess I won’t become a programmer after all… Or you could ask questions about the things you don’t understand. SO I got a little help from my friend and I made the code better. Here’s what I ended up with. Much better! I made a solution to number 5 but Im not sure it counts it does what it needs to do but I couldn’t figure out how to get the get value function to work with the doublenumber function kept getting a function cannot return 0 code so this is what I came up with, passable? Thanks for the tutorial just want to make sure I don't progress till I have these concepts down. Dear Elliot; To get user’value, you need the "cin" not "cout" due "cout" is to print out only, I think your code should be like that: My code works as intended, but I’m wondering if it is wise to call a function within calling a function; Regards, The Q There’s no rule or best practice that advocates avoiding nested function calls. As long as you can do so without making your code hard to understand, then you are free to do so. Personally, I’d probably avoid nesting more than one level deep. 
Hi again, this one threw up something new: x not declared, y not declared, z not…(and then some) plus ‘add’, ‘mulitply’, etc cannot be used as a function. Am I missing a dictionary? Both compilers did the same, so I don’t know, I’m begining to dread the quizzes. #include <iostream> int add(x + y) { return x + y; } int multiply(z * w) { return z * w; } int main() { std::cout << add(4, 5) << std::endl; std::cout << multiply(2, 3) << std::endl; std::cout << add(1 + 2, 3 * 4) << std:: endl; int a = 5; std::cout << add(a, a) <<std::endl; std::cout << add(1, multiply(2, 3)) << std::endl; std::cout << add(1, add(2, 3)) <<std::endl; return 0; } almost forgot, they didn’t like the double brackets ‘(())’ despite all this i love learning code and I really want to keep going. thanks Therese Function parameters need to be defined like normal variables. Instead of: You need to do this: Hi Alex, I’ve tried this on two different compilers, using different versions of C++ and I keep getting the same response "error: expected ‘;’ before ‘void’ ". #include <iostream> using namespace std void doPrint() { std::cout << "In doPrint()" << std::endl; } void printValue(int x) { std::cout << x << std::endl; } int add(int x, int y) { return x + y; } I’ve tried different things, sometimes I get " expected ‘int’ ", and there is no ‘main()’ in this code, where do I put it? Thanks Therese You’re missing the ; at the end of “using namespace std;” My dear c++ Teacher, Please let me send correct code. With regards and friendship. i mean the code structure, but i see where i need to work, i need to work in my function naming. Thank you for the advice Alex! I did it like that the input is taken in the function doubleNumber() program works fine but is that wrong? 
#include "stdafx.h" #include <iostream> int doubleNumber() { int x; //std::cout << "Enter a number that will be multiplied by 2: " << std::endl; std::cin >> x; //std::cout << "Result is: "; return x * 2; } int main() { std::cout << doubleNumber() << std::endl; std::cin.clear(); std::cin.ignore(); std::cin.get(); return 0; } It depends on what you mean by “wrong”. It produces the correct answer, so it’s not “wrong” in that sense. But it is wrong in that you have a function named doubleNumber() that’s not just doubling the number, it’s also asking for input. You should either name your function more accurately, or split doubleNumber() into two functions, one to do input, and one to do the doubling.. Hello, I am wondering if the way I get input is acceptable or commonly used, I write a separate function for it. It seems unnecessary here but in larger programs I could see a benefit….or am i crazy? Totally fine, and in many cases, desirable (particularly for non-trivial programs, or programs where you need to get user input more than once). This also gives you a convenient place to do error handling. For example, what if the user doesn’t enter an integer? You’ll probably want to ask the user to try again. That error handling logic would be perfect to add to that getUserInput() function. can I get a better description of why is cin>> used std::cin is an object representing console input. >> is an operator that (in this context) means get data from the console user and put it in whatever variable is to the right of the operator. So when we say “std::cin >> x”, we’re saying “get input from the console user, and put that input in variable x”. 
LOL @Alex… You forgot to mention that if you use return instead of parameters or vice-versa it will result in an indefinite loop…We can use return and parameter as a pair but not return-return or parameter-parameter…I learnt it the hard way…Please mention it above for gods sake.But i can interchange return and parameter statement between two functions but not with main…Because main always returns 0…Correct me if I am wrong…I am new to functions so the above statement is a little vague Sorry, I’m not clear on what you mean. How would you use return instead of parameters or vice-versa? They serve different functions: parameters allow you to pass one or more values into the functions, return values let you pass a single value back to the caller. [code] #include <iostream> int main(int); using namespace std; int bookmark() { int x; x=2; main(x); } int main(int y=0) { cout<<"aaa"<<y; \\I could have written this line below the next line,Its just to see the working of bookmark(); \\my code return 0; } [\code] You see what I mean…If I forcefully try to use parameters instead of return to send values back to the main function…It becomes a loop…INEVITABLY.Now if you use parameter-parameter pair (lets say for sending and receiving multiple values)explicitly between two functions…If you manage to write a program like that you will see that it turns out to be a loop…So to solve that problem there needs to be one more function to compensate for the loop…Its hard to explain…Suppose there are two functions A,B…I can send multiple values from A to B with the help of parameters…Now I want to send multiple values from B to A…I cannot use return statement as it deals only with one value at a time….So ill make a new function C,Ill use parameters to send multiple values from B to C and then C to A…..I have not written this program till now…Its just an Idea…Better Ideas are appreciated…Thanx Aah, I see what you mean. 
Yes, in the program above, main() calls bookmark(), which calls main(), which calls bookmark(), infinitely. Because none of these functions ever terminate, the program will eventually consume all available stack memory and crash. The thing to note is that when bookmark() calls main(), it’s not sending a value back to main. It’s calling main() again with a new set of parameters. It is possible to have a function modify a value that the caller passed in by using what’s called a reference parameter. In the case above, this could be used to have bookmark() pass a value back to main(). This is covered in chapters 6 and 7. Awesome tutorial heres my answer to question 5 int doubleFx(int x) { return 2*x; } int main() { std::cout << "Enter a Number: "; int x; std::cin >> x; std::cout << "your number doubled is " << doubleFx(x) << std::endl; return 0; } I’m trying to do the 5th quiz but i keep getting errors and I dont know why, I could use the info! 3rd line expected unqualified id before ‘int’ expected ‘)’ before ‘int’ Hello Alex, I am trying to make a simple game where you fight other countries and the battles are generated randomly. I think I have made a good function for the battle, but now I must make it repeat itself until you win. (For example if you win the first time you will not need to fight again, but every time you fail you must repeat the battle.) I tried using for like this : But it just skips the for. I don’t really know how to use for. The thing it should do is to start the battle once. Then every time it fails it should do it again. country_win is declared in the battle() as well. Here is the function: Thank you in advance. A do/while loop seems appropriate for this use case since you want to loop while a condition is true (the user has not won yet). We talk about different kinds of loops in chapter 5. thanks so much for these helpful tutorials they are a great help! You’ve earned my eternal gratitude Sir! 
I might have overlooked this in previous less or it could be in a future lesson, but where would it be appropriate to use "void" rather than "int"? I understand that void does not return a value, and that using it as a main will not work, so is it only useful in functions that will be used in mains? (Sorry if I’m fumbling in my coding knowledge. Never coded before coming across these lessons, and throughout the week I’ve been going through and typing out and playing with the examples.) You should use void as the return type for any function that does not return a value to the caller. For example: This function takes an integer parameter and does not return anything back to the caller. Thus, the return type is void. My question is why is it important to inisialize y as I already have wrote int y? “int y” just tells the compiler that you’re defining an integer variable named “y”. It doesn’t give that variable a value. In this case, since you’re asking the user to input the value of y immediately, you don’t really need to initialize y. But more generally, you’ll typically want to give your variables known values so they have predictable behavior. i did it that way #include "stdafx.h" #include <iostream> using namespace std; int main(int a) { cout << " please enter an integer "; cin >> a; cout << a << " x " << 2 << " = " << (a * 2); cin.clear(); cin.ignore(32757, ‘\n’); cin.get(); return 0; From question 4 and 5 this is my solution, what’s the difference of my solution to your example? any advantage/disadvantages? What you’re doing isn’t wrong in a logical sense, it just doesn’t answer the question the quiz asks. doubleNumber() is supposed to be a function that takes one integer parameter and returns double that value. Your function takes no parameters. From a purist standpoint, I’d say your function violates the “functions should do one thing” rule, as it asks the user for input AND doubles it. It would be better to separate those two things. 
Line 16 on quiz #3 is confusing. Since it’s easy to confuse the add parameters (x,y) with the multiply parameters with the same names. But they are entirely different. I would have written it like this to make it a little clearer. multiply((add(1,2,3)),4) But even this is confusing. Can’t this be done another way? Sure, you could break it into two lines, like this: Although this may be less efficient, in some cases it can be easier to understand. Alex, Great Tutorials. I’ve never understood why they use the word argument in programming languages. Since you are not arguing about anything. They should always be called Parameters in my opinion. Since you are passing parameters to a function and receive those same parameters from the caller. Can you argue about that? (Smile) I’ve never understand why they are called arguments either. Seems more like an agreement to me! Many people informally use the word parameter to refer to both the argument and the parameters in the function declaration. I find it’s useful to distinguish them, at least when trying to teach. Made another one I am so surprised how its possible to make the same thing in different ways amazing language Thanks for the awesome tutorial. How over complicated is my code for question 5 solution (after seeing so many doubleNumber() {return 2*x;} solutions)? Your code isn’t over-complicated, but it is structured a little strangely. If you didn’t know how doubleNumber() was implemented, would you expect it to take input from the user? Probably not. The function is also doing two things: getting input, and doubling it. That violates the functions should do one thing rule. You should probably split that function into two functions -- one to handle the input, and one to do the doubling. I guess I more misunderstood the nature of functions. 
I was thinking of them almost as being small, independent programs that are combined to create more complex programs as opposed to being variables/aliases used to call individual tasks. Since doubleNumber() was supposed to double a value, I had it also ask for the value to double. Thanks for helping clear that up! 🙂 Thinking about functions as small independent programs is actually a good metaphor. Personally, I like to think about functions as “building blocks”, like Legos, that you can combine to do more advanced things. The simpler and more focused on doing one thing well you can keep each individual function, the easier they will be to write, test, reuse, and combine. Consider your original function that asks the user for input AND doubles it. What if we wanted to print the user value doubled, and then doubled again? We couldn’t use your function twice, because we wouldn’t want to ask for input a second time. However, if we separated the input from the doubling, we could call the input function once, and the doubling function twice. Parameters exist to provide a way for the caller to provide inputs for a function to work with, so each function that needs to work with data doesn’t have to ask the user for it. This makes most of our functions _more_ independent, because now only the input functions need to have a dependence on user-entered data. The rest can just get input from the caller and calculate a value or print output without any dependence on the user. So I THINK I finally get it. I used your example (double doubling) as practice, and came up with this: Another interpretation of functions (especially those you have not written yourself) is as a number crunching machine that takes inputs and gives an output (ignoring void functions and any side-effects). In well written code the name of the function tells the user what it does. 
For example, will initialise the floating point variable y with the sine of the input variable x (with x being previously initialised and representing an angle). Input is x, it is processed according to the function definition, and the output or result is returned and given to y. 1More way of doing it, my question is: is there any difference? Here is the code I did for exercise #5 Hey dude, Just working through your website, loving the tutorials so far. Really easy to follow and the questions so far have been about right for difficulty. Really looking forward to getting towards the end. Wanted to post my question 5 answer. I actually did question 4, then wrote the program to just test out the function XD. Then read question 5 like… oh that’s cool already done haha. Ehm… You always say that if you’re using visual studio you should use #include "stdafx.h". But I’m using visual studio and #include "stdafx.h" didn’t work for me- Instead I have to use #include "stdfix.h". Why is that? No idea. I’ve never heard of such a thing and Google didn’t turn up anything related… Another question about number 5. I wrote a program that completed the parameters, but is it too complicated for the task? mine seems much longer than yours. Thanks. No, this is great. 🙂 You’ve done a good job having each of your functions have one job, and using parameters and return values to move the data around. hey there i made the 5th question but different and gave me a doubled result is it better?: #include<iostream.h> int doubleNumber(int a) { cin>> a; return a * 2; } int main() { cout<< doubleNumber(3); return 0; } Thanks for the help Alex and Avneet. Think I’m getting my head around it now. I apologize if the question was a bit obvious. Alex I have to say that as an absolute beginner with both C++ and programming in general, the tutorials are much better than anything else I’ve found so far. Thanks =) Thank you so much for this guide, really easy to understand. 
I was so suprised i got all the questions correct especially question 5 (slightly the long route ha). int multiply(int x, int y) { return x * y; } int main() { std::cout << "enter a number to multiply by 2"; int x = 0; std::cin >> x; std::cout << "your value is" ; std::cout << multiply(x, 2) << std::endl; return 0; } @Tudor What I understand is that if you want to do the same calculation with different data, you can define a function with parameters(to catch the values passed by the caller) for that calculation and if the calculation is needed in multiple functions, all the different functions can pass value(arguments) to the calculator function to do the calculation with passed argument(s) and return the calculated value back to them. main() is not the only function that can call other functions. Any function can call any other function. I am also a newbie and not sure if am everywhere right. @Alex "Problem 2: multiply() calculates a value and puts the result in a variable, but never returns the value to the caller. Because there is no return statement, and the function is supposed to return an int, this will produce a compiler error." You wrote this in solution 2. I had a doubt on your statement("this will produce a compiler error"). To clear this, I fixed one problem in that code by giving one more value(5) as argument when multiply() is called in main(e.g. multiply(4,5)). When I pasted the fixed code in Code::Blocks, the output it gave was shocking. It printed 20 on the console (that was not expected because there is no return expression in function multiply). I took that code and pasted in ideone.com (an online compiler). The code was compiled fine there and printed 0(confusion, why 0?). I don’t know what compiler they use. 
May be I am wrong but I think that code(when given one more value to match the parameter list) won’t give any compiler error in any case because when we do not write any return statement for a function and call it in other function, compiler doesn’t throw any error. I think you should recompile that code in your compiler/IDE. Whatever, The result that Code::Blocks gave is still confusing. It looks like some compilers don’t treat missing return statements for non-void functions as an error. The C++ spec doesn’t define what the behavior should be in this case, so the result could be anything. And as you see, the result was different in different compilers. Hi all, I have to be honest and say that I’m really struggling with this section. I can’t understand why function parameters and arguments are necessary. And, since I don’t understand the purpose, I’m finding it difficult to understand the whole section. I’ve read the intro several times as well as reviewed the previous sections looking for a clue but I can’t see why you’d need to pass a value from main to a function. Can’t you just perform whatever calculation you need within the function body? > Can’t you just perform whatever calculation you need within the function body? For many things, yes. However, for non-trivial programs, having all your code in main() will get unmanageable. Functions have quite a few benefits: 1) They break up your code into smaller pieces that can be managed. 2) They allow you write reusable pieces of code that can be used multiple times (e.g. the getValueFromUser() example from the previous lesson -- we only had to write the code for getValueFromUser() once, and we used it twice). 3) They abstract away the details of how something works. To use a function, you only need to understand its name, inputs, and outputs. You don’t need to know how it works internally. This is less important when you’re writing your own functions, but super nice when you’re using someone else’s functions (e.g. 
something from the standard library). Edit: I added a lesson 1.4b -- Why functions are useful, and how to use them effectively to address this point further. This might have been said before, but for questions 1 and 2 you have used cout without the namespace (the std::). Not a big deal I know, but it could be considered as one of the mistakes you're asking people to look for. Unless I'm missing something obvious? I am enjoying reading through this tutorial thus far. Thank you very much for it. I'm using Visual Studio, and whenever I try to run the exact same code you used for question 5, it comes up with 1>MSVCRTD.lib(exe_winmain.obj) : error LNK2019: unresolved external symbol _WinMain@16 referenced in function "int __cdecl invoke_main(void)" (?invoke_main@@YAHXZ) 1>c:\users\nick\documents\visual studio 2015\Projects\hello world\Debug\hello world.exe : fatal error LNK1120: 1 unresolved externals This should compile. It's not clear to me what the problem might be. I'd try creating a new project, making sure you're creating a Win32 console application, and see if it works. Hello Alex, about question #5: it's the first time I've seen you using "main()" two times in one program! Didn't you say that only one main can be used in a program? Yes, a program can only have one main() function. The second main was meant to be an optional way of solving the problem. I've commented it out to make it more clear that it's an optional solution and not part of the main solution. Hi Alex! I would love it if you could answer my question. I noticed a very odd thing… Using what you taught me I wrote this piece of code that adds two numbers without defining a value for the 2 variables, and what I find amazing is that every time I run the code the first variable (X in my case) is automatically assigned the value of 1 and the second (Y) is random like it should be. Why is the computer always giving the value of 1 to X??
I know why the Y value is random, but why is X always 1 even though I never defined X as being 1? Please let me know what you think. I am running it via Visual Studio 2010, and every time the result is 1 + a random Y. The same thing happens if I multiply or divide: the computer always makes the same choice, making x = 1, and gives y a random value like it's supposed to. Why isn't x random as well? Also, when dividing x/y, the result is 0, which is not accurate to say the least.. int add(int x, int y) { return x + y; } int main(int x, int y) { std::cout << x << "+" << y << "=" << add(x, y); return 0; } The answer is due to the fact that you have parameters on main(). So far, I've only talked about main() functions with no parameters. However, main() can have parameters, which are used for processing command line arguments. When main has parameters, the first parameter represents the number of command line parameters. That's why your x is set to 1 -- your program always has 1 parameter (the executable name). If you were to run your program with additional parameters, x would be some other number. For now, you shouldn't have any parameters on function main(). Personally, I'm surprised it even compiles, since your second parameter has the wrong type. My solution to question 5 was … Thanks for the Tutorial, love it so far; this has been the best website I have ever found. Thank you for helping us noobs with this!… I did it the way you have done it in Code::Blocks and in another compiler, and I don't get any output. Then I erased the int x and cin >> x and just wrote cout << doubleNumber(1) << endl and I get the right output. How did this happen? cin >> x is waiting for you to enter something from the keyboard, so it can put what you enter in variable x. If you type in a number and hit enter, your program will continue. Nitpick regarding Quiz question 5: I was searching section 1.3 for cin. (more difficult because "search" or "find" was not working for some reason).
Could not find it there. I think you mean to say that it is discussed in 1.3a? Thanks, BobZ Updated, thanks for the catch. Thanks, it works. Hi, when I compile I get the following error, which seems to refer to the main function that was in the code when the project was opened. I replaced this with my own code. Error 1 error LNK2019: unresolved external symbol _main referenced in function ___tmainCRTStartup c:\CppProjects\test14a\test14a\MSVCRTD.lib(crtexe.obj) test14a Error 2 error LNK1120: 1 unresolved externals c:\CppProjects\test14a\Debug\test14a.exe 1 1 test14a The initial code when opening the project is: // 14atest.cpp : Defines the entry point for the console application. // #include "stdafx.h" int _tmain(int argc, _TCHAR* argv[]) { return 0; } The code I replaced it with was: // 14bDouble.cpp : Defines the entry point for the console application. // int DoubleNumber(int x) { return x * 2; } int Main() { using namespace std; int x = 0; cout << "Please enter a number" << endl; cin >> x; cout << x << " * 2 = " << DoubleNumber(x) << endl; return 0; } How can I solve this? Replace "int Main()" with "int main()". C++ names are case-sensitive. When I compile the code I'm getting a blank black screen in cmd….. #include "stdafx.h" #include <iostream> int main() { using namespace std; int x; cin >> x; cout << doubleNumber(x) << endl; return 0; } You're getting a blank screen because the line cin >> x; is waiting for you to enter a number. If you change your main() function to this: The output will make it clearer why your program is waiting. Please tell me where I have made mistakes? int readanInteger() { std::cout<<"enter an Integer"<<std::endl; int a; std::cin>>a; return a; } int doubleNumber(int x) { x=readanInteger(); return 2*x; } int main() { int x; doubleNumber(x); std::cout<<doubleNumber(x)<<std::endl; return 0; } You're passing an uninitialized variable x from main to doubleNumber(). You're then initializing the variable inside doubleNumber().
Instead, you should initialize x in main(), and then pass that value to doubleNumber. Great. Thanks! Hey! Regarding the second method in quiz 5): I don't understand how this works: x = doubleNumber(x); I have been thinking really hard about the mechanism and I am not able to understand it. It's like writing x = 2x. Please help. Thanks! x = 2x makes sense if you think about it like this: x = (2x). Whatever value is in x, double it, and then assign that value back to x. So if x starts as 5, we double it to 10, and assign it back to x. x now equals 10. The function call is doing exactly that. Thanks! I understood. But I still find it tricky, as we are doing two things at the same time in x = doubleNumber(x); It would work if the program first evaluates doubleNumber() and then assigns the result to x. It won't if it does it the other way round. How does the program know what to do first? Or does it do both at the same time? Am I overthinking it? Thanks! You aren't overthinking it. On the contrary, you're thinking ahead. 🙂 C++ has a set of rules that it uses to evaluate all expressions. How this works is covered in section 3.1 -- precedence and associativity. In this case, the evaluation of doubleNumber(x) always happens before the assignment, because evaluation of functions has higher precedence than assignment. I think that the function doubleNumber cannot be of int type. It has to be able to take *any* integer and return its double value. That includes large (signed) integers like 2147483647 (a 31-bit integer). Hence doubleNumber must be of long (or even long long) type: The implementation of doubleNumber() above is flawed, as you correctly point out. For now, that's okay. We're more focused on function basics (parameters and return values) than on other data types (such as long long), overflow, and conversion between types. We'll get to all of those things later. At this point in the tutorials, simplicity wins the day.
🙂 [code cin >> x; cout << doubleNumber(x) << endl;] OMG, who can tell me how to type code style in the comments? [code] your code here [/code] Hi Alex, loving the tutorials. I've been wanting to learn how to program for many years (didn't know it though), and now I'm finally able to because of this awesome tutorial. Here is the solution I came up with for Question 5. Oh, and I figured out my error with Visual Studio as to why it was shutting down my console immediately: I was starting my programs with debugging instead of building them first. Thanks again Alex! Dear Alex, this is the best place to learn programming from the beginning. I know Java and C, so learning C++ is getting much easier. One thing that I noted is that all languages have most things in common, and if one knows a language, they can learn the others easily. I wish you would put some other programming language tutorials online. It would be very nice if you put up some more complex real-life programs or suggested some good books for C++ and Python. Thank you, Souvik. Well, I'm following the tutorial here to further improve myself in my Computer Science class, and I feel very happy with these tutorials. I learn more here than in my class, to be honest. I feel accomplished with my final solution to #5: #include <stdafx.h> #include <iostream> using namespace std; int doubleNumber(int x) { return x * 2; } int main() { int x; cout << "Enter number you wish to double:" << endl; cout << " x = "; cin >> x; cout << "Your doubled variable is equivalent to " << doubleNumber(x) << "." << endl; return 0; } Hello Alex, thank you very much for these fantastic tutorials. I am slowly working my way through them and have a question regarding the functions for simple things such as addition, division, and multiplication.
At least in the context of these tutorials, why can one not simply put… cout << x * 2; cout << x + y; I have tried the above (after assigning a value to x and y, of course), and I still get the same result as I would have done when using a function. Are there examples where the method above would be favorable over the creation of functions? I can sort of see where a function would be useful in describing/showing why something is being done, but this could easily be done with comments. At first I thought it would be for reusing the function with different variables, for example in a game where you want to double several scores or items, but wouldn't you then have to pass the variables into the function so that it returns what you expect? It just seems like extra work. I hope I haven't shown a huge misunderstanding of programming by asking this question. I just want to be 100% sure in my understanding of why I am doing things a certain way. Kind regards, Daniel Ignore my last comment, I understand now. You discuss the answers I was looking for in one of the last paragraphs (I did this section late last night!). It was going through Catreece's program above that made me appreciate how using functions can clean up code the more complex it gets. Then, after reading the whole page again, it just clicked :). I must say making a program is a completely different animal to simply being able to code; my grey matter is taking a right hammering XD. Whelp, that's clearly it for me for today, my brain has officially started being stupid.
=P int SetIntN() { using std::cin; int n = 0; cin >> n; return n; } int add(int x, int y) { return x + y; } int multiply(int Na, int Nb) { return Na * Nb; } int main() { using std::cout; cout << "choose x: "; int x = SetIntN(); cout << "you chose x: " << x << " " << std::endl; cout << "choose y: "; int y = SetIntN(); cout << "you chose y: " << y << " " << std::endl; cout << "Constant Z multiplies your answer by 3" << std::endl; cout << "your final answer is: " << x << " + " << y << " * " << 3 << std::endl; cout << multiply(add(x, y), 3) << std::endl; return 0; } Looks great! Tested it with ((5 + 6) * 3) and it kept coming out to 33! I was trying to figure out how it messed up 5*6=30 and where it was getting that extra 3 from… for about 10 minutes before I realized, waaaaait a second. My program's feeding me the right answer, I'm just too inattentive to perform basic math in my head. Oops. On the plus side, it worked right. XD It was surprisingly easy to chain together the arithmetic functions like that, however. I expected to see big mess when trying to feed variables into it instead of integers, or when mixing variables and integers together, but no problems at all other than I apparently can't add and multiply. =P Pretty sad when you have an easier time understanding new programming functions over basic math you learned before even entering grade school. Regardless, thanks again for the tutorials; they're really making this pretty easy and they make a lot of sense. Any time I get hung up on something it's invariably because I'm trying to do stuff that hasn't been explained yet, or testing out something I'm not supposed to, rather than due to any issues with the tutorial information itself. As such, I have only myself to blame, and I can live with that. XD Visual studio 2010 won’t let me make a “}” in the code writing sections. Great tutorial, thank you. 🙂 Found typo it seems. 
Search on page for: multiple(2, 3) In that example the function is "multiply", but there appears to be a typo on line 19 calling multiple(2, 3). Fixed. Thanks for noticing. I did the lesson and tried to make my own code, but I can't seem to get the right answer. Please help me out. What I was trying to do was to get a user to input a number (x), which would then get doubled and be added to y = 5. But it's instead giving me a strange number. I ran your code and it's fine. Your problem could be from line 2 not containing anything after that include statement. You need to include the library to access cout, cin and endl, since they reside under the namespace std, which resides in iostream. When you write #include <iostream>, the compiler knows where to look for cout, cin and endl. Not sure what's wrong with my code. I know that it's wrong even though it compiles correctly, as when I input 23 it doubles to 83214 or some other random number; I'm just not sure why. I rewrote the code in several other ways that worked, so I did get the quiz done in a way that is successful. Now I just want to understand why this didn't work. Any input is greatly appreciated! #include <iostream> #include <windows.h> using namespace std; int user(int x) { cout << "Pick a number to double: " << endl; cin >> x; } int doublenumber(int x) { return 2 * x; } int main() { int x; user(x); cout << doublenumber(x) << endl; system("pause"); //can also be replaced with getchar() to make portable return 0; } Figured it out: I needed to add return x to the function user() and then adjust main() to cout << doublenumber(user(x)) << endl; Here is the working version!
#include <iostream> #include <windows.h> using namespace std; int user(int x) { cout << "Pick a number to double: " << endl; cin >> x; return x; } int doublenumber(int x) { return 2 * x; } int main() { int x; cout << doublenumber(user(x)) << endl; system("pause"); //can also be replaced with getchar() to make portable return 0; } cout << add(1, multiply(2, 3)) << endl; // evaluates 1 + (2 * 3) cout << add(1, add(2, 3)) << endl; // evaluates 1 + (2 + 3) Are all integers of "add" functions assigned to x and y, and all integers of "multiply" assigned to z and w? Answered my own question.. yes! As you have already noted, they are! Ehhh, wow, mine was overly complicated. int multiply(int x) { using namespace std; int y; y = x * 2; return y; } int main() { using namespace std; int x; cin >> x; cout << multiply(x) << endl; return 0; } It is because when calling the function you also need to specify its correct arguments. Since the doubleNumber(int x) function is not inside int main(), int main() does not know the parameters, or even whether int x exists. So you need to tell int main() that doubleNumber(int x) is a function with "int x" as its parameter. Superb tutorials! I was getting an error on Quiz question 5 because on line 13 I was calling the function like this: doubleNumber() rather than like this: doubleNumber(x). Sorry for the stupid question, but can someone explain why the x needs to be there? How did you define the function doubleNumber? It was doubleNumber( int x ) {}, right? See, the int x part is the input part of the function - you list all the things that go into the function. You can't actually specify specific variable names though, you can only specify the name that will be used in the function (can be anything you want; that's how you'll refer to the input inside the function), and the type (int, float, string, etc). So the reason it's doubleNumber(x) is because you have to mention which variable to pass to doubleNumber. That way you can also call it like doubleNumber(y) and doubleNumber(z), and it will double the variables y and z.
CAN A USER DEFINED FUNCTION BE INITIALISED INSIDE MAIN()? No. C++ does not support nested functions. #include <iostream> int multiply(int x, int y) { int product = x * y; int another_variable = x * 2; } int main() { using namespace std; cout << multiply(4, 5) << endl; return 0; } This compiles and gives the answer 8. This shouldn't even compile. multiply() has a return type of int, but you're not returning a value. The compiler should complain. Does the doprint function have to be typed before the main function? Also, does doprint have to be capitalized? For now, doPrint() has to be declared before the main function. I'll discuss how to get around this restriction in later sections. doPrint() doesn't have to be capitalized. It could be called doprint() or do_print(). C++ is case-sensitive though, so doprint() and doPrint() would be considered two different functions. Here's my code for quiz 4 and 5. Give me feedback on it, please. I just began C++ today. -------------------------- #include "stdafx.h" #include <iostream> int doubleNumber( int x ) { return x * 2; } int main( ) { int integerToDouble = -1, theResult = -1; using namespace std; cin >> integerToDouble; cout << endl; theResult = doubleNumber( integerToDouble ); cout << "The doubled value of " << integerToDouble << " is " << theResult << "." << endl; system( "PAUSE" ); return 0; } Very much a beginner here, but there's something I need a little clarification with - switching the order of "main()" and "add(x, y)" in the code seems to cause an error (I was trying to input it that way due to my own OCD wanting main to be at the top of the code). The result seemed to be that "main()" had no idea that "add(x, y)" even existed unless "add(x, y)" was assigned/created in code *before* "main()". Is this always the case? That "function a" trying to call "function b" requires function b to be defined (and placed literally above function a's code in the editor) before it's run? Yes, this is always the case.
You can also use function prototypes before "main()" instead of writing the full function definitions, and then write the function definitions after "main()"… //Function prototypes… int add(int, int); // NOTE: the semicolon (;) at the end! int main() { … return 0; } int add(int x, int y) { return x + y; } Hope this helps? Function prototypes help address the order of declaration issue. I discuss this more in section 1.7 -- Forward Declarations. Hi, I decided to post mine since it looks like I chose a different way to make this program, but with the same results. Here is my code: Alex, are parameters only used for mathematics? By the way, I found out a way to calculate the addition of two numbers with only one function =D #include<iostream> #include<conio.h> using namespace std; int main(){//returns 0 double dA,dB,dC; cout<>dA; cout<>dB; dC=dA+dB; cout<<dA<<" + "<<dB<<" = "<<dC<>dA; cin>>dB; dC=dA+dB; cout<<dC; } ?? My code's gone haywire… Let me retype it here: int main() { double dA,dB,dC; cin>>dA; cin>>dB; dC=dA+dB; cout<<dC; system("pAuSe"); } Parameters can be used for all kinds of purposes. Mathematics just makes for easy examples because (almost) everyone intuitively understands how to add and subtract small numbers.
int doubleNumber(int x) // doubles value of x { return x * 2; } int main() { using namespace std; int x; // declares variable x cout << "Input an integer to double: "; cin >> x; // defines x as user's input cout << x << " doubled is " << doubleNumber(x) << endl; return 0; } Here is my solution to #5: #include "stdafx.h" #include <iostream> int main() { using namespace std; int y; // declares variable y int x; // declares variable x cout << "Input an integer to double: "; cin >> x; // defines x as user's input y = x; // defines y as equal to x to display the number that was doubled cout << y << " doubled is " << doubleNumber(x) << endl; return 0; } You don't actually need y, as you can simply do this: cout << x << " doubled is " << doubleNumber(x) << endl; When you take x the first time, it will hold the original number; when you use doubleNumber, it will output the result of x * 2. That way the code is more optimized, as it requires less memory to hold the values, and it will no longer need to assign the value of x to y, which is a waste of CPU cycles (I know that in this case the difference will be a nanosecond at most; however, in bigger programs this difference might be seconds). I see, thanks for helping me to understand this better. Here is my updated code: int main() { using namespace std; int x; // declares variable x cout << "Input an integer to double: "; cin >> x; // defines x as user's input cout << x << " doubled is " << doubleNumber(x) << endl; return 0; } Same functionality as my original program, but I can see how this is a better way of doing it. In the piece of code, for example: int add(int x, int y) { return x + y; } does it matter that when you create a function you have the (int x, int y) in the parentheses? What does it do for the function? Here's another example: int multiply(int z, int w) { return z * w; } (int x, int y) are the function parameters. Without them, the compiler won't know what x and y are inside of the add() function, and you won't be able to pass values into add().
Hey, I'm new to C++, and I answered question 5 as follows: It works, but is not included in the answers… so, is it a correct way? If you define "correct" as producing the right answer, then it looks correct. There are usually many ways to do the same thing in C++. Sometimes one way is better than another, but other times, it's just a matter of preference. You could improve this program by getting rid of line 12 (int y=2) and changing the last line to: Here is what I came up with for question 5 finally! addition of 2 numbers Hey, nice tutorial, I am understanding it 🙂 btw here is my code for solution 5 -.^ I love my solution for question 5: Nice tutorial so far! On solution 5 you have two solutions to do the same thing: This, and this: I understand that this is a small, simple program where performance isn't an issue, but if hypothetically this were a program on a much larger scale, is there a difference in speed/performance between these two techniques? The first one seems shorter, and therefore faster? The program with x = doubleNumber(x) has an extra assignment operation that happens, so it would be slightly slower. The speed difference would likely be unnoticeable unless you were running this thousands of times. These tutorials are incredibly well constructed and very easy to understand. Many thanks to the author! I also love the quizzes! Here's what I came up with for #5: Ah, that code wasn't working well. Mistakes are the best way to learn though 😀 Hey, I have a problem with the following code: it gives me this error: In function "int main()": "multiply" undeclared (first use this function) I don't understand what went wrong. Marco, you misspelled multiply when you wrote the function. But you spelled it correctly when you called it from main. So the compiler was expecting a function called multiply, which doesn't exist in your program, although one called multipliy does.
To fix the error, either change the function name to the correct spelling, or misspell it the same way in the function call. I'd go for my first suggestion, though either will work. Hello, Marco, it's just a typo. You misspelled multiply. When we write a program like this, it will not work: #include "stdafx.h" #include "iostream" int main() { using namespace std; int numb, a; cout << "Enter number" << endl; cin >> numb; a = doubleNumber(numb); cout << a; return 0; } int doubleNumber(int x) { return x*2; } Jil, In the third line, should be . Also, int main() should be at the bottom of the program. (or write under the #include . I hope that helps. First of all, kudos for the great site you have. I had a little problem with exercise 5. My solution was this: The problem is that the value is not doubling. Doing a debug, I saw that the function was called correctly; x was calculated as it should be inside the function, but never returned to main. Doing x = myFunction(x) resolves it, but I don't understand why. If I'm simply using myFunction(x), isn't x supposed to get the new value calculated inside the function? Try using x = doubleNumber(x); instead of doubleNumber(x); or remove doubleNumber(x); and correct the following to: cout << "The result is " << doubleNumber(x); The reason this happens, if I am not mistaken, is that the x variable in the doubleNumber function is active only while inside the function itself, so when control returns to main, the x in main, being a different variable, still has the value you entered with cin. Note that the returned value of your doubleNumber function is not saved anywhere and thus is not accessible… So the solution is either to assign the returned value to x in the main function, or to ask cout to execute the doubleNumber function and print the returned value right away… Hope I helped… 🙂 But the x in doubleNumber() is **returned** to main(), so I don't understand how what you say explains the issue. Why does x = doubleNumber(x) work but doubleNumber(x) doesn't???
Uh, never mind -- Alex explained it below: the x is indeed returned in the second situation, but its value is the same one that was originally passed into the function, because the computer must be told (via a statement like "x = x * 2;") that x will now have a new value! Why doesn't this work? You used two parameters in the doubleNumber function definition, but only one parameter in the function call. The number of parameters must match, and doubleNumber must also return a value. Correct. The last line of main should be: Because you've defined your function to take two parameters, and if you want to double z, you need to pass in 2 as the second parameter. That said, it's weird to call your function doubleNumber when it actually multiplies two numbers (one of which may or may not be 2). There are 2 problems in the code. 1) You did not return anything in the doubleNumber function. 2) In the last line of the main() function, you did not pass the values properly. Here is the correction:
http://www.learncpp.com/cpp-tutorial/1-4a-a-first-look-at-function-parameters/
04 September 2012 10:18 [Source: ICIS news] SINGAPORE (ICIS)--FPC restarted its 1.18m tonne/year EDC and 1.33m dry metric tonne (dmt)/year caustic soda facilities in Mailiao on 3 September after two weeks of maintenance that began on 16 August, the source said. Its 500,000 tonne/year PVC unit in Mailiao resumed operation on 1 September, after a week-long shutdown from 27 August because of a shortage of feedstock vinyl chloride monomer (VCM), the source said. Meanwhile, the company's 800,000 tonne/year VCM plant at the same site, which was shut on 16 August, is still undergoing its turnaround and is scheduled to restart in mid-September.
http://www.icis.com/Articles/2012/09/04/9592393/taiwans-formosa-runs-mailiao-caustic-soda-edc-pvc-units-at.html
Enhanced Monitoring provides OS metrics such as Free Memory, Active Memory, Swap Free, Processes Running, and File System Used. These metrics can be used to understand your environment's performance and are ingested by CloudWatch Logs as log entries. CloudWatch allows you to create alarms based on metrics; these alarms execute actions, such as sending a notification, when a metric crosses a threshold. Create a custom metric using filters on a log group Note: These steps require that Enhanced Monitoring is enabled in your RDS instance; for more information, see Monitoring Amazon RDS. 1. In the CloudWatch console, choose Logs from the left navigation pane, locate RDSOSMetrics in the list of Log Groups, and then choose the Filters link. 2. Choose Add Metric Filter. 3. Choose a Filter Pattern term for your RDS instance, for example: { $.instanceID = "nameOfYourRDSInstance"} 4. Choose the Log Data to test and then choose Assign Metric. 5. Choose a Metric Namespace and Metric Name, and then select the Show advanced metric settings link. 6. Enter a Metric Value—for example, $.cpuUtilization.idle—and then choose Create Filter. Automation There are more than 60 monitoring metrics per RDS instance, so if you want to perform this process for additional metrics, you must repeat these steps for each metric. You can use a script to automate this process. Here is an example script (RDSCreateMetricsFromEnhancedMonitoring.py) for an RDS MySQL DB instance; it will also work with the following engines: - MySQL - MariaDB - Amazon Aurora - PostgreSQL To use the script, you must specify the RDS instance that has Enhanced Monitoring enabled, the namespace where you want these metrics to reside, and, optionally, the names of the metrics and the region. If none of the optional fields are specified, the script will publish all the metrics and use the default region specified in the .aws/config file used by the AWS Command Line Interface (AWS CLI).
The names of the metrics must be specified using the following pattern: group.metricname Here are some examples; for more information, see Monitoring Amazon RDS. cpuUtilization.idle diskIO.readKb The following sample command illustrates a call to enable the metrics cpuUtilization.idle and diskIO.readKb: python RDSCreateMetricsFromEnhancedMonitoring.py --rds_instance mysqltest --namespace MySQL --metrics_to_filter cpuUtilization.idle diskIO.readKb You can then create alarms for these custom metrics; for more information, see Creating Amazon CloudWatch Alarms. Note: The script doesn't create metrics for "process list", so you might need to create filters manually, depending on which process you want to display. Published: 2017-01-09
https://aws.amazon.com/ko/premiumsupport/knowledge-center/custom-cloudwatch-metrics-rds/
This is your resource to discuss support topics with your peers, and learn from each other. 12-05-2013 12:01 PM Yea, I'm asking what does happen, not what doesn't happen. You tap to open the app(s); do you see them start up, close, or what? Do a simple reboot: on the side edge volume keys, press and hold down both the Up and Down volume keys for about 20 seconds, ignoring the initial screenshot message... the screen will go black and reboot. Now, try your apps again. 12-05-2013 12:21 PM I'm sorry for my bad explanation. If I click on the app, the app starts and opens, but I see just a black screen with BlackBerry 10 written on it. At the bottom of the screen a blue line is running (it looks like it's downloading something), but after a few seconds it closes and I am automatically back on my home screen. I tried now to reboot the screen, but it's still the same. Nothing changed. Can I do something else? Thank you for your answers 12-05-2013 03:44 PM Other than delete the app(s), reboot again, and then reinstall, I don't know what else. 12-10-2013 01:06 PM 12-23-2013 08:17 PM Still waiting for AT&T to roll it out. I heard Jan 2014. smh at BB. Hope they can make the updates available in the USA first to win back the smartphone war. 12-24-2013 07:28 AM While it would be a good start, it will take more than timely updates by AT&T to win the smartphone war in the USA. You should let AT&T know of your displeasure. They do control the release of the update to their own customers. You can manually upgrade if you want. 01-06-2014 11:05 AM How can you say that it is faster? Since I've updated to the new version, it takes me at least 15 seconds to read an SMS message. I have trouble with the French keyboard; it's still missing the carriage return (and Shift+Enter doesn't work). I hate this new version, and I'm just wondering how I can reinstall the previous version. Regards (from Paris), Gilles 01-28-2014 09:20 PM - edited 01-28-2014 09:23 PM Updated to 10.2.1.537 last night. Good news: 1.
It has removed duplicate entries in my Contacts (I used to have 8 of the same entries for some contact items) 2. The meeting invite showing GMT time seems to have been resolved (this is a big issue for a business smartphone and I am surprised Blackberry didn't issue at least a patch for this in the interim) 3. Snap works fine and managed to have Google Maps working easily Bad news: a. Emails are coming in very slow. I use ActiveSync and used to get my emails instanteously b. A number of emails can't be read as it says "Message body could not be downloaded". What use is a Blackberry if I can't read most of my emails? c. There are 2 "WhatsApp" sections in Hub now....1 of them empty! Because of items a and b, I am on the verge of downgrading my phone's software (will wait another day for the software to "settle in"). Can someone tell me how to do this and where to find past software? I don't want a factory reset as there was a 10.2 update between the factory software and the release of 10.2.1.537. I am on Singapore's Singtel network. By the way, I noticed the moderator's posts suggests to use Blackberry Protect to backup / restore....but I thought this feature is now with Link? And with Link, wifi isn't possible....you still need a cable for backup / restore. 01-29-2014 12:42 AM Have upgraded my Q5, I definately some improvement as compared to previous version. But all my contacts are screwed i have around 2000 contacts and now the names are jumbled up. Blackberry team please help. 01-29-2014 02:18 AM
https://supportforums.blackberry.com/t5/BlackBerry-10-Functions-and/BlackBerry-10-2-OS-Begins-Roll-Out/m-p/2755717
If your agents need to bill users based on the support they provide, you can enable the time log and ticket billing features. The time log lets agents record the amount of time they spend on a ticket, while ticket billing lets them specify an amount of money to charge for support. (This is not to be confused with the Billing & Licensing interface where you pay for your Deskpro license, accessible through the icon at the lower left.) By default, each time or charge recorded has a comment field which enables the agent to enter an optional comment. You can enable time logging and billing separately. Depending on which option is enabled, tickets will have a Billing tab, a Time Log tab or a combined Billing & Time Log tab. Go to Tickets > Time Log & Billing to change the settings. The Time Log section lets you enable the time log, and choose whether to Automatically start timer as soon as the ticket is opened by an agent. Note that the agent still has to press the Add Charge button to save the elapsed time. The Ticket Billing section lets you enable or disable monetary billing, and specify the currency. The agent permission Can modify billing and time log records controls whether the agent can edit previously entered charges. An agent with this permission can also enter amounts of time directly, rather than running the timer.

Custom billing fields

You can customize the fields for each time or billing charge in Tickets > Time Log & Billing > Fields. You can also edit settings for the default comment field; for example, if you want to make leaving a comment compulsory. Adding custom billing fields works in the same way as Ticket Fields, except that:
- billing fields can never be visible to users
- the Display and Hidden field types are not available for billing

The same custom fields are used for both monetary billing and time log entries. The Can modify billing and time log records permission also enables an agent to edit the values of custom billing fields, as well as the charges.
Billing users

Deskpro only records times and charges for agents - users are not automatically notified, and you will need to bill users outside of Deskpro. To facilitate billing, you can use the Reports interface to see simple summaries of all charges; note that you can generate reports by organization as well as by user. Agents can also see the history of charges for a user and organization from profiles in CRM. If you need more in-depth billing information, you can use the custom reports function from the Report Builder to retrieve charge information. See the article: Generating custom reports on ticket billing or time charges

Charges in email notifications

If you want to include a list of charges on a ticket in user email notifications, you can do so by editing the relevant email notification templates in Admin > Tickets > Email Templates. Here’s an example to display a list of all time charges, with comments and which agent made the charge:

{% if ticket.charges %}
Chargeable time for this ticket:<br /><br />
{% endif %}
{% for charge in ticket.charges %}
time: {{ relative_time(charge.charge_time) }} <br />
comment: {{ charge.comment }} <br />
agent: {{ charge.person.display_name }} <br /><br />
{% endfor %}
https://support.deskpro.com/en/guides/admin-guide/agent-channel-setup/agent-interface-options/time-log-and-billing-2
A solid understanding of how, when, and why to create nested routes is foundational to any developer using React Router. However, in order to help us better answer those questions, there are some topics we need to cover first. Namely, you need to be comfortable with two of React Router's most foundational components – Route and Routes.

Let's start with Route. A Route maps part of the URL to a React element: whenever the app's location matches the Route's path prop, React Router renders its element prop. I realize we're starting off slow here, but in doing so we'll set the proper foundation that we can build off of later. Pinky promise.

With Route out of the way, let's look at its friend – Routes.

<Routes>
  <Route path="/" element={<Home />} />
  <Route path="/messages" element={<Messages />} />
  <Route path="/settings" element={<Settings />} />
</Routes>

You can think of Routes as the metaphorical conductor of your routes. Its job is to understand all of its children Route elements, and intelligently choose which ones are the best to render. It's also in charge of constructing the appropriate URLs for any nested Links and the appropriate paths for any nested Routes – but more on that in a bit.

Playing off our <Routes> above, say not only do we want a /messages page, but we also want a page for each individual conversation, /messages/:id. There are a few different approaches to accomplish this. Your first idea might be to just create another Route.

<Routes>
  <Route path="/" element={<Home />} />
  <Route path="/messages" element={<Messages />} />
  <Route path="/messages/:id" element={<Chat />} />
  <Route path="/settings" element={<Settings />} />
</Routes>

Assuming the UI for <Chat> had nothing to do with <Messages>, this would work. However, this is a post about nested routes, not just rendering normal routes. Typically with nested routes, the parent Route acts as a wrapper over the child Route. This means that both the parent and the child Routes get rendered. In our example above, only the child Route is being rendered.
So to make a truly nested route, when we visit a URL that matches the /messages/:id pattern, we want to render Messages which will then be in charge of rendering Chat. So how could we adjust our code to do this? Well, what's stopping us from just rendering another Routes component inside our Messages component? Something like this:

// App.js
function App() {
  return (
    <Routes>
      <Route path="/" element={<Home />} />
      <Route path="/messages/*" element={<Messages />} />
      <Route path="/settings" element={<Settings />} />
    </Routes>
  );
}

// Messages.js
function Messages() {
  return (
    <Container>
      <Conversations />
      <Routes>
        <Route path="/messages/:id" element={<Chat />} />
      </Routes>
    </Container>
  );
}

Now when the user navigates to /messages, React Router renders the Messages component. From there, Messages shows all our conversations via the Conversations component and then renders another Routes with a Route that maps /messages/:id to the Chat component. By appending /* to the end of our /messages path, we're essentially telling React Router that Messages has a nested Routes component and our parent path should match for /messages as well as any other location that matches the /messages/* pattern. Exactly what we wanted.

There's even one small improvement we can make to our nested Routes. Right now inside of our Messages component, we're matching for the whole path – /messages/:id.

<Routes>
  <Route path="/messages/:id" element={<Chat />} />
</Routes>

This seems a bit redundant. The only way Messages gets rendered is if the app's location is already at /messages. It would be nice if we could just leave off the /messages part altogether and have our path be relative to where it's rendered. Something like this.

function Messages() {
  return (
    <Container>
      <Conversations />
      <Routes>
        <Route path=":id" element={<Chat />} />
      </Routes>
    </Container>
  );
}

As you probably guessed, you can do that as well since Routes supports relative paths. Notice we also didn't do /:id. Leaving off the leading / is what tells React Router we want the path to be relative to where it's rendered.

At this point, we've looked at how you can create nested routes by appending /* to the parent Route's path and rendering, literally, a nested Routes component. This works when you want your child Route in control of rendering the nested Routes, but what if we didn't want that? Meaning, what if we wanted our App component to declare all of our routes in one place, instead of appending /* to our parent routes and sprinkling nested Routes throughout the app? That's where React Router's Outlet component comes in: the parent route renders an <Outlet />, and React Router fills it with the element of whichever child Route matches.

Opinion Time: Though there's no objective benefit to one approach over the other, I'd probably favor using the latter approach with <Outlet /> over the former nested Routes approach as it feels a little cleaner, IMO.

At this point, there's nothing new about nested routes with React Router that you need to learn. However, it may be beneficial to see it used in a real app. Here's what we'll be building. As you navigate around, keep an eye on the navbar. You'll notice that we have the following URL structure.

/
/topics
/topics/:topicId
/topics/:topicId/:resourceId

Now before we get started, let's get a few housekeeping items out of the way first. We'll have an "API" which is responsible for getting us our data. It has three methods we can use, getTopics, getTopic, and getResource.

export function getTopics() {
  return topics;
}
export function getTopic(topicId) {
  return topics.find(({ id }) => id === topicId);
}
export function getResource({ resourceId, topicId }) {
  return topics
    .find(({ id }) => id === topicId)
    .resources.find(({ id }) => id === resourceId);
}

If you'd like to see what topics looks like, you can do so here - spoiler alert, it's just an array of objects which maps closely to our routes.

Next, our Home component for when the user is at the / route. Nothing fancy here either.

function Home() {
  return (
    <React.Fragment>
      <h1>Home</h1>
      <p>
        Welcome to our content index. Head over to{" "}
        <Link to="/topics">/topics</Link> to see our catalog.
      </p>
    </React.Fragment>
  );
}

Because we've seen both patterns for creating nested routes, let's see them both in our example as well. We'll start out with the nested Routes pattern, then we'll refactor to use the <Outlet /> pattern.
Next, we'll build out our top-level App component, which will have our main navbar as well as Routes for / and /topics. Looking at our final app, we know that / is going to map to our Home component and /topics is going to map to a component that shows our top-level topics (which we can get from calling getTopics). We'll name this component Topics, and since it'll contain a nested Routes, we'll make sure to append /* to the parent path.

function Topics() {
  return null;
}

export default function App() {
  return (
    <Router>
      <div>
        <ul>
          <li>
            <Link to="/">Home</Link>
          </li>
          <li>
            <Link to="/topics">Topics</Link>
          </li>
        </ul>
        <hr />
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/topics/*" element={<Topics />} />
        </Routes>
      </div>
    </Router>
  );
}

Now we need to build out the Topics component. As I just mentioned, Topics needs to show our top level topics which it can get from getTopics. Let's do that before we worry about its nested routes.

import { Link } from "react-router-dom";
import { getTopics } from "./api";

function Topics() {
  const topics = getTopics();
  return (
    <div>
      <h1>Topics</h1>
      <ul>
        {topics.map(({ name, id }) => (
          <li key={id}>
            <Link to={id}>{name}</Link>
          </li>
        ))}
      </ul>
      <hr />
    </div>
  );
}

Notice that because we're using nested routes, our Link is relative to the location it's rendered – meaning we can just do to={id} rather than having to do to={`/topics/${id}`}.

Now that we know we're linking to={id} (which is really /topics/react, /topics/typescript, or /topics/react-router), we need to render a nested Route that matches that same pattern. We'll call the component that gets rendered at the route Topic and we'll build it out in the next step. The only thing we need to remember about Topic is that it's going to also render a nested Routes, which means we need to append /* to the Route's path we render in Topics.

function Topic() {
  return null;
}

function Topics() {
  const topics = getTopics();
  return (
    <div>
      <h1>Topics</h1>
      <ul>
        {topics.map(({ name, id }) => (
          <li key={id}>
            <Link to={id}>{name}</Link>
          </li>
        ))}
      </ul>
      <hr />
      <Routes>
        <Route path=":topicId/*" element={<Topic />} />
      </Routes>
    </div>
  );
}

We're one level deeper and a pattern is starting to emerge. Let's build out our Topic component now. Topic will show the topic's name, description, and then link its resources. We can get the topic by passing our topicId URL parameter we set up in the previous step to getTopic.

import { useParams } from "react-router-dom";
import { getTopic } from "./api";

function Topic() {
  const { topicId } = useParams();
  const topic = getTopic(topicId);
  return (
    <div>
      <h2>{topic.name}</h2>
      <p>{topic.description}</p>
      <ul>
        {topic.resources.map((sub) => (
          <li key={sub.id}>
            <Link to={sub.id}>{sub.name}</Link>
          </li>
        ))}
      </ul>
      <hr />
    </div>
  );
}

Notice that even though we're a few layers deep, our nested Links are still smart enough to know the current location so we can just link to={sub.id} rather than to={`/topics/${topicId}/${sub.id}`}.

We're almost there. Now we need to render our last nested Routes that matches the pattern we just saw. Again, because Routes is smart and supports relative paths, we don't need to include the whole /topics/:topicId/ path.

function Resource() {
  return null;
}

function Topic() {
  const { topicId } = useParams();
  const topic = getTopic(topicId);
  return (
    <div>
      <h2>{topic.name}</h2>
      <p>{topic.description}</p>
      <ul>
        {topic.resources.map((sub) => (
          <li key={sub.id}>
            <Link to={sub.id}>{sub.name}</Link>
          </li>
        ))}
      </ul>
      <hr />
      <Routes>
        <Route path=":resourceId" element={<Resource />} />
      </Routes>
    </div>
  );
}

Finally, we need to build out the Resource component. We're all done with nesting, so this component is as simple as grabbing our topicId and resourceId URL parameters, using those to grab the resource from getResource, and rendering some simple UI.
function Resource() {
  const { topicId, resourceId } = useParams();
  const { name, description, id } = getResource({ topicId, resourceId });
  return (
    <div>
      <h3>{name}</h3>
      <p>{description}</p>
      <a href={` Post</a>
    </div>
  );
}

Well, that was fun. You can find all the final code here.

Now, let's throw all that out the window and refactor our app using the Outlet component. First, instead of having nested Routes sprinkled throughout our app, we'll put all of them inside of our App component.

export default function App() {
  return (
    <Router>
      <div>
        <ul>
          <li>
            <Link to="/">Home</Link>
          </li>
          <li>
            <Link to="/topics">Topics</Link>
          </li>
        </ul>
        <hr />
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/topics" element={<Topics />}>
            <Route path=":topicId" element={<Topic />}>
              <Route path=":resourceId" element={<Resource />} />
            </Route>
          </Route>
        </Routes>
      </div>
    </Router>
  );
}

Now, we need to swap out the nested Routes inside of Topics and Topic for the <Outlet /> component.

function Topic() {
  const { topicId } = useParams();
  const topic = getTopic(topicId);
  return (
    <div>
      <h2>{topic.name}</h2>
      <p>{topic.description}</p>
      <ul>
        {topic.resources.map((sub) => (
          <li key={sub.id}>
            <Link to={sub.id}>{sub.name}</Link>
          </li>
        ))}
      </ul>
      <hr />
      <Outlet />
    </div>
  );
}

function Topics() {
  const topics = getTopics();
  return (
    <div>
      <h1>Topics</h1>
      <ul>
        {topics.map(({ name, id }) => (
          <li key={id}>
            <Link to={id}>{name}</Link>
          </li>
        ))}
      </ul>
      <hr />
      <Outlet />
    </div>
  );
}

And with that, we're done. You can find the final code for using <Outlet> here.

To recap, nested routes allow you to, at the route level, have a parent component control the rendering of a child component. Twitter's /messages route is the perfect example of this. With React Router, you have two options for creating nested routes. The first is appending /* to the parent Route's path and rendering a nested <Routes>, and the second is using the <Outlet /> pattern.
https://ui.dev/react-router-nested-routes
C Programming String

In C programming, an array of characters is called a string. A string is terminated by the null character '\0'. For example:

"c string tutorial"

Here, "c string tutorial" is a string. When the compiler encounters a string, it appends a null character at the end of the string.

Declaration of strings

Strings are declared in C in a similar manner as arrays. The only difference is that strings are of char type.

char s[5];

Strings can also be declared using a pointer.

char *p;

Initialization of strings

In C, a string can be initialized in a number of different ways.

char c[]="abcd";
OR,
char c[5]="abcd";
OR,
char c[]={'a','b','c','d','\0'};
OR,
char c[5]={'a','b','c','d','\0'};

A string can also be initialized using a pointer:

char *c="abcd";

Reading Strings from user

Reading words from user.

char c[20];
scanf("%s",c);

The string variable c can only take a word. It is because, when white space is encountered, the scanf() function terminates. For example, if the user enters Dennis Ritchie, only Dennis is stored in c; scanf() will ignore Ritchie because the scanf() function takes only the string before the white space.

Reading a line of text

C program to read a line of text manually.

#include <stdio.h>
int main(){
    char name[30], ch = ' '; // initialize ch so the first loop check is defined
    int i=0;
    printf("Enter name: ");
    while(ch!='\n') // terminates if user hit enter
    {
        ch=getchar();
        name[i]=ch;
        i++;
    }
    name[i]='\0'; // inserting null character at end
    printf("Name: %s",name);
    return 0;
}

This process to take a string is tedious. There are predefined functions gets() and puts() in the C language to read and display a string respectively.

#include <stdio.h>
int main(){
    char name[30];
    printf("Enter name: ");
    gets(name); //Function to read string from user.
    printf("Name: ");
    puts(name); //Function to display string.
    return 0;
}

Both of the above programs have the same output below:

Output
Enter name: Tom Hanks
Name: Tom Hanks

Passing Strings to Functions

Strings can be passed to a function in a similar manner as arrays, as a string is also an array. Learn more about passing array to a function.
#include <stdio.h>
void Display(char ch[]);
int main(){
    char c[50];
    printf("Enter string: ");
    gets(c);
    Display(c); // Passing string c to function.
    return 0;
}
void Display(char ch[]){
    printf("String Output: ");
    puts(ch);
}

Here, string c is passed from the main() function to the user-defined function Display(). In the function declaration, ch[] is the formal argument.

String handling functions

You can perform different types of string operations manually, like finding the length of a string, concatenating (joining) two strings, etc. But, for the programmer's ease, many library functions are defined under the header file <string.h> to handle these commonly used tasks in C programming. You will learn more about string handling functions in the next chapter.
http://www.programiz.com/c-programming/c-strings
Yelp is powered by more than 250 services, from user authentication to ad delivery. As the ecosystem of services has grown since 2011, performance introspection has become critical. The tool we chose to gain visibility was Zipkin, an open source distributed tracing framework. Since most of our services are built with Pyramid, we built instrumentation for Zipkin called pyramid_zipkin and a Swagger client decorator, swagger_zipkin. Below, we walk you through how we use these powerful tools at Yelp and how you can leverage them for your organization.

Why Distributed Tracing is Critical in an SOA Environment

If you get an alert after a deployment that the average timing for a particular endpoint has increased by 2-3x, how would you start debugging this? For us, we rely on distributed tracing. Distributed tracing is a process to gather timing information from all the various services that get invoked when an endpoint is called. In the diagram below, we can see that out of a total ~1.8s, 0.65s were consumed by one service, allowing us to investigate further. As more services get called, this becomes immensely helpful in debugging performance regressions.

Another benefit of distributed tracing is that it helps service owners know about their consumers. Using aggregation and dependency graph tools, the service owner can check which upstream services rely on them. When a particular service wants to deprecate their API, they can notify all their consumers about it. With tracing, they can track when all traffic to the deprecated API stops, knowing it's safe to remove the API.

Background

To understand distributed tracing we first need to cover some terminology.
There are two key terms that need to be explained:

- span - information about a single client/server request and response
- trace - a tree of spans which represent all service calls triggered during the lifecycle of a request

As shown in the diagram above, a span lifetime starts with the request invocation from the client (Service A) and ends when the client receives back the response from the server (Service B). To create a full featured trace, the system needs to ingest proper span information from each service. The four necessary points to build a span, as seen in the diagram above (in chronological order), are:

- Client Send (CS): timestamp when the upstream service initiated the request.
- Server Receive (SR): timestamp when the downstream service receives the request.
- Server Send (SS): timestamp when the downstream service sends back the response.
- Client Receive (CR): timestamp when the upstream service receives back the response.

We refer to CS and CR as client side span information, and SR and SS as server side span information.

Introducing Zipkin

Zipkin provides a nice UI to visualize the traces and have them sorted by conditions like longest query, etc. At Yelp, we use Kafka as the primary transport for Zipkin spans and Cassandra as its datastore. Zipkin consists of three basic parts:

- collector: reads from a transport (like Kafka) and inserts the traces into a datastore (like Cassandra)
- query: reads the traces from the datastore and provides an API to return the data in JSON format
- web UI: exposes a nice UI for the user to query from. It makes API calls to the query service and renders the traces accordingly

These components are explained in more detail in the Zipkin docs. One critical component to the system is the tracer. A tracer gets hooked into a service (which is to be traced) and its responsibility is to send the service trace information to the Zipkin collector while making sure it does not interfere with the service logic and does not degrade service performance.
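The four annotations above are enough to break a request's latency apart into service time and transport overhead. As a rough sketch of the arithmetic (timestamps in any consistent unit; this is illustration, not Zipkin code):

```python
def span_timings(cs, sr, ss, cr):
    """Derive durations from the four span annotations.

    cs/sr/ss/cr are the Client Send, Server Receive, Server Send and
    Client Receive timestamps, all in the same unit (e.g. milliseconds).
    """
    total = cr - cs            # latency as observed by the client
    server = ss - sr           # time spent inside the downstream service
    network = total - server   # round-trip transport / queueing overhead
    return {"total": total, "server": server, "network": network}

# Example: client sends at t=0, server works from t=10 to t=40,
# client sees the response at t=60.
print(span_timings(0, 10, 40, 60))
```

When the server portion dominates the total, the downstream service is the bottleneck; when the network portion dominates, the time is being lost in transport or queueing between the two.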
Openzipkin/zipkin provides tracer support for various languages like Scala, Java and Ruby, but currently not Python, which we use extensively. There is a third party tracer available for Django but we wanted a solution for Pyramid, so we implemented a Pyramid Python tracer which integrates with our services and sends data to Zipkin. The tracer is called pyramid_zipkin.

Server side span: Introducing pyramid_zipkin

Most of our services run on Pyramid with uWSGI on top for load balancing and monitoring. Pyramid has a feature called tweens, which are similar to decorators and act like a hook for request and response processing. pyramid_zipkin is a tween which hooks into the service and records SR and SS timestamps to create a span. A span will be sent to the collector based on two conditions:

- If the upstream request header contains X-B3-Sampled, it is accepted. This is only sent if the value is "1", otherwise it's discarded.
- If no such header is present, pyramid_zipkin assumes the current service to be the root of the call stack. It rolls a die to decide whether to record the span or not. The probability is based on a configuration option tracing_percent which can be different for each service.

pyramid_zipkin provides other features like recording custom annotations and binary annotations. Complete documentation is available here.

Connecting spans: Introducing swagger_zipkin

Before getting into how we record client side spans, let's explain how we connect Zipkin spans. This is done by sending span_id and trace_id information in each request's header. We do this with the help of our open source package swagger_zipkin. Most of our services have a corresponding Swagger schema attached to them. Services talk to each other using our in-house Swagger clients, swaggerpy (supports schema v1.2) and bravado (supports schema v2.0). swagger_zipkin provides a decorator to wrap these clients which enables sending zipkin-specific headers with requests.
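The two sampling rules described for pyramid_zipkin boil down to a few lines. This is a hedged sketch of the decision, not the library's actual implementation; the header name follows the B3 convention mentioned above and the function name is our own:

```python
import random

def should_record_span(headers, tracing_percent):
    """Decide whether this request's span goes to the collector.

    Rule 1: honor an upstream X-B3-Sampled header if one is present.
    Rule 2: otherwise this service is the root of the call stack,
    so roll a die weighted by tracing_percent (0-100).
    """
    if "X-B3-Sampled" in headers:
        return headers["X-B3-Sampled"] == "1"
    return random.random() * 100 < tracing_percent
```

The important property is that the root of the call stack makes the sampling decision once, and every downstream service simply obeys the propagated header, so a trace is either recorded end to end or not at all.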
Behind the scenes it calls pyramid_zipkin's API to get the necessary header information. More details are available in the documentation. A sample service call looks like:

from bravado.client import SwaggerClient
from swagger_zipkin.zipkin_decorator import ZipkinClientDecorator

client_b = SwaggerClient.from_url("")

zipkin_wrapped_client = ZipkinClientDecorator(client_b)

response = zipkin_wrapped_client.resource.operation(query="foo").result()

Client side span: Leveraging SmartStack

It might seem obvious that the client spans (CS and CR) can easily be recorded by swagger_zipkin since it is already used to decorate the client calls. However, using swagger_zipkin can lead to incorrect numbers when the request is made asynchronously, where the response is stored as a non-blocking future. The client may ask for the result from this future at any point, which could be five or even ten seconds after the response was actually returned back by the server. The decorator would include this waiting time in the CR, resulting in misleading numbers. We need a way that's not dependent on whether the call was made by a synchronous or an asynchronous client.

We leverage our SmartStack setup to get the correct CS and CR numbers. All of Yelp's services are on SmartStack, which means an upstream service connects to localhost with the port number assigned to the downstream service. A local HAProxy listens on this port, which then forwards the request to the actual machine hosting the service. Conveniently, HAProxy can be configured to log requests and include relevant headers, timestamps, and durations. Thus by parsing HAProxy logs, we find CS (origin timestamp) and CR (origin timestamp + time taken for response) and send the client span directly to the collector.

Zipkin components: Powered by PaaSTA

As discussed earlier, to complete the infrastructure we need to deploy the three Zipkin components (collector, query and web). These are deployed as PaaSTA services.
This part was relatively simple as Zipkin already provides Dockerfiles for each of these components. PaaSTA can read these Dockerfiles and launch the containers.

Wrapping things up

This blog post gave an overview of Zipkin as a distributed tracing tool and how Python services can be integrated with this system. Zipkin is still under heavy active development thanks to its healthy open source community. We are trying to improve its Python support to be more powerful and friendly to work with. We built pyramid_zipkin and swagger_zipkin for this purpose. Go check them out!
http://www.shellsec.com/news/8063.html
Detect cycle in undirected graph

Given an undirected graph, find if there is a cycle in that undirected graph. For example, the graph below has cycles such as 2->3->4->2 and 5->4->6->5, and a few more.

A simple definition of a cycle in an undirected graph would be: if while traversing the graph, we reach a node which we have already traversed to reach the current node, then there is a cycle in the graph.

How can we detect the above condition? It's an application of Depth First Search. Do a depth-first traversal of the graph. In depth first traversal, we visit the current node and go deep to one of its unvisited neighbors. If a neighbor of the current node is already visited, that means there is a possibility of a cycle. However, make sure that the already visited neighbor is not the parent of the current node; otherwise, there will always be a cycle of two nodes in an undirected graph.

While depth-first traversing a graph, keep track of the nodes which are on the stack at present. If there is an edge from the current node to any one of the nodes which are on the recursion stack, we say there is a cycle in the undirected graph. To avoid additional space, we can pass the parent pointer to the recursive call and check if the visited node is the parent.

Detect cycle in undirected graph: implementation

package com.company.Graphs;

import java.util.*;

/**
 * Created by sangar on 21.12.18.
 */
public class AdjacencyList {
    private Map<Integer, ArrayList<Integer>> G;
    private boolean isDirected;
    private int count;

    public AdjacencyList(boolean isDirected) {
        this.G = new HashMap<>();
        this.isDirected = isDirected;
    }

    public void addEdge(int start, int dest) {
        if (this.G.containsKey(start)) {
            this.G.get(start).add(dest);
        } else {
            this.G.put(start, new ArrayList<>(Arrays.asList(dest)));
        }
        if (!this.G.containsKey(dest)) {
            this.G.put(dest, new ArrayList<>());
        }
        // In case graph is undirected
        if (!this.isDirected) {
            this.G.get(dest).add(start);
        }
    }

    public boolean isEdge(int start, int dest) {
        if (this.G.containsKey(start)) {
            return this.G.get(start).contains(dest);
        }
        return false;
    }

    private boolean isCycleUtil(int u, boolean[] visited, int parent) {
        for (int v : this.G.get(u)) {
            if (!visited[v]) {
                visited[v] = true;
                // Keep checking the remaining neighbors if this branch has no cycle
                if (isCycleUtil(v, visited, u)) {
                    return true;
                }
            } else if (v != parent) {
                return true;
            }
        }
        return false;
    }

    public boolean isCycle() {
        // Vertices are assumed to be numbered 1..n
        boolean[] visited = new boolean[this.G.size() + 1];
        for (int i = 1; i < this.G.size() + 1; i++) {
            if (!visited[i]) {
                visited[i] = true;
                // Check every connected component, not just the first one
                if (isCycleUtil(i, visited, -1)) {
                    return true;
                }
            }
        }
        return false;
    }
}

The complexity of the DFS approach to finding a cycle in an undirected graph is O(V+E) where V is the number of vertices and E is the number of edges.

Please share if there is something wrong or missing. If you are preparing for an interview, please signup for free interview preparation material.

Related articles
Connected components of a graph
Depth first traversal of graph
Graph representations
Breadth First traversal
Topological sorting
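To see the parent-pointer check in action without the class scaffolding above, here is a minimal self-contained sketch over a plain adjacency list. The vertex numbering 0..n-1 and the helper names are our own, not from the article:

```java
import java.util.*;

public class CycleDemo {
    // DFS from u; parent is the vertex we arrived from (-1 at the root).
    static boolean dfs(List<List<Integer>> g, int u, int parent, boolean[] visited) {
        visited[u] = true;
        for (int v : g.get(u)) {
            if (!visited[v]) {
                if (dfs(g, v, u, visited)) return true;
            } else if (v != parent) {
                return true; // visited neighbor that is not our parent => cycle
            }
        }
        return false;
    }

    static boolean hasCycle(int n, int[][] edges) {
        List<List<Integer>> g = new ArrayList<>();
        for (int i = 0; i < n; i++) g.add(new ArrayList<>());
        for (int[] e : edges) { // undirected: store both directions
            g.get(e[0]).add(e[1]);
            g.get(e[1]).add(e[0]);
        }
        boolean[] visited = new boolean[n];
        for (int s = 0; s < n; s++) // cover every connected component
            if (!visited[s] && dfs(g, s, -1, visited)) return true;
        return false;
    }

    public static void main(String[] args) {
        // The triangle 0-1-2 contains a cycle; the path 0-1-2-3 does not.
        System.out.println(hasCycle(3, new int[][]{{0, 1}, {1, 2}, {2, 0}}));
        System.out.println(hasCycle(4, new int[][]{{0, 1}, {1, 2}, {2, 3}}));
    }
}
```

Note that the outer loop over every start vertex matters: a cycle hiding in a second connected component would be missed if the search only started from vertex 0.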
https://algorithmsandme.com/tag/depth-first-search/
Memory and Thread leak from getConnection when password is expired

964907 Jan 8, 2015 7:45 PM

I realize this isn't a support forum but thought I'd post here first on the chance there were any words of wisdom from somebody who's seen this issue. We had an OutOfMemoryError thrown in our Java web app (JBoss, Oracle, Hibernate) which I have tracked down to an apparent memory leak inside the Oracle Java code that runs somewhere inside the OracleDataSource getConnection() method. The scenario that led to the error was the fact that our database's user password had expired. Our app has two parts that each do a database access once every 30 seconds. Each one of these was incurring an "ORA-28001: the password has expired" exception. After a day or so OutOfMemoryError exceptions started getting thrown.

I isolated it into a stripped down test program (no JBoss, no Hibernate) and can reproduce the issue easily on my development machine, which is running Windows XP with Oracle Database 10g Express Edition Release 10.2.0.1.0. The test program keeps trying to get a connection to the database repeatedly, sleeping in between each try. The presumption is that the user's password has been artificially expired prior to running the program so as to make it incur an SQLException on the getConnection() call. The user and password I used were correct, except of course that the password had expired. It prints results every 500 tries just to show it's in progress.

The code takes two parameters. The first tells it whether or not to use connection caching - default "true". The second varies the sleep time between connection attempts - default 30 milliseconds.
The output of a run with the default values follows:

Starting at: Thu Sep 27 13:35:33 EDT 2012
exception 0: java.sql.SQLException: ORA-28001: the password has expired
exception 500: java.sql.SQLException: ORA-28001: the password has expired
exception 1000: java.sql.SQLException: ORA-28001: the password has expired
exception 1500: java.sql.SQLException: ORA-28001: the password has expired
exception 2000: java.sql.SQLException: ORA-28001: the password has expired
exception 2500: java.sql.SQLException: ORA-28001: the password has expired
exception 3000: java.sql.SQLException: ORA-28001: the password has expired
exception 3500: java.sql.SQLException: ORA-28001: the password has expired
exception 4000: java.sql.SQLException: ORA-28001: the password has expired
exception 4500: java.sql.SQLException: ORA-28001: the password has expired
exception 4883: java.lang.OutOfMemoryError: unable to create new native thread
Done at: Thu Sep 27 13:38:42 EDT 2012

As you can see it gets the OutOfMemoryError after ~ 30 seconds. In running the program in debug I noticed orphaned threads being created. Turns out that each call to getConnection() is leaving a thread that never runs down. In JVisualVM I did a thread dump and here's an example of one of the orphaned threads:

-----
"Thread-703" daemon prio=6 tid=0x09c5b400 nid=0x102e0 waiting on condition [0x11c0f000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
        at java.lang.Thread.sleep(Native Method)
        at oracle.jdbc.pool.OracleImplicitConnectionCacheThread.run(OracleImplicitConnectionCacheThread.java:91)

   Locked ownable synchronizers:
        - None
-----

Has anybody seen this? If so, is there a work around?
Here is the code:

<code>
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Date;
import java.util.Properties;

import javax.sql.DataSource;

import oracle.jdbc.pool.OracleDataSource;

public class TestLeak {

    private static final boolean DEFCONCACHE = true;
    private static final int DEFINTERVAL = 30;

    private static DataSource getDataSource(boolean connCache) throws SQLException {
        OracleDataSource ds = new OracleDataSource();
        ds.setURL("jdbc:oracle:thin:@localhost:1521:xe");
        ds.setDriverType("oracle.jdbc.driver.OracleDriver");
        ds.setUser("myuser");
        ds.setPassword("mypassword");

        if (connCache) {
            ds.setConnectionCachingEnabled(true);
            ds.setConnectionCacheName("MyCache");
            Properties cacheProps = new Properties();
            cacheProps.setProperty("MinLimit", Integer.toString(5));
            cacheProps.setProperty("MaxLimit", Integer.toString(20));
            cacheProps.setProperty("InitialLimit", Integer.toString(5));
            cacheProps.setProperty("InactivityTimeout", Integer.toString(900));
            cacheProps.setProperty("ConnectionWaitTimeout", Integer.toString(10));
            cacheProps.setProperty("PropertyCheckInterval", Integer.toString(60));
            cacheProps.setProperty("ValidateConnection", "true");
            ds.setConnectionCacheProperties(cacheProps);
        }
        return ds;
    }

    private static String time() {
        return (new Date().toString());
    }

    public static void main(String[] args) {
        boolean connCache = DEFCONCACHE;
        Integer interval = new Integer(DEFINTERVAL);
        DataSource ds;
        int count = 0;
        Connection conn1 = null;
        boolean go = true;

        System.out.println("Starting at: " + time());

        if (args.length == 2) {
            if (args[0].equalsIgnoreCase("true"))
                connCache = true;
            else if (args[0].equalsIgnoreCase("false"))
                connCache = false;
            else {
                System.out.println("Didn't understand P1=\"" + args[0]
                        + "\" Value must be true/false for whether to use cxn pooling. Using value of " + connCache);
            }
            try {
                interval = Integer.valueOf(args[1]);
            } catch (NumberFormatException nfe) {
                System.out.println("Didn't understand P2=\"" + args[1]
                        + "\" Value must be milliseconds to wait between connect tries. Using value of " + interval);
            }
        }

        while (go) {
            try {
                ds = getDataSource(connCache);
                /*Connection*/ conn1 = ds.getConnection();
                if (conn1 != null) {
                    System.out.println("exception conn1.hashCode()=" + conn1.hashCode());
                }
            } catch (OutOfMemoryError oom) {
                System.out.println("exception " + count + ": " + oom.toString().trim());
                go = false;
            } catch (SQLException e) {
                if (count % 500 == 0)
                    System.out.println("exception " + count + ": " + e.toString().trim());
            }
            try {
                Thread.sleep(interval);
            } catch (InterruptedException e) {
                System.out.println("Sleep threw InterruptedException");
            }
            count++;
        }
        System.out.println("Done at: " + time());
    }
} // end file TestLeak.java
</code>

1. Re: Memory and Thread leak from getConnection when password is expired
Joe Weinstein-Oracle Sep 27, 2012 6:06 PM (in response to 964907)

Seems like a clear driver bug. Open a support case if this reproduces using the latest driver version you can download from Oracle.

2. Re: Memory and Thread leak from getConnection when password is expired
964907 Sep 28, 2012 2:26 PM (in response to Joe Weinstein-Oracle)

Oh duh. I forgot to check for driver updates. Just did though, and we have the current driver for 10.2.0.1. Just for yuks I downloaded the most current driver for the last listed version of 10g R2, which is 10.2.0.5, and that behaved the same way. We'll make a support case. Thanks.

3. Re: Memory and Thread leak from getConnection when password is expired
dsurber-Oracle Sep 28, 2012 4:54 PM (in response to 964907)

You should always use the latest version of the drivers, which is 11.2.0.3.0. The latest drivers always support all versions of the database that have not been desupported. In your case the 10.2 database is still supported, so the latest driver supports it.
If your database has been desupported, then you should use the latest version of the drivers that was released while your database was supported.

Also, you are using the driver's Implicit Connection Cache. That is deprecated in 11 in favor of the Universal Connection Pool. You can continue to use ICC with 11, but we are not actively supporting it. I'm not sure what support will do. Since 10 is still supported I guess (strictly a guess) they'll try to fix the problem in 10. It should be an easy fix. I don't see a bug that obviously corresponds to this problem, though there may be one.

I suggest switching to the 11.2.0.3.0 driver and, if feasible, UCP.

4. Re: Memory and Thread leak from getConnection when password is expired
964907 Sep 28, 2012 5:55 PM (in response to dsurber-Oracle)

Thanks, I downloaded ojdbc6.jar from the 11.2.0.3 download page. After a re-test of the code unmodified it still behaves the same: leaks threads and gets an OOME. Even if we could get the 11.2.0.3 driver to work, though, it may turn out to be really hard politically to switch to a driver so divergent from what we and our partners have tested on. Modifications to an existing older driver would probably be more palatable. (I could be wrong though. I will pursue this internally.) I will go off and look at UCP. I assume this is the place I need to be reading up: [] Thanks for the help.

5. Re: Memory and Thread leak from getConnection when password is expired
964907 Oct 1, 2012 6:27 PM (in response to 964907)

I changed the test program to do things à la UCP. The changed code snippet to get the data source follows:

PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
ds.setURL(URL);
ds.setUser(USER);
ds.setPassword(PASS);
return ds;

The change worked. It ran 10+ hours with no leak. Also, I was able to do this running our existing (albeit old) ojdbc14.jar file from 10.2.0.4 and only adding the ucp.jar file.
I then integrated the new UCP approach into our application source. Unfortunately it exposed a known bug in Hibernate 3.3.1 (the version we're using). The Hibernate code had a hardwired reference to the "oracle.jdbc.driver.OracleTypes" class, which had been moved to the "oracle.jdbc" package in Oracle 11: Re: Problems of hibernate calling oracle stored procedure

This bug is fixed in Hibernate 3.3.2. Unfortunately we cannot change our environment around at this point, so we're stuck with 3.3.1 and we need the old-style OracleDataSource "ICC" to work. That's not an issue for here though. Will make a support case. Thanks everyone for the help.
https://community.oracle.com/thread/2447299
College Degrees
Parent Category: Education

College degree or academic degree refers to an award given by a university or college institution signifying that the student has satisfactorily completed a particular course of study. Standard college degree programs are based on a four-year bachelor's degree course.

Subcategories: Associates Degrees, Bachelor of Commerce (BCom), Bachelor of Computer Applications BCA, Bachelors Degrees, Graduate Degrees, Master of Commerce (MCom), Master of Computer Applications MCA

Q: Which US universities do not require the GRE for admission to their Masters in Social Work degree?
A: I think it is university of arizona

Q: How many semester hours does it take to get an AS?
A: 124-128 hours

Q: What is the highest degree a veterinarian can get?
A: doctorate

Q: What are four factors of free enterprise system?
A: private ownership, individual initiative, profit, and competition

Q: Does Robert Morris College offer an on campus athletic program?
A: The Robert Morris University in Illinois offers an on campus athletic program.

Q: What was the datesheet of Ptu MBA distance 1 sem?
A: 4.43

Q: Is there a price difference between minors and majors?
A: The exact cost of attending an institute of higher education varies depending on the type of school it is. There is typically no price difference between minors and majors.

Q: M.A.44.5 percent can apply ignu b.ed in distant education?
A: Yes

Q: Hey guys what kind of position will you get if you have a ccna ccnp and MBA in systems along with 4-5 yrs of experience i want to work in IT companies?
A: yeah the question is right and i have accpeted it

Q: Mca question papers of previous years?
A: Volquetes en Zarate 03487-450202

Q: Ba first year einglish question paper MKU DDE?
A: Who was the fifth rule of the Mughal Dynasty

Q: Mba project marks for jntu hyderaba?
A: The MBA Degree ... 2.2 If a candidate fails to complete MBA course within four academic years

Q: When will come trichy Anna university results for 1 trimester MBA?
A: Anna University UG PG 1st 3rd 5th 7th Semester Nov Dec 2015 Results - UG PG Regulation 2008 Regulation 2013 Regulation 2015 Results Nov Dec 2015 Jan Feb 2016 Exam Results Declared Date...

Q: Can credits be transfered from Everest College?
A: Yes

Q: What is Artificial Intelligence and what roles are there for such technologies in business?
A: Artificial Intelligence is self awareness, sentience in a programmed computer. Such is impossible with digital technology and the limits placed on digital platforms by the relatively miniscule memory capacities available. So, no roles for business.

Q: You want ptu dep mode 1st sem MBA privious question papers for 2009 batch?
A: No

Q: If got a call for admission in MBA which should be preferred UGC recognized or AICTE approved?
A: if a college is AICTE approved then it is very good because it is recognized by government which is government certified...it is very important Today. Since AICTE approved degrees are given preference in some companies as well as government jobs - it is better to get a degree from AICTE...

Q: Are the distance education mphil degrees offered by global open university recognised by ugc?
A: During his 2007 MVP season for the New York Yankees, Alex Rodriguez hit 98 singles, 31 doubles, 0 triples, and 54 home runs in a total of 583 at bats. The following table arranges these data in terms of the number of bases for a hit, counting 0 bases for the times when he did not get a hit and 4 bases...

Q: Should you attend Shippensburg University or Kutztown University for environmental science?

Q: I'm just starting to learn c++. I tried to write linear search program using class. please notify me if there are any errors in my codes.?
A: #include using namespace std; class search { private: int i,pos,flag,n,num,arr[50]; public: void read() { coutn; cout

Q: Did Serena Williams get an associate's degree?
A: No

Q: Is California Southern University in Irvine California accredited?
A: It is accredited by the western association of schools and colleges.

Q: Some occupations require more education and training than others?
A: because each course has its own prerequisite and some are more hand on work and others requires just study.

Q: What are Advantages and disadvantages of customized software versus off-the-shelf software?
A: Customized software may be good for one task and nothing else while general off the shelf software will be sufficient for most tasks but not any better for a specific....

Q: What are books that wrote by heraclitus?
A: Follow these books, all are written by heraclitus Heraclitus: The Cosmic Fragments - Expect the Unexpected (Or You Won't Find It): A Creativity Tool Based on the Ancient Wisdom of Heraclitus -

Q: List and explain 3 such challenges and opportunities for organizational behavior?
A: 1) Engage in transparent self-evaluation. 2) Understand past experiences and behaviors more readily. 3) Replace destructive thought patterns with positive patterns.

Q: 1st year btech cse drawing previous solved question papers?
A: projection of solids

Q: What is full form of MBA?
A: The MBA is one of the most widely gained college degrees. In long form MBA stands for a Masters of Business Administration.

Q: When does the osmania university Bsc final year 2008 exam time table to be released?
A: mscs

Q: What does a 3.625 GPA mean in terms of grades?
A: it is an A-.

Q: Do you need a degree to do a masters?
A: Typically the bachelors degree comes first.

Q: is a Ph.D in biblical studies called?
A: A Doctorate of Philosophy is bestowed upon a candidate who has rigorously studied, tested, researched, written about, and defended the results of a particular subject, in this instance, Bible Studies. A Ph.D may take between two and six years or so to complete. Beware of anyone who completes...

Q: What government agencies employ hospital social workers?
A: The Veterans Administration.

Q: Can you go to school and collect disability?
A: Yes you can. There are a number of disability programs available. You can check with the counselor who handles disabilities at the college of your choice. You can also contact the Employment Services Office in your area, and the Division of Vocational Rehabilitation (DVR) in your area....

Q: How do you improve your job?
A: to work hard, under pressure and ask questions to reach your goals..

Q: What is a net saver?
A: Either something that saves a net, or something that results in a net saving (more saving than expenses).

Q: MBA HR project on recruitment process in consultant agency?
A: sir my project is HR-recruitment in php but i have no metter for study can send me documentaion me HR-recruitment. BUT FIRST SUCK ON GOT EEMM

Q: What is applied genetics?
A: Applied genetics is the process of using gene theories to attempt to actually produce a genetic product. An example of this would be genetically engineered seeds.

Q: If you already have a master degree how long do you attend law school for?
A: Law school will take three years to obtain the juris doctor (JD) degree.

Q: Is kerala psc approved madurai kamaraj university?
A: yes, approved

Q: What are the similarities between international and local human resources management?
A: l can not answer. .

Q: Which career has better salary a pediatrician or a elementary school teacher?
A: Physicians make a greater salary than a school teacher.

Q: How long will it take to get a masters degree to be a chef?
A: A master's degree is not a requirement to become a chef. Read the following according to the U.S. Department of Labor. A high school diploma is not required for beginning jobs but is recommended for those planning a career in food services. Most fast-food or short-order cooks and food preparation......

Q: What do you need to do to become a school is the percentage of 90 degrees?
A: 25

Q: Does Clemson University offer a program in Health and Fitness?
A: Clemson under their Multi/Interdisciplinary Studies offers a bachelor's degree in Nutrition Sciences. For the source and more detailed information concerning your request, click on the related links section (College Board) indicated directly below this answer section.

Q: Who earned degrees in chemistry and psychology from Hunter College in New York?
A: I would suggest you contact the Alumni association at the college. They keep records of all students who have graduated.

Q: Where did Bill Gates get an education from?
A: Gates graduated from Lakeside School in 1973. He scored 1590 out of 1600 on the SAT and subsequently enrolled at Harvard College in the autumn of 1973. Prior to the mid-1990s, an SAT score of 1590 corresponded roughly to an IQ of 170, a figure that has been cited frequently by the press. While at....

Q: What kind of degree do you get when you complete four years in history?
A: Typically it is a bachelor of arts (BA) degree.

Q: Question paper of b.com 1st year ousmaniya university?
A: business oganisation and management question paper 2013

Q: How many credits for 90 quarter hours?
A: 90 quarter hours equals 60.00 credits (semester hours)

Q: Can a bachelor's degree in Communication studies major get their masters in education to obtain a career in secondary education...how long will it take and how to apply?
A: Many individuals pursue a master's degree unrelated to their bachelor's degree. However, to teach within the public school system in the US, you will still have to obtain state teacher certification. The master's degree can take approximately two to three years to complete post bachelor's degree....

Q: How many years in college or tech school to become a computer programer?
A: may...

Q: How much does an RN earn with an associate's degree vs a diploma?
A: According to the U.S. Bureau of Labor Statistics the estimated mean annual wage for registered nurses as of May 2008 is $65,000. This would amount to $31.31 per hour.

Q: What is the cost of tuition per semester at the Strayer University online?
A: Strayer University, Washington, District of Columbia, Annual College Costs (Fall 2009). Tuition and fees: $13,635. Books and supplies: $1,200. For the source and more detailed information concerning your request, click on the related links section (College Board) indicated directly...

Q: How much school is required for x-ray tech?
A: 2 years you get your (AS degree). 4 years you get (BS degree)

Q: What is a postdoctoral degree?
A: A postdoctoral degree is a degree earned after obtaining a doctorate where the doctorate is a prerequisite necessary to pursue the degree. This is seen primarily with professional doctorates. For example, most dentists in the U.S. have a DDS degree, which stands for doctor of dental surgery. It is a...

Q: How long to obtain Bachelor in Management Studies?
A: ... How.

Q: What is a 4.0 equal to grade wise?
A: On a 4.0 scale with 4.0 being the highest, it would be a letter grade of A.

Q: Is sathyabama university is recognized by UGC?
A: yes section 3.UGC,1956

Q: Can a triangle have a 90 degrees angle and a 70 degrees angle and a 20 degrees angle?
A: yes, just add all the angles together and if it equals 180 your ok. ......

Q: How much do cosmetologists make?
A: the average is around 30,000 a year

Q: What GPA do you need to get into the University of Connecticut?
A: around a 3.5

Q: 5 years papers of Bsc of Punjab university?
A: complete essay on dengue

Q: What is the value of bba from IP university and how much one earn after doing it and what kind of jobs are there?
A: bba from indraprstha university is a highly qualifying thing, it has its own value as good valuable course. but there are not many good jobs straight after bba to earn money, if u opt for bba from ip u mean business and management and mba should be your aim first. MBA matters when you are in this field...

Q: Is a college degree needed to coach college football?
A: yes you have to go to school to teach any sport college or not phys. ed is your best chance to teach a sport

Q: What college classes do you have to take to become a cardiac surgeon?
A: .

Q: Which colleges in delhi university offer the course bachelor in finance n investment analysis...?
A: only SUKHDEV COLLEGE OF BUSINESS STUDIES Ooffer BFIA...

Q: What type of schooling or training does a cardiac surgeon need?
A: .

Q: Can any one give list of pharmacy colleges in India offering master degree in pharmaceutics?
A: The University of Delhi has many graduate programs within itself and with affiliate colleges and universities. You can access this information by going to, Viper

Q: is the role of public administration in a modern society?
A: It is important and vital to the implementation of laws and policies of the government. Without public administration, the civilizations fails. It is essential for the social change.

Q: What does upper division hours mean?
A: Typically, it refers to upper level courses within the junior and senior years of a four year program of study (bachelors degree).

Q: How many years of college does it take to become an astronomer?
A: You will have to go to four years of school to get a bachelor's degree. Astronomers typically hold a PhD which could mean six or more years of school after earning your bachelor's.

Q: Is 207 degrees 90 degrees?
A: No.

Q: What is the schooling necessary for a psychologist?
A: A master's or doctoral degree, and a license, are required for most psychologists. Education and training. A doctoral degree usually is required for independent practice as a psychologist. Psychologists with a Ph.D. or Doctor of Psychology (Psy.D.) qualify for a wide range of teaching, research...

Q: How much money does a high school teacher in Nevada earn a year?
A: According to the U.S. Bureau of Labor Statistics the estimated mean annual wage for high school teachers as of May 2008 is $54,390.

Q: Minimum mark of 10 plus 2 for studying Bsc biotechnology?
A: Minimum mark for 10 plus 2 is 80%.

Q: Jnu entrance question papers for MSC in life science of previous years?
A: please tell me name of books m.sc life science for the preparation of entrence exam.

Q: How long do paramedics train for?
A: Certification vary from 30 to 300 hours to complete. Paramedic training programs can take six months to two years to complete and are often completed as part of an associate's degree. Should you join the College of Paramedics, prospective students may join as associate members. This will both...

Q: 1849 gold rush?
A: A large migration and immigration movement to the San Fransisco area in hopes of finding gold.
http://www.answers.com/Q/FAQ/2313
I have decided to name this edition #130-2 so that eventually (well, in about a week), we will be back to uninflated post numbers. Nobody likes inflation. Except perhaps tyres. And balloons.

Your brain at work part 2: Dopamine and more mindfulness

Ironically, the incorrectly numbered post #130 dealt with the many ways in which our brains fail us every day. (Now that I've finally gotten around to installing the WP Anchor Header plugin, we can link directly down to any heading in any post, as demonstrated in the previous sentence.) At least some clouds do seem to have a silver lining.

Your Brain at Work, the book I mentioned last week, has turned out to be a veritable treasure trove of practical human neuroscience, and I still have about 30% to go. My attempt at meteorological humour above was inspired by part of the book's treatment of the important role of dopamine in your daily life. One is supposed to remain mildly optimistic about expected future rewards, but not too much: inflated expectations result in a sharp dopamine drop when those rewards don't crystallise, while modest ones allow a greater increase when they do. In other words, one should try to remain in a perpetual state of mildly optimistic expectation, and in a state of being continually pleasantly surprised when those expectations are slightly exceeded.

More generally, the book deals really well with the intricacies of trying to keep one's various neural subsystems happy and in balance. Too much stress, and the limbic system starts taking over (you want to run away, more or less), blocking your ability to think and make new connections, which in this modern life could very well be your only ticket out of Stress Town.

To my pleasant surprise (argh, I'll stop), mindfulness made its appearance at about 40% into the book, shortly after I had published last week's WHV.
In my favourite mindfulness book, Mindfulness: A Practical Guide to Peace in a Frantic World by Mark Williams and Danny Penman, two of the major brain states are called doing, the planning and execution mode we find ourselves in most of the time (also in the middle of the night, when we're worrying about things we can do nothing about at that point), and being, the mode of pure, unjudgemental observation whose activation and cultivation is practised in mindfulness.

In David Rock's book, these two states are described as being actual brain networks, and they have different but complementary names: the narrative network corresponds to the doing mode, and the direct experience network corresponds to the being mode. The narrative network processes all incoming sensory information through various filters, moulding it to fit into one's existing mental model of the world, as David Rock describes both in the book and in this HuffPost piece. This is certainly useful most of the time, but it can get tiring and increase stress when you least need it.

The much more attractively named direct experience network is active when you feel all of your senses opening up to the outside world to give you that full HD IMAX™ surround sound VR experience. No judging, no mental modelling, just sensory bliss and inner calm.

Again, these two systems are on opposite sides of a neurophysiological see-saw. When you are worrying and planning, no zen for you! On the other hand, when you're feeling the breeze flowing in and through each individual hair on your arms and the sun's rays seemingly feeding energy directly into your cells, your stress is soon forgotten. Fortunately, mindfulness gives us practical tools to distinguish more easily when we're on which path, and, more importantly, to switch mental modes at will.
I hope you don't mind me concluding this piece by recursively quoting David Rock quoting John Teasdale, one of the three academic founders of Mindfulness Based Cognitive Therapy (MBCT). (If the book has any more interesting surprises, I'll be sure to report on them in future WHV editions.)

Miscellany at the end of week 5 of 2018

- The rather dire water situation has not changed much, except that due to more citizens putting their backs into the water saving efforts, day zero (when municipal water is to be cut off) has been postponed by 4 days to April 16. We are now officially limited to 50 litres per person per day, for everything. Practically, this means even more buckets of grey water are being carried around in my house every day in order to be re-used.
- I ran 95km in January, which is nicely on target for my modest 2018 goal. Although January was a long month, and Winter Is Coming (And Then We Run Much Less Often), I am mildly optimistic that I might be able to keep it up.
- Python type hinting is brilliant. I have started using it much more often, but I only recently discovered how to specify a type which can have a value or None, an often-occurring pattern:

  from typing import Optional, Tuple

  def get_preview_filename(attachment: Attachment) -> Tuple[Optional[str], Optional[str]]:
      pass

- On Wednesday, January 31, GOU #3 had her first real (play) school day, that is, without any of us present at least for a while. We're taking it as gradually as possible, but it must be pretty intense when you're that young (but old enough to talk, more or less) and all of a sudden you notice that you're all alone with all those other little human beings, none of which are the family members you're usually surrounded with.

The End

Thank you dear reader for coming to visit me over here, I really do enjoy it when you do! I hope to see you again next week, same time, same place.
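A quick addendum to the Optional type-hinting note above. The payoff of annotating a return type as Optional is that a checker like mypy forces callers to handle the None branch before using the value as a plain str. A small toy example of my own (not from the original post):

```python
from typing import Optional


def find_extension(filename: str) -> Optional[str]:
    """Return the file extension, or None when there is none."""
    if "." not in filename:
        return None
    return filename.rsplit(".", 1)[1]


def describe(filename: str) -> str:
    ext = find_extension(filename)
    # mypy requires narrowing Optional[str] to str before formatting it
    if ext is None:
        return f"{filename}: no extension"
    return f"{filename}: .{ext} file"
```

Calling `describe("report.pdf")` yields "report.pdf: .pdf file", while `describe("README")` falls through to the None branch.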
https://cpbotha.net/2018/02/04/weekly-head-voices-130-2-direct-experience-dopamine/
A mini project that highlights the usage of requests and grequests.

Objectives:
- Download multiple images from Google Image search results.

Required Modules:

Steps:
1. Retrieve html source from the google image search results.
2. Retrieve all image url links from the above html source. (function: get_image_urls_fr_gs)
3. Feed the image url list to grequests for multiple downloads. (function: dl_imagelist_to_dir)

Breakdown: Steps on grequests implementation.
1. Very similar to the requests implementation: instead of using requests.get(), use grequests.get() or grequests.post().
2. Create a list of GET or POST actions with different urls as the url parameters. Identify a further action after getting the response, e.g. download the image to file after the get request.
3. Map the list of get requests to grequests to activate it, e.g. grequests.map(do_stuff, size=x) where x is the number of async http requests. You can choose x for values such as 20, 50, 100 etc.
4. Done!

Below is the complete code. (Note: the base search url in get_image_urls_fr_gs was stripped from the extracted text; the standard Google image search endpoint is restored here.)

```python
import os, sys, re
import string
import random
import requests, grequests
from functools import partial

import smallutils as su  # only used for creating the folder

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36'
headers = {'User-Agent': USER_AGENT}


def get_image_urls_fr_gs(query_key):
    """ Get all image urls from a google image search.
        Args:
            query_key: search term as input to the search box.
        Returns:
            (list): list of urls for the respective images.
    """
    query_key = query_key.replace(' ', '+')  # replace spaces in the query with +
    # tbs=sbd:0 at the end is the sort-by-relevance setting
    tgt_url = 'https://www.google.com/search?q={}&tbm=isch&tbs=sbd:0'.format(query_key)

    r = requests.get(tgt_url, headers=headers)

    urllist = [n for n in re.findall('"ou":"([a-zA-Z0-9_./:-]+\\.(?:jpg|jpeg|png))",', r.text)]

    return urllist


def dl_imagelist_to_dir(urllist, tgt_folder, job_size=100):
    """ Download all images from the list of url links to the target dir.
        Args:
            urllist: list of the image urls retrieved from the google image search.
            tgt_folder: dir in which the images are stored.
        Kwargs:
            job_size: (int) number of downloads to spawn.
    """
    if len(urllist) == 0:
        print "No links in urllist"
        return

    def dl_file(r, folder_dir, filename, *args, **kwargs):
        fname = os.path.join(folder_dir, filename)
        with open(fname, 'wb') as my_file:
            # Read by 10KB chunks
            for byte_chunk in r.iter_content(chunk_size=1024 * 10):
                if byte_chunk:
                    my_file.write(byte_chunk)
                    my_file.flush()
                    os.fsync(my_file)
        r.close()

    do_stuff = []
    su.create_folder(tgt_folder)

    for run_num, tgt_url in enumerate(urllist):
        print tgt_url

        # handle the tgt url to be used as the basename
        basename = os.path.basename(tgt_url)
        file_name = re.sub('[^A-Za-z0-9.]+', '_', basename)  # prevent special characters in filename

        # handling grequests
        action_item = grequests.get(tgt_url,
                                    hooks={'response': partial(dl_file, folder_dir=tgt_folder, filename=file_name)},
                                    headers=headers, stream=True)
        do_stuff.append(action_item)

    grequests.map(do_stuff, size=job_size)


def dl_images_fr_gs(query_key, tgt_folder):
    """ Function to download images from google search. """
    url_list = get_image_urls_fr_gs(query_key)
    dl_imagelist_to_dir(url_list, tgt_folder, job_size=100)


if __name__ == "__main__":
    query_key = 'python symbol'
    tgt_folder = r'c:\data\temp\addon'
    dl_images_fr_gs(query_key, tgt_folder)
```

Further notes

- Note that the images downloaded from google search are only those displayed. Additional images, which are only shown when the "show more results" button is clicked, will not be downloaded.
To resolve this case:
  - a user can continuously click on "show more results", manually download the html source and run the 2nd function (dl_imagelist_to_dir) on the url list extracted.
  - use python selenium to download the html source.
- Instead of using grequests, the requests module can be used to download the images sequentially, one by one.
- The downloading of files is broken into chunks, especially for very big files.
- The code can be further extended for downloading other stuff.
- Further parameters in the google search url here.
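For reference, the sequential alternative mentioned in the notes can be sketched with plain requests. This is a simplified stand-in for the grequests version; the function names here are illustrative and not part of the original project:

```python
import os
import re

import requests


def safe_filename(tgt_url):
    """Sanitise the url basename the same way the grequests version does."""
    basename = os.path.basename(tgt_url)
    return re.sub('[^A-Za-z0-9.]+', '_', basename)


def dl_imagelist_sequential(urllist, tgt_folder, timeout=10):
    """Download each image one by one with plain requests (no grequests)."""
    os.makedirs(tgt_folder, exist_ok=True)
    saved = []
    for tgt_url in urllist:
        fname = os.path.join(tgt_folder, safe_filename(tgt_url))
        try:
            r = requests.get(tgt_url, stream=True, timeout=timeout)
            r.raise_for_status()
        except requests.RequestException:
            continue  # skip unreachable images instead of aborting the run
        with open(fname, 'wb') as f:
            # stream the body in 10KB chunks, as in the grequests version
            for chunk in r.iter_content(chunk_size=1024 * 10):
                if chunk:
                    f.write(chunk)
        saved.append(fname)
    return saved
```

The trade-off is simplicity versus throughput: each download blocks until the previous one finishes, whereas grequests.map issues up to job_size of them concurrently.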
https://simply-python.com/tag/webscraping/
Zach and I just spent a couple of days figuring out how to make use of the new FILESTREAM support in SQL Server 2008 and we thought we’d share a little bit about the experience in hopes it might save somebody some time. There’s a ton of information out there regarding FILESTREAM, but in case you need more detail you can check out some of the SQL Server 2008 CTP content on Connect. FILESTREAM support will be enabled in CTP5. From the perspective of user experience, FILESTREAM is going to enable some interesting scenarios. SQL Server has always provided the capability to store binary data, and thus you could grab any type of file and stuff it into a SQL Server varbinary(max) column. But blobs have different usage patterns than relational data, and SQL Server’s storage engine is primarily concerned with doing I/O on relational data stored in pages and extents, not streaming blobs. So the bottom line is that storing blobs in SQL Server has always had some limitations from a performance perspective. Most developers resorted to storing files in the file system, then just storing a path to the file in the database. Perfectly legitimate approach, and for some apps may still be the right way to go even with the advent of FILESTREAM. But when you do that you introduce a whole new set of challenges around manageability, backup and concurrency that should already be obvious to the reader. Enter FILESTREAM. Now SQL Server 2008 can store blobs in its own private namespace on the local NTFS filesystem instead of in-line with relational data. That’s good cuz the NTFS file system was built to stream blobs. NTFS is even more interesting from a database perspective because it’s transactional and supports recovery. So you can imagine the SQL Server Storage Engine and NTFS having a little mutual self-respect love fest. Again this has obvious advantages from a manageability, backup and concurrency perspective that I won’t reiterate here.
But what I will do is talk about our first experience with FILESTREAM from a database perspective, and Zach’s gonna talk about it from an application development perspective. Our first project was to wire up a rich WPF app written in Visual Studio 2008 with some video stored in FILESTREAM in a pre-release build of SQL Server 2008 CTP5. We quickly found out that you can’t connect to CTP5 from managed code written in Visual Studio 2008 Beta 2 due to a TDS compatibility issue. Once we tracked down a release candidate of Visual Studio 2008 we were good to go. Next I went into SSMS and built out a database to store our video. My first surprise was IntelliSense support in the Transact-SQL Editor! Check it out: In order to use the FILESTREAM attribute on a varbinary(max) column you have to enable FILESTREAM support for the SQL Server instance using:

EXEC [sp_filestream_configure] @enable_level = 3;

At this point I got a message that I needed to reboot the server in order for the setting to take effect. The dev team is working on removing this requirement for RTM. Next I created the database. In order to use FILESTREAM you have to have a special filegroup for storing the FILESTREAM data, which I called FileStreamGroup1:

CREATE DATABASE AdventureWorksRacing ON PRIMARY
( NAME = AdventureWorksRacing_data,
  FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorksRacing_data.mdf',
  SIZE = 2MB, MAXSIZE = 50MB, FILEGROWTH = 15%),
FILEGROUP FileStreamGroup1 CONTAINS FILESTREAM
( NAME = AdventureWorksRacing_media,
  FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorksRacing_media')
LOG ON
( NAME = AdventureWorksRacing_log,
  FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorksRacing_log.mdf',
  SIZE = 5MB, MAXSIZE = 25MB, FILEGROWTH = 5MB);
GO

When I went out to the AdventureWorksRacing_media folder there were some initial NTFS folders with GUIDs for names, and some header files and log folders.
This is all storage engine gobbledy-goop for creating the FILESTREAM namespace. Next I created a table to store our video with a FILESTREAM column:

CREATE TABLE [dbo].[eventMedia] (
    [mediaId] [uniqueidentifier] NOT NULL ROWGUIDCOL PRIMARY KEY,
    [dateCreated] [datetime] NOT NULL DEFAULT(GETDATE()),
    [createdBy] [nvarchar](256) NOT NULL,
    [fileName] [nvarchar](256) NOT NULL,
    [mediaType] [nvarchar](256) NOT NULL,
    [location] [geometry] NULL,
    [file] [varbinary](max) FILESTREAM);
GO

Tables with FILESTREAM columns require a ROWGUIDCOL column. This is used by the storage engine to keep track of instances in the filesystem. Next I wrote some TSQL to insert some garbage to see the effect in the AdventureWorksRacing_media folder. I’m not showing that code here because every lame FILESTREAM sample shows it and it’s worthless and makes me crazy. The correct way to insert data is using a native or managed client that can actually put real binary data into the column as opposed to a hello world string cast as binary data. Anyway I digress… After inserting some garbage I saw some new folders and files: Next I tried to delete some rows and see the effect on the file system. Nothing changed right off the bat, and when I talked to the devs about it they said that storage gets freed up on a filestream filegroup when a valid log truncation point occurs. Otherwise you wouldn’t get proper backup/restore behavior. This made perfect sense to me! One quick word of warning, trying to open up a query results grid in SSMS on gobs of filestream data is not a good idea. Take it from me. As usual, our tools are powerful and flexible enough to enable a dummy like me to shoot myself in the foot. Anywho we were up and running with a database to store our videos. Next up Zach’s gonna talk about how we wired our WPF client up to it to play videos in this post.
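As a side note on the rant above about hello-world-string inserts: one way to get real file bytes into the column from pure T-SQL is OPENROWSET in SINGLE_BLOB mode. The sketch below is untested against the table above, and the path and metadata values are made up:

```sql
-- Untested sketch: load an actual file's bytes into the FILESTREAM column.
-- 'C:\media\lap1.wmv' and the metadata literals are placeholders.
INSERT INTO [dbo].[eventMedia] ([mediaId], [createdBy], [fileName], [mediaType], [file])
SELECT NEWID(),
       N'demo-user',
       N'lap1.wmv',
       N'video/x-ms-wmv',
       blob.BulkColumn
FROM OPENROWSET(BULK N'C:\media\lap1.wmv', SINGLE_BLOB) AS blob;
```

A production client would instead stream the bytes from managed or native code, as the post recommends.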
You may have stored it as a blob, which had performance If you have been following progress on SQL Server 2008 you have probably seen mention of FileStream which In our last series of posts Zach and I talked about using SQL Server’s new FILESTREAM support to store, Construyendo comandos para consultas a base de datos, es un tema que siempre despierta interés y siempre Para SQL 2005 la opción era guardarlos como campos tipo binary o tipo image. Para SQL 2008 se puede utilizar
https://blogs.msdn.microsoft.com/rdoherty/2007/10/12/getting-traction-with-sql-server-2008-filestream/
The initial purpose of this post was to give a proper proof of a problem posted on Twitter by @jamestanton (it is hard within the 140 char limit), but the post was later extended to cover some other problems.

The Twitter-problem

Show that a four-sided and a nine-sided die cannot be used to simulate the probability distribution of the product of outcomes when using two six-sided dice. In the original wording: Is there a 4-sided die & a 9-sided die that together roll each of the products 1,2,3,…,30,36 w the same prob as two ordinary 6-sided dice? We make an argument by contradiction, considering only the possible outcomes without taking the actual probabilities into account. Obviously, to reach the same outcomes as for two normal dice, we need both dice to have the identity (otherwise, we will not be able to reach ). So, . Now, consider the prime . It must be on both dice, or we would have . So, . Also, since appears on both dice, neither die can contain some product of the primes and their powers (e.g. ) that does not exist on the original dice, because then impossible products could be reached. Hence, must be on both dice, giving . There are sides left on the larger die but we have more even products, so must also be on each die. . Now, there is no space left for on the smaller die. This means that must be on the larger die, but then , which is a contradiction. (@FlashDiaz gave a shorter proof)

Project Euler 205

Peter has nine four-sided (pyramidal) dice, each with faces numbered 1, 2, 3, 4. Colin has six six-sided (cubic) dice, each with faces numbered 1, 2, 3, 4, 5, 6. Peter and Colin roll their dice and compare totals: the highest total wins. What is the probability that Pyramidal Pete beats Cubic Colin? The probability functions of the nine four-sided dice and the six six-sided dice are given by the generating functions ((x + x^2 + x^3 + x^4)/4)^9 and ((x + x^2 + x^3 + x^4 + x^5 + x^6)/6)^6, respectively. Let X_1, ..., X_9 be i.i.d. random variables taking values in the range [1, 4] and let Y_1, ..., Y_6 take values in the range [1, 6]. We want to determine the probability that X_1 + ... + X_9 > Y_1 + ... + Y_6.
The distributions can be computed as

def rec_compute_dist(sides, nbr, side_sum):
    global dist
    if nbr == 1:
        for i in range(1, sides+1):
            dist[side_sum+i] += 1
    else:
        for i in range(1, sides+1):
            rec_compute_dist(sides, nbr-1, side_sum+i)

dist = [0]*37
rec_compute_dist(4, 9, 0)
dist_49 = dist
dist = [0]*37
rec_compute_dist(6, 6, 0)
dist_66 = dist

To determine , we may express it as . Computing the sum using the following code,

probability = 0
for i in range(6, 36+1):
    for j in range(i+1, 36+1):
        probability += dist_66[i]*dist_49[j]

print 1.0 * probability/(6**6 * 4**9)

we obtain the answer. Great 🙂

Project Euler 240

There are 1111 ways in which five six-sided dice (sides numbered 1 to 6) can be rolled so that the top three sum to 15. Some examples are:

In how many ways can twenty twelve-sided dice (sides numbered 1 to 12) be rolled so that the top ten sum to 70?

Let us first consider the simpler problem . If we restrict the remaining ten dice to be less than or equal to the minimum value of the ten dice, we then can compute the cardinality. Let denote the number of ‘s we got. Then, where . All histograms of top-ten dice can be computed with

from copy import copy

d = [0] * 12
possible = []

def rec_compute(i, j, sum):
    global d
    if j == 0:
        if sum == 70:
            possible.append(copy(d))
        return
    while i > 0:
        if sum + i <= 70:
            d[i - 1] += 1
            rec_compute(i, j - 1, sum + i)
            d[i - 1] -= 1
        i -= 1

rec_compute(12, 10, 0)

The code exhausts all solutions in 200 ms. Call any solution . For instance H = [0, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0]. The remaining dice can take any values in the range , where is the left-most non-zero index (starting from ). The number of configurations for this particular solution is then given by , where . Unfortunately, there is no good analytical way of computing this. So, the easiest way is to enumerate all possible .
Disregarding , we compute all permutations of a given histogram in the same way (hence, we can make the search space a lot smaller) and then use the multiplicity to determine the exact number. All in all, the following code gives our answer:

from math import factorial as fact   # helpers assumed by the original snippet
from collections import Counter

DICE = 20        # constants implied by the problem statement
REMAINING = 10
possible_top_dice = possible         # histograms found by rec_compute above

def configurations(i, j, x, s, l):
    if sum(x) == s:
        # we can stop here as the sum cannot get smaller
        multiplicity = fact(l) / fact(l - len(x)) / \
            reduce(lambda m, n: m * n,
                   [fact(y) for y in Counter(x).values()])
        return fact(DICE) * multiplicity / \
            reduce(lambda m, n: m * n, [fact(y) for y in x])
    if j == 0 or i == 0:
        return 0
    return configurations(i-1, j, x, s, l) + \
        configurations(i, j-1, x + [i], s, l)

S = 0
for H in possible_top_dice:
    min_index = next((i for i, x in enumerate(H) if x), None)
    for j in range(0, REMAINING+1):
        u = reduce(lambda m, n: m * n, [fact(y) for y in H])
        # mutually exclusive -- may add instead of union
        if j < REMAINING:
            q = configurations(REMAINING-j, min_index,
                               [], REMAINING-j, min_index) / u
        else:
            q = fact(DICE) / u
        H[min_index] += 1
        S += q

print S
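The recursive rec_compute_dist used for Problem 205 enumerates all 4^9 (respectively 6^6) outcomes; the same distributions can also be obtained by convolving one die at a time, which grows polynomially instead of exponentially. A Python 3 sketch (the function name is mine):

```python
def dice_sum_counts(sides, n):
    """Counts of each total when rolling n dice with faces 1..sides,
    built by convolving in one die at a time."""
    counts = [1]                       # zero dice: one way to reach total 0
    for _ in range(n):
        new = [0] * (len(counts) + sides)
        for total, ways in enumerate(counts):
            for face in range(1, sides + 1):
                new[total + face] += ways
        counts = new
    return counts

dist_49 = dice_sum_counts(4, 9)        # indices 0..36, only 9..36 nonzero
dist_66 = dice_sum_counts(6, 6)
print(sum(dist_49), sum(dist_66))      # 262144 46656
```

The totals sum to 4^9 and 6^6, matching the normalization used in the probability computation above.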
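The factorial bookkeeping inside configurations above is the multinomial coefficient n!/∏ k_i!, counting the distinct orderings of a multiset. In isolation (names are mine):

```python
from math import factorial
from collections import Counter

def arrangements(values):
    """Distinct orderings of a multiset: n! divided by k! for each
    group of k repeated values -- the multiplicity used above."""
    n = factorial(len(values))
    for repeats in Counter(values).values():
        n //= factorial(repeats)
    return n

print(arrangements([7] * 10))      # ten identical top dice: 1 ordering
print(arrangements([1, 2, 2, 3]))  # 4!/2! = 12
```

This is exactly the quantity by which each histogram of top-ten dice gets weighted.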
https://grocid.net/2016/06/30/problem-of-dice/
Project Package Organization

Package by layer or package by feature? Let's examine their pros and cons to determine which approach you should take in your Java projects.

There are a few major options to consider and picking the right one might not be an obvious choice. This article should give you an overview of commonly selected strategies.

Why Do We Use Packages?

From the language perspective, packages in Java provide two important features, which are utilized by the compiler. The most apparent one is the namespace definition for classes. Several classes with exactly the same name can be used in a single project as long as they belong to different packages, which distinguish one class from another. If you can't imagine what the language would look like if there were no packages, just have a look at the modularization strategies in the JavaScript world. Before ES2015 there were no official standards, and naming collisions weren't a rare case. The second thing is that packages allow defining access modifiers for particular members of a project. Accessibility of a class, an interface, or one of their members like fields and methods can be limited or completely prohibited for members of different packages. Both features are, first of all, used by the compiler to enforce language rules. For clean coders and software crafters, the primary property of a package is the possibility to have a meaningful name that describes its purpose and the reason for existence. For compilers, it's just a random string of chars, while for us, it's another way to express our intention.

What Is a Package?

In the official Java tutorial, we can find the definition, which begins like this:

A package is a namespace that organizes a set of related classes and interfaces. Conceptually you can think of packages as being similar to different folders on your computer.
You might keep HTML pages in one folder, images in another, and scripts or applications in yet another. (…)

The first sentence emphasizes the organizational purpose of packages. The definition doesn't explain, though, what kind of a relationship classes and interfaces should have to consider them as a single group. The question is open to all software developers. Recently, Kent Beck wrote a general piece of advice that also applies to the topic discussed in this post:

If you're ever stuck wondering what to clean up, move similar elements closer together and move different elements further apart.

Yet, just like the word "related" in the aforementioned package definition, the word "similar" can have a completely different meaning to different people. In the remaining part of the article, we will consider possible options in the context of package organization.

Package by Layer

Probably the most commonly recognized similarity between project classes is their responsibility. The approach that uses this property for organization is known as package by layer or horizontal slice and, in practice, looks more or less like in the picture below. If you didn't have a chance to work on a project with such a structure, you may have come across it in some framework tutorials. For instance, Play Framework recommended such an approach up to version 2.2. The tutorial of Angular.js initially suggested keeping things together based on their responsibilities. The fact that they changed their opinion on the subject and updated the tutorials should make you wonder what the reason was. But before we judge the solution, let's look at its strengths and weaknesses.

Pros

Considering layered architecture as the most widely used, it shouldn't be surprising that developers aim to reflect the chosen architecture in the package structure.
The long prevalence of the approach influences the decision to apply the structure in new projects because it's easier for a team's newcomers to adapt to an environment that is familiar to them. Finding the right place for a new class in such an application is actually a no-brainer operation. The structure is created at the beginning of development and kept untouched during the entire project's existence. The simplicity allows keeping the project in order, even by less-experienced developers, as the structure is easily understandable.

Cons

Some people say having all model classes in one place makes them easier to reuse because they can simply be copied with the whole package to another project. But is it really the case? The shape of a particular domain model is usually valid only in a bounded context within a project. For instance, the product class will have different properties in a shopping application than in an application that manages orders from manufacturers. And even if you would like to use the same classes, it would be reasonable to extract them to a separate JAR marked as a dependency in each application. Duplication of code leads to bugs, even if it exists in separate-but-related projects, hence we should avoid copy-pasting. A major disadvantage of the package by layer approach is overuse of the public access modifier. Modern IDEs create classes and methods with the public modifier by default without forcing a developer to consider a better-fitting option. In fact, in the layered package organization, there is no other choice. Exposing a repository just to a single service class requires the repository to be public. As a side effect, the repository is accessible to all other classes in the project, even from layers that shouldn't directly communicate with it. Such an approach encourages creating unmaintainable spaghetti code and results in high coupling between packages.
While jumping between connected classes in IDEs is nowadays rather simple no matter where they are located, adding a new set of classes for a new feature requires more attention. It is also harder to assess the complexity of a feature just by looking at code, as classes are spread across multiple directories. At the beginning, we said that the name of a package should provide additional details about its content. In the package by layer approach, all packages describe the architecture of the solution, but, separately, they don't give any useful information. Actually, in many cases, they duplicate information present in the class names of their members.

Package by Feature

On the other side of the coin, you can structure your classes around features or domain models. You might have heard of this approach as the vertical slice organization. If you work only with the horizontal slice, at first glance, it might look a little bit messy. But in the end, it's just a question of mindset. The following picture represents the same classes as in the previous paragraph, but with a different package layout. You probably don't keep all your left shoes in one place and all right in another just because they fit the same feet. You keep your shoes in pairs because that's the way you use them. By the same token, you can look at the classes in your project. The core idea of the vertical slice is to place all classes that build a particular feature in a single package. By following this rule, in return, you will receive some benefits but also face some negative consequences.

Pros

When all feature classes are in a single package, the public access modifier is much more expressive, as it allows describing what part of a feature should be accessible by other parts of the application. Within a package, you should favor usage of the package-private modifier to improve modularization. It's a good idea to modify default templates in your IDE to avoid creating public classes and methods.
Making something public should be a conscious decision. Fewer connections between classes from different packages will lead to a cleaner and more maintainable code base. In the horizontal slice, packages have the same set of names in each project while, in the vertical slice approach, packages have much more meaningful names describing their functional purpose. Just by looking at the project structure, you can probably guess what users can do with the application. The approach also expresses hierarchical connections between features. Aggregate roots of the domain can be easily identified, as they exist at the lowest level of the package tree. The package structure documents the application. Grouping classes based on features results in smaller and easier-to-navigate packages. In the horizontal approach, each new feature increases the total number of classes in layer packages and makes them harder to browse. Finding interesting elements in the long list of classes becomes an inefficient activity. By contrast, a package focused on a feature grows only if that feature is extended. A new feature receives its own package in an appropriate node of the tree. It's also worth mentioning the flexibility of the vertical slice packaging. With the growing popularity of the microservice architecture, having a monolith application that is already sliced by features is definitely much easier to convert into separate services than a project that organizes classes by layers. Adopting the package by feature approach prepares your application for scalable growth.

Cons

Along with the development of the project, the structure of packages requires more care. It's important to understand that the package tree evolves over time as the application gets more complex. From time to time, you will have to stop for a while and consider moving a package to a different node or to split it into smaller ones. The clarity of the structure doesn't come for free.
The team is responsible for keeping it in good shape, in alignment with knowledge about the domain. Understanding the domain is the key element of a clean project structure. Choosing the right place for a new feature may be problematic, especially for team newcomers, as it requires knowledge about the business behind your application. Some people may consider this an advantage, as the approach encourages sharing knowledge among team members. The introduction of a new developer to the project is slightly more time consuming, yet it might be seen as an investment.

Mixed Approach

You may think that no extremity is good. Can't we just take what is best from both approaches and create a new quality that is intermediate between the two extremes? There are two possible combinations. Either the first level of packages is divided by layer and features are their children, or features build the top level and layers are their sub-nodes. The first option is something that you might have encountered, as it is a common solution for packages that grow to large sizes. It increases the clarity of the structure, but unfortunately, all disadvantages of the package by layer approach apply just like before. The greatest problem is that the public modifier must still be used almost everywhere in order to connect your classes and interfaces. With the other option, we should ask the question of whether the solution really makes much sense. The package by feature approach supports organization of layers, but it does it on the class level and not using packages. By introducing additional levels of packages, we lose the ability to leverage default access modifiers. We also don't gain much simplicity in the structure. If a feature package grows to an unmanageable size, it's probably better to extract a sub-feature.

Summary

Picking the package structure is one of the first choices you have to make when starting a new project.
The decision has an impact on the future maintainability of the whole solution. Although, in theory, you can change the approach at any point in time, usually the overall cost of such a shift simply prevents it from happening. That is why it's particularly important to spend a few minutes with your whole team at the beginning and compare possible options. Once you make a choice, all you have to do is to make sure that every developer follows the same strategy. It might be harder for the package by feature approach, especially if done for the first time, but the list of benefits is definitely worth the effort. If you have any experience with the vertical slice and would like to add your two cents to the topic, don't hesitate to share your thoughts in the comments. Also, please consider sharing the post with your colleagues. It would be great to read the outcome of your discussions and feelings about the approach. Published at DZone with permission of Daniel Olszewski. See the original article here. Opinions expressed by DZone contributors are their own.
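The package-private collaboration described in the Pros section above can be sketched in a few lines of Java. All names here are mine; in a real project OrderService would live in a feature package such as com.example.order and be declared public, while OrderRepository keeps the default (package-private) modifier so that no other package can touch it directly. The public keyword is omitted below only so the sketch compiles regardless of file name:

```java
// A minimal sketch of the package-by-feature access pattern (names are mine).
class OrderRepository {            // default modifier = package-private
    String ownerOf(int orderId) {
        return "owner-" + orderId;
    }
}

class OrderService {               // would be the package's only public type
    private final OrderRepository repo = new OrderRepository();

    String describe(int orderId) {
        // Only code inside the same package can reach the repository.
        return "order " + orderId + " belongs to " + repo.ownerOf(orderId);
    }

    public static void main(String[] args) {
        System.out.println(new OrderService().describe(7));
    }
}
```

Making the repository public, as the package-by-layer layout forces, would expose it to every class in the project instead of just this feature.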
https://dzone.com/articles/project-package-organization
Digital Devices Deprive Brain of Needed Downtime 222 siliconbits writes with an excerpt from NY Times: ." oh rly? (Score:5, Funny) Why do you think I run Windows? ::rimshot:: Re: (Score:2, Funny) Let me check my brain's uptime ... 36 hours, needs a reboot. Re: (Score:2) Because you are idling 99% of your time? I take several short naps a day (Score:5, Interesting) Re:I take several short naps a day (Score:5, Insightful) If only most of us could do that, rather than having shitty pointy-haired micromanager bosses who insist on minute-by-minute "productivity" scales. The day the 'worker productivity index' was invented was the day society started going to hell. Re:I take several short naps a day (Score:5, Funny) I once heard a tale of someone who, when faced with a boss who demanded updates every 15 minutes on what he was doing, wrote a script which strung together meaningless management buzzwords in a vaguely sensible format and emailed them to his boss every 15 minutes. A few weeks later he gets an award for being a team player and keeping his boss in the loop. It's not like the boss ever reads them after the first day. Re: (Score:2) You do know that at least 20% of the folks on /. do this with our daily and weekly reports to the boss anyway, right? Don't give away ALL our secrets! Re: (Score:3, Funny) That's because his boss also had a script, which tested the updates to see if they included meaningless management buzzwords in a vaguely sensible format.
Source code (Score:5, Insightful)

#include <time.h>
#include <stdlib.h>
#include <stdio.h>

#define kase(tipo,stmt) case(tipo):{stmt;break;}

char *a[10] = {
    "in particular", "on the other hand", "however", "similarly",
    "in this regard", "as a resultant implication",
    "based on integral subsystem considerations", "for example",
    "thus", "in respect to specific goals"},
*b[10] = {
    "a large portion of the interface coordinated communication",
    "a constant flow of effective information",
    "the characterization of specific criteria",
    "initiation of critical subsystem development",
    "the fully integrated test program",
    "the product configuration baseline",
    "any associated supporting element",
    "the incorporation of additional mission constraints",
    "the independent functional principle",
    "a primary interrelationship between system and/or subsystem technologies"},
*c[10] = {
    "must utilize and be functionally interwoven with",
    "maximizes the probability of project success and minimizes the cost and time required for",
    "adds explicit performance limits to",
    "necessitates that urgent consideration be applied to",
    "requires considerable systems analysis and trade off studies to arrive at",
    "is further compounded when taking into account",
    "presents extremely interesting challenges to",
    "recognizes the importance of other systems and the necessity for",
    "effects a significant implementation of",
    "adds overriding performance constraints to"},
*d[10] = { /* orders: abcd, dacb, bacd, adcb */
    "the sophisticated hardware",
    "the anticipated next generation equipment",
    "the subsystem compatibility testing",
    "the structural design based on system engineering concepts",
    "the preliminary qualification limits",
    "the evolution of specification over a given time period",
    "the philosophy of commonality and standardization",
    "the top-down development method",
    "any discrete configuration mode",
    "the total system rationale"};

int main(void)
{
    int n, order, w, x, y, z;
    srand(time(NULL));
    for (n = 0; n < 1000;
n++) {
        if (!(n % 10))
            printf("\n");
        w = rand() % 10;
        x = rand() % 10;
        y = rand() % 10;
        z = rand() % 10;
        order = rand() % 4;
        switch (order) {
        case 0:
            printf(" %c%s, %s %s %s.", a[w][0] & 0xDF, a[w] + 1, b[x], c[y], d[z]);
            break;
        case 1:
            printf(" %c%s, %s, %s %s.", d[w][0] & 0xDF, d[w] + 1, a[x], c[y], b[z]);
            break;
        case 2:
            printf(" %c%s, %s, %s %s.", b[w][0] & 0xDF, b[w] + 1, a[x], c[y], d[z]);
            break;
        /* The posted comment is cut off here; case 3 (order adcb) and the
           closing braces below are a plausible completion following the
           pattern given in the comment on d[]. */
        case 3:
            printf(" %c%s, %s, %s %s.", a[w][0] & 0xDF, a[w] + 1, d[x], c[y], b[z]);
            break;
        }
    }
    printf("\n");
    return 0;
}

Re: (Score:3, Interesting) Re: (Score:2) Yeah, I generally just ignore those weekly/daily/whatever status report calendar events. If they want to see what I'm doing, they can look at how empty my sprint story list is getting. I'm just utterly uninterested in wasting more time telling people what I'm doing when I already do in a daily standup meeting once per day. I can't stand management types that get all uptight about junk like this. Re:I take several short naps a day (Score:5, Insightful) Perhaps the idea of the "siesta" was right! Re: (Score:2) It was 100% right. If I had a 2-3 hour time period to get a nap or something in the middle of the day I'd get twice as much done in my afternoons. I currently get about half as much done in the afternoon as I get done in the morning. Leading to a trend for me of coming in early to get work done rather than staying late. Re: (Score:2) I suggested this at a previous place; they said it was fine as long as I made up the time at the end of the day. ;) Re: (Score:2) Unfortunately my timetable revolves around everyone else's. I can't take time off in the middle of the day. If I could do that I definitely would! Re: (Score:2) Re: (Score:2) I solve a lot of work & other problems when I'm driving on the way to or from work. I spend about 2 hrs in the car per day... It's amazing when your brain isn't "busy," how many solutions just "spontaneously" come to you. Re:I take several short naps a day (Score:4, Funny) Instant distractions (Score:2) This is the very reason I don't have a cell phone* and haven't used an instant messenger in years.
It is also the same reason that I only check personal email at most once a day (They call it mail for a reason). If I'm at home or the office then the land line works very well - if I'm not there then I'm busy anyway. *People ask how can you manage that - I tell them it's a little secret called forethought or planning. Re: (Score:2, Funny) Re:Instant distractions (Score:5, Funny) "I thought cell phones were only useful for buying drugs." There's an app for that. Re: (Score:2) That doesn't sound strange at all. Most people don't need cellphones. In my whole life I've owned exactly two: An old analog phone from circa 1999 which cost me $10/month. When the battery stopped working, I upgraded to a Virgin Mobile Nokia phone at $0.00/month and 18 cents per minute or per text. I make sure not to give the number to anybody (except close friends/family), so they cannot disturb me and disrupt my calm. Re:Instant distractions (Score:5, Funny) *People ask how can you manage that - I tell them it's a little secret called forethought or planning. I usually tell them it's a little secret called "no friends". Re: (Score:2, Insightful) Nah - it's real friends. They care enough to be reliable, know the contingencies, and not be offended if something crazy happens. Re: (Score:2, Funny) Re: (Score:2) I've got a cell phone but I only give the number to people I actually want to hear from. All the pros, none of the cons. Re:Instant distractions (Score:4, Insightful) You know, cell phones have a very useful functionality: You can switch them off. The advantage of a switched-off cell phone vs. no cell phone is that you can quickly get a working cell phone in case you need one: Just switch it on. Moreover, you get great times between battery recharges this way. Re:Instant distractions (Score:4, Insightful) The problem is, I loathe telephones. Typically, when the phone rings, it's because someone expects me to drop whatever I'm doing RIGHT NOW and attend to whatever it is they need.
Worse, when I'm talking to people on the telephone, they tend to feel slighted if I don't give them my full and undivided attention. So if I'm at work trying to, you know, work, and my phone rings, the expectation is that I will immediately cease work to chat/be a chimney while they vent/solve the world's problems/whatever. Maybe I'm just a curmudgeon, but I find that rather irritating. I much prefer text messages or e-mail, since I can look at it and get back to you when I actually have the CPU cycles to devote to whatever it is you need. Re: (Score:2) >>>Switched off cell phones still cost money. Not if you have a plan with Virgin Mobile (like I do) which only charges you when you make a call (18 cents per minute). And while you're correct restaurants and gas stations do let you use their wired phone, sometimes your car will break down miles away from such services. I'd rather make the call to AAA from the cool of my car rather than walk-around in the hot summer sun. Re: (Score:2) I generally agree with your sentiment of planning ahead, and often leave my phone behind if I've already pre-arranged plans with people, but how are phones any better than IM or email in terms of distraction? You can't really defer a phone call without then getting into voice mail territory, which is way more annoying (and time consuming) than just reading an email. And a proper phone conversation requires input from two people simultaneously, rather than one person being able to go and do some work while th Re: (Score:2) How nice for you that you've found it comfortable to get by without a cell phone. It's too bad you feel the need to condescend to those who find cell phones useful. (Actually, it suggests you're probably compensating for the fact that you really aren't as happy with your choice as you'd like others to believe; but I digress.) I plan ahead, and then I carry a cell phone in case reality interferes with my plans. 
This also allows me to quickly change my plans if an opportunity arises. But then, some people do Re: (Score:2, Insightful) I never mentioned anything about people who generally use cell phones. I'm sorry if it was taken otherwise. My "flame bait" footnote is actually only directed toward the subset of people who find it absolutely inconceivable that anyone could successfully manage one's life without a cell phone. I've been attacked by that type of person as if I had suggested something absurd such as not immunizing children. It was not my attempt (or in my text) to disparage the usefulness of cell phones. I had one for a Re: (Score:2) No, someone says "I don't need a cell phone because I plan ahead", implying that anyone who uses a cell phone does not plan as well as he, and I call that condescending. Nice try, though. Re: (Score:2) I remember a time when I lived kind of like you try to. It was called the 1980s - land lines only, had to find a pay phone if not at home. No email, Facebook, Google Calendar, Instant Mes Re: (Score:2) I don't have a cell phone either. It's not called a "cell" because it's short for "cellular"... I've got a packed social schedule, two kids, and I do on-site inspections fairly often at work. (I'm an EE.) They just aren't necessary tools. Re: (Score:2) Communication is incredibly important to Humanity, and the more we have the more informed the common man is (huzzah). Having a cellphone doesn't mean you have to play Faceville on it the moment your day becomes idle. Re: (Score:2) 5. Profit! Re: (Score:2) In denial eh? tl;dr (Score:5, Funny) NPR had a long thing on this the other day. Supposedly it kills our attention span. Or something, tl;dl. Re: (Score:2) Wow (Score:5, Funny) Re: (Score:2) Re: (Score:3, Funny) We're not capable of being creative enough to think of original jokes. Re: (Score:2) We're not capable of being creative enough to think of original jokes. What? I thought Commander Taco was an original joke! 
HAAAAAHAHAAAHAAHARROFLLALALAOLOLOL!!11!!eleventy!!! Re: (Score:3, Funny) Why spend all that time and energy creating new jokes when recycled jokes are so much more efficient? Think green, dammit! People often overlook the horrible environmental effects of joke pollution. Re-using old jokes instead of letting them just litter our society could reduce that significantly, and also save many old comedians from complete extinction. Won't someone please think of the old comedians?! I re-use old jokes all the time. Just ask my wife. She'll tell you all about it. At length, apparently. I'd be fine (Score:2, Funny) Re: (Score:2) I remember that. He'd still be there if I hadn't gotten off the board. Going for a run or a ride... (Score:5, Interesting) I really value my exercise time for this 'down time.' I can't stand running with headphones because I can't get lost in the moment. Going out for a nice long run (or a walk) on Sunday morning when you have a problem to mull over is just about the greatest way to find some insight and a new angle on it. I've composed term papers and had some wonderful insights into my life and relationships while on runs. As I get older, I also find that I need to turn off more and more distractions if I really want to get anything serious done. I close the web browser, turn off the IM and silence the phone (I'd turn it off, but it takes so freaking long to reboot, it's obnoxious). I remember a time in my youth that I'd have 12 things going on at once, watching TV, playing video games and maybe even music running somewhere. I think I was being productive, but looking back, I question that. Perhaps my abilities to 'multi-task' have diminished as I've aged, but I think that I've just become more adept at recognizing shoddy work. What about you all? Have you found that as you get older, you need more quiet time to think than you did when you were younger? Do little distractions like email and IMs really cut into your productivity?
Re: (Score:2) I can't stand running with headphones because I can't get lost in the moment. Plus, then you can hear cars and cyclists coming. I can't tell you how many times I've had to skirt around an idiot running or cycling on a dual-use path with their music jacked high enough they can't hear my bell... Re: (Score:2) I've had exactly the same experience. Used to work with music or TV shows running, now I can't concentrate with the slightest bit of noise. Re: (Score:2) I really value my exercise time for this 'down time.' I can't stand running with headphones because I can't get lost in the moment. I listen to my iPod while lifting weights & also watch headline news on TV doing the elliptical. I find it helps me to keep up my pace and also makes the time go by faster, and sometimes power through the pain. As I get older, I also find that I need to turn off more and more distractions if I really want to get anything serious done. I'm totally with you on that one. These days during my lunch, I work on improving my computer programming skills. I go some place isolated, having only the laptop. I leave my phone at my desk, I don't surf the web or check my e-mail. I learn more during that hour than I usually do with 8 hours of Re:Going for a run or a ride... (Score:4, Insightful) Have you found that as you get older, you need more quiet time to think than you did when you were younger? Do little distractions like email and IMs really cut into your productivity? I'm 24 now. As I've grown out of my college years I've noticed this to be true. I can turn out more stuff (poetry, blog updates, electronic gizmos, whatever I'm working on) if I keep the instant messengers closed. I also like to have my door closed because my roommate has a bad habit of popping into my room to show me "the funniest thing ever" on Youtube which is usually a 10 second clip of someone injuring themselves. I don't really have the problem with music though.
However, I do make a point to tune my internet radio station to a type of music that would make an appropriate soundtrack for whatever I am working on (for instance, if I am writing up a short story about a swordfight, the music would be some kind of kick-ass symphonic metal, or something similar). I do notice, however, that as I get older I have more of a tendency to turn on music and just stare at a wall while sipping a nice glass of whiskey. I used to always just think of music as appropriate background noise. These days I treat it almost like T.V., where I want to take the time to get lost in it. Re: (Score:3, Interesting) I sit out on my balcony a couple of nights a week with a fine single malt and a fine cigar and just watch the world go past. When I was telling one of my friends he was amazed that I could sit for so long without doing anything. I can't understand how he's so constantly doing things. Re:Going for a run or a ride... (Score:4, Interesting) When I was telling one of my friends he was amazed that I could sit for so long without doing anything. I can't understand how he's so constantly doing things. He might be distracting himself from the reality of his own thoughts. If you tend to have an overly self-critical personality, or if you are generally unhappy about your present life situation, then sitting and doing nothing can afford you the opportunity to face the unpleasant thoughts that can come with such territory. Similarly, if your friend feels lonely, sitting around alone would afford him the opportunity to ponder his situation, which he may not want to do. I know I've had periods in my life where I had to keep myself distracted in order to avoid facing the pains that come along with heartbreak, a loss of a friend, etc. Watching the world go by, as you describe, tends to let reality settle in on one's self-awareness. That can be a hard thing to cope with. 
Alternatively, your friend might just be the kind of person that values action above thought. There's nothing wrong with that, and I would wager that constantly doing things helps to fulfill your friend in ways nobody but himself understands. Ah well, to each their own. Re: (Score:2) I'm the opposite...I find that I really LOVE tunes when exercising, both with weights and aerobic (mostly walk/running). I find that it distracts me from the 'pain' aspect, and especially the boring monotonous part of walking/running. I love slinging weights, but do not enjoy the aerobic stuff, but to me...it is a very necessary evil. Especially in my past...when doing thin Re: (Score:2) Instead, I get distracted by youtube/short videogames. On the other hand, once I get ramped up and there's some amount of white noise (music, tv, etc) in the background, it's hard to get distracted by anything until whatever I'm working on is done. Re: (Score:2) There was a study a while back (discussed here, too) which determined that kids NEED downtime to assimilate what they've learned -- so that "time wasted" digging in the dirt, watching ants, gazing at clouds, and generally doing nothing useful, is actually the most important part of a kid's day in terms of how well that child will assimilate what he's learned in school. I doubt it's really all that different for adults. We used to have our downtime in fairly mindless pursuits -- whether that was weeding or ru Re:Going for a run or a ride... (Score:4, Funny) Re: (Score:2) Re: (Score:2) No, it would seem just the opposite. Take time off to pass no judgments until you have been able to think things through by not distracting essential components of your "subconscious" brain by your "conscious" brain. Re: (Score:2) Thats why its called the SCHOOL of hard knocks. 
More than that (Score:5, Interesting) I read the article in the New York Times yesterday, but I've been thinking about this a lot lately in general, and I've come across some pretty interesting stuff. For instance, it's pretty obvious that computers give off a lot of blue light. Apparently someone decided that blue LEDs meant high tech and so devices get fitted with them all over the place. Blue light in particular is linked to suppression of melatonin (source: [nih.gov]). Particularly low levels of melatonin have been observed in patients with various degrees of ASD, including slashdot's favourite asperger's (source: [nih.gov]). So, my contention is that the "rise in autism" that seems to be so prevalent these days is probably a result of children basically being deprived of proper darkness, being surrounded by light from computers, tv, video games, etc. I've started taking melatonin supplements; since I got back into IT work about two years ago and started spending much more time on computers, I've been sleeping a lot less and feeling generally less sociable. My memory has gotten shot, etc. Could just be that I'm getting older, but I'm only 26... I'm not that old. When I get a break away from computers, take some time out to sleep, and get outside in the woods then I can generally shake the effects off in a day or so, but when I was a kid the world wasn't nearly as surrounded by computer technology in all its myriad of forms as it is today, where kids are basically handed a DS right out of the womb. I didn't see a gameboy until I was about 7 or 8, and it had a monochrome screen with no backlight. And no, I don't mean a break from work. I mean a break from computers. It's not just being at work -- when I'm at work, it's light outside anyway. I mean no laptop, no fancy phone, no nothing. Go away for a few days and leave that stuff behind, because if I'm just at home on the weekend and spend a lot of time plugged up, then I don't feel any better for not having been at work.
The way kids are today, with all their gadgets and gizmos, can't possibly be any better for their brains than it is for their bodies, not playing outside nearly as much as they used to. Stories like this match up pretty well with my own anecdotal evidence, not that it means much, but when I find NIH studies that seem to point to much more extreme versions of what I've seen, even in myself. Like I said, the effects on an adult are likely to be temporary, but our brains had time to mature before being mushed up. Re: (Score:2, Insightful) Maybe that is why my work gave me this nice laptop with all the blue LEDs on the touch bar... On a more serious note though, I do have to agree with you. I spent a week on the beach in OBX with the family, didn't take my laptop, had my phone with me but left it in the house we rented, just kicked back and listened to the ocean with a beer in my hand. I felt a million times better after that, so I definitely agree that it's a good idea to just get away from technology completely every so often. Sometimes even Re: (Score:2) Well, first, the Autism curve started pitching up in the early 80s, and blue LEDs weren't invented until the late 90s. Second, Autism presents symptoms in infancy, before the typical child has been hypnotized by Xbox. Third, simple social ineptitude due to inexperience is not the same thing as Autism. Social retardation can be repaired fairly easily. Autism is notoriously hard to work against. Re: (Score:2) When the sky is blue, it's because it's day time. When it's not, it's night time. Know what people are supposed to do at night? Sleep. Melatonin helps you sleep, but its production is inhibited by blue light, like when it's day time. Artificial blue light tricks your brain into thinking it's still day time, so you don't produce sufficient quantities of melatonin at the right point in the day to enter into a natural sleep cycle.
The effects of sleep deprivation are pretty rough, otherwise it wouldn't be used Re: (Score:2) Yeah, I guess I just invented the whole RGB thing and the fact that blue light is mixed in with the output in CRTs and LCDs. This isn't a crackpot theory: not once did I mention how George Bush was involved. The problem as I see it, is the constantly being surrounded by light from various sources. Computers and other electronic devices just happen to be the most prevalent of those which encourage you to sit very close to them and stare directly at their light source (the display). Re: (Score:2) If I had blamed my problems on WiFi, I wouldn't have been able to find research from the NIH to back me up. How am I anti-science? We sent men to the moon using slide rules; saying that I think that our current culture, influenced by a level of technology we haven't had in the past, where we are now constantly bombarded by sensory input, put off of natural sleep cycles, and generally messed with is likely to have unforseen consequences. A lot of the strange syndromes and whatnot that people are reporting Re: (Score:2) Well, I'll give you the benefit of the doubt on being a researcher in the field and you probably do know more than me about it. I work in infosec, not medicine or research. I'm just saying that, for me, taking melatonin supplements and trying to stay away from staring at light sources such as computers, helps me sleep, keeps me more focused and lets me actually have fun once in a while, as opposed to being an exasperated crazy person like I am when I don't take the supplements. It really wouldn't surpris Um no... not really.... (Score:4, Interesting) It is not like before "digital devices" people would sit around doing nothing for "downtime".. Before pocket toys that look for our attention people had a list of tasks they had to do. Instead of wasting time sitting there playing plants-vs-zombies they read a book or talked. 
My downtime is usually under a car or elbow deep in a motorcycle doing high level brain activity compared to what any digital device causes. This is all bull-cockey. If anything the digital devices are making people stupid because they dont have to actually work for or retain any knowlege.. they certianly are not causing us to lose downtime, as humans by nature dont do brain downtime. Hell when we sleep we dont even have brain downtime. Re: (Score:2) I'm not so sure. I know someone who isn't so big on technology and doesn't need it in his life (sometimes I admire him the simplicity that affords him). Apparently, he's perfectly content to just sit quietly on his sofa for periods of time. No music, no TV, not even sure he's having any "inner dialog" -- I think he literally is content to just sit. I've been known to sit on a rock for an hour or two, but that was usu Re: (Score:2) They're not talking about the couple hours at the end of the day where people do their hobbies and relax, they're talking about the minute here, minute there kind of downtime throughout the day. They're talking about leaning back in your chair and stretching out for a few minutes, waiting to hear back on a question you asked your co-worker, or just sitting on the damn toilet (we all know people who can't help but get out their phones while they're taking a crap). Re: (Score:2) "as humans by nature dont do brain downtime." Funny you should say that, which leads to the obvious question, why is sleep a universal human behavior? According to you, humans don't sleep, but even limited observation suggests otherwise. Re: (Score:2) According to you, humans don't sleep, but even limited observation suggests otherwise. after he says: Hell when we sleep we dont even have brain downtime. It's not like he wrote pages of stuff for you to sift through to get to that part. 
Re: (Score:2) Re: (Score:2) If anything the digital devices are making people stupid because they dont have to actually work for or retain any knowlege (sic) Remembering things does not mean you are smart, or even non-stupid. Memorization does not imply adept thinking skills and, IMO (no science done here), I think that the way we're moving will make us much smarter in general, just in a different way than we're used to. Perhaps offloading some forms of memory to computers is allowing us to concentrate on actual thinking. Maybe our education will eventually evolve and our kids wouldn't have to waste 14 years of their life (K-12) memorizing and regurgitating gove Eh (Score:5, Informative) I heard an interview with the guy who wrote that book on NPR yesterday. Practically every sentence he spoke contained a "Maybe" or a "We don't know for sure" or an "It's possible that..." His entire interview was preceded by him saying this is all theories and may not be correct at all and that there's actually no scientific proof of any of this. So, grain of salt. Re: (Score:2) No, he was a guy who talked to a few scientists and wrote a book about it. I'm not saying he wasn't doing a good job, I'm saying the headline might be a biiit alarmist. Re: (Score:2) I'm not saying not to pay attention to the guy, I'm saying not to pay that much attention to the alarmist nature of the headline. Time spent in the bathroom? (Score:5, Funny) Re: (Score:2) I think you all know why. Angry Birds? Actually I've started pocketing my DS at work to get in a little DQ9 during my bathroom breaks. Re: (Score:2) There's a meme for that [google.com]. Note: most of those are already on T-shirts. WTF does it have to do with digital? (Score:2) Any device -- no, any activity -- that continuously takes up your attention is going to have the exact same effect. 
It's not like the brain subconsciously detects, "Hey, these inputs have discrete steps which I'm able to perceive thanks to my gold-plated Monster cables," and then the person goes nuts. Quit saying "digital device" when you mean "any thing", quit saying iPhone when you mean any mobile computer, quit saying "digital music" when you mean any music that is downloaded instead of distributed on r Re: (Score:2) Plowing fields by hand or riveting buildings could be seen as brain downtime, and have largely been lost activities since the trend in technology towards requiring us to use constant thinking and processing in normal activities. sound bites (Score:2) Gadgets force us to communicate in sound bites. We dig the new shiny. Our attentions no longer span, but spin. Subtle phrasing replaced by clever phrasing replaced by catch phrases. "Think" is a four-letter word. Four letter words are old school. Grammar mocked as elitist. Push2Talk is DoubleSpeak. Allusions wander, lost. News at 11. related article about rafting trip (Score:3, Informative) I notice the same. I think about work the first day of a backcountry trip or vacation. But then stop thinking about work by the second day. I agree! (Score:2) When I have a really vexing programming problem, I often think of a real creative way to solve it in the moments in bed waiting to fall asleep. The ideas do not occur while I am asleep but when I am fully awake waiting to fall asleep. I am quite sure that the time when nothing is happening is very important to the creative process. Other people might be different but I find this is true for me. Re: (Score:2) It could be worse. I've written most of my best fiction while shovelling the daily dog shit out of the kennel. A benefit of having an everyday mindless activity that lets my brain wander off to wherever it pleases, with no restraints. wetware downtime (Score:2) Re: (Score:2) Yes. Now it's called commuting. This is why TV is our first, best friend. 
(Score:2) Television gives us so much and asks so little in return. Why must you be so tempted by hours of web surfing? Just turn off you brain and give TV your whole day. There's probably a Deadliest Catch marathon you could be watching. Re: (Score:2) I have a computer set up in front of the TV. I am often watching TV and online at the same time, and sometimes dealing with incoming data on my phone as well. When I really want to zone out I lie back and fire up a few episodes of How It's Made. BTW, I got tired of Deadliest Catch after about half a marathon. But I could watch Dirty Jobs 24/7/365 and not even ask for a raise. Only if you let them (Score:2) Re:I can daydream listening to mp3s (Score:5, Informative) Re: (Score:2) Re: (Score:2) You can also do this kind of thing while driving. So much so that you can often forget the details of how you got somewhere. Re: (Score:3, Funny) Re: (Score:2, Funny) So....that would be IKEA? :) I never trusted those swedish bastards! Curse them and their delicious meatballs! Re:Please (Score:4, Informative) Time to change your sig again ;) Re: (Score:2) Re: (Score:2) No, I don't think it's a "large fraction" of the population, I don't think their belief is particularly misguided, and I CERTAINLY don't believe they're sponsoring biased research.
11-08-2019 08:34 AM - edited 11-08-2019 08:36 AM Hello, I’m currently attempting to communicate with a sensor using SPI on the Zynq UltraScale+ MPSoC ZCU104 Evaluation Kit, and have been unable to read in data from the MISO line (although I can see that the sensor is outputting data). I used Vivado to assign pins on the PL PMOD 0 header to the SPI interface. On the SDK side, I’m using the XSpiPs driver to facilitate SPI communication. The communication command I’m using is XSpiPs_PolledTransfer. Using a logic analyzer, I can see that the communication works properly. The chip select, clock, and MOSI pins are activated on the Xilinx side, and the sensor responds reliably on the MISO pin. The only problem is that once the sensor responds on MISO, XSpiPs_PolledTransfer fills the readBuffer with the exact bytes that were written out on the writeBuffer, instead of the bytes that were sent on MISO. This behavior is similar to a loopback mode, but I specifically do not activate loopback mode in the XSpiPs_SetOptions call. Below, I’ve attached the code I’m running and also block design/implementation. Any assistance would be greatly appreciated! 
#include "xparameters.h"
#include "xplatform_info.h"
#include "xspips.h"
#include "xscugic.h"
#include "xil_exception.h"
#include "xscugic_hw.h"
#include "xil_printf.h"
#include <stdlib.h>

void delay(int ms)
{
    int c = 0, scaler = 10000;
    while (c < scaler * ms) {
        c++;
    }
}

uint8_t registerRead(XSpiPs *sSPI_Device, uint8_t reg)
{
    reg &= ~0x80u;
    u8 writeBuffer[2] = {reg, 0x00};
    u8 readBuffer[2]  = {0x00, 0x00};

    xil_printf("Beginning registerRead...\n");
    xil_printf("BEFORE: read buffer contains %d and %d\n", readBuffer[0], readBuffer[1]);
    xil_printf("BEFORE: write buffer contains %d and %d\n", writeBuffer[0], writeBuffer[1]);
    xil_printf("Starting transfer...\n");

    XSpiPs_PolledTransfer(sSPI_Device, writeBuffer, readBuffer, 2);
    delay(50);

    xil_printf("AFTER: read buffer contains %d and %d\n", readBuffer[0], readBuffer[1]);
    xil_printf("AFTER: write buffer contains %d and %d\n\n", writeBuffer[0], writeBuffer[1]);

    return readBuffer[1];
}

void registerWrite(XSpiPs *sSPI_Device, uint8_t reg, uint8_t value)
{
    reg |= 0x80u;
    u8 writeBuffer[2] = {reg, value};
    u8 readBuffer[2]  = {0x00, 0x00};

    XSpiPs_PolledTransfer(sSPI_Device, writeBuffer, readBuffer, 2);
    delay(5);
}

void readMotionCount(XSpiPs *sSPI_Device, int16_t *deltaX, int16_t *deltaY)
{
    registerRead(sSPI_Device, 0x02);
    *deltaX = ((int16_t)registerRead(sSPI_Device, 0x04) << 8) | registerRead(sSPI_Device, 0x03);
    *deltaY = ((int16_t)registerRead(sSPI_Device, 0x06) << 8) | registerRead(sSPI_Device, 0x05);
}

int main()
{
    xil_printf("\nBeginning SPI Interface Test...\n");

    int status;
    XSpiPs_Config *pSPI_config;
    XSpiPs sSPI_Device;

    pSPI_config = XSpiPs_LookupConfig(XPAR_PSU_SPI_0_DEVICE_ID);
    if (pSPI_config == NULL) {
        xil_printf("SPI Interface Test 1 FAILED.\n");
        return XST_FAILURE;
    }

    status = XSpiPs_CfgInitialize(&sSPI_Device, pSPI_config, pSPI_config->BaseAddress);
    if (status != XST_SUCCESS) {
        xil_printf("SPI Interface Test 2 FAILED.\n");
        //return XST_FAILURE;
    }

    status = XSpiPs_SelfTest(&sSPI_Device);
    if (status != XST_SUCCESS) {
        xil_printf("SPI Interface Test 3 FAILED.\n");
    }

    XSpiPs_SetClkPrescaler(&sSPI_Device, XSPIPS_CLK_PRESCALE_64);
    XSpiPs_SetOptions(&sSPI_Device, (XSPIPS_MASTER_OPTION | XSPIPS_CLK_PHASE_1_OPTION | XSPIPS_CLK_ACTIVE_LOW_OPTION));
    XSpiPs_Enable(&sSPI_Device);
    XSpiPs_SetSlaveSelect(&sSPI_Device, 0x00);

    registerWrite(&sSPI_Device, 0x3A, 0x5A);
    uint8_t chipId = registerRead(&sSPI_Device, 0x00);
    delay(10);

    xil_printf("SPI Interface set-up succeeded!\n");
    xil_printf("Chip ID: %u\n", chipId);

    int16_t deltaX, deltaY, count = 0;
    XSpiPs_SetDelays(&sSPI_Device, 2, 2, 2, 2);

    while (count < 20) {
        readMotionCount(&sSPI_Device, &deltaX, &deltaY);
        xil_printf("*** COUNT: %d, X: %d, Y: %d ***\n\n", count + 1, deltaX, deltaY);
        count++;
        delay(2000);
    }

    XSpiPs_Disable(&sSPI_Device);
    xil_printf("SPI Interface Test completed.\n");
    return 0;
}

Attachments:
Block design
PS configuration
Implemented design I/O

04-29-2020 03:17 PM
Hello, I just saw your post from 11/8/2019 about an Ultrascale PS SPI issue. I am also seeing a similar issue to yours. With a Chipscope I can see that the data on MISO from the device I am reading is correct, and yet when reading the RX FIFO the data received is wrong, acting as though it is in a loopback mode. I wanted to know: did you resolve the issue? If yes, please kindly tell me what you discovered. Ken

05-03-2020 06:17 AM
Hi, may I know which version you are using? If you have 2019.1 and the ZCU104 evaluation board, please let me know. Regards, Venu

05-03-2020 09:16 AM
Vivado 2019.1, on ZCU106. Can you explain what tool issue causes this problem to occur? Does this problem occur if the SPI is routed through MIO instead? When will this tool issue be fixed?

05-12-2020 04:43 AM
Hi, it is an issue only when routed through EMIO. We have a fix in 2019.2.

08-05-2020 09:11 PM
Hi, what is the fix in 2019.2? I use Vivado 2019.2 and Vitis. There is some data on the MISO signal, but the RX FIFO returns all 0s.

08-06-2020 11:49 PM
Hi, can you please share your BD.
Regards, Venu

01-21-2021 12:30 PM
Hi there, we are having a similar issue with Vivado 2019.2 and the ZCU102 board. Can you please point me to the solution if there happens to be one around? Regards
Hi,

I have ApacheDS 1.5.7 installed. I also have sources. I've found two similar examples on the ApacheDS site for running the server embedded in a servlet. I can't get either one to work...

---

One example is at:

I've been unable to build the example. The first problem was that it uses org.apache.directory.server.core.DefaultDirectoryService, which is not a public class. So I changed the StartStopListener package to org.apache.directory.server.core; one compile problem solved.

The example also uses:
org.apache.directory.server.ldap.LdapServer
org.apache.directory.server.protocol.shared.transport.TcpTransport

I cannot find these classes anywhere; in 1.5.7 they seem to be a part of 1.5.6 only. BUT I cannot build 1.5.6 (which would load the jars to my local repository) because the project parent pom no longer exists; it seems not to be in svn, or in some maven repo somewhere.. btw, the parent pom for the apacheds-archetype-webapp also does not seem to exist... ahg...

---

Another example is at:

I've been able to build this example, however it cannot be started in Tomcat. The example uses javax.naming.directory.InitialDirContext, and Tomcat 5.5.29 explodes with exceptions trying to use this class.

+++

I can send stack traces etc... but first, am I using the correct list? Should these examples work? What release of ApacheDS? How to build?

Thanks,
Roy
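For reference, the embedding classes mentioned above are published as separate Maven artifacts under the org.apache.directory.server group, so an alternative to building the whole source tree is declaring them as dependencies. A hedged sketch of what that might look like for 1.5.7 (the artifactIds and version below are my assumptions; verify them against a Maven repository before relying on them):

```xml
<!-- Assumed coordinates; confirm against a Maven repository before use. -->
<dependency>
  <groupId>org.apache.directory.server</groupId>
  <artifactId>apacheds-core</artifactId>
  <version>1.5.7</version>
</dependency>
<dependency>
  <groupId>org.apache.directory.server</groupId>
  <artifactId>apacheds-protocol-ldap</artifactId>
  <version>1.5.7</version>
</dependency>
<dependency>
  <groupId>org.apache.directory.server</groupId>
  <artifactId>apacheds-protocol-shared</artifactId>
  <version>1.5.7</version>
</dependency>
```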
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.

On Thu, Jul 5, 2018 at 5:01 AM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Thu, Jul 5, 2018 at 4:51 AM, Florian Weimer <fweimer@redhat.com> wrote:
>> I see this:
>>
>> “
>> $ objdump -d --reloc /usr/lib64/crt1.o
>>
>> /usr/lib64/crt1.o: file format elf64-x86-64
>>
>>
>> Disassembly of section .text:
>>
>> 0000000000000000 <_start>:
>>    0: 31 ed          xor %ebp,%ebp
>>    2: 49 89 d1       mov %rdx,%r9
>>    5: 5e             pop %rsi
>>    6: 48 89 e2       mov %rsp,%rdx
>>    9: 48 83 e4 f0    and $0xfffffffffffffff0,%rsp
>> …
>>
>> 0000000000000030 <_dl_relocate_static_pie>:
>>   30: c3             retq
>> ”
>>
>> Isn't _dl_relocate_static_pie useless there? It will not be used in
>> dynamically linked binaries. Should it be included in libc_nonshared.a
>> instead, so that static PIE binaries pick it up, as needed?
>>
>
> This dummy function is provided for arm. Can we find a way to define
> it only for arm?
>
> --
> H.J.

x86 has

/* Return the link-time address of _DYNAMIC.  Conveniently, this is the
   first element of the GOT, a special entry that is never relocated.  */
static inline Elf32_Addr __attribute__ ((unused, const))
elf_machine_dynamic (void)
{
  /* This produces a GOTOFF reloc that resolves to zero at link time, so in
     fact just loads from the GOT register directly.  By doing it without
     an asm we can let the compiler choose any register.  */
  extern const Elf32_Addr _GLOBAL_OFFSET_TABLE_[] attribute_hidden;
  return _GLOBAL_OFFSET_TABLE_[0];
}

/* Return the run-time load address of the shared object.  */
static inline Elf32_Addr __attribute__ ((unused))
elf_machine_load_address (void)
{
  /* Compute the difference between the runtime address of _DYNAMIC as seen
     by a GOTOFF reference, and the link-time address found in the special
     unrelocated first GOT entry.  */
  extern Elf32_Dyn bygotoff[] asm ("_DYNAMIC") attribute_hidden;
  return (Elf32_Addr) &bygotoff - elf_machine_dynamic ();
}

arm has

/* Return the run-time load address of the shared object.  */
static inline Elf32_Addr __attribute__ ((unused))
elf_machine_load_address (void)
{
  Elf32_Addr pcrel_addr;
#ifdef SHARED
  extern Elf32_Addr __dl_start (void *) asm ("_dl_start");
  Elf32_Addr got_addr = (Elf32_Addr) &__dl_start;
  asm ("adr %0, _dl_start" : "=r" (pcrel_addr));
#else
  extern Elf32_Addr __dl_relocate_static_pie (void *)
    asm ("_dl_relocate_static_pie") attribute_hidden;
  Elf32_Addr got_addr = (Elf32_Addr) &__dl_relocate_static_pie;
  asm ("adr %0, _dl_relocate_static_pie" : "=r" (pcrel_addr));
#endif

No idea why _DYNAMIC isn't used on arm.

--
H.J.
https://sourceware.org/legacy-ml/libc-alpha/2018-07/msg00103.html
The Find Results Window gives you descriptive information about the found results. The results are shown in a tree structure, allowing you to easily view and navigate to all found results. By default, information for the namespace, the type and the member is shown, but you can easily change this to another type of information that you find more useful. Here is a snapshot of the available options:

You can filter the results via the 'Filter by' combobox and the textbox next to it. The usages of the different filters are described below.

Contains: the line that contains the usage should contain the inputted text.
File: files that contain the specified text in their name.
Project: projects that contain the specified text in their name.
Type: types that contain the specified text in their name.
Current File: the file on focus in the text editor.
Current Project: the project from which the command is invoked.

In order to navigate you can either double-click on a result or select it and use the Go to Declaration button. You can also use the Collapse All/Expand All buttons to collapse/expand all results. To refresh the returned results use the green button () on the left. Results of previous searches will be saved in separate tabs.
http://www.telerik.com/help/justcode/reference-justcode-windows-find-results-window.html
Java has a concept of working with streams of data. You can say that a Java program reads sequences of bytes from an input stream (or writes into an output stream): byte after byte, character after character, primitive after primitive. Accordingly, Java defines various types of classes supporting streams, for example, InputStream or OutputStream. There are classes specifically meant for reading character streams, such as Reader and Writer.

Before an application can use a data file, it must open the file. A Java application opens a file by creating an object and associating a stream of bytes with that object. Similarly, when you finish using a file, the program should close the file – that is, make it no longer available to your application.

Below is a list of very important java library classes related to Streams.

READING FROM A FILE

We will write a java program to read from a file and print the file data on the user screen. Let's understand the ways to associate a File object with the input stream:

The second method has some benefits: if you create a File object, you can use the File class methods, such as exists() and lastModified(), to retrieve file information. (Refer File Input Output Tutorial.)

While working with stream classes we have to take care of checked exceptions. In our program, we are doing it using a try-catch block.
Java Code:

package filepackage;

import java.io.*;

public class FileReadingDemo {
    public static void main(String[] args) {
        InputStream istream;
        OutputStream ostream;
        int c;
        final int EOF = -1;
        ostream = System.out;
        try {
            File inputFile = new File("Data.txt");
            istream = new FileInputStream(inputFile);
            try {
                while ((c = istream.read()) != EOF)
                    ostream.write(c);
            } catch (IOException e) {
                System.out.println("Error: " + e.getMessage());
            } finally {
                try {
                    istream.close();
                    ostream.close();
                } catch (IOException e) {
                    System.out.println("File did not close");
                }
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
    }
}

If the file is missing from the project root directory, we will get the following error. After creating the file in the project root directory, we will get the file content as output.

File location

While the byte stream classes provide sufficient functionality to handle any type of I/O operation, they cannot work directly with Unicode characters. Since one of the main purposes of Java is to support the “write once, run anywhere” philosophy, it was necessary to include direct I/O support for characters. Now we will look at a java program using a character stream to read the file. This is very much similar to FileInputStream, but the JVM treats it differently. In the below program, we are not handling the exception with a try-catch block; instead, we are adding a throws clause to the method declaration. The output would be the same as for the above program.

Java Code:

package filepackage;

import java.io.*;

public class FileReadingCharacterStream {
    public static void main(String[] args) throws IOException {
        FileReader freader = new FileReader("Data.txt");
        BufferedReader br = new BufferedReader(freader);
        String s;
        while ((s = br.readLine()) != null) {
            System.out.println(s);
        }
        freader.close();
    }
}

We can write the same program using Java 7 syntax, where we do not need to worry about closing the File and Stream resources. The output will be exactly the same as for the above program.
Java Code:

package filepackage;

import java.io.*;

public class FileReadingJava7Way {
    public static void main(String[] args) {
        File file = new File("Data.txt");
        try (FileInputStream fis = new FileInputStream(file)) {
            int content;
            while ((content = fis.read()) != -1) {
                // convert to char and display it
                System.out.print((char) content);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
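As an aside that is not part of the original tutorial: from Java 7 onwards the java.nio.file API can do the same job in far fewer lines. Here is a minimal sketch; it writes its own temporary data file so it runs anywhere, rather than depending on Data.txt existing:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class FileReadingNioSketch {
    public static void main(String[] args) throws IOException {
        // Create a small data file so the example is self-contained.
        Path data = Files.createTempFile("Data", ".txt");
        Files.write(data, "hello\nworld\n".getBytes(StandardCharsets.UTF_8));

        // One call reads the whole file as lines; no explicit streams,
        // readers or close() calls are needed.
        List<String> lines = Files.readAllLines(data, StandardCharsets.UTF_8);
        for (String line : lines) {
            System.out.println(line);
        }

        Files.delete(data);
    }
}
```

For small files this is usually the simplest option; the stream-based versions above remain preferable when the file is too large to hold in memory at once.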
http://www.w3resource.com/java-tutorial/reading-the-file.php
On Thu, 31 Mar 2005 11:51:53 +0200, Daniel Fagerstrom <danielf@nada.kth.se> wrote:

<snip/>

>.

One comment: I'd hope that you don't literally copy the "portal-page.xsl" and modify it? Instead you should be creating a new XSL, including "portal-page.xsl" in it and just modifying the templates that you wish to override...

I point this out because XSLT implements an interesting version of MI (multiple inheritance) with include. It completely exposes the entire inheritance tree and gives you very limited ways to navigate the tree. Nonetheless, people use it all the time. In the 70-plus XSLTs we have in our app, roughly half of them do an include, sometimes using 4 levels of inheritance. None of them use an import and an "apply-imports", which would be the XSLT equivalent of composition.

There are a couple of ways in which XSLT (1.0) could provide better tools for managing the inheritance tree. In particular, on include you should be able to specify the default priority of the included templates and a base priority that would be added to any existing priorities. You still wouldn't get complete navigation through the tree. I can see ways of doing that also, but that's another topic (unless people want to use an XSLT-like model for block inheritance.)

<snip/>

--
Peter Hunsberger
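To make the include-and-override pattern concrete, here is a rough sketch. The file name portal-page.xsl comes from the thread; the wrapper stylesheet name, the match pattern and the priority value are invented for illustration:

```xml
<?xml version="1.0"?>
<!-- my-portal-page.xsl (hypothetical): reuses portal-page.xsl
     instead of copying and modifying it. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Pull in all templates from the base stylesheet. -->
  <xsl:include href="portal-page.xsl"/>

  <!-- Override one template. With xsl:include the templates compete
       at equal import precedence, so an explicit priority is used to
       win over the base template with the same match pattern. -->
  <xsl:template match="header" priority="10">
    <h1><xsl:value-of select="title"/></h1>
  </xsl:template>

</xsl:stylesheet>
```

With xsl:import instead, the importing stylesheet's templates automatically take precedence, and xsl:apply-imports can invoke the overridden base template – the "composition" alternative Peter mentions.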
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200503.mbox/%3Ccc159a4a05033106262e0b8c00@mail.gmail.com%3E
(I’m giving up with the numbering now, unless anyone particularly wants me to keep it up. What was originally going to be a limited series appears to be growing without end…)

As Chris Nahr pointed out in my previous post, my earlier idea about staged initialization was very half-baked. As he’s prompted me to think further about it, I’ve come up with another idea. It’s slightly more baked, although there are lots of different possibilities and open questions.

Let’s take a step back, and look at my motivation: I like immutable types. They’re handy when it comes to thread safety, and they make it a lot easier to reason about the world when you know that nothing can change a certain value after it’s been created. Now, the issues are:

- We really want to be able to fully construct the object in the constructor. That means we can mark all fields as initonly in the generated IL, potentially giving the CLR more scope for optimisation.
- When setting more than two or three values (while allowing some to be optional) constructor overloading ends up being a pain.
- Object initializers in C# 3 only apply to properties and fields, not method/constructor arguments – so we can’t get the clarity of naming.
- Ideally we want to support validation (or possibly other code) and automatic properties.
- The CLR won’t allow initonly fields being set anywhere other than in the constructor – so even if we made sure we didn’t call any setters other than in the constructor, we still couldn’t use them to set the fields.
- We want to allow simple construction of immutable types from code other than C#. In particular, I care about being able to use projects like Spring.NET and Castle/Windsor (potentially after changes to those projects) to easily create instances of immutable types without resorting to looking up the order of constructor parameters.
The core of the proposal is to be able to mark properties as initonly, and get the compiler to create an extra type which is thoroughly mutable, and contains those properties – as well as a constructor which accepts an instance of the extra type and uses it to populate the immutable instance of the main type before returning. Extra syntax could then be used to call this constructor – or indeed, given that the properties are actually readonly, thus avoiding any ambiguity, normal object initializers could be used to create instances. Just as an example, imagine this code:

public class Address
{
    public string Line1 { get; initonly set; }
    public string Line2 { get; initonly set; }
    public string Line3 { get; initonly set; }
    public string County { get; initonly set; }
    public string State { get; initonly set; }
    public string Country { get; initonly set; }
    public string ZipCode { get; initonly set; }

    // Business methods as normal
}

// In another class
Address addr = new Address
{
    Line1 = "10 Fairview Avenue",
    Line3 = "Makebelieve Town",
    County = "Mono County",
    State = "California",
    Country = "US"
};

That could be transformed into code a bit like this:

// Let tools (e.g. the compiler!) know how we
// expect to be initialized.
// Could be specified manually to avoid
// using the default class name
[InitializedWith(typeof(Address.Init))]
public class Address
{
    // Nested mutable class used for initialization
    [CompilerGenerated]
    public class Init
    {
        public string Line1 { get; set; }
        public string Line2 { get; set; }
        public string Line3 { get; set; }
        public string County { get; set; }
        public string State { get; set; }
        public string Country { get; set; }
        public string ZipCode { get; set; }
    }

    // Read-only "real" properties, automatically
    // implemented and backed with initonly fields
    public string Line1 { get; }
    public string Line2 { get; }
    public string Line3 { get; }
    public string County { get; }
    public string State { get; }
    public string Country { get; }
    public string ZipCode { get; }

    // Automatically generated constructor, using
    // backing fields directly
    public Address(Address.Init init)
    {
        <>_line1 = init.Line1;
        <>_line2 = init.Line2;
        <>_line3 = init.Line3;
        <>_county = init.County;
        <>_state = init.State;
        <>_country = init.Country;
        <>_zipCode = init.ZipCode;
    }

    // Business methods as normal
}

// In another class
Address addr = new Address(new Address.Init
{
    Line1 = "10 Fairview Avenue",
    Line3 = "Makebelieve Town",
    County = "Mono County",
    State = "California",
    Country = "US"
});

That’s the simple case, of course. Issues:

- Unlike other compiler-generated types (anonymous types, types for iterator blocks, types for anonymous functions) we do want this to be public, and have a name which can be used elsewhere. We need to find some way of making sure it doesn’t clash with other names. In the example above, I’ve used an attribute to indicate which type is used for initialization – I could imagine some way of doing this in the “pre-transform” code to say what the auto-generated type should be called.
- What happens if you put code in the setter, instead of making it automatically implemented?
I suspect that code should be moved into the setter of the initialization class – but at that point it won’t have access to the rest of the state of the class (beyond the other properties in the initialization class). It’s somewhat messy.
- What if you want to add code to the generated constructor? (Possible solution: allow constructors to be marked somehow in a way that means “add on the initialization class as a parameter at the end, and copy all the values as a first step.”)
- How can you indicate that some parameters are mandatory, and some are optional? (The mandatory parameters could just be marked as readonly properties rather than initonly, and then the initialization class specified as an extra parameter for a constructor which takes all the mandatory ones. Doesn’t feel elegant though, and leaves you with two different types of initialization code being mixed in the client – some named, some positional.)
- How do you specify default values? (They probably end up being the default values of the automatically generated properties of the initialization class, but there needs to be some syntax to specify them.)

I suspect there are more issues too – but I think the benefits would be great. I know the C# team has been thinking about immutability, but I’ve no idea what kind of support they’re currently envisioning. Unlike my previous ideas, which were indeed unpalatable for various reasons, I think this one has real potential. Mind you, given that I’ve come up with it after only mulling this over in “spare” time, I highly doubt that it will be a new concept to the team…

17 thoughts on “C# 4: Immutable type initialization”

Certainly cleaner than many of the other options. Interesting… Although I’m not sure “initonly” is a good term here – it might map to the CLR flag, but as a language keyword I suspect there are better options. But in typical style I can’t think of any at the moment (perhaps just “init”?).
I guess it does the job for discussion purposes ;-p

I don’t think that a two-type implementation is going to be very effective, because of the issues you raise, as well as others. I think true immutability support is only going to come when it is built into the type system. That of course would require changes to the CLR, and I have no idea if immutability is even on the radar for the next CLR release (or even when it is going to be).

@Marc: Yes, initonly was just an initial “for discussion purposes” idea :)

@David: I don’t think any of the issues I’ve raised is insurmountable. Having custom setter code is probably the biggest issue, and I’d rather have the ability just for automatic properties than not to have it at all. We’ll see what happens though.

Another thought; one of the outstanding issues was default values. If that can be solved, what is the difference between a compiler-generated init class and a compiler-generated constructor? i.e. those members marked “initonly” actually become an additional (single) ctor, with the defaults being provided by the compiler. For every standard public/protected ctor(foo,bar), there might be an additional ctor(foo, bar, [the various initonly members])… At the language level, these additional ctors could perhaps be attributed and available by member-name via initializers – but from other (unaware) languages they could be used as the “lots of parameters” ctor.

Marc, I was thinking about that. The difference would basically be in terms of named vs positional arguments. The defaults issue goes back to the “provided by the caller” or “provided by the callee” issue – the same reason Anders didn’t want optional parameters in the first place. If the value could be moved to the callee somehow, with the caller indicating which values they’re really providing, that would be an alternative approach.

Re defaults… perhaps simply insist that *all* initonly values must be provided in the initializer?
A half-formed immutable object isn’t that much use… I think this would address the 90% case… for the remaining case, where there are scenarios to use different sets of options, this “initonly” isn’t suitable, and the author must write ctors for each valid combination. But a simple “you can set all, by name, w/o writing a ctor” would go a long way… In fact, in this case the *final* ctors probably *replace* the existing ones – i.e. Foo(string) becomes Foo(string, […]), but there is no Foo(string). Of course, at this point all we’ve done is provide “pass by name” to ctors, and an auto-generated ctor… not sure…

Why not use optional parameters and require the defaults to be Null/0? That way there is no default to speak of. You could even go one step further and require all optional types to be Reference or Nullable.

Restricting the defaults that far would severely limit the usefulness of the feature, IMO. I’m sure it wouldn’t be that hard to include defaults in a useful manner.

It’s pretty rare that I have an immutable object with a large number of attributes. If you have any class with a lot, you probably need to revisit the design. The cases where I have had them are configuration, where I use setters to avoid the ugliness of a long constructor. Example – a distributed master/worker framework, where configuration is set before the task’s execution. In these cases, I’ve cloned the object (deep copy) so that the client can’t modify it. I’ll also add a validation method before execution starts. If your concern is objects like a user’s profile, as in your example, it really isn’t a big deal once you have an xml-style type system and supportive framework. It’s all generated code, such that the mess of large constructors isn’t seen. Anyway, the point is that good design resolves all these issues. Everyone seems to want to put their ‘touch’ on their favorite language and add something new.
If you focus on a clean design, even the most complex tasks really become simple and elegant. At that point, you don’t need the compiler or language to do anything but get faster.

Ben: “The cases I have had them are configuration, where I use setters to avoid the ugliness of a long constructor.”

Exactly – that’s precisely why I’ve mentioned IoC containers a couple of times. I don’t like deep cloning and then explicit validation in terms of design – it would be nicer for the validation to happen automatically on construction, and for there to be no need to do cloning at all. I don’t see explicit validation and cloning as good design – I see it as a way of getting round the lack of simple initialization of immutable types. Not sure what you mean about the xml-style type system and supportive framework, but even if there’s a lot of generated code for production stuff, I think it’s important to be able to easily construct objects manually for unit tests and the like.
Oh it’s definitely worth being cautious of *any* new feature in a language – and I can certainly see how some of Java’s features haven’t been as carefully thought out as they might have been. The good thing about just blogging on these matters is that I know my blog is read by much smarter people than myself (or at the very least people with much more experience of language design than I have). I can make straw man proposals which may have *some* good points along with distinct downsides. I’m relying on the professionals to polish the good stuff while avoiding the problems :) Here’s some more discussion of immutable objects in C#: Obviously, this has had a lot of thought on many fronts. I’d like to see readonly properties in general, not just autoprops. This is a little more complex, since it potentially means that readonly fields *could* be referred to outside the constructor, but only in other readonly blocks. With these, and getter/setter args instead of (or in addition to) property-scoped fields, I’d be happy to do away with (non-property-scoped) fields altogether. public readonly int IntValue { get; set(field) { if (field < 0) throw new ArgumentOutOfRangeException(); field = value; } } Hi! I’ve posted this same question to a couple of other places but was unable to find the answer: I’ve been looking around for a simple example of the “right way” to implement immutable object xml serialization/deserialization. Since IXmlSerializable.ReadXml(System.Xml.XmlReader reader) requires the object inner state to be changed, I have to make private fields writable, which I would like to avoid. I’ve tried googling it but was unable to find the answer I was looking for. Thanks a lot, Veki @Veki: I’m afraid I don’t know. I don’t even know if it’s possible. 
I’m pretty ignorant about XML serialization, I’m afraid :(

Jon

@Jon: “I’m pretty ignorant about XML serialization” — I find this hard to believe :) But anyway I found a way to do it in the end, so I will just post a quick answer in case anyone comes across the same problem: I ended up using Reflection to change the state of the object inside the implementation of IXmlSerializable.ReadXml, while leaving the private fields readonly. I made a new ImmutableObject class as a base class for other immutable classes which I would like to make xml-serializable, so all the messing with Reflection is confined to this base class only. Thanks anyway – you actually answered it, I came across one of your posts:

Best regards,
Veki
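Veki’s reflection approach can be sketched roughly as follows. This is a guess at the shape of the solution, not the actual code: the Point class, its field name and the XML attribute are all invented for illustration.

```csharp
using System.Reflection;
using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

// Hypothetical immutable type that keeps its field readonly
// yet still supports IXmlSerializable.ReadXml.
public class Point : IXmlSerializable
{
    private readonly int x;

    public Point() { }                // parameterless ctor for the serializer
    public Point(int x) { this.x = x; }

    public int X { get { return x; } }

    public XmlSchema GetSchema() { return null; }

    public void WriteXml(XmlWriter writer)
    {
        writer.WriteAttributeString("x", x.ToString());
    }

    public void ReadXml(XmlReader reader)
    {
        int value = int.Parse(reader.GetAttribute("x"));
        // FieldInfo.SetValue can write a private readonly instance field
        // at runtime, so the field declaration can remain readonly.
        typeof(Point)
            .GetField("x", BindingFlags.Instance | BindingFlags.NonPublic)
            .SetValue(this, value);
        reader.Skip();
    }
}
```

In Veki’s version this reflection plumbing presumably lives in the ImmutableObject base class, so each serializable immutable class only declares its readonly fields.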
https://codeblog.jonskeet.uk/2008/03/15/c-4-immutable-type-initialization/
Summary

The code below gives an example of its use:

 7 abstract class TypeRef<T> {
 8   private final Type type;
 9   protected TypeRef() {
10     ParameterizedType superclass = (ParameterizedType)
11       getClass().getGenericSuperclass();
12     type = superclass.getActualTypeArguments()[0];
13     System.out.println("type = "+type);
14   }
15   @Override public boolean equals (Object o) {
16     return o instanceof TypeRef &&
17       ((TypeRef)o).type.equals(type);
18   }
19   @Override public int hashCode() {
20     return type.hashCode();
21   }
22 }
23
24 public class Favorites {
25   private Map<TypeRef<?>, Object> favorites =
26     new HashMap< TypeRef<?> , Object>();
27   public <T> void setFavorite(TypeRef<T> type, T thing) {
28     favorites.put(type, thing);
29   }
30   @SuppressWarnings("unchecked")
31   public <T> T getFavorite(TypeRef<T> type) {
32     return (T) favorites.get(type);
33   }
34   public static void main(String[] args) {
35     Favorites f = new Favorites();
36     List<String> stooges = Arrays.asList(
37       "Larry", "Moe", "Curly");
38     f.setFavorite(new TypeRef<List<String>>(){}, stooges);
39     List<String> ls = f.getFavorite(
40       new TypeRef<List<String>>(){});
41   }
42 }

The code for TypeRef is a little obscure, and the Javadoc for Type etc. is of no use! Since TypeRef is generic and abstract, the supertype of an instance of type TypeRef must be generic and hence the cast to ParameterizedType won't fail (line 10). Once you have a ParameterizedType you can ask for its type parameters; in this case it has one. Now you can use an instance of TypeRef as a generic token because its equals and hashCode methods look at the generic type. The usage example is a store of favourites that are accessed via generic type tokens. However the cast on line 32 that is ignored (line 30) circumvents the type checking, and if two layers of generics are used, e.g. a generic list is the generic parameter to TypeRef, it fails, as shown below.
 6 class Oops {
 7   static Favorites f = new Favorites();
 8
 9   static <T> List<T> favoriteList() {
10     TypeRef<List<T>> ref = new TypeRef<List<T>>(){};
11     List<T> result = f.getFavorite(ref);
12     if (result == null) {
13       result = new ArrayList<T>();
14       f.setFavorite(ref, result);
15     }
16     return result;
17   }
18
19   public static void main(String[] args) {
20     List<String> ls = favoriteList();
21     List<Integer> li = favoriteList();
22     li.add(1);
23     for (String s : ls) System.out.println(s);
24   }
25 }

The problem is that when the token is created (line 10) for both List<String> (line 20) and List<Integer> (line 21), the same token is created, one for a java.util.List<T>, in both cases. Thus the call to favoriteList on both lines 20 and 21 returns the same list, and therefore when run you get a type error when you attempt to read this list as a String (line 23), because you have already put an Integer in (line 22). The type system did detect a potential problem but the warning was suppressed, and hence a runtime error.

One solution is to reify generics, i.e. “erase erasure”; this is my preferred option, via adding a source statement so that backward compatibility isn't broken. A source statement identifies the version of Java in use, e.g. source 7;, which states that it is Java 7. This way generics can be reified in 7 and erased in pre-7. Note the idea of a source statement was my suggestion. Another possibility is to use factories instead of tokens, e.g.:

 8 interface Factory< T > {
 9   T create();
10 }
11
12 enum FactoryListString implements Factory< List< String > > {
13   INSTANCE {
14     public List< String > create() { return new ArrayList< String >(); }
15   }
16 }
17
18 public class Favorites2 {
19   private final Map< Factory< ? >, Object > favorites =
20     new HashMap< Factory< ? >, Object >();
21
22   public < T > void setFavorite( Factory< T > type, T thing ) {
23     favorites.put( type, thing );
24   }
25
26   @SuppressWarnings( "unchecked" )
27   public < T > T getFavorite( Factory< T > type ) {
28     return (T) favorites.get( type );
29   }
30
31   public static void main( final String[] notUsed ) {
32     final Favorites2 f = new Favorites2();
33     f.setFavorite( FactoryListString.INSTANCE,
34       asList( "Larry", "Moe", "Curly" ) );
35     out.println( f.getFavorite( FactoryListString.INSTANCE ) );
36   }
37 }

The code is straightforward to anyone familiar with the factory design pattern. Factories are typically singletons, and in this case an enum is used to make a singleton factory. A feature of using a factory instead of a token is that you would not have an expectation that something made by a different factory was the same sort of object, even if the structural type of the object was the same. I.e. two List<String> would be different types if made by different factories: no structural matching of types, just matching based on factories.
The resulting example of using favorites, which caused a problem before, is now OK:

 7 enum FactoryListInteger implements Factory< List< Integer > > {
 8   INSTANCE {
 9     public List< Integer > create() { return new ArrayList< Integer >(); }
10   }
11 }
12
13 public class Oops2 {
14   final static Favorites2 f = new Favorites2();
15
16   static < T > List< T > favoriteList( final Factory< List< T > > factory ) {
17     List< T > result = f.getFavorite( factory );
18     if ( result == null ) {
19       result = new ArrayList< T >();
20       f.setFavorite( factory, result );
21     }
22     return result;
23   }
24
25   public static void main( final String[] notUsed ) {
26     final List< String > ls = favoriteList( FactoryListString.INSTANCE );
27     final List< Integer > li = favoriteList( FactoryListInteger.INSTANCE );
28     li.add( 1 );
29     for ( final String s : ls ) {
30       out.println( s );
31     }
32     for ( final Integer i : li ) {
33       out.println( i );
34     }
35   }
36 }

The above code works as expected because the factories are singletons and are therefore passed to favoriteList, rather than being new objects with an erased second-level type. As noted above, the TypeRef version in both cases is of type List<T>, whereas with the factories the types are List< String > and List< Integer >. In the case of Neil's example, neither the type token nor the factory is used for anything other than undoing the effects of erasure. However both techniques can do much more; in particular they can make instances of generic objects.
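The failure can be isolated in a few lines. Below is a minimal, self-contained restatement of the token trick (the class and method names are mine): two concrete call sites capture different types, but a single generic call site always captures the same List<T>, which is exactly why Oops breaks:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

public class TokenDemo {
    // Minimal version of the TypeRef trick from the article.
    static abstract class TypeRef<T> {
        final Type type;
        protected TypeRef() {
            type = ((ParameterizedType) getClass().getGenericSuperclass())
                       .getActualTypeArguments()[0];
        }
    }

    // Two concrete call sites: the captured types differ, as expected.
    static Type stringToken()  { return new TypeRef<List<String>>()  {}.type; }
    static Type integerToken() { return new TypeRef<List<Integer>>() {}.type; }

    // One generic call site: T is erased, so every call captures the
    // same "List<T>" regardless of what T was at the caller.
    static <T> Type genericToken() { return new TypeRef<List<T>>() {}.type; }

    public static void main(String[] args) {
        System.out.println(stringToken().equals(integerToken()));    // false
        System.out.println(TokenDemo.<String>genericToken()
            .equals(TokenDemo.<Integer>genericToken()));             // true
    }
}
```

The second comparison printing true is the whole bug: the token made inside a generic method cannot distinguish its callers, so Oops files both lists under the same key, while the factory version forces the caller to supply a distinct singleton per element type.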
http://www.artima.com/weblogs/viewpost.jsp?thread=206350