Opened 8 years ago. Closed 7 years ago. Last modified 7 years ago.

#14417 closed (invalid): UnicodeDecodeError exception in recent actions

Description

Here is my situation. I have a model with this verbose_name in its Meta class: u"Matériel d'occasion". The thing is, this is a non-ASCII string. I do some work on an object of this model. Then, when I go to the admin page, I get this server error:

Caught UnicodeDecodeError while rendering: ('ascii', "Mat\xc3\xa9riel d'occasion", 3, 4, 'ordinal not in range(128)')

It occurs in the template django/contrib/admin/templates/admin/index.html at line 70:

<span class="mini quiet">{% filter capfirst %}{% trans entry.content_type.name %}{% endfilter %}</span>

Attachments (3)

Change History (12)

Changed 8 years ago by

comment:1 Changed 8 years ago by

Changed 8 years ago by

Changed 8 years ago by

Changed 7 years ago by

Add test for unicode model name

comment:2 Changed 7 years ago by

I've just added a test that specifically tests this. But on my system, it passes without your patch. Why would entry.content_type.name not be a Unicode string in the first place? The following snippet will tell you whether ContentType names really are all Unicode strings:

from django.contrib.contenttypes.models import ContentType

for c in ContentType.objects.all():
    print c.name.__class__, c.name

comment:3 Changed 7 years ago by

So I've learnt how to use the test suite. Here are my results. With your unit test added, it also passes for me. It outputs this when displaying content types:

<type 'unicode'> ¿Chapter?

Then I changed the Chapter verbose name from '¿Chapter?' to u"Matériel d'occasion", with a French accent, and adjusted the unit test accordingly. Running the tests gives me this:

<type 'unicode'> Matériel d'occasion

But the test fails with this error:

AssertionError: Couldn't find '<span class="mini quiet">Matériel d'occasion</span>' in response

Did I configure the models the wrong way?

comment:4 Changed 7 years ago by

That's a totally different error. Simple quotes are converted to &#39; in the HTML result. So your test case should test for:

self.assertContains(response, """<span class="mini quiet">Matériel d&#39;occasion</span>""")

comment:5 Changed 7 years ago by

Oops, sorry. No problem with the test suite: sqlite or mysql. With my application in dev mode, using MySQL (DEFAULT CHARACTER SET utf8 COLLATE utf8_bin), I tried accessing the admin page. I printed the content_type class value at the same place as in my patch. I get this:

Development server is running at
Quit the server with CONTROL-C.
<type 'str'> Matériel d'occasion
<type 'str'> Matériel d'occasion
<type 'str'> Matériel d'occasion
[02/Dec/2010 21:15:48] "GET /admin/ HTTP/1.1" 500 394166

Whereas in the test suite it's <unicode>. In the database:

select * from django_content_type;
Matériel d'occasion | website | occmaterial | ...

comment:6 Changed 7 years ago by

I suspect some database configuration problem. Did you try to get the class value of any other CharField on one of your own models?

comment:7 Changed 7 years ago by

Well, ramiro kindly pointed me to a related ticket. It is surely the cause of your problems (using utf8_bin collation).

Stack trace
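The failure in this ticket boils down to Python 2 implicitly decoding a UTF-8 bytestring with the ASCII codec. As a minimal sketch of the same failure and the correct decode — written in Python 3 syntax and independent of Django, with the bytestring taken from the error message above:

```python
# The verbose_name arrives from the database as a UTF-8 encoded bytestring,
# exactly as shown in the reported error message.
raw = b"Mat\xc3\xa9riel d'occasion"

# Decoding it as ASCII fails on byte 0xc3 at position 3 -- this is the
# "'ordinal not in range(128)'" from the ticket.
try:
    raw.decode('ascii')
except UnicodeDecodeError as exc:
    print(exc)

# Decoding with the charset the database actually used works fine.
decoded = raw.decode('utf-8')
print(decoded)  # Matériel d'occasion
```

This is why comment:6's suspicion of a database configuration problem is on target: the data is fine, but it is reaching the template layer as an undecoded bytestring.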
https://code.djangoproject.com/ticket/14417
std::time — from cppreference.com

Returns the current calendar time encoded as a std::time_t object, and also stores it in the object pointed to by arg, unless arg is a null pointer.

Parameters

arg — pointer to a std::time_t object where the time is to be stored, or a null pointer

Return value

Current calendar time encoded as a std::time_t object on success, (std::time_t)(-1) on error. If arg is not null, the return value is also stored in the object pointed to by arg.

Notes

The encoding of calendar time in std::time_t is unspecified, but most systems conform to the POSIX specification and return a value of integral type holding the number of seconds since the Epoch. Implementations in which std::time_t is a 32-bit signed integer (many historical implementations) fail in the year 2038.

Example

Run this code

#include <ctime>
#include <iostream>

int main()
{
    std::time_t result = std::time(nullptr);
    std::cout << std::asctime(std::localtime(&result))
              << result << " seconds since the Epoch\n";
}

Output:

Wed Sep 21 10:27:52 2011
1316615272 seconds since the Epoch
http://asasni.cs.up.ac.za/docs/cpp/cpp/chrono/c/time.html
Reduce is the Swiss-army knife of array iterators. It's really powerful. So powerful, you can build most of the other array iterator methods with it, like .map(), .filter() and .flatMap(). And in this article we'll look at some more amazing things you can do with it. But, if you're new to array iterator methods, .reduce() can be confusing at first.

Reduce is one of the most versatile functions that was ever discovered[1]

People often run into trouble as soon as they step beyond the basic examples. Simple things like addition and multiplication are fine. But as soon as you try it with something more complicated, it breaks. Using it with anything other than numbers starts to get really confusing. Why does reduce() cause people so much trouble? I have a theory about this. I think there's two main reasons. The first is that we tend to teach people .map() and .filter() before we teach .reduce(). But the signature for .reduce() is different. Getting used to the idea of an initial value is a non-trivial step. And then the reducer function also has a different signature. It takes an accumulator value as well as the current array element. So learning .reduce() can be tricky because it's so different from .map() and .filter(). And there's no avoiding this. But I think there's another factor at work.

The second reason relates to how we teach people about .reduce(). It's not uncommon to see tutorials that give examples like this:

function add(a, b) {
    return a + b;
}

function multiply(a, b) {
    return a * b;
}

const sampleArray = [1, 2, 3, 4];

const sum = sampleArray.reduce(add, 0);
console.log('The sum total is:', sum);
// ⦘ The sum total is: 10

const product = sampleArray.reduce(multiply, 1);
console.log('The product total is:', product);
// ⦘ The product total is: 24

Now, I'm not saying this to shame anyone. The MDN docs use this kind of example. And heck, I've even done it myself. There's a good reason why we do this.
Functions like add() and multiply() are nice and simple to understand. But unfortunately they're a little too simple. With add(), it doesn't matter whether you add b + a or a + b. And the same goes for multiply. Multiplying a * b is the same as b * a. And this is all as you would expect. But the trouble is, this makes it more difficult to see what's going on in the reducer function.

The reducer function is the first parameter we pass to .reduce(). It has a signature that looks something like this:[2]

function myReducer(accumulator, arrayElement) {
    // Code to do something goes here
}

The accumulator represents a 'carry' value. It contains whatever was returned last time the reducer function was called. If the reducer function hasn't been called yet, then it contains the initial value. So, when we pass add() in as the reducer, the accumulator maps to the a part of a + b. And a just so happens to contain the running total of all the previous items. And the same goes for multiply(). The a parameter in a * b contains the running multiplication total. And there's nothing wrong with showing people this. But, it masks one of the most interesting features of .reduce().

The great power of .reduce() comes from the fact that accumulator and arrayElement don't have to be the same type. For add and multiply, both a and b are numbers. They're the same type. But we don't have to make our reducers like that. The accumulator can be something completely different from the array elements. For example, our accumulator might be a string, while our array contains numbers:

function fizzBuzzReducer(acc, element) {
    if (element % 15 === 0) return `${acc}Fizz Buzz\n`;
    if (element % 5 === 0) return `${acc}Fizz\n`;
    if (element % 3 === 0) return `${acc}Buzz\n`;
    return `${acc}${element}\n`;
}

const nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15];

console.log(nums.reduce(fizzBuzzReducer, ''));

Now, this is just an example to make the point.
If we're working with strings, we could achieve the same thing with a .map() and .join() combo. But .reduce() is useful for more than just strings. The accumulator value doesn't have to be a simple type (like numbers or strings). It can be a structured type like an array or a plain ol' JavaScript object (POJO). This lets us do some really interesting things, as we'll see in a moment.

Some interesting things we can do with reduce

So, what interesting things can we do then? I've listed five here that don't involve adding numbers together:

- Convert an array to an object;
- Unfold to a larger array;
- Make two calculations in one traversal;
- Combine mapping and filtering into one pass; and
- Run asynchronous functions in sequence

Convert an array to an object

We can use .reduce() to convert an array to a POJO. This can be handy if you need to do lookups of some sort. For example, imagine if we had a list of people:

const peopleArr = [
    {', },
    {
        username: 'iadler',
        displayname: 'Irene Adler',
        email: null,
        authHash: '319d55944f13760af0a07bf24bd1de28',
        lastSeen: '2019-05-17T11:12:12+00:00',
    },
];

In some circumstances, it might be convenient to look up user details by their username. To make that easier, we can convert our array to an object. It might look something like this:[3]

function keyByUsernameReducer(acc, person) {
    return {...acc, [person.username]: person};
}

const peopleObj = peopleArr.reduce(keyByUsernameReducer, {});
console.log(peopleObj);
// ⦘ {
//     "glestrade": {
//         "
//     },
//     "mholmes": {
//         "
//     },
//     "iadler": {
//         "username": "iadler",
//         "displayname": "Irene Adler",
//         "email": null,
//         "authHash": "319d55944f13760af0a07bf24bd1de28",
//         "lastSeen": "2019-05-17T11:12:12+00:00"
//     }
// }

In this version, I've left the username as part of the object. But with a small tweak you can remove it (if you need to).

Unfold a small array to a larger array

Normally, we think about .reduce() as taking a list of many things and reducing it down to a single value.
But there's no reason that single value can't be an array. And there's also no rule saying the array has to be shorter than the original. So, we can use .reduce() to transform short arrays into longer ones. This can be handy if you're reading data from a text file. Here's an example. Imagine we've read a bunch of plain text lines into an array. We'd like to split each line by commas, and have one big list of names.

const fileLines = [
    '
];

function splitLineReducer(acc, line) {
    return acc.concat(line.split(/,/g));
}

const investigators = fileLines.reduce(splitLineReducer, []);
console.log(investigators);
// ⦘ [
//     "
// ]

We start with an array of length five, and then end up with an array of length sixteen. Now, you may have come across my Civilised Guide to JavaScript Array Methods. And if you're paying attention, you may have noticed that I recommend .flatMap() for this kind of scenario. So, perhaps this one doesn't really count. But, you may also have noticed that .flatMap() isn't available in Internet Explorer or Edge. So, we can use .reduce() to create our own flatMap() function.

function flatMap(f, arr) {
    const reducer = (acc, item) => acc.concat(f(item));
    return arr.reduce(reducer, []);
}

const investigators = flatMap(x => x.split(','), fileLines);
console.log(investigators);

So, .reduce() can help us make longer arrays out of short ones. But it can also cover for missing array methods that aren't available.

Make two calculations in one traversal

Sometimes we need to make two calculations based on a single array. For example, we might want to calculate the maximum and the minimum for a list of numbers.
We could do this with two passes like so:

const readings = [0.3, 1.2, 3.4, 0.2, 3.2, 5.5, 0.4];
const maxReading = readings.reduce((x, y) => Math.max(x, y), Number.MIN_VALUE);
const minReading = readings.reduce((x, y) => Math.min(x, y), Number.MAX_VALUE);
console.log({minReading, maxReading});
// ⦘ {minReading: 0.2, maxReading: 5.5}

This requires traversing our array twice. But, there may be times when we don't want to do that. Since .reduce() lets us return any type we want, we don't have to return a number. We can encode two values into an object. Then we can do two calculations on each iteration and only traverse the array once:

const readings = [0.3, 1.2, 3.4, 0.2, 3.2, 5.5, 0.4];

function minMaxReducer(acc, reading) {
    return {
        minReading: Math.min(acc.minReading, reading),
        maxReading: Math.max(acc.maxReading, reading),
    };
}

const initMinMax = {
    minReading: Number.MAX_VALUE,
    maxReading: Number.MIN_VALUE,
};

const minMax = readings.reduce(minMaxReducer, initMinMax);
console.log(minMax);
// ⦘ {minReading: 0.2, maxReading: 5.5}

The trouble with this particular example is that we don't really get a performance boost here. We still end up performing the same number of calculations. But, there are cases where it might make a genuine difference. For example, if we're combining .map() and .filter() operations…

Combine mapping and filtering into one pass

Imagine we have the same peopleArr from before. We'd like to find the most recent login, excluding people without an email address. One way to do this would be with three separate operations:

- Filter out entries without an email; then
- Extract the lastSeen property; and finally
- Find the maximum value.

Putting that all together might look something like so:

function notEmptyEmail(x) {
    return (x.email !== null) && (x.email !== undefined);
}

function getLastSeen(x) {
    return x.lastSeen;
}

function greater(a, b) {
    return (a > b) ? a : b;
}

const peopleWithEmail = peopleArr.filter(notEmptyEmail);
const lastSeenDates = peopleWithEmail.map(getLastSeen);
const mostRecent = lastSeenDates.reduce(greater, '');

console.log(mostRecent);
// ⦘ 2019-05-13T11:07:22+00:00

Now, this code is perfectly readable and it works. For the sample data, it's just fine. But if we had an enormous array, then there's a chance we might start running into memory issues. This is because we use a variable to store each intermediate array. If we modify our reducer callback, then we can do everything in one pass:

function notEmptyEmail(x) {
    return (x.email !== null) && (x.email !== undefined);
}

function greater(a, b) {
    return (a > b) ? a : b;
}

function notEmptyMostRecent(currentRecent, person) {
    return (notEmptyEmail(person))
        ? greater(currentRecent, person.lastSeen)
        : currentRecent;
}

const mostRecent = peopleArr.reduce(notEmptyMostRecent, '');

console.log(mostRecent);
// ⦘ 2019-05-13T11:07:22+00:00

In this version we traverse the array just once. But it may not be an improvement if the list of people is always small. My recommendation would be to stick with .filter() and .map() by default. If you identify memory-usage or performance issues, then look at alternatives like this.

Run asynchronous functions in sequence

Another thing we can do with .reduce() is to run promises in sequence (as opposed to in parallel).[4] This can be handy if you have a rate limit on API requests or if you need to pass the result of each promise to the next one. To give an example, imagine we wanted to fetch messages for each person in our peopleArr array.

function fetchMessages(username) {
    return fetch(`{username}`)
        .then(response => response.json());
}

function getUsername(person) {
    return person.username;
}

async function chainedFetchMessages(p, username) {
    // In this function, p is a promise. We wait for it to finish,
    // then run fetchMessages().
    const obj = await p;
    const data = await fetchMessages(username);
    return { ...obj, [username]: data };
}

const msgObj = peopleArr
    .map(getUsername)
    .reduce(chainedFetchMessages, Promise.resolve({}))
    .then(console.log);
// ⦘ {glestrade: [ … ], mholmes: [ … ], iadler: [ … ]}

Notice that for this to work, we have to pass in a Promise as the initial value using Promise.resolve(). It will resolve immediately (that's what Promise.resolve() does). Then our first API call will run straight away.

Why don't we see reduce more often then?

So, we've seen a bunch of interesting things you can do with .reduce(). Hopefully they will spark some ideas on how you can use it for your own projects. But, if .reduce() is so powerful and flexible, then why don't we see it more often? Ironically, its flexibility and power sometimes work against it. The thing is, you can do so many different things with reduce that it gives you less information. Methods like .map(), .filter() and .flatMap() are more specific and less flexible. But they tell us more about the author's intent. We say that this makes them more expressive. So it's usually better to use a more expressive method, rather than use reduce for everything.

Over to you, my friend

Now that you've seen some ideas on how to use .reduce(), why not give it a go? And if you do, or if you find a novel use that I haven't written about, be sure to let me know. I'd love to hear about it.

[1] Tweet by @JS_Cheerleader, 15 May 2019 ↩︎
[2] If you look at the .reduce() documentation, you will see that the reducer takes up to four parameters. But only the accumulator and the arrayElement are required. I've left them out in the interest of keeping things simple. It can confuse people to include too much detail. ↩︎
[3] . ↩︎
[4] If you'd like to know how to run Promises in parallel, check out How to run async JavaScript functions in sequence or parallel. ↩︎
https://jrsinclair.com/articles/2019/functional-js-do-more-with-reduce/
ArrayList

Lucky Singh, Ranch Hand
Joined: Jan 19, 2004  Posts: 125
posted Feb 25, 2004 06:34

Is this right?

public class Wheels {
    String name;
    public Wheels(String n) {
        this.name = n;
    }
}

public class Tyres extends Wheels {
    List l = new ArrayList();
    public Tyres(String n) {
        super(n);
        methodA();
        display();
    }
    public void methodA() {
        l.add();
    }
    public void display() {
        Iterator e = l.Iterator();
        while(e.hasNext())
            System.out.println(e.next());
    }
}

public class Test {
    public static void main(String args[]) {
        Tyres t1 = new Tyres("goodyear");
        Tyres t2 = new Tyres("sprint");
    }
}

[ edited to preserve formatting using the [code] and [/code] UBB tags -ds ]
[ February 25, 2004: Message edited by: Dirk Schreckmann ]

Stan James (instanceof Sidekick), Ranch Hand
Joined: Jan 29, 2003  Posts: 8791
posted Feb 25, 2004 08:41

No, they're TIRES. Sorry, culture bashing. I kinda like Tyres. I'm in Pennsylvania - American coal country - working with a bunch of Londoners these days. Anyhow, a couple of things. You have an ArrayList, and you call add(), but you have to find something to add. Maybe you wanted to add the String with the brand name? So your methodA() could say l.add(name). Next, each instance of the Tyres class has its own list object. And you only add to the list once, when the constructor calls methodA(). So you're only going to add one name to the list in each instance. You'll need a different design if you want the list to capture all the brand names in one list. One trick would be to make the ArrayList variable "static". That means the variable belongs to the class instead of to the instance, so you get only one ArrayList no matter how many Tyre objects you create. All instances kind of share the one ArrayList. What would display() do in that case? When you create "Goodyear" it would display "Goodyear". When you create "Sprint" it would display "Goodyear" and "Sprint". Would that be right? I have no idea because it's your program after all.
Is that leading you in the right direction? If it's not meeting your needs, maybe back up and talk about the requirements - desired behavior - a bit more. Finally, trying not to be too picky here, but "l" doesn't sing to me as a variable name. Back to the requirements topic: can you think of a name that helps me know what we're listing? Have fun exploring Java. Come back often if you need help. Cheers!

A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. - John Ciardi

sever oon, Ranch Hand
Joined: Feb 08, 2004  Posts: 268
posted Feb 25, 2004 16:38

First off, I'm not sure how you got this to compile. The List interface doesn't have a no-argument add() method... you have to pass something in to that method to add something to the list. You can't just call "add". Secondly, I have a bit of a problem with the inheritance, but to be fair, whether or not this is correct is completely dependent on the application. For most purposes that I can think of, though, a "wheel" is the part that connects to the axle of a car and provides a mounting point for a "tire", the rubber, air-filled encasing that cushions the vehicle. These are two different things altogether; a tire, in my mind, does not have the is-a relationship with wheel. Depending on your point of view, a tire "has-a" wheel, or a wheel "has-a" tire, but neither is the other. Question: for the purposes of your application, do these two objects live and die together? Is it possible to change a tire in your app without changing the wheel? If so, then the wheel might live to see many tires over its lifetime and it wouldn't make sense to destroy both objects simultaneously (which is absolutely necessary if you're going to make one inherit from the other... in one sense the "wheelness" of the object and the "tireness" of it are so tightly married they form a single object).
Also, I'd shy away from creating "plural" classes. You have a Wheels object and a Tyres object. Why not just create a class that represents a wheel and a tire, and instantiate as many as you need? Why one class that needs to be able to represent more than one tire or wheel? How many wheels does one Wheels instance represent? It just adds confusion. Instead, try:

public class Wheel {
    private Tire tire;
    public Tire getTire() { return tire; }
    public void setTire( Tire t ) { tire = t; }
}

public class Tire {
    private String name;
    public Tire( String name ) { setName( name ); }
    public String getName() { return name; }
    protected void setName( String n ) { name = n; }
}

Notice I've kept things very simple. Each class represents a single object, and now I'm free to use these elemental components to build more complicated structures. Say, for instance, I wanted to represent an axle and a car:

public class Axle {
    private Wheel left;
    private Wheel right;
    public Axle() { this( new Wheel(), new Wheel() ); }
    public Axle( Wheel left, Wheel right ) {
        setLeft( left );
        setRight( right );
    }
    public Wheel getLeft() { return left; }
    protected void setLeft( Wheel l ) { left = l; }
    public Wheel getRight() { return right; }
    protected void setRight( Wheel r ) { right = r; }
}

public class Car {
    private Axle front;
    private Axle rear;
    public Car() { this( new Axle(), new Axle() ); }
    public Car( Axle front, Axle rear ) {
        setFrontAxle( front );
        setRearAxle( rear );
    }
    public Axle getFrontAxle() { return front; }
    protected void setFrontAxle( Axle f ) { front = f; }
    public Axle getRearAxle() { return rear; }
    protected void setRearAxle( Axle r ) { rear = r; }
}

Note that both Car and Axle have constructors that force callers to create them with Axles and Wheels, respectively (unless the caller creates a new Car(null,null), in which case we figure he knows what he's doing and we provide him all the rope he needs...).
I made this decision because it seems to me that a Car without axles and an Axle without two mount points for tires are quite useless indeed. On the other hand, a wheel does not have to come attached to a tire. These are purely arbitrary decisions I made based upon an imagined application, and would only be defensible choices for that particular imagined application. If you were writing an app that didn't require tracking tires separate from their wheels, then you might make a different choice. My imagined application is software that runs an auto manufacturing plant, and at a manufacturing plant a car will definitely not be considered "done" until it has axles and wheels, but the customer may have a choice of different tires that could be installed (a sport package might require low-profile racing tires, for instance). So, it does not detract from the "car-ness" of the car to build it without tires and simply install those later. Now we can assemble a car:

public class AutoPlant {
    public static void main( String[] args ) {
        Car volvo = new Car();
        volvo.getFrontAxle().getLeft().setTire( new Tire("Goodyear All-Safety") );
        volvo.getFrontAxle().getRight().setTire( new Tire("Goodyear All-Safety") );
        volvo.getRearAxle().getLeft().setTire( new Tire("Goodyear All-Safety") );
        volvo.getRearAxle().getRight().setTire( new Tire("Goodyear All-Safety") );

        Car corvette = new Car();
        corvette.getFrontAxle().getLeft().setTire( new Tire("Michelin Speedster") );
        // ...
    }
}

You get the point. Now you might also notice how tiresome (sorry, had to) it is to make all those method calls. Glad you noticed it--I was getting tired of typing. So, to address this, you could add a method to the Car class that takes four tires and does the setting for you (I'll take it to the next logical step too):

public class Axle {
    // ...
    public void setTires( Tire left, Tire right ) {
        getLeft().setTire( left );
        getRight().setTire( right );
    }
    // ...
}

public class Car {
    // ...
    public void setTires( Tire frontLeft, Tire frontRight, Tire rearLeft, Tire rearRight ) {
        getFrontAxle().setTires( frontLeft, frontRight );
        getRearAxle().setTires( rearLeft, rearRight );
    }
    // ...
}

Now, the code in the AutoPlant class becomes much simpler:

public class AutoPlant {
    public static void main( String[] args ) {
        Car volvo = new Car().setTires(
            new Tire("Goodyear All-Safety"),
            new Tire("Goodyear All-Safety"),
            new Tire("Goodyear All-Safety"),
            new Tire("Goodyear All-Safety") );
        Car corvette = new Car().setTires(
            new Tire("Michelin Speedster"),
            new Tire("Michelin Speedster"),
            new Tire("Michelin Speedster"),
            new Tire("Michelin Speedster") );
    }
}

sev

I agree. Here's the link:
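Stan's suggestion earlier in the thread — make the list static so that every instance shares one list of brand names — can be sketched like this. The class and method names here are mine, not from the thread; it's just one way of implementing his idea:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Tyre {
    // static: the list belongs to the class, not to any one instance,
    // so all Tyre objects share the same list of brand names
    private static final List<String> brandNames = new ArrayList<>();

    private final String name;

    public Tyre(String name) {
        this.name = name;
        brandNames.add(name); // record every brand ever created
    }

    public static List<String> allBrandNames() {
        return Collections.unmodifiableList(brandNames);
    }

    public static void main(String[] args) {
        new Tyre("Goodyear");
        new Tyre("Sprint");
        System.out.println(Tyre.allBrandNames()); // prints [Goodyear, Sprint]
    }
}
```

This answers the behavior Stan describes: creating "Goodyear" then "Sprint" leaves both names visible through the one shared list.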
http://www.coderanch.com/t/395688/java/java/ArrayList
ECMAScript Modules (ESM) is a specification for using modules on the Web. It's supported by all modern browsers and is the recommended way of writing modular code for the Web. webpack supports processing ECMAScript Modules and optimizing them.

The export keyword allows you to expose things from an ESM to other modules:

export const CONSTANT = 42;

export let variable = 42;
// only reading is exposed
// it's not possible to modify the variable from outside

export function fun() {
  console.log('fun');
}

export class C extends Super {
  method() {
    console.log('method');
  }
}

let a, b, other;
export { a, b, other as c };

export default 1 + 2 + 3 + more();

The import keyword allows you to get references to things from other modules into an ESM:

import { CONSTANT, variable } from './module.js';
// import "bindings" to exports from another module
// these bindings are live. The values are not copied,
// instead accessing "variable" will get the current value
// in the imported module

import * as module from './module.js';
module.fun();
// import the "namespace object" which contains all exports

import theDefaultValue from './module.js';
// shortcut to import the "default" export

By default webpack will automatically detect whether a file is an ESM or a different module system.

Node.js established a way of explicitly setting the module type of files by using a property in package.json. Setting "type": "module" in a package.json forces all files below this package.json to be ECMAScript Modules. Setting "type": "commonjs" will instead force them to be CommonJS Modules.

{ "type": "module" }

In addition to that, files can set the module type by using the .mjs or .cjs extension. .mjs forces them to be ESM, .cjs forces them to be CommonJS. In Data URIs, using the text/javascript or application/javascript mimetype will also force the module type to ESM.
In addition to the module format, flagging modules as ESM also affects the resolving logic, the interop logic and the available symbols in modules. Imports in ESM are resolved more strictly:

- Relative requests must include a filename and file extension. Requests to packages, e.g. import "lodash", are still supported.
- Only the "default" export can be imported from non-ESM. Named exports are not available.
- CommonJS syntax is not available: require, module, exports, __filename, __dirname.
- HMR can be used with import.meta.webpackHot instead of module.hot.
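As a sketch of how the flagging rules above combine — the package and file names here are hypothetical, chosen only to illustrate the precedence of the extension over the package.json "type" field:

```
my-package/
├── package.json        → contains { "type": "module" }
├── src/index.js        → ESM (inherits "type": "module" from package.json)
├── legacy/helper.cjs   → CommonJS (the .cjs extension overrides package.json)
└── tools/build.mjs     → ESM (the .mjs extension forces it regardless of package.json)
```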
https://webpack.js.org/guides/ecma-script-modules/
27 November 2012 23:10 [Source: ICIS news] HOUSTON (ICIS)-- Braskem became a major consumer of In recent years, the Because of the advent of shale gas, As a result, refineries have become a more important source of propylene. However, refineries treat propylene as a by-product. Recently, though, several PDH plants also could lead to a different pricing dynamic for propylene, Musa said. Since propane is derived from natural gas, on-purpose propylene could become more dependent on gas prices versus oil prices. The PDH in Already, Braskem has signed an agreement with Enterprise Products to secure some of the propylene to be produced at the company's
http://www.icis.com/Articles/2012/11/27/9618753/new-on-purpose-propylene-plants-may-stabilise-us-market.html
ACL_SIZE(3)                BSD Library Functions Manual                ACL_SIZE(3)

NAME
     acl_size — get the size of the external representation of an ACL

LIBRARY
     Linux Access Control Lists library (libacl, -lacl).

SYNOPSIS
     #include <sys/types.h>
     #include <sys/acl.h>

     ssize_t acl_size(acl_t acl);

DESCRIPTION
     The acl_size() function returns the size, in bytes, of the buffer required to hold the exportable, contiguous, persistent form of the ACL pointed to by the argument acl, when converted by acl_copy_ext(). Any existing ACL entry descriptors that refer to entries in acl continue to refer to the same entries. Any existing ACL pointers that refer to the ACL referred to by acl continue to refer to the ACL. The order of ACL entries within acl remains unchanged.

RETURN VALUE
     On success, the acl_size() function returns the size in bytes of the contiguous, persistent form of the ACL. On error, a value of (ssize_t)-1 is returned and errno is set appropriately.

ERRORS
     If any of the following conditions occur, the acl_size() function returns a value of (ssize_t)-1 and sets errno to the corresponding value:

     [EINVAL]  The argument acl is not a valid pointer to an ACL.

STANDARDS
     IEEE Std 1003.1e draft 17 ("POSIX.1e", abandoned)

SEE ALSO
     acl_copy_ext(3)
https://www.man7.org/linux/man-pages/man3/acl_size.3.html
> From: Evan Lenz [mailto:elenz@[...].com]

<snip/>

> I believe that XSLT 1.0 was the original culprit, but I could be
> wrong. Is it too late to turn back?

I think it's too late to turn back on this. Too many people are finding this too useful, and too many specs have incorporated this in a fundamental fashion. Most of the uses I've seen are simply for namespace-scoped enumeration values, and I think that's useful. XSLT's use of namespaces for extension functions is IMHO a very novel and useful construct, as well. It strikes me as a very elegant way to provide such a mechanism that plays well in the XML world. Why must namespaces be restricted to only scoping element and attribute names? That seems to me to be unnecessarily restrictive. Other artifacts need name-scoping mechanisms, as well, and I don't see why we should force them to find another means of accomplishing this.
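For readers who haven't seen it, the "namespaces for extension functions" construct being praised looks roughly like this — a hedged sketch using the EXSLT math namespace, where the namespace URI identifies a function library rather than any element or attribute vocabulary (the element names /readings/value are made up for the example):

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:math="http://exslt.org/math">

  <!-- math:max is resolved through its namespace binding; the -->
  <!-- "math" prefix scopes a set of functions, not markup names -->
  <xsl:template match="/">
    <xsl:value-of select="math:max(/readings/value)"/>
  </xsl:template>

</xsl:stylesheet>
```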
http://aspn.activestate.com/ASPN/Mail/Message/xml-dev/1003472
crawl-001
refinedweb
185
66.44
These look relatively harmless. They seem to be coming from this typemap:

typemaps/enumint.swg:%typemap(in,fragment=SWIG_AsVal_frag(int),noblock=1) const enum SWIGTYPE & (int val, int ecode, $basetype temp) {

I'd imagine initialising the val to zero would be the easiest fix.

Cheers,
Karl

On 9 March 2014 00:48, William S Fulton <wsf@...> wrote:
> ... 13.1/x86_64/swig-unstable/_log ...

Dear all,

Sorry to just be able to report problems ... But the C++11 "using" inside namespaces is not handled correctly by the actual version of swig3.0. For instance:

=================================================
namespace myNS {
  /* template<typename KEY,typename VAL> MYCPP11 ... */
  template<typename VAL> using MyIntClass = MyCPP11Class<int,VAL>;
  void f(MyIntClass<int>& s);
  void g(MyCPP11Class<int,int>& s);
  typedef int myNewType;
  using myNewType2=int;
  extern myNewType aVal;
  extern myNewType2 aVal2;
}
================================================

Function g and variable aVal will be correctly wrapped:

-------------------------------------------------------------------
...
myNS::aVal = static_cast< myNS::myNewType >(val);
...
myNS::MyCPP11Class< int,float > *result = 0;
-------------------------------------------------------------------

On the other hand, the "using" elements (f and aVal2) are not, and lead to compilation errors. At least, namespace specifications are missing before MyIntClass and myNewType2:

-------------------------------------------------------------------
...
MyIntClass< int > *arg1 = 0 ;
...
myNewType2 * temp;
temp = reinterpret_cast< myNewType2 * >(argp);
myNS::aVal2 = *temp;
...
-------------------------------------------------------------------

Best regards and thank you again for your work on swig3.0,

On Saturday 8 March 2014 at 18:14:12, William S Fulton wrote:
> ...
> -- Associate Professor - Maître de conférences gpg : adr : Office 420/26-00 | BC 169 | 4, place Jussieu | 75252 Paris Cedex 05 phone : +33 1 44 27 71 48 | fax : +33 1 44 27 88 89 On 05/03/14 02:27, Oliver Buchtala wrote: > >'m getting to grips with the git way of doing things so a squash isn't the only way. It might be better to keep the history, so a regular rebased pull request should work. Up to you really. >> I updated the global travis configuration adding three javascript runs >> to the matrix (nodejs, jsc, v8). >> Cool, that's impressive. >>. >> Excellent. >> Anything else left? I'll review it all after I've finished the 3.0.0 release. > Short update: some problems in the test-suites are left: > > - make[1]: *** No rule to make target `javascript_version' > what is expected here? > This should display the version of javascript being used, eg some other languages: showing tcl version 8.6 showing perl5 version This is perl 5, version 18, subversion 1 (v5.18.1) built for x86_64-linux-thread-multi showing python version Python 2.7.5 showing java version java version "1.7.0_45" OpenJDK Runtime Environment (IcedTea 2.4.3) (suse-24.2.1-x86_64) OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode) javac 1.7.0_45 skipping android version showing guile version guile (GNU Guile) 2.0.9 skipping mzscheme version showing ruby version ruby 2.0.0p247 (2013-06-27 revision 41674) [x86_64-linux] skipping ocaml version showing octave version GNU Octave, version 3.6.4 showing php version PHP 5.4.20 (cli) skipping pike version skipping chicken version showing csharp version Mono C# compiler version 3.0.6.0 skipping modula3 version showing lua version Lua 5.2.2 Copyright (C) 1994-2013 Lua.org, PUC-Rio skipping allegrocl version skipping clisp version skipping uffi version skipping cffi version skipping r version showing go version go version go1.1.2 linux/amd64 Please try and make sure that displaying the version always succeeds (or succeeds and displays a 'give up' type 
message so that it doesn't break make, eg some old versions of javac, don't display the version, so we have: java_version: $(JAVA) -version $(JAVAC) -version || echo "Unknown javac version" > Broken tests: > - c_delete.i:10: Error: Syntax error in input(1). > - c_delete_function.i:8: Error: Syntax error in input(1). > - enum_forward_wrap.cxx:1368:6: error: use of enum ‘ForwardEnum3’ > without previous declaration Are you compiling the C tests as C++? If you are because there is only a C++ api for javascript, then you have the same issue as Octave in which case ignore the testcase like Octave does. > - nested_structs_wrap.cxx:1383:5: error: ‘s’ has incomplete type > Similar problem again. I think you need to look at cparse_cplusplusout and commits b63c4839fe60b8192809ef94d078ef07305144c5 and 2121e1217eae19a0a3e852fc52c89f55faf7e20a for octave.cxx. William I have a complete implementation that allows storing pending native exceptions using thread locals using compiler support or using CLR thread locals. Everything builds and passes single threaded tests. But the multi threaded tests (the tests actually testing the pending storage) are falling in various ways. Under mono on OSX and Linux, there are crashes. On some of the windows platforms the tests seem to work... running for minutes without any failure. But on others (64bit vs2010? IIRC) it fails with protection exception relatively quickly. I'm not able to work much on this for a while, but do intend to get back to it. Marvin Hi, Python 2.7.2 Swig: 2.0.11 I've the exact problem as this case here: Multiple interpreters. Application is threaded, however all python is guaranteed to run in one thread and I've vigilant about switching interpreters before doing anything in python; objects are not shared between interpreters Scenario: Start interpreter 1 create a bunch of objects Start interpreter 2 create a bunch of objects Switch to interpreter 1 call a swig wrapped function -> returns an object of the correct type. 
terminate the interpreter Switch to interpretr 2 ask for an object of the correct type -> get a generic PySwigObject -> BAD=21 This appears to be occurring because some sort of swig type information is being shared across interpreter, and when one interpreter cleans it up, it corrupts the other module. Note that this does not happen if the interpreters are run sequentially. Any ideas how to prevent this? Thanks Any Idea ? Christophe -- Christophe Tornieri Tech Lead, Software Architect Sub-sea division christophe.tornieri@... Hi, I have an issue using SWIG/Python with multiple instances of python interpreters. Several (sub)-interpreters import the same SWIG-wrapped module. The first interpreter has access to the module functions. Everything is fine. When the second interpreter is created, (the first one is still alive, I just swap interpreter with function PyThreadState_Swap), the import directive for the module does not trigger the init function of the module as it has been loaded by the first interpreter. The issue is that the second interpreter can access the symbols in the dictionary but the returned object are always NULL. Meaning for instance that calling a function will trigger an error. Christophe -- Christophe Tornieri Tech Lead, Software Architect Sub-sea division christophe.tornieri@... I don't find any solution to support vector<enum>. I tested different things. Playing with swig traits, or with %apply.... All these do not work. In some cases, it is because of Scilab 5 API, or the SWIG Scilab design. But there are also some tricky mechanisms about enum in SWIG that I don't know. I am a little disappointed, but I will probably finish without that, there will be a SWIG type error for vector<enum> too. That issue away, I am close to finish, I need a couple days only to fix the documentation and to add a few more tests. 
Simon Le 27/02/2014 18:20, Simon Marchetto a écrit : > There is a problem in the solution I proposed : vector<enum X> will not > be supported, and I find no workaround (using vector<int>....) to > support them . > > So I will implement another solution, the solution with the context, > i.e. providing the container item type, detected earlier in the process, > to the item conversion function. > This solution needs more work compared to the others, but that is the > only way to do the things correctly I think. > > Simon > > Le 26/02/2014 11:44, Simon Marchetto a écrit : >> In my solution containers of pointers (on classes) is supported only. >> >> The problem is the following. >> >> As for other languages, I implement the operator T() in the struct >> "SciSequence_Ref" (Lib/scilab/scicontainer.swg) >> The implementation uses templates, as usual in SWIG >> (Lib/scilab/scisequence.swg) : >> >> template <typename T> struct traits_asval_sequenceitem { >> static T asval(SwigSciObject obj, void *pSequence, int iItemIndex) { >> .... ? ..... >> } >> }; >> >> I can't use the SWIG traits, because the Scilab typemaps work for >> function arguments only. You do not have any type info like the PyObject >> in Python. What provides the Scilab API is giving you the type of data >> for a specified argument, given its index. And then you convert all the >> data of the arguments in one shot. SwigSciObject obj is a dummy argument >> here. >> >> This template is specialiezd for common Scilab types: double, int, bool, >> etc... No problem for that, problem is unknown types. >> >> I have to convert an item, without knowing its type. And in SWIG you >> cannot use C++ type traits. >> So I just do not know what to write in the body of the function. >> >> There are four solutions: >> - I could provide a context. I am not sure if this solution will solve >> all problems. >> - I could consider T is a pointer. In Scilab, mostly if you do not have >> a primitive type, you have a pointer. 
But it makes me very scary to do >> this. I am quite sure some cases will not work. >> - I could choose to not implement the operator T(). But in that case, >> operator [] won't be supported in Scilab, do you confirm that ? >> - raise a runtime error >> >> The last solution looks like the fastest and the most secure. I do not >> have so much visibility on the other solutions. >> >> Fortunately I can implement a specific behavior for pointers: >> >> template <typename T> struct traits_asval_sequenceitem<T*> { >> static T* asval(SwigSciObject obj, void *pSequence, int iItemIndex) { >> return static_cast<T*>(SWIG_AsVal_SequenceItem_dec(ptr)(obj, (int >> *)pSequence, iItemIndex)); >> } >> }; >> >> And so vector<class*> is supported but not vector<class>. >> >> But after all, that is not so a big limitation. I do not how to really >> deal with vector<class> in Scilab as classes do not exist in Scilab. >> Technically vector<class> and vector<class*> differ mostly on memory >> management issue as you said in a previous mail. You will have to >> allocate items in output. I do not measure all the consequences of this, >> it looks like a bit akward to me, from the user point of view. >> >> Simon >> >> Le 26/02/2014 00:18, William S Fulton a écrit : >>> Simon >>> >>> Sounds good. By all means add proper support for STL containers that >>> use non-primitive types in a later version if this is going to be >>> better supported by a later version of Scilab. For this release, the >>> minimum requirement is the generated code must compile, even if it is >>> rather useless. >>> >>> What I don't understand is, do you have support for classes working? >>> That is the SWIGTYPE typemaps? Also references/pointers to classes - >>> SWIGTYPE&, SWIGTYPE&const and SWIGTYPE* typemaps? If you do, >>> containers of classes shouldn't be much different as the marshalling >>> of the elements (such as push_back) will use the aforementioned typemaps. 
>>> >>> William >>> >>> On 25/02/14 11:07, Simon Marchetto wrote: >>>> Hello William, >>>> >>>> I am in the race to finish my works on Scilab. >>>> >>>> I've fixed the typemaps the way I proposed to you recently, and >>>> implemented more exhaustive tests on this topic, among other things. >>>> Still some tests and documentations are missing, but I am feeling I am >>>> very close to the end. >>>> >>>> What blocks me is in the STL support, the conversion in input of >>>> container items (and maybe in output, I haven't looked at this yet). >>>> I have successfully implemented some container item typemaps for Scilab >>>> specific types such as double, bool, string, etc.... >>>> >>>> But implementing the default typemap is hard. I looked what has be done >>>> for Python, and I cannot do the same. >>>> Because of different reasons, in my case, I have no idea about the item >>>> type, if is a pointer, or something else. >>>> I cannot neither use here the swig traits library, nor the C++ type >>>> traits. >>>> >>>> I think the only reasonable solution here is to raise a type error in >>>> the default typemap. I could consider by default an item is a pointer, >>>> but I surely have some problems with this. >>>> >>>> I would like to have your advices. Maybe there is a better solution but >>>> I am afraid to spend too much time on it. I think we can go on with this >>>> for moment. STL support is important but supporting custom types in STL >>>> from Scilab is no so critical in that version. And future version of >>>> Scilab has a new C++ API which should make the things much easier on >>>> this subject. >>>> >>>> Simon >>>> >>>> >>>> Le 07/02/2014 09:58, Simon Marchetto a écrit : >>>>> Yes it will raise an error, as in Octave. The code will also check >>>>> overflows. >>>>> >>>>> Double is indeed the norm in Scilab. You can have an exemple of this >>>>> with the factorial() function in Scilab. This integer in essence >>>>> function does not even accept integers as input ! 
>>>>> -->factorial(3.0)
>>>>> ans =
>>>>> 6.
>>>>>
>>>>> -->factorial(3.2)
>>>>> error 10000
>>>>> ....
>>>>>
>>>>> -->factorial(int32(3))
>>>>> error 10000
>>>>> ....
>>>>>
>>>>> Simon
>>>>>
>>>>> On 06/02/2014 20:20, William S Fulton wrote:
>>>>>> What do you plan to happen when you call
>>>>>>
>>>>>> print smethod(22.2)
>>>>>>
>>>>>> Presumably this will error out? If it silently loses precision, then
>>>>>> I don't know that your approach is a good one and I can't see
>>>>>> overloading working. If on the other hand you can reliably detect and
>>>>>> extract a true integer out of a double in Scilab, then using doubles
>>>>>> as the Scilab type everywhere seems reasonable. It sounds a bit
>>>>>> inefficient and slower to me, but if this is the norm in Scilab, then
>>>>>> fine.
>>>>>>
>>>>>> William
>>>>>>
>>>>>> On 05/02/14 10:10, Simon Marchetto wrote:
>>>>>>> ...

Another update: The test-suite is now passing for JavascriptCore. For v8/node I have two which I do not understand yet:
- c_delete.i:10: Error: Syntax error in input(1).
- c_delete_function.i:8: Error: Syntax error in input(1).

Find the current version of the merge commit for review:

Cheers,
Oliver

> Short update: some problems in the test-suites are left:
> - make[1]: *** No rule to make target `javascript_version'
>   what is expected here?
> Broken tests:
> - c_delete.i:10: Error: Syntax error in input(1).
> - c_delete_function.i:8: Error: Syntax error in input(1).
> - enum_forward_wrap.cxx:1368:6: error: use of enum 'ForwardEnum3' without previous declaration
> - nested_structs_wrap.cxx:1383:5: error: 's' has incomplete type
> The others are ok.
Cheers, Oliver Am 2/14/14 6:13 PM, schrieb William S Fulton: > Can you please replicate the examples already there, just the simple > and class will do, but more if you like. These are the 2 examples > which should be the same across all languages which I think is useful > for users and developers who are not particularly familiar with the > multitude of different languages. Could you also fill any missing gaps > required by >. > > Thanks > William > Hello William, I am finishing the module today and will prepare a clean merge candidate branch. Do I remember correctly, that you wished to have the branch squashed into one commit? Cheers, Oliver On 27/02/14 17:39, Marvin Greenberg wrote: > William S Fulton wrote: >> On 24/02/14 20:11, William S Fulton wrote: > >> >> Incidentally, I fixed the C# example solution files for 64 bit a couple >> of days ago and have tested with VS2005 and VS2010. Anyone who has a >> later versions of Visual Studio, please double check that they work. > > I finally found a machine I could use, with W7-64 and VS2013. VS2012 > and VS2013 have changed the project file format (vcprojx instead of > vcproj) but it converts them automatically. Tons of indecipherable > warnings. They seemed ignorable. > > I pointed and clicked all the solution files under Examples/csharp. > > The only definite problem I found was in the solution file in > Examples/csharp/nested > It did not have x64 configuration available in pulldown, Only Win32 and > "Mixed Platforms". I'm guessing this is therefor VS2010 too. > > There are some warnings during the build because of the custom output > location for the DLL, and this: > > LNK4075: ignoring '/EDITANDCONTINUE' due to '/SAFESEH' specification 1 > > Probably don't matter. > Ah, I missed the nested example. I've updated it now. > I also checked the properties used to build the various dlls. They > correctly have /EHsc, usefully adds /RTC1 (extra runtime checks). 
OTOH, > it seems that the dll gets built with /MTd instead of /MDd. /MDd says > it is used to link with the multithreaded DLL library (instead of the > plain multithreaded library used for executables.) > The example code is all included in the one example.dll, so it doesn't matter too much if the C++ runtime is a dll or statically linked. However, /MDd would be better if the examples were linking to other dlls. > The calling convention used is __cdecl (/Gd) not __stdcall (/Gz) > This seems to be the default and I don't really grok the difference > especially when you read this sentence from > msdn.microsoft.com/en-us/library/fwkeyyhe.aspx: > /Gd Uses the __cdecl callin convention (x86 only) > /Gz Uses the __stdcall... (x86 only) > So calling convention does not matter on x64? > > We also use /Zi and /GR, which can help when debugging. > > Honestly, I don't know too much about the right flags and settings for > correctly compiling on visual studio. > Do they examples run successfully? If they do, then that seems good enough for me. These flags could be tweaked endlessly. William
http://sourceforge.net/p/swig/mailman/swig-devel/?viewmonth=201403&page=1
CC-MAIN-2014-42
refinedweb
2,956
66.03
Stepper motors are great for (semi-)precise control, perfect for many robot and CNC projects. This HAT supports up to 2 stepper motors. The python library works identically for bi-polar and uni-polar motors. Running a stepper is a little more intricate than running a DC motor but it's still very easy.

For unipolar motors: connect the center tap (5th wire) to the center GND terminal on the Motor HAT output block. Then coil 1 should connect to one motor port (say M1 or M3) and coil 2 should connect to the other motor port (M2 or M4).

For bipolar motors: it's just like unipolar motors except there's no 5th wire to connect to ground. The code is exactly the same.

For this demo, please connect it to M1 and M2.

Run python3 to get to the Python REPL. To demonstrate the usage, we'll initialise the library and use Python code to control a stepper motor from the Python REPL. First you'll need to import and initialize the MotorKit class:

from adafruit_motorkit import MotorKit
kit = MotorKit()

There are a number of optional features available for the onestep() function. Let's take a look!

Stepper motors differ from DC motors in that the controller (in this case, Raspberry Pi) must tick each of the 4 coils in order to make the motor move. Each two 'ticks' is a step. By alternating the coils, the stepper motor will spin all the way around. If the coils are fired in the opposite order, it will spin the other way around. If the python code or Pi crashes or stops responding, the motor will no longer move. Compare this to a DC motor which has a constant voltage across a single coil for movement.

There are four essential types of steps you can use with your Motor HAT. All four kinds will work with any unipolar or bipolar stepper motor.

- Single Steps - this is the simplest type of stepping, and uses the least power. It uses a single coil to 'hold' the motor in place, as seen in the animated GIF above.
- Double Steps - this is also fairly simple, except instead of a single coil, it has two coils on at once. For example, instead of just coil #1 on, you would have coil #1 and #2 on at once. This uses more power (approx 2x) but is stronger than single stepping (by maybe 25%).
- Interleaved Steps - this is a mix of Single and Double stepping, where we use single steps interleaved with double. It has a little more strength than single stepping, and about 50% more power. What's nice about this style is that it makes your motor appear to have 2x as many steps, for a smoother transition between steps.
- Microstepping - this is where we use a mix of single stepping with PWM to slowly transition between steps. It's slower than single stepping but has much higher precision. We recommend 8 microstepping, which multiplies the # of steps your stepper motor has by 8.
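The stepping styles above boil down to different coil-energisation sequences. The following pure-Python sketch is my own illustration, not the Adafruit driver — the names SINGLE, DOUBLE, INTERLEAVE and step_sequence are invented here — simulating the patterns for a 4-coil motor:

```python
# Illustrative simulation of the coil patterns described above.
# Each tuple is (coil1, coil2, coil3, coil4); 1 = energised.

SINGLE = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]  # one coil on
DOUBLE = [(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1), (1, 0, 0, 1)]  # two coils on
# Interleaving single and double patterns doubles the number of
# (half-)steps per electrical cycle, as the tutorial notes.
INTERLEAVE = [p for pair in zip(SINGLE, DOUBLE) for p in pair]

def step_sequence(style, n_steps, reverse=False):
    """Yield the coil pattern for each of n_steps ticks.

    Firing the patterns in reverse order spins the motor the other way.
    """
    seq = list(reversed(style)) if reverse else style
    for i in range(n_steps):
        yield seq[i % len(seq)]

forward = list(step_sequence(SINGLE, 4))
backward = list(step_sequence(SINGLE, 4, reverse=True))
```

On real hardware the library performs, roughly speaking, one such tick per call to kit.stepper1.onestep(), with its direction and style arguments selecting the pattern table and traversal order.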
https://learn.adafruit.com/adafruit-dc-and-stepper-motor-hat-for-raspberry-pi/using-stepper-motors
CC-MAIN-2020-05
refinedweb
504
71.04
08 July 2008 18:10 [Source: ICIS news]

HOUSTON (ICIS news)--The National Association of Realtors (NAR) on Tuesday lowered its new US home sales prediction for the year, citing high inventories, rising commodity prices and construction costs.

The outlook for new-home sales in the US was lowered from the 529,000 prediction a month ago. “In light of high inventory conditions, rising commodity prices and construction costs will curtail new home construction deep into 2009,” said NAR chief economist Lawrence Yun.

Housing starts, including multifamily units, will probably fall 28.7% to 966,000 this year, and then drop another 9.0% in 2009 to 879,000.

The housing industry is a key downstream consuming sector for chemicals and chemicals-based products such as roofing materials, adhesives, insulation, siding, paints and coatings, synthetic materials, polyvinyl chloride (PVC) pipes and a broad range of other construction materials. Each new house has an average of $16,000 (€10,240) worth of chemicals, according to the American Chemistry Council (ACC).

The NAR also lowered its 2008 prediction for existing home sales from last month. For all of 2008, existing-home sales should total 5.31m (down from last month’s prediction of 5.4m), the NAR said, and then increase 5.0% next year to 5.58m (down from last month’s prediction of 5.74m).

The association’s Pending Home Sales Index, a forward-looking indicator based on contracts signed in May, fell 4.7% to 84.7 from an upwardly revised reading of 88.9 in April, and remains 14.0% below May 2007 when it stood at 98.5.

“The overall decline in contract signings suggests we are not out of the woods by any means,” Yun said. “The housing stimulus bill that is still being considered in the Senate is critical to assure a healthy recovery in the housing market, jobs and the economy.”

Yun said location has never mattered more than in the current market.
“Some markets have seen a doubling in home sales from a year ago, while others are seeing contract signings cut in half. Price conditions vary tremendously, even within a locality, depending upon a neighbourhood’s exposure to subprime loans.”

Double-digit pending sales gains in May from a year ago were noted in [...].

($1 = €0.64)
http://www.icis.com/Articles/2008/07/08/9138730/nar-lowers-new-home-forecast.html
CC-MAIN-2014-35
refinedweb
401
66.03
* WARNING * Computer-related thread * WARNING *

OK, I think I've been here long enough to get away with asking for some hardware advice... I built my last PC 3 years ago and haven't done any upgrading, so I'm completely out of the loop with current stuff. I'm also very lazy, so was thinking I might just buy a prebuilt PC to save the hassle of building one (and having to work out why it isn't working...). I don't have a specific budget, but I don't want to spend much over £1200, since I reckon I should be able to get a decent machine at that price. My usual supplier of choice is Overclockers, who offer this machine;

Antec 300 + Corsair 650W
Core 2 Duo 8400 + Arctic 7 cooler
Asus P5Q Deluxe
Corsair Dominator 4Gb (1066 Mhz)
4870 512Mb
500 GB SATA2 (Seagate Barracuda)
Vista Home Premium 64
Total: £1067 (inc VAT)

Questions; I've gone for a Duo rather than a Quad since I doubt games will start using Quads for a while yet - and I can then upgrade when the processors will be cheaper (or I've saved up again!). Even with I7 just released, I imagine the 775 chips will be with us for a while yet - either new or second hand. Does this sound sensible? Other options are; 8200 +£5, 7300 -£43, 6600 +£10, 9550 +£105. Would you recommend any of those? The 9550 is quite a lot of extra money, especially if I'm not using all 4 cores!

The P5Q looks decent enough, but I could go for the X48 Rampage for £50 more - any reason to do so? The 1066Mhz memory could be downgraded to 800Mhz to save £40, but this might restrict overclocking? The 4870 - ideally I'd have gone for the 1024Mb version, but I'm liking the idea of having OCUK build the rig for me... Should I go for the extra 4870 at a whopping £170, or will there be little benefit? I'm currently using a 21" CRT, so I'm flexible on resolutions, but when it dies I'll probably get a 22" or 24" TFT. My current thoughts are to only get 1 GFX, and maybe buy another or replace it later.
I'll grab 3 of extra fans for the case- any recommendations? The Noctua looks good? Sorry for the long thread - but £1,100 is quite a few beers so I want to get it right! PS - I've checked and could build the rig myself for £922 inc VAT. I might go for this if I decide I've got the time... PPS - re: overclocking. I've not done it before, but I'm prepared to give it a go, if I could get a significant boost without much effort/risk. personally i would always build my own, but i can understand why you wouldn't want the hassle. i would go quad core, the current trend with games and programs is going that way and a quad would give you some measure of "future-proofing" examples off the top of my head are: World In Conflict Supreme Commander Left 4 Dead (main one really since most companies will probably follow the trend set by valve, specially games made in the new source engine) Motherboard is fine. i would go for the 1066Mhz memory this would mean 1:1 ratio and ive heard you get the best out of core2duo in this configuration, 800 isnt really reasonable. the normal 4870 will be fine for all your needs and will play all your current games at max settings and will probably last at least the next 2 years. your right £1,100 is a few beers, i would honestly check a few sites and see if you could build for cheaper: Thanks for the recommendations - I might go with the 6600 and upgrade it later if it gets a bit long in the tooth. Or just splurge and get the 9550... I like using OCUK as I've always found them reliable and they have good customer service. I like to reward companies that make an effort (and that's why I stopped using Dabs!). Besides which, OCUK are normally quite competitive, so I doubt I'd save much shopping around. Nonetheless, I'll check out the other suppliers though - I've heard good things about Scan. Has anyone any experience of using Mesh Computers? 
I too would be inclined to go for self-build - are there any parts from your existing PC that you could transfer to limit costs? I'd go for the Antec P182. It's more expensive, but has much better sound and vibration isolation, if that matters to you. A case is a long term investment so you might get better mileage from the P182. Do you need this system straight away? If not then I'd consider waiting until after Christmas when the cost of a i7 CPU and board might be less. Otherwise I'd go for a board that takes the current Duo CPUs but accepts DDR3, at least then you can change the board in future to i7 without having the additional cost of memory. I'd agree with Flakes about the 1066Mhz memory. Personally, i hate building comps(i'm accident prone) but i cannot stand the thought of buying a prebuilt or having someone else do it as when you build it yourself, you know it inside out or something like that. Quad all the way if you ain't an upgrader as games are already there and heading that way, [...] ded-gaming current thread showing loads as tested by jimmysmitty Memory is cheap at least for DDR2 and i will differ and say that you are not going to be missing out on too much with some DDR2 800, i could be wrong though. Of course if you are overclocking over 400FSB then you would need the faster ram. Not sure about the 1066 and 1:1 ratio unless you get 533FSB mobo's now? Have i missed something? Really though, the comp in your OP is good enough and would be hassle free. Basically, unless you like to be leading edge which i don't think you have ever mentioned then you do not need newer tech as the current bunch is more than fast enough. I'll be moving storage drives across from my old build, but apart from that, not really. It's a socket 939 system, so pretty ancient. I could probably reuse the PSU (it's a Thermaltake 750w). I think the RAM might be DDR2, but it's 4x1Gb budget Geil stuff. 
BUT, I'll probably be working from home Jan-Mar so I might keep my old box as a dedicated work machine, then give it away when I'm done. I'm not especially fussed about paying for a pre-built (within reason) - I'm treating myself out of my Xmas bonus (of about £18k), so I don't mind spending an extra £150 to get a tested, working "rig" right out of the box, rather than spend a weekend faffing about assembling and installing everything. Makes me sound like an arsehole, but there we go. If I can't find a pre-built I want, I might just go for homebuild anyway. I'm sure it'll all go smoothly [/delusional].

The P182 looks OK, but it's quite deep. I'll need to hack away at my desk to get it to fit. Also a bit naff that you need to remove the side panels to clean out the dust filter. Even my current case doesn't need me to do that. And why have the fan speed controllers at the back? Surely at the front would be more convenient? Also not sure about the logic of having front fans located right behind a ruddy great door...

my antec has the dust filters that need the side panel removed for access, at least with the fans secured in place, no big deal, however, doors on cases is just wrong. I understand your reasons for the pre-built and it ain't as if it is by pcworld. on the overclocking front, i am not expert although i keep meaning to bump my own one up. if i am right, the e8400 is a 1333fsb part and the p45 based mobo supports 1600 so as long as you follow the overclocking guide in the upstairs forums you should be able to bump your fsb to 400 to get an easy overclock, with ddr2 800 you can get a 1:1 ratio although i think that is unnecessary nowadays.

Oh, and regarding DDR3... I'm not sure I see the point. DDR3 is very expensive now, so I'd be paying extra for spec I won't be using. By the time I came to upgrade, which will probably be 3-4 years again (ignoring a GFX upgrade), I7 will probably already be being replaced or AMD might have got their act together.
DDR3 might already be obsolete... I have a better idea. Go to PCWorld and ask for the expert advice of those skilled individuals who work there. They are bound to find you a lovely system that will provide you with years of trouble-free computing. Honest. Here's me: the party pooper. . . Why a desktop and not a laptop? Do you already have a nice laptop? ROFL thats the funniest thing ive read in awhile! i love going to pcworld or any other pc related store and testing the knowledge of the sales peeps I'm with the furry one on the Overclockers bit. I've had dealings with Ebuyer.com and Dabs and been fcuked over. Had to move heaven and earth to get them to ship the missing items. Never ever had a problem with OC. A friend of mine always uses them and had to return his Corsair Dominator RAM. As OC acted as the agent to return (why they have no UK operation I'll never know). Corsair faffed about so long, that OC offered him a credit note and he got to pick shiny new OCZ RAM whilst they waited for Corsair. Not much of a big deal to them, but a nice touch of customer service. Sometimes it's worth paying that little bit extra knowing that you will actually get some customer service once you've paid (this means you ebuyer.com). Meanwhile, I'm off to PCWorld for some great advice and their legendary "health check". ...*waits for special bus*... If I ever ask for advice in PC World other than "where's your toilet?", I will deserve to be shot. Re: laptops - I have no intention on spending twice as much as I need to for a non-upgradable machine with a small screen and a **** keyboard in the way. Laptops are great for people on the move. They are not small desktops. But I assume you were joking? You're not just siding with OCUk because they're West Midlands are you? I agree that they're good though. Like Novatech used to be before they went ****. My god - you can't even write c r * p anymore - it censors the whole lot! 
There's a guy at work who lives about 10 minutes from their place. As you can probably well imagine, he got inundated with requests when he foolishly commented he was off to their counter to pick up a couple of bits one Saturday and does anyone want any DVD's or anything. Less money changes hands in major drug deals than was flashed about in our office that day. He had to get a friend to take him as he didn't have enough room in his car. He hasn't volunteered since....

Well, that's decided then; I'm building my own... or maybe buying one. I'm going dual core... but maybe quad. I should never have asked for advice - I'm just as undecided now! Maybe I should just take pot luck and steal one [/Tom]

Joking? No, not at all. I was only curious. I've been using my desktop less and less over the past couple years* until, as of 6 months ago, I don't use it at all. I have four notebooks (ThinkPads T22, T23, T42p and a Dell Inspiron 1000 (ugh, but it was "free" ). I'm not a heavy gamer. I like games, but I can get by without them, so a notebook-- even my ancient T22 gets me by just fine (although I do have designs on a Lenovo Y730-- but like I said, I don't really need it.) As far as performance goes, OK, laptops are generally a smidge behind a desktop, but not by much nowadays: same processors, same memory. . . almost the same graphics (SLI / Crossfire catching on) and don't cost quite "twice as much". Even people on a budget can find something-- the Acer Gemstone series, for one. * Now that I think about it, my desktop usage decline coincided with my adoption of Thinkpads in lieu of Compaq Presarios and Armadas

I have a Dell D620. ...*cries*...

Llama, seeing as you are like me and don't upgrade till you need to, i would just take the quad, might sound odd but they are at the end of their life unlike corei7 which is just beginning and who knows what revisions and pricing might come out in a few months, least with the current ones you know what you are getting.
Early adopting is not great when aiming for the long run or at least that was my reasoning. Of course there is nothing wrong with the dual core as at stock clock they are faster than quads and for most apps will show more benefit pound/performance wise. Remember, i am a clueless jock and should not be trusted.

I have a D520..... ...*heavy sobbing*...

I've got a dell Inspiron 9100 sitting at my feet waiting to get its wireless working again after i updated to the latest ubuntu. Oh and the battery has given up the ghost as well, now that is ancient.

Hey, a D620 is still better than a 1000 Heck, a 9100, though it's a beast, is also better than the 1000. . . It's a Celeron on a SIS chipset. Doesn't support SpeedStep so a P4-M won't run at its rated speed. The only reason the 1000 is operational is that it was a, uh, older IBX version such that drmdbg still works.

If you Dell whores have quite finished crying into each others' shadies/M&Ps, does anyone else have anything constructive to add? I'm having a Wingding moment and am consumed with doubt. For Mugz's sake - I'm starting to listen to the Jock and might pay the extra £10 for the Quad6600. It's Saturday tomorrow - I'm off to PC World.

Go sit in the corner and think about what you just said.

First thing that came up for E8400 with google, was just rechecking its speed [...] aming.html This is one of tons of others i know but hey, it is constructive. i would:
- get the quad
- build it myself
- get the normal 4850
vista is ok if you can learn it and has recently caught up with XP so thats ok too. you could keep your old case and stuff so thats less to pay (unless your like me and replace everything) Hell, he should just build my comp and enjoy.

SS - looks good except for the 1200. I'm not a huge fan of the windows and cathodes look. Plus, I don't have much room so a huge tower case isn't practical for me. The 4870X2 is probably overkill for me - I'm still using a CRT so can turn down the resolution a notch or two if needed.
Once the CRT dies and I get a TFT I'll revisit my GFX choice... Will probably take yours and Flakes' advice re the Quad though - just need to decide if I'll go 6600 or 9550. Might wait to Wednesday to see if OCUK lower their prices with I7 having launched. Re the X-Fi: I've heard people have had problems with that on Vista and are recommending the Asus Xonar instead. How have you found it? I'll probably stick with onboard for the time being anyway. Thanks for all your help guys!

One consideration llama, intel is introducing the newest Core i7 on Nov 17th - this new "Nehalem" chip might result in some price cuts for the older dual and quad core chips - might be worth waiting a bit (if you can) to see if there is a price drop on those you're looking at. One thing I hate is buying hardware only to see it drop 10-20% within weeks.

the x-fi problems AFAIK have been resolved, never suffered a problem myself, the xonar is not something i would consider, it emulates stuff and AFAIK cannot do any eax in vista due to not having alchemy, what sort of performance it has in openal i am not sure. Hmm, if only halycon were here, he knows more about this sort of thing as i believe he tested them but we scared him off.

I'd stick with the onboard for now. Step up to the X-Fi and you may notice a worthwhile difference if you've got the speakers to take advantage of it. If you're not running 5/7.1 it's not really worth it IMO unless you're a true audiophile with the speaker/amp set up to match.

I'm an IBM whore, thank you very much. . .

I'm running a Creative 5.1 setup, but will probably stick with onboard to see how good it is. Jake - it's a bit of a balancing act re: I7. Whilst I7 will probably cause a fall in price of LGA775, the weakening pound means component prices have been steadily increasing over the past few weeks - as much as 30% in some cases. Only RAM is staying consistent, as the falling prices offset the exchange rate differences.
I7 has now been released, so I'll wait until next week to see if there are any immediate drops, but I was hoping to build/buy the new system next weekend, so I won't hang around too long. To be honest, I don't think you can go too badly wrong at the moment - you seem to get a lot of PC for your money unless you insist on having bleeding-edge kit. God, that was appallingly typed. Can't be arsed to edit though...

For what it is worth following my build this year here are the points I will make:

1. Go quad. I got the 8400 and whilst more than enough for 90% of what I do there are times when I just sit here wondering why I was so tight on this one. You've clearly got some funds, the 6600 OC's like a dream although you will pay for it in power. The 9 series stuff is nice but you pay the premium. Given the i7 situation maybe you want to go for it from day one, I see all the innovation coming on the new chipset so accept that it is end of product line and max it from the start.

2. DDR2 not 3. Quality counts and there are some good deals on RAM. I've yet to see anything that makes DDR3 make sense to me in performance or value terms. It's just way too much money.

3. Disk. I cocked up on this one and used a budget 160Gb drive when I first built. This means that I now have the joy of Vista on a slowish (5.4) drive when my nice big WD640Gb will gladly turn in 5.9's and would have made a far better system drive. I'd say that if you do save a few quid with DDR2, Dual not quad and modest graphics then spend the money on a Raptor and have done. That or for the same price run a couple of the 320Gb single platter versions of the WD in Raid.

Thanks - all sounds sensible. Think I'll go Quad 9550, 4Gb DDR2 RAM (2x2 so I can add another 4Gb later if I want) and a 4870 - 512Mb if I buy prebuild, 1024Mb if I build my own. How much HDD space will I need for Vista?
Happy to get maybe 1x 300Gb Raptor, but multiple Raptors in RAID is going to cost an arm and a leg, plus use up loads of power and case space. I could get a single 7200rpm drive for a lot less... Actually, having just checked the prices, I think a 150Gb Raptor will do... My XP install, including all games, comes to only 74Gb, so 150 should be plenty, even with Vista. Although I suspect the Raptors are loud, so maybe a RAID of normal, quiet drives is a better bet. Actually looking forward to this now

I have 2 x 75GB Raptors in RAID 0, they're no louder than a regular drive. It's great that these days you can buy a decent graphics card with the option to add another in future using SLi or Crossfire. What games are you going to buy first?

can't say i ever noticed my raptors, i got 1 74gb, 1 300gb as well as a 250gb maxtor and above my fans and what not i can't really hear them. That is even doing a defrag. You can also buy rubber washers to reduce any vibrations further. I am assuming you ain't one for ripping lossless music to your drives, if so that 150 could load up quick, that is why i have the old (IDE) maxtor. Oh and your build is slowly turning into my own, you know you want it

SS, I have to admit that my ideal would be very close to your spec, especially that 4870 x 2 - is that a 1GB or 2GB card? What monitor do you have? I don't think that I'd get away with anything bigger than a 24in monitor, so as long as I could max the resolution and turn on all the AA, etc., I'd be happy.

Games - X:Terran Conflict, GTA IV, the new Football Manager and the Twilight of the Arnor expansion for Gal Civ2. Might also pick up Assassins Creed if it's on budget yet. That should keep me busy for a bit The 4870X2 is a 2048Mb card. Very nice, but perhaps overkill for me when I still have a CRT so aren't tied to a given resolution. But it IS a very nice card... * sees budget shooting up to £1500 * SS - no, I don't use my PC for music. I have a nice hifi in the living room for that.
(Will have a nice hifi in the study as well as soon as I pick up another amp). It's basically a gaming machine, so as long as I can fit Vista and my games on the main drive, that's fine. I've got a 500Gb SATA2 drive in the current rig I can move across if I need more storage for movies or something. The 150Gb Raptor should do fine (although Wingy's idea of 2x 74Gb in Raid sounds suitably OTT... hehehe) Looks like I'm getting SS's machine after all. Ho hum.

Whore [/jealous]

The x2 is actually a 1GB card, two gpu's with 1GB each. Technically not a 2GB card. I have speakers and a cd player which is used for when the pc is off, but i also have my squeezebox for playing my whole music collection without having to be near my comp, was meant for wireless streaming but have moved everything into one room to save space, doesn't quite work but it will do. Now, wingy, if your wife can go shopping for shoes, then surely you can go shopping for a nice 24" monitor, 30" and you're getting crazy and have to upgrade every 6 months.

With his wife, he'll be buying his next machine out the back of the Bargain Pages. The only chip Wingy will be getting any time soon is a Dorito. (God bless you, Weird Al)

Hey, i got the remnants of my old machine still, he can have that.

He can have my current machine after March - but I doubt he wants it (Athlon 64-3000, 2Gb RAM, 6800 vanilla). Thanks for all your help though guys. Think I've decided on the following; [...] 2%20System With the 8Gb upgrade. It only costs £63 more than the individual components, which I think is a fair deal. Saves me quite a lot of effort and if anything doesn't work, it's up to them to fix it! Admittedly it doesn't have the Raptor, but I can live without that or add it later (tell MS that the old drive died hehe). I think the F1 drives are supposed to be pretty good anyway. I'll probably add a couple of extra fans (one in the side and one or two in the front) - any recommendations?
The Noctuas or Sharkoon ones look good. My wife buys her clothes at Chanel and Gucci, I buy mine at Family Dollar.
http://www.tomshardware.com/forum/21674-11-warning-computer-related-stuff
Jan 03 2017 03:28 PM

I'm new to development against the PnP-Sites-Core solution. After downloading the solution from GitHub, the initial build fails and I'm not sure why. The restoration of all Nuget packages seems to work, but I can't build the solution because "The type or namespace name 'S2S' does not exist in the namespace 'Microsoft.IdentityModel'." I can't figure out what assembly reference I'm missing or what I'm doing wrong. Any ideas?

Jan 04 2017 02:31 AM

Hi @Eric Skaggs, Can you check the reference to Microsoft.IdentityModel? In my case it is set to: C:\WINDOWS\assembly\GAC_MSIL\Microsoft.IdentityModel\3.5.0.0__31bf3856ad364e35\Microsoft.IdentityModel.dll What version of the Nuget package have you got installed for Microsoft.IdentityModel?

Jan 04 2017 08:05 AM

The problem appears to be related to Microsoft.IdentityModel.Extensions. Maybe @Erwin van Hunen or @Vesa Juvonen can help? I am running the code as-is from GitHub. The reference to Microsoft.IdentityModel in my local copy is C:\Program Files\Reference Assemblies\Microsoft\Windows Identity Foundation\v3.5\Microsoft.IdentityModel.dll. I changed it to match what you have and that did not make any difference. There is no Nuget package for Microsoft.IdentityModel in the solution, so I did not install one. Here are the steps I'm taking. Where can I get Microsoft.IdentityModel.Extensions? There is a Nuget package for that, but it's not supported by Microsoft and therefore we can't use it with this solution.

Jan 04 2017 08:27 AM

We are aware of the issue. The file gets installed the moment you install the Office developer add-ons in Visual Studio. So if you have a 'clean' VS installation the file is not there, unfortunately. I will discuss this with the team.

Jan 13 2017 08:50 AM

I was able to resolve by installing the Office Developer Tools.
Noticed Windows Identity stuff installed on confirmation screen (see attachment).

Jan 13 2017 10:32 AM (Solution)

We now removed the dependency to Microsoft.IdentityModel.Extensions and replaced it with our own, which will be distributed with PnP Core. As of the next release the requirement to install the Office Developer Tools will be gone.

Sep 21 2017 05:10 AM

Trying to follow sample code within Office Dev Center. Having the same issue in that the SharepointContextToken; SharepointContextProvider is not recognized. I have added the Microsoft Identity Model and the mentioned SharepointPNP Identity Model Extensions as explained above to my project. Does the sample code need to change? Its location is here:...

Jul 03 2019 12:52 PM

You need to use NuGet in your solution and add Microsoft.Identity.Model.Extension. You can do this from within Visual Studio starting with 2017. I just did it with 2019 and now the sample app compiles. Hope you found the solution, since you posted this question 2 years ago.
https://techcommunity.microsoft.com/t5/sharepoint-developer/the-type-or-namespace-name-s2s-does-not-exist-in-the-namespace/m-p/37674
Error C1083: Cannot open precompiled header file: 'Debug\(What I named the file).pch'

I can't compile this due to the error listed. I usually don't use this template for writing so I'm not familiar with it. I don't see how a header file can be missing if it was there when I chose new project, then clicked CLR under Visual C++, then CLR Console Application.

Code:
#include "stdafx.h"

using namespace System;

enum class Months {January=1, February, March, April, May, June, July, August,
                   September, October, November, December};

int main(array<System::String ^> ^args)
{
    // Declare a variable of the enum type (the original "Months::month = ..."
    // does not compile) and advance it on each pass through the loop.
    Months month = Months::January;
    for (int x = 1; x < 13; x++) {
        int value = safe_cast<int>(month);
        Console::WriteLine(L"The numerical value of {0} is {1}", month, value);
        month = static_cast<Months>(value + 1);
    }
    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/136433-error-c1083-cannot-open-precompiled-header-file.html
Hello, The instructions are kinda hammered out, lots of things are assumed. I barely got to step two before I got stuck. I downloaded and compiled everything, but then when you say: "compile it with your server IP address and password, and add the mount point:" I have no idea what you mean. Vonleigh

I admit that the notes are a bare minimum, but I'm leaving on vacation, and I only did get it working yesterday so that's why. OK, if you have compiled and installed the packages in step 1, you still need to compile one example application which resides in the libshout/example directory and it is called example. The file itself is called example.c and looks like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <shout/shout.h>

int main() {
    shout_conn_t conn;
    char buff[4096];
    long read, ret, total;

    shout_init_connection(&conn);
    conn.ip = "127.0.0.1";
    conn.port = 8000;
    conn.password = "letmein";
    ...

Modify the part conn.ip, port, password like:

conn.ip = "xx.xx.xx.xx";   <- give the IP address of your icecast server !!!
conn.port = 8000;
conn.password = "hackme";  <- give the password of your icecast server !!!
conn.mount = "test";       <- this is the mount point
conn.bitrate = 128;        <- bitrate of the stream

Save the file and type

# make

(be sure you are in the example directory) now follow the other steps and you should be ok. A remark about setting up the icecast server: the easiest thing to do is to rename or copy the example configuration file which is located at /usr/local/icecast/conf/icecast.conf.dist to icecast.conf in that same directory ... this way the server will work but mountpoints will not work; if you are doing one live stream to it it doesn't matter because it defaults to the first one ... I know this is again a weak explanation ...
but setting up an icecast server is not the topic here ... I'm leaving now on vacation, I'll be back on the 23rd ... Again, I know that the explanation is not that great but believe me it is the only LIVE MP3 streaming solution under Mac OS X I know of. Yves.

Hello, Thanks for the info. I had also been looking for a _live_ streaming solution for a while. I appreciate you sharing the info, sorry if my last message seemed a bit terse, was a bit frustrated. Anyway, I've followed your directions but for some reason the source (libshout) is not connecting to the icecast server. I don't get an error or anything, it just says:

% esdrec | lame -b 128 - - | ./example
opening socket, format = 0x00002021 at 44100 Hz
Connected to server...

In my other terminal window I have:

[Bandwidth: 0.000000MB/s] [Sources: 0] [Clients: 0] [Admins: 1] [Uptime: 4 minutes and 1 seconds]

The odd thing is that if I close my encoder (^C for lack of knowing a better way of doing it), I get this in the server:

-> [05/Jul/2002:14:16:26] Accepted encoder on mountpoint /test from mydomain.net. 1 sources connected
-> [05/Jul/2002:14:16:26] Lost connection to source on mount /test, waiting 30 seconds for timeout

So I guess it does know it's there, but only when I close it, quite odd. Have a great vacation :) Vonleigh

I think you need to have a source plugged in to the line in for it to show up in the selection dialog.

Where can a rank CLI newbie find some help for installing the apps at the start of this thread? Compiling things, using fink... where can I find a friendly how-to? I use Final Cut Pro and QuickTime authoring stuff all the time, but I would like help getting started with the command line. If I could stream MP3 audio on my network, record in MP3 format the audio direct from my internal PowerBook mic... these are tools I could use and maybe a good introduction to this aspect of OS X.

Yes indeed, I've set up QT Broadcaster and it works like a charm streaming live video and audio.
QTSS4 is a wonderful piece of software, and the new QT6 Preview + QTB makes it possible to do live streaming from QTSS4 even to QT5 users (just make sure not to use the mpg4 codecs which are QT6 only). It doesn't do MP3 streaming, and the clients need to have QT Installed ..., even iTunes is not able to tune into a QTB stream If you want to take this to the next level, a real cool program to install is Andromeda. You'll need to have your web server running and fork out a little change. But it is waaaaay cool. Better than Andromeda - Netjukebox Netjuke is cool, and I have it running on a Linux box along with about 200 Ogg Vorbis CD titles. The problem is that the streams open with iTunes. The only way to get them to open in XMMS under XFree86 is to have XMMS open the resultant *.m3u playlist files directly. There seems to be no way to assign XMMS as the "helper" so they open in XMMS directly. I would also like the m3u files to be written to a temp folder that gets cleaned by cron at night or on reboot, but that's secondary to the larger issue. Mozilla will let me select XMMS as the helper app (unlike IE), but an error results when Mozilla tries to call XMMS to play them (even if it's already running). --MM It doesn't do LIVE streaming, so if you are a real radio station, you can not use these solutions. The solutions mentioned here only work when you aleady have the mp3 files MP3 Streamer Maybe this can work if you don't want to go in the terminal. (I haven't tried it, so I don't know if it's good bad working or not... just saw it on a web site) b!gl00z3 Cool app but doesn't do live streaming. Yves. 
Hi, just discovered that you can encode multiple bit rates and run the whole process as daemons in the background. At the terminal you can do:

# esdrec | lame -b 128 - - | ./example &

to start a second encoding:

# esdrec | lame -b 56 - - | ./example &

be aware that you first have to compile the example app with another mount point otherwise you will send both encodings to the same point, or you could also stream it to a second icecast server if you wish! If you don't want to stream your input but just write it down as an MP3 file do

# esdrec | lame -b 128 - ~/Desktop/liverec.mp3

output that you will see is something like this ...

Assuming raw pcm input file
LAME version 3.91 ()
Using polyphase lowpass filter, transition band: 15115 Hz - 15648 Hz
Encoding <stdin> to /Users/yves/Desktop/liverec.mp3
Encoding as 44.1 kHz 128 kbps j-stereo MPEG-1 Layer III (11x) qval=5

You can stop the recording by pressing ctrl-c. The file you get can be played in iTunes ... Sound inputs that I've tested are Powerbook Mic and sound in port on the Titanium 800 MHz. Yves.

There are also other tools that perform MP3 streaming. First, there's a Shoutcast server, that performs mostly like the Icecast one. Second, there's MaltX (Macamp Light X), which works wonderfully for mp3 streams. It's compatible with Shoutcast, Icecast and Live365, also.

Use an IP address from 224.0.0.0-239.255.255.255 or something like that - QT broadcaster picks one automatically, you may want to use something in the middle. If you send on 225.1.2.3, you should be able to receive on any computer on your local network (or you need mrouted).

I managed to compile everything (Icecast 2.2.0, libshout2, etc). To compile icecast, i had to use the "fakepoll.h" header file from sealiesoftware.com/fakepoll.h to get around the poll.h errors. The icecast server starts fine, the libshout example file connects to the server, but it seems to be dropping the stream somewhere. I am basically not able to play the stream.
Dumping the output of lame to a file and playing it back again is working, so there is something with icecast. Any suggestions? Below is the icecast error_log file:

[2004-12-30 18:02:00] INFO main/main Icecast 2.2.0 server started
[2004-12-30 18:02:00] WARN main/main YP server handling has been disabled
[2004-12-30 18:02:00] INFO stats/_stats_thread stats thread started
[2004-12-30 18:02:00] INFO fserve/fserv_thread_function file serving thread started
[2004-12-30 18:02:01] DBUG slave/_slave_thread checking master stream list
[2004-12-30 18:02:13] INFO connection/_handle_source_request Source logging in at mountpoint "/radio.mp3"
[2004-12-30 18:02:13] DBUG connection/connection_complete_source sources count is 0
[2004-12-30 18:02:13] DBUG connection/connection_complete_source source is ready to start
[2004-12-30 18:02:13] DBUG source/source_init Source creation complete
[2004-12-30 18:02:13] DBUG stats/modify_node_event update node connections (1)
[2004-12-30 18:02:13] DBUG stats/modify_node_event update node source_client_connections (1)
[2004-12-30 18:02:13] DBUG stats/process_source_event new source stat /radio.mp3
[2004-12-30 18:02:13] DBUG stats/process_source_event new node public (0)
[2004-12-30 18:02:13] DBUG stats/process_source_event new node listenurl ()
[2004-12-30 18:02:13] DBUG stats/modify_node_event update node sources (1)
[2004-12-30 18:02:13] DBUG stats/modify_node_event update node source_total_connections (1)
[2004-12-30 18:02:13] DBUG stats/process_source_event new node listeners (0)
[2004-12-30 18:02:13] DBUG stats/process_source_event new node server_name (no name)
[2004-12-30 18:02:22] DBUG connection/_handle_get_request Source found for client
[2004-12-30 18:02:22] DBUG source/source_main Client added
[2004-12-30 18:02:22] INFO source/source_main listener count on /radio.mp3 now 1
[2004-12-30 18:02:22] DBUG stats/modify_node_event update node connections (2)
[2004-12-30 18:02:22] DBUG stats/modify_node_event update node client_connections (1)
[2004-12-30 18:02:22] DBUG stats/modify_node_event update node clients (1)
[2004-12-30 18:02:22] DBUG stats/modify_node_event update node listeners (1)
[2004-12-30 18:02:24] DBUG source/get_next_buffer last 1104458533, timeout 10, now 1104458544
[2004-12-30 18:02:24] WARN source/get_next_buffer Disconnecting source due to socket timeout
[2004-12-30 18:02:24] INFO source/source_shutdown Source "/radio.mp3" exiting
[2004-12-30 18:02:24] DBUG source/source_clear_source clearing source "/radio.mp3"
[2004-12-30 18:02:24] DBUG format-ogg/format_ogg_free_headers releasing header pages
[2004-12-30 18:02:24] DBUG format-ogg/free_ogg_codecs freeing codecs
[2004-12-30 18:02:24] DBUG source/source_free_source freeing source "/radio.mp3"
[2004-12-30 18:02:24] DBUG stats/modify_node_event update node sources (0)
[2004-12-30 18:02:24] DBUG stats/process_source_event delete source node /radio.mp3
[2004-12-30 18:02:24] DBUG stats/modify_node_event update node clients (0)

Thanks! Dwipal

Does anyone have any experience using ESD to capture sound from an external usb audio interface? ESD simply returns unsupported device. Are there any alternatives, apart from plugging the headphone out on the interface into my computer's line in? The interface I am using (Lexicon Omega) works well for recording various inputs using Digital Performer and various other audio software. I've tried using JACK, but ESD still says it is unsupported. The ESD documentation is useless. Thanks Brent
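The encoder chains used throughout this thread (`esdrec | lame -b 128 - - | ./example`) are ordinary Unix pipelines, so they can also be assembled programmatically. A hedged sketch of chaining processes with Python's subprocess.Popen; the final commands below are harmless stand-ins (echo and tr), not the actual audio tools, which are not required to try the idea:

```python
import subprocess

def pipeline(*cmds):
    """Chain commands so each one's stdout feeds the next one's stdin,
    like `cmd1 | cmd2 | cmd3` in the shell. Returns the final output."""
    prev = None
    procs = []
    for cmd in cmds:
        p = subprocess.Popen(cmd, stdin=prev, stdout=subprocess.PIPE)
        if prev is not None:
            prev.close()  # let upstream processes see SIGPIPE if we exit early
        prev = p.stdout
        procs.append(p)
    out = procs[-1].communicate()[0]
    for p in procs[:-1]:
        p.wait()
    return out

# Stand-ins for the esdrec | lame | example chain:
print(pipeline(["echo", "audio"], ["tr", "a-z", "A-Z"]))
```

Closing the parent's copy of each intermediate pipe is the detail that makes early termination behave like the shell version does.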
http://hints.macworld.com/article.php?story=20020704134818926
This article examines how to check if a given string contains a specified substring in Python.

Check with the in Operator

mystring = "pythontect"
substring = "tect"

if substring in mystring:
    print("tect exist in pythontect")
else:
    print("tect do not exist in pythontect")

Check with String.index() Method

The Python string type provides the index() method in order to find the index of the specified substring. The index() method is called on a string value and returns the index of the first match if the string contains the provided substring. If the string does not contain the given substring, a ValueError exception is raised, which means the specified string does not contain the given substring.

mystring = "pythontect"
substring = "tect"

try:
    mystring.index(substring)
except ValueError:
    print("Not found")
else:
    print("Given substring found in given string")

Check with String.find() Method

Another method provided by the string type is the find() method. The find() method is very similar to the index() method. The specified substring is searched for in the given string and, if it is found, the index of its first character is returned. If it is not found, -1 is returned instead of raising an exception, which makes find() easier to use than index().

mystring = "pythontect"
substring = "tect"

if mystring.find(substring) > -1:
    print("tect exist in pythontect")
else:
    print("tect do not exist in pythontect")

Check with Regex (Regular Expression)

A Regular Expression, or simply Regex, matches text against a specified pattern. A regex can also be used to find out if a string contains a specified substring. The regex module is named re in Python. The search() function provided by the re module is used to match regular expressions.

import re

mystring = "pythontect"
substring = "tect"

if re.search(substring, mystring):
    print("tect exist in pythontect")
else:
    print("tect do not exist in pythontect")
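One caveat with the regex approach above: if the substring contains regex metacharacters such as . or *, re.search() will interpret them as pattern syntax rather than literal characters. Wrapping the substring in re.escape() makes the regex check behave like the in operator. A short sketch (the contains helper is mine, for illustration):

```python
import re

def contains(text, sub):
    # Escape the substring so metacharacters like '.' or '*' are matched
    # literally, mirroring the behaviour of the 'in' operator.
    return re.search(re.escape(sub), text) is not None

print(contains("pythontect", "tect"))  # True
print(contains("a.b", "a.b"))          # True
print(contains("axb", "a.b"))          # False: '.' is literal after escaping
```

Without re.escape(), the last call would return True, because an unescaped '.' matches any character.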
https://pythontect.com/check-if-string-contains-substring-in-python/
14 Nov 2017 06:52 AM

Hi, I'm actually trying to write Dynatrace custom plugins and I want to send events from the plugin. I've seen the add_event(event) method but can't find any documentation on the event object that must be sent. Can you give an example? Thanks, Olivier

14 Nov 2017 12:15 PM

See the API itself in <oneagent dir>/agent/plugin/engine/ruxit/api/events.py For inspiration on how to call it, see the openstack notifications plugin (notifications_openstack_plugin.py) available with oneagent itself. However I don't think this can be considered "stable", since it's not yet in the documentation and it probably may change in the future.

15 Nov 2017 06:31 AM

Hi, I tried the syntax from the openstack plugin but don't know which value to put in the keys. I am trying to make a plugin which reports an event when a user logs in to a server and which reports the number of logged-in users. The metric is reported but no events. Here is my code if you have an idea:

import subprocess
from ruxit.api.base_plugin import BasePlugin
from ruxit.api.snapshot import pgi_name

class AuthPlugin(BasePlugin):
    def query(self, **kwargs):
        pgi = self.find_single_process_group(pgi_name('sshd'))
        pgi_id = pgi.group_instance_id
        result = subprocess.run('lastlog -t1 | grep `date +%H:%M` | awk \'{print $1}\'',
                                shell=True, stdout=subprocess.PIPE)
        lines = result.stdout.decode('utf8').split()
        for line in lines:
            self.results_builder.add_event(
                Event(1, pgi_id, EventMetadata(key=1, value=line + " logged in")))
        result = subprocess.run('ps -fe | grep sshd | grep pts | grep -v grep | wc -l',
                                shell=True, stdout=subprocess.PIPE)
        lines = result.stdout.decode('utf8').split()
        for line in lines:
            self.results_builder.absolute(key='loggedusers', value=line, entity_id=pgi_id)
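As an aside, the shell-parsing part of the plugin above can be isolated and tested without the agent installed. A hedged pure-Python sketch of extracting usernames from lastlog-style output for a given HH:MM timestamp; the function name and the sample output are mine, not part of the plugin SDK:

```python
def users_logged_in_at(lastlog_output, hhmm):
    """Return usernames whose last-login line contains the given HH:MM time.

    Mirrors the shell pipeline `lastlog -t1 | grep HH:MM | awk '{print $1}'`
    from the plugin above, but in pure Python so it is easy to test.
    """
    users = []
    for line in lastlog_output.splitlines():
        fields = line.split()
        if fields and hhmm in line:
            users.append(fields[0])  # first column is the username
    return users

# Hypothetical lastlog-style output for illustration:
sample = (
    "Username  Port   From      Latest\n"
    "root      pts/0  10.0.0.5  Tue Nov 14 06:52:10 +0000 2017\n"
    "olivier   pts/1  10.0.0.9  Tue Nov 14 06:52:41 +0000 2017\n"
)
print(users_logged_in_at(sample, "06:52"))  # ['root', 'olivier']
```

Separating the parsing this way also sidesteps subprocess quoting issues while debugging why events are not reported.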
https://community.dynatrace.com/t5/Extensions/Syntax-to-send-events-from-custom-plugins/td-p/42137
CC-MAIN-2021-25
refinedweb
292
50.73
On Sat, Aug 22, 2009 at 12?

Now, we have found a bug! The following program exits with 2 on real hardware for both the -UBA and -DBA versions, but with 0 for -UBA (and 2 for -DBA) on QEMU!

#ifdef __OpenBSD__
#include <sys/syscall.h>
#endif
	.globl _start
_start:
	clr %o0
#ifdef BA
	ba first
#else
	set first, %g1
	jmp %g1
#endif
	ba second
	/* should not be executed: */
	or %o0, 1, %o0
second:
#ifdef __OpenBSD__
	mov SYS_exit, %g1
	ta 0
#else
	mov 1, %g1
	ta 0x10
#endif
first:
	or %o0, 2, %o0
	/* should not be executed: */
	ba second
	or %o0, 4, %o0

qemu.log reveals that in the -UBA case, instead of 'or %o0, 2, %o0', the first 'clr %o0' is executed:

IN:
0x00010054:  mov %g0, %o0
0x00010058:  sethi %hi(0x10000), %g1
0x0001005c:  or %g1, 0x74, %g1  ! 0x10074
0x00010060:  jmp %g1
0x00010064:  b 0x1006c
--------------
IN:
0x00010054:  mov %g0, %o0
--------------
IN:
0x0001006c:  mov 1, %g1
0x00010070:  ta 0x10

For -DBA the log is OK:

IN:
0x00010054:  mov %g0, %o0
0x00010058:  b 0x1006c
0x0001005c:  b 0x10064
--------------
IN:
0x0001006c:  or %o0, 2, %o0
--------------
IN:
0x00010064:  mov 1, %g1
0x00010068:  ta 0x10
http://lists.gnu.org/archive/html/qemu-devel/2009-08/msg01159.html
CC-MAIN-2018-05
refinedweb
185
79.03
Seeing code

Discussion in 'HTML' started by Michael Bartos, Jan 2, 2005.

Similar Threads:
- Application folder not seeing namespace of the main application (gh0st54, Jul 4, 2003, in forum: ASP .Net; 0 replies, 406 views; last post gh0st54, Jul 4, 2003)
- Seeing the source code created for an ASPX page... (Patrice, Feb 28, 2005, in forum: ASP .Net; 2 replies, 447 views; last post Patrice, Mar 1, 2005)
- code behind not seeing a label (jvcoach23, Feb 6, 2008, in forum: ASP .Net; 3 replies, 464 views; last post Mark Moeykens, Feb 6, 2008)
- There's a deadlock in code in the PickAxe, and I'm not seeing why: (Dave Thomas, Jan 16, 2006, in forum: Ruby; 10 replies, 172 views; last post Eero Saynatkari, Jan 17, 2006)
- Seeing a function's code in a debugger (dynamic functions) (timothytoe, Mar 10, 2008, in forum: Javascript; 1 reply, 74 views; last post timothytoe, Mar 11, 2008)
http://www.thecodingforums.com/threads/seeing-code.160465/
CC-MAIN-2014-41
refinedweb
186
76.76
V8Ception | How to implement V8 into your .NET applications.

Jochem Stoel
Jan 7 '18

This article is based on my old V8 article published on 23 August 2015.

V8 is the JavaScript execution engine built for Google Chrome and open-sourced by Google in 2008. Written in C++, V8 compiles JavaScript source code to native machine code instead of interpreting it in real time.

In this post I will explain how to:
- install the required dependencies from NuGet
- create instances of the V8 script engine inside your C# .NET application
- evaluate JavaScript input
- expose a host type to the V8 interpreter context
- expose a host object to the V8 interpreter context
- expose entire assemblies to your script context
- create a host object and call methods from script

I will provide examples (with gists) that show how to:
- write a super simple REPL application
- create an easy way to load JavaScript files
- let V8 reference itself (both type and instance)

This post is written with n00bs in mind but assumes that you have some basic idea of what you're doing. I have simplified the concepts introduced in this article to the best of my ability and I have made all the code snippets available on my GitHub gists.

Project

Start Microsoft Visual Studio and create a new C# application. (Console or WinForms does not matter.) Right-click the project references and click Manage NuGet packages. Enable/install ClearScript.V8 and then close the window. Make sure to select the right package (ClearScript.V8).

ClearScript is a library that makes it easy to add scripting to your .NET applications. It currently supports JavaScript (via V8 and JScript) and VBScript. Basically, ClearScript assigns application objects such as Console, File and even WinForms components to JavaScript objects. You can assign both object instances and object types to a script context.
Add a reference to Microsoft.ClearScript.V8:

using Microsoft.ClearScript.V8;

You can create a new instance of V8 like this:

V8ScriptEngine v8 = new V8ScriptEngine();

Adding an object instance allows you to control an (already created) object instance from your script. If you assign the system Console (System.Console) to myConsole, you will be able to access the console like this:

myConsole.ReadKey();    /* Read single key input from user */
myConsole.WriteLine();  /* Write something to the console */

So, if your WinForms application has a Button, let's say Button1, then you assign it to the V8 context like this:

v8.AddHostObject("btn1", Button1); // variable btn1 reflects Button1

and then from your script you can change the value of the button simply by doing:

btn1.Text = "exit now"; /* The button text is now changed to "exit now" */

An object type is (obviously) a reference to the type of an application object (in other words, the class) rather than an instance of it. An object type is not instantiated yet.

/* Inside your application */
v8.AddHostType("Button", typeof(Button));

and

/* Inside your script you create multiple instances of Button */
var button1 = new Button();
var button2 = new Button();
var button3 = new Button();

If the difference between host objects and object types is not clear by now, then you are not ready to be using V8 in your applications.

The ClearScript package also allows you to expose an entire namespace at once. The HostTypeCollection constructor takes one or more namespace (string) arguments. HostTypeCollection is located in Microsoft.ClearScript, so besides Microsoft.ClearScript.V8 you need to also reference Microsoft.ClearScript. This can be useful if you want to import/access a lot of different stuff that you don't want to add manually, but it can also be used when you are dynamically/programmatically loading .DLL files.
v8.AddHostObject(identifier, new HostTypeCollection(namespaces[]));

/* expose entire assemblies */
engine.AddHostObject("lib", new HostTypeCollection("mscorlib", "System.Core"));
engine.Execute("console.log(lib.System.DateTime.Now)"); // of course assuming console.log is already implemented

Example 1 | A simple REPL using V8

Similar to Node, a super simple REPL reads input from process stdin and evaluates it.

using System;
using Microsoft.ClearScript.V8;

namespace v8repl
{
    class Program
    {
        static void Main(string[] args)
        {
            /* create instance of V8 */
            V8ScriptEngine v8 = new V8ScriptEngine();

            /* assign System.Console to Javascript variable myConsole */
            v8.AddHostType("myConsole", typeof(Console));

            bool kill = false;

            /* keep doing the following while kill = false */
            while (!kill)
            {
                /* get input string from process stdin */
                string input = Console.ReadLine();

                /* using a string literal for simplicity sake */
                if (input == "exit")
                {
                    Environment.Exit(0); /* exit code 0 means no error */
                }

                /* safely evaluate input in a try/catch block */
                try
                {
                    v8.Evaluate(input); /* run the code */
                }
                catch (Exception e)
                {
                    /* something went wrong, show us the exception */
                    Console.WriteLine(e.Message);
                }
            }
        }
    }
}

Gist

Example 2 | REPL 'wrapper' class / load files

A simple class that wraps V8 and adds a method to load a file from the disk. This is not the ultimate way to design a REPL but is fine for this example.
using System;
using Microsoft.ClearScript;
using Microsoft.ClearScript.V8;

namespace v8repl
{
    class REPL
    {
        /* v8 engine outside main loop */
        private V8ScriptEngine v8 = new V8ScriptEngine();
        private bool running = false;

        /* keep reading input from stdin until running = false */
        public void Start()
        {
            running = true;
            while (running)
            {
                string line = Console.ReadLine();
                if (line.Equals("kill"))
                {
                    running = false; /* causes this while loop to stop */
                }
                else
                {
                    Run(line);
                }
            }
        }

        /* method to read and evaluate a JavaScript file */
        public void LoadFile(string inputFile)
        {
            v8.Evaluate(System.IO.File.ReadAllText(inputFile));
        }

        /* safely evaluate code like we did before */
        public void Run(string line)
        {
            try
            {
                v8.Evaluate(line);
            }
            catch (System.Exception e)
            {
                Console.Error.WriteLine(e.Message);
            }
        }

        /* this allows us to get the current instance */
        public V8ScriptEngine GetInstance()
        {
            return v8;
        }
    }
}

Gist

Example 3 | initialization script

Using the REPL class above, we load the file init.js, which contains a simple console object to sort of mimic the standardized JavaScript console object.
Application:

using System;
using Microsoft.ClearScript.V8;
using Microsoft.ClearScript;

namespace v8repl
{
    class Program
    {
        static void Main(string[] args)
        {
            var repl = new REPL();
            var v8 = repl.GetInstance(); // shortcut

            /* assign the whole .NET core library to mscorlib */
            v8.AddHostObject("mscorlib", new HostTypeCollection("mscorlib"));
            /* reference full namespace, for example:
             *   mscorlib.System.Console.WriteLine()
             *   mscorlib.System.IO.File.WriteAllText()
             */

            /* expose the V8ScriptEngine type to the V8 context */
            v8.AddHostType("V8ScriptEngine", typeof(V8ScriptEngine));
            /* we can now do:
             *   var context1 = new V8ScriptEngine()
             */

            repl.LoadFile("init.js"); /* evaluate our file init.js */
            repl.Start();
        }
    }
}

The JavaScript file init.js being loaded:

/* imitate standardized console object */
var console = {
    log: string => { mscorlib.System.Console.WriteLine(string) },
    error: string => { mscorlib.System.Console.Error.WriteLine(string) }
}

/* Mental note:
   In JavaScript we can pass multiple variables to console.log doing
   console.log('string', someVariable, 1234, { a: 1 })
   This will not work here because Console.WriteLine expects a string.
   You need some logic that will Array.join() the function arguments. */

Gist

V8ception | V8 inside V8

You can even assign the V8ScriptEngine itself to a JavaScript object.

v8.AddHostType("V8Engine", typeof(V8ScriptEngine));

Now you can create a new instance of V8 from your application input script (JavaScript). This allows you to create a new (sub)context. A new subcontext means a new scope / execution environment without any inherited variables.

/* Javascript */
var v8 = new V8Engine();
v8.Execute('var x = 3; var y = 5; var z = x+y;');

Note that this V8 instance will also need its own console if you want to print its values to your process stdout.

Self reference

You can create a reference to the V8 instance itself. This is a terrible idea no matter what the circumstances.
:P

V8ScriptEngine v8 = new V8ScriptEngine();
v8.AddHostObject("self", v8);

self.Evaluate("var name = 'Jochem'")
console.log(name) // this will work

var myConsole = {
    log: function() {
        //
    }
}
self.AddHostObject('parentConsole', myConsole) // sketchy but works

One More Thing

It is important to understand that V8 is only the interpreter. A lot of standard and very common objects/APIs that you know do not exist yet. This includes the console object, as we have discussed in this article, but also the Event class and its children.

To learn more about ClearScript, I highly recommend checking out the official documentation on CodePlex; this is where I started too.

Thanks for stopping by ❤️

Above you say, "You can create a reference to the V8 instance itself. This is a terrible idea no matter what the circumstances." In my project (which uses ClearScript to wrap V8) I have which by definition is "not a good thing." Seeing as I want to have an include function (and having it in such a form as to be able to be put inside a try/catch), how should I have done it?

It is a long time ago I did all this, but let me see. Ignoring most dogmas, what you're doing here looks about right. A terrible idea is to expose the internal instance of the interpreter to the higher context using AddHostObject. It will stay there in that (more) global context after the first call to include. Simply wrap that reference somewhere or safely delete it afterwards; otherwise, I don't know from memory how V8 responds to setting a host object twice, but maybe your code will crash if you make a second call to include. (maaaaybe) Make sure that you can rely on CSFile.Exists, because if it is a direct reference to System.IO.File.Exists, which expects a String, ... You can wrap it somewhere else, but I advise being lazy like me and putting the call to CSFile.Exists inside a try block of include().
Again, only if needed. I don't know the rest of your program.

Maybe have a look at NiL. I never really liked the extra .dll files of V8, and I also didn't feel like embedding them or building ClearScript every time I test my code. NiL.JS is an ECMAScript evaluator written in C# that understands modern ES6/7/8/9/10 and takes only a second to compile. Not that compiling is needed. It also provides APIs to extend the evaluator with your custom syntax, which allowed me to implement an include function keyword natively in the interpreter. Not for beginners, though. In the documentation are some examples; they show you how to add an echo keyword or something. It is platform agnostic in the sense that it runs on Mono.

They've updated their examples a bit since you were last there. There are custom statements and custom operators at github.com/nilproject/NiL.JS/tree/...

What are you working on?

When I get the readme done and a folder with a couple of samples in, I'll send the link. In the meantime, I've got an enhanced JavaScript called Lychen ("Lichen" is already taken). For some years my work has been using an enhanced JScript using the same technique. I'm now pursuing V8 as a personal project. The project is now on GitHub in a public repo at github.com/axtens/Lychen. Other repos have been deleted.

FYI: ClearScript doesn't yet support .NET Core, but V8.Net now does: nuget.org/packages/V8.Net/

FYI: Maybe not out of the box, but it is really not that hard getting it to work (ish).
https://dev.to/jochemstoel/v8ception--how-to-implement-v8-into-your-net-applications-2n2i
CC-MAIN-2019-13
refinedweb
1,861
58.48
Registry.CurrentConfig Field

.NET Framework (current version)

Namespace: Microsoft.Win32
Assembly: mscorlib (in mscorlib.dll)

Contains configuration information pertaining to the hardware that is not specific to the user. This field reads the Windows registry base key HKEY_CURRENT_CONFIG.

This member is mapped to a subkey within LocalMachine. An example of using this member is an application that stores a different server name for its data depending on whether the system is attached to a network.

using System;
using Microsoft.Win32;

class Reg
{
    public static void Main()
    {
        // Create a RegistryKey, which will access the HKEY_CURRENT_CONFIG
        // key in the registry of this machine.
        RegistryKey rk = Registry.CurrentConfig;
        Console.WriteLine(rk.Name);
    }
}

.NET Framework: Available since 1.1
https://msdn.microsoft.com/en-us/library/microsoft.win32.registry.currentconfig(v=vs.110).aspx
CC-MAIN-2017-13
refinedweb
119
53.58
There are a couple of important interfaces you need to look at, IIdentity and IPrincipal. A Windows Forms application can get access to a couple of objects (WindowsIdentity and WindowsPrincipal) that you can use to determine whether the current user of the app has been authenticated and, if so, who they are; through the Principal object you can also verify group membership. You may find this will give you what you need without writing a logon screen at all. You can check that the user is authenticated, check what roles the user has (group membership) and only continue if your requirements are satisfied. If you decide to authenticate against some other source, you may still want to use the IIdentity and IPrincipal interfaces. Hope this helps.

You may be able to perform a bind to Active Directory using Windows credentials and the System.DirectoryServices namespace. There is an example in the following MS KB article: ;en-us;326340

Paul
~~~~
Microsoft MVP (Visual Basic)

Paul - Is this possible with a windows forms app (and not an asp.net app)? Thanks for the great articles!

Yes, you should still be able to bind with Active Directory. It doesn't matter whether it's an ASP.NET or VB.NET windows application.
http://forums.devx.com/showthread.php?137507-A-more-basic-question&p=406749
CC-MAIN-2015-18
refinedweb
224
54.73
NAME

evs_fd_get - Get a file descriptor on which to poll for EVS service events

SYNOPSIS

#include <openais/evs.h>

int evs_fd_get(evs_handle_t handle, int *fd);

DESCRIPTION

The evs_fd_get function is used to retrieve the file descriptor that may be used with the poll system call to determine when evs_dispatch(3) won't block. The handle argument may not be used directly with poll because it is not the file descriptor, but instead an internal identifier used by the EVS library.

RETURN VALUE

This call returns the EVS_OK value if successful, otherwise an error is returned. The errors are undocumented.

SEE ALSO

evs_overview(8), evs_initialize(3), evs_finalize(3), evs_dispatch(3), evs_join(3), evs_leave(3), evs_mcast_joined(3), evs_mcast_groups(3), evs_membership_get(3)
http://huge-man-linux.net/man3/evs_fd_get.html
CC-MAIN-2017-13
refinedweb
106
55.95
Python started as primarily a command line language. That means that it is most typically used to write scripts to automate tasks or to perform some sequence of actions. GUI applications are an integral part of our daily lives, yet, when most people are learning to program, they spend hours on making a working terminal program in Python.

How to start making GUI applications with Python? 🐍

You can create a GUI with a library. There are several libraries you can use.

PyQt 🚀

PyQt is the most popular set of Python bindings for Qt, a cross-platform GUI framework. It comes with many widgets and has a modern look and feel. Some things made with it are the KDE desktop and related applications. PyQt works on all operating systems, including Windows, Mac OS X and Linux. It also comes with a drag-and-drop designer program. If you are new to PyQt, I recommend this course.

Hello World in PyQt looks like this:

import sys
from PyQt5.QtWidgets import QApplication, QWidget

if __name__ == '__main__':
    app = QApplication(sys.argv)
    w = QWidget()
    w.resize(250, 150)
    w.move(300, 300)
    w.setWindowTitle('Simple')
    w.show()
    sys.exit(app.exec_())

source: hello world

tkinter 😄

You can use Tkinter as your GUI library. It is the most commonly used GUI library in Python. If you don't have Tkinter in your environment, you can install it either by adding Python's Tkinter package or by installing it from your distro's repository. The limitation of Tkinter is that it doesn't have many widgets available, and it has a quite old-fashioned look and feel.

Hello World in Tkinter looks like this (to learn more about Tkinter, see this article):

import tkinter as tk

top = tk.Tk()
top.title("Hello World")
labelHello = tk.Label(top, text="Hello World! ")
labelHello.pack()
top.mainloop()

Tkinter has a traditional (90s) look and feel.
pysimplegui

Another option is PySimpleGUI. This lets you create very basic user interfaces using a layout. Underneath it uses PyQt, Tkinter and Remi, so it's a good idea to learn those first.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/jones268/how-to-create-gui-applications-with-python-3795
CC-MAIN-2021-25
refinedweb
348
68.26
On Dec 7, 2004, at 7:42 PM, Bill Janssen wrote:

> So if we keep the Lucene version in only the packaging of the jar
> file, we have a source of end-user error and fragility in two ways:
> (1) the manifest file may not be available (the class files may be
> re-packaged in another app which didn't know to copy the Lucene
> manifest stuff, or unpacked)

I'd like to hear others weigh in on this repackaging issue. Is this a common practice? Supporting users that repackage the JAR and potentially introduce incompatibilities will not be fun, and if someone reports they are running Lucene 1.5.3 I'd like to be sure I know exactly what that means. Having a Java class that contains the version information seems brittle to me, in that someone could repackage improperly. JAR manifests, while certainly not leveraged this way by most, were designed to contain versioning information.

> package org.apache.lucene;
> public class VERSION {
>     // ..
> }
>
> Why make life tough on users?

I'm merely discussing the options. We've had the version information in the manifest already and I was wondering why that isn't good enough. You've certainly given some reasons why you feel it is not good enough. What do others think?

Erik

---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-dev-help@jakarta.apache.org
http://mail-archives.apache.org/mod_mbox/lucene-dev/200412.mbox/%3CEFA92201-48BB-11D9-AA00-000393A564E6@ehatchersolutions.com%3E
CC-MAIN-2014-10
refinedweb
236
64.71
Answered by: Strange behaviour when overriding TextBox.Text I am trying to create a control based on TextBox, which will show the entered text when there is some, and placeholder text when none is entered. At runtime, my implementation works fine, but not in the designer. It's not a major issue for me, but the behaviour is so strange that I cannot understand what is going on. The section in question is:Code Snippet publicoverride String Text { get { return _ActualText; } set { _ActualText = value; if (value.Length > 0) { base.Text = value; base.ForeColor = ForeColor; } else {base.Text = _PlaceholderText; base.ForeColor = _PlaceholderForeColor; }if (TextChanged != null) TextChanged(this, new EventArgs()); } } I place one of these controls on a form in the designer, and set the Text property. If I set it to "", the control displays the placeholder text in the correct colour. If I set the property to a string value, the colour changes as it should, but the text displayed does not change. If I change the line 'base.Text = value;' to 'base.Text = "Hello";', this works. I used a messagebox right before the assignment to check the value of 'value', and it is indeed correct. If I use value.ToUpper(), it is also correct, but (String)value.Clone() does not work. Similarly, value + "dfgdsd" works correctly, but value + "" does not. I even attempted to create a new string, append a space, then use SubString() to remove the space, then assign that to base.Text. It doesn't work. In the end, I changed it to:Code Snippet base.Text = String.Empty; base.AppendText(value); which appears to work in all cases. Note that the control always operated correctly at runtime, it only shows this behaviour in the designer. No amount of cleaning or rebuilding has any effect, either. 
Below is the entire class, if any of you are able to reproduce this problem:

PartTextBox.cs

public partial class PartTextBox : TextBox
{
    public override String Text
    {
        get { return _ActualText; }
        set
        {
            _ActualText = value;
            if (value.Length > 0)
            {
                // No idea why I have to do this, but setting base.Text directly
                // causes many problems in the designer (VS2005)
                base.Clear();
                base.AppendText(value);
                base.ForeColor = ForeColor;
            }
            else
            {
                base.Text = _PlaceholderText;
                base.ForeColor = _PlaceholderForeColor;
            }
            if (TextChanged != null) TextChanged(this, new EventArgs());
        }
    }
    private String _ActualText = String.Empty;

    public new event EventHandler TextChanged;

    public String PlaceholderText
    {
        get { return _PlaceholderText; }
        set
        {
            _PlaceholderText = value;
            if (!Focused) base.Text = value;
        }
    }
    private String _PlaceholderText = String.Empty;

    public Color PlaceholderForeColor
    {
        get { return _PlaceholderForeColor; }
        set
        {
            _PlaceholderForeColor = value;
            if (!Focused) base.ForeColor = value;
        }
    }
    private Color _PlaceholderForeColor = SystemColors.GrayText;

    public new Color ForeColor
    {
        get { return _ForeColor; }
        set
        {
            _ForeColor = value;
            if (Focused) base.ForeColor = value;
        }
    }
    private Color _ForeColor = SystemColors.WindowText;

    public PartTextBox()
    {
        InitializeComponent();
        base.ForeColor = _PlaceholderForeColor;
        base.Text = _PlaceholderText;
        base.TextChanged += new EventHandler(PartTextBox_TextChanged);
    }

    void PartTextBox_TextChanged(object sender, EventArgs e)
    {
        if (!Focused || Leaving) return;
        _ActualText = base.Text;
        if (TextChanged != null) TextChanged(this, e);
    }

    protected override void OnEnter(EventArgs e)
    {
        base.Text = _ActualText;
        base.ForeColor = _ForeColor;
    }

    protected override void OnLeave(EventArgs e)
    {
        Leaving = true;
        _ActualText = base.Text;
        if (_ActualText.Length == 0)
        {
            base.Text = _PlaceholderText;
            base.ForeColor = _PlaceholderForeColor;
        }
        Leaving = false;
    }
    private Boolean Leaving;
}

PartTextBox.Designer.cs

partial class PartTextBox
{
    /// <summary>
    /// Required designer variable.
    /// </summary>
    private System.ComponentModel.IContainer components = null;

    /// <summary>
    /// Clean up any resources being used.
    /// </summary>
    /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
    protected override void Dispose(bool disposing)
    {
        if (disposing && (components != null))
        {
            components.Dispose();
        }
        base.Dispose(disposing);
    }

    #region Component Designer generated code

    /// <summary>
    /// Required method for Designer support - do not modify
    /// the contents of this method with the code editor.
    /// </summary>
    private void InitializeComponent()
    {
        components = new System.ComponentModel.Container();
    }

    #endregion
}

Take it for what it is, but my advice is to not override the .Text property. Whenever I've tried to, it has resulted in problems, especially when events are associated with the property. Since you don't have access to the underlying field, you are calling that property again, and probably triggering more events than you're intending to, and if you have listeners for those events using that property in them... Oh boy!

JU

Thanks for the reply. Turns out it is actually a bug in the designer. You can see it with just a few lines of code:

Code Snippet

public class DoNothingLabel : Label
{
    public new String Text
    {
        get { return String.Empty; }
        set { /* Do nothing */ }
    }
}

public class DoNothingTextBox : TextBox
{
    public new String Text
    {
        get { return String.Empty; }
        set { /* Do nothing */ }
    }
}

The designer will use the 'new' Text on the label, but not on the text box. The compiler gets it right for both (thankfully). Filed a bug report and had it confirmed by Microsoft: Link

Martin, thanks for explaining why this occurs. However, does this mean it's not going to be 'fixed'? It may very well be by design, but it's still incorrect and inconsistent with the compiler. Presumably there was a reason why your team chose to do this, but since it causes the designer to behave incorrectly, perhaps another solution could be found? It sounds like the decision was a bit of a workaround, especially considering controls such as Label don't have this issue. If you can find out (from team members who were around at the time) what the reason was, I'd be most interested.

Is there any language construct, or part of the framework, I can use to work around this? I am not familiar with writing 'designers' for controls, but if you can point me in the right direction I'm sure I could work it out. As it is, even opening a form with the control on it in a designer means I will have to manually edit the *.designer.cs each and every time to correct the changes the designer wrongly makes. Even if there is some way to tell the designer to leave the control alone (i.e. just persist what's already there, and not make any changes), that would be better than nothing.

I am about to resort to having a UserControl hosting a TextBox docked to fill inside it, with custom methods and properties which just forward to the internal control, which is hugely inconvenient and probably not good for efficiency at runtime either. Reimplementing most of TextBox's functionality in a new custom control isn't really an option, especially when the change I am trying to achieve should be a very simple matter. Not to mention that it would be almost impossible to ensure that my own control behaved and looked the same as TextBox in all scenarios. Hoping there is some way round this...

I just tried this code, and it seemed to do what you want. Am I missing something?

Nope, overriding doesn't do what I need, which is why I was using new. At runtime, not in the designer:

1. Create an instance of MyTextBox.
2. Set the Text property to String.Empty.
3. Retrieve the Text property.

When Text is retrieved, it will equal myPlaceHolderText, and not an empty string as I assigned. Therefore it becomes impossible to assign an empty string to this control. Although I want the control to display a placeholder when its Text is empty, the placeholder value should never be returned. The designer would also persist this incorrect value. Perhaps I could experiment with overriding the TextBox's painting, and just draw the placeholder text manually over the top if Text is empty. Although I remember that some controls exhibit strange behaviour when trying to manipulate their painting, too.

I don't think you can override Paint, but you can probably override the WndProc. When you get a WM_PAINT you set state so that you know what to return in the getter. Call the base WndProc and then clear the state. I haven't tried it, so I am not 100% sure that it will work.

Martin
https://social.msdn.microsoft.com/Forums/windows/en-US/d31752ce-b8fa-4b36-a208-803388770ab1/strange-behaviour-when-overriding-textboxtext?forum=winformsdesigner
CC-MAIN-2016-44
refinedweb
1,297
59.09
Hi,

coming back from a long trip, it is now time to commit some fixes. The list of changes I have made to the CVS tree is the following:

* SI:PROCESS-COMMAND-ARGS is now a public function and has a more flexible interface. You can use it for your standalone applications if you wish.

* Callbacks are now implemented as _static_ C functions to avoid name collisions among callbacks from different files.

* Fixed the problems with missing slots when declaring subclasses of STANDARD-CLASS.

* Files are now installed with a new directory structure. You should now include the main header as

    #include <ecl/ecl.h>

  and optionally the garbage collector header as

    #include <ecl/gc/gc.h>

  This fix is a long-awaited one and avoids conflicts between the names of ECL's header files and those of several operating systems (SuSE being the most prominent one).

The last fix is the most critical one. I think I got it right, and I have been working hard on the MSVC port to ensure that it builds with the new structure. However, if someone experiences any problems, please report them ASAP.

Regards,

Juanjo
http://sourceforge.net/p/ecls/mailman/ecls-list/?viewmonth=200602&viewday=28&style=flat
CC-MAIN-2015-32
refinedweb
204
73.27
I found that StringCol ignores the sqlType parameter, because it gets lost in StringCol's own logic for determining the column type. This caused me problems because I have a table with a BLOB field which needs to hold more than 64k (the limit of TEXT on MySQL). This patch (against svn head) will always use customSQLType if it is defined.

This doesn't work for Firebird databases, because Firebird defines its own type for unbounded strings. Perhaps a better solution would be to allow the user to pass a dictionary of db-specific types?

 sqlobject/col.py |    4 +++-
 1 files changed, 3 insertions(+), 1 deletion(-)

diff -puN sqlobject/col.py~stringcol-sqltype sqlobject/col.py
--- SQLObject/sqlobject/col.py~stringcol-sqltype	2004-07-07 11:46:25.927588481 -0700
+++ SQLObject-jeremy/sqlobject/col.py	2004-07-07 11:47:46.660114103 -0700
@@ -297,7 +297,9 @@ class SOStringCol(SOCol):
         return constraints
 
     def _sqlType(self):
-        if not self.length:
+        if self.customSQLType is not None:
+            return self.customSQLType
+        elif not self.length:
             return 'TEXT'
         elif self.varchar:
             return 'VARCHAR(%i)' % self.length

J
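For illustration, the branch order the patch produces can be sketched as a standalone function. This mirrors the patched _sqlType branches visible in the hunk; the final CHAR fallback for non-varchar columns is my assumption, since that branch falls outside the diff context.

```python
def sql_type(custom_sql_type=None, length=None, varchar=True):
    # Patched order: an explicit sqlType always wins, so a column such as
    # a MySQL BLOB larger than TEXT's 64k limit can be declared directly.
    if custom_sql_type is not None:
        return custom_sql_type
    elif not length:
        return 'TEXT'                   # unbounded string
    elif varchar:
        return 'VARCHAR(%i)' % length
    else:
        return 'CHAR(%i)' % length      # assumed fallback, not shown in the hunk

print(sql_type(custom_sql_type='BLOB'))  # → BLOB
print(sql_type(length=32))               # → VARCHAR(32)
```

Before the patch, the custom_sql_type check was missing, so a column declared with an explicit sqlType would silently come out as TEXT or VARCHAR instead.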
https://sourceforge.net/p/sqlobject/mailman/message/7522954/
CC-MAIN-2017-17
refinedweb
226
61.63
KubeCon Europe 2018 Takeaways

The KubeCon Europe this year (2018) was a massive and exceptionally well organized event. Over four thousand people came to Copenhagen. There were a lot of very good talks and interesting people to meet. Of course, I mostly focused on topics which are close to my professional interest: helping small teams to get started with Kubernetes by designing deployment pipelines and building internal clusters. The questions I've been asking myself were:

- How are small teams getting started with Kubernetes?
- What are frequent surprises, gotchas and open questions?
- How to smoothen the transition towards being comfortable with Kubernetes?
- What are deployment workflows people use? For what reasons?
- What are frequent choices regarding CI/CD tooling and approaches?

I'd like to share my high-level takeaways from the event.

K8s is getting really big

Last year the KubeCon Europe was in Berlin, and about one thousand people attended. This year, it was over four thousand. That's a 4x increase. It was a huge crowd.

Most people are in the motions of starting out

No surprise, given the explosive growth in interest. Most of the attendees are at the beginning of their Kubernetes journey. Companies are still evaluating how (and whether) to use Kubernetes in the near future, and are looking for more first-hand information to help them get started.

GitOps was surprisingly popular

Treating your configurations like code and using a versioning system have been regarded as best practices for a while now. GitOps takes it one step further, and makes everything revolve around typical Git workflows. The idea is not quite new, but it's very nice to finally have a term for this type of workflow. The folks from WeaveWorks have coined the term - you can read more here. There were many talks which touched on the topic. Your infrastructure configs reside in a Git repository (on GitHub, GitLab or Bitbucket for example), and changes can be proposed via pull requests.
Proposed changes can be checked by the people who are responsible, or an automated pipeline can take over. In the case of a Kubernetes cluster, you can use it to automate the creation of new namespaces, resource limits or security rules. You get a history of changes, accountability and more control among others. Neat!

Deploying to Kubernetes is a mixed bag

I was very interested in all topics around CI/CD (I'm weird like that), and made sure to attend as many talks as possible where deployment pipelines played a role. My conclusion: it's a tricky, very individual topic and there is no "one way" to do it. There are a lot of tools to choose from for every part of a deployment pipeline, and different styles of getting your code deployed. Some tools are pretty much interchangeable, others work better together, some tech stacks are best served with a particular choice. There's no one-size-fits-all. Thus, almost every company has their own set of tools, which enable a workflow which serves their product and team needs. Some end up writing their own tools for a part of their pipeline, because nothing else really fits.

Deployment pattern: custom resources and custom controllers

At the core of Kubernetes, you have a declarative approach. You describe how things should look and Kubernetes takes care to make it happen. Conventional deployment methods are rather imperative. You specify what needs to happen at each step of the process. By using Kubernetes custom resources and custom controllers, you can let your cluster take care of deploying your applications in a declarative fashion! That's a very very interesting pattern, and variations of it were presented in multiple talks. Your build pipeline produces new deployment artifacts (like Docker images in a registry), and everything else is taken care of internally.
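The core of that declarative pattern is a reconcile loop: a controller compares the desired state declared in a (custom) resource with what it observes in the cluster, and derives the actions needed to close the gap. Here's a minimal, framework-free Python sketch of that idea - the names and dict shapes are illustrative only, not a real Kubernetes API:

```python
# Toy model of a controller's reconcile step: given the desired state
# (declared in resources) and the observed state (what's running),
# compute the actions needed to converge. Real controllers run this in
# a loop against the Kubernetes API; here both states are plain dicts.
def reconcile(desired, observed):
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name))
    return actions

# A new image tag shows up in the declared spec -> an update is derived.
print(reconcile({"web": {"image": "app:v2"}}, {"web": {"image": "app:v1"}}))
```

Pushing a new image and bumping the declared spec is then all a pipeline has to do; the controller derives and applies the rest.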
That's especially neat if you want to use complex deployment methods, or simply don't want to give third-party tools access to your internal infrastructure and the credentials to do so.

Most people shouldn't develop on Kubernetes

I really don't see a good reason for teams to even touch Kubernetes for their development workflows. Just develop locally, maybe run backing services in containers if you need them. Sure, there are tools which make it possible to run a development server locally, and have it be part of your Kubernetes setup on the cluster, but my impression is that very few use cases warrant this kind of effort. A few of the issues you'll face with most approaches are slow iteration times, frustrating debugging limitations and unnecessary complexity. If you feel like you need to rely on your Kubernetes cluster during development, your workflows are probably broken.

That serverless stuff is pretty neat

I haven't been too eager when it comes to serverless. But after a lot of conversations and research I have reconsidered. There's a lot to gain if you're working with event-driven applications. FaaS can be an incredibly powerful tool, and new services like AWS Fargate (with virtual-kubelet) can make it possible to run any number of containers in a serverless fashion as if they were on your Kubernetes cluster. Running workflows for data engineering and machine learning is the use case which I'm most excited about.

Kubernetes for data plumbing, machine learning and batch processing

Here's the biggest reason why I'm excited about serverless: with FaaS you were really constrained when it came to machine learning and data processing tasks. The code does not really fit size-wise and it might need to run a bit longer than a typical serverless function to be useful. With the possibility to run serverless containers, a whole new world of very exciting possibilities is opening up.
Even without "serverless" in the mix, the Kubernetes ecosystem is growing and becoming more mature for hosting complex data-crunching tasks. You can abstract away a lot of boilerplate work while utilizing all available resources.

Security!

Right now, a lot of things related to Kubernetes are non-secure by default. Most people don't realize, until they start working on their company's internal clusters. After you have a functional cluster, there's a lot to take care of before it is fit to be maintainable, usable and reliable. Most of those have to do with security and observability in one way or another. Luckily, security is this year's focus in the Kubernetes contributor community! A lot of effort is being invested, and pretty much everybody will benefit from this. Audit logs have been mentioned in a few talks - a very useful feature once you have user management in place, and it's only available since Kubernetes 1.10. I'm looking forward to all developments around making clusters more secure, and easier to use right! Oh, and everything which makes multi-tenancy less of a pain.

You'll need a team for that

When starting out with a small internal cluster, you don't need to have all topics covered by the team, nor to know everything there is to know about Kubernetes. It's a learning process, your cluster is non-critical and pretty much nothing can go wrong. But if the workloads are growing, critical applications start running on your cluster(s) or you even start to run clusters which are accessible to the outside world, you'll need to reconsider. "Who's responsible for the security of your Kubernetes cluster?" is a very well meant question, which you should be able to answer once Kubernetes is an important part of your infrastructure. Other areas require the same level of attention though. You'll need people who are focusing completely on a single topic in the Kubernetes environment. There's no other way to keep up and do a proper job.
Nobody can know everything there is to know about running Kubernetes in a proper and reliable fashion.

Just One Talk

You haven't watched any of the talks, don't want to deal with something purely technical and would like to see something with lots of interesting takeaways? Check out my favourite talk of the event, most probably the best talk of the conference, on YouTube!
https://vsupalov.com/kubecon-europe-2018-takeaways/
CC-MAIN-2021-31
refinedweb
1,376
62.48
require-kernel

A reference implementation of a CommonJS module loader.

npm install require-kernel

This is an implementation of the CommonJS module standard for a browser environment.

Usage

The kernel is a code fragment that evaluates to an unnamed function.

Interface

Modules can be loaded either synchronously or asynchronously:

module = require(path)
require(path1[, path2[, ...]], function (module1[, module2[, ...]]) {})

The kernel has the following methods:

define: A method for defining modules. It may be invoked one of several ways. In either case the path is expected to be fully qualified and the module a function with the signature (require, exports, module).

require.define(path, module)
require.define({path1: module1, path2: module2, path3: module3})

setGlobalKeyPath: A string (such as "require" and "namespace.req") that evaluates to the kernel in the global scope. Asynchronous retrieval of modules using JSONP will happen if and only if this path is defined. Default is undefined.

setRootURI: The URI that non-library paths will be requested relative to. Default is undefined.

setLibraryURI: The URI that library paths (i.e. paths that do not match /^\.{0,2}\//) will be requested relative to. Default is undefined.

setRequestMaximum: The maximum number of concurrent requests. Default is 2.

Behavior

JSONP

If a global key path was set for the kernel and the request is allowed to be asynchronous, JSONP will be used to request the module. The callback parameter sent in the request is the define method of require (as specified by the global key path).

Cross Origin Resources

JSONP accomplishes CORS, so if such a request is possible to make, it is made; else, if the user agent is capable of such a request, requests to cross origin resources can be made; if not (IE[6,7]), the kernel will attempt to make a request to a mirrored location on the same origin ( becomes).

License

Released to the public domain.
In any regions where transfer to the public domain is not possible, the software is granted under the terms of the MIT License.
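To make the define/require contract described above concrete, here is a tiny Python model of a synchronous module registry. The real kernel is JavaScript and also handles asynchronous loading, JSONP and URI resolution; the class and attribute names here are illustrative only.

```python
# Minimal sketch of a CommonJS-style registry: define() registers a module
# function with signature (require, exports, module); require() runs it
# once and caches its exports.
class Kernel:
    def __init__(self):
        self._definitions = {}  # path -> module function
        self._cache = {}        # path -> exports

    def define(self, path, module_fn):
        self._definitions[path] = module_fn

    def require(self, path):
        if path not in self._cache:
            if path not in self._definitions:
                raise KeyError("module not defined: %s" % path)
            module = {"exports": {}}
            self._cache[path] = module["exports"]  # allow cyclic requires
            self._definitions[path](self.require, module["exports"], module)
            self._cache[path] = module["exports"]  # pick up reassignment
        return self._cache[path]
```

A module can either fill in exports or replace module["exports"] wholesale, mirroring the two styles CommonJS allows.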
https://www.npmjs.org/package/require-kernel
CC-MAIN-2014-10
refinedweb
335
58.99
ShareThis and FacebookLike Dynamic Content Modules plus a DropDown Property Type

Based on CMS 6.0

Sharing content with one's social networks is a popular topic in the web world today. It can act as a means of free advertising or more simply as a way of letting your network know when you come across something fun, interesting, etc. Adding ShareThis, AddThis, Facebook Like, etc. buttons is very easy if you work with HTML or javascript frequently. After all, you can get the code for each with a few minor configurations and clicks on the respective organizations' sites. However, all Content Authors/Editors may not understand HTML/javascript or want to deal with it. Or, maybe they just want a simple way to add and configure buttons at locations they desire through standard EPiServer methodologies. I spent a bit of time creating some Dynamic Content controls which will allow just this capability. For more on Dynamic Content read here. Otherwise, continue on with more details of my efforts.

Here's what I've created:

1. "ShareThis" Dynamic Content module
   a. The Dynamic Content item allows a Content Author (Editor) to add a ShareThis button anywhere on the page through a friendly GUI
      i. This may be beneficial for Content Authors who are not comfortable with adding this to the Editor HTML or Page Template ASPX
   b. It allows the Content Author to configure the text associated with the button (when applicable)
   c. It allows the Content Author to configure and select between different button types:
      i. Multiple Icons
      ii. Standard Icon
      iii. Rotating Icon
      iv. Vertical Counter
      v. Horizontal Counter
2. "Facebook Like" Dynamic Content module
   a. It allows the Content Author to configure the text associated with the button based on the pre-defined "Facebook Like" options:
      i. Like
      ii. Recommend
   b. It allows the Content Author to configure and select between different button types:
      i. Standard Icon
      ii. Button Count Icon
3. "DropDown" Custom Property
   a.
This includes custom settings based on the CMS 6 Settings on Properties functionality so that the user can configure the items listed in the DropDown
   b. I use this in my Dynamic Content items above, but created it with the specific intent of making this useful for other purposes as well

Attached you will find the code and a basic ReadMe.txt file with instructions to set this up. I'll beg your forgiveness for not creating an installer. :) You will have to add the code files to your project, compile, and update episerver.config. The code was written for the Alloy Technologies templates and CMS 6. The associated directories and namespaces align with this but could easily be changed for other templates. Also note, it would probably be an improvement to add a call to RegisterClientScriptBlock() for the script being added to the page. Other optimizations and edits are also possible. This is merely a conceptual starting point and a nice way to demo Dynamic Content and social features. Cheers.

Great stuff Jeff. Me like!

Thanks Helen. :)

Yes, well done!

Claiming love to this dynamic content!! "Nice-a-licius"
https://world.optimizely.com/blogs/Jeff-Wallace/Dates/2010/9/ShareThis-and-FacebookLike-Dynamic-Content-Modules-plus-a-DropDown-Property-Type/
CC-MAIN-2021-39
refinedweb
518
58.58
SAP Environment Compliance – BW Implementation Guide

Applies to

Development and support based on SAP BI 7.0 and EC 3.0

Summary

This document provides a step-by-step guide on how to configure SAP Environment Compliance as a source system of a BW system.

Author: Lakdawala Ashish
Company: IBM
Created on: 15th April 2014
Version: 1.0

Introduction

The SAP Environment Compliance application is implemented with SAP NetWeaver Web Dynpro, with a database to store all the data. In addition to this, the application provides an extraction layer which extracts data out of the database and transfers it into SAP BW. The extraction layer is a Java-based framework which uses JDBC to communicate with the database. It is responsible for the interaction with BW as well as for transferring the data from the database to BW. The following drawing shows a graphical picture of the technical composition of SAP Environmental Compliance and the interaction between the components application, database and Business Warehouse. Please note that the extraction layer of the application has bi-directional (read and write) access to the database, but it can only transfer data from the application (database) into the Business Warehouse. It is not possible to import data from the Business Warehouse into the application (database). This document provides a step-by-step guide on how to configure EC as a source system of a BW system.

Prerequisites

The minimum SP level listed below for each component is required before the installation of the EC BI Content.

Installation

The SAP EC BI Content needs to be installed via the Add-On Installation Tool (TCode – SAINT).

Namespace

The required namespaces are automatically installed after successful installation of the EC BI Content.
The namespaces for TDAG-BW objects are:

- /TDAG/ – Namespace for Technical Data objects
- /B123/ – Namespace for generated Technical Data objects

In addition to creating the namespaces, it is necessary to flag the two new namespaces as modifiable in TA SE03, because all delivered BW objects in these namespaces have to be activated, and this is only possible if the namespaces are marked as modifiable in the customer system. To do this, follow the steps below:

- Go to SE03
- Select "Set System Change options"
- Find the namespaces /B123/ and /TDAG/ and set them to "modifiable"

EC RFC Setup

Log into SAP NetWeaver Administrator (http://<host>:<port>/nwa/destinations) as SAP NetWeaver Admin and navigate to SOA Management. Under the Technical Configuration tab click on Destinations. Here you need to set up the BW RFC connection to the ERP system.

1. Set up the connection. Enter the ERP system name as destination (example "BW"). Set up the destination details. Click on Create and provide Connection and Logon data details as required. After finishing the destination setup you can test the connection by pinging the ERP destination system.
2. Set up a second connection to ERP. Enter the ERP system name plus PROVIDER as destination name (example "BW PROVIDER"). Use the same setup details as for the connection above.

After finishing the destination setup, the next step is to set up the JCo RFC Providers. Navigate in NWA to Configuration Management -> Infrastructure -> JCO RFC Provider. In this view create a JCo RFC Destination. Enter "XEM" as the Program ID and provide Host and Gateway Service details. Enter in the next step the RFC Destination Name like the step 2 destination "ERP Name + Provider" (example: "BW PROVIDER").

After successful setup of the JCo RFC Destinations, go to the EC application server. Log in to the EC Portal (http://<host:port>/irj) with an admin user. Navigate to EC Plant Compliance -> Configuration -> Integration -> RFC Configuration. Assign the newly created destination as the "BW System JCo Connection".
Click on Save and Restart Extractors. Next it is necessary to activate the RFC connection between the application and the Business Warehouse. Therefore we have to configure the BW property "bw.extractor.progid" (EC Plant Compliance -> Configuration -> Integration -> BW Properties) in order to link the application and the Business Warehouse by using the same program identifier.

BW: Maintain Source System XEM

In order to create EC as a source system of BW, go to RSA1 and create a Source System under External System with connection type TCP/IP. Provide the Logical System Name and Description as required, and make sure you have provided Type and Release "XEM020". Use Program ID "XEM". The program ID of the source system has to be identical to the program ID customized at EC Configuration -> Integration -> BW Properties (property bw.extractor.progid). To verify that the source system is created correctly, call transaction code WE20 and check if the Partner Profile – Partner Type Logical System is created with EC. If this is not the case, visit transaction code RSA1 and select Source Systems. Left click on the EC Source System and select the option Restore. When the restoring of the Source System is done, call transaction code WE20 again and check if the Partner Profile is active and created correctly.

After completion of all of the above described procedures and preparation, all BI Content objects required for SAP EC are automatically installed and can be activated from Business Content.

Related Information

SAP Environment Compliance Implementation Guide

Hi Ashish, your guide is very useful, thank you for sharing this knowledge. I have a question regarding the bw.extractor.progid parameter. This parameter doesn't appear in my system. Do you know how I can include this parameter? The EC screen doesn't permit creating new lines. I greatly appreciate your answer. Regards, John Barrero

Dear Ashish, where can I find the extractors to replicate and do the loads?
I can't find them on source system XEM. Regards, Fabrício

Hi Fabricio, please confirm that your XEM system type and release is "XEM020". Regards, Ashish

Hi Ashish! Great post! I have created some custom extractors, as per note "2029338 – Developing own BI extractors in SAP Environmental Compliance" and, after deploying the DCs to the Web AS Java and restarting the extractors in the configuration of EC, the extractors are registered (I can see the log of them being registered but nothing else), but the BW team says they cannot find the new InfoProviders on their side. Do you know if there is anything else that needs to be done beyond what the note mentions? Thank you!
https://blogs.sap.com/2014/04/15/sap-environment-compliance-bw-implementation-guide/
CC-MAIN-2020-45
refinedweb
1,045
54.22
MCP4725 is a single channel, 12-bit, voltage output Digital-to-Analog Converter with integrated EEPROM and an I2C Compatible Serial Interface.

Features

- 12-Bit Resolution
- On-Board Non-Volatile Memory (EEPROM)
- ±0.2 LSB DNL (typ)
- External A0 Address Pin
- Normal or Power-Down Mode
- Fast Settling Time of 6µs (typ)
- External Voltage Reference (VDD)
- Rail-to-Rail Output
- Low Power Consumption
- Single-Supply Operation: 2.7V to 5.5V
- I2C Interface: Eight Available Addresses, Standard (100 kbps), Fast (400 kbps) and High-Speed (3.4 Mbps) Modes

Again the easiest way to interface this to a Raspberry Pi is to purchase a module; these are available from many sources. Here is what my one looked like.

Layout

Just to be different we connect the MCP4725, and rather than using a scope to look at the output, we will connect it to an MCP3008 channel 0 and read the value.

Code

In this example we will set two values and read them back in. My module's I2C address was 0x60; you may need to change this. The original listing omitted the readadc() helper for the MCP3008, so a typical spidev implementation is included below to make the example runnable.

import spidev
import time
import Adafruit_MCP4725

# Open the SPI bus for the MCP3008 that reads the DAC output back in
spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 1000000

# Read the given channel (0-7) of the MCP3008 over SPI
def readadc(channel):
    adc = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((adc[1] & 3) << 8) + adc[2]

# Create a DAC instance.
#dac = Adafruit_MCP4725.MCP4725()
dac = Adafruit_MCP4725.MCP4725(address=0x60, busnum=1)

# Loop forever alternating through different voltage outputs.
print('Press Ctrl-C to quit...')
while True:
    dac.set_voltage(2048)  # 2048 = half of the 4095 full-scale value
    value = readadc(0)
    volts = (value * 3.3) / 1024
    print("%4d/1023 => %5.3f" % (value, volts))
    time.sleep(2.0)

    dac.set_voltage(4095, True)  # full scale, also persisted to EEPROM
    value = readadc(0)
    volts = (value * 3.3) / 1024
    print("%4d/1023 => %5.3f" % (value, volts))
    time.sleep(2.0)

Output

This is the output I saw; you can see the half and full Vdd values.

Links

MCP4725 I2C DAC Breakout module development board
http://www.pibits.net/code/raspberry-pi-and-mcp4725-dac-example.php
CC-MAIN-2019-04
refinedweb
285
65.32
lobo

public elm unit test runner

Features

- Support for elm-test and lobo-elm-test-extra test frameworks
- Default console reporter that displays a summary of the test run
- Watch mode that builds and runs the tests when the source code is updated
- Checks elm-package.json in base directory and test directory for missing source directories and packages
- Friendly error output

Prerequisites

The installation guide assumes that you already have the following installed:

Install

It is recommended to install lobo locally for your application and lobo-cli globally:

npm install lobo --save
npm install -g lobo-cli

Once they are installed you can run lobo via the following command:

lobo --help

Updating

After updating lobo, you may find that elm does not properly find the lobo elm code. To fix this delete your test elm-stuff directory.

Tests.elm

So that lobo can find all your tests it assumes that you have a Tests.elm file that references all the tests that should be run. lobo does not require an elm file containing a main function - this is provided for you in the lobo npm package.

elm-test

If you are using the elm-test framework your Tests.elm file should look like:

module Tests exposing (all)

import Test exposing (Test, describe)

all : Test
all =
    describe "Tests"
        [ anExampleTest
        , ...
        ]

elm-test-extra

If you are using the elm-test-extra framework your Tests.elm file should look like:

module Tests exposing (all)

import ElmTest.Extra exposing (Test, describe)

all : Test
all =
    describe "Tests"
        [ anExampleTest
        , ...
        ]
The following elm-test functions are not available in elm-test-extra:

- concat -> instead use describe

Note: the use of skip in lobo requires a reason to be specified

Typical Workflow

Assuming your application follows the recommended directory structure for an elm application:

elm-package.json      --> definition of the elm required packages
elm-stuff/            --> elm installed packages
node_modules/         --> npm installed modules
package.json          --> definition of the npm required packages
src/                  --> source code directory
tests/                --> test code directory
    elm-package.json  --> definition of the elm required packages for app & testing
    elm-stuff/        --> elm installed packages for app & testing
    Tests.elm         --> defines which tests are run by the test runner

Locally running the following command will start lobo in watch mode:

lobo --watch

lobo will then check that the elm-package.json files in the application directory and tests directory are in sync. If they are out of sync it will ask you if the tests elm-package.json can be updated. lobo will then attempt to build the tests; if this fails, the errors from elm make will be displayed. Once the build succeeds lobo will run the tests referenced in Tests.elm and report the result to the console. Once a build/run loop has completed, if lobo is running in watch mode (recommended) it will wait for changes in the source code and automatically repeat the build/run loop.

Options

The list of options available can be obtained by running:

lobo --help

For example lobo can be run with elm-test by running:

lobo --framework=elm-test

--compiler

The path to elm-package and elm-make

--debug

Disables auto-cleanup of temporary files.
This can be useful when debugging issues when combined with the verbose option

--failOnOnly

Exit with non-zero exit code when there are any only tests

--failOnSkip

Exit with non-zero exit code when there are any skip tests

--failOnTodo

Exit with non-zero exit code when there are any todo tests

--framework

Specifies the test framework to use. The default is elm-test-extra. To use elm-test use the following:

lobo --framework=elm-test

--noInstall

Prevents lobo from trying to run elm-package when running the tests. This can be useful when using lobo without an internet connection.

--noUpdate

Prevents lobo from trying to update the elm-package.json file in the tests directory. The default is to try and sync the elm-package.json files in the base and test directories.

--noWarn

Hides elm make build warnings. The default is to show warning messages

--prompt

Prevents lobo and elm tools asking your permission, and always answers "yes"

--quiet

Minimise the output to build and test summary information and errors

--reporter

The name of the reporter to use. Currently there is only one: default-reporter

--testDirectory

Specify the path to the tests directory. The default is "tests". This is useful if you have a non-standard directory setup and can be used as follows:

lobo --testDirectory="test/unit"

--testFile

Specify the relative path to the main elm tests file within the tests directory. The default value is "Tests.elm"

--verbose

Increases the verbosity of lobo logging messages. Please use this when reporting an issue with lobo to get details about what lobo was trying and failed to do.

--veryVerbose

Increases the verbosity of lobo logging to be very detailed.

--watch

Put lobo in an infinite loop that watches for changes and automatically reruns the build and tests when the source code has changed.
Note: Currently watch mode does not deal with changes to the elm-package.json source directories. If you change these you will need to exit watch mode and restart it.

Test Frameworks

The following test frameworks are supported:

elm-test-extra

elm-test-extra is the default framework; it is similar to elm-test with additions for running tests. The following options are supported by elm-test-extra:

- runCount - run count for fuzz tests; defaults to 100
- seed - initial seed value for fuzz tests; defaults to a random value

elm-test

To use elm-test lobo will need to be run with the framework option "elm-test" - see the options section for more information. The following options are supported by elm-test:

- runCount - run count for fuzz tests; defaults to 100
- seed - initial seed value for fuzz tests; defaults to a random value

Reporters

The following reporters are supported:

- default reporter
- JSON reporter
- JUnit reporter

Default Reporter

The default reporter displays a summary of the test run followed by details of any failures. When the failure is from an Expect.equal assertion it adds a visual hint for the source of the difference:

The following options are supported by the default reporter:

- hideDebugMessages - prevent reporting of any test Debug.log messages
- showSkip - report skipped tests and the reasons after the summary. This option is only available with elm-test-extra and is ignored when the quiet option is present
- showTodo - report todo tests and the reasons after the summary. This option is ignored when the quiet option is present

JSON Reporter

The JSON reporter outputs the progress and run details as JSON. This reporter is generally only useful when integrating lobo with other tools.
The following options are supported by the JSON reporter:

- reportFile - save the output to the specified file

JUnit Reporter

The JUnit reporter outputs progress and summary to the console and details of the test run to the specified report file. This reporter is mainly useful when integrating lobo with other build tools. The following options are supported by the JUnit reporter:

- diffMaxLength - the max length of diffed failure messages; defaults to 150 characters
- junitFormat - the formatting applied to failure messages - text or html; defaults to text
- reportFile - the path to save the test run report to

Troubleshooting

The argument to function findTests is causing a mismatch

If you are seeing an error similar to the following:

The argument to function `findTests` is causing a mismatch.

15| ElmTestExtra.findTests Tests.all
                           ^^^^^^^^^
Function `findTests` is expecting the argument to be:

    ElmTest.Runner.Test

But it is:

    Test.Internal.Test

Detected errors in 1 module.

Check that you have replaced all instances of import Test with import ElmTest.Extra

ReferenceError: _user$.....Plugin$findTests is not defined

If you are seeing an error similar to the following:

ReferenceError: _user$.....Plugin$findTests is not defined

Try deleting the test elm-stuff directory and re-running lobo

Contributions

Contributions and suggestions welcome! In the first instance please raise an issue against this project before starting work on a pull request.
https://www.npmjs.com/package/lobo
CC-MAIN-2018-17
refinedweb
1,371
51.99
This article discusses my journey toward having a web page run inside of a WinForm client and fire events back to it. The sample app which is included provides the implementation of this article. While the sample application could have been done in an easier fashion (all ASP.NET for example) the intent is to provide a working model that can implement more difficult designs. The application consists of a WinForm application which is broken into two sections: The application implements the IDocHostUIHandler interface which will allow MSHTML to do callbacks into my code. It then displays the webpage inside the IE container. When a button is clicked, a javascript component is called with a text description of what was clicked on. The javascript passes this data to the external method which will populate the form with data. If data is not found, the purchase button is disabled. To make the application work on your machine, alter the application config file (SampleEventProgram.exe.config) in the BIN\Release directory and change the path to where the sampleweb application is located on your machine. Not being a raw, C++ COM developer - or even knowing much about the internals of the IE engine - I began by researching on the Internet. The first day I searched the web for anything on putting IE inside a form. I bring this up because it led me to a webpage which appeared to be a BLOG. I read it out of interest to see why the heck it showed up in my search. In it, the author said: "In my growing years of development, I have had several unanswered questions arise. [...] Why is it so hard to implement a web browser inside a windows form and take control of it?" I probably should have taken this as a warning, but I plunged forward in my quest. After all, MS had years to improve this process... right?
Over here at CodeProject, I came upon a discussion thread that died with no conclusive help, as well as articles by Wiley Technology Publishing and Nikhil Dabas. The first article was well written, but the most important part of the piece (implementing IDocHostUIHandler and ICustomDoc) was taken offline and done in Delphi! Nikhil's article, however, had a fine discussion on implementing the interface as well as a deployed DLL for the interfaces in his sample application!

However, his deployment for hooking events required that you know the specific controls on the webform and then sink their click events. It also did not allow the web app to send any information back to the winform client. This is great for having the click events dive directly into code, but I needed the HTML object to tell me some information about what was clicked. So while I finally got IDocHostUIHandler implemented, I still did not get my last piece done and working. I was stuck for weeks in a continuous result of 'object null or not an object'. I had a few hints, such as looking into GetExternal, and I could swear that a post suggested using window.getExternal in my javascript. Obviously that didn't get me very far, since I have since learned that is not a valid javascript call. I also got some suggestions on implementing IDispatch. But nothing really seemed to take the final step of scripting my program.

A lengthy two-day discussion with CP member .S.Rod. finally led to a better understanding and a great assistance in getting everything tied together and working. The most interesting thing with all of this research is that I talked to maybe four different people and got four different implementation approaches. I am sure that in each of those, the person in the discussion had an approach that eventually worked for them. Unfortunately, it was not until my final discussion that I had one that got me past the null object problem.
The only other drawback to all of this research was that I found I was occasionally killing myself by taking input from several people, combining it all together, and having conflicts with what was already done. To make matters worse, I was given a new computer in the middle of all of this and spent two days getting everything back to normal! It was just when I was ready to walk away from this project for awhile that .S.Rod. was kind enough to pull everything together for me. Here are the final results, and a sample application to help guide others in their quest to control IE. For this application I am going to have a webpage present buttons and graphics for a product catalog. Clicking on a button in the webpage will populate the form with descriptions and activate the purchase button. Clicking on the purchase button in the winform will send Lefty Larduchi to your front door for some money. My first step was just to build the webpage (just plain HTML and javascript) and get it to a point of displaying stuff. Creating the form is not a problem. Just start a new C# Windows Form project, customize the toolbar and add Internet Explorer from the COM side of the fence. The form consists of a panel docked to the left, a slider, IE docked to the right, and two textboxes and a button that are inside the panel. Now one of the first steps in taking control of IE is to implement IDocHostUIHandler. Nikhil wrote a fine article on doing this, so I won't duplicate his efforts. You can cover that first step here. Make sure you keep track of the MSHtmHstInterop.dll piece of his sample application. I used the sample app to copy and paste the base IDocHostUIHandler implementations into my form. So after implementing IDocHostUIHandler, what else needs to be done? Well, in Nikhil's article his example would require that you know the controls that will be clicked and that someone click on that control. 
This is the code that accomplishes that:

private void WebBrowser_DocumentComplete(object sender,
    AxSHDocVw.DWebBrowserEvents2_DocumentCompleteEvent e)
{
    IHTMLDocument2 doc = (IHTMLDocument2)this.WebBrowser.Document;
    HTMLButtonElement button = (HTMLButtonElement)doc.all.item("theButton", null);
    ((HTMLButtonElementEvents2_Event)button).onclick +=
        new HTMLButtonElementEvents2_onclickEventHandler(this.Button_onclick);
}

I had to face an application requirement where we were showing major sections, with each section being just DHTML; each section had to provide me information about itself and then have the WinForm act upon that information. I found it interesting to find in all the numerous articles I read on this subject that Outlook deploys this WinForm/IE merge - just not in .NET!

In this example, we are using the javascript object window.external to interact with the form. So when a user clicks on a section it will fire a method in the script area. That method, via window.external, issues a call through MSHTML to the IDocHostUIHandler.GetExternal method, then uses the IDispatch methods to get the address of the method and call it. This next section is quoted from a discussion with .S.Rod. - I couldn't describe it better. The pieces involved are: window.external, IDocHostUIHandler.GetExternal(out object ppDispatch), ICustomDoc.SetUIHandler(object), IUnknown, GetIdOfName(), Invoke(), and the [InterfaceType(ComInterfaceType.InterfaceIsIDispatch)] attribute.

In the end, we have a sample html file, which reacts on clicks by calling javascript's window.external.MyMethod(). In order for this to work, the aforementioned object must be declared and must implement the MyMethod() method. In the sample application, that method is public void PopulateWindow(string selectedPageItem). It should be important to note at this point that any method which will interact at the COM level should be defined to always return void.
If there is a need to return data, that is done via the parameters, with the return parameters marked as out. If there is a need to return an error, for example, that is done by setting an HRESULT via System.Runtime.InteropServices. Setting the HRESULT is done in C# by doing a

throw new COMException("", returnValue);

where returnValue is an int value defined somewhere in your class, and is set to the value you want to raise.

In the sample application, the first step to exposing an object via IDispatch is to create the custom interface:

[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
interface IPopulateWindow
{
    void PopulateWindow(string selectedPageItem);
}

Then we implement the interface in a class definition:

public class PopulateClass : IPopulateWindow
{
    SampleEventProgram.Form1 myOwner;

    /// <summary>
    /// Requires a handle to the owning form
    /// </summary>
    public PopulateClass(SampleEventProgram.Form1 ownerForm)
    {
        myOwner = ownerForm;
    }

    /// <summary>
    /// Looks up the string passed and populates the form
    /// </summary>
    public void PopulateWindow(string itemSelected)
    {
        // insert logic here
    }
}

So what we have done here is create an interface that exposes IDispatch, we implemented that interface in the PopulateClass class definition, and we take in the constructor logic a pointer to our form. This gives access to the specific fields we choose. I'm going to need the class to be able to change the two textboxes as well as enable the button, so I have to go into the form code and change those three item definitions from private to public. Finally, I have to implement the last piece of code that will connect my webform to the class defined above: in the implementation for IDocHostUIHandler.GetExternal I need to set the object passed to an instance of my class.
In implementing IDocHostUIHandler, you should have taken the implementation from Nikhil's sample app and cut/pasted it into your program. Alter the necessary implementation as follows:

void IDocHostUIHandler.GetExternal(out object ppDispatch)
{
    ppDispatch = new PopulateClass(this);
}

This now ties your class to the window.external portion of mshtml, it ties the form to the new class definition, and it readies everything for processing. The class implementation basically acts as a go-between for the two worlds of System.Windows.Forms.Form and Microsoft.MSHTML and your web form.

The final step - before I write my code in the PopulateWindow method - is to pick which fields I want my class to access and change their definition from private to public, or, to follow better coding standards, add public accessors to those fields. In this sample, I exposed the various elements that were to be changed with public accessors.

Now that I have a working application as well as a working sample application, I have to wonder why it took so long to pull all of this information together. But now, here it is. In the sample application, when an HTML button is clicked, it calls the javascript method CallHostUI, passing it the name of the item clicked; CallHostUI in turn calls window.external.PopulateWindow().

With all of this working, I should add a note of warning. I have found that the Visual Designer code does not expect you to have an interface and class definition in front of your form definition. The result is that if you add a control or modify a control it visually appears to take, but no change in your code has actually occurred, and the change disappears once you close and reopen the project. More frustrating is when you add an event handler: you get the binding to the delegate, but no actual base method implementation.
Fortunately, all you need to do to work around this is to move your interface and class down to the bottom of your source code. This can provide a very rich form of client presentation as well as rich WinForm elements for processing data. In my particular example, I'm exposing webpages developed for our internal web UI presentation engine. When each section inside of a web page is moused over, the section is highlighted with a bright yellow border. Clicking on the section passes that information to my WinForm which expresses that section in a properties page display. The various built-in editors in the framework as well as custom editors we write will hook into that properties page to allow for simple modification of data. For example, changing color in a cell element pops up the color picker editor and changing a font pops up the font picker.
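Since the GetIdOfName()/Invoke() two-step described above can be hard to picture, here is a loose Python analogue of what IDispatch late binding does: resolve a method by name, then call it with the supplied arguments. This is only an illustration of the idea, not COM code, and every name in it is made up for the example.

```python
class External:
    """Plays the role of the object handed back by GetExternal."""
    def populate_window(self, selected_page_item):
        return "populating form for: " + selected_page_item

def invoke_by_name(obj, method_name, *args):
    # Step 1 (like GetIdOfName): resolve the name to something callable
    method = getattr(obj, method_name)
    # Step 2 (like Invoke): call it with the supplied arguments
    return method(*args)

result = invoke_by_name(External(), "populate_window", "widget-42")
```

The javascript side of the sample does the same thing implicitly: writing window.external.PopulateWindow(...) asks MSHTML to look the name up on the exposed object and invoke it.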
http://www.codeproject.com/Articles/4163/Hosting-a-webpage-inside-a-Windows-Form?fid=15529&df=90&mpp=10&sort=Position&spc=None&tid=3726144
This IRC Meeting is planned for 17:00 UTC on Wednesday, 23 November 2011. Please read the IRC Meetings page for details on adding items to the agenda.

Vote on Namespace Routing
The basic idea behind namespace routing is to avoid manually setting up several aliases for new controllers of new modules. Basically it's a cooperation between routing and MVC. Different routings and their outcomes can be seen here. More information is welcome.

Beta 2 Readiness
We need to close off loose ends for beta 2. During this portion of the discussion, we should determine what tasks are still outstanding, and a timeline for completing them. Initial roadmap for beta 2:
- Cache
- Log
- Locale and/or Translator if possible
http://framework.zend.com/wiki/pages/viewpage.action?pageId=49710496
8846 Credit for Employer Social Security and Medicare Taxes Paid on Certain Employee Tips Attach to your return. OMB No. 1545-1414 Department of the Treasury Internal Revenue Service Attachment Sequence No. Identifying number 98 Name(s) shown on return Note: Claim this credit ONLY for social security and Medicare taxes paid by a food or beverage establishment where tipping is customary for providing food or beverages consumed on the premises. See the instructions for line 1. Part I 1 2 3 4 5 Current Year Credit 1 2 3 4 Tips reported by employees for services performed after December 31, 1993, on which you paid or incurred employer social security and Medicare taxes during the tax year Tips not subject to the credit provisions (see instructions) Creditable tips. Subtract line 2 from line 1 Current year credit (see instructions). Multiply line 3 by 7.65% (.0765). If you have any tipped employee(s) whose wages (including tips) exceeded $60,600, check here Form 8846 credits from If you are a— Then enter Form 8846 credit(s) from— flow-through entities a Shareholder b Partner Schedule K-1 (Form 1120S) lines 12d, 12e, or 13 Schedule K-1 (Form 1065) lines 13d, 13e, or 14 5 6 6 Total current year credit. Add lines 4 and 5 Part II Tax Liability Limitation (See Who Must File Form 3800 to see if you complete Part II or file Form 3800.) 7a Individuals. Enter amount from Form 1040, line 40 b Corporations. Enter amount from Form 1120, Schedule J, line 3 (or Form 1120-A, Part I, line 1) c Other filers. Enter regular tax before credits from your return (see instructions) 8 Credits that reduce regular tax before the general business credit: 8a a Credit for child and dependent care expenses (Form 2441, line 10) 8b b Credit for the elderly or the disabled (Schedule R (Form 1040), line 21) 8c c Mortgage interest credit (Form 8396, line 11) 8d d Foreign tax credit (Form 1116, line 32, or Form 1118, Sch. 
B, line 12) 8e e Possessions tax credit (Form 5735) 8f f Orphan drug credit (Form 6765, line 10) 8g g Credit for fuel from a nonconventional source 8h h Qualified electric vehicle credit (Form 8834, line 19) i Add lines 8a through 8h 9 Net regular tax. Subtract line 8i from line 7 10 Tentative minimum tax (see instructions): a Individuals. Enter amount from Form 6251, line 26 b Corporations. Enter amount from Form 4626, line 13 c Estates and trusts. Enter amount from Form 1041, Schedule H, line 37 11 Net income tax: a Individuals. Add line 9 above and line 28 of Form 6251 b Corporations. Add line 9 above and line 15 of Form 4626 c Estates and trusts. Add line 9 above and line 39 of Form 1041, Schedule H 12 If line 9 is more than $25,000, enter 25% (.25) of the excess (see instructions) 13 Subtract line 10 or line 12, whichever is greater, from line 11. If less than zero, enter -014 Credit allowed for the current year. Enter the smaller of line 6 or line 13. This is your General Business Credit for 1994. Enter here and on Form 1040, line 44; Form 1120, Schedule J, line 4e; Form 1120-A, Part I, line 2a; Form 1041, Schedule G, line 2c; or on the appropriate line of other income tax returns The time needed to complete and file this form will vary depending on individual circumstances. The estimated average time is: Recordkeeping 6 hr., 13 min. Learning about the law or the form 30 min. Preparing and sending the form to the IRS 37 min. Cat. No. 16148Z 7 8i 9 10 11 12 13. If you have comments concerning the accuracy of these time estimates or suggestions for making this form simpler, we would be happy to hear from you. You can write to both the IRS and the Office of Management and Budget at the addresses listed in the instructions for the tax return with which this form is filed. Form 8846 (1994) Form 8846 (1994) Page 2 General Instructions Section references are to the Internal Revenue Code unless otherwise noted. 
Purpose of Form Certain food and beverage establishments (see Who Should File below) use Form 8846 to claim a credit for social security and Medicare taxes paid or incurred by the employer on certain employees’ tips. The credit is part of the general business credit under section 38 and is figured under the provisions of section 45B. You can claim or elect not to claim the credit any time within 3 years from the due date of your return on either your original return or on an amended return. the computation. The Federal minimum wage rate (since April 1, 1991) is $4.25 per hour. For example, an employee worked 100 hours and reported $300 in tips for January. The worker received $325 in wages (excluding tips) at the rate of $3.25 an hour. Because the Federal minimum wage rate was $4.25 an hour, the employee would have received wages, excluding tips, of $425 had the employee been paid at the Federal minimum wage rate. Thus, only $200 of the employee’s tips for January is taken into account for credit purposes. 1.45% (.0145). Subtract these tips from the line 3 tips, and multiply the difference by .0765. Then, multiply the tips subject only to the Medicare tax by .0145. Enter the sum of these amounts on line 4. All taxpayers must reduce the income tax deduction for employer social security and Medicare taxes by the amount of the current year credit on line 4. Who Must File Form 3800 If for this year you have more than one of the credits included in the general business credit listed below, a carryback or carryforward of any of the credits, or a credit from a passive activity, you must complete Form 3800, General Business Credit, instead of completing Part II of Form 8846), and ● Contributions to selected community development corporations (Form 8847). The empowerment zone employment credit (Form 8844), while a component of the general business credit, is figured separately on Form 8844 and is never carried to Form 3800. 
Specific Instructions Part I Complete lines 1 through 4 to figure the current year credit from your trade or business. Skip lines 1 through 4 if you are claiming only a credit that was allocated to you from an S corporation or a partnership. Who Should File Employers who meet both the conditions below should file: ● During the tax year, paid or incurred employer social security and Medicare taxes after December 31, 1993, on tips received by employees for services performed after December 31, 1993; and ● Have employees whose tips are received at food or beverage establishments for the provision of food or beverages consumed on the premises of the establishment. Tips are deemed to be received by the employee when a written statement identifying the tips is furnished to the employer by the employee as required by section 6053(a). Normally, the employee must report to the employer tips received during any month no later than the 10th day of the following month. An employer may require that tips be reported more often than monthly. For example, tips received by an employee during December 1994 and reported to an employer on December 30, 1994, are deemed to be paid in 1994. Tips received by an employee during December 1994 and reported to an employer on January 6, 1995, are deemed to be paid in 1995. However, tips received by employees in December 1993 and reported to the employer after December 31, 1993, are not included in the computation because the services were performed before January 1, 1994. S Corporations and Partnerships S corporations and partnerships figure their current year credit on lines 1 through 4, enter any credit from other flow-through entities on line 5, and allocate the credit on line 6 to the shareholders or partners. Attach Form 8846 to the S corporation or partnership return and show on Schedule K-1 each shareholder’s or partner’s credit. 
Line 1.—Enter the tips reported by employees for services performed after December 31, 1993, on which you paid or incurred employer social security and Medicare taxes during the tax year. Include only tips received from customers in connection with providing food or beverages for consumption on the premises of a food or beverage establishment where tipping is customary. Do not include tips for carryouts or tips to food deliverers such as pizza delivery persons. Line 2.—If you pay each tipped employee wages (excluding tips) equal to or more than the Federal minimum wage rate, enter zero on line 2. Figure the amount of tips included on line 1 that are not creditable for each employee on a monthly basis. This is the total amount that would be payable to the employee at the Federal minimum wage rate reduced by the wages (excluding tips) actually paid to the employee during the month. Enter on line 2 the total amounts figured for all employees. Line 4.—If any tipped employee’s wages and tips exceeded the 1994 social security tax wage base of $60,600 subject to the 6.2% (.062) rate, check the box on line 4 and attach a separate computation showing the amount of tips subject to only the Medicare tax rate of Printed on recycled paper Part II Line 7c.—Form 990-T filers, enter the total of either lines 35c and 37 or lines 36 and 37, whichever applies. Line 10.—Enter the tentative minimum tax (TMT) that was figured on the appropriate alternative minimum tax (AMT) form or schedule. Although you may not owe AMT, you must still compute the TMT to figure your credit. Line 12.—See section 38(c)(3) for special rules for married couples filing separate returns, for controlled groups, and for estates and trusts. Line 14.—If you cannot use part of the credit because of the tax liability limitation (line 14 is less than line 6), carry the excess to other years. The excess credit cannot be carried back to any year ending before August 10, 1993. 
See the separate instructions for Form 3800 for details. How the Credit Is Figured Generally, the credit equals the amount of employer social security and Medicare taxes paid or incurred by the employer on tips received and reported to the employer by the employee. However, the employer social security and Medicare taxes on those tips that are used to meet the Federal minimum wage rate applicable to the employee under the Fair Labor Standards Act are not used in
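The minimum-wage example in the instructions above (100 hours worked, $300 in reported tips, actual wages of $3.25/hour against the $4.25 federal minimum wage) can be reproduced numerically. This is only a sketch of the arithmetic described in the text, not tax advice, and the function name is made up:

```python
def creditable_tips(hours, reported_tips, hourly_wage, minimum_wage=4.25):
    # Tips used to bring the employee up to the federal minimum wage
    # are not creditable (line 2 of Form 8846).
    shortfall = max(0.0, hours * (minimum_wage - hourly_wage))
    return max(0.0, reported_tips - shortfall)

tips = creditable_tips(hours=100, reported_tips=300.0, hourly_wage=3.25)
credit = round(tips * 0.0765, 2)  # line 4: 7.65% of creditable tips
print(tips, credit)  # 200.0 15.3
```

This matches the example: $425 would have been payable at the minimum wage, $325 was actually paid, so $100 of tips is used to cover the shortfall and only $200 is taken into account for the credit.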
https://www.scribd.com/document/542921/US-Internal-Revenue-Service-f8846-1994
Gilbert Corrales and I met up last week here at Redmond, and he and I were talking about how I wanted to make use of an EventDispatcher approach to routing events around the code base (the analogy we came up with was "the difference between a cough and a sneeze" - well, maybe not a sneeze heh). EventDispatcher then led to a framework, and before you knew it, we were up to around 2am in my office coding, and this is what we ended up with (first cut).

Assume for a second that you have a room full of blind people with no sound, and in the middle you essentially have a machine that handles notifications around Sneezing and Coughing (or any other bodily functions you can think of). Let's also say that PersonA wants to know if anyone coughs (so he/she can react) or that PersonB wants to know if anyone Sneezes.

First things first, let's take a look at the "Room" itself, as it will be the host in this equation (assume it's a really smart room that can detect BodilyFunctions as they happen).

private void TheRoom()
{
    // Add People to the Dark Room.
    Person personA = new Person();
    Person personB = new Person();
    Person personC = new Person();

    // Define the curiousity of all Persons..
    personA.Name = "Scott";
    personA.DefineCuriousity("Sneeze");
    personB.Name = "Gilbert";
    personB.DefineCuriousity("Sneeze");
    personC.Name = "David";
    personC.DefineCuriousity("Cough");

    // If someone Sneezes/Coughs, let's tell everyone in the room
    // about it via a DisplayBoard (in this case, 3 text fields).
    NexusEvent BodilyFunctionEvent = new NexusEvent(OnBodilyFunction);
    EventDispatcher.Subscribe("Sneeze", BodilyFunctionEvent);
    EventDispatcher.Subscribe("Cough", BodilyFunctionEvent);

    // Ok, for arguments sake, lets force a bodily function to occur.
    personA.Sneeze();
    personB.Cough();
    personC.Sneeze();
}

Pretty self-explanatory, right?
Let's now take a look at the anatomy of a Person and see what makes them tick (don't worry, if you're squeamish or can't stand the sight of blood, that's ok, this is rated PG 13 and you won't be offended).

public class Person
{
    public string Name { get; set; }
    private string curiousity { get; set; }

    public void DefineCuriousity(string eventType)
    {
        // You can add your own Subscription Logic here
        // to each individual Person to react to a case of
        // either a sneeze or cough (ie Move Person 20px to the right
        // then let out a speech bubble "eeww!!!")
    }

    public void Sneeze()
    {
        this.PerformBodilyFunction("Sneeze");
    }

    public void Cough()
    {
        this.PerformBodilyFunction("Cough");
    }

    private void PerformBodilyFunction(string eventType)
    {
        PersonEventArgs prsn = new PersonEventArgs();
        prsn.PersonsBodilyFunction = eventType;
        prsn.PersonsName = this.Name;
        EventDispatcher.Dispatch(eventType, prsn);
    }
}

Tip: For those of you who are new to .NET, you will notice that the property Name only has a get; and set; but nothing else. Well, that's the power of Visual Studio working there: when you use the "prop" approach to setters/getters it automates the battle for you, so no more creating set/get values with references to hidden private properties.

Tip: I prefer to keep string pointers, as once you start embedding object references into various data packets that float around the code, you can sometimes get lost in a Garbage Collection nightmare. Play it safe, and agree that it's your job as a developer to keep public objects as identifiable and unique as you can, so others can find you!

This concept is something I've used for many years in other languages, and it's quite a nice tool for your day to day RIA solutions.
Depending on how you implement it, it can at times get you out of a bind fast, and it also does a really nice job of enforcing a layer of abstraction where at times it doesn't appear to be required (yet later saves your bacon, as the age old "ooh, glad I had that there now I think about it" does apply with this ball of code).

/// <summary>
/// Event delegate definition for Nexus related events.
/// </summary>
/// <param name="args">Arguments associated to the event.</param>
public delegate void NexusEvent(NexusEventArgs args);

/// <summary>
/// Implementation of a multi-broadcaster event dispatcher.
/// </summary>
public class EventDispatcher
{
    /// <summary>
    /// Holds a list of event handlers subscribed per event.
    /// </summary>
    private static Dictionary<string, List<NexusEvent>> _subscribers =
        new Dictionary<string, List<NexusEvent>>();

    /// <summary>
    /// Subscribes a handler to an event for deferred execution.
    /// </summary>
    /// <param name="evtName">Name of the event to which the handler will be subscribed.</param>
    /// <param name="eHandler">Handler to be executed every time the event gets dispatched.</param>
    public static void Subscribe(string evtName, NexusEvent eHandler)
    {
        List<NexusEvent> handlers;

        if (!_subscribers.TryGetValue(evtName, out handlers))
        {
            handlers = new List<NexusEvent>();
            _subscribers.Add(evtName, handlers);
        }

        handlers.Add(eHandler);
    }

    /// <summary>
    /// Removes a handler from an event for deferred execution.
    /// </summary>
    /// <param name="evtName">Name of the event from which the handler will be unsubscribed.</param>
    /// <param name="eHandler">Handler to be removed from being dispatched.</param>
    public static void RemoveSubscription(string evtName, NexusEvent eHandler)
    {
        List<NexusEvent> handlers;

        if (_subscribers.TryGetValue(evtName, out handlers))
        {
            handlers.Remove(eHandler);
        }
    }

    /// <summary>
    /// Broadcasts an event through its corresponding subscribers.
    /// </summary>
    /// <param name="evtName">Name of the event to be broadcast.</param>
    /// <param name="args">Arguments associated to the event being propagated.</param>
    public static void Dispatch(string evtName, NexusEventArgs args)
    {
        List<NexusEvent> handlers;

        if (_subscribers.TryGetValue(evtName, out handlers))
        {
            args.EventName = evtName;

            foreach (NexusEvent d in handlers)
            {
                d(args);
            }
        }
    }
}
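For readers who want the pattern without the C# ceremony, here is a minimal Python analogue of the dispatcher above - a dictionary of handler lists keyed by event name, with subscribe, remove and dispatch operations. It is an illustration of the same idea, not a port of the Silverlight code:

```python
class EventDispatcher:
    _subscribers = {}  # event name -> list of handlers

    @classmethod
    def subscribe(cls, event_name, handler):
        cls._subscribers.setdefault(event_name, []).append(handler)

    @classmethod
    def remove_subscription(cls, event_name, handler):
        if event_name in cls._subscribers:
            cls._subscribers[event_name].remove(handler)

    @classmethod
    def dispatch(cls, event_name, args):
        # Broadcast to every handler subscribed under this event name.
        for handler in cls._subscribers.get(event_name, []):
            handler(args)

heard = []
EventDispatcher.subscribe("Sneeze", lambda args: heard.append(args["name"]))
EventDispatcher.dispatch("Sneeze", {"name": "Scott"})
EventDispatcher.dispatch("Cough", {"name": "David"})  # no subscribers: ignored
```

The design choice is the same in both languages: dispatch by string name decouples publishers from subscribers, at the cost of losing compile-time checking of event names.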
http://blogs.msdn.com/msmossyblog/archive/2008/06/24/silverlight-how-to-write-your-own-eventdispatcher.aspx
Created on 2017-01-24 07:14 by elazar, last changed 2017-01-24 08:41 by levkivskyi. This issue is now closed.

The following does not work as expected:

```
from typing import NamedTuple

class A(NamedTuple):
    a: int
    def __repr__(self):
        return 'some A'
    def spam(self):
        print('spam!')

>>> a = A(5)
>>> repr(a)  # should be 'some A'
'A(a=5)'
>>> a.spam()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute 'spam'
```

This has already been reported and fixed in earlier issues. Now adding new methods works, but overwriting existing special attributes raises AttributeError:

```
class A(NamedTuple):
    x: int
    def spam(self):     # this works
        ...
    def _fields(self):  # this is an error (and also for __repr__ etc)
```

If you think that overwriting all special attributes should be allowed (or only some of them) then we could discuss this at the python/typing tracker.
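For reference, on current CPython versions (3.6.1 and later, where the original bug was fixed) methods defined in a typing.NamedTuple body - including a __repr__ override - do take effect, while overwriting reserved attributes such as _fields still raises AttributeError at class creation time. A quick check:

```python
from typing import NamedTuple

class A(NamedTuple):
    a: int
    def __repr__(self):
        return 'some A'
    def spam(self):
        return 'spam!'

a = A(5)
print(repr(a))   # the override is honored on fixed versions
print(a.spam())

# Reserved namedtuple attributes are still protected:
try:
    class B(NamedTuple):
        x: int
        def _fields(self):
            pass
except AttributeError as err:
    print('AttributeError:', err)
```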
https://bugs.python.org/issue29357
Difference between revisions of "Lab: Semantic Lifting - HTML"
Revision as of 07:49, 27 March 2020

Lab 10: Semantic Lifting - XML

Topics
Today's topic involves lifting data in XML format into RDF. XML stands for Extensible Markup Language and is commonly used for data storage/transfer, especially for websites. XML has a tree structure similar to HTML, consisting of a root element, children and parent elements, attributes and so on. The goal is for you to learn an example of how we can convert unsemantic data into RDF.

Relevant Libraries/Functions

import requests
import xml.etree.ElementTree as ET

- ET.parse('xmlfile.xml')

All parts of the XML tree are considered Elements.
- Element.getroot()
- Element.findall("path_in_tree")
- Element.find("name_of_tag")
- Element.text
- Element.attrib("name_of_attribute")

Tasks

Task 1
Lift the XML data about news articles by BBC_News into RDF triples. You can look at the actual XML structure of the data by pressing Ctrl + U when you have opened the link in a browser. The actual data about the news articles is stored under the <item></item> tags.

For instance, a triple should be something of the form: news_paper_id - hasTitle - titleValue

Do this by parsing the XML using ElementTree (see import above). I recommend starting with the code at the bottom of the page and continuing on it. This code retrieves the XML using an HTTP request and saves it to an XML file, so that you can view and parse it easily.

You can use this regex (string matcher) to get only the IDs from the full url that is in the <guid> data:

news_id = re.findall('\d+$', news_id)[0]

Task 2
Parse through the fictional XML data below and add the correct journalists as the writers of the news_articles from earlier. This means that e.g. if the news article was written on a Tuesday, Thomas Smith is the one who wrote it. One way to do this is by checking if any of the days in the "whenWriting" attribute is contained in the news article's "pubDate".
<data>
  <news_publisher name="BBC News">
    <journalist whenWriting="Mon, Tue, Wed">
      <firstname>Thomas</firstname>
      <lastname>Smith</lastname>
    </journalist>
    <journalist whenWriting="Thu, Fri">
      <firstname>Joseph</firstname>
      <lastname>Olson</lastname>
    </journalist>
    <journalist whenWriting="Sat, Sun">
      <firstname>Sophia</firstname>
      <lastname>Cruise</lastname>
    </journalist>
  </news_publisher>
</data>

If You Have More Time
Extend the graph using the PROV vocabulary to describe Agents and Entities. For instance, we want to say that the news articles originate from BBC, and that the journalists act on behalf of BBC.

Code to Get Started

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD
import xml.etree.ElementTree as ET
import requests
import re

g = Graph()
ex = Namespace("")
prov = Namespace("")
g.bind("ex", ex)
g.bind("prov", prov)

# URL of the XML data
url = ''

# Retrieve the XML data from the web URL.
resp = requests.get(url)

# Save the XML data to a .xml file.
with open('news.xml', 'wb') as f:
    f.write(resp.content)
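Task 1's lifting step can be sketched with just the standard library: find every <item> element and emit (subject, predicate, object) tuples. The sample feed below is invented for illustration (the real BBC feed is fetched over HTTP), and the predicate names hasTitle/hasPubDate follow the triple pattern suggested in the task, not a fixed vocabulary.

```python
import re
import xml.etree.ElementTree as ET

# A small stand-in for the BBC RSS feed; the real one comes from requests.get(url).
SAMPLE = """
<rss><channel>
  <item>
    <title>Example headline</title>
    <guid>https://www.bbc.co.uk/news/world-12345678</guid>
    <pubDate>Tue, 24 Mar 2020 10:00:00 GMT</pubDate>
  </item>
</channel></rss>
"""

root = ET.fromstring(SAMPLE)
triples = []
for item in root.findall("channel/item"):
    guid = item.find("guid").text
    news_id = re.findall(r"\d+$", guid)[0]  # keep only the trailing digits of the URL
    triples.append((news_id, "hasTitle", item.find("title").text))
    triples.append((news_id, "hasPubDate", item.find("pubDate").text))

print(triples)
```

In the actual lab you would wrap each tuple in rdflib terms (URIRef, Literal) and g.add() it, but the extraction logic is the same.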
Opencv rectangle drawing tutorial C++

OpenCV rectangle drawing tutorial by example in C++. Simple steps let you draw a rectangle inside pictures and a video sample. Several rectangle definitions and the rectangle drawing function show you various ways to draw rectangles inside a picture or video. It is simple and easy.

Steps in the rectangle drawing description

All the steps below the pictures (A, B, ..., F) are also marked inside the code. The comments inside the C++ code (//) describe the Rect definition used to define the rectangle drawn by the rectangle function. This function draws the defined rectangle inside Picture.

In rectangle(Picture, ...), Picture is the first parameter. This is just the Mat to draw into. The second parameter is the defined Rect (rectangle). You can pass the rectangle to this function in several ways. This is not too important; they all do the same job for you. I am using just one of them. Try to figure out the steps and compare the code with the images.

The third parameter is the color of the rectangle. Color is described by 3 numbers, Scalar(B, G, R): B for blue, G for green and R for red. You know the colors very well. There are 3 more parameters. After the color there is the thickness of the line; use whatever you want. Another one is the type of the line. For me this parameter is always 8; I don't care about dashed and solid line types. The last is shift. What the hell is that? I know, but do not care. Try the code and compare the steps with the code.
Enjoy!

Opencv rect Step A in code
Opencv rect Step B in code
Opencv rect Step C in code
Opencv rect Step D in code
Opencv rect Step E in code
Opencv rect Step F in code

Opencv c++ tutorial draw Rectangle code

#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    Mat Picture;
    Picture = imread("22.JPG");
    resize(Picture, Picture, Size(800, 600));

    //A Parameters: x (start on the horizontal axis), y (start on the vertical axis),
    //  w (width, horizontal length), h (height, vertical length)
    Rect RectangleToDraw(10, 10, 100, 100);
    rectangle(Picture, RectangleToDraw.tl(), RectangleToDraw.br(), Scalar(0, 0, 255), 2, 8, 0);
    imshow("DrawRectangle", Picture);
    int key4 = waitKey(2000);
    // save
    imwrite("1.jpg", Picture);

    //B Rectangle defined by 2 points
    Point A(10, 10);
    Point B(100, 100);
    Rect RectangleToDraw2(A, B);
    rectangle(Picture, RectangleToDraw2.tl(), RectangleToDraw2.br(), Scalar(0, 255, 255), 1, 8, 0);
    imwrite("2.jpg", Picture);

    //C x=100, y=100, w=200, h=200
    Rect RectangleToDraw3(100, 100, 200, 200);
    rectangle(Picture, RectangleToDraw3, Scalar(0, 250, 0), 2, 8, 0);
    imwrite("3.jpg", Picture);

    //D Scalar(255, 0, 0) color parameter: Blue 255, Green 0, Red 0
    Rect RectangleToDraw4(300, 300, 100, 100);
    rectangle(Picture, RectangleToDraw4, Scalar(255, 0, 0), 2, 8, 0);
    imwrite("4.jpg", Picture);

    //E The int value 10 is the thickness of the line
    Rect RectangleToDraw5(300, 300, 100, 100);
    rectangle(Picture, RectangleToDraw5, Scalar(255, 0, 255), 10, 8, 0);
    imwrite("5.jpg", Picture);

    //F Rect defined inside the drawing function; the value 4 is the line type
    rectangle(Picture, Rect(400, 400, 50, 50), Scalar(255, 255, 255), 2, 4, 0);
    imwrite("6.jpg", Picture);

    imshow("DrawRectangle", Picture);
    int key7 = waitKey(20);
    return 0;
}

It's a good idea, but the code on the first line isn't accepted on my machine. What is that?
The code is correct and works. I generated all the pictures here with this code. Maybe there are minor mistakes, but it works.
Traits

1. Methods
1.1. Public methods
Declaring a method in a trait can be done like any regular method in a class:

trait FlyingAbility {                        (1)
    String fly() { "I'm flying!" }           (2)
}

4. Properties
A trait may define properties, like in the following example:

trait Named {
    String name                              (1)
}
class Person implements Named {}             (2)
def p = new Person(name: 'Bob')              (3)
assert p.name == 'Bob'                       (4)
assert p.getName() == 'Bob'                  (5)

5. Fields
8. Extending traits
9. Duck typing and traits
10. Multiple inheritance conflicts
10.1. In case of a conflict, the method from the last declared trait in the implements clause wins. Here, B is declared after A so the method from B will be picked up:

def c = new C()
assert c.exec() == 'B'

11. Runtime implementation of traits
13. Advanced features
17. Self types
18. Limitations
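Groovy itself can't be run here, but Python's multiple inheritance gives a loosely analogous picture of the conflict-resolution rule above. The class names A, B, C mirror the doc's example; note the direction is reversed: Groovy's default picks the last declared trait, while Python's MRO picks the first listed base class.

```python
class A:
    def exec(self):
        return 'A'

class B:
    def exec(self):
        return 'B'

# Python resolves the conflict via the MRO: the FIRST listed base wins,
# whereas Groovy's default picks the LAST declared trait in `implements`.
class C(B, A):
    pass

# Explicitly choosing one parent, roughly like Groovy's `A.super.exec()`:
class D(B, A):
    def exec(self):
        return A.exec(self)

print(C().exec())  # -> 'B'
print(D().exec())  # -> 'A'
```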
Cocoon-Style Logging in Non-Cocoon Java Classes

This page explains how to implement Cocoon-style logging in a Java class that is not inherited from a Cocoon or Avalon component class. The class is however used within a Cocoon application. A typical use for this might be to implement logging in a Java Bean used in a Cocoon application.

How to Implement Logging

In Cocoon, like in Perl, there's more than one way to do it. In the case of logging the following solutions are possible:

1. Extend AbstractLogEnabled - The simplest solution, but it only works if your class doesn't already extend some other class.
2. Implement LogEnabled - A little more work, but more flexible.

=== Solution One: Extend AbstractLogEnabled ===

In your class you extend the AbstractLogEnabled class. To write log messages you simply call the appropriate log method using the Logger provided by the getLogger() method, which is available from the parent class AbstractLogEnabled.

import org.apache.avalon.framework.logger.AbstractLogEnabled;

public class SomeClass extends AbstractLogEnabled {
    public void someMethod() {
        ...
        getLogger().debug( "Hello, log. It worked!" );
        getLogger().info( "Hello, log. Here is info" );
        getLogger().error( "Hello, log. Here is an error" );
        //..etc.
        ...
    }
}

This works fine, provided your class doesn't already extend some other class.

=== Solution Two: Implement LogEnabled ===

Have your class implement the LogEnabled interface. A typical class might do the following:

import org.apache.avalon.framework.logger.Logger;
import org.apache.avalon.framework.logger.LogEnabled;

class SomeBean extends SomeOtherBean implements LogEnabled {
    ..
    // The LogEnabled interface is one method: enableLogging
    private Logger logger;

    public void enableLogging( Logger logger ) {
        this.logger = logger;
    }

    // Example method that writes to the log
    public void setThing( String thing ) {
        logger.debug( "SomeBean: thing = " + thing );
        ...
    }
}

Note that in this case you use the logger directly and don't need to use the getLogger() accessor method. A maintenance-aware developer would probably implement their own getLogger().

Enabling Logging

For both of these solutions you must enable logging by calling the enableLogging() method on the class. This requires that you have a valid Logger object to provide to enableLogging(). Generally you can get the Logger from a Cocoon component class. In my application I called enableLogging() from my Cocoon action class, which extends AbstractXMLFormAction:

...
SomeClass myClass = new SomeClass();
myClass.enableLogging( getLogger() );
myClass.someMethod(); // Writes some log messages
...

Note that many of the Cocoon classes extend Avalon Component classes. Remember to call enableLogging() before you call any methods that write log messages. In a Cocoon application it is not always obvious when to call enableLogging(), as the creation and initialization of many of your classes will be handled automatically by Avalon, one of the Cocoon sub-systems.

The ContainerUtil class provides a convenient way to enable logging:

ContainerUtil.enableLogging(object, logger);

This method takes care of the following issues:
- The logger is only passed to the object if it implements LogEnabled.
- If the logger is null, an exception is thrown.

Links to Avalon Documentation

To be absolutely sure that you are writing solid code, you'll need a basic understanding of the Avalon component life-cycle. This is a big subject and beyond the scope of this page. You can read more at:
- The Avalon Logkit, which is used by Cocoon:
- The Avalon Component Lifecycle:
If you're still curious, here is a link to an excellent white paper explaining development using Avalon:

...and that's all there is to it. Many thanks to Marcus Crafter, Judith Andres and KonstantinPiroumian for their helpful input.

-- AlanHodgkinson
Stage Five

The final stage in this application is to use an XML file to load initialization data. Up to this point, we have hardcoded the number of circles as well as their properties (color, radius, and velocity) into the Flash document. However, you can create an XML document from which you can load all that data at runtime so that you can make changes to the movie without having to reexport the .swf file.

Here are the steps you should complete to finish the fifth stage of the application:

1. Open a new text document and save it as circles.xml in the same directory as where you are saving your Flash documents.

2. Add the following content to your XML document and save it:

<collisionMovieData>
  <!-- Define the dimensions of the rectangle within which circles can move. -->
  <bounceArea width="200" height="200" />
  <!-- Define the circles that should be created. In this example you create
       four circles with random velocities and colors and with various radii. -->
  <circles>
    <circle radius="30" vel="random" col="random" />
    <circle radius="5" vel="random" col="random" />
    <circle radius="24" vel="random" col="random" />
    <circle radius="10" vel="random" col="random" />
  </circles>
</collisionMovieData>

3. Open stage4.fla and save it as stage5.fla.

4. Modify the code on the main timeline, as shown here (changes are in bold):

#include "DrawingMethods.as"
#include "MovieClip.as"
#include "TextField.as"
#include "Table.as"

function loadData ( ) {
  var myXML = new XML( );
  myXML.ignoreWhite = true;
  // Load the XML ...
Soundex is a phonetic algorithm, assigning values to words or names so that they can be compared for similarity of pronunciation. For this post I will write an implementation in Python.

It doesn't take much thought to realise that the whole area of phonetic algorithms is a minefield, and Soundex itself is rather restricted in its usefulness. In fact, after writing this implementation I came to the conclusion that it is rather mediocre, but at least coding it up does give a better understanding of how it works and therefore of its usefulness and limitations.

The Algorithm

The purpose of the algorithm is to create, for a given word, a four-character string. The first character is the first character of the input string. The subsequent three characters are any of the numbers 1 to 6, padded to the right with zeros if necessary. The idea is that words that sound the same but are spelled differently will have the same Soundex encoding. The steps involved are:

- Copy the first character of the input string to the first character of the output string.
- For subsequent characters in the input string, add digits to the output string according to the table below, up to a maximum of three digits (ie. a total output string length of 4). Note that a number of input letters are ignored, including all vowels. Also, further occurrences of an input letter with the same encoding are ignored.
- If we reach the end of the input string before the output string reaches 4 characters, pad it to the right with zeros.

Letter Encodings

This table lists the digits assigned to the letters A-Z. I have assigned 0 to letters which are ignored, and note that uppercase and lowercase letters are treated the same.

B, F, P, V             -> 1
C, G, J, K, Q, S, X, Z -> 2
D, T                   -> 3
L                      -> 4
M, N                   -> 5
R                      -> 6
A, E, H, I, O, U, W, Y -> 0 (ignored)

The Code

When you are ready to start coding, create a new folder and within it create the following empty files. You can also download the source code as a zip or clone/download it from Github if you prefer.

- soundex.py
- main.py

Source Code Links

Open soundex.py and enter this single function.
soundex.py

def soundex(name):
    """
    The Soundex algorithm assigns a 1-letter + 3-digit code to strings,
    the intention being that strings pronounced the same but spelled
    differently have identical encodings; words pronounced similarly
    should have similar encodings.
    """
    soundexcoding = [' ', ' ', ' ', ' ']
    soundexcodingindex = 1

    #           ABCDEFGHIJKLMNOPQRSTUVWXYZ
    mappings = "01230120022455012623010202"

    soundexcoding[0] = name[0].upper()

    for i in range(1, len(name)):
        c = ord(name[i].upper()) - 65
        if c >= 0 and c <= 25:
            if mappings[c] != '0':
                if mappings[c] != soundexcoding[soundexcodingindex-1]:
                    soundexcoding[soundexcodingindex] = mappings[c]
                    soundexcodingindex += 1
                if soundexcodingindex > 3:
                    break

    if soundexcodingindex <= 3:
        while(soundexcodingindex <= 3):
            soundexcoding[soundexcodingindex] = '0'
            soundexcodingindex += 1

    return ''.join(soundexcoding)

The function argument is the name to be encoded, and we then create a list of four spaces for the encoding which will be replaced by the actual encoding. The soundexcodingindex is initialised to 1 as this is where we will start adding numbers. Next a mappings string is created to represent the table above, and then the first character of the encoding is set to the first character of the input string.

Next we enter a for loop through the input string; note that the loop starts at 1 as we have already dealt with the first character. Within the loop we assign c to the current input letter's ASCII code using the ord function, converted to upper case. We then subtract 65 so the numeric value corresponds to the indexes of the mappings string.

Next we check the value is within the range 0 to 25, ie. an uppercase letter. If not it is ignored, but if so we check if its corresponding numeric value is not 0. We then check the value is not the same as the previous one, to implement the rule that consecutive identical values are skipped, and then set the next value of the output string to the correct number.
The soundexcodingindex is then incremented, before we check if it is more than 3; if so we break out of the loop.

Finally, we need to check if we have not yet filled up the encoding list, which can happen if there are not enough encodable letters in the input string. If this is the case we simply fill in the empty values with 0s in a while loop. Finally we return the encoding list converted to a string with join.

That's the algorithm implemented, so now open main.py and enter this function.

main.py

import soundex

def main():
    """
    Demonstration of the Soundex module, creating lists of name pairs
    and running them through the soundex method before printing results.
    """
    print("-----------------")
    print("| codedrome.com |")
    print("| Soundex       |")
    print("-----------------\n")

    names1 = ["Johnson", "Adams", "Davis", "Simons", "Richards", "Taylor",
              "Carter", "Stevenson", "Taylor", "Smith", "McDonald", "Harris",
              "Sim", "Williams", "Baker", "Wells", "Fraser", "Jones",
              "Wilks", "Hunt", "Sanders", "Parsons", "Robson", "Harker"]

    names2 = ["Jonson", "Addams", "Davies", "Simmons", "Richardson", "Tailor",
              "Chater", "Stephenson", "Naylor", "Smythe", "MacDonald", "Harrys",
              "Sym", "Wilson", "Barker", "Wills", "Frazer", "Johns",
              "Wilkinson", "Hunter", "Saunders", "Pearson", "Robertson", "Parker"]

    namecount = len(names1)

    for i in range(0, namecount):
        s1 = soundex.soundex(names1[i])
        s2 = soundex.soundex(names2[i])
        print("{:20s}{:4s}          {:20s}{:4s}".format(names1[i], s1, names2[i], s2))

main()

The main function first creates a couple of string lists, each pair of names being similar to some degree. To avoid hard-coding the list size, the next line picks it up using len. We then loop through the name pairs, calling the soundex function for each, and finally print out the names and their Soundex encodings.

Now run the program with this command in your terminal:

python3.7 main.py

This will give you the output. As you can see, the algorithm is not perfect.
Even with this small selection of names a few problems are apparent. Ignoring repeating values means Simons and Simmons are given the same encoding, and using only the first few letters means Richards and Richardson are also encoded the same. Ignoring vowels means that Wells and Wills, Sanders and Saunders, Parsons and Pearson are all given the same encoding despite not actually being homophones.
dynamicmindset

- 83% Jobs Completed
- 100% On Budget
- 100% On Time
- 33% Repeat Hire Rate

idTV online television
Prestashop modules

Recent Reviews

Scripts for irc - $75.00 USD
"Project was completed and installed. dynamicmindset stuck around after creating and installing to ensure I had full control over scripts." - KrazyMon, 4 years ago

A php script fixed to work with a different site - $30.00 USD
"Wow quick and professional at all times. Look forward to working with this freelancer on any projects I may need in the future. A++++++" - KrazyMon, 4 years ago

import .csv to Prestashop 1.3 - $30.00 USD
"Thanks, Good Job." - jccarlos, 5 years ago

Auto Uploader Bot - $50.00 USD
"dynamicmindset went above and beyond what I expected. Was professional, had great communication and helped me out along the way. Would definitely hire again for future projects. A+++" - KrazyMon, 5 years ago

Experience

Server admin, Sep 2011 - May 2012 (8 months): I had to administer a server and a couple of online shops.
Web Developer, Aug 2010 - Sep 2011 (1 year): Responsibilities: creating web sites, network administrator. Technologies: PHP (Zend Framework), MySQL, Javascript (jQuery), CSS, HTML, Shell Script

Education

Student, 2011 - 2013 (2 years)

Certifications

US English Level 1: 95%

Verifications

Facebook Connected, Preferred Freelancer, Payment Verified, Phone Verified, Identity Verified
Hi,

We've been away from SDL for some time. We may now update our SDL2 video game called "LettersFall 5". (The game uses SDL2, SDL2_Image, SDL2_Mixer, and SDL2_TTF and is 100% cross-platform.)

We have some technical questions:
(1) Does SDL2 work with Microsoft's new "Desktop Bridge"? (Desktop Bridge converts a Windows desktop application to a Windows 10 Store application.)
(2) Does SDL2 work with current Emscripten? (Emscripten converts a desktop application to a web application.) How do we tell SDL2 to use OpenGL so Emscripten will build the web application with WebGL?

Thanks in advance!

JeZxLee
16BitSoft Inc.
Video Game Design Studio

I'm developing a game that is using SDL2 and SDL2_image that works on emscripten. I haven't used SDL2_mixer or SDL2_TTF so I can't help with those. You may need some modifications to your program to work on emscripten. At least:

- Threads are not supported (at least not yet I think)
- Networking: can't use normal sockets
- Possible filesystem issues
- Emscripten should be able to emulate OpenGL and OpenGL ES functionality on WebGL, but it could be that not everything is supported
- You cannot just loop and draw frames in your program. Instead you need to give a callback to emscripten that will be called each frame.

You can check my simple example for more details:

The simple example project is using cmake to build for emscripten too. It has a .cmd file to build on Windows but it should work manually on Linux/Mac by running something like this in the project dir (emsdk must be in the path, or just give the full path to it):

mkdir build
cd build
emsdk activate latest
emcmake cmake -D CMAKE_BUILD_TYPE:STRING=Debug ..
emmake make

Another issue is that most browsers don't allow doing some javascript stuff when opening a local .html file.
To test my app without uploading to an actual server I use this command for my Chrome (on Windows, but the same options should work on other platforms too):

"c:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files --user-data-dir=c:\tmp\chrome_debug

It's also using a separate data dir so you can delete it easily to clear all caching, and it's not messing around with your normal browsing.

I can't share my actual project, but here are some cmake snippets to help you out to get more complicated stuff working:

if(EMSCRIPTEN)
    # Make emscripten output a html page instead of just the javascript (for easier testing).
    set(CMAKE_EXECUTABLE_SUFFIX ".html")

    # This should make it easier to debug runtime problems on the browser.
    if(CMAKE_BUILD_TYPE MATCHES Debug)
        set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -s ASSERTIONS=2 -s DEMANGLE_SUPPORT=1")
    endif()

    # Using c++11, SDL2 and SDL_image. Also using OPENGL_ES 2 emulation.
    # See for more options
    # I haven't fiddled around with this for a while and I'm not sure if all of this is actually necessary
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -s USE_SDL=2 -s USE_SDL_IMAGE=2 -std=c++11")
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -s USE_SDL=2 -s USE_SDL_IMAGE=2")
    set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -s USE_SDL=2 -s USE_SDL_IMAGE=2 -s FULL_ES2=1")

    #
    # Embedding asset files etc. to the javascript output:
    #
    # "For SDL2_image in order to be useful, you need to specify the image formats you are
    # planning on using with -s SDL2_IMAGE_FORMATS='["png"]'. This will also ensure that
    # IMG_Init works properly. Alternatively, you can specify 'emcc --use-preload-plugins'
    # but then you calls to IMG_Init will fail."
    # See for more info
    #
    # Had some issues with this, not sure if it works yet. see
    #set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -s SDL2_IMAGE_FORMATS='[\"png\",\"jpg\"]'")
    #set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} --use-preload-plugins")

    # This maps a dir called data in my build hierarchy to a virtual dir called data
    # embedded in the final javascript output.
    # Add more of these and modify the paths as necessary
    set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} --preload-file ../../../dist/data@data")

    # Another option you can try if having trouble with loading images:
    #set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -s STB_IMAGE=1")

    # Might need to play around with this if emscripten gives errors saying your program is too big.
    #set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -s TOTAL_MEMORY=67108864")
endif()

Thanks for the reply. Emscripten looks much more complicated than we had realized; we will put it aside for now. Has anyone tried the new Microsoft "Desktop Bridge" to convert an SDL2 Windows desktop app to a Windows 10 Store app? The Windows 10 Store is the future of Windows application distribution, which is why we have an interest in it. Thanks!

JeZxLee

I forgot to mention that the example github project I linked is compiling SDL from source. It's probably better to just use the USE_SDL=2 option instead (like in the cmake snippet above). That way SDL should work pretty much out of the box.

Fwiw, Emscripten was easier than I expected to port to; the biggest stumbling block was that your main loop has to become a function that Emscripten calls once per frame, and you can't ever block and read the event queue because that function has to return before more events will fire or your rendering will get to the screen. In practice, most games end up with a small ifdef to handle this and it's not a big deal, but depending on your game it can be a huge redesign problem.
It's worth noting that Emscripten has a built-in reimplementation of SDL 1.2 (written in pure JavaScript!), but SDL 2 (the same C library used everywhere else) can be compiled with Emscripten and has a backend for that platform. If you don't link with this, Emscripten will think you wanted its 1.2 implementation, and things will fail.

Thanks for the info. SDL2 with Emscripten will take some time, so we will put it aside for now. Our SDL2 C++ main loop is the following:

//-MAIN-LOOP------------------------------------------------------------------------
printf("Main loop started...\n");
while (visuals->CoreFailure != true && input->EXIT_Game != true)
{
    input->GetAllUserInput();
    visuals->CalculateFramerate();
    screens->ProcessScreenToDisplay();
    visuals->ProcessFramerate();
}
printf("...Main loop exited\n");
//------------------------------------------------------------------------MAIN-LOOP-

We are REALLY interested in submitting SDL2 games to the Windows 10 Store (with the use of the new Microsoft "Desktop Bridge" conversion software). If we have any issues with the conversion then we will report them back here. Thanks!

Out of curiosity - if you are using SDL2 already, you could directly create a UWP app for publishing in the store. Any reason why you want to use the Desktop Bridge instead?

---------- Original e-mail ----------
From: Jesse Palser noreply@discourse.libsdl.org
To: hardcoredaniel@seznam.cz
Date: 17. 6. 2017 5:19:12
Subject: Re: [SDL] SDL2 / Desktop Bridge / Emscripten Questions

"JeZxLee() June 17: Thanks for the info. SDL2 with Emscripten will take some time so we will put it aside for now. Our SDL2 C++ main loop is the following: //-MAIN-LOOP----------------------------------------------------------------"

Creating a UWP application would require us to use icky MSVS 17 Community, which we want to avoid. We highly prefer using the current free Code::Blocks C++ IDE on Windows. (We've had problems getting MSVS-compiled source code to compile on Linux, so we dumped MSVS.) It seems like "Desktop Bridge" might be the preferred method to make a Windows 10 Store SDL2 application. We have submitted an "Advanced Installer"-made *.MSI install file to Microsoft for approval to use Desktop Bridge. Thanks!

"Our SDL2 C++ main loop is the following:"

(If you aren't interested in Emscripten right now, this might be interesting to you later, or someone else might find it on Google, etc.)

So if I were to rework this for Emscripten, it would look something like this (untested!)...

#ifdef __EMSCRIPTEN__
#include <emscripten.h>
#endif

// this function runs once per frame, returns true if game should keep running the mainloop.
static bool mainloopIteration(void)
{
    if (visuals->CoreFailure == true || input->EXIT_Game == true) {
        return false;  // stop now.
    }

    input->GetAllUserInput();
    visuals->CalculateFramerate();
    screens->ProcessScreenToDisplay();
    visuals->ProcessFramerate();

    return true;  // keep going.
}

#ifdef __EMSCRIPTEN__
// calls mainloop, terminates if appropriate.
static void emscriptenMainloop(void)
{
    // strictly speaking, maybe you don't have to cancel the main loop, as maybe
    // the Emscripten port doesn't have a "quit" option on the main menu or whatever,
    // and the user "quits" by closing the browser tab. But you can terminate like this
    // and it's more or less like calling exit().
    if (!mainloopIteration()) {
        printf("...Main loop exited\n");
        emscripten_cancel_main_loop();  // this should "kill" the app.
    }
}
#endif

//-MAIN-LOOP------------------------------------------------------------------------
printf("Main loop started...\n");

#ifdef __EMSCRIPTEN__
emscripten_set_main_loop(emscriptenMainloop, 0, 1);
// This will now continue on until main() or whatever returns, and then
// call emscriptenMainloop 60 times a second. Do this right at the end of main(), and
// DON'T call SDL_Quit(), etc, on your way out the door!
return 0;  // just return now to prevent confusion.
#else
while (mainloopIteration()) { /* spin */ }
printf("...Main loop exited\n");
// shutdown things here, call SDL_Quit(), etc.
#endif
//------------------------------------------------------------------------MAIN-LOOP-

...which is basically your mainloop's body split out into a function, and some basic glue code for Emscripten/everything else.

You can avoid using Visual Studio (I still use 2015 but maybe it's not too different from 2017) for development, and just use a command line to build the final UWP app for upload to the store. This way you'll never need to use the IDE. I can send you the command line that I use for building with Jenkins if you want. It's a rather long line with many options, but includes building for 32 and 64 bit Intel + 32 bit ARM, plus signing, plus debugging info, plus building the final .appxupload package for the store.

Regards,
Daniel

---------- Original e-mail ----------
From: Jesse Palser noreply@discourse.libsdl.org
To: hardcoredaniel@seznam.cz
Date: 17. 6. 2017 15:36:43
Subject: Re: [SDL] SDL2 / Desktop Bridge / Emscripten Questions

"Creating a UWP application would require us to use icky MSVS 17 Community which we want to avoid. We highly prefer using the current free Code::Blocks C++ IDE on Windows. (We've had problems getting MSVS-compiled source code to compile on Linux so we dumped MSVS.) Seems like "Desktop Bridge" might be the preferred method to make a Windows 10 Store SDL2 application. We have submitted an "Advanced Installer"-made *.MSI install file to Microsoft for approval to use Desktop Bridge. Thanks!"
Fastai v2 daily code walk-thrus
Fastai v2 chat

Sep 10, 2019

Digging Deep in Transforms - 02_data_transforms.ipynb notebook

- There is no fixed order in v2 of the library for the DataBlock API, so you can't make the ordering mistakes that were possible in v1.
- filt knows which set a transform is applied to.
- Transform creates a callable object; after creating it you should call it and pass some data.
- _TfmMeta is the metaclass of Transform.
- ... stands for pass in Python (or do nothing).
- Foo2 = type('Foo2', (object,), {}) will create a class called Foo2.
- type is the class that constructs things.
- The reason for metaclasses is to change how Python works underneath the hood.
- A metaclass is a way to not use the type constructor, but some other constructor, in fastai v2.
- A metaclass is called every time we create a class (not when objects are created).
  3. Data model — Python 3.7.4 documentation
- A class's namespace is its __dict__ object.
- _TfmMeta uses _TfmDict instead of a plain dict as the namespace dict.
- It behaves as a normal dict except for cases when 'encodes' and 'decodes' parameters are passed.
- TypeDispatch allows the same function to work differently for different types.
- Method dispatching in Python can be single or multiple.
  Multiple Dispatching — Python 3 Patterns, Recipes and Idioms
  Method dispatching in Python
  PEP 443 — Single-dispatch generic functions | Python.org
- TypeDispatch has a dict funcs where the key is a type and the value is the function to call (NICE!)
- _p1_anno grabs the first parameter annotation.
- How to use a Transform subclass as a decorator? This will allow you to dynamically add functionality to a given transform later, outside the class definition.
- Using a class as a decorator requires it to be callable, but not in the __init__ sense; we need to reimplement the __call__() method. If the parameter is callable it will be added to the class's methods.
- __new__() should have the expected Shift + Tab autocomplete for the signature. The signature is customized in __new__() for Transform.
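The TypeDispatch idea noted above (a dict funcs mapping a type to the function to call, with the type taken from the first parameter annotation) can be sketched in a few lines. This is a deliberate simplification: the real fastai class also walks the type's MRO, handles decoding, and more.

```python
class TypeDispatch:
    """Minimal sketch: pick an implementation based on the type of the argument.
    Not the real fastai class, just the funcs-dict idea from the notes."""
    def __init__(self, *funcs):
        # Key each function by the annotated type of its first parameter,
        # mirroring what _p1_anno does in fastai v2.
        self.funcs = {}
        for f in funcs:
            anno = list(f.__annotations__.values())[0]
            self.funcs[anno] = f

    def __call__(self, x):
        f = self.funcs.get(type(x))
        if f is None:
            return x  # no registered implementation: identity, like a no-op Transform
        return f(x)

def enc_int(x: int): return x + 1
def enc_str(x: str): return x.upper()

td = TypeDispatch(enc_int, enc_str)
print(td(1), td("abc"), td(2.5))  # -> 2 ABC 2.5
```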
https://forums.fast.ai/t/fastai-v2-code-walk-thru-5/54405/10
#include <fei_DofMapper.hpp> A mapping from mesh-degrees-of-freedom (dofs) to equation-numbers. Mesh-dofs are represented by fei::Dof, see fei_Dof.hpp for details. Equation numbers are also called global indices. They are globally-unique and zero-based. A 'dof' may correspond to multiple global-indices if the dof's field has multiple scalar components, e.g. a vector field such as velocity in 3D. Fields are assumed to be scalar fields (have 1 component) unless a field-size is set using the setFieldSize method. Definition at line 77 of file fei_DofMapper.hpp. constructor Definition at line 80 of file fei_DofMapper.hpp. destructor Definition at line 84 of file fei_DofMapper.hpp. Set the specified field to have the specified field_size. 'field' is added to the internal field map if not already present. If 'field' is already present, its field_size is reset to the new value. Definition at line 159 of file fei_DofMapper.hpp.
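The mapping described above (each dof expands to field-size consecutive zero-based global indices) can be illustrated with a small Python sketch. This is a conceptual model only, not the fei C++ API; all names below are invented for illustration:

```python
class ToyDofMapper:
    """Conceptual model of fei::DofMapper: assign each (entity, field) dof a
    block of zero-based global indices, one per scalar component of its field."""
    def __init__(self):
        self.field_sizes = {}   # field name -> number of scalar components
        self.first_index = {}   # (entity, field) -> first global index
        self.next_index = 0

    def set_field_size(self, field, size):
        # Fields default to scalar (size 1) unless set here, as with setFieldSize
        self.field_sizes[field] = size

    def add_dof(self, entity, field):
        self.first_index[(entity, field)] = self.next_index
        self.next_index += self.field_sizes.get(field, 1)

    def global_indices(self, entity, field):
        first = self.first_index[(entity, field)]
        return list(range(first, first + self.field_sizes.get(field, 1)))

m = ToyDofMapper()
m.set_field_size("velocity", 3)   # vector field in 3D -> 3 components
m.add_dof("node0", "velocity")
m.add_dof("node0", "pressure")    # unset field -> scalar, size 1
```

Here `set_field_size` mirrors the role of `setFieldSize`: unset fields default to one scalar component, while a 3-component velocity field claims three consecutive global indices.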
http://trilinos.sandia.gov/packages/docs/r10.10/packages/fei/doc/html/classfei_1_1DofMapper.html
With the Python programming language, we can easily check whether an email address is valid. Once we find that an email address is valid, we can then check whether it actually exists. For validation we can normally use a regular expression to check all the conditions a valid email address must satisfy. In this article, I will show you the ways we can check whether an email address is valid and exists using Python. To check if an email address is valid in Python we can write a regular expression, but first we will look at what a normal address looks like. What is the Format of a Valid Email Address? To check whether an email address is valid, we have to look at the normal format of an email address. There are some conditions, and if an address fits these conditions we will consider it valid, even though it might not actually exist. A valid email address consists of an email prefix and an email domain, both in an acceptable format. 1. The prefix of the email address is to the left of the @ symbol. 2. The @ symbol is then followed by the domain of the email address. 3. The allowed characters are letters (a-z), numbers, underscores, periods, and dashes. 4. The final constraint on a valid email address is that an underscore, period, or dash must be followed by one or more letters or numbers. A few examples of valid email addresses are the following. bbc@gamil.com ali@alixaprodev.com A few examples of invalid email addresses are the following. abc..@gmail.com abc-@gmail.com Method 1: Check if an Email Address is Valid with a Python Regex A regular expression in Python is one way to find out whether an email address is valid. Creating our own regular expression will not always be a good approach in this scenario, but it lets us customize our own version of validation.
You can learn about regular expressions in Python from this article, which will show you how to write a good regular expression. For our example, we will use a general regular expression that can help us find out whether an email address is valid. Below is sample code that returns True if an email address is valid, using a Python regular expression. import re def is_valid_email(email): # \S+ means one or more non-whitespace characters; the pattern # requires prefix@domain with at least one dot in the domain part pattern = r"^\S+@\S+\.\S+$" return re.search(pattern, email) is not None print(is_valid_email("haxrataligmail.com")) The above code prints False, as the email address is not valid (it has no @ symbol). Although the above code works most of the time, it is sometimes hard to decide whether an email is valid using only a Python regular expression. If you want to create your own regular expression for email validation, then you definitely need to understand regular expressions along with the international standard format for email addresses, which can be found here in this link. Method 2: Check if an Email Address is Valid or Not with a Python Package Python provides a third-party package to find out whether an email address is valid. With this third-party Python package, you can easily check the validity of an email address. This Python package is named "validate-email-address" and is maintained by an experienced Python programmer, "xj9". This package makes sure the email address is valid and properly formatted, and can check whether the email really exists. We will check these one by one. About the validate-email-address Python package: this package contains only one function, validate_email, which takes two parameters: one is the email address and the other selects the type of validation. This package uses the RFC 2822 style of email address validation, which is a well-known standard for email addresses.
They claim that it is better than the standard Python library email.utils, which they say cannot detect some malformed addresses, and better than a plain check with the re module, Python's standard module for regular expressions. All it does is compare the input email address against a gigantic regular expression formatted according to the RFC 2822 grammar for email addresses. The Best Way to Use validate_email in Python To check if an email address is valid we can use the validate_email function, which returns True if the address is valid and False otherwise. Step 1. Install the third-party Python package validate-email-address. You can install it using pip as follows. pip install validate-email-address Step 2. Install the third-party Python package py3dns. Remember that the official documentation says you have to install DNS, which is an outdated Python package; the updated version can be found as py3dns. Either way, you can install it in the following way. pip install py3dns Step 3. Import and use the validate_email function. from validate_email_address import validate_email isvalid = validate_email('alixaprodev@gmail.com') print(isvalid) The above code prints True to the console, as the input is a valid email address. from validate_email_address import validate_email isvalid = validate_email('alixaprodev@gmail') print(isvalid) # output: False The above code prints False to the console, as the input is an invalid email address. Check if an Email Address Exists or Not with Python: although there might be many ways to check whether a given email address exists, sometimes even an address that looks valid might not exist.
So, to make sure we only send email to addresses that exist, we can first check whether the email address exists. The same Python package that we used for validation also provides the ability to check whether an email address exists. Look at the following code: it returns True if the email address exists and False otherwise. from validate_email_address import validate_email isExists = validate_email('haxratali0@gmail.com', verify=True) print(isExists) By passing one more optional argument to the function, it will use the Python DNS package to make sure the email address actually exists. It returns True if the email address exists and False otherwise. Summary and Conclusion: it is highly recommended to use the third-party Python package to check whether an email address is valid, but if you want to create your own validation rules then you will have to go through regular expressions and understand them well in the context of the Python programming language. If you have any questions regarding this article, please put them in the comment section and I will be happy to answer them all. If you are interested in Python programming, then please subscribe to my YouTube channel as well.
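As a side note on the standard-library comparison made earlier: email.utils.parseaddr only splits a header string into a (real name, address) pair and performs no validity checking of the address itself, which is why it is not enough on its own. A quick standard-library-only illustration:

```python
from email.utils import parseaddr

# parseaddr splits "Name <addr>" into a (realname, address) pair;
# it does not validate the address in any way.
print(parseaddr("Jose <jose@example.com>"))  # ('Jose', 'jose@example.com')
print(parseaddr("not-an-email"))             # ('', 'not-an-email') -- accepted, not rejected
```

Note how the second call happily returns the malformed string instead of rejecting it, so parseaddr is a parser, not a validator.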
https://www.alixaprodev.com/2021/08/how-to-check-if-email-address-is-valid.html
Using WMI with NSClient++ 0.4.0 Part 1: Command line tools Posted by Michael Medin at 2012-03-09. NSClient++ 0.4.0 First off, this is the upcoming 0.4.0. It is available in the latest nightly build, but I should probably not recommend installing that into your production environment. The “last” RC for 0.4.0 has just been released, so feel free to download that and let me know anything which breaks before you get started on this. Anyways, the first thing to know about NSClient++ command line syntax, if you don't know anything about it, is that it uses a concept similar to git and whatnot. Meaning the first “option” is a command and everything else works like options (with double dashes). So you can do: nscp help ... nscp client --exec ... The core defines a series of “contexts” such as settings, client, service, test, etc., but modules can also provide similar functionality. To use this we use the “client” mode, which is similar to starting NSClient++ before running the command and then shutting it down again afterwards. In our case we want to use the module called CheckWMI, so we need to add --module CheckWMI and end up with the following command: nscp client --module CheckWMI Now this is a bit of a handful to type, so there are a set of shorthand aliases you can use to reduce the amount of typing. In our case we want to use the wmi alias, which is equivalent to the above command: nscp wmi Namespaces So now that we know how to access the WMI command line, what can we actually do with it? Well, a lot actually, so let's start off by exploring namespaces. A namespace in WMI is a bit like a path or folder on your file system, or a package in your Java code. In other words, a hierarchical structure used to make it simpler to find things. The default namespace (if you do not specify one) is root\cimv2, which is where most of the normal system classes reside, but there are a lot of things in other namespaces, and most server components such as SQL Server and Exchange will provide their own namespaces.
So listing namespaces is a pretty important first step. So how can we figure out which namespaces are available? Conveniently, the CheckWMI plugin provides not just one command for this but two. The first, --list-ns, will list all child namespaces in a given parent namespace. The second, --list-all-ns, will list all children (and grandchildren) recursively from a given namespace. An important thing to notice is that the default namespace is always root\cimv2, which means that if you want to list ALL namespaces you need to specify the root namespace instead by adding the “--namespace root” option. nscp wmi --list-all-ns --namespace root root\subscription root\subscription\ms_41d root\subscription\ms_409 ... root\CIMV2 root\CIMV2\Security root\CIMV2\Security\MicrosoftTpm root\CIMV2\Security\MicrosoftVolumeEncryption root\CIMV2\ms_41d ... root\Microsoft\SqlServer root\Microsoft\SqlServer\ServerEvents root\Microsoft\SqlServer\ServerEvents\SQLEXPRESS root\Microsoft\SqlServer\ComputerManagement root\Microsoft\SqlServer\ComputerManagement\ms_409 ... As we can see here, SqlServer has its own namespace, so whenever we want to query from there we need to use the namespace option. An interesting side note, which I discovered after a few hours of googling for an API to list namespaces, is that namespaces are in fact instances of a class called __Namespace. Hence there is no API to list them, which now that I know it is kind of obvious, but this means that --list-ns is really a wrapper for --list-instances __Namespace. But let's not get ahead of ourselves. Now that we have hopefully understood namespaces, let's move on to the next logical component: classes. Listing Classes Classes are what WMI calls what I would call tables or objects. Basically you can think of a class as a type of object which has instances (rows) as well as methods, metadata and whatnot.
In NSClient++ we only deal with instances and attributes currently, but that will probably change in the next version (0.4.2). On the command line of NSClient++ there is just a single option (--list-classes) for listing classes, so it is pretty straightforward. The option takes an optional base class argument. This is probably on the advanced side of things, but since classes are hierarchical (think inheritance) you can filter on just a certain kind of base class. Most likely you won't be needing this if you are reading this. So all you are left with are two other options: one being --namespace, where you specify the namespace, the other --limit, where you can limit the result set. Here we have all the classes we can query under the root\Microsoft\SqlServer\ComputerManagement namespace.

nscp wmi --list-classes --namespace root\Microsoft\SqlServer\ComputerManagement
| __CLASS                             |
|-------------------------------------|
| __NotifyStatus                      |
| __ExtendedStatus                    |
...
| ClientNetworkProtocol               |
| ServerNetworkProtocol               |
| SqlServerAlias                      |
| ServerNetworkProtocolProperty       |
| ServerSettings                      |
| SqlServiceAdvancedProperty          |
| SecurityCertificate                 |
| ClientSettingsGeneralFlag           |
| ClientNetLibInfo                    |
| ServerNetworkProtocolIPAddress      |
| SqlService                          |
| RegServices                         |
| ClientNetworkProtocolProperty       |
| ServerSettingsGeneralFlag           |

So now we know our way around and can find a set of classes in a hierarchical namespace structure, which means we can start exploring what the classes can provide us with. Making Queries There are basically two ways to query information. The first is --list-instances, which lists all instances of a class, and the second is --select, where you ask a “WQL” query. I tend to almost always use the latter as it gives you more flexibility and power. First off, let's explain what WQL is. If you are familiar with SQL (or for that matter the filter syntax of NSClient++), you are spot on.
WQL (WMI Query Language) is a query language modeled on SQL, but it is a bit different, as WMI is an object-oriented data store and SQL usually deals with a relational data store. Regardless, for normal use your basic SQL skills will normally get you far enough. The main benefit of using WQL over listing instances is that the query language allows you to limit the information you get back.

nscp wmi --list-instances SqlService --namespace root\Microsoft\SqlServer\ComputerManagement
| AcceptPause | AcceptStop | BinaryPath | Dependencies | Description | DisplayName | ErrorControl | ExitCode | HostName | Name | ProcessId | SQLServiceType | ServiceName | StartMode | StartName | State |
|-------------|------------|------------|--------------|-------------|-------------|--------------|----------|----------|------|-----------|----------------|-------------|-----------|-----------|-------|
| TRUE | TRUE | "c:\Program Files (x86)\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\sqlservr.exe" -sSQLEXPRESS | UNKNOWN | Provides storage, processing and controlled access of data and rapid transaction processing. | SQL Server (SQLEXPRESS) | 1 | 0 | MIME-LAPTOP | Unknown | 2780 | 1 | MSSQL$SQLEXPRESS | 2 | NT AUTHORITY\NetworkService | 4 |
| TRUE | TRUE | "c:\Program Files (x86)\Microsoft SQL Server\90\Shared\sqlbrowser.exe" | UNKNOWN | Provides SQL Server connection information to client computers. | SQL Server Browser | 1 | 0 | MIME-LAPTOP | Unknown | 3636 | 7 | SQLBrowser | 2 | NT AUTHORITY\NetworkService | 4 |

Versus:

nscp wmi --select "select DisplayName, State, ProcessId from SqlService" --namespace root\Microsoft\SqlServer\ComputerManagement
| DisplayName             | ProcessId | State |
|-------------------------|-----------|-------|
| SQL Server (SQLEXPRESS) | 2780      | 4     |
| SQL Server Browser      | 3636      | 4     |

The last one is a lot more readable and hopefully contains the information you actually want. And if you really want all the information you can still select * from ... to get exactly the same result as --list-instances. So to be fair I don't really see a point to using the --list-instances option.

Remote machines

Another nifty thing you can do is make remote queries. There is a set of options (--computer, --user and --password) which can be used to run this remotely against another machine on your network. Remember the --list-all-ns command we used before? Here is the same command targeting a virtual machine remotely.

nscp wmi --list-all-ns --computer mmedin-vm --user YYY --password XXX --namespace root
\\mmedin-vm\root\ServiceModel
\\mmedin-vm\root\SECURITY
\\mmedin-vm\root\MSAPPS12
...
\\mmedin-vm\root\CIMV2
\\mmedin-vm\root\CIMV2\ms_409
\\mmedin-vm\root\CIMV2\Applications
\\mmedin-vm\root\CIMV2\Applications\MicrosoftIE
...
\\mmedin-vm\root\subscription
\\mmedin-vm\root\subscription\ms_409
\\mmedin-vm\root\nap

Naturally, all commands you can do locally will also work remotely, so you can also query for information as well as list namespaces, classes and instances.

Scripts

Since this is an internal command, there are APIs available so you can use these commands from scripts as well. To demonstrate this I will show a simple Python script which lists all classes in all namespaces.
To do this we use the --list-all-ns command to list all namespaces, then loop through the list, and for each namespace we call --list-classes with that namespace. To make things simpler to work with from a scripting perspective, there is an option we can use to simplify the output: --simple will return the data as a comma-separated list, which is simpler to parse in our Python script. The script in its entirety looks like this:

from NSCP import Core
core = Core.get()

def __main__():
    # List all namespaces recursively
    (ret, ns_msgs) = core.simple_exec('any', 'wmi', ['--list-all-ns', '--namespace', 'root'])
    for ns in ns_msgs[0].splitlines():
        # List all classes in each namespace
        (ret, cls_msgs) = core.simple_exec('any', 'wmi', ['--list-classes', '--simple', '--namespace', ns])
        for cls in cls_msgs[0].splitlines():
            print '%s : %s'%(ns, cls)

Next post in this series

This ends this installment of “Using WMI with NSClient++”. In the next section I will show how to use the various check commands you can use from a monitoring tool such as Nagios or Icinga to make sure your servers are working.
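One closing side note: since WQL is just a restricted SQL dialect, assembling queries like the --select examples above programmatically is plain string work. Here is a tiny illustrative helper (hypothetical; not part of NSClient++ or its APIs):

```python
def build_wql(cls, columns=None, where=None):
    """Assemble a simple WQL SELECT statement (illustrative helper only)."""
    cols = ", ".join(columns) if columns else "*"
    query = "select %s from %s" % (cols, cls)
    if where:
        query += " where %s" % where
    return query

# Reproduces the query used in the --select example above
q = build_wql("SqlService", ["DisplayName", "State", "ProcessId"])
print(q)  # select DisplayName, State, ProcessId from SqlService
```

Such a helper could be dropped into an NSClient++ Python script to build the argument passed after --select, instead of hand-writing each query string.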
https://www.medin.name/blog/2012/03/09/using-wmi-with-nsclient-0-4-0-part-1-command-line-tools/
-Apr-18) PDF page: N/A Safaribooks, section: Building Forms This code is not working as of 13th April, 2018. Already this issue is reported on the errata section but that contains typo in the solution. We have to replace the :empty atom with and empty map %{}. See the code given below, Code with error: def changeset(model, params \\ :empty) do model |> cast(params, ~w(name username), []) |> validate_length(:username, min: 1, max: 20) end Working version: def changeset(model, params \\ %{}) do model |> cast(params, ~w(name username), []) |> validate_length(:username, min: 1, max: 20) end--Md Shahriar Anwar - Reported in: P1.0 (30-May-16) PDF page: NA I am reading the online version on safari. I can't locate the page number on it. In chapter 6, "Managing Related Data", after adding the video relationship to the user, the index action on page refresh generates an error because the logged in user does not have any videos associated. Don't we have to handle that? Right now its giving me an error--Sam - Reported in: P1.0 (30-Jan-18) PDF page: 1 I'd change "... to know most of our ideas aren’t new." to "... to know that most of our ideas aren’t new.". I'd also cut down on the use of "so many" here: "It’s the combination of so many of the best ideas from so many other places that has so many people excited."--Rich Morin - Reported in: P1.0 (01-Feb-18) PDF page: 4.5 Safari Books Section "Building Forms" In Ecto 2.2.8 the :empty atom has been deprecated, so an empty map is required. With this change validate_length will not work on it's own, causing a Postex error. 
Book Code: def changeset(model, params \\ :empty) do model |> cast(params, ~w(name username), []) |> validate_length(:username, min: 1, max: 20) end Working Code def changeset(model, params \\ :{}) do model |> cast(params, ~w(name username), []) |> validate_required([ :username]) |> validate_length(:username, min: 1, max: 20) end - Reported in: P1.0 (30-Jan-18) PDF page: 5 I'd remove most of the blanks before UserController in this line: get "/users", UserController, :index Also, doesn't the "/api/" scope need to precede the "/" scope?--Rich Morin - Reported in: P1.0 (30-Jan-18) PDF page: 7 I'd change "... communicated to its own process ..." to "... communicated with its own process ...".--Rich Morin - Reported in: P1.0 (19-Aug-16) PDF page: 9 emulate failures by crashing a database connections on purpose,... should be emulate failures by crashing a database connection on purpose,... --Kosmas Chatzimichalis - Reported in: P1.0 (30-Jan-18) PDF page: 10 I'd change "... and if those pieces ..." to "... and whether those pieces ...".--Rich Morin - Reported in: P1.0 (14-May-16) PDF page: 18 The original, with surrounding text for context, is "you’ll want to separate the code that calls another web server, or fetches code from a database, from the code that processes that data." 
It probably should be "or the code that fetches from a database", or "fetches data from a database" instead of "or fetches code from a database".--Ted - Reported in: P1.0 (29-Dec-16) PDF page: 19 Problem: root@backdraft:~# mix local.hex ** (UndefinedFunctionError) undefined function :inets.stop/2 (module :inets is not available) :inets.stop(:httpc, :mix) (mix) lib/mix/utils.ex:355: Mix.Utils.read_httpc/1 (mix) lib/mix/utils.ex:301: Mix.Utils.read_path/2 (mix) lib/mix/local.ex:107: Mix.Local.read_path!/2 (mix) lib/mix/local.ex:86: Mix.Local.find_matching_versions_from_signed_csv!/2 (mix) lib/mix/tasks/local.hex.ex:30: Mix.Tasks.Local.Hex.run/1 (mix) lib/mix/cli.ex:58: Mix.CLI.run_task/2 Solution: sudo apt-get install erlang-ssl erlang-inets--Chris Compton - Reported in: P1.0 (29-Dec-16) PDF page: 19 On page 19 you tell the user to check the postgresql version number, but do not tell them to actually create a user. Therefore, when mix ecto.create is run on page 21, it will fail. Even in the Ecto chapter, on page 54, you don't clarify that the postgresql user will require createdb permission (even if the db is already created (bad Ecto!!)). You should tell the reader that they need a create user command like this: createuser --createdb --encrypted --login --pwprompt --no-createrole --no-superuser --no-replication wwalker Users may get this: ** (Mix) The database for Hello.Repo couldn't be created: ERROR (insufficient_privilege): permission denied to create database I find it referenced in many places, like in the phoenix forums on "Is it necessary to create a postgresql role with superuser privilege" in Phoenix Talk in Groups.google : "Depending on how you are creating the database you may not even need the "Create DB" attribute." --Wayne Walker - Reported in: P1.0 (29-May-16) Paper page: 19 "Good online resources3 exist, but we recommend the excellent book , by Dave Thomas, ..." I believe the name of the book was left out. Or maybe a hyperlink? 
I'm not sure what was intended to go there. Notice that there is a space on both sides of the second comma.--David Tang - Reported in: P1.0 (18-May-16) Paper page: 21 The complete output of "mix phoenix.new hello" should be formatted as output in the book. The output from "We're all set!" to "$ iex -S mix phoenix.server" looks like the normal book text and it isn't clear when the command output stops and the commentary resumes. The output of the command is also much different in 1.14, but that is far less confusing that the mixed up formatting.--Andrew Kappen - Reported in: P1.0 (28-May-16) PDF page: 21-27 I just try to install the phoenix framework following the book (mix phoenix.new hello; cd hello; mix ecto.create;), and I find a error on this last commad. This error doesn´t prevent to run the server (using mix phoenix.server), but I think it could causes problems on the future, with the connections to the database or something... Here is the problem (...more lines of info...) Consolidated List.Chars Consolidated String.Chars Consolidated Collectable Consolidated Enumerable Consolidated IEx.Info Consolidated Inspect ** (ArgumentError) argument error (stdlib) :io.put_chars(:standard_error, :unicode, [[[], <<42, 42, 32, 40, 77, 105, 120, 41, 32, 84, 104, 101, 32, 100, 97, 116, 97, 98, 97, 115, 101, 32, 102, 111, 114, 32, 72, 101, 108, 108, 111, 46, 82, 101, 112, 111, 32, 99, 111, 117, 108, 100, 110, 39, 116, 32, 98, 101, ...>>], 10]) (mix) lib/mix/cli.ex:67: Mix.CLI.run_task/2 (elixir) lib/code.ex:363: Code.require_file/2 The problem seems to be that the config/devs.exs database password is not setted up. It something that you don´t say on the book, but on the web. I hope you do not displease me tell, I only do for improve the next edition. There is another thing that it is not on the book but is on the web: when you make a new controller (using the router.ex), and a new template, it seems to be enough as to make the request to the new action... 
but it is not true (I think), we need to make a view handler (¿no?). --Eloy Fernández - Reported in: P1.0 (25-Jun-16) PDF page: 22 2nd paragraph talks about .ex being a "extension for compiled elixir files". IMHO that should be changed to "is the extension for elixir source files" -- they're getting compiled and are source files, not the compiled byte code files.--Stefan Eletzhofer - Reported in: P1.0 (19-Aug-16) PDF page: 24 ... please define a clause for render/2 ... should be ... please define a matching clause for render/2 ... --Kosmas Chatzimichalis - Reported in: P1.0 (18-May-16) Paper page: 24 "We call the functions invoked by the router on our controller's actions, but don't get confused." is confusing (ironically) and the grammar is wrong. This might be better: "Actions are what we call the functions invoked by the router on the controller, but don't get confused."--Andrew Kappen - Reported in: P1.0 (30-Nov-16) PDF page: 25 "Replace the first route in web/router.ex with this one:" I suggest to add a hint, that changing the route will result in an error, when sending a request WITHOUT a "name" param because in get "/hello/:name", HelloController, :world :name is required and not optional. Calling localhost:4000/hello will result in Phoenix.Router.NoRouteError at GET /hello no route found for GET /hello (Hello.Router) --Andy Wenk - Reported in: P1.0 (18-May-16) Paper page: 25-26 Include the assignment of third to :bears in the first example: ". . . and it can do so by assigning first to :lions, second to :tigers, and third to :bears." Because the assignment wasn't included with the other two it made me wonder if I was misunderstanding something. 
You might also mention on the next page (26) that first and third still get assigned in the example where you are testing that the third element in the tuple is :bears.--Andrew Kappen - Reported in: P1.0 (30-Jan-18) PDF page: 26 I'd change "Tx" to "TX".--Rich Morin - Reported in: P1.0 (28-Sep-16) PDF page: 26 Super wishlist... After assigning values to `austin`, it would be nice if you had showed how they can be used. Something like: iex(3)> Place.city(austin) "Austin" iex(4)> Place.texas?(austin) true --Scott Bronson - Reported in: P1.0 (07-May-16) PDF page: 26 In the mobi edition at around location 776 there are blocks from the next chapter. The block that was atom keys vs strings in the pdf seems to have been replaced stuff from further up the input stream.--Charles Paterson - Reported in: P1.0 (22-Jun-17) PDF page: 27 it is a parameter, not a different route. so the line says "point your browser to localhost:4000/hello/phoenix." should say, "localhost:4000/hello?name=phoenix" this worked locally, if that is not correct, what did i miss? thanks.--Baskin Tapkan - Reported in: P1.0 (26-May-17) Paper page: 27 navigating to localhost:4000/hello/phoenix generates the error: Phoenix.Router.NoRouteError at GET /hello/phoenix but navigating to localhost:4000/hello?name=phoenix works as expected.--Jason Underdown - Reported in: P1.0 (02-May-16) PDF page: 29 Box "José says": Photo with image of José is missing (it displays a red cross). Also on page 32, 64, 225, 249.--Robert - Reported in: P1.0 (10-Oct-16) PDF page: 29 There is a missing image icon in the "José says" box indicating something did not render properly when the pdf was generated.--Andrew Cain - Reported in: P1.0 (06-May-16) PDF page: 29 José's avatar is broken (a small red cross appears instead of his face).--Jaime Iniesta - Reported in: P1.0 (19-Aug-16) PDF page: 31 the line: root: Path.dirname(__DIR__), does not appear in the generated config/config.exs from the latest phoenix release. 
--Kosmas Chatzimichalis - Reported in: B10.0 (21-Apr-16) PDF page: 33 double slashes in it between 'raw' and 'master' in the archive.install URL--Dorian Mcfarland - Reported in: P1.0 (21-Jul-16) Paper page: 34 The bullet list should have an entry for the router.--Rich Morin - Reported in: P1.0 (30-Jan-18) PDF page: 37 I'd change "... on building the controllers, views, and templates." to "... on building controllers, views, and templates.".--Rich Morin - Reported in: P1.0 (30-Jan-18) PDF page: 39 There's a free-floating dollar sign just above "First, run mix ecto.create to prep ...".--Rich Morin - Reported in: P1.0 (05-Jun-16) PDF page: 41 Is: user = %{usernmae: "jose", password: "elixir"} should_be: user = %{username: "jose", password: "elixir"} typo in username--Bartosz Nowak - Reported in: P1.0 (29-Apr-16) PDF page: 41 iex> user = %{usernmae: "jose", password: "elixir"} usernmae should be username--Thomas Nys - Reported in: P1.0 (03-Aug-16) PDF page: 41 error in "username" iex> user = %{usernmae: "jose", password: "elixir"}--Paul Arsiyenko - Reported in: P1.0 (30-Jan-17) PDF page: 41 The map key "username" is spelled "usernmae".--Chris McCann - Reported in: P1.0 (28-Feb-17) Paper page: 42 I was a little confused by how the get_by function works. It is described in following sentence: "Let's also add a couple of functions to get a user by id, or by a custom attribute" But the implementation shown processes a keyword list of custom attributes, so really it's more accurate to say: "Let's also add a couple of functions to get a user by id, or by custom attribute(s)"--Andre Oji - Reported in: P1.0 (30-Jan-18) PDF page: 43 The function all/1 simply returns a list, so the order shouldn't change from 1/2/3 to 1/3/2 (as shown in the iex session.--Rich Morin - Reported in: P1.0 (30-Aug-17) Paper page: 43 There is an error running the code up to here. 
Error when start iex : Rumpl.Repo.start_link/0 is undefined or private You need this snipped in Repo.ex def start_link do {:ok, self()} end see : stackoverflow (can send the link because your form does not allow it) Search for the error. (I do not know why (I am Beginner)--Erhard Karger - Reported in: P1.0 (30-Oct-16) PDF page: 43 In "Coding VIews" the example for index.html.eex should use "users_path" instead of "user_path", otherwise it will not compile. BROKEN: <%= link "View", to: user_path(@conn, :show, user.id)%> WORKS: <%= link "View", to: users_path(@conn, :show, user.id)%> I just got kindle edition today, 10/30/2016 - best guess at version/page, above.--Matthew Van Horn - Reported in: P1.0 (21-Jul-16) Paper page: 44 The code for user_controller.ex shows a render call which (somehow) sets the value of @users for later use in a view. In Rails, @users would be an instance variable, but here it seems to be a bit of (unexplained) magic. Is it an attribute or what?--Rich Morin - Reported in: P1.0 (30-Jan-18) PDF page: 46 The book says "At runtime, Phoenix will translate this template to a function, ...". Doesn't the translation occur at compile time?--Rich Morin - Reported in: P1.0 (21-Jul-16) Paper page: 46 The book explains that <%= %> tags inject results into templates, while <% %> tags do not. This seems like a side effect to me. It then says "We'll try to use code without side effects in views whenever possible, so we'll use mostly the <%= %> form." Shouldn't this be "... the <% %> form."?--Rich Morin - Reported in: P1.0 (21-Jul-16) Paper page: 46 "... within our index action." should be "... within our index action, via the call to render/3."--Rich Morin - Reported in: P1.0 (12-Aug-16) PDF page: 46 Coming from a non-Rails/ERB background, I was briefly stumped by this sentence. "Remember, we’ve already populated @users within our index action." Where did the @ prefix come from? What does "populate" mean? 
Took some backtracking to discover that "populating" meant passing [keyword list] data to render/3. Even so, I couldn't infer why populated data are prefixed with @. Is that EEx syntax like <%= %>? After discovering that templates are functions, I assume any arguments passed to render/3 are automagically turned into module attributes (I'm still not 100% sure).

Reported in: P1.0 (30-May-16), PDF page: 46
In the iex listing at the bottom of the page, `Phoenix.HTML.Link.link("Delete" ....)` is called, but the example response shows the anchor text as `[x]`.
--Chris Flipse

Reported in: P1.0 (16-Nov-16), PDF page: 46
The *link* function seems to work differently in newer versions of Elixir.
iex> Phoenix.HTML.Link.link("Home", to: "/")
{:safe, ["<a href=\"/\">", "Home", "</a>"]}
now outputs:
iex> Phoenix.HTML.Link.link("Home", to: "/")
{:safe, [60, "a", " href=\"/\"", 62, "Home", 60, 47, "a", 62]}
The same issue occurs with the following line.
--jpinnix

Reported in: B10.0 (21-Apr-16), PDF page: 48
Hi, I'm reasonably new to Elixir and Phoenix... so I'm not 100% sure this is a real issue, or I may have done something wrong earlier in the exercise; however, I discovered the following issue with the following code.
Cut and paste from the book:
def show(conn, %{"id" => id}) do
  user = Repo.get(Rumbl.User, id)
  render conn, "show.html", user: user
end
I could not get this to work without first converting "id" to an integer, as below:
def show(conn, %{"id" => id}) do
  user = Repo.get(Rumbl.User, String.to_integer(id))
  render conn, "show.html", user: user
end
Thanks, Craig
--Craig Richards

Reported in: B10.0 (20-Apr-16), PDF page: 50
Chapter 3 -> Nesting Templates: it should be </td> in another place in web/templates/user/index.html to have nice formatting on the web page.
Wrong:
<tr>
  <td><%= render "user.html", user: user %></td>
  <td><%= link "View", to: user_path(@conn, :show, user.id) %></td>
</tr>
Right:
<tr>
  <td><%= render "user.html", user: user %>
  <td><%= link "View", to: user_path(@conn, :show, user.id) %></td></td>
</tr>
--Denis Gamidov

Reported in: P1.0 (21-Jul-16), Paper page: 51
In "... receives a couple of special assigns ...", "assigns" should either be clarified as a code term or changed to "assignments".
--Rich Morin

Reported in: B10.0 (24-Apr-16), PDF page: 54
"For now, it only uses Ecto.Model and import functions to work with changesets and queries, but this function serves as an extension point that will let us explicitly alias, import or use the various libraries our models might need."
I believe this should be "For now, it only uses Ecto.Model and imports functions to work with changesets and queries, but this function serves as an extension point that will let us explicitly alias, import or use the various libraries our models might need."
Adding an 's' to "import" makes the intention clear.
--Derek Reeve

Reported in: P1.0 (06-Aug-16), PDF page: 55
There is a line that states: "import Ecto.Query, only: [from: 1, from: 2]". As far as I know, we're not instructed to actually write ", only: [from:1, from:2]". My web.ex file just says "import Ecto.Query".
Unless I missed something while reading, I suspect that this might be a change in Phoenix since the book was published.
--Ian Charters

Reported in: P1.0 (01-Feb-17), PDF page: 55
It seems that Elixir now dislikes the timestamps macro in a migration file. The correct syntax is timestamps().
--Chris McCann

Reported in: P1.0 (21-Jul-16), Paper page: 56
Somewhere around the migration example, I'd like to see an explanation of the specific changes that were made (e.g., :passwd went away, :username added "null: false").
--Rich Morin

Reported in: P1.0 (20-Feb-17), Paper page: 56
My 2¢: These two sentences sound redundant. "The mix ecto.gen.migration creates a migration file for us with a special timestamp to ensure ordering of our database migrations. Note that your migration filename is different from ours because Ecto prepends a timestamp to maintain the ordering of migrations."
--John Mertens

Reported in: P1.0 (14-Dec-16), PDF page: 60
The `:empty` atom in params in the changeset function is deprecated in newer versions of Phoenix. Suggest that you change it to `:invalid`.
--Allan Reyes

Reported in: P1.0 (01-May-17), PDF page: 60
This worked for me:
def changeset(user, params \\ %{}) do
  user
  |> cast(params, [:name, :username])
  |> validate_required([:name, :username])
  |> validate_length(:username, min: 1, max: 20)
end
From what I have read, there has been an upgrade to the 'Ecto.Changeset' module since the book was written. The recommendation in the current docs is to use a form of cast with a lower arity (cast/3) to include the piped-in user struct as param 1. It appears that the former way of doing things with the ~w(name username) word list parameter goes with the :empty atom for a blank form, but the current version uses the empty map %{} with cast/3 in concert with the validate_ functions.
--luzaranza

Reported in: P1.0 (07-Jul-16), PDF page: 60
I ran into some error when following the code because the "name" field isn't validated. This means that the code actually allows inserting a user with no name field. Maybe alter the code so that the "name" field is also validated?
--Stefan Eletzhofer

Reported in: P1.0 (12-Jan-17), PDF page: 60
Another deprecation warning: `Ecto.Changeset.cast/4` is deprecated, please use `cast/3` + `validate_required/3` instead. Changing
model
|> cast(params, ~w(name username), [])
|> validate_length(:username, min: 1, max: 20)
to
model
|> cast(params, ~w(name username))
|> validate_required([:name, :username])
|> validate_length(:username, min: 1, max: 20)
fixed it for me.
--Chase Jensen

Reported in: P1.0 (06-Aug-16), Paper page: 60
The :empty atom is passed to the changeset (and the next page describes the reason). But now I get a warning:
warning: passing :empty to Ecto.Changeset.cast/3 is deprecated, please pass an empty map or :invalid instead
(rumbl) web/models/user.ex:15: Rumbl.User.changeset/2
(rumbl) web/controllers/user_controller.ex:24: Rumbl.UserController.new/2
(rumbl) web/controllers/user_controller.ex:1: Rumbl.UserController.action/2
(rumbl) web/controllers/user_controller.ex:1: Rumbl.UserController.phoenix_controller_pipeline/2
Can this be clarified, please?
--Nicholas George

Reported in: P1.0 (19-Feb-17), PDF page: 60
Text: "alias Rumbl.User
def new(conn, _params) do
  changeset = User.changeset(%User{})
  render conn, "new.html", changeset: changeset
end
Notice the User.changeset function. This function receives a struct and the controller parameter, and returns an Ecto.Changeset."
The source code only shows changeset accepting one parameter, the struct. Then on the same page:
"def changeset(model, params \\ :empty) do
  model
  |> cast(params, ~w(name username), [])
  |> validate_length(:username, min: 1, max: 20)
end
Our changeset accepts a User struct and parameters"
What? This definition of changeset() is different than what was mentioned at the top of the page!
--Patrick Chin

Reported in: P1.0 (30-Jan-18), PDF page: 61
"... when your learn about authentication." should be "... when you learn about authentication.".
--Rich Morin

Reported in: P1.0 (20-Feb-17), Paper page: 62
resources "/users", UserController, only: [:index, :show, :new, :create] is wrong; it must be resources "/users", UserController, only: [:index, :new, :show, :create], or else the "show" route will match /users/new.
--Nathan Sutton

Reported in: P1.0 (21-Jul-16), Paper page: 64
In the first sentence of the Creating Resources section, I would change "macro" to "macro call", for clarity.
--Rich Morin

Reported in: P1.0 (01-Jun-17), PDF page: 67
The method:
def first_name(%User{name: name}) do
  name
  |> String.split(" ")
  |> Enum.at(0)
end
fails with the String.split(" "). This happened in the form for a new user when there is nothing typed in the form.
--Luis Rodriguez

Reported in: P1.0 (01-Nov-16), PDF page: 70
** (KeyError) key :password_hash not found in: %Rumbl.User{id: nil, name: "Jose", password: nil, username: "josevalim"}
    (stdlib) :maps.update(:password_hash, "<3<3elixir", %Rumbl.User{id: nil, name: "Jose", password: nil, username: "josevalim"})
    (rumbl) web/models/user.ex:2: anonymous fn/2 in Rumbl.User.__struct__/1
    (elixir) lib/enum.ex:1623: Enum."-reduce/3-lists^foldl/2-0-"/3
    (rumbl) expanding struct: Rumbl.User.__struct__/1
    iex:22: (file)
--Sam

Reported in: P1.0 (12-Jan-17), PDF page: 71
Another deprecation warning: `Ecto.Changeset.cast/4` is deprecated, please use `cast/3` + `validate_required/3` instead. Changing
model
|> changeset(params)
|> cast(params, ~w(password), [])
|> validate_length(:password, min: 6, max: 100)
|> put_pass_hash()
to
model
|> changeset(params)
|> cast(params, ~w(password))
|> validate_required(:password)
|> validate_length(:password, min: 6, max: 100)
|> put_pass_hash()
fixed it for me.
--Chase Jensen

Reported in: P1.0 (05-Sep-17), PDF page: 72
I am running Elixir 1.5.1, Erlang/OTP 20. When I ran this page's code, I got the error: "module Comeonin.Bcrypt is not available".
I thought to mention this for all the newbies. Go to "mix.exs" and under "defp deps do" add {:bcrypt_elixir, "~> 1.0"}, and then run mix deps.get in the terminal. The error should be fixed now. Cheers
--Houman Kargaran

Reported in: P1.0 (24-May-16), PDF page: 72
In this line of code: |> cast(params, ~w(name username), []) it is requiring parameters. Further down, you are expected to navigate to "/users/new" without any parameters, giving you an error. I couldn't get it to render until I removed those parameters.
--jeffrey Baird

Reported in: P1.0 (30-Apr-16), PDF page: 73
The code to be entered into IEx has an error.
for u <- Rumbl.Repo.all(User) do
  Rumbl.Repo.update!(User.registration_changeset(u, %{
    password: u.password_hash || "temppass"
  }))
end
It needs either a line at the beginning, "alias Rumbl.User", or add "Rumbl." in front of all User references.
--Eric Watjen

Reported in: P1.0 (17-May-16), PDF page: 73
"You'll see a good example of this policy segregation when your learn about authentication." Should be "when *you* learn".
--Clint Gibler

Reported in: P1.0 (05-Nov-16), PDF page: 74, Paper page: 62
When adding the users as resources, you need to specify the "only" methods in a different order.
resources "/users", UserController, only: [:index, :show, :new, :create]
must be
resources "/users", UserController, only: [:index, :new, :show, :create]
This is because "/users/new" matches to the :show path also, with new as the user id.
--Ben Carter

Reported in: P1.0 (30-Jan-18), PDF page: 76
Under the heading path_info, I see "List" (in monospace font). Then, under the heading req_headers, I see "list" (in proportional font). This is a jarring inconsistency.
--Rich Morin

Reported in: P1.0 (30-Jan-18), PDF page: 76
I'd rewrite this sentence, to make it less awkward: "Sometimes a connection must be halted, such as a failed authorization."
--Rich Morin

Reported in: P1.0 (21-Jul-16), Paper page: 78
"... downstream functions including ..." should be "... downstream functions, including ...".
--Rich Morin

Reported in: P1.0 (21-Jul-16), Paper page: 78
I'd like to see some discussion of the fact that the return value from init/1 magically gets handed in as the second argument to call/2.
--Rich Morin

Reported in: P1.0 (21-Aug-16), PDF page: 79
iex> changeset = Rumbl.User.changeset(%Rumbl.User{username: "eric"})
returns a deprecation warning: passing :empty to Ecto.Changeset.cast/3 is deprecated, please pass an empty map or :invalid instead
--Cody Norman

Reported in: P1.0 (30-Jan-18), PDF page: 82
The text says: "As you recall, the Plug.Conn struct has a field called assigns. We call setting a value in that structure an assign." This is a fine explanation, but it appears quite a ways after the first uses of the term "assign". I'd rework this to eliminate the forward reference.
--Rich Morin

Reported in: P1.0 (11-Nov-17), PDF page: 82
In the most recent versions of Elixir and Comeonin, a few of the instructions here will not work, and need minor changes:
1. As of Elixir 1.4, the :applications key in the mix.exs application method is inferred by Elixir based on the deps method. Unless you've added it yourself, your mix.exs file's application method won't have an :applications key, and you won't need to add one for this to work. See the "Elixir v1.4 released" post on the Elixir blog for more details (the errata form blocks links to prevent spam; sorry to make you dig!).
2. As of Comeonin 4, you need to add an additional entry to the list in the mix.exs deps method for whichever algorithm you want to use. This book uses Bcrypt, so you'll need to add the following entry (the version number may be different if you're reading this after November 11, 2017): {:bcrypt_elixir, "~> 1.0"}.
--Jake Boxer

Reported in: P1.0 (10-Dec-17), PDF page: 82
"As you recall, the Plug.Conn struct has a field called assigns. We call setting a value in that structure an assign. Our function stores the given user as the :current_user assign, puts the user ID in the session, and finally configures the session, setting the :renew option to true. The last step is extremely important and it protects us from session fixation attacks. It tells Plug to send the session cookie back to the client with a different identifier, in case an attacker knew, by any chance, the previous one."
I believe this is incorrect. I noticed that up to this point, no server-side session storage had been implemented, so I consulted the docs, which say that Phoenix defaults to using signed cookies for sessions instead of server-side storage. In this case, session fixation should be impossible. In any case, if using server-side session storage, why would one not just not allow overwriting the user_id in the session? Or encrypt the user_id as the session_id? As it stands, this section doesn't really make sense. Or it's not great advice. Or I missed something, but my research over a couple hours failed to clear this up and only made me more confident something was fishy.

Reported in: P1.0 (21-Aug-16), PDF page: 83
|> cast(params, ~w(password), [])
in block:
def registration_changeset(model, params) do
  model
  |> changeset(params)
  |> cast(params, ~w(password), [])
  |> validate_length(:password, min: 6, max: 100)
  |> put_pass_hash()
end
is missing closing parens.
--Cody Norman

Reported in: P1.0 (30-Jan-18), PDF page: 84
The first couple of line breaks in the "def create" example are ugly. I'd tweak them to make the code's meaning more obvious.
--Rich Morin

Reported in: P1.0 (21-Jul-16), Paper page: 85
The sentence "This hardens our ... application secure" seems a bit awkward. I would rewrite it for clarity.
--Rich Morin

Reported in: B10.0 (22-Apr-16), PDF page: 94
There is a picture/diagram of the model Annotation and the relationship with User and Video. In this version this diagram is split across two pages.
It would be good if the entire diagram were placed on one page.
--Anderson

Reported in: P1.0 (07-Jul-16), PDF page: 97
This text: "The VideoController, like any other controller, also has a pipeline, and the Phoenix generator plugs a function called scrub_params for the create and update actions: plug :scrub_params, "video" when action in [:create, :update]". But I generated the code using the commands for the HTML generator, and the code isn't in the controller; maybe it is the version of Phoenix that I'm using. Dependencies:
defp deps do
  [{:phoenix, "~> 1.2.0"},
   {:phoenix_pubsub, "~> 1.0"},
   {:phoenix_ecto, "~> 3.0"},
   {:postgrex, ">= 0.0.0"},
   {:phoenix_html, "~> 2.6"},
   {:phoenix_live_reload, "~> 1.0", only: :dev},
   {:gettext, "~> 0.11"},
   {:cowboy, "~> 1.0"},
   {:comeonin, "~> 2.0"}]
end
--Romario Alejandro López Castillo

Reported in: P1.0 (30-Jan-18), PDF page: 98
The "def change" example uses :string for short items and :text for longer ones. As I understand it, PostgreSQL treats these exactly the same, save that it imposes a 255 byte limit on :string. I'd either use :text for everything or explain why I'm not doing so.
--Rich Morin

Reported in: P1.0 (25-Sep-16), Paper page: 99
Chapter 6: Building relationships.
@required_fields ~w(url title description)
@optional_fields ~w()
Those module attributes do not appear in my generated code. Phoenix version 1.2.1, Elixir version 1.2.6.
--J Paul Daigle

Reported in: P1.0 (30-Jan-18), PDF page: 101
The iex examples reuse the user and view variables in a manner which I find confusing. Why not pick different names for the different uses?
--Rich Morin

Reported in: B10.0 (23-Apr-16), PDF page: 101
It looks like the paragraph between the ecto.rollback and ecto.migrate examples is out of place. Maybe it needs some review.

Reported in: P1.0 (30-Jan-18), PDF page: 102
In "We need to change it so the video is built ...", I find the term "video" confusing. Are we referring to the video changeset or what? Please clarify this.
--Rich Morin

Reported in: P1.0 (01-Apr-17), PDF page: 102
The signatures:
def all(Rumbl.User) do
def all(_module), do: []
throw this compilation error now: lib/rumbl/repo.ex:14: def all/1 conflicts with defaults from def all/2
--john nicholas

Reported in: P1.0 (30-Jan-18), PDF page: 107
I'd change "... by associating videos to users." to "... by associating videos with users.".
--Rich Morin

Reported in: P1.0 (30-Jan-18), PDF page: 113
"... which fetches the names and IDs tuples ..." should be "... which fetches the name and ID tuples ...".
--Rich Morin

Reported in: P1.0 (03-Jun-16), PDF page: 119
In the SQL fragment description, "downcase(uname)" should be "downcase(username)" in both places where downcase is used as an example.
--Mahboob Hussain

Reported in: P1.0 (09-Jan-17), PDF page: 119
Someone before me submitted it as a bug, because uname is not username. The real issue is that uname is not declared; e.g., add one line with uname = "Jose Valim" and then the current code.
--Patryk Nowak

Reported in: P1.0 (30-Apr-16), PDF page: 122
The error message contains "constraint error when attempting to insert struct", but in my setup, it reads: "constraint error when attempting to insert model".
--Mike Boone

Reported in: P1.0 (17-May-16), PDF page: 124
"but suppose tried to update" -> "but suppose WE tried to update"
--Jaime Iniesta

Reported in: P1.0 (18-May-16), PDF page: 126
You write: "There are three approaches to solving this problem, and all have tradeoffs. First, you might decide to let the application (and the web framework) manage relationships for you. This approach, adopted by frameworks like Rails, leads to simpler code and database layers that are much more portable, but at a cost." There is confusion as to what the subject, i.e. "the problem", is. The second sentence cited above seems to be referring to managing foreign key integrity, but the preceding paragraph indicates that the discussion is of managing database constraints in general; if the latter is the case, it seems inaccurate to cite Rails as falling into this category (the one referenced by "first"), as it allows you to specify indices and other column-based constraints in migrations that are enforced by the database. Thanks for the great book!
--Maxfield Lewin

Reported in: P1.0 (30-Jan-18), PDF page: 130
"... function on your calculator module." should be "... function in your calculator module.".
--Rich Morin

Reported in: P1.0 (30-Jan-18), PDF page: 133
I'd change "... needs the database and restart the database transaction ..." to "... needs the database, so we restart the database transaction ...".
--Rich Morin

Reported in: P1.0 (03-Sep-17), PDF page: 135
When inserting test users, the line "user#{Base.encode16(:crypto.rand_bytes(8))}" must be changed to "user#{Base.encode16(:crypto.strong_rand_bytes(8))}", i.e., rand_bytes -> strong_rand_bytes.
--felix

Reported in: P1.0 (18-May-16), PDF page: 137
Not all the dots from the first integration test are green; some are purple.
--Jaime Iniesta

Reported in: P1.0 (08-Jul-16), PDF page: 138
The CLI prints a warning:
warning: using conn/0 to build a connection is deprecated. Use build_conn/0 instead.
test/controllers/video_controller_test.exs:7: Rumbl.VideoControllerTest.__ex_unit_setup_1/1
test/controllers/video_controller_test.exs:1: Rumbl.VideoControllerTest.__ex_unit__/2
(ex_unit) lib/ex_unit/runner.ex:289: ExUnit.Runner.exec_test_setup/2
(ex_unit) lib/ex_unit/runner.ex:247: anonymous fn/2 in ExUnit.Runner.spawn_test/3
The book says we write this:
setup do
  user = insert_user(username: "max")
  conn = assign(conn(), :current_user, user)
  {:ok, conn: conn, user: user}
end
I changed conn() to build_conn. I'm using Phoenix 1.2.
--Romario Alejandro López Castillo

Reported in: P1.0 (27-Jul-16), PDF page: 138
test lists all user's videos on index (Rumbl.VideoControllerTest)
** (ArgumentError) cannot retrieve association :videos for empty list
--Thomas Iovine

Reported in: P1.0 (12-Jan-17), PDF page: 139
Not an error, but a deprecation notice: warning: using conn/0 to build a connection is deprecated. Use build_conn/0 instead. Changing
conn = assign(conn(), :current_user, user)
to
conn = assign(build_conn(), :current_user, user)
fixed it for me!
--Chase Jensen

Reported in: P1.0 (24-Oct-17), PDF page: 140
The first assert of the second test, namely "does not create video and renders errors when invalid", asserts a status code of 200, when the test itself says it fails, which by definition means it cannot return a 2xx code. It should be 302 or something alike.
--Sebastian Gonzalez

Reported in: P1.0 (30-Jan-18), PDF page: 141
Why not put the :delete, :edit, and :show asserts in a comprehension?
--Rich Morin

Reported in: P1.0 (23-Feb-17), Paper page: 144
Hey, on the tests with login/logout: I would highly suggest adding a 'recycle(conn)' before the new 'get'. The new 'get' simulates a new connection going through the endpoint/router/:browser pipeline. But it isn't in fact new. It took me so long to wrap my head around it. Finally, when I understood how cookies are sent back by the browser, I just had to look and found that there is the 'recycle(conn)' function that would be such a great addition here. (Plus an explanation of why it simulates what the browser does (send back the cookie)). Thanks ;)
--Florian Kempenich

Reported in: P1.0 (23-Sep-17), Paper page: 145
"[...] and finally make that no user_id is in the session" I think you missed a word. It could be "finally make sure that no..." or "finally assert that no..."
--David Carlin

Reported in: P1.0 (11-Nov-16), PDF page: 145
Using `Dict.merge/2` instead of `Map.merge/2`. Dict is deprecated.
--Uri Gorelik

Reported in: P1.0 (30-Jan-18), PDF page: 147
The text says "The reason our tests are slow is that ...". This may have been self-evident in this case, but often it won't be. This might be a teachable moment for explaining how one might figure this out, what profiling tools are available, etc.
--Rich Morin

Reported in: P1.0 (18-May-16), PDF page: 148
"on line 5" and "on line 18" do not match the code above; they should be 6 and 19.
--Jaime Iniesta

Reported in: P1.0 (22-Nov-17), PDF page: 149
The test for invalid attributes fails with Phoenix 1.2.5/Ecto 2.0 since the definition of cast/4 changed: changeset = User.changeset(%User{}, @invalid_attrs). To make it work as intended, the @invalid_attrs need to be changed to something like this: @invalid_attrs %{name: 123}
--Stefan Chrobot

Reported in: P1.0 (09-Jul-16), PDF page: 150
This test:
test "changeset does not accept long usernames" do
  attrs = Map.put(@valid_attrs, :username, String.duplicate("a", 30))
  assert {:username, {"should be at most %{count} character(s)", [count: 20]}} in errors_on(%User{}, attrs)
end
throws this error when running mix test:
1) test changeset does not accept long usernames (Rumbl.UserTest)
   test/models/user_test.exs:18
   Assertion with in failed
   code: {:username, {"should be at most %{count} character(s)", [count: 20]}} in errors_on(%User{}, attrs)
   lhs: {:username, {"should be at most %{count} character(s)", [count: 20]}}
   rhs: [username: "should be at most 20 character(s)"]
   stacktrace:
     test/models/user_test.exs:20: (test)
Finished in 0.06 seconds
3 tests, 1 failure
Randomized with seed 872579
Phoenix version: 1.2
--Romario Alejandro López Castillo

Reported in: P1.0 (03-Aug-16), PDF page: 150
The test "changeset does not accept long usernames" keeps failing because lhs & rhs don't match (one is a list and another is a tuple). I think I found a fix by copying what was done in the following test, "registration_changeset password must be atleast 6 chars long", on the next page. Add the same changeset assignment using User.registration_changeset/2 before assert, and replace error_on/2 with changeset.errors after the "in" keyword. The tests will pass this way.
--Lucas Rosa

Reported in: P1.0 (14-Aug-16), PDF page: 150
Ran into the same error someone else reported (#80513), and the test on page 150 line 18 passes if changed to the following:
test "changeset does not accept long usernames" do
  attrs = Map.put(@valid_attrs, :username, String.duplicate("a", 30))
  assert username: "should be at most 20 character(s)" in errors_on(%User{}, attrs)
end

Reported in: P1.0 (30-Jan-18), PDF page: 151
"... is a uniqueness constraint checks in our changeset." should be "... is a uniqueness constraint check in our changeset.".
--Rich Morin

Reported in: P1.0 (18-Jul-16), PDF page: 152
When I run test user_repo_test with mix, I get the error: test/models/user_repo_test.exs:8: undefined function insert_user/1
--Ruslan

Reported in: P1.0 (18-Jul-16), PDF page: 152
Please ignore my previously reported error on page 152.
I missed adding 'import Rumbl.TestHelpers' to the model_case.ex. Ruslan
--Ruslan Kenzhebekov

Reported in: P1.0 (30-Jan-18), PDF page: 159
The book says "We're naming a pattern called id and then ...". I see a pattern (~r{...}), but no indication that it has been "named". Wazzup?
--Rich Morin

Reported in: P1.0 (09-Oct-16), Paper page: 159
The book states: "We extract the player ID from the video url field by a function aptly named player_id." I think "aptly" is a typo.
--Tiago Duarte

Reported in: P1.0 (16-Nov-16), Paper page: 159
In `watching_videos/listings/rumbl/web/views/watch_view.ex`, when creating the player_id function, it uses `Kernel.get_in/2` to create a single pipe rather than assigning a variable and using the square bracket notation to access the "id" key. `Kernel.get_in/2` is a bit out of place here and is somewhat confusing, and may lead people to keep using it this way, when it really should be used for nested structures. I suggest either assigning the result of `Regex.named_captures/1` to a variable and using the square bracket notation to reference it, or using `Access.get/3`.
--Uri Gorelik

Reported in: P1.0 (30-May-16), PDF page: 161
"That's the file loaded by browsers at the end of web/templates/layout.html.eex when we call static_path(@conn, "/js/app.js")." should be "That's the file loaded by browsers at the end of web/templates/layout/app.html.eex when we call static_path(@conn, "/js/app.js")."

Reported in: P1.0 (16-Nov-16), Paper page: 162
In the JavaScript for `watching_videos/listings/rumbl/web/static/js/player.js`, in the `events` object, there is an extra space:
"onReady": (event => onReady(event) ),
should be
"onReady": (event => onReady(event)),
--Uri Gorelik

Reported in: P1.0 (16-Nov-16), Paper page: 163
"Our onYouTubeReady function needs..." `onYouTubeReady` was never created, or mentioned. I believe it's supposed to be `onIframeReady`.
--Uri Gorelik

Reported in: P1.0 (16-Nov-16), Paper page: 164
In `watching_videos/listings/rumbl/web/static/css/video.css` it might be worth mentioning that you do not need to manually add this CSS file in your `app.html.eex`.
--Uri Gorelik

Reported in: B10.0 (24-Apr-16), PDF page: 166
The video.css should be set to a fixed height with scroll enabled, or the scroll-to-bottom code in Chapter 10 won't work.
#msg-container {
  min-height: 180px;
  max-height: 180px;
  overflow-y: scroll;
}

Reported in: P1.0 (06-Dec-16), Paper page: 167
Once Phoenix.Param.to_param is overridden for Rumbl.Video to include the slug as part of the id, the Rumbl.VideoControllerTest test "update user video and redirects when using valid data" no longer passes. This is what I did to fix it:
@tag login_as: "gilligan"
test "update user video and redirects when using valid data", %{conn: conn, user: user} do
  user_video = insert_user_video user
  conn = put conn, video_path(conn, :update, user_video), video: @valid_attrs
  user_video_mod = Repo.get_by!(Video, @valid_attrs)
  assert user_video.id == user_video_mod.id
  assert redirected_to(conn) == video_path(conn, :show, user_video_mod)
end
-Bill
--Bill Norman

Reported in: P1.0 (31-May-16), PDF page: 167
We are instructed to add the code snippet that implements the Phoenix.Param protocol for the Rumbl.Video struct to the bottom of web/models/video.ex. I suggest it is made clearer that this needs to be added to the very bottom, outside the Rumbl.Video module's "end". At first I put it below the slugify function, but before the final "end", and the server fails to start with the rather cryptic: "** (Mix) Could not start application rumbl: Rumbl.start(:normal, []) returned an error: shutdown: failed to start child: Rumbl.Endpoint"
--Andy

Reported in: P1.0 (06-Dec-16), Paper page: 189
Once annotations are added to the code, videos that are associated with annotations cannot be deleted.
I see a constraint violation on annotations_video_id_fkey whenever I attempt to delete a video with annotations assigned. To fix this, the migration *_create_annotation.exs file should use the following definition:
add :video_id, references(:videos, on_delete: :delete_all)
This is as opposed to the current add :video_id, references(:videos), which is the default if you follow the instructions on pg. 189. (This is also what the code listing shows in the distribution.) -Bill
--Bill Norman

Reported in: P1.0 (14-Jan-17), Paper page: 191
`resp = %{annotations: Phoenix.View.render_many(annotations, AnnotationView, "annotation.json")}` should be `resp = %{annotations: Phoenix.View.render_many(annotations, Rumbl.AnnotationView, "annotation.json")}`
--Justin Smestad

Reported in: P1.0 (15-May-16), PDF page: 198
In section 4, "building forms", right before "ecto/listings/rumbl/web/router.change1.ex": "You'll see a good example of this policy segregation when your learn about authentication." Should be "when you learn".
--Stefan Boesen

Reported in: P1.0 (21-Nov-17), PDF page: 200
On Erlang/OTP 20, Elixir 1.5.2, running the simple OTP Counter example (code is here: gist.github.com/daya/865116d1fed01577f8764f5f1a691d85) causes an ArgumentError on :erlang.send:
Interactive Elixir (1.5.2) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> alias Rumbl.Counter
Rumbl.Counter
iex(2)> counter = Counter.start_link(0)
listening.....
initial value 0
{:ok, #PID<0.324.0>}
iex(3)> Counter.val(counter)
{:ok, #PID<0.324.0>} here
#Reference<0.1548235550.607911943.102579>
#PID<0.321.0>
** (ArgumentError) argument error
    :erlang.send({:ok, #PID<0.324.0>}, {:val, #PID<0.321.0>, #Reference<0.1548235550.607911943.102579>})
    (rumbl) lib/rumbl/counter.ex:13: Rumbl.Counter.val/2
iex(3)>
--Daya

Reported in: P1.0 (09-Jun-16), Paper page: 203
1) The explanation of the code is a bit unclear, especially the "notice the _from in the function head", as I thought you were still referring to the GenServer.call on line 8, but you were actually referring to line 28.
2) It would be good if you could explain the handle_call function in more detail. Even stating that each of the 3 arguments represents "Request, From, State", and that in this case it returns "Reply, NewState", would be enough! (It's slightly too different from lines 24-26 in the first version of counter.ex to be intuitive to grasp.)
--AJ Murphy

Reported in: P1.0 (11-Dec-16), PDF page: 212
In the subsection "Choosing an Information Strategy", there's a full stop where there should be a comma: "It doesn't make sense for us to retry the computation. because".
--Dee Roberts

Reported in: P1.0 (23-Sep-16), PDF page: 225
The additional info box "What About Task.async/await?" slipped in between the following code-example lines, which shows only a single line on the previous page. It would be nicer if the whole code example were rendered after this info box.
--Benjamin

Reported in: P1.0 (30-Jun-16), PDF page: 227, Paper page: 228
info_sys.change1.ex: line 18: await_results(opts); lines 31 & 35: await_result (not plural). Typo?
--Erik

Reported in: P1.0 (06-Jun-16), PDF page: 232
On line 20 of video_channel.ex, it should either be Rumbl.AnnotationView, or you need to alias it first. Once I prefixed it with Rumbl, it worked great.
--Andrew P.
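[Editor's sketch] The ArgumentError in the Counter report above comes from passing the whole {:ok, pid} tuple that start_link returns into the client functions, and the page-203 report asks for the handle_call/3 shape to be spelled out. A minimal GenServer-based counter illustrating both points; this is an illustrative sketch, not the book's exact listing:

```elixir
defmodule Rumbl.Counter do
  use GenServer

  # Client API. start_link/1 returns {:ok, pid}; callers must
  # pattern-match out the pid before calling val/1 or inc/1.
  def start_link(initial \\ 0), do: GenServer.start_link(__MODULE__, initial)
  def val(pid), do: GenServer.call(pid, :val)
  def inc(pid), do: GenServer.cast(pid, :inc)

  # Server callbacks.
  def init(initial), do: {:ok, initial}

  # handle_call/3 receives (request, from, state) and
  # returns {:reply, reply, new_state}.
  def handle_call(:val, _from, count), do: {:reply, count, count}

  # handle_cast/2 receives (request, state) and returns {:noreply, new_state}.
  def handle_cast(:inc, count), do: {:noreply, count + 1}
end

# Usage: destructure the {:ok, pid} tuple; sending requests to the
# tuple itself is what raises :erlang.send's argument error.
# {:ok, counter} = Rumbl.Counter.start_link(0)
# Rumbl.Counter.inc(counter)
# Rumbl.Counter.val(counter)
```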
- Reported in: P1.0 (28-May-17) PDF page: 235 The listing for the app.html.eex on the book is different from the file linked: /code/authentication/listings/rumbl/web/templates/layout/app.change1.html.eex Which make the app not behave like expected.--Gabriel Fróes Franco - Reported in: P1.0 (28-Sep-16) Paper page: 237 `mix test` should show 33 tests, rather than 37 tests. By the end of Chapter 8, there are 31 tests. At page 189, two tests are added automatically with `mix phoenix.gen.model`, bringing total up to 33 tests. No tests were written/added until page 237, but the sample test output now surprisingly shows 37 tests.--Jay Jun - Reported in: P1.0 (17-Dec-17) PDF page: 238.4 EPUB 238.4/551 Chapter 7 just before Diving Deeper into Ecto Queries It wouldn't let me put the hyperlink in the errata so had to drop the http protocol and replaced it with URL URL localhost:4000/manage/videos/new: should not have the trailing colon and should be URL localhost:4000/manage/videos/new--Barrie Callender - Reported in: P1.0 (16-Jun-16) Paper page: 247 I had the same problem as Marc Linsangan in both print and PDF versions; the wolfram test fails. "1+%2B+1" should be "1%20+%201" --AJ Murphy - Reported in: P1.0 (15-Sep-16) Paper page: 247 In the book testing_otp/listings/rumbrella/apps/info_sys/test/fixtures/wolfram.xml is incomplete. In media.pragprog.com/titles/phoenix/code/testing_otp/listings/rumbrella/apps/info_sys/test/fixtures/wolfram.xml the XML is actually complete. otp/listings/rumbl/lib/rumbl/info_sys/wolfram.ex will try to find the answer, which is 2, but it will not find it, since it's been cut out from the XML. Especifically the text that should be in the XML in the book is <plaintext>2</plaintext>. --diogo kersting - Reported in: P1.0 (14-Aug-16) Paper page: 247 The listing for http_client.exs has an error on line 6. The encoded version of "1 + 1" is shown as "1+%2B+1" but it should be "1%20+%201". 
I double checked using URI.encode inside iex and with encodeURI in javascript.--Steven Martin - Reported in: B10.0 (19-Apr-16) PDF page: 251 The Wolfram test was failing for me. I had to replace the URI-encoded "1 + 1" string from "1+%2B+1" to "1%20+%201".--Marc Linsangan - Reported in: P1.0 (14-Jun-16) PDF page: 253 The line 6 of second code sample should be ``` String.contains?(url, "1%20+%201") -> {:ok, {[], [], @wolfram_xml}} ``` instead of "1+%2B+1" --Taian Su - Reported in: P1.0 (16-Oct-17) Paper page: 262 Bottom of the page "we recommend that you to study...". Remove the "to".--David Carlin - Reported in: P1.0 (08-May-16) Paper page: 343 In “rumbrella/apps/info_sys/test/backends/http_client.exs” I had to change @wolfram_xml File.read!("test/fixtures/wolfram.xml") to @wolfram_xml File.read!(__DIR__ <> "/../fixtures/wolfram.xml") in order for the rumbl tests to pass when running them from the rumbl dir. With this change the tests pass regardless of whether they are run from rumbrella or rumbl dirs.--Paul Hollyer - Reported in: P1.0 (23-Sep-16) PDF page: 5562 Im reading it on the kindle reader so no page number. Location 5652, code "watching_videos/listings/rumbl/web/views/watch_view.ex" The original regular expression: ~r{^.*(?:youtu\.be/|\w+/|v=)(?<id>[^#&?]*)} did not extract the id for me. I instead fixed it with this which is obviously not good but works for me for now: ~r{(?<url>watch\?v=)(?<id>.*)} There is also a small formatting error where everything after [^ in the regex appears as italic. kind regards, stephan--Stephan
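A neutral cross-check of the encoding entries above: several reports disagree about how "1 + 1" should be URI-encoded. Python's urllib (used here purely as a reference implementation, not something from the book) distinguishes the two standard encodings; note that the book's "1+%2B+1" is the form-style encoding, while the reporters' working string "1%20+%201" percent-encodes only the spaces and leaves the plus sign literal:

```python
from urllib.parse import quote, quote_plus

s = "1 + 1"
# Percent-encoding: space -> %20 and '+' -> %2B
print(quote(s))       # -> 1%20%2B%201
# Form-style encoding: space -> '+' and '+' -> %2B (matches the book's string)
print(quote_plus(s))  # -> 1+%2B+1
```

Which one a given endpoint accepts depends on how that server decodes its query string, which is presumably why different readers found different fixes.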
https://pragprog.com/titles/phoenix/errata
Since be active for that session. Let's say that I use Blender for 2D drawings with Grease Pencil, VSE video editing, and a general 3D workflow from modeling to shading, texturing, and ending up with lighting. Sure, we have workspaces for this, but for any one of those workflows, more than one workspace is most likely needed. Thankfully, I found out about application templates.

To create the simplest application template possible, we start by customizing Blender's interface. We save the blend file with the name "startup.blend", put it in a folder named whatever our application template should be called, and zip that folder. Then we open Blender, click the Blender icon next to the File menu, and choose "Install Application Template". Browse to the zip file and choose it. To use it, go to File -> New and choose your template.

What are application templates in Blender?

You can think of an application template as a set of default settings for Blender. It is a package of files that contains a custom setup, so that when you choose to run the application template, Blender can be set up in a certain way with custom keymaps, add-ons, themes, layout, and more. It can have its own startup file, allowing customization of the interface. It also allows for a preference file. This is just another Blender file that some other settings are drawn from. According to the Blender manual, these are:

- Viewport lighting
- Keymaps
- Add-ons
- Themes

With these two files alone, we can do quite a lot of customization from within Blender. But there is even more we can do. If you are into Python scripting and the Blender API, you can add a file called "__init__.py" and register functions specifically for your application template. We are just going to touch on Python briefly towards the end. Everything else is programming-free. The last feature we can add is a custom splash screen image.
This is more of a branding thing than anything else, but it is neat if we really want to make our own custom template.

When should we use application templates?

You should use application templates when you need a custom setup that cannot fit into a single workspace. Let's say that you use Blender for very different kinds of work. Then it's nice to set up the interface according to your needs for that working session. For example, I sometimes record videos to explain how certain things work in Blender, like making tutorials. In those cases, I most likely want a default Blender setup to make it easy for anyone to follow along. But I also want a screencast add-on for Blender, and to disable add-ons that clutter the interface. Being able to set this up once and then just activate such a template every time I need to record something in Blender can save me some time in the long run.

There are other examples of where application templates could be helpful. Maybe you are in a large company that needs Blender for some specific part of a pipeline or workflow. Then you can set up a specific application template for that and have only the functionality needed visible and readily available. The same is true if you just use Blender for different things on a hobby or freelance basis. A template can help you get started quickly. Let's say that you make YouTube videos and use Blender as your video editor. You can set up an application template with all your default assets loaded from the start: resolution settings configured, a strip with your custom intro already added to the timeline, and any necessary add-ons enabled. Just import your footage and get going.

How to use a pre-existing application template

Blender comes with a few application templates already included. Go to File -> New and you will see the list of templates. Just pick any of them to load that specific template. Any application template that we install will also be added to this list.
To install a new application template, click the Blender icon next to the File menu. At the bottom, you will find "Install Application Template". Blender will expect a .zip file when installing a new template.

When using an application template, the defaults coming with the template are never overwritten. Instead, changes are saved separately in user-defined configurations, so that the default application template can always be restored if needed.

How to create our own application template

At its core, an application template is just a handful of files packed together in a neat zip file. The zip file can contain multiple application templates, each one in its own folder inside the .zip file. Each template then has one or more of the following files:

- startup.blend
- userpref.blend
- splash.png
- splash_2x.png
- __init__.py

The startup file will act as our factory defaults for the template. It dictates the layout of the application, workspaces, default settings, and pre-loaded data, for example. The userpref.blend file will dictate certain preferences: themes, add-ons, keymaps, and viewport lighting. Basically, all preferences that can be imported and exported. You will find these different preferences if you go to "Edit -> Preferences"; there is a section for each of these types of data. The viewport lighting will be under "Lights".

The splash and splash_2x files are just two different resolutions of the splash screen. They should be 501×282 and 1002×564 pixels respectively, and as noted above, the Python file is for more advanced customization that we won't cover.

Let's focus on the startup and userpref files and, as our example, set up an application template that can be used whenever we want to make a tutorial. These are the changes that we will make:

- Set default settings
- Add a screencast keys add-on
- Disable any distracting add-ons

It's not much, but it will do for a demonstration.
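The packing step (a folder with the .blend files inside a zip) can also be scripted. Here is a minimal, hedged sketch using Python's standard zipfile module; the folder name Make_tutorial matches the example used later in this article, and any zip tool does the same job:

```python
import os
import zipfile

def pack_template(template_dir, archive_name="my_templates.zip"):
    # Blender expects the template folder itself inside the archive
    # (e.g. Make_tutorial/startup.blend), not the bare .blend files,
    # so archive paths are made relative to the folder's parent.
    base = os.path.dirname(os.path.abspath(template_dir))
    with zipfile.ZipFile(archive_name, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(template_dir):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(os.path.abspath(path), base))
    return archive_name
```

Calling pack_template("Make_tutorial") produces a zip whose entries are Make_tutorial/startup.blend and Make_tutorial/userpref.blend, ready for "Install Application Template".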
I begin by opening Blender and going to File -> Defaults -> "Load Factory Settings". Ctrl+Shift+S to save a new file; call it "startup.blend". That creates the startup file that will be used when running our application template.

Then I go to preferences: Edit -> Preferences. Here I focus on the preferences that are controlled by the userpref.blend file: setting the theme back to "Blender Dark" if I changed it, reverting any keymap changes if I made any, and disabling any add-ons that distract a viewer's attention. Next, I download and install a screencast keys add-on and set it up in the position and with the settings I want. Then I save this .blend file, naming it "userpref.blend".

Now we can go to the location where we saved these two blend files and create a folder. The folder name will be the name of the application template. I will call mine "Make_tutorial", for instance. When displayed in Blender's interface, the underscore will be removed and replaced with a regular space, but the folder where the template is stored will have an underscore in its name.

Move the two blend files into the folder and make a zip archive. For this, I use a program called 7-zip. The zip file should look like this:

- any_name.zip
  - Make_tutorial
    - startup.blend
    - userpref.blend

Any splash screen images or Python files go in the same folder as the blend files. Now our package is complete, and we can use the procedure laid out above to install it. Just go to the Blender icon and choose "Install Application Template". Browse for the .zip file and choose it. Now go to File -> New and you will see your application template in the list. Just click it to load Blender with the settings we made.

When Blender installs the application template, it actually moves any folders inside the .zip file to this location. If you are making an application template for yourself only, then you don't have to zip the folder. You can just create or move it here instead.
C:\Users\\AppData\Roaming\Blender Foundation\Blender\2.80\scripts\startup\bl_app_templates_user

We may want to have our application template run by default. If so, we can make a special shortcut for it. Copy your existing desktop shortcut to Blender and rename it, preferably to your application template name or something that is recognizable to you. Then right-click it, go to "Properties", and change the "Target" to:

"C:\Program Files\Blender Foundation\Blender\blender.exe" --app-template Make_tutorial

Add the command outside the quotes. Now, when starting Blender from this new icon, the chosen application template should load by default.

Running Python when loading the application template

For the example above, I also tried to change the resolution scale preference of the user interface so that it would be easier to see what was happening on screen. The resolution scale can be found and set manually in user preferences, under the interface section at the top. However, user preferences that are outside the scope of userpref.blend are shared globally in Blender, and changes will affect any application template or default setup. I found a way to change the resolution scale when loading the application template, but not a reliable way to change it back after the program was closed or another new file was loaded. This goes for all preferences. Here is the "__init__.py" file that I ended up with. It is very basic.

    import bpy

    def register():
        # Code to run when the template loads goes here.
        bpy.context.preferences.view.ui_scale = 1.5

    def unregister():
        pass

    if __name__ == "__main__":
        register()

The code you want to execute on load is put in the register function. Using the Blender API, I found the property corresponding to the resolution scale and set it to 1.5. I will have to change it back manually every time I use this template, but that is better than forgetting about it and ending up with footage that can't be seen properly.
I guess I could add a similar script to all the other application templates I am using, but if there are a lot of templates, it can become hard to manage.

Final thoughts

Application templates are a great addition to Blender for speeding up specific workflows and streamlining the interface for any given task. It is simple: just a few files added into a folder. It also has limitations, with most of the user preferences being global. That probably makes sense in most cases, since a lot of those settings depend more on your computer than on Blender itself, but it would have been nice to be able to make additional preference changes as well.

If you like this kind of content, please consider joining our e-mail list. Thanks for your time.
https://artisticrender.com/how-to-use-application-templates-in-blender/
2017-03-31 02:58 AM

Hi there,

I have an environment in which I have a SAS aggregate with Flash Pool (aggr1) and an SSD aggregate (aggr2), serving different purposes. Per node, of course. When creating my SVMs (one per node, again), I'm not sure about the best practices for which aggregate to choose as the root aggregate for the root volume. The "where" question.

When creating SVMs which will serve data volumes residing in the SAS aggregates, I choose those aggregates for the root volume. Easy. But what about when creating SVMs which will serve data volumes residing in the SSD aggregates? I see two options:

- Esthetic: I place the root volume in the same SSD aggregate in which I will place the data volume.
- Economize: I place the root volume in a SAS aggregate, so I don't waste SSD resources. Yay, I know it's just 1 GB.

I have been reading some articles and best-practices KBs, with lots of useful how/what/why/when tips (including LS mirrors), but I cannot find anything about "where". I tell myself that, as my data volumes' namespace is under the root volume's namespace, I should use the SSD aggregate in this case, but I want to be sure.

Thanks to everybody in advance! I hope this is not a very stupid question.

2017-04-10 11:34 PM - edited 2017-04-11 11:25 PM

Your SVM root volume can be in any of the data aggregates you have (as long as it's not in the node's root aggregate, which has a CFO failover policy).

I'm a little curious about this: "When creating my SVM (one per node, again)". There is no such requirement. The best practice is to keep the SAN protocols in one SVM and NAS in another SVM, but customers usually have various requirements for virtualizing their storage, so they can create as many as they want. Keep in mind that manageability will become an issue.

There is not much data under the SVM root volume; it's mostly there for the namespace. It used to be a 20 MB volume, and NetApp increased it to 1 GB with ONTAP 8.3. Most of the information in it is kept in memory; unless there is a change, the system uses the info from memory. So it doesn't matter whether you store it on SSD or SAS drives.

Hope this helps,
Robin.
http://community.netapp.com/t5/Data-ONTAP-Discussions/Where-to-place-SVM-root-volume/td-p/129652
Flutter BLoC is a great architecture pattern, which was warmly welcomed by the community. But it wasn't created just for mobile applications. It was introduced as a pattern that allows for sharing up to 50% of code between Flutter and AngularDart applications. In this blog post, I will share the experience of converting my Flutter app to the web, and I will compare this solution to Flutter For Web.

For this blog post, I will be using the application that I created as part of a BLoC tutorial. If you are not familiar with the BLoC pattern, or if you want a better understanding of the app that I will be converting, I advise you to take a look at it first. The finished shared-code application can be found in this repository.

Shared application code can only be applied if you want to have the same behavior on both mobile and web. You can, of course, create different states, events, and BLoCs just for one platform, but creating many of them makes shared code less profitable. Your application also has to be built based on the BLoC pattern. It would be best if it were created based on the BLoC library, which is supported both on Flutter and AngularDart.

Since both the mobile and web applications will behave in the same way and have the same states and events, they can share everything except the UI classes. Keeping that in mind, a diagram of a standard shared BLoC application might look like this:

As you can see on the diagram above, all three layers that provide data and application states are shared between the mobile and web UI. The first thing I had to do to start working with shared code was to move all of the shared classes to a new folder, which should be created in the same place that the mobile and web applications will be. So in the end, a folder containing the application should look like this:
So all BLoCs, models, repositories, and data providers can be moved to common_bloc_lyrics. Afterward, I had to add a pubspec.yaml file, so that it could provide the necessary dependencies and allow importing this package into the web and mobile projects.

    name: common_bloc_lyrics
    description: Lyrics app shared code between AngularDart and Flutter
    version: 1.0.0+1

    environment:
      sdk: ">=2.1.0 <3.0.0"

    dependencies:
      meta: ^1.1.6
      equatable: ^0.2.0
      http: ^0.12.0
      bloc: ^2.0.0

Then, I had to create a file common_bloc_lyrics.dart, which would export all the dependencies that this new package will provide.

    export 'src/model/song_base.dart';
    export 'src/model/api/search_result_error.dart';
    export 'src/model/api/search_result.dart';
    export 'src/model/api/song_result.dart';
    export 'src/model/api/artist.dart';
    export 'src/repository/local_client.dart';
    export 'src/repository/lyrics_repository.dart';
    export 'src/repository/lyrics_client.dart';
    export 'src/bloc/song/search/songs_search.dart';
    export 'src/bloc/song/add_edit/song_add_edit.dart';

Now I had to change all of the imports in this package to the import of the newly created file:

    import 'package:common_bloc_lyrics/common_bloc_lyrics.dart';

And that was it. Thanks to these changes, the shared library could be imported by both the Flutter and AngularDart projects.

Since all the shared dependencies were moved from the mobile app project, I had to modify the pubspec.yaml file of my Flutter project so that it would import these classes as if they were just another plugin.

    name: flutter_mobile_bloc_lyrics
    description: Flutter BLoC shared code example
    version: 1.0.0+1

    environment:
      sdk: ">=2.1.0 <3.0.0"

    dependencies:
      ...
      common_bloc_lyrics:
        path: ../common_bloc_lyrics

After that, I just had to change all of the imports of classes that will be shared to the export file that was created in the common package, similarly to what I did before.
Every class that will use BLoCs or models should have this import:

    import 'package:common_bloc_lyrics/common_bloc_lyrics.dart';

And that was it. Thanks to these changes, I managed to make my mobile application work with the shared-code package.

To create an AngularDart project, I used the recommended Dart project generator, called Stagehand. After downloading it, I also had to update my system path so that it would point to the required tools. Creating an AngularDart application with Stagehand is as simple as generating a new Flutter project; you just need to type this command:

    stagehand web-angular

As I stated before, I don't have a lot of experience in working with web applications. The one that I wrote was based on the GitHub search example from the BLoC library documentation. Since this code is very similar to the one from the example, I won't describe the web project itself. But if you are curious about what it is like, you can check out its repository.

To run AngularDart projects in your browser, you need to execute this command:

    webdev serve

Working with AngularDart differs a lot from writing an application in Flutter. The views are written in HTML, and Dart files provide data and class instances to them. So, for example, the project's search bar Dart class looks like this:

    import 'package:angular/angular.dart';
    import 'package:common_bloc_lyrics/common_bloc_lyrics.dart';

    @Component(
      selector: 'search-bar',
      templateUrl: 'song_search_bar_component.html',
    )
    class SearchBarComponent {
      @Input()
      SongsSearchBloc songsSearchBloc;

      void onTextChanged(String text) {
        songsSearchBloc.add(TextChanged(query: text));
      }
    }

It looks similar to the mobile app version of this component. SongsSearchBloc is injected into this class thanks to the @Input() annotation. Similarly to the mobile version, the method onTextChanged should be called each time the input in the search bar changes.
The templateUrl property tells which view layout should be used for the creation of this component. The search bar HTML file looks like this:

    <label class="clip" for="term">Enter song title</label>
    <input id="term"
           placeholder="Song title"
           class="input-reset outline-transparent glow o-50 bg-near-black near-white w-100 pv2 border-box b--white-50 br-0 bl-0 bt-0 bb-ridge mb3"
           autofocus
           (keyup)="onTextChanged($event.target.value)" />

Thanks to the keyup binding, on every character typed by the user, the function onTextChanged from the Dart class will be called. The dependency for the BLoC injection is created in the main search component, which, similarly to the mobile version, creates view instances for both the search bar and the list.

    @Component(
      selector: 'search-form',
      templateUrl: 'search_song_component.html',
      directives: [SearchBarComponent, SearchBodyComponent],
      pipes: [BlocPipe],
    )
    class SearchSongComponent implements OnInit, OnDestroy {
      @Input()
      LyricsRepository lyricsRepository;

      SongsSearchBloc songsSearchBloc;

      @override
      void ngOnInit() {
        songsSearchBloc = SongsSearchBloc(
          lyricsRepository: lyricsRepository,
        );
      }

      @override
      void ngOnDestroy() {
        songsSearchBloc.close();
      }
    }

Initialising the BLoC in ngOnInit makes sure that its instance is created at the initialisation of the component and is accessible to all of the views created inside this class. Each time the view is destroyed, the method ngOnDestroy is called, so it is the best place to close the BLoC, since it won't be needed anymore.

Since web applications have special security restrictions, it is worth pointing out that you cannot make a request to a web server that doesn't set CORS headers accepting requests from the domain that your app is on. The API that I worked with didn't have this feature, so to make my web app run I had to launch my browser with CORS checks disabled.
When you are working with a custom API, it isn't hard to add CORS headers to it, but you need to keep in mind that you can't do this with most external APIs.

Creating a shared web and mobile application takes time. It also requires some knowledge of Dart, Flutter, and AngularDart. It might not be the best solution for most mobile developers. But if you want to convert your Flutter app to work on the web, there is a simpler way. Flutter For Web lets you easily convert any application to a working web application, regardless of which architecture you use. Since it is still in preview, it is not advised to use it in a production-ready application. If you want to learn more about this, I would recommend this great article.

By following the steps described in the article, I managed to convert and run my application locally in my browser. To make my app work, I had to make some workarounds in the Dart code. You can see how I did that in the commits in the repository. But these workarounds still didn't make my app work, and all I saw was a white screen. The next thing I had to do was remove the library that I used for app translations, since it was not supported on the web. These fixes made my application work, but not everything behaved as it should. In the mobile app version, when you swiped an element on the list, it was removed. But similarly to the library for translations, the one used for swipe-to-delete is not supported in web projects. What was a pleasant surprise to me was that the BLoC libraries worked as they should. Nothing behaved differently in the web application.

For now, there is no easy way for mobile developers to convert their mobile apps to production-ready web applications. Working with shared code requires a lot of knowledge, and it is not so well supported, since AngularDart is not as popular as other web technologies. But for now, the best way to go would be to create a production-ready web application with Flutter.
This makes it another reason to keep your fingers crossed for the success of Flutter web. Photo by Yogas Design on Unsplash
https://www.netguru.com/codestories/mobile-and-web-shared-code-with-flutter
States could present a different configuration. Changes between states can be animated using transitions, as discussed further below.

All Item-based objects have a default state, and can specify additional states by adding new State objects to the item's states property. Each state has a name that is unique for all states within that item; the default state's name is an empty string. To change the current state of an item, set the state property to the name of the state.

Non-Item objects can use states through the StateGroup element.

To create a state, add a State object to the item's states property, which holds a list of states for that item. Following is an example. Here, the Rectangle is initially placed in the default (0, 0) position. It has defined an additional state named "moved", in which a PropertyChanges object repositions the rectangle to (50, 50). Clicking within the MouseArea changes the state to the "moved" state, thus moving the Rectangle.

    import Qt 4.7

    Rectangle {
        id: myRect
        width: 200; height: 200
        color: "red"

        MouseArea {
            anchors.fill: parent
            onClicked: myRect.state = 'moved'
        }

        states: [
            State {
                name: "moved"
                PropertyChanges { target: myRect; x: 50; y: 50 }
            }
        ]
    }

A State item defines all the changes to be made in the new state. You could specify additional properties to be changed, or create additional PropertyChanges for other objects. (Note that a State can modify the properties of other objects, not just the object that owns the state.) For example:

    State {
        name: "moved"
        PropertyChanges { target: myRect; x: 50; y: 50; color: "blue" }
        PropertyChanges { target: someOtherItem; width: 1000 }
    }

A State is not limited to performing modifications on property values. It can also, for example, run scripts through StateChangeScript, change an item's parent through ParentChange, or change an item's anchors through AnchorChanges.

The States and Transitions example demonstrates how to declare a basic set of states and apply animated transitions between them.
Of course, the Rectangle in the example above could have simply been moved by setting its position to (50, 50) in the mouse area's onClicked handler. However, aside from enabling batched property changes, the use of states allows an item to revert to its default state, which contains all of the item's initial property values before they were modified in a state change. The default state is specified by an empty string. If the MouseArea in the above example was changed to this:

    MouseArea {
        anchors.fill: parent
        onClicked: myRect.state == 'moved' ? myRect.state = "" : myRect.state = 'moved';
    }

this would toggle the Rectangle's state between the moved and default states when clicked. The properties can be reverted to their initial values without requiring the definition of another State that defines these value changes.

The when property is useful for specifying when a state should be applied. This can be set to an expression that evaluates to true when an item should change to a particular state. If the above example was changed to this:

    Rectangle {
        ...
        MouseArea {
            id: mouseArea
            anchors.fill: parent
        }

        states: State {
            name: "moved"; when: mouseArea.pressed
            ...
        }
    }

the Rectangle would automatically change to the moved state when the mouse is pressed, and revert to the default state when it is released. This is simpler (and a better, more declarative method) than creating onPressed and onReleased handlers in the MouseArea to set the current state.

State changes can be easily animated through transitions. A Transition defines the animations that should be applied when an item changes from one state to another. If the above example was modified to include the following Transition, the movement of the Rectangle would be animated:

    Rectangle {
        ...
        MouseArea { ... }

        states: [
            ...
        ]

        transitions: [
            Transition {
                NumberAnimation { properties: "x,y"; duration: 500 }
            }
        ]
    }

This Transition defines that if any x or y properties have changed during a state change within this item, their values should be animated over 500 milliseconds. See the Transitions documentation for more information.
https://doc.qt.io/archives/qt-4.7/qdeclarativestates.html
Hi all,

On my issue creation screen, when I click on the Create button, the Create transition is run. I would like to specify some rules in the validator part of this transition. In my case, I have:

- a listbox (called Type) where I must choose between the following values: Project, GPA, Versioning Service, DBA
- a listbox (called Project) containing a list of available projects
- a textbox (called RP Code)

I would like to control the following:

    If Type.value = "Project" then
        [listbox Project and textbox RP Code are mandatory]
    End if

If we choose the other values of the Type listbox (GPA, Versioning Service or DBA), the Project listbox is not mandatory, and neither is the RP Code textbox.

I think I will have to use cfValues['Project'], cfValues['RP Code'] and cfValues['Type'], but I don't know how.

Thanks a lot.
Christophe

The condition should be something like:

    cfValues['Type'].value != 'Project' || (cfValues['RP Code'] && cfValues['Project'])

ie, either the type is not Project, or both other fields must have values. Test in Admin -> Built-in Scripts -> Condition Tester.

There is one ] missing at the end of Jamie's condition, before the ).

Thank you, but it does not work unfortunately... 'Type' is a simple list component... maybe cfValues['Type'].value is not the right way to obtain its value? For the moment, I am just testing cfValues['Type'].value == "Project"... but it still does not work :-(

If you have configurable options for a custom field, cfValues[] contains an Option object where you have to use getValue() to get the actual string value of the selected option. So cfValues['Type'].value should get you the text of the selected option. Did you try the first and second part of the condition separately in the condition tester mentioned by Jamie?

Is Type a custom field? Or is it the issue type system field?
If you want to access the issue type you have to write issue.issueTypeObject.name == "Project" As Henning said, test it in pieces, and add some asserts. See Hi Jamie et all, I was wondering if there was anything special that needs to be done to view the values on the Create Issue screen inside of a Groovy validator. I have tried the above stated "cfValues" array, but I don't see anything in the validator code. This problem only happens when creating a non-subtask issue (story/bug/epic). Any thoughts would be appreciated. thanks! No, there is nothing specific here... Not sure what this means: "but I don't see anything in the validator code" Probably best to post a new question with a concrete example of what doesn't work. Many thanks Jamie, I'll post up a new question with a proper example soon. cheers! Hi Jamie, After trying to create a new big of sample code to demonstrate the problem to you, it seemed to wanted to work. I've tested it locally on my JIRA environment as well as on our Dev/Test environments so the code works great. Some code to add in the validator of the create transition that might help others debug which is a combination of the above suggestions. Thanks again! saleem import org.apache.log4j.Category def Category log = Category.getInstance("com.onresolve.jira.groovy.PostFunction") log.setLevel(org.apache.log4j.Level.DEBUG) log.debug "_________________________" log.debug "IssueKey : " + issue.getKey() log.debug "Issue : " + issue log.debug "Description : " + issue.getDescription() log.debug "Summary : " + issue.getSummary() log.debug "CustomValsArray: " + cfValues log.debug "*** END ***" log.debug "" log.debug "".
https://community.atlassian.com/t5/Jira-questions/Script-Runner-verify-values-specified-by-users-on-validator-of/qaq-p/134813
I'm trying to parse a file that contains a bunch of entries which, among other fields, contain a date in the last column:

    Walmart,Retail,482,-0.7,2200000,Arkansas,31-10-1969

This is the reader:

    from datetime import datetime

    def readdata(fname):
        print('*'*5, 'Reading Records From File', fname, '*'*5)
        data = []
        readf = open(fname, 'r')
        for line in readf:
            name1, name2, No_1, No_2, No_3, name3, date1 = line.split(',')
            date = datetime.strptime(date1, '%d-%m-%Y')
            Number1 = float(No_1)
            Number2 = float(No_2)
            Number3 = int(No_3)
            rec = [name1, name2, Number1, Number2, Number3, name3, date]
            data.append(rec)
        readf.close()
        print('\nDone.\n\n')
        return data

It fails with:

    Traceback (most recent call last):
      File "C:\Users\Keitha Pokiha\Desktop\New folder\Program 2.py", line 42, in <module>
        main()
      File "C:\Users\Keitha Pokiha\Desktop\New folder\Program 2.py", line 39, in main
        data = readdata('fname.txt')
      File "C:\Users\Keitha Pokiha\Desktop\New folder\Program 2.py", line 12, in readdata
        date = datetime.strptime(date1,'%d-%m-%Y')
      File "C:\Users\Keitha Pokiha\AppData\Local\Programs\Python\Python35-32\lib\_strptime.py", line 510, in _strptime_datetime
        tt, fraction = _strptime(data_string, format)
      File "C:\Users\Keitha Pokiha\AppData\Local\Programs\Python\Python35-32\lib\_strptime.py", line 346, in _strptime
        data_string[found.end():])
    ValueError: unconverted data remains:

The problem that you seem to be having is that when you do for line in readf:, line ends with the newline character \n (which signals a new line), so instead of trying to convert 31-10-1969 to datetime, Python is trying to convert 31-10-1969\n using the format %d-%m-%Y. Therefore, when it finishes parsing the year (%Y) it finds an unexpected \n, and that's why you're seeing that error: it doesn't know what to do with it. You have several options to fix this.

Below you'll find two that "fix" the read line, and a third that "fixes" the format expected by datetime:

1. You can remove that \n using rstrip after you've read the line:

    name1, name2, No_1, No_2, No_3, name3, date1 = line.rstrip().split(',')
    date = datetime.strptime(date1, '%d-%m-%Y')

2. Or you could use the method explained here and remove the last character in the line, like this:

    name1, name2, No_1, No_2, No_3, name3, date1 = line[:-1].split(',')

3. Or you could tell the datetime module to expect a newline as well in the string:

    name1, name2, No_1, No_2, No_3, name3, date1 = line.split(',')
    date = datetime.strptime(date1, '%d-%m-%Y\n')

I'd use 1., because if your line doesn't end with a newline character, everything will still work.

PS (as a side note): if you're reading a comma-separated-value file, I'd strongly suggest you make use of the csv module's csv.reader.
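Combining option 1 with the csv suggestion, here is a minimal sketch of the fixed reader (same columns as the question; the sample file name is made up):

```python
import csv
from datetime import datetime

def readdata(fname):
    """Read comma-separated company records, parsing the last column as a date."""
    data = []
    with open(fname, newline='') as readf:
        for row in csv.reader(readf):
            name1, name2, no_1, no_2, no_3, name3, date1 = row
            # csv.reader hands back fields without the trailing '\n',
            # so strptime sees exactly '31-10-1969'
            rec = [name1, name2, float(no_1), float(no_2), int(no_3),
                   name3, datetime.strptime(date1, '%d-%m-%Y')]
            data.append(rec)
    return data
```

The with block also replaces the manual open/close pair from the original.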
https://codedump.io/share/0p0MUwzxJumZ/1/python-datetime-format
A simple DI container can be written like this:

    public class SimpleDIContainer
    {
        Dictionary<Type, object> _map;

        public SimpleDIContainer()
        {
            _map = new Dictionary<Type, object>();
        }

        /// <summary>
        /// Maps an interface type to an implementation of that interface, with optional arguments.
        /// </summary>
        /// <typeparam name="TIn">The interface type</typeparam>
        /// <typeparam name="TOut">The implementation type</typeparam>
        /// <param name="args">Optional arguments for the creation of the implementation type.</param>
        public void Map<TIn, TOut>(params object[] args)
        {
            if (!_map.ContainsKey(typeof(TIn)))
            {
                object instance = Activator.CreateInstance(typeof(TOut), args);
                _map[typeof(TIn)] = instance;
            }
        }

        /// <summary>
        /// Gets a service which implements T
        /// </summary>
        /// <typeparam name="T">The interface type</typeparam>
        public T GetService<T>() where T : class
        {
            if (_map.ContainsKey(typeof(T)))
                return _map[typeof(T)] as T;
            else
                throw new ApplicationException("The type " + typeof(T).FullName + " is not registered in the container");
        }
    }

Then, we can construct a small program which creates a container, maps the types, and then queries for a service. Again, a simple, compact example, but imagine what this would look like in a much larger application:

    class Program
    {
        static SimpleDIContainer Container = new SimpleDIContainer();

        static void Main(string[] args)
        {
            // Map ILogger to TextFileLogger
            Container.Map<ILogger, TextFileLogger>();

            // Create a DirectoryWatcher using whatever implementation for ILogger is contained in the DI container
            DirectoryWatcher watcher = new DirectoryWatcher(Container.GetService<ILogger>());
        }
    }

If you wish, you can download the complete project source to follow along with. Once the above has been installed, let's begin:

1) Open up VS2010 and create a new MVC 3 Web Application (I've called mine MvcNinjectExample). After this screen you will get a dialog asking you to choose an Empty application or an Internet Application - just choose "Empty".
2) Once the project has loaded, right-click 'References' in solution explorer and choose "Add library package reference.." (this is NuGet in action).

3) On the left, choose "Online" then search for Ninject. In the search results, you should be able to see "Ninject.Mvc3". Select this and click the 'Install' button. This will download Ninject and all the other things it needs in order to work, including the WebActivator library, which gives us a place to create our dependencies.

4) Once everything has installed, look for the 'AppStart_NinjectMVC3.cs' file which has now appeared in your solution and open it:

    namespace MvcNinjectExample
    {
        public static class AppStart_NinjectMVC3
        {
            public static void RegisterServices(IKernel kernel)
            {
                //kernel.Bind<IThingRepository>().To<SqlThingRepository>();
            }

            public static void Start()
            {
                // Create Ninject DI Kernel
                IKernel kernel = new StandardKernel();

                // Register services with our Ninject DI Container
                RegisterServices(kernel);

                // Tell ASP.NET MVC 3 to use our Ninject DI Container
                DependencyResolver.SetResolver(new NinjectServiceLocator(kernel));
            }
        }
    }

5) Add a new folder called 'Logging'.

6) Now, head back to your "AppStart_NinjectMVC3.cs" file and set up the binding for this class:

    public static void RegisterServices(IKernel kernel)
    {
        kernel.Bind<ILogger>().To<TextFileLogger>();
    }

7) Finally, let's create a controller which uses it. Right-click on the 'Controllers' folder in Solution Explorer, select 'Add >' and choose 'Controller...'.

Download project source code

41 thoughts on "Dependency Injection: A Beginner's Guide"

Hi, I was following your tutorial and I cannot find the 'Ninject.MVC3' lib on NuGet, do you know what happened to it!? Thanks!

Hi Alex, I should have known something would change within hours of putting up this article. It seems that you should include the package "Ninject.Web.MVC3" instead.
This package doesn't seem to depend on WebActivator like the other one did, so you don't get the AppStart_NinjectMVC3.cs file. However, I've just done a quick test and everything that was supposed to go in there (as done in my article) can just go in the normal Application_Start() method in the global.asax.cs file. I'll update the article later today with up-to-date instructions. I can't find anywhere on the web where this change is documented though. Hope that helps!

Thank you very much Steve, I will play with the new package while you update the article. BTW, nice post!

Thanks – glad it helped you out! It looks as though the change has been reversed, so everything in this original article still applies. Hooray!

See my blog post at about the changes to Ninject.MVC3. You need to do some minor changes to update your blog post.

Excellent – thanks for the heads up. Cheers Steve.

Thanks to your walkthrough I think I finally understand DI, interfaces, and Ninject.

Hey Steve, your ability to simplify key concepts without assuming that the reader is already knowledgeable about the framework jargon is a very rare skill. I read a ton of material trying to simply understand DI and DI containers, but your simple article above gave me the comprehension I needed. Thanks and keep up the good work.

@Beowulf, @Hassan: Thanks guys, I'm glad you found the article useful and that it helped you understand the concept!

Clear explanation of what DI is. Finally I understood the concept behind this (after reading a lot of articles). Thank you Steve

Thanks Srinivas, I'm glad it helped!

Excellent post! Admirable. Is it possible for you to give us a downloadable sample where SQL connectivity will also be part of the binding using MVC3?

I could do, but honestly the principle would be the same. Create an interface which represents how you want to access your data, then as part of its implementation you connect to a database to retrieve the data. Then just register the interface with the Ninject kernel (as in the example above) and insert it into your constructor. Basically you do it in exactly the same way – the key is in the implementation.

Absolutely brilliant article. I've struggled to understand the concepts behind it and that article explained it very well.

Thanks – I'm glad you found it useful!

Very nice article!! After a long struggle I was able to understand what DI is and where to use it.

Very good article on Dependency Injection.. Thanks

Thanks Dan, glad you found it useful

An excellent article, very useful! Thanks Steve, I finally landed on the right place…

Hi Steve, I will echo everybody's comment: "A wonderful article in a much simplified manner." I need help - will it be possible for you to provide a PDF version of this tutorial, as in our offices this site is blocked? If it suits, then you can mail me the PDF version.

Spot on mate, found this article extremely useful. Thanks

Thanks, I'm glad you found it useful!

Awesome explanation! I am using your article as a reference for other developers in our organization!

Thanks Jose, I'm glad you enjoyed it!

It was very helpful. Thanks for the post. It was a really concise explanation and really helped me understand the concept….

Very informative and straightforward. Many thanks!

First time I didn't get stuck while reading an article about DI; it's really self-explanatory, thank you Steve.

Useful article which taught me the basics of Dependency Injection. THX

This is an excellent article that clearly and concisely explains Dependency Injection! Thank you very much for writing this up!

Just wanted to thank you for the article. It helped me explain the concept to my students. You take the reader through this so clearly and concisely, you should consider writing a book on beginning MVC. Thanks Abhijit

Hello Alex, this article is so simple and gave me a quick breakthrough to work on MVC..

I have one question. Here we are using an object of only one type derived from the interface throughout the application. How about handling a case where we use objects of different types derived from the same interface? Please explain. Thanks, varma

Hi Varma, generally this would be forbidden using this pattern. If you can only request things via their interface, then it follows that you can only register one implementation for that interface. Otherwise, how would the system know which implementation it is you want? The solution would be to specify additional information as to which implementation you want, but then you are not practising proper Dependency Injection, as you would have to know in advance what you want the implementation to do. You should not care how the implementation does its job, only that it can do that job according to the contract. Hope that helps!

Wow! Thanks for this excellent article! I just suffered through reading "Dependency Injection with Unity" (free eBook from Microsoft) and after all 147 pages still felt "Duh" on the concept. After reading this single web page, it all makes sense. Had already been doing injection, just not using a container. Click. Boom. Got it! Thank you!!

Nice, but creating your own DI container is pretty much recreating the wheel. I am a Microsoft guy and I love the Unity application block; here's a video which demonstrates how to use it and it makes things simpler

Thanks for the video. You're right in that creating your own DI container is reinventing the wheel, which is why I only touch on it as a learning tool to demonstrate essentially what a DI container does. I then go on to focus on using an existing DI container (Ninject) in a web project.

Hi Steve, this is an excellent article. However, I am wondering how I can use NInject to test the action methods of my MVC controller. These action methods typically do CRUD operations using my services (& service layer).
I already have services which are stable and which I use in my product based on ASPX pages. Now I am moving to MVC4/5 and want to do unit testing of controller action methods, hence I request your views.

Hi Shrirang, you would basically follow the same procedure as in the MVC section in the article. Given that your controller requires these services in order to be constructed, when you are writing your unit test you would simply pass something that implements your service interface to the controller for the purposes of the test. One way to do this is to use a mocking/stub framework (personally I use Moq) which allows you to create a mock version of your service so that the controller can use it. As an example, your controller might look like this:

    public class TestController
    {
        private readonly IService service;

        public TestController(IService service)
        {
            this.service = service;
        }

        public ActionResult Index()
        {
            return this.View(this.service.GetFoo());
        }
    }

And your test might look like:

    [TestMethod]
    public void TestController_Index()
    {
        // Setup
        var service = new Mock<IService>();
        var controller = new TestController(service.Object);

        // Run
        var result = controller.Index();

        // Assert
        Assert.IsNotNull(result);
        service.Verify(v => v.GetFoo(), Times.Once()); // use Moq to verify that the controller called GetFoo() once
    }

Hope that helps!
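The container idea in this article is not C#-specific. As a cross-language illustration only (a rough, hypothetical Python translation of the SimpleDIContainer shown earlier, not the author's code), the same map-then-resolve shape looks like this:

```python
class SimpleDIContainer:
    """Maps interface (abstract) types to eagerly created singleton implementations."""

    def __init__(self):
        self._map = {}

    def map(self, interface, implementation, *args):
        # Instantiate eagerly, like the C# version's Activator.CreateInstance
        if interface not in self._map:
            self._map[interface] = implementation(*args)

    def get_service(self, interface):
        try:
            return self._map[interface]
        except KeyError:
            raise LookupError(f"{interface.__name__} is not registered in the container")


class ILogger:
    ...

class TextFileLogger(ILogger):
    def log(self, message):
        print("LOG:", message)


container = SimpleDIContainer()
container.map(ILogger, TextFileLogger)
logger = container.get_service(ILogger)  # callers only know the interface
```

As in the C# version, callers ask for the interface and never name the concrete type, so swapping TextFileLogger for another ILogger is a one-line change at registration.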
http://stevescodingblog.co.uk/dependency-injection-beginners-guide/
I took out the trash and went for a run at about 11pm tonight. When I got back I sat down and wrote this. In brief, it defines a function "decorate" and a metaclass "DecoratableType" which may be used in this manner:

    class Demo(object):
        __metaclass__ = DecoratableType

        decorate(staticmethod)
        def foo(x, y):
            print 'Static method foo called with', x, y

This closely parallels the '@decorator' syntax, but requires no changes to the interpreter.

What are its advantages over the '@decorator' syntax?

1) It's pure-python.
2) It requires no syntax changes.
3) It works in Python 2.2, 2.3, and yes, 2.4a1, three times as many Python versions as will support the new syntax (once 2.4 is actually released ;).
4) It supports arbitrary expressions to specify decorators (the '@' patch only supports "dotted names". For example, "@foo" and "@foo.bar(baz)" will work, but "@foo or bar" will not).

What are its disadvantages (and there sure are some)?

1) It depends on source line numbers. It's possible that this information could be unavailable in certain environments (I can't think of any, but I'm sure there are some). The posted version also will not deal with intervening whitespace, comments, or decorate() calls well. This can be fixed without very much difficulty.

2) The posted version only works for classes that use the defined metaclass. It would be possible to remove this restriction, but only by hooking into some other mechanism for performing additional processing to the objects involved. This could be in the form of an import hook or a function which is manually invoked.

Hardly anyone generally comments on my blog posts. I hope that everyone who reads this and thinks a solution along the lines of the one described here will comment and let me know, and everyone who thinks the '@decorator' proposal is better will comment and tell me why I'm wrong.

I agree. completely. You are my squishy friend!
I decided to go ahead and post this to python-dev tonight rather than wait for further feedback, as people seemed generally in favor of the idea on IRC. Here's a link to the post on gmane:

I also have found that the pie-syntax is much less offensive after using it for a bit. I deliberately went and reimplemented a bunch of code that already had decorated functions, and this format is much much clearer to my eyes. I just did a quick change to using a decorate type function, and it's much harder to see what's going on. We already have 'def ' standing out nicely in the source - I see no reason that decorators shouldn't also stand out some. My comment on both your implementation and PJE's amazing piece of work is that they seem rather brittle - more so than I'd be happy with in a release of python.

Line Noise: it's not a question of *whether* you use punctuation, but *how much*. @ in isolation is not a big problem. Random crap like this making its way into the language for such corner cases, is. Once we have @ for decorators and $ for string interpolation and heck, maybe ??~ for callback registration... what next? 'def ' stands out without punctuation. I can grep for 'decorate' a lot more easily than I can grep for '@'. I don't know about you, but I work with email a fair amount in python.

Brittleness / fragility: how are you measuring this? You think random changes to the syntax introduce more instability than the use of the well-documented and used-elsewhere sys._getframe? The implementation depends on sys._getframe; the interface is identical to the @ syntax.

I hate the "pie" syntax. I think it is a blight on the language. Adding punctuation-based syntax is something that should be thought very long, and very hard about: there are a lot of things that I could see python programmers doing a lot more frequently than adding decorators, and wanting a much terser syntax for.
Not only is it ugly and inconsistent with Python, the Java-based history of the decoration operator is dubious at best. (Let's remember that it comes from the @deprecated convention, which adds semantics that affect compiled classes into *COMMENTS*.) The worst thing about this, in my opinion, is the overhead this imposes on new python programmers. The current way that decorators work - foo = bar(foo) - was a great teaching tool to illustrate how functions are really objects. The @ syntax obfuscates this difficult-to-explain fact even further.

Did you also see the followup where the bug was fixed? There's a distinction between "fragile" and "has a simple bug".

Just thought I'd note that I followed up on my own blog here and here.

> I don't like the various pure-python schemes - they are either fragile (relying on sys._getframe et al) or involve additional complexity like metaclasses. If you want backwards compatibility you can just use the current way of doing things (explicitly saying func = staticmethod(func) or whatever).

What's fragile about sys._getframe()? _Zope_ uses it extensively. It will be supported forever. ;) Other parts of the stdlib use it, too. If it is fragile, should not those usages be removed? As to the complexity of metaclasses, that seems rather unreasonable. What about the complexity of the interpreter core changes required to implement '@'? I'll bet they require more than 30 lines of C.

> I'm sure I haven't convinced anybody, and in a way that is the point - nobody is changing their mind. There are three contenders for the syntax and the arguments for and against each one just go round and round. This is a chance to get the feature into python. If we wait, what are we waiting for? A flash of inspiration from someone producing a syntax that everyone loves? I would say that's unlikely.

I'm happy without the feature if no one can agree on how it should work.
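For readers who haven't used both spellings: `@bar` directly above `def foo` is sugar for rebinding `foo = bar(foo)` immediately after the def, which is exactly the teaching point the commenter mentions (shown here in modern Python 3 syntax):

```python
# Pre-decorator idiom: decorate by explicit rebinding after the def.
class Before:
    def greet():
        return "hello"
    greet = staticmethod(greet)  # the foo = bar(foo) pattern

# The '@' syntax under debate: same semantics, written above the def.
class After:
    @staticmethod
    def greet():
        return "hello"
```

Both classes expose an identical static method; the '@' form just moves the transformation next to the signature instead of after the body.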
Doesn't it say anything to you that after more than a year of debate, there is no consensus? I elaborated a bit more about this at As far as brittle - JP's version relies on the line numbers in the source. How is this not a brittle solution?

"Doesn't it say anything to you that after more than a year of debate, there is no consensus?" No, it says that this is a perfect example of what FreeBSD calls a "bike shed". The case for having _some_ way to spell decorators has been well made over the last year - I'm not going to go back and dig out the various posts (there have been hundreds and hundreds on the topic). I was initially against having special syntax for decorators, but have come around to the idea. And now that they're in, I find them a simple way to spell out some rather funky behaviour. The remaining point has been a never-ending discussion over syntax. This is not without precedent - consider the endless hell of the ternary operation discussion. I would really like it if, after a2 is out, everyone took some time to experiment with the new syntax. Rather than letting a visceral "it's an @ sign - fear the perl! fear the perl!" reaction guide you, actually _use_ the feature for a bit.

I was happy with the resolution of the ternary debate :) I've also got a CVS checkout of Python and been playing with the decorator syntax since Monday morning. I'll keep using it, though. Like I said earlier, I'm sure I'll get used to it, and that doesn't make me like it any more.

Probably another feature I'll never use, so I can't get too wrapped up about it. But I truly wish they'd spend this energy on housekeeping. So much of Python is showing signs of neglect, then stuff like this comes around. It's rather discouraging. I wish I could do something about it, but I can't at the moment, or I'd be scratching like a big dog. :-( About once every release or two, some feature comes along that makes me really nervous.
I start to wonder if Python's development is starting to sabotage its superior maintenance characteristics. I usually conclude that maintainability is suffering to sate the feature creeps, but Python is still the easiest to use and maintain of all the languages I know. I really hate the pie syntax. It's only obvious if you care about the feature enough to have researched it. It's not going to be easy to figure out what it means if you don't know what it is. As "the python guru" in a shop full of "regular programmers", I foresee a lot of puzzlement and muddled explanations. But Python will still be the easiest to use and maintain, for now.

Well, speaking of punctuation, you could grep for something more along the lines of r'^\s*@'. Coincidentally this would also catch every epydoc directive in the source, if you use that, which we do.

You lose. Line numbers in the source tend not to change at runtime. And it doesn't _need_ to depend on line numbers, it's just the proposed straw-man implementation which does.

Yes, I did. I tend to measure fragility by counting bugs and bug severity, even if they can be fixed easily, then multiplying by a million for every line of C involved ;-). Right now without the bonus multiplier, your technique - 1, JP's - 0.

I don't, so it's not a loss for me, but that's a good point; parsing code with regexps is fragile as hell. Personally I don't care much about this particular aspect. Greppability is nice, but not a requirement for me. There are lots of features in Python you can't grep for, or that are ambiguous to grep for, already. It is just another example of how so many of the arguments in favor are purely specious. It's easier to grep for? No, it isn't, @ is a very common character already, 'decorate' is far less common. It's necessary to have syntax to put the decorator before the method? No, JP's suggestion clearly proves otherwise. "If you don't like it, don't use it?"
or "If you want compatibility, you can just keep doing it the old way." I've heard the same people who claim this holding their noses at Perl and C++ for the same attitude. I don't buy it. "The alternatives are fragile." - I can think of about three different metrics for "fragile" where the pie syntax loses. How does it possibly win? There are more, of course; there is a book's worth of ranting on python-dev. I've read a nauseating amount of it, and I honestly can't understand why the debate hasn't been squashed in the same way the ternary operators debate was. The arguments I have read so far, in a megabyte or so of text, are circular and at times self-contradictory.

I think the reason is that ternary operators really weren't very useful, but decorators are. People are just arguing about *which* syntax; it's pretty clear that a lot of people do see the need for the feature.
https://as.ynchrono.us/2004/08/python-developers-considered-harmful_02.html?showComment=1091791952000
I looked around and found vaguely similar questions but nothing quite the same... I do apologize if I missed the answer somewhere. I am finishing up a game I wrote in Swift using SpriteKit. In most other games that I've played, I can have iTunes or something playing music in the background and still hear it while I am playing the game. As I am playing my game, I'm noticing that it automatically shuts off the audio from other apps. I am not using AVAudioPlayer.

Set your AVAudioSession category to Ambient:

    import AVFoundation.AVAudioSession

    AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryAmbient, error: nil)

This category is also appropriate for "play along" style apps, such as a virtual piano that a user plays while the Music app is playing. When you use this category, audio from other apps mixes with your audio. Your audio is silenced by screen locking and by the Silent switch (called the Ring/Silent switch on iPhone).
https://codedump.io/share/xYkvl4P2bUBf/1/in-swift-how-do-i-let-other-apps-continue-to-play-audio-when-my-app-is-open
Hi - I'm making a small and relatively simple game that uses the common feature of a scoreboard to keep track of the user's score. I have declared the main game frame in one class, the actual game on one panel in another, and have decided to keep the scoreboard as a class of its own. When I want to update the score using the scoreboard class I can - but when I try to do the same thing from another class, using the same methods outlined in the scoreboard class, it doesn't work. I create an instance of the class as you would and have added a few lines to determine whether or not the method was being called - it was. So why wouldn't it update the way it does in the other class?

The scoreboard class is as follows:

    import java.awt.Color;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.Toolkit;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JLabel;
    import javax.swing.JPanel;
    import javax.swing.JButton;

    public class scoreBoard extends JPanel {

        int userScore = 0;
        String text = "Score: " + userScore;
        int posX = 10;
        int posY = 25;
        JButton changeText = new JButton("Update Score");

        public scoreBoard() {
            changeText.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    text = "Score: " + increaseScore();
                    /**
                    posX += 5;
                    posY += 10;
                    */
                    repaint();
                }
            });
            add(changeText);
            setBackground(Color.RED);
        }

        public void paint(Graphics g) {
            super.paint(g); // Over-ride the paint method
            Graphics2D g2score = (Graphics2D) g;
            g2score.drawString(text, posX, posY);
        }

        public int increaseScore() {
            userScore += 10;
            return userScore;
        }
    }

***Please excuse the randomness of some of the colours :P***

The second class, called MPanel - I have only included the relevant method and the code which creates the instance here:

    scoreBoard keepScore = new scoreBoard();

    /*************************************************/

    public JButton createJButton(int buttonNum) {
        // final int rand = getRandomNum(4);
        /** Check the random numbers are generated
        System.out.println( rand ); */

        // Button becomes a constant
        final JButton button = new JButton();
        button.setText("" + buttonNum);
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                if (button.getText().contains("1") || button.getText().contains("2")) {
                    System.out.println("BOOM");
                    keepScore.text = "Score: " + keepScore.increaseScore();
                    keepScore.repaint();
                    button.setIcon(bomb);
                }
            }
        });
        return button;
    }

So what do you think?
https://www.daniweb.com/programming/software-development/threads/316472/scoreboard-panel-update
This client library is designed to support the DataRobot API.

Installation

Python 2.7 and >= 3.4 are supported. You must have a DataRobot account.

    $ pip install datarobot

Usage

The library will look for a config file ~/.config/datarobot/drconfig.yaml by default. This is an example of what that config file should look like:

    token: your_token
    endpoint:

Alternatively, a global client can be set in the code:

    import datarobot as dr
    dr.Client(token='your_token', endpoint='')

Alternatively, environment variables can be used:

    export DATAROBOT_API_TOKEN='your_token'
    export DATAROBOT_ENDPOINT=''

See the documentation for example usage after configuring.

Tests

    $ py.test

Download Files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
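A sketch of the environment-variable route in Python (the variable names are the ones this README documents; the config_from_env helper and the endpoint URL are illustrative only, not part of the datarobot package):

```python
import os

def config_from_env():
    """Read the two variables this README documents; fail loudly if unset."""
    token = os.environ.get("DATAROBOT_API_TOKEN")
    endpoint = os.environ.get("DATAROBOT_ENDPOINT")
    if not (token and endpoint):
        raise RuntimeError(
            "Set DATAROBOT_API_TOKEN and DATAROBOT_ENDPOINT before creating a client")
    return {"token": token, "endpoint": endpoint}

# Simulate the exported shell variables from the README:
os.environ["DATAROBOT_API_TOKEN"] = "your_token"
os.environ["DATAROBOT_ENDPOINT"] = "https://example.com/api/v2"  # placeholder value
cfg = config_from_env()
```

With real credentials, these two values are what the dr.Client(token=..., endpoint=...) call above expects.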
https://test.pypi.org/project/datarobot/
upload image to M5stack problem

I'm using the M5Stack Basic development kit. I'm trying to upload an image to it using M5.Lcd.drawBitmap(), but it looks like the images are pixelised. Can anyone help me? This is the code:

    #include <M5Stack.h>

    #define img duke
    #define pic

    extern unsigned char pic img[];

    void setup() {
      M5.begin();
      M5.Lcd.drawBitmap(0, 0, 320, 240, (uint16_t*) img);
    }

    void loop() {
      // put your main code here, to run repeatedly:
    }

Is it an 8-bit bitmap? Images need to be made in a specific way. You can find this information in my WIP book, found in the project section, called the UIFlow handbook.

@xeon just keep answering questions, that's all I do. You look like the one who has grasped the use of the StickV, so keep helping out.

@xeon said in upload image to M5stack problem: @ajb2k3 You seem to help a lot around here. Any way I can help too? Btw, have you filled in the icebreaker discussion with a little about yourself and projects?

@ajb2k3 I should, but I don't like telling people where I'm from. People will think I'm a bad person when I just want to help.
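A note on the "pixelised" symptom: drawBitmap is called with a uint16_t* cast, which suggests the display expects 16-bit pixels - commonly RGB565 on these TFT panels (an assumption; the thread itself only hints at the format question). Feeding it an array exported as 8-bit or 24-bit data would then render as noise. A hedged Python sketch of the standard 24-bit to RGB565 packing, usable when preparing the C array offline:

```python
def rgb888_to_rgb565(r, g, b):
    """Pack 8-bit R, G, B channels into one 16-bit RGB565 value:
    top 5 bits of red, top 6 bits of green, top 5 bits of blue."""
    return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)

def convert(pixels):
    """pixels: iterable of (r, g, b) tuples -> list of uint16 values
    suitable for dumping into a uint16_t array."""
    return [rgb888_to_rgb565(r, g, b) for r, g, b in pixels]
```

Each converted value then becomes one entry of the image array the sketch passes to drawBitmap.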
https://forum.m5stack.com/topic/1327/upload-image-to-m5stack-problem/1
CC-MAIN-2020-45
refinedweb
199
85.79
#include <CbcBranchCut.hpp>

Inheritance diagram for CbcBranchToFixLots:
a) On reduced cost
b) When enough ==1 or <=1 rows have been satisfied (not fixed - satisfied)

Definition at line 198 of file CbcBranchCut.hpp.

Infeasibility - large is 0.5. Reimplemented from CbcBranchCut.
Creates a branching object. Reimplemented from CbcBranchCut.

Reduced cost tolerance, i.e. dj has to be >= this before fixed. Definition at line 247 of file CbcBranchCut.hpp.
We only need to make sure this fraction fixed. Definition at line 249 of file CbcBranchCut.hpp.
Never fix ones marked here. Definition at line 251 of file CbcBranchCut.hpp.
Definition at line 253 of file CbcBranchCut.hpp.
Do if depth multiple of this. Definition at line 255 of file CbcBranchCut.hpp.
Number of ==1 rows which need to be clean. Definition at line 257 of file CbcBranchCut.hpp.
If true then always create branch. Definition at line 259 of file CbcBranchCut.hpp.
http://www.coin-or.org/Doxygen/CoinAll/class_cbc_branch_to_fix_lots.html
crawl-003
refinedweb
149
63.66
Introduction: In this article I will explain how to implement a GridView within a GridView (a nested GridView) with expand/collapse options in asp.net.

Description: To implement this concept, first design the tables (Country and State) in your database as explained in the post "populate dropdown based on another dropdown in asp.net". After completing the table design, write the following code in your aspx page. Now add the following namespaces in the code-behind (C# code). After that, add the following code in the code-behind (VB.NET code). Demo

62 comments:

Nice article. How can you save this one in an Excel file? Thanks!

Hi, please help! How to export this one into an Excel file? Need help badly. :'( Thanks!

Please post how to export this to Excel. Thanks!

Hi, I like your blog very much. I am a beginner in asp.net. I have a GridView which shows the project names and it should have expand/collapse. On expand it should have 3 LinkButtons for every project nested inside, and on collapse it should close. Please give the program soon. Please send a complete program to my id chandanmb7@gmail.cm

The above code is looking great. This code also finds the control using the cell name:

if (e.Row.RowType == DataControlRowType.DataRow)
{
    GridView gv = (GridView)e.Row.FindControl("Gd_Specs");
    Label lblItemId = (Label)e.Row.Cells[0].FindControl("lblItemId");
    int itemid = Convert.ToInt32(lblItemId.Text);
    gv.DataSource = DataTableValue;
}

Good day. I am not able to see child grid data; it shows blank, no matter how many rows exist in the State table.

Hi, I have buttons, checkboxes and textboxes inside the inner grid which I need to access. The inner grids get hidden on events. Can you help me with the JavaScript to achieve the same?
Thanks, it's working fine, but when raising any event like the TextChanged event in the nested GridView, the GridView collapses after the event is fired. How do I make it expand again after the event is fired?

Nice article. Is it possible to show no plus image in the parent grid if there is no child data available? Thanks.

It's easy as 1..2..3, thanks a lot for the excellent post! :)

The above article is good. I am new to asp.net. I tried it but I am getting the error "'DataSource' is not a member of 'gridview'".

Hi, please help! How to add the data from one GridView to another GridView? Need help. Thanks!

Hi, how to implement 3 GridViews in one worksheet of Excel? Thank you.

Hi, I am Raju. I want to update an image in a GridView, please help me.

Hi, I want to show data in a GridView and my data source is MS Access; the above code is showing an error.

Hi, this is Naveen. Is it possible to drag and drop a nested GridView? If yes, please give me a suggestion.

I am not able to see child grid data. It shows a blank number of lines. Any idea?

Hello Suresh, can you tell me how to handle the child GridView's PageIndexChanging event in the above nested GridView example?

Hi, please tell me: I want to do drag and drop in a nested GridView. Can anyone tell me how?

It's working fine, but when I add a link button inside the GridView, the GridView collapses when the link button is clicked. How do I keep it always expanded, even after clicking the link button and on page load as well? It's very urgent. Thanks, Santosh.

It's working fine, but if I add a link button in gvChildGrid, how can I find it from a function other than RowDataBound? Please help me ASAP. Thanks, Rajiv.

Mr. Suresh, I just want a GridView that is editable and has a dropdown filter on every column. Thanks.

To display the child GridView, the JavaScript should be as follows (div.style.display = "block"):

if (div.style.display == "none") {
    div.style.display = "block";
    img.src = "minus.gif";
} else {
    div.style.display = "none";
    img.src = "plus.gif";
}

Thank you so much.

This post is very useful. How do I change the data source to my own? I will connect to an Access database. Thanks.

I have a similar grid and I am not able to achieve this! Can anyone help?

Superb article, it was very helpful. But I am trying to export the same to Excel and am not able to see the nested GridView in Excel. Any idea how we can export it?

Amazing article, it makes my app super! But a small fix in the JS:

function divexpandcollapse(divname) {
    var div = document.getElementById(divname);
    var img = document.getElementById('img' + divname);
    if (div.style.display == "none") {
        img.src = "images/minus.gif";
        div.style.display = "block";
    } else {
        img.src = "images/plus.gif";
        div.style.display = "none";
    }
}

Logically right, but I'm facing a problem with your presentation of the article. Please modify it, e.g. use "pre" tags for the coding part. Thank you.

Good one... helped me a lot.

Can anyone help with three nested GridViews — gridview (gridview (gridview))? Please help me with the above, urgent... Thanks.

Suresh, is there any way by which I can load the inner GridView on expansion? Thanks.

Suresh, it works for two-level nesting of GridViews. I need a three-level nested GridView with edit and update on the third level. Edit is working fine for me but update is not picking up the EditItemTemplate controls. Can you please help me on the same? Thanks in advance.

Thanks Suresh, that was excellent. Thanks a lot.

Thanks sir, your coding style helped me a lot.

Hello Sir, it's not working for me...

You forgot this one:

protected void gvParentGrid_SelectedIndexChanged(object sender, EventArgs e)
{
}

This works very well. Do you have any suggestion on loading the data for only one item when the child GridView is expanded? This starts to load slowly with a lot of data.

Expand all / collapse all GridViews in asp.net: my expand and collapse is not working, can anybody help?

Dear sir, is it possible to activate a help GridView within a GridView when I key in something in a textbox in a GridView? I have done this with a FlexGrid control in VB6.

Dear Suresh Sir, nice example. Please help me out. I'm getting the following error: "Both DataSource and DataSourceID are defined on 'Parent'. Remove one definition." for the code:

gvParentGrid.DataSource = ds;
gvParentGrid.DataBind();

Good post. Thank you so much for this example. You saved my ass :)

Thanks so much! Simple and easy to understand! XD

Could you please show the same application with checkboxes in both the parent and child GridViews? I have done the same application with checkboxes in both parent and child GridViews, but the problem is that when I check any checkbox, the GridView automatically collapses. I would like the selected GridView to stay expanded until I collapse the row or expand another row.

How to insert nested GridView data into the database?

Nice deed... keep it up :)

Nice sir, can you please share JavaScript for sub-GridView textbox null validation?

Nice article.

Hi, I like your blog very much. On expand I have an edit button; when I click on the edit button the grid collapses. Please give the solution ASAP.
https://www.aspdotnet-suresh.com/2012/05/gridview-with-in-gridview-or-nested.html?showComment=1411993189310
CC-MAIN-2019-39
refinedweb
1,204
77.43
comp.lang.c Discussion about C. Google Groups Ace Webacademy 2014-04-16T09:41:43Z Web Designing Training Institutes in Hyderabad-Acewebacademy For professional courses in web designing and development at an affordable price choose Ace web academy one of the best web designing institutes in Hyderabad. Call: 7660-966-660. BV BV 2014-04-15T19:37:58Z The Purpose Of Life & Muslim Spoken Word & The Daily Reminder The Purpose Of Life & Muslim Spoken Word & The Daily Reminder Thank you dude....@gmail.com 2014-04-15T14:42:11Z Copy array of structs in one go I want to copy an array of structs (fixed size array and struct). What I "have" now (yes, this is indeed a non-compilable extract, and yes, "BOOL" is a big no-no, and no, I obviously don't have two globals that should contain the exact same thing): struct arrayStruct { int a; int b; BOOL c; } st Bill Cunningham 2014-04-15T00:32:11Z memset I think alot of my problem is not understanding what a function is asking for when it's asking for a pointer. If it's asking for a pointer is it wanting what the pointer is pointing too. memset for example. void *memset (void *s, int c, size_t n); that first parameter. Is it want a pointer to s William Koch 2014-04-15T00:08:11Z Command Line Option Validation when the user runs my program from the command line I require them to type myProgram -k <some_value> <some_value> needs to be a positive integer between 1 and 100. How do I assure they do not type in a string or something like that. Here is what I have so far but I am not sure what to do from he mathog 2014-04-14T21:41:18Z malloc of large string works, but same sized array fails This is an odd one. Normally it doesn't matter if a string is allocated at run time or if it is set up as an array. In this case it does though. 
This causes a segfault at run time: #define MYMAXSTRING 20000000 int main(int argc, char *argv[]){ char bigstring[MYMAXSTRING]; whereas this works wi Jazon Yamamoto 2014-04-13T23:36:02Z Upcoming Multiplatform Game Programming Book for Beginners Hi,ré La Bill Cunningham 2014-04-12T22:13:11Z dereferencing problem My compiler is complaining about line 7 I am pretty sure and is saying something about dereferencing. The only dereference I intend in my code is being passed to the freeaddrinfo(). Beause it wants a struct addrinfo *. Now I am intending to declare a pointer to pointer in that line 7. This compil Jens Schweikhardt 2014-04-12T19:37:53Z Meta-C question about header order hello, world\n consider a small project of 100 C source and 100 header files. The coding rules require that only C files include headers, headers are not allowed to include other headers (not sure if this is 100% the "IWYU - Include What You Use" paradigma). The headers contain only what headers s Udyant Wig 2014-04-10T18:08:31Z Request for source code review of simple Ising model While browsing around the Physics Stack Exchange, I came across an answer by Ron Maimon*, and I saw that I could implement the simulation described therein. I first did a version in Emacs Lisp, on reviewing which I felt that I could develop a version in C, using the Emacs Lisp version as a rough gu G G 2014-04-10T15:11:46Z size_t, when to use it? (learning) typedef unsigned int size_t ............. size_t when to declare something size_t? when the variable is associated with memory usage? at any other time should the variable be declared an unsigned int? it's not a question of style, right? Udyant Wig 2014-04-10T14:45:29Z Is posting source to Usenet redistribution? I have some source code I would like to request the readers to go over, but would like to clarify a matter of etiquette first: I have used the linked list implementation from kazlib by Kaz Kylheku. The implementation has not been modified in any way. 
Would posting the source for the implementati James Kuyper 2014-04-08T23:59:17Z ISO/IEC 2382 Definitions extracted from usenet postings containing "IEC 2382" Output (as data) is ``data that an information processing system, or any of its parts, transfers outside of that system or part'' (ISO/IEC 2382-1 01.01.33). Output (as process) is ``the process by which an information processing syst Rumesh Krishnan 2014-04-08T07:54:16Z How to read and write png image using c program.? I want to perform image processing using c program with opencl computing framework, how to read a png image in a simple way, after read the png image could be in a form of array. how to do .. any one help me.? luser droog 2014-04-07T22:52:16Z Lukasiewicz Logic Machine ~200 lines of lightly-commented C, based on the 1954 Burks, Warren, Wright paper from the 1962 APL book bib. partr...@gmail.com 2014-04-05T20:07:19Z Is enum a suitable way to implement a "local define?" Of course I could use malloc and all that but this isn't about that. Often I do like the following: int i #define sizeMAX 1024 static char buffer[sizeMAX]; i=0; while(i<sizeMAX) { putc(buffer[i]); } ... lots more code that reference buffer[] and sizeMAX ... #undef sizeMAX But this seems messy be Malcolm McLean 2014-04-05T17:04:11Z Baby X on MS Windows I've ported Baby X, the simple C language toolkit, to Microsoft windows Now you can compile a Baby X program for Linux or Windows. There's still at least one glitch to get out - Windows defines an entry as WinMain. I'm not sure what the best way is to allow Baby X programs to compile without the tr BV BV 2014-04-04T14:28:36Z Jesus: an Islamic view Jesus: an Islamic view In this pamphlet, the author shows the nature of the Prophet Jesus as Islam provides. He shows that the Prophet Jesus is a human prophet and does not have any divine nature as Christian believe. 
Did you know that it is obligatory for Muslims to believe in Jesus, or that a SP 2014-04-04T13:27:42Z CFP: MuCoCoS-2014, August, Porto, Portugal 7th International Workshop on Multi/many-Core Computing Systems (MuCoCoS-2014) in conjunction with Euro-Par 2014 25-29 August, 2014, Porto, Portugal AIMS AND SCOPE The pervasiveness of homogeneous and heterogeneous multi-core and many-core processors, in a dmjc...@gmail.com 2014-04-03T20:38:33Z pointer arithmetics and casting Hi,considering data and length have valid values I have the following: struct object { uint32_t length; void *data; }; void objfunc(void *data, uint32_t length) { struct object *obj; obj = malloc(sizeof(*obj) + length); obj->length = length; /*now come the problems*/ obj->data = obj + 1; /*Does James Kuyper 2014-04-03T21:08:20Z Re: New toolchain The crosstool-ng web site says "The IRC support channel is #crosstool-ng on". That's probably the best place to go with such questions. rivkau...@gmail.com 2014-04-03T01:48:01Z Can I jump to a case inside a switch using goto? while((c=getchar())!=EOF){ switch(flower){ case rose: switch(color){ case white: // do something 1 case pink: // do something 2 ... } case lily: switch(color){ case white: // do something 1 case pink: // do something 2 ... } ... case unique: // do something 3 ... } } basically, I can write the a jay 2014-04-02T12:23:53Z compressing charatcers PROBLEM: You are given a string FOOFIGHTERS. You have to come up with an algorithm that will compress this string. You also have to make sure that you are not using extra memory. For example: FOOFIGHTERS will be compressed as FO2FIGHTERS. 
You should not use another array or bitfield to keep a freque jay 2014-04-02T06:31:47Z array-size/malloc limit and strlen() failure #include <stdio.h> #include <limits.h> int main(void) { char arrc[UINT_MAX] = {'a'}; printf("arrc = %s\n", arrc); return 0; } ================ OUTPUT ============== [myhome]$ gcc -ansi -pedantic -Wall -Wextra test2.c test2.c: In function `main': test2.c:6: error: size of array `arrc' is too large Eric Sosman 2014-04-01T11:45:08Z Apocalypse Next Week Most people are aware by now that Microsoft will terminate support for Windows XP in a week's time, on April 8. The Net, the newspapers, and even the broadcast media carry warnings of doom, and at least some of them may be more than mere hype: When something like a third of the world's computers run Skybuck Flying 2014-04-01T08:07:24Z Microsoft ditches C for a new language called: Coo(kie) ! April fools ! Bye, Skybuck. luser droog 2014-04-01T06:34:18Z X-Macros are awesome ... and nobody cares. :( Exhibit A: rivkau...@gmail.com 2014-03-31T02:04:19Z How to modify ((c=getchar())!=NULL) to use fscanf or scanf instead? How to modify ((c=getchar())!=NULL) to use fscanf or scanf instead? For example, compact while loops are written like this in k&r while ((c=getchar())!=NULL){ // do certain actions } and I want to convert a stanza like fscanf ( FpSource, "%d" , &input ); while (input != EOF){ // do certain ac Bart 2014-03-29T18:09:49Z Declaration of main() Never thought I'd be asking about this, but it's giving me some trouble! I want to use a declaration that looks like this: typedef unsigned char* ichar; int main(int nparams,ichar (*params)[]) { int i; for (i=0; i<nparams; ++i) printf("%d: %s\n",i,(*params)[i]); } (Why? Because this will be th Werner Wenzel 2014-03-29T12:16:27Z mbrtoc32 in MinGW-w64 buggy? 
Running the following MinGW-w64-built code on Windows 7 64 bit crashes with me: #include <stdio.h> #include <uchar.h> int main(void) { mbstate_t mbstate; puts("So far okay ..."); mbrtoc32(NULL, "", 1, &mbstate); puts("Not reached due to crash!"); return 0; } It should not crash as the problemati luser droog 2014-03-27T07:53:42Z inca: a hacked-up apl in c based on the J incunabulum compiles without warnings (if you don't ask for warnings) with gcc. currently 313 terse lines. Questions? Improvements? Style-bashing? inca ==== based on the J-incunabulum, lightly extended to allow propagati mathog 2014-03-26T16:54:13Z Table of "safe" methods to suppress "unused parameter" warnings? Sometimes the same parameter list must be passed to a lot of different functions, and some of those will not use all of the parameters, resulting in some compilers emitting "unused parameter" warnings. Here are all of the methods I have found so far for suppressing these: This first set of UNUSED' mathieu 2014-03-26T15:37:00Z UB ? Checking size of off_t Dear all, Could someone please confirm whether the following code is UB or not ? #include <sys/types.h> int main(int argc, char **argv) { /* Cause a compile-time error if off_t is smaller than 64 bits */ #define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62)) int off_t_is_large[ (LARGE_OF fabio 2014-03-25T16:00:11Z Read from stdin without affecting stream position Hello, I have this code: fd_set rfds; struct timeval tv; FD_ZERO(&rfds); FD_SET(0, &rfds); tv.tv_sec = 0; tv.tv_usec = 0; int retval = select(1, &rfds, NULL, NULL, &tv); if (retval && retval!=-1) { .... } I check if the stdin is ready to be read without blocking. I need now to read what has been Bart 2014-03-25T15:57:21Z Wrapping a variadic function I'm trying to wrap the standard function fprintf() with a version called qfprintf(). The following was done according to an on-line tutorial, but it doesn't work (prints three random numbers, not 10 20 30). Should it work? 
If not, how can it be done? #include <stdio.h> #include <stdarg.h> int qfp Peter Percival 2014-03-25T15:39:25Z Configuring Wedit (or maybe lccwin) First, my apologies for asking a question about programming tools rather than the language C... I use wedit and lccwin64. I create my programmes in a directory in D:, and keep C: for Windows' system stuff. Wedit or lccwin64 insists on putting (some at least) project data in a directory on C: whic Johannes Bauer 2014-03-25T13:43:11Z Duplicate integer values in enum Hi group, I was just a bit surprised about the behavior of my compiler and traced a bug down to the fact that enum symbols had duplicate integer values. Now I'm unsure if that is permissible by the standard (I guess it is, but it still surprises me). Consider this code: #include <stdio.h> enum fo Xavier Roche 2014-03-24T15:26:41Z Strict aliasing rule: pointer to void vs. pointer to char and transitivity Hi folks! My understanding of the aliasing rules is that a pointer to a char may alias any other pointer type. The following trivial example is therefore defined in C: static void printBytes(const char *bytes, size_t size) { size_t i; for(i = 0 ; i < size ; i++) { printf("byte [%zu] == %d\n", i, b kerravon 2014-03-21T03:33:44Z binary vs text mode for files Hello. In order for an MSDOS CRLF sequence to be converted into a single NL, a file needs to be opened in text mode. If it was opened in binary mode there would not be anything special about the sequence, and that sequence just happened by random, when we're perhaps dealing with a zip file. 
My que Javier Lopez 2014-03-18T19:45:52Z Choosing a name/codename for your project This isn't a C-only question, but I like to name my functions or globals prepending the project name to avoid namespace pollution as in: void libunknown_dosomething(void); So, what's the best practice, thinking of a codename (deferring the task of choosing the final name) and prepend it where neede anonymous 2014-03-18T04:54:12Z async task synchronisation Hello Experts, I was wondering about this problem how to solve: struct async{ int b; }; async_task(struct async *syn) { printf("%s %d\n", __func__, syn->b); } main() { int i; struct async *syn = malloc(sizeof(*syn)*10); for(i=0;i<10;i++) { syn->b = &i; async_task(syn) //this function will be ca Noob 2014-03-17T14:50:23Z QoI issue: silencing a warning [ NOTE : cross-posted to comp.lang.c and comp.unix.programmer, please trim as you see fit ] Hello, My compiler (gcc 4.7) is being a little fussy about the following code: (trimmed to a minimum) #include <ctype.h> int foo(const char *ext) { int ext_NNN = isdigit(ext[1]) && isdigit(ext[2]) && isdig Steven Carpenter 2014-03-17T00:52:34Z Thread-Safe Generic List Queue in C I wrote this implementation of a List Queue in C that attempts to be thread-safe by the use of pthread mutexes. The Queue entries hold void pointers to data so the Queue can be generic in type. I would like to improve the code if I can. Please, if anyone has any recommendations on improvements or dmjc...@gmail.com 2014-03-16T22:13:39Z freeing structs Hi, if I have: struct rec2 { int i; }; struct rec1 { struct rec2 two; }; int func1(void) { struct rec1 *one; one = malloc(sizeof(struct rec1)); func2(&one->two); /* here should I "free(one)" or that would scramble func2 or any other damage... */ return 1; } Thanks in advance. neto...@gmail.com 2014-03-16T06:10:08Z How to create a section in ELF? 
I'm reading about ELF file format and trying to create one without any library, to really understand how the things works--I'm looking for actual code examples using it. In fact, I was able to create one by setting all needed headers and put some op code to it print a string on std out. But how to c Chicken McNuggets 2014-03-16T04:49:23Z Unicode (UTF-8) in C Hopefully this is a question that is related to standard C. I have (rather shamefully) ignored Unicode when it comes to programming in C up to now. But given the prevalence of it and the ease of use when using other programming languages (Python for example) I thought I'd re-visit the subject when Aleksandar Kuktin 2014-03-15T15:51:30Z Please check my sanity regarding this backtrace Hello all. I'm debugging Polipo, a HTTP proxy whose source code repository is here:;a=summary A poster on polipo-users list posted a backtrace from a core dump generated in a crash. At the bottom of the trace is the following which I can not mathog 2014-03-14T18:47:34Z Musings, alternatives to multiple return, named breaks? I am not a big fan of code that uses multiple returns in one function. (I do it at times, but try not to.) However, without using goto, eliminating the multiple returns leads either to a deeply nested series of "if" blocks or a construct like this int function(}{ int status = 0; while(1) { /* thi Bart 2014-03-14T10:26:17Z 'Declaration reflects use' I've heard of 'declaration reflects use' many times, in connection with helping decode complex type declarations, although it's never really worked for me. Until I played with the following code. The declaration and use of the indirect 'c' array match perfectly! #include <stdio.h> #include <stdlib. Robbie Brown 2014-03-13T16:36:23Z Compiling -fPIC from object files I realise this is a C forum but background is. I'm interfacing with existing C code from Java using the Java native interface. 
I have my C code for which I have written a driver to test the program and it all works as expected (usual caveats on 'expected' of course). I have written the Java inter
https://groups.google.com/forum/feed/comp.lang.c/topics/atom_v1_0.xml?num=50
CC-MAIN-2014-15
refinedweb
2,981
68.3
NAME
ng_atmpif - netgraph HARP/ATM Virtual Physical Interface

SYNOPSIS
#include <sys/types.h>
#include <netatm/atm_if.h>
#include <netgraph/atm/ng_atmpif.h>

DESCRIPTION
The atmpif netgraph node type allows the emulation of atm(8) (netatm/HARP) physical devices (PIFs) to be connected to the netgraph(4) networking subsystem. Moreover, it includes protection of the PDU against duplication and desequencement. It supports up to 65535 VCs and up to 255 VPs. AAL0, AAL3/4 and AAL5 emulation are provided. To save CPU, this node does not emulate the SAR layer. The purpose of this node is to help in debugging and testing the HARP stack when one does not have an ATM board, or when the available boards do not have enough features.

When a node is created, a PIF is created automatically, named hvaX. It has the same features as any other HARP device. The PIF is removed when the node is removed.

HOOKS
There is only one hook: link. This hook can be connected to any other netgraph node. For example, in order to test the HARP stack over UDP, it can be connected to a ng_ksocket(4) node.

CONTROL MESSAGES
This node type supports the generic messages plus the following:

NGM_ATMPIF_SET_CONFIG (setconfig)
Configures the debugging features of the node and a virtual Peak Cell Rate (PCR). It uses the same structure as NGM_ATMPIF_GET_CONFIG.

NGM_ATMPIF_GET_CONFIG (getconfig)
Returns a structure defining the configuration of the interface:

struct ng_vatmpif_config {
    uint8_t  debug;    /* debug bit field (see below) */
    uint32_t pcr;      /* peak cell rate */
    Mac_addr macaddr;  /* Mac Address */
};

Note that the following debugging flags can be used:
VATMPIF_DEBUG_NONE    disable debugging
VATMPIF_DEBUG_PACKET  enable debugging

NGM_ATMPIF_GET_LINK_STATUS (getlinkstatus)
Returns the last received sequence number, the last sent sequence number and the current total PCR that is reserved among all the VCCs of the interface.

struct ng_atmpif_link_status {
    uint32_t InSeq;    /* last received sequence number + 1 */
    uint32_t OutSeq;   /* last sent sequence number */
    uint32_t cur_pcr;  /* slot's reserved PCR */
};

NGM_ATMPIF_GET_STATS (getstats)
NGM_ATMPIF_CLR_STATS (clrstats)
NGM_ATMPIF_GETCLR_STATS (getclrstats)
These return the node's statistics, clear them, or return and reset their values to 0, respectively. The following stats are provided:

struct hva_stats_ng {
    uint32_t ng_errseq;      /* Duplicate or out of order */
    uint32_t ng_lostpdu;     /* PDU lost detected */
    uint32_t ng_badpdu;      /* Unknown PDU type */
    uint32_t ng_rx_novcc;    /* Draining PDU on closed VCC */
    uint32_t ng_rx_iqfull;   /* PDU drops, no room in atm_intrq */
    uint32_t ng_tx_rawcell;  /* PDU raw cells transmitted */
    uint32_t ng_rx_rawcell;  /* PDU raw cells received */
    uint64_t ng_tx_pdu;      /* PDU transmitted */
    uint64_t ng_rx_pdu;      /* PDU received */
};

struct hva_stats_atm {
    uint64_t atm_xmit;  /* Cells transmitted */
    uint64_t atm_rcvd;  /* Cells received */
};

struct hva_stats_aal5 {
    uint64_t aal5_xmit;      /* Cells transmitted */
    uint64_t aal5_rcvd;      /* Cells received */
    uint32_t aal5_crc_len;   /* Cells with CRC/length errors */
    uint32_t aal5_drops;     /* Cell drops */
    uint64_t aal5_pdu_xmit;  /* CS PDUs transmitted */
    uint64_t aal5_pdu_rcvd;  /* CS PDUs received */
    uint32_t aal5_pdu_crc;   /* CS PDUs with CRC errors */
    uint32_t aal5_pdu_errs;  /* CS layer protocol errors */
    uint32_t aal5_pdu_drops; /* CS PDUs dropped */
};

SEE ALSO
natm(4), netgraph(4), ng_ksocket(4), ngctl(8)

AUTHORS
Harti Brandt <harti@FreeBSD.org>
Vincent Jardin <vjardin@wanadoo.fr>
http://manpages.ubuntu.com/manpages/hardy/man4/ng_atmpif.4.html
CC-MAIN-2014-35
refinedweb
485
51.18
What do they mean? (Score:3, Interesting) ....

Dude, You're Stuck With a Dell (Score:3, Interesting)
Then again, maybe Dell was looking for a way to stop selling OS-less PCs without incurring the wrath of Linux zealots, and chose to blame MS. I would not be surprised....

This is what essentially killed Be (Score:4, Interesting)
Scott Hacker has a great column on this called He Who Controls the Bootloader [byte.com].

Opt out (Score:2, Interesting)
But surely I must be able to legally opt out of the EULA by returning the sealed agreement. If there is a license agreement then there MUST be an opt-out mechanism of some sort. Or would you have to return the whole computer! I imagine if 1% of slashdot readers bought a Dell (or other brand), read and refused the terms in the EULA, and asked to return the machine/software, Dell and others would get the point and force the issue with MS.

We just bought HP for linux 9i RAC (Score:2, Interesting)

Re:Dude, you're getting a Mac (Score:5, Interesting)
Hmmm. I guess that Microtel OS-less box I bought from WalMart really had a super-secret invisible Windows XP partition... There are alternatives and they do not all come from Apple.

As disturbing as this may be... (Score:1, Interesting)
Let's face it, most of the people who go out and purchase computers utilizing Microsoft products do so because they feel that is the only way to go. I fail to see why people are getting nervous and acting like this is shocking or something. What exactly did we expect from Microsoft? From Dell? Did we expect Microsoft to sit back while manufacturers started toying with an alternative which they feel threatens them? It is our job, as Open Source and *IX users, to educate our customers, family, and friends. Tell them that there are alternatives, and support them wherever they may roam. I expect another "0" for my rambling, moderators..

Dell From Hell!
(Score:2, Interesting) Then I calmly stated that "you're going to have a lot of backlash from this decision. People wont like this. You're going to lose a lot of POENTIAL future customers because of this!" To which he again reiterated his previous stance "Well we just wont know what we missed, will we? We don't have any trouble selling systems with Microsoft software installed".... I suppose that this is truly a match made in HELL! Arrogant, greedy, self righteous fucking bastards! As the owner of a small business that's about to become quite large I say "FUCK YOU DELL AND MICROSOFT!!!" My corporate policy is NEVER USE MICROSOFT OR DELL PRODUCTS! These are truly evil enterprises! P.S. Have a lovely Open Source Day...Share your FREE as in FREEDOM Open Source Love Today. 10 Ways To Revolt. (Score:0, Interesting) It's simple, really. If you want to undermine Microsoft, pirate the hell out of their products, and direct people to the alternatives. It's the American way to express your dissatisfaction with a company. When it's clear who's pocket your elected officials are in, you have a God-given right to revolt. 1) A bootable Linux CD and a few drops of krazy-glue on the spindle hub makes any PC a permanent Linux box. 2) Alt-S, Up Arrow, Enter, CMD, Enter, del c:\winnt\explorer.exe, Enter. bye-bye Windows. 3) Find a list of elected officials' email addresses. Send them an email describing "a new game I hope you enjoy it." 4) Linux-on-a-floppy, and a tap of the reset button. 5) Point people to OpenOffice, not MS Office. 6) Microsoft has a nice "automatic update" feature. It would be nice if we could back-engineer this to introduce an update to Linux. 7) Burn, And Share. 8) The going rate for Microsoft exploits is about 2:1 9) The Trojans knew what they were doing when they climbed inside the horse. Do you? 10) Charity overpowers Greed, and Generosity is a virtue. 
Re:Monopoly (Score:2, Interesting) There is a path from many word processing programs and versions to the latest Microsoft version so people are encouraged to upgrade. These people that upgrade then send out files they saved using the default settings and find that no one can read them. Now everyone else has to upgrade also to read these formats. Come on. There is absolutely no need to break compatibility with each advancing version of a word processor. There is no grand new feature that requires a new file format. 99%* of all word processor users could still use Word 4.3 or some other product if not for incompatible file formats. * 98% of all statistics are made up on the spot. Re:Monopoly (Score:2, Interesting) Four years back when trying to buy systems without OFFICE, I went to the "big three" (Dell, Gateway, and Micron) asking each one if they would drop Office and add WordPerfect because we were a WordPerfect shop. None of them would do it. Gateway offered to purchase WP for us on the "open market". The other two refused. I asked each sales rep why and each one of them responded: "Because of agreements with Microsoft". There was a parody of Richard Nixon awhile back in which "Nixon" said: "Once you have them by the balls, their hearts and minds will surely follow". MS is no different and very Machiavellian. Machiavellian money talks. I pay my taxes but get a lame DOJ. Just ask me if I'm feeling alienated by this government. Tux is making his way into our organization and the sooner the better.... woah there, come back on track (long) (Score:1, Interesting) Of course they only care about making as much money as possible. The problem is that people do not admit this and use the system like it should be used. They try to 'bring knives to a gun fight' and end up only cutting themselves. Instead of running to the government, hurt them where it will matter. As you said, they are often above the law and until that changes you must focus on real results. 
People that stand around and bitch, but then support things, are pathetic and not worthy of calling themselves Americans. (If you are not an American, this is no matter anyway. :) As far as paying taxes "to support the public that gave said entity the right to exist?" Well, you are right that many companies enjoy a Corporate Welfare that hard working people do not get (tax breaks, yet federally funded projects and then reaping all the profits themselves). That is another example of a knife in a gun fight. Our economy is not built for that; just look at loans, banks, capital investors and stock. It is ok (if it is in the deal) if someone voluntarily gives money to support some company or organization. They expect something in return. In the case of philanthropy, that is not an economic return (not directly). In a regular company you expect your money back, and normally with interest (term loosely applied since it depends on the type of transaction, like loan or stock (dividends or not)). The point of this is that even though the companies are often exempt from what you or I have to suffer, that does not then mean that what you and I are suffering is the lowest common denominator required for a working civilization. Perhaps by removing the tax funded support of companies (i.e. involuntary (read: theft) money exchange) this will help a bit. Everyone needs to stop being like little wasps and buzzing all around in chaos, only to bite/sting anything that gets in their way. Put your vote where it counts... NOT in politicians but in your choice granted in this economy. Or... you can choose to give up that right and let some grayhairs make it for you and have even more problems, but now with _NO_ option for recourse or choice in the matter. However, don't let this make you think I am in any way a fan of the leeway that corporations have now. 
Small business (or just any company that has scruples and doesn't steal, cheat and lie) can really get the shaft from the way the lawyers have made things. Example: A large grocery chain goes bankrupt, but this of course takes time to be completed (don't remember which chapter exactly). This news comes as no surprise to many who knew that the main issue with this company, like so many others who put bureaucracy and blind policy in front of results, is inefficiency and waste in a 'breaking from its own weight' method. Said company owes many people money obviously, and attempts to liquidate its assets. Now comes the fun part! Because of the way laws are written and applied, the grocery chain decides it is in its best interest to not accept certain offers to sell parts of itself to either non-bankruptcy parties or as a liquidation and settling method with concerned parties. Why? Well, let's just put it this way: you know how government agencies will often have a problem where due to their uncharacteristically efficient and thus ethical and American (as in concerned about being responsible stewards of tax payers' money) practices, they end up way under budget (includes done early). Well, their reward is less funding next year where it might actually be needed. So, with full knowledge by all parties, they basically blow the remaining money and are then rewarded with the same amount, and perhaps more if spent just right. Or another example: let's take a person who has 3 kids that supply her with some valuable foodstamps that are translated into beer, cigarettes and the like (don't comment if you don't know... I happen to have detailed knowledge of this first hand and it is NOT an urban legend). Now if she takes a job down at the local Sears, she will lose her welfare. So what that some hard working mother of 3 has taken the job and does her best to make ends meet... she is 'just a chump'. Well, these situations are like the corporate welfare system discussed above. 
Back to my original story now! The grocery chain now is in a very lucrative deal in which they do not have to pay off their debt, the stores and other assets just rot, and many people are now out of work. Did I mention that their lenders don't get their money back? More on that later. Next, we find that the higher executives are taking advantage of this great time to buy up things like cars, real estate and such either from scratch, or are merely transferring ownership into their names from previous company assets. Where does this money come from to pay for this stuff, you ask? Well it can get complicated a bit and confusing a LOT, but suffice it to say that magically even MORE people are screwed out of money they expect (like local car dealerships, banks, investors, stores, etc). Now the final blow... as it stands, the company known as 'the grocery chain' is, as mentioned by others before, held accountable as an individual. This means that after the judgement is rendered, it will be rather difficult for them to gather assets from others in loans and such. However! The individuals who were the brain of this (the decision makers) are not generally held accountable. Their personal assets are theirs, not to be liquidated in the name of the company problem. Yes SIR! That does indeed include the cars, houses, land, livestock etc that was basically purchased in so much a fashion that makes dot com stock buyouts seem a good thing. This means that the people and organizations that in a contractual agreement to exchange goods or services for other goods or services (money in this case) are screwed as much as if someone directly stole from their lot. Yet this is PERFECTLY LEGAL! Now that part about paying debt? Well that was a two-part'er. First part is covered, being that of the ability to legally steal from others that you morally, ethically and legally agreed. 
(somewhere there are rather honorable people who founded this country spinning in their graves) The second part is this question: Would this same situation even be remotely possible if the borrower was in fact an individual or small organization? Meaning, would the individual get the same legal protection not only from debt, but all legal responsibility, record marring and even make out like a (dare I say) bandit that GAINS money or goods from this? Anyone check the Dell site? (Score:2, Interesting) Look at it this way: If Soft didn't encourage the volume demand for PC's, the Internet would still be an academic curiosity and Linux wouldn't exist. Can Soft stop me from running Linux on any of my machines? Obviously not, it takes me about a minute to switch disks, so how exactly does that make Soft a monopoly? "But what about IE?" Never stopped me from downloading the free version of Netscape. FWIW, my guess is that eBay gets a call from the DOJ people in a few years. If eBay is smart, the attys are working on a response today. What's that flaming thing heading this way?!? Re:Monopoly (Score:1, Interesting) Spoken like someone quoting from slash meme instead of having lived through using these packages during the times you write. I used Wordperfect 5.1 for work back then, and I remember when Wordperfect was ported to Windows. It lost out NOT because it was slower than Word, but because it was a miserable port with a lousy user-interface and all the baggage of the DOS versions, like a user interface that didn't match Windows, and they kept their custom printer drivers and fonts instead of using the Windows systems. Wordperfect lost because they didn't understand the paradigm changes going on and tried to fake it; not because Microsoft leveraged their hidden faster API. Re:Monopoly (Score:2, Interesting) Second, what exactly defines an "evil" company? One that dares to challenge the established world of the computer science elites? 
Or simply one that happens to be innovative and successful? Using this standard of measure, then, most well-known products would come from an "evil" company. Third, examine the alternatives to a Microsoft OS or MS based system. For the casual home user, one finds that the dominance of Windows has nothing to do with Microsoft distribution practices. Here are the home PC alternatives to Windows: Macintosh - good machine, stable OS - however, not a good deal of software choices, and those that do exist are maddeningly expensive. Linux - If you're a computer scientist, no problem. However, your average user does not want to deal with that much hassle everyday. BeOS/OS2/etc. - Incompatible with the majority of other systems; lack of software for platforms; no longer exists, etc. If MS's dominance in the home PC market is to be diminished, someone has to develop a product that can actually compete with Windows - instead of running to the Justice Department every time someone else makes a better product. Now, for business-, and especially government-based systems, any argument that MS has a monopolistic dominance there has no solid foundation in fact. Yet, even here MS software has an advantage over its counterparts. Many businesses and government agencies employ UNIX (usually some version of Sun's Solaris breed) as a standard as opposed to MS's Windows NT/2000. Now, granted that WinNT required a maturation period (as did UNIX in the 1970s/early 1980s), from a performance and functionality perspective, one can get just about the same results from Windows 2000 or a Solaris system. The difference to a business, however, is productivity and cost. To build a mainframe system, Sun can charge tens to hundreds of thousands of dollars. Microsoft's Win2K? Just a few thousand with all the bells and whistles. 
As far as development goes, it is clearly more cost-effective to develop software on a Windows platform than a UNIX system for 4 reasons: 1) The process is faster under Windows - researching, development, coding, and testing are done far faster with MS tools than in a UNIX environment. 2) There are more resources available - many more people can develop in, and/or administrate, Windows systems than UNIX systems. 3) The cost of development tools is far lower for MS systems, even with MSDN support, than for UNIX systems. 4) There is a greater base of third-party development in existence for MS tools and systems than for any UNIX brand. Therefore it stands to reason that it is the product that has generated Microsoft's success. The only "evil" out there is born of envy - either from other, less successful (but sneakier) corporations, or from socialist politicians who see nothing but deep pockets.
https://slashdot.org/story/02/08/10/1420208/dell-no-longer-selling-systems-wo-microsoft-os/interesting-comments
Timeline May 3, 2004: - 2:54 PM Changeset [3069] by - Ready for 4.2-beta2 - 2:52 PM Changeset [3068] by - * empty log message * - 11:14 AM Ticket #554 (memory overwrite in mapxbase.c) closed by - fixed: […] - 10:10 AM Changeset [3067] by - - added note on TILEINDEX (JM) - 9:30 AM Changeset [3066] by - Updated to use v4.x mapscript calls May 2, 2004: - 8:52 PM Ticket #643 ([WMS] map= missing in onlineresource) closed by - fixed: […] - 8:46 PM Changeset [3065] by - Include map= param in default onlineresource of GetCapabilties? if it … - 7:09 PM Ticket #644 ([WMS] Crash in INIMAGE error handler) closed by - fixed: […] - 7:04 PM Changeset [3064] by - Fixed write past end of buffer in msWriteErrorImage() (bug 644) - 6:53 PM Ticket #620 (Disable WCS stuff before 4.2 release) closed by - fixed: […] Apr 30, 2004: - 1:09 PM Ticket #644 ([WMS] Crash in INIMAGE error handler) created by - […] - 1:01 PM Ticket #643 ([WMS] map= missing in onlineresource) created by - […] - 12:13 PM Changeset [3063] by - Fixed problem compiling without GDAL (use of TRUE and CPLAssert) - 6:27 AM Changeset [3062] by - Query item support implemented. Query map is not being generated yet. - 6:07 AM Changeset [3061] by - Fixed problem with PHP's pasteImage() method when angle=0 - 6:03 AM Changeset [3060] by - OOpps... 
fixed the fix to pasteImage() - 5:57 AM Changeset [3059] by - Fixed problem with PHP's pasteImage() method when angle=0 Apr 29, 2004: - 1:19 PM Ticket #642 (LABELSIZEITEM to allow aritmetical expressions) created by - […] - 12:54 PM Ticket #641 (Nasty bug in labelcache) created by - […] - 11:43 AM Changeset [3058] by - Finished documenting resolution of bug 557 - 11:33 AM Ticket #557 (php clone(): free(): invalid pointer) closed by - fixed: […] - 11:32 AM Ticket #640 (Fontset and Symbolset not copied in msCopyMap) created by - […] - 11:14 AM Changeset [3057] by - We are, temporarily, not copying the map fontset and symbolset in … - 10:43 AM Changeset [3056] by - changed the URL link for the tutorial and demo (kg) - 10:37 AM Changeset [3055] by - changed the url link for tutorial and demo (kg) - 10:29 AM Changeset [3054] by - We are, temporarily, not copying the map fontset and symbolset in … - 6:45 AM Ticket #639 (WMS Client Problem w/ SLD) created by - […] - 6:19 AM Ticket #638 (Change behaviour of STATUS DEFAULT in WMS interface) created by - […] Apr 28, 2004: - 10:45 PM Ticket #618 (FORCE parameter in Label Object) closed by - fixed: […] - 10:41 PM Changeset [3053] by - Fixed bug 618. This has already been applied against the main trunk. - 10:35 PM Changeset [3052] by - Found an issue with the way msRectToPolygon was computing it's … - 9:14 PM Changeset [3051] by - Added skeletal bandsitem (to map.h and the lexer). - 9:05 PM Changeset [3050] by - More WCS metadata changes. Fixed GetCoverage? 
to use new set processing … - 4:33 PM Ticket #619 (Document new config option) closed by - fixed: […] - 4:28 PM Changeset [3049] by - - added CONFIG to mapfile-reference (JM) - 3:31 PM Ticket #537 (length() is missing from the EXPRESSION documentation) closed by - fixed: […] - 3:27 PM Changeset [3048] by - - added length function to EXPRESSIONs (JM) - 2:27 PM Changeset [3047] by - Added a testcopy program in testcopy.c to the Makefile - 2:26 PM Changeset [3046] by - Copied Frank's changes to mapcopy.c from 4.3 into 4.2 branch, added a … - 2:15 PM Changeset [3045] by - Copied Frank's changes to mapcopy.c from 4.3 into 4.2 branch, added a … - 1:17 PM Ticket #627 (Grid object not in 4.0 map file reference) closed by - fixed: […] - 1:01 PM Changeset [3044] by - - added GRID object (JM) - 11:55 AM Ticket #637 (variable substitution and order in cgi parameters) created by - […] - 11:03 AM Ticket #636 ([WMS Server] Need clarification in docs for layers with STATUS DEFAULT) created by - […] - 9:19 AM Changeset [3043] by - modified HISTORY.TXT to include information about recent updates to … - 8:36 AM Changeset [3042] by - Change metatadata name from wms_request_method to wfs_request_method. - 6:09 AM Changeset [3041] by - Added setProcessingKey(), renamed setProcessing() to addProcessing(). - 6:05 AM Changeset [3040] by - added setProcessingKey, addProcessing superceeds setProcessing - 6:04 AM Changeset [3039] by - added msLayerSetProcessingKey - 6:04 AM Changeset [3038] by - added msLayerSetProcessingKey, add no longer replaces - 6:00 AM Ticket #635 (BUG drawing ellipses) created by - […] - 12:08 AM Changeset [3037] by - Updated msLayerAddProcessing() function to check to see if a directive … Apr 27, 2004: - 11:38 PM Changeset [3036] by - Fixed a bunch of WCS output metadata issues, enabled band subsets. 
- 11:44 AM Changeset [3035] by - added maprasterquery - 11:43 AM Changeset [3034] by - moved EXTRA_DEFAULT ahead of EXEs - 10:15 AM Changeset [3033] by - removed extra maptime.obj - 10:13 AM Changeset [3032] by - added windows intermediate files - 10:10 AM Changeset [3031] by - const correctness fix - 10:10 AM Changeset [3030] by - added maprasterquery.obj - 10:09 AM Changeset [3029] by - MS_EXE should not depend on external libraries - 10:08 AM Changeset [3028] by - fixed several void * copies - 10:07 AM Changeset [3027] by - const correctness fixes for strcasecmp/strncasecmp - 9:53 AM Changeset [3026] by - added max result support - 9:53 AM Changeset [3025] by - added raster query support - 9:16 AM Changeset [3024] by - Use wfs_request_method instead of wms_request_method. - 8:37 AM Ticket #634 ([WFS Client] Incorrect use of wms_request_method metadata?) created by - […] - 8:17 AM Changeset [3023] by - only grab gdal.pdb if it exists Apr 26, 2004: - 10:19 PM Changeset [3022] by - Added gml:RectifiedGrid element to the DescribeCoverage? response. - 6:24 PM Changeset [3021] by - Fixed problem in maperror.c. Someone did not add the base message for … - 6:01 PM Changeset [3020] by - Enabled FORMAT wcs parameter in GetCoverage?. Duh… - 12:04 AM Changeset [3019] by - Added basic support for TIME parameter to the GetCoverage? WCS … Apr 25, 2004: - 11:26 PM Ticket #633 ([center] template tag problem) closed by - fixed: […] - 11:24 PM Changeset [3018] by - Fixed bug 633. - 11:20 PM Ticket #633 ([center] template tag problem) created by - […] - 10:55 PM Changeset [3017] by - Added a few optional attributes to the AxisDescription? section of the … Apr 23, 2004: - 9:44 AM Changeset [3016] by - msRemoteOutputFormat() now frees output formats if refcount==0 - 9:44 AM Changeset [3015] by - Added copyStringPropertyRealloc(). 
Fixed rampant problems without … - 9:17 AM Changeset [3014] by - avoid const warnings Apr 22, 2004: - 10:56 PM Changeset [3013] by - Moving on to tweaking Frank's GetCoverage? function. Added a few … - 10:43 PM Changeset [3012] by - Added initial version of code to output a CoverageOffering? range set. - 8:58 PM Changeset [3011] by - Added Frank's suggestion to compute extent from the GDAL geotransform … Apr 21, 2004: - 8:48 AM Ticket #632 (Symbol collisions between postgis and mygis) created by - […] - 6:03 AM Changeset [3010] by - More WCS fixes and changes. - 6:02 AM Changeset [3009] by - Updated msOWSPrintMetadataList() so that NULL values for … Apr 20, 2004: - 1:16 PM Ticket #631 (default onlineresource is wrong) created by - […] - 6:43 AM Ticket #189 (Environment is spelled wrong.... :-() closed by - fixed: […] Apr 19, 2004: - 10:41 PM Changeset [3008] by - Getting very close to a usable WCS implementation. Still need to add … - 3:10 PM Ticket #613 ([Docs] Several HOWTOs link to www2.dmsolutions.ca urls that have moved) closed by - fixed: […] - 3:08 PM Changeset [3007] by - Added msOWSGetEPSGProj() to mapows.h/.c and updated the original from … - 2:32 PM Changeset [3006] by - - mass cleanup (JM) Apr 18, 2004: - 8:10 PM Changeset [3005] by - Removed WCS support from configure and Makefile. - 8:10 PM Changeset [3004] - This commit was manufactured by cvs2svn to create branch 'branch-4-2'. 
- 7:19 AM Ticket #630 ([configure] Produce a warning if libtiff and gdal are both enabled) created by - […] Apr 17, 2004: - 1:31 PM Changeset [3003] by - Updated map.h and HISTORY.TXT for v4.3 - 1:28 PM Changeset [3002] by - Updated map.h and HISTORY.TXT for v 4.2-beta1 - 7:32 AM Ticket #514 (setExpression with a variable) closed by - fixed: […] Apr 16, 2004: - 11:37 PM Ticket #446 (Incorrect Display of WMS retrieved Layers) closed by - fixed: […] - 11:33 PM Changeset [3001] by - Increased precision of values written to .wld file for WMS layers (bug 446) - 11:05 PM Ticket #421 (itemnquery limitation) closed by - duplicate: […] - 10:40 PM Changeset [3000] by - Disabled GetContext? unless wms_getcontext_enabled metadata == 1 (bug 481) - 10:09 PM Ticket #498 (msSaveMap Labelformat grid feature) closed by - fixed: […] - 10:05 PM Changeset [2999] by - Added quotes around LABELFORMAT value in writeGrid() (bug 498) - 9:48 PM Changeset [2998] by - Added note about new CONFIG keyword (bug 619) - 9:37 PM Changeset [2997] by - Added note about bug 569 - 9:37 PM Ticket #569 (Potential crash with nquery) closed by - fixed: […] - 9:35 PM Changeset [2996] by - Skip shapes with classindex==-1 in msDrawQueryLayer() (bug 569) - 8:33 PM Ticket #524 (No relative paths for WEB.IMAGEPATH in CGI browse mode) closed by - fixed: […] - 8:30 PM Changeset [2995] by - Fixed handling of relative WEB.IMAGEPATH in CGI browse mode (bug 524) - 4:44 PM Changeset [2994] by - Fixed [shpxy...] 
tag so that if proj=image, if necessary the shape is … - 1:23 PM Ticket #612 (msGetSymbolIndex() shouldn't call msAddImageSymbol()) closed by - fixed: […] - 1:19 PM Changeset [2993] by - Added try_addimage_if_notfound to msGetSymbolIndex() (bug 612) - 12:27 PM Ticket #629 (imageCacheObj keys need to be expanded) created by - […] - 12:23 PM Ticket #608 ([HTML Legend] Legend icons all the same color) closed by - fixed: […] - 12:12 PM Changeset [2992] by - Correct bug on windows when opening xml file (open it in binary mode). - 12:06 PM Ticket #601 (SLD parsing fails on Windows) closed by - fixed: […] - 11:48 AM Ticket #351 (Label alingnment) closed by - duplicate: […] - 8:03 AM Ticket #628 ([WMS-FE-Docs] Need docs for FE) created by - […] - 7:29 AM Ticket #627 (Grid object not in 4.0 map file reference) created by - […] - 7:07 AM Changeset [2991] by - Added notes about WMS/WFS changes - 6:35 AM Ticket #626 ([WFS] Update WFS docs with info on latest WFS enhancements) created by - […] - 6:28 AM Ticket #625 ([WMS-SLD-Docs] Need docs for SLD) created by - […] - 5:26 AM Ticket #624 (Enhancement bug: layerObj->getbounds() method) created by - […] Apr 15, 2004: Apr 14, 2004: - 8:06 PM Changeset [2990] by - Changed imageCacheObj to use a colorObj as a key rather than just a … - 8:04 PM Changeset [2989] by - Cleaning in map.h (can't stand function declarations that span … - 8:02 PM Changeset [2988] by - A few fixes to mapwcs.c to account Dan's recent changes. - 6:40 PM Changeset [2987] by - Changed the [shpxy ...] 
tag syntax to allow users to specify … - 12:55 PM Ticket #622 ([WMS Server HOWTO] Fix ref toinvalid GetMap request) created by - […] - 11:44 AM Ticket #621 ([WMS Client] Caching of WMS requests) created by - Dean Gadoury wrote: > > If MapServer > can do caching of WFS requests … - 10:29 AM Ticket #620 (Disable WCS stuff before 4.2 release) created by - […] - 8:35 AM Ticket #619 (Document new config option) created by - […] - 8:27 AM Ticket #592 ([Docs] New link for PHPMapscript page) closed by - fixed: […] - 7:11 AM Ticket #567 (HTTP POST handling non-compliant, and failing) closed by - fixed: […] - 5:55 AM Ticket #618 (FORCE parameter in Label Object) created by - […] - 4:57 AM Ticket #617 (PHP mapscript function querybyshape causes segmentation fault when ...) created by - […] - 1:04 AM Ticket #559 (abnormal program termination when having multiple PROJECTION) closed by - fixed: […] - 12:58 AM Changeset [2986] by - Produce an error message if multiple projection objects are found (bug 559) - 12:31 AM Changeset [2985] by - Removed msOWSGetMetadata(), replaced by msOWSLookupMetadata() - 12:20 AM Ticket #604 (Dangling pointers in layerinfo (mappostgis.c, msPOSTGISLayerOpen())) closed by - fixed: […] - 12:19 AM Changeset [2984] by - mappostgis.c: Applied patch from Frank K. 
for dangling pointers, bug 604 - 12:16 AM Changeset [2983] by - Applied patch from Frank Koormann for dangling pointers, bug 604 Apr 13, 2004: - 11:36 PM Ticket #538 (Need to document string length expression function) closed by - duplicate: […] - 11:32 PM Ticket #529 (GetCapabilities response invalid) closed by - fixed: […] - 11:08 PM Ticket #522 (Wrong check on error code in maptemplate.c / msGenerateImages()) closed by - fixed: […] - 11:05 PM Changeset [2982] by - Fixed tests on return value of msSaveImage() (bug 522) - 10:57 PM Ticket #338 ([PHP MapScript] set("template", "") doesn't disable querying) closed by - fixed: […] - 10:50 PM Ticket #600 (DescribeLayer response does not escape OnlineResource) closed by - fixed: […] - 10:46 PM Changeset [2981] by - HTML-encode the wfs_onlineresource in DescribeLayer? response (bug 600) - 10:37 PM Ticket #580 (segmentation fault in php_mapscript.c, function php3_ms_img_saveImage()) closed by - worksforme: […] - 10:20 PM Ticket #616 ([OWS] Allow passing a default in msOWSPrintMetadataList()) closed by - fixed: […] - 10:14 PM Changeset [2980] by - Added ability to pass a default value to msOWSPrintMetadataList() (bug 616) - 10:02 PM Ticket #615 ([OWS] Extend msOWSPrintMetadata to operate on multiple namespaces) closed by - fixed: […] - 9:54 PM Changeset [2979] by - Created msOWSLookupMetadata() and added namespaces lookup in all … - 9:16 PM Ticket #616 ([OWS] Allow passing a default in msOWSPrintMetadataList()) created by - […] - 9:06 PM Ticket #615 ([OWS] Extend msOWSPrintMetadata to operate on multiple namespaces) created by - […] - 6:52 PM Ticket #614 ([MapScript] Add a getNumResults() method on layerObj) closed by - fixed: […] - 6:47 PM Changeset [2978] by - Added layer->getNumResults() - 6:20 PM Changeset [2977] by - Removed the unnecessary pointObj typecasts that had caused problems on … - 1:45 PM Changeset [2976] by - Correct problems on windows. - 1:41 PM Changeset [2975] by - Add shptreevis.exe for build. 
- 1:39 PM Ticket #452 (feature request: Show Big-5 with new version of GD) closed by - duplicate: […] - 1:37 PM Ticket #536 (SDE only supports SE_ROW_ID as index column) closed by - fixed: […] - 1:22 PM Changeset [2974] by - Remove stray include of mapshape.h and maptree.h - 1:14 PM Changeset [2973] by - Include map.h for MS_DLL_EXPORT defn - 1:08 PM Changeset [2972] by - Added Howard's latest and greatest SDE patch. Hopefully this should do it! Apr 12, 2004: - 11:39 AM Changeset [2971] by - Add dll export support. - 11:38 AM Changeset [2970] by - Add dll export support for windows. - 11:21 AM Ticket #614 ([MapScript] Add a getNumResults() method on layerObj) created by - […]. Note: See TracTimeline for information about the timeline view.
https://trac.osgeo.org/mapserver/timeline?from=2004-05-03T07%3A58%3A25-0700&precision=second
Full project mirroring between GitLab instances

Description

GitLab has Geo, which is a product for multi-region replication of GitLab data. This includes all database contents as well as files and repository + wiki data. Geo has a "selective sync" feature, which is used to replicate a subset of an instance elsewhere. GitLab is gaining a "bidirectional sync" feature: this can be used as a sort of poor man's multi-region, multi-master replication of a subset of repositories between two non-Geo GitLab instances, but files and database contents (issues, MRs, memberships, etc) aren't part of this.

Proposal

Enhance bidirectional replication with instance, namespace and project-level federation of database contents and files. We could start by only supporting it at project level, though. An admin or owner on gitlab-a.com would also have an account on gitlab-b.com. They would set up an instance, group or project-level integration on the latter, using a personal access token from the former. Whenever a change happens on one instance, it is replicated asynchronously to the other, using webhooks to notify that a change has happened. Obviously, conflicts can occur, as we see with bidirectional repository mirroring. We may need an explicit federation object on both sides to support read-write on both sides; if set up on only one side, it could act as a read-only replica. A major source of conflicts in the multi-master version would be IIDs of issues, MRs, etc. This can be worked around using the same hack as MySQL multi-master replication with N members - fixed offsets. If you have 2 members of the federation, the first only uses odd IIDs, the second only uses even IIDs. Artifacts and pipelines are more difficult. We might just have to disable CI on all but one node to begin with. File conflicts won't happen as we add random hex to every upload. We'd need to tell the other nodes to pull the file each time one was uploaded, though. 
Repository conflicts are being handled orthogonally. We can apply the same logic to both main and wiki repository. Memberships could be left out-of-scope to begin with, but we could consider automatic linking by email address or a fixed map of user equivalences between instances/groups/projects too. What else? This feature proposal represents a "less-trust" form of Geo selective sync. It's something you can set up between two independent GitLab instances. Both sides would be read-write, and it could be set up entirely in the GitLab UI with no need for sysadmin work or postgresql replication on the respective instances. Since only someone who is an instance/namespace/project admin can set this up, I don't think there are permissions problems to worry about.
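The odd/even IID scheme described above is the classic auto-increment offset/increment trick from MySQL multi-master replication. As a rough illustration (this is a hypothetical helper, not GitLab code), each federation member only mints IIDs from its own congruence class, so concurrent allocations on different members can never collide:

```python
def iid_allocator(member_index, member_count):
    """Yield internal IDs (IIDs) for one federation member.

    member_index is 1-based.  Each member only mints IIDs congruent to
    its index modulo member_count, mirroring MySQL's
    auto_increment_offset / auto_increment_increment settings.
    """
    iid = member_index  # member 1 starts at 1, member 2 at 2, ...
    while True:
        yield iid
        iid += member_count

# With two members: the first mints only odd IIDs, the second only even.
a = iid_allocator(1, 2)
b = iid_allocator(2, 2)
print([next(a) for _ in range(3)])  # [1, 3, 5]
print([next(b) for _ in range(3)])  # [2, 4, 6]
```

The cost of the scheme is visible gaps in each instance's issue numbering, which is why the proposal calls it a hack rather than a design.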
https://gitlab.com/gitlab-org/gitlab/-/issues/4517
I have implemented a color picker control in a web forms project and would like to make sure that CSS/JS is loaded at most once. I have tried the following:

Protected Shared _clientNeeded As Boolean = True

Public Sub NeedClient()
    If (_clientNeeded) Then
        _clientNeeded = False
    Else
        Return
    End If
    'Loading the client-side
End Sub

_clientNeeded

You could implement the property with a combination of Context.Items and ViewState. When the page loads several controls, the value of ClientNeeded can be shared between them through the Context.Items of the page. Once the property has been set, each control can store it in its own ViewState, so that it is maintained as long as the page is active:

protected bool ClientNeeded
{
    get
    {
        object value = ViewState["ClientNeeded"];
        if (value != null)
        {
            return (bool)value;
        }
        else
        {
            value = Context.Items["ClientNeeded"];
            if (value != null)
            {
                ViewState["ClientNeeded"] = value;
                return (bool)value;
            }
            else
            {
                return true;
            }
        }
    }
    set
    {
        Context.Items["ClientNeeded"] = value;
        ViewState["ClientNeeded"] = value;
    }
}
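The core of the answer's pattern is language-agnostic: HttpContext.Items is a dictionary that lives for exactly one request and is shared by every control on the page, so the first control to ask "do I need to emit the client files?" gets yes and everyone after it gets no. A minimal Python sketch of just that half of the pattern (the class and key names are illustrative, not part of ASP.NET):

```python
class RequestItems:
    """Stand-in for HttpContext.Items: a per-request scratch dictionary
    shared by all controls rendered during that request."""
    def __init__(self):
        self.items = {}

class ColorPicker:
    """Each control calls need_client(); only the first caller in a
    given request gets True, so CSS/JS is registered at most once."""
    def need_client(self, ctx):
        if ctx.items.get("client_loaded"):
            return False
        ctx.items["client_loaded"] = True
        return True

# Two pickers on the same page: only the first loads the client files.
ctx = RequestItems()
first, second = ColorPicker(), ColorPicker()
print(first.need_client(ctx))   # True
print(second.need_client(ctx))  # False
```

The ViewState half of the answer then persists the per-control result across postbacks, which a plain Shared/static field cannot do safely because it is shared across all requests in the application, not just the current page.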
https://codedump.io/share/GZAqCoVYwVHE/1/do-something-once-at-maximum-with-a-control-during-a-page-life-cycle
2011/10/16 Petr Salinger <Petr.Salinger@seznam.cz>:
>> Silent truncation sounds a bit dangerous. Could this introduce a
>> security problem? E.g. a maliciously placed socket which matches the
>> truncated path.
>
> May be the limit can be raised in upstream in kernel only,
> to allow 108 instead of 104 only bytes.

Maybe. You mean something like this?

struct sockaddr_un {
    ...
#ifdef _KERNEL
    char sun_path[108];
#else
    char sun_path[104];
#endif
};

It's ugly though. I wouldn't count on this being accepted, especially not backported. If it doesn't serve us to support people running upstream kernels, is there a point in pushing this change?

--
Robert Millan
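The practical effect of the sun_path size is easy to observe from userspace: the kernel rejects, rather than truncates, a UNIX-socket path longer than the fixed buffer (104 bytes on the BSDs, 108 on Linux, NUL included), which is exactly why silent truncation in a compatibility layer would be surprising. A quick check, assuming a POSIX system with AF_UNIX support:

```python
import socket

def can_bind(path):
    """Return True if an AF_UNIX socket can be bound to `path`.

    Paths longer than sun_path are rejected outright by the OS (or by
    the Python wrapper) -- there is no silent truncation for the
    caller to worry about.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.bind(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

# 200+ bytes is past the sun_path limit on every platform (104 or 108),
# so this bind is refused rather than bound to a truncated name.
print(can_bind("/tmp/" + "x" * 200))  # False
```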
https://lists.debian.org/debian-bsd/2011/10/msg00211.html
Opened 6 years ago Closed 3 years ago

#11518 closed Bug (fixed)

Custom commands cannot be run from cron (or other directory) if project is not on Python Path

Description

If I wanted to create a custom command, say in the myproj.weather application a custom command get_latest_weather, and wanted to run that in a cron job, it would fail if myproj is not on the PYTHONPATH. I am using Python 2.6 and imp.find_module is not finding the project path. I can see the project module does indeed exist in sys.modules.

Attachments (1) Change History (11)

comment:1 Changed 6 years ago by Mark <mark.ellul@…>
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset

comment:2 Changed 6 years ago by anonymous
- Resolution set to worksforme
- Status changed from new to closed

comment:3 Changed 6 years ago by Mark <mark.ellul@…>
- Resolution worksforme deleted
- Status changed from closed to reopened

Hi, maybe I was not being specific enough. If you create a custom command in an app in your project, myproj/myapp1/management/commands/my_custom_command.py, and then try to use call_command from django.core.management, i.e.

    from django.core.management import call_command
    call_command('my_custom_command')

and your project is not on the PYTHONPATH, you will get a failure, because imp.find_module is not finding the myproj module that is loaded in memory. If you add the fragment I have already mentioned, it will search the sys.modules dictionary, where it does find the project's loaded module. Please let me know if you need more details.

Regards, Mark

Changed 6 years ago by Mark <mark.ellul@…>
patch for this fix

comment:4 Changed 6 years ago by russellm
- milestone 1.1 deleted
This isn't a blocker for v1.1.

comment:5 Changed 6 years ago by mark.ellul@…
- Has patch set
I agree it's not a blocker; sorry, I didn't understand the milestone setting properly! It's a weird edge case bug, easily fixed though.

comment:6 Changed 5 years ago by russellm
- Triage Stage changed from Unreviewed to Accepted

comment:7 Changed 4 years ago by julien
- Severity set to Normal
- Type set to Bug

comment:8 Changed 3 years ago by lrekucki
- Easy pickings unset
- Triage Stage changed from Accepted to Ready for checkin
- UI/UX unset
Looks good, although getting rid of the terrible hack on PYTHONPATH would be even better.

comment:9 Changed 3 years ago by aaugustin
I'm struggling to understand what this change actually does :/

comment:10 Changed 3 years ago by lukeplant
- Resolution set to fixed
- Status changed from reopened to closed
AFAICS, the bug can happen only when you are using the old sys.path hack that we used to use, in which a module in your project could be importable as both 'myproj.myapp.mymodule' and 'myapp.mymodule'. I'm marking FIXED on that basis, since we've removed/deprecated the sys.path hack.

- When I insert the code below at line 56 (in the ImportError exception handling) in __init__.py at django/core/management/ it resolves the issue. Hope that helps.
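The spirit of the reporter's fragment (offered here as an illustration; this is not the actual patch attached to the ticket, which is not shown) is a sys.modules fallback:

```python
import os
import sys


def locate_loaded_module(name):
    """Fallback lookup for when a path-based search (e.g. imp.find_module)
    cannot locate a package: if the module is already imported, recover its
    directory from sys.modules instead. Illustrative only; the function
    name is invented for this sketch."""
    mod = sys.modules.get(name)
    if mod is not None and getattr(mod, "__file__", None):
        return os.path.dirname(os.path.abspath(mod.__file__))
    raise ImportError("No module named %s" % name)
```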
https://code.djangoproject.com/ticket/11518
Recently one of our readers and I went back and forth via email about putting together a little app that he wanted his mother to be able to use that would retrieve her current IP address and email it to him, so that he could remote to her desktop to help her out with whatever computer problem she may have been having. One of the problems we had was either finding or writing code that would enable the app to send this email without relying on her having the SMTP service present and running on her machine, and without relying on the presence of CDO either. The .NET System.Web.Mail.SmtpMail class uses the Collaboration Data Objects for Windows 2000 (CDOSYS) message component under the hood, so if Mom is running Windows ME, for example, you are plumb out of luck with your app. Since then, we've had several additional requests for this type of code, so I thought it would be worthwhile to investigate this further. Sending email via TCP using the native SMTP RFC commands "HELO", "MAIL FROM", "RCPT TO", etc. is no big deal. That's one of the first tricks we learn with Telnet. Finding or writing managed code that will do so reliably is another story. The code in the class that follows is not my original code: I've cobbled it together from three different sample sources, fixing namespaces, error handling, and other minor items, changing console code to class library code, and providing a complete WinForms-based test harness front end that illustrates its correct usage. I've also included sample code to correctly process and add a mail attachment via an OpenFileDialog. This code MIME-encodes and transmits the attachment(s) according to the specification. Since the class code speaks for itself, I'll simply post it below. It should be pretty much self-explanatory. You can download the complete solution at the link below, and you should be able to plug this class into your applications with no changes necessary, right out of the box.

Download the code that accompanies this article

Peter Bromberg is a C# MVP, MCP, and .NET consultant who has worked in the banking and financial industry for 20 years. He has architected and developed web-based corporate distributed application solutions since 1995, and focuses exclusively on the .NET platform.
http://www.nullskull.com/articles/20030316.asp
Hello,

This probably seems simple; well, I'm sure all of mine seem simple, so here it goes. I am having trouble constructing strings. str1, str3, and str4 give pre-compile errors and show up yellow in my IDE. The pre-compile errors are commented in the code below. str2 gives a pre-compile error (commented in code below) and also a runtime error. The runtime error was:

    Exception in thread "main" java.lang.RuntimeException: Uncompilable source code - Erroneous tree type: <any>
        at hsstringdemo.HSStringDemo.main(HSStringDemo.java:14)

Line 14 is where the string is declared and initialized, which I think is also called the string constructor. I imported java.lang.Object and java.lang.String in hopes that something was not defined, but to no avail. I tried using the code tags this time; seems simple, but tell me how I am doing.

    package hsstringdemo;

    import java.lang.Object;
    import java.lang.String;

    public class HSStringDemo {
        public static void main(String[] args) {
            // Declare strings in various ways.
            String str1 = new String("Java strings are objects.");        // Pre-compile error: String constructor invocation
            String str2 = new "They are constructed in various ways.";    // Pre-compile error: <identifier> expected
            String str3 = new String(str1);                               // Pre-compile error: String constructor invocation
            String str4 = new String(str1 + str3);                        // Pre-compile error: String constructor invocation
            System.out.println(str1);
            System.out.println(str2);
            System.out.println(str3);
            System.out.println(str4);
        }
    }
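A likely fix, offered here as an addition rather than a reply from the original thread: the "String constructor invocation" messages on str1, str3 and str4 are IDE hints flagging redundant `new String(...)` calls (they compile fine), while str2 is a genuine syntax error because `new` must be followed by a type name, not a string literal. A corrected sketch:

```java
class StringDemoFixed {
    static String[] buildStrings() {
        String str1 = "Java strings are objects.";              // a literal needs no constructor
        String str2 = "They are constructed in various ways.";  // 'new' removed: new needs a type, not a literal
        String str3 = new String(str1);                         // legal, but a redundant copy
        String str4 = str1 + str3;                              // concatenation already yields a String
        return new String[] { str1, str2, str3, str4 };
    }

    public static void main(String[] args) {
        for (String s : buildStrings()) {
            System.out.println(s);
        }
    }
}
```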
http://www.javaprogrammingforums.com/%20loops-control-statements/10239-constructing-strings-printingthethread.html
This is code I made for checking if a nickname and password exist. I have two questions about this code.

1) In line 38 I close "fusers", but if I try to use it again there is an error. So I used "fpass". Is there any way to use the same one for both files and not create a new one?

2) When I check if the username exists, I get the line that the username is on. Then I find the password that the user gives and I get the line if the password exists. Finally I check if the numbers of these two lines are the same, so the user can log in. But I thought that we could change it a little so that it has only one loop. What I thought is that we don't have the second while, but instead an if, that checks if the password on the specific line that we got from the first while is the same as the one that the user gave at the beginning. Can we do this? (I hope you understood what I said and what I meant :S )

    #include <fstream>
    #include <iostream>
    #include <conio>
    #include <string>
    #pragma hdrstop
    #pragma package(smart_init)

    using namespace std;

    void CheckLoginRequest(string username, string password);

    int main()
    {
        string user, pass;
        cout << "Give nickname";
        cin >> user;
        cout << "Give password";
        cin >> pass;
        CheckLoginRequest(user, pass);
        getch();
    }

    void CheckLoginRequest(string username, string password)
    {
        int linename = 0, linepass = 0;
        string str;
        int foundname = 0, foundpass = 0;

        ifstream fusers("usernames.txt", ifstream::in);
        while (getline(fusers, str) && foundname == 0)
        {
            if (username == str)
            {
                foundname = 1;
            }
            linename++;
        }
        fusers.close();

        if (foundname == 1)
        {
            ifstream fpass("passwords.txt", ifstream::in);
            while (getline(fpass, str) && foundpass == 0)
            {
                if (password == str)
                {
                    foundpass = 1;
                }
                linepass++;
            }
            fpass.close();

            if (foundpass == 1)
            {
                if (linepass == linename)
                {
                    cout << "you are now logged in" << endl;
                }
            }
            else
            {
                cout << "wrong password,try again" << endl;
            }
        }
        else
        {
            cout << "wrong username,try again" << endl;
        }
    }
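For reference (this is my sketch, not a reply from the thread): both questions can be addressed by writing the helper against std::istream, so the same logic serves any stream, and by replacing the second search loop with a direct read of the password file up to the matched line:

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// Return the 1-based line number of `value` in the stream, or 0 if absent.
int findLine(std::istream& in, const std::string& value) {
    std::string line;
    int n = 0;
    while (std::getline(in, line)) {
        ++n;
        if (line == value) return n;
    }
    return 0;
}

// Username on line N must match the password on line N.
bool checkLogin(std::istream& users, std::istream& passwords,
                const std::string& user, const std::string& pass) {
    int n = findLine(users, user);
    if (n == 0) return false;        // unknown username
    std::string line;
    for (int i = 0; i < n; ++i)      // read the password file up to line N
        if (!std::getline(passwords, line)) return false;
    return line == pass;             // single comparison, no second search loop
}
```

With real files, a single std::ifstream object can also be reused: call close(), then clear() to reset the stream's error/EOF state, then open() the next file.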
https://www.daniweb.com/programming/software-development/threads/117203/string-files
Hello ladies and gents. I need to be able to auto-close the tickets in "RESOLVED" status within 24 hours. I found this solution; my script is below:

    <JiraJelly xmlns:
    <jira:Login
    <log:info>Running Close issues service</log:info>
    <!-- Properties for the script -->
    <core:set resolved-slove
    </core:set>
    <core:set
    <core:set
    <core:set
    <!-- Run the SearchRequestFilter -->
    a <jira:RunSearchRequest
    <!-- Iterate over the issues -->
    <core:forEach
    <log:warn>Closing inactive issue ${issue.key}</log:warn>
    <jira:TransitionWorkflow
    </core:forEach>
    </jira:Login>
    </JiraJelly>

But when I start it in the Jelly Runner console I get this exception:

    Exception: org.xml.sax.SAXException: could not find namespace with prefix core
    java.io.PrintWriter@fc2bcb5

Please give me some advice about my problem.

JIRA 6.3.14, Linux standalone instance

The JIRA Automation plugin replaced all our Jelly scripts. I would highly recommend staying away from Jelly for future compatibility.

I can't see where you set $autoclose? You need to set it to the ID of the filter you want to run to find the issues. At the moment, you're not setting it in your script, and I think that might be causing a parse error. (Sorry, I wrote $autoclose; it's actually $auto_close in your script.)

If $autoclose is the name of the search filter, then it is in my script: <core:set

Er, no, that's setting a variable that you never use in the script. Also, you need to use the ID of the filter, not the name.

If you're going to auto-close them, why not just change the workflow so resolving them closes them? Seems like a lot less work. If the 24 hours is to allow folks to check them for correctness, it isn't really a realistic time frame in my experience. You could allow them to reopen if they say it was resolved incorrectly.

Don't use a Jelly script; the Jelly Runner won't be included in the next JIRA version. Use the escalation service from ScriptRunner or The Scheduler.
https://community.atlassian.com/t5/Jira-questions/Auto-close-jira-tickets-with-status-quot-resolved-quot/qaq-p/77412
Boost was compiled with Boost.Python enabled; however, they are .dylib files, which I believe should be ok. They're the OS X version of .so's. boost_python *isn't* included in the otool -L output. Do you know why this would be? As far as I can tell, everything is compiling normally. When running scons in the supportlib directory, I get the following output at the end. Is that normal?

    ranlib lib/libcore.a
    ar: creating archive lib/libcore.a
    ranlib: file: lib/libcore.a(dependent.o) has no symbols
    ranlib: file: lib/libcore.a(dependent.o) has no symbols
    scons: done building targets.

Thanks,
Ed

On Dec 2, 2005, at 3:45 PM, Matthias Baas wrote:
> -

Hi,

I've been going through hell the past week trying to get cgkit to work on Mac OS X 10.4. The error I'm getting now is:

    yps:~/Desktop/3dpythonstuff/cgkit-2.0.0alpha5 erd4819$ python2.4 /Library/Frameworks/Python.framework/Versions/2.4/bin/viewer.py
    Traceback (most recent call last):
      File "/Library/Frameworks/Python.framework/Versions/2.4/bin/viewer.py", line 49, in ?
        from cgkit.all import *
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/cgkit/all/__init__.py", line 23, in ?
        from cgkit import _core
    ImportError: Failure linking new module: /Library/Frameworks/.

Thanks for any help.
Ed
http://sourceforge.net/p/cgkit/mailman/cgkit-user/?viewmonth=200512&viewday=2
/* GraphVertexChangeEvent.java
 * ---------------------------
 * (C) Copyright 2003-2006, by Barak Naveh and Contributors.
 *
 * Original Author:  Barak Naveh
 * Contributor(s):   Christian Hammer
 *
 * $Id: GraphVertexChangeEvent.java 504 2006-07-03 02:37:26Z perfecthash $
 *
 * Changes
 * -------
 * 10-Aug-2003 : Initial revision (BN);
 * 11-Mar-2004 : Made generic (CH);
 *
 */
package org.jgrapht.event;

/**
 * An event which indicates that a graph vertex has changed, or is about to
 * change. The event can be used either as an indication <i>after</i> the vertex
 * has been added or removed, or <i>before</i> it is added. The type of the
 * event can be tested using the {@link
 * org.jgrapht.event.GraphChangeEvent#getType()} method.
 *
 * @author Barak Naveh
 * @since Aug 10, 2003
 */
public class GraphVertexChangeEvent<V>
    extends GraphChangeEvent
{
    //~ Static fields/initializers --------------------------------------------

    private static final long serialVersionUID = 3690189962679104053L;

    /**
     * Before vertex added event. This event is fired before a vertex is added
     * to a graph.
     */
    public static final int BEFORE_VERTEX_ADDED = 11;

    /**
     * Before vertex removed event. This event is fired before a vertex is
     * removed from a graph.
     */
    public static final int BEFORE_VERTEX_REMOVED = 12;

    /**
     * Vertex added event. This event is fired after a vertex is added to a
     * graph.
     */
    public static final int VERTEX_ADDED = 13;

    /**
     * Vertex removed event. This event is fired after a vertex is removed from
     * a graph.
     */
    public static final int VERTEX_REMOVED = 14;

    //~ Instance fields -------------------------------------------------------

    /**
     * The vertex that this event is related to.
     */
    protected V vertex;

    //~ Constructors ----------------------------------------------------------

    /**
     * Creates a new GraphVertexChangeEvent object.
     *
     * @param eventSource the source of the event.
     * @param type the type of the event.
     * @param vertex the vertex that the event is related to.
     */
    public GraphVertexChangeEvent(Object eventSource, int type, V vertex)
    {
        super(eventSource, type);
        this.vertex = vertex;
    }

    //~ Methods ---------------------------------------------------------------

    /**
     * Returns the vertex that this event is related to.
     *
     * @return the vertex that this event is related to.
     */
    public V getVertex()
    {
        return vertex;
    }
}
http://kickjava.com/src/org/jgrapht/event/GraphVertexChangeEvent.java.htm
In polynomial fitting, A is called the Vandermonde matrix and takes the form:

    A = [ 1  x_1  x_1^2  ...  x_1^n ]
        [ 1  x_2  x_2^2  ...  x_2^n ]
        [ ...                       ]
        [ 1  x_m  x_m^2  ...  x_m^n ]

Calculating solar panel shading in Python
March 16, 2009 | categories: python, mathematics, solar

Eventually, I'd like to install solar panels on our house, but I want to know that it will be worth it before I commit the money. Back in 2007, I wrote a library for calculating the path of the sun given the time and your location on the earth. Since then, I've been thinking about the next step, which is calculating how much neighboring houses and trees would obstruct the sun at different times of the day. In a nutshell, I wanted to replicate the calculations performed by devices like the Solmetric Suneye without paying $1500 for the device. To do the job right, I still need a quality circular fisheye lens ($680 new), but I've got a first approximation working with a $13 door peephole from McMaster. (I do realize that I could just get a solar contractor to do all the site evaluation for me for free. That misses the point. If all I wanted was some sort of fiscal efficiency, I'd get a job as a financial executive or maybe something different from that, like a bank robber.)

Back to the task at hand. Below is the image I started with, which I took back in the fall, before the leaves fell off the trees. This was using a Canon SD450 camera and the aforementioned peephole. I didn't take particular care to level the camera or center the peephole. Using the Python Imaging Library, I cropped the photo to be square and switched to black and white mode. Then I mapped concentric rings in the image below to the subsequent straightened image. Here's the straightening code. The guts are in the nested for loops near the end. (If there are any Pythonistas out there who know how to iterate over concentric rings using a list comprehension or map(), please let me know. The code below is functional and reasonably clear, but a little slow.)

    # Code under GPLv3; see pysolar.org for complete version.

The straightening works pretty well, but there is a little distortion that I think is caused by the peephole not being a perfectly spherical lens. Also, because the peephole is not centered on the camera lens, my concentric ring transformation isn't centered either. From here, I needed to figure out where the sky ends and the buildings or trees start. Unfortunately, the buildings and trees are both lighter and darker than the sky in different places, so I can't just look for one type of transition. To detect the edge, I scan down each column of pixels and calculate the difference in darkness between consecutive pixels, ignoring whether the change was from light to dark or the reverse. The image below shows those differences, amplified by 10x to increase the contrast. The mathematicians call this calculation a finite forward difference. Here's the finite difference code.

    # Code under GPLv3; see pysolar.org for complete version.
    def differentiateImageColumns(im):
        (width, height) = im.size
        pix = im.load()
        for x in range(width):
            for y in range(height - 1):
                pix[x, y] = min(10 * abs(pix[x, y] - pix[x, y + 1]), 255)
        return im

The last step is to scan down each column looking for the first large value. The first change that crosses a threshold is recorded. The final output is an array of values that measure the angle of the highest obstruction as a function of direction. As a sanity check, I drop a red dot on each value. It's hard to make out in the thumbnail below, but if you click on the image below, you'll get a larger version where you can see the red dots work pretty well. I think the next step will be to calculate the total energy delivered per year using Pysolar. Here's the full code.

    # Code under GPLv3; see pysolar.org for complete version.
    from PIL import Image
    from math import *
    import numpy as np

    def squareImage(im):
        (width, height) = im.size
        box = ((width - height)/2, 0, (width + height)/2, height)
        return im.crop(box)

    def differentiateImageColumns(im):
        (width, height) = im.size
        pix = im.load()
        for x in range(width):
            for y in range(height - 1):
                pix[x, y] = min(10 * abs(pix[x, y] - pix[x, y + 1]), 255)
        return im

    def redlineImage(im):
        (width, height) = im.size
        pix = im.load()
        threshold = 300
        for x in range(width):
            for y in range(height - 1):
                (R, G, B) = pix[x, y]
                if R + G + B > threshold:
                    pix[x, y] = (255, 0, 0)
                    break
        return im

    im = Image.open('spherical.jpg').convert("L")
    im = squareImage(im)
    lin = despherifyImage(im)
    d = differentiateImageColumns(lin).convert("RGB")
    r = redlineImage(d)
    r.show()

This post took two-thirds of eternity to write. Feel free to ask questions or point out how confused I am below. Next: regularized least squares! Or maybe selection of polynomial degree! Or maybe forgetting the blog and melting brain with TV!
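On the loop-speed complaint above: the same finite forward difference can be vectorized with numpy. This is my addition, not part of the original post; it operates on a plain array rather than a PIL pixel-access object:

```python
import numpy as np


def diff_columns(a):
    """Absolute forward difference down each column, amplified 10x and
    clipped to 255, mirroring differentiateImageColumns()."""
    d = np.abs(np.diff(a.astype(int), axis=0))  # consecutive-row differences
    return np.minimum(10 * d, 255)

# With PIL, an image converts via np.asarray(im), and the result converts
# back with Image.fromarray(out.astype(np.uint8)).
```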
http://pingswept.org/category/python/
java XML openoffice
Budget: $30-250 USD

Need a small Java application that reads an XML file and generates an Office Open XML File Format (Standard ECMA-376) file that can be opened and edited with Microsoft PowerPoint 2007. This task requires strong Java programming skills, knowledge of XML, and knowledge of the Office Open XML File Format (Standard ECMA-376). Do you have experience with the Office Open XML File Format (Standard ECMA-376)? When can you deliver? See the attached detailed application description and let me know if you can do the work. Thank you.

It is not so big if you use existing libraries for much of the guts of the application. It has no GUI. It is a simple command line Java program. The single Java class below can be used to read the XML input file. This will put the input into a [url removed, login to view] object. You can also use the [url removed, login to view] library which will do most of the boilerplate handling of PresentationML and DrawingML for you. So the real task is to handle the relatively simple input definition file, add the appropriate shape and text objects to a slide, and calculate the locations for the objects in the tree. I could do the task myself, but I am working on other projects at the moment. When can you deliver? What fee would you ask for? Thanks

    // *********************************************************
    // Java class for reading an XML file
    /* Project info to: */
    import [url removed, login to view];
    import [url removed, login to view];
    import [url removed, login to view];
    import [url removed, login to view];
    import [url removed, login to view];

    /**
     * Reads a given String filename and returns the [url removed, login to view]
     * object representing the XML file. The given filename must be
     * a valid XML file otherwise an exception will be thrown.
     */
    public class XMLReader {
        public static Document getXMLFile(String filename)
                throws IOException, JDOMException {
            File f = new File(filename);
            Document document = null;
            if (f != null) {
                if (!f.isFile()) {
                    throw new IOException(
                        "In [url removed, login to view], the given String filename does NOT name a file.");
                } else {
                    SAXBuilder parser = new SAXBuilder();
                    document = [url removed, login to view](f);
                    return document;
                }
            }
            return null;
        }
    }

can pay 30$

5 freelancers are bidding on average $54 for this job

I have 4+ years of exp on java & xml
https://www.tr.freelancer.com/projects/java-xml/java-xml-openoffice/
Embedded scripting recommendation

Hi All,

I've spent most of the day looking through a ton of embedded scripting options. Some seem easy; some claim to work but seem to have issues running even the simplest examples. So I'm looking for some recommendations. I'm writing an instrument control app that has to be Win32 (hardware limitation). I don't need a ton of scripting horsepower. Basically I need the following:

1. Some basic math
2. Mostly simple variables (int, float, string)
3. Timing (i.e. delays; timers would be nice)
4. Loops
5. The ability to export or create functions to call some internal routines I will have to control the instrument

I'm not too particular on the syntax, since it will really be used for test purposes; i.e. the operator will want to repeatedly test a particular operation, so they would write a short script that loops the desired number of times and calls some function I give them to control the device. That's really about the extent of what I need.

It seems like most of the embedded scripting engines I've been looking at aim much higher than I need and seem rather difficult to implement, and some, when I've downloaded and attempted to include/compile their simple example, have bombed pretty badly, either causing AVs or not compiling at all. I saw that there was a QtPython and I thought "alright!" but it too seems not to like Qt 5.4. It seems to have last been updated in 2012. I do see QtScript, and I'm reading about it, but wonder if it can do what I want/need.

Any recommendations? Thanks in advance.

- JKSH Moderators

Hi,

Qt Script supports JavaScript, and I know JavaScript supports #1 to #4 of your list. Could you explain what you mean by #5?

An alternative is to use QJSEngine, which also supports JavaScript (see "Making Applications Scriptable"). It is newer than Qt Script but simpler.

Hi JKSH

Thanks so much for the reply. What I mean by #5 is that in my app I will have a small API of sorts to control the instrument I must talk to.
For example, there will be an X/Y robot, so I'll probably have a class/methods to move the robot around. I want to be able to expose these in some fashion to the script. For example, in a much older application written in Delphi years and years ago, we had a Pascal-based scripting language, and for each low-level function we had (one of them was in fact MoveRobot()) I had a scriptable version. So in a script we could write something like:

@
procedure MoveToPark();
begin
  MoveRobot( parkX, parkY );
end;
@

All of the above code was evaluated by the script engine; "MoveRobot()" was evaluated and a call to my internal API MoveRobot() was made. That is my goal.

The purpose of this app right now is to provide a test bed for some new hardware being developed. The mechanical and fluidics engineers are looking for something where they can, say, exercise the robot between some positions repeatedly. While I could do this in code quickly for them, the eventual goal will be to have them writing a sequence that takes the instrument through its paces. As I mentioned, my scripting needs are pretty simple. I don't need true objects, just mainly the stuff in the list. In our prior scripting language (years and years ago, again) we could have functions, variables, etc. It was pretty advanced for a script language of 15 years ago. I'll check out QJSEngine. Thanks for the reference!

At an initial glance, QJSEngine looks ok. Since you can "pass in" an object, I think I can use that method to make instrument control routines available to the script. I'll check this out further. Thanks again!

Hello, I'd also advise use of QtScript; it basically does all you require. It also has a fine debugger window with it, and can export separate functions (both of these are different for QJSEngine as far as I know).
I'll give you some snippets to start with.

Qt project.pro integration:

@
QT += core gui widgets script scripttools
@

If you don't need the debugger window, you can omit gui, widgets and scripttools (the latter is the debugger module, and the debugger is a widget, which requires a gui app).

As the next step, you can have this class (I've added the includes and forward declarations the snippet needs to build as one file):

@
#include <QObject>
#include <QFile>
#include <QDebug>
#include <QElapsedTimer>
#include <QApplication>
#include <QScriptEngine>
#include <QScriptContext>
#include <QScriptEngineDebugger>

// Forward declarations for the script-exposed functions defined below.
QScriptValue fileEval(QScriptEngine* js, const QString &name);
QScriptValue script(QScriptContext* ctx, QScriptEngine* eng);
QScriptValue warning(QScriptContext* ctx, QScriptEngine* eng);
QScriptValue quit(QScriptContext* ctx, QScriptEngine* eng);

class Engine : public QObject
{
    Q_OBJECT
public:
    Engine();
    ~Engine();

public slots:
    void start();

private:
    QScriptEngine *js;
    QScriptEngineDebugger *dbg;
};

Engine::Engine()
    : js(new QScriptEngine)
    , dbg(new QScriptEngineDebugger)
{
    js->globalObject().setProperty("script", js->newFunction(script));
    js->globalObject().setProperty("warning", js->newFunction(warning));
    js->globalObject().setProperty("quit", js->newFunction(quit));
    dbg->attachTo(js);
    dbg->setAutoShowStandardWindow(true);
}

Engine::~Engine()
{
    delete dbg;
    dbg = nullptr;
    delete js;
    js = nullptr;
}

void Engine::start()
{
    fileEval(js, "myStartScript");
}

QScriptValue fileEval(QScriptEngine* js, const QString &name)
{
    QString path = QString("./../scripts/%0.js").arg(name);
    QFile scriptFile(path);
    if (!scriptFile.open(QIODevice::ReadOnly))
        qDebug() << "Script file does not exist:" << name;
    QString code = scriptFile.readAll();
    scriptFile.close();

    QElapsedTimer tmr;
    tmr.start();
    QScriptValue result = js->evaluate(code, name);
    if (result.isError())
        qDebug() << "Uncaught exception:" << result.toString();
    qDebug() << "script" << name << "executed in" << tmr.elapsed() << "msec";
    return result;
}

QScriptValue script(QScriptContext* ctx, QScriptEngine* eng)
{
    return fileEval(eng, ctx->argument(0).toString());
}

QScriptValue warning(QScriptContext* ctx, QScriptEngine* eng)
{
    Q_UNUSED(eng);
    qDebug() << "Warning:" << ctx->argument(0).toString();
    return 0;
}

QScriptValue quit(QScriptContext* ctx, QScriptEngine* eng)
{
    Q_UNUSED(ctx);
    Q_UNUSED(eng);
    QApplication::quit();
    return 0;
}
@

Now you can write js scripts or extend the engine itself.
Script example:

@
var check = script("someScript"); // do not include '.js' as it is already accounted for
warning('additional scripts loaded!');
quit(); // app exit
@

- JKSH Moderators

You're most welcome :) I'm pretty sure QJSEngine lets you do #1 to #5 too. I haven't used Qt Script or QJSEngine myself though, so please do check them out further before committing. All the best!

@RaubTieR - Thanks very much! I get the idea from your code, however I couldn't seem to get it to link. So I pulled the basics out and it helped to get me some ideas. Thanks so much for taking the time to post the example!

@JKSH - You guys are the best. Thanks again for all the help. Hope all enjoy the new year!

- wrosecrans

I've been using Boost.Python and QJSEngine in my current app. I've spent most of the effort on exposing stuff to Python, since that's the "important" language for my market segment. But a few things are useful to interop with web stuff where JS is the only language, so I have both. Boost.Python is awesome, but very complicated to expose the guts of a complete large app to Python. QJSEngine would probably also drive me crazy if I wanted to do everything I am doing with my Python bindings, but for the little stuff it is currently doing, I was almost shocked how easy it is to get up and running:

@
engine->globalObject().setProperty("nominalWeight", nominalWeight);
QJSValue result = engine->evaluate(QString::fromStdString(logic));
if (result.isError())
    return nominalWeight;
return engine->globalObject().property("nominalWeight").toNumber();
@

[quote author="RaubTieR" date="1419789716"][/quote]

I have learned that. I do qmakes quite often to be sure. I was going to wait a few days to avoid flooding the forum with dumb questions I need to ask about how to do a command line build of my project. One of the requirements is automated weekly builds, eventually nightly.
I've tried copying the command lines from the compile output, making sure my paths to the compiler and linker are in place, but still can't seem to get a command line build to run. I'm using the VS2013 compiler, and once my build starts it complains it can't find stdafx.h. Anyway, I'll work up a formal question with examples on that. Thanks for the input!
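As a closing aside (mine, not a post from the thread): independent of which engine is chosen, requirement #5 boils down to a registry mapping script-visible names to internal callbacks. An engine-agnostic sketch of that shape:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// A tiny engine-agnostic command registry: scripts refer to commands by
// name; the host app registers the internal routines behind those names.
class CommandRegistry {
public:
    using Handler = std::function<void(const std::vector<double>&)>;

    void add(const std::string& name, Handler fn) {
        handlers[name] = std::move(fn);
    }

    // Returns false if the script used an unknown command name.
    bool call(const std::string& name, const std::vector<double>& args) {
        auto it = handlers.find(name);
        if (it == handlers.end()) return false;
        it->second(args);
        return true;
    }

private:
    std::map<std::string, Handler> handlers;
};
```

With QtScript or QJSEngine you would instead expose a QObject's slots or a wrapped function, but the binding idea is the same.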
https://forum.qt.io/topic/49646/embedded-scripting-recommendation
A Python example.

    # ftptest.py - An example application using Python's ftplib module.
    # Author: Matt Croydon <matt@ooiio.com>, referencing many sources, including:
    # Pydoc for ftplib:
    # ftplib module docs:
    # Python Tutorial:
    # License: GNU GPL. The software is free, don't sue me.
    # This was written under Python 2.2, though it should work with Python 2.x and greater.

    # Import the FTP object from ftplib
    from ftplib import FTP

    # This will handle the data being downloaded
    # It will be explained shortly
    def handleDownload(block):
        file.write(block)
        print ".",

    # Create an instance of the FTP object
    # Optionally, you could specify username and password:
    # FTP('hostname', 'username', 'password')
    ftp = FTP('')

    print "Welcome to Matt's ftplib example"

    # Log in to the server
    print 'Logging in.'
    # You can specify username and password here if you like:
    # ftp.login('username', 'password')
    # Otherwise, it defaults to Anonymous
    ftp.login()

    # This is the directory that we want to go to
    directory = 'pub/simtelnet/trumpet/winsock'

    # Let's change to that directory. You kids might call these 'folders'
    print 'Changing to ' + directory
    ftp.cwd(directory)

    # Print the contents of the directory
    ftp.retrlines('LIST')

    # Here's a file for us to play with. Remember Trumpet Winsock?
    filename = 'winap21f.zip'

    # Open the file for writing in binary mode
    print 'Opening local file ' + filename
    file = open(filename, 'wb')

    # Download the file a chunk at a time
    # Each chunk is sent to handleDownload
    # We append the chunk to the file and then print a '.' for progress
    # RETR is an FTP command
    print 'Getting ' + filename
    ftp.retrbinary('RETR ' + filename, handleDownload)

    # Clean up time
    print 'Closing file ' + filename
    file.close()

    print 'Closing FTP connection'
    ftp.quit()
http://blog.csdn.net/runmin/article/details/68936
Follow-up Comment #2, bug #41426 (project octave):

retval = any ((x != x.')(:));

seems slightly faster for full matrices and about the same speed as the original code, but the "(:)" creates an out-of-memory error for large sparse matrices as well. The difference in speed between the two versions for full matrices only seems to be about 5%, so maybe we should just take your patch as is. Otherwise something like

if (issparse (x))
  retval = nnz (x != x.') == 0;
else
  retval = all ((x == x.')(:));
endif

might be appropriate.

D.

_______________________________________________________
Reply to this item at:
<>
_______________________________________________
Message sent via/by Savannah
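For readers following along outside Octave, the same symmetry test can be sketched in pure Python; the function name and the test matrices below are my own invention, not part of the bug report.

```python
# Pure-Python analogue of the symmetry check discussed above: compare
# the upper triangle against the lower one, avoiding a full temporary
# comparison matrix.
def is_symmetric(m):
    n = len(m)
    # A non-square matrix can never be symmetric.
    if any(len(row) != n for row in m):
        return False
    return all(m[i][j] == m[j][i]
               for i in range(n) for j in range(i + 1, n))

sym = [[1, 2, 3], [2, 5, 6], [3, 6, 9]]
asym = [[1, 2], [3, 4]]
```

The early-exit behaviour of `all()` mirrors the motivation in the thread: stop as soon as one mismatching pair is found rather than materialising `x != x.'` in full.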
https://lists.gnu.org/archive/html/octave-bug-tracker/2014-02/msg00027.html
A quick update post to help get my latest project's new dataset more readily indexed on Google search, etc. (Feb 8th 2021)

I've recently been working on risk assessment for COVID-19 in our 2nd wave. To create an email alert per province (taking account of local regional data) I needed to join provincial data together. It turns out that for much of Thailand's publicly available government datasets (particularly in the Office of Agricultural Economics, Land Department, etc.) the data is summarised at province level (i.e. it is not GIS coordinate-based). Yet there's no mapping of province -> [neighbouring provinces] dataset out there (that I could find), so I created one the other night and wrote the code to verify and integrate it. That dataset/code is now on GitHub.

An obligatory requirement of using data relations (X -> Y) is making a pretty visualisation on GraphViz, so dutifully, here it is: (along with Wikipedia's provincial public map for comparison.)

Q & A

Is it correct & up to date? Yes. The newest Thai province change was adding Bueng Kan, which was split off from Nong Khai, effective on 23 March 2011; that's included, so it's up to date as of Feb 2021. Bangkok is referred to as a Special Administrative Area, but it's included as a province in the mappings, giving a total of 77 entries.

Is it easy to use the mapping dataset by importing a Python module into my own software application? Yes, you can join province datasets together based on their semantic geo-neighbourhoods 🙂
- Just git clone the repository, download a province naming dataset, import the Python module,
- Writing about 4 lines of code gives you a dictionary lookup (see the readme.md for full details).

I want to SQL join my provincial datasets together, but only for the provinces next door; how can I do that? Yes, that's precisely what this dataset and code is for. Before you create your SQL query,
- import the Python module (province_neighbours.py),
- instantiate the ProvinceRelationsParser object,
- get the dictionary,
- perform the dictionary lookup on your key province; this will give you the list of neighbouring provinces.
- Simply plug those names into your SQL query and you are ready! (Find a code example in the readme.md.)

Can I use Thai language (UTF-8) as my lookup and get neighbour results in Thai (UTF-8)? Short answer is yes. See the readme.md on the GitHub repo for full details with code samples.

Over to you

There's plenty more to say about this project, but if you're interested in the details, go visit the GitHub repository. (Or send me a message if you want extra detailed info.) Feel free to check it out.
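The lookup-then-join workflow described above can be sketched as follows. This is an illustrative sketch only: the adjacency entries are a tiny hand-made fragment, and the table and column names in the SQL are invented for the example; the real repository exposes the full dictionary via its ProvinceRelationsParser.

```python
# Hand-made fragment with the same shape as the real neighbour dict.
neighbours = {
    "Nong Khai": ["Bueng Kan", "Udon Thani", "Sakon Nakhon"],
    "Bueng Kan": ["Nong Khai", "Sakon Nakhon", "Nakhon Phanom"],
}

def neighbour_in_clause(province, mapping):
    # Quote each neighbouring province for use in a SQL IN (...) filter.
    return ", ".join("'%s'" % p for p in mapping[province])

clause = neighbour_in_clause("Bueng Kan", neighbours)
# 'rainfall' and 'province' are invented names for the example.
query = "SELECT * FROM rainfall WHERE province IN (%s)" % clause
```

In real code you would use parameterised queries rather than string interpolation, but the point here is only to show how the dictionary lookup feeds the join.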
https://petescully.co.uk/2021/02/08/thailand-province-border-adjacency-dataset/
Development Lead, Hosted Services (Current)
Program Manager, CLR Team (2005-2007)

There are four ways to throw an exception, in my opinion:

1. Throw a new exception
   a. throw new ArgumentException( ... );
2. Throw the exception you caught
   a. throw e;
3. Throw the same exception, preserving the stack
   a. throw;
4. Throw with the original exception as an inner exception
   a. throw new ArgumentException("...", innerException);

using System;

class Sample
{
    public void Method1()
    {
        // possible throw options
        // throw new ArgumentException(); - Origin of the exception
    }

    public void Method2()
    {
        try
        {
            Method1();
        }
        catch (ArgumentException ae)
        {
            // possible throw options
            // throw;                              - Preserves the original stack
            // throw ae;                           - Breaks the stack and throws the same exception from now on
            // throw new ArgumentException("...", ae); - Throws a new exception with an inner exception
            // throw new ArgumentException();      - Throws a new exception
        }
        catch (Exception e)
        {
            Console.WriteLine(e.ToString());
        }
    }

    public void Method3()
    {
        bool cleanup = true;
        try
        {
            Method2();
            cleanup = false;
        }
        catch (Exception)
        {
            // perform cleanup on error
        }
        finally
        {
            if (cleanup) { /* perform cleanup on error */ }
        }
    }

    public static void Main()
    {
        Sample s = new Sample();
        try
        {
            s.Method2();
        }
        catch (ArgumentException ae)
        {
            Console.WriteLine(ae.ToString());
        }
    }
}

I personally would like to classify them into two categories:

1. The origin of the exception, the place where the exception starts
   a. throw new ArgumentException( ... );
2. The place where the exception is handled and either gets mapped or rethrown
   a. throw new ArgumentException();
   b. throw new ArgumentException("...", innerException);
   c. throw e;
   d. throw;

There are not many options at the origin of the exception, so that is very clear. The confusion comes only when an exception passes through your code and you attempt to handle it. In this case again, all four options are available, but only a few make sense. For example:

1. throw new ArgumentException();
2. throw ae;

both don't make much sense.
Both of these have brighter counterparts in throw new ArgumentException("...", ae) and throw;, which either capture the inner exception or preserve the stack. That leaves us with only two options in the way you handle and throw an exception:

1. If you want to map your exceptions:
   a. throw new ArgumentException("...", innerException);
(or)
2. If you want to log contextual information and pass on the same exception:
   a. throw;

If you use the finally pattern as below, then you don't need the throw;. If you use the exception pattern as shown below, then you need the throw to preserve the stack. Incidentally, I noticed that performing cleanup on error in .NET is complicated. You have to either catch all exceptions and clean up, or use a variable that can tell you whether cleanup is required in the finally block. Both of them seem to violate the .NET patterns of catching only exceptions that you can handle and not using return values to understand errors. If you had the cleanup in the catch block and you want to make sure it runs on any error, you have to catch Exception e, which is not advised. This is another pattern that can be used to avoid that, but it seems to rely on the return-value paradigm, which is not advised either. So what you choose seems to be up to you!

Krzysztof Cwalina has a very interesting article on exceptions at

You can differentiate in a finally block whether you are in an exception unwind scenario or normal cleanup. Simply define this helper function and you are ready to go:

static bool InException()
{
    return (Marshal.GetExceptionPointers() == IntPtr.Zero) ? false : true;
}

This way you can choose what to do:

void func()
{
    try
    {
        Otherfunc();
    }
    finally
    {
        if (InException())
        {
            // Do other things
        }
        else
        {
            // Normal cleanup logic
        }
    }
}

Yours,
Alois Kraus

I did not know that I could do this. Thanks for the great info.
This probably eliminates the need for throw;, wouldn't it?

Hi Thotham, I did investigate the solution a bit further here:

In your comment, did you mean get rid of the catch handler? Yes, that would be the case. The only drawback is that you cannot get your hands on the exception object in a finally clause. The most common use case for an unhandled exception is to trace it, but for this I would not want to write a catch handler. So far I was not able to find the exception object in the finally block.

Yes. I was looking into the use of 'throw;' to see if it can be done this way. That is a good point that you cannot get a hand on the exception object. I am also not sure if there is any performance impact in doing this, as this will be done pretty much throughout the code if adopted as a pattern for tracing or logging.
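The cleanup-on-error idea discussed above (run the cleanup only when the protected block failed) applies outside C# as well; here is a minimal Python illustration of the flag-in-finally pattern (my own sketch, not from the post):

```python
events = []

def risky(fail):
    # A 'done' flag set as the last statement of the try block tells the
    # finally block whether the work completed or was interrupted.
    done = False
    try:
        events.append("work started")
        if fail:
            raise RuntimeError("boom")
        done = True
    finally:
        if not done:
            events.append("cleanup on error")  # runs only on failure

risky(False)            # success path: no cleanup entry is recorded
try:
    risky(True)         # failure path: cleanup fires, exception still propagates
except RuntimeError:
    pass
```

Note that the exception itself is not caught inside risky(); the finally block only observes whether the try block ran to completion, which is exactly the trade-off the blog post describes.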
http://blogs.msdn.com/thottams/archive/2008/03/24/handling-throwing-exceptions-and-clean-up-on-error.aspx
Banish Merge Conflicts With Semantic Merge. They're a hassle. I know the data backs me up here. When I started at GitHub, I worked on a Git client. If you can avoid it, never work on a Git client. It's painful. The folks that build these things are true heroes in my book. Every one of them. Anyways, the most frequent complaint we heard from our users had to do with merge conflicts. It trips up so many developers, whether new or experienced. We ran some surveys and we'd often hear things along the lines of this…

When I run into a merge conflict on GitHub, I flip my desk, set it all on fire, and git reset HEAD --hard and just start over.

Conflict Reduction

Here's the dirty little secret of Git. Git has no idea what you're doing. As far as Git is concerned, you just tappety tap a bunch of random characters into a file. Ok, that's not fair to Git. It does understand a little bit about the structure of text and code. But not a lot. If it did understand the structure and semantics of code, it could reduce the number of merge conflicts by a significant amount. Let me provide a few examples. We'll assume two developers are collaborating on each example, Alice and Bob. Bob only works on master and Alice works in branches. Be like Alice. In each of these examples, I try to keep them as simple as possible. They're all single file, though the concepts work if you work in separate files too.

Method Move Situation

In this scenario, Bob creates an interface for a socket server. He just jams everything into a single interface.

Bob: Initial Commit on master

+public interface ISocket
+{
+    string GetHostName(IPAddress address);
+    void Listen();
+    void Connect(IPAddress address);
+    int Send(byte[] buffer);
+    int Receive(byte[] buffer);
+}

Alice works with Bob on this code. She decides to separate this interface into two interfaces - an interface for clients and another for servers. So she creates branch separate-client-server and creates the IServerSocket interface.
She then renames ISocket to IClientSocket. She also moves the methods Listen and Receive into the IServerSocket interface.

Alice: Commit on separate-client-server

-public interface ISocket
+public interface IClientSocket
 {
     string GetHostName(IPAddress address);
-    void Listen();
     void Connect(IPAddress address);
     int Send(byte[] buffer);
-    int Receive(byte[] buffer);
 }
+
+public interface IServerSocket
+{
+    void Listen();
+    int Receive(byte[] buffer);
+}

Meanwhile, back on the master branch, Bob moves GetHostName into a new interface, IDns.

 public interface ISocket
 {
-    string GetHostName(IPAddress address);
     void Listen();
     void Connect(IPAddress address);
     int Send(byte[] buffer);
     int Receive(byte[] buffer);
 }
+
+public interface IDns
+{
+    string GetHostName(IPAddress address);
+}

Now Bob attempts to merge the separate-client-server branch into master. Git loses its shit and reports a merge conflict. Boo hoo.

 using System.Net;

-public interface ISocket
+public interface IClientSocket
 {
+<<<<<<< HEAD
     void Listen();
+=======
+    string GetHostName(IPAddress address);
+>>>>>>> separate-client-server
     void Connect(IPAddress address);
     int Send(byte[] buffer);
-    int Receive(byte[] buffer);
 }

+<<<<<<< HEAD
 public interface IDns
 {
     string GetHostName(IPAddress address);
 }
+=======
+public interface IServerSocket
+{
+    void Listen();
+    int Receive(byte[] buffer);
+}
+>>>>>>> separate-client-server

All Git knows is that both developers changed some text in the same place. It has no idea that Alice and Bob are extracting interfaces and moving methods around. But what if it did? This is where semantic diff and semantic merge come into play. I'm an advisor to Códice Software, who are deep in this space. One of their products, gmaster, is a Git client. This client includes their Semantic Merge technology. Here's what happens when I run into this situation with gmaster. The UI is a bit busy and confusing at first, but it's very powerful and you get used to it.
- First, gmaster recognizes that Git reports a merge conflict. It doesn't resolve it automatically. This is by design. Merge resolution is an intentional act. There's probably a setting to allow it to automatically resolve conflicts it understands.
- Down below, gmaster displays a semantic diff. The diff shows that the method moved to a new interface. It knows what's going on here.
- Click the "Launch Merge Tool" to see the magic happen. This launches the semantic merge tool.
- As you can see, the tool was able to automatically resolve the conflict. No manual intervention necessary.
- All you have to do is click Commit to complete the merge commit.

With Git and any other diff/merge tool, you would have to manually resolve the conflict. If you've resolved large conflicts, you know what a pain it is. Any tool that can reduce the number of conflicts you need to worry about is valuable. And on a real-world repository, this tool makes a big impact. I'll cover that in a future post!

Summary

I'll be honest, my favorite Git client is still GitHub Desktop. I appreciate its clean design, usability, and how it fits my workflow. Along with the command line, Desktop is my primary Git client. But I added gmaster to my toolbelt. It comes in handy when I run into merge conflicts. I'd rather let it handle conflicts than do it all by hand. Gmaster is unfortunately only available on Windows, but you can't beat the price, free! I plan to write another post or two about merge conflict scenarios and how semantic approaches can help save developers a lot of time.

DISCLAIMER: I am a paid advisor to the makers of gmaster, but the content on my blog is my own. They did not pay for this post, in the same way all my previous employers did not pay for any content on my blog.
https://haacked.com/archive/2019/06/17/semantic-merge/
Improving your code with modern idioms

Once you have ported to Python 3 you have a chance to use the newer features of Python to improve your code. Many of the things mentioned in this chapter are in fact possible to do even before porting, as they are supported even by quite old versions of Python. But I mention them here anyway, because they aren't always used when the code could benefit from them. This includes generators, available since Python 2.2; the sorted() method, available since Python 2.4; and context managers, available since Python 2.5. The rest of the new features mentioned here have in fact been backported to either Python 2.6 or Python 2.7. So if you are able to drop support for Python 2.5 and earlier, you can use almost all of these new features already before porting.

Use sorted() instead of .sort()

Lists in Python have a .sort() method that will sort the list in place. Quite often when .sort() is used, it is used on a temporary variable discarded after the loop. Code like this is used because up to Python 2.3 this was the only built-in way of sorting.

>>> infile = open('pythons.txt')
>>> pythons = infile.readlines()
>>> pythons.sort()
>>> [x.strip() for x in pythons]
['Eric', 'Graham', 'John', 'Michael', 'Terry', 'Terry']

Python 2.4 has a new built-in, sorted(), that will return a sorted list and takes the same parameters as .sort(). With sorted() you often can avoid the temporary variable. It will also accept any iterable as input, not just lists, which can make your code more flexible and readable.

>>> infile = open('pythons.txt')
>>> [x.strip() for x in sorted(infile)]
['Eric', 'Graham', 'John', 'Michael', 'Terry', 'Terry']

There is however no benefit in replacing mylist.sort() with mylist = sorted(mylist); in fact it will use more memory. The 2to3 fixer "idioms" will change some usage of .sort() into sorted().
Coding with context managers

Since Python 2.5 you can create context managers, which allow you to create and manage a runtime context. If you think that sounds rather abstract, you are completely right. Context managers are abstract and flexible beasts that can be used and misused in various ways, but this chapter is about how to improve your existing code, so I'll take up some examples of typical usage where they simplify life.

Context managers are used as a part of the with statement. The context manager is created and entered in the with statement and is available during the with statement's code block. At the end of the code block the context manager is exited. This may not sound very exciting until you realize that you can use it for resource allocation. The context manager then allocates the resource when you enter the context and deallocates it when you exit.

The most common example of this type of resource allocation is open files. In most low-level languages you have to remember to close files that you open, while in Python the file is closed once the file object gets garbage collected. However, that can take a long time, and sometimes you may have to make sure you close the file explicitly, for example when you open many files in a loop, as you may otherwise run out of file handles. You also have to make sure you close the file even if an exception happens. The result is code like this:

Since files are context managers, they can be used in a with-statement, simplifying the code significantly:

When used as a context manager, the file will close when the code block is finished, even if an exception occurs. As you see the amount of code is much smaller, but more importantly it's much clearer and easier to read and understand.

Another example is if you want to redirect standard out. Again you would normally make a try/except block as above.
That's OK if you only do it once in your program, but if you do this repeatedly you will make it much cleaner by making a context manager that handles the redirection and also resets it.

>>> import sys
>>> from StringIO import StringIO
>>> class redirect_stdout:
...     def __init__(self, target):
...         self.stdout = sys.stdout
...         self.target = target
...
...     def __enter__(self):
...         sys.stdout = self.target
...
...     def __exit__(self, type, value, tb):
...         sys.stdout = self.stdout
...
>>> out = StringIO()
>>> with redirect_stdout(out):
...     print 'Test'
...
>>> out.getvalue() == 'Test\n'
True

The __enter__() method is called when the indented block after the with statement is reached, and the __exit__() method is called when the block is exited, including after an error was raised.

Context managers can be used in many other ways and they are generic enough to be abused in various ways as well. Any code you have that uses exception handling to make sure an allocated resource or a global setting is unallocated or unset will be a good candidate for a context manager.

There are various functions to help you out in making context managers in the contextlib module. For example, if you have objects that have a .close() method but aren't context managers, you can use the closing() function to automatically close them at the end of the with-block.

>>> from contextlib import closing
>>> import urllib
>>> with closing(urllib.urlopen(book_url)) as page:
...     print len(page.readlines())
117

Advanced string formatting

In Python 3, and also Python 2.6, new string formatting support was introduced. It is more flexible and has a clearer syntax than the old string formatting.
Old style formatting:

>>> 'I %s Python %i' % ('like', 2)
'I like Python 2'

New style formatting:

>>> 'I {0} Python {1}'.format('♥', 3)
'I ♥ Python 3'

It is in fact a mini-language, and you can do some crazy stuff, but when you go overboard you lose the benefit of the more readable syntax:

>>> import sys
>>> 'Python {0.version_info[0]:!<9.1%}'.format(sys)
'Python 300.0%!!!'

For a full specification of the advanced string formatting syntax see the Common String Operations section of the Python documentation [3]. The old string formatting based on % is planned to be eventually removed, but there is no decided timeline for this.

Class decorators

Decorators have been around since Python 2.4 and have become commonplace thanks to the built-in decorators like @property and @classmethod. Python 2.6 introduces class decorators, which work similarly. Class decorators can both be used to wrap classes and to modify the class that should be decorated. An example of the latter is functools.total_ordering, which lets you implement a minimum of rich comparison operators and then adds the missing ones to your class. They can often do the job of metaclasses; examples of class decorators are decorators that make the class into a singleton class, or the zope.interface class decorators that register a class as implementing a particular interface. If you have code that modifies classes, take a look at class decorators; they may help you to make your code more readable.

Set literals

There is a new literal syntax for sets available in Python 3. Instead of set([1,2,3]) you can now write the cleaner {1,2,3}. Both syntaxes work in Python 3, but the new one is the recommended one, and the representation of sets in Python 3 has changed accordingly:

>>> set([1,2,3])
{1, 2, 3}

The set literal has been back-ported to Python 2.7, but the representation has not.
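The class decorators section above mentions functools.total_ordering; here is a minimal sketch of how it fills in the missing comparison operators (the Version class is my own example, not from the book):

```python
from functools import total_ordering

@total_ordering
class Version:
    # Implement only __eq__ and __lt__; the decorator derives
    # __le__, __gt__ and __ge__ automatically.
    def __init__(self, number):
        self.number = number

    def __eq__(self, other):
        return self.number == other.number

    def __lt__(self, other):
        return self.number < other.number
```

After decoration, `Version(2) <= Version(3)` and `Version(3) >= Version(2)` both work even though neither operator was written by hand.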
yield to the generators

Like the floor division operator and the key parameter to .sort(), generators have been around for a long time, but you still don't see them that much. They are immensely practical, though, and save you from creating temporary lists, thereby both saving memory and simplifying the code. As an example we take a simple function with two loops:

This becomes more elegant by using yield and thereby a generator:

Generators have been around since Python 2.2, but a new way to make generators appeared in Python 2.4, namely generator expressions. These are like list comprehensions, but instead of returning a list, they return a generator. They can be used in many places where list comprehensions are normally used:

More comprehensions

In Python 3, and also in Python 2.6, generator comprehensions are introduced. They are simply a generator expression with parentheses around it, and work like list comprehensions, but return a generator instead of a list.

>>> (x for x in 'Silly Walk')
<generator object <genexpr> at ...>

In Python 3 the generator comprehension is not just a nice new feature, but a fundamental change, as the generator comprehension is now the base around which all the other comprehensions are built. In Python 3 a list comprehension is now only syntactic sugar for giving a generator expression to the list type's constructor:

>>> list(x for x in 'Silly Walk')
['S', 'i', 'l', 'l', 'y', ' ', 'W', 'a', 'l', 'k']

>>> [x for x in 'Silly Walk']
['S', 'i', 'l', 'l', 'y', ' ', 'W', 'a', 'l', 'k']

This also means that the loop variable no longer leaks into the surrounding namespace.
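As a sketch of the pattern being described (my own example, not the book's): a function that appends to a temporary list in one loop for another loop to consume, next to the yield version and the equivalent generator expression:

```python
# List version: builds the whole temporary list in memory first.
def doubles_list(n):
    result = []
    for i in range(n):
        result.append(i * 2)
    return result

# Generator version: no temporary list, values are produced lazily
# as the caller iterates.
def doubles_gen(n):
    for i in range(n):
        yield i * 2

# A generator expression does the same thing inline.
lazy = (i * 2 for i in range(4))
```

Both versions yield the same values; the difference is only that the generator produces them one at a time instead of materialising the full list up front.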
A generator comprehension can in the same way be given to the dict() and set() constructors in Python 2.6 and later, but in Python 3, and also in Python 2.7, you have new syntax for dictionary and set comprehensions:

>>> department = 'Silly Walk'
>>> {x: department.count(x) for x in department}
{'a': 1, ' ': 1, 'i': 1, 'k': 1, 'l': 3, 'S': 1, 'W': 1, 'y': 1}
>>> {x for x in department}
{'a', ' ', 'i', 'k', 'l', 'S', 'W', 'y'}

New modules

There are several new modules that you should also take a look at to see if they can be of use for you. I won't take them up in detail here, since most of them are hard to benefit from without refactoring your software completely, but you should know they exist. For more information on them, you can look at the Python documentation.

Abstract base classes

The abc module contains support for making abstract base classes, where you can mark a method or property on a base class as "abstract", which means you must implement it in a subclass. Classes that do not implement all abstract methods or properties cannot be instantiated. The abstract base classes can also be used to define interfaces by creating classes that have no concrete methods. The class would then work only as an interface, and subclassing from it guarantees that it implements the interface. The new hierarchy of mathematical classes introduced in Python 2.6 and Python 3.0 is a good example of this. The abc module is included in Python 2.6 and later.

multiprocessing and futures

multiprocessing is a new module that helps you if you are using Python to do concurrent processing, letting you have process queues and use locks and semaphores for synchronizing the processes. multiprocessing is included in Python 2.6 and later. It is also available for Python 2.4 and Python 2.5 on the CheeseShop [1]. If you do concurrency you may also want to take a look at the futures module, which will be included in Python 3.2, and which exists on the CheeseShop in a version that supports Python 2.5 and later [2].
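The futures module mentioned above eventually landed in the standard library as concurrent.futures (Python 3.2); a minimal sketch of its executor API:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# map() distributes the calls across the worker threads and returns
# the results in input order; the with-block shuts the pool down.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(square, [1, 2, 3, 4]))
```

Swapping ThreadPoolExecutor for ProcessPoolExecutor gives the multiprocessing-backed variant with the same interface, which is the main appeal of the futures design.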
http://www.pythontip.com/blog/post/8990/
» Web Services » Constantly getting a 401 with Java Sharepoint

Michael Lee (Greenhorn, Joined: Dec 01, 2011, Posts: 6) posted Sep 25, 2012 08:29:05

Ok. I have no idea how to fix this. I've tried a million things. I want to just do the most BASIC of web services. I've compiled my wsdl using:

wsimport -p com.microsoft.schemas.sharepoint.soap -keep -extension lists.asmx.xml.wsdl

We had AD authentication but I've asked the Sharepoint guy to turn on basic auth instead and still nothing is working. I have what I think is very simple code:

Lists service = new Lists();
listsSoap = service.getListsSoap();
((BindingProvider) listsSoap).getRequestContext().put(BindingProvider.USERNAME_PROPERTY, userName);
((BindingProvider) listsSoap).getRequestContext().put(BindingProvider.PASSWORD_PROPERTY, password);
((BindingProvider) listsSoap).getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, "");
GetListCollectionResult result = listsSoap.getListCollection();

I have to have the ability to change WHERE (endpoint) the sharepoint server is because it will change in production. That is the hostname address above. I have it as a parameter. I've tried all kinds of variations. I've created an NTLMAuthenticator class when it was AD but that created a fatal java core dump! (I'm using jdk 1.7 5b3) Is it because the WSDL itself is not authenticated? No idea. I've seen people say make it anonymous but I don't control what is on the production server.
The error is this:

09:55:54,370 DEBUG httpclient.wire.header - >> "POST /_vti_bin/lists.asmx?WSDL HTTP/1.1[\r][\n]"
09:55:54,411 DEBUG httpclient.wire.header - >> "Content-Type: text/xml; charset=UTF-8[\r][\n]"
09:55:54,411 DEBUG httpclient.wire.header - >> "SOAPAction: ""[\r][\n]"
09:55:54,412 DEBUG httpclient.wire.header - >> "User-Agent: Axis2[\r][\n]"
09:55:54,412 DEBUG httpclient.wire.header - >> "Authorization: Basic QVRMREVWLkNPTVxyeWFuLnJlZ2FuOjMyUCNyZEA=[\r][\n]"
09:55:54,412 DEBUG httpclient.wire.header - >> "Host: 172.16.17.229[\r][\n]"
09:55:54,413 DEBUG httpclient.wire.header - >> "Transfer-Encoding: chunked[\r][\n]"
09:55:54,413 DEBUG httpclient.wire.header - >> "[\r][\n]"
09:55:59,534 DEBUG httpclient.wire.content - >> "ec[\r][\n]"
09:55:59,535 DEBUG httpclient.wire.content - >> "<?xml version='1.0' encoding='UTF-8'?><soapenv:Envelope xmlns:<soapenv:Body><GetListCollection xmlns=""/></soapenv:Body></soapenv:Envelope>"
09:55:59,535 DEBUG httpclient.wire.content - >> "[\r][\n]"
09:55:59,536 DEBUG httpclient.wire.content - >> "0"
09:55:59,536 DEBUG httpclient.wire.content - >> "[\r][\n]"
09:55:59,536 DEBUG httpclient.wire.content - >> "[\r][\n]"
09:55:59,740 DEBUG httpclient.wire.header - << "HTTP/1.1 401 Unauthorized[\r][\n]"
09:55:59,741 DEBUG httpclient.wire.header - << "HTTP/1.1 401 Unauthorized[\r][\n]"
09:55:59,745 DEBUG httpclient.wire.header - << "Server: Microsoft-IIS/7.5[\r][\n]"
09:55:59,746 DEBUG httpclient.wire.header - << "SPRequestGuid: c5ac71b5-f43a-429c-b9bb-09b8d296aee1[\r][\n]"
09:55:59,747 DEBUG httpclient.wire.header - << "WWW-Authenticate: Negotiate[\r][\n]"
09:55:59,748 DEBUG httpclient.wire.header - << "WWW-Authenticate: NTLM[\r][\n]"
09:55:59,748 DEBUG httpclient.wire.header - << "X-Powered-By: ASP.NET[\r][\n]"
09:55:59,749 DEBUG httpclient.wire.header - << "MicrosoftSharePointTeamServices: 14.0.0.4762[\r][\n]"
09:55:59,750 DEBUG httpclient.wire.header - << "Date: Tue, 25 Sep 2012 14:00:26 GMT[\r][\n]"
09:55:59,751 DEBUG httpclient.wire.header - << "Content-Length: 0[\r][\n]"
09:55:59,751 DEBUG httpclient.wire.header - << "[\r][\n]"
09:55:59,765 ERROR .httpclient.HttpMethodDirector -)
at org.apache.commons.httpclient.HttpMethodDirector.authenticateHost(HttpMethodDirector.java:282)
at org.apache.commons.httpclient.HttpMethodDirector.authenticate(HttpMethodDirector.java:234)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:170)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
.....

Help me Obi Wan, you're my only hope.

Paul Clapham (Bartender, Joined: Oct 14, 2005, Posts: 18985) posted Sep 27, 2012 16:46:22

Well, at the beginning I think I see you sending something to do with HTTP basic authentication. And then I think I see the server sending you a 401, saying that didn't authenticate. And then I think I see the server sending back a request to do NTLM authentication. And then I think I see you trying to do that but failing. Does that match what you think you're seeing? If so then it looks like you have two possibilities: (1) fix the basic authentication so it works, or (2) fix the NTLM authentication so it works. Sorry if that sounds facile or if it duplicates one of the millions of things you already tried, but perhaps it helps you to focus a bit.

Michael Lee posted Sep 28, 2012 08:08:44

Thanks for the reply Paul. I have a question. The server should be set to Basic auth. So why is it doing NTLM? Is that the problem?

Michael Lee posted Oct 01, 2012 12:33:11

I think I may have found the problem. Our IIS was mis-configured. There are about 10 security settings that can each be set in about 10,000 different places in IIS. It looks like what was happening was Basic auth AND NTLM were BOTH turned on. What happens is, if you send credentials to NTLM, it says you DON'T need credentials.
The NTLM mechanism is different than my method above. I think it is ticket based, so it expects that in the header. Anyway, I turned off NTLM and just used basic auth. That 'appears' to be working. It looks like it's just retrieving the WSDL. Now I'm getting this though ;)

Michael Lee posted Oct 01, 2012 14:21:20

The problem is I can't even put in the URL of the web service! I get a 401 right out of the gate! The WSDL is obviously secured. So how do I get around this?

URL wsdlLocation = new URL("http://" + host + "/_vti_bin/lists.asmx?WSDL");
QName qname = new QName("", "GetListItems");
Lists service = (Lists) Lists.create(wsdlLocation, qname);

I'm starting to think no one has done Java -> Sharepoint with Authentication turned on. I see NO working code.

Michael Lee posted Oct 02, 2012 09:36:45

I see TONS of posts similar to mine but none seem to have a solution. This is WAY too complicated! Here is my insanely awful 'design'. This is what we have to deal with. Literally we can set this up on a multitude of Sharepoint servers. Our app is configurable so we can set whatever we want easily, but it has to handle Sharepoint 2007 or 2010, any auth it supports, and the credentials of the user come through our Tomcat server somehow from the user. This is either username/password or, maybe, automagically through the header?! I have no idea. Nothing is working and I don't see any real-world examples that address what seems to be the main real-world scenario. I compile the WSDL on the file system, not the server. It is part of an eclipse project and I use a wsimport script to generate the code. The Sharepoint server can exist 'wherever', so the host, port, etc. are variables. If I pass the URL in for the WSDL (endpoint?) then I get the 401 above BEFORE I can even send credentials for basic auth! Keep in mind I may need other auth methods.
If JDK 1.6 is needed for NTLM2 support then we can dump Tomcat 5.5 as one of the requirements if need be. How has ANYONE gotten this to work?

t field (posted Jan 08, 2013 23:10:05):
So firstly... store the wsdl locally, then override the end point... then use the authenticator class to do the dirty work.

    //twist - java 7 is actually changing this i think
    protected void authenticateWebService(String sharePointWebServiceUrl, BindingProvider port) {
        String username = "blah";
        String password = "pass";
        ((BindingProvider) port).getRequestContext().put(BindingProvider.USERNAME_PROPERTY, username);
        ((BindingProvider) port).getRequestContext().put(BindingProvider.PASSWORD_PROPERTY, password);
        ((BindingProvider) port).getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, sharePointWebServiceUrl);
        MyAuthenticator myAuthenticator = new MyAuthenticator();
        myAuthenticator.setMyUsername("" + getUsername());
        myAuthenticator.setMyPass("" + getPassword());
        AuthCacheValue.setAuthCache(new AuthCacheImpl());
        Authenticator.setDefault(myAuthenticator);
    }

    // somewhere else
    class MyAuthenticator extends Authenticator {
        private String myUsername;
        private String myPass;

        public PasswordAuthentication getPasswordAuthentication() {
            log.debug("Feeding username and password for " + getRequestingScheme() + " " + getUsername());
            String pass = getMyPass() == null ? "" : getMyPass();
            return (new PasswordAuthentication(getMyUsername(), pass.toCharArray()));
        }

        public String getMyUsername() { return myUsername; }
        public void setMyUsername(String myUsername) { this.myUsername = myUsername; }
        public String getMyPass() { return myPass; }
        public void setMyPass(String myPass) { this.myPass = myPass; }
    }

I agree.
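For reference, the Basic scheme this thread keeps toggling between is nothing more than a base64-encoded "user:password" pair in a request header (RFC 7617); NTLM is a multi-step challenge/response and cannot be faked this way. A minimal stdlib-Python sketch of the header a client should be sending for Basic auth - the credentials here are made up for illustration:

```python
import base64

def basic_auth_header(user, password):
    # RFC 7617: the Authorization value is "Basic " + base64("user:password")
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return "Basic " + token

# Hypothetical credentials for illustration only.
header = basic_auth_header("DOMAIN\\mlee", "secret")
print("Authorization:", header)
```

If the server is genuinely configured for Basic auth only, sending this header on the very first request (preemptive authentication) also avoids the extra 401 round trip described above.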
subject: Constantly getting a 401 with Java Sharepoint
http://www.coderanch.com/t/593486/Web-Services/java/Constantly-Java-Sharepoint
Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 1.4
- Component/s: contrib - DataImportHandler
- Labels: None

Description:

    <document>
        <entity processor="MailEntityProcessor"
                user="somebody@gmail.com"
                password="something"
                host="imap.gmail.com"
                protocol="imaps"/>
    </document>

The below is the list of all configuration available:

Required
---------
user
pwd
protocol (only "imaps" supported now)
host

Optional
---------

    import javax.mail.Folder;
    import javax.mail.SearchTerm;

    clz implements MailEntityProcessor.CustomFilter() {
        public SearchTerm getCustomSearch(Folder folder);
    }

processAttachement - defaults to true

The below are the indexed fields.

Thanks for this Preetam, looks great! A few suggestions:
- Use the Lucene code style – you can get a codestyle for Eclipse/Idea from
- Let us use the Java variable naming convention for the fields, e.g. sent_date becomes sentDate
- I don't think we need the sent_date_display, people can always format the date and display as they want
- All the attributes for the entity processor should be templatized, e.g. user="${dataimporter.request.user}" and so on. You'd need to use context.getVariableResolver().replaceTokens(attr)
- The Profile class looks unnecessary. The values can be stored directly as private variables
- Attachment names can be another multi-valued field
- Exceptions while connecting must be propagated so that the users know why the connection is failing.
- For delta imports, we can just provide an olderThan and newerThan syntax. That should be enough
- Streaming is recommended instead of calling folder.getMessages(). We can use getMessages(int start, int end) and the batchSize can be a configurable parameter with some sane default.

Support for recursive folders will be awesome.

I agree with all the comments... Will incorporate them soon... Most of the features are implemented now.
Test cases also updated.
- recursion supported
- folders can be selected/excluded by a list of comma-separated patterns
- mails can be fetched since a predefined receive date/time
- custom filters can be plugged in
- batching supported

TODO
- currently the testbed needs to be set up manually. Create folders in testcase setup().
- support POP3
- any reviews/feedback/cleanup

Attaching all the dependency jars as an attachment so that one does not have to search for them. Maybe it should be integrated through ant-maven tasks or maven directly.

Looks good. A few observations:
- the init must call super.init()
- Right before returning nextRow(), call super.applyTransformer(row)
- Returning null signals end of rows. Close any connections or do cleanup
- 'exclude' and 'include' should either allow for escaping the comma (between multiple regexes) or just take one regex for the time being
- For fetchMailsSince use the format yyyy-MM-dd HH:mm:ss. There is already an instance DataImporter.DATE_TIME_FORMAT

Would it make more sense for DIH to farm out its content acquisition to a library like Droids? Then, we could have real crawling, etc. all through a pluggable connector framework.

> Would it make more sense for DIH to farm out its content acquisition to a library like Droids

It would be great. It should be possible to have a DroidEntityProcessor one day.

Regarding the comma-separated list of patterns: Folder names won't contain commas usually. The regex which will contain commas is for limiting the number of occurrences, which also does not seem to be very useful in restricting folder names. Can we leave it as it is till the need arises? If not, what would be a good escape character or replacement for the comma? This is a trivial thing. Other suggestions are really important.

Thanks for comments and feedback Noble and Shalin. Attached is the latest patch which calls init() as well as applyTransformer(). Receives fetchTimeSince in yyyy-MM-dd HH:mm:ss format.
The exclude/include pattern is still comma-separated. Cleanup is already being handled in FolderIterator when it learns that all folders have been exhausted. Could not attach the dependency jars (13MB). Single part or multi part with smaller size both fail...

Updated date format.

MailEntityProcessor and its dependencies must be kept in one place – either in WEB-INF/lib or $solr_home/lib. We can't keep just the MailEntityProcessor in the war because it won't be able to load the dependencies from $solr_home/lib (due to the classloader being different), and asking the user to drop the dependencies into WEB-INF/lib does not sound good. It is impractical to keep all these dependencies in the solr war itself because most users will not need this functionality. I guess this needs to go into a separate contrib area. Thoughts? PS: a contrib for a contrib, cool!

How about a new contrib called 'dih-ext'? So all the future DIH enhancements which require external dependencies can go here (like a TikaEntityProcessor).

- Brought patch in sync with trunk
- Created a 'lib' directory inside contrib/dataimporthandler which will have the mail and activation jars
- Created an 'extras' directory inside src which will hold DIH components that have extra dependencies
- Added ant targets to work on the extras in DIH build.xml

TODO:
- Need to find out the licenses for the additional dependencies
- Need to add info about these dependencies into LICENSE.txt and NOTICE.txt
- Test some more (perhaps index my gmail account?) and create a demo?
- Add info to the wiki
- Commit

One question, what is the uniqueKey that we should use when indexing emails? I couldn't figure it out so I removed the uniqueKey from my schema to try this out.

FWIW: "Message-ID" while common is not mandatory (see sec 3.6 and sec 3.6.4 of RFCs #2822 and #5322)

> FWIW: "Message-ID" while common is not mandatory (see sec 3.6 and sec 3.6.4 of RFCs #2822 and #5322)

In practice you cannot rely on the "Message-ID" to be unique.
Most modern mail servers do a good job making sure each value is unique, but some old MS mail servers sent the same message ID for every message!

> One question, what is the uniqueKey that we should use when indexing emails?

The "Message-ID" can be emitted by the EntityProcessor; it can be left to the discretion of the user whether to use that as a uniqueKey or not.

Changes
- Added messageId as another field
- Added another core to example-DIH for indexing mails. When the example target is run, it copies over the tika libs, mail.jar, activation.jar and extras.jar into the example/example-DIH/solr/mail/lib directory.
- Added a maven pom template for the extras jar
- Updated maven related targets in the main build.xml for the new pom
- Added licenses for mail.jar and activation.jar in LICENSE.txt

I'm not sure what needs to be added to NOTICE.txt, can anybody help?

To run this:
- Apply this patch
- Create a directory called lib inside contrib/dataimporthandler
- Download and add mail.jar and activation.jar in the above directory
- Update example/example-DIH/solr/mail/conf/data-config.xml with your mail server and login details
- Run ant clean example
- cd example
- java -Dsolr.solr.home=./example-DIH/solr -jar start.jar
- Hit

I'll let people try this out before committing this in a day or two. This will probably need some more enhancements which can be done through additional issues. Some that I can think of are:
- Pluggable CustomFilter implementations
- Making fields/methods inside MailEntityProcessor protected so functionality can be enhanced/overridden
- Attachments are stored as two fields, attachment and attachmentNames – a way to associate one with the other. I recall some discussion on the LocalSolr issue about something similar for multiple lat/long pairs.
- Enhance the example configuration to be able to run a mailing list search service out-of-the-box

Updated NOTICE.txt and LICENSE.txt with the license information given at the following:
-
-

I'll commit this shortly.
Committed revision 764601. Thanks Preetam!

A few changes in this patch:
- Made the CustomFilter interface static
- Removed the logRow method. LogTransformer can be used if needed
- logConfig first checks if info level is enabled or not

I'll commit shortly.

Committed revision 764691.

Bulk close for Solr 1.4

Rough cut version. Tested with sample mails from my gmail account.

TODO
--------

USAGE
----------
For each mail it creates a document with the following attributes:

    // Created fields
    // single valued
    "subject" "from" "sent_date" "sent_date_display" "X_Mailer"
    // multi valued
    "all_to" "flags" "content" "Attachement"
    // flag values
    "answered" "deleted" "draft" "flagged" "recent" "seen"

COMPILE
-------------
Dependencies:
- JavaMail API jar
- Activation jar
- Tika and its dependent jars

How should we go about adding these dependencies?
https://issues.apache.org/jira/browse/SOLR-934
OpenGL GLSL Screen space shadows few issues - KKTHXBYE replied to KKTHXBYE's topic in Graphics and GPU Programming
Well, the main problem is with unprojecting the fragment - I'm not sure if I do that correctly. Second thing is I do the shadow test entirely in the fragment shader; I only pass the world vertex position and screen coord to the fragment shader.

OpenGL GLSL Screen space shadows few issues - KKTHXBYE posted a topic in Graphics and GPU Programming
So, the algorithm looks like this:
Use an FBO
Clear depth and color buffers
Write depth
Stretch the FBO depth texture to screen size and compare that with the final scene

The GLSL algo looks like this:
Project the light position and vertex position to screen space coords, then move them to 0..1 space
Compute the projected vertex-to-light vector
Then by a defined number of samples go from the projected vertex position to the projected light position:
- Get the depth from the depth texture
- Unproject this given texture coord and depth value * 2.0 - 1.0 using the inverse of the (model*view)*projection matrix
Find the closest point on the line (world_vertex_pos, world_light_pos) to the unprojected point; if it's less than 0.0001 away then I say the ray hit something and the original fragment is in shadow.

Now I forgot a few things, so I'll have to ask. In the vertex shader I do something like this:

    vertexClip.x = dp43(MVP1, Vpos);
    vertexClip.y = dp43(MVP2, Vpos);
    vertexClip.z = dp43(MVP3, Vpos);
    vertexClip.w = dp43(MVP4, Vpos);
    scrcoord = vec3(vertexClip.x, vertexClip.y, vertexClip.z);

Where

    float dp43(vec4 matrow, vec3 p)
    {
        return ( (matrow.x*p.x) + (matrow.y*p.y) + (matrow.z*p.z) + matrow.w );
    }

It looks like I don't have to divide scrcoord by the vertexClip.w component, because when I do something like this at the end of the shader

    gl_Position = vertexClip;

the fragments are located where they should be...
I pass scrcoord to the fragment shader, then do * 0.5 + 0.5 to know from which position of the depth tex I start to unproject values. So scrcoord should be in -1..1 space, right?

Another thing is with unprojecting a screen coord to a 3D position. So far I use this formula: get the texel depth, do * 2.0 - 1.0 for all xyz components, then multiply it by the inverse of the (model*view)*projection matrix like that. Not quite sure if this is even correct:

    vec3 unproject(vec3 op)
    {
        vec3 outpos;
        outpos.x = dp43(imvp1, op);
        outpos.y = dp43(imvp2, op);
        outpos.z = dp43(imvp3, op);
        return outpos;
    }

And the last question is about ray sampling. I'm pretty sure it will skip some pixels, making shadowed fragments unshadowed... Need to fix that somehow too, but for now I have no clue...

    vec3 act_tex_pos = fcoord + projected_ldir * sample_step * float( i );

I checked the depth tex for values and they're right.

Vert shader

Conversion from lat-long to Cartesian coordinates' result is different from the real location - KKTHXBYE replied to nhungntc21's topic in Math and Physics
Wasn't it that the 0.0 angle was represented by half of the image? So zero Cartesian coords mean it's in the center of the image. Since you have 180 deg of movement in both horizontal directions and 90 in both vertical dirs, it's pretty straightforward - all you have to do is define the rectangle sides.

screen space shadows - KKTHXBYE replied to KKTHXBYE's topic in Graphics and GPU Programming
So in other words, for each frame write the depth buffer to a texture, then draw the scene with a light/shadow pass and then sample the ray from each fragment towards the light, right? ;)

screen space shadows - KKTHXBYE posted a topic in Graphics and GPU Programming
So how exactly do you do this kind of effect? What do you need to draw and what to compare with? Didn't find any good tutorial for that. Do you have to store each fragment position in textures then compare it within the fragment shader, or what?
cheers

DX11 Gap between Sky and Terrain - KKTHXBYE replied to isu diss's topic in Graphics and GPU Programming
It looks like you are using flat terrain, not even scaled to match the planet curve/horizon. You could stretch the sky or use some kind of skybox.

Kinda navmesh with ability to fall down - KKTHXBYE replied to QQemka's topic in Math and Physics
You need to provide more information, for example a picture of your map and what you are really trying to achieve. My guess is that you do what? I don't know, draw the scene from top level and process height data? Once again, too little info to help you.

C++ std::string c_str and unsigned char unix sockets write and read - KKTHXBYE replied to KKTHXBYE's topic in General and Gameplay Programming
Ok, I get it, I pass data without caring about anything, but +1 why? Edit: I get it, null terminated string, but it's not necessary for me now since I have my own termination standard lel

C++ std::string c_str and unsigned char unix sockets write and read - KKTHXBYE posted a topic in General and Gameplay Programming
I'm trying to send text through sockets (using the TCP protocol). Anyway, I use the write and read routines, let's say like this:

    int wb = write(sockfd, cmd.c_str(), cmd.length());

Where cmd is a std::string. As far as I understand, c_str() is some char type and I treated it always like a char * array. However, I don't really think that's the proper type, because I don't even think there are negative ASCII chars. Anyway, I need to pass unsigned chars because I need to send binary data too (as text, lol, for simplification). So do I need to cast that c_str() thing to an unsigned char buffer?
And can I use code like this?

    int cmdlen = TextLength( txt );
    unsigned char * pdata = new unsigned char[ cmdlen ];
    memcpy(&pdata[0], txt.c_str(), sizeof ( unsigned char ) * cmdlen);

OpenGL Framebuffer - KKTHXBYE replied to mike44's topic in Graphics and GPU Programming
I bet you are recreating some arrays that you constantly pass to GPU memory; without code we can't help.

C++ Finish std::thread from another thread - KKTHXBYE replied to KKTHXBYE's topic in General and Gameplay Programming
I removed the binary flag and the text file saves properly. I use GetText on every SaveToFile call because I always produce a continuous str from the std vector anyway; don't ask me why, that's the approach I chose.

C++ Finish std::thread from another thread - KKTHXBYE replied to KKTHXBYE's topic in General and Gameplay Programming
As you said, ofstream doesn't guarantee to be thread safe. I tried this code, and it doesn't even save any text at all; all I see is blank nothing, but the length of the file is set (I see just spaces). I wonder why that is.

    void SaveToFile(AnsiString fname)
    {
        pc = GetText();
        FILE* f = fopen(fname.c_str(),"wb+");
        int len = pc.length();
        char * buff = new char[ len ];
        memcpy(buff, pc.c_str(), sizeof(char) * len);
        fwrite(&buff[0], sizeof ( char ) * len,1,f);
        fclose(f);
    }

C++ Finish std::thread from another thread - KKTHXBYE replied to KKTHXBYE's topic in General and Gameplay Programming
Ok, but first of all I'll have to check out how to use the select function. It seems even when it worked in the past I'm getting errors there, and I know that it should write, so that's the big question here. It may be something with the saving function, so I'll post that:

    typedef std::string AnsiString;

    #ifndef WINDOWS_CMP
    struct TStringList
    #endif
    {
        int Count;
        std::vector<AnsiString> Strings;

        void Add(AnsiString text)
        {
            AnsiString p = text;
            Strings.push_back(text);
            Count = Count + 1;
        }

        AnsiString GetText()
        {
            AnsiString res = "";
            int i;
            for (i=0; i < Count; i++)
                res = res + Strings[i] + "\n";
            return res;
        }

        AnsiString pc;

        void
        SaveToFile(AnsiString fname)
        {
            pc = GetText();
            std::ofstream outfile (fname.c_str(), std::ofstream::binary);
            int len = pc.length();
            char * buff = new char[ len ];
            memcpy(buff, pc.c_str(), sizeof(char) * len);
            outfile.write (buff, len);
            outfile.close();
        }
    };

Now here is where I log that in my server class, like this:

    void LOG(AnsiString str)
    {
        pstdout.Add(str);
        pstdout.SaveToFile("/mnt/ext_sdcard/server_log.txt");
    }

and use it like (ofc in a thread):

    LOG("TEXT 1");
    // call some func
    LOG("TEXT 2");
    // do other stuff
    LOG("TEXT 3");

C++ Finish std::thread from another thread - KKTHXBYE replied to KKTHXBYE's topic in General and Gameplay Programming
Anyway, I'm having problems debugging anything within a thread. I thought I could save a log to a file, but instead of writing text I get blank lines. Maybe there's a good way to log from a thread?

C++ Finish std::thread from another thread - KKTHXBYE replied to KKTHXBYE's topic in General and Gameplay Programming
Server->ProcessFrame(); is a blocking while loop in a thread.
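Returning to the unprojection question in the screen-space-shadows posts above: the usual pitfall is the perspective divide. Going from clip space to NDC needs a divide by w, and going back from NDC to a 3D position needs a divide by the w that comes out of the inverse-matrix multiply. A pure-Python sketch with a made-up perspective matrix (not the poster's actual MVP) that round-trips a point to show this:

```python
# Pure-Python 4x4 matrix helpers; assumes row-major matrices and column vectors.

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_inv(m):
    # Gauss-Jordan elimination on an augmented [m | I] matrix.
    n = 4
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))  # partial pivoting
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0.0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

# A simple perspective matrix (near=1, far=10, unit frustum) -- invented for the demo.
n_, f_ = 1.0, 10.0
P = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, -(f_ + n_) / (f_ - n_), -2 * f_ * n_ / (f_ - n_)],
     [0, 0, -1, 0]]

world = [0.5, 0.5, -5.0, 1.0]            # a point in view space
clip = mat_vec(P, world)                 # projection
ndc = [c / clip[3] for c in clip[:3]]    # forward perspective divide

# Unproject: multiply (ndc, 1) by P^-1, then divide by the resulting w.
q = mat_vec(mat_inv(P), ndc + [1.0])
recovered = [x / q[3] for x in q[:3]]
print(recovered)  # approximately [0.5, 0.5, -5.0]
```

Note that the `unproject` in the post above has no divide by the resulting w component, which is one plausible reason its output disagrees with the depth texture.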
https://www.gamedev.net/profile/182943-kkthxbye/
My results page displays a set number of results per page and allows users to jump to any page they want, as well as prev, next etc. This part is working fine.

I am now trying to improve the script by highlighting search terms that appear on the results page. My code is as follows:

    if(!isset($search_string)) {
        $search_string = $_POST['search_string'];
    }

    function callback($buffer) {
        global $search_string;
        // surround search items with highlight class
        return (ereg_replace($search_string, "<span class='highlight'>$search_string</span>", $buffer));
    }

    ob_start("callback");
    // Code to display results and pagination goes here
    ...............................
    ob_end_flush();

This works fine on the first page; results show OK with the search term highlighted. But when I click next page it takes an age to load, and all I get where the results should be displayed is the actual source code. The results are there but so is all the code! I am at a loss as to what to do and how to fix it. Any help well appreciated, thanks.
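One thing worth noting about the snippet above: ereg_replace treats its first argument as a regular expression, so any metacharacter a user types into the search box becomes part of the pattern. For illustration (in Python rather than PHP, with invented names), here is the same highlight-by-replacement idea with the user's term escaped first so it is always matched literally:

```python
import re

def highlight(text, term):
    # Escape the user-supplied term so regex metacharacters are literal,
    # then wrap each case-insensitive match in a highlight span.
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    return pattern.sub(lambda m: "<span class='highlight'>" + m.group(0) + "</span>", text)

print(highlight("Results for widgets: Widgets on sale", "widgets"))
```

Using the matched text (m.group(0)) instead of re-inserting the raw term also preserves the original capitalization of each hit.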
http://www.webmasterworld.com/forum88/6811.htm
Extract current stock price using web scraping in Python

In this tutorial, we will learn about extracting current stock prices from Yahoo Finance using Python.

Installing the libraries

Initially, there are some libraries we need to install. Navigate to your command prompt and type the following lines separately.

    pip install requests
    pip install beautifulsoup4

After the following libraries have been installed on your PC, you can import them into the code.

Required modules

Here we use the two most powerful libraries: requests and bs4.
- The requests module is a Python library that allows you to send HTTP requests.
- Beautiful Soup is a Python package for pulling data out of HTML and XML files.

    import requests
    from bs4 import BeautifulSoup

Scraping the website

Creating a list of URLs from the Yahoo Finance website: now, let us make a list of URLs in the variable 'urls'. Here, I have made a list of Google, Amazon, Netflix, Primoris Services Corporation, and Apple stocks.

    urls = ['' , '' , '' , '' , '']

Later, we have to loop over the list. With the help of the requests module, we can access the response data, and by using bs4 we can extract data from the LXML. Once you have visited the website, you see that the title of the stock will be in the <h1> element. We need to scrape the h1 element for the title of the stock. When we inspect the stock price, we find a div class and a span class. So after scraping the data, we store it in the current_price variable.
for url in urls:
    headers = {'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36'}
    html_page = requests.get(url, headers=headers)
    soup = BeautifulSoup(html_page.content, 'lxml')
    header_info = soup.find_all("div", id='quote-header-info')[0]
    # extracting the h1 element as title name
    title = header_info.find("h1").get_text()
    # extracting the current price according to the class and corresponding elements
    current_price = header_info.find('div', class_='My(6px) Pos(r) smartphone_Mt(6px)').find('span', class_='Trsdu(0.3s) Fw(b) Fz(36px) Mb(-4px) D(ib)').get_text()
    print('Current price of '+title+'is : '+current_price)

To sum up, the intention of this program is to know the current price of your favorite stocks. You can also add more URLs to the list.

Output

This is how the output looks after running the program.

    Current price of Alphabet Inc. (GOOG)is : 2,838.42
    Current price of Amazon.com, Inc. (AMZN)is : 3,469.15
    Current price of Netflix, Inc. (NFLX)is : 598.72
    Current price of Primoris Services Corporation (PRIM)is : 25.93
    Current price of Apple Inc. (AAPL)is : 148.97
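The find/get_text pattern above needs network access and third-party packages. To see the extraction idea in isolation, here is a stdlib-only sketch that pulls an h1 title out of a static HTML snippet with html.parser; the markup is invented for the demo, roughly mimicking the quote-header-info block described in the tutorial:

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text of every <h1> element, roughly what
    soup.find('h1').get_text() does for the first one."""
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True
            self.titles.append("")

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        if self.in_h1:
            self.titles[-1] += data

# Hypothetical markup standing in for the scraped page.
html = "<div id='quote-header-info'><h1>Apple Inc. (AAPL)</h1><span>148.97</span></div>"
parser = TitleExtractor()
parser.feed(html)
print(parser.titles[0])  # -> Apple Inc. (AAPL)
```

BeautifulSoup does the same traversal with far less ceremony, which is why the tutorial reaches for it; the sketch just shows there is no magic behind find/get_text.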
https://www.codespeedy.com/extract-current-stock-price-using-web-scraping-in-python/
I'll begin by warning that I'm very, very new to Python. I haven't even read the reference manual yet. I do like the idea of documentation strings. However, I like neither a nor b. a looks horrible for long strings, b seems funny: why should a lone string at the beginning of the function be different from a lone string anywhere else? I'd do it like this, instead: def foo(a, b): doc: "here comes the description" A new keyword (could be description instead of doc, for all I care) adds the extra bit of hint to the reader. Another thing (you're probably better off ignoring this): I didn't see anything about assertations, yet (perhaps because I didn't read the docs yet). If they're missing, I'd like to propose: assert condition # abort program if condition is false assert condition, str # ditto, but include str (an expression # having string type) in the diagnostic In both cases, it would be nice to have a string representation of the condition in the diagnostic message. I'm not sure whether it is wise to let assertations be caught (except as a command line flag to the interpreter). I'd rather not. An AssertError exception might be preferable, though, if it fits better to Python. I don't know. :-) -- Lars.Wirzenius@helsinki.fi (finger wirzeniu@klaava.helsinki.fi) My name is Ozymandias, king of kings/Look on my works, ye Mighty, and despair!
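Python did later gain an assert statement with exactly this optional-message shape, and failures raise a catchable AssertionError much as the post suggests. A sketch of those semantics in modern Python:

```python
def divide(a, b):
    # Mirrors the proposed `assert condition, str` form: the expression after
    # the comma is evaluated and attached to the diagnostic when the check fails.
    assert b != 0, "b must be nonzero, got %r" % b
    return a / b

print(divide(6, 3))  # -> 2.0

try:
    divide(1, 0)
except AssertionError as e:
    # The failure raises AssertionError, which can be caught like any exception.
    print("caught:", e)
```

As the post hoped, there is also an interpreter flag to strip assertions (running with -O disables them), though the diagnostic does not include the source text of the condition itself.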
https://legacy.python.org/search/hypermail/python-1994q2/0381.html
From: Jaakko Jarvi (jajarvi_at_[hidden])
Date: 2003-12-05 16:12:18

Is this what you have in mind?

    #ifndef BOOST_NO_SFINAE
    // real enable_if
    #else
    template<class T> struct enable_if {
      BOOST_STATIC_ASSERT(false); // enable_if is known not to work
    };
    // same for enable_if_c, lazy_enable_if, ...
    #endif

Jaakko & Jeremiah

On Fri, 5 Dec 2003, Jonathan D. Turkanis wrote:

> "Jaakko Jarvi" <jajarvi_at_[hidden]> wrote in message
> news:Pine.LNX.4.53.0312051058570.9894_at_damogran.osl.iu.edu...
> > > Can we add conditional preprocessor guards around the contents of the
> > > enable_if.hpp header so that the templates are only defined on
> > > supporting compilers?
> >
> > Ok, I'm guarding it with BOOST_NO_SFINAE, then any use of enable_if on
> > a compiler which does not support SFINAE is an error.
> > This is the safe way, (compared to providing dummy definitions that
> > do not give the desired behavior).
> >
> > Jaakko
>
> This could lead to very uninformative compiler errors if someone tries
> to use enable_if on an unsupported compiler. I tried it on VC6 and found
> that if enable_if is used without explicit namespace qualification I
> don't get a single error which clearly states that enable_if is not defined.
>
> This might have been a reason to write enable_if without partial
> specialization, even though no compiler supports SFINAE but not partial
> spec. A static assert could inform users that SFINAE is not available.
>
> The dummies with static asserts might be a good alternative, to avoid
> complicating the main definitions.
>
> Jonathan
https://lists.boost.org/Archives/boost/2003/12/57245.php
    StringBuffer tmpbuf = new StringBuffer();
    FileReader fd = new FileReader(filename);
    BufferedReader in = new BufferedReader(fd);
    String tmp;
    while((tmp = in.readLine())!=null){
        tmpbuf.append(tmp).append(
    }
    fd.close();
    Logger.sendData(1090,"file
    return tmpbuf;
    }

The servlet class directory, public html directory, and file directory are three different directories. The file directory (where you should put the "test.html") depends on the platform and web server. For example, the Apache JServ (Win NT) web server's file directory should be "htdocs/.." (most NT versions' file directory is in the parent directory of the public html folder), while Apache JServ's file directory should be the "root"). You can do a simple test with the following code:

    File file = new File("whatever");
    log(file.getAbsolutePath()

This method only works if you are not running from the servlet. I have tried this. The exception is: "java.io.FileNotFoundExcep
thanks It is unlikely to be in htdocs directory but very likely in the parent directory of htdocs. BTW, If you are using jswdk or Apache jserv, the directory is configurable It is unlikely to be be under "htdocs", possibly be the parent directory of "htdocs". BTW, Most servlet engine allow you to configure the location of file, like Jserv and JSWDK. import javax.servlet.*; import javax.servlet.http.*; import java.io.*; import java.util.*; public class Display extends HttpServlet { RandomAccessFile raf = null; //Initialize global variables public void init(ServletConfig config) throws ServletException { super.init(config); } public void service(HttpServletRequest IOException { try { //try to read in a file to parse raf = new RandomAccessFile("c:\\jsdk String str = null; while ((str = raf.readLine()) != null) { System.out.println(str); } } catch(IOException io) { //I have IOException here System.out.println("Error: } } } I tested this example, this program reading the particular file and printing on server side. thanks.
https://www.experts-exchange.com/questions/10340987/Servlet-how-to-read-a-file-from-servlet.html
CC-MAIN-2018-17
refinedweb
485
59.6
Note: This guide was written for Phoenix 0.10. Parts of it may no longer work if you are using a newer version. If you are using a newer version of Phoenix, check out the updated blog post.

Let's build a JSON API that serves a list of contacts. We'll be writing it using Elixir and Phoenix 0.10. Phoenix is a framework written in Elixir that aims to make writing fast, low latency web applications as enjoyable as possible. This will not go through installing Elixir or Phoenix. See the Phoenix Guides to get started.

Why Elixir and Phoenix?

Erlang is a Ferrari wrapped in the sheet metal of an old beater. It has immense power, but to many people, it looks ugly. It has been used by WhatsApp to handle billions of connections, but many people struggle with the unfamiliar syntax and lack of tooling. Elixir fixes that. It is built on top of Erlang, but has a beautiful and enjoyable syntax, with tooling like mix to help build, test and work with applications efficiently. Phoenix builds on top of Elixir to create very low latency web applications, in an environment that is still enjoyable. Blazing fast applications and enjoyable development environments are no longer mutually exclusive. Elixir and Phoenix give you both. Response times in Phoenix are often measured in microseconds instead of milliseconds.

Now that we've discussed why we might want to build something with this framework, let's build something!

Writing the test

See getting started on the Phoenix website to see how to create a new app called HelloPhoenix. We'll be using Phoenix 0.10.0 for this exercise. Now that you have your Phoenix app set up, let's start by writing a test. Let's create a file at test/controllers/contact_controller_test.exs
Let’s create a file at test/controllers/contact_controller_test.exs defmodule HelloPhoenix.ContactControllerTest do use ExUnit.Case, async: false use Plug.Test alias HelloPhoenix.Contact alias HelloPhoenix.Repo alias Ecto.Adapters.SQL setup do SQL.begin_test_transaction(Repo) on_exit fn -> SQL.rollback_test_transaction(Repo) end end test "/index returns a list of contacts" do contacts_as_json = %Contact{name: "Gumbo", phone: "(801) 555-5555"} |> Repo.insert |> List.wrap |> Poison.encode! response = conn(:get, "/api/contacts") |> send_request assert response.status == 200 assert response.resp_body == contacts_as_json end defp send_request(conn) do conn |> put_private(:plug_skip_csrf_protection, true) |> HelloPhoenix.Endpoint.call([]) end end We write a setup function to wrap our Ecto calls in a transaction that will ensure that our database is always empty when we start our tests. The test itself does what you would expect. use Plug.Test gives us access to the conn/2 function for creating test connections. In our test we insert a new Contact, wrap it in a list and then encode it. After that we create a new connection and send the request. We assert that the response was successful and that the body contains a list of contacts encoded as JSON. Run mix test and we’ll see the error HelloPhoenix.Contact.__struct__/0 is undefined, cannot expand struct HelloPhoenix.Contact. This means we haven’t yet created our model. Let’s use Ecto for hooking up to a Postgres database. Creating our databases Ecto uses a repository for saving and retrieving data from a database. Phoenix already comes with a repo set up and a default configuration. Make sure your Postgres username and password are correct in config/dev.exs and config/test.exs. Let’s see what new mix tasks we get from Ecto by running mix -h | grep ecto. You’ll see a number of tasks you can use. For now let’s create the dev and test databases. After that we can add our first model. 
```shell
# This will create your dev database
$ mix ecto.create

# This will create your test database
$ env MIX_ENV=test mix ecto.create
```

Adding the Contact model

Let’s add a schema for Contact at web/models/contact.ex:

```elixir
defmodule HelloPhoenix.Contact do
  use Ecto.Model

  schema "contacts" do
    field :name
    field :phone

    timestamps
  end
end
```

Next we’ll create a migration with mix ecto.gen.migration create_contacts. In the newly generated migration, write this:

```elixir
defmodule HelloPhoenix.Repo.Migrations.CreateContacts do
  use Ecto.Migration

  def change do
    create table(:contacts) do
      add :name
      add :phone

      timestamps
    end
  end
end
```

The default column type for Ecto migrations is :string. To see what else you can do, check out the Ecto.Migration docs. Now run mix ecto.migrate to create the new table, and once more for test: MIX_ENV=test mix ecto.migrate.

Adding the routes and controller

Let’s get to our API endpoint with a route that will look like /api/contacts.

```elixir
# In our web/router.ex
defmodule HelloPhoenix.Router do
  use Phoenix.Router

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/api", HelloPhoenix do
    pipe_through :api
    resources "/contacts", ContactController
  end
end
```

If you’re coming from Rails, you’ll note that /api/contacts.json will result in a 404 Not Found. You are expected to set the appropriate request header instead. In a pinch you can do /api/contacts?format=json, but this is not recommended. The trailing format param was not added, both for performance reasons and because HTTP headers already enable this functionality.

Now, if we run mix test we see that we still need a ContactController.
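Since the :accepts plug negotiates the response format from the Accept header rather than from a trailing .json, a JSON client simply sets that header on its request. Here is a minimal illustration in Python; the localhost:4000 address is Phoenix's default dev server, and the request is only constructed, never sent:

```python
# Build (but do not send) the kind of request a JSON client of this API
# would make; the format is negotiated via the Accept header.
from urllib.request import Request

req = Request(
    "http://localhost:4000/api/contacts",
    headers={"Accept": "application/json"},
)
print(req.get_header("Accept"))  # application/json
```

With curl you would express the same thing as `curl -H "Accept: application/json" http://localhost:4000/api/contacts`.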
```
** (UndefinedFunctionError) undefined function: HelloPhoenix.ContactController.init/1 (module HelloPhoenix.ContactController is not available)
```

Let’s create our controller at web/controllers/contact_controller.ex:

```elixir
defmodule HelloPhoenix.ContactController do
  use HelloPhoenix.Web, :controller

  alias HelloPhoenix.Repo
  alias HelloPhoenix.Contact

  plug :action

  def index(conn, _params) do
    contacts = Repo.all(Contact)
    render conn, contacts: contacts
  end
end
```

First we fetch all the Contacts with Repo.all(Contact). Then we render JSON with Phoenix.Controller.render/2; the function is automatically imported when we call use HelloPhoenix.Web, :controller. Check out web/web.ex to see what else is imported. If we run mix test, our tests won’t pass quite yet:

```
** (UndefinedFunctionError) undefined function: HelloPhoenix.ContactView.render/2 (module HelloPhoenix.ContactView is not available)
```

We need a view to render our JSON.

Rendering our JSON with a view

Views handle how to output our JSON. Right now it’s pretty simple, but in the future this is where we could change what we send based on a user’s permissions, for example. Let’s create a file at web/views/contact_view.ex:

```elixir
defmodule HelloPhoenix.ContactView do
  use HelloPhoenix.Web, :view

  def render("index.json", %{contacts: contacts}) do
    contacts
  end
end
```

This uses pattern matching to bind and then return contacts. Phoenix will automatically encode the list of contacts to JSON. You can use this view function to customize how the JSON is presented, but we’ll cover that in a later post. At this point, when you run mix test, all tests should pass.

Cleanup

Let’s check out HelloPhoenix.Web in web/web.ex to clean up our app a bit more. If we open that file, we see that the controller function already has an alias for HelloPhoenix.Repo:

```elixir
def controller do
  quote do
    # Auto generated - This imports all the macros and functions that a
    # controller needs.
    use Phoenix.Controller

    # Auto inserted - The app was generated with an alias to Repo as a
    # convenience.
    alias HelloPhoenix.Repo

    # This imports the router helpers so you can generate paths like
    # `api_contacts_path(conn)`
    import HelloPhoenix.Router.Helpers
  end
end
```

This means that in your controller you can remove your own alias for HelloPhoenix.Repo.

Let’s use ExUnit.CaseTemplate to clean up our tests a bit. In test/test_helper.exs:

```elixir
# Add this above `ExUnit.start`
defmodule HelloPhoenix.Case do
  use ExUnit.CaseTemplate

  alias Ecto.Adapters.SQL
  alias HelloPhoenix.Repo

  setup do
    SQL.begin_test_transaction(Repo)
    on_exit fn -> SQL.rollback_test_transaction(Repo) end
  end

  using do
    quote do
      alias HelloPhoenix.Repo
      alias HelloPhoenix.Contact
      use Plug.Test

      # Remember to change this from `defp` to `def` or it can't be used in
      # your tests.
      def send_request(conn) do
        conn
        |> put_private(:plug_skip_csrf_protection, true)
        |> HelloPhoenix.Endpoint.call([])
      end
    end
  end
end
```

Adding code to using makes those functions and aliases available in every test that uses this case. That lets us remove send_request/1 and the other aliases from our test and replace them with use HelloPhoenix.Case:

```elixir
defmodule HelloPhoenix.ContactControllerTest do
  use HelloPhoenix.Case, async: false

  # We removed the other aliases since they're already included in
  # `HelloPhoenix.Case`. We also removed the `setup` macro.

  test "/index returns a list of contacts" do
    contacts_as_json =
      %Contact{name: "Gumbo", phone: "(801) 555-5555"}
      |> Repo.insert
      |> List.wrap
      |> Poison.encode!

    response = conn(:get, "/api/contacts") |> send_request

    assert response.status == 200
    assert response.resp_body == contacts_as_json
  end

  # We also removed the function definition for `send_request/1`
end
```

That’s a wrap

Now you’ve seen how to create and test a Phoenix JSON API. We’ve also learned how to clean up our files and make it easier to reuse our modules in other controllers and tests by using HelloPhoenix.Web and ExUnit.CaseTemplate.
You can now deploy this app to Heroku with the Elixir buildpack.
https://thoughtbot.com/blog/testing-a-phoenix-elixir-json-api
I’m going to let you in on a secret. You have to promise you won’t share it with anyone. OK? Here it is... The best coders write the least amount of code.

Sounds a little counterintuitive, doesn’t it? But it's not. Now, an experienced coder might be writing 3D engines, which are obviously many more lines of code than someone writing a menu. What I'm getting at is that if both a junior and a senior developer solve the same problem, the senior developer will have written less code. The reason is that the senior developer has learned how to be lazy. They’ve learned how to code efficiently and how to leverage old code by making it reusable. In this article I'm going to share with you some of the tips and tricks I've picked up over the years.

Not every line of code is unique

There is a common mantra in the coding world: write reusable code. You see it in job ads, tutorials, coding articles (like this one), and so on. There is good reason for this. The more you can reuse, the less you’ll have to write every time. This saves time and money, which is always a good thing.

However, for all this talk about making sure code is reusable, it can actually be a difficult concept for new coders to wrap their heads around. Inexperienced coders typically go one of two ways on this: either they look at their code and think everything they’ve done is a one-off situation and none of it will be needed again, or they go overboard and try to make everything reusable. It's the first scenario that we’ll tackle first.

It's very easy to look at a large block of code and say to yourself, “oh, this code is unique, I’ll never need to use this again”, and you’d probably be right. However, the trick to being lazy is to break the code down into smaller chunks and look again. Look for the little things that seem simple but need to be done all the time. Maybe they only take a line or two of code, but these small things add up over time.
One thing that is always an issue in Flash is making sure that text is on whole pixels. If your TextField is not on a whole pixel, then the text won’t be rendered correctly.

```actionscript
myTextField.x = 12;
// instead of
myTextField.x = 12.2;
```

To fix this problem you could write this:

```actionscript
myTextField.x = int(12.7);
```

Pretty simple, except that you’d have to cast the value as an integer every time you wanted to place text. The reusable way is to create a base class for your TextFields and to use that every time you need some text:

```actionscript
public class WholePixelText extends TextField {

    public function WholePixelText() {
        super();
    }

    public override function set x(value:Number):void {
        super.x = int(value);
    }

    public override function set y(value:Number):void {
        super.y = int(value);
    }
}
```

Now instead of writing new TextField(); you would write new WholePixelText(); and never again worry that your text could be on a half pixel. No matter what we set the x and y properties to, they will always be truncated to whole numbers. We have just saved ourselves countless seconds and keystrokes from typing int(xVal) over and over again. Not only that, we’ve eliminated a failure point by ensuring that our text will always be placed properly.

Another way to reduce the amount of code you write is to save important functions. Math functions are something I always have difficulty remembering, so I’ve gotten into the habit of saving important methods for later. For example, how many times have you needed a random number between two integers?
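As an aside, the setter-override trick above isn't Flash-specific. Here is the same idea sketched in Python; the class name mirrors the AS3 example, but everything else is my own illustration, not from the article:

```python
# Sketch of the WholePixelText idea in Python: the x/y setters truncate
# incoming values so the field can never sit on a fractional coordinate.
class WholePixelText:
    def __init__(self):
        self._x = 0
        self._y = 0

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = int(value)  # truncate, as int() does in the AS3 version

    @property
    def y(self):
        return self._y

    @y.setter
    def y(self, value):
        self._y = int(value)

field = WholePixelText()
field.x = 12.7
print(field.x)  # 12
```

Callers assign plain numbers as before; the class quietly guarantees the invariant, which is exactly the point of pushing the fix into a base class.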
You could try to remember this:

```actionscript
var myRand:int = Math.random() * (max - min) + min;
```

or you could write this once and then add a static Utils class that contains this:

```actionscript
public static function randomInt(min:int = 0, max:int = 1):int {
    return Math.random() * (max - min) + min;
}

// Then the next time you need a random int,
// this is all you need to remember:
var myRand:int = Utils.randomInt(min, max);
```

You could also include functions for getting the angle between two points or converting degrees to radians and back again. These are important functions that you’ll find yourself using over and over again. No need to write them from scratch every time. Embrace the lazy: write once, use everywhere.

It's also important to be able to identify where things look very different graphically but in reality share a lot of the same underlying code. A great example of this is buttons. Buttons can look very different, but deep down they all have some common elements. They all have hit areas and rollover effects. A great way to handle this is to write a common base class for all the buttons you use. This way you can eliminate some of the repetitive tasks associated with creating buttons. Here’s an example of a basic button base class:

```actionscript
public class ButtonBase extends Sprite {

    public function ButtonBase() {
        setupHit();
        this.buttonMode = true;
        this.mouseEnabled = true;
        this.mouseChildren = false;
        this.addEventListener(MouseEvent.ROLL_OVER, onOver, false, 0, true);
        this.addEventListener(MouseEvent.ROLL_OUT, onOut, false, 0, true);
    }

    protected function setupHit():void {
        this.graphics.clear();
        this.graphics.beginFill(0xffffff, 0);
        this.graphics.drawRect(0, 0, this.width, this.height);
        this.graphics.endFill();
    }

    protected function onOver(e:Event):void {
    }

    protected function onOut(e:Event):void {
    }
}
```

Then to customise it for your particular button you would just override the onOver and onOut functions to create your own effects.
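The base-class pattern here is language-agnostic: do the repetitive wiring once, and let subclasses override only the hooks they care about. A rough sketch of the same shape in Python; the class and method names echo the AS3 example but are my own invention, and the event plumbing is deliberately simplified:

```python
# Sketch of the ButtonBase pattern outside Flash: the base class owns the
# repetitive wiring, and subclasses only override the hover hooks.
class ButtonBase:
    def __init__(self):
        # Wire the common listeners once; bound-method lookup means
        # subclass overrides are picked up automatically.
        self._listeners = {"roll_over": self.on_over, "roll_out": self.on_out}

    def dispatch(self, event):
        handler = self._listeners.get(event)
        if handler:
            handler(event)

    def on_over(self, event):
        pass  # override in subclasses

    def on_out(self, event):
        pass  # override in subclasses


class GlowButton(ButtonBase):
    def on_over(self, event):
        self.glowing = True

    def on_out(self, event):
        self.glowing = False


btn = GlowButton()
btn.dispatch("roll_over")
print(btn.glowing)  # True
```

Each new button subclass gets the listener setup for free, which is the same saving the AS3 version buys you with hit areas and buttonMode.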
This saves a lot of time, since you don’t have to worry about the hit area, making sure buttonMode is enabled, or adding listeners for roll-overs.

Don't make everything reusable

The other mistake, which I mentioned before, that a lot of coders make is trying to make everything they do reusable. In theory this is a great idea, but in reality you can end up doing more work than if you had just written it from scratch when you needed it, which goes against our lazy ideals.

A great example of this happening is the scroll bar. Scroll bars come in all kinds of shapes and sizes, so it makes sense to try to write scroll bar code that you can reuse again and again. The scroll bar trap comes from trying to make a scroll bar that does too much or is too hard to use. Most scroll bars are either vertical or horizontal. My wild guesstimate is that 98 per cent of all scroll bars are used in one of these two configurations, with a heavy leaning towards vertical. However, when writing code like this, the temptation for some is to make a scroll bar that can be used at any angle, be it 90, 180 or 127 degrees. 90 and 180 are no-brainers, but it doesn’t really make sense to spend hours or days coding something that will be used so rarely. The lazy coder we are striving to be would never do this. The lazy coder doesn’t worry about crazy edge cases. They code for the vast majority of cases and worry about the one-offs when they come.

Hopefully with these tips and ideas you can start thinking the lazy way. Save those keystrokes and stop wasting valuable time writing extra code. Keep all these things in mind and you too can be The Lazy Coder.
https://www.creativebloq.com/netmag/coding-efficiency-beginners-write-reusable-code-1126513
#22006 (defect, new): CBC tries to use the system's BLAS at runtime, which creates an error

Opened 3 years ago. Last modified 19 months ago.

Description (last modified):

Here is the problem: if I build cbc on a machine where libopenblas-base is not installed, cbc works fine. Now, if I install libopenblas-base, I get:

```
from sage.numerical.backends.coin_backend import CoinBackend
ImportError: /usr/lib/libblas.so.3: undefined symbol: sgemv_thread_n
```

I do not understand why cbc tries to use the system's BLAS at runtime while Sage already provides one.

Remark: if libopenblas-dev is installed when cbc is built, there is no problem, even if I remove libopenblas-base at runtime.

Change History (4)

comment:1 (changed 3 years ago): no text.
comment:2 (changed 19 months ago): Priority changed from major to blocker.
comment:3 (changed 19 months ago): Milestone changed from sage-7.5 to sage-8.1.
comment:4 (changed 19 months ago): Priority changed from blocker to major: optional packages aren't blockers.
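A quick way to see what the dynamic loader would resolve on a given machine is to ask ctypes. This diagnostic sketch is not part of the ticket; the library names are guesses about a typical Debian-style setup and may resolve to None elsewhere:

```python
# Diagnostic sketch: ask the loader which shared libraries it can find.
# On the reporter's machine, "blas" would presumably resolve to
# /usr/lib/libblas.so.3; the names below are assumptions.
from ctypes.util import find_library

for name in ("blas", "openblas", "c"):
    print(name, "->", find_library(name))
```

Running `ldd` on the cbc binary (or on the coin_backend shared object) would similarly show which libblas it is actually linked against.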
https://trac.sagemath.org/ticket/22006