Build an activity feed with React

A basic understanding of React and Node.js is needed to follow this tutorial.

Applications can generate a lot of events while they're running. However, most of the time, the only way to know what's going on is by looking at the logs or running queries against the database. It would be nice to let users see what is going on in an easy way, so why not build an activity feed that shows, in realtime, every change made to the models of the application?

In this tutorial we are going to build a simple Node.js REST API with Express and Mongoose to work with generic measurements, say, temperatures. Every time a database record is modified (created/updated/deleted), an event will be triggered to a channel in realtime using Pusher. In the frontend, those events will be shown in an activity feed made with React. This is how the final application will look:

This tutorial assumes prior knowledge of Node.js and React. We will integrate Pusher into a Node.js API, create React components, and hook them up with Pusher. However, since Pusher is so easy to use together with Node.js and React, you might feel that in this tutorial we spend most of our time setting things up in the backend and creating the React components.

You'll need to have access to a MongoDB database. If you're new to MongoDB, you might find this documentation on how to install it handy.

The source code of the final version of the application is available on Github.

Application Structure

The project has the following structure:

```
|-- models
|   |-- measure.js
|-- public
|   |-- css
|   |-- images
|   |-- js
|       |-- app.js
|       |-- event.js
|       |-- events.js
|       |-- header.js
|-- routes
|   |-- api.js
|   |-- index.js
|-- views
|   |-- index.ejs
|-- package.json
|-- server.js
```

- The models directory contains the Mongoose schema used to interact with the database.
- The public directory contains the CSS and image files, as well as the Javascript (React) files that will be used on the main web page of the app.
- The routes directory contains the server's API endpoints and the route to serve the main page of the app.
- The views directory contains the EJS template for the main page of the app.
- In the root directory, we can find the package.json file with the project's dependencies and the file for the Express server.

Setting up Pusher

Create a free account at Pusher. When you first log in, you'll be asked to enter some configuration options: enter a name, choose React as your frontend tech, and Node.js as your backend tech. This will give you some sample code to get you started. It won't lock you into a specific set of technologies; you can always change them. With Pusher, you can use any combination of libraries.

Then go to the App Keys tab to copy your App ID, Key, and Secret credentials; we'll need them later.

Setting up the application

First, add a default package.json configuration file with:

```
npm init -y
```

For running the server, we'll need Express, Pusher, Mongoose, and a few other dependencies (React itself is loaded in the browser via script tags). Let's add them with:

```
npm install --save express ejs body-parser path pusher mongoose
```

Here is the dependencies section of the package.json file, in case a future version of a dependency breaks the code:

```
{
  ...
  "dependencies": {
    "body-parser": "^1.15.2",
    "ejs": "^2.5.2",
    "express": "^4.14.0",
    "mongoose": "^4.6.4",
    "path": "^0.12.7",
    "pusher": "^1.5.0"
  }
}
```

The Node.js Backend

The backend is a standard Express app with Mongoose to interact with the database.
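An aside on the version pins above: the caret (^) ranges allow compatible upgrades only. As a rough illustration (a simplified check, not npm's full semver algorithm), ^1.15.2 accepts any 1.x.y at or above 1.15.2 but rejects 2.0.0:

```javascript
// Simplified caret-range check: ^MAJOR.MINOR.PATCH accepts versions with the
// same major number that are >= the base version. (npm's real semver rules
// have extra cases for 0.x versions; this sketch ignores them.)
function satisfiesCaret(version, base) {
  var v = version.split('.').map(Number);
  var b = base.split('.').map(Number);
  if (v[0] !== b[0]) return false; // major must match
  if (v[1] !== b[1]) return v[1] > b[1];
  return v[2] >= b[2];
}

console.log(satisfiesCaret('1.16.0', '1.15.2')); // true: compatible upgrade
console.log(satisfiesCaret('2.0.0', '1.15.2'));  // false: breaking major bump
```

This is why the exact dependencies section is reproduced here: a future minor release that still satisfies the caret range could, in principle, break the tutorial code.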
In the server.js file, you can find the configuration for Express:

```javascript
var app = express();

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(express.static(path.join(__dirname, 'public')));

app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');
```

The routes exposed by the server are organized in two different files:

```javascript
app.use('/', index);
app.use('/api', api);
```

Then, the app will connect to the database and start the web server on success:

```javascript
mongoose.connect('mongodb://localhost/temperatures');

var db = mongoose.connection;
db.on('error', console.error.bind(console, 'Connection Error:'));
db.once('open', function () {
  app.listen(3000, function () {
    console.log('Node server running on port 3000');
  });
});
```

However, the interesting part is in the file routes/api.js. First, the Pusher object is created, passing a configuration object with the App ID, the key, and the secret for the Pusher app:

```javascript
var pusher = new Pusher({
  appId  : process.env.PUSHER_APP_ID,
  key    : process.env.PUSHER_APP_KEY,
  secret : process.env.PUSHER_APP_SECRET,
  encrypted : true,
});
```

A Pusher event is triggered every time a database record is created/updated/deleted, with that record attached, so we can show it in an activity feed. Here's the definition of our API's REST endpoints. Notice how the event is triggered using pusher.trigger after the database operation is performed successfully:

```javascript
/* CREATE */
router.post('/new', function (req, res) {
  Measure.create({
    measure: req.body.measure,
    unit: req.body.unit,
    insertedAt: Date.now(),
  }, function (err, measure) {
    if (err) {
      ...
    } else {
      pusher.trigger(
        channel,
        'created',
        {
          name: 'created',
          id: measure._id,
          date: measure.insertedAt,
          measure: measure.measure,
          unit: measure.unit,
        }
      );
      res.status(200).json(measure);
    }
  });
});

router.route('/:id')
  /* UPDATE */
  .put((req, res) => {
    Measure.findById(req.params.id, function (err, measure) {
      if (err) {
        ...
      } else if (measure) {
        measure.updatedAt = Date.now();
        measure.measure = req.body.measure;
        measure.unit = req.body.unit;
        measure.save(function () {
          pusher.trigger(
            channel,
            'updated',
            {
              name: 'updated',
              id: measure._id,
              date: measure.updatedAt,
              measure: measure.measure,
              unit: measure.unit,
            }
          );
          res.status(200).json(measure);
        });
      } else {
        ...
      }
    });
  })
  /* DELETE */
  .delete((req, res) => {
    Measure.findById(req.params.id, function (err, measure) {
      if (err) {
        ...
      } else if (measure) {
        measure.remove(function () {
          pusher.trigger(
            channel,
            'deleted',
            {
              name: 'deleted',
              id: measure._id,
              date: measure.updatedAt ? measure.updatedAt : measure.insertedAt,
              measure: measure.measure,
              unit: measure.unit,
            }
          );
          res.status(200).json(measure);
        });
      } else {
        ...
      }
    });
  });
```

Measure is the Mongoose schema used to access the database. You can find its definition in the models/measure.js file:

```javascript
var measureSchema = new Schema({
  measure: { type: Number },
  insertedAt: { type: Date },
  updatedAt: { type: Date },
  unit: { type: String },
});
```

In the frontend, we'll listen to these events to update the state of the client.

React + Pusher

React thinks of the UI as a set of components: you simply update a component's state, and then React renders a new UI based on this new state, updating the DOM for you in the most efficient way.

The app's UI will be organized into three components: a header (Header), a container for events (Events), and a component for each event (Event).

The template for the index page is pretty simple.
It just contains references to the CSS files, a div element where the UI will be rendered, the Pusher app key (passed from the server), and references to all the Javascript files the application uses:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Realtime Activity Feed with Pusher + React</title>
  <link rel="stylesheet" href="/css/all-the-things.css">
  <link rel="stylesheet" href="/css/style.css">
</head>
<body class="blue-gradient-background">
  <div id="app"></div>

  <!-- React -->
  <script src=""></script>
  <script src=""></script>
  <script src=""></script>

  <!-- Libs -->
  <script src=""></script>
  <script src=""></script>

  <!-- Pusher Config -->
  <script>
    var PUSHER_APP_KEY = '<%= pusher_app_key %>';
  </script>

  <!-- App/Components -->
  <script type="text/babel" src="/js/header.js"></script>
  <script type="text/babel" src="/js/event.js"></script>
  <script type="text/babel" src="/js/events.js"></script>
  <script type="text/babel" src="/js/app.js"></script>
</body>
</html>
```

The application will be rendered in the div element with the ID app. The file public/js/app.js is the starting point for our React app:

```javascript
var App = React.createClass({
  ...
});

ReactDOM.render(<App />, document.getElementById("app"));
```

Inside the App class, first, we define our state as an array of events:

```javascript
var App = React.createClass({
  getInitialState: function() {
    return {
      events: []
    };
  },
  ...
});
```

Then, we use the componentWillMount method, which is invoked once immediately before the initial rendering occurs, to set up Pusher:

```javascript
var App = React.createClass({
  ...
  componentWillMount: function() {
    this.pusher = new Pusher(PUSHER_APP_KEY, {
      encrypted: true,
    });
    this.channel = this.pusher.subscribe('events_to_be_shown');
  },
  ...
});
```

We subscribe to the channel's events in the componentDidMount method, and unsubscribe from all of them and from the channel in the componentWillUnmount method:

```javascript
var App = React.createClass({
  ...
  componentDidMount() {
    this.channel.bind('created', this.updateEvents);
    this.channel.bind('updated', this.updateEvents);
    this.channel.bind('deleted', this.updateEvents);
  },

  componentWillUnmount() {
    this.channel.unbind();
    this.pusher.unsubscribe(this.channel);
  },
  ...
});
```

The updateEvents function updates the state of the component so the UI can be re-rendered. Notice how the new event is prepended to the existing array of events. Since React works best with immutable objects, we create a copy of that array and then update the copy:

```javascript
var App = React.createClass({
  ...
  updateEvents: function(data) {
    var newArray = this.state.events.slice(0);
    newArray.unshift(data);
    this.setState({
      events: newArray,
    });
  },
  ...
});
```

Finally, the render method shows the top-level components of our app, Header and Events:

```javascript
var App = React.createClass({
  ...
  render() {
    return (
      <div>
        <Header />
        <Events events={this.state.events} />
      </div>
    );
  }
});
```

public/js/header.js is a simple component without state or properties that only renders the HTML for the page's header. The Events component (public/js/events.js) takes the array of events and creates an array of Event components:

```javascript
var Events = React.createClass({
  render: function() {
    var ReactCSSTransitionGroup = React.addons.CSSTransitionGroup;
    var eventsLength = this.props.events.length;
    var eventsMapped = this.props.events.map(function (evt, index) {
      const key = eventsLength - index;
      return <Event event={evt} key={key} />
    });

    return <section className={'blue-gradient-background intro-splash splash'}>
      <div className={'container center-all-container'}>
        <h1 className={'white light splash-title'}>
          Realtime Activity Feed with Pusher + React
        </h1>
        <ReactCSSTransitionGroup component="ul" className="evts"
            transitionName="evt-transition"
            transitionEnterTimeout={500}
            transitionLeaveTimeout={500}>
          {eventsMapped}
        </ReactCSSTransitionGroup>
      </div>
    </section>;
  }
});
```

There are two important things in this code.
First, React requires every component in a collection to have a unique identifier, defined by the key property. This helps it know when elements are added or removed. Since new elements are prepended instead of appended, we can't give the first element the index 0 as its key: that would only work the first time an element is added (for subsequently added elements, there would already be an element with key 0). Therefore, keys are assigned this way:

```javascript
var key = eventsLength - index;
```

The second thing is that the insertion of a new event is animated with the ReactCSSTransitionGroup add-on component, which wraps the elements you want to animate. By default, it renders a span to wrap them, but since we're going to work with li elements, we specify the wrapper tag ul with the component property. className becomes a property of the rendered component, as does any other property that doesn't belong to ReactCSSTransitionGroup. transitionName is the prefix used to identify the CSS classes that perform the animation.
You can find them in the file public/css/style.css:

```css
.evt-transition-enter {
  opacity: 0.01;
}
.evt-transition-enter.evt-transition-enter-active {
  opacity: 1;
  transition: opacity 500ms ease-in;
}
.evt-transition-leave {
  opacity: 1;
}
.evt-transition-leave.evt-transition-leave-active {
  opacity: 0.01;
  transition: opacity 500ms ease-in;
}
```

Finally, the Event component (public/js/event.js), using Moment.js to format the date, renders the event in the following way:

```javascript
var Event = React.createClass({
  render: function() {
    var name = this.props.event.name;
    var id = this.props.event.id;
    var date = moment(this.props.event.date).fromNow();
    var measure = this.props.event.measure;
    var unit = this.props.event.unit;

    return (
      <li className={'evt'}>
        <div className={'evt-name'}>{name}:</div>
        <div className={'evt-id'}>{id}</div>
        <div className={'evt-date'}>{date}</div>
        <div className={'evt-measure'}>{measure}°{unit}</div>
      </li>
    );
  }
});
```

To run the server, execute the server.js file using the following command:

```
PUSHER_APP_ID=<YOUR PUSHER APP ID> PUSHER_APP_KEY=<YOUR PUSHER APP KEY> PUSHER_APP_SECRET=<YOUR PUSHER APP SECRET> node server.js
```

To test the whole app, you can use something that calls the API endpoints with a JSON payload, like curl or Postman. Or, if you only want to test the frontend part with Pusher, you can use the Pusher Debug Console on your dashboard.

Conclusion

In this tutorial, we saw how to integrate Pusher into a Node.js backend and a React frontend. As you can see, it is easy to add Pusher to your app and start adding new features. Remember that if you get stuck, you can find the final version of this code on Github or contact us with your questions.

Further reading

November 29, 2016 by Esteban Herrera
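The key scheme used in the Events component (keys counting down from the array length) can be checked in isolation. This stand-alone sketch mimics how keys stay stable as new events are prepended, using the same immutable-prepend pattern as updateEvents:

```javascript
// Simulate the tutorial's key assignment: the newest event is prepended,
// and each event's key is (length - index), so an existing event keeps
// the same key as more events arrive.
function assignKeys(events) {
  var len = events.length;
  return events.map(function (evt, index) {
    return { event: evt, key: len - index };
  });
}

var events = ['created'];
console.log(assignKeys(events)); // 'created' gets key 1

events = ['updated'].concat(events); // immutable prepend, as in updateEvents
console.log(assignKeys(events)); // 'updated' gets key 2; 'created' keeps key 1
```

If keys were simply the array index, the prepended event would steal key 0 from the previous first event, and React would confuse the two elements during the transition animation.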
https://pusher.com/tutorials/activity-feed-react
In today's Programming Praxis exercise, our goal is to calculate the minimum total amount of coins involved in a payment (including change) for a currency with a given set of coin denominations. Let's get started, shall we?

```haskell
import Data.List
import Math.Combinat
```

First we search all transactions involving one coin, then all transactions involving two coins, etc. We exclude all options where the payment and the change include the same coin, since that would make both coins useless. We return the first option for which the change equals the difference between the payment and the amount required.

```haskell
pay :: (Eq a, Num a) => a -> [a] -> ([a], [a])
pay total coins = head [(p,c) | n <- [1..], pc <- [1..n], p <- combine pc coins
                              , c <- combine (n - pc) (coins \\ p), sum p - total == sum c]
```

Some tests to see if everything is working properly:

```haskell
main :: IO ()
main = do print $ pay 17 [1,3,7,31,153] == ([31], [7,7])
          print $ pay 18 [1,10] == ([10,10],[1,1])
```

Tags: bonsai, code, coins, denominations, floupia, Haskell, kata, praxis, programming

February 23, 2013 at 1:14 am | I tried to run your code, but don't have the Math.Combinat library. What happens if you try to pay 11 floupia with coins of 3 and 6 floupia?

February 23, 2013 at 1:29 am | @programmingpraxis: My solution uses the same basic idea as yours, so it will likewise go into an infinite loop when there is no solution. Since this wasn't part of the exercise, I didn't bother to catch those cases. My first thought for solving it would be to do something like:
- remove all denominations for which a denomination exists that is a divisor (so in your example remove the 6, since 3 is a divisor);
- if only one number is left, a solution is impossible if the remaining denomination isn't a divisor of the total amount;
- I think that if more than one denomination is left, a solution should be possible. I might be wrong, though.
February 23, 2013 at 4:19 am | I posit, though I am not certain, that a feasible solution exists only if the greatest common divisor of the various denominations of coins evenly divides the target price. Which is pretty much the same as what you said. I'm glad to see we are thinking along the same lines.

February 23, 2013 at 12:01 pm | @programmingpraxis: Not quite the same, since some further thinking reveals that my solution doesn't work with denominations 6 and 9. I believe your gcd approach does work correctly.

February 23, 2013 at 1:51 pm | I wasn't entirely sure. Thanks for the confirmation.
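The gcd criterion posited in the comment thread can be sketched as a feasibility check (shown in JavaScript rather than the post's Haskell, purely for illustration): a payment with change is possible only if the gcd of the denominations divides the total.

```javascript
// Feasibility check from the comment thread: a total is payable (allowing
// change) iff gcd(denominations) evenly divides the total.
function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }

function payable(total, coins) {
  var g = coins.reduce(gcd);
  return total % g === 0;
}

console.log(payable(17, [1, 3, 7, 31, 153])); // true: gcd is 1
console.log(payable(11, [3, 6]));             // false: gcd 3 does not divide 11
console.log(payable(15, [6, 9]));             // true: gcd 3 divides 15
```

Note this only decides feasibility; the search in the post is still needed to find the minimal number of coins. Guarding the search with a check like this would avoid the infinite loop mentioned in the comments.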
https://bonsaicode.wordpress.com/2013/02/22/programming-praxis-floupia/
Recoverable fragments of Emacs src/editfns.c (from a git blame view; most of the surrounding code, including the field and line-motion functions, was lost in extraction):

```c
static Lisp_Object
buildmark (EMACS_INT charpos, EMACS_INT bytepos)
{
  register Lisp_Object mark;
  mark = Fmake_marker ();
  set_marker_both (mark, Qnil, charpos, bytepos);
  return mark;
}

EMACS_INT
clip_to_bounds (EMACS_INT lower, EMACS_INT num, EMACS_INT upper)
{
  if (num < lower)
    return lower;
  else if (num > upper)
    return upper;
  else
    return num;
}

DEFUN ("goto-char", Fgoto_char, Sgoto_char, 1, 1, "NGoto char: ",
       doc: /* Set point to POSITION, a number or marker.
Beginning of buffer is position (point-min), end is (point-max).
The return value is POSITION.  */)
  (register Lisp_Object position)
```
https://emba.gnu.org/emacs/emacs/-/blame/d3760c4b0ad02dd26225f6886611940ab281bd27/src/editfns.c
For a given list, I would like my output to have the line "Deleting node with value ..." for each node. My destructor function works for a 2-element list, but for a 3-element list it deletes a certain node more than once, and for a list of any size greater than 3, I get an infinite loop. I try tracing through the code, but I am not sure what is going on. Any suggestions? Thanks.

```cpp
#include <iostream>
#include <cassert>
#include "lists.h"
using namespace std;

ListNode::ListNode (int k)
{
    myValue = k;
    myNext = 0;
}

ListNode::ListNode (int k, ListNode* ptr)
{
    myValue = k;
    myNext = ptr;
}

ListNode::~ListNode ()
{
    cout << "Deleting node with value " << myValue << endl;
    for (ListNode* p = this; p != 0; ) {
        p = p->myNext;
        delete p;
    }
}
```
https://www.daniweb.com/programming/software-development/threads/104689/destructor-function-for-list-structure
Ok, I have been trying to write a program that will read in a list of positive integers (including zero) and display some statistics regarding the integers. This is what I have to do: first, ask the user to enter an integer to search for. Next, search the array to determine if the given integer is in the array. If the integer is not found, display a message stating that the integer is not in the list. If the integer is found, then display the position number of where you found the integer. If the integer happens to be in the array more than once, then you only need to tell the first position number where you found it. After performing the search, ask the user if he/she wants to search for more integers. Continue searching for numbers until the user answers "N".

And this is what I have so far:

```java
import java.util.*;

public class ListStats
{
    //This is for the display part
    public static void main(String[] args)
    {
        Scanner kbd = new Scanner(System.in);
        int numsList = 0;
        System.out.println("The amount of numbers read are: ");
        System.out.println("The smallest number found was: " + );
    }

    //This is for getting the numbers from the user and putting them in an array
    public static int getNum(long[] num)
    {
        for (int i = 0; i < num.length; i++)
        {
            Scanner kbd = new Scanner(System.in);
            int nums;
            System.out.println("Enter a list of numbers ranging from 0 to 100 and put them from smallest to largest; ");
            nums = kbd.nextInt();
            long[] list = new long[nums];
        }
        return nums;
    }

    //This is for finding the largest value
    public static int findMax(int[] nums)
    {
        int i = nums[0];
        for (int j = 1; j < nums.length; j++)
        {
            if (nums[j] > i)
            {
                i = nums[j];
            }
        }
        return i;
    }

    //This is for finding the smallest value
    public static int findMin(int[] nums)
    {
        int a = nums[0];
        for (int j = 1; j < nums.length; j++)
        {
            if (nums[j] < a)
            {
                a = nums[a];
            }
        }
        return a;
    }

    //This is for finding a specified value
    public static int findVal(int[] nums, int val)
    {
        for (int b = 0; b < nums.length; b++)
        {
            if (nums[b] == val)
            {
                return b;
            }
        }
        return -1;
    }
}
```

So my questions are: how do I put all the methods in the display part? And how do I finish the 6th part? And what am I doing wrong? Thanks
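The search step the assignment describes is a plain linear search: report the first matching position, or a sentinel when the value is absent. The logic can be sketched stand-alone (in JavaScript here rather than the assignment's Java, purely to illustrate the algorithm the question's findVal method is attempting):

```javascript
// Linear search: return the position of the FIRST occurrence of val,
// or -1 if val does not appear at all. Returning on the first match is
// what guarantees "only the first position" is reported for duplicates.
function findVal(nums, val) {
  for (var i = 0; i < nums.length; i++) {
    if (nums[i] === val) return i;
  }
  return -1;
}

console.log(findVal([2, 5, 5, 9], 5)); // 1 (first match only)
console.log(findVal([2, 5, 9], 7));    // -1 (not found)
```

The "search until the user answers N" part is then just a loop around this function that prints either the position or a not-found message each iteration.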
https://www.daniweb.com/programming/software-development/threads/328198/quick-java-questions
How to use APIs with Python

Why won't Codecademy see the 'kittens' variable I've created? I've copied/pasted the exact line 4 that was suggested. Here's the full code block:

```python
from urllib2 import urlopen

# Open for reading on line 4!
kittens = urlopen('')
response = kittens.read()
body = response[559:1000]

# Add your 'print' statement here!
print body
```

But I keep getting an error that says, "Oops, try again. Did you create a new variable called kittens?" But as you can see, in the 3rd line of actual code, there's the kittens variable being created. And when I paste that code into my Python 2.7 interpreter, it works.
https://www.codecademy.com/forum_questions/554e3bc1e39efec0ba000039
Java as a CS Introductory Language? 913

First, tulare queries: "I'm currently a student at a small university whose CS students are required to attend two terms of Java programming courses before moving on to other OOP languages. My personal feeling is that Java is clunky, ugly, and runs much too slow on most platforms. The official CS department position is that Java is pure OOP (as opposed to C, for example), and furthermore, Java is extremely widely used at this time. Now, I may be stirring things up a little here, but just because everyone does something one way does not necessarily mean that something is being done the best way. What I'm asking is to hear reasoned opinions on the following statements: - Java is a fine development language, and it will help me as a programmer to learn it. - I'm right. Java is a fad, not worth much more than the Windows OS in terms of quality, and my CS faculty is doing me a disservice by cramming it down my throat. - There's a little truth in both the above statements."

While on IRC, I was discussing this issue a bit with other editors, and Chris DiBona happened to have some thoughts on the matter; his words follow:

When Cliff mentioned that a Java in education story was going to be posted, I asked to weigh in on the topic. I will not talk about the suitability of using a non-free language, as I'm certain that will be discussed in the comments and is not a trivial issue. I don't think that Java, or any Object Oriented language, would be suitable for an AP Computer Science class. I don't think it serves the needs of students looking to fully understand the internal workings of a computer, which is in my mind what an AP Computer Science course should be directed towards. C is a language that has been designed to be very close to the hardware, and its idiosyncrasies and power reflect that. Through this relationship, C reflects the realities of the hardware your programs run on.
Memory management, low-level process and I/O control are all things that a computer scientist should understand at a very low level, to better aid in future programming and debugging, no matter which language is chosen or inflicted upon said scientist. In contrast, Java has been designed to take such concerns away from the programmer. Memory management? Low-level I/O? These are not the droids you were looking for... (at least not without an RMI written in another language). That's okay too; that isn't what people want from Java, and it isn't what it was designed to do. And that is exactly my problem with it being applied in a computer science course designed to teach CS fundamentals. In short, since I believe that AP CS courses should focus on the low-level architecture of computing, Java is an inappropriate language for that course. When is Java appropriate? In your college sophomore comparative languages course, or, alternatively, in an OO course or two, but it shouldn't be used as the keystone language for any CS program. Please don't take this as me saying that Java is neither useful nor important in its own (non-free) way, as it clearly is. However, in my opinion, it should be considered an adjunct subject to a serious program in Computer Science.

Re:I took Java last year (Score:3)

You have to be kidding. Steve Michael

The language is secondary (Score:3)

I say, use whatever they are teaching, and if you don't like it then pick up another language. It's a lot easier to learn a programming language than it is to learn how to program. My only concern is trying to shove too much syntax down the throat of first-year students. Full-blown OO languages tend to require a bit more typing than something like C for the trivial projects that students do, which may turn off some people (I know in my high school, the Pascal class lost 3/4 of its slow typers on the second day. Worse, in high school almost everybody types slowly.)
I guess you might want to avoid "bad habit" languages as well, like BASIC and possibly Perl. I'd also shy away from anything that the average Windows user hasn't heard of, like Scheme, Python, Modula II, Ada, or APL. You probably want to choose something with good free or cheap compilers as well. Ada95 may be a nice language, but your school won't be able to afford the licenses for the compilers under Windows. Down that path lies madness. On the other hand, the road to hell is paved with melting snowballs.

Re:The language is secondary (Score:3)

Plus there is the cool factor that comes from knowing the language you are learning is the same one your operating system (and most of the applications you run) is written in. Beginning CS students generally aren't concerned with the fundamentals, primarily because they don't know they are supposed to be concerned with them, so the choice of language makes a big impression on them. A teacher's job is to reach beyond that: teach the students what they need, and other things that they won't realize are valuable until later, when they want to do real programming. Down that path lies madness. On the other hand, the road to hell is paved with melting snowballs.

Teaching language? Python! (Score:2)

Out of all of the other languages I have ever studied [including C, C++, TCL, Perl, PHP, Forth, Scheme], Python is by far the easiest to pick up and use right away. It's something you have to experience yourself to truly appreciate how easy it is to pick up. If I ever had to teach someone how to program, I'd definitely introduce them to Python first. The concepts are easy to grasp without having to sweat all of the small stuff that could otherwise be extremely distracting and only dilute the main ideas.

Is Python ideal for OO? (Score:2)

--

Re:It is a good education language. (Score:2)

Computers are getting faster all the time, but programmers won't unless they can use languages which take care of some of the housekeeping for them.
It is this more pragmatic philosophy which underpins Java, not to mention most other modern languages.

--

Re:Don't teach "real-world" languages (at first)! (Score:2)

We were taught procedural programming in Modula 2 -- because it enforces good practice. We were taught functional programming in Miranda -- because Miranda doesn't let you sneak in procedural paradigms by the back door (although Miranda only really clicked for me after a Lisp course taught by a better lecturer -- Lisp *does* let you sneak in procedural tricks). We were taught OO using Eiffel -- because Eiffel is a pure OO language that doesn't tempt you into non-OO constructs (as C++ might).

This was around 1995, when Java was quite new -- I recall our OO lecturer saying that maybe Java might soon become a suitable language for teaching OO, but that it was too soon to tell. In my limited experience of Java, I suspect that it would make an excellent teaching language (I tried to persuade my IT-teaching girlfriend to use Java as her language for teaching programming -- school politics with regard to installing software on the system put paid to that idea).

At university there were no end of students whingeing at the lack of real-world stuff they were learning: one student memorably complained that the word "Novell" was never mentioned in a complete networks course. Likewise, many wanted to learn real-world languages such as C (which we did, later). Those people were and are wrong. If you have a good grounding in the theory, adapting to the real world is easy. The reverse is not as easy.

--

It all depends what you're trying to learn... (Score:2)

If it's about OO programming, I would go with Java. C++ is a complicated language, and doesn't force OO. In addition, C++ will also distract from learning OO, as the student will have to fiddle around with other unrelated implementation details, such as makefiles, weird linker errors, etc.
If you're learning OO, spending most of your time learning a specific language is rather silly (and C++ can be a very time consuming language to learn). Personally I prefer learning the concepts and transferrable skills. I can apply these in many situations and they will last longer through out my career. The practical stuff I can learn on the go, and I doubt that school will ever be able to teach that sufficiently anyway. Particular languages don't matter much (Score:3) It will use many of the same language features that languages that currently exist have, though, and this is what it is worth learning. Of course, its features which are currently available are available in different languages, so it's important to learn multiple languages. For OOP, I think that Java is a good tool. Possibly Scheme would be better if you wanted to present the full range of possibilities, since there are theoretically significant features that Java lacks (e.g., singleton instances). Of course, in order to be particularly good programmers, people need to know more than just OO concepts, and that means they'll need to learn a language that's good for teaching those concepts. In practice, currently I would suggest C for actual programming, including OOP, unless you need platform-independence or you need libraries that exist only in another language. But I wouldn't want to try *teaching* OOP with C; you really want to have a language where the OO syntax is obvious and explicit. OOP first, procedural next (Score:2) The first real language used in the CS curriculum (the first class uses a made-up language that theoretically has no compiler or interpreter) is Java. The reason is simple: Java is OOP without being obstructive about it (*cough*Smalltalk*cough*), and it's a pretty good language. The theory goes that it's easier for students to learn OOP first, then learn procedural (and functional, etc.) programming. In my experience, this seems well borne out... 
most people I know who learned procedural programming first really struggled (in some cases, still struggle) with OOP. Those who learned OOP first had no real problem picking up procedural. I think a lot of posters here are ignoring the fact that no sane human would teach an entire CS curriculum in one language or even one paradigm. It's also worth noting that we're not talking about teaching a language so much as we are about using a language in the course of teaching computer science concepts. Maybe it's the GT student in me, but I find that an important distinction. I've mostly forgotten Java. I've also forgotten a lot of C specifics, and I can barely remember what little C++ I learned. Smalltalk is something I'm happy to have dropped from memory. I once knew BASIC, I've seen some FORTRAN and some COBOL, and at some point I even knew MOO. I've forgotten most of these languages because I can. When I sit down to read them (as I occasionally do) or to use one (as sometimes happens), it takes me little to no time to get back up to speed and get to work. The reason is simple: I was taught the ideas that form the basis of computer science and of computer programming. The languages were just tools, and still are.

Re:I took Java last year (Score:2)

Other than that, there's absolutely nothing about C++ that is more intuitive than Java!

Re:...Ready...Aim...Fire... (Score:2)

There are already tons of implementations of the JVM and the Java compiler, in case you hadn't noticed. Sun has a very, very extensive language specification published, and anyone is free to implement it, no strings attached. The trick is that to make Java useful for most things, you need the very extensive libraries that are copyrighted by Sun. But this is just the same thing as C#, isn't it? C# is made to bind tightly to the Win32 API set and to .NET. Having the C# language standardized will mean relatively little if all of the APIs that C# code depends on are not standardized.
And Microsoft has *always* laughed at anyone who suggested that they should 'standardize' their APIs. - jon

Heck, I started in BASIC and turned out okay.. (Score:2)

I think smart kids will handle the transitions to more difficult languages okay. Programming in BASIC involved 'taking too many things for granted', perhaps, but every new language you learn should have something new to teach you anyway, or else why bother? - jon

Re:Wrong Direction (Score:4)

I'd argue that unless you understand assembly, you don't fully know how registers work. I haven't had to write any assembly for over 10 years, yet with every single line of code that I write, I'm thankful that I could if I needed to. I'm sure you're a very competent programmer, but empirical evidence from 20 years of coding shows me that without fail, coders that don't know assembly are unable to progress beyond competence into true greatness. Not that greatness is actually needed for 90% of coding tasks, but nonetheless, those with a background in assembly are without fail better coders. As for when it should be learned, I disagree that it should be a first language, but it should be mandatory in any CS course at some point.

Re:It is a good education language. (Score:2)

Many people see it this way, and I can understand it to a certain degree. But you should also consider that learning a programming language to a level where you are REALLY productive and won't regret what you wrote 6 months later takes years. I think I reached this point in Java after 1-2 years (fulltime programming); C++ probably needs much longer - I cannot claim that I reached this point.

Re:Java as a prelude to C++ (Score:2)

The disadvantage is the lack of expressive power resulting from this. Basically Java encourages cut-and-paste programming, which is a maintenance nightmare, a major error source and the root of all evil.
In Java you cannot write something once and then use it for different types, so you start copying the code and modifying it slightly for each type. In C++ you can write it once and use templates. The same problem is code that repeats itself only with slight variations that cannot be expressed with functions. This mostly occurs in regression tests, and can easily be done with macros in C++. In Java you will start to copy & paste... When people need multiple inheritance with several non-abstract classes in Java, they start to implement an interface and copy & paste the code. Sometimes this is also done using a code preprocessor, like some CORBA implementations did. In C++ you just use multiple inheritance.

Re:Wrong Direction (Score:3)

And if tomorrow a new CPU is released that doesn't have a von Neumann architecture? How does the person whose fundamental thinking processes w.r.t. application development were structured by assembly adapt to that? Personally, I would go a step farther. One of the reasons that ordinary human beings have so much trouble using software is that the programmers are far too close to the details of the machine architecture. sPh

CS vs. Software Engineering (Score:2)

If one is teaching the pure theory side of OO, then something like Smalltalk is going to make more sense, perhaps. (Depending on what you want to teach, theory-wise.) C/C++ would be the worst because they are closer to the machine/hardware. If one is teaching for the more practical/engineering side, then C/C++ and/or Java is going to make more sense, depending on your perception of where the job market is going.

Wrong Direction (Score:3)

Couldn't agree more --

Re:Ruby!... (Score:2)

...mainly because Ruby's OO model is dynamic. (Perl's is, too, but as a "first language" it is problematic.)
Much of this "Ask Slashdot" seems a little unfocused: some people are answering with the assumption that a "first language" is a CS major's first programming language, while others are assuming the class will be attended either primarily or partially by those in other disciplines. To me JAVA makes sense for the second case, but not for the first. With a good IDE, beginners can get useful programs up and running with a basic understanding. But it is not a language I would ever choose once I knew the others (OK, in certain cross-platform environments, maybe) for really tough projects. Ruby breaks through this barrier because it works for beginners who really need a glue language without limitations (Python, Perl or Ruby) as well as for people who are going to be pushing the limits of OO before they are through with their careers. But the Ruby book (linked in the parent) demonstrates a problem with using OO for new programmers: which comes first, the OO chicken, or the basic programming egg? How do you teach what OO is without having some basic commands to demonstrate it with? And how do you teach basic commands in an OO language without doing it object-orientedly? I don't think this problem is insurmountable. But it may be more important than choosing a language. (In fact, the object-oriented ZOOs so offensive to Steeltoe may be failed attempts to do this.) The final question which needs to be addressed by people deciding about JAVA in first-year classes (or Ruby for that matter) is: what PRECISELY do you mean by "object oriented"? Perhaps because I have been around so long, I see OO as a dynamic concept. It has changed over time and really only reached its maturity with the publication of Design Patterns. I fully expect it will change still more in the future. Some languages (JAVA, C++ and Python) take a very accurate snapshot of the current thinking on OO and implement it very well.
Other languages (Perl and Ruby) assume that OO will evolve and give you the ability to implement as much of object-orientedness as you'd like. An interesting question is whether aspect-oriented programming (boy, do I hate that name) will become a part of object-oriented programming or whether it will be considered a separate paradigm. Ruby is one of the few languages that implements aspect-oriented concepts (like mix-ins), and it also allows programmers to choose where they want to work on that spectrum. (You can ignore aspect-orientedness, you can use the features offered by the language itself, or you can modify its aspect-oriented features into whatever becomes the next definition of the new paradigm.) All of this makes it an excellent choice for the CS majors starting a basic class which needs an OO language. One drawback with Ruby (which may actually prove a boon to beginning CS majors) is the lack of a large library like CPAN or the C libraries. Although it is growing, the Ruby Archive is nowhere near as comprehensive as CPAN. While this may be occasionally frustrating, it offers CS majors a good way to make a name for themselves. (There's nothing like applying for a job and finding your prospective employer uses a module you wrote. Voila! Instant reference.) All you have to do to make a name in the Ruby community is go to CPAN, find a module which has no counterpart in the Ruby Archive, and port it to the Ruby idiom. Of course, if Ruby fizzles, that still won't get you a job. But at least you can tell some Perl employer you know the module well enough to port it.

Re:Java is a better for later on (Score:2)

I'll agree with that. Procedural programming teaches the basic concepts fairly well - and makes a good starting point for learning OO stuff later on down the road. I started learning how to program in BASIC back when I had a brand-spanking-new TI-99/4A. I was about 5 years old.
All I really got out of it was the ability to really mess with the TRS-80s we had in grade school. Then I started picking up a bit of C. By the time I was headed to college, I had a fairly good grasp of the basics. Pointers and the like were still a bit confusing, but I was at least comfortable with if/else, for, while, and various variable types. I'd also started down the road of *thinking* in terms of procedural language - which I believe is the biggest stumbling block new programmers have - not being able to conceptualize and break down a problem into pieces that can be programmed. Intro to programming in college _was_ C, so that worked out well for me =) Learned a lot more about pointers - a _lot_ more. Left college when the money ran out - started dabbling in HTML and Perl. Eventually got myself mixed up in MySQL and PHP - which led to my current job as webmaster for a small but growing and surprisingly stable startup. Languages I use today (ranked by usage): Perl, PHP ( ::grin:: ). No C, other than side projects of my own. I'm spending a lot of spare time getting up to speed on C++ (mainly for the OO stuff), and have considered Java, but haven't gone there yet. I'm pretty sure my next language will be Python. With a background in C, I picked up PHP and Perl without too much of a problem - I'm sure the same can be said of moving from C++ to Java. I haven't quite figured out where Python fits into all this yet (that's part of what intrigues me about it).

Re:The language is secondary (Score:3)

I wholeheartedly agree. If the language is key, then you are not teaching computer science, you are just teaching people how to program. When you are learning basic computer science concepts like data structures and abstraction, then the language should be one which is designed to teach. Java and C/C++ are not designed as teaching languages. In fact, C/C++ is a horrible teaching language, as there are too many ways a student new to programming could shoot himself in the foot.
Personally, I liked the way the University of Waterloo taught concepts of computer science. They started off with Pascal to teach things like abstraction and basic data structures. When it came time to teach OO concepts, they moved on to Modula-3, which was great: clean, instructive and with little opportunity to shoot yourself in the foot. Moving to other OO languages after this was easy, as one already knew the concepts and only had to learn the syntax. Later, more advanced courses used C, LISP, Prolog and variations on C++ to teach their material. Actually, in some instances, you had a choice of the language so long as it ran on the university servers. I might add that at UW the language was not the subject material, only its facilitator. When a new language was introduced, you were lucky to receive two weeks of introductory instruction on that language, with the first assignment due the following week. Most other times, you were required to learn the bulk of the language yourself. Some argued that Pascal and Modula-3 weren't useful because they weren't used much in the industry. My feeling is that if that is important to you in your first and second years at university, then a technical college and programming courses might be a better way to go rather than computer science. ian.

Re:College and the Workforce (Score:2)

Well, that's been the British practice throughout most of Britain's higher education... and it cost them dearly, historically speaking. A Brit (Perkin) created (by accident, he was looking for artificial quinine) one of the more important inventions in the modern world (the coloured dye, which eventually led directly to plastics (also a coal-tar derivative)), and the British culture of "higher education isn't something someone does for work" pretty much threw that head start right down the drain... the winners in the new chemistry technologies were the Germans, where education WAS considered something for the practical.
In particular, color plating and color photography were German inventions that the Brits might have had a lead on otherwise... (Source: James Burke's Connections) But as I said (and I was talking about American universities, where the competition to get in can be harder at times, 'cause it isn't practically set in stone by some test one takes in the 4th grade... or do the Brits still do that?) -- the balance is what's important. The theory languages are good for teaching good programming and design and all that, but there's no reason that the practical, business-hyped languages should be ignored -- teach BOTH of them and you've got the prime candidates for a fresh-outta-college job. -- You know, you gotta get up real early if you want to get outta bed... (Groucho Marx)

Re:College and the Workforce (Score:2)

If a BALANCED education, in liberal studies as well as the field of industry one intends to specialize in, wasn't important, then college wouldn't be important and the American tech industry would all be based around people who are "Computer Learning Centers" graduates. I'm not talking about higher education being 100% directed to getting the job, but it is almost 100% NECESSARY to get a good job, especially in the IT industry. Just having a head full of theory and languages that one will never use again (and liberal studies along the way) is NOT going to necessarily be useful in the competition to get a decent job (competition that is increasing in today's .com death throes) -- some experience (classroom is usually enough) in practical languages used in the modern world is also important. Companies won't take a "generic c.s. grad" without practical experience in a language that company uses -- they'll only take the "exceptional c.s. grad".
It would be nice if we were all exceptional and all could just study the finer theory of things, but it's not that way -- schools don't get 100% exceptional students (not even the Ivy League), and their curriculum should reflect that and provide means by which their average students are in some ways prepared for getting jobs in a competitive market. Teaching practical programming languages like Java, C++, Python, Perl is a means to that. A Smalltalk-educated student will have a learning curve to learn Java that a company may decide isn't worth paying for when another candidate already has Java experience. They'll only take the Smalltalk one if his overall record is exceptional as well. -- You know, you gotta get up real early if you want to get outta bed... (Groucho Marx)

Re:College and the Workforce (Score:2)

Yes, Java has the ability to "bypass" good OO design (as does C++), but if the teacher makes the programming assignment conditions include not using those procedural cheats, then the student MUST learn the theory too, and walks away with both theory and practical application. -- You know, you gotta get up real early if you want to get outta bed... (Groucho Marx)

Re:College and the Workforce (Score:2)

Remember I wasn't talking about those who "learned Java", but those who learned Java as part of their university c/s program, who will have a slight advantage over those who didn't. Read between the lines, people; I didn't say "Java programmer" meaning "one who only knows Java" -- I was referring to the c/s grad who knew Java as well as all the rest of what normally goes into a c/s degree (as opposed to the one who only had theoretically "correct" languages that aren't used as much in the real world, like Smalltalk or Eiffel). Sheesh. -- You know, you gotta get up real early if you want to get outta bed...
(Groucho Marx)

College and the Workforce (Score:3)

Nice sentiment, but regardless of the "ideals" of education, when one gets out of college, one expects (screw that -- NEEDS) to get a job, and given two straight-A students, one with a lot of theoretical-application languages under his belt, and the other with not so many of those, but having known Java since his freshman year, the recruiter will pick the Java programmer, 'cause it means his company can save money having to train the programmer. And any university with a reputation for letting the education get in the way of future employment for their students is gonna start losing students quickly. It's nice to "learn", but the truth is that since the 1960s and the G.I. Bill, one goes to university because the degree is a requirement for getting a job, not to "learn". And that isn't gonna change anytime in the near future. The theoretical and the practical-for-today's-world should be considered hand in hand. I value the theory I know from my C.S. degree very highly... but I wouldn't have gotten the good job I wanted without having had C++ in college. -- You know, you gotta get up real early if you want to get outta bed... (Groucho Marx)

A Brief History of First-year Languages (Score:2)

Largely due to a misguided OO-fad pressure, university CS departments forgot their reasons against C in the first year and adopted C++. The argument often went "but it's a relevant language, and industry wants it!" Thus academia, like many companies at the time, made an unprepared mass transition to C++ (in the mid-90's more than 67% of departments used C++ in the first year). Thus the pedagogical issues with C were compounded by the added complexities and pitfalls of C++. Worse, virtually none of the new C++ instructors had any clue about how to leverage the OO paradigm whatsoever. At best, they taught "Modula-C", and at worst you got students in senior-level OS classes asking "do I need to write a class for this project?"
Finally, the faculty teaching higher-level classes in these departments generally didn't retrain either. What many CS students don't realize is that a good four-year program isn't just a disconnected collection of classes. The students should be gradually ramped up in their design and implementation skills throughout their program. But the faculty suddenly had a design (non-OO vs. OO language) and language (via gross misunderstanding of C++'s tools) disconnect with the students. Disaster. Beyond all that, there are specific reasons to choose Java over C++ as a first-year language. First: C++ has no object library; Java does. Yes, now after many years, it sort of has the standard template library, but for anyone with experience with languages like Smalltalk or Java, the STL is too little, too late. If you need further convincing, take a look at Brown University's first-year program, and what the students are able to accomplish both from a pedagogical standpoint and from a look-what-I-made! standpoint. The first year is run by Andy van Dam (yes, of Computer Graphics fame) and is a fantastic model for excellent OO instruction. Also check out the University of Virginia's program -- and if you can get the opportunity to hear Jane Prey (on the Board of Directors of the ACM's Computer Science Education SIG) talk about their program, take it! I taught a variety of CS classes over a period of several years, and was (am) very interested in the best techniques available for teaching the mathematics and engineering of our discipline. All of this experience suggests that the introduction of C++ to the first year created a *major* problem for Computer Science -- I'm glad to finally see it phased out for a clearly better language.

Re:It is a good education language. (Score:2)

In scientific computing, speed is paramount. Most people use Fortran. Many would like to get away from Fortran because it's over thirty years old, and has none of the nice features of newer languages.
But for non-CS people, C++ is not something you pick up on a whim. C++ is by far the most complicated language around. The world of scientific computing, I fear, will be stuck with Fortran forever because of its lower learning curve, and because the bastards have stopped teaching C++ altogether. Java, while nicer in some regards, does not easily lead to picking up C++ (mostly, I think, due to templates and the STL). Taking an intro Java class and using Fortran works, though, because the language constructs needed for Fortran are few, and certainly contained as a (small) subset of Java. So I think teaching Java is a horrible turn of events for we mere mortals. It has too many variants, and is not (and will not be) standardized. And most importantly, most places only have one language class. That language should be the one that is most common, and contains enough language elements to allow people to easily transition to most other languages. Going from C++ to Java is easy. Going the other way is nearly impossible. Maybe when more compilers correctly implement the C++ standard the situation will improve. These days it's very hard to compile on different platforms because different vendors implement different subsets of the standard. That, and they have to make STL errors readable. A newbie is not going to sift through a page of horrible-looking errors from a single template mistake, just to find that none of the errors tell him what (or where) he did wrong! --Bob

Re:It is a good education language. (Score:2)

I have to agree. (Though I'd add Lisp and Forth (or PostScript) to the list just to round out the programming-paradigm experience.) For those of you old enough to remember life before OO, remember how the early OO advocates were fond of smarmily remarking that OO required an entirely different way of thinking, and was therefore a big jump from procedural/imperative code? That runs both ways.
Now that the schools are producing people who never wrote a non-trivial program in a non-OO language, I'm increasingly having to work with people who have severe mental limitations when it comes to problem solving. Procedural-only programmers tend to create tight but obscure and hard-to-maintain code. OO-only programmers tend to create overcomplicated, overengineered code. The best programmers I know can handle both design methodologies, and their code tends to be efficient, modular, clear, and easy to maintain. There is no knowledge that is not power. --

My feeling on the matter: (Score:2)

Carleton U. is using Smalltalk to teach OO. (Score:2)

The only systemic failing that Smalltalk has is that contained objects only know about their containers if they are explicitly made aware and passed in a reference at instantiation time. Java is a wannabe. C++ is even worse. The rest of the languages and development environments are left standing on the shore. Use Smalltalk. You can find a free version online. Go try it out. But be warned, you'll be utterly spoiled by it.

How it works at Cornell (Score:2)

Then you take data structures in Java (which ultimately becomes GUI construction techniques in Java, despite how hard they try.) People who have never programmed fail, since they either are A) not fast enough at GUI design or B) cannot comprehend the idea of the linked list. Then you take functional programming in Scheme. People who can't handle recursion and functional thinking fail. Then you take computer architecture, in which you write MIPS assembly and design a CPU. People who at this point still can't understand what a computer _really_ does, fail. From there on you advance through stuff and fewer people fail out. I think it's probably a decent system, since there is a steady drop-out rate, as opposed to people doing computer architecture or Scheme first and everyone jumping ship at once.
The first two Java classes basically get you warmed up in coding and try to get you thinking about data relationships and stuff... things which are very important when doing Scheme and even when designing a CPU. The data structures class, if taught in C or any other language, would have the same purpose and would not delve into the things Java does automatically, like memory management. CS classes are not about learning the language; they're about learning the theory and concepts behind using the language. You eventually do learn the things that are absent in Java, like memory management, but they are taught in a different context, on the hardware level.

Re:Not your father's Java... (Score:2)

So, have they bothered with a 'select'-like statement, or is it still 3000+ threads for a server with 3000 clients? IMHO, Java is NOT industrial strength. It is fine for many things, but for what I want to do, C++ is still it. It will never be as fast as C++. All those nice run-time optimizations being applied to Java work for C++ too, if anybody would bother. Of course, since C++ is generally tons faster, nobody has yet. As for worrying about all that icky memory stuff: I want to worry about it. My programs are faster and better designed for it. I think it would be highly amusing to plop down a Java programmer in an environment where careful memory management was crucial to successful execution. They wouldn't know their heads from their arses. It's possible to do memory management in Java, despite the garbage collector, but it isn't as easy, and nobody feels they have to with the nice, warm, fuzzy garbage collector wrapping them up. I think Java is fine for many things. I've watched its development and maturation with interest. I actually made a good stab at porting JVM beta 1 to my platform (UnixWare). It's just not the wonderful be-all and end-all language you make it out to be. It also makes me extremely nervous that Sun still has such tight control of it.
Re:Wrong Direction (Score:2)

The prime example I have of this is the small midwestern private school I graduated from. I took an AI course where we were supposed to be learning Lisp. After the second week of class, when the professor showed us two syntactically identical lists and said that in one the end parentheses went away and in the other they didn't, but couldn't explain how the machine was smart enough to discern the difference, I dropped it. I can't think of any of the other CompSci professors in the department that I would have expected to be any better, either (I knew one who was still trying to use antique print control codes on a laser printer connected to a Unix system). The problem with this "good weed-out gimmick" is that it would have weeded out the entire faculty too. And the fact is, good quality faculty are hard to come by...

View from an old fart... (Score:2)

We either need to kick all the CS "professors" out of the colleges or smack them in the heads, because all they are generating is sloppy programmers that write sloppy code and use sloppy techniques. The best example I ever witnessed was a CS professor proclaiming that embedded processors would fade away because they weren't powerful enough to run a program that was compiled from an OO language. Whatever... Many of my non-OO C programs fit in less than 2K... and if I need speed I use assembly, a language you have to learn on your own now... and which was my first introduction to programming in college (assembly, Fortran, BASIC, C/C++ was the progression back in the olden days). But then, I work with computers that have a maximum of 64 MB of RAM to use as system RAM and filesystem, and operate at 66-200 MHz, sometimes without an FPU...

Some advantages of Java (Score:3)

As someone who uses many languages in the course of the day, Java included, I might be able to offer some information. This is dead on. I wish that I had learned OOP properly in college.
Learning C and other procedural languages actually hurts you in the early stages, because you have to unlearn tactics. If I were teaching someone OOP in college, I would start with a "pure OOP" language, such as Python or Java, and a book similar to Design Patterns. I sure wish I had learned this way... Design Patterns came out after I graduated, and it pretty much changed the way I thought about OOP overnight. That's not to say that C++ doesn't have its place. It can be fast, and it can be very flexible. However, in an academic environment other priorities are simply more important. Garbage collection is key. Trust me, you don't want to be up all night tracking down a memory leak when all you need to do is implement a certain algorithm. Also extremely important is a free, cross-platform development environment. (No, C++ isn't as strong here as Java or Python, due to library implementation differences.) I consider C++ a very dangerous language to start learning with because it's so easy to slide back into C. Until you get to the point where you can understand what the consequences of that are for your project, it's a giant boobytrap waiting to snare the unwary. This is a very popular but shortsighted viewpoint. Java is stronger than it has ever been in the past. It's still in heavy growth mode, with more libraries and extensions being developed for it than I can keep pace with these days. It's going to continue to evolve for quite some time. As the JVMs continue to improve in performance, and CPUs continue to double in speed, the performance difference between Java and C++ is going to become less and less. It's already at the point where I run large Java applications like Jext (thank you very much, Roman Guy!) on my Pentium III 600 without any noticeable slowness. Java is a very good investment. It's not the best OOP language in every area, but it may be the most well-rounded. It's certainly not going away, and you can get a lot of useful work done with it.
Java seems to enjoy better support from the corporate world than from the open source community. This is largely because Java is so useful to corporations, and they're ready and willing to develop and pay for enterprise-class extensions that most lone hackers would consider boring or overkill. Java isn't a zero-sum game, however, and there is plenty of room for free software to thrive. I'm glad the Apache crew recognize this -- their Xerces XML parser and their servlet engine are excellent, excellent examples of free Java software. There are also a large number of other useful Java libraries and applications out there. The time is ripe for someone to bring Kaffe or a similar free JVM / library up to speed. If you do decide to get into Java, I would recommend you learn Python as well, and then use the embedded scripting language Jython from within your Java apps. It's a killer combination.

Re:Disturbing Trend in Replies... (Score:2)

Da. Stimmt. Very true. You are very right that variables, flow control, functions et al. are what are important. The problem is that most CS departments don't like teaching languages. Therefore they must use a real (read: very powerful/widely used) language (C/C++/Java) for the intro courses, so that later on, when students are doing more real-world things, they already have the language skills to do it. Now, the first programming course I took (in high school) was a semester of C, then a semester of C++. Personally, I think that this is one of the best ways to go. C doesn't have any of the fancy OO stuff. It encourages you to use loops and function calls. It isn't some "pure" language (Smalltalk = pure OO, Lisp = pure functional, etc.), and can therefore be used to teach a variety of styles. Procedural-type programming is the simplest thing for a beginner to understand, but they do need to be introduced to recursion and OO too. C is a fairly solid, usable, simple language and allows such. OK, now for the disclaimer. I said simple.
There are two topics in C that are not simple, and I wish there were a version of C where they were unneeded: pointers and memory management. A lot of people find learning C difficult for the simple reason that pointers and memory management are confusingly complex topics, and very integral to effective use of C.

Disclaimer part 2: I said usable. Another problem with C is that you can't do neat shit with it right away. Beginners want eye candy. They want to do something impressive, without too much difficulty. C is bad at that. No fancy graphics. No easy network code. None of the nifty things that other languages make fairly easy and removed from the hardware. You say that libraries aren't important for the first 3-6 months. I'd revise that to say teaching libraries isn't important for the first 3-6 months. But if you can provide some code that lets your students easily do something cool (they write maze-solving code, you write a graphical frontend for them), then they are more likely to stick with it.

Given that, I think that Java isn't that bad of a choice. Yes, Java is OO, which is a count against it as a teaching language. But it is garbage collected. It also has no pointers. These two things are very, very nice for an intro language. I know that when we started pointers/memory management in HS, we lost a quarter of the class (and they had to take a drop). At the semester (where there was no drop penalty) we lost half again. The lack of these difficulties makes Java a very tempting choice. Also, Java has all of the eye candy one could possibly want, and then some. Yes, it isn't good to teach all of those fancy libraries to beginners, but giving them code that lets them use some of the fancy stuff is good encouragement.

Now, Java is OO. Most people teaching it as an intro language introduce the OO features. This is probably not good. But just because it has OO doesn't mean you have to use the OO.
So there is some magic "public class fooclass {" stuff around all of your code. And instead of "int main() {" you have "public static void main(String[]..." garbage. But you can still pretend that it is functional, or procedural, or whatever. Like C, Java is a hybrid language. So, you say language features are unimportant. Fancy features are unimportant, but the simple features are not.

Oh, and I'm not talking entirely out of my ass. My first CS course in high school was in C, but the intro college course I took was taught in Java. CS 101 [wustl.edu], while it did focus more on OO than I thought healthy, was a very good intro class nevertheless. The labs had a lot of provided code, and then asked you to fill in simple methods for it to work well. Overall, an excellent course, if a bit slow for someone who already knows how to program. I guess it sums up that Java isn't a bad language to teach in, but that you need to teach it correctly (no OO, no libraries) for it to be good.

Shouldn't that be hot languages == jobs? (Score:2)

Teach all of them? (Score:2)

Re:movin' on up (Score:2)

Re:Teach all of them? (Score:2)

A visual language would be best (Score:2)

It took me a while to get my head round OOP in the first place, but working in Windows, when you can instantly show the subclassing of a textbox into a purple textbox (for instance), it's very easy to get the basic concepts across to people in a non-abstract manner. Having got those concepts across, it's then very easy to move on to non-visual languages and apply the concepts you've learnt.
_____

Re:It is a good education language. (Score:2)

Yes, it is, but I question whether schools should make programming too much like driving. Consider this: drivers who understand the workings of their cars are always better drivers than drivers who understand "wheel turn, gas go." Drivers who understand how the car works understand why you don't accelerate the car by flooring the gas pedal. They know why you change your oil periodically.
Their cars last longer and work better. So it is with programmers. Those who know how the machine works know why memory allocation is slow, and why reusing blocks of memory is faster than allocating new ones. Those who understand the preprocessor know why defining a frequently used value or code block will produce faster code than relying only on object fields and functions. People who understand low-level operations recognise good programming practice better than those with experience only in high-level languages, because they've had to. Their code is more reusable, more readable, faster, and less prone to bugs.

Re:Wrong Direction (Score:3)

Assembly programming is like adding single digits. It's very low level. You're learning exactly what happens at the very foundation of all of the things you will go on to learn. High-level programming languages are like calculus: they're high level. The purpose of high-level operations is not to iterate through the low-level operations ad nauseam. Calculus is really just a bunch of addition and subtraction expressed in a very concise manner. Teaching calculus or high-level languages to people who don't have any background in lower-level operations will always produce inferior skills compared with the opposite.

It depends ... (Score:2)

Back when I was an undergrad, different sections of Intro Programming were taught in a variety of languages. Different engineering departments allowed their students to take different languages, but the only section that all of the engineers could take was 2/3 Pascal and 1/3 Fortran. It was definitely a weird combination, but I think it worked out pretty well. Learning two languages right away makes it easier to pick up other languages on your own later. Plus, Fortran was still (and probably is still, though C/C++ are making some headway) the dominant language for engineering/hard sciences.
So anyway, Java might work OK in some of the situations above, but I'm not sure that having one particular language be the dominant one taught in introductory classes is the best option -- what happened with BASIC?

(Score:2)

As far as what is better - Java, C, or C++ - it really depends on your application. I would not write an OS in Java; C or C++ would be better, as both could give you far greater speed. JSP is a good way to go for writing web applications, and if it ever gets standardized (hint hint, Sun Microsystems) then it could be even better. Java is not well suited for certain applications, but for others it is great. Anywhere speed is a real issue, C is better. I don't want a lot, I just want it all! Flame away, I have a hose!

??? (Score:3)

C++ is probably the most widely used OO language, but it sucks as a teaching language. C is the worst teaching language one could think of, and not even appropriate in the context of OO programming. VB is extremely widely used, but it's debatable whether it is an OO language (or even a programming language). Smalltalk enforces the parts of OO that it supports, so it is an OK teaching language. Eiffel supports and enforces all parts of the OO paradigm and is an excellent teaching language, despite the fact that it is not widely used.

UIUC CS 125 (Score:3)

The course homepage for CS125 is: The next class required of CS majors is CS225, which is a data structures class taught in C++. The first couple of days of the class are spent going over the differences between C++ and Java (most especially stuff on pointers), and then later they move on to data structures and algorithms. The course homepage for CS225 is:

Don't use Java for AP CS (Score:2)

The very things that make Java so useful for getting work done are the things that make it a bad choice for learning fundamentals. Use a lower-level language that has pointers, doesn't have garbage collection, etc. C, Pascal, or even assembly would be good.
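The "pointers versus references" distinction that keeps coming up can be made concrete. A minimal sketch (the `AliasDemo`/`Cell` names are invented for illustration, not from any post above): Java references alias objects, but the language exposes no pointer arithmetic and no manual deallocation.

```java
// Sketch: Java references alias objects, but allow no pointer
// arithmetic and no manual free -- unreachable objects are collected.
public class AliasDemo {
    // Trivial mutable cell, purely for illustration.
    static class Cell { int value; }

    static int demo() {
        Cell a = new Cell();
        Cell b = a;       // b now refers to the same object; nothing is copied
        b.value = 42;     // the write is visible through a as well
        // There is no 'a + 1' and no 'free(a)' -- neither compiles in Java.
        return a.value;   // returns 42
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 42
    }
}
```

Whether hiding this machinery helps or hurts beginners is exactly what the thread is arguing about; the sketch only shows what Java does and does not expose.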
---

I believe Java should be first. (Score:2)

I was just about ready to give up until I came across Perl. Perl was simple and easy to learn. I quickly got good at it and started going into more advanced OO Perl programming. Now I have a great Perl job. I believe I do better starting out with a simpler language and with limited tools than with a more complex language like C++. But this is just me, and I know I'm not the only one out there. It's not that we're too stupid to learn; it's that the way we are being taught is wrong. I know Perl is slow, like Java, so I focus on my algorithms to improve speed, and I make my code modular to work with others and to reuse code. I'm a better programmer because the language is simpler for me to learn and grasp. So I'm sorry for the C++ guys here, but I agree with the Java crowd. It's better to start with a simpler language than with a complex one.

Here's what some teachers say (Score:3)

Kevin Sullivan [virginia.edu] (U. of Virginia) A [uwa.edu.au] couple [elj.com] of less positive articles from Australia. An article at O'Reilly. [oreilly.com]

Re:Experience from teaching (Score:3)

I have had several students here who started to experiment with all sorts of arcane features like inner classes and operator overloading without learning how to write good programs first.

Funny... Java doesn't have operator overloading (just method overloading and overriding). If your students are doing that, then they are skilled indeed (as they have probably modified the compiler to do what they want).

Re:Read the subject (Score:3)

Again, you're trying to blame the language for bad teaching approaches. I've seen courses taught with buggy C compilers. Does that make C an inherently bad language? Of course not.
There is no requirement to teach applets as part of a programming course that incorporates Java, and IME courses which do concentrate on applets are normally outdated: applets don't play as large a role in Java today as they did back in the days of 1.0.x - most of the roles Java was expected to fulfil through applets are now using other technologies, from simple animated GIFs to Shockwave applications. The last good Java course I saw didn't cover applets at all, except in passing. It did, however, cover enough to make it possible for students to learn how to write applets if they needed to.

Fair enough. Quibble over granularity rather than answering that issue. The overall question is whether Java is a better language to teach beginning programmers (or, more accurately, whether it's a good language to teach them) - the issue that you misunderstood was the question of whether Java is 'cleaner' than C++. This has nothing to do with buggy VMs, and everything to do with a clean and simple object model.

public class Globals {
    public static int GLOBAL_INT;
    public static String GLOBAL_STRING;
}

Ugly, yes, but I believe you'll find it does the trick. It's actually one of those recurring features you see in bad Java code where people are trying to write C++ in Java. And if we step back a little from your argument, what you're actually saying is that it's easier to learn to code in Java than it is to code in C++ (something I'd personally agree with). And that's a good argument for using Java over C++ in my book: there's much less time spent getting to grips with (and struggling with) the language, with the result that more time can be dedicated to learning what you can actually do with it. Which is presumably what people are there to learn anyway.

Because hot languages = jobs (Score:4)

The major problem is that after all this time spent on DIFFERENT languages, I'm a jack of all trades and a master of none...
I don't even feel comfortable coding C++ anymore, just because I haven't done it for at least 2 1/2 years: it wasn't asked of me toward the end of my program. The only language above that I did NOT get to take is Java, because of scheduling difficulties, and now I sorely regret that... because now I'm looking for a part-time job to make ends meet as a recent BS-CS grad with an MIS minor and two completed internships. Yea, life sucks sometimes, but whatever. The point is, I may have a diverse background in languages, but that doesn't help me professionally. I still wonder why I spent all that time learning all those languages and no one made sure that I would be able to apply my skills in the real world. I mean, if you're gonna teach a course on a subject, maybe you shouldn't design it around trying to make money off the concept. But if you're going to assemble a department and an educational program that people will be paying in excess of $20,000 a year to enroll in, then perhaps more than FOR loops and system calls should be included in the bunch.

This is the problem with Java, though. Teaching Java instead of C++ is a cheap way out... it's not supplementing a good program with job skills, it's replacing a good program with the language of the year. It's the dumb way of answering the question "How are we ever going to use this stuff?" Instead of teaching them what they should know, they'll teach them what they want to know. That's not always good. In this case, it's flat-out horrible. But I suppose it'll make many people happy as long as no one figures out that a Java-based CS program is perhaps as bad as all the Visual Basic courses they teach over in the business school...

Re:It is a good education language. (Score:3)

Not to be nitpicky, but you mention providing marketable skills and say performance isn't the issue. If Java is a slow performer, wouldn't it tend to hurt students more to learn a language that is slow and klunky?
Last I checked, there were plenty of jobs available for C and C++ programmers. I have a feeling that Java may be a fad, and that C/C++ will still be around and come back into favor unless Java really takes off. Java lets you do some cool stuff, and it lets you do some really klunky stuff, but it isn't designed for performance. Any industry that is CPU-bound (simulation, number crunching, gaming, local applications, etc.) needs code that runs fast. Industries that are network-bound (ISPs, ASPs, portals, etc.) don't really care how much the CPU is choking, because the network is the bottleneck. I have a feeling that once the network is no longer the bottleneck, Java had better get fast quick, or people are going back to C/C++ for speed.

Obligatory flamebait disclaimer: I don't think C and C++ are going away, or that Java is the One True Language. However, it is ridiculous to assert that Java is slow, poorly adopted, or unsuited to real-world applications in the face of overwhelming evidence to the contrary. Very serious companies like Oracle [oracle.com], Sybase [sybase.com], IBM [ibm.com], Macromedia/Allaire [allaire.com], Borland [borland.com] and of course Sun [sun.com] are banking lots of money on Java's success, recognizing that it's a mature, robust, stable, fast language for very serious development.

Not your father's Java... (Score:5)

Every time there is a discussion on Java, the same complaints come up. Guess what, folks - most of that hasn't been true for the past couple of years, and it's getting better all the time. The stuff that you're not and won't be allowed to do is prohibited for the most part because it's dangerous and counter-productive. Java, like any widely-adopted language, does not simply cater to the 31337 hax0r.

I've helped teach a class for the AP and IB CS exams, and I'll tell you what they're about more than anything else - algorithms. That certainly doesn't benefit more from C/C++ than it does from Java.
Hell, if that were the concern, we'd use Haskell! The point is, the class's focus isn't on pointer arithmetic, code optimization, or any other topic that makes C++ a more natural choice than Java. As a matter of fact, having to consider those things makes C++ an obstacle to understanding, rather than an aid. I'm glad the College Board is changing the language to Java - it's the right thing to do for the level of understanding they're trying to teach.

Java is fine for OO (Score:5)

Java has:
1) strict typing
2) dynamic linking
3) built-in memory management
4) a consistent implementation and rich libraries from a single vendor (for better or for worse)
5) *identical* behavior on many platforms ("identically" is the key here... we don't need to be spending half our time teaching build environments for various systems)

For all these reasons, it makes sense to use Java as a beginning language. The basic programming concepts are all there (yes, even resource management). The problem with C and C++ is that it is very easy to obscure larger concepts with intimate technical details; the learning curve is steep. I remember when I was learning Pascal: it was as if the class hit a brick wall when pointers were introduced. Imagine if learning pointers and intimate, machine-dependent ("words"??) memory management were the prerequisite to larger programming concepts such as conditional statements, iteration, recursion, etc. The whole learning process would be stymied.

And I used to be one of the oh-so-cool C++ programmers who thought that Java was just a kindergarten-level "fad", and scoffed at it when it was used to teach programming in CS courses. Now enterprise Java programming is my day job, and I can attest to the fact that it is NOT a fad, is very powerful, and is used to do some really serious, and really cool, stuff. I'm sure assembly programmers said "C?? You don't even need to know what REGISTERS are to use that!!"
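The claim above — that recursion and data structures can be taught before (or without) pointers — looks roughly like this in Java. A hedged sketch; `ListDemo` and `Node` are invented names, not from any course mentioned:

```java
// Sketch: a linked list and a recursive traversal with no pointer
// arithmetic and no explicit deallocation -- the GC reclaims nodes.
public class ListDemo {
    static class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    // Recursive sum over the list; no free()/delete anywhere.
    static int sum(Node n) {
        return (n == null) ? 0 : n.value + sum(n.next);
    }

    public static void main(String[] args) {
        Node list = new Node(1, new Node(2, new Node(3, null)));
        System.out.println(sum(list)); // prints 6
    }
}
```

The same exercise in C would pull malloc/free and pointer syntax into the very first data-structures lesson — which is precisely the tradeoff this thread is debating.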
And Knuth (all hail) agrees with you, it seems (Score:3)

Hear, hear. I'm sick of seeing freshly-minted Java programmers grunt out mounds of steaming O(n!) code, believing that "this is good cuz it's Java." Knuth continues to use MIX (and the new MMIX) and MIXAL in TAOCP for this reason; once you know how the computer works, and which algorithms are the proper ones to use, your choice of high-level language often becomes irrelevant. Just a choice of style. Wake up, people: you can write FORTRAN in any language.

The C++ experts don't teach pointers anymore... (Score:4)

There's a really great book out called _Accelerated C++_, by Koenig and Moo. (Yes folks, that Koenig and that Moo, the C++ gods.) It's a very new approach to teaching C++ as a first language, and everybody who's used it or even just reviewed it has loved it. It doesn't even introduce pointers for several chapters. Students learn how to write simple loops, manage collections of things using std::vector, do the common 90% of string-related tasks using std::string, and write some useful and practical programs, all before ever seeing a pointer.

Side note: the book is part of the C++ In-Depth series, edited by Stroustrup. One of the rules for the series is that the main body text of the book must be no more than 300 pages. No filler crap, no 1500-page tomes to raise revenue; make your point simple and clear and then shut up.

Re:Wrong Direction (Score:3)

I have worked with people in the past who insisted that particular machine details are unimportant. For instance, they would say, "There is no need to worry about the cost of paging due to a large resident memory set -- just buy more memory!" And then it would turn out that we'd maxed out physical memory on that generation of machine -- the response was, "Wait -- memories will be bigger next year." Memory did get bigger, but by the time memory was large enough, we were out of business. Ignoring the machine does not solve a problem here.
Ignoring people will definitely cause a problem, though, as you observe. Therefore the solution (it seems to me) is to teach more -- not less. Teach good user interface practices in addition to teaching how the machine works.

Re:Wrong Direction (Score:3)

In order to cut down the attrition rate, you cannot scare off the incoming students. No way. If the first thing they get in Programming 101 is a solid smack upside the brain, a lot of them are going to just walk away. The dotcoms may not be hiring as much as they were before, but braving the job market is still going to be more appealing for the average student than having to put up with assembly language. They're gonna get the low-level stuff before long; at this point they need to get a grasp of the big picture. As interesting as your suggestion is -- and I would agree that it's a very unusual way to approach the subject -- my advice would be to do almost completely the opposite. Use a language that shields the students from a lot of the underlying complexity, so they can focus on broader concepts that would usually come later in a software engineering class.

Use Python. It's still a bit exotic, so the incentive to ditch school for a job using it is less pressing (though that would change fast if a lot of people started learning it, of course). It enforces clean syntax and frees coders up to focus on higher-level problems through the use of -- get out your buzzword bingo cards -- object-oriented libraries. It's scripted, so the students won't have as much arcana to deal with right away, and better still it comes with a command-line interpreter, so students can test expressions to see what happens when various language constructs are executed, with instant feedback if anything is going wrong. As the students move through the curriculum, they can revisit earlier projects by rewriting libraries in a low-level language like C (or assembler, if your sadistic impulse can't be denied any longer... :).
This can be a bridge to understanding how a big project develops, especially among multiple programmers: the obvious thing to do would be for the first classes to use object libraries written by the second classes, which in turn are writing to specs prepared by the later software engineering classes. Etc. I really think it could be the foundation (with the later addition of C, C++, and/or Java) of a good, comprehensive CS curriculum.

Re:Wrong Direction (Score:4)

Thirty years ago, I learned machine code to program the PDP-8. Why not teach that today? Or why not go further down and teach VLSI processor design, or semiconductor physics? The answer is, of course, a tradeoff. Learning any of these things is potentially of value, but one must compare that potential value to the time and energy investment required. I submit that for most CS students today, the effort in learning assembler is not worth the benefit. It is therefore more appropriate for an elective rather than a core course, and has been for some years. A more interesting question is the current value of studying C after learning an object-oriented language. The tradeoff there is much more difficult, and I don't have a strong opinion one way or another. 7402

Re:Not widely used yet (Score:3)

I suppose VB would be good for teaching what people have to do in the real world when their language isn't up to the task at hand, and vendors have to invent new and strange things to give programmers the features they want. VB feels so hacked-together it's not even funny. Besides, if we went by "widely used" to decide what to teach new programmers, we'd all still be using COBOL. :-)

Re:Wrong Direction (Score:5)

Actually, that is a pretty interesting philosophy... I like it, and not just because of sadistic tendencies. :-) Think about it: people learn first-hand what happens under the hood.
The lack of any kind of visually impressive positive feedback will guarantee that the really bad programmers with a serious lack of dedication never come back. The apparently inexplicable things the machine does when you do something wrong will guarantee that those with some dedication but poor understanding never come back. You are left with those who are really good programmers at heart, understand what they do, and are strongly dedicated to doing it. Ergo, fewer programmers, better programmers, more money to go around among fewer people, six-figure salaries for all, and real productivity. Oh, and Windows eventually goes away too. :-) Sigh, what a world that would be...

Software Engineering and Languages (Score:3)

It is a known fact that hardware is orders of magnitude more reliable than software. The most obvious difference between software systems and hardware is that the former is algorithmic whereas the latter is based on parallel streams of signals. A signal-based system is ideal for the implementation of work-once, work-always components that can snap together at the click of a mouse. This is because their temporal signatures remain constant. By contrast, one can never be sure when an algorithm will be done, and this is detrimental to stability. Algorithms should therefore be implemented on top of a signal-based system. They should not be the basis of automation. In the future we will have technologies that allow computer memories to instantly reconfigure themselves into parallel logic circuits. In the meantime, even though the von Neumann paradigm forces sequentiality on us, signal-flow parallelism can be easily emulated in software so as to hide the serial nature of processors from the application developer. Unless computer scientists wake up from their algorithmic stupor, computer science will continue to limp along, badly.
More multi-million-dollar space probes will malfunction, airplanes will crash, electronic stock exchanges will suffer from glitches, and airports will shut down. Half a century, thousands of lives, and trillions of dollars later, we'll kick ourselves in the rear and ask "why have we been so damn stupid for so long?"

Re:Experience from teaching (Score:3)

Why is Java more platform dependent than C or C++? First of all, if your platform hasn't got a JVM, you are done for. You can't run Java programs. If anyone tells you you don't need to have a JVM, then WHAT THE HECK IS THIS DISCUSSION ABOUT? There's absolutely nothing in the C or C++ language standards that says anything about the platform; that's why those languages are so great for writing embedded applications and operating systems. They don't even assume the existence of a monitor or a keyboard. I have never in my life had a problem with running C++ code written using the Borland C++ tools on my GNU/Linux or NetBSD machine. Now, I need some coffee...

Re:What about Python? (Score:3)

With Java, you have no choice but to start the OOP way from day one. And Java's OOP isn't that hot, either (there are still primitive types that aren't objects). If you want imperative programming in Java, you have to fake it with static methods. There is a difference between what is a good first programming language and what is a good language to learn software engineering principles. For software engineering, Java would be a reasonably good choice. Gerhard

PS: I study c.s. and I had Java (and Haskell) as introductory languages. Yes, I do follow the cult of the snake.

Ruby! (Score:3)

Here are some additional links: The official Ruby page [ruby-lang.org] Dr. Dobb's article about Ruby [ddj.com] Documentation [ruby-lang.org] HotLinks [ruby-lang.org] If this weren't MUCH better than Java, I wouldn't pull this shameless plug. Please check it out; don't stay in the dark ages.
Btw, PLEASE don't make the students create object-oriented ZOOs and the like. We were forced into such meaningless assignments when we had OO classes in school, and such stupid problems are for OO-morons. Additionally, you don't need a "fast" language for teaching OO concepts. On the contrary, since Ruby is a glue language (like Perl), it can be used to glue together the right tools for the job when you need it to. It's definitely fast enough if you just express your ideas in it correctly (avoiding many nested loops). Some people even use Ruby as a specification language, because of its easy-to-understand syntax and lambdas. Ruby code is usually shorter and more readable than the same code expressed in other languages. - Steeltoe

Re:It is a good education language. - NOT! (Score:3)

How long do you think Java is going to be "free" for? My guess: another 5 years -- by then it will be everywhere -- and Sun will pull a Unisys and start charging fees for it. (This is just my opinion, not fact.) I think schools shouldn't teach their students languages that aren't free (as in speech). There are 100 C/C++ compilers you can buy/download, but just a couple of Java compilers.

\editorial: For beginning students I expect Python would be a good choice -- it's simple, very consistent, very explicit about variable coercion, and still "powerful".

Re:It is a good education language. (Score:3)

It is widely and freely available. Not as much as C or C++. Almost every machine on the planet is capable of running C code. That's not true of Java. And you're never going to write device drivers in Java. It is being used widely in the industry - again, not as widely as C/C++. Not even close. I think educational institutions have a responsibility to release students with marketable skills. I agree with this statement wholeheartedly. Unfortunately, you seem to think that Java programming is a marketable skill. Or at least, you seem to think it's MORE marketable than C/C++. Which is insane.
Any reasonably competent C/C++ coder can pick up Java in a heartbeat. The converse is not true. I've seen Java coders who STILL can't figure out how to dispose of memory, basically don't understand the difference between stack and heap, and don't understand pointers well enough to remove an element from a linked list.

And Java offers all the needed constructs and is good for teaching the OOAD methodology. No, it doesn't. By virtue of using garbage collection, it takes memory management out of the hands of the developer, teaching people to be lazy when it comes to object instantiation and use. Not having pass-by-reference gives people the idea that having class-level variables is a viable option. But the problem is that most simple projects are written in one class, which essentially teaches them to use global variables - which is not good. Lacking pointers is the critical flaw... It's possible to learn C/C++ with an understanding of Java, but it's far easier to learn Java with an understanding of C/C++. Most people will end up coding C/C++ for most of their work. Teach them to use the language that they'll end up using - specifically because it's then easy for them to learn Java, whereas if you taught them Java, it's not as easy for them to then learn C/C++.

Computer Science is not just Systems (Score:3)

The class particularly in question is called "Data Structures and Algorithms". In my experience, the students who took this course in Java had a much smoother experience than those who took it in C++ the year earlier. Programs crashed mysteriously less often, they didn't have to deal with memory leaks, they had fewer compatibility problems, and they were able to write more sophisticated and interesting programs in the same amount of time.
Since Java is simpler, we were actually able to teach almost everything in the language to them, where for C++ we had to leave out a significant amount of its core (such as templates, which meant that they couldn't really understand the string class, for instance). All of the CS majors took "Systems Programming" the next semester, which is a hard-core C programming class; most go on to take Operating Systems, in which they learn anything you'd hope they'd learn by struggling with C or C++ in the intro class.

I agree totally that systems programming should be a significant part of any CS program. But systems programming is just *a part* of computer science (in the Slashdot/Linux crowd, perhaps it is the most popular). And in truth, things like manual memory management and hardware access are not important in most of the other parts of CS. For teaching algorithms and data structures, these things are a hindrance. Personally, I'm waiting patiently for the day when only the most low-level software (a microkernel and hardware drivers) is written in C or assembly, with all of my applications written in a safe, GC language (perhaps Java, though there are better alternatives). It sure will be nice to be free from buffer overflows and memory leaks, finally! So while my recommendation of Java here is partially influenced by that goal, I think it is also quite justified for pragmatic reasons.

Bad Idea (Score:3)

- Many people who will not be getting a degree in Computer Science (but who might interact with programmers or write some MS Access scripts) take introductory programming classes. It is important to have these people learn the fundamentals of programming rather than their actual implementation -- assembly will be practically useless to them unless they take the time to take compilers, operating systems, etc...
- Assemblers (and I might even stick C in this category, if I was feeling snappy) do not support the abstraction and generic programming features important for writing interesting programs. (Except systems stuff.) Introductory programming classes typically focus on data structures and algorithms; if you make the students program these in assembly, you are limiting the amount of material you can cover (and therefore, how much they can learn). - Assembly as an introductory language is going to scare people away from computer science. Systems programming is pretty fun, but it's not for everyone. Many computer scientists hardly ever program (let alone in assembler), and I would wager that most professional programmers do not need to write in assembly language. Assembly is a great enhancement to a programmer's (or scientist's) knowledge of computers, but understanding it is not a requirement for programming in high-level languages. - The world will be a better place (fewer buffer overflows (thus security holes) and memory leaks, more portable software and code reuse, and shorter development cycles) if we encourage new students to move towards high-level, abstract languages. I'm not saying Java is the best choice for this (though it may be the most practical), but assembly certainly isn't. Re:IMO... (Score:3) WOW!! I wish that I could have seen one of those. A 25 millihertz processor! My analog watch can calculate 1+1 faster than that thing. A goof like that really is just asking for some snide remark... Java has less cruft (Score:3) C++ has two big problems. One is cruft, and the other is memory management. C++ has a long legacy. Too long. Not only is there the legacy of C, there's the legacy of early C++. Templates, exceptions, and references came in late. It shows. There are too many things that are done the way they are only for historical reasons. C++ and C program design obsesses on memory management, because what you take, you must give back. 
The language provides little assistance with this. That's not what beginning programmers need to focus on. C++ needs a major cleanup. I've been toying with a design for "strict mode" for C++, comparable to "use strict" in Perl, which would get rid of much of the cruft. C++ comes out of the 5-year ISO standardization freeze soon, so it's time to be thinking about this. But that's for the future. Oh, pleeez.... (Score:4) Or why not start at the other end of the abstraction stack? Start with cognitive pyschology, perception, human-computer interaction, and the identification of human needs -- then you figure out an approach that best meets people's needs, which would lead to how to choose the right software tools and approach for any particular case. For a CS class, I wouldn't start at either extreme. Choosing the right level of abstraction is important, and the answer isn't just automatically one below what the other guy suggests. I think CS classes should start by making it clear that the point of it all is to create useful stuff for real people. Teach them what they need to know to get started ASAP doing so, and fill in the details with later classes. A real object-oriented language (Score:3) For those unfamiliar with LPC, it was initially designed for use in MUDs - as such, it is missing many of the non-text interface and output features needed for a full featured modern language. However, what it had was so beautiful. There was one type of object variable - object. All objects can talk to each other, call whatever functions they want on each other, etc, just by obname->do_something(). You can even call functions on not-yet-loaded objects, by doing things like ("path/filename")->do_something(). You can call functions like FindObject to get the objects of a certain type. You can run through a list of all objects. Etc. 
LPC is based on the principle of letting the programmer do whatever they want (I know, some people don't like this). For example, there is a "mixed" type that can hold any combination of values: mixed blee = { some_int, some_string, { some_object, { some_object, some_bool }, some_float_pointer, some_object }, some_object_pointer }; It basically acts as a struct you don't have to declare in advance (of course, you can still use structs to your heart's content). So, for example, if you just wanted to store a complex value, you could use a mixed of { some_double, some_other_double } at any time. Then there are mappings. Mappings are a built-in, arbitrarily formatted hash table that is an internal datatype. For example, you could have (and I may not be remembering the format exactly): mapping blee = [ "foo" : 1.5, "bar" : 8.2 ]; Etc. (Naturally, you can use mixeds, objects, whatever, as your indexes and values.) Most LPC libraries are designed to give objects a (text-based) visual representation. You give them a name, a description, a short description, etc. Objects are created in a virtual environment, and you can see them - hold them, place them inside other objects, etc. An object can talk to its environment (what it is inside), or anything in its inventory. A coder can manually call functions on various objects, or set up interface elements for them to be called at any time, not necessarily by a coder. This visual environment, and "existence" within the code, leads to incredibly fun code wars, like you wouldn't believe. For example, I was the loki-type character coding on a mud once. At one point, I had fun by calling functions that would change the name of people's character-objects around, so that they'd be confused when talking to each other. Naturally, they'd seek retribution and fire off a dest once they found out it was me (a 'dest' is a piece of code that artfully kicks someone off the mud). But once I got to know other people's dests, I set up protective objects which would fire off counter-dests when I saw their dests start to go off.
So, instant dests started becoming popular. So, I set up objects which propagated themselves into other coders' inventories and would filter out their commands. Instead of instant dests, to maintain artfulness while still preventing them from counter-desting, my dests first cleaned out their inventory and the inventory of the object (room) they were in, to wipe out all local protective objects (sometimes they'd mess with global commands, but well, if they did that they deserved to win). Ok, sorry, I got sidetracked. But... LPC is a fun language, and has the best object-oriented design I've ever dealt with. It's a good language "for love of the code". - Rei Wide use is not the issue (Score:4) I think the question here is this: Should we start by teaching an easier, higher-level language (i.e. Java) to get programming concepts down, then move to a lower-level, "closer to the machine" language for advanced topics, or should we start with the lower-level language and then treat additional languages as extras? Personally I think the first option is the more viable. Java is a fairly easy, very portable language in which students can create fairly elaborate programs somewhat quickly. With the Swing classes, one can create GUI-based programs that will run almost anywhere after probably less than a semester of learning. Java has all the necessary pieces of a fully functional OO language, and it spells them out in a very easy-to-understand way. It also enforces compartmentalization. I remember as a freshman, one of the most frustrating things about the way CS was taught was that none of the programs we wrote seemed "real". The assignments were written to develop skills in algorithm analysis and to point out uses of specific structures, but they always looked like a homework assignment, not a useful piece of software. With Java (or hell, even an interpreted language, like Perl or Python) I think the same skills could have been taught while allowing for more... err... satisfying...
assignments. Once the basics of programming and software development were learned (and I don't know about anyone else, but my Uni spent most of the first year, and a chunk of the second, on these skills; learning "how the computer works at the lowest level" was for Sophomore and above classes), C would certainly be appropriate to teach as a lower-level "how the machine thinks" language. Our low-level systems classes were taught in VAX assembler (I never actually learned C in college; we used Pascal as the teaching language, then Smalltalk and assembler in later classes), and I'd have found C both easier and more useful in "real life". As a university assistant in CS.. (Score:3) We have had the same question discussed. Normally in the first year of the course, we teach students about OO programming concepts using the Oberon language, a Wirthian language much like Pascal and Modula-2. The benefit of starting with this language is that practically no one knows about it when they start. This levels the field for everybody starting the course. The second benefit is that it is fairly similar to Pascal and Modula-2 in syntax, but allows for concepts like pointers and garbage collection to be explained. Basically, it is as good as any other OO language, and we have satisfactory results in teaching OO with it. The incredible downside to this language is that students will never ever use it again. That's the first year; in the second they are taught C++ in a strict Stroustrup way. Our question has been whether Java would be better. Certainly, as an assistant, I would have said Java all the way 2 years ago, because I just like Java much more than Oberon (and incidentally the students do too), but Java really misses out on a lot of key concepts that C++ does have. If Java were ever to be taught in C.S. at our university, it would replace Oberon, not C++.
We consider C++ a difficult but necessary level of experience students should have endured, simply because when they go to work in a company 3 years later, the majority will be working in either Java or C++. BUT. In the Mathematics and Physics classes, we HAVE switched to Java. To these people, a language is more a tool than a subject in its own right. In fact, we didn't just teach them Java; we gave them a Java environment that allows for almost fully functional programming, an interpreter which simply translates their code (which looks just like Java code, only everything is purely functional) into 'real' Java. So far, the results are promising. They actually get the concept of recursion, which, for a first-year mathematics and physics class, may be easy to get down on paper but harder to get done in reality. This is the first year we're doing this, so we have to consider the end results in a few weeks when exams are over. We have also taught Java to Physics PhD students, with very good results. Most of the programming in Physics labs is now in Java because it delivers faster than C++ and is simply easier to maintain and update. From personal experience, I think C++ has a few benefits of its own. First, it is a valuable lead-in to the UNIX world, something most people have never had the chance to experience before. You will simply have to get the UNIX concepts and commands and syntax and philosophy right before you can say you're able to pull a C++ thing off on a server compiler. Second, I think C++ is obviously valuable once you are done studying. This should not be the major issue here, and we dislike the idea of studying C.S. because it makes good money, but it's a reality nonetheless. Third, C++ has influenced many languages in its 10+ years of existence, and thus makes the transition to those languages, including Java, easy.
Fourth, C++ offers almost every concept that an OO language should offer, so in the world of paradigms it's a fantastic example of OO programming. Fifth, the syntax and semantics make people think mechanically, and the ancient C subset allows for making the distinction between low-level and high-level programming. Learning how to debug is one of the key elements that C++ offers and needs, and students will benefit from that knowledge. And finally, transitions from C++ to Java are easy, but the opposite transition is incredibly tough, even if you DO know C++. I've made the transition about 4 times in the last few years, and I can tell you that every time I had to switch back to C++, all hell broke loose again. You simply forget the hard parts, because that's what Java is: the 'nice' version without the pitfalls that C++ picked up in its more or less uncertain growth. I hope this has some valuable insights for you. c# (Score:3) 1) It has support for many object-oriented mechanisms. 2) It looks like C/C++/Java/JavaScript, so you could move on to them after you completed the C# class. 3) It has automatic garbage collection, but you can work in unsafe mode and have access to pointer arithmetic. So it is nicer than both C++ and Java in that respect. 4) It is the Microsoft Limbo: a language I'd love to teach (Score:3) Personally, I don't believe that Java is the right way to go in that respect (although there are worse languages: my old comp sci dept started to teach Ada as a first language after I left...). Two main reasons: My personal choice for a first language to teach would be Limbo [vitanuova.com], a beautiful language, designed by some of the original designers of C (who've come a long way since then!). Amongst other things, it: I'd love to teach it as a first language. No more "ahh yes, that array just turned into a pointer because you looked at it funny"... oh happy memories... Disturbing Trend in Replies...
(Score:3) I've noticed a disturbing trend in the replies; That is, most of them focus on language features. For example, they say, "Oh, they should use this, because it has good OO," or, "Oh, use [C/ASM], because it's low level, and good programmers know how low level stuff works", or, "Use [C++/Java], because that's what the industry uses", or "Use XYZ because it's got a good set of libraries," etc., etc.,. These folks have obviously never taught people who haven't programmed before. These are people who are going to struggle with variables. These are people who can't write a for loop to save their lives. They can't use a function, much less a method. OOP, pointers, bits&bytes, libraries; None of that matters for at least 3-6 months. This is why I highly recommend either LOGO or Python as a first language. These are interactive interpreters. You need to be able to say, "X=4", and then say, "what is X?", and then reassign X. You need these basic things. Once the concepts of variables, loops, and functions are in place, then you can easily map to other languages. I know this because I've taught it. I also know this because I've consoled students crying over their Java homework (quite literally) at the end of the semester, incapable of using a for loop. These are good students. As programmers, we take a lot for granted. So forget all this "X features OOP, Y has a good lib, Z is low level," and think: Variables, Flow Control, Functions. The rest will follow naturally after these are ingrained and easily used. I teach free programming classes in Seattle [taoriver.net]. Since I teach classes for free, I don't have the economic pressure to teach JAVA or C++. I could write whole articles about the damage that certification programs do to people. Another problem is that people look at the Jobs page, discover that most industry programmers are doing something called "JAVA" or "C++". They open up the university catalog and see, "Learn JAVA in 3 months!!!" 
($1500), right next to the A+ certification houses. Since the ads are all over the place, they figure that it must be the way. They take a class, and drop out halfway through. The experienced programmers with CS degrees taking JAVA to learn a new language make the newcomers feel pathetic, and they decide programming isn't for them. If only I could copy the experiences in my mind for y'all... It's really bleak. College is a different situation. I think the reason the profs teach JAVA is because they actually bought (and contributed to!) the hoopla about OOP, in a theoretical rather than economic way. What I Teach My Students (Score:4) I teach my students [taoriver.net] in the following order: It is with great sadness that I teach my students OOP, as it is over-hyped, and people believe in it religiously and without question. I teach it in order to prepare them for the world that will hire them. The primary value in OOP, as far as I can tell, is thinking about the data first, and language features supporting polymorphism. Also, the book "Design Patterns" is the most (and quite possibly only) valuable piece of literature from the OOP community. I stress that it doesn't require a particular language or ideology to implement polymorphic behavior, or to think about the data first, or to implement a common pattern. (Device drivers and web servers are great examples of objects exhibiting polymorphism and encapsulation. In non-OO speak, that's the product of paying attention to coupling and cohesion, which takes us right back to... the Unix Philosophy.) I teach C so that they see low-level stuff, and Python, for reasons too numerous to list. I teach C++ so that they can get hired. One of the reasons for listing Python: they can start writing programs from day 1, second 1. No fussing with heavy class notation, as Java forces on you. (Just look at Java's hello world.) To believe that new students learn about OOP by using Java is hopelessly naive.
Most students I've seen working with Java as a first language struggle with making for loops, while loops, and using variables. (Of course, several students will defend their teacher and difficult learning by giving you the rhetoric that OOP is the way, and that Java is great because it's... OOP! You can feel the difference!) Re:IMO... (Score:4) True, esp. for GUIs. (We all know about listener leaks, right?) For other uses, however, such as web-enabled db apps, it can actually make more efficient use of resources -- and Java can remove most of the pain from tasks like session management. the teaching language be C/C++ - once you know that, you can learn Java, Perl, PHP, etc. with little effort The problem with using C/C++ for teaching is that a student can get distracted from learning how to program well by the idiomatic syntactical complexities that make C/C++ such a powerful language in the first place. I say, learn Java first, so you understand the classic algorithms, simple OO, and things like threads. Then find out what else you can do with C/C++, and others. True, Java's cross-platform... :*) Do not forget, Java is not the only cross-platform language -- Emacs Lisp is available for many, many platforms, and uses such niceties as "byte-compiling", just like Java. Re:It is a good education language. (Score:3) Obviously you don't understand the language, because your "pass variables by reference" comment is completely inaccurate, as shown by this response. [javaworld.com] I've seen Java coders who STILL can't figure out how to dispose of memory, basically don't understand the difference between stack and heap, and don't understand pointers well enough to dispose of an element of a linked list. Secondly, this is exactly the point of high(er)-level languages: to eliminate details that are better solved by the machine, or previously by someone else. I've seen C/C++ coders who still can't produce binary output by hand from their source files.
They're so stupid they have to use a compiler. No, it doesn't. By virtue of using Garbage Collection, it is taking memory management out of the hands of the developer, teaching people to be lazy when it comes to object instantiation and use. Garbage Collection is completely unrelated to the concepts of Object Oriented Analysis and Design. Automatic Garbage Collection allows one to focus on solving a problem. Forcing manual garbage collection is a step backwards in any modern language. It's a detail that the machine is better able to deal with, as it should be. Automatic Garbage Collection is a concept that can be applied to many differing programming languages, and it is a detriment in none of them. So, in order to have a method change a value that you pass to it, you have to encapsulate it in a class. Or, behavior more commonly known as a side-effect - which is also best avoided when dealing with Object Oriented Programming. A common mistake of many C/C++ programmers is to get caught in the procedural traps introduced and taught by C and adapt those same concepts, wrongly, into their OO work when using C++. This is probably the reason why you think encapsulating your value in a custom class is a poor decision. Maybe you should study the term encapsulation. [elj.com] And as for your comments about ease of learning: it may be easier to learn C/C++ (which is the biggest source of problems: C is a procedural language, C++ is not, but the grouping of these two together produces disastrous results), but that ease is because people learn the wrong way to write OO code. Java teaches, or forces, the correct way. After learning the proper way in Java, you'll find that you actually write better OO C++. Java allows you to solve problems; C++ allows you to solve details. As an employer, I know which one I'd want you to deal with.
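The back-and-forth above hinges on Java's parameter-passing semantics, so a minimal sketch may help (the IntBox class and the method names are invented for illustration, not taken from any comment): Java passes everything by value, including object references, which is why reassigning a parameter does nothing for the caller while mutating the referenced object is visible.

```java
// Hypothetical wrapper class: encapsulating a value so a method can change it.
class IntBox {
    int value;
    IntBox(int value) { this.value = value; }
}

public class PassDemo {
    // The reference is passed by value: reassigning the parameter
    // only changes the local copy, not the caller's variable.
    static void reassign(IntBox box) {
        box = new IntBox(99);
    }

    // Mutating the object the reference points to IS visible to the caller.
    static void mutate(IntBox box) {
        box.value = 99;
    }

    public static void main(String[] args) {
        IntBox a = new IntBox(1);
        reassign(a);
        System.out.println(a.value); // still 1

        mutate(a);
        System.out.println(a.value); // now 99
    }
}
```

This is the distinction behind "encapsulate it in a class": the wrapper object gives a method something it can mutate in place, which is as close as Java gets to an out-parameter.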
Re:College and the Workforce (Score:3) And any University with a reputation for letting the education get in the way of future employment for their students is gonna start losing students quickly. I can't say that I agree with you. Here in the UK most of the better Universities are the ones that teach more theory whereas the less good ones teach more practical applications. For example, the University that teaches Visual Basic, that I referred to in my comment, is City University. Hardly a University renown for its computer science. Consequently, the computer graduates from there that I have come across know next to nothing. They didn't need a 3-4 year degree to learn Visual Basic programming! Down the road at UCL and Imperial they teach a lot more theory, several languages and guess what, the students that graduate are generally clued up. As for employers I think they understand the difference too. I definitely do when I look for people. I know the difference between someone who has done some training and knows particular applications and someone who has studied computing and knows a lot of theory. I usually find a simple set of problems to solve at a job interview weeds out the difference. And I for sure would prefer to employ someone who has a grasp of theory because then they will easily be able to pick up the many different programming languages that we use in our company. Re:It is a good education language. (Score:5) I agree with you in that Java is fine for an education language but it definitely shouldn't be limited to that. For example, one of the new recruits in my company has just graduated from a university where they were only taught Java. Consequently, he doesn't know what a pointer is, he doesn't know what linking object files means and he doesn't know anything about memory allocation. As I see it, University (College) is about education not about industrial training. 
You shouldn't be taught specific tools at university rather you should be taught theory. That way, when you leave university you will be able to apply your theory to different languages or applications. For example, one of the Universities here in the UK teaches all the programming in Visual Basic. I mean what is that all about? How can you possibly claim you know how to program when all you know is Visual Basic! A good programmer should pick a language to use like a carpenter picks a tool for a job. Different languages are good for different jobs and university should give you the theory so that you can easily understand new languages. A university should really teach you a mixture, pure object-oriented (like Smalltalk), imperative (like modula-2 or C), functional (like Miranda) etc. The student will then have firm foundations for being a good programmer. Re:Software Engineering and Languages (Score:4) Modern programming languages are not based on Ada Lovelace's ideas. They cluster around notions of change of state (imperative languages), function composition (functional programming), or logical deduction (unification based logic languages). Hardware is more reliable than software because (a) it's far less complicated, (b) H/W designs start off with a very precise spec., (c) H/W is sufficiently simple that modern formal methods are applicable, (d) most software programmers out there are monkeys. Quite how you get from "based on parallel streams of signals" to non-algorithmic (whatever that means) is beyond me. I refer you to the Church-Turing thesis ("all Turing-powerful computational models are equivalent") for which a counterexample has yet to be presented. Among the main reasons why many languages are strongly biased towards sequential execution are that (a) it's easy to understand, (b) it has a clear computational cost model, (c) it's easier to design H/W for this model, (d) data dependencies often demand that you work sequentially. 
There are plenty of languages which do focus on concurrency (e.g. Occam) and plenty of schemes for supporting cheap concurrent programming (e.g. the Transputer, data-flow architectures, parallel functional language compilers, lazy functional languages, etc.) although they all have their drawbacks - mainly that the bookkeeping cost tends to seriously water down the naive intuitive expectation that it'll "all just work really fast." In my opinion, we have such buggy software because programmers are rarely given a rigorous spec. to start with and are rarely capable of following it properly when they are, and because the most common languages in use today (C, C++, Java, VB, Perl) are unbelievably poor: they have weak type systems, if any, they have weak abstraction mechanisms, they have absolutely no mathematical underpinnings, and they are very bad at preventing mistakes (I would spend time debunking the "Java's not like that" arguments, but This sort of "you're all fools - why can't you just see?" kind of rant really gets up my nose, especially when it's just backed up with an ill-informed wish list. It is a good education language. (Score:3) I think Java is a fine education language for the following reasons. It is cleaner than C++. It is widely and freely available (though so are many other languages). It is being used widely in the industry, and I think educational institutions have a responsibility to release students with marketable skills. As for performance, it is slower than some languages closer to the heart of a computer, but speed is not the point of an educational language; constructs and methodology are. And Java offers all the needed constructs and is good for teaching the OOAD methodology. Re:smalltalk? (Score:3) Smalltalk is much cleaner OO than Java, which is much cleaner than C++. I think the main reason Java is chosen over Smalltalk is that Java is more widely used in the industry.
It is a tradeoff of purity versus use; Smalltalk wins in purity and C++ wins in use, but Java is a good middle ground. I took Java last year (Score:4) More importantly, if you want to keep as many people interested in computers, especially in their first introduction to programming, keep the language simple; worry about whether a high-level or low-level experience is better later. Re:Why not select language as appropriate for topi (Score:4) Agreed. While C/C++ is a good language and lies at the root of Perl and Java, it and the imperative paradigm are not the center of the programming universe. Without exploring languages from other paradigms such as Prolog, Haskell, etc., one cannot get a good feel for the different ways to approach a problem. This tends to lead people into such dogmatic fun as the belief that recursion in all forms is "just plain wrong", as my C/C++ teacher put it. In terms of which language to start with, Java has the advantage over C in that its syntax is cleaner and has fewer of the fun archaic elements, such as the need for heavy pointer arithmetic, that make the intro learning curve too damn steep. Unfortunately it too is limited to the imperative world. I'd recommend something like Pascal, which was written to be a teaching language (if you must stick with imperative). IMHO the best language to start with would be Lisp. It is more mature than Python. It is quite tolerant, allowing the students to play with less pain. It includes higher-level elements, such as lists, that allow people to get into real programming in short order, rather than forcing them to put off any real programming until after they have mastered the arcane nature of C's memory allocation. Lastly and most importantly, as a teaching language it can be used for purely functional, purely imperative, and object-oriented programming.
Thus you can introduce your students to three of the major paradigms (you can do some pseudo-logic programming in it, but it just isn't the same as Prolog) under one roof. As a result, the excess learning time is lessened. Irvu Java in the business world (Score:4) And I can tell you that whenever number crunching is not required, Java takes over. Java on the server is really gonna rule the business world for the next 10 years (unless As a learning language, I would say that, Java being easier to learn, it's easier to teach OO concepts with Java because you don't have to make sense of this huge thing called C++ first. Re:Wrong Direction (Score:3) get a newbie, teach them the hardest concepts! calculus should be the first math taught! 12th grade will be addition of single digits! how about you make it so that you learn assembly, then low-level C, then BASIC?! i think the AP class should be MicrosoftBasic! stop trying to be a karma whore and THINK. Why not select language as appropriate for topic? (Score:5) For the express purpose of teaching OOP, why not use Smalltalk, which makes it difficult to fall into other methodologies and easy to use OO techniques? But as a greater question, why restrict the field to one or a few languages? I'd think that in a CS curriculum (I insert the disclaimer that I came from mathematics and not from such a curriculum) one would want students to explore as many languages and paradigms as possible, ranging all the way from assembler to Prolog. This would presumably encourage a student both to develop many different ways of thinking about any given problem and to be able and willing to select an appropriate tool for any problem encountered. Far too often we see (as evidenced by responses on Slashdot to articles like this one) that many people are rather narrow-minded about language selection and unwilling to deviate from using their one pet language. Why not start to discourage that immediately in the course of formal CS training?
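To make the "many paradigms, many ways of thinking" point concrete, here is a small, hypothetical sketch of the same problem written twice in Java: once imperatively, with a loop and mutable state, and once recursively, in a functional style with no mutation. (The class and method names are my own, chosen only for this illustration.)

```java
public class Paradigms {
    // Imperative style: explicit loop and a mutable accumulator.
    static long factorialLoop(int n) {
        long acc = 1;
        for (int i = 2; i <= n; i++) {
            acc *= i;
        }
        return acc;
    }

    // Functional style: recursion, no mutable state.
    static long factorialRec(int n) {
        return n <= 1 ? 1 : n * factorialRec(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(factorialLoop(5)); // 120
        System.out.println(factorialRec(5));  // 120
    }
}
```

Both compute the same function, but they train very different habits of thought - which is the argument for exposing students to more than one paradigm, whether in one language (as with Lisp) or several.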
Re:python (Score:4) Python is the perfect way for people to get their feet wet. Especially as it doesn't "break" a person's perspective on what to expect from languages, as Perl might. No, you need both directions (Score:3) I agree that you need to know how a computer works at the low level. But this does extremely little to help you design software with typical real-world requirements - you also need to have experience thinking at a high level, where the design of software systems takes place. I know many programmers (usually with an electronic engineering background, where software development isn't emphasized) who know assembly etc. very well, but don't know even the most basic things about object-oriented programming, and it really shows through in the software they produce (e.g. having the base class know about ALL types of derived classes and using a "type" variable combined with lots of switch statements in functions to call derived-class-specific code - exactly what virtual functions are there for!). Likewise, I've seen people who have only learned about programming at a high level, and their coding shows problems because they don't understand what's going on when their code gets compiled (e.g. not understanding the difference between heap allocation and stack allocation - try "char array[10000000];" inside a function!). You absolutely need both, no question. My 1st-year CS course at university included introductory assembly as well as object-oriented programming. Personally I think Java is quite a good introductory language; C++ is too complex for beginners, and you want to teach the design principles without all the pointer problems etc. getting in the way. I don't see how you can claim that "teaching java to beginning programmers only encourages sloppy programming". If anything, only teaching assembly would encourage some seriously sloppy programming.
You cannot learn good high-level design from only doing low-level programming; that's like saying that you can learn good social skills by studying how neurons in the brain function.

Re:Not quite (Score:3)

Looks like Smalltalk enforces every OO paradigm I can think of. Getting back to the topic of Java as a teaching tool, I don't see the problem with it. You can teach OO with it, you don't have to worry about pointers, and it makes you marketable (OT - I still get quite a few Smalltalk job offers, so it's still useful in the workforce).

You've lost sight of what is fundamental. (Score:3)

Thirty years ago, I learned machine code to program the PDP-8. Why not teach that today? Or why not go further down and teach VLSI processor design, or semiconductor physics? Semiconductor physics has nothing to do with computer science. VLSI processor design is not fundamental anything, but an engineering discipline based on a particular fabrication technology (more-or-less 2D semiconductor electronics). Learning the machine code is part of learning assembler. A small part. There's no need to memorize it, just to be able to assemble by hand to understand what's going on (with references, of course). A few hours doing such exercises should suffice. Once this is learned, assembler is a simple convenience giving full control over the machine code with fewer headaches. A CS student certainly should learn about logic gates and how they build up into addition, multiplication, RAM, etc. This is also fundamental CS. How to build these gates from transistors (or vacuum tubes, or Tinkertoys) is not, and should be left to the engineers. If you don't understand how a functioning computer is built out of logic components, you have no theoretical basis for why one operation should be slower than another, or why there is limited memory.
Without this base, all programs that work are equally good, and if one works slower than another in practice, or can't run because it needs too much memory, it is merely a quirk of the hardware design. A machine could be built which always sorts any billion entries in the same amount of time as it takes for it to add two 32-bit numbers; machines are built which take the same amount of time to multiply as to add; but addition is fundamentally simpler and faster than multiplication, and adding two 32-bit numbers is certainly fundamentally faster than sorting a billion entries. The supersorter is a quirky machine, but without knowing about the gates from which all digital computers are built, you have no valid mathematical basis for saying so. Similarly, if you don't know anything about machine language, you have no reason for saying why one HLL program should run faster than another, or which will consume more memory. The heart of CS is the interaction between the gate logic and the data fed into the gates. Anyone who doesn't understand the fundamentals of this interaction is not remotely a computer scientist. Learning a machine language is making one case study. --

Experience from teaching (Score:4)

The main advantage of Java over C and C++ (and the reason why C and its derivations are discontinued as teaching tools in my university) is its relative platform-independence. When you want to test the programs that people have written at home, it's a real pain in the ass to get their Borland C++ programs running under Linux, you know. This disappears with Java. On the other hand, Java is not the most highly structured language, especially in recent versions. That greatly lessens its didactic qualities; I have had several students here who started to experiment with all sorts of arcane features like inner classes and operator overloading without learning how to write good programs first. It's a bit like comparing Niklaus Wirth's original Pascal to Borland Delphi.
Delphi is more powerful, but you need a thorough knowledge of the class hierarchy, and in order to deliver good OO programs you have to be a good OO programmer beforehand. Therefore, I now prefer either more systematic languages like Eiffel, or script-like languages like Python - the first for their higher level of abstraction and cleaner design, the latter for their greater ease of use and wider field of applications. Both are, in my opinion, better suited as didactic tools for learning OO programming. And BTW, over here in Germany the high dependence on symbols such as {} or [] or /**/ is a didactic problem in itself because these aren't so easily reached on a German keyboard. This may sound harmless, but we get endless complaints from people who hate to perform strange Alt+Key acrobatics to get a simple thing like a curly brace.
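The type-switch anti-pattern called out in the thread (a base class that enumerates all of its derived types and switches on a tag) versus ordinary polymorphic dispatch can be made concrete with a minimal sketch. Python is used here purely for illustration; the same contrast holds for C++ virtual functions or Java methods.

```python
# Anti-pattern from the post: the base class knows every derived kind
# and switches on a type tag to pick behaviour. Adding a new shape
# means editing this switch.
class TaggedShape:
    def __init__(self, kind):
        self.kind = kind

    def area(self):
        if self.kind == "square":
            return self.side * self.side
        elif self.kind == "circle":
            return 3.14159 * self.radius ** 2
        raise ValueError("unknown kind: %r" % self.kind)

# Polymorphic version: each subclass carries its own behaviour, and the
# base class never needs to enumerate its descendants.
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2
```

New shapes are added by writing a new subclass, with no edits to existing code; that is the point the poster makes about virtual functions.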
http://slashdot.org/story/01/06/11/2021220/java-as-a-cs-introductory-language
iCelGameClient Struct Reference

The main object to manage the client game logic.

#include <physicallayer/network.h>

Detailed Description

The main object to manage the client game logic. You can use it to:
- update the data of the player.
- convert a server clock or a server entity ID into client values.
- tell the system when the player is ready to play.
- send client events.
- set some options on the client.
- get some network statistics.

Definition at line 519 of file network.h.

Member Function Documentation

- Convert a time from the server clock to the client clock.
- Return the entity on the client side that is network linked with an entity with ID entity_id on the server side.
- Return the ID of the entity on the client side that is network linked with an entity with ID entity_id on the server side.
- Return the network stats of this client.
- Return the current player associated with this client.
- Ask the game client to send an event to the server.
- Register the manager of this client. Only one manager can be registered at a time.
- The maximum bandwidth of the network data transmission in bytes per second. A bandwidth of 0 means that it is unlimited.
- Specify the maximum frequency at which the client can send data.
- Indicate that the client has loaded everything and is ready to play.
- The client will consider itself connected to the server until the connection is broken during the specified time.
- Update the data of the current player. The server will check if the new data are valid, and a subsequent celGameClientManager::PlayerUpdated will be called with the validated data.

The documentation for this struct was generated from the following file: network.h

Generated for CEL: Crystal Entity Layer by doxygen 1.4.7
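The actual interface is C++; as a rough illustration of what the two conversion methods above do (clock conversion and server-to-client entity ID mapping), here is a hypothetical Python sketch. The `clock_offset` field and the mapping dict are assumptions for illustration, not part of the CEL API.

```python
class GameClientSketch:
    """Illustrative stand-in for iCelGameClient's conversion methods."""

    def __init__(self, clock_offset, server_to_client_ids):
        # clock_offset: client_clock - server_clock, estimated at connect time
        self.clock_offset = clock_offset
        # server entity ID -> client-side entity ID (network-linked pairs)
        self.server_to_client_ids = server_to_client_ids

    def convert_server_clock(self, server_time):
        """Convert a time from the server clock to the client clock."""
        return server_time + self.clock_offset

    def get_mapped_id(self, server_entity_id):
        """Return the client-side entity ID linked with a server entity ID."""
        return self.server_to_client_ids[server_entity_id]
```

The real methods presumably also handle unknown IDs and clock drift; this sketch only shows the shape of the conversions.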
http://crystalspace3d.org/cel/docs/online/api-1.0/structiCelGameClient.html
On 13/02/2017 at 13:03, xxxxxxxx wrote:

I would like to know if it's possible to set a custom left/right click menu in a GeDialog when the mouse is over a certain ID. I know that for doing such things we normally do this in the Message function of the GeDialog, but we can't know on which ID the left button is pressed, so for me that is not a working solution. Typically I want to do this, but only on one ID. Or at least I thought something like that would work, but sadly no. It looks like when the left button is pressed, the click is not evaluated, so there is no control ID.

    def Message(self, msg, result):
        res = c4d.BaseContainer()
        if msg == c4d.MSG_DESCRIPTION_COMMAND:
            id_ui = result['id'][0].id
            c4d.gui.GetInputState(c4d.BFM_INPUT_MOUSE, c4d.BFM_INPUT_MOUSERIGHT, res)
            if res[c4d.BFM_INPUT_VALUE] and id_ui == 10:
                print 'a'
                return True
        return c4d.gui.GeDialog.Message(self, msg, result)

Thanks in advance.

EDIT: I'm afraid the only way is to set a GeUserArea and do my stuff in the Message function of this GeUserArea; hope there is a workaround.

On 14/02/2017 at 02:03, xxxxxxxx wrote:

Hi,

As shown in the post you link, you have to catch the BFM_INPUT message in the dialog's Message() and check BFM_INPUT_CHANNEL for BFM_INPUT_MOUSERIGHT. Then to test if the right click is on a specific gadget, retrieve its position/size with GeDialog.GetItemDim() and compare these with the mouse position (BFM_INPUT_X, BFM_INPUT_Y). Note it's only possible to catch right clicks inside modal dialogs. Also, MSG_DESCRIPTION_COMMAND is only for plugins that have a description in the Attribute Manager.

On 14/02/2017 at 07:29, xxxxxxxx wrote:

Sadly it's not applicable to me since I have a highly dynamic UI.
(custom size of bitmaps / custom number of bitmaps, and everything is in a custom number of tabs which are in a scroll group :p) So retrieving the x/y for each of my bitmaps will be a bit hard. I guess I will make a custom button or use another hotkey. Btw, is there a way to remove the default menu and have the click evaluated in the Command function? Thanks anyway.

On 15/02/2017 at 04:22, xxxxxxxx wrote:

Originally posted by xxxxxxxx: Btw is there a way for removing the default menu and having the click evaluated in the command function??

The default right-click menu can't be disabled in asynchronous dialogs. Command() is limited to only some actions made on gadgets. Message() is more versatile.
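The approach suggested in the thread — compare the mouse position from BFM_INPUT_X/BFM_INPUT_Y against the rectangle returned by GeDialog.GetItemDim() — boils down to a point-in-rectangle test. A minimal sketch of that test outside Cinema 4D (the dict with 'x'/'y'/'w'/'h' keys mirrors what GetItemDim conceptually reports; this helper is illustrative, not part of the c4d API):

```python
def mouse_over_gadget(mouse_x, mouse_y, gadget_dim):
    """Return True if the mouse position falls inside a gadget rectangle.

    gadget_dim: dict with 'x', 'y' (top-left corner) and 'w', 'h'
    (width, height), as GeDialog.GetItemDim() conceptually provides.
    """
    x, y = gadget_dim["x"], gadget_dim["y"]
    w, h = gadget_dim["w"], gadget_dim["h"]
    return x <= mouse_x < x + w and y <= mouse_y < y + h
```

Inside the dialog's Message(), one would loop over the gadget IDs of interest, fetch each rectangle, and run this test against the right-click coordinates before showing a custom popup.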
https://plugincafe.maxon.net/topic/9964/13416_custom-left-right-menu-gedialog-command-function
Let's write something closer to real C++ code. I've put the comments inside the code for convenience.

P.S. The grammatical error in Elysia's sig always bothers me. ("every thing" should be "everything")

Code:
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    // In C++ we don't declare all variables up front at the start of our function.
    // Instead the normal thing to do is to declare them near their point of first use.
    // That way, if we were to exit this function, say when the user entered a negative
    // number, then the program wouldn't have wasted time initialising the vector. Not too
    // important here obviously, but it can become very important in a larger program.
    int numberOfElements = 0;

    // A while loop was not appropriate here. That's because you intended the loop to
    // always be run at least once. The way to do that is not to assign a value to make
    // the condition true beforehand, but to make it a "do .. while" instead. That's what
    // they're for! In this instance a for (;;) loop with a break statement would work at
    // least as well too though, and save testing for the same condition twice.
    do
    {
        cout << "How many numbers will you be entering?" << endl;
        cin >> numberOfElements;
        cin.ignore();
        if (numberOfElements <= 0)
        {
            // You had two semi-colons on this line before.
            cout << endl << "Error... Please enter a positive integer" << endl;
        }
    } while (numberOfElements <= 0);

    cout << " Enter the numbers:\n\t\t\t"; // Why use \n all of a sudden instead of endl? endl doesn't have to go at the end

    // The vector only gets initialised once we reach this line.
    vector<int> array;

    // vectors already keep track of how many items they contain. You're not supposed to
    // also track it separately in a variable of your own. Simply use the .size() method to find
    // out how many items it holds. arrayNumber and the finished flag were not needed.
    // The cast avoids a signed/unsigned comparison warning; the loop above
    // guarantees numberOfElements is positive by this point.
    while (array.size() < static_cast<size_t>(numberOfElements))
    {
        int temp; // Nothing outside this loop needs to even know temp existed, so here is a good place for it.
        cin >> temp;
        array.push_back(temp);
        cin.ignore();
    }

    // sum and loopNumber can be declared here. As loopNumber is just a for-loop
    // counter, the normal thing to do is to declare it inside the for loop. Personally
    // I would use a shorter name.
    float sum = 0;

    // It's safer to use array.size() here in case the code changes later.
    for (size_t loopNumber = 0; loopNumber < array.size(); loopNumber++)
    {
        sum += array[loopNumber];
    }

    // Serious C++ programmers would replace the loop above with one call to std::accumulate
    // Also pay attention to the clean use of whitespace in this code vs what was posted earlier
    cout << "Average equals: " << sum / numberOfElements;
    cin.get();
}

You don't have to feed Elysia's ego anyway
http://cboard.cprogramming.com/cplusplus-programming/123843-stl-vector-problem-code-2.html
US6850987B1 - System for multipoint infrastructure transport in a computer network - Google Patents

Publication number: US6850987B1 (application US09412815 / US41281599A). Authority: US. Grant status: Grant. Prior art keywords: group, mint, node, nodes. Classification: broadcast or multicast traffic.

This application claims priority from co-pending U.S. Provisional Patent Application 60/137,153 filed on Jun. 1, 1999. This application is related to U.S. patent application Ser. No. 09/323,869 entitled "PERFORMING MULTICAST COMMUNICATION IN COMPUTER NETWORKS BY USING OVERLAY ROUTING" filed Jun. 1, 1999 (hereinafter "McCanne '869") and to U.S. patent application Ser. No. 09/384,865 entitled "SYSTEM FOR BANDWIDTH ALLOCATION IN A COMPUTER NETWORK" filed Aug. 27, 1999 (hereinafter "McCanne '865"). Each of these applications is hereby incorporated by reference as if set forth in full in this document.

This invention relates generally to the field of computer networks, and more particularly, to a multipoint transfer protocol for use in a computer network.

As the Internet gains in popularity it is desirable to provide for "multicasting" of information, such as multimedia information, over the Internet. Multicasting is the process of transmitting information from a host on a data network to a select plurality of hosts on the data network. The select plurality is often referred to as a "multicast group." While unicast delivery of data has enjoyed tremendous success as the fundamental building block of the Internet, multicasting has proven far more complex and many technical barriers remain that prevent multicasting from being deployed across a wide area.
For example, interdomain multicast routing has yet to be successfully realized and there are many reasons to believe that multicast, in its present form, may never be universally deployed throughout the Internet. On the other hand, multicasting, when restricted to a singly administered network domain, has been much easier to configure and manage, and for some applications, may provide acceptable performance.

One problem associated with current multicasting techniques, even in singly administered network domains, is that as group members come and go there is no delivery mechanism which assures that information will be reliably delivered to all current group members. In addition, there is generally no delivery mechanism that assures efficient routing of the information throughout the multicast group. Because of the lack of such a delivery mechanism, the use of multicasting has been largely restricted to applications where reliable delivery and efficient routing are not required.

The present invention provides a method and apparatus for implementing a Multipoint Infrastructure Transport (MINT) protocol in a data network. The MINT protocol provides a reliable information delivery mechanism between a single node in the data network and all other infrastructure as well as end-host nodes in the data network that are subscribed to a particular group. The present invention is suitable for use with groups formed using IP Multicast routing protocols like sparse mode PIM or core based trees (CBT), or in other multicast protocols wherein the multicast group has an associated rendezvous point or node. An example of such a protocol is described in McCanne '869, wherein a description of an Overlay Multicast Network (OMN) is disclosed.

One embodiment of the present invention provides a method for distributing data in a data network. The data network connects a plurality of nodes and at least a portion of the plurality of the nodes form a multicast group.
One of the nodes in the multicast group is designated as a rendezvous node. The method includes a step of maintaining a data store containing a group state at each of the nodes in the multicast group. State updates received at the rendezvous node are used to update the group state in the data store at the rendezvous node. The state updates are propagated, using a reliable protocol, from the rendezvous node to the other nodes in the multicast group. Finally, the group state in the data stores at the other nodes in the multicast group is updated.

In another embodiment of the present invention, a processing agent for processing data at a node in a data network is provided. The data network connects a plurality of nodes and at least a portion of the plurality of the nodes form a multicast group. One of the nodes in the multicast group is designated as a rendezvous node. The processing agent comprises a state memory and a protocol processor. The protocol processor has logic to couple to a selected node in the data network and has logic to transmit and receive data with other processing agents in the data network over a data channel using a reliable protocol. The protocol processor also couples to the state memory and has logic to store and retrieve the data to and from the state memory, respectively.

In one embodiment, the present invention provides a method and apparatus for implementing a MINT protocol in a data network to provide a reliable information delivery mechanism between a sender node in the data network and members of a multicast group, infrastructure and/or end-hosts in the data network. Using MINT, senders associate named values to a multicast group which are published into and across the data network, thereby allowing other group members as well as network entities to query this "database" of distributed state. Each tuple in the database, called a "mint", is identified by its owner (the multicast sender), name and multicast group.
The mints are disseminated reliably to all parts of the network with active group participants. Preferably, mints flow only to routers that fall along a path from the source to the set of active receivers for that group. This results in efficient routing of the MINT information, which is an advantage over prior systems that operate by flooding the entire network with information without regard to efficient routing and distribution. An end host may query the multicast subsystem to discover and/or enumerate all known mints published by each owner. In turn, the mint values can be queried by reference to the name/owner, and the agent performing the query can be asynchronously notified when the owner modifies the values.

In one embodiment, specific mints are reserved for system-specific functions that, for instance, map a group to an application type or describe the attributes of a group so that the group can be mapped into locally defined traffic classes in different parts of the network. For example, if a transmitted data stream requires application-level processing and/or traffic management, a special "setup mint" provides the requisite information and precedes the transmission of data.

In another embodiment, an information source can use the MINT protocol to publish mints that annotate data streams injected into the group. Specialized packet forwarding engines, located at each node on the multicast tree for the group in question, process the received data streams based on the stream annotations. For example, the packet forwarding engines can allocate network bandwidth to the data streams based on the stream annotations.

Coupled to each of the nodes of the network 200 are MINT processing agents 232, 234, 236, 238, 240 and 242. The MINT processing agents are shown as being external to the routing nodes; however, the MINT processing agents can be incorporated within each of the routing nodes.
The MINT processing agents receive and transmit information via their associated node to implement the MINT protocol. The network 200 is capable of forming multicast groups as in, for example, IP Multicast routing protocols like sparse mode PIM or core based trees, wherein the multicast group has an associated rendezvous point or node.

The MINT-PM 302 couples to a routing node in the data network via link 308. The MINT-PM uses the link 308 to communicate with the routing node and to form a MINT channel that allows the MINT processing agents in the data network to communicate with one another. For example, the MINT channel is used to transmit and receive information between the MINT processing agents and/or between the MINT processing agents and clients, information sources and any other end-hosts in the data network. The data store 304 couples to the MINT-PM and stores the mint information which forms a database of distributed state.

The optional packet forwarding engine 306 can be used when the MINT processing agents are used to regulate traffic streams based on mint information as described in McCanne '865. The packet forwarding engine 306 receives data packets 310 transmitted on the network 200 and processes the received data packets to form an output data stream 312 for transmission on the network. The packet forwarding engine 306 couples to the MINT-PM 302 and the data store 304, to exchange information that is used to determine how the packet forwarding engine 306 processes the received data packets. For example, mint information retrieved from the data store 304 is used by the packet forwarding engine 306 to determine bandwidth allocations on the data network for the received data packets 310. In another example, mint information retrieved from the data store 304 is used by the packet forwarding engine 306 to schedule packets in the output data stream 312 based on priority information contained in the mint information.
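As a rough illustration of the scheduling idea just described — a forwarding engine ordering queued packets by a priority derived from mint information — here is a minimal sketch. All names here are assumptions for illustration, not code from the patent.

```python
import heapq

class PrioritySketchScheduler:
    """Orders packets by mint-derived priority (lower value = sent first)."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within a priority level

    def enqueue(self, packet, mint_priority):
        heapq.heappush(self._queue, (mint_priority, self._seq, packet))
        self._seq += 1

    def next_packet(self):
        """Pop the highest-priority (lowest-valued) packet."""
        return heapq.heappop(self._queue)[2]
```

A real engine would also enforce per-stream bandwidth allocations; this sketch only shows the priority-ordering part.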
In another embodiment of the MINT processing agent 300, the packet forwarding engine 306 is omitted from the MINT processing agent and is assumed to exist within the incident native router. In such an embodiment, the MINT processing agent is used to process and transmit mints in the data network but performs no processing on data packets transmitted in the network. Thus, the MINT processing agent would be limited to the tasks of processing mints and providing the reliable delivery of mints in the data network.

The MINT Protocol

The MINT protocol provides a group-oriented, reliable information delivery mechanism to the subset of nodes in a data network that span the multicast routing tree supporting the corresponding group. In addition, end-host sources may publish data into the network by directing MINT instructions to the rendezvous point for the group in question. The MINT protocol provides a mechanism whereby a set of published values is maintained at all MINT processing agents associated with active nodes in the spanning tree as members come and go. Additional features of the MINT protocol provide for queries by arbitrary network clients or management agents to obtain the most recent set of published values.

A MINT channel is associated with each active multicast group. The MINT channel might be a reliable control connection using TCP that adheres to a MINT access protocol which comprises a number of MINT instructions. Applications publish named data tuples called "mints" into the MINT channel by directing MINT instructions to the rendezvous point; in turn, the MINT-PM at the rendezvous point ensures that each such mint is propagated to all MINT processing agents associated with routing nodes that are incident to the distribution tree for that group.
This allows edge applications to publish state into the network and communicate with application-level processing agents (i.e., plugins) that may exist in the network or may be provided as part of the MINT processing agents. For example, the packet forwarding engines may run application-level processing agents that can communicate with edge applications via the MINT channel, to allocate network bandwidth to the edge applications.

The MINT protocol also provides a well-defined communication abstraction for disseminating mints along the paths of the spanning tree in a dynamic fashion as sub-trees come and go. Whenever a router, or node, grafts on a branch to a given group's spanning tree, all the mints for that group are flooded, or propagated, along the newly created branch. As a result, state is reliably propagated to all MINT processing agents along the newly grafted branch.

The Data Model

The data model assumed by the MINT protocol is a persistent data store of named tuples or mints. An origin node (or owner) may publish mints into the network or may relinquish its attachment to the persistent data store using the MINT access instructions. If a node fails or becomes otherwise disconnected from the network, all of its published bindings are expunged from its associated data store when the corresponding leg of the multicast routing tree (for the group in question) is torn down. Because mints are persistent, the MINT processing agent may run out of resources to maintain all the mints published into the network. In this case, the mint publishing process fails. To notify the end clients of this failure, a special, reserved error mint is attached to the group and has priority over all existing mints. Static priorities may be assigned to mints. This controls the relative ordering of mints as they are propagated between MINT processing agents as legs of the distribution tree come and go.
Each mint is named with a structured hierarchical name, thereby providing a rich mechanism for reviewing a class of mints by prefix, regular expression or other reviewing technique. MINT data names are represented as textual strings while MINT values are arbitrary binary data. The MINT information 400 is comprised of mints having a group 401, origin 402, name 404, value 406 and priority 408. Since a node in the data network may be associated with one or more multicast groups, the MINT information may contain mint parameters associated with one or more multicast groups.

The Namespace

The names that index the MINT data store naturally form a namespace. Associated with each group is an autonomous namespace, i.e., each group's mints are completely independent of all other groups. To support rich and efficient queries over these namespaces, names are represented in a structured yet simple form. Specifically, the names form a hierarchical namespace, wherein the hierarchy demarcations are denoted by a "/" separator, just as the Unix file system arranges directory names into a hierarchy and uses the "/" separator to indicate the relative elements of the path through the tree-based hierarchy. The hierarchical namespace representation allows matching queries to be run against the existing namespace. For example, to build an advertisement-insertion service, a broadcast system might publish advertisement information as a series of mints under the prefix "/ad/info". Thus, a client might want to query the database to see what names exist under this prefix with a "globbing" match, e.g., "/ad/info/*". Likewise, a network agent might want to be notified whenever this information changes, so that an event callback can occur when any mints that match "/ad/info/*" are created, deleted, or modified.
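The glob-style query over the hierarchical namespace can be sketched with Python's fnmatch as an illustrative stand-in for whatever matcher an implementation would use. Note that fnmatch's `*` also crosses `/` boundaries, which is good enough for this illustration.

```python
from fnmatch import fnmatchcase

def query_names(names, pattern):
    """Return all mint names matching a glob pattern such as '/ad/info/*'.

    names: an iterable of hierarchical mint names ('/'-separated).
    The result is sorted so queries are deterministic.
    """
    return sorted(n for n in names if fnmatchcase(n, pattern))
```

An event-callback variant would simply re-run the match whenever a mint whose name matches the registered pattern is created, deleted, or modified.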
The Protocol Module

Each MINT processing agent in the network includes a MINT protocol module (MINT-PM) that maintains the data store, indexed by origin, group and name, of all published mints known to that agent. The publisher of a mint is called its origin or owner. While an origin may be located anywhere in the data network, it must publish mints for a particular group via that group's rendezvous point.

In one embodiment of the invention, each mint is published indefinitely and there is no refresh process. This is possible because the MINT protocol (in association with the underlying routing protocol) allows the group state to be maintained consistently across the group spanning-tree. For example, when a failure occurs in some routing protocols, the routing system responds by tearing down, and then re-establishing the group as necessary; consequently, any mints attached to the group in question will be propagated by the MINT protocol as the group is reconstructed. Thus, the group state can be managed consistently and there is no need for a refresh/timeout process. The amount of mint state that any single node can inject into the network is limited by a configurable parameter.

The data stored for each tuple associated with any given group includes the following elements: There is no need for a sequence number or timestamp because the data store at each node will provably converge to the data store maintained at the rendezvous point for the group.

There are three types of protocol messages:
- publish messages cause a mint to be created, propagated, and maintained across the broadcast tree spanned by the group in question,
- relinquish messages explicitly tear down a mint binding on behalf of its origin,
- query messages allow the MINT data store to be queried for name and value information.
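A mint, as described, is a tuple indexed by group, origin (owner), and name, carrying a value and a static priority. A minimal in-memory sketch of such a data store, supporting the publish/relinquish/query message types listed above (the class and method names are illustrations, not from the patent):

```python
class MintStore:
    """Data store of mints, indexed by (group, origin, name)."""

    def __init__(self):
        self._mints = {}  # (group, origin, name) -> (value, priority)

    def publish(self, group, origin, name, value, priority=0):
        """Create or update a mint binding."""
        self._mints[(group, origin, name)] = (value, priority)

    def relinquish(self, group, origin, name):
        """Tear down a mint binding on behalf of its origin."""
        self._mints.pop((group, origin, name), None)

    def lookup(self, group, origin, name):
        """Query a mint's value, or None if no such binding exists."""
        entry = self._mints.get((group, origin, name))
        return entry[0] if entry else None
```

Because each binding is keyed by its full (group, origin, name) identity and updates simply overwrite, replaying the same publish stream at any node converges to the same store — the convergence property the patent relies on instead of sequence numbers.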
These messages are exchanged with peer MINT-PMs and have the following abstract form:

The Publish Message

The reliability of the present invention is based on a novel delivery mechanism tied to the group membership protocol. Since the MINT protocol is run on a per-group basis, we will refer to the group in question generically as "group G." A peer from which a MINT-PM receives published mints will generally be on the reverse-path shortest route back to the rendezvous point. This may not always be the case, as the path is dependent on the underlying routing processes. This peer can be referred to as the "parent" peer. At any point in time, published mints may be received from different parents, as a result of the routing changes that may occur with changes in the spanning tree for group G. All of these mints are maintained in the data store associated with the MINT-PM.

MINT-PMs associated with peers in a group, such as group G, communicate mints with each other over the MINT channel in a unicast fashion. The MINT channel is a reliable connection, for example, a reliable TCP connection, that is congruent with the underlying router's peering relationships. When a MINT-PM receives a new mint from a MINT-PM associated with its parent peer, it enters the mint into its data store and forwards a copy of the mint to MINT-PMs associated with other peers on the current multicast spanning tree for G. For example, when a MINT-PM receives a published mint from its parent peer, it updates its data store and then publishes the mint as a parent to other MINT-PMs. Note that this action is carried out atomically against any changes to the spanning tree. The goal is to maintain the invariant that all MINT-PMs associated with peers on the spanning tree for group G reliably distribute all mints stored in the data store of their respective parent, wherein the ultimate parent is the MINT-PM associated with the rendezvous point for the group.
If the MINT-PM receives a mint (from its parent) that is already in its table, it checks to see if the data value is different. If not, it increments an error counter (accessible via a network management protocol), because the peer should have known not to send a redundant update. If the value is different, the MINT-PM updates its data store and propagates the change (by re-publishing the mint as a parent peer) to each of its child peers on the current multicast spanning tree for G. In effect, the MINT-PM issues another publish command to peer MINT-PMs, as if it were the parent. If the MINT-PM receives a mint from a peer that is not its parent for group G, then it records the mint update in a shadow table for that peer. If that peer later becomes its parent for G, then this shadow table becomes the actual data store (and any differences encountered while changing tables are treated as normal mint arrivals, changes, or deletions). If a group G node receives a graft message, and the requesting node is grafted to the group G, all mints associated with group G are sent to the MINT-PM associated with the requesting node. The mints are sent in static priority order (according to the priority field in the tuple). The collection of all known mints must be formed atomically against later mint arrivals and other state changes. If the node receives a prune message from another node in group G, then it need not do anything and must assume that the downstream peer has forgotten all the mints for group G. If a MINT-PM receives a mint from a peer that is not on the multicast spanning tree for group G, it ignores the update and increments an error counter. This is an error condition, since a peer cannot send mints for group G unless it had previously joined the group. 
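The publish-handling rules above — store and re-publish an update from the parent, count a redundant parent update as an error, and record updates from non-parent on-tree peers in a shadow table — can be sketched like this. The structure is an illustration of the described behaviour, not code from the patent.

```python
class MintPMSketch:
    """Per-group publish handling at one MINT-PM, as described in the text."""

    def __init__(self, parent):
        self.parent = parent
        self.store = {}        # (origin, name) -> value (the actual data store)
        self.shadow = {}       # peer -> {(origin, name): value} shadow tables
        self.error_count = 0   # redundant-update counter
        self.forwarded = []    # mints re-published to child peers

    def on_publish(self, peer, origin, name, value):
        key = (origin, name)
        if peer == self.parent:
            if self.store.get(key) == value:
                # Parent should have known not to send a redundant update.
                self.error_count += 1
            else:
                self.store[key] = value
                # Re-publish to child peers as if we were the parent.
                self.forwarded.append((origin, name, value))
        else:
            # Non-parent peer: remember the update in a shadow table in
            # case that peer later becomes the parent after a route change.
            self.shadow.setdefault(peer, {})[key] = value
```

When a routing change makes a former non-parent the new parent, its shadow table would be promoted to the actual data store, with any differences treated as normal mint arrivals, changes, or deletions.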
The Relinquish Message

When a mint for group G is deleted by an origin node (via the relinquish message sent to the rendezvous point), the MINT-PM at the rendezvous point removes the corresponding mint from its data store and propagates a relinquish message to each of its child peers on the current multicast spanning tree for G. When the MINT-PM receives a relinquish message for a mint from a parent peer, it consults its data store (indexed by owner and name). If a tuple with the same owner and name exists, it removes the corresponding mint from its data store and propagates a relinquish message to each of its child peers on the current multicast spanning tree for G. If no mint with that name and owner exists, an error counter is incremented to indicate the error condition. If a relinquish message is received from a non-parent peer, a shadow table is updated and will be used if that non-parent peer becomes a parent. Any events associated with the relinquishment of a mint are dispatched when the mint is deleted from the data store. The following description will present transaction examples using the MINT protocol in accordance with the present invention. The transaction examples assume that network groups may be formed by routing protocols that use a rendezvous point (RP) to serve as an anchor for the group. During typical network operation, each routing node can directly or indirectly access a specific group and its associated RP. At block 604, the information source 220 publishes a mint to group A. For example, in one embodiment, information source 220 transmits a publish command to group A (which includes mint information), to node 202. As part of the underlying routing protocol, node 202 is aware that node 206 is the RP for the group A. As a result, node 202 routes the publish instruction toward the RP where it is eventually received.
In another embodiment, the information source 220 can query the network, using a directory service for example, to determine the location of the RP for group A. Once the location of the RP is known, the information source may transmit the publish command directly to the RP. The transaction path 502 shows the route of the publish command from the source 220 to the RP. At block 606, the RP receives the publish command, where it is forwarded to the MINT processing agent 236 as shown by transaction path 504. At block 608, the MINT processing agent 236 updates its data store with the new mint information. This operation occurs when the MINT-PM 302 receives the published mint over the MINT channel 308 and uses the mint information to update its associated data store 304. At block 610, the updated mint information is propagated to other MINT processing agents in group A, namely, agents 232, 234 and 240. To accomplish this, the MINT-PM associated with the RP distributes the new mint information to the other MINT processing agents in the group A via the MINT channel. The MINT processing agent 236 publishes the new MINT information to group A and the update follows the group A routing as determined by the underlying routing protocol. For example, transaction path 506 shows the routing of the newly published mint information from the MINT processing agent 236 to the other MINT processing agents in group A. Consistency is maintained by following the mint distribution rules as set forth above. As a result, the mint information published by information source 220 is reliably distributed to all the MINT processing agents in group A. At block 802, the current membership of group A includes nodes 202, 204, 206 and 208 as shown at 750 in FIG. 7. At block 804, the client 228 transmits a request to node 212 to join group A. The client 228 may wish to receive information currently being multicasted over group A.
At block 806, the node 212 forwards the join request to node 206, which is already a member of group A. The join request is shown by transaction path 702. When node 206 receives the join request from node 212, node 212 will be included in the spanning tree for group A, so that the group A membership is shown by 752. The MINT processing agent 242 which is associated with node 212 also becomes a member of group A. At block 808, node 206 notifies the MINT processing agent 236 that node 212, and its associated MINT processing agent 242, have joined group A. This is shown by transaction path 704. At block 810, the MINT processing agent 236 propagates mints relating to group A from its MINT data store to newly added MINT processing agent 242. The mints are propagated over the MINT channel when the MINT-PM of agent 236 publishes mint information to the MINT-PM of agent 242. This is shown by transaction path 706. At block 812, the MINT processing agent 242 updates its data store with the new MINT information so that all of the MINT processing agents in group A have identical group A MINT information. Note that MINT agent 236 is the parent peer of MINT agent 242. If the MINT processing agent 242 was a parent peer to other newly attached MINT processing agents, it would re-publish the new mint information to those other MINT processing agents. In the above example, only one node is added to the group A, which was then subsequently updated with mint information. In a situation where several nodes are added to the group, the mint propagation may follow the route established as a result of the join request. For example, in one embodiment, the mint information propagates in the reverse direction (compared to the join request); hop by hop starting from the node in the group that received the join request back to the join requestor.
Each MINT processing agent in the reverse hop by hop route is updated until all the MINT processing agents associated with the new branch of the spanning tree for the group are updated. With all MINT processing agents having the identical mint information relating to group A in their respective data stores, the source 220 desires to publish updated mint information to group A. At block 814, the source 220 transmits a publish command to the RP via node 202. This is shown by transaction path 708. As before, the source may use one of several ways to transmit the publish command to the RP for group A. At block 816, the RP receives the publish command from the source 220. At block 818, the RP notifies the MINT processing agent 236 of the publish command (path 708) and the MINT processing agent 236 receives the mints and updates its data store based on the mints in the new publish command. At block 820, the MINT processing agent 236 propagates (as parent) the new mint information to all peer MINT processing agents (child peers) associated with the group A. One way this can occur is when the MINT processing agent 236 issues a publish command over the MINT channel to other members of group A, as shown by transaction path 710. As a result, the new mint information is reliably propagated to the nodes 202, 204, 208 and 212, which are all part of group A and child peers to agent 236. In this example, the new mint information published by agent 236 only need propagate one hop to reach the child peers as shown in FIG. 7. However, it will be apparent to those with skill in the art that the child peers can re-publish the mint information (as parents) to other nodes in group A. Thus, if the group A spanning tree includes large branches of interconnected nodes, the new mint information would propagate hop by hop (from parent to child) down the spanning tree to all nodes (and MINT processing agents) associated with group A. 
At block 822, the MINT processing agents 232, 234, 240 and 242 all receive the new mint information and update their associated data stores with the new mint information. Thus, in accordance with the present invention, the newly published mint information is reliably distributed to all MINT processing agents associated with active nodes in the spanning tree of group A. The method 800 also illustrates how the mint information can be queried in accordance with the present invention. At block 824, client 228 wishes to query mint information associated with group A. The client 228 transmits a query instruction to node 212 that specifies group A as the group of interest. The type of query used will return all known names (and respective origins) of data bindings that have been published into the network for group A. For example, the name based query instruction [query_name(A)] above will return this information. At block 826, the MINT processing agent 242 receives the query instruction. This is shown by transaction path 712. At block 828, the MINT processing agent 242 responds with the requested mint information by transmitting the result of the query to the client 228 as shown by transaction path 714. This occurs when the MINT-PM at agent 242 retrieves the requested information from its associated mint data store and transmits the result over the MINT channel to the client 228. At block 830, the client 228 receives the requested mint information, and as a result, the client 228 can use the returned mint information to determine group A status or take action to receive a data stream transmitted in group A. Referring again to the figures, at block 906, the node 212 notifies agent 242 that client 228 is terminating its membership from group A, and thus node 212 will be pruned from the group. At block 908, since node 212 is to be pruned from the group A, agent 242 discards mints relating to group A.
Note, however, that if node 212 is a member of other groups, mints relating to those other groups will be maintained by agent 242. In other embodiments, agent 242 may maintain mints after leaving the group in accordance with another aspect of the invention as described in a section below. At block 910, the node 212 propagates the leave request toward the RP (node 206) where it will eventually be received. The RP notifies agent 236 of the leave request (by client 228) as shown at transaction path 1004. At block 912, the agent 236 maintains its store of mints for the group A since it is associated with the RP for the group. As long as group A exists, the agent 236 will maintain its data store of mints, in case it is required to propagate them to other group members. At block 914, the RP (node 206) processes the leave request from client 228, and as a result, the node 212 is pruned from the group A. After this occurs, the resulting group comprises nodes 202, 206 and 208 as shown by the group A of FIG. 11. Referring now to the figures, at block 916, the information source 220 publishes a new mint relating to the group A. The node 202 receives the publish command and routes it toward the RP. As discussed above, the information source may find the location of the RP and issue the publish command directly to the RP. Alternatively, the node 202 may know the location of the RP, as a result of the underlying group routing protocol, and therefore, route the publish command toward the RP. This transaction is shown at transaction path 1102. At block 918, the RP receives the publish command and forwards the published mints to the MINT processing agent 236, as shown at transaction path 1104. At block 920, the MINT processing agent 236 updates its data store with the new mint information. At block 922, the MINT processing agent propagates the new mint information to the other MINT processing agents in the group A, namely, agents 232, 234 and 240. This is shown by transaction paths 1106.
The mint propagation occurs when the agent 236 issues a publish command with the new mint information to other nodes in the group A. As a result of the client 228 terminating its attachment to the group A, and consequently node 212 being pruned from the group A spanning tree, the MINT processing agent 242 will no longer be updated with new mint information for group A. However, the MINT protocol will continue to reliably update the mint data stores for MINT processing agents that are active members of the group A. Should node 212 request to join the group A in the future, the updated mints would again be propagated to node 212 and thereafter to MINT processing agent 242. In another embodiment, the MINT protocol operates to overcome problems associated with excessive routing fluctuations. During excessive routing fluctuations, where particular nodes repeatedly leave and then re-join the group, the mint information in the data stores associated with those nodes is repeatedly discarded and repopulated. This results in excessive transmission of mint information on the data network. To avoid this problem, enhancements to the MINT protocol avoid discarding and repopulating the data stores as a result of excessive routing changes. In one embodiment, a MINT digest is computed over the mints in the data store. The MINT digest may represent all mints in the data store or selected portions of the mints in the data store. Instead of discarding the mint information when a node leaves the group, the mint information associated with that node is preserved in the data store along with its associated MINT digest. When that node rejoins the group, it transmits its MINT digest to the group. If the MINT digest at the node is different from the current MINT digest for the group, then the node is updated with a new copy of the mint information. The node then updates its mint data store and its associated digest. 
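The digest mechanism described above can be sketched as follows. The hash function and table layout are illustrative assumptions — the patent does not specify a digest algorithm — but the control flow follows the text: a leaving node preserves its mints together with a digest, and on rejoin the group compares digests and retransmits only on a mismatch.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <map>
#include <string>

// Simplified mint table: (owner/name) -> value. Field layout is an assumption.
using MintTable = std::map<std::string, std::string>;

// Illustrative digest over all mints in a data store. Any stable, order-
// independent-enough combination works for the sketch; std::map iteration is
// already ordered, so a simple incremental combine is deterministic.
std::size_t mintDigest(const MintTable& mints) {
    std::size_t h = 0;
    for (const auto& kv : mints) {
        h ^= std::hash<std::string>{}(kv.first + "=" + kv.second)
             + 0x9e3779b9 + (h << 6) + (h >> 2);
    }
    return h;
}

// Decision made when a node rejoins: retransmit the group's mints only if the
// digest the node preserved no longer matches the group's current digest.
bool needsRetransmit(const MintTable& groupMints, std::size_t nodeDigest) {
    return mintDigest(groupMints) != nodeDigest;
}
```

In other words, if nothing changed while the node was away, the preserved data store is still valid and no mint traffic is generated; only a changed group state triggers a fresh copy.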
If the MINT digest from the node matches the MINT digest for the group, then it is not necessary to transmit a new copy of the mint information to the node. Therefore, the enhanced MINT protocol averts the excessive transmission of mint information in the network. In another embodiment, a time parameter is used to prevent the resources of the data stores from being utilized to store outdated mint information. When a node leaves a group, the MINT processing agent associated with that node uses the time parameter to determine how long to preserve the mint information in the data store. The time parameter value can be determined by a network administrator. By preserving the data store and its associated MINT digest during the time period defined by the time parameter, excessive transmission of mint information can be prevented as discussed above. However, once a node leaves a group and the expiration of a time period defined by the time parameter occurs, the mint data store can be purged of mints for that group, thereby freeing up resources of the data store. Therefore, the MINT processing agent preserves the data store to prevent redundant mint transmissions during network flapping, and after expiration of a selected time period, purges the data store to free up valuable resources to store additional mints. As will be apparent to those of skill in the art, variations in the above described methods and apparatus for implementing the MINT protocol are possible without deviating from the scope of the present invention. Accordingly, the disclosures and descriptions herein are intended to be illustrative, but not limiting, of the scope of the invention which is set forth in the following claims.
https://patents.google.com/patent/US6850987B1/en
In the previous lessons in this chapter, you've learned a bit about how base inheritance works. In all of our examples so far, we've used public inheritance. That is, our derived class publicly inherits the base class. In this lesson, we'll take a closer look at public inheritance, as well as the two other kinds of inheritance (private and protected). We'll also explore how the different kinds of inheritance interact with access specifiers to allow or restrict access to members. To this point, you've seen the private and public access specifiers, which determine who can access the members of a class. As a quick refresher, public members can be accessed by anybody. Private members can only be accessed by member functions of the same class or friends. This means derived classes can not access private members of the base class directly! This is pretty straightforward, and you should be quite used to it by now.

The protected access specifier

When dealing with inherited classes, things get a bit more complex. C++ has a third access specifier that we have yet to talk about because it's only useful in an inheritance context. The protected access specifier allows the class the member belongs to, friends, and derived classes to access the member. However, protected members are not accessible from outside the class. In the above example, you can see that the protected base member m_protected is directly accessible by the derived class, but not by the public.

So when should I use the protected access specifier?

With a protected attribute in a base class, derived classes can access that member directly. This means that if you later change anything about that protected attribute (the type, what the value means, etc…), you'll probably need to change both the base class AND all of the derived classes.
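The example code the passage above refers to was lost in this copy; a minimal reconstruction, consistent with the member names used in the text (m_public, m_protected, m_private), might look like this:

```cpp
#include <cassert>

class Base {
public:
    int m_public = 1;     // accessible by anybody
protected:
    int m_protected = 2;  // accessible by Base, friends, and derived classes
private:
    int m_private = 3;    // accessible by Base and friends only
};

class Derived : public Base {
public:
    int readProtected() {
        return m_protected;  // OK: derived classes can access protected members
        // return m_private; // error: m_private is inaccessible from Derived
    }
};
```

The commented-out line shows what would fail to compile: m_protected is reachable from Derived's member functions but not from outside code, and m_private is reachable from neither.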
Therefore, using the protected access specifier is most useful when you (or your team) are going to be the ones deriving from your own classes, and the number of derived classes is reasonable. That way, if you make a change to the implementation of the base class, and updates to the derived classes are necessary as a result, you can make the updates yourself (and have it not take forever, since the number of derived classes is limited). Making your members private gives you better encapsulation and insulates derived classes from changes to the base class. But there's also a cost to build a public or protected interface to support all the access methods or capabilities that the public and/or derived classes need. That's additional work that's probably not worth it, unless you expect someone else to be the one deriving from your class, or you have a huge number of derived classes, where the cost of updating them all would be expensive.

Different kinds of inheritance, and their impact on access

First, there are three different ways for classes to inherit from other classes: public, protected, and private. So what's the difference between these? In a nutshell, when members are inherited, the access specifier for an inherited member may be changed (in the derived class only) depending on the type of inheritance used. Put another way, members that were public or protected in the base class may change access specifiers in the derived class. This might seem a little confusing, but it's not that bad. We'll spend the rest of this lesson exploring this in detail. Keep in mind the following rules as we step through the examples:

- A class can always access its own (non-inherited) members.
- The public accesses the members of a class based on the access specifiers of the class it is using.
- A derived class accesses inherited members based on the access specifier inherited from the parent class. This varies depending on the access specifier and type of inheritance used.

Public inheritance

Public inheritance is by far the most commonly used type of inheritance. In fact, very rarely will you see or use the other types of inheritance, so your primary focus should be on understanding this section. Fortunately, public inheritance is also the easiest to understand.
When you inherit a base class publicly, inherited public members stay public, and inherited protected members stay protected. Inherited private members, which were inaccessible because they were private in the base class, stay inaccessible. Here's an example showing how things work: this is the same as the example above where we introduced the protected access specifier, except that we've instantiated the derived class as well, just to show that with public inheritance, things work identically in the base and derived class. Public inheritance is what you should be using unless you have a specific reason not to.

Rule: Use public inheritance unless you have a specific reason to do otherwise.

Protected inheritance

Protected inheritance is the least common method of inheritance. It is almost never used, except in very particular cases. With protected inheritance, the public and protected members become protected, and private members stay inaccessible. Because this form of inheritance is so rare, we'll skip the example and just summarize with a table:

- base public members become protected in the derived class
- base protected members stay protected in the derived class
- base private members stay inaccessible in the derived class

Private inheritance

With private inheritance, all members from the base class are inherited as private. This means private members stay private, and protected and public members become private. Note that this does not affect the way that the derived class accesses members inherited from its parent! It only affects the code trying to access those members through the derived class. To summarize in table form:

- base public members become private in the derived class
- base protected members become private in the derived class
- base private members stay inaccessible in the derived class

Private inheritance can be useful when the derived class has no obvious relationship to the base class, but uses the base class for implementation internally. In such a case, we probably don't want the public interface of the base class to be exposed through objects of the derived class (as it would be if we inherited publicly). In practice, private inheritance is rarely used.

A final example

Base can access its own members without restriction. The public can only access m_public.
Derived classes can access m_public and m_protected. D2 can access its own members without restriction. D2 can access Base's m_public and m_protected members, but not m_private. Because D2 inherited Base privately, m_public and m_protected are now considered private when accessed through D2. This means the public can not access these variables when using a D2 object, nor can any classes derived from D2. D3 can access its own members without restriction. D3 can access D2's m_public2 and m_protected2 members, but not m_private2. Because D3 inherited D2 publicly, m_public2 and m_protected2 keep their access specifiers when accessed through D3. D3 has no access to Base's m_private, which was already private in Base. Nor does it have access to Base's m_protected or m_public, both of which became private when D2 inherited them.

Summary

The way that the access specifiers, inheritance types, and derived classes interact causes a lot of confusion. To try and clarify things as much as possible: First, a class (and friends) can always access its own non-inherited members. The access specifiers only affect whether outsiders and derived classes can access those members. Second, when derived classes inherit members, those members may change access specifiers in the derived class. This does not affect the derived classes' own (non-inherited) members (which have their own access specifiers). It only affects whether outsiders and classes derived from the derived class can access those inherited members. Here's a table of all of the access specifier and inheritance types combinations:

  base specifier | public inheritance | protected inheritance | private inheritance
  public         | public             | protected             | private
  protected      | protected          | protected             | private
  private        | inaccessible       | inaccessible          | inaccessible

As a final note, although in the examples above, we've only shown examples using member variables, these access rules hold true for all members (e.g. member functions and types declared inside the class).

>>Put another way, members that were public or protected in the base class may change access specifiers in the derived class.
This is the same when we use private inheritance and the public and protected base members become private for code accessing the derived class objects, right? >>But there's also a cost to build a public or protected interface to support all the access methods or capabilities that the public and/or derived classes need. What are public or protected interfaces here? Does this mean implementing getters and setters cost a lot? >>The public can only access m_public What does 'the public' refer to in this context? >>A class accesses inherited members based on the access specifier inherited from the parent class. This varies depending on the access specifier and type of inheritance used. Shouldn't it start with "A derived class accesses inherited ..."? In the section "The protected access specifier", wouldn't it be more correct if the sentence... "The protected access specifier allows the class the member belongs to, friends, and derived classes to access the member." ... would say something like ... "The protected access specifier allows the base class the (protected) member belongs to and the base class' friends to access the (protected) member within the base class. It also allows the derived class and the derived class' friends to access the inherited (protected) member within the derived class (access of the (non-inherited/original) (protected) member of the base class from the derived class and its friends is not allowed)." The sentence... "In the above example, you can see that the protected base member m_protected is directly accessible by the derived class, but not by the public." ... could also be changed into something like this... "In the above example, you can see that the inherited protected base member m_protected is directly accessible by the derived class, but not by the public." Not sure if this disclaimer would be beneficial or just add unnecessary complexity though. It kind of bothered me there was no 'use-case' explanation for 'protected inheritance'.
This is my understanding of it... When chaining inheritance, protecting parent classes allows each successive child to access all their ancestors' non-private members, while preventing those ancestors' public interfaces from being available to users of the child class. What is the difference between private and inaccessible? 'private' here means that the member is still accessible to member functions in the derived class. 'inaccessible' means that it is not accessible to member functions (like the parent class's private members are inaccessible in the derived class). This lesson has raised a question: If I understand correctly, the protected access specifier means that I can access the inherited class's members from the inheriting class without making use of access functions, meaning that it's more performant. Why, then, would I opt to use private members over protected? I think I agree with you. Only reason to use private is to shield members from children.

#include <iostream>
using namespace std;

class baseclass {
public:
    int a;
    baseclass() { std::cout << "baseclass\n"; }
};

class deriveclass : public baseclass {
public:
    deriveclass() { std::cout << "derived class\n"; }
};

int main() {
    baseclass* a = new baseclass();     // ok
    deriveclass* b = new deriveclass(); // ok
    baseclass* aa = new deriveclass();  // ok
    deriveclass* bb = new baseclass();  // not ok
    return 0;
}

In the above programme which way to access base class function or variable? I want to access a base class property or method?
https://www.learncpp.com/cpp-tutorial/inheritance-and-access-specifiers/
You're reading the documentation for an older, but still supported, version of ROS 2. For information on the latest version, please have a look at Humble. ament_cmake_python user documentation ament_cmake_python is a package that provides CMake functions for packages of the ament_cmake build type that contain Python code. See the ament_cmake user documentation for more information. Note Pure Python packages should use the ament_python build type in most cases. To create an ament_python package, see Creating your first ROS 2 package. ament_cmake_python should only be used in cases where that is not possible, like when mixing C/C++ and Python code. Table of Contents Basics Basic project outline The outline of a package called “my_project” with the ament_cmake build type that uses ament_cmake_python looks like: . └── my_project ├── CMakeLists.txt ├── package.xml └── my_project ├── __init__.py └── my_script.py The __init__.py file can be empty, but it is needed to make Python treat the directory containing it as a package. There can also be a src or include directory alongside the CMakeLists.txt which holds C/C++ code. Using ament_cmake_python The package must declare a dependency on ament_cmake_python in its package.xml. <buildtool_depend>ament_cmake_python</buildtool_depend> The CMakeLists.txt should contain: find_package(ament_cmake_python REQUIRED) # ... ament_python_install_package(${PROJECT_NAME}) The argument to ament_python_install_package() is the name of the directory alongside the CMakeLists.txt that contains the Python file. In this case, it is my_project, or ${PROJECT_NAME}. Warning Calling rosidl_generate_interfaces and ament_python_install_package in the same CMake project does not work. See this Github issue for more info. It is best practice to instead separate out the message generation into a separate package. 
Then, another Python package that correctly depends on my_project can use it as a normal Python module: from my_project.my_script import my_function Assuming my_script.py contains a function called my_function().
https://docs.ros.org/en/galactic/How-To-Guides/Ament-CMake-Python-Documentation.html
Subdownloader with GNU license and requiring payment to use the software? Is this legal?

Hi, I just downloaded Subdownloader and I'm required to pay $10 to use it (shareware). But the license says it's freeware... Strange to have to pay for freeware? It's the first time I've faced this situation. Is this legal? The license disclaimer is: License.

Question information
- Language: English
- Status: Invalid
- Assignee: No assignee
- Last query: 2008-12-01
- Last reply: 2009-02-11

Hi, Well, I'm not sure that mixing the GNU GPL License and shareware with a limit of 30 days of use is so legal. But it's quite confusing... Under the GPL license, the program may be distributed for a fee (that's OK according to the GPL License). But having a limit of usage in days no longer seems to be software distributed under a GPL License. Under a GPL License, I'm able to pay for the software to get it (if the distributor requires so, as in this case you do) but I'm also able to give that copy that I paid for to anybody I want. I'm even able to post it somewhere and distribute it freely. Having to request an activation prevents me from doing it. That's no longer a GPL License, that's something else... I think you should clarify the license you are using. You should not mix shareware with GPL. It's one or the other but not both. If you want to keep the payment part, you should provide the software only after payment but remove the 30-day activation period. Otherwise, that's shareware. Best Regards, Joaquim

I agree with you netquim, this is a bit awkward. If you refer to the FAQ issued by the GNU project: http:// You see that requiring people to pay you after using the program, or just requiring that people notify you that they use your program, makes it non-free. However, the GPL licence is not exclusive: as a copyright holder, you can distribute under any different licence you want, including the GPL.
What you can do is write a shareware licence and distribute your Windows version under that licence, but you may not distribute a copy under the GPL and ask for activation. Hi, I agree with you. The Windows version needs to be distributed under another license. I posted the same question to the FSF, and here is the answer I got from them: "The GNU GPL is always free software. If a program author releases something under the GPL but then says "oh and by the way it's shareware", they aren't releasing it under the GPL. Typical Shareware restrictions (pay $X after Y months) are incompatible with the GPL. They're releasing it under a new license that they are making up (in a very confusing way) -- essentially just the shareware part. They shouldn't do this." Cheers, Joaquim The shareware binaries are released with a proprietary license; since we are the creators of the software we can reserve this right for us. Sorry if the GPL term has confused people: the source code is GPL, and the Linux and Mac binaries are freeware in case you would like to move to a better OS than micro$oft. Messages of the GPL in the Windows shareware version will be removed in 2.0.9. Those who would like to have Windows binaries of the program without the shareware limitation are welcome to compile the program as I did, the source code is there. Best Regards and thanks for using the program. First of all I should apologize for my words that follow, but they are the only way I can show my indignation on this topic. Sooo... I have to admit I've never seen such stupidity: releasing software under the GPL and making it a kind of shareware at the same time. @Ivan, you are right that FREE software doesn't mean money free, but using the GPL means complying with the standards of the open source society, and not only the standards but the philosophy also.
Part of that philosophy includes that programs under the GPL should not require the user to have the knowledge to change the source code, so that he can turn the "open source shareware" (what a stupid term... it is like cold hot tea :P) into plain open source code with no limitations. I will not even comment on the fact that your "open source" program uses an open, totally free site like http://

> BTW, source code versions and Linux versions don't have any limitation of days, so you can use these methods to use the program.
> Best Regards.
> Ivan.

This is not true even in the source code version 2.0.9 that can be downloaded from the home page. Because of the stupid way of coding (or, to be exact, the stupid way of removing part of the code) it throws an error:

    Traceback (most recent call last):
      File "/tests/
        import gui.main
      File "/tests/
        import gui.expiration as expiration
    ImportError: No module named expiration

So, keeping in mind your own words and the funny licensing policy you use, what would you say if I put a link here to a place where Subdownloader's users can download the same latest Subdownloader version as can be found on the main site, but with the license-check subroutines removed, for both the Win and Lin versions? Would that be OK with you? It should be OK, since you use the GPL :)

Oh... And BTW I think you should update your documentation in the source code and on your home page, so users have the information that the program is paid BEFORE they decide whether to try it or not. Because there is not a single word on your download site about paying... Sorry, but it is really lame and amateurish to do things like this. And yes, you are right... I am disgusted when someone uses open source to get money... That is the most stupid thing someone calling himself an open source coder could do.

And just a piece of advice: read your users' opinions carefully. I will cite only one (http://

> ....I have difficulties to understand why you made the windows version shareware (30 days).
> Moreover, I always contribute by a donation to software I am using, but here, the price is really excessive to my view.....

And here is another one:

> I donated a few euros to the developers of subdownloader thinking it was a common free-gratis and free-open source project
> but seeing now that they're trying to charge people for using the prog I'm very pissed off and will not pay for it,
> when it expires i'll stop using it altogether. At the end of the day it is the users who are uploading the content.

And a third one I will end with:

> i find it insulting that somebody would expect me to pay rather than donate for an open-source app.

P.S. I hope my post will stay like this here so other users can see it also...

Hello PhobosK, thanks for letting me know about the source code package bug. I have fixed it and removed the shareware limitation from the source code; that was my mistake. It only happened since yesterday night, when I replaced some .mo localization files and reuploaded the .tar.gz without remembering to remove the limitation. Now it is OK, you can get it from here. https:/

Regarding the shareware and open source licenses, this is called dual-licensing; many open source companies do it. If you think it's my invention, I recommend you read this: http://

Again, if you like another OS rather than Windows, the application is free. This revenue is a tax on those Windows users. I have to admit, I don't like Windows, so if you like it so much that you are still using it, then I don't feel pity for you. You are free not to use the app, or to do with the app whatever you want to do with it :-), that's why the source code is 100% Free Libre Software. Thanks for your feedback, and keep reporting bugs to give all our users the best experience when finding their subtitles.

Hello all. I don't see the big deal with the license paying. Let's say it's just a price to pay for the packaging into a binary for easy install (without having to install Python).
As was said, the source is available to every person, whether they are a Linux or Windows user. We will surely solve this issue by clearing up the mess with the licenses. And just to be clear:

- Source code: GPL
- *nix binaries: GPL
- Windows binaries: Shareware

Cheers

> ....this is called Dual-Licensing, many opensource companies do it,
> if you think it's my invention I recommend you to read this...

@Ivan, I do not need any recommendations about such things... I know what dual-licensing is and I agree it is totally up to you to decide how you will release the code. What I was implying is that SubDownloader is not Qt, nor Mozilla, nor Perl, etc., and exploiting Windows users' lack of knowledge about compiling things is merely cynical and indecent when it comes to OSC.

> This revenue of money is a tax for those windows users, I have to admit,
> I don't like Windows so if you like it that much that you are still using it, then I don't feel pity for you.

Lol... How pathetic... And no comment... I will just say that as a developer I cannot deny the existence of MS OS, nor can I stop using it because of misinterpreted feelings. And because I do not want to get into a personal fight over someone's way of seeing OSC... I just inform you that I wrote a quick step-by-step explanation on compiling your "GPL'ed" software on Windows ( http://

Wish you luck with your "experiment with the acceptance of the people"... BTW take my installer.nsi (http://

thanks a lot PhobosK, the installer.nsi will be very useful for our next project (this one not even shareware for Windows, 100% open source :-)) http://
Best Regards. Ivan.

Yeah, that is good information... And BTW I forgot to tell you that your current SubDownloader Windows binaries violate every MS EULA for distributing MS code and libraries, which makes their distribution ILLEGAL. More info you can find here: http:// http://

Hi, I've just downloaded the 2.0.9 for Windows. As I see, it is using Qt from Nokia (Trolltech).
Is it the commercial or the open source one?

Regards, Zolee

As I don't like the Windows shareware discrimination, I've created a new project, OpenSubdownloader, that is primarily focused on bringing you Windows binaries. Check https:/

Thanks for doing this @Libor :) Though it's a pity that mind resources and efforts are lost on useless things like splitting projects (if we can even use the word "split" for this)... But keep up the good job on the Windows binaries :)

Due to Launchpad restrictions to only open source questions, we need to move any question regarding the SubDownloader license version here. http://
Best Regards. SubDownloader team

OMG... LOL... ROFL... "Launchpad restrictions to only OpenSource questions"?!?! OR the impossibility to close and forbid writing in a question here on Answers. Besides... as far as we all know, SubDownloader is still open source, isn't it?

... hmm, I'm not kidding, this is the email from Canonical about it:

Hello Ivan,

I recently became aware of the Subdownloader project on Launchpad. First, let me congratulate you on a very interesting project and your extensive use of Launchpad. Subdownloader is one of the most viewed on the site!

Your project is licensed on Launchpad as GPLv3, which is the license you've applied to your source code. In addition you have a different, proprietary license for the Windows installer. I fully understand the need for a dual license and support your right to do so. However, as you probably know, Launchpad is free to use for open source projects. Your use of Launchpad to distribute the proprietary binaries and your use of the Answers forum to handle registration key requests puts your project in an awkward position where the licensing is concerned.
While the use of Launchpad is free for open source projects, we also allow commercial projects to use Launchpad provided they buy a commercial-use subscription for US$250/ https:/

You need to do one of two things for your project to use Launchpad in an approved manner. You can either add the 'Other/Proprietary' license to your project to cover your Windows binary distribution and then purchase a commercial-use subscription. Or you can maintain the GPLv3 license for your source code but remove the Windows distribution from our servers and cease to use the Answers forum for registration issues. You would need to move the distribution of the executable and the registration discussions off Launchpad while being able to maintain the code hosting and the distribution of free packages. Use of the other Launchpad facilities (Bug tracker, Blueprints, etc.) would only be allowed as the issues pertain to the open source code.

Clearly we would prefer you to choose the first option but are happy to work with you to realize the second option should you choose to go that route. You do need to make a decision, though. Your project cannot continue to use Launchpad in the current manner. Please follow up with me if you have any questions. I would like to have this issue resolved within the week.

Best,

Hello,

It seems like 3 years after our dispute about your project licensing and after the warning issued by the Launchpad Administration (https:/

"- It must not require royalty payments or any other fee for redistribution or modification."

"- It must not discriminate against persons, groups or against fields of endeavour. The licence of software hosted by Launchpad can not discriminate against anyone or any group of users and cannot restrict users from using the software for a particular field of endeavor - a business for example."

"- It must not be distributed under a licence specific to one operating system.
The rights attached to the software must not depend on the programme's being part of Ubuntu, for example."

So I think your users who are about to pay for your Windows binaries should pay close attention to your attitude towards them (stated clearly in the dispute mentioned above) and your attitude towards OSC, and they should think twice before they actually pay you.

The funny thing is that you expect and require your Win users to pay you, but at the same time you are obviously not inclined to follow the rules of Launchpad and pay US$250/project/year to LP for your project, thus using their servers for free to distribute a shareware... What would you call that? I would call it disrespect and mercantilism...

So in a word, would you be so kind as to at least adhere to the LP Terms of Use and remove your Windows binaries + remove all the registration questions/answers from LP servers?

And to all of your users that like the Windows version... I remind you that you have an option to get it for free: http:// https:/

Though I would propose a much better solution with two open source projects running on Windows that bring the benefit of downloading/ http:// http://

Oh, and BTW, I have filed a notice to the LP council about all that () .... Let's see what they have to say about this.... Thanks and have a nice day :)

Thanks to PhobosK's reminder, we just realized that we were wrongly hosting the shareware binary on Launchpad servers as well, and using the LP Questions & Answers service to answer some of our users' inquiries. This has all been taken care of already, and we will be leaving the Subdownloader Launchpad project exclusively for the open source version of it, following Launchpad requirements. The team strongly believes in the Free Software principles; that's why we decided to release Subdownloader under GPLv3 from the beginning with a dual license (a completely legal thing). More versions of the free open source version will come soon! Best Regards.
Dear netquim, the English language is a bit weak here, in a way that confuses people about the term "FREE". In the case of FREE SOFTWARE, as the Free Software Foundation says, it is free as in freedom, not as in free beer or freeware. This software is "open source"; the GPL license doesn't forbid anybody from making money from their effort, as long as they always provide the source code (which you can find available on the site). So it's perfectly legal, and I would even dare to say "fair". Please check these other sites to find more information about this: http://www.gnu.org/philosophy/free-sw.html

BTW, source code versions and Linux versions don't have any limitation of days, so you can use these methods to use the program.
Best Regards. Ivan.
https://answers.launchpad.net/subdownloader/+question/52299
How do we get into the following situation? First we complain about an ignored file (why not just ignore it?), and then apparently we delete the ignored file.

    [...]
    import Pictures/2005/11/16/.IMG_0819.tmpwrite.JPG (duplicate) ok
    (Recording state in git...)
    The following paths are ignored by one of your .gitignore files:
    Pictures/2008/11/27/.img_1315.tmpwrite.jpg
    Use -f if you really want to add them.
    fatal: no files added
    git-annex: user error (xargs ["-0","git","--git-dir=/.../annex/.git","--work-tree=/.../annex","add","--"] exited 123)

    # eek, the file that we complained about has vanished!
    $ rm ../Pictures/2008/12/27/.img_1315.tmpwrite.jpg
    rm: cannot remove ‘../Pictures/2008/11/27/.img_1315.tmpwrite.jpg’: No such file or directory

Expected:
- leave ignored files untouched. Maybe report "Skipped ignored files."

Actual:
- Stop import, but delete the ignored file as a side effect.

Today I'm seeing this: This repeats until I kill the import. Subsequently I see that the ignored file was in fact imported: In this case the original file was not deleted, because I used import --duplicate:

It seems this was also filed as a bug report so I'll deal with it there.
http://git-annex.branchable.com/forum/Why_are_ignored_files_being_deleted__63__/
This post is going to help save you money if you're running a Rails server. It starts like this: you write an app. Let's say you're building the next hyper-targeted blogging platform for medium-length posts. When you log in, you see a paginated list of all of the articles you've written. You have a Post model, and maybe, to handle tags, you have a Tag model, and for comments, a Comment model. You write your view so that it renders the posts:

```erb
<% @posts.each do |post| %>
  <%= link_to(post, post.title) %>
  <%= teaser_for(post) %>
  <%= "#{post.comments.count} comments" %>
<% end %>
<%= pagination(@posts) %>
```

See any problems with this? We have to make a single query to return all the posts - that's where the @posts comes from. Say that there are N posts returned. In the code above, as the view iterates over each post, it has to calculate post.comments.count - but that in turn needs another database query. This is the N+1 query problem - our initial single query (the 1 in N+1) returns something (of size N) that we iterate over and perform yet another database query on (N of them).

Introducing Includes

If you've been around the Rails track long enough, you've probably run into the above scenario before. If you run a Google search, the answer is very simple -- "use includes". The code looks like this:

```ruby
# before
@posts = current_user.posts.per_page(20).page(params[:page])

# and after
@posts = current_user.posts.per_page(20).page(params[:page])
@posts = @posts.includes(:comments)
```

This is still textbook, but let's look at what's going on. Active Record uses lazy querying, so this won't actually get executed until we call @posts.first or @posts.all or @posts.each. When we do that, two queries get executed. The first one, for posts, makes sense:

```sql
select * from posts where user_id=? limit ? offset ?
```

Active Record will pass user_id, limit, and offset in as bind params, and you'll get your array of posts.
Note: we almost always want all queries to be scoped with a limit in production apps. The next query you'll see may look something like this:

```sql
select * from comments where post_id in ?
```

Notice anything wrong? Bonus points if you found it, and yes, it has something to do with memory. If each of those 20 blog posts has 100 comments, then this query will return 2,000 rows from your database. Active Record doesn't know what data you need from each comment; it just knows it was told you'll eventually need them. So what does it do? It creates 2,000 Active Record objects in memory, because that's what you told it to do. That's the problem: you don't need 2,000 objects in memory. You don't even need the objects, you only need the count.

The good: You got rid of your N+1 problem. The bad: You're stuffing 2,000 (or more) objects from the database into memory when you aren't going to use them at all. This will slow down this action and balloon the memory use requirements of your app. It's even worse if the data in the comments is large. For instance, maybe there is no max size for a comment field and people write thousand-word essays, meaning we'll have to load those really large strings into memory and keep them there until the end of the request even though we're not using them.

N+1 Is Bad, Unneeded Memory Allocation Is Worse

Now we've got a problem. We could "fix" it by re-introducing our N+1 bug. That's a valid fix, and you can easily benchmark it: use rack-mini-profiler in development on a page with a large amount of simulated data. Sometimes it's faster to not "fix" your N+1 bugs. That's not good enough for us, though -- we want no massive memory allocation spikes and no N+1 queries.

Counter Cache

What's the point of having Cache if you can't count it? Instead of having to call post.comments.count each time, which costs us a SQL query, we can store that data directly inside of the Post model. This way, when we load a Post object we automatically have this info.
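The mechanics of a counter cache can be sketched in plain Ruby without Rails. The class and method names below are invented for illustration; in real Rails, the `counter_cache` option maintains a database column via callbacks, but the idea is the same: writes bump an integer, so reads never have to count.

```ruby
# Minimal counter-cache sketch: the post keeps an integer column that
# is updated on every write, so showing the count costs no extra query.
class FakePost
  attr_reader :count_of_comments

  def initialize
    @comments = []
    @count_of_comments = 0   # stands in for the cached DB column
  end

  def add_comment(body)
    @comments << body
    @count_of_comments += 1  # what the counter_cache callback does on create
  end

  def remove_comment(body)
    return unless @comments.delete(body)
    @count_of_comments -= 1  # ...and on destroy
  end
end

post = FakePost.new
post.add_comment("first!")
post.add_comment("nice article")
post.remove_comment("first!")
post.count_of_comments # => 1, read without counting anything
```

The trade-off is that every write now pays a tiny extra cost to keep the counter correct, which is almost always worth it for read-heavy pages.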
From the docs for the counter cache, you'll see we need to change our model to something like this:

```ruby
class Comment < ApplicationRecord
  belongs_to :post, counter_cache: :count_of_comments
  # …
end
```

Now in our view, we can call:

```erb
<%= "#{post.count_of_comments} comments" %>
```

Boom! Now we have no N+1 query and no memory problems. But...

Counter Cache Edge Cases

You cannot use a counter cache with a condition. Let's change our example for a minute. Let's say each comment could either be "approved", meaning you moderated it and allow it to show on your page, or "pending". Perhaps this is a vital piece of information and you MUST show it on your page. Previously we would have done this:

```erb
<%= "#{ post.comments.approved.count } approved comments" %>
<%= "#{ post.comments.pending.count } pending comments" %>
```

In this case the Comment model has a status field, and approved and pending are scopes that call where(status: "approved") and where(status: "pending"). It would be great if we could have a post.count_of_pending_comments cache and a post.count_of_approved_comments cache, but we can't. There are some ways to hack it, but there are edge cases, and not all apps can safely accommodate all edge cases. Let's say ours is one of those.

Now what? We could get around this with some view caching, because if we cache the entire page, we only have to render it and pay that N+1 cost once. Maybe fewer times if we are re-using view components and are using "Russian doll" style view caches. If view caching is out of the question due to <reasons>, what are we left with? We have to use our database the way the original settlers of the Wild West did: manually and with great effort.

Manually Building Count Data in Hashes

In our controller, where we previously had this:

```ruby
@posts = current_user.posts.per_page(20).page(params[:page])
@posts = @posts.includes(:comments)
```

We can remove that includes and instead build two hashes. Active Record returns hashes when we use group().
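Before wiring this into the controller, here is a plain-Ruby sketch (invented sample data, no database) of the hash shape that a grouped count produces, and why the view needs a `|| 0` default for posts with no matching comments:

```ruby
# Invented sample data standing in for rows in the comments table.
comments = [
  { post_id: 10, status: "pending"  },
  { post_id: 10, status: "approved" },
  { post_id: 11, status: "approved" }
]

# Rough in-memory equivalent of Comment.pending.group(:post_id).count
pending_count_hash = comments
  .select { |c| c[:status] == "pending" }
  .group_by { |c| c[:post_id] }
  .transform_values(&:size)

pending_count_hash # => { 10 => 1 }

# Post 12 has no pending comments at all, so it is absent from the
# hash; `|| 0` keeps the view from printing nil.
count_for_12 = pending_count_hash[12] || 0 # => 0
```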
In this case we know we want to associate a comment count with each post, so we group by :post_id.

```ruby
@posts = current_user.posts.per_page(20).page(params[:page])
post_ids = @posts.map(&:id)

@pending_count_hash  = Comment.pending.where(post_id: post_ids).group(:post_id).count
@approved_count_hash = Comment.approved.where(post_id: post_ids).group(:post_id).count
```

Now we can stash and use these values in our view instead:

```erb
<%= "#{ @approved_count_hash[post.id] || 0 } approved comments" %>
<%= "#{ @pending_count_hash[post.id] || 0 } pending comments" %>
```

Now we have 3 queries: one to find our posts, and one for each comment type we care about. This generates 2 extra hashes that hold the minimum of information we need. I've found this strategy to be super effective in mitigating memory issues while not sacrificing on the N+1 front. But what if you're using that data inside of methods?

Fat Models, Low Memory

Rails encourages you to stick logic inside of models. If you're doing that, then perhaps this code wasn't a raw query inside of the view but was instead nested in a method:

```ruby
def approved_comment_count
  self.comments.approved.count
end
```

Or maybe you need to do the math. Maybe there is a critical threshold where pending comments overtake approved:

```ruby
def comments_critical_threshold?
  self.comments.pending.count < self.comments.approved.count
end
```

This is trivial, but you could imagine a more complex case where logic happens based on business rules. In this case, you don't want to have to duplicate the logic in your view (where we are using a hash) and in your model (where we are querying the database). Instead, you can use dependency injection. Which is the hyper-nerd way of saying we'll pass in values.
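The pattern in isolation looks like this (a hedged plain-Ruby sketch; the class and the `*_in_db` instance variables are invented stand-ins for the Active Record association queries):

```ruby
class PostModel
  def initialize(pending_in_db:, approved_in_db:)
    @pending_in_db  = pending_in_db   # stands in for comments.pending.count
    @approved_in_db = approved_in_db  # stands in for comments.approved.count
  end

  # Keyword defaults fall back to the "database" only when the caller
  # doesn't inject precomputed values (e.g. from the grouped hashes).
  def comments_critical_threshold?(pending_count: @pending_in_db,
                                   approved_count: @approved_in_db)
    pending_count < approved_count
  end
end

post = PostModel.new(pending_in_db: 5, approved_in_db: 2)
post.comments_critical_threshold?                  # defaults: 5 < 2 => false
post.comments_critical_threshold?(pending_count: 1,
                                  approved_count: 9) # injected: 1 < 9 => true
```

Because Ruby evaluates keyword-argument defaults lazily, the fallback lookups never run when the caller supplies values, so the injected path really does zero extra work.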
We can change the method signature to something like this:

```ruby
def comments_critical_threshold?(pending_count: comments.pending.count,
                                 approved_count: comments.approved.count)
  pending_count < approved_count
end
```

Now I can call it and pass in values:

```ruby
post.comments_critical_threshold?(pending_count: @pending_count_hash[post.id] || 0,
                                  approved_count: @approved_count_hash[post.id] || 0)
```

Or, if you're using it somewhere else, you can call it without passing in values, since we specified defaults for the keyword arguments. BTW, aren't keyword arguments great?

```ruby
post.comments_critical_threshold? # default values are used here
```

There are other ways to write the same code:

```ruby
def comments_critical_threshold?(pending_count = nil, approved_count = nil)
  pending_count  ||= comments.pending.count
  approved_count ||= comments.approved.count
  pending_count < approved_count
end
```

You get the gist, though: pass values into your methods if you need to.

More than Count

What if you're doing more than just counting? Well, you can pull that data and group it in the same way by using select and specifying multiple fields. To keep going with our same example, maybe we want to show a truncated list of all commenter names and their avatar URLs:

```ruby
@comment_names_hash = Comment.where(post_id: post_ids)
                             .select("post_id, name, avatar_url")
                             .group_by(&:post_id)
```

The results look like this:

```ruby
{
  1337 => [
    { name: "schneems",       avatar_url: "" },
    { name: "illegitimate45", avatar_url: "" }
  ]
}
```

The 1337 is the post id, and then we get an entry with a name and an avatar_url for each comment. Be careful here, though: we're returning more data, and you still might not need all of it. Making 2,000 hashes isn't much better than making 2,000 unused Active Record objects. You may want to better constrain your query with limits or by querying for more specific information.

Are We There Yet

At this point, we have gotten rid of our N+1 queries and we're hardly using any memory compared to before. Yay! Self-five. :partyparrot:.
🎉 Here's where I give rapid-fire suggestions.

- Use the bullet gem -- it will help identify N+1 query locations and unused includes -- it's good.
- Use rack-mini-profiler in development. This will help you compare the relative speeds of your performance work. I usually do all my perf work on a branch, and then I can easily go back and forth between that and master to compare speeds.
- Use production-like data in development. This performance "bug" won't show until we've got plenty of posts or plenty of comments. If your prod data isn't sensitive, you can clone it to test against using something like $ heroku pg:pull, but make sure you're not sending out emails or spending real money or anything first.
- You can see memory allocations by using rack-mini-profiler with memory_profiler and adding pp=profile-memory to the end of your URL. This will show you things like total bytes allocated, which you can use for comparison purposes.
- Narrow down your search by focusing on slow endpoints. All performance trackers list slow endpoints; this is a good place to start. Scout will show you the memory breakdown per request and makes these types of bugs much easier to hunt down. They also have an add-on for Heroku. You can get started for free: $ heroku addons:create scout:chair

If you want to dig deeper into what's going on with Ruby's use of memory, check out the Memory Quota Exceeded in Ruby (MRI) Dev Center article, my How Ruby Uses Memory, and also Nate Berkopec's Halve your memory use with these 12 Weird Tricks.
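Even without a profiler gem, you can get a rough feel for the allocation spike described above from plain Ruby using ObjectSpace from the standard library. This is only an illustrative sketch (exact numbers vary by Ruby version and machine), simulating the 2,000 materialized comment objects:

```ruby
# Count live plain-Ruby objects before and after materializing a big
# result set, roughly what includes(:comments) does with 2,000 rows.
GC.start
before = ObjectSpace.count_objects[:T_OBJECT]

# Simulate eagerly loading 2,000 comment records into memory.
comments = Array.new(2_000) { Object.new }

after = ObjectSpace.count_objects[:T_OBJECT]
allocated = after - before
# `allocated` will be at least 2,000: every loaded-but-unused record
# is a real object the GC has to track until the request ends.
```

The grouped-count approach allocates one small hash instead, which is the whole point of the refactor.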
https://blog.heroku.com/solving-n-plus-one-queries
What you need to know about open source for products

Open source for products in four rules (and 10 slides)

There are four rules to understand when building products out of open source software. A product team (engineering, product management, marketing) needs to understand these rules to participate best in an open source project community and deliver products and services to their customers at the same time. These four rules are the start of all other discussions about the open source product space.

Rule #1: You ALWAYS get more than you give

The investment over time in a technology follows a normal distribution. Think about the investment in open source projects as a stacked bar chart where company and individual contributions are taken together and replace a single company's investment. So the collected investment looks the same in an open source project as a single company's investment looks when developing closed proprietary software products. Individuals and companies contribute to meet their own selfish needs. It's a perfect asymmetric relationship where the contributor gives up something relatively small in value (their contributions) and gets something substantial in return (an entire working piece of software). One can look at OpenStack or the Linux kernel to see this activity measured well. Instead of viewing this as giving away IP, it should rightly be looked at as gaining all the rest of the IP.

Lines-of-code and the COCOMO calculations come from Openhub.net crawling repositories. I understand exactly how fraught lines-of-code is. I understand the concerns over the accuracy of COCOMO, but they are representative models if not perfect ones, and they show the trends appropriately.

Rule #2: Don't confuse projects with products

This one is sometimes hard to understand. First, we need to assume we're talking about a well-run, successful open source project.
(More on this in rules #3 and #4.) A project is a collection of working software that installs and runs and solves an interesting problem. It's a collaboration and conversation in code between a relatively small number of people developing the software who have write access on the software repositories (i.e. committers) and, hopefully, a larger set of users and contributors.

A product is something that solves a customer's problem for money. Projects are NOT products. While a lot of excellent software can come out of a well-run open source project that relieves some of the work for engineering (see Rule #1), there is enormous work still to be done to turn it into a problem-solving product for customers. The Linux kernel is a project. Fedora is a distro project. RHEL is a product. "But what about Ubuntu," you cry? It's a variation on the business model. Ubuntu is a distro project. The Long Term Support (LTS) editions are the basis of multiple products for Canonical.

Products meet customer expectations of value for money. They install out of the box, run, and come with warranties and indemnifications, services (support, upgrades, training, consulting), and documentation. The product may be a service or hardware wrapped around the project. Products are as varied as the markets of problems customers want solved for money. While good projects tick the first two boxes (install, run), they don't tackle the customer focus the same way. Projects also solve much narrower problems than customers want solved.

And don't be confused about which open source licenses are involved and whether they're "business friendly" or not. Different vendors use different strategies around different licenses. There are success stories and failures around every major OSI-approved license. The license is irrelevant in comparison to business execution.

Rule #3: Don't confuse communities with customers
If Rule #2 is about engineering and business model, Rule #3 is about messaging and sales. Communities and customers live in different value spaces. Communities have time, but no money. Customers have money, but no time. Perhaps a better statement is that customers spend money to expedite a solution and remove risk, while communities (individuals in community) have no money. Traditionally, engineering feeds products into the pipeline, marketing feeds messages, and sales pulls qualified leads through into closed deals. A simple matter of execution. Many many companies using open source think that the project community is a part of this pipeline, and they further believe this when they find customers in community forums. They may even think the community project is a try-before-you-buy. All of this is WRONG. The conversations that a company (product management, engineering, marketing) has with its relevant communities and conversations with paying customers are different conversations. Each conversation has specific tools and rules of engagement. Successful companies understand how to have these conversations. There are well understood tools for building and qualifying pipelines. There are equally well understood tools and rules for developing successful communities (Rule #4). Each tool chain and conversation has different metrics to capture and consider. There IS interaction between a company's community and customers. Community members are evangelists for the project (so there's value to link it to the company brand in thoughtful ways). Community members provide support and expertise to potential customers that are self-qualifying in the project community before re-joining the product pipeline. Community also provides inertia for the ultimate product solution by being a sink for expertise and time invested. 
The challenge is to keep things crisply separate between the community and customers, such that you can quickly and easily recognize what role the person in front of you is playing and guide them appropriately. There must never be confusion in the messages (deliberate or otherwise). For example, the product is for customers. If you have a trial edition, as in try-before-you-buy, then the "buy" word is there, so: customer conversation. If you have a community edition, then build a community (Rule #4), because otherwise you're simply publishing software under an open source license without gaining any of the benefits of an open source community. These are separate things, which brings us to the final rule.

Rule #4: Successful open source project communities follow well-understood patterns and practices

All successful open source community projects follow the same set of patterns and practices. The project starts as a conversation in code around a small core of developers. There are three on-ramps that need to be built.

First, drive use and grow the user base, because that will lead to developers finding your project. (You NEED freeloaders! It means you're doing it right.) The software has to be easy to install and run. Users will tell you what they need, i.e. you get bug reports and feature requests in return for getting this right. More importantly, developers find you.

Second, make it blindingly easy to build the software into a known, tested state. This will allow developers to self-select and experiment for their own needs. Assuming a smart developer will figure it out is throwing away developer-cycles. They won't. No one wants to waste their time on your laziness and lack of discipline. They'll leave in frustration and disgust. Getting them back will be very hard, if not impossible. Get this right and you'll get the next set of harder bug reports and likely suggested fixes.

Third and last, tell developers how and where to contribute and make it easy to do.
Thank them for the contributions. If things other than code are to be encouraged, set up those contribution channels as clearly and make them easy. Regularly say "thank you." Reward folks anyway you can, especially when you're a company. Building communities is hard work. It doesn't come for free. It does, however, bring value with it in terms of contributions from users and developers, as well as stickiness for the technology. The last collection of practices in this space is around understanding the role of foundations and open source software. Foundations organize and clarify IP management regimes. Foundations can do many other things, but if they don't get this central thing right, then they're a failure for the project community's potential for growth. Clarifying neutral IP ownership allows growth for dedicated investment from participants and contributors interested in growing the entire ecosystem, i.e. companies trying to solve problems for customers. Foundations create neutral space in which companies can participate on equal footing. A company building products out of open source projects they didn't start and own (e.g. SUSE and Linux, HP and Openstack, etc.) need to understand clearly how their contributions are handled and that they aren't simply building someone else's product. Likewise, a company that has started an open source project and wants to drive adoption and growth of an ecosystem around it would do well to contribute the project software IP to a separate non-profit foundation (or create one if appropriate) such as what Google is presently doing with Kubernetes, or Pivotal has done with Cloud Foundry. This is ultimately a fourth on-ramp to get right. Conclusion So there you have it. Everything I've learned over 20 years of open source project support, foundation participation, and product engineering summarized in four rules, 10 slides, and approximately 1,600 words. I look forward to questions and comments. 
5 Comments

Stephen, this is an amazing article, with very good slides to illustrate what you mean. I wish everyone working at an "open source company" could read and apply what you explain here. This is also a very good base for an "open source go-to-market" strategy, and I would love to read you on that topic! I will talk at EclipseCon Europe about building communities around open source projects; may I please quote your article in my presentation as a reference?

Great summary of insights around open source communities, projects, and products from an experienced open source player.

This is a very insightful blog that I wish I had read six years ago before we started Gluu. Outstanding work, Steve!
https://opensource.com/business/15/8/open-source-products-four-rules
Hi, I've been learning about the std::copy function in the <algorithm> header and I'm stuck on this exercise that I gave myself. Let's see the code snippet first:

#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>

using namespace std;

ostream& operator<<(ostream& os, const int p_val)
{
    os.put(p_val*2);
    return os;
}

int main(/*const int argc, const char* const argv[]*/)
{
    std::vector<int> myVector = {34,54,43,2,7,87};
    std::copy(myVector.begin(), myVector.end(), ostream_iterator<int>(std::cout, " "));
    cin.get();
    return 0;
}

Basically what I'm trying to achieve is: I have a vector full of integers. I want to print each of these integers multiplied by 2 to std::cout using std::copy(). I learnt that I would have to overload operator<<(). But as you can see from my failed attempt above, I'm at a complete loss as to what I need to do inside operator<<(). Any help's much appreciated. Cheers, Ben
https://www.daniweb.com/programming/software-development/threads/456963/std-copy-and-operator-overload-for-ostream
15 July 2010 16:18 [Source: ICIS news] LONDON (ICIS news)--LyondellBasell’s new 260,000 tonne/year high density polyethylene (HDPE) plant at Munchmunster, Germany, is now on stream, a company source said on Thursday. The plant will produce HDPE injection, blowmoulding and textile grades. HDPE prices in Europe had been firm throughout 2010 in spite of expectations of price erosion caused by an influx of material from new capacities in the Middle East. HDPE prices were reported in a wide range, between €1,020-1,120/tonne ($1,291-1,418/tonne) FD (free delivered) NWE (northwest Europe). Borouge had also started up its new 540,000 tonne/year HDPE/linear low density polyethylene (LLDPE) swing plant. The new PE plant was currently producing injection and blowmoulding grades. Other grades, such as film and pipe, would be introduced into the production system after a while, said the source.
http://www.icis.com/Articles/2010/07/15/9376943/lyondellbasells-new-germany-hdpe-plant-comes-on-stream.html
Revision history for Perl extension App::Office::Contacts.

2.04 Thu Feb 6 14:06:00 2014
- Add code for Pg and MySQL re utf8 in the DBI attributes. See App::Office::Contacts::Util::Logger. See lines 44 .. 57.
- Default to mysql_enable_utf8 = 1 and pg_enable_utf8 = 0 in the config file. See lines 48 .. 52. I'm using pg_enable_utf8 = 0 in order to make my code work under DBD::Pg V 3.0.0.

2.03 Thu Jan 23 10:36:00 2014
- Switch from bareword file handles to lexically-named file handles. This stops these types of msgs: Use of bareword filehandle in open at lib/App/Office/Contacts/Database/Library.pm line 74.
- Now that I use the JS '$.uiBackCompat = false;', I had to re-write some of the JS to not use obsolete features of jQuery UI.

2.02 Thu Nov 13 15:11:00 2013
- Replace decode/encode('utf8', $x) with decode/encode('utf-8', $x).
- Remove:
  o use open qw(:std :utf8); # Undeclared streams in UTF-8.
  o use charnames qw(:full :short); # Unneeded in v5.16.
  The problem with 'use open ...' is that it's global. The problem with 'use charnames ...' is that it simply wasn't used. If you need 'use open ...', put it in your scripts.
- Switch jQuery V 1.8.2 (inside DataTables V 1.9.4) to V 2.0.3 (standalone). Note: homepage.tx includes $.uiBackCompat = false;. DataTables V 1.9.4 is still used.
- Switch jQuery UI V 1.9.2 to V 1.10.3.
- Rename web.page.css to homepage.css.
- Rename web.page.tx to homepage.tx.
- Rename whole_page.tx to standalone.tx.
- Add an output_file option to scripts/export.as.csv.pl. This patch includes using Text::CSV::Encoding to set the output I/O layer to utf-8.
- Add scripts/export.as.html.pl.

2.01 Wed Jun 12 12:40:00 2013
- Update the POD regarding editing and copying the config file share/.htapp.office.contacts.conf, now that the re-written Makefile.PL does not install it for you.

2.00 Wed Jun 12 09:25:00 2013
- Warning: Some database tables have changed structures.
- Rename CHANGES to Changes as per CPAN::Changes::Spec.
- I'm using Perl V 5.14.
- Add explicit support for UTF8. See the FAQ in Contacts.pm.
- Use Unicode::Collate for sorting.
- Switch from CGI::Application to CGI::Snapp.
- Switch from HTML::Template to Text::Xslate.
- Switch from CGI::Session to Data::Session.
- Switch from Log::Dispatch to Log::Handler.
- Switch from Path::Class to Path::Tiny.
- Switch from DBIx::Class to DBIx::Simple.
- Switch from Moose to Moo.
- Switch from YUI to jQuery.
- Update pre-reqs. This includes making Test::Pod optional.
- Update licence to artistic_2. See.
- Accept AutoCommit and RaiseError in the config file. If omitted, they both default to 1. Nevertheless, DBI's begin_work is used for transactions, where AutoCommit will be 0.
- Restructure the hierarchy a bit. This basically means object attributes have been shifted around. E.g.: The db object now 'hasa' session object. These changes should never be visible to the end user.
- For users of MySQL, default to use engine=innodb.
- Convert from Module::Build to Module::Install, so we can ship share/.htapp.office.contacts.conf rather than shipping lib/.htapp.office.contacts.conf. This means using File::ShareDir to install and retrieve the config file, rather than File::HomeDir. That should solve problems Windows users have had re the config file's location.
- Additions to the people table are:
  o facebook_id varchar(255)
  o twitter_id varchar(255)
- The list of personal titles has been much expanded, and SMS is a new type of mobile phone.
- Communication types now include Any and SMS only.
- Change log table message type from varchar(255) to text.
- Remove the singular column from the table_names table. It was never used.
- Rename table from yes_nos to yes_noes.
- Rename table from broadcasts to visibilities.
- What used to be broadcast is now called visibility.
- What used to be broadcast type '(Hidden)' is now called visibility type 'No-one'.
- Adopt Lingua::ENG::Inflect to help with singular/plural issues.
- Timestamps in database columns have changed (for MySQL and Postgres):
  o Old: timestamp timestamp (0) without timezone not null default current_timestamp
  o New: timestamp timestamp not null default localtimestamp
  This allows the code to use Time::Stamp. See App::Office::Contacts::Database::Organization and App::Office::Contacts::Database::People.
- Add data/organizations.txt, containing just the org called '-'. Remove it from the fake data file data/fake.organizations.txt. This means it is inserted when you run populate.tables.pl, which is what should have been happening from the year dot.
- Rename home_page column in organizations and people tables to homepage (as per Google's suggestion).
- Add the TODO section to the docs.
- Remove the table_names table. Now, the notes table uses entity_id rather than table_id, to point into either the organizations or people table, and entity_type rather than table_name_id, to specify which of those 2 tables the note refers to.
- Remove data/edit_types.txt, which was not used.
- Add scripts/pod2html4all.pl.

1.17 Wed Jun 15 8:15:00 2011
- Patch Build.PL and Makefile.PL to reduce the version requirement of File::Copy from 2.18 to 2.14, to make it easier for people to install without having to upgrade Perl to get the later version. Thanx to Gabor Szabo for reporting this problem.
- No other changes.

1.16 Tue Nov 16 15:47:00 2010
- Switch from FindBin::Real to FindBin (which is in core).
- Replace /usr/bin/perl with /usr/bin/env perl.
- Replace common::sense with use strict and use warnings, to get uninit var warnings.
- Move lib/App-Office-Contacts/lib/App/Office/Contacts/.htoffice.contacts.conf to config/.
- Change default template path to /dev/shm/html/assets/templates/app/office/contacts.
- Change name of default template path from tmpl_path to template_path, as part of adopting Text::Xslate.
- Add scripts/copy.config.pl to copy .htoffice.contacts.conf to ~/.perl/App-Office-Contacts/.
- Add missing pre-reqs to Build.PL and Makefile.PL.
- Make Build.PL and Makefile.PL run scripts/copy.config.pl.

1.15 Wed Sep 22 10:00:00 2010
- Replace sub script_name() with $self -> query -> url(-absolute => 1).
- Shift some code into a new module, App::Office::CMS::View::Search. This means a view now hasa search.
- Chop subs generate_cookie(), generate_digest() and validate_post(). See V 1.09 below.

1.14 Fri Jun 25 11:15:00 2010
- Change all JSON::XS->new->encode(...) to JSON::XS->new->utf8->encode(...).

1.13 Thu Jun 24 14:38:00 2010
- Fix syntax error.
- Use 'select count(*) as count' rather than just 'select count(*)' to avoid differences between Postgres and SQLite.

1.12 Wed Jun 23 13:29:00 2010
- Fix logic error in Create.pm.report_all_tables(). I was getting a list of table names from a file in the distro, the same way I do when populating tables at installation time. But, this file may not be available at run time after installation.
- No longer ship scripts/schema.sh. I use dbigraph.pl from GraphViz::DBI, modified to use GraphViz::DBI::General (which subclasses GraphViz::DBI).

1.11 Thu Jun 3 17:23:00 2010
- Fix typos arising after I changed the name of the module from CGI::Office::* to App::Office::*. This patch was lost when I replaced Debian testing with lenny on my laptop.
- Ship docs/html/contacts.faq.html, as previously documented.

1.10 Wed May 19 11:11:00 2010
- Update comments re starman usage in contacts.psgi.
- Chop mailing list stuff from support.
- Update version numbers in Build.PL and Makefile.PL.
- Ensure config code is only called once (App::Office::Contacts::Util::LogConfig).

1.09 Tue Apr 20 8:38:00 2010
- Comment out the processing which checks for CSRF, since I encountered a case where it did not work.

1.08 Fri Apr 16 8:52:00 2010
- Warning: The organizations and people tables have a new column: upper_name. This is due to a defect in SQLite, which does not allow function calls when defining an index.
  Hence the index on people(upper(name)) now has to be written as people(upper_name). You can easily write a little program to use alter table, and then populate the new column. The search code uses the new column.
- Change SQLite attribute from unicode to sqlite_unicode.
- Change the default database driver from Postgres to SQLite, to make installation easier (by not requiring DBD::Pg). If using another database server, you'll need to edit the 2 lines in .htoffice.contacts.conf which refer to SQLite.
- Fix Makefile.PL to use App::* not CGI::*. My apologies for this carelessness.
- Rework cookies and POST validation, to allow Contacts, Donations and Import::vCards to run in parallel.

1.07 Wed Apr 7 8:51:00 2010
- Update pre-reqs for Test::Pod to 1.41 to avoid Test::Pod's dithering about a POD construct I used: L<text|scheme:...>, which makes a test fail. See comments for V 1.40 and 1.41 at:
- Update pre-reqs from Test::More V 0 to Test::Simple 0.94.

1.06 Mon Mar 29 14:53:00 2010
- Create indexes on organizations and people tables, using upper(name), to speed up searching. The index names are:
  o organizations: opganizations_upper_name
  o people: people_upper_name.
- Add parent to pre-reqs in Build.PL and Makefile.PL.

1.05 Tue Mar 2 9:28:00 2010
- In cgiapp_prerun() protect against XSS and CSRF:
  o Only accept CGI params if the request method is 'POST'.
  o Ensure digest in session matches digest in cookie.
- Change 'use base' to 'use parent'.
- Remove form_action from config file. See sub script_name.
- Replace references to FCGI with Plack. This includes no longer shipping FCGI-specific files nor patches to Apache's httpd.conf.
- Ship httpd/cgi-bin/office/contacts.psgi.
- Adopt Log::Dispatch::Configurator. See App::Office::Contacts::Util::LogConfig.
- Replace Carp::croak with die, assuming calling code uses Try::Tiny.
- Stop using Time::Elapsed (at table create/populate time).
- Zap drop_and_create_all_tables() and run() from App::Office::Contacts::Util::Create.
- In drop.tables.pl and create.tables.pl, change the 'verbose+' option definition to 'verbose', since the '+' doesn't make sense.
- Clean up what is real data and what is fake data.
- Rename data/email_addresses.txt => data/fake.email_addresses.txt.
- Rename data/email_people.txt => data/fake.email_people.txt.
- Rename data/people.txt => data/fake.people.txt.
- Rename data/phone_numbers.txt => data/fake.phone_numbers.txt.
- Rename data/phone_people.txt => data/fake.phone_people.txt.
- Rename data/organizations.txt => data/fake.organizations.txt.
- Add comments to .htoffice.contacts.conf, while simplifying the discussion of the Javascript URL.
- Change the default URL of the FAQ.
- Use common::sense instead of strict and warnings.
- Add unicode to .htoffice.contacts.conf - used by SQLite - and add corresponding code to BEGIN{} in App::Office::Contacts::Database, in case anyone wants to use DBD::SQLite.
- Fix off-by-one error in report.js when indexing into document.report_form.report_id.options[report - 1].text.
- Change some logging in Contacts.pm from info to debug.

1.04 Sun Feb 21 12:54:14 2010
- Remove text 'All rights reserved' (for Debian licensing).
- Remove docs heads 'Required Modules' and 'Changes'.
- Replace personal doc root with /var/www.
- Use namespace::autoclean with Moose.

1.03 Fri Feb 5 17:27:00 2010
- Remove personal use lib from CGI scripts.
- Use smarter check for calendar div in Contacts.build_head_init, so cursor appears in search name box upon startup.

1.02 Fri Jan 29 09:52:00 2010
- Change namespace from CGI::Office::* to App::Office::* after discussion with Matt Trout.
- Add config item css_url.
- Tell Module::Build to install .htoffice.contacts.conf.

1.01 Thu Jan 7 15:39:00 2010
- Add MANIFEST and MYMETA.yml

1.00 Thu Dec 31 10:48:00 2009
- Rename from Local::Contacts.
- Remove Apache-specific code.
- Split into N controllers, using CGI::Application::Dispatch.
- Split into separate distros:
  o App::Office::Contacts
  o App::Office::Contacts::Donations
  o App::Office::Contacts::Export::StickyLabels
  o App::Office::Contacts::Import::vCards
  o App::Office::Contacts::Sites

0.99 Thu Mar 06 11:30:45 2008
- Original version.
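The 1.13 note above about writing 'select count(*) as count' is engine-neutral. As an illustration (Python's sqlite3 is used here purely because it bundles a SQL engine in its standard library; the distribution itself is Perl), aliasing the aggregate gives the result column a stable name for keyed access:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE people (name TEXT)")
conn.executemany("INSERT INTO people VALUES (?)", [("a",), ("b",)])

# Without the alias the result column is named "count(*)" on SQLite
# but "count" on Postgres; aliasing it makes keyed access portable.
row = conn.execute("SELECT COUNT(*) AS count FROM people").fetchone()
print(row["count"])  # prints 2
```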
https://metacpan.org/changes/distribution/App-Office-Contacts
AnalogOut

Use the AnalogOut interface to set the output voltage of an analog output pin, specified as a percentage or as an unsigned short. Mbed OS provides separate APIs to use percentage or range. Mbed OS supports a maximum resolution of VCC/65,536 V, though the actual resolution depends on the hardware.

Note: Not all pins are capable of being AnalogOut, so check the pinmap for your board.

AnalogOut class reference

AnalogOut hello, world

#include "mbed.h"

AnalogOut aout(A5);

int main()
{
    while (1) {
        // change the voltage on the pin by 0.1 * VCC
        for (float i = 0.0f; i < 1.0f; i += 0.1f) {
            aout = i;
            wait(1.0f);
        }
    }
}

AnalogOut example

Create a sine wave.

#include "mbed.h"

const double pi = 3.141592653589793238462;
const double amplitude = 0.5f;
const double offset = 65535/2;

// The sinewave is created on this pin
AnalogOut aout(A5);

int main()
{
    double rads = 0.0;
    uint16_t sample = 0;

    while (1) {
        // sinewave output
        for (int i = 0; i < 360; i++) {
            rads = (pi * i) / 180.0f;
            sample = (uint16_t)(amplitude * (offset * (cos(rads + pi))) + offset);
            aout.write_u16(sample);
        }
    }
}
https://os.mbed.com/docs/mbed-os/v5.14/apis/analogout.html
Dialog API

API documentation for the React Dialog component. Learn about the available props and the CSS API.

Import

import Dialog from '@mui/material/Dialog';
// or
import { Dialog } from '@mui/material';

You can learn about the difference by reading this guide on minimizing bundle size.

Dialogs are overlaid modal paper based components with a backdrop.

Component name

The name MuiDialog can be used when providing default props or style overrides in the theme.

Props

Props of the Modal component are also available. The ref is forwarded to the root element.

Inheritance

While not explicitly documented above, the props of the Modal component are also available on Dialog.
https://mui.com/api/dialog/
Hi, I have a current issue where I am trying to write a groovy script where, if the criteria is met, it proceeds with the rest of the test steps, and if the criteria is not met, it skips to the next row of the datasource.

Test case structure:

Datasource
Test step 1
Groovy - if condition to find if the criteria is met. If it is not met then I want it to proceed to the next line in the datasource and not proceed with the rest of the test steps. If the criteria is met then I want it to proceed with the rest of the test steps.
Test step 2
Test step 3
Datasource Loop

There are two groovy scripts I have tried:

1.
if(testRunner.testCase.getTestStepByName("step 1").getPropertyValue("property1")=="0" ){
    testRunner.fail("There was no property found")
    testRunner.runTestStepByName("DataSource Loop")
}

The problem with this is that 'fail' will fail the whole test case and does not proceed with any further execution.

2.
if(testRunner.testCase.getTestStepByName("step 1").getPropertyValue("property1")=="0" ){
    throw new Exception("There was no property found")
    testRunner.runTestStepByName("DataSource Loop")
}

This proceeds with Step 2 and Step 3 and does not skip to the next row of the datasource. Does anyone have a solution to this issue?

Solved!

Would you be able to use a groovy assertion inside of the request and then pass/fail the step if the criteria is met? If you can do that, you could have a separate groovy script to check if the request failed and then skip to the data source loop step.

def tStep = testRunner.testCase.getTestStepByName("Test step 1")
def result = tStep.getAssertionList()
// if you have more than one assertion result.size() will get the assertion count
if (result[0].status.toString() == "FAIL"){
    testRunner.gotoStepByName("DataSource Loop")
}

If you are not able to run the groovy script as a request assertion, you could disable all the test steps that should run only if the request passed. You can then use a groovy script to run the disabled test steps.
def property1 = testRunner.testCase.getTestStepByName("step 1").getPropertyValue("property1")
// if the assertion fails the script ends and the test steps will not be run;
// note the property value comes back as a string, so compare against "0"
assert property1 != "0"
testRunner.runTestStepByName("Test step 2")
testRunner.runTestStepByName("Test step 3")

Hi all, Thanks for the instructions, @jsheph01! @cjamieson did the above suggestions help? We are looking forward to hearing from you.
https://community.smartbear.com/t5/SoapUI-Pro/Groovy-script-to-fail-test-run-and-skip-to-next-line-in/m-p/187841
Wiki Syntax Comparison Courtesy of RadomirDopieralski... (with editions) Well, it's a shame about the pipe character, but the other arguments about why Moin syntax "sucks" aren't convincing: you can't apparently have non-WikiName links in Google Code or Trac (unless "no need" means that Trac figures it out), Trac still uses the baroque wiki:WikiName stuff when making a labelled link. And the full URL for attachments? It's exactly this kind of stuff which either confuses the Wiki when trying to label links (a problem I've seen with MediaWiki) or just breaks when any aspect of the Wiki configuration changes (a problem that also comes with MediaWiki). You should definitely argue with the Moin folks about that pipe character, though. -- PaulBoddie 2010-09-30 14:24:39 pipe character alone is enough to keep off a 80% of users. You can have non-WikiName links in both Trac and Google Code. What Google Code doesn't allow is to name your pages with non-WikiNames. Trac allows arbitrary page name and I've just added Trac syntax for it. Trac supports non-baroque syntax of Google Code, and I specifically added wiki:WikiName syntax, becase explicit is better than implicit, especially if you want to illustrate how convenient are Trac namespaces for various links. As for attachments, I don't link them so often as to understand the underlying problem. Nor I have time to argue with authors of products I don't use (this wiki just forces me to it). -- techtonik 2011-01-15 14:25:45
https://wiki.python.org/moin/SiteImprovements/WikiSyntaxComparison?action=fullsearch&context=180&value=linkto%253A%2522SiteImprovements%252FWikiSyntaxComparison%2522
CC-MAIN-2018-26
refinedweb
263
61.16
Cannot Invoke Method On An Object Of Type coldfusion.runtime.VariableScope With Named Arguments

I am getting a very strange error when I try to scope a private method within a ColdFusion component. I have this very simple ColdFusion component:

<cfcomponent>

	<cffunction
		name="PrivateMethod"
		access="private"
		returntype="string"
		output="false">

		<!--- Define arguments. --->
		<cfargument name="Value" type="string" required="true" />

		<!--- Return formatted string. --->
		<cfreturn ("<strong>" & ARGUMENTS.Value & "</strong>") />
	</cffunction>

	<cffunction
		name="CallMethod"
		access="public"
		returntype="void"
		output="false">

		<cfset VARIABLES.PrivateMethod(
			Value = "Crazy Bananas!"
			) />

		<!--- Return out. --->
		<cfreturn />
	</cffunction>

	<cffunction
		name="Debug"
		access="public"
		returntype="void"
		output="true">

		<cfdump
			var="#VARIABLES#"
			label="VARIABLES Scope Dump"
			/>

		<cfabort />
	</cffunction>

</cfcomponent>

It does absolutely nothing. It has one private method named "PrivateMethod". It has two public methods; one, Debug(), that dumps out the variables scope, and another, CallMethod(), that invokes the private method in the VARIABLES scope using named arguments. Now, if I call the Debug() method on this ColdFusion component I get:

(screenshot: CFDump labeled "VARIABLES Scope Dump")

As you can clearly see, the method PrivateMethod() is clearly within the VARIABLES scope of this component. However, when I try to call CallMethod():

<!--- Create CFC instance. --->
<cfset objTest = CreateObject( "component", "Test" ) />

<!--- Invoke method (which calls private method). --->
<cfset objTest.CallMethod() />

... I get this ColdFusion error:

Cannot invoke method PrivateMethod on an object of type coldfusion.runtime.VariableScope with named arguments. Use ordered arguments instead.

Now, I have two options to fix it here. I can either take out the named arguments to the PrivateMethod() function:

<cfset VARIABLES.PrivateMethod(
	"Crazy Bananas!"
	) />
... and just use ordered arguments (values are assigned to arguments in the same order in which they were passed). Or, I can simply not use a method scope when invoking the private method and let ColdFusion search for the appropriate method:

<cfset PrivateMethod(
	Value = "Crazy Bananas!"
	) />

Both of the "solutions" will allow the code to execute fine. But, of course, this should not be the case. There is no reason that I can see that I should not be able to invoke a private method using its scope. Anyone have any ideas? Is this a bug? Am I just not seeing something?

Cool. I knew about the variables.methodcall(namedArg="blah") because I tried it a while back. I just settled on taking out the reference to variables though, since I don't think it does anything for the readability of the program in that case (actually degrades it, imo). But I didn't know I could simply have used ordered arguments. Strange stuff... keep us updated =)

I hear what you are saying on readability. My issue is that I like using the THIS scope when referencing public methods within the same CFC. I learned to do this after I once had an ARGUMENTS key and a method name conflicting (it kept trying to invoke the argument "Commit" like it was a method (in the THIS scope)). So anyway, if I use the THIS scope for public method invocation, which to me ups the readability (very clear what I am referring to), I figure I should use the VARIABLES scope for private methods in order to keep things very consistent.

I agree with that - certainly in my mind if you can do this.publicmethod() you should be able to do this.privatemethod(). And adding the fact that you can use it with ordered arguments just strengthens the case.

...ConversionRate on an object of type coldfusion.runtime.RequestScope with named arguments. Use ordered arguments instead.

The bizarre workaround was to create a copy of the function as a local variable and then invoke that copy instead. E.g.
<cfset var getConversionRate = request.getConversionRate>
<cfset var convRate = getConversionRate(foo = bar)>

This is pretty lame. Using ordered arguments wasn't an option because I have multiple optional arguments in my function.

@Leon, Yeah, I am not sure what the problem is with it. It must be some weird wiring issue behind the scenes that makes it very complicated. Cause to me, a scope is a scope is a scope.

@ben The variables scope of a component is part of the class definition, not an object instance definition. You can use it for class methods, class constants, et al. The this scope of a component is part of an object instance definition. You can use it for instance methods, instance properties, et al. The problem is that ColdFusion has confusing syntax. Implicit scope = variables, unless it = url, form, or other. Explicit scope = this. Don't even get me started on the super variable, and how it's a class variable, not an object instance variable. [Say what?] What's more, is that createObject doesn't necessarily create an object instance. [Say what?] It just creates a reference to the class. You can double check this by creating Java objects. You need to call object=Class.init() to actually get an object instance. You can't call a private method as a class method. You can only call a private method from an instance. You called variables.privateMethod, which basically treats the call as if it originated from outside the class. Hence, like a class method. ColdFusion wants you to always use this.privateMethod, which treats the call instead as if it originated from inside the class. Hence, an instance method. Examples are pseudocode-ey.
This is what you did:

component
  function privateMethod private
  function publicMethod public
    variables.privateMethod

Which is the same as this:

x = createObject(component, foo).privateMethod()

You need to do this:

component
  function privateMethod private
  function publicMethod public
    this.privateMethod

@Alex, I think that some of what you're saying is on the money. However, calling a method on the variables scope is not the same as calling it on the public face of the component instance. The reason that you cannot use named arguments on calling the variables scope is because ColdFusion simply doesn't allow it. This, however, is something that is changing in CF9.

Just googled this error because I hadn't seen it in awhile and stumbled across your post Ben. Funny thing is, this works now in CF9 ... but I ran into this error when deploying something I was developing in CF9 onto a CF8 server.

@Steve, Yeah, I'm pretty excited that this is now fixed. Previously, you could only use ordered arguments. This is one of the reasons that I very rarely use privately-scoped functions. So now with this fixed AND with the synthesized getters / setters, I might just be using the variables scope a whole lot more :)

@Ben, actually, I think I lied ... after testing a few more things out, I think my error was being generated by something else ... hmm. Still investigating though. Oh well. Sorry I got you all hot and bothered, man.

@Steve, You're still right about this being fixed in CF9 though :)

@Ben, Yeah, I just confirmed that myself ... man, trying to remember what you can/can't do from one version to the next drives me nutty sometimes. Ugh.

@Steve, WORD UP :D I keep forgetting that there are new functions for a lot of the tag-based features. Like all the file/directory functions, which keep growing even in CF9.

... fine, but Application.cfc itself stumbled on this error until I applied Leon's technique.
If you are still banging your head on CF7, you are not alone :) I ran into this issue trying to invoke an object I have for logging errors into a database. The following code ran fine on my dev server, which runs cf9, but when moved to qa, which runs cf8, it produced an error. I came up with a quick fix and resubmitted back to qa, but wanted more information on why that error happened in the first place, which led me to this page. My quick fix was simply removing the variables. from the second reference. It apparently didn't mind me scoping to variables when I created the object, but didn't like it when I tried to use the object in that scope. I don't know what effect leaving the variables. prefix on the object creation has, if any. At this point I am not too concerned with it. I have been moving over to trying to appropriately scope all of my variables, both when setting and referencing them: both for the value of it being more efficient for the compiler and to reduce confusion as to what a particular variable is used for. This was a nice lesson since a lot of the servers I code for are cf8.
http://www.bennadel.com/blog/570-cannot-invoke-method-on-an-object-of-type-coldfusion-runtime-variablescope-with-named-arguments.htm?_rewrite
Description

Typically when dealing with 3rd party COM libraries (eg. Office), you make use of the constants defined in that library. Once makepy creates the typelib for you, using these constants is normally easy in python. You only need to import win32com.client.constants to access them. However, since these typelibs aren't really imported, py2exe needs to be told to include them in your setup-script.

Solution 1

Here's how I'm importing the typelib for Excel XP:

setup.py:

So, as you can see, it's an option in the options dictionary, containing a list of the typelibs you need. Each typelib being represented as a tuple of (CLSID, LCID, MajorVersion, MinorVersion) - all of which numbers you can find in the typelib file itself. You can print out these magic numbers by running the makepy script with the -i command line option.

Solution 2

That solution was the easier one for py2exe 0.4, but it still works:

cd \python23\Lib\site-packages\win32com\client
python makepy.py -o {MyProjectDirectory}\OLE_Excel10.py

within the software change: (taken from M. Hammonds documentation "Quick Start to Client side COM and Python") to

I believe that Solution 2 is not correct. It will work if you happen to be on a machine that has python installed, but if your goal is to use py2exe to distribute an application to machines that don't have python (and win32com) installed, then I don't think it will work. What the py2exe stuff in Solution 1 does is incorporate the generated win32com wrappers into the py2exe stuff. -Alec Wysoker

Solution 3

The downside of Solution 1 is that the machine on which the py2exe app runs must have the same version of the typelib as the one specified in setup.py. I am finding that for the app I want to communicate with (iTunes) there are multiple versions out there. Ideally, one should let win32com generate the typelib wrapper dynamically, so that one can be generated for whatever version is on the machine on which the py2exe app is running.
The problem is that win32com figures out that it is being run from a zipfile (a read-only module store) so it decides that it shouldn't try to dynamically generate the typelib wrapper. The solution, amazingly, is to simply tell win32com that is is OK to dynamically generate code, like this: import win32com.client win32com.client.gencache.is_readonly=False Then, you can get a reference to a COM object in the normal win32com fashion: iTunes = win32com.client.Dispatch("iTunes.Application") This causes win32com to magically generate the modules it needs, and import them appropriately. The files go into a temp directory. This works for me with py2exe 0.6.3. -Alec Wysoker I have found there is a little more to Solution 3 than just setting the gencache value is_readonly to False. In particular, one needs to make sure that the gen_py directory does not exist in the win32com package deployed, or the win32com.__init__.py will attempt to use that instead of the one in the temp directory. Secondly, I've also found that if the gen_py directory has not yet been created in the temp directory, that win32com.client.gencache.EnsureDispatch('<your_com_object>') can fail on an initial run, but work when you run your program a second time (since gen_py now exists). So my steps to a successful use of dynamically created typelib wrappers under py2exe are: Make sure that the win32com\gen_py does not exist in your build source, or at least in your packaged app. (Using VirtualEnv can make this easier.) Also run GetGeneratePath(). This ensures that the gen_py directory is already created before use. So we end up with: import win32com.client win32com.client.gencache.is_readonly=False win32com.client.gencache.GetGeneratePath() If running as a Windows Service, don't forget to call pythoncom.CoInitialize() or pythoncom.CoInitializeEx(pythoncom.COINIT_MULTITHREADED). -Rasjid Wilcox There is a converse solution to the third method. 
Instead of telling the program at run-time that it is OK to dynamically create files. Let it by OK by making the area where it would be not "read only." While this is not very neat to the distribution directory, it gets the job done. To do this, simply add skip archive to the set up call like so: setup( console=['HELLO_MAIN.py'], options={ "py2exe":{ "skip_archive": True } } )
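For reference, the Solution 1 setup.py snippet itself is missing above; here is a sketch of what it would look like, based on the description. The CLSID, LCID and version numbers below are illustrative; print the real ones for your typelib by running makepy with the -i option.

```python
# setup.py -- sketch of Solution 1; the tuple values are illustrative,
# obtain the real ones with: python makepy.py -i
from distutils.core import setup
import py2exe  # registers the py2exe command with distutils

setup(
    console=['myscript.py'],
    options={
        'py2exe': {
            # One (CLSID, LCID, MajorVersion, MinorVersion) tuple per typelib.
            'typelibs': [
                ('{00020813-0000-0000-C000-000000000046}', 0, 1, 4),
            ],
        },
    },
)
```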
http://www.py2exe.org/index.cgi/IncludingTypelibs?highlight=VirtualEnv
Given a sample code:

    1 public class Test extends Thread {
    2     public static void main(String[] args) throws Exception {
    3         Test t = new Test();
    4         t.start();
    5         t.method(); }
    6     public void run() {
    7         System.out.println("run"); }
    8     public void method() {
    9         System.out.println("method"); }}

Which of the following statements is correct?

(A) Prints "run method"
(B) Prints "method run"
(C) Raises a runtime exception
(D) Compilation error at line no 3

Answer

(B) Prints "method run"

Reason

First the start() method is called, which starts the new thread, but just after start() the method() call runs on the main thread and prints "method". The run() method is then invoked automatically by the new thread. Therefore the output is "method run".
http://www.roseindia.net/tutorial/java/scjp/part8/question6.html
A Python interface library for WLkata's Mirobot

Project description

mirobot

Description

mirobot is a Python module that can be used to control the WLkata Mirobot. This library uses the G code protocol to communicate with the Mirobot over a serial connection. The official G code instruction set and driver download for the Mirobot can be found at the WLkata Download Page.

Installation

mirobot requires Python >= 3.6. Use pip3 to install it:

    pip3 install mirobot-py

Make sure to not install the mirobot package-- that package is unrelated to this one.

Example Usage

    from mirobot import Mirobot

    with Mirobot(portname='COM3', debug=True) as m:
        m.home_individual()
        m.go_to_zero()

And that's it! Now if you want to save keystrokes, here's an even more minimal version:

    from mirobot import Mirobot

    with Mirobot() as m:
        m.home_simultaneous()

The Mirobot class can detect existing open serial ports and "guess" which one to use as the Mirobot. There's no need to specify a portname for most cases!

Documentation

Many of the functions and structures in this library are documented. The documentation is hosted here. If anything is unclear in the docs, please open a GitHub issue.

Differences from source repository

Credits

Big thanks to Matthew Wachter for laying down the framework for this library-- please check out his links below:

Reasons to fork (and not merge upstream)

While based on the same code initially, this repository has developed in a different direction, with opinionated views on how one should use a robotics library. Specifically, there is the problem of 'output' when operating a G-code-programmed machine like the Mirobot. Matthew's library takes the traditional approach of receiving output from the robot as it appears. Basically this replicates the live terminal feedback in a client similar to WLkata's Studio program. The original code has a thread listening in the background for new messages and displays them as they appear.

This repository intends to take a more programmatic approach to this behavior. Specifically, it narrows down the path to responsibility by explicitly pairing each command to its output. In a stream-messages-as-they-come approach to output messaging, it is not clear (or at least easy) to determine which command failed and how to ensure scripts stop execution at exactly the point of failure (and not after). That is why each instruction in this library has a dedicated output, ensuring success and having its own message output as a return value. This approach is a lot harder to construct and relies on adapting to the idiosyncrasies of gcode and Mirobot programming.

In the end, while developing this approach to error responsibility, I realized that this would probably not suit everyone's needs-- sometimes people just want a live feed of output. That is why I think Matthew's continued work would be great for the community. I don't want this repository and its beliefs to consume another. I also do not see a way to combine both approaches-- they are inherently incompatible at the core level.

It is my belief that people who are looking to do significant scripting and logic-testing routines will benefit greatly from this library. People who are looking to use a CLI-friendly framework should instead use Matthew's py-mirobot library.

License

Project details

Release history

Release notifications | RSS feed

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
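The "paired output" design described above can be sketched in a few lines. Everything here (the FakeSerial class, the reply strings, the PairedRobot name) is hypothetical and only illustrates the idea: each command returns, and checks, its own response, so a script stops exactly at the failing instruction.

```python
class FakeSerial:
    """Stand-in for the real serial connection to the robot (hypothetical)."""
    def send(self, gcode):
        # Pretend the firmware answers 'ok' to homing and motion commands.
        return 'ok' if gcode.startswith(('$H', 'G')) else 'error'

class PairedRobot:
    """Each instruction has a dedicated output and fails loudly on error."""
    def __init__(self, port):
        self.port = port

    def _command(self, gcode):
        reply = self.port.send(gcode)
        if reply != 'ok':
            # Stop exactly at the failing instruction, not after it.
            raise RuntimeError('%r failed: %s' % (gcode, reply))
        return reply

    def home(self):
        return self._command('$H')

r = PairedRobot(FakeSerial())
print(r.home())  # ok
```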
https://pypi.org/project/mirobot-py/
Hello All,

I have a seemingly minor problem which likely requires some Python coding. I am not currently familiar with Python (more of a Matlab person), and was wondering if anyone had any advice.

I am attempting to use the Feature Class Z to ASCII tool to make text files with my Z elevations. This tool performs correctly in all respects except one: it names the text files based on the OID (or FID) column instead of a unique column (named 'Id') that is in my attribute table. Up until this point we have had a separate, hand-made file that converts from FID to Id in subsequent analysis, but it is incredibly prone to error and confusing to use. Is there any simple way to modify this tool so that it outputs names based on a chosen column instead of the default OID column?

I have attached a zip file of a z-interpolated shapefile below. The 'Id' column is the desired naming column. The code is below:

    '''****************************************************************************
    Name: FeatureClassZToASCII Example
    Description: This script demonstrates how to use the FeatureClassZToASCII
    tool to generate files for all z-aware point features in a given workspace.
    ****************************************************************************'''
    import arcpy
    import exceptions, sys, traceback
    from arcpy import env

    try:
        # Obtain a license for the ArcGIS 3D Analyst extension
        arcpy.CheckOutExtension('3D')
        # Set environment settings
        env.workspace = 'C:/data'
        # Build the list of feature classes in the workspace
        fclist = arcpy.ListFeatureClasses()
        if fclist:
            for fc in fclist:
                arcpy.FeatureClassZToASCII_3d(fc, outFolder, outName, outFormat,
                                              delimeter, decimal, digits, dec_sep)
        else:
            print "There are no feature classes in the " + env.workspace + " directory."
    except arcpy.ExecuteError:
        print arcpy.GetMessages()
    except:
        # Get the traceback object
        tb = sys.exc_info()[2]
        tbinfo = traceback.format_tb(tb)[0]
        # Concatenate error information into message string
        pymsg = 'PYTHON ERRORS:\nTraceback info:\n{0}\nError Info:\n{1}'\
            .format(tbinfo, str(sys.exc_info()[1]))
        msgs = 'ArcPy ERRORS:\n {0}\n'.format(arcpy.GetMessages(2))
        # Return python error messages for script tool or Python Window
        arcpy.AddError(pymsg)
        arcpy.AddError(msgs)

I assume that the root of the problem is the 'fclist' variable. Any help in solving this problem is appreciated.

Ps. Bonus points if anyone can give some guidance on how to include this as a tool in ArcGIS so I don't have to run it in an IDE.

your code is hard to read... for reference: Code Formatting... the basics++

maybe dump the try/except block to make the error line more visible in the traceback
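As an aside, the renaming itself can be prototyped without arcpy at all: let the tool write its FID-named files, then rename them from a FID-to-Id mapping. In practice the mapping would come from a cursor over the attribute table; the file names and Id values below are made up for illustration.

```python
import os
import tempfile

# Hypothetical mapping from FID to the 'Id' field of the attribute table.
fid_to_id = {0: 101, 1: 102, 2: 205}

outdir = tempfile.mkdtemp()

# Simulate the FID-named text files the tool produces.
for fid in fid_to_id:
    open(os.path.join(outdir, '%d.txt' % fid), 'w').close()

# Rename each output after the matching 'Id' value.
for fid, id_val in fid_to_id.items():
    os.rename(os.path.join(outdir, '%d.txt' % fid),
              os.path.join(outdir, '%d.txt' % id_val))

print(sorted(os.listdir(outdir)))  # ['101.txt', '102.txt', '205.txt']
```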
https://community.esri.com/thread/230249-feature-class-z-to-ascii-file-names
Réveillé wrote:
> Hello,
>
> I have just started doing the python tutorials and i
> tried to modify one of the exercises, it has to to
> with defining functions.
>
> I wanted the user to be able to enter an option and
> then get a print of the selected option. I also wanted
> to have an exit for the user.
>
> This is the code....
>
> def PU():
>     print 'To have a ...'
>
> def Python():
>     print 'To create a programme ...'
>
> def overall():
>     print 'To make .....'
>
> def motv():
>     print 'I know you can do it!'
>
>     def menu():
>         print ' GOALS IN... '
>         print '____________________'
>         print '1.Pick up'
>         print '2.Python Programming'
>         print '3.Overall'
>         print '4.Motivation'
>         print '5.Exit'
>         print '____________________'
>
> menu()
>
> while choice != 5:
>     choice = input ('Pick a number:')
>     if choice == 1:
>         PU()
>     elif choice == 2:
>         Python()
>     elif choice == 3:
>         overall()
>     elif choice == 4:
>         motv()
>     else:
>         print 'Bye'
>
> The problem is that it doesnt print the
>
> [ choice = input ('Pick a number:') ]
>
> command. It just runs thru the whole thing without
> allowing the user a selection.

No, it doesn't. It prints:

    Traceback (most recent call last):
      File "test97.py", line 24, in ?
        menu()
    NameError: name 'menu' is not defined

There's a good reason for this, too: you define motv(), and inside that function you define the menu() function. Since the menu() function is defined inside the body of motv(), its definition is only created when motv() is called, inside the *local* namespace of the invocation of motv(). The call to motv() returns, and everything the function "knew" is forgotten. I suggest you change the indentation of the menu() definition so it's at the same level as your other functions.

That was a lucky problem, however, because it stopped a later error from occurring. That "while choice != 5" will fail the first time it is executed, since you haven't actually set the value of choice to be anything.

Now, quite why you chose to misinform us as to the behavior of your program I can't really divine. I'll be charitable, and assume that you are actually referring to some earlier version. But a sound rule for getting help is "always post the code AND the error traceback".

Also, note that when you type in the digit 1 in response to your program's prompt (when you eventually see it), that will become the string value "1" in the choice variable. Since "1" is not equal to 1 you will always "fall off the end" and print "Bye".

Perhaps you'd like to try again after you've attempted to remedy some of the deficiencies I have pointed out? There's plenty of help available here, and you aren't far from a working program.

regards
Steve
--
Steve Holden Python Web Programming Holden Web LLC +1 703 861 4237 +1 800 494 3119
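Steve's main point is easy to reproduce in a few lines: a function defined inside another function lives only in the enclosing call's local namespace, so it is gone once that call returns.

```python
def motv():
    print('I know you can do it!')

    def menu():          # defined in motv()'s local namespace
        print('GOALS IN...')

    menu()               # fine: called while motv() is still running

motv()

try:
    menu()               # the name only ever existed inside motv()
except NameError as e:
    print(e)             # name 'menu' is not defined
```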
https://mail.python.org/pipermail/python-list/2004-December/261064.html
This file is fg_bg.def, from which is created fg_bg.c.
It implements the builtins "bg" and "fg" in Bash.

$PRODUCES fg_bg.c

$BUILTIN fg
$FUNCTION fg_builtin
$DEPENDS_ON JOB_CONTROL
$SHORT_DOC fg [job_spec]
Place JOB_SPEC in the foreground, and make it the current job.  If
JOB_SPEC is not present, the shell's notion of the current job is used.
$END

#include <config.h>

#include "../bashtypes.h"
#include <signal.h>

#if defined (HAVE_UNISTD_H)
#  include <unistd.h>
#endif

#include "../bashintl.h"

#include "../shell.h"
#include "../jobs.h"
#include "common.h"
#include "bashgetopt.h"

#if defined (JOB_CONTROL)
extern char *this_command_name;

static int fg_bg __P((WORD_LIST *, int));

/* How to bring a job into the foreground. */
int
fg_builtin (list)
     WORD_LIST *list;
{
  int fg_bit;
  register WORD_LIST *t;

  if (job_control == 0)
    {
      sh_nojobs ((char *)NULL);
      return (EXECUTION_FAILURE);
    }

  if (no_options (list))
    return (EX_USAGE);
  list = loptend;

  /* If the last arg on the line is '&', then start this job in the
     background.  Else, fg the job. */
  for (t = list; t && t->next; t = t->next)
    ;
  fg_bit = (t && t->word->word[0] == '&' && t->word->word[1] == '\0') == 0;

  return (fg_bg (list, fg_bit));
}
#endif /* JOB_CONTROL */

$BUILTIN bg
$FUNCTION bg_builtin
$DEPENDS_ON JOB_CONTROL
$SHORT_DOC bg [job_spec ...]
Place each JOB_SPEC in the background, as if it had been started with
`&'.  If JOB_SPEC is not present, the shell's notion of the current
job is used.
$END

#if defined (JOB_CONTROL)
/* How to put a job into the background. */
int
bg_builtin (list)
     WORD_LIST *list;
{
  int r;

  if (job_control == 0)
    {
      sh_nojobs ((char *)NULL);
      return (EXECUTION_FAILURE);
    }

  if (no_options (list))
    return (EX_USAGE);
  list = loptend;

  /* This relies on the fact that fg_bg() takes a WORD_LIST *, but only
     acts on the first member (if any) of that list. */
  r = EXECUTION_SUCCESS;
  do
    {
      if (fg_bg (list, 0) == EXECUTION_FAILURE)
        r = EXECUTION_FAILURE;
      if (list)
        list = list->next;
    }
  while (list);

  return r;
}

/* How to put a job into the foreground/background. */
static int
fg_bg (list, foreground)
     WORD_LIST *list;
     int foreground;
{
  sigset_t set, oset;
  int job, status, old_async_pid;
  JOB *j;

  BLOCK_CHILD (set, oset);
  job = get_job_spec (list);

  if (INVALID_JOB (job))
    {
      if (job != DUP_JOB)
        sh_badjob (list ? list->word->word : "current");

      goto failure;
    }

  j = get_job_by_jid (job);
  /* Or if j->pgrp == shell_pgrp. */
  if (IS_JOBCONTROL (job) == 0)
    {
      builtin_error (_("job %d started without job control"), job + 1);
      goto failure;
    }

  if (foreground == 0)
    {
      old_async_pid = last_asynchronous_pid;
      last_asynchronous_pid = j->pgrp;  /* As per Posix.2 5.4.2 */
    }

  status = start_job (job, foreground);

  if (status >= 0)
    {
      /* win: */
      UNBLOCK_CHILD (oset);
      return (foreground ? status : EXECUTION_SUCCESS);
    }
  else
    {
      if (foreground == 0)
        last_asynchronous_pid = old_async_pid;

failure:
      UNBLOCK_CHILD (oset);
      return (EXECUTION_FAILURE);
    }
}
#endif /* JOB_CONTROL */
http://opensource.apple.com/source/bash/bash-86.1/bash-3.2/builtins/fg_bg.def
Very often I need to create dicts that differ one from another by an item or two. Here is what I usually do:

    setup1 = {'param1': val1,
              'param2': val2,
              'param3': val3,
              'param4': val4,
              'paramN': valN}

    setup2 = copy.deepcopy(dict(setup1))
    setup2.update({'param1': val10, 'param2': val20})

I would like to build setup2 from setup1 in one line of code, something like:

    setup2 = dict(setup1).merge({'param1': val10, 'param2': val20})

Build a function for that. Your intention would be clearer when you use it in the code, and you can handle complicated decisions (e.g., deep versus shallow copy) in a single place.

    def copy_dict(source_dict, diffs):
        """Returns a copy of source_dict, updated with the new key-value pairs in diffs."""
        result = dict(source_dict)  # Shallow copy, see addendum below
        result.update(diffs)
        return result

And now the copy is atomic, assuming no threads involved:

    setup2 = copy_dict(setup1, {'param1': val10, 'param2': val20})

For primitives (integers and strings), there is no need for deep copy:

    >>> d1 = {1: 's', 2: 'g', 3: 'c'}
    >>> d2 = dict(d1)
    >>> d1[1] = 'a'
    >>> d1
    {1: 'a', 2: 'g', 3: 'c'}
    >>> d2
    {1: 's', 2: 'g', 3: 'c'}

If you need a deep copy, use the copy module:

    result = copy.deepcopy(source_dict)  # Deep copy

instead of:

    result = dict(source_dict)  # Shallow copy

Make sure all the objects in your dictionary support deep copy (any object that can be pickled should do).
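A quick, runnable check of the helper (with plain integer values, so the shallow copy is enough) confirms that the source dict is left untouched:

```python
def copy_dict(source_dict, diffs):
    """Returns a copy of source_dict, updated with the key-value pairs in diffs."""
    result = dict(source_dict)  # shallow copy
    result.update(diffs)
    return result

setup1 = {'param1': 1, 'param2': 2, 'param3': 3}
setup2 = copy_dict(setup1, {'param1': 10, 'param2': 20})

print(setup2)  # {'param1': 10, 'param2': 20, 'param3': 3}
print(setup1)  # {'param1': 1, 'param2': 2, 'param3': 3} -- unchanged
```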
https://codedump.io/share/ED7aoNnvDx1w/1/how-to-copy-a-dict-and-modify-it-in-one-line-of-code
summon_process 0.1.3

Process coordinator for tests

Current status: work in progress. The code is lacking proper documentation and is broken on Python 3.3.

Python process orchestration library.

About

As developers we have to work on projects that rely on multiple processes to run their test suites. Sometimes these processes need some time to boot. The simple (and wrong) solution is to add time.sleep and pretend that it works. Unfortunately there is no way to estimate the amount of time to sleep without losing too much time. summon_process is an attempt to solve this problem.

What you can see below is an example test that waits for an HTTP server to boot, and then checks whether the returned status is OK.

    from unittest import TestCase
    from summon_process.executors import HTTPCoordinatedExecutor
    from summon_process.utils import orchestrated
    from httplib import HTTPConnection, OK


    class TestServer(TestCase):
        def test_it_works(self):
            executor = HTTPCoordinatedExecutor("./server", url="")
            with orchestrated(executor):
                conn = HTTPConnection("localhost", 8000)
                conn.request('GET', '/')
                assert conn.getresponse().status is OK

The server command in this case is just a bash script that sleeps for some time and then launches the builtin SimpleHTTPServer on port 8000.

License

summon_process is licensed under the LGPL license, version 3.

Contributing and reporting bugs

Source code is available at: mlen/summon_process. Issue tracker is located at GitHub Issues. Project's PyPI page.

- Downloads (All Versions):
- 9 downloads in the last day
- 78 downloads in the last week
- 244 downloads in the last month
- Author: Mateusz Lenik
- License: LGPL
- Package Index Owner: mlen
- DOAP record: summon_process-0.1.3.xml
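The library's core idea (poll for readiness instead of sleeping a fixed amount of time) can be sketched without summon_process itself. This standalone example waits for a TCP port to accept connections; the helper name and delay values are made up for the demo.

```python
import socket
import threading
import time

def wait_for_port(host, port, timeout=5.0):
    """Poll until a TCP connection succeeds, instead of a blind time.sleep."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=0.2):
                return True
        except OSError:
            time.sleep(0.05)   # not ready yet; retry shortly
    return False

# Demo: a server that becomes ready only after a simulated boot delay.
srv = socket.socket()
srv.bind(('127.0.0.1', 0))
port = srv.getsockname()[1]

def serve():
    time.sleep(0.3)            # simulated boot time
    srv.listen(5)              # only now do connections succeed

threading.Thread(target=serve, daemon=True).start()
print(wait_for_port('127.0.0.1', port))  # True
```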
https://pypi.python.org/pypi/summon_process/0.1.3
check template type for type

Hey guys, I know, it's probably more a general C++ question than Qt but anyway... I have a dll which provides functions like:

    bool GetDataAsU8(unsigned char *data)
    bool GetDataAsS16(short *data)
    bool GetDataAsU32(uint *data)
    ...

I want to write a helper function which uses templates and selects the right imported dll function automatically.

    template <typename T>
    bool GetData(T *data)
    {
        // here I want to check the type of data and select the right function
    }

Could someone help? Thanks in advance! mts

- A Former User:

Hi, why can't you simply provide several overloaded versions of GetData()?

You could use the typeid operator:

    if (typeid(data) == typeid(uint)) {
        ...
    }

- SGaist (Lifetime Qt Champion):

Hi,

If you only want one function you could use the technique described here to "if" through possible types. Otherwise, template specialization could also be an option.

Hope it helps

- Chris Kawa (Moderators):

To check the type at runtime you can also use the is_same template:

    #include <type_traits>

    template <typename T>
    bool GetData(T *data)
    {
        if (std::is_same<T, unsigned char>::value)
            return GetDataAsU8(data);
        // and so on for other types
    }

but this is unnecessarily slow, as is the typeid solution. Template specialization, as suggested by @SGaist, is the way to go here.

Thanks for your answers! I think I will go with template specialization. CU mts
https://forum.qt.io/topic/52448/check-template-type-for-type
PUTWC(3P)                 POSIX Programmer's Manual                 PUTWC(3P)

SYNOPSIS
       #include <stdio.h>
       #include <wchar.h>

       wint_t putwc(wchar_t wc, FILE *stream);

DESCRIPTION
       The functionality described on this reference page is aligned with the
       ISO C standard. Any conflict between the requirements described here
       and the ISO C standard is unintentional. This volume of POSIX.1-2008
       defers to the ISO C standard.

       The putwc() function shall be equivalent to fputwc(), except that if
       it is implemented as a macro it may evaluate stream more than once, so
       the argument should never be an expression with side-effects.

RETURN VALUE
       Refer to fputwc(3p).

ERRORS
       Refer to fputwc(3p).

SEE ALSO
       Section 2.5, Standard I/O Streams, fputwc(3p)

       The Base Definitions volume of POSIX.1-2008, stdio.h(0p), wchar.h(0p)

                                                                    PUTWC(3P)

Pages that refer to this page: wchar.h(0p), putwchar(3p)
http://man7.org/linux/man-pages/man3/putwc.3p.html
28 March 2011 22:47 [Source: ICIS news]

SAN ANTONIO -- Ethylene spot margins were at 29.09 cents/lb ($641/tonne, €455/tonne) last week, up from 27.53 cents/lb one week earlier, using ethane as a feedstock.

The increase was based on a range of spot deals done at 58.50-60.00 cents/lb, which is up from 54.75-58.00 cents/lb in the week ended 18 March. Ethylene also traded at 60.625 cents/lb at the end of last week. The deal was not included in the range because information on the transaction became available only after margin calculations had been completed.

Market sources continued to point to strong demand as a reason behind the uptrend in ethylene, but an additional driver emerged last week after Shell announced it would allocate US deliveries of the product in April. The company will restrict US ethylene deliveries to 90% next month, market participants said.

According to sources, Shell is still having difficulties accessing storage wells in Mont Belvieu. That probably caused the spot market to move further up, a source said, referring to the allocation announcement.

Higher energy prices in recent weeks, anticipated supply tightness and constrained supply were all heard as also lending support to spot ethylene. The rise in ethylene spot prices in the last four weeks is expected to trigger another contract increase for the monomer.

Hosted by the National Petrochemical & Refiners Association (NPRA), the IPC continues through
http://www.icis.com/Articles/2011/03/28/9447781/npra-11-us-ethylene-margins-grow-5.7-on-spot-uptrend.html
Opened 6 years ago
Closed 6 years ago

#15537 closed (wontfix)

allow the login_url to accept a relative path

Description

In the development version, the login_required decorator accepts a parameter named login_url:

    @login_required(login_url='/accounts/login/')
    def my_view(request):
        ...

Sometimes, the login_url may depend on the request path (for example, when the path already implies the login name -- the name may even not be known to the user, and the login page would only ask for a password). Allowing a relative path would be great in this case:

    @login_required(login_url='login.html')
    def my_view(request):
        ...

To implement this, simply insert a one-way if-statement into the decorated view function before using login_url:

    def login_required(login_url=None, ...):
        ...
        def decorated_view(request, *args, **kwds):
            ...
            if login_url and not login_url.startswith('/'):
                login_url = request.path + login_url
            ...
        return decorated_view

A more general approach is to have login_url accept a callable that takes the request as a parameter and returns an absolute path; however, I don't have a use case for that yet.

I'm not sure I agree that the proposed feature would be a good thing. Authentication shouldn't be tightly coupled to application design, and the design you're proposing would require either a tightly coupled design or a large number of deployed login views. I'm not sure I want to encourage either of those practices.

If you want to argue for this feature, please start a discussion on django-dev; my suggestion to you would be to try and explain why this sort of design is the right thing, i.e., provide a much better explanation of your use case.
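Stripped of the decorator machinery, the proposed if-statement is just path resolution. A framework-free sketch (resolve_login_url is a made-up helper name; it assumes, as the ticket does, that the request path ends with a slash):

```python
def resolve_login_url(request_path, login_url):
    # The one-way if-statement from the ticket: absolute URLs pass through,
    # relative ones are resolved against the request path.
    if login_url and not login_url.startswith('/'):
        return request_path + login_url
    return login_url

print(resolve_login_url('/accounts/profile/', '/accounts/login/'))  # /accounts/login/
print(resolve_login_url('/vault/secret/', 'login.html'))            # /vault/secret/login.html
```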
https://code.djangoproject.com/ticket/15537
Hello,

I would like to add a six-line header to a data file for many files in a loop. However, I need to add the header to the data properly first... My inclination (as a Python novice) is to write the header and append the data to the header. However, this doesn't work. Can someone recommend a more efficient way to go about adding the header? I am not yet familiar with all the functions and functionality of Python; there has to be a better way to do it. Do I have to read the data in line by line simply to attach the header?

Here is my error:

    Traceback (most recent call last):
      File "C:/work/data/makeheader", line 19, in <module>
        newout=header.append(data)
    AttributeError: 'str' object has no attribute 'append'

I attached the example data file below. Here is my code:

    import os

    # set working directory
    workDIR = 'C:\\work\\data'
    os.chdir(workDIR)
    runlist = os.listdir(workDIR)

    data = open('justdata2.txt', 'a')
    outfile = open('output.txt', 'w')

    row1 = "ncols 1422"
    row2 = "nrows 2044"
    row3 = "xllcorner 409924.44886063"
    row4 = "yllcorner 3631074.3284728"
    row5 = "cellsize 500"
    row6 = "NODATA_value -9999"
    n = '\n'

    header = row1+n+row2+n+row3+n+row4+n+row5+n+row6+n
    newout = header.append(data)
    print newout
    outfile.write(newout)
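The AttributeError above arises because append is a list method; strings are immutable and have no append, so the header and the data have to be concatenated (or written one after the other). A self-contained sketch of that pattern, using throwaway files in a temp directory instead of the poster's paths:

```python
import os
import tempfile

header_lines = ["ncols 1422", "nrows 2044", "xllcorner 409924.44886063",
                "yllcorner 3631074.3284728", "cellsize 500", "NODATA_value -9999"]
header = "\n".join(header_lines) + "\n"

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, 'justdata2.txt')
dst = os.path.join(workdir, 'output.txt')
with open(src, 'w') as f:
    f.write('1.0 2.0 3.0\n')   # stand-in for the real elevation data

# Strings cannot be .append()ed; write the header, then the file contents.
with open(src) as f_in, open(dst, 'w') as f_out:
    f_out.write(header + f_in.read())

with open(dst) as f:
    print(f.readline().rstrip())  # ncols 1422
```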
https://www.daniweb.com/programming/software-development/threads/263700/adding-6-line-header-to-data
The field that you want to sort the RecordSet object by.

The direction to sort the recordset. "DESC" specifies a descending sort; anything else is ascending.

The label that will show in the UI component.

The data that will correspond to the label in the UI component.

The DataGlue object contains two methods for binding data to a UI component. The bindFormatFunction( ) method is best used when the data coming from the recordset or other data provider has to be formatted in a particular way. If the data can be used directly, the bindFormatStrings( ) method is easier to use, because you don't have to define a custom function that formats the data. Simply specify the fields to use for the label and data properties of the data consumer in the method call.

The following example code assumes a combo box named allProducts_cb is present on the main timeline:

    #include "NetServices.as"
    #include "DataGlue.as"

    // Initialize the connection and service objects.
    if (connected == null) {
        connected = true;
        NetServices.setDefaultGatewayUrl("");
        var my_conn = NetServices.createGatewayConnection( );
        var myService = my_conn.getService("com.oreilly.frdg.searchProducts", this);
    }

    // The remote getSearchResult( ) method (not shown) returns a recordset.
    myService.getSearchResult( );

    // Display the product names in the combo box. Use the product IDs as the data.
    function getSearchResult_Result(result_rs) {
        DataGlue.bindFormatStrings(allProducts_cb, result_rs, '#ProductName#', '#ProductID#');
    }

The fields that are utilized in the bindFormatStrings( ) method (ProductName and ProductID) are surrounded by quotes and pound signs (#). The pound signs around the RecordSet fields denote that the field is to be replaced by a field from the data provider (the RecordSet, in this case).

DataGlue.bindFormatFunction( ), the RecordSet class; Chapter 3 and Chapter 4
http://etutorials.org/Macromedia/Fash+remoting.+the+definitive+guide/Part+III+Advanced+Flash+Remoting/Chapter+15.+Flash+Remoting+API/DataGlue.bindFormatStrings/
Opened 12 years ago
Closed 12 years ago
Last modified 12 years ago

#3388 closed defect (fixed)

dods driver crashes on x-flip read, 64bit OSX

Description

gdriver/dods autotest #5 crashes on 64bit OSX 10.6. It's fine on 32bit OSX. It happens in GDALCopyWordsFromT():

    Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
    Exception Codes: KERN_INVALID_ADDRESS at 0x0000000204800598
    Crashed Thread:  0  Dispatch queue: com.apple.main-thread

    Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
    0  org.gdal.gdal     0x00000001011d0a82 void (anonymous namespace)::GDALCopyWordsFromT<float>(float const*, int, bool, void*, GDALDataType, int, int) + 643
    1  org.gdal.gdal     0x0000000101075707 DODSRasterBand::IReadBlock(int, int, void*) + 2845
    2  org.gdal.gdal     0x00000001011ca2a3 GDALRasterBand::GetLockedBlockRef(int, int, int) + 279
    3  org.gdal.gdal     0x00000001011d326e GDALRasterBand::IRasterIO(GDALRWFlag, int, int, int, int, void*, int, int, GDALDataType, int, int) + 402
    4  org.gdal.gdal     0x00000001011f11f4 GDALChecksumImage + 283
    5  _gdal.so          0x0000000100522abd _wrap_Band_Checksum + 1140
    6  org.python.python 0x0000000100017173 PyObject_Call + 112
    ...

Change History (5)

comment:1 by , 12 years ago

comment:2 by , 12 years ago

comment:3 by , 12 years ago

I've fixed this one up (r18738 for trunk, r18739 for 1.7). I had never built the DODS driver, so never saw the unit test failure. Apparently it's using GDALCopyWords with a negative output offset, and since internally GDALCopyWordsT stores the offset amount in an unsigned integer, nasty things were happening.

comment:4 by , 12 years ago

I noticed that and was wondering about it, but figured it had something to do with the flipping. I'll have a chance to test it later this evening.

comment:5 by , 12 years ago

Yep, it's working now.

Note: See TracTickets for help on using tickets.
https://trac.osgeo.org/gdal/ticket/3388
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards There was a discussion about whether tuples should be in a separate namespace or directly in the boost namespace. The common principle is that domain libraries (like graph, python) should be on a separate subnamespace, while utility like libraries directly in the boost namespace. Tuples are somewhere in between, as the tuple template is clearly a general utility, but the library introduces quite a lot of names in addition to just the tuple template. Tuples were originally under a subnamespace. As a result of the discussion, tuple definitions were moved directly under the boost namespace. As a result of a continued discussion, the subnamespace was reintroduced. The final (I truly hope so) solution is now to have all definitions in namespace ::boost::tuples, and the most common names in the ::boost namespace as well. This is accomplished with using declarations (suggested by Dave Abrahams): namespace boost { namespace tuples { ... // All library code ... } using tuples::tuple; using tuples::make_tuple; using tuples::tie; using tuples::get; } With this arrangement, tuple creation with direct constructor calls, make_tuple or tie functions do not need the namespace qualifier. Further, all functions that manipulate tuples are found with Koenig-lookup. The only exceptions are the get<N> functions, which are always called with an explicitly qualified template argument, and thus Koenig-lookup does not apply. Therefore, get is lifted to ::boost namespace with a using declaration. Hence, the interface for an application programmer is in practice under the namespace ::boost. The other names, forming an interface for library writers (cons lists, metafunctions manipulating cons lists, ...) remain in the subnamespace ::boost::tuples. 
Note, that the names ignore, set_open, set_close and set_delimiter are considered to be part of the application programmer's interface, but are still not under boost namespace. The reason being the danger for name clashes for these common names. Further, the usage of these features is probably not very frequent. The subnamespace name tuples raised some discussion. The rationale for not using the most natural name 'tuple' is to avoid having an identical name with the tuple template. Namespace names are, however, not generally in plural form in boost libraries. First, no real trouble was reported for using the same name for a namespace and a class and we considered changing the name 'tuples' to 'tuple'. But we found some trouble after all. Both gcc and edg compilers reject using declarations where the namespace and class names are identical: namespace boost { namespace tuple { ... tie(...); class tuple; ... } using tuple::tie; // ok using tuple::tuple; // error ... } Note, however, that a corresponding using declaration in the global namespace seems to be ok: using boost::tuple::tuple; // ok; Tuples are internally represented as cons lists: tuple<int, int> inherits from cons<int, cons<int, null_type> > null_type is the end mark of the list. Original proposition was nil, but the name is used in MacOS, and might have caused problems, so null_type was chosen instead. Other names considered were null_t and unit (the empty tuple type in SML). Note that null_type is the internal representation of an empty tuple: tuple<> inherits from null_type. Whether to use 0- or 1-based indexing was discussed more than thoroughly, and the following observations were made: bind1st, bind2nd, pair::first, etc. Tuple access with the syntax get<N>(a), or a.get<N>() (where a is a tuple and N an index), was considered to be of the first category, hence, the index of the first element in a tuple is 0. A suggestion to provide 1-based 'name like' indexing with constants like _1st, _2nd, _3rd, ... 
was made. By suitably chosen constant types, this would allow alternative syntaxes:

    a.get<0>() == a.get(_1st) == a[_1st] == a(_1st);

We chose not to provide more than one indexing method for the following reasons:

- The tuple library should not reserve common names (_1st, _2nd, ...) from the global namespace. Let the binding and lambda libraries use these for a better purpose.
- The syntax a[_1st] (or a(_1st)) is appealing, and almost made us add the index constants after all. However, 0-based subscripting is so deep in C++, that we had a fear of confusion.

The comparison operator implements lexicographical order. Other orderings were considered, mainly dominance (a < b iff for each i: a(i) < b(i)). Our belief is that lexicographical ordering, though not mathematically the most natural one, is the most frequently needed ordering in everyday programming.

The characters specified with tuple stream manipulators are stored within the space allocated by ios_base::xalloc, which allocates storage for long type objects. static_cast is used in casting between long and the stream's character type. Streams that have character types not convertible back and forth to long thus fail to compile. This may be revisited at some point. The two possible solutions are:

- Allow only char types as the tuple delimiters, and use widen and narrow to convert between char and the real character type of the stream. This would always compile, but some calls to set manipulators might result in a different character than expected (some default character).
- Allocate storage that can hold delimiters of the stream's real character type, rather than the long slots provided by ios_base::xalloc. Any volunteers?
http://www.boost.org/doc/libs/1_57_0/libs/tuple/doc/design_decisions_rationale.html
GETC(3)                 NetBSD Library Functions Manual                 GETC(3)

NAME
     fgetc, getc, getchar, getc_unlocked, getchar_unlocked, getw -- get next
     character or word from input stream

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <stdio.h>

     int fgetc(FILE *stream);
     int getc(FILE *stream);
     int getchar(void);
     int getc_unlocked(FILE *stream);
     int getchar_unlocked(void);
     int getw(FILE *stream);

DESCRIPTION
     The fgetc() function obtains the next input character (if present) from
     the stream pointed at by stream, or the next character pushed back on
     the stream via ungetc(3).

     The getc() function acts essentially identically to fgetc(), but is a
     macro that expands in-line.  The getchar() function is equivalent to
     getc() with the argument stdin.

     The getc_unlocked() and getchar_unlocked() functions provide
     functionality identical to that of getc() and getchar(), respectively,
     but do not perform implicit locking of the streams they operate on.

     The getw() function obtains the next int (if present) from the stream
     pointed at by stream.

RETURN VALUES
     If successful, these routines return the next requested object from the
     stream.  If the stream is at end-of-file or a read error occurs, they
     return EOF.

SEE ALSO
     ferror(3), flockfile(3), fopen(3), fread(3), ftrylockfile(3),
     funlockfile(3), putc(3), ungetc(3)

STANDARDS
     The fgetc(), getc() and getchar() functions conform to ANSI X3.159-1989
     (``ANSI C89'').  The getc_unlocked() and getchar_unlocked() functions
     conform to ISO/IEC 9945-1:1996 (``POSIX.1'').

HISTORY
     The getc() and getw() functions appeared in Version 1 AT&T UNIX.

BUGS
     Since EOF is a valid integer value, feof(3) and ferror(3) must be used
     to check for failure after calling getw().  The size and byte order of
     an int varies from one machine to another, and getw() is not recommended
     for portable applications.

NetBSD 9.1                     September 2, 2019                    NetBSD 9.1
https://man.netbsd.org/NetBSD-9.1/i386/fgetc.3
SBT

Importing an sbt project

- Click Import Project or Open on the welcome screen.
- In the dialog that opens, select a directory that contains your sbt project, or simply build.sbt. Click OK.
- Follow the steps suggested in the Import Project wizard. You can use the suggested default settings, since they are enough to successfully import your project.

We recommend that you enable the Use sbt shell for build and import (requires sbt 0.13.5+) option when you use code generation or other features that modify the build process in sbt. If your sbt project is not in the IntelliJ IDEA project root directory, we suggest you skip this option. You can also select the appropriate option for grouping modules in your project.

Ensuring sbt and Scala versions compatibility

Often you share your project across a team and need to use a specific version of sbt. You can override the sbt version in your project's build.properties file.

- Create or import your sbt project.
- In the Project tool window, in the source root directory, locate the build.properties file and open it in the editor.
- In the editor, explicitly specify the version of sbt that you want to use in the project: sbt.version=xxx
- Refresh your project. (Click the refresh icon in the sbt projects tool window.)

Managing sbt projects

sbt project structure

When you create or import an sbt project, IntelliJ IDEA generates the following sbt structure:

- The sbt project (proper build), which defines a project and contains the build.sbt file, src and target directories, and modules; anything related to a regular project.
- The sbt build project, which is defined in the project subdirectory. It contains additional code that is part of the build definition.
- The sbt projects tool window, which contains sbt tasks, commands, and settings that you can execute.

When you work with sbt projects, you use the build.sbt file to make main changes to your project, since IntelliJ IDEA considers an sbt configuration as a single source of truth.
Adding a library to the sbt project

You can add sbt dependencies via the build.sbt file, or you can use the import statement in your .scala file.

- Open a .scala file in the editor.
- Specify a library you want to import.
- Put the cursor on the unresolved package and press Alt+Enter.
- From the list of available intention actions, select Add sbt dependency.
- Follow the steps suggested in the wizard that opens and click Finish.
- IntelliJ IDEA downloads the artifact, and adds the dependency to the build.sbt file and to the sbt projects tool window.
- As soon as IntelliJ IDEA detects changes in build.sbt, a notification suggesting to refresh your project will appear. Refresh your project. (Click the refresh icon in the sbt projects tool window.) Alternatively, use the auto-import option located in the sbt settings.

Working with sbt shell

An sbt shell is embedded in the sbt project and is available on your project start. You can use the sbt shell for executing sbt commands and tasks, and for running and debugging your projects.

- To start the sbt shell, press Ctrl+Shift+S (for Windows) or Cmd+Shift+S (for Mac OS X). Alternatively, click the sbt shell button on the toolbar located at the bottom of the screen.
- To use the sbt shell for build and import procedures, select the Use sbt shell for build and import (requires sbt 0.13.5+) option located in the sbt settings and perform the steps described in the Run a Scala application using the sbt shell section. Note that sbt versions 0.13.16+ / 1.0.3+ are recommended.
- To use the sbt shell for debugging, refer to the debugging with sbt shell section.
- To run your tests from the sbt shell:
  - Open a run/debug configuration.
  - Create a test configuration and select the use sbt option from the available settings.

Running sbt tasks

- You can run sbt tasks by selecting the one you need from the sbt Tasks directory in the sbt projects tool window.
- You can manually enter your task (code completion is supported) in the sbt shell and run it directly from there.
- You can create a run configuration for a task. For example, you can create a custom task which is not part of the list of tasks located in the sbt projects tool window.
  - Open a run configuration (Shift+Alt+F10).
  - Specify the run configuration settings and click OK. If you need, you can add another configuration or a task to execute before running your configuration. Click the add (+) icon in the Before Launch section, and from the list that opens select what you need to execute.

IntelliJ IDEA displays results in the sbt shell window.

Working with sbt settings

To access sbt settings, click the settings icon in the sbt projects tool window. You can use sbt settings for the following notable actions:

- If you want sbt to automatically refresh your project every time you make changes to build.sbt, select Use auto-import.
- To delegate running builds to sbt, select Use sbt shell for build and import.
- To debug your code via the sbt shell, select the Enable debugging for sbt shell option, which enables a debug button in the sbt shell tool window. To start the debugging session, simply click this button. For more information on debugging, see debugging with sbt.
- To change the .ivy cache location in your project or set other sbt properties, use the VM parameters field.

To check the most common sbt issues and workarounds, see the sbt troubleshooting section.
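As a concrete sketch of the two files discussed above (the library and version numbers are arbitrary examples, not prescribed by the documentation), a team could pin the sbt version with a one-line project/build.properties containing sbt.version=0.13.16, and declare a dependency in build.sbt:

```scala
// build.sbt -- declare a test-scope dependency; after saving, refresh
// the project (or enable Use auto-import) so IntelliJ IDEA resolves it
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % Test
```

On the next refresh, the dependency appears under the module's external libraries, the same end state as the Add sbt dependency intention action described above.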
https://www.jetbrains.com/help/idea/2018.2/sbt-support.html
How can I draw a dotted line in AWT?

Created May 4, 2012
John Zukowski

Prior to introducing the Java 2D API into the Java 2 platform, all lines drawn were single-pixel-wide solid lines of a single color. So, if you wanted a dotted line in 1.0 or 1.1, you had to do all the intermediate drawing steps yourself. With the Java 2D API though, you can define a Stroke to describe the drawing pen. For dotted lines, you would need to use the BasicStroke constructor that accepts six arguments:

    public BasicStroke(float width, int cap, int join,
                       float miterlimit, float[] dash, float dash_phase)

So, for a dotted line, an example might look like:

    import java.awt.*;
    import java.awt.geom.*;
    import java.applet.*;

    public class Stroked extends Applet {
        public void paint(Graphics g) {
            Graphics2D g2d = (Graphics2D) g;
            Rectangle2D rectangle = new Rectangle2D.Double(20, 20, 200, 100);
            g2d.setColor(Color.blue);
            g2d.setStroke(new BasicStroke(1f, BasicStroke.CAP_ROUND,
                BasicStroke.JOIN_ROUND, 1f, new float[] {2f}, 0f));
            g2d.draw(rectangle);
        }
    }
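The dash array is a repeating pattern of opaque and transparent segment lengths, so other patterns fall out of the same six-argument constructor. A hypothetical dash-dot variant (my addition, not from the original answer; it needs no display, so it runs headless):

```java
import java.awt.BasicStroke;

public class DashDotDemo {
    public static void main(String[] args) {
        // Pattern: 10px dash, 4px gap, 2px dot, 4px gap, then repeat
        BasicStroke dashDot = new BasicStroke(
                1f, BasicStroke.CAP_BUTT, BasicStroke.JOIN_MITER,
                1f, new float[] {10f, 4f, 2f, 4f}, 0f);
        // The pattern is retrievable from the stroke
        System.out.println(dashDot.getDashArray().length);
    }
}
```

Passing such a stroke to g2d.setStroke() in the paint method above would draw the rectangle with a dash-dot outline instead of uniform dots.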
http://www.jguru.com/faq/view.jsp?EID=114099
On Mon, 22 Feb 2010 18:17:00 -0500 Oren Laadan <orenl@cs.columbia.edu> wrote:

> Hi Andrew,
>
> We've put a stake in the ground for our next set of checkpoint/restart
> patches, v19. It has some great new stuff, and we put extra effort to
> address your concerns. We would like to have the code included in -mm
> for wider feedback and testing.
>
> This one is able to checkpoint/restart screen and vnc sessions, and
> live-migrate network servers between hosts. It also adds support for
> x86-64 (in addition to x86-32, s390x and powerpc). It is rebased to
> kernel 2.6.33-rc8.
>
> Since one of your main concerns was about what is not yet implemented
> and how complicated or ugly it will be to support that, we've put up
> a wiki page to address that. In it there is a simple table that lists
> what is not implemented and the anticipated solution impact, and for
> some entries a link to more details.
>
> The page is here:

Does "Refuses to Checkpoint" mean that an attempt to checkpoint will fail, return the failure to userspace and the system continues as before?

> We want to stress that the patchset is already very useful as-is. We
> will keep working to implement more features cleanly. Some features we
> are working on include network namespaces and device configurations,
> mounts and mounts namespaces, and file locks. Should a complicated
> feature prove hard to implement, users have alternatives systems like
> kvm, until we manage to come up with a clean solution.
>
> We believe that maintenance is best addressed through testing. We now
> have a comprehensive test-suite to automatically find regressions.
> In addition, we ran LTP and the results are the same with CHECKPOINT=n
> and =y.
>
> If desired we'll send the whole patchset to lkml, but the git trees
> can be seen at:
>
> kernel:;a=summary
> user tools:;a=summary
> tests suite:;a=summary

I'd suggest waiting until very shortly after 2.6.34-rc1 then please send all the patches onto the list and let's get to work.
http://lkml.org/lkml/2010/3/1/422
Tutorial: Automate tasks to process emails by using Azure Logic Apps, Azure Functions, and Azure Storage

Azure Logic Apps helps you automate workflows and integrate data across Azure services, Microsoft services, other software-as-a-service (SaaS) apps, and on-premises systems. This tutorial shows how you can build a logic app that handles incoming emails and any attachments. This logic app analyzes the email content, saves the content to Azure storage, and sends notifications for reviewing that content. In this tutorial, you learn how to:

- Set up Azure storage and Storage Explorer for checking saved emails and attachments.
- Create an Azure function that removes HTML from emails. This tutorial includes the code that you can use for this function.
- Create a blank logic app.
- Add a trigger that monitors emails for attachments.
- Add a condition that checks whether emails have attachments.
- Add an action that calls the Azure function when an email has attachments.
- Add an action that creates storage blobs for emails and attachments.
- Add an action that sends email notifications.

When you're done, your logic app looks like this workflow at a high level:

Prerequisites

- An Azure subscription. If you don't have an Azure subscription, sign up for a free Azure account.
- An email account from an email provider supported by Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, review the connectors list here. This logic app uses an Office 365 Outlook account. If you use a different email account, the general steps stay the same, but your UI might appear slightly different.
- Download and install the free Microsoft Azure Storage Explorer. This tool helps you check that your storage container is correctly set up.
- Sign in to the Azure portal with your Azure account credentials.

Set up storage to save attachments

You can save incoming emails and attachments as blobs in an Azure storage container.
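The tutorial notes several times that the portal steps can also be done with Azure PowerShell or Azure CLI. As a hypothetical CLI sketch of the storage setup that follows (the resource names are examples, and it assumes you are already signed in with az login):

```shell
# Create a resource group and the storage account (example names)
az group create --name attachments-rg --location westus
az storage account create \
  --name attachmentstorageacct \
  --resource-group attachments-rg \
  --sku Standard_LRS \
  --kind StorageV2

# Retrieve an access key, then create the "attachments" blob container
key=$(az storage account keys list \
  --account-name attachmentstorageacct \
  --resource-group attachments-rg \
  --query "[0].value" --output tsv)
az storage container create \
  --name attachments \
  --account-name attachmentstorageacct \
  --account-key "$key" \
  --public-access container
```

The retrieved key is the same value that the portal shows under Access keys, and is what Storage Explorer and the logic app connection ask for later.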
Before you can create a storage container, create a storage account with these settings on the Basics tab in the Azure portal:

On the Advanced tab, select this setting:

To create your storage account, you can also use Azure PowerShell or Azure CLI. When you're done, select Review + create.

After Azure deploys your storage account, find your storage account, and get the storage account's access key:

1. On your storage account menu, under Settings, select Access keys.
2. Copy your storage account name and key1, and save those values somewhere safe.

To get your storage account's access key, you can also use Azure PowerShell or Azure CLI.

Create a blob storage container for your email attachments:

1. On your storage account menu, select Overview.
2. Under Services, select Containers.
3. After the Containers page opens, on the toolbar, select Container.
4. Under New container, enter attachments as your container name. Under Public access level, select Container (anonymous read access for containers and blobs) > OK.

When you're done, you can find your storage container in your storage account here in the Azure portal:

To create a storage container, you can also use Azure PowerShell or Azure CLI.

Next, connect Storage Explorer to your storage account.

Set up Storage Explorer

Now, connect Storage Explorer to your storage account so you can confirm that your logic app can correctly save attachments as blobs in your storage container.

1. Launch Microsoft Azure Storage Explorer. Storage Explorer prompts you for a connection to your storage account.
2. In the Connect to Azure Storage pane, select Use a storage account name and key > Next.

   Tip: If no prompt appears, on the Storage Explorer toolbar, select Add an account.

3. Under Display name, provide a friendly name for your connection. Under Account name, provide your storage account name. Under Account key, provide the access key that you previously saved, and select Next.
4. Confirm your connection information, and then select Connect.
Storage Explorer creates the connection, and shows your storage account in the Explorer window under Local & Attached > Storage Accounts.

To find your blob storage container, under Storage Accounts, expand your storage account, which is attachmentstorageacct here, and expand Blob Containers, where you find the attachments container, for example:

Next, create an Azure function that removes HTML from incoming email.

Create function to clean HTML

Now, use the code snippet provided by these steps to create an Azure function that removes HTML from each incoming email. That way, the email content is cleaner and easier to process. You can then call this function from your logic app.

Before you can create a function, create a function app with these settings:

If your function app doesn't automatically open after deployment, in the Azure portal search box, find and select Function App. Under Function App, select your function app. Otherwise, Azure automatically opens your function app as shown here:

To create a function app, you can also use Azure CLI, or PowerShell and Resource Manager templates.

1. In the Function Apps list, expand your function app, if not already expanded. Under your function app, select Functions. On the functions toolbar, select New function.
2. Under Choose a template below or go to the quickstart, select the HTTP trigger template.

   Azure creates a function using a language-specific template for an HTTP triggered function.

3. In the New Function pane, under Name, enter RemoveHTMLFunction. Keep Authorization level set to Function, and select Create.
After the editor opens, replace the template code with this sample code, which removes the HTML and returns results to the caller:

    #r "Newtonsoft.Json"

    using System.Net;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Primitives;
    using Newtonsoft.Json;
    using System.Text.RegularExpressions;

    public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
    {
        log.LogInformation("HttpWebhook triggered");

        // Read the request body passed by the caller
        string emailBodyContent = await new StreamReader(req.Body).ReadToEndAsync();

        // Replace HTML tags and escaped whitespace with other characters
        string updatedBody = Regex.Replace(emailBodyContent, "<.*?>", string.Empty);
        updatedBody = updatedBody.Replace("\\r\\n", " ");
        updatedBody = updatedBody.Replace(@"&nbsp;", " ");

        // Return cleaned text
        return (ActionResult)new OkObjectResult(new { updatedBody });
    }

When you're done, select Save.

To test your function, at the editor's right edge, under the arrow (<) icon, select Test. In the Test pane, under Request body, enter this line, and select Run.

    {"name": "<p><p>Testing my function</br></p></p>"}

The Output window shows the function's result:

    {"updatedBody":"{\"name\": \"Testing my function\"}"}

After checking that your function works, create your logic app. Although this tutorial shows how to create a function that removes HTML from emails, Logic Apps also provides an HTML to Text connector.

Create your logic app

1. From the Azure home page, in the search box, find and select Logic Apps.
2. On the Logic Apps page, select Add.
3. Under Create logic app, provide details about your logic app as shown here. After you're done, select Create.
4. After Azure deploys your app, on the Azure toolbar, select the notifications icon, and select Go to resource. The Logic Apps Designer opens and shows a page with an introduction video and templates for common logic app patterns.
5. Under Templates, select Blank Logic App.

Next, add a trigger that listens for incoming emails that have attachments.
Every logic app must start with a trigger, which fires when a specific event happens or when new data meets a specific condition. For more information, see Create your first logic app.

Monitor incoming email

1. On the designer, in the search box, enter when a new email arrives as your filter.
2. Select this trigger for your email provider: When a new email arrives - <your-email-provider>

   For example:
   - For Azure work or school accounts, select Office 365 Outlook.
   - For personal Microsoft accounts, select Outlook.com.

3. If you're asked for credentials, sign in to your email account so Logic Apps can connect to your email account.
4. Now provide the criteria the trigger uses to filter new email. Specify the settings described below for checking emails.
5. From the Add new parameter list, select Subject Filter. After the Subject Filter box appears in the action, specify the subject as listed here:
6. To hide the trigger's details for now, click inside the trigger's title bar.
7. Save your logic app. On the designer toolbar, select Save.

Your logic app is now live but doesn't do anything other than check your emails. Next, add a condition that specifies criteria to continue the workflow.

Check for attachments

Now add a condition that selects only emails that have attachments.

1. Under the trigger, select New step.
2. Under Choose an action, in the search box, enter condition. Select this action: Condition
3. Rename the condition with a better description. On the condition's title bar, select the ellipses (...) button > Rename. Rename your condition with this description: If email has attachments and key subject phrase
4. Create a condition that checks for emails that have attachments.
   - On the first row under And, click inside the left box. From the dynamic content list that appears, select the Has Attachment property.
   - In the middle box, keep the operator is equal to.
   - In the right box, enter true as the value to compare with the Has Attachment property value from the trigger.
If both values are equal, the email has at least one attachment, the condition passes, and the workflow continues. In your underlying logic app definition, which you can view in the code editor window, this condition looks like this example:

    "Condition": {
        "actions": {
            <actions-to-run-when-condition-passes>
        },
        "expression": {
            "and": [
                {
                    "equals": [
                        "@triggerBody()?['HasAttachment']",
                        "true"
                    ]
                }
            ]
        },
        "runAfter": {},
        "type": "If"
    }

Save your logic app. On the designer toolbar, select Save.

Test your condition

Now, test whether the condition works correctly:

1. If your logic app isn't running already, select Run on the designer toolbar. This step manually starts your logic app without having to wait until your specified interval passes. However, nothing happens until the test email arrives in your inbox.
2. Send yourself an email that meets these criteria:
   - Your email's subject has the text that you specified in the trigger's Subject filter: Business Analyst 2 #423501
   - Your email has one attachment. For now, just create one empty text file and attach that file to your email.

   When the email arrives, your logic app checks for attachments and the specified subject text. If the condition passes, the trigger fires and causes the Logic Apps engine to create a logic app instance and start the workflow.

3. To check that the trigger fired and the logic app ran successfully, on the logic app menu, select Overview. If your logic app didn't trigger or run despite a successful trigger, see Troubleshoot your logic app.

Next, define the actions to take for the If true branch. To save the email along with any attachments, remove any HTML from the email body, then create blobs in the storage container for the email and attachments.

Note: Your logic app doesn't have to do anything for the If false branch when an email doesn't have attachments. As a bonus exercise after you finish this tutorial, you can add any appropriate action that you want to take for the If false branch.
Call RemoveHTMLFunction

This step adds your previously created Azure function to your logic app and passes the email body content from the email trigger to your function.

1. On the logic app menu, select Logic App Designer. In the If true branch, select Add an action.
2. In the search box, find "azure functions", and select this action: Choose an Azure function - Azure Functions
3. Select your previously created function app, which is CleanTextFunctionApp in this example:
4. Now select your function: RemoveHTMLFunction
5. Rename your function shape with this description: Call RemoveHTMLFunction to clean email body
6. Now specify the input for your function to process. Under Request Body, enter this text with a trailing space:

   While you work on this input in the next steps, an error about invalid JSON appears until your input is correctly formatted as JSON. When you previously tested this function, the input specified for this function used JavaScript Object Notation (JSON). So, the request body must also use the same format. Also, when your cursor is inside the Request body box, the dynamic content list appears so you can select property values available from previous actions.

7. From the dynamic content list, under When a new email arrives, select the Body property. After this property, remember to add the closing curly brace: }

   When you're done, the input to your function looks like this example:

8. Save your logic app.

Next, add an action that creates a blob in your storage container so you can save the email body.

Create blob for email body

1. In the If true block and under your Azure function, select Add an action.
2. In the search box, enter create blob as your filter, and select this action: Create blob
3. Create a connection to your storage account with these settings as shown and described here. When you're done, select Create.
4. Rename the Create blob action with this description: Create blob for email body
5. In the Create blob action, provide this information, and select these fields to create the blob as shown and described:

   When you're done, the action looks like this example:

6. Save your logic app.

Check attachment handling

Now test whether your logic app handles emails the way that you specified:

1. If your logic app isn't running already, select Run on the designer toolbar.
2. Send yourself an email that meets these criteria:
   - Your email's subject has the text that you specified in the trigger's Subject filter: Business Analyst 2 #423501
   - Your email has at least one attachment. For now, just create one empty text file, and attach that file to your email.
   - Your email has some test content in the body, for example: Testing my logic app

   If your logic app didn't trigger or run despite a successful trigger, see Troubleshoot your logic app.

3. Check that your logic app saved the email to the correct storage container.
   - In Storage Explorer, expand Local & Attached > Storage Accounts > attachmentstorageacct (Key) > Blob Containers > attachments.
   - Check the attachments container for the email. At this point, only the email appears in the container because the logic app doesn't process the attachments yet.
   - When you're done, delete the email in Storage Explorer.
4. Optionally, to test the If false branch, which does nothing at this time, you can send an email that doesn't meet the criteria.

Next, add a loop to process all the email attachments.

Process attachments

To process each attachment in the email, add a For each loop to your logic app's workflow.

1. Under the Create blob for email body shape, select Add an action.
2. Under Choose an action, in the search box, enter for each as your filter, and select this action: For each
3. Rename your loop with this description: For each email attachment
4. Now specify the data for the loop to process.
5. Click inside the Select an output from previous steps box so that the dynamic content list opens, and then select Attachments.

   The Attachments field passes in an array that contains all the attachments included with an email. The For each loop repeats actions on each item that's passed in with the array.

6. Save your logic app.

Next, add the action that saves each attachment as a blob in your attachments storage container.

Create blob for each attachment

1. In the For each email attachment loop, select Add an action so you can specify the task to perform on each found attachment.
2. In the search box, enter create blob as your filter, and then select this action: Create blob
3. Rename the Create blob 2 action with this description: Create blob for each email attachment
4. In the Create blob for each email attachment action, provide this information, and select the properties for each blob you want to create as shown and described:

   When you're done, the action looks like this example:

5. Save your logic app.

Check attachment handling

Next, test whether your logic app handles the attachments the way that you specified:

1. If your logic app isn't running already, select Run on the designer toolbar.
2. Send yourself an email that meets these criteria:
   - Your email's subject has the text that you specified in the trigger's Subject filter property: Business Analyst 2 #423501
   - Your email has one or more attachments.
3. In Storage Explorer, expand Local & Attached > Storage Accounts > attachmentstorageacct (Key) > Blob Containers > attachments. Check the attachments container for both the email and the attachments.
4. When you're done, delete the email and attachments in Storage Explorer.

Next, add an action so that your logic app sends email to review the attachments.

Send email notifications

1. In the If true branch, under the For each email attachment loop, select Add an action.
2. In the search box, enter send email as your filter, and then select the "send email" action for your email provider.
   To filter the actions list to a specific service, you can select the connector first.
   - For Azure work or school accounts, select Office 365 Outlook.
   - For personal Microsoft accounts, select Outlook.com.

3. If you're asked for credentials, sign in to your email account so that Logic Apps creates a connection to your email account.
4. Rename the Send an email action with this description: Send email for review
5. Provide the information for this action and select the fields you want to include in the email as shown and described. To add blank lines in an edit box, press Shift + Enter. If you can't find an expected field in the dynamic content list, select See more next to When a new email arrives.

   Note: If you select a field that contains an array, such as the Content field, which is an array that contains attachments, the designer automatically adds a "For each" loop around the action that references that field. That way, your logic app can perform that action on each array item. To remove the loop, remove the field for the array, move the referencing action to outside the loop, select the ellipses (...) on the loop's title bar, and select Delete.

6. Save your logic app.

Now, test your logic app, which now looks like this example:

Run your logic app

Send yourself an email that meets these criteria:

- Your email's subject has the text that you specified in the trigger's Subject filter property: Business Analyst 2 #423501
- Your email has one or more attachments. You can reuse an empty text file from your previous test. For a more realistic scenario, attach a resume file.
The email body has this text, which you can copy and paste:

    Name: Jamal Hartnett
    Street address: 12345 Anywhere Road
    City: Any Town
    State or Country: Any State
    Postal code: 00000
    Email address: jamhartnett@outlook.com
    Phone number: 000-000-0000
    Position: Business Analyst 2 #423501
    Technical skills: Dynamics CRM, MySQL, Microsoft SQL Server, JavaScript, Perl, Power BI, Tableau, Microsoft Office: Excel, Visio, Word, PowerPoint, SharePoint, and Outlook
    Professional skills: Data, process, workflow, statistics, risk analysis, modeling; technical writing, expert communicator and presenter, logical and analytical thinker, team builder, mediator, negotiator, self-starter, self-managing
    Certifications: Six Sigma Green Belt, Lean Project Management
    Language skills: English, Mandarin, Spanish
    Education: Master of Business Administration

Run your logic app. If successful, your logic app sends you an email that looks like this example:

If you don't get any emails, check your email's junk folder. Your email junk filter might redirect these kinds of mails. Otherwise, if you're unsure that your logic app ran correctly, see Troubleshoot your logic app.

Congratulations, you've now created and run a logic app that automates tasks across different Azure services and calls some custom code.

Clean up resources

When you no longer need this sample, delete the resource group that contains your logic app and related resources.

1. On the main Azure menu, select Resource groups.
2. From the resource groups list, select the resource group for this tutorial.
3. On the Overview pane, select Delete resource group.
4. When the confirmation pane appears, enter the resource group name, and select Delete.

Next steps

In this tutorial, you created a logic app that processes and stores email attachments by integrating Azure services, such as Azure Storage and Azure Functions. Now, learn more about other connectors that you can use to build logic apps.
https://docs.microsoft.com/en-us/azure/logic-apps/tutorial-process-email-attachments-workflow
CC-MAIN-2019-51
refinedweb
3,348
63.19
Quine in ActionScript 3

In computing, a quine is a computer program which produces a copy of its own source code as its only output. For amusement, programmers sometimes attempt to develop the shortest possible quine in any given programming language. While playing with wonderfl I found a nice ActionScript quine solution from yonatan, which I decided to shorten. The code would be shorter using trace(), but then you wouldn't be able to log/see the output on wonderfl, which is why a TextField is used.

... and here is the result. The source code is the same as the output:

package{
import flash.display.*;
import flash.text.*;
public class Quine extends Sprite{
public function Quine(){
var t:*=addChild(new TextField),q:*=<![CDATA[package{
import flash.display.*;
import flash.text.*;
public class Quine extends Sprite{
public function Quine(){
var t:*=addChild(new TextField),q:*=<![CDATA[2]>+'';
t.text=q.replace(1+1,q+']'),t.width=t.height=465}}}]]>+'';
t.text=q.replace(1+1,q+']'),t.width=t.height=465}}}

Feel free to fork it and try to shorten the source even further. Another ActionScript quine attempts:
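For comparison, the same trick is much shorter in some other languages. Here is the classic minimal Python quine (my own addition for illustration, unrelated to the ActionScript code above): the string holds a placeholder for itself, so formatting the string with itself reproduces the two-line program exactly.

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running those two lines prints exactly those two lines: `%r` expands to the repr of the string (quotes and escapes included) and `%%` collapses to a single `%`.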
http://blog.yoz.sk/2010/03/quine-in-actionscript-3/
CC-MAIN-2013-20
refinedweb
182
56.86
A simple Python package for creating or reading GDSII layout files.

- Documentation: complete documentation can be found at:
- Download: the package can be downloaded for installation via easy_install at:
- Gallery

A Simple Example

Here is a simple example that shows the creation of some text with alignment features. It involves the creation of drawing geometry, a Cell, and a Layout. The result is saved as a GDSII file, and also displayed to the screen:

import os.path
from gdsCAD import *

# Create some things to draw:
amarks = templates.AlignmentMarks(('A', 'C'), (1,2))
text = shapes.Label('Hello\nworld!', 200, (0, 0))
box = shapes.Box((-500, -400), (1500, 400), 10, layer=2)

# Create a Cell to hold the objects
cell = core.Cell('EXAMPLE')
cell.add([text, box])
cell.add(amarks, origin=(-200, 0))
cell.add(amarks, origin=(1200, 0))

# Create two copies of the Cell
top = core.Cell('TOP')
cell_array = core.CellArray(cell, 1, 2, (0, 850))
top.add(cell_array)

# Add the copied cell to a Layout and save
layout = core.Layout('LIBRARY')
layout.add(top)
layout.save('output.gds')
layout.show()

Recent Changes

- v0.4.5 (05.02.15)
  - Added to_path and to_boundary conversion methods
  - Added experimental DXFImport
- v0.4.4 (12.12.14)
  - Added Ellipse boundary (cjermain)
  - Added missing area method to base classes
  - Fixed bug when objects are defined with integers then translated by float (cjermain)
  - Added missing flatten method
- v0.4.3 (07.10.14)
  - (bugfix) Boundaries again accept non-numpy point lists
  - Removed deprecated labels attribute from Cell
  - Reduced internal uses of Cell._references
- v0.4.2 (15.09.14)
  - (bugfix) Boundaries are now closed as they should be (thanks Phil)
  - gdsImport loads all Boundary points (including the final closing point) from file
- v0.4.1 (05.06.14)
  - Allow Boundaries with an unlimited number of points via multiple XY entries
- v0.4.0 (07.05.14)
  - Several performance improvements: Layout saving, reference selection, and bounding boxes should all be faster
  - Layout save now only uniquifies cell names that are not already unique
- v0.3.7 (14.02.14)
  - More colors for layer numbers greater than six (Matthias Blaicher)
- v0.3.6 (12.12.13) bugfix
  - Fixed installation to include missing resource files
- v0.3.5 (11.12.13 PM) bugfix
  - Introduced automatic version numbering
  - git_version module is now included in distribution (thanks Matthias)
- v0.3.2 (11.12.13)
  - CellArray spacing can now be non-orthogonal
  - Block will now take cell spacing information from the attribute cell.spacing
- v0.3.1 (06.12.13)
  - Added support for Hershey Fonts. Thanks to Matthias Blaicher.
https://pypi.org/project/gdsCAD/
CC-MAIN-2018-43
refinedweb
454
58.48
Build a calculator for a website
Budget $30-250 CAD

I'm looking for help with a small project to create a calculator for rate of return (Money Weighted & Time Weighted) and to upload it to my personal website.

29 freelancers are bidding an average of $171 for this job

Hi I am professional & highly experinced Web programmer i do this job with good perfection please come on chat so we can discuss further regarding the project Thanks, Manish

Hi, I am highly interested to build the calculator. I am a full stack website developer. I have more than 5 years experience. I will make sure the best quality service. Thanks

Hi. I am a full time developer and have high skills in web develop by JQuery, PHP & MySQL. So I can complete this project. My price and time is negotiable. Let's discuss details via chat. I'll provide best service.

Dear sir, I can do your project. I am professional website developer i will add calculator in your present website i have solid command creating anything in website. waiting for your positive response Best Regards
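The two measures the post asks for are standard finance formulas: time-weighted return geometrically links sub-period returns, while money-weighted return is the internal rate of return of the cash flows. A minimal sketch of both (function names and the IRR bracketing are my own choices, not anything from the job post):

```python
def time_weighted_return(period_returns):
    # Geometric linking: (1+r1)(1+r2)... - 1
    total = 1.0
    for r in period_returns:
        total *= (1.0 + r)
    return total - 1.0

def money_weighted_return(cash_flows):
    # Money-weighted return == IRR of the cash flows, where
    # cash_flows[t] occurs at period t; deposits are negative,
    # the ending value (or withdrawals) positive.
    def npv(rate):
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    # Bisection; assumes the IRR lies inside this bracket.
    lo, hi = -0.99, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0
```

For a real calculator you would wire these up to a form on the site; the numeric core is only a handful of lines.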
https://www.dk.freelancer.com/projects/graphic-design/build-calculator-for-website/
CC-MAIN-2019-18
refinedweb
185
65.83
DZone Snippets is a public source code repository. Easily build up your personal collection of code snippets, categorize them with tags/keywords, and share them with the world.

Canvas And Its Callbacks In OO Code

I learned to use two different types of Canvas callbacks in the last snippet. Typically, when I wrote non-OO code, I would use

app.body = c = Canvas()

where I already had

from appuifw import *

The shortcoming is that I need to define the callbacks first, then pass them to the constructor:

c = Canvas(redraw_callback, event_callback)

By using OO, the canvas is created in __init__() and it can access other methods that come later in the code. In this case, I use

Canvas(self.update)

which means that self.update will be used to redraw the screen.

The second way to use a callback is the Canvas.bind() method. I have always used this approach to bind an event callback to a canvas. In some cases, the event_callback in the constructor may be more elegant, though. Notice my use of

self.canvas.bind(EKeySelect, self.toggle)

Here I can bind the select key to self.toggle, whose definition will follow. This is more convenient than having to define it first. So I think OO code is easier to write in this way.

I also use class variables instead of instance variables. I found declaring them outside __init__() more natural and similar to my previous non-OO approach (still easy to read, with variable & def declarations). When I write self.myvar inside __init__(), I feel the code is somewhat bloated. The class will have only one instance anyway.
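The wiring described above can be sketched with a stand-in Canvas class. The real Canvas lives in Symbian's appuifw module (not available off-phone), so this stub, and the placeholder key constant, are my own stand-ins to show the two callback styles:

```python
class Canvas:
    """Minimal stand-in for appuifw.Canvas, just to show the callback wiring."""
    def __init__(self, redraw_callback=None):
        self.redraw_callback = redraw_callback
        self.bindings = {}

    def bind(self, key, callback):
        self.bindings[key] = callback

EKeySelect = 0xF845  # placeholder; the real constant comes from key_codes

class App:
    def __init__(self):
        # self.update is defined further down the class, but since the
        # canvas is created inside __init__ we can reference it already
        self.canvas = Canvas(redraw_callback=self.update)
        # second style: bind an event callback after construction
        self.canvas.bind(EKeySelect, self.toggle)
        self.on = False

    def update(self, rect=None):
        # redraw callback: would repaint the canvas on a real device
        return "redrawn"

    def toggle(self):
        # bound to the select key via Canvas.bind()
        self.on = not self.on

app = App()
```

On a device the framework would invoke these callbacks; here they can be called directly to see the flow.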
http://www.dzone.com/snippets/canvas-and-its-callbacks-oo
CC-MAIN-2013-20
refinedweb
274
66.84
We built a 'survival table' which works out an average survival rate of people based on 3 features:

- Passenger Class
- Passenger Fare (grouped into those who paid 0-9, 10-19, 20-29, 30+)
- Gender

It looks like this:

![survival table](/uploads/2013/10/2013-10-30_07-05-03.png)

And the code that creates that is:

import pandas as pd

def addrow(df, row):
    return df.append(pd.DataFrame(row), ignore_index=True)

def fare_in_bucket(fare, fare_bracket_size, bucket):
    return (fare > bucket * fare_bracket_size) & (fare <= ((bucket + 1) * fare_bracket_size))

def build_survival_table(training_file):
    fare_ceiling = 40
    train_df = pd.read_csv(training_file)
    # cap fares at the top bracket; use .loc so only the Fare column is
    # overwritten, not every column of the matching rows
    train_df.loc[train_df['Fare'] >= 39.0, 'Fare'] = 39.0
    fare_bracket_size = 10
    number_of_price_brackets = fare_ceiling / fare_bracket_size
    number_of_classes = 3  # There were 1st, 2nd and 3rd classes on board

    survival_table = pd.DataFrame(columns=['Sex', 'Pclass', 'PriceDist', 'Survived', 'NumberOfPeople'])
    for pclass in range(1, number_of_classes + 1):  # add 1 to handle 0 start
        for bucket in range(0, number_of_price_brackets):
            for sex in ['female', 'male']:
                survival = train_df[(train_df['Sex'] == sex)
                                    & (train_df['Pclass'] == pclass)
                                    & fare_in_bucket(train_df["Fare"], fare_bracket_size, bucket)]
                row = [dict(Pclass=pclass, Sex=sex, PriceDist=bucket,
                            Survived=round(survival['Survived'].mean()),
                            NumberOfPeople=survival.count()[0])]
                survival_table = addrow(survival_table, row)

    return survival_table.fillna(0)

survival_table = build_survival_table("train.csv")

where 'train.csv' is structured like so:

$ head -n5 train.csv

After we've built that we iterate through the test data set and look up each person in the table and find their survival rate.
def select_bucket(fare):
    if fare >= 0 and fare < 10:
        return 0
    elif fare >= 10 and fare < 20:
        return 1
    elif fare >= 20 and fare < 30:
        return 2
    else:
        return 3

def calculate_survival(survival_table, row):
    survival_row = survival_table[(survival_table["Sex"] == row["Sex"])
                                  & (survival_table["Pclass"] == row["Pclass"])
                                  & (survival_table["PriceDist"] == select_bucket(row["Fare"]))]
    return int(survival_row["Survived"].iat[0])

test_df = pd.read_csv('test.csv')
test_df["Survived"] = test_df.apply(lambda row: calculate_survival(survival_table, row), axis=1)

I wrote up the difficulties we had working out how to append the 'Survived' column if you want more detail.

'test.csv' looks like this:

$ head -n5 test.csv
PassengerId,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked
892,3,"Kelly, Mr. James",male,34.5,0,0,330911,7.8292,,Q
893,3,"Wilkes, Mrs. James (Ellen Needs)",female,47,1,0,363272,7,,S
894,2,"Myles, Mr. Thomas Francis",male,62,0,0,240276,9.6875,,Q
895,3,"Wirz, Mr. Albert",male,27,0,0,315154,8.6625,,S

We then write out the survival value for each customer along with their ID:

test_df.to_csv("result.csv", cols=['PassengerId', 'Survived'], index=False)

$ head -n5 result.csv
PassengerId,Survived
892,0
893,1
894,0
895,0

I've pasted the code as a gist for those who want to see it all as one. Next step: introduce some real machine learning, probably using scikit-learn unless there's something else we should be using?
https://markhneedham.com/blog/2013/10/30/kaggle-titanic-python-pandas-attempt/
CC-MAIN-2020-24
refinedweb
478
50.02
#include <wx/event.h>

This event class contains information about command events, which originate from a variety of simple controls. Note that wxCommandEvent and wxCommandEvent-derived event classes, by default and unlike other wxEvent-derived classes, propagate upward from the source window (the window which emits the event) up to the first parent which processes the event. The exact meaning of each accessor's return value is event dependent, so be sure to read the extra information for each one.

GetInt(): Returns the integer identifier corresponding to a listbox, choice or radiobox selection (only if the event was a selection, not a deselection), or a boolean value representing the value of a checkbox. For a menu item, this method returns -1 if the item is not checkable or a boolean value (true or false) for checkable items indicating the new state of the item.

GetSelection(): Returns the item index for a listbox or choice selection event (not valid for a deselection). If one or several items have been deselected, returns the index of the first deselected item. If some items have been selected and others deselected at the same time, it will return the index of the first selected item. Notice that this method cannot be used with wxCheckListBox currently.

GetString(): Returns the item string for a listbox or choice selection event.
https://docs.wxwidgets.org/trunk/classwx_command_event.html
CC-MAIN-2018-51
refinedweb
201
51.48
Smoke-GObject generates a hierarchy of QMetaObjects from GObject Introspection typelib files. This allows the functions in GObject based libraries to be invoked as slots, and GObject signals to be forwarded to Qt signals, with the GObject based types being converted to and from their Qt equivalents. The bindings can be used either by dynamic runtimes such as QtScript or QML with no code generation necessary, or they can include generated C++ classes to be compiled against for C++ projects.

The project was described in a series of blog posts.

It was checked into the KDE playground SVN module, but nothing has been done for the last two years. At the UDS in Budapest, we had discussions about using the bindings to create a wrapper for a common library that will be used by both Unity-3D and Unity-2D. The library will be written using Gnome GObjects for Unity-3D, but will need to be used from Qt C++ and QML code in Unity-2D.

Currently the marshalling and function invocation code is being written. The 'Everything' library from the GObject Introspection project is being used to test the marshalling, via a QtTest program that invokes all the code in this 'Everything' library. Once the marshalling code is reasonably complete, the next stage will be to implement getting and setting Q_PROPERTYs corresponding to GObject properties, and to unite GObject signals with Qt signals. After that an option will be added to generate C++ code for the QObject classes.

Project information
- Maintainer: Smoke GObject Developers
- Driver: Not yet selected
- Licence: GNU LGPL v2.1

Series and milestones
- The trunk series is the current focus of development.

Code
- Version control system: Bazaar

Latest bugs reported
- Bug #919786: Compilation fails at gobjectnamespace.cpp (reported 2012-01-21)
- Bug #856878: Can only build by running cmake again after make (reported 2011-09-22)
https://launchpad.net/smoke-gobject
CC-MAIN-2017-34
refinedweb
320
59.03
Working with Colour

It's useful to be able to transform colour specifiers into hue, saturation, brightness and kelvin values. These are used by LIFX devices to change how they look. Photons supports all the colour formats used by the LIFX HTTP API as explained in the ColourParser below.

- class photons_control.colour.ColourParser

This knows how to convert valid colour specifiers into a SetWaveformOptional you can send to a device. A valid colour specifier is a combination of any of the following components:

- A valid colour name: 'blue', 'cyan', 'green', 'orange', 'pink', 'purple', 'red', 'white', 'yellow'
- random_colour: The string "random" will randomly choose hsbk values
- kelvin: "kelvin:3500" will set kelvin to 3500.
- brightness: "brightness:0.5" will set the device to half brightness.
- saturation: "saturation:0.5" will set the device to half saturation. 0 saturation is white, and 1 saturation is colour.
- hue: "hue:200" will set the device to a hue value of 200, which in this case is a blue.
- hex: "hex:#00aabb" or #00aabb will turn that hex value into the appropriate hsbk values. In this case #00aabb transforms into a light blue.
- rgb: "rgb:200,100,120" will take red, green, blue values and convert them. In this example, it's a light red.

You can use the following classmethods:

- classmethod hsbk(components, overrides=None)

Return (h, s, b, k) given a list of colour components. Take into account hue, saturation, brightness and kelvin keys in overrides if provided.

from photons_control.colour import ColourParser

h, s, b, k = ColourParser.hsbk("green")

- classmethod msg(components, overrides=None)

Create a SetWaveformOptional message that may be used to change the state of a device to what has been specified.
from photons_control.colour import ColourParser

async def my_action(target, reference):
    msg = ColourParser.msg("green")
    await target.send(msg, reference)

- class photons_control.colour.Effects

This has the logic used by the ColourParser to create waveform effects on your devices. You use them by giving the effect option when you use the ColourParser, along with any of the extra options used by the effect. For example:

from photons_control.colour import ColourParser

async def my_action(target, reference):
    msg = ColourParser.msg("red", {"effect": "pulse", "cycles": 2})
    await target.send(msg, reference)

or from the command line:

lifx lan:transform -- '{"color": "red", "effect": "pulse", "cycles": 2}'

- pulse(cycles=1, duty_cycle=0.5, transient=1, period=1.0, skew_ratio=NotSpecified, **kwargs)
  Options to make the light(s) pulse color and then back to its original color
- sine(cycles=1, period=1.0, peak=0.5, transient=1, skew_ratio=NotSpecified, **kwargs)
  Options to make the light(s) transition to color and back in a smooth sine wave
- half_sine(cycles=1, period=1.0, transient=1, **kwargs)
  Options to make the light(s) transition to color smoothly, then immediately back to its original color
- triangle(cycles=1, period=1.0, peak=0.5, transient=1, skew_ratio=NotSpecified, **kwargs)
  Options to make the light(s) transition to color linearly and back
- saw(cycles=1, period=1.0, transient=1, **kwargs)
  Options to make the light(s) transition to color linearly, then instantly back

- photons_control.colour.make_hsbk(specifier)

Return a {"hue", "saturation", "brightness", "kelvin"} dictionary for this specifier.

If it's a string, use photons_control.colour.ColourParser.hsbk()

If it's a list, then take h, s, b, k from the list and default to 0, 0, 1, 3500; the list can be 0 to 4 items long.
If it’s a dictionary, get hue, saturation, brightness, kelvinfrom it and default them to 0, 0, 1, 3500. - photons_control.colour.make_hsbks(colors, overrides=None) Colors must be an array of [[specifier, length], ...]and this function will yield {"hue": <hue>, "saturation": <saturation>, "brightness": <brightness>, "kelvin": <kelvin}such that we get a flat list of these hsbkvalues. We use photons_control.colour.make_hsbk()with each specifier.
https://photons.delfick.com/useful_helpers/colour.html
CC-MAIN-2022-40
refinedweb
638
50.23
I've been totally swamped with no real time to do much of anything. I just recently, due to school, got back into doing some programming, partially because of the nature of the class and me being as lazy as I could possibly be, just not wanting to go through all the repetitive steps. Right now I am taking a statistics class, and calculating all of the probability stuff can get very long and repetitive. For instance, when finding the binomial probability of a range of numbers in a set, you might have to calculate 12 different binomial probabilities and then add them together so you can then calculate the complement of that probability to find the other side of the range. It is just way too repetitive for my liking.

The advantage of this is it really rekindled my love of the Python language. I just wish the language were a bit more useful for game development, sadly; the performance hits are just way too high when you progress to 3D.

After I finished my homework I decided to do a comparison of the Python and C++ code required for calculating the binomial probability of a number in a set. This is the overall gist of the post, because it is really amazing to see the difference in the code of two examples of the same program, and it is simple enough to demonstrate both in a reasonable amount of time. The interesting thing is that, from an outside perspective, running both appears instantaneous with no performance difference at all. So here is the code; it is indeed a night and day difference in readability and understandability.
Python (2.7.3)

def factorial(n):
    if n < 1:
        n = 1
    return 1 if n == 1 else n * factorial(n - 1)

def computeBinomialProb(n, p, x):
    nCx = (factorial(n) / (factorial(n - x) * factorial(x)))
    px = p ** x
    q = float(1 - p)
    qnMinx = q ** (n - x)
    return nCx * px * qnMinx

if __name__ == '__main__':
    n = float(raw_input("Value of n?:"))
    p = float(raw_input("Value of p?:"))
    x = float(raw_input("Value of x?:"))
    print "result = ", computeBinomialProb(n, p, x)

C++

#include <iostream>
#include <math.h>

int factorial(int n)
{
    if (n < 1)
        n = 1;
    return (n == 1 ? 1 : n * factorial(n - 1));
}

float computeBinomialProb(float n, float p, float x)
{
    float nCx = (factorial(n) / (factorial(n - x) * factorial(x)));
    float px = pow(p, x);
    float q = (1 - p);
    float qnMinx = pow(q, (n - x));
    return nCx * px * qnMinx;
}

int main()
{
    float n = 0.0;
    float p = 0.0;
    float x = 0.0;
    float result = 0.0;

    std::cout << "Please enter value of n: ";
    std::cin >> n;
    std::cout << "Please enter value of p: ";
    std::cin >> p;
    std::cout << "Please enter value of x: ";
    std::cin >> x;

    result = computeBinomialProb(n, p, x);
    std::cout << "result = " << result << "\n\n";
    return 0;
}

Sorry for no syntax highlighting, I forget how to do this. The biggest thing you can notice is that in Python you don't need all the type information, which allows for really easy and quick variable declarations and actually slims the code down quite a bit. Another thing to notice is that you can prompt and gather input in one go in Python, whereas in C++ you need to use two different streams to do so. I think the Python is much more readable, but the C++ is quite crisp as well.

A) C++11 added 'auto', so you don't need to write what type of variable a new variable is. You do still need to type 'auto' though.
B) The whole 'raw_input' vs C++ streams... 'raw_input' could just be a wrapper function around the streams (and once written, could be a permanent part of your own code base), so it isn't a pro or con of either language.
The difference would be: Python definitely has a place in game development, mostly in the higher level logic, while C++ or C would do the heavy lifting of your engine. I scarcely know Python, but I recognize its use and have it in my plans to learn in a few years from now. The right tool for the right job; I currently use C++ for everything, despite C++ only being the right tool for about half my code.
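As a footnote to the comparison above: modern Python (3.8+) ships the combination count in the standard library, which shrinks the whole calculation to a few lines. This addendum is my own, not part of the original post:

```python
from math import comb

def binomial_prob(n, p, x):
    # P(X = x) for X ~ Binomial(n, p); math.comb replaces the
    # hand-rolled factorial-based nCx from the post
    return comb(n, x) * p ** x * (1 - p) ** (n - x)
```

Using exact integer combinations also sidesteps the float/int recursion issues hiding in both factorial-based versions.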
http://www.gamedev.net/blog/468/entry-2255483-wow-long-time/#comment_2254343
CC-MAIN-2016-50
refinedweb
734
58.92
So you want to secure your Umbraco site

Imagine, if you will, for a second, that you're trying to secure your Umbraco site by running it over an https connection. Sounds complicated? Not so much. It's fairly trivial to set up once you have a certificate, which you now know how to create using Let's Encrypt. I'm not going to explain how to create a binding in IIS; this process is different per hosting provider and it's impossible to cover this for everyone. We've made it easy on Umbraco as a Service (UaaS): upload a .pfx file, bind it to one of your host names. We can (and will) make it even easier.

Alright, we're there: our site, when typing "https://" as the prefix, works and gives us a satisfying green lock to show that the connection was encrypted end to end, and Chrome tells us our site is secure, yay! There's only one more thing that you need to do for Umbraco to make sure that the backoffice is secure: go into your web.config and find the appSetting called "umbracoUseSSL". This setting is "false" by default and needs to be changed to "true". A quick look in the source code of Umbraco teaches us that the only use for this setting is to make sure that the cookies issued when logging into the backoffice get the "secure" flag, meaning the cookie will only be sent when the connection is encrypted (so only over https). This is all it does, and that's all there is to it: you've secured your Umbraco site!

Taking it further

We're running successfully on https, but we might not be at the peak of our game yet. Let's run some online scans to see what else we can do to make our site more secure.

SSLLabs

Okay, that was the good news. Full of hope you run over to SSLLabs to test your site's security. Depending on your hosting server's setup, you might come up with a disappointing grade. Luckily, my site lives on Umbraco as a Service, so I get a mighty fine "A" grade.
However, you might end up with a different grade if the server you're hosting on has enabled insecure protocols and encryption ciphers. If you're self-hosting then I recommend you run Nartac IIS Crypto and apply the fixes suggested using the "Best Practices" button. It's a dead-simple, one-click fix for most (if not all) of your bad grades on SSLLabs.

HTTPS by default

For the following modifications, I'm going to assume that the URL Rewrite module for IIS is installed on your webhosting server (exactly why Microsoft doesn't ship with this installed by default is beyond me!). Now that we can access the site over HTTPS, let's redirect all traffic to the site to HTTPS; that way you always have that happy green lock in your browser and you will always encrypt all traffic against people trying to snoop on you, whether they are a Man In The Middle, your internet provider or the NSA. This can be done using a URL Rewrite rule. In the following rule "localhost" is excluded from rewriting so that I don't have to jump through hoops to set up valid certificates and hostnames when debugging my site on my local machine. This configuration goes into the system.webServer/rewrite/rules section of the web.config:

<rule name="HTTP to HTTPS redirect" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTPS}" pattern="off" ignoreCase="true" />
    <add input="{HTTP_HOST}" pattern="localhost" negate="true" />
  </conditions>
  <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
</rule>

While we're adding redirects, it's good for search engines to have just one domain to look at, so I can set up a redirect that strips "www" from any requests (which will then feed into the rule above and makes sure to redirect to HTTPS):

<rule name="Strip www. from URL" stopProcessing="true">
  <match url="^(.*)$" ignoreCase="true" />
  <conditions logicalGrouping="MatchAll">
    <add input="{HTTP_HOST}" pattern="^www\.(.+)$" />
  </conditions>
  <action type="Redirect" url="http://{C:1}/{R:1}" redirectType="Permanent" />
</rule>

To further minimize any attacks where bad guys might want to trick you into using insecure HTTP requests, you can send up a header with each request called the HTTP Strict Transport Security (HSTS) header. Enabling HSTS will tell the browser: for the specified amount of time you will not look up any pages on this domain over HTTP any more, always use HTTPS. This is an addition that can be made to the system.webServer/rewrite/outboundRules section:

<outboundRules>
  <rule name="Add Strict-Transport-Security when HTTPS" enabled="true">
    <match serverVariable="RESPONSE_Strict_Transport_Security" pattern=".*" />
    <conditions>
      <add input="{HTTPS}" pattern="on" ignoreCase="true" />
      <add input="{HTTP_HOST}" pattern="localhost" negate="true" />
    </conditions>
    <action type="Rewrite" value="max-age=63072000; includeSubDomains; preload" />
  </rule>
</outboundRules>

This adds the "Strict-Transport-Security" header that tells browsers: for the next 63072000 seconds (which is two years) the browser should not make any HTTP requests to this domain.

Note: the rules for HSTS inclusion might change from time to time. Make sure to read the current requirements before you follow these steps.

However... there's still a tiny sliver of an attack vector here. If you are a Man in the Middle and manage to lure someone to a site that they've never visited before, their very first request will still be the only one ever to go over HTTP. In that single request the Man in the Middle could still possibly do bad things. The only way to eliminate this risk is to never allow HTTP connections to the site, but then everybody needs to know that you can only ever get to the site by prefixing it with "https://". Not very user friendly.
You can, however, ask to be put on a list that is baked into browsers like Chrome, Firefox, Safari, IE11 and Edge. This is called HSTS Preloading and takes a few weeks to get set up (it's a manual process). When you do finally make it onto this preloaded list, your browser will never request any pages over HTTP but choose HTTPS by default. Part of the manual check that Google will do is to see if HSTS is set up including subdomains and if the preload parameter is there; this is why they're added to the rewrite rule above.

ASafaWeb

The "Automated Security Analyser for ASP.NET Websites" will test your Umbraco site for known issues with ASP.NET websites. I'm not doing so well on this one and have a few things to fix. ASafaWeb gives me excellent guidance to fix things like Custom Errors and an exposed Stack Trace: just update the web.config to set Custom Errors to "RemoteOnly" and we're good. As for the orange warnings:

- Excessive headers: The header "Server: Microsoft-IIS/8.5" gets sent with each response. I have tried disabling this but apparently our UaaS servers forcefully add this header; there was nothing I could do about it, as it needs to be removed at a server level. There are many other ways to probe sites and find out (by looking at their behavior) that they're running IIS and even which version, so attackers specifically out to target my site are only delayed a few extra seconds in picking the correct attack vectors. I'm not worried about this header.
- HTTP only cookies: The "ARRAffinity" cookie is only there for IIS to quickly determine on which of the available web servers my website lives. It's not an attack vector: if it's wrong or doesn't exist, this cookie will just be overwritten with a new one.
- Clickjacking: A valid concern; I can deny people framing my site with the simple addition of "X-Frame-Options" to the web.config (more on this later!).
Note: Always make sure you remove a custom header first; if the webserver already has its own "add" rule, then you can't overwrite it by inserting your own, you need to remove the existing one first. Also note that Umbraco has tried to be helpful and removed the header that tells the world what MVC version you're running by removing "X-Powered-By" in the system.webServer/httpProtocol/customHeaders section of your web.config.

<httpProtocol>
  <customHeaders>
    <!-- Ensure the powered by header is not returned -->
    <remove name="X-Powered-By" />
    <remove name="X-Frame-Options" />
    <add name="X-Frame-Options" value="DENY" />
  </customHeaders>
</httpProtocol>

There's a few gray boxes there: because my site doesn't have a view state in the HTML, ASafaWeb assumes (correctly) that I'm not using WebForms, so those tests didn't need to run any further. I couldn't figure out how to trigger the "Hash dos patch" test; even after adding a form that does a POST (as described on ASafaWeb) it doesn't test for this problem. Luckily I know that UaaS runs on servers not affected by the MS11-100 security vulnerability, but you might want to check with your hosting provider.

Looks better now:

Security-headers.io

Going even further down into securing our website, there's some "fun" things we can do to make most websites misbehave, like making them do the Harlem Shake. Security-headers.io looks to see if you've implemented policies to mitigate these kinds of problems, which are mostly XSS (cross site scripting) based. Look at this result... ouch:

We can easily make this a lot better by following some of the advice here on adding an "X-Xss-Protection" and an "X-Content-Type-Options" header (the header values shown here are the standard recommended ones):

<httpProtocol>
  <customHeaders>
    <add name="X-Xss-Protection" value="1; mode=block" />
    <add name="X-Content-Type-Options" value="nosniff" />
  </customHeaders>
</httpProtocol>

Better:

The Content Security Policy (CSP) is a lot harder to implement because it requires you to look at all of your site's assets and whitelist them.
This is difficult especially if you load videos from YouTube, use CDN hosted javascript libraries, link to external images etc. Which brings us to the following check to run.

CSP Analyser

The CSP analyser over at report-uri.io looks at any policies you've implemented and tells you how good they are. It's impossible to give a good policy for all websites, so I'll just post the one I've struggled with and finally landed on for this site:

<remove name="Content-Security-Policy" />
<add name="Content-Security-Policy" value="default-src 'self';script-src 'self';style-src 'self' 'sha256-MZKTI0Eg1N13tshpFaVW65co/LeICXq4hyVx6GWVlK0=' 'sha256-CwE3Bg0VYQOIdNAkbB/Btdkhul49qZuwgNCMPgNY5zw=' 'sha256-LpfmXS+4ZtL2uPRZgkoR29Ghbxcfime/CsD/4w5VujE=' 'sha256-YJO/M9OgDKEBRKGqp4Zd07dzlagbB+qmKgThG52u/Mk=';img-src 'self' data:;font-src 'self';" />

I am using Gravatar images, Google Analytics and the Google Fonts API. The sha256 references are there to fix some things that Modernizr.js wants to execute, which wouldn't otherwise be allowed; Chrome dev tools will tell you exactly what to add if this is a problem for you:

One interesting thing I found when implementing CSP rules is that it pays to iterate. To make it a bit easier for you to manage, report-uri.io allows you to set up a free account. Using that, all CSP violations will be logged for you so you can have a look at updating your whitelist accordingly. After setting up a CSP, securityheaders.io now reports a respectable "A" grade.

I've looked into Public Key Pinning (HPKP) but the process seems too onerous for too little gain for now. The problem with HPKP currently is that I don't understand how backup CSRs are supposed to work and what exactly I need to do when my current certificate expires. I have done some experiments and they worked, but I need to do further testing to see what it will take to switch to a new certificate.
In case you are wondering (and are brave), the HPKP header can be configured like so in system.webServer/httpProtocol/customHeaders (the actual pin values are omitted here):

<remove name="Public-Key-Pins" />
<add name="Public-Key-Pins" value="max-age=31536000; pin-sha256=&quot;...&quot;; pin-sha256=&quot;...&quot;" />

Note that the double quotes inside the value need to be escaped because the web.config file is an XML file, so replace " with &quot; everywhere in the value of this header.

Back to Umbraco

Now that we've made our frontend all nice and safe, let's go back into the backoffice of Umbraco. Whoops, we broke it!

There's a few things going on in the backoffice that we need to allow now that we've disallowed a lot of them on the frontend. Umbraco still uses iframes for some pages in the backoffice, so we'll need to allow those. The Content Security Policy is also blocking a lot of asset loading because the rules are set pretty strictly for the frontend. Luckily we don't have to change our frontend setup, we can just change the backoffice requirements a little bit.

All the way at the bottom of our web.config we already have a <location path="umbraco"> section which tells IIS: for this location (the umbraco path) we want to apply different rules than for the rest of the site. We can amend this section with a custom CSP and allow frames from the same origin (so only frames with a location that lives somewhere in our site).
We're already disabling urlCompression for the backoffice as that can conflict with our backoffice javascripts, so let's add our updated headers there:

<location path="umbraco">
  <system.webServer>
    <urlCompression doStaticCompression="false" doDynamicCompression="false" dynamicCompressionBeforeCache="false" />
    <httpProtocol>
      <customHeaders>
        <remove name="X-Frame-Options" />
        <add name="X-Frame-Options" value="SAMEORIGIN" />
        <remove name="Content-Security-Policy" />
        <add name="Content-Security-Policy" value="default-src 'self' player.vimeo.com *.vimeocdn.com packages.umbraco.org our.umbraco.org;script-src 'self' 'unsafe-inline' 'unsafe-eval';style-src 'self' 'unsafe-inline';img-src 'self' data: umbraco.tv;font-src 'self';" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</location>

Much better, our backoffice is back without errors.

One interesting thing I found when implementing CSP rules is that I was not allowed to have inline CSS in my site. This is a good thing: I don't want inline CSS, I want everything to be nicely tucked away in a CSS file. One problem though: the rich text editor. When you insert an image in the RTE, Umbraco automatically adds an inline style for you with the dimensions of the image, and there seems to be no way to prevent it from doing so. I've created a simple extension method that goes through your html and strips out those inline styles.
This StringExtensions.cs can be dropped into your App_Code folder:

using System.Web;
using HtmlAgilityPack;

namespace Cultiv.StringExtensions
{
    public static class RteStyles
    {
        public static IHtmlString RemoveInlineImageStyles(this string text)
        {
            var htmlString = new HtmlString(text);
            return htmlString.RemoveInlineImageStyles();
        }

        public static IHtmlString RemoveInlineImageStyles(this IHtmlString htmlString)
        {
            var htmlDocument = new HtmlDocument();
            htmlDocument.LoadHtml(htmlString.ToString());
            if (htmlDocument.DocumentNode == null
                || htmlDocument.DocumentNode.SelectNodes("//img[@style]") == null)
            {
                return htmlString;
            }
            foreach (var node in htmlDocument.DocumentNode.SelectNodes("//img[@style]"))
            {
                node.Attributes.Remove("style");
            }
            return new HtmlString(htmlDocument.DocumentNode.OuterHtml);
        }
    }
}

I use it as follows in my templates:

@(Model.Content.GetPropertyValue<string>("bodyText").RemoveInlineImageStyles())

Conclusion

Security is hard. :-) Luckily there's plenty of tools that help ease the pain. We are always looking into updating Umbraco where possible to take away the pain by setting up sensible defaults. We're also working on making things easier to set up on Umbraco as a Service, where we can rely more on automation.
There's a few security-related sites I should point to that are excellent in helping you understand security and keeping you safe:

- Follow Troy Hunt's blog and Twitter account (or whatever social media you like, there's plenty of icons on his site)
- If you have a PluralSight account, his security courses are always great as well
- Follow Scott Helme's blog and Twitter account
- I enjoy the Security Now podcast for regular in-depth discussions of how security works (and most notably: where it fails, of course) and recommend playing it at 1.5 speed

Finally: there's a lot more you can do to protect your site, but this is a mammoth post already so I'll end this here in hopes that I get more time in the future to cover related topics.

14 comments on this article

This is great stuff mate - thanks for your time to write it up, came together really well!

This is an excellent blog post! I'll be referring to it for every release now.

Been thinking about the umbracoUseSSL setting. I usually use HttpContext.Request.IsSecureConnection to determine whether the Secure property should be set. That's one less thing to forget.

I can't imagine the time it took you to go through all of this - thank you. All of this material is great and will become part of our checklist prior to launches. Great info, tools, and processes. Thanks!

Hi Sebastiaan, great stuff and I think it sums up the basic checks you should do pretty well, thanks for writing it down! Please be aware that including "includeSubDomains" in your HSTS header can do some serious harm if you're working for a company that isn't aware of whether there exist any other subdomains that you don't control or that don't run on https! These sites will immediately break when adding this part of the header.

One more thing that we discovered in our projects: in MVC5 the framework adds the X-Frame-Options header itself when you include an anti-forgery token.
During our security tests we discovered that sometimes there were headers with the content of SAMEORIGIN, SAMEORIGIN, SAMEORIGIN, SAMEORIGIN. This was because the header was once inserted by the server (through the configuration mentioned in the blog) and three times due to three forms with an anti-forgery token. Every browser will work correctly with this setting, but our security tests failed. This issue can be fixed by adding 'AntiForgeryConfig.SuppressXFrameOptionsHeader = true;' to your Global.asax.cs. More info on:

Cheers!

I already told you at CG but this is a really useful post. Thanks!

I thought it worth mentioning my experience with some Safari quirks and CSP. If you're still running on http, then you'll need to specify https://... for any domains loaded over https. Also, Safari doesn't support nonce, so you'll need to specify 'unsafe-inline' as a fallback.

Thanks Sebastiaan, This whole article, and subsequently linked urls, have given me a much better understanding of the whole security side of things for an Umbraco site! :-D Not to mention I can proudly say my site is now grade A secure ... albeit with small caveats! ;-) #H5YR!!!

Great article. What are the options if we want to encrypt the database? In our case the members can post comments and we store their phone numbers.

Hi Sebastiaan, Thanks for the great article. I've tried adding the X-Frame-Options header with the DENY value, but the backend wouldn't let me add new dictionary items. Changing the value to SAMEORIGIN fixed this.

I think the <location path="umbraco" /> node mentioned in the article might be the bit you're missing. That sets SAMEORIGIN just for the Umbraco bit.

Thanks for the heads up David, I missed that part.

A brilliant post! I have been trying to implement this on a 6.2.6 installation with legacy XSLT. I get stuck with errors within the RTE when editing. There is a long list of errors, and the Update and Cancel buttons don't fire when using "view as HTML" in the RTE.
You have a script for the template to hide the inline CSS generated by the RTE, but this is for MVC; is there a way around this when templates are using XSLT?

Really, really awesome post. Thanks.

Thank you so much for doing such great work, and for sharing all your findings. So much value, and so well explained and illustrated :)
https://cultiv.nl/blog/so-you-want-to-secure-your-umbraco-site/
This issue comes from issue4613. The following code raises a SyntaxError("can not delete variable 'e' referenced in nested scope"):

def f():
    e = None
    def g():
        e
    try:
        pass
    except Exception as e:
        pass  # SyntaxError here???

The reason is that, because of the except-clause semantics in Python 3 (PEP 3110), a hidden "del e" statement is inserted at the end of the except block. The above code is correct, and should work. I suggest that the limitation "can not delete variable referenced in nested scope" could be removed. After all, the "variable referenced" has no value before it is set; accessing it raises either NameError("free variable referenced before assignment in enclosing scope") or UnboundLocalError("local variable referenced before assignment").

The attached patch adds a DELETE_DEREF opcode that removes the value of a cell variable and puts it back in a "before assignment" state. Some compiler experts should review it. Few regressions are possible, since the new opcode is emitted where a SyntaxError was previously raised. The patch could also be applied to 2.7, even if it is less critical there. Tests are to come, but I'd like others' suggestions.
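On a CPython where this change landed (the DELETE_DEREF opcode shipped in Python 3.2 and is still present today), the code above compiles fine, and the hidden del of the except target can be observed with the dis module:

```python
import dis
import io

# The same function as in the report: 'e' is a cell variable because g
# refers to it, and the except clause implicitly deletes it at block end.
def f():
    e = None
    def g():
        e
    try:
        pass
    except Exception as e:
        pass

buf = io.StringIO()
dis.dis(f, file=buf)
listing = buf.getvalue()

# The implicit "del e" compiles to DELETE_DEREF because e lives in a cell.
assert "DELETE_DEREF" in listing
```

On a pre-patch interpreter the compile step itself would have raised the SyntaxError quoted above.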
https://bugs.python.org/msg77536
- Type: Bug
- Status: Resolved
- Priority: Medium
- Resolution: Done
- Environment: Originally found on Swift 4.2.2 running on Ubuntu 18.04; reproduced on Swift 4.2 running on macOS 10.14.3; reproduced on Swift 5 beta 4 running on macOS 10.14.3

Details

Variation 1

Given the following code:

import Foundation

struct MyStruct: Decodable {
    static let name = "aaa"

    let _id: String?
    let name: PersonNameComponents?
}

let decoded = try JSONDecoder().decode(MyStruct.self, from: "{\"_id\": \"abc123\", \"name\": { \"givenName\": \"John\"}}".data(using: .utf8)!)
print(decoded.name!)

The following error is produced:

Swift5.playground:6:9: note: 'self.name' not initialized
    let name: PersonNameComponents?
        ^

Swift fails to understand that Decodable synthesis should only take instance property names into account and should ignore static property names. Having an instance property and a static property with the same name confuses the compiler into not adding the Decodable synthesis for that particular property. If the static property's name is changed, then Decodable synthesis works correctly.

Variation 2

Given the following code (note the instance property is now a var instead of a let):

import Foundation

struct MyStruct: Decodable {
    static let name = "aaa"

    let _id: String?
    var name: PersonNameComponents?
}

let decoded = try JSONDecoder().decode(MyStruct.self, from: "{\"_id\": \"abc123\", \"name\": { \"givenName\": \"John\"}}".data(using: .utf8)!)
print(decoded.name)

The following output is produced:

nil

This is arguably a worse result, as it is not found at compile time but only at run time. This means that code that looks correct at first glance and passes compilation doesn't produce the expected output.

Expected Result

I would expect that the struct's conformance to Decodable would be correctly synthesised and the static property name would be ignored.
Failing that, I would at least expect the decoding of the instance property to throw an error at runtime if it manages to compile, but this isn't the case. The most desirable result would be for Decodable synthesis to be correctly applied.

Workaround

To circumvent the issue, add manual support for Decodable (the protocol requirement is init(from:)):

init(from decoder: Decoder) throws {
    ...
}
https://bugs.swift.org/browse/SR-10045
So to save others the bother of checking what's in the zip file, the contents of mytest/mod1.py are as follows:

import mytest.mod2 as mod

def func():
    print('mod1.func called')
    mod.func()

There's no __init__.py (so mytest is a namespace package, PEP 420), but adding an empty __init__.py makes no difference. The problem occurs with both Python 2 and 3. The root cause is the import cycle.

I played around with dis and found the following (I suspect others have already found this but the thread was hard to follow for me initially):

>>> dis.dis('import a.b')
  1           0 LOAD_CONST               0 (0)
              2 LOAD_CONST               1 (None)
              4 IMPORT_NAME              0 (a.b)
              6 STORE_NAME               1 (a)
              8 LOAD_CONST               1 (None)
             10 RETURN_VALUE
>>>

compared to

>>> dis.dis('import a.b as c')
  1           0 LOAD_CONST               0 (0)
              2 LOAD_CONST               1 (None)
              4 IMPORT_NAME              0 (a.b)
              6 LOAD_ATTR                1 (b)   <-- error here
              8 STORE_NAME               2 (c)
             10 LOAD_CONST               1 (None)
             12 RETURN_VALUE
>>>

What this shows is that the implementations of "import a.b" and "import a.b as c" are different. The former calls __import__('a.b', ...), which returns the module 'a', and stores that in the variable 'a'. In the OP's case, because of the import cycle, while sys.modules['a.b'] exists, module 'a' does not yet have the attribute 'b'. That's the reason that in the latter example the LOAD_ATTR opcode fails.

The semantics of imports in the case of cycles are somewhat complex but clearly defined, and there are only a few rules to consider; from these rules it is possible to reason out whether any particular case is valid or not. I would prefer to keep it this way rather than add more special cases. There's a good reason why, *in general* (regardless of import cycles), "import a.b as c" is implemented as a getattr operation on a, not as an index operation on sys.modules (it is possible for module a to override its attribute b without updating sys.modules), and I'd rather keep those semantics than give them up for this particular edge case. Cyclic imports are hard. If they don't work for you, avoid them.
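The mid-cycle state described above can be simulated with bare module objects (the package names `a` and `a.b` here are hand-built stand-ins, not real files). Note that in Python 3.7 the bytecode for `import a.b as c` was changed to fall back to sys.modules when the attribute lookup fails, so on modern interpreters the second import now succeeds:

```python
import sys
import types

# Simulate the state partway through a cyclic import: both modules are
# registered in sys.modules, but the parent 'a' does not yet have its
# 'b' attribute set (that only happens once 'a.b' finishes loading).
mod_a = types.ModuleType('a')
mod_b = types.ModuleType('a.b')
sys.modules['a'] = mod_a
sys.modules['a.b'] = mod_b

import a.b          # plain form: only binds the name 'a', never reads a.b

# The import machinery never set the attribute, since 'a.b' was already
# in sys.modules and no loading took place.
assert not hasattr(mod_a, 'b')

import a.b as c     # Python >= 3.7 falls back to sys.modules['a.b'] here
assert c is mod_b
```

On Python 3.6 and earlier, that last statement fails exactly as in the disassembly above, because LOAD_ATTR has nothing to fall back on.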
https://bugs.python.org/msg291400
My issue is that, until this change is accepted, I have to override SDL2's window procedure to perform the same task in my own code, i.e. after getting the HWND from SDL_GetWindowWMInfo(), I call Win32's GetWindowLongPtr() with GWLP_WNDPROC (or GWL_WNDPROC, as the case may be) and save that in my app's screen-oriented class (i.e. the same one that stores the pointer to the SDL_Window). I then call Win32's SetWindowLongPtr() with GWLP_WNDPROC/GWL_WNDPROC to install my override.

In my replacement window procedure, I have to call the one I fished out of SDL. But all I have at that time is an HWND, and there seems to be no way to find an SDL_Window from an HWND, nor any way to iterate through all existing SDL_Window instances. Such functions could easily be added to SDL_video.c, e.g.

SDL_Window *SDL_GetFirstWindow()
{
    return _this->windows;
}

and

SDL_Window *SDL_GetNextWindow(SDL_Window *window)
{
    return (window == NULL) ? NULL : window->next;
}

Is there some other API I'm not seeing? Is there a good reason why there's no existing method to iterate through existing SDL windows? For now, I'll obviously have to create my own linked list to match HWND to SDL_Window*.
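The "own linked list" the post falls back on amounts to a handle-to-window registry maintained alongside SDL. The bookkeeping itself is language-agnostic; sketched in Python with made-up stand-ins for the HWND values and window objects (SDL exposes no such API, which is the whole complaint), it is just a mapping plus register/unregister hooks called at window creation and destruction:

```python
# Minimal handle -> window registry, the workaround pattern the post
# describes. The integer handles and string windows below are hypothetical
# stand-ins; real code would store the HWND from SDL_GetWindowWMInfo()
# and the SDL_Window pointer.

class WindowRegistry:
    def __init__(self):
        self._by_handle = {}

    def register(self, handle, window):
        # call when creating a window
        self._by_handle[handle] = window

    def unregister(self, handle):
        # call when destroying a window
        self._by_handle.pop(handle, None)

    def lookup(self, handle):
        # the missing "find SDL_Window from HWND" operation
        return self._by_handle.get(handle)

    def __iter__(self):
        # the missing SDL_GetFirstWindow/SDL_GetNextWindow iteration
        return iter(self._by_handle.values())

registry = WindowRegistry()
registry.register(0x1234, "main window")
registry.register(0x5678, "tool window")

assert registry.lookup(0x1234) == "main window"
assert list(registry) == ["main window", "tool window"]
```

A dict rather than a linked list keeps the lookup O(1), which matters inside a window procedure that runs for every message.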
https://discourse.libsdl.org/t/iterate-through-existing-sdl-window-instances/34329
In our previous blog, we saw a brief introduction to GraphQL and created some records using GraphQL queries. In this blog, we are going to continue with mutations in GraphQL and see why they matter.

Introduction

Mutations in GraphQL allow users to create, update and delete data in your database. A mutation should return an object type so that the client can query the nested fields of the result.

How to write a simple mutation?

- Define a class with the desired name and derive it from graphene.Mutation
- List the object types to return from the mutation
- Define the arguments that a user needs to pass
- Define a method named mutate and add the functionality inside this method

We are going to continue with the coding part of the previous GraphQL tutorial. So, let's start.

Mutation in GraphQL

Creating new records

Open the Api > schema.py file and add the following code (it needs these imports alongside the ones already there from the previous part):

import graphene
from django.core.exceptions import ObjectDoesNotExist
from graphql import GraphQLError


# Adding a new Book's details
class CreateBook(graphene.Mutation):
    message = graphene.String()
    book = graphene.Field(BookType)

    class Arguments:
        title = graphene.String(required=True, description="Title of Book")
        author = graphene.String(required=True, description="Author of Book")
        description = graphene.String(required=True, description="Overview of the Book")

    @classmethod
    def mutate(cls, root, info, **kwargs):
        try:
            book = Book.objects.create(
                title=kwargs.get('title'),
                author=kwargs.get('author'),
                description=kwargs.get('description')
            )
            book.save()
            return CreateBook(book=book, message="Successfully added new book details")
        except ObjectDoesNotExist:
            raise GraphQLError("Object doesn't exist.")
        except Exception as e:
            raise GraphQLError("Error " + str(e))


# overall mutations
class Mutation(graphene.ObjectType):
    create_book = CreateBook.Field()

Also, open the Books > schema.py file:

import graphene
import Api.schema


class Query(Api.schema.BookQuery):
    pass


class Mutation(Api.schema.Mutation):
    pass


schema = graphene.Schema(query=Query, mutation=Mutation)

Elucidation

As
mentioned in the steps earlier, we first created a class derived from graphene.Mutation and then defined message and book as the return types of this mutation. After that, we defined a class named Arguments (note that this class must have exactly this name). Then we used the classmethod decorator and defined a mutate method (keep in mind that the method name must be "mutate") and added the functionality that we need.

Output

Go to the URL and write your schema as

After this, go to the admin panel, refresh, and check the records under Book; you will see

Editing existing records

Open the Api > schema.py file and add the following:

# Edit Book's details
class EditBook(graphene.Mutation):
    message = graphene.String()
    book = graphene.Field(BookType)

    class Arguments:
        id = graphene.Int(required=True, description="Id of the Book of which data is to be edited")
        title = graphene.String(description="Title of Book")
        author = graphene.String(description="Author of Book")
        description = graphene.String(description="Overview of the Book")

    @classmethod
    def mutate(cls, root, info, **kwargs):
        try:
            book = Book.objects.get(pk=kwargs.get('id'))
            book.title = kwargs.get('title', book.title)
            book.author = kwargs.get('author', book.author)
            book.description = kwargs.get('description', book.description)
            book.save()
            return EditBook(book=book, message="successfully updated the book")
        except ObjectDoesNotExist:
            raise GraphQLError("Object doesn't exist")
        except Exception as e:
            raise GraphQLError("Error " + str(e))


# overall mutations
class Mutation(graphene.ObjectType):
    create_book = CreateBook.Field()
    # add this line
    edit_book = EditBook.Field()

Output

Go to the URL and write your schema as

You will see the message that you return from your mutation. Now, go to the admin panel and you will see something like this

Here, the arguments are the id, title, author, and description of the book.
If you observe closely, the user must provide the id (the id of the book to be edited), which is required, while the other arguments are optional and can be passed according to the user's choice.

Deleting records

Go to the URL and write your schema as

Now, go to the admin panel and you will see that the record with id 3 is no longer available in the database.

Conclusion

So, this is how we can perform CRUD operations with mutations in GraphQL. If an API has endpoints that insert, update, and delete records, then it is very easy to implement each of them as a mutation in GraphQL.
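The delete mutation's code is not shown above, but its resolver follows the same pattern as CreateBook and EditBook: look up the record, act on it, and return a payload. Stripped of graphene and Django (this sketch runs on the standard library alone; the in-memory `books` store and the function names are made up for illustration), the three mutations boil down to:

```python
# Framework-free sketch of the create/edit/delete mutation pattern:
# each "mutation" validates its arguments, touches the data store,
# and returns a payload carrying a message plus the affected record.

books = {}      # stand-in for the Django Book model/table
next_id = 1

def create_book(title, author, description):
    global next_id
    book = {"id": next_id, "title": title, "author": author,
            "description": description}
    books[next_id] = book
    next_id += 1
    return {"message": "Successfully added new book details", "book": book}

def edit_book(id, **changes):
    book = books.get(id)
    if book is None:
        raise KeyError("Object doesn't exist")  # graphene raises GraphQLError
    for field in ("title", "author", "description"):
        if field in changes:            # only overwrite fields the user passed
            book[field] = changes[field]
    return {"message": "successfully updated the book", "book": book}

def delete_book(id):
    if books.pop(id, None) is None:
        raise KeyError("Object doesn't exist")
    return {"message": "successfully deleted the book"}

payload = create_book("T", "A", "D")
edit_book(payload["book"]["id"], title="T2")
delete_book(payload["book"]["id"])
assert books == {}
```

In graphene, each of these functions becomes the `mutate` classmethod of a Mutation subclass, and the returned dict becomes the mutation's declared output fields.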
https://pythonsansar.com/how-to-perform-crud-operations-with-mutation-in-graphql/
There are many graphical user interface (GUI) toolkits that you can use with the Python programming language. The big three are Tkinter, wxPython, and PyQt. Each of these toolkits will work with Windows, macOS, and Linux, with PyQt having the additional capability of working on mobile.

A graphical user interface is an application that has buttons, windows, and lots of other widgets that the user can use to interact with your application. A good example would be a web browser. It has buttons, tabs, and a main window where all the content loads.

In this article, you'll learn how to build a graphical user interface with Python using the wxPython GUI toolkit. Here are the topics covered:

- Getting Started with wxPython
- Definition of a GUI
- Creating a Skeleton Application
- Creating a Working Application

Let's start learning!

Getting Started With wxPython

The wxPython GUI toolkit is a Python wrapper around a C++ library called wxWidgets. The initial release of wxPython was in 1998, so wxPython has been around quite a long time. wxPython's primary difference from other toolkits, such as PyQt or Tkinter, is that wxPython uses the actual widgets on the native platform whenever possible. This makes wxPython applications look native to the operating system that they are running on.

PyQt and Tkinter both draw their widgets themselves, which is why they don't always match the native widgets, although PyQt is very close.

This is not to say that wxPython does not support custom widgets. In fact, the wxPython toolkit has many custom widgets included with it, along with dozens upon dozens of core widgets. The wxPython downloads page has a section called Extra Files that is worth checking out. Here, there is a download of the wxPython Demo package.
This is a nice little application that demonstrates the vast majority of the widgets that are included with wxPython. The demo allows a developer to view the code in one tab and run it in a second tab. You can even edit and re-run the code in the demo to see how your changes affect the application.

Installing wxPython

You will be using the latest wxPython for this article, which is wxPython 4, also known as the Phoenix release. The wxPython 3 and wxPython 2 versions are built only for Python 2. When Robin Dunn, the primary maintainer of wxPython, created the wxPython 4 release, he deprecated a lot of aliases and cleaned up a lot of code to make wxPython more Pythonic and easier to maintain.

You will want to consult the following links if you are migrating from an older version of wxPython to wxPython 4 (Phoenix):

- Classic vs Phoenix
- wxPython Project Phoenix Migration Guide

The wxPython 4 package is compatible with both Python 2.7 and Python 3. You can now use pip to install wxPython 4, which was not possible in the legacy versions of wxPython. You can do the following to install it on your machine:

$ pip install wxpython

Note: On Mac OS X you will need a compiler installed, such as XCode, for the install to complete successfully. Linux may also require you to install some dependencies before the pip installer will work correctly. For example, I needed to install freeglut3-dev, libgstreamer-plugins-base0.10-dev, and libwebkitgtk-3.0-dev on Xubuntu to get it to install.

Fortunately, the error messages that pip displays are helpful in figuring out what is missing, and you can use the prerequisites section on the wxPython Github page to help you find the information you need if you want to install wxPython on Linux. There are some Python wheels available for the most popular Linux versions, which you can find in the Extras Linux section, with both GTK2 and GTK3 versions.
To install one of these wheels, you would use the following command, passing the wheel location for your distribution to the -f (--find-links) option:

$ pip install -U -f wxPython

Be sure you have modified the command above to match your version of Linux.

Definition of a GUI

As was mentioned in the introduction, a graphical user interface (GUI) is an interface that is drawn on the screen for the user to interact with. User interfaces have some common components:

- Main window
- Toolbar
- Buttons
- Text Entry
- Labels

All of these items are known generically as widgets. There are many other common widgets and many custom widgets that wxPython supports. A developer will take the widgets and arrange them logically on a window for the user to interact with.

Event Loops

A graphical user interface works by waiting for the user to do something. The something is called an event. Events happen when the user types something while your application is in focus or when the user uses their mouse to press a button or other widget.

Underneath the covers, the GUI toolkit runs an infinite loop that is called an event loop. The event loop just waits for events to occur and then acts on those events according to what the developer has coded the application to do. When the application doesn't catch an event, it effectively ignores that it even happened.

When you are programming a graphical user interface, you will want to keep in mind that you will need to hook up each of the widgets to event handlers so that your application will do something.

There is a special consideration that you need to keep in mind when working with event loops: they can be blocked. When you block an event loop, the GUI will become unresponsive and appear to freeze to the user. Any process that you launch in a GUI that will take longer than a quarter second should probably be launched as a separate thread or process. This will prevent your GUI from freezing and give the user a better user experience.
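The dispatch cycle described above (wait for an event, look up its handler, call it) can be sketched in a few lines of framework-free Python; the queue stands in for the stream of user actions that a real toolkit like wxPython receives from the operating system:

```python
import queue

# Tiny model of a GUI event loop: handlers are registered per event type,
# and the loop blocks waiting for events, dispatching each to its handler.
handlers = {}
events = queue.Queue()

def bind(event_type, handler):
    handlers[event_type] = handler

def main_loop():
    while True:
        event_type, payload = events.get()
        if event_type == "quit":
            break
        # Events nobody bound to are effectively ignored
        handler = handlers.get(event_type)
        if handler is not None:
            handler(payload)

clicks = []
bind("button_press", clicks.append)

events.put(("button_press", "Hello"))
events.put(("mouse_move", (3, 4)))   # no handler bound: silently dropped
events.put(("quit", None))
main_loop()

assert clicks == ["Hello"]
```

This also makes the blocking problem concrete: if a handler takes seconds to return, `events.get()` is never reached again and every queued event sits unprocessed, which is exactly the frozen-GUI symptom the paragraph above warns about.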
The wxPython framework has special thread-safe methods that you can use to communicate back to your application to let it know that the thread is finished or to give it an update. Let's create a skeleton application to demonstrate how events work.

Creating a Skeleton Application

An application skeleton in a GUI context is a user interface with widgets that don't have any event handlers. These are useful for prototyping. You basically just create the GUI and present it to your stakeholders for sign-off before spending a lot of time on the backend logic. Let's start by creating a Hello World application with wxPython:

import wx

app = wx.App()
frame = wx.Frame(parent=None, title='Hello World')
frame.Show()
app.MainLoop()

Note: Mac users may get the following message: "This program needs access to the screen. Please run with a Framework build of python, and only when you are logged in on the main display of your Mac." If you see this message and you are not running in a virtualenv, then you need to run your application with pythonw instead of python. If you are running wxPython from within a virtualenv, then see the wxPython wiki for the solution.

In this example, you have two parts: wx.App and wx.Frame. The wx.App is wxPython's application object and is required for running your GUI. The wx.App starts something called a .MainLoop(). This is the event loop that you learned about in the previous section. The other piece of the puzzle is wx.Frame, which will create a window for the user to interact with. In this case, you told wxPython that the frame has no parent and that its title is Hello World. Here is what it looks like when you run the code:

Note: The application will look different when you run it on Mac or Windows. By default, a wx.Frame will include minimize, maximize, and exit buttons along the top.

You won't normally create an application in this manner though.
Most wxPython code will require you to subclass wx.Frame and other widgets so that you can get the full power of the toolkit. Let's take a moment and rewrite your code as a class:

import wx

class MyFrame(wx.Frame):
    def __init__(self):
        super().__init__(parent=None, title='Hello World')
        self.Show()

if __name__ == '__main__':
    app = wx.App()
    frame = MyFrame()
    app.MainLoop()

You can use this code as a template for your application. However, this application doesn't do very much, so let's take a moment to learn a little about some of the other widgets you could add.

Widgets

The wxPython toolkit has more than one hundred widgets to choose from. This allows you to create rich applications, but it can also be daunting trying to figure out which widget to use. This is why the wxPython Demo is helpful, as it has a search filter that you can use to help you find the widgets that might apply to your project.

Most GUI applications allow the user to enter some text and press a button. Let's go ahead and add those widgets.
So for the text control, you tell wxPython that you want to position its top left corner 5 pixels from the left (x) and 5 pixels from the top (y).Data The window argument is the widget to be added while proportion sets how much space relative to other widgets in the sizer this particular widget should take. By default, it is zero, which tells wxPython to leave the widget at its default proportion. The third argument is flag. You can actually pass in multiple flags if you wish as long as you separate them with a pipe character: |. The wxPython toolkit uses | to add flags using a series of bitwise ORs. In this example, you add the text control with the wx.ALL and wx.EXPAND flags. The wx.ALL flag tells wxPython that you want to add a border on all sides of the widget while wx.EXPAND makes the widgets expand as much as they can within the sizer. Finally, you have the border parameter, which tells wxPython how many pixels of border you want around the widget. The userData parameter is only used when you want to do something complex with your sizing of the widget and is actually quite rare to see in practice. Adding the button to the sizer follows the exact same steps. However, to make things a bit more interesting, I went ahead and switched out the wx.EXPAND flag for wx.CENTER so that the button would be centered on-screen. When you run this version of the code, your application should look like the following: do something when the user presses it. You can accomplish this by calling the button’s .Bind() method. .Bind() takes the event you want to bind to, the handler to call when the event happens, an optional source, and a couple of optional ids. In this example, you bind your button object to the wx.EVT_BUTTON event and tell it to call on_press() when that event gets fired. An event gets “fired” when the user does the event you have bound to. In this case, the event that you set up is the button press event, wx.EVT_BUTTON. 
.on_press() accepts a second argument that you can call event. This is by convention. You could call it something else if you wanted to. However, the event parameter here refers to the fact that when this method is called, its second argument should be an event object of some sort. Within .on_press(), you can get the text control's contents by calling its GetValue() method. You then print a string to stdout depending on what the contents of the text control is.

Now that you have the basics out of the way, let's learn how to create an application that does something useful!

Creating a Working Application

The first step when creating something new is to figure out what you want to create. In this case, I have taken the liberty of making that decision for you. You will learn how to create an MP3 tag editor! The next step when creating something new is to find out what packages can help you accomplish your task. If you do a Google search for Python mp3 tagging, you will find you have several options:

- mp3-tagger
- eyeD3
- mutagen

I tried out a couple of these and decided that eyeD3 had a nice API that you could use without getting bogged down with the MP3's ID3 specification. You can install eyeD3 using pip, like this:

$ pip install eyed3

When installing this package on macOS, you may need to install libmagic using brew. Windows and Linux users shouldn't have any issues installing eyeD3.

Designing the User Interface

When it comes to designing an interface, it's always nice to just kind of sketch out how you think the user interface should look. You will need to be able to do the following:

- Open up one or more MP3 files
- Display the current MP3 tags
- Edit an MP3 tag

Most user interfaces use a menu or a button for opening files or folders. You can go with a File menu for this. Since you will probably want to see tags for multiple MP3 files, you will need to find a widget that can do this in a nice manner.
Something that is tabular with columns and rows would be ideal because then you can have labeled columns for the MP3 tags. The wxPython toolkit has a few widgets that would work for this, with the top two being the following:

- wx.grid.Grid
- wx.ListCtrl

You should use wx.ListCtrl in this case, as the Grid widget is overkill, and frankly it is also quite a bit more complex. Finally, you need a button to use to edit a selected MP3's tag. Now that you know what you want, you can draw it up: The illustration above gives us an idea of how the application should look. Now that you know what you want to do, it's time to code!

Creating the User Interface

There are many different approaches when it comes to writing a new application. For example, do you need to follow the Model-View-Controller design pattern? How do you split up the classes? One class per file? There are many such questions, and as you get more experienced with GUI design, you'll know how you want to answer them. In your case, you really only need two classes:

- A wx.Panel class
- A wx.Frame class

You could argue for creating a controller type module as well, but for something like this, you really do not need it. A case could also be made for putting each class into its own module, but to keep it compact, you will create a single Python file for all of your code. Let's start with the imports and the panel class. Here, you import the eyed3 package, Python's glob package, and the wx package for your user interface. Next, you subclass wx.Panel and create your user interface. You need a dictionary for storing data about your MP3s, which you can name row_obj_dict. Then you create a wx.ListCtrl and set it to report mode (wx.LC_REPORT) with a sunken border (wx.BORDER_SUNKEN). The list control can take on a few other forms depending on the style flag that you pass in, but the report flag is the most popular. To make the ListCtrl have the correct headers, you will need to call .InsertColumn() for each column header.
You then supply the index of the column, its label, and how wide in pixels the column should be. The last step is to add your Edit button, an event handler, and a method. You can create the binding to the event and leave the method that it calls empty for now. Now all that remains is to write the frame class, with its menu bar, to finish the application.

Conclusion

You learned a lot about wxPython in this article. You became familiar with the basics of creating GUI applications using wxPython. You now know more about the following:

- How to work with some of wxPython's widgets
- How events work in wxPython
- How absolute positioning compares with sizers
- How to create a skeleton application

Finally, you learned how to create a working application, an MP3 tag editor. You can use what you learned in this article to continue to enhance this application or perhaps create an amazing application on your own. The wxPython GUI toolkit is robust and full of interesting widgets that you can use to build cross-platform applications. You are limited by only your imagination.

Further Reading

If you would like to learn more about wxPython, you can check out some of the following links:

- The Official wxPython website
- Zetcode's wxPython tutorial
- Mouse Vs Python Blog

For more information on what else you can do with Python, you might want to check out What Can I Do with Python? If you'd like to learn more about Python's super(), then Supercharge Your Classes With Python super() may be just right for you. You can also download the code for the MP3 tag editor application that you created in this article if you want to study it more in depth.
https://realpython.com/python-gui-with-wxpython/
CC-MAIN-2022-05
refinedweb
3,088
71.34
Bench False Cache Sharing

False cache sharing is when a bit of data shares a cache line and is getting dragged around the cores even though it isn't being updated. It can be mitigated by adding an empty bit of padding at the end of the struct.

#include <thread>
#include <cstdint>
#include <chrono>
#include <cstdio>  // for printf
#include <cstdlib> // for std::atoi

struct foo
{
  uint32_t i;
  char padding[64]; /* optional */
};

void proc(foo *f, uint32_t count)
{
  for(uint32_t i = 0; i < count; ++i)
  {
    f->i += 1;
  }
}

constexpr int th_count = 4;
foo data[th_count];
std::thread pool[th_count];

int main(int argc, const char **argv)
{
  uint32_t count = std::atoi(argv[1]);

  auto begin = std::chrono::high_resolution_clock::now();

  for(int i = 0; i < th_count; ++i)
  {
    pool[i] = std::thread(proc, &data[i], count);
  }

  for(auto &p : pool)
  {
    p.join();
  }

  auto end = std::chrono::high_resolution_clock::now();
  auto diff = end - begin;
  auto time = std::chrono::duration_cast<std::chrono::nanoseconds>(diff).count();
  printf("Time: %d\n", (int)(time));
}

Results

Results are in milliseconds. Not a huge loss, but an easy win; it's also a loss that will increase with the number of cores/threads on your system. I should try to get some results on a system with more threads.
https://blog.cooperking.net/posts/2019-05-13-bench_false_cache_sharing-copy/
CC-MAIN-2021-39
refinedweb
195
56.89
"Michael Abbott" <michael.g.abbott at ntlworld.com> wrote in message news:Xns9145D94516673michaelrcpcouk at 62.253.162.104... > However, Python tuple assignment does look somewhat like pattern matching; > for example, sometimes my .read() method returns some (one) thing, and I > write: > > time, status, ((value, boring),) = myobject.read() > > So here I think of this as matching the one value (itself a pair of values) > that happens to be returned. This looks awfully like pattern matching (cf > Haskell). An so it is. If the patterns do not match, an exception is raised. However, after the match, the names (in your example above) 'time', 'status', 'value', and 'boring' are then bound to the corresponding objects in the current namespace. Terry J. Reedy
https://mail.python.org/pipermail/python-list/2001-October/074529.html
CC-MAIN-2016-40
refinedweb
119
68.67
Hello, there! I would like to copy the animation I have on a Mixamo Control Rig to a different character. By different, I mean only the hierarchies and naming conventions don't match. I'm having trouble with copying data from the Mixamo Control Rig's _bind joint chain. Most curves and values can't be applied directly because the joints inside the Mixamo Rig have frozen values. That's why I'm going for GetMg() and SetMg(). I've tried looping within a frame range, but GetMg() always returns the same vectors. I even tried c4d.CallCommand(12414) inside a loop, but no luck. Any pointers? Thanks for your time, Leo

@Leo_Saramago I forgot to mention I have baked the animation in the _bind chain. For testing purposes: I created a cube and gave it some keyframe animation, then baked it. Console shows the same values for every frame when I execute this simplified version of my original python code:

import c4d

def main():
    c4d.CallCommand(12501) # Moves Playhead to Start
    obj = doc.GetActiveObjects(c4d.GETACTIVEOBJECTFLAGS_SELECTIONORDER) # Remember: Select Object in OM
    for i in range(10):
        mg = obj[0].GetMg()
        print (mg.off)
        c4d.CallCommand(12414) # Moves Playhead to Next Frame

# Execute main()
if __name__=='__main__':
    main()

Hi @Leo_Saramago, thank you for reaching out to us. As already pointed out by @mp5gosu (thank you for that), you can execute the passes on the document when you want to evaluate a new state. But we also have an example on GitHub that is much closer to your specific scenario. Besides the more straightforward approach of setting the BaseTime of a document directly, it also shows you how to update the scene state via various messages.
Below you will find a slightly modified version which does the same as your script, i.e., printing the global offset of an object.

Cheers,
Ferdinand

"""Example for animating a document.

Based on cinema4d_py_sdk_extended example:
scripts/04_3d_concepts/scene_elements/scene_management/basedocument_animate_r13.py

As discussed in: plugincafe.maxon.net/topic/13228
"""
import c4d

def main():
    # Saves current time
    ctime = doc.GetTime()

    # Retrieves BaseTime of frame 5, 20
    start = 5
    end = 20

    # Loops through the frames
    for frame in range(start, end + 1):
        # Sets the Status Bar
        c4d.StatusSetBar(100.0 * float(frame - start) / float(end - start))

        # Changes the time of the document
        doc.SetTime(c4d.BaseTime(frame, doc.GetFps()))

        # Updates timeline
        c4d.GeSyncMessage(c4d.EVMSG_TIMECHANGED)

        # Redraws the viewport and regenerate the cache object
        c4d.DrawViews(c4d.DRAWFLAGS_ONLY_ACTIVE_VIEW | c4d.DRAWFLAGS_NO_THREAD | c4d.DRAWFLAGS_STATICBREAK)

        # Print the current document frame and the offset of the currently selected object.
        offset = op.GetMg().off if op else None
        print(f"Frame: {frame}, offset: {offset}")

    # Sets the time back to the original time.
    doc.SetTime(ctime)

    # Pushes an update event to Cinema 4D
    c4d.EventAdd(c4d.EVENT_ANIMATE)

    # Clears the Status Bar
    c4d.StatusClear()

if __name__ == "__main__":
    main()

@mp5gosu and @ferdinand Thank you very much. Really helpful! Sorry for the delay. I'll perform a couple of tests and get back to you before closing this one. Soon, I promise.

Ok, now that I'm sure I've grabbed the proper data via GetMg(), can I apply SetMg() to each target joint and keyframe all properties at once for each frame? Or do I have to go through each CTrack curve of my target joint and use SetMg() over and over again? Thanks, these answers have been very helpful!

@Leo_Saramago This photoshop montage illustrates my point: On the left, you can see the curves for my source object.
The curves on the right belong to my target object, and they came to be after applying SetMg(mg) in every frame inside a loop. This is the result I'm actually going for, except for the unwanted keyframes concerning Scale. Nothing that I couldn't put up with; I mean, deleting those keyframes wouldn't be trouble at all. It's just that the Maxon SDK team recommends avoiding CallCommands, and in this case I used c4d.CallCommand(12410) # Record Active Objects. As I had mentioned before, the source object has frozen values, hence those curves being different. As you can see above, both objects have the same animation going on in Global Space. I just want to learn how to achieve this same result the proper way.

Hi @Leo_Saramago, you would need to create keyframes manually for the Position, Scale, and Rotation of each object. Find an example of how to create a keyframe in ctrack_create_keys.py. I also recommend reading DescIds and Animation. Hope this helps, cheers, Maxime.

@m_adam Hi! Thanks for the reply. None of those would work because the source's curves are not to be copied as they are. I mean, the values of each keyframe in each curve are not necessarily the same because the source has frozen values. Based on those links you've provided, the only option would be to forget about the GetMg()/SetMg() methods. I would have to retrieve all frozen values from source objects, and use them to offset each key value in every curve. And I'd have to do the same to target objects, because they could also have frozen values. Then, depending on how both source and target objects were nested, I'd have to figure out their overall offsets. Is this train of thought correct? If so, what's the DescID that would retrieve values stored in "freeze"? Once again, thanks!
Our matrix manual contains however the conversion formulas (at the very bottom). The BaseObject parameters are documented in the BaseObject ressources. The attribute for frozen rotations is for example c4d.ID_BASEOBJECT_FROZEN_ROTATION. Get/SetMg/Ml() BaseObject c4d.ID_BASEOBJECT_FROZEN_ROTATION I also would not say that the Maxon SDK team does not recommend using CallCommand. At least I would not go that far. It is a convenience tool that lacks sometimes the finer control to do more complex things. It is just a very simple procedural entry point into our API - which is sometimes misconceived as something more powerful. In these cases you have to use our object oriented API. CallCommand Reading your postings, I am still not quite sure what you want to do. If you want to animate some frozen value for multiple objects in a scene, you will have to iterate over all objects, find the ones you want to modify and then write some CKey into their CTrack\CCurve. We have a whole chapter on animation features in our Python SDK Script examples. Without any code from your side, it is hard to give you more substantial advice. CKey CTrack\CCurve @ferdinand Hi! Thanks for clarifying things some more. I guess I was looking for frozen values in the wrong place. Browser search wouldn't give me any results. The problem with writing CKey into CTrack\CCurve in my original scenario is that I have to pass the values of the keys whenever I apply the methods, and I don't have those values. I have the sources', yes, but not the targets'. The source is a Mixamo Control Rig - frozen values in almost every _bind joint. That's why I thought of GetMg()/SetMg() iterating over the joint chains within the frame range. They work, but then I have to use CallCommand to set keyframes and do some clean up afterwards. No big deal, really. So, just to wrap this up, is there a way to set keyframes, other than CallCommand, that doesn't require CKey values? 
while I agree that using transforms/matrices is more convenient than vectors for position, scale and rotation, it frankly do not quite get why you cannot use the latter since matrices are exactly the same, just written in a more convenient form. There are however multiple convenience methods attached to BaseDocument, .AnimateObject(), .AutoKey(), .Record() and .RecordKey(), with which you can animate nodes without having to deal with the more atomic animation types directly. I have attached a very simple example at the end for the .Record() method, in the hopes that this reflects what you are trying to do. BaseDocument .AnimateObject() .AutoKey() .Record() .RecordKey() """Little example for one of the animation convenience methods of BaseDocuement. Select an object and run the script, it will create a short animation for it. As discussed in: plugincafe.maxon.net/topic/13228/ """ import c4d import math def main(): """ """ # Get out when there is no object selected. op is predefined as the # primary selected object in a script module. if op is None: raise ValueError("Please select an object.") # Set a frozen rotation for that object. op[c4d.ID_BASEOBJECT_FROZEN_ROTATION] = c4d.Vector(0, 0, math.pi) # Set a frozen translation for that object. op[c4d.ID_BASEOBJECT_FROZEN_POSITION] = c4d.Vector(50., 0, 0) # Take ten steps. for t in range(10): # Create a BaseTime in 1/10th of a second intervals from our step count. bt = c4d.BaseTime(t * .1) # Set the document to that time. doc is like op a predefined script # module attribute, pointing to the currently active document. doc.SetTime(bt) # Set the rotation of our object. rotation = t * .1 * math.pi op[c4d.ID_BASEOBJECT_REL_ROTATION] = c4d.Vector(rotation, 0, 0) # You can also make use of SetMg() here if you want to, this however # will not respect the frozen values, or only in a way that is probably # not what you want. 
So if you set a frozen offset of (100, 0, 0) # for example and then write an offset of (0, 0, 0) into the object # via SetMg(), the object will then have the relative position of # (-100, 0, 0) in the coordinate manger, because (0, 0, 0) in world # coordinates is (-100, 0, 0) in frozen coordinates. Keyframing with # SetMg() will however work fine. # mg = c4d.utils.MatrixRotZ(rotation) # mg.off = c4d.Vector(t * 10, 0, 0) # op.SetMg(mg) # Record the active object(s) in the document. Additional convenience # methods for animating stuff are BaseDocument.AnimateObject(), # .AutoKey(), and .RecordKey(). Se documentation for details. doc.Record() # Push an update event to Cinema 4D, so that our editor is getting updated. c4d.EventAdd() if __name__=='__main__': main()```
https://plugincafe.maxon.net/topic/13228/global-matrix-trying-to-copy-data-from-mixamo-control-rig-with-python
CC-MAIN-2021-49
refinedweb
1,752
66.03
I had to write this quick little adapter the other day for something and figured it might be useful for people ... In general you are better off making the code that is using a stream not use a stream at all, but an IEnumerable<byte[]> instead, since using a stream requires you to copy ... But for very large buffers this is far more efficient than copying into a MemoryStream.

public class IteratorStream : Stream
{
    private readonly IEnumerator<byte[]> m_Chunks;
    private ArraySegment<byte> m_CurrentChunk;

    public IteratorStream(IEnumerable<byte[]> _Chunks)
    {
        if (_Chunks == null) throw new ArgumentNullException();
        m_Chunks = _Chunks.GetEnumerator();
    }

    public override bool CanRead
    {
        get { return true; }
    }

    public override bool CanSeek
    {
        get { return false; }
    }

    public override bool CanWrite
    {
        get { return false; }
    }

    public override long Length { get { return -1; } }

    public override long Position
    {
        get { throw new NotImplementedException(); }
        set { throw new NotImplementedException(); }
    }

    private bool ReadNextChunk()
    {
        bool HasMore = m_Chunks.MoveNext();
        if (HasMore)
        {
            m_CurrentChunk = new ArraySegment<byte>(m_Chunks.Current);
        }
        return HasMore;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (buffer == null) throw new ArgumentNullException("buffer");
        if (offset < 0 || offset >= buffer.Length) throw new ArgumentException("offset must be greater than or equal to 0 and less than the size of the buffer");
        if (count < 0) throw new ArgumentException("count must be greater than or equal to 0");
        if (offset + count > buffer.Length) throw new ArgumentException("offset + count must be less than the buffer size");

        int LeftToRead = count;
        int CurrentLocation = offset;
        if (m_CurrentChunk.Count == 0)
        {
            if (!ReadNextChunk())
            {
                return 0;
            }
        }
        while (LeftToRead > 0 && m_CurrentChunk.Count != 0)
        {
            int toRead = (LeftToRead > m_CurrentChunk.Count) ? m_CurrentChunk.Count : LeftToRead;
            Buffer.BlockCopy(m_CurrentChunk.Array, m_CurrentChunk.Offset, buffer, CurrentLocation, toRead);
            LeftToRead -= toRead;
            CurrentLocation += toRead;
            m_CurrentChunk = new ArraySegment<byte>(m_CurrentChunk.Array, m_CurrentChunk.Offset + toRead, m_CurrentChunk.Count - toRead);
            if (m_CurrentChunk.Count == 0)
            {
                ReadNextChunk();
            }
        }
        return count - LeftToRead;
    }

    protected override void Dispose(bool disposing)
    {
        base.Dispose(disposing);
        if (disposing && m_Chunks != null)
        {
            m_Chunks.Dispose();
        }
    }

    public override long Seek(long offset, SeekOrigin loc) { throw new NotImplementedException(); }
    public override void SetLength(long value) { throw new NotImplementedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotImplementedException(); }
    public override void Flush() { throw new NotImplementedException(); }
    public override void WriteByte(byte value) { throw new NotImplementedException(); }
}

class Program
{
    static IEnumerable<byte[]> GetBytes()
    {
        byte[] OneToTen = new byte[10];
        for (byte i = 0; i < 10; i++)
            OneToTen[i] = i;
        for (int j = 0; j < 10; j++)
        {
            yield return OneToTen;
        }
    }

    static void Main(string[] args)
    {
        IteratorStream stream;
        using (stream = new IteratorStream(GetBytes()))
        {
            int read = stream.ReadByte();

            // read 1 byte at a time
            while (read != -1)
            {
                Console.WriteLine(read.ToString());
                read = stream.ReadByte();
            }
        }

        // read 100 bytes
        using (stream = new IteratorStream(GetBytes()))
        {
            byte[] buffer = new byte[100];
            int read = stream.Read(buffer, 0, 100);
            Console.WriteLine(read);
        }
    }
}

In a long email I wrote tonight I wrote a few words that should stay in every developer's mind and to me personally they represent a step in my own
evolution.... for those who have known me for years (Craig, Brian, Steve, Toby, others). Good, even great software will never make money, it can only save money. It can nearly always be done cheaper, more simply to produce the same results. Businesses survive by making money. Good even great software is not necessary for a successful business. We focus an incredible amount of our time on how to make "great" software .. How to make it maintainable, scalable, and performant. Thinking back ... my most successful pieces of code have been complete hacks that others can easily attest to. Two in particular come to mind: Toby: The vacuum process and remote updating of sql databases in systems. We spent what 1-2 days on both? Neither were well thought out/scalable but they were probably the most valuable features delivered. My entire current system. I spoke about it at QCon a bit but we completed it in 12 days ... we later spent 11 months to do it 'right'. It was a complete hack but it made money. 80+% of systems fail for non-technical reasons (bad or late ideas, bad management, political failures) ... Why are we so focused on the technical reasons? The only technical failures I have ever seen that needed to be fixed were of already successful systems. How many similar systems failed to the one that succeeded. It is a calculated risk, but one that should be thought
http://codebetter.com/blogs/gregyoung/archive/2008/03.aspx
crawl-001
refinedweb
833
62.98
Tobias Jackson 9,758 Points

Create a method named "create_shopping_list" that returns a hash. It does not need to ask for a name or get anything from the user.

I can't figure out what's wrong with my code.

def create_shopping_list
  hash = { "name" => name }
  return hash
end

1 Answer

andren 28,523 Points

The problem is the value of the hash: { "name" => name }. You set a key called name to a variable called name, but that variable does not exist. That's what causes an error. You can solve the issue by simply using a string as the value, like this:

def create_shopping_list
  hash = { "name" => "Tobias" }
  return hash
end

andren 28,523 Points

You don't actually need to have a name key in the hash; the challenge accepts an entirely empty hash as well:

def create_shopping_list
  return {}
end

The only reason I placed a name key in the hash in my solution was because I was correcting Tobias's existing code.

jeffdelacruz 22,516 Points

The question is misleading when it says it does not need to ask for a name, but in order to proceed you have to set up a hash with a name. Wouldn't the question make more sense to say "create a hash with a key of name and value of your name"? I tried forever to create an empty hash with hash.new() and it didn't like that.
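To see andren's point in isolation, here is a standalone Ruby sketch (outside the Treehouse checker) contrasting the failing version with the fixed one:

```ruby
def broken_shopping_list
  # `name` is neither a local variable nor a method here, so calling
  # this raises a NameError -- the problem in the original code.
  { "name" => name }
end

def create_shopping_list
  { "name" => "Tobias" }
end

begin
  broken_shopping_list
rescue NameError => error
  puts "broken version: #{error.class}"
end

puts create_shopping_list["name"]
```

The literal string "Tobias" (or an empty hash, as in andren's comment) satisfies the challenge because nothing has to come from the user.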
https://teamtreehouse.com/community/create-a-method-named-createshoppinglist-that-returns-a-hash-it-does-not-need-to-ask-for-a-name-or-get-anything-fro
CC-MAIN-2022-33
refinedweb
232
72.09
We are trying to create an auto-download for our client that combines database information with an uploaded Word document. We are currently using the following code to create the first page of the PDF:

Hello, Thank you for your request. I would like to note that you are using an old version of Aspose.Words. We strongly recommend that you use the latest version of the product. You can download it here:

In the latest version you can easily save documents as PDF. Please see this article here:

Regarding your question, you can use the Pdf.Kit. You can learn how to use it from here:

You can download Pdf.Kit from here:

So how do I truncate a document that is over X number of pages, so that it's only X number of pages long?

Hi, Thanks for your request. You can use code like the following:

Document doc = new Document(@"test001\in.doc");

// Save 10 pages of the document to PDF.
PdfSaveOptions opt = new PdfSaveOptions();
opt.PageIndex = 0;
opt.PageCount = 10;
doc.Save(@"Test001\out.pdf", opt);

Hope this helps. Best regards,

Is there a way to stream my doc into a pdf stream so that I can then use Aspose.PDF.Kit to concatenate multiple PDF files?

Hello, Thank you for your request. You can use one of the overloaded Document.Save methods like this:

If you have any other questions, feel free to ask.

I'm obviously NOT asking my question correctly …
Hi there, Thanks for this additional information. Yes, this is what we have been suggesting. Please see the code below, which demonstrates how to combine an output PDF from Aspose.Words with another PDF using Aspose.Pdf.Kit.

Document attachedDoc = new Document(file.Stream);

if (attachedDoc.PageCount > 4)
{
    attachPageCount = 4;
}
else
{
    attachPageCount = attachedDoc.PageCount;
}

Stream stream = File.Create(MyDir + "trial.pdf");

PdfSaveOptions attachedToPDF = new PdfSaveOptions();
attachedToPDF.PageIndex = 0;
attachedToPDF.PageCount = attachPageCount;
attachedDoc.Save(stream, attachedToPDF);
stream.Seek(0, SeekOrigin.Begin);

// Instantiating PdfFileEditor object using Aspose.Pdf.Kit
PdfFileEditor editor = new PdfFileEditor();

// Create an output stream object that will store the combined PDF to disk.
FileStream outputStream = new FileStream("Document out.pdf", FileMode.Create);

// Store all input streams in an array
Stream[] inputStreams = new Stream[] { stream, otherPdfStream };

// Call the Concatenate method
editor.Concatenate(inputStreams, outputStream);

// Close streams
stream.Close();
otherPdfStream.Close();
outputStream.Close();

If you have any further queries, please feel free to ask. Thanks,

The issues you have found earlier (filed as WORDSNET-2978) have been fixed in this .NET update and this Java update. This message was posted using Notification2Forum from Downloads module by aspose.notifier.
https://forum.aspose.com/t/append-truncated-word-document-to-pdf/74694
CC-MAIN-2022-21
refinedweb
442
61.73
I'm trying to create a transform that pads a PIL image to be square. (I wish this was one of the included transforms. Anybody know why it isn't?)

import numpy as np
import torchvision.transforms.functional as F

def square_pad(image):
    w, h = image.size
    if w == h:
        return image
    max_wh = np.max([w, h])
    hp = int((max_wh - w) // 2)
    vp = int((max_wh - h) // 2)
    hp2 = max_wh - w - hp
    vp2 = max_wh - h - vp
    padding = (hp, vp, hp2, vp2)
    return F.pad(image, padding, 255, 'constant')

How can I add this to a list of transforms for transforms.Compose() and put it into a DataLoader with multiple workers? I always get the following error:

AttributeError: Can't pickle local object 'main_worker.<locals>.square_pad'

I've tried adding it to the list directly:

train_transforms = [..., square_pad, ...]

I've tried using Lambda:

train_transforms = [..., transforms.Lambda(square_pad), ...]

I've tried making it a class with a __call__ method, and I always get the "Can't pickle local object" error. BTW, it does work when I use num_workers=0, but that will not work for my application. Any suggestions?

It seems like there really is no way to use a custom transform, and there is also no way to do it with built-in transforms.
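The error message itself points at the cause: 'main_worker.<locals>.square_pad' means square_pad was defined inside main_worker, and DataLoader worker processes serialize their arguments with pickle, which can only handle functions importable from a module's top level. A torch-free sketch of the difference (function names chosen to mirror the error message):

```python
import pickle

def square_pad(image):
    """Defined at module top level: picklable by reference."""
    return image

def main_worker():
    def square_pad(image):  # defined inside a function: a "local object"
        return image
    return square_pad

pickle.dumps(square_pad)  # works

try:
    pickle.dumps(main_worker())
except (AttributeError, pickle.PicklingError) as exc:
    print(exc)  # Can't pickle local object 'main_worker.<locals>.square_pad'
```

Moving the transform (or a class with a __call__ method) out of main_worker to module top level is the usual way to make num_workers > 0 work.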
https://discuss.pytorch.org/t/custom-transforms-dont-work/147983
CC-MAIN-2022-33
refinedweb
200
67.35
I was reading the submissions and came across a neat submission by @pieguy (you know him, as he is the setter of many problems in the contests here at CodeChef). I am pasting the code snippets where I have doubts; if anyone, or @pieguy himself, can explain this, it will really be helpful.

#include<iostream>
#include<cstdio>
#include<algorithm>
using namespace std;

int E, R;

long long solve(int* start, int length, int se, int ee)
{
    if(length == 0) return 0;
    long long pos = max_element(start, start+length) - start;
    long long ie = pos*R+se;
    if(ie > E) ie=E;
    long long re = ee-(length-pos)*R;
    if(re < 0) re=0;
    long long res=(ie-re)*start[pos];
    res+=solve(start, pos, se, ie);
    res+=solve(start+pos+1, length-pos-1, re+R, ee);
    return res;
}

int main(){
    int T;
    scanf("%d", &T);
    for(int t=1; t<=T; t++)
    {
        int N, v[10000];
        scanf("%d%d%d", &E, &R, &N);
        for(int i=0; i<N; i++)
            scanf("%d", v+i);
        printf("Case #%d: %lld\n", t, solve(v, N, E, R));
    }
}

This was the solve() that he used; the initial call was made by solve(v, N, E, R). For the meaning of the symbols, refer to the [problem.][1] v is the array used to store each vi. I did not understand the ie = pos*R + se; part: why is he multiplying the regain amount by the position of the max element and then adding it to the initial energy? Again, while calculating re, he is doing something similar. Please clarify this.

[1]:
https://discusstest.codechef.com/t/gcj-manage-your-energy-problem-with-pieguys-submission/1968
CC-MAIN-2021-31
refinedweb
266
65.29
The Innocuous Code That Tripped Me

Sometimes the code that breaks on us when we believe it shouldn't breaks because of something we've long since stopped thinking about.

When building a cache, I need a way to generate a hash code from a query. A query is a complex object that has many properties. My first attempt to do so looked like this:

public int GetHashCode()
{
    int hashCode = QueryStr.GetHashCode();
    hashCode = (hashCode * 397) ^ WaitForNonStaleResultsTimeout?.GetHashCode() ?? 0;
    hashCode = (hashCode * 397) ^ AllowStale.GetHashCode();
    return hashCode;
}

This is what that code actually compiles down to:

public unsafe override int GetHashCode()
{
    int num = this.QueryStr.GetHashCode() * 397;
    TimeSpan?* expr_18 = ref this.WaitForNonStaleResultsTimeout;
    return ((num ^ (expr_18.HasValue ? new int?(expr_18.GetValueOrDefault().GetHashCode()) : null)) ?? 0) * 397 ^ this.AllowStale.GetHashCode();
}

The trap is operator precedence: ?? binds more loosely than ^, so when WaitForNonStaleResultsTimeout is null the entire (hashCode ^ null) expression is null, and the ?? 0 then discards the QueryStr hash that was already accumulated.
https://dzone.com/articles/the-innocuous-code-that-tripped-me
CC-MAIN-2018-34
refinedweb
120
52.26
The Blender.Registry submodule.

New: GetKey and SetKey have been updated to save and load scripts' *configuration data* to files.

This module provides a way to create, retrieve and edit persistent data in Blender. When a script is executed it has its own private global dictionary, which is deleted when the script exits. This is done to avoid problems with name clashes and garbage collecting. But because of this, the data created by a script isn't kept after it leaves: the data is not persistent. The Registry module was created to give programmers a way around this limitation.

Possible uses:

Example:

import Blender
from Blender import Registry

# this function updates the Registry when we need to:
def update_Registry():
    d = {}
    d['myvar1'] = myvar1
    d['myvar2'] = myvar2
    d['mystr'] = mystr
    # cache = True: data is also saved to a file
    Blender.Registry.SetKey('MyScript', d, True)

# first declare global variables that should go to the Registry:
myvar1 = 0
myvar2 = 3.2
mystr = "hello"

# then check if they are already there (saved on a
# previous execution of this script):
rdict = Registry.GetKey('MyScript', True) # True to check on disk also

if rdict: # if found, get the values saved there
    try:
        myvar1 = rdict['myvar1']
        myvar2 = rdict['myvar2']
        mystr = rdict['mystr']
    except:
        update_Registry() # if data isn't valid rewrite it

# ...
# here goes the main part of the script ...
# ...

# if at some point the data is changed, we update the Registry:
update_Registry()
There are restrictions to the data that gets automatically saved to disk by SetKey(keyname, dict, True): this feature is only meant for simple data (bools, ints, floats, strings and dictionaries or sequences of these types). For more demanding needs, it's of course trivial to save data to another file or to a Blender Text.
http://www.blender.org/documentation/248PythonDoc/Registry-module.html
EEPROM.write()

Description

Write a byte to the EEPROM.

Syntax

EEPROM.write(address, value)

Parameters

address: the location to write to, starting from 0 (int)
value: the value to write, from 0 to 255 (byte)

Returns

none

Note

An EEPROM write takes 3.3 ms to complete. The EEPROM memory has a specified life of 100,000 write/erase cycles, so you may need to be careful about how often you write to it.

Example

#include <EEPROM.h>

void setup() {
  for (int i = 0; i < 512; i++)
    EEPROM.write(i, i);
}

void loop() {
}

Corrections, suggestions, and new documentation should be posted to the Forum. The text of the Arduino reference is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. Code samples in the reference are released into the public domain.
http://arduino.cc/de/Reference/EEPROMWrite
Modern browsers not only allow reading but also writing content: making social media posts, filling out online forms, searching for content, and so on. The next few labs implement these features. To start, this lab implements web forms, which allow the user to fill out form information and then send that form to the server. Web forms are used almost everywhere: you fill one out to post on Facebook, to register to vote, or to search Google. When your browser sends information to a web server, that is usually information that you've typed into some kind of input area, or a check-box of some sort that you've checked. So the first step in communicating with other servers is going to be to draw input areas on the screen and then allow the user to fill them out. On the web, there are two kinds of input areas: <input> elements, which are for short, one-line inputs, and <textarea> elements, which are for long, multi-line text. I'd like to implement both, because I'd like to support both search boxes (where queries are short, single-line things) and comment forms (where text inputs are a lot longer). Usually, web browsers communicate with the operating system and ask the OS to draw the input areas themselves, because that way the input areas will match the behavior and appearance of OS input areas. That's possible in Tk (in Python, you'd use the ttk library), but in the interests of simplicity we'll be drawing the input areas ourselves. Both <input> and <textarea> elements are inline content, like text, laid out in lines. So to support inputs we'll need a new kind of layout object, which I'll call InputLayout.
It'll need to support the same kind of API as TextLayout, namely attach and add_space, so that it won’t confuse InlineLayout: class InputLayout: def __init__(self, node, multiline=False): self.children = [] self.node = node self.space = 0 self.multiline = multiline def layout(self, x, y): pass def attach(self, parent): self.parent = parent parent.children.append(self) parent.w += self.w def add_space(self): if self.space == 0: gap = 5 self.space = gap self.parent.w += gap You'll note the add_space function hardcodes a 5-pixel space, unlike TextLayout, which uses the current font. That's because the contents of a text input generally use a custom font, not the same font used by surrounding text, so I might as well hard-code in the size of spaces. For simplicity, the layout method hard-codes a specific size for input elements.In real browsers, the width and height CSS properties can change the size of input elements. One quirk is that InlineLayout.text requires w to be set on text layout objects even before we call layout, so we'll set the size in the constructor and the position in layout: class InputLayout: def __init__(self, node, multiline=False): # ... 
        self.w = 200
        self.h = 60 if self.multiline else 20

    def layout(self, x, y):
        self.x = x
        self.y = y

Finally, we'll need to draw the input element itself, which is going to be a large rectangle:

    def display_list(self):
        _ol, _or = self.x, self.x + self.w
        _ot, _ob = self.y, self.y + self.h
        return [DrawRect(_ol, _ot, _or, _ob)]

Next, we need to create these InputLayout objects; we can do that in InlineLayout.recurse:

    def recurse(self, node):
        if isinstance(node, ElementNode):
            if node.tag in ["input", "textarea"]:
                self.input(node)
            else:
                for child in node.children:
                    self.recurse(child)
        else:
            self.text(node)

The new input function is similar to text, except that input areas don't need to be split into multiple words:

    def input(self, node):
        tl = InputLayout(node, node.tag == "textarea")
        line = self.children[-1]
        if line.w + tl.w > self.w:
            line = LineLayout(self)
        tl.attach(line)

Finally, to make sure these elements are parsed and styled right, we need to inform our HTML parser that <input> is self-closing (but not <textarea>, see below) and, since both <input> and <textarea> are supposed to be drawn inline, we need to set display: inline for them in the browser stylesheet as well. We've now got input elements rendering, but only as empty rectangles. We need the input part! Let's 1) draw the content of input elements; and 2) allow the user to change that content. I'll start with the second, since until we do that there's no content to draw. In this toy browser, I'm going to require the user to click on an input element to change its content. We detect the click in Browser.handle_click, which must now search for an ancestor link or input element:

    # ...
    while elt and not \
        (isinstance(elt, ElementNode) and \
         (elt.tag == "a" and "href" in elt.attributes or \
          elt.tag in ["input", "textarea"])):
        elt = elt.parent
    if not elt:
        pass
    elif elt.tag == "a":
        # ...
    else:
        self.edit_input(elt)

So, how does editing an input element work? Well, <input> and <textarea> work differently.
For <input>, the text in the input area is the element's value attribute, like this:

Name: <input value="Pavel Panchekha">

Meanwhile, <textarea> tags enclose text that is their content. (The text area can also contain manual line breaks, unlike normal text, but it does wrap lines, unlike <pre>; I'm ignoring that here.)

<textarea>This is the content.</textarea>

Wherever the content is, editing the input has to change it. Let's add that to our browser, soliciting input on the command line and then updating the element with it. (GUI text input is hard, which is why I'm soliciting input on the command line. See the last exercise.)

def edit_input(self, elt):
    new_text = input("Enter new text: ")
    if elt.tag == "input":
        elt.attributes["value"] = new_text
    else:
        elt.children = [TextNode(elt, new_text)]

Now that input areas have text content, we need to draw that text. For single-line input elements, we just add a DrawText command to the display list:

def display_list(self):
    border = # ...
    font = self.node.font()
    value = self.node.attributes.get("value", "")
    x, y = self.x + 1, self.y + 1
    text = DrawText(x, y, value, font, 'black')
    return [border, text]

This won't work for multi-line inputs, though, because we need to do line breaking on that text. Instead of implementing line breaking again, let's reuse InlineLayout by constructing one as a child of our InputLayout:

def layout(self, x, y):
    # ...
    for child in self.node.children:
        layout = InlineLayout(self, child)
        self.children.append(layout)
        layout.layout(y)

Since InlineLayout requires them, let's add some of these helper functions:

def content_left(self):
    return self.x + 1
def content_top(self):
    return self.y + 1
def content_width(self):
    return self.w - 2

(It's ugly that I have these; I'd rather the recursion be external.) We also need to propagate this child's display list to its parent:

def display_list(self):
    border = # ...
    if self.children:
        dl = []
        for child in self.children:
            dl.extend(child.display_list())
        dl.append(border)
        return dl
    else:
        text = # ...
        return [border, text]

The browser now displays text area contents! One final thing: when we enter new text in a text area, we change the node tree, and that means that the layout that we derived from that tree is now invalid and needs to be recomputed. We can't just call browse, since that will reload the web page and wipe out our changes. Instead, let's split the second half of browse into its own function, which browse will now call:

def relayout(self):
    style(self.nodes, self.rules)
    self.page = Page()
    self.layout = BlockLayout(self.page, self.nodes)
    self.layout.layout(0)
    self.max_h = self.layout.h
    self.display_list = self.layout.display_list()
    self.render()

Now edit_input can call self.relayout() to update the layout and redraw the page. You should now be able to run the browser on the following example web page. (Don't worry—the mangled HTML should be just fine for our HTML parser.)

<body>
<p>Name: <input value=1></p>
<p>Comment: <textarea>2</textarea></p>
</body>

One quirk—if you add style=font-weight:bold to the <body>, so that the labels are bold, you'll find that the input area content isn't bolded (because we override the font) but the text area content is. We can fix that by adding to the browser stylesheet: that'll prevent the text area from inheriting its font styles from its parent. Filled-out forms go to the server. The way this works in HTML is pretty tricky. First, in HTML, there is a <form> element, which describes how to submit all the input elements it contains through its action and method attributes. The method attribute is either get or post; the action attribute is a relative URL. The browser generates an HTTP request by combining the two. Let's focus on POST submissions.
Suppose you have the following form, on the web page:

<form action=submit method=post>
    <p>Name: <input name=name value=1></p>
    <p>Comment: <textarea name=comment>2</textarea></p>
    <p><button>Submit!</button></p>
</form>

This is the same as the little example web page above, except there's now a <form> element and also the two text areas now have name attributes, plus I've added a new <button> element. That element, naturally, draws a button, and clicking on that button causes the form to be submitted. When this form is submitted, the browser will first determine that it is making a POST request to (using the normal rules of relative URLs). Then, it will gather up all of the input areas inside that form and create a big dictionary where the keys are the name attributes and the values are the text content:

{ "name": "1", "comment": "2" }

Finally, this content has to be form-encoded, which in this case will look like this:

name=1&comment=2

This form-encoded string will be the body of the HTTP POST request the browser is going to send. Bodies are allowed on HTTP requests just like they are in responses, even though up until now we've been sending requests without bodies. The only caveat is that if you send a body, you must send the Content-Length header, so that the server knows how much of the request to wait for. So the overall request is:

POST /submit HTTP/1.0
Content-Length: 16

name=1&comment=2

The server will then respond to the POST request with a normal web page, which the browser will render. We're going to need to implement a couple of different things. We'll go in order. First, buttons. Buttons are a lot like input elements, and can use InputLayout. They get their contents like <textarea> but are only one line tall; luckily, the way I've implemented InputLayout allows those two aspects to be mixed, so we just need to modify InlineLayout.recurse to handle buttons. Second, button clicks. We need to extend handle_click with button support.
That requires modifying the condition in the big while loop and then adding a new case to the big if statement. Third, we need to find the form containing our button. That can happen inside submit_form. (Fun fact: HTML standardizes the form attribute for input elements, which in principle allows an input element to be outside the form it is supposed to be submitted with. But no browser implements that.) Fourth, we need to find all of the input elements inside this form:

def find_inputs(elt, out):
    if not isinstance(elt, ElementNode):
        return
    if elt.tag in ['input', 'textarea'] and 'name' in elt.attributes:
        out.append(elt)
    for child in elt.children:
        find_inputs(child, out)
    return out

We can use this in submit_form to make a dictionary mapping identifiers to values. Note that the key has to be the name attribute, since that is what find_inputs collects:

def submit_form(self, elt):
    # ...
    inputs = find_inputs(elt, [])
    params = {}
    for input in inputs:
        if input.tag == 'input':
            value = input.attributes.get('value', '')
        else:
            if input.children:
                value = input.children[0].text
            else:
                value = ""
        params[input.attributes['name']] = value
    self.post(relative_url(elt.attributes['action'], self.history[-1]), params)

Fifth, we can form-encode the resulting parameters:

def post(self, url, params):
    body = ""
    for param, value in params.items():
        body += "&" + param + "="
        body += value.replace(" ", "%20")
    body = body[1:]
    host, port, path = parse_url(url)
    headers, body = request('POST', host, port, path, body)

This isn't real form-encoding—I'm just replacing spaces by "%20". Real form-encoding escapes characters like the equal sign, the ampersand, and so on; but given that our browser is a toy anyway, let's just try to avoid typing equal signs, ampersands, and so on into forms.
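For comparison, here is a sketch of what the loop in post() produces next to Python's standard form encoder; toy_form_encode is an illustrative name for this sketch, not a function from the book.

```python
# Python's real form encoder, for comparison with the toy one.
from urllib.parse import urlencode

def toy_form_encode(params):
    # Mirrors the loop in post(): join key=value pairs with '&',
    # escaping only spaces.
    body = ""
    for param, value in params.items():
        body += "&" + param + "=" + value.replace(" ", "%20")
    return body[1:]

print(toy_form_encode({"name": "1", "comment": "2"}))  # name=1&comment=2
# urlencode escapes the characters the toy version punts on
# (and encodes spaces as '+' rather than '%20'):
print(urlencode({"q": "a&b=c"}))  # q=a%26b%3Dc
```

Swapping urlencode in for the hand-rolled loop would make the browser safe for equal signs and ampersands, at the cost of hiding what form-encoding actually does.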
Sixth and finally, to actually send a POST request, we need to modify the request function to allow multiple methods:

def request(method, host, port, path, body=None):
    # create socket s
    s.send("{} {} HTTP/1.0\r\nHost: {}\r\n".format(
        method, path, host).encode("utf8"))
    if body:
        body = body.encode("utf8")
        s.send("Content-Length: {}\r\n\r\n".format(
            len(body)).encode("utf8"))
        s.send(body)
    else:
        s.send("\r\n".encode('utf8'))
    response = s.makefile("rb").read().decode("utf8")
    s.close()
    # ...

Remember to modify all other calls to request (there are several calls in Browser.browse) to pass in the method. Once we've made the POST request, the server will send back a new web page to render. We need to lex, parse, style, and lay that page out. Once again, let's split browse into a simpler browse function that just makes the GET request and a more complex parse function that does lexing, parsing, and style, and call parse from the end of browse. With these changes we should now have a browser capable of submitting simple forms! We need to test our browser's forms functionality. Let's test with our own simple web server. This server will show a simple form with a single text entry and remember anything submitted through that form. Then, it'll show you all of the things that it remembers. Call it a guest book. (Online guest books… so 90s…) A web server is a different program from a web browser, so let's start a new file. The server will need to: I should note that the server I am building will be exceedingly simple, because this is, after all, a book on web browser engineering. Let's start by opening a socket.
Like for the browser, we need to create an internet streaming socket using TCP:

import socket
s = socket.socket(
    family=socket.AF_INET,
    type=socket.SOCK_STREAM,
    proto=socket.IPPROTO_TCP,
)

Now, instead of calling connect on this socket (which causes it to connect to some other server), we'll call bind, which opens a port and waits for other computers to connect to it. Here, the first argument to bind, the address, is set to the empty string, which means that the socket will accept connections from any other computer. The second argument is the port on your machine that you want the server to listen on. I've chosen 8000 here, since that's probably open and, being larger than 1024, doesn't require administrator privileges. But you can pick a different number if, for whatever reason, port 8000 is taken on your machine. A note about debugging servers: if a server crashes with a connection open on some port, your OS prevents the port from being reused for a few seconds. (When your process crashes, the computer on the other end of the connection won't be informed immediately; if some other process opens the same port, it could receive data meant for the old, now-dead process.) So if your server crashes, you might need to wait about a minute before you restart it, or you'll get errors about addresses being in use. Now, we tell the socket we're ready to accept connections. To actually accept those connections, we enter a loop that runs once per connection. At the top of the loop we call s.accept to wait for a new connection. That connection object is, confusingly, also a socket: it is the socket corresponding to that one connection. We know what to do with those: we read the contents and parse the HTTP message. But it's a little trickier to do this in the server than in the browser, because the browser waits for the server, and that means the server can't just read from the socket until the connection closes. Instead, we'll read from the socket line-by-line.
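The bind and listen calls themselves are elided above, so here is a sketch of the full socket setup the text describes. The port number and the shape of the accept loop follow the text; make_server_socket and serve_forever are names made up for this sketch.

```python
import socket

def make_server_socket(port=8000):
    # The same socket the text creates, plus the elided bind/listen.
    s = socket.socket(
        family=socket.AF_INET,
        type=socket.SOCK_STREAM,
        proto=socket.IPPROTO_TCP,
    )
    # '' means: accept connections addressed to any of our addresses.
    s.bind(('', port))
    # Tell the OS we're a server; the argument is how many
    # not-yet-accepted connections may queue up.
    s.listen(1)
    return s

def serve_forever(s, handle_connection):
    # One iteration per connection; accept() blocks until a browser
    # connects, then hands us a new socket for that one connection.
    while True:
        conx, addr = s.accept()
        handle_connection(conx)
```

Passing port 0 to make_server_socket asks the OS to pick any free port, which is handy when testing.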
First, we read the request line:

def handle_connection(conx):
    req = conx.makefile("rb")
    reqline = req.readline().decode('utf8')
    method, url, version = reqline.split(" ", 2)
    assert method in ["GET", "POST"]

Then we read the headers until we get to a blank line, accumulating the headers in a dictionary:

def handle_connection(conx):
    # ...
    headers = {}
    for line in req:
        line = line.decode('utf8')
        if line == '\r\n': break
        header, value = line.split(":", 1)
        headers[header.lower()] = value.strip()

Finally we read the body, but only when the Content-Length header tells us how much of it to read (that's why that header is mandatory on POST requests):

def handle_connection(conx):
    # ...
    if 'content-length' in headers:
        length = int(headers['content-length'])
        body = req.read(length).decode('utf8')
    else:
        body = None
    response = handle_request(method, url, headers, body)

Let's fill in handle_request later; it returns a string containing the resulting HTML web page. We need to send it back to the browser:

response = response.encode("utf8")
conx.send('HTTP/1.0 200 OK\r\n'.encode('utf8'))
conx.send('Content-Length: {}\r\n\r\n'.format(
    len(response)).encode('utf8'))
conx.send(response)
conx.close()

This is a bare-bones server: it doesn't check that the browser is using HTTP 1.0 to talk to it, it doesn't send back any headers at all except Content-Length, and so on. But look: it's a toy web server that talks to a toy web browser. Cut it some slack. All that's left is implementing handle_request. We want some kind of guest book, so let's create a list to store guest book entries. The handle_request function outputs a little HTML page with those entries:

def handle_request(method, url, headers, body):
    out = "<!doctype html><body>"
    for entry in ENTRIES:
        out += "<p>" + entry + "</p>"
    out += "</body>"
    return out

For now, I'm ignoring the method, the URL, the headers, and the body entirely.
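The ENTRIES list itself is elided above; assuming it starts with a single entry (which matches the "list of (one) guest book entry" below), the whole pipeline can be exercised in-process by handing handle_connection one end of a socketpair instead of a real browser connection. The driver at the bottom is not from the book.

```python
import socket

# Assumed initial contents; the book's actual value is elided above.
ENTRIES = ['Pavel was here']

def handle_request(method, url, headers, body):
    out = "<!doctype html><body>"
    for entry in ENTRIES:
        out += "<p>" + entry + "</p>"
    out += "</body>"
    return out

def handle_connection(conx):
    # Same logic as in the text: request line, headers, optional body.
    req = conx.makefile("rb")
    reqline = req.readline().decode('utf8')
    method, url, version = reqline.split(" ", 2)
    assert method in ["GET", "POST"]
    headers = {}
    for line in req:
        line = line.decode('utf8')
        if line == '\r\n': break
        header, value = line.split(":", 1)
        headers[header.lower()] = value.strip()
    if 'content-length' in headers:
        length = int(headers['content-length'])
        body = req.read(length).decode('utf8')
    else:
        body = None
    response = handle_request(method, url, headers, body).encode("utf8")
    conx.send('HTTP/1.0 200 OK\r\n'.encode('utf8'))
    conx.send('Content-Length: {}\r\n\r\n'.format(
        len(response)).encode('utf8'))
    conx.send(response)
    conx.close()

# Stand-in for a browser: one end of a socketpair plays the client.
browser_end, server_end = socket.socketpair()
browser_end.send(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
handle_connection(server_end)
reply = browser_end.makefile("rb").read().decode("utf8")
```

The reply string holds the full status line, headers, and HTML body that a real browser would receive on port 8000.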
You should be able to run this minimal core of a web server and then direct your browser to http://localhost:8000/, localhost being what your computer calls itself and 8000 being the port we chose earlier. You should see a list of (one) guest book entry. Let's now make it possible to add to the guest book. First, let's add a form to the top of the page:

out += "<form action=add method=post>"
out += "<p><input name=guest></p>"
out += "<p><button>Sign the book!</button></p>"
out += "</form>"

This form tells the browser to submit data to /add; the server needs to react to such submissions. First, we will need to undo the form-encoding:

def form_decode(body):
    params = {}
    for field in body.split("&"):
        name, value = field.split("=", 1)
        params[name] = value.replace("%20", " ")
    return params

To handle submissions, we'll want to get the guest book comment, add it to ENTRIES, and then draw the page with the new comment shown. Furthermore, handle_request will first need to figure out what kind of request this is (browsing or form submission) and then execute the relevant code. To keep this organized, let's rename handle_request to show_comments. We can have an add_entry function to handle form submissions. This frees up the handle_request function to just figure out which of these two functions to call:

def handle_request(method, url, headers, body):
    if method == 'POST':
        params = form_decode(body)
        if url == '/add':
            return add_entry(params)
        else:
            return show_comments()
    else:
        return show_comments()

Try it! You should be able to restart the server, open it in your browser, and update the guest book a few times. You should also be able to use the guest book from a real web browser. We've added an important new capability, form submission, to our web browser. It is a humble beginning, but our toy web browser is no longer just for reading pages: it is becoming an application platform. Plus, we now have a little web server for our browser to talk to. Life is better with friends! Add check boxes.
In HTML, check boxes are <input> elements with the type attribute set to checkbox. The check box is checked if it has the checked attribute set, and unchecked otherwise. Submitting check boxes in a form is a little tricky, though. A check box named foo only appears in the form encoding if it is checked. Its key is its identifier and its value is the empty string. Forms can be submitted via GET requests as well as POST requests. In GET requests, the form-encoded data is pasted onto the end of the URL, separated from the path by a question mark, like /search?q=hi; GET form submissions have no body. Implement GET form submissions. One reason to separate GET and POST requests is that GET requests are supposed to be idempotent (read-only, basically) while POST requests are assumed to change the web server state. That means that going "back" to a GET request (making the request again) is safe, while going "back" to a POST request is a bad idea. Change the browser history to record what method was used to access each URL, and the POST body if one was used. When you go back to a POST-ed URL, ask the user if they want to resubmit the form. Don't go back if they say no; if they say yes, submit a POST request with the same body as before. Right now our web server is a simple guest book. Extend it into a simple message board by adding support for topics. Each URL should correspond to a topic, and each topic should have its own list of messages. So, for example, /cooking should be a page of posts (about cooking) and comments submitted through the form on that page should only show up when you go to /cooking, not when you go to /cars. Implement proper GUI text entry. When the user clicks on an input area, store the input element to a new Browser.focus field. Clicks elsewhere should clear that field. Next, bind the <Key> event in Tkinter and use the event's char field in the event handler to determine the character the user typed.
Add that character to the value of the element in Browser.focus. If there's no focused element, don't do anything. (You can implement more features if you'd like, but it quickly gets difficult. Backspace: doable; arrow keys: hard; selection: crazy!)
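The focus bookkeeping this last exercise describes can be sketched without a GUI. The Browser.focus field matches the exercise; the MiniBrowser class, the dict-based elements, and the on_click/on_key names are stand-ins for the real Browser class and Tkinter's event handlers.

```python
# A non-GUI sketch of the focus/keypress logic from the exercise.
class MiniBrowser:
    def __init__(self):
        self.focus = None  # the input element being edited, if any

    def on_click(self, elt):
        # Clicking an input area focuses it; clicking anywhere else
        # clears the focus, as the exercise specifies.
        if elt is not None and elt.get('tag') == 'input':
            self.focus = elt
        else:
            self.focus = None

    def on_key(self, char):
        # Stand-in for the Tkinter <Key> handler: char plays the role
        # of event.char. With no focused element, do nothing.
        if self.focus is None:
            return
        self.focus['value'] = self.focus.get('value', '') + char
```

In the real browser, on_key would also call relayout() so the new character appears on screen.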
https://browser.engineering/forms.html
I did a simple bar graph to show the status of a db migration. Each bar is a computer with a db file. As I was bored waiting, I added a King Kong hanging from the longest bar.8 -18 - So.. this is what happens when programmers are given crayons and paper.. We write our robot's source code out by hand and use it as a table cloth.15 - Homework: Create a fact file and orbital molecule to the substance I've adviced you. Me: - 1min ctrl+c ctrl+v of facts/description - 2h making a 3D model in blender lol Conclusion: I like wasting time8 - - - - - - What do you guys do when you get bored at work? this is what I do: 1. Drink Water 2. Stare at the code 3. Go to Bathroom 4. repeat22 - - Me: I should try out Figma's vector tool [30 minutes pass, this happened] Pros: its nice Cons: not as intuitive as Illustrator's or Inkscape's.... AND MUH GRIDS16 - - - - - - linux ( I was just bothered of all windows blue screen posts and linux nazis comments, so please avoid to comment if you're a fan)22 - ^_...5 - does anyone else have days where they just can't be bothered to make anything, and days where they get a massive amount done?3 - - - - - - - - - I am so BORED! Like, seriously.... so so bored... work isn't fun anymore... I don't do any fun coding anymore... Meh 😔10 - Sehr bored!!!! :v I was so bored I started reading rants with the tag "bored" and found a rant of @VenomCLC writing the name of programming languages in a particular way XD so I drawed this. Enjoy it.2 - - - - - GNU/Linux trigggggereeeed! Reeeee Also, I am fucking bored, I feel like a 10 year old with these tasks.13 - - - - - - - All I've been doing at work last few days is code review. Damn, I feel bored. Just give me something to code already!3 - - - - Getting bored in quarantine.
A few days ago I wrote a script for the chrome dino to jump in PC while I jump holding my phone. Any more fun ideas guys ?12 - - Coleague: "Hey! get back to work, stop drawing weeb shit" (pictured below) Me: "Heroku's building" Colleague: "oh, carry'm seriously moving to a full-on designer now since programming stopped requiring any creativity from me recently,6 - I need to stop finishing my sprints work within the first 3-4 days of the sprint. I get so excited to get work that I loose my sense of time and space so I finish it all in one go... Now I’m gonna be bored again for 2-3 weeks 😫8 - - If an ai becomes depressed, does it encapsulate itself for better \Closure? *insert thinking dinosaur* - - Didn't have any topic for todays call with the client, so I sketched some ideas on the whiteboard.3 - I was cleaning and found this draw I made at class when I was at university (the last days of the semester). Title for the image).6 - There's the possibility that all of you just live inside my head, but it is also considerable that I exist in one of your mind.3 - Handed off a Markdown presentation to the marketing dept to add some pizzaz too---just to see what happens. My bet is that they'll redo it all in PowerPoint. Or fire me. Latter is the lessor of two evils.2 - I have so much free time! My plans all weekend have been cancelled, so I can just code all night and all day! .... So what do I write? ..... Netflix it is then? *Sigh*2 - - - anyone who can get Nintendo /PlayStation / XBox onboard to sponsor putting up consoles at every gate of an airport will have my eternal gratitude. - - - - Challenge; You will write names into the comments,. After 1 Day I will Set the highest rated Name as my actually Name here in devRant. Example Name: WorstCoderEver or Name: VisualStudio Let the Games begin25 - - Today I'm beginning my third year of Bachelor in embedded computing. And just as last year, I'm bored as fck. 
"Learning" the same stuff over and over, and wasting my time when I could be at work as a PHP developer ... FML7 - Who is bored with their job? Wish you had something more challenging other than the same drivel day in and day out? Wish you could learn new things or apply better technologies to existing solutions other than just trudging through each day?7 - Ugh I was so bored creating icons that I decided to create an app icon generator with Webpack-style config and icon generation for multiple platform + custom shape for desktop2 - - How often do you get bored (eg. on a bus) and just code something completely stupid on your phone? (pic is the result of some spaggeti I made on a bus today)7 - - !rant Will this code be compiled ? #include<iostream> using namespace std; int main(){ int 🥩=1; int 🧀=1; int 🥬=1; int 🍞=1; int 🍅=1; int 🥪=🍞+🥬+🍅+🧀+🥩; cout<<🥪; return 0; }13 - - - - My internship is extremely boring, I've worked at a company for four days and I've only had 2 tasks which took about 15 minutes each! What should I do tomorrow?8 - i really bored in this kind of class. the lecturer just reading what in the front of the class -_-. is this actually oracle standard ?13 - - -)80 - - - Took some leave recently to interview for a new job. Back to work to find that I have nothing to do. Asked for more work and got nothing. This place just gives me constant reminders of why I'm looking elsewhere4 - - - - - - - - - Stack Overflow is like a re-run of the Milgram experiment. Give a bunch of devs authority over their peers and watch the horror unfold. Think I'll nip over there and ask what the best JavaScript framework is just to stir them up.5 - - - - - - Whenever I see someone swear, for example, 'fucking managers'. I think of fucking as a verb. Which makes it 'managers that are fucking'. Pretty entertaining when you're bored or having a bad day. 😅2 - Some people browse YouTube when they're bored.. I browse GutHub. 
- Found this yesterday 😂
- !rant Wrote a literal assload of stored procedures for my DB. Got introduced to Entity Framework. My brain just exploded with rainbows and possibilities :D Not to mention it feels so much cleaner :v
- I'm starting to get bored at work. Every fucking day is the same. I receive a design. I code that design (aka I'm modifying the framework developed by me). I deliver the frontend. I think that anyone with no programming experience can do my work now.
- Does anybody have an idea what to "code" when you have too much free time? I am done with school and waiting for my university acceptance. No websites. TL;DR: project ideas?
- Is there anyone here that's been doing the job 15+ years and has hit the "I can't be arsed" stage? The only way stuff seems to be getting done at the moment is when I hand it over to someone else. I think it's due to not really being pushed with learning new things etc. Also 3 kids in the mix might be doing it as well. Anyone know how to dig my way out of this?
- I'm too comfortable with using my laptop's trackpad and never use a mouse. I want to start using a mouse. I tried, but the trackpad was more comfortable, so I abandoned it. But I really want to switch. :/
- Not a rant, just a depressive thought. I earn $135k USD a year (plus benefits) and haven't done anything useful in the last 2 months; most of the time I go to the bathroom or the coffee shop to play Disney Tsum Tsum. Feel empty inside. Good thing it's about to end.
- Still a student, working part time at a dev company, doing small work 'cause they have nothing to make me do, and today the two people I'm supposed to work with are on leave...
- Hey guys! I'm currently working on a project to implement the simplest version of TinyDB on Z1 motes 😀 And you, what project are you currently working on? 🤓
- I dislike holidays since I often get bored. Tempted to get a train back home and go to work, just for something to do.
- You probably know the liger (offspring of a tiger and a lion), and the tigon (the opposite). But how would you call the offspring of a cat and a dog? A cattog? A doccat? Debate. Yes, I'm extremely bored.
- I'm in school; the next lesson is gonna be a programming exercise with C#, soooo gonna log into my Linux home server and do some stuff Xd
- Last day at this job. Fixing a printer and then exploring Electron apps until 5 because no one's giving me anything to do.
- !rant Just dropping by to tell you guys about my unofficial app for devRant: (alpha). Anybody willing to contribute is very welcome!
- Had to take all my annual leave this month because I didn't take any during the year. I'm fucking bored. I wanna go to the office. Don't know what to do with my time. I stopped doing personal projects a while back because I never stick with one idea for long enough to finish it. My job gives me a purpose.
- Count the number of keyboards in your room (including musical keyboards). 1,.. 2,.. 3,.. 4,.. 5,.. 6,.. Damn, that is a lot.
- I resolved to spend more time with the family this month, leaving my laptop behind when visiting them for the end of the year. Now, 10 days later, we're all bored of each other and I truly, deeply, and most sincerely miss my beloved laptop. All I can do is refresh the devRant app, and fantasise while reading about others working on their machines, and posting formatted photos of code snippets. Like some weird, twisted form of Instagram addiction.
- If a kilobyte were breaking down, it would say that it is breaking BIT by BIT into bytes 😅😂
- Seriously trying not to fall asleep during compliance training at work... there's SIX HOURS worth of content each employee has to go through annually on their bday month... it's making me so slee... 😴
- Anyone use the coffee machine as a distraction when bored and between projects? Just trying out the different varieties... We have this weird "Espressochock" that's basically hot chocolate with a shot of espresso, shit is weird...
- First try with BF: +++++ [ > +++++ +++++ ++ > +++++ +++++ ++ > +++++ +++++ +++++ + > +++++ +++++ +++++ + > +++++ +++++ ++ > +++++ +++++ ++++ > +++++ +++++ +++++ + > +++++ + > +++++ +++++ > +++++ +++ <<<<<<<<<< - ] > +++++ +++ . > +++++ ++++ . > +++++ + . > ++ . > +++++ . > +++++ +++ . > ++++ . > ++ . > +++++ +++ . > +++++ ++ . <<<<<<<<<<
- Code monkey like Fritos, code monkey like Tab and Mountain Dew, code monkey very simple man, with big brown fuzzy secret heart, code monkey like you...
- I say it's about time we unleash all of our secret developer weapons and take over the world like everyone has been predicting. Who's with me?
- Was bored and wanted to try to update my old LG NAS. With a bit of help from @linux I started on the hell hole. I'm also streaming it, and it will not be done anytime soon, so come chat if you want.
- Checking out cool projects and libraries because you're bored coding the function you just rewrote 20 times from all the changes that keep being sent over.
- Finally done with my boring summer job that promised development and delivered scooping up horse poop (seriously). Can't wait to go back to school.
- I have a 5 hour layover until my next flight and all I want to do is code to kill time... the only problem is there is no reachable power outlet.
- Pondering what to call the act of copulation here on devRant. Feel free to add your suggestions! My suggestion is "child class instantiation". One could on rare occasions call it "client server"... I know, really childish of me, but I'm bored...
- I have been sick since last Sunday! I feel like my computer is staring at me telling me to get work done... but I don't even have the energy to be up for more than an hour. On the other hand I am bored like hell..?
- What are the origins of your usernames? Mine is a contraction of B(ack) Rolls from the legendary Alyssa Edwards.
- I'm done with all dev tasks. I don't know how I'm supposed to spend the next 5-6 hours :-/ Not interested in working on pet projects either.
- Hey devs, I'm working on an API for the public because I'm bored. It's a handy thing, like an IP endpoint that tells you your IP. I'm looking for some more ideas, so if you have things that are handy, tell me and I'll implement them.
- Bored of work. Anyone got any open source projects that need help? I can help in Python, SQL, C and C++. 🤷
- I would love to have the ability to make my rubber duck my companion :D Does this count as a superpower? And no, I'm not this lonely xD
- If you want to know why DevOps and SRE people have high salaries: it's because it's SO FUCKING BORING. Stay far away.
- So, I spent an hour setting up a Debian fork since I cba to make DistroLauncher work for WSL2. The end result was this: it's ultra-scuffed but hey, it works!
- The worst task I get as a fresher is to go through the code, most of which isn't properly commented upon, let alone documented.
- Okay, can someone suggest a good series? IT related, science fiction, something like that. Good actors and events.
- Just finished a small project and don't know what to code next. Any suggestions? (Web / server based applications)
- End of week. Haven't got much to do. Just browsing through YouTube and waiting for today's shift to end.
- Thought that I'd be loving the first day on a new job, but I've been waiting for hours to write some code.
- When you're just waiting around for the designer to finish their audit of your build so you can get back to work.
- How do you find ideas for cool projects? I'm bored and can't think of anything interesting. Maybe you could throw some ideas here?
- !Rant So... in the mood for a new lang... Mainly a Java developer, but have done Scala and Python lately and a bunch in the past (C, PHP, a little JS, HTML5). Thinking of .NET or Node.js ATM... I'd welcome any ideas :.
- What do you guys do when you're bored at work and you really don't have anything to do at the moment?
- Is a picture worth a thousand words? Super fun data-driven analysis based on Google's Conceptual Captions Dataset. ... #python #dataanalysis #exploratorydataanalysis #statistics #bigdata
- WSL seems really cool from what I've been toying with. WSL2 seems like it'll be even better, and the integration with Docker (another thing I'm toying with) looks interesting. As far as I can find, though, it's only on Windows Insider for now, and I don't like having telemetry on my main machine. So I spent a good chunk of my day just setting up Hyper-V, learning about nested virtualization (so Docker will work), setting up a Win10 Pro VM, and I'm now in the process of setting this up to be a virtualized dev machine (not gonna be a one-use-only system 'cause I spent way too long on this shit) and setting up Docker and WSL. I don't know much about Docker or WSL beyond just some random stuff I've learned to toy with to simplify some things I do, but maybe this will give me a cool way to actively learn more about them and maybe use them as more than just boredom toys.
- Arrived half an hour too early at an onsen, so now I'm just reading up random stuff regarding design systems. On another note, I have no idea if I'm able to communicate with the people here 🙃
- :(
- When you have to be at work because it's work, but you finish all your work in 1 day regularly, and it takes QA 2-3 days to get back to you... Massive downtime.
- I think trying to debug code is probably more annoying than figuring out the right statements to write for the project.
- The more I get bored, the more I am curious about my Google Assistant's love life. Who else is with me?
- Last semester of college! Finished my 2 theory tests today. Now just programming (C#, easy junk) next Tuesday and I'll be off the hook until assignments are due. I'd rather be at work tbh.
- *generic, flaming hyperbole about software lacking preferred features* *over-glorifying opinion on preferred software and its superior implementations of features offered by hated product* *generic user* *actually employee of software #2's company*
- I'm bored and can't sleep soooo... Bad clever code vs. good clean code. Worst / best examples.
- What's devRant got? Stories, pictures, links. All mediums are welcome.
- I'm writing this rant because I've read all the recent rants here, as well as the top ±100 Algo rants. So for the ones with the same craving for rants, here's mine.
- After like half a year of Xamarin.Android app programming in C#, I switched for a moment to VB.NET to write a web service. I don't want to go back now, send help :v
- Gosh, I'm bored! Better reinstall Ubuntu on my computer. (I don't know why I always do that when I'm bored, some sort of habit I think)
It seems to me that for some reason this didn't make its way into the official documentation, and it seems to be rather unknown, though it is already implemented in IRIS 2020.1. Thanks to @Dan Pasco I got a hint on the classes involved. I used the recommended sequence of how to use them; it is all taken directly from the Class Reference, and I just collected it to create a first overview.

%Net.DB.DataSource
This class implements the IRIS Native API for ObjectScript DataSource interface. At this time that interface consists solely of the CreateConnection() method. CreateConnection() accepts url, port, namespace, user, and pwd parameters; refer to %Net.DB.Connection for more information on these parameters. CreateConnection() returns an instance of %Net.DB.Connection.

%Net.DB.Connection
This class implements the IRIS Native API for ObjectScript Connection interface. This class should never be instantiated directly, only through %Net.DB.DataSource using the CreateConnection() method. The public interface for this class includes only those items specified here; all other members are internal.

%Net.DB.Iris
This class implements the primary set of functions of the IRIS Native API for ObjectScript. The only other members of the IRIS Native API that are not implemented by this class are implemented by %Net.DB.DataSource and %Net.DB.Connection. This class should never be instantiated directly. The proper way to instantiate this class is to invoke the CreateIris() method on an instance of %Net.DB.Connection.

Summary of methods:

BIG THANKS to @Dan Pasco for sharing this example:.
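As a hedged illustration of the sequence described above (DataSource → Connection → Iris), the following ObjectScript sketch strings the three classes together. The host, port, namespace, credentials, and the ^demo global are placeholder assumptions for illustration, not values taken from the post.

```objectscript
 // Sketch only: assumes a reachable IRIS instance; connection values below are made up.
 set host = "127.0.0.1", port = 51773, ns = "USER", user = "_SYSTEM", pwd = "SYS"

 // 1. %Net.DB.DataSource: the sole entry point, via CreateConnection()
 set connection = ##class(%Net.DB.DataSource).CreateConnection(host, port, ns, user, pwd)

 // 2. %Net.DB.Connection: never instantiated directly; used only to obtain the Iris object
 set iris = connection.CreateIris()

 // 3. %Net.DB.Iris: the Native API proper, e.g. writing and reading back a global node
 do iris.Set("hello world", "^demo", 1)
 write iris.Get("^demo", 1), !
```

Set() and Get() stand in here for the larger method set the post's "Summary of methods" refers to.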
8/31/11 r.r — Javadocing in Netbeans (rev. 2011-05-20)

This note describes how to embed HTML-style graphics within your Javadocs if you are using Netbeans. Additionally, I provide a few hints for package-level and overview-level documentation and the role of the properties option within a Javadoc'ed project. This tutorial was tested with Netbeans 6.8. [rob rucker 2010-07-13]. For Netbeans 6.9.1, the modification of the build.xml file is no longer necessary, so that part of the tutorial can be ignored. [2011-04-28]

Overview

Displaying graphics is part of the general Javadoc documentation approach that supports 'literate programming'. Our textbooks often don't emphasize documentation except for end-of-line comments embedded within code using '//' or multiline comments using '/* . . . */'. While that is o.k. for developers maybe, clients don't want to have to read code to find out how the program works. So, to find out how the program works, a higher level of documentation is called for. We need some automated support, and that's where Javadoc comes in.

The word Javadoc is the general term used to describe the process of creating and displaying Java-based computer documentation. Creating the documentation depends on an executable program, javadoc.exe, that goes through all your package files, extracts out Javadoc comments, creates corresponding formatted HTML pages, hyperlinks them, and then automatically opens your browser to display them. This utility program is part of the standard Java distribution and is always available for you to use. As you will see later in this tutorial, Javadoc not only goes through the computer code and extracts out distinguished comments, it also goes through specially named folders and files and extracts additional text and graphics that are also displayed on the HTML pages.
Procedure for Netbeans 6.8 only

(This glitch has been fixed in 6.9, so you don't need to modify the build.xml file, but you do need to do the other parts of this tutorial.) Currently, I don't know of a built-in way to embed graphics inside of Javadocs, so here is one way that does work. The problem is that the current versions of Netbeans don't automatically copy graphics from your source directory to the 'dist' directory where javadoc looks for data to insert into javadoc's HTML output. So, below is a way to do this by a small edit of the Ant build file (build.xml). Below is a File view (not a Project view) of my project. (To get a File view, go to the main menu -> Files.) The example project, IT307Ch3DeitelGradeBook, presented here, is (edited) code taken from the Deitel text chapter 3, which is being used for IT 307 and IT 408 during the 2010 sessions at WIU.

Cut to the Chase for embedding graphics (a quick overview for all versions of Netbeans)

Within the project's package folder 'demo', I created the doc-files folder (a distinguished name you must use) and copied in a graphic, sunflower.jpg. Then, in the package.html (a distinguished name for package-level documentation) I inserted an <img> callout for the graphic sunflower.jpg. After a 'Clean and Build' I ran the Javadoc program and produced the browser-displayed documentation. All these files are shown below.

End Cut to the Chase

Here is a File view of the overall project.

Detailed Steps to Embed and Display Graphics within your Javadocs

Copy graphics files into your project (all NB versions)

Go to your project, then your package (my package name is 'demo').
1. Create a new empty FOLDER inside your package. You must name it doc-files. To create such a folder, right click on your package name -> New -> Other -> Folder.
2. Copy your graphics into that folder. For example, I have put sunflower.jpg in my doc-files folder.
Actually, you can put anything you want in there, since a (relative) hypertext link will retrieve it. I would also recommend placing a UML class diagram in the folder as well, if you are able to create one.

Edit the Build File (for NB 6.8 only)

Now go to the File view in your project, find the build.xml file, then right click -> Open. This will open the xml file in the editor panel of Netbeans. Right at the bottom of the file, immediately before the ending </project> tag, insert the following code. Note: use your package name in place of my 'demo' package name if yours differs.

<target name="-pre-compile">
  <copy todir="./dist/javadoc/demo/doc-files">
    <fileset dir="./src/demo/doc-files"/>
  </copy>
</target>

FIGURE 1. File view of the IT307Ch3DeitelGradeBook project

The effect of this Ant command is to copy the content of doc-files to the distribution folder ('dist'). This is where javadoc looks for included files, and now they will be there. Save the build.xml file.

Insert callouts in your HTML documents (all NB versions)

In your package.html file, or in any of your source files' javadoc sections, insert the following standard HTML code to access your doc-files graphics content. Below is the package.html special package-level documentation file that documents my demo package and the GradeBook suite of classes. To create this HTML file, right click on your package name and navigate to find HTML File. Click that. Do a 'Clean and Build' to establish new linkages. Doing a Clean and Build is a good idea in general after you make a few code changes.

Run -> Generate Javadoc

This invokes the javadoc.exe executable that is in your jdk 1.6 distribution bin directory.

FIGURE 2. package.html package-level documentation file

Javadoc output

Running javadoc does a compile and then composes linked HTML files based on what is in your javadoc comments.
Then it automatically calls your browser (check your bottom toolbar of programs, since your browser icon may only show up there).

Configuring your project to display private variables, titles and headers

By default, javadoc will not show private variables or some of the @ parameters. So, do the following. Right click on your project name and scroll to the bottom of the options and choose Properties (see the dialog box below). Then click on Documenting and check everything, as well as entering title and header text. As an aside, clicking on the Run option allows you to enter command line arguments that are picked up in the String[] args array from the main() method.
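Source-level Javadoc comments can carry the same doc-files image callouts as package.html. As a hedged sketch only: the class, method, and image file below are illustrative inventions, not code from the Deitel project.

```java
/**
 * A minimal grade-book-style class used to show a source-level Javadoc
 * image callout. For the image to appear on the generated page, the file
 * doc-files/sunflower.jpg must sit beside this class's source (hypothetical
 * layout, matching the doc-files convention described above).
 * <p>
 * <img src="doc-files/sunflower.jpg" alt="sunflower"/>
 * </p>
 *
 * @author rob rucker
 * @version 1.0
 */
public class GradeBookDemo {

    /**
     * Returns the mean of the given grades.
     *
     * @param grades the grades to average; must be non-empty
     * @return the arithmetic mean of the grades
     */
    public static double average(int[] grades) {
        int sum = 0;
        for (int g : grades) {
            sum += g;
        }
        return (double) sum / grades.length;
    }

    public static void main(String[] args) {
        // Demonstrates that the documented method behaves as its Javadoc says.
        System.out.println(average(new int[] {80, 90, 100}));
    }
}
```

Running Generate Javadoc over such a class produces a class page with the image embedded, just as it does for package.html.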
Now you want to document individual packages using a package.html file in each package folder as well as, an overall descriptive HTML file for the collection of packages. To create this overall documentation, create a specially named HTML file called overview.html and place it in the <src> folder, as shown in the diagram below. ( You need to be in File view to see these files). <src> is the folder that holds all your packages. 8/31/11 r.r 6 Javadocing in Netbeans (rev. 2011-05-20) Configuring Javadoc to Recognize the Location of overview.html Go to your project properties dialog box and add in the following within the Additional Ja - vadoc Options. FIGURE 3. File view of the CST200Threads Project 8/31/11 r.r 7 Javadocing in Netbeans (rev. 2011-05-20) Configuring Javadoc to override the built-in CSS file javadoc.css In the additional Javadoc Options as above, include the following (notice the placement of the ‘periods’) : -stylesheetfile ${basedir}/${src.dir}/style.css And, I have named my css file style.css. Place this file in your <src> folder. Additionally, in my package and overview html files, I include a <link href=”style.css” type = “text/css”, rel = “stylesheet” /> Templates for Your Java Classes Under Tools>Templates>Java>Java Class Go to edit and replace the contents with the first section of code shown below. Then go to Java Main Class Template and replace its contents with the respective code be - low Same for Interface Template Java Templates These templates will replace the ones in your Netbeans Tools Templates files. 8/31/11 r.r 8 Javadocing in Netbeans (rev. 2011-05-20) r.r 2011-07-25 ********************Java Class Template /* ${package}.${name} by ${user} on ${date} */ <#if package?? && package != ""> package ${package}; </#if> /**${name} shows ?? . * * @author ${user} * @version 1. 
0 ${date} * @since jdk 1.6 upd 21 * @see "" */ public class ${name} { }//end ${name} ***************************END Java Class Template *****************************Java Main Class Template /* ${package}.${name} by ${user} on ${date} */ <#if package?? && package != ""> package ${package}; </#if> /**${name} shows ?? . * <p> *</p> * @author ${user} * @version 1. 0 ${date} * @since jdk 1.6 upd 21 * @see "" */ public class ${name} { public static void main(String[] args) { }//end main() 8/31/11 r.r 9 Javadocing in Netbeans (rev. 2011-05-20) }//end ${name} *********************************END Java Main Class Template ******************************* Java Interface Class Template ******************** /* ${package}.${name} (Interface class ) by ${user} on ${date} */ <#if package?? && package != ""> package ${package}; </#if> /**${name} * * @author ${user} * @version 1. 0 ${date} * @since jdk 1.6 upd 23 * @see "" */ public interface ${name} { }//end ${name} ******************************* END Java Interface Class Template Summary Documenting a program suite is considered an essential component of any client deliver - able. Prior systems that had no built in documentation facilities, made this very difficult and so not much was done without extreme effort. JavaDoc changes that. Now you know how to document individual packages with both text and graphics, as well as document the col - lection of packages making up the program suite. The resulting documentation is a major attraction for
PHOENICS User Meetings, PARIS, 2008 — Relational input

Relational data input for PHOENICS

Contents: The need for a relational input capability; The Advanced PHOENICS Input Language; The VR-Editor in protected mode; PRELUDE, the pre-pre-processor; The Gateway concept; A room-fire example; PARSOL, local grid refinement and multi-runs.

By Brian Spalding, September, 2008

Please note: This presentation has been prepared for persons who are already familiar with PHOENICS, and especially for users of its version for heating, ventilating, air-conditioning and fire simulation, FLAIR. Persons more familiar with other CFD codes might care to ask themselves: Does my code have relational data-input capabilities? If so, how do they compare with those of PHOENICS? But, if not, why not?

The need for a relational input capability

It is often required to ensure that the positions and sizes of objects conform to some rules. For example, doors must be of the right size to fit apertures in walls. Similarly, chairs must have their legs in contact with the floor, and sitting persons must be in touch with their seats. Then if one moves the aperture or the chair, one needs the door and the person to move with them.

The PHOENICS Virtual-Reality Editor does have a grouping feature which enables relative-position connections to be expressed and recorded in the Q1 file; but it does not allow members of the group to change relative size or position. Therefore, if the Q1 is to be used again with even slightly modified geometry, the user has to re-define the lost relationships all over again.
This deficiency has now been remedied in two different ways: by 1. use of the VR-Editor in protected mode, and, more fully, 2. use of the new Graphical User Interface, PRELUDE.

The historical background: the rise and temporary eclipse of the PHOENICS Input Language

In the early days of PHOENICS, data input was effected by way of assignment statements, edited into files (Q1s). The statements were expressed in terms of the first PHOENICS Input Language, known as PIL. During the following years, PIL acquired many new capabilities: logical structures, DO-loops, capabilities in respect of graphics, file-handling, etc. This advanced PIL still flourishes, and it is used with much success by experts. Advanced PIL is well able to express the required relationships between the sizes and positions of different objects in a scenario. As the number of new users of PHOENICS increased, many of whom were reluctant to learn PIL, menu-based input procedures were provided: users clicked buttons or typed characters into boxes; then the PHOENICS Satellite wrote the Q1 file for them. Most users nowadays use these menus exclusively. However, although many advanced-PIL features are exploited by the menu system, they do not appear in the Q1s which the Satellite writes. Therefore, even if expert users hand-edited relationships into a Q1 file, once the VR-Editor had read them, it recorded only their numerical implications.

Examples of the obliterating tendency of the VR-Editor

Example 1. An advanced-PIL expert might write:

REAL(width, height)              ! declarations
width=0.85; height=1.80          ! settings
> OBJ, NAME, DOOR
> OBJ, SIZE, width, 0.0, height  ! uses
> OBJ, NAME, APERTURE
> OBJ, SIZE, 0.0, width, height  ! uses

Having read the above, the VR-Editor would write simply:

> OBJ, NAME, DOOR
> OBJ, SIZE, E+00, E+00, E+00
> OBJ, NAME, APERTURE
> OBJ, SIZE, E+00, E+00, E+00

The Editor retains only the single-instance significance; but it obliterates the declarations.

Example 2. An advanced-PIL expert might write:

REAL(size1, size2)               ! declarations
size1=1.0; size2=2.0             ! settings
if (size1.gt.size2) then         ! condition
> OBJ, POSITION, 0., 0., size1   ! Make z-position of object
else                             ! equal to the larger of size1
> OBJ, POSITION, 0., 0., size2   ! and size2
endif

Having read and understood this, the VR-Editor would write simply:

> OBJ, POSITION, E+00, E+00, E+00

Once more, the Editor retains only the single-instance values; but it obliterates the declarations and condition which led to them. This can be very irritating!

Some more history; three features needing protection

1. In 1998 the PLANT feature was introduced into PHOENICS. This allowed formulae to be placed in the Q1 file which, after interpretation by the satellite, caused corresponding Fortran coding to be created, compiled and linked to the solver module.
2. Then in 2001 the In-Form feature was introduced. Its purpose and effect were the same, namely to allow users to extend the simulation capabilities of PHOENICS; but it did so without requiring Fortran coding to be created, compiled or linked into a new executable.
3. In 2007 it was recognised that a similar device could be used to protect those advanced-PIL statements (declarations, IF-statements, relationships, etc.) which the Editor should not be allowed to obliterate.

Both PLANT and In-Form statements had to be protected from the obliterating tendencies of the VR-Editor by SAVE markers placed before and after them; these warned the Editor to save the statements and place them properly in the Q1 file which it was writing. Thus came into existence the protected mode of satellite operation, the operation of which will now be illustrated.
A protected-mode example: FLAIR-library case I201

The image on the right shows instantaneous temperature distributions calculated on the assumption that a fire is burning on the floor of a partitioned room. The Q1 file has been in the PHOENICS/FLAIR input-file library for many years as i201. The 2008 version of this file will be used as an example of how the use of the protected mode of Satellite operation enables relationships to be expressed and preserved in Q1 files. In effect, all the features of advanced PIL have now become available to those users of the VR-Editor who are willing also to use its in-built text editor.

Differences between old and new i201.htm

Comparison of the old and new Q1s reveals that the latter has additional features, of which a few will now be described. The new file declares logical variables zup and fourwall and sets them thus, between SAVE1BEGIN and SAVE1END markers:

SAVE1BEGIN             ! Marks start of section to be protected
Group 1. Run Title
boolean(zup,fourwall)  ! declarations
zup=f                  ! settings
fourwall=t
TEXT( Room air flows; I201; zup=:zup:
Echo InForm settings for Group 1
Group 1. Run Title
SAVE1END               ! Marks end of section to be protected

It suffices to explain only zup. This stands for "z-direction is up" and has been introduced because the original file, contrary to current convention, used x as up.

Use of the logical variable zup to change the up direction

On the right is the first VR-Editor view when zup=f, its default value. But when, during the VR-Editor session, the Q1 file is hand-edited and zup=t is set, saving and loading the working files leads, below, to what looks like the same picture, but which, closely examined, proves to have its axes differently lettered.
Advanced-PIL lines in the Q1 have made all the changes in response to the setting of a single variable, zup. It is much harder to do this interactively!

Changing positions and sizes

Here the door and partitions have moved. This was effected by opening the Q1 for editing while still in VR-Editor mode, and then finding and changing three of the variables which are declared there, namely: doorzpos, which governs the position of the door; doorhigh, which governs its height; and prt1wide, which affects the width of the lowest (on the picture) partition. Evidently, the wall aperture has changed its position and height to accord with the door; and all the partitions have changed their sizes or positions in order to preserve the relationships which are implied by the Q1. Moreover, because they are protected by SAVE markers, the relationships cannot be obliterated by the Editor, which dutifully writes precisely what it has read.

How the relationships are expressed in the Q1

The relationships between the sizes and positions are expressed in the Q1 file by the lines printed on the right. It is easy to understand their meanings, once it is remembered that they were written for the non-conventional x-is-up, z-is-along co-ordinate system.

> OBJ, NAME, PART-1
xpos=0.0 ; ypos=0.0 ; zpos=prt1zpos
xsiz=prt1high ; ysiz=prt1wide ; zsiz=prt1thck
> OBJ, NAME, PART-2
xpos=0.0 ; ypos=prt1wide ; zpos=0.0
xsiz=prt1high ; ysiz=prt1thck ; zsiz=prt2wide
> OBJ, NAME, PART-3
xpos=0.0 ; ypos=prt1wide ; zpos=prt3zpos
xsiz=prt1high ; ysiz=prt1thck ; zsiz=prt2wide

How, it might be asked, was the switch from the non-conventional system to the conventional effected?
The following lines, appearing after the setting and before the use of the geometric attributes of each object, did all that was necessary:

if(zUP) then
  dummy=zpos; zpos=xpos; xpos=ypos; ypos=dummy
  dummy=zsiz; zsiz=xsiz; xsiz=ysiz; ysiz=dummy
endif

Such are the tricks that a little knowledge of advanced PIL allows one to play.

Introducing new logic

Suppose that it is desired, temporarily, to remove the partitions and/or the fire from the scene. This can be done very simply via the built-in editor during a VR session, as follows, namely by:
1. in imitation of what has been done for zup and fourwall, declaring new boolean variables nopart and nofire;
2. setting them = t or = f, as desired;
3. on the line above those defining partition-object attributes, inserting the lines:
   if(nopart) then
     goto nopart
   endif
4. on the line below the attribute-defining lines, inserting:
   label nopart
5. making the corresponding insertions above and below the fire-object lines.

It will then be found that, when the Editor is run, the partitions and the fire are present or absent according to the settings of the respective variables. This is another example of how the protected mode of operation allows useful variables to be declared and used, without, as hitherto, being obliterated.

Introducing interactivity

Advanced PIL allows interactive modification of settings. Thus, if the following lines are typed into the Q1:

mesg(nopart = :nopart: OK? If not, type N
readvdu(ans,char,Y)
if(:ans:.eq.N.or.:ans:.eq.n) then
  nopart=f
endif

the corresponding question will appear on the screen. Typing N (or n) will then set nopart=f; then no partitions will be present to obstruct the flow in the room.
Introducing interactivity; the satellite as a calculator

Loading core-library case 011 into the PHOENICS satellite leads to the following: evidently PHOENICS is offering to perform the role of a calculator, and it suggests some mathematical operations which its user might like to perform. Having typed the reference number of the formula into the enter-your-answer box, the user is asked to supply the values of the constants a, b and c which are of interest. Thereafter the required result appears instantly on the screen. If some other operation is preferred, the user can edit the file 011.htm appropriately, so as to provide the additional formula. Advanced PIL is worth learning!

Introducing a new object

New objects can be introduced interactively, as is well known. However, they can also be introduced by hand-editing. Thus a user might have noticed that library case i200 contains a standing man, and wish to have one in i201 also. Then he or she could simply copy the lines from the relevant Q1, perhaps modifying them slightly by use of xpos, etc.

xpos= E+00; ypos= E+00; zpos= E+00
xsiz= E-01; ysiz= E-01; zsiz= E+00
> OBJ, NAME, MAN
> OBJ, POSITION, :xpos:, :ypos:, :zpos:
> OBJ, SIZE, :xsiz:, :ysiz:, :zsiz:
> OBJ, GEOMETRY, standing
> OBJ, ROTATION24, 5
> OBJ, TYPE, PERSON
> OBJ, POSTURE, STANDING
> OBJ, FACING, +X
> OBJ, WIDTH, :ysiz:
> OBJ, DEPTH, :xsiz:
> OBJ, HEIGHT, :zsiz:
> OBJ, SOURCE-FORM, Total-heat
> OBJ, HEAT, E+01

Then, if the partitions and fire have been removed and the solver activated, the picture on the left will appear in the corner of the room. As the lines above dictate and the picture confirms, the man is a source of heat.

Introducing an array of objects

If one man can be introduced, why not many? The do-loop feature of advanced PIL makes this easy, as shown below:

do ixx=1,nmanx
do iyy=1,nmany
xpos= E+00; ypos= E+00; zpos= E+00
xsiz= E-01; ysiz= E-01; zsiz= E+00
xpos=1.5*:ixx:; ypos=2.0*:iyy:; zpos=0.0
> OBJ, NAME, MAN:ixx::iyy:
> OBJ, POSITION, :xpos:, :ypos:, :zpos:
> OBJ, SIZE, :xsiz:, :ysiz:, :zsiz:
> OBJ, WIDTH, :ysiz:
> OBJ, DEPTH, :xsiz:
> OBJ, HEIGHT, :zsiz:
> OBJ, SOURCE-FORM, Total-heat
> OBJ, HEAT, E+01
enddo
enddo

The picture above shows what results when the VR-Editor is activated. One can change the numbers of rows and columns by declaring and setting the variables nmanx and nmany.

Changing their sizes

The following further lines placed in the protected Q1:

real(shrink,factor)
factor=1/(nmanx*nmany)
shrink=factor
do ixx=1,nmanx
do iyy=1,nmany
factor=factor+shrink
xpos=1.5E+00; ypos=2.E+00; zpos=0.0E+00
xsiz=3.E-01*factor; ysiz=6.E-01*factor; zsiz=1.76*factor
xpos=1.5*:ixx:; ypos=2.0*:iyy:; zpos=0.0
…

will cause the sizes of the men to vary as shown above. Of course, innumerable formulae for changing the sizes and positions could be devised; and the Editor will not obliterate them, because they are SAVEd.

Results (for many men)

The results are quickly obtained by running the PHOENICS solver and then the VR-Viewer; and they are as expected. See below (for the equally-sized men). Warm air rises above each of them.

Summarising remarks about the use of protected-mode Q1s

Protected-mode Q1s are easier to read and to edit than those created by the VR-Editor, because they contain more understandable words and fewer hard-to-comprehend numbers. When PHOENICS users recognise what freedom the protected mode affords them, they will finally cease to feel forced always to work interactively. How to use the Advanced PHOENICS Input Language is explained in the PHOENICS Encyclopaedia.
The do-loop feature of advanced PIL makes this easy, as shown below:

  do ixx=1,nmanx
    do iyy=1,nmany
      xpos= E+00; ypos= E+00; zpos= E+00
      xsiz= E-01; ysiz= E-01; zsiz= E+00
      xpos=1.5*:ixx:; ypos=2.0*:iyy:; zpos=0.0
      > OBJ, NAME,        MAN:ixx::iyy:
      > OBJ, POSITION,    :xpos:, :ypos:, :zpos:
      > OBJ, SIZE,        :xsiz:, :ysiz:, :zsiz:
      > OBJ, WIDTH,       :ysiz:
      > OBJ, DEPTH,       :xsiz:
      > OBJ, HEIGHT,      :zsiz:
      > OBJ, SOURCE-FORM, Total-heat
      > OBJ, HEAT,        E+01
    enddo
  enddo

The picture above shows what results when the VR-Editor is activated. One can change the numbers of rows and columns by declaring and setting the variables nmanx and nmany.

19 PHOENICS User Meetings PARIS, 2008 Relational input

Changing their sizes

The following further lines placed in the protected Q1:

  real(shrink,factor)
  factor=1/(nmanx*nmany)
  shrink=factor
  do ixx=1,nmanx
    do iyy=1,nmany
      factor=factor+shrink
      xpos=1.5E+00; ypos=2.E+00; zpos=0.0E+00
      xsiz=3.E-01*factor; ysiz=6.E-01*factor; zsiz=1.76*factor
      xpos=1.5*:ixx:; ypos=2.0*:iyy:; zpos=0.0
      ...

will cause the sizes of the men to vary as shown above. Of course, innumerable formulae for changing the sizes and positions could be devised; and the Editor will not obliterate them, because they are SAVEd.

20 PHOENICS User Meetings PARIS, 2008 Relational input

Results (for many men)

The results are quickly obtained by running the PHOENICS solver, and then the VR-Viewer; and they are as expected (see below, for the equally-sized men). Warm air rises above each of them.

21 PHOENICS User Meetings PARIS, 2008 Relational input

Summarising remarks about the use of protected-mode Q1s

Protected-mode Q1s are easier to read and to edit than those created by the VR-Editor, because they contain more understandable words and fewer hard-to-comprehend numbers. When PHOENICS users recognise what freedom the protected mode affords them, they will finally cease to feel forced always to work interactively. How to use the Advanced PHOENICS Input Language is explained in the PHOENICS Encyclopaedia.
They will cease to be the prisoners of the mouse, as illustrated on the right. If these words are the names of declared PIL variables, they can express relationships between the positions and sizes of individual objects. Moreover, much more complex relationships can be expressed than have been exemplified so far; and they can also contain non-geometric variables, such as sources, initial values, material properties and time.

22 PHOENICS User Meetings PARIS, 2008 Relational input

But that's not all; there's PRELUDE!

Why we need more:
1. Although the protected mode does allow Advanced PIL to be exploited, that language has some limitations. For example, although it does allow one- or more-dimensional arrays to be employed, their arguments must always be integers. So it does not understand such constructs as xpos(door), where door is an object name.
2. The VR-Editor does not itself allow the typing of expressions into its dialogue boxes; nor does it provide any error-checking when the built-in text editor is used.

The answer? PRELUDE, the pre-pre-processor, and its Gateways.

23 PHOENICS User Meetings PARIS, 2008 Relational input

What PRELUDE provides

PRELUDE provides both more and less than the VR-Editor & Viewer. The 'more' includes:
- It can use object names as the arguments of its functions.
- Expressions can be typed into its dialogue boxes.
- The expressions can be of unlimited complexity.
- It provides error-checking and undo capabilities.
- It has a more flexible position/size/rotation language.
- It can handle many more CAD formats.
- It can launch multiple runs with systematic data-input variations.
- It can create parameterised objects by accessing Shapemaker.
- It stores its output in multiple-instance Q3 files instead of single-instance Q1s.

The 'less' includes:
- It has still only limited results-display capability, so uses the Viewer.
- It (deliberately) offers users the restricted choice of data-input possibilities which is appropriate to the Gateway in question. Gateways are the modern equivalent of Special-Purpose Programs.

24 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways

When PRELUDE is launched, it asks what is to be loaded; and it offers certain Gateways. These are quick-access routes to the particular features of PHOENICS which are likely to be useful to narrow-interest users. PHOENICS-FLAIR users are likely to want to use the HVAC Gateway; but the others shown as available here are: Beginner, for those who want to learn; VWT, for those who wish to use the Virtual Wind Tunnel; and HEATEX, for those who are concerned with heat exchangers.

25 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways; the room-fire scenario

If HVAC is selected, another menu will appear. Then selection of the item called 'roomfire' will load a scenario which has been designed to resemble closely that of library case i201, which has been discussed above.

26 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways; the room-fire scenario (continued)

On the left of the image of the scenario, PRELUDE displays the so-called object tree. At its top are PRELUDE-specific objects, such as are explained in the tutorial supplied with the Beginner's Gateway, begin1.htm. Then follow items which are familiar to PHOENICS users; specifically, the names of the solved-for and whole-field-stored variables are listed, each being treated as a virtual object having definable attributes. Below them will be seen the names of the substantial objects which constitute the scenario: fire, door, open(ing) and the partitions, walls etc. which were encountered in library case i201. Their attributes can be revealed by clicking on the object name, so as to select it, and then on the red-tick icon in the tool-bar shown below.
27 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways; attributes of objects

Here, for example, are the attributes of the more-conventional object called OPEN, the aperture in the wall which can be closed by the door. Its attributes are revealed in the white boxes by clicking on its name in the tree and then on the red tick of the top menu bar. These attributes are understandable expressions; thus its y-position is given as doorypos-doorwide. Therefore, if the door is moved, the opening will move with it, just as occurred when the scenario was described by a protected Q1, earlier in this presentation. Moreover, PRELUDE can handle more flexibly formulated expressions. Thus ypos(door)-ysize(door) would have the same significance, and obviates PIL's need to declare the non-standard variables doorypos and roomwide.

28 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways; attributes of objects (continued)

OPEN has of course other attributes, as this image shows. They are the same as would appear in a Q1 file. It has the standard FLAIR type, namely 'opening', and a pressure coefficient allowing air to enter or leave. However, PRELUDE allows more complex entry and leaving relationships to be specified than the VR-Editor can envisage. Suppose one wishes to make the inflow through the supply port at first 0.0, rising to 5 m/s after 120 seconds, when the fire starts. This can be achieved as shown here.

29 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways; how buoyancy is represented

The interaction of the force of gravity with the density variations caused by temperature changes can be introduced by way of a buoyancy object. Here 9.81 is the gravitational acceleration, rho1 is the reference density, exttem is the external temperature (15 degrees Celsius) and tem1 is the local temperature of the gas.
In the present example, however, the practice of i201 is emulated by way of a source of vertical-direction momentum, i.e. of W1, the z-direction velocity. This is treated as an attribute of the domain, because gravity acts everywhere. The formula can be recognised as expressing the Boussinesq approximation.

30 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways; modifying the buoyancy object

The Boussinesq formula is accurate only when the temperature variations are small compared with the absolute temperature. For flames, a more appropriate formula for the W1 source, to be typed into the box, is that shown below. Extrho is the external density and rho1 is the local density, which of course must be calculated appropriately. If the hot-air combustion model of library case i201 is retained, the appropriate formula is the Ideal-Gas Law, summoned in PRELUDE by a few mouse clicks.

31 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways; attributes of the fire object

The fire object is represented in the same manner as in library case i201, namely as a fixed-flux heat source of magnitude fireflux, which has been set as 70 kilowatts. However, some specialists believe that the true heat input of a fire can never be fixed; for it must fall to zero when the adiabatic combustion temperature (e.g. 2000 degrees) is reached, signifying that all the oxygen has been consumed. This is easily expressed by typing not fireflux but fireflux*(1-tem1/2000). PRELUDE allows this; and the PHOENICS solver will act accordingly. The following image shows what will appear on the screen.

32 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways; the need for new variables

The just-described device for limiting the attainable temperature is certainly an advance over the fixed-heat-flux practice.
However, to represent combustion processes more realistically, it is necessary to calculate the state of the gas mixture in more detail; and this means solving for more variables. PRELUDE allows these decisions and their consequences to be expressed in a simple manner. The variables which are solved by default in the roomfire Gateway have already been seen. Those solved are: P1, TEM1, U1, V1, W1, KE and EP; while those auxiliary variables which are only stored are: ENUT and EPKE. A more complete representation of combustion conventionally needs also: the FUEL mass fraction, a measure of the fuel/air ratio MIXF, and the enthalpy H1. Which are to be solved and which only stored as auxiliary variables depends on further decisions as to whether:
1. the mixed-is-burned presumption is true or false; and
2. the flow is or is not presumed to be without heat loss to the solid surroundings.

33 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways; adding new variables

Adding new variables is easy with PRELUDE. If the object 'variables' is selected, by clicking on its name in the tree, and then the red-tick attributes icon is clicked, an 'add a variable' opportunity is provided. Typing into the white box H1, MIXF and FUEL, and clicking OK after each, increases the contents of the object tree as shown on the right: the desired variables have been added (and RHO1 also, so that density can vary). The next question to consider is: which should be solved-for variables and which stored-only? Whatever the answer, PRELUDE provides an easy means of expressing it, as the next slide shows.

34 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways; solved-for or stored-only variables

On the right is the menu which is offered when FUEL is selected and its attributes (red-tick box) are called for. The option 'store' has been selected, because the simplest combustion model will be chosen first, embodying the mixed-is-burned presumption.
That model needs, however, that MIXF should be both stored and solved. That choice is shown on the right. More choices are also shown: namely that the whole-field method of solution is to be chosen, that zero and unity shall be the minimum and maximum values which MIXF is allowed to attain, and that its initial value shall be zero. Such settings, commonly set via the VR-Editor, can be set via PRELUDE menus; but there is no need, for Gateways are provided with acceptable defaults.

35 PHOENICS User Meetings PARIS, 2008 Relational input

PRELUDE and its Gateways; the choice of combustion model

The four combustion models which it is especially appropriate to introduce are:
1. Mixed-is-burned; adiabatic. For this, one solves only for MIXF, and stores FUEL, H1 and TEM1, which can be deduced from it.
2. Reaction-rate-limited; adiabatic. For this, one solves for MIXF and FUEL, and stores H1 and TEM1, which can be deduced from them.
3. Mixed-is-burned; non-adiabatic. For this, one solves for MIXF and TEM1, and stores FUEL and H1. FUEL can be deduced from MIXF; and so can H1, which must however now be interpreted as the enthalpy which would prevail if the flow were adiabatic. From H1 one can deduce TEM1_adiabatic, which it is also useful to store; then the TEM1 for which one solves has the significance of the actual temperature minus TEM1_adiabatic.
4. Reaction-rate-limited; non-adiabatic. This is like 3, but with FUEL also solved for and influencing H1.

36 PHOENICS User Meetings PARIS, 2008 Relational input

Relational data-input to PHOENICS: interim remarks

How PRELUDE facilitates the introduction of the various combustion models must be left to another presentation. Nevertheless, the limited aims of this presentation have been to explain and exemplify that:
- the ability to enter relational data is an indispensable requirement for a modern CFD code;
- this is now provided, to some extent, by the PHOENICS VR-Editor in protected mode, which permits the use of all the features (declarations, logic, screen-keyboard interaction, file-handling, etc.) of the years-old Advanced PHOENICS Input Language;
- but PRELUDE surpasses that provision by permitting more-complex relationships and supplying as much interactivity as is needed for each particular Gateway.

37 PHOENICS User Meetings PARIS, 2008 Relational input

Relational data-input to PHOENICS: what can be done without PRELUDE

Lest attention to PRELUDE overshadow what can be done without it, a hydrodynamic example will be discussed. This concerns flow past objects in a wind tunnel, and how its investigation is facilitated by the VR-Editor in protected mode. Here is an example of what will be shown: two spheres, one behind the other. This might be an exercise given to students, whose attention is to be focussed on just those aspects which their professor has been lecturing upon. The focussing feature makes PHOENICS a useful teaching tool.

38 PHOENICS User Meetings PARIS, 2008 Relational input

The flow-past-spheres example: Input File Library Case 807

The Q1 file can be accessed by clicking here. Like all library files, it can be loaded into the VR-Editor; then the users can make any desired change of input data. But students, like most of us, require guidance: helpful signposts (but not too many of them!). The PHOENICS Input Language allows teachers to provide these. PHOENICS specialists in a company can do the same for their design-department colleagues, who then, too, can do CFD.

39 PHOENICS User Meetings PARIS, 2008 Relational input

The flow-past-spheres example: Input File Library Case 807 (continued)

In the case-807 Q1, provision is made for:
1. Solving for only one quarter of the domain; this is allowed, by reason of symmetry, and desirable for economy and accuracy.
This means choosing between this wholly-inside situation or this quarter-inside one. PHOENICS allows both (and many more); but the Q1 author made just these two easily accessible. PIL empowers!

40 PHOENICS User Meetings PARIS, 2008 Relational input

The flow-past-spheres example: Input File Library Case 807 (continued)

How did the Q1 author do it? By declaring and setting the variable 'quarter' (and 'finegrid' and 'reyno') in the Q1, thus:

  SAVE25BEGIN  declarations and settings
  boolean(quarter,finegrid)
  real(reyno)
  quarter = t ; finegrid = t ; reyno = 40

Then, lower down in the Q1, are to be found:

  ...          ! Set positions and sizes for quarter=f
  if(quarter) then
    ...        ! Modify positions and sizes
  endif

Reminder: in PIL, t means true, f means false.

41 PHOENICS User Meetings PARIS, 2008 Relational input

The flow-past-spheres example: Setting positions and sizes

In unprotected mode, the Editor accepts sizes and positions for each object in a single scenario and records them as numbers. That's OK. In protected mode, users can create a range of scenarios and can record sizes and positions as relationships; which is much better. More freedom demands more thought: e.g. which shall be the key parameters? Which the derived ones? The case-807 author chose diam1, diam2 and gap as keys. These can be used as parameters in a systematic study of what influences the flow, the drag, the accuracy, etc.
42 PHOENICS User Meetings PARIS, 2008 Relational input

The flow-past-spheres example: Setting sizes and positions in the Q1

Here are some of the lines which the 807-author wrote in the Q1:

  declarations
  real(diam1,diam2,gap)
  real(xpos1,ypos1,zpos1,xsiz1,ysiz1,zsiz1,dist)
  real(xpos2,ypos2,zpos2,xsiz2,ysiz2,zsiz2)
  real(xposg1,yposg1,zposg1,xsizg1,ysizg1,zsizg1)
  real(xposg2,yposg2,zposg2,xsizg2,ysizg2,zsizg2)

  settings
  diam1=2.0; diam2=1.0; gap=2.44
  xulast=2.0*diam1; yvlast=2.0*diam1; zwlast=5.0*diam1
  xpos1=diam1*0.5; ypos1=diam1*0.5; zpos1=1.11*diam1
  xsiz1=diam1; ysiz1=diam1; zsiz1=diam1
  etc.

Tedious and mechanical! But written once only. Thereafter innumerable runs result from changing one or more of these numbers. Systematic studies can begin.

43 PHOENICS User Meetings PARIS, 2008 Relational input

The flow-past-spheres example: A few results: the effect of finegrid=t

It is interesting to compare the solutions with and without the fine grids; first for the full domain. The solution without the fine grid is shown here. Although qualitatively similar, the differences show that the finer grid was indeed needed.

44 PHOENICS User Meetings PARIS, 2008 Relational input

The flow-past-spheres example: A few results: the effect of quarter=t

And now the same comparison for the quarter domain. The solution without the fine grid is shown here. Although the maximum velocities are closer, the contours show at least a display flaw at the base.

45 PHOENICS User Meetings PARIS, 2008 Relational input

The flow-past-spheres example: A closer look at the solution

In all these computations, the PHOENICS variable PARSOL = t. This means that the mass- and momentum-conservation equations for the cut cells at the sphere surface were given special treatment. The smoothness of contours there needs to be examined. The contours of pressure are shown here. Their smoothness is very good, despite the fact that the grid cells are not extremely small.
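The 'etc.' above elides the derived quantities. Under the parameterisation just described, the settings for the second sphere would take a form like the following (illustrative only: the variable names follow the declarations above, but the exact formulae are an assumption, not copied from case 807):

```
  settings for the second sphere (illustrative)
  xpos2=xpos1; ypos2=ypos1          ! the spheres share the same axis
  zpos2=zpos1+diam1+gap             ! assuming 'gap' is the surface-to-surface spacing
  xsiz2=diam2; ysiz2=diam2; zsiz2=diam2
```

Changing diam2 or gap then repositions and resizes the second sphere automatically, which is the whole point of the relational style.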
46 PHOENICS User Meetings PARIS, 2008 Relational input

The flow-past-spheres example: A closer look at the solution (continued)

The same is true for any of the computed variables. Here are shown contours for stagnation pressure, y-direction velocity and x-direction velocity. All are as smooth as can reasonably be desired. PARSOL, because it completely obviates the tiresome grid-generation problems which beset other codes, is regarded by users of PHOENICS as one of its best features.

47 PHOENICS User Meetings PARIS, 2008 Relational input

The flow-past-spheres example: remarks about the parametric study

This simple study would have been difficult without use of the parameterised Q1, now permitted by the protected mode. Interactive use of the VR-Editor is OK for making single runs, but research requires parameterised Q1s. Users' labour can be still further reduced by using the PHOENICS multi-run capability (i.e. RUN(1, any number)), by introducing into the Q1 such sequences as:

  if(irun.eq.1) then
    quarter = t
    finegrid = f
  endif
  if(irun.eq.2) then
    quarter = t
    finegrid = t
  endif
  etcetera

In this way, PHOENICS can be set to work for a complete weekend, and to present comprehensive results on Monday morning. Reynolds number, diameter ratio, grid-refinement factors, iteration numbers and other influences can be varied run-by-run.

48 PHOENICS User Meetings PARIS, 2008 Relational input

Relational data-input to PHOENICS; concluding remarks

In 2008, significant advances have been made in the ability of PHOENICS to accept relations, rather than single settings, as input. Two developments have effected this:
1. the protected mode of satellite operation, and
2. the pre-pre-processor PRELUDE.

Their advantage is similar in nature to that of the Excel spreadsheet over the hand-calculator. Teachers can use the facility to focus the attention of their students. Parameterised Q1s can be used by those without time or patience to learn to interact with the VR-Editor.
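Further blocks of the same pattern can vary the other parameters run-by-run. For example (an illustrative extension in the same style, not taken from case 807):

```
  if(irun.eq.3) then
    quarter = t
    finegrid = t
    reyno = 100     ! vary the Reynolds number too
  endif
```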
Research-minded users of PHOENICS can now proceed faster.

The end

49 PHOENICS User Meetings PARIS, 2008 Relational input

How to learn about PRELUDE and its Gateways

The top menu bar of PRELUDE contains a help button. Clicking on it will evoke a drop-down menu containing the names of the PRELUDE tutorials which are present on the machine which is being used; these will probably include:
- begin1, a long tutorial which explains all the main features of PRELUDE;
- vwt1, which explains how to use the Virtual Wind Tunnel; and
- oneroom, which concerns simulation of the flow of heat and air in a ventilated room.

Each tutorial is contained in an html file which users are invited to read by means of a browser in one window while PRELUDE is open in another window. There is also a document regarding PRELUDE, its purpose and its capabilities, which can be viewed here.
[ Update: the Freebase type provider is now available as part of the FSharp.Data NuGet package and library. The namespace has changed from "Samples.DataStore.Freebase" to "FSharp.Data". Find out more about F# at fsharp.org. ]

The F# 3.0 Freebase Type Provider Sample includes some support for query translation from F# 3.0 LINQ queries to MQL. Some sample queries are here. You can write queries in F# 3.0 with auto-completion and strong typing, and still execute efficiently on the server, at least for those queries translated by the sample. Here are some details of query translation in the sample at the time of writing:

- Single, non-nested 'for' loops over a collection are translated, e.g. for book in data.``Arts and Entertainment``.Books.Books do
- Date values are not handled very well – they are translated as strings, but you can't, for example, compare by date.
- When you select compound objects in a query, the object will use "delayed loading". This may lead to later implicit server requests when you access properties on those objects. These may "cascade".
- There are many, many things which can't be represented in queries. In general, try to stick to a known, working query format, or use client-side processing for anything beyond samples which you know work.
Could someone tell me what is wrong with the boolean operators in this program? Thanks!!

Code:
#include <iostream>
#include "test2.h"

using namespace std;

int main()
{
    double balance, chargeAmt, c, payAmt, bal, C, P, D, Q;
    string trans;

    cout << "Enter beginning balance. ";
    cout << endl;
    cin >> balance;
    cout << endl;
    if (cin.fail())
    {
        cout << "ERROR! Bad Input!!";
        return 1;
    }
    cout << "How can we be of service today?";
    cout << endl;
    do
    {
        cout << "Enter transaction: (C, P, D, Q) ";
        cin >> trans;
        cout << endl;
    } while (trans != Q);
    if (trans == C)
    {
        cout << "Amount of charge: ";
        cin >> chargeAmt;
        cout << endl;
        bal = charge_amount(balance, chargeAmt);
        cout << "Your new balance is " << bal;
    }
    else if (trans == P)
    {
        cout << "Payment amount: ";
        cin >> payAmt;
        cout << endl;
        bal = pay_amount(balance, payAmt);
        cout << "Your new balance is " << bal;
    }
    else if (trans == D)
    {
        cout << "Your balance is " << bal;
        cout << endl;
    }
    else if (trans == Q)
    {
        cout << "Good Bye!";
        cout << endl;
    }
    else
        cout << "Enter valid input!";

    return 0;
}
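For context: the likely culprit is that trans is a std::string while C, P, D and Q are uninitialised doubles, so comparisons like trans != Q compare a string against a number, which is not what the menu logic intends (and will not even compile). The fix is to compare against string literals such as "Q". A minimal sketch of that idea, with the branching reduced to a small helper (the function name and return values are mine, not from the original post):

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: classify a menu entry the way the post's
// if/else chain intends, comparing against string literals rather
// than the uninitialised double variables C, P, D and Q.
std::string classify(const std::string& trans)
{
    if (trans == "C") return "charge";
    if (trans == "P") return "payment";
    if (trans == "D") return "display";
    if (trans == "Q") return "quit";
    return "invalid";
}
```

The loop condition in the original would then read `while (trans != "Q");`, and the unused double variables C, P, D and Q can be deleted.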
In this post I describe a pattern that lets you run arbitrary commands for your application in your Kubernetes cluster, by having a pod available for you to exec into. You can use this pod to perform ad-hoc maintenance, administration, and queries: all those tasks that you can't easily schedule, because you don't know when you'll need to run them, or because that just doesn't make sense. I'll describe the issue and the sort of tasks I'm thinking about, and discuss why this becomes tricky when you run your applications in Kubernetes. I'll then show the approach I use to make this possible: a long-running deployment of a pod containing a CLI tool that allows running the commands.

Background: running ad-hoc queries and tasks

One of the tenets of DevOps and general declarative approaches to software deployment is that you try to automate as much as possible. You don't want to have to run database migrations manually as part of a deploy, or to have to remember to perform a specific sequence of operations when deploying your code. That should all be automated: ideally a deployment should, at most, require clicking a "deploy now" button.

Unfortunately, while we can certainly strive for that, we can't always achieve it. Bugs happen and issues arise that sometimes require some degree of manual intervention. Maybe a cache gets out of sync somehow and needs to be cleared. Perhaps a bug prevented some data being indexed in your ElasticSearch cluster, and you need to "manually" index it. Or maybe you want to test some backend functionality, without worrying about the UI.

If you know these tasks are going to be necessary, then you should absolutely try and run them automatically when they're going to be needed. For example, if you update your application to index more data in ElasticSearch, then you should automatically do that re-indexing when your application deploys. We run these tasks as part of the "migrations" job I described in previous posts.
Migrations don't just have to be database migrations!

If you don't know that the tasks are going to be necessary, then having a simple method to run the tasks is very useful. One option is to have an "admin" screen in your application somewhere that lets you simply and easily run the tasks. There's pros and cons to this approach. On the plus side, it provides an easy mechanism for running the tasks, and uses the same authentication and authorization mechanisms built into your application. The downside is that you're exposing various potentially destructive operations via an endpoint, which may require more privileges than the rest of your application. There's also the maintenance overhead of exposing and wiring up those tasks in the UI.

An alternative approach is the classic "system administrator" approach: a command line tool that can run the administrative tasks. The problem with this in the Kubernetes setting is: where do you run the task? The tool likely needs access to the same resources as your production application, so unless you want severe headaches trying to duplicate configuration and access secrets from multiple places, you really need to run the tasks from inside the cluster.

Our solution: a long-running deployment of a CLI tool

In a previous post, I mentioned that I like to create a "CLI" application for each of my main applications. This tool is used to run database migrations, but it also allows you to run any other administrative commands you might need.

The overall solution we've settled on is to create a special "CLI exec host" pod in a deployment, as part of your application release. This pod contains our application's CLI tool for running various administration commands. The pod's job is just to sit there, doing nothing, until we need to run a command. When we need to run a command, we exec into the container, and run the command.

Kubernetes allows you to open a shell in a running container by using exec (short for executing a command).
If you have kubectl configured, you can do this from the command line using something like:

  kubectl exec --stdin --tty test-app-cli-host -- /bin/bash

Personally, I prefer to exec into a container using the Kubernetes dashboard. You can exec into any running container by selecting the pod and clicking the exec symbol. This gives you a command prompt with the container's shell (which may be the bash shell or the ash shell, for example). From here you can run any commands you like. In the example above I ran the ls command.

Be aware: if you exec into one of your "application" pods, then you could impact your running applications. Obviously that could be Bad™.

At this point you have a shell in a pod in your Kubernetes cluster, so you can run any administrative commands you need to. Obviously you need to be aware of the security implications here: depending on how locked down your cluster is, this may not be something you can or want to do, but it's worked well enough for us!

Creating the CLI exec-host container

We want to deploy the CLI tool inside the exec-host pod as part of our application's standard deployment, so we'll need a Docker container and a Helm chart for it. As in my previous posts, I'll assume that you have already created a .NET Core command-line tool for running commands. In this section I show the Dockerfile I use and the Helm chart for deploying it.

The tricky part in setting this up is that we want to have a container that does nothing, but isn't killed. We don't want Kubernetes to run our CLI tool; we want to do that manually ourselves when we exec into the container, so we can choose the right command etc. But the container has to run something, otherwise it will exit, and we won't have anything to exec into. To achieve that, I use a simple bash script.

The keep_alive.sh script

The following script is based on a StackOverflow answer (shocker, I know). It looks a bit complicated, but this script essentially just sleeps for 86,400 seconds (1 day).
The extra code ensures there's no delay when Kubernetes tries to kill the pod (for example when we're upgrading a chart). See the StackOverflow answer for a more detailed explanation.

#!/bin/sh
die_func() {
    echo "Terminating"
    exit 1
}
trap die_func TERM
echo "Sleeping..."
# restarts once a day
sleep 86400 &
wait

We'll use this script to keep a pod alive in our cluster so that we can exec into it, while using very few resources (typically a couple of MB of memory and 0 CPU!).

The CLI exec-host Dockerfile

For the most part, the Dockerfile for the CLI tool is a standard .NET Core application. The interesting part is the runtime container, so I've used a very basic builder Dockerfile that just does everything in one step.

Don't copy the builder part of this Dockerfile (everything before the ###); instead use an approach that uses layer caching.

# Build standard .NET Core application
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS builder
WORKDIR /app

# WARNING: This is completely unoptimised!
COPY . .

# Publish the CLI project to the path /app/output/cli
RUN dotnet publish ./src/TestApp.Cli -c Release -o /app/output/cli

###################
# Runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine

# Copy the background script that keeps the pod alive
WORKDIR /background
COPY ./keep_alive.sh ./keep_alive.sh

# Ensure the file is executable
RUN chmod +x /background/keep_alive.sh

# Set the command that runs when the pod is started
CMD "/background/keep_alive.sh"

WORKDIR /app

# Copy the CLI tool into this container
COPY --from=builder ./app/output/cli .

This Dockerfile does a few things:

- Builds the CLI project in a completely unoptimised way.
- Uses the ASP.NET Core runtime image as the base deployment container. If your CLI tool doesn't need the ASP.NET Core runtime, you could use the base .NET Core runtime image instead.
- Copies the keep_alive.sh script from the previous section into the background folder.
- Sets the container CMD to run the keep_alive.sh script.
When the container is run, the script will be executed.
- Changes the working directory to /app and copies the CLI tool into the container.

We'll add this Dockerfile to our build process, and tag it as andrewlock/my-test-cli-exec-host. Now that we have a Docker image, we need to create a chart to deploy the tool with our main application.

Creating a chart for the cli-exec-host

The only thing we need for our exec-host chart is a deployment.yaml to create a deployment. We don't need a service (other apps shouldn't be able to call the pod) and we don't need an ingress (we're not exposing any ports externally to the cluster). All we need to do is ensure that a pod is available if we need it.

The deployment.yaml shown below is based on the default template created when you call helm create test-app-cli-exec-host. We don't need any readiness/liveness probes, as we're just using the keep_alive.sh script to keep the pod running, so I removed that section. I added an additional section for injecting environment variables, as we will want our CLI tool to have the same configuration as our other applications.

Don't worry about the details of this YAML too much. There's a lot of boilerplate in there and a lot of features we haven't touched on that will go unused unless you explicitly configure them. I only decided to show the whole chart for completeness.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "test-app-cli-exec-host.fullname" . }}
  labels:
    {{- include "test-app-cli-exec-host.labels" . | nindent 4 }}
spec:
  replicas: 1
  selector:
    matchLabels:
      {{- include "test-app-cli-exec-host.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "test-app-cli-exec-host.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "test-app-cli-exec-host.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- $env := merge (.Values.env | default dict) (.Values.global.env | default dict) -}}
            {{ range $k, $v := $env }}
            - name: {{ $k | quote }}
              value: {{ $v | quote }}
            {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```

We'll need to add a section to the top-level chart's values.yaml to define the Docker image to use, and optionally override any other settings:

```yaml
test-app-cli-exec-host:
  image:
    repository: andrewlock/my-test-cli-exec-host
    pullPolicy: IfNotPresent
    tag: ""
  serviceAccount:
    create: false
```

Our overall Helm chart has now grown to 4 sub-charts: the two "main" applications (the API and message handler service), the CLI job for running database migrations automatically, and the CLI exec-host chart for running ad-hoc commands.

All that's left to do is to take our exec-host chart for a spin!

Testing it out

We can install the chart using a command like the following:

```bash
helm upgrade --install my-test-app-release . \
  --namespace=local \
  --set test-app-cli.image.tag="0.1.1" \
  --set test-app-cli-exec-host.image.tag="0.1.1" \
  --set test-app-api.image.tag="0.1.1" \
  --set test-app-service.image.tag="0.1.1" \
  --debug
```

After installing the chart, you should see the exec-host deployment and pod in your cluster, sat there happily doing nothing.

We can now exec into the container. You could use kubectl if you're command-line-inclined, but I prefer to use the dashboard to click exec to get a shell.
I'm normally only trying to run a command or two, so it's good enough! As you can see in the image below, we have access to our CLI tool from here, and can run our ad-hoc commands using, for example, dotnet TestApp.Cli.dll say-hello.

Ignore the error at the top of the shell. I think that's because Kubernetes tries to open a Bash shell specifically, but as this is an Alpine container, it uses the Ash shell instead.

And with that, we can now run ad-hoc commands in the context of our cluster whenever we need to. Obviously we don't want to make a habit of that, but having the option is always useful!

Summary

In this post I showed how to create a CLI exec-host to run ad-hoc commands in your Kubernetes cluster by creating a deployment of a pod that contains a CLI tool. The pod contains a script that keeps the container running without using any resources. You can then exec into the pod, and run any necessary commands.
https://andrewlock.net/deploying-asp-net-core-applications-to-kubernetes-part-10-creating-an-exec-host-deployment-for-running-one-off-commands/
Simple Access Control for CakePHP3

Author Lorna Mitchell takes us through an implementation for Access Control that can be used in CakePHP3. Read on to find out more!

The application has about 50 users; it's a small, back-office application. Users are in the users table and they can have one or more roles; the relationships between the two are in users_roles.

Do the Initial Setup

To begin with, I baked the models for the users and the roles. I introduced the linking table by adding the relationship into the \Model\Table\UsersTable::initialize() method. There are some great docs on doing this, but for this example I just needed:

```php
$this->belongsToMany('Roles', [
    'foreignKey' => 'user_id',
    'targetForeignKey' => 'role_id',
    'joinTable' => 'users_roles'
]);
```

Then I went ahead and baked the controllers and templates. Since I'll be putting the names of the roles into my access control code, I disabled the ability to add and delete roles or change their names through the web interface. To keep those changes in step with the code that relates to them, we'll make these changes using a database patch. A minor point, but one that might be handy if you're using a similar approach to me.

This approach doesn't do anything special with authentication as it uses the standard approaches for logging people in (some good examples in the CakePHP tutorials). However, authorization is what controls the access to individual controllers or actions, and this is where it gets interesting.

Build the Authorization Piece

To work out which roles have access to which controller actions, CakePHP will call the authorize() method of the class that I configure. This call includes the currently logged in user and the request object, so we can use these two pieces of information together and decide who can see what.
When the user is logged in, I'm storing their record with the roles hydrated into the object. This means that we're not hitting the database on every web request to look up what roles the user has (I'd also like to use this same method at some point to work out if I should be displaying navigation to a given user, so it becomes potentially several database hits at that point rather than just one as it is in this example).

First, I configure the Auth component in the Controller\AppController::initialize() method by setting up something like this (you probably want the Flash component as well while you're there):

```php
$this->loadComponent('Auth', [
    'authenticate' => [
        'Form' => [
            'fields' => [
                'username' => 'email',
                'password' => 'password'
            ],
        ]
    ],
    'loginAction' => [
        'controller' => 'Users',
        'action' => 'login'
    ],
    'authorize' => ['Example'],
    'unauthorizedRedirect' => '/users/login',
]);
```

With this in place, I have a login form where the user logs in with their email and password. It's important to set the loginAction when configuring the Auth component so that CakePHP knows that unauthenticated users should be able to see that page... it's really hard to log in if you don't have access to the login form!

The authorize setting here means that CakePHP will call Auth\ExampleAuthorize::authorize() before allowing users access to anything. All we need our function to do is return true or false—in fact a good way to get started is to do the configuration, create the class, and get the method to return true. This lets you know that your configuration is correct and you can start working on the actual logic! The documentation covers everything you could need but sometimes real code is easier to look at.
Here's my actual auth class:

```php
<?php
namespace App\Auth;

use Cake\Auth\BaseAuthorize;
use Cake\Network\Request;
use App\Model\Entity\User;

class ExampleAuthorize extends BaseAuthorize
{
    public function authorize($user, Request $request)
    {
        $this->_user = $user;

        // assume false
        $authorized = false;

        // admins see everything, return immediately
        if ($this->userHasRole('admin')) {
            return true;
        }

        switch ($request->params['controller']) {
            case 'Users':
                // check the action param to control for a specific controller action
                if ($request->params['action'] == 'logout') {
                    $authorized = true; // everyone can log out
                }
                break;
            case 'Money':
                // you need the finance role to see this entire controller/section
                if ($this->userHasRole('finance')) {
                    return true;
                }
            default:
                // by default, all logged in users have access to everything
                if (!empty($user)) {
                    $authorized = true;
                }
                break;
        }

        return $authorized;
    }

    protected function userHasRole($role)
    {
        if (isset($this->_user['roles']) && in_array($role, $this->_user['roles'])) {
            return true;
        }
        return false;
    }
}
```

There are a few things to look at here. For simple starters, look at the userHasRole helper method—this is just to let me quickly look up if this user has this role. By separating it out, the flow of the actual logic is a bit more readable—and, if we ever change how roles work, it only needs to change in one place!

The main method starts by assuming that the user does not have access, and by storing the user into a property (to be used by the helper method). If you're an admin, you always have access, so you can really quickly return true if that's the case. If not, then I've tried to include examples of limiting access by whole controller and by specific action (everyone should be able to log out, if only to avoid error messages when someone tries to click on "log out" after their session has expired).
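The same deny-by-default pattern is easy to sketch outside of CakePHP. Here's a hypothetical, framework-free Python version of the logic; the user dict and the "controller"/"action" parameters are illustrative stand-ins, not CakePHP's API:

```python
# Hypothetical sketch of the deny-by-default role check:
# start unauthorized, and grant access only when a rule explicitly allows it.

def user_has_role(user, role):
    """Check a role list hydrated onto the user record at login time."""
    return role in user.get("roles", [])

def authorize(user, controller, action):
    # assume false
    authorized = False

    # admins see everything
    if user_has_role(user, "admin"):
        return True

    if controller == "Users":
        # everyone can log out
        if action == "logout":
            authorized = True
    elif controller == "Money":
        # the finance role is required for this whole section
        authorized = user_has_role(user, "finance")
    else:
        # by default, any logged-in user has access
        authorized = bool(user)

    return authorized
```

Note that in this sketch the "Money" section strictly requires the finance role; the defensive stance is the same as in the class above — nothing is granted unless a rule says so.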
In this system, we want most things to be accessible to everyone, so that's the default; there are just a few specific instances where a particular role will be needed for specific sections. Notice the defensive approach: you don't have access unless the logic finds a reason to give it to you!

Going Further

This works well for my application, particularly because users can have multiple roles and the admins themselves can manage who has what. Since we have very simple requirements, the logic is just held in code; it's easy to follow and understand, but it means that only the developers of the system can change what each role can access, and therefore, as discussed, roles are managed by database patch so that the roles in the database will match the ones the code expects. A more complex system would probably need per-role, per-action permissions stored in the database to determine who has what. This would have the advantage of being maintainable without a code change, if that's important in your situation.

I also mentioned that I'd like to use the permissions system to check if a navigation link should be displayed. CakePHP doesn't offer this by default but I think it's something I'd like to add to my own application over time.

Hopefully this example serves as a basis for someone implementing ACL in CakePHP3. I found that there aren't a lot of examples, so here's at least one that we can refer to—I had a lot of great support from the #cakephp IRC channel on freenode as well, so that's a good place to go if you still have questions.

Published at DZone with permission of Lorna Mitchell, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/simple-access-control-for-cakephp3
The trick of this problem is to keep two elements during the binary search; that way, the indexes mid-1 and mid+1 will always be valid. To do so, we only start the binary search when start + 1 < end, to make sure we have at least 3 numbers to begin with. As for the if/else branches, a trick that works for me is to imagine that mid falls at a bottom point. Then if nums[mid] < nums[mid-1], we need to search left; if nums[mid] < nums[mid+1], search right. Otherwise, mid is a peak. I hope it helps.
https://discuss.leetcode.com/topic/38172/clean-java-binary-search-solution
CC-MAIN-2018-05
refinedweb
160
87.96
Inheritance is an integral part of Java (and all OOP languages). It turns out that you're always doing inheritance when you create a class, because unless you explicitly inherit from some other class, you implicitly inherit from Java's standard root class Object.

The syntax for composition is obvious, but to perform inheritance you place the extends keyword, followed by the name of the base class, before the opening brace of the class body; the new class then automatically gets all the fields and methods in the base class. Here's an example:

```java
//: c06:Detergent.java
// Inheritance syntax & properties.
import com.bruceeckel.simpletest.*;

class Cleanser {
  protected static Test monitor = new Test();
  private String s = new String("Cleanser");
  public void append(String a) { s += a; }
  public void dilute() { append(" dilute()"); }
  public void apply() { append(" apply()"); }
  public void scrub() { append(" scrub()"); }
  public String toString() { return s; }
  public static void main(String[] args) {
    Cleanser x = new Cleanser();
    x.dilute(); x.apply(); x.scrub();
    System.out.println(x);
    monitor.expect(new String[] {
      "Cleanser dilute() apply() scrub()"
    });
  }
}

public class Detergent extends Cleanser {
  // Change a method:
  public void scrub() {
    append(" Detergent.scrub()");
    super.scrub(); // Call base-class version
  }
  // Add a method to the interface:
  public void foam() { append(" foam()"); }
  // Test the new class:
  public static void main(String[] args) {
    Detergent x = new Detergent();
    x.dilute(); x.apply(); x.scrub(); x.foam();
    System.out.println(x);
    System.out.println("Testing base class:");
    monitor.expect(new String[] {
      "Cleanser dilute() apply() " +
      "Detergent.scrub() scrub() foam()",
      "Testing base class:",
    });
    Cleanser.main(args);
  }
} ///:~
```

This demonstrates a number of features. First, in the Cleanser append( ) method, Strings are concatenated to s using the += operator, which is one of the operators (along with +) that the Java designers overloaded to work with Strings.

Second, both Cleanser and Detergent contain a main( ) method. You can create a main( ) for each one of your classes, and it's often recommended to code this way so that your test code is wrapped in with the class. Even if you have a lot of classes in a program, only the main( ) for the class invoked on the command line will be called. (As long as main( ) is public, it doesn't matter whether the class that it's part of is public.) You don't need to remove the main( ) when you're finished testing; you can leave it in for later testing.
Here, you can see that Detergent.main( ) calls Cleanser.main( ) explicitly, passing it the same arguments from the command line (however, you could pass it any String array).

It's important that all of the methods in Cleanser are public. Remember that if you leave off any access specifier, the member defaults to package access, which allows access only to package members. Thus, within this package, anyone could use those methods if there were no access specifier. Detergent would have no trouble, for example. However, if a class from some other package were to inherit from Cleanser, it could access only public members. So to plan for inheritance, as a general rule make all fields private and all methods public. (protected members also allow access by derived classes; you'll learn about this later.) Of course, in particular cases you must make adjustments, but this is a useful guideline.

Note that Cleanser has a set of methods in its interface: append( ), dilute( ), apply( ), scrub( ), and toString( ). Because Detergent is derived from Cleanser (via the extends keyword), it automatically gets all these methods in its interface, even though you don't see them all explicitly defined in Detergent. You can think of inheritance, then, as reusing the class.

As seen in scrub( ), it's possible to take a method that's been defined in the base class and modify it. In this case, you might want to call the method from the base class inside the new version. But inside scrub( ), you cannot simply call scrub( ), since that would produce a recursive call, which isn't what you want. To solve this problem, Java has the keyword super, which refers to the superclass that the current class has been inherited from. Thus the expression super.scrub( ) calls the base-class version of the method scrub( ).

When inheriting, you're not restricted to using the methods of the base class. You can also add new methods to the derived class exactly the way you put any method in a class: just define it.
The method foam( ) is an example of this. In Detergent.main( ) you can see that for a Detergent object, you can call all the methods that are available in Cleanser as well as in Detergent (i.e., foam( )).
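For comparison, the same override-and-extend pattern can be sketched in Python, where the super() built-in plays the role of Java's super keyword (a hypothetical port of the example above, not from the book):

```python
class Cleanser:
    """Python analogue of the Java Cleanser class."""
    def __init__(self):
        self.s = "Cleanser"
    def append(self, a):
        self.s += a
    def dilute(self):
        self.append(" dilute()")
    def apply(self):
        self.append(" apply()")
    def scrub(self):
        self.append(" scrub()")
    def __str__(self):
        return self.s

class Detergent(Cleanser):
    # Change a method:
    def scrub(self):
        self.append(" Detergent.scrub()")
        super().scrub()  # call the base-class version
    # Add a method to the interface:
    def foam(self):
        self.append(" foam()")
```

Calling dilute(), apply(), scrub(), and foam() on a Detergent builds the same trace string as the Java version's expected output.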
http://www.linuxtopia.org/online_books/programming_books/thinking_in_java/TIJ308_001.htm
Access your Github Account from Pythonista

If you have a Github account and would like to access it from Pythonista, here's a script to download and install PyGithub: It creates 'github' and 'dateutil' directories in Pythonista's Documents directory which you can import as modules. These aren't visible through the UI but can be seen, for example, by running @wrenoud's fantastic file browser (see). An example use of the PyGithub module (which authenticates a Github user and lists the files in their public and private repositories) can be found here: PyGithub provides a full implementation of the Github v3 API. For browsing operations the API is fairly self-explanatory, but for committing changes it is somewhat obfuscated ( is the best resource for help with understanding this). In principle, though, a fully featured Github client could be built on top of this implementation.

Some caveats:

- I've patched the original PyGithub to use dateutil rather than strptime as I get errors the second time strptime is called (a quick google suggests that this is a fairly common problem). It's this patched version which is downloaded and installed, although you can try the original by reading the comments embedded in the script if you'd like. This is also the reason that dateutil is installed - the original PyGithub library has no dependency on it.
- Beyond the simple example script I've not validated that all Github functionality works.

I have added to this library to provide some easier-to-use (and useful) functionality. There are still a large number of issues (see below) but as I am already finding the limited functionality of use, I thought I'd share it.

Download and install the library from

Once installed, to clone a github repository, simply run the following script. It will prompt for your github credentials and the name of the repository you want to clone.

<pre>
import githubista
githubista.clone()
</pre>

The repository is cloned to a directory with the same name.
To access the files within, you will need to use a file browser tool such as the one linked to above. Once you've made changes to a script file you can commit the changes by running the following script from the actions menu:

<pre>
import githubista
githubista.commit()
</pre>

This script will ask for credentials again (although it remembers the previously entered ones) and also asks for the commit message.

Issues:

- Only the master branch is cloned - you cannot currently choose a different branch
- Only a single file can be committed at a time
- Only a flat structure is supported - no directories can exist within a repository
- No version history is currently saved to Pythonista - this will likely need to change as this progresses
- Only python (.py) files can be edited (and therefore changed and committed)

If you'd like to contribute to this library, please fork on github:
https://forum.omz-software.com/topic/178/access-your-github-account-from-pythonista
CC-MAIN-2017-34
refinedweb
568
57.4
Lab sessions Tue Oct 01 to Thu Oct 03

Lab written by Julie Zelenski, with modifications by Nick Troccoli

Lab Overview

Your weekly lab is a chance to experiment and explore, ask and answer questions, and get hands-on practice in a supported environment. We provide a set of lab exercises that revisit topics from recent lectures/readings and prepare you to succeed at the upcoming assignment.

Lab is collaborative! We're all in this together! You will pair up and work as a team on the exercises. The entire room is one learning community working together to advance the knowledge and mastery of everyone. Stuck on an issue? Ask for help. Have an insight? Please share! The TA will circulate to offer advice and answers and keep everyone progressing smoothly.

To track lab participation, we have an online checkoff form for you to fill out as you work. Lab is not a race to find answers to exactly and only the checkoff questions -- the checkoff questions are used only to record attendance and get a read on how far you got. Lab credit is awarded based on your sincere participation for the full lab period. Your other rewards for investing in lab are to further practice your skills, work together to resolve open questions, satisfy your curiosity, and reach a place of understanding and mastery. The combination of active exploration, give and take with your peers, and the guidance of the TA makes lab time awesome. We hope you enjoy it!

For lab each week, we plan to mine our favorite open-source projects (musl libc, BusyBox unix utilities, Apple, Google and more) for interesting systems code to use as an object of study. We use this code to learn how various programming techniques are used in context, give insight into industry best practices, and provide opportunities for reflection and critique. We will have some noteworthy code picked out for you to explore in each lab. For lab1, the chosen code passages highlight interesting uses of the bitwise and integer operations.
Learning Goals

During this lab you will:

- practice with bits, bitwise operators and bitmasks
- read and analyze C code that manipulates bits/ints
- further practice with the edit-compile-test-debug cycle in the Unix environment

Find an open computer to share with a partner and introduce yourselves. Together the two of you will tackle the exercises below. Clone the lab starter code by using the command below. This command creates a lab1 directory containing the project files.

```
git clone /afs/ir/class/cs107/repos/lab1/shared lab1
```

Next, pull up the online lab checkoff and have it open in a browser so you can jot things down as you go. Only one checkoff needs to be submitted for both you and your partner.

Exercises

1) Share Unix Joy

Let's kick things off with a little Unix love! Chat up your labmates about your assign0 experiences. How is everyone doing so far on getting comfortable in the unix environment? Is there a video or resource you want to recommend to others? Do you have open questions or an issue you'd like help with? Did you learn a nifty trick or tip that you'd like to share? Let's hear it!

2) Bitwise Practice (35 min)

This section provides practice you can work through to get more familiar with bit operators and bitmasks. But first, one additional set of bit operators we did not have time to cover in class are the shift operators. These let you shift bits to the right or left:

```c
// shifts x to the left by k bits
// lower order bits filled with zeros
// bits shifted off the end are lost
x << k;

// shifts x to the right by k bits
// for unsigned numbers, fill with zeros
// for signed numbers, fill with sign bit
x >> k;
```

Mathematically, this lets us easily multiply or divide by two. For instance, for a left shift, 0b01110111 << 2 results in 0b11011100 (within an 8-bit value, the top two bits are shifted off and lost). For a right shift, filling with zeros for an unsigned number is called a "logical shift" and filling with the sign bit for a signed number is called an "arithmetic shift". Try out an example (e.g.
-1) to see why filling with the sign bit is preferred over always filling with zeros or ones.

A few other miscellaneous notes about bit operations:

- Operator precedence with bit operators and other operators can be tricky. Always use parentheses where precedence is ambiguous, just to make sure operators execute in the order you expect. For instance, 1<<2 + 3<<4 means 1 << (2+3) << 4 due to precedence rules. Writing (1<<2) + (3<<4) ensures the correct order.
- Put a U after a number literal to make it unsigned. For instance, 1U means the literal 1 as an unsigned number.
- Put an L after a number literal to make it a long (64 bits) instead of an int, which it is by default.

This highlights a common issue! If you want, for instance, a long with the index-32 bit on and everything else off, the following does not work:

```c
long num = 1 << 32;
```

This is because the 1 is by default a signed int, and you cannot shift a signed int by 32 places because it has only 32 bits. Instead, you must specify that the literal be a long:

```c
long num = 1L << 32;
```

(As a side note, lecture slides 71-81, which we didn't have time to get to this past lecture, also document everything mentioned here.)

With this material and the other material from the past lectures, test your understanding with this page of bitwise practice problems.

3) Round Up (3 + 4 together about 40 min)

Open the round.c file to review the code for the functions is_power_of_2 and round_up.

- is_power_of_2 is a function that takes advantage of a unique property of powers of two at the bit level. Work with your partner to identify what is unique about the bitwise pattern for those numbers that are a power of 2. Try sketching out a few examples (for example, 8 is 0b1000 and 7 is 0b0111). It may be easier to identify this pattern if you think about the binary representation of a number as telling you which powers of two make up the number. Once you've identified the pattern, then consider the relationship between a power of two and its predecessor (e.g.
number - 1) in terms of the bits the two values have in common. How does the code leverage these two facts to efficiently determine that a given value is or is not a power of 2?
- The round_up function returns the value of the first argument rounded up to the nearest multiple of the second. First consider the general case, when the multiple is not a power of 2. How are the arithmetic operations used to round up to the next multiple? Now consider the special case when the multiple is a power of 2. How is bitwise manipulation able to take the place of the expensive multiply/divide?

These functions show the advantage of being able to flip between interpretations. A number is just a bit pattern and can be manipulated arithmetically or bitwise at your convenience.

4) Midpoint

For this exercise, start by reading Google researcher Joshua Bloch's newsflash about a ubiquitous bug from integer overflow. Who knew that simply computing the midpoint could be perilous? We want a function that safely and correctly computes the midpoint of two integers. If the true midpoint is not an exact integral value, we are not fussy about how the result is rounded. As long as the function returns either integer neighbor, we're happy. The file mid.c contains four different formulations to compute the midpoint in a simple program you can experiment with.

The midpoint_original function mostly works, but exhibits the bug called out in Bloch's article:

```c
int midpoint_original(int x, int y)
{
    return (x + y)/2;
}
```

If the sum x + y overflows, the result is erroneous. For example, the call midpoint_original(INT_MAX-2, INT_MAX) should return INT_MAX-1, but actually returns -2. Oops! In the original context, the two inputs to midpoint were array indexes, which perhaps explains how this bug was able to lurk for so long with no visible symptoms (i.e. arrays of yore rarely had such large dimensions). Bloch's article proposes some fixes for calculating the midpoint.
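Python integers never overflow, but you can reproduce the 32-bit wraparound outside of mid.c by masking arithmetic into two's-complement range. A quick sketch (our own helpers, not part of the lab's C code):

```python
def to_int32(n):
    """Wrap an arbitrary Python int into signed 32-bit two's complement."""
    n &= 0xFFFFFFFF                       # keep only the low 32 bits
    return n - (1 << 32) if n & (1 << 31) else n

def c_div2(n):
    """C integer division truncates toward zero; Python's // floors."""
    return -((-n) // 2) if n < 0 else n // 2

INT_MAX = (1 << 31) - 1

def midpoint_original(x, y):
    # the sum wraps around, just like 32-bit int arithmetic in C
    return c_div2(to_int32(x + y))
```

With this model, midpoint_original(INT_MAX-2, INT_MAX) reproduces the -2 result described above: the wrapped sum is -4, and -4/2 is -2.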
First he offers midpoint_A and then champions midpoint_B as a faster alternative.

```c
int midpoint_A(int x, int y)
{
    return x + ((y - x) / 2);
}

int midpoint_B(int x, int y)
{
    return ((unsigned int)x + (unsigned int)y) >> 1;
}
```

- Consider how midpoint_A and midpoint_B have rearranged the original calculation. What has midpoint_A done to avoid overflow? What has midpoint_B done?

Both midpoint_A and midpoint_B work correctly as long as both inputs are non-negative. This constraint is never actually stated; it is merely implicit in the original context that the inputs are array indexes. So if one or both inputs is negative, what then? Let's investigate!

- Given a negative input, midpoint_A is susceptible to a different overflow than the original, in this case during subtraction. What must be true about the two operands to subtract in order to cause overflow? What is the result of a subtraction operation that has overflowed? Work out what you think is an input that causes a failure for midpoint_A, then test out your theory by editing the mid.c program to try it. Build and run the program to verify your understanding.
- midpoint_B has its own distinct problem with negative inputs. To expose the flaw in midpoint_B it helps to work backward. Consider how the expression within midpoint_B will never evaluate to a negative result -- why not? Given this fact, any inputs for which the midpoint is negative must fail on midpoint_B. What value will be returned instead of the expected in these cases? Edit the mid.c program to add an input that demonstrates the problem and verify that your theory matches the observed behavior. If you cast the sum back to a signed value before shifting right, you fix this particular problem, but at the expense of causing a different case to now fail. Experiment with the code to observe this. It feels like we just can't win!

The final version of midpoint we have for you is midpoint_C.
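If you'd like to experiment outside of mid.c as well, the same 32-bit simulation trick models these two functions (hypothetical Python stand-ins for the C versions; the failing inputs are left for you to discover):

```python
def to_int32(n):
    """Wrap into signed 32-bit two's complement, like C int arithmetic."""
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n & (1 << 31) else n

def c_div2(n):
    """C-style division by 2 truncates toward zero; Python // floors."""
    return -((-n) // 2) if n < 0 else n // 2

def midpoint_A(x, y):
    return to_int32(x + c_div2(to_int32(y - x)))

def midpoint_B(x, y):
    # unsigned 32-bit addition, then a logical (zero-fill) right shift
    total = ((x & 0xFFFFFFFF) + (y & 0xFFFFFFFF)) & 0xFFFFFFFF
    return to_int32(total >> 1)
```

Both behave correctly for non-negative inputs, including the large-index case that broke midpoint_original; try negative inputs to see each one misbehave.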
This gem comes from the marvelous fascicle on bits written by the extraordinary Don Knuth. (fascicle??)

```c
int midpoint_C(int x, int y)
{
    return (x & y) + ((x ^ y) >> 1);
}
```

Knuth's midpoint_C is the real deal, computing a correct midpoint for all inputs, with no possibility of overflow! How this approach works is not obvious at first glance, but with some careful study you can work through how it all fits together.

- Start by considering the bitwise representation of a number and how its "on" bits correspond to the powers of 2 in the number's binary polynomial. For instance, 0000...01011, which is the base-10 number 11, can be written as 1*2^0 + 1*2^1 + 0*2^2 + 1*2^3 = 11.
- Now trace the effect of comparing the powers of 2 contained in x and y. The bitwise & identifies which powers of 2 the two inputs have in common and the bitwise ^ pulls out those powers which differ.
- How are those two results combined so as to gather the needed powers of 2 in the midpoint?
- The code above joins with +. Why must we use + here? Try substituting | and work out how the resulting expression now operates. The result is nearly equivalent, but not quite -- what has changed?

And what exactly is a fascicle? It took me several hops through my dictionary to find out!

5) Write, Test, Debug, Repeat (20 min)

Now it's your turn to write some bitwise code of your own and practice with the Unix development tools! The parity program reports the parity of its command-line argument. A value has odd parity if there is an odd number of "on" bits in the value, and even parity otherwise. Confirm your understanding of parity by running the samples/parity_soln program on various arguments.

The code in parity.c was written by your colleague who claimed it is "complete", but on their way out the door they mutter something unintelligible about unfixed bugs. Uh oh... Your task is to test and debug the program into a fully functional state using CS107 sanitycheck and the gdb debugger. Let's investigate!
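As a reference point while you debug (a hypothetical Python sketch, not the lab's parity.c), parity itself is simple to express once the value is viewed as a fixed-width bit pattern:

```python
def parity(n, width=32):
    """Return 'odd' or 'even' parity for an integer viewed as `width` bits.

    Masking to a fixed width first means negative numbers terminate too;
    a naive signed right-shift loop would not.
    """
    n &= (1 << width) - 1   # view the value as a fixed-width bit pattern
    count = 0
    while n:
        count += n & 1
        n >>= 1
    return "odd" if count % 2 else "even"
```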
A helpful reference as you are using GDB is the CS107 GDB guide, also linked to from the resources page. - Use make to build the program and try running ./parity a few times on various values. Uh oh! It thinks every value has odd parity! Does the program ever report even parity for anything? - Let's run it under the debugger. Start gdb parity. We can use the list command to print out parts of the code GDB is examining. Use list compute_parity to print the compute_parity function and note the line number where it updates the result inside the loop. - Next, let's set a breakpoint on that line so that when we run the program in GDB, GDB will pause before executing that line and await further instructions. You can add a breakpoint by typing break XXX where XXX is either a function name or line number. - Run the program under gdb by entering the run command, followed by a command line argument (for the number to examine). GDB will start running the program and pause when it hits the breakpoint. Note that it pauses before it executes the line the breakpoint is on. When stopped at the breakpoint, print the value of result. We can print by using the p (short for print) command: p result. The value of result appears to be garbage. D'oh, it was never initialized! Is that even legal? In a safety-conscious language such as Java, the compiler may guard against this. - Do a make clean and make to review the build warnings and you'll see nary a peep about it from gcc. At runtime, the variable will use whatever junk value was leftover in its memory location. Lesson learned -- you will need to up your own vigilance in the laissez-faire world of C. - Add a correct initialization, build, and re-run to test your fix. Sanitycheck! One simple means to verify correctness is by comparing your results to a known-correct solution. We provide solution executables in the samples directory. For example, run ./parity 45 then run samples/parity_soln 45 and manually eyeball the outputs to confirm they match.
Even better would be to capture those outputs and feed them to diff so the tools can do the work. To make testing as painless as possible for you, we've automated simple output-based comparison into the CS107 tool sanitycheck. If you haven't already, read our sanitycheck instructions, and run sanitycheck on lab1 to see that your fixed parity passes all tests. Now that your corrected program passes sanitycheck, it is good to go, right? Not so fast; keep in mind that sanitycheck is only as thorough as its test cases. Our default sanitycheck gives you a small set of tests to start with; you use the custom option to add tests of your own to more fully exercise the program. - Carefully read through the results from default sanitycheck. How many different test cases does it include? What are those test cases? - Run custom sanitycheck with the additional test cases in custom_tests to get a different side of the story. One of these tests fails due to timeout. It's not that the program is horribly inefficient, it is just stuck in an infinite loop. - The best way to debug an infinite loop is to run the program under GDB, and once it stalls, stop the program using Control-c. GDB will show you where the program was executing when it was stopped, and you can poke around and see what is going on. Let's try this - run the parity program in GDB on a negative argument and let it go unresponsive. - Type Control-C to interrupt it and return control to gdb. Use the gdb backtrace command to see where the program is executing - this will display the current call stack, meaning what functions are currently executing. - Use the step command to step through a few statements as it goes around the loop and gather information to diagnose why the loop is not being properly exited. - Once you know what's gone astray, edit the code to fix, rebuild, and test under both default and custom sanitycheck to verify you have squashed the bug. Way to go!
[Optional] Challenge Problem Finished with lab and itching to further exercise your bitwise skills? Check out our challenge problem! Recap and Check Off With TA At the end of the lab period, submit the checkoff form and ask your lab TA to approve your submission so you are properly credited for your work. Your takeaway from lab 1 should be proficiency with bitwise operations, constructing and using bitmasks, and a solid grasp on the representation of unsigned values as a binary polynomial and signed values in two's complement. Here are some questions to verify your understanding and get you thinking further about these concepts: - Consider rounding the magnitude of an integer up to a power of two (e.g. 3 rounds to 4, 4 to 4, 5 to 8, for negative: -3 rounds to -4, -4 to -4, and so on). How does the bit pattern of a positive int differ from the bit pattern of the value after rounding to a power of two? What about for a negative int? - Give a bitwise expression to zero the upper N bits of an unsigned int V. What does this expression compute numerically? - When do you supply the command-line arguments to a program you wish to run under gdb: when starting gdb or from within gdb when running the program? - Chapter 2 of B&O has many excellent practice problems presented with solutions - check them out! Just For Fun - a cute parlor trick based on integer representation - a surprisingly addictive binary Tetris game - crazy bit-hacks for the truly brave. - integer overflow wreaking worldwide havoc: grounding the Boeing Dreamliner
https://web.stanford.edu/class/archive/cs/cs107/cs107.1202/lab1/
- "How do i read in a file and move it to two other locations?": Oops i pasted the wrong thing. Why doesn't this work? [code] #include <iostream> #include <fstream> ...
- "How do i read in a file and move it to two other locations?": I want to copy a file to a new file. I thought this would do it but it doesn't copy it over to the n...
- "How to edit the text in a file": while(getline(f, s)){ v.push_back(s); ...
- "How to edit the text in a file": I have this basic coding below that will read the file and display it. I want to be able to change t...
- "setting a string to something": how do you set a string to something like string name = "bob" is that right?

This user does not accept Private Messages
http://www.cplusplus.com/user/omitted/
Using Loyc.Essentials: collection interfaces 03 Feb 2014 This post was imported from blogspot. Loyc.Essentials is a library of "stuff that should be in the .NET BCL, but isn't." Today I will present one of the most important parts of Loyc.Essentials: its collection interfaces. (If you are targeting an older version of the framework, you can use Theraot.Core instead to provide these interfaces and other features of .NET 4/4.5.) Several of these interfaces are widely used throughout Loyc.Essentials itself and other projects such as Loyc.Collections and LLLPG. They also make it practical to implement a wide variety of data structures against common interfaces: B+ trees, VLists, ALists, Hash Array Mapped Tries, Bloom filters, other search trees and tries... the possibilities are endless. First, let me talk a little about the limitations that the current BCL interfaces suffer from. The .NET framework does not support much diversity. Traditionally it defined just four collection interfaces: IEnumerable<T>, ICollection<T>, IList<T>, and IDictionary<T>, and all but one were mutable. Flaw #1: IList<T>/ICollection<T> contain some methods that are almost never used, like CopyTo(), but lack methods that are often desired, like AddRange() and RemoveRange(), and they lack methods that would be often used if they existed, like Slice(). Flaw #2: Lack of support for read-only interfaces. In .NET 4.5, they finally added IReadOnlyCollection<T> and IReadOnlyList<T>. Using IReadOnlyList<T> should not be a burden or difficulty that the caller must overcome, rather it should simply be a promise that the called method will not modify the list. That leads me to flaw #3... Flaw #3: ambiguity between the old and new interfaces. Suppose you write a method that accepts a read-only list: void Foo(IReadOnlyList<T> list) {...} But you notice that most collection classes don't implement IReadOnlyList<T>, so you add an overload that automatically converts IList<T> to IReadOnlyList<T> with an extension method (e.g. AsListSource() in the Loyc.Collections namespace): // Alias for Foo(IReadOnlyList<T>) void Foo(IList<T> list) { Foo(list.AsListSource()); } Meanwhile, immutable data structures might choose to explicitly implement IList<T> in order to stay compatible with the large amount of older code based on IList<T> for read-only access.
So what's the problem? Suppose I have a data structure MyList<T> that implements both interfaces. Look what happens when I call Foo: Foo(new MyList<T>()); // ERROR: The call is ambiguous // between the following methods or properties: // '...Foo(System.Collections.Generic.IList<T>)' and // '...Foo(System.Collections.Generic.IReadOnlyList<T>)' Flaw #4: A lack of variety and flexibility. If you actually survey the various data structures that exist, you'll find a wide variety of capabilities that they may or may not offer. - A data structure might support an indexing operator but not a Count property (consider the Sieve of Eratosthenes of infinite size). Other data structures may support a Count property in theory, but calling it should be avoided because it may be expensive (in Loyc.Essentials, BufferedSequence<T> has this property). - A data structure may be queue- or stack-like, offering access to the beginning/end of a data structure but not the middle. - A data structure might not be zero-based, e.g. the range of valid indexes could be 1..100 or -100..100 or 35..39. - A data structure may be sparse, in which case scanning it for items with a for-loop from 0..Count is inefficient. - A data structure may be a sink, allowing insertion of data but not reading (write-only files and pipes may be viewed this way). - Most list data structures could allow enumeration to start at any point in the middle, but .NET ignores this possibility; in addition, indexing is less efficient than enumeration for some lists (e.g. AList), so this should be supported. - A data structure may not support access by integer index, but could still allow bi-directional enumeration (moving both forward and backward), e.g. a doubly-linked list, with insertion locations denoted by enumerator positions. - Many data structures can easily tell you if they are empty, but require O(N) time to report their Count.
.NET almost forces you to use Count == 0 to detect emptiness; you can use LINQ's Any() function instead, but this requires a heap allocation and two interface calls. Other collection libraries outside .NET have recognized this diversity, and provide more concepts or interfaces to address it, such as the C++ STL's various iterator and container concepts, or D's ranges. With the standard .NET interfaces, you can write an algorithm that accepts standard linked lists, or normal lists, but not both at once. This is the central problem that the .NET interfaces fail to solve. When you write a method or a class that operates on a collection, you cannot declare precisely what kind of collection you accept, so most of the time you have to mandate functionality you don't need. Some collection types will not be able to provide that functionality that you mandated but didn't need, so they won't implement the interface and will be incompatible, unnecessarily. Meanwhile, other collection types might implement the interface even though they are not compatible, and then throw an exception when you try to use unsupported functionality. How Loyc.Essentials improves the situation Loyc.Essentials improves the situation mainly by providing a much wider variety of interfaces. The new interfaces mostly address flaw #4, but can only partly alleviate the pain of flaws 1, 2 and 3. The interfaces fall into several groups: - Sources: objects that provide data. Source interfaces are read-only. - Sinks: objects that accept data. Sink interfaces are write-only. - Slices: subsections of an indexed list, e.g. elements 5..10 of a larger list. The broad class of Divide-and-conquer algorithms benefit from these. - Ranges: Similar to slices, ranges represent a subsection of a collection, but they may only allow access to the first, or first and last, elements of that subsection. For performance reasons, ranges are mutable; I wrote a long-ish article about my design of Ranges in .NET. - Fancy enumerators: after designing the range interfaces, I realized that often all you need is an enumerator that can go backward.
Binumerators can travel both backward and forward through a collection, and mutable variants allow the current item to be changed or deleted. - Specific categories of data structures: queues, arrays, and sparse lists. - "Neg" non-zero-based lists: indexed lists for which the minimum index is not necessarily zero. The Loyc.Essentials collection interfaces The full documentation of these interfaces can be seen in the source code and will be provided automatically by Visual Studio Intellisense, as long as you have the Loyc.Essentials.xml file alongside your copy of Loyc.Essentials.dll. First, an interface you won't use: ICount: public interface ICount : IIsEmpty { int Count { get; } } The problem is that .NET 4.5's new IReadOnlyCollection<T> also defines a Count property: public interface IReadOnlyCollection<T> : IEnumerable<T> { int Count { get; } } How does this new interface make it impossible to use ICount? Well, if I define any interface that inherits from both IReadOnlyCollection and ICount, it becomes impossible (well, quite difficult) to call Count on that interface because the C# compiler says that the reference to Count is "ambiguous". The end result is that people won't bother using ICount at all (admittedly, the number of cases where somebody needs the Count of a collection, and nothing else, is very small). For completeness, I recently added IIsEmpty, because as I mentioned, IsEmpty can run much faster than Count in some data structures: public interface IIsEmpty { bool IsEmpty { get; } } However, I decided not to mandate this property as part of the common Loyc.Essentials interface IListSource<T>, although it is a part of IFRange<T> (listed near the bottom of this article). My single most favorite interface in Loyc.Essentials is IListSource<T>.
As its name implies, it is a kind of source; it's basically IReadOnlyCollection<T> with extra functionality: public interface IListSource<out T> : IReadOnlyList<T> { // If index is invalid, sets fail=true and returns default(T) T TryGet(int index, out bool fail); IRange<T> Slice(int start, int count = int.MaxValue); } Both of these new functions are favorites of mine, although it's usually easier to call one of the standard TryGet() extension methods (especially the first one): public static partial class LCInterfaces { public static T TryGet<T>(this IListSource<T> list, int index, T defaultValue); public static bool TryGet<T>(this IListSource<T> list, int index, ref T value); } Plus, a couple of standard methods of IList<T> are added as extension methods: public static partial class LCInterfaces { public static int IndexOf<T>(this IReadOnlyList<T> list, T item); public static void CopyTo<T>(this IReadOnlyList<T> c, T[] array, int arrayIndex); } And there are other extension methods that will be covered in a future article. I don't know about you, but in my code I often have to check whether the index is in range before I call the indexer: if (index < list.Count && list[index] >= 0) {...} With IListSource this is easier: if (list.TryGet(index, -1) >= 0) {...} In theory, this version should be faster too, because it needs only one interface dispatch, not two. That's not the only reason for TryGet() to exist, though; there are a few collection types for which it is expensive to call Count. The second extension method uses ref T value to allow the caller to set a default T value before calling the method, since you will not always want to use default(T) as the default.
If the index is invalid, value is left unchanged. In a class that implements IListSource<T>, Slice() can be written as a one-liner: public IRange<T> Slice(int start, int count) { return new Slice_<T>(this, start, count); } or the potentially more efficient version, IRange<T> IListSource<T>.Slice(int start, int count) { return Slice(start, count); } public Slice_<T> Slice(int start, int count) { return new Slice_<T>(this, start, count); } Slice_ is a struct in Loyc.Essentials that provides a "view" on part of a read-only list. Why the underscore? I wanted to simply call it "Slice", but that is illegal in C# because Slice_ itself implements IListSource<T>, so it contains a method named Slice, and a method is not allowed to have the same name as its containing class. By the way, there's also a ListSlice<T> class and Slice() extension method for slicing IList<T>. Next up, here are the "neg lists". These are list interfaces that do not (necessarily) use zero as the minimum index. So far, I haven't used these interfaces in practice. public interface INegListSource<T> : IReadOnlyCollection<T> { int Min { get; } int Max { get; } T this[int index] { get; } T TryGet(int index, out bool fail); IRange<T> Slice(int start, int count = int.MaxValue); } public interface INegArray<T> : INegListSource<T> { new T this[int index] { set; get; } bool TrySet(int index, T value); } public interface INegAutoSizeArray<T> : INegArray<T> { void Optimize(); } public interface INegDeque<T> : INegArray<T>, IDeque<T> { } Here's documentation, if you're interested. INegAutoSizeArray is interesting: it automatically enlarges itself when you write to an index below Min or above Max. Next there's a higher-performance variation on INotifyCollectionChanged, which can only be implemented by collections that implement IListSource<T>. Because the event fires before the list changes rather than afterward, there is no need to allocate a list of old items--the event handler can simply look at the current state of the list. Meanwhile, it is possible to optimize NewItems not to require an allocation.
Admittedly, I have found that sometimes an event handler would really prefer to see the list after it changes rather than before. But if performance matters, this interface may be better. You be the judge. Next I present this peculiar bunch of interfaces, which reflects the difficulty I had designing an interface for immutable sets. The most important interface is at the end: ISetImm<T>. If you work through it logically, you can work out the complete set of members that ISetImm<T> ends up with. .NET already defines a set interface for mutable sets, ISet<T>, but immutable sets are more convenient to work with so I added this interface. In Loyc.Collections.dll, I implemented a very nice immutable set class called Set<T>, which is a kind of Hash Tree. Finally, I created an InvertibleSet<T> class, which can represent everything that is not in some other set; to account for this possibility, I added the IsInverted property to ISetImm. Sparse lists are indexed lists in which regions of indexes may be unused, and I defined a few interfaces for them as well. When you insert or remove an item at index i, the index associated with each and every item above index i also changes. A Dictionary can't do that, while a good sparse list implementation can do it efficiently. Loyc.Collections includes a sparse list, SparseAList<T>. It is not always a fast data structure, but it scales up effectively to very large files. Next up, here are the IQueue, IStack and IDeque interfaces, with associated extension methods for added convenience, which are mostly self-explanatory if you know what queues and stacks are. Currently IQueue and IStack are not being used, but IDeque<T> is implemented by DList<T>. Next up, here are the sink interfaces, which let you put data in but not take it out. These interfaces are not often useful, but sometimes people actually do write code that only modifies a collection, without reading it.
One useful property of these interfaces is contravariance: if a method needs an ISinkCollection<Foo> for example, you can pass it a ISinkCollection<object> instead. Here's an interface that models built-in arrays: public interface IArray<T> : IListSource<T>, ISinkArray<T> { new T this[int index] { get; set; } bool TrySet(int index, T value); } public interface IAutoSizeArray<T> : IArray<T> { void Optimize(); } Next, here are some interfaces that add or remove items "in bulk". It's often more efficient to use a bulk insertion or deletion method rather than calling Insert or RemoveAt in a loop. To deal with the ambiguity problem described earlier, there are also combined interfaces: public interface ICollectionAndReadOnly<T> : ICollection<T>, IReadOnlyCollection<T> { } public interface IListAndListSource<T> : IList<T>, IListSource<T>, ICollectionAndReadOnly<T> { } By itself, these interfaces don't solve any problems. To recap, if you write these two methods: void Foo(IReadOnlyList<T> list) {...} void Foo(IList<T> list) { Foo(list.AsListSource()); } The C# compiler will give an "ambiguity" error when you try to call Foo(list), if the list implements both IList<T> and IReadOnlyList<T>. To work around this problem, it is necessary to define a third method that takes IListAndListSource<T>: void Foo(IReadOnlyList<T> list) {...} void Foo(IList<T> list) { Foo(list.AsListSource()); } void Foo(IListAndListSource<T> list) { Foo((IReadOnlyList<T>) list); } This workaround only works if the list class implements IListAndListSource<T>, so all mutable collections in Loyc.Essentials and Loyc.Collections do so. Binumerators are enumerators that can move backward through a collection, not just forward. The IBinumerable interface can be implemented by a collection that supports binumerators. Mutable enumerators additionally allow elements of the collection to be modified during enumeration. Mutable enumerators optionally support removing the current element. Finally, here are the range interfaces, which were inspired by the D programming language.
These were described in a much earlier article but I tweaked the interfaces after that article was written. You can see the current documentation in the source code; I won't say anything more about these interfaces here, since this article is getting pretty long already. That's it! That's all the general-purpose collection interfaces. Performance: one flaw I didn't fix. Interface calls are relatively slow, and IEnumerator<T> costs two of them per iteration (MoveNext plus Current), so at one point I designed a one-call replacement, a delegate type called Iterator: public delegate T Iterator<out T>(ref bool fail); Note that I really would have preferred to use the signature bool Iterator<T>(out T value). I actually wrote a whole implementation of LINQ for Iterator and IIterable<T> (the counterpart to IEnumerable<T>), but then I decided to abandon the concept before I got around to benchmarking the LINQ implementation (early microbenchmarks showed that IIterable was modestly faster when called directly, but not when using the extension method). C#'s foreach loop and iterator feature support only IEnumerator, not Iterator. So I found that the implementation costs for Iterator tended to be high, while few people would appreciate or take advantage of the performance enhancement. For certain applications that need both high performance and flexibility, another technique for avoiding interface calls makes sense: returning data in groups, such as arrays. Consider this method of ICollection<T> that is almost never used: void CopyTo(T[] array, int arrayIndex); Imagine that method was gone and replaced with this method in IList: // copy this[startIndex..startIndex+count] to array[arrayIndex..arrayIndex+count] void CopySlice(T[] array, int arrayIndex, int startIndex, int count); Now you would have a technique for optimizing access when you need it. Rather than requesting the entire list as an array, which requires unbounded memory, you request a section. How can this be used to optimize a program? Well, you can read the list in small blocks, e.g., 50 elements at a time.
Then you can use a for-loop like this one: for (int i = 0; i < array.Length; i++) { /* do something with array[i] */ }. For Loyc, I decided to define a specialized interface for lexers. StringSlice is a structure in Loyc.Essentials that represents a slice of a string (a tuple of (string str, int startIndex, int length)). By the way: we're doing it wrong. Defining zillions of interfaces isn't the best solution. My favorite thing about the Go language is that a type never has to declare which interfaces it implements: a class wouldn't have to say it implements IReadOnlyList<T>; you would still be allowed to pass it to a method that takes IReadOnlyList<T>. So, suppose I define an indexable data structure that supports add-at-the-end, but no other modifications. Then I could define an interface that exactly matches my data structure: interface IAppendableList<T> : IReadOnlyList<T> { void Add(T item); } class GuestBook : IAppendableList<GuestEntry> { ... } Later, someone else who is using MyDataStructure<T> in their code could decide to declare a method that accepts IAppendableList<T>: void Foo(IAppendableList<T> x) { ... } We can tell that Foo() will probably add items to x, but won't remove items. If .NET supported Go's ability to adapt to new interfaces with almost no extra run-time cost, it would not only accept an argument of type GuestBook, but it would also accept any of the standard collection classes like List<T>. Nice. This feature could neatly fix flaws 2 and 3 above, even if Microsoft itself never defined any read-only or specialized interfaces. The feature couldn't be added directly to C# because it would break existing code, but the feature could still be supported in .NET itself, if Microsoft had the willpower. Some people argue against this feature, saying that an "interface" is more than just a set of methods but a contract for how those methods behave.
And in Loyc.Essentials itself you'll see two interfaces that reflect that idea: public interface IQueue<T> : IPush<T>, IPop<T>, ICount { } public interface IStack<T> : IPush<T>, IPop<T>, ICount { } The two interfaces have identical members, yet their names imply different behavior. Still, structural interface matching already exists in some languages, so if Microsoft truly wants a many-language platform, they must make features like this possible, and efficient. There is a versioning benefit, too: in .NET, two assemblies can only share an interface if both reference the DLL that defines it, and they must reference the same version of that third DLL even if the interface in question is identical across different versions. The Go approach gets you out of versioning hell (at least it would in the .NET context. Whether Go itself has a similar issue, I have no idea). But I digress... Conclusion. Loyc.Essentials is a library of "stuff that should be in the BCL, but isn't." Although it was created for the Loyc project, it is useful on its own. Edit: I missed an interesting interface: IEnumeratorFrame<Frame, T>, which goes with the NestedEnumerator<Frame, T> structure, and is used for enumerating tree-like data structures or manually-implemented coroutines. See the code if you're curious, although this interface would work best as part of a built-in compiler feature for stream flattening like Cω had.
http://loyc.net/2014/using-loycessentials-collection.html
mmr_input_attach() Attach an input. Synopsis: #include <mm/renderer.h> int mmr_input_attach( mmr_context_t *ctxt, const char *url, const char *type ) Arguments: - ctxt - A context handle - url - The URL of the new input - type - The media type of the input. Possible values are "track", "playlist", and "autolist" (quotes are required). The autolist type represents a single track that is formatted as a playlist. This type allows a single track to be played continuously using the repeat input parameter. Library:mmrndclient Description: Attach an input file, device, or playlist. If the context already has an input, detach it first. Valid input URLs for the "track" or "autolist" input types are: - A URL starting with "HTTP". HLS (HTTP Live Streaming) is supported just as any HTTP stream, with the following caveats: - For HLS realtime broadcast the seek operation is disabled. Therefore, if your application issues a seek command it will fail. - Pause (play speed of 0) is supported but the playback may jump forward when resumed because the current stream may have become unavailable. - For HLS Video on Demand, the seek operation places the play position at the start of the video chunk that is closest to the requested time. The pause operation works as expected. - A full pathname starting with a "/" character, with or without a file: prefix - A file2b: URL containing the full path name of a dynamically growing file (a "progressive download"). Not all formats are supported. If parsing the file requires knowing the file size or reading more data than currently in the file, the input attachment operation may fail. If it does succeed, any attempt to play from beyond the end of file will cause the playback to underrun. Your application must pay attention to the buffering status and appropriately present the state to the user, depending on whether the download is happening at the time. 
- An snd: URL targeting an audio capture device in /dev/snd, such as snd:/dev/snd/pcmPreferredc?frate=44100&nchan=2. Currently this only works with the "file" output type. Supported parameters include: - frate — the sampling rate in Hz - nchan — the number of channels (1 for mono, 2 for stereo) - depth — the number of bits per sample (e.g., 16) - bsize — the preferred read size, in bytes Valid input URLs for the "playlist" input type are: - A full pathname of an M3U playlist file, without a file: prefix - An SQL URL in the form sql:database?query=query , where: - database is the full path to the database file - query must return a single column containing URLs in a form acceptable for the "track" input type - any special characters in the query must be URL-encoded (e.g., spaces encoded as %20, and so on) Returns: Zero on success, -1 on failure (use mmr_error_info() ) Classification: QNX Neutrino
http://developer.blackberry.com/playbook/native/reference/com.qnx.doc.mme.mmrenderer/topic/mmr_api/mmr_input_attach.html
How to Use the Java Keyword this The keyword this in Java refers to the current class instance. For example, if a class defines a method named Calculate, you can call that method from another method within the same class like this: this.Calculate(); Of course, you can also call the Calculate method without the this keyword: Calculate(); Thus, in most cases, the keyword this is not necessary. However, sometimes the this keyword can come in handy. For example: public class Actor { String lastName; String firstName; public Actor(String lastName, String firstName) { this.lastName = lastName; this.firstName = firstName; } } The this keywords are required to distinguish between the parameters named lastName and firstName and the instance variables of the same names. Sometimes, you use the this keyword by itself to pass a reference to the current object as a method parameter. You can print the current object to the console by using the following statement: System.out.println(this); Java converts the object to a string by calling the toString method defined for the class.
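To see both uses in one runnable file (disambiguating fields from parameters, and passing this as an argument), here is a small example; the class and method names are invented for illustration:

```java
public class ThisDemo {
    private String lastName;
    private String firstName;

    public ThisDemo(String lastName, String firstName) {
        // Without "this.", each parameter would just be assigned to itself.
        this.lastName = lastName;
        this.firstName = firstName;
    }

    // Passing "this" hands the current object to another method.
    public String register() {
        return Registry.describe(this);
    }

    // println(this) and string concatenation both call this method.
    @Override
    public String toString() {
        return firstName + " " + lastName;
    }

    static class Registry {
        static String describe(ThisDemo actor) {
            return "Registered: " + actor;   // concatenation calls actor.toString()
        }
    }

    public static void main(String[] args) {
        ThisDemo actor = new ThisDemo("Grant", "Cary");
        System.out.println(actor.register()); // prints "Registered: Cary Grant"
    }
}
```

Note that if the constructor were written as `lastName = lastName;`, the compiler would accept it, but the parameter would shadow the field and the field would stay null, which is exactly the pitfall the this keyword avoids.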
https://www.dummies.com/programming/java/how-to-use-the-java-keyword-this/
How to pass multiple PK in the AQL Select * from namespace.setName where PK command? You can use or split to pass multiple PK parameters, like this select * from namespace.setName where PK="pk1" or PK="pk2" No… nothing is parsed by aql beyond the first PK=“pk1” aql> select * from test.testset where pk="key1" +--------+-----+ | name | age | +--------+-----+ | "Jack" | 26 | +--------+-----+ 1 row in set (0.001 secs) OK aql> select * from test.testset where pk="key1" and then everything else is just ignored +--------+-----+ | name | age | +--------+-----+ | "Jack" | 26 | +--------+-----+ 1 row in set (0.001 secs) OK How would you perform an ‘OR’ statement on a bin? select * from insurance.bar where policyValid=1 or policyValid=9 for: aql> select * from insurance.bar +-------------+---------+ | policyValid | comment | +-------------+---------+ | 9 | "Dead" | | 1 | "Live" | +-------------+---------+ You have to write your own application, for example in Java using Aerospike Java client library, and use scan with Expressions. See similar discussion for writes in this thread. Aerospike Quick Look (AQL) - an application that uses Aerospike C client library, is not designed to provide full C client functionality. 1 Like
https://discuss.aerospike.com/t/how-to-pass-multiple-pk-in-the-aql-select-query/8543
By: Martin Odersky and Jeff Olson and Paul Phillips and Joshua Suereth History Introduction This is a proposal to introduce syntax for classes in Scala that can get completely inlined, so operations on these classes have zero overhead compared to external methods. Some use cases for inlined classes are: - Inlined implicit wrappers. Methods on those wrappers would be translated to extension methods. - New numeric classes, such as unsigned ints. There would no longer need to be a boxing overhead for such classes. So this is similar to value types in .NET. - Classes representing units of measure. Again, no boxing overhead would be incurred for these classes. The proposal is currently in an early stage. It's not yet been implemented, and the proposed implementation strategy is too complicated to be able to predict with certainty that it will work as specified. Consequently, details of the proposal might change driven by implementation concerns. Value Classes The gist of the proposal is to allow user-defined classes to extend from AnyVal in situations like this: class C (val u: U) extends AnyVal { def m1(ps1) = ... ... def mN(psN) = ... } Such classes are called value classes. A value class C must satisfy the following criteria: - C must have exactly one parameter, which is marked with val and which has public accessibility. The type of that parameter (e.g. U above) is called the underlying type of C. - C may not have @specialized type parameters. - The underlying type of C may not be a value class. - C may not have secondary constructors. - C may not define concrete equals or hashCode methods. - C must be either a toplevel class or a member of a statically accessible object. - C must be ephemeral. A class or trait C is ephemeral if the following holds: - C may not declare fields (other than the parameter of a value class). - C may not contain object definitions. - C may not have initialization statements.
We say that a value class C unboxes directly to a class D if the underlying type of C is a type-instance of D. Indirect unboxing is the transitive closure of direct unboxing. A value class may not unbox directly or indirectly to itself.

The following implicit assumptions apply to value classes.

- Value classes are implicitly treated as final, so they cannot be extended by other classes.
- Value classes are implicitly assumed to have structural equality and hash codes. I.e. their equals and hashCode methods are taken to be defined as follows:

def equals(other: Any) = other match {
  case that: C => this.u == that.u
  case _ => false
}
def hashCode = u.hashCode

Universal traits

Scala's rules for inheritance do not permit value classes to extend traits that extend from AnyRef. To permit value classes to extend traits, we introduce universal traits, which extend from Any. A universal trait T needs to explicitly extend class Any. In the example below, Equals is a universal trait with superclass Any, but Ordered's superclass is still assumed to be AnyRef.

trait Equals[T] extends Any { ... }
trait Ordered[T] extends Equals[T] { ... }

To turn Ordered into a universal trait, add an explicit superclass Any:

trait Ordered[T] extends Any with Equals[T] { ... }

Like value classes, universal traits need to be ephemeral.

Expansion of value classes

Value classes are expanded as follows. For concreteness, we assume a value class Meter that is defined like this:

class Meter(val underlying: Double) extends AnyVal with Printable {
  def plus(other: Meter): Meter =
    new Meter(this.underlying + other.underlying)
  def divide(other: Meter): Double =
    this.underlying / other.underlying
  def divide(factor: Double): Meter =
    new Meter(this.underlying / factor)
  def less(other: Meter): Boolean =
    this.underlying < other.underlying
  override def toString: String = underlying.toString + "m"
}

For simplicity we assume that all expansion steps are done on erased types.

Step 1: Extracting methods.
Let the extractable methods of a value class be all methods that are directly declared in the class (as opposed to being inherited) and that do not contain a super call in their body. For each extractable method m, we create another method named extension$m in the companion object of that class (if no companion object exists, a fresh one is created). The extension$m method takes an additional parameter in first position, which is named $this and has the value class as its type. Generally, in a value class

class C(val u: U) extends AnyVal

a method

def m(params): R = body

is expanded to the following method in the companion object of class C:

def extension$m($this: C, params): R = body2

Here body2 is the same as body with each occurrence of this or C.this replaced by $this. The original method m in C will be changed to

def m(params): R = C.extension$m(this, params)

Overloaded methods may be augmented with an additional integer to distinguish them after types are erased (see the transformations of the divide method in the following steps). Also in this step, synthetic hashCode and equals methods are added to the class.
In our example, the Meter class would be expanded as follows:

class Meter(val underlying: Double) extends AnyVal with Printable {
  def plus(other: Meter): Meter = Meter.extension$plus(this, other)
  def divide(other: Meter): Double = Meter.extension1$divide(this, other)
  def divide(factor: Double): Meter = Meter.extension2$divide(this, factor)
  def less(other: Meter): Boolean = Meter.extension$less(this, other)
  override def toString: String = Meter.extension$toString(this)
  override def equals(other: Any) = Meter.extension$equals(this, other)
  override def hashCode = Meter.extension$hashCode(this)
}

object Meter {
  def extension$plus($this: Meter, other: Meter) =
    new Meter($this.underlying + other.underlying)
  def extension1$divide($this: Meter, other: Meter): Double =
    $this.underlying / other.underlying
  def extension2$divide($this: Meter, factor: Double): Meter =
    new Meter($this.underlying / factor)
  def extension$less($this: Meter, other: Meter): Boolean =
    $this.underlying < other.underlying
  def extension$toString($this: Meter): String =
    $this.underlying.toString + "m"
  def extension$equals($this: Meter, other: Any) = other match {
    case that: Meter => $this.underlying == that.underlying
    case _ => false
  }
  def extension$hashCode($this: Meter) = $this.underlying.hashCode
}

Step 2: Rerouting calls

In this step any call to a method that got extracted in step 1 into a companion object gets redirected to the newly created method in that companion object. Generally, a call

p.m(args)

where m is an extractable method declared in a value class C gets rewritten to

C.extension$m(p, args)

For instance the two calls in the following code fragment

val x, y: Meter
x.plus(y)
x.toString

would be rewritten to

Meter.extension$plus(x, y)
Meter.extension$toString(x)

Step 3: Erasure

Next, we introduce for each value class C a new type C$unboxed (this type will be eliminated again in step 4). The newly generated type is assumed to have no members and to be completely outside the normal Scala class hierarchy.
That is, it is a subtype of no other type and is a supertype only of scala.Nothing. We now replace every occurrence of the type C in a symbol's type or in a tree's type annotation by C$unboxed. There are however the following two exceptions to this rule:

- Type tests are left unaffected. So, in the type test below, C is left as it is.

e.isInstanceOf[C]

- All occurrences of methods in class C are left unaffected.

We then re-typecheck the program, performing the following adaptations if types do not match up.

- If e is an expression of type C$unboxed, and the expected type is some other type T, e is converted to type C using

new C(e.asInstanceOf[U])

where U is the underlying type of C. After that, further adaptations may be effected on C, employing the usual rules of erasure typing. Similarly, if a selection is performed on an expression of type C$unboxed, the expression is first converted to type C using the conversion above.

- If the expected type of an expression e of type T is C$unboxed, then e is first adapted with expected type C giving e2, and e2 then is converted to C$unboxed using

e2.u.asInstanceOf[C$unboxed]

where u is the name of the value parameter of C. Similarly, if an expression e is explicitly converted using

e.asInstanceOf[C$unboxed]

then e is first converted to type C, giving e2, and the cast is then replaced by

e2.u.asInstanceOf[C$unboxed]

The rules for conversions from and to arrays over value classes are analogous to the rules for arrays over primitive value classes.

Value classes are rewritten at this stage to normal reference classes. That is, their parent changes from AnyVal to java.lang.Object. The AnyVal type itself is also rewritten during erasure to java.lang.Object, so the change breaks no subtype relationships.
We finally perform the following peephole optimizations:

new C(e).u ==> e
new C(e).isInstanceOf[C] ==> true
new C(e) == new C(f) ==> e == f
new C(e) != new C(f) ==> e != f

Step 4: Cleanup

In the last step, all occurrences of type C$unboxed are replaced by the underlying type of C. Any redundant casts of the form

e.asInstanceOf[T]

where e is already of type T are removed and replaced by e.

Examples

Example 1

The program statements on the left are converted using steps 1 to 3 to the statements on the right.

var m, n: Meter          ==> var m, n: Meter$unboxed
var o: AnyRef            ==> var o: AnyRef
m = n                    ==> m = n
o = m                    ==> o = new Meter(m.asInstanceOf[Double])
m.print                  ==> new Meter(m.asInstanceOf[Double]).print
m less n                 ==> Meter.extension$less(m, n)
m.toString               ==> Meter.extension$toString(m)
m.isInstanceOf[Ordered]  ==> new Meter(m.asInstanceOf[Double]).isInstanceOf[Ordered]
m.asInstanceOf[Ordered]  ==> new Meter(m.asInstanceOf[Double]).asInstanceOf[Ordered]
o.isInstanceOf[Meter]    ==> o.isInstanceOf[Meter]
o.asInstanceOf[Meter]    ==> o.asInstanceOf[Meter].underlying.asInstanceOf[Meter$unboxed]
m.isInstanceOf[Meter]    ==> new Meter(m.asInstanceOf[Double]).isInstanceOf[Meter]
m.asInstanceOf[Meter]    ==> m.asInstanceOf[Meter$unboxed]

Including the cleanup step 4 the same program statements are converted as follows.

var m, n: Meter          ==> var m, n: Double
var o: Any               ==> var o: Any
m = n                    ==> m = n
o = m                    ==> o = new Meter(m)
m.print                  ==> new Meter(m).print
m less n                 ==> Meter.extension$less(m, n)
m.toString               ==> Meter.extension$toString(m)
m.isInstanceOf[Ordered]  ==> new Meter(m).isInstanceOf[Ordered]
m.asInstanceOf[Ordered]  ==> new Meter(m).asInstanceOf[Ordered]
o.isInstanceOf[Meter]    ==> o.isInstanceOf[Meter]
o.asInstanceOf[Meter]    ==> o.asInstanceOf[Meter].underlying
m.isInstanceOf[Meter]    ==> new Meter(m).isInstanceOf[Meter]
m.asInstanceOf[Meter]    ==> m.asInstanceOf[Double]

Example 2

After all 4 steps the Meter class is translated to the following code.
class Meter(val underlying: Double) extends AnyVal with Printable {
  def plus(other: Meter): Meter =
    new Meter(Meter.extension$plus(this.underlying, other.underlying))
  def divide(other: Meter): Double =
    Meter.extension1$divide(this.underlying, other.underlying)
  def divide(factor: Double): Meter =
    new Meter(Meter.extension2$divide(this.underlying, factor))
  def less(other: Meter): Boolean =
    Meter.extension$less(this.underlying, other.underlying)
  override def toString: String = Meter.extension$toString(this.underlying)
  override def equals(other: Any) = Meter.extension$equals(this.underlying, other)
  override def hashCode = Meter.extension$hashCode(this.underlying)
}

object Meter {
  def extension$plus($this: Double, other: Double) = $this + other
  def extension1$divide($this: Double, other: Double): Double = $this / other
  def extension2$divide($this: Double, factor: Double): Double = $this / factor
  def extension$less($this: Double, other: Double): Boolean = $this < other
  def extension$toString($this: Double): String = $this.toString + "m"
  def extension$equals($this: Double, other: Object) = other match {
    case that: Meter => $this == that.underlying
    case _ => false
  }
  def extension$hashCode($this: Double) = $this.hashCode
}

Note that the two divide methods end up with the same type in object Meter. (The fact that they also have the same body is accidental.) That's why we needed to distinguish them by adding an integer number. The same situation can arise in other circumstances as well: two overloaded methods might end up with the same type after erasure. In the general case, Scala would treat this situation as an error, as it would for other types that get erased. So we propose to solve only the specific problem that multiple overloaded methods in a value class itself might clash after erasure.

Further Optimizations?

The proposal foresees that only methods defined directly in a value class get expanded in the companion object; methods inherited from universal traits are unaffected.
For instance, in the example above

m.print

would translate to

new Meter(m).print

We might at some point want to investigate ways in which inherited trait methods can also be inlined. For the moment this is outside the scope of the proposal.
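To close the loop on the zero-overhead claim, here is an illustrative sketch of client code for the Meter class (my own example, not part of the proposal text), annotated with where each call would be routed after the expansion steps above:

```scala
// Illustrative only. After steps 1-4, the locals below erase to plain
// Doubles and the method calls route to Meter's companion object, so no
// Meter box is allocated on the arithmetic path.
val a = new Meter(1.5)
val b = new Meter(2.5)
val sum  = a plus b        // becomes Meter.extension$plus(1.5, 2.5)
val half = sum divide 2.0  // becomes Meter.extension2$divide(4.0, 2.0)
println(half)              // boxing happens only here, since the expected
                           // type is Any; prints "2.0m" per toString
```

Boxing is confined to the points where a Meter flows into a position typed as a reference type, exactly as the adaptation rules in step 3 prescribe.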
https://docs.scala-lang.org/sips/value-classes.html
The dspic-utility.h header file defines two functions for calculating the 16-bit CRC-CCITT value of a stream of bytes. These functions are based on an article by Joe Geluso; unfortunately, it seems that the original site no longer exists, but a copy of the article can be found in the Wayback Machine.

To use these functions, you can use the -I compiler option to add the dsPIC Helper Library source directory to the compiler command line, and then include the header file like so:

#include <crc-ccitt.h>

Call crc_ccitt() with the first byte in the stream, using an initial crc value of CRC_CCITT_INITIAL_VALUE. The function returns the crc value to use in the call for the next byte in the stream. Once all the bytes have been processed, call crc_ccitt_normalize() with crc set to the last value returned by crc_ccitt() to retrieve the actual CRC value.

Here's a simple example that calculates the 16-bit CRC-CCITT value for the string "123456789". A call to test() will return the value 0xE5CC:

#include <crc-ccitt.h>

uint16_t test(void)
{
    const char* str = "123456789";
    uint16_t crc = CRC_CCITT_INITIAL_VALUE;
    while (*str)
        crc = crc_ccitt(crc, *str++);
    return crc_ccitt_normalize(crc);
}
https://www.bourbonstreetsoftware.com/dspic-helper/dspic-helper-Z-H-8.html
At the end of my visit to 8th Light, Justin Martin was kind enough to give me a ride to the train station for my train back to O'Hare. Just before he left he asked me an interesting question, which I then posted to twitter:

Liam McLennan: . @JustinMartinM asked what I think is the most important attributes of craftsmen. I said, "desire to learn and humility". What's yours? 6:25 AM Apr 17th via TweetDeck

Several people replied with excellent contributions. I thought this was an interesting conversation, and I would love to see other people contribute their opinions. My observation is that Alex, Steve, Matt and I seem to have essentially the same answer in different words. It is also interesting to note (as Alex pointed out) that these definitions are very similar to Alt.NET and the lean concept of kaizen.

Over the past twelve months I have been thinking a lot about executable specifications. Long considered the holy grail of agile software development, executable specifications mean expressing a program's functionality in a way that is both readable by the customer and computer verifiable in an automatic, repeatable way. With the current generation of BDD and ATDD tools executable specifications seem finally within the reach of a significant percentage of the development community.

Lately, and partly as a result of my craftsmanship tour, I have decided that soon I am going to have to get a job (gasp!). As Dave Hoover describes in Apprenticeship Patterns, "you … have mentors and kindred spirits that you meet with periodically, [but] when it comes to developing software, you work alone." The time may have come where the only way for me to feel satisfied and enriched by my work is to seek out a work environment where I can work with people smarter and more knowledgeable than myself.
Having been on both sides of the interview desk many times I know how difficult and unreliable the process can be. Therefore, I am proposing the idea of executable resumes. As a journeyman programmer looking for a fruitful work environment I plan to write an application that demonstrates my understanding of the state of the art. Potential employers can download, view and execute my executable resume and judge whether my aesthetic sensibility matches their own. The concept of the executable resume is based upon the following assertion:

A line of code answers a thousand interview questions

Asking people about their experiences and skills is not a direct way of assessing their value to your organisation. Often it simply assesses their ability to mislead an interviewer. An executable resume demonstrates:

The idea of publishing a program to demonstrate a developer's skills comes from Rob Conery, who suggested that each developer should build their own blog engine since it is the public representation of their level of mastery. Rob said:

Luke had to build his own lightsaber – geeks should have to build their own blogs. And that should be their resume.

In honour of Rob's inspiration I plan to build a blog engine as my executable resume. While it is true that the world does not need another blog engine, it is as good a project as any, it is a well understood domain, and I have not found an existing blog engine that I like. Executable resumes fit well with the software craftsmanship metaphor. It is not difficult to imagine that under the guild system master craftsmen may have accepted journeymen based on the quality of the work they had produced in the past. We now understand that when it comes to the functionality of an application, code is the final arbiter. Why not apply the same rule to hiring?
Thursday morning the Illinois public transport system came through for me again. I took the Metra train north from Union Station (which was seething with inbound commuters) to Prairie Crossing (Libertyville). At Prairie Crossing I met Paul and Justin from 8th Light and then Justin drove us to the office. The 8th Light office is in a small business park, in a semi-rural area, surrounded by ponds. Upstairs there are two spacious, open areas for developers. At one end of the floor is Doug Bradbury's walk-and-code station; a treadmill with a desk and computer so that a developer can get exercise at work. At the other end of the floor is a hammock. This irregular office furniture is indicative of the 8th Light philosophy, to pursue excellence without being limited by conventional wisdom.

8th Light have a wall covered in posters, each illustrating one person's software craftsmanship journey. The posters are a fascinating visualisation of the similarities and differences between each of our progressions. The first thing I did Thursday morning was to create my own poster and add it to the wall. Over two days at 8th Light I did some pairing with the 8th Lighters and we shared thoughts on software development. I am not accustomed to such a progressive and enlightened environment and I found the experience inspirational. At 8th Light TDD, clean code, pairing and kaizen are deeply ingrained in the culture.

Friday, during lunch, 8th Light hosted a 'lunch and learn' event. Paul Pagel led us through a coding exercise using micro-pomodori. We worked in pairs, focusing on the pedagogy of pair programming and TDD. After lunch I recorded this interview with Paul Pagel and Justin Martin. We discussed 8th Light, craftsmanship, apprenticeships and the limelight framework.

Interview with Paul Pagel and Justin Martin

My time at Didit, Obtiva and 8th Light has convinced me that I need to give up some of my independence and go back to working in a team.
Craftsmen advance their skills by learning from each other, and I can't do that working at home by myself. The challenge is finding the right team, and becoming a part of it.

I like Chicago. It is a great city for travellers. From the moment I got off the plane at O'Hare everything was easy. I took the train to 'the Loop' and walked around the corner to my hotel, Hotel Blake on Dearborn St. Sadly, the elevated train lines in downtown Chicago remind me of 'Shall We Dance'. Hotel Blake is excellent (except for the breakfast) and the concierge directed me to a pizza place called Lou Malnati's for Chicago-style deep-dish pizza. Lou Malnati's would be a great place to go with a group of friends. I felt strange dining there by myself, but the food and service were excellent. As usual in the United States the portion was so large that I could not finish it, but oh how I tried.

Dave Hoover, who invited me to Obtiva for the day, had asked me to arrive at 9:45am. I was up early and had some time to kill so I stopped at the Willis Tower, since it was on my way to the office. Willis Tower is 1,451 feet (442 m) tall and has an observation deck at the top. Around the observation deck are a set of acrylic boxes, protruding from the side of the building. Brave souls can walk out on the perspex and look between their feet all the way down to the street. It is unnerving.

Obtiva is a progressive, craftsmanship-focused software development company in downtown Chicago. Dave even wrote a book, Apprenticeship Patterns, that provides a catalogue of patterns to assist aspiring software craftsmen to achieve their goals. I spent the morning working in Obtiva's software studio, an open XP-style office that houses Obtiva's in-house development team. For lunch Dave Hoover, Corey Haines, Cory Foy and I went to a local Greek restaurant (not Dancing Zorbas). Dave, Corey and Cory are three smart and motivated guys and I found their ideas enlightening.
It was especially great to chat with Corey Haines since he was the inspiration for my craftsmanship tour in the first place. After lunch I recorded a brief interview with Dave. Unfortunately, the battery in my camera went flat so I missed recording some interesting stuff.

Interview with Dave Hoover

In the evening Obtiva hosted an rspec hackfest with David Chelimsky and others. This was an excellent opportunity to be around some of the very best ruby programmers. At 10pm I went back to my hotel to get some rest before my train north the next morning.

On Monday I was at Didit for my first ever craftsmanship visit. Didit seem to occupy a good part of a nondescript building in Rockville Centre, Long Island. Since I had arrived early from Seattle I had some time to kill, so I stopped at the Rockville Diner on the corner of N Park Ave and Sunrise Hwy. I thoroughly enjoyed the pancakes and the friendly service. After walking to the Didit office I met Rik Dryfoos, the Didit Engineering Manager who organised my visit, and got the introduction to Didit and the work they are doing. I spent the morning in the room shared by the Didit developers, who are working on some fascinating deep engineering problems. After lunch at a local Thai place I set up a webcam to record an interview with Rik and Matt Roman (Didit VP of Engineering). I had a lot of trouble with the webcam, including losing several minutes of conversation, but in the end I was very happy with the result. Here are the full interviews with Rik and Matt:

Interview with Rik Dryfoos

Interview with Matt Roman

We had a great chat, much of which is captured in the recording. It was such great conversation that I almost missed my train to Manhattan. I'm sure Didit will continue to do well with such a dedicated and enthusiastic team. I sincerely thank them for hosting me for the day. If you are looking for a true agile environment and the opportunity to work with a high quality team then you should talk to Didit.
While I was in the New York area Stephen Bohlen graciously organised an Alt.NET dinner. I left Rockville Centre on the 17:15 train, thinking I had plenty of time to get to Toloache Mexican Bistro on W 50th St. However, when I changed at Penn Station I took the service downtown, instead of uptown. I corrected that mistake and made it to 51st St, but then ended up in completely the wrong place because I did not understand the street numbering system. For future reference I now have the following rules for NYC navigation:

Having gotten totally confused I called Steve, who helped me find the restaurant. I still had my luggage, which we stowed in a corner. Over some decent Mexican food we had some great discussions about Alt.NET, the 2010 conference, and other things of interest to Alt.NET folks. Thanks to Steve for organising and to all the guys who turned up.

Arriving at JFK, at dawn, is beautiful. From above 1,000ft I can see no crime, poverty or ugliness – just the dark orange sunrise-through-smog. The Atlantic appears calm, and I take that as a good sign. Today is the first day of my software craftsmanship tour. I will be visiting three of the shining lights of the software industry over five days, exchanging ideas and learning. Arriving on the red eye from Seattle I feel like hell. My lips, not used to the dry air, are cracked and bleeding. I get changed in the JFK restroom and make my way from the airport. Following Rik's directions I take the AirTrain to Jamaica. Rik is an engineering manager at Didit in Long Island, the first stop on my tour. From Jamaica I take the Long Island Rail Road train to Rockville Centre, home of Didit.

The refactoring I'm talking about is recommended by resharper when it sees a lambda that consists entirely of a method call that is passed the object that is the parameter to the lambda.
Here is an example:

public class IWishIWasAScriptingLanguage
{
    public void SoIWouldntNeedAllThisJunk()
    {
        (new List<int> {1, 2, 3, 4}).Select(n => IsEven(n));
    }

    private bool IsEven(int number)
    {
        return number%2 == 0;
    }
}

When resharper gets to n => IsEven(n) it underlines the lambda with a green squiggly, telling me that the code can be replaced with a method group. If I apply the refactoring the code becomes:

public class IWishIWasAScriptingLanguage
{
    public void SoIWouldntNeedAllThisJunk()
    {
        (new List<int> {1, 2, 3, 4}).Select(IsEven);
    }

    private bool IsEven(int number)
    {
        return number%2 == 0;
    }
}

The method group syntax implies that the lambda's parameter is the same as the IsEven method's parameter. So a readable, explicit syntax has been replaced with an obfuscated, implicit syntax. That is why the method group refactoring is evil.

Talking about GIT

Closing circle

One of this morning's sessions at Alt.NET 2010 discussed BDD. Charlie Poole expressed the opinion, which I have heard many times, that BDD is just a description of TDD done properly. For me, the core principles of BDD are:

If we go back to Kent Beck's TDD book neither of these elements is mentioned as being core to TDD. BDD is an evolution of TDD. It is a specialisation of TDD, but it is not the same as TDD. Discussing BDD, and building specialised tools for BDD, is valuable even though the difference between BDD and TDD is subtle. Further, the existence of BDD does not mean that TDD is obsolete or invalidated.

Sinatra is a Ruby DSL for building web applications. It is distinguished from its peers by its minimalism. Here is hello world in Sinatra:

require 'rubygems'
require 'sinatra'

get '/hi' do
  "Hello World!"
end

A haml view is rendered by:

get '/' do
  haml :name_of_your_view
end

Haml is also new to me. It is a ruby-based view engine that uses significant white space to avoid having to close tags.
A hello world web page in haml might look like:

%html
  %head
    %title Hello World
  %body
    %div Hello World

You see how the structure is communicated using indentation instead of opening and closing tags. It makes views more concise and easier to read.

Based on my syntax highlighter for Gherkin I have started to build a sinatra web application that publishes syntax highlighted gherkin feature files. I have found that there is a need to have features online so that customers can access them, and so that they can be linked to project management tools like Jira, Mingle, trac etc.

The first thing I want my application to be able to do is display a list of the features that it knows about. This will happen when a user requests the root of the application. Here is my sinatra handler:

get '/' do
  feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
  @features = feature_service.features(settings.feature_path, settings.feature_extensions)
  haml :index
end

The handler and the view are in the same scope so the @features variable will be available in the view. This is the same way that rails passes data between actions and views. The view to render the result is:

%h2 Features
%ul
  - @features.each do |feature|
    %li
      %a{:href => "/feature/#{feature.name}"}= feature.name

Clearly this is not a complete web page. I am using a layout to provide the basic html page structure. This view renders an <li> for each feature, with a link to /feature/#{feature.name}. Here is what the page looks like:

When the user clicks on one of the links I want to display the contents of that feature file.
The required handler is:

get '/feature/:feature' do
  @feature_name = params[:feature]
  feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
  # TODO replace with feature_service.feature(name)
  @feature = feature_service.features(settings.feature_path, settings.feature_extensions).find do |feature|
    feature.name == @feature_name
  end
  haml :feature
end

and the view:

%h2= @feature.name
%pre{:class => "brush: gherkin"}= @feature.description
%div= partial :_back_to_index
%script{:type => "text/javascript", :src => "/scripts/shCore.js"}
%script{:type => "text/javascript", :src => "/scripts/shBrushGherkin.js"}
%script{:type => "text/javascript" } SyntaxHighlighter.all();

Now when I click on the Search link I get a nicely formatted feature file:

If you would like to see the full source it is available on bitbucket.
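One detail worth a closer look is the '/feature/:feature' pattern. As a rough, hedged illustration (this is not Sinatra's actual routing code), a named URL segment can be reduced to a regexp with named captures, which is essentially how params[:feature] gets its value:

```ruby
# Rough illustration only -- not Sinatra's real implementation. A route
# pattern like '/feature/:feature' is turned into a regexp whose named
# capture groups supply the params hash.
def compile_route(pattern)
  source = pattern.gsub(/:(\w+)/) { "(?<#{Regexp.last_match(1)}>[^/]+)" }
  Regexp.new("\\A#{source}\\z")
end

route = compile_route('/feature/:feature')
match = route.match('/feature/Search')
params = { feature: match[:feature] } if match
# params => { feature: "Search" }
```

The real library handles splats, optional segments and encoding as well, but the capture-group idea is the core of it.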
http://geekswithblogs.net/liammclennan/archive/2010/04.aspx
Styling your React component is an important part of any real world application. We can style React components in a couple of ways, such as:

- inline styling
- CSS modules
- emotion
- styled-components

We will talk about styled-components in this article. We are gonna create one simple animated loading spinner component.

We can install the package from npmjs using the npm or yarn CLI.

npm i styled-components --save

Or

yarn add styled-components

We can import that in our component module like

import styled from "styled-components";

Now I will use the styled API to create the spinner. We are using one DIV as a target for that spinner.

const StyledSpinner = styled.div`
  border: 16px solid #f3f3f3;
  border-radius: 50%;
  border-top: 16px solid #3498db;
  width: 120px;
  height: 120px;
  -webkit-animation: spin 2s linear infinite; /* Safari */
  animation: spin 2s linear infinite;

  @keyframes spin {
    0% { transform: rotate(0deg); }
    100% { transform: rotate(360deg); }
  }
`;

Now we can use this like a react component.

class Spinner extends Component {
  render() {
    return (
      <StyledSpinner />
    );
  }
}

We don't need any other tool or webpack to build this CSS. It will work just fine.

I will continue writing more on styled-components.

Update

Part 2 is available at Style React component with styled-components : Part-2

Cheers! 👋

As I am trying to contribute contents on the Web, you can buy me a coffee for my hours spent on all of these ❤️😊🌸

PS: You can also have a look on my blog site
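A side note on the backtick syntax: styled.div`...` works because it is a JavaScript tagged template literal. The sketch below (fakeStyled is a made-up stand-in, not the real library) shows the mechanism: the tag function receives the static string chunks plus the interpolated values and can assemble the final CSS string.

```javascript
// Minimal sketch of the tagged-template mechanism styled-components
// builds on. `fakeStyled` is hypothetical: it just joins the literal's
// static chunks with the interpolated values into one CSS string.
function fakeStyled(strings, ...values) {
  return strings.reduce(
    (css, chunk, i) => css + chunk + (i < values.length ? values[i] : ''),
    ''
  );
}

const size = 120;
const css = fakeStyled`
  width: ${size}px;
  height: ${size}px;
`;
// css now contains "width: 120px;" and "height: 120px;"
```

The real library goes much further (it hashes the CSS into a class name and injects a stylesheet), but this is why arbitrary JavaScript expressions, including props-based functions, can be interpolated into the styles.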
https://dev.to/destro_mas/style-react-component-with-styled-components-part-1-19fl
ghudson@tigris.org writes:

> Added: trunk/subversion/include/svn_ra_svn.h
> ==============================================================================
> --- trunk/subversion/include/svn_ra_svn.h (original)
> +++ trunk/subversion/include/svn_ra_svn.h Mon Dec 2 21:55:41 2002
> @@ -0,0 +1,208 @@
> +/*
> + * svn_ra_svn.h : libsvn_ra_svn functions used by the server
> + *
> + * ====================================================================
> + *.
> + * ====================================================================
> + */
> +
> +#ifndef SVN_RA_SVN_H
> +#define SVN_RA_SVN_H
> +
> +#include <svn_delta.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif /* __cplusplus */
> +
> +/* The well-known svn port number. Right now this is just a random
> + * port in the private range; I am waiting for a real port
> + * assignment. -ghudson */
> +#define SVN_RA_SVN_PORT 51662
> +
> +/* A specialized form of SVN_ERR to deal with errors which occur in an
> + * svn_ra_svn_command_handler. An error returned with this macro will
> + * be passed back to the other side of the connection. Use this macro
> + * when performing the requested operation; use the regular SVN_ERR
> + * when performing I/O with the client. */
> +#define CMD_ERR(expr) \

While svn_ra_svn.h is unlikely to be included in an application, it is a public header file, and it does get installed with the rest of Subversion. As such it is probably a bad idea to have a macro with a name that does not start with SVN_.

--
Philip Martin

This is an archived mail posted to the Subversion Dev mailing list.
https://svn.haxx.se/dev/archive-2002-12/0405.shtml
How to Speed Up Your Trade Execution using Python?

Using the concepts of Concurrency with Angel Broking SmartAPI

A lot of people in the industry talk about Python being one of the slower languages as compared to the others out there, and that is absolutely true as well; we have discussed some pros and cons of various languages used in the financial world here. But a lot of times, retail investors/traders give too much weight to this disadvantage, which is completely irrational because, within Python, you have lots of avenues for writing smart code which can execute faster than you expected, and unless you are into HFT (High-Frequency Trading), I will tend to disagree that Python is slow. It's not! And if you think you have dreams of doing HFT and you are a retail investor, choosing a programming language should be the least of your concerns as there are many more headaches. You can read my HFT article to find out more.

In this article, I aim to show you how to execute your trades in parallel by using the concepts of Concurrency/Parallelism. If those words scare you at the moment, don't worry; we will take baby steps to cover them in this article. But beware, this will be a long, detailed read, but one that you will enjoy and will find mentally stimulating.

For the purposes of this article, we will be using the SmartAPI as an example. SmartAPI is powered by Angel Broking, which is a renowned broking house in India. I have personally used the APIs of most of the brokers out there, but why I chose SmartAPI for this article is because:

- Free of Charge (unlike Kite API by Zerodha, which charges 2000 Rs a month)
- Well Maintained Python Client (Available on Github)
- Decent API Documentation
- Forum to ask questions and solve issues
- 10 Min Account Opening (Only applicable if you are not an NRI)

For clarity, this post is not sponsored by Angel Broking; using them for this piece is my personal preference.
There are no affiliate links in this article, so you can be sure we are not biased. Okay, so let's get into the main piece; before we show you how the code will work, let's first understand what Concurrency is; if you are already familiar with the topic, please skip to the next section.

Concurrency in Python

See, you will find lots of definitions of Concurrency on the internet. Still, in very simple terms, concurrency means being able to issue new requests while you are still waiting for responses to existing ones. Confusing? I am sure it will be; these are difficult concepts, but let's take a simple multitasking example. Let's assume you are making a cake and you have made all the dough and everything, and now it's time to put it in the oven, which will take around 45-60 mins for the cake to bake. You can either wait for the whole 45-60 mins and do nothing, OR you can utilize this time and prepare your topping/icing, which will go on the cake. Which one sounds more sensible to you? The second option, right? Why waste your precious time waiting and then doing something when you can do that right now? Yes, that is concurrency.

Thank You Real Python for an apt image on processing things in a loop.

So how will this fit into our SmartAPI example? Let's say you have 20 trades to execute right away; you can either send the trade request via API one by one 20 times (this is the example diagram above where you send one request, wait, and then send another), OR you can send multiple trade requests (respecting the API limits) in one go. For example, you are buying RELIANCE-EQ, and you are waiting for that 200 milliseconds for the request to go to the API server, get accepted, and return back with an order number; why not utilize that 200 milliseconds to place another buy request on ITC-EQ? Now the below diagram will be easier to understand.

Thank You Real Python for an apt image on processing things concurrently.
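The oven analogy maps directly onto I/O-bound code like API calls. Here is a small self-contained sketch (not SmartAPI code — the network wait is simulated with time.sleep) of the same batch of "requests" run one by one and then concurrently:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(symbol):
    # simulate ~0.2s of network latency, like waiting on the broker's server
    time.sleep(0.2)
    return "order placed for {}".format(symbol)

symbols = ["RELIANCE-EQ", "ITC-EQ", "INFY-EQ", "TCS-EQ"]

# one by one: total time is roughly 0.2s * number of symbols
start = time.perf_counter()
sequential = [fake_request(s) for s in symbols]
seq_time = time.perf_counter() - start

# concurrently: the waits overlap, so total time is roughly 0.2s
start = time.perf_counter()
with ThreadPoolExecutor() as executor:
    concurrent_results = list(executor.map(fake_request, symbols))
conc_time = time.perf_counter() - start

print("sequential: {:.2f}s, concurrent: {:.2f}s".format(seq_time, conc_time))
```

Same results either way; the only thing concurrency buys you here is wall-clock time, because the CPU was idle during those waits anyway.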
So imagine placing 20 trades in a loop one by one would have taken 10 seconds to perform; the same task concurrently might have taken just 2.5 seconds! Tons of saving in terms of execution time; if you don't believe this, wait until we do a live example.

Python Code Implementation on Concurrency using SmartAPI

Before getting into the code, here are my assumptions if you want to follow all the code below:

- You know the basics of Python and a little bit of data manipulation using pandas
- You have registered for SmartAPI and have a broking account with Angel Broking.
- You are doing this after Indian market hours to avoid real execution in the markets.
- You have installed the pandas and smartapi-python libraries. If not, please follow the instructions here

Disclaimer: All the code is for educational purposes; please do not blame TradeWithPython if you end up losing money, and none of this is investment advice.

1. Importing Necessary Libraries

from smartapi import SmartConnect
import pandas as pd
from datetime import datetime #to calculate the execution time
import time

2. Logging into your Broking Account using SmartAPI

obj = SmartConnect(api_key = "your_api_key")
#this obj will be used later on to make all the trade requests.

#Let's login
data = obj.generateSession("Your Client ID","Your Password")

#verifying if the login was successful
print(data)

If all is well, you should see something like below; if you receive failure messages or any error codes, it is best to contact the Angel Broking Support Team or the SmartAPI Forum. The image has been redacted below for confidentiality reasons.

3. Let's place a sample trade using SmartAPI

WARNING: Do not attempt to run this piece of code in live market hours (9:15 AM - 3:30 PM IST), or else you risk real execution of the trade.
try:
    orderparams = {
        "variety": "NORMAL",
        "tradingsymbol": "SEQUENT-EQ",
        "symboltoken": "14296",
        "transactiontype": "BUY",
        "exchange": "NSE",
        "ordertype": "MARKET",
        "producttype": "DELIVERY",
        "duration": "DAY",
        "price": "256",
        "quantity": "100",
        "triggerprice": "0"
    }
    orderId = obj.placeOrder(orderparams) #using the obj we initiated earlier
    print("The order id is: {}".format(orderId))
except Exception as e:
    print("Order placement failed: {}".format(e))

If all goes well and your login was successful, you will see a message in your console or Jupyter notebook saying The order id is: 210611000XXXXXX

Now let's decode what we did in the above piece of code; first, we created a dict with the variable name orderparams which basically contains all the information about the trade:

- Is it an Intraday Trade or a Delivery Trade?
- What is the Trading Symbol?
- What is the Symbol Token? (You can find this here)
- Which Exchange are you trading on?
- When do you want the order to cancel if not executed? (Immediate or wait for the day)
- What is the Price and Quantity?
- What is the Order Type? (MARKET, LIMIT, STOPLOSS, etc.)

You can find the comprehensive list of possible parameters in the SmartAPI Docs. Now, these parameters are something that you have to select anyway when you are trading via their Mobile App or their Website, so it's no surprise that the API also needs this data to execute trades.

And then you use the obj we defined earlier to call placeOrder, passing in the orderparams; if the order is placed in the account, it generates an orderId. Hopefully, no doubts here.

4. Creating a List of Trades via an Excel Spreadsheet

Now, let's say you want to execute 20 trades in the market, and you have to create the orderparams dictionary manually for each and every trade, an extremely painful process, right?
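One thing worth noting: every field in orderparams is a plain string, so a typo (say, "BUYY") only surfaces when the API rejects the order. A small generic guard — purely illustrative, this helper is not part of the SmartAPI client — can catch such mistakes locally before the request is ever sent:

```python
# Hypothetical helper, NOT part of SmartAPI: sanity-checks an orderparams
# dict before it would be handed to obj.placeOrder().
REQUIRED_KEYS = {
    "variety", "tradingsymbol", "symboltoken", "transactiontype",
    "exchange", "ordertype", "producttype", "duration",
    "price", "quantity", "triggerprice",
}

def validate_orderparams(params):
    missing = REQUIRED_KEYS - params.keys()
    if missing:
        raise ValueError("missing keys: {}".format(sorted(missing)))
    if params["transactiontype"] not in ("BUY", "SELL"):
        raise ValueError("bad transactiontype: " + params["transactiontype"])
    if not params["quantity"].isdigit() or int(params["quantity"]) <= 0:
        raise ValueError("bad quantity: " + params["quantity"])
    return params

good = {
    "variety": "NORMAL", "tradingsymbol": "SEQUENT-EQ", "symboltoken": "14296",
    "transactiontype": "BUY", "exchange": "NSE", "ordertype": "MARKET",
    "producttype": "DELIVERY", "duration": "DAY", "price": "256",
    "quantity": "100", "triggerprice": "0",
}
validate_orderparams(good)  # passes silently

# a typo is caught locally instead of by the broker's server
try:
    validate_orderparams(dict(good, transactiontype="BUYY"))
except ValueError as err:
    bad_reason = str(err)
```

This costs nothing at runtime and saves a round trip to the server for obviously malformed orders.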
Let's simplify this; how about we just give our program an excel spreadsheet like below and then wrangle this data around to convert it into a list of orderparams which we can loop over? Confused? Sorry about that, let's simplify this step by step. You can download this sample data HERE.

- Read this data into Python

df = pd.read_excel("path/to/file.xlsx")
print(df.head())

- Create a for loop to generate the orderparams dict by iterating over each row in the dataframe.

trade_list = [] #empty list

#looping over each row in the dataframe and storing
#the value in each column to generate orderparams dict
#we use str to convert to strings
for index, rows in df.iterrows():
    new_dict = {"variety": str(rows['variety']),
                "tradingsymbol": str(rows['tradingsymbol']),
                "symboltoken": str(rows['symboltoken']),
                "transactiontype": str(rows['transactiontype']),
                "exchange": str(rows['exchange']),
                "ordertype": str(rows['ordertype']),
                "producttype": str(rows['producttype']),
                "duration": str(rows['duration']),
                "price": str(rows['price']),
                "quantity": str(rows['quantity']),
                "triggerprice": str(rows['triggerprice'])}
    trade_list.append(new_dict)

print(trade_list)

What you see in the above screenshot is basically a list that has the orderparams dictionary for each row of trade we uploaded; now, we just need to pass each and every dict into the main obj we created earlier to place a trade on the account.

5. Creating a Function to Place Orders

Let's create a quick function where we can just pass the orderparams to place the order.

def place_order(orderparams):
    try:
        orderID = obj.placeOrder(orderparams)
        print("The order id is: {}".format(orderID))
    except Exception as e:
        print("Order placement failed: {}".format(e))

6. Placing Orders using the Function in a Normal Loop

start = datetime.now()
for trades in trade_list:
    place_order(trades)
end = datetime.now()
print(end - start)

As you can see in the above results, we placed around 34 trades in 12.33 seconds in a normal loop, where we waited for each request to return an orderId and then made the follow-on request to place another trade. This time taken should be more or less in the range of 11-14 seconds, depending on your internet speed.

7. Creating a Function to Place Orders Concurrently

import concurrent.futures #needed for the ThreadPoolExecutor below

def place_multiple_orders(tradeList):
    with concurrent.futures.ThreadPoolExecutor() as executor:
        executor.map(place_order, tradeList)

The above looks like a very simple function that uses Python's internal library concurrent.futures in the background. What is happening behind the scenes is magic where Python basically creates multiple threads (think of them as isolated spaces in your brain to do various tasks), and executor.map takes the normal place_order function and passes our list of orderparams into it.

place_multiple_orders(trade_list)

Let's see the output of the above and see how quickly we are able to place the 34 trades which earlier took ~12.5 seconds to execute.

Oh Shit 😟, what happened here? Why are we getting these ugly errors? Well, that is because of rate-limiting factors that SmartAPI has; if you visit this page, you will see how many requests you are allowed to make every second for each type of function.

As you can see, we are only allowed to place 10 trades per second, but in parallel processing, our function is making more than 10 requests within that one second, which is why SmartAPI's server is rejecting it and sending an error code. Well, important stuff to keep in mind, but how can we solve this now?
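One generic way to respect a "N requests per second" style limit is to throttle calls at the source rather than reacting to failures. The sketch below is not SmartAPI-specific and the limit numbers are made up for the demo; it gates each call through a small sliding-window rate limiter:

```python
import threading
import time
from collections import deque
from concurrent.futures import ThreadPoolExecutor

class RateLimiter:
    """Allow at most max_calls call-starts per `period` seconds."""
    def __init__(self, max_calls, period=1.0):
        self.max_calls = max_calls
        self.period = period
        self.lock = threading.Lock()
        self.starts = deque()  # timestamps of recent call starts

    def acquire(self):
        while True:
            with self.lock:
                now = time.monotonic()
                # drop starts that have fallen out of the window
                while self.starts and now - self.starts[0] >= self.period:
                    self.starts.popleft()
                if len(self.starts) < self.max_calls:
                    self.starts.append(now)
                    return
                wait = self.period - (now - self.starts[0])
            time.sleep(wait)

# toy limit for the demo: at most 2 call-starts per 0.2 seconds
limiter = RateLimiter(max_calls=2, period=0.2)
stamps = []

def throttled_task(i):
    limiter.acquire()          # blocks until a slot in the window is free
    stamps.append(time.monotonic())

with ThreadPoolExecutor(max_workers=6) as ex:
    list(ex.map(throttled_task, range(6)))
```

In the real place_order case you would call something like limiter.acquire() right before obj.placeOrder(...), with max_calls=10 and period=1.0, so the threads themselves never exceed a ten-per-second cap in the first place.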
As you can see, the place_multiple_orders function uses the place_order function to actually place orders concurrently, so let's put in a wait time of one second in our place_order function in case of an Exception and then try again; maybe that will solve the problem?

def place_order(orderparams):
    try:
        orderID = obj.placeOrder(orderparams)
        print("The order id is: {}".format(orderID))
    except Exception as e: #1st error
        time.sleep(1) #ensure you have imported time at the top
        try:
            orderID = obj.placeOrder(orderparams)
            print("The order id is: {}".format(orderID))
        except Exception as e: #2nd error
            time.sleep(1)
            try:
                orderID = obj.placeOrder(orderparams)
                print("The order id is: {}".format(orderID))
            except Exception as e:
                print("Order placement failed: {}".format(e))

def place_multiple_orders(tradeList):
    with concurrent.futures.ThreadPoolExecutor() as executor:
        executor.map(place_order, tradeList)

So in the above place_order function, if we get an error while placing an order, the code forces a 1-second wait, which helps us abide by the rate limits of SmartAPI. If the code fails a second time, we force another 1-second wait and try the trade again; if it fails a third time, the failure message is printed. But again, I don't think you will be executing 35 trades in one single go in one single account. (Think of all the brokerage charges Angel Broking will make, just kidding 😛)

8. Placing Orders Concurrently using the place_multiple_orders Function

start = datetime.now()
place_multiple_orders(trade_list)
end = datetime.now()
print(end - start)

Tada 🎉, we were able to place the same 34 orders in almost 1/4th of the time (3.42 seconds); that's the power of concurrency. Looking at the image above, when you notice the highlighted parts, you will see some trades were placed faster than other trades; for example, an order ending with 42167 was placed before orders ending with 42165 and 42166.

9. Conclusion

First of all, I hope you liked this article; I recently implemented this code for one of my clients who wanted parallel execution in their strategy, so it was well researched, and while the production code is more advanced, the code in this article has been simplified and made easy to understand. You will find several articles on concurrency on the internet, but none of them shows a live example in this detail.

10. Closing Points

I hope you understood the concepts and the code; if you did not, please do ping me on LinkedIn or you can email me. I am happy to help each and every one of my readers with their doubts, but my only request is to research your doubts on Google/StackOverflow before reaching out 😃 You can also find the full code on Github by clicking here!
https://tradewithpython.com/concurrency-in-python-using-smartapi
So this recipe is a short example on how to change the frequency of a timeseries in Python. Let's get started.

import pandas as pd

Let's pause and look at this import. Pandas is generally used for performing mathematical operations, preferably over arrays.

df = pd.DataFrame({
    'timestamp': pd.to_datetime(['2021-01-01 00:00:00.40',
                                 '2021-01-01 00:00:00.46',
                                 '2021-01-01 00:00:00.49']),
    'X': [7, 10, 3]})
print(df)

Here we have taken a random example with an irregular frequency interval.

df = df.set_index('timestamp').asfreq('3ms', method='ffill')
print(df)

Here, we have set the frequency to 3 milliseconds and used the 'ffill' (forward fill) method to fill in the other columns' values. Once we run the above code snippet, the index steps every 3 milliseconds from the first timestamp, and X is forward-filled between the original observations.
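To see what asfreq with method='ffill' is doing under the hood, here is the same idea in plain Python (a sketch using the recipe's numbers, no pandas required): build a regular 3 ms grid from the first to the last timestamp and carry the most recent observed value forward.

```python
# observations as (milliseconds past midnight, value), matching the recipe:
# 00:00:00.400 -> 7, 00:00:00.460 -> 10, 00:00:00.490 -> 3
observations = [(400, 7), (460, 10), (490, 3)]

def as_regular_frequency(points, step_ms):
    """Resample irregular (time, value) points onto a regular grid,
    carrying the last seen value forward (like method='ffill')."""
    times = [t for t, _ in points]
    grid = range(times[0], times[-1] + 1, step_ms)
    resampled = []
    i = 0  # index of the most recent original observation
    for t in grid:
        # advance past every original point that occurred at or before t
        while i + 1 < len(points) and points[i + 1][0] <= t:
            i += 1
        resampled.append((t, points[i][1]))
    return resampled

regular = as_regular_frequency(observations, 3)
print(regular[:3])  # [(400, 7), (403, 7), (406, 7)]
```

The value 7 is repeated on every 3 ms step until the observation at .460 appears on the grid, exactly as in the pandas output.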
https://www.projectpro.io/recipes/change-frequncy-of-timeseries-python
Metaprogramming Roundup: Speed, Ruby Macros, Screencasts

Code using define_method with a Proc to define utility methods was running considerably slower than code using statically defined methods (ie. defined with def method_name). However, in a follow-up article, Matt finds the reason and a solution for the speed difference. The reason for the speed difference: methods created with define_method carry the overhead of invoking a Proc on every call, which is slower than executing a method body defined with def.

Matt provides a modified version of the code he previously benchmarked using class_eval, which now runs as fast as code using statically defined methods. A useful bit of information to keep in mind when working with metaprogramming.

On the other end of the metaprogramming spectrum is Reginald Braithwaite's Rewrite gem. The Rewrite gem uses an approach found in languages like LISP or Scheme called Macros or Macro expansion. In Ruby, this approach requires stepping (slightly) outside the language - normally Ruby code doesn't have access to a representation of loaded Ruby code - Reflection stops at the method body level. To get access to code - to get the AST from the Ruby interpreter - the ParseTree extension is used in MRI 1.8. Rubinius can support ParseTree s-exprs (eg. the Debugger allows access to the s-exprs of methods), and there is an incomplete version of ParseTree for JRuby (it lacks support for accessing the AST from certain types of methods). ParseTree doesn't support Ruby 1.9.

The current incarnations of the Rewrite gem are used to solve one particular issue, but the principles in it will be generally usable. What it solves now is the problem of adding methods to classes (Open Classes). The problem is the global nature of modifying a class. If one piece of code adds, say, a foo method to Object, all code in the runtime will see this method as well - which can cause problems like name clashes if other bits of code add a method with the same name to the same class. The solution implemented with the Rewrite gem: limit the visibility of an added method to the scope of a block.
It's important to keep in mind that the current ideas implemented in the Rewrite gem are ways to experiment with Macros in Ruby, and many more ideas are conceivable. While Ruby's convenient syntax for Blocks allows for a concise notation for many things, passing multiple pieces of lazily evaluatable code gets harder, requiring more verbose ways of creating Procs. For an introduction to concepts like macros, a good place to start is "Practical Common Lisp". There are other projects making use of ParseTree to analyze code - InfoQ showed how Ambition, Sequel and merb use ParseTree. Also see Joel Klein's discussion of what he refers to as lazy-lambdas, which are somewhere between Macros and Lambdas.

Finally, for everyone who's not familiar with the possibilities of metaprogramming in Ruby, a new series of screencasts by Dave Thomas (PragDave) gives an introduction to the concepts. The screencast series has already received a few very positive reviews: a review by Antonio Cangiano and a review by Mike Riley.

Ruby macros - by Drew Olson: I use ruby2ruby (which uses ParseTree) to accomplish this as well.
http://www.infoq.com/news/2008/07/metaprogramming-roundup/
Using Java from Ruby with JRuby IRB

Last month I visited a joint meeting for the Richmond Java User Group and Central Virginia Ruby Enthusiasts' Group. The audience was evenly split between Java and Ruby developers, which made for an ideal setting for my presentation about Java-Ruby interoperability that JRuby facilitates. On the tails of another great JRubyConf last week, I was inspired to share a summary of my presentation and my slides with interested folks who weren't able to attend the talk.

Using a Java library from Ruby

In this article, I'll focus on using a Java library from Ruby. You see, if you were to tinker with a single Java method, you would need to write a complete program with “public static void main(String[] args)”. As we will demonstrate here, with JRuby IRB, it becomes almost trivial to play with Java methods. We will use the unique Java library Akka here. According to the Akka website, “using the Actor Model together with Software Transactional Memory [Akka raises] the abstraction level and provides a better platform to build correct concurrent and scalable applications.”

The Akka Getting Started Tutorial

To begin, let us look at the Akka Getting Started Tutorial written in Java, and translate it into Ruby. The example computes π using the Madhava-Leibniz series. Note that in the series, the neighboring terms can be gathered into a smaller work unit without disturbing the identity, or other work units. We created a worker and a few additional workers that communicate by passing a message--a Plain Old Java/Ruby Object with a little extra information--along with the message router acting as a broker. For comparison, here is the Ruby version, and the Java version. The Ruby version is shorter, and requires less ceremony in setting up the various classes involved in the program.

Exploring the code

Now let us walk through the key points in the Ruby version.

Preliminary stuff

require "java"

Line 1: Enable Java integration support in JRuby.
$: << File.join(File.dirname(__FILE__), 'lib')

Line 3: Add the lib directory (that resides in the same directory as the current file) to JRuby's library search paths.

java_import 'akka.actor.Actors'
java_import 'akka.actor.ActorRef'
java_import 'akka.actor.UntypedActor'
java_import 'akka.actor.UntypedActorFactory'
java_import 'akka.routing.CyclicIterator'
java_import 'akka.routing.InfiniteIterator'
java_import 'akka.routing.Routing'
java_import 'akka.routing.UntypedLoadBalancer'
java_import java.util.concurrent.CountDownLatch

Lines 8 through 16: Import the Java library's classes into the current namespace so we don't have to prefix them with Java::…. This is similar to Java's import statements.

def actorOf(&code)
  Actors.actorOf(Class.new do
    include UntypedActorFactory
    define_method(:create) do |*args|
      code[*args]
    end
  end.new)
end

Lines 18 through 25: This is a convenient method to create an Akka actor instance through the use of Ruby's Class.new, which is analogous to anonymous class declaration in Java. This greatly reduces the clutter when used on lines 81 and 125.

class Calculate; end
class Work < Struct.new(:start, :nrOfElements); end
class Result < Struct.new(:value); end

Lines 27 through 29: Define Plain Old Ruby Objects acting as messages between Akka actors. Note there is a lot less noise for these classes than there is for the Java counterparts.

Worker class

class Worker < UntypedActor
  # needed by actorOf
  def self.create(*args)
    new *args
  end

  # define the work
  def calculatePiFor(start, nrOfElements)
    ((start * nrOfElements)...((start + 1) * nrOfElements)).inject(0) do |acc, i|
      acc + 4.0 * (1 - (i.modulo 2) * 2) / (2 * i + 1)
    end
  end

  # message handler
  def onReceive(message)
    if message.kind_of? Work
      work = message
      # perform the work
      result = calculatePiFor(work.start, work.nrOfElements)
      # reply with the result
      context.replyUnsafe(Result.new(result))
    else
      raise IllegalArgumentException.new "Unknown message [#{message}]"
    end
  end
end

Lines 31 through 59 define the Worker class. The part that actually performs the calculation, calculatePiFor, is much more compact with the use of Enumerable#inject. When this actor is sent a message, it executes the onReceive method. Following the Java example, we are examining the class of the message passed; this is admittedly not characteristic of Ruby.

PiRouter class

class PiRouter < UntypedLoadBalancer
  attr_reader :seq

  def initialize(workers)
    super()
    @seq = CyclicIterator.new(workers)
  end
end

Lines 61 through 68 define the PiRouter class. Besides having an instance variable :seq, there isn't much code; much of the work is done in the Akka library itself.

Master class

class Master < UntypedActor
  def initialize(nrOfWorkers, nrOfMessages, nrOfElements, latch)
    super()
    @nrOfMessages, @nrOfElements, @latch = nrOfMessages, nrOfElements, latch
    @nrOfResults, @pi = 0, 0.0

    # create the workers
    workers = java.util.ArrayList.new
    nrOfWorkers.times { workers << Actors.actorOf(Worker).start }

    # wrap them with a load-balancing router
    @router = actorOf { PiRouter.new(workers) }.start
  end

  # message handler
  def onReceive(message)
    if message.kind_of? Calculate
      # schedule work
      @nrOfMessages.times do |start|
        @router.sendOneWay(Work.new(start, @nrOfElements), context)
      end

      # send a PoisonPill to all workers telling them to shut down themselves
      @router.sendOneWay(Routing::Broadcast.new(Actors.poisonPill))

      # send a PoisonPill to the router, telling him to shut himself down
      @router.sendOneWay Actors.poisonPill
    elsif message.kind_of? Result
      # handle result from the worker
      @pi += message.value
      @nrOfResults += 1
      context.stop if @nrOfResults == @nrOfMessages
    else
      raise IllegalArgumentException.new "Unknown message [#{message}]"
    end
  end

  def preStart
    @start = java.lang.System.currentTimeMillis
  end

  def postStop
    # tell the world that the calculation is complete
    puts format("\n\tPi estimate: \t\t%s\n\tCalculation time: \t%s millis",
                @pi, (java.lang.System.currentTimeMillis - @start))
    @latch.countDown
  end
end

Lines 70 through 116 define the Master class. Master responds to two kinds of messages. Calculate sets everything in motion. Result messages are sent from the PiRouter to the Master, which adds up the values of the Result messages until the set number of them has been sent its way.

Pi class

class Pi
  def self.calculate(nrOfWorkers, nrOfElements, nrOfMessages)
    # this latch is only plumbing to know when the calculation is completed
    latch = CountDownLatch.new(1)

    # create the master
    master = Actors.actorOf do
      Master.new(nrOfWorkers, nrOfMessages, nrOfElements, latch)
    end.start

    master.sendOneWay(Calculate.new) # start the calculation
    latch.await # wait for master to shut down
  end
end

Pi.calculate(4, 1000, 1000)

Lines 119 through 133 contain the top-level class with one class method that sets up the concurrency latch and sends the Calculate message to the Master.

Conclusion

This example shows the basics of Java-Ruby interaction in a complete program. These fundamentals are applicable to your first explorations. If you come up with your own harmony of Java and Ruby, be sure to share it with us in the comments section.

Thank you, Richmond! (And beyond?)

I enjoyed sharing JRuby with the user group members I met in Virginia. Thanks again to the Richmond Java User Group and Central Virginia Ruby Enthusiasts' Group for hosting me. We are always excited to share JRuby with a larger audience and to hear your thoughts. If you are interested in arranging a JRuby presentation, drop us a line, and we'll chat!
Share your thoughts with @engineyard on Twitter
https://blog.engineyard.com/2011/using-java-from-ruby-with-jruby-irb