hi all. im new to C++, trying to teach myself from internet tutorials... its slow progress. anyway i wanted to make a program to solve a puzzle. i havent even got on to writing an algorithm for working it out yet. im stuck trying to make a function that prints a 9x9 2 dimensional array. heres the complete program, its more than likely that im going about this all the wrong way:

Code:
#include <iostream>
using namespace std;

void print_table(int array);

int main()
{
    int table[9][9];
    for(int i=0;i<9;i++)
    {
        for(int ii=0;ii<9;ii++)
        {
            table[i][ii] = 0;
        }
    }
    int *ptr;
    ptr = table;
    print_table(ptr);
    cin.get();
}

void print_table(int array)
{
    for(int i=0;i<9;i++)
    {
        for(int ii=0;ii<9;ii++)
        {
            cout << *array[i][ii] << "|";
        }
        cout << "- - - - - - - - -" << endl;
    }
}

my compiler says:

18 C:\Dev-Cpp\Untitled1.cpp cannot convert `int[9][9]' to `int*' in assignment

so im assuming it has something to do with the fact that i dont know how to pass an array to a function, im just guessing. thanks in advance for help. first post here
https://cboard.cprogramming.com/cplusplus-programming/67391-please-can-some1-see-whats-wrong-newbie.html
CC-MAIN-2017-47
refinedweb
204
74.29
Editor’s note: This post was last updated on 16 August 2021 to include updated technology changes and various other updates. However, it may still contain information that is out of date.

Everyone had something to say about React Hooks when they were released, but many developers continued to use render props. With custom Hooks, developers can stop using render props, and in this article, we’ll learn how to do that. Note, however, that render props haven’t completely died off; they have evolved to provide different functionalities.

To run the examples in this post, use Create React App v4.0.3 and React v17.0.3 or above.

What are render props?

Render props are an advanced pattern for sharing logic across components. A component, usually termed a container component, can delegate how a UI looks to other presentation components and only implement business logic. This means we can implement cross-cutting concerns as components by using the render props pattern. The overall purposes of using render props are:

- Sharing code between components
- Using the Context API

Are render props bad?

Render props come with their own issues, some of which appear only when we dig deeper or scale a project.

Render props’ issues with wrappers

To increase the DRYness of our codebase, we often implement many small, granular components so each component deals with a single concern. However, this often leaves developers with many wrapper components nested deeply inside one another. As the number of wrapper components grows, component size and complexity increase while the reusability of a wrapper component might decrease.
Andrew Clark perfectly summed up the issue on Twitter:

> I mean come on (screen shot of actual code I’m playing with right now) pic.twitter.com/Ucc8gaxPMp

Render props’ issues with binding this

Since the wrapper components deal with state or lifecycle methods, they must be class components. With class components, we must bind this properly; otherwise, we risk losing the this context inside functions. The syntax for binding all methods looks ugly and is often a burden for developers.

Render props’ issues with classes

Classes come with a good amount of boilerplate code, which is awful to write every time we convert a functional component into a class component. It turns out that classes are also hard for our build tools to optimize. This incurs a double penalty: it leads to neither a good developer experience nor a good user experience. The React team is even thinking of moving class component support to a separate package in the future.

What are the problems of using render props with pure components?

Using a render prop can negate the advantage of using PureComponent if we create the function inside the render method. This is because the shallow prop comparison always returns false for new props, and, in this case, each render generates a new value for the render prop. Refer to the React docs for more details.

Many of these problems are not entirely the fault of the render props pattern, though. Until now, React did not provide a way of using state or lifecycle methods without involving classes, which is why we had to use classes in container components to implement the render props pattern. However, all of that changes with the introduction of the React Hooks API. React Hooks lets us use state and lifecycle Hooks inside functional components with only a few lines of code.
What’s better, we can implement our own custom Hooks. These Hooks give us an easy and powerful primitive for sharing logic across components. That means we don’t need classes or a render props pattern to share code between components. Before jumping into that, let’s first get a good look at how React Hooks can be used.

What are React Hooks?

As a short description, React Hooks let you use state and other features within functional components without writing a class component. The best way to learn more about something, though, is by using it. So, to use React Hooks, we’ll build a component that shows information by default and lets us update that information by clicking a button.

What’s a React Hook editable item?

What we can observe from this example is that the component has two types of state: one state controls the input field, and the other toggles between the viewer and the editor. Let’s see how this can be implemented with React Hooks:

import React, { useState } from "react";

function EditableItem({ label, initialValue }) {
  const [value, setValue] = useState(initialValue);
  const [editorVisible, setEditorVisible] = useState(false);
  const toggleEditor = () => setEditorVisible(!editorVisible);
  return (
    <main>
      {editorVisible ? (
        <label>
          {label}
          <input
            type="text"
            value={value}
            onChange={event => setValue(event.target.value)}
          />
        </label>
      ) : (
        <span>{value}</span>
      )}
      <button onClick={toggleEditor}>{editorVisible ? "Done" : "Edit"}</button>
    </main>
  );
}

We defined the EditableItem functional component, which takes two props: label, for showing a label above the input field, and initialValue, for providing the default info. Calling the useState Hook gives us an array of two items: one for reading the state and the other for updating it. We hold the state for the controlled input in the value variable and the state updater function in the setValue variable. The component shows the viewer when editorVisible is false and the editor when it is true.
Since we want to show the viewer by default, we must set the editorVisible value to false initially. That’s why we pass false when calling useState. To toggle between the viewer and editor, we define the toggleEditor function, which sets the editorVisible state to its opposite. As we want to call this function whenever the user clicks the button, we assign it as the button’s onClick prop.

That’s how easy using React Hooks can be, but it doesn’t stop here. Hooks have one more trick: custom Hooks.

What are custom Hooks in React?

Custom Hooks in React are mechanisms that reuse stateful logic, according to the React docs. In our use case, we can see that the editorVisible state is a toggler, and toggling is a common use case in our UIs. If we want to share the toggling logic across components, we could define a Toggler component and use the render props pattern to share the toggling method. But wouldn’t it be easier if we could just have a function for that instead of messing with components? Enter React custom Hooks.

With custom Hooks, we can extract the toggling logic from the EditableItem component into a separate function. We can call this function useToggle, because it is recommended to start the name of a custom Hook with use. The useToggle custom Hook looks like this:

import React, { useState } from "react";

function useToggle(initialValue) {
  const [toggleValue, setToggleValue] = useState(initialValue);
  const toggler = () => setToggleValue(!toggleValue);
  return [toggleValue, toggler];
}

First, we get the state and state updater by using the useState Hook. We then define a toggler function that sets toggleValue to the opposite of its current value. Finally, we return an array of two items: toggleValue, to read the current state, and toggler, to toggle it. Though creating functions at each render is not slow in modern browsers, we can avoid it by memoizing the toggler function.
For this purpose, the useCallback Hook comes in handy (we pass an empty dependency array and a functional update so that the memoized toggler is created only once):

import React, { useState, useCallback } from "react";

function useToggle(initialValue) {
  const [toggleValue, setToggleValue] = useState(initialValue);
  const toggler = useCallback(() => setToggleValue(value => !value), []);
  return [toggleValue, toggler];
}

Custom Hooks are used just like any other Hook. This means using useToggle in our EditableItem component is as easy as this:

import React, { useState } from "react";
import useToggle from "./useToggle.js";

function EditableItem({ label, initialValue }) {
  const [value, setValue] = useState(initialValue);
  const [editorVisible, toggleEditor] = useToggle(false);
  return (
    <main>
      {editorVisible ? (
        <label>
          {label}
          <input
            type="text"
            value={value}
            onChange={event => setValue(event.target.value)}
          />
        </label>
      ) : (
        <span>{value}</span>
      )}
      <button onClick={toggleEditor}>{editorVisible ? "Done" : "Edit"}</button>
    </main>
  );
}

Now, the component no longer needs its own useState call and toggle function for the editor; that logic lives in useToggle.

Reusing code with React custom Hooks

No doubt, reusing code between components is easier with custom Hooks and requires less coding. The same useToggle function can back any component that needs toggling, whereas the render props pattern would require a dedicated Toggler wrapper component just to hold that state and pass it down through a function prop.

Next, we’ll learn how to consume context data with React Hooks instead of using the render props pattern.

Consuming context data with React custom Hooks

Just like we have the useState Hook for state, we have useContext for consuming context data. Again, we’ll try to learn by using it in a practical scenario. It’s a common requirement to have user details available across components, which is a great use case for context Hooks.

Here we have two components: UserProfile and ChangeProfile. The UserProfile component shows user details, and the ChangeProfile component switches between users. The switching between users is only applicable to our demo; in real projects, instead of the select menu, we would update user details based on who logs in.
Implementing this looks like the following:

const users = [
  { /* … */ email: "[email protected]" },
  { name: "Arnold", email: "[email protected]" }
];

const UserContext = React.createContext();

function User({ children }) {
  const userState = useState({ /* … */ email: "[email protected]" });
  return (
    <UserContext.Provider value={userState}>{children}</UserContext.Provider>
  );
}

function App() {
  return (
    <div className="App">
      <User>
        <ChangeProfile />
        <UserProfile />
      </User>
    </div>
  );
}

We have made a separate component called User for storing the user state; UserContext.Provider provides the data and the update method. These are then consumed by its child components, UserProfile and ChangeProfile. The UserProfile component only reads the user data, which shows how simple consuming context is with custom Hooks.

Implementing slots in components

There is one more thing for which people sometimes use the render props pattern: implementing slots in their components, i.e., passing a function as a prop that renders part of the UI (such as a button slot on a card component). Well, there is a simpler way that doesn’t need functions as props. Instead, we can assign JSX directly to a component prop, for example button={<button>Done</button>} rather than button={() => <button>Done</button>}. Using the render props pattern here would be a mistake because it’s intended to share data between components. So, in this case, we should avoid using render props.

Conclusion

My personal opinion is that the render props pattern wasn’t intended for the above use cases, but the community used it because there was no other way. It’s great that the React team took note and made something we will all love to use. Hooks and render props can co-exist because each has a different role. It’s clear that the future of React is very bright and the React team’s focus is crystal clear.

5 Replies to “React render props vs. custom Hooks”

Good examples and code in this article. Thanks for sharing your thoughts, and keep up the great work 🙌

> Using the render props pattern here would be a mistake because it’s intended to share data between components. So, in this case, we should avoid using render props.
It seems that the button from the example may require some data that’s handled by the CardComponent. Would that still be a mistake?
https://blog.logrocket.com/react-render-props-vs-custom-hooks/
Quoting Greg KH (greg@kroah.com):

> Why not just keep all users from seeing sysfs, and then have a user
> daemon doing something on top of FUSE if you really want to see this
> kind of stuff.

Well, the blocker is really that when you create a new network namespace, it wants to create a new loopback interface, but /sys/devices/virtual/net/lo already exists. That's the same issue with user namespaces when the fair scheduler is enabled, which tries to re-create /sys/kernel/uids/0.

Otherwise, yeah, at least for my own uses, containers wouldn't need to look at /sys at all.

Heck, you wouldn't even need FUSE, just mount -t tmpfs /sys/class/net and manually link the right devices from /sys/devices/virtual/net.

-serge
https://lkml.org/lkml/2008/10/7/426
[Solved] Can't move QPushButton in QGraphicsScene?

There is the code:

@
#include <QtGui>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    QGraphicsScene scene(QRectF(-100, -100, 300, 300));
    QGraphicsView view(&scene);

    QGraphicsRectItem* rectItem = new QGraphicsRectItem(0, &scene);
    rectItem->setPen(QPen(Qt::black));
    rectItem->setBrush(QBrush(Qt::green));
    rectItem->setRect(QRectF(-30, -30, 120, 80));
    rectItem->setFlags(QGraphicsItem::ItemIsMovable);

    QPushButton* button = new QPushButton("Ok");
    QGraphicsProxyWidget* widgetItem = scene.addWidget(button);
    widgetItem->setFlags(QGraphicsItem::ItemIsMovable);

    view.show();
    return a.exec();
}
@

The problem is that I can't move the QPushButton; the QGraphicsRectItem, for example, moves fine. What should I do to make the QPushButton movable, and if possible without inheritance, just by setting some flags or properties?

I think the problem is that QPushButton handles the mouse press itself, and thus blocks the graphics view system from starting to move the item.

Sounds right, and do you know how to make it start moving?

Perhaps the question is: what do you expect would happen? I mean: does clicking initiate a move, or a click on the button?

When I click on the button, it should start moving. I want the same behavior as the QGraphicsRectItem has, but I don't know how)) And one more thing: I want the button to only be drawn on the scene and move like a simple QGraphicsItem, without hot tracking, handling of mouse events, and so on. What can I do for that? P.S. Sorry for english. To be more precise, I want the button to have the same behavior as in Qt Designer.

[Edit: merged three separate messages into one. Please use the Edit button to add to your last message - Andre]

So, if I understand you correctly, you don't want your button to function like a button at all. It should just look like one. Do I get that right?
In that case, I would simply discard using QGraphicsProxyWidget completely (it is not a recommended item to use anyway, as it has issues the Trolls have not been able to sort out). Instead, you can create your own QGraphicsRectItem descendant and reimplement the paint operation. In that operation, you get the current style and use QStyle::drawControl to draw a button. This will give you a graphics item that looks exactly like any button on your system, but that you can manipulate in the same way as any other graphics item. Of course, it won't have the event handling, signals, and slots that QPushButton offers, but you just said you don't need those anyway.

Andre, thanks a lot, this is exactly what I want. And again thanks Andre

Could you show how the "paint" method would be implemented? I tried but had no success. Thanks

If you show us what you tried, we may be able to help you fix the issue.

Ok, I'm doing it in PyQt:

@
class ItemUI(QGraphicsRectItem):
    def __init__(self, itemUI, scene=None, parent=None):
        super(ItemUI, self).__init__(parent, scene)
        self.setFlag(self.ItemIsMovable, True)
        self.setFlag(self.ItemIsSelectable, True)
        self.item = itemUI
        self.setRect(QRectF(10, 10, 100, 50))

    def paint(self, painter, option, widget=None):
        self.item.style().drawControl(QStyle.CE_PushButton, option, painter, widget)
@

- itemUI is the UI item, for example a QPushButton
- Nothing shows on the screen

I still find Python hard to read, so I might be off here: I think you will need to construct your own option struct and fill it with the proper values for the button that you want to draw. You can not just pass a QStyleOptionGraphicsItem if you want to draw a button and expect it to work.

OK, I have no idea how to create an option struct but I'll try here... any doubt and I will return here. Thank you for your assistance and availability.
and sorry for my poor english =]

Try this creation and further adapt it to your implementation:

@
void GraphicsRectItemButton::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
{
    QStyle *style = qApp->style();

    QStyleOptionButton buttonOption;
    buttonOption.rect = option->rect;
    buttonOption.state = option->state;
    buttonOption.text = "ItemButton";

    style->drawControl(QStyle::CE_PushButton, &buttonOption, painter, widget);
}
@

feliperfranca: I think I understand what you are trying to do: pass a QPushButton to your ItemUI instance, then ask the button to paint itself, as the implementation of your ItemUI.paint(). I believe Andre is saying that the paint method of a QGraphicsRectItem is passed an option of type QStyleOptionGraphicsItem, but QStyle.drawControl expects an option of type QStyleOptionButton. Both types are subclasses of QStyleOption, but they may have different attributes. Also, who knows the default attributes of a QPushButton's style? One would hope it defaults to something sensible and visible. Also, if the QPushButton instance is not realized (shown), possibly its paint method (style.drawControl()) returns a None path (just calls Python pass)? Possibly you should do the painting yourself, instead of asking the button widget to paint itself. You could access the style of the button for the attributes needed to paint. But I agree with your design: if you use the button's paint implementation, it would be foolproof against changes to the Qt toolkit. You would get a style that matches the style of other buttons in the GUI. Your ItemUI would be a graphics item, not a widget, but would look like a button widget. I would be interested in any solution you came up with.

What you could do is look into the way the Quick components for desktop are implemented. As you may or may not be aware, in Quick 1.x, items there are QGraphicsItems. The desktop components render these items using the Qt style API.
I think that could be a good source of inspiration. The full source code is online for it.

Thanks, that's a good idea. I think you are saying that you don't need to create a dependency on QWidget (shouldn't, since the QGui module is deprecated on some platforms?). Instead, the Qt style API has all you need to ensure that the style of your custom GUI items (whatever you call them: widgets, controls, etc.) conforms to the platform's guidelines and the user's preferences. Don't arbitrarily fabricate a style attribute if it can come from the Qt style API.

This is pertinent to my case because I am implementing a custom control that lives in a QGraphicsView. It is a tracking button. It appears on hover. It can be clicked. It follows the mouse (to a certain extent). It is part of a tracking menu (a tracking menu is a set of tracking buttons). I am studying whether it needs to be a QWidget (so that it receives mouseMoveEvent easily) or a QGraphicsItem. This is different from what Rokemoon wants (his tracks, i.e. follows the mouse, when clicked; mine tracks before being clicked). The handling of mouseMoveEvent seems to be easier if it is a QWidget, but as you suggest, it could be a QGraphicsItem as in Qt Quick. I need to study Qt Quick and the design reasons their controls are QGraphicsItems. The differences between event dispatch in QWidget trees and QGraphicsItem trees are very interesting.
https://forum.qt.io/topic/3936/solved-can-t-move-qpushbutton-in-qgraphicsscene
How to: Autoscaling Gitlab Continuous Integration runners on GCP 🤓

At Luminum Solutions, we run some relatively heavy test suites. The problem, however, is that runner servers can consume quite some resources, which is rather expensive. But even more problematic is a filled-up queue of pipelines that need running. It can block Merge Requests (Gitlab's equivalent of a Pull Request), in turn blocking the release of new features. So ideally you'd have enough computing power running for all test suites, without paying too much. This is exactly what autoscaling infrastructure can bring!

The idea is to create a Compute Engine Instance Group that automatically scales as instances get above a certain CPU usage. The Managed Instance Group allows you to specify a template for the instances running as part of the Instance Group.

Step 1. Creating the image

Assuming you have a Google Cloud project (if not, create one here), the first step is to create a custom image that can be used in the template. The image is based on Ubuntu 16.04 in my case.

Create the VM

To create the image, start a Compute Engine VM. We use VMs with 2 CPUs and 4GB of RAM, but you're free to choose what you prefer during this step. You can make the disk quite small too, since the VMs will be ephemeral. I chose normal 20GB hard disks. This is what it should look like (sorry for the Dutch in there):

Install the Gitlab Runner

After creating your VM, SSH into it and install the Gitlab Runner (instructions for Ubuntu are here). We don't use the Docker executor, even though most of the infrastructure we run is on Docker, because so-called Docker-in-Docker scenarios can cause problems that are outside of this post's scope.
Set up a cron job to clean images

Since the hard disk on our VM is not that big, and Docker images can accumulate to take quite some space on it, it's ideal to automatically clean the unused Docker images every 24 hours. Just to be sure :) You can save the following script to /etc/cron.daily/docker-auto-purge and make it executable with chmod +x /etc/cron.daily/docker-auto-purge.

#!/bin/sh
# Remove images that are not referenced by any container;
# "docker images -q | xargs docker rmi" would also try (and fail) to
# remove images that are in use, so prune is the safer option.
docker image prune -af

If you want to specify the number of concurrent jobs you allow per instance, you can edit the /etc/gitlab-runner/config.toml file and change the following lines to whatever you prefer:

concurrent = 1      # Concurrent jobs for this instance (0 does not mean unlimited!)
check_interval = 0  # How often to check GitLab for new builds (seconds)

Step 2. Create the image and template based on your VM

To set your VM up to be managed inside an instance group, we'll have to create a blueprint of it. After this, we'll remove the VM and let the Instance Group create and manage instances based on that blueprint.

Create the image

First, stop your VM. This will allow for safer passage through the land of image creation we're about to enter. Select 'Compute Engine' from the sidebar in the Google Cloud Console and click 'Images'. Here, create a new image based on the hard disk of your recently created VM (use it as the source disk). With your image now created, you can easily create VMs based on it! It's exactly as easy as selecting 'Ubuntu 16.04' as your image, but instead it will be your own shiny image ✨

Create the Instance Template

Next, we'll add the image to an instance template! I used the following settings:

- 2 CPUs
- 4 GB memory
- 20 GB persistent disk
- Your custom image as the startup disk

If you have any custom networking set up, this is the time to add its configuration to your instance template.
But no worries if you forget; it's super easy to update the instances in a managed instance group to a newer template 😄

Note: Make sure your VM won't run at > 95% CPU on your test suite though, as this will trigger the autoscaler to add new instances without good reason.

This step is important! Now, set up the startup script on your VM:

#!/bin/bash
# Register runners
sudo gitlab-ci-multi-runner register -n --url --registration-token s5-FUy15QVjqMNsgZWPM --executor shell --description "ext-shell-$(hostname)-1"
sudo gitlab-ci-multi-runner register -n --url --registration-token s5-FUy15QVjqMNsgZWPM --executor shell --description "ext-shell-$(hostname)-2"

apt install build-essential -y

# Install docker-compose
sudo curl -L-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Add the gitlab-runner user to the docker group
sudo usermod -aG docker gitlab-runner

This will let your runners automatically register on creation and set up Docker Compose. The reason I chose to add the docker-compose install here is to let me update or change the version whenever I please, without having to update my image. Now, save your instance template. Almost there!

Step 3. Create the instance group

Next, we'll create the actual instance group that runs the Gitlab runners. Go to the Compute Engine page from the sidebar of the Google Cloud Console and click 'Instance Groups'. Now add a new Instance Group with your preferred configuration. Don't forget to double-check that you selected 'Managed Instance Group'! Use the Instance Template you created in the previous step here. For the status check, I chose a CPU usage of > 95%. This will trigger the autoscaler whenever an instance has more than 95% CPU utilization. Then click 'Create'. Presto! One more step to go!

Step 4. Enable Gitlab to automatically remove old runners

Having a lot of runners because of autoscaling is handy, but the runners don't unregister themselves.
A solution is to periodically check for runners that have been inactive for a while. I wrote a small Python script to help with this, which can be added as a cron job on your Gitlab instance:

import requests
import json
from dateutil import parser
from datetime import datetime, timedelta
import pytz
import logging

logger = logging.getLogger('gitlab_python_cron')
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('/var/log/gitlab_runners_autodelete.log')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)

headers = {"PRIVATE-TOKEN": "<YOUR_PERSONAL_TOKEN_HERE>"}
base_url = ""

runners = requests.get(base_url % 'all/', headers=headers)
runners = json.loads(runners.content)
logger.info("Got %s runners to check" % str(len(runners)))

# All runners that haven't reported for 8 hours will be deleted.
threshold_time = datetime.now(pytz.timezone('Europe/Amsterdam')) - timedelta(hours=8)

for runner in runners:
    runner = json.loads(requests.get(base_url % runner["id"], headers=headers).content)
    if parser.parse(runner["contacted_at"]) < threshold_time:
        resp = requests.delete(base_url % runner["id"], headers=headers)
        logger.info("Deleted runner with ID %s" % str(runner["id"]))

logger.info("Deleted runners! \n ---------")

You'll have to generate a Personal Access Token for the API and replace <YOUR_PERSONAL_TOKEN_HERE> in the snippet above. You can see how to create one here.

And that's it! Now your Gitlab runner infrastructure can autoscale as your test suites run, and you'll pay as you go whenever your runner infrastructure scales beyond a single node. And your Gitlab instance even cleans itself up every day ✨
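The core of the script is just a timestamp comparison. Here is a tiny, self-contained sketch of that check in isolation (the function name and defaults are mine, not from the script above, which adds the GitLab API calls and logging around the same idea):

```python
from datetime import datetime, timedelta, timezone

def is_stale(contacted_at, now, max_idle=timedelta(hours=8)):
    """Return True when a runner has not reported within max_idle."""
    return contacted_at < now - max_idle

# A runner last seen 9 hours ago is stale; one seen 1 hour ago is not.
now = datetime(2017, 9, 20, 12, 0, tzinfo=timezone.utc)
print(is_stale(now - timedelta(hours=9), now))   # True
print(is_stale(now - timedelta(hours=1), now))   # False
```

Keeping the comparison timezone-aware (as the real script does via pytz) matters: comparing a naive and an aware datetime raises a TypeError in Python.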
https://zowielangdon.nl/2017/09/20/autoscaling-ci-runners-on-gitlab-using-gcp/
Introduction The Pepper Plug-in API (PPAPI) lets Native Client modules communicate with the hosting browser and access system-level functions in a safe and portable way. For a brief description of Pepper see the Technical Overview. Pepper has both a C API and a C++ API. These APIs are generally divided into two parts: functions that are implemented in the browser and that you call from your Native Client module, and functions that the browser invokes and that you must implement in your Native Client module. To develop a Native Client module using a specific version of Pepper, you must download and use the corresponding SDK bundle. For example, to use Pepper 16 you must download and use the pepper_16 SDK bundle. Native Client modules compiled using a particular Pepper version will generally work in corresponding versions of Chrome and higher. For example, a module compiled using Pepper 16 will work in Chrome 16 and higher. The reference documentation on this site includes both the C and C++ APIs for the most recent versions of Pepper. Before you read the reference documentation, we recommend that you read the Technical Overview and the Getting Started Tutorial. Pepper 16 The Pepper 16 bundle/platform includes these new features: - the Pepper Fullscreen API - the Pepper Mouse Lock API For additional information about Pepper 16, see the SDK release notes. Pepper C++ API The lowest level of Pepper is the C API, declared in the header files in include/ppapi/c. The C API represents the lowest level binary interface between a Native Client module and the browser. The Pepper C++ API, declared in the header files in include/ppapi/cpp, is a wrapper around the C API. The C++ API provides a layer of abstraction and takes care of many details such as reference counting resources, thus making Pepper considerably easier to use. 
Click the links below "Pepper C++ API" in the left navbar to view the C++ API by class or by file (all classes, functions, namespaces, and so on in audio.h, instance.h, url_loader.h, etc.).
https://developers.google.com/native-client/pepper16/peppercpp/
Customizing¶

You can override and customize almost everything in the UI, or use the different templates and widgets already in the framework. Even better, you can develop your own widgets or templates and contribute them to the project.

Changing themes¶

F.A.B. comes with Bootswatch themes ready to use. To change the default Bootstrap theme, just change the APP_THEME key's value. On config.py (from flask-appbuilder-skeleton), using the spacelab theme:

APP_THEME = "spacelab.css"

If you are not using a config.py in your application, set the key like this:

app.config['APP_THEME'] = "spacelab.css"

You can choose from the following themes

Changing the index¶

The index can easily be overridden by your own. You must develop your template, then define it in an IndexView and pass it to AppBuilder. The default index template is very simple; you can create your own like this:

1 - Develop your template (in your <PROJECT_NAME>/app/templates/my_index.html):

{% extends "appbuilder/base.html" %}

{% block content %}
<div class="jumbotron">
  <div class="container">
    <h1>{{_("My App on F.A.B.")}}</h1>
    <p>{{_("My first app using F.A.B, bla, bla, bla")}}</p>
  </div>
</div>
{% endblock %}

What happened here? We should always extend from "appbuilder/base.html"; this is the base template that includes all the CSS and JavaScript and constructs the menu based on the user's security definition. Next, we override the "content" block; we could also override or extend other blocks, such as the CSS and JavaScript ones.
We can even override base.html completely. I've presented the text on the content like {{_("text to be translated")}} so that we can use Babel to translate our index text.

2 - Define an IndexView

Define a special and simple view that inherits from IndexView. Don't define this view on views.py; put it on a separate file such as index.py:

    from flask_appbuilder import IndexView

    class MyIndexView(IndexView):
        index_template = 'my_index.html'

3 - Tell F.A.B. to use your index view when initializing AppBuilder:

    from app.index import MyIndexView

    app = Flask(__name__)
    app.config.from_object('config')
    db = SQLA(app)
    appbuilder = AppBuilder(app, db.session, indexview=MyIndexView)

Of course you can use a more complex index view; you can use any kind of view (BaseView children), and you can even change the relative URL path to whatever you want (remember to set default_view to your function). You can override the IndexView index function to display a different view depending on whether a user is logged in or not.

Changing Widgets and Templates

F.A.B. has a collection of widgets to change your views' presentation. You can create your own and override the defaults, or (even better) create them and contribute them to the project on git.

All views have templates that will display widgets in a certain layout. For example, on the edit or show view, you can display the related list (from related_views) on the same page, or as a tab (the default).

    class ServerDiskTypeModelView(ModelView):
        datamodel = SQLAInterface(ServerDiskType)
        list_columns = ['quantity', 'disktype']
        show_template = 'appbuilder/general/model/show_cascade.html'
        edit_template = 'appbuilder/general/model/edit_cascade.html'

The above example overrides the show and edit templates, changing the related lists' layout presentation. If you want to change the above example and display the server disks as a block list instead, just use the available widgets:

    class ServerDiskTypeModelView(ModelView):
        datamodel = SQLAInterface(ServerDiskType)
        list_columns = ['quantity', 'disktype']
        list_widget = ListBlock

We have overridden the list_widget property with the ListBlock class.
This will look like this. You have the following widgets already available:

- ListWidget (default)
- ListItem
- ListThumbnail
- ListBlock

If you want to develop your own widgets, just look at the code of the existing ones, and read the docs on developing your own template widgets. Implement your own template, then create a very simple class like this one:

    class MyWidgetList(ListWidget):
        template = '/widgets/my_widget_list.html'

Change Default View Behaviour

If you want to have add, edit and list on the same page, this can be done. It can be very helpful for master/detail (inline) lists on views based on tables with very few columns. All you have to do is mix the CompactCRUDMixin class into the ModelView class:

    from flask_appbuilder.models.sqla.interface import SQLAInterface
    from flask_appbuilder.views import ModelView, CompactCRUDMixin
    from app.models import MyInlineTable, MyViewTable
    from app import appbuilder

    class MyInlineView(CompactCRUDMixin, ModelView):
        datamodel = SQLAInterface(MyInlineTable)

    class MyView(ModelView):
        datamodel = SQLAInterface(MyViewTable)
        related_views = [MyInlineView]

    appbuilder.add_view(MyView, "List My View", icon="fa-table", category="My Views")
    appbuilder.add_view_no_menu(MyInlineView)

Notice the class mixin: with this configuration you will have a master view with the inline view MyInlineView, where you can add and edit on the same page. Of course you could use the mixin on MyView as well; use it only on ModelView classes. Take a look at the example.

Next we will take a look at a different view behaviour: a master detail style view, where the master is a view associated with a database table that is linked to the detail view. Let's assume our quick how-to example, a simple contacts application. We have a Contact table related with a Group table.
So we are using a master detail view. First we define the detail view (this view can be customized like the examples above):

    class ContactModelView(ModelView):
        datamodel = SQLAInterface(Contact)

Then we define the master detail view, where the master is the one side of the 1-N relation:

    class GroupMasterView(MasterDetailView):
        datamodel = SQLAInterface(Group)
        related_views = [ContactModelView]

Remember you can use charts as related views. You can use one like this:

    class ContactTimeChartView(TimeChartView):
        datamodel = SQLAInterface(Contact)
        chart_title = 'Grouped Birth contacts'
        chart_type = 'AreaChart'
        label_columns = ContactModelView.label_columns
        group_by_columns = ['birthday']

    class GroupMasterView(MasterDetailView):
        datamodel = SQLAInterface(Group)
        related_views = [ContactModelView, ContactTimeChartView]

This will show a left-side menu with the groups and a right-side list with contacts, plus a time chart with the number of birthdays over time for the selected group. Finally, register everything:

    # if using the above example with the related chart
    appbuilder.add_view_no_menu(ContactTimeChartView)
    appbuilder.add_view(GroupMasterView, "List Groups", icon="fa-folder-open-o", category="Contacts")
    appbuilder.add_separator("Contacts")
    appbuilder.add_view(ContactModelView, "List Contacts", icon="fa-envelope", category="Contacts")
https://flask-appbuilder.readthedocs.io/en/latest/customizing.html
CC-MAIN-2019-04
refinedweb
986
54.22
serializable problem

Changchun Wang
Ranch Hand
Joined: Feb 15, 2006
Posts: 83

posted Apr 08, 2006 19:57:00

An object that needs to be serialized must only contain primitive variables or Serializable objects, unless the fields are marked transient or static.

I have a question about deserialization: when deserializing, are the fields that are marked transient or static initialized to their default values? I have demonstrated this in the following server code.

import java.io.*;
import static java.lang.System.*;

class Fruit {}

class Apple implements Serializable {
    int one;
    transient int two;
    static int three;
    static transient int four;
    static Fruit ff = new Fruit();
    static String ss = "static&serializable objects";
    transient Fruit ft = new Fruit();
    transient String st = "transient&serializable objects";

    public Apple() {
        one = 1;
        two = 2;
        three = 3;
        four = 4;
    }

    public String toString() {
        return "one:" + one + " two:" + two + " three:" + three + " four:" + four
            + " static objects:" + ss + " transient objects:" + st;
    }
}

public class Server {
    public static void main(String... args) throws Exception {
        Apple b = new Apple();
        ObjectOutputStream save = new ObjectOutputStream(new FileOutputStream("data.txt"));
        save.writeObject(b);
        save.flush();
    }
}

client code

import java.io.*;

public class Client {
    public static void main(String[] args) throws Exception {
        Apple a = null;
        ObjectInputStream restore = new ObjectInputStream(new FileInputStream("data.txt"));
        a = (Apple) restore.readObject();
        System.out.println("After deserialization:");
        System.out.println(a);
    }
}

o/p
one:1 two:0 three:0 four:0 static objects:java transient objects:null

I found that static object fields and static primitive fields behave differently. If static fields are set to default values when deserializing, why isn't the output

one:1 two:0 three:0 four:0 static objects:null transient objects:null

instead?
Maybe I need to test the server code and the client code on different computers and then I would get the output I expected. Also, what is the difference between a static transient variable and a transient variable?

Edwin Dalorzo
Ranch Hand
Joined: Dec 31, 2004
Posts: 961

posted Apr 08, 2006 21:59:00

Remember that static member values are set when the class is loaded. Hence, every time you deserialize your objects, since your class has already been loaded, their former values are not overridden.

Test your program by running it twice. The first time, just serialize the object. The second time, deserialize it. But before doing so, remove all default initializations on static and transient members; instead, set those values prior to serialization. The second time you run your program, load the object from the file. This time you will see the static content assume the default values.

For instance, this is the class to be serialized. Note how I leave the static member without initialization. This is just to demonstrate the theory of serialization.

@SuppressWarnings("serial")
public class Gun implements Serializable {
    private static int bullets; // static non-serializable member

    public void setBullets(int bullets) {
        Gun.bullets = bullets;
    }

    @Override
    public String toString() {
        return "I am gun with " + bullets + " bullets";
    }
}

Now, the first time you run your test, do it like this:

public static void main(String args[]) {
    Gun obj1 = new Gun();
    obj1.setBullets(10); // I change it before serialization
    serializeObject(obj1);

    // Leave this code for the second test
    /*
    Object obj2 = deSerializeObject();
    System.out.println(obj2);
    */
}

And, the second time that you run your code.
Do it like this:

public static void main(String args[]) {
    // This time, comment these lines
    /*
    Gun obj1 = new Gun();
    obj1.setBullets(10);
    serializeObject(obj1);
    */

    // Leave this code for the second test
    Object obj2 = deSerializeObject();
    System.out.println(obj2);
}

The output should be "I am gun with 0 bullets" despite setting the bullets to 10 when you serialized the object during the first test. You will see similar results with transient variables.

Regards,
Edwin Dalorzo

[ April 08, 2006: Message edited by: Edwin Dalorzo ]

Edisandro Bessa
Ranch Hand
Joined: Jan 19, 2006
Posts: 584

posted Apr 08, 2006 23:39:00

Hi Changchun,

I don't know whether or not you have the K&B SCJP Study Guide for Java 5 book. If so, take a look at Chapter 6, page 456. You will get some more details about the problem of serializing static variables.

For those who haven't purchased the book yet, here's the great and simple explanation of the problems when trying to serialize static variables.

Serialization Is Not for Statics

Should static variables be saved as part of the object's state? Isn't the state of a static variable at the time an object was serialized important? Yes and no. It might be important, but it isn't part of the instance's state at all. Remember, you should think of static variables purely as CLASS variables. They have nothing to do with individual instances. But serialization applies only to OBJECTS. And what happens if you deserialize three different Dog instances, all of which were serialized at different times, and all of which were saved when the value of a static variable in class Dog was different? Which instance would "win"? Which instance's static value would be used to replace the one currently in the one and only Dog class that's currently loaded? See the problem? Static variables are NEVER saved as part of the object's state because they do not belong to the object!

Hope that helps.
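The two-run experiment described above can also be compressed into a single in-memory round trip. The sketch below is not from the thread (the class and field names are illustrative); it shows the same three behaviors: serialized instance fields survive, transient fields come back as type defaults, and static fields simply keep whatever value the loaded class currently holds.

```java
import java.io.*;

// Illustrative sketch: serialize to a byte array and read it back, so both
// halves of the experiment run inside one JVM.
class Sample implements Serializable {
    private static final long serialVersionUID = 1L;

    int kept = 1;           // written to the stream
    transient int scratch;  // skipped by serialization; deserialized as 0
    static int shared;      // class state; never written to the stream

    static byte[] save(Sample s) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(s);
            }
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static Sample load(byte[] data) {
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(data))) {
            return (Sample) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Deserializing a copy leaves Sample.shared untouched: it holds whatever was last assigned in the running JVM, not the value at serialization time, which is exactly why the quote above says statics are never part of an object's state.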
"If someone asks you to do something you don't know how to, don't tell I don't know, tell I can learn instead." - Myself

Changchun Wang
Ranch Hand
Joined: Feb 15, 2006
Posts: 83

posted Apr 08, 2006 23:44:00

Thanks, Edwin. So when deserializing, the fields which are marked static are initialized to the values given by their static initializers and static blocks, but the fields which are marked transient are initialized to their default values.

[ April 08, 2006: Message edited by: Changchun Wang ]

Changchun Wang
Ranch Hand
Joined: Feb 15, 2006
Posts: 83

posted Apr 08, 2006 23:54:00

I have not got this book yet because I am from China, and in China I cannot buy the book. Thanks for your reply!

Edwin Dalorzo
Ranch Hand
Joined: Dec 31, 2004
Posts: 961

posted Apr 09, 2006 00:00:00

That's correct, comrade! I do not know if some of the other, more advanced topics related to serialization are part of the SCJP now. One of these topics is custom formatting of the serialization process. If you implement two methods in the Serializable object, you can override the default serialization process to accommodate specific requirements. Those methods are private void writeObject(ObjectOutputStream) and private void readObject(ObjectInputStream). Hence, you could actually save the state of the static member of the class Gun that I wrote previously, using this code.
@SuppressWarnings("serial")
public static class Gun implements Serializable {
    private static int bullets;

    public void setBullets(int bullets) {
        Gun.bullets = bullets;
    }

    @Override
    public String toString() {
        return "I am gun with " + bullets + " bullets";
    }

    // custom serialization
    private void writeObject(ObjectOutputStream out) throws IOException {
        System.out.println("Serializing static content");
        out.defaultWriteObject(); // write default state
        out.writeInt(bullets);    // write static state
    }

    // custom deserialization
    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        System.out.println("Deserializing static content");
        in.defaultReadObject();     // read default state
        Gun.bullets = in.readInt(); // read static state and restore it
    }
}

If you test this new implementation you will find that the default serialization of the object's state is overridden by the code programmed in the new private methods. By means of these methods (which should be private) you can control the way an object serializes its state. For example, you can save the state of static members, or data related to static or transient members. For instance, a database connection object should be transient, because it is not serializable. But you could serialize the connection string, so that when you deserialize the object you can re-establish the connection with the database. I guess there are a couple of other nice features related to serialization that may not be in the exam, but I hope you may find this interesting.

[ April 09, 2006: Message edited by: Edwin Dalorzo ]

I agree. Here's the link:
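The custom writeObject/readObject pattern shown above can be exercised end to end. The sketch below is illustrative rather than code from the thread: it keeps the same idea of explicitly writing and restoring a static field, and adds small in-memory save/load helpers so the effect is easy to observe.

```java
import java.io.*;

// Illustrative sketch of custom serialization that round-trips static state.
class Gauge implements Serializable {
    private static final long serialVersionUID = 1L;
    private static int level; // static state we explicitly choose to save

    static void setLevel(int v) { level = v; }
    static int getLevel() { return level; }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeInt(level); // append the static value to the stream
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        level = in.readInt(); // restore the static value from the stream
    }

    static byte[] save(Gauge g) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(g);
            }
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static void load(byte[] data) {
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(data))) {
            in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Unlike default serialization, deserializing here does overwrite the static field, which is exactly the "which instance wins?" hazard the book quote warns about.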
http://www.coderanch.com/t/253676/java-programmer-SCJP/certification/serializable
CC-MAIN-2015-40
refinedweb
1,390
53.71
Ankh Morpork's Finest Coder Saturday, October 30, 2004 The meaning of life. Ever wanted the cultural and episodal references for futurama. Well look no further than The Tv Tome! Thursday, October 28, 2004 The cat in the hat! Theres this cat thats been coming into my apartment and trying to look around. It just invites itself and likes to look around. Yesterday, it jumped onto the couch and sat next to me and liked to get its neck scratched. I think its a persian but I'm not much of a cat fanatic. But still its a cool cat. The cat inspects my fridge. It rains, it pours and the old man snores. Its been raining. To quote Garbage I'm only happy when it rains. Wednesday, October 27, 2004 The SOX are Red!!! THE SOX HAVE WON!!! THE SOX HAVE WON!!! 86 years and the SOX HAVE FINALLY WON! 4-0 against the cardinals and first time since 1918, the Red Sox have "the curse" lifted. Monday, October 25, 2004 The Metaphorical Death March Yes, the proverbial "Death March" is upon us (well me more than anyone cause I'm the only guy designing and developing the reader). To quote Ed Yourdoun, it is "a test of manhood, somewhat akin to climbing Mount Everest barefoot." Ever tried. Ever failed. No matter. Try Again. Fail again. Fail better. Samuel Beckett Worstward Ho (1984) Sunday, October 24, 2004 Which File Extension are You? I couldn't agree more. Amateurs are boring and a waste of time. Read a book instead of asking people or use google u lazy sods. :-) (every friend of mine is NOT in this category thankfully). Macro I was getting tired to typing the same code over and over in different places so I created a small macro to just add the skeleton automatically. I'm hoping to do a project where I add components into VS.NET based on my Thesis work. But thats for much much later. So, now I present to you, A Macro..... 
Option Strict Off
Option Explicit Off

Imports EnvDTE
Imports System.Diagnostics

Public Module RecordingModule

    Sub AddMessageBox()
        DTE.ActiveDocument.Selection.Text = _
            "MessageBox.Show(this, """", """", MessageBoxButtons.OK, MessageBoxIcon.Error);"
    End Sub

End Module

Name The Game Challenge 2

So you call yourselves game critics and game buffs...but are u really that good. Test your "hours wasted playing games" knowledge. Answers will follow soon.

T - 13.5hrs

Daniele is coming back. There is a welcoming committee headed to the airport to get him. I just hope he has left already so he doesn't see this cause its supposed to be a surprise cooked up by me with approval from KC.

Stop your incessant whining!

In our reader app, I've created a number of managed libraries. I use a library within those libraries called AssemblySettings.dll in order to get the AppSettings data. Anyways, in the main app, when you add two DLLs that use the same dependency, Visual Studio .NET starts whining like a dog forbidden to eat dinner. Searching all over the vast expanse we call the Internet, I didn't find a satisfactory solution. So, I said to myself, why not just add all the projects together into one CUMULATIVE project. Then when it builds, it will see that the AssemblySettings.dll file for all the projects is the exact same and should stop its whining. Without much further ado, ladies and gentlemen, I give you....The Build Output.

------ Build started: Project: TestReaderApp1, Configuration: Release .NET ------
Preparing resources...
Updating references...
Performing main compilation...
The project is up-to-date.
Building satellite assemblies...
---------------------- Done ----------------------
Build: 6 succeeded, 0 failed, 0 skipped

Fantastic!

Saturday, October 23, 2004

A star is born.

So my good friend from high school, whom you've heard quoted in this blog sometimes (ok so his name is Vijay), has just won the Southern Arizona Racquetball Tournament.
Of course he was playing in the senior women's league but thats a side of him you'll just have to get to know. :-) I'm just kidding....good job Vijay!!! Day 3: Alive & Awake - All work and no play makes jack a dull boy...or does it? [Vijay says] - if(dumbass), sleep = null Are u calling me a "dumbass"??? :-) Would a dumbass tell you that your code is syntactically incorrect and will not compile. :-) Not to mention that the else should be included if you want to have a robust program :-) Friday, October 22, 2004 The office with a view. I love the cloud effects. Very unusual for our location. Except when its getting to winter time....so not so unusual for this time of the year. Hi Ho Hi Ho Silver Its 3:10am and I'm still at work. The halls are empty, the offices sparse. The music is loud and the fan is whirring. I love my monitors. :-) The color of magic Maroon....Maroon 5. I watched the video for She Will Be Loved. What a masterpiece. Froyd would be happy with the way that came out. I must recommend that you all watch the Video. Thursday, October 21, 2004 Day 2: Awake & Barely Alive This is day 2 of my marathon session of staying awake. I've had lots of coffee. I plan to go till friday night sometime which will make it 3 days of awake and barely alive. Imagine, if I could stay awake for a full week, I'd be able to join the Special Forces :-) Of course they actually do more than just sit in an office plonking on keys. I FEEL SO AWAKE & ALIVE :-) The plight of the world is at stake and the world is slient. In case you don't know, Uganda (Africa) is under serious turmoil due to a civil war. About 20,000 children have been abducted by the Lords Resistance Army who uses child soldiers to fight their battles. Not only is it deplorable, that the war has been ensuing, but the world media, and the world leaders are silent on this topic. The whole African continent is in a sad state. The rich are getting richer, the poor are getting poorer. 
For the first time ever, AIDS is killing more people than malaria. It pains me to see this, especially in countries like Malawi which is close to my heart. It is a sad sad day indeed.

Rincewind turned and ran....

Well my DOS class exam is done. I think I did ok. Definitely not 100. Now back to coding. So here are some stats on my coding.

To Date: 10 weeks of coding results in 12,699 lines of code.
If I worked for every hour of every day then my efficiency is about 7 LOC/hr.
I work more like 80hrs a week -> efficiency = approx 16 LOC/hr.
I get paid for 20hrs a week -> efficiency from a work standpoint is approx 64 LOC/hr.

Ladies and gentlemen, might I toot my own horn and say, those are VERY impressive numbers. VERY!!!!! (industry standard 1 LOC/hr)

Technically Speaking - Hellacious Riders Offer 100GB Email Accounts

I always wanted to use the Blog This! button and now I can. For all you lovers of email accumulation, here is an offer you're going to love. :-)

Whats Longhorn gonna require?

Our building has a security team monitoring it day and night. Since I'm here all night, most nights, I've gotten to know a few of the security guards (those that bother to patrol the halls anyways). They are pretty cool guys, always willing to listen with wide eyes about the work and research as well as other tech stuff. So today, actually a few minutes ago, one guy asked me what's next for computers. I told him about the DUAL core CPUs that are becoming the fad. Then I wondered, what does MS think the requirements will be for Longhorn. So I went to Google and found this little excerpt from Technically Speaking.

Well at least I have almost a terabyte of storage on all my machines put together. :-) Oh and I have built-in Gigabit ethernet on all my comps' motherboards too.

The Heat Is ON!

Well I've been studying for the DOS class since 5pm yesterday. Thats almost 12 hrs.
I still have about 4 more sections to do (total of 16). Oh so painful. How do u threading for OS's which are Preemptive vs. Non-preemptive. Now what if its on a uniprocessor vs. multiprocessor. Now how do u distinguish between User-level and Kernel-level threads. Now lets make life hard and try to do synchronization in all these cases. AGGHHHH!!!! As prodigy said it "My Mind Is On Fire." Good thing I have a Huge fan in my office. :-) I wonder what time Starbucks opens? 5am? Victory Victory was mine...UFO:AI RULZ!!! I just sweapt through the city where an alien infestation has occured. I had 3 soldiers with sniper rifles. 3 with assult rifles. 1 with a plasma gun and 1 with a bazooka. I use a spotter for each sniper (play the assult rifle soldiers with the sniper soldiers) and have the bazooka guy close to the plasma guy. That way I put the smack on any alien. :-) Wednesday, October 20, 2004 Star Goose Star Goose - ONE ADDICTIVE GAME! Its one of the best and most addictive games of the ol' days. I loved this game. I played it to death and now I'll play it some more...once my exam is done and my reader project is finished. Only on Abandonia.com Exams Got my CSE 565: Software Testing exam back and I had a 99 out of 100. SUCKS!!! Forgot to answer the second part of a question. I can't believe I still make stupid errors like that at this age. Last time I made errors like that was on the GCSE mock exams. Tomorrow: Exam in my CSE 531: Distributed Operating Systems shall take place. I've been told that the exams are TOUGH!!!! No one wants to give me an exam because they've all give it to someone else in the class and since the class is averaged, its in their advantage to not share the exams. Anyways. I'm told that an A in that class for an exam is 70%. YEP 70%. That is one TOUGH class. On a lighter note... 
After 11 years of watching the movements of two Earth-orbiting satellites, researchers found each is dragged by about 6 feet (2 meters) every year because the very fabric of space is twisted by our whirling world.

If you like OLD OLD games, like XCOM and UFO, then you'll like this site: Abandonia. The site is dedicated to old games. I managed to get the good ol' ARENA when they announced its release.

Tuesday, October 19, 2004

We are not ready....He is!

Halo 2. Coming in stores: 11/9/04. The world will never be the same. Price = approx $50.

I'm so screwed. Halo 2 on the 9th. Half Life 2 on the 16th. I've got 8 days to finish Halo 2. :-)

Over and Over

I've been watching the video for Everything I've Got In My Pocket - Minnie Driver and I am just amazed by how good it sounds. So mellow and so soothing. Believe it or not, she was a singer before she became famous as an actress. Review by Thom Jurek, allmusic. I know the album didn't get a good review but I for one will definitely try and get the complete album because as I heard the music, I grew to love it more and more. And as I already said, well I haven't said, but I will say now, the lyrics metaphorically speak to me. But if sultry soothing music is not your thing, stay away from it and come back to it hopefully when she does a new album.

HALF LIFE 2 Release Date Set in GOLD.

Ladies & Gentlemen. As Winston Churchill put it so eloquently: "Arm yourselves, and be ye men of valour, and be in readiness for the conflict; for it is better for us to perish in battle than to look upon the outrage of our nation and our altar." So I say to you, are you brave? Are you bold? Are you....an elite? Gamers of the world unite, for this is the hour of our glory or the year of defeat. The date of our deliverance has been foretold. NOVEMBER 16th. Tis' the day of the release of Half Life 2. It is also a very special day for me because it's my birthday. Get in lines, order your copy.
Shun the world and on the night of November 16th, play...play like you've never played before for this is Half Life 2, the most awaited game of our century. HAHA....TO THE LOOSERS WHO PURPORT BROWSERS OTHER THAN IE A study done by SecurityFocus on the leading browsers Microsoft Internet Explorer, Mozilla / Netscape / Firefox, Opera, Lynx, Links found the following. All browsers but Microsoft Internet Explorer kept crashing on a regular basis due to NULL pointer references, memory corruption, buffer overflows, sometimes memory exhaustion; taking several minutes on average to encounter a tag they couldn't parse. If you'd like to read more on the browsers, to see that I ain't making it up and that your browsers (anything but IE) really does SUCK DONKEY BALLS.... :-) go here. SecurityFocus Web browsers - a mini farce All I want to say is...if you don't use IE, you got nailed :-) Monday, October 18, 2004 Hacking UFO:Alien Invasion This is a primer on hacking UFO: Alient Invasion. 1) Get the game from UFO: AI 2) Install the game 3) Run the game 4) Save a game. Make a note of how much money you have. 5) Find the saved game in C:\Program Files\UFOAI\base\save 6) Based on the slot you saved to, open the corresponding file in a Hex Viewer. You can use Visual Studio.Net for this if you have it. 7) Convert your money to Hex. So for 3660 it will be E4C. 8) Jumble the Hex. This gives us 4C 0E. 9) Find the seqeuence 4C 0E in the file you just opened. 10) Change it to 28 23 to give you 9000 in money. Have fun playing. :-) Case Closed. Daniele is coming back on this coming sunday. I guess we can go to the vine on monday evening. :-) Sunday, October 17, 2004 UFO In Game Exclusive I was playing the game UFO: Alien Invasion; well the demo anyways. I managed to corner the alien alive. Notice how close the camera is. This feature of UFO:Alien Invasion is pretty cool. You can zoom in quite close to the action. 
Then to see how good the death of the alien was, I took some screen shots of how it happened. Here is my team getting the bad alien man. Bad Alien bad! Its 5 am and I just finished coding a pretty decent amount. Still a LOT of work left to do. If only these JAWS Screen Reader folk would work weekends I would be much farther along. I have a bad feeling that even if they did, I wouldn't be able to get them to tell me how to do some JAWS programming without Scripts. Ok so I'm going to go home soon. I'll be back at say 2. I need to get some sleep tonight because I have a BIG BIG BIG BIG BIG exam this coming week and I am TOTALLY scared. The prof is hard and his exams are known to be extremely tough. I looked at a sample exam and didn't know any of the questions. But in class, I am able to follow the lecture and understand the material. SO what am I doing wrong? Well...it seems as though I was supposed to learn about the material on my own outside of class....yeah right!!! :-) Well I had posted on my MSN Messenger ID the following -- "Remember the file Liglob.dat??" The winner of Know your File was Vijay who was right in stating that it was from UFO: Enemy Unknown. Its the file you have to hack in order to get tons of money :-) And yes I did do that. If your into UFO: Enemy Unknown then you'll definately like this new remake. Its just a demo at this stage but its DAM GOOD!!! (5/5) UFO: Alien Invasion I've played the demo which has a few mission and its Excellent. The gameplay rulz, the graphics are awesome and the sounds are...well they are best described as "environmentally poised." Best yet, it works on your good ol' WinXP so that a big plus :-) The AI is pretty good too. Definately try it out and let me know what you think. The good ol' UFO Menu The Spaceship & Game View Tick Tock Tick Tock I didn't realize that .Net had 3 types of timers. I have always used the System.Windows.Forms.Timer when I needed to do any timer related processing. 
Apparently, there is one provided by the System.Threading namespace that is pretty reliable (more reliable than the one provided by Forms). I feel quite stupid not having known this, but it just goes to show that .NET is just such a vast framework. So how do you code using the Threading Timer, you ask? Well, I'll show you a neat little piece to play a wave file while processing some data in the background. That way users know that the application is functioning.

1) Add the using clauses.

    using System.Threading;
    using System.Globalization;
    using System.Runtime.InteropServices; // for DllImport and Marshal
    using Microsoft.Win32;

2) Add the DllImport statement.

    // PlaySound()
    [DllImport("winmm.dll", SetLastError=true, CallingConvention=CallingConvention.Winapi)]
    static extern bool PlaySound(string pszSound, IntPtr hMod, SoundFlags sf);

3) Add the SoundFlags enumerator in order to pass the information to the PlaySound() method. (The flag values below are the standard winmm.dll constants; only the ones used here are shown.)

    [Flags]
    public enum SoundFlags : int
    {
        SND_SYNC = 0x0000,        // play synchronously (default)
        SND_ASYNC = 0x0001,       // play asynchronously
        SND_FILENAME = 0x00020000 // pszSound is a file name
        // ... other SND_* flags omitted ...
    }

4) Add a using clause to the part of the code that you want your timer to run on.

    using (System.Threading.Timer timer = new System.Threading.Timer(new TimerCallback(PlayWavFile), null, 0, 5000))
    {
        // DO SOME PROCESSING HERE
    }

5) Add the PlayWavFile method.

    private void PlayWavFile(Object state)
    {
        int err = 0; // last error
        try
        {
            // play the sound from the selected filename
            if (!PlaySound(@"C:\SoundFile.wav", IntPtr.Zero, SoundFlags.SND_FILENAME | SoundFlags.SND_ASYNC))
                MessageBox.Show(this, "Unable to find specified sound file or default Windows sound");
        }
        catch
        {
            err = Marshal.GetLastWin32Error();
            if (err != 0)
                MessageBox.Show(this, "Error " + err.ToString(), "PlaySound() failed", MessageBoxButtons.OK, MessageBoxIcon.Error);
        }
    }

I know the code looks mangled, but that's only because Blogger in all its wisdom doesn't have code snippet technology. I've written to them and asked them about it and I'll fix this if they do.

Friday, October 15, 2004

Bloggers World Divided

Seems like Andy and Daniel don't read my blog too often.
:-) Daniel has a blog but I don't see his update on there too often. I tried to get Andy to use a blog but he doesn't want to. If you're reading this, and you don't have a blog, create one. It's really a nice way to communicate when you don't see people too often.

#$%VCS(#*@#% (ENCRYPTION & BEYOND)

Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication

The above is a really good article on encryption for .NET. They cover:

- DES (Data Encryption Standard)
- Triple DES
- Rijndael
- RC2

The code allows you to select between the 4 encryption algorithms. I will definitely use this for my Authentication class for the DotNetBlogger.

Lately I remember, speaking with the boy....

So Daniele is coming back from Italy finally....FINALLY!!!!

On a side note....4 monitors = NO NO!! I have whiplash in my neck from using all 4 monitors together. :-) Seems like 3 is the magic number for monitors. I'll go back to that soon.

Thursday, October 14, 2004

Wednesday, October 13, 2004

The Debate At ASU

I went to the MU/Hayden area of ASU this evening in order to take in the debate. Being a "once in a lifetime opportunity", I was really happy to have made it there. So I took some photos, and here they are.

This is the West Hall in front of Hayden lawn.
This is the CNN booth on Hayden lawn. Wolf Blitzer was there.
This is the fountain near the MU.
This is the decor just outside the MU across from the HARD TALK booth.
This is some local Spanish channel's interview session in front of the Hard Talk booth. Chris Matthews and Ron Reagan were there.
And finally this is me on Hayden lawn.

Press "Any Key"

How can I catch keyboard messages on an application-wide basis? You can implement the IMessageFilter interface in your main form. This amounts to adding an override for PreFilterMessage, and looking for the particular message you need to catch. In the sample, there are two forms, with several controls.
You'll notice that no matter what form or control has input focus, the escape key is caught in the PreFilterMessage override.

[C#]
// (minimal sketch of the idea -- the original sample had two forms with several controls)
public class MainForm : System.Windows.Forms.Form, IMessageFilter
{
    private const int WM_KEYDOWN = 0x0100;

    public MainForm()
    {
        // route every queued message through this form's filter
        Application.AddMessageFilter(this);
    }

    public bool PreFilterMessage(ref Message m)
    {
        // look for the particular message you need to catch
        if (m.Msg == WM_KEYDOWN && (Keys)(int)m.WParam == Keys.Escape)
        {
            MessageBox.Show("Escape caught, application-wide");
            return true;  // swallow the message
        }
        return false;     // let everything else through
    }
}

The Emperor's New Camera

Took some pics real quick with the new camera. So here they are. These are the books on my bookshelf. This is the view from my office. And finally, this is me, looking out to the view from my office, hoping to be outside on such a beautiful day.

The Presidential Debate

Today is the presidential debate at ASU. I won't be there to view it in person but I might just watch the thing on tv like millions of others. Apparently CNN and some other news places have put up stands on Hayden lawn. Maybe I'll go take pictures today with my NEW camera. The UPS people really do work hard. The driver from whom we got the package finally only got in at 9:30pm and was expected to be back at 6am. I guess the debate has affected delivery. Now, I need to work on some code for the reader. :-)

Tuesday, October 12, 2004

ID myself

I went to Oktoberfest with Elizabeth and co. and was denied access because "I didn't have a US ID." Idiots. I have an international driver's licence and they denied that. So I finally relented to these regionists (like racists but with regions) and got a local ID card. So now card me and I will provide you with a "region 1" ID.

My Picture Box with the Imp inside.

I bought a new camera. Yes I finally gave in to peer pressure and bought one so that I can take pictures of my whiteboard and save all my work. They came to deliver it today and obviously I wasn't home so they decided to ignore the note on the door to deliver it to the office. Decided, they would rather come back tomorrow when, again, I am not home. So with the help of a colleague, I shall go today at 7:30 and pick it up from them. They should pay me now for the transportation that I have to use because they didn't fulfill their obligation to deliver it to me.
Wouldn't it be cool if you could have an option to get something delivered to you, no matter where in the world you are. So I need to work on the reader tonight cause they are taking videos of it being used tomorrow at 9am. So I think I will sleep for a few hrs and then get back to school and code this. Been listening to some good ol' Garbage lately. Love that music. If you like Garbage, then you will definitely like Laika and Everything But The Girl. Interesting Tidbit - Laika was the first dog ever to go into space. Laika was also the first living thing in space too I believe. Dogs are so cool. :-) This is my younger cousin and my dog - taken quite a few years ago. My dog would sit at the corner where he is in the picture and watch the world go by...I used to love to sit with him and just enjoy a nice evening in Malawi. I'm glad I was able to spend so much time with my dog. You never realize how much you miss someone until they are gone from your life.

Friday the 13th.

Actually it was Friday the 8th. Twas the night of Andy's Birthday. We had dinner at Plaid. Went to P.F. Changs for drinks. Watched a movie. But there is more.... Daniel questioned the verbiage of the name Plaid and I would venture that he pissed off the server...I'm sure he didn't mean to. :-) Then Daniel told us about the "funnel." At P.F. Changs we bought drinks for Andy hoping he would enjoy his newfound freedom to drink. But he disliked the beer, thought the pina colada was a cough syrup and loved some white milky drink. So we threatened to bring the "funnel" if he didn't drink his drinks. Of course, he didn't so Daniel had them for him. While standing in line at the theatre, Daniel got everyone in the line to sing Happy Birthday to Andy. That was nice. Then Andy got a free movie pass. Finally we got out and went home. So ended a Friday of fun.

Monday, October 11, 2004

Work for the day

TO DO:
1) Do cse 531 hwk for tomorrow.
2) Get code for Reader done for tomorrow.
3) Get exam done for Wednesday.
4) Fall down dead from exhaustion Thursday.

I am eating some nice vegetable rice dish from In Season Deli. Quite a nice place to go for lunch. I think the rice dish is called Tabouli. Anyways, once I get this down, I'll wash it down with some ice cream from Coldstone. Coffee & mint mixed with pistachio and bananas. I call it the Banana-na-na-Surprise (Like in one of the Discworld novels..might have been Witches Abroad).

I realized I'm not a child anymore.

Today I realized just how old I am...or rather feel. This revelation came to me when I realized that I had bought a piece of chocolate cake and had failed to eat it completely. Having never wasted any chocolate, least of all chocolate cake, I was quite amazed when I packed up the take out box and dumped it in the garbage can. I am in need of a vacation. Away from everyone and in absolute desolation....peace and calm. Too bad I can't afford it right now. I need to get some sleep. This old man has battled the sea for too long.

Sunday, October 10, 2004

Project Inception: DotNetBlogger

Dot-Net-Blogger

Why? - I am starting work on an Application called DotNetBlogger. I have used w.blogger to post into my blog but it does not allow me to post into my FTP space.

Features - The features I am hoping to include are
1) Post into FTP
2) Post into Blogger
3) Easy & Intuitive UI
4) Accessible
5) Application.Config file to hold XML-RPC URI information
6) Multi-User application & custom configuration settings.
7) Secure using .Net's own Encryption API

TimeLine For DotNetBlogger - I'm hoping to have this completed by December 04. Busy with school and work right now so it might take a while. If you're interested in developing this with me, I'd be happy to have your help. If you have any comments or suggestions for features or anything else, please let me know. I'd be happy to incorporate those time willing.
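Since the Blogger posting will go over XML-RPC, here's a rough Python sketch of the round trip I have in mind. The blogger.getUserInfo method name and its (appkey, username, password) signature follow the old Blogger 1.0 API, and the server below is just a local stub standing in for the real endpoint, so treat the details as assumptions rather than gospel:

```python
# Rough sketch of the XML-RPC round trip DotNetBlogger will need to do.
# The method name and argument order mimic the old Blogger 1.0 API
# (an assumption here); the server is a local stub, not Blogger itself.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def get_user_info(appkey, username, password):
    # a real server would validate the credentials first
    return {"userid": "1", "nickname": username}

# stand-in for the real endpoint, bound to a random free port
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(get_user_info, "blogger.getUserInfo")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://127.0.0.1:%d" % port)
info = proxy.blogger.getUserInfo("0123456789ABCDEF", "sushant", "secret")
print(info["nickname"])  # sushant
server.shutdown()
```

The mechanics really are that simple: register a dotted method name on the server, and the client proxy turns attribute access into the remote call.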
Current Prototype - I have done a proof-of-concept program and been able to get user information for a given username and password. It took me about 10 min to figure out how to do that completely.

Thursday, October 07, 2004
http://sushantbhatia.blogspot.com/2004/10/
Intro

At Mumble, we're huge Flutter fans. But we're also JAM stack enthusiasts. The cool thing is: we can leverage both these passions and create a Flutter app powered by a headless CMS to make it flexible, dynamic, and future-proof. In this tutorial, you will build a newsfeed Flutter app using MBurger, a headless CMS. You'll define the data structure for your news through MBurger and use its Flutter SDK to fetch the data and show it in the application. If you're new to Flutter, go to flutter.dev and take a quick tour of what Flutter is and what its potential is. You won't regret it, I promise you! Ready to start? Let's do it.

🍔 Create the project on MBurger

The first thing that you have to do is to create a project on MBurger. It's quick, free and preeeetty easy. Go to the MBurger dashboard, log in with your credentials (or create a new free account) and then select "Create new project". Since we're starting from scratch, select "Create new project" again. Give your project a cool name and click on "Create". Or give it a boring name and click on "Create" anyway. Who cares? 😅

🔷 Create your Flutter app and install MBurger SDK

Now let's switch to Flutter. Create a new Flutter project; if you don't know how to do it you can follow this tutorial put together by the Flutter team. Add MBurger to your project by adding this to your package's pubspec.yaml file:

dependencies:
  mburger: ^0.0.1

and install it from the command line:

$ flutter pub get

If you've installed the MBurger package correctly, you will be able to import MBurger and set it up. First we will need an API key to connect the MBurger package to the project you've just created. Head over to the MBurger dashboard and create a new API key by navigating to Control Panel -> API Keys, insert a name for the key and click on "Create". Now it's time to set up MBurger in your app. Open your main.dart file and import MBurger by adding this line at the top.
import 'package:mburger/mburger.dart';

Then set the API key you've just created like this:

void main() {
  // Replace the below string with the API token you get from MBurger
  MBManager.shared.apiToken = 'YOUR_API_TOKEN';
  runApp(MyApp());
}

📰 News: Create the news block on MBurger

Click on "Add Block" and create a new block. A block is essentially a data model and, in this case, it will represent the main content of your app: a list of news. Give it a name, pick an icon (it's just for internal purposes) and click on "Create". Now we have to define the structure of the block. We need to tell MBurger that a news block is composed of an image, a title and some markdown content. Go to the "Content Structure" section of the block, and add a new image element, calling it "Image". Try it yourself, go ahead! Create a "Text" element and call it "Title", and a "Markdown" element and call it "Content". Does your "Content Structure" page resemble this screenshot below? If so, great! You did it!

Create your first section

Okay, let's create some actual content now. A section is just an instance of the block which contains the data you want to show in your app. To rephrase, the block is the abstract structure of the news while the sections are actual news with pictures, text, etc. To create your first News, select your News block in the left menu, head over to the "Content List" tab and click on "Add New News". In the page you've just opened, you should see the elements you've picked when building the News block (Title, Image and Content). Fill each input with the content you wish to see on your home page and click the "Save" button. Congratulations, you've created your first section! 👏🤩

Create the news section in your app

Okay, so now we've created some content on MBurger, basically exposing some APIs. Now we have to retrieve that section and show it in our app.
Create a new package in your app and call it model; we will put here all the classes that represent our MBurger sections. Create a new news.dart file with this code:

import 'package:mburger/elements/mb_markdown_element.dart';
import 'package:mburger/mburger.dart';

class News {
  String image;
  String title;
  String content;

  News.fromMBurgerSection(MBSection section) {
    section.elements.forEach((key, value) {
      if (key == 'image' && value is MBImagesElement) {
        image = value.firstImage()?.url;
      } else if (key == 'title' && value is MBTextElement) {
        title = value.value;
      } else if (key == 'content' && value is MBMarkdownElement) {
        content = value.value;
      }
    });
  }
}

Have you noticed anything? This class has the same elements we inserted in the section we've created in MBurger. It has an image, a title and a content and will be initialized from an MBSection object, which is the object that will be returned by the MBurger SDK. The elements property of the MBSection is a Map; the keys are the names of the elements that you've created in MBurger, and the values are instances of MBElement subclasses, e.g. the title element is an MBTextElement. Now that we have created the model object, we need to create the widget that fetches the sections from MBurger and shows them. Create a new package and call it news; we will put here all the widgets of our newsfeed. The first Widget that we are going to create is the Scaffold, the main view with the list of all the news. Create a new dart file and call it news_scaffold.dart with the following content.
import 'package:flutter/material.dart';
import 'package:mburger/mburger.dart';

import '../model/news.dart';

class NewsScaffold extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('News'),
      ),
      body: FutureBuilder<List<News>>(
        future: _news(),
        builder: (context, snapshot) => Container(),
      ),
    );
  }

  Future<List<News>> _news() async {
    MBPaginatedResponse<MBSection> homeSections =
        await MBManager.shared.getSections(
      blockId: BLOCK_ID,
      includeElements: true,
    );
    return homeSections.items.map((s) => News.fromMBurgerSection(s)).toList();
  }
}

There are a lot of things going on here. First we have a FutureBuilder: it takes a future as a parameter and calls the builder to create the widget, and the list of news will arrive in the snapshot parameter of the builder callback. The future parameter of the FutureBuilder is the other function contained in the Scaffold: it fetches the sections from MBurger with the SDK and converts them to News objects using the .map function. You just need to change BLOCK_ID to the id of your block, which you can find under "Content Structure" -> "Block Settings" -> "ID".

Now we have to create the list of news, because at the moment we're returning just a Container() from the builder function, so our view will look empty. Add this function to your scaffold and change the return of the FutureBuilder to this: builder: (context, snapshot) => _newsList(context, snapshot.data).
// The list of all the news
Widget _newsList(BuildContext context, List<News> news) {
  if (news == null) {
    return Container();
  }
  return ListView.builder(
    itemCount: news.length,
    itemBuilder: (context, index) => _newsListTile(context, news[index]),
  );
}

// A tile of the list that represents the news, it's a card with the image of the news and its title
Widget _newsListTile(BuildContext context, News news) {
  return Padding(
    padding: const EdgeInsets.all(8.0),
    child: Card(
      child: ListTile(
        contentPadding: const EdgeInsets.all(10),
        leading: ClipRRect(
          borderRadius: BorderRadius.all(Radius.circular(8)),
          child: AspectRatio(
            aspectRatio: 1,
            child: Image.network(
              news.image,
              fit: BoxFit.cover,
            ),
          ),
        ),
        title: Text(news.title),
      ),
    ),
  );
}

Now we have to show the content of the news when the user taps on the ListTile. Add the onTap parameter to the ListTile and connect it to the following function: onTap: () => _showNewsDetail(context, news). Don't worry if there's an error; it's because we haven't created the NewsDetailScaffold class yet. We will do it in the next section.

void _showNewsDetail(BuildContext context, News news) {
  Navigator.of(context).push(
    MaterialPageRoute(
      builder: (context) => NewsDetailScaffold(news: news),
    ),
  );
}

News Detail Scaffold

So, let's create the detail of our news. We will create a new Scaffold with all the contents of the news. We will use the flutter_markdown package to render the content of the news, so add it to your pubspec.yaml and run flutter pub get. Create a new dart file called news_detail_scaffold.dart in the news package with the following content. It's a very simple Scaffold: the news is passed to it when it's initialized (in our case when the route is created) and it shows the news content in a list, with the image at the top and the markdown content below it.
import 'package:flutter/material.dart';
import 'package:flutter_markdown/flutter_markdown.dart';

import '../model/news.dart';

class NewsDetailScaffold extends StatelessWidget {
  final News news;

  const NewsDetailScaffold({
    Key key,
    @required this.news,
  }) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(news.title),
      ),
      body: ListView(
        children: [
          AspectRatio(
            aspectRatio: 3 / 2,
            child: Image.network(
              news.image,
              fit: BoxFit.cover,
            ),
          ),
          Padding(
            padding: const EdgeInsets.all(20.0),
            child: MarkdownBody(
              data: news.content,
            ),
          ),
        ],
      ),
    );
  }
}

Where to Go From Here?

If you've reached this paragraph you should have your new newsfeed app up and running. If that's not the case you can use the MBurger template and GitHub repo below as a starting point for your app; you just need to insert the correct data in the constants.dart file.

Download the sample
Download MBurger Template

You can try to add a lot more functionalities to your app; the only limit is your imagination. Here are a few examples of what you can build next:

- Add a date property to the news using the availableAt property of MBSection.
- Change the UI of your app, which is pretty basic at the moment: add more colors, change the layout of the list tiles, try to build a grid instead of a list.
- Build a Home for your app and use MBurger to edit its content.
- Use BLoC to manage the state of your app. BLoC is one of the most popular approaches when talking about Flutter; try to use it in your newsfeed app. You can find out more at this link.
- Add pull to refresh to the list of news using this package.
- Add pagination using MBPaginationParameter and a ScrollController.
- Catch and manage exceptions that can occur when fetching sections from MBurger, e.g. if your token is wrong or if you request sections with a wrong block id an MBException is raised; try to catch it and show an alert.
- Add push notification functionality using the mbmessages package of MBurger.

Hey, I like this MBurger!

We're super happy to know you've liked MBurger. We're just in the process of launching it and we'd appreciate having you on board as a tester/contributor. Sounds good? Here's what you can do then:

- Join us on Slack here and get involved or
- Reach out and say hi to the team via email or
- Simply keep using MBurger to build awesome stuff! 🚀

Thank you for reading this tutorial. Did you like it? Let us know by clapping or dropping us a comment. Do you like to hunt for bugs? Go ahead, do your worst! 😉

Discussion (3)

Thanks for this guide! A screenshot of the complete app would be helpful

Hey, thanks for the comment, you can find the complete app and screenshots here: github.com/Mumble-SRL/MBurger-Flut... The UI is made using Widgets from the material package (e.g. ListTile, Image, Text) but it's easily customizable.
https://dev.to/lorenzoliveto/have-you-tried-headless-flutter-it-takes-10-minutes-and-it-will-blow-your-mind-2men
Other Alias
    tmpnam_r

SYNOPSIS
    #include <stdio.h>

    char *tmpnam(char *s);

DESCRIPTION
    Note: Avoid use of tmpnam();

RETURN VALUE
    The tmpnam() function returns a pointer to a unique temporary filename, or NULL if a unique name cannot be generated.

ERRORS
    No errors are defined.

ATTRIBUTES
    For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
    SVr4, 4.3BSD, C89, C99, POSIX.1-2001. POSIX.1-2008 marks tmpnam() as obsolete.

NOTES
    ... and an implementation is provided in glibc.

COLOPHON
    This page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
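Since POSIX.1-2008 marks tmpnam() obsolete, the usual replacement is mkstemp(3), which creates and opens the file in one atomic step. As an illustrative aside (not part of the page above), the same pattern is easy to demonstrate through Python's tempfile module, which wraps mkstemp():

```python
# tmpnam(3) is race-prone: another process can claim the generated name
# between name generation and open().  mkstemp(3) creates and opens the
# file atomically; tempfile.mkstemp() is Python's wrapper around it.
import os
import tempfile

fd, path = tempfile.mkstemp(prefix="demo-", suffix=".tmp")
try:
    os.write(fd, b"scratch data")
finally:
    os.close(fd)

assert os.path.exists(path)
os.unlink(path)  # unlike tmpfile(3), the caller must remove the file
```

The key difference from tmpnam() is that the open file descriptor is returned together with the name, so there is no window in which another process can hijack the path.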
http://manpages.org/tmpnam_r/3
i have done a simple program, which i asked..(don't know how many days ago.) but i find that there is a problem.. if i give an input (for example) 1234 it shall reverse back 4321 (it does) when i key in 100, by right it should return 001, but it returned as 1. my question is how to make that two zeros in front visible. (is there any mistake(s) for my "this" program??)

#include <iostream>
using namespace std;

long reverse();

int main()
{
    long a;
    a = reverse();
    cout << "The reversed number is " << a << endl;
    return 0;
}

long reverse()
{
    long x, y, z = 0;
    cout << "Enter an integer to be reversed: ";
    cin >> x;
    while (x > 0){
        y = x%10;
        x = x/10;
        z = 10 * z + y;
    }
    return z;
}

thank you....
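The arithmetic reverse can't keep leading zeros, because an integer has no notion of them — 001 and 1 are the same number. Keeping the digits as text does preserve them; a quick Python sketch of the idea (just an illustration of the approach, not a drop-in fix for the C++ above):

```python
# An integer cannot carry leading zeros (001 == 1), so reverse the
# digits as *text* instead of rebuilding a number arithmetically.
def reverse_digits(n):
    return str(n)[::-1]

print(reverse_digits(1234))  # 4321
print(reverse_digits(100))   # 001
```

The same trick works in C++ by reading the input into a string (or converting the number to one) and reversing the characters.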
https://www.daniweb.com/programming/software-development/threads/8506/reversing-number-problem
Created on 2008-06-09 11:02 by bohdan, last changed 2009-05-03 22:06 by gregory.p.smith. This issue is now closed.

In urllib2.AbstractHTTPHandler.do_open, the following line creates a circular link:

r.recv = r.read

[r.read is a bound method, so it contains a reference to 'r'. Therefore, r now refers to itself.] If the GC is disabled or doesn't run often, this creates a FD leak. How to reproduce:

import gc
import urllib2

u = urllib2.urlopen("")
s = [ u.fp._sock.fp._sock ]
u.close()
del u
print gc.get_referrers(s[0])

[<socket._fileobject object at 0xf7d42c34>, [<socket object, fd=4, family=2, type=1, protocol=6>]]

I would expect that only one reference to the socket would exist (the "s" list itself). I can reproduce with 2.4; the problem seems to still exist in SVN HEAD.

Since the socket object is added to a list, a reference to the object always exists right? That would mean that it would not be garbage collected as long as the reference exists. On the other hand, it should also be noted that in the close method, the socket is not explicitly closed and for a single urlopen, at least 3 sockets are opened.

The list is not the problem. The problem is the other reference, from "<socket._fileobject object at 0xf7d42c34>". Also note that the workaround (u.fp.recv = None) removes the second reference. This is fine (at least in CPython), because the socket is destroyed when the refcount reaches zero, thus calling the finalizer.

So if I add a:

class _WrapForRecv:
    def __init__(self, obj):
        self.__obj = obj

    def __getattr__(self, name):
        if name == "recv":
            name = "read"
        return getattr(self.__obj, name)

...and then change:

r.recv = r.read

...into:

r = _WrapForRecv(r)

...it stops the leak, and afaics nothing bad happens.

Has (non-unittest) test and proposed (non-diff) patch inline.

I can't reproduce in python 2.5.4, 2.6.2, or 2.7 trunk (though I can with 2.4.6 and 2.5) on mac & linux.
Quick bisection suggests that it was fixed in r53511 while solving a related bug, and the explanation given there is consistent with the symptom here: the _fileobject doesn't close itself, and r53511 makes sure that it does. Suggest closing as fixed.

Not reproducible in head as stated.
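The self-reference at the heart of this issue is easy to reproduce in isolation: a bound method holds a reference to its instance, so assigning one to an attribute of that same instance makes the object reachable only through a cycle. A minimal sketch in modern Python (the class name is made up for illustration; it just stands in for urllib2's response object):

```python
# A bound method keeps a reference to its instance, so storing one on
# that same instance creates a reference cycle: the object now survives
# until the cyclic GC runs, instead of dying when its refcount hits zero.
import gc
import weakref

gc.disable()  # keep the automatic collector out of the demonstration

class Response:
    def read(self):
        return b""

r = Response()
r.recv = r.read          # r now (indirectly) references itself
probe = weakref.ref(r)

del r
assert probe() is not None   # the cycle keeps the object alive

gc.collect()                 # only the cycle collector can free it
assert probe() is None

gc.enable()
```

With a socket behind the object, that delayed collection is exactly what turns into the file-descriptor leak described above.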
https://bugs.python.org/issue3066
This article describes the steps I have taken to set up an Open Source tool chain I use to write C++ programs. In it, I show how I:

- install the Notepad++ editor
- install the MinGW compiler tool chain
- install MSYS
- set up the environment variables
- build and run a Hello World program
- add one-click compile and run commands to Notepad++

A while back, I decided to share the source code for an application I developed. I had written the program using Microsoft C++, Visual Studio, and made heavy use of MFC. I found that there was a pool of people who would be willing to help, but they were unable to help because they didn't have Microsoft C++. For this reason, I started investigating alternatives to using proprietary tools, and in this article, I share my experiences. A secondary reason for writing this article is that I want to write further articles about other technologies I have found out about. This article will allow people to know the tool chain I use so they can follow further articles. To write this tutorial, I have started off with a clean install of Windows XP Home edition. I am actually using a VMware virtual machine for this. The tutorial should work on most Windows versions.

Notepad++ is an Open Source text editor that includes features like tabbed browsing and syntax highlighting. It is also customizable, which will allow me to add menu items to compile and run the programs with one click. Download the latest version at. I downloaded npp.5.4.3.Installer.exe. Setting it up is easy, and I used all the defaults so I won't go through the steps here. You can check Notepad++ works by running the application which should be in your Program Files menu. Note: If you want to find out more about Notepad++, the website is.

MinGW is the actual compiler tool chain. Install this using the auto installer. I selected options as follows: For everything else, I use the default settings.

Although not strictly necessary, I use MSYS. This is a minimal Unix style SYStem for Windows, and it sets up a Linux like environment on a Windows machine. I download and use various libraries with my programs (zipping libraries, GTK+, MySQL libraries etc.)
and the make files frequently contain Unix commands. The main example is the make clean section, which uses rm rather than del. MSYS will make using these a lot easier. Go to. You can select the current release section, and it will expand to show the current release. I downloaded MSYS-1.0.10.exe. I installed it with all the default setup and options. Note: If you want to find out more about MSYS and MinGW, the website is.

We need the MinGW bin directory and the MSYS bin directory to be in the path. To do this, go to Control Panel -> System -> Advanced tab -> Environment Variables button. Under the system variables section, find the path variable and press Edit. Add the MinGW directory to the end. (Addresses are separated by ; so I need to add ;C:\MinGW\bin;C:\msys\1.0\bin on my system.) While you are there, add a user variable called HOME and set it to the value C:\msys\1.0\home\<<WINDOWS USER NAME>>.

Note: Don't surround the path with "'s. This will cause make to not find the directory. (Although the normal shells will.)

Check if this has worked by running the command prompt and typing gcc --help. If you get a command not found error, it means the path is not correctly set up.
Create the Makefile: $(warning Starting Makefile) CXX=g++ main.exe: main.cpp $(CXX) main.cpp -o main.exe clean: -rm main.exe Test the setup by making and running the program: You should see the text Hello World appear when the program has run. Finally, check the clean up works OK: You should see that the exe file has gone. We can set up Notepad++ shortcuts. First, create two batch files using these steps: :##BATCH to run fron notepad++ c: cd\ cd %1 make pause main.exe pause :##BATCH to run fron notepad++ c: cd\ cd %1 make clean pause Now, we can set up Notepad++ to run these batch files: We now have a tool chain set up. The directory c:\code contains the programs and we can create a sub directory for new programs. Each sub directory will need a makefile and the program files. The makefile can be changed depending on what program files are needed. You can use Notepad++ for editing, and use the RUN CODE menu option to make and run the program. The run code command will run make in the directory the current file in Notepad++ is running. This may all seem pretty simple and not worth an article, but I plan to write more articles and need to share how my system is set up for them to make sense. This is the process I have used to set up this tool chain. I am interested in hearing from anyone who tries to follow these steps and hear about your experiences. If you can help me change the instructions to make it easier for others, that would be great. I am also interested to know what people think about the way I have setup my tool chain. It's always good to learn more and improve the way you do things. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) General News Suggestion Question Bug Answer Joke Rant Admin Man throws away trove of Bitcoin worth $7.5 million
http://www.codeproject.com/Articles/37059/Setting-up-an-Open-Source-tool-chain?msg=3083183
CC-MAIN-2013-48
refinedweb
1,030
73.98
As an Account Manager for an IT consulting firm, my customers rely on my firm’s resources and myself to provide advice and guidance on some of their most difficult challenges. A development manager might need to improve his or her team’s build process or a CMO might need help leveraging mobile technologies to promote his or her brand. While the challenges vary by title and company, one challenge that has been reoccurring lately is how to increase user adoption and specifically user adoption for employees for certain applications inside their enterprise. Why is user adoption so important to these customers? The answer is pretty simple. These customers are investing a large amount of money on these application deployments and adoption is a key measurement of success and ROI. Why not just mandate them? There are three reasons executives avoid mandating systems such as SharePoint which are: 1. Employee unrest 2. Technologically difficult 3. Increased cost Employee unrest is a bit tongue-and-check but many IT executives might not want to spend the political capital necessary to force users to use the new system. Also they may not be 100% convinced the system will meet everyone’s needs. Secondly the technology might be very challenging to prevent users from certain behaviors. For example suppose you wanted users to stop attaching documents in e-mail and instead store those in your ECM solution and link to them. How would you do that? Well the actual preventive measure would be easy just set the attachment size limit in Exchange to 1KB. However, this will present a problem when the user needs to e-mail the document to an external party who doesn’t have access. What if it is an attachment type that your ECM solution does not support? You can see there are many exceptions to just this one behavior. Lastly, there is the technology and training cost associated with a mandatory rollout. 
Optional rollouts are much easier in terms of costs and planning; just ask any ERP consultant. Once we’ve understood that adoption is important and we are going to use it to measure the success of our project, the next step is to understand what factors influence adoption. There are eight factors that influence adoption. Some of these are factors you can influence with your project’s adoption strategy while others are personal factors that can vary by employee. These factors you should be aware of but will not be able to influence directly. The factors in order are: 1. User Involvement 2. User experience 3. Perceived Value 4. Communications Quality 5. Training Quality 6. Peer Influence (needs to be communicated) 7. Leadership and organization pressure (How involved are they?) 8. Technology comfort and tolerance for change User Involvement This is the number one factor in the adoption of a new system so you should pay close attention to it. The more your users feel involved in the designing of the system and the decision making process the more positive they will perceive the system and adopt it. In fact those with a high degree of buy-in will influence other employees to use the system creating a bit of a grass roots campaign. The first approach we take to involving users is at the beginning of the requirements gathering process by creating and sending a survey. Send this to as many users as feasible, but be sure the people you ask are strategic. Typically, you only have a fixed amount of time so you have to make sure that every person you involve counts. The next step is user interviews. Since you cannot involve everyone, I usually recommend adding the most thorough responses to the round of user interviews. Finally be sure you communicate progress back to the survey respondents and interviewees so they know that their input was incorporated into the system. User involvement does not stop once the project is deployed. Be sure to solicit feedback after launch. 
I also recommend having a feedback link on every page of the site. User Experience Do not mistake User Experience for branding. While branding is an important part, UX focuses on all aspects of how the users will experience and interact with the system. This also includes the functionality of the system. Over the past several years, I have found many small UX improvements to established applications (e.g. SharePoint, Documentum, Dynamics CRM) that save the user time and make the system easier to use. An example of these improvements are pre-populating fields based on the user or previous selection. The investment of these improvements is small and save the organization training cost as well as improve the user experience thus improving user adoption. Perceived Value The Perceived Value factor is actually perceived value compared to perceived cost. This is how humans not just employees make most decisions. If I take the time (cost) to learn and do this, will it make my job faster / easier / more rewarding (benefit)? It is important to note that I said perceived value and benefit. This is what the employee thinks not what you think. You might know if the user takes the time to learn the system it will improve their productivity. However the employee has to truly believe the benefit in order to change their behavior. Communication and Promotion Quality While communications and promotion are related, I’m going valuable to address them separately. Communication requires a sender, a message, and an intended recipient and is defined as the action of distributing information. How you announce the system, prepare users for it and keep them updated on the progress is communication and is vital to your adoption strategy. I recommend that most communications should focus on the features of the system and how these features will benefit the user. 
Keep in mind most users can only absorb one to three new features at a time, so if your system has lots of discoverable features (Office, SharePoint), it is a good idea to send out tips and tricks as part of your communications plan. While promotion does communicate an idea or ideas about the application, it does so with a bit more flair. The message should be direct and simple and, again, focus on what is in it for Joe User. The amount of flair and pizzazz will vary by organization and industry. Here are some examples of promotional activities:

- Have a launch event that includes some prizes.
- Commercials. You can do these fairly inexpensively.
- T-shirts, stress balls, pens, or other promotional material.
- Show screencasts or videos in high foot-traffic locations.

Training Quality

In my experience this is the most ignored adoption factor on the list. Far too often I see system projects with a million-dollar budget yet a training spend of $10,000. Symptoms of a low training budget might be lower user satisfaction with the system or a higher than anticipated number of support calls. Your project should have a training plan, and at a minimum it should state:

- Who your users are, organized into groups
- What details each group needs
- What the appropriate training medium is for each group

More than likely you will want more cost-effective training for end users. Webinars or computer-based training programs are cheaper than in-person training for a large number of users. Another strategy I have seen work is creating short videos with specific content that explain how to do something. Embed these videos throughout the system; for example, you could place a video that explains how to search for a document next to the search bar in your system. One training strategy I've found effective is "train the trainer". If the target user base is large and geographically dispersed, you could train certain power users in each geographical area.
These power users can then train end users without your involvement. The benefit is that the trainer is closer to the users, who can use them as a resource after the training has completed. Just be sure to sample the training classes from time to time to ensure the material is being delivered how you intended. Besides cost concerns, another reason for a lack of training is employee time. The key is making sure the training is effective and convincing the doubters that it will save more time in the long run. I usually recommend doing this in the communication plan, which should tell users how the system will benefit them and also the benefits of the training. I often get asked whether training should be mandatory or optional. I've seen companies make access to the system dependent on employees taking training classes or online courses. I disagree with this, for the simple fact that if you are deploying an off-the-shelf product, a certain number of employees will already have had exposure to it at past companies. Besides, if you do a good job addressing the other adoption factors, employees will be lined up to take training classes. Training is one aspect that differs between widespread consumer-facing applications and enterprise applications. Training on consumer applications is usually not possible because of the number of users who would need to be trained. I do not think that Facebook will be creating a training course on how to post a status update anytime soon; although, with everything people post, maybe they should come out with a "What Not to Post to Facebook" course.

Peer Influence

It is pretty obvious that users influence other users. How do we harness that power during our system deployment to increase user adoption? The level of peer influence is affected by organizational culture, industry, and employee proximity to one another. Many of these are beyond the project team's control, but there are two things that can be done: find the influencers and nurture them. Find the influencers in training classes, during requirements gathering, and through employee surveys. It doesn't matter where; just find them. Once you have your list, cater to them by providing additional support and training. You should also update them on how their suggestions are being implemented. It is important to dispel the misconception that these influencers are in official mentoring roles or leadership positions. An influencer can be Betty at reception, because she has been in the department 20 years and knows where to find any document, or the guy in accounting who can do anything in Excel. As part of your adoption plan, spend the time to identify the true influencers for your system and do not just rely on the organizational chart.

Leadership and Organization Pressure

Leadership and organization pressure is similar to peer influence but originates from management and the executive level. This factor is interesting because, in my experience, leadership cannot directly improve user adoption; however, a lack of leadership support can negatively impact it. Take for example a launch event that the highest-level employee in the office skips to catch up on email. Employees will infer from that action that the system is not a priority, and they will consciously or subconsciously deprioritize the tasks required to adopt the new system. So be sure during your system rollout that you have senior leadership buy-in and involvement.

Comfort with Technology and Tolerance for Change

Technology comfort and tolerance for change are two personal factors that affect adoption, and your project team will not be able to directly influence or control them. No matter how great the system, the training, and the communications, there will be a small percentage of your workforce that is technophobic and resistant to change.
The good news is that once you've got the larger group using the system, the technophobes will come around. If you focus on these eight factors at the start of your project, during your project, and after deployment, I can promise you the system will be used by more users than it would be without them. Be sure that prior to rollout you have a baseline of expected user adoption. This can come from previous systems or, if that data is not available, from other unrelated systems. After the launch of the system you should measure its usage. Comparing these two numbers will help justify the cost of the project, and possibly even your salary.

Spoke today at the SQL Saturday in Baton Rouge. Great event, very well organized. My session was almost standing room only.

Code | Slides

Last week at the Dallas TechFest I spoke about using TypeMock to unit test your SharePoint code. I want to follow that up with a few blog posts about mocking, SharePoint, and TypeMock. This first post will just be a quick intro on how to get set up. Before I discuss how to get started unit testing SharePoint code, I will take a step back and ask if it is even worth the effort. The answer is that it depends on the size and complexity of the SharePoint project. I would say unit testing SharePoint is for large SharePoint customization projects where you have numerous web parts, event receivers, workflows, and custom business classes. These larger development projects should be treated like any other .NET project and be unit tested in the same way. I'm going to assume that you haven't started a SharePoint project yet. If you have, then you should skip to step 3. One of the first things you need to do is add a reference to the SharePoint DLL. Depending on what classes in the object model you are referencing, you may have to add additional DLLs. Here you would also configure your build process and how your WSP will get generated. I recommend automating this with a tool like WSPBuilder.
Of course, with SP2010 this gets a lot easier. Next you will also want to create a unit test project that will contain the tests for your entire code base. Then we need to download TypeMock Isolator for .NET; you don't need a separate install for SharePoint.* Then add a reference to the TypeMock DLL from your unit testing project.

*Must have TypeMock Isolator for SharePoint.

Once we start to create some unit tests, we are going to realize we need some help. The first challenge is that we don't want to test the SharePoint code that our business logic is calling. One reason is that this would become an integration test and make our tests slow, since they would have to go out to SharePoint and execute the code that we had written. The other reason is that I might want an environment where not all my developers are running a SharePoint server. They could then write code, unit test it, and deploy it to some type of integration environment. Let's take an example. The method UpdateCustomerSPListItem is a pretty simple method that takes a first name parameter and updates the custom list item with that ID (the second parameter). Now this method is pretty simple, but imagine that it contained more business rules and logic. The only rule we have right now is that the first name cannot be null or empty. Notice that this method has several dependencies: SPSite, SPListItem, SPWeb, SPQuery, etc. Those all need to be stubbed out in order to fake this class. We would need to create a FakeSPListItem class that has an Update method, a FakeSPWeb class that has a constructor, each of these returning fake values. Take a few minutes and go through the rest of the dependencies in your head; as you can see, this would be a lot of work. The great thing about mocking frameworks is that they create these fake classes for us. Now this is great, but most frameworks need an interface to create the fake object.
Not only do SharePoint classes not have interfaces, but many of the classes are sealed or have private constructors. This is where TypeMock comes in. As far as I know, it is the only framework that can mock these types of classes. We can now write a test method that will test UpdateCustomerSPListItem without having to update an item on a SharePoint server, nor do we have to stub out every class in the SharePoint namespace. The first step is to create a test method and add a recording using statement. Note I'm using the BDD syntax for naming test methods.

[TestMethod, VerifyMocks]
public void Update_Customer_ListItem_When_Given_ID_AND_Name()
{
    using (RecordExpectations rec = RecorderManager.StartRecording())
    {
    }
}

Next we need to plug in the steps of our UpdateCustomerSPListItem method that we want to test. Just like in the method under test, we create SPSite, SPWeb, and SPList objects. We then write a CAML query to get the item we want to update. The difference in our test is that we are not interested in what the query is, so we can just pass in an SPQuery object without a value. Next we need to call our method under test and add an assert statement like so:

var result = classUnderTest.UpdateCustomerSPListItem("Kyle", "1");
Assert.IsTrue(result);

Now this does test that the method returns the correct value, but we also want to make sure that the method executes as we expected; for example, making sure we call item.Update. In my opinion this is the real value of using a mocking framework. In order to verify expectations we add the VerifyMocks attribute. The entire test method looks like this:

[TestMethod, VerifyMocks]
public void Update_Customer_ListItem_When_Given_ID_AND_Name()
{
    using (RecordExpectations rec = RecorderManager.StartRecording())
    {
    }

    var result = classUnderTest.UpdateCustomerSPListItem("Kyle", "1");
    Assert.IsTrue(result);
}

There you go: your first unit test for SharePoint code using TypeMock.
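Putting those pieces together, the method under test and the fully recorded test might look something like the sketch below. The site URL, the list name "Customers", the field name "FirstName", and the CAML details are my assumptions, not from the original post; the TypeMock calls are the ones named above.

```csharp
// Sketch of the method under test (list, field, and URL names are assumptions).
public bool UpdateCustomerSPListItem(string firstName, string id)
{
    if (string.IsNullOrEmpty(firstName))
        return false;

    using (SPSite site = new SPSite("http://server"))
    using (SPWeb web = site.OpenWeb())
    {
        SPList list = web.Lists["Customers"];
        SPQuery query = new SPQuery();
        query.Query =
            "<Where><Eq><FieldRef Name='ID'/><Value Type='Counter'>" + id + "</Value></Eq></Where>";
        SPListItem item = list.GetItems(query)[0];
        item["FirstName"] = firstName;
        item.Update();
        return true;
    }
}

// Sketch of the finished test: the recording block repeats the SharePoint
// calls so TypeMock fakes them and, via VerifyMocks, checks they were made.
[TestMethod, VerifyMocks]
public void Update_Customer_ListItem_When_Given_ID_AND_Name()
{
    using (RecordExpectations rec = RecorderManager.StartRecording())
    {
        SPSite site = new SPSite("http://server");
        SPWeb web = site.OpenWeb();
        SPList list = web.Lists["Customers"];
        SPListItem item = list.GetItems(new SPQuery())[0];
        item.Update();
    }

    var result = classUnderTest.UpdateCustomerSPListItem("Kyle", "1");
    Assert.IsTrue(result);
}
```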
My plan is to go into more detail on how to mock out other SharePoint objects like event receivers, web parts, and workflows.

Code: Slides:

Speaking tomorrow at 9am (July 29th) about unit testing SharePoint with TypeMock. I'm a little scared seeing that I'm the only SharePoint session for the entire day. :) Here is more info about the event:

I spoke yesterday at SharePoint Saturday in Houston.

Building Rich Interactive Applications with SharePoint 2010
PowerShell Basics in SharePoint 2010

One big improvement in SP2010 is in the rendered markup. You will see fewer tables and more lists and div elements. When I saw this, I immediately thought of what it would allow with jQuery. The Ribbon control and menus in SP2010 are rendered as unordered lists, which allows us to apply interactions to them from the jQuery UI framework. In this post I am going to walk you through an example in which we apply the sortable interaction to the SharePoint menu. After deploying our web part, the user can then reorder the menu items. In the screenshots below I am moving the Browse menu item to the right of the Page menu item. Now, in this example I don't save the result, so a page refresh would reset the user's sorting, but you could easily write some JavaScript code that saved the order to a SharePoint list.

Step 1 – Create a new Visual Web Part project in VS2010. There are already many articles on this subject, so I will allow you to just go ahead and Google this if you need to.

Step 2 – Add script references to the user control for jQuery and the jQuery UI sortable interaction:

<script type="text/javascript" src="...sortable.js"></script>

Step 3 – The rest of the code in the Visual Web Part:

<script type="text/javascript">
$(document).ready(function () {
    $(".ms-cui-tts").sortable();
});
</script>

This is all the code that is required! The class of the UL element that wraps the menu we are trying to make sortable is ms-cui-tts.
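The save-the-order idea mentioned above can be sketched with sortable's update event, which fires after the user drops an item. The endpoint below is hypothetical; in a real deployment you would post to your own page or service that writes the order to a SharePoint list.

```javascript
$(document).ready(function () {
    $(".ms-cui-tts").sortable({
        update: function (event, ui) {
            // toArray() returns the ids of the <li> elements in their new order
            var order = $(this).sortable("toArray");
            // Hypothetical endpoint; replace with your own list-update handler
            $.post("/_layouts/SaveMenuOrder.aspx", { order: order.join(",") });
        }
    });
});
```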
To keep this example simple I used a Visual Web Part, but it would be better to add the above JavaScript block to the master page so that all pages have the same functionality.

Step 2 – Add script references to the user control for jQuery and the jQuery UI draggable interaction:

<script type="text/javascript" src="...draggable.js"></script>

Step 3 – The rest of the code in the Visual Web Part. To finish off the Visual Web Part (remember that it is just a user control), add this code:

<script type="text/javascript">
$(document).ready(function () {
    $("#draggable").draggable();
});
</script>

Even before the details of SharePoint 2010 were announced, I'd been wanting to learn PowerShell. Of course, now that SP2010 relies on PowerShell almost entirely, I have extra motivation to learn the tool. In this post I'll document, step by step, what I did to get familiar with the tool. Readers new to PowerShell can follow along to help them get started. I first went to this link and downloaded the PowerShell cheat sheet here. This way, as I learn commands, I don't really have to remember them. After fixing some printer issues, I read this article from James Kovacs here; his article gives you a good basis for how PowerShell commands are structured. After that I watched Hanselman's PowerShell tutorial on DnrTV, and then immediately installed PowerTab. This tool adds IntelliSense to PS, which is a huge help, and the best part is that it is free. Since my main reason for learning PowerShell was to use it with SharePoint, the final step was to read this article by Corey Roth, which shows you how to load up the SharePoint PowerShell commands.

In my last post I wrote about the Ribbon control. In this post I'm going to review three new UI features in SP 2010 that SharePoint itself uses but that you as a developer can leverage in your own applications.

Status Bar

The status bar is for persistent information like page status or version. It shows just below the ribbon control, and you can select four different background colors.
The JavaScript API is pretty simple:

SP.UI.Status.addStatus(strTitle, strHtml, atBeginning)
SP.UI.Status.updateStatus(sid, strHtml)
SP.UI.Status.removeStatus(sid)
SP.UI.Status.removeAllStatus(hide)
SP.UI.Status.setStatusPriColor(sid, strColor)

Notification Bar

The notification bar is less intrusive than the status bar and is used for more transient information. By default each message remains on the screen for five seconds. Let's look at the JavaScript to add and remove these notifications:

SP.UI.Notify.addNotification(strHtml, bSticky, tooltip, onclickHandler)
SP.UI.Notify.removeNotification(id)

Dialog Platform

One of the first things you will notice about SP 2010 is the effort the development team has put into reducing the number of page refreshes. One way to do that is to make use of modal dialog boxes; just create a new list item and you will see exactly what I am talking about. You can make use of this modal framework through the JavaScript API, passing in another web page or a DOM element. For example:

function myCallback(dialogResult, returnValue) { alert("Hello World!"); }
var options = { url: "/_layouts/somepage.aspx", width: 500, dialogReturnValueCallback: myCallback };
SP.UI.ModalDialog.showModalDialog(options);

I obtained the information in this post from Elisabeth Olson's UI presentation at the SharePoint 2009 Conference.

I'm giving a quick presentation on Telerik at the Tulsa .NET User's Group meeting tonight.
I go on at 6:45pm; pizza starts at 6pm.

TCC (Tulsa Community College) Northeast Campus
3727 East Apache
Tulsa, OK 74115
918-594-8000

We meet in the Main Academic/Administration Building, Seminar Center, Room 109.
On Tue, Aug 20, 2002 at 06:02:35PM -0300, Henrique de Moraes Holschuh wrote:
> On Tue, 20 Aug 2002, Luca Barbieri wrote:
> > Apparently the "different interpretation" is what I was assuming the
> > current one.

> Yeah, I was in a severe headache for a while because I too knew about the
> old interpretation apparently.

> > How about the implementing the GNU extension?

> It would be useful, yes. But I think it is completely out of the
> possibilities for the near future -- you need to get it upstream, and let it
> deploy first.

> So can we go with versioned symbols (plus -Bsymbolic in key libraries if
> just versioned symbols isn't enough for that particular library -- not many
> have an API so broken that -Bsymbolic is actually required when versioned
> symbols are in use).

FWIW, I find that -Bsymbolic tends to be useful in its own right; I've never met anyone who had a good reason for trying to override a library's internal references, but I have seen many cases where not using -Bsymbolic caused namespace conflicts and segfaults. This is a particularly popular source of bugs with Apache/PHP. It's just that -Bsymbolic doesn't solve this particular problem.

Steve Langasek
postmodern programmer
I need to create an UNBOUND method call to Plant to set up name and leaves and I don't know how. Any help is appreciated. My code:

class Plant(object):
    def __init__(self, name : str, leaves : int):
        self.plant_name = name
        self.leaves = leaves

    def __str__(self):
        return "{} {}".format(self.plant_name, self.leaves)

    def __eq__(self, plant1):
        if self.leaves == plant1.leaves:
            return self.leaves

    def __It__(self, plant1):
        if self.leaves < plant1.leaves:
            print("{} has more leaves than {}".format(plant1.plant_name, self.plant_name))
            return self.leaves < plant1.leaves
        elif self.leaves > plant1.leaves:
            print("{} has more leaves than {}".format(self.plant_name, plant1.plant_name))
            return self.leaves < plant1.leaves

class Flower(Plant):
    def __init__(self, color : str, petals : int):
        self.color = color
        self.petals = petals

    def pick_petal(self.petals)
        self.petals += 1

Create a new class called Flower. Flower is subclassed from the Plant class; so besides name and leaves, it adds 2 new attributes: color and petals. Color is a string that contains the color of the flower, and petals is an int that has the number of petals on the flower. You should be able to create an init method to set up the instance. Within the init you should make an UNBOUND method call to Plant to set up the name and leaves. In addition, create a method called pick_petal that decrements the number of petals on the flower.

The wording of the assignment is very strange. An "unbound method call" means you're calling a method on the class rather than on an instance of the class. That means something like Plant.some_method. The only sort of unbound call that makes sense in this context is to call the __init__ method of the base class. That seems to fulfill the requirement to "set up the name and leaves". It looks like this:

class Flower(Plant):
    def __init__(self, name, leaves, color, petals):
        Plant.__init__(self, ...)
        ...

You will need to pass in the appropriate arguments to __init__.
The first is self; the rest are defined by Plant.__init__ in the base class. You'll also need to fix the syntax of pick_petal: `def pick_petal(self.petals)` is not valid Python. It should be `def pick_petal(self):`, and per the assignment the body should decrement the count (`self.petals -= 1`, not `+=`). Note: generally speaking, a better solution is to call super rather than doing an unbound method call on the parent class's __init__. I'm not sure what you can do with that advice, though. Maybe the instructor is having you do inheritance the old way first before learning the new way? For this assignment you should probably use Plant.__init__(...) since that's what the assignment is explicitly asking you to do. You might follow up with the instructor to ask about super.
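For reference, a corrected version of the subclass might look like this. The guard against going below zero petals in pick_petal is my own assumption about sensible behavior, not part of the assignment.

```python
class Plant(object):
    def __init__(self, name, leaves):
        self.plant_name = name
        self.leaves = leaves

    def __str__(self):
        return "{} {}".format(self.plant_name, self.leaves)


class Flower(Plant):
    def __init__(self, name, leaves, color, petals):
        # Unbound call on the parent class, as the assignment asks;
        # super().__init__(name, leaves) is the more idiomatic alternative.
        Plant.__init__(self, name, leaves)
        self.color = color
        self.petals = petals

    def pick_petal(self):
        # Decrement the petal count (the question's version incremented it).
        if self.petals > 0:
            self.petals -= 1


rose = Flower("rose", 5, "red", 8)
rose.pick_petal()
print(rose)          # rose 5
print(rose.petals)   # 7
```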
The New PHP

Wake me when they fix namespaces (Score:4, Insightful)
It is nice to see that PHP is starting to grow up a little bit. They have a long way to go.

Re: (Score:2)

Re: (Score:2)
Wake up time. PHP actually has a pretty decent way to remove "garbage". First they make the compiler (and documentation) warn you about a feature being made obsolete in a future version, and then a few versions later they do remove the feature. Here is an example (quote from the manual [php.net]): As of PHP 5.3.0, you will get a warning saying that "call-time pass-by-reference" is deprecated when you use & in foo(&$a);. And as of PHP 5.4.0, call-time pass-by-reference was removed, so using it will raise a fatal error.

Re: (Score:2)
Yes, they've done it once or twice, with a tiny number of the headline issues, took an age to do so, and only did so because people were screaming about it.

Re: (Score:2)
Every time you break existing applications, you create systems that are stuck with old and buggy versions. That's bad enough normally, but it is a terrible idea in a language meant for writing Internet-facing apps. Dealing with detritus is preferable to burning down the house to get rid of it.

One question (Score:2)
Have they managed to keep from breaking crypt() recently?

Re:One question (Score:5, Informative)
yeah - [php.net] It's now pretty easy to do password hashing correctly.

Re: (Score:3)

Re: (Score:2)
That's, er, function hashing -... [php.net]

Perl vs PHP (Score:4, Interesting)
Being long in the tooth, I do all my web development via Perl using my own nice callback templating engine and, of course, CGI.pm. Nice separation of code and HTML: neither of the two finds itself in the same file as the other. Once in a while I have to do some repair work for customers in PHP, and in horror I find the HTML and code mixed together with wild abandon and massive use of global variables, and I wonder why PHP is so darn popular.
Re: (Score:3)
I've found that using the Smarty [wikipedia.org] template engine helps me avoid that situation in PHP, and the learning curve is fairly shallow.

Re: (Score:2)
Having seen some Perl web scripts that very much do not meet this description, and some PHP that was nicely templated, I can say with confidence that it is not the language that is at fault here.

Re: (Score:2)
Wow - I'm not sure you should be using a sample of bad existing code as an argument against PHP and FOR Perl. Yikes.

Why use the Zend engine at all? (Score:4, Interesting)
Many of the problems with PHP are from the crappy language implementation. I recently came across a Java implementation of the language. It's been around forever, but as I hadn't heard of it, I figure many people reading this thread haven't either. It's Quercus [caucho.com]. It's certainly worth a look as a Zend alternative.

Re: (Score:2)
> Many of the problems with PHP are from the crappy language implementation.
Yes, because switching to a subtly different language implementation is not going to cause any problems running code that was written for the standard PHP implementation.
> It's Quercus [caucho.com]. It's certainly worth a look as a Zend alternative.
That was released 7 years ago. No one appears to really use it. Do you really think that if it were such a great improvement over the Zend engine, people wouldn't be using it?

Still waiting (Score:4, Interesting)
I'm still waiting for PHP to be completely case sensitive, to have a sane scoping scheme, and to be really object oriented (can you say polymorphism).

Re: (Score:2)
PHP already has case-sensitive variable names. $Foo and $foo are always different variables. Function names, class names, and keywords (class, function, extends, if, while, etc.) are always case insensitive. However, constants are sometimes case sensitive, depending on their declaration.
I do a lot of PHP development, but these days it's only sane because I've been doing it so long that I understand many of its weirdnesses. It also helps that I use frameworks (Symfony 1 & 2) and, finally, a template engine (Twig).

register_globals (Score:2)
The beautiful thing is their lovely page explaining that it wasn't an insecure design, just one which "could be misused". I'd say that a feature that easy to "misuse" in ways that lead to security holes is, in fact, a pretty good example of an "insecure design".

A fractal of bad design. (Score:5, Insightful)
I don't normally like linking to blog posts, but this one pretty much sums up PHP for me: His analogy is very apt.

Moving to Python (Score:5, Informative)
It is shaping up to be one of those things where my only regret is not switching sooner. I was a huge defender of PHP for a long time, but that time is over. There are interesting things like HHVM that are another band-aid for PHP, but I am sick of making PHP work. I am sick of typing all those stupid dollar signs. I'll just say what so many have said before: "Python is like typing pseudocode, except you are actually coding." I don't look at my Python and shudder. PHP reminds me of some of my own projects where I changed course many times, leaving strange little architectures and changes in philosophy behind. The longer the project goes on and the more it changes direction, the more debris it leaves behind. It is not necessarily broken, just sort of all off. The one tiny problem with Python on the web is that setting up a development environment took me a little more work than the usual LAMP setup. This might make it harder for beginners, but maybe that is a good thing. I don't mind leaving the beginners back in PHP land.

It's still unmaintainable crap (Score:2)
PHP's biggest problem is its lack of modularization and its encouragement of inline script hacking. It suffers from SQL that lacks proper commit controls.
Implementations I've used leak connections like a sieve, forcing restarts of the database servers on a regular basis. Bottom line: PHP is the one tool I've used that I hate more than JavaScript. JS is functional elegance compared to PHP spaghetti.

Re: (Score:3)
> It suffers from SQL that lacks proper commit controls.
Wat?
> Implementations I've used leak connections like a sieve, forcing restarts of the database servers on a regular basis.
While that must have been frustrating for you - that's not a common complaint, so it was probably specific to either your DB or configuration.
> PHP's biggest problem is lack of modularization and encouragement of inline script hacking.
You mean you suck at writing decent code without being forced to do things 'properly'?

Re: (Score:2)
No, it means people keep demanding that I work on PHP I didn't create, and it's all steaming piles of SHIT. Like Perl, you can create maintainable and readable PHP. Most people don't. They hack something together thinking they'll need it for a month and be done with it, and the steaming turd keeps on in production for years afterwards. And then some poor fellow like yours truly is expected to enhance the goddamned thing, which has no comments and uses perverse libraries that no one else uses and which have

Re: (Score:2)
My response to those demands nowadays? "Yes, I know PHP. That's why I won't work with it. You couldn't pay me enough to take on a PHP project."

Re: (Score:2)
The only time I've seen this was when a "Java Expert" built out a platform using PHP and tried to make it jump through hoops to work like Java. Net result? Factory factory factories (not exaggerating) that resulted in an amazing kludge of massive memory-hogging threads which brought the servers down on a 2-3 hour cycle. It took massive refactoring to clean up that mess. A scripted language funda

Re: (Score:3)
The fellow who wrote the original code used a library I'd never heard of for MySQL connectivity.
They didn't know how to use SQL properly. They didn't know how to error-check results. Hell, they didn't even know how to sort data for the users, as they'd been asking him to for months before. But no, he left the company and the steaming pile of crud was dropped in my lap to fix. By the time I was done stabilizing the thing, there must have been a whole 10% of the original code left. Just because it's po

PHP's badness is its advantage. (Score:5, Interesting)
I love Python, I think JavaScript is sort of OK, and I did a lot of serious programming in ActionScript 2 & 3, both of which are quite similar to JS. I was basically forced into doing PHP by the market. I never really liked PHP, but I never really hated it either. The thing about PHP is that it's so specific in its domain and such a hack that no one doing PHP development for a living will go around boasting about the greatness of the language. There is a refreshing lack of arrogance in the PHP community which, in my observation, makes it very easy for n00bs to pick up. As a result we get countless people reinventing the wheel in PHP, discovering basic programming patterns anew for themselves, and starting yet another framework/CMS/whatnot, and the results often are really bizarre. But the community remains alive that way. For instance, I'm working myself into Drupal at my current employer because it's the prime go-to CMS here. It's like a live Alice in Wonderland trip: a strange, historically grown mess, barely tamed by sanity, and a relentless, chaotic community that all by accident seems to come up with hacks that somehow solve the problem in some way. And yet there's a solid global corporation building its business all around Drupal [acquia.com]. The surreal hacks with which the Drupal people solve their problems are mind-boggling, and yet everybody seems totally OK with it. And Drupal's track record of deployments is impressive. I guess with PHP it's somehow like the C vs.
Lisp argument: C is so shitty compared to Lisp that you have to get yourself together and work as a team, or you won't get anything done. Hence Lisp has this loner existence on the side, and all the real work gets done in this ancient C thing. PHP is a similar thing. It is so bad that no respectable programmer would pick it up voluntarily nowadays, and yet it grew out of Perl (which is worse in some ways), was somewhat of an improvement, and was in the right place at the right time. The badness of PHP accounts for its considerable lack of arrogance (compare the PHP community to the Ruby community, for instance) and for no one feeling guilty when they do a quick bad hack. As a programmer you don't feel dirty when you do bad programming in PHP; you already felt that when you picked PHP as the solution. Hence quite a bit of work gets done in PHP. That's why PHP has Drupal and TYPO3 and Joomla and the Java community has nothing of those proportions. The barrier to entry into PHP is *very* low, which gives it its momentum. My 2 cents.

Re:Too Little, Too Late & MtGox (Score:5, Insightful)
"Why in 2014, do I have to decorate variables with '$'?" That is your first complaint about PHP? That? I can't stand PHP but, seriously, that is first on your list of PHP badness?
Returning to the topic: back in the day, PHP was sort of awesome for those of us who weren't C or Perl gurus, but those times have passed. Today I don't use it for anything other than the occasional shell script and simple websites that do not involve the transfer of goods, f

Re: (Score:2)
Why in 2014, do I have to decorate variables with '$'? That is your first complaint about PHP? That? I can't stand PHP but, seriously, that is first on your list of PHP badness?
Maybe he is poor and seeing all those dollar signs depresses him.

Re: (Score:2)
If you can't tell the difference between GET, POST and COOKIE you have bigger problems. You complain about that but you suggest Node? Node is fine, but pulling out request variables requires you to parse through the headers and query string. Furthermore, sanitizing DB inputs and making sure your logic doesn't suck isn't the worst thing you have to do. Mt.Gox went down because their API was stupid, not because of some fundamental flaw in PHP. I don't know. PHP is the Gary Busey of programming languages. Used t
("register globals") So yeah, you couldn't tell where they came from. The situation with $_* greatly improved things, especially when they deprecated register globals.

Re: (Score:2)
register_globals hasn't been part of the default PHP runtime since 2002. see:... [php.net] There are a lot of WTFs to PHP; something that hasn't been true since the first Bush administration isn't one of them.

Re: (Score:2)
And there used to be an import_request_variables() function that would allow you to define which request vars (get, post, cookie) you wanted and a prefix for them.
import_request_variables("rvar_", "p");
would make $_POST['foo'] == $rvar_foo

Re: (Score:3)
You never should have to sanitize your DB inputs. Why? Because then you have to always unsanitize them, else you end up with a crap string because it isn't escaped/unescaped enough times. The right thing to do is to use the database driver's bind interface. Basically, your DB values should be treated as opaque blobs as far as entry and retrieval go. Now if you need to verify a date, that's another matter. But you should be treating them as opaque blobs, full of nulls, quotes, semicolons and unprintable characters.

Re: (Score:2)
You mean like PDO? [php.net] By sanitize, I mean, don't just write "INSERT INTO table (col1, col2, col3, col4) VALUES ($unescapedValue, $hosed, $haxedLol, $bobbyTables)". Which you can totally do in Ruby, Python, C#, NodeJS, etc. I know mysql_real_escape_string is kind of a pain in the ass. Not to mention a huge WTF. Is the other one fake or something? Still, it's not perfect, but can you do Real Work in it? YES. It's not MUMPS for god's sake.
Though perhaps that's something they're trying to change as well?

Re: (Score:2)
They're driver-dependent. If you don't want the mysqli set of methods, don't enable the driver.

Re: (Score:2, Insightful)
The very fact that several websites exist to document inconsistencies in the language implementation should make you wary. Where do you find compiler devs who manage to evaluate 0x0+2 to 4? The fact that there is a function called real_escape_string scares the shit out of me, because it implies there exists a function called escape_string which doesn't really escape strings.

Re:Too Little, Too Late & MtGox (Score:4, Insightful)
That reminds me of people who call a document "x_final", but then change their mind and so create a second one called "x_final_final", and change their mind again to get "x_really_final_this_time_I_promise". I suggest version numbers, but then they say, "But version numbers don't tell me which one is final". I gave up on them.

Re: (Score:2)
I suggest version numbers, but then they say, "But version numbers don't tell me which one is final". I gave up on them.
I work daily with a codebase full of methods like connect_v1(), connect_v2(), connect_v3(), ... . You do *not* want to go there. Please trust me on this.

Re: (Score:2)
Why in 2014, do I have to decorate variables with '$'?
Well for one thing, effortless string interpolation... and it nicely identifies what is a scalar.

Re:Too Little, Too Late & MtGox (Score:5, Insightful)
I do a lot of coding in PHP, and there's a lot of things I don't like about it, but your particular dislikes don't make a lot of sense.
Why in 2014, do I have to decorate variables with '$'?
It's not like PHP was written in 1965 and thus there was some hardware (memory footprint, compilation speed, etc) reason variables are prefixed with a dollar sign. It was a design choice.
That's so you can do this:
$count = 5;
echo "The total is $count.";
And you can use the same variable syntax in your code as in strings that are automatically parsed.
Why does the associative array syntax take two characters that look like a comparison operator?
It doesn't "look" like a comparison operator if you actually know what the operators are. <= and >= are comparison operators, and => is not a comparison operator in any language I've ever used. A single equal sign looks like a comparison operator too, and woe to the developer that doesn't have the universal C-like basic operators (used in dozens of modern languages) memorized backwards and forwards.
Why do I need == and ===?
For the same reason that JavaScript and other scripting languages need it. Those languages do automatic type conversion, and sometimes you don't want that to occur. The alternative is manually casting things, which isn't very script-like at all, and having to explicitly deal with types is more like C than an "easy to use" scripting language. Thus there are two equality operators for the times you don't really want 0 to equal null to equal false. This one is even more ironic considering JavaScript-based Node.js is your favorite server-side platform, and thus you would also have to use both the == and === operators in your preferred language anyway.
And variable confusion between $_GET, $_POST and $_COOKIE
I don't even know where to begin on this one. They are 3 entirely different things, with the most self-explanatory names I can think of. That's exactly as it should be. Look at $_REQUEST if it's too difficult to figure out which you should be using (and woe to your client if that's the case).

Re: (Score:2)
It's that same easy substitution, i.e. $sql = "SELECT fname, lname from people where id='$id'", that leads to data breaches.
[xkcd.com]

Re: (Score:3)
Like making it more difficult syntactically prevents SQL injection attacks either:
var sql = "SELECT fname, lname from people where id='" + id + "'";
Same vulnerability in JavaScript.

Re:Too Little, Too Late & MtGox (Score:5, Insightful)
In PHP this is now solved with parameterized queries. Plus any framework or CMS worth its salt was doing it already:
$sql = $dbConnection->prepare("SELECT fname, lname FROM people WHERE id = ?");
$sql->bind_param('s', $id);
$sql->execute();
If you're rolling your own DB connection layer in modern PHP, you're doing it wrong.

Re: (Score:2)
The real issue is there are too many PHP shitheads out there still doing it wrong.
What I don't get is why the PHP shitheads don't use a framework. I am a PHP shithead so I use Drupal. I know I don't know a lot of PHP. I don't want to. But I wanted something I could conveniently host anywhere and I've got it.

Lousy coders will be lousy coders (Score:4, Insightful)
And how is this different from "SELECT yada yada " . id . " yada yada"? How exactly does ANY language that allows concatenation not allow you to enable SQL injection attacks? "Coders" like you want a language to protect you from being stupid because you are stupid. It is your kind that insists everything be made childproof because you are a child yourself.

Re: (Score:2)
There's another (minor) reason to prefix variables with $: that way you can use "reserved" words as variable or field names, say $class, $abstract, etc.

Re: (Score:2)
You complain about == and === in PHP, but then you bring up a JavaScript solution (Node.js) as an alternative. This leads me to believe that if *you* decided to rewrite Mt. Gox using your beloved Node, another hacker would probably get rich pretty soon. And just as happened with the PHP version of Mt. Gox, the problem would lie in the implementation, not in the language.
Re: (Score:2)
Why do I need == and ===?...........My favorite two are Node[JS]
Uh........there's something you need to know about Javascript..........

Re: (Score:2)
Why in 2014, do I have to decorate variables with '$'?
Not a big fan of variable interpolation, I'm guessing?
Why does the associative array syntax take two characters that look like a comparison operator?
Don't forget to ask Perl the same question.
Why do I need == and ===?
Because the language is loosely typed. There are other loosely-typed scripting languages that have both of these operators as well.
And variable confusion between $_GET, $_POST and $_COOKIE
So you would prefer to have them all in one array? Or as global scalars? Seems to me you're complaining about PHP because it's a scripting language and not C or Java. Here's a suggestion for you: If you don't like the syntax, or if you want strict typing, use something else. If you don

the real horror of MtGox (Score:3)
Sure, some people lost some bitcoins. But what are those?!?!? Intangible sets of numbers and letters that don't exist in the real world. Not to be insensitive, but boo-hoo! The bigger tragedy here is that the MtGox site had a vulnerability that has probably been exploited for more than a decade by some nefarious organization to steal people's Magic: The Gathering cards.

Re:6 scripts at once? HNNNNNNNNNG (Score:4)
Yeah. Stupid global weather simulations also run like a dog on the Pi. When will people start testing their complex simulations on multiple platforms?

Re: (Score:2)
if ($filehandle = fopen($filepath, 'rb')) {
    $filecontent = fread($filehandle, $filesize);
    $filecontent = base64_encode($filecontent);
    $filecontent = 'data:image/' . $filetype . ';base64,' . $filecontent;
    fclose($filehandle);
} else {
    $filecontent = 'status:error/readfail';
}
echo '{ "content": "' . $filecontent . '" }';
Every 6 requests come with about two seconds of lag where the system needs to take a dump because it's so confused.

Re: (Score:2)
Try:
if ($_GET['do'] == 'read' && file_exists($filepath)) {
    echo json_encode(array('content' => 'data:image/' . $filetype . ';base64,' . base64_encode(file_get_contents($filepath))));
}
The key bit being file_get_contents. It is a hell of a lot better than using the f functions except for very specific circumstances. Also check the RAM usage on the Pi. It should be able to keep a few 8kb files in the file cache.

Re: (Score:2)
I've never experienced a binary-safeness issue in PHP for some time. The usual stuff I do like file_get_contents, substr, strlen, etc... are all binary safe.

Not sure what you're talking about (Score:2, Interesting)
PHP works, it's fast as heck, and I can do anything you can do in python/perl just as well and way faster. My host for my hobby site (Shameless Plug [glimmersoft.com]) gives me PHP and a MySQL DB for $7 a month, and that's probably more than I should be paying. If I want perl/python that goes up to $100/mo...

Re: (Score:2)
I'm paying $7.95 per month for a virtual machine, and I don't think that is the cheapest option. If I want to put perl or python on, I can, although last I checked a J2EE server was running into the RAM limits for the VM to do anything non-trivial with it.

Re: (Score:3)
Getting a VM (VPS) is not the same as shared hosting. With a VM you have to install, maintain, patch and monitor everything yourself. Obviously cheap providers that offer PHP/MySQL hosting for $3 a month won't offer terrific performance, the resources will be shared with a lot of other customers, but for a simple website with maybe a shopping cart and a small catalog it's far less overhead to use shared hosting than a VM, and there is a big market for that.
This being said, there are lots of cheap hosts that

Re: (Score:3)
"With a VM you have to install, maintain, patch and monitor everything yourself"
My experience with shared hosting is that they change system configuration all the time without informing me, thereby breaking my scripts. Never have that problem with a VM, but I admit that setting up a VM with DNS, Apache tweaks, iptables, and so on is a major effort for someone who doesn't do that for a living, like me. But after that it's very little maintenance. By the way, the site in my sig runs on shared hosting, incl

Re:Not sure what you're talking about (Score:5, Insightful)

Re: (Score:3)
"no you cannot do anything in PHP that you can do in Python or Perl!"
That statement in itself is true, but PHP is a web language, and as for things to do ON THE WEB, yes, I would argue it is more feature-rich. Even if you disagree with the Python comparison, it certainly beats the current state of Perl all to hell. Source: I've developed in all three for work.
I've only ever developed in PHP (well, I tried Ruby for a few months then ran away screaming in frustration), but I know of things in Python/Perl that PHP is missing. For example, PHP doesn't begin executing your code until after the browser has sent _all_ of the POST data. This makes it impossible to create a file upload progress bar in PHP. You can do it in modern browsers with JavaScript now, but previously it had to be done server-side, and only languages like Perl can handle that - because they begin executing

Re: (Score:2)
... [php.net]... [php.net]

Re: (Score:2)
Actually it isn't. Read it again, carefully.

Re: (Score:2)
I've never done my own garbage collection, and PHP just updated it in 5.3 [php.net].
PHP works, it's fast as heck, and I can do anything you can do in python/perl just as well and way faster.
I don't know about Python/Perl, but there are operations in PHP that need 200MB of memory which I could achieve in C with only 20KB of memory.
That's a 10,000x increase in memory consumption for PHP. If this has improved in version 5.5 I can't wait to give it a try. And it's not just memory consumption; there are times when I run xhprof on some slow PHP code and find out it's spending 90% of its time allocating and/or freeing memory. If it used less memory, it would spend less time managing it. PHP is a grea

Re: (Score:2)
You can get 20 dedicated VPSes for ~$20 a month and run them as a beowulf cluster for all they care.
Yeah right! Beobunnies run faster than that!

Re:Not sure what you're talking about (Score:5, Insightful)
So the sort of people who claim that PHP is worthwhile are those who stick with a terrible webhost and have no clue how much they should be paying? Yes, that sounds typical.
Actually I think it's more that a certain percentage of the population has as the top priority just being able to get something done, and the low-level details of this or that's garbage collection and memory management are way, way down the priority list somewhere.

Re: (Score:2)
low level details of this or that's garbage collection and memory management is way, way down the priority list somewhere
Agreed, any memory leaks or performance problems should fall out in testing. The major problem I have with PHP is its poor backward compatibility with previous versions; that shortcoming can quickly turn into a giant configuration/maintenance headache. Glad to see they are trying to do something about it.

real_foo_bar() and somesuch_improved() (Score:4, Informative)
Make PHP the laughing stock of many a programmer. The language's development has been in the wrong hands from day one. You can do great things in Python because of Python. You can do great things in PHP in spite of PHP.

Re: (Score:2)
I keep saying this on Slashdot: PHP has its weaknesses, but inconsistent naming conventions isn't a major problem.
What made PHP the laughing stock is looking at incompetent coders' code and thinking that's how you do things in PHP. PHP is a good language for web development. It has an easy learning curve and gives you power to shoot yourself in the foot. Combine those two and you get a bunch of at

Re: (Score:2)
mysql_real_escape_string [php.net] is a wrapper of a C function [mysql.com]. Does that make C the laughing stock for you as well?
Wrapping your house in toilet paper would make your house a laughing stock ... that doesn't mean your house is now though

Re:You don't know what you're talking about. (Score:5, Insightful)
PHP has always used explicit memory management.
allocate_StringMemory()
sys_FreeMemory_UTF8()
Watch out because there is no way to tell if allocation fails. That's convenient though because it makes sys_Free* idempotent; there is no difference between failure to allocate and multiple free-s. With 5.5 you get a great new function:
sys_FreeEverything() // in traditional mixed camel case + underbar style!
Now you don't need to keep track of allocations and release them. Just blow away all allocations across all requests and start fresh. It's really great for fixing those darn memory leaks.

Re: (Score:2, Insightful)
Bullshit. '=' is assignment in all cases - it is predictable behavior. However, in PHP:
"hello" == false is FALSE.
0 == false is TRUE.
Therefore, "hello" == 0 should be false. But it isn't. "hello" == 0 is TRUE. I understand WHY it happens. My understanding why and when doesn't make it right.

Re: (Score:2)
Why would anyone ever assume the latter? It's not true for anything but natural numbers (0.5 > 0.3, 0.5 * 0.5 = 0.25 < 0.3).

Re:PHP (Score:5, Insightful)
Every common language out there has ugly stuff of one kind or another.

Re:PHP (Score:5, Interesting)
Honestly, BASIC wins this round just by virtue of being so limited that it's hard to shoot yourself in the foot.
I don't count GOTO, as jumps aren't really language-specific. Having tutored programming for years, I can say that students are perfectly able to write spaghetti code with or without goto.

Re: (Score:2)
you can't write more than four lines of Fortran without painting some Star Trek action figure
I like that. I'm going to use that. And GOTO is over-vilified. In BASIC it is the only sane way to do error handling. In other languages, I frequently use the "continue" operation, which is just a limited goto with a different name.

Re: (Score:2)
As soon as the BASIC ecosystem gets a good templating framework like Twig, a good package management system like Composer or PEAR, convenient SDKs for most cloud providers like AWS or Azure, native support for JSON and easy access to mainstream database drivers (RDBMS and NoSQL), I'm definitely jumping on the BASIC bandwagon! Seriously, if you compare programming languages based on HelloWorld, it's easy to come out with worthless conclusions such as BASIC > $ANYTHING or $ANYTHING > PHP, but when you ha

Re: (Score:2)
"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."

Re: (Score:2)
BASIC is just imperative programming, and I find it similar to simple assembly programs, by the way. It gives you understanding of both and doesn't teach much. C is just BASIC with pointers and functions. :) And why just stop at defaming BASIC? All imperative programming is like BASIC; some will argue functional programming should be taught instead. Today that "seminal article" would be called a rant.

Re: (Score:2)
and why just stop at defaming BASIC
He didn't.

Re:PHP (Score:4, Insightful)
Python has the whole whitespace deal, Perl code tends to be unkempt
Now this is a great comparison. One language is bad because it enforces tidiness, and the other is bad because it doesn't.
Re:Inconsistency (Score:4, Insightful)
Cute. In JavaScript: "5"-2 = 3 and "5"+2 = "52". Even PHP isn't *that* nuts.

Re: (Score:2, Informative)
I agree they are incomparable. Javascript is much worse in so many ways...

Re: (Score:2)
Until I can get at least a warning on reads of undefined variables I will never use PHP for anything serious again.
Look into ini_set [php.net]. Specifically 'error_reporting'.

Re: (Score:3)
Whereas it should of course be fewer_crabs()
Hi there! I am trying to implement the SilentStepStick (TMC2130) with a stepper motor and I can't get it to switch directions. It works and spins clockwise when the DIR pin is pulled high, but when it's pulled low it won't reverse direction and go counter-clockwise. I've tried just grounding it instead of using the output pin on my Arduino Uno, and it seems to just hum in the same spot; it does not pull any current either (the shaft spins freely). I've checked the motor wiring to make sure it's correct, and I also have it working (loudly) on the L298N driver. The wiring I have follows this: and the code is below.

#include <AccelStepper.h>

#define PIN_EN 7
#define PIN_DIR 12
#define PIN_STEP 9
#define PIN_CS 10

int defaultAcc = 1000;
int defaultPos = 0;
int speedLim = 500;

AccelStepper stepper(AccelStepper::DRIVER, PIN_STEP, PIN_DIR);

void setup() {
  pinMode(PIN_EN, OUTPUT);
  pinMode(PIN_DIR, OUTPUT);
  pinMode(PIN_STEP, OUTPUT);
  pinMode(PIN_CS, OUTPUT);
  stepper.setMaxSpeed(speedLim);
  stepper.setAcceleration(defaultAcc);
  stepper.setSpeed(speedLim);   // speed to run at, in steps per second
  stepper.moveTo(defaultPos);   // default positioning
  digitalWrite(PIN_EN, LOW);    // PIN_EN LOW is ENABLED, PIN_EN HIGH is DISABLED
  Serial.begin(9600);
}

void loop() {
  digitalWrite(PIN_DIR, HIGH);  // setting to LOW won't work?
  stepper.setSpeed(speedLim);
  if (true) {
    stepper.run();
  }
}

Any ideas would be greatly appreciated!
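One thing worth noting as an aside (a possible lead, not a definitive diagnosis of the hardware problem): when AccelStepper is constructed in DRIVER mode and given the DIR pin, the library toggles DIR itself, so writing PIN_DIR manually in loop() fights it. In that mode the direction comes from the sign of the value passed to setSpeed() (or the sign of the target passed to moveTo()). The host-runnable sketch below models only that sign convention with a hypothetical StubStepper class — it is not the real AccelStepper library:

```cpp
#include <cassert>

// Hypothetical stand-in for AccelStepper's constant-speed mode, used only
// to illustrate the convention: runSpeed() steps in the direction implied
// by the sign of the value passed to setSpeed(), so the sketch never needs
// to write the DIR pin by hand.
struct StubStepper {
    float speed = 0.0f;   // steps per second; the sign encodes direction
    long position = 0;    // current position in steps

    void setSpeed(float s) { speed = s; }

    // Take one step in the direction implied by the speed's sign.
    void runSpeed() {
        if (speed > 0) ++position;
        else if (speed < 0) --position;
    }
};
```

On the real hardware the equivalent change would be calling stepper.setSpeed(-speedLim) (and dropping the manual digitalWrite(PIN_DIR, HIGH)) to run the motor the other way — assuming the driver itself is wired and powered correctly.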
In this tutorial we will check how to expose a Flask server to the local network, so it can be reached from clients connected to that same network. We have already covered how to create a simple "Hello World" Flask server in this previous tutorial. It also covers how to install Flask on the Raspberry Pi 3, in case you haven't done it yet. In this tutorial, instead of returning a "Hello World" message, we will return the current date and time on the Raspberry Pi and make the server accessible so clients running on other machines can reach it.

In order for a client to be able to reach the Flask server, it will need to know the Raspberry Pi's local address on the network. You can check a detailed guide on how to obtain the local IP of the Raspberry Pi in this previous post. This tutorial was tested on a Raspberry Pi 3 model B+, running version 4.9 of Raspbian, installed using NOOBS.

As usual, the first thing we need to do is import the Flask class from the flask module, so we can create an object of this class and configure the server. We will also import the datetime class from the datetime module, so we can obtain the current time and date on the Raspberry Pi.

from flask import Flask
from datetime import datetime

After that we need to create an object of class Flask. The constructor of this class receives as input the name of the module or package of our application. For this simple application we can pass the value __name__, which contains the name of the current module.

app = Flask(__name__)

Next we will define the route of our server that will return the current date and time of the Raspberry Pi. We will call the route "/time".

@app.route('/time')
def getTime():
    # Route handling code

The implementation of the handling function will be very simple. We will just get the current date and time by calling the now class method on the datetime class. Then, we will convert the returned value to a string and return it to the client.
time = datetime.now()
return "RPI3 date and time: " + str(time)

Finally, we need to call the run method on our Flask object, so the server starts listening for incoming HTTP requests. As first input, this method receives the IP where the server should be listening, and as second input the port. The IP is passed as a string and the port as a number. Since we want to expose our server to the local network, we use the 0.0.0.0 IP address, which indicates that the server should listen on all available IPs. As port, we will use the value 8090, although you can test with other values. The final complete code can be seen below.

from flask import Flask
from datetime import datetime

app = Flask(__name__)

@app.route('/time')
def getTime():
    time = datetime.now()
    return "RPI3 date and time: " + str(time)

app.run(host='0.0.0.0', port=8090)

To test the code, simply run the previous program in the Python environment of your choice. In my case, I've used IDLE. Next, as mentioned before, you need to get the IP of your Raspberry Pi on the network to which it is connected. The quickest way is by opening a command line and using the ifconfig command, as detailed in this previous post.

Then, open a web browser on another computer connected to the same network as the Raspberry Pi. In the address bar, type http://#yourRaspberryIp#:8090/time, replacing #yourRaspberryIp# by the IP you have obtained. You should get an output similar to figure 1, which shows the current date and time of the Raspberry Pi.

Figure 1 – Answer to the HTTP request performed to the Flask server.
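Besides a browser, any HTTP client on the network can reach the route. The sketch below shows the client side with Python's stdlib urllib; to keep it self-contained it talks to a stand-in http.server instead of the real Flask app (the TimeHandler class and its canned reply are stand-ins — against the actual Raspberry Pi you would simply request http://#yourRaspberryIp#:8090/time):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the Flask app so the example runs anywhere:
# it serves a canned reply on the same /time route.
class TimeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/time":
            body = b"RPI3 date and time: 2020-01-01 12:00:00"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), TimeHandler)  # port 0: pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: one GET request to the /time route.
reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/time").read().decode()
print(reply)

server.shutdown()
```

The client code is the part to reuse: a single urlopen call against the Pi's IP and port is all a script on another machine needs.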
QML can't see font ascender on Embedded device

Hi all, I have some problems displaying the font Helvetica Neue LT Std correctly on an embedded device. The problem is that I can't see the parts of the characters that are in the ascender area of the font; for example, I can't see the accent of the char "À", and the symbol ^ above the char "ê" is also cut off. Checking the font with fontdrop.info, I can confirm that the parts of the characters that get cut off are the parts in the ascender area. If I run the application on a PC, it works fine. My embedded device is based on an i.MX6 ULL processor. The screen resolution is 800x480. Below is the test code I am using. I'm using Qt 5.8.0 both for embedded and PC.

main.cpp

int main(int argc, char *argv[])
{
    QGuiApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
    QGuiApplication app(argc, argv);
    QtQuick2ApplicationViewer viewer;
    viewer.setSurfaceType(QSurface::OpenGLSurface);
    // Force orientation to see Maliit in landscape orientation
    QFont font("HelveticaNeueLTStd");
    font.setStretch(QFont::Condensed);
    app.setFont(font);
    viewer.setMainQmlFile(QStringLiteral("main.qml"));
    QObject::connect((QObject*)viewer.engine(), SIGNAL(quit()), &app, SLOT(quit()));
    viewer.showExpanded();
    return app.exec();
}

main.qml

import QtQuick 2.6
import QtQuick.Window 2.2

Window {
    visible: true
    width: 800
    height: 480

    Text {
        id: textEdit
        text: qsTr("ÀÀÀ")
        font.pointSize: 24
        verticalAlignment: Text.AlignVCenter
        anchors.top: parent.top
        anchors.horizontalCenter: parent.horizontalCenter
        anchors.topMargin: 20
    }
}

Thanks!
Caro

Pablo J. Rogina: @carols, is it possible for you to run any other non-Qt application on that device to see what happens with the font in that case?

@Pablo J. Rogina thanks for your reply! Unfortunately it is not easy for me at the moment to try a non-Qt application. Do you think it could be related to the configuration of fonts on my embedded platform? - Pablo J.
Rogina:

@carols said in QML can't see font ascender on Embedded device:
Do you think it could be related to the configuration of fonts on my embedded platform?
I cannot tell for sure, but I do remember some issues in the forum caused by the font file on the device not being exactly the same file as on the PC, despite having the same name...

- SGaist (Lifetime Qt Champion):
Hi, did you check that you are actually getting the font you are requesting on your device?

@SGaist thanks for the reply! Yes, the font is the one I'm requesting. I made this test: I changed the font with a font editor and moved the letter "À" so that the symbol above the A is under the "ascender" line. This way I can see the letter correctly. So it confirms that the font I'm using is the correct one, and that the problem is in the rendering of the symbols, or parts of characters, that are above the "ascender" line. Please help!

After other tests I verified that this issue is related to some problem in the rendering of the font. So in the end I chose a simpler solution: I edited the font, moving the "ascender" line up so that all the symbols are now under it. And now I can see all the symbols correctly!

- SGaist (Lifetime Qt Champion):
Glad you found a solution and thanks for sharing! That's a nice trick!
Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 1.5.5
- Fix Version/s: 1.6-beta-1, 1.5.7
- Component/s: None
- Labels: None

Description
From GROOVY-1875:

def s = '12'
s.value = 'ABCD'
println s
==> AB

The 'value' field is a private final variable in class java.lang.String. Therefore Groovy makes String mutable!

Activity
I commented elsewhere... the field is final, thus the change should not happen. If it were a property it would not happen. It only happens because it is a field and there is a check missing.

that's nothing crazy
That test case does not demonstrate modification of a final field, and that is not what is happening. It reads a final field, which returns a [C (array of char), then updates the element values in the array. That is completely valid from a 'final' aspect (that the field is private and should never have been readable in the first place is the bug, as I reported using this test case). Groovy does not modify final fields, but if you think it can, then submit a suitable test.

As you said, the test belongs here. The test I tried earlier was this:

class MyBean {
    final String foo = '123'
}
def b = new MyBean()
b.foo = 'ABC'
==> groovy.lang.ReadOnlyPropertyException: Cannot set readonly property: foo for class: MyBean

My new version that should fail, but does not:

String s = "123"
def x = s.value
s.value = "ABC"
assert !x.is(s.value)

The reason I put it in "should fail" form is that it should really fail before the assertion. It's a crazy world that it's come to this point. I wonder if the fact that this would be possible occurred to any of the Groovy developers as Groovy was being written, or if this is an innocent mistake. If it occurred to someone pre-release... I'd love to see the discussion on why on earth this should be permitted.
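For contrast, the behaviour the reporter expected — assignment to a read-only value failing loudly, as with Groovy's ReadOnlyPropertyException on MyBean.foo — is what a property-based design gives you in most languages. A rough Python analogue of the MyBean test (this is an illustration, not Groovy code, and the class here is hypothetical):

```python
class MyBean:
    """Analogue of the Groovy bean: foo is readable but not writable."""

    def __init__(self):
        self._foo = "123"

    @property
    def foo(self):
        # Read-only: no setter is defined, so any write raises AttributeError,
        # the moral equivalent of Groovy's ReadOnlyPropertyException.
        return self._foo


b = MyBean()
print(b.foo)  # reading works

try:
    b.foo = "ABC"  # writing should fail loudly...
    write_failed = False
except AttributeError:
    write_failed = True

print("write rejected:", write_failed)
```

This is exactly the "missing check" the first comment describes: when access goes through a property, the write is rejected; Groovy's direct field access on String.value bypassed that layer entirely.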
https://issues.apache.org/jira/browse/GROOVY-2774
On Fri, Apr 04, 2003 at 05:44:28PM -0700, hendriks@lanl.gov wrote:
> Here's the answer to the second half... The kernel API is reasonably
> large so let me know if anybody wants more detail anywhere. The C
> library is basically a 1:1 mapping for these calls.
>
> Also, if anybody wants to know more about BProc in general, they can
> check out:
>
> API for the syscall:
>
> arg0 is a function number and the meaning of the rest of the arguments
> depends on that.
>
> For arg0:

Stop right here. We don't want even more multiplexer syscalls. Please
untangle this into individual calls.

> 0x0001 - BPROC_SYS_VERSION - get BProc version
>   arg1 is a pointer to a bproc_version_t which gets filled in.
>   return value: 0 on success, -errno on error

Scratch this one, syscall ABIs are supposed to be stable.

> 0x0002 - BPROC_SYS_DEBUG - undocumented debugging call, a magic
> debugging hook whose argument meanings are fluid; currently it does:
>   arg1 = 0 - return number of children by checking pptr and opptr.
>   arg1 = 1 - return true/false indicating whether wait() will be local.
>   arg1 = 2 - return value of nlchild (internal BProc process bookkeeping val)
>   arg1 = 3 - perform process ID sanity check and return information about
>              pptr and opptr in linux and BProc.

Debug stuff doesn't need a syscall, please get rid of this one.

> 0x0003 - BPROC_SYS_MASTER - get master daemon file descriptor. The master
> daemon reads/writes messages to/from kernel space with this file descriptor.
>   no arguments
>   return value: a new file descriptor or -errno on failure.

Shouldn't this better be a new character device?
(Ditto for the other fd stuff)

> 0x0201 - BPROC_SYS_INFO - get information on node status
>   arg1 pointer to an array of bproc_node_info_t's (first element contains
>        index of last node seen; nodes returned will start with next node)
>   arg2 number of elements in array
>   return value: number of elements returned on success, -errno on error

This should be read() on a special file.

> 0x0202 - BPROC_SYS_STATUS - set node status
>   arg1 node number
>   arg2 new node state
>   return value: 0 on success, -errno on error

Write on a special file.

> 0x0203 - BPROC_SYS_CHOWN - change node permission bits (perms are file-like)

So why is this no file, e.g. in sysfs?

> 0x0207 - BPROC_SYS_CHROOT - ask slave node to chroot()
>   arg1 node number
>   arg2 pointer to path to chroot() to.

Please explain this a bit more. Can't you use namespaces properly on
the slaves somehow?

> 0x0208 - BPROC_SYS_REBOOT - ask slave node to reboot
>   arg1 node number
>   return value: 0 on success, -errno on error
>
> 0x0209 - BPROC_SYS_HALT - ask slave node to halt
>   arg1 node number
>   return value: 0 on success, -errno on error
>
> 0x020A - BPROC_SYS_PWROFF - ask slave node to power off
>   arg1 node number
>   return value: 0 on success, -errno on error

Can't you just call sys_reboot on the remote node?

> 0x020B - BPROC_SYS_PINFO - get information about location of remote processes
>   arg1 pointer to an array of bproc_proc_info_t's (first element contains
>        index of last proc seen; procs returned will start with next node)
>   arg2 number of elements in array
>   return value: number of elements returned on success, -errno on error

Should be read() on a special file.

> 0x020E - BPROC_SYS_RECONNECT - ask slave daemon to reconnect
>   arg1 node number
>   arg2 pointer to bproc_connect_t which contains 2 sockaddrs - a local
>        and remote address for the slave to use when re-connecting to
>        the master.

Don't use bproc_connect_t but the real arguments.

> 0x0301 - BPROC_SYS_REXEC - remote exec (replace current process with exec
>          performed on remote node)
>   arg1 node number
>   arg2 pointer to bproc_move_t (contains exec args, io setup info, etc)
>   return value: no return (it's an exec) on success, -errno on error
>
> 0x0302 - BPROC_SYS_MOVE - move caller to remote node
>   arg1 node number
>   arg2 pointer to bproc_move_t (contains flags, io setup info, etc)
>   arg2 move flags (how much of the memory space gets sent)
>   return value: 0 on success, -errno on error
>
> 0x0303 - BPROC_SYS_RFORK - fork a child onto another node. This is a
>          combination of the fork and move calls with semantics such
>          that no child process is ever created (from the parent's point
>          of view) if the move step fails.
>   arg1 node number
>   arg2 pointer to bproc_move_t (contains flags, io setup info, etc)
>   return value: parent: child pid on success, -errno on error
>                 child: 0
>
> 0x0304 - BPROC_SYS_EXECMOVE - exec and then move. This is a combination
>          of the exec and move syscalls. This call performs an exec and
>          then moves the resulting process image to a remote node before
>          it is allowed to run. This is used to place images of programs
>          which are not BProc aware on remote nodes.
>   arg1 node number
>   arg2 pointer to bproc_move_t (contains exec args, io setup info, etc.)
>   return value: no return on success, -errno on error if error happens
>        in exec(). If error happens during the move step the process
>        will exit with errno as its status.
>
> 0x0306 - BPROC_SYS_VRFORK - vector rfork - create many child processes
>          on many remote nodes efficiently.
>   arg1 pointer to bproc_move_t. This contains the number of children
>        to create, a list of nodes to move to, an array to store the
>        resulting child process IDs, and possibly IO setup information.
>   return value: parent: number of nodes or -errno on error.
>                 pid array contains pids or -errno for each child.
>                 child: rank in list of nodes (0 .. n-1)
>
> 0x0307 - BPROC_SYS_EXEC - use master node to perform exec. A process
>          running on a slave node can ask its "ghost" on the front end
>          to perform an exec for it. The results of that exec will
>          replace the process on the slave node.
>   arg1 pointer to bproc_move_t (contains execve args)
>   return value: no return on success, -errno on failure.
>
> 0x0309 - BPROC_SYS_VEXECMOVE - vector execmove - create many child
>          processes on many remote nodes efficiently. The child process
>          image is the result of the supplied execve.
>   arg1 pointer to bproc_move_t. This contains the number of children
>        to create, a list of nodes to move to, an array to store the
>        resulting child process IDs, execve args and possibly IO setup
>        information.
>   return value: parent: number of nodes or -errno on failure. The array
>        children: no return. If BPROC_RANK=XXXXXXX exists in the
>        environment, vexecmove will replace the Xs with the child's
>        rank in vexecmove.
>
> 0x1000 - at this offset, bproc provides an interface to the virtual
>          memory area dumper (vmadump). VMADump is the process
>          save/restore mechanism that BProc uses internally.

I think all these are pretty generic for any SSI clustering. Could
you please talk to the Compaq and Mosix folks about a common API?

> 0x1000 - VMAD_DO_DUMP - dump the calling process's image to a file descriptor
>   arg1 file descriptor
>   arg2 dump flags (controls which regions are dumped and which regions
>        are stored as references to files.)
>   return value: during dump: number of bytes written to the file descriptor.
>        during undump: 0 (when the process image is restored, it will
>        start by returning from this system call)

I'm pretty sure this would better be a /proc/<pid>/image file you
can read from.

> 0x1001 - VMAD_DO_UNDUMP - restore a process image from a file
>          descriptor. The new image will replace the calling process
>          just like exec.
>   arg1 file descriptor
>   return value: no return on success, -errno on failure.
>
> side note: where possible, vmadump adds a binary format for dump files
> which allows a dump stored in a file to be executed directly.

Can't you always use this binary format? And btw, does this checkpoint
and restore code depend on the rest of bproc? I'd love to see it even
in normal, not cluster-aware kernels.

> 0x1030 - VMAD_LIB_CLEAR - clear the library list
>   no arguments

What library lists are all those calls about? Needs more explanation.
https://lkml.org/lkml/2003/4/5/108
Save field without displaying it on the VB form

I have a VB 2008 Windows application that I'm working on; the data are stored in SQL. I have a lot of fields that the user will need to fill out. Some fields that are not displayed on the form are actually combinations of a few filled-out fields. For example, the user types the first name, middle name and last name into 3 separate boxes on the form, and I'd like to combine those and assign the result to the "FullName" field. I know it's duplicating the data, but this FullName field would need to be merged into a report as-is in a different program. Another example: the user will pick a value in a combo box that is bound to a different table, but I need it to auto-populate 3 fields in the main form (those fields are not displayed). I know that I can actually add all those hidden fields on the form, assign calculated values based on user input, and then everything is saved with the built-in "Update" command. I was wondering if there is a way to specify those few calculated values as parameters and still use the "Update" command without actually typing a long Update command. I have over 400 fields - a very lengthy command. Hope it makes sense. Thanks!

All replies

There are several ways of going about this, but I think I know what you're looking for here.

Solution #1: Keep the 4 textboxes bound to the SQL data source (e.g. DataSet, TableAdapter, etc.) and set the fourth textbox's binding to the FullName data field. Once you have done this, place the textbox out of the way and set the property Visible = False. Then add an event on the last of the (3) to insert a combination of all three values into the fourth textbox (e.g. txtFirstName.Text + " " + txtMiddle.Text + " " + txtLastName.Text). One last note on this method: add an OnMouseLeave event handler on the LastName textbox, and that is where the fourth "FullName" textbox gets its value set.
Solution #2: The proper way of doing this is to have your normal (3) textboxes working and, when you need the full name, pull each value from the database; or keep the full-name field in the data table and use string concatenation to combine the (3) values into a single full name. If you go that route you will most likely have to implement a custom save method using the SqlClient namespace. Let me know if this helps you out. Thanks, Charlie

I briefly read over this and I am not sure I follow 100%. But I thought I would mention something that I have used; if it is something that will help you then good, if not then disregard. I have a particular client application which has several hundred controls the user needs to input data into. Large sections have about 30 or so checkboxes, and 0 to all of these may be checked. That applies to each section, so there is a lot to handle for the entire record, and it would take a lengthy routine or SQL statement to write. What I found easiest was to loop through the checkboxes and build a comma-delimited string consisting of the checked checkboxes' names. This way only the checked checkboxes are stored, and the names allow me to split the string and check the controls by name in a loop. So basically I can handle 30 checkboxes with a single field, 1 small loop to build the delimited string, a single parameter for my insert or update command, and 1 small loop to read the names when the record is loaded. Using this approach I was able to shorten my columns from about 250 or so down to about 70. I could probably combine more values into single fields, but I try not to handle everything with strings. Well, hope this helps; it might give you some ideas to slim some things down.

Thank you for the suggestions. Solution 1 is what I currently have. I wanted to see if there is a more efficient way to do it.
As I mentioned, I have over 400 fields and am trying not to make the form look too busy (if I have a lot of hidden fields, the design is hard to manage). I have a TabControl with 10 tabs already. I was thinking about implementing solution 2: writing a SQL command to save only the calculated fields. Again, I was trying to find an alternative. It seems there has got to be a way to set up the parameters with calculated fields, using the existing parameter names. For example, the built-in Update command already has a @FullName parameter, but I don't know how to assign this parameter a calculated value prior to calling the Update command. Thanks again!

Me again. I've been trying to set the parameters prior to saving and it still doesn't work. Please see the code below. What am I missing? I don't get any errors, but when I add a quick watch for the value of @FacTaxID, nothing shows up and no value is inserted in the field. Please advise! FYI: I don't have a FacTaxID field on the form; the field Me.txtFacTaxID is bound to a related table.

    Private Sub TblAdmissDataEntryBindingNavigatorSaveItem_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles TblAdmissDataEntryBindingNavigatorSaveItem.Click
        Me.Validate()
        Me.Cursor = Cursors.WaitCursor
        TblAdmissDataEntryTableAdapter.Adapter.UpdateCommand.Parameters("@FacTaxID").Value = Me.txtFacTaxID.Text
        TblAdmissDataEntryTableAdapter.Adapter.UpdateCommand.Parameters("@LimitedPartnershipName").Value = Me.txtFacLegalName.Text
        Me.TableAdapterManager.UpdateAll(Me.AdmPacketMainDataSet)
        Me.Cursor = Cursors.Default
    End Sub

Hey buddy, I'm getting on a flight right now, but I built you a sample app I can send you or post for download, and there is a method I just figured out today, waiting at the airport, that will blow your socks off - and it's only 2 lines of code. I will post around 9PM tonight; sorry about the delays. Thanks, Charlie
I downloaded the project and reviewed the code and have a few questions. The Linq string pulls different full name than the one you pull from SQL - so Linq doesn't match the split version of the name. Was it intentional? About parameters - I wanted to use the built-in UpdateCommand that gets generated when you setup Data source connection in the TableAdapter and find a way to manipulate the existing parameters in that command. I see that in your project you didn't generate UpdateCommand and instead built your own. Is it possible to manipulate existing parameters of built in command? If not - then I'll still use built-in Update string for 400+ fields and setup separate SQL command to update Full name and other calculated fields. Thank you so much for your help! If it's not too much to ask - can you also take a look at another problem I'm having? Am I the only one experiencing it?? Thanks again, Alla Yes the LINQ needs to be tweaked a little I was just showing you a brief example to get you started. About the update command you can open the dataset view and view the code behind and the method I implemented will update the database. To use just click the little yellow + sign on the toolbar and when your ready to save the data click the button with a default image. It is the last button on the toolbar. It will save the new row including the fullname field behind the scenes in less than 2 lines of code. Thanks Charlie - Oh well. I guess I'll use a separate SQL update code for calculated fields. Unless someone will tell me why this line below doesn't work?! TblAdmissDataEntryTableAdapter.Adapter.UpdateCommand.Parameters("@FacTaxID").Value = Me.txtFacTaxID.Text Thanks again, Alla - Sorry, Charlie - I think there is some miscommunication here. I have setup connection to Data source - SQL database and using all automatically generated Select, Update, Insert, Delete commands - from table adapters. 
I understand that the method you suggested was to spell out each field and parameter in the Insert/Update commands. I wanted to avoid that because of the 400+ fields in those forms. It's much easier just to call it with 1 line of code:

    Me.TableAdapterManager.UpdateAll(Me.AdmPacketMainDataSet)

I guess I'm trying to find an easier way. If I check the code behind the UpdateCommand in the Dataset designer, I can see that all the parameters are already there, and I wanted to find a way to modify those parameters prior to calling the Update command (see the example code from yesterday). Unfortunately that code doesn't give me any errors, but it doesn't update the calculated fields either. If that method is not possible, I'll add separate Update code (similar to the one in your sample) that I'll run right after the built-in Insert for new records or UpdateCommand for existing ones. Hope I'm making sense. About the strongly typed dataset: I learned VB practically by myself (books + some on-line classes) and have enough basic knowledge to create applications, but I don't know the technical terms to answer your question, sorry. I really appreciate your assistance! Sincerely, Alla

Well, I understand your frustration; writing software with minimal knowledge can really be a pain, but you can pat yourself on the back for the progress you have already made. So in your project, did you use the data source add-in in Visual Studio to create your dataset, or did you create it manually? What exactly is wrong with the generated update method? Is it not updating the data correctly? Let me know exactly what you need help with and I can put you on the right track. Thanks, Charlie

Thank you. I used a VS wizard to add a connection to SQL. The Update command works for all the fields displayed on the form and bound to the table fields. The problem is with the calculated fields that I don't display on the form, or with the fields that are bound to a different table.
I was hoping that there was a way to manipulate the built-in UpdateCommand parameters that equal calculated fields (see my code from yesterday: @FacTaxID is a parameter in the main table, but the value is stored in a different table and displayed on the form). That code didn't update those 2 fields specified in the code. I can certainly write a separate SQL update command and use it. The other problem I have is with date fields; see the reference link from earlier today. I am taking 2 days off, so I might not reply immediately. Have a great Thanksgiving! Thanks, Alla

That's great. That is what I have been trying to explain to you. In the sample application I sent you, open the dataset and double-click on the accounts table adapter. Once the code view is showing, look at the method I have provided. This will modify the data before it is inserted or updated, and it is a very simple but professional way of completing something like this. Thanks, Charlie
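The comma-delimited checkbox technique described in one of the replies above could be sketched roughly as follows in VB.NET; the container name grpOptions and the variable names are invented for illustration:

```vb
' Build a comma-delimited string of the checked checkboxes' names.
Dim checkedNames As New List(Of String)
For Each ctrl As Control In Me.grpOptions.Controls
    Dim cb As CheckBox = TryCast(ctrl, CheckBox)
    If cb IsNot Nothing AndAlso cb.Checked Then
        checkedNames.Add(cb.Name)
    End If
Next
Dim fieldValue As String = String.Join(",", checkedNames.ToArray())

' When the record is loaded, split the stored string and
' re-check the matching controls by name.
For Each name As String In fieldValue.Split(","c)
    Dim found() As Control = Me.grpOptions.Controls.Find(name, True)
    If found.Length > 0 Then
        DirectCast(found(0), CheckBox).Checked = True
    End If
Next
```

The single fieldValue string then maps to one column and one command parameter, which is the space saving the reply describes.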
https://social.msdn.microsoft.com/Forums/vstudio/en-US/7ffb501e-817d-4865-9145-051ad4485f2c/save-field-without-displaying-it-on-the-vb-form?forum=vbgeneral
Problem: In a Java program, you want to determine whether a String contains a pattern, you want your search to be case-insensitive, and you'd rather use the String matches method than the Pattern and Matcher classes.

Solution: Use the String matches method, and include the magic (?i:X) syntax to make your search case-insensitive. (Also, remember that when you use the matches method, your regex pattern must match the entire string.)

Here's the source code for a complete Java program that demonstrates this case-insensitive pattern matching technique:

    /**
     * Demonstrates how to perform a case-insensitive pattern
     * match using String and the String.matches() method.
     */
    public class StringMatchesCaseInsensitive
    {
        public static void main(String[] args)
        {
            String stringToSearch = "Four score and seven years ago our fathers ...";

            // this won't work because the pattern is in upper-case
            System.out.println("Try 1: " + stringToSearch.matches(".*SEVEN.*"));

            // the magic (?i:X) syntax makes this search case-insensitive, so it returns true
            System.out.println("Try 2: " + stringToSearch.matches("(?i:.*SEVEN.*)"));
        }
    }

The output from this program is:

    Try 1: false
    Try 2: true

Discussion

This is a trivial example that you could solve using other techniques, but when you have a more complex pattern that you're trying to find, this "magic" case-insensitive syntax can come in handy. For more information on this syntax, see the Pattern javadoc page on Sun's web site.
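If you do reach for the Pattern and Matcher classes anyway (for instance, to reuse a compiled pattern), the equivalent of the (?i:X) syntax is the Pattern.CASE_INSENSITIVE compile flag. A small sketch along the lines of the program above (the class name here is my own):

```java
import java.util.regex.Pattern;

public class PatternCaseInsensitive {
    public static void main(String[] args) {
        String stringToSearch = "Four score and seven years ago our fathers ...";

        // CASE_INSENSITIVE plays the role of (?i:X); note that
        // Matcher.matches(), like String.matches(), must cover
        // the entire input string.
        Pattern p = Pattern.compile(".*SEVEN.*", Pattern.CASE_INSENSITIVE);

        System.out.println(p.matcher(stringToSearch).matches());  // prints "true"
    }
}
```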
https://alvinalexander.com/blog/post/java/java-how-case-insensitive-search-string-matches-method/
Subject: Re: [boost] [modularization] Modularizing Boost (modularization)
From: Edward Diener (eldiener_at_[hidden])
Date: 2013-10-21 15:25:02

On 10/21/2013 2:49 PM, Jeremiah Willcock wrote:
> On Sat, 19 Oct 2013, Edward Diener wrote:
>>.
>
> I have done the rearrangement of header files and namespaces, but not
> the tests, examples, or documentation yet. There are still likely
> circular dependencies because of the compatibility #includes I left in
> (see my other email about that).

See my response to the other e-mail. If you agree with it, the tests just need to have some of their #includes changed, and of course the ones now in graph for distributed property maps should be moved to property_map instead.

BTW, thanks very much for working on all this. Your decision to keep distributed property maps in property_map is the right one, and the only reason I initially thought of moving the distributed property maps to graph was that I did not feel comfortable trying to figure out how to move the necessary graph support code into property_map.

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2013/10/207582.php
05-06-2011 02:24 PM

I'm in the process of "future proofing" my app before I start adding more features to it. I remember at some point, I believe in a webcast, RIM saying to make the app UI work off of the screen width and height instead of hard-coding positions, because there could/would be other sized PlayBooks in the future. So in that effort I started playing around, but noticed that my UI elements were starting at a position that wasn't the upper-left of the screen. After doing some investigation I realized that the "stage" was only 500 by 375, a 4x3 resolution, and the PlayBook was centering this in the middle of the screen (hence the odd UI element offset).

At the top of the initial app class you're supposed to specify a "[SWF(...)]" attribute. The only part I don't specify is the screen size. When the screen size is set there, the stage takes the specified size. So the app still takes up the whole screen, but the UI is confined to the relatively small area. The stage has fields specifying the "fullscreen" width and height, and it also reports the display state as "normal" instead of "fullscreen".

Does anyone know how to get the app to take up the full screen real estate, without specifying the absolute screen size in the SWF attribute?

Solved! Go to Solution.

05-06-2011 06:42 PM

this should work:

    //Class
    [SWF (backgroundColor = "0x000000")]
    public class FullScreenTest extends Sprite
    {
        //Constructor
        public function FullScreenTest()
        {
            stage.scaleMode = StageScaleMode.NO_SCALE;
            stage.align = StageAlign.TOP_LEFT;
            stage.frameRate = 60;
            init();
        }
    }

05-06-2011 08:40 PM

Perfect, thank you.
http://supportforums.blackberry.com/t5/Adobe-AIR-Development/App-not-using-fullscreen/m-p/1062251/highlight/true
Originally published June 2004

Ant 1.6: New features and tasks

Ant is a Java-based build tool developed by the Apache Foundation. Thanks to its simplicity and power, Ant has made inroads into the wider Java development community. It's tailored to operate under IDEs like NetBeans and Eclipse, as well as with other development tools like JUnit and XDoclet. Ant has become common on almost every Java developer workstation. In this article we will explore the new features incorporated into its latest version.

Class loading has been revamped in the 1.6 release of Ant. The optional.jar library, which contains all the non-core Ant classes, has been broken up into more manageable, granular pieces, reducing the loading of unnecessary classes upon execution. Also, classes previously had to be loaded either inside the build file or via the CLASSPATH environment variable; now the -lib flag has been added, allowing class loading from the command line.

Ant relies on XML for defining its configuration parameters. XML namespace support was added to the latest release, aiding the creation of complex XML structures. Namespaces allow two equally named tasks to coexist and operate with distinct values.

Macros are another welcome addition in Ant 1.6. Macros allow developers to compose programmatically defined snippets inside a project's configuration script. As an example, we can define two targets that make use of a macro named mylogger, whose actual functionality is defined in the macrodef element. The contents for our macro are declared within the sequential element, a previously available Ant task which is used to define a series of events. In this case, we create a log entry through the record task. The generated log corresponds to the type variable, which takes its value from the macro attribute provided at invocation.

One restriction which has been lifted is the allowance of top-level declarations.
Previously only a few definitions were allowed to reside outside specific targets. You are now permitted to define any property at a global level for reuse within targets.

Ant libraries are another new concept. Libraries allow you to define tasks in separate XML files. In Ant circa 1.5, tasks had to be defined inside a build script, either through typedef or taskdef elements. With Ant libraries, it is possible to define these elements inside an independent file and reuse them in various build scripts. In such a library, we simply nest our <taskdef> elements inside the root <antlib> tag. Assuming we place this library in a file named mylib.xml, the build script gains access to its elements through the file attribute available in typedef. The actual invocation inside the build script is done transparently with the same name definition provided in the library.

XML namespaces are complementary to Ant libraries, in that they allow you to import a series of libraries into the same build script, and later use all of their elements in an unequivocal manner without worrying about name clashes.

All of these features are among the major enhancements in Ant 1.6, which provide both veteran and novice users new capabilities for building better and more comprehensive build scripts.
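The article's inline code listings did not survive extraction. The following is a rough sketch of what the macro and library examples described above may have looked like; only the names mylogger and mylib.xml, the type attribute, and the use of macrodef, sequential, record, antlib and typedef come from the text, while the target names, log file naming and the task class name are my own guesses:

```xml
<!-- build.xml: two targets sharing the mylogger macro -->
<project name="macro-demo" default="build">
  <macrodef name="mylogger">
    <attribute name="type"/>
    <sequential>
      <!-- record is a standard Ant task; the log name is driven
           by the macro's type attribute -->
      <record name="@{type}.log" action="start"/>
      <echo message="running the @{type} target"/>
      <record name="@{type}.log" action="stop"/>
    </sequential>
  </macrodef>

  <target name="build">
    <mylogger type="build"/>
  </target>
  <target name="deploy">
    <mylogger type="deploy"/>
  </target>
</project>

<!-- mylib.xml: taskdef elements nested inside the root antlib tag -->
<antlib>
  <taskdef name="mytask" classname="com.example.MyTask"/>
</antlib>

<!-- a build script importing the library via typedef's file attribute -->
<project name="lib-demo" default="main">
  <typedef file="mylib.xml"/>
  <target name="main">
    <mytask/>
  </target>
</project>
```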
https://www.webforefront.com/about/danielrubio/articles/ostg/ant1_6.html
Ticket #598 (defect)

Opened 2 years ago
Last modified 2 months ago

socket problems on Mac OS X Tiger

Status: closed (fixed)

There is a big problem with CP in conjunction with Python 2.4.4 on Tiger! Did you ever try to respond with an image (1 Mpx) directly from CherryPy? I installed the Python 2.4.4 Universal Binary Installer, then CherryPy 2.2.1, on Mac OS X Tiger 10.4.8 (Mac Mini Dual Core 2). My CherryPy app should deliver images (yup, dynamically generated ones), but I always receive only the first part of the image in my browser. CherryPy throws an exception within the write method, which is delegated to the socket write, which ends up in a sendall in socket.py: error 35, Resource temporarily unavailable. It seems that this is a problem caused by some changes in the socket API between Python 2.4.3 and 2.4.4. If this is true and I didn't make a mistake, CherryPy cannot be used on Python 2.4.4 on Tiger (if you try to respond with anything bigger than a small HTML page). I am thinking of Kevin Dangoor's nice TurboGears screencast, running on Mac OS X... How should people redo this cool stuff he shows...?

Attachments

Change History

11/04/06 17:05:45: Modified by fumanchu

11/06/06 01:33:19: Modified by guest
I get still the following: 10.1.50.222 - - [06/Nov/2006:08:18:58] "GET /png/ HTTP/1.1" 200 353821 "" "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.8.1) Gecko/20061010 Firefox/2.0" Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/cherrypy/_cpwsgiserver.py", line 205, in run request.write(line) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/cherrypy/_cpwsgiserver.py", line 149, in write self.wfile.write(d) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/python2.4/socket.py", line 248, in write self.flush() File "/Library/Frameworks/Python.framework/Versions/2.4//lib/python2.4/socket.py", line 235, in flush self._sock.sendall(buffer) error: (35, 'Resource temporarily unavailable') I inserted --> import errno socket_errors_to_ignore = [] --> socket_errors_to_ignore.append("EAGAIN") # Not all of these names will be defined for every platform. for _ in ("EPIPE", "ETIMEDOUT", "ECONNREFUSED", "ECONNRESET", "EHOSTDOWN", "EHOSTUNREACH", "WSAECONNABORTED", "WSAECONNREFUSED", "WSAECONNRESET", "WSAENETRESET", "WSAETIMEDOUT"): if _ in dir(errno): socket_errors_to_ignore.append(getattr(errno, _)) at the beginning of _cpwsgiserver.py when I call my app from localhost, I get the whole image. when I try to fetch the image with firefox from another machine within the local network, I still get only the first part of the image. by the way: we do not have this problem with an old PowerPC-based mac mini, running Python 2.4.2 and CP 2.1. we urgently need some help because we expanded a productive system with 3 new mac minis in order the deliver the dynamicly rendered images (using cocoa binding with pyobc 1.4) 11/06/06 13:30:35: Modified by fumanchu Hm, yeah. I wasn't complete enough in my reply; you usually want to retry the write operation, not discard it, when you get EAGAIN. The attached patch does that. 
It's quick and dirty, just for testing; if it "fixes" the issue, it probably needs a sleep call and a timeout, and should be applied to all write calls in that module.

11/06/06 13:31:15: Modified by fumanchu

- attachment eagain.patch added. Patch to retry socket write on EAGAIN

11/07/06 02:42:57: Modified by guest

the patch does not work, but I successfully patched it this way:

    def write(self, d):
        if not self.sent_headers:
            self.sent_headers = True
            self.send_headers()
        #self.wfile.write(d)
        #self.wfile.flush()
        chunk_size = 1000*1024
        totally_sent = 0
        i = 0
        while len(d) > 0:
            c = d[totally_sent:totally_sent+chunk_size]
            if len(c) < 1:
                break
            try:
                currently_sent = self.socket.send(d[totally_sent:totally_sent+chunk_size])
            except socket.error, e:
                errno = e.args[0]
                print "Error", errno, str(e)
                currently_sent = 0
            totally_sent += currently_sent
            print "%i.: %i, %i" % (i, currently_sent, totally_sent)
            i += 1

the debug printings show:

    0.: 65511, 65511
    1.: 60049, 125560
    2.: 64240, 189800
    3.: 11680, 201480
    Error 35 (35, 'Resource temporarily unavailable')
    4.: 0, 201480
    5.: 65700, 267180
    6.: 2920, 270100
    Error 35 (35, 'Resource temporarily unavailable')
    7.: 0, 270100
    8.: 59860, 329960
    9.: 4380, 334340
    10.: 30660, 365000
    11.: 30660, 395660
    12.: 64240, 459900
    ... and so on.

As you can see, socket.send() raises socket errno 35. It seems that the socket lib raises the wrong error(s) on Mac OS X Tiger (errno 35 where the socket is simply not currently able to accept more bytes, which is a common condition). It may be that this is caused by changes in the socket module from Python 2.4.2 to 2.4.3 and higher, where socket handling changed to non-blocking. We will take a deeper look at this problem and keep you informed. thx for help.

11/16/06 18:02:04: Modified by fumanchu

- summary changed from "CherryPy 2.2.1 or 3 on Python 2.4.4 on Mac OS X Tiger" to "socket problems on Mac OS X Tiger".

12/28/06 16:01:20: Modified by fumanchu

- status changed from new to closed.
- resolution set to wontfix.
In the absence of further information, I'm closing this as "wontfix". Feel free to reopen if anyone runs into the same problem.

03/15/07 09:31:52: Modified by guest
- status changed from closed to reopened.
- resolution deleted.
- milestone changed from 2.2.2 to 3.1.

Well, I am suffering from this. I am trying to transmit the Dojo main file; it is greater than 66608 bytes and it is breaking. I'm wondering if the send method is overflowing a buffer by not sending in chunks smaller than the system buffer size. -- metaperl

05/30/07 15:54:34: Modified by cherrypy_spam@perceptiveautomation.com
- priority changed from high to low.
- milestone changed from 3.1 to 2.2.2.

I believe this problem is fixed in CherryPy 3.x. I'm still using CherryPy 2.2, and the simple solution appears to be to use a blocking receiving socket. Inside def tick(), comment out these two lines:

    #if hasattr(s, 'settimeout'):
    #    s.settimeout(self.timeout)

And add these two:

    if hasattr(s, 'setblocking'):
        s.setblocking(1)

This appears to be what CherryPy 3.x is doing inside _cphttpserver get_request(). Apparently, using makefile and then calling write() on the file ends up doing a sendall() on the socket. From what I read elsewhere, calling sendall() is not supported for a non-blocking socket. Rather, it is not guaranteed to send all of the data. Hope that helps... Matt Bendiksen

05/30/07 17:24:32: Modified by fumanchu

@Matt, First, thanks for the info! This has been a very difficult problem to reproduce and debug. But according to the Python 2.4.2 docs at least, "s.setblocking(1) is equivalent to s.settimeout(None)". And a value of None implies "block forever". Note that all Python socket objects are blocking by default, and even if they weren't, a call to settimeout calls an internal_setblocking function inside socketmodule.c. So the sendall concerns probably don't apply.
My guess is changing cherrypy.server.timeout to a value sufficiently larger than the default 10 (seconds) would have the same effect as setblocking(1). If that's not true, then there's probably some platform-specific weirdness going on inside internal_setblocking. I don't quite understand your last paragraph, since CP 3 has no _cphttpserver module, and its wsgiserver module has no get_request function...?

08/31/07 09:25:36: Modified by Christopher Lenz <cmlenz@gmx.de>

Just ran into this issue using CherryPy 3.0.2, trying to serve the unpacked version of jQuery 1.1.4 from staticdir. So I think the milestone is wrong here, the problem still exists in CP3.

08/31/07 10:49:02: Modified by dowski

From: ." (emphasis added) Not sure if this helps at all, but it does sound like we are implicitly using a non-blocking socket in combination with makefile().

09/13/07 17:30:25: Modified by exvito
- milestone changed from 2.2.2 to 3.1.

I ran into this issue yesterday while trying to serve a static file of about 120KiB from CP 3.0.2.

My system:
- Mac OS X 10.4.10 (Tiger)
- Mac Python 2.4.4

My observed behaviours, known facts and tests:
- CP 3.0.2 logs no errors in error.log and logs 200 OK with the correct number of bytes xfered in access.log: this is wrong and the most critical issue -- if it's not ok it must not state otherwise
- Error is the same with Mac Python 2.5
- Error does not show up with Mac OS X bundled Python 2.3
- Error does not show up against local connections (either loopback or other physical local interface)
- Setting socket_timeout to None makes things work (which seems to agree with notes above from dowski and fumanchu) - however this leaves us with no way to get out of hanging connections
- After browsing the code, and with some guidance from dowski on IRC, I changed wsgiserver/__init__.py at line #367 and wrapped the self.write(chunk) in a try block to catch socket.errors.
I managed to get the 35 (EAGAIN and EWOULDBLOCK) mentioned by someone above -- something I never saw before on the logs, as mentioned.
- Then, in a way similar to the patch above, forced a retry on EAGAIN / EWOULDBLOCK; the result is improved but not perfect:
  - Clients get the file and the connection is not dropped halfway like before (tested with wget)
  - Server seems to hang for a few (1-3?) seconds between most requests -- that is: if I script the shell for a wget loop, I observe hangs between most xfers. The client, wget, states "Connection reset by peer" on starting most connections (all but the first in the loop? not sure)

Where to go next? Questions, reasoning, ideas:
- Not easy. According to the python docs mentioned by dowski, we are effectively working with a non-blocking socket
- To which, again according to the python docs, one should not use makefile() -- which CP uses to readlines() from the socket, if I understand correctly
- And no code at all seems to be non-blocking-socket ready
- So, again, if I understand correctly, the python docs say one thing that seems to contradict CP's implementation, but CP's implementation works everywhere else and only fails on Mac!
- Something is fishy here, I agree with the above comment by fumanchu
- Do we have a buggy 2.4.4 python socket implementation on Mac OS?!
- I only seem to see three ways of moving forward:
  - If I understand the docs correctly, then wsgiserver socket operations must be changed... forgetting makefile() and readline() and using recv()s and extra buffering/management code; forgetting sendall() and using send(), checking both EAGAINs / EWOULDBLOCKs but also the returned number of bytes, etc, etc
  - Somehow fix the "hangs" that the quick fix I tested resulted in... (no idea how!)
  - Write down a minimal pure python socket server test program, with the same socket module calls CP does, to take to the Mac Python project owners and get their feedback

Within my limitations (time to spare / tech abilities), I'd like to contribute to fixing this. While not being critical for my work (I develop on Mac but deploy on Linux) it is somewhat annoying and made me really angry after several hours of battling with html / javascript / webservers / browsers / caches / JVMs, etc!!!

Feel free to get back to me at ex (dot) vitorino (at) gmail (dot) com
Unfortunately my creation does not behave as I expected: - It works ok if the socket timeout is None in all tested Mac python versions (Apple bundled 2.3, Mac Python's 2.4.4 and 2.5 and the ones I built from source: 2.4.3, 2.4.4 and 2.4.4 with 2.4.3 socket module) - And fails with the socket EAGAIN / EWOULDBLOCK (35, 'Resource temporarily unavailable') if the socket timeout is set to anything other than None - It works ok on Linux for both cases under Ubuntu 7.04's python 2.4.4 and 2.5.1: socket timeout to None or not Here is what I coded: import socket # server parameters HOST = '' PORT = 8088 SOCKTIMEOUT = 10 # data load to serve: MANY times DATA DATA = ''.join(map(chr, xrange(256))) MANY = 1024 # setup server socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.settimeout(SOCKTIMEOUT) s.bind((HOST,PORT)) s.listen(1) # wait for connection conn, addr = s.accept() # read HTTP request headers, discard body # (good enough for wget) f = conn.makefile() while True: req = f.readline().strip() if req=='': break # send minimal HTTP response headers conn.sendall('HTTP/1.0 200 OK\r\n') conn.sendall('Server: tigerSockets\r\n') conn.sendall('Content-length: %i\r\n' % (len(DATA)*MANY)) conn.sendall('\r\n') # send HTTP response content for count in xrange(MANY): conn.sendall(DATA) # done, close connection socket conn.close() # close serving socket s.close() In a way, this code's behaviour is more consistent... It always fails on Mac if timeout is not None (socket is in non-blocking mode, according to docs). Questions still to be answered: - Why does CP's behaviour varies with python versions ? - Why does Linux always work ? - Obviously my code is not equivalent to basic CP socket module usage: can anyone help out ? 
Please add comments or reach me at ex (dot) vitorino (at) gmail (dot) com

09/15/07 13:39:54: Modified by exvito

Extra info regarding the pure socket, non-CP test code:
- Unlike CP, this code also fails on the loopback and local interfaces (however it doesn't fail as often/easily with the 256KiB payload)
- Its behaviour seems very consistent: fails for socket timeouts other than None, on local and remote connections, under all python versions tested on Mac

09/15/07 13:59:24: Modified by exvito

Extra info regarding python 2.4.4:
- Built python 2.4.4 from scratch forcing the usage of select() instead of poll():
  - ./configure --prefix=xxx/python-2.4.4-nopoll
  - set to 0 all the defines in pyconfig.h regarding POLL / POLL_H etc, used by socketmodule.c
- result is the same for both the CP test program and the raw socket test program: fails if the socket timeout is set to anything other than None

Note to self: should trace the system calls just to be sure that poll() vs select() are being used under the different test scenarios... *sigh!*

09/16/07 20:41:45: Modified by exvito

quick fix - different approach - seems ok

Feeling a bit baffled by the python runtime tests and the behaviour of my "no-CP-sockets-only" test script, I decided to move forward based on the following assumptions:
- The python docs are correct - we are dealing with non-blocking sockets: we should handle EAGAINs and EWOULDBLOCKs, we cannot use makefile() and thus we cannot use readline()
- Somehow the network stacks in different OSs behave differently; that's why the non-blocking nature of the sockets results in different behaviours under different platforms
- Forget, for now, that the changes in the python 2.4.4 socket module vs. 2.4.3 have an impact on CP's behaviour

That said, I hacked wsgiserver/__init__.py in a similar logic to what fumanchu initially did and that I tested a few days ago with half success.
This time, however, I decided not to use socket.sendall() but socket.send() instead, which gives more control at the cost of a little more work. First results look promising.

What I did:
- Created a new method in HTTPRequest named _good_sendall
- Changed the self.sendall definition of HTTPRequest in the constructor to point to _good_sendall

The code for _good_sendall:

    def _good_sendall(self, buf):
        pending = len(buf)
        offset = 0
        while pending:
            try:
                sent = self.connection.socket.send(buf[offset:])
                pending -= sent
                offset += sent
            except socket.error, e:
                if e[0] != errno.EAGAIN:
                    raise

That's it. First tests give good results and I no longer experience the 1-3 sec delay between serving consecutive requests.

Again, according to the python docs, I should eliminate the readline() usage... A quick glance at the code shows rfile and readline() in several places. This will need a little more thinking, even as a quick hack, as rfile will need to be discarded (need to see if it has more implications than readline() by itself). Anyway, the idea will be to create a new method named _good_readline() that instead will use a socket.recv() loop and a buffer, returning the line when a '\r\n' is found and keeping the remainder of the buffer until the next call.

09/17/07 09:19:39: Modified by dowski

Great analysis and progress on this so far! The rfile object is fairly important, so we are going to at least need some sort of file-like interface to the socket. It is passed to cgi.FieldStorage for reading POST request bodies. If there is some way to create a non-blocking-friendly version of makefile(), that would probably be ideal.

09/18/07 11:57:11: Modified by fumanchu

Yes, great work! Note that rfile is also an API requirement for WSGI.

09/18/07 18:12:29: Modified by exvito

Thanks for the feedback. I actually felt there was something in that rfile saying "don't dismiss me!". Now I'm certain. I'll need to take a look at the issue with some more time.
Probable next steps:
- Read the wsgiserver code and improve my overall understanding of it
- Understand 100% clearly the specific usage of rfile:
  - Within wsgiserver and its interfaces/API
  - And according to the WSGI spec - need to check PEP 333
- A possible approach might be, as suggested by dowski, to create a file-like object as an interface to non-blocking socket IO; then adapt the current usage of socket.makefile() to something else, like a _nbsocket_makefile that instantiates one of such objects.

Week is really busy. I'll probably only get back to this on the weekend.

09/20/07 06:01:35: Modified by exvito

quick note:
- the socket module already implements a file-like object - that's what socket.makefile() returns
- it's just that the readline() in that "file" is not timeout resistant...
- ...so it raises socket.timeout exceptions (although this is not clearly documented in python -- is this expected, stable behaviour between python versions? is it by design or as a side effect?)
- the socket module file object seems to be the same code between python 2.3.x and 2.4.x
- maybe the solution is as simple as ensuring CP keeps retrying its readlines() on socket.timeouts? maybe it already does this? (I don't think so)
- BTW, need to check to which sockets the settimeout is being applied; I'd assume both to the server socket listening for connections and to the connection sockets... maybe this could help understand the different behaviours on different platforms...

I'll be back!
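[Editor's note] The buffered _good_readline() idea described in the comments above is never actually shown in the thread. As an illustrative sketch only (not the actual CherryPy patch, and written in modern Python 3 syntax), a recv()-based line reader that keeps leftover bytes between calls and retries on EAGAIN/EWOULDBLOCK might look like this:

```python
import errno
import socket

class LineReader:
    """Minimal buffered line reader over a (possibly non-blocking) socket.

    Keeps leftover bytes between calls and returns one line (including
    the b"\r\n" terminator) whenever a terminator is found in the buffer.
    """
    def __init__(self, sock):
        self.sock = sock
        self.buf = b""

    def readline(self):
        while b"\r\n" not in self.buf:
            try:
                chunk = self.sock.recv(4096)
            except socket.timeout:
                # retry instead of giving up, as proposed in the thread
                continue
            except OSError as e:
                if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
                    continue
                raise
            if not chunk:  # peer closed the connection; return what is left
                line, self.buf = self.buf, b""
                return line
            self.buf += chunk
        line, _, self.buf = self.buf.partition(b"\r\n")
        return line + b"\r\n"
```

The class name and retry policy here are assumptions for illustration; a production version would also need an overall deadline so a stalled peer cannot keep the loop spinning forever.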
09/22/07 13:26:38: Modified by guest

interesting old thread (raises good questions):
- some of them shared by me while developing the non-blocking-friendly file-like object to replace the current socket.makefile()

extra notes/questions:
- regarding the issue raised in the thread I linked in the previous note (readline buffering in timeout conditions): how should one go about readline() behaviour in non-blocking socket conditions?
  - behave like the blocking version?
  - or raise timeouts but keep buffering incoming data?

todos:
- need to check what the WSGI spec states regarding non-blocking sockets and related behaviours
- need to check the effective usage of timeouts under CP's wsgiserver... what are they used for? I'd say the objective is to drop idling connections... Am I correct?

09/22/07 13:29:40: Modified by exvito
- attachment nbs.py added. in-work version of non-blocking socket + file-like object

09/24/07 08:24:06: Modified by exvito

reminder / restatement of the original problem:
- the sockets used have a timeout set on them
- this makes them non-blocking sockets
- socket.sendall() and eventually other socket calls time out earlier than the defined timeout when run on Mac OS X

01/05/08 23:50:53: Modified by exvito

...just got a new mac running 10.5.1, with python 2.5.1 bundled. It seems to have the same behaviour as in 10.4.10... I just did the pure python, pure socket test dated 09/15/07, 13:03:09, above, and it fails with timeouts. Mac Python 2.4.4 also fails. I need a way to get time to fix this once and forever... It's almost becoming a personal issue! :-)

01/21/08 21:06:57: Modified by fumanchu

Okay, I made source:branches/598-sendall/cherrypy to work on the "nonblocking sendall" approach to fixing this (similar to exvito's good_sendall code above). Lakin confirmed that it fixed the original issue (sendall raising EAGAIN) on OS X. However, it broke a couple of tests on all platforms, namely test_conn and test_xmlrpc. These would both hang in the middle of the test.
The fix for those was to explicitly call close() on the kernel socket, since Python's socket module does not do that until the socket is GC'ed. However, that breaks a couple of different tests when run with the --ssl option. Still need to track those down and fix them.

01/21/08 23:27:35: Modified by lakin

I was able to reproduce the issue using python2.4 and the scripts in the attached zip (598_test.zip). I'm on OSX 10.5, and with the built-in 2.5.1, CP does not seem to have an issue, although the pure socket test _does_ have the issue. As fumanchu mentioned, his branch fixes the issue.

I am going to attach a few scripts. There are two python files; one is a test server which serves up a large response (1MB) on /hello. You can test this from across the network using wget. Wget or curl on the local machine doesn't reproduce it. However, the included py_wget.py script does reproduce the problem locally. Interestingly enough, the values of 512 and 6.0 for the recv and time.sleep calls are important. Fiddling with them will affect the ability of the script to reproduce the problem. This confirmed that producing it was a timing issue and that it was probably the EAGAIN issue. Because of the extreme specificness of reproducing the error, I'm not bothering with a unit test, as we would only want to run it on OSX, with python2.4 :)

01/21/08 23:28:26: Modified by lakin
- attachment 598_test.zip added. Some test scripts which reproduce the problem locally on OSX 10.5 with python 2.4

01/28/08 23:08:02: Modified by lakin

01/28/08 23:12:00: Modified by lakin

Even better news is that the test passes using python2.4 on osx if I use the 598-sendall branch.

01/28/08 23:32:30: Modified by lakin

And good news is that the test runs (and passes) in ubuntu as well.

03/12/08 01:47:49: Modified by fumanchu
- status changed from reopened to closed.
- resolution set to fixed.

This is probably the same root cause as #479.
http://cherrypy.org/ticket/598
I've got this code which parses a text file by reading through it and assigning each individual string to a variable (an element in the 'holder' array). It is based mostly off of some code I found on YouTube for reading text files, on the newboston.com channel. How could I assign an entire line to a variable as I read a text file? I know that I need to use getline, but I can't figure out how to get it to work. Never programmed in C++ before, nor have I programmed much in general.

Code:
    #include <iostream>
    #include <fstream>
    #include <string>   // needed for std::string
    #include <cstdlib>  // needed for system()

    using namespace std;

    int main()
    {
        string holder[200];
        int i = 0;
        char filename[50];
        char word[50];
        ifstream inputobject;

        cout << "filename equals: ";
        cin.getline(filename, 50);
        inputobject.open(filename);

        inputobject >> word;
        holder[i] = word;
        while (inputobject.good()) {
            holder[i] = word;
            inputobject >> word;
            i++;
        }
        system("pause");
        return 0;
    }
http://cboard.cprogramming.com/cplusplus-programming/133194-parsing-text-using-getline.html
Using log4net directly will cause the code to become twice as long, just because for every func() you'll have to write a corresponding log("func()") - once for the compiler to execute and once for the developer/log file to read/record. What I want to have is a real description of the actions my program performs, why it performs them, and so forth. See the very good log4net; it also handles log rotation and recycling.

The resource file is ResourceExceptions.resx. If I try to change the resource name to .resources, VS complains that I should not do this as I may corrupt something. Does anyone have any suggestions? The error message indicates that it is looking for the resource ResourceExceptions.resource, which is not what was embedded in the code by Visual Studio. I can't seem to find the file in my project files either. I finally figured out that I need to put the assembly name in front of the resource file name. But when it is called I get the error message "Could not find any resources.." shown underneath the code.

Exception: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)?? Does the error occur in the System.Diagnostics.Process.Start("rtmpdump", filename); line? If so, then the file you are trying to access from that path is not valid.

If the specified file was added to the deployment project as a file, it may have been moved or deleted. You will need to remove the reference from your project and add it again from the new location. If you have specified built outputs for a specific configuration, you must build that configuration before building the deployment project.

The Execute function seems to return the results of your delegate/lambda. Without the ExecutionContext code the method would read like this:

    public string GetData(ExecutionContext context)
    {
        if (!File.Exists(_fileName))
        {
            throw new FileNotFoundException();
        }
        return File.ReadAllText(_fileName);
    }

A reasonably experienced developer can read this code. I've implemented a draft solution for this problem and I'd like to know what you think:

    sealed class FileReader
    {
        private readonly string _fileName;
        public FileReader(string fileName)
        {
            _fileName = fileName;
            // ...

KISS - Keep It Simple, Stupid. Basically, just let it throw whatever it throws:

    public class FileReader : IFileReader
    {
        public string GetData(string fileName)
        {
            return File.ReadAllText(fileName);
        }
    }

Then you implement the decorator.

MemoryMappedFile: Unable to find the specified file. I'm having trouble getting the MemoryMappedFile pattern to work. Once in memory, I call each stream, input values into the Excel sheet, and then read values from calculated cells. I figured using MemoryMappedFiles was the way to go? Reading a file twice causes it to be cached.

In my Surface project, I have Merged Resource Dictionaries: one for my project and one that seems to have turned up when I created the application.

In the next blog, I will focus on how to customize the robot's name and other attributes. Michael (February 6, 2016): Hello, I have some questions about AIML. Does it work only with English? If not, how do I change the language of the messages?
http://juicecoms.com/unable-to/unable-to-find-the-specified-file-c.html
On the Open Source side there are the renderers Aqsis and Pixie. On the commercial side, the most popular renderers are Pixar's Photorealistic RenderMan (PRMan), RenderDotC and AIR. The RenderMan interface was created by Pixar and the official specification can be downloaded from their site.

This document is not an introduction to the RenderMan interface itself, it just explains the usage of this particular Python binding. The binding was written to be compliant to v3.2 of Pixar's RenderMan Interface specification. However, it also supports features that were introduced after v3.2. There is another RenderMan module called cri that interfaces a renderer directly. Almost everything that is said in this section applies to the cri module as well.

It is safe to import the module using

    from cgkit.ri import *

All the functions that get imported start with the prefix Ri, and all constants start with RI_ or RIE_, so you probably won't get into a naming conflict. After importing the module this way you can use the functions just as you're used to from the C API (well, almost).

    from cgkit.ri import *

    RiBegin(RI_NULL)

    RiWorldBegin()
    RiSurface("plastic")
    RiSphere(1,-1,1,360)
    RiWorldEnd()

    RiEnd()

The parameter to RiBegin() determines where the output is directed to. You can pass one of the following:

Note: When using the cri module you first have to load a library and invoke the functions on the returned handle (see the section on the cri module for more information about that). The interpretation of the argument to RiBegin() is then dependent on the renderer you are using.

Every function has an associated doc string that includes a short description of the function, some information about what parameters the function expects and an example how the function is called. Example (inside an interactive Python session):

    >>> from ri import *
    >>> help(RiPatch)
    RiPatch(type, paramlist)

    type is one of RI_BILINEAR (4 vertices) or RI_BICUBIC (16 vertices).
    Number of array elements for primitive variables:
    -------------------------------------------------
    constant: 1
    varying: 4
    uniform: 1
    vertex: 4/16 (depends on type)

    Example: RiPatch(RI_BILINEAR, [0,0,0, 1,0,0, 0,1,0, 1,1,0])

or from the shell (outside the Python shell):

    > pydoc ri.RiCropWindow
    Python Library Documentation: function RiCropWindow in ri

    RiCropWindow(left, right, bottom, top)

    Specify a subwindow to render. The values each lie between 0 and 1.

    Example: RiCropWindow(0.0, 1.0, 0.0, 1.0) (renders the entire frame)
             RiCropWindow(0.5, 1.0, 0.0, 0.5) (renders the top right quarter)

The Python RenderMan binding is rather close to the C API, however there are some minor differences you should know about.

Types

In this binding, typing is not as strict as in the C API. For compatibility reasons, the RenderMan types (RtBoolean, RtInt, RtFloat, etc.) do exist, but they are just aliases for the corresponding built-in Python types and you never have to use them explicitly. In the ctypes-based cri module, the types refer to the respective ctypes types and you may want to use them occasionally to construct arrays.

Wherever the API expects vector types (RtPoint, RtMatrix, RtBound, RtBasis) you can use any value that can be interpreted as a sequence of the corresponding number of scalar values. These can be lists, tuples or your own class that can be used as a sequence. It is also possible to use nested sequences instead of flat ones. For example, you can specify a matrix as a list of 16 values or as a list of four 4-tuples. The following two calls are identical:
However, adding RI_NULL at the end of the list will not generate an error. For example, if you are porting C code to Python you don’t have to change those calls. So the following two calls are both valid: RiSurface("plastic", "kd", 0.6, "ks", 0.4) RiSurface("plastic", "kd", 0.6, "ks", 0.4, RI_NULL) The tokens inside the parameter list have to be declared (either inline or using RiDeclare()), otherwise an error is generated. Standard tokens (like RI_P, RI_CS, ...) are already pre-declared. Parameter lists can be specified in several ways. The first way is the familiar one you already know from the C API, that is, the token and the value are each an individual parameter: RiSurface("plastic", "kd", 0.6, "ks", 0.4) Alternatively, you can use keyword arguments: RiSurface("plastic", kd=0.6, ks=0.4) But note that you can’t use inline declarations using keyword arguments. Instead you have to previously declare those variables using RiDeclare(). Also, you can’t use keyword arguments if the token is a reserved Python keyword (like the standard "from" parameter). The third way to specify the parameter list is to provide a dictionary including the token/value pairs: RiSurface("plastic", {"kd":0.6, "ks":0.4}) This is useful if you generate the parameter list on the fly in your program. Arrays In the C API functions that take arrays as arguments usually take the length of the array as a parameter as well. This is not necessary in the Python binding. You only have to provide the array, the length can be determined by the function. For example, in C you might write: RtPoint points[4] = {0,1,0, 0,1,1, 0,0,1, 0,0,0}; RiPolygon(4, RI_P, (RtPointer)points, RI_NULL); The number of points has to be specified explicitly. 
In Python however, this call could look like this: points = [0,1,0, 0,1,1, 0,0,1, 0,0,0] RiPolygon(RI_P, points) The functions that are affected by this rule are: RiBlobby() RiColorSamples() RiCurves() RiGeneralPolygon() RiMotionBegin() RiPoints() RiPointsGeneralPolygons() RiPointsPolygons() RiPolygon() RiSubdivisionMesh() RiTransformPoints() RiTrimCurve() When using the cri module it is particularly advantageous to pass arrays as ctypes arrays or numpy arrays. In this case, no data conversion is required which makes the function call considerably faster (particularly for large amounts of data). # Creating a ctypes array of floats points = (12*RtFloat)(0,1,0, 0,1,1, 0,0,1, 0,0,0) # Creating a numpy array of floats points = numpy.array([0,1,0, 0,1,1, 0,0,1, 0,0,0], dtype=numpy.float32) User defined functions Some RenderMan functions may take user defined functions as input which will be used during rendering. When using the cri module to link to an actual RenderMan library you can use Python functions in addition to the standard functions. However, in the case of the generic (ri) module, you can only use the predefined standard functions. Filter functions It is not possible to use your own filter functions in combination with the ri module, you have to use one of the predefined filters: Procedurals It is not possible to use your own procedurals directly in the RIB generating program, you can only use one of the predefined procedural primitives: However, this is not really a restriction since you always can use RiProcRunProgram to invoke your Python program that generates geometry. Extended transformation functions The transformation functions RiTranslate(), RiRotate(), RiScale() and RiSkew() have been extended in a way that is not part of the official spec. 
Each of these functions takes one or two vectors as input which usually are provided as 3 separate scalar values, like the axis of a rotation for example: RiRotate(45, 0,0,1) Now in this implementation you can choose to provide such vectors as sequences of 3 scalar values: RiRotate(45, [0,0,1]) axis = vec3(0,0,1) RiRotate(45, axis) Empty stubs In the ri module, the function RiTransformPoints() always returns None and never transforms points (as the module just outputs RIB and does not maintain transformations matrices). In the cri module, on the other hand, the function is available and can be used to transform points. There is currently one option that is specific to this RenderMan binding and that won’t produce any RIB call but will control what gets written to the output stream: If this option is set to 0 directly after RiBegin() is called, then no "version" call will be generated in the RIB stream (default is 1). — New in version 1.1 (as of cgkit 2.0.0alpha9, the version call has been disabled) This option can be used to set the number of significant digits that should be used for writing floating point values in parameter lists (default is 6). The value can be changed any time to affect subsequent calls. — New in version 2.0 Before a floating point value in a parameter list is written into the RIB, it is rounded to a certain precision. The precision can be controlled using this option (default is 10). The value can be changed any time to affect subsequent calls. — New in version 2.0 This can be used to specify a custom formatting string that should be used for writing floating point values stored in parameter lists (default is "%1.6g"). If this option is used, the RI_NUM_SIGNIFICANT_DIGITS setting does not have an effect anymore. The value can be changed any time to affect subsequent calls. 
— New in version 2.0 Besides the three standard error handlers RiErrorIgnore, RiErrorPrint (default) and RiErrorAbort the module provides an additional error handler called RiErrorException. Whenever an error occurs RiErrorException raises the exception RIException. If you install a new error handler with RiErrorHandler() only the three standard error handlers will produce an output in the RIB stream, if you install RiErrorException or your own handler then the handler is installed but no RIB output is produced. The module does some error checking, however there are still quite a bit of possible error cases that are not reported. For example, the module checks if parameters are declared, but it is not checked if you provide the correct number of values. In general, the module also does not check if a function call is valid in a given state (e.g. the module won’t generate an error if you call RiFormat() inside a world block).
http://cgkit.sourceforge.net/doc2/ri.html
Cross-Language Interoperability If you've been building COM components for a while, you know that one of the great promises of COM is that it is language-independent. If you build a COM component in C++, you can call it from VB, and vice versa. However, to reach that point, your code had to be compiled to a COM standard. Much of this was hidden from the VB developer, but your component had to implement the IUnknown and IDispatch interfaces. Without these interfaces, they would not have been true COM components. Now, however, the CLR gives you much better language interoperability. Not only can you inherit classes from one PE written in language A and use them in language B, but debugging now works across components in multiple languages. This way, you can step through the code in a PE written in C# and jump to the base class that was written in VB.NET. In addition, you can raise an error (now called an exception) in one language and have it handled by a component in another language. This is significant because now developers can write in the language with which they are most comfortable, and be assured that others writing in different languages will be able to easily use their components. The Catch This all sounds great, and you are probably getting excited about the possibilities. There is a catch, however: To make use of this great cross-language interoperability, you must stick to only those data types and functions common to all the languages. If you're wondering just how you do that, the good news is that Microsoft has already thought about this issue and set out a standard called the Common Language Specification, or CLS. If you stick with the CLS, you can be confident that you will have complete interoperability with others programming to the CLS, no matter what languages are being used. Not very creatively, components that expose only CLS features are called CLS-compliant components. 
To write CLS-compliant components, you must stick to the CLS in these key areas:

- The public class definitions must include only CLS types.
- The definitions of public members of the public classes must be CLS types.
- The definitions of members that are accessible to subclasses must be CLS types.
- The parameters of public methods in public classes must be CLS types.
- The parameters of methods that are accessible to subclasses must be CLS types.

These rules talk a lot about definitions and parameters for public classes and methods. You are free to use non-CLS types in private classes, private methods, and local variables.
http://www.informit.com/articles/article.aspx?p=21415&seqNum=8
Forums Dev I am experimenting making external in c++ with the max external… Up until now I was encapsulating my main function in a extern “C” {}.. but I realize that alot of the cycling74 header have a if def _cplusplus… So does this means that if I do #define _cplusplus in before all my includes I don’t need to put my main in an extern “C” block? Of course I’ll only use c++ function in my main… thks alot Mani There is a C++ example in the SDK called “collect”, which you may find helpful. You do not need to define the c++ preprocessor symbol — compilers set this automatically and if you change it then it could cause you trouble down the road. Cheers, Wow thanks you are right! it is because I didn’t like to put all my software in an extern “C” block so i did namespace C74 { // Max includes extern "C" { #include "ext.h" // standard Max include, always required #include "ext_obex.h" // required for new style Max object #include "ext_strings.h" #include "ext_common.h" #include "commonsyms.h" } }; // namespace MaxMspJitter and it all seam to work! thanks alot! Mani The Cycling-provided header files (ext.h and friends) should have all of the extern “C” work done for you already, so there really isn’t any need for you to do it manually. Cheers, Tim You must be logged in to reply to this topic. C74 RSS Feed | © Copyright Cycling '74
http://cycling74.com/forums/topic/the-use-of-the-define-_cplusplus/
I'm learning how to use the SDL libraries but I'm having a small problem playing the sound. The program appears to load the sound but either not play it or play it too low for me to hear. Here's my code so far: #include "SDL/SDL.h" #include "SDL/SDL_mixer.h" Mix_Music *play_sound = NULL; void cleanUp() { Mix_FreeMusic(play_sound); Mix_CloseAudio(); SDL_Quit(); } int main(int argc, char* args[]) { SDL_Init(SDL_INIT_EVERYTHING); Mix_OpenAudio(22050, MIX_DEFAULT_FORMAT, 2, 4096); play_sound = Mix_LoadMUS("noise.mp3"); Mix_PlayMusic(play_sound, -1); cleanUp(); return 0; } I'm using Dev-Cpp with these linker arguments: -lmingw32 -lSDLmain -lSDL -lSDL_mixer and I have the proper .dll's in the folder with my project along with the actual .mp3. Any suggestions?
https://www.daniweb.com/programming/software-development/threads/161557/sdl-playing-music
heap is a specialized tree-based data structure. The Heap data structure can be used to efficiently find the kth smallest (or largest) element in an array. If P is a parent node of C, then the key (the value) of P is either greater than or equal to (in a max heap) or less than or equal to (in a min heap) the key of C. Nodes at the top are highest priority ones. An illustration of a min heap is: Table of Contents Heapify Removing an element requires a rearrange of the heap data structure. This is termed as heapifying. For example, if you delete the root node of the heap. For that, we need to swap it with the lowest node with the root node and then check the heap property recursively throughout. heapifyUp() starts from the bottom and checks the heap properties all the way up. heapifyDown() does just the opposite Children of a node at N would be positioned at 2n+1 and 2n+2 in a max heap. Let’s launch our XCode playground and create our heap using Swift. We’ll build a heap structure using an array. Building a Heap struct Heap<T> { var elements = [T]() var isEmpty: Bool { return elements.isEmpty } var count: Int { return elements.count } var isOrdered: (T, T) -> Bool public init(sort: @escaping (T, T) -> Bool) { self.isOrdered = sort } } An escaping closure is passed which is used to save the function values. It compares the two values passed. 
Add the following functions in the struct to get the indexes of the parent, left and right nodes: func parentOf(_ index: Int) -> Int { return (index - 1) / 2 } func leftOf(_ index: Int) -> Int { return (2 * index) + 1 } func rightOf(_ index: Int) -> Int { return (2 * index) + 2 } heapifyDown(): mutating func heapifyDown(index: Int, heapSize: Int) { var parentIndex = index while true { let leftIndex = self.leftOf(parentIndex) let rightIndex = leftIndex + 1 var first = parentIndex if leftIndex < heapSize && isOrdered(elements[leftIndex], elements[first]) { first = leftIndex } if rightIndex < heapSize && isOrdered(elements[rightIndex], elements[first]) { first = rightIndex } if first == parentIndex { return } elements.swapAt(parentIndex, first) parentIndex = first } } mutating func shiftDown() { heapifyDown(index: 0, heapSize: elements.count) } heapifyDown() is used while removing elements. heapifyUp() is used while inserting. The following function is used to build the heap from the array: mutating func buildHeap(fromArray array: [T]) { elements = array for i in stride(from: (elements.count/2 - 1), through: 0, by: -1) { heapifyDown(index: i, heapSize: elements.count) } } heapifyUp(): mutating func heapifyUp(index: Int) { var nodeIndex = index while true { let parentIndex = self.parentOf(nodeIndex) var first = parentIndex if parentIndex >= 0 && isOrdered(elements[nodeIndex], elements[first]) { first = nodeIndex } if first == parentIndex { return } elements.swapAt(parentIndex, first) nodeIndex = first } } The following functions are used to insert and remove the elements from the heap: Insert: mutating func insert(value: T) { self.elements.append(value) heapifyUp(index: self.elements.count - 1) } Remove: public mutating func remove(at index: Int) -> T? 
{ let temp: T if index < 0 || index > count - 1 { return nil } temp = elements[index] elements[index] = elements[count - 1] elements.removeLast() shiftDown() return temp } Let's use a sample input to test the above structure we've created: let array: [Int] = [12,3,6,15,45,1,2] var heap = Heap<Int>(sort: >) heap.buildHeap(fromArray: array) heap.insert(value: 23) print(heap.elements) print(heap.remove(at: 0)) print(heap.elements) This brings an end to this tutorial on Swift Heap. Thanks for the post – helps to have some practical code when reviewing a topic. Just couple of things. Would be handy to know version of swift used. I tried the code in $ swift -v Apple Swift version 4.2.1 (swiftlang-1000.11.42 clang-1000.11.45.1) and needed to fix a couple of things. 1) struct Heap { var elements = [T]() becomes struct Heap { var elements: [T] = [] 2) and the } at the end of the first frame needs to move to the bottom of the last frame to encompass all the code into the struct definition 3) The heap.swift:116:7: warning: expression implicitly coerced from 'Int?' to 'Any' print(heap.remove(at: 0)) can be fixed by print(heap.remove(at: 0)!) Cheers Gannett
https://www.journaldev.com/21656/swift-heap-data-structure
Simple Links Link Behavior Link Semantics Extended Links Linkbases DTDs for XLinks XLinks are an attribute-based syntax for attaching links to XML documents. XLinks can be simple Point A-to-Point B links, like the links you're accustomed to from HTML's A element. XLinks can also be bidirectional, linking two documents in both directions so you can go from A to B or B to A. XLinks can even be multidirectional, presenting many different paths between any number of XML documents. The documents don't have to be XML documents--XLinks can be placed in an XML document that lists connections between other documents that may or may not be XML documents themselves. Web graffiti artists take note: these third-party links let you attach links to pages you don't even control, like the home page of the New York Times or the C.I.A. At its core XLink is nothing more and nothing less than an XML syntax for describing directed graphs, in which the vertices are documents at particular URIs and the edges are the links between the documents. What you put in that graph is up to you. Current web browsers at most support simple XLinks that do little more than duplicate the functionality of HTML's A element. Many browsers don't support XLinks at all. However, custom applications may do a lot more. Since XLinks are so powerful, it shouldn't come as a surprise that they can do more than blue underlined links on web pages. XLinks can describe tables of contents or indexes. They can connect textual emendations to the text they describe. They can indicate possible paths through online courses or virtual worlds. Different applications will interpret different sets of XLinks differently. Just as no one browser really understands the semantics of all the various XML applications, so too no one program can process all collections of XLinks. A simple link defines a one-way connection between two resources. The source or starting resource of the connection is the link element itself. 
The target or ending resource of the connection is identified by a Uniform Resource Identifier (URI). The link goes from the starting resource to the ending resource. The starting resource is always an XML element. The ending resource may be an XML document, a particular element in an XML document, a group of elements in an XML document, a span of text in an XML document, or something that isn't a part of an XML document, such as an MPEG movie or a PDF file. The URI may be something other than a URL, for instance a book ISBN number like urn:isbn:1565922247. A simple XLink is encoded in an XML document as an element of arbitrary type that has an xlink:type attribute with the value simple and an xlink:href attribute whose value is the URI of the link target. The xlink prefix must be mapped to the namespace URI. As usual, the prefix can change as long as the URI stays the same. For example, suppose this novel element appears in a list of children's literature and we want to link it to the actual text of the novel available from the URL. <novel> <title>The Wonderful Wizard of Oz</title> <author>L. Frank Baum</author> <year>1900</year> </novel> We give the novel element an xlink:type attribute with the value simple, an xlink:href attribute that contains the URL to which we're linking, and an xmlns:xlink attribute that associates the prefix xlink with the namespace URI. The result is this: <novel xmlns:xlink= "" xlink:type = "simple" xlink: <title>The Wonderful Wizard of Oz</title> <author>L. Frank Baum</author> <year>1900</year> </novel> This establishes a simple link from this novel element to the plain text file found at. Browsers are free to interpret this link as they like. However, the most natural interpretation, and the one implemented by the few browsers that do support simple XLinks, is to make this a blue underlined phrase the user can click on to replace the current page with the file being linked to. Other schemes are possible however. 
XLinks are fully namespace aware. The xlink prefix is customary, though it can be changed. However, it must be mapped to the URI. This can be done on the XLink element itself, as in this novel example, or it can be done on any ancestor of that element up to and including the root element of the document. Future examples in this and the next chapter use the xlink prefix exclusively and assume that this prefix has been properly declared on some ancestor element. Every XLink element must have an xlink:type attribute telling you what kind of link (or part of a link) it is. This attribute has six possible values: Simple Extended Locator Arc Title Resource Simple XLinks are the only ones that are really similar to HTML links. The remaining five kinds of XLink elements will be discussed in later sections. The xlink:href attribute identifies the resource being linked to. It always contains a URI. Both relative and absolute URLs can be used, as they are in HTML links. However, the URI need not be a URL. For example, this link identifies but does not locate the print edition of The Wonderful Wizard of Oz with the ISBN number 0688069444: <novel xmlns:xlink= "" xlink:type = "simple" xlink: <title>The Wonderful Wizard of Oz</title> <author>L. Frank Baum</author> <year>1900</year> </novel>
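The five non-simple xlink:type values listed above combine into extended links, which later sections cover in detail. As a preview, a minimal extended link might look like the following (the element names courseload, novel, and go are invented for illustration; only the xlink:* attributes carry XLink meaning):

```xml
<!-- An extended link: two remote resources identified by locator
     elements, connected by one arc from "oz" to "ozma". -->
<courseload xmlns:
  <novel xlink:type="locator" xlink:
  <novel xlink:type="locator" xlink:
  <go xlink:type="arc" xlink:
</courseload>
```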
https://docstore.mik.ua/orelly/xml/xmlnut/ch10_01.htm
. There was a great pinch / zoom plugin for ST1 from chris, which worked fine in the end! Central zooming, adjustable scrolling bars, etc. You can find the code on github: I tried to port the plugin to ST2, but was not yet successful: Maybe this will help to finish the zooming issue... I just want to announce, that I made huge progress in porting the pinch / zoom plugin from chris to ST2. Maybe someone can help me, with the last small issues... Have a look at my posting: I still have problem with this class. in general, you may not know the ratio of the image, so a good option is to specify the desired width and let the image size adjust the height by keeping the same ratio. this means when you call this PinchZoomImage class, you don't set the height. however, if the initial height remains null, then the code breaks when you do zoom/pinch, because you are multiplying null by a fractional number. any one knows a way to discover the image height after it has been painted? I got it. on load, I picked it up by var height = e.element.dom.firstChild.height; Hello, I implemented the example cited in post#1(), but could not perform the pinch with him (he simply does not detect the pinch), only the DoubleTap worked properly. I have an android GalaxyII 2.3.3, anyone know if you have any restriction to enable the pinch or perhaps something in sencha? I'm using sencha touch 2.0.1. note: I did the same tests in IOS simulator with xCode, worked. You can't track pinch gestures on Android 2.x, that's why we added zoom in/out Buttons to the pinch-plugin from Chris. Modification to Image Sizing Modification to Image Sizing @mrsunshine This is a great piece of code and very useful. I ran into a few usability issues on my implementation: 1) If the user makes the image very small then the pinch to zoom can be difficult (as they have to pinch on the image) -> Easy fix by setting a minimum size 2) The resizing was a little 'wonky'. It always resized from the initial image size. 
This is odd, for example, once you have zoomed into the image and want to make it smaller, the decrease zoom factor should be applied to the enlarged version of the image, not the original version. Anyways, to fix the above I added / modified the following, and others might be interested in doing the same. To the prototype config added: Code: scaleWidth: 0 Code: pinchstart: { element: 'element', fn: this.onImagePinchStart } Code: onImagePinchStart: function(e) { if(this.up('container').getScaleWidth()==0) { this.up('container').setScaleWidth(this.getInitialConfig('width')); } else { this.up('container').setScaleWidth(this.element.getWidth()); } }, Code: onImagePinch: function(e) { var initialWidth = this.getInitialConfig('width'), initialHeight = this.getInitialConfig('height'), imageRatio = initialWidth / initialHeight, container = this, image = this.element, newWidth, newHeight, scroller = this.up('container').getScrollable().getScroller(), pos = scroller.getMaxPosition(); newWidth = this.up('container').getScaleWidth() * e.scale; newHeight = newWidth / imageRatio; //Code for Min Size if((newWidth < initialWidth) || (newHeight < initialHeight)) { newWidth = initialWidth; newHeight = initialHeight; } container.setWidth(newWidth); container.setHeight(newHeight); image.setWidth(newWidth); image.setHeight(newHeight); scroller.scrollTo(pos.x/2, pos.y/2); }, Anyways, for any of you using @mrsunshine's code, I think the above smooth's the zooming out a bit. I am loading my images into a detail card from an html file using an ajax call so they are just plain html <img> tags. I want to use this plugin but I am struggling trying to figure out how to get the <img> tag to an xtype of 'pinchzoomimage' Does anyone have any suggestions? Thanks. Brian The following might work for you - in the 'success' return of your AJAX call, parse the img you return (pulling the src property) and then call the pinchzoomimage xtype, passing it the src you retrieved as the config parameter. 
Alternatively, you can do roughly the same thing from within the 'pinchzoomimage' class, but you will have to modify some of the class's internal code.
http://www.sencha.com/forum/showthread.php?187928-Pinch-Zoom-Scrollable-Image&p=784863
Rails is a powerful framework - it abstracts many of the underlying database concepts so that you never have to think about them. Of course, sometimes it abstracts away too many. Frequently I run into a case where I want more data in an ActiveRecord than is actually available from my SQL schema. For instance, I often want to include data from another table through a JOIN or sub-SELECT, or maybe I want to be able to ORDER BY data in a table that's in a belongs_to relationship with the current model. To illustrate why this is useful, let's assume that we have a table, Pet, which has a PetType. Our schema looks something like this: pet_types Table: pets Table: Given this schema, there are two things I want to be able to do: - Get a list of pets, ordered by their type. But I want to order on pet_types.name, not pet_types.id. (That is, I want them in order of "Cats", "Dogs", "Platypi", not whatever order their ids are.) - Get a list of pet types, including a count of the number of pets which are of that type. Admittedly, in this contrived example, I could do these both in rails without making any changes to the model and without too much pain. But with more advanced queries, it's much easier to let your database engine do the work. And rails is relatively graceful about allowing you to do this -- you do need to know a little SQL, but you only need to change a single method in your model. Ordering by a foreign column Let's say you wanted to display a list of pets ordered by type. We'd like our database engine to do this for us, since this is one of the things it's optimized at. Since ActiveRecord::Base's various find_* methods all ultimately call find_every(), we need only override that method to add a more complex SQL query. 
We just need to make one change in our model to support this: In models/pet.rb: def Pet.find_every(args = { }) sql = "SELECT pets.*, pet_types.name AS pet_type_name " + "FROM pets, pet_types " + "WHERE pets.pet_type_id = pet_types.ID" can say Pets.find(:all, :order => 'pet_types.name') to sort by pet type. This will give us the following: Including data from another table Let's say that we wanted to provide a list of pet types and inc lude the number of pets of that type. We could certainly load each pet type, then do a Pet.count() for each type, but it may be advantageous to let the database engine do this for us. In models/pet_type.rb: def PetTypes.find_every(args = { }) sql = "SELECT pet_types.*, " + "(SELECT COUNT(pets.id) FROM pets " + "WHERE pets.pet_type_id = pet_type.id) AS pet_type_count " + "FROM pet_types " get a new column - pet_type_count which we can read (but not write to, of course) as if it were part of the table. We can use it just for display, or we can sort on it just like above. PetType.find(:all, :order => 'pet_type_count') will produce: While these are trivial examples, this sort of overriding can become exceptionally useful when you have a lot of data being computed by the SQL server. If you have a model with several belongs_to or has_many relationships, it's a lot easier to let the SQL server compute these sort of details itself, rather than trying to do it in each method of your model. Ultimately, you can put any arbitrary SQL into your find_every() method, and rails will continue to work as usual, provided you follow these three simple rules: - Don't exclude columns, only include new ones. Rails examines your table's columns the first time you use a model, and it will expect those columns to be in the results of any query. - Fully qualify conflicting column names. That is, say: :order => 'pets.name'instead of just 'name'. - Ensure that you handle the arguments to find_every(). 
That is, remember to handle :conditions, :order, :limitand :offset. Otherwise you run the risk of breaking other finds. Wow, I've been scouring the net for this information for weeks. I'm new to Rails (and Ruby), but this page has enabled me to get an important feature working that I was just about ready to give up on in my web app. Thank you so much for posting this. A very helpful post. Thanks! Hey, don't do that. When your ruby starts looking like php, it's time to be suspicious! (Zeroth of all, pet_types should be plural, like all table names, and PetType should be singular, like all models.) First of all, you can substitute all the sql += stuff with a with_scope. something like But really you should use a counter cache. that is, (with the appropriate :pet_count field in the pet_type db schema) The thing about rails is it's designed to not have to do the ugly stuff, so when you are doing ugly stuff on a daily basis, it's probably not good practice, and there's probably a better way. Hi David, Thanks for the comment, and the interest. Unfortunately, you can't execute ActiveRecord::Base's find() method from inside find_every(), or you'll get infinite recursion. As I mentioned, the point of this is to overload find_every() so that find() will do whatever SQL magic you have in mind, without actually having to know about that bit of complexity. find_by_sql(), however, does a query right to the database, without going through find_every(). (In fact, ActiveRecord::Base::find_every() calls find_by_sql, and the code above was mostly stolen from that. I admit that my example was perhaps overly trivial, and as you point out, a counter_cache would have been a more appropriate solution to this problem, but I wanted a simple example that didn't get bogged down in a stored procedure. 
While I agree that rails abstracts away much of the ugly stuff -- there are times when you need to get down to the database level, whether to execute complex joins, sub-selects or executing stored procedures. To be fair, there may be a better (or, more rails-y) way to do this, but this has been working for me for a while with some nasty SQL on a rails project. Thanks again! -Ed
http://www.edwardthomson.com/blog/2007/02/complex_sql_queries_with_rails.html
crawl-002
refinedweb
1,054
71.24
WE hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness. WE hold these Truths to be certain unalienable Rights, self-evident, that all Men are that among these are Life, created equal, that they are Liberty and the Pursuit of endowed by their Creator with Happiness. (Yeah, I got this sales pitch from my real estate agent :-)) (We will not break a word with hyphenization... E.g., we will not break iceberg into ice- and berg) Therefore: (Each word in the input is a String stored in an array element) WE hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness. String[] words = new String[10000]; int numWords = 35 word[0] = "WE" word[1] = "hold" word[2] = "these" word[3] = "Truths" word[4] = "to" word[5] = "be" word[6] = "self-evident," word[7] = "that" word[8] = "all" word[9] = "Men" word[10] = "are" word[11] = "created" word[12] = "equal," word[13] = "that" word[14] = "they" word[15] = "are" word[16] = "endowed" word[17] = "by" word[18] = "their" word[19] = "Creator" word[20] = "with" word[21] = "certain" word[22] = "unalienable" word[23] = "Rights," word[24] = "that" word[25] = "among" word[26] = "these" word[27] = "are" word[28] = "Life," word[29] = "Liberty" word[30] = "and" word[31] = "the" word[32] = "Pursuit" word[33] = "of" word[34] = "Happiness." Open the file Create a "Scanner" to read data Read the input file into an array of String Note: we must remember the number of words read ! 
import java.io.*; import java.util.Scanner; public class FormatText1 { public static String[] words = new String[10000]; // Hold words in input file public static int numWords; // Count # words in input /* =========================================================== readInput(in, w): read words from input "in" into array w and return the #words read ============================================================ */ public static int readInput( Scanner in, String[] w ) { int nWords = 0; String x; while ( in.hasNext() ) { x = in.next(); // Read next word w[nWords] = x; // Store it away nWords++; // Count # words AND use next // array element to store next word } return ( nWords ); } public static void main(String[] args) throws IOException { if ( args.length == 0 ) { System.out.println( "Usage: java FormatText1 inputFile"); System.exit(1); } File myFile = new File( args[0] ); // Open file "inp2" Scanner in = new Scanner(myFile); // Make Scanner obj with opened file /* ---------------------------- Read input from data file ---------------------------- */ numWords = readInput( in, words ); } } How to run the program:
http://www.mathcs.emory.edu/~cheung/Courses/170/Syllabus/12/first-step.html
NAME
    termios.h − define values for termios

SYNOPSIS
    #include <termios.h>

DESCRIPTION
    The <termios.h> header contains the definitions used by the terminal I/O
    interfaces (see General Terminal Interface for the structures and names
    defined).

  The termios Structure
    The following data types shall be defined through typedef:

    The following subscript names for the array c_cc shall be defined:

  Output Modes
    The c_oflag field specifies the system treatment of output:

  Baud Rate Selection
    The input and output baud rates are stored in the termios structure.
    These are the valid values for objects of type speed_t. The following
    values shall be defined, but not all baud rates need be supported by the
    underlying hardware.

  Control Modes
    The c_cflag field describes the hardware control of the terminal; not all
    values specified are required to be supported by the underlying hardware:

    The implementation shall support the functionality associated with the
    symbols CS7, CS8, CSTOPB, PARODD, and PARENB.

  Local Modes
    The c_lflag field of the argument structure is used to control various
    terminal functions:

    The following names are reserved for XSI-conformant systems to use as an
    extension to the above; therefore strictly conforming applications shall
    not use them:
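For a quick way to inspect these definitions, Python's standard termios module exposes the same symbolic constants on POSIX systems. The following is just an illustrative sketch; the numeric values are platform-specific:

```python
import termios

# The c_cc subscript names are integer indices into the control-character
# array returned by termios.tcgetattr(fd).
for name in ("VEOF", "VEOL", "VERASE", "VINTR", "VKILL",
             "VMIN", "VQUIT", "VSTART", "VSTOP", "VSUSP", "VTIME"):
    print(name, getattr(termios, name))

# Mode flags are bit masks, e.g. canonical mode and echo in c_lflag:
print("ICANON=%#x ECHO=%#x" % (termios.ICANON, termios.ECHO))

# A baud-rate selection constant from the speed_t table:
print("B9600 =", termios.B9600)
```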
https://manpag.es/YDL61/0p+termios.h
OmniJSON

The Problem

The Solution

    >>> import omnijson as json  # \o/

Features

- Loads whichever is the fastest JSON module installed
- Falls back on built-in pure-Python simplejson, just in case.
- Proper API (loads(), dumps())
- Vendorizable
- Supports Python 2.5-3.2 out of the box

Usage

Load the best JSON available:

    import omnijson as json

Load some objects:

    >>> json.loads('{"yo": "dawg"}')
    {'yo': 'dawg'}

Dump some objects:

    >>> json.dumps({'yo': 'dawg'})
    '{"yo": "dawg"}'

Check which JSON engine is in use:

    >>> json.core.engine
    'ujson'

Install

Installing OmniJSON is easy:

    $ pip install omnijson

Or, if you must:

    $ easy_install omnijson

But, you really shouldn't do that.

License

The MIT License:

Copyright (c) 2011 Kenneth Re.
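The trick omnijson relies on (probe for a fast engine, fall back to a pure-Python one) can be sketched in a few lines. This is an illustration of the idea, not omnijson's actual source:

```python
import json as stdlib_json

# Try progressively faster third-party engines, falling back to the stdlib.
engine, engine_name = None, None
for candidate in ("ujson", "simplejson"):   # fastest first
    try:
        engine = __import__(candidate)
        engine_name = candidate
        break
    except ImportError:
        pass
if engine is None:                          # built-in fallback
    engine, engine_name = stdlib_json, "json"

loads, dumps = engine.loads, engine.dumps

print(engine_name)               # whichever engine was found first
print(loads('{"yo": "dawg"}'))   # {'yo': 'dawg'}
```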
https://libraries.io/pypi/omnijson
C# - A Split-and-Merge Expression Parser in C#

By Vassili Kaplan | October 2015

I published a new algorithm for parsing a mathematical expression in C++ in the May and July 2015 issues of CVu Journal (see items 1 and 2 in "References"). It took two articles because one astute reader, Silas S. Brown, found a bug in the first algorithm implementation, so I had to make some modifications to it. Thanks to that reader, the algorithm became more mature. I've also fixed a few smaller bugs since then. Now I'm going to provide an implementation of the corrected algorithm in C#.

It's not too likely you'll ever have to write code to parse a mathematical expression, but the techniques used in the algorithm can be applied to other scenarios, as well, such as parsing non-standard strings. Using this algorithm you can also easily define new functions that do whatever you wish (for example, make a Web request to order a pizza). With small adjustments, you can also create your own C# compiler for your new custom scripting language. Moreover, you just might find the algorithm interesting in its own right.

The Edsger Dijkstra algorithm, published more than 50 years ago in 1961, is often used for parsing mathematical expressions (see item 3 in "References"). But it's good to have an alternative that, though it has the same time complexity, is, in my opinion, easier to implement and to extend. Note that I'm going to use the "virtual constructor" idiom for the function factory implementation. This idiom was introduced in C++ by James Coplien (see item 4 in "References"). I hope you'll find its use interesting in the C# world, as well.

The Split-and-Merge Algorithm

The demo program in Figure 1 illustrates the split-and-merge algorithm for parsing a mathematical expression.

Figure 1 A Demo Run of the Split-and-Merge Algorithm

The algorithm consists of two steps.
In the first step the string containing the expression is split into a list of Cell objects. Each Cell holds a numeric Value and an Action: the action is a single character that can be any of the mathematical operators '*' (multiplication), '/' (division), '+' (addition), '-' (subtraction) or '^' (power), or a special character denoting the end of an expression, which I hardcoded as ')'. The last element in the list of cells to be merged will always have the special action ')', that is, no action, but you can use any other symbol or a parenthesis instead.

In the first step, the expression is split into tokens that are then converted into cells. All tokens are separated by one of the mathematical operators or a parenthesis. A token can be either a real number or a string with the name of a function. The ParserFunction class defined later takes care of all of the functions in the string to be parsed, or for parsing a string to a number. It may also call the whole string parsing algorithm, recursively. If there are no functions and no parentheses in the string to parse, there will be no recursion.

In the second step, all the cells are merged together. Let's see the second step first because it's a bit more straightforward than the first one.

Merging a List of Cells

The list of cells is merged one by one according to the priorities of the actions; that is, the mathematical operators: the power operator '^' has the highest priority, '*' and '/' come next, '+' and '-' are lower still, and the end-of-expression action ')' has the lowest priority of all.

Two cells can be merged if and only if the priority of the action of the cell on the left isn't lower than the priority of the action of the cell next to it. Merging cells means applying the action of the left cell to the values of the left cell and the right cell. The new cell will have the same action as the right cell, as you can see in Figure 2.
Figure 2

static void MergeCells(Cell leftCell, Cell rightCell)
{
  switch (leftCell.Action)
  {
    case '^':
      leftCell.Value = Math.Pow(leftCell.Value, rightCell.Value);
      break;
    case '*':
      leftCell.Value *= rightCell.Value;
      break;
    case '/':
      leftCell.Value /= rightCell.Value;
      break;
    case '+':
      leftCell.Value += rightCell.Value;
      break;
    case '-':
      leftCell.Value -= rightCell.Value;
      break;
  }
  leftCell.Action = rightCell.Action;
}

For example, merging Cell(8, '-') and Cell(5, '+') will lead to a new Cell(8 - 5, '+') = Cell(3, '+').

But what happens if two cells can't be merged because the priority of the left cell is lower than the right cell? What happens then is a temporary move to the next, right cell, in order to try to merge it with the cell next to it, and so on, recursively. As soon as the right cell has been merged with the cell next to it, I return to the original, left cell, and try to remerge it with the newly created right cell, as illustrated in Figure 3.

Figure 3

static double Merge(Cell current, ref int index, List<Cell> listToMerge,
                    bool mergeOneOnly = false)
{
  while (index < listToMerge.Count)
  {
    Cell next = listToMerge[index++];
    while (!CanMergeCells(current, next))
    {
      Merge(next, ref index, listToMerge, true /* mergeOneOnly */);
    }
    MergeCells(current, next);
    if (mergeOneOnly)
    {
      return current.Value;
    }
  }
  return current.Value;
}

Note that, from the outside, this method is called with the mergeOneOnly parameter set to false, so it won't return before completing the whole merge. In contrast, when the merge method is called recursively (when the left and the right cells can't be merged because of their priorities), mergeOneOnly will be set to true because I want to return to where I was as soon as I complete an actual merge in the MergeCells method. Also note that the value returned from the Merge method is the actual result of the expression.

Splitting an Expression into a List of Cells

The first part of the algorithm splits an expression into a list of cells.
Mathematical operator precedence isn't taken into account in this step. First, the expression is split into a list of tokens. All tokens are separated by any mathematical operator or by an open or close parenthesis. The parentheses may, but don't have to, have an associated function; for example, "1- sin(1-2)" has an associated function, but "1- (1-2)" doesn't.

First, let's look at what happens when there are no functions or parentheses, just an expression containing real numbers and mathematical operators between them. In this case, I just create cells consisting of a real number and a consequent action. For example, splitting "3-2*4" leads to a list consisting of three cells: Cell(3, '-'), Cell(2, '*') and Cell(4, END_ARG). The last cell will always have the special END_ARG action, the end-of-expression character ')'. It can be changed to anything else, but in that case the corresponding opening parenthesis START_ARG, defined as '(', must be taken into account, as well.

If one of the tokens is a function or an expression in parentheses, the whole split-and-merge algorithm is applied to it using recursion. For example, if the expression is "(3-1)-1," the whole algorithm is applied to the expression in the parentheses first.

The function that performs the splitting is LoadAndCalculate, as shown in Figure 4.

Figure 4

public static double LoadAndCalculate(string data, ref int from, char to = END_LINE)
{
  if (from >= data.Length || data[from] == to)
  {
    throw new ArgumentException("Loaded invalid data: " + data);
  }
  List<Cell> listToMerge = new List<Cell>(16);
  StringBuilder item = new StringBuilder();
  do // Main processing cycle of the first part.
  {
    char ch = data[from++];
    if (StillCollecting(item.ToString(), ch, to))
    { // The char still belongs to the previous operand.
      item.Append(ch);
      if (from < data.Length && data[from] != to)
      {
        continue;
      }
    }
    // I am done getting the next token. The getValue() call below may
    // recursively call loadAndCalculate(). This will happen if extracted
    // item is a function or if the next item is starting with a START_ARG '('.
    ParserFunction func = new ParserFunction(data, ref from, item.ToString(), ch);
    double value = func.GetValue(data, ref from);
    char action = ValidAction(ch) ? ch : UpdateAction(data, ref from, ch, to);
    listToMerge.Add(new Cell(value, action));
    item.Clear();
  } while (from < data.Length && data[from] != to);

  if (from < data.Length && (data[from] == END_ARG || data[from] == to))
  { // This happens when called recursively: move one char forward.
    from++;
  }
  Cell baseCell = listToMerge[0];
  int index = 1;
  return Merge(baseCell, ref index, listToMerge);
}

The LoadAndCalculate method adds all of the cells to the listToMerge list and then calls the second part of the parsing algorithm, the merge function. The StringBuilder item will hold the current token, adding characters to it one by one as soon as they're read from the expression string.

The StillCollecting method checks if the characters for the current token are still being collected. This isn't the case if the current character is END_ARG or any other special "to" character (such as a comma if the parsing arguments are separated by a comma; I'll provide an example of this using the power function later). Also, the token isn't being collected anymore if the current character is a valid action or a START_ARG:

static bool StillCollecting(string item, char ch, char to)
{
  char stopCollecting = (to == END_ARG || to == END_LINE) ? END_ARG : to;
  return (item.Length == 0 && (ch == '-' || ch == END_ARG)) ||
        !(ValidAction(ch) || ch == START_ARG || ch == stopCollecting);
}

static bool ValidAction(char ch)
{
  return ch == '*' || ch == '/' || ch == '+' || ch == '-' || ch == '^';
}

I know that I'm done collecting the current token as soon as I get a mathematical operator described in the ValidAction method, or parentheses defined by the START_ARG or END_ARG constants.
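Both steps can be mirrored end to end in a short Python sketch for flat expressions (no functions or parentheses). This is an illustration of the splitting and merging rules described above, not a translation of the article's C# code; note how a leading '-' is kept as part of a negative number, mirroring the special case in StillCollecting:

```python
END_ARG = ')'
ACTIONS = set('*/+-^')
PRIORITY = {'^': 4, '*': 3, '/': 3, '+': 2, '-': 2, END_ARG: 0}
OPS = {'^': lambda a, b: a ** b, '*': lambda a, b: a * b,
       '/': lambda a, b: a / b, '+': lambda a, b: a + b,
       '-': lambda a, b: a - b}

def tokenize(expr):
    """Step 1: split '3-2*4' into cells [(3.0, '-'), (2.0, '*'), (4.0, ')')]."""
    cells, token = [], ''
    for ch in expr.replace(' ', ''):
        if ch in ACTIONS and token:   # an operator ends the current operand
            cells.append((float(token), ch))
            token = ''
        else:                         # still collecting (includes a leading '-')
            token += ch
    cells.append((float(token), END_ARG))  # the last cell gets END_ARG
    return cells

def merge(cells, value, action, i, merge_one_only=False):
    """Step 2: fold cells left to right, recursing one step whenever the
    right neighbour's action binds more tightly than the current one."""
    while i < len(cells):
        next_value, next_action = cells[i]
        i += 1
        while PRIORITY[action] < PRIORITY[next_action]:
            (next_value, next_action), i = merge(
                cells, next_value, next_action, i, merge_one_only=True)
        value, action = OPS[action](value, next_value), next_action
        if merge_one_only:
            return (value, action), i
    return (value, action), i

def calculate(expr):
    cells = tokenize(expr)
    (value, _), _ = merge(cells, cells[0][0], cells[0][1], 1)
    return value

print(calculate('3-2*4'))   # -5.0
```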
There’s also a special case involving a “-” token, which is used to denote a number starting with a negative sign. At the end of this splitting step, all of the subexpressions in parentheses and all of the function calls are eliminated via the recursive calls to the whole algorithm evaluation. But the resulting actions of these recursive calls will always have the END_ARG action, which won’t be correct in the global expression scope if the calculated expression isn’t at the end of the expression to be evaluated. This is why the action needs to be updated after each recursive call, like so: The code for the updateAction method is in Figure 5. static char UpdateAction(string item, ref int from, char ch, char to) { if (from >= item.Length || item[from] == END_ARG || item[from] == to) { return END_ARG; } int index = from; char res = ch; while (!ValidAction(res) && index < item.Length) { // Look to the next character in string until finding a valid action. res = item[index++]; } from = ValidAction(res) ? index : index > from ? index - 1 : from; return res; } The actual parsing of the extracted token will be in the following code of Figure 4: If the extracted token isn’t a function, this code will try to convert it to a double. Otherwise, an appropriate, previously registered function will be called, which can in turn recursively call the LoadAndCalculate method. User-Defined and Standard Functions I decided to implement the function factory using the virtual constructor idiom that was first published by James Coplien (see item 4 in “References”). In C#, this is often implemented using a factory method (see item 5 in “References”) that uses an extra factory class to produce the needed object. 
But Coplien’s older design pattern doesn’t need an extra factory façade class and instead just constructs a new object on the fly using the implementation member m_impl that’s derived from the same class: The special internal constructor initializes this member with the appropriate class. The actual class of the created implementation object m_impl depends on the input parameters, as shown in Figure 6. internal ParserFunction(string data, ref int from, string item, char ch) { if (item.Length == 0 && ch == Parser.START_ARG) { // There is no function, just an expression in parentheses. m_impl = s_idFunction; return; } if (m_functions.TryGetValue(item, out m_impl)) { // Function exists and is registered (e.g. pi, exp, etc.). return; } // Function not found, will try to parse this as a number. s_strtodFunction.Item = item; m_impl = s_strtodFunction; } A dictionary is used to hold all of the parser functions. This dictionary maps the string name of the function (such as “sin”) to the actual object implementing this function: Users of the parser can add as many functions as they wish by calling the following method on the base ParserFunction class: The GetValue method is called on the created ParserFunction, but the real work is done in the implementation function, which will override the evaluate method of the base class: The function implementation classes, deriving from the ParserFunction class, won’t be using the internal constructor in Figure 6. Instead, they’ll use the following constructor of the base class: Two special “standard” functions are used in the ParserFunction constructor in Figure 6: The first is the identity function; it will be called to parse any expression in parentheses that doesn’t have an associated function: The second function is a “catchall” that’s called when no function is found that corresponds to the last extracted token. This will happen when the extracted token is neither a real number nor an implemented function. 
In the latter case, an exception will be thrown. All other functions can be implemented by the user: the simplest is a pi function that just returns the constant, and a more typical implementation is an exp function that wraps Math.Exp.

Earlier I said I'd provide an example using a power function, which requires two arguments separated by a comma. This is how to write a function requiring multiple arguments separated by a custom separator: each argument is read with a recursive call to LoadAndCalculate, passing the separator as the "to" character. Any number of functions can be added to the algorithm from the user code.

Wrapping Up

The split-and-merge algorithm presented here has O(n) complexity if n is the number of characters in the expression string. This is so because each token will be read only once during the splitting step and then, in the worst case, there will be at most 2(m - 1) - 1 comparisons in the merging step, where m is the number of cells created in the first step. So the algorithm has the same time complexity as the Dijkstra algorithm (see item 3 in "References"). It might have a slight disadvantage compared to the Dijkstra algorithm because it uses recursion. On the other hand, I believe the split-and-merge algorithm is easier to implement, precisely because of the recursion, and easier also to extend with custom syntax, functions, and operators.

References

1. V. Kaplan, "Split and Merge Algorithm for Parsing Mathematical Expressions," CVu, 27-2, May 2015, bit.ly/1Jb470l
2. V. Kaplan, "Split and Merge Revisited," CVu, 27-3, July 2015, bit.ly/1UYHmE9
3. E. Dijkstra, Shunting-yard algorithm, bit.ly/1fEvvLI
4. J. Coplien, "Advanced C++ Programming Styles and Idioms" (p. 140), Addison-Wesley, 1992
5. E. Gamma, R. Helm, R. Johnson and J. Vlissides, "Design Patterns: Elements of Reusable Object-Oriented Software," Addison-Wesley Professional Computing Series, 1995
Vassili Kaplan is a former Microsoft Lync developer. He is passionate about programming in C# and C++. He currently lives in Zurich, Switzerland, and works as a freelancer for various banks. You can reach him at iLanguage.ch.

Thanks to the following Microsoft technical expert for reviewing this article: James McCaffrey
https://msdn.microsoft.com/en-gb/magazine/mt573716
I'm over halfway through this masterclass on React with Kent. This has been a pretty deep dive and I'm finding my day-to-day development massively enriched by what I've learnt so far. This time we're looking at patterns in React. I'm used to patterns from PHP and Python but I don't necessarily have a big awareness of them in React.

Some of these have names which are just used as reference and shouldn't be seen as the name of the pattern. Naming things helps people to talk about them but also can be used as weird gatekeeping mechanisms - "Oh, you've never heard of the blah-ba-blah pattern - such a newb!".

When we export a derivative dispatch function from our context controllers, we end up causing ourselves extra work. You might create your own increment and decrement functions which call dispatch. It becomes a major annoyance when you have a sequence of dispatch functions that need to be called. We might create helper functions and share them in the value object. Then we have to useCallback to wrap the functions so we can use the dependencies array properly.

const increment = React.useCallback(() => dispatch({ type: 'increment' }), [
  dispatch,
])
const decrement = React.useCallback(() => dispatch({ type: 'decrement' }), [
  dispatch,
])

An alternate approach is to export module-level functions that are imported at that level rather than the context level. These take the dispatch as an argument, which is passed from the context.

const increment = dispatch => dispatch({ type: 'increment' })
const decrement = dispatch => dispatch({ type: 'decrement' })

export { CounterProvider, useCounter, increment, decrement }

Then to use them, we get the dispatch like we normally do and then pass it to our module-level functions.
// src/screens/counter.js
import { useCounter, increment, decrement } from 'context/counter'

function Counter() {
  const [state, dispatch] = useCounter()
  return (
    <div>
      <div>Current Count: {state.count}</div>
      <button onClick={() => decrement(dispatch)}>-</button>
      <button onClick={() => increment(dispatch)}>+</button>
    </div>
  )
}

This way we don't have to memoize the functions, and we can pass any arguments without worrying about a dependency array. This is used a lot by the Reach UI library.

If you have components that themselves have nested components (think navbars and links, or forms and inputs) then you could initialise this component with a large and (potentially) unwieldy configuration object. A better way is to have the parent component use React.Children.map() to clone each of the children with React.cloneElement(child, { /* additional props */ }). For the example Kent used, each of the nested components needed the on state and the toggle function.

If you want to include DOM elements then you need to clone those without passing on the new props. In this case, you can test the child's type: if it is a string rather than a function, clone without the extra props; otherwise add the props. I've used this a bit in practice when adding a class to selected components in a navigation.

Using the cloneElement approach for these compound components means we can only impact immediate children. If we want to be more flexible, we can use a context provider for our component and pass through the required elements that way. When would you use the default value for createContext? It would be good to provide a more helpful error to our context helper function.
function useToggleContext() {
  const context = React.useContext(ToggleContext)
  if (!context) {
    throw new Error(
      'Toggle compound components must be used within the Toggle component'
    )
  }
  return context
}

Rather than being specific about the exact elements you want to provide, or making bespoke custom elements that need to be used, you can instead use a prop collection to group together the relevant props. This way, if the user wants to swap out the type of the underlying components, they are free to do so as long as they use the props that you provide. If you use a getProps function, then you can spread any extra props that the user wants to pass through as well.

function useToggle() {
  const [on, setOn] = React.useState(false)
  const toggle = () => setOn(!on)
  const getTogglerProps = ({ onClick, ...props }) => {
    return {
      'aria-pressed': on,
      onClick: () => {
        toggle()
        onClick?.()
      },
      ...props,
    }
  }
  return { on, toggle, getTogglerProps }
}

That way you can deal with extra props:

<button
  {...getTogglerProps({
    'aria-label': 'custom-button',
    onClick: () => console.info('onButtonClick'),
    id: 'custom-button-id',
  })}
>

This is used a lot in libraries like react-table and downshift.

It may be that you want to allow the user to completely override the behaviours of the component you are sharing. In this case, you can allow a custom reducer to be passed into our component:

function useToggle({ initialOn = false, reducer = toggleReducer } = {}) {}

When you call it, you can pass in your own reducer:

const { on, getTogglerProps, getResetterProps } = useToggle({
  reducer: toggleReducer,
})

Or your user could just override the behaviours they want to:

function toggleStateReducer(state, action) {
  if (action.type === useToggle.types.toggle && timesClicked >= 4) {
    return { on: state.on }
  }
  return useToggle.reducer(state, action)
}

It's good to have the action types in an object to reduce the chance of typos.
Kent's latest thinking on this:

Sometimes, people want to be able to manage the internal state of our component from the outside. The state reducer allows them to manage what state changes are made when a state change happens, but sometimes people may want to make state changes themselves. We can allow them to do this with a feature called "Control Props." This is pretty much the same as controlled input elements that you've probably done a million times in React.
https://www.kevincunningham.co.uk/posts/kcd-advanced-patterns/
KS0475 Keyestudio 8*8 Dot Matrix Module 1088AS For Smart Car (Black and Eco-friendly)

Description

Generally, we need 16 digital ports in total to drive an 8*8 dot matrix from an MCU, which wastes a great deal of the MCU's resources. Instead, we can use an HT16K33 chip to drive the 8*8 dot matrix and control it via the MCU's I2C communication port, which greatly saves MCU resources. The interface of the module is led out on a 4-pin header with a 2.54mm pitch. We can connect the module to the control board with DuPont cables to communicate. In use, we can fix the dot matrix screen on the smart car as a status display.

Technical Parameters

- Interface: 4-pin, 2.54mm pitch pin header
- Working voltage: DC 4.5V-5.5V
- Communication port: I2C
- Control chip: HT16K33

Connection Diagram

Test Code

#include <Wire.h>
#include "Adafruit_LEDBackpack.h"
#include "Adafruit_GFX.h"

Adafruit_LEDBackpack matrix = Adafruit_LEDBackpack();

void setup()
{
  Serial.begin(9600);
  Serial.println("HT16K33 test");
  matrix.begin(0x70);  // pass in the address
}

void loop()
{
  /////////smile///////////////
  matrix.displaybuffer[0] = B00000011;
  matrix.displaybuffer[1] = B10000000;
  matrix.displaybuffer[2] = B00010011;
  matrix.displaybuffer[3] = B00100000;
  matrix.displaybuffer[4] = B00100000;
  matrix.displaybuffer[5] = B00010011;
  matrix.displaybuffer[6] = B10000000;
  matrix.displaybuffer[7] = B00000011;
  matrix.writeDisplay();
}

Note:

1. In the experiment, we control the dot matrix with code such as matrix.displaybuffer[0] = B00000011; (connect the dot matrix as shown). In matrix.displaybuffer[0], the index stands for the column: 0 is the first column, 1 is the second column, and so on. B00000011 stands for the 8 LEDs in that column being on or off: 0 means off, 1 means on.
Setting matrix.displaybuffer[0] = B00000011, which represents the first column, turns the LEDs on rows 1, 8, 7, 6, 5, and 4 off, and lights the LEDs on rows 3 and 2.

2. The library files need to be installed for the code to compile; that is, put the Adafruit_GFX and Adafruit_LED_Backpack_Library_master folders into \Arduino\libraries under the compiler installation directory. After installing them, you need to restart the compiler, otherwise the sketch will not compile. For example, on my machine: C:\Program Files\Arduino\libraries

Test Result

Wire up according to the connection diagram and upload the code. After power-on, the 8*8 dot matrix displays a smiley pattern, as shown below.

Resources

Libraries and Test Code
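The bit-to-row mapping implied by the example above can be checked with a tiny Python sketch. The mapping here is an assumption derived only from the wiki's own example (the most significant bit drives row 1 and bits 0-6 drive rows 2-8):

```python
def rows_on(column_byte):
    """Return the rows lit by one displaybuffer byte, per the assumed mapping."""
    rows = [1] if column_byte & 0x80 else []   # bit 7 -> row 1
    for bit in range(7):                       # bits 0..6 -> rows 2..8
        if column_byte & (1 << bit):
            rows.append(bit + 2)
    return sorted(rows)

# The wiki's example: B00000011 lights rows 2 and 3 of a column.
print(rows_on(0b00000011))   # [2, 3]
```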
https://wiki.keyestudio.com/index.php?title=KS0475_Keyestudio_8*8_Dot_Matrix_Module_1088AS_For_Smart_Car_(Black_and_Eco-friendly)&diff=26025&oldid=25294
hana-ui

🌻 A React UIKit with fresh nijigen style.

Homepage: hana-ui.moe. Chinese homepage: hana-ui.moe/cn.

Guide

hana-ui is a complete UI component library with almost all the base components you may need in development work, and it is also easy to use in your page.

Install

npm install hana-ui
// or
yarn add hana-ui

Usage

Configuration

Before using hana-ui, you should import the default style configuration to ensure hana can build the application successfully:

import 'hana-ui/hana-style.scss';

To allow users to use their own themes and import individual components, please add loaders for scss files in your webpack configuration, and ensure that node_modules is not added to the rule's exclude.

Note: hana-ui also supports TypeScript with d.ts files, and we recommend you use it!

Import hana-ui in your page

Example:

import React from 'react';
import {Button} from 'hana-ui';
// or just import Button
import {Button} from 'hana-ui/dist/seeds/Button';

export default () => (
  <Button size={'middle'}>
    Touch me...
  </Button>
);

For more component usage, please check out the Documents page.

Custom Theme

hana allows you to use your own themes via sass-resources-loader: you can pass a scss file that includes the theme configuration to use it:

{
  test: /\.(css|sass|scss)$/,
  use: [
    ......
    {
      loader: 'sass-loader'
    },
    {
      loader: 'sass-resources-loader',
      options: {
        resources: './themes/himawari.scss'
      }
    }
  ],
  exclude: /node_modules/
},

You can check himawari.scss as a template for the configuration.

Contribution

Change the world with hana. You could contribute to hana-ui in many ways; hana needs your help.

Tell us

1. Open the project hana-ui on GitHub.
2. Submit your issue with a detailed description, code and error stack.
3. We will have a discussion and find the way to fix the bugs.
4. Bugs are fixed.

Fix it yourself

1. Open the project hana-ui on GitHub.
2. Fork it and fix the bugs in a new branch.
3. Open a pull request with a detailed description, such as information and scope of influence...
4. We will review the changes and merge them into the master branch.

Scripts

The following npm scripts may help you while developing.

1. Develop: npm run dev
   Then open port 8000 and preview the demo.
2. Prebuilt: npm run build
   Compile the source code to the dist folder as ES5.

License

hana-ui is an open-source project with the MIT license by hana-group. Welcome to join us!
https://nicedoc.io/hana-group/hana-ui
Hi, I need to display a very long text (about 8k characters). My idea was to put a word-wrapping label in a scroll view. On iOS, everything works fine. On Android, the label is cropped after about 5000 characters. If I debug the label's Text property, the text is complete, but on the screen it's cut off after that limit. I have tried to set the label height manually to ensure it's not a size-calculation problem, but even if the height is larger, the text is not shown. Is this a bug or a known limitation of the label on Android? The only solution I found so far is to split the text in two labels, but that's just a workaround.

[EDIT] After some additional tests, it seems the problem is related to label height. If I use a smaller font size, I can view more text, but the height of the label is the same. What is strange is that if I set a bigger HeightRequest, the text does not fill the additional space, as if the label doesn't draw itself below a given height. The problem is the same whether I use Text or FormattedText. [/EDIT]

Jacques

@JacquesBersier would it be possible to replace the label with an Editor view with the IsEnabled property set to false? I haven't tried it so it's just a thought.

Thanks for your feedback. I tried to use an editor (as well as a webview). It doesn't cut the content and displays the whole text, but it comes with other drawbacks, like the text color that changes to gray when disabled, the font that can't be changed, the height that is not resized automatically, or Android showing the entry box below the text. Using an ExtendedEntry helps on some points, but a label seems much better if I can find a way to fix it.

Is it paragraphed? Do you know how many characters you have on a line? I would just take the 'hacky' approach and implement line wrapping yourself, and after 10 / 100 / 1000 lines or chars move on to a new LabelView, but I tend to just 'make stuff work' sometimes. There's probably a more efficient way of doing this.
You could split on every double \n\n or \r\n\r\n (Windows formatted), knowing it's a new paragraph. Or make paragraphs based on: after 500 chars, find the next period and split to the next line.

Ryan

Hi Ryan. Thanks for the suggestions. Yes, the content is paragraphed. But by looking at the Label source, I found the problem: in the procedure UpdateLineBreakMode, Xamarin is setting max lines to 100. So I just created a renderer that uses a bigger value. That fixed the problem. My renderer looks like this (if it may help others):

[assembly: ExportRenderer(typeof(LongLabel), typeof(LongLabelRenderer))]
namespace MyProject.Android
{
    public class LongLabelRenderer : LabelRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Label> e)
        {
            base.OnElementChanged(e);
            Control.SetMaxLines(1000);
        }
    }
}

+1 to request a more efficient/scrollable Editor/Label view that can handle long text strings!

Adding here my iOS version:

@JacquesBersier, Thanks! It is a noob question, but what is LongLabel in typeof(LongLabel)? Should I define it somewhere?

@DanielAbrahamberg LongLabel is probably a custom class they defined. You can just replace that with Label and it should work.

android:maxLines="3" is the property which you can set through the axml file.

@JacquesBersier you just saved me a ton of time!! I was facing the same issue and doing what you suggested worked flawlessly. Thank you!

@JacquesBersier @IdoTene Thanks for posting this, this tripped me up while doing a 3rd-party licence section of an app about screen.

This is reported as a bug; right now the workaround, like @IdoTene says, is a custom renderer.

Anyone have issues with long labels not displaying at all on iOS regardless of label.lines being set?

@IdoTene
https://forums.xamarin.com/discussion/26611/label-display-problem
The Softer Side of Software Development
2018-08-30T22:07:23.871-04:00

Software development is often seen as a "man vs machine" endeavour. In reality, it's still people who do the work, a work that often has a clear artistic component. It is work we do with others, for others. And here are some of my thoughts on that.

Nancy Deschênes

<b>Conway's Game of Life in SQL, Part 2</b>

I showed <a href="" target="_blank">my implementation</a> of <a href="https://en.wikipedia.org/wiki/Conway's_Game_of_Life" target="_blank">Conway's Game of Life</a> in SQL to a few people, and a co-worker pondered if it could be done without the stored procedure. I gave it some thought, and came up with an implementation that relies on views. I used views for the simple reason that they made things easier to see and understand. If a query can be written using views, then it can be written as straight SQL, but the results are generally uglier, messier, and can often be unreadable. Since I'm using MySQL, I realize how bad views can be when it comes to performance, but since this is a proof of concept, and a toy anyway, I'm not too worried. I honestly doubt any real-life application would ever require me to implement the Game of Life in SQL!<br /><div><br /></div><div>I started as I had earlier, by defining the table as holding all the live cells for each generation.</div><pre class="brush:sql">create table cell (<br /> generation int,<br /> x int, <br /> y int,<br /> primary key (generation, x, y));<br /></pre><div>Then, I created a view to represent all the candidate positions on the grid where we might find live cells in the next generation.
We can only find cells where there are cells for the current generation, and in all 8 neighbour positions of those cells:</div><pre class="brush:sql">create view candidate (generation, x, y) as <br /> select generation, x-1, y-1 from cell union<br /> select generation, x, y-1 from cell union<br /> select generation, x+1, y-1 from cell union<br /> select generation, x-1, y from cell union<br /> select generation, x, y from cell union<br /> select generation, x+1, y from cell union<br /> select generation, x-1, y+1 from cell union<br /> select generation, x, y+1 from cell union<br /> select generation, x+1, y+1 from cell;<br /></pre><div>Next, I counted the number of live cells in the 3x3 square around each candidate. This isn't quite the same as counting the neighbours, since it includes the current cell in the count. We will need to take that into account in a bit.</div><pre class="brush: sql">create view neighbour_count (generation, x, y, count) as <br /> select generation, x, y, <br /> (select count(*) <br /> from cell <br /> where cell.x in (candidate.x-1, candidate.x, candidate.x+1)<br /> and cell.y in (candidate.y-1, candidate.y, candidate.y+1)<br /> and not (cell.x = candidate.x and cell.y = candidate.y)<br /> and cell.generation = candidate.generation)<br /> from candidate;</pre><div>From the previous post, the rules of the game can be reduced to: a cell is alive in the next generation if<br /><br /><ol><li>it has 3 neighbours</li><li>it has 2 neighbours and was alive in the current generation</li></ol><div>Let's write this as a table:</div>
<table><thead><tr><th style="border-right: 1px black solid; padding: 0px 14px;">Alive now</th><th style="border-right: 1px black solid; padding: 0px 14px;">Neighbours</th><th style="padding: 0px 14px;">Alive in the next generation</th></tr></thead><tbody>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">yes</td><td style="border-right: 1px black solid; padding: 0 14px;">less than 2</td><td style="padding: 0 14px;">no</td></tr>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">yes</td><td style="border-right: 1px black solid; padding: 0 14px;">2</td><td style="padding: 0 14px;">YES</td></tr>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">yes</td><td style="border-right: 1px black solid; padding: 0 14px;">3</td><td style="padding: 0 14px;">YES</td></tr>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">yes</td><td style="border-right: 1px black solid; padding: 0 14px;">more than 3</td><td style="padding: 0 14px;">no</td></tr>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">no</td><td style="border-right: 1px black solid; padding: 0 14px;">3</td><td style="padding: 0 14px;">YES</td></tr>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">no</td><td style="border-right: 1px black solid; padding: 0 14px;">anything else</td><td style="padding: 0 14px;">no</td></tr>
</tbody></table>
<div>If we count the cell in the neighbour count, we change our table to:</div>
<table><thead><tr><th style="border-right: 1px black solid; padding: 0px 14px;">Alive now</th><th style="border-right: 1px black solid; padding: 0px 14px;"># cells in 3x3 square</th><th style="padding: 0px 14px;">Alive in the next generation</th></tr></thead><tbody>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">yes</td><td style="border-right: 1px black solid; padding: 0 14px;">less than 3</td><td style="padding: 0 14px;">no</td></tr>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">yes</td><td style="border-right: 1px black solid; padding: 0 14px;">3</td><td style="padding: 0 14px;">YES</td></tr>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">yes</td><td style="border-right: 1px black solid; padding: 0 14px;">4</td><td style="padding: 0 14px;">YES</td></tr>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">yes</td><td style="border-right: 1px black solid; padding: 0 14px;">more than 4</td><td style="padding: 0 14px;">no</td></tr>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">no</td><td style="border-right: 1px black solid; padding: 0 14px;">3</td><td style="padding: 0 14px;">YES</td></tr>
<tr><td style="border-right: 1px black solid; padding: 0 14px;">no</td><td style="border-right: 1px black solid; padding: 0 14px;">anything else</td><td style="padding: 0 14px;">no</td></tr>
</tbody></table>
<div>A cell will be alive in the next generation if it's alive and the number of live cells in the square is 3 or 4, or if the cell is dead and there are 3 cells in the square. Let's rewrite the rules... again.<br /><br />A cell will be alive in the next generation if<br /><ol><li>The 3x3 square has exactly 3 cells</li><li>The 3x3 square has exactly 4 cells and the current cell is alive.</li></ol>So, we'll include a cell in the next generation according to these rules. We can tell if a cell is alive by doing a left outer join on the cell table:</div><pre class="brush:sql">insert into cell (<br /> select neighbour_count.generation+1, neighbour_count.x, neighbour_count.y <br /> from neighbour_count <br /> left outer join cell <br /> on neighbour_count.generation = cell.generation<br /> and neighbour_count.x = cell.x<br /> and neighbour_count.y = cell.y<br /> where neighbour_count.generation = (select max(generation) from cell)<br /> and (neighbour_count.count = 3 <br /> or (neighbour_count.count = 4 and cell.x is not null)));<br /></pre><div>Repeat the insert statement for each following generation.</div></div>

Nancy Deschênes

<b>Code Retreat</b>

On December 8th, I attended the Montréal edition of the 2012 Code Retreat. It was organized by Mathieu Bérubé, to whom I'm very thankful.
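As an aside to the SQL version above: the generation step it implements is easy to sanity-check outside the database. The Python sketch below is not from the original post; it is just a reference model of the same rule, with the live cells held as a set of (x, y) pairs. A cell is alive in the next generation when its 3x3 square contains exactly 3 live cells, or exactly 4 with the cell itself alive.

```python
def step(cells):
    """One Game of Life generation, using the 3x3-square counting rule."""
    # Candidates are the live cells plus all their neighbours: the only
    # places where a live cell can appear in the next generation.
    candidates = {(x + dx, y + dy) for (x, y) in cells
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

    def square_count(x, y):
        # Number of live cells in the 3x3 square centred on (x, y),
        # including (x, y) itself (this mirrors the neighbour_count view).
        return sum((x + dx, y + dy) in cells
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1))

    return {(x, y) for (x, y) in candidates
            if square_count(x, y) == 3
            or (square_count(x, y) == 4 and (x, y) in cells)}
```

A blinker (three cells in a row) flips between horizontal and vertical every generation, which makes a convenient smoke test for both this sketch and the SQL views.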
He asked the attendees for comments on the event, so here I go.<br /><div><br /></div><div>The format of the code retreat is that during the day, there are six coding sessions, using pair programming. In each session, we try to implement Conway's Game of Life, using TDD principles. We can use any language or framework we want. Sometimes, only one of the partners knows the language used, and the other learns as we go. For some of the sessions, the leader also adds a challenge or a new guideline. After the session, we delete the code, and we all get together to discuss the session and the challenge. Mathieu had some questions for us to guide the discussion and to make us think.</div><div><br /></div><div>The goal of the event, however, is not to implement the game of life, but to learn something in the process. I think we all learned a lot, technically. Some of us learned new languages, some learned new ways to approach the problem. I learned that the Game of Life is easier to solve in functional or functional-type languages, rather than with object oriented or procedural languages. I learned the problem cannot be solved by storing an infinite 2D array. I learned that MySQL has special optimisation for "IN" that it doesn't have for "BETWEEN". I learned a smattering of Haskell.<br /><br />But that's the sort of thing one can learn in a book. What else did I learn, that's less likely to be in technology books? In all fairness, some of the following I already knew, and it is covered in management or psychology books, but those books tend to be less popular with developers.<br /><h4>The clock is an evil but useful master</h4></div><div>By having only 45 minutes to complete the task, we can't afford to explore various algorithms and data structures to solve the problem - as soon as we see something that looks promising, we grab it, and run with it, until we notice a problem and try a different path.
In some cases, we even started coding without knowing where we were headed. I would normally think of this as a problem, because creativity is a good thing, and stifling it must be bad. However, having too much time usually means I'll over-design, think of all possible ways, but in the end, I may still rush through the implementation. Moreover, I assume that I'll have thought about all the possible snags that can come along, but some problems only rear their ugly heads after some code is written and the next step proves impossible. Sometimes, it's better to start moving down the wrong path, than to just stay in one spot and do nothing at all.</div><div><br /></div><div>This is somewhat related the the Lean Startup methodology - test your assumptions before you make too many.</div><div><br /></div><div>At one point, I asked the organizer to put up, on the projector, how much time remained in the session. He pointed out that it was a bad idea, because when we look at the clock, we're more likely to cut corners just to get something working, rather than focus on the quality of the code. This was a hard habit to ignore!</div><h4>Let go, and smell the flowers</h4><div>Since my first partner and I solved the problem in the first session, I was really eager to solve it the second (and third, ...) time around, in part to prove that he didn't do all the work, and I was just there to fix the syntax. But from the second session on, Mathieu gave us particular challenges to include in our coding, from "think about change-proofing your solution" to "naive ping-pong (one partner writes the test, the other does the least amount of work possible to pass the test)". As it turns out, it was really hard to completely implement the solution, writing the tests first, and factoring in the challenge. Something had to give. It was really hard to let go "complete the solution" in favor of TDD and the challenges. 
Recognizing that this is what I was doing certainly helped me let go, but the drive to finish the job kept nagging me. So much so that when I got home, I just <i>HAD</i> to implement the Game of Life in Scala, because I was so upset I didn't finish during the event.</div><div><br /></div><div>Another aspect of this is how I interacted with my partners. When I thought I had the solution and they didn't, I just pushed on with my idea, so that we'd get the job done as fast as possible. Rushing means I didn't listen as well as I could have to what the other person had to say. You can't speak and listen at the same time. The point of Code Retreat is not to write the Game of Life, that's been done thousands of times. The point was to play with writing code, try different things, and learn something in the process.</div><h4>The tools make a difference on productivity</h4><div>Session after session, it became clear that for this particular problem, a functional approach makes more sense than a procedural or object-oriented one. While we can use functional programming concepts in most languages, functional languages make the task much simpler. Even though we struggled setting up the test environment for Scala, we got further than with procedural language, because once started, we just added a test, added an implementation - everything seemed to work almost on the first try.</div><div><br /></div><div>Another tool that affected productivity is the editor. I'm not advocating for any particular tool, but in all the sessions, we used one participant's set up. Whoever's setup that was would edit so much faster. When using someone else's computer, I quickly got frustrated because I would type some editor command, only to realize that it's the wrong way to do what I want on <i>that</i> editor, or I had to slow down to avoid such mistakes. This happened even though I knew how to use the other person's editor. 
It may have worked better if we had used something like <a href="" target="_blank">Dropbox</a> so we could each use our own computer, but given that IDEs tend to organize the source/project differently, it may not have worked either.</div><h4>I change depending on the personality of my partner</h4><div>Each team I was on had a different dynamic. This was due in part to the challenge proposed - ping-pong tends to do that - and in part to the personality of my partner. It is likely that they also took cues from my personality as well, particularly since I was generally outspoken between the sessions, so I did not necessarily get to know them well in the process. I tend to avoid being the leader of a group, but I'm also impatient. At the beginning of a session, if my partner didn't suggest something immediately - an approach, a language, etc. - I'd propose something. This means that more often than not, I imposed my will on the other. I should have listened more to my partners, and I might have, if I hadn't been so intent on finishing the problem in the 45 minutes!</div><h4>Deleting the code</h4><div>One of the instructions in the Code Retreat is that after each session, you delete the code. This was hardest to do, the closer we got to finishing the solution. It was particularly difficult in the session when we wrote in Scala, because after the initial fumbling with the test environment, the progress was steady and fast. If <i>only</i> I had another 10 minutes, I'm <i>positive</i> we could finish! Surprisingly, the working code in MySQL was very easy to delete, possibly because it felt complete, done, over with.</div><div><br /></div><div>All in all, it was a useful, interesting and fun experience. I highly recommend it to any programmer!</div><div><br /></div>

Nancy Deschênes

<b>Conway's Game of Life in... SQL?</b>

Yesterday, I attended the <a href="">Code Retreat</a> hosted at Notman House and organized by Mathieu Bérubé.
It was a most enjoyable experience. On the first session, I paired with Christian Lavoie, and we tried to implement <a href="https://en.wikipedia.org/wiki/Conway's_Game_of_Life">Conway's Game of Life</a> using... MySQL. Some people thought this was crazy, and I can't disagree. At the same time, we were able to complete the solution in under 45 minutes, no small feat. <br /><br />Part of the challenge was deciding how to represent the board. We store only the live cells for each generation; on each iteration, we scan the rectangle that bounds the live cells, expanded by one cell in every direction, since that's the only place where new cells can appear.<br /><br />Then, for each cell within that rectangle, we decide if it should be alive in the next generation. We simply ignore cells that die, by not adding them to the next generation. The 4 rules of the game can be rewritten as two simpler rules:<br /><br /><ol><li>If the current cell is alive, it stays alive if it has exactly 2 or 3 neighbours who are also alive</li><li>If the current cell is dead, it becomes alive if it has exactly 3 neighbours.</li></ol><div>Using a little logic processing, we can further rewrite this as the following 2 conditions to be alive in the next generation:</div><div><ol><li>If a cell has 3 neighbours</li><li>If a cell is alive and has 2 neighbours</li></ol><div>So, we only add the cell to the next generation (insert it in the table) if it fits either of those rules.</div></div><div><br /></div><div>Here is code close to what we came up with, with some modifications that came to me after the fact.</div><br /><br /><pre class="brush: sql">drop table if exists cells;<br />create table cells (<br /> generation int,<br /> x int,<br /> y int,<br /> primary key (generation, x, y));<br /><br />drop procedure if exists iterate_generation;<br />delimiter $$<br />create procedure iterate_generation ()<br /> begin<br /> declare current_gen int;<br /> declare current_x int;<br /> declare current_y int;<br /> declare min_x int;<br /> declare max_x int;<br /> declare min_y int;<br /> declare max_y int;<br /> declare live int;<br /> declare neighbour_count int;<br /><br /> select max(generation) into current_gen from cells;<br /> select min(x)-1, 
min(y)-1, max(x)+1, max(y)+1<br /> into min_x, min_y, max_x, max_y from cells<br /> where generation = current_gen;<br /><br /><br /> set current_x := min_x;<br /> while current_x <= max_x do<br /> set current_y := min_y;<br /> while current_y <= max_y do<br /> select count(1) into live from cells<br /> where generation = current_gen<br /> and x = current_x<br /> and y = current_y;<br /> select count(1) - live into neighbour_count from cells<br /> where generation = current_gen<br /> and x in (current_x - 1, current_x, current_x + 1)<br /> and y in (current_y - 1, current_y, current_y + 1);<br /><br /><br /> if neighbour_count = 3 || live = 1 and neighbour_count = 2<br /> then<br /> insert into cells (generation, x, y)<br /> values (current_gen+1, current_x, current_y);<br /> end if;<br /> set current_y := current_y + 1;<br /> end while;<br /> set current_x := current_x + 1;<br /> end while;<br />end<br />$$<br />delimiter ;<br /></pre><br />Crazy? well, yes. And yet, surprisingly easy. And absolutely a whole lot of fun.

Nancy Deschênes

<b>…, and seemingly dirty command objects</b>

<div>I love Grails, but sometimes, the magic is a little too much.</div><br /><div>Here is the simplest form of the problem I have been able to put together.</div><br /><div>The controller:</div><pre class="brush: java">def debug = { DebugCommand d -><br />render new JSON(d)<br />}</pre><br /><div>The command objects: I have nested commands, with DebugCommand being the outer command (used by the controller) and DebugMapCommand, a map holding some values. 
I'm using a LazyMap since that's what I used in my real-life problem.</div><br /><pre class="brush: java">public class DebugCommand {<br /> int someNum<br /> DebugMapCommand debugMapCommand = new DebugMapCommand()<br />}<br /></pre><br /><pre class="brush: java">@Validateable<br />public class DebugMapCommand {<br /> Map things = MapUtils.lazyMap([:], FactoryUtils.instantiateFactory(String))<br />}<br /></pre><br /><div>What happens here is that multiple calls to the controller/action result in data being accumulated in the LazyMap between calls:</div><br /><pre class="brush: jscript">nancyd $ curl '\&debugMapCommand.things\[2\]=5'<br />{"class":"com.example.DebugCommand",<br /> "debugMapCommand":{"class":"com.example.DebugMapCommand",<br /> "things":{"2":"5"}},<br /> "someNum":5}<br /><br />nancyd $ curl '\&debugMapCommand.things\[1\]=5'<br />{"class":"com.example.DebugCommand",<br /> "debugMapCommand":{"class":"com.example.DebugMapCommand",<br /> "things":{"2":"5","1":"5"}},<br /> "someNum":5}<br /><br />nancyd $ curl '\&debugMapCommand.things\[elephants\]=5'<br />{"class":"com.example.DebugCommand",<br /> "debugMapCommand":{"class":"com.example.DebugMapCommand",<br /> "things":{"2":"5","1":"5","elephants":"5"}},<br /> "someNum":5}<br /></pre><br /><div>If I gave a new value for a map entry, the new value was used.</div><br /><div>So, what's happening? The DebugCommand's reference to the DebugMapCommand is called debugMapCommand, and Grails thinks I want a DebugMapCommand injected, so it created a singleton, and passed it to all my DebugCommand instances. Oops.</div><br /><div>Trying to prove this wasn't too easy. 
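The shared-singleton surprise described above has a close cousin in other languages. The sketch below is Python, not Grails code; it is only an analogy of the same trap, where one mutable object is created once and then silently shared by every call, so state from one request leaks into the next.

```python
def handle(key, value, things={}):
    # The default dict is created ONCE, when the function is defined, and
    # then shared by every call: the analogue of Grails handing the same
    # DebugMapCommand singleton to every DebugCommand instance.
    things[key] = value
    return dict(things)

def handle_fixed(key, value, things=None):
    # The usual fix: build fresh state inside each call.
    if things is None:
        things = {}
    things[key] = value
    return dict(things)
```

Just like the repeated curl calls above, each call to handle sees the leftovers of the previous ones, while handle_fixed starts clean every time.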
It would seem that a number of factors are necessary for this particular issue to manifest:</div><br /><ol><li>The field name must be the same as the class name with the first letter lowercased</li><li>The inner/sub command, DebugMapCommand, must be annotated with @Validateable</li><li>The inner/sub command must be in a package that is searched for "validateable" classes (in Config.groovy, you have to have <span class="Apple-style-span" style="font-family: Monaco; font-size: 11px;">grails.validateable.packages = ['com.example.yourpackage', ...]</span>)</li></ol><br /><div>So, what's the lesson here?</div><br /><div>Don't name your fields after their class.</div>

Nancy Deschênes

<b>Why is reading code so hard?</b>

We've all been there. Faced with lines and lines of code, written by an intern, a programmer long gone, or a vendor with whom you do not have a support contract.
We lose track</b><br /><br /<br /><br /.<br /><br /><br /><b>What can we do about it?</b><br /><ul><li>Try to find out what the programmer wanted to do</li><li>Then see if the code does it</li><li>Keep an open mind - don't let your prejudice get in the way</li><li>Learn new (to you) techniques and patterns</li><li>When you code, try to see more than one way to solve the problem; maybe next time you try to read someone's code, they'll have used your second or third choice, and you'll recognize it more easily</li><li.</li></ul><div><br /></div><div>Why do you think reading code is so hard? what helps you?</div><div><br /></div><img src="" height="1" width="1" alt=""/>Nancy Deschênes building blocks part 2: imports and packagesIf you're used to Java, Scala imports and packages look both familiar and all wrong and weird.<br /><br /><b>Imports</b><br /><br />The first thing that you may notice in some code you're reading is the use of <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">_</span> in imports. The best way I've found to deal with <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">_</span> is to replace it in my head with "whatever". 
It works in imports, but it also works in other cases where you encounter <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">_</span>.<br /><br />So, when you see<br /><br /><pre class="brush: java">import com.nancydeschenes.mosaique.model._<br /></pre><br />it means "import whatever you find under <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">com.nancydeschenes.mosaique.model</span>"<br /><br />You can lump together multiple imports from the same package:<br /><br /><pre class="brush: java">import scala.xml.{ NodeSeq, Group }<br /></pre><br /><br />Imports are relative:<br /><br /><pre class="brush: java">import net.liftweb._<br />import http._<br /></pre><br />imports whatever's needed in net.liftweb, and whatever's needed from net.liftweb.http.<br /><br />But what if I have a package called http, and don't want net.liftweb.http? use the _root_ package:<br /><br /><pre class="brush: java">import net.liftweb._<br />import _root_.http._<br /></pre><br /><b>Packages</b><br /><br /.<br /><br />You can use the same package statement you would with Java:<br /><br /><pre class="brush: java">package com.nancydeschenes.mosaique.snippet<br /></pre><br />Or, you can use the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">package</span> block:<br /><br /><pre class="brush: java">package com.nancydeschenes.mosaique {<br /> // Some stuff<br /><br /> package snippet {<br /> // stuff that ends up in com.nancydeschenes.mosaique.snippet<br /> }<br />}<br /></pre><img src="" height="1" width="1" alt=""/>Nancy Deschênes building blocksWhen I first started reading about Scala, I saw a lot of very interesting looking code, but I felt lost. It looked as if I was trying to learn a foreign language with a phrasebook. 
I saw a number of examples, but I felt I was missing the building blocks of the language.<br /><br />I.<br /><br /><br /><b><span class="Apple-style-span" style="font-size: large;">Classes, traits, objects, companions</span></b><br /><br />Everything in Scala is an object, and every object is of a particular class. Just like Java (if you ignore the pesky primitives). So, generally speaking, classes work the way you expect them. Except that there's no "static". No static method, no static values. <br /><br />Traits are like interfaces, but they can include fields and method implementations. <br /><br />Objects can be declared and are then known throughout the program, in a way that's reminiscent of dependency injection, particularly the Grails flavour. Object declarations can even include implementation, and new fields and methods.<br /><br /.<br /><br />So you may encounter code such as<br /><br /><pre class="brush: java">class Car extends Vehicle with PassengerAble {<br /> val numWheels = 4<br /> val numPassengerSeats = 6<br />}<br /><br />object Car {<br /> def findByLicensePlate(String plate, String emitter) : Car = {<br /> Authorities.findEmittingAuthority(emitter).findByLicensePlate(plate);<br /> }<br />}<br /><br />object MyOwnCar extends Car with PassengerSideAirbags, RemoteStarter {<br /> val numChildSeats = 2;<br /> def honk : Unit = {<br /> HonkSounds.dixie.play;<br /> }<br />}<br /></pre><br />In that example, the first <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">Car</span> is a class. The second <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">Car</span> is an object. 
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">MyOwnCar</span> is an object that can be addressed anywhere (same package rules apply as java), but <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">MyOwnCar</span> has extra stuff in it: <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">PassengerSideAirbags</span> and <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">RemoteStarter</span> are trait (you can guess that because of the <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">with</span> keyword). It even defines a new method so that honking it <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">MyOwnCar</span> should remind you of the <i>Dukes of Hazzard</i>.<br /><br /><br /><b><span class="Apple-style-span" style="font-size: large;">Types</span></b><br /><br />Unlike Java, in Scala, everything is an object. There is no such thing as a primitive.<br /><br /><br /><b>Basic types</b><br /><br />At the top of the object hierarchy, we have <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">Any</span>. Every object, either what you think of an object in a Java sense, or the types that are primitives in Java, every thing inherits from <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">Any</span>.<br /><br />The hierarchy then splits into two: <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyVal</span> and <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyRef</span>. 
Primitive-like types are under <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyVal</span> and <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyRef</span> is essentially the equivalent of <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">java.lang.Object</span>. All Java and Scala classes that you define will be under <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyRef</span>.<br /><br />Strings are just like Java Strings. Double-quoted. But Scala defines a few additional methods on them, and treats them as collections, too, so your String is also a list of characters. A particularly convenient method, I've found, is <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">toInt.</span><span class="Apple-style-span" style="font-family: inherit;"> There's </span><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">toDouble</span><span class="Apple-style-span" style="font-family: inherit;"> and </span><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">toBoolean</span><span class="Apple-style-span" style="font-family: inherit;"> too.</span><br /><br /><span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">Unit</span><span class="Apple-style-span" style="font-family: inherit;"> is what a method returns when it doesn't return anything. You can think of it as "void".</span><br /><br /><span class="Apple-style-span" style="font-family: inherit;">Null as you know it is a value, but in Scala, every value has type, so it's of type Null. 
And Null extends every single class, so that null (the one and only instance of Null) can be assigned to any <span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">AnyRef</span><span class="Apple-style-span" style="font-family: inherit;"> object. It sounds crazy, but if you let it sink in, it will make sense. Null is actually a trait, not a class.</span><br /><br /><span class="Apple-style-span" style="font-family: inherit;">Nothing is the absolute bottom of the hierarchy. Nothing doesn't have any instances.</span><br /><br /><br /><b>Numeric Types</b><br /><br />Integers are of type Int.<br /><br />Doubles are Double, floats are Float. <br /><br />A literal in the code is treated as an object of the appropriate type. Things just work, without "autoboxing" or other convolutions.<br /><br />Strings and numeric types are immutable.<br /><br /><br /><b>Collections</b><br /><br />Collections come in mutable and immutable variations.<br /><br />A special kind of collection is the Tuple. It's an ordered group of N items that can be of the same or of different types. They are defined for N=2 to N=22. You can access the <i>j</i>-th element of the tuple with ._<i>j</i> (ex: the first is myTuple._1, the third is myTuple._3)<br /><br /><b>Options</b><br /><br />The first thing you have to realize about options is that they're everywhere, so you better get used to the idea. Lift uses Box instead, but it serves the same purpose.<br /><br />The second thing you have to realize is that Option (or Box) is the right way to handle multiple scenarios, but you've spent your programming life working around that fact.<br /><br />Let's take a simple example. You want to implement a method that computes the square root of the parameter it receives. What should that method return? A Double. 
So you start:<br /><br /><pre class="brush: java">def sqrt (x: Double) : Double = {<br /> // ...<br />}<br /></pre><br />But what should it return when x is negative? The Scala answer is to return an Option[Double] instead: Some(y) when there is an answer, and None when there isn't. You can test the result with isDefined and pull the value out with get.<br /><br />Or, you can use matching to decide what to do:<br /><br /><pre class="brush: java">MathHelper.sqrt(x) match {<br /> case Some(y) => "The square root of " + x + " is " + y<br /> case None => "The value " + x + " does not have a square root"<br />}<br /></pre>

Nancy Deschênes

<b>… on Rejection</b>

And then, the decision came. The list of speakers was published, and I couldn't find my name on it. I looked and looked and... nope! nothing. Not for the Scala talk, not for the Grails talk, not for the SQL talk.<br /><br /><b>So, why was I so disappointed?</b><br /><br /><b>What now?</b><br /><br />I know that whenever we do something, we "practice" doing it - we get better at it, it becomes easier. So it's time for me to start practicing risk taking. At least a little.<br /><br />Isn't it convenient that this is happening at the turning of the year?<br /><br />Here's to 2011!

Nancy Deschênes

<b>…the configuration for the mail plugin for grails from the database</b>

I have a grails application that sometimes needs to email users. I am using the <a href="">mail plugin</a>, but that expects the configuration (SMTP server, etc.) to be in the application's Config.groovy file. I wanted to make it possible for the site administrator to change the configuration. This way, they could create a specific email account with Google just to send email for the application.<br /><div><br /></div><div>After some research, I discovered that the configuration could be altered at runtime by getting the mailSender injected into my code, and setting its properties as needed.</div><div><br /></div><div>Next, I needed a domain object to represent my configuration. 
I tried using the @Singleton annotation, but that does not play well with domain objects. I ended up writing getInstance() myself:</div><div><br /></div><br /><pre class="brush: java">static long instanceId = 1l;<br />static EmailServerConfig instance;<br /><br />static synchronized getInstance() {<br />    if (!instance) {<br />        instance = EmailServerConfig.get(instanceId);<br />        if (!instance) {<br />            instance = new EmailServerConfig()<br />            instance.id = instanceId<br />            instance.save()<br />        }<br />    }<br />    return instance;<br />}<br /></pre><div>This can only work with</div><pre class="brush: java">static mapping = {<br />    id generator: 'assigned'<br />}<br /></pre><br /><div><br /></div><div>and even with that, I was unable to pass the id parameter to the constructor; that's why I set it separately after the EmailServerConfig object is created.</div><div><br /></div><div>Then all I needed was to define the GORM event handlers afterLoad, afterInsert and afterUpdate to apply the values from the database to the mailSender.</div><div><br /></div><div>Finally, I made sure to encrypt the SMTP authentication password on its way into the database, and to decrypt it when retrieving it. Thanks to Geoff Lane for this blog post on how to handle that with codecs:</div><pre class="brush: java">class EmailServerConfig {<br />    String password<br />    String passwordEncoded<br />    //...<br />    def afterLoad = {<br />        password = passwordEncoded?.decodeSecure()<br />        updateMailSender()<br />    }<br />    def beforeUpdate = {<br />        passwordEncoded = password?.encodeAsSecure()<br />    }<br />    //...<br />}<br /></pre><br /><div>Then all I had to do was write the controller to let site admins edit those values.
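A minimal sketch of what that controller might look like - the action names and messages here are my own illustration, not from the original post:<br /><pre class="brush: java">class EmailServerConfigController {<br /><br />    // show the current (singleton) configuration in an edit form<br />    def edit = {<br />        [config: EmailServerConfig.getInstance()]<br />    }<br /><br />    // apply the submitted values; the GORM event handlers take care of<br />    // encrypting the password and pushing the values to the mailSender<br />    def update = {<br />        def config = EmailServerConfig.getInstance()<br />        config.properties = params<br />        if (!config.save()) {<br />            flash.message = "Could not save the email configuration"<br />        }<br />        redirect(action: 'edit')<br />    }<br />}<br /></pre>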
The controller uses EmailServerConfig.getInstance(), since we're simulating a singleton.</div><div><br /></div><div>Things to remember:</div><div><ul><li>onLoad is called before the values are loaded into the object, so the values are null; use afterLoad instead</li><li>beforeInsert, beforeUpdate and beforeDelete are called only once the object is validated, so the constraints have to allow a null value for passwordEncoded</li><li>Grails tries to instantiate domain classes at startup, and @Singleton prevents that; that's why you can't have @Singleton on a domain class</li><li>If you don't create an EmailServerConfig in the Bootstrap, and you do not provide any configuration in Config.groovy, the mailSender will default to trying to send mail through localhost:25. This may work, depending on your setup.</li></ul></div>Nancy Deschênes<br /><br /><b>RoCoCo 2010 in Montreal - Recap and Impressions</b><div><div>The Recent Changes Camp (also called RoCoCo) was held last weekend (June 25-27) in Montreal. I'm not sure how I heard about it. Possibly someone I follow on twitter commented on tikiwiki, and I checked it out and saw that they were sponsoring a conference near me. I do not "wiki". We use one at work, but I have felt no drive to get involved in the open ones such as wikipedia or wikiHow. But I decided not too long ago to be more involved in what's happening near me, and to actually meet people face to face once in a while. So I went. Their website made it look open enough.</div><div><div><br /></div><div>It turns out, that was a good idea.</div><div><br /></div><div>The first few people I met were welcoming and friendly, so I stayed. "I don't really have anything to do with wikis" was met with "that's cool", and "whoever ends up being there is whoever was meant to be there". I learned first-hand about the openspace technology (well, I'd call it more a philosophy or concept, but it seems that it's usually referred to as a technology).
When we got together to set the agenda, I expected people to act like I've always seen people act, shy: "you go first" "no you do". But they couldn't wait for the presenter to give them the floor to introduce the sessions they wanted to host. I suppose I shouldn't have been surprised; many are active wiki or open software contributors, so they do not wait for others to do what they want done - they are doers. Still, for me that set the tone do the day - a very proactive sort of tone. Which eventually led me to co-host a talk (at Mark Dillon's invitation). I hadn't planned on coming back on the Saturday, but I was learning, I was talking to people, I felt part of the group, so I came back.</div></div><div><br /></div><div><b>Why wiki, why not wiki, and why don't more people contribute?</b></div><div><br /></div><div>The first session I attended was led by <a href="">Mark Dillon</a>, and asked "<a href="">Why wiki?</a>". Many issues were discussed, as you can see from the notes. We discussed how things are, how people use wikis, and how it can make things better. We also looked at some reasons why people do not wiki. The technical aspect came up (wiki syntax can be cumbersome), but I felt they were leaving the "people aspect" out of it a bit- being shy, or... something. We did cover a lot of the features and how they can act to invite users to contribute, or add to the "conversation". After the talk, Mark asked if I would co-host another one on <a href="">barriers to entry</a>. I don't know if it was his strategy to gently push a newcomer, of it was just the action of a natural leader - either way, it worked, and I accepted. The session happened the next day. I think we covered a lot of "personal" aspects. I personally feel that while the technical complexity may keep some people away, many just don't want to write, either because they are shy, or do not feel the drive to do so. I wanted to explore these in more details, and we certainly did. 
What came out of it for me, why I think many of us don't contribute, is that we feel we need the permission to edit someone else's text. Even when we know how the system works, we may want to make sure we're not going to hurt the other person's feelings, or possibly miss an important point they were making, or an angle they were trying to give. Another major obstacle is the lack of feedback. If we don't believe that others will review our work and improve on it, there is little to motivate us. What I didn't know is how much of a community the contributors end up creating, and that it is an important factor to current contributors. A very interesting and enlightening set of discussions.</div><div><br /></div><div>Since then, I have read an <a href="">article</a> (thanks to <a href="">Seb Paquet</a> for pointing it out) showing some empirical measurement of the benefits of methods used to encourage people to contribute. I have also encountered the concept of "diffusion of responsibility" and "the bystander effect". The feeling that we don't have to do something; someone else will. I was somewhat familiar with the concept, but now it has a name. And I'm sure it plays a role in whether one contributes to wikis, as well as pretty much any open venture.</div><div><br /></div><div><b>Structured wikis, semantic wikis, Semantic web</b></div><div><br /></div><div>Here, I mostly took notes, trying furiously to make sense of all the information provided. Structured wikis are those where contributors are asked to fill in fields, rather than (or as well as) come up with the text of the article. It may feel more like entering data into a database. For example, for an entry on an author, the user may be asked for a date of birth, a date of death, schools attended, genre, and a list of titles.</div><div><br /></div><div>A semantic wiki is one where the occurrence of information with a particular meaning ("this is a city", "this is a person", "this is a date") is annotated as such. 
This then allows the software to make links between other things that also have that information, such as other authors who were born in the same city, or events that occurred on the same date.</div><div><br /></div><div>The power of both these wiki styles is greatly improved when they are combined - structured semantic wikis create a wealth of information that can be interrelated by a machine to create a rich set of information.</div><div><br /></div><div>This is also where I learned about the existence of RDF, and the RDF triple: a relation is expressed as 3 components: the subject, the predicate, and the object. I was taken back to 6th grade grammar, but of course it makes sense. If you want to describe a relation you need the origin of the relation, the type of relation, and the target of the relation: "Bob knows Peter", "Stephen King wrote The Stand". I believe someone brought up microformats, which can encode the same information, but differently, in a way that's more accessible on the web. There is so much here to try to make sense of, I'll be spending some serious time puzzling it out. But I think the potential there is obvious. Calling it "Web 3.0" may be an exaggeration, but I don't think it's far off.</div><div><br /></div><div><b>Open companies and reputation systems</b></div><div><br /></div><div>Bayle Shanks presented the concept of open companies where employees are those who want to work on the company's project, and they get paid according to a rating system by their peers. When I first sat in that session, I expected it to be about open companies in the sense of information flow, not a true open source parallel. Before the talk, I would have thought the whole concept simply crazy, but the culture of open spaces (or this particular one?) is such that we all listened, commented, suggested, brought up issues, without anyone being offensive or dismissive.
I'm still not sure it can work in a general setting, but I applaud Bayle for wanting to give it a shot, and I can see some applications where it has some definite potential.</div><div><br /></div><div>The reputation system he wants to use is quite intriguing. The system sounds simple enough, but I'm not following all the repercussions of money distribution scheme. The interesting bits is that the people who contribute the most also end up being the ones who have the bigger influence on the distribution of rewards (money, reputation, gold stars). This can be useful in a number of non-money applications, too, as a way to recognize the contribution of volunteers, for example. Maybe all wikis should have that!</div><div><br /></div><div>And here is why keeping an open mind pays. If I had chosen not to listen to the open company part of the session I would never have heard about the reputation system. And boy am I glad I did!</div><div><br /></div><div><b>Multilingual wikis</b></div><div><br /></div><div>The resources on the net are still mostly in English, but the web as a whole is getting a lot better. At least now, a number of HTTP requests include an Accept-Language header that makes sense for the context. Still, users do not always want to browse a particular site in the language configured in their browsers.</div><div><br /></div><div>The question was, "how do you handle multilingual wikis"? This can be particularly hard to do when original content can come from any of the supported languages. If someone updates the French version, how do you make sure the changes are included in the English version? <a href="">Wiki translation</a> shows some promise, but I think there are still more questions than answers on that topic, both for wikis and for websites and social networks in general. 
I still haven't figured out how to Facebook or Twitter bilingually.</div><div><br /></div><div><b>Intent Map</b></div><div><br /></div><div>In another session, Seb Paquet introduced the idea of a way to find people who want to work on the same sort of things you want to work on. We discussed this, and by the end of the session, we had agreed to turn this into a <a href="">project</a>. Others in the group were already familiar with RDF (see section above on semantic web). Someone else brought up <a href="">FOAF</a>, a specification for identifying people and relations between people. Someone else brought up <a href="">DOAP</a>. Microformats came up again. I try write down everything I'm going to have to learn about. And yet, I volunteered to code it all. Code what? well, that's the question! I'm not sure what I got myself into. During the session, I mentioned that one way to get widespread visibility would be with a Facebook application, so I have started playing with writing a Facebook application. Since I don't want to worry about hosting the application (and the scraper, and the database that will be needed to support it all) I also started learning about Google App Engine. I even wrote a very simple application that displays inside Facebook, but I'm still sorting out authentication and permissions. It is quite a challenge, but I need to get out of my comfort zone, so this is perfect. I get to play with new (to me) technologies, learn new specifications, and tons of new concepts. I hope I don't disappoint my teammates. I better get coding.</div><div><br /></div><div><b>What now?</b></div><div><br /></div><div>I met tons of new people. I hosted a session. I am now an official contributor to the RoCoCo2010.org wiki (I wrote up some session notes). I volunteered to implement something around a spec that doesn't even exist, with tools and APIs I don't know. This is not the same-old, same-old, but I'm okay with that. 
And I'm going to do it again, when the opportunity arises. Thanks to everyone who was there for making it so energizing.</div><div><br /></div></div>Nancy Deschênes<br /><br /><b>Notes from the RoCoCo unconference in Montreal this weekend</b><br /><br />I decided to attend the Recent Changes Camp 2010: Montreal.<br />Here are a few notes I made for myself, and that I'm sharing now, at least until I get around to writing a proper entry:<br /><br />Wikis (well, wiki software) could be a way to implement addventure (a "you are the hero" collaborative storytelling game originally written by Allen Firstenberg). Wiki red links are very much like "this episode has not been written yet".<br /><br />Wikis synthesize (focus), where forums divide (or disperse in their focus).<br /><div><div><br /></div><div>Ontology vs folksonomy.<br /><br /><b>Look into:<br /></b>HTLit</div><div>Inform 7<br />Universal Edit Button<br />Etherpad<br />Semantic web; the semantic triple: [subject, predicate, object]</div><div>Resource Description Framework (RDF), SPARQL (query language for RDF)<br />confoo<br />appropedia</div><div>dbpedia</div><div>Google wheel</div><div>microformats</div><div><br /></div><div><br /><b>To read:</b><br />The wisdom of crowds</div><div><div>The Delphi Method</div><div><br /></div><br />I also got to lead a session (which is a lot easier than you might expect, since all the participants are interested in the topic anyway - or they leave - and because they participate willingly).
And we started a new project!</div></div>Nancy Deschênes<br /><br /><b>How to transform one type of object to another in Groovy/Grails (as long as it's not a domain object)</b><div>I've been working on a system that will be using remote calls to communicate between a client (browser, mobile phone, possibly a GWT client) and the server. The client sends a request, and a grails controller returns a grails domain object encoded using JSON. Relatively straight-forward stuff, but I hit a few snags. I was thankful when I discovered a site that goes into some detail on how to make it happen. Detailed posts are here, here, and here.</div><br /><div>I debated using the ObjectMarshaller to restrict the data sent (after all, the client doesn't need to know the class name of my objects), but in the end, I decided to use Data Transfer Objects. I can see a future development where these objects will be used as commands, for example.</div><div><br /></div><div>The problem that's been keeping me awake tonight, though, is in the translation from domain object to DTO. Based on my reading, it looked like I could transform any kind of object into any other kind of object, as long as the initial object knew what to do.</div><pre class="brush: java">class User {<br />    // grails will contribute fields for id and version<br />    String lastName<br />    String firstName<br />    Address workAddress<br />    Address homeAddress // The client does not need that info and SHOULD NOT ever see it<br />    static hasMany = [roles: Role, groups: Groups] // etc<br /><br />    def doThis() {<br />        //..<br />    }<br /><br />    def doThat() {<br />        //...<br />    }<br />}<br /><br />class UserDTO {<br />    String lastName<br />    String firstName<br />}<br /></pre><div>How do you take a User object and make a UserDTO out of it? Well, you should certainly have a look at Peter Ledbrook's DTO plugin.
But for my needs, I thought I'd stick with something simpler: just use the groovy "as" operator.</div><div></div><div>All you need to do to support something like</div><pre class="brush: java">def dto = user as UserDTO<br /></pre><div>is to have User implement asType(Class clazz) and to handle (by hand) the case where clazz is UserDTO:</div><pre class="brush: java">class User {<br />    // same fields as before, etc<br />    Object asType(Class clazz) {<br />        if (clazz.isAssignableFrom(UserDTO)) {<br />            return new UserDTO(lastName: lastName, firstName: firstName)<br />        } else {<br />            return super.asType(clazz)<br />        }<br />    }<br />}<br /></pre><div>All works well. Unit tests confirm, there's nothing to it.</div><pre class="brush: java">void testUserAsUserDTO() {<br />    String lastName = 'Lovelace'<br />    String firstName = 'Ada'<br />    User u = new User(lastName: lastName, firstName: firstName);<br />    UserDTO dto = u as UserDTO;<br />    assertEquals(UserDTO.class, dto.class)<br />    assertEquals(lastName, dto.lastName);<br />    assertEquals(firstName, dto.firstName);<br />}<br /></pre><div>Integration test.
I want to make sure my controller sends the right data.</div><div>The controller:</div><pre class="brush: java">def whoAmI = {<br />    def me = authenticateService.userDomain() // acegi plugin; this returns a User<br />    if (me) {<br />        def dto = me as UserDTO<br />        render dto as JSON<br />    } else {<br />        render([error: "You are not logged in"] as JSON)<br />    }<br />}<br /></pre><div>The test:</div><pre class="brush: java">class RpcWhoAmITest extends ControllerUnitTestCase {<br />    void testWhoAmI() {<br />        String lastName = 'Lovelace';<br />        String firstName = 'Ada';<br />        User u = new User(lastName: lastName, firstName: firstName)<br />        mockDomain(User.class, [u])<br />        mockLoginAs(u)<br />        controller.whoAmI()<br />        def returnedUser = JSON.parse(controller.response.contentAsString)<br />        assertNotNull(returnedUser)<br />        assertEquals(lastName, returnedUser.lastName)<br />        assertEquals(firstName, returnedUser.firstName)<br />    }<br />}<br /></pre><div>And that... <span style="font-weight: bold;">fails!</span> The message is</div><pre>org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'my.package.User : 1' with class 'my.package.User' to class 'my.package.rpc.UserDTO'<br />    at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.castToType(DefaultTypeTransformation.java:348)<br />    at ...<br /></pre><div><br />What went wrong? The call to mock my domain object is what went wrong. It replaces my asType(Class clazz) with its own. Fortunately, that's relatively easy to fix.
I needed to override the method addConverters in grails.test.GrailsUnitTestCase to replace asType(Class) only if it didn't already exist (in my test class):</div><pre class="brush: java">@Override<br />protected void addConverters(Class clazz) {<br />    registerMetaClass(clazz)<br />    if (!clazz.metaClass.asType) {<br />        clazz.metaClass.asType = { Class asClass -><br />            if (ConverterUtil.isConverterClass(asClass)) {<br />                return ConverterUtil.createConverter(asClass, delegate, applicationContext)<br />            } else {<br />                return ConverterUtil.invokeOriginalAsTypeMethod(delegate, asClass)<br />            }<br />        }<br />    }<br />}</pre><br /><br /><div>Sadly, after all this work, I deploy, launch, and still get GroovyCastExceptions. It turns out that the instrumentation of domain class objects essentially throws out my "asType()" method. In the end, I switched to the DTO plugin (which post-instruments the domain object to do its own stuff - something I considered doing), but at some point, the "quick, home-made solution" just isn't.</div>Nancy Deschênes<br /><br /><b>Part 3</b><br /><br />The third speaker in the rapid-fire keynotes was Adele McAlear with <i>Death and Digital Legacy</i>.<br /><div><br /></div><div>That presentation was definitely nowhere near as much fun as any of the other ones, simply due to the morbid topic. It is a brand new domain - what do you do with all the electronic assets when their owner (producer, creator) dies? Who has the rights to decide what happens, and how are the service providers handling it?</div><div><br /></div><div>First, Adele showed us the breadth of our "online footprint". From email accounts to flickr uploads, blogs, tweets, WOW characters...
Each service provider may have different policies for dealing with a deceased person's account, but in reality, only Facebook has a stated policy. And what about paid subscription services? When a person dies, the credit card they used to maintain that service is cancelled, and unless a survivor has access to the account, they can't change the credit card on file (I wonder what happens in the cases where the services have "gift subscriptions", such as "buy Person a pro account"). The survivor can't gain access to the account because it is tied to an email account, to which, in all likelihood, they do not have access.</div><div><br /></div><div>So what solutions do we have, if we want to leave something behind, if not for ourselves, for our friends, fans, followers?</div><div><br /></div><div>First, we should make a list of our digital assets. What accounts we have, and how we'd like them to be preserved after death. For example, we may want blogs to be preserved or archived, but we may think that our twitter account history is just not worth the effort. We should make sure that our family knows about our online accounts, too, so they know what to expect.</div><div><br /></div><div>Then we appoint a digital executor. This is someone with whom we will have discussed the matter, and who will be responsible for our digital legacy. This does not have to be the same person who will execute our will. Then, we create an email account exclusively for this purpose, and set it up as a "backup email account" for our regular email account. This way, when the executor wants to take over our main email account, they only have to request a password reset be sent to the backup email account. The password to the backup email account should be kept with our will.
From there, the executor will be able to gain access to other accounts by requesting a password reset.</div><div><br /></div>Nancy Deschênes<br /><br /><b>Part 2</b><br /><br />Sean Power, <i>Communilytics</i><div><br /></div><div>Communilytics is the analysis of how specific information flows through online communities. In his presentation, Sean encouraged us to look at the numbers at a deeper level than simple page-views or the number of followers. But before we can mount a successful campaign, we have to decide what we want to accomplish with that campaign: make more money, gain attention/recognition, or improve our reputation. How deeply we want to get involved in the community we're targeting (whether we want to search, join, moderate or run it) will affect which tools and technologies to use. There are 8 social platforms (group/mailing list, forum, real-time chat, social network, blog, wiki, ...), with different dynamics, and each can be addressed at different levels of involvement. The metrics, then, will depend on the tools used. Each business is different, and we know best what's right for our business.</div><div><br /></div><div>He also brought up the AARRR model by Dave McClure: Acquisition, Activation, Retention, Referral, Revenue.</div><div><br /></div><div>He then gave some examples of how information flows through communities. When one person posts a tweet, their followers see it. But if it's a tweet of interest, the recipients may want to let <i>their</i> followers know about it, and re-tweet it. The reach of a message, then, is the followers, those who receive the message as a retweet, and those who receive the retweet retweeted, etc. But some people may cross social platforms - they may put a tweet on Facebook, or send it through email.</div><div><br /></div><div>This presentation is where I also found out the formal definition of "going viral". That's when the average number of people a person passes the message on to is greater than one.
(So on average, everyone who gets this message will forward it).</div><div><br /></div><div>To get a wide reach, sometimes the best way is to find a few seeds who, because of their respectability and following, will ensure a wide, receptive audience for the message. The sites Twinfluence, tweetreach and TwitterAnalyzer can help find out the reach of twitter users and messages.</div><div><br /></div><div><br /></div>Nancy Deschênes<br /><br /><b>Montreal</b><div>I was lucky enough to find out about this event a few days before the fact. I only participated in the free portion: 5 presentations of 15-20 minutes about different aspects of Web 2.0 and social networks.</div><div><div><br /></div><div>Here are some notes and thoughts about the presentations:</div><div><br /></div><div>Chris Heuer, <i>Serve the Market</i></div><div><br /></div><div style="text-align: center;"><b>Serving the market is leadership, not management</b></div><div><br /></div></div><div>and the parting question,</div><div><br /></div><div style="text-align: center;"><b>is profit the only purpose of business? or are we able to transcend our current thinking?</b></div><div style="text-align: center;"><br /></div><div><br /></div><div>Next: Sean Power, <i>Applied Communilytics In a Nutshell</i></div>Nancy Deschênes<br /><br />For most software developers, programming is more than just a way to earn money. What puts a smile on our faces as we think about upcoming tasks? What makes us think about the current problem when we wait in line, while we're driving, or as we fall asleep? Sure, we do it because it has to be done, but often there are much more personal reasons. Where does the satisfaction come from? We are all different, and different things make us tick, but all the cases fit in 3 categories: personal, interpersonal, and global.
The fun is in figuring out who we are based on what's important to us.<br /><br /><span class="Apple-style-span" style="font-size: large;">Personal<br /></span><br /><div><ul><li>Feeling smart: This is a very good feeling, whether it's because we've beat the machine into submission, or because we've found a whole new way to use an old tool. It doesn't mean outsmarting another person, but lining up ideas in a productive way</li><li>Aesthetics: Coming up with a beautiful, elegant design. Simplifying something complicated. Making all the parts fit nicely</li><li>Achievement: Going beyond our abilities, taking it one step further</li><li>Learning: New tools, new ways of thinking. We prefer learning things that change how we think about the problem. The more ways we can shape our minds around an issue, the more likely we can come up with an elegant solution; or a solution at all.</li><li>The big "DONE" stamp: when we get to say something is done, out the door, complete. Sometimes we have to compromise - we'd like to tweak this a bit more, or refactor that, but like an artist, talent isn't only knowing how a piece can be made better, but also knowing when to stop.</li><li>Glory: well, okay, that's pushing it a bit, but the recognition of others, whether our peers, customers, or the whole wide world, is a very powerful motivation.</li></ul><span class="Apple-style-span" style="font-size: large;">Interpersonal</span></div><div><br /></div><div>We are often seen as solitary workers, but in truth, our job cannot be done totally alone. We value good relations with our coworkers, mentors, employer, and clients.</div><div><br /></div><div><ul><li>Coworkers: we often need to rely on others, whether to show us what they've done, to discuss an idea, or to share the workload. Good relations with them come from listening to ideas, asking questions, and pointing out issues in a respectful manner.
It works even better when we can be friends with our coworkers, and spend time together outside the work environment, but that's not necessary.</li><li>Relations with the employer can be very productive when we know what's expected of us, what the boundaries are, and when we know we can meet our targets. We much prefer having some freedom in how we attain these objectives, and when we have input in setting the targets. When employees are asked to set the target themselves, they tend to shoot for higher accomplishment, and they are more likely to reach them. This is true not only for software developers, but in pretty much any field.</li><li>Clients (in a consulting or customization setting) are also part of the interpersonal aspect of a developer's life. Some of us hate talking to the client, but for others, knowing the person who will use the product, knowing how they will use it and why, can help us propose solutions they may not have thought of. It may require us to think differently, to use a different language so we can truly connect with the client in terms they understand, but that just keeps our minds nimble.</li></ul></div><div><span class="Apple-style-span" style="font-size: large;">Global<br /></span></div><div><br /></div><div>For some of us, the greater good is what drives our actions. We can make a difference by the work we do, whether it's through software that favors sustainability, the people we're helping, teaching/education we support, the time our software will save thousands of people, the list goes on. We all have causes we support, some more ardently than others, and when our work allows us to promote them, or help them along, we derive even more satisfaction from our efforts. We build a legacy, even if it's all too often anonymously.</div><div><br /></div><div>Clearly, money is not the only reason we develop software. 
If it were, there would be no Open Source movement.</div><div><br /></div><div>We all feel the pull of these motivations differently. For some, doing good for goodness's sake is plenty; others want recognition. I'm generally motivated mostly by the personal and interpersonal aspects of the work. I value the recognition of my peers more than that of the population at large.</div><div><br /></div><div>What motivates you?</div><div><br /></div>Nancy Deschênes
In addition to CircuitPython there's an older MicroPython version of the HT16K33 LED backpack library. To use the LED backpack with your MicroPython board you'll need to install the ht16k33_matrix.mpy and ht16k33_seg.mpy files from the releases page of the micropython-adafruit-ht16k33 GitHub repository, copying them to your MicroPython board's file system with a tool like ampy. The following section will show how to control the LED backpack from the board's Python prompt / REPL: you'll walk through how to control the LED display and learn how to use the MicroPython module built for the display. As a reference, be sure to see the micropython-adafruit-ht16k33 module documentation too. First connect to the board's serial REPL so you are at the MicroPython >>> prompt. On MicroPython.org firmware, which uses the machine API, you can initialize I2C as the MicroPython I2C guide describes. For example, on a board like the ESP8266 you can run (assuming you're using the default SDA gpio #4 and SCL gpio #5 pins, like on a Feather & LED backpack FeatherWing):

import machine
i2c = machine.I2C(scl=machine.Pin(5), sda=machine.Pin(4))

Once you've initialized the I2C bus you're ready to create instances of the matrix and display classes. These classes are exactly the same as in the CircuitPython version of the library, but instead live in the ht16k33_matrix module. For example, to create an 8x8 matrix you could run:

import ht16k33_matrix
matrix = ht16k33_matrix.Matrix16x8(i2c)

See the CircuitPython library usage for information on the functions you can call to control the matrix and segment displays. Once you've created the display class like above, its usage is the same as with CircuitPython!
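As a quick illustration, a REPL session that lights a couple of pixels might look like the following. Treat this as a sketch: the fill and pixel method names are assumptions based on typical releases of the library, so check them against the micropython-adafruit-ht16k33 module documentation for the version you installed.

import machine
import ht16k33_matrix

# I2C on the ESP8266 default pins, as in the example above
i2c = machine.I2C(scl=machine.Pin(5), sda=machine.Pin(4))
matrix = ht16k33_matrix.Matrix16x8(i2c)

matrix.fill(0)         # turn every LED off
matrix.pixel(0, 0, 1)  # light the top-left pixel
matrix.pixel(7, 7, 1)  # light the opposite corner of an 8x8 display

The module documentation covers the rest of the API, including brightness and blink control.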
https://learn.adafruit.com/micropython-hardware-led-backpacks-and-featherwings/micropython
CC-MAIN-2021-17
refinedweb
317
63.39
Surprise Attack! If you visited the previous dungeon examples in a non-Chrome browser, you may have noticed two issues. One was that the graphics looked all blurred, and the other was that the sound didn't play. Let's roll a couple of D6's and see if we have the agility to overcome them!

Firstly, why is it blurry? This is because browsers smooth scaled images on the canvas by default. For pixel-based (blocky) art, we do not want this to happen, so we have to use the imageSmoothingEnabled property of the drawing context object that we get from the canvas. Currently in Dart, this property doesn't cover the numerous vendor-prefixed variants. If we want the dungeon to work on IE, Firefox, etc., we need to use JavaScript. Fortunately, calling JS from Dart is very easy. We will add to the index.html page a simple class called crossBrowserFilla with a single method called keepThingsBlocky.

var crossBrowserFilla = function (){
    this.keepThingsBlocky = function(){
        var canvas = document.getElementById("surface");
        var ctx = canvas.getContext("2d");
        ctx.mozImageSmoothingEnabled = false;
        ctx.imageSmoothingEnabled = false;
        ctx.msImageSmoothingEnabled = false;
        ctx.webkitImageSmoothingEnabled = false;
    }
};

This is called in main.dart. Fairly easy!

import 'dart:js';
....
JsObject jsproxy = new JsObject(context['crossBrowserFilla']);
bool canvasConfigured = false;
...
if (!canvasConfigured){
    jsproxy.callMethod('keepThingsBlocky');
    canvasConfigured = true;
}

The second issue, sound, has to be handled a bit differently. The best format across browsers is currently MP3, the downside being that Dartium doesn't yet support the required codec, as it is based on Chromium. This is short term (hopefully) once the Dart VM makes it into Chrome. Apart from changing the file extensions, there is no change required to the audio code.

Code is available on Github and a live demo is available here. Use arrow keys to move!
Next time we REALLY, REALLY WILL finish our dungeon adventures by putting some other characters in the dungeon. A wizard's promise!
http://divingintodart.blogspot.com/2015/02/procedural-generation-part-five.html
CC-MAIN-2017-17
refinedweb
325
60.11
Any Octave function can be overloaded, which allows an object-specific version of the function to be called as needed. A pertinent example for our polynomial class might be to overload the polyval function like

function [y, dy] = polyval (p, varargin)
  if (nargout == 2)
    [y, dy] = polyval (fliplr (p.poly), varargin{:});
  else
    y = polyval (fliplr (p.poly), varargin{:});
  endif
endfunction

This function just hands off the work to the normal Octave polyval function. Another interesting example of an overloaded function for our polynomial class is the plot function.

function h = plot (p, varargin)
  n = 128;
  rmax = max (abs (roots (p.poly)));
  x = [0 : (n - 1)] / (n - 1) * 2.2 * rmax - 1.1 * rmax;
  if (nargout > 0)
    h = plot (x, p(x), varargin{:});
  else
    plot (x, p(x), varargin{:});
  endif
endfunction

Finally, a double function can be overloaded to convert a polynomial object back to its vector of coefficients.

function b = double (a)
  b = a.poly;
endfunction
https://docs.octave.org/v4.0.1/Function-Overloading.html
CC-MAIN-2022-40
refinedweb
146
66.03
Debugging. The question was: Where was this spurious signal coming from? The kernel trace showed no kill() system call from gdb, therefore gdb was not requesting the whole session to die. The decision to send the signal therefore had to be made in the kernel. Getting a spurious SIGHUP is quite unusual. Most of the time, runaway processes get unexpected SIGSEGV or SIGBUS signals because they attempted to access invalid memory locations in their address spaces. As its name suggests, ttioctl() is an ioctl method. By having a look at the kernel trace just before the SIGHUP is caught by gdb, we get a better idea of where the problem was coming from.

Operations on registers are quite straightforward to implement. Linux binaries expect reading and writing through a pt_regs structure, defined in Linux's header linux/include/asm-ppc/ptrace.h. The job is to get the registers and rearrange them appropriately.

In this section we will explain the bug that prevented Linux's gdb from getting a backtrace on a traced program. In fact, ptrace() emulation was even more broken than this, because gdb was not even able to tell where the program stopped when it received a signal. The output was:

Program received signal SIGIO, I/O possible.
0x0 in ?? ()
gdb>

Here is a kernel trace of what gdb attempted to do in order to display the address and the name of the function where the traced process was stopped.

161 gdb RET write 1
161 gdb CALL write(0x1,0x50374000,0x2d)
161 gdb GIO fd 1 wrote 45 bytes "Program received signal SIGIO, I/O possible. "
161 gdb RET write 45/0x2d
161 gdb CALL ptrace(PTRACE_PEEKUSER,0xa2,0x4,0x7fffdc3c)
161 gdb RET ptrace 2147477168/0x7fffe6b0
161 gdb CALL ptrace(PTRACE_PEEKUSER,0xa2,0x90,0x7fffdc0c)
161 gdb RET ptrace 268437452/0x100007cc
161 gdb CALL ptrace(PTRACE_PEEKTEXT,0xa2,...bca0,0x7fffdc34)
161 gdb RET ptrace 0

The first PEEKUSER operation reads the register GPR4 (address 0x4 in Linux's U-dot zone). The returned value (0x7fffe6b0) is an address in the user stack -- it seems valid.
The second PEEKUSER call reads the Link register (address 0x90 in Linux's U-dot zone). Here we get an address (0x100007cc) which is obviously located in the process text segment -- it also seems valid. Then gdb attempts to read the function names from the program text with PEEKTEXT commands, but there is obviously something wrong, because the requested address (0xfffffffc) is not located in the user address space (user addresses range from 0x00000000 to 0x7fffffff on NetBSD/PowerPC). The next PEEKTEXT attempts are even more malformed, and they fail with an invalid argument error. The surprising thing was that the first two PEEKUSER calls seemed correct, and the next PEEKTEXT call using the PEEKUSER results was obviously wrong. Using printf() in the kernel to display the correct values confirmed that the PEEKUSER results were right. It seems that something is wrong in PEEKTEXT or PEEKUSER requests. Here we try the following sample program to check the PEEKTEXT operation.

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/errno.h>
#include <sys/wait.h>

void handler (void)
{
    printf ("in handler\n");
    return;
}

int main (int argc, char** argv)
{
    int spot = 0x88888888;
    int err;
    int pid;
    int status;
    long data;

    pid = fork();
    switch (pid) {
    case -1:
        perror ("fork failed");
        exit (-1);
        break;
    case 0:
        spot = 0x77777777;
        signal (SIGUSR1, (void*)(*handler));
        err = ptrace (PTRACE_TRACEME, 0, 0, 0);
        printf ("ptrace PTRACE_TRACEME returned %d, errno=%d\n", err, errno);
        kill (getpid(), SIGUSR1);
        sleep (1);
        printf ("child quitting\n");
        break;
    default:
        spot = 0x99999999;
        wait (&status);
        printf ("parent: PTRACE_PEEK on %d\n", pid);
        errno = 0;
        data = ptrace (PTRACE_PEEKTEXT, pid, &spot, 0);
        if (errno != 0)
            printf ("ptrace returned %d, errno=%d\n", data, errno);
        printf ("read 0x%lx\n", data);
        printf ("data=0x%lx\n&data=0x%lx\n", data, &data);
        break;
    }
    return 0;
}

This sample program was interesting
because it exhibited result values for PEEKTEXT operations that were different in the program output and in the kernel trace. In the kernel trace, we had the correct value, and in the program output, the wrong value. The explanation of this kind of phenomenon is that the system-call wrapper in glibc altered the return value. Looking at the glibc sources, the answer was obvious. The system call wrapper for ptrace() is defined in glibc/sysdeps/unix/sysv/linux/ptrace.c. Here are the sources of this wrapper:

if (request > 0 && request < 4)
    data = &ret;
res = INLINE_SYSCALL (ptrace, 4, request, pid, addr, data);
if (res >= 0 && request > 0 && request < 4)
  {
    __set_errno (0);
    return ret;
  }
return res;

The test on the request (between 0 and 4) selects the PEEKUSER, PEEKTEXT, and PEEKDATA operations. For these three operations, glibc replaces the return value by the value of the data argument. For other operations, the result is just the return value of the ptrace() system call. It is also interesting to look at the ptrace() implementation in the Linux kernel sources, in linux/arch/ppc/kernel/ptrace.c:sys_ptrace(), where we discover the same trick:

/* when I and D space are separate, these will need to be fixed. */
case PTRACE_PEEKTEXT: /* read word at location addr. */
case PTRACE_PEEKDATA: {
    unsigned long tmp;
    int copied;

    copied = access_process_vm(child, addr, &tmp, sizeof(tmp), 0);
    ret = -EIO;
    if (copied != sizeof(tmp))
        break;
    ret = put_user(tmp,(unsigned long *) data);
    break;
}

Here, for PEEKTEXT and PEEKDATA, the value that will be returned to the calling program is copied to the location of the data argument, and the address of this data argument is returned to userland. As we saw, glibc will bring back the expected return value in the value returned to the calling program. The reason why Linux does this is probably that on most platforms, the Linux kernel uses negative return values when there is an error. We already had a look at this problem in part three of this series.
Hence, on the i386, if ptrace() was returning a value such as 0xfffffffe, glibc would see a negative value and would assume it is the opposite of an error code. It would therefore set errno to the opposite of 0xfffffffe, which is 2, and we would see an ENOENT error (ENOENT is errno 2). To avoid the problem, Linux must use this kludge with the data argument. The bug here was that the Linux emulation of the ptrace() operations PEEKTEXT, PEEKDATA, and PEEKUSER was not emulating this Linux-specific behavior correctly. It was just returning the requested value to userland instead of copying it to the location of the data argument and returning the address of the data argument. This problem needed two fixes: one in machine-independent code, for the PEEKTEXT and PEEKDATA operations, and one in machine-dependent code for PEEKUSER. Here is the fix for PEEKTEXT/PEEKDATA, in sys/compat/linux/common/ptrace.c:linux_sys_ptrace():

error = sys_ptrace(p, &pta, retval);
if (!error)
    switch (request) {
    case LINUX_PTRACE_PEEKTEXT:
    case LINUX_PTRACE_PEEKDATA:
        error = copyout (retval,
            (caddr_t)SCARG(&pta, data), sizeof retval);
        *retval = SCARG(&pta, data);
        break;
    default:
        break;
    }
return error;
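The negative-return convention glibc relies on is worth pinning down before moving to the second fix. The function below is my own schematic of what a libc syscall wrapper does with a raw kernel return value. It is not the actual glibc code, and the -4095 cutoff is the conventional Linux range for error codes:

```c
#include <assert.h>

/* Schematic of a libc syscall wrapper: Linux kernels report errors
 * as small negative return values, so raw results in [-4095, -1]
 * are converted into a (-1, errno) pair and everything else passes
 * through untouched.  Illustration only, not the real glibc code. */
static long wrap_syscall_return(long raw, int *err)
{
    if (raw < 0 && raw >= -4095) {
        *err = (int)(-raw);   /* e.g. raw -2 becomes errno 2 (ENOENT) */
        return -1;
    }
    *err = 0;
    return raw;
}
```

On a 32-bit machine, a raw return of 0xfffffffe is exactly -2, which is why the 0xfffffffe example above surfaces as errno 2 (ENOENT).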
The conclusion may be that it is not mandatory to fully understand a kernel subsystem prior starting work on it, you just need an idea of how it works so that you know where you are heading. There are a lot of things that can be learned. Actually, there are number of things about kernels I learned while working on Linux compatibility. I would like to thank Manuel Bouyer for giving me the first clue on Linux compatibility ("it works by remapping system calls"); the NetBSD tech-kern and port-powerpc mailing lists contributors for supporting me when I was integrating the Linux compatibility code for NetBSD/PowerPC; Carl Alexander, for providing me an account to a LinuxPPC machine; Kevin B. Hendricks, for his valuable help on tracking bugs that broke the JVM; Hubert Feyrer; Vincent Guillard; and Thomas Klausner for reviewing this paper; and of course, the NetBSD community, without whom this paper would not even exist. Emmanuel Dreyfus is a system and network administrator in Paris, France, and is currently a developer for NetBSD.. Return to ONLamp.com.
http://www.onlamp.com/lpt/a/1125
CC-MAIN-2013-48
refinedweb
1,474
60.85
Foxmail 6.5 Build 020

Foxmail 6.5 Build 020 is a useful tool designed to easily manage your mailbox or import your Outlook account.

Major Features:
- A WYSIWYG tool to compose nice-looking HTML emails from templates or from scratch. The program also offers filter options, allowing you to act upon incoming mail that meets certain criteria - you can delete messages, forward them, auto-respond to them and more based on keywords appearing in the subject, address, text, etc.
- The Express Send feature enables you to send mail directly to the recipient, using the built-in SMTP server, thereby bypassing your ISP.
- A remote mail viewer to manage mail on the server, as well as a small scroll ticker that displays message subjects as they arrive in your inbox.
- It is probably one of the most capable email programs you can find and an excellent alternative to Outlook Express (it can import Outlook messages).
http://wareseeker.com/Communications/foxmail-6.5-build-020.zip/36548e188
CC-MAIN-2014-49
refinedweb
349
58.69
Introduction: Infinity Bike - Indoors Bike Training Video Game

During the winter season, cold days and bad weather, cycling enthusiasts only have a few options to practice their favorite sport. We were looking for a way to make indoor training with a bike/trainer setup a bit more entertaining, but most products available are either costly or just plain boring to use. This is why we started to develop Infinity Bike as an Open Source training video game. Infinity Bike reads the speed and direction from your bicycle and offers a level of interactivity that cannot be easily found with bike trainers.

We take advantage of the simplicity of the Arduino microcontroller and a few 3D printed parts to secure inexpensive sensors to a bicycle mounted on a trainer. The information is relayed to a video game made with the popular game-making engine, Unity. By the end of this instructable, you should be able to set up your own sensors on your bike and transfer the information from your sensors to Unity. We even included a track on which you can ride along and test out your new set-up. If you're interested in contributing, you can check out our GitHub.

Step 1: Materials

The material list you will need might vary a bit; for example, the size of your bike will dictate the lengths of the jumper cables you need, but here are the main parts. You could probably find cheaper prices for each piece on websites like AliExpress, but waiting 6 months for shipping isn't always an option, so we're using the slightly more expensive parts so the estimate isn't skewed.

1 x Arduino nano ($22.00)
1 x Mini Breadboard ($1.33/unit)
1 x 220 Ohm resistor ($1.00/kit)
1 x 10K Potentiometer ($1.80/unit)
1 x Hall sensor ($0.96)
20 cm x 6 mm 3D printer timing belt ($3.33)
1 kit x Various Length M3 screws and bolts ($6.82)
1 x Bicycle speedometer magnet ($0.98)

We mounted the material above with 3D printed parts.
The files we used are listed below, and they're numbered with the same convention as the image at the beginning of this section. All the files can be found on Thingiverse. You can use them as-is, but make sure that the dimensions we used match your bike.

1. FrameConnection_PotentiometerHolder_U_Holder.stl
2. FrameConnection_Spacer.stl
3. BreadboardFrameHolder.stl
4. Pulley_PotentiometerSide.stl
5. Pot_PulleyConnection.stl
6. FrameConnection.stl
7. Pulley_HandleBarSide_Print2.stl
8. FrameToHallSensorConnector.stl
9. PotHolder.stl
10. HallSensorAttach.stl

Step 2: Reading and Transferring Data to Unity

The Arduino and Unity code will work together to collect, transfer and process the data from the sensors on the bike. Unity will request a value from the Arduino by sending a string through the serial port and wait for the Arduino to respond with the values requested.

First, we prepare the Arduino with the SerialCommand library, which is used to manage the requests from Unity by pairing a request string with a function. A basic set-up for this library can be made as follows:

#include "SerialCommand.h"

SerialCommand sCmd;

void setup() {
    sCmd.addCommand("TRIGG", TriggHandler);
    Serial.begin(9600);
}

void loop() {
    while (Serial.available() > 0) {
        sCmd.readSerial();
    }
}

void TriggHandler() {
    /*Read and transmit the sensors here*/
}
void TriggHandler (){ /*Reading the value of the potentiometer*/ Serial.println(analogRead(ANALOGPIN)); } The Hall sensor has a bit more setting up before we can have useful measurements. Contrary to the potentiometer, the instant value of the halls sensor is not very useful. Since were trying to measure the speed of the wheel, the time between triggers is what were interested in. Every function used in the Arduino code takes time and if the magnet lines up with the Hall sensor at the wrong time the measurement could be delayed at best or skipped entirely at worst. This is obviously bad because the Arduino could report a speed that is MUCH different than the actual speed of the wheel. To avoid this, we use a feature of Arduinos called attach interrupt which allows us to trigger a function whenever a designated digital pin is triggered with a rising signal. The function rpm_fun is attached to an interrupt with a single line of code added to the setup code. void setup(){ sCmd.addCommand("TRIGG", TriggHanlder); attachInterrupt(0, rpm_fun, RISING); Serial.begin(9600); } //The function rpm_fun is used to calculate the speed and is defined as; unsigned long lastRevolTime = 0; unsigned long revolSpeed = 0; void rpm_fun() { unsigned long revolTime = millis(); unsigned long deltaTime = revolTime - lastRevolTime; /*revolSpeed is the value transmitted to the Arduino code*/ revolSpeed = 20000 / deltaTime; lastRevolTime = revolTime; } TriggHandler can then transmit the rest of the information when requested. void TriggHanlder () { /*Reading the value of the potentiometer*/ Serial.println(analogRead(ANALOGPIN)); Serial.println(revolSpeed); } We now have all of the building blocks that can be used to build the Arduino code which will transfer data through the serial to when a request is made by Unity. If you want to have a copy of the full code, you can download it on our GitHub. 
To test if the code was set-up properly, you can use the serial monitor to send TRIGG; make sure you set the line ending to Carriage return. The next section will focus on how our Unity scripts can request and receive the information from the Arduino. Step 3: Receiving and Processing Data Unity is a great software available for free to hobbyist interested in game making; it comes with a large number of functionalities that can really cut down on time setting up certain things such as threading or GPU programming (AKA shading) without restricting what can be done with the C# scripts. Unity and Arduino microcontrollers can be used together to create unique interactive experiences with a relatively small budget. The focus of this instructable is to help set-up the communication between Unity and the Arduino so we wont dive too deep into most of the features available with Unity. There are plenty of GREAT tutorials for unity and an incredible community that could do a much better job explaining how Unity works. However, there is a special prize for those who manage to work their way through this instructable that serves as a little showcase of what could be done. You can download on our Github our first attempt a making a track with realistic bike physics. First, let’s go through the bare minimum that has to be done in order to communicate with an Arduino through the serial. It will be quickly apparent that this code is not suitable to gameplay but it’s good to go through every step and learn what the limitations are. In Unity, create a new scene with a single empty GameObject named ArduinoReceive at attache a C# script also named ArduinoReceive. This script is where we will add all the code that handles the communication with the Arduino. There is a library that must be access before we can communicate with the serial ports of your computer; Unity must be set-up to allow certain libraries to be used. 
Go to Edit->ProjectSerring->Player and next to the Api Compatibility Level under Configuration switch .NET 2.0 Subset to .NET 2.0. Now add the following code at the top of the script; using System.IO.Ports; This will let you access the SerialPort class which you could define as an object to the ArduinoReceive Class. Make it private to avoid any interference from another script. private SerialPort arduinoPort; The object arduinoPort can be opened by selecting the correct port (e.g. in which USB the Arduino is connected) and a baud rate (i.e. the speed at which the information is sent). If you’re not sure in which port the Arduino is plugged in you can find out either in the device manager or by opening the Arduino IDE. For the baud rate, the default value on most device is 9600, just make sure you have this value in your Arduino code and it should work. The code should now look like this; using System.Collections; using System.Collections.Generic; using UnityEngine; using System.IO.Ports; public class ArduinoReceive : MonoBehaviour { private SerialPort arduinoPort; // Use this for initialization void Start() { arduinoPort = new SerialPort("COM5", 9600); arduinoPort.Open(); WriteToArduino("TRIGG"); } } Your COM number will most likely be different. If you’re on a MAC, you’re COM name might have a name that looks like this /dev/cu.wchusbserial1420. Make sure the code from the section 4 is uploaded to the Arduino and the serial monitor is closed for the rest of this section and that this code compiles without problem. Let’s now send a request to the Arduino every frame and write the results to the console window. Add the WriteToArduino function to the class ArduinoReceive. The carriage return and new line are necessary for the Arduino code to parse the incoming instruction properly. private void WriteToArduino(string message) { message = message + "\r\n"; arduinoPort.Write(message); arduinoPort.BaseStream.Flush (); } This function can then be called in the Update loop. 
void Update () { WriteToArduino ("TRIGG"); Debug.Log("First Value : " + arduinoPort.ReadLine()); Debug.Log("Second Value : " + arduinoPort.ReadLine()); } The code above is the bare minimum you need to read the data from an Arduino. If you pay close attention to the FPS given by unity, you should see a significant drop in performance. In my case, it goes from around 90 FPS without reading/writing to 20 FPS. If your project doesn’t require frequent updates it might be sufficient but for a video game, 20 FPS is much too low. The next section will cover how you can improve the performance by using multi threading. Step 4: Optimising Data Transfer The previous section covered how to set-up basic communication between the Arduino and Unity program. The major problem with this code is the performance. In it’s current implementation, Unity has to wait for the Arduino to receive, process and answer the request. During that time, the Unity code has to wait for the request to be done and does nothing else. We solved this issue by creating a thread that will handle the requests and store the variable on the main thread. To begin, we must include the threading library by adding; using System.Threading; Next, we setup the function that we are starting in the the threads. AsynchronousReadFromArduino starts by writing the request to the Arduino with the WrtieToArduino function. The reading is enclosed in a try-catch bloc, if the read timeout, the variables remain null and the OnArduinoInfoFail function is called instead of the OnArduinoInfoReceive. Next we define the OnArduinoInfoFail and OnArduinoInfoReceive functions. For this instructable, we print the results to the console but you could store the results into the variables you need for your project. 
private void OnArduinoInfoFail() { Debug.Log("Reading failed"); } private void OnArduinoInfoReceived(string rotation, string speed) { Debug.Log("Readin Sucessfull"); Debug.Log("First Value : " + rotation); Debug.Log("Second Value : " + speed); } The last step is to start and stop the threads that will request the values from the Arduino. We must ensure that the last thread is done with it’s last task before starting a new one. Otherwise, multiple requests could be done to the Arduino at a single time which could confuse the Arduino/Unity and yield unpredictable results. private Thread activeThread = null; void Update() { if (activeThread == null || !activeThread.IsAlive) { activeThread = new Thread(AsynchronousReadFromArduino); activeThread.Start(); } } If you compare the performance of the code with the one we wrote at the section 5, the performance should be significantly improved. private void OnArduinoInfoFail() { Debug.Log("Reading failed"); } Step 5: Where Next? We prepared a demo that you can download on our Github (), download the code and the game and ride through our track. It’s all set up for a quick workout and we hope it can give you a taste of what you could build if you use what we taught you with this instructable. Credits Project contributors Alexandre Doucet (_Doucet_) Maxime Boudreau (MxBoud) External ressources [The Unity game engine]() This project started after we read the tutorial by Allan Zucconi "how to integrate Arduino with Unity" (... ) The request from the Arduino are handled using the SerialCommand library ( ) 2 Discussions This is great. I would love to theme the video like the cycles in the Black Mirror episode 15 million merits. Haha a track with that theme would be awsome. Were still looking for cool track and theme ideas. I might have to go rewatch the episode for inspiration.
https://www.instructables.com/id/Infinity-Bike-Indoors-Bike-Training-Video-Game/
CC-MAIN-2018-39
refinedweb
2,190
62.88
Cannot load null class!
John Ament, May 13, 2010 1:05 PM

Hey there. Tinkering with Infinispan 4.1 Hotrod client/server mode. Run into a few issues.

1. Does Infinispan make any assumptions about how the client is connecting? E.g. does it expect a bind, send data, receive data, unbind type of behavior, or can it work if the client stays connected for a long time? (Or does it not matter?)

Thoughts? I am able to connect to the server and it writes the first time, just not any other times.

1. Re: Cannot load null class!
Supin Ko, May 13, 2010 4:32 PM (in response to John Ament)

I got something similar when it turned out I was instantiating multiple RemoteCacheManagers and one or more hadn't been started.

HTH, Supin

2. Re: Cannot load null class!
John Ament, May 13, 2010 6:40 PM (in response to Supin Ko)

Agreed. After some more testing I noticed that it would consistently fail if start was false and I did not call start. However, my other problem remains. After doing some more testing I found that there's a sort of idle timeout on the remote cache manager, or something like that. I set up a singleton EJB that created a remote cache manager; if I hit it often and quickly, it stayed up. If I left the app running, went and talked to someone about the weather for 10 minutes and came back, the result was an EOFException.

3. Re: Cannot load null class!
Manik Surtani, May 14, 2010 4:43 AM (in response to John Ament)

Are you guys using 4.1.0.Beta1? Or is this a snapshot?

4. Re: Cannot load null class!
John Ament, May 14, 2010 5:03 AM (in response to Manik Surtani)

I am using 4.1.0.Beta1

5. Re: Cannot load null class!
Mircea Markus, May 14, 2010 9:46 AM (in response to John Ament)

Starting the cache manager before using the caches associated with it is mandatory. The boolean flag does just that: it indicates whether the cache manager should be started or not. That's not clear in the javadoc nor in the exception message. I've created ISPN-440 for that.

6.
Re: Cannot load null class!
John Ament, May 14, 2010 9:55 AM (in response to Mircea Markus)

What about my first issue? That seems like a bigger problem to me. Seems like I randomly lose the connection back to a remote cache manager.

7. Re: Cannot load null class!
Mircea Markus, May 14, 2010 10:14 AM (in response to John Ament)

Looking into that right now. I've tried to reproduce it with a 10 min delay between calls, no luck. Trying with 20 mins. Any estimate on the time difference between the two calls? Also, if you get a stack trace it would be very helpful.

8. Re: Cannot load null class!
John Ament, May 14, 2010 10:40 AM (in response to Mircea Markus)

Here's a sample EJB that can consistently break it for me:

@Singleton
@Startup
public class SingletonRCM {
    private RemoteCacheManager rcm;

    @PostConstruct
    public void init() {
        rcm = new RemoteCacheManager("hell:11311", true);
    }

    public Cache<String,String> getStagingCache() {
        return rcm.getCache("staging");
    }
}

The exception itself is a broken pipe:

...:104)
at sun.nio.ch.IOUtil.write(IOUtil.java:75)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
at java.nio.channels.Channels.write(Channels.java:60)
at java.nio.channels.Channels.access$000(Channels.java:47)
at java.nio.channels.Channels$1.write(Channels.java:134)
at java.io.OutputStream.write(OutputStream.java:58)
at java.nio.channels.Channels$1.write(Channels.java:115)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.writeByte(TcpTransport.java:96)
...
117 more

Here's what the client does (JSF action method):

@EJB
SingletonRCM rcm;

public void doSomething() {
    Cache<String,String> dataCache = rcm.getStagingCache();
    dataCache.put(cacheKey, cacheValue);
    FacesContext ctx = FacesContext.getCurrentInstance();
    ctx.addMessage(null, new FacesMessage("Put data successfully."));
}

And a little bit more of the exception:

Caused by: org.infinispan.client.hotrod.exceptions.TransportException: java.io.IOException: Broken pipe
at org.infinispan.client.hotrod.impl.transport.VHelper.writeVLong(VHelper.java:48)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.writeVLong(TcpTransport.java:40)
at org.infinispan.client.hotrod.impl.HotrodOperationsImpl.writeHeader(HotrodOperationsImpl.java:276)
at org.infinispan.client.hotrod.impl.HotrodOperationsImpl.sendPutOperation(HotrodOperationsImpl.java:254)
at org.infinispan.client.hotrod.impl.HotrodOperationsImpl.put(HotrodOperationsImpl.java:113)
at org.infinispan.client.hotrod.impl.RemoteCacheImpl.put(RemoteCacheImpl.java:146)
at org.infinispan.CacheSupport.put(CacheSupport.java:30)

Hopefully this is a bit more info.

9. Re: Cannot load null class!
Mircea Markus, May 14, 2010 1:35 PM (in response to John Ament)

10. Re: Cannot load null class!
John Ament, May 16, 2010 9:39 AM (in response to Mircea Markus)

So, just tinkering with things a little bit this morning. There are a few typos in startServer.sh. I noticed that when I pass --idle_timeout=0, my issues seem to go away. I'll have to test it more thoroughly on the work dev server, but it seems like it is resolved by disabling timeouts. Are there any long-term ramifications to having timeouts disabled?

11. Re: Cannot load null class!
Mircea Markus, May 17, 2010 6:58 AM (in response to John Ament)
https://developer.jboss.org/thread/151935
As an avid reader of our blogs (we are sure you are), you might have noticed some topics regarding templating our controls or parts of them. As part of that came a minor catch: templating required the jQuery Templates plugin (and as long as we are waiting for 12.1, it still does). As part of a roadmap change, it was decided that the plugin will not be developed past its current beta status and that an entirely new one is in planning. We strive to provide a well-rounded package of functionality with our jQuery product, so you wouldn't usually need to look for additional solutions to support our controls. Along with that, we make sure to pack in plenty of the advantages new technologies provide, so you get a product that goes… fast! I pointed out those two ideology pillars of the product because both of them led to the creation of our very own template engine: we made sure our users can keep using such features while being certain we would provide support, and, most importantly, we strive to make the most out of it. We offer controls supporting very large amounts of data, and now we offer the engine to template them swiftly. Let's provide some basics for those unfamiliar with templating. This technique provides a way to define one layout (to be applied to many records) and to indicate, via a special syntax recognized by the template-providing scripts, how and where each piece of your data should go. You can then relax while the latter take your data, find its place, replace it, and build your layout. It is, in essence, creating a layout tailored to your data on the client side, and that can reduce the load on your server! Now you have that much greater control over where UI generation puts its weight.
Also, it needs to support the current functionality covered by jQuery Templates, and for that reason differences in how it functions are kept to the bare minimum as far as the developer using the controls is concerned. If you are already using templating for our grid, we would not ask you to learn something completely new, and you might not even need to change your code. And that is, of course, not the full story yet: while it was needed for some controls, the template engine was designed to be a fully functional stand-alone feature. That means you can use the engine to template any data anywhere into anything, really. All you really need is jQuery defined and the igTemplating script loaded. At this point you have a tmpl method available in the Infragistics namespace, $.ig, that you can call to 'render' your template. The engine is fully integrated in our tree, combo and grid controls, which means you don't even have to reference it separately: it's enough to load one of the above and templating will be made available as well. So, to make it short, you get a high-performance templating engine capable of working stand-alone as well as integrated with our controls, in a format you might already be familiar with, or which at least is easy to learn. In a snap! So, let's talk performance. If and until we publish results from some official testing, you might have to trust my word for it, but as we said before, it was designed with that main goal in mind. That led to separating the template-producing logic into two branches: one handles the simple property-to-data substitution, and it is lightning fast. The second handles more complex operations, such as conditions and loops (explained below); those operations require creating a JavaScript function to apply the 'rules', and it must be executed on every pass. That means it would be somewhat slower.
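To make the fast branch concrete, here is a rough sketch, my own illustration and not the engine's actual code, of what a pure substitution-only pass amounts to: one regex sweep over the template per record, with no compiled function involved.

```javascript
// Illustrative sketch only — NOT the actual igTemplating source.
// Plain ${property} tokens are substituted with a single regex pass.
function simpleSubstitute(template, record) {
  // Replace every ${propertyName} token with the record's value;
  // unknown tokens are left untouched.
  return template.replace(/\$\{(\w+)\}/g, function (match, prop) {
    return record[prop] !== undefined ? String(record[prop]) : match;
  });
}

// Applying one layout to many records, as a template engine does:
var products = [
  { Name: "Product 1", Price: 100 },
  { Name: "Product 2", Price: 200 }
];
var rows = products.map(function (p) {
  return simpleSubstitute("<li>${Name}: ${Price}</li>", p);
});
console.log(rows.join(""));
// "<li>Product 1: 100</li><li>Product 2: 200</li>"
```

Because this branch is nothing more than string scanning and concatenation, it is easy to see why it outpaces the branch that has to build and invoke a generated function for every record.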
One other thing that can affect performance is encoding, and you are, of course, given a way to control it. Furthermore, the engine outputs a string rather than objects (which makes for great performance), and some preliminary testing attempts showed mind-boggling speeds when you use it for simple substitution without encoding. Even if encoding is required, it will outperform one of the available alternatives (which is itself much faster than jQuery Templates)! The engine recognizes a certain grammar, if you will. Along with all your data properties (tokens) in the template, the engine has defined regular expressions to help it identify comments (enclosed with '#') and substitutions (data tokens) in their respective formats. The difference with encoding, besides performance, is that if your data itself contains HTML, the output will be different. Say your data has markup inside the Name property: depending on whether you use {{html Name}} or ${Name} in your template, you will get the raw markup rendered or the encoded (escaped) text accordingly.

A very basic substitution: assume you have an HTML ul element with id 'productListing', some product data, and a script applying a template to it. The result is a list item generated for each record (its exact look depending on the page's default styles). As you can see, you can build HTML markup, mark places for data, and it will be generated for each of your records. Things get a bit more interesting if you have some hierarchical data: notice a token like 'Test.Inner1' — navigating nested properties is very easy to do.

What happens when you don't want to apply the template blindly to all records? You build a conditional block to handle that. The engine recognizes a number of directives, such as logical blocks ({{if}}, else-if and {{else}}), and a reserved $i identifier for the current index in a loop. All that makes for some pretty interesting capabilities. You can go very deep with this, as you can nest multiple else-if conditions inside the 'if', and the 'else' will be your default.
All blocks must be closed to be recognized, and their closing tag is the same with an added slash. However, there is something very important to note: due to the way blocks are recognized, it is not possible to have two consecutive 'if' blocks, as in {{if condition}} /* work 1 */ {{/if}} {{if condition2}} /* work 2 */ {{/if}}, because the very last closing tag will be assigned to the very first opening one and therefore will not allow for such logic.

At this point, though, let's have a relatively simple example. Same as the first simple substitution, let's have our products; this time, however, let's check if we have enough items in stock to cover our orders and provide some sort of visual cue for insufficient ones. Yes, we can do all that in the template, and the result is a very simple warning system for our user. You can apply the same type of logic to check the record index (using the reserved '$i') to apply special templates to select rows, and naturally create alternative row layouts, and so on.

Of course, you are not limited to just local arrays: the templating engine can take data from functions, and it can pass parameters if required. Along with that, you have all the freedom to produce markup of any kind, and that includes placing data inside HTML elements' classes, IDs… you name it! Here's an example where a few styles are applied via class names, and those class names are available in our data: the template simply causes a DIV to be generated for each product, with a class that contains its color (by which the CSS is applied), while calling the data function with a parameter we provided — the number of products to display. The igTemplating engine truly gives you a lot of freedom in terms of how to place your content, whether you use it on its own or do some eye-catching wizardry with the tree, combo or grid, enhancing them with functionality and style at the same time.
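The second, "compiled" branch described earlier can be pictured the same way. The sketch below is, again, my own illustration of the idea rather than the actual implementation: a template containing an {{if}} block conceptually becomes a generated function executed once per record, here implementing the stock-warning example by hand.

```javascript
// Sketch of what a compiled template with a condition boils down to —
// an illustration, not igTemplating's generated code.
function renderStockRow(product, i) {
  // The {{if InStock < Ordered}} logic from the warning-system example:
  var cls = product.InStock < product.Ordered ? "warn" : "ok";
  // $i-style loop index plus plain data tokens:
  return '<tr class="' + cls + '"><td>' + i + "</td><td>" +
    product.Name + "</td></tr>";
}

var products = [
  { Name: "Bolts", InStock: 10, Ordered: 4 },
  { Name: "Nuts", InStock: 2, Ordered: 9 }
];
// Rows for under-stocked products come out with the "warn" class.
var html = products.map(renderStockRow).join("");
```

Running a function per record is inherently more work than the single-pass substitution shown earlier, which is exactly the performance trade-off the two branches exist to manage.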
Since you can easily build markup, you can take data that was not readily available and essentially format it in a way that a UI control can transform. Yes, that means templating a table to then turn it into an igGrid, for example, or a table into another table and… well, you get the point. You can assign classes and IDs, and you can target those to turn them into completely different UI. You can also enjoy the functionality that was so far available for templating our jQuery tree and grid controls, and soon you will also have a column template for the grids, in addition to the current row option. The good news around customizing your data presentation is far from over, and this new templating engine is ready to take care of it and, most importantly, do it extremely fast! Expect a demo for this blog once 12.1 is made available, along with samples and more ways you can make the best out of such functionality. Meanwhile, you can check out blogs that demonstrate templating (with jQuery Templates, though they should still give you an idea of the capabilities), and as usual you can check out the samples for each of those controls for more. In my last blog I mentioned that our jQuery Grids are getting a new column template option as well.
http://www.infragistics.com/community/blogs/damyan_petev/archive/2012/04/09/new-high-performance-jquery-templating-engine.aspx
Black Camera Wrist Strap / Hand Grip for Canon Nikon Sony Olympus SLR/DSLR

Description:
100% brand new and high quality
Color: Black
Material: Faux leather
Non-slip surface design; one size fits all; adjustable strap
Ideal SLR/DSLR camera accessory
The plastic attachment screws neatly into the tripod socket of your camera
Provides extra security, comfort and better handling
Suitable for: Canon / Nikon / Sony / Pentax / Minolta / Panasonic / Olympus / Kodak and more DSLR, SLR and DC digital cameras
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Package content: 1 x Black Camera Wrist Strap (accessory only; the camera is not included)

WORLDWIDE SHIPPING (except Belgium, Luxembourg, Italy and Croatia). All items will be dispatched within 1 business day by Hong Kong Airmail after the payment is cleared. Items will arrive in 7~30 business days; the arrival time depends on several factors and differs by area. Once you purchase, please DON'T leave negative or neutral feedback if you haven't received the item within 30 days, because we have mentioned the shipping time repeatedly. Instead, please contact us. We will track the shipment and get back to you as soon as possible with a reply. Our goal is customer satisfaction!

Delivery time: the item will be sent to the address listed in your PayPal account. Please make sure your shipping address is correct. You will be notified once the shipment has been made. For any exchange or refund, we need the original receipt, and the product must be in its original condition, including the box, packaging, and all accessories. We will offer a replacement, reshipment, or refund to make sure you are 100% happy if:
- you have received a defective or wrong item
- the item is damaged during transit or defective
- the item is not as described

Please contact us via eBay system emails for further help. We only accept PayPal as the payment method. The item will be shipped to the buyer's CORRECT and VERIFIED PayPal address. We are not responsible for packages lost due to an incorrect address.
No combined shipping for multiple auction items. Payments not received within 7 days will be subject to a non-paying bidder alert. If you have any problem, please feel free to contact us before leaving neutral/negative feedback (1-4 stars); we will try our best to solve it. Once the item arrives, please kindly fill in all (5-star) ratings in the Detailed Seller Ratings via eBay, and we will do the same for you. Please contact us to get a better solution; we will try our best to reply to your emails within 24 hours. For a defective (not working) item, we exchange only for the same item, and the buyer is responsible for the return shipping fee. We do not ship to Italy. The buyer is responsible for all applicable customs, import duties, and VAT. In case there is an inspection by the local customs or post office, delivery will be affected accordingly. If you buy several items at the same time, we may send your multiple items in one package so that you receive them together safely. If you want the packages sent separately, please add a note to remind me when you make the payment with PayPal. I sincerely hope you can consider this position; we will appreciate it greatly.
http://www.ebay.com.au/itm/Black-Camera-Wrist-Strap-Hand-Grip-for-Canon-Nikon-Sony-Olympus-SLR-DSLR-/171093382361?pt=AU_Cameras_Photographic_Accessories&hash=item27d5f5d4d9
I wanted to build a handbook, but one is already available, so let's use it for our learning.

Working on a modern JavaScript application powered by React is awesome until you realize that there are a couple of problems related to rendering all the content on the client-side. First, the page takes longer to become visible to the user, and search engines have a harder time indexing it. Next.js handles server-side rendering for you. Here is a non-exhaustive list of the main Next.js features:

- Hot code reloading: Next.js reloads the page when it detects any change saved to disk.
- Automatic routing: any URL is mapped to the filesystem, to files put in the pages folder, and you don't need any configuration (you have customization options, of course).
- Single file components: using styled-jsx, completely integrated as it's built by the same team, it's trivial to add styles scoped to the component.
- Server rendering: you can render React components on the server side, before sending the HTML to the client.
- Ecosystem compatibility: Next.js plays well with the rest of the JavaScript, Node, and React ecosystem.
- Automatic code splitting: pages are rendered with just the libraries and JavaScript that they need, no more. Instead of generating one single JavaScript file containing all the app code, the app is broken up automatically by Next.js into several different resources. Loading a page only loads the JavaScript necessary for that particular page. Next.js does that by analyzing the resources imported. If only one of your pages imports the Axios library, for example, that specific page will include the library in its bundle. This ensures your first page load is as fast as it can be, and only future page loads (if they are ever triggered) will send the JavaScript needed to the client. There is one notable exception: frequently used imports are moved into the main JavaScript bundle if they are used in at least half of the site pages.
- Prefetching: the Link component, used to link together different pages, supports a prefetch prop which automatically prefetches page resources (including code missing due to code splitting) in the background.
- Dynamic components: you can import JavaScript modules and React components dynamically.
- Static exports: using the next export command, Next.js allows you to export a fully static site from your app.
- TypeScript support: Next.js is written in TypeScript and as such comes with excellent TypeScript support.

Next.js, Gatsby, and create-react-app are amazing tools we can use to power our applications. Let's first say what they have in common. They all have React under the hood, powering the entire development experience. They also abstract webpack and all those low-level things that we used to configure manually in the good old days. create-react-app does not help you generate a server-side-rendered app easily. Anything that comes with it (SEO, speed…) is only provided by tools like Next.js and Gatsby.

When is Next.js better than Gatsby? They can both help with server-side rendering, but in 2 different ways. The end result using Gatsby is a static site generator, without a server. You build the site, and then you deploy the result of the build process statically on Netlify or another static hosting site. Next.js provides a backend that can server-side render a response to a request, allowing you to create a dynamic website, which means you will deploy it on a platform that can run Node.js. Next.js can generate a static site too, but I would not say it's its main use case. If my goal was to build a static site, I'd have a hard time choosing, and perhaps Gatsby has a better ecosystem of plugins, including many for blogging in particular. Gatsby is also heavily based on GraphQL, something you might really like or dislike depending on your opinions and needs.

To install Next.js, you need to have Node.js installed. Make sure that you have the latest version of Node. Check by running node -v in your terminal, and compare it to the latest LTS version listed on the Node.js website.
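The "at least half of the pages" code-splitting heuristic from the features list can be illustrated with a toy model. This is my own sketch of the idea, not Next.js's actual bundler logic: count how many pages use each import, promote the widely shared ones, and leave the rest in per-page bundles.

```javascript
// Toy model of the bundling heuristic — an illustration, not Next.js internals.
function splitBundles(pages) {
  const names = Object.keys(pages);
  const counts = {};
  for (const page of names) {
    for (const dep of pages[page]) counts[dep] = (counts[dep] || 0) + 1;
  }
  // Imports used by at least half of the pages go to the shared bundle.
  const shared = Object.keys(counts).filter(
    dep => counts[dep] >= names.length / 2
  );
  // Everything else stays in the bundle of the page that uses it.
  const perPage = {};
  for (const page of names) {
    perPage[page] = pages[page].filter(dep => !shared.includes(dep));
  }
  return { shared, perPage };
}

const result = splitBundles({
  "/": ["react"],
  "/blog": ["react", "axios"],
  "/about": ["react"]
});
// react is used by all pages -> shared bundle;
// axios is used by one page only -> it stays in /blog's bundle.
```

This mirrors the Axios example above: a library imported by a single page never burdens the first load of the others.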
npm If you have any trouble at this stage, I recommend the following tutorials I wrote for you: Now that you have Node, updated to the latest version, and npm, we're set! We can choose 2 routes now: using create-next-app or the classic approach which involves installing and setting up a Next app manually. create-next-app If you're familiar with create-react-app, create-next-app is the same thing - except it creates a Next app instead of a React app, as the name implies. I assume you have already installed Node.js, which, from version 5.2 (2+ years ago at the time of writing), comes with the npx command bundled. This handy tool lets us download and execute a JavaScript command, and we'll use it like this: npx npx create-next-app The command asks the application name (and creates a new folder for you with that name), then downloads all the packages it needs (react, react-dom, next), sets the package.json to: react react-dom package.json and you can immediately run the sample app by running npm run dev: npm run dev And here's the result on: This is the recommended way to start a Next.js application, as it gives you structure and sample code to play with. There's more than just that default sample application; you can use any of the examples stored at using the --example option. For example try: --example npx create-next-app --example blog-starter Which gives you an immediately usable blog instance with syntax highlighting too: You can avoid create-next-app if you feel like creating a Next app from scratch. Here's how: create an empty folder anywhere you like, for example in your home folder, and go into it: mkdir nextjs cd nextjs and create your first Next project directory: mkdir firstproject cd firstproject Now use the npm command to initialize it as a Node project: npm init -y The -y option tells npm to use the default settings for a project, populating a sample package.json file. 
Now install Next and React:

```bash
npm install next react react-dom
```

Your project folder should now have 2 new entries: the package-lock.json file and the node_modules folder.

Open the project folder using your favorite editor. My favorite editor is VS Code. If you have that installed, you can run code . in your terminal to open the current folder in the editor (if the command does not work for you, see the VS Code documentation on setting up the command line launcher).

Open package.json, which now has this content:

```json
{
  "name": "firstproject",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "next": "^9.1.2",
    "react": "^16.11.0",
    "react-dom": "^16.11.0"
  }
}
```

and replace the scripts section with:

```json
"scripts": {
  "dev": "next",
  "build": "next build",
  "start": "next start"
}
```

to add the Next.js build commands, which we're going to use soon.

Tip: use "dev": "next -p 3001" to change the port and run, in this example, on port 3001.

Now create a pages folder, and add an index.js file. In this file, let's create our first React component. We're going to use it as the default export:

```jsx
const Index = () => (
  <div>
    <h1>Home page</h1>
  </div>
);

export default Index;
```

Now using the terminal, run npm run dev to start the Next development server. This will make the app available on port 3000, on localhost. Open http://localhost:3000 in your browser to see it.
If you view the source of the page, you'll see the <div><h1>Home page</h1></div> snippet in the HTML body, along with a bunch of JavaScript files - the app bundles. <div><h1>Home page</h1></div> body We don't need to set up anything, SSR (server-side rendering) is already working for us. The React app will be launched on the client, and will be the one powering interactions like clicking a link, using client-side rendering. But reloading a page will re-load it from the server. And using Next.js there should be no difference in the result inside the browser - a server-rendered page should look exactly like a client-rendered page. When we viewed the page source, we saw a bunch of JavaScript files being loaded: Let's start by putting the code in an HTML formatter to get it formatted better, so we humans can get a better chance at understanding it: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1" /> <meta name="next-head-count" content="2" /> <link rel="preload" href="/_next/static/development/pages/index.js?ts=1572863116051" as="script" /> <link rel="preload" href="/_next/static/development/pages/_app.js?ts=1572863116051" as="script" /> <link rel="preload" href="/_next/static/runtime/webpack.js?ts=1572863116051" as="script" /> <link rel="preload" href="/_next/static/runtime/main.js?ts=1572863116051" as="script" /> </head> <body> <div id="__next"> <div> <h1>Home page</h1> </div> </div> <script src="/_next/static/development/dll/dll_01ec57fc9b90d43b98a8.js?ts=1572863116051"></script> <script id="__NEXT_DATA__" type="application/json"> { "dataManager": "[]", "props": { "pageProps": {} }, "page": "/", "query": {}, "buildId": "development", "nextExport": true, "autoExport": true } </script> <script async="" data-</script> <script async="" data-</script> <script src="/_next/static/runtime/webpack.js?ts=1572863116051" async=""></script> <script 
src="/_next/static/runtime/main.js?ts=1572863116051" async=""></script> </body> </html> We have 4 JavaScript files being declared to be preloaded in the head, using rel="preload" as="script": head rel="preload" as="script" /_next/static/development/pages/index.js /_next/static/development/pages/_app.js /_next/static/runtime/webpack.js /_next/static/runtime/main.js This tells the browser to start loading those files as soon as possible, before the normal rendering flow starts. Without those, scripts would be loaded with an additional delay, and this improves the page loading performance. Then those 4 files are loaded at the end of the body, along with /_next/static/development/dll/dll_01ec57fc9b90d43b98a8.js (31k LOC), and a JSON snippet that sets some defaults for the page data: /_next/static/development/dll/dll_01ec57fc9b90d43b98a8.js <script id="__NEXT_DATA__" type="application/json"> { "dataManager": "[]", "props": { "pageProps": {} }, "page": "/", "query": {}, "buildId": "development", "nextExport": true, "autoExport": true } </script> The 4 bundle files loaded are already implementing one feature called code splitting. The index.js file provides the code needed for the index component, which serves the / route, and if we had more pages we'd have more bundles for each page, which will then only be loaded if needed - to provide a more performant load time for the page. index / Did you see that little icon at the bottom right of the page, which looks like a lightning? This icon, which is only visible in development mode of course, tells you the page qualifies for automatic static optimization, which basically means that it does not depend on data that needs to be fetched at invokation time, and it can be prerendered and built as a static HTML file at build time (when we run npm run build). npm run build Next kicks in to reload the code in the application automatically. 
It's a really nice way to immediately determine if the app has already been compiled and you can test a part of it you're working on. Next.js is based on React, so one very useful tool we absolutely need to install (if you haven't already) is the React Developer Tools. Available for both Chrome and Firefox, the React Developer Tools are an essential instrument you can use to inspect a React application. Now, the React Developer Tools are not specific to Next.js but I want to introduce them because you might not be 100% familiar with all the tools React provides. It's best to go a little into debugging tooling than assuming you already know them. They provide an inspector that reveals the React components tree that builds your page, and for each component you can go and check the props, the state, hooks, and lots more. Once you have installed the React Developer Tools, you can open the regular browser devtools (in Chrome, it's right-click in the page, then click Inspect) and you'll find 2 new panels: Components and Profiler. Inspect If you move the mouse over the components, you'll see that in the page, the browser will select the parts that are rendered by that component. If you select any component in the tree, the right panel will show you a reference to the parent component, and the props passed to it: You can easily navigate by clicking around the component names. You can click the eye icon in the Developer Tools toolbar to inspect the DOM element, and also if you use the first icon, the one with the mouse icon (which conveniently sits under the similar regular DevTools icon), you can hover an element in the browser UI to directly select the React component that renders it. You can use the bug icon to log a component data to the console. bug This is pretty awesome because once you have the data printed there, you can right-click any element and press "Store as a global variable". 
For example here I did it with the url prop, and I was able to inspect it in the console using the temporary variable assigned to it, temp1: url temp1 Using Source Maps, which are loaded by Next.js automatically in development mode, from the Components panel we can click the <> code and the DevTools will switch to the Source panel, showing us the component source code: <> The Profiler tab is even more awesome, if possible. It allows us to record an interaction in the app, and see what happens. I cannot show an example yet, because it needs at least 2 components to create an interaction, and we have just one now. I'll talk about this later. I showed all screenshots using Chrome, but the React Developer Tools works in the same way in Firefox: In addition to the React Developer Tools, which are essential to building a Next.js application, I want to emphasize 2 ways to debug Next.js apps. The first is obviously console.log() and all the other Console API tools. The way Next apps work will make a log statement work in the browser console OR in the terminal where you started Next using npm run dev. console.log() In particular, if the page loads from the server, when you point the URL to it, or you hit the refresh button / cmd/ctrl-R, any console logging happens in the terminal. Subsequent page transitions that happen by clicking the mouse will make all console logging happen inside the browser. Just remember if you are surprised by missing logging. Another tool that is essential is the debugger statement. Adding this statement to a component will pause the browser rendering the page: debugger Really awesome because now you can use the browser debugger to inspect values and run your app one line at a time. You can also use the VS Code debugger to debug server-side code. I mention this technique and this tutorial to set this up. 
Now that we have a good grasp of the tools we can use to help us develop Next.js apps, let's continue from where we left our first app: I want to add a second page to this website, a blog. It's going to be served into /blog, and for the time being it will just contain a simple static page, just like our first index.js component: After saving the new file, the npm run dev process already running is already capable of rendering the page, without the need to restart it. When we hit the URL we have the new page: and here's what the terminal told us: Now the fact that the URL is /blog depends on just the filename, and its position under the pages folder. You can create a pages/hey/ho page, and that page will show up on the URL. pages/hey/ho What does not matter, for the URL purposes, is the component name inside the file. Try going and viewing the source of the page, when loaded from the server it will list /_next/static/development/pages/blog.js as one of the bundles loaded, and not /_next/static/development/pages/index.js like in the home page. This is because thanks to automatic code splitting we don't need the bundle that serves the home page. Just the bundle that serves the blog page. /_next/static/development/pages/blog.js We can also just export an anonymous function from blog.js: blog.js export default () => ( <div> <h1>Blog</h1> </div> ); or if you prefer the non-arrow function syntax: export default function() { return ( <div> <h1>Blog</h1> </div> ); } Now that we have 2 pages, defined by index.js and blog.js, we can introduce links. Normal HTML links within pages are done using the a tag: a <a href="/blog">Blog</a> We can't do do that in Next.js. Why? We technically can, of course, because this is the Web and on the Web things never break (that's why we can still use the <marquee> tag. But one of the main benefits of using Next is that once a page is loaded, transitions to other page are very fast thanks to client-side rendering. 
<marquee> If you use a plain a link: const Index = () => ( <div> <h1>Home page</h1> <a href="/blog">Blog</a> </div> ); export default Index; Now open the DevTools, and the Network panel in particular. The first time we load we get all the page bundles loaded: Now if you click the "Preserve log" button (to avoid clearing the Network panel), and click the "Blog" link, this is what happens: We got all that JavaScript from the server, again! But.. we don't need all that JavaScript if we already got it. We'd just need the blog.js page bundle, the only one that's new to the page. To fix this problem, we use a component provided by Next, called Link. We import it: import Link from 'next/link'; and then we use it to wrap our link, like this: import Link from 'next/link'; const Index = () => ( <div> <h1>Home page</h1> <Link href="/blog"> <a>Blog</a> </Link> </div> ); export default Index; Now if you retry the thing we did previously, you'll be able to see that only the blog.js bundle is loaded when we move to the blog page: and the page loaded so faster than before, the browser usual spinner on the tab didn't even appear. Yet the URL changed, as you can see. This is working seamlessly with the browser History API. This is client-side rendering in action. What if you now press the back button? Nothing is being loaded, because the browser still has the old index.js bundle in place, ready to load the /index route. It's all automatic! /index In the previous chapter we saw how to link the home to the blog page. A blog is a great use case for Next.js, one we'll continue to explore in this chapter by adding blog posts. Blog posts have a dynamic URL. For example a post titled "Hello World" might have the URL /blog/hello-world. A post titled "My second post" might have the URL /blog/my-second-post. /blog/hello-world /blog/my-second-post, like the ones we mentioned above: /blog/hello-world, /blog/my-second-post and more. 
pages/blog/[id].js In the file name, [id] inside the square brackets means that anything that's dynamic will be put inside the id parameter of the query property of the router. [id] id Ok, that's a bit too many things at once. What's the router? The router is a library provided by Next.js. We import it from next/router: import { useRouter } from 'next/router'; and once we have useRouter, we instantiate the router object using: useRouter const router = useRouter(); Once we have this router object, we can extract information from it. In particular we can get the dynamic part of the URL in the [id].js file by accessing router.query.id. [id].js router.query.id The dynamic part can also just be a portion of the URL, like post-[id].js. The dynamic part can also just be a portion of the URL, like post-[id].js. post-[id].js So let's go on and apply all those things in practice. Create the file pages/blog/[id].js: import { useRouter } from 'next/router'; export default () => { const router = useRouter(); return ( <> <h1>Blog post</h1> <p>Post id: {router.query.id}</p> </> ); }; Now if you go to the router, you should see this: We can use this id parameter to gather the post from a list of posts. From a database, for example. To keep things simple we'll add a posts.json file in the project root folder: posts.json { "test": { "title": "test post", "content": "Hey some post content" }, "second": { "title": "second post", "content": "Hey this is the second post content" } } Now we can import it and lookup the post from the id key: import { useRouter } from 'next/router'; import posts from '../../posts.json'; export default () => { const router = useRouter(); const post = posts[router.query.id]; return ( <> <h1>{post.title}</h1> <p>{post.content}</p> </> ); }; Reloading the page should show us this result: But it's not! Instead, we get an error in the console, and an error in the browser, too: Why? Because.. 
during rendering, when the component is initialized, the data is not there yet.

We'll see how to provide the data to the component with getInitialProps in the next lesson. For now, add a little if (!post) return <p></p> check before returning the JSX:

```js
import { useRouter } from 'next/router';
import posts from '../../posts.json';

export default () => {
  const router = useRouter();
  const post = posts[router.query.id];
  if (!post) return <p></p>;

  return (
    <>
      <h1>{post.title}</h1>
      <p>{post.content}</p>
    </>
  );
};
```

Now things should work. Initially the component is rendered without the dynamic router.query.id information. After rendering, Next.js triggers an update with the query value and the page displays the correct information.

And if you view source, there is that empty <p> tag in the HTML. We'll soon fix this issue that fails to implement SSR, which harms both loading times for our users, SEO and social sharing, as we already discussed.

We can complete the blog example by listing those posts in pages/blog.js:

```js
import posts from '../posts.json';

const Blog = () => (
  <div>
    <h1>Blog</h1>
    <ul>
      {Object.entries(posts).map((value, index) => {
        return <li key={index}>{value[1].title}</li>;
      })}
    </ul>
  </div>
);

export default Blog;
```

And we can link them to the individual post pages, by importing Link from next/link and using it inside the posts loop:

```js
import Link from 'next/link';
import posts from '../posts.json';

const Blog = () => (
  <div>
    <h1>Blog</h1>
    <ul>
      {Object.entries(posts).map((value, index) => {
        return (
          <li key={index}>
            <Link href="/blog/[id]" as={'/blog/' + value[0]}>
              <a>{value[1].title}</a>
            </Link>
          </li>
        );
      })}
    </ul>
  </div>
);

export default Blog;
```

I mentioned previously how the Link Next.js component can be used to create links between 2 pages, and when you use it, Next.js transparently handles frontend routing for us, so when a user clicks a link the frontend takes care of showing the new page without triggering a new client/server request and response cycle. There's another thing <Link> does: when it appears in the viewport, Next.js prefetches the page bundle it points to. This prefetching is only enabled in production mode (build the site with npm run build, then run it with npm run start), and it starts once the page's load event has fired (as opposed to DOMContentLoaded).
Any other Link tag not in the viewport will be prefetched when the user scrolls.

Prefetching is automatic on high speed connections (Wifi and 3G+ connections), unless the browser sends the Save-Data HTTP header.

You can opt out from prefetching individual Link instances by setting the prefetch prop to false:

```js
<Link href="/a-link" prefetch={false}>
  <a>A link</a>
</Link>
```

One common need is to detect the current URL, for example to highlight the active link in a navigation. We can build a small wrapper component for this, for example in components/Link.js.

In this component, we'll first import React from react, Link from next/link and the useRouter hook from next/router. Inside the component we determine if the current path name matches the href prop of the component, and if so we append the selected class to the children. We finally return this children with the updated class, using React.cloneElement():

```js
import React from 'react';
import Link from 'next/link';
import { useRouter } from 'next/router';

export default ({ href, children }) => {
  const router = useRouter();

  let className = children.props.className || '';
  if (router.pathname === href) {
    className = `${className} selected`;
  }

  return <Link href={href}>{React.cloneElement(children, { className })}</Link>;
};
```

We already saw how to use the Link component to declaratively handle routing in Next.js apps. It's really handy to manage routing in JSX, but sometimes you need to trigger a routing change programmatically.

In this case, you can access the Next.js router directly, provided in the next/router package, and call its push() method.

Here's an example of accessing the router:

```js
import { useRouter } from 'next/router';

export default () => {
  const router = useRouter();
  //...
};
```

Once we get the router object by invoking useRouter(), we can use its methods.

This is the client side router, so methods should only be used in frontend facing code. The easiest way to ensure this is to wrap calls in the useEffect() React hook, or inside componentDidMount() in React stateful components.

The ones you'll likely use the most are push() and prefetch().

push() allows us to programmatically trigger a URL change, in the frontend:

```js
router.push('/login');
```

prefetch() allows us to programmatically prefetch a URL, useful when we don't have a Link tag which automatically handles prefetching for us:

```js
router.prefetch('/login');
```

Full example:

```js
import { useEffect } from 'react';
import { useRouter } from 'next/router';

export default () => {
  const router = useRouter();

  useEffect(() => {
    router.prefetch('/login');
  });
};
```

You can also use the router to listen for route change events.

In the previous chapter we had an issue with dynamically generating the post page, because the component required some data up front, and when we tried to get the data from the JSON file we got an error.

How do we solve this? And how do we make SSR work for dynamic routes?

We must provide the component with props, using a special function called getInitialProps() which is attached to the component.

To do so, first we name the component:

```js
const Post = () => {
  //...
};

export default Post;
```

then we add the function to it:

```js
const Post = () => {
  //...
};

Post.getInitialProps = () => {
  //...
};

export default Post;
```

This function gets an object as its argument, which contains several properties. In particular, the thing we are interested in now is the query object, the one we used previously to get the post id.

So we can get it using the object destructuring syntax:

```js
Post.getInitialProps = ({ query }) => {
  //...
};
```

Now we can return the post from this function:

```js
Post.getInitialProps = ({ query }) => {
  return {
    post: posts[query.id],
  };
};
```

And we can also remove the import of useRouter, getting the post from the props property passed to the Post component:

```js
import posts from '../../posts.json';

const Post = (props) => {
  return (
    <div>
      <h1>{props.post.title}</h1>
      <p>{props.post.content}</p>
    </div>
  );
};

Post.getInitialProps = ({ query }) => {
  return {
    post: posts[query.id],
  };
};

export default Post;
```

Now there will be no error, and SSR works as expected, as you can verify by checking view source.

The getInitialProps function will be executed on the server side, but also on the client side when we navigate to a new page using the Link component as we did.

It's important to note that in the context object it receives, in addition to the query object, getInitialProps gets these other properties:

- pathname: the path section of the current URL
- asPath: the actual path (including the query) shown in the browser

which in the case of calling the /blog/test page will respectively result in:

- /blog/[id]
- /blog/test

And in the case of server side rendering, it will also receive:

- req: the HTTP request object
- res: the HTTP response object
- err: an error object

req and res will be familiar to you if you've done any Node.js coding.

How do we style React components in Next.js? We have a lot of freedom, because we can use whatever library we prefer.

But Next.js comes with styled-jsx built-in, because that's a library built by the same people working on Next.js. And it's a pretty cool library that provides us scoped CSS, which is great for maintainability because the CSS only affects the component it's applied to.

I think this is a great approach to writing CSS, without the need to apply additional libraries or preprocessors that add complexity.
To add CSS to a React component in Next.js we insert it inside a snippet in the JSX, which starts with <style jsx>{` and ends with `}</style>.

Inside this weird-looking block we write plain CSS, as we'd do in a .css file:

```js
<style jsx>{`
  h1 {
    font-size: 3rem;
  }
`}</style>
```

You write it inside the JSX, like this:

```js
const Index = () => (
  <div>
    <h1>Home page</h1>

    <style jsx>{`
      h1 {
        font-size: 3rem;
      }
    `}</style>
  </div>
);

export default Index;
```

Inside the block we can use interpolation to dynamically change the values. For example here we assume a size prop is being passed by the parent component, and we use it in the styled-jsx block:

```js
const Index = (props) => (
  <div>
    <h1>Home page</h1>

    <style jsx>{`
      h1 {
        font-size: ${props.size}rem;
      }
    `}</style>
  </div>
);
```

If you want to apply some CSS globally, not scoped to a component, you add the global keyword to the style tag:

```js
<style jsx global>{`
  body {
    margin: 0;
  }
`}</style>
```

If you want to import an external CSS file in a Next.js component, you have to first install @zeit/next-css:

```sh
npm install @zeit/next-css
```

and then create a configuration file in the root of the project, called next.config.js, with this content:

```js
const withCSS = require('@zeit/next-css');
module.exports = withCSS();
```

After restarting the Next.js app, you can now import CSS like you normally do with JavaScript libraries or components:

```js
import '../style.css';
```

You can also import a SASS file directly, using the @zeit/next-sass library instead.

From any Next.js page component, you can add information to the page header. This is handy when:

- you want to customize the page title
- you want to change a meta tag

How can you do so?
Inside every component you can import the Head component from next/head and include it in your component JSX output:

```js
import Head from 'next/head';

const House = (props) => (
  <div>
    <Head>
      <title>The page title</title>
    </Head>
    {/* the rest of the JSX */}
  </div>
);

export default House;
```

You can add any HTML tag you'd like to appear in the <head> section of the page.

When mounting the component, Next.js will make sure the tags inside Head are added to the heading of the page. Likewise, when unmounting the component, Next.js will take care of removing those tags.

All the pages on your site look more or less the same. There's a chrome window, a common base layer, and you just want to change what's inside. There's a nav bar, a sidebar, and then the actual content.

How do you build such a system in Next.js? There are 2 ways. One is using a Higher Order Component, by creating a components/Layout.js component:

```js
export default (Page) => {
  return () => (
    <div>
      <nav>
        <ul>....</ul>
      </nav>
      <main>
        <Page />
      </main>
    </div>
  );
};
```

In there we can import separate components for the heading and/or sidebar, and we can also add all the CSS we need.

And you use it in every page like this:

```js
import withLayout from '../components/Layout.js';

const Page = () => <p>Here's a page!</p>;

export default withLayout(Page);
```

But I found this works only for simple cases, where you don't need to call getInitialProps() on a page.

Why? Because getInitialProps() gets only called on the page component. But if we export the Higher Order Component withLayout() from a page, Page.getInitialProps() is not called; withLayout.getInitialProps() would be called instead.
To avoid unnecessarily complicating our codebase, the alternative approach is to use props:

```js
export default (props) => (
  <div>
    <nav>
      <ul>....</ul>
    </nav>
    <main>{props.content}</main>
  </div>
);
```

and in our pages now we use it like this:

```js
import Layout from '../components/Layout.js';

const Page = () => <Layout content={<p>Here's a page!</p>} />;
```

This approach lets us use getInitialProps() from within our page component, with the only downside of having to write the component JSX inside the content prop:

```js
import Layout from '../components/Layout.js';

const Page = () => <Layout content={<p>Here's a page!</p>} />;

Page.getInitialProps = ({ query }) => {
  //...
};
```

In addition to creating page routes, which means pages are served to the browser as Web pages, Next.js can create API routes.

This is a very interesting feature because it means that Next.js can be used to create a frontend for data that is stored and retrieved by Next.js itself, transferring JSON via fetch requests.

API routes live under the /pages/api/ folder and are mapped to the /api endpoint.

This feature is very useful when creating applications.

In those routes, we write Node.js code (rather than React code). It's a paradigm shift: you move from the frontend to the backend, but very seamlessly.

Say you have a /pages/api/comments.js file, whose goal is to return the comments of a blog post as JSON.
Say you have a list of comments stored in a comments.json file:

```json
[
  { "comment": "First" },
  { "comment": "Nice post" }
]
```

Here's some sample code, which returns the list of comments to the client:

```js
import comments from './comments.json';

export default (req, res) => {
  res.status(200).json(comments);
};
```

It will listen on the /api/comments URL for GET requests, and you can try calling it using your browser.

API routes can also use dynamic routing like pages; use the [] syntax to create a dynamic API route, like /pages/api/comments/[id].js, which will retrieve the comments specific to a post id.

Inside the [id].js you can retrieve the id value by looking it up inside the req.query object:

```js
import comments from '../comments.json';

export default (req, res) => {
  res.status(200).json({ post: req.query.id, comments });
};
```

In dynamic pages, you'd need to import useRouter from next/router, then get the router object using const router = useRouter(), and then we'd be able to get the id value using router.query.id. On the server side it's all easier, as the query is attached to the request object.

If you do a POST request, all works in the same way - it all goes through that default export.

To separate POST from GET and other HTTP methods (PUT, DELETE), look up the req.method value:

```js
export default (req, res) => {
  switch (req.method) {
    case 'GET':
      //...
      break;
    case 'POST':
      //...
      break;
    default:
      res.status(405).end(); //Method Not Allowed
      break;
  }
};
```

In addition to req.query and req.method we already saw, we have access to cookies by referencing req.cookies, and to the request body in req.body.

Under the hood, this is all powered by Micro, a library that powers asynchronous HTTP microservices, made by the same team that built Next.js. You can make use of any Micro middleware in our API routes to add more functionality.
In your page components, you can execute code only on the server-side or on the client-side, by checking the window property.

This property only exists inside the browser, so you can check:

```js
if (typeof window === 'undefined') {
}
```

and add the server-side code in that block.

Similarly, you can execute client-side code only by checking:

```js
if (typeof window !== 'undefined') {
}
```

JS Tip: We use the typeof operator here because we can't detect a value to be undefined in other ways. We can't do if (window === undefined) because we'd get a "window is not defined" runtime error.

Next.js, as a build-time optimization, also removes the code that uses those checks from bundles. A client-side bundle will not include the content wrapped into a if (typeof window === 'undefined') {} block.

Deploying an app is always left last in tutorials. Here I want to introduce it early, just because it's so easy to deploy a Next.js app that we can dive into it now, and then move on to other more complex topics later on.

Remember in the "How to install Next.js" chapter I told you to add 3 lines to the package.json scripts section: "dev": "next", "build": "next build" and "start": "next start".

We used npm run dev up to now, to call the next command installed locally in node_modules/next/dist/bin/next. This started the development server, which provided us source maps and hot code reloading, two very useful features while debugging.

The same command can be invoked to build the website by passing the build flag, running npm run build. Then, the same command can be used to start the production app by passing the start flag, running npm run start.
Those 2 commands are the ones we must invoke to successfully deploy the production version of our site locally. The production version is highly optimized and does not come with source maps and other things like hot code reloading that would not be beneficial to our end users.

So, let's create a production deploy of our app. Build it using npm run build.

The output of the command tells us that some routes (/ and /blog) are now prerendered as static HTML, while /blog/[id] will be served by the Node.js backend.

Then you can run npm run start to start the production server locally. Visiting the site will show us the production version of the app, locally.

In the previous chapter we deployed the Next.js application locally. How do we deploy it to a real web server, so other people can access it?

One of the simplest ways to deploy a Next.js application is through the Now platform created by Zeit, the same company that created the open source project Next.js. You can use Now to deploy Node.js apps, static websites, and much more.

Now makes the deployment and distribution step of an app very, very simple and fast, and in addition to Node.js apps, they also support deploying Go, PHP, Python and other languages.

You can think of it as the "cloud", as you don't really know where your app will be deployed, but you know that you will have a URL where you can reach it.

Now is free to start using, with a generous free plan that currently includes 100GB of hosting, 1000 serverless function invocations per day, 1000 builds per month, 100GB of bandwidth per month, and one CDN location. The pricing page helps get an idea of the costs if you need more.

The best way to start using Now is by using the official Now CLI:

```sh
npm install -g now
```

Once the command is available, run now login and the app will ask you for your email. If you haven't registered already, create an account before continuing, then add your email to the CLI client.
Once this is done, from the Next.js project root folder run now, and the app will be instantly deployed to the Now cloud and you'll be given the unique app URL.

Once you run the now program, the app is deployed to a random URL under the now.sh domain.

We can see 3 different URLs in the output. Why so many?

The first is the URL identifying the deploy. Every time we deploy the app, this URL will change. You can test this immediately by changing something in the project code and running now again.

The other 2 URLs will not change. The first is a random one; the second is your project name (which defaults to the current project folder), plus your account name, and then now.sh.

If you visit the URL, you will see the app deployed to production.

You can configure Now to serve the site to your own custom domain or subdomain, but I will not dive into that right now. The now.sh subdomain is enough for our testing purposes.

Next.js provides us a way to analyze the code bundles that are generated.
Open the package.json file of the app and in the scripts section add those 3 new commands:

```json
"analyze": "cross-env ANALYZE=true next build",
"analyze:server": "cross-env BUNDLE_ANALYZE=server next build",
"analyze:browser": "cross-env BUNDLE_ANALYZE=browser next build"
```

Like this:

```json
{
  "name": "firstproject",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "dev": "next",
    "build": "next build",
    "start": "next start",
    "analyze": "cross-env ANALYZE=true next build",
    "analyze:server": "cross-env BUNDLE_ANALYZE=server next build",
    "analyze:browser": "cross-env BUNDLE_ANALYZE=browser next build"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "next": "^9.1.2",
    "react": "^16.11.0",
    "react-dom": "^16.11.0"
  }
}
```

then install those 2 packages:

```sh
npm install --save-dev cross-env @next/bundle-analyzer
```

Create a next.config.js file in the project root, with this content:

```js
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({});
```

Now run the command:

```sh
npm run analyze
```

This should open 2 pages in the browser: one for the client bundles, and one for the server bundles.

This is incredibly useful. You can inspect what's taking the most space in the bundles, and you can also use the sidebar to exclude bundles, for an easier visualization of the smaller ones.

Being able to visually analyze a bundle is great because we can optimize our application very easily.

Say we need to load the Moment library in our blog posts. Run:

```sh
npm install moment
```

to include it in the project. Now let's simulate the fact that we need it on two different routes: /blog and /blog/[id].

We import it in pages/blog/[id].js:

```js
import moment from 'moment';

//...

const Post = (props) => {
  return (
    <div>
      <h1>{props.post.title}</h1>
      <p>Published on {moment().format('dddd D MMMM YYYY')}</p>
      <p>{props.post.content}</p>
    </div>
  );
};
```

I'm just adding today's date, as an example.
This will include Moment.js in the blog post page bundle, as you can see by running npm run analyze: we now have a red entry in /blog/[id], the route that we added Moment.js to!

It went from ~1kB to 350kB, quite a big deal. And this is because the Moment.js library itself is 349kB. The client bundles visualization now shows us that the bigger bundle is the page one, which before was very little. And 99% of its code is Moment.js.

Every time we load a blog post we are going to have all this code transferred to the client. Which is not ideal.

One fix would be to look for a library with a smaller size, as Moment.js is not known for being lightweight (especially out of the box with all the locales included), but let's assume for the sake of the example that we must use it.

What we can do instead is separate all the Moment code into its own bundle. How?

Instead of importing Moment at the component level, we perform an async import inside getInitialProps, and we calculate the value to send to the component. Remember that we can't return complex objects inside the getInitialProps() returned object, so we calculate the date inside it:

```js
import posts from '../../posts.json';

const Post = (props) => {
  return (
    <div>
      <h1>{props.post.title}</h1>
      <p>Published on {props.date}</p>
      <p>{props.post.content}</p>
    </div>
  );
};

Post.getInitialProps = async ({ query }) => {
  const moment = (await import('moment')).default();
  return {
    date: moment.format('dddd D MMMM YYYY'),
    post: posts[query.id],
  };
};

export default Post;
```

See that special call to .default() after await import? It's needed to reference the default export in a dynamic import.

Now if we run npm run analyze again, our /blog/[id] bundle is again very small, as Moment has been moved to its own bundle file, loaded separately by the browser.

There is a lot more to know about Next.js.
I didn't talk about managing user sessions with login, serverless, managing databases, and so on. The goal of this handbook is not to teach you everything, but instead to introduce you, gradually, to all the power of Next.js.
https://tkssharma.com/nextjs-handbook-for-developers/
where's Boost's shared_ptr destructor? Discussion in 'C++' started by Phlip, Mar 20, 2006.
http://www.thecodingforums.com/threads/wheres-boosts-shared_ptr-destructor.452708/
At 12:55 14/6/00 +0200, you wrote: >>>>>> "PD" == Peter Donald <donaldp@mad.scientist.com> writes: > > PD> What I would like is some method of knowing how to preprocess > PD> properties so that they are in form task wants. [...] This > PD> would also mean it would be possible to have properties that are > PD> not strings and in fact could effectively be any Object. > >Peter, take a look at spec/core.html. Some things you want are already >there, some things have been discussed and even been agreed on but >unfortunately the document hasn't been updated to reflect this. oh .. working of oldish (about a month) cvs copy thou I will have a look :P >Basically it has been agreed that project properties as well as task >attributes should be richer objects not just plain strings. oh just in case there was confusion when I say properties, I meant JavaBean properties and thus attributes in build.xml not <properties .../> elements :P. >The way to >go for attributes (IIRC) was to use classes with a constructor taking >a String object as its only argument. > >Say you have a task Foo with an Attribute named bar of type Baz then >you would need: > >public class Baz { > public Baz(String value) {...} > ... >} ...snip... okay ... sounds good for user created types but how about standard types in java library .. it would mean that you have to restrict properties to new classes you are willing to define but I guess that may not be too much of a problem except for the primitive type wrappers ... >I don't know what to do with the processing part of your idea >though. I view this as very tightly coupled to the task at hand in >most cases - with the exception of file lists maybe, but then using >DirectoryScanner seems a reasonable approach to me. I'm not convinced >that moving the processing out of the individual tasks into Ant's core >would be a good idea. perhaps not sure ... but howmany wrapper classes will there end up being. 
Currently I factored out a lot of functionality from other classes (like package listing from javadocs task) and am seeing repetition of same calling over and again in my own tasks. This led me to believe there has to be a better way ... not sure if this is it thou. I just hate seeing the same two lines or copied methods all through my code > > PD> Request 2: > > PD> Would it be possible to have a bunch of standard ant properties > PD> in Tasks that effect project in other manners. > >We should have properties that are named and used in the same way >across different tasks. Other than that I don't think I understand >what you mean. > >Your example would be tied to the Javac and maybe the Java task, so >let's make sure they use the attribute classpath consistently. unfortunately classpath usually has to happen *before* task is loaded .. ie you have to load the task through the new classloader .... you can hack it now (which is what I do :P) by either recursively delegating to another task or starting a new vm. > PD> Request 3: > > PD> This would allow a single build file to compile the taskdefs it > PD> needs and then use those in turn to do further processing. > >That is you want to build the Task and use it in the same build file, >right? I've never thought about doing something like this >myself. Maybe you are trying unusual things 8^)? probably .. I am not sure it is a good idea thou .. just throwing thoughts out really :P > > PD> Request 4: > > PD> A standardised set of taskdefs for invoking standard C/C++ tools > >I think after Ant has settled there will be some kind of library for >optional tasks - hopefully somewhere under the Jakarta umbrella but >not necessarily in the same repository. I view things like this as >prime candidates for "standard optional tasks" i.e. optional tasks >with an implementation from the Ant developers. > >A draft on how to handle optional tasks in future versions of Ant can >be found in spec/core.html again. off to have a look ... 
But before I go there is another Request that I forgot to mention (that is not patently absurd :P). That is that input and output get wrapped before each task executes. Some of my tasks produce excessive verbose output that I can't really hide. I can't hide it as I often delegate to classes that I don't own or to native apps. So what I would like to see is to replace System.out with a custom PrintStream before execution. This PrintStream buffers output until end of task. If the task ends in error the output is printed out but otherwise it is silenetly ignored. This behaviour can be modified by properties ... err attributes of the task element such as <some-task ant:force-output-display="true" .. /> or something similar. While this could be done in the task it is useful to multiple targets and should possibly placed in core ??? Cheers, Pete *------------------------------------------------------* | "Nearly all men can stand adversity, but if you want | | to test a man's character, give him power." | | -Abraham Lincoln | *------------------------------------------------------*
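The buffering idea Peter describes can be illustrated in a few lines of Java. The sketch below is hypothetical and not Ant's actual implementation: the class and method names (BufferedTaskOutput, runTask) are invented, the boolean parameter stands in for the proposed force-output-display attribute, and a real task runner would also need to capture System.err and handle concurrent tasks.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Hypothetical sketch: swap System.out for a buffering stream while a
// task runs, and only replay the captured output on the real stream if
// the task failed (or if output is explicitly forced on).
class BufferedTaskOutput {

    // Runs the task and returns whatever was actually shown to the user.
    static String runTask(Runnable task, boolean forceOutputDisplay) {
        PrintStream realOut = System.out;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buffer, true));
        boolean failed = false;
        try {
            task.run();
        } catch (RuntimeException e) {
            failed = true;
        } finally {
            System.setOut(realOut); // always restore the original stream
        }
        String captured = buffer.toString();
        // Show the buffered output only on failure, or when forced.
        String shown = (failed || forceOutputDisplay) ? captured : "";
        realOut.print(shown);
        return shown;
    }
}
```

A successful task's verbose output is silently dropped, while a failing task still gets its diagnostic context printed, which is the behaviour the mail proposes.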
http://mail-archives.apache.org/mod_mbox/ant-dev/200006.mbox/%3C3.0.5.32.20000614212957.00819ac0@latcs4.cs.latrobe.edu.au%3E
why QWidget inherited class can't handle QAction?

Hello,

While developing a small application, I faced a weird behavior concerning QAction. Why can't a QWidget-inherited class handle a QAction?

I made a small example to illustrate my problem:

a.h

```cpp
#include <QObject>
#include <QWidget>
#include <QMainWindow>
#include <QMenuBar>
#include <QMenu>
#include <QAction>

using BaseClass = QObject;

class A : public BaseClass
{
    Q_OBJECT
public:
    A(QMainWindow& mw) : BaseClass(&mw), m_menu("A"), m_action("a")
    {
        m_menu.addAction(&m_action);
        mw.menuBar()->addMenu(&m_menu);
    }

private:
    QMenu m_menu;
    QAction m_action;
};
```

main.cpp

```cpp
#include <QApplication>
#include <QMainWindow>
#include <QMenuBar>
#include <QMenu>
#include <QAction>
#include "a.h"

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);

    QMainWindow mw;

    QAction quitAction("Quit");
    QMenu fileMenu("File");
    fileMenu.addAction(&quitAction);
    mw.menuBar()->addMenu(&fileMenu);

    A a(mw);

    mw.show();
    return app.exec();
}
```

In the a.h file, when I replace QObject with QWidget for the using BaseClass, the menu bar is not working anymore, I cannot click on any menus. Why? 0_o

I searched in the documentation but I did not find any information... Thanks in advance!

Hi,

That's a good question, however I can't answer right away. That said, your code as it is does not really follow best practices in terms of handling QObject based classes, so I am wondering if there's some bad interaction there.

Chris Kawa replied:

It's because of this part:

```cpp
BaseClass(&mw),
```

Setting a parent on a widget adds that widget to the parent, so you just added the A widget to your main window. Because it's not in any layout it sits in the top left corner, covering the menus (that's why you can't click any). It's transparent though, so you can't see it.
Add a paint event like this to your A class to see what I mean:

```cpp
void paintEvent(QPaintEvent*)
{
    QPainter p(this);
    p.fillRect(rect(), Qt::red);
}
```

@Chris-Kawa good catch! I went for "non-clickable" rather than "hidden by".

@SGaist Thanks for the reply! Concerning the best practices for handling QObject, do you have any resources regrouping that information, or can I find it in the Qt documentation? I use Qt also in my company and I make projects alongside in my personal time to improve my skills. I find Qt hard to handle if you don't want to use raw pointers, to follow the C++ Core Guidelines or something similar, for example. I would be grateful if you could link me some materials :)

You have a truckload of examples in Qt's documentation. That said, it looks like you are trying to work against Qt's parent/child and QObject design by making everything stack based. Trying to keep absolutely everything on the stack will have issues like double deletion happening with properly parented objects. Making everything allocated on the heap is wrong as well. You should learn when to use which, so your tool belt will be more complete.

@Chris-Kawa Thanks! It makes sense now ^^

In my application, I decided to use an MVC-like approach with a Gui class inheriting QWidget (so I can make it dockable) to handle the UI generated by Qt Designer (or manually made) and elements like actions to put in the menu bar of the main window. I think it is now compromised... Do you have any idea of how to handle that?

Edit: Well no, I can do it anyway, I just need to hide the widget and it works...

@SGaist Yes, I find the use of raw pointers ugly and outdated. I think I have to back off and keep it to myself ^^. Fingers crossed for a better approach in Qt6...

You can uncross them. Raw pointers are not going anywhere.
https://forum.qt.io/topic/119433/why-qwidget-inherited-class-can-t-handle-qaction/2
Posted 10 September 2017, 11:38 am EST First off let me say that this is the first time I've ever tried to work with any of these controls. I'm trying the gauges control and I'm getting an error of "'C1Gauge' is ambiguous in the namespace 'C1.Web.UI.Controls.C1Gauge'." I'm using VWD 2010 Express. Since this is the first time I've ever used these controls, I think I should explain how I installed them in case that's where I messed up. I downloaded and installed the package from the website (C1StudioASPNET_2011v1), in VWD, went to Project->Add a Reference and selected "ComponentOne Controls for ASP.NET AJAX (CLR 2.0)", in my page, added "Imports C1.Web.UI.Controls" to the code behind. I added a gauge control to the page and I get the error when I run it. Any information would be greatly appreciated. Thanks!
https://www.grapecity.com/en/forums/webforms-edition/gauges-issue
This part of the course discusses the topic of performance for multi-threaded applications. After defining the terms performance and scalability, we take a closer look at Amdahl's Law. Further in the course we see how we can reduce lock contention by applying different techniques, which are demonstrated with code examples.

2. Performance

Threads can be used to improve the performance of applications. The reason behind this may be that multiple processors or CPU cores are available. Each CPU core can work on its own task; hence dividing one big task into a series of smaller tasks that run independently of each other can improve the total runtime of the application. An example of such a performance improvement could be an application that resizes images lying within a folder structure on the hard disc. A single-threaded approach would just iterate over all files and scale the images one after the other. If we have a CPU with more than one core, the resizing process would only utilize one of the available cores. A multi-threaded approach could, for example, let a producer thread scan the file system and add all found files to a queue, which is processed by a bunch of worker threads. When we have as many worker threads as we have CPU cores, we make sure that each CPU core has something to do until all images are processed. Another area where multi-threading can improve the overall performance of an application are use cases with a lot of I/O waiting time. Let's assume we want to write an application that mirrors a complete website in the form of HTML files to our hard disc. Starting with one page, the application has to follow all links that point to the same domain (or URL part). As the time from issuing a request to the remote web server until all data has been received may be long, we can distribute the work over a few threads.
One or more threads can parse the received HTML pages and put the found links into a queue, while other threads issue the requests to the web server and then wait for the answer. In this case we fill the waiting time for newly requested pages with the parsing of the already received ones. In contrast to the previous example, this application may even gain performance if we add more threads than we have CPU cores. These two examples show that performance means getting more work done in a shorter time frame. This is of course the classical understanding of the term performance. But the usage of threads can also improve the responsiveness of our applications. Imagine a simple GUI application with an input form and a "Process" button. When the user presses the button, the application has to render the pressed button (the button should look like it is being pressed down and coming up again when the mouse is released) and the actual processing of the input data has to be done. If this processing takes a long time, a single-threaded application cannot react to further user input, i.e. we need an additional thread that processes events from the operating system, like mouse clicks or mouse pointer movements.

Scalability means the ability of a program to improve its performance when further resources are added to it. Imagine we had to resize a huge number of images. As the number of CPU cores of our current machine is limited, adding more threads does not improve the performance. The performance may even degrade, as the scheduler has to manage more threads, and thread creation and shutdown also cost CPU power.

2.1. Amdahl's Law

The last section has shown that in some cases the addition of new resources can improve the overall performance of our application.
In order to be able to compute how much performance our application may gain when we add further resources, we need to identify the parts of the program that have to run serialized/synchronized and the parts of the program that can run in parallel. If we denote the fraction of the program that has to run synchronized with B (e.g. the number of lines that are executed synchronized) and if we denote the number of available processors with n, then Amdahl's Law lets us compute an upper limit for the speedup our application may be able to achieve:

    Speedup <= 1 / (B + (1 - B) / n)

If we let n approach infinity, the term (1-B)/n converges to zero. Hence we can neglect this term, and the upper limit for the speedup converges to 1/B, where B is the fraction of the program runtime before the optimization that is spent within non-parallelizable code. If B is for example 0.5, meaning that half of the program cannot be parallelized, the reciprocal value of 0.5 is 2; hence even if we add an unlimited number of processors to our application, we would only gain a speedup of about two. Now let's assume we can rewrite the code such that only 0.25 of the program runtime is spent in synchronized blocks. Now the reciprocal value of 0.25 is 4, meaning we have built an application that would run, with a large number of processors, about four times faster than with only one processor. The other way round, we can also use Amdahl's Law to compute the fraction of the program runtime that has to be executed synchronized in order to achieve a given speedup. If we want to achieve a speedup of about 100, the reciprocal value is 0.01, meaning we should spend only about 1 percent of the runtime within synchronized code.
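The bound above is easy to experiment with numerically. The following small sketch (my own addition, not part of the original course material; the class and method names are made up for illustration) computes the Amdahl limit for a given synchronized fraction B and processor count n:

```java
public class AmdahlDemo {

    // Maximum speedup according to Amdahl's Law for a program that spends
    // the fraction b of its runtime in serialized code, executed on n processors.
    public static double speedup(double b, int n) {
        return 1.0 / (b + (1.0 - b) / n);
    }

    public static void main(String[] args) {
        // With half of the program serialized, more processors barely help:
        System.out.println(speedup(0.5, 2));     // 1.333...
        System.out.println(speedup(0.5, 1000));  // approaches the limit of 2
        // With only a quarter serialized, the bound approaches 4:
        System.out.println(speedup(0.25, 1000));
    }
}
```

Plugging in B = 0.5 confirms the text: no matter how large n becomes, the speedup never reaches 2.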
Although it is not always easy to compute this fraction in practice, especially when you think about large business applications, the law gives us a hint that we have to think about synchronization very carefully and that we have to keep the parts of the program that have to be serialized small.

2.2. Performance impacts of threads

The writings of this article up to this point indicate that adding further threads to an application can improve the performance and responsiveness. But on the other hand, this does not come for free. Threads always have some performance impact themselves. The first performance impact is the creation of the thread itself. This takes some time, as the JVM has to acquire the resources for the thread from the underlying operating system and prepare the data structures in the scheduler, which decides which thread to execute next. If you use as many threads as you have processor cores, then each thread can run on its own processor and might not be interrupted too often. In practice, the operating system may of course require its own computations while your application is running; hence even in this case threads are interrupted and have to wait until the operating system lets them run again. The situation gets even worse when you have to use more threads than you have CPU cores. In this case the scheduler can interrupt your thread in order to let another thread execute its code. In such a situation the current state of the running thread has to be saved, and the state of the scheduled thread that is supposed to run next has to be restored. Beyond that, the scheduler itself has to perform some updates on its internal data structures, which again use CPU power. All together this means that each context switch from one thread to another costs CPU power and therefore introduces a performance degradation in comparison to a single-threaded solution. Another cost of having multiple threads is the need to synchronize access to shared data structures.
Next to the keyword synchronized, we can also use volatile to share data between multiple threads. If more than one thread competes for a shared data structure, we have contention. The JVM then has to decide which thread to execute next. If this is not the current thread, costs for a context switch are introduced. The current thread then has to wait until it can acquire the lock. The JVM can decide itself how to implement this waiting. When the expected time until the lock can be acquired is small, spin-waiting, i.e. trying to acquire the lock again and again, might be more efficient compared to the context switch necessary when suspending the thread and letting another thread occupy the CPU. Bringing the waiting thread back to execution entails another context switch and adds additional costs to the lock contention. Therefore it is reasonable to reduce the number of context switches that are necessary due to lock contention. The following section describes several approaches for reducing this contention.

2.3. Lock contention

As we have seen in the previous section, two or more threads competing for one lock introduce additional clock cycles, as the contention may force the scheduler to either let one thread spin-wait for the lock or let another thread occupy the processor, with the cost of two context switches. In some cases lock contention can be reduced by applying one of the following techniques:

- The scope of the lock is reduced.
- The number of times a certain lock is acquired is reduced.
- Using hardware-supported optimistic locking operations instead of synchronization.
- Avoid synchronization where possible.
- Avoid object pooling.

2.3.1 Scope reduction

The first technique can be applied when the lock is held longer than necessary. Often this can be achieved by moving one or more lines out of the synchronized block in order to reduce the time the current thread holds the lock.
The fewer lines of code there are to execute, the earlier the current thread can leave the synchronized block and thereby let other threads acquire the lock. This is also aligned with Amdahl's Law, as we reduce the fraction of the runtime that is spent within synchronized blocks. To better understand this technique, take a look at the following source code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class ReduceLockDuration implements Runnable {

    private static final int NUMBER_OF_THREADS = 5;
    private static final Map<String, Integer> map = new HashMap<String, Integer>();

    public void run() {
        for (int i = 0; i < 10000; i++) {
            synchronized (map) {
                UUID randomUUID = UUID.randomUUID();
                Integer value = Integer.valueOf(42);
                String key = randomUUID.toString();
                map.put(key, value);
            }
            Thread.yield();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[NUMBER_OF_THREADS];
        for (int i = 0; i < NUMBER_OF_THREADS; i++) {
            threads[i] = new Thread(new ReduceLockDuration());
        }
        long startMillis = System.currentTimeMillis();
        for (int i = 0; i < NUMBER_OF_THREADS; i++) {
            threads[i].start();
        }
        for (int i = 0; i < NUMBER_OF_THREADS; i++) {
            threads[i].join();
        }
        System.out.println((System.currentTimeMillis() - startMillis) + "ms");
    }
}
```

In this sample application, we let five threads compete for access to the shared Map. To let only one thread at a time access the Map, the code that accesses the Map and adds a new key/value pair is put into a synchronized block. When we take a closer look at the block, we see that the computation of the key as well as the conversion of the primitive integer 42 into an Integer object does not need to be synchronized. They belong conceptually to the code that accesses the Map, but they are local to the current thread and the instances are not modified by other threads.
Hence we can move them out of the synchronized block:

```java
public void run() {
    for (int i = 0; i < 10000; i++) {
        UUID randomUUID = UUID.randomUUID();
        Integer value = Integer.valueOf(42);
        String key = randomUUID.toString();
        synchronized (map) {
            map.put(key, value);
        }
        Thread.yield();
    }
}
```

The reduction of the synchronized block has a measurable effect on the runtime. On my machine the runtime of the whole application is reduced from 420ms to 370ms for the version with the minimized synchronized block. This makes a total of 11% less runtime, just by moving three lines of code out of the synchronized block. The statement Thread.yield() is introduced to provoke more context switches, as this method call tells the JVM that the current thread is willing to give the processor to another waiting thread. This again provokes more lock contention, as otherwise one thread might run too long on the processor without any competing thread.

2.3.2 Lock splitting

Another technique to reduce lock contention is to split one lock into a number of smaller-scoped locks. This technique can be applied if you have one lock for guarding different aspects of your application. Assume we want to collect some statistical data about our application and implement a simple counter class that holds a primitive counter variable for each aspect. As our application is multi-threaded, we have to synchronize access to these variables, as they are accessed from different concurrent threads. The easiest way to accomplish this is to use the synchronized keyword in the method signature for each method of Counter:
The easiest way to accomplish this is to use the synchronized keyword within the method signature for each method of Counter: public static class CounterOneLock implements Counter { private long customerCount = 0; private long shippingCount = 0; public synchronized void incrementCustomer() { customerCount++; } public synchronized void incrementShipping() { shippingCount++; } public synchronized long getCustomerCount() { return customerCount; } public synchronized long getShippingCount() { return shippingCount; } } This approach also means that each increment of a counter locks the whole instance of Counter. Other threads that want to increment a different variable have to wait until this single lock is released. More efficient in this case is to use separate locks for each counter like in the next example: public static class CounterSeparateLock implements Counter { private static final Object customerLock = new Object(); private static final Object shippingLock = new Object(); private long customerCount = 0; private long shippingCount = 0; public void incrementCustomer() { synchronized (customerLock) { customerCount++; } } public void incrementShipping() { synchronized (shippingLock) { shippingCount++; } } public long getCustomerCount() { synchronized (customerLock) { return customerCount; } } public long getShippingCount() { synchronized (shippingLock) { return shippingCount; } } } This implementation introduces two separate synchronization objects, one for each counter. Hence a thread trying to increase the number of customers in our system only has to compete with other threads that also increment the number of customers but it has not to compete with threads trying to increment the number of shipping. 
By using the following class we can easily measure the impact of this lock splitting:

```java
public class LockSplitting implements Runnable {

    private static final int NUMBER_OF_THREADS = 5;
    private Counter counter;

    public interface Counter {
        void incrementCustomer();
        void incrementShipping();
        long getCustomerCount();
        long getShippingCount();
    }

    public static class CounterOneLock implements Counter { ... }

    public static class CounterSeparateLock implements Counter { ... }

    public LockSplitting(Counter counter) {
        this.counter = counter;
    }

    public void run() {
        for (int i = 0; i < 100000; i++) {
            if (ThreadLocalRandom.current().nextBoolean()) {
                counter.incrementCustomer();
            } else {
                counter.incrementShipping();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[NUMBER_OF_THREADS];
        Counter counter = new CounterOneLock();
        for (int i = 0; i < NUMBER_OF_THREADS; i++) {
            threads[i] = new Thread(new LockSplitting(counter));
        }
        long startMillis = System.currentTimeMillis();
        for (int i = 0; i < NUMBER_OF_THREADS; i++) {
            threads[i].start();
        }
        for (int i = 0; i < NUMBER_OF_THREADS; i++) {
            threads[i].join();
        }
        System.out.println((System.currentTimeMillis() - startMillis) + "ms");
    }
}
```

On my machine the implementation with one single lock takes about 56ms on average, whereas the variant with two separate locks takes about 38ms. This is a reduction of about 32 percent. Another possible improvement is to separate locks even further by distinguishing between read and write locks. The Counter class, for example, provides methods for reading and writing the counter value. While reading the current value can be done by more than one thread in parallel, all write operations have to be serialized. The java.util.concurrent package provides a ready-to-use implementation of such a ReadWriteLock. The ReentrantReadWriteLock implementation manages two separate locks: one for read accesses and one for write accesses.
Both the read and the write lock offer methods for locking and unlocking. The write lock can only be acquired if no read lock is held. The read lock can be acquired by more than one reader thread, as long as the write lock is not acquired. For demonstration purposes the following shows an implementation of the counter class using a ReadWriteLock:

```java
public static class CounterReadWriteLock implements Counter {

    private final ReentrantReadWriteLock customerLock = new ReentrantReadWriteLock();
    private final Lock customerWriteLock = customerLock.writeLock();
    private final Lock customerReadLock = customerLock.readLock();
    private final ReentrantReadWriteLock shippingLock = new ReentrantReadWriteLock();
    private final Lock shippingWriteLock = shippingLock.writeLock();
    private final Lock shippingReadLock = shippingLock.readLock();

    private long customerCount = 0;
    private long shippingCount = 0;

    public void incrementCustomer() {
        customerWriteLock.lock();
        customerCount++;
        customerWriteLock.unlock();
    }

    public void incrementShipping() {
        shippingWriteLock.lock();
        shippingCount++;
        shippingWriteLock.unlock();
    }

    public long getCustomerCount() {
        customerReadLock.lock();
        long count = customerCount;
        customerReadLock.unlock();
        return count;
    }

    public long getShippingCount() {
        shippingReadLock.lock();
        long count = shippingCount;
        shippingReadLock.unlock();
        return count;
    }
}
```

All read accesses are guarded by an acquisition of the read lock, while all write accesses are guarded by the corresponding write lock. In case the application performs many more read accesses than write accesses, this kind of implementation can gain an even bigger performance improvement than the previous one, as all the reading threads can access the getter methods in parallel.

2.3.3 Lock striping

The previous example has shown how to split one single lock into two separate locks. This allows the competing threads to acquire only the lock that protects the data structures they want to manipulate.
On the other hand, this technique also increases complexity and, if not implemented properly, the risk of deadlocks. Lock striping, in turn, is a technique similar to lock splitting: instead of splitting one lock that guards different code parts or aspects, we use different locks for different values. The class ConcurrentHashMap from the JDK's java.util.concurrent package uses this technique to improve the performance of applications that heavily rely on HashMaps. In contrast to a synchronized version of java.util.HashMap, ConcurrentHashMap uses 16 different locks. Each lock guards only 1/16 of the available hash buckets. This allows different threads that want to insert data into different sections of the available hash buckets to do so concurrently, as their operations are guarded by different locks. On the other hand, it also introduces the problem of having to acquire more than one lock for specific operations. If you want to copy the whole Map, for example, all 16 locks have to be acquired.

2.3.4 Atomic operations

Another way to reduce lock contention is to use so-called atomic operations. This principle is explained and evaluated in more detail in one of the following articles. The java.util.concurrent package offers support for atomic operations for some primitive data types. Atomic operations are implemented using the so-called compare-and-swap (CAS) operation provided by the processor. The CAS instruction updates the value of a memory location only if its current value equals the expected value provided by the caller; only in this case is the old value replaced by the new value. This principle can be used to increment a variable in an optimistic way. If we assume our thread knows the current value, then it can try to increment it by using the CAS operation. If it turns out that another thread has meanwhile incremented the value, and our value is no longer the current one, we request the current value and try again with it. This can be done until we successfully increment the counter.
The advantage of this implementation, although we may need some spinning, is that we don't need any kind of synchronization. The following implementation of the Counter class uses the atomic variable approach and does not use any synchronized block:

```java
public static class CounterAtomic implements Counter {

    private AtomicLong customerCount = new AtomicLong();
    private AtomicLong shippingCount = new AtomicLong();

    public void incrementCustomer() {
        customerCount.incrementAndGet();
    }

    public void incrementShipping() {
        shippingCount.incrementAndGet();
    }

    public long getCustomerCount() {
        return customerCount.get();
    }

    public long getShippingCount() {
        return shippingCount.get();
    }
}
```

Compared to the CounterSeparateLock class, the total average runtime decreases from 39ms to 16ms. This is a reduction in runtime of about 58 percent.

2.3.5 Avoid hotspots

A typical implementation of a list will internally manage a counter that holds the number of items in the list. This counter is updated every time a new item is added to the list or removed from it. If used within a single-threaded application, this optimization is reasonable, as the size() operation on the list will directly return the previously computed value. If the list did not hold the number of items, the size() operation would have to iterate over all items in order to calculate it. What is a common optimization in many data structures can become a problem in multi-threaded applications. Assume we want to share an instance of this list with a bunch of threads that insert and remove items from the list and query its size. The counter variable is now also a shared resource, and all access to its value has to be synchronized. The counter has become a hotspot within the implementation.
The following code snippet demonstrates this problem:

```java
public static class CarRepositoryWithCounter implements CarRepository {

    private Map<String, Car> cars = new HashMap<String, Car>();
    private Map<String, Car> trucks = new HashMap<String, Car>();
    private Object carCountSync = new Object();
    private int carCount = 0;

    public void addCar(Car car) {
        if (car.getLicencePlate().startsWith("C")) {
            synchronized (cars) {
                Car foundCar = cars.get(car.getLicencePlate());
                if (foundCar == null) {
                    cars.put(car.getLicencePlate(), car);
                    synchronized (carCountSync) {
                        carCount++;
                    }
                }
            }
        } else {
            synchronized (trucks) {
                Car foundCar = trucks.get(car.getLicencePlate());
                if (foundCar == null) {
                    trucks.put(car.getLicencePlate(), car);
                    synchronized (carCountSync) {
                        carCount++;
                    }
                }
            }
        }
    }

    public int getCarCount() {
        synchronized (carCountSync) {
            return carCount;
        }
    }
}
```

The CarRepository implementation holds two maps: one for cars and one for trucks. It also provides a method that returns the number of cars and trucks that are currently in both maps. As an optimization, it increments the internal counter each time a new car is added to one of the two maps. This operation has to be synchronized with the dedicated carCountSync instance. The same synchronization is used when the count value is returned.
In order to get rid of this additional synchronization, the CarRepository could also have been implemented by omitting the additional counter and computing the total number of cars each time the value is queried by calling getCarCount():

```java
public static class CarRepositoryWithoutCounter implements CarRepository {

    private Map<String, Car> cars = new HashMap<String, Car>();
    private Map<String, Car> trucks = new HashMap<String, Car>();

    public void addCar(Car car) {
        if (car.getLicencePlate().startsWith("C")) {
            synchronized (cars) {
                Car foundCar = cars.get(car.getLicencePlate());
                if (foundCar == null) {
                    cars.put(car.getLicencePlate(), car);
                }
            }
        } else {
            synchronized (trucks) {
                Car foundCar = trucks.get(car.getLicencePlate());
                if (foundCar == null) {
                    trucks.put(car.getLicencePlate(), car);
                }
            }
        }
    }

    public int getCarCount() {
        synchronized (cars) {
            synchronized (trucks) {
                return cars.size() + trucks.size();
            }
        }
    }
}
```

Now we need to synchronize on the cars and trucks maps in the getCarCount() method and compute the size there, but the additional synchronization during the addition of new cars can be left out.

2.3.6 Avoid object pooling

In the first versions of the Java VM, object creation using the new operator was still an expensive operation. This led many programmers to the common pattern of object pooling. Instead of creating certain objects again and again, they constructed a pool of these objects, and each time an instance was needed, one was taken from the pool. After having used the object, it was put back into the pool and could be used by another thread. What makes sense at first glance can be a problem when used in multi-threaded applications. Now the object pool is shared between all threads, and access to the objects within the pool has to be synchronized. This additional synchronization overhead can now be bigger than the cost of the object creation itself.
This is even true when you consider the additional costs of the garbage collector for collecting the newly created object instances. As with all performance optimizations, this example shows once again that each possible improvement should be measured carefully before being applied. Optimizations that seem to make sense at first glance can turn out to be performance bottlenecks when not implemented correctly.
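To underline this closing advice: before trusting any of the techniques above, time both variants under identical load. The following minimal harness is my own simplified sketch of the measurement pattern used throughout this article; for serious work a dedicated framework such as JMH is the better choice, as it accounts for JVM warm-up and dead-code elimination:

```java
import java.util.concurrent.atomic.AtomicLong;

public class MeasureDemo {

    // Starts the given number of threads, all running the same task,
    // waits for them to finish and returns the elapsed wall-clock
    // time in milliseconds.
    public static long measure(int numberOfThreads, Runnable task) {
        Thread[] threads = new Thread[numberOfThreads];
        for (int i = 0; i < numberOfThreads; i++) {
            threads[i] = new Thread(task);
        }
        long startMillis = System.currentTimeMillis();
        for (Thread thread : threads) {
            thread.start();
        }
        try {
            for (Thread thread : threads) {
                thread.join();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
        return System.currentTimeMillis() - startMillis;
    }

    public static void main(String[] args) {
        AtomicLong counter = new AtomicLong();
        long elapsed = measure(5, () -> {
            for (int i = 0; i < 100000; i++) {
                counter.incrementAndGet();
            }
        });
        System.out.println(counter.get() + " increments took " + elapsed + "ms");
    }
}
```

Running the same harness against two counter implementations (for example, CounterOneLock versus CounterAtomic) makes the comparison a one-line change.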
https://www.javacodegeeks.com/2015/09/performance-scalability-and-liveness.html
A long time ago, in a galaxy far, far away, JavaScript was a hated language. In fact, "hated" is an understatement; JavaScript was a despised language. As a result, developers generally treated it as such, only dipping their toes into the JavaScript waters when they needed to sprinkle a bit of flair into their applications. Despite the fact that there is a whole lot of good in the JavaScript language, due to widespread ignorance, few took the time to properly learn it. Instead, as some of you might remember, standard JavaScript usage involved a significant amount of copying and pasting. "Don't bother learning what the code does, or whether it follows best practices; just paste it in!" - Worst Advice Ever

Ironically, it turns out that much of what the development community hated had very little to do with the JavaScript language itself. No, the real menace under the mask was the DOM, or "Document Object Model," which, especially at the time, was horribly inconsistent from browser to browser. "Sure, it may work in Firefox, but what about IE8? Okay, it may work in IE8, but what about IE7?" The list went on tirelessly! Luckily, beginning around five years ago, the JavaScript community would see an incredible change for the better, as libraries like jQuery were introduced to the public. Not only did these libraries provide an expressive syntax that was particularly appealing to web designers, but they also managed to level the playing field, by tucking the workarounds for the various browser quirks into their APIs. Trigger $.ajax and let jQuery do the hard part. Fast-forward to today, and the JavaScript community is more vibrant than ever - largely due to the jQuery revolution. Because the rise of jQuery reignited interest in the JavaScript language, much of the information on the web is a bit sketchy.
This is less due to the writers' ignorance, and more a result of the fact that we were all learning. It takes time for best practices to emerge. Luckily, the community has matured immensely since those days. Before we dive into some of these best practices, let's first expose some bad advice that has circulated around the web.

Don't Use jQuery

Much like Ruby on Rails, many developers' first introduction to JavaScript was through jQuery. This led to a common cycle: learn jQuery, fall in love, dig into vanilla JavaScript and level up. While there's certainly nothing wrong with this cycle, it did pave the way for countless articles, which recommended that users not use jQuery in various situations, due to "performance issues." It wouldn't be uncommon to read that it's better to use vanilla for loops over $.each. Or, at some point or another, you might have read that it's best practice to use document.getElementsByClassName over jQuery's Sizzle engine, because it's faster. The problem with tips like this is that they take the idea of premature optimization to an extreme, and don't account for various browser inconsistencies - the things that jQuery fixed for us! Running a test and observing a savings of a few milliseconds over thousands of repetitions is not a reason to abandon jQuery and its elegant syntax. Your time is much better invested tweaking parts of your application that will actually make a difference, such as the size of your images.

Multiple jQuery Objects

This second anti-pattern, again, was the result of the community (including yours truly at one point) not fully understanding what was taking place under the jQuery hood. As such, you likely came across (or wrote yourself) code which wrapped an element in the jQuery object countless times within a function.
```javascript
$('button.confirm').on('click', function() {
  // Do it once
  $('.modal').modal();

  // And once more
  $('.modal').addClass('active');

  // And again for good measure
  $('.modal').css(...);
});
```

While this code might, at first, appear to be harmless (and truthfully is, in the grand scheme of things), we're following the bad practice of creating multiple instances of the jQuery object. Every time that we refer to $('.modal'), a new jQuery object is being generated. Is that smart? Think of the DOM as a pool: every time you call $('.modal'), jQuery is diving into the pool, and hunting down the associated coins (or elements). When you repeatedly query the DOM for the same selector, you're essentially throwing those coins back into the water, only to jump in and find them all over again! Always chain selectors if you intend to use them more than once. The previous code snippet can be refactored to:

```javascript
$('button.confirm').on('click', function() {
  $('.modal')
    .modal()
    .addClass('active')
    .css(...);
});
```

Alternatively, use "caching."

```javascript
$('button.confirm').on('click', function() {
  // Do it ONLY once
  var modal = $('.modal');
  modal.modal();
  modal.addClass('active');
  modal.css(...);
});
```

With this technique, jQuery jumps into the DOM pool a total of one time, rather than three.

Selector Performance

While not as ubiquitous these days, not too long ago, the web was bombarded by countless articles on optimizing selector performance in jQuery. For example, is it better to use $('div p') or $('div').find('p')? Ready for the truth? It doesn't really matter. It's certainly a good idea to have a basic understanding of the way that jQuery's Sizzle engine parses your selector queries from right to left (meaning that it's better to be more specific at the end of your selector, rather than the beginning). And, of course, the more specific you can be, the better.
Clearly, $('a.button') is better for performance than $('.button'), due to the fact that, with the former, jQuery is able to limit the search to only the anchor elements on the page, rather than all elements. Beyond that, however, too much attention is paid to selector performance.

When in doubt, put your trust in the fact that the jQuery team is comprised of the finest JavaScript developers in the industry. If there is a performance boost to be achieved in the library, they will have discovered it. And, if not them, one of the community members will submit a pull request. With this in mind, be aware of your selectors, but don't concern yourself too much with performance implications, unless you can verbalize why doing so is necessary.

Callback Hell

jQuery has encouraged widespread use of callback functions, which can certainly provide a nice convenience. Rather than declaring a function, simply use a callback function. For example:

$('a.external').on('click', function() {
    // this callback function is triggered
    // when .external is clicked
});

You've certainly written plenty of code that looks just like this; I know I have! When used sparingly, anonymous callback functions serve as helpful conveniences. The rub occurs down the line, when we enter... callback hell (trigger thunderbolt sound)!

Callback hell is when your code indents itself numerous times, as you continue nesting callback functions. Consider the following quite common code:

$('a.data').on('click', function() {
    var anchor = $(this);

    $(this).fadeOut(400, function() {
        $.ajax({
            // ...
            success: function(data) {
                anchor.fadeIn(400, function() {
                    // you've just entered callback hell
                });
            }
        });
    });
});

As a basic rule of thumb, the more indented your code is, the more likely that there's a code smell. Or, better yet, ask yourself: does my code look like the Mighty Ducks' Flying V? When refactoring code such as this, the key is to ask yourself, "How could this be tested?"
Within this seemingly simple bit of code, an event listener is bound to a link, the element fades out, an AJAX call is performed, upon success the element fades back in, and, presumably, the resulting data is appended somewhere. That sure is a lot to test! Wouldn't it be better to split this code into more manageable and testable pieces? Certainly. Though the following can be optimized further, a first step to improving this code might be:

var updatePage = function(el, data) {
    // append fetched data to DOM
};

var fetch = function(ajaxOptions) {
    ajaxOptions = ajaxOptions || {
        // url: ...
        // dataType: ...
        success: updatePage
    };

    return $.ajax(ajaxOptions);
};

$('a.data').on('click', function() {
    $(this).fadeOut(400, fetch);
});

Even better, if you have a variety of actions to trigger, contain the relevant methods within an object.

Think about how, in a fast-food restaurant, such as McDonalds, each worker is responsible for one task. Joe does the fries, Karen registers customers, and Mike grills burgers. If all three members of the staff did everything, this would introduce a variety of maintainability problems. When changes need to be implemented, we have to meet with each person to discuss them. However, if we, for example, keep Joe exclusively focused on the fries, should we need to adjust the instructions for preparing fries, we only need to speak with Joe and no one else.

You should take a similar approach to your code; each function is responsible for one task. In the code above, the fetch function merely triggers an AJAX call to the specified URL. The updatePage function accepts some data, and appends it to the DOM. Now, if we want to test one of these functions to ensure that it's working as expected - for example, the updatePage method - we can mock the data object, and send it through to the function. Easy!

Reinventing the Wheel

It's important to remember that the jQuery ecosystem has matured greatly over the last several years.
Chances are, if you have a need for a particular component, then someone else has already built it. Certainly, continue building plugins to increase your understanding of the jQuery library (in fact, we'll write one in this article), but, for real-world usage, refer to any potential existing plugins before reinventing the wheel.

As an example, need a date picker for a form? Save yourself the leg-work, and instead take advantage of the community-driven - and highly tested - jQuery UI library. Once you reference the necessary jQuery UI library and associated stylesheet, the process of adding a date picker to an input is as easy as doing:

<input id="myDateInput" type="text">

<script>
    $("#myDateInput").datepicker({
        dateFormat: 'yy-mm-dd'
    });
    // Demo:
</script>

Or, what about an accordion? Sure, you could write that functionality yourself, or instead, once again, take advantage of jQuery UI. Simply create the necessary markup for your project.

<div id="accordion">
    <h3><a href="#">Chapter 1</a></h3>
    <div><p>Some text.</p></div>

    <h3><a href="#">Chapter 2</a></h3>
    <div><p>Some text.</p></div>

    <h3><a href="#">Chapter 3</a></h3>
    <div><p>Some text.</p></div>

    <h3><a href="#">Section 4</a></h3>
    <div><p>Some text.</p></div>
</div>

Then, automagically turn it into an accordion.

$(function() {
    $("#accordion").accordion();
});

What if you could create tabs in thirty seconds? Create the markup:

<div id="tabs">
    <ul>
        <li><a href="#tabs-1">About Us</a></li>
        <li><a href="#tabs-2">Our Mission</a></li>
        <li><a href="#tabs-3">Get in Touch</a></li>
    </ul>

    <div id="tabs-1">
        <p>About us text.</p>
    </div>

    <div id="tabs-2">
        <p>Our mission text.</p>
    </div>

    <div id="tabs-3">
        <p>Get in touch text.</p>
    </div>
</div>

And activate the plugin.

$(function() {
    $("#tabs").tabs();
});

Done! It doesn't even require any notable understanding of JavaScript.

Plugin Development

Let's now dig into some best practices for building jQuery plugins, which you're bound to do at some point in your development career.
We'll use a relatively simple MessageBox plugin as the demo for our learning. Feel free to work along; in fact, please do! The assignment: implement the necessary functionality to display dialog boxes, using the syntax $.message('SAY TO THE USER'). This way, for example, before deleting a record permanently, we can ask the user to confirm the action, rather than resorting to inflexible and ugly alert boxes.

Step 1

The first step is to figure out how to "activate" $.message. Rather than extending jQuery's prototype, for this plugin's requirements, we only need to attach a method to the jQuery namespace.

(function($) {
    $.message = function(text) {
        console.log(text);
    };
})(jQuery);

It's as easy as that; go ahead, try it out! When you call $.message('Here is my message'), that string should be logged to the browser's console (Shift + Command + i, in Google Chrome).

Step 2

While there's not enough room to cover the process of testing the plugin, this is an important step that you should research. There is an amazing sense of assuredness that occurs when refactoring tested code. For example, when using jQuery's test suite, QUnit, we could test-drive the code from Step 1 by writing:

module('jQuery.message');

test('is available on the jQuery namespace', 1, function() {
    ok($.message, 'message method should exist');
});

The ok function, available through QUnit, simply asserts that the first argument is a truthy value. If the message method does not exist on the jQuery namespace, then false will be returned, in which case the test fails. Following the test-driven development pattern, this code would be the first step. Once you've observed the test fail, the next step would be to add the message method, accordingly.
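To get a feel for what ok is doing underneath, here's a toy truthiness assertion in plain JavaScript. This is purely illustrative: the ok and jQueryStub names below are stand-ins invented for this sketch, and QUnit's real implementation tracks assertion counts, module state, and much more.

```javascript
// Toy version of a truthiness assertion -- NOT QUnit's implementation.
function ok(value, message) {
  var passed = !!value;

  return {
    result: passed,
    message: message + (passed ? ' (pass)' : ' (fail)')
  };
}

// Stand-in for the real jQuery object, just for this sketch.
var jQueryStub = {
  message: function (text) { return text; }
};

var assertion = ok(jQueryStub.message, 'message method should exist');
console.log(assertion.result);  // true
console.log(assertion.message); // "message method should exist (pass)"
```

If the message method were missing from the stub, ok would receive undefined, the assertion would report a failure, and that observed failing test is what drives writing the method in the first place.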
While, for brevity's sake, this article will not delve further into the process of test-driven development in JavaScript, you're encouraged to refer to the GitHub repo for this project to review all the tests for the plugin.

Step 3

A message method that does nothing isn't of help to anybody! Let's take the provided message, and display it to the user. However, rather than embedding a huge glob of code into the $.message method, we'll instead simply use the function to instantiate and initialize a Message object.

(function($) {
    "use strict";

    var Message = {
        initialize: function(text) {
            this.text = text;

            return this;
        }
    };

    $.message = function(text) {
        // Needs polyfill for IE8 and below
        return Object.create(Message).initialize(text);
    };
})(jQuery);

Not only does this approach, again, make the Message object more testable, but it's also a cleaner technique, which, among other things, protects against callback hell. Think of this Message object as the representation of a single message box.

Step 4

If Message represents a single message box, what will be the HTML for one? Let's create a div with a class of message-box, and make it available to the Message instance, via an el property.

var Message = {
    initialize: function(text) {
        this.el = $('<div>', {
            'class': 'message-box',
            'style': 'display: none'
        });

        this.text = text;

        return this;
    }
};

Now, the object has an immediate reference to the wrapping div for the message box. To gain access to it, we could do:

var msg = Object.create(Message).initialize();

// [<div class="message-box" style="display: none"></div>]
console.log(msg.el);

Remember, we now have an HTML fragment, but it hasn't yet been inserted into the DOM. This means that we don't have to worry about any unnecessary reflows, when appending content to the div.

Step 5

The next step is to take the provided message string, and insert it into the div. That's easy!

initialize: function(text) {
    // ...

    this.el.html(this.text);
}

// [<div class="message-box" style="display: none">Here is an important message</div>]

However, it's unlikely that we'd want to insert the text directly into the div. More realistically, the message box will have a template. While we could let the user of the plugin create a template and reference it, let's keep things simple and confine the template to the Message object.

var Message = {
    template: function(text, buttons) {
        return [
            '<p class="message-box-text">' + text + '</p>',
            '<div class="message-box-buttons">',
            buttons,
            '</div>'
        ].join('');
    },

    // ...
};

In situations when you have no choice but to nest HTML into your JavaScript, a popular approach is to store the HTML fragments as items within an array, and then join them into one HTML string. In the snippet above, the message's text is now being inserted into a paragraph with a class of message-box-text. We've also set a place for the buttons, which we'll implement shortly.

Now, the initialize method can be updated to:

initialize: function(text) {
    // ...

    this.el.html(this.template(text, buttons));
}

When triggered, we build the structure for the message box:

<div class="message-box" style="display: none;">
    <p class="message-box-text">Here is an important message.</p>
    <div class="message-box-buttons"></div>
</div>

For more complex projects, consider using the popular Handlebars templating engine.

Step 6

For the message box to be as flexible as possible, the user of the plugin needs to have the ability to optionally specify, among other things, which buttons should be presented to the user - such as "Okay," "Cancel," "Yes," etc. We should be able to add the following code:

$.message('Are you sure?', {
    buttons: ['Yes', 'Cancel']
});

...and generate a message box with two buttons. To implement this functionality, first update the $.message definition.
$.message = function(text, settings) {
    var msg = Object.create(Message);
    msg.initialize(text, settings);

    return msg;
};

Now, the settings object will be passed through to the initialize method. Let's update it.

initialize: function(text, settings) {
    this.el = $('<div>', {'class': 'message-box', 'style': 'display: none'});
    this.text = text;
    this.settings = settings;

    this.el.html(this.template(text, buttons));
}

Not bad, but what if the plugin user does not specify any settings? Always consider the various ways that your plugin can be used.

Step 7

We assume that the user of the plugin will describe which buttons to present but, to be safe, it's important to provide a set of defaults, which is a best practice when creating plugins.

$.message = function(text, settings) {
    var msg = Object.create(Message);
    msg.initialize(text, settings);

    return msg;
};

$.message.defaults = {
    icon: 'info',
    buttons: ['Okay'],
    callback: null
};

With this approach, should the plugin user need to modify the defaults, he only needs to update $.message.defaults, as needed. Remember: never hide the defaults from the user. Make them available to "the outside." Here, we've set a few defaults: the icon, the buttons, and a callback function, which should be triggered once the user has clicked one of the buttons in the message box.

jQuery offers a helpful way to override the default options for a plugin (or any object, really), via its extend method.

initialize: function(text, settings) {
    // ...

    this.settings = $.extend({}, $.message.defaults, settings);
}

With this modification, this.settings will now be equal to a new object. If the plugin user specifies any settings, they will override the plugin's defaults object.

Step 8

If we intend to add a custom icon to the message box, dependent upon the action, it'll be necessary to add a CSS class to the element, and allow the user to apply a background image, accordingly.
Within the initialize method, add:

this.el.addClass('message-box-' + this.settings.icon);

If no icon is specified in the settings object, the default, info, is used: .message-box-info. Now, we can offer a variety of CSS classes (beyond the scope of this tutorial), containing various icons for the message box. Here's a couple of examples to get you started.

.message-box-info {
    background: url(path/to/info/icon.png) no-repeat;
}

.message-box-warning {
    background: url(path/to/warning/icon.png) no-repeat;
}

Ideally, as part of your MessageBox plugin, you'd want to include an external stylesheet that contains basic styling for the message box, these classes, and a handful of icons.

Step 9

The plugin now accepts an array of buttons to be applied to the template, but we haven't yet written the functionality to make that information usable. The first step is to take an array of button values, and translate that to the necessary HTML inputs. Create a new method on the Message object to handle this task.

createButtons: function(buttons) {}

jQuery.map is a helpful method that applies a function to each item within an array, and returns a new array with the modifications applied. This is perfect for our needs. For each item in the buttons array, such as ['Yes', 'No'], we want to replace the text with an HTML input, with the value set, accordingly.

createButtons: function(buttons) {
    return $.map(buttons, function(button) {
        return '<input type="submit" value="' + button + '">';
    }).join('');
}

Next, update the initialize method to call this new method.

initialize: function(text, settings) {
    this.el = $('<div>', {'class': 'message-box', 'style': 'display: none'});
    this.text = text;
    this.settings = $.extend({}, $.message.defaults, settings);

    var buttons = this.createButtons(this.settings.buttons);
    this.el.html(this.template(text, buttons));

    return this;
}

Step 10

What good is a button, if nothing happens when the user clicks it?
A good place to store all event listeners for a view is within a special events method on the associated object, like this:

initialize: function() {
    // ...

    this.el.html(this.template(text, buttons));
    this.events();
},

events: function() {
    var self = this;

    this.el.find('input').on('click', function() {
        self.close();

        if ( typeof self.settings.callback === 'function' ) {
            self.settings.callback.call(self, $(this).val());
        }
    });
}

This code is slightly more complex, due to the fact that the user of the plugin needs to have the ability to trigger their own callback function when a button is clicked on the message box. The code simply determines whether a callback function was registered and, if so, triggers it, and sends through the selected button's value.

Imagine that a message box offers two buttons: "Accept" and "Cancel." The user of the plugin needs to have a way to capture the clicked button's value, and respond accordingly.

Notice where we call self.close()? That method, which has yet to be created, is responsible for one thing: closing and removing the message box from the DOM.

close: function() {
    this.el.animate({
        top: 0,
        opacity: 'hide'
    }, 150, function() {
        $(this).remove();
    });
}

To add a bit of flair, upon hiding the message box, we, over the course of 150 milliseconds, fade out the box, and transition it upwards.

Step 11

The functionality has been implemented! The final step is to present the message box to the user. We'll add one last show method on the Message object, which will insert the message box into the DOM, and position it.

show: function() {
    this.el.appendTo('body').animate({
        top: $(window).height() / 2 - this.el.outerHeight() / 2,
        opacity: 'show'
    }, 300);
}

It only takes a simple calculation to position the box vertically in the center of the browser window. With that in place, the plugin is complete!
$.message = function(text, settings) {
    var msg = Object.create(Message).initialize(text, settings);
    msg.show();

    return msg;
};

Step 12

To use your new plugin, simply call $.message(), and pass through a message and any applicable settings, like so:

$.message('The row has been updated.');

Or, to request confirmation for some destructive action:

$.message('Do you really want to delete this record?', {
    buttons: ['Yes', 'Cancel'],
    icon: 'alert',
    callback: function(buttonText) {
        if ( buttonText === 'Yes' ) {
            // proceed and delete record
        }
    }
});

Closing Thoughts

Over the course of building this sample MessageBox plugin, a variety of best practices have emerged, such as avoiding callback hell, writing testable code, making the default options available to the plugin user, and ensuring that each method is responsible for exactly one task. While one could certainly achieve the same effect by embedding countless callback functions within $.message, doing so is rarely a good idea, and is even considered an anti-pattern.

Remember the three keys to maintainable code and flexible jQuery plugins and scripts:

- Could I test this? If not, refactor and split the code into chunks.
- Have I offered the ability to override my default settings?
- Am I following bad practices or making assumptions?

To learn more about jQuery development, you're encouraged to refer to my screencast course on this site, "30 Days to Learn jQuery."
https://code.tutsplus.com/tutorials/jquery-anti-patterns-and-best-practices--pre-45600
Other Aliases

Ns_UrlSpecificAlloc, Ns_UrlSpecificDestroy, Ns_UrlSpecificGet, Ns_UrlSpecificGetExact, Ns_UrlSpecificGetFast

SYNOPSIS

#include "ns.h"

int Ns_UrlSpecificAlloc(void)

void * Ns_UrlSpecificDestroy(char *server, char *method, char *url, int id, int flags)

void * Ns_UrlSpecificGet(char *server, char *method, char *url, int id)

void * Ns_UrlSpecificGetExact(char *server, char *method, char *url, int id, int flags)

void * Ns_UrlSpecificGetFast(char *server, char *method, char *url, int id)

void Ns_UrlSpecificSet(char *server, char *method, char *url, int id, void *data, int flags, void (*deletefunc) (void *))

DESCRIPTION

These functions allow you to store URL-specific data in memory for later retrieval. They are used when registering procedures, for example.

- Ns_UrlSpecificAlloc()

Returns a unique ID that identifies a virtual URL-space, which is then used with the Ns_UrlSpecific storage functions. You should only call this function at server startup, and not after. Here is an example:

static int myId;

void
Init(void)
{
    /* Allocate the id once at startup. */
    myId = Ns_UrlSpecificAlloc();
}

void
Store(char *server, char *method, char *url, char *data)
{
    Ns_UrlSpecificSet(server, method, url, myId, data, 0, NULL);
}

char *
Fetch(char *server, char *method, char *url)
{
    char *data;

    data = Ns_UrlSpecificGet(server, method, url, myId);
    return (char *) data;
}

- Ns_UrlSpecificDestroy(server, method, url, id, flags)

The Ns_UrlSpecificDestroy function deletes URL-specific data previously stored with Ns_UrlSpecificSet with the same method/URL combination and the same inheritance setting. An id of -1 matches all ids. For example, Ns_UrlSpecificDestroy("myserver", "GET", "/", -1, NS_OP_RECURSE) removes all data for the method GET for server "myserver". The flags argument can be:

- NS_OP_NODELETE - If set, the deletefunc specified in Ns_UrlSpecificSet is not run.

- NS_OP_RECURSE - If set, then data for all URLs more specific than the passed-in URL are also destroyed.
- NS_OP_NOINHERIT - If set, data that was stored with this flag in Ns_UrlSpecificSet will be deleted. If not set, the data stored without this flag will be deleted.

- Ns_UrlSpecificGet(server, method, url, id)

The Ns_UrlSpecificGet function retrieves the best match that it can find, within the URL subspace identified by id, for the passed-in URL. For instance, suppose you had previously registered a server/method/url/id combination of {myserver, GET, /, 1} and {myserver, GET, /inventory, 1}. The following call would match the data registered at {myserver, GET, /inventory, 1}:

Ns_UrlSpecificGet("myserver", "GET", "/inventory/RJ45", 1)

- Ns_UrlSpecificGetExact(server, method, url, id, flags)

Retrieves stored data for the exact method/URL/id combination specified that was stored with the same inheritance setting. If the flags argument is set to NS_OP_NOINHERIT, the data stored with NS_OP_NOINHERIT will be retrieved. If the flags argument is set to 0, the data stored without NS_OP_NOINHERIT will be retrieved.

- Ns_UrlSpecificGetFast(server, method, url, id)

Same as Ns_UrlSpecificGet but does not support wildcards, making it much faster.

- Ns_UrlSpecificSet(server, method, url, id, data, flags, deletefunc)

The Ns_UrlSpecificSet function stores data in memory, allowing subsequent retrieval using server, method, url, id, and the inheritance flag. The flags argument can be NS_OP_NOINHERIT or NS_OP_NODELETE. You can store two sets of data based on the same server, method, url, and id combination -- one set with inheritance on and one set with inheritance off. If the NS_OP_NOINHERIT flag is set, the data is stored based on the exact URL. If NS_OP_NOINHERIT is omitted, the data is stored based on the specified URL and any URL below it. In this case, Ns_UrlSpecificGet will match to the closest URL when retrieving the data.
The deletefunc argument is called with data as an argument when this server/url/method/id combination is re-registered or deleted, or when the server shuts down, unless NS_OP_NODELETE is set. Normally, calling Ns_UrlSpecificSet on a server/url/method/id combination which already has an operation registered for it causes the previous operation's delete procedure to be called. You can override this behavior by adding the NS_OP_NODELETE flag.

KEYWORDS
http://manpages.org/ns_urlspecificset/3
In this article I will explain how to customize log activity in the database. Imagine you need to save the user's IP, the requested URL, and the user_name in the database, but the table created by default with Yii only has id, level, category, logtime, and message.

CGridView and CListView are great widgets to populate records, and also provide features like ajax update, column sort, search, drop-down filter, ajax content load, and many more...

In this wiki I explain how to show a default popup dialog box (like Gii does) using an existing module.

There are a few cases where you want more than one CActiveDataProvider displayed in one CGridView. How to do that?

Like Gmail: if you have three or more unsuccessful login attempts, a captcha appears.

Getting "Expired token" errors? Here is a solution to avoid invalid CSRF on POST or ajax requests, or when the user identity changes.

Sometimes you want to use existing translations for locales which do not directly match. An example would be a website targeting Germany (de_de), Austria (de_at) and Switzerland (de_ch, fr_ch, it_ch). Although you may have existing translations for German (de), French (fr) and Italian (it), there are problems using them directly.

Let's say we have a CGridView widget showing a list of users for an administrator. Users have status „active“ or „disabled“. The grid widget puts class „odd“ or „even“ on rows, and we want to preserve this. So we want to add a class „disabled“ to rows with disabled users.

<?php $this->widget('zii.widgets.grid.CGridView', array(
    'id'=>'user-grid',
    'dat...

Lots of people are asking how to solve this with Yii. We think it's difficult with Yii, but it's easy to solve: no database triggers are needed.
We can simply solve it by extending a class (say "RActiveRecord") from CActiveRecord, then extending all our model classes from that class.

We are running one frontend running NGINX and several app servers running Apache2. There are several issues we have come across, but right now I'll be documenting one of them. I'll be completing this article when I get more time.

The basic idea is to create a complete mail message and store it in a DB table along with all the info necessary for sending valid emails (to_email, from_email, from_name, subject, etc.)

For best experience, use Chrome. Other browsers may complain here and there. Sorry, no patience to make happy every freaking browser out there!

I've gotten Yii running cron jobs, and wanted to explain briefly how I did it.

In order to get your Yii logs into Heroku's logs, you have to work a little bit of magic. You'll need to modify the boot.sh script and add the following two lines:

~~~
touch /app/apache/logs/app_log
tail -F /app/apache/logs/app_log &
~~~

This will set up the log and tail it so that when you request "heroku logs", this log is included.

A short explanation of how to extract profile information for PHP on your server using XDebug and KCacheGrind or WinCacheGrind.

This example includes a composite condition as well as an empty condition - as if you bypass or disable defaultScope without using resetScope().

This article explains how to easily turn standard text-line validation errors into beautiful and professional-looking Twitter Bootstrap Popovers.

~~~
Yiinitializr\Helpers\Initializer::create('./../', 'frontend', array(
    // will override frontend configuration file
    __DIR__ .'/../../common/config/main.php', // merged with
    __DIR__ .'/../../common/config/env.php',  // merged with
    __DIR__ .'/../../common/config/local.php' // merged with
))->run();
~~~

This namespace brings utilities to interact with...
https://www.yiiframework.com/wiki?version=1.1&tag=rbac&page=10
Searching By Name (5:46), with Jason Seifer

In this video, we implement a method to search for contacts by their name.

Code Samples

Here is our find_by_name method:

def find_by_name(name)
  results = []
  search = name.downcase
  contacts.each do |contact|
    if contact.full_name.downcase.include?(search)
      results.push(contact)
    end
  end
  puts "Name search results (#{search})"
  results.each do |contact|
    puts contact.to_s('full_name')
    contact.print_phone_numbers
    contact.print_addresses
    puts "\n"
  end
end

Which we can call as follows:

address_book.find_by_name("e")

- 0:00 [MUSIC]
- 0:04 We have all of our contacts successfully in the program but just printing out
- 0:08 a list doesn't do us much good if we don't have the ability to search through.
- 0:13 We're going to use some methods that take blocks to search through our different
- 0:17 contacts.
- 0:18 Let's go ahead and try adding that ability now using work spaces.
- 0:23 All right.
- 0:24 So, first up let's go ahead and add the ability to find a contact by name.
- 0:30 We can print out the contact list.
- 0:32 But, we want to be able to search through it.
- 0:37 So let's go ahead and write a method.
- 0:40 Called find_by_name.
- 0:45 And we'll pass in a name.
- 0:49 Now, let's think about how we're gonna do this.
- 0:52 What we can do is iterate over each contact in the contact list.
- 0:58 And then, if the name matches the argument to this method we'll go ahead and
- 1:03 return that contact.
- 1:05 So how are we gonna do that?
- 1:07 Well, we can start by having an empty array
- 1:11 of search results which will be the contacts in our contacts list.
- 1:19 And then we'll say the search is the name which we send into the method.
- 1:26 Same thing here and here.
- 1:29 But now let's think about this for just a moment.
- 1:32 If somebody's searching for part of a name or they've capitalized the name,
- 1:37 we still want the search to return the correct thing.
- 1:40 So what we're going to do is make the search query lowercase.
- 1:46 And then when we're looking through the different contacts we'll also match that
- 1:51 against the lowercase version of the first, middle or last name.
- 1:56 So let's go ahead and iterate through the contacts.
- 2:00 And then if it matches, we'll append that contact to the results array.
- 2:11 So, here's a loop, and now we'll say,
- 2:15 contact.first_name, and remember, we'll downcase this.
- 2:20 So now we're dealing with a lower case version of the first name, and
- 2:24 strings have a method called include,
- 2:26 which lets you see if a string includes another string.
- 2:32 So we can say if this includes our search,
- 2:38 we can append this contact to the result array.
- 2:44 Now, what we can do is go through and print out the results.
- 2:52 And then, just to be a little bit more clear, we'll put in the search term.
- 2:59 Now we can iterate over the results.
- 3:04 Remember, this is going to be a contact.
- 3:09 Now we can print out the contact's name.
- 3:11 We'll print out their full name.
- 3:19 And we'll go ahead and print their phone numbers, and
- 3:24 we'll go ahead and print the addresses, and we'll also print a new line.
- 3:31 So now let's go down here, and
- 3:33 instead of printing out the contact list, we can comment that out.
- 3:43 And let's first try searching for Nick in all lower case and
- 3:47 when we run this we should hopefully see Nick's contact information printed out.
- 3:56 Oh, undefined local variable or method, seach.
- 4:00 That would appear to be a typo.
- 4:06 Here we go.
- 4:08 That's on line 18, that all looks good.
- 4:13 Let's run this again.
- 4:16 Okay, this looks good, that's what we wanted.
- 4:22 Now, let's go ahead and change this to an n, since both Nick and
- 4:26 I have that letter in our names.
- 4:29 And then we should hopefully see Jason and Nick here.
- 4:33 And we do. That looks good.
- 4:39 Let's go ahead and change this to search for the letter e.
- 4:47 And that returned no search results.
- 4:51 Even though both of our last names include the letter e.
- 5:00 So let's go ahead and add an or statement.
- 5:03 And we'll say if contact.last_name.
- 5:11 Includes the search, we can push that to the search results.
- 5:21 Let's go ahead and run this again.
- 5:23 And, hey, we get the exact same results that we were expecting.
- 5:30 And actually we could include the full name here, instead of searching for
- 5:36 the first and last names, we can get the middle name in there, as well.
- 5:42 Run that one more time.
- 5:43 All right, now we have a name search working.
https://teamtreehouse.com/library/build-an-address-book-in-ruby/search/searching-by-name
CC-MAIN-2017-09
refinedweb
950
84.07
hey, i have stored a few images in a MySQL database as BLOBs. Now i want to retrieve and display the images on a frame as a gallery/grid view. I have displayed one image in a frame, but when i tried to add more images it fails... Can anyone help with sample code, coz im new to the language... I've tried the following code, but it didn't work:

public class ImageShow extends JFrame {
    Image image;
    public int x = 30, y = 30;
    ResultSet r;

    public ImageShow() {
        setTitle("Image Retrieved");
        setSize(1000, 1000);
        addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent we) {
                setVisible(false);
            }
        });
        setVisible(true);
    }

    public void paint(Graphics g) {
        try {
            r = ImageRetrieve.rs;
            while (r.next()) {
                byte[] imagedata = r.getBytes("image_path");
                image = Toolkit.getDefaultToolkit().createImage(imagedata);
                //Toolkit tool = Toolkit.getDefaultToolkit();
                g.drawImage(image, x, y, this);
                x += 30;
                y += 30;
            }
        } catch (SQLException ex) {
            Logger.getLogger(ImageShow.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}

Is there any better way?

when i tried to add more images it fails..

Please explain what fails.

On execution i only get a new window, its not displaying any images... i heard about grid layout, can i use that here?? wil u help me with some examples?

maybe see here: and here: [edit] and this too is helpful:

Rather than overriding paint, you'll be on easier ground by creating a GridLayout of JLabels and using ImageIcon to place your graphics in the JLabels. (just Google it) If you must override paint then don't re-load the images from disk every time paint is called. It can get called very frequently and all that repeated IO will grind your GUI to a snail's pace. Load them all into memory once when the program starts.

hey, thanks to everyone who commented and helped me...
i've solved my problem... i used GridLayout for displaying the images and it worked... code:

public class ImageShow extends JFrame {
    ResultSet r;
    Image img;

    public ImageShow() throws SQLException {
        setTitle("Image retrieved");
        setSize(500, 500);
        setDefaultCloseOperation(HIDE_ON_CLOSE);
        Container pane = getContentPane();
        pane.setLayout(new GridLayout(3, 3));
        r = ImageRetrieve.rs;
        while (r.next()) {
            byte[] imagedata = r.getBytes("image_path");
            img = Toolkit.getDefaultToolkit().createImage(imagedata);
            img = img.getScaledInstance(200, 200, Image.SCALE_SMOOTH);
            ImageIcon icon = new ImageIcon(img);
            JLabel photo = new JLabel(icon);
            pane.add(photo);
            this.pack();
            this.setVisible(true);
        }
    }
}
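For anyone who wants to try the grid idea without a database, here is a self-contained sketch. The generated PNG byte arrays stand in for the BLOBs that rs.getBytes("image_path") would return, and the ImageGrid/toIcon names are made up for illustration:

```java
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JLabel;
import javax.swing.JPanel;
import java.awt.GridLayout;
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ImageGrid {

    // Decode one BLOB (the bytes a query like rs.getBytes("image_path")
    // would return) into a scaled ImageIcon for one grid cell.
    public static ImageIcon toIcon(byte[] blob, int w, int h) throws IOException {
        BufferedImage img = ImageIO.read(new ByteArrayInputStream(blob));
        return new ImageIcon(img.getScaledInstance(w, h, Image.SCALE_SMOOTH));
    }

    public static void main(String[] args) throws IOException {
        // Stand-ins for database rows: two tiny generated PNGs.
        List<byte[]> blobs = new ArrayList<>();
        for (int i = 0; i < 2; i++) {
            BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(img, "png", out);
            blobs.add(out.toByteArray());
        }

        // GridLayout(0, 3): fixed 3 columns, as many rows as needed.
        JPanel grid = new JPanel(new GridLayout(0, 3));
        for (byte[] blob : blobs) {
            grid.add(new JLabel(toIcon(blob, 200, 200)));
        }
        System.out.println(grid.getComponentCount());
    }
}
```

The same toIcon call slots into the forum solution's while (r.next()) loop in place of the Toolkit.createImage plus getScaledInstance pair.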
http://www.daniweb.com/software-development/java/threads/411623/displaying-images-stored-in-mysql-database-as-a-grid
CC-MAIN-2014-10
refinedweb
417
52.97
Miscellaneous functions API. The functions here are located in the RDM DB Engine Library. Linker option: -l rdmrdm

#include <rdmrdmapi.h>

rdm_rdmGetVersion(): Return RDM Db Engine version information. This function fills a buffer with a string describing the version of the RDM library according to a format string. The format string consists of plain text (copied to the string buffer) and format specifiers introduced by a percent ("%") character. See list below. If NULL is passed in for fmt, the default format string will be used. The default format string is the full version string "%V", which corresponds to "%n %v Build %b".
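The excerpt doesn't show rdm_rdmGetVersion's actual signature or its specifier table, so here is a hedged C sketch of how the described format expansion behaves. The function name format_version and the name/version/build values are made up for illustration, not part of the RDM API:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Expand a version format string the way the documentation describes:
 * plain text is copied through, and %n / %v / %b are replaced.
 * A NULL fmt falls back to the documented default, "%n %v Build %b"
 * (the expansion of the full version string "%V"). */
void format_version(char *buf, size_t len, const char *fmt)
{
    const char *name = "RDM", *ver = "14.1", *build = "1234"; /* assumed values */
    size_t i = 0;

    if (fmt == NULL)
        fmt = "%n %v Build %b";

    while (*fmt && i + 1 < len) {
        if (*fmt == '%') {
            const char *sub = "";
            switch (*++fmt) {
            case 'n': sub = name;  break;
            case 'v': sub = ver;   break;
            case 'b': sub = build; break;
            default:  sub = "";    break; /* unknown specifier: drop it */
            }
            while (*sub && i + 1 < len)
                buf[i++] = *sub++;
            if (*fmt)
                fmt++;
        } else {
            buf[i++] = *fmt++;
        }
    }
    buf[i] = '\0';
}
```

With the assumed values above, a NULL format yields "RDM 14.1 Build 1234", matching the documented default behavior in shape.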
https://docs.raima.com/rdm/14_1/group__rdm__miscellaneous.html
CC-MAIN-2019-18
refinedweb
105
69.38
Use the Sieve of Eratosthenes to find all the primes from 2 up to a given number. It does not use any division or remainder operation. Create your range, starting at two and continuing up to and including the given limit (i.e. [2, limit]). The algorithm consists of repeating the following over and over: take the next available unmarked number in your list (it is a prime), then mark all the multiples of that number (they are not prime). Repeat until you have processed every number in your range. A good first test is to check that you do not use division or remainder operations (div, /, mod or % depending on the language). Please see the learning and installation pages if you need any help.

import scala.collection.mutable

object Sieve {
  def primes(integer: Int): List[Int] =
    if (integer == 1) {
      List()
    } else {
      primes(2 to integer)
    }

  def primes(integerRange: Range.Inclusive): List[Int] = {
    val mutableMarkedRange = createMarkableRange(integerRange)
    integerRange.foreach { factor ⇒
      ((factor * 2) to integerRange.last by factor)
        .foreach { mutableMarkedRange.put(_, true) }
    }
    mutableMarkedRange
      .filterNot { case (_, isMarked) ⇒ isMarked }
      .map { case (value, _) ⇒ value }
      .toList
      .sorted
  }

  private def createMarkableRange(integerRange: Range.Inclusive): mutable.Map[Int, Boolean] =
    mutable.Map.apply(
      integerRange
        .map(i ⇒ (i, false))
        .toMap[Int, Boolean]
        .toSeq: _*
    )
}
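For comparison, the same marking procedure can be sketched in Python (illustrative only; the track shown above is Scala). Marking steps by the factor itself, so no division or remainder operation is needed:

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: keep the unmarked numbers in [2, limit]."""
    if limit < 2:
        return []
    marked = [False] * (limit + 1)
    primes = []
    for n in range(2, limit + 1):
        if not marked[n]:
            primes.append(n)  # first unmarked number is prime
            # Mark every multiple of n by stepping in increments of n.
            for multiple in range(n * n, limit + 1, n):
                marked[multiple] = True
    return primes
```

Unlike the Scala solution above, this only marks multiples of numbers that are still unmarked, which avoids redundant passes over composites.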
https://exercism.io/tracks/scala/exercises/sieve/solutions/4ba52f314cfb4f69bf489a763835fdff
CC-MAIN-2021-31
refinedweb
175
53.88
Pages: 1

Here's another thing that worked with the previous kernel. It seems to compile without problems, but when I try to mount my Siemens S55 mobile phone on COM 2 it says:

FATAL: Error inserting fuse (/lib/modules/2.6.10-ARCH/kernel/fs/fuse/fuse.ko): Unknown symbol in module, or unknown parameter (see dmesg)
fusermount: unable to open fuse device /proc/fs/fuse/dev: No such file or directory

When I type 'dmesg' here's what comes out:

fuse: Unknown symbol vfs_permission

Dunno if anyone can help me with this (should I send it to the bugtracker?), but I believe it can be fixed. Cheers.

Offline

i dont speak russian, but seems this is the solution :

Offline

Indeed, it is the solution. At the bottom of the page it says (in english) to compile the 'vfs_permission.c' file with the provided source and Makefile. I haven't succeeded though, it does nothing after I type 'make' in the directory with those 2 files.. maybe you try. :?

Offline

here's what you do, according to the link:

file: vfs_permission.c

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>

int vfs_permission(struct inode *inode, int mask)
{
    int rc;
    rc = generic_permission(inode, mask, NULL);
    return rc;
}

EXPORT_SYMBOL(vfs_permission);
MODULE_LICENSE("GPL");

file: Makefile

ifneq ($(KERNELRELEASE),)
obj-m := vfs_permission.o
else
KDIR := /lib/modules/$(shell uname -r)/build
PWD  := $(shell pwd)

default:
	$(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules
endif

put these in a directory, run make; it will create a vfs_permission.ko module... put that somewhere you want and insmod it. then you should be able to modprobe fuse

Offline

No, nein, niet, it doesn't work ("nie działa"). When I run 'make' it spits out: "Nothing to do in default" (translated from the Polish "Nie nic do roboty w default"). I don't understand anything anymore.

Offline

hmmmm

# make -C /lib/modules/$(shell uname -r)/build SUBDIRS=$(shell pwd) modules

try just that?

Offline

or actually... change "default" to "all" in the original Makefile....
though that shouldn't matter Offline hmmmm # make -C /lib/modules/$(shell uname -r)/build SUBDIRS=$(shell pwd) modules try just that? That did it. Thanks Russians. Thanks Phrakture. Thanks z4ziggy. ) Offline Pages: 1
https://bbs.archlinux.org/viewtopic.php?pid=64547
CC-MAIN-2016-36
refinedweb
367
58.08
Ping-Pong Example Reset

Greetings, I am currently using two LoPy 1.0 boards, one plugged into a breadboard and the other attached to a Pytrack. I am currently running the code you see below, pretty much the basic Ping-Pong example. I have a question: when this example hangs, due to saturation or some other natural reason, my Node_A, which is always listening, can never see Node_B's 'Ping'. It won't find it again until I reset Node_B, which just sends the ping. What is that reset doing in Node_B? Intuitively I would have thought Node_A would need to be reset, as if there were a buffer issue, however resetting Node_A has no effect and it continues to not see the 'Ping'. (Side note: with the timestamp from lora.stats(), where is that data coming from, and how do I decode it into meaningful data?)

Node_A (just listens and reports data):

if s.recv(64) == b'Ping':
    #print("Sending pong")
    data = lora.stats()
    print("Stats %s" % data)
    pycom.rgbled(0x0000ff)  # blue
    time.sleep(.5)
    pycom.rgbled(0x00ff00)  # green
    #s.send('Pong')
    #time.sleep(1)

Node_B just sends:

import socket
import time
import pycom

#NODE, tx_power = 20)
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setblocking(False)
pycom.heartbeat(False)
pycom.rgbled(0x007f00)  # green

while True:
    s.send('Ping')
    time.sleep(5)
    pycom.rgbled(0x0000ff)  # blue
    time.sleep(1)
    pycom.rgbled(0x00ff00)  # green
They normally come bundled with the oscilloscope. If pictures are too large, you cannot upload them. Therefore I scale them down first. Edit: Siglent is a good brand. LeCroy relabels Siglent devices for their budget series. @robert-hh Using a 1N4148 this is what I keep getting. I'm using a foreign oscilloscope, but I believe I have it set like yours (10ms/div, 100mv/div. I also set triggers both rising and falling edge around +-50mV and get nothing. Below are some images ( had issues with the normal insert on here). How close are you to your node A? Osc. when just reading Setup @burgeh A 1N5817 might be too slow (capacitance 110pF). You would see a DC voltage when the device is sending. Below is a picture of the "Ping" Sending from lopy_b.py script. The Diode used is a 1N4148 (small signal, silicon, capacitance 4 pf). The level is 800mV,. With a fast schottky diode (BAR 11, capacitance 1.2 pF) I have seen up to 1.5V, so the noise does not matter. If the Anode of the diode is at GND, you have a negative pulse, if the anode is at the tip, a positive pulse. I used the x10 probe. @robert-hh What exactly should I look for? Won't 60Hz drown out such a low power signal? Also the signal is at 916 MHz I don't think many scopes (including mine) can go up that high. I have a schottky 1N5817 and did some basic testing to see if anything shows up...pretty inconclusive, doesn't change if the system is on or off it seems. @burgeh If you have an oscilloscope and a diode, you can use that, like shown in the picture below. The diode shown is a 1N4148, but a Schottky diode (e.g. BAR11) gives a stronger signal. The bandwidth of the oscilloscope does not matter, since the RF signal is rectified. It picks up the current, that's why in the picture the diode is at he bottom of the quarter wave antenna. For the PyCom antenna, you get the strongest signal with the diode body 4 cm away from the tip. If just shows the presence of RF, not the frequency. 
@robert-hh What would you suggest I use to detect if it is still present? I don't have a spectrum analyzer on hand. @burgeh The flashing just shows that the loop still continues. That's what I assumed, because of the print statement in the loop. But I wanted to see if still the RF signal is present. @robert-hh Sorry misinterpreted that a little bit. Yes the node_B is still sending from what I can tell. Looking at the code it still flashes blue and green. However, looking at it now maybe the s.send is in some error state and may be just failing (but the code would still blink). @burgeh I do not know. According to your tests, it is related to the sending device, that's why I wanted to tell, whether the device is still sending. If yes, either the content of could have changed (not "Ping"), or some other frequency was used, or ...... But I could net get a stable fault condition. I've noticed these inconsistencies as well, I've had node A receiving the message for 4 to 5 hours and then just stop, I'll reset and it will stop after 10 minutes. Does the s socket (something with the recv) need something to be cleared if it only receives part of a message, that way it can recover? @burgeh I have noticed initially some trouble to run that simple test, and just tried to do it again. I've seen some diverging behaviour: - in one attempt, the Node A stopped showing received permanently . in a second attempt it stopped for a while and then it recovered - in a third attempt, out of 400 messages of node B, one was skipped at node A It tried that because I wanted to tell, if at the time, node A does not get any messages any more, node B is still sending. But I could not create the first event any more.
https://forum.pycom.io/topic/3259/ping-pong-example-reset
CC-MAIN-2020-34
refinedweb
1,054
75.4
view raw I would like to call in the external script, or console React components of the method I tried to do so componentDidMount(){ window.mwap = this; } The return value of ReactDOM.render() is actually the mounted instance of the component: class Counter extends React.Component { constructor(props) { super(props); this.state = { count: 0 }; } inc() { this.setState({count: this.state.count + 1}); } render() { return ( <div> Count: {this.state.count} </div> ) } } const instance = ReactDOM.render( <Counter />, document.getElementById('app') ); instance.inc(); <script src=""></script> <script src=""></script> <div id="app"></div> If you need access to a deeply nested component, that's a bit more difficult, but this should get you started.
https://codedump.io/share/UYWQRDNru0pw/1/how-to-call-a-method-in-the-react-component-in-an-external-script
CC-MAIN-2017-22
refinedweb
111
52.87
A scatter matrix is exactly what it sounds like – a matrix of scatterplots. This type of matrix is useful because it allows you to visualize the relationship between multiple variables in a dataset at once. You can use the scatter_matrix() function to create a scatter matrix from a pandas DataFrame: pd.plotting.scatter_matrix(df) The following examples show how to use this syntax in practice with the following pandas DataFrame: import pandas as pd import numpy as np #make this example reproducible np.random.seed(0) #create DataFrame df = pd.DataFrame({'points': np.random.randn(1000), 'assists': np.random.randn(1000), 'rebounds': np.random.randn(1000)}) #view first five rows of DataFrame df.head() points assists rebounds 0 1.764052 0.555963 -1.532921 1 0.400157 0.892474 -1.711970 2 0.978738 -0.422315 0.046135 3 2.240893 0.104714 -0.958374 4 1.867558 0.228053 -0.080812 Example 1: Basic Scatter Matrix The following code shows how to create a basic scatter matrix: pd.plotting.scatter_matrix(df) Example 2: Scatter Matrix for Specific Columns The following code shows how to create a scatter matrix for just the first two columns in the DataFrame: pd.plotting.scatter_matrix(df.iloc[:, 0:2]) Example 3: Scatter Matrix with Custom Colors & Bins The following code shows how to create a scatter matrix with custom colors and a specific number of bins for the histograms: pd.plotting.scatter_matrix(df, color='red', hist_kwds={'bins':30, 'color':'red'}) Example 4: Scatter Matrix with KDE Plot The following code shows how to create a scatter matrix with a kernel density estimate plot along the diagonals of the matrix instead of a histogram: pd.plotting.scatter_matrix(df, diagonal='kde') You can find the complete online documentation for the scatter_matrix() function here. Additional Resources The following tutorials explain how to create other common charts in Python: How to Create Heatmaps in Python How to Create a Bell Curve in Python How to Create an Ogive Graph in Python
https://www.statology.org/scatter-matrix-pandas/
CC-MAIN-2021-39
refinedweb
333
56.45
Commit 9bc9a6bd authored by Committed by Linus Torvalds [PATCH] sysctl: simplify ipc ns specific sysctls Refactor the ipc sysctl support so that it is simpler, more readable, and prepares for fixing the bug with the wrong values being returned in the sys_sysctl interface. The function proc_do_ipc_string() was misnamed as it never handled strings. It's magic of when to work with strings and when to work with longs belonged in the sysctl table. I couldn't tell if the code would work if you disabled the ipc namespace but it certainly looked like it would have problems. 49 additions and 57 deletions
https://gitlab.flux.utah.edu/xcap/xcap-capability-linux/-/commit/9bc9a6bd3cf559bffe962c51efb062e8b5270ca9
CC-MAIN-2021-17
refinedweb
109
66.67
How to: Add a WPF-View with a Presenter The Add WPF-View (with presenter) recipe adds three items to an existing C# project in a smart client solution. It requires that you have an existing smart client solution with a C# project. The smart client solution must contain the Infrastructure.Interface project with the Presenter base class, located in the <rootnamespace>.Infrastructure.Interface namespace (for example, GlobalBank.Infrastructure.Interface). The recipe extends this class to create a specific presenter for the view. You can execute the Add WPF-View (with presenter) recipe to add a view interface, view, and presenter to your C# project. To add the view to your project - Using Visual Studio, open an existing smart client solution. - In Solution Explorer, right-click a C# project node, point to Add, point to Smart Client Factory, and then click Add WPF-View (with presenter), as shown in Figure 1. Figure 1 The Add WPF-View (with presenter) recipe menu - The recipe starts the Add WPF-View (with presenter) Wizard. Figure 2 illustrates the first page of the wizard. Figure 2 The Add View (with presenter) recipe wizard - (Optional) If you want to create a new folder in the project where the recipe will create the new items, select the Create a folder for the view check box. If this check box is clear, the new items are placed in the project's root folder. - (Optional) If you want to see a summary of the recipe actions and suggested next steps after the recipe completes, select the Show documentation after recipe completes check box. When you use the Add WPF-View recipe to add a WPF view to a project, the recipe creates a XAML file for the view. When you compile the project, Visual Studio compiles the XAML file (it creates a BAML file). Visual Studio requires the following Import declaration in the project file to allow it to compile the XAML file. 
The Add WPF-View recipe modifies the project file to add this declaration (if the declaration does not already exist in the file). When it does this, you will receive the warning dialog box illustrated in Figure 3. Figure 3 File Modification Detected dialog box Click Reload to reload the project with the new Import statement. You will have the following project items added to the selected project: - A view interface class. This is an empty interface definition for the view. You modify this interface to define the public interface to the view. - A view implementation user control for the view implementation. The class has the SmartPart attribute, derives from UserControl, and implements the view interface. It contains a reference to its presenter. You modify this class to call the presenter for user interface actions that affect other views or business logic. - A presenter class for the view. This class extends the Presenter base class and contains the business logic for the view. You modify this class to update the view for your business logic. If you selected the Create a folder for the view check box, you will have a new folder created that contains the preceding items. The following are typical tasks that you perform after you create a view with presenter: - Design the UI of the view. Add controls to the view and forward user events to the presenter. - Define the public view interface. The view should expose its state through its public interface. - Create unit tests and implement the presenter. Handle user events and update the view state. Interact with services and the module controller to execute business and navigation logic.
http://msdn.microsoft.com/en-us/library/ff650863.aspx
CC-MAIN-2013-20
refinedweb
600
64.3
(This article was first published on John Myles White » Statistics, and kindly contributed to R-bloggers) The news below was recently reported on the ProjectTemplate mailing list. For completeness, I’m also reporting it here. - The first piece of ProjectTemplate news is that I won’t be the exclusive maintainer for ProjectTemplate anymore. Allen Goodman, who works at BankSimple, is now my co-maintainer and he has full commit privileges. In the next few months, the emerging group with commit privileges is likely to grow beyond the two of us, but hopefully just having one more person in charge of ProjectTemplate’s development will help to keep things moving forward. - There’s a new draft of ProjectTemplate available on GitHub. v0.3-1 fixes problems with the YAML configuration system not working on Windows 64 machines by switching over to the DCF format that R naturally supports. Editing your configuration scripts should be trivial, but be prepared for ProjectTemplate to break on your existing v0.2-1 projects until you’ve updated them to use DCF instead of YAML. - In addition to switching the configuration system over to DCF, ProjectTemplate v0.3-1 now uses namespaces and separate functions to implement all of the automatic data loading functions that were previously nested inside of load.project(). Hopefully this will make it easier for end users to override ProjectTemplate’s defaults, while allowing ProjectTemplate releases to automatically rolls out bug fixes to less advanced users. On that note, the list of supported file formats for automatic data loading is growing and new patches on that front are always welcome. - A minimal project format: Some people have asked for the option to create projects without some of the clutter that the standard project format creates, such as the diagnostics and profiling directories. There’s now a minimal project format that you can use by invoking create.project()with the option create.project(minimal = TRUE). 
- Starting in two weeks, the version of ProjectTemplate available on CRAN will stay in pace with the version on GitHub. If you’re still using v0.1-3, please consider upgrading or forking. - There is now an official ProjectTemplate website at that will hopefully be the start of a new era of better documentation for ProjectTemplate. While the material on the site is still in noticeably draft form, I expect the documentation to improve considerably in the near future. If anyone out there is a graphic designer and would like to make the new site look better, please let me know by e-mailing me at [email protected]. For now that’s all, but there’s more ProjectTemplate news coming soon. Stay tuned!...
http://www.r-bloggers.com/projecttemplate-news/
CC-MAIN-2015-27
refinedweb
445
53
Configure Knox for HA

Knox provides basic failover and retry functionality for REST API calls made to a service when service HA has been configured and enabled. To enable HA functionality in Knox, the following configurations must be added to the topology file, including how many milliseconds the process will wait or sleep before a retry is issued.

enabled -- Flag to turn the particular service on or off for HA.

zookeeperEnsemble -- A comma-separated list of host names (or IP addresses) of the ZooKeeper hosts that make up the ensemble that the Hive servers register their information with. This value can be obtained from Hive's config file hive-site.xml as the value of the parameter 'hive.zookeeper.quorum'.

zookeeperNamespace -- This is the namespace under which HiveServer2 information is registered in the ZooKeeper ensemble. This value can be obtained from Hive's config file hive-site.xml as the value of the parameter 'hive.server2.zookeeper.namespace'.

Example:

<service> <role>{COMPONENT}</role> <url>http://{host1}:50070/{component}</url> <url>http://{host2}:50070/{component}</url> </service>

Example for Hive (note that there is no <url> tag specified here, as the URLs for the Hive servers are obtained from ZooKeeper):

<service> <role>HIVE</role> </service>
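For orientation, HA parameters like these are normally set on Knox's HaProvider in the same topology file. The sketch below is an assumption based on common Knox HA examples, not taken from this page; verify the parameter names and values against your Knox version's documentation before using them.

```xml
<provider>
    <role>ha</role>
    <name>HaProvider</name>
    <enabled>true</enabled>
    <param>
        <name>HIVE</name>
        <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true;zookeeperEnsemble=host1:2181,host2:2181,host3:2181;zookeeperNamespace=hiveserver2</value>
    </param>
</provider>
```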
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/configuring-proxy-knox/content/ha_configure_knox_for_ha.html
CC-MAIN-2019-13
refinedweb
226
55.95
The goal of this site is to put relevant and applicable tools and information at your fingertips. With this blog we want to inform you on our latest initiatives. Enjoy reading and stay tuned!

In this hands-on lab, you will learn how to build an Apache Cordova app that stores data in a local database, running the same code on Android, iOS, Windows Phone 8 or Windows 8. You will also learn how to use the contacts plugin, giving you access to the device's contacts list in the same way across platforms thanks to Cordova. Try it out.

WinJS + Cordova introduction

WinJS is a library of CSS and JavaScript files. It contains JavaScript objects, organized into namespaces, designed to make developing Windows Store apps using JavaScript easier. WinJS includes objects that help you handle activation, access storage, and define your own classes and namespaces. It also includes a set of controls such as DatePicker, FlipView, ListView, SearchBox, Menu, NavBar, and more. WinJS also provides styling features in the form of CSS styles and classes that you can use or override. Initially developed for Windows Web Apps, WinJS has been open sourced by MS Open Tech and can now be used to build websites. You can learn more on the open sourcing of WinJS and what you can do with it. In this lab, you will try out WinJS to build a simple Apache Cordova app that has advanced graphical controls.

OpenCV on Windows Phone Applications

In this hands-on lab, you will learn how to use OpenCV on Windows Phone. In fact you will use an interop object between C# and C++: C++ for OpenCV and C# for the Windows Phone 8.0 project (there is no C++ project type on 8.0, only on 8.1). With this interop component you'll display the camera frames and apply some OpenCV filters. In the second part of the lab you'll modify the interop object itself to enable a new OpenCV feature.
Python and MongoLab (MongoDB in the Cloud) In this hands-on lab, you will learn how to create a MongoDB database on Microsoft Azure, as well as adding records to the database and reading the records using PyMongo. PyMongo is a Python distribution containing tools for working with MongoDB. Redis on Windows In this hands-on lab, you will learn how to install Redis on Windows and how to access Redis from a Python script running on Windows, through use of the redis-py module. (Redis-py is a popular Python interface to Redis.) You will write a script that displays a pattern of text on the screen and publishes that pattern to a Redis channel. You will then write a second script that “listens” on the same Redis channel and prints received messages on the screen. Using Kinect v2 sensor with openFrameworks in WinRT applications In this hands-on lab, you will learn how to use the Kinect sensor v2 in an openFramework application running on Windows 8. We use an openFramework version available on GitHub, in MSOpenTech repositories You will build on this knowledge to learn how to implement a C++ modern class that allows you use the Kinect v2 WinRT object. You will learn how to transpose the sensor data (pixel, depth, Body) into openFrameworks graphic classes. Ahead There are three great plugin and Unity3d Asset available today Prime 31 Windows 8 Windows 8 Check the release notes page for up to the minute information about each release. Microsoft Store Plugin The Microsoft Store Plugin lets you offer your app as a free trial and sell in app purchases. Get full access to the available license information for your app and all of your products. Includes Windows 8 and Windows 8.1 support. This plugin requires Unity 4.5+.Download Now Metro Essentials Plugin There is something here for everyone. The Metro Essentials Plugin exposes RoamingSettings (similar to Apples iCloud), live tiles, toasts, the settings charm, the share charm and snap events. 
All the goodies you need to metro-ize your game are here! This plugin requires Unity 4.5+.Download Now Social Networking Plugin (Twitter and Facebook) $75 Let word of mouth sell your game for you with this plugin! Post high score updates and achievements to the biggest social networking sites out there with just a couple lines of code. It's all here. For those who want to dig deeper into the Graph API with it's wealth of information, we support that too. Twitter integration includes all the usual suspects including posting updates, getting a users followers and full access to the entire Twitter API! This plugin requires Unity 4.5+.Buy Now Microsoft Ads Plugin Access the Microsoft Advertising SDK to display ad banners and monetize your app. With over 15 banner variants to choose from you are guaranteed to find one that fits your game without ruining the experience for your users. This plugin requires Unity 4.5+.Download Now Microsoft Azure Plugin The Azure web services are a powerful way to store data. Accessing them from a Unity game has never been easier. One line of code is all it takes to store any object remotely and securely on Azure. Retriving and deleting objects is just as easy.Download Now Flurry Analytics Plugin Use Flurry to learn the habits of your users! Collect valuable data like which devices they're using, if your game levels have appropriate difficulty and if anyone's bothering to read your tutorial. Sprinkle calls to log events throughout your code and watch the statistics roll in! This plugin requires Unity 4.5+. Windows Phone Windows Phone Check the release notes page for up to the minute information about each release. The Microsoft Store Plugin lets you offer your app as a free trial and sell in app purchases. Get full access to the available license information for your app and all of your products.Download Now Let word of mouth sell your game for you with this plugin! 
Post high score updates and achievements to the biggest social networking sites out there with just a couple lines of code. It's all here. For those who want to dig deeper into Facebook's Graph API with it's wealth of information, we support that too. Twitter integration includes all the usual suspects including posting updates, getting a users followers and full access to the entire Twitter API! Buy Now AdMob Plugin $40 AdMob has finally made its way to Windows Phone 8! Create and display a banner with one line of code. Full interstitial support is also included for ultra high CPM adverts. This plugin requires Unity 4.5+. Buy Now Access the Microsoft Advertising SDK to display ad banners and monetize your app. Monetize your free apps with Microsofts high CPM ad banner solution today! This plugin requires Unity 4.5+. Download Now Windows Phone Essentials Plugin There is something here for everyone. The Windows Phone Essentials Plugin exposes live tiles and push notifications to Unity. A host of Windows Phone sharing tasks are also exposed including the SMS composer, email composer, web browser, link sharing, status update sharing, rate this app, photo chooser and more! All the goodies you need to metro-ize your game are here! Download Now Flurry Analytics Plugin $45 Use Flurry to learn the habits of your users! Collect valuable data like which devices they're using, if your game levels have appropriate difficulty and if anyone's bothering to read your tutorial. Sprinkle calls to log events throughout your code and watch the statistics roll in! Buy Now BitRave Bit Rave have extensive experience working with the Windows 8 platform capabilities, and as part of that we decided to build a library for Unity to make Windows 8 integration easier for everyone. Azure plugins are now separate. Information can be found here: Azure plugins are now separate. Information can be found here: Find out just how easy it is for each of the Windows 8 capabilities. 
The Azure Mobile Services plugin for Unity 3D is available open source at GitHub. That's the place to go if you want to contribute or look at the source. However, if you don't care about the source and just want to use it, head to GitHub anyway, as there is an example project with built binaries in it, so you can just grab it and use it.

WinBridge
The WinBridge is a plugin for Unity that enables easier command of native controls and features of WinRT (the underlying library behind Windows Store, Windows Phone and Xbox One apps). Currently implemented are:
- Windows Store (in-app purchases, trial upgrade, receipt management, Windows Store debugging)
- Native message dialogs
- Native and hardware-accelerated video playback
This plugin is an open-source project by Microsoft developer evangelists, aiming to make porting to and development for Windows, Windows Phone and Xbox platforms easier. This plugin ships with compiled DLLs, but the full source is available as well.

The current license state of your app is stored as properties of the LicenseInformation class. Typically, you put the functions that depend on the license state in a conditional block, as we describe in the next step. When considering these features, make sure you can implement them in a way that will work in all license states. Also, decide how you want to handle changes to the app's license while the app is running. Your trial app can be full-featured, but have in-app ad banners where the paid-for version doesn't. Or, your trial app can disable certain features, or display regular messages asking the user to buy it. Think about the type of app you're making and what a good trial or expiration strategy is for it. For a trial version of a game, a good strategy is to limit the amount of game content that a user can play. For a trial version of a utility, you might consider setting an expiration date, or limiting the features that a potential buyer can use.
For most non-gaming apps, setting an expiration date works well, because users can develop a good understanding of the complete app. Here are a few common expiration scenarios and your options for handling them.

Trial license expires while the app is running
If the trial expires while your app is running, your app can respond in several ways. The best practice is to display a message with a prompt for buying the app and, if the customer buys it, continue with all features enabled. If the user decides not to buy the app, close it or remind them to buy the app at regular intervals.

Trial license expires before the app is launched
If the trial expires before the user launches the app, your app won't launch. Instead, users see a dialog box that gives them the option to purchase your app from the Store.

Customer buys the app while it is running
If the customer buys your app while it is running, there are some actions your app can take. When the license change event is raised, your app must call the License API to determine if the trial status has changed. The code in this step shows how to structure your handler for this event. At this point, if a user bought the app, it is a good practice to provide feedback to the user that the licensing status has changed. You might need to ask the user to restart the app if that's how you've coded it. But make this transition as seamless and painless as possible.

Include code to determine the app's trial expiration date. Be sure to explain how your app will behave during and after the free trial period so your customers won't be surprised by your app's behavior. For more info about describing your app, see Your app's description.
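The scenarios above can be condensed into a single license handler. Here is a pseudocode sketch — the property names (IsActive, IsTrial, ExpirationDate, LicenseChanged) follow the Windows Store LicenseInformation class, but this is an outline of the flow, not a definitive implementation:

on app launch:
    license = the app's current LicenseInformation
    subscribe handleLicenseChanged to license.LicenseChanged
    handleLicenseChanged()   # evaluate the state once at startup too

handleLicenseChanged():
    if license.IsActive and not license.IsTrial:
        # The customer bought the app (possibly while it was running):
        # unlock all features and acknowledge the purchase.
    else if license.IsActive and license.IsTrial:
        # Still inside the trial window: enable the trial feature set
        # and show how long remains (license.ExpirationDate - now).
    else:
        # Trial expired: prompt to buy, then close the app or keep
        # reminding the user at regular intervals.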
Golang Quickstart
go-appbase is a universal Golang client library for working with the appbase.io database.
It can:
- Index new documents or update / delete existing ones.
- Stream updates to documents, queries or filters using http-streams.
It can't:
- Configure mappings, change analyzers, or capture snapshots. These are provided by Elasticsearch client libraries. We recommend the golang elastic library by Olivere.
Appbase.io - the database service is opinionated about cluster setup and hence doesn't support the ElasticSearch devops APIs. See rest.appbase.io for a full reference on the supported APIs.
This is a quick start guide to whet the appetite with the possibilities of data streams. The full client API reference can be found here.

Creating an App
This gif shows how to create an app on appbase.io, which we will need for this quickstart guide. Log in to the appbase.io dashboard, and create a new app. For this tutorial, we will use an app called newstreamingapp. The credentials for this app are meqRf8KJC:65cc161a-22ad-40c5-aaaf-5c082d5dcfda.
Note appbase.io uses HTTP Basic Auth for authenticating requests.

Import go-appbase
We will fetch and install the go-appbase lib using go get.
go get -t github.com/appbaseio/go-appbase
Adding it in the project should be a one line import syntax.
import "github.com/appbaseio/go-appbase"
To write data or stream updates from appbase.io, we need to first create a reference object. We do this by passing the appbase.io API URL, app name, and credentials into the appbase.NewClient constructor:
client, err := appbase.NewClient("", "meqRf8KJC", "65cc161a-22ad-40c5-aaaf-5c082d5dcfda", "newstreamingapp")
if err != nil {
    log.Println(err)
}

if err := client.Ping(); err != nil {
    log.Println(err)
}

// Import the `fmt` package before printing
fmt.Println("Client created")

Storing Data
Once we have the reference object (called client in this tutorial), we can insert any JSON data into it with the Index() method.
const jsonObject = `{
    "department_name": "Books",
    "department_name_analyzed": "Books",
    "department_id": 1,
    "name": "A Fake Book on Network Routing",
    "price": 5595
}`

result, err := client.Index().Type("books").Id("1").Body(jsonObject).Do()
if err != nil {
    log.Println(err)
    return
}
fmt.Println("Data Inserted. ID: ", result.ID)
where type: 'books' indicates the collection (or table) inside which the data will be stored and the id: "1" is a unique identifier of the data.
Note appbase.io uses the same APIs and data modeling conventions as ElasticSearch. A type is equivalent to a collection in MongoDB or a table in SQL, and a document is similar to the document in MongoDB or a row in SQL.

GETing or Streaming Data
Unlike typical databases that support GET operations (or Read) for fetching data and queries, appbase.io operates on both GET and stream modes.

Getting a Document Back
We will first apply the GET mode to read our just inserted object using the Get() method.
response, err := client.Get().Type("books").Id("1").Do()
if err != nil {
    log.Println(err)
    return
}

// MarshalIndent for pretty printing JSON
document, err := json.MarshalIndent(response.Source, "", "  ")
if err != nil {
    log.Println("error:", err)
}
fmt.Println("Document: ", string(document), ", ", "Id: ", response.Id)
should print:
Document: {
  "department_name": "Books",
  "department_name_analyzed": "Books",
  "department_id": 1,
  "name": "A Fake Book on Network Routing",
  "price": 5595
}, Id: "1"

Subscribing to a Document Stream
Now let's say that we are interested in subscribing to all the state changes that happen on a document. Here, we would use the GetStream() method over Get(), which keeps returning new changes made to the document.
response, err := client.GetStream().Type("books").Id("1").Do()
if err != nil {
    log.Println(err)
}

for {
    data, _ := response.Next()

    // MarshalIndent for pretty printing JSON
    document, err := json.MarshalIndent(data.Source, "", "  ")
    if err != nil {
        log.Println("error:", err)
    }
    fmt.Println("Document: ", string(document), ", Id: ", data.Id)
}
Don't be surprised if you don't see anything printed; GetStream() only returns updates made to the document after you have subscribed.

Observe the Updates in Realtime
Let's see live updates in action. We will modify the book price in our original jsonObject variable from 5595 to 6034 and apply Index() again. For brevity, we will not show the Index() operation here.
GetStream() Response:
Document: {
  "department_name": "Books",
  "department_name_analyzed": "Books",
  "department_id": 1,
  "name": "A Fake Book on Network Routing",
  "price": 6034
}, Id: "1"
In the new document update, we can see the price change (5595 -> 6034) being reflected. Subsequent changes will be streamed as JSON objects.
Note: Appbase always streams the final state of an object, and not the diff between the old state and the new state. You can compute diffs on the client side by persisting the state using a composition of (_type, _id) fields.

Streaming Rich Queries
Streaming document updates are great for building messaging systems or notification feeds on individual objects. What if we were interested in continuously listening to a broader set of data changes? The SearchStream() method scratches this itch perfectly. In the example below, we will see it in action with a match_all query that returns any time a new document is added to the type 'books' or when any of the existing documents are modified.
const matchAllQuery string = `{"query":{"match_all":{}}}`

response, err := client.SearchStream().Type("books").Body(matchAllQuery).Do()
if err != nil {
    log.Println(err)
    return
}

// Now we index another object.
const anotherBook string = `{"department_name": "Books", "department_name_analyzed": "Books", "department_id": 2, "name": "A Fake Book on Load balancing", "price": 7510}`

_, err = client.Index().Type("books").Id("3").Body(anotherBook).Do()
if err != nil {
    log.Println(err)
    return
}

// This should trigger a new streaming match.
data, err := response.Next()
if err != nil {
    log.Println(err)
    return
}
fmt.Println("Id: ", data.Id)

// MarshalIndent for pretty printing JSON
document, err := json.MarshalIndent(data.Source, "", "  ")
if err != nil {
    log.Println("error:", err)
}
fmt.Println("Document: ", string(document))
Response when the data changes:
Id: "3"
Document: {
  "department_name": "Books",
  "department_name_analyzed": "Books",
  "department_id": 2,
  "name": "A Fake Book on Load balancing",
  "price": 7510
}
Note: Like GetStream(), SearchStream() subscribes to the new matches.

Streaming Rich Queries to a URL
SearchStreamToURL() streams results directly to a URL instead of streaming back. In the example below, we will see it used with a match_all query that sends a request to a URL any time a new document is added to the type books.
// Similar to NewClient, we will instantiate a webhook instance with appbase.NewWebhook()
webhook := appbase.NewWebhook()

// Webhook instances need to have a URL, method and body (which can be a string or a JSON object)
webhook.URL = ""
webhook.Method = "POST"
webhook.Body = "hellowebhooks"

const matchAllQuery string = `{"query":{"match_all":{}}}`

response, err := client.SearchStreamToURL().Type("books").Query(matchAllQuery).AddWebhook(webhook).Do()
if err != nil {
    log.Println(err)
    return
}

stopSearchStream, err := response.Stop()
if err != nil {
    log.Println(err)
    return
}
fmt.Println(response.Id == stopSearchStream.Id)
SearchStreamToURL() response:
true
In this tutorial, we have learnt how to index new data and stream both individual data as well as query results. Go check out the full Golang client reference over here.
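The note earlier suggests computing diffs on the client by keying persisted state on (_type, _id). Here is a minimal, self-contained sketch of that idea — the Doc struct and Diff function are my own illustration, not part of the go-appbase API:

```go
package main

import (
	"fmt"
	"reflect"
)

// Doc mirrors the fields a streamed response exposes: the type,
// the id, and the decoded Source document.
type Doc struct {
	Type   string
	Id     string
	Source map[string]interface{}
}

// key composes (_type, _id) into a single lookup key for persisted state.
func key(d Doc) string { return d.Type + "/" + d.Id }

// Diff returns the fields of next that changed since the last time this
// (_type, _id) pair was seen, then records next for future comparisons.
func Diff(seen map[string]map[string]interface{}, next Doc) map[string]interface{} {
	changed := map[string]interface{}{}
	prev := seen[key(next)]
	for field, value := range next.Source {
		if prev == nil || !reflect.DeepEqual(prev[field], value) {
			changed[field] = value
		}
	}
	seen[key(next)] = next.Source
	return changed
}

func main() {
	seen := map[string]map[string]interface{}{}
	first := Doc{"books", "1", map[string]interface{}{"name": "A Fake Book on Network Routing", "price": 5595}}
	update := Doc{"books", "1", map[string]interface{}{"name": "A Fake Book on Network Routing", "price": 6034}}

	fmt.Println(Diff(seen, first))  // on first sight, every field counts as changed
	fmt.Println(Diff(seen, update)) // afterwards, only the price shows up
}
```

Feeding each document from response.Next() through a function like Diff would yield per-field changes instead of full documents.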
Finite state machines, generally speaking, are a computation model used to describe a system in terms of the transitions between its states. They make it really easy to visualize your computational model as a graph and to make sure that your application isn't left with unhandled cases or illegal states.
Warning: Read the wikipedia page on Finite State Machines at your own risk. Side-effects of a visit to that page include mild headaches, an annoying twitch in your eye and even death. If you're a nerd, you are immune to the side-effects so knock yourself out.
I am not gonna get into the theory bit of FSMs because it gets super mathematical, really fast. Also, I don't know much about it. So here's a more practical take on FSMs.
Before we get into any libraries, let's implement a Finite State Machine ourselves and try to understand what they mean. FSMs are about two things – a finite set of states and a finite set of valid transitions between them.

What Is 'Finite set of states'?
The state of any system should be predictable. We should design our systems in such a way that the number of combinations of states is minimized so that there are fewer cases left to handle. Let's take a traffic signal as an example:
const trafficLight = {
  isRed: true,
  isGreen: false,
  isYellow: false,
};
Here, you have 3 flags describing the current state of a traffic signal. But what happens when isRed and isGreen are both true? With three boolean flags there are 2³ = 8 possible combinations you have to handle. Is that state valid? Have you handled this possibility in your application? Should you handle it? The answer to all of them is a strong NO in a bold font.
Well, that sucks. What if, instead, we had one piece of state?
type TrafficSignal = { color: 'red' | 'green' | 'yellow' };

const trafficSignal: TrafficSignal = {
  color: 'green',
};
This is a lot simpler and it makes sure that there is no invalid state for the traffic light color. The light will either be red or green or yellow.
Which means you only have 3 cases to handle in your application, i.e. less code. Life hacked! Thank you.

What is 'State transitions'?
This is the second and the most crucial part about FSMs. As you may have seen with useState in react, you get the current state and a state setter to manage a piece of state. Let's enhance that. Let's make a state setter:
type TrafficLight = 'red' | 'green' | 'yellow';

let state: TrafficLight = 'red';
const setState = (light: TrafficLight) => (state = light);
In this system, you cannot have an invalid state, i.e. light will always be either red, green or yellow. So everything's awesome, right? WRONG! What if the state was green and someone called setState('red')? That is an invalid state transition as it skips over the yellow state that traffic lights are expected to go through. How do we solve that?
const getNextState = (currentState: TrafficLight): TrafficLight =>
  ({
    green: 'yellow',
    yellow: 'red',
    red: 'green',
  }[currentState]);

const nextState = () => setState(getNextState(state));
Now this system is very strict about the number of valid states and it has a finite set (only 1) of valid transitions. WE HAVE A FINITE STATE MACHINE! You can visualize the state machine as a cycle: green → yellow → red → green.

Yeah, cool cool. What do I do with this?
FSM gives us a very predictable, declarative, readable and elegant system. Also, my tiny little brain can't keep track of too many pieces of state simultaneously, so I find finite state machines really helpful in making sense of my own code.
Obviously the example above is not all that powerful, but it presents us with a model that will allow us to write code that we're sure about and is one step towards writing cleaner, more reliable code.

I wanna see more code. Show me the code.
Alrighty then. There are a few good FSM libraries on npm. The most popular one amongst them is xstate. Here's the traffic lights example using xstate. The API used to define states and their transitions is called a statechart.
const stateChart = {
  id: 'trafficLight',
  initial: 'green',
  states: {
    green: {
      on: { NEXT: 'yellow' },
    },
    yellow: {
      on: { NEXT: 'red' },
    },
    red: {
      on: { NEXT: { target: 'green' } }, // Same as the ones before
    },
  },
};
This statechart describes the valid states as being 'green', 'yellow' and 'red', and the only available transition is NEXT. The states can be read as "If it's green right now, on NEXT it'll transition to target yellow".
Here's how you use xstate:
import { Machine } from 'xstate';

// Create a state machine
const trafficLight = Machine(stateChart);

// Trigger the `NEXT` transition from `green` to the next state
const { value: nextLight } = trafficLight.transition('green', 'NEXT');

// If `green`, on `NEXT` transition to `yellow`
expect(nextLight).toBe('yellow');
Shoutout to all the react nerds in the audience! Here's a simple hooks example to get you hooked on state machines.
import { useState } from 'react';

function useStateMachine(machine) {
  const [state, setState] = useState(machine.initialState.value);

  function dispatch(transition) {
    const { value: nextState } = machine.transition(state, transition);
    setState(nextState);
  }

  return { state, dispatch };
}
import useTinyStateMachine from 'use-tiny-state-machine'; const stateChart = { id: 'trafficLight', initial: 'green', states: { green: { on: { NEXT: 'yellow' }, }, yellow: { on: { NEXT: 'red' }, }, red: { on: { NEXT: 'green' }, }, }, }; function TrafficLights() { const { cata, state, dispatch } = useTinyStateMachine(stateChart); // cata let's you reduce the current state to a value or a function call const lightColor = cata({ green: '#51e980', red: '#e74c3c', orange: '#ffa500', }); return ( <> <div className="trafficLight" style={{ backgroundColor: lightColor }} /> <div>The light is {state}</div> <button onClick={() => dispatch('NEXT')}>Next light</button> </> ); } Codesandbox demo Here’s a simple login form page example with actions, context and multiple transitions, import useTinyStateMachine from 'use-tiny-state-machine'; function LoginPage({ onSubmit }) { const { cata, dispatch } = useTinyStateMachine({ id: 'loginForm', initial: 'home', context: { username: '', }, states: { home: { on: { OPEN_FORM_LOGIN: 'loginForm', OPEN_FORM_SIGNUP: 'signupForm', }, }, loginForm: { on: { SUBMIT: { target: 'submitted', // Target state action: submitFormData, // The transition triggers this action }, SWITCH_FORM: 'signupForm', }, }, signupForm: { on: { SUBMIT: { target: 'submitted', action: submitFormData, }, SWITCH_FORM: 'loginForm', }, }, submitted: { onEntry: () => (window.location.href = '/dashboard'), // Redirect to wherever after login }, }, }); function submitFormData({ updateContext }, data) { updateContext({ username: data.username }); onSubmit(data); } return cata({ home: () => ( <> <button onClick={() => dispatch('OPEN_FORM_LOGIN')}>Login</button> <button onClick={() => dispatch('OPEN_FORM_SIGNUP')}>Signup</button> </> ), loginForm: () => ( <> <button onClick={() => dispatch('SWITCH_FORM')}>Signup</button> <LoginForm onSubmit={data => dispatch('SUBMIT', data)} /> </> ), signupForm: () => ( <> <button onClick={() => dispatch('SWITCH_FORM')}>Login</button> <SignupForm 
onSubmit={data => dispatch('SUBMIT', data)} /> </> ), submitted: ({ context: { username } }) => ( <div>You have been logged in as {username}. Redirecting...</div> ), }); } Codesandbox demo Here, context (not to be confused with react’s context) is some additional data that you want associated with your state. You can use context to store information that the future states may need. The fixed set of transitions makes sure that the future state will have the required context data. This is what the state chart looks like visually, To visualize the state charts you make, you can use the xstate visualizer. Even though use-tiny-state-machine and xstate have slightly different apis, you can still use the visualizer pretty well. Comparisons with useReducer Reducers are a good step forward but actions are not handled well and the logic you end up writing for it becomes very imperative. One way to tackle this is to make the effect execution declarative as well. With that data and effect, both turn into reduce operations(state -> state; effect -> effect). Here’s how to do that… Note: This idea was stolen from a David K. Piano tweet but I can’t find the exact tweet right now to link. function reducer({ state }, { type, payload }) { switch (state.type) { case 'idle': return type === 'FETCH' ? { effect: { type: 'fetch', payload }, state: { status: 'pending' }, } : { state, effect: null }; case 'pending': return { effect: null, state: type === 'RESOLVE' ? 
{ status: 'success', data: payload } : { status: 'failure', error: payload }, }; default: return { state, effect: null }; } } function handleAction({ effect }, dispatch) { switch ((effect || {}).type) { case 'fetch': { fetchMyData(effect.payload) .then(data => dispatch({ type: 'RESOLVE', payload: data })) .catch(error => dispatch({ type: 'REJECT', payload: error })); break; } default: return; } } const initialState = { state: { status: 'idle' }, effect: null }; function MyComponent({ id }) { const [{ state, effect }, dispatch] = useReducer(reducer, initialState); useEffect(() => handleAction({ state, effect }, dispatch), [effect]); return ( <> <button onClick={() => dispatch({ type: 'FETCH', payload: { id } })}>Start fetchin</button> <div> {state.status === 'success' && 'Fetched data'} {state.status === 'failure' && `Error: ${state.error}`} </div> </> ); } So what does this mean for me? Well, you can start by looking back at your old code and re-considering the application state there. Although, like a lot of things in programming, this is not a one size fits all scenario, the idea of FSM’s is pretty universal wherever you find the need for state. xstate and use-tiny-state-machine are just tools. FSM is a concept. Even if the tools don’t fit your use case, you can still apply the concept of FSM almost everywhere.
Microblog with Django
This page is a wiki. If you have some crucial tips and code you want to add to make this page better, load up your digital typewriting ribbon and have at it.

Let's get ready to Tumble
These days, many of us follow our friends on services like FriendFeed or Facebook, where the concept of a "blog post" is much looser, smaller and faster. Whether you want to call it a newsfeed, lifestream, microblog, tumblelog or a dozen other clever portmanteaus or neologisms, the concept is the same: a site to log snippets of your life and its discoveries in one place.
FriendFeed is a favorite. The service combines tweets from Twitter, photos from Flickr, links from Delicious, updates from Facebook and other sundry data and displays them all on one page. The page itself also has an RSS feed, giving you (or any of your many fans) a way to follow your every move.
Let's build a version of FriendFeed using Django. We'll build a page displaying both our links and blog posts together in a single, time-based "river." As a bonus, we'll add in some RSS sugar so our friends can follow along without having to visit the site.

Laying the groundwork
Before we get started, let's think for a bit about what needs to be done. We will post data using two separate models: the blog Entry model and the Link model. We need to query them both at the same time and then sort the feed into reverse chronological order. The key to making it work is creating a way to normalize our data. The database tables for each of these models are significantly different. Instead of querying both models, wouldn't it be better if we just query against one table? Let's do it.

How it works
There's one thing all models will have in common: a publication date. What we're going to do is create a kind of meta model which stores the date as well as what kind of data it is (i.e. blog entry, link, etc).
Django ships with two tools which actually make it very easy to build our meta model. The first is the content types framework. As the docs explain, "instances of ContentType represent and store information about the models installed in your project, and new instances of ContentType are automatically created whenever new models are installed." Huh? Basically, what the Django Project is telling us is that just by referencing the content type, we will always know what type of content we're storing.
Storing the content type is half the battle. The other half of the battle will be won by using a built-in field known as a GenericForeignKey. You may have noticed a ForeignKey field in the Django docs. ForeignKey is similar to what we're going to use. Where ForeignKey refers to a single model, our GenericForeignKey can refer to any model by referencing the content_type instead.
If that sounds confusing, don't worry. It'll make more sense when you see it in action. Let's start writing some code. I'm going to store our meta app in a folder named "tumblelog" so create the folder and create a new models.py file inside it (don't forget the __init__.py file as well, Python's never-ending "gotcha"). Open up models.py in your text editor and paste in this code:
from django.db import models
from django.contrib.contenttypes import generic
from django.contrib.contenttypes.models import ContentType

class TumbleItem(models.Model):
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    pub_date = models.DateTimeField()

    content_object = generic.GenericForeignKey('content_type', 'object_id')

    class Meta:
        ordering = ('-pub_date',)

    def __unicode__(self):
        return self.content_type.name
If you look at the GenericForeignKey documentation you'll notice this is pretty much the same code used to demonstrate GenericForeignKeys. The only difference is we've added a date field. Now we have a basic script, but we certainly don't want to update this by hand every time we post something. Especially since our delicious.com import script runs automatically. Well, it turns out there's a very powerful feature baked into Django which can handle the task for us. Django includes an internal "dispatcher" which allows objects to listen for signals from other objects. In our case, our tumblelog app is going to "listen" to our Entry and Link models. Every time a new Entry or Link is saved, those models will send out a signal. When the tumblelog app gets the signal it will automatically update itself. Sweet. How do we do it? Listen for the Signals The first thing we need to do is import the signals framework. Add this to the top of your model.py file: from django.db.models import signals from django.contrib.contenttypes.models import ContentType from django.dispatch import dispatcher from blog.models import Entry from links.models import Link OK, now move to the bottom of the file and add these lines: for modelname in [Entry, Link]: dispatcher.connect(create_tumble_item, signal=signals.post_save, sender=modelname) In Django 1.1 (and 1.0 I think) the signal have changed. Change the import to: from django.db.models import signals from django.contrib.contenttypes.models import ContentType from django.db.models.signals import post_save from blog.models import Entry from links.models import Link and change the dispatcher line to post_save.connect(create_tweet_item, sender=Entry) post_save.connect(create_tweet_item, sender=Link) So what's it do? Well it tells the dispatcher to listen for the post_save signal from out Entry and Link models and whenever it gets the signal, it will fire off our create_tumble_item function. 
But wait, we don't have a create_tumble_item function, do we? No, so we better write one. Just above the dispatcher code, add this:
def create_tumble_item(sender, instance, signal, *args, **kwargs):
    create = False
    if 'created' in kwargs:
        if kwargs['created']:
            create = True

    ctype = ContentType.objects.get_for_model(instance)
    if ctype.name == 'link':
        pub_date = instance.date
    else:
        pub_date = instance.pub_date

    if create:
        t = TumbleItem.objects.get_or_create(content_type=ctype, object_id=instance.id, pub_date=pub_date)
What's this function doing? For the most part it's just taking the data sent by the dispatcher and using it to create a new TumbleItem. It first looks to see if a variable, created, has been passed by the dispatcher. The post_save signal is sent every time an object is saved, so it's possible we've just updated an existing item rather than creating a new one. In that case, we don't want to create a new TumbleItem, so the function checks to make sure this is, in fact, a new object.
If the dispatcher has passed the variable created, and it's true, then we set our own create flag and get the content_type of the passed instance. This way, we know whether it's a Link or an Entry. If it's a Link we need to find the datetime info. When we built our Link model, we called the field date, so we set our normalizing pub_date field equal to the instance.date field. If the instance is a blog Entry, we set pub_date to the value of the instance's pub_date field, since it shares the name we used in our Entry model.
Then we create a new TumbleItem using the built-in get_or_create method, passing the content_type, id and pub_date to their respective fields.
And there you have it. Any time you create a blog entry or the script adds a new link, our new TumbleItem model will automatically update. There's a slight problem though. What about the data we've already got in there?
Well, to make sure it gets added, we're going to have to bypass the new-object check. Temporarily comment out the created check so that create is unconditionally True:
# if 'created' in kwargs:
#     if kwargs['created']:
create = True
Save the file. Then fire up the terminal and enter this code one line at a time:
>>> from blog.models import Entry
>>> from links.models import Link
>>> entries = Entry.objects.all()
>>> for entry in entries:
...     entry.save()
...
>>> links = Link.objects.all()
>>> for link in links:
...     link.save()
...
>>>
What did it do? We just called the save method, and the dispatcher passed its signal along to the TumbleItem model, which then updated itself. Because we bypassed the lines that check to see if an item is new, the function ran regardless of the instance's created variable. Now restore the check and save the file.
Before we move on, I should point out post_save isn't the only signal Django can send. There are in fact a whole bunch of useful signals like request_started, pre_save and many more. Check out the Django wiki for more details.

Writing the URLs and views
Now we need to add a tumblelog URL and pull out the data. Open up your project-level urls.py file and add this line:
(r'^tumblelog/', 'tumblelog.views.tumbler'),
Now create a new views.py file inside your tumblelog folder and open it up to paste in this code:
from tumblelog.models import TumbleItem
from django.shortcuts import render_to_response

def tumbler(request):
    context = { 'tumble_item_list': TumbleItem.objects.all().order_by('-pub_date') }
    return render_to_response('tumblelog/list.html', context)
What's going on here? Well, first we grab the tumblelog items and then we use a Django shortcut function to pass the list on to a template named list.html. Let's create the template. Add a new folder "tumblelog" inside your templates folder and create the list.html file.
We'll start by extending our base template and filling in the blocks we created in an earlier lesson:

{% block title %}My Tumblelog{% endblock %}

{% block primary %}
  {% for object in tumble_item_list %}
    {% ifequal object.content_type.name 'link' %}
      <a href="{{ object.url }}" title="{{ object.title }}">{{ object.title }}</a>
      {{ object.description }}
      Posted on {{ object.pub_date|date:"D d M Y" }}
    {% endifequal %}
    {% ifequal object.content_type.name 'entry' %}
      <h2>{{ object.title }}</h2>
      <p>{{ object.pub_date }}</p>
      {{ object.body_html|truncatewords_html:"20"|safe }}
      <p><a href="{{ object.get_absolute_url }}">more</a></p>
    {% endifequal %}
  {% endfor %}
{% endblock %}

If you start up the development server and head to the tumblelog URL, you should see your blog entries and links ordered by date.

Where do we go from here? We're looking pretty good up until we get to the template stage. The if statements aren't so bad when we're only sorting two content types, but if you start pulling in tons of different types of data you'll quickly end up with spaghetti code. A far better idea would be to write a template tag which takes the content type as an argument, uses it to call a template file, renders it and passes it back to the HTML as a string. We'll leave it up to you as an exercise (hint: read up on Django's render_to_string method).

The other thing you might be wondering about is pagination. If you save dozens of links a day and blog like a madman, this page is going to get really large really fast. Django ships with some built-in pagination tools (see the docs) and there's also a slick django-pagination app available from Eric Florenzano. Eric even has a very nice screencast on how to set things up.
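The template-tag exercise suggested above boils down to dispatching each item to a renderer chosen by its content type. A minimal, framework-free sketch of that dispatch follows; the template strings and names are made up for illustration, and in Django you would instead call render_to_string() with a template named after the content type:

```python
# Map each content type to its own mini-template, then render by lookup,
# replacing the chain of {% if %} blocks with a single dispatch.
TEMPLATES = {
    'link': '<a href="{url}">{title}</a> -- {description}',
    'entry': '<h2>{title}</h2><p>{body}</p>',
}

def render_item(content_type, **context):
    return TEMPLATES[content_type].format(**context)

html = render_item('link', url='http://example.com/',
                   title='A good read', description='worth a click')
print(html)  # <a href="http://example.com/">A good read</a> -- worth a click
```

Adding a new content type then means adding one template, not another branch in list.html.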
Paste in this code at the bottom of the file:

from django.contrib.syndication.feeds import Feed

class LatestItems(Feed):
    title = "My Tumblelog: Links"
    link = "/tumblelog/"
    description = "Latest Items posted to mysite.com"
    description_template = 'feeds/description.html'

    def items(self):
        return TumbleItem.objects.all().order_by('-pub_date')[:10]

Here we've imported the syndication class Feed and then defined our own feed class to fill in some basic data. The last step is returning the items using a normal objects query.

We need to create the URLs first. There are several places you could do this; I generally opt for the main project urls.py file. Open it up and paste this code just below the other import statements:

from tumblelog.models import LatestItems

feeds = {
    'tumblelog': LatestItems,
}

Now add this line to your urlpattern function:

(r'^feeds/(?P<url>.*)/$', 'django.contrib.syndication.views.feed', {'feed_dict': feeds}),

The last step is creating the description template in a new folder we'll label "feeds." Just plug in the same code from the tumblelog template, but delete the forloop tag since we're only passing a single object this time.

And there you have it. Fire up the development server and point your feed reader at the new feed URL. If you want to add separate feeds for links or just blog entries, all you need to do is repeat the process, creating new feed classes for each.

Conclusion

Whew! It's been a long journey and we wrote a lot of code, but hopefully you learned a good bit about how Django works. You not only have a blog/linklog/tumblelog, but hopefully some ideas of how to approach other projects. Speaking of projects, although we used one, there's really no need to organize your code this way. Sometimes the projects come in handy, but other times they make your PYTHONPATH into a snarled mess. For some other ideas on how to organize your code, check out James Bennett's article on the subject.
If you ever have questions about Django, jump on the #Django IRC channel, where there are loads of friendly and helpful developers who can point you in the right direction, and be sure to join the mailing list. Happy coding!

This page was last modified 21:16, 26 February 2009.

Suggested readings

- Install Django and Build Your First App
- Use URL Patterns and Views in Django
- Use Templates in Django
- Django Newforms Admin Migration
- Integrate Web APIs into Your Django Site
http://www.webmonkey.com/tutorial/Build_a_Microblog_with_Django
crawl-002
refinedweb
2,429
67.04
Cup of poJO anyone?
By Edward Wong on Mar 10, 2009

NOTE, the instructions here have been updated a little for GlassFish ESB v2.1 since its release.

A lot of the demos until this point have illustrated Scheduler BC consuming a BPEL SE, so how 'bout consuming a 'cup of Jo' instead? POJO that is. We'll now show how to have Scheduler BC trigger a POJO SE to execute a Windows (2000, XP, Vista) command that pops up a dialog with some greeting. Of course, you can adapt this to do more useful things like 'downloading a file from some remote site so that File BC can pick it up and send it to BPEL SE for processing'...but then that'll be work and fun is good!

Prerequisites: Scheduler BC now comes pre-installed with GF ESB v2.1, but you'll need to install POJO SE (installer here).

- Create a Java (SE) Application project, keeping the default Main class, although it won't be used here (but it's needed due to a POJO SE project idiosyncrasy).
- Right-click the schedpojodemo package and add a New | Other | ESB | POJO for Binding.
- Specify that the POJO SE is to work with Scheduler BC.
- Select Next and Add a Simple Trigger. The Windows command chosen here in the trigger Message is long so you can copy it from here:
- Select Next and Choose Node for the Input Argument Type in the POJO method that will be doing the providing.
- Select Finish, and in the SchedPojoBinding.java Java editor that shows up, fix the package import warning for Node (answer: org.w3c.dom.Node).
- Copy the imports below into the same class, pasting them before the first import statement that's already there:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Level;

- Copy the code below into the same class, pasting it just above the line that declares "private static final Logger logger":

private void FireTriggerOperation(String input) {
    List<String> args = new ArrayList<String>();
    if (System.getProperty("os.name").toLowerCase() //NOI18N
            .startsWith("windows")) { //NOI18N
        args.add("cmd.exe"); //NOI18N
        args.add("/c"); //NOI18N
    }
    args.add(input);
    ProcessBuilder procBuilder = new ProcessBuilder(args);
    try {
        Process proc = procBuilder.start();
        StreamEater stdOutEater = new StreamEater(proc.getInputStream(), "StdOut"); //NOI18N
        StreamEater stdErrEater = new StreamEater(proc.getErrorStream(), "StdErr"); //NOI18N
        stdOutEater.start();
        stdErrEater.start();
        int rc = proc.waitFor();
        logger.info("Command \"" + input + "\" exit code: " + rc); //NOI18N
    } catch (IOException ex) {
        logger.log(Level.SEVERE, "Unknown command? " + input, ex); //NOI18N
        return;
    } catch (InterruptedException ie) {
        logger.log(Level.WARNING, "Command aborted!"); //NOI18N
    }
}

private class StreamEater extends Thread {
    private BufferedReader rdr;
    private String name;

    public StreamEater(InputStream is, String name) {
        super(name + "-Eater"); //NOI18N
        rdr = new BufferedReader(new InputStreamReader(is));
        this.name = name;
    }

    @Override
    public void run() {
        String line = null;
        try {
            while ((line = rdr.readLine()) != null) {
                logger.info(name + "> " + line); //NOI18N
            }
        } catch (IOException ex) {
            logger.log(Level.SEVERE, "Problem reading " + name, ex); //NOI18N
        }
        try {
            rdr.close();
        } catch (IOException ex) {
            // ignore
        }
        rdr = null;
    }
}

public static void main(String[] args) throws Throwable {
    SchedPojoBinding spb = new SchedPojoBinding();
    spb.FireTriggerOperation("echo MsgBox \"Howdy, all's well!\", " //NOI18N
        + "0, \"Scheduler POJO Demo\" > C:\\temp\\howdy.vbs & " //NOI18N
        + "wscript.exe C:\\temp\\howdy.vbs"); //NOI18N
}

- In the FireTriggerOperation(Node input) method, add this call:
- From here on, it's pretty much mundane... Save this project and create a new Composite Application project.
- Drag the SchedPojoDemo project into the Service Assembly canvas and do a Build Project.
- Deploy the CASA project and voila, you got your popup!

Nice blog Ed!
Shows how one can build rapidly Scheduler BC triggered application. Posted by Girish on March 10, 2009 at 02:56 PM PDT # Nice one Ed.... I guess the catch was setting it at every minute(which i changed to 2 while creating)... really was hard to undeploy it before it prompted once more... :) Posted by VTR Ravi Kumar on January 25, 2010 at 11:37 AM PST #
https://blogs.oracle.com/edwong/entry/cup_of_pojo_anyone
CC-MAIN-2017-04
refinedweb
668
59.3
I’m confused about this lesson, or more specifically the use of return. The example given does not use return but still prints the updated list. When I remove return from the code in the editor and run the code, “None” appears in the console. If the premise is the same, modifying an element of a list in a function, why is return needed in the code I am updating? def list_function(x): x[1] = x[1] + 3 return x n = [3, 5, 7] print list_function(n) print def double_first(n): n[0] = n[0] * 2 numbers = [1, 2, 3, 4] double_first(numbers) print numbers
https://discuss.codecademy.com/t/modifying-an-element-of-a-list-in-a-function-10-18/220990
CC-MAIN-2018-43
refinedweb
105
68.91
On 23/06/2017 at 05:14, xxxxxxxx wrote:

User Information:
Cinema 4D Version: r18
Platform: Windows;
Language(s): C.O.F.F.E.E; PYTHON;

---------

Hi, I'm really struggling to find a way to pull off the following in both Python and COFFEE:

I want to be able to take the active object and set all tracks to 'After: Repeat', 'Repetitions: 20', other than position Z, which I want to set to 'After: Offset Repeat', 'Repetitions: 20'. I want to create this script so that I can automate creating looping run cycles for a lot of different characters, as currently there are a fair few steps when doing it manually.

Has anyone any idea how I can affect these attributes through Python or COFFEE? I've trawled through the documentation and found that I should be using CLOOP_REPEAT and CLOOP_OFFSETREPEAT, but I'm obviously using these in the wrong way as I'm getting syntax errors.

Thanks in advance!
Keith
In order to get you going quickly, here's a small script, roughly doing what you want: import c4d def main() : if op is None: # if no object selected return # Construct a DescID for the animation track which will be handled differently descidPosZ = c4d.DescID(c4d.DescLevel(c4d.ID_BASEOBJECT_REL_POSITION), c4d.DescLevel(c4d.VECTOR_Z)) # Iterate all CTracks of the active object ct = op.GetFirstCTrack() while ct is not None: descid = ct.GetDescriptionID() if descid == descidPosZ: ct[c4d.ID_CTRACK_AFTER] = c4d.CLOOP_OFFSETREPEAT ct[c4d.ID_CTRACK_AFTER_CNT] = 42 else: ct[c4d.ID_CTRACK_AFTER] = c4d.CLOOP_REPEAT ct[c4d.ID_CTRACK_AFTER_CNT] = 20 ct = ct.GetNext() c4d.EventAdd() if __name__=='__main__': main() Also I'd like to suggest to stick with Python instead of COFFEE. It's way more powerful than COFFEE. Final note, I have moved this thread to the Python sub-forum.
https://plugincafe.maxon.net/topic/10171/13659_offset-loop-for-run-cycle-scripting-issue
CC-MAIN-2022-05
refinedweb
426
54.63
#include <db.h>

int
DB_ENV->txn_recover(DB_ENV *dbenv,
    DB_PREPLIST preplist[], long count, long *retp, u_int32_t flags);

Database environment recovery restores transactions that were prepared, but not yet resolved at the time of the system shut down or crash, to their state prior to the shut down or crash, including any locks previously held. The DB_ENV->txn_recover() method returns a list of those prepared transactions.

The DB_ENV->txn_recover() method should only be called after the environment has been recovered.

Multiple threads of control may call DB_ENV->txn_recover(), but only one thread of control may resolve each returned transaction; that is, only one thread of control may call DB_TXN->commit() or DB_TXN->abort() on each returned transaction. Callers of DB_ENV->txn_recover() must call DB_TXN->discard() to discard each transaction they do not resolve.

On return from DB_ENV->txn_recover(), the preplist parameter will be filled in with a list of transactions that must be resolved by the application (committed, aborted or discarded). The preplist parameter is a structure of type DB_PREPLIST; the following DB_PREPLIST fields will be filled in:

DB_TXN *txn;
    The transaction handle for the transaction.

u_int8_t gid[DB_XIDDATASIZE];
    The global transaction ID for the transaction. The global transaction ID is the one specified when the transaction was prepared. The application is responsible for ensuring uniqueness among global transaction IDs.

The DB_ENV->txn_recover() method returns a non-zero error value on failure and 0 on success.

The count parameter specifies the number of available entries in the passed-in preplist array. The retp parameter returns the number of entries DB_ENV->txn_recover() has filled in, in the array.

The flags parameter must be set to one of the following values:

DB_FIRST
    Begin returning a list of prepared, but not yet resolved transactions.
    Specifying this flag begins a new pass over all prepared, but not yet completed transactions, regardless of whether they have already been returned in previous calls to DB_ENV->txn_recover(). Calls to DB_ENV->txn_recover() from different threads of control should not be intermixed in the same environment.

DB_NEXT
    Continue returning a list of prepared, but not yet resolved transactions, starting where the last call to DB_ENV->txn_recover() left off.
http://docs.oracle.com/cd/E17275_01/html/api_reference/C/txnrecover.html
CC-MAIN-2016-07
refinedweb
357
51.28
Klaus Berndl wrote: > Is there an easy way to find out the caller of the function? > example: > (defun called-fcn () > (message "My caller is %s" (function-to-get-the-caller))) > (defun caller-fcn () > (called-fcn)) > Then evaluating caller-fcn should print out "caller-fcn".... > Is this possible with elisp, d.h. is there something like my > "function-to-get-the-caller"? You could look at parsing the result of BACKTRACE, or BACKTRACE-FRAME. e.g.: (defun called-fn () (let ((caller (find-caller))) (message "Caller is `%s'" caller))) (defun find-caller () (second (backtrace-frame 5))) (defun test () (called-fn)) (defun test-1 () (values (test) (called-fn))) (test-1) (test) => "Caller is `test'" (test-1) => ("Caller is `test'" "Caller is `values'") It's far from foolproof, but it might work. Alternately, you could define all your functions with a macro s.t. their name is registered and available in an internal variable, though that runs into problems due to namespace collisions. -- lawrence mitchell <address@hidden>
http://lists.gnu.org/archive/html/help-gnu-emacs/2003-07/msg00307.html
CC-MAIN-2014-10
refinedweb
163
64.71
Opened 3 years ago
Closed 3 years ago

#7011 defect closed fixed (fixed)

Adjusting the size of a stopped thread pool starts new threads.

Description

The following code will hang:

from twisted.python.threadpool import ThreadPool
pool = ThreadPool()
pool.start()
pool.stop()
pool.adjustPoolsize()

Change History (5)

comment:1 Changed 3 years ago by hawkowl
- Branch set to branches/threadpool-adjust-hang-7011

(In [41781]) Branching to threadpool-adjust-hang-7011.

comment:2 Changed 3 years ago by hawkowl
- Keywords review added
- Type changed from enhancement to defect

The pool didn't mark itself as not started when it was stopped - I think this is an oversight? Unless it means it was never started... anyway, I've added that and a test for that case. It hangs as tomprince says if I run it on master. Putting up for review.

comment:3 Changed 3 years ago by hawkowl
- Summary changed from Adjusting the size of a stopped thead pool starts new threads. to Adjusting the size of a stopped thread pool starts new threads.

comment:4 Changed 3 years ago by exarkun
- Keywords review removed
- Owner set to hawkowl

Thanks. Just a few minor comments:

- "Making sure that a stopped pool doesn't start new workers when the size is adjusted." isn't a proper test method docstring.
- The ThreadPool.threads attribute is undocumented but the new test relies on it having a particular meaning. Please document that meaning. I think the expected behavior is already tested by test_start but if you wanted to add a more direct test that would be cool too.
- ThreadPool.started is also undocumented. Can you document it?
- In the news fragment I would say "but the pool is stopped" rather than "and the pool is stopped". Alternatively, "and the pool is not running." "stopped" could indicate a state or an action.

Please merge after addressing those points. Thanks again.

comment:5 Changed 3 years ago by hawkowl
- Resolution set to fixed
- Status changed from new to closed

Note: See TracTickets for help on using tickets.
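The fix discussed in comment:2, having stop() clear the pool's started flag and having the size adjustment refuse to spawn workers unless that flag is set, can be sketched in plain Python. The class and attribute names below are illustrative, not Twisted's actual implementation:

```python
class SketchPool:
    """Toy pool illustrating the started-flag guard."""

    def __init__(self):
        self.started = False
        self.workers = 0

    def start(self):
        self.started = True

    def stop(self):
        # The bug: without this line a stopped pool still looks started,
        # so adjusting its size would spawn workers nothing will ever join.
        self.started = False

    def adjust_poolsize(self, size):
        if not self.started:
            return  # never spawn workers on a pool that isn't running
        self.workers = size

pool = SketchPool()
pool.start()
pool.stop()
pool.adjust_poolsize(10)
print(pool.workers)  # 0: no workers started after stop()
```

A running pool still resizes normally; only the stopped state short-circuits, which is why the reproduction in the ticket no longer hangs.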
http://twistedmatrix.com/trac/ticket/7011
CC-MAIN-2016-44
refinedweb
341
74.9
Statistics (scipy.stats)

Introduction

In this tutorial we discuss many, but certainly not all, features of scipy.stats. The intention here is to provide a user with a working knowledge of this package. We refer to the reference manual for further details.

Note: This documentation is work in progress.

Random Variables

There are two general distribution classes that have been implemented for encapsulating continuous random variables and discrete random variables. Over 80 continuous random variables (RVs) and 10 discrete random variables have been implemented using these classes. Besides this, new routines and distributions can easily be added by the end user. (If you create one, please contribute it.)

All of the statistics functions are located in the sub-package scipy.stats and a fairly complete listing of these functions can be obtained using info(stats). The list of the random variables available can also be obtained from the docstring for the stats sub-package.

In the discussion below we mostly focus on continuous RVs. Nearly all applies to discrete variables also, but we point out some differences here: Specific Points for Discrete Distributions.

In the code samples below we assume that the scipy.stats package is imported as

>>> from scipy import stats

and in some cases we assume that individual objects are imported as

>>> from scipy.stats import norm

Getting Help

First of all, all distributions are accompanied with help functions. To obtain just some basic information we print the relevant docstring: print(stats.norm.__doc__). To find the support, i.e., the upper and lower bound of the distribution, call:

>>> print 'bounds of distribution lower: %s, upper: %s' % (norm.a, norm.b)
bounds of distribution lower: -inf, upper: inf

We can list all methods and properties of the distribution with dir(norm).
As it turns out, some of the methods are private, although they are not named as such (their names do not start with a leading underscore); for example veccdf is only available for internal calculation (those methods will give warnings when one tries to use them, and will be removed at some point).

To obtain the real main methods, we list the methods of the frozen distribution. (We explain the meaning of a frozen distribution below.)

>>> rv = norm()
>>> dir(rv)  # reformatted
['__class__', '__delattr__', '__dict__', '__doc__', '__getattribute__',
 '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__',
 '__repr__', '__setattr__', '__str__', '__weakref__', 'args', 'cdf', 'dist',
 'entropy', 'isf', 'kwds', 'moment', 'pdf', 'pmf', 'ppf', 'rvs', 'sf', 'stats']

Finally, we can obtain the list of available distributions through introspection:

>>> import warnings
>>> warnings.simplefilter('ignore', DeprecationWarning)
>>> dist_continu = [d for d in dir(stats) if
...                 isinstance(getattr(stats, d), stats.rv_continuous)]
>>> dist_discrete = [d for d in dir(stats) if
...                  isinstance(getattr(stats, d), stats.rv_discrete)]
>>> print 'number of continuous distributions:', len(dist_continu)
number of continuous distributions: 94
>>> print 'number of discrete distributions: ', len(dist_discrete)
number of discrete distributions: 13

Common Methods

The main public methods for continuous RVs are:

- rvs: Random Variates
- pdf: Probability Density Function
- cdf: Cumulative Distribution Function
- sf: Survival Function (1-CDF)
- ppf: Percent Point Function (Inverse of CDF)
- isf: Inverse Survival Function (Inverse of SF)
- stats: Return mean, variance, (Fisher's) skew, or (Fisher's) kurtosis
- moment: non-central moments of the distribution

Let's take a normal RV as an example.

>>> norm.cdf(0)
0.5

To compute the cdf at a number of points, we can pass a list or a numpy array.

>>> norm.cdf([-1., 0, 1])
array([ 0.15865525,  0.5,  0.84134475])
>>> import numpy as np
>>> norm.cdf(np.array([-1., 0, 1]))
array([ 0.15865525,  0.5,  0.84134475])

Thus, the basic methods such as pdf, cdf, and so on are vectorized with np.vectorize.
Other generally useful methods are supported too:

>>> norm.mean(), norm.std(), norm.var()
(0.0, 1.0, 1.0)
>>> norm.stats(moments = "mv")
(array(0.0), array(1.0))

To find the median of a distribution we can use the percent point function ppf, which is the inverse of the cdf:

>>> norm.ppf(0.5)
0.0

To generate a sequence of random variates, use the size keyword argument:

>>> norm.rvs(size=3)
array([-0.35687759,  1.34347647, -0.11710531])   # random

Note that drawing random numbers relies on generators from the numpy.random package. In the example above, the specific stream of random numbers is not reproducible across runs. To achieve reproducibility, you can explicitly seed a global variable:

>>> np.random.seed(1234)

Relying on a global state is not recommended, though. A better way is to use the random_state parameter, which accepts an instance of the numpy.random.RandomState class, or an integer, which is then used to seed an internal RandomState object:

>>> norm.rvs(size=5, random_state=1234)
array([ 0.47143516, -1.19097569,  1.43270697, -0.3126519 , -0.72058873])

Don't think that norm.rvs(5) generates 5 variates:

>>> norm.rvs(5)
5.471435163732493

Here, 5 with no keyword is being interpreted as the first possible keyword argument, loc, which is the first of a pair of keyword arguments taken by all continuous distributions. This brings us to the topic of the next subsection.

Shifting and Scaling

All continuous distributions take loc and scale as keyword parameters to adjust the location and scale of the distribution, e.g. for the standard normal distribution the location is the mean and the scale is the standard deviation.

>>> norm.stats(loc = 3, scale = 4, moments = "mv")
(array(3.0), array(16.0))

In many cases the standardized distribution for a random variable X is obtained through the transformation (X - loc) / scale. The default values are loc = 0 and scale = 1. Smart use of loc and scale can help modify the standard distributions in many ways.
To illustrate the scaling further, the cdf of an exponentially distributed RV with mean \(1/\lambda\) is given by

    F(x) = 1 - \exp(-\lambda x)

By applying the scaling rule above, it can be seen that by taking scale = 1./lambda we get the proper scale.

>>> from scipy.stats import expon
>>> expon.mean(scale=3.)
3.0

Note: Distributions that take shape parameters may require more than simple application of loc and/or scale to achieve the desired form. For example, the distribution of 2-D vector lengths given a constant vector of length \(R\) perturbed by independent N(0, \(\sigma^2\)) deviations in each component is rice(\(R/\sigma\), scale= \(\sigma\)). The first argument is a shape parameter that needs to be scaled along with \(x\).

The uniform distribution is also interesting:

>>> from scipy.stats import uniform
>>> uniform.cdf([0, 1, 2, 3, 4, 5], loc = 1, scale = 4)
array([ 0.  ,  0.  ,  0.25,  0.5 ,  0.75,  1.  ])

Finally, recall from the previous paragraph that we are left with the problem of the meaning of norm.rvs(5). As it turns out, calling a distribution like this, the first argument, i.e., the 5, gets passed to set the loc parameter. Let's see:

>>> np.mean(norm.rvs(5, size=500))
4.983550784784704

Thus, to explain the output of the example of the last section: norm.rvs(5) generates a single normally distributed random variate with mean loc=5, because of the default size=1.

We recommend that you set loc and scale parameters explicitly, by passing the values as keywords rather than as arguments. Repetition can be minimized when calling more than one method of a given RV by using the technique of Freezing a Distribution, as explained below.

Shape Parameters

While a general continuous random variable can be shifted and scaled with the loc and scale parameters, some distributions require additional shape parameters. For instance, the gamma distribution, with density

    \gamma(x, a) = \frac{\lambda (\lambda x)^{a-1}}{\Gamma(a)} e^{-\lambda x}

requires the shape parameter \(a\). Observe that setting \(\lambda\) can be obtained by setting the scale keyword to \(1/\lambda\).
Let’s check the number and name of the shape parameters of the gamma distribution. (We know from the above that this should be 1.) >>> from scipy.stats import gamma >>> gamma.numargs 1 >>> gamma.shapes 'a' Now we set the value of the shape variable to 1 to obtain the exponential distribution, so that we compare easily whether we get the results we expect. >>> gamma(1, scale=2.).stats(moments="mv") (array(2.0), array(4.0)) Notice that we can also specify shape parameters as keywords: >>> gamma(a=1, scale=2.).stats(moments="mv") (array(2.0), array(4.0)) Freezing a Distribution¶ Passing the loc and scale keywords time and again can become quite bothersome. The concept of freezing a RV is used to solve such problems. >>> rv = gamma(1, scale=2.) By using rv we no longer have to include the scale or the shape parameters anymore. Thus, distributions can be used in one of two ways, either by passing all distribution parameters to each method call (such as we did earlier) or by freezing the parameters for the instance of the distribution. Let us check this: >>> rv.mean(), rv.std() (2.0, 2.0) This is indeed what we should get. Broadcasting¶ The basic methods pdf and so on satisfy the usual numpy broadcasting rules. For example, we can calculate the critical values for the upper tail of the t distribution for different probabilities and degrees of freedom. >>> stats.t.isf([0.1, 0.05, 0.01], [[10], [11]]) array([[ 1.37218364, 1.81246112, 2.76376946], [ 1.36343032, 1.79588482, 2.71807918]]) Here, the first row are the critical values for 10 degrees of freedom and the second row for 11 degrees of freedom (d.o.f.). 
Thus, the broadcasting rules give the same result as calling isf twice:

>>> stats.t.isf([0.1, 0.05, 0.01], 10)
array([ 1.37218364,  1.81246112,  2.76376946])
>>> stats.t.isf([0.1, 0.05, 0.01], 11)
array([ 1.36343032,  1.79588482,  2.71807918])

If the array with probabilities, i.e., [0.1, 0.05, 0.01], and the array of degrees of freedom, i.e., [10, 11, 12], have the same array shape, then element-wise matching is used. As an example, we can obtain the 10% tail for 10 d.o.f., the 5% tail for 11 d.o.f. and the 1% tail for 12 d.o.f. by calling

>>> stats.t.isf([0.1, 0.05, 0.01], [10, 11, 12])
array([ 1.37218364,  1.79588482,  2.68099799])

Specific Points for Discrete Distributions

Discrete distributions have mostly the same basic methods as the continuous distributions. However, pdf is replaced by the probability mass function pmf, no estimation methods, such as fit, are available, and scale is not a valid keyword parameter. The location parameter, keyword loc, can still be used to shift the distribution.

The computation of the cdf requires some extra attention. In the case of continuous distributions the cumulative distribution function is, in most standard cases, strictly monotonic increasing in the bounds (a,b) and has therefore a unique inverse. The cdf of a discrete distribution, however, is a step function, hence the inverse cdf, i.e., the percent point function, requires a different definition:

    ppf(q) = min{x : cdf(x) >= q, x integer}

For further info, see the docs here.
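The step-function definition above can be sketched directly in plain Python. The toy pmf and helper names below are made up for illustration and are not part of scipy:

```python
# ppf(q) = min{x : cdf(x) >= q, x integer} for a toy discrete distribution
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

def cdf(x):
    # Sum the probability mass at all support points <= x.
    return sum(p for k, p in pmf.items() if k <= x)

def ppf(q):
    # Walk up the integer support until the cdf first reaches q.
    x = min(pmf)
    while x < max(pmf) and cdf(x) < q:
        x += 1
    return x

print(ppf(0.2), ppf(0.5), ppf(0.95))  # 0 1 2
```

Note how q = 0.2 sits exactly on a kink of the step function and maps back to 0, while any q just above it jumps to the next support point; that is the behaviour the hypergeom example below demonstrates with scipy itself.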
We can look at the hypergeometric distribution as an example:

>>> from scipy.stats import hypergeom
>>> [M, n, N] = [20, 7, 12]

If we use the cdf at some integer points and then evaluate the ppf at those cdf values, we get the initial integers back, for example

>>> x = np.arange(4)*2
>>> x
array([0, 2, 4, 6])
>>> prb = hypergeom.cdf(x, M, n, N)
>>> prb
array([ 0.0001031991744066,  0.0521155830753351,  0.6083591331269301,
        0.9897832817337386])
>>> hypergeom.ppf(prb, M, n, N)
array([ 0.,  2.,  4.,  6.])

If we use values that are not at the kinks of the cdf step function, we get the next higher integer back:

>>> hypergeom.ppf(prb + 1e-8, M, n, N)
array([ 1.,  3.,  5.,  7.])
>>> hypergeom.ppf(prb - 1e-8, M, n, N)
array([ 0.,  2.,  4.,  6.])

Fitting Distributions

The main additional methods of the not frozen distribution are related to the estimation of distribution parameters:

- fit: maximum likelihood estimation of distribution parameters, including location and scale
- fit_loc_scale: estimation of location and scale when shape parameters are given
- nnlf: negative log likelihood function
- expect: calculate the expectation of a function against the pdf or pmf

Performance Issues and Cautionary Remarks

The performance of the individual methods, in terms of speed, varies widely by distribution and method. The results of a method are obtained in one of two ways: either by explicit calculation, or by a generic algorithm that is independent of the specific distribution.

Explicit calculation, on the one hand, requires that the method is directly specified for the given distribution, either through analytic formulas or through special functions in scipy.special or numpy.random for rvs. These are usually relatively fast calculations. The generic methods, on the other hand, are used if the distribution does not specify any explicit calculation. To define a distribution, only one of pdf or cdf is necessary; all other methods can be derived using numeric integration and root finding.
However, these indirect methods can be very slow. As an example, rgh = stats.gausshyper.rvs(0.5, 2, 2, 2, size=100) creates random variables in a very indirect way and takes about 19 seconds for 100 random variables on my computer, while one million random variables from the standard normal or from the t distribution take just above one second.

Remaining Issues

The distributions in scipy.stats have recently been corrected and improved and gained a considerable test suite; however, a few issues remain:

- The distributions have been tested over some range of parameters; however, in some corner ranges, a few incorrect results may remain.
- The maximum likelihood estimation in fit does not work with default starting parameters for all distributions and the user needs to supply good starting parameters. Also, for some distributions using a maximum likelihood estimator might inherently not be the best choice.

Building Specific Distributions

The next examples show how to build your own distributions. Further examples show the usage of the distributions and some statistical tests.

Making a Continuous Distribution, i.e., Subclassing rv_continuous

Making continuous distributions is fairly simple.

>>> from scipy import stats
>>> class deterministic_gen(stats.rv_continuous):
...     def _cdf(self, x):
...         return np.where(x < 0, 0., 1.)
...     def _stats(self):
...         return 0., 0., 0., 0.

>>> deterministic = deterministic_gen(name="deterministic")
>>> deterministic.cdf(np.arange(-3, 3, 0.5))
array([ 0.,  0.,  0.,  0.,  0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.])

Interestingly, the pdf is now computed automatically:

>>> deterministic.pdf(np.arange(-3, 3, 0.5))
array([  0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
         0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
         5.83333333e+04,   4.16333634e-12,   4.16333634e-12,
         4.16333634e-12,   4.16333634e-12,   4.16333634e-12])

Be aware of the performance issues mentioned in Performance Issues and Cautionary Remarks.
The computation of unspecified common methods can become very slow, since only general methods are called which, by their very nature, cannot use any specific information about the distribution. Thus, as a cautionary example: >>> from scipy.integrate import quad >>> quad(deterministic.pdf, -1e-1, 1e-1) (4.163336342344337e-13, 0.0) But this is not correct: the integral over this pdf should be 1. Let’s make the integration interval smaller: >>> quad(deterministic.pdf, -1e-3, 1e-3) # warning removed (1.000076872229173, 0.0010625571718182458) This looks better. However, the problem originated from the fact that the pdf is not specified in the class definition of the deterministic distribution. Subclassing rv_discrete¶ In the following we use stats.rv_discrete to generate a discrete distribution that has the probabilities of the truncated normal for the intervals centered around the integers. General Info From the docstring of rv_discrete, help(stats.rv_discrete): “You can construct an arbitrary discrete rv where P{X=xk} = pk by passing to the rv_discrete initialization method (through the values= keyword) a tuple of sequences (xk, pk) which describes only those values of X (xk) that occur with nonzero probability (pk).” Next to this, there are some further requirements for this approach to work: - The keyword name is required. - The support points of the distribution xk have to be integers. - The number of significant digits (decimals) needs to be specified. In fact, if the last two requirements are not satisfied an exception may be raised or the resulting numbers may be incorrect. An Example Let’s do the work. 
First >>> npoints = 20 # number of integer support points of the distribution minus 1 >>> npointsh = npoints / 2 >>> npointsf = float(npoints) >>> nbound = 4 # bounds for the truncated normal >>> normbound = (1+1/npointsf) * nbound # actual bounds of truncated normal >>> grid = np.arange(-npointsh, npointsh+2, 1) # integer grid >>> gridlimitsnorm = (grid-0.5) / npointsh * nbound # bin limits for the truncnorm >>> gridlimits = grid - 0.5 # used later in the analysis >>> grid = grid[:-1] >>> probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound)) >>> gridint = grid And finally we can subclass rv_discrete: >>> normdiscrete = stats.rv_discrete(values=(gridint, ... np.round(probs, decimals=7)), name='normdiscrete') Now that we have defined the distribution, we have access to all common methods of discrete distributions. >>> print 'mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f'% \ ... normdiscrete.stats(moments = 'mvsk') mean = -0.0000, variance = 6.3302, skew = 0.0000, kurtosis = -0.0076 >>> nd_std = np.sqrt(normdiscrete.stats(moments='v')) Testing the Implementation Let’s generate a random sample and compare observed frequencies with the probabilities. 
>>> n_sample = 500 >>> np.random.seed(87655678) # fix the seed for replicability >>> rvs = normdiscrete.rvs(size=n_sample) >>> rvsnd = rvs >>> f, l = np.histogram(rvs, bins=gridlimits) >>> sfreq = np.vstack([gridint, f, probs*n_sample]).T >>> print sfreq [[ -1.00000000e+01 0.00000000e+00 2.95019349e-02] [ -9.00000000e+00 0.00000000e+00 1.32294142e-01] [ -8.00000000e+00 0.00000000e+00 5.06497902e-01] [ -7.00000000e+00 2.00000000e+00 1.65568919e+00] [ -6.00000000e+00 1.00000000e+00 4.62125309e+00] [ -5.00000000e+00 9.00000000e+00 1.10137298e+01] [ -4.00000000e+00 2.60000000e+01 2.24137683e+01] [ -3.00000000e+00 3.70000000e+01 3.89503370e+01] [ -2.00000000e+00 5.10000000e+01 5.78004747e+01] [ -1.00000000e+00 7.10000000e+01 7.32455414e+01] [ 0.00000000e+00 7.40000000e+01 7.92618251e+01] [ 1.00000000e+00 8.90000000e+01 7.32455414e+01] [ 2.00000000e+00 5.50000000e+01 5.78004747e+01] [ 3.00000000e+00 5.00000000e+01 3.89503370e+01] [ 4.00000000e+00 1.70000000e+01 2.24137683e+01] [ 5.00000000e+00 1.10000000e+01 1.10137298e+01] [ 6.00000000e+00 4.00000000e+00 4.62125309e+00] [ 7.00000000e+00 3.00000000e+00 1.65568919e+00] [ 8.00000000e+00 0.00000000e+00 5.06497902e-01] [ 9.00000000e+00 0.00000000e+00 1.32294142e-01] [ 1.00000000e+01 0.00000000e+00 2.95019349e-02]] Next, we can test, whether our sample was generated by our normdiscrete distribution. This also verifies whether the random numbers are generated correctly. The chisquare test requires that there are a minimum number of observations in each bin. We combine the tail bins into larger bins so that they contain enough observations. 
>>> f2 = np.hstack([f[:5].sum(), f[5:-5], f[-5:].sum()]) >>> p2 = np.hstack([probs[:5].sum(), probs[5:-5], probs[-5:].sum()]) >>> ch2, pval = stats.chisquare(f2, p2*n_sample) >>> print 'chisquare for normdiscrete: chi2 = %6.3f pvalue = %6.4f' % (ch2, pval) chisquare for normdiscrete: chi2 = 12.466 pvalue = 0.4090 The pvalue in this case is high, so we can be quite confident that our random sample was actually generated by the distribution. Analysing One Sample¶ First, we create some random variables. We set a seed so that in each run we get identical results to look at. As an example we take a sample from the Student t distribution: >>> np.random.seed(282629734) >>> x = stats.t.rvs(10, size=1000) Here, we set the required shape parameter of the t distribution, which in statistics corresponds to the degrees of freedom, to 10. Using size=1000 means that our sample consists of 1000 independently drawn (pseudo) random numbers. Since we did not specify the keyword arguments loc and scale, those are set to their default values zero and one. Descriptive Statistics¶ x is a numpy array, and we have direct access to all array methods, e.g. >>> print x.max(), x.min() # equivalent to np.max(x), np.min(x) 5.26327732981 -3.78975572422 >>> print x.mean(), x.var() # equivalent to np.mean(x), np.var(x) 0.0140610663985 1.28899386208 How do some sample properties compare to their theoretical counterparts? >>> m, v, s, k = stats.t.stats(10, moments='mvsk') >>> n, (smin, smax), sm, sv, ss, sk = stats.describe(x) >>> print 'distribution:', distribution: >>> sstr = 'mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f' >>> print sstr %(m, v, s ,k) mean = 0.0000, variance = 1.2500, skew = 0.0000, kurtosis = 1.0000 >>> print 'sample: ', sample: >>> print sstr %(sm, sv, ss, sk) mean = 0.0141, variance = 1.2903, skew = 0.2165, kurtosis = 1.0556 Note: stats.describe uses the unbiased estimator for the variance, while np.var is the biased estimator. For our sample the sample statistics differ by a small amount from their theoretical counterparts. 
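To make the biased/unbiased distinction concrete, the two variance estimators can be compared side by side. This is a small illustrative sketch; the sample and seed are arbitrary and not part of the original example:

```python
import numpy as np

rng = np.random.RandomState(0)  # arbitrary seed, only for reproducibility
x = rng.standard_normal(50)
n = x.size

var_biased = np.var(x)            # divides by n (np.var's default, ddof=0)
var_unbiased = np.var(x, ddof=1)  # divides by n - 1 (what stats.describe reports)

# The two estimators differ exactly by the factor n / (n - 1).
print(var_biased, var_unbiased)
print(np.isclose(var_unbiased, var_biased * n / (n - 1)))
```

With larger samples the factor n / (n - 1) approaches 1, so the difference between the two estimators quickly becomes negligible.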
T-test and KS-test¶ We can use the t-test to test whether the mean of our sample differs in a statistically significant way from the theoretical expectation. >>> print 't-statistic = %6.3f pvalue = %6.4f' % stats.ttest_1samp(x, m) t-statistic = 0.391 pvalue = 0.6955 The pvalue is 0.7; this means that with an alpha error of, for example, 10%, we cannot reject the hypothesis that the sample mean is equal to zero, the expectation of the standard t-distribution. As an exercise, we can also calculate our t-test directly without using the provided function, which should give us the same answer, and so it does: >>> tt = (sm-m)/np.sqrt(sv/float(n)) # t-statistic for mean >>> pval = stats.t.sf(np.abs(tt), n-1)*2 # two-sided pvalue = Prob(abs(t)>tt) >>> print 't-statistic = %6.3f pvalue = %6.4f' % (tt, pval) t-statistic = 0.391 pvalue = 0.6955 The Kolmogorov-Smirnov test can be used to test the hypothesis that the sample comes from the standard t-distribution >>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x, 't', (10,)) KS-statistic D = 0.016 pvalue = 0.9606 Again the p-value is high enough that we cannot reject the hypothesis that the random sample really is distributed according to the t-distribution. In real applications, we don’t know what the underlying distribution is. If we perform the Kolmogorov-Smirnov test of our sample against the standard normal distribution, then we also cannot reject the hypothesis that our sample was generated by the normal distribution given that in this example the p-value is almost 40%. >>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x,'norm') KS-statistic D = 0.028 pvalue = 0.3949 However, the standard normal distribution has a variance of 1, while our sample has a variance of 1.29. If we standardize our sample and test it against the normal distribution, then the p-value is again large enough that we cannot reject the hypothesis that the sample came from the normal distribution. 
>>> d, pval = stats.kstest((x-x.mean())/x.std(), 'norm') >>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % (d, pval) KS-statistic D = 0.032 pvalue = 0.2402 Note: The Kolmogorov-Smirnov test assumes that we test against a distribution with given parameters. Since in the last case we estimated mean and variance, this assumption is violated, and the distribution of the test statistic, on which the p-value is based, is not correct. Tails of the distribution¶ Finally, we can check the upper tail of the distribution. We can use the percent point function ppf, which is the inverse of the cdf function, to obtain the critical values, or, more directly, we can use the inverse of the survival function >>> crit01, crit05, crit10 = stats.t.ppf([1-0.01, 1-0.05, 1-0.10], 10) >>> print 'critical values from ppf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% (crit01, crit05, crit10) critical values from ppf at 1%, 5% and 10% 2.7638 1.8125 1.3722 >>> print 'critical values from isf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% tuple(stats.t.isf([0.01,0.05,0.10],10)) critical values from isf at 1%, 5% and 10% 2.7638 1.8125 1.3722 >>> freq01 = np.sum(x>crit01) / float(n) * 100 >>> freq05 = np.sum(x>crit05) / float(n) * 100 >>> freq10 = np.sum(x>crit10) / float(n) * 100 >>> print 'sample %%-frequency at 1%%, 5%% and 10%% tail %8.4f %8.4f %8.4f'% (freq01, freq05, freq10) sample %-frequency at 1%, 5% and 10% tail 1.4000 5.8000 10.5000 In all three cases, our sample has more weight in the top tail than the underlying distribution. We can briefly check a larger sample to see if we get a closer match. In this case the empirical frequency is quite close to the theoretical probability, but if we repeat this several times the fluctuations are still pretty large. 
>>> freq05l = np.sum(stats.t.rvs(10, size=10000) > crit05) / 10000.0 * 100 >>> print 'larger sample %%-frequency at 5%% tail %8.4f'% freq05l larger sample %-frequency at 5% tail 4.8000 We can also compare it with the tail of the normal distribution, which has less weight in the tails: >>> print 'tail prob. of normal at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% \ ... tuple(stats.norm.sf([crit01, crit05, crit10])*100) tail prob. of normal at 1%, 5% and 10% 0.2857 3.4957 8.5003 The chisquare test can be used to test whether, for a finite number of bins, the observed frequencies differ significantly from the probabilities of the hypothesized distribution. >>> quantiles = [0.0, 0.01, 0.05, 0.1, 1-0.10, 1-0.05, 1-0.01, 1.0] >>> crit = stats.t.ppf(quantiles, 10) >>> crit array([-Inf, -2.76376946, -1.81246112, -1.37218364, 1.37218364, 1.81246112, 2.76376946, Inf]) >>> n_sample = x.size >>> freqcount = np.histogram(x, bins=crit)[0] >>> tprob = np.diff(quantiles) >>> nprob = np.diff(stats.norm.cdf(crit)) >>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample) >>> nch, npval = stats.chisquare(freqcount, nprob*n_sample) >>> print 'chisquare for t: chi2 = %6.2f pvalue = %6.4f' % (tch, tpval) chisquare for t: chi2 = 2.30 pvalue = 0.8901 >>> print 'chisquare for normal: chi2 = %6.2f pvalue = %6.4f' % (nch, npval) chisquare for normal: chi2 = 64.60 pvalue = 0.0000 We see that the standard normal distribution is clearly rejected while the standard t-distribution cannot be rejected. Since the variance of our sample differs from both standard distributions, we can again redo the test taking the estimate for scale and location into account. The fit method of the distributions can be used to estimate the parameters of the distribution, and the test is repeated using probabilities of the estimated distribution. 
>>> tdof, tloc, tscale = stats.t.fit(x) >>> nloc, nscale = stats.norm.fit(x) >>> tprob = np.diff(stats.t.cdf(crit, tdof, loc=tloc, scale=tscale)) >>> nprob = np.diff(stats.norm.cdf(crit, loc=nloc, scale=nscale)) >>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample) >>> nch, npval = stats.chisquare(freqcount, nprob*n_sample) >>> print 'chisquare for t: chi2 = %6.2f pvalue = %6.4f' % (tch, tpval) chisquare for t: chi2 = 1.58 pvalue = 0.9542 >>> print 'chisquare for normal: chi2 = %6.2f pvalue = %6.4f' % (nch, npval) chisquare for normal: chi2 = 11.08 pvalue = 0.0858 Taking account of the estimated parameters, we can still reject the hypothesis that our sample came from a normal distribution (at the 5% level), but again, with a p-value of 0.95, we cannot reject the t distribution. Special tests for normal distributions¶ Since the normal distribution is the most common distribution in statistics, there are several additional functions available to test whether a sample could have been drawn from a normal distribution. First we can test if skew and kurtosis of our sample differ significantly from those of a normal distribution: >>> print 'normal skewtest teststat = %6.3f pvalue = %6.4f' % stats.skewtest(x) normal skewtest teststat = 2.785 pvalue = 0.0054 >>> print 'normal kurtosistest teststat = %6.3f pvalue = %6.4f' % stats.kurtosistest(x) normal kurtosistest teststat = 4.757 pvalue = 0.0000 These two tests are combined in the normality test >>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(x) normaltest teststat = 30.379 pvalue = 0.0000 In all three tests the p-values are very low and we can reject the hypothesis that our sample has the skew and kurtosis of the normal distribution. Since skew and kurtosis of our sample are based on central moments, we get exactly the same results if we test the standardized sample: >>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % \ ... stats.normaltest((x-x.mean())/x.std()) normaltest teststat = 30.379 pvalue = 0.0000 Because normality is rejected so strongly, we can check whether the normaltest gives reasonable results for other cases: >>> print('normaltest teststat = %6.3f pvalue = %6.4f' % ... 
stats.normaltest(stats.t.rvs(10, size=100))) normaltest teststat = 4.698 pvalue = 0.0955 >>> print('normaltest teststat = %6.3f pvalue = %6.4f' % ... stats.normaltest(stats.norm.rvs(size=1000))) normaltest teststat = 0.613 pvalue = 0.7361 When testing for normality of a small sample of t-distributed observations and a large sample of normal distributed observation, then in neither case can we reject the null hypothesis that the sample comes from a normal distribution. In the first case this is because the test is not powerful enough to distinguish a t and a normally distributed random variable in a small sample. Comparing two samples¶ In the following, we are given two samples, which can come either from the same or from different distribution, and we want to test whether these samples have the same statistical properties. Comparing means¶ Test with sample with identical means: >>> rvs1 = stats.norm.rvs(loc=5, scale=10, size=500) >>> rvs2 = stats.norm.rvs(loc=5, scale=10, size=500) >>> stats.ttest_ind(rvs1, rvs2) (-0.54890361750888583, 0.5831943748663857) Test with sample with different means: >>> rvs3 = stats.norm.rvs(loc=8, scale=10, size=500) >>> stats.ttest_ind(rvs1, rvs3) (-4.5334142901750321, 6.507128186505895e-006) Kolmogorov-Smirnov test for two samples ks_2samp¶ For the example where both samples are drawn from the same distribution, we cannot reject the null hypothesis since the pvalue is high >>> stats.ks_2samp(rvs1, rvs2) (0.025999999999999995, 0.99541195173064878) In the second example, with different location, i.e. means, we can reject the null hypothesis since the pvalue is below 1% >>> stats.ks_2samp(rvs1, rvs3) (0.11399999999999999, 0.0027132103661283141) Kernel Density Estimation¶ A common task in statistics is to estimate the probability density function (PDF) of a random variable from a set of data samples. This task is called density estimation. The most well-known tool to do this is the histogram. 
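As a quick sketch of what histogram-based density estimation looks like in code (the sample size and bin count here are arbitrary choices, not taken from the text):

```python
import numpy as np

rng = np.random.RandomState(12345)  # arbitrary seed for reproducibility
sample = rng.standard_normal(1000)

# density=True rescales the counts so that the histogram integrates to 1,
# turning it into a piecewise-constant density estimate.
hist, bin_edges = np.histogram(sample, bins=30, density=True)

# Total area under the histogram: bar heights times bin widths.
area = np.sum(hist * np.diff(bin_edges))
print(area)  # 1.0 up to floating point error
```

The resulting estimate is discontinuous at the bin edges and depends strongly on the chosen bins, which is exactly the inefficiency that kernel density estimation addresses.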
A histogram is a useful tool for visualization (mainly because everyone understands it), but doesn’t use the available data very efficiently. Kernel density estimation (KDE) is a more efficient tool for the same task. The gaussian_kde estimator can be used to estimate the PDF of univariate as well as multivariate data. It works best if the data is unimodal. Univariate estimation¶ We start with a minimal amount of data in order to see how gaussian_kde works, and what the different options for bandwidth selection do. The data sampled from the PDF is shown as blue dashes at the bottom of the figure (this is called a rug plot): >>> from scipy import stats >>> import matplotlib.pyplot as plt >>> x1 = np.array([-7, -5, 1, 4, 5], dtype=np.float) >>> kde1 = stats.gaussian_kde(x1) >>> kde2 = stats.gaussian_kde(x1, bw_method='silverman') >>> fig = plt.figure() >>> ax = fig.add_subplot(111) >>> ax.plot(x1, np.zeros(x1.shape), 'b+', ms=20) # rug plot >>> x_eval = np.linspace(-10, 10, num=200) >>> ax.plot(x_eval, kde1(x_eval), 'k-', label="Scott's Rule") >>> ax.plot(x_eval, kde2(x_eval), 'r-', label="Silverman's Rule") >>> plt.show() We see that there is very little difference between Scott’s Rule and Silverman’s Rule, and that the bandwidth selection with a limited amount of data is probably a bit too wide. We can define our own bandwidth function to get a less smoothed out result. >>> def my_kde_bandwidth(obj, fac=1./5): ... """We use Scott's Rule, multiplied by a constant factor.""" ... return np.power(obj.n, -1./(obj.d+4)) * fac >>> fig = plt.figure() >>> ax = fig.add_subplot(111) >>> ax.plot(x1, np.zeros(x1.shape), 'b+', ms=20) # rug plot >>> kde3 = stats.gaussian_kde(x1, bw_method=my_kde_bandwidth) >>> ax.plot(x_eval, kde3(x_eval), 'g-', label="With smaller BW") >>> plt.show() We see that if we set bandwidth to be very narrow, the obtained estimate for the probability density function (PDF) is simply the sum of Gaussians around each data point. We now take a more realistic example, and look at the difference between the two available bandwidth selection rules. Those rules are known to work well for (close to) normal distributions, but even for unimodal distributions that are quite strongly non-normal they work reasonably well. As a non-normal distribution we take a Student’s T distribution with 5 degrees of freedom. 
import numpy as np import matplotlib.pyplot as plt from scipy import stats np.random.seed(12456) x1 = np.random.normal(size=200) # random data, normal distribution xs = np.linspace(x1.min()-1, x1.max()+1, 200) kde1 = stats.gaussian_kde(x1) kde2 = stats.gaussian_kde(x1, bw_method='silverman') fig = plt.figure(figsize=(8, 6)) ax1 = fig.add_subplot(211) ax1.plot(x1, np.zeros(x1.shape), 'b+', ms=12) # rug plot ax1.plot(xs, kde1(xs), 'k-', label="Scott's Rule") ax1.plot(xs, kde2(xs), 'b-', label="Silverman's Rule") ax1.plot(xs, stats.norm.pdf(xs), 'r--', label="True PDF") ax1.set_xlabel('x') ax1.set_ylabel('Density') ax1.set_title("Normal (top) and Student's T$_{df=5}$ (bottom) distributions") ax1.legend(loc=1) x2 = stats.t.rvs(5, size=200) # random data, T distribution xs = np.linspace(x2.min() - 1, x2.max() + 1, 200) kde3 = stats.gaussian_kde(x2) kde4 = stats.gaussian_kde(x2, bw_method='silverman') ax2 = fig.add_subplot(212) ax2.plot(x2, np.zeros(x2.shape), 'b+', ms=12) # rug plot ax2.plot(xs, kde3(xs), 'k-', label="Scott's Rule") ax2.plot(xs, kde4(xs), 'b-', label="Silverman's Rule") ax2.plot(xs, stats.t.pdf(xs, 5), 'r--', label="True PDF") ax2.set_xlabel('x') ax2.set_ylabel('Density') plt.show() We now take a look at a bimodal distribution with one wider and one narrower Gaussian feature. We expect that this will be a more difficult density to approximate, due to the different bandwidths required to accurately resolve each feature. >>> from functools import partial >>> loc1, scale1, size1 = (-2, 1, 175) >>> loc2, scale2, size2 = (2, 0.2, 50) >>> x2 = np.concatenate([np.random.normal(loc=loc1, scale=scale1, size=size1), ... 
np.random.normal(loc=loc2, scale=scale2, size=size2)]) >>> x_eval = np.linspace(x2.min() - 1, x2.max() + 1, 500) >>> kde = stats.gaussian_kde(x2) >>> kde2 = stats.gaussian_kde(x2, bw_method='silverman') >>> kde3 = stats.gaussian_kde(x2, bw_method=partial(my_kde_bandwidth, fac=0.2)) >>> kde4 = stats.gaussian_kde(x2, bw_method=partial(my_kde_bandwidth, fac=0.5)) >>> pdf = stats.norm.pdf >>> bimodal_pdf = pdf(x_eval, loc=loc1, scale=scale1) * float(size1) / x2.size + \ ... pdf(x_eval, loc=loc2, scale=scale2) * float(size2) / x2.size >>> fig = plt.figure(figsize=(8, 6)) >>> ax = fig.add_subplot(111) >>> ax.plot(x2, np.zeros(x2.shape), 'b+', ms=12) >>> ax.plot(x_eval, kde(x_eval), 'k-', label="Scott's Rule") >>> ax.plot(x_eval, kde2(x_eval), 'b-', label="Silverman's Rule") >>> ax.plot(x_eval, kde3(x_eval), 'g-', label="Scott * 0.2") >>> ax.plot(x_eval, kde4(x_eval), 'c-', label="Scott * 0.5") >>> ax.plot(x_eval, bimodal_pdf, 'r--', label="Actual PDF") >>> ax.set_xlim([x_eval.min(), x_eval.max()]) >>> ax.legend(loc=2) >>> ax.set_xlabel('x') >>> ax.set_ylabel('Density') >>> plt.show() As expected, the KDE is not as close to the true PDF as we would like due to the different characteristic size of the two features of the bimodal distribution. By halving the default bandwidth (Scott * 0.5) we can do somewhat better, while using a factor 5 smaller bandwidth than the default doesn’t smooth enough. What we really need though in this case is a non-uniform (adaptive) bandwidth. Multivariate estimation¶ With gaussian_kde we can perform multivariate as well as univariate estimation. We demonstrate the bivariate case. First we generate some random data with a model in which the two variates are correlated. 
>>> def measure(n): ... """Measurement model, return two coupled measurements.""" ... m1 = np.random.normal(size=n) ... m2 = np.random.normal(scale=0.5, size=n) ... return m1+m2, m1-m2 >>> m1, m2 = measure(2000) >>> xmin = m1.min() >>> xmax = m1.max() >>> ymin = m2.min() >>> ymax = m2.max() Then we apply the KDE to the data: >>> X, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j] >>> positions = np.vstack([X.ravel(), Y.ravel()]) >>> values = np.vstack([m1, m2]) >>> kernel = stats.gaussian_kde(values) >>> Z = np.reshape(kernel.evaluate(positions).T, X.shape) Finally we plot the estimated bivariate distribution as a colormap, and plot the individual data points on top. >>> fig = plt.figure(figsize=(8, 6)) >>> ax = fig.add_subplot(111) >>> ax.imshow(np.rot90(Z), cmap=plt.cm.gist_earth_r, ... extent=[xmin, xmax, ymin, ymax]) >>> ax.plot(m1, m2, 'k.', markersize=2) >>> ax.set_xlim([xmin, xmax]) >>> ax.set_ylim([ymin, ymax]) >>> plt.show()
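Besides evaluation on a grid, a fitted gaussian_kde can also report probability mass over a region via kernel.integrate_box (or integrate_box_1d for univariate estimates). A minimal univariate sketch; the sample and interval are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(8765)  # arbitrary seed
sample = rng.standard_normal(500)
kde = stats.gaussian_kde(sample)

# Estimated probability mass on [-1, 1]; for a standard normal the true
# value is about 0.683, so the KDE estimate should land in that vicinity.
mass = kde.integrate_box_1d(-1.0, 1.0)
print(mass)

# Integrating over a very wide interval recovers essentially all the mass.
total = kde.integrate_box_1d(-50.0, 50.0)
print(total)
```

Because the kernel smooths the data, the estimate on a fixed interval is typically slightly flatter than the true density, so the mass on [-1, 1] tends to come out a little below the theoretical 0.683.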
Upgrading Icinga 2 ¶ Upgrading Icinga 2 is usually quite straightforward. Ordinarily the only manual steps involved are schema updates for the IDO database. Specific version upgrades are described below. Please note that version updates are incremental. An upgrade from v2.6 to v2.8 requires following the instructions for v2.7 too. Upgrading to v2.10 ¶ 2.10.5 Packages ¶ EOL distributions where no packages are available with 2.10.5: - SLES 11 - Ubuntu 14 LTS Path Constant Changes ¶ During package upgrades you may see a notice that the configuration content of features has changed. This is due to a more general approach with path constants in v2.10. The known constants SysconfDir and LocalStateDir stay intact and won’t break on upgrade. If you are using these constants in your own custom command definitions or other objects, you are advised to revise them and update them according to the documentation. Example diff: object NotificationCommand "mail-service-notification" { - command = [ SysconfDir + "/icinga2/scripts/mail-service-notification.sh" ] + command = [ ConfigDir + "/scripts/mail-service-notification.sh" ] If you have the ICINGA2_RUN_DIR environment variable configured in the sysconfig file, you need to rename it to ICINGA2_INIT_RUN_DIR. ICINGA2_STATE_DIR has been removed and this setting has no effect. Note This is important if you rely on the sysconfig configuration in your own scripts. New Constants ¶ New Icinga constants have been added in this release. Environment for specifying the Icinga environment. Defaults to not set. ApiBindHost and ApiBindPort to allow overriding the default ApiListener values. This will be used for an Icinga addon only. Configuration: Namespaces ¶ The keywords namespace and using are now reserved for the namespace functionality provided with v2.10. Read more about how it works here. Configuration: ApiListener ¶ Anonymous JSON-RPC connections in the cluster can now be configured with the max_anonymous_clients attribute. 
The corresponding REST API results from /v1/status/ApiListener in json_rpc have been renamed from clients to anonymous_clients to better reflect their purpose. Authenticated clients are counted as connected endpoints. A similar change applies to the performance data metrics. The TLS handshake timeout defaults to 10 seconds since v2.8.2. This can now be configured with the configuration attribute tls_handshake_timeout. Beware of performance issues when setting too high a value. API: schedule-downtime Action ¶ The attribute child_options was previously accepting 0,1,2 for specific child downtime settings. This behaviour stays intact, but the newly proposed way is to use specific constants as values ( DowntimeNoChildren, DowntimeTriggeredChildren, DowntimeNonTriggeredChildren). Notifications: Recovery and Acknowledgement ¶ When a user should be notified on Problem and Acknowledgement, v2.10 now checks during the Acknowledgement notification event whether this user has been notified about a problem before. types = [ Problem, Acknowledgement, Recovery ] If no Problem notification was sent, and the types filter includes problems for this user, the Acknowledgement notification is not sent. In contrast to that, the following configuration always sends Acknowledgement notifications. types = [ Acknowledgement, Recovery ] This change also restores the old behaviour for Recovery notifications. The above configuration leaving out the Problem type can be used to only receive recovery notifications. If Problem is added to the types again, Icinga 2 checks whether it has notified a user of a problem when sending the recovery notification. More details can be found in this PR. Stricter configuration validation¶ Some config errors are now fatal. While such configuration never worked correctly before, Icinga 2 now refuses to start! 
For example the following started to give a fatal error in 2.10: object Zone "XXX" { endpoints = [ "master-server" ] parent = "global-templates" } critical/config: Error: Zone 'XXX' can not have a global zone as parent. Package Changes ¶ Debian/Ubuntu drops the libicinga2 package. apt-get upgrade icinga2 won’t remove such packages, leaving the upgrade in an unsatisfied state. Please use apt-get full-upgrade or apt-get dist-upgrade instead, as explained here. On RHEL/CentOS/Fedora, icinga2-libs has been obsoleted. Unfortunately yum’s dependency resolver doesn’t allow installing older versions than 2.10 then. Please read here for details. RPM packages dropped the Classic UI package in v2.8; the Debian/Ubuntu packages were forgotten and are now removed with this release. Icinga 1.x is EOL by the end of 2018, plan your migration to Icinga Web 2. Upgrading to v2.9 ¶ Deprecation and Removal Notes ¶ - Deprecation of 1.x compatibility features: StatusDataWriter, CompatLogger, CheckResultReader. Their removal is scheduled for 2.11. Icinga 1.x is EOL and will be out of support by the end of 2018. - Removal of Icinga Studio. It has always been experimental and did not satisfy our high quality standards. We’ve therefore removed it. Sysconfig Changes ¶ The security fixes in v2.8.2 required moving specific runtime settings into the sysconfig file and environment. This meant that Icinga 2 itself parsed this file and read the required variables. This generated numerous false-positive log messages and led to many support questions. v2.9.0 changes this to the standard approach of reading these variables from the environment, using sane compile-time defaults. Note In order to upgrade, remove everything in the sysconfig file and re-apply your changes. There is a bug with existing sysconfig files where path variables are not expanded because systemd does not support shell variable expansion. This worked with SysVInit though. 
Edit the sysconfig file and either remove everything, or edit this line on RHEL 7. Modify the path for other distributions. vim /etc/sysconfig/icinga2 -ICINGA2_PID_FILE=$ICINGA2_RUN_DIR/icinga2/icinga2.pid +ICINGA2_PID_FILE=/run/icinga2/icinga2.pid If you want to adjust the number of open files for the Icinga application for example, you would just add this setting like this on RHEL 7: vim /etc/sysconfig/icinga2 ICINGA2_RLIMIT_FILES=50000 Restart Icinga 2 afterwards, the systemd service file automatically puts the value into the application’s environment where this is read on startup. Setup Wizard Changes ¶ Client and satellite setups previously had the example configuration in conf.d included by default. This caused trouble on config sync, or with locally executed checks generating wrong check results for command endpoint clients. In v2.9.0 node wizard, node setup and the graphical Windows wizard will disable the inclusion by default. You can opt-out and explicitly enable it again if needed. In addition to the default global zones global-templates and director-global, the setup wizards also offer to specify your own custom global zones and generate the required configuration automatically. The setup wizards also use full qualified names for Zone and Endpoint object generation, either the default values (FQDN for clients) or the user supplied input. This removes the dependency on the NodeName and ZoneName constant and helps to immediately see the parent-child relationship. Those doing support will also see the benefit in production. CLI Command Changes ¶ The node setup parameter --master_host was deprecated and replaced with --parent_host. This parameter is now optional to allow connection-less client setups similar to the node wizard CLI command. The parent_zone parameter has been added to modify the parent zone name e.g. for client-to-satellite setups. 
The api user command, which was released in v2.8.2, turned out to cause huge problems with configuration validation, Windows restarts and OpenSSL versions. It is therefore removed in 2.9; the password_hash attribute for the ApiUser object stays intact but has no effect. This is to ensure that clients don’t break on upgrade. We will revise this feature in future development iterations. Configuration Changes ¶ The CORS attributes access_control_allow_credentials, access_control_allow_headers and access_control_allow_methods are now controlled by Icinga 2 and cannot be changed anymore. Unique Generated Names ¶ With the removal of RHEL 5 as a supported platform, we can finally use real unique IDs. This is reflected in generating names for e.g. API stage names. Previously it was a handcrafted mix of local FQDN, timestamps and random numbers. Custom Vars not updating ¶ A rare issue may occur that prevents the custom vars of objects created prior to 2.9.0 from being updated when changed. To remedy this, truncate the customvar tables and restart Icinga 2. The following is an example of how to do this with mysql: $ mysql -uroot -picinga icinga MariaDB [icinga]> truncate icinga_customvariables; Query OK, 0 rows affected (0.05 sec) MariaDB [icinga]> truncate icinga_customvariablestatus; Query OK, 0 rows affected (0.03 sec) MariaDB [icinga]> exit Bye $ sudo systemctl restart icinga2 Custom vars should now stay up to date. Upgrading to v2.8.2 ¶ With version 2.8.2 the location of settings formerly found in /etc/icinga2/init.conf has changed. They are now located in the sysconfig, /etc/sysconfig/icinga2 (RPM) or /etc/default/icinga2 (DPKG) on most systems. The init.conf file has been removed and its settings will be ignored. These changes are only relevant if you edited the init.conf. Below is a table displaying the new names for the affected settings. Upgrading to v2.8 ¶ DB IDO Schema Update to 2.8.0 ¶ There are additional indexes and schema fixes which require an update. 
Please proceed here for MySQL or PostgreSQL. Note 2.8.1.sql fixes a unique constraint problem with fresh 2.8.0 installations. You don’t need this update if you are upgrading from an older version. Changed Certificate Paths ¶ The default certificate path was changed from /etc/icinga2/pki to /var/lib/icinga2/certs. This applies to Windows clients in the same way: %ProgramData%\etc\icinga2\pki was moved to %ProgramData%\var\lib\icinga2\certs. Note The default expected path for client certificates is /var/lib/icinga2/certs/ + NodeName + {.crt,.key}. The NodeName constant is usually the FQDN and certificate common name (CN). Check the conventions section inside the Distributed Monitoring chapter. The setup CLI commands and the default ApiListener configuration have been adjusted to these paths too. The ApiListener object attributes cert_path, key_path and ca_path have been deprecated and removed from the example configuration. Migration Path ¶ Note Icinga 2 automatically migrates the certificates to the new default location if they are configured and detected in /etc/icinga2/pki. During startup, the migration kicks in and ensures the certificates are copied to the new location. This will also happen if someone updates the certificate files in /etc/icinga2/pki, to ensure that the new certificate location always has the latest files. This has been implemented in the Icinga 2 binary to ensure it works on both Linux/Unix and the Windows platform. If you are not using the built-in CLI commands and setup wizards to deploy the client certificates, please ensure you update your deployment tools/scripts. This mainly affects - Puppet modules - Ansible playbooks - Chef cookbooks - Salt recipes - Custom scripts, e.g. Windows Powershell or self-made implementations In order to support a smooth migration between versions older than 2.8 and future releases, the built-in certificate migration path is planned to exist as long as the deprecated ApiListener object attributes exist. 
You are safe to use the existing configuration paths inside the api feature.

Example: Look at the following example taken from the Director Linux deployment script for clients.

Ensure that the default certificate path is changed from /etc/icinga2/pki to /var/lib/icinga2/certs:

-ICINGA2_SSL_DIR="${ICINGA2_CONF_DIR}/pki"
+ICINGA2_SSL_DIR="${ICINGA2_STATE_DIR}/lib/icinga2/certs"

Remove the ApiListener configuration attributes:

 object ApiListener "api" {
-  cert_path = SysconfDir + "/icinga2/pki/${ICINGA2_NODENAME}.crt"
-  key_path = SysconfDir + "/icinga2/pki/${ICINGA2_NODENAME}.key"
-  ca_path = SysconfDir + "/icinga2/pki/ca.crt"
   accept_commands = true
   accept_config = true
 }

Test the script with a fresh client installation before putting it into production.

Tip: Please support module and script developers in their migration. If you find any project which would require these changes, create an issue or a patchset in a PR and help them out. Thanks in advance!

On-Demand Signing and CA Proxy

Icinga 2 v2.8 supports the following features inside the cluster:

- Forward signing requests from clients through a satellite instance to a signing master ("CA Proxy").
- Signing requests without a ticket. The master instance allows to list and sign CSRs ("On-Demand Signing").

In order to use these features, all instances must be upgraded to v2.8. More details in this chapter.

Windows Client

Windows versions older than Windows 10/Server 2016 require the Universal C Runtime for Windows.

Removed Bottom Up Client Mode

This client mode was deprecated in 2.6 and was removed in 2.8. The node CLI command does not provide list or update-config anymore.

Note: The old migration guide can be found on GitHub.

The clients don't need to have a local conf.d directory included. Icinga 2 continues to run with the generated and imported configuration.
You are advised to migrate any existing configuration to the "top down" mode with the help of the Icinga Director or config management tools such as Puppet, Ansible, etc.

Removed Classic UI Config Package

The config meta package classicui-config and the configuration files have been removed. You need to manually configure this legacy interface. Create a backup of the configuration before upgrading and re-configure it afterwards.

Flapping Configuration

Icinga 2 v2.8 implements a new flapping detection algorithm which splits the threshold configuration into low and high settings. flapping_threshold is deprecated and does not have any effect when flapping is enabled. Please remove flapping_threshold from your configuration; this attribute will be removed in v2.9. Instead you need to use the flapping_threshold_low and flapping_threshold_high attributes. More details can be found here.

Deprecated Configuration Attributes

Upgrading to v2.7

v2.7.0 provided new notification scripts and commands. Please ensure to update your configuration accordingly. An advisory has been published here.

In case you are having trouble with OpenSSL 1.1.0 and the public CA certificates, please read this advisory and check the troubleshooting chapter.

If Icinga 2 fails to start with an empty reference to $ICINGA2_CACHE_DIR, ensure to set it inside /etc/sysconfig/icinga2 (RHEL) or /etc/default/icinga2 (Debian). RPM packages will put a file called /etc/sysconfig/icinga2.rpmnew in place if you have modified the original file.

Example on CentOS 7:

vim /etc/sysconfig/icinga2
ICINGA2_CACHE_DIR=/var/cache/icinga2

systemctl restart icinga2

Upgrading the MySQL database

If you want to upgrade an existing Icinga 2 instance, check the /usr/share/icinga2-ido-mysql/schema/upgrade directory for incremental schema upgrade file(s).

Note: If there isn't an upgrade file for your current version available, there's nothing to do.

Apply all database schema upgrade files incrementally.
# mysql -u root -p icinga < /usr/share/icinga2-ido-mysql/schema/upgrade/<version>.sql

Example, applied incrementally:

# mysql -u root -p icinga < /usr/share/icinga2-ido-mysql/schema/upgrade/2.5.0.sql
# mysql -u root -p icinga < /usr/share/icinga2-ido-mysql/schema/upgrade/2.6.0.sql
# mysql -u root -p icinga < /usr/share/icinga2-ido-mysql/schema/upgrade/2.8.0.sql

Upgrading the PostgreSQL database

If you want to upgrade an existing Icinga 2 instance, check the /usr/share/icinga2-ido-pgsql/schema/upgrade directory for incremental schema upgrade file(s).

Note: If there isn't an upgrade file for your current version available, there's nothing to do.

Apply all database schema upgrade files incrementally.

# export PGPASSWORD=icinga
# psql -U icinga -d icinga < /usr/share/icinga2-ido-pgsql/schema/upgrade/<version>.sql

Example, applied incrementally:

# export PGPASSWORD=icinga
# psql -U icinga -d icinga < /usr/share/icinga2-ido-pgsql/schema/upgrade/2.5.0.sql
# psql -U icinga -d icinga < /usr/share/icinga2-ido-pgsql/schema/upgrade/2.6.0.sql
# psql -U icinga -d icinga < /usr/share/icinga2-ido-pgsql/schema/upgrade/2.8.0.sql
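Since the upgrade files must be applied one after another anyway, a small loop saves typing. This is a sketch, not part of the official docs; it feeds every .sql file in a directory, in glob order, to whatever database client command you pass in:

```shell
#!/bin/sh
# Apply every schema file in a directory to the given database client command.
# Example: apply_upgrades /usr/share/icinga2-ido-mysql/schema/upgrade mysql -u root -p icinga
# Note: plain glob order is alphabetical, which is fine for the 2.x file names
# shown above; double-check the ordering if a directory ever mixes 2.9/2.10.
apply_upgrades() {
    dir="$1"; shift
    for f in "$dir"/*.sql; do
        [ -e "$f" ] || continue
        echo "applying $f"
        "$@" < "$f" || return 1   # stop on the first failed upgrade
    done
}
```

The same helper works for PostgreSQL by passing `psql -U icinga -d icinga` as the client command.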
https://icinga.com/docs/icinga2/latest/doc/16-upgrading-icinga-2/
Plasma 5 Monitoring 42 comments

I normally use the Trinity desktop (i.e., more or less, KDE3) and am looking to move to the current version of KDE, but I need to be able to run several widgets monitoring resources on remote systems. In KDE3/Trinity this is easy using the "System Guard" widget, but I don't see how to do it in KDE/Plasma 5. This seems like the right widget, but maybe I'm wrong and there's another one that I need to install to do the job? - Dec 05 2019

Plasma 4 Extensions 301 comments

[HN:build] cmake -DCMAKE_INSTALL_PREFIX=`kde4-config --localprefix` ../

as described in the INSTALL file produces:

----
--/n7dr/.kde/share/apps;/usr/share/kde4/apps
Call Stack (most recent call first):
  CMakeLists.txt:6 (find_package)
-- Configuring incomplete, errors occurred!
[HN:build]
----

What do I need to do to fix this? - Aug 20 2015

Utilities 11 comments

For example, I have sunbird in /home/n7dr/programs/sunbird, but if I put the command "ksystraycmd --startonshow --keeprunning home/n7dr/programs/sunbird/sunbird" as a menu item on the K menu, the command doesn't run (although it does run if I type it from a console window). I've tried putting an explicit path in for the working directory, and also trying various combinations of the other checkboxes when creating the K menu item, but none of them seem to work with the ksystraycmd you describe. I don't know why. Really, KDE should have a "minimize to tray" option on all main application windows. It seems, according to some threads I've seen, that this is being considered for the future. - Oct 21 2006

Amarok 1.x Scripts 23 comments

Now that it's running, there seems to be one problem. When I click the "Playlist" link, the page that is shown is more or less empty. Certainly it does not show the playlist. (If I try to include the source code for the page in this message, unfortunately the forum software mangles it.) If I hit the "Currently Playing" link on that page, then nothing happens at all.
The only links visible are the "Back" link (twice) and the "Currently Playing" link (which does nothing). - Oct 16 2006

Running Kubuntu 6.06-1, I am getting this error when I try to run this script:

Can't locate DCOP/Amarok/Player /home/n7dr/.kde/share/apps/amarok/scripts/httpremote_amarok/httpremote_amarok.pl line 31.
BEGIN failed--compilation aborted at /home/n7dr/.kde/share/apps/amarok/scripts/httpremote_amarok/httpremote_amarok.pl line 31.

- Oct 09 2006

Karamba & Superkaramba 7 comments

The version of SK that is shipped with 64-bit Mandriva 2006.0 is simply unstable. It crashes seemingly at random, sometimes after several days, more usually after a few hours. And it has a quite high probability of simply crashing on start-up. I don't know whether this is just because it's the 64-bit version or what, but SK as shipped with that version of the OS is pretty frustrating. In fact, 64-bit apps in general seem to be rather too "bleeding edge" for comfort :-( Many of them are unstable. - Jun 07 2006

Normally I run with ~12 themes in place, so it's been quite a pain to try to figure out which one is causing SK to crash, especially since it doesn't happen very often. So I've been running various combinations of themes to see how stable they are, and so far the only theme that all the combinations that crashed have in common is this one; and I haven't seen a crash yet when this theme is not running. So while I'm not saying that it's definitely causing a crash, I am getting fairly suspicious of it. I wonder if anyone else has experienced SK crashes while running this theme? (I fully admit that it's pretty weird that ANY theme could cause SK to crash -- one would think that the worst that could happen is that the theme crashes.) - May 29 2006

As far as I can tell, it's because *.rdf files are treated as text/plain, which is somehow regarded by the feedparser module as an inappropriate type for RSS feeds.
Anyway, I think I have a fix in place, but I want to leave it running here for a couple of days to make sure that it is stable. With any luck, I'll be able to upload a version 1.3 containing the fix sometime tomorrow. - May 29 2006

The items in that feed have no description field (and, more importantly, no pubDate field). This failure is actually not in code I wrote (don't you hate it when you rely on someone else's code and it breaks? :-) ). I'll see if I can figure out a way for it to fail cleanly -- although the theme still won't actually display anything from this feed, since it has no pubDates, so the theme doesn't know how to put the items into time order. But obviously silent failure would be better than the error you're currently getting. - May 21 2006

The only reason I'm asking is because I find this theme extremely useful, and I'm a bit surprised that someone else wanted the functionality enough to download it, but then they were presumably disappointed in what they actually got. So I'd like to at least hear why they didn't like it, in case it's something I can easily fix (or they can fix it themselves if they prefer). - May 20 2006

Karamba & Superkaramba 7 comments

You might also want to check out the multistat widget that I just uploaded. (Like you, I thought to myself, "why just stop with the VPN toggle when there's all kinds of things that can be monitored and toggled?" :-) - May 30 2006

The reason is that you normally don't download the e-mail from the gmail.com server, so I suspect that every time the theme tried to download the headers, it would get them ALL. You can test it if you like:

1. Add the line "from poplib import POP3_SSL" near the top of popcheck.py
2. Replace all occurrences of POP3 in the code with POP3_SSL

In theory, that's all that's needed. But as I say, I think you'll find that while that would work for ordinary POP3 servers accessed over SSL, it won't do what you want for gmail.
So I suspect that for google, one would need to implement some state that keeps track of how many e-mails have been looked at in the past. Which is a bit more pain than I want to subject myself to right now :-) Although if someone else reads this and wants to implement it, they're of course welcome to give it a shot. Incidentally, if I'm wrong and the changes I describe above are in fact all that's needed to make the theme work with gmail, just let me know and I'll put them in the base code. - May 21 2006

Karamba & Superkaramba 4 comments

Karamba & Superkaramba 21 comments

It says I need PerlQt (to access the graphical configuration stuff) -- but PerlQt is installed (via urpmi from my distro). So I wonder if there's a problem with the part of module-test.pl that detects PerlQt. - Apr 17 2006

I tried as an ordinary user, and when it had all finished, I tried "install DCOP", and got the following:

$ install DCOP
install: too few arguments
Try `install --help' for more information.

So presumably I need to run this stuff as root -- but I don't want to run as root unless you tell me I have to. I added the Time and Config modules from my distro, but the distro doesn't contain DCOP (mandatory) or some of the necessary optional modules. - Apr 17 2006

Can't locate Config/General.pm in @INC (@INC contains: /usr/lib/perl5/5.8.7/x86_64-linux /usr/lib/perl5/5.8.7 .) at ./mailbox-strainer.pl line 35.
BEGIN failed--compilation aborted at ./mailbox-strainer.pl line 35.
[H:perl]

I'll also send this to your e-mail address, so you can respond whichever way you prefer. - Apr 16 2006

I created a mailbox-strainer.cfg file, and checked that it was actually being used by changing some innocuous things (like the background image) and reloading the theme to make sure that indeed my changes were reflected in what I see on screen. So I know that the config file is being read OK. But I NEVER see any mail, even though I know that there are several e-mails waiting on the POP3 server.
- Apr 15 2006

> Yes, ksysguard is what I use in Trinity (KDE3). But I can't see how to put ksysguard in a panel in KDE5 :-( That inability is what led me to look for a different widget, which is when I found yours. - Dec 05 2019
https://www.pling.com/u/n7dr/
RC - Runtime Configuration

About

RC is a multi-tenant runtime configuration system for Ruby tools. It is designed to facilitate Ruby-based configuration for multiple tools in a single file, and designed to work whether the tool has built-in support for RC or not. The syntax is simple, universally applicable, yet flexible.

RC can be used with any Ruby-based command-line tool or library utilized by such a tool, where there exists some means of configuring it via a toplevel/global interface; or where the tool has been designed to directly support RC, of course.

Installation

To use RC with tools that support RC directly, there is likely nothing to install. Installing the tool should install rc via a dependency and load runtime configurations when the tool is used.

To use RC with a tool that does not provide built-in support, first install the RC library, typically via RubyGems:

  gem install rc

Then add -rc to your system's RUBYOPT environment variable:

  $ export RUBYOPT='-rc'

You will want to add that to your .bashrc, .profile or equivalent configuration script, so it is always available.

Instruction

To use RC in a project, create a configuration file called either .ruby or .rubyrc. The longer name has precedence if both are present. In this file add configuration blocks by name of the command-line tool. For example, let's demonstrate how we could use this to configure Rake tasks. (Yes, Rake is not the most obvious choice, since developers are just as happy to keep using a Rakefile. But using Rake as an example serves to show that it can be done, and it also makes a good tie-in with the next example.)

  $ cat .rubyrc
  config :rake do
    desc 'generate yard docs'
    task :yard do
      sh 'yard'
    end
  end

Now when rake is run, the tasks defined in this configuration will be available.

You might wonder why anyone would do this. That's where the multi-tenancy comes into play. Let's add another configuration.
  $ cat .rubyrc
  title = "MyApp"

  config :rake do
    desc 'generate yard docs'
    task :yard do
      sh "yard doc --title #{title}"
    end
  end

  config :qedoc do |doc|
    doc.title = "#{title} Demos"
  end

Now we have configuration for both the rake tool and the qedoc tool in a single file. Thus we gain the advantage of reducing the file count of our project while pulling our tool configurations together into one place. Moreover, these configurations can potentially share settings, as demonstrated here via the title local variable.

Of course, if we want configurations stored in multiple files, that can be done too. Simply use the import method to load them, e.g.

  import 'rc/*.rb'

RC also supports profiles, either via a profile block:

  profile :cov do
    config :qed do
      require 'simplecov'
      ...
    end
  end

Or via a keyword parameter:

  config 'qed', profile: 'cov' do
    require 'simplecov'
    ...
  end

When utilizing the tool, set the profile via an environment variable:

  $ profile=cov qed

RC also supports just p as a convenient shortcut:

  $ p=cov qed

Some tools that support RC out-of-the-box may support a profile command-line option for specifying the profile:

  $ qed -p cov

Beyond mere namespacing, some tools might utilize profiles for a more specific purpose fitting the tool. Consult the tool's documentation for details.

Configurations can also be pulled in from other gems using the from option:

  config :qed, :profile=>'simplecov', :from=>'qed'

As long as a project includes its .rubyrc file (and any imported files) in its gem package, it's possible to share configurations in this manner.

Customization

A tool can provide dedicated support for RC by loading rc/api and using the configure method to define a configuration procedure. For example, the detroit project defines:

  require 'rc/api'

  RC.configure 'detroit' do |config|
    if config.command?
      Detroit.rc_config << config
    end
  end

In our example, when detroit is required this configuration will be processed. The if config.command?
condition ensures that it only happens if the config's command property matches the current command, i.e. $0 == 'detroit'. We see here that Detroit stores the configuration for later use. When Detroit gets around to doing its thing, it checks this rc_config setting and evaluates the configurations found there.

It is important that RC be required first, ideally before anything else. This ensures it will pick up all configured features.

Some tools will want to support a command-line option for selecting a configuration profile. RC has a convenience method to make this very easy to do. For example, qed uses it:

  RC.profile_switch('qed', '-p', '--profile')

It does not remove the argument from ARGV, so the tool's command-line option parser should still account for it. This simply ensures RC will know what the profile is, by setting ENV['profile'] to the entry following the switch.
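The behavior just described is simple enough to sketch in a few lines. This is not RC's actual source, only an illustration of what a helper like profile_switch has to do (the signature here takes the argument array explicitly, which RC's real method does not):

```ruby
# Illustrative only -- not RC's implementation. Scans argv for one of the
# given switches and exports the entry following it as the profile, while
# leaving argv untouched for the tool's own option parser.
def profile_switch(argv, *switches)
  switches.each do |sw|
    i = argv.index(sw)
    ENV['profile'] = argv[i + 1] if i && argv[i + 1]
  end
end
```

With `profile_switch(ARGV, '-p', '--profile')`, running `qed -p cov` would leave ARGV intact while ENV['profile'] becomes "cov".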
http://www.rubydoc.info/gems/rc/frames
CC-MAIN-2017-26
refinedweb
930
65.01
I had to add this to my DCC_Namespace in the rbTC1416.dproj file to make it build under Delphi XE8: Vcl;Vcl.Imaging;Vcl.Touch;Vcl.Samples;Vcl.Shell; The occurrence of this DCC_Namespace setting corresponds to the “Unit scope names” of the “All configurations – All Platforms” target in the project options. That got rid of this error mesage: [dcc32 Fatal Error] ppChrt.pas(17): F2613 Unit 'Graphics' not found. It was for a site that had very little ReportBuilder but a truckload of FastReports stuff, so I temporarily needed their ReportBuilder for XE2 to just load in Delphi XE8 at design time so I could migrate. Apparently not all the ReportBuilder packages use the same namespace definitions. Even worse: they add various namespaces at various target levels in an inconsistent way, so it took me a bit more time than I originally hoped for sorting this out. Below is what the original settings were: only the .\TeeChart\Win32\TeePro900 directory had TeeChart project files (the .\Source and .\TeeChart\Win32\TeeStd900 directories hadn’t) and all three had slightly different unit source files for TeeChart support. Read the rest of this entry »
https://wiert.me/2017/02/02/
CC-MAIN-2018-17
refinedweb
192
65.01
This article will show you how to open and read text files in your programs. It will also present a simple library to hide those details away so you are not bothered with boiler-plate code over and over again. How many times have I had to parse a text file? Be it for reading some startup options, some kind of resource file or just to extract data from a web page or some other kind of textual data. I don t remember, but I do remember that every project started with same mantra: a few lines of code to open the file, to move through the file line by line, do my stuff and finally to close the file after parsing is done. I used to write my programs mostly In Java before, and below is the loop I usually write: import java.io.*; public class LineReader { public static void main(String[] args){ // do some checks for args here . String filename = args[0]; try { BufferedReader br = new BufferedReader(new FileReader(filename)); String line = br.readLine(); while(line != null){ /* * do something with line here */ line = br.readLine(); } br.close(); }catch(Exception e){ e.printStackTrace(); } } } I have also written quite few C and C++ projects, and this loop look like this in C: #include <stdio.h> int main(int argc, char** argv) { FILE* fp; char line[LINE_SIZE]; fp = fopen( argv[1], "r"); if (!fp) return 1; while ( fgets( line, LINE_SIZE, fp) ) { /* * do something with line here */ } fclose(fp); return 0; } C++ is bit more civilized and it is really easy to get this going in just few lines: #include <iostream> #include <fstream> int main(int argc, char** argv) { if(argc != 2) return 1; std::string line; std::ifstream ifs(argv[1]); if(!ifs.is_open()) return 0; while( std::getline(ifs,line) ){ /* * do something with line here */ } return 0; } Simple things are always simple, regardless of the language used. Maybe that is the reason why I never cared to encapsulate this. 
But once we keep opening files in several places in a program, it might get boring and tedious, no matter how simple it is. Two days ago I got really disgusted doing it in a parser, and actually got myself together and wrote little framework to encapsulate this in a simple line reader. Meet the libToken. This library is not meant to be an advanced language parser, do regular expressions or any other fancy stuff. There already exist hundreds of libraries that handle those tasks better than I can probably ever write on my own. LibToken is just very tiny wrapper for line parsing pattern, if you can call it a pattern. Maybe it is a line parsing algorithm; it does not really matter what it is called, the important thing is ease of use and deployment. LibToken started originally as one function only, but has expanded over a course of one afternoon to an entire four functions. It is written in pure ANSI C so there will be no name mangling and the same compiled code can (hopefully) be used with any compiler from both C and C++ (if you prefer to use it in a shared DLL as I do). As said it is a simple framework, not a library. For those who cannot differentiate the two, a framework is generally a prewritten application, with pluggable endpoints which connects the framework with your code. A library is a collection of prewritten functions which you call from your code to do some work. Also a framework calls your code, while in a library you call library code. That was the theory; here is how it works in this case: You specify a callback which will be called whenever a new line of text is found. The line will be passed to this callback so here is where you do all your parsing, copying and whatever you need to do. The function with which you invoke the parser takes the name of a file to be parsed and a pointer to callback that will be invoked when a new line is found. 
Sample code looks like this: #include <nextline.h> #include <stdio.h> void nextLine(const char* line){ /* this is callback */ /* * do something with line here */ } int main(int argc, char** argv) { if(argc != 2) return 1; ntNextLine(argv[1], nextLine, 0); /* this is parser invocation */ return 0; } As you see there is no special initiation or data structures involved. We just have to pass the name of the file to be opened and our callback. Simple *huh*? (Don t bother about last argument I will explain it next). So now we can jump into the realm of parsing text directly instead of writing all that open/close file, tests and line fetching loop every time we have to do some parsing. Protype for nextLine callback looks like this: int ntGetLines( const char* filename, ntNextTokenFunc f, ... ); As you see it has ellipsis in argument list which means it is a variadic function (just like printf). I think we can simplify code in our nextLine callback by factoring out some usual boiler-plate code which occurs when parsing text files. For this reason the parsing function can take arguments and callbacks to functions called filters. ellipsis printf When a file is parsed, there will be some lines you don't want to do anything with. A typical example is comments in a programming language. Usually you want to skip those and just get to the next line. Instead of cluttering our main parsing callback with a bunch of ifs or big switch we can refactor those cases in separate and simpler callbacks. if I call those filters and they are supposed to be simple, say, a few lines of code functions. They should return a boolean value used to determine if a line is to be skipped or passed to your callback. 
The prototype: int ntFilterFunc(const char* line); As example you might do something like this: #include <nextline.h> #include <stdio.h> void nextLine(const char* line){ puts(line); } int skipComment(const char* line){ if(line[0] == '#') return false; return true; } int main(int argc, char** argv) { if(argc != 2) return 1; ntNextLine(argv[1], nextLine, skipComment); return 0; } If skipComment returns 0, the line will not be passed to your nextLine callback and ntNextLine will continue with the next line from the file. skipComment nextLine ntNextLine That is a very dumb comment-skipping code since it expects there is no white spaces before the # , but it gives the idea of a filter I guess. You can pass any number of those callbacks to the parser. If you don't want to use any filter function, be sure to pass 0 at the end, otherwise your application will end in runtime hell (seg fault). Finally, this is prototype for callback when a new line is found (the nextLine above is such): void ntNextTokenFunc(const char* line); In some cases we are not interested in line parsing, but rather receiving text one word at a time. Those words do not have to be what we usually call words: a character sequences delimited by white spaces and punctuations. Sometimes we can have other symbols as delimiters, for example, mathematical symbols as parenthesis, operators etc. In programmers parlance, words are called tokens, and the process of finding them is called tokenizing the input. Tokenizing is usually implemented by reading text line by line and then searching for patterns within that line. Since this is also standard pattern/algorithm, we can encapsulate it into a wrapper. By encapsulating it, we are also hiding its implementation from the user, and we can make it more efficient by changing its implementation from reading one line at a time from file, to reading entire text file at once and then returning one token at a time. This is what ntNextToken function does. 
In case there is not enough space in the system to hold entire file in memory it will default to line reader. In any case you don t have to care about those details. ntNextToken To tokenize entire file call int ntGetTokens( const char* filename, const char* delimiters, ntNextTokenFunc f ); If you wish to find white space delimited tokens only you might use int ntGetWords( const char* filename, ntNextTokenFunc f ); instead. This is same function as ntNextToken but with delimiters already chosen to be white ( \n\v\t\r ), or to be exact, with delimiters defined by standard macro isspace(char c). Lastly, in many cases you have a string and wish to find tokens within that string. For this purpose there is: int ntTokenize( const char* string, const char* delimiters, ntNextTokenFunc f ); The library will make its own copy of string so original data is not being destroyed. It will operate on internal buffer of size NT_LINESIZE which is compile time constant. Default value is 1024 characters (1kb) but you may change if you wish. Another important thing to know is that function expects zero terminated strings. If \0 is missing the algorithm will automatically insert one at buffer[NT_LINESIZE-1] place. This is in order to protect against stack overflows. It also means we have to check the number of characters copied over, also somewhat slow performance. In case you trust your data and know exactly what comes in, you might consider removing those checks. I don't recommend it if you intend to parse any user generated text files or input. NT_LINESIZE NT_LINESIZE-1 It is important to notice that library uses only stack space. In case where it actually does allocate heap, it will also free that heap directly after your callback has executed, so don't expect any data you get in the callback, to last after the callback has executed. Also if you need a line(s) to be stored after the callback execution, you should copy them yourself to some space. 
You should never try to free any pointers you got from the library. That's the beauty of working with callbacks, you don't have to care about new/malloced space, it can be automated for you. Compared to other string parsing libraries, this is really simple and limited one, but I see simplicity as a feature rather than limitation. You can use this as a first step in building more advanced parser. Compared to standard C function strtok(), all functions in this library are non destructible and thread safe since no static data or any other form of data sharing is used. When compared to features of strtok() there is one disadvantage: it is not possible to change delimiters during the tokenizing process. Since strtok saves states between calls and continues from last place processed, it can change delimiters in different invocations. Considering drawbacks it has because of shared state, I don't think this is really killer feature. An advantage of tokenizer in this library over strtok would be to return delimiters, which is easily implemented; I just didn't have time yet. It can't be done with standard strtok (unless you want to write your own). strtok() strtok There are only 4 functions in one header and one source file. Since there are no other objects then primitive data used on stack, there is no need for initializations. Furthermore we have no shared objects here, so every invocation of a function uses its own stack only, so function invocations are equivalent to objects here. Simply said, there is no need to wrap such simple library in a class for all you object oriented purists. I prefer to keep it in a single shared library, but you may also simply add header and source files directly to your project. Download includes also a Visual Studio Express project file for building the dll, but no makefile for gcc users (planned for future). I didn t tested it on linux or anything else but VS Express yet. 
Observe that a line by line approach is sufficient in most cases unless you allow your tokens to continue on next line. This library does not attempt to handle cases where tokens are broken over multiple lines. NT in prefix stands for "next token." There is a lot of place for improvement in this library. The code so long, is just a prototype, rather then finished library. In future I do plan to convert it to wide chars instead of ASCII, and to add option to return delimiters in string tokenizer. However, the first task is to do more testing before any new features are.
https://www.codeproject.com/Articles/38442/libToken-Simple-Wrapper-for-Reading-and-Tokenizing?fid=1544313&df=90&mpp=10&sort=Position&spc=None&tid=3134189
When processing manuscripts for shiny new O'Reilly books, I often need to run a particular Word macro on a batch of files. While this is certainly possible using VBA directly, it becomes quite challenging when either the name of the specific macro to run (it may be one of dozens of utility macros), or the files to run it on, are constantly changing, as is usually the case. Spoiled by the large percentage of my day spent on a Unix command line, I started looking for a way to easily run any Word macro, on any number of files, right from the DOS command line. This article shows how to do just that, using three popular, free, and Windows-friendly scripting languages: Perl, Python, and Ruby. You'll need at least one of those installed on your Windows machine to use any of the code in this article. If you don't have one, see the sidebar, "Picking a Scripting Language."

I wanted a script that used the name of the macro to run as its first argument, and the files to run that macro on as the remaining arguments, like this:

> batchmacro MyMacroName *.doc

The Ruby version is the one I actually use, but the Perl and Python versions work just as well. To try out any (or all) of these scripts, put them in the same folder as one or more Word documents. (For now, avoid documents with spaces in their names.) For illustration purposes, you should also create the following macro in your Normal template:

Sub HelloWorld()
    ActiveDocument.Range.InsertBefore "Hello from the command line!" & vbCr
    ActiveDocument.Paragraphs.First.Style = wdStyleHeading1
End Sub

Picking a Scripting Language

If you haven't yet declared allegiance to a particular language (or been forced to after inheriting legacy tools), you've got a choice to make, at least if you want to try out the code in this article. Fortunately, there's no harm in choosing all three, but there are a few important considerations if you'll be scripting in Windows. Here's the skinny on the big three:

Perl.
Perl is the darling of system administrators and webmasters the world over. Unmatched in its text-processing skills, its already painful syntax is downright torturous when married with COM. I install Perl on any machine I use, but for COM scripting, it's definitely my last choice. Get Perl for free from ActiveState.

Python. This is a fine choice for COM scripting and is a much easier transition for those already used to the "dot" syntax found in VB and .NET. Get Python for free from ActiveState.

Ruby. If you haven't been converted to the Perl or Python camps (by a "Perl Monk" or a "Pythonista," respectively), you should definitely consider Ruby. It's the newest of the three and may well live up to its billing as "a better Python than Python, and a better Perl than Perl." Using Ruby for COM scripting is a pleasure. Get Ruby for free from RubyForge.

As you may have guessed, this macro inserts a new paragraph at the start of the active document, and then styles that paragraph as Heading 1. Note: You should also quit Word before running any of these, in order to avoid any conflicts or problems with open documents.

These scripts use Word's Application.Run() method, which takes, as its first argument, the name of a macro to run as a string. (It also accepts optional arguments to pass into the macro, but the code in this article won't use those.) Using Application.Run(), you can run any macro in the document, its template, or any loaded global templates.

In each of the scripts, the filename is expanded to include the full path. Just giving Word the relative filename isn't recommended, because Word's "current" directory isn't necessarily the same as the one you're in when you call it with COM. Note: The code shown below uses the standard Windows distribution for each language. No additional packages or modules are needed. For simplicity, I've left out any error handling, command-line option processing, and even usage messages.
Only Ruby was able to handle standard DOS wildcards without additional code, so it's the simplest of the three--if you don't count comments or white space, this script is just 11 lines long. I've found scripting Word with COM using Ruby much less stressful than with Perl or Python, though your mileage may vary. Ruby is both concise and readable, an unusual combination in a programming language. Save this script as batchmacro.rb in the same folder as your sample documents, as described above.

# batchmacro.rb
# Ruby script for batch running Word macros
require 'win32ole'

# Launch new instance of Word
wrd = WIN32OLE.new('Word.Application')
wrd.Visible = 1

# First argument to script is the name of the macro
macro_to_run = ARGV.shift()

# Everything else is a document on which to run the macro
ARGV.each do |file|
  doc = wrd.Documents.Open(File.expand_path(file))
  wrd.Run(macro_to_run)
  doc.Save()
  doc.Close()
end
wrd.Quit()

To run this script, open up a DOS command line, and navigate to the folder where you've put the script and your sample documents, as described above. At the command line, type the following, and then press Enter:

> ruby batchmacro.rb HelloWorld *.doc

The Perl version is slightly longer, needing code that explicitly handles DOS wildcards. Each argument after the macro name is expanded using the File::DosGlob module, which is included in the ActivePerl distribution from ActiveState. The File::DosGlob::glob function returns an array of filenames (with only one element if there are no wildcards), and each of these in turn is opened by Word. Save this script as batchmacro.pl in the same folder as your sample documents, as described above.
# batchmacro.pl
# Perl script for batch running Word macros
use Win32::OLE;
use File::DosGlob;

# Launch new instance of Word
my $wrd = Win32::OLE->new('Word.Application');
$wrd->{'Visible'} = 1;

# First argument to script is the name of the macro
my $macro_to_run = shift @ARGV;

# Everything else is a document on which to run macro
foreach $arg (@ARGV) {
    # Expand any wildcards that might be in the argument
    foreach $file (File::DosGlob::glob($arg)) {
        my $file_full_name = Win32::GetFullPathName($file);
        my $doc = $wrd->{'Documents'}->Open($file_full_name);
        $wrd->Run($macro_to_run);
        $doc->Save();
        $doc->Close();
    }
}
$wrd->Quit();

To run this script, open up a DOS command line, and then navigate to the folder where you've put the script and your sample documents, as described above. At the command line, type the following, and then press Enter:

> perl batchmacro.pl HelloWorld *.doc

The Python version also needed extra code to explicitly handle DOS wildcards. Each argument after the macro name is expanded using the glob module. The glob.glob function returns a list (Python calls them lists, not arrays) of filenames (with only one element if there are no wildcards), and each of these in turn is opened by Word. Save this script as batchmacro.py in the same folder as your sample documents, as described above.
# batchmacro.py
# Python script for batch running Word macros
import sys
import os
import glob
from win32com.client import Dispatch

# Launch new instance of Word
wrd = Dispatch('Word.Application')
wrd.Visible = 1

# First argument to script is the name of the macro
macro_to_run = sys.argv[1]

# Everything else is a document on which to run macro
for arg in sys.argv[2:]:
    # Expand any wildcards that might be in the argument
    for file in glob.glob(arg):
        doc = wrd.Documents.Open(os.path.abspath(file))
        wrd.Run(macro_to_run)
        doc.Save()
        doc.Close()
wrd.Quit()

> python batchmacro.py HelloWorld *.doc

Of course, There's More Than One Way to Do It, and I'm not presenting these three examples as the only, or even the best, way to do this particular task. In particular, I've tried to avoid shortcuts that might confuse someone unfamiliar with a particular language.

Many of the macros I use in Word display some sort of dialog, if only a simple "Done" box for a macro that takes a while to run. But having to dismiss a dialog after each document sort of defeats the purpose of being able to batch a whole set of documents at once. One solution would be to create two separate macros, one with dialogs, the other meant to run silently. Another option, requiring a lot less code duplication, is to put the main code of your macro in a separate function, and put all the dialogs in a wrapper subroutine. It's the quiet function that you use when batching your files, not the noisy subroutine. For example, the Word template we use at O'Reilly includes a macro that deletes all the comments in a document. As you might imagine, authors, editors, and technical reviewers make extensive use of comments, so accidentally deleting all of them can be catastrophic (or at least a major annoyance). So the macro starts with a confirmation dialog, just in case.
When I get the files, though, all the reviewing is finished, and it's time to blast away any remaining comments, preferably in all the files at once, and without any pesky confirmation dialogs. Here's how I've split up this particular task. The first listing is the function that deletes all of the comments in a given document (assuming the active document if none is specified), with no dialogs at all. The second listing is a wrapper macro that displays a confirmation dialog, and a message announcing when it's finished running. The production versions of these two also include some error handling, which I've left out for simplicity.

Function DeleteAllCommentsInDoc(Optional doc As Document) As Boolean
    Dim iNumberOfComments As Integer
    Dim i As Integer
    If doc Is Nothing Then Set doc = ActiveDocument
    iNumberOfComments = doc.Comments.Count
    For i = iNumberOfComments To 1 Step -1
        doc.Comments(i).Delete
    Next i
    DeleteAllCommentsInDoc = True
End Function

And here's the wrapper macro:

Sub DeleteAllComments()
    Dim doc As Document
    Dim i As Integer
    Dim iNumberOfComments As Integer
    Dim lContinue As Long
    Set doc = ActiveDocument
    iNumberOfComments = doc.Comments.Count
    If iNumberOfComments = 0 Then
        MsgBox "There are no comments in this document.", vbInformation
        Exit Sub
    End If
    If MsgBox("Are you sure you want to delete ALL " & _
            iNumberOfComments & " comments?", _
            vbYesNo) = vbNo Then
        Exit Sub
    End If
    If DeleteAllCommentsInDoc(doc) = True Then
        MsgBox iNumberOfComments & " Comment(s) Deleted.", vbInformation
    End If
End Sub

When someone working within Word wants to delete all the comments, there's plenty of feedback, and an opportunity to cancel.
But now I can also delete all the comments in a group of documents at once, using one of the scripts shown above (I'll use the Ruby version here):

> ruby batchmacro.rb DeleteAllCommentsInDoc ch??.doc

By structuring your macros in two parts like this, you retain valuable feedback features, while adding the flexibility of easy scripting without distracting dialogs. In effect, this gives your Word templates an API that's easily accessible from COM scripts, using the language of your choice.

In November 2004, O'Reilly Media, Inc., released Word Hacks. Andrew Savikas is the VP of Digital Initiatives at O'Reilly Media, and is the Program Chair for the Tools of Change for Publishing Conference.
There are plenty of reasons not to work in assembly:

- It's easier to make errors
- It's more difficult and takes longer to write
- It's more difficult and takes longer to read
- It's more difficult and takes longer to debug
- It's more difficult and takes longer to maintain
- It's more difficult and takes longer to document
- It's not portable between architectures
- The compiler can produce assembly code for you, which runs almost as fast as your hand-optimized assembly code, sometimes faster
- When was the last time you wrote code that was CPU-bound, anyway?

This last note needs some clarification. By CPU-bound, I mean the limiting factor on how fast your program can complete a task is the CPU, rather than disk I/O or network I/O or scheduled timer delays. If your program runs calculations one after the other, never returning control to the operating system via some yielding call to a blocking function, then it's CPU-bound. That's getting rarer these days, unless you're writing serious number-crunching code.

And that's why desktop and embedded software are different. On a desktop PC, I've got a multi-core multi-GHz processor screaming along running my code along with the 97 other processes that are active. My goodness, I can even write in high-level interpreted languages and it's still fast. If it seems a little slower than I'd like, then I measure it with a profiler, find the code that seems to take the most processing power, and see if there's a better algorithm for doing what I want — because I don't know x86 assembly, and even if I did, I'd probably be lucky to speed up a large task by more than maybe 10-15%… versus an algorithmic speedup that cuts down the runtime by 95% because I went from \( O(n^2) \) to \( O(n \log n) \). Or sometimes it's just a matter of which library or OS functions you use: if things are too slow, try a few different methods and see if any are significantly faster.
That doesn’t need any assembly code work, just some experimentation and careful measurement. On an embedded system, things are different. Embedded processors have only a fraction of the computing power of a desktop PC, and there’s economic incentive to use less CPU processing, because it means you can buy a cheaper processor or use less energy (which is crucial in the case of battery-powered systems). When I’m working on motor control software, I have a single-core processor running at 70 million instruction cycles per second, and it has a hard real-time requirement to do a certain set of motor control tasks every 50 microseconds. That means the motor control code needs to finish in less than 3500 instruction cycles. This isn’t a lot of time; if I can shorten things by 35 instruction cycles, that frees up 1% of the CPU. It matters. I said only a few percent of the motor control firmware is written in assembly. My team uses it only when necessary; in fact, in some cases we are migrating from assembly to C for maintainability. But 35 instruction cycles uses up 1% of the CPU — execution time matters — so while I don’t write assembly code very often, I do read assembly code fairly frequently. I look behind the curtain and see what the compiler is doing. There’s an old Russian proverb, Доверяй, но проверяй (Trust, but verify), which was co-opted and made famous by U.S. President Reagan in the 1985-1986 nuclear disarmament talks with the USSR’s General Secretary Gorbachev. I let the compiler do its job rather than try to write assembly code myself… but I do look at the output code. A compiler has only so much freedom to optimize — it must produce output code that correctly implements the specified source code according to the rules of the programming language I’m using. 
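To make the algorithm-beats-micro-optimization point concrete, here is a toy example of my own (not from the original article): summing the integers 0 through n-1 with a loop versus with the closed-form formula. No amount of assembly tuning of the loop competes with simply not looping.

```c
#include <stdint.h>
#include <assert.h>

/* O(n): add up 0 + 1 + ... + (n-1) one term at a time */
uint32_t sum_loop(uint32_t n)
{
    uint32_t total = 0;
    for (uint32_t i = 0; i < n; ++i)
        total += i;
    return total;
}

/* O(1): the same result from the closed form n*(n-1)/2, dividing the even
   factor first so intermediate values stay exact in integer arithmetic */
uint32_t sum_closed_form(uint32_t n)
{
    return (n % 2 == 0) ? (n / 2) * (n - 1) : n * ((n - 1) / 2);
}
```

The same thinking applies to the profiler workflow above: look for a better algorithm first, and only then start counting instructions.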
For example, which of the following functions for summing up an array of 16-bit integers will have a shorter execution time?

int16_t sum1(const int16_t *p, int16_t n)
{
    int16_t total = 0;
    int16_t i;
    for (i = 0; i < n; ++i)
    {
        total += p[i];
    }
    return total;
}

int16_t sum2(const int16_t *p, int16_t n)
{
    int16_t total = 0;
    while (n-- > 0)
    {
        total += *p++;
    }
    return total;
}

We know these functions are equivalent, but the compiler may not be able to deduce that, so one may be faster than the other. (Don't believe me? Keep on reading!) And looking at the compiler's output lets you learn what matters and what doesn't matter. Sometimes a compiler has been programmed with optimizations to handle certain cases; if you are aware of which cases do better, you can help it along. Yes, this represents an investment of time to learn the quirks of a particular compiler running on a particular architecture — but it's a lot less than trying to write everything in assembly by hand. Ready to give it a shot?

The Basics of Looking Behind the Curtain

This is going to be fun! We're going to look at what the compiler does with a really simple function:

#include <stdint.h>

int16_t add(int16_t a, int16_t b)
{
    return a+b;
}

That's so simple, you might think it's not even worth doing… but we can learn from it. Okay, there are several things you need if you want to look at assembly code generated by the compiler:

- A compiler. In this case we'll be using the Microchip XC16 compiler. The features I'm going to show are in the free version; if you want to use the higher optimization levels you'll need to purchase an appropriate license, but you can do a lot with just the basic features in the free version.
- The manual for the compiler. 'Cause we want to know how to use it properly.
- The instruction set reference manual for the architecture we're working with. In this case it's Microchip's 16-bit MCU and DSC architecture.

If you've never looked at assembly code before, it's going to look like gobbledygook, and even if you have, one core architecture's assembly may be very different from another.
The C code I'm going to look at is architecture-independent, so if you're working with another type of microcontroller, you can do exactly the same exploration; you just need the appropriate compiler and reference documents to try it out. Of course, the resulting assembly output will be different on other architectures.

Now we need to compile some code. There are a few options here.

Option 1: The IDE

We can use an integrated development environment (IDE), which usually makes it easy to write and debug a computer program. At work I use the MPLAB X IDE based on NetBeans, and it's helpful, especially for things like debugging and syntax highlighting and refactoring and all that stuff. But for looking at little snippets of code, it's kind of a pain; there's not really an easy way to look at the compiler output unless you compile a whole program, and with an embedded microcontroller, that means putting together a bunch of stuff like a linker file and a project file, which gets in the way. The IDE definitely makes it easier. But here we just want to look at one function, so it's too much baggage, at least for this article.

Option 2: The command line

Okay, here all the UNIX fanboys are going to come out of the woodwork cheering. Hey, I like the command line too, at times… it's just that life is short, and I just don't have enough free brain cells in my working set to remember all of those cryptic arguments like du -sk or chmod a+x or whatever. I'm already upset at how much of my memory is occupied by that junk. It's a necessary evil. But it's simple. It may be cryptic and hard to remember, but it's really simple to follow instructions. Here's what we have to do to use the compiler. We write our function into a file, let's call it add.c, using our favorite editor. I'm not going to tell you mine. Them's the stuff of holy wars, and I stay neutral. We open up a command-line shell in the directory where we put add.c, and type this:

xc16-gcc -S -c add.c

What does this do?
The -c option just runs the compiler, usually converting add.c to an object file (either add.obj or add.o depending on whether you're running on Windows or a UNIX-based OS like a Mac or Linux); if you leave it out, the compiler also tries to link it into a full program, and that will fail because we haven't included a main() function, and we really don't care about making a complete program. We just want to see what our add() function turns into. But if we try to look at the object file add.o / add.obj it's just incomprehensible binary junk, easily readable to a linker or debugger, but not meant for human consumption. The -S option changes things, and produces an assembly file, add.s, instead of the object file. We open add.s in our favorite editor and look at it. Here it is:

.file "/Users/jmsachs/link/embedded-blog/code-samples/add.c"
    .section .text,code
    .align 2
    .global _add ; export
    .type _add,@function
_add:
    .set ___PA___,1
    lnk #4
    mov w0,[w14]
    mov w1,[w14+2]
    mov [w14+2],w0
    add w0,[w14],w0
    ulnk
    return
    .set ___PA___,0
    .size _add, .-_add
    .section __c30_signature, info, data
    .word 0x0001
    .word 0x0000
    .word 0x0000
; MCHP configuration words
    .set ___PA___,0
    .end

Okay, now which are the important bits? Assembly language for Microchip processors is fairly similar to most others, and contains four elements:

- Comments (which in this case start with a semicolon), which the assembler ignores
- Directives (which start with a ., like .word 0x0000), which tell the assembler some auxiliary information
- Labels (which start at the beginning of a line and end in a colon :, like _add:), which represent a reference to a point in the program
- Instructions (everything else)

If we ignore all the comments and directives we see this:

_add:
    lnk #4
    mov w0,[w14]
    mov w1,[w14+2]
    mov [w14+2],w0
    add w0,[w14],w0
    ulnk
    return

And that's our add() function in 16-bit dsPIC assembly! 7 instructions! What does it mean? Well, before we get into that, let's look at another option.
Option 3: Instant gratification with pyxc16

I used to use options 1 or 2 when I needed to look at compiler output. And it was tiring, and error-prone, because often I'd edit some C code, go to compile it, get distracted by something, then forget whether I actually ran the compiler, then compile it again, and flip back and forth between the source and the assembly. Oh, and actually finding the right portion of assembly was sometimes a challenge, because of all that other junk that shows up in an assembly file. We just saw 7 instructions of code, and a label. That's 8 lines. But the add.s file contains 26 lines. The directives obscure the contents. Larger C code inputs produce a lot more output.

So I got fed up, and wrote a quick Python module, pyxc16, which I have put up on my Bitbucket account for anyone to use. Essentially it takes care of running the compiler, fetching the output, and filtering out the stuff in the assembler file that's not important. Here's how to run it. You just have to make sure there's an XC16_DIR environment variable that points to the root (not the bin directory) of the compiler installation. On my Mac that's in /Applications/microchip/xc16/v1.25/. Then you give it a string (or a filename, if you must) and whatever command-line arguments you want to include:

import pyxc16
pyxc16.compile('''
#include <stdint.h>

int16_t add(int16_t a, int16_t b)
{
    return a+b;
}
''', '-c')

_add:
    lnk #4
    mov w0,[w14]
    mov w1,[w14+2]
    mov [w14+2],w0
    add w0,[w14],w0
    ulnk
    return

Tada! Instant gratification! The pyxc16.compile() function saves your input as sample.c and compiles it with the -S option. Now I can use IPython Notebook — yeah, I know, it's Jupyter Notebook now, but the name just doesn't roll off the tongue — and write my blog on my Chromebook laptop while sitting on my comfy couch, compiling C snippets with XC16.
Whereas my Mac, which actually runs IPython and XC16, is relegated to an inconvenient spot with a marginally acceptable desk and chair from IKEA that I don't really like sitting at. The chair is a modern-but-cheap birchwood thing called JULES, with swivel casters that fall off every once in a while, and I can't remember the desk's IKEA name, and yes, I should make sitting at my computer at home more comfortable, but maybe that's the point — if it were more comfortable, I would spend more time there. Anyway, we were talking about the assembly code produced by XC16, given our add() function.

Dissecting assembly code, perhaps with the help of Uncle Eddie

What is going on here? If you look at the programmer's reference manual, you'll see that the LNK instruction — when I refer to an instruction mnemonic without operands, I'll be showing it in upper case; when it's in a program used with specific operands, it will be in lower case — allocates a stack frame of a certain number of bytes, used for temporary variables. MOV copies data from its first operand to its second operand, ADD adds its operands, ULNK deallocates a stack frame (the inverse of LNK), and RETURN jumps back to the calling program. I'm not going to dwell on the details of dsPIC assembly instructions (you'll have to read the programmer's reference manual), but here's an annotated version:

_add:
    lnk #4          ; allocate a stack frame of 4 bytes
    mov w0,[w14]    ; copy what's in register W0 to offset 0 relative to W14
                    ; (the stack frame pointer), aka FP[0]
    mov w1,[w14+2]  ; copy what's in W1 to FP[2]
    mov [w14+2],w0  ; copy what's in FP[2] to W0
    add w0,[w14],w0 ; add W0 to FP[0], and put the result into W0
    ulnk            ; deallocate the stack frame
    return          ; return to caller

That's kind of stupid… why does the compiler do all that shuffling-around of data? The default mode of the compiler is to operate in its -O0 mode, eschewing essentially all attempts at optimization. Here's what the XC16 manual says about -O0:

Do not optimize.
(This is the default.) Without -O, the compiler's goal is to reduce the cost of compilation and to make debugging produce the expected results. The compiler only allocates variables declared register in registers.

It's essentially there to make the debug tool's job very easy, and to allow breakpoints on all lines of C code with all variables available on the stack. As far as "reducing the cost of compilation" goes, I've never noticed any serious slowdown at other optimization levels, but who knows. If you care about efficiency and need to speed up the execution, you'll want to use one of the other optimization options. The -O1 option is the next step up, and does a pretty decent job; the other optimization levels are higher performance but aren't present in the free version of the compiler. Here's what our add() function looks like with -O1:

pyxc16.compile('''
#include <stdint.h>

int16_t add(int16_t a, int16_t b)
{
    return a+b;
}
''', '-c', '-O1')

_add:
    add w1,w0,w0
    return

And now we're down to two instructions, an ADD and a RETURN. Yay! Much better. In fact, this is optimal for a function call.

Something else that's useful to know is the -fverbose-asm option, which adds a kind of color commentary to the assembly code. It's kind of like having your paranoid schizophrenic uncle Eddie tell you about the wolves and drug dealers watching him behind the grocery store shelves. He includes things that aren't there… but… well, you just don't see what he's seeing, and maybe that's your loss.

pyxc16.compile('''
#include <stdint.h>

int16_t add(int16_t a, int16_t b)
{
    return a+b;
}
''', '-c', '-O1', '-fverbose-asm')

_add:
    add w1,w0,w0 ; b, a, tmp46
    return

Uncle Eddie is telling you… I mean XC16 is telling you that it implements add(a, b) with b in the W1 register and a in the W0 register, and then the result of the ADD instruction, that gets put into W0 — well, that's easy, that's just tmp46. Wait, what's tmp46?
Let's see another instance of -fverbose-asm:

pyxc16.compile('''
#include <stdint.h>

int16_t f1(int16_t x, int16_t y)
{
    return x*y;
}

int16_t cross_product(const int16_t *pa, const int16_t *pb)
{
    return f1(pa[0], pb[1]) - f1(pb[0], pa[1]);
}
''', '-c', '-O1', '-fverbose-asm')

_f1:
    mul.ss w1,w0,w0 ; y, x, tmp47
    return
_cross_product:
    mov.d w8,[w15++] ;
    mov w10,[w15++] ;,
    mov w0,w9 ; pa, pa
    mov w1,w8 ; pb, pb
    mov [w8+2],w1 ;, tmp54
    mov [w9],w0 ;* pa,
    rcall _f1 ;
    mov w0,w10 ;, D.2032
    mov [w9+2],w1 ;, tmp55
    mov [w8],w0 ;* pb,
    rcall _f1 ;
    sub w10,w0,w0 ; D.2032, D.2036, tmp56
    mov [--w15],w10 ;,
    mov.d [--w15],w8 ;,
    return

Now we're looking at tmp47 and D.2032 and other imaginary friends. Huh. Presumably these are internal names that the compiler uses for intermediate results, but it's too bad they aren't annotated a little better; in this example, except for D.2032, all of them appear only once and it doesn't seem to shed much light on what's going on. I won't be using -fverbose-asm in the rest of this article.

A Quick Tour of Compiled Snippets

Okay, here are some common features of C, along with the compiler's generated output. You can learn a lot about the kinds of tricks the compiler uses behind the scenes to make them work.

Branches

pyxc16.compile('''
#include <stdint.h>
#include <stdbool.h>

/* if-statement */
int16_t add(bool a, int16_t x, int16_t y)
{
    if (a)
    {
        return x+y;
    }
    else
    {
        return x-y;
    }
}

/* equivalent result using the conditional operator */
int16_t add2(bool a, int16_t x, int16_t y)
{
    return (a) ? x+y : x-y;
}

int16_t add3(bool a, int16_t x, int16_t y)
{
    return x + ((a) ? y : -y);
}

int16_t add4(bool a, int16_t x, int16_t y)
{
    return x - ((a) ? -y : y);
}
''', '-c', '-O1')

_add:
    cp0.b w0
    bra z,.L2
    add w2,w1,w0
    bra .L3
.L2:
    sub w1,w2,w0
.L3:
    return
_add2:
    cp0.b w0
    bra z,.L5
    add w2,w1,w0
    bra .L6
.L5:
    sub w1,w2,w0
.L6:
    return
_add3:
    cp0.b w0
    bra nz,.L8
    neg w2,w2
.L8:
    add w2,w1,w0
    return
_add4:
    cp0.b w0
    bra z,.L10
    neg w2,w2
.L10:
    sub w1,w2,w0
    return

The CP0 instruction compares a register with 0; the BRA instruction does a conditional or unconditional branch. Calls, jumps, and branches on the dsPIC architecture, if taken, require more time to execute; part or all of the instruction pipeline must be flushed because of a branch hazard. On the dsPIC30F/33F and PIC24F/24H devices, the branch takes 1 extra cycle; on the dsPIC33E and PIC24E devices, the branch takes 3 extra cycles.

The compiler generates its own labels — here we have .L5, .L6, .L8, and .L10 — for use as branch and jump targets; these don't conflict with the names used from C functions. A C function's start label gets an underscore added at the beginning; the autogenerated labels all start with a .L followed by a number. If you use debug mode -g when compiling, there are a whole bunch of other labels starting with .LSM and .LFB, but pyxc16 strips those out since they're never actually used as branch and jump targets.

Note that there is no difference between the use of an if statement and an equivalent behavior using the ? : conditional operator. The third and fourth functions add3() and add4() are interesting… all four of these functions are equivalent, but we've rewritten the implementation in C, and in this case the compiler is able to implement it more efficiently, taking fewer instructions for one of the branches. The compiler can't figure this out on its own — it's not omniscient. We also can't tell the compiler directly which branch we want to make shorter.
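The four variants really are interchangeable; here is a quick host-side check of my own (function names changed to avoid clashing with the listing) that brute-forces agreement over a grid of in-range inputs:

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* The four equivalent implementations from the listing above */
static int16_t f_if(bool a, int16_t x, int16_t y)   { if (a) { return x+y; } else { return x-y; } }
static int16_t f_cond(bool a, int16_t x, int16_t y) { return (a) ? x+y : x-y; }
static int16_t f_neg1(bool a, int16_t x, int16_t y) { return x + ((a) ? y : -y); }
static int16_t f_neg2(bool a, int16_t x, int16_t y) { return x - ((a) ? -y : y); }

/* Brute-force agreement check; values are kept small enough that x+y and
   x-y never overflow the int16_t range. */
bool variants_agree(void)
{
    const int16_t vals[] = { -16000, -5, -1, 0, 1, 5, 16000 };
    const int n = sizeof vals / sizeof vals[0];
    for (int ai = 0; ai <= 1; ++ai)
        for (int xi = 0; xi < n; ++xi)
            for (int yi = 0; yi < n; ++yi) {
                bool a = (ai != 0);
                int16_t x = vals[xi], y = vals[yi];
                int16_t r = f_if(a, x, y);
                if (f_cond(a, x, y) != r || f_neg1(a, x, y) != r || f_neg2(a, x, y) != r)
                    return false;
            }
    return true;
}
```

A check like this is cheap insurance when you start rewriting C to coax better code out of the compiler: the rewrite is only a win if it still computes the same thing.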
On the 33F devices, both branches take the same amount of time; on the 33E series of devices, the add3() function takes longer if a is true (6 cycles not including the RETURN function if the branch is taken, 4 cycles not including the RETURN function if the branch is not taken), and the add4() function has the opposite behavior for execution time. Looking at the compiler's output can give you information to decide how to coax the compiler into doing things more efficiently… but there's no solid guarantee of this. Upgrade to a new version of the compiler? The behavior might change. Use different compiler flags? The behavior might change. Tweak the C code? The behavior might change. It's not specified as part of the C standard. If you really want a specific assembly implementation, you either have to write it yourself in assembly, or double-check the result of the compiler each time you compile a different input file.

Don't be a control freak! Learning how the compiler behaves in various conditions can help you work better with it, but don't micromanage, except in cases where you have critical execution time requirements, like a function that is called thousands or tens of thousands of times per second.

Loops

pyxc16.compile('''
#include <stdint.h>

/* Compute the nth triangular number, namely
 * 0 + 1 + 2 + 3 + ... (n-1) + n.
 */
int16_t trisum1(int16_t n)
{
    int16_t result = 0;
    while (n > 0)
    {
        result += n;
        --n;
    }
    return result;
}

int16_t trisum2(int16_t n)
{
    int16_t result = 0;
    int16_t i;
    for (i = 1; i <= n; ++i)
    {
        result += i;
    }
    return result;
}

void wait_forever()
{
    while (1)
        ;
}
''', '-c', '-O1')

_trisum1:
    clr w1
    cp0 w0
    bra le,.L2
.L3:
    add w1,w0,w1
    dec w0,w0
    bra nz,.L3
.L2:
    mov w1,w0
    return
_trisum2:
    mov w0,w2
    clr w0
    cp0 w2
    bra le,.L7
    mov #1,w1
.L8:
    add w0,w1,w0
    inc w1,w1
    sub w2,w1,[w15]
    bra ge,.L8
.L7:
    return
_wait_forever:
.L12:
    bra .L12

Not much to add here; a loop is really just a branch backwards to repeat something.
Function calls

pyxc16.compile('''
#include <stdint.h>

int16_t foo(int16_t n)
{
    return n+1;
}

int16_t bar(int16_t a)
{
    return foo(a+1)-1;
}
''', '-c', '-O1')

_foo:
    inc     w0,w0
    return
_bar:
    inc     w0,w0
    rcall   _foo
    dec     w0,w0
    return

The RCALL instruction does a relative call (destination address within -32768 and +32767 words of the location of the RCALL instruction), which takes the same amount of execution time as a CALL instruction (which has no such range restriction), but the code size of an RCALL is 1 word and the code size of a CALL is 2 words. The use of RCALL puts a constraint on where the linker will locate the foo and bar functions.

Switch statements

pyxc16.compile('''
#include <stdint.h>

int16_t switch_example1(int16_t n)
{
    switch (n)
    {
        case 0: return 3;
        case 1: return n;
        case 2: return -n;
        case 3: return 44;
        default: return 78;
    }
}

int16_t switch_example2(int16_t n, int16_t x)
{
    switch (n)
    {
        case 0: return 3;
        case 1: return x;
        case 2: return -x;
        case 5: return 12;
        case 6: return 9;
        case 7: return 45;
        case 8: return 40;
        case 9: return 4;
        case 14: return x+1;
        case 23: return 12345;
        default: return x-1;
    }
}
''', '-c', '-O1')

_switch_example1:
    sub     w0,#1,[w15]
    bra     z,.L4
    bra     gt,.L7
    cp0     w0
    bra     z,.L3
    bra     .L2
.L7:
    sub     w0,#2,[w15]
    bra     z,.L5
    sub     w0,#3,[w15]
    bra     nz,.L2
    bra     .L8
.L3:
    mov     #3,w0
    bra     .L4
.L5:
    mov     #-2,w0
    bra     .L4
.L8:
    mov     #44,w0
    bra     .L4
.L2:
    mov     #78,w0
.L4:
    return
_switch_example2:
    mul.su  w0,#1,w2
    sub     w2,#23,[w15]
    subb    w3,#0,[w15]
    bra     gtu,.L10
    bra     w2
    bra     .L11
    bra     .L12
    bra     .L13
    bra     .L10
    bra     .L10
    bra     .L14
    bra     .L15
    bra     .L16
    bra     .L17
    bra     .L18
    bra     .L10
    bra     .L10
    bra     .L10
    bra     .L10
    bra     .L19
    bra     .L10
    bra     .L10
    bra     .L10
    bra     .L10
    bra     .L10
    bra     .L10
    bra     .L10
    bra     .L10
    bra     .L20
.L11:
    mov     #3,w1
    bra     .L12
.L13:
    neg     w1,w1
    bra     .L12
.L14:
    mov     #12,w1
    bra     .L12
.L15:
    mov     #9,w1
    bra     .L12
.L16:
    mov     #45,w1
    bra     .L12
.L17:
    mov     #40,w1
    bra     .L12
.L18:
    mov     #4,w1
    bra     .L12
.L19:
    inc     w1,w1
    bra     .L12
.L20:
    mov     #12345,w1
    bra     .L12
.L10:
    dec     w1,w1
.L12:
    mov     w1,w0
    return

The compiler has two strategies for handling switch
statements. One of them (as in switch_example1) is a decision tree; the other (as in switch_example2) is a jump table. We've already seen the case of plain branches, but the jump table is new. The bra w2 instruction does a computed relative branch, and the series of BRA instructions that follow, to labels .L10 through .L20, causes the program to springboard off to the appropriate spot in the code for each case.

Which choice gets used? The compiler has a heuristic that tries to optimize code size and execution time. If the number of case statements is small enough, the compiler uses a decision tree. If the number of case statements is large enough and the jump table implementation is small enough, the compiler uses a jump table. If a jump table would be too large, the compiler goes back to a decision tree.

Miscellany

for opt_level in ['-O0','-O1']:
    print '*****',opt_level,'*****'
    pyxc16.compile('''
#include <stdint.h>
#include <stdbool.h>

bool check_range(int16_t n)
{
    return n > 33 && n < 48;
}
''', '-c', opt_level)

***** -O0 *****
_check_range:
    lnk     #2
    mov     w0,[w14]
    mov     #33,w0
    mov     [w14],w1
    sub     w1,w0,[w15]
    bra     le,.L2
    mov     #47,w0
    mov     [w14],w1
    sub     w1,w0,[w15]
    bra     gt,.L2
    mov     #1,w0
    bra     .L3
.L2:
    clr     w0
.L3:
    mov.b   w0,w0
    ulnk
    return
***** -O1 *****
_check_range:
    mov     #-34,w1
    add     w1,w0,w1
    mov.b   #1,w0
    sub     w1,#13,[w15]
    bra     leu,.L2
    clr.b   w0
.L2:
    return

Now, that's interesting. The -O0 version has two conditional branches, one for each of the two arithmetic comparisons (n > 33 and n < 48). The -O1 version manages to do this with one conditional branch: first by subtracting 34 (shifting the 34-47 range down to the 0-13 range), then by subtracting 13 and using the BRA LEU conditional branch, which branches if the previous arithmetic result produces a less-than-or-equal comparison in an unsigned sense (more precisely, if the carry flag is false or the zero flag is true). The compiler is full of little tricks like this.
Unfortunately, it's really difficult to know which cases it's been trained to optimize, and which ones it hasn't. (This is one reason to look at your compiler's output every once in a while!)

Now let's look at the teaser code I posted at the beginning of this article to sum up elements of an array:

pyxc16.compile(''' #include <stdint.h>; } ''', '-c', '-O1')

_sum1:
    mov     w0,w2
    mov     w1,w5
    mul.uu  w0,#0,w0
    cp0     w5
    bra     le,.L2
    clr     w3
.L3:
    mov     [w2++],w4
    asr     w4,#15,w6
    add     w4,w0,w0
    addc    w6,w1,w1
    inc     w3,w3
    sub     w3,w5,[w15]
    bra     nz,.L3
.L2:
    return
_sum2:
    mov     w0,w2
    mul.uu  w4,#0,w4
    cp0     w1
    bra     le,.L7
    dec     w1,w1
.L8:
    mov     [w2++],w0
    asr     w0,#15,w3
    add     w0,w4,w4
    addc    w3,w5,w5
    dec     w1,w1
    add     w1,#1,[w15]
    bra     nz,.L8
.L7:
    mov.d   w4,w0
    return

The answer is that they take almost exactly the same time! The loops are the same except that one uses a DEC and ADD and the other uses an INC and SUB. There are the same number of loop iterations. In sum1() we have a 6-cycle prologue and no epilogue; in sum2() we have a 5-cycle prologue and a 2-cycle epilogue (MOV.D takes two cycles), not including the RETURN. So sum1() is one instruction cycle faster.

The compiler does something pretty fancy in sum1(): it recognizes an array index items[i] with an incrementing index in a loop, and turns it into the equivalent *items++ access, using the mov [w2++],w4 instruction. Wow!

There are three other little things to notice here in both functions.

- mul.uu w4,#0,w4: This is an easy 1-cycle way to clear a pair of registers used as a 32-bit value; we're writing 0 to W4 and W5 in one cycle.
- asr w0,#15,w3: This is a sign-extension operation. If W0 is negative, we put 0xFFFF into W3; otherwise we put 0x0000 into W3, and then the pair W3:W0 forms a 32-bit value equal to W0 cast to a 32-bit signed integer.
- add w1,#1,[w15]: This is an odd little line of assembly that confused me at first... what's in [W15]? It's not used anywhere else in the code.
Well — that's actually the answer: we execute an ADD instruction and throw the result away by writing it to the top of the stack (W15 is the stack pointer), which will be overwritten the next time any part of the program executes a PUSH or a CALL or a LNK instruction. It's an easy way to throw a result somewhere without having to overwrite one of the registers that we care about. Why would we do this? Because all we need are the arithmetic status flags, which are used in the conditional branch instruction on the next line.

Let's do a similar operation, this time adding four fixed values in an array:

pyxc16.compile('''
#include <stdint.h>

/* sum up 4 numbers in an array */
int32_t foursum1(const int16_t *items)
{
    int32_t S = 0;
    S += items[0];
    S += items[1];
    S += items[2];
    S += items[3];
    return S;
}

int32_t foursum2(const int16_t *items)
{
    int32_t S = 0;
    S += *items++;
    S += *items++;
    S += *items++;
    S += *items;
    return S;
}

int32_t foursum3(const int16_t *items)
{
    int32_t S = 0;
    S += *items;
    S += *++items;
    S += *++items;
    S += *++items;
    return S;
}
''', '-c', '-O1')

_foursum1:
    mov     [w0+2],w1
    mov     [w0+4],w2
    asr     w2,#15,w3
    asr     w1,#15,w4
    add     w1,w2,w2
    addc    w4,w3,w3
    mov     [w0],w1
    asr     w1,#15,w4
    add     w1,w2,w2
    addc    w4,w3,w3
    mov     [w0+6],w0
    asr     w0,#15,w4
    add     w0,w2,w0
    addc    w4,w3,w1
    return
_foursum2:
_foursum3:

This time the compiler doesn't change the array index access into a postincremented pointer — the value of W0 stays the same through most of the foursum1() function. In foursum1(), the compiler uses array indices and takes 14 cycles plus a RETURN. In the second function foursum2(), oddly enough, the compiler seems to get a little confused and transforms the postincrementing pointers into two preincrementing pointers, a plain indirect access, and an indirect-with-offset access (mov [w0+2],w0). Here we take 16 cycles plus a RETURN, but only because the compiler does this odd dance at the beginning where it writes W0 to W1 and then back to W0.
Sometimes things like that happen; Uncle Eddie comes by and makes some unnecessary motions just to be sure those wolves at the grocery store can't hear him. The third implementation, foursum3(), is exactly the same as foursum2(), instruction-for-instruction, even though we switched our use of pointers in C from postincrement to preincrement. The compiler can recognize what's being computed as equivalent, and chooses an assembly implementation according to its heuristics. Go figure.

Summary

We've looked at a handful of simple C functions, and examined the Microchip XC16 compiler's output. The basic C control structures are fairly simple, and result in various conditional or unconditional branches or subroutine call instructions.

You can learn a lot from checking the C compiler's output, no matter what CPU architecture or compiler you're using. This is a strategy that can be used, if necessary, to determine which types of C program implementations have more efficient implementations in assembly.

Be careful: the exact mechanisms for optimization of assembly code generation are internal undocumented details of the compiler, and may have brittle dependencies on compiler flags, C program input, the compiler version, the day of the week, or the phase of the moon. There's no iron-clad guarantee that the compiler will optimize your program the same way in multiple compilation runs. From a practical standpoint, however, the compiler's behavior is fairly consistent, and can help guide you towards using it to achieve good program performance.

I use this technique often on critical execution tasks that run thousands of times per second on a dsPIC microcontroller. I don't use it on code that runs infrequently; it's not worth it to optimize what doesn't make much difference in execution time.
I’ve also posted a link to my open-source pyxc16 repository, which is a simple Python module for making it easy in Python to compile C code and view the results from the Microchip XC16 compiler. Happy compiling! Previous post by Jason Sachs: How to Read a Power MOSFET Datasheet Next post by Jason Sachs: The Dilemma of Unwritten Requirements -.
https://www.embeddedrelated.com/showarticle/813.php
This is part II of the "Writing a game in Python with Pygame" tutorial.

Welcome back

In the first part of this tutorial we've created a simple simulation of "Creeps" - round creatures moving around the screen and bouncing off walls. Not much of a game there, but a good start nonetheless. In this part, we are going to extend this simulation, making it much more game-like. It is not the final step, of course. The final product of this part is still going to be far from a real, interesting game, but many useful game programming concepts will be introduced, and the simulation will definitely have much more feeling of a game in it.

Here's a teaser screenshot of the final product of this part:

The code

The full code for this part can be downloaded from here. As before, it is highly recommended to download it, run it and have it open in the editor while reading this tutorial.

Goals for this part

In this part I'm going to cover:

- A prettier background for the game
- Responding to user events
- A more complex internal state for the creeps
- Simple animation
- Rendering text

So let's get started.

Background

In the first part we've just splashed a bucket of greenish swamp-like color onto the screen and called that a background. Since we want the game to be a bit more appealing, this won't do any longer. We'll now tile a pretty background image onto the screen, and create a bounded "game field" for the creeps to roam in.

What is tiling, you ask? Tiling, in simple terms, is taking a small surface and repeating it in a pattern until a larger surface is covered. In our case, we'll take this image:

And tile it in a simple repeating-row pattern.
The code doing it is in the draw_background function:

def draw_background(screen, tile_img, field_rect):
    img_rect = tile_img.get_rect()
    nrows = int(screen.get_height() / img_rect.height) + 1
    ncols = int(screen.get_width() / img_rect.width) + 1

    for y in range(nrows):
        for x in range(ncols):
            img_rect.topleft = (x * img_rect.width,
                                y * img_rect.height)
            screen.blit(tile_img, img_rect)

    field_color = (109, 41, 1)
    draw_rimmed_box(screen, field_rect, field_color, 4, Color('black'))

The loop does the tiling. The last line of the function creates the playing field - a dark-brown filled rectangle to which the creeps will be restricted. The field is drawn using the utility function draw_rimmed_box - it's very simple, and you can study it on your own [1].

In the game screenshot above you can also see another box with some text on the right. This is drawn separately, and we'll get to it soon enough.

Hey, you've clicked me!

So far the only user event our game has responded to was closing the game window. Not much interaction there, so we'll pump it up. Here's the new event handler in our main loop:

for event in pygame.event.get():
    if event.type == pygame.QUIT:
        exit_game()
    elif event.type == pygame.KEYDOWN:
        if event.key == pygame.K_SPACE:
            paused = not paused
    elif (event.type == pygame.MOUSEBUTTONDOWN and
          pygame.mouse.get_pressed()[0]):
        for creep in creeps:
            creep.mouse_click_event(pygame.mouse.get_pos())

A couple of things were added. First, there's a handler for the user's pressing the space key on the keyboard. This flips the "paused" state of the game - try it now. The second handler is only slightly more complex. When a left mouse button is clicked inside the application, each creep gets its mouse_click_event method called with the mouse click coordinates. The idea is simple: creeps have health, and we can decrease their health by successfully clicking on them.
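The idea of a pixel-accurate click test can be sketched without Pygame at all; here the "image" is just a list of rows of alpha values, a stand-in for Surface pixel access, and every name in this snippet is invented for illustration:

```python
# Stand-in for pixel-accurate hit testing: alpha_rows plays the role of
# the creep's image surface, img_pos is where its top-left corner sits
# on screen.  (All names here are invented; the real creep code uses
# Pygame surfaces and vec2d instead.)
def point_is_inside(alpha_rows, pos, img_pos):
    x = pos[0] - img_pos[0]
    y = pos[1] - img_pos[1]
    if 0 <= y < len(alpha_rows) and 0 <= x < len(alpha_rows[y]):
        # Solid only where the alpha channel is positive.
        return alpha_rows[y][x] > 0
    return False  # outside the bounding box entirely

# A tiny 3x3 "creep": transparent corners, solid cross.
sprite = [
    [0, 255, 0],
    [255, 255, 255],
    [0, 255, 0],
]

print(point_is_inside(sprite, (11, 10), (10, 10)))  # True: on the cross
print(point_is_inside(sprite, (10, 10), (10, 10)))  # False: transparent corner
```

The actual creep code implements the same check against its real image, as shown next.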
A health bar is drawn above each creep showing its health as a proportion of red to green (click on some creeps to see their health decrease). The implementation is also quite simple. Here's the mouse click handler of a creep:

def mouse_click_event(self, pos):
    """ The mouse was clicked in pos.
    """
    if self._point_is_inside(vec2d(pos)):
        self._decrease_health(3)

You see that when the click was found to be inside the creep, its health is decreased. Let's see how the click inside the creep is detected:

def _point_is_inside(self, point):
    """ Is the point (given as a vec2d) inside
        our creep's body?
    """
    img_point = point - vec2d(
        int(self.pos.x - self.image_w / 2),
        int(self.pos.y - self.image_h / 2))

    try:
        pix = self.image.get_at(img_point)
        return pix[3] > 0
    except IndexError:
        return False

This method detects if the click is inside the creep. More specifically, inside the solid area of the creep's image. Clicking inside the creep's bounding box but outside its body won't result in True. Here's how it's done: First, the click point is recomputed to be relative to the creep's image. If the point isn't inside its image on the screen, there's nothing to talk about. If it is inside, we still don't know if it's in the solid region. For this purpose, the pixel at the point of the click is examined. If the alpha constituent of the point is positive, this is part of the creep's body. Otherwise, it's just part of its bounding box but outside the body (see [2]).

Drawing the health bars is very simple, and you should be able to understand the code of the draw method (which replaces blitme from the code of part I) that implements this.

Simple animation

What happens when the creep's health goes down to 0? I hope you've already experimented with the game and saw it, but if you didn't, here's a screenshot:

If you've played with the game, however, you've surely noticed that the explosion the creep undergoes is animated - it's changing with time. What is an animation?
In its simplest form, it is a sequence of images that are drawn one after another in the same location, creating the appearance of movement. It's not unlike our creeps moving on the screen (you can see the whole game as an animation, really), but for the sake of this part I'm specifically referring to a static animation that stays in the same place.

The animation is implemented in the module simpleanimation.py, which you can find in the downloaded code package. You can experiment with it by running it standalone (the module uses Python's if __name__ == "__main__" feature to allow stand-alone running). The code should be very simple to understand, because there's nothing much to it. The SimpleAnimation class receives a list of image objects and draws them to the screen with the given period and duration. Note how the explosion is simulated by taking the same image, rotating it by 90 degrees and using SimpleAnimation to change the two in rapid succession.

Back in creeps.py, our creep uses SimpleAnimation to show its own explosion after its health has reached 0:

def _explode(self):
    """ Starts the explosion animation that ends the Creep's life.
    """
    self.state = Creep.EXPLODING

    pos = (self.pos.x - self.explosion_images[0].get_width() / 2,
           self.pos.y - self.explosion_images[0].get_height() / 2)
    self.explode_animation = SimpleAnimation(
        self.screen, pos, self.explosion_images, 100, 300)

It's very straightforward, really.

Creep state

The creeps of this part are much more complex than those of part I. They have health which can decrease, and they can explode and disappear. To manage this complexity, we're going to use a state machine [3].

What states can the creep be in? A normal state, when the creep is roaming around; an exploding state, in which the creep is replaced by the explosion animation; and an inactive state, in which the creep no longer functions.
These are coded as follows:

(ALIVE, EXPLODING, DEAD) = range(3)

See the code of update for state management - the creep is updated differently, depending on which state it's in. The same is true for the draw method. It's a good idea now to search for self.state throughout the code, taking note of where the state is modified, and where it is used.

Displaying textual information

When you run the game (or in the large screenshot at the top of this part), you'll see a simple scoreboard in the top right corner of the screen. It counts the amount of active creeps on the screen, and will also display an exciting message when you've killed all the creeps. This display is implemented in the function draw_messageboard - study its code now, it should be quite simple to understand in conjunction with the docs.

Sprites and sprite Groups

I hope you've noticed that in both parts of the tutorial, the Creep class derives from pygame.sprite.Sprite. Sprite is a utility class of Pygame that implements some useful common methods for managing the animated images that represent the actors of the game (known in game programming jargon as sprites). In the first part I didn't use any of its capabilities at all. Here, I'm using its capability of being collected into sprite Groups.

The list of creeps in the main function has now turned into a sprite group. The cool thing is that whenever a sprite is added to a group, it keeps track of which groups it's in, so calling self.kill() in a sprite causes it to be removed from all the groups and thus from the game. The update method of Creep calls kill() when the explosion has ended. This way, the main loop doesn't have to explicitly keep track of which sprites are active in the group - they do it themselves.

That said, I'm still not sure I'm going to use the full capabilities of Sprites. For me, they're just a guideline, not a must. Perhaps I'll find out later that my code can be structured better with them. Or perhaps I'll see I don't need them at all. We'll see [4].
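The whole lifecycle can be boiled down to a Pygame-free sketch. The damage per click and the tick count here are made up for brevity; the real Creep decreases health by 3 per click and uses the animation's duration instead of a tick counter:

```python
# Minimal model of the three-state creep lifecycle described above.
(ALIVE, EXPLODING, DEAD) = range(3)

class MiniCreep:
    def __init__(self, health):
        self.state = ALIVE
        self.health = health
        self.explosion_ticks = 0

    def mouse_click_event(self):
        if self.state != ALIVE:
            return  # clicks are ignored once the creep starts exploding
        self.health -= 1
        if self.health <= 0:
            self.state = EXPLODING
            self.explosion_ticks = 3  # stand-in for the animation duration

    def update(self):
        # Like Creep.update(): behavior depends on the current state.
        if self.state == EXPLODING:
            self.explosion_ticks -= 1
            if self.explosion_ticks <= 0:
                self.state = DEAD  # where the real code calls self.kill()

creep = MiniCreep(health=2)
creep.mouse_click_event()
creep.mouse_click_event()
print(creep.state == EXPLODING)  # True
for _ in range(3):
    creep.update()
print(creep.state == DEAD)  # True
```

Keeping each state's behavior in its own branch of update and draw is what makes the real class easy to extend later.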
Or perhaps I'll see I don't need them at all. We'll see [4]. Wrapping up All the goals stated in the beginning of this part were achieved, so that's it, for now. We've turned the simplistic creeps simulation into something resembling a rudimentary game. True, it's more likely to pave your way to severe RSI than cause you any serious fun, but it's a simple game nonetheless. Not bad for just 450 lines of Python code! In future parts of this tutorial, we'll continue developing the code on our way to a real game, so stay tuned. Oh, and give the exercises a go, I guarantee you they will make your understanding of the material much deeper. Exercises - Try to pause the game (press SPACE), click a couple of times on a creep, and resume the game. Notice that the creep's health decreased? Try to fix this, i.e. block mouse events during the pause. - Can you see that when a creep is facing diagonally, his health bar is a bit farther from his body than when he's facing up or to the side? Can you figure out why? (Hint: re-read the section of part I dealing with the size of the rotated image). Propose ways to fix this. - Review the code of the _explode method. Note the complex computation of the explosion's position. Why is it needed? Try to modify it (for example, removing the consideration of the explosion image's size) and observe the difference. - Add a running clock to the scoreboard. It should begin at 00:00 and advance by 1 each second. When the game ends, the clock should stop. - Set the creeps' speed to higher than the default and attempt to catch them. It's quite challenging!
http://eli.thegreenplace.net/2008/12/20/writing-a-game-in-python-with-pygame-part-ii/
Write facade types for JavaScript APIs

When writing an application with Scala.js, it is expected that the main application logic be written in Scala.js, and that existing JavaScript libraries are leveraged. Calling JavaScript from Scala.js is therefore the most important direction of interoperability.

Facade types are zero-overhead typed APIs for JavaScript libraries. They are similar in spirit to TypeScript type definitions.

Defining JavaScript interfaces with native JS traits

Most JavaScript APIs work with interfaces that are defined structurally. In Scala.js, the corresponding concept is traits. To mark a trait as being a representative of a JavaScript API, it must inherit directly or indirectly from js.Any (usually from js.Object).

JS traits can be native or not. The present page describes native JS traits, which must be annotated with @js.native. There are also non-native JS traits (aka Scala.js-defined JS traits), documented in the Scala.js-defined JS types guide. The latter have more restrictions, but can be implemented from Scala.js code. Native JS traits as described here should only be used for interfaces that are exclusively implemented by the JavaScript library, not for interfaces/contracts meant to be implemented by the user of said library.

In native JS types, all concrete definitions must have = js.native as body. Here is an example giving types to a small portion of the API of Window objects in browsers.

@js.native
trait Window extends js.Object {
  val document: HTMLDocument = js.native
  var location: String = js.native

  def innerWidth: Int = js.native
  def innerHeight: Int = js.native

  def alert(message: String): Unit = js.native

  def open(url: String, target: String,
      features: String = ""): Window = js.native
  def close(): Unit = js.native
}
The difference between a val and a def without parentheses is that the result of the former is stable (in Scala semantics). Pragmatically, use val if the result will always be the same (e.g., document), and def when subsequent accesses to the field might return a different value (e.g., innerWidth). Calls to the apply method of an object x map to calling x, i.e., x(...) instead of x.apply(...). Methods can have parameters with default values, to mark them as optional. However, the actual value is irrelevant and never used. Instead, the parameter is omitted entirely (or set to undefined). The value is only indicative, as implicit documentation. Fields, parameters, or result types that can have different, unrelated types, can be accurately typed with the pseudo-union type A | B. Methods can be overloaded. This is useful to type accurately some APIs that behave differently depending on the number or types of arguments. JS traits and their methods can have type parameters, abstract type members and type aliases, without restriction compared to Scala’s type system. However, inner traits, classes and objects don’t make sense and are forbidden. It is however allowed to declare a JS trait in a top-level object. Methods can have varargs, denoted by * like in regular Scala. They map to JavaScript varargs, i.e., the method is called with more arguments. isInstanceOf[T] is not supported for any trait T inheriting from js.Any. Consequently, pattern matching for such types is not supported either. asInstanceOf[T] is completely erased for any T inheriting from js.Any, meaning that it does not perform any runtime check. It is always valid to cast anything to such a trait. JavaScript field/method names and their Scala counterpart Sometimes, a JavaScript API defines fields and/or methods with names that do not feel right in Scala. For example, jQuery objects feature a method named val(), which, obviously, is a keyword in Scala. They can be defined in Scala in two ways. 
The trivial one is simply to use backquotes to escape them in Scala:

def `val`(): String = js.native
def `val`(v: String): this.type = js.native

However, it becomes annoying very quickly. An often better solution is to use the scala.scalajs.js.annotation.JSName annotation to specify the JavaScript name to use, which can be different from the Scala name:

@JSName("val")
def value(): String = js.native
@JSName("val")
def value(v: String): this.type = js.native

If necessary, several overloads of a method with the same name can have different @JSName's. Conversely, several methods with different names in Scala can have the same @JSName.

Members with a JavaScript symbol "name"

@JSName can also be given a reference to a js.Symbol instead of a constant string. This is used for JavaScript members whose "name" is actually a symbol. For example, JavaScript iterable objects must declare a method whose name is the symbol Symbol.iterator:

@JSName(js.Symbol.iterator)
def iterator(): js.Iterator[Int] = js.native

The argument to @JSName must be a reference to a static, stable field. In practice, this means a val in a top-level object. js.Symbol.iterator is such a val, declared in the top-level object js.Symbol.

Scala methods representing bracket access (obj[x])

The annotation scala.scalajs.js.annotation.JSBracketAccess can be used on methods to mark them as representing bracket access on an object. The target method must either have one parameter and a non-Unit result type (in which case it represents read access) or two parameters and a Unit result type (in which case it represents write access).

A typical example can be found in the js.Array[A] class itself, of course:

@JSBracketAccess
def apply(index: Int): A = js.native
@JSBracketAccess
def update(index: Int, v: A): Unit = js.native

The Scala method names are irrelevant for the translation to JavaScript.
The duo apply/update is often a sensible choice, because it gives array-like access on Scala's side as well, but it is not required to use these names.

Native JavaScript classes

It is also possible to define native JavaScript classes as Scala classes inheriting, directly or indirectly, from js.Any (like traits, usually from js.Object). The main difference compared to traits is that classes have constructors, hence they also provide instantiation of objects with the new keyword.

Unlike traits, classes actually exist in the JavaScript world, often as top-level, global variables. They must therefore be annotated with the @JSGlobal annotation. For example:

@js.native
@JSGlobal
class RegExp(pattern: String) extends js.Object {
  ...
}

The call new RegExp("[ab]*") will map to the obvious in JavaScript, i.e., new RegExp("[ab]*"), meaning that the identifier RegExp will be looked up in the global scope.

If it is impractical or inconvenient to declare the Scala class with the same name as the JavaScript class (e.g., because it is defined in a namespace, like THREE.Scene), a constant string can be given as parameter to @JSGlobal to specify the JavaScript name:

@js.native
@JSGlobal("THREE.Scene")
class Scene extends js.Object

Remarks

- If the class does not have any constructor without argument, and it has to be subclassed, you may either decide to add a fake protected no-arg constructor, or call an inherited constructor with ???s as parameters.
- isInstanceOf[C] is supported for classes inheriting from js.Any. It is implemented with an instanceof test. Pattern matching, including ClassTag-based matching, works accordingly.
- As is the case for traits, asInstanceOf[C] is completely erased for any class C inheriting from js.Any, meaning that it does not perform any runtime check. It is always valid to cast anything to such a class.

Top-level JavaScript objects

JavaScript APIs often expose top-level objects with methods and fields.
For example, the JSON object provides methods for parsing and emitting JSON strings. These can be declared in Scala.js with objects inheriting directly or indirectly from js.Any (again, often js.Object). As is the case with classes, they must be annotated with @js.native and @JSGlobal.

@js.native
@JSGlobal
object JSON extends js.Object {
  def parse(text: String): js.Any = js.native
  def stringify(value: js.Any): String = js.native
}

A call like JSON.parse(text) will map in JavaScript to the obvious, i.e., JSON.parse(text), meaning that the identifier JSON will be looked up in the global scope.

Similarly to classes, the JavaScript name can be specified as an explicit argument to @JSGlobal, e.g.,

@js.native
@JSGlobal("jQuery")
object JQuery extends js.Object {
  def apply(x: String): JQuery = js.native
}

Unlike classes and traits, native JS objects can have inner native JS classes, traits and objects. Inner classes and objects will be looked up as fields of the enclosing JS object.

Variables and functions in the global scope

Besides object-like top-level definitions, JavaScript also defines variables and functions in the global scope. Scala does not have top-level variables and functions, but we can define vals and defs in top-level objects instead. For example, we can define the document variable and the alert function as follows.

Requires Scala.js 1.1.0 or later

import js.annotation._

object DOMGlobals {
  @js.native
  @JSGlobal("document")
  val document: HTMLDocument = js.native

  @js.native
  @JSGlobal("alert")
  def alert(message: String): Unit = js.native
}
import js.annotation._ @js.native @JSGlobalScope object DOMGlobalScope extends js.Object { val document: HTMLDocument = js.native def alert(message: String): Unit = js.native } Also read access to the JavaScript global scope. Imports from other JavaScript modules Important: Importing from JavaScript modules requires that you emit a module for the Scala.js code. The previous sections on native classes and objects all refer to global variables, i.e., variables declared in the JavaScript global scope. In modern JavaScript ecosystems, we often want to load things from other modules. This is what @JSImport is designed for. You can annotate an @js.native class, object, val or def with @JSImport instead of @JSGlobal to signify that it is defined in a module. For example, in the following snippet: @js.native @JSImport("bar.js", "Foo") class Foobaz(val x: Int) extends js.Object val f = new Foobaz(5) the annotation specifies that Foobaz is a native JS class defined in the module "bar.js", and exported under the name "Foo". Semantically, @JSImport corresponds to an ECMAScript 2015 import, and the above code is therefore equivalent to this JavaScript code: import { Foo as Foobaz } from "bar.js"; var f = new Foobaz(5); In CommonJS terms, this would be: var bar = require("bar.js"); var f = new bar.Foo(5); The first argument to @JSImport is the name of the JavaScript module you wish to import. The second argument denotes what member of the module you are importing. It can be one of the following: - A string indicating the name of member. The string can be a .-separated chain of selections (e.g., "Foo.Babar"). - The constant JSImport.Default, to select the default export of the JavaScript module. This corresponds to import Foobaz from "bar.js". - The constant JSImport.Namespace, to select the module itself (with its exports as fields). This corresponds to import * as Foobaz from "bar.js". 
Before Scala.js 1.1.0, the latter was particularly useful to import members of the modules that are neither classes nor objects (for example, functions):

@js.native
@JSImport("bar.js", JSImport.Namespace)
object Bar extends js.Object {
  def exportedFunction(x: Int): Int = js.native
}

val y = Bar.exportedFunction(5)

In CommonJS terms, this would be:

var bar = require("bar.js");
var y = bar.exportedFunction(5);

If the previous example had used JSImport.Default instead of JSImport.Namespace, the current translation into CommonJS terms would be the following:

function moduleDefault(m) {
  return (m && (typeof m === "object") && "default" in m) ? m["default"] : m;
}

var bar = require("bar.js");
var y = moduleDefault(bar).exportedFunction(5);

This is subject to change in future versions of Scala.js, to better reflect the evolution of specifications in ECMAScript itself, and its implementations. Starting with Scala.js 1.1.0, the above example would probably be written as follows instead:

object Bar {
  @js.native
  @JSImport("bar.js", "exportedFunction")
  def exportedFunction(x: Int): Int = js.native
}

val y = Bar.exportedFunction(5)

Important: @JSImport is completely incompatible with jsDependencies. You should use a separate mechanism to manage your JavaScript dependencies. The sbt plugin scalajs-bundler provides one such mechanism.
Translating ES imports to Scala.js @JSImport

When the documentation of a library specifies how to write ES imports to use it, use the following table to translate those into Scala.js @JSImports:

Scala.js:   @JSExportTopLevel("foo") object Bar
ECMAScript: export { Bar as foo }

Scala.js:   @js.native @JSImport("mod.js", "foo") object Bar extends js.Object
ECMAScript: import { foo as Bar } from "mod.js"

Scala.js:   @js.native @JSImport("mod.js", JSImport.Namespace) object Bar extends js.Object
ECMAScript: import * as Bar from "mod.js"

Scala.js:   @js.native @JSImport("mod.js", JSImport.Default) object Bar extends js.Object
ECMAScript: import Bar from "mod.js"

Default import or namespace import?

The default export accessible with JSImport.Default, specified in terms of ECMAScript 2015 modules, is somewhat underspecified when it comes to CommonJS, at the moment. This is because it is not entirely clear yet what default exports are supposed to be with respect to "legacy" module systems (such as CommonJS).

It seems that the intention is that a legacy module (such as a CommonJS one) would appear to an ECMAScript 2015 module as exporting a single member: the default export. For a CommonJS module, the value of the default export would be the value of exports. This intention is not clearly specified anywhere, though, and existing definitions are known to slightly conflict on the matter (e.g., what Rollup.js does compared to what Node.js would do in the future). There seems to be an emergent behavior that members of a legacy module (e.g., fields of the exports object) will also be exposed as if they were top-level exports, so that they can be imported as import { Foo } from "bar.js".

What does it all mean to you? How do you choose between Namespace, Default and named imports? At present, we recommend to follow these rules of thumb:

- Does the documentation of the module specify how to import it with ECMAScript 2015 syntax? If yes, translate the ES syntax into @JSImport as specified above.
- Otherwise, is the exports value of a legacy module not an object (e.g., it is a class or a function)? If yes, use a default import with JSImport.Default.
- Otherwise, use a named import with a string or a namespace import with JSImport.Namespace.

Dynamic import

ECMAScript 2020's dynamic import is exposed in Scala.js as the method js.import[A <: js.Any](moduleName), which returns a js.Promise[A]. The parameter A should be a JS trait describing the API of the module, and be given explicitly. Since import is a keyword in Scala, it must be called with backticks:

import scala.scalajs.js

trait FooAPI extends js.Any {
  def bar(x: Int): Int
}

val moduleName = "foo.js"
val promise = js.`import`[FooAPI](moduleName)
val future = promise.toFuture
for (module <- future) {
  println(module.bar(5))
}

Monkey patching

In JavaScript, monkey patching is a common pattern, where some top-level object or class' prototype is meant to be extended by third-party code. This pattern is easily encoded in Scala.js' type system with implicit conversions.

For example, in jQuery, $.fn can be extended with new methods that will be available to so-called jQuery objects, of type JQuery. Such a plugin can be declared in Scala.js with a separate trait, say JQueryGreenify, and an implicit conversion from JQuery to JQueryGreenify. The implicit conversion is implemented with a hard cast, since in effect we just want to extend the API, not actually change the value.

@js.native
trait JQueryGreenify extends JQuery {
  def greenify(): this.type = js.native
}

object JQueryGreenify {
  implicit def jq2greenify(jq: JQuery): JQueryGreenify =
    jq.asInstanceOf[JQueryGreenify]
}

Recall that jq.asInstanceOf[JQueryGreenify] will be erased when mapping to JavaScript because JQueryGreenify is a JS trait. The implicit conversion is therefore a no-op and can be inlined away, which means that this pattern does not have any runtime overhead.

Reflective calls

Scala.js does not support reflective calls on any subtype of js.Any.
This is mainly due to the @JSName annotation. Since we cannot statically enforce this restriction, reflective calls on subtypes of js.Any will fail at runtime. Therefore, we recommend to avoid reflective calls altogether.

What is a reflective call?

Calling a method on a structural type in Scala creates a so-called reflective call. A reflective call is a type-safe method call that uses Java reflection at runtime. The following is an example of a reflective call:

// A structural type
type T = {
  def foo(x: Int): String
}

def print(obj: T) = obj.foo(100)
//                  ^ this is a reflective call

Any object conforming structurally to T can now be passed to print:

class A {
  def foo(x: Int) = s"Input: $x"
}

print(new A())

Note that A does not extend T but only conforms structurally (i.e., it has a method foo with a matching signature). The Scala compiler issues a warning for every reflective call, unless scala.language.reflectiveCalls is imported.

Why do reflective calls not work on js.Any?

Since JavaScript is dynamic by nature, a reflective method lookup as in Java is not required for reflective calls. However, in order to generate the right method call, the call site needs to know the exact function name in JavaScript. The Scala.js compiler generates proxy methods for that specific purpose. However, we are unable to generate these forwarder methods on js.Any types without leaking prototype members on non-Scala.js objects. This is something which – in our opinion – we must avoid at all cost.

The lack of forwarder methods, combined with the fact that a JavaScript method can be arbitrarily renamed using @JSName, makes it impossible to know the method name to be called at the call site. The reflective call can therefore not be generated.

Calling JavaScript from Scala.js with dynamic types

Sometimes, it is more convenient to manipulate JavaScript values in a dynamically typed way.
Although it is not recommended to do so for APIs that are used repetitively, Scala.js lets you call JavaScript in a dynamically typed fashion if you want to. The basic entry point is to grab a dynamically typed reference to the global scope, with js.Dynamic.global, which is of type js.Dynamic.

You can read and write any field of a js.Dynamic, as well as call any method with any number of arguments. All input types are assumed to be of type js.Any, and all output types are assumed to be of type js.Dynamic. This means that you can assign a js.Array[A] (or even an Int, through implicit conversion) to a field of a js.Dynamic. And when you receive something, you can chain any kind of call and/or field access:

val document = js.Dynamic.global.document
val playground = document.getElementById("playground")

val newP = document.createElement("p")
newP.innerHTML = "Hello world!"
playground.appendChild(newP)

When calling getElementById or assigning to the field innerHTML, the String is implicitly converted to js.Any. And since js.Dynamic inherits from js.Any, it is also valid to pass newP as a parameter to appendChild.

Remarks

Calling a js.Dynamic, like in x(a), will be treated as calling x in JavaScript, just like calling the apply method with the statically typed interface. Parameters are assumed to be of type js.Any and the result type is js.Dynamic, as for any other method.

All the JavaScript operators can be applied to js.Dynamic values.

To instantiate an object of a class with the dynamic interface, you need to obtain a js.Dynamic reference to the class value, and call the js.Dynamic.newInstance method like this:

val today = js.Dynamic.newInstance(js.Dynamic.global.Date)()

If you use the dynamic interface a lot, it is convenient to import js.Dynamic.global and/or newInstance under simple names, e.g.,

import js.Dynamic.{ global => g, newInstance => jsnew }

val today = jsnew(g.Date)()

When using js.Dynamic, you are very close to writing raw JavaScript within Scala.js, with all the warts of the language coming to haunt you. However, to get the full extent of JavaScriptish code, you can import the implicit conversions in js.DynamicImplicits. Use at your own risk!
http://www.scala-js.org/doc/interoperability/facade-types.html
1 programming language and it has its own graphical plotting abilities. Matplotlib works well with Pandas and NumPy, a library for matrix operations and math. In another post, we are going to see some other libraries, like Seaborn, that are built on top of Matplotlib. But in order to use those libraries, it's necessary to understand Matplotlib first. Feel free to access the code on Google Colab.

2. Installing libraries

First of all, you need to install Matplotlib using conda if you are using Anaconda, or pip3 if you are using another Python setup, for example with a text editor such as Sublime Text:

conda install matplotlib

or

pip3 install matplotlib

3. Getting Started

After you have installed Matplotlib, we start our code by importing the Matplotlib subpackage pyplot under the alias plt, so we don't have to write matplotlib.pyplot every time we need it:

import matplotlib.pyplot as plt

In my case, I use Jupyter notebook; if you do too, make sure to add the command %matplotlib inline, because it allows you to see the plots inside this environment (Jupyter notebook):

%matplotlib inline

But if you are using Sublime Text, PyCharm or any other text editor, consider putting plt.show() at the end of your code to see the plots:

plt.show()

4. Creating a Dataset And Start Plotting

In order to use Matplotlib, we first need data to work with. We are going to use NumPy, a library that allows you to generate data, so we can plot it later on using Matplotlib. Make sure to install it first by using one of these commands, according to your environment:

conda install numpy

or

pip3 install numpy

Then make sure to import it under the alias np and run the cell:

import numpy as np

Our dataset will be 11 evenly spaced numbers ranging from 0 to 5.
To do so, we use the linspace function that exists inside NumPy. This will be our x values; for the y values, we take the x values raised to the power of 2:

x = np.linspace(0, 5, 11)  # create x values
y = x ** 2                 # create y values

Now, there are two methods to create a plot: the first one is the functional method and the second is the object-oriented method.

4.1. The Functional Method

To do a simple plot in Matplotlib, we use the function plt.plot() with two arguments (the x and y values):

plt.plot(x, y)  # create a simple plot

As you see from the graph above, it is very simple, and we can add some information to it:

- x label: we use the plt.xlabel() function
- y label: we use the plt.ylabel() function
- Title: we use the plt.title() function

Make sure to run the plt.plot() command alongside these three function calls in the same cell; otherwise, it won't work.

# Functional Method
plt.plot(x, y)                   # add plot
plt.xlabel('X Axis Title Here')  # add X label
plt.ylabel('Y Axis Title Here')  # add Y label
plt.title('String Title Here')   # add Title

We can create multiple plots on the same canvas. To do that, we use the plt.subplot() function, which takes a number of arguments such as:

- nrows: The number of rows in the canvas
- ncols: The number of columns in the canvas
- index: The index of every plot in the canvas

To make everything clear, we are going to change the color of every plot by adding a third argument to the plt.plot() function, which is the color. Feel free to see the official documentation about the colors and what every character means. For example, "r" refers to the red color.

# plt.subplot(nrows, ncols, plot_number)
plt.subplot(1, 2, 1)
plt.plot(x, y, 'r')

plt.subplot(1, 2, 2)
plt.plot(y, x, 'g')
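As a quick check of the dataset we created (a small sketch, assuming NumPy is installed), the values produced by linspace are evenly spaced rather than random:

```python
import numpy as np

# 11 evenly spaced values from 0 to 5, i.e. steps of 0.5.
x = np.linspace(0, 5, 11)
y = x ** 2

print(x)      # 0.0, 0.5, 1.0, ..., 5.0
print(y[-1])  # 25.0
```

Printing x confirms the constant 0.5 step between consecutive values, which is what makes the parabola y = x² render smoothly.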
See the code below:

# Object Oriented Method
fig = plt.figure()                         # create a figure object
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])  # add axes

As you see above, the way we add axes is by passing a list to the fig.add_axes() function: the left, bottom, width and height arguments, each ranging from 0 to 1 — in other words, the percentage of the blank canvas that you want the axes to take up. Feel free to play around with these numbers.

Next, we add a plot using the axes.plot() function. It looks the same as earlier, but we have more control over our plots — you will see that in a second! We can add some information to our plot:

- x label: we use the axes.set_xlabel() function
- y label: we use the axes.set_ylabel() function
- Title: we use the axes.set_title() function

# Object Oriented Method
fig = plt.figure()                         # create a figure object
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])  # add axes
axes.plot(x, y)                            # plotting
axes.set_xlabel("X Axis Title Here")       # add X label
axes.set_ylabel("Y Axis Title Here")       # add Y label
axes.set_title("String Title Here")        # add a title

You know what, let's continue with this object-oriented idea and add two figures on the same canvas. See the code below:

fig = plt.figure()                          # create a figure object

axes1 = fig.add_axes([0.1, 0.1, 0.8, 0.8])  # add axes for the first figure
axes1.plot(x, y)                            # add plot to the first axes
axes1.set_title("Larger Plot")              # add a title to the first figure

axes2 = fig.add_axes([0.2, 0.5, 0.4, 0.3])  # add axes for the second figure
axes2.plot(y, x)                            # add plot to the second axes
axes2.set_title("Smaller Plot")             # add a title to the second figure

5. Creating Subplots

The next thing to do is show you how to create subplots within the same canvas using the object-oriented method.
See the code below:

fig, axes = plt.subplots(nrows=1, ncols=2)

axes[0].plot(x, y)                    # index the first plot
axes[0].set_title("The First Plot")   # set title for the first plot

axes[1].plot(y, x)                    # index the second plot
axes[1].set_title("The Second Plot")  # set title for the second plot

plt.tight_layout()                    # remove overlapping

As you see in the code above, we use plt.subplots() to create multiple plots on the same canvas, and it takes two arguments:

- nrows: the number of rows
- ncols: the number of columns

We can index each plot individually, as you see with axes[0] and axes[1], and change them, for example by adding a title or plotting some graphs on them. The last command, plt.tight_layout(), is recommended at the end of the code because it removes any overlapping that might occur when creating many plots on the same canvas.

We can also control the size of plots by adding the figsize=() argument to plt.subplots():

fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(8, 2))

axes[0].plot(x, y)  # index the first plot
axes[1].plot(y, x)  # index the second plot

plt.tight_layout()  # remove overlapping

You can also save the figure in a png or jpg format using the fig.savefig() function, which takes two arguments:

- fname: the name of the picture
- dpi: the dots per inch (quality)
https://pythonlearning.org/2019/12/19/introduction-to-data-visualization-using-matplotlib/
CC-MAIN-2020-16
refinedweb
1,364
64.81
Global : Its a global namespace object Process : Its also a global object but it provides essential functionality to transform a synchronous function into a asynchronous callback. Buffer : Raw data is stored in instances of the Buffer class. Global Global is the global namespace object. When we type >console.log(global) it prints out all the other global objects. One important point with Global object is that these objects aren't actually in the global scope but they are in the module scope. So the common advice is to either "declare the variable without the var keyword" or "add the variable to the global object". A variable declared with or without the var keyword got attached to the global object but with a slight variation. Variables declared with the var keyword remain local to a module whereas those declared without it get attached to the global object. For example > var value = "Test"; > console.log(global); Using this we can see our variable as a new property of global at the bottom. Now, > value1 = global; > console.log(global); The global object interface is printed out to the console and at the bottom we can see the local variable assigned as "Circular Reference". Process Its also a global object but it provides essential functionality to transform a synchronous function into a asynchronous callback. It can be accessed from anywhere and also its an instance of EventEmitter. Each node application is an instance of a Node Process object. It mainly provides identification or information about the application and its environment. Such as process.execPath : returns the execution path for the Node application process.Version : returns the Node version process.platform : identifies the server platform. The process object also has STDIO (Standard IO) streams such as stdin, stdout and stderr. One more useful Process method is memoryUsage, which provide us with the information of how much memory Node application is using. 
process.NextTick method attaches a callback function that's fired during the next loop in the Node event loop. process.NextTick method can be used to delay a function but asynchronously. So for whatever reason if you want to delay a fucntion asynchronously, you can use process.nextTick. It can also be used when you are doing some complex operations which is time consuming then you can divide that complex function into sections and each sections can be called using process.nextTick thus allowing other requests in the the Node application to be processed without waiting for the time consuming process to finish. Buffer The Buffer class is a way of handling binary data in Node.A buffer is similar to an arrays of integers but corresponds to a raw memory allocation outside the V8 heap. Converting between Buffers and JavaScript string objects requires an explicit encoding method which are :- ascii - for 7 bit ASCII data only. utf8 - multibyte encoded Unicode characters utf16le - 2 or 4 bytes, liitle endian encoded Unicode characters. base64 - Base64 string encoding hex - Encode each byte as two hexadecimal characters. > var b = new Buffer(str, [encoding]); It allocates a new buffer containing the given str, encoding defaults to 'utf8'. We can also write a string to an existing buffer. > b.write(str) There are several methods for reading and writing various types of data to the buffer, they are buffer.readInt8 and buffer.writeUInt8. Please Like and Share the Blog, if you find it interesting and helpful. Related Articles - Crossroads : Router for Nodejs Developers - Creating Own Custom Module in Nodejs - Colors Module in Nodejs - Utilities Module in Nodejs - CodeLens : Visual Studio 2013 Feature - Change database runtime when using ADO.Net - Create TCP Server in Nodejs - readFile Vs createReadStream in Nodejs - Error when "git push" : src refspec test does not match any - Globals in Nodejs
http://www.codingdefined.com/2014/07/globals-in-nodejs.html
CC-MAIN-2017-47
refinedweb
624
56.15