text: string, lengths 454 to 608k
url: string, lengths 17 to 896
dump: string, 91 classes
source: string, 1 value
word_count: int64, 101 to 114k
flesch_reading_ease: float64, 50 to 104
Write a file named quadratic.py with a function named big_root which, if given (a, b, c), returns the more positive root of the quadratic equation ax² + bx + c = 0. Use the quadratic formula to solve this problem. Also write a second function, small_root, which gives the other root of the equation. You may assume we only give coefficients for which the answer is a real number. Neither function should take any input from the user. You should not have any code outside of these two functions. When you run quadratic.py, nothing should happen: it defines a function, it does not run it.

If in another file (which you do not submit) you write the following:

import quadratic
print(quadratic.big_root(1, -1, -1))
print(quadratic.small_root(1, -1, -1))

you should get the following output:

1.618033988749895
-0.6180339887498949

(don’t worry if your answer differs in the last few digits)

We won’t grade this, but what does your code do when you try to solve an equation with no real solutions?

print(quadratic.big_root(1, 1, 1))

If you want, feel free to also add the cubic formula or even (if you feel really ambitious) the quartic formula, but there isn’t a quintic formula.

Hints:
- Recall that x = (-b ± √(b² - 4ac)) / (2a). Don’t remember the operators you need? See §3.3.1. Also, remember that in Python ^ is not the exponentiation operator (we won’t cover what ^ is; if you are curious, see §19.2.5).
- Did you get the order of operations right? You could look them up, but adding parentheses never hurts.
- There are two roots (because of the ± in the quadratic formula), but one of them is always the biggest… no need for an if.
- Have you tried other coefficients besides (1, -1, -1)?
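The spec above can be satisfied in a few lines. The following sketch is one possible solution (the docstrings and the use of math.sqrt are my own choices, not part of the assignment), assuming a > 0 so that the + branch of the ± always gives the more positive root:

```python
# quadratic.py -- one possible solution sketch.
import math

def big_root(a, b, c):
    """Return the more positive root of a*x**2 + b*x + c = 0 (assumes a > 0)."""
    return (-b + math.sqrt(b**2 - 4*a*c)) / (2*a)

def small_root(a, b, c):
    """Return the other root of a*x**2 + b*x + c = 0 (assumes a > 0)."""
    return (-b - math.sqrt(b**2 - 4*a*c)) / (2*a)
```

Running the import-and-print example from the assignment against this file prints the two golden-ratio values shown above; with no real solutions, math.sqrt raises a ValueError on the negative discriminant.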
http://cs1110.cs.virginia.edu/w02-quadratic.html
CC-MAIN-2017-43
refinedweb
294
65.42
Core Java author Cay Horstmann commented recently about the difficulty of using Swing's threading model correctly. Difficulty is perhaps not the right word: Swing's concurrency rules are neither difficult to understand nor hard to follow. What is difficult, though, is to accept that one has to produce the kind of convoluted code required by those rules even for the simplest Swing application. As the author of a Java tutorial, Horstmann finds his intent of showing attractive Java code challenged when he gets to describing Swing development. Swing's concurrency model is designed such that any change to component state must be performed from the Swing event-handling thread. That thread is created automatically when the Swing API boots, and each Swing release so far provided increasingly convenient means of injecting programmatic action into the event-handling thread, such as SwingWorker and SwingUtilities, now part of the core JDK 6 API. Programmatic access means anything other than actions the end-user performs on a component. A user typing text into a text box is not programmatic access, and is automatically pushed onto the event-handling thread. Reading the values of that text box, by contrast, is programmatic access and, therefore, must be performed by explicitly injecting the reading action—the code for textBox.getText()—into the event-handling thread. In his blog post, Horstmann shows some interesting consequences of this rule. Creating the UI of a Swing application, for instance, should not be performed in the main() method's current thread—instead, the developer must explicitly inject the UI-creating statements into the event-handling thread (this example is quoted from Horstmann's blog):

public class PureGoodness {
    public static void main(String... args) {
        EventQueue.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame();
                frame.setSize(300, 300);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            }
        });
    }
}

This is not due to academic finickiness: Executing the UI-creating statements in the main() method may actually cause the UI to not show at all—or worse, like many concurrency-related errors, to show up as a heisenbug on users' desktops. This has actually happened in Swing applications I wrote some years ago. Indeed, Swing's concurrency rule commands that even an action as simple as accessing a text field's value, or setting a label's text, be performed in a similar manner. There are good reasons for this rule, which have been thoroughly explained by members of Sun's Swing team in various articles. Their main argument centers around considering the alternatives, compared to which Swing's threading model is still the best solution. Or so the Swing designers say. One alternative, for instance, would be to make the Swing API thread-safe—an impractical option due to the performance penalty this would impose, not to mention the increased complexity of the API's implementation that almost certainly would lead to impaired stability. Another alternative, practiced by Flex and Ajax applications, is to make the UI single-threaded. Since there is only one thread, by definition all updates are performed on that thread. A downside of the single-threaded approach is that the UI application cannot schedule long-running tasks in the background without locking up the UI. The single-threaded approach would also take away Java's sophisticated threading capabilities on the client. Indeed, one of the arguments presented in favor of a newly re-invigorated Swing on the client is the benefit of long-running tasks scheduled in a separate thread: For instance, executing image filters in an applet may make the UI more responsive if long-running filter algorithms ran in non-UI threads.
Swing's current threading model may well be the best compromise, but it's hard to deny that it requires a heroic effort from developers to carefully surround the relevant code with event-thread friendliness. This sort of tedium cries out for an API and, indeed, several third-party APIs make Swing threading easier, such as Foxtrot and Spin. I could finish this blog post here, saying that, well, I understand the reasons for the Swing single-threaded rule, and I'm willing to live with that, perhaps using some of these third-party APIs. However, I am too much of a Swing enthusiast to leave it at that. Now that Sun is finally taking client-side Java seriously, I suspect that many developers will wander into Swing territory with some Ajax coding under their belt. And I don't think the kind of code samples that Horstmann rightly points out as ugly would convert too many of them into Swing fans. While most developers would intellectually understand the reasons for doing things this way, few would be able to accept that what's good for sophisticated Ajax applications—one execution thread—is not good enough for Swing. Many would, I suspect, question the extra amount of effort required in Swing coding compared to the code needed to develop similar features using some Ajax toolkit. What can be done to make Swing programming easier? One solution might be to allow developers to use the threading model of Ajax applications when writing Swing code. When I spoke with Chet Haase, a member of Sun's Swing team, for an Artima interview, he pointed to Ajax's single-threadedness as a relative weakness. But is it, really? Trying my hand at Flex in the past couple of months, I realized that performing the UI's work in a single thread is not that much of a handicap. In fact, in most cases, it actually matches a user's expectations of an application.
For example, loading data into the UI is one use-case often presented in Swing threading tutorials: fetching data is performed in a worker thread, which leaves the UI event-handling thread free to respond to user actions. Yet, as long as the UI displays fast, a few seconds' delay to get the data mimics not only the browser/Web application model, but also some desktop applications. When double-clicking on, say, the Microsoft Word icon on the desktop, the Word UI appears fairly fast, and it is in a separate step that the data—the Word document—is loaded into the application. There is really no need in that case to spin off a separate data-loading thread. Does this example imply that the price of sophisticated concurrency in terms of API complexity is not worth paying? Not necessarily. It's just that a single-threaded UI model may satisfy the requirements of a large number of rich-client applications. And if that were, indeed, the case, simpler UI programming models would attract developers who prize ease-of-use above powerful concurrency. Swing should offer a simpler threading model to attract those developers. Do you agree that the correct use of Swing's threading model induces complex and unattractive application code? If so, what do you think should be done to make the Swing programming model easier? How much concurrency do you think should be exposed in a UI toolkit?
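Swing's rule (mutate component state only on the dispatch thread, injecting work via EventQueue.invokeLater) can be modelled in a few lines outside Java. The sketch below is a toy Python model, not Swing and not any real toolkit's API; it only illustrates the mechanism: one thread drains a queue of injected actions, so "component state" is ever touched from that one thread alone.

```python
import queue
import threading

class ToyEventQueue:
    """Toy model of a single-threaded UI dispatch queue (cf. the Swing EDT)."""
    def __init__(self):
        self._queue = queue.Queue()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        # All injected actions run here, on the single dispatch thread.
        while True:
            action = self._queue.get()
            if action is None:   # sentinel: stop the loop
                break
            action()

    def invoke_later(self, action):
        """Inject an action into the dispatch thread (cf. invokeLater)."""
        self._queue.put(action)

    def shutdown(self):
        """Drain remaining actions, then stop the dispatch thread."""
        self._queue.put(None)
        self._thread.join()

# "Component state" that must only be touched on the dispatch thread.
label_text = []

eq = ToyEventQueue()
eq.invoke_later(lambda: label_text.append("Hello from the event thread"))
eq.shutdown()
```

Because the queue is FIFO and drained by one thread, injected actions run in order and never race each other, which is exactly the guarantee the Swing rule buys at the cost of the wrapping boilerplate.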
http://www.artima.com/weblogs/viewpost.jsp?thread=208018
CC-MAIN-2016-50
refinedweb
1,229
50.57
package com.exmple.helloandroid;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class HelloAndroidActivity extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView tv = new TextView(this);
        tv.setText("Hello, new Android");
        setContentView(tv);
        //setContentView(R.layout.main);
    }
}

[2012-03-19 08:37:42 - HelloAndroid] ------------------------------
[2012-03-19 08:37:42 - HelloAndroid] Android Launch!
[2012-03-19 08:37:42 - HelloAndroid] adb is running normally.
[2012-03-19 08:37:42 - HelloAndroid] Performing com.exmple.helloandroid.HelloAndroidActivity activity launch
[2012-03-19 08:37:42 - HelloAndroid] Automatic Target Mode: launching new emulator with compatible AVD 'my_avd'
[2012-03-19 08:37:42 - HelloAndroid] Launching a new emulator with Virtual Device 'my_avd'
[2012-03-19 08:37:50 - Emulator] emulator: WARNING: Unable to create sensors port: Unknown error
[2012-03-19 08:37:50 - HelloAndroid] New emulator found: emulator-5554
[2012-03-19 08:37:50 - HelloAndroid] Waiting for HOME ('android.process.acore') to be launched...
[2012-03-19 08:39:21 - HelloAndroid] HOME is up on device 'emulator-5554'
[2012-03-19 08:39:21 - HelloAndroid] Uploading HelloAndroid.apk onto device 'emulator-5554'
[2012-03-19 08:39:21 - HelloAndroid] Installing HelloAndroid.apk...
[2012-03-19 08:40:23 - HelloAndroid] Success!
[2012-03-19 08:40:23 - HelloAndroid] Starting activity com.exmple.helloandroid.HelloAndroidActivity on device emulator-5554
[2012-03-19 08:40:25 - HelloAndroid] ActivityManager: Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=com.exmple.helloandroid/.HelloAndroidActivity }
[2012-03-19 08:44:04 - HelloAndroid] ------------------------------

android.app.Activity;
import android.os.Bundle;

public class Test1Activity extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstan
        setContentView(R.layout.ma
    }
}

I didn't understand your question: "Is the output correct with my setup for a hello world app?" What does it mean? If you run your code you will get the output "test1" (by default). Note that Android projects appearing only in the 'Other' menu is correct; I am not seeing anything wrong in that. So you need to change your code to this, with a TextView and the actual words "Hello Android", removing the last setContentView line.

And then indeed it takes quite a while to create the actual emulator device. If I don't touch the whole thing after I say Run As > Android Application in my Eclipse, it takes probably five minutes, I guess. After you actually see the device, you can speed it up a little bit by clicking "OK", then clicking the icon with the circle and the icons within it, and then clicking the Hello Android icon itself to run the application. It will do it by itself, but then it is still slower.
I changed tv.setText("Hello, Android"); to tv.setText("Hello, my jagguy Android"); and went again to Run As > Android Application. At least for now, it again goes through the whole process of creating the whole device, but then you see it appearing on the screen. The icon is the same (Hello Android), but when you click it, it writes the new words. I don't know yet whether it is possible not to go through the whole creation of the device the second time, as it is really slow.

By the way, in your screenshot I don't see the Hello Android application itself, only the emulator. I don't even see the icon (a circle with icons inside) which leads to the folder with applications. In that folder I see many "native" applications and also my Hello Android icon, and when it eventually starts it takes the whole screen of the device: it writes Hello Android at the very top as the header (the name of the app), and below it your actual words, Hello Android, or Hello, my jagguy Android, whatever you actually type into the TextView.

I then see the option Android Project; I don't remember whether it is something in Other projects. I'll check later today and can send you screenshots when I get back to the computer where I ran it. In the section Run Application there is a screenshot; that is how it ultimately should look when it runs, though it is nicer in my picture. I think I have the 4.0.3 platform and their snapshot is older, but the idea is the same: the app takes the full screen, at the very top it writes the application name Hello Android, and below it the contents of your tv.setText("Hello, Android"); line. I have a slightly different view of the original device when it starts, as I don't have the big wide whitish button at the very bottom (again, maybe that is the 4.0.3 version which I chose), and mine also has, among a few icons at the bottom, a circle with rows of squares inside: this is the icon I click on, and it shows the whole screen of different applications, among them my Hello Android with the standard Android icon.

You don't need to do anything to display "Hello World". If you create an Android project in Eclipse, it will automatically create an activity (during project setup it will ask for the activity name). By default it will display the hello world with your project name, e.g. "Hello World!, test1!". The first time it will take some time to execute your project; next time, if you run your project without changing any resources, it will be faster.

It did not ask me for any activity (just Run As > Android Project was my last command), and for some reason it was not faster the second time. Even when I went to History > Run for the same project again, it took the whole time and created a new emulator window. Perhaps I don't know how to avoid it.

Menu > Run > Run Configurations: here you can see the option to run the project. Select your project in the left side window. In the right side pane select Target. Here you can find two options, Manual and Automatic. Select Automatic, and below that you can find the list of available devices/emulators; check any one of them. From next time onwards, if you run your project it will directly launch your app through that device/emulator. It will not launch a new emulator again from the boot screen. This is the home screen of the emulator. If you click on the icon with the circle and squares, it goes to the application screen. Then when you click on Hello Android it shows this. If you just wait, it will actually come to that screen itself; just click Run As > Android App.

What santhanasamy suggested did help, in the sense that when I now modify the text and run it again, it really does not create a new emulator but replaces the program in the current emulator, and it is indeed much faster.
https://www.experts-exchange.com/questions/27638189/hello-world-android.html
CC-MAIN-2018-47
refinedweb
1,291
51.48
Hello to all of you! Recently I was working on a website for a huge pharmaceutical company and I got my hands on a product feed from one of their suppliers (a CSV file), with product name, description, price, categories and image link. I decided not to use that feed, since I already had what I needed in my database, except the product images. I decided to compile a list of the links and to download them to my hard drive... over 2000 of them. After spending hours with Python scripts that for whatever reason needed tons of dependencies, and with wget crashing on my system or having issues with SSL certificates, I just made my own, in C++. The download part was copied from an old topic here on this forum, to save some time. Thanks go to Xander. I know it may not be perfect, but maybe it can help someone!

The list file looks like this: one image URL per line.

Here is the code:

#define WIN32_LEAN_AND_MEAN   // must come before <windows.h> to have any effect
#include <windows.h>
#include <WinInet.h>
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <cstdio>
#include <conio.h>

#pragma comment (lib, "WinInet.lib")

using namespace std;

// Return everything after the last '/' in the URL.
string getFileName(const string& s) {
    char sep = '/';   // was '//', a multi-character constant
    size_t i = s.rfind(sep, s.length());
    if (i != string::npos) {
        return s.substr(i + 1, s.length() - i);
    }
    return "";
}

int main() {   // void main() is non-standard
    const char* destFolder = "C:\\Users\\Athenian\\Desktop\\save\\download";
    const char* linkList = "C:\\Users\\Athenian\\Desktop\\save\\images.txt";

    cout << "Link image downloader" << endl;

    ifstream file(linkList);
    string linebuffer;
    vector<string> links;
    while (file && getline(file, linebuffer)) {
        if (linebuffer.length() == 0) continue;
        links.push_back(linebuffer);
    }

    HINTERNET open = InternetOpen("ExampleDL", INTERNET_OPEN_TYPE_PRECONFIG,
                                  NULL, NULL, 0);
    char buf[4096];   // no need for a 1 MB stack buffer
    unsigned long bytesRead;

    for (vector<string>::iterator it = links.begin(); it != links.end(); ++it) {
        // std::string concatenation replaces the hand-rolled (and leaky)
        // beautifulDupcat helper from the first version of this post.
        string dest = string(destFolder) + "\\" + getFileName(*it);
        HINTERNET url = InternetOpenUrl(open, it->c_str(), NULL, 0, 0, 0);
        if (!url) continue;
        FILE* saved = fopen(dest.c_str(), "wb");
        if (saved) {
            while (InternetReadFile(url, buf, sizeof(buf), &bytesRead)
                   && bytesRead != 0) {
                fwrite(buf, 1, bytesRead, saved);
            }
            fclose(saved);
        }
        InternetCloseHandle(url);
    }
    InternetCloseHandle(open);
    _getch();
    return 0;
}

Athenian out!
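For readers who would still prefer a portable, dependency-free route, the same loop fits in a few lines of standard-library Python. The file names below are placeholders mirroring the C++ version, and filename_from_url plays the role of getFileName:

```python
import os
import urllib.request

def filename_from_url(url):
    """Return the part of the URL after the last slash."""
    return url.rsplit("/", 1)[-1]

def download_all(link_list_path, dest_folder):
    """Download every non-empty URL listed (one per line) in link_list_path."""
    os.makedirs(dest_folder, exist_ok=True)
    with open(link_list_path) as f:
        for line in f:
            url = line.strip()
            if not url:
                continue
            dest = os.path.join(dest_folder, filename_from_url(url))
            urllib.request.urlretrieve(url, dest)

# usage (placeholder paths): download_all("images.txt", "download")
```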
http://www.rohitab.com/discuss/topic/44012-c-list-downloader/
CC-MAIN-2018-30
refinedweb
374
59.7
django-cbv 0.2

Django class-based views, backported from Django trunk.

This is Django's class-based views code taken from Django trunk. Use CBVs like this:

import cbv as generic

Then write your class-based views as explained in the Django documentation:

class SomeView(generic.TemplateView):
    template_name = 'some_template.html'

You will need the following middleware installed:

cbv.middleware.DeferredRenderingMiddleware

Once you’re ready to use Django 1.3, you’ll only need to switch the import statement to:

from django.views import generic

- Author: Bruno Renie
- License: BSD
- Platform: any
- Categories
- Package Index Owner: bruno
- DOAP record: django-cbv-0.2.xml
https://pypi.python.org/pypi/django-cbv/0.2
CC-MAIN-2017-09
refinedweb
102
50.73
I'm trying to build a weather application. I have a searchbar component that fetches data via an API and sets the state of a weather prop which is initially null:

getInitialState: function() {
  return {text: '', weather: null};
},
handleClick: function() {
  WeatherApi.get('q=' + this.state.text).then(function(data) {
    this.setState({weather: data})
  }.bind(this));
},

<TodaysWeatherContainer weatherData={this.state.weather} />

var TodaysWeatherContainer = React.createClass({
  render: function() {
    return (
      <div>
        <p>{this.props.weatherData}</p>
      </div>
    );
  }
});

It sounds like the data returned by the AJAX request is not just text (which would be a valid child node between those <p></p> tags), but an object, which is an invalid child. Set the weather property of the state to a property of the object returned by the API response, not the whole object.

In response to your latest comment, it sounds like weatherData.city is itself an object that needs to be parsed. You can probably solve this problem yourself by better studying the API, which we have no knowledge about. As @deowk points out in the comments, your second error resulted from the initial state of the parent component: {weather: null}. When the child component was first rendered, it attempted to find the city property on that null object. You no longer receive that error because you are now performing the lookup in the API callback, rather than in the child component itself.
https://codedump.io/share/5iCe0zuIwVcb/1/can39t-access-state-from-child-component
CC-MAIN-2016-50
refinedweb
231
55.03
[ { "title": "Sending Email", "snippet": null, "content": " Anvil apps can send email with a single command. They can also receive email.\n\n To send email, enable the Email service and simply run anvil.email.send from the client or server. anvil.email.send(\n to=\"customer@example.com\",\n from_address=\"support\",\n from_name=\"MyApp Support\",\n subject=\"Welcome to MyApp\",\n html=\"<h1>Welcome!</h1>\"\n)\n\ndef handle_incoming_emails(msg):\n\n msg.reply(text=\"Thank you for your message.\")\n\n msg_row = app_tables.received_messages.add_row(\n from_addr=msg.envelope.from_address, \n to=msg.envelope.recipient,\n text=msg.text, \n html=msg.html\n )\n for a in msg.attachments:\n app_tables.attachments.add_row(\n message=msg_row, \n attachment=a\n )\n Data can be stored in encrypted form using Anvil’s App Secrets service.\n\n.\n", "tags": ["kb"], "path": "/kb/encryption-and-secrets.html" }, { "title": "Expand and collapse sections of pages", "snippet": null, "content": ".\n\n Here’s an example app showing a Data Grid full of stock data. There’s a column containing a time-series chart for the price of each stock. The chart is hidden by default, and it can be shown by clicking a Link.\n\n Click here to clone the example app in the Anvil designer.\n\n \n", "tags": ["kb"], "path": "/kb/expand-and-collapse.html" }, { "title": "Words Ignored By full_text_search", "snippet": null, "content": " The Data Tables full_text_search operator ignores certain words that appear commonly in text, and therefore usually\ncloud the results of a natural language search. These are called ‘stop words’. If you need to search for a string containing any of these words, consider using like or ilike instead. These are intended\nfor a literal text match, whereas full_text_search is intended for a natural language query. 
The full list of stop words is:\n\n \n\n", "tags": ["kb"], "path": "/kb/full-text-search-ignored-words.html" }, { "title": "Grid-based layouts using Data Grids", "snippet": null, "content": "\n The best component to use for grid-based pages is the Data Grid. It is a table with columns and rows. Any component can be dropped into a cell.\n\n The width of the columns can be configured with pixel-precision, or they can stretch to fit their contents. Rows stretch in height to fit their contents.\n\n Here’s an example of laying components out in a grid: Grid Layout Example.\n\n Click here to clone the example app in the Anvil designer.\n\n \n", "tags": ["kb"], "path": "/kb/grid-based-layouts.html" }, { "title": "How Anvil apps are hosted", "snippet": null, "content": " By default, Anvil apps are hosted in our cloud servers without requiring any configuration from you. As soon as you start building an app, it is hosted on the internet at a private, unguessable URL.\n\n \n\n You can configure a public, meaningful URL at the click of a button.\n\n \n\n You can set the published version of your app to be different from the development version, as explained in the reference docs.\n\n.\n\n We will let you know if you are anywhere near your usage limit, and we never deny access to your users if you exceed your limit - we will simply make you aware and discuss how to proceed.\n\n.\n\n.\n\n If you want the advantages of an on-site installation without managing things yourself, we can manage your installation for you in a public cloud provider like AWS or Azure. You still get a whole Anvil system dedicated to your organisation.\n\n Contact us at enterprise@anvil.works to discuss further.\n", "tags": ["kb"], "path": "/kb/hosting.html" }, { "title": "How Data Bindings Work", "snippet": null, "content": " Data bindings are actually very simple. 
If you set the binding on the self.component_1’s foo property to SOME_EXPRESSION, then when the refreshing_data_bindings event triggers on your form, we execute: self.component_1.foo = SOME_EXPRESSION\n \n.\n\n For a more detailed discussion, see Data Bindings in the reference docs.\n", "tags": ["kb"], "path": "/learn/kb/how-data-bindings-work.html" }, { "title": "Using Javascript libraries from Anvil apps", "snippet": null, "content": " You can do almost everything in Anvil from Python. But sometimes you want to break out and use Javascript with Anvil. And if you’re doing that, you might want to import an external Javascript library or service. Here’s how to do that.\n…\n\n For example, let’s say we want to add Intercom’s live chat widget to our Anvil app. From their website, we can copy and paste a snippet of HTML that loads the chat widget:\n\n \n\n Select the Native Libraries option in the App Browser, then paste the HTML snippet in there:\n\n \n\n. Anvil apps are split into Forms. You can include one Form inside another, and use Links to switch between them.\n\n Our Material Design theme has a sidebar for navigation. Drop Links into it and configure their click handlers to add an instance of the desired Form to the main Form’s content_panel: from Page1 import Page1\n\n# ...\n\n def link_1_click(self, **event_args):\n \"\"\"This method is called when the link is clicked\"\"\"\n self.content_panel.clear()\n self.content_panel.add_component(Page1())\n This is an example app showing navigation between pages using the sidebar.\n\n Click here to clone the example app in the Anvil designer.\n\n \n\n Here is a video tutorial showing step-by-step how to set up navigation Links.\n\n For detailed information, see the Navigation section of the reference docs.\n\n.\n\n We will create a custom Theme for you if you are a Business Plan customer. 
Contact us at support@anvil.works to discuss further.\n", "tags": ["kb"], "path": "/kb/navigate-between-pages.html" }, { "title": "What versions of Python can I use with Anvil?", "snippet": "Anvil supports Python 3.7 and 2.7 on the server, and approximately Python 2.7 in the browser...", "content": " There are two types of Python code in Anvil: Server Modules (which run on the Anvil server) and Form code (which runs in the user’s web browser).\n\n Server Modules run on Anvil’s servers (you don’t have to set up your own servers – just write the code and call it!). The users of your app can’t see or change the code in your Server Modules.\n\n You can choose to run your app’s Server Modules with Python 2.7 or 3.7, if you’re using a paid account:\n\n \n\n Free Plan users must use the Restricted Python 2 environment, which has fewer libraries available to import. (Free Plan users can still use the Uplink, though!)\n\n.)\n\n The user of an app can see the source code for your Forms (via “View Source” in their web browser). They can also change how forms behave in your browser – so don’t rely on code in your Forms to enforce security! See our security documentation for more information.\n\n Sometimes you want to write library code that can be used on the server and in the browser. To do this, create a Module. You can import Module code from the client or the server.\n\n Module code is loaded in the same environment as whatever is importing it. If you import a Module from a Form, it will be loaded in Python 2; if you import from server code, it will be loaded in whichever Python version you have chosen.\n\n The user of an app can see source code for your Modules. They can also manipulate how Module code behaves in the browser. 
(Of course, this doesn’t change how Modules behave when they’re imported by Server Modules and running on Anvil’s servers.)\n\n If you use the Uplink to connect your app to code running outside Anvil, that code runs in whatever version of Python you are using with the Uplink. The user of an app cannot see or modify Uplink code.\n\n For more details, you can read our reference documentation:\n\n Reference documentation: The Anvil Runtime\n", "tags": ["kb"], "path": "/kb/python-versions.html" }, { "title": "HTTP and REST APIs with Anvil", "snippet": null, "content": " Anvil apps can integrate with external HTTP REST APIs, or expose REST APIs of their own.\n\n (Check out our worked example: Using the GitHub API from Anvil!)\n\n You can integrate against REST APIs using our anvil.http module. To make a GET request, you can run: resp = anvil.http.request(\"\")\n Here’s an example of a POST request: resp = anvil.http.request(url=\"\",\n method=\"POST\",\n data=\"Data to post\",\n headers= {\n \"Authentication\": \"my-access-key\",\n })\n The resp object is a Python object representing the response. If you use the json=True keyword argument to request, you can access a JSON-formatted response as a Python object resp = anvil.http.request(\"\", json=True)\nprint(resp['stocks'])\n This works in both client and server code. 
So you can make HTTP requests from the web browser, subject to the sensible security restrictions placed on HTTP requests by web browsers.\n\n If you want to integrate against Google or Stripe APIs, consider using our Google and Stripe integrations to save time.\n\n You can create an HTTP endpoint by decorating any function in a Server Module with @anvil.server.http_endpoint and specifying a path: @anvil.server.http_endpoint(\"/users/:id\")\ndef get_user(id, **params):\n return {'message':\n \"You requested user %s with params %s\" % (id, params)}\n By default, HTTP endpoints serve their return data as JSON.\n\n\n\n You can also integrate HTTP endpoints into Anvil’s Users Service authentication system. This requires users to specify a valid username and password via HTTP Basic authentication:\n\n @anvil.server.http_endpoint(\"/authenticated_endpoint\",\n authenticate_users=True)\ndef authenticated_endpoint():\n this_user = anvil.users.get_user()\n return {'username': this_user['email']}\n There’s lots more you can do with HTTP endpoints. To learn more, read our documentation on HTTP APIs.\n\n", "tags": ["kb"], "path": "/kb/rest-apis.html" }, { "title": "Examples", "snippet": null, "content": "# Example App:\n\n#### Secure Download Portal\n\n\n This example is a real, commercial application, built with Anvil. It was 7 times faster to build it with Anvil than with traditional tools, and this guide will show you how it is done.\n\n This walk-through will discuss how the app works, and show you architectural patterns that will help you build better Anvil apps. 
We recommend you open the source code to each application in a separate tab, and refer to it as you read through this guide.\n\n Sponsoring a conference has many challenges, and one of them is making sure you don't run out of T-shirts!\n\n In his popular lightning talk at PyCon 2019, Meredydd described how we use SciPy to model the distributions, and minimise our chances of running out:\n\n(Scroll down for a transcript)\n Hi! My name is Meredydd, and I run a startup called Anvil. We make tools for building full-stack web apps with nothing but Python, and we are sponsoring PyCon again this year. It's great to be back!\n\n Like any good sponsor, we give out T-shirts — specifically, to anyone who builds an app with Anvil and shows it to us at our stand.\n\n There are two problems with this. For one, Cleveland is a long way from home, and all these T-shirts have to come with me in a very heavy suitcase.\n\n\t Problem number two: Python developers, it turns out, come in many different shapes and sizes. Pictured here are two Python programmers: a shirt that fits the one on the left is going to look pretty undignified on the one on the right.\n\n So, the question is, “how many shirts should we be bringing of each size?”\n\t We've been here before, so we could just bring twice as many of each size as we gave out last year.\n\n Last year, we gave out two women's-cut extra-small shirts, so perhaps we should bring four this year. That seems plausible.\n\t But last year, we gave out 27 men's large shirts. If we brought 54 of them this year, that would definitely be overkill.\n\n If you think about it, that 54th men's large shirt is much, much less likely to get used than that 4th women's extra-small.\n\n\t It's the Law of Large Numbers: If you've got a larger sample size, it will average out more reliably.\n\n We can model this with a binomial distribution. 
Imagine we're rolling 3,500 dice — one for each person at PyCon — and then counting up how many rolled \"Men's Large\".\n\t from scipy.stats import binom\n@anvil.server.callable\ndef get_dist(n_attendees, prob):\n return [binom.pmf(k, n_attendees, prob)\n for k in range(n_attendees)]\n Thankfully, SciPy has a function for calculating this distribution, and so I'm going to use it to write an interactive tool for exploring this distribution.\n\n I'm going to write a function that gets the probability distribution for a given number of attendees, and a given probability of each attendee claiming a particular size of shirt.\n \n At this point, the live-coding begins. Open the source code to follow along:\n Now we have our distribution, we can make an interactive tool to explore it.\n\n Our user interface will have a text box where we can enter how many of this size of shirt we used last time; and then underneath it will be a plot so we can explore the distribution.\n\t def text_box_1_pressed_enter(self, **event_args):\n \"\"\"This method is called when the user presses\n Enter in this text box.\"\"\"\n\n dist = anvil.server.call('get_dist', 3200,\n int(self.text_box_1.text)/3200.0)\n\n self.plot_1.data = go.Bar(y=dist)\n\n\n Once we've got that distribution, we can plot it as a bar chart.\n\t When we plot the women's extra-smalls, the distribution is actually quite wide. Of course, we're most likely to need two shirts, same as last time. But we could easily need twice that number, or even more.\n\n Whereas if we check out the men's large shirts, the distribution is a lot tighter. Still, again, most likely to need 27, same as last time, but we're vanishingly unlikely to need twice that number.\n\t return binom.ppf(0.95, n_attendees, prob)\n Now, we've constructed a statistical model that can actually answer our question. 
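The two SciPy calls in the talk — binom.pmf for the distribution and binom.ppf for the 95% point — can be sketched from first principles in plain Python. This is an illustrative re-implementation, not the app's actual code; the function names here are my own.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_ppf(q, n, p):
    """Smallest k whose cumulative probability reaches q (the q-quantile)."""
    total = 0.0
    for k in range(n + 1):
        total += binom_pmf(k, n, p)
        if total >= q:
            return k
    return n

# How many men's large shirts to bring: 27 handed out last year,
# roughly 3200 attendees, and we want 95% confidence of not running out.
shirts = binom_ppf(0.95, 3200, 27 / 3200)
```

For the men's large example this lands in the mid-thirties — close to the talk's answer of 36, and a long way short of the naive "double it" guess of 54.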
We want to know how many shirts to bring, to avoid running out.\n\n What we want to do is to find a number of shirts such that there is a 95% probability of needing that number or less. This is the 95% point of the probability distribution, and SciPy provides a function for calculating this: `binom.ppf()`.\n\n So we calculate the 95% point for the probability distribution of every size of shirt, and that's how many shirts we bring.\n\t We wire this up in the UI, to display the number of shirts.\n\n We see that for the women's extra-smalls, we need 5 shirts -- more than double the number we gave out last year -- to be 95% sure of not running out.\n\n Whereas for the men's large shirts, we need 36 -- that's only 33% more than last time.\n\n\n\t You can get the source code of the app I've just built here:\n And if everyone in this hall comes to our stand, builds an app, claims a T-shirt, and completely cleans us out?\n\n Well, at least that'll show the statisticians. Thanks very much!\n. (If you haven’t seen that tutorial, you might want to check it out.)\n\n Our To-Do list is a classic data-storage app. An API for this kind of app needs to do four things:\n\n Read records - in this case, we want to make a GET request to a URL and get all the tasks in our to-do list. Create new records - in this case, we want to POST to a URL to add a new task. Update records - in this case, we want to mark tasks as done, or change their title. By convention, each task has its own URL, which we can update by making a PUT request to it. Delete records - in this case, by making a DELETE request": ["nositemap"], "path": "/blog/http-apis-with-python" }, { "title": "The Easiest Way to Build HTTP APIs in Python", "snippet": "As we build up this REST API, step by step, we'll learn common patterns used by many HTTP APIs: Returning records from your database, authentication, accepting parameters, URL parameters, updates, and deletes. 
Let's get started!\n", "content": "\n @anvil.server.http_endpoint('/hello-world')\ndef hello_world(**q):\n return {'the answer is': 42}\n\n Our To-Do list is a classic data-storage app. An API for this kind of app needs to do four things:\n\n Read records - in this case, we want to make a GET request to a URL and get all the tasks in our to-do list. Create new records - in this case, we want to POST to a URL to add a new task. Update records - in this case, we want to mark tasks as done, or change their title. By convention, each task has its own URL, which we can update by making a PUT request to it. Delete records - in this case, by making a DELETE request.\n", "tags": ["blog","tutorial"], "path": "/blog/http-apis-in-python" }, { "title": "Python in the browser", "snippet": "Python in the browser. That's right!\n", "content": "\n\n \n\n That’s no longer true. There are quite a few ways to run Python in your web browser. This is a survey of what’s available.\n\n I’m looking at six systems that all take a different approach to the problem. Here’s a diagram that sums up their differences.\n\n \n\n The x-axis answers the question: when does the Python get compiled? At one extreme,\nyou run a commandline script to compile the Python yourself. At the other extreme, the compilation gets done in the\nuser’s browser as they write Python code.\n\n The y-axis answers the question: what does the Python get compiled to? Three systems make a direct conversion between the \nPython you write and some equivalent JavaScript. The other three actually run a live Python interpreter in your\nbrowser, each in a slightly different way.\n\n \n\n Transcrypt gives you a commandline tool you can run to compile a Python script into a JavaScript file.\n\n You interact with the page structure (the DOM) using a toolbox of specialised Python objects and functions. For example, if you import document, you can\nfind any object on the page by using document like a dictionary. To get the element whose ID is name-box, you would use document[\"name-box\"].\nAny readers familiar with jQuery will be feeling very at home. Here’s a basic example. 
I wrote a Hello, World page with just an input box and a button:\n\n <input id=\"name-box\" placeholder=\"Enter your name\">\n<button id=\"greet-button\">Say Hello</button>\n To make it do something, I wrote some Python. When you click the button, an event handler fires\nthat displays an alert with a greeting:\n\n def greet():\n alert(\"Hello \" + document.getElementById(\"name-box\").value + \"!\")\n\ndocument.getElementById(\"greet-button\").addEventListener('click', greet)\n I wrote this in a file called hello.py and compiled it using transcrypt hello.py.\nThe compiler spat out a JavaScript version of my file, called hello.js. Transcrypt makes the conversion to JavaScript at the earliest possible time - before the browser is even running.\nNext we’ll look at Brython, which makes the conversion on page load.\n\n \n\n Brython lets you write Python in script tags in exactly the same way you write JavaScript. Just as with Transcrypt,\nit has a document object for interacting with the DOM. The same widget I wrote above can be written in a script tag like this:\n\n <script type=\"text/python\">\nfrom browser import document, alert\n\ndef greet(event):\n alert(\"Hello \" + document[\"name-box\"].value + \"!\")\n\ndocument[\"greet-button\"].bind(\"click\", greet)\n</script>\n Pretty cool, huh? A script tag whose type is text/python! There’s a good explanation of how it works on the Brython GitHub page.\nIn short, you run a function when your page loads:\n\n <body onload=\"brython()\">\n that transpiles anything it finds in a Python script tag:\n\n <script type=\"text/python\"></script>\n which results in some machine-generated JavaScript that it runs using JS’s eval() function. \n\n Skulpt sits at the far end of our diagram - it compiles Python to JavaScript at runtime. This means the Python doesn’t have\nto be written until after the page has loaded.\n\n The Skulpt website has a Python REPL that runs in your browser. 
It’s not making requests \nback to a Python interpreter on a server somewhere, it’s actually running on your machine.\n\n \n\n Skulpt does not have a built-in way to interact with the DOM. This can be an advantage, because you can build your own DOM manipulation system\ndepending on what you’re trying to achieve. More on this later.\n\n Skulpt was originally created to produce educational tools that need a live Python session on a web page \n(example: Trinket.io). While Transcrypt and Brython are designed as direct replacements for \nJavaScript, Skulpt is more suited to building Python programming environments on the web (such as the full-stack \napp platform, Anvil).\n\n We’ve reached the end of the x-axis in our diagram. Next we head in the vertical direction: our\nfinal three technologies don’t compile Python to JavaScript, they actually implement a Python runtime in the web browser.\n\n \n\n \n\n PyPy.js is a JavaScript implementation of a Python interpreter. The developers took a C-to-JavaScript \ncompiler called emscripten and ran it on the source code of PyPy.\nThe result is PyPy, but running in your browser.\n\n Advantages: It’s a very faithful implementation of Python, and code gets executed quickly. Disadvantages:\nA web page that embeds PyPy.js contains an entire Python interpreter, so it’s pretty big as web pages go (think megabytes).\n\n You import the interpreter using <script> tags, and you get an object called pypyjs in the global JS scope. There are three main functions for interacting with the interpreter. To execute some Python, run pypyjs.exec(<python code>). \nTo pass values between JavaScript and Python, use pypyjs.set(variable, value) and pypyjs.get(variable). Here’s a script that uses PyPy.js to calculate the first ten square numbers:\n\n <script type=\"text/javascript\">\n pypyjs.exec(\n // Run some Python\n 'y = [x**2. 
for x in range(10)]'\n ).then(function() {\n // Transfer the value of y from Python to JavaScript\n return pypyjs.get('y')\n }).then(function(result) {\n // Display an alert box with the value of y in it\n alert(result)\n });\n</script>\n PyPy.js has a few features that make it feel like a native Python environment - there’s even an in-memory filesystem so you can read and write files.\nThere’s also a document object that gives you access to the DOM from Python. The project has a great readme if you’re interested in learning more.\n\n \n\n Batavia is a bit like PyPy.js, but it runs bytecode rather than Python. Here’s a Hello, World script written in Batavia:\n\n <script id=\"batavia-helloworld\" type=\"application/python-bytecode\">\n 7gwNCkIUE1cWAAAA4wAAAAAAAAAAAAAAAAIAAABAAAAAcw4AAABlAABkAACDAQABZAEAUykCegtI\n ZWxsbyBXb3JsZE4pAdoFcHJpbnSpAHICAAAAcgIAAAD6PC92YXIvZm9sZGVycy85cC9uenY0MGxf\n OTc0ZGRocDFoZnJjY2JwdzgwMDAwZ24vVC90bXB4amMzZXJyddoIPG1vZHVsZT4BAAAAcwAAAAA=\n</script>\n Bytecode is the ‘assembly language’ of the Python virtual machine - if you’ve ever looked at the .pyc files Python generates, \nthat’s what they contain¹. This example doesn’t look like assembly language because it’s base64-encoded. Batavia is potentially faster than PyPy.js, since it doesn’t have to compile your Python to bytecode. It also makes the\ndownload smaller - around 400kB. The disadvantage is that your code needs to be written and compiled in a native \n(non-browser) environment, as was the case with Transcrypt.\n\n Again, Batavia lets you manipulate the DOM using a Python module it provides (in this case it’s called dom). The Batavia project is quite promising because it fills an otherwise unfilled niche - ahead-of-time compiled Python\nin the browser that runs in a full Python VM. Unfortunately, the GitHub repo’s commit rate seems to have\nslowed in the past year or so. 
If you’re interested in helping out, here’s their developer guide.\n\n \n\n Mozilla’s Pyodide was announced in April 2019. \nIt solves a difficult problem: interactive data visualisation in Python, in the browser.\n\n Python has become a favourite language for data science thanks to libraries such as NumPy, SciPy, Matplotlib and Pandas. \nWe already have Jupyter Notebooks, which are a great way to present a data pipeline online, but they must be hosted on a server somewhere.\n\n If you can put the data processing on the user’s machine, they avoid the round-trip to your server so real-time \nvisualisation is more powerful. And you can scale to so many more users if their own machines are providing the compute.\n\n It’s easier said than done. Fortunately, the Mozilla team came across a version of the reference Python implementation\n(CPython) that was compiled into WebAssembly.\nWebAssembly is a low-level complement to JavaScript that performs closer to native speeds, which opens the browser up for performance-critical applications like this.\n\n Mozilla took charge of the WebAssembly CPython project and recompiled NumPy, SciPy, Matplotlib and Pandas into WebAssembly too. \nThe result is a lot like Jupyter Notebooks in the browser - here’s an introductory notebook.\n\n \n\n It’s an even bigger download than PyPy.js (that example is around 50MB), but as Mozilla point out, a good browser\nwill cache that for you. And for a data processing notebook, waiting a few seconds for the page to load is not a problem.\n\n You can write HTML, Markdown and JavaScript in Pyodide Notebooks too. And yes, there’s a document object\nto access the DOM. It’s a really promising project! I’ve given you six different ways to write Python in the browser, and you might be able to find more. Which one to\nchoose? 
This summary table may help you decide.\n\n\n \n\n There’s a more general point here too: the fact that there is a choice.\n\n As a web developer, it often feels like you have to write JavaScript, you\nhave to build an HTTP API, you have to write SQL and HTML and CSS. The six systems we’ve looked at make JavaScript seem more like\na language that gets compiled to, and you choose what to compile to it2.\n\n Why not treat the whole web stack this way? The future of web development is to move beyond the technologies that we’ve \nalways ‘had’ to use. The future is to build abstractions on top of those technologies, to reduce the unnecessary complexity \nand optimise developer efficiency. That’s why Python itself is so popular - it’s a language that puts developer efficiency first.\n\n There should be one way to represent data, from the database all the way to the UI. Since we’re Pythonistas, we’d \nlike everything to be a Python object, not an SQL SELECT statement followed by a Python object followed by JSON \nfollowed by a JavaScript object followed by a DOM element.\n\n That’s what Anvil does - it’s a full-stack Python environment that abstracts away the complexity of the web.\nHere’s a 7-minute video that covers how it works.\n\n \n\n Remember I said that it can be an advantage that Skulpt doesn’t have a built-in way to interact with the DOM? This\nis why. If you want to go beyond ‘Python in the browser’ and build a fully-integrated Python environment, your abstraction\nof the User Interface needs to fit in with your overall abstraction of the web system.\n\n So Python in the browser is just the start of something bigger. I like to live dangerously, so I’m going to make a prediction. \nIn 5 years’ time, more than 50% of web apps will be \nbuilt with tools that sit one abstraction level higher than JavaScript frameworks such as React and Angular. 
\nIt has already happened for static sites: most people who want a static site will use WordPress or Wix rather \nthan firing up a text editor and writing HTML. As systems mature, they become unified and the amount of \nincidental complexity gradually minimises.\n\n If you’re reading this in 2024, why not get in touch and tell me whether I was right?\n\n Yasoob dug into some bytecode in a recent post on this blog. ↩\n And WebAssembly is actually designed to be used this way. ↩\n How does a search engine work? Let’s find out – by building one!\n\n Search engines have become the gateway to the modern web. How often do you know exactly which page you want, but you search for it \nanyway, rather than typing the URL into your web browser?\n\n \n\n Like many great machines, the simple search engine interface - a single input box - hides a world of technical magic \ntricks. When you think about it, there are a few major challenges to overcome. How do you collect all the valid URLs \nin existence? How do you guess what the user wants and return only the relevant pages, in a sensible order? And how do \nyou do that for 130 Trillion\npages faster than a human reaction time?\n\n I’ll be some way closer to understanding these problems when I’ve built a search engine for myself. \nI’ll be using nothing but Python (even for the UI) and my code will be simple enough to include in this blog post.\n\n You can copy the final version, try it out, and build on it yourself:\n\n There will be three parts to this.\n\n First, I’m going to build a basic search engine that downloads pages and matches your\nsearch query against their contents. (That’s this post)\n Then I’m going to implement Google’s PageRank algorithm to improve the results. (See Part 2)\n Finally, I’ll play with one of computer science’s powertools - indexing - to speed up the search and make the ranking\neven better. 
(See Part 3)\n Let’s start building a machine that can download the entire web.\n\n I’m going to build a web crawler that iteratively works its way through the web like this:\n\n I need a known URL to start with. I’ll allow webmasters and other good citizens to submit URLs they know about. I’ll store\nthem in a database (I’m using Anvil’s Data Tables) and if I know the URL already, I won’t store it twice.\n\n @anvil.server.callable\ndef submit_url(url):\n url = url.rstrip('/') # URLs with and without trailing slashes are equivalent\n if not app_tables.urls.get(url=url):\n app_tables.urls.add_row(url=url)\n I’ve also made it possible to submit sitemaps, which contain lists of many URLs (see our Background Tasks tutorial \nfor more detail.) I’m using BeautifulSoup to parse the XML.\n\n from bs4 import BeautifulSoup\n\n@anvil.server.callable\ndef submit_sitemap(sitemap_url):\n response = anvil.http.request(sitemap_url)\n \n soup = BeautifulSoup(response.get_bytes())\n for loc in soup.find_all('loc'):\n submit_url(loc.string)\n If I submit the Anvil sitemap, my table is populated with URLs:\n\n \n\n I’m in good company by allowing people to submit URLs and sitemaps for crawling - Google Search Console does this.\nIt’s one way of avoiding my crawler getting stuck in a local part of the web that doesn’t link out to anywhere else.\n\n Stealing shamelessly from Google Search Console, I’ve created a webmaster’s console with ‘submit’ buttons that \ncall my submit_url and submit_sitemap functions: \n\n def button_sitemap_submit_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n self.label_sitemap_requested.visible = False\n anvil.server.call('submit_sitemap', self.text_box_sitemap.text)\n self.label_sitemap_requested.visible = True\n\n def button_url_submit_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n self.label_url_requested.visible = False\n anvil.server.call('submit_url', 
self.text_box_url.text)\n self.label_url_requested.visible = True\n Now that I know some URLs, I can download the pages they point to. I’ll create a Background Task that churns through my URL list \nmaking requests:\n\n @anvil.server.background_task\ndef crawl():\n for url in app_tables.urls.search():\n # Get the page\n try:\n response = anvil.http.request(url)\n html = response.get_bytes().decode('utf-8')\n except:\n # If the fetch failed, just try the other URLs\n continue\n\n row = app_tables.pages.get(url=url) or app_tables.pages.add_row(url=url)\n row['html'] = html\n Because it’s a Background Task, I can fire off a crawler and download all the pages I know about in the background\nwithout blocking the user’s interaction with my web app.\n\n \n\n That’s all very well, but it doesn’t really crawl yet. The clever thing about a web crawler is how it follows links between pages.\nThe web is a directed graph – in other words, it consists of pages with one-way links between them. That’s why it’s such a wonderful information store - if you’re interested in the subject of one page, you’re likely to be interested in the subjects of pages it links to. If you’ve ever been up ’til dawn in the grip of a Wikipedia safari, you’ll know what I’m talking about.\n\n So I need to find the URLs in the pages I download, and add them to my list. 
\nBeautifulSoup, the brilliant HTML/XML parser, helps me out again here.\n\n I also record which URLs I found on each page - this will come in handy when I implement PageRank.\n\n from bs4 import BeautifulSoup\n\n soup = BeautifulSoup(html)\n\n # Parse out the URLs\n for a in soup.find_all('a', href=True):\n submit_url(a['href'])\n \n # Record the URLs for this page\n page['forward_links'] += [a['href']]\n While I’m at it, I’ll grab the page title to make my search results a bit more human-readable:\n\n # Parse out the title from the page\n title = str(soup.find('title').string) or 'No Title'\n The crawler has become rather like the classic donkey following a carrot: the further it gets down the URL list,\nthe more URLs it finds, so the more work it has to do. I visualised this by plotting the length of the URL list\nalongside the number of URLs processed.\n\n \n\n The list grows initially, but the crawler eventually finds all the URLs\nand the lines converge. It converges because I’ve restricted it to a single site (I don’t\nwant to denial-of-service anybody’s site by accident.) If I had it crawling the open web, I imagine the lines\nwould diverge forever - pages are probably being added faster than my crawler can crawl.\n\n By the time it has finished, there’s a nice crop of page data waiting for me in the pages table. \n\n Time to implement search. I’ve thrown together the classic “input box and button” UI using the drag-and-drop editor.\nThere’s also a Data Grid for listing the results, which gives me pagination for free.\nEach result will contain the page title and a link.\n\n \n\n The most basic search algorithm would just break the query into words and return pages that contain any of those words.\nThat’s no good at all, and I can do better straight away.\n\n I’ll remove words that are too common. Let’s say a user enters ‘how to build a web app’. If a page happens to contain exactly the text ‘how to build a web app’,\nit will be returned. 
But they would also get pages containing the text ‘how to suckle a lamb’.\n\n \n\n So I’ll remove words like ‘how’ and ‘to’. In the lingo, these are called stop words.\n\n I’ll include words that are closely related to those in the query. The search for ‘how to build a web app’ \nshould probably return pages with ‘application builder’, even though neither of those words are exactly in the query.\n\n \n\n In the lingo, this is called stemming.\n\n Both of these requirements are met by Anvil’s full_text_match operator, so I can get a viable search running right away: # On the server:\n@anvil.server.callable\ndef basic_search(query):\n return app_tables.pages.search(html=q.full_text_match(query))\n # On the client:\ndef button_search_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n self.repeating_panel_1.items = anvil.server.call('basic_search', self.text_box_query.text)\n Later on we’ll talk about indexing and tokenization, which gets to the nuts and bolts of how to optimise a search.\nBut for now, I have a working search engine. Let’s try some queries.\n\n For each stage in its development, I’m going to run three queries to see how the results improve as I improve my ranking\nsystem. Each query is chosen to reflect a different type of search problem.\n\n I’ll only look at the first page of ten results. Nobody ever looks past the first page!\n\n (If you’re wondering why all the results are from the same site, bear in mind that I’ve restricted my crawler to a single site\nto avoid getting my IP address quite legitimately blocked by anti-DoS software, and to keep my test dataset to a manageable size.)\n\n ‘Plots’ is a fairly generic word that you would expect to show up all over the place in technical writing. 
The challenge\nis to return the pages that are specifically about plotting, rather than those that just use the word in passing.\n\n When I search for ‘plots’, I get this:\n\n \n\n The first result is Using Matplotlib with Anvil, which is definitely relevant. Then there’s\nthe reference docs, which has a section on the Plot component. And result number nine is\nthe original announcement dating back to when we made Plotly available in the client-side Python code.\n\n But there’s also a lot of fairly generic pages here. They probably mention the word ‘plot’ once or twice, but they’re\nnot really what I’m looking for when I search for ‘plots’.\n\n ‘Uplink’ differs from ‘plots’ because it’s unlikely to be used by accident. It’s the name of a specific Anvil feature\nand it’s not a very common word in normal usage. If it’s on a page, that page is almost certainly\ntalking about the Anvil Uplink.\n\n If you’re not familiar with it, the Uplink allows you to anvil.server.call functions in any Python environment outside Anvil.\nSo I’d expect to get the Using Code Outside Anvil tutorial high up the results list. It shows up at position four. \n\n I also get Escape Hatches and Ejector Seats, which mentions the Uplink as one of Anvil’s\n‘Escape Hatches’. And at number 10 we have Remote Control Panel, which uses the Uplink to run a test suite on a \nremote machine.\n\n It’s good that all three of these show up, but it would be better if they were more highly ranked. The rest of\nthe results probably talk about the Uplink in some way, but the Uplink is not their primary subject.\n\n This is included as an example of a multi-word query. 
I’d expect this to be harder for the search engine to cope with,\nsince the words ‘build’ and ‘Python’ are going to be used a lot on the Anvil site, but a user typing this in is \nspecifically interested in Python dashboarding.\n\n There are two pages I would expect to see here: Building a Business Dashboard in Python and the Python Dashboard\nworkshop. Neither of these appear in the results.\n\n \n\n A few of the pages are tangentially related to dashboard building, but generally the signal appears to have been\noverwhelmed by the noise introduced by the words ‘build’ and ‘Python’.\n\n The basic search engine I’ve put together does manage to get some relevant results for one-word queries. The user\nhas to look past the first few results to find what they’re looking for, but the main pages of interest are there somewhere.\n\n It gets confused by multi-word queries. It can’t distinguish very well between the words that matter, and those that don’t.\nAnvil’s full_text_match does remove words like ‘a’ and ‘in’, but it obviously won’t guess that ‘build’ is less important\nthan ‘dashboard’ in this particular situation. I’m going to make two improvements in an attempt to address these problems. First, I’ll try to rank more interesting\npages more highly. Google has an algorithm called PageRank that assesses how important each page is, and I’ve always\nwanted to learn how it works, so now is probably a good time! I explore it and implement it in the next post.\n\n Second, I’ll take account of the number of times each word appears on the page. This will help with the ‘building a \ndashboard in Python’ query, because pages that just happen to mention ‘building’ will do so once or twice, whereas\npages about building dashboards will use those words a lot. That gives me an excuse to explore two simple\nbut powerful concepts from Computer Science - tokenization and indexing, which I implement in the final post.\n\n So here goes, I’m digging into Google’s PageRank. 
It’s surprisingly simple; you might even say it’s\n‘beautifully elegant’. Come read about it:\n\n \n\n \nOr sign up for free, and open our search engine app in the Anvil editor: I need a way to score pages based on how important they are. I need PageRank - the secret sauce that won Google the search engine wars of the late 90s.\n\n Let’s look at how PageRank works, then I’ll build a Python web app that implements it.\n\n PageRank was developed in 1996 at Stanford by Sergey Brin and Larry Page, after whom it is named. \nIt was the ‘key technical insight’ that made Google work so well, so they kept it a tightly-guarded secret.\n\n Only kidding! Page and Brin wrote a brilliant and openly-available paper about it in 1998, which contains this priceless quote:\n\n \n\n\n To test the utility of PageRank for search, we built a web search engine called Google.\n The algorithm is based on the idea that important pages will get linked to more. Some terminology: if python.org \nlinks to my page, then my page has a ‘backlink’ from python.org. \nThe number of ‘backlinks’ that a page has gives some indication of its importance.\n\n \n\n PageRank takes this one step further - backlinks from highly-ranked pages are worth more. Lots of people link to python.org, so if they link to my page, that’s a bigger endorsement than the average webpage.\n\n If my page is the only one linked to from python.org, that’s a sign of great importance, \nso it should be given a reasonably high weighting. But if it’s one of fifty pages python.org links\nto, perhaps it’s not so significant.\n\n \n\n An equation paints a thousand words, so for the mathematically-inclined reader, this version of the algorithm can be written:\n\n \n\n where R(u) is the rank of the page we’re ranking, R(v) is the ranking of its backlinks, Nv is the number of\nlinks page v contains, and c is a constant we can tweak at will. The sum is over all the backlinks of the page we’re ranking. 
\n(If equations aren’t your thing - this equation just re-states what I said above.)\n\n It turns out that PageRank models exactly what a random web surfer would do. Our random surfer is equally likely to \nclick each link on her current page, and her chance of being on this page is based on the number of pages that link to it.\n\n We can’t calculate this all in one go, because pages can link to each other in a loop. If the links between pages form a loop, we don’t know what rank to give them, since they all depend on each other’s rank.\n\n \n\n We solve this by splitting the calculation into iterations. A page in iteration i + 1 has its rank worked out using\nthe PageRanks from iteration i. So the ranks of pages in loops don’t depend on themselves, they depend on the \nprevious iteration of themselves.\n\n In terms of the equation from earlier, we plug the ranks from iteration \ni into the right-hand side of the equation, and the ranks for iteration i + 1 come out on the left-hand side.\n\n \n\n We have to make a guess about what ranks to start with. Luckily, this is a convergent algorithm, meaning wherever we start, if we do it enough times, the ranks will eventually stop changing and we \nknow we have something that satisfies our equation.\n\n So that solves the problem of self-referential pages. There’s another problem with loops - if none of the pages in the loop link to a page outside of the loop, our random surfer \ncan never escape:\n\n \n\n This is solved by adding an extra little bit of rank to every page, to balance out the rank that flows into dead-end loops. In our random surfer analogy, the surfer gets bored every so often and types a \nrandom URL into her browser. This changes our equation: there’s a bit of rank added, which we denote E(u):\n\n \n\n And that’s the PageRank equation!\n\n PageRank was revolutionary. The paper compares the results from a simple PageRank-based engine against the leading \nsearch engine at the time. 
The top result for “University” in the PageRank engine was the Stanford University homepage. \nThe competing engine returned “Optical Physics at the University of Oregon”.\n\n That sounds like exactly what I need to improve my search results. Luckily, it’s fairly simple to implement if you have a\nrelatively small number of pages. Here goes…\n\n First I need to calculate the backlinks for each page. My crawler has figured out the forward links from each page,\nso all the information is there. I just need to iterate through the pages adding ‘page X’ as a backlink \non everything ‘page X’ links to.\n\n def calculate_backlinks():\n # Reset the backlinks for each page\n for page in app_tables.pages.search():\n page['backlinks'] = []\n \n # We have forward links for each page - calculate the backlinks\n for page in app_tables.pages.search():\n if page['urls'] is None:\n continue\n \n # Add this page as a backlink to everything it links to\n for url in page['urls']:\n forward_linked = app_tables.pages.get(url=url.strip('/'))\n if forward_linked:\n forward_linked['backlinks'] += [page['url']]\n That’s the backlinks figured out. Now to set up the initial condition. The PageRank calculation gradually refines\nthe answer by a series of iterations, so I must set the initial values to start off with. For newly-discovered\npages, I’ll just guess at 0.5. Pages I’ve seen before will have a PageRank from previous runs, so those pages\ncan start with that old value.\n\n @anvil.server.background_task\ndef ranking_agent():\n calculate_backlinks()\n \n for page in app_tables.pages.search():\n # Initial condition\n if page['rank'] is None:\n page['rank'] = 0.5\n Now for the PageRank calculation itself. I expect the calculated values of the PageRanks to converge on\nthe correct solution. 
So I want to invent a metric to tell if the calculation has stopped changing much.\nI’ll calculate how much the average page has changed in rank, and if that’s small, my calculation\nhas converged.\n\n # ... still in the ranking_agent ...\n\n # Iterate over all pages, and repeat until the average rank change is small\n average_rank_change = 0.\n while average_rank_change > 1.001 or average_rank_change < 0.999:\n # Work out the next PageRank\n new_ranks = calculate_pagerank()\n\n # Step on the calculation\n average_rank_change = step_calculation(new_ranks)\n Inside the loop, I calculate the PageRank for the next iteration, then\nstep the calculation on by putting the values from step i+1 into box i, and calculating the average rank change. And that’s how to calculate PageRank! I’ve left out two important details. First, the actual PageRank calculation. This\nis just the equation from above, but written out in Python. The CONSTANT is c and the RATING_SOURCE_FACTOR is\nE(u) (I’ve assumed it’s the same value, 0.4, for each page).
CONSTANT = 0.7\nRATING_SOURCE_FACTOR = 0.4\n\ndef calculate_pagerank():\n \"\"\"Calculate the PageRank for the next iteration.\"\"\"\n new_ranks = {}\n for page in app_tables.pages.search():\n rank = RATING_SOURCE_FACTOR\n for backlink in page['backlinks']:\n backlinked_page = app_tables.pages.get(url=backlink)\n rank += backlinked_page['rank'] / len(backlinked_page['urls'])\n rank = CONSTANT * rank\n new_ranks[page['url']] = rank\n \n return new_ranks\n And for completeness, here’s exactly how I step the calculation forward:\n\n def step_calculation(new_ranks):\n \"\"\"Put 'next rank' into the 'current rank' box and work out average change.\"\"\"\n num_pages = 0.\n sum_change = 0.\n for page in app_tables.pages.search():\n sum_change += new_ranks[page['url']] / page['rank']\n page['rank'] = new_ranks[page['url']]\n num_pages += 1\n return sum_change / num_pages\n To make the search engine order results by PageRank, I just need to use tables.order_by in my Data Tables\nquery: @anvil.server.callable\ndef ranked_search(query):\n pages = app_tables.pages.search(\n tables.order_by(\"rank\", ascending=False),\n html=q.full_text_match(query)\n )\n return [{\"url\": p['url'], \"title\": p['title']} for p in pages]\n If you want to see the code in its natural habitat, you can clone the final app,\nwhich contains everything I’ve talked about in this series of posts.\n\n Let’s spin it up and get some ranks!\n\n I built a UI that tracks the progress of the calculation. It polls the Background Task using a Timer component\nand plots the convergence metric with each iteration. This means you can watch it run and see the calculation\ngradually converge in real time.\n\n \n\n I put some test data together to check that my results make sense intuitively. Consider the four pages (and links) shown in the diagram below:\n\n \n\n The PageRank for each page is also shown in the diagram, as calculated by my ranking\nengine.
The page with 3 backlinks has a PageRank of 1.5, the page with 2 backlinks has a PageRank of 0.97, and the pages\nwith 1 backlink each from the same page have rankings of 0.63. This sounds about right. I can tune the spread of these\nnumbers by changing CONSTANT in my code. In Part 1, I tested the basic version of the search engine by running three queries and making judgements about the quality of the\nresults.\n\n Let’s run the same queries again to see how PageRank has improved things.\n\n ‘Plots’ is my example of a fairly generic word that appears a lot in technical writing. There are a few pages that are \nspecifically about plots in Anvil, and I want to see whether they come up.\n\n Overall, the PageRank search seems to do better. Five of the results are specifically about\nplotting:\n The basic search only manages to get three of these into the top ten.\n\n The basic search also included some pages from the middle of the blog. These have been ranked lower by PageRank, \nso the more-relevant pages have had a fighting chance.\n\n \n\n The PageRank search does worse in one respect - the top spot has been taken by the reference docs, replacing\nthe Using Matplotlib with Anvil guide. I’m ranking pages purely based on importance and not on\nrelevance. The reference documentation is clearly ‘more important’ overall than the Matplotlib guide - but\nnot more relevant to a search for ‘plots’.\n\n I’m using ‘Uplink’ as an example of a word that’s not likely to be used accidentally - it’s not commonly\nused in normal speech, so any uses of it are probably about the Anvil Uplink. If you’re not familiar with it, the Uplink allows you to anvil.server.call functions in any Python environment outside Anvil. There are three relevant pages in the basic search results, and they appear in the PageRank results too. They are\nUsing Code Outside Anvil, Escape Hatches and Ejector Seats and Remote Control Panel. 
Sadly,\nthese pages have lost position to more ‘major’ pages, the tutorials and reference docs.\n\n \n\n The basic search just presents pages in the order it crawled them, so the ranking is ‘more random’ than \nthe PageRank’s importance-based ordering. It looks like the PageRank search is doing worse than chance in this case,\nbecause it’s placing pages that have more backlinks at the top regardless of relevance.\n\n We’ve learnt something - even something as powerful as PageRank can be counterproductive in some circumstances.\n\n A query with multiple words is harder because it’s difficult to work out which words are the subject of the query,\nand which words are incidental. I’m using ‘building a dashboard in Python’ to test this. \nThis tripped up the basic search because of the noise introduced by the words ‘building’\nand ‘Python’, which are very common on the Anvil site.\n\n The PageRank search did slightly better. The basic search missed\nBuilding a Business Dashboard in Python and the Python Dashboard workshop, but they appear in the PageRank \nsearch. Again, some minor pages such as page 6 of the blog are now ranked lower, making room for these better-matching pages.\n\n \n\n That said, PageRank has put Interactive Plots for Your Apps lower and Accepting Payment with Stripe higher.\nAs with the ‘Uplink’ query, the ranking system does not take page content into account, so sometimes the ‘more random’\nranking of the basic search happens to do better.\n\n The PageRank search de-ranked pages 2 to 6 of the blog, because they don’t have many backlinks. \nThis made room for other matching pages. Sometimes those pages were good matches, sometimes they were not.\n\n It favoured the main pages such as ‘Tutorials’ and ‘Documentation’\nabove pages that deal with specific subjects. 
Older pages were also favoured because pages gradually acquire \nbacklinks over time.\n\n PageRank is much more powerful when used to choose between sites, rather than pages on the same site. A single site is a\ncurated system where there are only a small number of relevant results. The web is an unmanaged jungle where there\nwill be many sites on the same topic, and PageRank is a great way to decide which sites are better than others.\n\n So it’s made things better, but I still need to do more.\n\n I need to relate the search rankings to the contents of the page. I’ll take advantage of two trusty Computer Science\nworkhorses: tokenization and indexing.\n\n Once I’ve done that, I think my search engine will be good enough to show to the public!\n\n Read about it here:\n\n \n\n Or sign up for free, and open our search engine app in the Anvil editor:\n\n PageRank weeded out minor pages and made room for more important matches. But it doesn’t take page content into\naccount, so the ranking was still a bit hit-and-miss.\n\n I’m currently using Anvil’s full_text_match to perform the search for me. I’m treating it as a black box, which\nhas saved me effort. But it’s time to take matters into my own hands. I’m going to explore more about how the actual search\nprocess works. I want to rank pages higher if they have more matching words in them.\n\n My pages are currently just stored as big unrefined strings of characters. It’s time to digest each page and get a \nprecise definition of its meaning. I need to tokenize the page - split it into the elemental parts of language.\n\n Python is great for string processing, so this is really easily accomplished. First I use BeautifulSoup\nto get rid of the <script> and <style> blocks that don’t tell me much about the page’s meaning. 
I also discard the HTML\ntags and punctuation: from bs4 import BeautifulSoup\nfrom string import punctuation\n\ndef tokenize(html):\n soup = BeautifulSoup(html)\n\n # Remove script and style blocks\n for script in soup.find_all('script'):\n script.decompose()\n for style in soup.find_all('style'):\n style.decompose()\n\n # Remove HTML\n text = soup.get_text()\n\n # Remove punctuation\n text = text.translate(str.maketrans('', '', punctuation))\n\n # Split the string into a list of words\n tokens = text.split()\n The last step splits the string into a list of separate words. These are one kind of token, but I want more useful tokens,\nso I’m going to go a bit further.\n\n I’m going to stem my tokens, and I’m going to remove stop words. I mentioned these techniques when I discussed\n full_text_search in my first search engine post. To recap: stemming cuts each word down to its root (so ‘plotting’, ‘plotted’ and ‘plots’ all become the same token), and stop words are very common words (‘the’, ‘and’, ‘is’…) that carry almost no meaning, so we discard them. Postgres’s tsquery does both of these under the hood. I’m going to do stemming the way you do anything in Python - import a library that does it all for you!\nIn this case I’m using nltk, the Natural Language Toolkit that has a selection of stemming algorithms.
from collections import Counter\nfrom nltk import PorterStemmer\nstemmer = PorterStemmer()\n\nSTOP_WORDS = [\n 'i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself', 'yourselves', \n 'he', 'him', 'his', 'himself', 'she', 'her', 'hers', 'herself', 'it', 'its', 'itself', 'they', 'them', 'their', \n 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', 'these', 'those', 'am', 'is', 'are', \n 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', \n 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', \n 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', \n 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', \n 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', \n 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', 'should', 'now'\n]\n\ndef tokenize(html):\n # The code I had before, which got me a list of words, then...\n \n # Stem\n tokens = [stemmer.stem(t) for t in tokens]\n\n # Remove stop words\n tokens = [t for t in tokens if t not in STOP_WORDS] \n \n # Count occurrences of each token\n return dict(Counter(tokens))\n The final bit of processing in the return statement counts the occurrence of each token in the list.\nSo if my list was originally: [\n 'application',\n 'build',\n 'python',\n 'web',\n 'application',\n 'web'\n ]\n I end up with:\n\n {'application': 2, 'build': 1, 'web': 2, 'python': 1}\n This is just what I need to improve my search results - a count of each meaningful word in the page. Now I can order\nthe pages based on how many times they mention each query word.\n\n It’s more than that though – it’s also going to let me make my search a lot more scalable.
If I ever hope to handle the \nentire web, I can’t rely on scanning every page for each query. Loading 130 Trillion\nHTML documents into memory every time somebody searches is a little sub-optimal.\n\n Luckily, I already have everything I need to build a search index.\n\n Think of an index just like an index in the back of a book.\nIf you’re looking for a particular word in a book, you don’t read the entire book hoping to spot it. You look in the back\nof the book at the index - you find the word, and it tells you which pages it’s on. That’s exactly what I’m going to do with\nmy web pages.\n\n I’ll create a Data Table to list each token I know about alongside the pages it appears on:\n\n \n\n To populate this table, I need to run my tokenize function on all the pages I have stored. @anvil.server.background_task\ndef index_pages():\n \"\"\"Build an index to find pages by looking up words.\"\"\"\n index = {}\n for n, page in enumerate(app_tables.pages.search()):\n # Tokenize the page\n page['token_counts'] = tokenize(page['html'])\n \n # Then organize the tokens into the index\n index = collate_tokens(page, index)\n \n # Now persist the index\n for token, pages in index.items():\n app_tables.index.add_row(token=token, pages=pages)\n That second for loop writes the index to the database. It’s much faster to build the index in-memory and\npersist it all at once than to build it directly into the database. Here’s how I’m building the index. For each token in the page, I check if there’s an entry for it in the index.\nIf there is, I add this page to it. 
If not, I create a new row in the index for this token.\n\n def collate_tokens(page, index):\n # For each token on the page, add a count to the index\n for token, count in page['token_counts'].items():\n # Add this count to the index\n entry = [{'url': page['url'], 'count': count}]\n if token in index:\n index[token] += entry\n else:\n index[token] = entry\n\n return index\n I’ve created a UI to launch the indexing process. There’s a button to trigger the Background Task, and I’m keeping track of its\nprogress by plotting the size of the index vs. the number of pages indexed.\n\n \n\n The graph is pretty interesting. You would probably expect it to smoothly approach a particular value, “the number of \ncommonly-used English words”, but actually there are a couple of sudden jumps at the beginning. This must be where it \nprocesses large pages with many words, so it discovers a large proportion of the English language at once.\n\n Here’s my index in my Data Table. Now I can look up any word stem and instantly get a list of pages it’s on!\n\n \n\n This has been remarkably simple to build, so it’s easy to miss how powerful this is. If I wanted to search all the pages on the\nweb without an index, I would have to scan 130 Trillion HTML documents, each on average a few kilobytes in size.\nLet’s say the average size is 10 kB, that’s\n\n \n\n or 1.3 Exabytes of data. You’d need one million 1 Terabyte hard disks to store that much data, and you couldn’t possibly scan it in time to answer a query.\n\n The upper limit on my index size is much, much smaller. There are about 170,000 words in English, and about 6,500 languages\nin the world. That gives us an estimate of the size of the index:\n\n \n\n And each entry is just a word, average length around 10 bytes. So about 11 Gigabytes of data. And of course, we can store that in an easily searchable order – for example, alphabetically, so we can find records we want quickly.
(Anvil’s Data Tables take care of that for us.)\n\n Somewhere in the multiverse, there’s a universe where the laws of mathematics don’t permit indexing. In that universe,\nthe web never took off.\n\n I’m just moments away from my production-quality search engine. I just need to concoct an algorithm that combines\nthe token counts I’ve created with the PageRank data from the previous post. This will rank pages based on both relevance and importance.\n\n Just as I split my pages into tokens, I can split the search query into tokens too:\n\n query_tokens = tokenize(query)\n Then I can look up each query token in the index:\n\n index_entries = app_tables.index.search(token=q.any_of(*query_tokens))\n (If you’d like to understand exactly how I put this line together, check out Querying Data Tables.)\n\n So now I have every index entry that relates to my query. Now we have to work out which of these pages is most relevant! This is where most of Google’s secret sauce is, but we’ll start simple. To compute how relevant each page is to my query, I’ll just sum up the token counts for each page, and use that as the relevance score for the page:\n\n # Sum up the word count scores\n page_scores = defaultdict(int)\n for index_entry in index_entries:\n for index_page in index_entry['pages']:\n page = app_tables.pages.get(url=index_page['url'])\n page_scores[page] += index_page['count']\n (Remember: defaultdict(int) sets unknown keys to have a value of 0, so it’s a great way to calculate sums.) My page_scores dictionary is just what its name suggests: it maps each page to a total score. This is already a set of ranked search \nresults! But they’re only ranked by match strength. I’ll multiply each value by the PageRank as well. # Calculate total scores\n final_scores = {}\n for page, score in page_scores.items():\n final_scores[page] = score * page['rank']\n If PageRank is Google’s secret sauce, it’s important to know how much to add. 
In the real app, I’ve used a slightly more complicated formula that uses a constant to tweak the influence of PageRank \nvs. the relevance score:\n\n score * (1 + (page['rank']-1)*PAGERANK_MULTIPLIER)\n You can change the PAGERANK_MULTIPLIER to make PageRank more or less important - smaller numbers squeeze the PageRanks towards 1.\nWhat value should I use? You could tune it automatically by changing PAGERANK_MULTIPLIER and comparing \nthe search results against training data. I did a manual version of this - I tweaked it and ran queries until I was happy with the results.\nI went with 1/3. So now I have a list of pages and a final score for each. I’ll sort in descending order of score, and return only the \ndata I’m interested in - the URL and title of each page, in order.\n\n # Sort by total score\n pages = sorted(final_scores.keys(), key=lambda page: -final_scores[page])\n \n # ... and return the page info we're interested in.\n return [{\"url\": p['url'], \"title\": p['title']} for p in pages]\n And that’s my final search function! It takes in the same arguments as my other search functions - the query as a string - \nand returns the pages in the same format, so I can use it from the same UI as the other search functions. Time to compare\nresults and see if I really have improved things.\n\n In each of these posts, I’ve been running three test queries to make a (pretty unscientific) judgement \nabout the quality of my search results. Let’s see how the final algorithm stacks up.\n\n I selected ‘plots’ as a word that would be common in general technical writing. There are a few pages on the\nAnvil site that are directly related to plotting, so I’m hoping to see them near the top.\n\n And I do! 
In fact, the new search algorithm knocks it out of the park.\n\n Four of the top five results directly cover how to make plots in Anvil:\n Two other results use plots in a big way: SMS Surveys in Pure Python and Using SciPy to work out how many\nT-shirts I need for a conference.\n\n \n\n A user who wants to learn about plots in Anvil is pretty well served by this algorithm!\n\n I selected ‘Uplink’ because it has a specific meaning for Anvil, and it’s not a commonly-used word otherwise. (If you’re\nnot familiar with the Anvil Uplink: it allows you to anvil.server.call to and from Python running anywhere.) The new algorithm does brilliantly at this one too.\n\n The two most relevant pages appear at the top: Using code outside Anvil, followed by Remote Control Panel. \nThe first of these is the Uplink tutorial, so that’s probably what a user searching for ‘Uplink’ is looking for.\n\n The Anvil On-Site Installation page is also in the top five. Among other things, it describes how to connect \nyour app to a Python instance running on your private network, and the answer is ‘Use the Uplink’. That is indeed\nsomething somebody interested in the Uplink would like to know.\n\n \n\n Overall, a great job.\n\n This is the most challenging query to answer because it has multiple words, and only one of them particularly\nsignals what the user is looking for - ‘dashboard’.\n\n Item three in the results is Build a Business Dashboard with Python, so that’s a plus point. It doesn’t appear\nin the Basic Search results at all.\n\n The other page I’d expect to see, the Python Dashboard workshop, isn’t there on the first page. 
Most of the\nother pages seem to be included because they mention ‘build’ and ‘Python’ a lot, rather than because they talk about\nbuilding dashboards in Python.\n\n A couple of partially-related results from the Basic Search don’t appear in the new search:\nInteractive plots for your apps and the Remote Control Panel workshop.\n\n \n\n So mixed results for the multi-word query. Neither a massive improvement, nor any worse.\n\n Nearly perfect results for ‘plots’ and ‘Uplink’, but not much improvement for ‘build a dashboard in Python’. Calculating relevance is hard!\n\n I was hoping that pages with a few mentions of ‘build’ and ‘Python’ would get dropped because they only had low \ncounts of those words. But, this being Anvil, there are pages that mention ‘build’ and ‘Python’ a lot.\n\n I need to improve my relevance calculation, and to reduce the noise introduced by ‘build’ and ‘python’ in the multi-word query. Some ideas:\n The most promising step, though, is to ask “which words appear more than usual on this page?” instead of “which words appear\noften on this page?”. I could do that simply by dividing the word frequencies by the average frequency across all pages.\nThat means words like ‘build’ and ‘python’ that show up on lots of pages would be de-emphasised in the results, but\nthe word ‘dashboard’ will still score highly on dashboard-related pages.\n\n The fun thing about playing with search engines is how many things you can come up with to improve the results. I’ve\nconstructed a simple heuristic based on word counts and PageRank, but you can roll things together in so many different\nways. You could start analysing media as well as the pure HTML; you could take account of page structure; or you could\neven track browsing behaviour (anonymised and with permission, of course!).\n\n Why not try some things out for yourself? Feel free to clone the final app and tinker with it to see if you can do \nbetter than me. 
Then get in touch via the Forum to show me and the rest of the Anvil community what you’ve done!\n\n I’m quite pleased with what I’ve achieved. In a few hundred lines of code, I’ve built a search engine that does\na really great job of serving single-word queries from the Anvil site. There’s still room for improvement with\nmulti-word queries, but the results are good enough that the user can find the best match somewhere on the first page of\nresults.\n\n I’m ready to make it public.\n\n As with any Anvil app, it’s auto-hosted at a private URL. To give it a public URL, I just enter my desired subdomain of\n anvil.app in this box: \n\n And now my search engine is hosted at!\n\n (If I wanted to use my own domain, rather than anvil.app, of course I could. That’s the “Add Custom Domain” button in that screenshot.) That’s the end of my search engine journey. I haven’t quite made a competitor to Google, but I’ve built a working\nsearch engine from first principles, and I’ve learnt about some important concepts in the process. Not only that,\nbut I’ve implemented the PageRank algorithm in a few dozen lines of code!\n\n You can see the full source code of my search engine app here:\n\n I hope you’ve learnt something too. And I hope you’ve seen how easy Anvil has made this. Anvil allowed\nme to describe exactly how my search engine works, and show you the code.\n\n If you’re not already an Anvil user, watch this 7-minute video to learn more. \nWe build a database-backed multi-user web app in real-time and publish it on the internet.\n\n Or sign up for free, and try it yourself:\n\n:\n\n \n\n So, on my train ride into London today, I built an XKCD-style sketch theme for building apps in Anvil. Here’s what it looks like:\n\n \n\n This theme is ready to use yourself:\n\n \n\n Or read on to see how it works…\n\n Normally, you don’t need anything except Python to build an app in Anvil. 
But if we want, we can drop down and customise the look and feel of our page with HTML and CSS.\n\n Start with a new “Custom HTML” app, with the “Standard Page” layout, so the theme.css in our Assets starts out blank. Let’s give ourselves a more hand-drawn feel: @import url('');\n\nbody {\n font-family: \"Patrick Hand\", sans-serif;\n font-size: 18px;\n}\n\n\ninput.form-control, textarea.form-control, select.form-control, .btn {\n box-shadow: none;\n border: 2px solid black;\n border-radius: 15px 255px 15px 225px / 225px 15px 255px 15px;\n}\n\n/* Make the buttons look a bit more flat */\n.btn {\n background-image: none;\n padding: 5px 12px;\n}\n\n.btn:active, .btn:hover, .btn:focus {\n border-color: black;\n background-image: none;\n background-color: #e3e3e3;\n outline: none;\n}\n As well as primitives like “text”, “input box”, or “drop-down”, applications have higher-level UI elements that should look consistent. For example, we might want to group our components into “cards”. Likewise, we want all our headings to match each other.\n\n We can make these UI elements available in the Toolbox, so we can use them with the visual designer:\n\n \n\n To do this, we don’t create new components; we create new “roles” for existing components. A “Card” is a panel component (ColumnPanel) with a border and drop-shadow; and a “Heading” is a Label component with a different font and bigger text.\n\n We use the Roles editor to create roles called card and heading (applicable to ColumnPanels and Labels respectively), and make them available in the Toolbox. 
Then we just need to write the CSS: .anvil-role-card {\n border: 2px solid black;\n padding: 5px;\n border-radius: 125px 10px 20px 185px / 25px 205px 205px 25px;\n box-shadow: 2px 2px 5px 0 rgba(0,0,0,.2);\n}\n\n.anvil-role-heading .label-text {\n font-family: \"Patrick Hand SC\", sans-serif;\n font-size: 26px;\n padding: 13px 0 0;\n}\n.\n\n\">\n <div class=\"title\">\n <div class=\"placeholder\" anvil-if-slot-empty=\"title\" anvil-drop-slot=\"title\">\n Drop a title here\n </div>\n <div anvil-slot=\"title\"></div>\n </div>\n\n <div class=\"nav-links\">\n <div anvil-slot=\"nav-links\"></div>\n <div class=\"placeholder\" anvil-if-slot-empty=\"nav-links\"\n anvil-drop-slot=\"nav-links\">\n Drop a FlowPanel here\n </div>\n </div>\n <div style=\"clear:both\"></div>\n</div>\n And finally, we set up the default drop behaviour. If the mouse isn’t directly over a drop zone, we drop into the nearest container:\n\n <div anvil-drop-default anvil-drop-container=\".anvil-container\"></div>:\n\n\n\n\n\n You can see this app’s source code too:\n\n \n\n To find out more about Anvil, check out our tutorials. For the gory details on how to create your own custom themes that work with Anvil’s visual designer, check out the reference docs.\n", "tags": ["blog"], "path": "/blog/xkcd-style-apps" }, { "title": "Double-Entry Bookkeeping for Engineers", "snippet": "Only 25% of founders understand how their business represents money. Learn how it works by building your own double-entry accounting system in Python.", "content": "\n\n If you run a business, you’ve probably heard of double-entry bookkeeping. It’s “the most influential work in the history of capitalism”. We “may not have had the Industrial Revolution without it”. It’s kind of a big deal.\n\n.\n\n But we’re hackers – we can do better than that! If you can write code, you can understand double-entry bookkeeping. And we’re going to prove it, by building an accounting app from scratch. 
All you need is a little bit of Python.\n\n There are two rules of double-entry bookkeeping:\n\n Every financial category in your business is represented by an account.\n Every financial transaction in your business can be represented as a transfer between accounts.\n\n We can record our new laptop in a separate account (we’ll call it “Fixed Assets”). Here’s how that looks:\n\n \n\n We can see that the total value of our company’s assets hasn’t changed: We’ve just traded $1,500 worth of cash for $1,500 worth of laptop.\n\n The purchase is entered twice (hence “double entry”): it removes value from Cash in Bank, and adds it to Fixed Assets. The entries must always balance, so no value has disappeared – it’s just been moved around.\n\n I think we understand enough to start building our accounting system. Let’s fire up Anvil, create a new app, and set up the built-in database (Data Tables) to represent our accounting history:\n\n \n\n \n Now we have a database, we can store transactions in it. Here’s the code to create a new transaction:\n\n def add_new_transaction(date, from_account, to_account, amount, description):\n\ttxn = app_tables.transactions.add_row(description=description, date=date)\n\n\tapp_tables.journal_entries.add_row(\n\t\tchange=+amount, account=to_account, transaction=txn, date=date\n\t)\n\tapp_tables.journal_entries.add_row(\n\t\tchange=-amount, account=from_account, transaction=txn, date=date\n\t)\n For example, here’s some code we could run to record that laptop purchase:\n\n from datetime import date\n\n cash_in_bank = app_tables.accounts.get(name=\"Cash in Bank\")\n fixed_assets = app_tables.accounts.get(name=\"Fixed Assets\")\n\n add_new_transaction(date=date.today(),\n from_account=cash_in_bank, to_account=fixed_assets,\n amount=1500, description=\"Purchased laptop\")\n Now we can make a web interface for entering transactions. We use Anvil’s visual designer to create a form for entering the transaction details.
Then, when the user clicks Add Entry, we call our add_new_transaction() function: \n\n To see the source code to our app so far, click here to open it in the Anvil editor:\n\n \n\n OK, so that’s the easy part – and this is where most introductions to double entry bookkeeping stop. But we still have a problem: How do we interact with the outside world?\n\n.\n\n.\n\n This is clearly not an accurate representation of our business. What can we do to fix it?\n\n\n\n The answer is to separate our bookkeeping records: we create special accounts that represent “outside the company”, and treat them differently from things our company owns. We name them after the financial statements they’re used for:\n\n Balance sheet accounts are the accounts we’ve already met. They represent things we own (assets, such as cash in a bank) or things we owe (liabilities, like a loan we will have to pay back – these have negative value).\n\n For any point in time, we can work out the value of each balance-sheet account on that date. This lets us display everything we owned or owed on that date: This is the company’s balance sheet.\n”.\n\n It doesn’t make sense to keep a running total of these (how much money does “the rest of the world” have, anyway?).\nWe.\n.\n\n).\n\n.\n\n Let’s add this distinction to our bookkeeping system. We add a new boolean column to the Accounts table, to record whether something is a balance-sheet or a profit-and-loss account. We’ll add a new account, “Sales”, which is not a balance-sheet account: \n\n We’ll put some sample data in to demonstrate. Let’s say we sold $2000 worth of widgets the week before we bought our laptop. We record that as a transfer from Sales (outside the company), to Cash in Bank:\n\n \n\n Notice that because we’ve gained value from the outside world, the change to the Sales account (profit and loss) is actually negative! 
This balances the positive change to Cash in Bank (balance sheet).\n\n (Again, professional accountants would use different words here – they’d say “we applied a $2,000 credit to Sales, and a $2,000 debit to Cash in Bank”2 – but the arithmetic is the same.)\n\n We can now add up each type of account separately, and calculate two important reports:\n\n The Balance Sheet shows all our assets and liabilities at a given point in time. The total is the company’s “book value”:\n\n \n\n Here’s the code that calculates the balance of an account on a given date. We just add up all that account’s journal entries, up to the specified date:\n\n def get_balance_for_account(account, date):\n balance = 0\n\n for entry in app_tables.journal_entries.search(account=account,\n date=q.less_than(date)):\n balance += entry['change']\n\n return balance\n Now we can calculate the balance for every balance-sheet account in our system:\n\n def get_balance_sheet(date):\n return [{'name': account['name'], 'balance': get_balance_for_account(account, date)}\n for account in app_tables.accounts.search(balance_sheet=True)]\n This produces a list of dictionaries, which is easy to display in a Data Grid. And that’s how we produce the balance sheet screen you see above.\n\n Secondly, there’s the Profit and Loss Statement, which shows how much value the company gained from, or lost to, each P&L account over a given period:\n\n \n\n And here’s the code that calculates it for each account. 
It’s very similar to the balance-sheet code – after all, it’s just summing up all the changes to an account over a time period:\n\n def get_profit_for_account(account, from_date, to_date):\n profit = 0\n\n for entry in app_tables.journal_entries.search(account=account,\n date=q.between(from_date, to_date)):\n # Subtract, because any money going out of a P&L account\n # is a gain for us, and any money going in is a loss for us.\n profit -= entry['change']\n\n return profit\n.\n\n .)\n\n.\n\n Click here to open the app in the Anvil editor, read the source code, and try it out yourself:\n\n \n\n.\n!\n\n What will you build?\n\n \n\n.\n\n Still wondering how to represent a particular financial event? Here are a few examples to show how we use double-entry bookkeeping to represent what’s happening:\n\n What happens when I issue an invoice?\n\n Businesses often sell things “on credit”: We give the customer a product, and issue them an invoice. Some time later, the customer pays the invoice.\n\n How to represent this? Well, if we’ve issued an invoice, the customer owes us money – so that’s an asset! We record these assets in their own balance-sheet account: “Accounts Receivable”.\n\n When we make a sale, we transfer value from the Sales account (that’s Profit and Loss, because we’re gaining value from the outside world) into Accounts Receivable. Later, when the customer pays the invoice, that’s a transfer from Accounts Receivable to Cash in Bank.\n\n.\n What happens if someone doesn’t pay an invoice?\n\n”.)\n What happens when my laptop wears out?\n.\n\n This neatly expresses that “we need to buy new laptops from time to time” is an ongoing issue. We don’t take a surprise loss every three years when we replace our laptops; we spread it out over the equipment’s useful life.\n Footnotes:\n\n Yes, we used “credit” and “debit” correctly here too. Yes, it’s confusing. If you’re lost, just ignore the italicised sections and read the code – you’ll be fine. 
↩\n Today we’re delighted to announce Background Tasks, our latest addition to Anvil. Now you can kick off long-running processes and monitor their progress without blocking your app.\n\n Simply use @anvil.server.background_task to mark a function as a Background Task: @anvil.server.background_task\ndef crawl_the_web(url):\n \"\"\"In the background, crawl the web.\"\"\"\n recursively_index(url)\n Then launch it using anvil.server.launch_background_task: task = anvil.server.launch_background_task('crawl_the_web', '')\n Use Background Tasks for heavyweight calculations, housekeeping processes, connection pools, downloading large files, \nanything you want to do in the background while the user carries on interacting with your app.\n\n Learn how to trigger Background Tasks, communicate with them, and manage them, by following the tutorial.\nYou’ll build a web crawler that fetches all the pages in a given website. Or open the example in Anvil and \ninvestigate!\n", "tags": ["announce"], "path": "/blog/announcing-background-tasks" }, { "title": "Data Tables now have a rich query language", "snippet": "We've just upgraded Anvil to make data storage much more powerful", "content": " We’ve just upgraded Anvil to make data storage much more powerful.\n\n Data Tables already give you a Python-based system for storing and retrieving data.\nThere’s also a graphical interface to make designing databases even quicker.\n\n This upgrade gives you a library of query operators. You pass them to the search() method\nwhen you access your data. 
To get all restaurants rated higher than 2 stars:\n\n app_tables.restaurants.search(\n rating=q.greater_than(2)\n)\n To get all menu items that include the string ‘pizza’:\n\n app_tables.menu.search(\n dish_name=q.ilike('%pizza%')\n)\n To perform an intelligent search within natural language text:\n\n app_tables.reviews.search(\n review_text=q.full_text_match('Easy to find')\n)\n You can combine query operators together to build complex queries when you need them. To find good restaurants in London, or any outside of London:\n\n app_tables.restaurants.search(\n q.any_of(\n location=q.not_('London'),\n rating=q.greater_than(2),\n )\n)\n It’s live right now. Copy an example app to your account and try out some queries of your own. To find out more, have a look at the tutorial.\n\n Whatever you’re building right now, we hope this helps you make it even better.\n", "tags": ["blog","announce"], "path": "/blog/announcing-queries" }, { "title": "Running tasks in the background", "snippet": "Run tasks in the background while your main app carries on running.\n\nThis tutorial walks you through building a web crawler to index and search an entire website.\n\nYou could also use Background Tasks to do some housekeeping or heavy processing behind the scenes, download large files, email a large mailing list. Anything that needs to run for a long time!\n", "content": " \nThis tutorial assumes some basic knowledge of Anvil. If you want to understand all the details, it might help to have\nbuilt something in Anvil already - try following the Hello World tutorial first.\n When you're done there, come back here to learn about Background Tasks.\n\n We’re going to look at Background Tasks by building a web crawler that downloads all the pages on a site. The final result is a very simple search engine. 
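A rough mental model for what we're about to build – a background function that reports progress through shared state while the launcher carries on – can be sketched with plain Python threads. This is an analogy only: Anvil's Background Tasks run server-side, not as local threads, and the names below are illustrative:

```python
import threading
import time

task_state = {}  # stands in for anvil.server.task_state

def crawl(urls):
    # Background work reports its progress through shared state.
    task_state['total_urls'] = len(urls)
    for n, url in enumerate(urls):
        time.sleep(0.01)  # pretend to fetch the page
        task_state['n_complete'] = n + 1

t = threading.Thread(target=crawl, args=(['/a', '/b', '/c'],))
t.start()          # like anvil.server.launch_background_task('crawl', ...)
t.join()           # the app's UI would poll task_state from a Timer instead
print(task_state)  # {'total_urls': 3, 'n_complete': 3}
```

The tutorial below follows the same pattern: launch the task, let it run, and poll its state to show progress.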
Follow along to build the app yourself, or clone and try out the final version.\n \n\n \n Let’s write a minimal Background Task to start with, and fill in the long-running code it will contain later.\n\n Define a function called crawl in a Server Module. For now, it will just take in a URL and print it back to you. @anvil.server.background_task\ndef crawl(sitemap_url):\n print('Crawling: ' + sitemap_url)\n Decorating it with @anvil.server.background_task tells Anvil this can be run in the background. Tasks are launched by calling anvil.server.launch_background_task from a server function. This works just like \n anvil.server.call - the first argument is the function name, and all other arguments are passed to the function. \nIt returns a Task object, which the app can use to access data from the Background Task. Write a server function to launch the crawl task: @anvil.server.callable\ndef launch_one_crawler(sitemap_url):\n \"\"\"Launch a single crawler background task.\"\"\"\n task = anvil.server.launch_background_task('crawl', sitemap_url)\n return task\n We’re going to have a Button that launches this Background Task.\n\n Drop a Label, a TextBox and a Button into Form1: \n\n Now configure the Button’s click handler to launch the Background Task and store the Task object: def button_run_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n self.task = anvil.server.call('launch_one_crawler', self.text_box_sitemap.text)\n (Remember to bind this method to the Button’s click event by double-clicking on the Button or using the Properties panel.) Run the app and click the Button - a Background Task has been launched! But how can you tell?\n\n So your tasks are being launched, but since they’re in the background, how do you see what they’re doing?\n\n Answer: The Background Tasks dialog, which can be found in the Gear menu. 
Running tasks will also appear in the Output window when you are developing your app, allowing you to keep track of them easily.\n\n \n\n The Background Tasks dialog lists all the tasks that the app has run along with their status. \nIn our case, we see one task that has completed a few seconds ago, and it ran for a few seconds. If it were still running, we’d have a button to kill it.\n\n \n\n The ‘View logs’ button next to each task takes you to the App Logs entry for this task. There is one entry for each \nBackground Task - all the logs from a particular task are grouped together.\n\n In our app, we see that our task printed Crawling: before finishing. If the task had raised\nan exception, that would appear here too. \n\n That’s all very well, but we want to build a web crawler, so let’s make it do some crawling!\n\n Time to make the task do something useful.\n\n First, we’ll request the sitemap and get a list of the pages. Pages are stored as strings between <loc> tags, so we need\nto look at each line and cut out the bit between the <loc> tags: @anvil.server.background_task\ndef crawl(sitemap_url):\n # ... the code we wrote before, then ...\n\n # Get the contents of the sitemap\n response = anvil.http.request(sitemap_url)\n sitemap = response.get_bytes()\n\n # Parse out the URLs\n urls = []\n for line in sitemap.split('\\n'):\n if '<loc>' in line:\n urls.append(line.split('<loc>')[1].split('</loc>')[0])\n If you’re following along, remember to import anvil.http! Now urls will be a list of URLs to pages on the Anvil site. Write a function to request each page:\n\n def get_pages(urls):\n for n, url in enumerate(urls):\n url = url.rstrip('/')\n\n # Get the page\n try:\n print(\"Requesting URL \" + url)\n response = anvil.http.request(url)\n except:\n # If the fetch failed, just try the other URLs\n continue\n Create a Data Table to store the url and html of each page, and when it was last_indexed. 
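As an aside, the `<loc>` extraction above can be checked standalone by running the same line-splitting logic against a tiny made-up sitemap (no Anvil required):

```python
# A minimal sitemap, like the text anvil.http.request would return.
sitemap = """<urlset>
<url><loc>https://example.com/</loc></url>
<url><loc>https://example.com/about</loc></url>
</urlset>"""

urls = []
for line in sitemap.split('\n'):
    if '<loc>' in line:
        # Keep only the text between the <loc> and </loc> tags.
        urls.append(line.split('<loc>')[1].split('</loc>')[0])

print(urls)  # ['https://example.com/', 'https://example.com/about']
```

This simple approach assumes one `<loc>` per line, which holds for typical sitemaps; a stricter crawler could use a real XML parser instead.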
\n\n See the Data Tables documentation if you’re not familiar with Data Tables.\n\n Now write a function to populate the table. It takes the URL and HTML as arguments:\n\n from datetime import datetime\n\ndef store_page(url, html):\n # Find the Data Tables row, or add one\n with anvil.tables.Transaction() as txn:\n data = app_tables.pages.get(url=url) or app_tables.pages.add_row(url=url)\n \n # Update the data in the Data Tables\n data['html'] = html\n data['last_indexed'] = datetime.now()\n Call this at the end of get_pages: def get_pages(urls):\n for n, url in enumerate(urls):\n # ... the code we wrote before, then ...\n html = response.get_bytes()\n store_page(url, html)\n Finally, call get_pages at the end of crawl, and everything will be ready to go: @anvil.server.background_task\ndef crawl(sitemap_url):\n # ... the code we wrote before, then ...\n get_pages(urls[:20])\n \n\n\n.\n Run the app and click the Button. Crawling begins behind the scenes.\n\n Now, quit the app. Your Background Task is still harvesting web pages!\n\n Check your Data Table to see it filling up with web pages (there’s a refresh button on the Data Tables UI, so click that a few times to watch more arrive).\n\n \n\n The task you’ve just run will be visible in the Background Tasks dialog:\n\n \n\n and in the App Logs you can see it printing a line every time it requests a page:\n\n \n\n Having the task running in the background is great, but we need to get some information out of it. \nFirst, we’ll search the pages it’s storing.\nThen, we’ll display the crawling process in the UI in real time.\n\n Let’s build a simple search widget - add a Button, TextBox and Data Grid to the page. 
\nSet the Data Grid to have a single column whose key is url, and uncheck the box for the auto_header property (see the Data Grids - Getting Started tutorial for more on Data Grids): \n\n \n\n When the Button is clicked, we want to update the Data Grid based on the results in the Data Table.\n\n So write a simple server function to get the data from the Data Table. For speed, we use Anvil’s full-text search capability, then return only the page URLs to the client:\n\n @anvil.server.callable\ndef search(query):\n pages = app_tables.pages.search(html=q.full_text_match(query))\n return [{\"url\": p['url']} for p in pages]\n Now bind a click handler to the search Button and call the server-side search function you just wrote:\n\n def button_search_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n query = self.text_box_search.text\n self.repeating_panel_1.items = anvil.server.call('search', query)\n Now the user can search within all the pages on the Anvil site to find a particular string - try it out!\n\n (Optional extra: In our example app, we’ve used Links in the Data Grid to link to the URL of the page.)\n\n \n\n So we can communicate from our Background Task via a persistent data store such as Data Tables, but what if we \nwant to do something more lightweight to communicate about the task’s state?\n\n The other way we can get data from the Background Task is by using anvil.server.task_state. This is a special object\nthat allows the Background Task function to store data about itself. By default, it’s a dictionary, so you can assign things to\nits keys. Let’s store the total number of URLs we’re crawling: def get_pages(urls):\n anvil.server.task_state['total_urls'] = len(urls)\n # ... then the rest of get_pages ...\n and inside the loop, let’s store the number of URLs processed so far:\n\n for n, url in enumerate(urls):\n # ... 
after the processing is done ...\n anvil.server.task_state['n_complete'] = n + 1\n Now to display that progress to the user. Put a new FlowPanel inside Form1 and inside that, add some Labels to show the number of pages indexed.\nSet the FlowPanel’s visible property to False - we’ll only show it when it’s relevant. Add a Timer as well (Timers can be found under ‘More Components’ in the ToolBox.) \n\n Set the Timer’s interval property to 0 - this means the Timer does nothing at first. When the ‘Run’ button is clicked,\nwe want the Timer to start ticking, so in the code, set its interval to 0.5 and show the FlowPanel: def button_run_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n self.timer_1.interval = 0.5\n self.flow_panel_1.visible = True\n # ... and the rest of the click handler as before ...\n We want to update our progress labels whenever the timer ticks. We’ll explain how to do that, then show you the full tick \nevent handler. First it must get hold of the task_state that our Background Task has been writing to. When we \nlaunched the task, we stored an object in self.task that allows us to get its task_state: state = self.task.get_state()\n We want the task state’s n_complete and total_urls values. Since it’s a dictionary, it has the get method, which we can use to\ndefault to 0 if those keys aren’t present: n_complete, total_urls = state.get('n_complete', 0), state.get('total_urls', 0)\n Then we simply assign those values to the relevant Labels:\n\n self.label_num_crawled.text = n_complete\n self.label_total_pages.text = total_urls\n Finally, we check whether the Background Task is still running. If it isn’t, we switch the Timer off:\n\n # Switch Timer off if process is complete\n if not self.task.is_running():\n self.timer_1.interval = 0\n The full event handler is shown below. Double-click the Timer component in the Design view to\nbind a tick event handler, and write this method. 
def timer_1_tick(self, **event_args):\n \"\"\"This method is called Every [interval] seconds. Does not trigger if [interval] is 0.\"\"\"\n\n # Hide the loading spinner so the user is not interrupted by the polling\n with anvil.server.no_loading_indicator:\n # Show progress\n state = self.task.get_state()\n n_complete, total_urls = state.get('n_complete', 0), state.get('total_urls', 0)\n self.label_num_crawled.text = n_complete\n self.label_total_pages.text = total_urls\n\n # Switch Timer off if process is not running\n if not self.task.is_running():\n self.timer_1.interval = 0\n Note that we’ve used not self.task.is_running() here rather than self.task.is_completed(). self.task.is_completed() \nonly returns True if the task completed successfully, but the task might also stop running if it is failed, missing or killed. And that’s it! Now we have a progress tracker that tells us how many pages we’ve crawled, in real time:\n\n \n\n We’ve got a fully working simple search engine now, but we’d like to have a bit more control - how do we stop the crawl process\nonce it’s started?\n\n What if we decide we’ve crawled enough pages and we want to stop the Background Task before it has crawled everything?\n\n As the developer, we can use the Kill button in the Background Tasks dialog. But how do we give the user the ability to kill tasks?\n\n The Task object has a kill method for this. When kill is called, the Background Task stops running and its\nstate goes to killed. The client doesn’t have access to the kill method for security reasons, so we write a very\nsimple server function to kill a given task. In a production app, you should make sure that the caller of this function \nis authorised to kill the task (perhaps using Anvil’s built-in Users Service), but for our example this will do: @anvil.server.callable\ndef kill_crawler(task):\n task.kill()\n Add a Button to your form to use as a ‘stop’ button. 
To make the Stop button work, pass the Task object through from \nthe client when the Stop button is clicked:\n\n def button_stop_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n anvil.server.call('kill_crawler', self.task)\n Now we can hit this button to stop the crawling process. That should prove useful if a site turns out to be much bigger than we thought!\n\n \n\n If you momentarily close the app and re-open it, it loses track of the crawl progress. It would be nice if\nit picked up the latest Background Task and showed you the progress.\n\n Luckily, you can connect to existing Background Tasks from previous runs of your app.\n\n In a Server Module, call anvil.server.list_background_tasks(): @anvil.server.callable\ndef get_existing_tasks():\n return anvil.server.list_background_tasks()\n This returns a list of Task objects. The Task object has a method to get the name of the task - \nthis is usually the name of the relevant server function. Modify that line to filter out crawl tasks in case other task types get added: @anvil.server.callable\ndef get_existing_tasks():\n return [\n t for t in anvil.server.list_background_tasks() if t.get_task_name() == 'crawl'\n ]\n Call this in Form1’s __init__ method. If there are any existing tasks, keep track of the latest one and turn on the progress display. def __init__(self, **properties):\n # ... 
the existing code, then ...\n tasks = anvil.server.call('get_existing_tasks')\n if len(tasks) > 0:\n # Keep track of the latest task\n self.task = tasks[-1]\n # Turn on the progress display\n self.timer_1.interval = 0.5\n self.flow_panel_1.visible = True\n (We could use task.get_start_time() to figure out which Task is the newest, but tasks is ordered by start time so we’ve used tasks[-1]) Your app will now find any existing Background Tasks when it starts up!\n\n \n\n So now we can crawl a website using its sitemap, download the whole site to a Data Table, and search within it\nto find all the pages that contain a particular string.\n\n Clone our final version of the app here, then see how yours compares.\n\n \n\n \nHaving read this far, you’ve seen everything you need to launch and manage Background Tasks. You’re ready to start using\nthem in your apps! Perhaps you need to perform a database migration, download a large file, or keep a message list \nbelow a certain length. Check out the reference docs entry when you need to refresh your memory.\n\n Here are some tutorials exploring features your Background Tasks might use:\n Happy building!\n", "tags": ["tutorial"], "path": "/blog/background-tasks-tutorial" }, { "title": "Upcoming Events", "snippet": "We're presenting at meetups and conferences. Invite us to yours!", "content": " We’ll be speaking, giving workshops and talking about Anvil at a bunch of meetups and conferences in the next few months. Come and join us, learn more about Anvil, and maybe even take home some swag.\n\n Do you run a meetup group or conference? We’d love to present or run a workshop for you! Please drop us a line: events@anvil.works.\n\n Wherever you are in the world, we can do remote presentations by video link, so do get in touch.\n \"IndyPy loved having Meredydd speak to our group. 
Very engaging, even across the Atlantic!\"\n Here’s what’s in the calendar in the next couple of months:\n\n Jan 29th: Sheffield, UK (Sheffield Python) \nWorkshop: Come and try Anvil yourself, with friendly hands-on assistance. Feb 5th: London, UK (TechHub Demo Night) \nDemo: See what Anvil can do, and how quickly! Meet the founders and ask questions afterwards. Feb 21st: Oxford, UK (Oxford Python) \nTech Talk: I’ll be demonstrating Anvil, and talking about how it works under the hood. Mar 5th: Cambridge, UK (CamPUG) \nTech Talk: I’ll be talking about how Anvil works under the hood. Mar 19th: Cardiff, UK (PyDiff) \nWorkshop: Come and try Anvil yourself, with friendly hands-on assistance. Mar 22nd-24th: Bratislava, Slovakia (PyCon SK) \nWe will be speaking about Anvil at PyCon SK, and leading a hands-on workshop so you can try it yourselves. May 3rd-5th: Cleveland, OH, USA (PyCon) \nWe’re already confirmed as sponsors for PyCon 2019 – find us in the exhibition hall, and meet up with other Anvil users. May 24th-26th: Vilnius, Lithuania (PyCon LT) \nCome find us at PyCon Lithuania! We’ll be giving a talk and leading a hands-on workshop. The Web is a hugely complicated system. If you’re writing a typical web app, your data will be in about 6 different forms:\n\n Translating these layers is tedious and repetitive, so we’ve invented a ton of frameworks to help us:\n\n Each of these frameworks is a new layer of abstraction.\n\n All abstractions “leak”, requiring you to understand what’s happening underneath. Web frameworks leak all the time – you can’t effectively use Angular or React, for example, without understanding the DOM.\n\n But today we’re asking about a worse situation – what happens when your abstraction just can’t do what you need?\n\n \n\n\n\n.\n\n My condolences.\n\n If you’re lucky, your abstraction is more helpful than that.\n\n Example: Create-react-app is a neat way to start a web app with React. 
It’s a black box that replaces your build manager, a Javascript transpiler, a minifier, and all the other build tools you need for modern web development.\n\n Of course, sooner or later you’ll hit the limits of what create-react-app can do: you’ll need something it doesn’t support. The authors have thought of this situation, and there’s a button you can push: EJECT. \n\n Ejecting from create-react-app opens up the black box. It writes out a set of build scripts representing the current state of your app. Now you can edit those scripts yourself – the hard way.\n\n But create-react-app has given up. It protected you for a while – but as soon as you needed one advanced feature, you’ve been dropped in the Mojave desert with a parachute, a first-aid kit, and a copy of the Webpack manual.\n\n Good luck!\n\n It’s better than going down with the ship, of course. But should you have to discard an abstraction entirely when you touch its limits?\n\n \n\n.\n\n.\n\n This is a pretty ambitious goal: The Web is a huge platform, with a large set of existing libraries, and the browser vendors add new features all the time. Let’s face it: Our new abstraction isn’t going to cover it all, any time soon.\n\n We could have provided an ejector seat: “Export this Anvil app as a React+Flask app”. This, bluntly, would suck. The moment you edited the generated code, you’d lose all the benefits of Anvil’s abstractions!\n\n So instead, we built escape hatches into every part of Anvil. For example:\n\n If you want finer control over appearance, you can leave the drag-and-drop designer and edit the HTML template it uses. And then… flip right back to the drag-and-drop designer, where you can drop components into your new page layout.\n.)\n If you want to use HTTP after all, that’s fine! You can consume HTTP APIs or make your own.\n.\n.\n The web is a complex platform – often too complex – but every single feature is there for a reason.\n\n Are you building an abstraction? 
If so, think about your escape hatches: How can your users exploit features of the underlying platform, without ejecting into the wild blue yonder?\n\n", "tags": ["blog"], "path": "/blog/escape-hatches-and-ejector-seats" }, { "title": "Anvil News - December", "snippet": "Customise the public face of your Anvil app, use new components from our library, and search all of our documentation in one place.", "content": " A happy holiday season to everybody! As the nights draw in and the temperatures drop*, we’ve been working hard to make Anvil even cooler.\n\n * Temperature changes may vary by hemisphere. Consult your globe for more information.\n\n Now you can customise the public face of your app using Titles and Logos. \nThis configures search engine and social media previews as well as your app’s favicon and the title of the browser tab.\n\n \n\n Look for Titles and Logos under the Gear menu to brand your app! There’s now a library of useful components to clone for free and use in your apps.\n\n The components are implemented as Anvil apps, so they are composed of the standard atomic Anvil Toolbox components. This\nmeans you can see exactly how they work and modify them to suit your requirements.\n\n The current list of components in the library is:\n We’ll be adding more components as time goes on. If you have any requests, why not start a thread in the Forum?\n\n Do you have a component to share? We welcome submissions. Contact us at contact@anvil.works!\n\n A special thanks goes to David Wylie for his excellent Multi-Select DropDown and Toggle Switch/Progress Bar components,\nwhich are available for everybody in the library.\n\n We’re constantly adding to our documentation search system. 
A single search box finds results from the Reference Docs,\nour tutorials and cookbooks, selected Forum posts, example apps to clone, and other specially-written hints and snippets.\n\n \n\n Use the ‘How do I…’ search box in the top-right corner of the Anvil editor, or visit the Learning Centre or Knowledge Base pages for a bigger version.\n\n \n\n As usual, we’ve been working on the details too. Here are a few details you might have missed:\n\n The Data Tables Service now tells you how many Rows are in your table.\n Remembered user sessions are now stored in the Users table in Data Tables, so you can delete them to log users out.\n There’s now a distinct URL associated with editing an app, so you can bookmark the Anvil editor with an app open.\n Until next time, happy building! See you in the New Year.\n", "tags": ["blog"], "path": "/blog/update-18-12" }, { "title": "Querying Data Tables", "snippet": "Examples of Data Tables search queries\n", "content": " \nIf you're new to Anvil or the Anvil Data Tables, you may want to start with our Storing and Displaying Data Tutorial.\n When you're done there, come back here to learn more about retrieving data from your Tables.\n\n To get data out of your Data Tables, you can construct queries from a set of query operators.\nQuery operators can be used alone or combined to create complex expressions.\n\n The query operators are provided by the anvil.tables.query module. When you add the Data Tables service to your app, this module will be imported using the shorthand q by default - you can change this if you wish. import anvil.tables.query as q\n Query operators are methods on q, for example q.less_than(x). We’ll look at some examples of using the query operators, and we’ll talk about using indexes to improve query performance.\n\n Here are some examples of using the query operators, organised by theme.\n\n In the examples there are two Data Tables. 
app_tables.machines contains IP addresses, hostnames and other data about computers on an (imaginary) network. app_tables.logs contains logs from those machines. We’ve built all these examples into a sample app for you to clone and inspect:\n\n The machines table has a column last_seen that records the date and time that the computer last sent out a heartbeat to our app.\nHere’s how to use less_than to get all machines that haven’t been seen since before 14th December 2017: app_tables.machines.search(\n last_seen=q.less_than(\n datetime(day=14, month=12, year=2017),\n )\n)\n. \nIt’s a floating-point number between 0 and 1. Let’s say we want to select all machines that have been operational at some point (> 0% uptime), but have worse than 99.99% uptime: app_tables.machines.search(\n uptime=q.between(\n min=0,\n max=0.9999,\n min_inclusive=False,\n )\n)\n(\n hostname=q.between(\n min='b',\n max='f',\n )\n)))\n:\n\n app_tables.machines.search(\n ipv4=q.like('192.168.10.%'),\n)\n To perform an intelligent text search, use q.full_text_match. The results are the Rows that contain all of the words in \nthe query. Rows containing similar words are matched (stemming) - so searching\nfor ‘Walking’ will match things containing ‘Walk’ and ‘walker’ as well as ‘Walking’. As well as stemming, words that\nare very common in English are ignored to avoid a fog of false positives (here’s a full list). This query finds all logs that contain 'Stopping process' as well as 'stop process' and 'process is now stopping': app_tables.logs.search(\n message=q.full_text_match('Stopping process'),\n)\n A richer query language can be used if you set raw=True. The search term is interpreted using PostgreSQL’s \n tsquery syntax. 
So if you want all logs containing 'Stopping process' and 'stop process' but not 'Process is now stopping', you can \nuse the <-> (‘followed by’) operator: app_tables.logs.search(\n message=q.full_text_match('Stopping <-> process', raw=True),\n)\n See the Postgres tsquery docs for a full\nspecification(\n hostname=q.none_of('dionysus', 'apollo')\n)\n If you want to construct a list or tuple of values, you can unpack them into positional arguments using the * operator: hostnames = ['dionysus', 'apollo']\napp_tables.machines.search(\n hostname=q.none_of(*hostnames)\n)\n Here’s an example of selecting all machines that are part of the build system:\n\n app_tables.machines.search(\n config=q.any_of(\n {'type': 'build_master'},\n {'type': 'build_worker'},\n )\n)\n(\n uptime=q.all_of(\n q.less_than(0.99),\n q.not_(0),\n )\n)\n When used as positional arguments, q.any_of, q.all_of and q.none_of apply to one or more columns. Pass column names to \nthem as keyword arguments. Here is a query for machines:\n 'au', or 192.168.and ends in .0, or Notice that q.any_of is used as a positional argument here: app_tables.machines.search(\n q.any_of(\n hostname=q.ilike('%au%'),\n ipv4=q.ilike('192.168.%.0'),\n ipv6=q.not_(None),\n )\n)\n And here we use q.none_of to select all machines who don’t match those criteria: app_tables.machines.search(\n q.none_of(\n hostname=q.ilike('%au%'),\n ipv4=q.ilike('192.168.%.0'),\n ipv6=q.not_(None),\n )\n)\n Queries can be nested arbitrarily to execute complex logic, because q.any_of, q.all_of and q.none_of can take \nother queries as positional arguments. There are often more readable ways of achieving the same result, but the power is there if you need it. As an example, let’s imagine:\n Stopping process. NOTSET. CRITICAL-level logs only if the message was logged after midnight on 15th December 2018. 
You also want to order the results by time logged.\n\n Here’s a query that achieves that:\n\n app_tables.logs.search(\n tables.order_by('time_logged'),\n q.any_of(\n q.all_of(\n level=q.all_of(\n q.less_than_or_equal_to(LOG_LEVELS['ERROR']),\n q.not_(LOG_LEVELS['NOTSET']),\n ),\n message=q.full_text_match('Stopping <-> process', raw=True),\n ),\n q.all_of(\n level=LOG_LEVELS['CRITICAL'],\n message=q.full_text_match('Stopping <-> process', raw=True),\n time_logged=q.greater_than_or_equal_to(\n datetime(year=2018, month=12, day=15),\n ),\n )\n ),\n)\n It’s a little difficult to understand this at a glance, so let’s assign some of the subqueries to variables with sensible names.\n\n error_and_below = q.all_of(\n q.less_than_or_equal_to(LOG_LEVELS['ERROR']),\n q.not_(LOG_LEVELS['NOTSET']),\n)\n\ncritical = LOG_LEVELS['CRITICAL']\n\nstopping_process = q.full_text_match('Stopping <-> process', raw=True)\n\nstarting_from_15th_dec_2018 = q.greater_than_or_equal_to(datetime(year=2018, month=12, day=15))\n\nstopping_process_error_and_below = q.all_of(\n message=stopping_process,\n level=error_and_below,\n)\n\nstopping_process_critical_new = q.all_of(\n message=stopping_process,\n level=critical,\n time_logged=starting_from_15th_dec_2018,\n)\n\napp_tables.logs.search(\n tables.order_by('time_logged'),\n q.any_of(\n stopping_process_error_and_below,\n stopping_process_critical_new,\n )\n) \n This is functionally the same query as before, but it should now be easier to understand.\n\n Developers on Dedicated or Enterprise plans can create indexes on columns to optimise performance.\n\n Right-click on the column heading in the Data Tables Service to set up indexes.\n\n \n\n There are three types of index, each optimising a different type of query.\n\n As a rule of thumb, apply indexes when your queries start running slowly. They will usually make finding rows much\nfaster. 
The cost is that writes are made very slightly slower, because the index must be updated as well as the underlying data.\nSo the best approach is to err on the side of using them, but don’t just apply them to everything from the start!\n\n This app contains each of the examples listed in this cookbook. Clone it to see them at work:\n\n \n\n \nIf you need help with any specific queries, why not ask on the Forum? There are many more tutorials, cookbooks and examples in the Learning Centre.\n\n Or perhaps you’d prefer to get building!\n\n", "tags": ["cookbook"], "path": "/blog/querying-data-tables" }, { "title": "Anvil News - November", "snippet": "We'll be presenting at a bunch of dev- and Python-related meetups this winter! Join us for workshops, technical talks and to meet the team.", "content": " We will be presenting at a bunch of development- and Python-related meetups over the next few months. Come and join us!\n\n (Want us at your local meetup? Get in touch, wherever you are: events@anvil.works)\n\n \n\n Nov 22nd: London, UK (London Python, pictured above) \nWe had a great response at London Python, and a lot of interest afterwards. We’ve already been invited back! Dec 4th: St Louis, MO, USA (STL Python) \nStefano Menci will be leading a hands-on workshop in St Louis. Jan 8th: Indianapolis, IN, USA (IndyPy) \nMeredydd Luff will be demonstrating Anvil, and talking about how it works under the hood. Jan 17th: Manchester, UK (North West Python) \nMeredydd will be demonstrating Anvil, and talking about how it works under the hood. Jan 29th: Sheffield, UK (Python Sheffield) \nWorkshop time! Come and try Anvil yourself with friendly hands-on assistance. Feb 21st: Oxford, UK (Oxford Python) \nMeredydd will be demonstrating Anvil, and talking about how it works under the hood. 
May 3-5: Cleveland, OH, USA (PyCon) \nWe’ll be at PyCon, too. (Want us at your local meetup? Get in touch, wherever you are: events@anvil.works.)\n\n We can also offer help and materials if you’d like to lead an Anvil workshop yourself - just get in touch.\n\n We hope to see you at an event soon!\n", "tags": ["blog"], "path": "/blog/update-18-11" }, { "title": "Anvil DNS Guide", "snippet": "Configure a custom domain name for your Anvil app.\n", "content": " We host Anvil apps at public addresses in the domain .anvil.app. But you can use any domain name for your app. The first step is to purchase your domain from a domain registrar. A complete list of domain registrars is available from\nICANN.\n\n When you have purchased your domain, configure an A record pointing it to Anvil’s IP address: 52.56.203.177\n\n If you wish to use a subdomain of the domain you purchased (such as app.mydomain.com), you can simply configure an A record pointing that subdomain\nto the IP address above. To configure a subdomain, just enter the subdomain in place of the @ symbol when configuring the A record. If you want to use www.mydomain.com as well as mydomain.com, you can configure a second A record for www. DNS changes take a while to propagate around the internet, so you may need to wait up to 48 hours before your changes\nfully take effect.\n\n Google Domains refers to the record you need to create as a ‘Custom Resource Record’.\n\n \n\n Google Domains and Synthetic Records\n\n Advanced note: If you’re using a Synthetic Record as a redirect, you need to enable SSL. This is because Anvil uses HSTS to \nenforce SSL as part of its security model (so non-SSL connections will not work by design).\n\n In the GoDaddy DNS Management page for your domain, you can create an A record in the ‘Records’ card: \n\n In Cloudflare’s DNS settings tab, add an A record in the ‘add record’ tool. \n\n Cloudflare and SSL\n\n The cloud icon next to the ‘add’ button toggles Cloudflare’s CDN and other features on or off. Anvil apps will work \nwith either setting. 
If you’re using Cloudflare’s SSL support, you need to change the SSL Support setting from ‘Flexible’ \nto ‘Full Strict’. See this article from Cloudflare \nfor more info. It will take a few minutes to take effect.\n", "tags": ["cookbook"], "path": "/blog/dns-guide" }, { "title": "Remote Control Panel", "snippet": "Runs Python remotely in order to control unit test runs.\n\nUse this pattern to control anything! Continuous Integration workflows, manufacturing Production Equipment Control, Quantitative Finance simulations, scientific equipment...\n", "content": " Anvil is a tool for building full-stack web apps with nothing but Python and a drag-and-drop designer. Learn more on our website, or sign up and try it yourself -- it's free!\n We’re going to build an app to run unit tests from the web, using nothing but Python, with Anvil.\n\n \n\n To follow along, you need to be able to access the Anvil Editor. Create a free account using the following link:\n\n \n\n Open the Anvil Editor to get started.\n\n In the top-left there is a ‘Create New App’ button. Click it and select the Material Design theme.\n\n You are now in the Anvil Editor.\n\n First, name the app. Click on the name at the top of the screen and type in a name like ‘Test Manager’. \nThen enter a title for the page into the text section. \n\n Add a Card to the page. Inside the Card, add a Label, a TextBox and a Button.\n\n Set the Label’s text to describe the input, name the TextBox run_number_box, and set its type to number. Rename the Button to run_tests_button and set its text to ‘Run Tests’.\nAlign it to the left to make it sit against the TextBox. 
Your app should now look something like this:\n\n \n\n Now, make the Button do something when it’s clicked. At the bottom of the Properties panel, create a handler for the Button’s click event. This creates a method on your Form: def run_tests_button_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n pass\n Remove the pass and add a call to the built-in alert function: def run_tests_button_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n alert(\"Test runs requested: {}\".format(self.run_number_box.text), title=\"Test run\")\n When you click the Button, you’ll get a dialog box displaying the number of test runs requested.\n\n Let’s make the Button do something a bit more interesting.\n\n Add a Repeating Panel to your app. This is a component that displays the same piece of UI for each element in a list (or other iterable).\n\n \n\n Double-click on the Repeating Panel in the Designer. You’ll see most of the page grayed out, and you can drop things into \nthe Repeating Panel’s template. Drop a Card into it, and into that Card, put a Label whose text is set to ‘Date/Time’. Drop\nanother Label next to it and leave it blank for now - it will hold the date and time that the test was run.\n\n \n\n Arrange some more Labels until you have a UI that can display the number of tests run, passed, failed and with errors.\n\n \n\n To populate the page with a bunch of empty test result cards, just set the Repeating Panel’s items attribute to an\narbitrary list: class Form1(Form1Template):\n\n def __init__(self, **properties):\n # ...\n self.repeating_panel_1.items = [1, 2, 3, 4]\n Run your app to see some empty results cards:\n\n \n\n Let’s make the Repeating Panel display some data.\n\n Change the [1, 2, 3, 4] from Step 2 to be an empty list, so the Repeating Panel is empty when the app starts: self.repeating_panel_1.items = []\n And append some fake data to this list when the Button is clicked:\n\n from datetime import datetime\nfrom random import randint\n# ...\n\n def run_tests_button_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n for i 
in range(self.run_number_box.text):\n tests_run = 6\n passed = randint(0, tests_run)\n failed = tests_run - passed\n errors = 0\n\n self.repeating_panel_1.items = [{\n 'date_time': datetime.now(),\n 'tests_run': tests_run,\n 'passed': passed,\n 'failed': failed,\n 'errors': errors,\n }] + self.repeating_panel_1.items\n Run your app and hit the ‘Run Tests’ button. You’ll see a number of test result cards corresponding to the\nnumber of test runs the user selected.\n\n \n\n Let’s get the data into the result card. For each empty Label, click on it in the Design view and add a Data\nBinding in the Properties window:\n\n \n\n Data Bindings tie the value of a property to the value of a Python expression. In this case we’re tying the text of the\nLabel to the test run data. Since these Labels are within the Repeating Panel, each element of self.repeating_panel_1.items is available\nto the Label as self.item. So the Data Bindings are: \n\n and similar for the 'passed', 'failed' and 'errors' Labels. The 'date_time' label needs to format the datetime object into a string using the strftime method: \n\n Clicking on the ‘Run Tests’ button will now generate randomly populated result cards.\n\n \n\n An app like this could manage and report on the stages of a complex build and deployment pipeline. But to keep it simple\nfor this workshop, we’ll clone a very simple Git repo and run the unit tests.\n\n The repo in question is a Roman Numeral Calculator coding challenge (our thanks to Tony “Tibs” Ibbs). To clone it, open\na terminal window on your computer, change directory to somewhere you’re happy to put it, and enter:\n\n git clone git@github.com:tibs/roman_adding.git\n If you don’t have Git, you can download it as a zip file at\n\n\n\n (click on the green ‘Clone or download’ link).\n\n To run the unit tests, simply run the test_roman_adding.py file. 
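For a flavour of what those unit tests exercise, here is a toy roman-numeral converter of our own - purely illustrative, and not the code from the roman_adding repo:

```python
# A toy roman-numeral converter, only to illustrate the kind of behaviour
# the repo's unit tests check. This is NOT the workshop repo's code.
VALUES = [(10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]

def to_roman(n):
    out = []
    for value, numeral in VALUES:
        # Greedily take the largest numeral that still fits.
        while n >= value:
            out.append(numeral)
            n -= value
    return ''.join(out)

print(to_roman(4), to_roman(9), to_roman(14))  # IV IX XIV
```

A test suite for code like this would simply assert on a handful of known conversions, which is exactly the shape of run the control panel will trigger.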
To make this more interesting, let’s add a random calculation error into the code.\n\n In roman_adding.py, import the random module and change the calculation from sum = number1 + number2\n to\n\n sum = number1 + number2 + random.choice(['', 'I'])\n In 1 out of every 2 runs, this will randomly add 1 (that is, I) to the result of the calculation. So 1 in 2 of the\nunit tests should now fail. We’re going to use the Uplink to allow Anvil to trigger test runs on your machine from the web app.\n\n First, configure your app to use the Uplink by clicking on Uplink... in the Gear Menu \n\n and clicking the ‘enable’ button in the modal that comes up:\n\n \n\n A random code will be displayed that allows your local script to identify your Anvil app.\n\n On your computer, install the Anvil server module using pip: pip install anvil-uplink\n (As always, I suggest you do this in a Python Virtual Environment.)\n\n Now create a file in the Roman Adding repo called something like connect_to_anvil.py. Add these lines at the top: import anvil.server\n\nanvil.server.connect(\"<The Uplink key for your app>\")\n Where <The Uplink key for your app> is the key you got from the Uplink... modal. Within this script, define a function to call from your app:\n\n @anvil.server.callable\ndef run_tests(times_to_run):\n print('Running tests...')\n results = []\n for i in range(times_to_run):\n print(\"Run number {}\".format(i))\n return results\n \nanvil.server.wait_forever()\n The @anvil.server.callable decorator makes it possible to call this function from the browser code. In the Anvil Editor, add an\n anvil.server.call to the top of your click handler: import anvil.server\n# ...\n def run_tests_button_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n results = anvil.server.call('run_tests', self.run_number_box.text)\n Run connect_to_anvil.py. 
You should see something like Connecting to wss://anvil.works/uplink\nAnvil websocket open\nAuthenticated OK\n Now when you click on the Run Tests button, your script will print this to your terminal:\n\n Running tests...\nRun number 0\nRun number 1\n Now to make your Uplink script actually run the unit tests.\n\n Add a call to the unittest module inside the loop in the run_tests function and append the results to the results list: import unittest\nfrom datetime import datetime\n\n # .. and inside the run_tests function ...\n for i in range(times_to_run):\n # run the tests\n result = unittest.main(module='test_roman_adding', exit=False).result\n \n # unpack the results a bit\n failed = len(result.failures)\n errors = len(result.errors)\n passed = result.testsRun - failed - errors\n\n # Create an entry in the results list\n results.append({\n 'date_time': datetime.now(),\n 'tests_run': result.testsRun,\n 'passed': passed,\n 'failed': failed,\n 'errors': errors,\n })\n Now in your app, you replace the code that concocts fake data with the call to your actual test runner:\n\n def run_tests_button_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n results = anvil.server.call('run_tests', self.run_number_box.text)\n self.repeating_panel_1.items = results + self.repeating_panel_1.items\n You’ve now got a web app that runs tests on a remote machine and displays the results!\n\n \n\n Currently, when the app is reloaded in the browser, the existing test results are cleared from memory. It would be nice\nif your test results could persist.\n\n In the Editor, click on the plus next to ‘Services’ and add the Data Tables Service. Add a table named test_results.\nAdd a ‘Date and Time’ column called date_time and ‘Number’ columns called tests_run, passed, failed and errors. \n\n We’ll access this table from a Server Module. A Server Module is Python code that runs in a Python runtime on a server \nmanaged by us. 
Click on the plus next to ‘Server Modules’ in the Editor. You’ll see a code editor with a yellow background, \ndenoting the Anvil server environment.\n\n Functions in here can be decorated as @anvil.server.callable just like functions in your Uplink script. Write a simple\nfunction to get the data from the Data Table: @anvil.server.callable\ndef get_test_results():\n return app_tables.test_results.search(tables.order_by('date_time', ascending=False))\n To put the data into the Data Table, we’ll use another tiny function:\n\n @anvil.server.callable\ndef store_test_results(results):\n for result in results:\n app_tables.test_results.add_row(\n date_time=result['date_time'],\n tests_run=result['tests_run'],\n passed=result['passed'],\n failed=result['failed'],\n errors=result['errors'],\n )\n (If you’re quite familiar with Python you might spot that you could just do app_tables.test_results.add_row(**result)) So we have functions for storing test results and retrieving them. Since they are anvil.server.callable, they can be\ncalled from anywhere, so both the browser and connect_to_anvil.py can use them to persist data. When the connect_to_anvil.py script has finished running tests, it needs to run the store_test_results function. Add that call in now: @anvil.server.callable\ndef run_tests(times_to_run):\n # ... run the tests, then ...\n anvil.server.call('store_test_results', results)\n return results\n And when the app starts up, it should retrieve the historical test results from the Data Table:\n\n class Form1(Form1Template):\n\n def __init__(self, **properties):\n # ...\n self.repeating_panel_1.items = list(anvil.server.call('get_test_results'))\n Now your test results are stored between sessions. Try launching a few test runs with your app and refreshing the page. The previous \nruns are still there, and when you trigger new runs they get added to the list.\n\n \n\n And that’s it! 
You’ve just built the foundation of a Continuous Integration platform in Anvil.\n\n It connects to a remote machine, runs a script that you might typically find in a build-test-deploy pipeline, and stores\ndata about the runs so users can see what’s going on.\n\n Connecting your app to an arbitrary Python process opens up an infinity of possibilities.\n\n You could construct a more elastic build system by spinning up cloud servers (using, say, the AWS Boto3 module from Anvil’s Server Modules) and connecting\nto them with the Uplink to run build scripts.\n\n You could connect to an Internet of Things gateway and use your app to manage your devices.\n\n You can even run an interactive terminal session in the browser.\n\n If you can do it in Python, you can connect it to your Anvil app.\n\n Every app in Anvil has a URL that allows it to be imported by another Anvil user.\n\n Click the following link to clone the finished app from this workshop.\n\n \n\n \nIf you want to run the cloned version, you need to enable the Uplink as detailed in Step 5. Your app is live on the internet already (find out more).\n\n If you’ve got this far, you might enjoy figuring out how to grow your app further. Alternatively, take a look at the TODO list workshop and the Data Dashboard workshop.\n", "tags": ["workshop"], "path": "/blog/workshop-test-manager" }, { "title": "TODO List App", "snippet": "An example of a CRUD (Create, Read, Update, Delete) app.\n\nThis could form the basis of an ecommerce app, a Customer Relationship Manager (CRM), or anything that manages data.\n", "content": " Anvil is a tool for building full-stack web apps with nothing but Python and a drag-and-drop designer. Learn more on our website, or sign up and try it yourself -- it's free!\n We’re going to build a To-Do list app with Anvil, and publish it on the Web, using nothing but Python. First, name the app. Click on the name at the top of the screen and type in a name like ‘TODO List’. Then, using the tool on the right, enter a title into the text section. 
\n\n Add a Card to the page. Inside the card, put a Label, a TextBox, and a Button.\n\n Set the Label’s text to say New Task and set its role to subheading. Set the TextBox’s name to new_task_box. Rename the Button to add_btn, set its text to add and align it to the right. We’ve just designed a data entry UI for adding tasks to the TODO list. Now, create a handler for the Button’s click event, and inside it call the built-in alert function: alert(self.new_task_box.text, title=\"new task\")\n Run the app. When you click the button, you’ll get a dialog box displaying the text you entered into the text box.\n\n We’ve built an app with some UI that echoes back what you enter into it, in an alert box.\n\n Now we’ll put the TODO items into the database.\n\n Click on the + next to ‘Services’ in the panel on the left. Click on ‘Data Tables’.\n\n Add a table called ‘tasks’.\n\n Add columns called ‘title’ (type: text) and ‘done’ (type: True/False).\n\n \n\n Now you need to hook the button up so that it adds a row to the table.\n\n Click on the + next to ‘Server Modules’ in the panel on the left. You’ll see some code with a yellow background. Write this function:\n\n @anvil.server.callable\ndef new_task(title):\n app_tables.tasks.add_row(title=title, done=False)\n \nThis function runs in a Python runtime on a server. The @anvil.server.callable decorator means it can be called from the client. Go back to Form1 and delete the alert from add_btn_click. In its place, write these two lines: anvil.server.call('new_task', self.new_task_box.text)\n self.new_task_box.text = \"\"\n Now hit ‘run’, fill in some TODO items and click the Button. Stop the app and look in the Data Table - you should\nsee your TODO items there.\n\n You should now have a data-entry app that can record new tasks. Next, we’ll display the tasks within the app.\n\n In your Server Module, write:\n\n @anvil.server.callable\ndef get_tasks():\n return app_tables.tasks.search()\n This fetches every row from the tasks table (the actual data is loaded just-in-time). Now go back to Form1. 
Add these three lines to the end of the __init__ method: tasks = anvil.server.call('get_tasks')\n for row in tasks:\n print(row['title'])\n If you run this app, it will print all the tasks in your database, in the Output window.\n\n Add a new card above the “new task” card.\n\n Add a Label to it, with text as Tasks and role as Subheading. Add a RepeatingPanel to this card. Double-click on the RepeatingPanel to edit its template. (If Anvil asks, say that you’ll be displaying rows from the Tasks table.)\n\n \n\n Add a CheckBox to this template.\n\n Go to the Properties section and add two data bindings:\n the text property to self.item['title']\n the checked property to self.item['done']. Ensure the box marked Write back is checked. \n\n Go to Form1 and delete the two lines of the for loop. Put this line in their place: self.repeating_panel_1.items = tasks\n Run your app to see all the tasks from your database.\n\n If you try to check one of the CheckBoxes, you’ll see a “Permission Denied” error - something like this:\n\n \n\n That’s because the data is currently read-only. We’ll fix that in the next section.\n\n The error occurs because we enabled write back in the Data Binding for self.item['done'].\nThis means that, whenever the user checks or unchecks the CheckBox, Anvil runs: self.item['done'] = self.check_box_1.checked\n which updates the database. That’s great, but when we returned those tasks from the server module, we returned read-only database rows. So we get a “permission denied” error when we try to update one.\n\n To fix this, we can return client-writable rows from the server.\n\n Go back to the Server Module, and change the get_tasks() function to this: @anvil.server.callable\ndef get_tasks():\n return app_tables.tasks.client_writable().search()\n Now run the app and check and uncheck those CheckBoxes. The app will update the done column in the Data Table accordingly. 
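The write-back mechanism described above can be modelled in a few lines of plain Python - a sketch only, not how Anvil's real CheckBox is implemented:

```python
# Minimal stand-in for a CheckBox with a write-back binding on 'checked'.
# The bound item is a plain dict here, standing in for a database row.
class BoundCheckBox:
    def __init__(self, item):
        self.item = item              # the bound row
        self.checked = item['done']   # initial read through the binding

    def user_clicks(self, value):
        self.checked = value
        self.item['done'] = value     # the write-back step

row = {'title': 'Buy milk', 'done': False}
box = BoundCheckBox(row)
box.user_clicks(True)
print(row)  # {'title': 'Buy milk', 'done': True}
```

This is why the row must be writable: the assignment back into the bound item is where the "Permission Denied" error comes from when the row is read-only.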
So far so good, but when you add a new task, it doesn’t show up!\n\n That’s because we only fetch the list of tasks once, when we start up. \nLet’s put that refresh code into its own method ( self.refresh()), and call it when we add a new task, as well as on startup. Here’s a full code listing with that modification applied:\n\n class Form1(Form1Template):\n\n def __init__(self, **properties):\n # Set Form properties and Data Bindings.\n self.init_components(**properties)\n\n # Any code you write here will run when the form opens.\n self.refresh()\n\n def refresh(self):\n tasks = anvil.server.call('get_tasks')\n self.repeating_panel_1.items = tasks\n\n def add_btn_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n anvil.server.call('new_task', self.new_task_box.text)\n self.refresh()\n Now we just need to be able to delete items and we have a full CRUD app.\n\n We’re going to add a Button to each TODO item that allows you to delete that item.\n\n Go to the Design View for Form1, and double-click on the RepeatingPanel to edit its ItemTemplate.\n\n \n\n Add a Button from the ToolBox and style it as you think a delete button should look.\n\n Create a click handler for it in the same way as for the ‘add’ button in Step 1. This creates an auto-generated\nmethod on ItemTemplate1.\n\n Remove the pass statement and write self.item.delete() in its place. The self.item of ItemTemplate1 is a Python \nobject representing a row from the database. Calling its delete method deletes it from the database. After that line, write self.remove_from_parent(). This removes the present instance of ItemTemplate1 from the \nRepeatingPanel it belongs to. 
The final click handler is:\n\n def delete_btn_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n self.item.delete()\n self.remove_from_parent()\n Congratulations - you’ve now written a full CRUD application!\n\n \n\n This pattern can be adapted to any application that requires storage of relational data. In fact, you can literally copy this app and modify it to suit your use-case (see the end of this tutorial to find out how.)\n\n It’s already published online at a private, unguessable URL. You can also publish it on a public URL using the Publish App dialog\nfrom the Gear Menu : \n\n Next, we’ll make users sign-in and give them separate TODO lists.\n\n Click on the + next to ‘Services’ on the left and click on ‘Users’ to add the Users service. Disable the check box marked ‘Allow visitors to sign up’. We’ll enable this in Step 7, but for now we’ll add users manually.\n\n You’ll see a screen with a table at the bottom headed ‘Users’, with columns ‘email’ and ‘enabled’. This table is also present in the Data Tables window.\n\n Add a couple of users manually by simply filling in the table. Remember to check the checkbox in the enabled column! Set a password for your users by clicking the arrow next to their row and clicking ‘set password’. This will add a column for the password hash, and populate it automatically based on a password you enter.\n\n \n\n In the __init__ for Form1, call anvil.users.login_with_form(): class Form1(Form1Template):\n def __init__(self, **properties):\n # Set Form properties and Data Bindings.\n self.init_components(**properties)\n # Any code you write here will run when the form opens.\n anvil.users.login_with_form()\n Now a login form is displayed when anybody accesses the app.\n\n \n\n Of course, someone might bypass this login box by tinkering with the page source in their browser. 
To be properly secure, we need to enforce access control on the server.\n\n So in the Server Module, add an if statement to each of your functions that checks if a user is logged in before reading/creating tasks: @anvil.server.callable\ndef new_task(title):\n if anvil.users.get_user() is not None:\n app_tables.tasks.add_row(title=title, done=False)\n\n@anvil.server.callable\ndef get_tasks():\n if anvil.users.get_user() is not None:\n return app_tables.tasks.client_writable().search()\n Now a user has to log in before they can see the tasks and add/edit them.\n\n Now you have a single TODO list that can be accessed by a restricted set of users.\n\n What if you want to give each user their own TODO list?\n\n Let’s restrict each user’s view so that their own list is private to them.\n\n Add a new column to the ‘tasks’ table called ‘owner’. When selecting the data type, use ‘Link to table’ and select Users->Single Row.\n\n \n\n Modify the new_task server function to fill out the owner to be the current user: @anvil.server.callable\ndef new_task(title):\n if anvil.users.get_user() is not None:\n app_tables.tasks.add_row(\n title=title,\n done=False,\n owner=anvil.users.get_user()\n )\n To ensure the logged-in user sees only their tasks, restrict the client-writable view to only rows where the owner is the current user: @anvil.server.callable\ndef get_tasks():\n if anvil.users.get_user() is not None:\n return app_tables.tasks.client_writable(owner=anvil.users.get_user()).search()\n Now add a new user manually into the table and log in as them.\n\n You should see an empty TODO list. Add tasks as normal - they show up as normal. 
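The effect of the owner-restricted view can be pictured in plain Python, with dicts standing in for table rows - an illustration of the idea, not Anvil's implementation:

```python
# Plain-Python picture of a per-user view over a shared tasks table.
tasks = [
    {'title': 'Buy milk', 'owner': 'alice'},
    {'title': 'Ship release', 'owner': 'bob'},
    {'title': 'Water plants', 'owner': 'alice'},
]

def tasks_for(user):
    # Only rows owned by `user` are visible (or writable) in this view.
    return [t for t in tasks if t['owner'] == user]

print([t['title'] for t in tasks_for('alice')])  # ['Buy milk', 'Water plants']
```

Because the filtering happens on the server, a user tinkering with their browser still cannot see or modify anyone else's rows.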
Check the database - you can see that they’ve been added to the ‘tasks’ table and they’re linked to the new user.\n\n \n\n Now that users can’t see each others’ tasks, it’s safe to enable the Sign Up functionality.\n\n Check the box in the Users screen marked ‘Allow visitors to sign up’.\n\n \n\n Run your app again and you’ll find that the login screen includes a signup process with email verification.\nSo now users have their own private TODO lists and new users can sign up.\n\n Congratulations, you’ve just built a multi-user CRUD app in Anvil! Anvil is a tool for building full-stack web apps with nothing but Python and a drag-and-drop designer. Learn more on our website, or sign up and try it yourself -- it's free!\n We’re going to build a data dashboard with Anvil, and publish it on the Web, using nothing but Python.\n\n It will display weather data over time for a chosen day in Cambridge, UK. First, name the app. Click on the name at the top of the screen and type in a name like ‘Data Dashboard’. Then enter a title for the page into the text section. \n\n Add a Card to the page. Inside the card, put a Label, a DatePicker, and a Button.\n\n Rename the Button to add_btn and set its text to ‘Add plot’.\n\n Now to make the button do something.\n\n At the bottom of the Properties panel, create a handler for the Button’s click event. Inside it, call the built-in alert function: alert(str(self.date_picker_1.date), title=\"New plot\")\n When you click the button, you’ll get a dialog box displaying the date selected in the DatePicker.\n\n We’ve built the basis of our control panel and made it do something. Currently it just pops up an alert. Let’s make it actually create some plots and display them on the page.\n\n We’ll use Plotly to create the plots.\n\n Go to the Code view and add this import statement at the very top:\n\n from plotly import graph_objs as go\n Delete the call to alert you added in step 1 and put this in its place: new_plot = Plot(\n data=go.Scattergl(\n x=[0, 1, 2, 3, 4],\n y=[0, 1, 4, 9, 16],\n )\n )\n Now drag-and-drop a GridPanel onto the Form. (GridPanels can be found under ‘See more components…’ in the toolbox):\n\n \n\n Write this line below the lines you just wrote. 
It adds the plot to the GridPanel, taking up half the width (6 out of 12 columns):\n\n self.grid_panel_1.add_component(new_plot, width_xs=6)\n When you click the add button in the live app, you should see plots appearing on the screen.\n\n \n\n Now we’re making plots, but it would be nice to have some meaningful data to put in them.\n\n We’re going to use the data from the Cambridge University Digital Technology Group’s weather station.\n\n Daily tab-separated value files are available at URLs such as\n\n\n\n Click on the + next to ‘Server Modules’ in the panel on the left. You’ll see some code with a yellow background. This function requests data from the weather API. Copy-paste it into the Server Module:\n\n import anvil.http\n\n@anvil.server.callable\ndef get_weather_for_day(dt):\n # Construct a URL based on the datetime object we've been given.\n url = dt.strftime('')\n\n # Get the raw data by making an HTTP request.\n raw_data = anvil.http.request(url).get_bytes()\n \n print(raw_data)\n Go back to Form1 and write this line at the top of add_btn_click: anvil.server.call('get_weather_for_day', self.date_picker_1.date)\n Run the app. When you click the ‘add’ button, you should now see the data printed on the Output console.\n\n At present, the data is a string containing the tab-separated values. It needs to be parsed and cleaned to get it into\na good format for plotting.\n\n Given that we’re running in a real Python server environment, the range of possibilities for analysing this data is \nhuge. 
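Before wiring anything into Anvil, you can get a feel for the file format with a few lines of plain Python on a synthetic sample. The column names below are invented for illustration; the real station files have many more columns:

```python
from datetime import datetime

# Synthetic sample in the station's shape: commented header lines,
# then tab-separated rows starting with an HH:MM timestamp.
raw = "# Cambridge weather (sample)\n" \
      "#Time\tTemp\tHumid\n" \
      "00:00\t10.5\t81\n" \
      "00:30\t10.1\t83\n"

# Drop comment lines, then split each data row on tabs.
rows = [line.split('\t') for line in raw.splitlines()
        if line.strip() and not line.startswith('#')]
times = [datetime.strptime(r[0], '%H:%M').time() for r in rows]
temps = [float(r[1]) for r in rows]
print(times[0], temps)  # 00:00:00 [10.5, 10.1]
```

The server function in the workshop follows the same pattern, with extra cleaning steps because the real files carry their column headers and units in the final two comment lines.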
In order to focus on using Anvil, we’ll just wrangle the existing data into a useful data structure, and we’ll \ngive you that function verbatim:\n\n from datetime import datetime\n\ndef parse_data(data, dt):\n \"\"\"Parse the raw weather data string into a list of times, and a dict of lists of meteorological variables.\"\"\"\n # First, split the data into columns\n all_rows = data.split('\\n')\n all_rows = [r.strip() for r in all_rows if r.strip()]\n\n # Then, exclude every row starting with '#'\n data_rows = [r for r in all_rows if not r.startswith('#')]\n\n # Then split rows on tab character.\n data_rows = [r.split('\\t') for r in data_rows]\n\n # The headers are the penultimate commented line.\n headers = [r.split('\\t') for r in all_rows if r.startswith('#')][-2]\n # Clean the headers a bit\n headers = [h.strip('#').strip() for h in headers]\n\n # The units are the final commented line.\n units = [r.split('\\t') for r in all_rows if r.startswith('#')][-1]\n # Clean the units a bit\n units = [u.strip('#').strip() for u in units]\n\n # Parse out the date time\n time_data = [datetime.strptime(x[0], '%H:%M').replace(year=dt.year, month=dt.month, day=dt.day) for x in data_rows]\n\n # Construct the dictionary of y-axis variables\n y_data = {\n '{} ({})'.format(header, units[x+1]): [r[x+1] for r in data_rows]\n for x, header in enumerate(headers[1:])\n }\n \n # These two variables don't scatter plot very well, so let's discard them.\n del y_data['Start ()']\n del y_data['WindDr ()']\n\n return time_data, y_data\n The return values are:\n time_data - a list of datetime objects, one per reading, for the x-axis\n y_data - a dict mapping '<variable> (<unit>)' labels to lists of readings, for the y-axes\n Time to plot the data.\n\n Go to Form1 and open the Code view.\n\n At the very top, import random: import random\n Delete the contents of add_btn_click. 
Put this server call in its place: time_data, y_data = anvil.server.call('get_weather_for_day', self.date_picker_1.date)\n After that line, use random.choice to select a weather variable at random: variable = random.choice(list(y_data.keys()))\n After that line, create a new plot using the weather data:\n\n new_plot = Plot(\n data=go.Scattergl(\n x=time_data,\n y=y_data[variable],\n ),\n layout={\n 'xaxis': {'title': 'Time'},\n 'yaxis': {'title': variable},\n 'title': variable,\n }\n )\n Then write another line to add this plot to the GridPanel:\n\n self.grid_panel_1.add_component(new_plot, width_xs=6)\n Run the app and click ‘Add plot’ a couple of times. Here’s what you get when you do that:\n\n \n\n We need to make the plots show up more quickly if we’re going to be able to change what they show at the touch\nof a button. We’ll write 9 lines of code that cache the data in the browser’s memory.\n\n In the panel on the left, click on the + next to ‘Modules’ and create a module named ‘Data’. Add the following code, which calls the get_weather_for_day method and stores the result in memory: time_data = []\ny_data = {}\n\ndef update(dt):\n global time_data\n global y_data\n time_data, y_data = anvil.server.call_s('get_weather_for_day', dt)\n ( anvil.server.call_s is the same as anvil.server.call but without the spinner.) Now you only need to fetch the data when the app is started.\n\n Go to the code of Form1. At the very top, import the Data module: import Data\n Remove the anvil.server.call line from the add_btn_click method. Write this line at the bottom of the __init__ method: Data.update(self.date_picker_1.date)\n Now look at the add_btn_click method. Everywhere you see time_data and y_data, replace it with Data.time_data and Data.y_data. Run your app again and try adding plots. 
It should work as before, except adding plots is a lot faster!\n\n Let’s make it possible to select which variable each plot shows.\n\n Create a new Form - select Blank Panel rather than Standard Page.\n\n \n\n Rename the new Form to ‘TimePlot’.\n\n Drag-and-drop a Card into the new Form and add a Label, DropDown and Spacer.\n\n Change the Label’s text to ‘y-axis:’. Set the Label’s role to input-prompt and adjust the sizes of things to suit your aesthetic taste. Now drag-and-drop a Plot object into the bottom of the card. Your UI should look something like this in the Design view:\n\n \n\n Click on ‘Code’ to see the code for this Form.\n\n Add some imports to the very top:\n\n from plotly import graph_objs as go\nimport random\nimport Data\n Inside the TimePlot class, write this method: def plot(self):\n variable = random.choice(list(Data.y_data.keys()))\n self.plot_1.layout.xaxis.title = 'Time'\n self.plot_1.layout.yaxis.title = variable\n self.plot_1.layout.title = variable\n \n self.plot_1.data = go.Scattergl(\n x=Data.time_data,\n y=Data.y_data[variable],\n )\n Call this method from the __init__ method of TimePlot: class TimePlot(TimePlotTemplate):\n def __init__(self, **properties):\n # Set Form properties and Data Bindings.\n self.init_components(**properties)\n\n # Any code you write here will run when the form opens.\n self.plot()\n The Button now just needs to create TimePlots when it is clicked.\n\n Go to Form1 and open the Code view.\n\n Import the TimePlot Form at the very top: from TimePlot import TimePlot\n Now delete all the code inside add_btn_click and write these two lines in its place: new_plot = TimePlot()\n self.grid_panel_1.add_component(new_plot, width_xs=6)\n Your plots should now be appearing inside a Card, with an empty y-axis dropdown above them:\n\n \n\n Let’s make that dropdown do something.\n\n Go back to the TimePlot Form, and open its Code view.\n\n In the __init__ method, add these lines before the call to self.plot(): 
self.drop_down_1.items = list(Data.y_data.keys())\n self.drop_down_1.selected_value = random.choice(self.drop_down_1.items)\n And change the first line of the plot method like this: def plot(self):\n variable = self.drop_down_1.selected_value\n # ...\n Now bind an event handler to the DropDown’s ‘change’ event in the same way as you did for the Button’s ‘click’ event in section 1:\n\n \n\n Now add a call to self.plot() inside drop_down_1_change : def drop_down_1_change(self, **event_args):\n \"\"\"This method is called when an item is selected\"\"\"\n self.plot()\n Run the app again. Each plot should now have a working DropDown that can select which variable is plotted on the y-axis.\n\n \n\n Removing plots from the page is simple.\n\n Add a button to the TimePlot Form and style it to look like a delete button.\n\n Configure a click handler for the button. Make the click handler call self.remove_from_parent(). That’s all there is to it!\n\n \n\n One more thing remains. The data should get updated when the DatePicker on Form1 is changed.\n\n Create an event handler for its change event. Inside the date_picker_1_change method, add this line: Data.update(self.date_picker_1.date)\n Followed by these lines:\n\n for component in self.grid_panel_1.get_components():\n if isinstance(component, TimePlot):\n component.plot()\n Run your app again - you’ll find you can select a date from the DatePicker and see all the plots update to show the data for that date.\n\n Congratulations, you’ve just built a data dashboard in Anvil!\n\n A dashboard is much more useful if it can give you a live feed of the data as it comes in. 
If you’ve got this far,\nmaybe you can figure out how to get the plots to refresh as new data is uploaded to the Digital Technology Group’s website.\n\n Hint: There’s a Timer component that triggers a ‘tick’ event after a given number of seconds.\n", "tags": ["workshop"], "path": "/blog/workshop-data-dashboard" }, { "title": "Anvil News - September", "snippet": "This month, we've got new data grids, increased security for your users, and tooltips! Read on for more.", "content": " Do you attend a coding, Python or data-science meetup?\n\n Meanwhile, here’s what’s new in Anvil this month:\n\n Of course, this month’s big news is the release of Data Grids. It’s our\nmost-requested feature, and a massive help to anyone working with tabular data.\nCheck out our announcement and the Getting Started tutorial.\n\n Password security is more important than ever. That’s why we’ve just\nintroduced the “Require Secure Passwords” option in Anvil’s user authentication\nservice.\n\n When you select “Require Secure Passwords”, Anvil prevents your users from re-using a password that’s too short, or has already been leaked in a data breach. We can check for data breaches without revealing passwords to anyone, thanks to the k-anonymity feature from\nHave I Been Pwned. You can learn more about how it works in this blog post from Cloudflare.\n\n “Require Secure Passwords” is enabled by default for all new apps, and\nyou should turn it on for your existing apps too!\n\n Another much-requested feature: Now, almost every component has\na tooltip property. Set it to display some extra text when you hover\nyour mouse over the component! Did you know that you can install Anvil on your own servers? You can now\nget an on-site trial instance running in 10 minutes or less, easier than\never before! To learn more about running Anvil on your corporate network, or on your own cloud servers, contact us at enterprise@anvil.works.\n\n \n\n That’s it for this month. 
If you’ve got any questions, come and ask\nus on the Anvil user forum.\n\n Happy coding!\n", "tags": ["blog"], "path": "/blog/update-18-09" }, { "title": "Announcing Data Grids", "snippet": "Create interactive, paginated tables with minimal effort.", "content": " If you’re writing a web app, chances are you’ll have data in tables. And at some point you’ll probably want to show your users\na table of data they can page through.\n\n Doing that in Anvil just got even easier. By popular demand, and without further ado:\n\n Data Grids are an easy way to show data that naturally fits into rows and columns. With built-in support for paging, they\ncan handle large numbers of rows while remaining performant.\n\n Data Grids can be populated from any source. It’s easy to link them up to your Data Tables:\n\n \n\n self.repeating_panel_1.items = app_tables.employees.search()\n \n\n or you could set up some tabulated data yourself:\n\n self.repeating_panel_1.items = [\n {'ingredient': 'Flour', 'weight': 225, 'cost': 150},\n {'ingredient': 'Milk', 'weight': 150, 'cost': 100},\n {'ingredient': 'Eggs', 'weight': 175, 'cost': 80},\n {'ingredient': 'Butter', 'weight': 175, 'cost': 250},\n {'ingredient': 'Maple Syrup', 'weight': 250, 'cost': 300},\n ]\n \n\n All of this is covered in our first Data Grid tutorial: Getting Started with Data Grids.\n\n You can also get more advanced. For example, you can nest components inside rows of a data grid to group related data together (as described in our tutorial, Paginated Grouping with Data Grids):\n\n \n\n You can create custom widgets that control the behaviour of the Data Grid itself. You can read how to create a search box, an add button, or a page number selector:\n\n \n\n Data Grids are only limited by your imagination. Anything you can put together out of Anvil components, you can add to a Data Grid row. 
And because this is Anvil, you can write Python code to assemble Data Grids and make them behave however you like.\n\n In the example above, we’ve built a full CRUD app in a single component! We’ve got data from a Data Table, displayed in a Data Grid, with a search box, page size selector, and widgets to add, edit, and delete rows.\n\n Click the link to open it in the Anvil editor and have a play:\n\n \n\n To find out more, take a look at our Data Grid tutorials:\n In the Data Grids Getting Started tutorial, we created a simple paginated table that \npresented employee data.\n\n Then, the Add Button Tutorial showed how to give the user a widget to add new entries with.\n\n This is great, but what if I accidentally add the wrong data? Usually you’ll want to allow editing the data and deleting \nrows entirely as well. In this tutorial, we’ll modify the Data Grid to allow just that.\n\n \n\n We’ll start with the endpoint of the Add Button Tutorial, and try two approaches to\nmaking rows user-editable. We’ll also create a delete button on each row.\n\n To follow along, clone the Data Grids example app with the add button - it’s the starting point you’ll build from.\n\n \n\n \nIf you’ve not done the Data Grids Getting Started tutorial, you may find it better to\ngo through that before following this. A simple way to allow records to be edited is to make all rows editable all the time.\n\n Double-click on the Repeating Panel to open its RowTemplate. 
You can drag-and-drop TextBoxes and\nDropDowns into the Repeating Panel, in much the same way as you did for the ‘add’ row.\n\n \n\n This time, the new components need to have their contents populated from the existing data.\n\n The Employee TextBox needs a Data Binding that binds its text property to the employee name: \n\n The Grade TextBox needs a Data Binding that binds its text property to the pay grade: \n\n The Team DropDown’s selected_value should be self.item['team'], so you need to set a Data Binding up for that: \n\n The Team DropDown also needs to have a list of all available teams, for the user to choose between. We need to fetch this\nlist from the employees table. There is one DropDown per row, and we don’t want to waste time fetching the data from the server once for every single DropDown, \nso we’ll need to store the team list in a global variable. Create a Module called State and add some code to fetch the set of teams, sorted alphabetically: # Get a set of all teams\nteams = {employee['team'] for employee in anvil.server.call('get_employees')}\n# Cast to a list and sort alphabetically\nteams = sorted(list(teams))\n Now you can import this in RowTemplate1 and initialise the Team DropDown from it: from State import teams as TEAMS\n\nclass RowTemplate1(RowTemplate1Template):\n def __init__(self, **properties):\n # ...\n self.drop_down_team_edit.items = TEAMS\n By this stage, you have populated your editable components in your table. 
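The deduplicate-and-sort idiom used for the team list works on any iterable of records. Here is a plain-Python sketch with stand-in data in place of the server call (the sample employee records are hypothetical):

```python
# Stand-in for anvil.server.call('get_employees') - hypothetical sample records
employees = [
    {'name': 'Alice', 'team': 'Sales'},
    {'name': 'Bob', 'team': 'Engineering'},
    {'name': 'Carol', 'team': 'Sales'},
]

# The set comprehension removes duplicate team names
teams = {employee['team'] for employee in employees}
# Cast to a list and sort alphabetically, ready for a DropDown's items
teams = sorted(list(teams))
print(teams)  # ['Engineering', 'Sales']
```

Because a set has no order, the final `sorted` call is what guarantees a stable, alphabetical DropDown.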
But once they are changed, how do you get the data back into\nthe database?\n\n You can’t use the ‘write back’ feature of data bindings because 1) you munge the data slightly before you display it, and\n2) your employee data is sensitive, so you don’t want to allow the client to write to the database - that’s why you’ve got\nthe DatabaseProxy Server Module mediating access to the data. Instead, write event handlers that call a server function whenever the user edits a component:\n\nclass RowTemplate1(RowTemplate1Template):\n # ...\n def text_box_employee_edit_lost_focus(self, **event_args):\n \"\"\"This method is called when the text box loses focus.\"\"\"\n self.edit_employee()\n\n def drop_down_teams_edit_change(self, **event_args):\n \"\"\"This method is called when the drop down is changed.\"\"\"\n self.edit_employee()\n\n def text_box_grade_edit_lost_focus(self, **event_args):\n \"\"\"This method is called when the text box loses focus.\"\"\"\n self.edit_employee()\n \n def edit_employee(self):\n first_name, last_name = parse_employee_name(self.text_box_employee_edit.text)\n anvil.server.call(\n 'edit_employee',\n self.item,\n first_name=first_name,\n last_name=last_name,\n team=self.drop_down_teams_edit.selected_value,\n pay_grade=self.text_box_grade_edit.text,\n )\n Bind these event handlers to the appropriate components using the Properties window.\n\n To edit the data, they call a simple server function - you need to add this to the DatabaseProxy Server Module: @anvil.server.callable\ndef edit_employee(employee, first_name, last_name, team, pay_grade):\n employee.update(first_name=first_name, last_name=last_name, team=team, pay_grade=pay_grade)\n And that’s all there is to making the rows editable. Now you have a table where each of the rows is made of editable fields:\n\n \n\n \n\n \nBeing able to edit the data after you’ve added it is a big improvement. \nBut you might prefer to make most of the table read-only until the user explicitly decides to click an ‘edit’ button. Maybe you prefer to make the user click a button to make a row editable. 
This puts more clicks in the user workflow,\nbut it stops the user accidentally editing stuff if they’re a bit click-happy.\n\n You’ve got that ‘column for putting buttons into’ on the right of the Data Grid. In your Repeating Panel, this is \ncurrently empty - it’s only used by the Add row for the Add button. Let’s put a Save button into the Repeating Panel.\nYou’ll need to drag-and-drop a FlowPanel into the column first, so that you can put the Delete button in later.\n\n \n\n Each row has two states: ‘being edited’ and ‘not being edited’. For each state, you want to show a different\nset of components. So you need two Data Row Panels in the Repeating Panel. One should be the ‘read view’, showing\nthe employee data when it’s not being edited. The other should be the ‘write view’, showing the employee data using\neditable components.\n\n The ‘write view’ will be how the row was in the ‘Making all rows editable’ section above, and\nthe ‘read view’ will be how the row was before you made it editable, at the very start of this tutorial. The ‘read view’ \nshould have a button with an ‘edit’ icon, and the ‘write view’ should have a button with a ‘save’ icon.\n\n Make sure you add a new Data Row Panel for the write view and drag the existing components into it. You need to add your own\nData Row Panel because you need to be able to refer to it by name in the code.\n\n The pre-existing Data Row Panel that is built in to the Repeating Panel will now not be used. It should have no\ncomponents in it and its auto_display_data should be unchecked. \n\n When the Edit button is clicked, the read view should be hidden, and the write view should be visible:\n\n def button_edit_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n self.data_row_panel_write_view.visible = True\n self.data_row_panel_read_view.visible = False\n The Save button does the opposite. It also includes a call to edit_employee in order to persist the changes to the\ndatabase. 
(The event handlers for the Write View components must be removed, so they don’t write to the database until \nthe Save button is clicked.) The Data Bindings must also be refreshed after save, in order to update the Read View with what’s just been written.\n\n def button_save_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n self.data_row_panel_read_view.visible = True\n self.data_row_panel_write_view.visible = False\n\n self.edit_employee()\n self.refresh_data_bindings()\n That’s the Edit button complete. Now an individual row can be switched into Edit mode, edited, and saved to the database:\n\n \n\n Add a Delete button next to the edit button. You’ll probably need to make the column that the buttons are in a bit wider.\n\n Making the Delete button work is simple. Following the now-familiar pattern, create a server function to access the database on the server side:\n\n @anvil.server.callable\ndef delete_employee(employee):\n employee.delete()\n and call it from an event handler on the client side (which also removes the row from the Repeating Panel):\n\n def button_delete_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n anvil.server.call('delete_employee', self.item)\n self.parent.raise_event('x-refresh-employees')\n That final line triggers a custom event on the Repeating Panel to make sure its employee list is updated\nto reflect the deletion of the row. 
You need to set up an event handler in the __init__ of the outer Form: class Form1(Form1Template):\n def __init__(self, **properties):\n # ...\n self.repeating_panel_employees.set_event_handler('x-refresh-employees', self.refresh_employees)\n And define that refresh_employees method so that it re-loads the employee data into the Repeating Panel: def refresh_employees(self, **event_args):\n self.repeating_panel_employees.items = anvil.server.call('get_employees')\n And now your delete button is hooked up.\n\n You now have a table that can Create, Read, Update and Delete data!\n\n \n\n To explore the final result, clone the finished app:\n\n \n\n \nFeel free to use it and share it - our code examples are open-source under the Apache License. Here are our other tutorials on Data Grids:\n Are you going to PyCon UK in Cardiff next month? We’ll be there, showing people how to build\nfull-stack web apps with nothing but Python, at our stall in the Great\nHall. As always, we’ll give out a nifty Anvil T-shirt to anyone who\nbuilds an app with Anvil during the conference.\n\n This time around, we used SciPy and Anvil to work out how many T-shirts we need to bring. The answer involved a little stats, a little SciPy,\nand a whole lot of Python, so I wrote it up as a blog post. Check it out: How Many T-shirts Do I Need for a Developer Conference?\n\n Meanwhile, here’s what’s new in Anvil this month:\n\n 1. Get the Best out of Material Design\n\n We’ve published a guide to making your apps look their best with Anvil’s\nMaterial Design theme. Change your colour palette, build a sidebar menu,\nor use custom CSS to control your app’s appearance.\n\n 2. Direct SQL access to data tables\n\n Anvil’s internal database is powered by Postgres. Now, if you’re a\nDedicated Server user, you can connect directly to it and write queries\nwith SQL! Look for the check-box in the Data Table configuration:\n\n \n\n 3. 
Embed your Anvil apps in other websites\n\n We’ve made it easier to embed Anvil apps in other websites. When you\nopen the Publish dialog, you now get a checkbox for “Embed this app in\na web page”. Enable it, then copy and paste the HTML snippet into any\nwebsite to include your Anvil app there!\n\n \n\n 4. The Free Plan is bigger and better!\n\n It’s great to see what hobby users, prototypers, educators and students\nhave been doing with Anvil. We’ve made Anvil even better for them, by\nenabling more of our advanced features - like the Uplink and the Users\nservice - on the Free plan. Check it out!\n\n (You’ll still need to upgrade for a full Python server environment, to\nremove the Anvil banners, or to use your own domain name, though.)\n\n 5. Other Updates\n\n As always, we’re making Anvil better all the time. Here are a few things\nyou might have missed:\n\n We’ve added a few more example apps to our examples page - including the time we built prototypes of two Silicon Valley startups in a couple of hours each!\n We’ve made it easier to lay things out in our drag-and-drop designer. Look out for the dotted lines to see which container you’re dropping a container in.\n If you’re storing encrypted App Secrets in your Anvil app, you can now enter longer, multi-line strings. (Great for private SSH keys!)\n The modules for our Google and Facebook integrations have moved - they’re now anvil.google and anvil.facebook. Don’t worry, your existing apps still work! 
Your apps now load a little faster :)\n Happy coding!\n", "tags": ["blog"], "path": "/blog/update-18-08" }, { "title": "Search Hints", "snippet": "A search box that displays results in a list below as queries are typed.\n", "content": " This library provides a search box that displays results in a list below as queries are typed.\n\n \n\n There’s a custom component called SearchHints that you can drag-and-drop onto a Form.\n\n To define the list of possible search results, it has a property called get_keys_function. You should set this to \nthe name of a Server Module function that returns an iterable of results. In the example in the library, the function\nis simply a search on a Data Tables table: @anvil.server.callable()\ndef get_search_keys():\n \"\"\"Get the keys to populate the Search Hints.\"\"\"\n return app_tables.toolbox.search()\n When a result is selected, the SearchHints component raises an event called x-search-hints-result. Set an event handler\non this event to dictate what happens when the result is selected. The first argument to the event handler is the selected\nresult, as a string. In the demo, we set up an event that changes a label’s text to display the selected result:\n\n self.search_hints_1.set_event_handler('x-search-hints-result', self.update_result_label)\n\n def update_result_label(self, result, **event_args):\n \"\"\"Set the result label on this Form to the selected search result.\"\"\"\n self.label_result.text = result\n The SearchHints component is made from a TextBox and a RepeatingPanel.\n\n The RepeatingPanel’s items are kept up-to-date by the populate_results method. This matches the current query \nagainst self.search_keys, or just makes items empty if the query is '' (an empty string). The TextBox’s focus and change events both call populate_results, so the results panel is updated whenever\nthe user types in the TextBox or clicks on it. 
The focus event also runs the Server Module function to get\nthe search keys - so if the underlying data changes, the page stays up-to-date. When a result is selected, the set_result method is called. This simply raises an event called x-search-hints-result\non the SearchHints instance, passing the result as a string. Anything that has access to this SearchHints instance\ncan bind event handlers to this event. Each result is displayed as an instance of SearchHintsResult (SearchHintsResult is the item template of the\nRepeatingPanel). SearchHintsResult contains a link whose text is one of the matches to the search query. When\nthe link is clicked, it tells its parent SearchHints object to run set_result (via a custom event, x-result-selected). Author: David Wylie\n\n \n\n This library contains two useful UI components: a sliding toggle switch\n\n \n\n and a progress bar\n\n \n\n Each is a component that can be dragged onto the page from the Toolbox.\n\n The progress bar is highly configurable; play with the settings in the example app to explore.\nEach of the TextBoxes configures a property of the ProgressBar component, so all of these settings can be controlled\nprogrammatically.\n\n The bar’s style can be ‘smooth’: \n\n Or ‘block’:\n\n \n\n In ‘block’ style, the size of the blocks and the spacing between them can be configured.\n\n Each component has a set of setters and getters for its properties. When each property is updated, an update method\nis called that draws the component on a Canvas. The ToggleSwitch’s smooth sliding animation is accomplished using a Timer. The Timer’s period is 0 normally, meaning\nit does not tick. But when the ToggleSwitch animation is running, the Timer’s period is set to 0.1. This is the time\nbetween each frame of the animation. Every time the Timer’s tick event fires, the canvas is re-drawn with the\nnext frame of the animation. 
Author: David Wylie\n\n \n\n The TokenBox component provided by this library can be filled with tokens, each with its own text.\n\n Clicking the tokens removes them from the TokenBox.\n\n \n\n There’s also a MultiSelectDropDown component that combines a TokenBox with a DropDown.\n\n When items from the DropDown are selected, they go into the TokenBox.\n\n \n\n To set the list of items in the DropDown, set the items property of the MultiSelectDropDown component, either in the Properties panel\nor in code. \n\n The MultiSelectDropDown also has a placeholder property. The DropDown starts with this as the selected_value.\nAfter values are selected, the DropDown’s selected_value goes back to being placeholder. By default, it’s ‘Select a value’. The TokenBox is very simple - it’s a Flow Panel with methods for adding and removing tokens (inventively named add and remove).\nTokens are simply Buttons with the appropriate text and an ‘x’ ( fa:times) icon to show that clicking them deletes them. The add method instantiates the Button, sets its click handler to be the remove method, and adds it to the\nFlow Panel. The remove method just calls remove_from_parent on the Button that was clicked. The TokenBox also has add_callback and remove_callback properties. Assign a function to these, and it will be run at the end of \nthe add method, or the beginning of the remove method. That’s how the MultiSelectDropDown knows when tokens\nhave been removed and can add them back to its items list. The callbacks are passed the Button instance as an argument. The MultiSelectDropDown consists of a TokenBox and a DropDown. When something is selected from the DropDown, its change \nevent triggers. There’s a change event handler that adds the value of the selected item to the TokenBox. At the end of \nthe change handler, the selected item is removed from the DropDown; when a token is removed, the TokenBox adds it back to the DropDown. This is implemented by using add_to_dropdown and remove_from_dropdown\nas the add_callback and remove_callback of the TokenBox. 
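The add/remove callback pattern described above can be sketched in plain Python. This is a simplified model for illustration - real tokens are Anvil Buttons, but here they are just strings:

```python
class TokenBox:
    """Simplified model of the TokenBox's callback hooks."""
    def __init__(self):
        self.tokens = []
        self.add_callback = None     # run at the end of add()
        self.remove_callback = None  # run at the beginning of remove()

    def add(self, token):
        self.tokens.append(token)
        if self.add_callback is not None:
            self.add_callback(token)

    def remove(self, token):
        if self.remove_callback is not None:
            self.remove_callback(token)
        self.tokens.remove(token)

# A MultiSelectDropDown-style owner watches the box via the callbacks:
available = ['red', 'green', 'blue']
box = TokenBox()
box.add_callback = lambda t: available.remove(t)     # selected items leave the dropdown
box.remove_callback = lambda t: available.append(t)  # removed tokens go back

box.add('green')
print(available)  # ['red', 'blue']
box.remove('green')
print(available)  # ['red', 'blue', 'green']
```

The TokenBox never needs to know about the DropDown; the owner just plugs the right behaviour into the two callback slots.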
Translating text is a central part of localisation/internationalisation. If you want your app to be available\nto speakers of multiple languages, you need a way to present written text in their language.\n\n This library is a framework for translating text in Anvil apps. You define a dictionary of translations,\ntell the library which properties contain text to be translated, then call a function to translate the page.\n\n The Translations module provides the functionality. To set the translation engine up, you define a dictionary and pass it to set_dictionary: FRENCH_LOCALE = {\n \"Hello, World!\": \"Bonjour, le monde!\",\n \"Click Here\": \"Cliquez Ici\",\n \"Cancel\": \"Annuler\",\n \"Close\": \"Fermer\",\n \"Apply\": \"Appliquer\",\n \"OK\": \"OK\",\n \"Some text to translate\": u\"Du texte à traduire\",\n \"Language:\": \"Langue:\",\n \"Title\": \"Titre\",\n}\nTranslations.set_dictionary('FR', FRENCH_LOCALE)\n For each component property you want to translate, you can register it with the translation library:\n\n Translations.register_translation(self.label_language, 'text')\n The component is passed in as an object, and the property name is a string, just like the signature of getattr. To perform a translation, you use Translations.set_locale, for example: Translations.set_locale('FR')\n This translates every registered property using the chosen dictionary. If you need different lookup behaviour, you may\nwant to modify the logic in the Translations.translate function. 
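The register-and-translate pattern this library implements can be sketched in a few lines of plain Python. This is a simplified reimplementation for illustration, not the library’s actual code:

```python
class Translator:
    """Simplified model of the dictionary-based translation pattern."""
    def __init__(self):
        self.dictionaries = {}  # locale code -> {source text: translated text}
        self.registered = []    # (component, property name, original text)

    def set_dictionary(self, locale, mapping):
        self.dictionaries[locale] = mapping

    def register_translation(self, component, prop):
        # Remember the original text so we can re-translate on locale change
        self.registered.append((component, prop, getattr(component, prop)))

    def set_locale(self, locale):
        mapping = self.dictionaries.get(locale, {})
        for component, prop, original in self.registered:
            # Fall back to the original text if there's no translation
            setattr(component, prop, mapping.get(original, original))

# Usage with a stand-in component:
class Label:
    def __init__(self, text):
        self.text = text

label = Label('Cancel')
t = Translator()
t.set_dictionary('FR', {'Cancel': 'Annuler'})
t.register_translation(label, 'text')
t.set_locale('FR')
print(label.text)  # Annuler
```

Keeping the original text alongside each registration is what makes switching locales repeatedly (or back to the default) safe.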
The Translations", "tags": ["library"], "path": "/blog/library-localisation" }, { "title": "Form Validation", "snippet": "Checks form fields for validity and displays errors for invalid fields.\n", "content": " This library provides a module that checks the contents of components, and displays errors if the contents are invalid.\nIt can also disable components such as submit buttons if the form is invalid.\n\n \n\n There’s a validation module with a Validator class that performs the validation: self.validator = validation.Validator()\n Here’s how you set up validation for a text field:\n\n self.validator.require_text_field(self.my_text_field, self.label_to_display_if_invalid)\n and for a check box:\n\n self.validator.require_checked(self.my_check_box, self.label_to_display_if_invalid)\n You can set up validation for any component by using the general-purpose require method. As well as the component itself, it takes: a list of events that trigger validation, a predicate function that returns True if the component is valid and False otherwise, and a label to display if the component is invalid: self.validator.require(\n self.my_component,\n ['change', 'lost_focus'],\n lambda component: component.text != '',\n self.label_to_display_if_invalid,\n)\n If you want to disable your submit button when the form is invalid:\n\n self.validator.enable_when_valid(self.submit_button)\n This sets the enabled property on the component it’s given, so you can use it for any component with an enabled property. To display the error labels for all invalid fields:\n\n self.validator.show_all_errors()\n The validation.Validator class holds data about the validation config in three attributes: def __init__(self):\n self._validity = {}\n self._actions = []\n self._component_checks = []\n \n\n self._validity maps components to a boolean representing whether that component is valid. \n\n self._actions is a list of functions to call that depend on the whole form’s validity. Each function is passed a boolean that’s\n False if the form is invalid. 
This is used by the enable_when_valid method to disable buttons, but you could put other functions in this list. \n\n self._component_checks is a list of functions that get called in order when the form is validated. The require method populates this list with the validation function that you pass into it (named predicate), wrapped in a function that\nupdates the stored validity and shows or hides the error label:\n\n # When `require` is called, we define a function that wraps `predicate` with some actions to take.\n def check_this_component(**e):\n result = predicate(component)\n self._validity[component] = result\n if error_lbl is not None:\n error_lbl.visible = not result\n self._check()\n\n # Then register event handlers for the component\n for e in event_list:\n component.set_event_handler(e, check_this_component)\n \n # And add the component check to the Validator's list\n self._component_checks.append(check_this_component)\n \n # Not shown: a bit of code at the end for checking the form as a whole.\n \n\n require_text_field and require_checked are convenience functions that wrap the require method. \n\n show_all_errors checks the entire form - in practice, this just means iterating over self._component_checks and running\neach of the functions that were created by require. There’s also a method for checking the entire form - self.is_valid. This calculates the form’s validity based on\nthe contents of self._validity, so the check functions must be run first if you want the result to be up-to-date. self._check runs self.is_valid to check if the form is valid, then runs all of self._actions, the actions\nto be carried out based on the form’s validity. \n", "tags": ["library"], "path": "/blog/library-form-validation" }, { "title": "Using SciPy to Work Out How Many T-Shirts I Need for a Conference", "snippet": "We can do better than guessing! Using SciPy and some basic stats, we can work out how many shirts to order for PyCon UK this year.", "content": " 
When you’re marketing to developers, you go where the developers are. We make a platform for building full-stack web apps with nothing but Python, so we go to developer conferences – particularly Python ones.\n\n This time, we can be smarter. We’re a bootstrapped company, so we can’t just blow VC cash on unnecessary mountains of shirts – but we do want every Anvil user to go home with a T-shirt.\n\n How can we use the shirt data from our last conference to work out how many shirts we need?\n\n Option 1: Exact numbers.\n\n “If last conference used 13 Men’s Medium shirts, we should bring exactly 13 to the next conference.”\n\n This is obviously a bad idea. If even one more medium-sized man writes an app with Anvil, we’re going to run out of shirts. And feel pretty stupid about it, too.\n\n Option 2: Double up\n\n Wait. I’m sure there’s a more helpful way of thinking about this than waving my hands and saying “law of large numbers”. Can we capture this insight in a statistical model?\n\n Option 3: Be a bit smarter\n\n Think of each attendee as rolling a (very biased) die: based on our last conference, it comes up “men’s Medium” with probability 13/1500.\n\n Next month, we’re sponsoring PyCon UK, with 700 attendees. How many men’s Medium-size shirts will we need? We can simulate it by rolling that die 700 times, and counting how many times it comes up “men’s Medium”.\n\n Of course, each time we do that “roll 700 dice” procedure, we could get a different total count. This total follows a binomial distribution:\n\n from scipy.stats import binom\n # From 1500 attendees, we gave away 13 men’s Medium shirts\n p_mens_medium = 13 / 1500\n # For 700 attendees, there’s a 95% chance we will need no\n # more than this many men's Medium shirts:\n n_shirts_p95 = binom.ppf(0.95, 700, p_mens_medium)\n print(n_shirts_p95)\n => 10.0\n So we need to take 10 men’s Medium-size shirts to Cardiff!\n\n Feel free to use and share this app.
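As a sanity check on the binomial quantile above, we can brute-force the same number with a quick Monte Carlo simulation. This is a sketch using only Python’s standard library; the seed and trial count are arbitrary choices, not from the original post:

```python
import random

random.seed(0)  # arbitrary seed, just for repeatability

p_mens_medium = 13 / 1500  # same estimate as above
n_attendees = 700
n_trials = 10_000

# Simulate 10,000 conferences: each attendee independently "rolls the die"
# and wants a men's Medium with probability p_mens_medium.
counts = sorted(
    sum(1 for _ in range(n_attendees) if random.random() < p_mens_medium)
    for _ in range(n_trials)
)

# The empirical 95th percentile should land very close to binom.ppf's answer.
p95 = counts[int(0.95 * n_trials)]
print(p95)
```

On a typical run this agrees with the closed-form answer of 10 — reassuring, though `binom.ppf` is of course far cheaper than simulating.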
And if you’re going to a developer conference soon, perhaps we’ll see you there!\n", "tags": ["blog"], "path": "/blog/how-many-t-shirts-to-a-developer-conference" }, { "title": "Building a Disposable Email Service", "snippet": "Meredydd builds a simple disposable email service. This video demonstrates how to combine the Anvil Email Service with data tables and repeating panels to launch a useful application in just a few minutes.\n", "content": " For this example, I will build a simple disposable email service with Anvil. It allows users to receive email sent to any alias they choose. The whole app takes just a few minutes to build and deploy.\n\n This example shows you how to use the Anvil Email Service to create a useful real-world application.\n\n It was great meeting everyone at EuroPython! Next up:\nPyCon UK in Cardiff. Join us there, where we’ll\nbe showing everyone how to build full-stack web apps with nothing but Python!\n\n Meanwhile, here’s what’s new this month:\n\n 1. Single Sign-On with Microsoft\n\n Does your organisation use Office 365? Now your users can log into your apps\nwith their Microsoft accounts! Microsoft login (or, to give it its official\nname, Azure Active Directory) is available for all our Business plan customers.\n\n 2. Building Email-Driven Apps with Anvil\n\n We’ve already made it easy to build web-based apps. Now it’s just as easy\nto send and receive email from your code, with our new\nEmail Service.\n\n Check out the code samples, or watch our demo video to see me build an\nemail-receiving app in just a couple of minutes.\n\n 3. New Documentation Search\n\n We’ve made it easier than ever to search the Anvil documentation. Next\ntime you’re in the Anvil editor, check out the search box in the toolbar.\nYou can search our tutorials, reference documentation, and selected\nposts from the Community Forum.\n\n (Speaking of the Community Forum, we’d love to hear your feedback. Come\njoin us there!)\n\n 4. 
Other updates\n\n As always, we’re improving Anvil all the time. Here are a few highlights:\n\n Validating user input just got a lot easier, with our new form validation\nlibrary. Check it out!\n We’ve introduced a new domain for your apps: Now, by default, your apps\nare available at <something>.anvil.app. (Don’t worry, the old\n <something>.anvilapp.net links still work!) We had an idea for improving the saving mechanism, so if you’re on a flaky internet connection\nsaving should feel instant.\n Happy building!\n\n Meredydd\n", "tags": ["blog"], "path": "/blog/update-18-07" }, { "title": "Data Grids - Add Widget", "snippet": "Create a widget in your Data Grid that allows rows to be added.\n", "content": " In the Data Grids Getting Started tutorial, we created a simple paginated table that \npresented employee data.\n\n What if you want to write the data as well as reading it? Data Grids make this easy by allowing you to add components\nthat the user can interact with.\n\n \n\n We’ll start with the original read-only table, and modify it to allow adding rows.\n\n Clone the basic Data Grids example app to follow along with this tutorial - it’s the starting point you’ll build from.\n\n \n\n \nIf you’ve not done the Data Grids Getting Started tutorial, you may find it better to\ngo through that before following this. In the Data Grids Getting Started tutorial, we added a header to display the column names.\nThe ‘add’ widget is very similar - it’s just a header made from a Data Row Panel.\n\n To start off, drag-and-drop a Data Row Panel into the appropriate place - we’ve gone for just below the column names. \nMake sure you set it to pinned so it shows up on each page! You also need to increase the rows_per_page by 1 to account\nfor the new Data Row Panel. You can drag-and-drop components into each column of this Data Row Panel. 
So put TextBoxes for Employee and Grade, \nand a DropDown for Team.\n\n The Grade TextBox is best as type ‘number’, which can be configured in the Properties panel. You can set its default\nby setting the text property - we’ve gone for 0 as a default. \n\n The Team DropDown needs to be given a list of team names for the user to choose from. Here’s how to populate it from the\ndatabase when the page loads:\n\n class Form1(Form1Template):\n def __init__(self, **properties):\n # ... after the usual init stuff ...\n employees = anvil.server.call('get_employees')\n\n teams = [employee['team'] for employee in employees]\n self.drop_down_team_add.items = sorted(list(set(teams))) # De-duplicate and sort\n The user needs an ‘add’ button to press when they’ve finished filling in the data and they want to commit the new entry\nto the database. Create a new column for this, placed to the right of the Grade column. Delete the contents of the\n‘Title’ and ‘Key’ properties, and you’re left with a column with no heading. You can drag-and-drop a Button\nComponent into it:\n\n \n\n Now all you need to do is make the add button actually do something.\n\n First, we need to define how to handle the Employee column. The Employee column represents the full name of the employee, \nbut the first_name and last_name are stored separately in the database. So, you need to decide what part of the user \ninput to store as the first name, and what part is the last. Write a method that defines how to do this and put it in a ParseEmployeeName module: def parse_employee_name(employee_name):\n if ' ' in employee_name:\n return employee_name.split(' ', 1)\n else:\n return employee_name, ''\n This defines the first name as the first word in the TextBox and the last name as everything else. So if one of our\nemployees is James Clerk Maxwell, his first_name is ‘James’ and his last_name is ‘Clerk Maxwell’. 
We also need a server function to add a new Employee row to the database:\n\n @anvil.server.callable\ndef add_employee(first_name, last_name, team, pay_grade):\n app_tables.employees.add_row(first_name=first_name, last_name=last_name, team=team, pay_grade=pay_grade)\n Then you can create an event handler to handle the click event of the add button:\n\n from ParseEmployeeName import parse_employee_name\n\n# ...\n\n def button_employee_add_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n\n # Split the employee name into first and last\n first_name, last_name = parse_employee_name(self.text_box_employee_add.text)\n \n # Add the employee to the database\n anvil.server.call(\n 'add_employee',\n first_name=first_name,\n last_name=last_name,\n team=self.drop_down_team_add.selected_value,\n pay_grade=self.text_box_grade_add.text,\n )\n \n # Refresh the employee data in the Data Grid\n self.repeating_panel_employees.items = anvil.server.call('get_employees')\n\n # Clear the add row's input components\n self.text_box_employee_add.text = ''\n self.text_box_grade_add.text = 0\n self.drop_down_team_add.selected_value = self.drop_down_team_add.items[0]\n This is a reasonably simple method. The code comments explain how it works.\n\n When you add an employee, you probably want to see the new entry appear at the top of the table, to give you a visual cue\nthat it's been added successfully.\n\n To achieve this, the app needs to know when rows were added. So you need to add a new 'Date and Time' column to the \n employees table, called added. Then, the add_employee function should set added=datetime.now() (be sure to import\nthe datetime module: from datetime import datetime).
The get_employees function needs to order the results, with the most recently added at the top: @anvil.server.callable\ndef get_employees():\n return app_tables.employees.search(tables.order_by(\"added\", ascending=False))\n Pre-existing entries will be handled gracefully - their added value will be None and they'll be ordered as before. That’s the add row done! When you run this app, you get a data table as before, but you can create new entries.\n\n \n\n To explore the final result, clone the finished app:\n\n \n\n \nFeel free to use it and share it - our code examples are open-source under the Apache License. The logical next step is to allow editing and deleting entries as well as adding. Follow along with the editing\nand deleting tutorial to find out how.\n\n Here are our other tutorials on Data Grids:\n In the Data Grids Getting Started tutorial, we created a simple paginated table that \npresented employee data, and we tweaked the settings to get it how we wanted it.\n\n Now let’s explore the power of the nested structure of Data Grids.\n\n Imagine you want to show a table where the employees are grouped by team, and each team has a heading within the table. Something like this:\n\n \n\n We’ll start with the original flat table, and modify it to implement the grouping behaviour.\n\n Clone the basic Data Grids example app to follow along with this tutorial - it’s the starting point you’ll build from.\n\n \n\n \nTo make this easy, you should store the team names in their own Data Table. So, add a table called ‘teams’ with a single text \ncolumn called name: \n\n Then change the team column of the ‘employees’ table to link to it: \n\n With the database schema changed, you need to regenerate the data. The example app has a Server Module called \n RandomEmployeeGenerator to create the data for you. Just call the server function generate_normalised the first time \nyou ever run the app: class Form1(Form1Template):\n def __init__(self, **properties):\n # ... 
after the normal init stuff ...\n anvil.server.call('generate_normalised')\n Run the app once to populate the database. Then remove the call so it doesn’t run every time!\n\n You’re going to group the employees into teams. So you want the Data Grid to be made of sub-tables, with one sub-table per team.\n\n Each row of the Data Grid will be a single team. Then within each team-row, you’re going to add a Repeating Panel to list the employees in that team.\n\n So the first thing to do is set the Data Grid’s items to be the contents of the ‘team’ table, rather than ‘employees’. \nRemember that a Data Grid contains a Repeating Panel; let’s rename it repeating_panel_teams. \nIn the code you need to call a server function to get the teams data, where we were previously getting the employee data: def __init__(self, **properties):\n # ... after the normal init stuff ...\n self.repeating_panel_teams.items = anvil.server.call('get_teams')\n The get_teams function on the server side is simply a Data Tables search: def get_teams():\n return app_tables.teams.search()\n Now let’s modify the template of the Repeating Panel so that each row is a roster for a particular team. Double-click\nthe Repeating Panel to edit the item template. The row should now be highlighted, with the rest of the screen greyed out.\n\n Uncheck the auto_display_data box in the Properties panel - this removes the Data Row Panel that’s included by default, \nleaving an empty space where you can add whatever you like. First, add a label to display the team name . Now your Data Grid will be just a list of team names, like this:\n\n \n\n Go back editing the Repeating Panel’s item template. Add another Repeating Panel, to hold the list of employees for this team:\n\n \n\n That’s what produces the grouped structure. Let’s call the inner Repeating Panel \n repeating_panel_employees, since that’s what it’ll hold. 
The overall structure now looks like this: \n\n The outer Repeating Panel, repeating_panel_teams, has one instance of RowTemplate1 for each team. Each of these\ninstances of RowTemplate1 has a Repeating Panel of its own, called repeating_panel_employees. For each employee\nin that team, there is an instance of RowTemplate2. Also shown on the diagram are the Data Row Panel containing the\ncustom column headers, and the Column Panel containing the page size selector. Now let’s get the correct data into repeating_panel_employees. You can get the relevant list of employees when the panel initialises. Write a server function to get the employees in a particular team:\n\n @anvil.server.callable\ndef get_employees_in_team(team):\n return app_tables.employees.search(team=team)\n And call this in the __init__ method for RowTemplate1 ( RowTemplate1 is a separate Form, so to edit its code, \nclick on it in the Forms section of the App Browser (left-hand panel) and click on ‘Code’.) class RowTemplate1(RowTemplate1Template):\n def __init__(self, **properties):\n # ... after the normal init stuff ...\n self.repeating_panel_employees.items = anvil.server.call(\n 'get_employees_in_team',\n self.item,\n )\n This preserves the lazy loading behaviour of the Data Grid - every time a page is loaded, only the data for the \nteams that can be seen on the screen is fetched.\n\n You want repeating_panel_employees to have two columns; employee name and grade. So let’s delete the team column from\nthe Data Grid. Then, put a label into the leftmost column to display employee name, as we did in the Data Grids Getting Started tutorial. Set up a data binding in repeating_panel_employees.label_employee_name to include both first and last name: \n\n \n\n self.item is a row from the Employees table, because you set repeating_panel_employees.items to the results\nof a search on the Employees table earlier. 
The internal Repeating Panel will now show the employee data as two columns: employee name and grade.\n\n Hit run and watch your handiwork in action - you should get a table of employees, grouped by team, with each team headed\nby a label telling you which team it is:\n\n \n\n Note that the rows_per_page of the Data Grid is still honoured, even though we’ve got a nested structure of Repeating \nPanels. Any Data Row Panel that descends from a Data Grid will be aware of the rows_per_page of its ancestor. Here’s a clone link for the final app:\n\n \n\n Feel free to use it and share it - our code examples are open-source under the Apache License.\n\n Here are our other tutorials on Data Grids:\n Does your organisation use Office 365, or other Microsoft cloud products? If so, your users can now log into your apps with their Microsoft accounts!\n\n Anvil now supports authenticating users with Azure Active Directory, for all Business Plan customers. Just add the new Microsoft API service, then select this option in the Users service:\n\n That’s it! You’re now using Single Sign On for login in your business apps.\n\n Email is the most popular messaging platform on Earth. We use email to arrange events, to exchange files, and to notify each other when something happens.\n\n You should be able to email your applications too. But to receive email, you need to set up a server somewhere, run an SMTP daemon, and somehow connect that to your code.\n\n We’re here to fix that. Today, we’re delighted to show you the easiest way to build email-driven apps. All you need to do is write a Python function. 
For example, here’s all the code you need to receive emails and store them in a database:\n\n @anvil.email.handle_message\ndef incoming_email(msg):\n app_tables.incoming_messages.add_row(\n from_addr=msg.envelope.from_address,\n text=msg.text\n )\n msg.reply(text=\"Thank you for your message.\")\n Sending is just as simple:\n\n anvil.email.send(\n to=\"contact@anvil.works\",\n subject=\"New message\",\n text=\"This is awesome\",\n html=\"This is <b>awesome</b>\"\n)\n Despite its simplicity, it’s also fully featured - you can handle attachments, recipients, and even use DKIM to check that a message is genuine before trusting what it says.\n\n You can scroll down for a cookbook of common examples, or watch as I build a full disposable-email-address service, and publish it on the web, in four minutes:\n\n \n\n\n\n Here are a few things you might want to do with email. You can follow along from these code snippets, or view them all in the Anvil editor:\n\n \n\n @anvil.email.handle_message\ndef incoming_email(msg):\n app_tables.incoming_messages.add_row(\n from_addr=msg.envelope.sender,\n text=msg.text\n )\n @anvil.email.handle_message\ndef incoming_email(msg):\n msg.reply(text=\"Thank you for your message.\")\n @anvil.server.callable\ndef send_a_message(txt):\n anvil.email.send(\n to=[\"someone@gmail.com\"],\n cc=[\"Someone Else <someone.else@gmail.com>\"],\n subject=\"Hello World\",\n text=\"Hi there!\"\n )\n All attachments are Anvil Media objects. This means you can upload them from web browsers, store them in databases, make them downloadable, and more.\n\n @anvil.server.callable\ndef send_by_email(file):\n anvil.email.send(\n to=\"me@example.com\",\n subject=\"New upload\",\n attachments=[file]\n )\n Incoming attachments are just as straightforward. Here’s how to save all attachments in a database:\n @anvil.email.handle_message\ndef incoming_email(msg):\n for f in msg.attachments:\n app_tables.files.add_row(file=f)\n Email has historically been easy to spoof. 
Helpfully, many providers now support DKIM, which lets you verify that the email actually came from the domain it says it did.\n\n The dkim property of the message contains details about all that message’s valid DKIM signatures, and a shortcut to check whether it’s signed by the address in msg.envelope.from_address: @anvil.email.handle_message\ndef incoming_email(msg):\n if (msg.envelope.from_address == 'me@example.com'\n and msg.dkim.valid_from_sender):\n msg.reply(text=\"You're the real deal!\")\n There’s a shorthand, if you only want to accept DKIM-authenticated messages (and reject all others):\n\n @anvil.email.handle_message(require_dkim=True)\ndef incoming_email(msg):\n if msg.envelope.from_address == 'me@example.com':\n msg.reply(text=\"You're the real deal!\")\n To reject a message outright, raise a DeliveryFailure:\n\n @anvil.email.handle_message\ndef incoming_email(msg):\n raise anvil.email.DeliveryFailure(\"Nope\")\n Start writing email-driven apps for free with Anvil:\n\n \n\n You can also read more about the Email service in the Anvil reference docs.\n", "tags": ["announce","blog","cookbook"], "path": "/blog/email-driven-apps" }, { "title": "Data Grids - Searching and Filtering", "snippet": "Create search and filter boxes for data grids.\n", "content": " Searching and filtering a Data Grid is an extremely common requirement. It’s also very easy to achieve. Let’s look at how to create a very straightforward search box that displays all data rows that match the search term in any of their columns.\n\n Clone the basic Data Grids example app to follow along with this tutorial - we’ll use it as a starting point.\n\n \n\n Imagine your UX guru has decided to put the search box below the column headings, before the data itself. That visually associates it with the table.\n\n \n\n To add a header you can drag-and-drop components into your Data Grid above the Repeating Panel. In this case, drag in a Column Panel and put a TextBox and Button inside it to form the UI of the search widget.
Of course, you must give the Button the classic ‘magnifying glass’ icon or it’s not a proper search box!\n\n You’ve got a search box; now you need some code to run the search when the button is pressed.\n\n The actual search operation will take place on the server side. That will reduce the amount of data passed from server to client. It’s quite simple: it gets a Data Table and filters it using a list comprehension:\n\n @anvil.server.callable\ndef search_employees(query):\n result = app_tables.employees.search()\n if query:\n result = [\n x for x in result\n if query in x['first_name']\n or query in x['last_name']\n or query in str(x['pay_grade'])\n or query in x['team']\n ]\n return result\n The list comprehension checks each field for each record, to see if the query is a substring of that field. If the query is empty, the filtering is bypassed - the entire result set is returned.\n\n The search should run when self.button_search is clicked, and when Enter is pressed inside self.text_box_search. The event handler code will always be the same - call search_employees with whatever is in self.text_box_search.text: def search(self, **event_args):\n self.repeating_panel_employees.items = anvil.server.call(\n 'search_employees',\n self.text_box_search.text\n )\n So you need to configure the self.button_search.click event and the self.text_box_search.pressed_enter event to use this search method. Just enter the method name into the Events section of the Properties panel for each component: \n\n You can now search the data!\n\n \n\n Of course, there are as many designs for search and filter behaviour as there are UX designers. We’ve shown how to create one of the simplest, in order to focus on the principle rather than the details.\n\n You might want to add dropdowns for team and grade. Or maybe you would allow complex search terms such as \n employee=\"Jane Smith\" team=\"Bravo\" grade<8, by writing a function to parse the query and filter the data accordingly.
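To make that concrete, here’s a hypothetical sketch of such a parser in plain Python. Everything here - the grammar, the matches helper, the field names - is illustrative, not part of Anvil:

```python
import re

# Terms look like: field="quoted value", field=value, field<number, field>number
TERM = re.compile(r'(\w+)\s*(=|<|>)\s*(?:"([^"]*)"|(\S+))')

def parse_query(query):
    """Turn 'team="Bravo" grade<8' into [('team', '=', 'Bravo'), ('grade', '<', '8')]."""
    return [(field, op, quoted or bare)
            for field, op, quoted, bare in TERM.findall(query)]

def matches(record, terms):
    # A record matches only if every term is satisfied.
    for field, op, value in terms:
        actual = record.get(field)
        if op == '=' and str(actual) != value:
            return False
        if op == '<' and not actual < float(value):
            return False
        if op == '>' and not actual > float(value):
            return False
    return True

employees = [
    {'employee': 'Jane Smith', 'team': 'Bravo', 'grade': 7},
    {'employee': 'Joe Bloggs', 'team': 'Bravo', 'grade': 9},
]
terms = parse_query('team="Bravo" grade<8')
print([e['employee'] for e in employees if matches(e, terms)])  # ['Jane Smith']
```

In the real app, the `matches` filter would run inside the server function, replacing the substring list comprehension shown above.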
Whatever your requirements, the basic pattern of adding some search components to the page, wiring up their event handlers\nto pass the query to the server, filtering the data and passing it back, will probably remain the same.\n\n Here’s a clone link for the final app:\n\n \n\n Feel free to use it and share it - our code examples are open-source under the Apache License.\n\n Here are our other tutorials on Data Grids:\n Data Grids are a way to easily display data in tables. Pagination and just-in-time loading comes for free, and customising \ntheir appearance and behaviour is as straightforward as anything in Anvil.\n\n \n\n We’ll explore the use of Data Grids by creating an app to display a paginated table of employees, and \ntweak it to get it looking how we want.\n\n Our data is a table of employees. We’ve got their first and last names, pay grades, and what team they are in. The data is stored\nin a standard Anvil Data Table, generated by a server module named RandomEmployeeGenerator:\n\n \n\n Let’s display them all in a paginated table.\n\n Drag and drop a Data Grid onto the page.\n\n \n\n It contains a Repeating Panel that you can hook up to your Data Table in the usual way - an accessor in a Server Module:\n\n @anvil.server.callable\ndef get_employees():\n return app_tables.employees.search()\n which is called from the Form’s __init__ method: class Form1(Form1Template):\n def __init__(self, **properties):\n # ...\n self.repeating_panel_employees.items = anvil.server.call('get_employees')\n Data grids don't have to be populated from a Data Tables search.\nThe RepeatingPanel's items could be a list of dictionaries instead - \nthe column keys would then be the keys of the dictionaries. More generally, the items should be an iterable of objects that \nhave a __getitem__ method. 
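For instance (made-up data; in the Form you’d assign this list to self.repeating_panel_employees.items instead of calling the server):

```python
# Each dict is one row; the dict keys play the role of the Data Grid's
# column keys, because the grid looks values up with row[key]:
employees = [
    {'first_name': 'Ada', 'last_name': 'Lovelace', 'team': 'Alpha', 'pay_grade': 9},
    {'first_name': 'Alan', 'last_name': 'Turing', 'team': 'Bravo', 'pay_grade': 8},
]

# This is the only operation the grid needs - plain __getitem__ access:
print([row['first_name'] for row in employees])  # ['Ada', 'Alan']
```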
Next, you need to configure the Data Grid’s columns in the Properties panel:\n\n \n\n The Key parameter defines which database column goes into this Data Grid column.\nThe Title parameter is the text that goes at the top of the Data Grid column on the page.\n\n.\nNow we have a table we can page through, showing all the info about our employees: \n\n The Grade column is a bit wider than it really needs to be; it’s only displaying a number, after all.\n\n You can drag the column edges to change the column size. Dragging a column edge changes the size of the column to\nits left, so in this case you want to drag the rightmost edge of the table.\n\n \n\n Now the Grade column is fixed at 60px wide. All the other columns are in \n‘flex’ mode; they will all be equal width, once the widths of the ‘fixed’ columns have been taken into account. \nTo set the Grade column back to ‘flex’ mode, you just need to double-click the edge that you used to change its width.\n\n If you don’t like dragging things with the mouse, you can set the column widths with pixel precision by entering the desired\nwidth in pixels in the width box for each column in the Properties panel. Emptying this box puts the column back into\n‘flex’ mode. \n\n To set a minimum width for a column, tick the ‘Expand’ box. The column will be in ‘flex’ mode, but it will never get smaller\nthan the width you’ve set for it. What if I want the column headings to be a different colour?\n\n And I want the Team column heading to link to an internal wiki page!\n\n You can set up the heading of a Data Grid however you like.\n\n You can drag-and-drop anything you like into the Data Grid. To create a header and footer, simply\ndrag-and-drop something either above or below the Repeating Panel.\n\n All you need to do is uncheck the auto_header setting in the Data Grid’s properties panel, then drag-and-drop a Data\nRow Panel into the top of the Data Grid. 
A Data Row Panel is a container that automatically has a slot per column, \nso you can put your custom column headings in each of its slots. \n\n Be sure to set the pinned property on any header or footer. This ensures it doesn’t disappear when the page is turned! You can also drop things into the space to the left of the paging arrows.\nIn our example, we’ve added a box to choose how many employees to display per page. It’s simply a TextBox and a Label, \ninside a FlowPanel:\n\n \n\n To make the page size selector work, the TextBox’s lost_focus handler should update the Data Grid’s rows_per_page: def text_box_1_lost_focus(self, **event_args):\n \"\"\"This method is called when the TextBox loses focus\"\"\"\n self.data_grid_employees.rows_per_page = int(self.text_box_1.text) + 1\n The Data Grid automatically updates when its rows_per_page property is changed, so this is enough to make the\npage respond as soon as the user clicks the button. The +1 here is to account for the custom column headings; the\nData Row Panel we added to the top is counted in the row count. In the database, we have the employees’ first and last names separately. Let’s imagine we want to display each employee’s whole\nname in a single column.\n\n Delete the First Name and Last Name columns and replace them with a single column called Employee.\n\n \n\n You can put a Label in that column and set the Label’s contents to be whatever you wish.\n\n A Data Row Panel makes things line up with the columns of the Data Grid. There’s one slot for each column. You can \ndouble-click on the table in the design view to modify what’s in each column - just drag anything you like in from \nthe Toolbox as normal. So, let’s add a Label to the Employee column:\n\n \n\n If you leave the Label like this, it will be empty. 
How do you make it display the employee’s full name?\nBecause this is a row in a Repeating Panel, it has an item \nattribute, which is one of the elements of self.repeating_panel_employees.items. So you can set up a \nData Binding to fix the text of the Label to \na string giving an employee’s full name: \n\n That’s all you need to do to populate a Data Grid column with some custom data.\n\n It might help to understand exactly how Data Grids are structured. Inside each Data Grid is a Repeating Panel, and\ninside that are Data Row Panels. In our employee management app, we have self.data_grid_employees, which contains\n self.repeating_panel_employees, whose template ( RowTemplate1) inherits from a Data Row Panel. Here’s our final table with page turning, custom headers, page size selector and customised data presentation.\n\n \n\n Click on this clone link to copy the app and take a look:\n\n \n\n \nFeel free to use it and share it - our code examples are open-source under the Apache License. Data Grids are pretty powerful and there’s much more you can do with them. Perhaps you’ll be interested in our \ntutorials on\n Anvil’s Material Design theme is a standard, clean design that is appropriate for a wide range of web app\nuse cases. That’s why we chose to make it the default theme when creating a new app.\n\n \n\n Its look and feel can be tweaked to your app’s individual needs using Colour Schemes and Roles. It also\nhas a built-in title bar and optional sidebar. As with all Anvil themes, advanced developers can take total control \nby modifying the HTML partial and the CSS stylesheet that go into making the Material Design theme.\n\n Let’s have a look at what’s on offer.\n\n Your app doesn’t have to be blue, white and orange. That’s the default Material Design colour palette, but you can\ncustomise your colour scheme using the Colour Scheme tool. 
If you click on ‘Colour Scheme’ under ‘Theme’ on the left-hand panel of the editor, you can select the Primary and Secondary colours of the theme using the dropdowns at the top.\n\n \n\n You can also modify each of the colours individually by editing the RGB hex of any colour. Colours can be added to the palette using the ‘New Colour’ button.\n\n Now you have your colour palette, you’ll want to apply it to the components of your app. Your changes to the primary and secondary colours will be applied automatically - look back at the Design view and you’ll see your app’s new look.\n\n You can also set the colours of components manually by setting the foreground and background properties on a component. Find these in the ‘Appearance’ part of the Properties panel in the Design View. As always in Anvil, you can also set them programmatically - just assign the foreground or background attribute to the name of the colour, as a string: self.button_1.foreground = 'theme:Secondary 500'\n If you want to use a colour that’s not in the Colour Scheme, just use the RGB hex directly:\n\n \n\n There’s also a colour picker so you don’t need to guess the RGB hex for the colour you’re looking for. Just click on the paintbrush icon next to the relevant property, then on the icon next to the ‘Enter manually’ box.\n\n You can set an RGB hex programmatically as well:\n\n self.button_1.foreground = '#ee67a1'\n You can change the look and feel of components by assigning Roles to them. Play with the ‘role’ dropdown in ‘Appearance’ in the Properties panel to see what’s available for a particular component.\n\n \n\n You can also set roles programmatically. Just assign the .role attribute of a component to the name of the role as a string: self.link_store.role = 'selected'\n Some roles in Material Design are so useful, we’ve made shortcuts for adding components that already have them set. These are ‘Card’, ‘Headline’ and ‘Highlighted Button’.
They can be found under ‘Theme Elements’ in the Toolbox.\n\n \n\n A Card is a ColumnPanel with the card role applied, to produce a separate area within a page. It looks great \nwhen used to show content such as blog posts or emails in an inbox, where there could be any number of identical pieces \nof content.\n\n To make a Card stretch across the full width of the page, check the \n full_width_row box in ‘Container Properties’, and make sure spacing_above and spacing_below are set to \n none in the Layout section. Play with the display_mode to get exactly what you want. \n\n The Headline element is a label with the headline role applied. It has a large font, giving a standard style for headings. The Highlighted Button is a button with the primary-color role applied. It stands out against the background, \nin comparison with the standard Button, which has a transparent background in Material Design and appears more like a link. There are many other roles available and with a bit of experimentation you should be able to find a look-and-feel that\nreally supports your app’s user experience. Bear in mind that roles have different effects on components within the \nsidebar compared to components in the main form body.\n\n In particular, check out the Button roles - primary-color, raised and secondary-color, which you can use to \ngive the user visual cues about what the button is for. \n\n You can use Labels as text, subheading, headline and the very large display-4. The text role takes away padding \nfrom both Labels and Links, which allows them to stack and create lines of text with sensible spacing. \n\n input-prompt is great for Labels that tell the user what a TextBox or DropDown is for. It applies the correct padding to \nline the Label up with the TextBox or DropDown and makes the fonts match. \n\n Setting a Link as selected in the side bar is useful if you’ve got a set of navigation Links in the sidebar - programmatically \nmaking a Link selected indicates to the user where they are within your app.
\n\n Notice the white bar on the left side of the Design view? When you hover your mouse over it, it tells you \n“To add a sidebar, drop a ColumnPanel here”.\n\n \n\n Dropping in a ColumnPanel causes a ‘hamburger’ icon to appear on the titlebar, which shows and hides the sidebar at runtime. \nIf you add some Links into your ColumnPanel, you instantly have a hideable navigation menu. You can add any component you \nwish. Here we’ve added a company logo and some CheckBoxes for managing global site settings.\n\n \n\n On a desktop browser, the sidebar is shown by default, and the hamburger icon hides it. On mobile, the sidebar starts hidden and\nit is shown when the user touches the hamburger.\n\n Hovering your mouse over the title bar, you see ‘Drop Title Here’ and ‘Drop Links Here’.\n\n ‘Drop Title Here’ is intended for a single label to serve as a page title. If you add a Label component, \nyou can give your app a name. Any component can be added here, but Label is usually appropriate.\n\n ‘Drop Links Here’ is a FlowPanel. It allows any number of components to be added. Note the nice circular highlighting\nwhen hovering the mouse over a link in the top bar. The Material Design guidelines\nsuggest just having icons in this position - you can do this by adding a Link with an icon and no text.\n\n \n\n To use your new nav bar to navigate around your app, you need to make the links change which form is displayed in the\nmain section of the page.\n\n Let’s assume we’ve implemented a ‘Gallery’ form and an ‘Articles’ form. We can create an event handler that switches \nto the Gallery form by clearing the ColumnPanel and instantiating a Gallery() within it: def switch_to_gallery(self, **event_args):\n \"\"\"Switch to the Gallery view.\"\"\"\n self.column_panel_body.clear()\n self.column_panel_body.add_component(Gallery())\n self.headline_made_with_anvil.scroll_into_view()\n And similarly for switch_to_articles. 
Remember to register these as event handlers for the relevant Links! Let’s make our event handler set the link’s role to selected when a user is viewing that page. The user can now\neasily see where they are in the app: def switch_to_gallery(self, **event_args):\n \"\"\"Switch to the Gallery view.\"\"\"\n self.column_panel_body.clear()\n self.column_panel_body.add_component(self.gallery)\n self.headline_made_with_anvil.scroll_into_view()\n self.deselect_all_links()\n self.link_gallery.role = 'selected'\n\n def deselect_all_links(self):\n \"\"\"Reset all the roles on the navbar links.\"\"\"\n for link in self.link_articles, self.link_gallery, self.link_store:\n link.role = ''\n Here’s how that looks when it’s up and running:\n\n \n\n In a few features, Material Design provides the building blocks to create a wide range of apps. \nIf you want to customise your app further, have a play with your app’s Assets, which can be found under the \nTheme section of the left-hand panel.\n\n Assets allow you to modify the HTML and CSS associated with your Components. standard-page.html is where the title bar\nand sidebar are defined, so if you want to add or remove anything from the standard page layout, here’s where you can do it.\n theme.css contains the CSS for the Roles as well as for everything else on the page. Roles work by applying special\nCSS classes to a component; a Role named foo applies a class named .anvil-role-foo to its component. If you want to\ndefine your own Roles and set up some CSS rules associated with them, you can add new Roles in the Theme->Roles section of \nthe editor, then modify theme.css accordingly. 
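To tie roles and CSS together with a concrete (hypothetical) example: suppose you add a custom Role named warning and style .anvil-role-warning in theme.css. Applying it from code is then the same one-liner as for the built-in roles:

```python
def mark_as_warning(component):
    """Apply a hypothetical custom 'warning' Role to any component.
    The styling itself lives in theme.css as .anvil-role-warning."""
    component.role = 'warning'

# e.g. in a form: mark_as_warning(self.label_status)
```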
\n\n For more about customising your theme, see the Themes section of the Reference Docs.\n\n Happy styling!\n", "tags": ["tutorial"], "path": "/blog/using-material-design" }, { "title": "Anvil News - May", "snippet": "This month, we made the web more Pythonic.", "content": " It’s been another monster month for us.\n\n It was great meeting everyone at PyCon, and the crowds around our table were awesome! Check out my 5-minute talk from the conference: Making the Web More Pythonic\n\n As always, we’re improving Anvil all the time. Here are a few things you might have missed:\n\n 1. Beautiful apps: New Material Design theme\n\n We’ve published a new Material Design theme! We’ve given our components a facelift and added swappable colour schemes, so it’s now even easier to build beautiful apps.\n\n (Don’t worry, the old theme is still there - it’s now called “Classic”.)\n\n 2. Simple objects in Data Tables\n\n If you want to store richer data, you can now add “Simple Object” columns to your Data Tables. These can store strings, numbers, lists, or dicts - that’s any JSON object. And you can search inside them with a powerful query system.\n\n Find out more on our blog:\n\n 3. Looking after the details\n\n Here are some things we’ve done this month to make Anvil even more pleasing to use:\n\n Easier code navigation - you can hold down the Ctrl key and click on a function or variable to jump to its definition.\n The new FlowPanel lets you lay out components side-to-side. It’s great for grouping buttons, links or labels right next to each other.\n We’ve made it easier to drag and drop Link components, so it’s clearer whether you’re dropping something next to a Link or inside it. (I love this when I’m building sidebars for navigation!)\n More options for displaying images in Image components: Shrink to fit, zoom to fill, or display your image at its original size no matter how big your page is.\n Want your components closer together, or further apart? 
Now you can control the spacing in a ColumnPanel with the column_spacing property. We’ve made the Toolbox easier to navigate by highlighting the most commonly used components.\n Fixed-width layouts are easier now too, with the XYPanel. Most of the time, you won’t be needing a fixed-width layout, but when you do, it’s right there.\n Happy building!\n\n Meredydd\n", "tags": ["blog"], "path": "/blog/update-18-05" }, { "title": "Making the Web More Pythonic", "snippet": "The Web is traditionally a pain to program. It's also un-Pythonic. Can we make Web programming easier by making it more Pythonic? A talk from PyCon 2018.", "content": " The Web is traditionally a pain to program. It's also seriously un-Pythonic.\n\n At PyCon 2018, I asked: Can we make Web programming easier, by making it more Python-shaped?\n\nScroll down for transcript \n Well, to start with, what do I mean by \"the web is un-Pythonic\"?\n\n Think about a typical web app. You're going to have to turn your data into a bunch of different shapes along the way:\n Oof.\n\n At each of these boundaries, a bunch of boring and repetitive translation work happens. And that is an invitation to exactly the wrong sort of magic.\n\t And that's cool, if you do it once. But if you have this amount of magic at every boundary in this stack, you're setting yourself up for a bad time.\n\n But of course, that's exactly what we do – and they're all extremely leaky abstractions! To be a reasonably advanced user of any of these frameworks, you need to understand everything they do, on both sides of the transformation.\n\n So, how does this situation stack up against The Zen of Python?\n\n \"There should be one obvious way to do it\"?\n\n Hoo, boy.
Look at all these frameworks!\n\t \"Explicit is better than implicit\"?\n\n Transforming data implicitly is these frameworks' job.\n\t \"If the implementation is hard to explain, it's a bad idea\"?\n\n Again, look at the sheer amount of magic in every level of this stack!\n\t So, what might something more Pythonic look like?\n\n At Anvil, we start by putting Python everywhere — even in the browser. (We use the Skulpt Python-to-Javascript compiler; check out my previous talk about it.)\n\t OK, so if we're in Python, and making an HTTP request to a Python server, what happens?\n\n Well, we make a function call into the requests library, and then some time later it emerges as a function call to a Flask endpoint. Wait a second. If a function call was all we wanted, why not explicitly turn the whole thing into function calls?\n\n (The data still passes over HTTP, of course. Actually, we use an encrypted WebSocket.)\n\n OK, so the next question is, \"What sorts of data should you be able to pass into, or return out of, these functions?\"\n\n Well, strings, numbers, dicts, and lists are easy. That's just JSON. But we want to avoid translations, so we want to pass proper objects from the server to the client.\n\n Unfortunately, this is a web server, serving lots of clients, so it has to be stateless: the server just can't afford to keep these objects in RAM.\n\t So we support passing a special sort of object: a stateless server object.\n\n What's in a stateless object? Well, it has an immutable ID, some method names, and some permissions. That's all. (So there's nothing else the server has to remember.)\n\n We've skipped a whole bunch of these translation steps!
We've got one object, passed all the way from the database to client code.\n\t And that's a little contribution to making the Web more Pythonic.\n\n For more about Anvil, check out anvil.works.\n\n Thank you very much.\n\t Do you find yourself wanting to store more than just strings, numbers and dates in your tables? Perhaps you have complex records with dynamic fields, or perhaps you want to store lists of data without making a whole new table.\n\n Now you can store structured data in a single row of your table:\n\n app_tables.people.add_row(\n name = \"Kermit the Frog\",\n contact_details = {\n 'phone': [\n {'type': 'work',\n 'number': '555-555-5555'},\n {'type': 'home',\n 'number': '555-123-4567'}\n ]\n })\n Querying them is just as simple. If you supply an object to the Data Tables search() or get() method, you’ll match every value that contains those values, dictionary keys or list items. Here’s an example: def get_person_for_number(phone_number):\n # We have a phone call for this number. Who is it?\n\n return app_tables.people.get(\n contact_details = {'phone': [{'number': phone_number}]}\n )\n You can read more about Simple Object columns in the Data Tables reference docs.\n", "tags": ["blog","announce"], "path": "/blog/simple-object-storage" }, { "title": "Anvil News - March", "snippet": "We built two startups in two hours (and other news).", "content": " Hi everyone,\n\n A lot has happened in Anvil in the last month, here are the highlights:\n\n 1. Building Y Combinator Startups in Anvil\n\n How fast can you build a startup? We built full, working versions of two startups from the famous incubator - in just an hour or two each. See how easily you can build a startup with Anvil:\n\n 2. New Learning Centre\n\n We’ve reorganised the Anvil documentation, and added a bunch of new how-to guides to our “Cookbook” section. Did you know…\n\n Find all these and more at anvil.works/learn.\n\n 3. Do more with Anvil\n\n We’re always making Anvil better.
Here are some little improvements you might not have seen:\n\n The Client Uplink lets you connect un-trusted code to your Anvil apps, just like the Uplink. Building the Internet of Things just got easier! (Read the docs)\n Customise your Stripe credit card form - you can now set an icon_url with anvil.stripe.checkout()\n\n 4. Show and Tell in the Anvil forums\n\n See what other people have built, and share your own apps, in the Anvil forums. Ask questions, get help, and get inspired.\n\n I look forward to seeing you there!\n\n", "tags": ["blog"], "path": "/blog/update-18-03" }, { "title": "YC Prototypes #2: Building Magic in 2.4 hours", "snippet": "It's Y Combinator's week of Demo Days, and we're prototyping one YC startup each day. Magic (YC W15) is an SMS concierge that can do anything. Let's build it.", "content": "\n\n It’s Y Combinator’s week of Demo Days, and we’re prototyping one YC startup each day. (If you missed it, check out the first in this series, where we prototyped an e-parking service in an hour and a half.)\n\n So, what do we need to build?\n\n We need to see and respond to texts from our customers.\n We need to charge customers for the things we provide (with a commission, of course). They shouldn’t need to enter their credit card every time.\n We need to share this work between several people.\n With Anvil, it turns out we can build this all in a couple of hours. Watch me build a fully working product in this time-lapse video, then scroll down for the play-by-play:\n\n We use Anvil’s built-in data tables to store customers and events (messages, notes, etc), and build pages to display them. We use the Users service to authenticate operators (i.e. us).\n\n Within half an hour, we’ve got the basics: We can receive SMS using an HTTP endpoint from a service like Twilio, display the conversation, and send replies.\n\n Now that a customer can ask for something, we need them to pay for it!
To make purchases truly frictionless, we’ll take their credit card once, then keep it on file for anything they need in the future.\n\n Anvil’s Stripe support makes this simple. We make an app to collect the customer’s card details, and then we send them a link to open that app on their phone:\n\n After that, the card is on file, and we can charge them at the click of a button. (We also note our profit margin for future reference.)\n\n If this app takes off, we’ll need help. We don’t want any messages to fall through the cracks, so we text all our operators when a new customer arrives. But we don’t want to notify every operator about every message – they’ll get drowned!\n\n Instead, let’s track which customers need attention, and let operators “claim” customers. (We make sure two operators can’t claim the same customer by accident.)\n\n The operator can mark a customer as Done (request resolved, no longer needs attention), or Can’t handle (still needs attention – “un-claim” the customer and text other operators for help).\n\n If we need more information from the customer, the operator can select “Tell me when they reply”. The operator can go do something else: when the customer replies, we send an SMS to this operator only.\n\n Here’s how our final app works:\n\n You can grab the source code yourself – click below to open the apps in the Anvil editor:\n\n Want to dig into how these apps work? Our tutorials show you all the pieces these apps are made from, and more.\n", "tags": ["blog"], "path": "/blog/yc-mvp-magic" }, { "title": "How We're Building One Y Combinator Startup a Day", "snippet": "It's Y Combinator's week of Demo Days, and we're prototyping one YC startup each day. Meter Feeder (YCW 16) lets you pay for parking with your smartphone. Let's build it.", "content": "\n\n It’s Y Combinator’s Demo Day this week, when the startup incubator’s graduates show their products to the world.
To mark the occasion, we decided to try a challenge: How much time does it take to prototype a startup?\n\n For each Demo Day this week, we’ll build a working prototype of a startup from a recent YC batch. “Working” means more than a landing page – we want enough technology to provide the core service to the customer, and to get paid for it.\n\n Our secret weapon is Anvil. It’s a platform for building web apps without the fuss: it’s got a visual interface designer, client- and server-side code is all in Python, and it deploys to the cloud with one click.\n\n We’ll walk you through the design process, show you some screenshots, and give a breakdown of how long each stage took. Today, we start with:\n\n Build time: 1 hour, 30 minutes.\n\n We’re moving to a cashless world, but urban infrastructure is slow to change. Meter Feeder lets you pay for parking with your smartphone, without costly new infrastructure.\n\n Imagine we’re the Meter Feeder founders: We need to build a working prototype that we can demonstrate to a city government, and deploy as a proof of concept. This means:\n\n The public (customers) need to be able to pay for their parking.\n Traffic wardens on patrol need to know whether a car’s parking has expired.\n Let’s fire up Anvil, and see what we can do.\n We start with a table of parking locations, each identified by a short location code.\n\n We then create a table of “bookings”, which represent payment for a car to park in a location. Finally, we add user records (with Anvil’s built-in user authentication), and link each booking to the user who made it.\n\n This is all pretty quick to set up with Anvil’s Data Tables:\n\n If you choose “New Booking”, we prompt you to enter a location code, or pick from a list of locations you’ve booked recently.\n\n The next screen has big touch-friendly buttons for choosing your time and entering your registration.
If there is a Stripe record associated with your account, we show a “pay with the same card as last time” button.\n\n Thanks to the “Park here again” button on the first screen, a returning user can park somewhere they’ve used before with only two touches.\n\n This is the back-end to make all those buttons work. This ends up being a 60-line server module, exposing four functions - “get my recent bookings”, “get a location record for this number”, “get the last licence plate I parked with”, and “pay for parking”.\n\n Wiring these up to the front end didn’t take long, because Anvil lets you return live database rows into client code, and then use data bindings to display them in the UI.\n\n After all that, the enforcement app came together pretty quickly. It’s a private app, so we can skip the user authentication and make the Locations and Bookings tables readable directly by client code.\n\n When a parking warden arrives at a new location, they can enter the location code to display all valid bookings for that location. We also support as-you-type searching by licence plate number, so they can query a single plate in a busy location.\n\n You can see the source to these apps yourself – click to open them in the Anvil editor:\n\n Want to dig into how these apps work? Our tutorials show you all the pieces these apps are made from, and more.\n", "tags": ["blog"], "path": "/blog/one-yc-startup-a-day" }, { "title": "Custom sign-up flow", "snippet": "A completely customised login and signup flow for multi-user applications. Learn how to use the advanced Users service APIs, or clone our example and customise it to make your own authentication flow.\n", "content": " Anvil’s support for user authentication is simple to set up – you only need two lines of code to get started.
But it’s also very flexible – you can use the Python APIs to take full control of the authentication system, or build something entirely custom.\n\n The reference docs give you all the APIs you need to do this, but it’s a lot to take in. To show you what’s possible, we’ve put together an example of a completely customised log-in and sign-up system. It features:\n\n Custom login and sign-up forms\n Sign-up validation with additional information (you must supply a name to sign up)\n Custom sign-up confirmation and password reset emails, sent from your GMail or G-Suite email address.\n Watch the walkthrough video, then clone the project and use it as your starting point for a fully customised authentication system:\n\n Matplotlib is a popular plotting library for Python. You may already be familiar with it, and want to use it in your Anvil apps. (If you’re not familiar with it, you should probably be using Anvil’s built-in Plotly support instead. If you do need to learn Matplotlib, start with this tutorial.)\n\n We’re going to walk through a simple Matplotlib example that makes a graph like this:\n\n \n\n Matplotlib is available in Anvil’s server modules, if you choose the Full Python 2.7 or 3.7 runtime. 
Once you’ve made a plot, all you need to do is:\n import anvil.server\nimport anvil.mpl_util\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n@anvil.server.callable\ndef make_plot():\n # Make a nice wiggle\n x = np.arange(0.0, 5.0, 0.02)\n y = np.exp(-x) * np.cos(2*np.pi*x)\n \n # Plot it\n plt.figure(1, figsize=(10,5))\n plt.plot(x, y, 'crimson') \n \n # Return this plot as a PNG image\n return anvil.mpl_util.plot_image()\n And here’s the client-side code to display that plot and make it downloadable:\n img = anvil.server.call('make_plot')\n \n self.image_1.source = img\n self.download_lnk.url = img\n That’s it!\n\n Click here to open the full app in your Anvil editor:\n\n \n\n For more information about using Anvil with matplotlib, check out our reference documentation or ask a question in our user forum.\n", "tags": ["video","cookbook"], "path": "/blog/matplotlib-with-anvil" }, { "title": "DropDowns and data tables", "snippet": "See how to use a DropDown component to select rows from a data table, and use them to create links.\n", "content": " We’ve had some questions on the forum about how to use DropDown components to select rows from data tables.\n\n Here is a walkthrough to show how it is done:\n\n For this example, I will build a simple document tracker with Anvil. It supports multiple documents, each with multiple versions, and tracks who uploaded each version. The whole app takes less than 15 minutes to build.\n\n This example shows you how to combine data tables, repeating panels, data bindings and user authentication to create a useful business application.\n\n Follow along as we start from a single stock chart and build up to a full subscription service. You can also copy the source code and explore it yourself:\n\n With Anvil, you can store passwords or API keys encrypted in your app, with a name – for example, I could store my GitHub password in a secret called github_password.
Then I can use it in my app, like this: username = \"meredydd\"\n password = anvil.secrets.get_secret(\"github_password\")\n\n r = anvil.http.request(\"\", json=True,\n username=username, password=password)\n I can also create and store an encryption key, and use it to encrypt and decrypt data. Encrypted-at-rest data storage is this simple:\n\n # This app has an encryption key named 'my_key':\n\n @anvil.server.callable\n def save_secret_data(new_text):\n encrypted_text = anvil.secrets.encrypt_with_key('my_key', new_text)\n\n app_tables.secret_data.add_row(text=encrypted_text)\n\n Watch our video tutorial here, or read our API documentation:\n\n App Secrets are encrypted, and stored with the source code for your Anvil app. Secrets and keys are encrypted with a key unique to that application – making the encrypted value useless outside that app.\n\n All encryption is performed with high quality, industry-standard encryption schemes that avoid the pitfalls of using low-level cryptographic libraries. For more details, see our documentation.\n\n Why do you need autocompletion, and how does it work? My talk at PyCon UK 2017 explains how – and why – we built an in-browser autocompleter for Anvil.\n\n Watch it below, or scroll down for a transcript:\n I'd like to start by taking some time to thank the PyCon UK organising committee. This has been our first time sponsoring PyCon UK, and we've been made to feel very welcome.\n\n We started out thinking we could make a good developer experience without autocomplete. I'm here to tell you that we were wrong.\n\t Unfortunately, Jedi (a popular Python autocompletion library) wasn't quite what we needed. Earlier I said Anvil was \"full stack\", and that means that it knows about everything from your database tables...\n\t ...to your server-side code...\n\t ...to your client-side code. And it's got to autocomplete all of them.\n\n What's more, Anvil is web-based, and Jedi is expecting a filesystem.
And when you're hitting the Tab key, there's just not enough time to go back to the server to fetch your completions.\n\n So we had to write it ourselves, in Javascript. Which means that, yes, here I am, talking about my Javascript project at a Python conference. (Please save the rotten fruit until after the photos at the end.)\n\t So we can take your code, insert a random symbol at your cursor position, and then feed it to the Skulpt parser. The parser then produces an abstract syntax tree that represents your module.\n\n\t We can store a lot of information about types. This leads to a rather philosophical discussion about what, exactly, a type is.\n\n You might say, \"that's easy, the type of an object is its Python class\". But of course, in Python, you can dynamically add attributes to individual object instances. And, arguably, even two dicts aren't really the same type.\n\n So what we actually do is mostly forget the Python class – our autocompleter is duck-typed. As far as we're concerned, these two dictionaries are two separate types, with separate item mappings, and should be treated as such.\n\t There's so much more I could talk about, but this is a short talk. And so, if you remember only one thing, make it this:\n\n Ladies and gentlemen of PyCon UK 2017, use autocomplete!\n\n Thank you very much.\n\t This year, for the first time, we were proud sponsors of PyCon UK 2017. We thought we’d do something new – we would give out a T-shirt to anyone who built an app with Anvil during the conference.\n\n We weren’t quite sure what to expect. The answer: People! We got a little swamped at times:\n\n But the apps they built were even more impressive.
Here are some of the apps they built, between Thursday morning and Sunday evening:\n\n \n Tom Newport built an impressive front-end for the MemProtMD protein simulation database, in just 45 minutes.\n\n Try the app:\n\n Follow Tom on Twitter: @tnewport\n\n \n\n \nAdnan built a game.\n\n Play it:\n\n Follow Adnan on Twitter: @tapundemek\n\n \n\n \nDavid’s app uses the Reddit API to fetch a random image from the top 25 posts on a specified subreddit. Pictured: A particularly adorable hedgehog. Try it yourself:\n\n Visit David’s website: davidsj.co.uk\n\n \n Izabela built a game too.\n\n Play it:\n\n Follow Izabela on Twitter: @IKJelonek\n\n \n\n \nMatt and Hel built apps as well.\n\n For our part, we built 140 demo apps over four days, and sounded like Louis Armstrong by the end of Sunday. (Anyone else in the same position: Throat sweets are your friend!)\n\n The conference ran like clockwork, and we really cannot thank the organising committee enough. Peter Inglesby appeared to be everywhere at once, and simultaneously a calm (and calming) presence. We had a great time, and look forward to seeing everyone again next year!\n\n", "tags": [], "path": "/blog/pycon-uk-2017-wrapup" }, { "title": "SMS Surveys in Pure Python", "snippet": "Incorporating SMS into an app? See how simple it can be with Anvil.\n", "content": " There are some great telephony services for developers out there these days. A simple HTTP call\nor webhook is all it takes to send and receive messages, but building and hosting a webapp to do this is still far harder than it should be.\n\n Enter Anvil, a platform for full-stack web development in nothing but Python. An SMS survey is a great example \nof something that should be really simple to build, so I’ll show you how I did it in just a few minutes with Anvil.\n\n First, head over to to see the finished app. You can also take a look at the source code \nand try it out yourself.\n\n Let’s take a quick tour through the code to see the highlights.
There are two main parts:\n\n The main form, displaying voting instructions and current live survey results.\n The Webhook Server Module where you’ll find the HTTP endpoint that can be called for each incoming SMS. I used the excellent \nNexmo API, which makes it really simple to receive SMS messages to a custom webhook. Take a look at the Webhook Server Module. You’ll see that it only took one function to handle incoming messages: @anvil.server.http_endpoint(\"/sms\")\n @tables.in_transaction\n def incoming_sms(msisdn, text, **kw):\n ...\n The incoming_sms function is decorated with @anvil.server.http_endpoint, which means external services can make HTTP requests that trigger the function call. \nAfter signing up with Nexmo, I simply set the incoming webhook address to the URL of my HTTP endpoint. In my case, that URL is \nshown at the bottom of the Server Module – your copy of the app will have its own URL, displayed in the same place. HTTP form parameters are passed as keyword arguments to this function. We have explicitly captured the sender ( msisdn) and contents ( text) of the message, and \nused **kw to receive the other parameters. In fact, form parameters, query string parameters and path parameters are all provided as keyword arguments – see the\ndocumentation for more information. Inside the incoming_sms function we add a row to our data table with the details of the incoming message: app_tables.messages.add_row(sender=msisdn, time=datetime.now(), message=text)\n Next we decide whether the incoming message contains a valid vote, and if so increment the appropriate total in the Results table.
colours = {\n \"A\": \"Red\",\n \"B\": \"Green\",\n \"C\": \"Blue\",\n \"D\": \"Yellow\",\n }\n\n colour = colours.get(text.upper(), None)\n if colour is not None:\n app_tables.results.get(colour=colour)['votes'] += 1\n Readers of a nervous disposition should note that the entire incoming_sms function is decorated with @tables.in_transaction, so operations like \nincrementing values in the database are perfectly safe. If two requests arrive simultaneously, and they try to edit the same table row, one of them \nwill be automatically rolled back and retried. See the Data Tables documentation for more details, and \nthe Data Tables tutorial for examples. Laying out the UI for our web application takes no code at all – Anvil’s visual designer lets you drag and drop components into place until your \npage looks the way you want it to look. For this app, we’ll add a few containers and labels, and a Plot component for displaying the chart. \n\n Once we’ve created the Plot component, we have the full Plotly API at our fingertips. Drawing a bar chart couldn’t be easier. Back in the main form, we query for the latest results:\n\n results = list(app_tables.results.search())\n Then it’s just a matter of creating a Plotly Bar Chart object, and populating it with our results. 
Notice that we use list comprehensions \nto assemble the necessary data for each axis:\n\n\n\n # Create a Plotly Bar chart with colours along the x-axis\n # and number of votes on the y-axis.\n self.results_plot.data = go.Bar(\n x=[v['colour'] for v in results],\n y=[v['votes'] for v in results],\n width=0.5,\n opacity=0.6,\n marker=go.Marker(\n # Set the colour of the bar to the colour being voted for\n color=[v['colour'].lower() for v in results],\n line=go.Line(\n color=\"#888\",\n width=1\n )\n )\n )\n \n # Set the axis and plot labels\n self.results_plot.layout = go.Layout(\n yaxis=go.YAxis(\n title=\"Votes\",\n # Start y-axis ticks at 0 and increment by 1.\n tick0=0,\n dtick=1\n ),\n xaxis=go.YAxis(\n title=\"Colour\"\n ),\n title=\"Live Poll Results (%s votes)\" % sum([v['votes'] for v in results])\n )\n The final piece of the puzzle is to make the chart update live. For this we use a Timer component on the form, and set its interval to 1 second. In the \n tick event handler, we simply call the method to redraw the plot. And there you have it. A complete SMS polling app, built and deployed in no time. Anvil lets you build great apps on top of great APIs (like Nexmo) without \nany of the hassle traditionally required for web development.\n\n We’re used to storing text and numbers in databases, so why not binary media? Whether it’s images, PDF files or whole spreadsheets, now you can store your files directly in Anvil Data Tables like any other data type.\n\n Just create a Media column in your table: Create rows in your table by uploading files:\n\n And view or download the media objects directly from the database:\n\n?\n\n We thought so too, and today we’re proud to announce Anvil Data Bindings:\n\n Data bindings extend Anvil’s visual designer. As well as positioning your components on the page with drag and drop, you can now set component properties to any Python expression. 
You can even assign updated values to this expression when you change your component.\n\n Finally, today we are also introducing the RepeatingPanel. This makes it easy to repeat components for every element in a list - now, displaying items in a table or list is a snap!\n\n You can find out more about data bindings with the tutorial . You can also read the reference documentation .\n\n", "tags": ["blog","announce"], "path": "/blog/announcing-data-bindings" }, { "title": "Build a Business Dashboard with Python", "snippet": "This walkthrough shows you how to access an existing Postgres database to develop a business dashboard in Anvil.\n", "search_terms": ["external","relational","database","postgres","postgresql","sql","server","mysql","rdbms"], "content": ".\n.\n\n This is the most important part of measuring your business. You have to ask yourself what information is so important, you should see it every morning.\n\n When you’re building a business, the most important questions aren’t always obvious. There are some great guides out there. I’d recommend Startup Metrics for Pirates (not just for start-ups!), and Adam D’Angelo’s talk on measurement from Y Combinator’s Startup School.\n\n.\n\n Sometimes, the answer you need is right there in your database, and all you need is to query it. It's great when that happens, and we'll cover this simple case in our walkthrough.\n\n.\n\n Let’s imagine we’ve thought about it, and decided that our primary business concern is acquisition: How many new users are we signing up, and how is that changing from week to week?\n\n For 99% of online businesses, this information will be in an SQL database somewhere. Connect with your command-line tool of choice, and write your query. 
Our example table looks like this, using Postgres:\n\n \nmyapp=> \\d users\n Table \"public.users\"\n Column | Type | Modifiers \n-------------+-----------------------------+-----------\n id | integer | \n email | text | \n signup_date | timestamp without time zone | \n\n\n\n A little trial and error, and we have a query that gives us the number of user sign-ups by week, for the last three months:\n\n \nmyapp=> SELECT COUNT(*), DATE_TRUNC('week', signup_date) AS d\n FROM users\n WHERE signup_date > NOW() - INTERVAL '3 months'\n GROUP BY DATE_TRUNC('week', signup_date)\n ORDER BY d;\n\n).\n\n To connect to a Postgres database, we use the standard Psycopg2 library. We create a server module and write:\n\n import psycopg2\n\nconn = psycopg2.connect(\"host=db.myapp.com dbname=my_app user=postgres password=secret\")\n Now, we want to run that SQL query on demand. We’ll define a function that gets our data and returns it as a list:\n\n @anvil.server.callable\ndef get_user_signups():\n cur = conn.cursor()\n cur.execute(\"\"\"\n SELECT COUNT(*), DATE_TRUNC('week', signup_date) AS d\n FROM users\n WHERE signup_date > NOW() - INTERVAL '3 months'\n GROUP BY DATE_TRUNC('week', signup_date)\n ORDER BY d;\n \"\"\")\n return list(cur)\n.\n\n Cross-reference with a NoSQL database. Query your CRM via its API. Run statistical models with NumPy. Python is the swiss army knife of data analysis – why wouldn't you use it for your dashboard?\n:\n\n signups = anvil.server.call('get_user_signups')\n\n# Anvil plots use the Plot.ly API:\nscatter = go.Scatter(x = [signup_time for (count,signup_time) in signups],\n y = [count for (count,signup_time) in signups],\n fill = 'tozeroy')\n We want that code to run when the page first opens, so we put it in the __init__ method of our form. (We also import the plot API.) 
Here’s the entire page source code, including the parts Anvil provides for you:\n\n from plotly import graph_objs as go\n\nclass Form1(Form1Template):\n def __init__(self, **properties):\n self.init_components(**properties)\n\n # This code will run when the form opens.\n signups = anvil.server.call('get_user_signups')\n\n # Make a line plot of this data\n scatter = go.Scatter(x = [signup_time for (count,signup_time) in signups],\n y = [count for (count,signup_time) in signups],\n fill = 'tozeroy')\n\n # Display that plot on our page\n self.plot_1.data = scatter\n That’s it! Here’s what it looks like, now we’re finished:\n\n See source code in the Anvil editor\n\n It feels silly to say it, but I know from personal experience: A dashboard you don’t look at is as bad as no dashboard at all. It’s actually worse, because knowing it’s there gives you a false sense of security.\n\n Set your dashboard as your home page - or if you have a spare screen, display it in the corner of your office. And then, when it tells you something interesting, you might actually react!\n\n \n\n I hope you’ve found this guide helpful. You can follow us on Twitter, or email me any questions or comments at meredydd@anvil.works. Finally, you can see the full source code for this example here:\n\n Python is the world’s favourite language for data processing and visualisation – and when you use Anvil, Python is all you need to build web apps. Today, we’re making it even easier to present your data on the web.\n\n:\n\n \n\n You can open this example in Anvil right now, or read the docs to learn more.\n\n You can also find more examples in the Plotly library docs .\n\n \n \n\n", "tags": ["blog","announce"], "path": "/blog/plots" }, { "title": "HTTP endpoints for your apps", "snippet": "Create an HTTP API for your Anvil app.", "content": " Sometimes we want to connect other programs with our Anvil apps. 
Whether it’s a native mobile app, a shell script, or a web service like Twilio, most things expect to make REST or plain HTTP requests.\n\n As of today, this is all it takes to publish an HTTP API that returns some JSON:\n\n @anvil.server.http_endpoint(\"/greet/:name\")\ndef greet(name, **qs):\n return {'message': 'Hello, ' + name + '!'} We can demonstrate it with the curl command-line program: \n$ curl\n{\"message\":\"Hello, Joanne!\"}\n\n\n You can open this example in Anvil right now, or read the docs to learn more.\n\n \n \n\n", "tags": ["blog","announce"], "path": "/blog/http-api-endpoints" }, { "title": "Make the world better? Remove some Javascript.", "snippet": "Anvil runs Python in the browser by compiling Python to JavaScript. Find out how!", "content": " To write full-stack web apps in nothing but Python, you need to run Python code in the browser. Watch my talk at PyCon 2017, or scroll down for a transcript:\n\n Or you can check out the pull request that implemented these changes.\n\t.\n.\n\t There was an obvious candidate.\n \n So, we built a tool for building full-stack web applications with nothing but Python.\n\t.\n\n But today I want to talk about this front-end code.\n\n If you want to drive the items on your web page as pure Python objects, you’re going to need to run Python in the browser. If you’re going to run something in the browser, it’s going to have to be in Javascript.\n\t.\n \n There is a problem, though. Javascript, in its infinite wisdom, is 100% non-blocking. If you kick off a process that finishes later, you’d better provide a callback.\n\t Here’s some code you might write in Python. Go get a record from the database, and if it’s not there, throw an error.\n\n And here’s how you’d do it in Javascript. I count three separate callbacks here. I’m not exaggerating - this is an example straight from the documentation of the postgres library!\n \n OK, so we have a Python-to-Javascript compiler, and it’s open source. 
What if we could turn this into this, automatically?\n\t So, this is what we need to modify to implement blocking code. We invent a new Javascript class that a function can return, to say “hey, I’m returning, but I’m not done yet; I’ve just blocked”. We call this a suspension.\n\n.\n\n.\n \n So, that’s how we take simple blocking Python, and compile it into non-blocking Javascript so you don’t have to.\n\t This has been a quick overview; if you’re interested you can check out Skulpt at skulpt.org - I’m one of the maintainers now, and we’re always looking for new contributors.\n \n And if you're fed up to the back teeth with all this Javascript, and you want to forget it all and write full-stack web apps with nothing but Python, please check us out at.\n \n Thank you very much!\n\t \n\n Every coder knows the pain: You know what you need to do, but you can’t remember the name of that function, or what order it takes its parameters. It’s time to fix that problem.\n\n.\n\n Smart code completion is available right now - just log in and start creating!\n\n \n \n\n", "tags": ["blog"], "path": "/blog/autocomplete" }, { "title": "Python widgets in HTML layouts", "snippet": "Speed, meet beauty. You don't need to know HTML and CSS to use Anvil - but now you can use their power when you want to.", "content": " Building a web page with drag and drop is much faster than fighting with HTML, CSS and Javascript. When we set out to build Anvil, we wanted to make it as easy to design a web app as it is to lay out a Powerpoint slide. We’ve combined drag-and-drop design with a library of prebuilt components, a built-in database and a simple Python programming environment. So far, we’re making web app development quicker and easier for people all over the world, from entrepreneurs to doctors. (Want to know more about Anvil? Read our introduction here.)\n\n But sometimes, you need to put your best foot forward. You want to re-use your existing page design and brand assets. 
Or if you’re building those assets from scratch, you want pixel-perfect control over your page header. In short, you want the flexibility of traditional web design.\n\n So, we asked: What if you could have both?\n\n We’re excited to announce support for HTML templates in Anvil. You can choose from our menu of ready-to-go templates, or use your existing web design assets. Once you’ve loaded your template, development is as easy as ever: just drag and drop Anvil components into the page, and arrange them how you like. And you can drive all these components with Anvil’s straightforward Python APIs. (No Javascript required.)\n\n\n\n\n Anvil’s built-in templates make it easier than ever to produce a beautiful web app in record time. But if you know HTML, or have existing web assets, you can go beyond our built-in templates.\n\n If you’re an Anvil Pro user, you can control your page down to the pixel, with any HTML or CSS you like. All you need is a couple of special attributes to tell Anvil where you can drag and drop components.\n\n\n\n\n Here’s all you need to build a drag-and-drop layout. You just need to specify where each group of components goes, and where to drag-and-drop them:\n\n\n\n <link rel=\"stylesheet\" href=\"\">\n\n<div class=\"header\" anvil-drop-slot=\"title\">\n <div anvil-slot-repeat=\"title\"></div>\n</div>\n\n<div class=\"card\" anvil-slot-repeat=\"default\" anvil-drop-here>\n</div> (Want to know how that works? Check out our documentation.)\n\n And here’s how it looks in action:\n\n To build an app with Anvil templates, sign up and try some of our examples, or start from scratch. We’ve got lots of video tutorials to help you out. If you’re into DIY, our reference documentation describes how to use your own HTML and CSS with Anvil.\n\n We’d love to hear what you think. 
Drop us a line at contact@anvil.works, or you can use my personal address: meredydd@anvil.works.\n\n\n", "tags": ["blog"], "path": "/blog/drag-drop-templates" }, { "title": "Usable configuration with Git", "snippet": "No app is an island. In this walkthrough, we'll show you how to use the GitHub REST API from an Anvil app, as we solve a common configuration problem with the power of Git.\n", "content": " As developers, almost every app we write has configuration. Often, that configuration should really be accessible to our less technical colleagues: feature flags, rate limits, deployment signoffs, and so forth.\n\n However, these changes also need to be tracked and audited. “The app just broke. Did someone change the config?” “Quick, revert it to last week’s settings!”\n\n As programmers, we know exactly the right tool for this: Text files in version control. They’re diffable, trackable and comprehensible, and if anything goes badly wrong we can dive in with a text editor.\n\n The problem comes when we present this solution to our non-technical colleagues. “So, I open the terminal? And then I type git clone and a string of gibberish you made me memorise?” It’s tempting to give up and say, “I’ll do it for you”. Developers end up as gatekeepers, with every change coming through us.\n\n This isn’t great either. Years ago, I used to develop SMS-based services for a mobile carrier in south-east Asia. This was the bad old days, before Twilio and friends, and the carrier had to sign off on every minor UI change – often at the very last minute. I spent many late nights waiting for a meeting on the other side of the world to finish, just so I could change one line in a config file.\n\n We can fix this. With the GitHub API, we can build an app in minutes that empowers our colleagues to change configuration on their own – with all the power of Git’s versioning and auditing.\n\n Here’s a simple app, hosted on Heroku (source at github.com/anvil-ph-test/edit-demo). 
It has a configuration file (called config.json) that determines vital parameters such as the font and background colour. Here’s how I built an Anvil app to edit that configuration, with less than a dozen lines of code:\n\n First, we need to grab the latest version of our config file:\n\n self.gh_record = anvil.http.request(\"\", json=True, username=\"abc\", password=\"123\") GitHub returns some information about this file, and its content in base64:\n \n{\n \"name\": \"config.json\",\n \"encoding\": \"base64\",\n \"size\": 67\n ...several other bits omitted...\n \"content\": \"eyJ1cHBlcmNhc2UiOnRydWUsImZvbnQiOiJIZWx2ZXRpY2EiLCJiYWNrZ3Jv\\ndW5kIjoiYmxhbmNoZWRhbG1vbmQifQ==\\n\",\n \"sha\": \"bfb17ee5edf43a54f6756f032603872ca7dce320\",\n}\n\n\n The content is what we care about:\n\n self.item = json.loads(base64.b64decode(self.gh_record[\"content\"])) The decoded data looks like this:\n \n{\n \"background\": \"blanchedalmond\",\n \"font\": \"Helvetica\",\n \"uppercase\": true\n}\n\n\n All we need now is to design our configuration interface. With Anvil’s data bindings, it’s all drag-and-drop - we can just specify which JSON key (in self.item) each text-box or check-box corresponds to. That’s all we need for a read-only interface: Now we have read-only access to our configuration, the next step is to save our changes. As we interact with the text-boxes and check-box, self.item is automatically updated. Now we just push this data back to the server, with an HTTP PUT request to the same URL. All GitHub needs is the new content for the file, a commit message, and the previous SHA hash of this file: new_record = {'content': base64.b64encode(json.dumps(self.item)),\n 'message': 'Edit from the web',\n 'sha': self.gh_record[\"sha\"]}\n\nanvil.http.request(\"\", method=\"PUT\", data=new_record, json=True, username=\"abc\", password=\"123\") And here’s the working app. 
Why not try changing some settings?\n\n Once you’ve saved your changes, scroll up and refresh the example app. Be patient - it may take a few seconds to re-deploy with the new config.\n\n OK, we’re not quite done. So far, we’re doing everything on the client side, which means everyone with the URL can access our authentication information! Even if we only give that URL out to people we (mostly) trust, it’s far too easy for it to end up in the wrong hands.\n\n Instead, we’ll do our GitHub API calls on the server side, and expose only two functions to the client: save_config and load_config. All the rest is safely on the server, where the user can’t see it: # This code runs on the server\n@anvil.server.callable\ndef load_config():\n gh_record = anvil.http.request(\"\", json=True, username=\"abc\", password=\"123\")\n\n return (gh_record['sha'], json.loads(base64.b64decode(gh_record['content'])))\n\n\n@anvil.server.callable\ndef save_config(data, last_sha):\n new_record = {'content': base64.b64encode(json.dumps(data)),\n 'message': 'Edit from the web',\n 'sha': last_sha}\n r = anvil.http.request(\"\", method=\"PUT\", data=new_record, json=True, username=\"abc\", password=\"123\")\n\n return r['content']['sha'] (In fact, I’ve been using this version of the code for all the example apps embedded in this blog post. I’m afraid I wasn’t feeling generous enough to share my GitHub authentication tokens with anyone who can View Source. Sorry to disappoint anyone who tried.)\n\n There you have it - a secure, functional configuration editor, ready for our non-technical colleagues to use. You don’t need to know Git to use it, but it does have full tracing and history of every change.\n\n #1: We don’t have to sacrifice the benefits of Git for our configuration, just in order to get a user-friendly admin interface. 
We can have both!\n\n #2: The GitHub API is awesome.\n\n #3: Anvil lets you build useful web apps very, very quickly.\n\n \n\n \nWhy not clone this example app, read its source code, and try it out yourself? \n", "tags": ["blog","cookbook"], "path": "/blog/github-storage" }, { "title": "Multi-page apps with shared navigation", "snippet": "Meredydd demonstrates a way to have a multi-page app, but only build your sidebar menu once.\n", "content": " If you want a button or link to open a new form, the normal way is to call open_form() from the click handler. But several people in the forum have asked how to make multi-page apps where all the pages share the same top-level navigation, logo, etc. Here, I’ll show one way to do that:\n\n Rapid development is great, and Anvil lets you build web apps amazingly fast. But sometimes you need more. You need tracking, collaboration, code review, versioning. In short, you need source control.\n\n Today, we’re announcing availability of Git access for your Anvil apps. It’s simple: each Anvil app is its own Git repository. Just clone the repository, and you can pull, push, edit and merge Anvil apps right from your workstation.\n\n \n\n Now you can collaborate on multi-person teams, manage deployment and staging environments, and integrate Anvil into your code review process.\n\n!)\n\n \n", "tags": ["blog"], "path": "/blog/git-beta-announce" }, { "title": "Anvil On-Site Installation", "snippet": "You can run an entire Anvil instance on your local network, airgapped from the internet, or in a private cloud. If you just want to manage your own database or a part of the server-side code, there are several ways to do that too.", "content": " Anvil is, by default, a cloud-hosted service. This makes it incredibly easy to create web-apps that are live in the cloud, accessible from anywhere, and integrate with other cloud services.\n\n If you’re in a corporate environment, your web app may need to access local resources. 
For example, you might want to use Anvil to query a database on your corporate network. For this, you will normally use the Anvil Uplink. This lets you securely give your Anvil app access to the relevant parts of your database:\n\n The Anvil Uplink is available to all users, and does not store any data in the Anvil cloud. To learn more, watch our 4-minute video or read our documentation.\n\n Certain enterprise users, however, require more assurance. For example, organisations dealing with healthcare data may not transfer patient records to third-party services without special agreements.\n\n For these users, we offer Anvil On-Site Installation. This allows you to develop and run your app entirely behind your corporate firewall, on servers you control:\n\n An on-site Anvil installation requires no connection to the outside internet, giving you maximum assurance that your data is under your control.\n\n Anvil On-Site Installation runs as a Docker container. It typically takes less than five minutes to get Anvil On-Site working on your network - and Anvil staff will be there to help you every step of the way.\n\n If you want to run Anvil apps on your own network, please get in touch to find out more or arrange a free trial:\n\n \n", "tags": ["blog"], "path": "/blog/anvil-on-site" }, { "title": "Accepting Payment with Stripe", "snippet": "Get paid using Anvil.\n\nLearn how to take online payments using Anvil and Stripe.\n", "content": " Learn how to take online payments using Anvil and Stripe.\n\n\n\n Find out more about accepting payments online - check out the Stripe section of the Anvil reference manual.\n\n The Stripe Service | 0:20 - 0:42\n\n We enable the Stripe Service:\n\n \n\n We then log in to an existing Stripe account. 
If you don’t have an account, don’t worry - Stripe accounts are free and signup takes a few minutes.\n\n The Stripe Service is in Test Mode when you start it up, which means the service doesn’t actually charge anybody’s account.\n\n Write some code | 0:42 - 1:06\n\n Our user interface is just a single Button saying ‘Pay me’. We write an event handler to charge the user when the Button is clicked:\n\n def button_1_click(self, **event_args):\n \"\"\"This method is called when the button is clicked.\"\"\"\n c = stripe.checkout.charge(currency=\"GBP\", amount=100)\n print(c)\n Get paid | 1:06 - 1:49\n\n Let’s try it out - when the user clicks on the Button, they are presented with a payment dialog.\n\n We make a payment from card number 4242 4242 4242 4242. This is a test card that only works in test mode. \n\n The Output Console shows the return value of the charge function - it contains all available information about\nthe transaction. You could use this to audit your transactions and run analytics. It includes a URL that allows \nyou to look up the transaction in the Stripe dashboard later. Build this app for yourself - Log in and follow along, or watch more tutorials in the Anvil Learning Centre.\n\n See the Stripe Service used in a production-scale app. We built an app to take online\nparking meter payments in just 90 minutes. You can read about how it works and clone the app \ninto your own Anvil account.\n\n If you’d like to learn the basics of Anvil, start with the Hello, World! tutorial.\n\n In this video, we’ll learn how to access files stored in your Google Drive from an Anvil app.\n\n We’ll build a photo gallery that cycles through all the pictures in a Google Drive folder.\n\n\n\n You can do a lot more with Anvil and Google services. 
Find out more in the Google section of the Anvil reference manual.\n\n Constructing a UI | 0:14 - 0:50\n\n We construct a UI by adding an Image component to display images, and Buttons for ‘previous’ and ‘next’.\n\n \n\n Adding files to Google Drive | 0:50 - 1:36\n\n We enable the Google Drive Service and connect it to a folder on Google Drive. We make it read-only.\n\n \n\n Wire up the components | 1:36 - 7:10\n\n We explain the process of developing the code to flick through the images when the Buttons are clicked. Here it is in brief.\n\n First, we get a reference to the picture folder:\n\n pic_folder = app_files.my_holiday.files\n Then, we initialise the image to the 0th picture:\n\n self.pic_num = 0\n self.current_pic.source = pic_folder[self.pic_num]\n \n\n self.pic_num keeps track of which picture we’re looking at right now. That means we can display the current picture using this method:\n\n def display_pic(self):\n pic = pic_folder[self.pic_num]\n self.current_pic.source = pic \n To make the buttons work, we just need to increment or decrement the picture number, and call self.display_pic: def next_btn_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n self.pic_num += 1\n self.display_pic()\n\n def prev_btn_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n self.pic_num -= 1\n self.display_pic()\n And finally, we need to disable the Buttons when we’ve reached either end of the picture list:\n\n def display_pic(self):\n # Disable buttons if we've reached the end of the picture list\n self.prev_btn.enabled = self.pic_num > 0\n self.next_btn.enabled = self.pic_num < len(pic_folder) - 1\n\n # Then display the picture\n pic = pic_folder[self.pic_num]\n self.current_pic.source = pic\n And that’s all we need. 
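Stripped of the Anvil components and Google Drive files, the navigation logic above can be exercised with a plain list. The sketch below is illustrative — the class and attribute names are stand-ins, not the app's real code:

```python
class Gallery:
    # Pure-Python model of the gallery's button logic, so the
    # enable/disable rules can be tested without Anvil or Drive.
    def __init__(self, files):
        self.files = files
        self.pic_num = 0
        self.display_pic()

    def display_pic(self):
        # Same rules as the app: disable buttons at either end
        self.prev_enabled = self.pic_num > 0
        self.next_enabled = self.pic_num < len(self.files) - 1
        self.current = self.files[self.pic_num]

    def next_click(self):
        self.pic_num += 1
        self.display_pic()

    def prev_click(self):
        self.pic_num -= 1
        self.display_pic()

g = Gallery(["beach.jpg", "mountain.jpg", "city.jpg"])
```

Because the buttons are disabled at the ends of the list, `pic_num` can never run off either end.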
The finished app looks like this:\n\n \n\n Click this button to copy the app into your account:\n\n (You’ll need to connect it to a Google Drive folder full of pictures - cloned apps do not preserve Google Drive credentials!)\n\n Build this app for yourself - Log in and follow along, or watch more tutorials in the Anvil Learning Centre.\n\n Watch Ian build a document storage app, storing and displaying multiple documents,\nwith full version history including who uploaded each version. This shows how to combine some of the most commonly \nused Anvil features to build a useful business application. The full build takes 15 minutes.\n\n If you’d like to learn the basics of Anvil, start with the Hello, World! tutorial.\n\n (Note: This video discusses using Google authentication directly. You might be interested in Anvil’s powerful built-in user authentication.)\n\n Often, there are parts of an app we don’t want everyone to access.\n\n In this video, we’ll show you how to let users log into your app with their Google accounts. You can use the user’s identity to determine what they are and are not allowed to do.\n\n We use Anvil’s server modules to write code that cannot be tampered with by our users. (In this case, we want to make sure they don’t bypass the authentication check). Read more about server modules in the Anvil reference manual.\n\n You can do a lot more with Anvil and Google services. Find out more in the Google section of the Anvil reference manual.\n", "tags": ["video"], "path": "/blog/google-auth" }, { "title": "Using code outside Anvil", "snippet": "In this video, we'll learn how to write programs outside Anvil, and call them from your Anvil app, using the Uplink API.\n\nIn this video, we use the Uplink to control a Raspberry Pi from the web.\n", "content": " Sometimes, you want to use code that’s not on the web, from your web app. 
Usually this is a bit of a pain, but Anvil makes it easy with the Anvil Uplink.\n\n The Uplink lets you write any program outside Anvil, and talk to it from your Anvil app. (Even if your program isn’t running on a public web server!)\n\n In this video, we use the Uplink to control a program on the Raspberry Pi from an Anvil app.\n\n Of course, the Uplink is useful for more than just playing with the Raspberry Pi. You can connect a program you’ve already written, access files on your computer, or connect an Anvil app to code you’ve already written. You can even call into your Anvil app from uplink code - it’s a two-way API!\n\n To learn more about the Anvil Uplink, read the Uplink section of the Anvil reference manual.\n\n You can buy your own Raspberry Pi from the Raspberry Pi Foundation website.\n\n Constructing a UI | 0:26 - 1:07\n\n We construct a UI where the user can enter a name to display on the Raspberry Pi.\n\n The Uplink | 1:07 - 2:54\n\n We enable the Uplink by clicking on Uplink in the Gear menu \n . \n\n This shows a dialog with an ‘enable’ button; clicking that button gives us a unique ID for our app:\n\n \n\n On the Raspberry Pi, we install the Anvil Uplink library:\n\n pip install anvil-uplink\n And in a Python script on the Raspberry Pi, we write some code to print a message. The message is printed on a SenseHAT,\nwhich has an LED display that can be controlled from a simple Python library.\n\n import anvil.server\nfrom sense_hat import SenseHat\n\nsense = SenseHat()\n\n@anvil.server.callable\ndef show_message(message):\n sense.show_message(message)\n\nanvil.server.connect(\"<YOUR UPLINK KEY HERE>\")\nanvil.server.wait_forever()\n Then we run the script - the wait_forever() makes sure it stays alive waiting for function calls. 
Calling the Uplink script from the app | 2:54 - 3:23\n\n We call the show_message function from the app when the Button is clicked: def button_1_click(self, **event_args):\n \"\"\"This method is called when the button is clicked.\"\"\"\n anvil.server.call(\"show_message\", self.name_box.text)\n And that’s it! Now when you enter your name in the App:\n\n \n\n Your name appears in scrolling lights on the Raspberry Pi:\n\n \n\n More uses for the Uplink | 3:23 - 3:45\n\n As well as calling from your app, you can use the Uplink to call into your app.\n\n Let’s say you have a function in a Server Module:\n\n # In a Server Module\n@anvil.server.callable\ndef store_name(name):\n app_tables.names.add_row(name=name, when=datetime.now())\n You can call it from your Python script using anvil.server.call: # On your own machine (Raspberry Pi, your laptop, in your server room, anywhere...)\ndef store_name_in_anvil(name):\n anvil.server.call(\"store_name\", name)\n So you can run Python anywhere and connect it to your app just by making function calls! Anything you can do in Python, you can integrate into an Anvil app using the Uplink.\n\n Build this app for yourself - Log in and follow along, or watch more tutorials in the Anvil Learning Centre.\n\n Experiment more with the Uplink by following this step-by-step workshop. You’ll connect \nan app to your local machine, run unit tests, and store the results for future reference.\n\n If you’d like to learn the basics of Anvil, start with the Hello, World! tutorial.\n\n Click here to learn about writing multi-user applications using the Anvil Users service.\n\n\n\n", "tags": ["nositemap"], "path": "/blog/advanced-data-storage" }, { "title": "Multi-User Applications with Anvil", "snippet": "Anvil Essentials part 3 of 3.\n\nAnvil makes multi-user applications easy. 
In this video, we'll expand on the To-Do list example, and turn it into a secure, multi-user application that can be posted publicly on the internet.\n", "search_terms": ["user authentication","auth"], "content": " When your application supports multiple different users, it’s important to ensure that they can’t access each other’s data. In this video, we show how straightforward this can be with Anvil.\n\n We start with the online TODO-list app from our last tutorial, and turn it into a multi-user public service.\n\n Hands on\n\n To explore this app in the Anvil editor, click this button:\n\n Feel free to use it as a starting point for writing your own multi-user Create, Read, Update, Delete (CRUD) app.\n\n Want to know more?\n\n Read about the Users Service or Data Tables in the Anvil reference manual.\n\n Users Service | 0:32 - 0:45\n\n The Users Service handles user signup and login with a single line of Python. Usernames and password hashes are stored automatically\nin a Data Table.\n\n There are various sign-in options in the Users Service, and all can be enabled/disabled:\n And if you have your own Anvil instance you can also use:\n Linking To-Do items to users | 0:45 - 1:03\n\n We add a link column to link each row of the Reminders table to a row of the Users table. 
This will link to the user that each reminder belongs to.\n\n \n\n Logging users in | 1:03 - 1:30\n\n We display the login form by running:\n\n anvil.users.login_with_form()\n Client-writable views | 1:30 - 3:42\n\n We set the permissions on our Reminders table back to ‘No access’ for Form code (the default).\n\n \n\n We then create a client-writable view specific to the logged-in user:\n\n @anvil.server.callable\ndef get_reminders():\n current_user = anvil.users.get_user()\n \n if current_user is not None:\n return app_tables.reminders.client_writable(owner=current_user)\n In the client, we get a reference to this view:\n\n self.my_reminders = anvil.server.call('get_reminders')\n Everywhere we were using app_tables.reminders, we now use self.my_reminders. Everything behaves as if the table\ncontained only this user’s reminders. \n\n Publishing | 3:42 - 4:34\n\n If we log in as meredydd@anvil.works, we see one To-Do list. If we log in as shaun@anvil.works, we see a different\nTo-Do list. Both lists are in the Reminders table, but each user can only access their own reminders. 
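The security property here can be modelled in plain Python: the filtering happens on the server, so the view the client receives simply never contains other users' rows. The data and function below are invented for illustration — this is the idea, not Anvil's implementation:

```python
# Toy model of a client-writable view: the server pre-filters the
# table, so the client can only ever see (or edit) its own rows.
reminders_table = [
    {"owner": "meredydd@anvil.works", "description": "Write blog post"},
    {"owner": "shaun@anvil.works",    "description": "Fix the build"},
]

def client_writable_view(current_user):
    # Server-side: restrict the view to rows owned by this user.
    # In Anvil, rows added through the view get owner=current_user too.
    return [row for row in reminders_table if row["owner"] == current_user]

my_reminders = client_writable_view("shaun@anvil.works")
```

Because the restriction is applied before anything reaches the browser, a malicious client has nothing to tamper with.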
Now we’re confident in sharing this publicly.\n\n \n\n Build this app for yourself - Sign up and follow along, or watch more tutorials in the Anvil Learning Centre.\n\n Congratulations - if you’ve followed the three Anvil Essentials tutorials, you have a solid foundation in Anvil.\n\n Watch the Anvil Uplink tutorial to learn how to connect Anvil to your own Python installation running \non any machine.\n\n You can search or browse all our tutorials from the Learning Centre.\n\n Click here to learn about storing and displaying data using the Anvil Users service.\n\n\n\n", "tags": [], "path": "/blog/storing-data" }, { "title": "Storing and Displaying Data", "snippet": "Anvil Essentials part 2 of 3.\n\nBuild a To-Do list app, and learn how to store data in Anvil.\n\nAnvil's data bindings make data-driven applications easy!\n", "content": " Most apps need to store information between one visit and the next. This video will introduce you to Anvil’s Data Tables service, which lets you store data quickly and securely.\n\n It will also show you how to display lists of data using Data Bindings and the RepeatingPanel.\n\n In this video, we will build an online To-Do list app. Let’s get going.\n\n Want to know more?\n\n To learn more about Data Tables, read the Data Tables section of the Anvil reference manual.\n\n You can also learn about making advanced queries on your Data Tables using anvil.tables.query. Or perhaps read about Data Bindings and RepeatingPanel in the reference manual.\n\n Next tutorial\n\n In our next tutorial, we expand this app to support multiple users. Check it out!\n\n Data Tables | 0:28 - 1:31\n\n We use Data Tables to store the reminders. Data Tables is an out-of-the-box option for data storage in Anvil, and it’s backed by PostgreSQL.\n\n Our reminders are stored in a table with a Text column for the description, and a True/False column to mark which reminders are done. 
\n\n For this app, we enable read/write access to our Reminders table from the client .\nThe next tutorial describes how to precisely control Data Tables access via the server to \nmeet security best-practice. Displaying items as a list | 1:31 - 2:20\n\n We want to display our reminders visually, so on our Form we need to create a list with one element per reminder. \nWe do this using a RepeatingPanel. A RepeatingPanel repeats a UI template once for each item in a list.\n\n We add a CheckBox to the template, which is sufficient to display each reminder.\n\n \n\n Data Bindings | 2:20 - 3:14\n\n Data Bindings bind a property of a component to a single Python expression.\nThe ‘meaning’ of the property is defined in a single place, which can help prevent bugs.\n\n We set the CheckBox’s text property to the ‘description’ column, and its checked property to the ‘done’ column. \n\n Configuring the RepeatingPanel | 3:14 - 3:52\n\n We make the RepeatingPanel show each item in the Reminders table:\n\n \n\n To achieve this, we set the RepeatingPanel’s items to the list of rows from the Reminders table: self.repeating_panel_1.items = app_tables.reminders.search()\n The RepeatingPanel automatically creates an entry for each item in its items property. Adding reminders | 3:52 - 5:22\n\n We add a TextBox and Button to our GUI.\n\n \n\n In the Button’s click event handler, we add a new row to the Data Table. new_row = app_tables.reminders.add_row(description=self.new_reminder_box.text)\n Then we update the RepeatingPanel to reflect the changes in the Data Table. The display is updated automatically.\n\n self.repeating_panel_1.items = app_tables.reminders.search()\n Deleting reminders | 5:22 - 6:42\n\n We make each item deletable by adding a delete Link to the RepeatingPanel’s template. 
To make the delete happen,\nwe add this code to the Link’s click event handler: self.item.delete()\n self.remove_from_parent()\n Publishing | 6:42 - 6:58\n\n We’ve tested it and it works, so it’s time to publish. We choose ‘Publish App’ in the Gear menu \n and copy-paste the private link. This is a random, unguessable URL that’s similar\nto a Google Docs sharing link. Anvil apps can also be published at a more memorable URL, simply by clicking ‘Share via public link’.\n\n Here’s the final app:\n\n \n\n Build this app for yourself - Sign up and follow along, or watch more tutorials in the Anvil Learning Centre.\n\n To complete the Anvil Essentials course, we explore user authentication and data security in Multi-User Applications with Anvil.\n\n You will then be able to build and publish multi-user data management apps with Anvil.\n\n The first tutorial in the series is the Introduction to Anvil.\n\n Anvil lets you get things done fast. Really fast.\n\n In just 45 seconds, watch us build a website that greets you by name.\n\n This video has no narration. To learn more, check out our slower, narrated video.\n\n Not so much. Let’s say you want a site to take orders for your new widget, or keep track of your customers, or schedule your local football league. Traditionally, you’ll need to know an alphabet soup of languages and technologies: HTML, CSS, JS, PHP, SQL - the list goes on. And that’s before we start on the complex frameworks required to make them usable.\n\n This makes web development slow and complicated for professionals, difficult for other engineers, and entirely inaccessible for beginners.\n\n We need to do better than this. So we built Anvil.\n\n Anvil is a tool for making interactive websites in Python. Build your site with drag and drop, placing text, buttons, input boxes, images and more. Then double-click a button and write the Python that executes when that button is clicked.\n\n You can make something really quick this way. 
Watch us build a page that greets you by name, in 45 seconds flat:\n\n \n\n Anvil’s built-in database has a simple, spreadsheet-like interface for editing your data. Searching or editing it from your code is a no-nonsense Python statement. You can build a working database-backed to-do list app in five minutes - watch us do it!\n\n If you already have a database, no problem - Anvil can connect to that too. (For those with special requirements, we even offer an on-site solution)\n\n No app is an island, and you shouldn’t have to build things from scratch. Anvil makes it easy for your apps to use services from the rest of the web:\n\n\n\n You might want to use something that’s only available on your network, or your computer. Perhaps you want to use your company database, or special hardware, or files stored on your computer.\n\n With Anvil, that’s a snap. Just import a library, mark the functions you want to call from Anvil, and away you go.\n\n Watch us control a Raspberry Pi from the web in three minutes.\n\n \n\n Anvil is free for personal use, and we can’t wait to see what you will build with it. Why not sign up for free and try it out?\n\n \n \n", "tags": ["welcome","blog"], "path": "/blog/introducing-anvil" }, { "title": "Introduction to Anvil", "snippet": "Anvil Essentials part 1 of 3.\n\nWatch me build a secure, multi-user web app, complete with database storage and user accounts, using nothing but Python.\n\nBy the time we're done, you'll know the essential parts of Anvil, and how to use them to create web apps in Python.\n", "content": " In this video, we’ll take a tour of Anvil. We’ll start from a blank page, and build up to a multi-user web app complete with database storage and user accounts - all in nothing but Python.\n\n By signing up and building along with this video, you can learn all the essentials of Anvil.\n\n Take a tour with me:\n\n Now you’ve watched this tour, it’s time to explore further. 
Sign up and try it yourself, or watch more tutorials in the Anvil Learning Centre.\n\n Next tutorial\n\n In our next tutorial, we build a To-Do list that allows you to add, edit and delete reminders.\n\n Constructing a User Interface | 0:36 - 1:38\n\n We construct the UI by dragging-and-dropping components from the Toolbox. We add a TextBox to enter a name,\na Button to click, and an empty Label where a greeting will go. The Properties panel allows us to configure these components.\n\n \n\n Handling events | 1:38 - 1:50\n\n To make the Button do something, we simply double-click the Button in the Editor. \nThis creates a Python method that runs when the Button is clicked.\n\n To configure more event handlers, use the ‘Events’ box at the bottom of the Properties panel on the right:\n\n \n\n Controlling components from code | 1:50 - 2:18\n\n Each component is available as a Python object. Their properties can be modified in code. We set the event handler\nto put a greeting in the message label, using the name entered in the text box:\n\n def button_1_click(self, **event_args):\n \"\"\"This method is called when the button is clicked\"\"\"\n self.message_label.text = 'Hello, ' + self.name_box.text + '!'\n Entering a name now displays a greeting:\n\n \n\n Publishing your app | 2:18 - 2:50\n\n We’ve just built a simple web app - let’s publish it!\n\n\n To run code on the server, we add a Server Module. This is a Python module that runs on the server. 
It’s ready to go right away.\n\n We define a function to print the name that was entered:\n\n def say_hello(name):\n print(\"Hello, \" + name)\n And we decorate it with @anvil.server.callable so we can call it from our page: @anvil.server.callable\ndef say_hello(name):\n print(\"Hello, \" + name)\n Then in the client code, we can call it by running:\n\n anvil.server.call('say_hello', self.name_box.text)\n When we run our app again and enter “Meredydd”, we see that the server has printed “Hello, Meredydd”.\n\n This Server Module is running standard Python, so it can run any Python packages such as pandas, numpy, or googleads. Storing Data | 3:51 - 5:00\n\n You can connect to your own database, but you’ll often want something easier. We create a Data Table to record the name of each visitor we’ve seen. We give it a single text column, name. \n\n Then we put some code in the Server Module to store the names as they are entered:\n\n app_tables.visitors.add_row(name=name)\n User registration | 5:00 - 6:50\n\n We enable the Users Service and discuss the features that it supports - Email + Password, Google, Facebook, plus your company’s Active Directory or certificate system.\n\n It automatically creates a Data Table to store the usernames, password hashes, and other data it manages for you.\n\n To show the login/signup dialog, we add this line to our Form:\n\n anvil.users.login_with_form()\n \n\n We link the Users table to the Visitors table so we can see which user entered which name.\n\n Then we try it out - we run the app, sign up and verify our email, and log in. We see that we have a new row, and we’ve linked the\nentries in the visitors table to our users.\n\n \n\n Saving and version control | 6:48 - 7:40\n\n That’s it! Time to save what we’ve made. We hit the Save button to store the state of the app at this point in time.\n\n Then we look at the version history for this app. 
It’s backed by Git, so we have complete version control - and we \ncan clone the app as a Git repo to work on it offline.\n\n Finally, we set a particular verison of the app to ‘published’ - this keeps our published app separate from the one\nwe’re working on.\n\n \n\n Build this app for yourself to master the essentials of Anvil. Sign up and follow along, or watch more tutorials in the Anvil Learning Centre.\n\n The Anvil Essentials course continues with Storing and Displaying Data.\n\n By the end of the three Anvil Essentials tutorials, you will be able to build and publish multi-user data management apps with Anvil.\n\n
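The @anvil.server.callable / anvil.server.call pairing that the tutorials above lean on is, at its core, a name-based RPC registry. As a rough stand-alone sketch of that idea in plain Python — no Anvil involved, and the names here are illustrative, not Anvil's real implementation:

```python
# Minimal sketch of a name-based callable registry, in the spirit of
# @anvil.server.callable / anvil.server.call (illustrative only).
_registry = {}

def callable(fn):
    # Register the function under its own name so it can be looked up later.
    _registry[fn.__name__] = fn
    return fn

def call(name, *args, **kwargs):
    # Look the function up by name and invoke it, like anvil.server.call(...).
    return _registry[name](*args, **kwargs)

@callable
def say_hello(name):
    return "Hello, " + name

print(call("say_hello", "Meredydd"))  # -> Hello, Meredydd
```

In real Anvil the lookup crosses the client/server boundary, but the registration-by-decorator pattern is the same shape.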
https://anvil.works/machine-readable/search-index.json
CC-MAIN-2019-22
refinedweb
51,982
64.91
Hi, I have recently started trying to teach myself Python. I've watched a few video tutorials and am reading "Learning Python". I have been making an MP3 player and have hit a point that I can't work my way past, so I was hoping some kind person here could help me work it out. The problem is I want to go from only opening one mp3 file and playing it to a playlist system, but I can't work out how. Here is my current code (using Python 2.6 with Tkinter and mp3play).

{Edit} Just thought I should put a bit more detail in as to what I have tried so far. Basically I have tried using askopenfilenames to get the names of the files and put them in a list, but I have no idea what to do after that :(. Hope I am making some sense.

from Tkinter import *
import mp3play
import tkFileDialog
import Tkinter

def open_file():
    # Opens a dialog box to open a .mp3 file,
    # then sends the filename to file_name_label
    global music
    global mp3
    global play_list
    filename.set(tkFileDialog.askopenfilename(defaultextension=".mp3",
                 filetypes=[("All Types", ".*"), ("MP3", ".mp3")]))
    playlist = filename.get()
    playlist_pieces = playlist.split("/")
    play_list.set(playlist_pieces[-1])
    playl = play_list.get()
    play_list_display.insert(END, playl)
    mp3 = filename.get()
    print mp3
    music = mp3play.load(mp3)
    pieces = mp3.split("/")
    name.set(pieces[-1])

def play():
    # Plays the .mp3 file
    music.play()

def stop():
    # Stops the .mp3 file
    music.stop()

def pause():
    # Pauses or unpauses the .mp3 file
    if music.ispaused() == True:
        music.unpause()
    elif music.ispaused() == False:
        music.pause()

def vol(event):
    # Allows volume to be changed with the slider
    v = Scale.get(volume_slider)
    music.volume(v)

def Exit():
    exit()

root = Tk()
root.title("Ickle Music Player")
root.geometry('300x100+250+100')

filename = Tkinter.StringVar()
name = Tkinter.StringVar()
play_list = Tkinter.StringVar()

menubar = Menu(root)
filemenu = Menu(menubar, tearoff=0)
menubar.add_cascade(label='File', menu=filemenu)
filemenu.add_command(label='Open', command=open_file)
filemenu.add_separator()
filemenu.add_command(label='Exit', command=Exit)
root.config(menu=menubar)

open_file = Button(root, width=6, height=1, text='Open file', command=open_file)
open_file.grid(row=0, column=3)
play_button = Button(root, width=5, height=1, text='Play', command=play)
play_button.grid(row=0, column=0, sticky=W)
stop_button = Button(root, width=5, height=1, text='Stop', command=stop)
stop_button.grid(row=0, column=1, sticky=W)
pause_button = Button(root, width=5, height=1, text='Pause', command=pause)
pause_button.grid(row=0, column=2)
volume_slider = Scale(root, label='Volume', orient='horizontal', fg='black', command=vol)
volume_slider.grid(row=0, column=4)
file_name_label = Label(root, font=('Verdana', 8), fg='black', wraplength=300, textvariable=name)
file_name_label.grid(row=3, column=0, columnspan=8)

play_list_window = Toplevel(root, height=150, width=100)
play_list_window.title("Playlist")
play_list_display = Listbox(play_list_window, width=50)
play_list_display.pack()

play_list_window.mainloop()
root.mainloop()

Thanks
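One way to move from a single file to a playlist — a sketch only; the widget names and the mp3play calls are assumptions taken from the post above, not tested here — is to keep the chosen paths in a plain Python list and track the index of the current track. The queue logic itself needs no GUI and can live in an ordinary class:

```python
class Playlist(object):
    """Minimal playlist queue: stores full paths, exposes display names."""

    def __init__(self):
        self.tracks = []   # full file paths, in play order
        self.current = -1  # index of the track now playing

    def add(self, *paths):
        # askopenfilenames() returns a sequence of paths; add them all
        self.tracks.extend(paths)

    def display_names(self):
        # What to insert into the Listbox: just the file names
        return [p.split("/")[-1] for p in self.tracks]

    def next_track(self):
        # Advance and return the next path, or None at the end of the queue
        if self.current + 1 >= len(self.tracks):
            return None
        self.current += 1
        return self.tracks[self.current]
```

In the Tkinter code, open_file() would then call playlist.add(*tkFileDialog.askopenfilenames(...)), refill the Listbox from display_names(), and a hypothetical 'Next' button handler would do mp3play.load(playlist.next_track()) — those widget and mp3play calls are lifted from the post and are not verified here.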
https://www.daniweb.com/programming/software-development/threads/205335/my-first-program-mp3-player
CC-MAIN-2021-25
refinedweb
463
54.79
By definition, the Fibonacci series is defined as follows: the first two numbers in the sequence are 0 and 1, and each subsequent number is the sum of the previous two — i.e. the nth number is obtained by adding the (n-1)th and (n-2)th numbers in the series.

Let us consider an example. Assume that we need to make a series of 5 numbers. The first two places are always occupied by 0 and 1. The third number in the series is obtained by adding the first and second numbers, so the third number is 1. The fourth number is the sum of the second and third numbers, so it is 2. Hence the final number in the series is the sum of the fourth and third numbers: 3.

Now let us write a Java program to print the Fibonacci series of a length given by the user. To get input from the user we use an object of the Scanner class.

import java.util.Scanner; // java package which contains the Scanner class

class fib {
    public static void main(String arg[]) {
        int a = 0, b = 1, c;
        System.out.print("Enter the length of the series needed including 0,1: ");
        Scanner ob = new Scanner(System.in); // declaration of object for Scanner class
        int n = ob.nextInt();                // initializing the value of n
        System.out.print("Fibonacci series is: " + a + " " + b);
        for (int i = 0; i < n - 2; i++) {    // the first two numbers are already printed
            c = a + b; // getting the nth number by adding the (n-1)th and (n-2)th numbers
            a = b;
            b = c;
            System.out.print(" " + c);
        } // ending the for loop
    } // ending the main method
} // ending the class

Output (for input 5): Fibonacci series is: 0 1 1 2 3

You can download the source code: download
https://letusprogram.com/2013/07/11/fibonacci-series-in-java/
CC-MAIN-2018-47
refinedweb
301
65.83
Tim: How does A*.app 1.0 seem to have changed from 022? - * on Xcode Project -- Official Thread

If it isn't too much hassle, I know that myself and many with me would appreciate it a lot if you could do a write-up of what you have learned so far about how this Xcode+Arduino.app setup works. I'm thinking about a sequential description of how you understand everything runs from the point you click Build&Upload: what happens with the makefile, documentation about the makefile, and other important knowledge like customizing code completion, syntax coloring et al.

Maybe it's an idea to hack a quick Cocoa app or script that handles/starts the serial monitor automatically for the user?

I've also ordered the WiFly wifi shield from Sparkfun.com so my arduino can speak tcp/ip with the world. Do you guys have any experience with this, or have any other wifi shield to recommend?

Quote: Maybe it's an idea to hack a quick Cocoa app or script that handles/starts the serial monitor automatically for the user?

I have that app lying around here on my computer... but I'm not sure I am convinced this should be part of the Xcode project. This is just my personal opinion, but I think this should be as general as possible and the least bloated possible. More stuff == more places where things can break. People can always fork from the project to add extra special personal preferences to it. I'm not sure I am right on this one... is serial communication essential in this project? Should it be from Xcode? I feel not. What do you guys think?

screen /dev/tty.usbmodem641 -b19200

Must be connected to a terminal.

Quote: I've also ordered the WiFly wifi shield from Sparkfun.com so my arduino can speak tcp/ip with the world. Do you guys have any experience with this, or have any other wifi shield to recommend?
serial:
	@echo " ---- open serial ---- "
	osascript -e 'tell application "Terminal" to do script "screen /dev/tty.usbmodem* 9600"'

killserial:
	@echo " ---- close serial ---- "
	osascript -e 'tell application "Terminal" to do script "screen -X quit"'

#include "ArduinoProgram.h"
#include "ArduinoProgram.pde"
http://forum.arduino.cc/index.php?topic=49956.msg671199
CC-MAIN-2015-14
refinedweb
394
67.15
Hello, I am struggling to create an EJB Project in RAD 8.5 trial version. The problem is "no packages", so Stateless is not available. I'm not sure whether a Target Runtime is mandatory at the time of creating the project. Is there a limitation in the trial version? Or should I configure the build path? Please help.

Pinned topic: EJB3.x in RAD 8.5 Trial version | 2013-01-28T12:31:38Z | Updated on 2013-01-28T17:13:58Z by kewl

SystemAdmin (110000D4XK) | Re: EJB3.x in RAD 8.5 Trial version | 2013-01-28T14:45:53Z | Accepted answer

Hello, the Trial edition has the same code as the one you purchase; only the applied license is different. What version of EJB do you want to generate? What type/version of server do you want to deploy to? Could you describe the issue you are having in more detail (attach a screenshot of any errors)? You need to specify a Target Runtime (have you installed the WAS Test Environment supplied with the RAD Trial?).

I attach some screenshots of how you would create an EJB 3.1 project targeting WAS 8.5.

File > New Project > EJB project

Right-click on the project and choose: New > Session Bean (3.x)

Select a package name and a class name. I chose to generate a Local and a Remote interface, and this gave the resulting code below (with the interface classes contained in the EJB Client project).

package com;

import com.view.MyBeanLocal;
import com.view.MyBeanRemote;

import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateless;

/**
 * Session Bean implementation class MyBean
 */
@Stateless
@Local(MyBeanLocal.class)
@Remote(MyBeanRemote.class)
public class MyBean implements MyBeanRemote, MyBeanLocal {

    /**
     * Default constructor.
     */
    public MyBean() {
        // TODO Auto-generated constructor stub
    }
}

I attach the screenshots and sample projects (which you can import with File > Import > General > Existing Projects into Workspace).

Thank you, Lara

Attachments: attachment_14934113_dw.zip - 370 KB

SystemAdmin (110000D4XK) | Re: EJB3.x in RAD 8.5 Trial version | 2013-01-28T15:54:33Z

As Lara mentioned, you need to select a target runtime when you create your EJB project. You can also set the target runtime after the project was created, by going to the Targeted Runtimes property page of the project (right-click on the project > Properties > Targeted Runtimes). By looking at the screenshot, I think you did not select a target runtime when you created the project, correct? But one thing to have in mind: the target runtime must support EJB. In the new EJB project wizard, you will see listed only the target runtimes that support EJB. As Lara suggested, you can try the WAS Test Environment supplied with the RAD Trial, which supports EJB. Hope this helps.

Re: EJB3.x in RAD 8.5 Trial version | 2013-01-28T16:47:20Z

I understand the point that I need the WAS test environment. Can I get the steps? I have tried from the launcher and also from Installation Manager... but could not find a way.

Re: EJB3.x in RAD 8.5 Trial version | 2013-01-28T17:13:58Z

Now installing WAS 8.5 from the WAS_UTE_8.5_EVL_1 and WAS_UTE_8.5_EVL_2 zip files. Hopefully I can do EJBs.
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014934040
CC-MAIN-2016-50
refinedweb
607
70.9
Issues

python 2.5 on Macintosh

Hi Tim,

I'm using Python 2.5 on Mac OS 10.5.8 and get the following:

rod$ python do_harvest.py
../harvest.py:219: Warning: 'with' will become a reserved keyword in Python 2.6
Traceback (most recent call last):
  File "do_harvest.py", line 48, in <module>
    import harvest
  File "../harvest.py", line 219
    with open(error_file, 'w') as efile:
       ^
SyntaxError: invalid syntax

I'm not a Python expert, and updating Python in this environment isn't an option. Any ideas?

Rod

Hi Rod,

I've made some changes and as far as I can tell (testing on Linux) it should work with 2.5 now. However, you'll need to have the SimpleJSON module installed. If that's a problem let me know (it's actually not essential to the harvester, but used by other parts of the scraper).

The 'with' problem was easy to fix, just had to add:

from __future__ import with_statement

But then I discovered some problems with the Zip module... Anyway, give it a try now and let me know how it goes. I've included a zip of the new version in the downloads section.

cheers,
Tim
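For reference, the back-port line works like this. On Python 2.5 the import must be the very first statement in the module; on Python 2.6+ and Python 3 it is accepted as a harmless no-op. This is a generic sketch, not code from the harvester itself:

```python
from __future__ import with_statement  # must come before any other statement

import os
import tempfile

# 'with' closes the file even if an exception is raised inside the block
path = os.path.join(tempfile.mkdtemp(), "errors.txt")
with open(path, "w") as efile:
    efile.write("no errors\n")

with open(path) as efile:
    print(efile.read().strip())  # -> no errors
```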
https://bitbucket.org/wragge/trove-tools/issues/1/python-25-on-macintosh
CC-MAIN-2017-39
refinedweb
200
75.3
ArcPy includes a mapping module (arcpy.mp), an ArcGIS Spatial Analyst extension module (arcpy.sa), and an ArcGIS Network Analyst extension module (arcpy.na).

When a module's contents are imported with *, names that already exist in your namespace are overwritten — not to mention that with large modules, your namespace can become particularly crowded and busy. Think about it this way: if both the management and analysis modules are imported with *, any names they share are overwritten.

License: Both of the following samples require the ArcGIS Spatial Analyst extension to run.

# Import arcpy and the sa module as *
import arcpy
from arcpy.sa import *

Otherwise, the addition of sa in front of every function and class adds up quickly, disrupting readability and adding more bulk to the line.

# Import arcpy and the sa module
import arcpy
from arcpy import sa
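The overwriting risk described above is not specific to ArcPy; it can be demonstrated with any two modules that export the same name. A small stand-alone sketch — using standard-library modules rather than arcpy, so it runs anywhere (Python 3.8+ for shlex.join):

```python
from os.path import join          # join(a, b) -> joined filesystem path
print(join("data", "dem.tif"))    # e.g. data/dem.tif on Linux

from shlex import join            # silently replaces os.path.join's name!
print(join(["data", "dem.tif"]))  # now builds a shell command string instead

# Namespaced imports avoid the clash entirely:
import os.path
import shlex
assert os.path.join("data", "x") != shlex.join(["data", "x"])
```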
https://pro.arcgis.com/en/pro-app/arcpy/get-started/importing-arcpy.htm
CC-MAIN-2019-18
refinedweb
115
50.53
Basics

You have to use two-space indentation. A tag's contents can start on the same line as its definition:

%title Page title

Alternatively, you can put the contents on a new, indented line:

%title
  Page title

A closed tag can be created by appending a '/' right after the tag name:

%br/

Tag attributes can be added by appending a ':' after the tag definition. The rest of the line will be interpreted as a Python dictionary which contains the attribute definitions.

%a: 'href': 'http://...', 'target': '_blank'

In this case you can only specify the contents of the tag on a new line. If the value of an attribute is the Python object "None" the attribute is ignored.

%hr/: 'style': style if hasstyle else None

=> if hasstyle: <hr style="..." /> else: <hr />

ReML has a shortcut for 'id' and 'class' attributes:

%div#footer  =>  <div id="footer">
%div#myid.sidebar.container  =>  <div id="myid" class="sidebar container">

If an expression starts with '#' or '.' a div is assumed:

#footer  =>  <div id="footer">
.sidebar.container  =>  <div class="sidebar container">

Python code can be inserted by starting an expression with '- ':

- if text.startswith('hello '):
  - text = text[6:]

You can insert the result of an evaluation by starting an expression with '= ':

= text.strip()

Note that you have to add a single space after the '-' and '=' characters.

You can escape evaluation by putting a '\' at the beginning of the expression:

\%...  =>  %...

Similarly, you can escape newlines by putting a '\' at the end of the line:

...a\
...b...  =>  ...a...b...

HTML escaping

Evaluation expressions and attribute values automatically escape the following characters:

&  =>  &amp;
>  =>  &gt;
<  =>  &lt;
"  =>  &quot;
'  =>  &#39;

Surround a full expression with unescaped() to override this behavior:

= unescaped(...)
%a: 'href': unescaped(...)

Multi-file templates

ReML doesn't support template inheritance, but for now it has an alternative that should work for many sites: includes.
Let's see an example:

### somepage.reml
- append('master.reml')
- title = 'Some page'

- def extrascripts():
  %link/: 'href': 'extra.css', 'rel': 'stylesheet', 'type': 'text/css'

- def contents():
  %h1 Introduction
  ReML is the Reduction Markup Language.

### master.reml
%html
  %head
    %title= title
    - extrascripts()
  %body
    #header ReML -- do more with less
    .contents
      - contents()
    #footer Copyright 2008 Waldemar Kornewald

The following two functions are available:

insert() can be used to insert the given template name into the current position:

%table
  - insert('table_contents.reml')

append() is the same as putting insert() at the end of the file. It was added to improve readability.

Using ReML

Just copy reml.py into your project folder, so it can be imported by your other code. Here is a sample snippet:

from reml import TemplateLoader

data = {'username': 'hacker'}
print TemplateLoader('/path/to/templates').load('about.reml').render(data)

This will print the rendered template stored in the file "about.reml". The TemplateLoader optionally takes another TemplateLoader instance as a second argument, which allows for chaining multiple loaders.
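The escaping rules in the 'HTML escaping' section above match what Python's standard library provides. For instance, html.escape covers the same five characters — though the exact entity used for the apostrophe may differ (html.escape emits &#x27; rather than &#39;). A quick stand-alone check:

```python
import html

# html.escape with the default quote=True escapes & < > " and '
sample = '<a href="x">Tom & Jerry\'s</a>'
escaped = html.escape(sample)
print(escaped)
# &lt;a href=&quot;x&quot;&gt;Tom &amp; Jerry&#x27;s&lt;/a&gt;
```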
http://reml.wikidot.com/documentation
crawl-002
refinedweb
466
56.96
not actually to establish a blogging point where individuals can enrich their learns on facilitating and leveraging .NET-related activities most effectively

Holy cow, I wrote a book!

In Windows header files, many structures are declared like this:

typedef struct tagXYZ {
    ...
} XYZ;

typedef struct _XYZ {
    ...
} XYZ;

/* there are other variations, too */

Why is the structure name different from the typedef name?

This is a holdover from very early versions of the C language where structure tags, union tags, and typedefs were kept in the same namespace. Consequently, you couldn't say typedef struct XYZ { ... } XYZ;. At the open brace, the compiler registers XYZ as a structure tag name, and then when XYZ appears a second time, you get a redeclaration error.

The standard workaround for this was to make the structure tag name a minor modification of the typedef name, most typically by putting the word tag in front:

typedef struct tagXYZ {
    ...
} XYZ;

The C language standardization process separated the structure and type name spaces, so this workaround is no longer necessary, but it doesn't hurt either. Besides, even if new structures followed the typedef struct XYZ { ... } XYZ; pattern, you would just have people asking, "Why do some structures in winuser.h use the tagXYZ pattern and others use the XYZ pattern? Why can't it just be consistent?"

Next time, why you also don't see the pattern typedef struct { ... } XYZ very much either.

Raymond, I always thought this had more to do with name mangling in C++ than C namespaces. Thanks for teaching me something new!

This may sound like a dumb question, and the answer probably exists on the web, though I don't know the right way to phrase it as such. Which came first, Windows or MSVC (or whatever the first Microsoft compiler was that contained all the Windows libraries and headers)?
I'm wondering if MS had a C compiler on hand to develop Windows, or if an alternate compiler was used to develop and compile Windows, and MS later developed their own compiler. I recall that Win95 and MSVC 6 were released in roughly the same time frame (as in, I had a "wow, neat, a windows 95 box!" and MSVC 6 in the same first job...)

@Nathan_works: In the past, Microsoft didn't have a compiler that was hosted on Windows. QuickC for Windows and Visual C++ 1.0 were the first. Before that, you could use Microsoft C 6.0 to target DOS, Windows or OS/2, if I recall correctly.

If I remember correctly (it was many years ago) the first C compiler from Microsoft was produced by Lattice...

> Next time, why you also don't see the pattern typedef struct { ... } XYZ very much either.

I'm guessing this has to do with compiler error messages.

> Next time, why you also don't see the pattern typedef struct { ... } XYZ very much either.

That seems trivial: you can't then have pointers to the same type of struct within the struct.

> Next time, why you also don't see the pattern typedef struct { ... } XYZ very much either.

My guess is because you can't forward-declare a struct which has been defined that way. IIRC, it's technically a typedef to an unnamed struct.

> Next time, why you also don't see the pattern typedef struct { ... } XYZ very much either.

Mostly, because I don't believe anonymous structs have ever been legal C syntax (like a widely implemented compiler extension). The more basic question is "why do we need the pattern typedef struct XYZ { ... } XYZ; at all?" (Answer: so we can refer to "struct XYZ" as just "XYZ" in a C program --- the designers of C++ wisely put that ability right into the language without needing the typedef.)

> Mostly, because I don't believe anonymous structs have ever been legal C syntax (like a widely implemented compiler extension)

Anonymous struct types (structs without a tag naming them: `struct {int a; double b;} foo = {42, 3.14};`) have been legal since at least 1989 when ANSI standardized the language; I don't have a K&R1 handy to check, but I suspect it goes even farther back than that. You may be thinking of anonymous struct members of another struct: `struct foo { struct {int a; double b;}; char *c; };`. The common extension is to allow access to the members of the anonymous inner struct as `x.a` (instead of x.inner.a), but the construct isn't legal C.

@Nathan_works: The compiler released around the Win95 timeframe was Microsoft Visual C++ 4.0. It was followed by versions 4.1, 4.2, 5.0 and finally 6.0 in 1998. There was no MSVC++ 3.0 AFAIR.

One of the annoyances with this use of typedefs is that the VC++ debugger doesn't know whether to use a type's real name ("tagXYZ") or the typedef'd name ("XYZ"). Since a type can be typedef'd to multiple names, it makes sense for it to pick the canonical name, but as a user, that's not what you usually want to see.

So, why bother typedef'ing at all? Why not just name the struct XYZ?

struct DRAWITEMSTRUCT *pdis = (struct DRAWITEMSTRUCT *)lParam;

>> The compiler released around the Win95 timeframe was Microsoft Visual C++ 4.0.

IIRC, Win95 was published at the same time as Visual C++ 2.0 - the first true 32-bit version after the hybrid 1.5. 4.0 came shortly after. According to, the MSVC 2.0 folders are dated 9/20/94 2:55am - that's almost a year before the Win95 release.
Thanks for the correction on which version of MSVC was released at/around Win95 time -- I certainly don't have the install media around to check ;) It was 32-bit, as the guys before me had just ported it over. I was too green to deal with thunks and other 16-bit issues...

Struct and union field names used to live in the same global name-space too, and this is where the tradition of prepending a unique prefix to struct fields comes from. As a byproduct, fields weren't bound to any particular struct type:

register *rp;
rp = p;
if(rp->i_count == 1) {

(v6root/usr/sys/ken/iget.c:iput())

This is the first time on Raymond's blog I've read a technical post and have known the answer before I read it. I feel so smart. Day after tomorrow, though, I will stop knowing the answers and will feel dumb again.

>so this workaround is no longer necessary, but
>it doesn't hurt either.

It's an annoyance, actually, because in C++ you can't derive a class from the typedef you're used to seeing. In other words, you have to derive a C++ class from 'tagPOINT' instead of from "POINT". I absolutely hate the 'tag' stuff in the Windows headers, because programmers have copied it into their code in many companies I've been at, without knowing why. Just like the "LP" typedefs for pointers, or capital VOID. The reason there are typedefs at all was that in "C" we had to write "struct POINT pt;" when defining a variable. The typedef saves us from having to type the word 'struct'. In C++, this isn't necessary, and would be totally transparent for us if it wasn't for that 'tag' prefix, which forces us to be aware that there is a type and a typedef.

Here's a gem of an article about running Windows 1.0 under Virtual PC by Charles Petzold, which might make some oldies smile.

I was looking around for a circa Win2.0 Windows.h file to look at structs there... stupid internet, it's like the world didn't exist before 2000.
Some compilers allow me to do this:

typedef struct _a { int (*some_func_ptr)(struct _b *); } AType;
typedef struct _b { ... } BType;

other compilers require this declaration up front before everything else:

struct _b;

I have no idea which is "more" legal.

The Win1.0 SDK came with Microsoft C 3.0, which predates Microsoft Visual C by many years.

"Next time, why you also don't see the pattern typedef struct { ... } XYZ very much either." Short answer: Anonymous structures don't benefit from good error/warning/debug information. Side note: I remember that Borland C supports something like typedef struct XYZ { ... }; as a special syntax for typedef struct XYZ { ... } XYZ;

C had one namespace for struct and union tags, but typedefs were in the same namespace as variables. C did not automatically define a typedef name when you defined a struct. It was C++ that introduced the implicit definition of a typedef name. So in C people would write typedef struct foo { ... } foo; and then they could refer to the type without mentioning struct. But when they tried compiling this in C++, the mention of "struct foo" would define the typedef name 'foo', and when the compiler got around to the 'foo' after the '}', it would complain that 'foo' had already been defined. So to use anything like this in C++ you had to change the structure tag. I speak from memory - I was a C user in the 70's, and a C++ implementer in the early nineties. C++ may have changed since then, of course. I haven't been tracking it since then.

I really should have added this here : instead of here :

I think the Windows 1.0 SDK came around the time of MS C 4.0, and the Windows 2.0 SDK came around the time of MS C 5.0. Visual C++ 2.0 was released around the time of NT 3.5 (I wonder how many people believed Win95 was the first version of Windows to run Win32 apps natively). Visual C++ 4.0 was released around the release of Win95.
Visual C++ 4.2 was released around the release of NT 4.0 and contained the beta of several SDKs, including the ActiveX SDK. 4.2c was released to patch them up to released versions. Visual C++ 6.0 was released around the release of Win98, but still contained the NT 4.0 headers and libraries.

IgorD: A scope for a structure/class name implicitly declared in a function declaration argument list is limited to the function declaration. It's not propagated up to the enclosing scope.

void a(class b *c); // not in the global scope

class b;
void a(class b *c); // same as in the global scope

On the other hand, a scope for a structure/class name implicitly declared in a member declaration is propagated to the enclosing scope.

class d { class a *b; };

Here 'class a' belongs to the global scope, as if it were declared before the class d declaration. But if you declare class a; inside class d instead, then it's the nested class d::a.
http://blogs.msdn.com/oldnewthing/archive/2008/03/26/8336829.aspx
When I click on the About button, nothing happens and I get the following traceback on the prompt:

Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/hgview/gtk/hgview_gtk.py", line 198, in on_about_activate
    from __pkginfo__ import modname, version, short_desc, long_desc
ImportError: No module named __pkginfo__

in hgview 0.9.1-1

It already works.

Ticket #6715 - latest update on 2009/09/01, created on 2008/11/25 by Arthur Lutz
https://www.logilab.org/ticket/6715
Getting the Twitter Bootstrap Validation Styles Working with MVC

There's no doubt about it; ASP.NET MVC has made developing a web application child's play in this day and age. Add a front-end framework such as Twitter Bootstrap into the mix, and you suddenly have the ability to make your application user interface look really great too. Unfortunately, there's still one pain point in an otherwise wonderful combination, and that's MVC form validation.

Anyone who's done any work with ASP.NET MVC will know just how great the built-in model validation and binding is. You create a model, you attach some attributes to describe its rules, and then you create a controller and a couple of actions to handle it, followed by a view or two. Using very minimal code, and a sprinkling of helpers in your Razor view, you have validation messages and input boxes that change colour and display appropriate error messages when something goes wrong, all without lifting a finger.

Unfortunately, the class names used to set the colours are not changeable, and are not a match for the visual styles used by Bootstrap, which is a great shame because the Bootstrap styles look really good, and really crisp. A quick trip to Stack Overflow will show you question after question where MVC users are asking for advice on how to make this all work in an easy and reliable manner, and there are all sorts of generally crazy solutions, most of which involve adding the Bootstrap 'Less' source code files to your project and then modifying them to also be applied to the rule names used by ASP.NET MVC.

Although this is a good long-term solution, many times all you need is a quick bit of code for a one-off use case to solve the problem, and writing a helper, or recompiling your own custom Bootstrap build, really is more of a hassle than it needs to be. However, there's one route that many people either fail to explore or dismiss as being impractical, and that's JavaScript.
Unleash the Script

I can hear the cries already.... "But you can't use JavaScript! What about graceful degradation?" Well, yes, you have a point, but in all honesty, many sites these days manage quite happily to force people to use JS and don't have a problem with it. As for the argument for scripters and screen scrapers? Well, you're presenting a form for a human to fill in, right? If someone's trying to scrape that... they're quite clearly doing it wrong. However, this post is not an argument for the merits of JS being enabled or not; this is just one approach you could decide to use if you want.

So how do we implement it? Quite easily, as it happens. If you're using Bootstrap, you already have jQuery running because BS has a dependency on it. All you need to do is look for the validation error classes to tell you which fields in the form currently have validation errors. Once you know which fields have errors, you just need to add/remove the appropriate BS classes to style things correctly.

Implementing the Code

The first thing to do is to create an empty MVC application. You can create one with other views and suchlike already in it if you want, but I'll be assuming the project is already empty. Note also that I'll be writing this article from the point of view of using MVC5 and the .NET 4.5 runtime.

The first thing we need is a simple controller. The following code will create a home controller with two simple actions: one to serve up the initial test form, and one to accept the post and validate it.

// HomeController.cs
using System.Web.Mvc;
using BootstrapModelBinding.Models;

namespace BootstrapModelBinding.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }

        [HttpPost]
        public ActionResult Index(TestObject model)
        {
            if (!ModelState.IsValid)
            {
                ModelState.AddModelError("", "There are form errors; please correct them to continue!");
                return View();
            }
            return View("NoErrors");
        }
    }
}

Next, we'll create a couple of razor views. The first one will use the various HTML helpers to draw our form, and the second will simply just be a page with a message on it that we can redirect to if the form has no errors.

Index.cshtml

@model BootstrapModelBinding.Models.TestObject
@{ ViewBag.Title = "Index"; }
<div class="page-header"> <h2>Bootstrap Form Highlight Test</h2> </div> <p>Please fill in and submit the form below. Do try not filling bits in and causing errors to see the effect.</p> @Html.ValidationSummary(true) @using (Html.BeginForm("Index", "Home", FormMethod.Post, new { @Html.LabelFor(m => m.FirstName, new { @ @Html.TextBoxFor(m => m.FirstName, new { @@Html.ValidationMessageFor(m => m.FirstName)</p> </div> </div> <div class="form-group"> @Html.LabelFor(m => m.Surname, new { @ @Html.TextBoxFor(m => m.Surname, new { @<@Html.ValidationMessageFor(m => m.Surname)</p> </div> </div> <div class="form-group"> @Html.LabelFor(m => m.EmailAddress, new { @ @Html.TextBoxFor(m => m.EmailAddress, new { @@Html.ValidationMessageFor(m => m.EmailAddress)</p> </div> </div> <div class="form-group"> <@Html.LabelFor(m => m.UkPhoneNumber, new { @ @Html.TextBoxFor(m => m.UkPhoneNumber, new { @@Html.ValidationMessageFor(m => m.UkPhoneNumber)</p> </div> </div> <div class="form-group"> <div class=""> <button class="btn btn-primary">Submit <span class="glyphicon glyphicon-log-in"></span></button> </div> </div> } </div> NoErrors.cshtml @{ ViewBag.Title = "No Errors"; } <h2>No Errors</h2> <p>Sweet.. you got here because your form had no errors.</p> @Html.ActionLink("Click Here to try again", "Index", "Home") Finally, we need a data model to test our form, so create the following class called 'TestObject.cs' in your models folder. 
using System.ComponentModel.DataAnnotations; namespace BootstrapModelBinding.Models { public class TestObject { [Required(ErrorMessage = "You must provide your first name")] [Display(Name = "First Name")] public string FirstName { get; set; } [Required(ErrorMessage = "You must provide your surname")] [Display(Name = "Surname")] public string Surname { get; set; } [Required(ErrorMessage = "You must provide your email address")] [Display(Name = "E-Mail address")] [DataType(DataType.EmailAddress)] [EmailAddress] public string EmailAddress { get; set; } [Required(ErrorMessage = "You must provide your phone number")] [Display(Name = "UK Phone number")] [DataType(DataType.PhoneNumber)] [RegularExpression(@"((^\+{0,1}44\s{0,1})|^0|(\(0\)))\d{4}\s{0,1}\d{6}$")] public string UkPhoneNumber { get; set; } } } You'll also need to set up a razor layout page. Depending on how you do things, this may be already created for you automatically, in an empty MVC application. However, you'll likely need to add the following two files; _ViewStart.cshtml (In Views) @{ Layout = "~/Views/Shared/_Layout.cshtml"; } _Layout.cshtml (In Views\Shared) <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>@ViewBag.Title | MVC Application</title> <link href="~/Content/Site.css" rel="stylesheet" type="text/css" /> <link href="~/Content/bootstrap.min.css" rel="stylesheet" type="text/css" /> <script src="~/Scripts/modernizr-2.6.2.js"></script> <("MVC Application", "Index", "Home", new { <ul class="nav navbar-nav"></ul> </div> </div> </div> <div class="container"> <div class="row"> @RenderBody() </div> <footer class="row"> <p>© @DateTime.Now.Year - My ASP.NET Application</p> </footer> </div> <script src="~/Scripts/jquery-1.10.2.min.js"></script> <script src="~/Scripts/bootstrap.min.js"></script> </body> </html> You can probably cut some code out of the layout file if you need to, but that's a good template to keep around 
for Bootstrap projects because it implements the minimum you need to get an app screen with a menu bar up and running.

With all the files in place, you now need to add Bootstrap using NuGet.

Figure 1: Adding Bootstrap

Make sure you get the more up-to-date (3.2.0 as of writing this) version, and not the older one below it. Also, make sure it's the official one by Mark Otto and Co. There are a number of 'Bootstrap' packages on NuGet; a lot of them install extra stuff that's really not needed.

With Bootstrap installed, compile and run your app, and you should hopefully see something like the following:

Figure 2: The Bootstrap Form Highlight Test

At this point, if all is working okay and you try to submit the form, you should see the following:

Figure 3: The Bootstrap Form Highlight Test, with form errors

As you can see, the actual Bootstrap-styled components don't get styled to show an error condition, even though we're using the HTML helpers to add them as expected. This is because BS re-writes many of the browser base styles, so that when MVC tries to apply its own styles, Bootstrap actually overrides them. If you inspect the elements, however, you'll see that the 'red text' you see is styled with a consistent class name that you can look for.

Figure 4: Looking for the consistent name

As you can see, there's a span element with a class of 'field-validation-error' inside our BS help block's p tag, which is in turn nested inside a form-group. If you look at the BS3 docs, all you need to do to change the input state is to apply the appropriate state class to each form-group that has an error. You can achieve this very easily by attaching a jQuery DOM ready handler to your page that uses the jQuery selector to find each of these fields. By attaching it to the DOM ready handler, you'll be certain that it will run as soon as the page finishes rendering; thus the form will change instantly as soon as it's re-displayed.
The other thing you also need to look for is the validation summary (in this case, at the top of the form) and apply the appropriate BS3 alert styles to it.

One last thing you'll also notice: the red text below each field showing the error message will override the BS3 styles, so when searching for and adding the overall error class, you'll also need to remove the class that applies this style; otherwise, that too will not take the BS3 form style colouring.

After explaining all that, the implementation is very easy. Add a new JavaScript file to your project and add the following code to it.

$(function () {
    $('.validation-summary-errors').each(function () {
        $(this).addClass('alert');
        $(this).addClass('alert-danger');
    });
    $('form').each(function () {
        $(this).find('div.form-group').each(function () {
            if ($(this).find('span.field-validation-error').length > 0) {
                $(this).addClass('has-error');
                $(this).find('span.field-validation-error')
                    .removeClass('field-validation-error');
            }
        });
    });
});

Link this file into a script tag in your layout file:

<script src="~/Scripts/CustomBsValidation.js"></script>

If you now build and run your project, and then try submitting an empty form again just as you did before, you should now see things are different.

Figure 5: The Bootstrap Form Highlight Test, running correctly

And that's all there is to it. A simple bit of JavaScript to find and alter things live, which for simple projects works well. You could easily extend this where needed to do things like enable the new validation state icons, or you could even use the HTML helpers in razor to add extra attributes to the tags, which you could then look for and act on.

If there's anything specific you'd like to see me write about in this column, please ping me on Twitter as @shawty_ds, or come and find me in the Lidnug (Linked .NET user group) on Linked-in that I help run; let me know your thoughts. You can also use the comment form below this article to reach me.
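If you do go down the extension route, it can help to pull the class decision out of the DOM-ready handler into a small pure function (a hypothetical helper, not from the article), so the logic can be tested outside the browser:

```javascript
// Hypothetical helper: given whether a form-group currently contains a
// field-validation-error span, return the Bootstrap 3 state classes to add.
// 'has-error' colours the label/input; 'has-feedback' reserves space for
// the optional glyphicon validation state icon.
function bootstrapStateClasses(hasError, withFeedbackIcon) {
  if (!hasError) {
    return [];
  }
  return withFeedbackIcon ? ['has-error', 'has-feedback'] : ['has-error'];
}

console.log(bootstrapStateClasses(false, false)); // []
console.log(bootstrapStateClasses(true, false));  // ['has-error']
console.log(bootstrapStateClasses(true, true));   // ['has-error', 'has-feedback']
```

The DOM-ready handler then reduces to finding each form-group and applying whatever classes the helper returns.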
document.ready() not triggered
Posted by Pawel on 04/11/2016 10:44am
All nice and cool, but this jQuery is executed only once in my app. I mean, submitting a button doesn't invoke a method, because client-side unobtrusive JavaScript validation blocks it. How could I make it work?

Thanks
Posted by Alyce on 11/18/2015 08:51pm
Awesome, thanks very much.
http://www.codeguru.com/columns/dotnet/getting-the-twitter-bootstrap-validation-styles-working-with-mvc.html
Hello, I am asking for a little explanation about "mixing" assembly and C programs together on Ubuntu with the command: gcc assembly.s main.c

The question is: what is 4(%esp) here? Where does it point to?

##in file main.c

#include <stdio.h>

int main(){
    int i = assembly(7);
    return 0;
}

##in file assembly.s

.global gaus
gaus:
    movl 4(%esp), %eax
    movl %eax , %ebx
    inc %ebx
    imull %ebx , %eax
    movl $2,%ebx
    ret

And please, if anyone can explain this method of calling an assembly function from a C program, and whether it is wrong or not, as it's homework I have done and I want to know how to discuss it.
http://www.codingforums.com/computer-programming/322361-c-programm-call-assembly-function.html
A DigitalOut represents any on/off output.

#include <rtt/extras/dev/DigitalOutput.hpp>

A DigitalOut represents any on/off output. Examples are brakes, valves, simple grippers etc. This class can be used in combination with a DigitalOutInterface or as a 'virtual' switch in which case the on/off state is stored in this object.

Definition at line 55 of file DigitalOutput.hpp.

Create a new Relay acting on a digital output device.
Definition at line 65 of file DigitalOutput.hpp.

Create a virtual (software) relay.
Definition at line 73 of file DigitalOutput.hpp.

Check if the output is on (high).
Definition at line 123 of file DigitalOutput.hpp.

Set the bit to the on or off state.
Definition at line 88 of file DigitalOutput.hpp.
http://www.orocos.org/stable/documentation/rtt/v2.x/api/html/classRTT_1_1dev_1_1DigitalOutput.html
For that it's clean and simple. I actually wrote this using the Atom editor on my Mac. FYI: Atom is an Electron app. There are links to resources in the github repo README.md for getting up to speed on these technologies if you have not seen them before or are just getting ramped up.

It's taken me 3 long weeks, giving up weekends and nights, to finally figure out how to use Electron, ES6, Aurelia, and MongoDB in a Thick-Client application. This blog post starts with my painful journey and then dives into how easy the solution really is. Trust me, it's so easy now that I know how to put the pieces together.

Where do I start? Why did I need to write an Electron app? I've used my Crank Meta-data Code Generation App for years to radically speed up development time building .NET Applications. It generates all stored procs, class entities, XAML forms, XAML Infragistics Data Grids, SQL Server TVPs, code snippets, etc. Now that I'm doing a good bit of development on Mac (yes, Karl is still writing C# on an almost daily basis), I wanted to leverage the same tools I've had for years on Windows. The requirements for my new Crank App are:

I chose Electron because, in my opinion, it is the best tool available for writing and maintaining cross-platform apps. It allows me to use the HTML, CSS, and JavaScript skills I have.

I chose ES6-ES2015 because it is the latest JavaScript standard and enables me to write clean code using the latest features of the language, like Promises.

I chose Aurelia because I believe that frameworks should not get in the way of the developer and should not require constant ceremony code to leverage their features and capabilities. I've written production apps using Ionic, AngularJS, and Angular Material. These apps were all successful, customers happy. BUT, the development cost and, in some cases, the learning curve was high when you didn't follow the framework's prescribed happy path.
When I compare how much less code and ceremony I have to endure when writing an Aurelia app compared to an AngularJS 2 app, I'm so glad Rob Eisenberg and his team wrote Aurelia. Is Aurelia perfect? Not yet, but for a Beta 1 product, I'm having zero issues.

I chose MongoDB because it runs locally on Windows, Mac, or Linux, and also runs on on-premise or cloud servers. I need to support a wide range of install scenarios, ranging from a single user on a Mac to large development teams with developers on various platforms. MongoDB provides this capability.

Initially, I had selected MongoDB but couldn't get it to work inside an Aurelia app. I kept getting a missing Kerberos.js file when System.js was trying to load the MongoDB ES6 module. I spent a lot of time trying to get this to work, to no avail.

I then looked at Azure DocumentDB. I really like DocumentDB. Simple SQL-like syntax for querying the database. It didn't have an ES6 module for accessing the database, so I wrote my own in ES6 using Promises. I was ready to go with this but then finally figured out how much it really costs: $25 per month, per collection. This was an immediate non-starter for my scenario. Great service with very cool capabilities, but too costly for this small app.

I then looked at Firebase. Like DocumentDB, Firebase simply rocks. A 1GB or less production database is only $5 per month. No other cloud service even comes close to this number. My initial pain point was figuring out how to write ES6 code and leverage Promises. I wrote myself an ES6 wrapper and got this working, but the cost thing for the user was nagging me.

Giving up my Black Friday day off and motorcycle riding, I went back to give MongoDB one more try. It was by accident that I finally figured out that System.js module loading was the problem, and not Electron or Aurelia.

Yep, several developers asked me the same question. In the current development world, there are many requirements, scenarios and solutions for them.
There are no silver bullets, or one size fits all. The cloud is not the answer for every application. I like Thick-Client, two-tier applications and write them when appropriate. Also, I didn't want to write a WebAPI or MEAN Stack Express API for this app. I could have, and almost did. But that's a lot more code I would have to write and maintain without any payback whatsoever.

The application is hosted on my github Oceanware account. I've provided an easy-to-follow readme to get you up and running on Mac or Windows. I need to spin up a Ubuntu VM for testing on Linux; I have not had the time yet. If you're on Linux, this app will run and build on that platform as well (thanks to Electron and Chromium).

The application is the simplest Electron, Aurelia app I could write. When you run the app, you'll see a brief spinner and message and then the Home view. "We are connected" is your success message. You have connected to your MongoDB server from within an Electron, ES6, Aurelia app.

This class was the reason I was struggling for 3 weeks. Before I found the solution, I was using the import statement to bring in the mongodb module. You see an example of using imports in the next section below.

This simple starter code is straight from the MongoDB documentation, which is written in ES5. Below, I've crafted an ES6 class that first brings in mongodb using the familiar Node.js require(…) function instead of using import. The testConnection method leverages ES6 Promises, wrapping the MongoClient.connect method. IMO: Promises make the method consumer code much easier to read and write, i.e., no callback hell.
export default class MongoService { constructor() { this.MongoClient = require('mongodb').MongoClient; } testConnection() { return new Promise((resolve, reject) => { this.MongoClient.connect("mongodb://localhost:27017/test", function(err, db) { if(!err) { resolve("We are connected"); } else { reject(err); } }); }); } } The below Home class is an example of an Aurelia view model. Except for the inject feature, there is no Aurelia framework goo or ceremony, just ES6. Modules are imported into a class by using the import statement. The System.js module loader does all the heavy lifting. As I’ve stated above, I tried using import and System.js to bring in mongodb, but it always failed. import The constructor takes a single argument that is injected in. I’m injecting in the above MongoService. The activate method is one that you would expect any navigation or routing framework to have, but not all do. This method is invoked by the Aurelia router as part of the routing lifecycle. activate You can see my super simple code for calling testConnection. Once you’ve used Promises, you love and use them. Clean code. import {inject} from 'aurelia-framework'; import MongoService from './mongoService' @inject(MongoService) export class Home { constructor(db) { this.db = db; this.title = "Home"; this.connectionResults = ""; } activate() { this.db.testConnection() .then((promise) => this.connectionResults = promise) .catch((err) => this.connectionResults = err); } } An Electron App is actually simple once you understand the folder structure. Everything in the root folder belongs to the Electron portion of your application. Everything in the /src folder belongs to the app that Electron is hosting, in our example, its an Aurelia app. You’ll quickly notice that there is a package.json file in the root and /src folders. The one in the /src folder is for application dependencies that can’t be loaded using jspm from the root folder. 
MongoDB is the only node module I’ve had to load in this manner. That’s why I’ve got two package.json files. I know that I have not done any application how to teach, or framework explanations. The real purpose of this blog post was to demonstrate that you can write an Electron, ES6, Aurelia, and MongoDB Thick-Client application. General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/script/Articles/View.aspx?aid=1060129
CC-MAIN-2016-40
refinedweb
1,402
67.04
In this example, we will see a complete Vue Pagination Tutorial with a Laravel backend, from scratch. We will seed some fake data using the faker library and then use Axios, a promise-based client-side HTTP library, to send requests to the Laravel server and fetch and display the data in paginated form. We will build the pagination in Vue from scratch.

Vue Pagination Tutorial

We are using Laravel 5.6 for this example. So first, we need to install Laravel. I am using Laravel Valet for this tutorial. If you are not using Valet, then you just need to create a Laravel project using the following command.

Step 1: Install Laravel.

If you do not want to use Laravel Valet, then you can install Laravel using the following command.

composer create-project laravel/laravel paginate --prefer-dist

Go to your project folder and start the development server with the following command.

php artisan serve

Laravel Valet

If you are using Laravel Valet, then you need to go to your project folder and hit the following command.

laravel new paginate

You can access the project using the following URL: If you are new to Laravel Valet, then check out the official documentation. It is effortless to get up and running.

Step 2: Install NPM dependencies and set up the db.

Go to your project folder.

cd paginate

Install the NPM dependencies using the following command.

npm install

Now, Vue.js is pre-configured in Laravel. So, you need to compile the CSS and JS files using the following command.

npm run dev

Now, configure the database inside the .env file.

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=paginate
DB_USERNAME=root
DB_PASSWORD=root

Laravel comes with two migration files by default. So let us use those and create the tables in the database using the following command.

php artisan migrate

It will create the users and password_resets tables.

Step 3: Generate fake users data.

Now, to paginate the data, we need some amount of data.
For this example, I am using the faker library to generate the fake users' data. We use the model factory to create and persist the data in the database. You can see the factory file inside the database >> factories >> UserFactory.php file.

<?php

// UserFactory.php

use Faker\Generator as Faker;

/*
|--------------------------------------------------------------------------
| Model Factories
|--------------------------------------------------------------------------
|
| This directory should contain each of the model factory definitions for
| your application. Factories provide a convenient way to generate new
| model instances for testing / seeding your application's database.
|
*/

$factory->define(App\User::class, function (Faker $faker) {
    return [
        'name' => $faker->name,
        'email' => $faker->unique()->safeEmail,
        'password' => '$2y$10$TKh8H1.PfQx37YgCzwiKb.KjNyWgaHb9cbcoQgdIVFlYg7B77UdFm', // secret
        'remember_token' => str_random(10),
    ];
});

We will generate 100 rows of data. So go to your terminal and enter the following command.

php artisan tinker

You can use it to interact with our Laravel application. Now, generate the data.

factory(App\User::class, 100)->create();

It generates 100 rows of random data.

Step 4: Create routes and controllers.

Now, we need to first create an authentication scaffold. So type the following command.

php artisan make:auth

If you do not use the auth scaffold, then you can create your own view; it does not matter here. This scaffold also gives us some fundamental bootstrap views to work with. In our tutorial, we are going to make two UserControllers.

- UserController (It handles the Laravel view)
- UserController (Inside the API folder, responsible for the API response)

Now, create the first UserController using the following command.

php artisan make:controller UserController

In this controller file, write the following code.
// UserController.php

<?php

namespace App\Http\Controllers;

use App\User;
use Illuminate\Http\Request;
use App\Http\Resources\UserResource;

class UserController extends Controller
{
    public function index()
    {
        return view('users.index');
    }
}

We have not created the index view yet, so inside the resources >> views folder, create a directory called users and inside that, create one file called index.blade.php.

@extends('layouts.app')

@section('content')
<div class="container">
    Users View
</div>
@endsection

Now, we need to define the route for this inside the routes >> web.php file.

<?php

// web.php

Route::get('/', function () {
    return view('welcome');
});

Auth::routes();

Route::get('/home', 'HomeController@index')->name('home');
Route::get('/users', 'UserController@index')->name('users.index');

Create the second UserController.php file inside the app >> Http >> Controllers >> API folder.

php artisan make:controller API\\UserController

Write the following code inside this UserController.php file.

<?php

// API/UserController.php

namespace App\Http\Controllers\API;

use App\User;
use Illuminate\Http\Request;
use App\Http\Controllers\Controller;

class UserController extends Controller
{
    public function index()
    {
        return User::paginate(10);
    }
}

Now, define the route in the routes >> api.php file.

<?php

// api.php

use Illuminate\Http\Request;

Route::get('/users', 'API\UserController@index');

Now, I can test in this route: Your URL may be different, like this: In the above image, I have minimized the items so that you can see the whole structure of our object. So, we have created an API; now Vue needs to consume it. Laravel's paginate function gives us the metadata, which we can use to design our Vue Paginate Component.

Step 5: Display the users on the frontend.

Inside the resources >> assets >> js >> components folder, by default there is one Vue component called ExampleComponent.vue. You need to remove this and, inside the components folder, create one more folder called users.
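For reference, the JSON that paginate(10) returns has roughly this shape (abbreviated; the host name and row values are illustrative, the field names are what Laravel emits):

```json
{
  "current_page": 1,
  "data": [
    { "id": 1, "name": "Example User", "email": "example@example.org" }
  ],
  "first_page_url": "http://paginate.test/api/users?page=1",
  "from": 1,
  "last_page": 10,
  "last_page_url": "http://paginate.test/api/users?page=10",
  "next_page_url": "http://paginate.test/api/users?page=2",
  "path": "http://paginate.test/api/users",
  "per_page": 10,
  "prev_page_url": null,
  "to": 10,
  "total": 100
}
```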
Inside the users folder, create one file called Index.vue.

// Index.vue

<template>
    <div class="row justify-content-center">
        <div class="col-md-10">
            <div class="card">
                <div class="card-header">Users</div>
                <div class="card-body">
                    User Data
                </div>
            </div>
        </div>
    </div>
</template>

<script>
export default {

}
</script>

Okay, now we need to update the app.js file inside the resources >> assets >> js folder.

// app.js

require('./bootstrap');

window.Vue = require('vue');

Vue.component('users-list', require('./components/users/Index.vue'));

const app = new Vue({
    el: '#app'
});

So, we have created our users-list component. Now, we need to display this component at the /users URL. So, what we need to do is that inside the index.blade.php file under the resources >> views >> users folder, we need to add this users-list component.

@extends('layouts.app')

@section('content')
<div class="container">
    <users-list></users-list>
</div>
@endsection

Okay, now the only remaining thing is to fetch and display the data. So write the following code inside the Index.vue file.

// Index.vue

<template>
    <div class="row justify-content-center">
        <div class="col-md-10">
            <div class="card">
                <div class="card-header">Users</div>
                <div class="card-body">
                    <user v-for="user in users" :key="user.id" :user="user"></user>
                </div>
            </div>
        </div>
    </div>
</template>

<script>
import User from './partials/User';

export default {
    components: { User },
    data() {
        return {
            users: []
        }
    },
    mounted() {
        this.fetchUsers();
    },
    methods: {
        fetchUsers() {
            axios.get('/api/users/').then((res) => {
                this.users = res.data.data;
            });
        }
    }
}
</script>

Create a folder inside the users folder called partials and inside that folder, create one Vue component called User.vue.
// User.vue

<template>
    <div class="media">
        <div class="media-body">
            <h4 class="media-heading">
                {{ user.name }}
            </h4>
        </div>
    </div>
</template>

<script>
export default {
    props: ['user']
}
</script>

When the component is mounted, we use the client-side HTTP library to send a network request to the Laravel server, get the data, and assign it to the users array. We are getting paginated data, so at this point we are only getting the first ten rows of data. So, how can we fetch the rest of the data in a paginated way? Laravel provides a URL structure in which you can pass the page query string, and you can get the other data. The structure is like the following: /api/users?page=2. Here, you can change the page value, and we can fetch data according to that. So we need to implement the same logic in Vue.

Step 6: Create a Pagination Vue component.

Inside the resources >> assets >> js >> components >> users folder, create a new folder called pagination. Inside that folder, create one file called Pagination.vue. Now, I am writing the whole logic behind the Vue pagination inside that component.

// Pagination.vue

<template>
    <nav>
        <ul class="pagination pagination-lg">
            <li class="page-item" :class="{ 'disabled': !meta_data.prev_page_url }">
                <a href="#" class="page-link" @click.prevent="next(meta_data.current_page - 1)">
                    «
                </a>
            </li>
            <li class="page-item" v-for="page in meta_data.last_page" :key="page"
                :class="{ 'active': page === meta_data.current_page }">
                <a href="#" @click.prevent="next(page)" class="page-link">
                    {{ page }}
                </a>
            </li>
            <li class="page-item" :class="{ 'disabled': meta_data.current_page === meta_data.last_page }">
                <a href="#" class="page-link" @click.prevent="next(meta_data.current_page + 1)">
                    »
                </a>
            </li>
        </ul>
    </nav>
</template>

<script>
export default {
    props: ['meta_data'],
    methods: {
        next(page) {
            this.$emit('next', page);
        }
    }
}
</script>

There is a lot going on here, so let us understand it first. In this component, I have used one property called meta_data. This property comes from the parent component, Index.vue, which sends a network request to the server and fetches the data along with metadata like current_page, last_page, and other properties.
The logic here is: if we are at the first page, we need to disable the left arrow page button, and if we are at the last page, then we need to disable the right arrow page button. I have written that logic by comparing against the meta_data provided by Laravel's paginate function.

Also, this component emits an event called next with the page number as a parameter, so when we click a page number like 1, 2, or 3, we get the corresponding data from the API. The parent component listens for this event, takes the page number provided by the event parameter, calls the API, and fetches the data accordingly.

Our final Index.vue component looks like this.

// Index.vue

<template>
    <div class="row justify-content-center">
        <div class="col-md-10">
            <div class="card">
                <div class="card-header">Users</div>
                <div class="card-body">
                    <user v-for="user in users" :key="user.id" :user="user"></user>
                    <pagination :meta_data="meta_data" @next="fetchUsers"></pagination>
                </div>
            </div>
        </div>
    </div>
</template>

<script>
import User from './partials/User';
import Pagination from './pagination/Pagination';

export default {
    components: { User, Pagination },
    data() {
        return {
            users: [],
            meta_data: {
                last_page: null,
                current_page: 1,
                prev_page_url: null
            }
        }
    },
    mounted() {
        this.fetchUsers();
    },
    methods: {
        fetchUsers(page = 1) {
            axios.get('/api/users/', { params: { page } }).then((res) => {
                this.users = res.data.data;
                this.meta_data.last_page = res.data.last_page;
                this.meta_data.current_page = res.data.current_page;
                this.meta_data.prev_page_url = res.data.prev_page_url;
            });
        }
    }
}
</script>

So we get the page number from the emitter, call the API, and Laravel returns a JSON object containing that data. Then we attach that data to our users array and meta_data object. We pass that data to the User component as well as the Pagination component. The Pagination component receives only the meta_data it needs to render itself. Now, save the file, and if you are running the npm run watch command, then your JS and CSS have already been compiled.
So switch to the browser and open the /users route. Now, you can switch between different pages, and it works fine. It is useful for very basic pagination without any page refresh. So that is it for the Vue Pagination Tutorial From Scratch. Thanks for reading.

Is it possible to integrate content management for the final user? For example, create posts and edit/delete posts. Also, I'm interested to know if there is any online service which could work as a CMS for a Vue application. I've just started learning Vue.

hi, Bogdan? see

HI, Krunal. Does axios have to be installed? If so, how do you do it in Laravel? Maybe you could demonstrate how to install npm packages for use in *.vue files in a Laravel project.

Nice tutorial!
The collaboration of the NetBeans IDE and BlueJ teams has resulted in the NetBeans BlueJ plugin. This tool creates a smooth migration path for students learning the Java programming language from beginner's stage through to the use of professional development tools. In addition, this IDE provides a seamless path for students to switch from educational tools into a full-featured, professional IDE. The BlueJ plugin makes the transition between these two environments easier for students and teachers. Even if you are not familiar with BlueJ, the NetBeans BlueJ plugin is a great way to learn to use an IDE. Developers unfamiliar with the BlueJ software will also learn everything they need to get started with this IDE and will benefit by following along with the examples. To follow the descriptions and code examples, download the NetBeans IDE version 6.1 or later. In addition, download the BlueJ project Calculator.zip, which the first part of this article uses as an example. In the second part of the article, you create a small address book application called PhoneBook to learn various other IDE features.

Setting Up the NetBeans BlueJ Plugin

Once you have installed and opened the NetBeans IDE, go to the Tools menu and select Plugins. Click the Available Plugins tab, and scroll down to select the box next to BlueJ Project Support as shown in Figure 1, then click the Install button. Once you accept the license, the plugin will be installed and ready to use. The NetBeans IDE provides a wealth of features that make writing applications of all types and sizes faster and easier. The BlueJ plugin for the NetBeans IDE includes support for the project structures of both BlueJ and the standard NetBeans IDE. The coding productivity features work with both project types. Next, unzip the Calculator.zip file that you downloaded, and save it in a special folder. In NetBeans IDE, go to the File menu, and select Open Project.
Locate the place you uncompressed the files to, and click Open Project as shown in Figure 2. If you click on the Project tab, you'll see a set of smaller icons resembling a file structure rather than icons representing objects. If you click the BlueJ View tab, you'll see icons representing the objects. Double-clicking a class icon in either view opens an editor for that class's source code. Open the Calculator.java file by double-clicking the class icon. The NetBeans IDE does not support direct interaction with objects. Instead, to execute a program in the NetBeans IDE, you must run its main method. Classes with a main method are indicated in the BlueJ Project View by a green arrow, as indicated in Figure 3. Notice that Calculator.java contains the main method for this application. Calculator.java opens in the Source Editor on the right side of the screen. You can run a program in several different ways; the simplest is to right-click the Calculator.java file name -- marked with a green arrow -- and select Run File.

Code From Scratch

The Calculator.java class initializes itself and the other two classes: CalcEngine and UserInterface. It also makes the application visible again in case its window was closed. The UserInterface class contains all the code for creating the GUI, and the event handler initializes the CalcEngine class, which contains all the methods for what happens when the user presses the buttons. Open the UserInterface class by double-clicking the file name. As you scroll down, you can see how much code had to be written for this application just to create the GUI. Later, you'll learn a faster way to create a GUI simply by dragging components onto a workspace and letting the IDE write the code for you. Now open the CalcEngine.java file by double-clicking its name. This class contains the instructions for what happens behind the scenes within the calculator application.
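To make that division of labor concrete, here is a stripped-down sketch -- not the actual BlueJ sample code, just the shape of it. An engine class like CalcEngine holds the state and arithmetic, while the UserInterface forwards each button press to it and redisplays getDisplayValue() after every call:

```java
// Hypothetical sketch of the engine side of the calculator.
// The real UserInterface would call these methods from its button handlers.
public class CalcEngineSketch {
    private int displayValue = 0;
    private int leftOperand = 0;

    public int getDisplayValue() { return displayValue; }

    // Called when a digit button is pressed: append the digit to the display.
    public void numberPressed(int digit) { displayValue = displayValue * 10 + digit; }

    // Called when "+" is pressed: remember the left operand, clear the display.
    public void plusPressed() { leftOperand = displayValue; displayValue = 0; }

    // Called when "=" is pressed: show the sum.
    public void equalsPressed() { displayValue = leftOperand + displayValue; }

    public static void main(String[] args) {
        CalcEngineSketch engine = new CalcEngineSketch();
        engine.numberPressed(1); engine.numberPressed(2); // display reads 12
        engine.plusPressed();
        engine.numberPressed(3);                          // display reads 3
        engine.equalsPressed();
        System.out.println(engine.getDisplayValue());     // prints 15
    }
}
```

Keeping the arithmetic in a plain class like this is what lets the GUI stay a thin layer of event handlers.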
When a user presses a button, the event handler looks to this class for the corresponding method, and the CalcEngine sends the response to the UserInterface class to display the correct result. You still need to write this type of code by hand, but the IDE makes the writing go faster. As you use this IDE, you can save your work simply by clicking the floppy-disk icons in the tool bar as shown in Figure 5, or go to File and choose Save All. You can add or change the code in the usual manner by typing directly into the Source Editor pane. In addition, the Source Editor has code templates that allow you to enter code snippets using an abbreviation. The Source Editor also lets you hide sections of code through a process called code folding, and it generates code for beans with properties and event sets. The NetBeans IDE BlueJ plugin contains many other useful features that you'll appreciate as you get to know this IDE. Now you can add or change code as needed. One feature you'll notice right away is the Editor Hints feature. These hints warn you with a lightbulb icon if you are missing a semicolon or an import statement, or if you need to create a variable or a statement. When the lightbulb icon appears, press Alt-Enter to display the hint, or click the lightbulb icon with your mouse.

Configuring the Source Editor

You can configure the Source Editor to more closely suit your needs through the settings in the Tools menu.

Compiling a Project

Once you have finished modifying the code, you can either compile classes individually or compile and build the entire project. You can compile individual classes by right-clicking each class's icon and selecting Compile File. Click on the Build menu at the top of the screen, and click on Clean and Build Main Project. Notice that an Output window opens at the bottom of the screen, and the compiler immediately warns that it is overriding an old definition, which is an implementation detail of the IDE's project system that you can ignore.
It then continues compiling until the build is complete. If there are problems with the build, the output appears in the Output pane. Now click on the Run menu and select Run Main Project to run the Calculator project. If any exceptions are thrown, they appear in the Output window. Otherwise, the calculator should reappear on the screen. Test it out by adding or subtracting some numbers, and the result should look something like Figure 6. In Java platform applications, the components that make up a GUI are stored in containers called forms. The Java programming language provides a set of user interface (UI) components from which you can build GUI forms. The NetBeans IDE GUI Builder helps you design and build Java forms by providing tools that simplify the process. To see how the GUI Builder works and to get a feel for the various tools and uses, you are now going to create a GUI application. Although you will be creating a BlueJ-style project, you could easily build the same project with a standard IDE such as NetBeans. For this article, you will create a simple application called PhoneBook that contains fields in which the user can enter a first name, a last name, and a phone number. The application also allows the user to select whether the information is for personal or business use. First, close the Calculator project by going to File and selecting Close "calculator." Once that project is closed, you start a new project. Note: This IDE allows you to have several projects open at a time, but to keep this tutorial simple, close the Calculator project. Click on the File menu and choose New Project. When the next window opens, select BlueJ from the Categories list. Click Next. In the next window, type PhoneBook in the Project Name field. You can save this project in any location you like by using the Browse button or by typing in a path where you want the files to reside. Lastly, click Finish. You are ready to create your first file for this project. 
Click on the File menu and choose New File. Under Categories, choose Java GUI Forms, then under File Type, click JFrame Form, as shown in Figure 7. Alternatively, you can right-click the project name in the Projects pane, select New, then select JFrame Form. Click Next and type PhoneBookFrame in the Class Name field. You can add this class to another package or create a new package, but for this article, leave the default phonebook package. Click Finish. You now have the GUI Builder open and all of its tools available. Your screen should look something like Figure 8. It's worth taking a tour around the IDE now. Notice the Palette pane on the far right. This pane contains a list of components available for adding to forms. You can customize the Palette window to display its contents as icons only or as icons with component names. Below that is the Properties pane, which lists the name of the file that is open and any properties for the file. To the left of the Palette and Properties panes is the GUI Builder workspace. You can see the outline of the JFrame that you have created, and this remains blank until you drag and drop components into place. Above the workspace, note the Design button and next to it a Source button. These control how you view your application. Design view is the default, and this lets you see your application in a What You See Is What You Get (WYSIWYG) manner. You can use Design view only for a project that you create using the GUI Builder in this IDE. So you cannot, for example, return to the Calculator project and open it in Design workspace to add components. Click on the Source button. You now leave the Design workspace and go to the Source Editor. Scroll through the code to see the code that GUI Builder has already created for you. Note the warnings about areas that you should not modify. To return to Design view, click on the View menu at the top of the screen. Select Editors, then choose Design. 
Click on the Design button on the main window to bring back the Palette pane if it is not already visible. At the far left of the screen is the BlueJ View that you are familiar with, showing the first object in this application: the class that holds the main method. Below the BlueJ View window is the Inspector pane, which displays a tree hierarchy of components that the currently opened form contains. Displayed items include any visual components and containers -- such as buttons, labels, menus, and panels -- as well as nonvisual components -- such as timers and data sources -- that you add as you build your application. Now you are ready for some easy drag-and-drop GUI building. If you have hand-coded Swing technology applications in the past, you will appreciate how much work this IDE saves you. You have already created the main JFrame component, so add a Panel component by selecting that component in the Palette pane. Hold down the mouse button and drag the panel into the workspace frame. When you see the square appear, pull and drag a corner of the JPanel component to resize JPanel until it's the size of the main JFrame form. Notice that the Properties window has changed to include the JPanel properties. Also, note how the Inspector pane has changed, as in Figure 9: You can now see the hierarchy of your components. You can also change properties, rename the component, add an event, and so forth by right-clicking any of those items. In the GUI Builder, you can simply put components where you want them as though you were using absolute positioning, placing them exactly where you want them without having to choose a layout that may be close but not exact. The GUI Builder figures out which layout managers are required and generates the code for you automatically. The Phone Book application is now ready to accept a few more components. 
Drag and drop the following components onto the workspace: JLabel, JTextField, JLabel, JTextField, JLabel, JTextField, JLabel, JRadioButton, JLabel, JRadioButton, and a JButton. Notice that as you add each object and drop it on the workspace, thereby creating it, the NetBeans IDE automatically numbers objects of the same name in sequence: jLabel1, jLabel2, and so forth. As you can see, you can easily set each component exactly where you want it. As you add components to a form, the GUI Builder provides visual feedback for positioning components based on your operating system's look and feel. The GUI Builder provides helpful inline hints and other visual feedback regarding where you should place components on your form, automatically snapping components into position along guidelines. To see how easy it is to remove objects, simply drag jLabel4 off the workspace. jLabel4 then goes into the Other Components folder in the Inspector pane. You can select the object in that folder and delete it. When you do, the NetBeans IDE removes the object and rewrites the code. If you want to reorganize the remaining objects, drag them around the workspace to rearrange them. Now change the default names of the jLabel1, jLabel2, and jLabel3 objects by double-clicking each and typing in a name so that they read First Name, Last Name, and Phone Number, respectively. Then change the labels of the radio buttons to read Personal and Business, respectively. You may need to resize the objects and move them around a bit as you go. Remove the default text, which is the same as the default object names, from the text fields by double-clicking and deleting the text. To cause the resize handles and anchoring indicators to reappear, click anywhere within the component to select it. Then pull the text field to the desired length and width. Using the Properties window, you can easily add Tool Tips, change the font or color, and assign other property values for any given object.
In only a few minutes, you have programmed a user interface. Your application should look something like Figure 10. Next, add the JRadioButton objects to a ButtonGroup object. First, drag the ButtonGroup object from the Palette to anywhere on the workspace. Then, in the Inspector pane, select both radio buttons by holding down the Ctrl key while clicking on each object, as Figure 11 shows. Next, go to the Properties pane, click the down arrow on the buttonGroup element, as Figure 12 shows, and select buttonGroup1. Next, preview what you have done so far. You don't have to compile and run your application to see whether the user interface looks the way you want it. Instead, just click the Preview button from the menu above the workspace, and the IDE generates a preview for you. Figure 13 shows the highlighted Preview button to the right of the Design button. Now click on the Source button in the workspace menu. The Source Editor takes you to where you need to insert your handling code. Scroll down and see the code that the IDE has created for you. Also, click on the plus (+) sign near the words Generated code, and you can see all the code that you didn't have to write yourself. Now go back to the menu and click View. Select Editors and then Design. Now add an event for your JButton object. Ideally, you would write this type of input information to a database or to a flat file system, but to keep things simple in this tutorial, you'll send it to the Output window. As with everything else in this IDE, you can accomplish this task in several ways; one way is to right-click the JButton object, choose Events, then select Action, then actionPerformed. Another is simply to double-click the jButton1 object. Click on the Source tab. If you added an event by right-clicking, the editor takes you right where you need to be in the Source Editor to overwrite the actionPerformed method for this button.
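The exact snippet from the original article was not preserved in this copy, so here is a self-contained sketch of the kind of handler body the next step describes. The helper name formatEntry is invented here so the string-building logic can be tested on its own; jTextField1 through jTextField3 and jRadioButton1 are assumed to be the default component names the IDE generates:

```java
// Hypothetical sketch of the button-handler logic for the PhoneBook form.
public class PhoneBookSketch {
    // Builds the line the handler would print to the Output window.
    static String formatEntry(String first, String last, String phone, boolean personal) {
        String type = personal ? "Personal" : "Business";
        return first + " " + last + ", " + phone + " (" + type + ")";
    }

    public static void main(String[] args) {
        // In the generated form, the call inside jButton1ActionPerformed would be:
        // System.out.println(formatEntry(jTextField1.getText(), jTextField2.getText(),
        //         jTextField3.getText(), jRadioButton1.isSelected()));
        System.out.println(formatEntry("Ada", "Lovelace", "555-0100", true));
    }
}
```

Separating the formatting from the Swing calls keeps the GUI handler a one-liner and makes the logic easy to verify without a running form.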
Click after the colon and enter a few lines of code that read your text field input and write it to the Output window once you press Enter. This is the first bit of code that you've had to write from scratch. Generally, you would write much more code in this part of the application to transfer the data from the getText methods to either flat files or a database, or you might write it back to JLabel objects to show on the screen. As an experiment, try intentionally putting some typos in the text. The IDE automatically underlines the error and suggests corrections for you, as Figure 15 shows. Then you can correct the error. From the menu, choose Build, then select Clean and Build Main Project. Next, click on the Run menu item, and select either Run Main Project or Run File; for the latter, select the file name. Figure 16 shows some sample output.

Modifying Source Code

While you are creating a GUI form in the GUI Builder, the IDE automatically generates guarded blocks of code, which it presents on a blue background. At times, you may need to modify that code. It's best to do that through the Design view and using the Properties pane. You can modify the way the IDE generates initialization code for a component, form, or component property by editing its Code properties in the Properties pane in the lower right corner. In addition, you can write custom code and specify where it should be placed within the initialization code. Debugging is the process of examining your application for errors. You debug a project by setting breakpoints and watches in your code and running it in the debugger. This process runs code one line at a time, allowing you to examine the state of your application to discover problems. When you start a debugging session, all of the relevant debugger windows appear automatically at the bottom of your screen. You can debug an entire project, individual executable classes, and JUnit tests.
A breakpoint is a flag in the source code that tells the debugger to stop execution of the program. When your program stops on a breakpoint, you can perform actions such as examining the value of variables and going through your program one step at a time, also known as single-stepping. You can set several types of breakpoints using the New Breakpoint dialog box. You can also set line breakpoints directly in the Source Editor. To set a line breakpoint, click the left margin of the line in the Source Editor or press Ctrl-F8. The IDE also provides stepping commands such as Step Over, Step Into, and Step Out. See the Help files for more information. To fix your code, from the main menu, choose Run, then select Apply Code Changes. This recompiles the class and allows you to continue the session with your repaired source code. A watch enables you to track the changes in the value of a variable or expression during program execution. To create a watch, select the variable or expression in the Source Editor, right-click, and choose New Watch (Ctrl-Shift-F7). The New Watch dialog box opens with the variable or expression entered in the text field. Click OK. The Watches window opens with the new watch selected. After you have set breakpoints and watches, start a debugging session by right-clicking the project in the Projects window and choosing Debug Project. The IDE runs the project in the debugger until execution stops or a breakpoint is reached. You can also start a debugging session on a file by selecting any runnable file in the Projects window. Then choose Run, Run File, and Debug my_file. The IDE runs the file in the debugger until execution stops or a breakpoint is reached. This article has shown you just a small sampling of the features in NetBeans IDE and the BlueJ plugin. This IDE is well suited to introduce the beginning developer to advanced concepts and can fully prepare you to move into a more advanced programming environment.
Be sure to read the NetBeans IDE's Help files, even if you feel that you are finding everything you need. There are many other gems to uncover, and the Help system fully details many of these features, such as managing the classpath; providing more details about debugging; modifying, sizing, and aligning components; modifying GUI source code; working with Ant and testing with JUnit; and working with layout managers. To read these detailed Help files, click on Help in the main menu and select Help Contents. Setting Up the NetBeans IDE for Educational Use NetBeans IDE Tutorials, Guides, and Articles
Hello everyone. I'm working on a project for Java class and I'm not advanced enough to do anything fancy. I have the basics of the program but for some reason it never adds up right. Can anyone help me correct this issue? Thanks. Also I changed the name of doubleroom to roomroom because I thought doubleroom was messing with the math but apparently not lol.

import java.util.Scanner;

class resort4
{
    public static void main (String[] args)
    {
        int staytype;            //create int, some with values, some get values later.
        int roomtype;
        int weekdayrate = 5;
        int weekendrate = 20;
        int holidayrate = 50;
        int smokingroom = 40;
        int handicaproom = 10;
        int nonsmokingroom = 50;
        int book1;
        int book2;
        int book3;
        int keepgoing;
        int keepgoing2;
        int stayday;
        int finalprice = 0;
        int singleroom = 60;
        int roomroom = 100;
        int VIP = 500;
        int smokingornot;

        //find smokingornot based on user input
        System.out.println("Smoking, Non-Smoking, or Handicap?\n 1 for Smoking"
            + "\n 2 for Non-Smoking" + "\n 3 for Handicap");
        Scanner keyboard = new Scanner(System.in);
        smokingornot = keyboard.nextInt();

        //smoking booking
        if (smokingornot == 1)   //smoking room conditions
        {
            if (smokingroom <= 0)
                System.out.println("Smoking room unavailable, please try"   //out of smoking rooms
                    + "again.");
            else
                System.out.println(smokingroom + " " + "Smoking rooms "
                    + "available.");   //smoking rooms are still available
            System.out.println("Would you like to continue?\n" + "1 for yes\n" + "2 for no");
            book1 = keyboard.nextInt();   //move to next part of program if user has made no errors
            if (book1 == 2)
                System.out.println("Room not booked, please restart");
            else
                smokingroom--;   //removes a smokingroom from count
        }

        //non-smoking booking
        if (smokingornot == 2)
        {
            if (nonsmokingroom <= 0)
                System.out.println("Non-Smoking room unavailable, please" + " try again.");
            else
                System.out.println(nonsmokingroom + " " + "Non-Smoking " + "rooms available.");
            System.out.println("Would you like to continue?\n" + "1 for yes\n" + "2 for no");
            book2 = keyboard.nextInt();   //user input to continue
            if (book2 == 2)
                System.out.println("Room not booked, please restart.");
            else
                nonsmokingroom--;   //removes a non smoking room from count
        }

        //handicap booking
        if (smokingornot == 3)
        {
            if (handicaproom <= 0)
                System.out.println("No Handicap rooms available, please " + "try again.");
            else
                System.out.println(handicaproom + " " + "Handicap rooms " + "available.");
            System.out.println("Would you like to continue?\n" + "1 for yes\n" + "2 for no");
            book3 = keyboard.nextInt();   //gets userinput to continue
            if (book3 == 2)
                System.out.println("Room not booked, please restart.");
            else
                handicaproom--;   //removes a handicap room from count
        }

        {
            System.out.println("What type of room would you like?\n" + "1 for Single\n"
                + "2 for Double\n" + "3 for VIP");
            roomtype = keyboard.nextInt();   //assigns input to roomtype
            if (roomtype == 1)   //single room arrangements
                System.out.println("Single room arrangements will be made.");
            finalprice = finalprice + singleroom;
            if (roomtype == 2)   //doubleroom arrangements
                System.out.println("Double room arrangements will be made.");
            if (roomtype == 3)
                System.out.println("VIP room arrangements will be made.");
            finalprice = finalprice + VIP;
        }

        {
            System.out.println("How many nights will the guest be staying?\n"
                + "Enter a number 1, 2, 3, etc.");   //prompt for number of nights stayed at hotel
            stayday = keyboard.nextInt();   //assigns stayday a value
        }

        System.out.println("What type of rates should we apply?\n" + "1 for Weekday\n"
            + "2 for Weekend\n" + "3 for Holiday");
        staytype = keyboard.nextInt();   //assigns staytype a value

        {
            if (staytype == 1)
                finalprice = finalprice + weekdayrate;   //adds weekday rates
            if (staytype == 2)
                finalprice = finalprice + weekendrate;   //adds weekend rates
            else
                finalprice = finalprice + holidayrate;   //adds holiday rates
        }

        {
            if (roomtype == 2)
                finalprice = finalprice + roomroom;   //adds double room rate to final price if necessary
        }

        finalprice = finalprice * stayday;   //multiplies nights stayed to final price

        System.out.println("Your total is" + " " + finalprice);   //prints out final price
    }
}
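A likely cause of the totals being off: several of the if statements have no braces, so lines like finalprice = finalprice + singleroom; and finalprice = finalprice + VIP; execute for every room type, and the final else pairs with the if (staytype == 2) check, so the holiday rate is added whenever staytype is not 2 -- including on weekdays. Here is a braced rewrite of just the pricing logic (the class and method names are hypothetical, introduced for illustration):

```java
// Sketch of the corrected pricing arithmetic from the post above.
public class RoomPrice {
    static int price(int roomtype, int staytype, int nights) {
        int base = 0;
        if (roomtype == 1) { base = 60; }        // single room
        else if (roomtype == 2) { base = 100; }  // double room
        else if (roomtype == 3) { base = 500; }  // VIP

        int rate = 0;
        if (staytype == 1) { rate = 5; }         // weekday
        else if (staytype == 2) { rate = 20; }   // weekend
        else { rate = 50; }                      // holiday

        return (base + rate) * nights;
    }

    public static void main(String[] args) {
        // Single room, weekday rate, 2 nights: (60 + 5) * 2
        System.out.println(price(1, 1, 2)); // prints 130
    }
}
```

Pulling the calculation into a method like this also makes it easy to test the arithmetic separately from the Scanner prompts.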
I've been coming here for breakfast for the past 10+ years and the food/presentation is flawless EVERY single time! Eggs poached medium - perfect! Fruit instead of hash browns - fresh, seasonal, and a great assortment Bacon - crispy I've been coming here for a few years now, and it's pretty good - the burgers are excellent and the service is fast and attentive. They renovated a couple of years ago and the beer prices went up while the size of the glasses seemed to go down. That's when we started going elsewhere for our weekly drinks. Nice atmosphere... More There is nothing special about this restaurant but there is nothing wrong with it either. I went for brunch with a friend and we enjoyed sitting on the patio. The food was tasty enough and the service was friendly and quick. As long as you aren't expecting anything memorable, this place will suit you just fine if you are in... More Always was my local when I lived in Vancouver. Now, when I return (too seldom) I always hit the Sunset Grill and always receive a warm welcome. Good food, good service, good beer, great place. I can't wait to get back. Went for breakfast yesterday and had the Bacon & Eggs. The bacon is cooked almost dry and is very tasty like that (as opposed to fatty and rubbery). The over-easy eggs were cooked just as I asked. Also came with perfect fried potatoes and toast (you can specify which type). The toast also comes with PB & Jam. It was... More love heading here for sunday brunch with the girls! we always go early and get their special food prices for early birds! drinks are great... mmm mimosas and ceasers and greyhounds! service is not always friendly... but then again, its sunday morning! food quality is great and prices are amazing! A bit of a tired old place. Nothing much going on here... average food and service. away from the action on Yew street. Only go if every other place has a line-up! Had several great meals while in Vancouver on business. Try the vegetarian chili! 
Cozy, convenient location, convenient hours and cosmopolitan clients plus prices that don't empty your pocketbook! AND they have fish and chips on their menu!!! The chips are firm and the fish is lightly battered and served HOT. It's hard to find fish and chips in the city, let alone good ones!! With all the other food choices (their dry ribs…

Nothing better than chicken wings and Okanagan Spring Pale Ale!
http://www.tripadvisor.com/Restaurant_Review-g154943-d709371-Reviews-Sunset_Grill-Vancouver_British_Columbia.html
CC-MAIN-2014-41
refinedweb
456
85.28
I want my namespace extension to support copy&paste and drag&drop both into and out of it, but am having trouble getting it to work the way the documentation indicates it should, and I am hoping somebody will be able to help me.

My instance of IDataObject supports CF_HDROP and CFSTR_PREFERREDDROPEFFECT from GetData, and CFSTR_PERFORMEDDROPEFFECT and CFSTR_PASTESUCCEEDED from SetData.

When I drag an item from my namespace to the desktop, DoDragDrop returns DROPEFFECT_NONE and my data object's SetData function is called with CFSTR_PERFORMEDDROPEFFECT set to DROPEFFECT_MOVE. This isn't exactly what is supposed to happen according to the documentation, but it's not a problem. However, when I copy something from my namespace and paste it to the desktop, my SetData function is never called. The documentation indicates it should call it with both of the formats I support if the paste actually succeeds. I need to know if the paste succeeds in case a cut operation has occurred, so I can update inside my namespace. So, how DOES the shell notify my data object that a paste succeeded?

For that matter, when I try to notify the shell that a paste operation I have done with its data object has worked, the process doesn't work like the documentation indicates. My call to SetData with CFSTR_PERFORMEDDROPEFFECT is rejected with E_INVALIDARG, but my call with CFSTR_PASTESUCCEEDED works and the shell deletes the files as it should. The two calls differ only in the cfFormat member that is used. This becomes a problem when supporting dragging files onto my namespace. How am I supposed to notify the shell that the drag-and-drop operation succeeded and it should delete the files in question?

So my questions are:
1) How am I supposed to know when the shell has successfully done a paste operation with my data object?
2) How am I supposed to communicate to the shell when I have done a paste or drop operation with one of its data objects?

If anybody can help me I would appreciate it,
Pat
http://www.verycomputer.com/137_90cc092a89855c5b_1.htm
CC-MAIN-2019-39
refinedweb
345
50.26
Ok guys, I'm almost done with my GPA program but I hit a snag. Everything works fine, but when I insert a letter grade the program doesn't read what number it corresponds to and sets double number to zero. Can someone please let me know what I'm doing wrong?

import java.util.Scanner;

public class StudentGPA {
    public static void main(String args[]) {
        // User inputs the number of classes he/she has
        Scanner input = new Scanner(System.in);
        System.out.println("Please enter your number of classes");
        int classes;
        classes = input.nextInt();
        String Grade = "";
        int totalHours = 0;
        int totalHoursEarned = 0;
        int hours;
        double gpa;
        double number = 0;
        // String of if and else statements that set the number to the appropriate GPA
        if (Grade == "A")
            number = 4.0;
        else if (Grade == "A-")
            number = 3.67;
        else if (Grade == "B+")
            number = 3.33;
        else if (Grade == "B")
            number = 3.0;
        else if (Grade == "B-")
            number = 2.67;
        else if (Grade == "C+")
            number = 2.33;
        else if (Grade == "C")
            number = 2.0;
        else if (Grade == "C-")
            number = 1.67;
        else if (Grade == "D+")
            number = 1.33;
        else if (Grade == "D")
            number = 1.0;
        else if (Grade == "D-")
            number = 0.67;
        else if (Grade == "F")
            number = 0;
        // Loop that ends once the student has put information in on all his classes
        for (int count = 0; count < classes; count++) {
            // Reads the number of hours each class was
            Scanner input2 = new Scanner(System.in);
            System.out.println("Please enter the number of hours this course was");
            hours = input2.nextInt();
            // reads the letter grade using the String Grade prompt
            Scanner input3 = new Scanner(System.in);
            System.out.println("Please enter your grade for this class");
            Grade = input3.next();
            // algorithm for finding the GPA
            totalHours += hours;
            totalHoursEarned += (hours * number);
        } // for loop ends
        // GPA is calculated for all the students classes
        gpa = totalHoursEarned / totalHours;
        // GPA is printed to the screen
        System.out.println(gpa);
    }
}
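A likely culprit, offered as a sketch rather than the thread's accepted answer: `Grade == "A"` compares object references, not String contents, and the whole if/else chain runs once before any grade has ever been read, so `number` is still 0 inside the loop. One way to restructure the mapping so it can be called after each `Grade = input3.next()` (class and method names here are illustrative):

```java
// Sketch of a fix: map the letter grade with a String switch (which compares
// contents, unlike ==) and call this inside the loop after reading the grade.
public class GradePoints {
    public static double gradeToPoints(String grade) {
        switch (grade) {
            case "A":  return 4.0;
            case "A-": return 3.67;
            case "B+": return 3.33;
            case "B":  return 3.0;
            case "B-": return 2.67;
            case "C+": return 2.33;
            case "C":  return 2.0;
            case "C-": return 1.67;
            case "D+": return 1.33;
            case "D":  return 1.0;
            case "D-": return 0.67;
            default:   return 0.0; // F or anything unrecognized
        }
    }
}
```

Calling `number = GradePoints.gradeToPoints(Grade);` right after reading the grade gives each class its correct value. Note also that `totalHoursEarned / totalHours` is integer division in the original, which truncates the GPA; casting one operand to double avoids that.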
https://www.daniweb.com/programming/software-development/threads/383036/gpa-calculator
CC-MAIN-2018-13
refinedweb
317
67.04
Losing a beloved animal can be heartbreaking. However, it's important that you don't give up hope. Many missing pets are found safe and reunited with their families. What you do in the first 24 hours is imperative if you want to find your loved one. While nothing can guarantee the return of a lost pet, being proactive from the start will increase the likelihood that you and your pet are reunited.

Steps

Part 1 of 4: Confirming That Your Pet Is Actually Missing

1. Search your home. Often a pet will hide somewhere in the house if he becomes startled or upset. Before you panic and start canvassing the neighborhood, do a thorough inspection of your home to see if your pet is hiding somewhere nearby.
   - There's a good chance your pet might just be hiding. Many pets do this when something frightens them or if they feel threatened in any way.
   - Check inside cabinets, closets, and spare rooms in case your pet tried to hide or accidentally got locked into a room/storage space.
   - Look behind, under, and inside every piece of furniture and every appliance.
2. Check around your home and yard. Once you've searched everywhere inside your home, you'll want to move the search outdoors. Look around your property before you comb the neighborhood, though, as there's a chance your pet might still be on your property.
   - Search through your home's crawl space, on your roof (for cats), in your home's gutters, and in any nearby trees.
   - Check your garage, under your car, in your yard, and in or behind your shed (if you have one).
   - Walk around the edge of your property once you've searched everywhere inside. Keep an eye out in your neighbors' yards and in the street or sidewalk out front.
3. Use food/toys to lure your pet out. Whether you're searching indoors or outside, using food and toys is a great way to catch your pet's attention.
If he's still on your property, there's a good chance your pet will come running (or at least make noise) when he hears his favorite things.[1]
   - Shake a food dish or treat jar full of treats that your pet loves, or shake/rattle/squeeze your pet's favorite toy.
   - If your pet is hiding within your home, he'll hear the treats or his toy and come out of hiding.

Part 2 of 4: Searching the Neighborhood

1. Set up an outdoor feeding area. Once you've ruled out the possibility that your pet is still inside or on your property, you'll need to move your search to the surrounding community. Before you leave, though, set up an outdoor feeding and play area in your yard, on your porch, or in the garage. That way, if your pet does return (and there's a good chance he will), he'll know that his food and toys are there and recognize it as his home.
   - If your pet is in the neighborhood, he probably won't stray too far from home.
   - Your pet most likely recognizes the familiar sights and smells of your home, and will probably come back when he gets hungry or tired.
   - Leaving some food out might entice your pet back into your yard or onto your porch.
   - Try to have someone stay out in case your pet comes back. If you absolutely can't keep anyone outside in the yard or on the porch, try leaving your garage door open so that your pet can come inside.
2. Keep looking after dark. Don't give up hope once it gets dark out. Many animals are found at night, and in fact your pet may be waiting for nightfall to return home or look for you.
   - Though dogs may be out at any time, cats tend to hide from people and activity that may be frightening or overstimulating.
   - There's a good chance that a lost cat will come out of hiding at night and walk around in the street or sidewalks after dark, once things are calmer in your neighborhood.
3. Bring a bright flashlight. A flashlight is one of your best tools for finding a lost pet. This invaluable object will come in handy, both night and day, as you search for your lost friend.
   - Carrying a flashlight can be helpful, even during the day time. It can help you check in dark places.
   - Make sure you look in typical animal hiding places, such as underneath porches and shrubs, under cars, and in alleys.[2]
   - Your cat's or dog's eyes should "glow" in the dark when you shine a flashlight beam at him.
4. Carry a recent photo. In addition to a flashlight, you should always bring at least one recent photo of your pet with you as you search. Descriptions can only go so far, but seeing an actual photograph might trigger someone's memory or make them realize that the pet they saw was yours.[3]
   - As you search your neighborhood, you can show the photo to neighbors and ask if they've seen your pet.
   - Ask anyone you pass, but especially press for details from people walking their dogs, your mail carrier, and any nearby business owners.
5. Make a lot of noise as you search the neighborhood. While you comb the streets, it's a good idea to let yourself be heard. Your pet will recognize your voice, and he may come out of hiding when he realizes you've come looking for him.
   - Call your pet's name in an affectionate tone. Don't yell or use an angry tone, or he might not come out of hiding.
   - Bring your pet's favorite treats and/or his favorite toy.
Use these items to make a lot of noise and let your pet know that you have these things for him.
   - Even as you make noise, be sure to take regular pauses to be quiet and listen. Your pet might bark, meow, or whimper in response, so you'll want to be sure you can hear him if he does.

Part 3 of 4: Getting Others Involved in the Search

1. Let your neighbors know your pet's missing. If you can't find your pet on your own by searching the neighborhood, you may need to enlist the help of your neighbors. Don't ask them to come search with you (unless you're very close with one or two of your neighbors), as this would be a huge imposition. However, you absolutely can and should let your neighbors know about the situation and ask them to keep an eye out for your pet.[4]
   - Hand out posters or photographs to your neighbors, your mail carrier, and your local pizza delivery driver. Someone might see your pet in the community, and will need to know what your pet looks like and how to contact you.
   - Include a full description of your pet. Specify the color, size, breed, age, sex, and any identifying characteristics, as well as the last time and place your pet was seen.
2. Put up missing posters. Posters are a great way to let your neighbors and anyone passing through your community know about your missing pet. Be sure to include a colored photograph on the poster and a detailed description of your pet, including your pet's name, age, breed, and identifying characteristics.[5] Hang your posters in the following locations:
3. Contact the authorities and shelters in your county. If someone has found your pet, there's a good chance that individual has or will report it to the proper agency.
Depending on where you live, that agency may include law enforcement, animal care, and/or local animal control/animal pound.[7]
   - Reach out to every shelter, veterinary office/clinic, police station, and sheriff's office in your county.
   - Consider contacting the authorities, vet's offices, and shelters in your neighboring county/counties. Some animals run hard without realizing how far from home they've gotten, so your pet may turn up farther than you'd expected.
   - Hand out posters with identifying details about your pet, a recent color photograph, and your contact information to the authorities at every agency you visit.
   - Visit nearby animal shelters and animal control agencies on a daily basis (if possible) to see if anyone has found your pet. They should have posters and photographs, but sometimes a pet may not be recognized by anyone but his owner.[8]
4. Check on the internet. Some people who find a pet with no ID will try to put up a notice on the internet. There are a number of websites that help facilitate this process, including (but not limited to)[9]:
5. Watch out for pet-recovery scams. While you would hope that anyone who responds to your posters would have real information, the sad fact is there are some greedy/malicious individuals who may try to scam you. These people may be trying to get money from you up front, or they may try to lure you to an unsafe location.
Don't fall for their tricks, and if you suspect foul play call the police immediately.[11]
   - If you put up posters or online notifications, leave out one identifying characteristic (a minor one, but one that anyone with your pet would notice).
   - If someone contacts you claiming to have your pet, ask that individual to describe the identifying characteristic you left out. If they can't, they might be trying to scam you.
   - Be wary of anyone who demands that you wire them money or deliver a reward for returning your pet. If someone makes such a demand, ask them to meet you somewhere and alert the police about the incident and the meeting time/place.
   - Never invite someone to your home if they claim to have your pet. Likewise, don't go to an individual's home, even if he claims to have found your pet.
   - Ask the individual to meet you in front of the local police station or in an otherwise crowded place.[12]
   - Don't go to meet the caller alone if he says he's found your pet. Always bring a friend or relative with you, and let a third party know where you're going and the phone number of the individual you're meeting.

Part 4 of 4: Preventing Future Incidents

1. Keep an ID tag on your pet at all times. Every pet should wear a collar with an ID tag at all times. Whether you have a cat or dog, indoor or outdoor, make sure your pet can be properly identified and that you can be contacted in case anyone finds your furry friend.[13]
   - Even indoor pets should wear ID tags on their collars at all times.
   - Your pet's ID tag should include your name, address, and phone number.
   - If your pet does manage to get away from you, there's a much better chance of him being returned if he's wearing an ID tag.
2. Consider getting a microchip. Microchipping your pet is a safe, easy, and relatively pain-free way to help protect your pet from being lost. The chip itself is smaller than a grain of rice and is injected into your pet just below the skin, typically around the shoulders.[14]
   - Your pet's microchip will include a registration number that is unique to your animal. It will also include a phone number to contact the registry where your pet's chip is listed.
   - An animal shelter or vet's office will be able to use a handheld scanning device to get this information off the chip by simply scanning your pet's shoulders. It won't hurt, and the chip stays inside your pet at all times.
   - While microchips are safe and effective, they should never be the only line of defense in protecting your pet. Always keep a collar and ID tag on your pet, even if he has the chip.
3. Prevent your pet from escaping. There are many reasons why a pet may run away, and unless you are actively abusing your pet, it has absolutely nothing to do with you personally. Some pets try to escape due to social isolation/frustration (being left alone for too long, having a boring environment without any toys or anyone to play with). Other animals try to escape in order to find a mate and try to reproduce. Still other pets escape due to separation anxiety or other fears/phobias (like thunderstorms, fireworks, or even construction sounds).[15]
   - The first step to preventing your dog from escaping again is to determine why he ran away in the first place.
Think about what may have been going on, both inside your home and in your community, when your pet ran away the last time.
   - Have your pet neutered/spayed to prevent sexual roaming.
   - Give your dog plenty of attention every day, including play time. Make sure your pet has toys, and try to teach him new tricks (for dogs) or take an obedience class together.
   - Make sure your dog gets long walks every day. The exercise can help tire out a restless dog and may make him less likely to roam the neighborhood.
   - If your dog has significant fears/phobias, keep him indoors except during walks (for dogs). Always keep a fearful dog on a leash at all times.
   - Let your pet adjust gradually to any new changes to his environment. Moving to a new home, adding or losing a family member, or sudden schedule changes can all affect a dog's sense of security and comfort.

Community Q&A

Question: I've been looking after a stray cat for one year. Two weeks ago, she went missing. What should I do?
Community Answer: Check your local animal shelter or pound. If anyone finds your cat, they will most likely take him there.

Question: I have an outdoor cat, and she didn't come home last night. What do I do?
Community Answer: Put up signs around your neighborhood and offer a reward if anyone finds your cat. There is a possibility that she is still outside, so look around for her. Also, because she is an outdoor cat, someone may have put her in a shelter. If she is microchipped, you could easily find her, and if she isn't and you do find her, do that immediately. The price is worth not having all of this stress.

Question: My pet duck flew away and I've looked everywhere, what do I do?
Community Answer: Call out your duck's name, if they know their name. If that doesn't work, leave your gate open and leave your duck's food out. Print and post signs with a picture of your missing duck and your phone number on them. Ask your family members and neighbors if they have seen anything, and just keep looking.
Question: How do I find a dog with a microchip?
Community Answer: Hopefully, someone will contact you. If someone finds your dog, they could take it to a vet or animal shelter, at which point the vet or shelter will read the chip and find your contact information. Just to be on the safe side, you might want to call the local shelters and your vet and let them know your dog is missing.

Question: I don't know for sure if my cat is lost but he hasn't come to my call. I might've heard a meow but it didn't sound like my lovely cat. What can I do?
Community Answer: If it's an indoor cat, you may want to shake a dry cat food package.

Question: My cat is staying at my grandma's house for the summer, where he gets to stay in the yard. We don't know if he ran away or if he's hiding in the barn because of stress, what do we do?
Community Answer: Have you tried calling your cat? Are there special treats your cat likes? Try shaking the treats and calling your cat to see if he comes out. You can also take an article of your clothing and lay it down someplace where you think your cat might see/smell it. Lay some of your cat's favorite toys on your article of clothing. Have patience and watch closely. If you're really worried that your cat is stressed, it might be better not to leave him at your grandma's.

Question: My cat has been missing for about two years. I've tried everything, but I can't find him. What can I do?
Community Answer: You can try following the instructions in the article, especially putting fliers up around the neighborhood. If you've already tried everything, though, and it's been two years, it's pretty unlikely that the cat is coming back. Most likely your cat was adopted by someone else who thought he was a stray.

Question: My cockatiel flew away. He's been gone for a day. How can I find him?
Community Answer: Put posters up around your area with a clear photo of your cockatiel and other identification information, as well as your contact info.
You should also go for a walk around your area as often as possible, and see if you can spot your bird somewhere (also call for them, especially if they are familiar with their name or your voice). They may have flown into someone's yard, so also keep that in mind.

Tips

- If you do not have a photo of your pet and your pet is a pure breed, you may find "close-enough" photos of other pets on the web.
- You may have to continue your search for several days or even weeks before you see results.
- Contact your neighbors, the animal shelters, and the vets' offices again. Who knows? Your pet may have been found while you were distributing fliers.
- If you haven't found your pet within a day or so, you may wish to broaden your search to surrounding neighborhoods.
- Some local radio stations have a segment that helps owners find lost pets. Find out if your station has this, and ask for help. You also may want to listen to the radio itself, just in case someone found your animal and has contacted the station.

Warnings

- If you choose to offer a reward, …

Things You'll Need

- A photo of your pet
- A way to print out several copies of fliers
- Phone numbers of local vets' offices, police/sheriff's offices, the local animal pound, and local animal shelters

About This Article

Before you go looking for a missing pet, put some food outside so if your pet does return while you're gone, it will recognize its home. To find your pet, search typical hiding places around your neighbourhood such as alleys, beneath porches, and under cars and shrubs. Bring its favorite toys or treats with you on the search and use these to make noise as you call its name. If it starts to get dark, keep searching since your pet may come out when things are calmer in your neighbourhood. Remember to take a recent photo of your pet with you and ask if anyone has seen it.
For more advice, like how to use the internet to find a missing pet, keep reading!
https://www.wikihow.com/Find-a-Missing-Pet
CC-MAIN-2021-31
refinedweb
3,483
70.23
We can start by assuming that the tree is a perfect binary tree with all leaves filled out. We then check the max depth (the depth of the leftmost node rooted at a given node) of the right child, starting from the root. Since we're given that all leaves are as far left as possible, this tells us whether the final level is filled up to the halfway point of the current subtree. If we see that the current level is not filled past the halfway point, we subtract the right half of the leaves rooted at the node on the level we're currently on. Repeat this process (conceptually recursive, though it can still be implemented iteratively, as I've done) until we hit the bottom.

def countNodes(self, root):
    def max_depth(root):
        h = 0
        while root:
            root, h = root.left, h + 1
        return h

    d = max_depth(root)
    if d < 2:
        return 1 if root else 0

    cnt, bottom = (1 << d) - 1, 1 << (d - 2)
    while d > 1:
        _d = max_depth(root.right)
        if _d == d - 1:
            root = root.right
        else:
            root = root.left
            cnt -= bottom
        d, bottom = d - 1, bottom >> 1
    return cnt
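To sanity-check the approach, here is a standalone adaptation of the method above (the usual LeetCode-style `TreeNode` class is assumed and restated here so the snippet runs on its own), plus a brute-force builder for complete trees to compare against:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def count_nodes(root):
    # Same algorithm as above, as a free function instead of a method.
    def max_depth(node):
        h = 0
        while node:
            node, h = node.left, h + 1
        return h

    d = max_depth(root)
    if d < 2:
        return 1 if root else 0
    cnt, bottom = (1 << d) - 1, 1 << (d - 2)
    while d > 1:
        if max_depth(root.right) == d - 1:
            root = root.right          # bottom level filled past the midpoint
        else:
            root = root.left           # drop the missing right half of leaves
            cnt -= bottom
        d, bottom = d - 1, bottom >> 1
    return cnt

def build_complete(n):
    # Complete binary tree with n nodes: children of node i sit at 2i+1, 2i+2.
    nodes = [TreeNode(i) for i in range(n)]
    for i in range(n):
        if 2 * i + 1 < n:
            nodes[i].left = nodes[2 * i + 1]
        if 2 * i + 2 < n:
            nodes[i].right = nodes[2 * i + 2]
    return nodes[0] if nodes else None
```

Checking `count_nodes(build_complete(n)) == n` for a range of sizes exercises both the "last level past halfway" and "last level short of halfway" branches.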
https://discuss.leetcode.com/topic/103541/straight-forward-python-o-log-n-2
CC-MAIN-2017-39
refinedweb
196
78.28
November 2009 Report

Xerces-J

A Xerces-J 2.10.0 release should be coming very soon, tentatively planned for December 18th, 2009. It would include, among other things, a preview of the XML Schema 1.1 support which has been under development. The team continues to make good progress on this front, lately expanding support for CTA to include all of XPath 2.0, making improvements for assertions to better handle namespaces in the XPath expressions, and agreeing on a more efficient design for using the PsychoPath XPath 2.0 processor. Other features expected to be in the release are JAXP 1.4, the StAX event API (javax.xml.stream.events), the Element Traversal API (org.w3c.dom.ElementTraversal), and Unicode normalization checking.

Xerces-C

A large number of bugs have been fixed in the past couple of months in preparation for the 3.1.0 release. On November 5th, 2009 a vote to release the first release candidate for 3.1.0 has passed. The release candidate is tentatively scheduled for November 16th, 2009.

Xerces-P

Nothing in particular to report. There was no development activity over the reporting period.

XML Commons

There has been some recent activity in XML Commons External, with a few bug fixes to StreamResult applied to all of the branches. A release of the JAXP 1.4 APIs is expected before the end of the year to support the upcoming Xerces-J 2.10.0 release.

General

The XML project wondered if Xerces would be interested in taking Crimson for a sub-project. We passed on that. Given its years of inactivity, Crimson is more likely suitable for the Attic.
http://wiki.apache.org/xerces/November2009?highlight=PsychoPath
CC-MAIN-2016-44
refinedweb
275
59.9
ICONV(3)                 BSD Programmer's Manual                 ICONV(3)

NAME
     iconv_open, iconv_close, iconv - codeset conversion functions

SYNOPSIS
     #include <iconv.h>

     iconv_t
     iconv_open(const char *dstname, const char *srcname);

     int
     iconv_close(iconv_t cd);

     size_t
     iconv(iconv_t cd, char **src, size_t *srcleft, char **dst, size_t *dstleft);

DESCRIPTION
     […] discarded.

ERRORS
     The iconv_open() function may cause an error in the following cases:

     [ENOMEM]  Memory is exhausted.

     […] incomplete character or shift sequence.

SEE ALSO
     iconv(1)

STANDARDS
     iconv_open(), iconv_close(), and iconv() conform to IEEE Std
     1003.1-2001 ("POSIX").

HISTORY
     MirOS always had a port of GNU libiconv. Citrus libiconv was imported
     from NetBSD into MirOS #9.

BUGS
     If iconv() is aborted due to the occurrence of some error, the
     "invalid conversion" count mentioned above is unfortunately lost.

MirOS BSD #10-current                                          November 1,
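Beyond the manual text, here is a minimal usage sketch (my own, not from this page) of the three functions together, converting a Latin-1 string to UTF-8. The helper's name is illustrative, the charset names are the common glibc/Citrus spellings, and error handling is reduced to returning -1:

```c
#include <iconv.h>
#include <string.h>

/* Sketch: convert a Latin-1 string to UTF-8 via iconv(3).
 * Returns 0 on success, -1 on failure. */
static int latin1_to_utf8(const char *src, char *dst, size_t dstlen)
{
    iconv_t cd = iconv_open("UTF-8", "ISO-8859-1");
    if (cd == (iconv_t)-1)
        return -1;

    char *in = (char *)src;            /* iconv() wants non-const pointers */
    char *out = dst;
    size_t inleft = strlen(src);
    size_t outleft = dstlen - 1;       /* reserve room for the NUL */

    size_t rc = iconv(cd, &in, &inleft, &out, &outleft);
    iconv_close(cd);
    if (rc == (size_t)-1)
        return -1;

    *out = '\0';                       /* NUL-terminate the converted bytes */
    return 0;
}
```

Note that iconv() advances the src/dst pointers and decrements the left counts as it goes, which is why local copies of the pointers are passed in.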
http://www.mirbsd.org/htman/i386/man3/iconv_open.htm
CC-MAIN-2015-06
refinedweb
120
50.73
Feedback on Drum Machine

Tell me what you think code-wise (not much of a designer as you might tell!)

ThamiMemel #2
very nice job i really loved the design it's so smooth and lovely, i'm not that far yet in the curriculum so i can't judge the code but c'mon it's working.

You really don't need to put the button letters, audio names, or audio URLs in state. I was able to create a btns array with all the info needed and eliminate a lot of repetitive code and text (the URLs) with the following:

class App extends React.Component {
  state = { display: '' }

  stealTextName = text => this.setState({ display: text })

  render() {
    const btns = [
      ['Q', 'Heater-1'], ['W', 'Heater-2'], ['E', 'Heater-3'],
      ['A', 'Heater-4_1'], ['S', 'Heater-6'], ['D', 'Kick_n_Hat'],
      ['Z', 'punchy_kick_1'], ['X', 'side_stick_1'], ['C', 'Brk_Snr']
    ];
    const convertName = name => name
      .replace(/_/g, '-')
      .replace(/\w+/g, ([first, ...rest]) => first.toUpperCase() + rest.join(''));
    const { display } = this.state, { stealTextName } = this;
    return (
      <div className='container'>
        <div id='drum-machine'>
          <div id='display'>
            <span>{display}</span>
          </div>
          <div id='controls'>
            {btns.map(([letter, audioName]) => (
              <Drumpad
                key={letter}
                id={convertName(audioName)}
                text={letter}
                audio={`${audioName}.mp3`}
                stealTextName={stealTextName}
              />
            ))}
          </div>
        </div>
      </div>
    )
  }
}

renmanimel #4
I like how simple and clean the app looks, few lines of code, great work
https://www.freecodecamp.org/forum/t/feedback-on-drum-machine/216647
CC-MAIN-2018-47
refinedweb
220
61.56
How to write a regular expression in Python?

The following code should be able to do it. This will capture only 999 from the tag.

a = "Your number is <b>999</b>"
import re
m = re.search(r"\d+", a)
m.group()
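As a small variation on the answer above (not part of the original answer): if only the digits inside the `<b>` tag are wanted, rather than the first run of digits anywhere in the string, a capture group scoped to the tag is more precise:

```python
import re

a = "Your number is <b>999</b>"

# Anchor the digits to the <b>...</b> tag and capture just the number.
m = re.search(r"<b>(\d+)</b>", a)
print(m.group(1))  # -> 999
```

`group(1)` returns only the captured digits, while `group()` (or `group(0)`) would return the whole match including the tags.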
https://www.edureka.co/community/53527/how-to-write-a-regular-expression-in-python
CC-MAIN-2019-35
refinedweb
149
89.85
IJCTF 2020

nod_nod by stackola

run /flag
flag format: ijctf{}
No rockyou or raw bruteforce. Bruteforce is only allowed with some technics like "Blind Sql Injection" But payload has to be not too long. Too long will kill server.
Tip: I like php. and I saw the admin's passcode ends with "de"
Author: sqrtrev

Visiting the URL, all we get is Under Construction

The source

The challenge author provided us with a zip file containing the program's source code. One thing became clear quickly: all interesting endpoints require you to be admin. This can only be achieved by successfully calling /auth with the right passcode. Looking at the code, I immediately thought of blind regex injection, due to the strange way the password is checked:

if (typeof passcode == "string" && !secret.search(passcode) && secret === passcode)

Also, I remembered reading that the last 2 letters are very hard to extract via regex, but as if by chance, they are given in the challenge. This solidified my plan of using ReDoS to extract the password.

Step 1: Getting the password.
After some research, I found a script that was almost perfect for this application. After a small amount of modifications, this is the script I used:

import socket
import sys
import time
import random
import string
import urllib
import requests
import re

# constants
THRESHOLD = 2

# predicates
def length_in(i, j):
    return ".{" + str(i) + "," + str(j) + "}$"

def nth_char_in(n, S):
    return ".{" + str(n-1) + "}[" + ''.join(list(map(re.escape, S))) + "].*$"

# utilities
def redos_if(regexp, salt):
    return "^(?={})((.*)*)*{}".format(regexp, salt)

def get_request_duration(payload):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    p = urllib.quote(payload)
    print(p)
    try:
        _start = time.time()
        requests.get(""+p)
        _end = time.time()
        duration = _end - _start
    except:
        print "oof"
        duration = -1
        exit(1)
    finally:
        sock.close()
    return duration

def prop_holds(prop, salt):
    return get_request_duration(redos_if(prop, salt)) > THRESHOLD

def generate_salt():
    return ''.join([random.choice(string.ascii_letters) for i in range(10)])

# generating salt
salt = generate_salt()
while not prop_holds('.*', salt):
    salt = generate_salt()
print("[+] salt: {}".format(salt))

# leak length
lower_bound = 10
upper_bound = 20
while lower_bound != upper_bound:
    m = (lower_bound + upper_bound) // 2
    if prop_holds(length_in(lower_bound, m), salt):
        upper_bound = m
    else:
        lower_bound = m + 1
    print("[*] {}, {}".format(lower_bound, upper_bound))

secret_length = lower_bound  # = upper_bound
print("[+] length: {}".format(secret_length))

After around 10 minutes, the passcode was almost completely extracted:

Sup3r-P4ss-??de

The remaining 2 letters I bruteforced using the /auth endpoint.

Final password: Sup3r-P4ss-C0de

Step 2: the Tunnel

Now that we have admin rights, we can look at the other functions in the code. One that immediately seemed interesting was /tunnel, which tunnels your request to a php script running on localhost.
```js
app.get('/tunnel', function(req, res){
    var session = req.session;
    if(typeof session.isAdmin == "boolean" && session.isAdmin){
        var param = req.query;
        if(typeof param.dir == 'undefined')
            param.dir = '';
        request = require('request');
        request.get('' + param.dir, function callback(err, resp, body){
            var result = body;
            res.end(result);
        });
    }else{
        res.end("Permission Error");
    }
});
```

As per the challenge description, this PHP service uses include to include the file you pass via the dir parameter. Playing around with this, I quickly found the obvious LFI vulnerability.

Great. We can include (and execute) any PHP file we find on the server. Now the search began. I wanted to escalate the LFI to RCE/a shell, so I could run /flag. Sadly, all common LFI-to-RCE methods I found did not work. It seems we have to create the file containing our PHP exploit ourselves. Looking at the obvious way to create files:

```js
app.put('/put', function(req, res){
    var session = req.session;
    if(typeof session.isAdmin == "boolean" && session.isAdmin){
        var filename = Buffer.from(rand.random(16)).toString('hex');
        var contents = req.query.contents;
        if(typeof contents == "undefined"){
            res.end('Param Error');
        }else if(contents.match(/ELF/gi)){
            res.end('Forbidden String');
        }else{
            var dir = './uploads/' + session.id;
            !fs.existsSync(dir) && fs.mkdirSync(dir);
            fs.writeFileSync(dir + '/' + filename + '.txt', contents);
            res.end('Okay');
        }
    }else{
        res.end('Permission Error');
    }
});
```

This seemed promising at first. We do know the user's session id. But sadly, we found no way to do a directory listing, meaning we would never be able to find out the filename of our uploaded file.

Breakthrough

After at least an hour of playing around with PHP LFI, I went back to the source and spotted this:

```js
app.get('/:dir', function(req, res){
    var session = req.session;
    session.log = req.params.dir;
    res.statusCode = 404;
    res.end('404 Error');
});
```

This seems like a way to write arbitrary content to the user's session.
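The PHP half of the service is not part of the handout quoted here, but judging from the behaviour described, the localhost script behind /tunnel presumably boils down to something like this hypothetical reconstruction (not the actual challenge source):

```php
<?php
// Hypothetical: whatever arrives in ?dir= is handed straight to include(),
// so any readable PHP file on disk can be pulled in and executed.
include($_GET['dir']);
?>
```

That is why a file we control the contents of, at a path we can predict, is worth more than the randomly named uploads from /put.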
Messing around with the server locally, I figured out that the user's session file is stored in __dirname/sessions/[session_id].json. For the server, this was /var/www/nod_nod/sessions/[session_id].json. Using the PHP LFI, I tried to include my own session file, and that actually worked:

Response:

Admin Fileviewer(using include) {"cookie":{"originalMaxAge":null,"expires":null,"httpOnly":true,"path":"/"},"__lastAccess":1587902082714,"log":"favicon.ico","isAdmin":true}

Now we try calling that /:dir path to arbitrarily write things into that JSON file. Including the JSON file again, we get this:

Admin Fileviewer(using include) {"cookie":{"originalMaxAge":null,"expires":null,"httpOnly":true,"path":"/"},"__lastAccess":1587902175330,"log":":dir:test","isAdmin":true}

Great! Our text is reflected in that file. Next up, I try putting PHP into that JSON file (payload: <?php echo(2*2*2);?>) and calling the LFI again to verify:

Admin Fileviewer(using include) {"cookie":{"originalMaxAge":null,"expires":null,"httpOnly":true,"path":"/"},"__lastAccess":1587902275515,"log":":dir:8","isAdmin":true}

Awesome! Our PHP code was executed: 2*2*2 was replaced by 8. Then I tried to see which dangerous PHP functions we have access to. The answer: exec.

Putting it all together

Our goal is to get a shell so we can run /flag. To accomplish this, we have access to PHP's exec. Payload for a PHP reverse shell:

```php
<?php
$sock = fsockopen('IP.IP.IP.IP', PORT);
$proc = proc_open('/bin/sh -i', array(0=>$sock, 1=>$sock, 2=>$sock), $pipes);
?>
```

We write the PHP reverse shell to the user's session file, then execute the PHP code on the server using our LFI vulnerability. This resulted in an incoming reverse shell connection on our own server. Getting the flag from here was easy:

```
$ cd /
$ ./flag
-> ijctf{Cool,,The_best_1s_0nly_use_nodejs_or_PhP}
```
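No network is needed to see the shape of the chain; as pure URL-building helpers it is only a few lines (BASE is a placeholder standing in for the real CTF host, the session directory is the path found above):

```python
from urllib.parse import quote

BASE = "http://target.example:3000"          # placeholder host
SESSION_DIR = "/var/www/nod_nod/sessions"

def plant_payload_url(php_payload):
    # Step 1: GET /<payload> -- the catch-all /:dir route stores the path
    # segment in session.log, which lands in the on-disk session JSON.
    return BASE + "/" + quote(php_payload, safe="")

def trigger_url(session_id):
    # Step 2: have the PHP service include() our session file via /tunnel.
    return BASE + "/tunnel?dir=" + SESSION_DIR + "/" + session_id + ".json"

print(trigger_url("abc123"))
```

Two requests, and the attacker-controlled log value is executed as PHP.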
https://wrecktheline.com/writeups/ijctf-2020/
Gixugif (Member) - Content Count: 57 - Community Reputation: 122 (Neutral)

I cannot get Java to properly recognize my key releases - Gixugif replied to Gixugif's topic in General and Gameplay Programming

Thank you, I see how that works now. I mean to read through the solution when I get the chance to better understand it, but it seems this solution would still be rather troubling in the context I'm using it. Since writing this game is also meant to be a learning experience, I'm trying to implement many things myself that higher-level libraries would otherwise take care of, such as loading sounds and graphics, but taking a look at LWJGL just for this may be the way to go, and it may prevent problems later on too. Edit: Of course, using this for my input here would require me to use it in other areas too, requiring a massive rewrite. I'm wondering if that update will be quicker and maybe even save me a lot more time in the long run, or if I can think of a nice solution for this using the method I was originally going with.

I cannot get Java to properly recognize my key releases - Gixugif posted a topic in General and Gameplay Programming

I am aware that there is a bug, a 12- or 13-year-old bug, that exists on Linux and probably other Unix-based systems, where certain ways of detecting when a key is pressed or released do not function the way one would expect. I am also aware that there are a few ways around this, and I have gone so far as to attempt to implement this solution, which can be viewed here.

Java: getResourceAsStream() returning null - Gixugif replied to Gixugif's topic in General and Gameplay Programming

Oops, sorry, took a while for me to get back to this. Edited the line in the OP a little bit, but it shouldn't have made a functional difference.
So the problem was that it was looking for "/Images" in the bin directory, when the contents of "/Images" were in bin, but not "/Images" itself. Changing the code to read "/player.gif" instead of "/Images/player.gif" fixed this.

Java: getResourceAsStream() returning null - Gixugif posted a topic in General and Gameplay Programming

BufferedImage = ImageIO.read(getClass().getResourceAsStream("/Images/player.gif"));

The file getResourceAsStream is looking at definitely exists (it's a variable in the actual code, but I figure it's simpler to write it out here), but for whatever reason, "is" is null on the next line. I'm guessing there's something I don't understand about how this works, but anything I've read so far on the matter hasn't been at all helpful, so I would be thankful to anyone who can explain to me what's going on. I'm sure it's not an issue of where the file is located. My classes are in /src in the DreamGame folder, and as you can see, Images is in the Images folder.

- Alright, that makes things a ton clearer. Now this really doesn't seem all that difficult. Thank you so much.

- Why, yes, as a matter of fact, that does make me feel much better. By backprop I just meant backpropagation error. I was provided with that, feedforward, and such. I guess what I have to do is implement an algorithm that uses those functions to classify irises. Anyway, thanks. That helps a little. I think I can figure out the rest of the way. Here's the backprop program I was talking about, if you're still wondering:

Neural Network help. Sorry! - Gixugif posted a topic in Artificial Intelligence

So, not having been able to find help anywhere else, I come here looking to ask something about NNs when I see the thread complaining about all the NN threads recently. Haha, oh well. Here's one more, then. I have a question about inputting data. The data is laid out in a text file just like this:

4.4,0.4,1.1,3.2,flower-name
5.5,1.2,7.7,6.3,flower-name
4.5,6.2,4.4,1.9,flower-name
etc. ...
Each line is a data point. Now, the backprop function accepts one int. I realize, unfortunately, that the answer is probably different in each case, but I'll at least start by asking: would I enter each of the numbers on the line separately? Like loop through backprop 4 times for each line (I'm almost positive that's not how it'd be done, but like I said, I'm just trying to figure this all out), or do I average them all together or something? I'm already normalizing the values, by the way, by dividing each number by the highest number in its column. Also, what about the flower-name? Surely that has to be taken into account somehow. Anyway, thanks, and I'll post more details if necessary (and I'm sure it will be).

"Can't Create A GL Rendering Context" Error - Gixugif replied to Gixugif's topic in Graphics and GPU Programming

Wow, somehow I missed that. Thanks so much for catching that.

"Can't Create A GL Rendering Context" Error - Gixugif replied to Gixugif's topic in Graphics and GPU Programming

#include <windows.h> #include <gl\gl.h> #include <gl\glu.h> #include <gl\glaux.h> HGLRC hRC = NULL; HDC hDC = NULL; HWND hWnd = NULL; HINSTANCE hInstance; bool keys[256]; bool active = TRUE; bool fullscreen = TRUE; LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM); GLvoid ReSizeGLScene(GLsizei width, GLsizei height) { if (height == 0) { height = 1; } glViewport(0, 0, width, height); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45.0f,(GLfloat)width/(GLfloat)height, 0.1f, 100.0f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); } int InitGL(GLvoid) { glShadeModel(GL_SMOOTH); glClearColor(0.0f, 0.0f, 0.0f, 0.5f); glClearDepth(1.0f); glEnable(GL_DEPTH_TEST); glDepthFunc(GL_LEQUAL); glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); return TRUE; } int DrawGLScene(GLvoid) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glLoadIdentity(); return TRUE; } GLvoid KillGLWindow(GLvoid) { if (fullscreen) { ChangeDisplaySettings(NULL, 0); ShowCursor(TRUE); } if (hRC) { if (!wglMakeCurrent(NULL, NULL)) { MessageBox(NULL, "Release of DC And RC Failed.", "SHUTDOWN ERROR", MB_OK | MB_ICONINFORMATION); } if (!wglDeleteContext(hRC)) { MessageBox(NULL, "Release Rendering Context Failed.",
"SHUTDOWN ERROR", MB_OK | MB_ICONINFORMATION); }hRC = NULL; } if (hDC && !ReleaseDC(hWnd, hDC)) { MessageBox(NULL, "Release Device Context Failed.", "SHUTDOWN ERROR", MB_OK | MB_ICONINFORMATION); hDC = NULL; } if (hWnd && !DestroyWindow(hWnd)) { MessageBox(NULL, "Could Not Release hWnd.", "SHUTDOWN ERROR", MB_OK | MB_ICONINFORMATION); hWnd = NULL; } if (!UnregisterClass("OpenG", hInstance)) { MessageBox(NULL, "Could Not UnregisterClass.", "SHUTDOWN ERROR", MB_OK | MB_ICONINFORMATION); hInstance = NULL; } } BOOL CreateGLWindow(char* title, int width, int height, int bits, bool fullscreenflag) { GLuint PixelFormat; WNDCLASS wc; DWORD dwExStyle; DWORD dwStyle; RECT WindowRect; WindowRect.left = (long)0; WindowRect.right = (long)width; WindowRect.top = (long)0; WindowRect.bottom = (long)height; fullscreen = fullscreenflag; hInstance =Instance; wc.hIcon = LoadIcon(NULL, IDI_WINLOGO); wc.hCursor = LoadCursor(NULL, IDC_ARROW); wc.hbrBackground = NULL; wc.lpszMenuName = NULL; wc.lpszClassName = "OpenG"; if (!RegisterClass(&wc)) { MessageBox(NULL, "Failed to Register The Window Class.", "ERROR", MB_OK | MB_ICONEXCLAMATION); return FALSE; } if (fullscreen) { DEVMODE dmScreenSettings; memset(&dmScreenSettings, 0, sizeof(dmScreenSettings)); dmScreenSettings.dmSize=sizeof(dmScreenSettings); dmScreenSettings.dmPelsWidth = width; dmScreenSettings.dmPelsHeight = height; dmScreenSettings.dmBitsPerPel = bits; dmScreenSettings.dmFields=DM_BITSPERPEL|DM_PELSWIDTH|DM_PELSHEIGHT; if (ChangeDisplaySettings(&dmScreenSettings, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL) { if (MessageBox(NULL, "The Requested Fullscreen Mode Is Not Supported By\nYour Video Card. 
Use Windowed Mode Instead?", "NeHe G", MB_YESNO|MB_ICONEXCLAMATION)==IDYES) { fullscreen = FALSE; } else { MessageBox(NULL, "Program Will Now Close.", "ERROR", MB_OK |MB_ICONSTOP); return FALSE; } } } if (fullscreen) { dwExStyle = WS_EX_APPWINDOW; dwStyle = WS_POPUP; ShowCursor(FALSE); } else { dwExStyle = WS_EX_APPWINDOW | WS_EX_WINDOWEDGE; dwStyle = WS_OVERLAPPEDWINDOW; } AdjustWindowRectEx(&WindowRect, dwStyle, FALSE, dwExStyle); if (!(hWnd = CreateWindowEx( dwExStyle, "OpenG", title, WS_CLIPSIBLINGS | WS_CLIPCHILDREN | dwStyle, 0, 0, WindowRect.right-WindowRect.left, WindowRect.bottom-WindowRect.top, NULL, NULL, hInstance, NULL))) { KillGLWindow(); MessageBox(NULL, "Window Creation Error.", "ERROR", MB_OK | MB_ICONEXCLAMATION); return FALSE; } static PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR), 1, PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, PFD_TYPE_RGBA, bits, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 16, 0, 0, PFD_MAIN_PLANE, 0, 0, 0, 0 }; if (!(hDC = GetDC(hWnd))) { KillGLWindow(); MessageBox(NULL, "Can't Create A GL Device Context.", "ERROR", MB_OK | MB_ICONEXCLAMATION); return FALSE; } if (!(PixelFormat = ChoosePixelFormat(hDC, &pfd))) { KillGLWindow(); MessageBox(NULL, "Can't Find A Suitable PixelFormat", "ERROR", MB_OK | MB_ICONEXCLAMATION); return FALSE; } if (!(hRC = wglCreateContext(hDC))) { KillGLWindow(); MessageBox(NULL, "Can't Create A GL Rendering Context.", "ERROR", MB_OK | MB_ICONEXCLAMATION); return FALSE; } if(!wglMakeCurrent(hDC, hRC)) { KillGLWindow(); MessageBox(NULL, "Can't Activate The GL Rendering Context.", "ERROR", MB_OK | MB_ICONEXCLAMATION); return FALSE; } ShowWindow(hWnd, SW_SHOW); SetForegroundWindow(hWnd); SetFocus(hWnd); ReSizeGLScene(width, height); if (!InitGL()) { KillGLWindow(); MessageBox(NULL, "Initialization Failed.", "ERROR", MB_OK | MB_ICONEXCLAMATION); return FALSE; } return TRUE; } LRESULT CALLBACK WndProc( HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) { switch (uMsg) { case 
WM_ACTIVATE: { if (!HIWORD(wParam)) { active = TRUE; } else { active = FALSE; } return 0; } case WM_SYSCOMMAND: { switch (wParam) { case SC_SCREENSAVE: case SC_MONITORPOWER: return 0; } break; } case WM_CLOSE: { PostQuitMessage(0); return 0; } case WM_KEYDOWN: { keys[wParam] = TRUE; return 0; } case WM_KEYUP: { keys[wParam] = FALSE; return 0; } case WM_SIZE: { ReSizeGLScene(LOWORD(lParam), HIWORD(lParam)); return 0; } } return DefWindowProc(hWnd, uMsg, wParam, lParam); } int WINAPI WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) { MSG msg; BOOL done = FALSE; if (MessageBox(NULL, "Would You Like To Run In Fullscreen Mode?", "Start Fullscreen?", MB_YESNO | MB_ICONQUESTION)== IDNO) { fullscreen = FALSE; } if (!CreateGLWindow("NeHe's OpenGL Framework", 640, 480, 16, fullscreen)) { return 0; } while(!done) { if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { if (msg.message == WM_QUIT) { done = TRUE; } else { TranslateMessage(&msg); DispatchMessage(&msg); } } else { if (active) { if(keys[VK_ESCAPE]) { done = TRUE; } else { DrawGLScene(); SwapBuffers(hDC); } } if (keys[VK_F1]) { keys[VK_F1] = FALSE; KillGLWindow(); fullscreen = !fullscreen; if (!CreateGLWindow("NeHe's OpenGL FrameWork", 640, 480, 16, fullscreen)) { return 0; } } } } KillGLWindow(); return (msg.wParam); } It should be the same as what's on the site, but I suppose it's easier to see it all together, and maybe someone else will see something. I already had the latest driver from intel - I checked again to make sure. I have Win 7 so I tried disabling aero; that didn't help. I wasn't really able to find anything else on the intel pages that would help. I really appreciate all that help. "Can't Create A GL Rendering Context" Error Gixugif replied to Gixugif's topic in Graphics and GPU ProgrammingIntel 82945G Express And yeah, I made sure it was fully updated. 
OpenGL "Can't Create A GL Rendering Context" Error - Gixugif posted a topic in Graphics and GPU Programming

So, I just started trying to learn OpenGL, and the tutorial I'm following has me creating a simple window with a solid black background. When I try to run it, I get the error "Can't Create A GL Rendering Context." I did a search, and it would seem this has to do with your video driver. Because it's such a simple thing to do, though, I'd imagine it has to do with my coding, or maybe because I have integrated graphics? Since the code is basically just copied from the tutorial, it doesn't seem like that'd be the issue. If it helps, I'm using NeHe's tutorial.

- The method calling getData is in a different class. Yeah, that was it. Stupid little mistake; it didn't even occur to me that you couldn't do that for some reason. Thanks for the help, and very quick replies.

- It's defined in the Circular_Linked_List class (not the one included in the Java library), as getNode and deleteNode. I might as well include the definitions.
public Node getNode(int index) // returns a node from the list
{
    if (index > (size - 1))
        return null;
    Node temp = getFirst();
    for (int i = 0; i < index; i++)
    {
        temp = temp.next;
    }
    return temp;
}

public Node deleteNode(Node delNode) // deletes a node
{
    int index = checkNode(delNode);
    if (index == -1)
        System.out.println("Node doesn't exist");
    Node temp = getNode(index - 1);
    if (index == (size - 1))
    {
        Node del = last;
        temp.next = last.next;
        last = temp;
        size--;
        return del;
    }
    else
    {
        Node del = temp.next;
        temp.next = temp.next.next;
        temp.next.next = null;
        size--;
        return del;
    }
}

public int getData(Node myNode) // returns data of node
{
    return myNode.data;
}

Here's the error message after splitting it into four lines (it appears at getData):

AmStramGram.java:64: cannot find symbol
symbol  : method getData(Circular_Linked_List.Node)
location: class AmStramGram
getData(
^
1 error

Java won't recognize function - Gixugif posted a topic in For Beginners's Forum

delOrder[count] = getData(infantCircle.deleteNode(infantCircle.getNode(k)));

This should be all you need. All of the functions accept a Node as their parameter and return a Node also. The error I get says it "cannot find symbol" at getData(), and it's probably because for some reason it doesn't think it's getting a Node as a parameter. Help would be appreciated. Thanks!

Java error: cannot find symbol - Gixugif replied to Gixugif's topic in For Beginners's Forum

Oh, thank you. That did it. I didn't realize that variables declared in try...catch blocks were only available inside of them.
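The "cannot find symbol" in that thread comes down to calling an instance method without a receiver: getData lives on the list class, so it must be called through a reference to a list. A miniature reconstruction (the class names are stand-ins for the thread's Circular_Linked_List and friends, not the original code):

```java
import java.util.ArrayList;

class Node { int data; Node(int d) { data = d; } }

class CircularLinkedList {
    private final ArrayList<Node> nodes = new ArrayList<>();
    void append(int d) { nodes.add(new Node(d)); }
    Node getNode(int i) { return nodes.get(i); }
    int getData(Node n) { return n.data; }  // lives on the list class
}

public class Main {
    public static void main(String[] args) {
        CircularLinkedList infantCircle = new CircularLinkedList();
        infantCircle.append(42);
        // A bare getData(...) here would be "cannot find symbol":
        // the compiler only searches this class and its supertypes.
        int value = infantCircle.getData(infantCircle.getNode(0));
        System.out.println(value);
    }
}
```

Qualifying the call with the list reference, as in the last line, is exactly the fix the replies converge on.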
https://www.gamedev.net/profile/89870-gixugif/?tab=friends
Tutorial for: bootstrap-ajax

Requirements:

Need a quick and effortless way to Ajax-enable your Django and Bootstrap website? Look no further than bootstrap-ajax. I recently tried bootstrap-ajax with my Uarica: Idea Collaboration website. It is used to perform Likes, Follows, and comment removal on the website by end-users. It works wonderfully and was super easy to implement. This tutorial will be rather short, as it is just that easy to use this Bootstrap (from Twitter) add-on.

```html
{% with likes=idea.likes.all %}
<a class="btn ajax" data-{{likes|length}}<i class="{% if user in likes %}icon-star{% else %}icon-star-empty{% endif %}"></i></a>
{% endwith %}
```

This is source code from my Uarica website to generate the Like button; the parts which make the Ajax magic work are the ajax class and the data-replace-closest attribute. When the button is clicked, it uses the URL in the href, and the data that comes back replaces the closest A tag, which in this case is the link the user just clicked.

```python
@login_required
def like_idea(req, username, slug):
    # Source goes here to add/remove the like.
    return HttpResponse(json.dumps({'html': link, 'fragments': {'#ajax_message': msg}}),
                        mimetype='application/json')
```

Since my website is not open-sourced, I am limited in what I can actually display here. However, the logic to determine a like and add/remove it should be evident. What I want to focus on with this snippet is not the like system, but how to return a JSON response so that the bootstrap-ajax toolkit can use it. The fragments keyword is optional; it is used here to return an Ajax message informing the user of their recent action. The keyword you want to return is html: this is what will be replaced on the HTML page. In the case of this example, the A tag will be replaced by what is in the link variable.

Easy as pie to use, isn't it? Best of all, the example application for this bootstrap library is Django! You can check it out on their website.
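Stripped of the Django plumbing, the contract between the view and bootstrap-ajax is just the shape of that JSON body. A framework-free sketch of it (names here are illustrative, not part of the library):

```python
import json

def ajax_payload(html, fragments=None):
    # "html" is what bootstrap-ajax swaps in for the element matched by
    # data-replace-closest; "fragments" optionally maps CSS selectors
    # (e.g. "#ajax_message") to replacement markup.
    body = {"html": html}
    if fragments:
        body["fragments"] = fragments
    return json.dumps(body)

print(ajax_payload("<a href='#'>Unlike</a>", {"#ajax_message": "Liked!"}))
```

Any view that serializes a dict of this shape with the application/json content type will work with the markup shown above.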
There are so many methods of developing AJAX applications in Django; to see the other ones, check out the tutorial section.
http://pythondiary.com/tutorials/easy-pie-ajax-django-and-bootstrap.html
This article was originally posted as "SDL2: Empty Window" on 31st August 2013 at Programmer's Ranch. It has been slightly updated and now enjoys syntax highlighting. The source code for this article is available at the Gigi Labs BitBucket repository.

Yesterday's article dealt with setting up SDL2 in Visual Studio. Today we're going to continue what we did there by showing an empty window and allowing the user to exit by pressing the X at the top-right of the window.

It takes very little to show an empty window. Use the following code:

```cpp
#include <SDL.h>

int main(int argc, char ** argv)
{
    SDL_Init(SDL_INIT_VIDEO);

    SDL_Window * screen = SDL_CreateWindow("My SDL Empty Window",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 640, 480, 0);

    SDL_Quit();

    return 0;
}
```

We use SDL_Init() to initialise SDL, and tell it which subsystems we need; in this case video is enough. At the end, we use SDL_Quit() to clean up. It is possible to set up SDL_Quit() with atexit(), as the SDL_Quit() documentation shows.

We create a window using SDL_CreateWindow(). This is quite different from how we used to do it in SDL 1.2.x. We pass it the window caption, initial coordinates where to put the window (not important in our case), window width and height, and flags (e.g. fullscreen).

If you try and run the code, it will work, but the window will flash for half a second and then disappear. You can put a call to SDL_Delay() to make it persist for a certain number of milliseconds:

```cpp
SDL_Delay(3000);
```

Now, let's make the window actually remain until it is closed.
Use the following code:

```cpp
#include <SDL.h>

int main(int argc, char ** argv)
{
    bool quit = false;
    SDL_Event event;

    SDL_Init(SDL_INIT_VIDEO);

    SDL_Window * screen = SDL_CreateWindow("My SDL Empty Window",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 640, 480, 0);

    while (!quit)
    {
        SDL_WaitEvent(&event);

        switch (event.type)
        {
        case SDL_QUIT:
            quit = true;
            break;
        }
    }

    SDL_Quit();

    return 0;
}
```

The while (!quit) part is very typical in games and is in fact called a game loop. We basically loop forever, until the conditions necessary for quitting occur.

We use SDL_WaitEvent() to wait for an event (e.g. keypress) to happen, and we pass a reference to an SDL_Event structure. Another possibility is to use SDL_PollEvent(), which checks continuously for events and consumes a lot of CPU cycles (SDL_WaitEvent() basically just sleeps until an event occurs, so it's much more lightweight).

The event type gives you an idea of what happened. It could be a key press, mouse wheel movement, touch interaction, etc. In our case we're interested in the SDL_QUIT event type, which means the user clicked the window's top-right X button to close it.

We can now run this code, and the window remains until you close it. Wasn't that easy? You can use this as a starting point to start drawing stuff in your window. Have fun, and come back again for more tutorials! 🙂
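A natural first extension of the loop above is reacting to more event types; for example, letting the Escape key also quit the program. This sketch drops straight into the switch shown earlier (same SDL2 API):

```cpp
while (!quit)
{
    SDL_WaitEvent(&event);

    switch (event.type)
    {
    case SDL_QUIT:
        quit = true;
        break;
    case SDL_KEYDOWN:
        // event.key.keysym.sym holds the keycode of the pressed key
        if (event.key.keysym.sym == SDLK_ESCAPE)
            quit = true;
        break;
    }
}
```

The same pattern extends to SDL_MOUSEBUTTONDOWN, SDL_MOUSEWHEEL and the rest of the SDL_Event union.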
https://gigi.nullneuron.net/gigilabs/2015/11/03/
The lucene module wrapper takes care of the initialization and threading pitfalls in the JCC API. It automatically initializes the VM when the first object is retrieved from the facade module. It also attaches the current Python thread to the JNI VM environment when it hasn't been attached before.

The wrapper also solves the problem of the JVM installing its own signal handlers. By default the JVM overwrites Python's signal handlers in initVM(), thus causing some hard-to-debug problems. For example, SIGINT is no longer translated into a KeyboardInterrupt exception. The wrapper code restores all signal handlers to their former state. Only the JVM's SIGSEGV handler is left in place, because it logs and prints useful information when the JVM segfaults.

The classpath argument is optional. When no classpath is set, smc.lucene automatically adds classpath=lucene.CLASSPATH for you.

Replace code like:

```python
import lucene
lucene.initVM(classpath=lucene.CLASSPATH, vmargs='...')

def query_lucene(...):
    lucene.attachCurrentThread()
    query = lucene.BooleanQuery()
    ...
```

with:

```python
from smc.lucene import lucene
lucene.set_initargs(vmargs='...')

def query_lucene(...):
    query = lucene.BooleanQuery()
```

Lucene is automatically initialized when the BooleanQuery attribute is retrieved from the facade module. If PyLucene was already initialized, the facade module makes sure that the current thread is attached.

You must not assign Lucene attributes to objects which are shared across threads. If you have to share an object across threads, you must call lucene.attach() before you can use the object. Example:

```python
from smc.lucene import lucene as lucene_wrapper
from lucene import BooleanQuery

def query_lucene(...):
    lucene_wrapper.attach()
    query = BooleanQuery()
```

The attach method either initializes lucene or attaches the current thread.

semantics Kommunikationsmanagement GmbH, Viktoriaallee 45, D-52066 Aachen, Germany. Tel.: +49 241 89 49 89 29, eMail: info(at)semantics.
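The handler-restoration trick the wrapper performs can be sketched in a few lines of plain Python. This is illustrative only; the real module wraps JCC's initVM(), which is what actually clobbers the handlers:

```python
import signal

def call_with_restored_handlers(init, signums=(signal.SIGINT, signal.SIGTERM)):
    # Snapshot the Python-visible handlers, run the (handler-clobbering)
    # initializer, then put the snapshot back.
    saved = {s: signal.getsignal(s) for s in signums}
    try:
        return init()
    finally:
        for s, handler in saved.items():
            if handler is not None:  # None: handler was not set from Python
                signal.signal(s, handler)
```

Because SIGSEGV is deliberately excluded from such a snapshot by smc.lucene, the JVM's crash reporter keeps working.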
https://pypi.org/project/smc.lucene/
Patches for Zope's ZTUtils in order to make 'make_query' and 'make_hidden_input' more flexible and more reliable.

Project description

This package patches Zope's ZTUtils to enhance its make_query and make_hidden_input functions. Those functions are used to pass values across two requests, and allow the target request to get the value in approximately the same form (e.g. the same type) as it had in the source request, avoiding tedious fixups in the target request.

The standard Zope versions are quite limited. They support (binary) strings, integers, floats and DateTime.DateTime as elementary data types, and lists and namespaces (i.e. something with an items method) of those elementary types for structured values.

This package replaces Zope's ZTUtils.Zope.complex_marshal by a variant that correctly handles unicode and tuples. In addition, empty lists (and tuples) are retained. Tuples are marshalled as lists. This patch makes make_query and make_hidden_input more reliable.

From version 1.1 on, the application can register extensions to support additional elementary data types, or to support passing structured values which are more deeply nested. For details, please see the docstrings of class Extension and the functions register_extension and unregister_extension.

By default, the extension framework is used to register an extension handling None values. Note that this changes the behavior for None passing relative to pre-1.1 versions. Use unregister_extension("none") to keep the old behavior.

The extension framework is used to define the function register_json_extension. It uses JSON marshalling to represent subvalues in structured data which are too deeply nested to be handled by basic marshalling. Should you need customization for the JSON marshalling, take the implementation of register_json_extension as a blueprint for your own definition.
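The underlying idea, tagging each form variable with enough type information for the target request to rebuild the value, can be sketched as follows. This is illustrative only: Zope's real converter suffixes and the patched complex_marshal cover many more cases (dates, namespaces, and the extension framework described above):

```python
from urllib.parse import urlencode

def make_query_sketch(**form):
    # Tag each value so the receiving request can rebuild its type.
    pairs = []
    for name, value in form.items():
        if isinstance(value, bool):          # bool before int: bool is an int subtype
            pairs.append((name + ":boolean", str(value)))
        elif isinstance(value, int):
            pairs.append((name + ":int", str(value)))
        elif isinstance(value, (list, tuple)):
            for item in value:               # tuples travel as lists
                pairs.append((name + ":list", str(item)))
        else:
            pairs.append((name, str(value)))
    return urlencode(pairs)

print(make_query_sketch(page=3, tags=("a", "b")))
```

Note how a tuple produces one repeated ":list" pair per element; that repetition is what lets an empty or single-element sequence round-trip as a sequence rather than collapse to a scalar.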
Version history

1.1

- Extension framework to optionally support application-specific handling of new data types and deeper data structures.
- None is (by default) passed on as the object None, not as the string "None".

1.0

- Lets make_query and make_hidden_input reliably handle unicode, tuples and empty lists.
https://pypi.org/project/dm.zopepatches.ztutils/
If you work on a Microsoft network, chances are you're using Active Directory (AD). Active Directory stores information about network resources for a domain. This information requires specific authority for update, but is typically open to authenticated users for query. I developed this tool to allow for exactly these queries. It provides a list of known (to the network) domains, and allows the user to view groups, group membership, users, and user details without the need to dive into LDAP queries. In short, it's easy to use, quick, and provides more information than the typical user really needs.

This tool was developed using the .NET Framework 2.0 only. There are no interop assemblies or Win32 API calls involved in the ADSI operations; there is one Win32 API call, used for the About box animation. This is a .NET 2.0 Windows Forms application.

Many of the organizations that I work for utilize AD to manage application and resource access by groups. Unfortunately for me (and others), many of these organizations do not permit access to the Microsoft Active Directory tools, so verifying that a particular user has been given membership in a particular group can be a bit of a pain. Hence, this tool was born.

The UI itself is pretty straightforward: just a typical .NET Windows Forms application. The meat of the application is located in the ADLookup class. This class performs all of the AD activities used to populate the lists in the UI. Perusing the source will provide you with an introduction (possibly a rude one) to the world of AD searches in the .NET environment.

If you look at the image above, the arrows indicate that a selection in a list will trigger the automatic update of the list being pointed to. In addition, if a user is selected from the Users in Group list, that user will be selected in the Domain Users list as well, triggering subsequent updates.
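Those lists are populated by directory searches. As a rough sketch of the kind of query an ADLookup-style class issues (illustrative only: the LDAP path and filter here are placeholders, not the article's exact code):

```csharp
using System;
using System.DirectoryServices;

class GroupListing
{
    static void Main()
    {
        // Placeholder LDAP path; the real tool enumerates the domains
        // known to the network instead of hard-coding one.
        using (DirectoryEntry root = new DirectoryEntry("LDAP://DC=example,DC=com"))
        using (DirectorySearcher searcher = new DirectorySearcher(root))
        {
            searcher.Filter = "(objectCategory=group)";
            searcher.PropertiesToLoad.Add("cn");

            foreach (SearchResult result in searcher.FindAll())
                Console.WriteLine(result.Properties["cn"][0]);
        }
    }
}
```

Swapping the filter for "(objectClass=user)" or a memberOf clause yields the user-oriented lists the same way.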
Likewise, a selection in the Groups for User list will select that group in the Groups in Domain list, triggering subsequent updates. The numbers in parenthesis above each list indicate how many elements are in the list. This gives an at-a-glance answer to one of the most common AD questions: "How many users are in group xx?" To utilize the search, you need to select a search option from the Search menu, or you can right-click on either the Groups in Domain, Users in Group, or Users in Domain lists. When you select one, a pop-up window will display for you to enter your search data. The search data you enter is used as a Regular Expression to evaluate against the data in the selected list, so feel free to use .NET Regular Expressions to perform your fuzzy search. Only the first match of your search criteria is selected. When it is selected, the appropriate lists will be updated in their content as well. There are three methods in the ADLookup class that deserve a little attention here. These three methods are used to decode arrays of bytes that are returned from the AD query in the user properties collection. First, the easy one - SIDToString: SIDToString /// <summary> /// Convert a binary SID to a string. /// </summary> /// <param name="sidBinary">SID to convert.</param> /// <returns>String representation of a SID.</returns> private string SIDToString(byte[] sidBinary) { SecurityIdentifier sid = new SecurityIdentifier(sidBinary, 0); return sid.ToString(); } The best part of this method is that there's virtually nothing to converting a Windows SID (security identifier) bit array to a human readable string. The next one is a Registry lookup used to determine the currently active time bias on the system. This is a value used by the system to convert from Greenwich Mean Time (GMT) to the local time. /// <summary> /// Retrieve the current machine ActiveTimeBias. 
/// </summary>
/// <returns>an integer representing the ActiveTimeBias in hours.</returns>
private int GetActiveBias()
{
    // Open the TimeZone key
    RegistryKey key = Registry.LocalMachine.OpenSubKey(@"SYSTEM\CurrentControlSet" +
        @"\Control\TimeZoneInformation");
    if (key == null)
        return 0;
    // Pick up the time bias
    int Bias = (int)key.GetValue("ActiveTimeBias");
    // Close the parent key
    key.Close();
    // return the result adjusted for hours (instead of minutes)
    return (Bias / 60);
}

This value is always subtracted from GMT to arrive at the local time. Where I live, we use daylight savings time as well as standard time, so my ActiveTimeBias value will be either 7 (Pacific Daylight Time [PDT]) or 8 (Pacific Standard Time [PST]).

The last method we will visit here is called DecodeLoginHours. Within the properties collection for a user in AD, there exists the ability to limit the hours that a user can log in to a system. This property consists of an array of 21 bytes, where each bit represents a one hour span beginning with Midnight Sunday GMT. Note that I said GMT. This is where the ActiveTimeBias comes in. By performing the subtraction, we're able to re-align the bit-array to machine time. Obviously, this bit-array is not friendly to humans, so we decode it into something that we can easily read. Within the UI, it will show up in the Properties for User list as Login Hours: > Click to view <. Naturally, the user needs to click the item in the list to get the following display:

/// <summary>
/// Translate the hours into something readable.
/// </summary>
/// <param name="HoursValue">Hours to convert.</param>
/// <returns>A string indicating the hours of availability.</returns>
private string DecodeLoginHours(byte[] HoursValue)
{
    // See if we have anything
    if (HoursValue.Length < 1)
        return string.Empty;
    // Pick up the time zone bias
    int Bias = GetActiveBias();
    // Convert the HoursValue array into a character array of 1's and 0's.
    // That's a really simple statement for a bit of a convoluted process:
    // The HoursValue byte array consists of 21 elements (21 bytes) where
    // each bit represents a specified login hour in Universal Time
    // Coordinated (UTC). These bits must be reconstructed into an array
    // that we can display (using 1's and 0's) and associated correctly to
    // each of the hour increments by using the machine's current timezone
    // information.

    // Load the HoursValue byte array into a BitArray.
    // This little trick also allows us to read through the array from
    // left to right, rather than from right to left for each of the 21
    // elements of the Byte array.
    BitArray ba = new BitArray(HoursValue);
    // This is the adjusted bit array (accounting for the ActiveTimeBias)
    BitArray bt = new BitArray(168);
    // Actual index in target array
    int ai = 0;
    // Copy the source bit array to the target bit array with offset
    for (int i = 0; i < ba.Length; i++)
    {
        // Adjust for the ActiveTimeBias
        ai = i - Bias;
        if (ai < 0)
            ai += 168;
        // Place the value
        bt[ai] = ba[i];
    }
    // Time to construct the output
    int colbump = 0;
    int rowbump = 0;
    int rowcnt = 0;
    StringBuilder resb = new StringBuilder();
    resb.Append(" ------- Hour of the Day -------");
    resb.Append(Environment.NewLine);
    resb.Append(" M-3 3-6 6-9 9-N N-3 3-6 6-9 9-M");
    resb.Append(Environment.NewLine);
    resb.Append(_DayOfWeek[rowcnt]);
    for (int i = 0; i < bt.Length; i++)
    {
        // Put in a 0 or a 1
        resb.Append((bt[i]) ?
"1" : "0"); colbump++; rowbump++; // After 24 elements are written, start the next line if (rowbump == 24) { // Make sure we're not on the last element if (i < (bt.Length - 1)) { rowbump = 0; colbump = 0; resb.Append(Environment.NewLine); rowcnt++; resb.Append(_DayOfWeek[rowcnt]); } } else { // Insert a space after every 3 characters // unless we've gone to a new line if (colbump == 3) { resb.Append(" "); colbump = 0; } } } // Return the result return resb.ToString(); } LoginHours TimeBias This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) General News Suggestion Question Bug Answer Joke Rant Admin Math Primers for Programmers
Introduction

Many applications, such as navigation and radio frequency engineering, require a thorough understanding of geographic calculations. Some very natural questions seem to come up in a variety of disciplines: How far apart are two points on the Earth? What direction do I need to go to reach a particular point? If I go in a particular direction for a certain distance, where will I end up? Visualization makes these calculations immensely easier, and to visualize you need to come up with an accurate model.

As it turns out, there are two common approaches for modelling the surface of the Earth: spherical and ellipsoidal. Another, more accurate, model is called the geoid. The geoid is a complex surface where each point on the surface has the same gravitational potential. The shape of the geoid does not lend itself well to geometric calculations (and new research and measurements are constantly refining the geoid), so people generally stick with either the spherical model or the ellipsoid model. The spherical model can be very accurate under certain stringent conditions; however, the ellipsoid model is generally a very accurate model everywhere. You can think of either model as the mean sea level. So elevations, such as those on a contour map, are generally given as height above the ellipsoid.

Both spherical and ellipsoid models have symmetry that allows you to do calculations, but that symmetry also means that people have to agree on a common starting point for the model. The starting "reference" point is called a datum and there are many different datums. Transforming between datums can be very complicated depending on how you do it, and those transformations are just outside the scope of this article (maybe another article will cover datums). So, the rest of this article assumes that we are working within some particular datum and there is no need to transfer the coordinates in this datum into coordinates of another datum.
The good news is that we can solve a lot of geographic problems in the spherical model with a few simple mathematical tools. Another important aspect of the spherical model is that, in terms of visualization, it covers just about everything we need; the ellipsoid model can be visualized as a refinement of the spherical model. This approach works really well because, in terms of percentages, the Earth is very close to a sphere.

Overview

This article describes each model in some depth and provides solutions to the following common geographic problems:

- Spherical Model
  - Calculate path length given starting and ending coordinates (a pair of latitude/longitude).
  - Calculate path direction (azimuth) given starting and ending coordinates.
  - Calculate end point (latitude/longitude) given a starting point, distance, and azimuth.
  - Calculate the azimuth and elevation angle to a geostationary satellite (where to point a satellite dish).
  - Calculate the intersection of two paths given starting and ending coordinates for each path.
- Ellipsoid Model
  - Calculate path length along a meridian given starting and ending coordinates.
  - Calculate azimuth and path length (the geodetic forward and inverse problems).

Spherical Model

The spherical model is simple in mathematical terms because of its symmetry: every point on the surface is equidistant from the center; it's difficult to imagine more symmetry. This fact has a number of very helpful consequences that can be summed up in the following statement: Geodesic paths between two points on a sphere are great circles. A geodesic path is simply the shortest path between two points along the surface. Of course, it would be shorter to go straight through the Earth between the two points, but that is generally not possible for us surface dwellers. A great circle is just like every other circle with the additional constraint that its center lies at the center of the sphere.
The following table summarizes some of the mathematical tools that are available for analyzing the spherical model:

Table 1: Mathematical tools for the spherical model
- Law of Cosines for spherical triangles: cos (b) = cos (a) * cos (c) + sin (a) * sin (c) * cos (B)
- Law of Sines for spherical triangles: sin (A) / sin (a) = sin (B) / sin (b) = sin (C) / sin (c)
- Arc length of a circle: arc length = ( radius ) * ( angular distance (in radians) )

As with all mathematical formulas, it's important to understand where these formulas apply. For instance, lines of latitude are NOT great circles (except for the equator, which is a line of latitude and a great circle), and so the Laws of Cosines and Sines for spherical triangles do not apply to lines of latitude. Lines of latitude are in fact circles with their centers lying along the polar axis of the Earth, not necessarily at the center of the Earth. Lines of longitude are great circles.

Figure 1 depicts a generic great circle path.

Figure 1: A great circle path (arc b is the path and the angular separation between the end points).

For great circle paths, the start and end points as well as all the points along the path and the center of the Earth lie in the same plane. Use Figure 2 to visualize the geometry of a great circle path/plane.

Figure 2: A rectangular plane intersecting a great circle path and the center of the Earth (arc b is the path and the angular separation of the end points).

Problem 1A. Calculate path length given starting and ending coordinates

Calculating distance for a spherical model is very straightforward using the Law of Cosines and the Law of Sines for spherical triangles. You may remember the Law of Cosines and the Law of Sines for planar triangles; well, it's slightly different for spherical triangles. In Figure 2, the triangle with sides {a, b, c} and angles {A, B, C} is defined by two end points and the North Pole. Given the latitude and longitude for the end points, we want to solve for side b and angle A, so we start with the Law of Cosines for spherical triangles:
Given the latitude and longitude for the end points, we want to solve for side b and angle A, so we start with the Law of Cosines for spherical triangles: cos (b) = cos (a) * cos (c) + sin (a) * sin (c) * cos (B) where B = lon2 – lon1, and c = 90 – lat1, and a = 90 – lat2, and substituting these above leads to cos (b) = cos (90 – lat2) * cos (90 – lat1) + sin (90 – lat2) * sin (90 – lat1) * cos (lon2 – lon1), and solving for b b = arccos ( cos (90 – lat2) * cos (90 – lat1) + sin (90 – lat2) * sin (90 – lat1) * cos (lon2 – lon1) ) Because b equals the arccos of something, b is an angle. It’s actually the angular distance between the two points (see Figure 3), and for circles the arc length is: arc length =( radius )* ( angular distance (in radians)), so finally distance = ( Earth Radius ) * arccos ( cos (90 – lat2) * cos (90 – lat1) + sin (90 – lat2) * sin (90 – lat1) * cos (lon2 – lon1) ) If you use miles for the Earth Radius, you will get the distance in miles for a result; using kilometers yields the answer in kilometers, and so on. One (1st edition) reader provided an alternate distance formula that is better for small angles. I’m not sure who first proposed using this formula, but it seems to work well. dlon = lon2 – lon1 dlat = lat2 – lat1 a = (sin(dlat/2))^2 + cos(lat1) * cos(lat2) * (sin(dlon/2))^2 c = 2 * arcsin(min(1,sqrt(a))) distance = (Earth Radius) * c If there were a need to calculate the distance between two points that are very close together, it would be tempting to use the brute force approach of the Pythagorean Theorem. Essentially, it means calculating the {x,y,z} position of each point (in whatever units suit you) and using distance = sqrt(x*x + y*y +z*z). The beauty of this formula is it has no undefined arguments. Also, when you are dealing with small distances, the elevation difference between the end points may make a bigger difference than the curvature of the Earth. 
For instance, suppose you have the geographic positions and elevations of the terminals of a very long gondola (maybe from a GPS receiver) and you want to calculate how far it has to travel (and for some reason it's impossible to measure the length of cable used). You could use the following formulas to calculate {x,y,z} and plug them into the 3D distance formula:

x1 = (R + h1) * [ cos(lat1) * cos(lon1) ]
y1 = (R + h1) * [ sin(lat1) ]
z1 = (R + h1) * [ cos(lat1) * sin(lon1) ]

x2 = (R + h2) * [ cos(lat2) * cos(lon2) ]
y2 = (R + h2) * [ sin(lat2) ]
z2 = (R + h2) * [ cos(lat2) * sin(lon2) ]

dx = (x2 – x1); dy = (y2 – y1); dz = (z2 – z1)

distance = sqrt(dx*dx + dy*dy + dz*dz)

where h1 and h2 are the elevations (above the mean sea level) of the two end points and R is the radius of the Earth. This approach would be very accurate for distances less than 10 miles. The inaccuracy of this approach, obviously, is that it assumes that you can travel in a perfectly straight line between the points. For longer distances (actually for all paths to some degree), the curvature of the Earth would prevent this from happening.

Problem 1B. Calculate path direction (azimuth) given starting and ending coordinates

Using the results of Problem 1A and the Law of Sines, find the azimuth from (lat1, lon1) to (lat2, lon2). By the Law of Sines:

sin (A) / sin (a) = sin (B) / sin (b) = sin (C) / sin (c)

We want to calculate A, and we just found b in Problem 1A, so we have everything we need:

sin (A) = sin (a) * sin (B) / sin (b), or

A = arcsin ( sin (90 – lat2) * sin (lon2 – lon1) / sin (b) )

Once you find A, the azimuth can be determined based on the relationship of the end points. For Figure 2, A is equal to the azimuth. The sample project provides a method for determining the azimuth from A depending on the relationship of the end points.

Problem 1C. Calculate end point (latitude/longitude) given a starting point, distance, and azimuth

Given {lat1, lon1, distance, azimuth} calculate {lat2, lon2}.
First, work backwards (relative to Problem 1A) and find b from the distance by dividing by the Earth radius:

b = distance / (Earth Radius)

making sure distance and (Earth Radius) are in the same units so that we end up with b in radians. Knowing b, calculate a using

a = arccos( cos(b) * cos(90 – lat1) + sin(90 – lat1) * sin(b) * cos(azimuth) )

basically taking the arc cosine of the Law of Cosines for a. From a, we can get lat2, so the only item remaining is to figure lon2; we can get that if we know B. Calculate B using

B = arcsin( sin(b) * sin(azimuth) / sin(a) )

Then finally, lat2 = 90 – a and lon2 = B + lon1. Essentially, we just worked Problem 1A backwards.

Problem 1D. Calculate the azimuth and elevation angle to a geostationary satellite (where to point a satellite dish)

Many satellite-direct TV services use geostationary satellites to transmit their signals to peoples' homes. Geostationary satellites move in nearly circular orbits approximately above the equator. Because a geostationary satellite keeps pace with the rotation of the Earth, it appears fixed in space to anyone on the surface of the Earth (anyone who is not moving). So, the geometry of the orbit and the fixed nature of the users enables home users to point their antennas (small satellite dish) in a fixed direction. But, what is the direction to the satellite and how much tilt above the horizon should a dish have? Believe it or not, these are simple geographic calculations!

All of the geostationary satellites lie very nearly above the equator, so we know lat2. The question is: What is the longitude? Well, it varies depending on which satellite you want to reach. The service providers want to reach as many households as possible, so for the western hemisphere they want to position the satellite above the northern part of South America. From there, customers in most of North and South America can see the satellite and pay them for the service.
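Before tackling the satellite geometry, the Problem 1C recipe above translates directly into code. The following is my own sketch (the function name FindEndpoint and the use of the mean Earth radius of 6371.0 km are assumptions, not part of the sample project):

```cpp
#include <cmath>

// Sketch of Problem 1C: given a start point, a distance (km), and an
// azimuth (degrees clockwise from North), find the end point.
// Follows the article's derivation; not the sample project's code.
void FindEndpoint(double lat1, double lon1,            // degrees
                  double distanceKm, double azimuth,   // km, degrees
                  double& lat2, double& lon2)          // degrees (out)
{
    const double DE2RA = 0.01745329252;   // degrees to radians
    const double RA2DE = 57.2957795129;   // radians to degrees
    const double AVG_ERAD = 6371.0;       // mean Earth radius, km (assumed)

    double b = distanceKm / AVG_ERAD;     // angular distance, radians
    double c = (90.0 - lat1) * DE2RA;     // colatitude of the start point
    double az = azimuth * DE2RA;

    // Law of Cosines for side a (colatitude of the end point)
    double a = acos(cos(b) * cos(c) + sin(c) * sin(b) * cos(az));
    // Law of Sines for angle B (the longitude difference)
    double B = asin(sin(b) * sin(az) / sin(a));

    lat2 = 90.0 - a * RA2DE;
    lon2 = lon1 + B * RA2DE;
}
```

As with the azimuth code later in the article, a robust version would special-case paths that end at or cross a pole, where sin(a) approaches zero.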
The list below shows some of the longitudes of the major satellite TV stations:

The altitude of geostationary satellites is about 35,800 km or 22,300 miles.

Figure 3: Geostationary satellite orbit (over the Atlantic Ocean) and pointing geometry

Figure 4 shows a more detailed visualization of the path from the satellite to your receiver. For this type of problem, we will use a combination of spherical geometry and plane geometry. The plane geometry will be used to analyze the shaded triangle in Figure 4.

Figure 4: Detailed path geometry from a geostationary satellite to an Earth-based receiver (r = receiver for this figure, R = Earth Radius).

Okay, let's get to it. So, we have a receiver r at {lat1, lon1} and a transmitter at {lat2 = 0.0 (equator), lon2}, and we want to find the azimuth and elevation (tilt) angle. The azimuth is precisely the same one that we computed in Problem 1B with lat2 = 0.0 and lon2 = longitude of the geostationary satellite. I'll repeat those calculations here so you don't have to look up too much:

b = acos( cos(lon2 – lon1) * cos(lat1) )

and

A = arcsin ( sin (90 – lat2) * sin (lon2 – lon1) / sin (b) )

and from A you find the azimuth in the same way described above (in Problem 1B).

To find the elevation angle, we need to take a closer look at the triangle in Figure 4c. The angle of the triangle at point r is greater than 90 degrees (if the receiver can see the satellite). If the angle at r were 90 degrees, the satellite would be right on the horizon, so we will measure the tilt angle as the amount of angle r above 90 degrees. Using the Law of Cosines for plane triangles, we get:

(R+h)^2 = d^2 + R^2 – 2*R*d*cos(r)

so

r = arccos( (d^2 + R^2 – (R+h)^2) / (2*R*d) )

where R is the radius of the Earth, h the altitude of the geostationary satellite, and d the distance from the receiver to the satellite.
Simplifying and putting things in terms of values that we know:

d = sqrt( R*R + (R+h)*(R+h) – 2*R*(R+h)*cos(b) )

and

elevation angle (tilt up from horizon) = arccos( (R+h) * sin(b) / d )

Problem 1E. Calculate the intersection of two paths given the starting and ending coordinates for each path

Unfortunately, calculating the intersection of two great circle paths requires slightly more mathematical tools than what appear in Table 1. My goal in writing this article is to keep the math to a bare minimum and justify as much as possible with visualization rather than formulas; however, the easiest way (I know of) to calculate the intersection of two paths makes use of some basic vector analysis. For those who need it, the next paragraph provides a brief vector review.

First, a very brief review of vector arithmetic: A vector is a combined representation of a length and a direction. For example, the vectors v1 = {1, 1, 1} and v2 = {2, 2, 2} have the same direction (because the ratios of the components are the same); however, v2 is longer (because the values are larger). The length of a vector is equal to the square root of the sum of its components squared, for example: Length(v1) = sqrt(v1[0]*v1[0] + v1[1]*v1[1] + v1[2]*v1[2]). Vectors can be multiplied in two distinct ways: the scalar (dot) product or the vector (cross) product. The great thing about the vector product is that this form of multiplication produces another vector that is guaranteed to be perpendicular (okay, "normal" if you want to be picky) to the original two vectors. This fact can really help since Great Circle paths lie in a plane. If vectors are too confusing, just keep in mind that we have a simple formula that takes two vectors as inputs and produces another vector that is guaranteed perpendicular to both of the input vectors. Also keep in mind that it takes three distinct points to define a plane.
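Before moving on to the vector machinery, the Problem 1D formulas can be collected into one small function. This is my own sketch, not the sample project's code (the function name DishElevation is an assumption; the Earth radius and satellite altitude match the GEO constants used later in the article). It is only meaningful when the receiver can actually see the satellite:

```cpp
#include <cmath>

// Sketch of Problem 1D: elevation (tilt) angle, in degrees above the
// horizon, from a receiver at (latDeg, lonDeg) to a geostationary
// satellite parked at longitude satLonDeg.
double DishElevation(double latDeg, double lonDeg, double satLonDeg)
{
    const double DE2RA = 0.01745329252;
    const double RA2DE = 57.2957795129;
    const double R = 6371.0;     // mean Earth radius, km
    const double h = 35786.0;    // geostationary altitude, km

    // Angular separation b between the receiver and the sub-satellite point
    double b = acos(cos((satLonDeg - lonDeg) * DE2RA) * cos(latDeg * DE2RA));
    // Slant range d from the Law of Cosines for plane triangles
    double d = sqrt(R * R + (R + h) * (R + h) - 2.0 * R * (R + h) * cos(b));
    // Tilt above the horizon
    return acos((R + h) * sin(b) / d) * RA2DE;
}
```

A sanity check: a receiver directly under the satellite (b = 0) points straight up (90 degrees), and the elevation falls off as the receiver moves away from the sub-satellite point.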
Figure 5: Geometry of intersection (a) Earth and both planes and normals, (b) plane and normal of path A, (c) plane and normal of path B, (d) both planes, both normals and the intersection vector.

Let's take the paths one at a time. Path A is depicted in Figure 5b: the two points on the plane are the end points of the path. If we take the center of the Earth, and the starting and ending points, we have a plane. To define the plane, we need the normal vector, also known as the normal. We can easily get that by taking a vector from the center of the Earth to the starting point for path A (vector1), and a vector from the center of the Earth to the ending point of path A (vector2). Both vector1 and vector2 are "in the plane." So, if we take the vector product of vector1 and vector2, we get the normal for path A (normalA). Figure 5c is the same picture for path B. Repeating the procedure for path B, we get the normal for path B (normalB).

So, normalA is perpendicular to every vector in the plane of path A, and normalB is perpendicular to every vector in the plane of path B. The only vector that the two planes share is the intersection vector. Because the intersection is a vector in the plane of path A, and also in the plane of path B, it must be perpendicular to both normals. So, if we take the vector product of the normals, we are sure to get the intersection vector! Wow, I just realized how difficult it is to put that kind of analysis into written words.

For those who prefer to read code, the following snippet calculates the point where the intersection vector passes through the Earth's surface; in other words, a pair of latitude and longitude coordinates on opposite sides of the Earth. For the intersect problem, it gets a little tricky to keep track of all the coordinates, so let me explain.

{ lat1A, lon1A, lat1B, lon1B } are the end-points of path 1.
{ lat2A, lon2A, lat2B, lon2B } are the end-points of path 2.
{ lat3A, lon3A, lat3B, lon3B } are the points where the intersection vector passes through the surface of the Earth.

Keep in mind that the planes of the paths will always intersect, but the paths themselves may not intersect. In case of path-intersection, I put the path intersection point into { lat3A, lon3A } and the intersection coordinate on the opposite side of the Earth into { lat3B, lon3B }. For the case of non-intersecting paths, { lat3A, lon3A } and { lat3B, lon3B } are just points where the intersection vector of the planes touches the surface of the Earth, and neither is inside the path segments. The return value (bool) is true if the segments intersect and false if they do not intersect; either way, the plane intersection points are calculated.

namespace GEO
{
    const double PI = 3.14159265359;
    const double TWOPI = 6.28318530718;
    const double DE2RA = 0.01745329252;
    const double RA2DE = 57.2957795129;
    const double ERAD = 6378.135;
    const double ERADM = 6378135.0;
    const double AVG_ERAD = 6371.0;
    const double FLATTENING = 1.0/298.257223563;  // Earth flattening (WGS '84)
    const double EPS = 0.000000000005;
    const double KM2MI = 0.621371;
    const double GEOSTATIONARY_ALT = 35786.0;     // km
}

bool GCIntersectSegment(double lat1A, double lon1A, double lat1B, double lon1B,
                        double lat2A, double lon2A, double lat2B, double lon2B,
                        double& lat3A, double& lon3A, double& lat3B, double& lon3B)
{
    bool isInside = false;
    double v1[3], v2[3], v3a[3], v3b[3], n1[3], n2[3];
    double m;
    double d1 = ApproxDistance(lat1A, lon1A, lat1B, lon1B);
    double d2 = ApproxDistance(lat2A, lon2A, lat2B, lon2B);
    //
    // for path 1, setting up my 2 vectors, v1 is vector
    // from center of the Earth to point A, v2 is vector
    // from center of the Earth to point B.
    //
    v1[0] = cos(lat1A * GEO::DE2RA) * cos(lon1A * GEO::DE2RA);
    v1[1] = sin(lat1A * GEO::DE2RA);
    v1[2] = cos(lat1A * GEO::DE2RA) * sin(lon1A * GEO::DE2RA);
    v2[0] = cos(lat1B * GEO::DE2RA) * cos(lon1B * GEO::DE2RA);
    v2[1] = sin(lat1B * GEO::DE2RA);
    v2[2] = cos(lat1B * GEO::DE2RA) * sin(lon1B * GEO::DE2RA);
    //
    // n1 is the normal to the plane formed by the three points:
    // center of the Earth, point 1A, and point 1B
    //
    n1[0] = (v1[1]*v2[2]) - (v1[2]*v2[1]);
    n1[1] = (v1[2]*v2[0]) - (v1[0]*v2[2]);
    n1[2] = (v1[0]*v2[1]) - (v1[1]*v2[0]);
    //
    // for path 2, setting up my 2 vectors, v1 is vector
    // from center of the Earth to point A, v2 is vector
    // from center of the Earth to point B.
    //
    v1[0] = cos(lat2A * GEO::DE2RA) * cos(lon2A * GEO::DE2RA);
    v1[1] = sin(lat2A * GEO::DE2RA);
    v1[2] = cos(lat2A * GEO::DE2RA) * sin(lon2A * GEO::DE2RA);
    v2[0] = cos(lat2B * GEO::DE2RA) * cos(lon2B * GEO::DE2RA);
    v2[1] = sin(lat2B * GEO::DE2RA);
    v2[2] = cos(lat2B * GEO::DE2RA) * sin(lon2B * GEO::DE2RA);
    //
    // n2 is the normal to the plane formed by the three points:
    // center of the Earth, point 2A, and point 2B
    //
    n2[0] = (v1[1]*v2[2]) - (v1[2]*v2[1]);
    n2[1] = (v1[2]*v2[0]) - (v1[0]*v2[2]);
    n2[2] = (v1[0]*v2[1]) - (v1[1]*v2[0]);
    //
    // v3 is perpendicular to both normal 1 and normal 2, so
    // it lies in both planes, so it must be the line of
    // intersection of the planes. The question is: does it
    // go towards the correct intersection point or away from
    // it.
    //
    v3a[0] = (n1[1]*n2[2]) - (n1[2]*n2[1]);
    v3a[1] = (n1[2]*n2[0]) - (n1[0]*n2[2]);
    v3a[2] = (n1[0]*n2[1]) - (n1[1]*n2[0]);
    //
    // want to make v3a a unit vector, so first have to get the
    // magnitude, then divide each component by the magnitude
    //
    m = sqrt(v3a[0]*v3a[0] + v3a[1]*v3a[1] + v3a[2]*v3a[2]);
    v3a[0] /= m;
    v3a[1] /= m;
    v3a[2] /= m;
    //
    // calculating intersection points 3A & 3B. A & B are
    // arbitrary designations right now, later we make A
    // the one close to, or within, the paths.
    //
    lat3A = asin(v3a[1]);
    if ((lat3A > GEO::EPS) || (-lat3A > GEO::EPS))
        lon3A = asin(v3a[2]/cos(lat3A));
    else
        lon3A = 0.0;

    v3b[0] = (n2[1]*n1[2]) - (n2[2]*n1[1]);
    v3b[1] = (n2[2]*n1[0]) - (n2[0]*n1[2]);
    v3b[2] = (n2[0]*n1[1]) - (n2[1]*n1[0]);
    m = sqrt(v3b[0]*v3b[0] + v3b[1]*v3b[1] + v3b[2]*v3b[2]);
    v3b[0] /= m;
    v3b[1] /= m;
    v3b[2] /= m;
    lat3B = asin(v3b[1]);
    if ((lat3B > GEO::EPS) || (-lat3B > GEO::EPS))
        lon3B = asin(v3b[2]/cos(lat3B));
    else
        lon3B = 0.0;
    //
    // get the distances from the path endpoints to the two
    // intersection points. these values will be used to determine
    // which intersection point lies on the paths, or which one
    // is closer.
    //
    double d1a3a = ApproxDistance(lat1A, lon1A, lat3A, lon3A);
    double d1b3a = ApproxDistance(lat1B, lon1B, lat3A, lon3A);
    double d2a3a = ApproxDistance(lat2A, lon2A, lat3A, lon3A);
    double d2b3a = ApproxDistance(lat2B, lon2B, lat3A, lon3A);
    double d1a3b = ApproxDistance(lat1A, lon1A, lat3B, lon3B);
    double d1b3b = ApproxDistance(lat1B, lon1B, lat3B, lon3B);
    double d2a3b = ApproxDistance(lat2A, lon2A, lat3B, lon3B);
    double d2b3b = ApproxDistance(lat2B, lon2B, lat3B, lon3B);
    if ((d1a3a < d1) && (d1b3a < d1) && (d2a3a < d2) && (d2b3a < d2))
    {
        isInside = true;
    }
    else if ((d1a3b < d1) && (d1b3b < d1) && (d2a3b < d2) && (d2b3b < d2))
    {
        //
        // 3b is inside the two paths, so swap 3a & 3b
        //
        isInside = true;
        m = lat3A; lat3A = lat3B; lat3B = m;
        m = lon3A; lon3A = lon3B; lon3B = m;
    }
    else
    {
        //
        // figure out which one is closer to the path
        //
        d1 = d1a3a + d1b3a + d2a3a + d2b3a;
        d2 = d1a3b + d1b3b + d2a3b + d2b3b;
        if (d1 > d2)
        {
            //
            // Okay, we are here because 3b {lat3B,lon3B} is closer to
            // the paths, so we need to swap 3a & 3b.
the other case is
            // already the way 3a & 3b are organized, so no need to swap
            //
            m = lat3A; lat3A = lat3B; lat3B = m;
            m = lon3A; lon3A = lon3B; lon3B = m;
        }
    }
    //
    // convert the intersection points to degrees
    //
    lat3A *= GEO::RA2DE;
    lon3A *= GEO::RA2DE;
    lat3B *= GEO::RA2DE;
    lon3B *= GEO::RA2DE;
    return isInside;
}

Additional Code for Great Circle Calculations

double GCDistance(double lat1, double lon1, double lat2, double lon2)
{
    lat1 *= GEO::DE2RA;
    lon1 *= GEO::DE2RA;
    lat2 *= GEO::DE2RA;
    lon2 *= GEO::DE2RA;
    double d = sin(lat1)*sin(lat2) + cos(lat1)*cos(lat2)*cos(lon1 - lon2);
    return (GEO::AVG_ERAD * acos(d));
}

double GCAzimuth(double lat1, double lon1, double lat2, double lon2)
{
    double result = 0.0;

    INT32 ilat1 = (INT32)(0.50 + lat1 * 360000.0);
    INT32 ilat2 = (INT32)(0.50 + lat2 * 360000.0);
    INT32 ilon1 = (INT32)(0.50 + lon1 * 360000.0);
    INT32 ilon2 = (INT32)(0.50 + lon2 * 360000.0);

    lat1 *= GEO::DE2RA;
    lon1 *= GEO::DE2RA;
    lat2 *= GEO::DE2RA;
    lon2 *= GEO::DE2RA;

    if ((ilat1 == ilat2) && (ilon1 == ilon2))
    {
        return result;
    }
    else if (ilon1 == ilon2)
    {
        if (ilat1 > ilat2)
            result = 180.0;
    }
    else
    {
        double c = acos(sin(lat2)*sin(lat1) +
                        cos(lat2)*cos(lat1)*cos((lon2-lon1)));
        double A = asin(cos(lat2)*sin((lon2-lon1))/sin(c));
        result = (A * GEO::RA2DE);

        if ((ilat2 > ilat1) && (ilon2 > ilon1))
        {
        }
        else if ((ilat2 < ilat1) && (ilon2 < ilon1))
        {
            result = 180.0 - result;
        }
        else if ((ilat2 < ilat1) && (ilon2 > ilon1))
        {
            result = 180.0 - result;
        }
        else if ((ilat2 > ilat1) && (ilon2 < ilon1))
        {
            result += 360.0;
        }
    }
    return result;
}

As you can see, most of the code for calculating azimuths has to do with avoiding undefined arccos() situations and transforming A into Azimuth. The real calculations are only three lines long. For all these problems, keep in mind that the standard C math library expects radians for arguments to sin() and cos(), and that acos() and asin() produce results in radians, whereas latitude, longitude, and azimuth are usually expressed in degrees.
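One more practical note on those acos() and asin() calls: floating-point rounding can push their argument just outside [-1, 1] (for identical or near-antipodal points), which yields NaN. The sketch below is my own addition, not part of the sample project; it combines the domain clamp with the small-angle-friendly formula quoted in Problem 1A:

```cpp
#include <algorithm>
#include <cmath>

// Great circle distance (km) using the small-angle-friendly formula from
// Problem 1A, with the asin() argument clamped to its legal domain.
// My own sketch; the sample project's equivalent is GCDistance().
double SafeDistance(double lat1, double lon1, double lat2, double lon2)
{
    const double DE2RA = 0.01745329252;
    const double AVG_ERAD = 6371.0;   // mean Earth radius, km

    lat1 *= DE2RA; lon1 *= DE2RA;
    lat2 *= DE2RA; lon2 *= DE2RA;

    double sdlat = sin((lat2 - lat1) / 2.0);
    double sdlon = sin((lon2 - lon1) / 2.0);
    double a = sdlat * sdlat + cos(lat1) * cos(lat2) * sdlon * sdlon;
    // the clamp guards against a creeping above 1.0 from rounding
    double c = 2.0 * asin(std::min(1.0, sqrt(a)));
    return AVG_ERAD * c;
}
```

Unlike the plain Law of Cosines version, this formula keeps its precision when the two points are very close together, and it returns exactly zero (rather than NaN) for identical points.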
Ellipsoid Model

The geometry of the ellipsoid model is relatively simple to visualize by imagining cross sections of the Earth. A horizontal cross section (cutting through the Earth at the equator) would produce a circle of radius 6,378,137 meters. As the cross sections become more inclined with the equator, they become more elliptical (less circular). A vertical cross section that passes through the poles and the equator would be an ellipse with one axis (semi-major) radius equal to the equatorial radius (same as before) and the other axis (semi-minor, passing through the poles) with radius approximately 6,356,752 meters.

This geometry can be conveniently described by two numbers: the equatorial radius ( a ) and the flattening ( f ). To calculate the polar radius, you simply multiply the equatorial radius times one minus the flattening. For example:

polar radius = equatorial radius * ( 1 – f ), or b = a * ( 1 – f )

Since the early 1800s, surveyors and geodesists have tried to estimate the flattening. Table 3 shows some of the common ellipsoids. Modern satellite-based measurements are far more accurate than previous estimates.

The technical name for a minimum-distance path is geodesic. Some geodesics are natural and intuitive while others can be quite complex. For example, most third graders know that the minimum distance path on a flat plane is a straight line. On the surface of a sphere, geodesics lie along great circles (i.e. a circle with its center at the center of the sphere). An ellipsoid geodesic is a minimum-distance path on an ellipsoid, and you might be tempted to think it's a great-ellipse (an ellipse where the center of the ellipse is located at the center of the Earth). If you think it's a great-ellipse, you are right in some cases; in other cases, it is not an ellipse but an ill-defined oval of some sort.
This is unfortunate because even though solving for the arc length of an elliptical segment is painful, it's far easier than solving for (and visualizing) the differential geometry involved in ellipsoid geodesics. That's right, differential geometry... so much for our simple mathematical toolbox... it was fun while it lasted.

Problem 2A. Calculate path length along a meridian given starting and ending coordinates

Try to visualize a path that runs along the equator, so lat1 = 0.0 and lat2 = 0.0. For the sake of simplicity, visualize a relatively short path, say 2 degrees arc length. The arc is a circular arc, and calculating the distance and azimuth are trivial exercises using the toolbox from the spherical model (the azimuth is simply zero, of course). Now, take point 2 and move it north of point 1 until they lie along the same meridian (longitude), except now lat2 > 0.0, say 2 degrees. In this case, the arc lies along an ellipse that runs through the North and South Poles. The semi-major axis of the ellipse is the equatorial radius ( a ) of whatever ellipsoid we want to use, and the semi-minor axis is the polar radius, or as stated above b = a * (1 – f). This is the case we are dealing with for Problem 2A. In both cases, an equatorial arc and a polar arc, the center of the ellipse (remember a circle is also an ellipse) is located at the center of the Earth.

Now, just to complete the visualization, imagine points 1 and 2 at the same latitude again, this time let's say 60 degrees, and separated by 2 degrees of longitude. In this case, the geodesic arc connecting the two points still lies in a plane, but unfortunately that plane does not contain the center of the Earth. This is the break-down in symmetry that makes it a really tough problem (Problem 2B). Okay, more on that in 2B; back to the case of the arc that lies along a meridian. So, for this case we are dealing with an ellipse, an elliptical segment to be precise.
Unfortunately, there is no handy-dandy formula for the arc length of an ellipse, though many have tried and in the process come up with some terrific approximations. The basic approach for calculating the segment length of an ellipse is to break it down into tiny little segments, so tiny that each segment can be treated as a straight line. Then, take those straight lines and use them to form right triangles where the tiny segment is the hypotenuse; from the Pythagorean theorem we know that c*c = a*a + b*b. In this case, we are dealing with a segment, usually denoted as s, so a tiny piece of that segment should be called ds. So, the theorem is just ds^2 = dx^2 + dy^2. And the arc length is just the integral of all the tiny segment lengths (ds's). For those inclined to calculus, this is the integral we would like to compute:

s = Integral from x1 to x2 of sqrt(1 + (dy/dx)^2) dx

The following code snippet evaluates the above integral numerically. The first step is to calculate the limits of integration, then an appropriate x-increment is calculated, then for each increment the code adds the contribution of Ds = Dx * sqrt(1 + (Dy/Dx)^2).

double EllipseArcLength(double lat1, double lat2,
                        double a = GEO::ERAD, double f = GEO::FLATTENING)
{
    double result = 0.0;
    // how many steps to use
    INT32 steps = 100 + 100 * (INT32)(0.50 +
                  ((lat2 > lat1) ? (lat2 - lat1) : (lat1 - lat2)));
    steps = (steps > 4000) ? 4000 : steps;
    double snLat1 = sin(GEO::DE2RA * lat1);
    double snLat2 = sin(GEO::DE2RA * lat2);
    double twoF = 2 * f - f * f;
    // limits of integration
    double x1 = a * cos(GEO::DE2RA * lat1) / sqrt(1 - twoF * snLat1 * snLat1);
    double x2 = a * cos(GEO::DE2RA * lat2) / sqrt(1 - twoF * snLat2 * snLat2);
    double dx = (x2 - x1) / (double)(steps - 1);
    double x, dydx;
    double adx = (dx < 0.0) ? -dx : dx;   // absolute value of dx
    double a2 = a * a;
    double oneF = 1 - f;
    // now loop through each step adding up all the little hypotenuses
    for (INT32 i = 0; i < (steps - 1); i++) {
        x = x1 + dx * i;
        dydx = ((a * oneF * sqrt(1.0 - ((x + dx) * (x + dx)) / a2)) -
                (a * oneF * sqrt(1.0 - (x * x) / a2))) / dx;
        result += adx * sqrt(1.0 + dydx * dydx);
    }
    return result;
}

Normally, when dealing with an ellipse, you start with the lengths of the semi-major and semi-minor axes (a and b). From a and b, you can easily calculate e, and e can easily be transformed into f. The following transformations can be used to go between e, f, a, and b. These transformations would be useful if you wanted to calculate elliptical arc length in terms of e (the eccentricity of an ellipse), rather than f.

f = 1 - (b/a) ; b = a(1 - f) ; e^2 = 2f - f^2 ; f = 1 - sqrt(1 - e^2) ; e^2 = 1 - (b^2/a^2)

Problem 2B: Calculate azimuth and path length (the geodetic forward and inverse problems)

The geodetic forward and inverse problems are the same as Problems 1B and 1A, respectively, only for the ellipsoid instead of a sphere. The basic approach to solving the problems for the azimuth and the distance is similar to Problems 1A and 1B in that triangles on a curved surface are analyzed. The difference is that instead of looking at one single triangle, the path is broken into small (infinitesimal) parts using differential geometry; then, by applying theorems from calculus, you end up with elliptic integrals that can be solved for the sides or angles of the small curved triangles. Elliptic integrals are usually solved numerically by expanding the differential triangle elements using series approximations. Luckily for us programmers, these problems have been solved and coded. Usually, all we have to do is know how and when to apply the different formulations. The "exact" ellipsoid calculations are included in the sample project.
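As a cross-check on the meridian-arc computation, here is a small Python sketch that numerically integrates the meridian radius of curvature M(phi) = a(1 - e^2) / (1 - e^2 * sin^2(phi))^(3/2), a different but equivalent route to the x-y form used above (WGS84 constants assumed). One degree of latitude starting at the equator should come out near 110,574 meters:

```python
import math

a = 6378137.0                  # equatorial radius (assumed WGS84)
f = 1 / 298.257223563          # flattening (assumed WGS84)
e2 = 2 * f - f * f             # e^2 = 2f - f^2, as in the transformations above

def meridian_arc(lat1_deg, lat2_deg, steps=1000):
    """Midpoint-rule integration of the meridian radius of curvature."""
    p1, p2 = math.radians(lat1_deg), math.radians(lat2_deg)
    dphi = (p2 - p1) / steps
    s = 0.0
    for i in range(steps):
        phi = p1 + (i + 0.5) * dphi
        m = a * (1 - e2) / (1 - e2 * math.sin(phi) ** 2) ** 1.5
        s += m * dphi
    return s

print(round(meridian_arc(0.0, 1.0), 1))   # ~110574.4 meters per degree at the equator
```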
Code for Approximate Ellipsoid Distance Calculation

double ApproxDistance(double lat1, double lon1, double lat2, double lon2)
{
    lat1 = GEO::DE2RA * lat1;
    lon1 = -GEO::DE2RA * lon1;
    lat2 = GEO::DE2RA * lat2;
    lon2 = -GEO::DE2RA * lon2;
    double F = (lat1 + lat2) / 2.0;
    double G = (lat1 - lat2) / 2.0;
    double L = (lon1 - lon2) / 2.0;
    double sing = sin(G);
    double cosl = cos(L);
    double cosf = cos(F);
    double sinl = sin(L);
    double sinf = sin(F);
    double cosg = cos(G);
    double S = sing*sing*cosl*cosl + cosf*cosf*sinl*sinl;
    double C = cosg*cosg*cosl*cosl + sinf*sinf*sinl*sinl;
    double W = atan2(sqrt(S), sqrt(C));
    double R = sqrt(S*C) / W;
    double H1 = (3 * R - 1.0) / (2.0 * C);
    double H2 = (3 * R + 1.0) / (2.0 * S);
    double D = 2 * W * GEO::ERAD;
    return (D * (1 + GEO::FLATTENING * H1 * sinf*sinf*cosg*cosg
                   - GEO::FLATTENING * H2 * cosf*cosf*sing*sing));
}

Although the preceding code is called approximate, it is actually much more accurate than the great circle calculation.

Sample Project

The approximate method above can be found in a book by Jean Meeus called Astronomical Algorithms, a terrific book for programmers (he used a formula developed by Andoyer in a publication from 1950 called Annuaire du Bureau des Longitudes). I hope this article and the sample project are helpful for developers who wish to make accurate geographic calculations. Also, I want to thank CodeGuru for posting this article and Ted Yezek for taking his valuable time to discuss and explain so many geographic calculations and GIS algorithms. Please keep in mind that the code in this article and in the sample project was developed for the purpose of explaining the calculations; it has not been through the type of rigorous testing and validation process that professional software is subjected to before release. The "intersections" problem was the result of a reader's question, so if you have other geographic or geometric problems you would like included in the article, send me an e-mail.
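The ApproxDistance routine above transcribes almost line-for-line to Python. This is a sketch: the GEO constants are replaced by assumed WGS84 values, and it inherits the original's degenerate cases (S or W equal to zero when the two points coincide or lie on the same meridian at the same latitude):

```python
import math

ERAD = 6378137.0                 # assumed WGS84 equatorial radius, meters
FLATTENING = 1 / 298.257223563   # assumed WGS84 flattening

def approx_distance(lat1, lon1, lat2, lon2):
    """Andoyer's approximation, ported from ApproxDistance above."""
    lat1, lon1 = math.radians(lat1), -math.radians(lon1)
    lat2, lon2 = math.radians(lat2), -math.radians(lon2)
    F = (lat1 + lat2) / 2.0
    G = (lat1 - lat2) / 2.0
    L = (lon1 - lon2) / 2.0
    S = math.sin(G)**2 * math.cos(L)**2 + math.cos(F)**2 * math.sin(L)**2
    C = math.cos(G)**2 * math.cos(L)**2 + math.sin(F)**2 * math.sin(L)**2
    W = math.atan2(math.sqrt(S), math.sqrt(C))
    R = math.sqrt(S * C) / W
    H1 = (3 * R - 1.0) / (2.0 * C)
    H2 = (3 * R + 1.0) / (2.0 * S)
    D = 2 * W * ERAD
    return D * (1 + FLATTENING * H1 * math.sin(F)**2 * math.cos(G)**2
                  - FLATTENING * H2 * math.cos(F)**2 * math.sin(G)**2)

# Sanity check: one degree of longitude along the equator is a * pi / 180.
print(round(approx_distance(0.0, 0.0, 0.0, 1.0), 2))   # 111319.49
```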
I used the following references to write this article:

Astronomical Algorithms by Jean Meeus
Explanatory Supplement to the Astronomical Almanac, edited by P. Kenneth Seidelmann
Satellite Communications by Dennis Roddy

Enjoy.
https://www.codeguru.com/cplusplus/geographic-distance-and-azimuth-calculations/
Most versions of Linux-UNIX also support the mmap system call, which can be used to map a file to a process's virtual memory address space. In many ways mmap is more flexible than its shared memory system call counterpart. Once a mapping has been established, standard system calls rather than specialized system calls can be used to manipulate the shared memory object (Table 8.11). Unlike memory, the contents of a file are nonvolatile and will remain available even after a system has been shut down (and rebooted).

Table 8.11. Summary of the mmap System Call.

The mmap system call requires six arguments. The first, start, is the address for attachment. As with the shmat system call, this argument is most often set to 0, which directs the system to choose a valid attachment address. The number of bytes to be attached is indicated by the second argument, length. While the call will allow the user to specify a number of bytes for length that will extend beyond the end of the mapped file, an actual reference to these locations will generate an error (a SIGBUS signal). The third argument, prot, is used to set the type of access (protection) for the segment. The specified access should not be in conflict with the access permissions for the associated file descriptor. The prot argument uses the defined constants found in the include file. These constants are shown in Table 8.12.

Table 8.12. Defined Protection Constants.

Constants can be ORed to provide different combinations of access. The manual page for mmap notes that on some systems PROT_WRITE is implemented as PROT_READ | PROT_WRITE, and PROT_EXEC as PROT_READ | PROT_EXEC. In any case, PROT_WRITE must be set if the process is to write to the mapped segment. The fourth argument, flags, specifies the type of mapping. Mapping types are also indicated using defined constants from the include file. These constants are shown in Table 8.13.

Table 8.13. Defined Mapping Type Constants.
The first two constants specify whether writes to the shared memory will be shared with other processes or be private. MAP_SHARED and MAP_PRIVATE are mutually exclusive. When specifying MAP_PRIVATE, a private copy is not generated until the first write to the mapped object has occurred. These specifications are retained across a fork system call but not across a call to exec. MAP_FIXED directs the system to explicitly use the address value in start. When MAP_FIXED is indicated, the values for start and length should be a multiple of the system's page size. Specifying MAP_FIXED greatly reduces the portability of a program, and its use is discouraged. When specifying the flags argument, either MAP_SHARED or MAP_PRIVATE must be indicated. Linux also supports the flags shown in Table 8.14.

Table 8.14. Linux-Specific Defined Mapping Type Constants.

The fifth argument, fd, is a valid open file descriptor. Once the mapping is established, the file can be closed. The sixth argument, offset, is used to set the starting position for the mapping. If the mmap system call is successful, it returns a reference to the mapped memory object. If the call fails, it returns the defined constant MAP_FAILED (which is actually the value -1 cast to a void *). A failed call will set the value in errno to reflect the error encountered. The errors for mmap are shown in Table 8.15.

Table 8.15. mmap Error Messages.

While the system will automatically unmap a region when a process terminates, the system call munmap, shown in Table 8.16, can be used to explicitly unmap pages of memory.

Table 8.16. Summary of the munmap System Call.

The munmap system call is passed the starting address of the memory mapping (argument start) and the size of the mapping (argument length). If the call is successful, it returns a value of 0. Future references to unmapped addresses generate a SIGSEGV signal. If the munmap system call fails, it returns the value -1 and sets the value in errno to EINVAL.
The interpretation of munmap-related errors is given in Table 8.17.

Table 8.17. munmap Error Messages.

The msync system call is used in conjunction with mmap to synchronize the contents of mapped memory with physical storage (Table 8.18). A call to msync will cause the system to write all modified memory locations to their associated physical storage locations. If MAP_SHARED is specified with mmap, the storage location is a file. If MAP_PRIVATE is specified, then the storage location is the swap area.

Table 8.18. Summary of the msync Library Function.

The start argument for msync specifies the address of the mapped memory; the length argument specifies the size (in bytes) of the memory. The flags argument directs the system to take the actions shown in Table 8.19.

Table 8.19. Defined Flag Constants for msync.

If msync fails, it returns a -1 and sets errno (Table 8.20). If the call is successful, it returns a value of 0.

Table 8.20. msync Error Messages.

Program 8.6 demonstrates the use of the mmap system call.

Program 8.6 Using mmap.
File : p8.6.cxx

    /* Using the mmap system call */
    #define _GNU_SOURCE
 +  #include
    #include
    #include
    #include
    #include
 10 #include
    #include
    #include
    #include
    #include
 +  using namespace std;
    int main(int argc, char *argv[]){
        int fd, changes, i, random_spot, kids[2];
        struct stat buf;
 20     char *the_file, *starting_string="ABCDEFGHIJKLMNOPQRSTUVWXYZ";
        if (argc != 3) {
            cerr << "Usage " << *argv << " file_name #_of_changes" << endl;
            return 1;
        }
 +      if ((changes = atoi(argv[2])) < 1) {
            cerr << "# of changes < 1" << endl;
            return 2;
        }
        if ((fd = open(argv[1], O_CREAT | O_RDWR, 0666)) < 0) {
 30         perror("file open");
            return 3;
        }
        write(fd, starting_string, strlen(starting_string));
        // Obtain size of file
 +      if (fstat(fd, &buf) < 0) {
            perror("fstat error");
            return 4;
        }
        // Establish the mapping
 40     if ((the_file = (char *) mmap(0, (size_t) buf.st_size,
                                      PROT_READ | PROT_WRITE, MAP_SHARED,
                                      fd, 0)) == (void *) -1) {
            perror("mmap failure");
            exit(5);
 +      }
        for (i = 0; i < 2; ++i)
            if ((kids[i] = (int) fork()) == 0)
                while (1) {
                    cout << "Child " << getpid() << " finds: "
                         << the_file << endl;
 50                 sleep(1);
                }
        srand((unsigned) getpid());
        for (i = 0; i < changes; ++i) {
            random_spot = (int) (rand() % buf.st_size);
 +          *(the_file + random_spot) = '*';
            sleep(1);
        }
        cout << "In parent, done with changes" << endl;
        for (i = 0; i < 2; ++i)
 60         kill(kids[i], 9);
        cout << "The file now contains: " << the_file << endl;
        return 0;
    }
If problems are encountered, an appropriate error message is generated and the program exits. If the command-line arguments are good, the program opens, for reading and writing, the file whose name was passed as the first command-line argument. As the O_CREAT flag is specified, if the file does not exist, it will be created. Next, the string "ABCDEFGHIJKLMNOPQRSTUVWXYZ" is written to the first part of the file. Following this, the fstat call is used to determine the size of the file. In our example, if we start with an empty file, the size of the file is actually the length of the string that is written to the file. However, this would not be true if the file contained previous data. In many cases we will want to know the full size of the file to be mapped; fstat provides us with a handy way of determining the file's size (it is returned as part of the stat structure). The call to mmap (line 40) establishes the actual mapping. We allow the system to pick the address and indicate that we want to be able to read from and write to the mapped memory region. We also specify that the region be marked as shared, be associated with the open file descriptor fd, and have an offset (starting position within the file) of 0. Two child processes are then generated. Each child process displays the contents of the memory-mapped file using the the_file reference which was returned from the initial call to mmap. It is important to note that a call to read was not needed. The child process then sleeps one second and repeats the same sequence of activities until a terminating signal is received. The parent process loops for the number of times specified by the second command-line argument. Within this loop the parent process randomly picks a memory-mapped location and changes it to an asterisk ( * ). Again, this is done by direct reference to the location using the the_file reference; notice no write function is used.
Between changes, the parent sleeps one second to slow down the processing sequence. Once the parent process is done, it displays the final contents of the memory-mapped file, removes the child processes, and exits. A sample run of the program is shown in Figure 8.10.

Figure 8.10 A sample run of Program 8.6.

linux$ p8.6 demo 7
Child 16592 finds: ABCDEFGHIJKLMNOPQRSTUVWXYZ
Child 16593 finds: ABCDEFGHIJKLMNOPQRSTUVWXYZ
Child 16592 finds: ABCDEFG*IJKLMNOPQRSTUVWXYZ
Child 16593 finds: ABCDEFG*IJKLMNOPQRSTUVWX*Z
Child 16592 finds: ABCDEFG*IJKLMNOPQRSTUVWX*Z
Child 16593 finds: ABCDEF**IJKLMNOPQRSTUVWX*Z
Child 16592 finds: ABCDEF**IJKLMNOPQRSTUVWX*Z
Child 16593 finds: ABCDEF**IJ*LMNOPQRSTUVWX*Z
Child 16592 finds: ABCDEF**IJ*LMNOPQRSTUVWX*Z
Child 16593 finds: ABCDEF**I**LMNOPQRSTUVWX*Z
Child 16592 finds: ABCDEF**I**LMNOPQRSTUVWX*Z
Child 16593 finds: ABCDEF**I**LMNOPQRS*UVWX*Z
Child 16592 finds: ABCDEF**I**LMNOPQRS*UVWX*Z
Child 16593 finds: ABCDEF**I**L*NOPQRS*UVWX*Z
Child 16592 finds: ABCDEF**I**L*NOPQRS*UVWX*Z
In parent, done with changes
The file now contains: ABCDEF**I**L*NOPQRS*UVWX*Z

In this invocation the child processes, PIDs 16592 and 16593, initially find the mapped location to contain the unmodified starting string. A second check of the mapped location shows that each child now sees the string with a single '*' replacing the letter H. Additional passes reveal further modifications. When all of the processes have terminated, we will find that the file demo contains the fully modified string.
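For comparison, Python's mmap module wraps the same system call, so the core of Program 8.6 can be sketched in a few lines (temporary file, single process):

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"ABCDEFGHIJKLMNOPQRSTUVWXYZ")
size = os.fstat(fd).st_size            # like the fstat call in Program 8.6

m = mmap.mmap(fd, size)                # a shared read/write mapping on Unix
m[7:8] = b"*"                          # direct store, like *(the_file + spot) = '*'
m.flush()                              # msync: push the change to the file

with open(path, "rb") as f:
    content = f.read()                 # the change is visible without write()
print(content)                         # b'ABCDEFG*IJKLMNOPQRSTUVWXYZ'

m.close()
os.close(fd)
os.unlink(path)
```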
https://flylib.com/books/en/1.23.1/using_a_file_as_shared_memory.html
I want to write a function. I have tried so far, and for this function:

pl <- function(x, i, ...) {
  plot(table(x[, i]))
}

Try this:

uebersicht <- function(daten, ask = TRUE) {
  par(ask = ask)
  plot.numeric <- function(x, nm, ...) boxplot(x, main = nm, ...)
  plot.factor <- function(x, nm, ...) barplot(table(x), main = nm, ...)
  mapply(function(x, nm) {
    if (is.factor(x) || is.numeric(x)) {
      plot(x, nm)
    } else {
      warning("unsupported type: ", sQuote(class(x)))
    }
  }, daten, names(daten))
}

The reason I put the definitions of plot.numeric and plot.factor within the function (instead of outside it as in your example) is that there are identically-named functions in the graphics namespace, and unless you intend the default behavior to change for every use even outside of uebersicht, this remapping should really only be done during the function call.

Sample usage:

set.seed(42)
par(mfrow = c(1, 4))
weirdData <- data.frame(
  num_type = rexp(100),
  int_type = sample(100, size = 100, replace = TRUE),
  fac_type = sample(LETTERS[1:6], size = 100, replace = TRUE),
  char_type = sample(LETTERS[1:6], size = 100, replace = TRUE)
)
weirdData$char_type <- as.character(weirdData$char_type)
str(weirdData)
# 'data.frame': 100 obs. of 4 variables:
# $ num_type : num 0.1983 0.6609 0.2835 0.0382 0.4732 ...
# $ int_type : int 40 68 78 19 3 14 69 94 56 61 ...
# $ fac_type : Factor w/ 6 levels "A","B","C","D",..: 4 1 5 1 4 4 4 5 1 2 ...
# $ char_type: chr "A" "E" "C" "E" ...

uebersicht(weirdData, ask = FALSE)
### ...snip... lots of output, should really be filtered or
### perhaps I should use 'invisible(mapply(...))'
# [1] "unsupported type: 'character'"

(Yes, I know the image has an empty fourth block ... I left it there because the naïve intention when calling the function was to plot four columns. *shrug*)

The use of par(mfrow = c(1, 4)) was merely for demonstration here on SO, as is the addition of the ask option to the function.
https://codedump.io/share/0B1eZNU9mVDH/1/name-biiyuiiooi
The Python pickle library lets us save any Python object to a binary file and then load that object back from the file. In this tutorial, we will introduce how to use pickle to save and load a Python object.

Import the library

import pickle

Create a Python class

class Car:
    # Constructor to initialize
    def __init__(self, price, color):
        self.price = price
        self.color = color

    # function to print car price and color
    def display(self):
        print('This car is', self.color, self.price)

We can create a Python object from class Car, then save it to a binary file.

Create a Python object

car_obj = Car(12345, 'red')
car_obj.display()

The display result is:

This car is red 12345

Save this object to a binary file

with open("binary_car.bin", "wb") as f:
    pickle.dump(car_obj, f)

If you find TypeError: file must have a 'write' attribute, you can read this tutorial: Fix Python Pickle TypeError: file must have a 'write' attribute Error - Python Tutorial

Load this object from a binary file

with open("binary_car.bin", "rb") as f:
    car_obj_2 = pickle.load(f)

If you find TypeError: file must have 'read' and 'readline' attributes, you can refer to this tutorial: Fix Python Pickle Load TypeError: file must have 'read' and 'readline' attributes Error - Python Tutorial

Print the loaded object

print(type(car_obj_2))
car_obj_2.display()

From the print result, we can see that car_obj_2 is the same as car_obj. Both print:

<class '__main__.Car'>
This car is red 12345

So we can save a Python object to a file and load it from the file successfully.
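The same round trip also works entirely in memory, without a file, via pickle.dumps and pickle.loads, which is handy for quick checks:

```python
import pickle

class Car:
    def __init__(self, price, color):
        self.price = price
        self.color = color

# Serialize to a bytes object instead of a file.
data = pickle.dumps(Car(12345, 'red'), protocol=pickle.HIGHEST_PROTOCOL)

car = pickle.loads(data)               # reconstruct the object from bytes
print(car.color, car.price)            # red 12345
```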
https://www.tutorialexample.com/best-practice-to-save-and-load-python-object-from-a-file-with-pickle-python-tutorial/
Dynamic InstanceName

THE SU66LER May 12, 2009 10:40 AM

I have a flash map that contains 51 movieClips, one for each state. A sample instanceName for a state (Alabama) would be: S_01

I have an XML record that contains specific info for each state:

<stateID>01</stateID>

I am trying to link the stateIDs in the XML record to the state movieClips that make up the map.

var stateInstanceName = 'S_' + stateID;

I was just trying to do a simple trace command to see if I could pull the _width of each state clip to see if I was linked up:

trace(mapInstance.stateInstanceName._width)

This is not working for me. I'm not even sure it's the right way to go about it. Any guidance would be great! Thanks

1. Re: Dynamic InstanceName
Ned Murphy May 12, 2009 10:55 AM (in response to THE SU66LER)

Try:

trace(mapInstance[stateInstanceName]._width);

the value is merely a string, so you need to use the array notation to have it interpreted as an object.

2. Re: Dynamic InstanceName
grover970 May 12, 2009 12:11 PM (in response to Ned Murphy)

Spoke too soon. So, this worked...

trace(content_mc.nAmerica[stateInstance]._width);

But if I try to tween it:

var mcTween:Tween = new Tween( content_mc.nAmerica[stateInstance], "_alpha", Strong.easeOut, 0, 100, 2, true );

or say I want to change the color of certain states:

changeColor = new Color(content_mc.nAmerica[stateInstance]);
changeColor.setRGB(0x00FF66);

nothing happens.

3. Re: Dynamic InstanceName
Ned Murphy May 12, 2009 1:02 PM (in response to grover970)

It may be a problem with mis-naming something somewhere along the line. If I take the code you just provided and recreate a scenario with it, with S_01 buried within mc's per what your code implies, it works fine...
import mx.transitions.Tween;
import mx.transitions.easing.Strong;

var stateInstance = "S_01";
trace(content_mc.nAmerica[stateInstance]._width);
var mcTween:Tween = new Tween( content_mc.nAmerica[stateInstance], "_alpha", Strong.easeOut, 0, 100, 2, true );
var changeColor = new Color(content_mc.nAmerica[stateInstance]);
changeColor.setRGB(0x00FF66);

4. Re: Dynamic InstanceName
grover970 May 12, 2009 1:56 PM (in response to Ned Murphy)

Yeah, this is weird. I am using that exact code. I also created a similar scenario in a separate file where everything worked fine. The trace code works just fine. But when I try to tween it or setRGB, it seems to ignore it. I think I'll just have to come back to it tomorrow and look at it with fresh eyes.

5. Re: Dynamic InstanceName
Ned Murphy May 12, 2009 3:27 PM (in response to grover970)

If it's not a naming issue, it might be a location issue--the code needs to be in the presence of the object it's trying to control. One other oddity that sometimes comes into play... if things are timeline tweened into the place where the code is, and the initial keyframe doesn't have an instance name, or isn't the same name, the object at the end of the tween inherits that same name, even if you have it entered in the properties panel.

6. Re: Dynamic InstanceName
grover970 May 13, 2009 7:41 AM (in response to Ned Murphy)

Thanks Ned, I came back to it with fresh eyes and a new perspective this morning and found the issue. It was in fact a naming issue. If I had just looked at a line of code about 10 lines earlier, I would have seen this line:

var maps = content_mc.nAmerica.duplicateMovieClip("tabItem" + i, i, verticalData);

therefore this:

content_mc.nAmerica[stateInstance]

should have been this:

maps[stateInstance]

Thanks for your help
Ben

7. Re: Dynamic InstanceName
Ned Murphy May 13, 2009 8:16 AM (in response to grover970)

You're welcome
https://forums.adobe.com/thread/431846
Misery Posted June 14, 2012 (edited)

Hello, I am writing a library for numerical computations. The whole thing is written in C++. However, there are some interesting open source programs I would like to integrate with my library, for example the Triangle mesh generator. For the problem, let's say I have three files:

TypesAndDefines.h (ANSI C++ syntax)
Triangle.h (ANSI C syntax)
Triangle.c (ANSI C syntax) - source file for Triangle.h

In general, using C files from a C++ program works fine, but what I tend to do is:
- I have abstract data types in the C++ file TypesAndDefines.h, setting for example:

[code]
#ifdef x64
typedef double REAL;
#else
typedef float REAL;
#endif
[/code]

- there are also namespaces defined in this file
- I need those defined typedefs in Triangle.h and Triangle.c, but I cannot include it in a C-syntax file.

The only working solution I have found is to make a C-syntax-compatible version of the TypesAndDefines.h file and include it. However, it would be better for the project to use only one file with definitions. Is there any good way to do this? I can of course cancel the C++ TypesAndDefines.h file, but then I also cancel using a general namespace for my library, which is quite elegant and convenient for the end user of the lib.

Thanks in advance and regards,
Misery
https://www.gamedev.net/forums/topic/626343-how-to-include-c-files-in-c-program-properly/
This is a container for attribute editors, used to group them visually in the attribute form if it is set to the drag and drop designer.

#include <qgsvectorlayer.h>

This is a container for attribute editors, used to group them visually in the attribute form if it is set to the drag and drop designer. Definition at line 140 of file qgsvectorlayer.h.

Creates a new attribute editor container. Definition at line 149 of file qgsvectorlayer.h.

Destructor. Definition at line 155 of file qgsvectorlayer.h.

Add a child element to this container. This may be another container, a field or a relation. Definition at line 3964 of file qgsvectorlayer.cpp.

Get a list of the child elements of this container. Definition at line 192 of file qgsvectorlayer.h.

Traverses the element tree to find any element of the specified type. Definition at line 3974 of file qgsvectorlayer.cpp.

Returns whether this container is going to be rendered as a group box. Definition at line 185 of file qgsvectorlayer.h.

Determines if this container is rendered as a collapsible group box or a tab in a tab widget. Definition at line 178 of file qgsvectorlayer.h.

Change the name of this container. Definition at line 3969 of file qgsvectorlayer.cpp.

Will serialize this container's information into a QDomElement for saving it in an XML file. Implements QgsAttributeEditorElement. Definition at line 3952 of file qgsvectorlayer.cpp.
http://www.qgis.org/api/classQgsAttributeEditorContainer.html
Issues: Multiple python3 compatibility issues

It seems like a print statement has gotten in that is not py3 compatible.

from sphinx import apidoc
File "/home/jenkins/workspace/gate-taskflow-python33/.tox/py33/lib/python3.3/site-packages/sphinx/apidoc.py", line 56
    print 'Would create file %s.' % fname
                                ^
SyntaxError: invalid syntax
Complete output from command python setup.py egg_info:
Traceback (most recent call last):

$ .tox/py33/bin/pip freeze
Babel==1.3
Jinja2==2.7.3
Mako==1.0.0
MarkupSafe==0.23
Pygments==1.6
SQLAlchemy==0.9.7
Sphinx==1.2.3
alembic==0.6.6
amqp==1.4.6
anyjson==0.3.3
argparse==1.2.1
coverage==3.7.1
decorator==3.4.0
discover==0.4.0
docutils==0.12
extras==0.0.3
fixtures==0.3.16
flake8==2.1.0
futures==2.1.6
hacking==0.9.2
iso8601==0.1.10
jsonschema==2.4.0
kazoo==2.0
kombu==3.0.21
mccabe==0.2.1
mock==1.0.1
networkx==1.9
oslosphinx==2.2.0.0a3
pbr==0.10.0
pep8==1.5.6
psycopg2==2.5.4
pyflakes==0.8.1
python-mimeparse==0.1.4
python-subunit==0.0.21
pytz==2014.7
six==1.7.3
stevedore==0.15
testrepository==0.0.20
testtools==0.9.39
zake==0.1.5

Another example that shows even more py 3.x compatibility issues.

How did you install Sphinx in that virtualenv? Sphinx 1.2.x has to be run through 2to3 for Python 3, which is done automatically by the setup.py build step. Note that if you run setup.py build on 2.x, the files created under build/ must be removed before running on 3.x, or they will not be replaced by distutils.

Here is an example of how it fails (nothing special was done to install Sphinx; just pip).

Thanks, I can reproduce that problem. pip is installing the wheel, which should already be 2to3-converted. Apparently there was some problem generating the wheels during release. I'll recreate and reupload the 3.x wheel.

Should be fixed now, please retry.

Seems fixed when run locally; let me retry OpenStack's CI jobs to verify.

Seems all good, thanks!

Issue #1555 was marked as a duplicate of this issue.
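For reference, the failing line in the traceback is Python 2-only syntax; the generic fix (not the actual Sphinx patch, which went through 2to3) is the print function, valid on both 2.x and 3.x:

```python
# Python 2-only form that raises SyntaxError on Python 3:
#     print 'Would create file %s.' % fname
from __future__ import print_function   # no-op on Python 3

fname = "conf.py"                       # illustrative value
message = 'Would create file %s.' % fname
print(message)                          # Would create file conf.py.
```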
https://bitbucket.org/birkenfeld/sphinx/issue/1553/multiple-python3-compatiblity-issues
Suppose there is a group of two or more people who want to meet while minimizing the total travel distance. We have a 2D grid of values 0 or 1, where each 1 marks the home of someone in the group. The distance is calculated using the Manhattan distance formula, so distance(p1, p2) = |p2.x - p1.x| + |p2.y - p1.y|.

So, if the input is like

{{1,0,0,0,1},{0,0,0,0,0},{0,0,1,0,0}}

then the output will be 6, as from the matrix we can understand that three people live at (0,0), (0,4), and (2,2). The point (0,2) is an ideal meeting point, as the total travel distance of 2+2+2=6 is the minimum.

To solve this, we will follow these steps -

Define a function get(), this will take an array v
sort the array v
i := 0
j := size of v - 1
ret := 0
while i < j, do -
   ret := ret + v[j] - v[i]
   (increase i by 1)
   (decrease j by 1)
return ret

From the main method do the following -

Define an array row
Define an array col
for initialize i := 0, when i < size of grid, update (increase i by 1), do -
   for initialize j := 0, when j < size of grid[0], update (increase j by 1), do -
      if grid[i, j] is non-zero, then -
         insert i at the end of row
         insert j at the end of col
return get(row) + get(col)

Let us see the following implementation to get a better understanding -

#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
   int minTotalDistance(vector<vector<int>>& grid) {
      vector<int> row;
      vector<int> col;
      for (int i = 0; i < grid.size(); i++) {
         for (int j = 0; j < grid[0].size(); j++) {
            if (grid[i][j]) {
               row.push_back(i);
               col.push_back(j);
            }
         }
      }
      return get(row) + get(col);
   }
   int get(vector<int> v) {
      sort(v.begin(), v.end());
      int i = 0;
      int j = v.size() - 1;
      int ret = 0;
      while (i < j) {
         ret += v[j] - v[i];
         i++;
         j--;
      }
      return ret;
   }
};
int main() {
   Solution ob;
   vector<vector<int>> v = {{1,0,0,0,1},{0,0,0,0,0},{0,0,1,0,0}};
   cout << (ob.minTotalDistance(v));
}

Input
{{1,0,0,0,1},{0,0,0,0,0},{0,0,1,0,0}}
Output
6
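The same algorithm reads naturally in Python; this sketch mirrors the C++ implementation above:

```python
def min_total_distance(grid):
    # Collect row indices (already sorted) and column indices of every home.
    rows, cols = [], []
    for i, row in enumerate(grid):
        for j, cell in enumerate(row):
            if cell:
                rows.append(i)
                cols.append(j)

    def span_sum(v):
        # Pair outermost coordinates inward; each pair contributes its gap,
        # which is the cost of meeting at the median along this axis.
        v = sorted(v)
        i, j, total = 0, len(v) - 1, 0
        while i < j:
            total += v[j] - v[i]
            i += 1
            j -= 1
        return total

    return span_sum(rows) + span_sum(cols)

print(min_total_distance([[1,0,0,0,1],[0,0,0,0,0],[0,0,1,0,0]]))  # 6
```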
https://www.tutorialspoint.com/best-meeting-point-in-cplusplus
Here is a screenshot of my program and the problem I'm having: (I drew the mouse in yellow to show the problem, since print screen doesn't copy the cursor)

At the 1,1 tile coordinate, the mouse ray-plane function works just fine. But as I venture out across the map, the offset gets more and more extreme, and I have no idea why this is happening.

Here is my rayplane function (returns true if there's an intersection):

bool rayplane(vector3d n, vector3d s, vector3d d, vector3d p1, vector3d p2,
              vector3d p3, vector3d p4, float* dist, vector3d* point)
{
    // Dot product of the normal vector and the direction vector
    float a = d.x*n.x + d.y*n.y + d.z*n.z;
    if (a == 0) // If the ray is parallel to the plane
        return false;
    float t = (((p1.x*n.x + p1.y*n.y + p1.z*n.z) - (n.x*s.x + n.y*s.y + n.z*s.z)) / a);
    if (t < 0)
        return false;
    // The intersection with the plane
    float x = s.x + t*d.x;
    float y = s.y + t*d.y;
    float z = s.z + t*d.z;
    vector3d cp(x,y,z); // Collision point - The problem is this result - Everything after this point works correctly.
    /*** The < 0.00001 corrects for precision inaccuracies ***/
    if (abs(trianglearea(p1,p3,p4)-trianglearea(p1,p4,cp)-trianglearea(p1,p3,cp)-trianglearea(p3,p4,cp)) < 0.00001 ||
        abs(trianglearea(p1,p2,p3)-trianglearea(p1,p2,cp)-trianglearea(p2,p3,cp)-trianglearea(p1,p3,cp)) < 0.00001)
        return true;
    return false;
}

It checks each of the two triangles that make up each tile (quad) for an intersection. n = normal vector (I set this to (0, 1, 0) for a normal facing up). s = start point, d = direction point and p1 - p4 is each vertex of the quad.
The trianglearea function:

float trianglearea(vector3d p1, vector3d p2, vector3d p3)
{
    // Heron's formula: area = sqrt(s*(s-a)*(s-b)*(s-c)), where s is the semi-perimeter
    // and a, b and c are the lengths of the triangle sides
    float a = sqrt((p2.x-p1.x)*(p2.x-p1.x)+(p2.y-p1.y)*(p2.y-p1.y)+(p2.z-p1.z)*(p2.z-p1.z));
    float b = sqrt((p3.x-p2.x)*(p3.x-p2.x)+(p3.y-p2.y)*(p3.y-p2.y)+(p3.z-p2.z)*(p3.z-p2.z));
    float c = sqrt((p3.x-p1.x)*(p3.x-p1.x)+(p3.y-p1.y)*(p3.y-p1.y)+(p3.z-p1.z)*(p3.z-p1.z));
    float s = (a+b+c)/2;
    return (sqrt(s*(s-a)*(s-b)*(s-c)));
}

And here is how the rayplane function is used in the program (the mouse coord is converted to world coords prior, as is the far coord):

if (rayplane(vector3d(0.0, 1.0, 0.0), mousePosNear, mousePosFar,
             vector3d(0.0 + i, 0.0, 0.0 + j), vector3d(1.0 + i, 0.0, 0.0 + j),
             vector3d(1.0 + i, 0.0, 1.0 + j), vector3d(0.0 + i, 0.0, 1.0 + j)))
{
    // ...Do stuff
}

Help is always appreciated.

Edited by Jossos, 18 April 2013 - 05:18 AM.
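One thing worth double-checking in code like this (an observation, not a confirmed diagnosis of this thread's bug) is how the ray direction is built: if the unprojected far *point* is passed as the direction d instead of the vector far − near, the parameter t is computed against the wrong vector and the error grows with distance from the origin — which matches a drift that worsens across the map. A minimal Python sketch of the same intersection against the ground plane y = 0, with the direction formed explicitly by subtraction:

```python
def intersect_ground(near, far):
    # Direction must be the vector from near to far, not the far point itself.
    d = tuple(f - n for n, f in zip(near, far))
    if d[1] == 0:        # ray parallel to the y = 0 plane
        return None
    t = -near[1] / d[1]  # solve near.y + t * d.y = 0
    if t < 0:            # plane is behind the ray start
        return None
    return tuple(n + t * c for n, c in zip(near, d))

hit = intersect_ground((2.0, 10.0, 3.0), (2.0, -10.0, 3.0))
print(hit)  # (2.0, 0.0, 3.0)
```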
http://www.gamedev.net/topic/641762-rayplane-function-acts-weird-c/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
CC-MAIN-2016-30
refinedweb
523
67.04
Although Python makes sending e-mails relatively easy via the smtplib library, Scrapy provides its own facility for sending e-mails. It is very easy to use and is implemented using Twisted non-blocking IO, to avoid interfering with the non-blocking IO of the crawler. It can also be configured, without writing any code, through a few settings (see below).

Here's a quick example of how to send an e-mail (without attachments):

from scrapy.mail import MailSender

mailer = MailSender()
mailer.send(to=["someone@example.com"], subject="Some subject", body="Some body", cc=["another@example.com"])

MailSender is the preferred class to use for sending emails from Scrapy, as it uses Twisted non-blocking IO, like the rest of the framework. Its send() method sends email to the given recipients and emits the mail_sent signal.

Mail settings

These settings define the default constructor values of the MailSender class, and can be used to configure e-mail notifications in your project without writing any code (for those extensions and code that uses MailSender).

MAIL_USER
Default: None
User to use for SMTP authentication. If disabled no SMTP authentication will be performed.
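Scrapy aside, the message being assembled here is ordinary MIME; a stand-alone sketch with Python's standard email package shows what a to/subject/body/cc call amounts to (the function name and defaults are ours, not Scrapy's — an smtplib.SMTP connection's send_message() could then deliver the result):

```python
from email.mime.text import MIMEText

def build_message(to, subject, body, cc=()):
    # Assemble a plain-text MIME message with the same fields
    # MailSender.send() accepts in the example above.
    msg = MIMEText(body)
    msg["Subject"] = subject
    msg["To"] = ", ".join(to)
    if cc:
        msg["Cc"] = ", ".join(cc)
    return msg

msg = build_message(["someone@example.com"], "Some subject",
                    "Some body", cc=["another@example.com"])
print(msg["To"])  # someone@example.com
```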
http://readthedocs.org/docs/scrapy/en/0.10.3/topics/email.html
crawl-003
refinedweb
127
59.09
Multisensor_PIR_DHT_LDR_Battery Hey Guys, Today I have completed my first multisensor. As base I used a cheap china fire detectors and the MYS PCB board. All components fits very well in the housing. Unfortunately, the modified HC-SR501 do not work well with <= 3V with the DHT (delay in the sketch). Now I use this: And it works very great! The used Sketch is this: wiring diagram coming soon;) @n3ro I am curious to see your wiring diagram. You have a step-up booster in one of the photos, was this just for the DHT? Yes. This is a step up with a transistor for the dht. I want to reduce the battery drain with the transistor. Hey folks, I have just tried to reduce the sleep current of the nodes. With *a transistor to turn on and off the stepup *a second transistor to disconnect the ldr *a external pullup with 680k for the pir Now I have a sleep current of 220-240uA. Any ideas to tune it a little bit more? Is it possible to check if the node is wake up from the interrupt or the timer? Regards n3ro maybe a stupid question, but can i use this sketch for the recommenced sensors ? it's just exactly the sketch im looking for thanks!! What do you mean with you recommenced sensors? @n3ro Yes, you can. The sleep function has a return value. If it is < 0, timer is timed out. Otherwise it waked up by interrupt. I'm working on a same multi sensor PCB with batteries. The PIR consumes 150uA. So you can reduce consume if your search a low power PIR sensor. Like this: @scalz said it consumes only ~24uA. @icebob hey the used pir consumes ~50uA. Since my last modifications I replaced the stepup and the dht with a si7021 sensor and the external pullup with 1m ohm. Now the sleeping current is ~190uA @icebob Great... Hope this will work for a long time on a battery? What battery's you are using? 1 problem< WHERE CAN I FIND: #include <readVcc.h> @Dylano Hey, Yes it works very good with 2 aaa Batts. You can find the read library here But I don't use the dht anymore. This works much better! 
yes i saw your better solution..the si7021 is using less power i read.. Where do i have to place the: long readVcc And how do ik make that .H file? How long is the multisensor running? on what battery? @Dylano what do you mean with long readVcc? The battery life is different because of the frequency of triggering from the pir. Some nodes works for 2 month. Other ones for 6.. sorry, it is working now: what do you mean with long readVcc? I use the sketch battery pir, to test Only the report of the vcc how many times is the sketch reading this? And where can i see the status? I use Domoticz as controller. great, i see in domoticz the value great!!!...[ when power is low i get an alert. ] You have a lot of sketches Is het possible that you build a sketch include your pir-battery-ldr, and a light switch, or a option so i can switch a ledlight. Like a pir-battery-ldr-switch... I will try to rebuild a solarpowerlight With a solar powerbank, and a:Light So i can have a Domoticz controlled solarpower light and motion/ldr [my]sensor Only i cannot make a sketch for this... [ if you want, i will pay for it..., or a donation to charity organization ] you cold try to modify this sketch: its not finished but there is all you need
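To put the sleep currents quoted in this thread (~190–240 µA) in perspective, a back-of-the-envelope battery-life estimate can be computed like this — a deliberately rough model that ignores wake-up bursts, cell self-discharge, and converter losses, all simplifying assumptions:

```python
def battery_life_days(capacity_mah, sleep_current_ua):
    # Ideal runtime: capacity divided by average draw, converted to days.
    hours = capacity_mah / (sleep_current_ua / 1000.0)
    return hours / 24.0

# Two ~1000 mAh AAA cells in series have the capacity of one cell;
# at the 190 uA sleep current mentioned above:
print(round(battery_life_days(1000, 190)))  # 219
```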
https://forum.mysensors.org/topic/1514/multisensor_pir_dht_ldr_battery/1
CC-MAIN-2019-22
refinedweb
616
85.39
Comment on Tutorial - HashMap example in Java By Charles Comment Added by : Sai Comment Added at : 2010-09-23 21:52:38 Comment on Tutorial : HashMap example in Java By Charles Nice tutorial it hlep me a lot gr8. View Tutorial By: harish at 2012-03-31 14:34:50 2. Thank you very much!!! But I have t View Tutorial By: Gemis at 2009-04-14 02:37:08 3. when i copy paste your program or source code View Tutorial By: hello at 2010-07-09 18:56:23 4. This all works fine, but when I read records from View Tutorial By: Ayac at 2011-10-31 10:58:21 5. Very very nice example thank you View Tutorial By: Fabrice at 2010-12-03 03:36:18 6. its very nice View Tutorial By: keerthi at 2012-06-28 17:04:36 7. it is really best site for the information.many ti View Tutorial By: panchanan ruata at 2013-05-22 09:36:37 8. Hi Fred, I found this link which has more explanat View Tutorial By: Boulvat at 2012-09-25 08:36:48 9. Simpler example: import java.io.*; View Tutorial By: Joseph Harner at 2011-12-04 23:20:48 10. thanks nice article , but there is bug as already View Tutorial By: YZ at 2011-08-05 13:10:49
http://java-samples.com/showcomment.php?commentid=35379
CC-MAIN-2018-34
refinedweb
230
75.1
Mailing List - Entries of 2012

Hello Jim,

def checkSelected(node):
    cbs = node.getElementsByClassName("INPUT:CHECKBOX")
    checked = cbs[0].getAttribute("checked")
    rc.check(checked == "true" or checked == "1", "Checkbox selected")

node = rc.getComponent("myTreeNode")
checkSelected(node)

Best regards,
Robert

I was trying to record checks of the selected state of an "extjs tree node with checkbox", but I do not see an option for "selected state" as I get with a regular checkbox component. It seems QF-Test only finds the wrapper of the checkbox but not the checkbox itself. Is there a way to work around it? A sample extjs tree node can be found here. Thanks a lot.

Jim Zhang
--
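The checked == "true" or checked == "1" comparison in Robert's snippet reflects that toolkits report boolean attributes inconsistently. A small general-purpose normalizer (plain Python, not QF-Test API) keeps such checks uniform:

```python
def attr_is_true(value):
    # Accept the common spellings a getAttribute()-style call might return.
    return str(value).strip().lower() in ("true", "1", "yes", "on", "checked")

print(attr_is_true("true"), attr_is_true("1"), attr_is_true("false"))  # True True False
```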
https://www.qfs.de/en/qf-test-mailing-list-archive-2012/lc/2012-msg00093.html
CC-MAIN-2018-26
refinedweb
138
64.1
CodePlex Project Hosting for Open Source Software

I have some code that looks like this:

[JsonProperty(Required=Required.Always)]
Uri MyField;

If MyField is null, the serializer saves it without problems, but when I try to read it back (using the same settings), I get an exception "Required property 'MyField' expects a value but got null." In other words, the serializer writes output that it cannot read. I think a better design would be to throw an exception rather than saving invalid data. Otherwise these errors aren't caught until much later, when attempting to reload the data. Also, the documentation for "Required.Always" could be clearer. I interpreted it to mean that the field must appear in the JSON tree, not that the field cannot be null. Arguably both would be useful behaviors. Thanks!

Done. Serializer now validates when writing JSON for that property. I'm not sure what you mean about the documentation. I think the Required enum is pretty clear about what each value does.

namespace Newtonsoft.Json
{
    /// <summary>
    /// Indicating whether a property is required.
    /// </summary>
    public enum Required
    {
        /// <summary>
        /// The property is not required. The default state.
        /// </summary>
        Default,
        /// <summary>
        /// The property must be defined in JSON but can be a null value.
        /// </summary>
        AllowNull,
        /// <summary>
        /// The property must be defined in JSON and cannot be a null value.
        /// </summary>
        Always
    }
}
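The asymmetry reported here — a writer happily emitting a value its own reader then rejects — is easy to reproduce in any serializer. A language-neutral Python sketch of the failure, together with the fix the maintainer describes (validating on write as well as on read); the helper names are ours:

```python
import json

REQUIRED = {"MyField"}  # fields that must be present and non-null

def validate(obj):
    for name in REQUIRED:
        if obj.get(name) is None:
            raise ValueError(
                f"Required property '{name}' expects a value but got null.")

def dumps_checked(obj):
    validate(obj)  # the fix: fail at write time, not only at read time
    return json.dumps(obj)

def loads_checked(text):
    obj = json.loads(text)
    validate(obj)
    return obj

try:
    dumps_checked({"MyField": None})
except ValueError as e:
    print(e)  # Required property 'MyField' expects a value but got null.
```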
https://json.codeplex.com/discussions/218084
CC-MAIN-2017-22
refinedweb
260
67.76
22 October 2004 17:18 [Source: ICIS news] With its modernization program largely complete, Arkema is looking beyond its 2006 spinoff from French oil giant Total with plans to ramp up top-line growth by drawing on a bevy of new plants, a solid pipeline of R&D projects, and a growing footprint in Asia. "Our destiny will be in our hands," said Arkema chairman and CEO Thierry Le Henaff in a recent interview. "In terms of industrial and R&D projects which will create long-term growth, we have never been in a better position. We will start benefiting from all the efforts over the last three to four years in R&D and plant development. We are very happy to launch Arkema in this context." With three relatively balanced business segments - vinyl products (26% of sales), industrial chemicals (38%) and performance products (36%) - and some level of integration, the $6.3bn (Euro5bn) global commodities, intermediates and specialty chemicals company will work on growing the latter two segments through new projects and acquisitions. "In 2005, we will announce a significant amount of new projects, mainly in performance products and industrial chemicals in areas such as technical polymers, acrylics, PMMA [polymethyl methacrylate], hydrogen peroxide and fluorochemicals," says Le Henaff. In recent years, Total has spent over $379m annually to modernize plants that comprise the Arkema group and improve performance in environmental, health and safety (EH&S). The company has 90 production facilities around the world with 16 in the The company is building a facility in In Arkema' PMMA expansion project in Jinhae, South Korea, came on line in May, more than doubling capacity at the site from 17,000 to 40,000 tonne/year. The company is the world's leading producer of PMMA, marketed under the Plexiglas and Altuglas names, with around 20% market share.
The company also expanded capacity of Kynar polyvinylidene fluoride technical polymers in However, in vinyl products, where Arkema is number three in From a geographic standpoint, Arkema will seek opportunities to build new sites in "In In Along with plant expansions, Arkema will rely heavily on R&D to kick-start growth. The company will continue to spend over 3% of its sales ($190m) on R&D programs. Arkema is working on developments in nanotechnology, tin-free antifoulants for marine paints, membranes for fuel cells, and a catalyst-driven process that could boost the company's acrylic production by 15 to 30%. "By the quality of our 1,400 researchers and the current program, which includes a number of long-term projects, we think we are in very good shape," says Le Henaff. "We are also careful that at the end of the day we get a new application to the customer. "We have a regular review of R&D projects by top management to ensure top-line growth." New environmental legislation can be a key driver of growth for innovative companies, according to Le Henaff. "We really want to invest in new technologies such as fluorochemicals for refrigerants to stay ahead of the game," he says. "We are also working on some alternatives for bromide derivative products for soil fumigation." Total plans to spin off Arkema to existing shareholders in 2006 with a solid financial structure. "The commitment of Total is to give Arkema a level of gearing [debt/equity] similar to that of Total," notes Le Henaff. Total's current debt/equity is around 30%. A conservative financial structure will give Arkema the financial flexibility to make acquisitions. "We think that today the chemical industry is too fragmented, and we will have a card to play," asserts Le Henaff. "Once we are spun off, we will look at opportunities to make select acquisitions to reinforce our strong business units. We want to grow in industrial chemicals and performance products. 
We won't make acquisitions in vinyl products." (For additional Chemical Market Reporter Analysis, visit the CMR Web site.
http://www.icis.com/Articles/2004/10/22/622485/analysis+arkema+prepares+balanced+strategy+for+spinoff.html
CC-MAIN-2013-20
refinedweb
661
50.06
Command for safe boot

Is it possible to do a safe boot (or a soft reboot) from the command line? I know that you can accomplish these with Ctrl+F and Ctrl+D, respectively, but I would like to include a safe boot in my script, so I need a command line option. machine.reset() is a command line reset, but it does a hard reset and I don't want main.py and boot.py to run.

@alexpul But if it's only about not executing boot.py and main.py on boot, you can do similar with python scripts:

from machine import reset

def safe_boot():
    # create a special main.py and reset
    f = open("no_main.py", "w")
    f.write('''import os\nos.remove("no_main.py")\n''')
    f.close()
    reset()

and then a specific boot.py, which checks for that special main.py:

import machine
import os

try:
    f = open("no_main.py")
    f.close()
    machine.main("no_main.py")
except OSError:
    # do the previous boot.py stuff here
    pass

Or something similar in your regular boot.py and main.py, which just checks for the existence of a flag for bypassing.

@alexpul No, but it would be easy to add something like machine.safe_boot(). It would more or less only consist of the name definition, and a function with a single statement, which just calls another function.
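The flag-file pattern suggested in the replies can be exercised on desktop Python too. In this portable sketch, the system temp directory stands in for the board's flash, and a real port would call machine.reset() right after dropping the flag (names and file location are ours):

```python
import os
import tempfile

FLAG = os.path.join(tempfile.gettempdir(), "skip_main.flag")

def request_safe_boot():
    # Drop a marker file; on a real board this would be followed by machine.reset().
    open(FLAG, "w").close()

def should_run_main():
    # boot.py checks the marker and consumes it, so the bypass lasts one boot only.
    if os.path.exists(FLAG):
        os.remove(FLAG)
        return False
    return True

request_safe_boot()
print(should_run_main())  # False  (first boot after the request skips main.py)
print(should_run_main())  # True   (subsequent boots run main.py again)
```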
https://forum.pycom.io/topic/3765/command-for-safe-boot/3
CC-MAIN-2019-30
refinedweb
225
69.18
Template talk:Otherlang2

Bollocks. I've just realised that this will only work on English pages, because there's no way to detect and remove the :<lang> suffix on an article name. A French page trying to link to a Norwegian page would get Some_Page:fr:no, which of course is wrong. I really don't want to allow people to type in arbitrary article names either, since that allows the usual naming convention to be broken. :-( --TomEdwards 20:09, 13 July 2009 (UTC)

How do you do it?
Could someone explain to me how to create a page in another language? --Nathaniel 00:20, 9 October 2010 (UTC)

3-border style
Awesome, Artfunkel. Looks great. —Mattshu 15:22, 28 June 2011 (PDT)

noborder
JeffLane added the feature "noborder" ("|noborder=true" removing borders). Please somebody write a description for this. I can't because my English is so bad.

Making noborder default, not optional
I'm voting for the removal of the border altogether. In my opinion, I think it looks a little cleaner without a border. —Mattshu 19:42, 27 August 2011 (PDT)

Obsolete?
Check Template:Lang. It's the same as this, but now languages are detected automatically. I think with that, this template is now obsolete and should be passively replaced with Lang. This is one of our most important templates, so I'm asking for input on the new one's use. Pinsplash (talk) 10:09, 20 August 2018 (UTC)
- This might still be useful when a translated page doesn't follow standard naming conventions (i.e. is not named pagename:lang). — Dr. Orange talk · contributions 10:27, 20 August 2018 (UTC)
- Very true. Honestly I think the language suffixes should have been done with / and not :. There's nothing native to MediaWiki that recognizes : aside from namespaces. If they were done with /, #titleparts could have been used. Far too late to change that standard though. Pinsplash (talk) 11:39, 20 August 2018 (UTC)
https://developer.valvesoftware.com/w/index.php?title=Template_talk:Otherlang2&oldid=228640
CC-MAIN-2021-10
refinedweb
329
75.5
Dear rooters,

I am trying to add some namespaces to the dictionary by following the users guide. My LinkDef.h looks like:

#ifdef __MAKECINT__
#pragma link C++ class XObject+;
#pragma link C++ namespace XAux+;
#pragma link C++ namespace XUnit+;
#pragma link C++ nestedclass;
#pragma link C++ nestedtypedef;
#endif

After compiling, I can find XObject, XAux and XUnit in the dictionary .h and .cc files, and in CINT, XObject can be recognized and properly highlighted. However, XAux and XUnit cannot. I attached the codes to this post; I hope you can reproduce my problem as follows:

tar xfvz xman.tar.gz
cd xman
make clean install
export LD_LIBRARY_PATH=/path/to/xman/lib:$LD_LIBRARY_PATH
root
ROOT[1] XObject a;
ROOT[2] XAux::SetStyle();

Could you please tell me where I am going wrong?

Cheers,
Jing
https://root-forum.cern.ch/t/how-to-add-namespace-into-dictionary/11121
CC-MAIN-2022-27
refinedweb
131
59.64
install TunnelBear App Download Install TunnelBear VPN from. Press On, wait a moment, open TunnelBear Android App Pick the country for the VPN server, american netflix mobile etc. 2. In the top section. UK, google Play Store. Such as United States, American netflix mobile how To Download Android App which is not Available in Your Country from Play Store. Some android apps intentionally set to american netflix mobile be distributed in certain permitted countries. motion Detection, outdoor IP67 Waterproof, m : Titathink TT522PW-PRO Wireless HD 720P Micro Covert Hidden Spy Network IP Camera, wiFi / POE private proxy murah / LAN, wide Viewing american netflix mobile Angle, sD Recording, India: American netflix mobile! düzinelerce ülkedeki kullanclarn birok. VPN lerin normalde almayan yerlerde baarl bir ekilde blok kazanmalarn ve özel kalmalarn salar. Herhangi bir taahhüdü american netflix mobile olmakszn aylk olarak abone olmak iin de ayda 10 dolar ödeme gerektiriyor. StrongVPN, bununla birlikte, strongVPN sunucular, import and export functions are available both through the GUI or through direct command line options. Secured import and export functions To vpn encryption 3des allow IT Managers to deploy VPN Configurations securely, ). >>IMAGE<< this feature is american netflix mobile not available right now. Loading. Please try again later. Rating is available when the video has been rented. unblock telegram. Org in case it is blocked in your computer. Org proxy list with american netflix mobile working proxies to unblock telegram. A web proxy can help you unblock telegram. Org with a premium VPN service Free telegram. Org and bypass Internet censorship.license:Shareware File Size:163 Kb Runs on:Win95,Win98,WinME, by Privacy Partners, but don't hang on - there's still a way to get around this. 
WinXP,WinNT 4.x,Windows2000,Windows2003 american netflix mobile Private Proxy Anonymous Surfing,unlike a VPN, the protocol was originally developed by programmers at the United States Naval Research Lab. The Tor browser does american netflix mobile not encrypt web browsing. There is now a non-profit foundation which exists to promote the continued development of Tor. Instead,noRoot firewall. As the name indicates, today we are going to tackle this issue using another app called. The app lacked a way to block any of those connections. This lets you control outgoing network connections on your device. all these VPN Services offer free american netflix mobile trial periods or a money back guarantee if you are not satisfied. Try before you buy!apart from using a good firewall application, make vpn client mac el capitan sure that you use a good web browser such as Google Chrome. How secure are vpn services! related Search Terms: Droid4x offline installer, american netflix mobile droid4x offline installer, droid4x offline installer, droid4x offline installer, happy surfing! Droid4x offline installer, still if you have any queries regarding this post then feel free to comment us. Droid4x offline installer.5.A VPN is a great way for expats or people traveling to Argentina to connect to a home network where they can access their favorite websites without the threat of their personal data being compromised. for example, add WAN_IN rules matching what american netflix mobile traffic you want to allow (with match inbound IPsec packets checked)) - all incoming traffic will be blocked by default hitting the default deny at the bottom of the ruleset (implicit)).there are free options that don't keep logs if you really american netflix mobile need that option. The paid version is pretty good, though. 
However, dOWNLOAD ON GOOGLE PLAY OpenVPN Connect is one of the precious few truly free VPNs available on Android.among other american netflix mobile reasons, to cut the chase and avoid going round in circles, pureVPN can effortlessly access other Netflix libraries. it can also be deployed on every Windows from Windows Vista to Windows 10, latest Windows TheGreenBow VPN Client is available for Windows 10 32/64-bit. Including Windows Server. Support of IPv4 american netflix mobile and IPv6 Deploy VPN in heterogeneous network in IPv4 and IPv6 simultaneously american netflix mobile Bedrohungen. Intelligentes Antivirus Blockiert Malware,if there are other. You need Bestline VPN. 1. VPN apps that claim to be the best, it must be that they have not met Bestline VPN. You want to visit Facebook, unblock for example, in the following scenario,enter the server, username and password into the Server tab, if you decided in Installation step 1 above that you would need MPPE, and a window should appear, domain, and if your administrator american netflix mobile says encryption is required, run pptpconfig as root, please try again ssl vpn client later. HOW TO MAKE XP VPN CONFIG THIS VIDEO WAS MADE AND PUBLISHED BY KID ANONYMOUSG. Loading. Rating american netflix mobile is available when the video has been rented. This feature is not available right now. all in one package - Our package include 60 countries VPN server ( will update every week)). One american netflix mobile VPN account can use all server.Here is a simple and quick Netfix proxy error fix along with a list of the best Netflix VPN providers that work in 2017. new features, enhanced for Windows 10, fortranProject plugin, more stable, the Code:Blocks Team Code:Blocks 17.12 is here! Special credits go to darmar for his great work on the. We hope you enjoy american netflix mobile using Code:Blocks! 
Many improvements, written by MortenMacFly Again, bundled since release 13.12.and when you find your app, you need to enter your new Gmail account and start to use Google Play american netflix mobile Store. In the next step, find the search bar, and enter the name of the app Amaze VPN. Check the search results, vPN memang sangat american netflix mobile tentang internet yaitu. Daftar Software VPN download vpn pia Gratis Terbaik Untuk Windows 10 PC.
http://nw-fencing.org.uk/test-the-great-china-firewall/american-netflix-mobile.html
CC-MAIN-2019-13
refinedweb
966
56.25
Amy Unruh, Oct 2012
Google Developer Relations

Introduction

This lesson covers the basics of using the Search API: indexing content and making queries on an index. In it, you'll learn how to

- Create a search index
- Add content to it via an index document
- Make simple full-text search queries on that indexed data

Objectives

Learn the basics of using the App Engine Search API.

Prerequisites

- Python 2.7 and the App Engine SDK for Python
- Basic understanding of Python
- Familiarity with Google App Engine

Indexes

App Engine's Search API operates through an Index object. This object lets you store data via an index document, retrieve documents using search queries, modify documents, and delete documents. Each index has an index name and, optionally, a namespace. The name uniquely identifies the index within a given namespace. It must be a visible, printable ASCII string not starting with !. Whitespace characters are excluded. You can create multiple Index objects, but any two such objects that have the same index name in the same namespace reference the same index.

You can use namespaces and indexes to organize your documents. For the example product search application, all the product documents are in one index, with another index containing information about store locations. We can filter a query on the product category if we want to search for, say, only books.

In your code, you create an Index object by specifying the index name:

from google.appengine.api import search
index = search.Index(name='productsearch1')

or

index = search.Index(name='yourindex', namespace='yournamespace')

The underlying document index will be created at first access if it does not already exist; you don't have to create it explicitly. You can delete documents from an index or delete the entire index, as will be described in the next class, A Deeper Look at the Python Search API.

Documents

Documents hold an index's searchable content.
A document is a container for structuring indexable data. From a technical point of view, a Document object represents a uniquely identified collection of fields, identified by a document ID. Fields are named, typed values. Documents do not have kinds in the same sense as Datastore entities.

In our example application, for instance, our product categories are books and HD televisions. The store has a rather limited selection of products. Each product document in the example application always includes the following core fields, defined by docs.Product class variables:

- CATEGORY (set to books or hd_televisions)
- PID (product ID)
- PRODUCT_NAME
- DESCRIPTION
- PRICE
- AVG_RATING
- UPDATED (date of last update)

The books and HD televisions categories each have some additional fields of their own. For books, the extra fields are:

- title
- author
- publisher
- pages
- isbn

For HD televisions, they are:

- brand
- tv_type
- size

The application itself enforces an application-level semantic consistency for documents of each product type. That is, all product documents will always include the same core fields, all books have the same set of additional fields, and so on. However, a search index doesn't impose any cross-document schematic consistency on the fields that are used, so there is no explicit concept of querying for "product" documents specifically.

Field types

Each document field has a unique field type. The type can be any of the following, which are defined in the Python module search:

- TextField: A plain text string.
- HtmlField: HTML-formatted text. If your string is HTML, use this field type, as the Search API can take the markup into account when creating result snippets and in document scoring.
- AtomField: A string treated as a single token. A query will not match if it includes only a substring rather than the full field value.
- NumberField: A numeric (integer or floating-point) value.
- DateField: A date with no time component.
- GeoField: A geographical location, denoted by a GeoPoint object specifying latitude and longitude coordinates.

For text fields (TextField, HtmlField, and AtomField), the values should be Unicode strings.

Example: Building product document fields and creating a document

To construct a Document object, you build a list of its fields, define its document ID if desired, and then pass this information to the Document constructor. The example application uses the TextField, AtomField, NumberField, and DateField field types for product documents.

Defining the product document fields

The core product fields (those which are included in all product documents) look like this, where we assume the value arguments of the constructors below are set to appropriate values:

from google.appengine.api import search
...
fields = [
    search.TextField(name=docs.Product.PID, value=pid),  # the product id
    # The 'updated' field is set to the current date.
    search.DateField(name=docs.Product.UPDATED, value=datetime.datetime.now().date()),
    search.TextField(name=docs.Product.PRODUCT_NAME, value=name),
    search.TextField(name=docs.Product.DESCRIPTION, value=description),
    # The category names are atomic
    search.AtomField(name=docs.Product.CATEGORY, value=category),
    # The average rating starts at 0 for a new product.
    search.NumberField(name=docs.Product.AVG_RATING, value=0.0),
    search.NumberField(name=docs.Product.PRICE, value=price)
]

Note that the category field is typed as AtomField. Atom fields are useful for things like categories, where exact matches are desired; Text fields are better for strings like titles or descriptions. One of our example categories is hd televisions. If we search for just televisions, we will not get a match (assuming that that string is not contained in another product field). But, if we search for the full field string, hd televisions, we will match on the category field. The example application also includes fields specific to individual product categories.
These are added to the field list as well, depending on the category. For example, for the television category, there are additional fields for size (a number field), brand, and tv_type (text fields). Books have a different set of fields.

Creating documents

Given the field list, we can create a document object. For each product document, we'll set its document ID to be the predefined unique ID of that product:

d = search.Document(doc_id=product_id, fields=fields)

This design has some advantages for us (as we'll discuss in the follow-on class to this one), but if we didn't specify the document ID, one would be generated for us automatically when the document is added to the index.

Example: Using geopoints in store location documents

The Search API supports Geosearch on documents that include fields of type GeoField. If your documents contain such fields, you can query an index for matches based on distance comparisons. A location is defined by the GeoPoint class, which stores latitude and longitude coordinates. The latitude specifies the angular distance, in degrees, north or south of the equator. The longitude specifies the angular distance, again in degrees, east or west of the prime meridian. For example, the location of the Opera House in Sydney is defined by GeoPoint(-33.857, 151.215).

To store a geopoint in a document, you need to add a GeoField field with a GeoPoint object set as its value. Here is how the fields for the store location documents in the product search application are constructed:

from google.appengine.api import search
...
geopoint = search.GeoPoint(latitude, longitude)
fields = [search.TextField(name=docs.Store.STORE_NAME, value=storename),
          search.TextField(name=docs.Store.STORE_ADDRESS, value=store_address),
          search.GeoField(name=docs.Store.STORE_LOCATION, value=geopoint)
]

Indexing documents

Before you can query a document's contents, you must add the document to an index, using the Index object's put() method.
Indexing allows the document to be searched with the Search API's query language and query options. You can specify your own document ID when constructing a document. The document ID must be a visible, printable ASCII string not starting with !. Whitespace characters are excluded. (As we'll see later, if you index a document using the ID of an existing document, that existing document will be reindexed.) If you don't specify a document ID, a unique numeric ID will be generated automatically when the document is added to the index. You can add documents one at a time, or alternatively you can add a list of documents in batch, which is more efficient.

Here's how to construct a document, given a fields list, and add it to an index:

from google.appengine.api import search

# Here we do not specify a document ID, so one will be auto-generated on put.
d = search.Document(fields=fields)
try:
    add_result = search.Index(name=INDEX_NAME).put(d)
except search.Error:
    # ...

You should catch and handle any exceptions resulting from the put(), which will be of type search.Error. If you want to specify the document ID, pass it to the Document constructor like this:

d = search.Document(doc_id=doc_id, fields=fields)

You can get the ID(s) of the document(s) that were added, via the id properties of the list of search.AddResult objects returned from the put() operation:

doc_id = add_result[0].id

Basic search queries

Adding documents to an index makes the document content searchable. You can then perform full-text search queries over the documents in the index. There are two ways to submit a search query. Most simply, you can pass a query string to the Index object's search() method. Alternatively, you can create a Query object and pass that to the search() method. Constructing a query object allows you to specify query, sort, and result presentation options for your search. In this lesson, we'll look at how to construct simple queries using both approaches.
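Whether a query term matches a given document depends on the field types described earlier: an AtomField matches only on its complete value, while a TextField matches on individual tokens. That difference can be modeled in a few lines of plain Python (an illustrative stand-in, not the API's actual tokenizer):

```python
def atom_match(query, value):
    # Atom fields are treated as a single token: only the full value matches.
    return query.lower() == value.lower()

def text_match(query, value):
    # Text fields are tokenized: any individual word can match.
    return query.lower() in value.lower().split()

category = "hd televisions"
print(atom_match("televisions", category))  # False
print(text_match("televisions", category))  # True
```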
Recall that some search queries are not fully supported on the Development Web Server (running locally), so you'll need to run them using a deployed application.

Search using a query string

A query string can be any Unicode string that can be parsed by the Search API's query language. Once you've constructed a query string, pass it to the Index.search() method. For example:

    from google.appengine.api import search

    # a query string like this comes from the client
    query = "stories"
    try:
        index = search.Index(INDEX_NAME)
        search_results = index.search(query)
        for doc in search_results:
            # process doc ..
    except search.Error:
        # ...

Search using a query object

A Query object gives you more control over your query options than does a query string. In this example, we first construct a QueryOptions object. Its arguments specify that the query should return doc_limit number of results. (If you've looked at the product search application code, you'll see more complex QueryOptions objects; we'll look at these in the following class, A Deeper Look at the Python Search API.) Next we construct the Query object using the query string and the QueryOptions object. We then pass the Query object to the Index.search() method, just as we did above with the query string.

    from google.appengine.api import search

    # a query string like this comes from the client
    querystring = "stories"
    try:
        index = search.Index(INDEX_NAME)
        search_query = search.Query(
            query_string=querystring,
            options=search.QueryOptions(
                limit=doc_limit))
        search_results = index.search(search_query)
    except search.Error:
        # ...

Processing the query results

After you've submitted a query, matching search results are returned to the application in an iterable SearchResults object. This object includes the number of results found, the actual results returned, and an optional query cursor object. The returned documents can be accessed by iterating on the SearchResults object.
The number of results returned is the length of the object's results property. The number_found property is set to the number of hits found. Iterating on the returned object gives you the returned documents, which you can process as you like:

    try:
        search_results = index.search("stories")
        returned_count = len(search_results.results)
        number_found = search_results.number_found
        for doc in search_results:
            doc_id = doc.doc_id
            fields = doc.fields
            # etc.
    except search.Error:
        # ...

Summary and review

In this lesson, we've learned the basics of creating indexed documents and querying their contents. To check your knowledge, try recreating these steps yourself in your own simple application:

- Create an Index object.
- Build a list of document fields (say, using the TextField type) and construct a Document object with that field list. Add the document to the index.
- Search the index using a search string consisting of a term in one of your field values. Is the document you created returned as a match?

In the next lesson, we'll take a closer look at Search API indexes.
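A side note on the geopoint example earlier in this lesson: the distance comparisons used in geosearch are great-circle distances over the Earth's surface. As a self-contained illustration (plain Python, not the App Engine API), the haversine formula computes that distance from two latitude/longitude pairs:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; an approximation

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (latitude, longitude) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Sydney Opera House (from the lesson) to a point one degree of latitude north:
print(round(haversine_km(-33.857, 151.215, -32.857, 151.215), 1))
# about 111.2 km (one degree of latitude)
```

A query service performing "find stores within N km" comparisons is, conceptually, evaluating this kind of distance against the GeoField stored in each document.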
https://cloud.google.com/appengine/training/fts_intro/lesson2?hl=ja
On Fri, 2010-09-24 at 16:09 +0200, David Lamparter wrote:

> I understood your point. What I'm saying is that that functional graph
> you're describing is too simplistic to be a workable model. Your graph
> allows for what you're trying to do, yes. But your graph is not modeling
> the reality.

How about we put this specific point to rest by agreeing to disagree? ;->

> Err... I'm migrating netdevs to assign them to namespaces to allow them
> to use them? Setup, basically. Either way a device move only happens as
> a result of some administrative action; be it creating a new namespace
> or changing the physical/logical network setup.

Ok, different need. You have a much more basic requirement than I do.

> wtf is a "remote" namespace?

A namespace that is remotely located on another machine/hardware ;->

> Can you please describe your application that requires moving possibly
> several network devices together with "their" routes to a different
> namespace?

Scaling and availability are the driving requirements.

cheers,
jamal
http://lkml.org/lkml/2010/9/24/289
This is the mail archive of the xsl-list@mulberrytech.com mailing list.

> > I have an XML file which I process with an XSL stylesheet to create
[snip]
> > The source XML file has the following (truncated) content:
> >
> > <TagLabels xmlns:
> > <Tag name="GNSL_LOCATION_TYPE_CD">LOC<html:br/>TYPE</Tag>
> > </TagLabels>
> >
> > The XSL stylesheet declaration and the templates which output the
> > offending statement follow:
> >
> > <xsl:stylesheet
> xmlns:
> xmlns:
> xmlns:
[snip]
>
> OK, the namespace node that's creating the namespace declaration that
> you're seeing is the one in the source document. Your html:br element
> in the source document has a namespace node associated the default
> namespace with the namespace ''. My guess is that
> you're copying the html:br element in your document, with something
> like:
>
> <xsl:template
> <xsl:copy>
> <xsl:apply-templates
> </xsl:copy>
> </xsl:template>
>
> When you copy an element, with xsl:copy or xsl:copy-of, you copy all
> its namespace nodes as well. So rather than doing that, you need to
> create an element without any associated namespace nodes, which means
> that you have to use xsl:element. Try using:
>
> <xsl:template
> <xsl:element
> <xsl:apply-templates
> </xsl:element>
> </xsl:template>

You're right; that's how I was propagating the <html:br/> tag forward. I have to confess that namespaces are the one area of XSL (I'm currently aware of) that I really don't have a handle on, so if you don't mind, I'd like to "kick the dead horse" a bit.

In the source XML document, the <html:br/> tag is completely qualified. Why is the choice being made to associate an additional (the default) namespace with this node? This seems like it will cause problems in environments in which components from multiple namespaces are being integrated. Does this mean we shouldn't define default namespaces in situations involving multiple namespaces?
I don't understand why that node is being associated with a namespace other than what it was explicitly referenced by. Could you describe what aspect/rule associated with the XSL transformation causes this additional namespace to be associated with that node?

Thanks,
Ed
http://www.sourceware.org/ml/xsl-list/2002-07/msg00472.html
I'm new to this site, so the format of this post may be sub-par. I have been working on this code and I cannot seem to figure out why, when option one is chosen, it asks for the number and does nothing. Thank you for any feedback you may be able to assist me with.

    import java.util.*;

    public class Lab12 {

        // Code for implementing option 1 of lab assignment
        // Roll three six-sided dice until they all show a different number
        // Print out the result
        public static void option1() {
            //System.out.println("Executing option 1");
            // Code goes here
            Dice die1 = new Dice();
            Dice die2 = new Dice();
            Dice die3 = new Dice();
            boolean finished = false;
            int getNumRolls;
            int count = 0;
            int r1 = die1.roll();
            int r2 = die2.roll();
            int r3 = die3.roll();
            while (!finished) {
                //int r1 = die1.roll();
                //int r2 = die2.roll();
                //int r3 = die3.roll();
                if (r1 != r2) {
                    r1 = die1.roll();
                    r2 = die2.roll();
                    r3 = die3.roll();
                }
                else if (r2 != r3) {
                    r1 = die1.roll();
                    r2 = die2.roll();
                    r3 = die3.roll();
                }
                else if (r3 != r1) {
                    finished = true;
                    //System.out.println("It took " + count + " rolls to roll three different numbers: " + r1 + ", " + r2 + ", " + r3 + ".");
                }
            }
            getNumRolls = die1.getNumRolls();
            System.out.println("It took " + getNumRolls + " rolls to roll three different numbers: " + r1 + ", " + r2 + ", " + r3 + ".");
        }

        // Code for implementing option 2 of lab assignment
        // Roll the two dice n times and print the average.
        public static void option2(int n) {
            //System.out.println("Executing option 2");
            Dice die1 = new Dice();
            Dice die2 = new Dice();
            int r1 = 0;
            int r2 = 0;
            for (int i = 1; i <= n; i++) {
                r1 = r1 + die1.roll();
                r2 = r2 + die2.roll();
            }
            int avg = (r2 + r1) / n;
            System.out.println("The average of " + n + " roll(s) of the two dice is " + avg);
            // Code goes here
        }

        // Must add code to main for system prompts and Case 3
        public static void main(String [] args) {
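For reference, the intended termination condition here is "keep rolling until the three values are pairwise distinct", and the else-if chain above never expresses it: the finished branch (r3 != r1) is only reached when r1 == r2 and r2 == r3, at which point r3 != r1 can never be true, so finished is never set. A language-neutral sketch of the intended logic (this is a hedged Python illustration, not the poster's Dice class):

```python
import random

def roll_until_all_different(roll=lambda: random.randint(1, 6)):
    """Roll three dice repeatedly until all three values are pairwise distinct.

    Returns (number_of_rounds, (a, b, c)). The `roll` argument is injectable
    so the logic can be exercised deterministically.
    """
    rounds = 0
    while True:
        a, b, c = roll(), roll(), roll()
        rounds += 1
        if len({a, b, c}) == 3:      # all three pairwise distinct -> done
            return rounds, (a, b, c)
```

In the Java above, the equivalent single test would be something like `if (r1 != r2 && r2 != r3 && r3 != r1) { finished = true; } else { /* reroll all three */ }`.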
http://www.javaprogrammingforums.com/loops-control-statements/33736-why-infinite-loop.html
11 November 2011 12:59 [Source: ICIS news]

LONDON (ICIS)--European styrenics producer Styron is to announce a €50–100/tonne ($68–135/tonne) increase for its November polystyrene (PS) with immediate effect, with the precise amount of the targeted increase expected on Monday, a company source said on Friday.

"We are experiencing an exceptional situation, with styrene currently moving at $1,500/tonne and above," said the source. "The amount of increase we are targeting is based on what one has to pay for styrene today. We need to get numbers up or face a huge margin loss."

Earlier Styron had envisaged a €15/tonne drop for November PS pricing, but a spike in upstream styrene prices prompted the recent move.

Styrene monomer price ideas have risen as November has progressed, but some sources expect the current trend to be short-lived. While the second half of November has risen steeply, price ideas for December remain backwardated by around $150/tonne. Bids for December were heard at $1,335/tonne on Thursday morning, but were not met with any offers. November barge contracts settled at €1,061–1,108/tonne free delivered (FD) northwest Europe.

PS pricing is settled notoriously late in the month, so buyers have time to absorb the news from Styron, and compare their pricing ideas with those of their competitors, all of whom were expecting a slight drop in November monthly prices following a €23–25/tonne drop in November styrene contracts.

With local holidays in September and October, PS volumes were weak, with one producer estimating them to be down by 10–12% compared with 2010. November volumes are said to be better, down by only 3% compared with 2010. This upturn in volumes has led to some tightness, acknowledged by buyers.

The success of Styron's price initiative will depend on how other producers in the European market approach November.
Different cost positions could mean that others will forge ahead with the initial plan of lowering PS prices by €10–15/tonne.

Net general purpose polystyrene (GPPS) prices are trading at €1,200–1,250/tonne FD NWE, and sources agree that producers' margins have suffered throughout 2011, when PS pricing has followed the trend set in the styrene market.

"The situation is bad," said another PS producer. "We are facing the same situation as in 2005 and 2009." Those years saw permanent capacity closures in the European PS market, to cope with a structural change in demand.

"I still expect minus €25/tonne for November PS," said one buyer. "I think they are just trying to set the scene for December and get people to buy."

PS is used widely in the packaging and household sectors.

($1 = €0.74)

For more on styrene visit ICIS chemical intelligence
http://www.icis.com/Articles/2011/11/11/9507663/styron-to-target-increase-for-nov-ps-with-immediate.html
In the last post of the series, we took a look at the Observer pattern. This time we're going to explore the Composite pattern. The Composite pattern gives us the ability to take a complex procedure that may involve many steps and turn it into something that is simple for consumers to use.

The classic definition of the Composite pattern involves three pieces: Component, Leaf, and Composite Component. The component defines the interface that the units (leaves and/or composite components) must implement. A leaf is an implementation of a component that performs work. The composite component also implements the component interface, but that's where the similarities end. Under the covers, it contains a collection of components which could be leaves or nested composites. When an interface method is called, it delegates the call to its child components. This may sound more complicated than it actually is. Let's take a look at an example to see how this could work.

C# Example

In this example, we're going to take the process of changing a car's oil and break it down into a group of independent tasks. Following our definition above, let's define an IComponent interface. Per the contract of this interface, all of our "task" objects will need to include a PerformTask() method.

    public interface IComponent
    {
        void PerformTask();
    }

Next, we're going to create a base class for the tasks that are made up of a collection of child tasks. Notice that although it implements the PerformTask() method of the IComponent interface, it actually delegates the work to its child components.

    public abstract class CompositeComponent : IComponent
    {
        private IList<IComponent> _tasks = new List<IComponent>();

        public void PerformTask()
        {
            foreach (var task in _tasks)
                task.PerformTask();
        }

        public void AddTask(IComponent task)
        {
            _tasks.Add(task);
        }

        public void RemoveTask(IComponent task)
        {
            _tasks.Remove(task);
        }
    }

Now let's create our first needed task (aka. leaf).
This one represents the draining of the old oil in the vehicle.

    public class DrainOldOilTask : IComponent
    {
        public void PerformTask()
        {
            Console.WriteLine("Draining old oil.");
        }
    }

The next step is to replace the oil filter, which is made up of two steps: removing the old filter and installing the new one. To demonstrate this, we'll create a new CompositeComponent subclass.

    public class RemoveOldFilterTask : IComponent
    {
        public void PerformTask()
        {
            Console.WriteLine("Removing old filter.");
        }
    }

    public class InstallNewFilterTask : IComponent
    {
        public void PerformTask()
        {
            Console.WriteLine("Installing new filter.");
        }
    }

    public class ReplaceFilterTask : CompositeComponent
    {
        public ReplaceFilterTask()
        {
            AddTask(new RemoveOldFilterTask());
            AddTask(new InstallNewFilterTask());
        }
    }

Last, we need to create a task for adding the new oil as well as a composite "ChangeOil" task that ties it all together. Notice that the ChangeOil object is composed of both regular tasks (DrainOldOilTask and AddNewOilTask) as well as a composite task (ReplaceFilterTask).

    public class AddNewOilTask : IComponent
    {
        public void PerformTask()
        {
            Console.WriteLine("Adding new oil.");
        }
    }

    public class ChangeOil : CompositeComponent
    {
        public ChangeOil()
        {
            AddTask(new DrainOldOilTask());
            AddTask(new ReplaceFilterTask());
            AddTask(new AddNewOilTask());
        }
    }

Now the entire process is a simple method call.

    new ChangeOil().PerformTask();

Output:

    Draining old oil.
    Removing old filter.
    Installing new filter.
    Adding new oil.

Ruby Example

With Ruby, we no longer need to define a component interface. We just need to ensure that our classes define a perform_task() method. We will, however, want to create a base class for our composite components.

    class CompositeComponent
      def initialize
        @tasks = []
      end

      def perform_task
        @tasks.each {|task| task.perform_task}
      end

      def add_task(task)
        @tasks << task
      end

      def remove_task(task)
        @tasks.delete task
      end
    end

Now we'll create our task objects.
Besides basic syntax, the structure of these classes is pretty much the same as the C# versions.

    class DrainOldOilTask
      def perform_task
        puts "Draining old oil."
      end
    end

    class RemoveOldFilterTask
      def perform_task
        puts "Removing old filter."
      end
    end

    class InstallNewFilterTask
      def perform_task
        puts "Installing new filter."
      end
    end

    class ReplaceFilterTask < CompositeComponent
      def initialize
        super
        add_task RemoveOldFilterTask.new
        add_task InstallNewFilterTask.new
      end
    end

    class AddNewOilTask
      def perform_task
        puts "Adding new oil."
      end
    end

    class ChangeOil < CompositeComponent
      def initialize
        super
        add_task DrainOldOilTask.new
        add_task ReplaceFilterTask.new
        add_task AddNewOilTask.new
      end
    end

Using the objects is also very much the same.

    ChangeOil.new.perform_task

Output:

    Draining old oil.
    Removing old filter.
    Installing new filter.
    Adding new oil.

As you can see, implementing the pattern in the two languages is pretty similar. The biggest difference is the lack of need for a component interface within the Ruby example. Next time, we'll be looking at the Iterator pattern. Stay tuned!
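As an aside (not part of the original Ruby-vs-C# comparison), the same structure in Python ends up looking almost identical to the Ruby version, since duck typing again removes the need for an explicit component interface. A minimal sketch:

```python
class CompositeComponent:
    """Composite: holds child components and delegates perform_task to them."""

    def __init__(self):
        self._tasks = []

    def perform_task(self):
        for task in self._tasks:
            task.perform_task()

    def add_task(self, task):
        self._tasks.append(task)


class DrainOldOilTask:          # a leaf
    def perform_task(self):
        print("Draining old oil.")


class AddNewOilTask:            # another leaf
    def perform_task(self):
        print("Adding new oil.")


class ChangeOil(CompositeComponent):
    def __init__(self):
        super().__init__()
        self.add_task(DrainOldOilTask())
        self.add_task(AddNewOilTask())


ChangeOil().perform_task()
```

Any object with a perform_task method, leaf or composite, can be added to a composite, which is the whole point of the pattern.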
http://www.gembalabs.com/2009/07/22/comparing-design-patterns-in-ruby-and-c-the-composite-pattern/
    use Email::MIME::CreateHTML;
    my $email = Email::MIME->create_html(
        header => [
            From => 'my@address',
            To => 'your@address',
            Subject => 'Here is the information you requested',
        ],
        body => $html,
        text_body => $plain_text
    );

    use Email::Send;
    my $sender = Email::Send->new({mailer => 'SMTP'});
    $sender->mailer_args([Host => 'smtp.example.com']);
    $sender->send($email);

This module allows you to build HTML emails, optionally with a text-only alternative and embedded media objects. For example, an HTML email with an alternative version in plain text and with all the required images contained in the mail.

The HTML content is parsed looking for embeddable media objects. A resource loading routine is used to fetch content from those URIs and replace the URIs in the HTML with CIDs. The default resource loading routine is deliberately conservative, only allowing resources to be fetched from the local filesystem. It's possible and relatively straightforward to plug in a custom resource loading routine that can resolve URIs using a broader range of protocols. An example of one using LWP is given later in the "COOKBOOK". The MIME structure is then assembled, embedding the content of the resources where appropriate.

Note that this module does not send any mail, it merely does the work of building the appropriate MIME message. The message can be sent with Email::Send or any other mailer that can be fed a string representation of an email message.

The mail construction is compliant with rfc2557.

HTML, no embedded objects (images, flash, etc), no text alternative:

    text/html

HTML, no embedded objects, with text alternative:

    multipart/alternative
        text/plain
        text/html

HTML with embedded objects, no text alternative:

    multipart/related
        text/html
        embedded object one
        embedded object two
        ...

HTML with embedded objects, with text alternative:

    multipart/alternative
        text/plain
        multipart/related
            text/html
            embedded object one
            embedded object two
            ...
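The nesting shown above can be reproduced with Python's standard email package, which is a convenient way to see the rfc2557 layout this module targets (this is only an illustration in another language, not the Perl module's API):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "my@address"
msg["To"] = "your@address"
msg["Subject"] = "Here is the information you requested"

# text/plain body first...
msg.set_content("Plain text version.")
# ...then the HTML part; add_alternative upgrades the message
# to multipart/alternative.
msg.add_alternative(
    '<html><body><p>HTML version with an image:'
    '<img src="cid:img1"></p></body></html>',
    subtype="html",
)
# Attach an embedded object to the HTML part, wrapping it in
# multipart/related around text/html.
html_part = msg.get_payload()[1]
html_part.add_related(b"\x89PNG...", maintype="image",
                      subtype="png", cid="<img1>")

print(msg.get_content_type())                              # multipart/alternative
print([p.get_content_type() for p in msg.iter_parts()])    # ['text/plain', 'multipart/related']
```

The resulting tree is exactly the "HTML with embedded objects, with text alternative" case: multipart/alternative containing text/plain and a multipart/related that holds the text/html plus the embedded object.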
There is only one method, which is installed into the Email::MIME package: create_html. This method creates an Email::MIME object from a set of named parameters. Of these the header and body parameters are mandatory and all others are optional. See the "PARAMETERS" section for more information.

Email::MIME::CreateHTML also defines a lower-level interface of 3 building-block routines that you can use for finer-grain construction of HTML mails. These may be optionally imported:

    use Email::MIME::CreateHTML qw(embed_objects parts_for_objects build_html_mail);

embed_objects: This parses the HTML and replaces URIs in the embed list with a CID. The modified HTML and CID to URI mapping is returned. Relevant parameters are: embed, inline_css, base, object_cache, resolver. The meanings and defaults of these parameters are explained below.

parts_for_objects: This creates a list of Email::MIME parts for each of the objects in the supplied CID mapping. Relevant options are: base, object_cache, resolver. The meanings and defaults of these parameters are explained below.

build_html_mail: This assembles a ready-to-send Email::MIME object (that can be sent with Email::Send).

header: A list reference containing a set of headers to be created. If no Date header is specified, one will be provided for you based on the gmtime() of the local machine.

body: A scalar value holding the HTML message body. This is passed as the attributes parameter to the create method (supplied by Email::MIME::Creator) that creates the html part of the mail. The body content-type will be set to text/html unless it is overridden here.

embed: Attach relative images and other media to the message. This is enabled by default. The module will attempt to embed objects defined by embed_elements. Note that this option only affects the parsing of the HTML and will not affect the objects option. The object's URI will be rewritten as a Content ID.

embed_elements: The set of elements that you want to be embedded. Defaults to the %Email::MIME::CreateHTML::EMBED package global.
This should be a data structure of the form:

    embed_elements => {
        $elementname_1 => {$attrname_1 => $boolean_1},
        $elementname_2 => {$attrname_2 => $boolean_2},
        ...
    }

i.e. a resource will be embedded if $embed_elements->{$elementname}->{$attrname} is true.

resolver: If a resolver is supplied this will be used to fetch the resources that are embedded as MIME objects in the email. If no resolver is given the default behaviour is to choose the best available resolver to read $uri with any $base value prefixed. Resources fetched using the resolver will be cached if an object_cache is supplied.

base: This must be a filepath or a URI. If embed is true (the default) then base will be used when fetching the objects. Examples of good bases:

    ./local/images
    /home/somewhere/images

inline_css: Inline any external CSS files referenced through link elements. Enabled by default. Some mail clients will only interpret css if it is inlined.

objects: A reference to a hash of external objects. Keys are Content Ids and the values are filepaths or URIs used to fetch the resource with the resolver. We use MIME::Types to derive the type from the file extension. For example in an HTML mail you would use the file keyed on '12345678@bbc.co.uk' like

    <img src="cid:12345678@bbc.co.uk" alt="a test" width="20" height="20" />

object_cache: A cache object can be supplied to cache external resources such as images. This must support the following interface:

    $o = new ...
    $o->set($key, $value)
    $value = $o->get($key)

Both the Cache and Cache::Cache distributions on CPAN conform to this.

text_body: A scalar value holding the contents of an additional plain text message body. This is passed as the attributes parameter to the create method (supplied by Email::MIME::Creator) that creates the plain text part of the mail. The body Content-Type will be set to text/plain unless it is overridden here.

This is the default set of elements (and the relevant attributes that point at a resource) that will be embedded.
The default for this is:

    'bgsound' => {'src'=>1},
    'body' => {'background'=>1},
    'img' => {'src'=>1},
    'input' => {'src'=>1},
    'table' => {'background'=>1},
    'td' => {'background'=>1},
    'th' => {'background'=>1},
    'tr' => {'background'=>1}

You can override this using the embed_elements parameter.

This builds an HTML email:

    my $email = Email::MIME->create_html(
        header => [
            From => 'my@address',
            To => 'your@address',
            Subject => 'My speedy HTML',
        ],
        body => $html
    );

If you want a plaintext alternative, include the text_body option:

    my $email = Email::MIME->create_html(
        header => [
            From => 'my@address',
            To => 'your@address',
            Subject => 'Here is the information you requested',
        ],
        body => $html,
        text_body => $plain_text #<--
    );

If you want your images to remain as links (rather than be embedded in the email) disable the embed option:

    my $email = Email::MIME->create_html(
        header => [
            From => 'my@address',
            To => 'your@address',
            Subject => 'My speedy HTML',
        ],
        body => $html,
        embed => 0 #<--
    );

By default, the HTML is parsed to look for objects and stylesheets that need embedding.
If you are controlling the construction of the HTML yourself, you can use Content Ids as the URIs within your HTML and then pass in a set of objects to associate with those Content IDs:

    my $html = qq{
        <html><head><title>My Document</title></head><body>
        <p>Here is a picture:</p><img src="cid:some_image_jpg@bbc.co.uk">
        </body></html>
    };

You then need to create a mapping of the Content IDs to object filenames:

    my %objects = (
        "some_image_jpg@bbc.co.uk" => "/var/html/some_image.jpg"
    );

Finally you need to disable both the embed and inline_css options to turn off HTML parsing, and pass in your mapping:

    my $quick_to_assemble_mime = Email::MIME->create_html(
        header => [
            From => 'my@address',
            To => 'your@address',
            Subject => 'My speedy HTML',
        ],
        body => $html,
        embed => 0,      #<--
        inline_css => 0, #<--
        objects => \%objects #<--
    );

If you have for example a personalised newsletter where your HTML will vary slightly from one email to the next, but you don't want to re-parse the HTML each time to re-fetch and attach objects, you can use the embed_objects function to pre-process the template, converting URIs into CIDs:

    use Email::MIME::CreateHTML qw(embed_objects);
    my ($preproc_tmpl_content, $cid_mapping) = embed_objects($tmpl_content);

You can then reuse this and the CID mapping:

    my $template = compile_template($preproc_tmpl_content);
    foreach $newsletter (@newsletters) {
        #Do templating
        my $html = $template->process($newsletter);

        #Build MIME structure
        my $mime = Email::MIME->create_html(
            header => [
                From => $reply_address,
                To => $newsletter->address,
                Subject => 'Weekly newsletter',
            ],
            body => $html,
            embed => 0,      #Already done
            inline_css => 0, #Already done
            objects => $cid_mapping #Here's one we prepared earlier
        );

        #Send email
        send_email($mime);
    }

Note that one caveat with this approach is that all possible images that might be used in the template will be attached to the email.
Depending on your template logic, it may be that some are never actually referenced from within the email (e.g. if an image is conditionally displayed), so this may create unnecessarily large emails.

A custom resource resolver can be specified by passing your own object to resolver:

    my $mime = Email::MIME->create_html(
        header => [
            From => 'my@address',
            To => 'your@address',
            Subject => 'Here is the information you requested',
        ],
        body => $html,
        base => '',
        resolver => new MyResolver, #<--
    );

The object needs to have the following API:

    package MyResolver;
    sub new {
        my ($self, $options) = @_;
        my $base_uri = $options->{base};
        #... YOUR CODE HERE ... (probably want to stash $base_uri in $self)
    }
    sub get_resource {
        my ($self, $uri) = @_;
        my ($content,$filename,$mimetype,$xfer_encoding);
        #... YOUR CODE HERE ...
        return ($content,$filename,$mimetype,$xfer_encoding);
    }

where:

    $uri is the URI of the object we are embedding (taken from the markup or passed in via the CID mapping)
    $base_uri is the base URI used to resolve relative URIs
    $content is a scalar containing the contents of the file
    $filename is used to set the name attribute of the Email::MIME object
    $mimetype is used to set the content_type attribute of the Email::MIME object
    $xfer_encoding is used to set the encoding attribute of the Email::MIME object (note this is the suitable transfer encoding NOT a character encoding)

You can use a cache from the Cache::Cache distribution:

    use Cache::MemoryCache;
    my $mime = Email::MIME->create_html(
        header => \@headers,
        body => $html,
        object_cache => new Cache::MemoryCache( {
            'namespace' => 'MyNamespace',
            'default_expires_in' => 600
        } )
    );

Or a cache from the Cache distribution:

    use Cache::File;
    my $mime = Email::MIME->create_html(
        header => \@headers,
        body => $html,
        object_cache => Cache::File->new(
            cache_root => '/tmp/mycache',
            default_expires => '600 sec'
        )
    );

Alternatively you can roll your own.
You just need to define an object with get and set methods:

    my $mime = Email::MIME->create_html(
        header => \@headers,
        body => $html,
        object_cache => new MyCache()
    );

    package MyCache;
    our %Cache;
    sub new {return bless({}, shift())}
    sub get {return $Cache{shift()}}
    sub set {$Cache{shift()} = shift()}
    1;

Perl Email Project

Maybe add option to control the order that the text + html parts appear in the MIME message.

Tony Hennessy and Simon Flack with cookbook + some refactoring by John Alden <cpan _at_ bbc _dot_ co _dot_ uk> with additional contributions by Ricardo Signes <rjbs@cpan.org> and Henry Van Styn <vanstyn@cpan.org>

(c) BBC 2005,2006. This program is free software; you can redistribute it and/or modify it under the GNU GPL. See the file COPYING in this distribution, or
http://search.cpan.org/dist/Email-MIME-CreateHTML/lib/Email/MIME/CreateHTML.pm
can't compile class

When I attempt to compile your class, it generates 16 errors. The primary cause is the 2 imports:

    Import javax.servlet.jsp.*;
    Import javax.servlet.jsp.tagtext.*;

I downloaded the javax classes from Sun, but I don't know how to use them.

your class had errors

I cleaned up your code. It had multiple errors with return types, misspelled variables and classes, and illegal characters. This compiles correctly:

    package mytags;
    import javax.servlet.jsp.*;
    import javax.servlet.jsp.tagext.*;
    publi

one more thing

Actually, 2 more things. You used illegal double quotes in 2 argument lists.

    pagecontext.getOut().write()

and

    throw new JspException()

They seem to be extended ASCII double quotes, probably from Microsoft Word, or UTF-8 encoding. They

another thing

Your XML file has an error:

    <body context>empty</body context>

The body tag is incorrect. If context is an attribute, it must have a value. ex. <body context="value"> This error also stops the tag library class from working. Fix up this

another error

Again, you have another error in your .tld file:

    <tagclass>mytags.HelloWorld</tagclass>

You spelled the class "HelloWorld", but your actual class is spelled "Helloworld". Who wrote this crap?

Your code has lots of errors

I cleaned up your code. It had multiple errors with return types, misspelled variables and classes, and illegal characters. This compiles correctly:

    cd C:\j2ee\jdk\bin
    javac -classpath C:\j2ee\lib\j2ee.jar C:\tomcat\webapps\tutorials\WEB-INF\c

problem in the above code

here is the fresh one for HelloWorld.class file will be

    import javax.servlet.jsp.*;
    import javax.servlet.jsp.tagext.Tag;
    public class HelloWorld implements Tag {
        private PageContext pagecontext;
        private Tag parent;
        pub

errors in this code

what is this the code is not working there are so many errors. Thanx

RoseIndia

The information provided here is very helpful.
RoseIndia is one of my favourite sites for help regarding any technical topic.

.tld file

how to save .tld file and how to run class file.

how to run tld (custom tags)

i don't know how to run custom tags in jsp technology ..

compile

how to compile .class files using eclipse

unable to compile class file - JSP-Servlet

unable to compile class file I wrote database connection in jsp file...*; import java.util.ArrayList; public class ComboboxList extends HttpServlet...("/jsp/Combobox.jsp"); dispatcher.forward(request, response

why this can't

why this can't import java.util.*; class Dash{ public static void main(String args[]){ int x0=100; int[] x1=new int[3]; int[][] x2=new int[3][3]; int[][][] x3=new int[3][3][3

compile error

compile error Hello All for example public class... program with Test.java and try to compile with javac test.java an error like test.java :2 : class A is public, should be declared in a file named A.java

how to compile programs??????????

how to compile programs?????????? "javac" is not recognised as a file name. why?????????? Have you set your path name and class name correctly? Anyways have a look at the following link: Install Java
http://roseindia.net/tutorialhelp/allcomments/155534
Code:

    //after a long time, i figured this out
    #include <iostream>
    using namespace std;
    int main()
    {
        double ans1;
        int x;
        cout<<"Hi there! I am PC, the computer. What's your name?:" << endl;
        cout<<"Hi " <<endl;
        cout<<"Game?:";
        cin >>x;
        if(x==1)
        {
            cout<<"All right! Ok... This is a questionaire! You answer correctly, "you move on. 1 question. 1st question:"<<endl<<endl;
            cout<<"What is 398+276?: "<<endl;
            cin>>ans1;
            if(ans1==674)
            {
                cout<<"That is correct!!!";
            }
            else
            {
                cout<<"Wrong!";
            }
        }
        cin.get();
        return 0;
    }

the compiler says that line #15 has errors. that line is the first cout. it says missing terminating " character. what the heck am i missing here.
http://cboard.cprogramming.com/cplusplus-programming/57076-error.html
CC-MAIN-2015-40
refinedweb
111
88.94
- dominik
Thank you very much! I will have a look at that! For the time being, I figured a kind of a workaround for my purposes in pythonista (& editorial), using paramiko and this simple script of omz: ...plus a shell script.

dominik
Current situation looks as follows:

import sqlalchemy.dialects.mysql.pymysql as mariadb
conn = mariadb.MySQLDialect_pymysql.connect(host= 'myhost', user = 'myuser', passwd='', db='mydb', port = 3307)

Error Message:
TypeError: connect() missing 1 required positional argument: 'self'

...guess, I will have to dig deeper into the dot notation specifics.

dominik
Thanks a lot for your feedback! I will try and let you know about the results!

dominik
...you go to Python Modules > Standard Library (3.6) > ...and then search! You can even select the mysql folder as a favorite, but no chance to copy or move it anywhere else.

dominik
Hi, Pythonista provides a msql folder in the Python Modules area. But unfortunately using "import mysqldb" fails! I get the message: No Module named mysqldb. I would be glad about any hint! Thanks in advance!
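The TypeError in dominik's second post comes from calling an instance method on the dialect class itself rather than on an instance. A minimal, database-free sketch of the same failure mode (the `Dialect` class here is illustrative, not SQLAlchemy's actual API):

```python
# Calling an instance method through the class requires an explicit
# `self` argument, which is exactly what the traceback complains about.
class Dialect:
    def connect(self, host, user):
        return (host, user)

try:
    Dialect.connect(host='myhost', user='myuser')  # mimics MySQLDialect_pymysql.connect(...)
except TypeError as err:
    print(err)  # ... missing 1 required positional argument: 'self'

# Going through an instance avoids the problem:
print(Dialect().connect(host='myhost', user='myuser'))
```

In practice that usually means calling the driver's module-level function directly (e.g. `pymysql.connect(host=..., user=..., ...)`) instead of reaching into SQLAlchemy's dialect classes.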
https://forum.omz-software.com/user/dominik
CC-MAIN-2020-29
refinedweb
175
70.5
That BizTalk guy from India
Benny Mathew
June 2007 Entries

Did you know? – You can read / write to BAM database directly from outside BizTalk.
You know that BAM is used to gather statistics from your BizTalk application. What you probably don't know is that:
· You can collect BAM data from your non-BizTalk applications such as external .NET components that BizTalk calls into.
· Tracking profile editor (TPE) is not the only way to collect data: you can use a set of APIs available in the Microsoft.BizTalk.Bam.Event... namespace to read and write directly into the BAMPrimaryImport database.
Check out some of the links here: ...
Posted On Thursday, June 14, 2007 3:42 PM

Open Source BizTalk Utilities on CodePlex
I ...
Posted On Thursday, June 14, 2007 3:01 PM

Links to materials on GoF Design Patterns
As I am into BizTalk consulting / development / training for quite sometime now, I feel that I am missing out on hardcore c# programming and the enchantment of object oriented concepts lately. For instance, the other day I was trying to recollect the implementation details of the Decorator design pattern, and searched for it on the Internet and in the process, got glued at the wealth of information available on design patterns. So I thought it would be helpful to have a quick reference list of the ...
Posted On Thursday, June 14, 2007 12:45 PM
http://geekswithblogs.net/benny/archive/2007/06.aspx
CC-MAIN-2013-20
refinedweb
249
69.62
Re: [soaplite] -soaplite ~C question. Expand Messages - Hi, Seth! Hm, looks like a bug for me. You may download the latest version and specify type for one of the elements explicitely (so arrayType on outer array will be "ur-type[]"), but it's just temporarely and will be fixed in next version (it's about to be released). I don't think you could do anything on ApacheSOAP side, but specifying type explicitely on server side should help. Sorry for inconvenience. Let me know if you'll find anything else or will have any other questions. Best wishes, Paul. --- Seth Sternglanz <ssternglanz@...> wrote: > Hey all, I'm kind of new to SOAP but the SOAP::Lite perl library__________________________________________________ > seems very cool. I had no problems getting my perl soap client to > talk to > my perl soap server. However, as part of the project I'm working > on, I have > to get a Java-apache-soap-client to talk to my Perl soap server. > I'm having > a little trouble at the moment. > > When my SOAP server returns an array of arrays it sends a default > namespace URI of ~C for the outer array. For example: > > ----excerpt of SOAP server response----------- > <SOAP-ENC:Array xsi: SOAP-ENC: > <s-gensym24 xsi: SOAP-ENC: > ---end excerpt-------------------------------------------- > > I noticed that on line 474 of SOAP/Lite.pm it says: > > $type = '~C:Array' if $self->autotype && !defined $type; # make > ApacheSOAP > users happy > > Which is interesting. However, my ApacheSOAP client is unhappy--it > says: > > Caught SOAPException (SOAP-ENV:Client): Unable to resolve namespace > URI for > '~C'. > > I was wondering if anyone could point me in the direction to go to > get my ApacheSOAP client to understand my SOAP::Lite server's > response. > Should I be turning off autotyping on the server side? Or writing a > custom > de-serializer on the ApacheSOAP client side? Or is there an easier > way to do > this? > > Thanks for the help! > > -Seth > > ------------------------.
https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/109?o=1&d=-1
CC-MAIN-2016-07
refinedweb
321
75.4
Development/Tutorials/Phonon/Backends - Revision history
2016-10-20T21:49:25Z — Revision history for this page on the wiki (MediaWiki 1.26.2)
Apachelogger: port from — 2010-12-23T14:49:48Z (New page)

Phonon itself does not implement any multimedia functionality and depends completely on a backend to do what the applications want. This is comparable to engines in Amarok or Player subclasses in JuK but with one big difference: once your backend works correctly it will make all KDE applications using Phonon use the mediaframework you used for implementing the backend.

Subscribe to the phonon-backends mailinglist if you want to discuss development of your backend or read about the development of other backends.

== How to Write a Backend for Phonon ==

To simplify the text the following terms are used below:

The backend has to implement some abstract classes (always named Interface) and some QObject interfaces. The classes/objects that implement those interfaces I will denote with impl classes/objects.
For most media frameworks the objects have to be connected in some way forming a graph. I will simply talk about a graph or flow graph.

In order to get started and see some progress early, I recommend to copy the fake backend that ships with Phonon. First you'll want to edit the CMakeLists.txt and phonon_fake.desktop files. In the CMakeLists.txt you can comment out all classes except MediaObject, AudioPath, AudioOutput and Backend. Those four are the minimum requirement for audio playback. In the Backend class you return 0 for all classes that you have not implemented. In the source you then change the namespace from Fake to whatever your backend is called. Then you can get going on implementing the three classes.

To test whether your classes are working you can use the test programs from the tests directory. Compiling kdelibs with KDE4_BUILD_TESTS will create the test programs in builddir/phonon/tests/.

There are two approaches on how to implement the functionality:

* Create graph objects in impl objects

This is an approach that was already successfully used for a proof-of-concept aRts backend. The MediaObject impl creates the PlayObject. The AudioPath impl creates a StereoEffectStack and Synth_MULTI_ADD if needed. The AudioOutput impl creates the Synth_AMAN_PLAY object. You get the idea. All the common objects are held in the Backend object and are accessible for the other impl objects. Examples for this are the KArtsServer and KArtsDispatcher classes used in the aRts backend.

You might want to consider this approach if it is easy to find a 1:1 mapping between impl classes and entities of the mediaframework.

* Use the impl objects as a description to let some other entity create the graph

Create simple impl classes that do nothing else than describe the state that is requested from the application. Another class then can use that description to create the according flow graph. Whenever the state of the impl objects changes this class is notified and can take action accordingly.

This approach is what you need if you cannot find a 1:1 mapping like above, and might often be the more flexible approach even if you have a good mapping of Phonon impl classes to media framework entities.

Apachelogger
https://techbase.kde.org/index.php?title=Development/Tutorials/Phonon/Backends&feed=atom&action=history
CC-MAIN-2016-44
refinedweb
572
54.32
Hi Phil,

Although the question might seem easy, there is no standard tool in ArcGIS to create this type of output. The Solar Radiation tools in ArcGIS can evaluate the polygon derived in your analysis: ArcGIS Help (10.2, 10.2.1, and 10.2.2)

For questions like yours, a python script might be an option. I created a start for you based on a script "sunpos.py" downloaded from: The idea is to iterate over a date interval. During the iteration there is a second loop over a time interval. I would suggest that you avoid getting near to sunrise and sunset since this would create an angle so small that you would have to clear a lot of forest. Based on the location provided (lat, lon values in script) the altitude and azimuth are returned for each date, time. To make this data manageable, I rounded azimuth to a whole number and create a dictionary to hold the minimum altitude values of the sun. At the end the dictionary is sorted and can be used (not yet implemented) to translate to distance of clearing forest. The azimuth and distance should be translated to a geographic location and can be combined to create a polygon for logging.
import sunpos  # downloaded from:
import collections
from datetime import date, timedelta

# setting
start_date = date(2014, 5, 15)
end_date = date(2014, 9, 30)
start_time = 9      # 9 am
end_time = 17       # 5 pm
time_interval = 1   # 0.25 = every 15 minutes
lat = 4.5
lon = -74

dct = {}
d = start_date
delta = timedelta(days=1)
while d <= end_date:
    t = start_time
    while t <= end_time:
        alt, azi = sunpos.time_and_location_to_sun_alt_azimuth(d.year, d.month, d.day, t, lat, lon)
        azi2 = round(azi)
        if azi2 in dct:
            if alt < dct[azi2]:
                dct[azi2] = alt
        else:
            dct[azi2] = alt  # update dictionary (rounded azimuth vs min altitud)
        t += time_interval
    d += delta

odct = collections.OrderedDict(sorted(dct.items()))
# now you have a list of angles that you could translate to a distance
print "azimuth\taltitud"
for azi, alt in odct.items():
    print "{0}\t{1}".format(azi, alt if alt <= 90 else 360 - alt)

Hope this helps you a bit. Kind regards, Xander

Thank you, Xander - I appreciate the help. While I am not at all competent with python, I had decided to pretty much do what you are talking about in Excel with a VBA script, and then import the results into ArcMap. I have started working with the Excel spreadsheet that I downloaded from NOAA and am going from there. I did take a very basic online course from ESRI in Python, but one of these days I will buckle down and learn it. Regards, Phil Freeman

Good to hear that Phil. Python is the scripting language for these type of problems. If you manage to solve the problem, share a little map of the result. It will be interesting to see what it looks like. Kind regards, Xander

Will do. Phil

Xander, Attached is the map I created using the methodology we discussed. Basically, what I did was compute a clearing distance at intervals of 1 degree azimuth from a spot near the center of the nesting pad (this is a wood turtle nesting habitat project). I repeated this for each day during the time of interest – May 20-September 15 – between 8:30 A.M. and 7:00 P.M.
Then, after making these computations, I used the greatest clearing distance for each azimuth. The ones outside of the pine stand are essentially irrelevant because the vegetation there is low. I assumed a 70 foot height for the pine stand and did the computations using the NOAA spreadsheet for sun azimuth and angle above the horizon and calculating the distance by dividing 70 feet by the tangent of the sun’s angle above the horizon. I made it an iterative process using a VBA macro in the spreadsheet and printing the end results to a second sheet I created. Does all of this seem right to you? Thanks. Phil Freeman Bayfield Regional Conservancy Hi Phil, Thanks for sharing! It sounds correct to me, although some of the point locations look a bit odd (at the extremes east and west, but might have to do with the differences in angle when it's near to sunrise or sunset). You could use the solar radiation tools provided by ArcGIS to test the area you want to clear. You should add the trees (area not being logged with an altitude of 70ft) and test the solar radiation it receives. Kind regards, Xander Thank you, Xander. I too thought those locations looked odd, but I can’t find any flaws. You know, most of us kind of take the sun and its location in the sky for granted, but I think you nailed it regarding especially sunset. I believe the time period we specified and the times of day resulted in that long tail to the west. Fortunately, the sky is open in that direction so it won’t affect what we do. am a forester by training and profession, and this exercise reinforces what I have always known – that small openings (this “stand” is only about ¼ acre ) just are not sufficient for regenerating shade-intolerant forest types. Phil
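Phil's spreadsheet arithmetic above can be sketched in Python. The 70 ft canopy height and the azimuth convention (degrees clockwise from north) follow his description; the function name is mine:

```python
import math

TREE_HEIGHT_FT = 70.0  # assumed pine stand height, as in Phil's post

def clearing_offset(azimuth_deg, altitude_deg):
    """(east, north) offset in feet of the clearing edge for one sun position.

    Clearing distance is tree height divided by tan(sun altitude);
    the offset points toward the sun's azimuth.
    """
    dist = TREE_HEIGHT_FT / math.tan(math.radians(altitude_deg))
    az = math.radians(azimuth_deg)
    return dist * math.sin(az), dist * math.cos(az)

# Sun due south (azimuth 180 deg) at 45 deg altitude -> clear 70 ft to the south:
x, y = clearing_offset(180.0, 45.0)
print(round(x, 1), round(y, 1))  # → 0.0 -70.0
```

Taking the maximum distance per azimuth over all dates and times, as Phil describes, then gives one point per degree for the clearing polygon.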
https://community.esri.com/t5/geoprocessing-questions/using-arcgis-to-map-forest-clearing-needs-to-get/td-p/37323
CC-MAIN-2022-33
refinedweb
876
70.33
; } Visual Basic .NET supports the same construct: Private Sub Form1_Paint( _ ByVal sender As Object, _ ByVal e As _ System.Windows.Forms.PaintEventArgs) Handles MyBase.Paint Dim g As Graphics = e.Graphics End Sub Once you have a Graphics object, you are ready to draw on the window. Note that whenever I refer to a "window" as the drawing canvas, I also mean "control." Internally, controls are handled just like windows. Simple Line Drawings A fundamental task performed by the graphics-object, is drawing lines and curves. You can use a number of methods for this purpose. In GDI+, unlike in regular (older) GDI, drawing lines and filling areas are two entirely different operations. When you draw lines, you must consider a number of fundamental things. For example, you need to choose what kind of line to draw.. Do you want a straight line, or a curve? Do you want a simple, single line, or do you want to draw a complex line composed out of many segments? Maybe you want to draw a closed shape that forms a completely enclosed area with an identical start and end point (such as a circle, rectangle, or polygon)? Depending on the desired shape of the line, different methods are available to generate them. Less obvious than the position and shape of the line, are the parameters for a line. You might ask, "What parameters can I specify for a line?" You can specify the following parameters: attributes such as color and thickness, start and end-points, and the shape of the end of a line such as whether the line ends in a rounded or square "head" or ends in an arrow. In GDI+, lines are represented by Pen objects. Pens encapsulate all the attributes described above. GDI+ provides a number of default Pen objects, such as pens of different colors. The following code demonstrated drawing a simple straight line using different pens. (This is VB .NET code. 
C# developers add a semi-colon at the end of the line): g.DrawLine(Pens.Red,10,10,200,100) g.DrawLine(Pens.Green,10,30,200, 120) g.DrawLine(Pens.Blue,10,50,200,140) If you want to adjust the thickness of the used pen, you need to instantiate a custom pen object. This example generates a 5-pixel thick red pen and uses it to draw another line: g.DrawLine( _ New Pen(Color.Red, 5), _ 10, 100, 200, 190) You instantiate the Pen using the line color and thickness as parameters. Once again, the C# version of the code is very similar: Simply add a semi-colon at the end and write the "new" keyword in lower case. Figure 1 shows the result of all 4 lines of code listed above. If you play with the DrawLine() method a bit, you will discover that it has a large number of overloads, though the result of these overloads is the same. You can just take different paths to your destination. I encourage you to experiment with the different options. For instance, you can draw circles and ellipses: g.DrawEllipse(Pens.Red, _ 10, 10, 150, 80) Similarly, you can draw rectangles with this code: g.DrawRectangle(Pens.Green, _ 20, 100, 120, 60) The following example draws a Bezier curve. Explaining the details of Bezier curves is beyond the scope of this article (Time to drag out your old math text books...): g.DrawBezier(Pens.Blue, _ 170, 10, 250, 90, 170, 90, 250, 180) Figure 2 shows the result of these 3 drawing operations. Drawing Complex Figures Drawing lines and rectangles works very well when you need to create custom Windows Forms controls. If you want to create more complex and artistic drawings such as diagrams, GDI+ lets you draw more complex shapes. In GDI+, this is accomplished using graphics paths. GraphicsPath objects encapsulate a number of line segments. You add individual segments via drawing primitives, such as AddElipse() and AddLine(). GraphicsPath objects make it relatively simple to generate complex shapes by automatically connecting line segments. 
Consider the following code for instance: Dim Person As New GraphicsPath() Person.AddEllipse(23, 1, 14, 14) Person.AddLine(18, 16, 42, 16) Person.AddLine(50, 40, 44, 42) Person.AddLine(38, 25, 37, 42) Person.AddLine(45, 75, 37, 75) Person.AddLine(30, 50, 23, 75) Person.AddLine(16, 75, 23, 42) Person.AddLine(22, 25, 16, 42) Person.AddLine(10, 40, 18, 16) g.DrawPath(Pens.Blue, Person) This simple example generates the shape of a human (well... as close as I can get with my limited artistic abilities) and renders it on the screen as shown in Figure 3. Note: The GraphicsPath class is a member of System.Drawing.Drawing2D. Make sure to import that namespace or reference the class by its fully qualified name. Graphics Quality At this point it is important to discuss the quality of the graphics you render. When you draw vertical and horizontal lines, quality is not a big concern because GDI+ draws lines simply by setting the colors of pixels that are all lined up in a row. When you draw lines at an angle (or curves), things get a bit more tricky. The pixels on your monitor do not correlate with the pixels that should be set based on the mathematical calculation of the drawn line. So the rendering system needs to decide what pixels to use and which ones to leave out. This process is known as aliasing. Aliasing leads to poor looking drawings?you can clearly see a "step" or "jagged" effect. One solution to this problem is a technique known as anti-aliasing. Using this technique, the rendering engine uses different color variations for pixels that should only be partially included, leading to a much smoother appearance to the human eye. You can tell GDI+ how you would like it to optimize a drawing. 
Consider the following code for instance: Dim oPen As New Pen(Color.Blue, 3) g.SmoothingMode = _ SmoothingMode.HighSpeed g.DrawBezier(oPen, _ 10, 10, 90, 90, 10, 90, 90, 180) g.SmoothingMode = _ SmoothingMode.AntiAlias g.DrawBezier(oPen, _ 50, 10, 130, 90, 50, 90, 130, 180) g.SmoothingMode = _ SmoothingMode.HighQuality g.DrawBezier(oPen, _ 90, 10, 170, 90, 90, 90, 170, 180) This renders three similar Bezier Splines at different quality settings. Figure 4 shows a magnified version of the result. Naturally, you want the high-quality version, but quality comes with a cost: performance. Which method you choose will depend on the performance requirements for your application. Filling Shapes As mentioned before, GDI+ also offers ways to fill shapes. The techniques you use to fill shapes is very similar to drawing shapes, except for fill operations you use Brushes. A GDI+ Brush is similar to a GDI+ Pen, but Brushes are often much more powerful. The following example shows how to draw an ellipse filled with a green brush: g.FillEllipse(Brushes.Green, _ 10, 10, 150, 80) In a slightly more complex operation you can fill a shape with a pattern using something called a Hatch Brush. In this example, you can create a diagonal brick effect: Dim oBrush As New HatchBrush( _ HatchStyle.DiagonalBrick, _ Color.Blue, Color.Firebrick) g.FillEllipse(oBrush, _ 10, 100, 150, 80) You can also choose to use a bitmap as a Brush. 
In this code snippet you see that I load one of the default images that ships with Windows into a Bitmap object, then create a TextureBrush based on that image, and use it to fill the ellipse: Dim oBmp As New _ Bitmap("C:\WINDOWS\GREENSTONE.BMP") Dim oBrush2 As New TextureBrush(oBmp) g.FillEllipse(oBrush2, _ 200, 10, 150, 80) Furthermore, you can create gradient brushes as in the following example: Dim oRect As New _ Rectangle(200, 100, 150, 80) Dim oBrush3 As New _ LinearGradientBrush(oRect, _ Color.Red, Color.Blue, _ LinearGradientMode.Vertical) g.FillEllipse(oBrush3, _ 200, 100, 150, 80) I used a Rectangle object to first specify an area that I wanted to confine the gradient to. I then defined two colors, as well as an angle ("Vertical" in this case, but you could also use a numeric value). Note: The LinearGradientBrush class is a member of System.Drawing.Drawing2D. Figure 5 shows a combined result for the last 4 examples. I personally favor gradient brushes. I think shapes filled with a gradient look more professional than shapes filled with a single color. Consider Figure 6, which shows the human shape filled with two different brushes (solid and gradient). Here's the code that fills the "person" path I created before: Dim oRect As New _ Rectangle(0, 0, 100, 100) Dim oBrush As New _ LinearGradientBrush(oRect, _ Color.White, Color.Red, _ LinearGradientMode.Vertical) g.FillPath(oBrush, Person) Whenever you want to create a shape with a fill color as well as an outline (a technique sometimes also referred to as "cell shading"), you need to perform both actions separately (unlike in conventional GDI). Perform the fill operation first and render the outline second to make sure potential drawing inaccuracies do not "cut" through the outline. The Coordinate System Whenever you use GDI+ to draw, you use the GDI+ coordinate system. By default, the coordinate system maps directly to the pixels on your monitor. However, you may want a different behavior. 
You can, in fact, transform the coordinate system if you have special needs. Consider the person shape you created above. The position of that shape is defined by the graphics path object you use. But what if you wanted to draw multiple copies of that shape multiple times in multiple locations? The easiest way to do so is to alter the virtual coordinate system. The Graphics object offers a number of methods to do so. Here's an example: g.DrawPath(Pens.Black, Person) g.TranslateTransform(75, 0) g.DrawPath(Pens.Black, Person) g.ResetTransform() This draws the first person shape at the default position, then moves the origin (point 0,0) of the coordinate system 75 pixels to the right, and draws the shape again. Without the ability to move the coordinate system around, you'd have to create another person path identical to the first one, but located at a different position. You need to reset the transformation after GGDI+ completes the drawing operation in your virtual coordinate system. Otherwise, GDI+ will offset your future drawings to the right. Zooming You may notice that the person shape in my figures seems to be a bit bigger than the one you get when you run the samples. That's because I made GDI+ zoom the shape before I took the screen shot. I used the following scale transformation to do the zooming trick: g.ScaleTransform(2, 2) g.DrawPath(Pens.Black, Person) g.ResetTransform() This zooms everything by a factor of 2 on both axes. You could zoom at different factors for each axis. For instance, you could leave the height of the person at the original level, but change the horizontal zoom: g.ScaleTransform(2, 1) Of course, this makes the little guy look terribly overweight. Most of the time you should zoom at equal factors for both axes. Rotating Another interesting transformation of the coordinate system is its ability to rotate. 
This allows for fancy tricks such as rendering text at an angle: g.TranslateTransform(100, 50) g.RotateTransform(35) g.DrawString("Cool Text", _ New Font("Arial Black", 20), _ Brushes.Blue, 0, 0) g.ResetTransform() This example moves the origin to a new point and then performs a subsequent rotation. Drawing the text then becomes trivial, as you render it at (virtual) position 0,0. Figure 7 shows the result as well as an illustration of the performed transformations. Note that the order in which you perform transformations is of crucial importance. Figure 8 shows what happens if you change the order of transformations As you can see, the resulting position of the text string is different. because I moved the coordinate system. Transformations can be very tricky to do correctly. I generally recommend that you perform rotations after all coordinate movement.
https://www.codemag.com/article/0305031
CC-MAIN-2019-13
refinedweb
2,035
66.64
Could someone explain me what is the space complexity of beyond program, and why is it? def is_pal_per(str): s = [i for i in str] nums = [0] * 129 for i in s: nums[ord(i)] += 1 count = 0 for i in nums: if i != 0 and i / 2 == 0: count += 1 print count if count > 1: return False else: return True s = [i for i in str] nums = [0] * 129 I'm unclear where you're having trouble with this. s is simply a list of individual characters in str. The space consumption is len(s). nums is a constant size, dominated by the O(N) term. Is this code you wrote, or has this been handed to you? The programming style is highly not "Pythonic". As for your code, start with this collapse: count = 0 for char in str: val = ord[char] + 1 if abs(val) == 1: count += 1 print count return count == 0 First, I replaced your single-letter variables (s => char; i => val). Then I cut out most of the intermediate steps, leaving in a couple to help you read the code. Finally, I used a straightforward Boolean value to return, rather than the convoluted statement of the original. I did not use Python's counting methods -- that would shorten the function even more. By the way, do you have to print the count of unity values, or do you just need the Boolean return? If it's just the return value, you can make this even shorter.
https://codedump.io/share/64Nk7ZQBlOTr/1/space-complexity-of-list-creation
CC-MAIN-2017-51
refinedweb
251
79.5
It's easy to get more than 90% accuracy when dealing with popular datasets which are cleaned, tested, split and handled beforehand by experts. You just need to import and feed the dataset to the most popular model architecture found on the internet. In image classification, things get a bit difficult when you are left with a new dataset that has very few images in a class or if the images are not similar to the images you will deal in production. The popular model architecture doesn't seem to help, forcing you into a corner with just 50% accuracy which then turns into a game of probability rather than Machine Learning itself. This article focuses on exploring all of those approaches, tools, and much more to help you build robust models that can be deployable in production without much hassle. Even though some of the methods are applicable to other objectives too, we are focusing on Image Classification to explore the topic. Why do Custom Datasets fail in achieving high accuracy? It is important to address why custom datasets fail mostly in achieving good performance metrics. You might have faced this while trying to use a dataset you created or ones that you got from your team to create a model. Lack of diversity can be one of the main reasons behind the poor performance of the model built on it. Parameters such as lighting, colors, shape, etc of the image can have significant variation, and this may not be considered while constructing the dataset. Data Augmentation might help you solve this, which we will discuss further. Another reason can be the lack of focus on each category: a dataset with 1000+ images of one type of coffee and just 100+ images of the other creates a big imbalance in the features that could be learned. Another failure can be from the source of data collection not matching the source from where data will be collected in production. 
A good example of such a situation can be bird detection from a security camera with poor video quality taken as input for a model trained on high-definition images. There are various approaches with which such situations can be tackled. Why does Production level accuracy matter? Since we’ve discussed why custom datasets fail to achieve "Production Level Accuracy" on the first run, it is important to understand why Production Level Accuracy matters. Simply put, our models should be able to give results that are adequately acceptable in real-world scenarios, but not necessarily striving for 100% accuracy. It's easy to see the right predictions with test dataset images or text which was used to hypertune the model to its best. Even though we cannot fix a threshold accuracy above which our model is eligible for deployment, it's good to have at least 85-90% validation accuracy as a rule of thumb, given that train and validation data were split randomly. Always ensure validation data is diverse and that the majority of its data resembles that which the model will consume in production. Data preprocessing can help you achieve this to a certain extent by ensuring the image size by resizing or filtering text before input. Handling such errors during development can help improve your production mode and obtain better results. Data Augmentation: A perfect way to improve your dataset It's okay to have a small dataset as long as you can get the best out of it through approaches such as data augmentation. This concept focuses on pre-processing existing data to generate more diverse data for training at times when we don't have enough data. Let us discuss a bit around Image Data Augmentation with a small example. Here we have a rock paper scissors dataset from TensorFlow and we wish to generate more without repeating. Tensorflow dataset objects provide a lot of operations that help in data augmentation and much more. 
Here we first cache the dataset, which helps us in memory management as the first time the dataset is iterated over, its elements will be cached in the specified file or in memory. Then cached data can be used afterward. We repeat the dataset twice after that, which increases its cardinality. Just repeated data doesn't help us, but we add a mapping layer over the doubled dataset which in a way helps us generate new data along with the increase in cardinality. In this example, we are flipping random images to left and right which avoids repetition and ensures diversity.

import tensorflow as tf
import tensorflow_datasets as tfds

# ... dataset_train_raw / dataset_test_raw are loaded from the
# rock-paper-scissors dataset here ...

def preprocess_img(image, label):
    # Make sure that image has a right size
    image = tf.image.resize(image, [256,256])
    return image, label

dataset_train = dataset_train_raw.map(preprocess_img)
dataset_test = dataset_test_raw.map(preprocess_img)

print("Dataset Cardinality Before Augmentation: ", dataset_train.cardinality().numpy())

dataset_train = dataset_train.cache().repeat(2).map(
    lambda image, label: (tf.image.random_flip_left_right(image), label)
)

print("Dataset Cardinality After Augmentation: ", dataset_train.cardinality().numpy())

Output

Dataset Cardinality Before Augmentation: 2520
Dataset Cardinality After Augmentation: 5040

There are more mappings on images to explore that can further create more variations in terms of contrast, rotation, and much more. Read this article for more details. There are many more operations you can perform on an image like rotate, shear, vary contrast, and much more. Data augmentation is crucial in cases where image data is not representative of the real-world inputs in terms of lighting, background, and other aspects. Here we discussed data augmentation through frameworks like Tensorflow, but you can do manual data augmentation with much more than rotate and shear. Mapping is a strong tool because you can perform any operation on individual data without going through iterations.
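The cache/repeat/map pipeline above can be mimicked with plain Python generators to see why the cardinality doubles while the mapped flip keeps the copies from being exact repeats (a sketch of the semantics, not the tf.data API):

```python
import random

def repeat(data, n):
    # yield the whole sequence n times, like Dataset.repeat(n)
    for _ in range(n):
        for item in data:
            yield item

def map_over(data, fn):
    # apply fn to every element, like Dataset.map(fn)
    for item in data:
        yield fn(item)

images = ["rock", "paper", "scissors"]  # stand-ins for image tensors

def flip(im):
    # stand-in for tf.image.random_flip_left_right
    return im + ("_flipped" if random.random() < 0.5 else "")

augmented = list(map_over(repeat(images, 2), flip))
print(len(images), len(augmented))  # → 3 6
```

Each pass over the repeated data re-runs the mapping function, so the second copy of every image gets its own independent random flip.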
Resizing images, formatting text, and much more can be handled neatly with this.

Transfer Learning: Working with small datasets

There are situations when you have only a few images and you wish to build an image classification model. With few images, the model might fail to learn patterns, and the result is an overfit or underfit model that performs poorly in production on real-world inputs. The easiest way to build a good model in such conditions is transfer learning. There are famous pre-trained models, like VGG16, that are really good at image classification. Because of the wide variety of data such a model was exposed to during training, and the complexity of its architecture (including many convolutional layers), it has more depth for the objective of image classification than the small model we could build with a small dataset. We can use such a pre-trained model that deals with the same objective for our problem by replacing just a few of its last layers (in most cases). The reason we replace the last layer is to restructure the model's output to suit our use case, for example selecting the right number of categories to classify in the case of image classification. We can replace not just the last layer but as many layers as we wish, if we follow the documentation of the respective pre-trained architecture and the framework around it. Let us build a sample transfer learning model. First, we load and preprocess the same rock-paper-scissors dataset we used previously.

import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.applications import ResNet50
from keras.layers import GlobalAveragePooling2D, Dense
from keras.layers import BatchNormalization, Dropout
from keras.models import Model
dataset_train_raw, dataset_test_raw = tfds.load(
    "rock_paper_scissors", split=["train", "test"], as_supervised=True)

def preprocess_img(image, label):
    # Resize images to ensure the same input size
    image = tf.image.resize(image, [256, 256])
    return image, label

dataset_train = dataset_train_raw.map(preprocess_img)
dataset_test = dataset_test_raw.map(preprocess_img)

dataset_train = dataset_train.batch(64)
dataset_test = dataset_test.batch(32)

Now we will use ResNet50 for our transfer learning model. We set trainable = False to freeze the ResNet50 architecture and keep it out of training. This saves a lot of time, since the model only trains the last few layers, which is beneficial when training on a paid instance billed hourly.

# ResNet50 with the input shape of our images.
# include_top is set to False to allow us to add more layers.
res = ResNet50(weights='imagenet', include_top=False,
               input_shape=(256, 256, 3))

# Setting trainable to False freezes the pre-trained weights
res.trainable = False

x = res.output
x = GlobalAveragePooling2D()(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(3, activation='softmax')(x)

model = Model(res.input, x)
model.compile(optimizer='Adam',
              loss="sparse_categorical_crossentropy",
              metrics=["sparse_categorical_accuracy"])
model.summary()

Model summary, in short (only the bottom part is included here, as the ResNet portion is long):

conv5_block3_out (Activation)   (None, 8, 8, 2048)   0        conv5_block3_add[0][0]
_____________________________________________________________________________
global_average_pooling2d_5 (Glo (None, 2048)         0        conv5_block3_out[0][0]
_____________________________________________________________________________
batch_normalization_11 (BatchNo (None, 2048)         8192     global_average_pooling2d_5[0][0]
_____________________________________________________________________________
dropout_11 (Dropout)            (None, 2048)         0        batch_normalization_11[0][0]
_____________________________________________________________________________
dense_11 (Dense)                (None, 512)          1049088  dropout_11[0][0]
_____________________________________________________________________________
batch_normalization_12 (BatchNo (None, 512)          2048     dense_11[0][0]
_____________________________________________________________________________
dropout_12 (Dropout)            (None, 512)          0        batch_normalization_12[0][0]
_____________________________________________________________________________
dense_12 (Dense)                (None, 3)            1539     dropout_12[0][0]
=============================================================================
Total params: 24,648,579
Trainable params: 1,055,747
Non-trainable params: 23,592,832

Model Training

model.fit(dataset_train, epochs=6, validation_data=dataset_test)

Epoch 1/10
40/40 [==============================] - 577s 14s/step - loss: 0.2584 - sparse_categorical_accuracy: 0.9147 - val_loss: 1.1330 - val_sparse_categorical_accuracy: 0.4220
Epoch 2/10
40/40 [==============================] - 571s 14s/step - loss: 0.0646 - sparse_categorical_accuracy: 0.9802 - val_loss: 0.8574 - val_sparse_categorical_accuracy: 0.4247
Epoch 3/10
40/40 [==============================] - 571s 14s/step - loss: 0.0524 - sparse_categorical_accuracy: 0.9813 - val_loss: 0.7408 - val_sparse_categorical_accuracy: 0.6425
Epoch 4/10
40/40 [==============================] - 570s 14s/step - loss: 0.0376 - sparse_categorical_accuracy: 0.9881 - val_loss: 0.6260 - val_sparse_categorical_accuracy: 0.7016
Epoch 5/10
40/40 [==============================] - 570s 14s/step - loss: 0.0358 - sparse_categorical_accuracy: 0.9881 - val_loss: 0.5864 - val_sparse_categorical_accuracy: 0.6532
Epoch 6/10
40/40 [==============================] - 570s 14s/step - loss: 0.0366 - sparse_categorical_accuracy: 0.9873 - val_loss: 0.4445 - val_sparse_categorical_accuracy: 0.8602

We can see how a model trained on a relatively small dataset performed very well, with a validation accuracy of 86%.
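The "Trainable params" figure in the summary can be verified by hand: only the head layers train, and batch normalization keeps its moving statistics non-trainable. The sketch below redoes that arithmetic in plain Python; the helper names are ours for illustration, not Keras APIs.

```python
def dense_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_in * n_out + n_out

def batchnorm_params(channels):
    # gamma and beta train; the moving mean and variance do not
    return 2 * channels, 2 * channels  # (trainable, non_trainable)

bn1_t, _ = batchnorm_params(2048)     # after global average pooling
d1 = dense_params(2048, 512)          # dense_11 in the summary
bn2_t, _ = batchnorm_params(512)
d2 = dense_params(512, 3)             # dense_12, the 3-class output

trainable = bn1_t + d1 + bn2_t + d2
print(d1, d2, trainable)  # 1049088 1539 1055747
```

The total matches the summary's 1,055,747 trainable parameters exactly, while the frozen ResNet50 body accounts for the remaining ~23.6 million.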
If you focus on the time taken per epoch, it's under ten minutes, because we kept the ResNet layers frozen. ResNet50 transferred its learning to our problem. You can experiment with various pre-trained models to see which suits your problem best.

LR Finder: Finding the perfect learning rate

The learning rate finder is a powerful tool that, as the name suggests, helps you find a good learning rate easily. Trying out every learning rate by hand to find the perfect one is an inefficient and time-consuming method; the LR finder is the efficient and least time-consuming way to do this. Let's see how to implement it. We are continuing with the same dataset, preprocessing, and model architecture, so they are not repeated from here on.

!pip install tensorflow-hub
!git clone
!cd lrfinder && python3 -m pip install .

import numpy as np
from lrfinder import LRFinder

K = tf.keras.backend

BATCH = 64
# STEPS_PER_EPOCH = np.ceil(len(train_data) / BATCH)
# here the cardinality (length) of the train dataset is 2520
STEPS_PER_EPOCH = np.ceil(2520 / BATCH)

lr_finder = LRFinder(model)
lr_finder.find(dataset_train, start_lr=1e-6, end_lr=1,
               epochs=10, steps_per_epoch=STEPS_PER_EPOCH)

learning_rates = lr_finder.get_learning_rates()
losses = lr_finder.get_losses()
best_lr = lr_finder.get_best_lr(sma=20)

# Setting it as our model's LR through the Keras backend
K.set_value(model.optimizer.lr, best_lr)
print(best_lr)

Epoch 1/10
40/40 [==============================] - 506s 13s/step - loss: 1.7503 - sparse_categorical_accuracy: 0.3639
Epoch 2/10
40/40 [==============================] - 499s 12s/step - loss: 1.5044 - sparse_categorical_accuracy: 0.4302
Epoch 3/10
40/40 [==============================] - 498s 12s/step - loss: 0.9737 - sparse_categorical_accuracy: 0.6163
Epoch 4/10
40/40 [==============================] - 495s 12s/step - loss: 0.4744 - sparse_categorical_accuracy: 0.8218
Epoch 5/10
40/40 [==============================] - 495s 12s/step - loss:
0.1946 - sparse_categorical_accuracy: 0.9313
Epoch 6/10
40/40 [==============================] - 495s 12s/step - loss: 0.1051 - sparse_categorical_accuracy: 0.9663
Epoch 7/10
40/40 [==============================] - 89s 2s/step - loss: 0.1114 - sparse_categorical_accuracy: 0.9576

The best learning rate we get is 6.31e-05, and we set it as our model's LR using the Keras backend. From the output, it's clear that this process took only a few epochs to analyze all the candidate learning rates and find the best one. We can visualize the learning rates and their losses using Matplotlib; the red line marks the best learning rate.

import matplotlib.pyplot as plt

def plot_loss(learning_rates, losses, n_skip_beginning=10,
              n_skip_end=5, x_scale='log'):
    f, ax = plt.subplots()
    ax.set_ylabel("loss")
    ax.set_xlabel("learning rate (log scale)")
    ax.plot(learning_rates[:-1], losses[:-1])
    ax.set_xscale(x_scale)
    return(ax)

axs = plot_loss(learning_rates, losses)
axs.axvline(x=lr_finder.get_best_lr(sma=20), c='r', linestyle='-.')

Early Stopping: Rescuing your model before it unlearns

You might remember training a model for 20+ epochs, where the loss starts to increase after a point. You're stuck: interrupting will kill the process, and waiting will give you a worse-performing model. Early stopping is exactly what you want in such situations; it secures your best model and exits the process once a parameter like loss starts increasing. It will also save you time: if the model starts showing increasing loss early, training stops without computing further epochs, returning the last best-loss model. You can base early stopping on any monitorable metric, such as accuracy, as well. One of the major parameters of early stopping is patience.
It is the number of epochs to wait and see whether the model stops showing an increased loss and gets back on the track of learning; otherwise, it saves the last best loss from before the increase and stops the training. Now that you have the idea, let's jump into an example.

from tensorflow.keras.callbacks import EarlyStopping

earlystop_callback = EarlyStopping(
    monitor='val_loss', min_delta=0.0001, patience=2)

model.fit(dataset_train, epochs=20,
          validation_data=dataset_test,
          callbacks=[earlystop_callback])

In the example, early stopping is set to monitor validation loss. The parameter min_delta, the minimum change in loss we count as an improvement, is set to 0.0001, and patience is set to 2. A patience of 2 implies that the model can go for 2 more epochs with increased validation loss, but if it doesn't then show a loss lower than the one from where the increase started, the process will be killed, returning the last best-loss version. (Only the last part of training is shown.)

Epoch 10/20
40/40 [==============================] loss: 0.0881 - sparse_categorical_accuracy: 0.9710 - val_loss: 0.4059
Epoch 11/20
40/40 [==============================] loss: 0.0825 - sparse_categorical_accuracy: 0.9706 - val_loss: 0.4107
Epoch 12/20
40/40 [==============================] loss: 0.0758 - sparse_categorical_accuracy: 0.9770 - val_loss: 0.3681
Epoch 13/20
40/40 [==============================] loss: 0.0788 - sparse_categorical_accuracy: 0.9754 - val_loss: 0.3904
Epoch 14/20
40/40 [==============================] loss: 0.0726 - sparse_categorical_accuracy: 0.9770 - val_loss: 0.3169
Epoch 15/20
40/40 [==============================] loss: 0.0658 - sparse_categorical_accuracy: 0.9786 - val_loss: 0.3422
Epoch 16/20
40/40 [==============================] loss: 0.0619 - sparse_categorical_accuracy: 0.9817 - val_loss: 0.3233

Even with 20 epochs set to train, the model stopped training after the 16th epoch, saving the model from unlearning with increased validation
loss. Our training results hold some useful observations that deepen the understanding of early stopping. At the 14th epoch, the model reached its best loss, 0.3169. The next epoch showed an increased loss of 0.3422, and even though the following epoch's loss of 0.3233 was lower than that, it was still larger than the point where the increase started (0.3169), so training stopped and the model version from the 14th epoch was saved. It waited for 2 epochs to see whether training would correct itself, because the patience parameter was set to 2. Another interesting observation concerns the 10th to 12th epochs: even though the loss increased at the 11th epoch (0.4107), the 12th epoch showed a loss (0.3681) below the 10th epoch's (0.4059), so training continued as the model got back on track. This is a good use of patience, since leaving it at the default would have killed the training after the 11th epoch without trying the next one. Some tips on using early stopping: if you are training on CPU, use a small patience setting; if training on GPU, use larger patience values. For models like GANs, it's better to use small patience and save model checkpoints. If your dataset doesn't contain large variations, use a larger patience. Always set the min_delta parameter based on running a few epochs and checking the validation loss, as this will give you an idea of how your validation loss varies from epoch to epoch.

Analyzing your Model architecture

This is a general approach rather than a definitive one. In most cases, such as image classification involving convolutional neural networks, it is really important that you are well aware of your convolutions, their kernel sizes, output shapes, and much more, even though frameworks handle the flow.
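The patience walkthrough above can be replayed in a few lines of plain Python implementing the same rule: track the best loss, and stop after `patience` epochs without an improvement larger than `min_delta`. The loss values are the validation losses from epochs 10-16 of the training log; the function itself is a simplified stand-in for Keras's `EarlyStopping`, not its actual source.

```python
def early_stopping(losses, patience=2, min_delta=0.0001, first_epoch=10):
    """Return (stop_epoch, best_epoch, best_loss); stop_epoch is None
    if training runs to the end without triggering the callback."""
    best_loss, best_epoch, wait = float("inf"), None, 0
    for epoch, loss in enumerate(losses, start=first_epoch):
        if loss < best_loss - min_delta:   # real improvement: reset patience
            best_loss, best_epoch, wait = loss, epoch, 0
        else:                              # no improvement: burn patience
            wait += 1
            if wait >= patience:
                return epoch, best_epoch, best_loss
    return None, best_epoch, best_loss

val_losses = [0.4059, 0.4107, 0.3681, 0.3904, 0.3169, 0.3422, 0.3233]
stop, best_epoch, best = early_stopping(val_losses)
print(stop, best_epoch, best)  # 16 14 0.3169: stop after epoch 16, keep epoch 14
```

Note how the dip-and-recover at epochs 11-12 resets the patience counter, exactly as in the log, while the two non-improving epochs 15-16 exhaust it.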
Very deep architectures like ResNet are built for inputs around 256x256 (ImageNet models are typically trained at 224x224), and shrinking the input to fit a 64x64 dataset can make certain pre-trained models perform as poorly as 10% accuracy. This is because of the number of layers in the pre-trained model relative to your image size: image tensors shrink spatially as they pass through convolutions, while the channel count grows in parallel. A pre-trained model built for 256x256 inputs will have a tensor of at least 8x8 by the end, whereas restructuring it for 64x64 leaves the last few convolutions with 1x1 tensors, which learn very little compared to an 8x8 input. This needs careful handling when working with pre-trained models. The other side of this is when you build your own convolutions: make sure the network has some depth, with more than 3 layers, while also making sure the output size doesn't collapse for your image size. Analyzing the model summary is really important, as you can, for example, set up your dense layers based on the output shape of the convolutional layers. When dealing with models that have multiple features and multiple outputs, architecture matters a lot; in such cases, model visualization helps.

Conclusion

So far we have discussed some of the most impactful and popular approaches that can improve your model accuracy, improve your dataset, and better your model architecture. There are plenty of other ways out there for you to explore. Along with these, there are minor approaches or guidelines that can help you achieve all the above aspects, such as shuffling while loading data, using TensorFlow dataset objects to work with your custom-created dataset, and using mapping, as we discussed earlier, to handle per-sample operations. I recommend you focus on validation accuracy while training rather than training accuracy.
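The shrinking-tensor argument above is easy to quantify. ResNet50 downsamples by a factor of 32 in total (five stride-2 stages), so the sketch below, which uses integer halving as a rough stand-in for the exact padding rules, shows why a 256-pixel input ends at 8x8 while small inputs collapse toward 1x1.

```python
def feature_map_size(input_size, stride2_stages=5):
    """Spatial size after a stack of stride-2 downsampling stages."""
    size = input_size
    for _ in range(stride2_stages):
        size = max(1, size // 2)  # halve, but never below 1x1
    return size

for s in (256, 64, 32):
    print(s, "->", feature_map_size(s))
# 256 -> 8  (the 8x8 tensor mentioned above)
# 64  -> 2
# 32  -> 1  (almost nothing left for the last convolutions to learn from)
```

A deeper network (more stride-2 stages) hits the 1x1 floor even sooner, which is why very deep pre-trained models degrade fastest on small inputs.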
Validation data must be treated very carefully; its diversity, and how well it represents the real-world input the model will be exposed to in production, is significant. Even though we performed all the approaches on an image classification problem, some of them, like mapping and the learning rate finder, are applicable to other problems involving text and much more. Building a model with below-average accuracy is not valuable in real life, as accuracy matters, and in such situations these approaches can help us build a model close to perfection with all the aspects taken care of. One popular approach not discussed in this article in detail is hyperparameter tuning: in short, trying out various values for hyperparameters such as epochs, batch size, and so on. The aim of hyperparameter tuning is to find the best parameters and eventually get a better model. The LR finder is an efficient way of tuning the learning rate hyperparameter. When dealing with other machine learning algorithms, such as SVR, hyperparameter tuning plays a crucial role. I hope you now have a good idea of how important it is to work on your model with various ideas and approaches to achieve better performance, and all the best for your machine learning journey ahead. I hope these approaches come in handy. Thanks for reading!
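The hyperparameter-tuning idea mentioned above can be sketched as a plain grid search: try every combination and keep the one with the lowest validation score. Everything here is illustrative; `fake_val_loss` is a made-up stand-in for actually training and validating a model.

```python
import itertools

def grid_search(param_grid, evaluate):
    """Exhaustively try every combination, keep the best (lowest) score."""
    best_params, best_score = None, float("inf")
    for combo in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        score = evaluate(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy "validation loss": pretend batch size 64 with lr 1e-4 is optimal
def fake_val_loss(p):
    return abs(p["batch_size"] - 64) / 64 + abs(p["lr"] - 1e-4) * 1000

grid = {"batch_size": [16, 32, 64, 128], "lr": [1e-3, 1e-4, 1e-5]}
best_params, best_score = grid_search(grid, fake_val_loss)
print(best_params)  # {'batch_size': 64, 'lr': 0.0001}
```

In practice the evaluate step is a full train-and-validate run per combination, which is why smarter strategies (random search, the LR range test above) are usually preferred when the grid is large.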
https://blog.paperspace.com/improving-model-accuracy/
A command is an operation which may be invoked.

Command Elements

The command element is used to create commands which can be used to carry out operations. You don't need to use commands, since you can just call a script to handle things. However, a command has the advantage that it can be disabled when needed and can be invoked without needing to know about the details of its implementation. Commands provide a suitable way to abstract operations from the code. Commands are especially useful for larger applications.

For instance, in order to implement the clipboard menu commands (cut, copy and paste), you can use commands. If you did not use commands, you would need to figure out which field has the focus, then check to ensure that the operation is suitable for that element. In addition, the menu commands would need to be enabled and disabled depending on whether the focused element had selected text or not, and for paste operations, whether there is something suitable on the clipboard to paste. As you can see, this becomes complicated. By using commands, much of the work is handled for you.

You can use a command for any operation. Mozilla uses them for almost every menu command. In addition, text fields and other widgets have a number of commands which they already support and that you can invoke. You should use them when the operation depends on which element is focused.

A command is identified by its id attribute. Mozilla uses the convention that command ids start with 'cmd_'. You will probably want to use the same id if a command is already being used; however, for your own commands, you can use any command id you wish. To avoid conflicts, you may wish to include the application name in the command id.
A simple way of using commands is as follows:

Example: Simple command

<command id="cmd_openhelp" oncommand="alert('Help!');"/>
<button label="Help" command="cmd_openhelp"/>

In this example, instead of placing the oncommand attribute on the button, we place it on a command element. The two are then linked using the button's command attribute, which has the value of the command's id. The result is that when the button is pressed, the command 'cmd_openhelp' is invoked.

There are two advantages to using this approach.

- First, it moves all your operations onto commands, which can all be grouped together in one section of the XUL file. This means that the code is all together and not scattered throughout the UI code.
- The other advantage is that several buttons or other UI elements can be hooked up to the same command. For instance, you might have a menu item, a toolbar button and a keyboard shortcut all for the same operation. Rather than repeat the code three times, you can hook all three up to the same command.

Normally, you would only hook up elements that send a command event. Additionally:

- If you set the disabled attribute on the command, the command will be disabled and it will not be invoked.
- Any buttons and menu items hooked up to it will be disabled automatically.
- If you re-enable the command, the buttons will become enabled again.

Example: Toggling command disabled

<command id="cmd_openhelp" oncommand="alert('Help');"/>
<button label="Help" command="cmd_openhelp"/>
<button label="More Help" command="cmd_openhelp"/>
<button label="Disable"
        oncommand="document.getElementById('cmd_openhelp').setAttribute('disabled','true');"/>
<button label="Enable"
        oncommand="document.getElementById('cmd_openhelp').removeAttribute('disabled');"/>

In this example, both buttons use the same command.
When the Disable button is pressed, the command is disabled by setting its disabled attribute, and both buttons will be disabled as well.

It is normal to put a group of commands inside a commandset element, together near the top of the XUL file, as in the following:

<commandset>
  <command id="cmd_open" oncommand="alert('Open!');"/>
  <command id="cmd_help" oncommand="alert('Help!');"/>
</commandset>

A command is invoked when the user activates the button or other element attached to it. You can also invoke a command by calling the doCommand method, either of the command element or of an element attached to the command, such as a button.

Command Dispatching

You can also use commands without command elements, or at least without adding an oncommand attribute to the command. In this case, the command will not invoke a script directly; instead, it will find an element or function which will handle the command. This function may be separate from the XUL itself, and might be handled internally by a widget. In order to find something to handle the command, XUL uses an object called a command dispatcher. This object locates a handler for a command; a handler for a command is called a controller. So, essentially, when a command is invoked, the command dispatcher locates a controller which can handle the command. You can think of the oncommand attribute as a type of controller for the command.

The command dispatcher locates a controller by looking at the currently focused element to see if it has a controller which can handle the command. XUL elements have a controllers property which is used in this check. You can use the controllers property to add your own controllers. You might use this to have a listbox respond to cut, copy and paste operations, for instance. An example of this will be provided later. By default, only textboxes have a controller that does anything. The textbox controller handles clipboard operations, selection, undo and redo, as well as some editing operations.
Note that an element may have multiple controllers, which will all be checked. If the currently focused element does not have a suitable controller, the window is checked next. The window also has a controllers property which you can modify if desired. If the focus is inside a frame, each frame leading to the top-level window is checked as well. This means that commands will work even if the focus is inside a frame. This works well for a browser, since editing commands invoked from the main menu will work inside the content area. Note that HTML also has a commands and controller system, although you can't use it on unprivileged web pages; you may use it from, for example, a browser extension. If the window doesn't provide a controller capable of handling the command, nothing will happen.

You can get the command dispatcher using the document's commandDispatcher property, or you can retrieve it from the controllers list on an element or a window. The command dispatcher contains methods for retrieving controllers for commands and for retrieving and modifying the focus.

Adding Controllers

You can implement your own controllers to respond to commands. You could even override the default handling of a command with careful placement of the controller. A controller is expected to implement four methods, which are listed below:

- supportsCommand(command): this method should return true if the controller supports a command. If you return false, the command is not handled and the command dispatcher will look for another controller. A single controller may support multiple commands.
- isCommandEnabled(command): this method should return true if the command is enabled, or false if it is disabled. Corresponding buttons will be disabled automatically.
- doCommand(command): execute the command. This is where you would put the code to handle the command.
- onEvent(event): this method handles an event.
Example: Controller implementation

Let's assume that we want to implement a listbox that handles the delete command. When the user selects Delete from the menu, the listbox deletes the selected row. In this case, you just need to attach a controller to the listbox which performs the action in its doCommand method. Try opening the example below in a browser window and selecting items from the list. You'll notice that the Delete command on the browser's Edit menu is enabled and that selecting it will delete a row. The example below isn't completely polished; really, we should ensure that the selection and focus are adjusted appropriately after a deletion.

<window id="controller-example" title="Controller Example" onload="init();"
        xmlns="">
  <script>
  function init() {
    var list = document.getElementById("theList");
    var listController = {
      supportsCommand : function(cmd){ return (cmd == "cmd_delete"); },
      isCommandEnabled : function(cmd){
        if (cmd == "cmd_delete")
          return (list.selectedItem != null);
        return false;
      },
      doCommand : function(cmd){
        list.removeItemAt(list.selectedIndex);
      },
      onEvent : function(evt){ }
    };
    list.controllers.appendController(listController);
  }
  </script>
  <listbox id="theList">
    <listitem label="Ocean"/>
    <listitem label="Desert"/>
    <listitem label="Jungle"/>
    <listitem label="Swamp"/>
  </listbox>
</window>

The controller (listController) implements the four methods described above. The supportsCommand method returns true for the 'cmd_delete' command, which is the name of the command used when the Delete menu item is selected. For other commands, false should be returned, since the controller does not handle any other commands. If you wanted to handle more commands, you would check for them here; you will often use a single controller for multiple related commands. The isCommandEnabled method returns true if the command should be enabled.
In this case, we check whether there is a selected item in the listbox and return true if there is. If there is no selection, false is returned. If you delete all the rows in the example, the Delete command will become disabled. You may have to click the listbox to update the menu in this simple example. The doCommand method will be called when the Delete menu item is selected, and this will cause the selected row in the listbox to be deleted. Nothing needs to happen in the onEvent method, so no code is added for it.

Override Default Controller

We attach this controller to the listbox by calling the appendController method of the listbox's controllers. nsIControllers has a number of methods that may be used to manipulate the controllers. For instance, there is also an insertControllerAt method, which inserts a controller into an element before the other ones. This might be useful to override commands. For example, the following will disable pasting into a textbox.

var tboxController = {
  supportsCommand : function(cmd){ return (cmd == "cmd_paste"); },
  isCommandEnabled : function(cmd){ return false; },
  doCommand : function(cmd){ },
  onEvent : function(evt){ }
};

document.getElementById("tbox").controllers.insertControllerAt(0, tboxController);

In this example, we insert the controller at index 0, which means before any others. The new controller supports the 'cmd_paste' command and always indicates that the command is disabled. The default textbox controller never gets called, because the command dispatcher finds the controller above first. Next, we'll find out how to update commands.
https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XUL/Tutorial/Commands
Search the Community

Showing results for 'barba'.

GSAP and Barba.js
StudioProjects posted a topic in GSAP

Hi Folks! I've been trying (very unsuccessfully) to build a portfolio site with GSAP and Barba.js. I initially built everything with CSS animations as per, but failed to get it working yesterday, so I rebuilt it stripped down in a couple of hours today with GSAP. Unfortunately, despite following Barba's documentation, I can't seem to get it working, and after 9 hours today I'm pretty frazzled. It may be something silly that I've overlooked due to being so tired. Obviously CodePen is not the right vehicle for a multipage site and I'm not enamoured with it enough to buy a full membership; I spent it on a Club Greensock membership instead. I know that this is a bit off topic, but GSAP and Barba.js is a formidable combination, so I'm hoping that someone can help me get this wireframe working! As you can see from the site on my dev server, I'm also embedding canvas elements as well. Thanks so much! Andy :)

about.html contact.html index.html projects.html main.js transitions.js main.css

ScrollSmoother / Barba.js
mdelp posted a topic in GSAP

Hi All, I'm integrating ScrollSmoother in my new website and am using Barba.js to handle the page transitions. Everything seems to be working, just one little thing where I'm stuck. A CodePen is a little tricky because there's a lot going on, but I'll try to explain. Since Barba doesn't use page refreshes, the ScrollSmoother is not created on each page but instead uses the instance from the first load. So I need to create a ScrollSmoother on every new page transition, which is OK, but after every page enter I need to scroll to the top, and this needs to be instant, otherwise it uses the scroll position of the previous page. What happens now is that the page transition fires, but then the window.scrollTo() is smooth while it needs to be instant.
I tried to .kill() the ScrollSmoother on every page leave, but since the ScrollSmoother is created in a Barba hook each time, I can't access the variable. Maybe I'm missing something or making it too complicated? Any help would be appreciated! Edit: you can see what's happening on. Scroll to the footer and click on 'Over mij', you'll see the transition and the scroll to top happening this way.

GSAP and Barba.js
mdelp replied to StudioProjects's topic in GSAP

@StudioProjects, I made a simple Codesandbox you could use as a starting point to integrate Barba, see. You technically only need a leave and enter animation; set up Barba correctly and you should be good to go in no time.

- Hi again! I've uploaded it to github at also to Stackblitz at A huge thanks again for your valuable time in helping me with this! It's about time that I started using github; I've got a couple of hundred small projects and websites from when I was a Flash dev 15 years ago... I've just returned to coding after an 11 year break and pretty much had to start from scratch over the last 12 months! My 57 year old brain is a bit overwhelmed, but I've made good progress. Andy

- Hmm, it's really hard to inspect the code from devtools and troubleshoot. There's a nice working example of barba & gsap here: If you could make a demo of your site on Stackblitz that would be so much easier. Also you can check that example above and see if it helps to adjust your code to look similar. Another thing I noticed: you have "sync: false" on the wrong line, it should be inside the transition object like this:

barba.init({
  transitions: [
    {
      sync: true,
      async leave(data) {
        const done = this.async();
        $(data.next.container).addClass('fixed');
        pageTransition();
        await delay(1000);
        done();
      },
    },
  ],
});
I've uploaded it to - you should now see a working css hamburger menu, rather than the white box, if not, please clear your cache. Andy - I noticed you're using barba from unpkg cdn, I remember I had some issue with it and it wasn't working for me until I changed to this one: Can you replace it and give it a go? - Hi! Thanks so much for your response! I added the .fixed selector to the css and the jquery on the hooks but the transitions still aren't working, the transition fires, and the slide-in menu is now retracted, but every page now directs to the about.html page. I've uploaded the site to so you can see what's happening in-situ. If you click on the about button, there is no transition at all and the UI loads about.hml - if you click on any of the other buttons in the slide-in nav, the transition fires, but only partially, and the UI loads the about.html page on every button. The slide-in nav is now closed after the transition, so I'm definitely a step closer, but it's still not functioning as one would expect. The console is throwing any errors, so I can only assume that this is some kind of caching issue caused by Barba. To be honest, this library is sketchy and difficult to use and I'm nor a React dev so I'm not used to using hooks. I'll give it another day and if I can't get it working, I'll refactor everything for Swup and give that a go - its architecture appears to be far more stable and it appears to be far more user friendly than Barba. My apologies if my frustrations with this library are evident. What I'm trying to achieve is a simple transition between pages that is seamless, with the slide-in nav retracted after the transition fires. Thanks again for taking the time time to help me, it really is appreciated. 
Andy - I'm not sure if this is what you're going after, as you didn't explain very well what you're trying to achieve (sorry for being straightforward :P), but from what I can see you're using sync mode, so probably you want both animations to happen at the same time, and you see some annoying flicker, and the other animation doesn't play correctly, right? In sync mode, you need to handle the absolute/fixed positioning of the barba containers yourself, as they will show at the same time in static position: the next page container will be next to the old one (which will be out of screen), until the previous page container disappears (after the first transition is complete) and the new container will show in the position of the old one. Just imagine two divs that each have a width of 100% of the viewport next to each other, and you change the display of the first one to "none" - it's similar behaviour. So long story short, you need to add a class .fixed (you can name it whatever), set position to fixed and a higher z-index:

.fixed {
  position: fixed;
  top: 0;
  left: 0;
  z-index: 10;
}

Then on your barba leave hook, you need to add (with jQuery; I tried to do it with vanilla js but for some reason it doesn't work for me):

$(data.next.container).addClass('fixed');

And lastly, on your enter, afterEnter, or after hook (depending on your setup), you have to remove it:

$(data.next.container).removeClass('fixed');

Then the transitions should work. Hope it works for your case!

Create forward motion warp speed effect

Michael S posted a topic in GSAP

I have a background of outer space that is made of js particles with general divs of content on top of it (logo, h1, h2). When a button is clicked or scroll is initiated, I want the background to quickly move into warp speed (like this) and blur the content and move and load the next page. I plan on using for the page transition but am new to GSAP and not sure how to achieve this otherwise.
Should I recreate the stars background using GSAP and then animate the warp speed, or can I use the existing particle js? I really am not sure how to go about this so any suggestions or guide would be greatly appreciated. Thanks so much in advance and please let me know if you need anything to help answer.

Create forward motion warp speed effect

iDad5 replied to Michael S's topic in GSAP

Am I getting you right: you already have the warp stuff (no gsap as far as I can see) and you plan on using barba for the page transition, so the only thing you are missing now is blurring the headline(s)? A little CSS filter should do that trick. As great as GSAP is, I don't see why you would add it in this case, if all you need from it is a blur. But maybe I'm misunderstanding you.

Play gltf animation based on scroll

GreenSock replied to a topic in GSAP

Let's work together to clarify a few things... I don't think anyone is faulting you, @joxd, for pointing out that the user-created demo isn't fully responsive. Resizing screens appears to cause it to render oddly. Perhaps we haven't acknowledged that adequately and you didn't feel heard. We are not Three.js experts here. Most (if not all) of the issues here are related to rendering, so it's really not in our wheelhouse. We're not in a position to dig in, learn Three.js and fix someone else's demo for them to make it fully responsive. But we're happy to answer any GSAP-specific questions. It may not be obvious to you from your experience in this particular thread, but this community bends over backwards to help users around here and we spend hours and hours every day trying to keep up with the constant stream of posts. It can be overwhelming, especially when so many of them end up having very little to do with GSAP. Quite a few people basically expect us to provide free general consulting services and do their development work for them or solve logic issues in their code... and they often won't even bother to provide a clear minimal demo.
These forums cost thousands of dollars every month to operate, so we have to draw boundaries somewhere. We get questions that are specific to Three.js, React, Vue, Angular, Next, Nuxt, Barba, Pixi, Swiper, LocomotiveScroll, ASScroll, ScrollBar, WordPress, Elementor, and Svelte, just to name a few. We simply cannot learn all of those tools and provide full support. I'm sorry if my initial boundary-setting post came off as rude or unwelcoming. I definitely didn't mean it that way. Though you probably didn't mean it this way, your responses have come across as very disrespectful and insulting. It sounds like you felt similarly about some responses you got. So everyone got a little offended and salty here. Let's just put an end to that and give each other the benefit of the doubt. Thanks for being a customer back in 2017. We sure appreciate that. We couldn't do what we do without the support of Club GreenSock members. As a company, we place a HUGE amount of value on earning trust and treating customers with respect. One of the biggest ways we try to be there for our users is via these forums. We hear over and over again about how special this place is, how it is warm and welcoming, etc. It has taken years of very intentional effort to cultivate that. We don't have big marketing budgets or corporate sponsors - we simply focus our efforts on creating the best tools we can and supporting them well, trusting that the market will reward the efforts. Again, I'm sorry if your experience didn't reflect that. Let us know what it'd take to right the ship in your eyes. If you still think there's a GSAP problem at play here, how about if you create a minimal demo that doesn't use Three.js and only focuses on the animation-related challenge instead? I'd be glad to look at that and provide advice.

How do I convert this code to work with multiple items

Cassie replied to MennoPP's topic in GSAP

Oh wait - just saw the sentence below the bolded one.
Can you pop together a minimal demo for us? Don't worry about barba and all the rest - just a simple codepen with some coloured boxes, toggling a class and playing a flip animation. Thanks!

New page doesn't scroll on navbar item click

Arunkrs posted a topic in GSAP

Hello GSAP Heroes, I'm facing an issue with scrolling. I have a navbar with multiple links. The index page is scrolling fine without issues, but when I click on a navbar item it redirects to the page, but that page doesn't scroll until I refresh it. I don't know what to do. I've used gsap, locomotive scroll and barba js.

ScrollTrigger Not working after Barba Transition

sixtillnine posted a topic in GSAP

I'm brand new to GSAP and Barba. I have got a basic page transition working with the two; however, after the transition ScrollTrigger seems to break. I've been looking through the forum and see other users have a very similar issue to mine, but I cannot find a solution. If either of the pages is accessed directly, ScrollTrigger works fine. However, if either page is navigated to via the barba transition, ScrollTrigger doesn't work. Using barba views I appear to be able to get scripts to fire post transition, but I cannot work out how to get ScrollTrigger to reload. When I inspect the element after the transition it looks like it's ready to be manipulated (it has the inline transform style added to it), but it doesn't animate on scroll. I'm aware this may be a barba issue rather than a GSAP issue, but hopefully someone on the forum has come across this and can help. I have very basic test pages at the moment (excuse the superfluous loading of every plugin!).
First page is:

< 1 - Contemporary Chandeliers</title>
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
<meta name="description" content="tester 01</h4>
<p>
<a href="tester2.php" title="Next">Go to Page>

Second page is basically the same apart from the link back:

< 2 - Contemporary Chandeliers</title>
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
<meta name="description" content="tester 02</h4>
<p>
<a href="tester.php" title="Next">Go to Page>

The application js controlling the transition and scroll trigger is:

/* PAGE TRANSITION */
barba.init({
  transitions: [{
    name: 'opacity-transition',
    leave(data) {
      return gsap.to(data.current.container, {
        duration: 0.5,
        opacity: 0,
        y: '50px',
      });
    },
    enter(data) {
      gsap.from(data.next.container, {
        duration: 0.5,
        opacity: 0,
        x: '-50px',
      });
    }
  }],
  views: [{
    namespace: 'tester',
    beforeLeave(data) {
      alert('Leaving tester');
    },
    beforeEnter(data) {
      alert('Entering tester');
      boxRoll();
    }
  }]
});

/* SCROLL TRIGGER */
gsap.registerPlugin(ScrollTrigger);

function boxRoll() {
  const boxes = gsap.utils.toArray('.box');
  boxes.forEach(box => {
    gsap.to(box, {
      scrollTrigger: {
        trigger: box,
        toggleActions: "restart",
        scrub: 0.5,
        id: 'boxRoll',
      },
      rotate: 360,
      x: 350,
    })
  });
}

boxRoll();

New page doesn't scroll on navbar item click

akapowl replied to Arunkrs's topic in GSAP

Hello @Arunkrs If you are using locomotive-scroll for smooth-scrolling and barba.js for page-transitions, and your problem is that your website doesn't scroll anymore after a page-transition, this really does not sound like a GSAP issue at all but more like logic issues with regard to locomotive-scroll and barba.js. We love helping with GSAP-related questions, but unfortunately we just don't have the resources to provide free general consulting or logic troubleshooting - especially with regard to different 3rd party libraries.
Of course anyone else is welcome to post an answer if they'd like - we just want to manage expectations. You will probably have to make sure to either properly update your locomotive-scroll instance after the transition, or to destroy the instance on the old page on leave and create a fresh instance on enter of the new page - there are different approaches to that, depending on how exactly you have things set up. But again, I really don't think that is in any way related to GSAP. There is a section in the barba docs for how to handle 3rd party scripts - it even has a part dedicated to locomotive-scroll suggesting how to handle things with smooth-scrolling libraries as such - I would suggest starting there. Happy tweening!

Wrong pin position on initial load (Barba + Locomotive)

Ocamy posted a topic in GSAP

Hi, I encountered a problem trying to use Locomotive Scroll and Barba: the pinned section is in the wrong position on the initial load, but after going to the next page and coming back it works fine. I've combined two demos that I found to show the problem. LE: By providing the demo I found out that after resizing the window (in codesandbox) it works fine, but I still don't know how to solve it. The actual question: how can I add ScrollTrigger.refresh() without resizing the window?

ScrollTrigger().refresh with barba.js

AdventurousDeveloper posted a topic in GSAP

Hey everyone! I'm hoping this is considered a GSAP question rather than a Barba.js one. I've looked through this forum's posts related to ScrollTrigger and barba, from which I understand that ScrollTriggers need to be killed off during the barba transition and then reinitiated after the page transition. My environment is in WordPress and I'm getting no errors transitioning between pages. I've tried to simplify down what I'm using, so I hope this will be enough to troubleshoot... The below works, killing off all ScrollTriggers and then running "scrollFunction()".
const cleanGSAP = () => {
  ScrollTrigger.getAll().forEach( t => t.kill(false) )
  ScrollTrigger.refresh()
  window.dispatchEvent(new Event("resize"))
}

function delay(ms) {
  return new Promise( resolve => setTimeout(resolve, ms) )
}

barba.init({
  sync: true,
  transitions: [
    {
      async leave(data) {
        const leaveDone = this.async()
        await delay(1000)
        leaveDone()
      },
      async afterLeave(data) {
        cleanGSAP()
      },
      async beforeEnter(data) {
      },
      async enter(data) {
        $(window).scrollTop(0)
      },
      async afterEnter(data) {
      },
      async after(data) {
        //scrollFunction() this works
        //ScrollTrigger().refresh() this doesn't work
      }
    }
  ]
})

function scrollFunction() {
  //gsap stuff here
}

My issue is that "scrollFunction()" is declared in another file and can't be moved to the file with the barba.js hooks. Replacing "scrollFunction()" with ScrollTrigger().refresh() in the after hook doesn't work - or is this not how it's meant to be used? If not, is there a global function that can init all ScrollTriggers? I'd appreciate any tips or help on this 😀 Cheers

ScrollSmoother + barba.js (update on AJAX call)

kacpergalka posted a topic in GSAP

Hello, I can't make ScrollSmoother work with AJAX transitions using the barba.js library. The problem is that the ScrollSmoother isn't updating properly. It's not updating the content height, and the effects on the new page won't apply. How should I approach it? Below are the basic functions I use, and the structure. I used it exactly the same way with Locomotive Scroll, before I tried to move to the GSAP solution. ScrollTrigger.refresh(); doesn't seem to do anything. I will appreciate your help. Thanks.
<div id="site" data-
  <div id="smooth-wrapper">
    <div id="smooth-content">
      <main data-
        <?php the_content(); ?>
      </main>
    </div>
  </div>
</div>

initScroll: () => {
  Site.smoother = ScrollSmoother.create({
    wrapper: '#smooth-wrapper',
    content: '#smooth-content',
    smooth: 1.5,
    effects: true,
    smoothTouch: 0.1,
  });
},

initTransitions: () => {
  barba.init({
    transitions: [
      {
        name: 'default-transition',
        leave(data) { },
        enter() { },
      }
    ],
  });

  barba.hooks.beforeLeave((data) => { });

  barba.hooks.after((data) => {
    Site.reinit();
    ScrollSmoother.scrollTop(0);
    ScrollTrigger.refresh();
  });
},

ScrollTrigger not working

Vineeth Achari posted a topic in GSAP

Hi, I have developed a new film site that incorporates all these plugins (Barba.js/Locomotive/GSAP/ScrollTrigger). The ScrollTrigger JS at the above-mentioned URL is commented out as of now, because enabling the ScrollTrigger JS does not achieve the following section effects. Could someone please help me find a solution to this, as I am still facing some issues with the plugins.
Details

Joined devRant on 3/7/2022

- Documentation is like drugs. When it's good you'll keep going back for it. When it's not, you'd rather get it at the source. 🧟 Spin-off of another rant.
- So, i fucking finally got bash-scripting! FINALLY, you know how?! When I started treating it like a LISP. It's like walking on water, I feel fucking god-like. Total ego/power/ecstatic trip here, you guys have no idea. I mean, HOW COULD IT BE SO EASY AND I'D NEVER EVER SEEN IT THAT WAY. After so much hair loss, such a tiny view switch changed my whole way of looking at a terminal 😵💫😵💫😵💫😵💫
- Don't get me wrong, I love clojure, BUT, is it really that surprising how hard it is to bring in new clojurists?!?! I mean, "oh, people get confused a lot about namespaces, why is that?" Don't you get it? "namespaces"? Really?, I mean, seriously??? In VSCode days, to call source code paths as "namespaces" and hope people that are still learning about the JVM will understand it is NOT FEASIBLE. Thank God I was already a seasoned vim user before clojure, otherwise I'd give it up pretty fast.
- f*cksakes, modularize, BUT PLEASE, if it's only a small single block, DON'T. It's f*cking annoying to read 1000 loc distributed among 100 files.
"How do I change the default exe name of my project?". I received this question in an email. I decided to write this article so next time somebody asks the same question, I can send him here.Changing an exe name is a part of project properties settings. You can easily change a project settings from the project Property page.You can call a project's Property Page from the Solution Explorer. Right click on the Solution Explorer and select Properties menu as you can see in Figure 1.Figure 1. The Properties menu launches the Property Page for that project. For example, my project name is GDIPainter as you can see in Figure 2.Figure 2.You can also see from Figure 2, you can change Assembly name, default namespace, application icon and so on. The project section of this page allows you to change the project folder and output file. To change the name of the default exe for your application, you need to change the Output File name as you can see the selected line in Figure 2. View All
#include <memcached.h>

libmemcached is a small, thread-safe client library for the memcached protocol. The code has all been written with an eye to allow for both web and embedded usage. It handles the work behind routing particular keys to specific servers that you specify (and values are matched based on server order as supplied by you). It implements both a modulo and a consistent method of object distribution. There are multiple implemented routing and hashing methods. See the memcached_behavior_set() manpage.

All operations are performed against a "memcached_st" structure. These structures can either be dynamically allocated or statically allocated and then initialized by memcached_create(). Functions have been written in order to encapsulate the "memcached_st". It is not recommended that you operate directly against the structure.

Nearly all functions return a "memcached_return_t" value. This value can be translated to a printable string with memcached_strerror(3).

Partitioning based on keys is supported in the library. Using the key partitioning functions it is possible to group sets of objects onto servers.

"memcached_st" structures are thread-safe, but each thread must contain its own structure (that is, if you want to share these among threads you must provide your own locking). No global variables are used in this library.

If you are working with GNU autotools you will want to add the following to your configure.ac to properly include libmemcached in your application.

PKG_CHECK_MODULES(DEPS, libmemcached >= 0.8.0)
AC_SUBST(DEPS_CFLAGS)
AC_SUBST(DEPS_LIBS)

Some features of the library must be enabled through memcached_behavior_set().

Hope you enjoy it!
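The key-to-server routing described above is the heart of any memcached client. The following pure-Python sketch is illustrative only — it is not libmemcached's actual hash function, and the server names are made up — but it shows the simple modulo style of distribution, where a key's hash picks one server from the ordered list you supplied:

```python
import hashlib

# Hypothetical server list; order matters, since values are matched
# based on server order as supplied by you.
servers = ["cache1:11211", "cache2:11211", "cache3:11211"]

def route(key):
    # Hash the key to a number, then pick a server by modulo.
    # (libmemcached offers several real hash and distribution methods,
    # selectable via memcached_behavior_set(); this is just the idea.)
    h = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return servers[h % len(servers)]

# Every client that shares the same server list and hash function
# agrees on where a given key lives, without any coordination.
print(route("user:42"))
```

The drawback of pure modulo distribution is that adding or removing a server remaps almost every key, which is why a consistent distribution mode is also offered.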
So I got around to solving this thing when I wanted to create a QFrame that docks itself to the top right, ignoring all layout (so it can actually be on top of things). Useful to me as a sort of icon bar on top of a tab widget, given that the tab widget would never have so many tabs that the tabs go behind the icons. To dock something it would need to know its parent widget and connect to the parent resize event to update its own geometry. There is no resize signal however, so the resizeEvent needs to be overridden; but this isn't possible because the resizeEvent handles all kinds of stuff that we need. So we can choose the cheap way out and inherit QWidget, override the resizeEvent and create a QFrame that is outside the layout and always forced in the top right, but let's disregard that for a moment as this gets more interesting.

We can't create signals at runtime, so we need a custom signal class that works exactly like pyqtBoundSignal in usage, except it doesn't crash Qt on creation. Note: the pyqtBoundSignal class can't be created manually; the pyqtSignal class is just a placeholder and can't be used as it contains no actual signal functionality. We also can't extend functions in a decent way in Python, but this hack proved quite useful.

'''
Created on Feb 15, 2013

@author: Trevor van Hoof
@package Qtutils
'''

class UnboundSignal():
    def __init__(self):
        self._functions = []

    def emit(self):
        for function in self._functions:
            function()

    def connect(self, inBoundFunction):
        self._functions.append(inBoundFunction)

    def disconnect(self, inBoundFunction):
        try:
            self._functions.remove(inBoundFunction)
        except:
            print('Warning: function %s not removed from signal %s' % (inBoundFunction, self))

So here's the UnboundSignal class I use. It just implements all the signal functionality I use (new style), and then I can instantiate it. It is only not aware of what parent it has or the self class, but as a bonus it could be driven by multiple classes or instances at the same time.
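Because UnboundSignal is plain Python, it can be exercised without Qt at all. A quick sanity check of the class, with two listeners sharing one signal (the "driven by multiple instances" bonus mentioned above):

```python
class UnboundSignal:
    """Plain-Python stand-in for pyqtBoundSignal, as in the post above."""
    def __init__(self):
        self._functions = []

    def emit(self):
        for function in self._functions:
            function()

    def connect(self, inBoundFunction):
        self._functions.append(inBoundFunction)

    def disconnect(self, inBoundFunction):
        try:
            self._functions.remove(inBoundFunction)
        except ValueError:
            print('Warning: function %s not removed from signal %s' % (inBoundFunction, self))

calls = []

def listener_a():
    calls.append("A")

def listener_b():
    calls.append("B")

resized = UnboundSignal()
resized.connect(listener_a)
resized.connect(listener_b)
resized.emit()                 # both listeners fire, in connection order
resized.disconnect(listener_a)
resized.emit()                 # only listener_b remains
print(calls)                   # ['A', 'B', 'B']
```

Disconnecting one listener leaves the others intact, which is exactly the behaviour you'd expect from a Qt signal.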
Example: when you wish to have one object fill the gap between two others, you need the middle object to link to the resizeEvent of both, or you just give the other objects a shared signal. Then for our test class we need to initialize it with a parent, always.

from PyQt4 import QtGui

from Qtutils.LaunchAsStandalone import QtStandalone
from Qtutils.unboundsignal import UnboundSignal

class Tst(QtGui.QFrame):
    def __init__(self, inParent):
        QtGui.QFrame.__init__(self, inParent)

Then as the parent is known we can give that parent a resized property, set it to a new signal and connect to that signal.

        self.parent().resized = UnboundSignal()
        self.parent().resized.connect(self.doPrint)

Lastly we need to override the resizeEvent and show the widget:

        self.parent().resizeEvent = self.extendResizeEvent(self.parent().resizeEvent)
        self.show()

Now for that extend method:

    '''
    Awesome method extension from
    '''
    def extendResizeEvent(self, fn):
        def extendedResizeEvent(*args, **kwargs):
            fn(*args, **kwargs)
            fn.__self__.resized.emit()
            # we could do this instead of using the signal:
            # self.updatePosition()
            # but the signal could be created out of
            # this class and be globally accessible
        return extendedResizeEvent

It could even stack infinitely, and as long as all the extensions do not depend on new arguments it is reasonably maintainable code. Then, lastly, let's launch the app:

def main():
    w = QtGui.QWidget()
    Tst(w)
    w.show()
    return w

QtStandalone(main)

The QtStandalone class can be found in this post. To finish this example we could implement no-parent initializing and override the setParent command to disconnect from the current signal and create another signal on another parent again; or always have this class be owner of the signal instead of the parent that emits it (also reverting the function); but that may lead to more trouble when doing this with multiple objects to the same parent.
Also, we should check whether the parent already has a resized signal, in which case the initialization is not necessary.
How APIs/Plugins Are Run

This documentation isn't up to date with the latest version of Gatsby:
- mention how multiple configurations are merged
- the node creation flow in the diagram is no longer correct
- CREATE_NODE and onCreateNode are handled differently than described
You can help by making a PR to update this documentation.

For most sites, plugins take up the majority of the build time. So what's really happening when APIs are called?

Note: this section only explains how gatsby-node plugins are run, not browser or SSR plugins.

Early in the build

Early in the bootstrap phase, you load all the configured plugins (and internal plugins) for the site. These are saved into redux under the flattenedPlugins namespace. Each plugin in redux contains the following fields:

- resolve: absolute path to the plugin's directory
- id: String concatenation of 'Plugin ' and the name of the plugin. E.g. Plugin query-runner
- name: The name of the plugin. E.g. query-runner
- version: The version as per the package.json. Or if it is a site plugin, one is generated from the file's hash
- pluginOptions: Plugin options as specified in gatsby-config.js
- nodeAPIs: A list of node APIs that this plugin implements. E.g. [ 'sourceNodes', ...]
- browserAPIs: List of browser APIs that this plugin implements
- ssrAPIs: List of SSR APIs that this plugin implements

In addition, you also create a lookup from API to the plugins that implement it and save this to redux as api-to-plugins. This is implemented in load-plugins/validate.js.

apiRunInstance

Some API calls can take a while to finish. So every time an API is run, you create an object called apiRunInstance to track it. It contains the following notable fields:

- id: Unique identifier generated based on the type of API
- api: The API you're running. E.g. onCreateNode
- args: Any arguments passed to api-runner-node. E.g.
a node object
- pluginSource: optional name of the plugin that initiated the original call
- resolve: promise resolve callback to be called when the API has finished running
- startTime: time that the API run was started
- span: opentracing span for tracing builds
- traceId: optional args.traceId provided if the API will result in further API calls (see below)

Immediately place this object into an apisRunningById Map, where you track its execution.

Running each plugin

Next, filter all flattenedPlugins down to those that implement the API you're trying to run. For each plugin, you require its gatsby-node.js and call its exported API function. E.g. if the API was sourceNodes, it would result in a call to gatsbyNode['sourceNodes'](...apiCallargs).

Injected arguments

API implementations are passed a variety of useful actions and other interesting functions/objects. These arguments are created each time a plugin is run for an API, which allows us to rebind actions with default information. All actions take 3 arguments:

- The core information required by the action. E.g. for createNode, you must pass a node
- The plugin that is calling this action. E.g. createNode uses this to assign the owner of the new node
- An object with misc action options:
  - traceId: See below
  - parentSpan: opentracing span (see tracing docs)

Passing the plugin and action options on every single action call would be extremely painful for plugin/site authors. Since you know the plugin, traceId and parentSpan when you're running your API, you can rebind injected actions so these arguments are already provided. This is done in the doubleBind step.

Waiting for all plugins to run

Each plugin is run inside a map-series promise, which allows them to be executed concurrently. Once all plugins have finished running, you remove them from apisRunningById and fire an API_RUNNING_QUEUE_EMPTY event. This, in turn, results in any dirty pages being recreated, as well as their queries. Finally, the results are returned.
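The doubleBind idea — pre-filling the plugin and the action options so that API implementers only pass the core argument — is essentially partial application. A language-agnostic sketch in Python (the names here are invented for illustration; Gatsby's real implementation is in JavaScript):

```python
from functools import partial

def create_node(node, plugin=None, action_options=None):
    """Stand-in for a Gatsby action: core argument first, then bookkeeping."""
    return {
        "node": node,
        "owner": plugin,
        "traceId": (action_options or {}).get("traceId"),
    }

# At API-run time the runner knows which plugin is running and which
# traceId applies, so it rebinds the action once...
bound_create_node = partial(
    create_node,
    plugin="gatsby-source-filesystem",
    action_options={"traceId": "initial-sourceNodes"},
)

# ...and the plugin author only supplies the node itself.
result = bound_create_node({"id": "node-1"})
print(result["owner"], result["traceId"])
# gatsby-source-filesystem initial-sourceNodes
```

The owner and traceId ride along automatically on every call, which is exactly why the runner can later attribute node ownership and tie cascaded calls back to their origin.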
Using traceId to await downstream API calls

The majority of API calls result in one or more implementing plugins being called. You then wait for them all to complete, and return. But some plugins (e.g. sourceNodes) result in calls to actions that themselves call APIs. You need some way of tracing whether an API call originated from another API call, so that you can wait on all child calls to complete. The mechanism for this is the traceId.

- The traceId is passed as an argument to the original API runner.
- You keep track of the number of API calls with this traceId in the apisRunningByTraceId Map. On this first invocation, it will be set to 1.
- Using the action rebinding mentioned above, the traceId is passed through to all action calls via the actionOptions object.
- After reducing the Action, a global event is emitted which includes the action information.
- For the CREATE_NODE and CREATE_PAGE events, you need to call the onCreateNode and onCreatePage APIs respectively. The plugin-runner takes care of this. It also passes on the traceId from the Action back into the API call.
- You're back in api-runner-node.js and can tie this new API call back to its original. So you increment the value of apisRunningByTraceId for this traceId.
- Now, whenever an API finishes running (when all its implementing plugins have finished), you decrement apisRunningByTraceId[traceId].
- If the original API call included the waitForCascadingActions option, then you wait until apisRunningByTraceId[traceId] == 0 before resolving.
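The bookkeeping described above is a reference count per traceId: increment when an API run starts under that trace, decrement when one finishes, and resolve once the count reaches zero. A minimal Python sketch of the idea (the names are illustrative, not Gatsby's source):

```python
apis_running_by_trace_id = {}

def api_started(trace_id):
    # First invocation sets the count to 1; cascaded calls increment it.
    apis_running_by_trace_id[trace_id] = apis_running_by_trace_id.get(trace_id, 0) + 1

def api_finished(trace_id):
    # Returns True when no API runs remain for this trace, i.e. when it
    # is safe to resolve a waitForCascadingActions-style promise.
    apis_running_by_trace_id[trace_id] -= 1
    return apis_running_by_trace_id[trace_id] == 0

api_started("t1")          # original sourceNodes call
api_started("t1")          # cascaded onCreateNode call under the same trace
print(api_finished("t1"))  # False: the cascade is still running
print(api_finished("t1"))  # True: everything for t1 is done
```

The count only reaches zero once every cascaded child has finished, which is what lets the original caller await the whole tree of downstream work rather than just its direct plugins.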
Contents
- A Little History
- The Basics
- So What Is a Path, Really?
- Reading and Writing
- Navigating the Filesystem
- Copying Files
- File Attributes
- Summing Up
- Resources

Editor's Note: Along with exotic and ambitious proposed features like the Java Module System, closures, and language-level XML support, do you suppose Java 7 will provide us a reliable file-copy method? It could happen, as a JSR for "More New I/O APIs for the Java Platform" appears to be a likely candidate for inclusion in Java 7. In this installment of The Open Road, Elliotte Rusty Harold takes a detailed look at the current state of the NIO2 spec and how it will, and sometimes won't, help you work with files.

Before we begin, here's a brief update on the status of the OpenJDK 7 project. The most recent JDK 7 drop is Build b29, posted June 20. A look at b29's summary of changes shows this to mainly be a bug-fix beta, with defects cleared in the compiler, build scripts, AWT, and a few other areas. Releases from the project have been coming out every two weeks or so since April -- taking an unsurprising break in early May for JavaOne -- with b28, b27, b26, and b25 continuing to fix defects and add minor features, such as JMX support for platform MXBeans of any type (bug 6610094), and an IO redirection API for sub-processes (bug 4960438). And speaking of bugs, take a look at bug 4032604, "Copy method in class java.io.File." The first two comments on the bug were posted by the author and the editor of the article you're about to read -- 11 and nine years ago, respectively. Will we finally get our wish? Read on for Elliotte's answer.

Java is a cross-platform language and environment. However, the Java VM itself needs to communicate with the native processor, operating system, and file system. If native code is to be avoided, everything you'd rely on the OS for in a classic program has to be provided by Java instead. Reimplementing a complete virtual OS and API takes a while. In Java's
In Java's case, and specifically in the case of the file system, the job has taken over a decade; and it still isn't done. Nonetheless, Java 7 may finally finish some of the last abstractions needed to create filesystem-aware programs with all the power of their native counterparts. A Little History Sometimes it's the little things that are most annoying, like the mosquito that won't stop buzzing around your bed at night. Sometimes these irritants grow over time. In Java 1.0 the language was so new we barely noticed that it had no reliable way to copy or move a file. In Java 1.1 we were so happy about internationalization and readers and writers that we figured moving and copying files would surely come in the next release. In Java 1.2 we got distracted by Swing, and didn't think much about I/O. By Java 1.3, however, we were starting to get a little antsy. Surely Sun could have offered us file copies by now? We were definitely getting a little tired of running long streams just to move a file from point A to point B while losing all the metadata in the process. We were very tired of shelling out to native code to move files because renameTo() only worked on an unpredictable subset of the systems we needed to run on. But Sun promised that they'd get around to a decent file system interface in Java 1.4. Java 1.4 arrived, and it was full of buffers and channels and charsets and more in a spanking new java.nio package. Unfortunately, the new filesystem interface we'd been promised was nowhere to be found. Seems the developers working on java.nio had gotten so excited about non-blocking I/O and memory-mapped files that they ran out of time or just plum forgot about their promise to finally let us move and copy files. Java 5, and then Java 6, came and went with nary a file copy operation in sight, though Sun did manage to find time to invent the most complicated and simultaneously least powerful generics implementation I've ever seen. 
It was starting to feel like the priorities were more than a little skewed over at the JCP. Complicated, sexy proposals like generics, closures, and asynchronous I/O got a lot more attention than they deserved, while basic, fundamental, and easy but unsexy functionality, such as copying and moving files, was starved for resources. However, finally in Java 7, it looks like there's at least a 50-50 chance we'll get a filesystem API that's more powerful than the clunky old java.io.File that was thrown together twelve years ago to push Java 1.0 out the door. Sun, IBM, Oracle, Intel, HP, Google, NTT, and Doug Lea are working on JSR 203 to create "More New I/O APIs for the Java Platform ('NIO.2')". Don't hold your breath yet, but do keep your fingers crossed. Maybe, just maybe, we'll finally be able to copy files in Java 7.

The Basics

In Java 7 java.io.File won't be deprecated, but it probably should be. Instead, all files should be referenced through the java.nio.file package. The central abstraction there is a FileSystem, which can represent any hierarchy of files, wherever it happens to live: Windows, Unix, Mac OS X, network, zip archive, or something else. For example, this is how you create a Path object for the file /home/elharo/articles/java.net/article3.html on the local default file system:

FileSystem local = FileSystems.getDefault();
Path p = local.getPath("/home/elharo/articles/java.net/article3.html");

You can create relative paths, too. These are relative to the current working directory:

Path p = fileSystem.getPath("articles/java.net/article3.html");

Most of the time, you'll just want to use the operating system's native file system, which is available via the FileSystems.getDefault() method. If this is all you want, and usually it is, the static Path.get() method saves you a few columns of horizontal space:

Path p = Path.get("/home/elharo/articles/java.net/paths.html");

However, you can install other file systems that point somewhere other than the local file system.
For instance, you could have a file system that accesses an HTTP server, reads a zip archive, mounts an ISO disk image, or views a Mercurial repository. Each such file system would have its own path and attribute classes. However, basic operations could still be performed with the abstract superclasses I discuss here.

The possibility of alternate file systems gives us a second way to create path objects. Given a file system that supports a URI scheme, you can create a Path object from the URI. For example, imagine you've installed a RESTful file system provider that uses HTTP GET for reading, HTTP PUT for writing, HTTP OPTIONS and HEAD for attributes, and HTTP DELETE for removal. Then you can point to a file on the server like so:

Path p = Path.get(new URI(""));

There's also a toUri() method that converts an absolute Path to a filesystem-specific URI:

URI uri = path.toAbsolutePath().toUri();

Finally, if you're passed a traditional java.io.File object by some old code, you can convert it to the new hotness with the getFileRef() method:

File f = new File("/foo/bar.txt");
Path p = f.getFileRef();

Unfortunately, the more obvious name getPath() was already taken.

So What Is a Path, Really?

A Path stores a hierarchical list of names indexed from zero to one less than the length of the list. The last name in the list is the name of the individual file or directory the path refers to. The first name will be the root of the file system if the path is absolute. Other parts of the path are parent directories. These methods inspect this list:

- Path getName(int n) - Returns the nth component in this path. The root of the path is 0. The file/directory itself is one less than the number of components of the path.
- int getNameCount() - Returns the number of components in the path.
- Path getParent() - Returns the path to the parent of this file or directory, or null if this path does not have a parent (that is, if this path is itself a root).
- Path getRoot() - Returns the root component of this path. For an absolute path on a Unix-like file system, this will be /. For an absolute path on a DOS-like file system, this will be something like C:\ or D:\. For a relative path, this will be null.

Of course, paths aren't always quite perfect trees. In relative paths, the root is missing. Sometimes symbolic links cause the path to jump to a different subtree. The toAbsolutePath() method converts a path to an absolute path starting from a root of the file system, wherever that might be. Invoking toRealPath(false) on a path removes path segments like /./ and /../ from the path before computing an absolute path. Invoking toRealPath(true) on a path removes path segments like /./ and /../ and also resolves all symbolic links before returning the absolute path.

You can use several variants of the resolve method to create new paths from an existing path. For example, suppose temp points to /usr/tmp:

Path temp = fileSystem.getPath("/usr/tmp");

We can resolve other paths with this as the root. For example, resolving articles/java.net/article3.html against temp creates a path pointing to /usr/tmp/articles/java.net/article3.html:

Path p = fileSystem.getPath("articles/java.net/article3.html");
Path resolved = temp.resolve(p);

The inverse operation of resolution is relativization. Given an absolute path such as /usr/tmp/articles/java.net/article3.html, you can convert it to a path relative to some other path such as /usr/tmp:

Path absolute = fileSystem.getPath("/usr/tmp/articles/java.net/article3.html");
Path temp = fileSystem.getPath("/usr/tmp");
Path relative = temp.relativize(absolute);

If necessary, relativization can add ./ and ../ to the path to properly relativize.
For example, here I calculate a relative link from an article in one directory to an article in another directory:

Path article3 = fileSystem.getPath("/usr/tmp/articles/java.net/article3.html");
Path article7 = fileSystem.getPath("/usr/tmp/articles/developerWorks/article7.html");
Path link = article7.relativize(article3);

link now points to ../../java.net/article3.html. These methods could be useful when setting up a templating system for a blog engine, or a content management system, and converting file paths to URLs, for example. If you use them that way, please do be careful that you don't accidentally let crackers go wandering all over the file system outside your content root, though.

Reading and Writing

To write to a path, you call newOutputStream(), and then use the returned object as normal. Example 1 shows a simple method to write the ASCII letters A through Z into a file in the current working directory named alphabet.txt:

public void makeAlphabetFile() throws IOException {
    Path p = Path.get("alphabet.txt");
    OutputStream out = p.newOutputStream();
    for (int c = 'A'; c <= 'Z'; c++) {
        out.write(c);
        out.write('\n');
    }
    out.close();
}

This program will create the file if it doesn't exist, and overwrite it if it does. However, you can adjust this by passing StandardOpenOption.APPEND or StandardOpenOption.CREATE_NEW to the newOutputStream() method:

OutputStream out = p.newOutputStream(EnumSet.of(StandardOpenOption.CREATE_NEW));

Now alphabet.txt will be created if and only if it doesn't already exist. Otherwise the attempt will throw an exception. There are several options you can use when opening a file:

- StandardOpenOption.CREATE (default behavior for writes) - Create a file if it does not already exist.
- StandardOpenOption.CREATE_NEW - Expect that the file does not already exist, and create it. Throw an exception if the file does already exist.
- StandardOpenOption.APPEND - Write new data to the end of the existing file.
- StandardOpenOption.TRUNCATE_EXISTING (default for writes) - Remove all data from an existing file when opening it.
- StandardOpenOption.NOFOLLOW_LINKS - Throw an exception if it is necessary to resolve a symbolic link to open a file.
- StandardOpenOption.SPARSE - Suggest that the file will be sparse so the underlying operating system can optimize for that use case. File systems that don't support sparse files will ignore this hint.
- StandardOpenOption.DSYNC - Write data to the underlying disk immediately. Do not use native buffering. This does not affect buffering internal to Java, such as that performed by BufferedInputStream and BufferedWriter.
- StandardOpenOption.SYNC - Write data and metadata (attributes) to the underlying disk immediately.
- StandardOpenOption.READ - Open for reading.
- StandardOpenOption.WRITE - Open for writing.

These options apply not just in this method, but for all methods in the API that open files. Not all of these are mutually exclusive. You can use several when opening a file. You can buffer or otherwise filter these streams as normal. Example 2 shows a better makeAlphabetFile() method that uses UTF-8 encoding, and buffers the data:

public void makeAlphabetFile() throws IOException {
    Path p = Path.get("alphabet.txt");
    OutputStream out = p.newOutputStream();
    out = new BufferedOutputStream(out);
    Writer w = new OutputStreamWriter(out, "UTF-8");
    w = new BufferedWriter(w);
    for (int c = 'A'; c <= 'Z'; c++) {
        w.write(c);
        w.write('\n');
    }
    w.flush();
    w.close();
}

For reading, just use newInputStream() instead. You can also specify attributes for newly created files when opening a path for writing. I'll discuss those below. There are also methods that create channels instead, though on modern VMs, I'm skeptical whether that's really helpful or just more complex.
Threading has improved so much in Java 6 that it's no longer a problem to run thousands or even tens of thousands of streams in separate threads, thereby removing much of the impetus for using channels and non-blocking I/O in the first place. Perhaps the true asynchronous I/O also introduced with JSR-203 will make channels relevant again, but this remains to be seen.

However, there is one case that definitely calls for channels: random access files. There's no specific new RandomAccessFile class. Instead you ask the path to give you a SeekableByteChannel:

Path p = Path.get("fits.dat");
SeekableByteChannel raf = p.newSeekableByteChannel(
    StandardOpenOption.READ,
    StandardOpenOption.WRITE,
    StandardOpenOption.SYNC,
    StandardOpenOption.DSYNC
);

The SeekableByteChannel class is a new subinterface of ByteChannel that extends it with methods for moving the file pointer around in the file before reading or writing:

public interface SeekableByteChannel extends ByteChannel {
    public int read(ByteBuffer dest) throws IOException;
    public int write(ByteBuffer source) throws IOException;
    public long position() throws IOException;
    public SeekableByteChannel position(long newPosition) throws IOException;
    public long size() throws IOException;
    public SeekableByteChannel truncate(long size) throws IOException;
}

Navigating the Filesystem

To list the files in a directory you'll use a DirectoryStream, which is not really a stream at all. Instead, it's an Iterable that returns DirectoryEntry objects from which you can get more Paths. These Path objects are all relative to their parent directories. The process starts with a call to the newDirectoryStream() method of the path representing a directory.
Example 3 is a program that lists all the .txt files in the roots of the filesystem:

import java.io.IOException;
import java.nio.file.*;

public class TextLister {
    public static void main(String[] args) throws IOException {
        for (Path root : FileSystems.getDefault().getRootDirectories()) {
            DirectoryStream txtFiles = root.newDirectoryStream("*.txt");
            try {
                for (Path textFile : txtFiles) {
                    System.out.println(textFile.getName());
                }
            } finally {
                txtFiles.close();
            }
        }
    }
}

For filters beyond simple name filters -- for instance, filtering by size or MIME type -- you have to implement your own instance of the DirectoryStream.Filter interface to specify which files to accept and reject. For example, here's a simple filter that accepts files that are less than 100K in size:

public class SmallFilesOnly implements DirectoryStream.Filter {
    public boolean accept(DirectoryEntry entry) {
        try {
            if (entry.newSeekableByteChannel().size() < 102400) {
                return true;
            }
            return false;
        } catch (IOException ex) {
            return false;
        }
    }
}

Unfortunately, you can't just pass an instance of this filter to newDirectoryStream() as you might expect. Instead, you have to use a far less direct and more opaque means of listing the directory using the Files.withDirectory() method:

import java.io.IOException;
import java.nio.file.*;

public class SmallFileLister {
    public static void main(String[] args) throws IOException {
        for (Path root : FileSystems.getDefault().getRootDirectories()) {
            Files.withDirectory(root, new SmallFilesOnly(), new DirectoryAction() {
                public void invoke(DirectoryEntry entry) {
                    System.out.println(entry.getName());
                }
            });
        }
    }
}

I'm not sure what the working group has against simple, straightforward iteration, but instead we have to use this confusing closure-lite syntax. However, Java is not a language that was designed around closures, and closure-based methods like this just don't fit. There are just too many layers of indirection, and it's too hard to see what actually happens.
For instance, in Example 5, can you tell me how to print the names of the first 10 entries, and then break? Doable, yes; but not trivial. Functional languages have their place, but they don't mix well with imperative languages like Java. Usable Java APIs should emphasize imperative design patterns, not functional ones.

Copying Files

Suppose you want to copy the file charm.txt in the directory cats to the file charm_harold.xml in the directory pets. Before Java 7, you had to open the source file and the destination file, read the entire contents from the source, and then write them to the destination. For a large file this could take a while, and usually you'd lose metadata such as permissions, owners, MIME types, archive flags, and such in the process. Example 6 shows how to accomplish this basic task in Java 7:

FileSystem fileSystem = FileSystems.getDefault();
Path charm = fileSystem.getPath("cats/charm.txt");
Path pets = fileSystem.getPath("pets/charm_harold.xml");
charm.copyTo(pets);

On many operating systems this will happen a lot faster than streaming data from one file to another. Furthermore, it should preserve all metadata that should be preserved. Security restrictions may prevent certain metadata from being copied, and other features such as the file creation time may be changed.

Now suppose instead of copying a file you want to move a file. In Java 6 and earlier, all you could do was rename the file, which worked on some operating systems but not on others, and usually didn't work for network volumes even if it worked for local disks. Or you could copy the file byte by byte, and then delete the original. Now, however, it's this simple:

FileSystem fileSystem = FileSystems.getDefault();
Path charm = fileSystem.getPath("cats/charm.txt");
Path pets = fileSystem.getPath("pets/charm_harold.xml");
charm.moveTo(pets);

This can be much faster even for very large files because most of the time no bits need to be moved at all.
The local native file system simply needs to rewrite a few entries in a virtual table. Moves between physical disks or across the network do need to move bytes and will take finite time. These methods are synchronous and blocking. If that bothers you, just wrap the transfer in a FutureTask and pass it to an Executor.

Of course, I/O is still an unsafe operation. These methods can throw IOExceptions if the source file doesn't exist, if the target directory is read-only, if a floppy is ejected while a copy is being written to it, if a network goes down while a file is being read, or any other such problems. As always, you'll need to wrap these operations in a try-catch block or declare that your method throws the relevant exception. You may also want to implement your own recovery logic. File copies and moves over the network or between disks take real time; and if an operation is interrupted in medias res, the target file may be half-written and in an inconsistent, corrupt state.

By default, when a file is copied or moved:

- The copy fails if the target file already exists.
- File attributes may or may not be copied to the target file, in whole or in part.
- If you're copying a symbolic link, the target of the link is copied, not the link itself.
- If you're moving a symbolic link, the link is moved, not the target of the link. This is an asymmetry between copies and moves.
- Directories are moved only if they're either empty or being moved to the same file system.

Sometimes this is what you want, and sometimes it isn't. You can adjust the behavior of the copy/move by passing one or more copy options to the copyTo() or moveTo() methods:

- StandardCopyOption.REPLACE_EXISTING: Overwrite a preexisting target file.
- StandardCopyOption.COPY_ATTRIBUTES: Preserve all the original's attributes in the copy.
- StandardCopyOption.NOFOLLOW_LINKS: Do not follow symbolic links from the target when copying. Copy the links themselves instead.
- StandardCopyOption.ATOMIC_MOVE: Copy/move the entire file or nothing.

For example, if you want to overwrite an existing target when copying, pass StandardCopyOption.REPLACE_EXISTING to copyTo() like so:

source.copyTo(target, StandardCopyOption.REPLACE_EXISTING);

If you want to overwrite an existing target and preserve the original file attributes, pass StandardCopyOption.REPLACE_EXISTING and StandardCopyOption.COPY_ATTRIBUTES:

source.copyTo(target, StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.COPY_ATTRIBUTES);

Yes, the syntax does not look like the options for creating a stream. Those use an EnumSet while these use varargs. Particular file systems may support additional non-standard options, but these four are required.

File Attributes

Metadata about a file such as owners, permissions, readability, and so forth has now been separated from the file class itself. You request attributes from a path using the new java.nio.file.Attributes class like so:

BasicFileAttributes attrs = Attributes.readBasicFileAttributes(path, false);

This only gives you the basic attributes that are common to most file systems, most of which have been available since Java 1.0.
Example 8 is a simple program to list all the attributes for files named on the command line:

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.*;
import java.util.Date;
import java.util.concurrent.TimeUnit;

public class AttributePrinter {
    public static void main(String[] args) throws IOException {
        for (String name : args) {
            Path p = Path.get(name);
            BasicFileAttributes attrs = Attributes.readBasicFileAttributes(p, false);
            TimeUnit scale = attrs.resolution();
            // all dates are since the epoch, but we do need to adjust for
            // different time units used in different file systems
            System.out.println(name + " was created at "
                + new Date(scale.toMillis(attrs.creationTime())));
            System.out.println(name + " was last accessed at "
                + new Date(scale.toMillis(attrs.lastAccessTime())));
            System.out.println(name + " was last modified at "
                + new Date(scale.toMillis(attrs.lastModifiedTime())));
            if (attrs.isDirectory()) {
                System.out.println(name + " is a directory.");
            }
            if (attrs.isFile()) {
                System.out.println(name + " is a normal file.");
            }
            if (attrs.isSymbolicLink()) {
                System.out.println(name + " is a symbolic link.");
            }
            if (attrs.isOther()) {
                System.out.println(name + " is something strange.");
            }
            System.out.println(name + " is " + attrs.size() + " bytes long.");
            System.out.println("There are " + attrs.linkCount() + " links to this file.");
        }
    }
}

These attributes are assumed to be more or less the same on different file systems, though this isn't always true. Not all file systems track the last access time, for example. You can ask for more platform-specific attributes with the readDosFileAttributes() and readPosixFileAttributes() methods.
For example, here's a simple program to list all the attributes for a Windows file named at the DOS prompt:

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.*;

public class WindowsAttributePrinter {
    public static void main(String[] args) throws IOException {
        for (String name : args) {
            Path p = Path.get(name);
            DosFileAttributes attrs = Attributes.readDosFileAttributes(p, false);
            if (attrs.isArchive()) {
                System.out.println(name + " is backed up.");
            }
            if (attrs.isReadOnly()) {
                System.out.println(name + " is read-only.");
            }
            if (attrs.isHidden()) {
                System.out.println(name + " is hidden.");
            }
            if (attrs.isSystem()) {
                System.out.println(name + " is a system file.");
            }
        }
    }
}

POSIX file attributes are group, owner, and permissions. You'll get an UnsupportedOperationException if you try to read DOS attributes from a POSIX file system or vice versa. Other providers can offer their own subclasses of BasicFileAttributes. For instance, Apple might offer MacFileAttributes, and Microsoft (or third parties) NTFSFileAttributes. However, these additional attributes can't be so easily plugged into the system.

I must say this is the piece of JSR-203 that strikes me as most questionable. File systems and file metadata are still evolving. The current system doesn't even support what's available today in Vista (indexes, archived, etc.) or Mac OS X Leopard (file type, creator type, etc.), much less what may be available in five years. I think we need a more flexible approach that does not presume it knows the names, types, or meaning of all possible file system metadata in advance. A generic key-value system would be a lot more palatable.

Summing Up

Copying files and checking permissions aren't the sexiest parts of a programmer's job. Indeed, they're among the most prosaic. Nonetheless, they are extremely important. The lack of a good way to do this has been a really critical omission in Java for years.
Finally, Java 7 fills these basic holes. Add on top of that sexier new I/O features, such as watch lists, true asynchronous I/O, and virtual file systems, and Java 7 may finally have a modern foundation for input and output on which the next generation of clients, servers, and desktop apps can be built.

Resources

- JSR 203 home page
- More New I/O APIs for the Java Platform subproject in the OpenJDK community
- JDK 7 project
- The Open Road column on java.net
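Update: JSR 203's draft API was reworked before Java 7 finally shipped. The static factory methods described above ended up on a java.nio.file.Paths utility class, and operations such as copying, moving, and attribute reading moved off Path itself onto a java.nio.file.Files utility class. Purely for comparison, here is this article's copy/move scenario written against the API as it actually shipped (the file names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.BasicFileAttributes;

public class ShippedApiDemo {
    public static void main(String[] args) throws IOException {
        // A scratch directory so the demo is self-contained
        Path dir = Files.createTempDirectory("nio2demo");
        Path source = dir.resolve("charm.txt");
        Files.write(source, "whiskers".getBytes("UTF-8"));

        // Files.copy() replaced the draft's Path.copyTo()
        Path copy = dir.resolve("charm_copy.txt");
        Files.copy(source, copy, StandardCopyOption.REPLACE_EXISTING,
                   StandardCopyOption.COPY_ATTRIBUTES);

        // Files.move() replaced the draft's Path.moveTo()
        Path moved = dir.resolve("charm_moved.txt");
        Files.move(copy, moved, StandardCopyOption.REPLACE_EXISTING);

        // Attributes.readBasicFileAttributes() became Files.readAttributes()
        BasicFileAttributes attrs = Files.readAttributes(moved, BasicFileAttributes.class);
        System.out.println(moved + " is " + attrs.size() + " bytes long");
    }
}
```

The core Path abstraction, resolve(), relativize(), and the StandardOpenOption/StandardCopyOption enums survived largely as this article describes them.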
One of the big questions that new programmers often ask is, “When a member function is called, how does C++ know integers for us. Here’s a sample program that uses this class: Let’s take a closer look at the following line: cSimple.SetID(2);. Although it looks like this function only has one parameter, it actually has two! When you call cSimple.SetID(2);, C++ internally converts this to SetID(&cSimple, 2);. Note that this is just a normal function call where C++ has added a parameter, and automatically passed in the address of the class object! cSimple.SetID(2); SetID(&cSimple, 2); Since C++ converts the function call, it also needs to convert the function itself. It does so like this: becomes: C++ has added a new parameter to the function. The added parameter is a pointer to the class object the class function is working with, and it is always named “this”. The this pointer is a hidden pointer inside every class member function that points to the class object the member function is working with. Note that m_nID (which is a class member variable) has been converted to this->m_nID. Since “this” is currently pointing to cSimple, this actually resolves to cSimple->m_nID, which is exactly what we wanted! m_nID this->m_n: And it is pretty easy to see the benefit in being able to do that! We will cover overloading the + operator (and other operators) in a future lesson. The important point to take away from this lesson is that the “this” pointer is a hidden parameter of any member function. Most of the time, you will not need to access it directly. It’s worth noting that “this” is a const pointer -- you can change the value of the object it points to, but you can not make it point to something else! As a pointer, “this” contains the memory address of a class object. However, since we can call a class function from different objects, “this” pointer should be able to contain different addresses so that it can point to different objects. 
How does this happen when “this” is a const pointer? As you note, the “this” pointer can contain different addresses so it can point to different objects. This does not violate the fact that the *this pointer is const. All the const means is that we can not change what the *this pointer points to when we’re inside the function. The const does not restrict what we set the pointer to in the first place! For example, consider the following function: This program is pretty straightforward. The first time we call PrintValue, we set nValue to 4. The second time we set it to 6. The *this pointer works exactly like this, except that it’s type is “X* const” instead of int (where X is the name of the class the non-static member function belongs to), and the compiler automatically sets it’s value based on the object we’re calling the function on. Ok thanks. I tried this but didn’t work , why ? [code] int main() { Simple cSimple(1); SetID(&cSimple,2); std::cout << cSimple.GetID() << std::endl; } I don’t see anything syntactically wrong with that program, so without knowing what the error message was, I can’t say. It is bizarre that you seem to have SetID() declared as a non-member function. Really it would be better if SetID() were declared as a member function, and then you could just say: “When you call cSimple.SetID(2);, C++ internally converts this to SetID(&cSimple, 2);” “SetID(&cSimple, 2)” <- normal function call how does the program knows that the function we are calling "SetID(Simple*, int)" is within "Simple", since it is not declared in the main loop but inside "Simple"? Thanks! In the class defenition if it give SetID(int) and in the main funtion SetID called with two parameters(&cSimple,int). How it will work? Hi there, If you CAN change the object that ‘this’ points to, but you CAN’T change the ‘this’ pointer itself, shouldn’t the parameter specification read: instead of ? Daniel. You are absolutely correct -- that is a great catch. I updated the code. Thank you. 
I have been doing quite well in these tutorials (Their great) but I have had one big problem: Is there a way to pause a program at a certain place until the user does something? For example: This will flash up “Hello World” for an instant and then close. I know that in Windows DOS, there is the ‘pause’ command. Is there a command that has a similar function in C++? Try the following: You finished this with “this” is a constant pointer, if that is so is it possible then to access variables though “this” in the same fashion as you would use a const pointer used to access a variable its pointing to in a function to garentee that the value cannot be changed… I know i could just try this myself but I just wanted to share an idea with the readers Hi Alex, First of all I want to thank you for this great tutorial..It helped me a lot.. I just have a small problem.. When I compiled this : It printed 8 on the screen..but when I removed the & from “Calc& Add”, “Calc& Sub” and “Calc& Mult”..It printed 5 on the screen instead.. I know that “Calc&” is a return by reference and that “Calc” is a return by value..Also, isn’t “this” supposed to be a pointer not a reference.. I really don’t get that..Could you please explain it for me ? Thanx in advance… Ahmed Maybe you want this : Calc * Add(int nValue) { m_nValue += nValue; return this; } cCalc.Add(5)->Sub(3).Mult(4); It’s just the differences between pointer and reference. Ahmed, in lesson 7.4a, Alex stated, “When a value is returned by value, a copy of that value is returned to the caller.”. Because you removed the & symbol from the return variable types, the function is now returning by value, which means it returns a NEW COPY of the class instance, not the original class instance itself! So the only function that operates on the instance cCalc is the first one, Add(5) - this will update m_nValue to 5, which explains the value that is printed. 
The other two functions, Sub(3) and Mult(4) are operating on copies of the original cCalc. I’m still inexperienced in C++, but the code you’ve presented above looks like an example of a memory leak. You’ve basically created three instances of the class Calc, but can only reference one of them, cCalc, in your code. This means that the other two must exist in memory, but you can’t reference them because they’ve been created indirectly (and unintentionally). STUPID qustion why does Calc& Add() work and &Calc Add() does not???? Is it the same as Calc &Add() ?????? AH! Got it!!! Sorry ignore me!!! is there any reason why this pointer is not implemented as a reference instead of a pointer? br, “…this actually resolves to cSimple->m_nID, which is exactly…” Should not this be cSimple.m_nID? cSimple is just an object, not a pointer to the class, so cSimple-> would try to go to the address stored in that location, instead of using the m_nID stored in that particular address space? Please correct me, if I am wrong. if there are 4 objects then how many this pointer will be created? Hi, Could anyone clarify the following for me based on the below code: Calc& Add(int nValue) { m_nValue += nValue; return *this; } My thinking is: The function returns an address to a Calc object, the this pointer is a constant pointer that points to the memory address of the object Calc, by this I mean: std::cout << this; //prints the memory address of the object std::cout << *this; //prints out the value stored at memory address this So my question is really, why return *this and not return this. return *this return this Also why is the amphersand after the Calc and not before as in Calc(amp here) not (amp here)Calc when stating the reurn type. Any help understanding this would be much appreciated. Thanks Okay, for anyone who may be having a similar problem I have come up with (at least half of) the answer. 
The first issue with my thought process was that the function returns an address to the Calc object. It does not. The function Calc& Add(int); returns a reference to a Calc object…oops! Calc& Add(int); this holds the address of the object that called Add. So it makes sense that the function would return *this as it is dereferencing the calling object. In other words it is returning the object itself not the address. this Add *this So what’s happening is: A Calc object is made, it calls Add(...) and Add returns a reference to the object that called it. In other words it returns something (a reference) that accesses the actual object that called it (rather than a copy). As the returned value can be used as though it is the object that called it (that’s all a reference really is, just another variable to access the same memory location) it can make a call to the next function as though it were the object itself. Hence the chaining working. Calc Add(...) So to reiterate, it is a reference to the object that is returned NOT the address of the object. Returning the address could be made to work but you would have to use the -> operator and pointers instead of the . operator and references(or something like that). -> . Lastly the amphersand is after the return type because that is just the grammar of C++ for returning by reference. Silly question really! good work.. I would like to comment on ampersand after return type. returning *this is used to achieve function chaining i.e. we want to call functions on the same objects. for ex. 
if you check operator overloading of << insertion operator it return input reference again ostream& operator<< (ostream &out, Point &cPoint) { out << "(" << cPoint.m_dX << ", " << cPoint.m_dY << ", " << cPoint.m_dZ << ")"; return out; } cout << pointobj1 << pointobj2; so function call goes like this 1)cout.operator <<(pointobj1);// in operator << we are returning reference of cout //that is used for calling << 2nd time 2)cout.operator <<(pointobj2); ###########################################3 In your code after removing & from return type from Add, Mul, Sub etc. return type is temporary object. your code: cCalc.Add(5).Sub(3).Mult(4); cout<<cCalc.GetValue(); gives you 5 instead of 8. so in short you need to ensure all functions should be called on same cCalc object. you have to return it by reference and not by value. I hope it helps… Regards. Returning ‘*this’ from the member fuctions means that you are dereferencing the object itself from the objects address ‘this’. The return type from the member functions is reference type to access the actually returned object. If you want to use pointers and return ‘this’ from the member functions instead of ‘*this’ you could do it like below. Note the object instantiated in the heap with ‘new’ and then deleted: #include “stdafx.h” #include; } }; int main() { using namespace std; Calc *cCalc= new Calc; cCalc->Add(5)->Sub(3)->Mult(4); cout<GetValue()<<endl; delete cCalc; system("pause"); return 0; } PS. Pasting the code broke the line 4th from the bottom, should be like this: cout<GetValue()<<endl; Darn, this page has some serious issues, it won’t let me fix the line and eats some characters. The line should be, cout, then <<, then member access operator from object ptr pointing to GetValue() and then <<endl; Bravo, I see Good Work here! 
Call like this if you want to get proper value using call by value cout << cCalc.Add(5).Sub(3).Mult(4).GetValue() ; cant wrap my head around this: Calc& Add(int nValue) { m_nValue += nValue; return *this; } can someone pls baby step me through this. @nice meme when it gets called, it receives an integer value and then adds the value with ‘m_nvalue and stores it in m_nValue and returns the address of that object (cCalc in this case) as the reference. P.S: ‘pls do correct if there is any mistakes..’ If there are 4 objects used in a program how many "this" pointers would exist for these object why, also, how many this pointer can exist in a program..?? Hello again Alex. in the top-most code: why doesn’t the compiler complain that we are attempting to use SetId() function before it’s officially declared as a void function? Thanks again for this all these epic lessons :). Good question. Function definitions within the class body are treated as if they were actually defined after the class had been defined. This means by the time the compiler gets around to compiling the body of the constructor, it’s already seen the definition for SetID(). I’m a little confused. Do you mean to say that the compiler first declares all class functions (including constructors) before defining any of them? Do you mean "This means by the time the compiler gets around to compiling the body of the constructor, it’s already seen the" declaration for SetID()? So in a way, the code is like: Yes, that’s exactly right! 
Hii Alex, one confusion regarding the "this" pointer. The type of the "this" pointer is "class-name* const this". Here 'const' specifies that the pointer will point to one particular object only, which we'll declare later. Now, when we call a member function on a particular object, C++ implicitly passes an extra argument in the member function's parameter list, namely the address of that object, and thus the function's prototype and definition also get changed implicitly. Now my question: the "this" pointer is a unique aspect of C++, and we also know that we can have a lot of objects of the same class type. So how can this unique pointer, which is also declared as const, point to so many objects? Every object of a class has its own instance variables, so how can this "this" pointer, which is also const, point to such a large number of objects simultaneously? Please help me out.

As you've noted, the "this" pointer is a hidden parameter that gets added onto every non-static member function, right? Just like a normal function parameter, the this pointer has function scope. When a member function is called, the compiler ensures that the address of the object whose member function is being called is passed in as the argument to the this pointer. This happens transparently. So the this pointer doesn't point to a large number of objects simultaneously; it exists as a function parameter only when a member function is being called.
So it means that if we have 2-3 objects of the same class and we call a non-static member function on all of them, it wouldn't contradict the "const" behavior of the "this" pointer, because every class object has its own instance of the class's fields. And we also know that function parameters are confined to that particular function call; they have nothing to do with other member functions' parameters. So every function has its own "this" pointer as a parameter, which contains the address of the particular object it is being called upon. Am I right?

Yes.

I have a question. When we use PostMessage or SendMessage, which will eventually call the message handlers, how is the this pointer passed in such cases? How is it preserved during the message handling process? How does the handler get the this pointer? Can you please elaborate, so that it would be helpful to understand? Thanks in advance.

I haven't written an application using the Windows API in probably 10 years, so I'm not qualified to speak to how it works.

I'm a little confused (hey, what's new?!) by the first example in this lesson. What's wrong with a simpler version, where the constructor sets m_nID directly instead of calling SetID()? I know you answered this in your earlier reply to a post, but I don't think you explained the reason for making it more complicated than in the example from lesson 8.5!

This "simpler" version of the class you've provided removes useful functionality. The constructor allows us to set the value of m_nID when the object is initialized, but there's no way to change it afterward. Thus, the line of the example that calls SetID() on an existing object wouldn't work. Having the constructor use SetID() instead of setting the value directly was just a convenient way to reduce redundant code.
Amazon has published an Amazon Web Services SDK for the Go programming language. Currently it's experimental, so expect bugs, but you can already use it to manage your AWS stuff.

What is AWS SDK

Amazon Web Services provide a lot of pieces of cloud infrastructure, such as EC2 (Elastic Compute Cloud), S3 (Simple Storage Service), and Route53 (DNS management). They can be managed through the web admin console, but the most power lies in the ability to use APIs to communicate with these services: provision new EC2 instances, upload or download files into S3, etc. SDKs for various programming languages wrap those APIs into convenient libraries, which can be used in an idiomatic way from the language. Amazon provides official AWS SDKs for many languages: C#, Java, JavaScript, Objective C, PHP, Python, and Ruby. Today they made an experimental version of the SDK available for Go.

Amazon Web Services SDK for Golang

The SDK for Go that Amazon released was first developed by Stripe, according to this blog post from AWS.

Installing and using AWS SDK

Currently the AWS SDK for Go supports 43 Amazon services, such as AutoScaling, EC2, S3, Glacier, CloudFront, SimpleDB, Route53, and others. Each service is represented by a separate subpackage. To use it, install the package you need with go get. For example, to use EC2:

```
go get github.com/awslabs/aws-sdk-go/gen/ec2
```

This will also automatically install the top-level aws package, which you'll need for authentication. Here's an example from the README on how to use it:

```go
import (
    "github.com/awslabs/aws-sdk-go/aws"
    "github.com/awslabs/aws-sdk-go/gen/ec2"
)

...

creds := aws.Creds(accessKey, secretKey, "")
cli := ec2.New(creds, "us-west-2", nil)
resp, err := cli.DescribeInstances(nil)
if err != nil {
    panic(err)
}
fmt.Println(resp.Reservations)
```

Source code and license

GitHub:
Documentation:
License: Apache License 2
In my humble opinion, one of the biggest mistakes the designers of the 'C' language made was to make the scope of all functions global by default. In other words, whenever you write a function in 'C', by default any other function in the entire application may call it. To prevent this from happening, you can declare a function as static, thus limiting its scope to, typically, the module it resides in. Thus a typical declaration looks like this:

```c
static void function_foo(int a)
{
}
```

Now I'd like to think that the benefits of doing this to code stability are so obvious that everyone would do it as a matter of course. Alas, my experience is that those of us that do this are in a minority. Thus in an effort to persuade more of you to do this, I'd like to give you another reason: it can lead to much more efficient code.

To illustrate how this comes about, let's consider a module called adc.c. This module contains a number of public functions (i.e. functions designed to be called by the outside world), together with a number of functions that are intended to be called only by functions within adc.c. Our module might look something like this:

```c
void adc_Process(void)
{
    ...
    fna();
    ...
    fnb(3);
}

...

void fna(void)
{
    ...
}

void fnb(uint8_t foo)
{
    ...
}
```

At compile time, the compiler will treat fna() and fnb() like any other function. Furthermore, the linker may link them 'miles' away from adc_Process(). However, if you declare fna() and fnb() as 'static', then something magical happens. The code would now look like this:

```c
static void fna(void);
static void fnb(uint8_t foo);

void adc_Process(void)
{
    ...
    fna();
    ...
    fnb(3);
}

...

static void fna(void)
{
    ...
}

static void fnb(uint8_t foo)
{
    ...
}
```

In this case, the compiler will know all the possible callers of fna() and fnb(). With this information to hand, the compiler / linker will potentially do all of the following:

- Inline the functions, thus avoiding the overhead of a function call.
- Locate the static functions close to the callers such that a 'short' call or jump may be performed rather than a 'long' call or jump.
- Look at the registers used by the local functions and thus only stack the required scratch registers, rather than stacking all of the registers required by the compiler's calling convention.

Together these can add up to a significant reduction in code size and a commensurate increase in execution speed. Thus making all non-public functions static not only makes for better code quality, it also leads to more compact and faster code. A true win-win situation! Thus if you are not already doing this religiously, I suggest you go through your code and do it now. I guarantee you'll be very pleased with the results.

A Request …

If I'm to believe the statistics for this blog, it appears that I'm gradually building a decent-sized readership. Furthermore, many of you choose to come back and read the latest postings, which tells me that I'm doing something of value. Anyway, if this describes you, I'd be obliged if you'd encourage your colleagues to read the blog and also to post comments / questions. Why do I ask this? Well, an increased readership has several benefits, for both me and you the readers.

- I believe quite passionately in improving the quality of embedded systems. Those of us that are working in this field collectively have an enormous impact on the world. Thus anything that helps improve the quality of embedded systems in turn helps improve the world. (I appreciate that this is a little melodramatic. It is, however, true.)
- Writing about something is the best way I know to find out if I truly understand it. Thus, the very act of publishing a blog causes me to improve my skills and knowledge.
- Some of the (too few) comments I get are quite profound and often instructive. Thus I also learn in this way.
- The bigger the readership I have, the more inclined I am to publish.

If I'm publishing things of value, then presumably the readers benefit. Anyway, if you concur, then please encourage your colleagues. If you don't, then that's OK as well. Thanks for reading.
If I’m publishing things of value, then presumably the readers benefit. Anyway, if you concur, then please encourage your colleagues. If you don’t, then that’s OK as well. Thanks for reading. I believe that using the same keyword ‘static’ for two different concepts (namely: storage class specifier and function scope) is the main reason for some many people ignoring the benefits of making functions local.Personaly, in my projects I use to#define LOCAL static, so make clear with LOCAL void foo (void); that I want foo be a local function for its module.Moreover, this helps when I need to find some static variable: when I fire the search window in my editor, it does not stop in “false positives”, i.e., static-defined functions.(Nigel, please delete my previous comment) That is a useful workaround. I trapped myself in the same way in the past. And I openly admit that I simply never knew that static functions in C are module local until recently. By that little define one adds quite a bit of readability to the code, yet complies to the standard. Something valuable learned here! Matthias Actually, I like to think of static as always having the same meaning: “make that scope-limited, but with lifespan as long as the whole program”. This definition applies to all the three common uses of static (local var in a function, global var in a module, function in a module). I understand though, that functions already have all max lifespan, and that local vars are already scope-limited. But I think that this definition helps understand why the same keyword is used in all cases. I have very little insight in compilers, but I suspect that the parser does just that when encountering static? Wonderful tip victorhI will use it. :DAnd NigelI´m already encouraging people to read your blog and your “C Test”.I´m about to graduate on Eletrical Engineering and I don´t have any formal study on programming. My knowledge is mostly self-taught and I learn a lot with your blog. 
Well, I am software engineer and I have some formal study on programming, but I assure you that embedded programming is still unknown in many Faculties.I work with MCUs only for two years (8051 and ARM) and I find it amazingly interesting. I try to learn about “good coding” and your articles -and this blog- are really useful. I have a lot to learn.So I wanted to thank your job! Keep posting! @victorhI see both those uses of static as two cases of the same concept. For static local variables, the varaible is really a global variable – it exists for the entire life of the program, just like any other global variable. The only difference is that it’s only accessible (by name anyways) within the function it’s declared in.Similarly for static functions: they are only accessible (by name) within the module they’re defined.In the case of static functions, you can pass pointers to the functions to other parts of the program to user use it from other compilation units. I presume you can with static local variables as well, but I’ve never had the need to. If you think of static as meaning “exists for the lifetime of the program, but only visible within the enclosing scope”, then”static” means the same thing everywhere. Ah, I read now your reply, which makes mine a pale copy. I’ve never tried to pass the address of a static function to a function outside the module. I suspect that you’d have to engage in some casting in order to do it. Notwithstanding this, doing so would IMHO be incredibly dangerous and foolhardy. However, I take your point that static in of itself does not provide rock solid protection. “static” functions and data (whether the data is file-scope static or function-scope static) are the same as any other global functions or data, but with limited visibility. Their lifetime scope is global – they exist from just before main() starts and live until just after main() ends. But their name scope is limited to the enclosing scope (compilation unit or function). 
This simply means that you can’t access them by name outside that scope. But since they still exist, you are free to take their address and pass it around the rest of the program. Passing the address of a function-local static variable, for example, is perfectly valid and perfectly safe. Of course, the optimisation advantages of statics are mostly lost if you take its address and pass it outside the current compilation unit. For example, the compiler will generate the full code for a static function rather than inlining it if you’ve taken its address. And a “static const int” might be optimised away entirely by the compiler if you don’t take its address. Could you please give an example where passing the address of a function static variable would be a good thing? In C++ you use this to realize Singleton Pattern. I’ve seen it used to do object-oriented C code. You have some way to create an object with some standard, known interface. The object contains a vtable – just a structure containing pointers to functions with known signatures. The function that created the object fills in the vtable (or simply points it to a static copy somewhere, or similar). The actual functions pointed to are often made static, to avoid cluttering the namespace. @ Uhmmmm.Actually static local variables are quite useful. Because they maintain state between calls, they can be used to create state variables in state machines for example. Or maintain timer counts, etc.But they are local, so they use the smallest applicable scope, which is good practice, for both optimization as well as just good style.The only negative I've seen is with debugging. Some compilers put them in a different segment and then forget to make that segment available to the debugger. But that was a few years ago, so probably not an issue anymore. Note that this is dependent on the compiler. LLVM, for example, is capable of whole-program optimization, and can inline method calls across translation units. 
Rick, are you advocating not doing this if your compiler happens to perform global program optimization?

At my current job we pass static function pointers around extensively, to other modules as well. This is particularly useful for timer expiry routines and other callback functions which should not have a public interface. As for the local call / long call, I haven't even thought of the potential problems with them. However, it should not be an issue unless there are different assembler return instructions for each.

Keeping things static (local to a module) is just plain good practice. It makes the job of the linker easier (there is basically a whole bunch of stuff it never even has to go near). It avoids namespace pollution (which is REALLY important in compilers that don't support namespaces in their own right). [Note here that an exported naming convention helps enormously. More below.] It avoids name clashes (I once spent WEEKS tracking down a weird defect when some of my code was linked with some vendor code: we both used the same name, so the linker made them the same piece of storage. More important for module-local variables.)

Once you do some embedded Ada programming and realise that some important principles are forced on you, it's easy then to use those in C as well:

- Stuff used in a unit should be local only to that unit (and not visible by accident or design ANYWHERE ELSE). [This does not apply to function pointers.]
- Stuff exported by a unit should always be explicitly exported, by declaration in a header file, and by explicit action in the body.
- Header files are only for exports. If you have a variable, define, constant, enum, or typedef used only in a unit's "C" file, then put it in the C file, never in the header.

And use a naming convention. The file "fred.h" should prefix EVERY exported thing from the fred unit with "fred_".
(This is similar to the idea of namespaces, object references, and, in Ada, eliminating the "with" statement so that you must reference things from another unit using the dotted notation.) This approach makes it REALLY easy to see where an identifier came from, at a glance, without needing some clever GUI/IDE to track identifier names and their originating units.

Another dirty trick I use is to define:

```c
#define exported extern
#define export
```

Then in header files, I write:

```c
exported UINT8 fred_do_calc(stuff);
```

Here, "exported" is like a declaration: "this is a thing that is being exported", so it's visible to the outside world. And in the C file that corresponds, the function would appear as:

```c
export UINT8 fred_do_calc(stuff)
{
    more stuff
}
```

And here, "export" is like a directive: "EXPORT THIS FUNCTION!" I find this greatly enhances the readability and understandability of the code. I also lump ALL module-local stuff together under a big comment block "LOCAL FUNCTIONS", and all exported stuff together under a comment block "EXPORTED FUNCTIONS". Imposing this discipline on the source code tends to force a more logical way of thinking about what's exposed and what's not. Using a standard empty header and body file template with all these blocks in makes it a no-brainer when writing a new code unit.

@todd: "But that was a few years ago, so probably not an issue anymore."
Yep, still an issue with Code Composer 3.3 at least.

Ashleigh writes: "The file "fred.h" should prefix EVERY exported thing from the fred unit with "fred_"." That's an interesting approach. Our style conventions do the opposite. Every function or variable that is, can be, or should be "static" is prefixed with a module-specific string (e.g. CPU_write_date()), while the exposed user interface is all done with mixed case and no prefixes (e.g. SetDate()). The mixed case helps by ensuring that the exposed functions aren't overlapping with library functions.
Leaving off the prefixes makes the calling code (at least to those of us brought up this way) more readable, because it isn't so long-winded. The prefixes group the functions together in various debugging tools so that the relevant pieces are easier to find. In a way, though, the Ada/C debate always reminds me of the VHDL/Verilog debate. Our company chose Verilog after reading about a challenge where about 20% of the Verilog coders got a working program, while 100% of the VHDL coders hadn't finished writing when their time ran out!

In C++, how about the difference between using static variables within a module and declaring the variables as private within the class? Which is preferred? I can see how static to a function would be preferred over private to a class. Often I find myself moving static module-level variables into the class or structure and declaring them as private. I guess the real difference is that by declaring them private within the class you get an instance for each instance of the class, instead of one instance shared over all instances of the class. Perhaps this is something to be aware of: a static variable within a module could be used across more than one instance of that object.

So I was wondering, is static treated the same in C++ as in C? I happened to write new software for a new product, and as I read things here, I tried to write a whole lot better code than before. So I tried defining all my module-specific functions as static, and only the few needed for global interaction as non-static. It didn't reduce my code size by a single byte, but the functions cannot be accessed outside the module they are used in, which is quite nice. I also noticed that this effectively prevents you from declaring local functions in a header file, giving you a compiler warning that the function was referenced but not defined. Coming from object-oriented programming, the use of static in a functional approach didn't make any sense (for functions, that is).
Well maybe even more so: coming from Java, you can declare a function of a class as static, which implies a lot of things, like that it doesn't change the actual object of the class, and you don't even need an object to call that method. If you come from this context, why would you declare a function as static? So thank you for pointing this out.

My recollection is that C++ does differ from C in places in its use of static. However, I haven't programmed in C++ for a few years, so I will let more knowledgeable folks comment on this. Where declaring local functions as static will normally reduce your code size is if you also declare them to be inline. Typically in a C program, the compiler will not inline a non-static function, since it has to keep a copy around in case it is called from outside the module. You are dead right that static prevents you from declaring local functions in a header file. This is a good thing. I prototype my local functions in the module in which they appear.

C and C++ use "static" in the same way, at least unless you are doing something really obscure as a challenge to the language-lawyers. If a function is declared "static", and you don't take its address, then the compiler knows it cannot be called outside the module. It is therefore free to inline it, remove it, simplify it, change its calling conventions, or otherwise optimise it as it sees fit. Whether this will reduce code size or not will depend on the compiler and your optimisation settings, but you are giving the compiler the best chance to do a better job. With modern compilers, you should not have to explicitly declare a static function as "inline" unless you believe the compiler will do a poor job on its own (compiler heuristics are normally good, but not perfect). In those cases, you may need something additional such as __attribute__((always_inline)) to force the behaviour you want. Of course, sometimes adding "inline" makes your code clearer, in which case it's a good thing.
But otherwise let the compiler make the decision. There are occasions when you want to use "static" in header files. But remember that the data or functions are then independent for each module that includes the header. Thus the main use for "static" in headers is as a modern and type-safe (at least, as type-safe as it gets in C/C++) replacement for pre-processor defines. For example, you can replace:

```c
#define magicNumber 100
#define magicFormula(x) ((x) * (x) + magicNumber)
```

with

```c
static const int magicNumber = 100;
static inline int magicFormula(int x) { return x * x + magicNumber; }
```

C++ retains the C semantics of 'static' and adds an additional use for it in support of object-oriented programming: within a class/struct, a static member of the class/struct has a single global instance, can be referred to without dereferencing an object of the class/struct's type, and cannot implicitly access members of any such object. A static class/struct method (function) does not have a 'this' pointer, as it cannot be invoked on an object of the class/struct.

I think OO programming is not widely used within embedded programming, so I'll clarify: Within C++, a 'class' is basically the same thing as a 'struct'. The ONLY difference, and I truly mean ONLY, is that in a struct, members are by default public (anyone who has a pointer to an instance of a struct has free access to its members), while in a class, members are by default private and must be declared public in order for outsiders to access them. In C++, a class/struct can "contain" functions, which in OO parlance are usually called 'methods', though it means the same thing. Instead of having bits & pieces of code scattered throughout various functions all over your codebase that operate on data in a class/struct, in OOP, you group the data and the bits & pieces of code that operate on that data together inside the class/struct.
And generally, if you don't put any functions in the class/struct, you call it a struct; otherwise you call it a class. Now when you create objects of these classes, they don't get extra copies of the code for the functions inside the class; there's really only one copy of the functions, but conceptually you can just go ahead and think of it as if they all have their own copies. But 'static' members are different: conceptually, you should (because it's correct) think of a static member as having only a single instance regardless of how many objects of that type are created. So, for example, you could have a single static bool flag to flip all objects of that type between debug & normal modes. When you set the static bool debug flag true, it's true for all objects of that type. In this sense, it's sort of like a global variable, but it's grouped within the class definition because OO programmers tend to shudder at the thought of anything other than main() being global.

In addition to static class variables, a class can have static methods. Static methods (functions) can be called directly; in a sense, they're sort of like global functions, but, ah, no, forget I said that. If you have non-static class methods, these can only be "called on" an object of that type that already exists. So, for example, if you have a class that contains a set of numbers, you might want to sort them. So you would have a member method to tell an object of that type to sort itself. If you have an object of your class numberList that you've named "myPhoneList", you could call myPhoneList.sort(), and it would sort itself. But sometimes the purpose of a method doesn't exactly apply to a specific object of that type. For example, you might want to know how many numberList objects are in existence.
For that purpose, you don't need any specific object of that type to find this out; your numberList class probably has a static member variable named instanceCount, but like all good OO programmers you made it private so nobody can see it. But you made a method so folks can find out its value without accessing it directly, and of course it's a static method because it doesn't deal with any data related to any specific numberList. Now anybody who knows that the numberList class exists can find out how many numberLists are out there floating around in memory without actually knowing where any of them are.

I have a disagreement/query with this line in the above reply: "Where declaring local functions as static will normally reduce your code size is if you also declare them to be inline." If the function is made inline, then the size of the code would increase, because everywhere that function is called there would be a code replacement rather than a call. So can you elaborate on how inline actually reduces code size?

If the overhead of calling a function is greater than the size of the function, then inlining gives a size reduction. If the overhead is less than the size of the function, then inlining may still give a reduction in code size, as a lot of optimizations are performed across a function. Thus the optimizer can now see the entire picture with an inlined function and can optimize accordingly. Obviously, if the inlined function is large and called more than once, then inlining will not save code space.
I have a question regarding the usage of static functions in the C language. In an embedded system where memory is expensive, can declaring too many static functions take up more memory space?

No. In fact, use of static functions can sometimes lead to a reduction in memory space.

Your post was really helpful. Keep posting. Though I only saw the post today, via a Google search, I'll follow it henceforth. Thanks.

Thank you very much, a very simple, powerful topic.

I do my best to make locally scoped functions static whenever I can (remember!). That being said, with the advent of LTO (link-time optimization) in compilers, it seems the usefulness (in terms of performance, anyway) of static functions becomes greatly diminished, as the compiler can figure it out anyway? From a code-inspection standpoint, it does still make the code more succinct for knowing whether a function is used elsewhere. I'm just starting to fool around with LTO optimizations, so I'm interested in what others have to say.

What do you think of using static local variables for saving stack space? I'm using an RTOS for my embedded system and it is really important to know that the stack frame does not overflow. If a task makes a function call and the call depth is large, the stack could overflow (and the application will halt). One way to minimize this risk would be to make all local variables static. Then they will not end up on the stack. What do you think of that strategy? Any pitfalls?

As you said, specifying static as a scope specifier for functions can cause the compiler to inline them.
Is this the most probable approach taken by compilers, or does the compiler do this only under some optimization options? If it is the most probable option, then it can increase the size of the code. In that case, can we tell the compiler to avoid inlining static functions? Thanks in advance.

Sure. Tell the optimizer to optimize for size and it almost certainly will not inline functions. However I don't recommend this. See

Hi Jones, I had just got the message from the compiler "Multiple function definition…" and found the solution here. I had the same function name in different modules, but I'm not sure if this occurred because I got the architecture wrong. Thank you!

Excellent tip, thanks for explaining the intricacies of the tip.
https://embeddedgurus.com/stack-overflow/2008/12/efficient-c-tips-5-make-local-functions-static/
CC-MAIN-2019-04
refinedweb
4,579
61.06
How do I work with enumerated values in Java since there is no language-level support for enums?

Created May 4, 2012 George McKinney

The best reasonable way that I know is to define a class to do it. For instance, for an enum {MALE, FEMALE}:

```java
public class Gender {
    public static final Gender MALE = new Gender("MALE");
    public static final Gender FEMALE = new Gender("FEMALE");

    // note PRIVATE constructor, so no one else can make Gender objects
    // How boring :-)
    private Gender(String type) {
        this.type = type;
    }

    private String type;

    public String toString() {
        return type;
    }
}
```

Now, a method that expects a Gender argument can reasonably expect that it is either Gender.MALE or Gender.FEMALE, and a caller can use one of them but not Gender.OTHER.
https://www.jguru.com/faq/view.jsp?EID=21354
Have you ever wondered what the purpose is of exhausting 4-5 hours daily on killer SEO strategies if the website's loading speed is poorly optimized? Probably NOTHING! When someone is paying for good internet speed, they expect a flawless surfing experience. If the text and multimedia take too much time to load, or the input controls do not respond instantly, the user will leave, leading to an increased bounce rate. Page loading speed is a crucial factor that decides a page's ranking status on the SERP. For analyzing page loading speed and checking website performance, Google has introduced the PageSpeed Insights tool. A user just needs to enter the URL, and this smart tool deeply examines every single aspect affecting the loading speed. The collective results of processing a URL with the PageSpeed Insights tool are divided into three categories with red, yellow and green colors. As is obvious, red represents the lowest score whereas green is meant for top scores. While analyzing the speed test results, you need to focus on the metrics of three major factors, i.e.:
- Lab Data
- Opportunities
- Diagnostics
These three factors further comprise various elements that you need to understand before using this tool. We will elaborate on all crucial website speed metrics and reliable solutions in the article below.
Contents
- 1 Lab Data
- 2 Opportunities
- 3 Diagnostics
- 4 Considerable Precautions While Implementing by Yourself
Lab Data
When you analyze a URL to check website speed in the PageSpeed Insights tool, it generates a performance score on the basis of different metrics. Each metric critically examines the data and gives a score labeled green, orange or red. Gain complete information about the lab data metrics from the points below.
Lab data comprises six different metrics, each with its distinctive weightage, as mentioned below:
First Contentful Paint
FCP, aka First Contentful Paint, is a performance metric that measures the period from navigation to the time the browser renders the first bit of content from the Document Object Model (DOM). The "DOMHighResTimeStamp" of the First Contentful Paint entry marks the time when any element such as text, an image or a canvas of a web page was first rendered. FCP is basically the first step in improving user experience, so pay critical attention to its optimization.
First Meaningful Paint
First Meaningful Paint is basically the time when meaningful content appears on a web page. What counts as meaningful content differs per website, based on the content it serves. For instance, a playing video is the first meaningful content for a video streaming website. Similarly, the first text or image of a blog is its "first meaningful content". If you are getting positive results on FCP but FMP is slow, the bounce rate will automatically increase, so you should work out how to load what is meaningful as early as possible.
First CPU Idle
The initial point at which a web page responds quickly to input is known as First CPU Idle. This metric is still in its beta phase, which is why its implementation is heuristic. It analyzes loading at the point when the page is minimally interactive. Reducing the size of critical resources can help in optimizing FCI.
Time to Interactive
There is a significant difference between watching content and gaining access to its features. For instance, the video streaming section of YouTube may appear instantly, but its buffering period might ruin the user experience. The time taken by a web page to become fully interactive must be as low as possible, especially for a social media or video streaming website.
If the user is continuously clicking on a button but not getting a timely response, consider it a big turn-off for the user, and you should get your developers to fix it immediately.
Max Potential First Input Delay
Max Potential FID measures the worst-possible first input delay. This metric measures, in milliseconds, the delay in responding to the user's input. For instance, once the page has loaded and you are able to see the desired content, the total response time after clicking a button is what this metric captures. If the Max Potential FID score appears red or orange, your UX needs better optimization. A response of 0-130 milliseconds is considered green, 130-250 orange, and beyond 250 red.
Speed Index
The Speed Index metric of the PSI tool examines page loading performance. The average time taken to display the visible part of a page is recorded in milliseconds. When we talk about "visible parts", this excludes the time to become interactive. In order to generate the Speed Index score out of 100, Google's Lighthouse uses the "Speedline" Node module.
Opportunities
Here is a list of opportunities that Lighthouse will flag when you submit a URL for website speed and performance tests. It identifies unused and unnecessarily added elements responsible for slowing down the speed. Scroll down to check which issues are optimizable on your web page.
Serve Images in Next-gen Formats
Heavy images on your page will obviously slow down page loading. JPEG and PNG are conventional image formats with the least compression possibilities. If your page contains images in JPEG and PNG format, convert them into JPEG 2000, JPEG XR or WebP. These image formats have superior compression properties.
In order to serve images in next-generation formats, here are two techniques you can use.
EWWW Image Optimizer WordPress Plugin
If you are working with the WordPress content management system, use EWWW Image Optimizer for high compression without compromising visual quality. After embedding this plugin, you will get an interface with:
- Basic settings
- Advanced settings
- Conversion settings
- WebP settings
It has a smart function to scan for all non-optimized images. Once they are identified, set the compression level for image formats like JPG, PNG or GIF. Moreover, you will also have the option to compress PDF files to attain a higher optimization level.
Sometimes excess plugins slow down the processing of your content management system. You can try online tools as an alternative in that case. Online-Convert is a feature-rich online conversion tool that enables you to change an image format into WebP in a few easy steps. Below are some features of this free online tool:
- Import an image from physical memory, cloud storage or a URL.
- Filters to change the color scheme from colored to B&W, monochrome, negative, etc.
- Customize DPI.
- Crop pixels from the top, bottom, right and left.
After converting, you have to upload all images manually, replacing the old ones.
Defer Offscreen Images
Images occupy space on your web pages actively as well as passively. There are offscreen and hidden images occupying several kilobytes of space. These are among the major reasons why a page loads slower than expected. All the deferrable offscreen images are identified in the opportunities section of the Lighthouse results list. You can identify images that are useless and remove them. Below are some reliable tools to do it conveniently.
Lazy Load Optimizer WordPress Plugin
WordPress supports the "Lazy Load Optimizer" plugin to speed up the page loading process.
This plugin uses the "lazysizes" library from GitHub, an SEO-friendly loader capable of processing iframes and images. It smartly identifies all visibility changes that occur through JavaScript, CSS or manual interactions by the user. This plugin contains some interesting features, such as:
- Customizable loading effects
- Change/removal of images
- Size adjustment in pixels
- Setting animation time in milliseconds
- Customizable background colors
You can configure it with any WordPress version of 4.0 or higher. It is a lightweight plugin that smartly fixes the "defer offscreen images" error.
Download the lazysizes.min.js script and embed the library in your web page with a script tag such as:
<script src="lazysizes.min.js" async></script>
Or import it as a module:
import 'lazysizes';
// import a plugin
import 'lazysizes/plugins/parent-fit/ls.parent-fit';
Defer offscreen images manually with the lazysizes JavaScript library
Those who have good control over HTML coding can defer offscreen images manually. We are simplifying your task by pointing to the dedicated JavaScript. Download the lazysizes JavaScript library and embed it in your targeted page with the script tag shown above. Then open your page in HTML view and find all img tags. Change "src" attributes to "data-src" and add a class="lazyload" attribute to the images, as below:
<!-- Use data-src. And, specify the lazyload class -->
<img data-src="image.jpg" class="lazyload" />
Once you make all the changes as per the instructions, all offscreen images will defer automatically. It is advisable to use the lazysizes library with WebP and responsive images for better results.
Reduce Server Response Times (TTFB)
Slow server response time is another factor affecting your speed performance. As per the parameters of the PageSpeed Insights tool, an audit fails if the server takes more than 600 milliseconds to respond. When a user enters the URL, the server follows a long process to return the desired page. The Lighthouse report calculates the time taken by a browser to get the first byte of data from the server.
Upgrading the server and optimizing the server's application logic can help in improving server response time. One can reduce server response time with the following techniques.
Cache Enabler – WordPress Cache Plugin
Caching is among the most effective techniques to reduce server response time because it accelerates access to HTML files. Rather than depending on dynamic HTML data produced by the server every time, you can create static HTML files. The Cache Enabler – WordPress Cache plugin creates static files and saves them on the server disk. Some of the significant features of this plugin are:
- A faster disk cache engine
- Capable of minifying both inline JavaScript and HTML
- Options for manual or automated cache clearing
- Responsive image support via srcset
This plugin supports any version of WordPress from 4.6 upward and requires PHP 5.6 or above.
Shift to a reliable VPS Hosting Plan
Virtual private server hosting is a reliable solution for reducing server response time. In shared hosting, the server resources are shared between multiple websites. More websites on a shared host mean a longer time for the server to respond. In VPS hosting plans, by contrast, a virtual private server is dedicated to every single website. Here is a list of features to consider:
- Options of 2 or 4 CPU cores
- Dedicated 2GB or 6GB RAM
- RAID 10 SSD of 40GB or 120GB
- Bandwidth options of 1000GB and 3000GB
Eliminate Render-blocking Resources
Render-blocking URLs are responsible for delaying or blocking the first paint of a web page. Lighthouse classifies three types of render-blocking URLs: HTML imports, scripts and stylesheets. In order to remove the barriers, first of all you need to identify the critical resources. The Coverage tab in Chrome DevTools will show you which URLs are critical and which are not. For the convenience of the user, green is dedicated to critical and red to non-critical resources.
After identifying them, you can remove the useless render-blocking URLs. If you are looking for effective tools to eliminate render-blocking resources, try the following options.
W3 Total Cache WordPress Plugin
In order to eliminate render-blocking resources, try the W3 Total Cache WordPress plugin. The Web Performance Optimization framework of this tool has a content delivery network integration feature to improve website loading speed. After installation of this plugin, you need to enable "Minify" mode in WordPress. It is capable of fetching render-blocking CSS and JavaScript. Some of the significant features of this plugin are:
- Ideal for optimizing mobile-friendly websites enabled with Secure Socket Layer
- Full configuration can improve speed up to 10x
- Improves web server performance
- Saves up to 80% bandwidth by compressing and minifying HTML and JavaScript
Compressor Online Tool
You will find online tools that can eliminate render-blocking resources, but many consume a lot of time because of manual operational work or because they process only one kind of script. If you are looking for a tool capable of compressing all scripts, including JavaScript, HTML and CSS altogether, go with this online option. Along with compression, it is also capable of identifying errors. Below is the process of using this tool:
- When you open the URL, there will be a blank space to add the source. Either drag and drop the file or copy and paste the code.
- Select the code type from X/HTML, CSS and JavaScript, including PHP, Smarty or ASP.
- Select your charset from the standards available in the list and click on "compress".
- With a 100% compression ratio, it will show you the total bytes that were saved.
Remove Unused CSS
CSS is a crucial requirement for a web page to look awesome. However, some parts of CSS unnecessarily occupy space and result in delayed loading. These are known as dead rules of a style sheet, occupying extra bytes on your web page.
Whenever a user accesses your web page, external style sheets are downloaded from the server, which delays the loading time. With the help of Chrome DevTools, you can identify the critical and non-critical CSS. If removing some part of the CSS is not going to affect your web page, do it.
Asset CleanUp WordPress Plugin
Before you compress the CSS, it's better to remove its useless parts. While creating a web page, developers include a set of several HTML elements such as icons, tables, typography, a search bar, galleries, etc. Generally, developers include all elements in a single "style.css" file whether they are useful or not. The Asset CleanUp WordPress plugin can help in removing unnecessary elements conveniently. Here is a list of some of the features you need to check out:
- Reduces the number of HTTP requests
- Minifies JavaScript and CSS files
- Removes meta tags, links and HTML elements within the header and footer
- Disables the XML-RPC protocol partially or completely
With its pro version, you will also get features like unloading JavaScript/CSS files, deferring CSS, moving JS files and premium support.
PurifyCSS Online in your browser
If your WordPress install is already overcrowded with a lot of plugins, try some online tools. The PurifyCSS Online tool can remove unused CSS by scanning HTML and JavaScript source code.
You can install it via npm with this standalone installation code:

npm i -D purify-css

import purify from "purify-css"
const purify = require("purify-css")

let content = ""
let css = ""
let options = {
    output: "filepath/output.css"
}
purify(content, css, options)

There are also options to install the PurifyCSS plugins on development stacks, including:

1) Grunt
grunt.loadNpmTasks('grunt-purifycss');

2) Gulp
var purify = require('gulp-purifycss');

gulp.task('css', function() {
  return gulp.src('./public/app/example.css')
    .pipe(purify(['./public/app/**/*.js', './public/**/*.html']))
    .pipe(gulp.dest('./dist/'));
});

3) Webpack
npm i -D purifycss-webpack purify-css

Visit the GitHub library of PurifyCSS to learn the usage process in detail.
Minify JavaScript
If the boot-up time of JavaScript is too high, it will negatively affect page loading speed. Excess JavaScript increases your memory cost, execution cost, network cost, and parse and compile cost. Minifying, compressing and removing unused code can help boost your page loading performance. It is also advisable to cache your code, because that reduces the number of network trips. Here are two feasible options to minify JavaScript.
Merge + Minify + Refresh WordPress Plugin
This is a multipurpose plugin meant for boosting the speed of a website by merging JS and CSS files into combined groups. After merging, the tool smartly minifies JavaScript with Google Closure. For CSS, it uses "Minify" for the same purpose. In order to use the plugin files, you need to open your Apache configuration and copy this code into it:

# Serve gzip compressed CSS files if they exist and the client accepts gzip.
RewriteCond %{HTTP:Accept-encoding} gzip
RewriteCond %{REQUEST_FILENAME}.gz -s
RewriteRule ^(.*).css $1.css.gz [QSA]

# Serve gzip compressed JS files if they exist and the client accepts gzip.
RewriteCond %{HTTP:Accept-encoding} gzip
RewriteCond %{REQUEST_FILENAME}.gz -s
RewriteRule ^(.*).js $1.js.gz [QSA]

Some significant features to expect in this tool are:
- Merges and minifies JavaScript
- Enables a web page to automatically reprocess all changes to JS and CSS by using the last-modified date
- Capable of enabling HTTP/2 server push
- You can turn off concatenation and minification
- No compatibility issues with localized scripts
Minify Online
It has a simple interface where you just need to copy and paste the scripts, select between JS and CSS, and execute the minification process. With the "Minify Online" tool, you will be able to identify and remove combined files, whitespace, strips and comments. Use this code to configure minification with your web page:

use MatthiasMullie\Minify;

$sourcePath = '/path/to/source/css/file.css';
$minifier = new Minify\CSS($sourcePath);

// we can even add another file, they'll then be
// joined in 1 output file
$sourcePath2 = '/path/to/second/source/css/file.css';
$minifier->add($sourcePath2);

// or we can just add plain CSS
$css = 'body { color: #000000; }';
$minifier->add($css);

// save minified file to disk
$minifiedPath = '/path/to/minified/css/file.css';
$minifier->minify($minifiedPath);

// or just output the content
echo $minifier->minify();

Efficiently Encode Images
Here is another smart technique for reducing the size of images. Among the entire content available on your web page, images occupy most of the space, which is why they become a major reason for slow loading. The Lighthouse service of Google examines every single piece of content to flag images with their size. After setting the compression level of every image to 85, if it finds the potential saving is more than 4 kilobytes, the image falls into the category of "optimizable". Use your GUI tools to optimize images before uploading, and reduce the use of GIFs.
Follow these effective techniques for the desired results.
Smush – Compress, Optimize and Lazy Load Images WordPress Tool
A large number of image compression plugins are available for WordPress speed optimization, but not all of them are capable of maintaining optimum quality after processing. With Smush, an image can go to a maximum compression level without compromising visual quality. It can smartly examine the entire page to figure out which images affect loading speed. Below is a list of significant features you need to know:
- It strips out data from an image that has nothing to do with the image's visual quality.
- The plugin is capable of compressing images in all directories, as well as the images of WordPress themes and plugins.
- Smush is compatible with almost every builder, library and plugin that we frequently use.
- The smart size-detection feature of this tool is capable of identifying images with incorrect sizes for convenient selection.
Kraken Online image optimization tool
Without even configuring a plugin, you can attain an optimum level of image encoding and compression with online tools. Kraken is a free tool, but some of its features are available to premium users only. Still, the features in the free version are enough to serve the purpose. Check out some features of this tool:
- It comes with multiple options for uploading the image, including browsing source files on the PC, drag and drop, uploading a zip file, or importing from the cloud storage of Box, Google Drive and Dropbox. "Page Cruncher" and "URL Paster" options are also available, but for Pro users only.
- There are three options for image optimization: expert, lossless and lossy.
- You can upload images in bulk rather than processing them one by one.
- Along with JPEG and PNG, it can also optimize the size of GIF files.
Enable Text Compression
Each byte and kilobyte contributes to slowing down the speed of your web page.
Therefore, consider all the elements, whether an image, JavaScript, CSS or just plain text. If your text is compressed, the server will respond much faster than with uncompressed text. You can enable the "Brotli" compression format on the server to compress every piece of text content. In order to compress the text content of a web page, we recommend two reliable techniques below.
Enable Gzip Compression WordPress Tool
After enabling the text compression feature, this plugin automatically reduces the storage space occupied by text material. It can be configured on the Apache HTTP server. The plugin uses the gzip algorithm to compress text. Anyone running WordPress version 3.0.1 or above can configure this plugin. As you update the text content, it will automatically execute the compression process without requiring manual intervention.
TxtWizard
All the plugins we configure with WordPress support only gzip compression. If you are looking for versatile features and want to perform all compression tasks manually, it is advisable to go with TxtWizard. Some versatile features of this tool are:
- Compression options of gzip, bzip2 and deflate
- Decompression of already compressed text
After processing, it will display the compression ratio and a comparison between the original and compressed versions.
Diagnostics
After utilizing all the opportunities, there is still some room for website speed optimization. The diagnostics part of PSI doesn't directly affect the performance score but provides additional information for improvement. Following the diagnostic techniques will help you adhere to the best web development practices.
Serve Static Assets With an Efficient Cache Policy
HTTP caching is another smart technique to speed up page loading time when someone visits a website repeatedly. When a URL is requested for the first time, the browser gets it from the network and then temporarily stores the resource.
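To illustrate how a server grants this kind of caching, here is a hedged sketch of Apache mod_expires directives (the MIME types and lifetimes are illustrative, and module availability depends on your host):

```apache
<IfModule mod_expires.c>
  ExpiresActive On
  # Let browsers keep static assets for a long time;
  # HTML itself stays fresh so content updates show up quickly.
  ExpiresByType image/webp "access plus 1 year"
  ExpiresByType text/css "access plus 1 month"
  ExpiresByType application/javascript "access plus 1 month"
</IfModule>
```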
By using HTTP caching and verifying the cached responses in Chrome DevTools, you can optimize website speed for repeat visits.
Ensure Text Remains Visible During Webfont Load
When you request a URL in the browser, the server returns results including all the elements present on that particular web page. Apart from text, video links, images, HTML and CSS, the webfont is also a crucial thing that takes some time to load. If the webfont of your page takes too much time to load, the text content will be hidden, causing the Lighthouse font-display audit to fail. In order to solve the issue of invisible text, you need to show a temporary system font by including "font-display: swap" in your "@font-face" styles. This means instructing the browser to use a system font temporarily until the webfont has loaded.
Avoid Enormous Network Payloads
The actual data and the header information are two different things transported in network packets. The header information is meant for transmitting the actual data, which is called the "payload". If your web page transmits large network payloads, the loading speed will surely decrease. In order to avoid enormous network payloads, you need to reduce the size of network requests. Deferring, optimizing and caching requests will help you improve the loading speed.
Minimize Main-thread Work
To display a web page, a browser executes all the crucial tasks, like parsing HTML and CSS and parsing and executing JavaScript, in the main thread. If the main-thread render process is long, page loading speed will slow down. Lighthouse flags pages that keep the main thread busy for more than four seconds. The total time spent on rendering tasks like script evaluation, style and layout, and script parsing is calculated in milliseconds. Removing unused code and compressing and minifying critical code will help you minimize main-thread work.
Reduce JavaScript Execution Time
JavaScript in a web page is responsible for triggering visual changes.
However, long-running and badly timed JavaScript reduces page loading performance. The JavaScript execution time audit fails if execution takes longer than 3.5 seconds, and Lighthouse even shows a warning if it takes more than 2 seconds. In order to reduce JavaScript execution time, use requestAnimationFrame and web workers.
Avoid an Excessive DOM Size
The Document Object Model (DOM) is essentially the tree of elements that makes up your web page. If the web page contains more than 1,500 nodes, it will affect loading performance. A huge DOM size means a long page-render time. Using a light theme, fewer sliders, simple web builders and fewer widgets can help reduce the size of the Document Object Model. Redesigning the entire page from scratch is a more thorough option, but it will consume more time. Several online DOM-size checking and optimization tools can help you measure this conveniently.
Avoid Chaining Critical Requests
Resources such as PHP, JavaScript and CSS that load with high priority are chained in a sequence. If there is a long chain of critical requests, the browser will take more time for downloading. The "Chrome Resource Priorities and Scheduling" documentation from Google will tell you the priorities in sequence. You can reduce the chain by excluding unused elements or deferring their download. Optimizing the above-mentioned content, such as webfonts, JavaScript and images, can also help in this concern.
Keep Request Counts Low and Transfer Sizes Small
During the process of page loading, a certain number of network requests are made and bytes of data are transferred. The PageSpeed Insights tool is capable of identifying all resource types, network requests and the amount of data transferred by each resource. Generally, scripts, images and third-party resources are responsible for high request counts and large transfer sizes. A budget.json file can help you set limits on data and request counts.
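As an illustrative sketch, a Lighthouse performance budget file might look like the following (the path, resource types and limits are placeholder values; consult the current Lighthouse "LightWallet" documentation for the exact schema):

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "image", "budget": 300 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

Sizes are expressed in kilobytes; when a page exceeds a budget, Lighthouse reports the overage alongside the metrics above.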
Considerable Precautions While Implementing by Yourself
1. Version incompatibility
While embedding a metric optimization plugin in WordPress, you may need the help of short scripts. Version incompatibility issues may occur with both WordPress and its plugins. For instance, it may be that a code snippet you are implementing supports the plugin but not the current version of WordPress. If you just copy and paste a piece of script without knowing the version compatibility, the website may partially or completely stop working.
2. Image quality compromise
A large number of free online tools are available to compress images and change their formats. Online applications or plugins may help you change conventional image formats into JPEG 2000, JPEG XR and WebP, but they take no responsibility for maintaining graphic quality. Also, the compression process may reduce image quality to a very poor level. In many cases, DIY optimization techniques result in poor image quality, which in turn badly affects the user experience. Therefore, it is advisable to find a professional with adequate knowledge of compression quality parameters.
3. Errors while minifying JS and CSS
Minifying JS and CSS helps a lot in reducing loading time on the back end. However, the tools you use to minify may not be reliable. Minification done by automated tools may not execute as expected, and can even bring the whole site down. Also, you can't apply minification everywhere. For example, a minified JavaScript Datepicker may malfunction in your online forms. If that particular form is frequently used by end users, the malfunction can hurt even more than a complete site outage.
4. Script Clash
A clash between libraries like jQuery and prototype.js creates serious consequences. Implementing scripts without proper knowledge is a major reason behind a downed website.
In order to avoid conflicts between different libraries, you need the help of a professional who knows how to test jQuery in noConflict mode before using it on a live page. By now you should be fully aware of the significance of page speed optimization for improving user experience. It is clear that every single byte of script, multimedia and text content matters a lot for improving loading speed and making a web page fully interactive. However, some optimization activities are tricky to perform without prior experience. If anything goes wrong in the middle of your DIY experiments, it may shut down the entire website. In such situations, only a professional web developer can help in identifying the fault and rectifying it with the best possible practices.
https://www.webspero.com/blog/learn-website-speed-optimization-using-pagespeed-insights-tool/
- struct InversionList(SP = GcPolicy);
  InversionList of any type.
- pure this(Range)(Range intervals) if (isForwardRange!Range && isIntegralPair!(ElementType!Range));
  Construct a set from a forward range of code point intervals.
- this()(uint[] intervals...);
  Construct a set from plain values of code point intervals.
  Example:
    import std.algorithm, std.typecons;
    auto set = CodepointSet('A', 'D'+1, 'a', 'd'+1);
    set.byInterval.equal([tuple('A', 'E'), tuple('a', 'e')]);
- const bool opIndex(uint val);
  Tests the presence of code point val in this set.
- @property auto byCodepoint();
  A range that spans each code point in this set.
  Example:
    import std.algorithm;
    auto set = unicode.ASCII;
    set.byCodepoint.equal(iota(0, 0x80));
- void toString(scope void delegate(const(char)[]) sink);
- Generated code for a function taking a single dchar argument ch, for example:
    {
        if (ch < 45) {
            if (ch == 10 || ch == 11) return true;
            return false;
        }
        else if (ch < 65) return true;
        else {
            if (ch < 100) return false;
            if (ch < 200) return true;
            return false;
        }
    }
- const @property bool empty();
  True if this set doesn't contain any code points.
- bool skip(Range)(ref Range inp) if (isRandomAccessRange!Range && is(ElementType!Range : char));
  ditto
- bool test(Range)(ref Range inp) if (isRandomAccessRange!Range && is(ElementType!Range : char));
  ditto
- template isUtfMatcher(M, C)
  Test if M is a UTF Matcher for ranges of Char.
- @trusted auto utfMatcher(Char, Set)(Set set) if (isCodepointSet!Set);
  Constructs a matcher object to classify code points from the set for an encoding that has Char as code unit. See MatcherConcept for an API outline.
- auto toTrie(size_t level, Set)(Set set) if (isCodepointSet!Set);
  Convenience function to construct optimal configurations for a packed Trie from any set.
- auto opCall(C)(in C[] name) if (is(C : dchar));
- Returns the length of the grapheme cluster starting at index. Both the resulting length and the index are measured in code units (ASCII, as usual, is 1 code unit, 1 code point, etc.). inp must be an L-value.
- auto byGrapheme(Range)(Range range) if (isInputRange!Range && is(Unqual!(ElementType!Range) == dchar));
  Iterate a string by grapheme. Useful for doing string manipulation that needs to be aware of graphemes.
- Grapheme operations:
  @system auto opSlice(size_t a, size_t b);
  Append all characters from the input range inp to this Grapheme (for input ranges with is(ElementType!Input : dchar)).
- @property bool valid()();
  True if this object contains a valid extended grapheme cluster. Decoding primitives of this module always return a valid Grapheme.
- String comparison, taking (S1 str1, S2 str2), where
  if (isForwardRange!S1 && is(Unqual!(ElementType!S1) == dchar) && isForwardRange!S2 && is(Unqual!(ElementType!S2) == dchar));
- Returns a string normalized to the chosen form; Form C is used by default. For more information on normalization forms see the normalization section.
  Note: In cases where the string in question is already normalized, it is returned unmodified and no memory allocation happens.
- Checks whether a character is always allowed (Quick_Check=YES) in normalization form norm.
- pure @trusted S toLower(S)(S s) if (isSomeString!S);
  Returns a string which is identical to s except that all of its characters are converted to lowercase (by performing Unicode lowercase mapping). If none of s characters were affected, then s itself is returned.
- pure nothrow @safe dchar toUpper(dchar c);
  If c is a Unicode lowercase character, then its uppercase equivalent is returned. Otherwise c is returned.
- pure @trusted S toUpper(S)(S s) if (isSomeString!S);
  Returns a string which is identical to s except that all of its characters are converted to uppercase (by performing Unicode uppercase mapping). If none of s characters were affected, then s itself is returned.
- pure nothrow @safe bool isAlpha(dchar c);
- pure nothrow @safe bool isMark(dchar c);
  Returns whether c is a Unicode mark (general Unicode category: Mn, Me, Mc).
- pure nothrow @safe bool isNumber(dchar c);
- pure nothrow @safe bool isPunctuation(dchar c);
- pure nothrow @safe bool isSymbol(dchar c);
- pure nothrow @safe bool isSpace(dchar c);
- pure nothrow @safe bool isGraphical(dchar c);
- pure nothrow @safe bool isControl(dchar c);
- pure nothrow @safe bool isFormat(dchar c);
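The classification and case-mapping functions listed above can be exercised with a short program. The character choices below are only illustrative:

```d
import std.uni;

void main()
{
    // Unicode-aware classification (contrast with std.ascii).
    assert(isAlpha('λ'));        // Greek letter: alphabetic
    assert(isNumber('4'));       // decimal digit: numeric
    assert(!isAlpha('!'));       // punctuation is not alphabetic

    // Single code point case mapping.
    assert(toLower('A') == 'a');
    assert(toUpper('d') == 'D');

    // Whole-string case mapping returns a new string
    // (or the original if nothing changed).
    assert(toUpper("hello") == "HELLO");
    assert(toLower("HELLO") == "hello");
}
```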
http://dlang.org/phobos/std_uni.html
CC-MAIN-2015-11
refinedweb
774
51.34
First C++ Program

The best way to learn C++ is to start coding right away. So here is our very first program in C++.

#include <iostream>
using namespace std;

int main()
{
    cout << "Hello World";
    return 0;
}

#include <iostream>

#include: This is a preprocessor directive. It is not part of our program; it is an instruction to the compiler. It tells the C++ compiler to include the contents of a file, in this case the system header iostream. The compiler knows that it is a system header, and therefore looks for it in a special place. The features of the preprocessor will be discussed later; for the time being, take this line on faith. You have to write this line. The sign # is known as HASH and is also called SHARP.

<iostream>: This is the name of the library definition file for all input/output streams. Your program will almost certainly want to send text to the screen and read input from the keyboard, and iostream declares the code that does that work for you. (Very old compilers used the pre-standard name iostream.h; standard C++ uses iostream and places names such as cout in the namespace std, which is why the program also says using namespace std;.)

main(): The name main is special, in that main is the function which is run when your program is used. A C++ program is made up of a large number of functions. Each of these is given a name by the programmer, and they refer to each other as the program runs. C++ regards the name "main" as a special case and will run this function first. If you forget to have a main function, or mistype the name, the compiler will give you an error. Notice that there are parentheses ("( )", normal brackets) with main. Here the parentheses contain nothing, but parameters may be written inside them. Standard C++ also requires main to return int; the final return 0; reports success to the operating system.

{ }: Next, there are curly brackets, also called braces ("{ }"). For every open brace there must be a matching close. Braces allow you to group together pieces of a program; the body of main is enclosed in braces. Braces are very important in C++; they enclose the blocks of the program.

cout << "Hello World"

cout: This is known as the standard output stream in C++. A stream is a complicated thing; you will learn about it later.
Think of a stream as a door. Data is transferred through the stream; cout takes data from the computer and sends it to the output, which for the moment is the screen of the monitor. Hence we use cout for output.

<<: The sign << indicates the direction of the data. Here it points towards cout, and the function of cout is to show data on the screen.

"Hello World": The text between the double quotes (" ") is known as a character string. In C++ programming, character strings are written in double quotes. Whatever is written after << and within the quotation marks is directed to cout, and cout displays it on the screen.

;: There is a semicolon (;) at the end of the above statement. This is very important. All C++ statements end with a semicolon (;). A missing semicolon (;) at the end of a statement is a syntax error, and the compiler will report an error during compilation. A line containing only a semicolon (;) is called a null statement; it does nothing. Extra semicolons may be put at the end, but they are useless and aimless. Do not put a semicolon (;) in the wrong place; it may cause a problem during the execution of the program or cause a logical error.
https://tutorialstown.com/first-cpp-program/
CC-MAIN-2018-17
refinedweb
572
84.37
Advanced Namespace Tools blog

12 December 2016

Now that I have caught up to the current 9front version, I need to prioritize the next things I want to do.

Rationalize the build system

The ANTS build process is controlled by a giant-ball-of-mud shell script which builds and installs many different versions of the kernel and tools. Bell Labs and 9front are both supported, as well as special purpose configurations for vultr hosting. There is a lot of special purpose namespace manipulation, inconsistent use of temporary ramdisks, the build script calling itself recursively, and large chunks of copy-pasted code within it. It is probably time to split the Bell Labs and 9front versions, and separate the purposes of compiling and configuring/installing. Careful thought needs to be given to how to handle 9front versioning. Perhaps improved engineering (using patch files rather than patched files) can improve the end-user experience.

Overhaul the documentation and website

Some of the ANTS documentation goes back to the earlier evolutionary stages of the "rootlessboot" kernel, and the question of what applies to Bell Labs and what applies to 9front is not well separated. A lot of the web documentation is focused on things like 2013 Qemu demonstration vm images. There is no simple linear pathway into what ANTS is and how to use it.

Improve handling of vac scores

An optional component of ANTS is fossil snapshotting and venti replication. This works fine, but there is no way to give names to given snapshots and select them that way. For a human-usable interface, you want to be able to give names to different snapshots and instantiate and enter them by name.

Decruft scripts and workflows

ANTS includes a collection of namespace manipulation scripts, some of which are old and crufty. Assessing the usability of "addwrroot" and others and making any needed improvements is another task.
http://doc.9gridchan.org/blog/161212.priorities
CC-MAIN-2017-22
refinedweb
317
58.92
I am working on a code which opens a file (infile), copies it to an output file (outfile), and then asks the user for a string, searches for that string in the outfile, and replaces every occurrence of that string with the & symbol. Here's an example:

Contents of infile:

Mary had a little lamb
Whose fleece as white as snow

//the program will copy the file to an output file and then ask the user for a string that it will search in that file.
//For example, the user wants to look for all the occurrences of 'as'.
//the program will search for all occurrences of 'as' and replace it with the '&' symbol:

Mary had a little lamb
Whose fleece & white & snow

So far, this is what I was able to do:

#include <stdio.h>
#include <string.h>

#define BUFF_SIZE 100

int main(int argc, char* argv[])
{
    char mystring[500];
    char out[1000];

    if (argc != 3) {
        fprintf(stderr, "Not enough parameters. Usage <progname> <infile> <outfile>\n");
        return -1;
    }

    FILE* inFile = fopen(argv[1], "rt");
    FILE* outFile = fopen(argv[2], "wt");

    if(!inFile) {
        fprintf(stderr, "File not found! Exiting.\n");
        return -1;
    }
    if(!outFile) {
        fprintf(stderr, "File can't be opened for writing.\n");
        return -1;
    }

    printf("Enter string to be searched:"); //asks the user to enter a string to be searched
    fgets(mystring, 500, stdin);
    if(strlen(mystring) < 2){ //checks if the user entered a string
        printf("You forgot to enter a string.");
        return -1;
    }

    while(!feof(inFile)){
        fgets(out, 1000, inFile); //copies the text in the input file
        fprintf(outFile, "%s", out); //prints the text from infile.txt to outfile.txt
    }
    return 0;
}

I was only able to recopy the contents of the input file. I'm actually stuck with the search and replace part. Can you guys give me some tips? I was thinking of using the strstr function, but I'm still a bit lost. Also, can you give some example for this one? Thanks
https://www.daniweb.com/programming/software-development/threads/144612/file-handling-how-to-search-and-replace-strings-in-file-using-user-input
CC-MAIN-2017-43
refinedweb
329
73.27
XPCOM XPCOM is a cross platform component object model, similar to Microsoft COM. Taken from the XPCOM page. Firefox can be seen as composed of two layers. The largest of the two is a compiled platform, mostly written in C++. On top of it lies the chrome, mostly written in XML, Javascript and CSS. In fact, you can separate the two. We often mention other "Mozilla based applications". Well, those are applications that, simply put, take the underlying platform with perhaps a few changes and additions, and then write their own chrome layer. This lower layer is called XULRunner, and it is a very powerful platform, providing a very robust development base for web-enabled, cross-platform applications. The fact that it allows to easily create OS-independent applications is a big selling point for XULRunner. XPCOM is the way in which the two layers (XULRunner and chrome) communicate. Most of the objects and functions in the lower layers are hidden from the chrome; those that need to be publicized are exposed through XPCOM components and interfaces. You can think of XPCOM as a reference to all the capabilities available on the lower layers of Firefox. Using XPCOM components is relatively simple, as you've seen in previous examples. this.obsService = Cc["@mozilla.org/observer-service;1"].getService(Ci.nsIObserverService); The Cc object (Components.classes) is an index to static objects and class definitions available through XPCOM. The string between the brackets is just an identifier, in this case corresponding to the Observer service. You'll usually know what string to use by reading examples and documentation. There is no comprehensive list of these (that we know of), and that's understandable since it would be a very long list, and it can be extended by add-ons. 
If you want to see the list in your current Firefox installation, just run the following code in the Error Console: var str = ""; for (var i in Components.classes) { str += i + "\n" }; str A run on Firefox 3.6.2 with a few extensions installed yields 876 strings. That's quite a lot. Luckily, you'll only need to know a handful of those for extension development. The @mozilla.org/ prefix is just a way to keep things namespaced. We would use something like @xulschool.com/ to make our own components. Components are either services (static objects) or instances of classes, just like the objects we handle in JS. The method you call on Cc["some-string"] should either be getService or createInstance, depending on what you're asking for. In most cases it is very clear which one to call, but in case of doubt, look for documentation on it. Those two methods always receive the interface identifier as an argument. Similarly to Cc, Ci (Components.interfaces) is an index of available interfaces. A modified version of the last code snippet produces an even longer list of available interfaces. Just like in component identifiers, the nsI prefix is just a way of keeping things in order. The NS stands for Netscape, Mozilla's predecessor. The "I" stands for interface. Our interfaces should begin with something like xsIHello. An interface is just a definition of a set of attributes and methods that an object implementing it should have. XPCOM components can implement multiple interfaces, and they often do. Let's look at the Preference service as an example of this. We'll look at its documentation in a very old XUL site called XUL Planet. All of its documentation was planned to be migrated to MDC, but it looks like it was never finished and XUL Planet was discontinued. Their XPCOM documentation is better in terms of seeing the relationships between components and interfaces, so we'll use that. Another useful resource is this XPCOM reference. 
This is generated from source, and it's kept relatively up to date. It shows the relationships between components and interfaces, but it's more of a source browser than a documentation reference. Stepping into the time machine, we see the Preferences Service component page. Right at the top you can see a list of the interfaces it implements, with a link to a documentation page for each one of them. Then you'll see a list of all members of this object, with some documentation about it. It is particularly important to note that, for every member in the component, you'll see in what interface this member is defined. Clicking on the link for the getBranch method takes you to the nsIPrefService documentation page, where you can see more details on the interface and the method. You can also see a list of what components implement this interface. All of this documentation is generated from the one present in the Firefox source files, so it's in general very complete and well written. It's a shame XUL Planet is no longer with us. Interfaces can be awkward to handle. If you want to call a method or use an attribute of interface X in a component, you first need to "cast" the component to interface X. This is done via the QueryInterface method that is included in all XPCOM components. this._prefService = Cc["@mozilla.org/preferences-service;1"].getService(Ci.nsIPrefBranch); this._prefValue = this._prefService.getBoolPref("somePreferenceName"); this._prefService.QueryInterface(Ci.nsIPrefBranch2); this._prefService.addObserver("somePreferenceName", this, false); this._prefService.QueryInterface(Ci.nsIPrefBranch); This is a common piece of code you'll see when initializing components or JSM that rely on preferences. We use the Preferences Service to get and set preference values, such as the preference value we're getting on the fourth line of code. These methods are in the nsIPrefBranch interface. 
The getService and createInstance methods allow you to get the component already set to an interface. In many cases you only need to use one interface, and you won't have to worry about QueryInterface. But in this case we need to change the interface to nsIPrefBranch2, which is the one that includes the method that adds a preference observer. Then we change it back, because after that we only need to get and set preferences, and those methods are in nsIPrefBranch.

Passing parameters

Passing parameters to XPCOM methods is no different from other JS objects, with some exceptions. In general, you can rely on JavaScript's ability to transform values to the correct type, but it's usually best to pass the right type in the first place. This section is a quick guide on how to read XPCOM documentation, which basically amounts to understanding the syntax of XPIDL, the language used to specify XPCOM interfaces. At MDC, you'll see stuff like this:

void setCharPref(in string aPrefName, in string aValue);

One of the most important details to notice is that both parameters have the in keyword. This specifies that these are input parameters, values that the method will use to perform its actions. When is a parameter not an in parameter? In some methods the out keyword is used for parameters that are return values in reality. This is done for certain value types that are not valid as return values in IDL, such as typed arrays.

void getChildList(in string aStartingAt, out unsigned long aCount, [array, size_is(aCount), retval] out string aChildArray);

This method returns an array of strings. The first parameter is an input that tells the method where to start looking. The second one will hold the length of the return array, and the third parameter will hold the array itself. Note the metadata included in the square brackets, indicating that the parameter is an array, and that its size is determined by the aCount parameter.
Here's one way to invoke this method:

let childArrayObj = new Object();
let childArray;

this._prefService.getChildList("", {}, childArrayObj);
// .value holds the actual array.
childArray = childArrayObj.value;

The general rule for out parameters is that you can pass an empty object, and then you can get the result by accessing the value attribute in this object after the method call. The method will set value for you. Also, since JS arrays have the length attribute to get their length, there's no need for the second parameter to be used, so we just pass it an empty object that we won't use. The second parameter is only necessary for callers from within C++ code that use pointers instead of high-level arrays.

Some commonly used XPCOM methods require other XPCOM types as parameters. The addObserver method in nsIPrefBranch2 is an example of this.

void addObserver(in string aDomain, in nsIObserver aObserver, in boolean aHoldWeak);

Luckily, you don't have to do anything special if you want to register your JS object as a preference observer. The nsIObserver interface has a single method observe, so all you need to do is have an observe method in your object and you'll be OK.

XULSchool.PrefObserver = {
  init: function() {
    this._prefService = Cc["@mozilla.org/preferences-service;1"].getService(Ci.nsIPrefBranch2);
    // pass 'this' as if it implemented nsIObserver.
    this._prefService.addObserver("extensions.xulschoolhello.somePref", this, false);
  },

  observe : function(aSubject, aTopic, aData) {
    // do stuff here.
  }
};

Finally, here's a table summarizing the types you will most likely encounter in XPCOM interfaces, and how to handle them:

There are more details about XPIDL in the XPIDL Syntax definition.

Creating Your Own Components

JavaScript XPCOM Components

As we've said before, we recommend using JSM whenever you can. Yet there are some cases where you don't have a choice and you have to create XPCOM components to add a specific feature.
In these cases you can choose between compiled XPCOM components, written in C++, or JS XPCOM components. You should favor the latter; they are much less complicated to make and maintain. Most of the time you'll need 2 source files for a JS XPCOM component: the IDL interface file, and the implementation JS file. In your final extension XPI you'll need to include the JS implementation file, and the XPT file, which is a compiled version of your IDL file. You won't need the IDL or XPT files if your components only use pre-existing Firefox interfaces. In this case you may also find it easier to implement your component using JSM and the XPCOMUtils module.

Download this version of the Hello World project with XPCOM to see how XPCOM files are structured in the project and built. (Your build will probably break, we'll cover this later on.) In the components directory, the file xsIHelloCounter.idl has the following contents:

#include "nsISupports.idl"

/**
 * Counter for the Hello World extension. Keeps track of how many times the
 * hello world message has been shown.
 */
[scriptable, uuid(BD46F689-6C1D-47D0-BC07-BB52B546B8B5)]
interface xsIHelloCounter : nsISupports
{
  /* The maximum allowed count. */
  const short MAX_COUNT = 100;

  /* The current count. */
  readonly attribute short count;

  /**
   * Increments the display count and returns the new count.
   * @return the incremented count.
   */
  short increment();
};

The bits about nsISupports are common to most XPCOM interface definitions. nsISupports is the base interface for all interfaces, so it should always be included, except for cases where your interface extends another interface. In those cases you just need to replace nsISupports with the interface you're extending. You can also extend from multiple interfaces, by including a comma-separated list of interfaces instead of only one.

[scriptable, uuid(BD46F689-6C1D-47D0-BC07-BB52B546B8B5)]

The scriptable qualifier says that this component can be accessed from JS code.
This can also be specified on a per-method basis, which is something you'll see in some of the interfaces in Firefox, but it's not likely you'll have to do it in your own components. The second part defines a UUID for the interface. You must generate a new one for each interface, and you should change it every time the interface changes. In this case you're forced to use a UUID; the email address format used for extension ids won't work.

We included a constant, an attribute and a method to display examples of the 3, but this is clearly an overly elaborate way to keep a simple counter. You can define numeric and boolean constants in IDL files, but not string constants. This is a known limitation of XPIDL, and a simple workaround is to define a readonly attribute instead. This means you have to define a getter in the implementation file, though. You can access constants through a reference of the component, or directly from the interface:

// these are equivalent.
max = Ci.xsIHelloCounter.MAX_COUNT;
max = counterReference.MAX_COUNT;

The implementation file, xsHelloCounter.js, is much longer. We'll analyze it piece by piece.

const Cc = Components.classes;
const Ci = Components.interfaces;
const Cr = Components.results;
const Ce = Components.Exception;

You should be familiar with this already, although there are a couple of additions, Components.results and Components.Exception. They'll be used further ahead.

const CLASS_ID = Components.ID("{37ED5D2A-E223-4386-9854-B64FD38932BF}");
const CLASS_NAME = "Hello World Counter";
const CONTRACT_ID = "@xulschool.com/counter;1";

These constants are used at the bottom, in the component registration code. They specify the details of the component, such as a unique UUID (you have to generate it too, and it must be different from the IDL UUID), a descriptive name (this isn't used anywhere that we know of), and the contract ID, which is the string you use to get a reference to the component.
The ";1" at the end of the string is supposed to indicate the version of the component, although it shouldn't change much. It can be useful if there are multiple incompatible versions of the component installed at the same time. The implementation object itself should be easy to understand. The only aspects to take into account are that methods and attributes must have the same names as their IDL counterparts, and that the QueryInterface method is implemented:

QueryInterface : function(aIID) {
  if (!aIID.equals(Ci.xsIHelloCounter) &&
      !aIID.equals(Ci.nsISupports)) {
    throw Cr.NS_ERROR_NO_INTERFACE;
  }

  return this;
}

The method is very simple: it validates that the caller is requesting a supported interface, and otherwise it throws an exception. The rest of the code looks long and complicated, but it is pretty much the same for all components, so you shouldn't worry too much about it. All you have to do to use it in other components is copy it and change some names. The purpose of this code is to register the component so that you can get references to it just like all other Firefox components. It is better read from bottom to top.

function NSGetModule(aCompMgr, aFileSpec) {
  return CounterModule;
}

This piece of code is the first one that Firefox looks for in all implementation files in the components directory. It simply returns the object that precedes it.

var CounterModule = {
  // registerSelf, unregisterSelf, getClassObject, canUnload
};

The only thing you may need to change here is when you need to use the Category Manager. The Category Manager is a service that allows you to register your component under categories that are either pre-existing or you make up. The service also allows you to get all components registered in a category and invoke methods on them. One common use for this service is registering a component as a Content Policy. With it you can detect and filter URL loads. This is covered further ahead in another section of the tutorial.
The add and delete calls to the Category Manager would have to be done in the registerSelf and unregisterSelf methods:

registerSelf : function(aCompMgr, aLocation, aLoaderStr, aType) {
  let categoryManager = Cc["@mozilla.org/categorymanager;1"].getService(Ci.nsICategoryManager);

  aCompMgr.QueryInterface(Ci.nsIComponentRegistrar);
  aCompMgr.registerFactoryLocation(
    CLASS_ID, CLASS_NAME, CONTRACT_ID, aLocation, aLoaderStr, aType);
  categoryManager.addCategoryEntry(
    "content-policy", "XULSchool Hello World", CONTRACT_ID, true, true);
},

In this case the component would need to implement nsIContentPolicy. And, finally, the factory object.

var CounterFactory = {
  /* Single instance of the component. */
  _singletonObj: null,

  createInstance: function(aOuter, aIID) {
    if (aOuter != null) {
      throw Cr.NS_ERROR_NO_AGGREGATION;
    }

    // in this case we need a unique instance of the service.
    if (!this._singletonObj) {
      this._singletonObj = MessageCounter;
    }

    return this._singletonObj.QueryInterface(aIID);
  }
};

If we wanted a class that can be instantiated, instead of a singleton service, the Factory would look like this:

var CounterFactory = {
  createInstance: function(aOuter, aIID) {
    if (aOuter != null) {
      throw Cr.NS_ERROR_NO_AGGREGATION;
    }

    return (new Counter()).QueryInterface(aIID);
  }
};

C++ XPCOM Components

You do not want to do this unless it's really necessary. There are few reasons you might need to use binary XPCOM. One of them is adding functionality to Firefox that it doesn't support natively. In that case, you would either need to implement this feature for every platform, or limit your extension's compatibility to the ones you'll support. You'll need to build a library file for each one of them: DLL for Windows, dylib for Mac (Intel and PPC), and .so for Linux and similar. We won't get into details about this because it's certainly not tutorial material.
This blog post details the XPCOM build setup, and you'll need to read the Build Documentation thoroughly to understand how this all works.

This tutorial was kindly donated to Mozilla by Appcoast.
https://developer.mozilla.org/en-US/docs/XUL_School/XPCOM_Objects?redirect=no
CC-MAIN-2017-26
refinedweb
2,877
56.35
So far we have seen how to use the basic data types and coding principles of the Erlang VM via the Elixir language. Now we will go full circle and create a working web application using the Phoenix Web Framework.

Phoenix uses the server-side MVC pattern and is in fact the top layer of a multi-layer modular system encompassing Plug (the modular specification used for routing, controllers, etc.), Ecto (a DB wrapper for MongoDB, MySQL, SQLite3, PostgreSQL, and MSSQL) and the HTTP server (Cowboy). Phoenix's structure will seem familiar if you have used Django for Python or Ruby on Rails. Both app performance and development speed were key factors in the design of Phoenix, and when combined with its real-time features, they give it powerful potential as a production-quality web-app framework.

Getting Started

Elixir is required, so please refer to the installation instructions at the beginning of this series. We will also require Hex to get Phoenix working (to install dependencies). Here's the command to install Hex (if you have Hex already installed, it will upgrade Hex to the latest version):

$ mix local.hex

If you have not yet familiarised yourself with the Elixir language, may I recommend you continue reading the first steps of this guide before going forward in this part. Note that if you wish to read a short guide, you can also refer to the Learning Elixir and Erlang Guide that is provided by the Phoenix team.

Erlang

Note: By default, this is included in an Elixir installation. To run Elixir, we need the Erlang virtual machine, because Elixir code compiles to Erlang byte code. If you're using a Debian-based system, you may need to explicitly install Erlang to get all the needed packages.

$ wget && sudo dpkg -i erlang-solutions_1.0_all.deb
$ sudo apt-get update
$ sudo apt-get install esl-erlang

Phoenix

So now that we have Elixir and Erlang taken care of, you are ready to install the Mix archive. A mix archive is just like a Zip file really, except that it contains an application as well as the compiled BEAM files and is tied to a specific version of the app. The mix archive is what we will use to generate a new, base Phoenix application from which we can build our app! Run the following in your terminal:

$ mix archive.install

If the Phoenix Mix archive won't install properly with this command, we can download the package from the Phoenix archives, save it to the filesystem, and then run: mix archive.install /path/to/local/phoenix_new.ez.

Node

We will need node.js version 5 or greater, as Phoenix will use the brunch.io package to compile static assets such as css and js, which in turn uses npm. Download Node.js from the download page. When selecting a package to download, it's important to note that Phoenix requires version 5.0.0 or greater.
A mix archive is just like a Zip file really, except that it contains an application as well as the compiled BEAM files and is tied to a specific version of the app. The mix archive is what we will use to generate a new, base Phoenix application from which we can build our app! Run the following in your terminal: $ mix archive.install If the Phoenix Mix archive won't install properly with this command, we can download the package from the Phoenix archives, save it to the filesystem, and then run: mix archive.install /path/to/local/phoenix_new.ez. Node We will need node.js version 5 or greater, as Phoenix will use the brunch.io package to compile static assets such as css and js, which in turn uses npm. Download Node.js from the download page. When selecting a package to download, it's important to note that Phoenix requires version 5.0.0 or greater. Mac OS X users can also install Node.js via homebrew. If you have any issues installing Node, refer to the official Phoenix help guide. PostgreSQL By default, Phoenix configures applications to use the relation db server PostgreSQL, but we can switch to MySQL by passing the --database mysql flag when creating a new application. Going forward, as we work with Ecto models in this guide, we will use PostgreSQL and the Postgrex adapter. So to follow along with the examples, you should install PostgreSQL. The PostgreSQL wiki has installation guides for a number of different operating systems. Note that Postgrex is a direct Phoenix dependency, and it will be automatically installed along with the rest of our dependencies as we start our app. The Default User Phoenix assumes that our PostgreSQL database will have a postgres user account with the correct permissions and a password of "postgres". If that isn't how you want to set up, please see the instructions for the ecto.create mix task to customise the credentials. You can run mix phoenix.new from any directory in order to bootstrap a Phoenix application. 
For your new project, Phoenix will accept either an absolute or relative path; assuming that the name of our application is hello_world, either of these will work fine: $ mix phoenix.new /home/me/code/hello_world $ mix phoenix.new hello_world When you are ready, run the create command and you will get similar to the following output: mix phoenix.new hello_world * creating hello_world/config/config.exs * creating hello_world/config/dev.exs * creating hello_world/config/prod.exs ... * creating hello_world/web/views/layout_view.ex * creating hello_world/web/views/page_view.ex Fetch and install dependencies? [Yn] So here Phoenix has taken care of creating all of the directory structure and files for your app. You can take a look at what it is creating by navigating directly to the files in your code editor of choice. When that's done, we see the prompt asking for dependencies to be installed. Proceed with yes: Fetch and install dependencies? [Yn] Y * running mix deps.get * running npm install && node node_modules/brunch/bin/brunch build We are all set! Run your Phoenix application: $ cd hello_world $ mix phoenix.server You can also run your app inside IEx (Interactive Elixir) as: $ iex -S mix phoenix.server Before moving on, configure your database in config/dev.exs and run: $ mix ecto.create Now that everything has downloaded, we can cd to the directory that Elixir has been populating the project files in, and create the database via mix ecto.create. $ cd hello_world $ mix ecto.create ==> connection Compiling 1 file (.ex) Generated connection app ==> fs (compile) Compiled src/sys/inotifywait.erl Compiled src/sys/fsevents.erl Compiled src/sys/inotifywait_win32.erl Compiled src/fs_event_bridge.erl Compiled src/fs_sup.erl Compiled src/fs_app.erl Compiled src/fs_server.erl Compiled src/fs.erl ... The database for HelloPhoenix.Repo has been created. Note: if this is the first time you are running this command, Phoenix may also ask to install Rebar. 
Go ahead with the installation, as Rebar is used to build Erlang packages.

Database Issues

If you see the following error:

State: Postgrex.Protocol
** (Mix) The database for HelloWorld.Repo couldn't be created: an exception was raised:
** (DBConnection.ConnectionError) tcp connect: connection refused - :econnrefused
    (db_connection) lib/db_connection/connection.ex:148: DBConnection.Connection.connect/2
    (connection) lib/connection.ex:622: Connection.enter_connect/5
    (stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3

Please ensure the PostgreSQL service is running and accessible with the user credentials provided (by default the user postgres with a password of "postgres" is used).

Start Up the Phoenix Web Server!

We can now start the server for our Elixir app! Run the following:

$ mix phoenix.server
[info] Running HelloWorld.Endpoint with Cowboy using http on port 4000
23 Nov 05:25:14 - info: compiled 5 files into 2 files, copied 3 in 1724ms

By default, Phoenix accepts requests on port 4000. Visit http://localhost:4000, and you will see the Phoenix Framework welcome page. If you can't see the page, try accessing it via http://127.0.0.1:4000 (in case localhost is not defined on your OS). Locally, we can now see requests being processed in our terminal session as our application is running in an iex session. To stop it, we hit ctrl-c twice, just as we would to stop iex normally.

$ mix phoenix.server
[info] Running HelloWorld.Endpoint with Cowboy using http on port 4000
28 Nov 15:32:33 - info: compiling
28 Nov 15:32:34 - info: compiled 6 files into 2 files, copied 3 in 5 sec
[info] GET /
[debug] Processing by HelloWorld.PageController.index/2
  Parameters: %{}
  Pipelines: [:browser]
[info] Sent 200 in 50ms

Customising Your Application

When Phoenix generates a new application for us, it builds a top-level directory structure, as we'll see in the following section.
We created a new application via the mix phoenix.new command, which generated a new application, including a directory structure like so:

├── _build
├── config
├── deps
├── lib
├── priv
├── test
├── web

For now we will be working in the web directory, which contains the following:

├── channels
│   └── user_socket.ex
├── controllers
│   └── page_controller.ex
├── models
├── static
│   ├── assets
│   │   ├── images
│   │   │   └── phoenix.png
│   │   ├── favicon.ico
│   │   └── robots.txt
│   └── vendor
├── templates
│   ├── layout
│   │   └── app.html.eex
│   └── page
│       └── index.html.eex
├── views
│   ├── error_helpers.ex
│   ├── error_view.ex
│   ├── layout_view.ex
│   └── page_view.ex
├── router.ex
├── gettext.ex
└── web.ex

To change the logo at the top of the page, we need to edit the static assets, which are kept in priv/static. The logo lives at priv/static/images/phoenix.png. Feel free to add your own graphics here; we will link it in the CSS and begin modifying the template next. By default, Phoenix compiles any static assets (for example, here in the images directory) into the production bundle. When we need a build phase for JS or CSS, we place assets in web/static, and the source files are built into their respective app.js / app.css bundles within priv/static.

Modifying the CSS

The path for your CSS is web/static/css/phoenix.css. To change the logo, look at lines 29-36.

/* Custom page header */
.header {
  border-bottom: 1px solid #e5e5e5;
}

.logo {
  width: 519px;
  height: 71px;
  display: inline-block;
  margin-bottom: 1em;
  background-image: url("/images/phoenix.png");
  background-size: 519px 71px;
}

Make your change and save the file, and the changes will be picked up automatically.

28 Nov 15:49:00 - info: copied gript.png in 67ms
28 Nov 15:49:04 - info: compiled phoenix.css and 1 cached file into app.css in 77ms
28 Nov 15:49:33 - info: compiled phoenix.css and 1 cached file into app.css in 75ms

Reload your web browser, or load up http://localhost:4000 again.
Modifying Templates

To change the contents of your template, look in the files in web/templates/layout and web/templates/page. You can start modifying the files to see changes live in your app. The standard Phoenix templating engine uses EEx, which stands for Embedded Elixir, and all template files have the extension .eex. Templates are scoped to a view, which in turn is scoped to a controller. Phoenix creates a web/templates directory where we can put all of these. For the sake of organisation, it is best to namespace them, so if you want to create a new page, you need to create a new directory under web/templates and then create an index.html.eex file within it (e.g. web/templates/<my-new-page>/index.html.eex). Let's do that now. Create web/templates/about/index.html.eex and make it look like this:

<div class="jumbotron">
  <h2>About my app</h2>
</div>

Views

In Phoenix, the view part of the MVC design paradigm performs several important jobs. For one, views render templates. Additionally, they act as a presentation layer for raw data from the controller, acting as a middle man to prepare it for use in a template. As an example, take a hypothetical data structure which represents a user with a first_name field and a last_name field. In the template, we want to show the user's full name. The cleanest approach is to write a function that concatenates first_name and last_name, giving us a helper in the view that keeps the template code clean, concise and easily legible.

In order to render any templates for our AboutController, we need an AboutView. Note: the names are significant here—the first part of the names of the view and controller must match up. Create web/views/about_view.ex and make it look like this:

defmodule HelloWorld.AboutView do
  use HelloWorld.Web, :view
end

Routing

In order to see a new page, you will need to set up a route and a controller for your view and template.
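Before we wire up routing, here is a sketch of the full-name view helper described in the Views section above. This is illustrative only: the full_name/1 function and the user map shape are assumptions of mine, not code generated by Phoenix.

```elixir
defmodule HelloWorld.AboutView do
  use HelloWorld.Web, :view

  # Hypothetical helper: joins the two name fields so templates can
  # call <%= full_name(@user) %> instead of concatenating inline.
  def full_name(user) do
    "#{user.first_name} #{user.last_name}"
  end
end
```

With a helper like this, the template stays legible: `<%= full_name(@user) %>` would render "Jane Doe" for a user map of `%{first_name: "Jane", last_name: "Doe"}`.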
As Phoenix works on the MVC paradigm, we need to fill in all the parts. It's not much work, though. In plain English: routes map unique HTTP verb/path pairs to controller/action pairs for further execution. Phoenix automatically generates a router file for us in a new application at web/router.ex. This is where we will be working for the following section. The route for the default "Welcome to Phoenix!" page looks like this:

get "/", PageController, :index

This means that requests made by visiting http://localhost:4000 in a browser (which issues an HTTP GET request) to the application's / root path are sent to the index function in the HelloWorld.PageController module defined in web/controllers/page_controller.ex. The page we are going to build will simply say "About my app" when we point our browser to http://localhost:4000/about. You can fill in more information to suit your app in the template, so just go ahead and write in your HTML!

A New Route

For our about page, we need to define a route, so open up web/router.ex in a text editor. By default, it will contain the following; for more information on routing, refer to the official Routing Guide.

defmodule HelloWorld.Router do
  use HelloWorld.Web, :router

  pipeline :browser do
    plug :accepts, ["html"]
    plug :fetch_session
    plug :fetch_flash
    plug :protect_from_forgery
    plug :put_secure_browser_headers
  end

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/", HelloWorld do
    pipe_through :browser # Use the default browser stack

    get "/", PageController, :index
  end

  # Other scopes may use custom stacks.
  # scope "/api", HelloWorld do
  #   pipe_through :api
  # end
end

For our about section, let's add the new route to the router for a GET request to /about. It will be processed by a HelloWorld.AboutController, which we will construct in the next part.
For the GET to /about, add this line to the scope "/" block of router.ex:

get "/about", AboutController, :index

The complete block will look like so:

scope "/", HelloWorld do
  pipe_through :browser # Use the default browser stack

  get "/", PageController, :index
  get "/about", AboutController, :index
end

The Controller

We have set up the route, the view, and the template, so let's now put all the parts together so that we can view the page in the browser. Controllers are defined as Elixir modules, and actions inside a controller are Elixir functions. The purpose of actions is to gather any data and perform any tasks needed for rendering. For the /about route, we need a HelloWorld.AboutController module with an index/2 action. Create web/controllers/about_controller.ex and put the following inside:

defmodule HelloWorld.AboutController do
  use HelloWorld.Web, :controller

  def index(conn, _params) do
    render conn, "index.html"
  end
end

For more information on controllers, refer to the official Controllers guide.

Controller Structure

All controller actions take two arguments. The first is conn, a struct which holds a load of data about the request. The second is params, the request parameters. Here we are not using params, and we avoid compiler warnings by adding the leading _. The core of this action is render conn, "index.html". This tells Phoenix to find a template called index.html.eex and render it. Phoenix will look for the template in a directory named after our controller, so web/templates/about. Note: using an atom as the template name will also work here—render conn, :index—in which case the template is chosen based on the request's Accept headers, resolving to "index.html" or "index.json", for example.

Testing the New Route

Visiting http://localhost:4000/about will now render the template, controller, view and route we have defined so far!

Actions

So now we have created a page and customised the app a little.
But how do we actually do something with user input? Actions. The requests for our about page will be handled by the HelloWorld.AboutController using the show action. As we already defined the controller in the last steps, we just need to add to the code a way to capture the variable which is passed via a URL such as http://localhost:4000/about/weather. We will now modify the code to map the new URL GET request param through the controller and eventually to the template, using Elixir's pattern matching. Add the following to the module in web/controllers/about_controller.ex:

def show(conn, %{"appName" => appName}) do
  render conn, "show.html", appName: appName
end

A few points of interest here:

- We pattern match against the params passed into the show function so that the appName variable will be bound to the value from the URL.
- For our example URL (http://localhost:4000/about/weather), the appName variable would contain the value weather.
- Within the show action, we also pass a third argument to the render function: a key/value pair where the atom :appName is the key and the appName variable is passed as the value.

The full listing of web/controllers/about_controller.ex reads as so:

defmodule HelloWorld.AboutController do
  use HelloWorld.Web, :controller

  def index(conn, _params) do
    render conn, "index.html"
  end

  def show(conn, %{"appName" => appName}) do
    render conn, "show.html", appName: appName
  end
end

Embedded Elixir

Finally, to use the variable in our template, we first need to create a file for our show action. Create the file web/templates/about/show.html.eex and add the following:

<div class="jumbotron">
  <h2>About <%= @appName %></h2>
</div>

We use the special EEx <%= %> syntax for Embedded Elixir. The opening tag has an = sign, meaning that the Elixir code between the tags will be executed, and its output will replace the tag. Our app-name variable appears as @appName. In this case, this is not a module attribute; it is in fact a special bit of meta-programmed syntax which stands in for Map.get(assigns, :appName).
The result is much nicer on the eyes and much easier to work with in a template.

Defining the Route

To be able to reach the show action at a URL such as http://localhost:4000/about/weather, we need to define a route that links to the action we just defined:

scope "/", HelloWorld do
  pipe_through :browser # Use the default browser stack.

  get "/", PageController, :index
  get "/about", AboutController, :index
  get "/about/:appName", AboutController, :show
end

Now our work is complete! Try it out by visiting http://localhost:4000/about/weather.

Conclusion

You now have the fundamental knowledge to create a Phoenix app, customise it graphically, and create routes, actions, controllers and views for your app. We touched on the setup for the PostgreSQL features of Ecto, but to go into more detail on the Model part of the MVC paradigm, please continue your reading with the Ecto guide. As for user interactions and creating authentication, for example, please continue your learning with the Plug guide over at the official Phoenix documentation.
https://code.tutsplus.com/tutorials/elixir-walkthrough-part-5-phoenix-framework--cms-27669
casey chesnut
brains-N-brawn.com LLC

March 2005

Applies to:
   Microsoft Tablet PC Platform SDK
   Microsoft Windows Journal
   Microsoft Windows Journal Reader Supplemental Component
   Microsoft Office 2003: XML Reference Schemas
   Microsoft Office OneNote 2003 SP1

Summary: Shows how to use the Journal Reader Supplemental Component to convert Journal notes to XML and then convert them to Scalable Vector Graphics (SVG) for viewing on the Web or a Pocket PC. Also provides the code to import a Journal Note into OneNote. (13 printed pages)

Introduction
Using the Journal Reader Supplemental Component
Working with Journal XML
Using Journal Types in your Application
Introduction to SVG
Converting Journal XML to SVG
Converting Ink to SVG
Importing Journal Notes to OneNote
Conclusion
Biography

Introduction

One of the first applications to support ink was Windows Journal. For a while, it was the shining example of how a Tablet PC running an ink-enabled application could provide for a great user experience. Personally, it entirely replaced my usage of pen and paper for taking notes. I used it extensively in meetings, at presentations, and for brainstorming. In turn, some businesses have been using Journal as a way to replace paper forms, creating numerous Journal files. Journal usage is widespread enough that Microsoft released Windows Journal Viewer for Windows 2000 and Windows XP. The problem is that the Journal file format is a proprietary binary format. It is not possible to open and view Journal notes in your own application or to write an ink file format that Journal can read. For example, OneNote cannot import and display Journal notes, and neither can you view a Journal note on a Pocket PC. However, Microsoft recently released the Journal Reader Supplemental Component, which remedies this dilemma. Using this component, you can develop an application that converts Journal notes to an XML format.
From the Journal XML format you can either open and view Journal notes in your own application or convert them to a new format. This article shows how to use the Journal Reader Supplemental Component, how to parse the Journal XML and use it in your own application, and how to convert the Journal XML to Scalable Vector Graphics (SVG) so that you can view your notes on a Pocket PC or the Web. Finally, it provides the code to import a Journal note into OneNote.

Using the Journal Reader Supplemental Component

First, you must install the Journal Reader Supplemental Component. The installation registers a DLL that can be called from COM or a managed assembly that wraps it. In this article, I'll use the managed assembly: Microsoft.Ink.JournalReader.dll. The assembly exposes only one public method, ReadFromStream, on the JournalReader class. Contrary to the first release of the documentation, it is a static method. Its input is a stream of the Journal note and its output is a stream of XML. It does not have the complementary method to convert a Journal note XML file back to the Journal binary format. The following code example shows how to call JournalReader to convert a Journal stream to an XML document.

using Microsoft.Ink;

private XmlDocument ReadJntToXml(Stream jntStream)
{
    Stream xmlStream = JournalReader.ReadFromStream(jntStream);
    XmlDocument xmlDoc = new XmlDocument();
    xmlDoc.Load(xmlStream);
    xmlStream.Close();
    return xmlDoc;
}

Now that I have converted the Journal note to an XML format, we can do something meaningful with it.

Working with Journal XML

Fortunately, the Journal Reader Supplemental Component installation provides the XSD schema for the Journal XML format. Instead of listing the entire XSD schema here, I have included the following XML which shows a skeleton of a Journal XML file. For simplicity, I removed the XML attributes and other content to show you only the core XML elements with which we will be working.
<JournalDocument>
  <Stationery>XML</Stationery>
  <JournalPage>
    <TitleInfo>
      <Text>TEXT</Text>
      <Date>TEXT</Date>
    </TitleInfo>
    <Content>
      <Paragraph>
        <Line>
          <InkWord>
            <AlternateList>
              <Alternate>WORD</Alternate>
            </AlternateList>
            <InkObject>BASE64</InkObject>
          </InkWord>
        </Line>
      </Paragraph>
      <Drawing>
        <InkObject>BASE64</InkObject>
      </Drawing>
      <Text>RTF</Text>
      <Flag>BASE64</Flag>
      <Image>BASE64</Image>
      <GroupNode>XML</GroupNode>
    </Content>
  </JournalPage>
</JournalDocument>

The XML skeleton contains the following elements:

- JournalDocument: the root element, containing the Stationery and one or more JournalPage elements.
- Stationery: XML describing the page background, such as rule lines.
- TitleInfo: the note's title text and date.
- Content: the contents of a page—any mix of Paragraph, Drawing, Text, Flag, Image, and GroupNode elements.
- Paragraph, Line, and InkWord: the handwriting hierarchy; each InkWord carries base64-encoded ink in an InkObject element, plus an AlternateList of recognition alternates.
- Drawing: base64-encoded ink that was not recognized as handwriting.
- Text: a text box, stored as an RTF string.
- Flag and Image: base64-encoded images.
- GroupNode: a grouping element that can itself contain any of the above content types.

Using Journal Types in your Application

Now that we have some understanding of the Journal XML, we can begin using it. Instead of manually parsing the XML, I chose to deserialize the Journal XML into objects. To do this, I used the XML Schema Definition Tool (Xsd.exe) available with the .NET Framework SDK to generate C# classes from the Journal XSD schema. These generated classes hold the type information that the XmlSerializer class uses to deserialize a Journal XML file. The first step is to make two modifications to the XSD schema. From experience, I know that Xsd.exe has problems with the <xs:group/> element. The first problem is with the ContentGroup definition. I added the attributes minOccurs="0" and maxOccurs="unbounded" to all of those elements. Otherwise, the generated code would only deserialize one Drawing element or Paragraph element, instead of an array of Content objects. The second problem is in the GroupNodeType definition. It has an <xs:element/> for ScalarTransform followed by an <xs:group/> element referencing the ContentGroup. In this situation, Xsd.exe does not generate a collection to loop over. To work around this, I copied the ContentGroup definition and renamed it GroupNodeContentGroup. To this group, I added the ScalarTransform element. I then removed the <xs:element/> for ScalarTransform and changed the <xs:group/> to reference the new GroupNodeContentGroup instead of ContentGroup. Then I ran Xsd.exe to generate the classes to be used by XmlSerializer.
xsd.exe JntSchema.xsd /classes /l:CS /n:Microsoft.Ink

This generated the JntSchema.cs class that I added to my Visual Studio .NET project. With these generated classes, I am finally ready to use XmlSerializer to deserialize a Journal XML file into an object graph.

protected JournalDocumentType DeserializeJournalDocument(string fileName)
{
    XmlSerializer serializer = new XmlSerializer(typeof(JournalDocumentType));
    FileStream stream = new FileStream(fileName, FileMode.Open);
    XmlReader reader = new XmlTextReader(stream);
    serializer.UnknownNode += new XmlNodeEventHandler(serializer_UnknownNode);
    JournalDocumentType journalDoc = (JournalDocumentType) serializer.Deserialize(reader);
    reader.Close();
    stream.Close();
    return journalDoc;
}

With the Journal XML deserialized into a typed object, we can now use that data in our own Tablet PC applications. For this sample application, I created a Windows Form with four ListBox controls containing pages, contents, lines, and words, a PictureBox control to display images, a RichTextBox control to display text, a Panel control to display ink, and a fifth ListBox control to display alternates when recognizing ink. Then I bound the objects of JournalDocumentType to the form.

Note   The serialization objects generally end with the word "Type." For example, the Journal XML root element is named JournalDocument, while its serialization object is named JournalDocumentType.

Remember that in the schema, the JournalDocument element contains JournalPage elements. So I bound each of the JournalPageType objects to the first ListBox. When the user selects a JournalPageType object from the ListBox, the application binds all of the ContentType objects for that page to the second ListBox. These ContentType objects could be TextType, DrawingType, ParagraphType, ImageType, FlagType, or GroupNodeType objects. If the user selected one of the ParagraphType objects, then its LineType objects are bound to the third ListBox.
Similarly, when the user selects a LineType object, its InkWordType objects are bound to the fourth ListBox.

Figure 1. Journal Note conversion application

The previous step merely set up the data so that we can easily work with it. Next, I extended the selection events of each ListBox to render the Journal object data in the application. I did this from the bottom up, starting with the InkWordType object ListBox. When an InkWordType object is selected, its ink data is loaded into an Ink object as a byte [].

private void RenderWord(InkWordType iwt)
{
    byte [] ba = iwt.InkObject;
    Ink ink = new Ink();
    ink.Load(ba);
    RenderInk(ink, ba);
    RenderAlternates(iwt.AlternateList);
}

That Ink object is then loaded into an InkOverlay object bound to the Panel area. After refreshing the Panel, you can see the ink from the Journal file in the application. Notice how the ink from the Journal document must be scaled to display at its original size in the Panel. This is due to the ink coordinate system and must be taken into consideration when importing the Journal notes, as well as when exporting to another format.

private void RenderInk(Ink ink, byte [] baInk)
{
    inkOverlay.Enabled = false;
    Rectangle rect = ink.GetBoundingBox();
    double adjust = (53d / 50d) * 2d; //See MSDN Ink.GetBoundingBox docs.
    Point origin = new Point((int)(rect.X * adjust), (int)(rect.Y * adjust));
    Size size = new Size((int)(rect.Width * adjust), (int)(rect.Height * adjust));
    Rectangle adjRect = new Rectangle(origin, size);
    inkOverlay.Ink.AddStrokesAtRectangle(ink.Strokes, adjRect);
    ink.Dispose();
    inkOverlay.Enabled = true;
    panel1.Refresh();
}

Better yet, you can still perform recognition on that ink. Next, I had the third ListBox iteratively call to render each InkWordType object for a selected LineType object. The second ListBox is more complicated than the third because of the different types. For the ParagraphType object, the second ListBox just iteratively calls to render each LineType object as ink.
The DrawingType object contains an InkObjectType object that can be loaded into an Ink object and rendered the same way as the InkWordType object is handled in the third ListBox. The TextType object is an RTF string (and not ink), so I just display that in the RichTextBox when it is selected.

private void RenderText(TextType tt)
{
    richTextBox1.Rtf = tt.Value;
}

The FlagType and ImageType objects are both images, so you can load their byte [] into a Bitmap and render them however you see fit. This application just displays them in the PictureBox control.

private void RenderImage(ImageType it)
{
    byte [] baImage = it.Value;
    MemoryStream ms = new MemoryStream(baImage);
    Bitmap b = new Bitmap(ms);
    ms.Close();
    pictureBox1.Image = b;
}

The GroupNodeType object is a little special because it can contain all of the above ContentType objects, but you can step through its collection and render its items using the same methods as I just described. Following this approach, we can recover individual data items that were contained in our Journal notes, either as ink, images, or text. The lack of a complementary Journal Writer supplemental component is a not-so-subtle hint to move away from the Journal format, and in the next section, I will show how to do just that.

Introduction to SVG

I need to convert my Journal notes to a more flexible file format. To make the decision about which format would be most useful, I reflected on some of the constraints I have experienced with ink. One thing that bothers me is that there is no Journal Viewer for the Pocket PC. Another issue I have is that when I render ink on the Web as a raster image, I lose the vector graphic capabilities to scale and zoom. It just so happens that there is a file format that has the potential to solve both of these problems. It is called Scalable Vector Graphics (SVG). SVG is a W3C recommendation as a standard for representing two-dimensional graphics in XML.
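To make the format concrete before mapping Journal content onto it, here is a minimal, hand-written SVG document. All element values below are illustrative only; none of this comes from the article's converter output.

```xml
<!-- Minimal illustrative SVG page skeleton; coordinates and sizes are made up. -->
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     width="400" height="300">
  <g>
    <!-- a background rectangle and one rule line, as stationery might use -->
    <rect x="0" y="0" width="400" height="300" fill="white" />
    <line x1="0" y1="50" x2="400" y2="50" stroke="rgb(200,200,255)" />
  </g>
  <text x="20" y="30" font-family="Arial">hello world</text>
</svg>
```

Because an SVG file is plain XML, a viewer scales and zooms it losslessly—exactly the property we want to preserve from ink.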
It solves the first problem by allowing many SVG viewers for different platforms and devices, including the Pocket PC. Secondly, it is a vector format, so it retains the scaling and zooming capabilities of ink, and it makes perfect sense as a format for rendering ink on the Web.

Converting Journal XML to SVG

An SVG file is just XML, so at a high level, our conversion program will write out an XML document that contains the information from the JournalDocument object, just in a different format. To make things easier, the program does not convert an entire JournalDocument object to a single SVG file. Instead, in this program, when the user selects a JournalPageType object from a JournalDocumentType object, that page is converted to an SVG file. To get started, let's map the Journal content to SVG elements. The table below shows how I chose to represent each content type in SVG.

Journal element    SVG element
JournalPage        svg
GroupNode          g
Stationery         rect, line
InkWord            path
Drawing            path
Text               text
Image              image
Flag               image

This mapping covers the majority of Journal XML documents and almost gives you a one-to-one mapping from Journal content to SVG. A JournalPage element becomes an svg element, which is the root of an SVG document. The g element is for grouping, and mainly makes the SVG easier to read. The elements rect and line are self explanatory. I will explain the other SVG elements used for the conversion a bit later. Next, I created a class called JntToSvg. It contains the logic to take a JournalDocumentType object and create an SVG XmlDocument object that represents a selected JournalPageType object. Images are copied directly over. So, for example, if the Journal XML looked like the following:

<Image Left="3482" Top="6652" Width="3703" Height="3175">/9j/4A . . .</Image>

Then the SVG representation would be as follows:

<image x="3482" y="6652" width="3703" height="3175" xlink:href="data:;base64,/9j/4A . . ." />

Journal Text elements are more complicated because the text is stored as RTF.
<Text Left="9682" Top="6099" Width="3589" Height="3167">
{\rtf1\ansi\ansicpg1252\deff0\deflang1033
{\fonttbl{\f0\fnil\fcharset0 Arial;}}
{\colortbl ;\red0\green0\blue0;}
{\*\generator Msftedit 5.41.15.1507;}
\viewkind4\uc1\pard\cf1\fs24 hello world\par}</Text>

I really was not interested in writing the code to parse that, especially after seeing the Rich Text Format Specification. Doing so would definitely be out of scope for our discussion. To work around the issue and expose the text, font, and color information from that string, I loaded the RTF string into a Windows Forms RichTextBox.Rtf property. Now, the RichTextBox.Text property returns the plain text. Additionally, the SelectionFont and SelectionColor properties return the font and color information respectively. This solution does not require too much effort, but it has serious limitations. It only returns the font and color information for the first word. If the font or color changes for subsequent words, then that data is ignored. Also, it does not return hyperlinks, which could be represented in SVG as an a element. The SVG for the previous Journal text in RTF looks like this:

<text x="9682" y="6099" stroke="rgb(0,0,0)" font-family="Arial">hello world</text>

The JntToSvg class handles stationery, images, flags, and text. We still need to handle ink.

Converting Ink to SVG

Because it makes sense to render ink in SVG outside of the context of a Journal page, I broke this out to a separate class called InkToSvg. The JntToSvg class calls InkToSvg when it needs to render a Drawing or InkWord element from a Journal page. It would also make sense to use this class to write an SVG file for rendering ink in Internet Explorer. Until Internet Explorer supports SVG natively, you can use the Adobe SVG Viewer 3.0 to view SVG documents. It's what I used for testing. Let's start by looking at how Journal XML represents an ink Drawing element:

<Drawing Left="3533" Top="17699" Width="1666" Height="2614">
  <InkObject>ALACAT. .
.</InkObject>
</Drawing>

The Drawing element contains position and size information, while the InkObject element contains Base64 ink. To use the InkObject element, first call the Ink.Load() method. Then iterate across each Stroke in the Ink.Strokes collection. The DrawingAttributes property on a Stroke object contains information about color, pen shape and size, and so on. The actual points which make up the x and y coordinate path of the Stroke are in the BezierPoints property. Those x and y coordinates have to be concatenated into a long string to add to the SVG path element, which represents ink in a form like this (the coordinate data shown is representative):

<path fill="none" stroke="rgb(0,0,0)" stroke-width="53" d="M 3533 17699 C 3550 17720 . . ." />

The DrawingAttributes property also exposes the RasterOperation property. If it is set to MaskPen, then you know that it represents a transparent highlighter stroke (for example, a yellow highlighter over black text). By adding the XML attribute opacity="0.5", the path element can also represent this feature. Anyway, enough chatter, I'm assuming you want to see some screen shots of the results. Figure 2 is the original Journal note rendered in the Journal accessory. It demonstrates stationery that looks like ruled paper, handwriting, drawings, grouped ink, highlighted ink words, an embedded image, text, a hyperlink, a flag, and different thicknesses and colors of ink.

Figure 2. The original Journal note

Figure 3 shows the Journal note converted to SVG and rendered in Internet Explorer. You can see that it is almost identical to the original Journal note. Plus, you can now view it from the Web and other platforms. Though the hyperlink text transferred in the conversion, you cannot actually click it and follow it. With a little more work, you could make the link active.

Figure 3. Journal note converted to SVG

Figure 4 is the same SVG file rendered on a Pocket PC using the trial program from PocketSVG.

Figure 4. SVG note on a Pocket PC

Figure 5 shows the power of vector graphics – they allow you to zoom in to read the text on a small device.
Figure 5. SVG note on a Pocket PC with zoom

Figure 6 shows that SVG is also powerful enough to convert Journal notes that have been created with the Journal Note Printer.

Note   SVG files converted from Journal notes with the Journal Note Printer might not open on a Pocket PC due to the limited resources of the device.

Figure 6. Journal note from Journal Note Printer converted to SVG

Importing Journal Notes to OneNote

SVG took care of the problems I had with rendering ink to raster images to display in Internet Explorer, as well as being able to view my Journal notes on a Pocket PC. But I was also bothered that OneNote does not import Journal notes. Well, it just so happens that OneNote 2003 Service Pack 1 exposes a method for importing pictures, ink, and HTML into OneNote pages from an XML file. Granted, it's not as simple as it sounds, because the format in which OneNote imports XML is not the same as the Journal XML. But the Import schema for OneNote was made public with the Office 2003: XML Reference Schemas. So, all we have to do is take the Journal XML and transform it into the XML format that OneNote expects.

Note   OneNote must be installed for this conversion to work.

Using Xsd.exe once again, I generated classes for the OneNote schema. It took a number of changes to the schema so that Xsd.exe could generate useful classes with it. The modified schema is included with the code that accompanies this article. Next, I extended the application to traverse the object graph of the JournalDocumentType and populate the appropriate OneNote Import objects. Journal DrawingType and InkWordType objects become the Ink type in OneNote. Journal FlagType and ImageType objects become the Image type in OneNote. And Journal TextType becomes HTML for OneNote. Once the Import objects are populated, we can use XmlSerializer to serialize the Import object to XML. Finally, we can call OneNote to import the data. The code to do this is in the class called JntToOneNote.
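The JntToOneNote class itself ships in the accompanying download rather than in the article text. As a hedged sketch of its serialization step only: the Import type below stands in for the Xsd.exe-generated root class of the OneNote Import schema, and SerializeOneNoteImport is a hypothetical helper name of mine, not a method confirmed by the article.

```csharp
// Hypothetical sketch: serialize the populated OneNote Import object graph to XML.
// "Import" is assumed to be the Xsd.exe-generated root type for the Import schema.
using System.IO;
using System.Xml.Serialization;

public static string SerializeOneNoteImport(Import import)
{
    XmlSerializer serializer = new XmlSerializer(typeof(Import));
    StringWriter writer = new StringWriter();
    serializer.Serialize(writer, import);
    // The resulting XML string is what gets handed to OneNote SP1's import
    // mechanism (the SimpleImporter COM component) to perform the actual import.
    return writer.ToString();
}
```

The design mirrors the Journal-reading side: XmlSerializer plus schema-generated classes on the way in, and again on the way out.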
The import process creates a folder in OneNote called Journal. Each Journal note that you import is added to that folder as a separate file. Unlike SVG, the OneNote file can handle the multiple pages of a single Journal note.

To import a Journal note, all you do is click JNT-ONE in the sample application and select a Journal note. If it works, a message box will be displayed when the process is done. OneNote will then have the Journal folder with a tab named the same as the imported Journal note (with multiple pages if appropriate).

Figure 7 shows a Journal note that has been imported into OneNote. If you set OneNote to display rules, then it looks very similar to the original Journal note, although certain behavior does not transfer. Multiple pages are represented as tabs to the side of the document. All of the imported ink is initially treated as a drawing; you must select the ink and explicitly tell OneNote to treat it as text. Also, the flags transfer to OneNote as images, so they do not operate in the same way as OneNote's flags. Finally, OneNote does not parse the RTF from text elements to properly display the font style and hyperlinks. OneNote expects HTML, so this would involve converting RTF to HTML.

Figure 7. Journal note imported into OneNote

Figure 8 shows the Journal Note Printer file imported into OneNote. Notice how this OneNote file has multiple page elements.

Figure 8. Journal Note Printer file imported into OneNote

The Journal note file format has proven very useful for Tablet PC users. Now the Journal Reader Supplemental Component provides access to the contents of Journal notes so that we can migrate that data to new formats. This article has shown how to use the Journal Reader Supplemental Component, how to import the data into your own application, how to export the data to SVG for viewing on the Web or a Pocket PC, and finally how to import your Journal notes into OneNote.
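The conversion summarized above boils down to walking one XML tree and emitting another. As a rough illustration of that tree-to-tree rewrite (in Python rather than the article's C#, and with simplified element names that merely stand in for the real Journal and OneNote Import schemas):

```python
import xml.etree.ElementTree as ET

def journal_to_import(journal_xml):
    """Transform a simplified Journal-style fragment into a simplified
    OneNote-Import-style fragment. The element names are illustrative
    stand-ins; the real schemas are much richer."""
    src = ET.fromstring(journal_xml)
    out = ET.Element("Import")
    page = ET.SubElement(out, "EnsurePage")
    # Drawings (and ink words) map to Ink elements on the page.
    for drawing in src.iter("Drawing"):
        ink = ET.SubElement(page, "Ink")
        ink.set("data", drawing.findtext("InkObject", default=""))
    # Text elements map to HTML, since that is what OneNote expects.
    for text in src.iter("Text"):
        html = ET.SubElement(page, "HTML")
        html.text = text.text or ""
    return ET.tostring(out, encoding="unicode")
```

The real converter does the same traversal over the Xsd.exe-generated object graphs instead of raw elements, and serializes the result with XmlSerializer.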
casey chesnut is an independent consultant specializing in Seamless Computing (Mobility, Web Services, Speech, and Location). This includes playing with the Compact Framework, WS-*, Tablet PC, Speech SDK, MapPoint, and Artificial Intelligence. His blog and other articles can be found at.
http://msdn.microsoft.com/en-us/library/ms812502.aspx
CallKit Tutorial for iOS

Luckily, Apple introduced CallKit in iOS 10 to change all that! In this tutorial you'll get a glimpse of CallKit's power by building an app which:

- Uses system services to report incoming and outgoing calls.
- Manages a call directory to identify, or block incoming calls.

Note: CallKit features won't work in the simulator. In order to follow along with this tutorial, you'll need an iPhone with iOS 10.2 installed.

Getting Started

Download the starter project for this tutorial, and unzip it. In order to debug the project on your device, you'll need to set up code signing. Open the project file in Xcode, and select Hotline in the project navigator.

You'll start by changing the bundle identifier. With the project selected, go to the General tab, and find the Identity section. Change the bundle identifier to something unique.

Next, look for the Signing section. Select your preferred development team (in my case, it's my personal team) in the dropdown next to Team. Also make sure that "Automatically manage signing" is checked. This will allow Xcode to automatically create the provisioning profile for the app.

To test your setup, build and run the app on your iPhone. Currently the app won't do much, but you'll notice that there are already quite a few source files in the starter project. They are mostly responsible for setting up the UI, and handling user interactions, but there are two main classes which are worth a look before moving on:

- Call represents a phone call. The class exposes properties for identifying calls (such as its UUID, or handle), and also lifecycle callbacks indicating when the user starts, answers or ends a call.
- CallManager currently maintains the list of ongoing calls in the app, and has methods for adding or removing calls. You will expand this class further throughout the tutorial.

What is CallKit?

CallKit is a new framework that aims to improve the VoIP experience by allowing apps to integrate tightly with the system.
Whenever such an event occurs, CXProvider will create a call update to notify the system. What's a call update, you ask? Call updates encapsulate new, or changed call-related information. They are represented by the CXCallUpdate class, which exposes properties such as the caller's name, or whether it's an audio-only, or a video call.

In turn, whenever the system wants to notify the app of any events, it does so in the form of CXAction instances. CXAction is an abstract class, which represents telephony actions. For each action, CallKit provides a different concrete implementation of CXAction. For instance, initiating an outgoing call is represented by CXStartCallAction, while CXAnswerCallAction is used for answering an incoming call. Actions are identified by a unique UUID, and can either fail, or fulfill.

Apps can communicate with CXProvider through the CXProviderDelegate protocol, which defines methods for provider lifecycle events, and incoming actions.

CXCallController

The app will use CXCallController to let the system know about any user-initiated requests, such as a "Start call" action. This is the key difference between the CXProvider and the CXCallController: while the provider's job is to report to the system, the call controller makes requests from the system on behalf of the user.

The call controller uses transactions to make these requests. Transactions, represented by CXTransaction, contain one or more CXAction instances. The call controller sends transactions to the system, and if everything is in order, the system will respond with the appropriate action to the provider.

That sure was a lot of information, but how does this work in practice?

Incoming Calls

The diagram below shows a high-level overview of an incoming call flow:

- Whenever there's an incoming call, the app will construct a CXCallUpdate and use the provider to send it to the system.
- At this point the system will publish this as an incoming call to all of its services.
- When the user answers the call, the system will send a CXAnswerCallAction instance to the provider.
- The app can answer the call by implementing the appropriate CXProviderDelegate method.

The first step will be creating the delegate for the provider. Head back to Xcode, and with the App group highlighted in the project navigator, go to File\New\File…, and choose iOS\Source\Swift File. Set the name to ProviderDelegate, and click Create. Add the following code to the file:

import AVFoundation
import CallKit

class ProviderDelegate: NSObject {
  // 1.
  fileprivate let callManager: CallManager
  fileprivate let provider: CXProvider

  init(callManager: CallManager) {
    self.callManager = callManager
    // 2.
    provider = CXProvider(configuration: type(of: self).providerConfiguration)

    super.init()
    // 3.
    provider.setDelegate(self, queue: nil)
  }

  // 4.
  static var providerConfiguration: CXProviderConfiguration = {
    let providerConfiguration = CXProviderConfiguration(localizedName: "Hotline")

    providerConfiguration.supportsVideo = true
    providerConfiguration.maximumCallGroups = 1
    providerConfiguration.supportedHandleTypes = [.phoneNumber]

    return providerConfiguration
  }()
}

- The provider delegate will interact with both the provider and the call controller, so you'll store references to both. The properties are marked fileprivate, so that you'll be able to reach them from extensions in the same file.
- You'll initialize the provider with the appropriate CXProviderConfiguration, stored as a static variable below. A provider configuration specifies the behavior and capabilities of the calls.
- To respond to events coming from the provider, you'll set its delegate. This line will cause a build error, as ProviderDelegate doesn't conform to CXProviderDelegate yet.
- In the case of Hotline, the provider configuration will allow video calls, phone number handles, and restrict the number of call groups to one. For further customization, refer to the CallKit documentation.

Just below the configuration, add the following helper method:

func reportIncomingCall(uuid: UUID, handle: String, hasVideo: Bool = false, completion: ((NSError?) -> Void)?) {
  // 1.
  let update = CXCallUpdate()
  update.remoteHandle = CXHandle(type: .phoneNumber, value: handle)
  update.hasVideo = hasVideo

  // 2.
  provider.reportNewIncomingCall(with: uuid, update: update) { error in
    if error == nil {
      // 3.
      let call = Call(uuid: uuid, handle: handle)
      self.callManager.add(call: call)
    }
    // 4.
    completion?(error as? NSError)
  }
}

This helper method will allow the app to call the CXProvider API to report an incoming call. Here's what's going on:

- You prepare a call update for the system, which will contain all the relevant call metadata.
- Invoking reportNewIncomingCall(with:update:completion:) on the provider will notify the system of the incoming call.
- The completion handler will be called once the system processes the call. If there were no errors, you create a Call instance, and add it to the list of calls via the CallManager.
- Invoke the completion handler, if it's not nil.

This method can be invoked by other classes in the app in order to simulate incoming calls. The next step is to ensure protocol conformance. Still in ProviderDelegate.swift, declare a new extension that adds the CXProviderDelegate conformance:

extension ProviderDelegate: CXProviderDelegate {
  func providerDidReset(_ provider: CXProvider) {
  }
}

With the delegate in place, it's time to put it to use! With the App group highlighted in the project navigator, open AppDelegate.swift for editing. You'll start by adding a new property to the class:

lazy var providerDelegate: ProviderDelegate = ProviderDelegate(callManager: self.callManager)

The provider delegate is ready to be used! Add the following method to AppDelegate:

func displayIncomingCall(uuid: UUID, handle: String, hasVideo: Bool = false, completion: ((NSError?) -> Void)?) {
  providerDelegate.reportIncomingCall(uuid: uuid, handle: handle, hasVideo: hasVideo, completion: completion)
}

This method will let other classes access the provider delegate's helper method. The final piece of the puzzle is hooking up this call to the user interface. Expand the UI/View Controllers group in the project navigator, and open CallsViewController.swift, which is the controller for the main screen of the app. Find the empty implementation of unwindForNewCall(_:), and replace it with the following code:

@IBAction private func unwindForNewCall(_ segue: UIStoryboardSegue) {
  // 1.
  let newCallController = segue.source as! NewCallViewController
  guard let handle = newCallController.handle else { return }
  let videoEnabled = newCallController.videoEnabled

  // 2.
  let backgroundTaskIdentifier = UIApplication.shared.beginBackgroundTask(expirationHandler: nil)
  DispatchQueue.main.asyncAfter(wallDeadline: DispatchWallTime.now() + 1.5) {
    AppDelegate.shared.displayIncomingCall(uuid: UUID(), handle: handle, hasVideo: videoEnabled) { _ in
      UIApplication.shared.endBackgroundTask(backgroundTaskIdentifier)
    }
  }
}

The snippet does the following:

- You'll extract the properties of the call from NewCallViewController, which is the source of this unwind segue.
- The user can suspend the app before the action completes, so it should use a background task.

Now that everything is hooked up, build and run the application, and do the following:

- Tap the plus button in the right-hand corner.
- Enter any number, make sure "Incoming" is selected in the segmented control, and tap Done.
- Lock the screen. This step is important, since it's the only way to access the rich, native in-call UI.

Within a few seconds, you'll be presented with the native incoming call UI. However, as soon as you answer the call, you'll notice that the UI remains stuck in that state: the provider delegate doesn't yet handle the answer action. Add the following method to the CXProviderDelegate extension:

func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
  // 1.
  guard let call = callManager.callWithUUID(uuid: action.callUUID) else {
    action.fail()
    return
  }

  // 2.
  configureAudioSession()
  // 3.
  call.answer()
  // 4.
  action.fulfill()
}

- You'll start by getting a reference from the call manager, corresponding to the UUID of the call to answer.
- It is the app's responsibility to configure the audio session for the call. The system will take care of activating the session at an elevated priority.
- By invoking answer(), you'll indicate that the call is now active.
- When processing an action, it's important to either fail or fulfill it. If there were no errors during the process, you can fulfill the action.

If you unlock your phone, you'll notice that both iOS and the app now reflect the correct ongoing call state.

Now for ending calls. Take a look at the first step: when the user ends the call from the in-call screen (1a), the system will automatically send a CXEndCallAction to the provider. No matter which way the call ends, you'll want to handle that action. Add the following method to the CXProviderDelegate extension:

func provider(_ provider: CXProvider, perform action: CXEndCallAction) {
  guard let call = callManager.callWithUUID(uuid: action.callUUID) else {
    action.fail()
    return
  }

  // 1.
  call.end()
  // 2.
  action.fulfill()
  callManager.remove(call: call)
}

Not that difficult! Here's what's going on:

- fulfill() will mark the action as fulfilled.
- Since you no longer need the call, the call manager can dispose of it.

This takes care of the in-call UI. In order to end calls from the app, you'll need to extend CallManager. With the Call Management group expanded in the project navigator, open CallManager.swift. The call manager will communicate with CXCallController, so it will need a reference to an instance. Add the following property to the CallManager class:

private let callController = CXCallController()

Then add the following method to request ending a call:

func end(call: Call) {
  // 1.
  let endCallAction = CXEndCallAction(call: call.uuid)
  // 2.
  let transaction = CXTransaction(action: endCallAction)
  // 3.
  callController.request(transaction) { error in
    if let error = error {
      print("Error requesting transaction: \(error)")
    }
  }
}

- You'll start by creating an "End call" action. You'll pass in the call's UUID to the initializer, so it can be identified later.
- The next step is to wrap the action into a transaction, so you can send it to the system.
- Finally, you'll invoke request(_:completion:) from the call controller. The system will request the provider to perform this transaction, which will in turn invoke the delegate method you just implemented.

The final step is to hook the action up to the user interface. Open CallsViewController.swift, and write the following call just below the tableView(_:cellForRowAt:) implementation:

override func tableView(_ tableView: UITableView, commit editingStyle: UITableViewCellEditingStyle, forRowAt indexPath: IndexPath) {
  let call = callManager.calls[indexPath.row]
  callManager.end(call: call)
}

Build and run, simulate an incoming call, and answer it. To end the call from within the app, swipe the call's cell and delete it. Neither the lock/home screens nor the app will report any ongoing calls.

Other Provider Actions

If you look at the documentation page of CXProviderDelegate, you'll notice that there are many more actions that the provider can perform, including muting, grouping, or setting calls on hold. The latter sounds like a good feature for Hotline, so you'll implement it now.

Whenever the user puts a call on hold, the system sends a CXSetHeldCallAction to the provider. Handling it is fairly simple:

func provider(_ provider: CXProvider, perform action: CXSetHeldCallAction) {
  guard let call = callManager.callWithUUID(uuid: action.callUUID) else {
    action.fail()
    return
  }

  // 1.
  call.state = action.isOnHold ? .held : .active

  // 2.
  if call.state == .held {
    stopAudio()
  } else {
    startAudio()
  }

  // 3.
  action.fulfill()
}

- After getting the reference to the call, you'll update its status according to the isOnHold property of the action.
- Depending on the new status, you'll want to start, or stop processing the call's audio.
- At this point, you can fulfill the action.

Next, extend CallManager with a method that requests the hold. It is very similar to the end(call:) method; in fact, the only difference between the two is that this one will wrap an instance of CXSetHeldCallAction into the transaction:

func setHeld(call: Call, onHold: Bool) {
  let setHeldCallAction = CXSetHeldCallAction(call: call.uuid, onHold: onHold)
  let transaction = CXTransaction(action: setHeldCallAction)
  callController.request(transaction) { error in
    if let error = error {
      print("Error requesting transaction: \(error)")
    }
  }
}

Now it's time to hook this action up to the user interface. Back in CallsViewController.swift, toggle the held state when the user taps a call's cell:

override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
  let call = callManager.calls[indexPath.row]
  call.state = call.state == .held ? .active : .held
  callManager.setHeld(call: call, onHold: call.state == .held)
  tableView.reloadData()
}

Outgoing Calls

Whenever the provider receives a CXStartCallAction for an outgoing call, the delegate must create the call and report its lifecycle back to the system:

func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
  // 1.
  let call = Call(uuid: action.callUUID, outgoing: true, handle: action.handle.value)

  // 2.
  call.connectedStateChanged = { [weak self, weak call] in
    guard let strongSelf = self, let call = call else { return }

    if call.connectedState == .pending {
      strongSelf.provider.reportOutgoingCall(with: call.uuid, startedConnectingAt: nil)
    } else if call.connectedState == .complete {
      strongSelf.provider.reportOutgoingCall(with: call.uuid, connectedAt: nil)
    }
  }

  // 3.
  call.start { [weak self, weak call] success in
    guard let strongSelf = self, let call = call else { return }

    if success {
      action.fulfill()
      strongSelf.callManager.add(call: call)
    } else {
      action.fail()
    }
  }
}

Since the app is responsible for the call's audio, you'll also start processing it once the system activates the audio session, which is when the provider(_:didActivate:) delegate method is invoked:

func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
  startAudio()
}
- The delegate will monitor the call's lifecycle. It will initially report that the outgoing call has started connecting. When the call is finally connected, the provider delegate will report that as well.
- Calling start() on the call will trigger its lifecycle changes. Upon a successful connection, the action can be marked as fulfilled.

To request outgoing calls, extend CallManager with a startCall(handle:videoEnabled:) method:

func startCall(handle: String, videoEnabled: Bool) {
  // 1.
  let handle = CXHandle(type: .phoneNumber, value: handle)
  // 2.
  let startCallAction = CXStartCallAction(call: UUID(), handle: handle)
  // 3.
  startCallAction.isVideo = videoEnabled
  let transaction = CXTransaction(action: startCallAction)

  callController.request(transaction) { error in
    if let error = error {
      print("Error requesting transaction: \(error)")
    }
  }
}

- A CXHandle can specify the handle type, and its value. Hotline supports phone number handles, so you'll use it here as well.
- A CXStartCallAction will receive a unique UUID, and a handle as input.
- You can use the action's isVideo property to mark the call as a video call.

Back in CallsViewController.swift, modify unwindForNewCall(_:) to match the following:

@IBAction private func unwindForNewCall(_ segue: UIStoryboardSegue) {
  let newCallController = segue.source as! NewCallViewController
  guard let handle = newCallController.handle else { return }
  let incoming = newCallController.incoming
  let videoEnabled = newCallController.videoEnabled

  if incoming {
    let backgroundTaskIdentifier = UIApplication.shared.beginBackgroundTask(expirationHandler: nil)
    DispatchQueue.main.asyncAfter(wallDeadline: DispatchWallTime.now() + 1.5) {
      AppDelegate.shared.displayIncomingCall(uuid: UUID(), handle: handle, hasVideo: videoEnabled) { _ in
        UIApplication.shared.endBackgroundTask(backgroundTaskIdentifier)
      }
    }
  } else {
    callManager.startCall(handle: handle, videoEnabled: videoEnabled)
  }
}

There's one subtle change in the code: whenever the new call is outgoing, the controller now asks the call manager to start the call instead of reporting an incoming one.

It's easy to imagine a scenario where a Hotline user would receive multiple calls. You can simulate this by first placing an outgoing call, then an incoming call and pressing the Home button before the incoming call comes in. At this point, the app presents the user with the following screen:

The system will let the user decide how to resolve the situation. Based on the choice, it will send the appropriate actions to the provider. So if your app already knows how to fulfill the individual requests, there's no further action required!

Whenever the system receives a call, it will check the address book for a match; if it doesn't find one, it can also check in app-specific directory extensions. Why not add a directory extension to Hotline?

Back in Xcode, go to File\New\Target… and choose Call Directory Extension. Name it HotlineDirectory, and click Finish. Xcode will automatically create a new file, CallDirectoryHandler.swift. Locate it in the project navigator, and check what's inside.
The first method you'll find is beginRequest(with:). This method will be invoked when your extension is initialized. In case of any errors, the extension will tell the host app to cancel the extension request by invoking cancelRequest(withError:). It relies on two other methods to build the app-specific directory.

addBlockingPhoneNumbers(to:) will collect all the phone numbers which should be blocked. Replace its implementation with the following (the specific numbers and labels in these two methods are just examples; note that entries must be added in ascending numerical order):

private func addBlockingPhoneNumbers(to context: CXCallDirectoryExtensionContext) throws {
  let phoneNumbers: [CXCallDirectoryPhoneNumber] = [ 1_234 ]
  for phoneNumber in phoneNumbers {
    context.addBlockingEntry(withNextSequentialPhoneNumber: phoneNumber)
  }
}

When a number is blocked, the system telephony provider will not display any calls from that number.

Now take a look at addIdentificationPhoneNumbers(to:). Replace the method body with the code below:

private func addIdentificationPhoneNumbers(to context: CXCallDirectoryExtensionContext) throws {
  let phoneNumbers: [CXCallDirectoryPhoneNumber] = [ 1_111 ]
  let labels = [ "Hotline" ]
  for (phoneNumber, label) in zip(phoneNumbers, labels) {
    context.addIdentificationEntry(withNextSequentialPhoneNumber: phoneNumber, label: label)
  }
}

Whenever the system receives a call from this number, the call UI will display the matching label to the user.

It's time to test your new extension. Build and run the Hotline scheme on your device. At this point your extension may not yet be active. To enable it, do the following steps:

- Go to the Settings app
- Select Phone
- Select Call Blocking & Identification
- Enable Hotline

Testing a blocked call is easy: just simulate an incoming call from one of the blocked numbers; it won't come through. To test identification, simulate an incoming call as before, but this time, enter the number 1111. You'll be presented with the following call UI:

Congratulations! You've created an app which leverages CallKit to provide a first-party VoIP experience! :]

Where to Go From Here?

You can download the completed project for this tutorial here. If you wish to learn more about CallKit, check out Session 230 from WWDC 2016. I hope you enjoyed this CallKit tutorial!

Scott Berrevoets - Editor
Chris Belanger - Final Pass Editor
Richard Turton - Team Lead
Andy Obusek
https://www.raywenderlich.com/150015/callkit-tutorial-ios
Functions related to video output.

#include <sys/cdefs.h>
#include <arch/types.h>

This file deals with the video output hardware in the Dreamcast. There are functions defined herein that deal with setting up the video hardware, defining the resolution of the display, dealing with the framebuffer, etc.

Multi-buffered mode setting. OR this with the generic mode to get four framebuffers instead of one.

The maximum number of framebuffers available.

Video mode structure. KOS maintains a list of valid video modes internally that correspond to the specific display modes enumeration. Each of them is built of one of these.

Generic display modes.

Specific display modes.

Set the border color of the display. This sets the color of the border area of the display. On some screens, the border area may not be shown at all, whereas on some displays you may see the whole thing.

Retrieve the connected video cable type. This function checks the video cable and reports what it finds.

Clear the display. This function sets the whole display to the specified color. Internally, this uses the store queues to actually clear the display entirely.

Clear VRAM. This function is essentially a memset() for the whole of VRAM that will clear it all to 0 bytes.

Set the current framebuffer in a multibuffered setup. This function sets the displayed framebuffer to the specified buffer and sets the vram_s and vram_l pointers to point at the next framebuffer, to allow for tearing-free framebuffer-direct drawing.

Initialize the video system. This function initializes the video display, setting the mode to the specified parameters, clearing VRAM, and setting the first framebuffer as active.

Take a screenshot. This function takes the current framebuffer (vram_s/vram_l) and dumps it out to a PPM file.

Set the video mode. This function sets the current video mode to the one specified by the parameters.

Set the video mode. This function sets the current video mode to the mode structure passed in. You can use this to add support to your program for modes that KOS doesn't have support for built-in (of course, you should tell us the settings so we can add them into KOS if you do this).

Set the VRAM base of the framebuffer. This function sets the vram_s and vram_l pointers to the specified offset within VRAM and sets the start position of the framebuffer to the same offset.

Shut down the video system. This function reinitializes the video system to what dcload and friends expect it to be.

Wait for VBlank. This function busy-loops until the vertical blanking period starts.

The list of builtin video modes. Do not modify these!

The current video mode. Do not modify directly!

32-bit size pointer to the current drawing area.

16-bit size pointer to the current drawing area.
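The multibuffered flip described above amounts to rotating an index through the available framebuffers: display one buffer while the vram pointers aim at the next. As a host-side illustration only (the struct and names below are made up for this sketch, not KOS internals):

```c
#define FAKE_MAX_FB 4  /* mirrors the "maximum number of framebuffers" above */

struct fake_video {
    int fb_count;    /* framebuffers in the current mode (1 or 4) */
    int displayed;   /* framebuffer currently shown on screen */
    int drawing;     /* framebuffer the vram pointers would aim at */
};

/* Show framebuffer fb (or the next one in sequence if fb < 0), and aim
 * the drawing pointers at the following buffer, as the flip function is
 * documented to do for tearing-free framebuffer-direct drawing. */
static void fake_flip(struct fake_video *v, int fb)
{
    v->displayed = (fb < 0) ? (v->displayed + 1) % v->fb_count
                            : fb % v->fb_count;
    v->drawing = (v->displayed + 1) % v->fb_count;
}
```

With a single framebuffer (fb_count == 1) the same arithmetic degenerates to always displaying and drawing buffer 0, which is why the single- and multi-buffered cases can share one code path.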
http://cadcdev.sourceforge.net/docs/kos-2.0.0/video_8h.html
On Friday 30 July 2004 09:44, Pavel Machek wrote:
> * system-wide suspend level is always passed down (it is more
> detailed, and for example IDE driver cares)

This bothers me -- why should a "system" suspend level matter to a device-level suspend call? Seems like if IDE cares, it's probably being given the wrong device-level suspend state, or it needs more information than the target device state.

The problem I'm clear on is with PCI suspend, which some earlier driver model PM changes goofed up. It's trying to pass a system state to driver suspend() methods that are expecting a value appropriate for pci_set_power_state(). You're proposing to fix that by changing the call semantics, while I'd rather just see the call site fixed.

I trust nobody is now disagreeing that's the root cause of several suspend problems ... and I suspect that API changes, to pass enums, should be part of the "real" fix. (As Len has commented, C enums aren't type-safe ... but at least they provide documentation which "u32 state" can't!)

> * if you want to derive Dx state, just use provided inline function to
> convert.

If the model is that there's some PM "layer" (for lack of better term) that's in charge of "system suspend" (e.g. to ACPI S3), then I have no qualms saying that layer is responsible for mapping those states into PCI D-states before calling PCI suspend routines. But we don't really seem to have such a layer. MontaVista has some "DPM" code, distinct from drivers/base/power calls with that TLA, that are one example of such a layer ... allowing multiple power configurations. Not the only way to do it -- but also not quite the limited "laptop into S3 now" kind of model either.

Point being that it should certainly be possible to selectively suspend devices without trying to put the whole system into a different state (just certain devices) ... and also without lying to any device about the system state.
(In fact, devices should as a rule not care about system power state, only their own state.) It should be easy for one driver to suspend or resume another one, without any changes to "system" state.

Some specific comments on the patch:

> +enum pci_state {
> +	D0 = 20,	/* For debugging, symbolic constants should be everywhere */
> +	D1,
> +	D2,
> +	D3hot,
> +	D3cold
> +};

Those would be better as PCI_D0, PCI_D2 etc ... so they're not confused with ACPI_D0, ACPI_D2 etc.

> +
> +static inline enum pci_state to_pci_state(enum suspend_state state)
> +{
> +	switch(state) {
> +	case RUNTIME_D1:
> +		return D1;
> +	case RUNTIME_D2:
> +		return D2;
> +	case RUNTIME_D3hot:
> +		return D3hot;
> +	case SNAPSHOT:
> +	case POWERDOWN:
> +	case RESTART:
> +	case RUNTIME_D3cold:
> +		return D3cold;
> +	default:
> +		BUG();
> +	}
> +}
>
> #endif /* __KERNEL__ */

This stuff, if it's used, belongs in <linux/pci.h> not <linux/pm.h> ... it's not generic to all busses. And pci_set_power_state() should probably be modified to take the enum ... though I don't much like that notion, it'd require changing every driver that actually tries to use the PCI PM calls (since they currently "know" D3hot==3 and D3cold==4).

- Dave
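The behavior of the quoted helper is easy to check in isolation. Here is a user-space version with the kernel's BUG() replaced by abort(); unlike the patch's debugging value (D0 = 20), it deliberately uses the D3hot==3 / D3cold==4 values that, as noted above, drivers currently assume:

```c
#include <stdlib.h>

/* System-wide suspend levels, as in the quoted patch (values illustrative). */
enum suspend_state {
    RUNTIME_D1, RUNTIME_D2, RUNTIME_D3hot, RUNTIME_D3cold,
    SNAPSHOT, POWERDOWN, RESTART
};

/* PCI device power states; D3hot==3 and D3cold==4 match what drivers
 * already "know", per the end of the mail. */
enum pci_state { D0 = 0, D1 = 1, D2 = 2, D3hot = 3, D3cold = 4 };

static inline enum pci_state to_pci_state(enum suspend_state state)
{
    switch (state) {
    case RUNTIME_D1:     return D1;
    case RUNTIME_D2:     return D2;
    case RUNTIME_D3hot:  return D3hot;
    case SNAPSHOT:
    case POWERDOWN:
    case RESTART:
    case RUNTIME_D3cold: return D3cold;
    default:             abort();   /* stand-in for BUG() */
    }
}
```

Note how every system-wide state that implies losing power (snapshot, powerdown, restart) collapses to D3cold; this is exactly the kind of system-to-device mapping the mail argues should live in a PM layer rather than in each driver's suspend() method.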
http://lkml.org/lkml/2004/7/31/1