text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Lesson 31: How to Get Your Smartphone to Talk
Written by Jonathan Sim
You can find this lesson and more in the Arduino IDE (File -> Examples -> Andee). If you are unable to find them, you will need to install the Andee Library for Arduino IDE.
Did you know that you can make use of Android's Text-to-Speech (TTS) functionality to get the Annikken Andee to read to you?
Well, you can get it to do more than just read. It can make announcements or even report sensor readings so that you don't have to look at your phone!
Here's how!
If you do not hear anything from your phone, you will need to disable silent mode or check your phone's system volume settings.
If that does not work, you may need to manually configure Text to Speech.
To do this, go to the Andee app, and open "Settings." Scroll down and you should find a "Setup" button under the TTS section.
#include <SPI.h> #include <Andee.h> AndeeHelper displaybox; AndeeHelper button; AndeeHelper speechObject; // You need to create a speech object for the phone to talk.setTitle("Text to Speech"); displaybox.setData("Be sure to unmute your phone to hear your phone talk!"); button.setId(1); // Don't forget to assign a unique ID number button.setType(BUTTON_IN); // Defines object as a button button.setLocation(1,0,FULL); button.setTitle("Say something!"); button.setColor(THEME_RED_DARK); speechObject.setId(2); speechObject.setType(TTS); // Defines object as a Text-to-Speech object }
void loop() { if( button.isPressed() ) { button.ack(); // Use updateData() to get the phone to talk! speechObject.updateData("Confucius say: Man run in front of car get tired,"); speechObject.updateData("man run behind car get exhausted."); // You will need to break your sentence into multiple lines if your lines are too long. // Each speech object can only handle up to 40 characters of text at a time. // Punctuation will also affect the way your phone vocalises the text. } displaybox.update(); button.update(); // Do not update the speechObject! delay(500); // Always leave a short delay for Bluetooth communication } | http://resources.annikken.com/index.php?title=Lesson_31:_How_to_Get_Your_Smartphone_to_Talk | CC-MAIN-2017-09 | refinedweb | 349 | 66.84 |
There can be many reasons to make you want to cache SQL queries. Some of them are valid, e.g. reducing the number of round-trips (esp. when dealing with high-latency). Others might be micro-optimizations that are just not worth it. Regardless of your reasons for wanting to cache SQL queries, implementing them can be cumbersome.
Subject
I am going to use Slonik (PostgreSQL client for Node.js) and node-cache to demonstrate the usual way to implement cache and a declarative way to add cache to your existing codebase.
Let's assume a simple query-method to get
country PK value using another unique identifier:
const getCountryIdByCodeAlpha2 = ( connection: DatabaseConnectionType, countryCode: string ): Promise<DatabaseRecordIdType> => { return connection.maybeOneFirst(sql` SELECT id FROM country WHERE code_alpha_2 = ${countryCode} `); };
This type of query is particularly common when ingesting data from external inputs (e.g. user submitted input or data that has been collected using scraping).
Measuring the problem
In the particular case that prompted me to explore caching, this query was called 7k+ times/ minute. Aside from this query, there were a dozen of other similar queries that collectively were executed well over 50k+ times/ minute. None of them affect my database server performance (PostgreSQL is already good at caching), but they:
- generate unnecessary logs
- increase the overall time needed to complete the task
The time it takes for PostgreSQL to execute such a query is minuscule, e.g.
EXPLAIN ANALYZE SELECT id FROM country WHERE code_alpha_2 = 'gb'; Index Only Scan using country_code_alpha_2_id_idx on country (cost=0.14..1.16 rows=1 width=4) (actual time=0.425..0.426 rows=1 loops=1) Index Cond: (code_alpha_2 = 'gb'::citext) Heap Fetches: 0 Planning Time: 0.069 ms Execution Time: 0.439 ms
However, we have to also add the network time. In my case, the latency between the worker agent and the database is ~3ms.
ping ***.aivencloud.com 17:31:54 PING ***.aivencloud.com (34.90.***.***): 56 data bytes 64 bytes from 34.90.***.***: icmp_seq=0 ttl=53 time=3.166 ms 64 bytes from 34.90.***.***: icmp_seq=1 ttl=53 time=2.627 ms 64 bytes from 34.90.***.***: icmp_seq=2 ttl=53 time=2.873 ms
That means that executing a query and getting the result takes at least 7.5ms (0.5ms query execution time + 2 trips). Put it another way, every 60 seconds, we waste ~350 seconds of computing time (spread across many servers). Overtime, this adds up to a lot (70 hours over month).
Implementing cache
All you need to implement cache is some storage service with a mechanism to limit how long and how many items can be stored.
node-cache is such an abstraction for synchronously storing/ retrieving objects in memory. Using
node-cache, you use
set method to store cache and
get method to retrieve cache;
node-cache handles invalidation and storage limits behind the scenes. This is how
getCountryIdByCodeAlpha2 would look like if it used
node-cache:
const cache = new NodeCache({ checkperiod: 60, maxKeys: 10000, stdTTL: 60, useClones: false, }); const getCountryIdByCodeAlpha2 = async ( cache: NodeCache, connection: DatabaseConnectionType, countryCode: string ): Promise<DatabaseRecordIdType> => { const maybeCountryId = cache.get(countryCode); if (maybeCountryId) { return maybeCountryId; } const maybeResult = await connection.maybeOneFirst(sql` SELECT id FROM country WHERE code_alpha_2 = ${countryCode} `); cache.set(maybeResult, maybeResult); return maybeResult; };
However, this way of adding cache has a few disadvantages:
- It introduces a lot of boilerplate around every query.
- It introduces an additional dependency (
NodeCacheinstance) that needs to be passed around throughout your codebase along with the database connection handle.
If you had to go this way, 9/10 I would say it is not worth it. Luckily, there is a better way.
Declarative cache
Slonik has a concept of interceptors (middlewares) that can be used to capture and modify SQL request and response. This makes them perfect for implementing cache. Such interceptor already exists:
slonik-interceptor-query-cache.
slonik-interceptor-query-cache uses SQL comments to recognize which queries should be cached and for how long. Specifically, it searches for comment
@cache-ttl.
@cache-ttl comment indicates for how long the query should be cached. Queries without
@cache-ttl are not cached at all, i.e. In order to cache the result of the earlier query for 60 seconds, the only change we need to make is to add a
@cache-ttl comment to our query:
const getCountryIdByCodeAlpha2 = ( connection: DatabaseConnectionType, countryCode: string ): Promise<DatabaseRecordIdType> => { return connection.maybeOneFirst(sql` -- @cache-ttl 60 SELECT id FROM country WHERE code_alpha_2 = ${countryCode} `); };
Now this query will be cache result for each unique
countryCode for 60 seconds.
slonik-interceptor-query-cache does not implement storage, though. You can use
node-cache,
lru-cache, Redis, or any other storage engine. To use them, you simply need to abstract their interface using
get and
set methods, and provide them to
slonik-interceptor-query-cache. Continuing with the
node-cache example, this is how you would initiate Slonik with the query cache interceptor using
node-cache as a storage engine:
import NodeCache from 'node-cache'; import { createPool } from 'slonik'; import { createQueryCacheInterceptor } from 'slonik-interceptor-query-cache'; const nodeCache = new NodeCache({ checkperiod: 60, stdTTL: 60, useClones: false, }); const hashQuery = (query: QueryType): string => { return JSON.stringify(query); }; const pool = createPool('postgres://', { interceptors: [ createQueryCacheInterceptor({ storage: { get: (query) => { return cache.get(hashQuery(query)) || null; }, set: (query, cacheAttributes, queryResult) => { cache.set(hashQuery(query), queryResult, cacheAttributes.ttl); }, }, }), ] });
and that is it: with minimal code changes, now you can cache any query just by adding a comment to SQL. Among other benefits, this:
- allows you to quickly test the impact of caching a specific query
- allows you to quickly enable/ disable query caching (by simply adding/ removing the query cache interceptor)
- does not affect how you write test cases
- does not add boilerplate code to every query
- does not require to passthrough an additional dependency to every query invocation
Posted on by:
Discussion | https://dev.to/gajus/a-declarative-way-to-cache-postgresql-queries-using-node-js-4fbo | CC-MAIN-2020-40 | refinedweb | 974 | 56.45 |
Interface for scene manipulation and query the restrictions. More...
#include <ISceneController.h>
Interface for scene manipulation and query the restrictions.
Definition at line 17 of file ISceneController.h.
Vieport scale fitting mode.
Definition at line 51 of file ISceneController.h.
Definition at line 20 of file ISceneController.h.
Describe mode for setting of scale value.
Definition at line 73 of file ISceneController.h.
Get vieport scale fitting mode.
Get actual scale factor.
Get the restriction flags, to tell the scene manipulator, what can be changed on the scene.
Check if full screen mode is enabled.
In full screen mode scene is shown on whole screen area.
Set vieport scale fitting mode.
Set full screen mode on/off.
In full screen mode scene is shown on whole screen area.
Set or modify scale factor.
See ScaleMode for possible modes.
© 2007-2017 Witold Gantzke and Kirill Lepskiy | http://ilena.org/TechnicalDocs/Acf/classi2d_1_1_i_scene_controller.html | CC-MAIN-2018-30 | refinedweb | 145 | 80.99 |
To help you understand why namespaces were added to C++ in the first place, I'll use an analogy. Imagine that the file system on your computer did not have directories and subdirectories. All files would be stored in a flat repository and would therefore always be visible to every user and application. Consequently, extreme difficulties would arise: Filenames would clash (with some systems that limit a filename to 8 characters plus 3 for the extension, this is even more likely), and simple actions like listing, copying, or searching files would be much more difficult. Even worse, security and authorization restrictions would be severely compromised.
C++ namespaces are equivalent to directories. They enable you to hide declarations, they can be nested easily, they protect your code from name conflicts, and they do not incur any runtime or memory overhead. | http://www.informit.com/articles/article.aspx?p=18091&seqNum=2 | CC-MAIN-2017-13 | refinedweb | 139 | 51.48 |
31 August 2009
By clicking Submit, you accept the Adobe Terms of Use.
This article assumes that you are familiar with Flash Professional CS4 and have a basic understanding of delivering video through Flash Player and Flash Media Server. You should also have a basic understanding of XML and ActionScript 3.
Note: In response to reader requests, the extra sample file contains the code you need to add descriptions to the scrolling tile list in the dynamic video playlist (which now shows just titles). You simply add a field to the custom cell renderer class.
Intermediate
If Video Learning Guide for Flash has a whole section on adding web and later.—so don't be intimidated! Sample code is provided, along with some sample videos (vintage cartoons courtesy of the Prelinger Archives), so you can test your application right away.
The basic framework of the playlist application consists of the following (see Figure 1):
Comparing the pros and cons of streaming and progressive video is beyond the scope of this article. (Read the Video Learning Guide for Flash for an overview of Flash video delivery options.) Even so, you should decide which method of delivery to use before you begin. As a quick review, let's outline the major benefits and pitfalls of each.
Before beginning the development of your dynamic playlist application, you should understand how to deploy it to the web. The setup differs for progressive and streaming delivery, but only slightly. The biggest difference is where you place your video files. Take a closer look (see Figure 2).
No matter which delivery method you choose, you'll have to upload your application SWF and associated files to your standard web server. All of these files are required:
You'll also need to upload your video files. The location you place them depends on your delivery method.
For progressive delivery, just create a new directory called videos in the same location as the files above and copy all of your files there. After doing so, you're ready to test your application.
Note: You can download the free Flash Media Development Server 3.5 for local testing or sign up for an account with a Flash Video Streaming Service (FVSS) provider if you don't yet own a Flash Media Server license.
For streaming delivery, you will need to upload your videos to Flash Media Server (FMS). The setup is quite simple. This article assumes that you are running FMS locally, but the application setup would be the same if you were using a remote server. Follow these steps to set up your FMS application:
That's all the setup required on the server side for streaming delivery. You'll then be ready to connect to the application and test your streaming video playlist (once you create it, of course—I'm getting to that!)
The ActionScript for your playlist is identical for either deployment method. The only difference is in the source path of your videos in the playlist.xml file. I cover this in more detail in the next section.
This section describes the structure of the XML data files, playlist.xml, and playlist-streaming.xml, all of which are included in the sample files you downloaded on the first page of this article.
To add new videos to your playlist dynamically, simply edit the appropriate file: playlist.xml for progressive delivery or playlist-streaming.xml for streaming. Here's the code in playlist.xml:
<?xml version="1.0" encoding="UTF-8"?> <playlist id="Adobe Developer Center - Dynamic Video Playlist in AS3" > <vid desc="Popeye for President, Title and Credits" src="videos/Popeye_forPresiden256K_flash_popeye.mp4" thumb="thumbs/Popeye_forPresiden768K.jpg" /> <vid desc="Vote for Popeye" src="videos/Popeye_forPresiden768K_001.flv" thumb="thumbs/Popeye_forPresiden768K_001.jpg" /> <vid desc="Vote for Bluto" src="videos/Popeye_forPresiden768K_002.flv" thumb="thumbs/Popeye_forPresiden768K_002.jpg" /> <vid desc="Bluto's stolen all of Popeye's voters" src="videos/Popeye_forPresiden768K_003.flv" thumb="thumbs/Popeye_forPresiden768K_003.jpg" /> <vid desc="Popeye getting creative" src="videos/Popeye_forPresiden256kb_flash_004_sm.mp4" thumb="thumbs/Popeye_forPresiden768K_004.jpg" /> <vid desc="The fight for Olive Oyl's vote" src="videos/Popeye_forPresiden768K_005.flv" thumb="thumbs/Popeye_forPresiden768K_005.jpg" /> </playlist>
Notice that both sample files have a very basic XML structure, holding two main elements:
The
src attribute must point to your video files.
For progressive videos (shown above), you update the
src attribute with a relative path to the video from the SWF file; in this example, you've copied all the files into a folder aptly named videos.
For streaming video, just append your Flash Media Server address and application name to your video filenames, as shown in this playlist-streaming.xml example:
<?xml version="1.0" encoding="UTF-8"?> <playlist id="Adobe Developer Center - Dynamic Video Playlist in AS3" > <vid desc="Popeye for President, Title and Credits" src="rtmp://localhost/videoplaylist/mp4:Popeye_forPresiden256K_flash_popeye.mp4" thumb="thumbs/Popeye_forPresiden768K.jpg" /> <vid desc="Vote for Popeye" src="rtmp://localhost/videoplaylist/Popeye_forPresiden768K_001.flv" thumb="thumbs/Popeye_forPresiden768K_001.jpg" /> <vid desc="Vote for Bluto" src="rtmp://localhost/videoplaylist/Popeye_forPresiden768K_002.flv" thumb="thumbs/Popeye_forPresiden768K_002.jpg" /> <vid desc="Bluto's stolen all of Popeye's voters" src="rtmp://localhost/videoplaylist/Popeye_forPresiden768K_003.flv" thumb="thumbs/Popeye_forPresiden768K_003.jpg" /> <vid desc="Popeye getting creative" src="rtmp://localhost/videoplaylist/mp4:Popeye_forPresiden256kb_flash_004_sm.mp4" thumb="thumbs/Popeye_forPresiden768K_004.jpg" /> <vid desc="The fight for Olive Oyl's vote" src="rtmp://localhost/videoplaylist/Popeye_forPresiden768K_005.flv" thumb="thumbs/Popeye_forPresiden768K_005.jpg" /> </playlist>
This playlist-streaming.xml example file assumes that you are working with Flash Media Server running locally. If you are running a remote FMS server or using an FVSS provider, include your server address here instead of "localhost."
Go ahead and edit the XML file that corresponds with your delivery mode to play your own videos, or proceed using the provided sample videos for now.
Note: When you are streaming MPEG4 videos (MOV, M4V, F4V, MP4, etc.) with Flash Media Server, you'll need to append MP4: to the beginning of your filename to make your video play.
The
src attribute for a streaming MPEG4 video looks like this:
src="rtmp://localhost/videoplaylist/mp4:Popeye_forPresiden256K_flash_popeye.mp4"
You do not need to add the "MP4:" prefix when using progressive delivery.
Now that you've established the data source for your playlist, launch Flash CS4 Professional and start working on the application interface.
Here I explain the structure of the finished player and walk you through the process of creating the interface in the Flash authoring environment.
The application has a relatively simple interface, consisting of an FLVPlayback component for the video display and a TileList component for the playlist (see Figure 3).
The TileList component is populated automatically from the contents of your XML file. The TileList component is perfect here because it allows you to easily include a small thumbnail preview of each video. Each item in this list links to the appropriate video file, which plays in the FLVPlayback component when clicked.
Create the interface by placing and configuring the two components. Open Flash and follow these steps:
autoPlayto false, and
scaleModeto noScale. Since the dimensions of your videos will be automatically detected by the FLVPlayback component, the
scaleModesetting will determine what to do if the videos are different sizes:
noScaleretains the original size of the video and resizes the playback skin
maintainAspectRatioresizes your video to fit the current dimensions of the component but does not stretch the video
exactFitstretches the video horizontally and vertically to match the dimensions of the component
The component automatically changes its placement on the page, depending on the setting for the align property. By default, the component remains centered on its current registration point. If you were to reset the registration point to align topLeft instead, the placement of the video would remain anchored at the top left corner and would shrink or grow from that point. (I've encoded one of the sample video files with smaller dimensions than the others to demonstrate this alignment behavior. See if you can find which one it is!)
At this point, your Flash file should look something like Figure 6.
That's all there is to creating the functional interface. Save this file; you'll create the separate ActionScript files next, and then open VideoPlaylist.fla again later to skin the playlist.
Are you ready for the fun part? The next section gets into the meat of the application—the client-side ActionScript.
This section explains how to read in, or parse, the XML document, create the dynamic playlist, and play selected videos. (I think you'll be surprised how remarkably simple it is to do all of this in Flash!)
As I mentioned previously, all of the ActionScript used in this project is contained in external class files. Begin by creating an ActionScript file called VideoPlaylist.as in Flash, Flex Builder, Dreamweaver, or in your favorite text editor. Save it in the same folder as VideoPlaylist_CS4.fla. I'll walk you through the code here, so you understand how it works.
Import the classes that are needed later in the script:
package { import flash.display.MovieClip; import flash.net.URLLoader; import flash.net.URLRequest; import flash.events.Event; import fl.controls.listClasses.CellRenderer; import fl.controls.ScrollBarDirection;
Set up the class constructor and main VideoPlaylist method:
public class VideoPlaylist extends MovieClip { private var xmlLoader:URLLoader; public function VideoPlaylist():void { // Load the playlist file, then initialize the media player. xmlLoader = new URLLoader(); xmlLoader.addEventListener(Event.COMPLETE, initMediaPlayer); xmlLoader.load(new URLRequest("playlist.xml")); // Format the tileList, specify its cellRenderer class. tileList.setSize(200, 240); tileList.columnWidth = 180; tileList.rowHeight = 60; tileList.direction = ScrollBarDirection.VERTICAL; tileList.setStyle("cellRenderer", Thumb); }
The VideoPlaylist method loads the XML data. Once it is successfully loaded, it triggers the initMediaPlayer method. ActionScript 3 has given us some powerful XML handling features, so this part is amazingly easy.
The overall format of the TileList component is set up in this first method as well, specifying the size, row height, and scrollbar behavior—and assigning the class that will control what the playlist looks like. (I describe that class in the next section.)
Here's the initMediaPlayer method:
public function initMediaPlayer(event:Event):void { var myXML:XML = new XML(xmlLoader.data); var item:XML; for each(item in myXML.vid) { // populate playlist. // Get thumbnail value and assign to cellrenderer. var thumb:String; if(item.hasOwnProperty("@thumb")>0) thumb = item.@thumb; // Send data to tileList. tileList.addItem({label:item.attribute("desc").toXMLString(), data:item.attribute("src").toXMLString(), source:thumb});; } // Listen for item selection. tileList.addEventListener(Event.CHANGE, listListener); // Select the first video. tileList.selectedIndex = 0; // And automatically load it into myVid. myVid.source = tileList.selectedItem.data; // Pause video until selected or played. myVid.pause(); }
The initMediaPlayer method accomplishes the following:
myXMLobject.
Finally, set up the
listListener function to handle change events fired by the TileList component:
// Detect when new video is selected, and play it function listListener(event:Event):void { myVid.play(event.target.selectedItem.data); } } }
You're almost there. Now you have a video player that reads in a list of videos from your external XML data source and loads the first video in the list into the FLVPlayback component. Before you go ahead and test the app, however, you still need to set up the TileList component to accept the thumbnail of the video file. Just to get a bit fancy, I'll cover the process of customizing its skin as well—it's easier than you might think.
At this point, the application is clean and functional. However, the functionality for displaying the video thumbnails is not yet implemented.
To make the TileList component load your thumbnails, you need to set up a custom CellRenderer. While you're at it, why not customize the look of your list as well? Specify the skin for the mouseover and selected states of the video items.
Note: If you are integrating this code into your own custom file, it's important to note that the TileList will not populate correctly if you place it on a layer that is masked.
If you've worked with components in older versions of Flash, you're going to love how easy this is to do now. As you may remember, you've already assigned a CellRenderer in the main constructor of the VideoPlaylist.as class:
tileList.setStyle("cellRenderer", Thumb);
So, you now need to actually create this cellRenderer class. In the code editor of your choice, create a new ActionScript file and name it Thumb.as. Save this file in the same location as your VideoPlaylist.as file.
Import the classes that are needed later in the script:
package { import fl.controls.listClasses.ICellRenderer; import fl.controls.listClasses.ImageCell; import fl.controls.TileList; import flash.text.*;
Set up the main constructor:
public class Thumb extends ImageCell implements ICellRenderer { private var desc:TextField; private var textStyle:TextFormat; public function Thumb() { super(); loader.scaleContent = false; useHandCursor = true; //); // Create and format desc desc = new TextField(); desc.autoSize = TextFieldAutoSize.LEFT; desc.x = 75; desc.width = 110; desc.multiline = true; desc.wordWrap = true; addChild(desc); textStyle = new TextFormat(); textStyle.font = "Tahoma"; textStyle.color = 0×000000; textStyle.size = 11; }
This main constructor accomplishes the following:
After you've added this code, save the Thumb.as file.
That's all that's required to create the custom CellRenderer. There's just one final step before you can go ahead and test your application—create some simple movie clips that will be used to customize the look, or "skin" of your playlist. You specified these movie clips in the Thumb.as code:
//);
Note: This step is optional. If you are happy with the default mouseover skins, you can remove the six
setStyle lines from Thumb.as, then skip ahead to the section called "Publishing your video playlist."
Open your VideoPlaylist.fla in Flash. You'll be creating three new movie clips: ThumbCellBg, ThumbCellBgOver, and ThumbCellBgSelected. Follow these steps to use the movie clips as custom rollover states for your TileList component, and feel free to use other colors if you like:
That's it! You've just skinned your playlist.
With all of your ActionScript and playlist skins in place, you are now ready to publish the SWF. Select File > Publish or Control > Test Movie. The XML data will load into your playlist, and your first video should be cued up to play. Just click the play button or select an item in the playlist to start playing a video.
Note: If you're using progressive delivery, your videos can be previewed locally with no Internet connection required. If you are streaming your videos using Flash Media Server, be sure that your videos are copied into your Flash Media Server directory as outlined earlier, and that your server is running (either running locally or accessible via the Internet) in order to test your application.
One feature that you may want to add is support for full-screen video playback. This is remarkably easy to do. In fact, all of the video examples provided (and the code you've been working on so far) supports full-screen playback—and you didn't even know it!
All you need to do is specify the correct HTML template and publish your project. Select File > Publish Settings, check the box for HTML, and select the HTML tab. From the Template drop-down menu, choose "Flash Only – Allow Full Screen" (see Figure 9).
Click Publish, then OK. That's it! Now when you launch the HTML page in your browser, you can click the full-screen icon in your FLVPlayback skin (be sure to chose a skin that includes one) and voilà—full-screen video goodness!
The final step describes the process of deploying your files to the web for all the world to see.
The beginning of this article outlined the location of the video files for both progressive and streaming delivery. Now that you've developed the sample application, let's review. You will need to place your SWF file and other associated files in your web-accessible folder. And you'll need to copy your videos to either your web server (for progressive delivery) or Flash Media Server (for streaming).
The following files should be in your web-accessible folder on your web server:
The last step is to be sure your videos are uploaded in the proper place for your method of delivery. If you're unclear about where they belong, refer back to the section of this article entitled "Understanding the files and servers."
Once all of your files are in place on your server(s), navigate to VideoPlaylist.html in your favorite browser to test your dynamic video playlist, complete with thumbnails and a custom TileList skin.
Preview the dynamic video playlist template
Now that you've explored the elements that comprise a simple video playlist navigation using XML, you can extend this basic framework with other features as well:
In this tutorial, you created a flexible XML-based video playlist application that you can easily update, reuse, and reskin to your heart's content. Like most Flash developers, however, I'll bet you've got some custom features you'd like to add, don't you? Yeah, I thought so.
The framework I provided for you was meant to be streamlined and easy to follow, but you can easily extend it. In fact, there were some additional features that had been requested from my past articles that I've included for you. In the provided sample files, there are a few other variations you can play with:
VideoEvent.COMPLETEand trigger the next video in the playlist to play.
Some other, very useful features that you could add include bandwidth detection (client-based for progressive delivery, or server-based using Flash Media Server), custom metadata, dynamic cue points, or video preloaders. Take this basic framework and be creative.
The following resources can help you get up to speed with video in Flash, Flash Media Server, and customizing the components:
Be sure also to play with the other video templates in the Flash Developer Center. | http://www.adobe.com/devnet/flash/articles/video_playlist.html?devcon=f5 | CC-MAIN-2016-18 | refinedweb | 3,004 | 57.16 |
TreeView unknown component (M300) - Qt5.5.1
I am Using treeview in qml, but the qtcreator says that it is an unknown component (m300)..i have import QtQuick.Controls 1.4
so what do I have to do reinstall the qt liberary? or is there another way to solve?
import QtQuick 2.5
import QtQuick.Controls 1.4
import QtQuick.Window 2.2
import QtQuick.Dialogs 1.2
import treeview.mymodels 1.0
ApplicationWindow {
title: qsTr("Qt Quick: TreeView")
width: 640
height: 480
visible: true
TabView { width: parent.width height: parent.height MyTreeModel { id: theModel } TreeView { id:treeviewID anchors.fill: parent model: theModel headerVisible : false } }
}
the symbol treeview is marked as erroneous in Creator.
Hi!
I have the same problem. I did not find any solution.
I think this is only Qt GUI error. Because my project is compiled and work correctly.
Hi,
same problem here, at least on OSX don't know about Windows.
I can confirm the application compiles fine, any way to fix Qt Creator ?
Thx !
- p3c0 Moderators
@tvdp You can only suppress them. Just add
// @disable-check M300before
TreeView. The same can be done by right-click > TreeView > "Add a Comment to Suppress This Message"
@p3c0 Thanks for the tip: it doesn't fix the designer itself, but at least I can continue using the text editor without too much hassle... | https://forum.qt.io/topic/60155/treeview-unknown-component-m300-qt5-5-1 | CC-MAIN-2018-30 | refinedweb | 225 | 71 |
in reply to
Re^4: Why does the first $c evaluate to the incremented value in [$c, $c += $_] ? ("undefined")
in thread Why does the first $c evaluate to the incremented value in [$c, $c += $_] ?
Ok, engaging lawyer mode. ISO C99 standard says, in 6.5 (2):
Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression. Furthermore, the prior value shall be read only to determine the value to be stored.
The initialization shall occur in initializer list order, each initializer provided for a particular subobject overriding any previously listed initializer for the same subobject; ...
The order in which any side effects occur among the initialization list expressions is unspecified.
This example code is valid in C99 (no undefined behaviour), yet what it prints is unspecified.
#include <stdio.h>
void meh(int t[]) {
int i = 0;
int foo[] = { [1] = ++i, [2] = ++i, [0] = ++i };
for (i = 0; i < 3; i++) { t[i] = foo[i]; }
}
int main(void) {
int x[3];
meh(x);
printf("%d %d %d\n", x[0], x[1], x[2]);
}
[download] constructs.
- tye
My "lawyer mode" rant was to show that C is playing catch-up in some areas, and that the rules are anything but clear-cut.
I wouldn't consider perl code that the OP wrote, stupid. In fact I might quite possibly have used the same construct...
Then you simply need to update your best practices to include "Using a variable in the same statement as you separately modify it is stupid".
(Then, when it breaks [as it did] you'll save a lot of time by not trying to convince people that it is a | http://www.perlmonks.org/index.pl?node_id=1077144 | CC-MAIN-2015-48 | refinedweb | 288 | 70.13 |
csPathsUtilities Class Reference
[Utilities]
A helper class with path-related utilities. More...
#include <csutil/syspath.h>
Detailed Description
A helper class with path-related utilities.
Definition at line 215 of file syspath.h.
Member Function Documentation
Expands all paths in a path list.
Expand a native path relative to the current directory.
- Remarks:
- The specified path must refer to a directory, rather than a file.
- Caller is responsible to free the returned string with delete[] after using it.
Filter all non-existent items out of a paths list.
Determine which path(s) of a given set contains a given file.
- Parameters:
-
Check whether two native paths actually point to the same location.
Use this instead of strcmp() or the like, as it may not suffice in all cases (e.g. on Windows paths names are case-insensitive, but on Unix they aren't).
- Remarks:
- Expects the paths to be fully qualified. Use csExpandPath() to ensure this.
The documentation for this class was generated from the following file:
Generated for Crystal Space 2.1 by doxygen 1.6.1 | http://crystalspace3d.org/docs/online/api/classcsPathsUtilities.html | CC-MAIN-2015-22 | refinedweb | 178 | 61.02 |
Let’s Add an Emoji Converter to Trix with Stimulus
We’ve been exploring using Stimulus to add interactivity to our apps. Let’s use Stimulus to watch for typing changes, and convert an emoji short code into the actual glyph, such as
:smiley: to 😀
Getting Started
I’ve uploaded my code to Github, so you can follow along with an actual Rails project if you’re ever stuck, or ask me on twitter.
Let’s create a new rails app (using rails 5.2.0.rc2):
rails new trix_emoji --webpack=stimulus
We’re going to add a controller called Documents and a show method on it. We need to add the appropriate routes in
routes.rb.
Rails.application.routes.draw do resource :document end
And the controller,
documents_controller.rb:
class DocumentsController < ApplicationController def show end end
And we’ll add a view,
documents/show.html.erb:
<h1>Trix Emoji Converter 2000</h1>
Configuring Trix With Yarn and Webpack
Let’s add Trix to our
package.json file with Yarn:
$ yarn add trix
Then, we’ll add Trix to our webpack root file,
javascripts/packs/applications.js:
import "trix/dist/trix.css"; import { Trix } from "trix"
The first line imports the CSS for the
trix-editor element, and the next line imports the Javascript code that makes the editor run.
You’ll want to make sure to add the webpacker stylesheet and javascript tags to
layouts/application.html.erb:
<%= javascript_pack_tag 'application', 'data-turbolinks-track': 'reload' %> <%= stylesheet_pack_tag 'application', 'data-turbolinks-track': 'reload' %>
And now we can add our Trix Editor to our html:
<trix-editor></trix-editor>
Adding Stimulus
Let’s add the proper annotations to our HTML for our Trix controller:
<trix-editor</trix-editor>
The editor will be a target for the controller so that we can filter out events that might not correspond the current controller instance. Here is the skeleton code for the
emoji_converter_controller.js:
import { Controller } from "stimulus" export default class extends Controller { static targets = ["editor"] connect() { window.addEventListener("trix-change", this.trixChange.bind(this)) } trixChange(event) { if (event.target == this.editorTarget) { } } }
When the controller connects to the dom, it starts listening for the
trix-change event, which is fired after a change occurs in a trix editor. We make sure that the change is from our controller’s editor, and then we’ll scan through the text to look for the emoji short codes.
Let’s Find those Short Codes :smiley:
Inside the
trixChange function, we’ll go through every character of the editor’s text, and we’ll see if we find a short code. If we find a one, we’ll see if we have the short code to emoji mapping. If we have a mapping, we’ll replace the text, and stop scanning, because replacing the text is going to create another change event, which will allow us to look again very shortly.
Let’s add a small set of supported emojis to our controller, by creating a dictionary in our connect method:
connect() { window.addEventListener("trix-change", this.trixChange.bind(this)) this.supportedEmojis = { ":smiley:" : "😀", ":stuck_out_tongue_winking_eye:" : "😜", ":bowtie:" : "🤵", } }
Then in the change method, we’ll process each change and look for a short code.
trixChange(event) { if (event.target == this.editorTarget) {
We’re going to get the text of the document from our editor:
let stringDoc = this.editorTarget.editor.getDocument().toString()
We’ll set up some variables to keep track of what we’ve found as we go over the string:
var foundItem = false var foundStart = -1 var foundText = "" // Iterating over every 16 bit unicode character, // since `for (var letter of stringDoc)` method won't work // in this particular situation. for (var count = 0; count<stringDoc.length; count++) {
We’ll look at every letter, and check to see if it’s a colon (:) character:
let letter = stringDoc[count]; if (letter == ":") { if (foundItem) { foundText += letter
If we found a supported emoji, we’ll replace the short code. Otherwise, we’ll ignore it, and keep looking. We also keep track of a new colon character, and any text we find between colons.
let emoji = this.supportedEmojis[foundText] if (emoji) { this.editorTarget.editor.setSelectedRange([foundStart, count + 1]) this.editorTarget.editor.insertString(emoji) return // break out and wait for next trix-change event } else { foundItem = false foundStart = -1 foundText = "" } } else { foundItem = true foundStart = count foundText = letter } } else if (foundItem) { // If we come across a space, it's not a supported emoji, so reset if (letter == " ") { foundItem = false foundStart = -1 foundText = "" } else { foundText += letter } } } } }
And now Trix will convert short codes into Emoji!
This is a fun and simple example, but I imagine it could be used for more complicated interactivity, like looking for @username values, or automatically linking to anther project.
Again, You can find all the code on Github here: and let me know how it worked on twitter: @jpbeatty
Want To Learn More?
Try out some more of my Stimulus.js Tutorials. | https://johnbeatty.co/2018/03/27/stimulus-trix-convert-emoji-short-codes-to-unicode-characters-on-the-fly/ | CC-MAIN-2019-35 | refinedweb | 817 | 51.99 |
Composing Links
Links represent small portions of how you want your GraphQL operation to be handled. In order to serve all of the needs of your app, Apollo Link is designed to be composed with other links to build complex actions as needed. Composition is managed in two main ways: additive and directional. Additive composition is how you can combine multiple links into a single chain and directional composition is how you can control which links are used depending on the operation.
It's important to note that no matter how many links you have in your chain, your terminating link has to be last.
NOTE Future composition mechanisms like
race are being considered. If you have ideas please submit an issue or PR for the style you need!
Additive Composition
Apollo Link ships with two ways to compose links. The first is a method called
from which is both exported, and is on the
ApolloLink interface.
from takes an array of links and combines them all into a single link. For example:
import { ApolloLink } from 'apollo-link'; import { RetryLink } from 'apollo-link-retry'; import { HttpLink } from 'apollo-link-http'; import MyAuthLink from '../auth'; const link = ApolloLink.from([ new RetryLink(), new MyAuthLink(), new HttpLink({ uri: '' }) ]);
from is typically used when you have many links to join together all at once. The alternative way to join links is the
concat method which joins two links together into one.
import { ApolloLink } from 'apollo-link'; import { RetryLink } from 'apollo-link-retry'; import { HttpLink } from 'apollo-link-http'; const link = ApolloLink.concat(new RetryLink(), new HttpLink({ uri: '' }));
Directional Composition
Given that links are a way of implementing custom control flow for your GraphQL operation, Apollo Link provides an easy way to use different links depending on the operation itself (or any other global state). This is done using the
split method which is exported as a function and is on the
ApolloLink interface. Using the
split function can be done like this:
import { ApolloLink } from 'apollo-link'; import { RetryLink } from 'apollo-link-retry'; import { HttpLink } from 'apollo-link-http'; const link = new RetryLink().split( (operation) => operation.getContext().version === 1, new HttpLink({ uri: "" }), new HttpLink({ uri: "" }) );
split takes two required parameters and one optional one. The first argument to split is a function which receives the operation and returns
true for the first link and
false for the second link. The second argument is the first link to be split between. The third argument is an optional second link to send the operation to if it doesn't match.
Using
split allows for per operation based control flow for things like sending mutations to a different server or giving them more retry attempts, for using a WS link for subscriptions and Http for everything else, it can even be used to customize which links are used for an authenticated user vs a public client.
Usage
split,
from, and
concat are all exported as part of the ApolloLink interface as well as individual functions which can be used. Both are great ways to build link chains and they are identical in functionality. | https://www.apollographql.com/docs/link/composition/ | CC-MAIN-2019-18 | refinedweb | 516 | 60.65 |
Morphic. A highly unusual graphical toolkit! Originates in the Self project at Sun Self: a prototype-based programming language No classes---objects inherit/instantiate by “cloning” Self design strongly reflected in Morph curious, the global namespace is a dictionary named Smalltalk. Do Smalltalk inspect in any Workspace to get a look at it.
Q: What’s a “world”?
A: An instance of a subclass of PasteUpMorph
Q: What’s a PasteUpMorph?
A: A Morph where you can drop other morphs, and they stick---think of it as a “desktop-like” morph.
Q: When should components paint themselves?
A: Often. It’s complicated... | http://www.slideserve.com/giacomo-watts/morphic | CC-MAIN-2017-09 | refinedweb | 102 | 57.87 |
all,..
I have a Micro SD card wired up to my mbed.
Description: Mbed Pin : MicroSD Pin :
Mosi : p5 : 3
Miso : p6 : 7
CLK : p7 : 5
CS : p8 : 2
I've checked and double checked my wiring.
I've got the hello world SD card program.
#include "mbed.h"
#include "SDFileSystem.h"
SDFileSystem sd(p5, p6, p7, p8, "test"); //i, o, clk, cs, const : on the card 3,7,5,2
int main() {
printf("Hello World!\n");
FILE *fp = fopen("/test/ttt.txt", "w");
if(fp == NULL) {
error("Could not open file for write\n");
} else {
fprintf(fp, "Hello SD Card World!");
fclose(fp);
printf("Goodbye World!\n");
}
}
I've got TeraTerm so I can see whats happening.
and all I get is..
Hello World!
Not in idle state
Could not open file for write
I can write to the card with a PC, I've tried 2 different cards, I've tried 2 different mbeds, I've tried 3 different card holders (!! just in case) I've gone crazy with a multimeter to make sure everything is as it should be.
Can anyone verify that the SD Card libraries still work.
I'm half hoping something has broken them.. otherwise I've gone mad !
cheers
Dave. | https://os.mbed.com/forum/mbed/topic/202/?page=1 | CC-MAIN-2020-24 | refinedweb | 207 | 84.68 |
A class that consolidates Pyramid’s various URL-generating functions into one concise API that’s convenient for templates. It performs the same job as pylons.url in Pylons applications, but the API is different.
Pyramid has several URL-generation routines but they’re scattered between Pyramid request methods, WebOb request methods, Pyramid request attributes, WebOb request attributes, and Pyramid functions. They’re named inconsistently, and the names are too long to put repeatedly in templates. The methods are usually – but not always – paired: one method returning the URL path only (“/help”), the other returning the absolute URL (“”). Pylons defaults to URL paths, while Pyramid tends to absolute URLs (because that’s what the methods with “url” in their names return). The Akhet author prefers path URLs because the automatically adjust under reverse proxies, where the application has the wrong notion of what its visible scheme/host/port is, but the browser knows which scheme/host/port it requested the page on.
URLGenerator unifies all these by giving short one-word names to the most common methods, and having a switchable default between path URLs and absolute URLs.
Copy the “subscribers” module in the Akhet demo (akhet_demo/subscribers.py) to your own application, and modify it if desired. Then, include it in your main function:
# In main(). config.include(".subscribers")
The subscribers attach the URL generator to the request as request.url_generator, and inject it into the template namespace as url.
URLGenerator was contributed by Michael Merickel and modified by Mike Orr.
Same as the .route method.
Instantiate a URLGenerator based on the current request.
The application URL or path.
I’m a “reified” attribute which means I start out as a property but I turn into an ordinary string attribute on the first access. This saves CPU cycles if I’m accessed often.
I return the application prefix of the URL. Append a slash to get the home page URL, or additional path segments to get a sub-URL.
If the constructor arg ‘qualified’ is true, I return request.application_url, otherwise I return request.script_name.
The URL of the default view for the current context.
I’m a “reified” attribute which means I start out as a property but I turn into an ordinary string attribute on the first access. This saves CPU cycles if I’m accessed often.
I am mainly used with traversal. I am different from .app when using context factories. I always return a qualified URL regardless of the constructor’s ‘qualified’ argument.
Generate a URL based on the current request’s route.
I call pyramid.url.current_route_url. I’m the same as calling .route with the current route name. The result is always qualified regardless of the constructor’s ‘qualified’ argument.
Return a “resource URL” as used in traversal.
*elements is the same as with .route. Keyword args query and anchor are the same as the _query and _anchor args to .route.
When called without arguments, I return the same as .ctx.
Generate a route URL.
I return a URL based on a named route. Calling the URLGenerator instance is the same as calling me. If the constructor arg ‘qualified’ is true, I call pyramid.url.route_url, otherwise I call pyramid.url.route_path.
Arguments:
Keyword arguments are passed to the underlying function. The following are recognized:
If the relevant route has a pregenerator defined, it may modify the elements or keyword args.
The source code (akhet/urlgenerator.py) has some commented examples of things you can do in a subclass. For instance, you can define a static method to generate a URL to a static asset in your application, or a deform method to serve static files from the Deform form library. The instance has request and context attributes, which you can use to calculate any URL you wish. You can put a subclass in your application and then adjust the subscribers to use it.
The reason the base class does not define a static method, is that we’re not sure yet what the best long-term API is. We want something concise enough for everyday use but also supporting unusual cases, and something we can guarantee is correct and we’re comfortable supporting long-term. There’s also the issue of the static route helper vs Pyramid’s static view, or multiple Pyramid static views responding to different sub-URLs. In the meantime, if you want a static method, you can decide on your own favorite API and implement it. | http://docs.pylonsproject.org/projects/akhet/en/latest/library/urlgenerator.html | CC-MAIN-2015-48 | refinedweb | 748 | 58.69 |
Introduction: Cyborg Computer Mouse
Many studies suggest that the posture of using a conventional computer mouse can be hazardous.The mouse is a standard piece of computer equipment. Computer users use the mouse almost three times as much as the keyboard. As exposure rates are high, improving upper extremity posture while using a computer mouse is very important.
For this abstract project we will be making a wearable that allows people to move through a computer screen without the necessity of external technology. That way we could use the hands natural movements instead of clicking a device on a horizontal surface. This also allows to use screens while standing, making oral presentations more pleasant.
As for the prototype will be using the index as a joystick, the middle finger for left clicking, ring finger for right clicking and the pinky for turning on and off the device. The thumb will act as the surface where the buttons get pressed at. All of which will be added into a glove.
Supplies
- (x1) Arduino Leonardo
- (x1) Protoboard
- (x1) Joystick module
- (x3) Pushbutton
- (x20±) Wire jumpers
- (x3)Resistors of 1KΩ
- (x1) Glove sewing kit
- Velcro Hot silicone
- Wire Soldering kit
- 3D printed part
Step 1: Set Up the Hardware
We have included a Fritzing sketch for a better understanding of the design. We recommend mounting the components on a protoboard first. That way you can check that everything is working before soldering.
Attachments
Step 2: Upload the Code and Test
Once the connections are made connect the USB A (M) to micro USB B (M) from the computer to the Arduino Leonardo and upload the sketch. Feel free to copy, modify and improve on the sketch.
WARNING: When you use the Mouse.move() command, the Arduino takes over your mouse! Make sure you have control before you use the command. It only works for Arduino Leonardo, Micro or Due
Here is our code for this project:
// Define Pins
#include <Mouse.h> ; const int mouseMiddleButton = 2; // input pin for the mouse middle Button const int startEmulation = 3; // switch to turn on and off mouse emulation const int mouseLeftButton = 4; // input pin for the mouse left Button const int mouseRightButton = 5; // input pin for the mouse right int mouseMiddleState = 0;
boolean mouseIsActive = false; // whether or not to control the mouse int lastSwitchState = LOW; // previous switch state
void setup() { pinMode(startEmulation, INPUT); // the switch pin pinMode(mouseMiddleButton, INPUT); // the middle mouse button pin pinMode(mouseLeftButton, INPUT); // the left mouse button pin pinMode(mouseRightButton, INPUT); // the right) }
//LEFT // read the mouse button and click or not click: // if the mouse button is pressed: if (digitalRead(mouseLeftButton) == HIGH) { // if the mouse is not pressed, press it: if (!Mouse.isPressed(MOUSE_LEFT)) { Mouse.press(MOUSE_LEFT); delay(100); // delay to enable single and double-click Mouse.release(MOUSE_LEFT); } }
// else the mouse button is not pressed: else { // if the mouse is pressed, release it: if (Mouse.isPressed(MOUSE_LEFT)) { Mouse.release(MOUSE_LEFT); } }
//RIGHT // read the mouse button and click or not click: // if the mouse button is pressed: if (digitalRead(mouseRightButton) == HIGH) { // if the mouse is not pressed, press it: if (!Mouse.isPressed(MOUSE_RIGHT)) { Mouse.press(MOUSE_RIGHT); delay(100); // delay to enable single and double-click Mouse.release(MOUSE_RIGHT); } }
// else the mouse button is not pressed: else { // if the mouse is pressed, release it: if (Mouse.isPressed(MOUSE_RIGHT)) { Mouse.release(MOUSE_RIGHT); } }
//MIDDLE // read the mouse button and click or not click: // if the mouse button is pressed: if (digitalRead(mouseMiddleButton) == HIGH) { // if the mouse is not pressed, press it: if (!Mouse.isPressed(MOUSE_MIDDLE) && mouseMiddleState == 0) { Mouse.press(MOUSE_MIDDLE); mouseMiddleState = 1 ; //actualiza el estado del botón } }
// else the mouse button is not pressed: else { // if the mouse is pressed, release it: if (Mouse.isPressed(MOUSE_MIDDLE) && mouseMiddleState == 1 ) { Mouse.release(MOUSE_MIDDLE); mouseMiddleState = 0; } }
delay(responseDelay); }
/* reads an axis (0 or 1 for x or y) and scales the analog input range to a range from 0 to */
int readAxis(int thisAxis) { // read the analog input: int reading = analogRead(thisAxis);
// map the reading from the analog input range to the output range: reading = map(reading, 0, 1023, 0, cursorSpeed);
// if the output reading is outside from the // rest position threshold, use it: int distance = reading - center;
if (abs(distance) < threshold) { distance = 0; }
// return the distance for this axis: return distance; }
Step 3: Mounting the Prototype
The first step is sewing the velcro to the glove, you have to sew four strips of velcro one to each finger. We sewed the soft part of the velcro.
Each pushbutton has two wires, one that starts at the respective pins and connects to the positive leg of the button and another on the negative leg. At the other end of the negative wire we solder the resistances of each button plus the negative wire of the joystick to one last wire, which connects to the GND of the Arduino board. The same parallel connection works for the positive side. (3 buttons and joystick positive leg)
After soldering the jumpers we will put on the hard velcro-strips, so that the wires will get stuck in between. Lastly we thermo-glued the joystick module to a 3D printed piece. Below you can find the .STL file.
Attachments
Step 4: Start Using Your Hand As a Mouse!
Vote for us in the Assistive Tech Contest if you enjoyed the project.
Participated in the
Assistive Tech Contest
Be the First to Share
Recommendations
9 Comments
1 year ago
Great project, but unfortunately it will not work fine.
your connections of fritz and connections in code does not matched..
your Vertical/Y Axis pin of joystick connected to A2 of Arduino, but in code you define it A0.
an extra token in line 1 when include mouse.h
1 year ago
I have a question what protoboard are you using . double sided/single sided and the size. will be a great help. btw im making this for my project so it will be nice if you reply fast. :)
Reply 1 year ago
You can use Leonardo, Micro and Due boards. We usted the Leonardo R3 from the brand KEYESTUDIO (model KS0248).
1 year ago
Your Code is not working with arduino Uno micro.it showed the error on compling.
Reply 1 year ago
Bro Arduino Uno is Not Compatible with Mouse and Keyboard Functions. Only some Arduino Boards like Leonardo are Compatible. You can Google It. Also, with help of Python if you know you can use Arduino Easily. Hope this Answer Sastifies your Question
Reply 1 year ago
Hy Kamal! I had some problems copy/pasting the arduino sketch code into my instructable. Apparently some symbols didn't get pasted right. I have checked it and now it should work right. If I could I would upload the .ino document, but I think they also have a bug with that :(
Add " #include <Mouse.h>; " at the beggining of your code.
Please do not hesitate to contact me if you have any further questions.
1 year ago
Beautiful!
1 year ago
well done!
Tip 1 year ago
Great project. Another idea is to press the middle and ring finger against the palm of your hand. This way one can move the mouse and click it at the same time. | https://www.instructables.com/Cyborg-Computer-Mouse/?utm_source=newsletter&utm_medium=email | CC-MAIN-2021-43 | refinedweb | 1,206 | 71.24 |
>>.'"
Pascal (Score:4, Insightful)
Re: (Score:3, Interesting)
Java is also a very nice first language. I know personally that I loved the built in UI stuff (started in C++). Stay the crap away from Flex (no concept of threads, a lot of voodoo beneath the hood, etc).
I think started in a garbage collected space and then moving to manual memory management is a good path as well.
Re: (Score:3, Insightful)
Get a Commodore 128 emulator, and start programming a game in BASIC. That's how I started, until I realized that BASIC is too slow so then I switched to C.
Re:Pascal (Score:4, Insightful)
How about teaching people to write algorithms ? You know, the kind we used to learn on the 60s-80s, which could be used to program in ANY language (from Assembly to Pascal/Cobol/Fortran), and not today's verbalized Pascal that people call algorithm ?
That was the first "language" I learned and, having learned 20+ languages after that (including some obscure ones lines Forth and Lua), even today I thank my teacher for giving me such a good understanding of programing without relying on any specific language's concepts.
Re: (Score:3, Insightful)
I would say that learning manual memory management first is more beneficial than learning to be lazy ant let the GC do all the work, and, the day you need to use a language which doesn't clean up in your path, write memory leaking piece of feces. Of course, it means that when you need to write in java, you'll be sorely missing the freedom to decide WHEN the garbage is collected.
It's a bit like the difference between living in your mom's basement and having her come and clean up you room randomly when you're:4, Interesting)
nahhh
.. in my analogy, the GC is your mom. the programmer is the one with the smelly condoms and the used sneakers.
This is
/. after all, so I guess I should have used a car analogy :
You most definitely don't want to learn driving in THAT, but once you're used to the beetle, you'll want something fast, better
okay
.. the analogy is probably a massive fail and I'm burning Karma as if there was no tomorow here, still .. I stand to my point : start with something really easy, and when you want to play with the big kids, use a language which keeps the hand-holding to a minimum.
Re:Pascal (Score:5, Insightful)
Being a professional software developer who works in Java, and a former nightclass teacher of Java I can say without reservation that Java is NOT the best first language. Java forces you to use Object constructs from the get go. These things get in the way of communicating the principles that you want to teach. Say you want to discuss IF statements, instead of writing this:
a = 5
if a>7:
print "a is bigger than 7"
else:
print "a is smaller or equal to 7"
you need to write something like
public class Example{
public String main( String[] args ) {
int a = 5;
if( a> 7)
{
System.out.println("a is bigger than 7");
} else {
System.out.println("a is smaller or equal to 7");
}
}
How different is this? Well, first of all you have the concept of a class, which you can either gloss over or explain in full. Students don't know off the bat what is important and what isn't, so the class details obscure the point being made about conditional constructs. Second, the System.out.println lines introduce several ideas, such as Classes (again), the differences between classes and instances (objects) and what methods are. The first example is Python, the second Java. In summary, Python is a far better language to start with because you can teach concepts in isolation.
Java is still my favourite professional language, and to be sure it isn't a knarly twisted mess like C++, but it still isn't the ideal language to introduce students to.
Re: (Score:3, Informative)
For logic programming, we were taught Prolog.
Pseudo Code (Score:3, Interesting)
If they want to understand the logic of computers, then they should write pseudo code first.
Do this until they are not new programmers.
Re:Pseudo Code (Score:4, Informative)
So... you recommend Python?
Re:Pascal (Score:4, Informative)
Sorry, but I started with Pascal and then Delphi. And it blocked me. Because I always walked around C/C++, and never really learned it.
I loved it, back then. But frankly, ObjectPascal, as a language, and Delphi, (as in:) the libraries, are extremely outdated today. After years of Java, PHP, Python and Haskell, I found myself crippled by their lack of features. And only C and C++ beat it in lack of elegance. But C/C++ at least have more features.
I recommend starting out with Python, and maybe the WebDev area (which is much fun right now, with HTML5, JS, SVG, CSS3, Firebug, etc, in Firefox 3.5).
And then go straight for the full package:
C and Java (forget C++, it tries to be C and Java, but fails to beat both) on the practical side, and
Haskell and Ocaml on the fun and educational side. (With Haskell, you're in for a ride, but it is totally worth it.)
There is no walking around it, by clinging to simple languages for years. The nice thing is, that when you learn the most advanced languages, you automatically learn to program in a better way in less elegant languages (like using full OOP and functional programming styles in C and JS.)
Re: (Score:3)
Yeah and the mirror is surely the reason your face seems ugly.
Instead of blaming tools, blame your incompetence. I started with Pascal/Delphi myself, had to switch to C and C# later, doing a bit of Java right now. In no way is Pascal really lacking features and Delphi Language is pretty much as powerful, as any other modern OOP language is. The syntax is different than the syntax of a curly brace language but then again, Delphi Language comes from a different language family (Algol) so it is to be expected.
Re:Pascal (Score:4, Informative)
C and Java (forget C++, it tries to be C and Java, but fails to beat both)
I'll readily agree that C++ is an awkward and complicated language that is absolutely not suitable for beginners, but how can it try to be Java? C++ predates Java by over a decade.
Re: (Score:3, Insightful)
I think most of the comments here are missing the point. A first prepgramming language should be about doing "stuff" with as little programming drivel as possible.
Alice is a language designed for people new to the idea of programming. If your subject is under 14 it is definitely worth looking into. For someone older, stanford has a great intro to programming clas online. It is cs106 with Dr. Sahami. The first couple of lessons are with karel, and then. It eases into java.
Re: (Score:3, Interesting)
Alice is a language designed for people new to the idea of programming
There are a lot of languages designed for people new to programming, and many of them are better known and more widely supported than Alice.
Re:
Python then C/C++ (Score:3)
Nowadays I would suggest Python as firs language as it is fairly easy, clean and powerful general purpose scripting language. Then extend it with C/C++.
Just don't start with VB, PHP, Java or C# as it will screw the person up for lifetime.
Re:Python then C/C++ (Score:5, Insightful)
I strongly disagree with this. Refusing to learn a new way of doing things will screw the person up for a lifetime. But the blame for that is on the person who is now screwed up, for being lazy.
Re:Python then C/C++ (Score:4, Insightful)
Re:Python then C/C++ (Score:5, Insightful)
Nowadays I would suggest $_language_of_choice as firs language as it is $_reasons[0], $_reasons[1] and $_reasons[2] language. Then extend it with $_arbitrarily_superior_language.
Just don't start with $_other_language[0], $_other_language[1], $_other_language[2] or $_other_language[3] as it will screw the person up for lifetime.
Please, I get so tired of arguments like this.
As long as:
who cares what languages they learn? If they enjoy it and it allows them to learn how to program why should it matter what language they start out with?
Re: (Score:3, Informative)
# it has all of the fundamentals of programming (looping, flow control, data structures, variables, etc)
Looping is not fundamental to programming languages. It's an iterative construct that is not necessary in declarative languages -- and not necessary in most languages, actually.
Re: (Score:3, Interesting)
Not all 15 year olds are gamers. My first script was for an eggdrop. I also knew several young kids on IRC who scripted for mIRC. Scripting is great because it yields quick results, allows you often to see others code, and gives functional results.
With the ADD world we live in, the investment needed to see results in a 3D graphics world is going to be a hard sell.
no right answer. (Score:2)
You have to look at the source code to figure out what a Scheme program is doing? Isn't this true in.... every language? Even if the "source code" consists of little blocks you're dragging and dropping together?
how about c++ (Score:2)
In my opinion you should start ingraining the OO paradigm as soon as possible. I would say Java for its relative simplicity, but Java hides a lot of the nitty gritty details that you get exposed to when dealing with a language like C or C++. So then C++ might be a happy medium. You get exposed to all the object-oriented concepts, but are also forced to learn about memory management, linking, etc. Plus, as a little bonus, its widely used in industry. I think this can be appealing to new programmers in a
Re: (Score:2)
My first language was Q-BASIC, then javascript, PERL, then *shudder*Java*shudder* and finally PHP
Having a grasp of OO concepts makes it easier to learn other OO languages. Depending on what your trying to program your options are different(Desktop app, vs web app)
My recommendation as a first language would be PHP. The nice thing about scripted languages is no compile time! So you can write your code and test it instantly. PHP also provides nice error handling to help you debug.
Scheme is the best teaching language (Score:5, Insightful)
for the professors, that it. By removing all the syntax, etc, you can be introducing functions, lexical scope, binding, etc in the first week. Data structures and recursion in the second.
Result: most students quit by week two, and you are left with a fairly teachable remainder.
Re:Scheme is the best teaching language (Score:5, Informative)
This is very, very true. We lost a ton of kids in my 100-level programming class because they couldn't "get" Scheme.
That was a fun class. Got to learn a new language and do almost no work that semester...
Python and Pygame (Score:3, Interesting)
For a 15 year old? Python with the Pygame toolkit. There are other toolkits besides Pygame, but that one works well.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
No. New programmers should be looking at a problem and thinking "How can I solve this" not "Where can I find a third-party library or toolkit that solves this?" That doesn't teach them the language, it teaches them Google.
To be fair, implementing graphics by raw interface with the windowing system is so difficult that a newbie attempting it will give up in minutes, and giving up programming certainly does not teach the language.
I seriously don't see how a third party library to visualize your computations impairs the teaching of the language.
Re: (Score:3, Insightful)
To be fair, implementing graphics by raw interface with the windowing system is so difficult that a newbie attempting it will give up in minutes
I didn't mean they should be writing their own implementations of Pygame, I meant that new programmers shouldn't be doing anything so complex that it requires them to use anything other than the standard library. Learn the language, then play with the extras.
Re: (Score:2)
You recommend making a 15 year old who has never programed before interface directly with the low-level OS APIs?
No, I recommend making a 15 year old who has never programmed before start with;
print "Hello world";
Learning the standard library of the language, learning the language itself, not jumping into using an external toolkit.
(At least) two distinct objectives... (Score:5, Insightful)
The one is engagement/excitement/comprehensibility: If somebody is disinterested in, or hugely frustrated by, a subject on first contact, they will have minimal motivation to continue. Unless you simply plan to beat it into them, introductory material needs to grab the audience(this doesn't mean that everybody must be your audience, of course). In many cases, this means a (temporary) sacrifice of rigor or correctness; think of intro physics, where you start with simplified Newtonian scenarios, or math, where you generally start by talking about addition/subtraction/multiplication/division, not sets and number theory.
The second value is that of being correct and rigorous, or at least not impeding later development in completeness and rigor. You obviously cannot learn everything all at once; but there are some simplifications that make it easy to fill in the gaps later and others that actively retard that effort. This can happen either because the simplifications are hugely different than the reality, and harden people in the wrong ways, or because, in an attempt to be "engaging" the intro stuff promises that the subject will be "fun", "relevant", and "exciting" to everyone, every step of the way. Fact is, that isn't true. Most subjects have, at some point or another, patches of sucky grunt work. Promising people that they are precious flowers who will never have to soil their hands with such is a good way to make them drop out when they hit those patches.
"Not engineering"? (Score:2)
> After all, as Jeff Atwood puts it, 'what we do is craftmanship, not engineering...'
It would seem that Mr. Atwood has never done any engineering.
Re: (Score:2)
Assembly (Score:5, Interesting)
First, learn assembly, it teaches you how the machine works. (You should probably also learn electronics and digital logic)
Then learn C, it is the most widely used in both commercial and open source.
Then learn C++, it is a better C.
Then learn Java, it rules the web.
Then learn Python, it has some very clever ideas.
Finally...never stop learning
Re:Assembly (Score:4, Insightful)
I think you want to pick a first language with which the kids can get some fun results fairly quickly, and keep their interest. Assembly is not ideal for this. With that said, by all means teach them how a computer works, right from the start.
My dad taught us programming, back in the days when building a computer meant heating up the old soldering iron. He started with explaining what a computer does, the components (registers, memory, i/o) and the instructions you could use to tell the computer what to do. So he taught us about assembly without actually teaching the language, but even so it proved valuable to know more or less what goes on inside a computer at the lowest level. He then got us started off on Basic (which was pretty much the only option), taught loops, conditional statements, functions / subroutines etc.
Any modern langauge can teach these basics, and I think they are still a good fundament. From there you can move on to object orientation and using mroe complex libraries and APIs to access the higher functions of the machine. I actually like the idea of starting with C (or just program C-style in C++), then adding object orientation on top of that by moving on to C++. Java might be good but I have no experience there.
Another option is PHP. It is a lot less finicky; of course the OO aspects are rather poor but one should ask if they are teaching their kids programming fun, or preparing them for a career. PHP (or a similar language) is nice because it is a language well suited for building simple, active web sites. With any luck, your kids will quickly find a few ideas for websites that he can then implement himself... and nothing is better for keeping someone interested in learning to program is having their own project to complete. Once their first efforts go on-line, teach them about structure, web design, and security. Yes, by al means do code reviews, but keep it fun... show them how they can do things better.
Whatever language you pick, I'd start kids off by keeping them away from IDEs and letting them code in Notepad (do not make them use vi; be mindful of child abuse laws)and a command line compiler (where applicable), just to teach them what goes on under the hood. Once they get to a level where they will want to use more complex libraries, GUIs etc, get them an IDE.
As an afterthought, do introduce them to the whole open source thing. Nothing stimulates more than a thriving scene of fellow developers.
Re:Assembly (Score:4, Insightful)
Java's strong/weak point is its memory management. You'll never have to deal with/learn garbage collection or pointers.
I disagree with this. There is nothing more frustrating than trying to learn to program and getting compile/runtime errors from a command line.
Without an IDE: "What the hell? What did I do wrong on line 53?"
With an IDE: "What the hell? Oh! The syntax highlighting tells me I didn't define the variable, I'll remember that for next time."
Using notepad is a horrible way to write code! I'd rather start them off with the IDE to get them familiar with the language first. Once they're familiar they can move on to command line compiling/building, at that point they'll have a good enough understanding of the language to focus on the command line tools (be it gcc and make or javac and ant or whatever).
Re: (Score:3, Insightful)
Assembly as the first language? 99.99% of first learners who aren't doing a mandatory course in school would walk away.
The first language has to be PICK UP AND GO. It is allowed to be a horrible abstraction of what the computer can do. It should be limited, to not confuse with abilities. It should allow creativity to be expressed. It should be easy to see results, no complex build system, nothing inbetween the programmer, the code, and running the code.
Then when the first language is too slow, or doesn't al
Re: .
Re:Assembly (Score:4, Informative)
C is not the most widely used commercial language and it certainly does not lead the way in open source
So...What is Windows written in?
What is the Linux kernel written in?
What are KDE and Gnome written in?
Autocad...Photoshop...MS Office?
AFIK, they are all C or C++
Re: (Score:3, Insightful)
So...What is Windows written in?
What is the Linux kernel written in?
What are KDE and Gnome written in?
Autocad...Photoshop...MS Office?
AFIK, they are all C or C++
Off-the-shelf packages like these are atypical, and represent I believe less than 1% of code that is written commercially. They are dwarfed by the number of payroll systems, account management systems, workflow tracker systems, management information systems, intranets, extranets, web services, etc. Most of these are written (depending on how old.
From an HCI perspective (Score:4, Informative)
take a look at alice.org (Score:5, Interesting)
."
kulakovich
Advice from "Epigrams in Programming" (Score:2)
Alan J. Perlis said: "A language that doesn't affect the way you think about programming is not worth knowing". (personally I would remove the 'about programming' bit).
I think the same applies to a first programming language. It has to expand the learner's view of the universe. Getting a language which panders to the learner's universe does not do them any good.
Re:Advice from "Epigrams in Programming" (Score:4, Insightful)
ANY first programming language introduces new concepts. When you're starting out even something like the concept of a variable takes a little getting used to. Maybe you can relate it to memory store/recall on a pocket calculator, but with a name. Later you can introduce arrays of variables, non-numeric variables, etc.
You seem to have forgotten what it was like in the beginning to know *nothing*.
Fast and free (Score:4, Interesting)
PHP (Score:4, Interesting)
Its not consistent, its not even well designed I expect, but its a remarkably easy way to learn to manipulate a computer. Learn a bit of HTML first, some CSS, then work on OO PHP and you can accomplish a lot. People will dismiss PHP but there are a lot of very large websites built using it - ones that lots of kids will be familiar with.
Follow it up with a second language once you have gotten the basics down pat - Python is likely a very good choice.
Re:PHP (Score:5, Insightful)
I was just going to write this comment myself. The biggest advantage of PHP is that you go from zero to tangible results very quickly. No programming language is going to be interesting to teenagers if they can't instantly do something useful with it, and to a teenager, cool web stuff is the useful thing that they're most likely to be trying to do.
Re: (Score:3, Insightful)
The problem is the vast majority of PHP code in the world is bad. It can very easily teach bad habits.
I was fortunate to write my first program in 1970 (Score:2, Interesting)
I say fortunate because there was nothing in the way; no distractions like GUIs to dilute the experience. Just BASIC on the command line.
It doesn't really matter whether the youngster uses BASIC, Pascal, or XYZ. It should just be a simple language so the concept of logic and process flow is what is learned, rather than getting bogged down in arcane concepts of the language itself (or some pig of a GUI, like "Visual" this, or "Visual" that).
PS. The environment was an interactive "command line" and BA
Start simple? (Score:3, Insightful)
The best first language is anything simple that lets you jump right in and understand the basics like variables, loops, arrays, etc, without getting bogged down in an over complex or restrictive language. It doesn't need to be the worlds best language - it's to get you started. You could do far worse than start with a BASIC interpreter (instant feedback, no compiler/linker to deal with) at a very young age.
Those of use who started out at the beginning of the personal comnputer (not PC) phenomemon in the late 70's started out simple. After putting away your soldering iron, you got out pen and paper and started hand assembling machine code. And we liked it!
At the same time c.1978 a highschool math teacher and a bunch of us took adult education classes at the local university (Durham, UK), where they taught us PL/1, and some of us found a way to hang out at the university after that and started playing with BASIC, then taught ourself C. The big excitement was going from the batch-mode PL/1 class with jobs submitted on punched card decks, with printed green bar fanfold output (maybe just a syntax error) delivered some time later, to being ONLINE sitting in front of a terminal. Whoopee!.
Lua (Score:3)
There are lots of good choices for a first programming language these days, but I thought I'd chime in and suggest Lua for consideration.
Lua is free.
The Lua interactive interpreter makes exploring and learning the language a pleasure (much like the Python interactive interpreter).
There are excellent and up-to-date free tutorials for Lua available online.
Lua integrates easily with C, giving you trivial access to any low level OS features you need.
The language is a pleasure to use. It just feels right.
Give it a shot. You won't be disappointed.
:-)
Quit knocking the hacker ethic... (Score:5, Insightful)
People that knock the hacker ethic are a bunch of MBA drones that could never really build a damned thing themselves.
You learn to program by diving in and doing it. The more you practice and study, the better you get at it. GM was very good at shackling some very brilliant engineers and turning them into process drones. Look at where it got them. Great things are built by individuals and the more steps you have in the way of people being individuals, the worse you will get. Products have to be owned by the engineers that make them and they are personal works of art.
At the end of the day, the managers, bean counters, and all of these other people with their measurements, metrics and fancy charts are so much fluff, a tax on the capable in society... by really a bunch of leaches that could barely feed themselves as they lack the mental self sufficiency to do anything other than to try and ride the labor of others. We condemn socialism in society there's no real difference between the PM in a three piece suit and the lowest of the homeless people. Neither add any real value to society, its just that, the PM knows how to use PowerPoint and the homeless guy does not.
Re: (Score:3, Interesting)
This comment is amusing to say the least. I have both a CS degree and an MBA, so I see both sides.
First of all, I agree with the comment about diving in and doing. You can memorize the syntax of a language all day long, and until you apply the language to a problem, its useless.
That (and some of the stuff in your link) is about all we agree on. I read you comment and I know EXACTLY your type. You are bitter because you have barely moved through the chain. You wonder why you work so hard and make "littl
Oh brother! (Score:3, Interesting)
That (and some of the stuff in your link) is about all we agree on. I read you comment and I know EXACTLY your type.
Stop.
You really think you know my type? You, just like every of your type, have it in your head that everyone has a a great desire to be just like them, or to do what they do.
Honestly, I really don't. I have no desire to be like you at all. Like, I don't need to have people beneath me to affirm myself. I don't need to have power to be satisfied with what I have done. I have myself, my two
Re: (Score:3, Interesting)
People that knock the hacker ethic are a bunch of MBA drones that could never really build a damned thing themselves.
MBA "drones" build things all the time. You just don't understand what, because they use human beings as their construction material.
You learn to program by diving in and doing it. The more you practice and study, the better you get at it.
You learn some things, but not others. You'll never learn how to write maintainable code this way.
GM was very good at shackling some very brilliant engineer
Teach him C (Score:3, Interesting)
Buy him a copy of C for Dummies and have done with it. C is kind of like the Latin of programming, except it's easier to learn than Latin.
I would have suggested BASIC around a decade ago, but I can't think of a modern BASIC implementation that's neither horrendously complex for a new programmer or insanely outdated.
anything ... (Score:2)
The reason I exclude the above three attributes is that they lengthen the learning curve to getting a "hello world" program written and working. If a newbie's first exposure to programming is spending hours in a classroom without producing any output, the teacher will start to hear CLICKing noises after the first five minutes as the children switch off (metaphorically speaking). After that, they're l
scratch (Score:2) [mit.edu] designed explicitly for this
Re:scratch (Score:4, Interesting)
Programming in Scratch helps kids
During the directed learning that takes place in a Scratch-oriented curriculum, the teaching team can introduce another programming language to show how syntax-oriented programming languages can perform the same tasks as the graphics-oriented systems. Any programming language can serve as that second language.
I find it a bit ironic that the best language for teaching programming languages isn't a language at all.
JavaScript/HTML (Score:3, Interesting)
One not-so-obvious candidate: JavaScript and HTML.
Pretty much every browser in existence supports JavaScript, so with nothing more than a simple text editor and your browser of choice you can be off and running. As far as beginning programming is concerned, JavaScript easily encompasses any programmatic constructs you'd need.
The best part is that the students can easily display the results of their test programs in HTML, either dynamically generated or just by manipulating some divs, textboxes, tables etc that they've written on their page. Additionally, an instructor could write a 'playground' bit of HTML and JavaScript, so all output variables are bound up and easy to access. At that point the student is free to focus on what really matters, his/her first logic routines. When the student has created his first masterpiece, sharing the accomplishment with parents/peers is as simple as sharing a link to their HTML file.
I think this has the potential to engage students much faster than observing console output or fighting with a front end like windows forms in VB or Swing in Java.
Re: (Score:3, Interesting)
First, JS is what powers the web, and is both a simple language to understand the basics of, and a complicated one to continue learning from. Along with HTML it can be easy to prototype ideas and have instant results, which can be very helpful for inspiring young programmers.
Secondly, it's object-oriented in a very different and arguably more powerfu
Scheme to first year CS students (Score:4, Insightful)
I don't associate Scheme with 'the hacker ethic'. I don't strongly associate any language with hacker sensibilities. I do associate Scheme with the intellectual rigor required for a programmer who really has a clue about programming.
I think Scheme is an excellent language to teach college students who think that they know how to program because they managed to smoosh together a bunch of working PHP code and make a website. I think it is a poor language to teach high school students who are learning their first language.
The association of technical knowledge with competence irritates me. A competent painter needs to know about brushes and mixing paint, the difference between oil and acrylic, and a whole host of other technical details. But that's not how you get a good painting.
One of my reasons for feeling that Scheme is a good language for people who think they know how to program is that such people frequently know all about paint but do not have the depth of understanding to be a good painter. Scheme is a language that forces you to think about programming differently than you did before. And if you understand it you are on the path to being a good programmer rather than just a code monkey.
But I would not recommend it as a first language. I would recommend Python for that. Clean, concise, expressive and powerful. It's my favorite language for a reason.
:-)
According to research, its Algol 60 (Score:3, Informative)
In Germany, researchers into didactics (teaching) of computer science (Informatik) have done some work on this topic. I recently found it when I was looking into materials for the computer science course in the Netherlands (seeing if I could do better).
Based on 15 criteria, they ranked 27 languages, ranging from Scheme to Haskell, ADA to Ocaml. The worst language for teaching was, by far, APL (scored a 5, which is the worst), followed closely by Perl. The best language for teaching was Algol 60 (1,50). Second best Python (1,66), 3rd place Ruby (1,88) and scraping in at a 4th spot was Pascal (2,14).
So to summarize: better dust off your Algol 60 books and compilers
:P
Failing that, Python and Ruby are nice as well for just teaching programming (although if you want to show the distinction between imperative and functional programming I'm not altogether sure that Ruby would be enough).
------
This was found in a (Dutch language) PDF: [utwente.nl] (see page 8 for the German criteria, and page 9 for the results). See the original research (*) here: [subs.emis.de] (German language document)
(*): [LH02] I. Linkweiler, L. Humbert. Ergebnisse der Untersuchung zur Eignung einer Programmiersprache fÂur die schnelle Softwareentwicklung â" kann der Informatikunterricht davon
profitieren?, Didaktik der Informatik, UniversitÃt Dortmund, 2002.
language probably matters to some extent (Score:3, Informative)
I've actually had this experience. I've mentored someone from about 12 - 15. He's going to be one of the best programmers of his generation if he sticks to it.
First, I agree that at this age finding things he wants to do is more important than the specific technology. But I would argue that the technology does matter to some extent. There's a lot of time between 12 and college. Someone who spends a lot of it programming is going to get at least as much experience before college as in college. I'd like to see it go in the right direction. When I taught computer science 111, I sometimes had to tell kids who were self-taught in Basic to forget everything they knew. I'd hate to see that happen to someone who had invested lots of time.
I think it's the job of the mentor to encourage -- with a light enough touch not to discourage -- use of good programming techniques. That means talking about program structure and design, proper data structures, and any other concepts needed for what they're doing. (In my case the kid likes doing multi-threaded network services, so I had to teach him synchronization much earlier than you'd typically do that.)
I haven't programmed in Scheme, so my judgement on it is probably not reliable. I did do a lot of work in Common Lisp. While in many ways I liked Common Lisp more than more recent languages, I think a language like Java or C++ is more likely to push you to think about structure. C++ seems a bit low level. I'd be willing to accept either a high level language like Perl or Python, or something lower level like Java or Visual Basic (using the newer features of the language so that it's essentially the same as Java -- although C# might be a better choice).
For someone who is just going to be playing around there's a lot to be said for Perl/Python. But if they're going to be doing anything big enough where structure matters, I'd probably start with Java or maybe C#. Of course you can certainly start with Python and them move to Java.
In my case, the student I worked with started with Visual Basic, moved into a more structured form of Visual Basic, and then to Java. By now he's also done PHP and C++. It's also all been his choice. And in fact the real answer may be that when working with teenagers unless you want to spoil the fun there's a limit to how much you can or should actually determine what they do. So you may end up supporting them in whatever language they pick. But if someone is likely to be a professional, I'd probably try to get them into a structured language like Java fairly soon for at least some of their work.
"What we do is craftsmanship " (Score:3, Interesting)
And that there my friends is the crux of the problem. This is why we are now swimming in code bloat from every angle. Sorry to disagree with your view of the world, but *proper* programming IS engineering.
Now, to get back on topic, how about something like Squeak? Using squeak land, you can 'trick' the kids into learning a real language. Of course i learned by sitting in front of a 8080 i built from scratch from reading intel data books that i wanted to actually do something other then sit there, but i realize that isnt for everyone.
Python is also be a good starting point. Approachable, and 'legit'.
Brief history of my programming... (Score:3, Interesting)
I learnt BASIC first, on the Sinclair ZX-81 and then the Sinclar Spectrum.
Soon after that, I learned to program using Turbo Pascal, and I translated a lot of my BASIC programs to Pascal on my Dad's PC.
At school, I learned to program on a BBC Micro using BASIC and 6502 assembler and a little while after that I learned to how to program using 808[86] assembler.
C++ was the next main language I learned to use but between Turbo C++ and Turbo Pascal, I preferred Turbo Pascal, especially how well it managed dependencies and the sheer speed of compilation. I was a very loyal Borland customer, purchasing nearly every version of Turbo/Borland Pascal for DOS (excluding 5.5)...
And then I went pure 32bit programming in 1994 and left the Borland world behind... Used Watcom C++ and Virtual Pascal. Found the GNU compiler. For the last 10+ years, I have almost exclusively been programming using GNU compiler in C and C++.
To answer the poster's question: I would recommend Pascal as a good learning language for learning structured programming.
But to graduate to C and C++ after the basics are well understood and good practices have been learned.
A language with a popular forum. (Score:3, Insightful)
The best way to learn to program is through social interaction about the subject within a culture dedicated to a wide array of programming topics.
Languages like scheme, lisp, haskel, perl, ruby, python... these are often domain-focused, where it would be hard for the budding programmer to get into some areas of programming that might keep them interrested. For example, graphics or sound.
Languages like C# and VB.NET have a more generalized culture, where you could get into just about anything and actualy find other people doing the same stuff via online forums (in the old days it was through BBS's and FidoNet.)
So I recommend a general language, without much meaningfull limitations, that has one or more high traffic public forums online dedicated to it.
Scheme "write-only" ? (Score:3, Insightful)
Scheme isn't remotely write-only - instead it changes how one thinks about programming for the better. If you really want to see write-only, let me introduce you to my good friend perl without strictures.
C++ or Fortran (Score:5, Funny)
Personally I would recommend C++ or Fortran since that should quickly kill their interest in programming. And I really don't want more competition from bright young people.
I'll probably get flamed (Score:3, Interesting)
For saying this, but the answer is right there is you RTFS....Good old VB6. The VB6 IDE was nothing if not simple, you drag and drop whatever elements you wanted, go "clicky clicky" to bring bring up code view and really butt simple for making little apps. Once the kids have gotten to make their own programs for awhile THEN you can move them up to more complex languages. But nothing excites a kid more than "I made this!" and VB6 is nothing if not simple to make little apps.
And their are plenty of places like vb6.us [vb6.us] and a1vbcode [a1vbcode.com] and find tons of heavily commented code snippets to show them how to make anything from a little GUI for keeping up with videos to slot machines. And with VB6 it is beyond simple to play with code like that and see what makes it tick.
So while a lot of the "real programmers" will be screaming bloody murder because I dared to bring up old VB6, the simple fact is it is really good at the niches it was designed for. It is really good at cranking off an app quickly and it is really easy to get started with. Both are requisites IMHO for teaching kids programming. After all, if they can't see that they can make the PC "do stuff" pretty quickly they are gonna get bored and not want to continue long before they get to the good stuff. And with VB6 you can have it popping off little message boxes and other little tricks within the first hour of picking it up.
Re:best first language? (Score:5, Funny)
Assembler (Score:3, Interesting)
Re:Assembler (Score:5, Funny)
In before old people telling you about punch cards.
Come to think of it, is there a way to do calculations with kids on/off your lawn?
Re: (Score:3, Insightful)
Re: (Score:3, Funny)
This was before lawns were invented.
Re:Assembler (Score:5, Insightful)
It teaches you how a computer really works. That way you can become a 'real' programmer instead of an IDE user.
Really? Seriously?
I don't think assembler is the best way to instill the magic and excitement of getting the most complex machine in your house to do what you want it to. And, that's what a fifteen year old newb needs. If you start with assembler, you're assuring that it will be months before he has learned enough to be able to take a program that he's written to a friend or parent, and have that person say, "Cool!". And, it will be even longer before he can use even a fraction of the modern technology that computers now have; things like GUI's, and networking. More often than not, it will only cause frustration.
Back in the day, assembler might have been the right option. But today, I think that's a recipe for killing that spark of creativity and excitement that draws people into programming, and gets them to slog through the nitty-gritty stuff.
Re: (Score:3, Insightful)
Since we're talking about learning languages here, I think we need to remember to balance "excitement of programming" with actual learning.
Assembler is a terrible first language because it doesn't really teach programming so much as learning how a particular CPU works. You could theoretically get away with something like MIX because it's just a simple emulation of assembly, but real assembly language programming is something that really shouldn't be attempted until normal programming is learned.
On the other
Re: (Score:3, Informative)
BASIC is another good choice
No it's not, it's a godawful choice.
I don't know if you meant old-school 8-bit-style BASIC or Visual Basic.
If you meant the former then, WTF? There were technical reasons why it was popular and (to some extent) its use was justified on early microcomputers. These technical issues no longer apply, and it's entirely unsuited to programming on a modern scale.
And traditional Basic was *notorious* for fostering bad programming habits; I for one certainly suffered from that, and I really wish I'd used more
Re: (Score:3, Informative)
If it's Visual Basic, then I'm certainly not convinced that for someone coming to it from scratch it's ultimately any easier to learn than (e.g.) C# or other C/C++ derived languages. It retains much of the syntactical clunkiness of old-fashioned basic, and its syntax isn't used much anywhere else, closing off the leveraging effect you get with C-style languages- learn one of those, and you partly know the others. Ironically, this will also make other languages appear more intimidating and locking that person into Visual Basic further.
Agreed, At my college they thought us VB.Net as introduction to OOP programming. We also did C++ and you could do C#. Reason they did
.Net is while it wasn't technical school, they did want to teach us a language that was in wide use. However, there is really no other language with similar syntax so when you did some C++, you got to relearn syntax ALL OVER AGAIN.
If you want to teach someone something marketable for first language and there are plenty of
.Net jobs out there, C# is much better choice.
Re:Assembler (Score:5, Interesting)
I agree that Visual Basic is as bad a choice for a first language as any other complex programming platform.
What made old skool BASIC good was that it was limited in ability. Admitted data structures were limited to arrays, which was a problem. However a medium-complexity basic like Blitz Basic 2 on the Amiga allowed the creative side to be expressed, without having to wade through complex APIs like you would with a modern language.
And the best way to learn programming to a young person (under 16) is to allow their ideas to be expressed and implemented, be that writing your first football league tracking application, to a simple game, to a text adventure, and so on. If that means using BASIC, e.g., RealBasic, then so be it. It needs to be pick-up-able.
I bet there are people saying Haskell and ML on this thread, for some academic reasons. The last thing a young person wants to be doing is learning how to manipulate data structures, functionally, with all the brain-fuckery that involves, and only to get a sorted list at the end. That isn't exciting, it's not even something to be slogged through, it's tedious and will actually put them off, totally.
10 Print "I am god!" : goto 10
run
instant result.
It's sad that computer magazines don't have programming in them any more, unlike the 80s. Game type-ins promised rewards to typing, and learning was osmotic.
Re: (Score:3, Insightful)
Assembly teaches you 'how to bake a cake'. So does BASIC and Pascal and C.
The problem for me was moving from such basic programming to high-level modeling using (in my view) a much better system of OO design.
Start them with Objects... I had a hard time getting into OO programming because I started with a very low level language.
Assembler: start there and stay there (Score:4, Interesting)
Start them with Objects... I had a hard time getting into OO programming because I started with a very low level language.
I started on Fortran. It was horrible. Then I got a home computer with BASIC and advanced to assembly language.
25 years later, I still am at the assembly language stage for programming. But I use different processors and tools. The language is not as important as the tools that support that language. Visual BASIC is great because it gives a simple easy-to-use way to create programs in a Windows environment. Its structural limitations are irrelevant. It is the cost and sophistication of the development tools that is more important.
Now that you can buy a microcontroller for $1.50 that has more power than the original IBM PC, development tools like IDEs are the most important consideration. Computer science was important when computers cost a million dollars: it is meaningless today when they cost a few dollars.
I detest C because I can't debug it on the IDE that controls my $1.50 microcontroller. I can read it and write it fine. I can work with it fine. But I hate it because it's too abstract. I have no idea of what exactly the CPU is doing.
OOP is just science fiction; it's advantages are imaginary. If your application is so advanced and complex that you need OOP to create a program to do it, then it's time to completely rethink the idea of what a computer does.
Computer science is the process of reconfiguring complex concepts to fit into the limitations of the machine. Computer science becomes irreverent when you realize that the more complex the problem, the easier it is to solve by redesigning the computer to fit the problem. Not reducing the problem into small enough processes that will fit into the machine.
It is cheaper and faster to design a custom arrangement of 1000 $1.50 microprocessors to match the needs of complex problem than it is to write and debug the software that will 'solve' the problem on a $5000 standard Von Neumann computer. Microcontroller programmers are cheap and easy to recruit: OOP software development teams are expensive.
This is the new reality of the 21st-century. OOP is the last gasp of the 'big iron' boys.
Re: (Score:3, Funny)
Then maybe they should start them with Lisp instead?
:)
Then we'll have lots of programmerth all over the plathe. As if nerdth didn't thuffer from an image problem as it ith.
Re: (Score:2)
I wouldn't recommend Assembly. Most of the "under the hood" things are not the job of the programmer anymore. That's why we have compilers.
If I could choose now, I'd learn Python first, for basic algorithmic programming, followed by C, to get a grip on what's really happening at runtime. After that, you're not dependent on language anymore.
Why do I think C is important, you ask? Read on [joelonsoftware.com].
Re: (Score:2, Insightful)
I wouldn't recommend Assembly. Most of the "under the hood" things are not the job of the programmer anymore. That's why we have compilers.
And that's why we have so much of shitty inefficient code around. Even when you program in a high-level language, you still have to realize how the code you write works on the machine level. I've seen PHP programmers throwing around calls to array_diff/array_unique, chaining them without mercy, without thinking about performance - because they think that those function are some magic black boxes and never consider a performance hit. "Oh, it's a C function, C is fast anyway from what I've heard". Like a good
Re:Assembler! (Score:4, Insightful)
a good programmer should know all the chain, from Java/Python/Scheme/Whatever down to the machine code.
Yes, but you don't start with assembly language. You start with something conceptually simple, like Python. I started with Basic on the Commodore 64. Before a year was up, I was doing shit in 6502 assembly because interpreted Basic was too slow. Not a chance in hell I could've picked up assembly straight off without some understanding of a higher level language. Throwing assembly at someone is like throwing a pile of parts and fasteners at someone and telling them to build a combine harvester.
Re: (Score:3, Insightful)
Most of the people posting here really haven't grasped (1) and (2).
Assembly as a first language is ridiculous, yet so many are arguing for it.
Not only is it irrelevant today apart from microcontrollers, which they might get a job programming in 10-15 years time (assuming they're young now), but it will be incredibly frustrating.
The student has to come to the decision to use C, Assembler, etc, themselves, when they decide they have to in order to realise their vision for whatever they're programming. I.e., t
Re: (Score:3, Insightful)
Re performance, you're describing algorithms.
I've programmed for 27 years, in a great many languages, starting with BASIC and several assembly languages at the beginning.
Honestly, I didn't learn properly about algorithms and algorithmic complexity until I had the chance to learn higher level languages, which weren't available on home computers when I started.
I *did* learn a lot about raw efficiency, from counting cycles and memory accesses and instructions; things like that. I learned how to write really f
Re:Assembler! (Score:5, Insightful)
The problem with assembler, little Anonymous Coward, is that it doesn't let you do anything without a significant amount of work, and what you can do is unlikely to impress a fifteen-year-old kid just getting into programming.
Being able to print a few lines to the screen won't impress that kid and make him want to keep programming. Give him a language that can easily create GUIs, so he can see his stuff in action. To do this, I'd recommend an object-oriented language, maybe Python (though I personally detest it) or C# (which is a very nice language with very nice tools).
Re:Lisp/Scheme != latin of programming (Score:4, Informative)
in fact Lisp itself is built in C
Errr...no. Lisp originally dates back to the late 1950s; C didn't emerge until the early 1970s. The first working Lisp implementation was writtien in IBM 704 machine language; A Lisp compiler (itself implemented in Lisp) was implemented in 1962 - fully 10 years before the birth of C.
Re:Python - flamebait (Score:3, Insightful)
Python is better than Perl because for beginners would takes weeks just to learn all the different possibilities
You don't need to learn them all - you just need to learn ONE to start with, then others can be added as the newbie's level of competence grows. The biggest barrier to programming is a steep learning curve - too much time spent before something tangible can be produced. Any language that lets someone just type stuff, then press "run" is a good start - maybe even Python.
The point of IntroTo Programming courses is (Score:5, Insightful)
The point of IntroTo Programming courses is to instill a comprehension and sense of awe at the ability to control the actions, operations, and functions of physical machinery by using symbols, that are non-physical. It is by using your brain to amplify your body (robotics) or by using your brain to build and control a machine that can vastly amplify your brain (a pocket calculator).
Intro To Programming needs to skip language and process at the beginning and first teach how electricity can be used to create and manipulate symbols. This is multi-stage process that teaches how to use electricity to represent binary numbers, then using binary numbers to represent decimal numbers, using decimal to represent CPU instructions, using instructions to make programs, and using programs to control machines that amplify the users physical and mental abilities. And finally, how to use the imagination to create new structures of symbols to create programs.
Install the sense of god-like awe at the ability to manipulate physical reality by rearranging symbols, and the mundane details of language structure are of minor importance to both the student and the teacher.
Re: . | http://tech.slashdot.org/story/09/07/25/1224247/the-best-first-language-for-a-young-programmer?sdsrc=rel | CC-MAIN-2014-52 | refinedweb | 9,349 | 71.04 |
XmlSlurper parsing a query result
I have a query that bring back a cell in my table the has all xml in it. I have it so I can spit out what is in the cell without any delimiters. Now i need to actually take each individual element and link them with my object. Is there any easy way to do this?
def sql def dataSource static transactional = true def pullLogs(String username, String id) { if(username != null && id != null) { sql = new Sql(dataSource) println "Data source is: " + dataSource.toString() def schema = dataSource.properties.defaultSchema sql.query('select USERID, AUDIT_DETAILS from DEV.AUDIT_LOG T WHERE XMLEXISTS(\'\$s/*/user[id=\"' + id + '\" or username=\"'+username+'\"]\' passing T.AUDIT_DETAILS as \"s\") ORDER BY AUDIT_EVENT', []) { ResultSet rs -> while (rs.next()) { def auditDetails = new XmlSlurper().parseText(rs.getString('AUDIT_EVENT_DETAILS')) println auditDetails.toString } } sql.close() } }
now this will give me that cell with those audit details in it. Bad thing is that is just puts all the information from the field in on giant string without the element tags. How would I go through and assign the values to a object. I have been trying to work with this example with no luck since that works with a file.
I have to be missing something. I tried running another parseText(auditDetails) but haven't had any luck on that.
Any suggestions?
EDIT:
The xml int that field looks like
<user><username>scottsmith</username><timestamp>tues 5th 2009</timestamp></user>
^ simular to how it is except mine is ALOT longer. It comes out as "scottsmithtue 5th 2009" so on and so forth. I need to actually take those tags and link them to my object instead of just printing them in one conjoined string.
Answers
Just do
auditDetails.username
Or
auditDetails.timestamp
To access the properties you require
Need Your Help
C++ exception not caught in XCode iOS project
c++ objective-c xcode exceptionI created a C++ library. I am using it in an iOS application. I thought of handling exceptions in C++ library and in order to test this I created a test scheme and called the c++ function from it. ... | http://unixresources.net/faq/16529500.shtml | CC-MAIN-2019-09 | refinedweb | 355 | 67.35 |
Xray – Yes it does AS3 :)
Well, for a while now, people have been asking me about using Xray with ActionScript3, so I thought I’d clear up the confusion with one post about it.
Yes – there is a version for AS3. That’s the good news. The bad news is, it doesn’t have all the bells and whistles that AS2 version had. It DOES do introspection of your objects at runtime with a snapshot, and the logger is as strong as ever. There’s also a Flex specific Xray class that works rather well.
The nice thing is, the logger and Xray work the same in either the Flash IDE, pure ActionScript project, Flex application or Air application. It doesn’t have an AS3 component yet, but it works very easily with a couple lines of code just the same way.
Let’s take a look at doing 2 things: 1) setting up an Xray instance for introspection and 2) just using the logger.
Xray Instance
To create an instance of Xray, you simply add the import to your class, then create an instance:
import com.blitzagency.xray.inspector.Xray;
// later in your property declarations:
public var xray:Xray = new Xray();
addChild(Xray); // you have to add it to the stage in a pure ActionScript application or Flash IDE app – thanks Dave!
// for you Flex users – it works for Flex2 or Flex3 or Air
import com.blitzagency.xray.inspector.flex.Flex2Xray;
public var xray:Flex2Xray = new Flex2Xray();
Yay me! we’re done and you can use the Flex Xray interface to take a look at your application.
Xray’s Logger
Xray’s logger has the same log levels that most do these days: debug, info, warn, error and fatal. To create an instance of the Xray logger, you simply add the import, then create an instance like so:
import com.blitzagency.xray.logger.XrayLog;
// later in your property declarations:
public var log:XrayLog = new XrayLog();
// later, when you want to log something:
log.debug(“StringMessage”, object [, object, object...]);
log.info(“StringMessage”, object [, object, object...]);
log.warn(“StringMessage”, object [, object, object...]);
log.error(“StringMessage”, object [, object, object...]);
log.fatal(“StringMessage”, object [, object, object...]);
You pass a string message as the first argument, then send as many objects as you like after that. The output is nice in that it includes a time stamp as well as the class and line number that made the log call:
(2793) com.electricsheepcompany.aspen.spaces::SpaceCreator/handleGroupLoaded()[/Users/NeoRiley/Documents/Infrared5/clients/clientName/source/johnBranch/src/1.0_jgrden/src/main/flex/com/clientName/clientAppName/spaces/SpaceCreator.as:102
GroupTest: group load complete
As you can see, you get the calling Class fine and line number. Nice huh?
Where do I get the code?!?
Glad you asked! It’s hosted on Google code and you can check it out with any SVN client at this URL:
And the snazy Xray Interface app??
You can download the SWF based Xray app here
The future of Xray?
As usual, I’m sure Xray’s development will be driven by need on projects and by people who request features. If anyone cries loud enough, I’m sure I’ll do what I can. As I come to need more features that AS2 used to support, I’ll get around to implementing them. But for now, it works well in AS3, and does what I need it to do, which is introspection and the ability to modify properties at runtime to see what’s going on with the application. Much of my work these days is being a consultant on new gigs every couple of months and Xray really helps me cut through the felgercarb in mere sentons.
Also, there’s XrayViewer by Marc Hughes
Marc’s the one that did the work on the filter component for Xray’s log output, and came up with a really VERY cool AIR app that lets you load a SWF and inspect it ;) | http://rockonflash.wordpress.com/category/as3/page/2/ | CC-MAIN-2014-41 | refinedweb | 662 | 71.24 |
README
¶
go-testing-interface
go-testing-interface is a Go library that exports an interface that
*testing.T implements as well as a runtime version you can use in its
place.
The purpose of this library is so that you can export test helpers as a
public API without depending on the "testing" package, since you can't
create a
*testing.T struct manually. This lets you, for example, use the
public testing APIs to generate mock data at runtime, rather than just at
test time.
Usage & Example
For usage and examples see the Godoc.
Given a test helper written using
go-testing-interface like this:
import "github.com/mitchellh/go-testing-interface" func TestHelper(t testing.T) { t.Fatal("I failed") }
You can call the test helper in a real test easily:
import "testing" func TestThing(t *testing.T) { TestHelper(t) }
You can also call the test helper at runtime if needed:
import "github.com/mitchellh/go-testing-interface" func main() { TestHelper(&testing.RuntimeT{}) }
Versioning
The tagged version matches the version of Go that the interface is
compatible with. For example, the version "1.14.0" is for Go 1.14 and
introduced the
Cleanup function. The patch version (the ".0" in the
prior example) is used to fix any bugs found in this library and has no
correlation to the supported Go version.
Why?!
*Why would I call a test helper that takes a testing.T at runtime?
You probably shouldn't. The only use case I've seen (and I've had) for this is to implement a "dev mode" for a service where the test helpers are used to populate mock data, create a mock DB, perhaps run service dependencies in-memory, etc.
Outside of a "dev mode", I've never seen a use case for this and I think
there shouldn't be one since the point of the
testing.T interface is that
you can fail immediately.
Documentation
¶
Index ¶
- type RuntimeT
- func (t *RuntimeT) Cleanup(func())
- func (t *RuntimeT) Error(args ...interface{})
- func (t *RuntimeT) Errorf(format string, args ...interface{})
- func (t *RuntimeT) Fail()
- func (t *RuntimeT) FailNow()
- func (t *RuntimeT) Failed() bool
- func (t *RuntimeT) Fatal(args ...interface{})
- func (t *RuntimeT) Fatalf(format string, args ...interface{})
- func (t *RuntimeT) Helper()
- func (t *RuntimeT) Log(args ...interface{})
- func (t *RuntimeT) Logf(format string, args ...interface{})
- func (t *RuntimeT) Name() string
- func (t *RuntimeT) Parallel()
- func (t *RuntimeT) Skip(args ...interface{})
- func (t *RuntimeT) SkipNow()
- func (t *RuntimeT) Skipf(format string, args ...interface{})
- func (t *RuntimeT) Skipped() bool
- type T
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type RuntimeT ¶
type RuntimeT struct { // contains filtered or unexported fields }
RuntimeT implements T and can be instantiated and run at runtime to mimic *testing.T behavior. Unlike *testing.T, this will simply panic for calls to Fatal. For calls to Error, you'll have to check the errors list to determine whether to exit yourself.
Cleanup does NOT work, so if you're using a helper that uses Cleanup, there may be dangling resources.
Parallel does not do anything.
type T ¶
type T interface { Cleanup(func()) Error(args ...interface{}) Errorf(format string, args ...interface{}) Fail() FailNow() Failed() bool Fatal(args ...interface{}) Fatalf(format string, args ...interface{}) Helper() Log(args ...interface{}) Logf(format string, args ...interface{}) Name() string Parallel() Skip(args ...interface{}) SkipNow() Skipf(format string, args ...interface{}) Skipped() bool }
T is the interface that mimics the standard library *testing.T.
In unit tests you can just pass a *testing.T struct. At runtime, outside of tests, you can pass in a RuntimeT struct from this package. | https://pkg.go.dev/github.com/mitchellh/go-testing-interface?utm_source=godoc | CC-MAIN-2022-40 | refinedweb | 607 | 68.87 |
Lee,
I have no idea if you've received any replies as this list replies directly to the sender.
I run exactly the same configuration as you are trying to get going.
Issue A - I don't understand what you're saying here.
Issue B - you need a new line in Moin.py - "sys.path.insert(0, r'c:\moin\lib\site-packages')"
Issue C - See B
Issue D - This is a reported bug, change that line to read "
if os.name == 'posix' and os.getuid() == 0
:
" so that the os check happens first.
Craig
-----Original Message-----
From:
moin-user-admin@lists.sourceforge.net [mailto:moin-user-admin@lists.sourceforge.net]
On Behalf Of
Lee James
Sent:
26 September 2005 05:05 PM
To:
Moin-user@lists.sourceforge.net
Subject:
[Moin-user] Sources of advice while setting up a moinmoin
I am in the process of installing moinmoin. As an initial attempt I am installing it on my personal LAN as a standalone wiki. The specific machine is running Windows 2000 and already has Python 2.4 installed at C:\Python24. Here is what has transpired so far:
following
downloaded the install file and unpacked it into:\work\JOBS\2005Applications\OTI\wiki\moin-1.3.5
needed to upgrade WinRAR before this was successful.
command line install appears to have worked, install.log contains 814 entries
all entries installed under C:\moin
predicted message about the search path did not appear
added c:\moin to the python path
No messages in response to
import MoinMoin
appears that the MoinMoin source code is located in C:\moin\Lib\site-packages\MoinMoin\
Note that the predicted
python2.4
is missing issue A
appears that the templates are located in C:\moin\share\moin\
appears that some scripts that help to use the MoinMoin shell commands are located in C:\moin\Scripts
following
Not sure I found the 'dedicated pages' being referred to
assuming this refers to the Installation scenarios section of
.
for first instance will use the name of
trial
and be at
C:\moin\wikis\trial
PREFIX=C:\moin
SHARE=$PREFIX\share\moin
WIKILOCATION=$PREFIX\wikis
INSTANCE=trial
copied the relevant files to C:\moin\wikis\trial
permissions set so the directory is not shared, but everyone on this machine has access. This is the windows default.
editing wikiconfig.py
changing line 36 sitename = u'Untitled Wiki' to sitename = u'Trial Wiki'
changing line 83 #acl_rights_before = u"YourName:read,write,delete,revert,admin" to acl_rights_before = u"lee:read,write,delete,revert,admin"
see
changing line 96 mail_smarthost = "" to mail_smarthost = "mail.storm.ca"
looking up SMTP server: Outlook Express -> Tools -> Accounts: pop.storm.ca -> properties: servers
changing line 99 mail_from = "" to mail_from = "Wiki Trial
"
following
copied moin.py from C:\moin\share\moin\server to C:\moin\wikis\trial
editing moin.py
changing line 15 sys.path.insert(0, '/path/to/wikiconfig') to sys.path.insert(0, 'C:/moin/wikis/trial')
changing line 32 docs = '/usr/share/moin/htdocs' to docs = 'C:/moin/share/moin/htdocs'
changing line 44 interface = 'localhost' to interface = ''
changing line 49 ## logPath = 'moin.log' to logPath = 'moin.log'
try running by double-clicking moin.py
failed to find MoinMoin.server.standalone to import from issue B
moved MoinMoin from C:\moin\Lib\site-packages to C:\Python24\Lib\site-packages
try running by running moin.py from cmd window issue C
File "C:\moin\Lib\site-packages\MoinMoin\server\standalone.py", line 485, in run
AttributeError: 'module' object has no attribute 'getuid'
copied MoinMoin back so it appears as both C:\moin\Lib\site-packages and C:\Python24\Lib\site-packages
try running by running moin.py from cmd window and by doulble clicking
File "C:\moin\Lib\site-packages\MoinMoin\server\standalone.py", line 485, in run
if os.getuid() == 0 and os.name == 'posix': issue D
AttributeError: 'module' object has no attribute 'getuid'
I am left with several confusions:
A: I presume that if I had followed the instructions for downloading Python, instead of already having it installed, I would have had the default directory structure
B: given that c:\moin is in the pythonpath, why would python be unable to find this file?
C: how is it the error is being reported in a file that does not exist where it is being reported to be. Is this a bug? In Python or MoinMoin?
D: getuid is only defined for UNIX systems apparently. How did I get to be executing Unix code in a windows installation. Is this a MoinMoin bug?
I hope someone can help de-confuse me on some or all of these issues!
Thanx
Lee James
8161 Fallowfield Road
Ashton ON K0A 1B0
Telephone: 613 253-6154
lee.james@ieee.org
Please note that Edcon's e-Mail addresses have changed. Please update your address book with the new e-Mail address as displayed in the "From:" field.
------------------Edcon Disclaimer -------------------------
This email is private and confidential and its contents and attachments are the property of Edcon. It is solely for the named addressee. Any unauthorised Edcon, the company does not accept any responsibility for the contents of this email or any opinions expressed in this email or its attachments .If you are not the named addressee please notify us immediately by reply email and delete this email and any attached files.Due to the nature of email Edcon cannot ensure and accepts no liability for the integrity of this email and any attachments, nor that they are free of any virus.Edcon accepts no liability for any loss or damage whether direct or indirect or consequential, however caused, whether by negligence or otherwise, which may result directly or indirectly from this communication or any attached files. Edgars Consolidated Stores LTD ,Post office box 200 Crown Mines, Telephone: (011) 495-6000 | http://sourceforge.net/p/moin/mailman/attachment/D9724EA89D806C48ADFA732EF3BE3CFC858B38%40ORMEXVS01.production.fr-prod.edcon.co.za/1/ | CC-MAIN-2015-48 | refinedweb | 969 | 56.45 |
#include <vtkCgShader.h>
vtkCgShader is the only class that interfaces directly with the Cg libraries. Once is has a valid shader described by a vtkXMLDataElement it will create, compile, install, and initialize the parameters of a Cg hardware shader.
Definition at line 106 of file vtkCgShader.h.
Reimplemented from vtkShader.
Definition at line 110 of file vtkCgShader.h.
Internal method don't call directly. Called by Cg erro callback to report Cg errors.
Called to pass VTK actor/property/light values and other Shader variables over to the shader. This is called by the ShaderProgram during each render. We override this method for Cg shaders, since for Cg shaders, we need to ensure that the actor transformations are pushed before state matrix uniform variables are bound.
Reimplemented from vtkShader.
Establishes the given texture as the uniform sampler to perform lookups on. The textureIndex argument corresponds to the indices of the textures in a vtkProperty. Subclass may have to cast the texture to vtkOpenGLTexture to obtain the GLuint for texture this texture. Subclasses must override these and perform GLSL or Cg calls.
Definition at line 170 of file vtkCgShader.h. | https://vtk.org/doc/release/5.4/html/a00258.html | CC-MAIN-2022-21 | refinedweb | 189 | 59.9 |
I am having a lot of trouble finding the largest number in a binary search using recursion. The book gives me some basic code on how to search on, but not how to find the largest number in one. The question asks me to implement this a helper function but to start, I just want to get the inner workings down.
Say that i had an array of {1,6,8,3} and i had to use recursion to find the largest number in it.
this is the example that was given to search for an value in the array that was already given.
the book givesthe book givesCode:
#include<iostream>
using namespace std;
int binarySearch(const int anArray[], int first, int last, int value)
{
int index;
if (first >last )
index =-1;
else
{ //invariant : If value is in AnArray,
// anArray[first] <= value <= anArray[last]
int mid = (first + last)/2;
if ( value==anArray[mid])
index = mid; //value found at mid;
else if (value <anArray[mid])
// point x
index = binarySearch(anArray, first, mid-1, value);
// keep in mind that mid-1 become the "last" in the next recursive call
else
//point y
index = binarySearch(anArray, mid+1, last, value );
// keep in mind that "mid+1" will be first in the next recursive call
// as in (first+last)/2, the value at mid+1 is now firsts.
} // end if
return index;
}//end binarySearch
int main()
{
int first=0;
int last=7;
int value=29;
int anArray1[]={1,5,9,12,15,21,29,31};
binarySearch(anArray1, first, last, value);
//cout << value << endl ;
system("pause");
return 0;
}
this as in what it wants in the example.
if( anArray has only 1 item)
maxArray(anArray) is the item in the array
else if (anArray has more than 1 item)
maxArray(anArray) is the maximum of
maxArray(left half of anArray) and
maxArray(right half of anArray)
any suggestions? | http://cboard.cprogramming.com/cplusplus-programming/105700-finding-largest-number-binary-search-printable-thread.html | CC-MAIN-2014-15 | refinedweb | 313 | 53.38 |
Created on 2007-11-10 05:58 by gvanrossum, last changed 2007-11-10 22:13 by gvanrossum.
Here's an implementation of the idea I floated recently on python-dev
(Subject: Declaring setters with getters). This implements the kind of
syntax that I believe won over most folks in the end:
@property
def foo(self): ...
@foo.setter
def foo(self, value=None): ...
There are also .getter and .deleter descriptors. This includes the hack
that if you specify a setter but no deleter, the setter is called
without a value argument when attempting to delete something. If the
setter isn't ready for this, a TypeError will be raised, pretty much
just as if no deleter was provided (just with a somewhat worse error
I intend to check this into 2.6 and 3.0 unless there is a huge cry of
dismay. Docs will be left to volunteers as always.
Looks great (regardless of how this is implemented). I always hated this
def get_foo / def set_foo / foo = property (get_foo, set_foo).
propset2.diff is a new version that improves upon the heuristic for
making the deleter match the setter: when changing setters, if the old
deleter is the same as the old setter, it will replace both the deleter
and setter.
This diff is relative to the 2.6 trunk; it applies to 3.0 too.
propset3.diff removes the hack that makes the deleter equal to the
setter when no separate deleter has been specified. If you want a
single method to be used as setter and deleter, write this:
@foo.setter
@foo.deleter
def foo(self, value=None): ...
Fixed a typo:
+PyDoc_STRVAR(getter_doc,
+ "Descriptor to change the setter on a property.");
^^^
Checked into trunk as revision 58929. | http://bugs.python.org/issue1416 | crawl-002 | refinedweb | 289 | 74.9 |
:
Medium
PREREQUISITES:
Trees, BFS/DFS, memoization, tree diameter
PROBLEM:
For every edge in a weighted tree, compute the diameters of the resulting subtrees if the edge is removed.
QUICK EXPLANATION:
For every edge (a,b):
- Let L(a,b) be the length of the edge.
- Let H(a,b) be the height of the tree rooted at b if the edge (a,b) is removed, plus L(a,b).
- Let D(a,b) be the diameter of the tree rooted at b if the edge (a,b) is removed.
We have the following formulas:
where:
- c_1 ot= a is the neighbor of b with the maximum D(b,c)
- d_1 ot= a is the neighbor of b with the maximum H(b,d).
- d_2 ot= a is the neighbor of b with the second maximum H(b,d).
Note that H(a,b) is not the same as H(b,a)! Similarly, D(a,b) is not the same as D(b,a). But there are at most 2(N-1) valid arguments to H(a,b) and D(a,b), two for each edge, so they can be memoized.
Be careful that your code doesn’t run in O(N^2) if there’s a node with a high degree!
EXPLANATION:
Diameter of a tree
Let’s first discuss how to compute the diameter of a tree, without removing any edge. Recall that the diameter of a tree is the length of its longest path. Actually, the word “diameter” can also refer to the longest path itself, but hopefully it will be clear from context which sense of the word is being used.
In problems involving unrooted trees, it’s usually beneficial to root the tree at some node, because rooting the tree gives us a place where to start the computation. So in our case, let’s root the tree at some arbitrary node, say node 1.
By rooting the tree, we can now distinguish two cases for the diameter: Either it passes through the root or not.
- If it doesn’t pass through the root, then this diameter is completely contained in some subtree. So it must also be a diameter of some subtree, because the diameter of a subtree cannot be longer than the diameter of the whole tree.
- If it passes through the root, then the path consists of an upward path and a downward path, which meet at the root. But for this to be the longest path, both parts must be as long as possible. This means they must start/end at some leaf (so their lengths must be the heights of the corresponding subtrees plus an additional edge towards the root), and also they must be the longest two such paths.
We can now formalize these cases into an algorithm to compute the diameter. We introduce some notation. For every edge (a,b), let L(a,b) be the length of the edge. Also,
- Let H(b) be the height of the tree rooted at b.
- Let D(b) be the diameter of the tree rooted at b.
Then we have the following formulas:
where
- c_1 is the child of b with the maximum D(c).
- d_1 is the child of b with the maximum L(b,d) + H(d).
- d_2 is the child of b with the second maximum L(b,d) + H(d).
This can easily be implemented as a recursive procedure that starts at node 1! This runs in linear time in the input.
As an additional note, in some languages this will trigger stack overflow because the recursive calls may go too deep. We’ll discuss that later.
Edge removals
Now, what happens if we remove an edge? If we continue considering the tree as rooted on node 1, then computing the diameter of one of the trees is easy, because it is just D(b). However, the other tree’s diameter isn’t as easy to compute.
It would be nice if we can select which root we want any time. For example, suppose we remove the edge (a,b). Then it would be nice if we can have a and b be the roots of the two resulting trees, so that we can perform a similar traversal above. Specifically, let’s generalize our H(b) and D(b) functions above. If (a,b) is an edge, then:
- Let H(a,b) be the height of the tree rooted at b if the edge (a,b) is removed, plus L(a,b).
- Let D(a,b) be the diameter of the tree rooted at b if the edge (a,b) is removed.
H(a,b) and D(a,b) satisfy formulas similar to the ones above:
where:
- c_1 is the neighbor of b that is not equal to a and has the maximum D(b,c).
- d_1 is the neighbor of b that is not equal to a and has the maximum H(b,d).
- d_2 is the neighbor of b that is not equal to a and has the second maximum H(b,d).
Notice that we adjusted the H(a,b) function to also include L(a,b) so the formulas simplify a bit.
Now, suppose we remove the edge (a,b). Then the diameters of the resulting trees are D(a,b) and D(b,a). If we try to compute these two, we would need O(N) time to compute everything. But there are N-1 edges, so this will take O(N^2) time, which is too slow.
The key here is to notice that since there are N-1 edges, there are only 2(N-1) possible arguments to the functions H(a,b) and D(a,b). Also, we only need to compute each H(a,b) and D(a,b) once. This suggests that a memoization approach might work. Specifically, we compute H(a,b) and D(a,b) using the above formulas, but after computing them once, we store them so we don’t have to compute them again! Since there only 2(N-1) possible arguments, we will only need to compute H(a,b) and D(a,b) once, which is potentially faster!
As an illustration, here is some pseudocode to compute H(a,b):
def H(a,b): if H(a,b) has already been computed before: return the stored value for H(a,b) else: result = 0 for every neighbor c of b: if c != a: result = max(result, H(b,c)) store 'result' as the result of H(a,b) (for later use) return result
The pseudocode for D(a,b) is very similar.
Handling large degrees
You might be tempted to say that the solution above already runs in O(N) time, so it should already pass the time limit. Unfortunately, it’s not quite O(N) yet. To see why, notice that computing H(a,b) and D(a,b) actually runs in O(\operatorname{deg} b) time, where \operatorname{deg} b is the degree of b. H(a,b) and D(a,b) will have to be computed for every neighbor a of b. But since b has \operatorname{deg} b neighbors, these computations run in O((\operatorname{deg} b)^2) time overall. If b has a lot of neighbors, this is bad. In fact, b can have up to O(N) neighbors, so this runs in O(N^2) time in the worst case!
The bottleneck in computing H(a,b) and D(a,b) is finding three particular neighbors of b: c_1, d_1 and d_2. Unfortunately, these three nodes also depend on a, which means we have to check all neighbors of b for each a. Let’s write these nodes as c_1(a), d_1(a) and d_2(a) instead, to emphasize their dependence on a (of course they depend on b as well). For example, c_1(a) is defined as the neighbor of b that is not equal to a and has the maximum D(b,c).
But if you look closer, for many $a$s, c_1(a) is actually the same node! Namely, let c_1' be the neighbor of b that has the maximum D(b,c) overall. Then for most neighbors a of b, the node c_1(a) is the same as c_1'. The only time this isn’t true is when a is c_1' itself! Similarly, let d_1' and d_2' be the neighbors of b that have the maximum and second maximum H(b,d), respectively. Then for most neighbors a of b, the node d_1(a) is the same as d_1' and d_2(a) is the same as d_2'. The only time this isn’t true is when a is equal to d_1' or d_2'.
How can we use this? Well, as said above,
- If a ot= c_1', then c_1(a) is simply c_1'.
- But if a = c_1', then c_1(a) is actually c_2', which we can define as the neighbor of b that has the second maximum D(b,c).
This means that all we need to keep track of are just two neighbors of b with the highest D(b,c)!
Similarly, if we define d_3' as the neighbor of b that has the third maximum H(b,d), then:
- If a ot= d_1' and a ot= d_2', then d_1(a) = d_1' and d_2(a) = d_2'.
- If a = d_1', then d_1(a) = d_2' and d_2(a) = d_3'.
- If a = d_2', then d_1(a) = d_1' and d_2(a) = d_3'.
So now we can come up with a strategy to prevent the worst case O((\operatorname{deg} b)^2):
- The first time we visit b, e.g., when we need to compute H(a,b) and D(a,b), we will compute c_1(a), d_1(a) and d_2(a) normally. This runs in O(\operatorname{deg} b) time. At this point, we have computed the D(b,c) and H(b,c) for all neighbors c of b, except for a itself. We store these values. We also store a, the neighbor of b which we used to first visit b.
- The second time we visit b, e.g., when we need to compute H(a',b) and D(a',b), we will now compute D(b,a) and H(b,a). At this point, we now have computed D(b,c) and H(b,c) for all neighbors c of b, including a, so we can now compute c_1', c_2', d_1', d_2' and d_3', and using the rules above we can determine c_1(a'), d_1(a') and d_2(a').
- For all subsequent visits H(a'',b) and D(a'',b), we can now use the above rules to determine c_1(a''), d_1(a'') and d_2(a'') in O(1) time!
This reduces the overall running time per node b to just O(\operatorname{deg} b), and the overall running time of the algorithm to O(N)!
As a side note, d_3' (and possibly d_2' and c_2') is undefined if b has fewer than three neighbors. In this case, we assign them to a dummy node c' having H(b,c') = D(b,c') = 0.
Stack overflow
As a final note, the approach above may fail because the recursive calls become too deep. Note that N can reach 5\cdot 10^6 which is larger than usual, so in many languages a stack overflow may occur. There are a few ways to get around this:
Enlarge the runtime stack manually. In C++, the following can be used:
#include <sys/resource.h> ... void enlarge_stack() { struct rlimit lim; getrlimit(RLIMIT_STACK, &lim); lim.rlim_cur = lim.rlim_max; setrlimit(RLIMIT_STACK, &lim); }
In Python, we can use
sys.setrecursionstack(BIG_NUMBER), but note that you can’t usually set this to a very high number like in this problem. So this method might not work. In other languages, there might not even be ways to enlarge the stack at runtime.
The setter’s solution uses this method.
Convert the algorithm to an iterative one, by simulating the runtime stack yourself. This is trickier to implement, but has the added benefit that it will work in most languages. The tester’s solution below uses this.
Time Complexity:
O(N) | https://discuss.codechef.com/t/regions-editorial/12668 | CC-MAIN-2019-22 | refinedweb | 2,038 | 80.21 |
IntroductionIn this article we will see how to host our WCF service in a Windows environment under a Widows Service. You can read about how to create and consume a WCF service here. When we run our WCF service then it is hosted by WcfTestClient automatically and for using the WCF service we need to open and run our service application manually which is not good. To host our WCF service we can use various ways like IIS hosting and Windows Service hosting. In this article we will see how to host our WCF service under a Windows service.If we host our WCF service under a Windows service then it will start when our Windows (OS) starts and stop when our OS stops. To do that we need to create one Windows service application. So let's create one Windows service application using the following steps.Step 1Open your Visual Studio if you are using Windows Vista or Windows 7, then open Visual Studio in Administrator mode and create a new project of type Windows Service like in the following diagram.Step 2Add a reference to your WCF service library from Project Add Reference Browse Select your WCF service .dll as well add a reference to System.ServiceModel namespace from the .NET tab.Step 3When we created a new Windows service project, by default we got the Service1.cs file. Click on the "switch to code view" link on Service1.cs file which will open the code view which has methods i.e. OnStart and OnStop methods where we can write the code to perform our own actions when the service starts and when the service stops. We will start our service and stop our service in this method so make your Service1.cs code file like below. public partial class Service1 : ServiceBase { ServiceHost obj; public Service1() { InitializeComponent(); } protected override void OnStart(string[] args) { obj = new ServiceHost(typeof(AuthorService.Service1)); obj.Open(); } protected override void OnStop() { obj.Close(); } }In the above code we have written the code for starting our service and stoping our service. In the above lines you can see we are using ServiceHost which is the class present in System.ServiceModel namespace and this class will give the environment to host our WCF service.Still now we have created our application and Start and Stop methods but we also have to configure it like ABC (Address, Binding and Contract). For configuration add one Application configuration file in your Windows Service application which empty. In the <Configuration> tag we have to write our WCF service configuration i.e. ABC instead of writing it just copy the <System.ServiceModel> tag from your WCF service App.Config file and paste it into the app.config file of Windows service application.Step 4Now our Windows service is ready, but we cannot install it directly so we need a service installer and a process installer; for adding them, go to design view of service1.cs in Windows service application right-click and select Add Installer which will add a ServiceInstaller and ServiceProcessInstaller. Select ServiceInstaller and set the property DisplayName to what you want to display the service in the service tray (Control Panel Adm. Tools Services) and StartType to Automatic or Manual. If you select Automatic then your service will start automatically and if you set manual then we have to start our service manually from Control Panel Adm.Tools Services. Next Select ServiceProcessInstaller and set the property Account to LocalSystem and build the application which will create one exe file which we need to register with OS.Now we are ready to install our service. Here installing the service means registering our service with Windows. To register our service read more below.Step 5To register your service, open your Visual Studio command prompt in Administrator mode and switch to your project directory i.e. Windows Service Project and Register the generated exe file using installutil command followed by exe name which will register our service with Windows. Now you can see your service by opening Control Panel Adm. Tools Services Display Name which you set for ServiceInstaller.ConclusionUsing simple steps we can host our WCF service with a Windows service.
Consume a Custom WCF Service in a Console Application
Announcing Online Events in WCF Service | http://www.c-sharpcorner.com/UploadFile/krishnasarala/hosting-wcf-service-under-windows-service/ | CC-MAIN-2013-48 | refinedweb | 715 | 56.05 |
CoreMIDI using objc_util questions
With the beta version of Pythonista it looks like it might now be possible to get CoreMIDI access going.
I know that there are several issues surrounding this like needing to have the audio key in their UIBackgroundModes in order to use CoreMIDI’s MIDISourceCreate and MIDIDestinationCreate functions. These functions return kMIDINotPermitted (-10844) if the key is not set according to Apple docs.
Pythonista seems to have the ability to play background audio now - but does that mean it has the audio key set?
In looking at CoreMIDI I see that it is more of a "C" API. It does not have many Objects - only Functions that can be called. Can you show an example of how to access a framework like this? I have just tried:
CoreMIDI = NSBundle.bundleWithPath_('/System/Library/Frameworks/CoreMIDI.framework') CoreMIDI.load()
Which seems to work and does not throw any errors. I think I would like to do something like this C code next:
MIDIClientCreate(CFSTR("simple core midi client"), NULL, NULL, &(_midiClient));
... but since this is straight C code there is no obvious NSObject to create with methods to call to do this using objc_util.
Please feel free to waive me off, if this is going to be a dead end for any reason.
Pythonista seems to have the ability to play background audio now - but does that mean it has the audio key set?
Yes, the key is set in the Info.plist, so that should work.
In looking at CoreMIDI I see that it is more of a "C" API. It does not have many Objects - only Functions that can be called. Can you show an example of how to access a framework like this?
Basically, you'll mostly need to use "raw" ctypes. objc_util will make a few things slightly more convenient, but it won't help you very much, given that CoreMIDI is indeed a plain C framework.
Quite honestly, I don't know if this is going to be a dead end. I've never worked with CoreMIDI before, so there might be gotchas that I'm not aware of. It definitely looks like it's going to be quite a lot of work... Anyway, I've played around with it a little bit, so here' some code that might help you get started:
from objc_util import *] #... You're probably going to need a lot more declarations like these. # Create a client: client = c_void_p() MIDIClientCreate(ns('test client'), None, None, byref(client)) # ...then get its properties, which is a bit pointless, # as the only property will be the name we just assigned, but it # confirms that the client was successfully created. props = c_void_p() MIDIObjectGetProperties(client, byref(props), True) # The properties are returned as a CFDictionaryRef, which is # 'toll-free bridged' to NSDictionary, so we can wrap the pointer # in an ObjCInstance for easy printing etc. print ObjCInstance(props)
You would use more of a
ctypesapproach rather than
objc_utils. That means you need to explicitly set the
restypeand
argtypesfor each function, and in many cases you will need to define the
Structuresfor various return types.
You might be able to search for ctypes and some of the method names, to see if you find someone who has already done the hard work. A few of the beginning bits with loading libraries is not necessary, but I think the rest should basically work.
@omz Thanks for the starter code and pointer in the right direction.
@JonB I took your advice and tracked down some existing code to help jumpstart the work.
I have been able to setup a client and a source and send data to the source. I used a midi monitor program to read the source and verified that the midi events were flowing. I only used a very small part of the CoreMIDI API. Same as what Patrick Mueller supported in his project.
The one thing that makes this so difficult is that everything has to be rolled by hand. It is especially tough for "Core" stuff that is basically C API's and heavily uses CoreFoundation functions rather then Foundation objects.
I can't help noticing that the objc_util code is using the same methodology as the rubicon-objc project. That project seems to have support for CoreFoundation.
Maybe porting that code across would make my life easier as I flesh this CoreMIDI support out.
I also can't help thinking that this would all be a lot simpler if there was a utility to read a framework header file and churn out all this tedious code automatically. Somebody must have thought of that already though - don't you think?
etc., and the
ns()function can be used to create these objects quite easily.
I have been doing a lot of investigation on how to avoid doing all this error prone hand coding of API's such as CoreMIDI. It turns out that there are several different projects out there that are trying to do this.
The one that is most up to date is cffi. In theory it should be possible to use cffi to "compile" any framework header file into ctype wrappers completely automatically.
The other project is PySDL which includes PyCParser. This one was written specifically to build and maintain SDL bindings by compiling SDL.h into ctype wrappers at runtime. This work has not been upgraded since 2011.
Both of these solutions require copies of the header files for the frameworks being accessed that match up with the library being loaded. I am not an XCode dev and don't have access to the header files needed. The header files are all copyrighted by Apple as well, so it is unclear how I would get copies.
I am not sure how to proceed and not clear why you are apparently steering clear of all of this with obj_util instead of leveraging this prior work. If the answer to that it is copyright/licensing issues then I DO understand. If this is the reason it means that if I go this route, I may not be able to share my work with anyone else or include it into a product of my own. It would only be for personal use and hacking fun.
most of the require headers can be found at the apple open source page, though it is not well reference by search engines. there are several github repos as well that contain framework headers.
here is a link to coremidi headers
i thought that cffi and the like require a c compiler in order to extract type info from precompiled or compiled code, rather than trying to parse C code directly (anyone who has tried to write their own simple c type parser soon realizes how painful this is, since you have to deal with basically two languages, c preprocessor language as well as c, and before you know it you basically have to write a complete c preprocessor to handle anything other than the simplest of headers.
I think your best bet is to pre create a complete set of python bindings, perhaps with one of the tools you mentioned above, as a class, then reuse those.
@wradcliffe To get the iOS header files you only need to install Xcode from the Mac App Store for free and poke around inside the app package a little. The license on the header files is Apple's open-source code license that allows (as far as I can tell) modification and redistribution.
@JonB When set up properly,
cffican run on Pythonista with
ctypes. By default it tries to import a compiled C module and do its ffi-ing with that, but you can make it use
ctypesinstead:
import cffi import cffi.backend_ctypes ffi = cffi.FFI(backend=cffi.backend_ctypes.CTypesBackend()) # Done.
The C parsing is done entirely in Python using
pycparser, which is a full C parser based on
ply(Python
lexand
yacc). Though it does expect preprocessed C code - this means no comments, no includes, no defines, no if macros.
And I am still trying, and repeatedly failing, to write up some kind of basic C preprocessor in Python. Right now it can do
#errors and
#warnings,
#define,
#undefand expansion of macro constants, and if you're lucky it can even handle macro functions. That is, nested macro function calls will fail almost all the time.
@JonB - as @dgelessus explains, there is a C parser written in pure Python in cffi as well as PyCParser.
@dgelessus - it is new info to me that cffi must have preprocessed headers. I have not located any examples showing how they process header files and was expecting to find a utility of some kind that did the preprocessing somewhere. So you are telling me that this does not exist?
PyCParser actually has the preprocessor but I don't know how good or general it is. It is obviously good enough to handle SDL.h and is designed specifically to handle header files that include other header files, macros, ifdefs, etc. I have started looking at the code and playing with it and it looks very impressive. I keep wondering why the author - Albert Zeyer - stopped working on this 3 years ago. Might be because of cffi. The amount of work that went into this in order to solve this use case is jaw dropping.
EDIT: Also - I don't own a Mac desktop of any kind so downloading and installing XCode is not an option for me right now. @JonB thanks much for the pointer to the headers.
EDIT: Albert implemented the CWrapper (ctypes wrapper) functionality only 3 years ago so the work is still fairly fresh and he is accepting and merging fixes to the codebase from contributors.
cfficomes with
pycparserbundled, it doesn't have its own C parser.
pycparser's level of understanding unpreprocessed C goes as far as "this line starts with a
#, I'm not touching that" and "what is this
/*doing here, this isn't valid syntax". On a normal computer you'd tell
pycparserto run the code through
cpp(the C preprocessor) before parsing it. That's not an option under Pythonista, for obvious reasons.
@dgelessus - I found this reference on stackoverflow:
Basically it says that there is no preprocessor and even using gcc to generate the cleaned up code is not recommended.
So this effectively takes cffi off the plate for me unless someone like @omz would commit to running some kind of useful preprocessing of the framework headers and supplying these for either download or as part of Pythonista. The good news is that this might effectively remove the copyright problem since these files would not be original Apple source code. May be a gray area but I think we could get away with it.
@JonB - the pointer to the headers you sent is pretty sketchy. File dates are 2006. I am hoping to find something more official and up to date. The Apple Open Source sites I looked at do not seem to provide usable access. Lots of tar balls to download but no way to see what is in them. Still searching for a stable reliable site that provides these headers.
gcchas a lot of non-standard syntax (like
__attribute__(...)) which
pycparsercan't handle. I'm not sure about
clang, but the Darwin header files look like they don't include any other special syntax when using
clang. (I'm not sure, but wouldn't we have to (pretend to) use
clang, because that's what @omz compiles Python(ista) with?)
In a normal use case (providing Python bindings for a cross-platform C library) including system headers would be a problem, because they would only work on the exact system where they come from. It's also a load of boilerplate code that isn't really important in most cases. In the case of Pythonista portability is less of an issue - we're all on ARM running Darwin with the same version of Pythonista. The only important differences are the bitness (
__LP64__) and the iOS version.
Just because you don't have real header files doesn't mean you can't use
cffi. It just means that instead of this
ctypescode:
libc = ctypes.CDLL(None) libc.strlen.argtypes = [ctypes.c_char_p] libc.strlen.restype = ctypes.c_int
you'd write this
cfficode:
ffi = cffi.FFI(...) libc = ffi.dlopen(None) ffi.cdef(""" int strlen(char *); """)
@dgelessus - you make many good points about cffi. I hope I did not imply that it was useless in general. Only for my particular use case.
I am not the only one with this case as you may have noticed @Cethric has bitten off doing OpenGLES bindings, which makes CoreMIDI look simple in comparison. He wrote a header file processor to generate a lot of the boilerplate: parseHeaders.py
I also want to clarify for anyone else reading through this thread that I WAS able to get CoreMIDI working using a limited subset of the API and it was not all that terrible to hand code it. I just wanted to go to the next step by implementing a full set of bindings, writing test cases, etc. and went down this rabbit hole. I also can't help noticing that there are a lot of forum support requests being caused by simple coding errors for these bindings with users asking @omz to debug or help write the code. I really wish to avoid using the forum to help find my typos or teach me Python syntax.
@wradcliffe , sorry I didn't quite get the last paragraph. Is omz making errors implementing the bindings for MIDI because of too many silly questions?
- wradcliffe
@Phuket2 - I guess that last paragraph sounds like I am angry or pissed off but that is not the case. I was just noting that a lot of the forum postings asking for help with it by other users has to do with mistakes in coding. Some of it is getting the API wrong, some is lack of understanding of Python, some is an inability to debug the mistakes. I feel like this is a waste of his time more then anything and I personally don't want to contribute to that. I could have it all wrong and he may enjoy the interaction.
Also - CoreMIDI support has been a requested feature of Pythonista for awhile by a few users but has not been implemented for a variety of reasons. You can search for MIDI or CoreMIDI to see the history of the topic. As a relatively new user you should read through those threads as well as the discussions on getting Python3 implemented to see the kinds of constraints omz is under in offering this product.
@wradcliffe , you read me correctly :) we all get frustrated at times. But we all need distractions sometime. I think if some seemingly stupid questions are been answered by over qualified people, it could be just what they need to take a break. You never know. unfortunately, I ask my fair share of stupid questions! But I am getting better :)
Anyway, I am useless to your question here. However, funny enough , I use to work for Roland Australia (many years ago, D50 was released when I was working for them). Also, As far as I know, I had one of the first ever external Midi interfaces for a Macintosh. I got a prototype from a company in Brookvale, Sydney. It was a serial interface, I had a Mac Plus back then.
It didn't matter a great deal. Atari was so ahead of the game when it come to MIDI back then. They even had pressure response.
Oh, also while I was a Roland I used to eat lunch most days with a guy that was on the original fairlight team. Fun years!
I hope you get a resolution to your problem
Ian
- Webmaster4o
This is MIDI as in my MIDI keyboard that I can use with GarageBand for iOS? I'd love to see this in pythonista. Or am I confused about what MIDI is...
Musical Instrument Digital Interface
@Webmaster40 - yep - CoreMIDI is the framework used to interact with MIDI devices and applications. See: How To Use CoreMIDI
@dgelessus - I think some readers of this thread may wonder why PyObjC could not be used. This package has the longest history and goes back to 1996. It was designed to use a c compiler to generate the metadata we are discussing. It has struggled with methods of storing the metadata (preprocessed headers and massaged headers) and used XML files and more recently python source code and JSON files for this purpose. The fact that the metadata was always getting out of date and the use of a compiler as part of the process seems to have steered people away from it. Here is a pointer to the preprocessing part of the project:
The project README says: "This project is hopelessly incomplete at the moment".
All in all, it looks like there are good reasons why everyone is still looking for a pure python solution to my use case and why cffi offers the current best set of tradeoffs.
- Webmaster4o
Great! I own a 25-key MIDI keyboard, it'd fun to play with this and pythonista. I think the introduction of CoreMIDI support would have to come with modules for audio processing. I've been playing with the
wavebendermodule to make some sounds, but I find it very confusing as I'm not an audiophile and don't understand how that stuff works. | https://forum.omz-software.com/topic/2158/coremidi-using-objc_util-questions/3 | CC-MAIN-2022-27 | refinedweb | 2,933 | 70.63 |
#include <MP_domain.hpp>
Inheritance diagram for flopc::MP_domain:
This is one of the main public interface classes. One uses this in the context of a constraint, objective, variable, or data. It is usually used in conjunction with an MP_set, or a subset, but can be used without one. It is the range over which the other construct is defined.
Definition at line 61 of file MP_domain.hpp.
a set which points to nothing.
Special conditional creation of a subset.
This method allows for a test for inclusion of a condition during construction of a subset. The output MP_domain will include references where the condition is satisfied.
Referenced by flopc::MP_set::such_that().
Special conditional operation on the domain.
This method will call the functor for each member of the MP_domain.
Referenced by flopc::forall(), and flopc::operator<<=().
returns number of elements in the domain.
returns a reference to the "empty" set.
Referenced by flopc::forall(), and flopc::SUBSETREF::getDomain().
Definition at line 64 of file MP_domain.hpp.
Definition at line 65 of file MP_domain.hpp.
operator which creates a new domain as the pairwise combinations of two input domains.
Definition at line 96 of file MP_domain.hpp.
Definition at line 97 of file MP_domain.hpp.
Definition at line 98 of file MP_domain.hpp. | http://www.coin-or.org/Doxygen/Smi/classflopc_1_1_m_p__domain.html | crawl-003 | refinedweb | 212 | 53.88 |
Hi,
How do I know that my servlet will be deployed against JSR289 as opposed to JSR116? I understand that the JSR289 has new behaviors but the container can maintain backwards compatibility to support JSR116.
Topic
Pinned topic Configuring servlet between JSR289 and JSR116
2011-05-06T14:46:47Z |
Updated on 2011-05-08T08:55:07Z at 2011-05-08T08:55:07Z by SystemAdmin
Re: Configuring servlet between JSR289 and JSR1162011-05-06T17:10:00Z
This is the accepted answer. This is the accepted answer.The way the container identifies an application as a JSR 289 is through the <app-name> tag in the sip.xml deployment descriptor:
<app-name>xxxxxxx</app-name>
If this is removed or not included, the SIP container will assume its working with a JSR 116 app. From my experience, the one thing you have to be careful of when converting a JSR 116 app to a JSR 289 application is the new Invalidate When Ready feature that is enabled with JSR 289. This can cause issues with converged applications if a SIP app session is invalidated by the SIP container before the HTTP side of the converged application has finished using it. Just something to watch out for.
Re: Configuring servlet between JSR289 and JSR1162011-05-06T17:58:00Z
This is the accepted answer. This is the accepted answer.
This brings up a follow-up question. We have actually deployed our servlet with and without the app-name and observed two different behaviours when receiving 302 MOVED.
When not using the app-name tag, i.e. JSR116, when we send INVITE and receive 302 MOVED in response, we create a new INVITE but re-using the same session. No issues sending the second INVITE.
When using the app-name tag, i.e. JSR289, when we receive 302 MOVED and try to create a new INVITE on the session, we get an error that the session is already invalidated.
Any ideas why the difference?
Darryl
Re: Configuring servlet between JSR289 and JSR1162011-05-06T19:23:35Z
This is the accepted answer. This is the accepted answer.My guess is that you are getting bit by the automatic invalidation of the session. This is the invalidate when ready feature of JSR 289 that I mentioned previously. You can test this theory be disabling invalidate when ready. You can do this in the source of your application each time a SIP Application Session is created by calling the setInvalidateWhenReady(false) on the SipApplicationSession. This should prevent the container from automatically invalidating the session.
Re: Configuring servlet between JSR289 and JSR1162011-05-06T20:32:36Z
This is the accepted answer. This is the accepted answer.
Back to the JSR289 vs JSR116, our sip.xml has the app-name tag, but we're seeing this exception when trying to deploy the servlet.
com.ibm.ws.scripting.ScriptingException: org.apache.tools.ant.BuildException: /opt/IBM/WebSphere/AppServer/feature_packs/cea/sar2war_tool/build1.xml:236: /tmp/build/mysipservlet.sar/WEB-INF/sip.xml is not a valid XML document.
We looked in this build1.xml line 236 and the target name of this XML blcok is validateSipDotXml116. Does this mean the container thinks it is a JSR116 servlet?
Here's our sip.xml
<?xml version="1.0"?>
<sip-app>
<app-name>com.test.MySipServlet</app-name>
<display-name>My Sip Servlet</display-name>
<servlet>
<servlet-name>MySipServlet</servlet-name>
<display-name>MySipServlet</display-name>
<servlet-class>com.test.MySipServlet</servlet-class>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>MySipServlet</servlet-name>
<pattern>
<or>
<equal ignore-
<var>request.method</var>
<value>INVITE</value>
</equal>
<equal ignore-
<var>request.method</var>
<value>NOTIFY</value>
</equal>
<equal ignore-
<var>request.method</var>
<value>OPTIONS</value>
</equal>
</or>
</pattern>
</servlet-mapping>
<!--Declare class that implements TimerListener interface-->
<listener>
<listener-class>com.test.MySipServlet</listener-class>
</listener>
</sip-app>
- SystemAdmin 110000D4XK45 Posts
Re: Configuring servlet between JSR289 and JSR1162011-05-08T08:55:07Z
This is the accepted answer. This is the accepted answer.During deployment there is a schema validation against your sip.xml deployment descriptor to verify that this is JSR289 application, it looks like the schema validation failed.
My guess is that the problem is that the root element tag does not include the correct namespace declarations.
Here is an example of a correct JSR289 sip.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<sip-app
<app-name>jsr289.app</app-name>
</sip-app>
Try to add the namespaces to the root element to see if this solves the problem.
One other thing that you can do is to enable the sip container traces, using com.ibm.ws.sip.*=all traces string, deploy the application, and look for this string: "Failed to parse Sip.xml with jsr1.1 schema validation" this will help you to understand what is wrong with your sip.xml file.
Re: Configuring servlet between JSR289 and JSR1162014-11-17T11:59:08Z
This is the accepted answer. This is the accepted answer.
Hello,I am a java developer,I meet the same problem(sip.xml is not a valid XML document). If anyone can solve it,please tell me what can i do ,Thanks!Updated on 2014-11-17T12:00:08Z at 2014-11-17T12:00:08Z by wcy | https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014614022 | CC-MAIN-2017-39 | refinedweb | 883 | 50.63 |
By Steve Millidge
The latest release of Payara 4.1.151 includes Hazelcast as a built-in JSR107 JCache implementation this means with Payara you can start building applications that uses a robust data grid as a caching tier with a standard API.
Payara is a supported and enhanced version of GlassFish Open Source edition 4.1 and uses GlassFish to provide support for CDI and Java EE7.
Hazelcast is an in-memory Data Grid that provides resilient, elastically scalable, distributed storage of POJOs. Hazelcast provides a JSR107 compatible API.
JCache JSR107 specifies an API for caching of Java POJOs in Java SE and therefore works with Java EE. JCache provides a map like Key Value store for caching objects similar to other caches like memcached or EHCache.
JSR107 also specifies a set of annotations for caching objects which can be used via CDI and this is what we will walk-through in this blog.
First Enable Hazelcast in Payara
We will assume you have Payara installed (see blog for details of how to do this).
As Payara provides a drop-in replacement for GlassFish 4.1 open source edition Hazelcast is disabled by default. It can be enabled via as-admin or via the administration console.
./asadmin set-hazelcast-configuration --enabled=true –target=server
We can check that Hazelcast is enabled using asadmin;
./asadmin list-hazelcast-members –target=server
This will printout the Payara servers that are part of the Hazelcast cluster.
{ server-/192.168.79.135:55555-this }
In this case just our Payara server as we don’t have any further Payara servers running.
Adding Caching to a CDI Bean
Now we have Hazelcast enabled within Payara we can use the new JCache APIs in Java EE applications.
So imagine we have a CDI Bean with a method that takes a long time;
@RequestScoped public class SlowOperationBean {; } }
In the real world this operation could be doing anything that could take a long time and could be cached for performance e.g.;
- Running a time consuming database query
- Obtaining a JSON result from a remote REST service
- Computing some HTML based on user preferences
Let’s add Caching to speed it up
To Cache the result of the method using JSR107, in Payara, we just need to add the @CacheResult annotation to the method;
@CacheResult; }
That’s all you need to do!
Driving the Bean
So I created a servlet, that injects the CDI bean, and runs in Payara to test drive caching for this blog.
You just need to package up your bean and your servlet in a standard web application and deploy to Payara. Nothing else needs to be done to use JSR107 on Payara. Nothing needs to be added to beans.xml, no CDI extensions need to be written. Just deploy the bean and the servlet.
@WebServlet(name = "CallSlowMethod", urlPatterns = {"/CallSlowMethod"}) public class TestServlet2 extends HttpServlet { @Inject SlowOperationBean bean; @Override protected void doGet Call Slow Method</title>"); out.println("</head>"); out.println("<body>"); long startTime = System.currentTimeMillis(); String result = bean.doSlowOperation("hello", "world"); long endTime = System.currentTimeMillis(); out.println("Calling Slow Bean took " + (endTime - startTime) + " ms and we got result " + result); out.println("</body>"); out.println("</html>"); } } }
Running the Servlet
The first time I run the servlet we can see the call takes 20s and we get a random hex result.
The second time I run the Servlet you can see the method call takes 1ms and we get the same result back. This is because the @CacheResult annotation tells Payara to take the result of the method call and Cache it into Hazelcast using a key derived from the value of the two method parameters.
As simple as that! We have cached the result and sped up the method call 10,000 times. How’s that for supercharging your application with 12 characters!
Exploring More
The JCache JSR107 specification is fully supported in Payara 4.1.151 onwards and this blog shows just a very simple use case. Further blogs will show using other features of JCache from within Payara.
If you want to find out more about JSR107 take a look at these slides, click here to view a slide show with further instructions. | https://www.voxxed.com/2015/03/using-the-jcache-api-with-cdi-on-payara/ | CC-MAIN-2017-34 | refinedweb | 701 | 63.29 |
<h1>
Tree Container Library: Examples - unique_tree example explanation
</h1>
<hr />
<p>
This bit of code shows the unique_tree container at work. It uses functions to perform
many of the same operations performed in the
generic example.
The code illustrates how the unique_tree container is used.
The code can be viewed an printed from the included file unique_tree_example_explanation.htm. <br/>
A source file of this example is also included in the download. <br/>
This example is compatible with versions 3.50 and higher of the TCL. The latest version of the TCL
can be downloaded from the <a href='/tree_container_library/download.php'>TCL download page</a>.
</p>
<p>
This generic example, as well as the library, is now compatible with Visual C++ 6.0.
For this compiler, users need to un-check the <i>Enable minimal rebuild</i> option for successful compilation.
Also for VC 6.0 users, the normal <code>#pragma warning (disable : xxxx)</code> directive
is necessary to avoid the compiler warnings.
</p>
<p>
The example inserts objects of the class CFamilyMember into a unique_tree,
to form a family tree. Nodes are inserted in a linear manner, with no guarantees that
the parent will be inserted before any of it's child nodes.
Depending on whether orphans are allowed or not, not all of nodes will be inserted successfully.
A sample of the results to the console is also given to the right. Please insure that you have
downloaded version 1.08 or greater of the TCL for this example.
</p>
<p>
The #include in line 1 is commented out. Microsoft compilers need this line, so if you're compiling
with a microsoft compiler, you can un-comment this line. Line 2 includes the header for unique_tree.
Since we will be working with the unique_tree container exclusively, we won't need to include
the header for tree and multitree. Lines 3 - 7 include STL needed headers.
</p>
<p>
Line 9 begins the declaration of the class CFamilyMember, which will be the <code>stored_type</code> used
in unique_tree. It will be objects of this class that will be stored in the tree nodes. Just like the
STL containers, objects stored in the tree containers need default constructors, which is supplied
for CFamilyMember in line 12. The constructor accepting one argument in line 13 is used to create
stored_type objects with a valid 'key' member. Furthermore, this constructor will be used to
implicitely create CFamilyMember objects in unique_tree operations.
</p>
<p>
The constructor in line 14 - 15 is the constructor which is used to create a valid CFamilyMember object to
store in the unique_tree container. Lines 17 - 20 define a less-than operator for the class. Since we
will be using the default std::less<CFamilyMember> as the second template parameter for the unique_tree
declaration, this less-than operator will be used for the
node_compare_type.
Since the operation compares the ssn class member, the ssn class member is referred to as the 'key' member.
</p>
<p>
In line 22, get_ssn() is defined to return the ssn number. In this example, the ssn number is a three digit number,
to make the example easier to follow. line 23, get_age() is defined for use of the ordered_iterator operations,
described further below. Lines 24 - 29 define a function, get_name_and_age(), which returns
a string containing the name and age of the person. Lines 33 - 35 declare the class member variables. In this
example, <code>ssn</code> will be used as the 'key' member, or 'key' field. The member <code>age</code> will
be used for the key field for the
node_order_compare_type
comparisons, explained immediately below.
</p>
<p>
Lines 38 - 44 define the comparison function object mentioned above. This function object is used as the third
template parameter for our unique_tree, defined on line 46; This comparison operation
compares the age members, and is used to set the child node traversal order of the ordered_iterators.
</p>
<p>
The typedef in line 46 declares the unique_tree type
we will be using in this example. Let's examine this declaration closely. The first template parameter defines
the <code>stored_type</code> which will be CFamilyMember. The second template parameter defines the <code>node_compare_type</code>.
In this case, we explicitly supply the same type as the default, std::less<CFamilyMember>. We need to explicitly
provide this if we will be providing the third template parameter. This second template parameter instructs the
unique_tree to use the less-than operator of CFamilyMember, which compares the ssn members of the class objects, thus
making ssn the 'key' member. The third template parameter specifies a comparison function object (defined below) which
specifies the node_order_compare_type. This is used to set the order of the
ordered_iterators.
Since the comparison function object compares the <code>age</code> member of the class, the unique_tree's
ordered_iterators will iterate over a nodes children according to the age member of each node.
</p>
<p>
Line 48 declares a namespace for needed functions and variables. Lines 50 - 91 declare and define functions which will be
used to populate and print the unique_tree.
The function starting on line 50, is_last_child(), is similar to the one in the generic example. This function is templatized also,
to pass in the type of iterators to be used. Refer also the generic example documentation on a description
on how this function works.
</p>
<p>
Line 62 begins the function which prints the unique_tree contents. You will notice right away that this is a template function.
What isn't immediately apparent, though, that the template parameter is <b>not</b> used in the function signature.
So, when this function is called, the template parameter must be explicitly specified in the function call. This is
done in lines 131, 140, and 145. The template parameter specifies the iterator type to use to traverse the
child nodes. This function uses the same techniques to print the contents of the unique_tree as the
generic example.
The only difference is the iterators that are used to traverse the child nodes. In this function, the type
of iterators can either be standard
child iterators
or ordered_iterators.
Line 66 declares the iterators using the template parameter. Line 67 calls the appropriate overloaded function, depending
on the type of iterator, to set the iterator values. The rest of the function is identical to the function given
in the generic example. Refer to the documentation there for a description on how the code works.
</p>
<p>
Lines 93 - 122 declare an array of data that will be used to populate
the unique_tree. Each array element contains a data pair. The first pair member is the ssn of the parent
node in which to insert the second pair member. Looking at the first element in the array, a node is to be
created for Jim McAvoy, which is to be inserted in a node which has the ssn equal to 555. Notice that the nodes
are in no particular order. Most interesting, you'll notice that some of the children are listed before their parents
or other ancestors. This could have a large impact on the success of the insertions. With <code>allow_orphans</code> off,
any attempt to insert a child node within a parent node using insert(parent, child) when the parent is not present
in the unique_tree, will result in a failed insertion. The insertion will succeed, however, if allow_orphans is enabled.
When this happens, a temporary (orphan) node is created for the parent, until the parent node is actually inserted,
in which case the orphan will be updated and removed from the <i>orphan state</i>. In this example, we will populate
the unique_tree twice. Once with orphans disabled, and once with orphans enabled.
</p>
<p>
Line 124 starts main(). Line 126 declares the unique_tree we will be using throughout this example, and sets
the root node. The root node is set to John McAvoy, with ssn = 555, and age = 87. Lines 128 - 133
load and print the unique_tree with orphans disabled (default case). Loading the tree with orphans disabled
will result in many failed insertions, since the order of data does not guarantee that all parent nodes will
be inserted before their children are. Lines 135 - 141 re-load and print the unique_tree with orphans enabled.
With orphans enabled, all insertions of the form insert(parent, child) are guaranteed to succeed, even when parent
does not yet exist in the unique_tree. In this case, the parent (and it's descendants) will remain in an
<i>orphan state</i> until that node is actually inserted into the tree, in which case that insertion will bring
the orphan out of the orphan state, and put it in it's specified place in the tree. Lines 143 - 146 print the same
previously printed tree, but print the tree using ordered_iterators instead of the standard iterators. Using the
standard iterators, the child nodes will be printed out in an order dictated by the ssn. Using ordered_iterators,
the child nodes will be printed in an order dictated by the age member. Line 148 calls a function which tests
the unique_tree, by trying to insert nodes which already exist in the unique_tree.
</p>
<p>
Line 153 begins the function used to load the unique_tree. When successful, the unique_tree will have the structure
shown in figure 2 above. The figure shows the order of insertion in parentheses (), and shows those nodes which
are inserted before their ancestors in a different color. These nodes will not be successfully inserted when
allow_orphans is disabled.
The loop in lines 157 - 163 insert all other nodes in the unique_tree. Line 160 does the actual insertion. All
insertions use the insert operation insert(parent, child). This line inserts each element in the Family array,
using the first array element pair member in the element as the parent ssn, and the second array element
pair member as the child node to be inserted. An iterator is returned from the operation indicating the success
of the insert operation. If successful, the iterator will point to the inserted node. If unsuccessful, the iterator
will return the
end iterator
of node which the insertion was performed. Line 161 checks the iterator for success, and if successful,
checks to see if the inserted node was inserted as an orphan. If so, line 162 prints a message indicating
that the node was inserted as an orphan.
</p>
<p>
The functions starting on lines 166 and 174 are overloaded functions, which set the beginning and ending iterators
for the passed node. Which function is called will depend on the type of iterators passed by the calling function.
</p>
<p>
Line 182 begins the function which tests the unique_tree for insertions of identical nodes. The unique_tree guarantees that
all nodes in the tree will be unique. This function verifies this behavior. Line 185 declares an iterator which
will be used to test the success of the insert operations. Line 188 attempt to insert a duplicate node into
the root, with a standard insert(child) operation. Even though the node does not exist as a child node of the root, the
node <b>does</b> exist in the unique_tree <i>somewhere</i>, so the insertion will fail. Line 193 make the same
kind of insertion attempt, but with a different node which already exists somewhere in the unique_tree.
</p>
<p>
Lines 197 - 202 attempt to insert a node which is already present, into a node other than the root node with a standard
insert(child). A node with ssn = 827 is attempted to be inserted in node containing Connie Delay. Looking at figure 2 above,
you'll see that these two nodes are not even in the same main branch, but the tree still does not allow the insertion.
Notice the <code>find_deep()</code> operation in line 198. This operation makes use of the constructor for CFamilyMember
which accepts a single argument. Lines 204 - 209 attempt the same type of insertion, but with different parent and
child nodes. Notice the <code>find_deep()</code> operation again in line 205. This line simplifies the operation even
more, by using the CFamilyMember constructor implicitely.
</p>
<p>
Lines 211 - 214 attempt to insert a node which is already present with the insert(parent, child) operation. Notice that
line 212 also takes advantage of using the CFamilyMember one-argument constructor implicitly, when passing the parent argument.
< zlib/libpng License | http://www.codeproject.com/script/Articles/ViewDownloads.aspx?aid=12765&zep=unique_tree_example_explanation.htm&rzp=%2FKB%2Flibrary%2Ftree_container%2F%2Ftree_container_library_demos.zip | CC-MAIN-2014-41 | refinedweb | 2,061 | 63.9 |
The.
If you have not worked through the article Create a Teradata Project using the Teradata Plug-in for Eclipse, do so now before you continue. Once you know how to produce a Teradata Project, make a Teradata project called ProductPrj.
You will also need some database objects before you can start this tutorial. First create the following products table using either the SQL Editor or the Stored Procedure creation dialog from the Teradata Plug-in for Eclipse.
CREATE MULTISET TABLE guest.products ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
id INTEGER NOT NULL,
description VARCHAR(255) CHARACTER SET LATIN CASESPECIFIC,
price DECIMAL(15,2),
PRIMARY KEY ( id ));
Now create a Stored Procedure using the products table with the following DDL:
CREATE PROCEDURE "guest"."getProduct" (
IN "id" INTEGER,
OUT "description" VARCHAR(256),
OUT "price" DECIMAL(10 , 2))
BEGIN
price, description into :price, :description
from guest.products where id=:id;
END;
The Wizard is launched from the DTP Data Source Explorer by right clicking on a Stored Procedure tree node in the explorer and selecting the "Create iBatis(MyBatis) SQL Map..." menu option.
Once the Wizard is launched, the Stored Procedure Selection Wizard page will come up. This page shows the selected schema and procedure for the Wizard.
The next page of the Wizard is the Create iBatis SQL Mapping XML File Wizard page. This page lets you define the location, name of the iBatis SQL mapping file and the mapping name for the selected Stored Procedure. The option of appending the mapping to an existing file will be default. You will need to select the option Launch iBatis DAO with Web Services Wizard if you want to create a Web service directly after you have created a SQL Map for your stored procedure.
The next page of the Wizard is the Domain Objects Source Location page. This page lets you define the location and package name of the domain object to be used as the result map for an SQL mapping.
The next page is the Edit Classes Wizard page. This page lets you rename and edit the properties for the classes which will be created by the Wizard.
This page will show the parameter class and any result set classes that have been derived from the Stored Procedure. The default names of the classes can be renamed to names that make sense for your application. In this case change the Parameters class to Product. You should notice the members of the Parameter class correspond to the parameters for the Stored Procedure.
Once all of the required information is entered into the Wizard, the Finish button can be selected and the SQL Map is generated. The SQL Map contains a resultMap for the parameter class and a SQL statement to call the Stored Procedure. The Stored Procedure being executed has id as an "in" parameter and has description and price as "out" parameters.
?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "">
<mapper namespace="repository.ProductMap">
<!-- Define object mapping -->
<resultMap type="domain.Product" id="Product">
<result column="id" jdbcType="INTEGER" javaType="java.lang.Integer" property="id" />
<result column="description" jdbcType="VARCHAR" javaType="java.lang.String" property="description" />
<result column="price" jdbcType="DECIMAL" javaType="java.math.BigDecimal" property="price" />
</resultMap>
<!-- Define procedure SQL statement -->
<select id="getProduct" parameterType="domain.Product" statementType="CALLABLE">
call "guest"."getProduct"(#{id,mode=IN, jdbcType=INTEGER},#{description,mode=OUT, jdbcType=VARCHAR},#{price,mode=OUT, jdbcType=OTHER, typeHandler=com.teradata.commons.mybatis.extension
s.NumberHandler})s.NumberHandler})
</mapper> <!-- Do not edit or add anything below this comment -->
The iBatis (MyBatis) DAO with Web Services Wizard will be launched if the option was selected from the iBatis Stored Procedure Wizard. This Wizard will create a DAO and a Web service derived from the generated SQL Map.
The First page of the Wizard defines the new DAO and options to create a Web Service.
Select the following options:
Create WSDL
Create Web Service
Save Password
Now hit the Next button:
The iBatis DAO Methods Wizard Page allows you to select which SQL actions from your iBatis Map file to be used in your Web service. You can change your return type from returning a single result set object to returning a list instead. Once you hit the next button Your DAO and Web service Definition files will be created.
The next page is the standard WTP Web services Wizard. Set your Client to test. Once you hit the Finish button your Stubs and Skeletons will be created for your Web service. The Implementation stub will be modified to use the new DAO you just created.
The Web service client will come up ready to use and connected to your Teradata database, through the Web service implementation. The generated client will show all of the members of the Parameter class but you are only required to enter the id because it is the "in" parameter of the Stored Procedure. The results will show all of the parameters of the stored procedure, id and the two "out" parameters, description and price.
A similar Wizard is the iBatis (MyBatis) Macro Wizard. This Wizard wraps a Macro into an iBatis or MyBatis SQL Map. The newly generated SQL map can then be used to create a Web service as described above or it can be used to create a Java application that uses the iBatis or MyBatis frame works. The Wizard is launched from a Macro Tree node from the DTP Data Source Explorer.
Both the iBatis (MyBatis) Stored Procedure and Macro Wizards are easy to use because parameter and result classes are derived from the selected Stored Procedure or Macro. The Wizards generate DAOs and functional applications you can start using right away. You now can get a head start on your application development by leveraging the Stored Procedures and Macros you have already. | http://community.teradata.com/t5/Tools/Instant-Web-services-from-Teradata-Stored-Procedures/td-p/27595 | CC-MAIN-2017-17 | refinedweb | 984 | 54.83 |
Symptom
The upload of a bank statement file with import format Mexico – BANCOMER/BBVA is hanging in status In Process.
Environment
SAP Business ByDesign
Reproducing the Issue
- Go to the Liquidity Management work center.
- Go to the File Management > Inbound Files view.
- Click New > Inbound File.
- Enter data:
- File Type: Bank Statement
- Respective Company and Bank ID
- Import Format: Mexico – BANCOMER/BBVA Bank statement
- Click Add > File > Browse and select the respective file.
- Click Start File Upload.
As a result, the file gets hung in status In Process.
Cause
The file content is not provided in the expected format.
The system expects to get each line to be differentiated by a two digit tag (00, 11, 22, 23, 32, 33 or 88) at the start of the line and each line to close with a carriage return (CR) + line feed (LF).
As the lines end only with a line feed (LF), there is no file content mapped to individual transaction - not even for the cases where the line starts with 11 as part of the date.
Resolution
The bank statement file should be amended to start with a two digit tag at the beginning of each line and each line should be closed with CR LF.
See Also
Please also refer to KBA 2803247 for other common issues that might be faced during Mexico – BANCOMER/BBVA bank statement upload.
Keywords
BS; MX; Mexico; CR LF; Carriage Return; Line Feed; BANCOMER/BBVA; , KBA , mx , mexico , bancomer/bbva , bs , cr lf , carriage return , line feed , AP-PAY-GLO , Cross-Country Extensions , How To | https://apps.support.sap.com/sap/support/knowledge/en/2761753 | CC-MAIN-2021-39 | refinedweb | 261 | 60.14 |
LIBPFM(3) Linux Programmer's Manual LIBPFM(3)
libpfm_intel_skl - support for Intel SkyLake core PMU
#include <perfmon/pfmlib.h> PMU name: skl PMU desc: Intel SkyLake
The library supports the Intel SkyLake core PMU. It should be noted that this PMU model only covers each core's PMU and not the socket level PMU. On SkyLake, the number of generic counters depends on the Hyperthreading (HT) mode. counters are available. The pfm_get_pmu_info() function returns the maximum number of generic counters in num_cntrs.
The following modifiers are supported on Intel SkyLake. fe_thres This modifier is for the FRONTEND_RETIRED event only. It defines the period in core cycles after which the IDQ_*_BUBBLES umask counts. It acts as a threshold, i.e., at least a period of N core cycles where the frontend did not deliver X uops. It can only be used with the IDQ_*_BUBBLES umasks. If not specified, the default threshold value is 1 cycle. the valid values are in [1-4095].
Intel SkyLake SkyLake, August, 2015 LIBPFM(3) | http://man7.org/linux/man-pages/man3/libpfm_intel_skl.3.html | CC-MAIN-2017-22 | refinedweb | 170 | 69.18 |
In Part one of File Handling in C++, we have introduced the ios class, the parent of all stream classes. We have learned some of its main features (manipulators, and format flags). In this article, we will revisit the ios class one more time, and then talk about the istream and ostream classes.
The ios Class Member Functions
Besides to the Manipulators and Format Flags, the ios class has a list of functions that control formatting. In this section, we are going to investigate some of the commonly-used functions, and illustrate them by example.
The setf() and unsetf() Functions
As I told you in the previous article, the setf() function is used to set (enable) format flags. Its opposite unsetf() disables the effect of the flag.
The fill() Function
This function works in two modes: it can get the fill character currently in use, or to set one.
Its use is equivalent to using the setfill() manipulator function.
The width() Function
Also having two modes: getting the current field width, and setting a new one. This is equivalent to using the setw() manipulator function.
Example
Consider the following code that illustrates the use of fill() and width() functions:
#include <iostream> using namespace std; int main() { int num = 3490; char ch; cout << "Number: "; cout.fill('*'); cout.width(10); cout << num << endl; ch = cout.fill(); cout << "Currently using " << ch << " as filling character." << endl; return 0; }
This program should produce the following output:
The istream and ostream Classes
Derived from the same parent class ios, both istream and ostream classes inherit the features of the ios class, and extend them with features of their own.
istream Class
As its name implies, this class is responsible for input operations. We have already used some of the functions defined in this class in many examples. For instance, we have been using the extraction operator >> since the first few articles. In this section, we are going to review on what we already know, and learn some new functions.
The Extraction Operator >>
Mostly used with the cin object (standard input stream), the extraction operator >> extracts input from the input stream on its left, and assigns it to the variable on its right.
The get() Function
We have learned about this function when talking about Strings. The get() function has many forms of operation depending on the list of provided arguments. For example, the function will wait for a single-character input, if provided a char variable as argument.
#include <iostream> using namespace std; int main() { char choice; cout << "A) Print the hostname\n" << "B) Print the IP Address info\n" << "C) Print the system data and time\n\n" << "Enter your choice: "; cin.get(choice); cout << "\nYou entered " << choice << endl; return 0; }
This will behave as follows:
The get() function could be also used to read a single-line input, or multi-line input if a delimiter character is specified.
The getline() Function
Another way to extract input text. This function is similar to the get() function, with slightly different syntax.
Example
Consider the following program:
#include <iostream> using namespace std; int main() { string str; char delimit = '#'; cout << "\n Enter text: \n"; getline(cin, str, delimit); cout << "\nYou entered: \n" << str << endl; return 0; }
This will read user input until the user enters #.
The gcount() Function
This function returns the number of characters read in the last input operation.
ostream Class
In contrary with istream class, the ostream class is responsible for output operations. We have already used one of the utilities defined in this class: the insertion operator << that we used so far in this series to write to the standard output.
Besides to the extraction operator and the other functions inherited from its parent class ios, like fill(), clear(), and good(), the ostream class has methods of its own like:
put() and write() for unformatted output.
tellp() and seekp() for getting and setting position inside a file.
Summary
In this article, we have continued what we started in Part one in the context of I/O.
- The fill() function gets/sets the padding character in output.
- The width() functions gets/sets field width.
- The istream and ostream classes are derived from the ios class.
That was Part two. See you in Part three, and File I/O. | https://blog.eduonix.com/system-programming/learn-inputoutput-file-handling-c-part-2/ | CC-MAIN-2020-29 | refinedweb | 713 | 61.87 |
Python Modules make it easy to group related code into .py files that you can reference in your program, main.py. You do this by using an import statement that calls the file or specific parts of it.
This example includes a variety of emoji that represent images from microbit.Image, a class of the microbit module. Save this file as emoji.py on your computer.
If you are developing a python module for the micro:bit, you can add magic header information that this is a microbit-module and give it a name and version number in the format major.minor.patch eg. emoji@0.1.0. This way the editor will add the module directly to the filesystem if it is dragged and dropped into the editor.
Adding by Drag and Drop
Now drag and drop emoji.py onto the editor window and you will see a message saying that 'the "emoji" module has been added to the filesystem'. You will now see the module under your project files and you can call the module from your main.py program.
Adding by the Project Files dialog
The FileSystem functionality built into the Python editor makes it simple to create and add an external module to be used in the program. To do this, select the Load/Save button from the menu.
This will open the files modal, look for Project Files and expand the dropdown button to reveal the files currently available to your project.
Choose Add file to open a file picker and select the emoji.py file that you downloaded earlier. The file will now appear under your Project Files.
You can now call the module from your main.py program.
Calling the module from main.py
In the text editor import everything from the microbit module and from your newly added emoji module
from microbit import *
from emoji import *
Now create a program that uses the emojis from your module in place of their Image class equivalents
| https://support.microbit.org/support/solutions/articles/19000106811-adding-a-module-to-the-python-editor | CC-MAIN-2022-05 | refinedweb | 332 | 73.17 |
CodeBetter.Com Editors Note : Manning Publications gave us exclusive rights to publish this excerpt from “C# in Depth, Second Edition”. In addition, we were given 5 free e-books to give away to our readers, if you are one of the first 5 people to track/link back to/tweet this article. Also, Manning is offering 40% off C# in Depth, 2E with promocode: codebetter40
C# in Depth, Second Edition
IN PRINT
Jon Skeet
November 2010 | 584 pages
ISBN: 9781935182474
This article is taken from the book C# in Depth, Second Edition. The author explains optional parameters, whose values don’t have to be explicitly specified by the caller. He also talks about named arguments. The idea of named arguments is that, when you specify an argument value, you can also specify the name of the parameter it’s supplying the value for.
Optional parameters and named arguments are perhaps the Batman and Robin[1] features of C# 4. They’re distinct but usually seen together. I’m going to keep them apart for the moment so we can examine each in turn, but then we’ll use them together for some more interesting examples.
PARAMETERS AND ARGUMENTS This article obviously talks about parameters and arguments a lot. In casual conversation, the two terms are often used interchangeably, but I’m going to use them in line with their formal definitions. A parameter (also known as a formal parameter) is the variable that’s part of the method or indexer declaration. An argument is an expression used when calling the method or indexer. So, for example, consider this snippet:
void Foo(int x, int y) { // Do something with x and y } ... int a = 10; Foo(a, 20);
Here the parameters are x and y, and the arguments are a and 20.
We’ll start by looking at optional parameters.
Optional parameters
Visual Basic has had optional parameters for ages, and they’ve been in the CLR from .NET 1.0. The concept is as obvious as it sounds: some parameters are optional, so their values don’t have to be explicitly specified by the caller. Any parameter that hasn’t been specified as an argument by the caller is given a default value.
Motivation
Optional parameters are usually used when there are several values required for an operation, where the same values are used a lot of the time. For example, suppose you wanted to read a text file; you might want to provide a method that allows the caller to specify the name of the file and the encoding to use. The encoding is almost always UTF-8, though, so it’s nice to be able to use that automatically if it’s all you need.
Historically the idiomatic way of allowing this in C# has been to use method overloading: declare one method with all the possible parameters, and others that call that method, passing in default values where appropriate. For instance, you might create methods like this:
public IList<Customer> LoadCustomers(string filename, Encoding encoding) { ... #A } public IList<Customer> LoadCustomers(string filename) { return LoadCustomers(filename, Encoding.UTF8); #B } #A Do real work here #B Default to UTF-8
This works fine for a single parameter, but it becomes trickier when there are multiple options. Each extra option doubles the number of possible overloads, and if two of them are of the same type, you can have problems due to trying to declare multiple methods with the same signature. Often the same set of overloads is also required for multiple parameter types. For example, the XmlReader.Create() method can create an XmlReader from a Stream, a TextReader, or a string—but it also provides the option of specifying an XmlReaderSettings and other arguments. Due to this duplication, there are 12 overloads for the method. This could be significantly reduced with optional parameters. Let’s see how it’s done.
Declaring optional parameters and omitting them when supplying arguments
Making a parameter optional is as simple as supplying a default value for it, using what looks like a variable initializer. Figure 1 shows a method with three parameters: two are optional, one is required.
Figure 1 Declaring optional parameters
All the method does is print out the arguments, but that’s enough to see what’s going on. The following listing gives the full code and calls the method three times, specifying a different number of arguments for each call.
Listing 1 Declaring a method with optional parameters and calling
static void Dump(int x, int y = 20, int z = 30) #1 { Console.WriteLine("x={0} y={1} z={2}", x, y, z); } ... Dump(1, 2, 3); #2 Dump(1, 2); #3 Dump(1); #4 #1 Declares method with optional parameters #2 Calls method with all arguments #3 Omits one argument #4 Omits two arguments
The optional parameters are the ones with default values specified (#1). If the caller doesn’t specify y, its initial value will be 20, and likewise z has a default value of 30. The first call (#2) explicitly specifies all the arguments; the remaining calls (#2 and #4) omit one or two arguments respectively, so the default values are used. When there’s one argument missing, the compiler assumes that the final parameter has been omitted—then the penultimate one, and so on. The output is therefore
x=1 y=2 z=3 x=1 y=2 z=30 x=1 y=20 z=30
Note that although the compiler could use some clever analysis of the types of the optional parameters and the arguments to work out what’s been left out, it doesn’t: it assumes that you’re supplying arguments in the same order as the parameters[2]. This means that the following code is invalid:
static void TwoOptionalParameters( int x = 10, string y = "default") { Console.WriteLine("x={0} y={1}", x, y); } ... TwoOptionalParameters("second parameter"); #A #A Error!
This tries to call the TwoOptionalParameters method specifying a string for the first argument. There’s no overload with a first parameter that’s convertible from a string, so the compiler issues an error. This is a good thing—overload resolution is tricky enough (particularly when generic type inference gets involved) without the compiler trying all kinds of different permutations to find something you might be trying to call.
If you want to omit the value for one optional parameter but specify a later one, you need to use named arguments.
Restrictions on optional parameters
There are a few rules for optional parameters. All optional parameters must come after required parameters. The exception to this is a parameter array (as declared with the params modifier), which still has to come at the end of a parameter list, but can come after optional parameters. A parameter array can’t be declared as an optional parameter—if the caller doesn’t specify any values for it, an empty array will be used instead. Optional parameters can’t have ref or out modifiers either.
An optional parameter can be of any type, but there are restrictions on the default value specified. You can always use constants: numeric and string literals, null, const members, enum members, and the default(T) operator. Additionally, for value types, you can call the parameterless constructor, although this is equivalent to using the default (…) operator anyway. There has to be an implicit conversion from the specified value to the parameter type, but this must not be a user-defined conversion.
Table 1 shows some examples of valid parameter lists.
Table 1 Valid method parameter lists using optional parameters
By contrast, table 2 shows some invalid parameter lists and explains why they’re not allowed.
Table 2 Invalid method parameter lists using optional parameters
The fact that the default value has to be constant is a pain in two different ways. One of them is familiar from a slightly different context, as we’ll see now.
Versioning and optional parameters
The restrictions on default values for optional parameters may remind you of the restrictions on const fields or attribute values, and they behave very similarly. In both cases, when the compiler references the value, it copies it directly into the output. The generated IL acts exactly as if your original source code had contained the default value. This means if you ever change the default value without recompiling everything that references it, the old callers will still be using the old default value. To make this concrete, imagine this set of steps:
- Create a class library (Library.dll) with a class like this:
public class LibraryDemo { public static void PrintValue(int value = 10) { System.Console.WriteLine(value); } }
- Create a console application (Application.exe) that references the class library:
public class Program { static void Main() { LibraryDemo.PrintValue(); } }
- Run the application—it’ll print 10, predictably.
- Change the declaration of PrintValue as follows, then recompile just the class library:
public static void PrintValue(int value = 20)
- Rerun the application—it’ll still print 10. The value has been compiled directly into the executable.
- Recompile the application and rerun it—this time it’ll print 20.
This versioning issue can cause bugs that are hard to track down, because all the code looks correct. Essentially, you’re restricted to using genuine constants that should never change as default values for optional parameters[3]. There’s one benefit of this setup: it gives the caller a guarantee that the value it knew about at compile-time is the one that’ll be used. Developers may feel more comfortable with that than with a dynamically computed value, or one that depends on the version of the library used at execution time.
Of course, this also means you can’t use any values that can’t be expressed as constants anyway—you can’t create a method with a default value of “the current time,” for example.
Making defaults more flexible with nullity
Fortunately, there’s a way round this. Essentially you introduce a magic value to represent the default, and then replace that magic value with the real default within the method itself. If the phrase magic value bothers you, I’m not surprised—except we’re going to use null for the magic value, which already represents the absence of a normal value. If the parameter type would normally be a value type, we simply make it the corresponding nullable value type, at which point we can still specify that the default value is null.
As an example of this, let’s look at a similar situation to the one I used to introduce the whole topic: allowing the caller to supply an appropriate text encoding to a method, but defaulting to UTF-8. We can’t specify the default encoding as Encoding. UTF8 as that’s not a constant value, but we can treat a null parameter value as “use the default.” To demonstrate how we can handle value types, we’ll make the method append a timestamp to a text file with a message. We’ll default the encoding to UTF-8 and the timestamp to the current time. Listing 2 shows the complete code and a few examples of using it.
Listing 2 Using null default values to handle nonconstant situations
static void AppendTimestamp(string filename, #A string message, Encoding encoding = null, #1 DateTime? timestamp = null) { Encoding realEncoding = encoding ?? Encoding.UTF8; #2 DateTime realTimestamp = timestamp ?? DateTime.Now; using (TextWriter writer = new StreamWriter(filename, true, realEncoding)) { writer.WriteLine("{0:s}: {1}", realTimestamp, message); } } ... AppendTimestamp("utf8.txt", "First message"); AppendTimestamp("ascii.txt", "ASCII", Encoding.ASCII); AppendTimestamp("utf8.txt", "Message in the future", null, #3 new DateTime(2030, 1, 1)); #A Two required parameters #1 Two optional parameters #2 Null coalescing operator for convenience #3 Explicit use of null
Listing 2 shows a few nice features of this approach. First, we’ve solved the versioning problem. The default values for the optional parameters are null (#1), but the effective values are “the UTF-8 encoding” and “the current date and time.” Neither of these could be expressed as constants, and should we ever wish to change the effective default—for example to use the current UTC time instead of the local time—we could do so without having to recompile everything that called AppendTimestamp. Of course, changing the effective default changes the behavior of the method—you need to take the same sort of care over this as you would with any other code change. At this point, you (as the library author) are in charge of the versioning story—you’re taking responsibility for not breaking clients, effectively. At least it’s more familiar territory: you know that all callers will experience the same behavior, regardless of recompilation.
We’ve also introduced an extra level of flexibility. Not only do optional parameters mean we can make the calls shorter, but having a specific “use the default” value means that should we ever wish to, we can explicitly make a call allowing the method to choose the appropriate value. At the moment, this is the only way we know to specify the timestamp explicitly without also providing an encoding (#3), but that’ll change when we look at named arguments.
The optional parameter values are simple to deal with thanks to the null coalescing operator (#2). I’ve used separate variables for the sake of printed formatting, but in real code you’d probably use the same expressions directly in the calls to the Stream-Writer constructor and the WriteLine method.
There are two downsides to this approach: first, it means that if a caller accidentally passes in null due to a bug, it’ll get the default value instead of an exception. In cases where you’re using a nullable value type and callers will either explicitly use null or have a non-nullable argument, that’s not much of a problem—but for reference types it could be an issue.
On a related note, it requires that you don’t want to use null as a “real” value[4]. There are occasions where you want null to mean null—and if you don’t want that to be the default value, you’ll have to find a different constant or just leave the parameter as a required one. But in other cases where there isn’t an obvious constant value that’ll clearly always be the right default, I’d recommend this approach to optional parameters as one that’s easy to follow consistently and removes some of the normal difficulties.
We’ll need to look at how optional parameters affect overload resolution, but it makes sense to wait until we’ve seen how named arguments work. Speaking of which…
Named arguments
The basic idea of named arguments is that when you specify an argument value, you can also specify the name of the parameter it’s supplying the value for. The compiler then makes sure that there is a parameter of the right name, and uses the value for that parameter. Even on its own, this can increase readability in some cases. In reality, named arguments are most useful in cases where optional parameters are also likely to appear, but we’ll look at the simple situation first.
INDEXERS, OPTIONAL PARAMETERS, AND NAMED ARGUMENTS You can use optional parameters and named arguments with indexers as well as methods. But this is only useful for indexers with more than one parameter: you can’t access an indexer without specifying at least one argument anyway. Given this limitation, I don’t expect to see the feature used much with indexers. It works exactly as you’d expect it to, though.
I’m sure we’ve all seen code that looks something like this:
MessageBox.Show("Please do not press this button again", // text "Ouch!"); // title
I’ve actually chosen a pretty tame example; it can get a lot worse when there are loads of arguments, especially if a lot of them are the same type. But this is still realistic: even with just two parameters, I’d find myself guessing which argument meant what based on the text when reading this code, unless it had comments like the ones I have here. There’s a problem though: comments can lie about the code they describe. Nothing is checking them at all. Named arguments ask the compiler to help.
Syntax
All we need to do to the previous example is prefix each argument with the name of the corresponding parameter and a colon:
MessageBox.Show(text: "Please do not press this button again", caption: "Ouch!");
Admittedly we now don’t get to choose the name we find most meaningful (I prefer title to caption) but at least we’ll know if we get something wrong. Of course, the most common way in which we could get something wrong here is to get the arguments the wrong way around. Without named arguments, this would be a problem: we’d end up with the pieces of text switched in the message box. With named arguments, the ordering becomes largely irrelevant. We can rewrite the previous code like this:
MessageBox.Show(caption: "Ouch!", text: "Please do not press this button again");
We’d still have the right text in the right place, because the compiler would work out what we meant based on the names. For another example, look at the StreamWriter constructor call we used in listing 2. The second argument is just true—what does this mean? Is it going to force a stream flush after every write? Include a byte order mark? Append to an existing file instead of creating a new one? Here’s the equivalent call using named arguments:
new StreamWriter(path: filename, append: true, encoding: realEncoding)
In both of the examples, we’ve seen how named arguments effectively attach semantic meaning to values. In the never-ending quest to make our code communicate better with humans as well as computers, this is a definite step forward. I’m not suggesting that named arguments should be used when the meaning is already obvious, of course. Like all features, it should be used with discretion and thought.
NAMED ARGUMENTS WITH out AND ref If you want to specify the name of an argument for a ref or out parameter, you put the ref or out modifier after the name, and before the argument. So using int.TryParse as an example, you might have code like this:
int number; bool success = int.TryParse("10", result: out number);
To explore some other aspects of the syntax, the following listing shows a method with three integer parameters, just like the one we used to start looking at optional parameters.
Listing 3 Simple examples of using named arguments
static void Dump(int x, int y, int z) #1 { Console.WriteLine("x={0} y={1} z={2}", x, y, z); } ... Dump(1, 2, 3); #2 Dump(x: 1, y: 2, z: 3); #3 Dump(z: 3, y: 2, x: 1); Dump(1, y: 2, z: 3); #4 Dump(1, z: 3, y: 2); #1 Declares method as normal #2 Calls method as normal #3 Specifies names for all arguments #4 Specifies names for some arguments
The output is the same for each call in listing 3: x=1, y=2, z=3. We’ve effectively made the same method call in five different ways. It’s worth noting that there are no tricks in the method declaration (#1): you can use named arguments with any method that has parameters. First we call the method in the normal way, without using any new features (#2). This is a sort of control point to make sure that the other calls really are equivalent. We then make two calls to the method using just named arguments (#3). The second of these calls reverses the order of the arguments, but the result is still the same, because the arguments are matched up with the parameters by name, not position. Finally there are two calls using a mixture of named arguments and positional arguments (#4). A positional argument is one that isn’t named—so every argument in valid C# 3 code is a positional argument from the point of view of C# 4. Figure 2 shows how the final line of code works.
Figure 2 Positional and named arguments in the same call
All named arguments have to come after positional arguments—you can’t switch between the styles. Positional arguments always refer to the corresponding parameter in the method declaration—you can’t make positional arguments skip a parameter by specifying it later with a named argument. This means that these method calls would both be invalid:
- Dump(z: 3, 1, y: 2)—Positional arguments must come before named ones.
- Dump(2, x: 1, z: 3)—x has already been specified by the first positional argument, so we can’t specify it again with a named argument.
Now, although in this particular case the method calls have been equivalent, that’s not always the case. Let’s look at why reordering arguments might change behavior.
Argument evaluation order
We’re used to C# evaluating its arguments in the order they’re specified—which, until C# 4, has always been the order in which the parameters have been declared too. In C# 4, only the first part is still true: the arguments are still evaluated in the order they’re written, even if that’s not the same as the order in which they’re declared as parameters. This matters if evaluating the arguments has side effects. It’s usually worth trying to avoid having side effects in arguments, but there are cases where it can make the code clearer. A more realistic rule is to try to avoid side effects that might interfere with each other. For the sake of demonstrating execution order, we’ll break both of these rules. Please don’t treat this as a recommendation that you do the same thing.
First we’ll create a relatively harmless example, introducing a method that logs its input and returns it—a sort of logging echo. We’ll use the return values of three calls to this to call the Dump method (which isn’t shown, as it hasn’t changed). Listing 4 shows two calls to Dump that result in slightly different output.
Listing 4 Logging argument evaluation
static int Log(int value) { Console.WriteLine("Log: {0}", value); return value; } ... Dump(x: Log(1), y: Log(2), z: Log(3)); Dump(z: Log(3), x: Log(1), y: Log(2));
The results of running listing 4 show what happens:
Log: 1 Log: 2 Log: 3 x=1 y=2 z=3 Log: 3 Log: 1 Log: 2 x=1 y=2 z=3
In both cases, the parameters in the Dump method are still 1, 2, and 3, in that order. But we can see that although they were evaluated in that order in the first call (which was equivalent to just using positional arguments), the second call evaluated the value used for the z parameter first. We can make the effect even more significant by using side effects that change the results of the argument evaluation, as shown in the following listing, again using the same Dump method.
Listing 5 Abusing argument evaluation order
int i = 0; Dump(x: ++i, y: ++i, z: ++i); i = 0; Dump(z: ++i, x: ++i, y: ++i);
The results of listing 5 may be best expressed in terms of the blood spatter pattern at a murder scene, after someone maintaining code like this has gone after the original author with an axe. Yes, technically speaking the last line prints out x=2 y=3 z=1 but I’m sure you see what I’m getting at. Just say “no” to code like this. By all means, reorder your arguments for the sake of readability: you may think that laying out a call to MessageBox.Show with the title coming above the text in the code itself reflects the onscreen layout more closely, for example. If you want to rely on a particular evaluation order for the arguments, though, introduce some local variables to execute the relevant code in separate statements. The compiler won’t care either way—it’ll follow the rules of the spec—but this reduces the risk of a “harmless refactoring” that inadvertently introduces a subtle bug.
To return to cheerier matters, let’s combine the two features (optional parameters and named arguments) and see how much tidier the code can be.
Putting the two together
The two features work in tandem with no extra effort required on your part. It’s not uncommon to have a bunch of parameters where there are obvious defaults, but where it’s hard to predict which ones a caller will want to specify explicitly. Figure 3 shows just about every combination: a required parameter, two optional parameters, a positional argument, a named argument, and a missing argument for an optional parameter.
Figure 3 Mixing named arguments and optional parameters
Going back to an earlier example, in listing 2 we wanted to append a timestamp to a file using the default encoding of UTF-8, but with a particular timestamp. Back then we just used null for the encoding argument, but now we can write the same code more simply, as shown in the following listing.
Listing 6 Combining named and optional arguments
static void AppendTimestamp(string filename, string message, Encoding encoding = null, DateTime? timestamp = null) { #A } ... AppendTimestamp("utf8.txt", "Message in the future", #B timestamp: new DateTime(2030, 1, 1)); #C #A Same implementation as before #B Encoding is omitted #C Named timestamp argument
In this fairly simple situation, the benefit isn’t particularly huge, but in cases where you want to omit three or four arguments but specify the final one, it’s a real blessing.
We’ve seen how optional parameters reduce the need for huge long lists of overloads, but one specific pattern where this is worth mentioning is with respect to immutability.
Immutability and object initialization
One aspect of C# 4 that disappoints me somewhat is that it hasn’t done much explicitly to make immutability easier. Immutable types are a core part of functional programming, and C# has been gradually supporting the functional style more and more… except for immutability. Object and collection initializers make it easy to work with mutable types, but immutable types have been left out in the cold. (Automatically implemented properties fall into this category too.) Fortunately, though they’re not particularly designed to aid immutability, named arguments and optional parameters allow you to write object initializer–like code that just calls a constructor or other factory method. For instance, suppose we were creating a Message class, which required a from address, a to address, and a body, with the subject and attachment being optional. (We’ll stick with single recipients in order to keep the example as simple as possible.) We could create a mutable type with appropriate writable properties, and construct instances like this:
Message message = new Message { From = "skeet@pobox.com", To = "csharp-in-depth-readers@everywhere.com", Body = "Hope you like the second edition", Subject = "A quick message" };
That has two problems: first, it doesn’t enforce the required data to be provided. We could force those to be supplied to the constructor, but then (before C# 4) it wouldn’t be obvious which argument meant what:
Message message = new Message( "skeet@pobox.com", "csharp-in-depth-readers@everywhere.com", "Hope you like the second edition") { Subject = "A quick message" };
The second problem is that this initialization pattern simply doesn’t work for immutable types. The compiler has to call a property setter after it has initialized the object. But we can use optional parameters and named arguments to come up with something that has the nice features of the first form (only specifying what you’re interested in and supplying names) without losing the validation of which aspects of the message are required or the benefits of immutability. The following listing shows a possible constructor signature and the construction step for the same message as before.
Listing 7 Constructing an immutable message using C# 4
public Message(string from, string to, string body, string subject = null, byte[] attachment = null) { #A } ... Message message = new Message( from: "skeet@pobox.com", to: "csharp-in-depth-readers@everywhere.com", body: "Hope you like the second edition", subject: "A quick message" ); #A Normal initialization code goes here
I really like this in terms of readability and general cleanliness. You don’t need hundreds of constructor overloads to choose from, just one with some of the parameters being optional. The same syntax will also work with static creation methods, unlike object initializers. The only downside is that it really relies on your code being consumed by a language that supports optional parameters and named arguments; otherwise callers will be forced to write ugly code to specify values for all the optional parameters. Obviously there’s more to immutability than getting values to the initialization code, but this is a welcome step in the right direction nonetheless.
There are a couple of final points to make around these features before we move on to COM, both around the details of how the compiler handles our code and the difficulty of good API design.
Overload resolution
Clearly both named arguments and optional parameters affect how the compiler resolves overloads—if there are multiple method signatures available with the same name, which should it pick? Optional parameters can increase the number of applicable methods (if some methods have more parameters than the number of specified arguments) and named arguments can decrease the number of applicable methods (by ruling out methods that don’t have the appropriate parameter names).
For the most part, the changes are intuitive: to check whether any particular method is applicable, the compiler tries to build a list of the arguments it would pass in, using the positional arguments in order, then matching the named arguments up with the remaining parameters. If a required parameter hasn’t been specified or if a named argument doesn’t match any remaining parameters, the method isn’t applicable.
There are two situations I’d like to draw particular attention to. First, if two methods are both applicable and one of them has been given all of its arguments explicitly whereas the other uses an optional parameter filled in with a default value, the method that doesn’t use any default values will win. But this doesn’t extend to just comparing the number of default values used—it’s a strict “does it use default values or not” divide. For example, consider the following:
static void Foo(int x = 10) {} static void Foo(int x = 10, int y = 20) {} ... Foo(); #1 Foo(1); #2 Foo(y: 2); #3 Foo(1, 2); #4 #1 Error: ambiguous #2 Calls first overload #3 Calls second overload #4 Calls second overload
In the first call (#1), both methods are applicable because of their optional parameters. But the compiler can’t work out which one you meant to call: it’ll raise an error. In the second call (#2), both methods are still applicable, but the first overload is used because it can be applied without using any default values, whereas the second overload uses the default value for y. For both the third and fourth calls, only the second overload is applicable. The third call (#3) names the y argument, and the fourth call (#4) has two arguments; both of these mean the first overload isn’t applicable.
OVERLOADS AND INHERITANCE DON’T ALWAYS MIX NICELY All of this is assuming that the compiler has gone as far as finding multiple overloads to choose between. If some methods are declared in a base type, but there are applicable methods in a more derived type, the latter will win. This has always been the case, and it can cause some surprising results (see)… but now optional parameters mean there may be more applicable methods than you’d expect. I advise you to avoid overloading a base class method within a derived class unless you get a huge benefit.
The second point is that sometimes named arguments can be an alternative to casting in order to help the compiler resolve overloads. Sometimes a call can be ambiguous because the arguments can be converted to the parameter types in two different methods, but neither method is better than the other in all respects. For instance, consider the following method signatures and a call:
void Method(int x, object y) { ... } void Method(object a, int b) { ... } ... Method(10, 10); #A #A Ambiguous call
Both methods are applicable, and neither is better than the other. There are two ways to resolve this, assuming you can’t change the method names to make them unambiguous that way. (That’s my preferred approach. Make each method name more informative and specific, and the general readability of the code can go up.) You can either cast one of the arguments explicitly, or use named arguments to resolve the ambiguity:
void Method(int x, object y) { ... } void Method(object a, int b) { ... } ... Method(10, (object) 10); #A Method(x: 10, y: 10); #B #A Casting to resolve ambiguity #B Naming to resolve ambiguity
Of course this only works if the parameters have different names in the different methods—but it’s a handy trick to know. Sometimes the cast will give more readable code; sometimes the name will. It’s just an extra weapon in the fight for clear code. It does have a downside, along with named arguments in general: it’s another thing to be careful about when you change a method.
The silent horror of changing names
In the past, parameter names haven’t mattered much if you’ve only been using C#. Other languages may have cared, but in C# the only times that parameter names were important were when you were looking at IntelliSense and when you were looking at the method code itself. Now, the parameter names of a method are effectively part of the API even if you’re only using C#. If you change them at a later date, code can break—anything that was using a named argument to refer to one of your parameters will fail to compile if you decide to change it. This may not be much of an issue if your code is only consumed by itself anyway, but if you’re writing a public API, be aware that changing a parameter name is a big deal. It always has been really, but if everything calling the code was written in C#, we’ve been able to ignore that until now.
Renaming parameters is bad; switching the names around is worse. That way the calling code may still compile, but with a different meaning. A particularly evil form of this is to override a method and switch the parameter names in the overridden version. The compiler will always look at the deepest override it knows about, based on the static type of the expression used as the target of the method call. You don’t want to get into a situation where calling the same method implementation with the same argument list results in different behavior based on the static type of a variable.
Summary
Named arguments and optional parameters are possibly two of the simplest-sounding features of C# 4—and yet they still have a fair amount of complexity, as we’ve seen. The basic ideas are easily expressed and understood—and the good news is that most of the time that’s all you need to care about. You can take advantage of optional parameters to reduce the number of overloads you write, and named arguments can make code much more readable when several easily confusable arguments are used. The trickiest bit is probably deciding which default values to use, bearing in mind potential versioning issues. Likewise it’s now more obvious than before that parameter names matter, and you need to be careful when overriding existing methods, to avoid being evil to your callers.
For Source Code, Sample Chapters, the Author Forum and other resources, go to
[3] Or you could just accept that you’ll need to recompile everything if you change the value. In many contexts that’s a reasonable tradeoff.
[4] We almost need a second null-like special value, meaning “please use the default value for this parameter”—and allow that special value to be supplied either automatically for missing arguments or explicitly in the argument list. I’m sure this would cause dozens of problems, but it’s an interesting thought experiment.
Pingback: Tweets that mention C# In Depth – Optional Parameters and Named Arguments | CodeBetter.Com -- Topsy.com
Pingback: The Morning Brew - Chris Alcock » The Morning Brew #768 | http://codebetter.com/2011/01/11/c-in-depth-optional-parameters-and-named-arguments-2/ | CC-MAIN-2016-50 | refinedweb | 6,169 | 50.36 |
Count
Counts the number of elements within each aggregation.
Examples
In the following example, we create a pipeline with two
PCollections of produce.
Then, we apply
Count to get the total number of elements in different ways.
Example 1: Counting all elements in a PCollection
We use
Count.Globally() to count all elements in a
PCollection, even if there are duplicate elements.
Output:
Example 2: Counting elements for each key
We use
Count.PerKey() to count the elements for each unique key in a
PCollection of key-values.
import apache_beam as beam with beam.Pipeline() as pipeline: total_elements_per_keys = ( pipeline | 'Create plants' >> beam.Create([ ('spring', '🍓'), ('spring', '🥕'), ('summer', '🥕'), ('fall', '🥕'), ('spring', '🍆'), ('winter', '🍆'), ('spring', '🍅'), ('summer', '🍅'), ('fall', '🍅'), ('summer', '🌽'), ]) | 'Count elements per key' >> beam.combiners.Count.PerKey() | beam.Map(print))
Output:
Example 3: Counting all unique elements
We use
Count.PerElement() to count the only the unique elements in a
PCollection.
Output:
Related transforms
N/A
Last updated on 2021/02/05
Have you found everything you were looking for?
Was it all useful and clear? Is there anything that you would like to change? Let us know! | https://beam.apache.org/documentation/transforms/python/aggregation/count/ | CC-MAIN-2021-31 | refinedweb | 182 | 60.82 |
Factory reset via hardware
Hi,
I've just got round to playing with my Wipy 1, and the first thing I wanted to do was to put it onto my network.
So, I followed the wifi tutorial, and saved the simple boot.py script (without static IP) to /flash and reset the board.
And promptly lost it! It's not connected to my network, and it's not in AP mode. The heartbeat is flashing, so it's definitely alive, I assume it's stuck trying to run the boot.py...
How do I do a factory reset if I can't get access to it?
- Keptenkurk last edited by
And until machine.reset_cause is fixed it also works on the LoPy with the line:
if machine.reset_cause() != machine.SOFT_RESET:
removed
@Roberto I was thinking of something like that... Maybe something like this should be the default final result of the wifi tutorial? fall back to AP mode, I mean...
Anyway, I'm moving over to the micropython forums (I did wonder why it was quiet here!)
Thanks!
The information that you require to boot your WiPy in the different boot and safe boot modes is posted in the documentation here:
Furthermore, i want to help you with this nice script for your boot.py to connect to your network (make sure is a 2.4Ghz router, WiPy only supports 2.4Ghz).
The good thing in this script is that if it fails to connect it will fallback to AP mode after boot. You just have to replace the "Router SSID" with yours and the "Router Password".
Ex, The line you need to change should look like
known_nets = [('MyNetworkName', '123456')]
import machine import os uart = machine.UART(0, 115200) # Remove this if you don't want the serial port enabled on boot os.dupterm(uart) # Remove this if you don't want the terminal to be dumped to serial port if machine.reset_cause() != machine.SOFT_RESET: from network import WLAN # Put your network SSID and password here. You can use multiple networks, this script will try to connect to any of them, if it fails, it will fallback to AP mode known_nets = [('Router SSID', 'Router Password')]) # Timeout for connection, change this if you require a different timeout (ms) except: wl.init(mode=WLAN.AP, ssid=original_ssid, auth=original_auth, channel=6, antenna=WLAN.INT_ANT)
Aye, although my friend Google's usually quicker!
By the way, AFAIK the most active WiPy discussions are over at .
Jim
Thanks! A case of RTFM I think! | https://forum.pycom.io/topic/47/factory-reset-via-hardware/5?lang=en-US | CC-MAIN-2020-29 | refinedweb | 420 | 73.88 |
Quoting Cedric Le Goater (clg@fr.ibm.com):> Sam> eth0 or something in the guest and let the guest handle upper network layers ?> > lo0 would just be exposed relying on skbuff tagging to discriminate traffic> between guests.This seems to me the preferable way. We create a full virtual netdevice for each new container, and fully virtualize the devicenamespace.> host | guest 0 | guest 1 | guest2> ----------------------+-----------+-----------+--------------> | | | |> |-> l0 <-------+-> lo0 ... | lo0 | lo0> | | | |> |-> bar0 <--------+-> eth0 | |> | | | |> |-> foo0 <--------+-----------+-----------+-> eth0> | | | |> `-> foo0:1 <-------+-----------+-> eth0 |> | | |> > > is that clear ? stupid ? reinventing the wheel ?The last one in your diagram confuses me - why foo0:1? I wouldhave thought it'd behost | guest 0 | guest 1 | guest2----------------------+-----------+-----------+-------------- | | | | |-> l0 <-------+-> lo0 ... | lo0 | lo0 | | | | |-> eth0 | | | | | | | |-> veth0 <--------+-> eth0 | | | | | | |-> veth1 <--------+-----------+-----------+-> eth0 | | | | |-> veth2 <-------+-----------+-> eth0 |I think we should avoid using device aliases, as trying to dosomething like giving eth0:1 to guest1 and eth0:2 to guest2while hiding eth0:1 from guest2 requires some uglier code (asI recall) than working with full devices. In other words,if a namespace can see eth0, and eth0:2 exists, it should alwayssee eth0:2.So conceptually using a full virtual net device per containercertainly seems cleaner to me, and it seems like it should besimpler by way of statistics gathering etc, but are there actuallyany real gains? Or is the support for multiple IPs per deviceactually enough?Herbert, is this basically how ngnet is supposed to work?-serge-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2006/6/29/512 | CC-MAIN-2015-22 | refinedweb | 255 | 65.93 |
In this article, I will demonstrate how to export GridView data into Word, Excel, and pdf files using ASP.NET.
Introduction
I will use the jQuery plugin to search, sort, and paginate the data.
Step 1 Open SQL Server 2014 and create a database table.
Open Visual Studio 2015 and click on New Project
Screenshot-1
After clicking on New Project, one window will appear. Select Web from the left panel to choose ASP.NET Web Application, give a meaningful name to your project, then click OK.
After clicking on OK, one more window will appear. Choose Empty check on Web Forms checkbox and click on OK, as shown in the below screenshot.
After clicking on OK, the project will get created with the name as ExportWordExcelAndPDF_Demo.
Step 3 Right-click on web config file to add the database connection.
Step 4 Right-click on the project on Solution Explorer, select Add, choose New Item, and click on it.
Another window will appear. Select web from the left panel and choose Web Form, give it a meaningful name and click on Add. The Web Form will be added to the project.
Screenshot-2
Step 5 Click on Tools >> NuGet Package Manager >> Manage NuGet Packages for Solution.
Screenshot for NuGet Package
After that, a window will appear. Choose Browse >> type bootstrap and install the relevant package from the list.
Similarly, type jQuery and install the latest version of jQuery package in your project along with the jQuery validation file from NuGet and then, close the NuGet Solution.
Keep the required bootstrap and jQuery files while delete the remaining files if not using. Or you can download from and add in project.
Step 6 Add the following styles and scripts in head section of the Web Form.
Step 7 Design the Web Form using HTML, Bootstrap, and ASP.NET buttons and GridView control.
Step 8 Right-click on Web Form, select view code, and click on it.
Add namespace
Bind GridView with the database
Add the following method to export Word and Excel format.
Step 9 Double click on ExportToWord control. Write the following code.
Step 10 Double click on ExportToExcel control. Write the following code.
Step 11 Double-click on ExportToPDF control. Write the following code. To export the PDF format, we need to add itextsharp.dll from here -
Complete code of Web Form
Step 12 Run the project by pressing Ctrl+F5.
Conclusion
In this article, I have explained how to export GridView data into Word, Excel and PDF step by step.
I hope it will be helpful.
View All | https://www.c-sharpcorner.com/article/how-to-export-gridview-data-in-word-excel-and-pdf-format-using-asp-net/ | CC-MAIN-2020-05 | refinedweb | 430 | 76.22 |
The limits of my language mean the limits of my world.
– Ludwig Wittgenstein
For the past few months I’ve been exclusively writing ECMAScript 6 code by taking advantage of transpilation[1] to a currently supported version of JavaScript.
ECMAScript 6, henceforth ES6 and formerly ES.next, is the latest version of the specification. As of August 2014 no new features are being discussed, but details and edge cases are still being sorted out. It’s expected to be completed and published mid-2015.
Adopting ES6 has simultaneously resulted in increased productivity (by making my code more succinct) and eliminated entire classes of bugs by addressing common JavaScript gotchas.
More importantly, however, it’s reaffirmed my belief in an evolutionary approach towards language and software design as opposed to clean-slate recreation.
This should be fairly obvious to you if you’ve been using CoffeeScript, which set out to focus on the good parts of JS and hide the broken ones. ES6 has been able to adopt a lot of CoffeeScript’s great innovations in a non-disruptive way, to such an extent that some have even questioned its role moving forward.
For all intents and purposes, JavaScript has merged CoffeeScript into master. I call that a victory for making things and trying them out.
– @raganwald
Instead of making a thorough review of all the new features, I’ll point out the most interesting ones. To incentivize developers to upgrade, new languages or frameworks need to (1) feature a compelling compatibility story and (2) give you a big enough carrot.
# The module syntax
ES6 introduces syntax for defining modules and declaring dependencies. I emphasize the word syntax because ES6 is not concerned with the actual implementation details of how modules are fetched or loaded.
This further strengthens the interoperability between the different contexts in which JavaScript can be executed.
Consider as an example the simple task of writing a reusable implementation of CRC32 in JavaScript.
Up to now, no guidelines existed for how to actually do this. A common approach is to introduce a function declaration:
function crc32(){ // … }
With the caveat, of course, that it introduces a single fixed global name that other parts of the code will have to refer to. And from the perspective of the code that uses that
crc32 function, there’s no way to declare the dependency. One just has to assume the function will exist prior to the code’s interpretation.
With this situation in mind, Node.JS led the way with the introduction of the
require runtime function and the
module.exports and
exports objects. Despite succeeding in creating a thriving ecosystem of modules around it, the interoperability possibilities were still somewhat limited.
A common scenario to illustrate this is the generation of browser bundles of modules, with tools like browserify or webpack. These can only be conceived because they treat
require() as syntax, effectively ridding it of its inherent dynamism.
If you’re trying to transport code to the browser, the following is not subject to static analysis and therefore breaks:
require(woot() + ‘_module.js’);
In other words, the packer’s algorithm can’t possibly know what
woot() means ahead of time.
ES6 has introduced the right set of restrictions while accomodating for most existing use cases, drawing inspiration from most of the informally-specified ad-hoc module systems out there, like jQuery’s
$.
The syntax does take some getting used to. The most common pattern for dependency definitions is surprisingly impractical.
The following code:
import crc32 from ‘crc32’;
works for
export default function crc32(){}
but not for
export function crc32(){}
the latter is considered a named export and requires the
{ } syntax in the import statement:
import { crc32 } from ‘crc32’;
In other words, the simplest (and arguably most desirable) form of module definition requires the extra
default keyword. Or in the absence of that, the usage of
{ } when importing.
# Destructuring
One of the most common patterns that has emerged in modern JavaScript code is the usage of option objects.
So common is this practice that newly specified browser APIs, like WHATWG’s fetch (a modern substitute for
XMLHttpRequest) follow it:
fetch(‘/users’, { method: ‘POST’, headers: { Accept: ‘application/json’, ‘Content-Type’: ‘application/json’ }, body: JSON.stringify({ first: ‘Guillermo’, last: ‘Rauch’ }) });
The widespread adoption of this pattern has effectively prevented the JavaScript ecosystem from falling into The Boolean Trap.
If said API accepted regular parameters as opposed to an options object, calling fetch would be an exercise in argument order memorization and the typing of the
null keyword.
// artistic rendition of a nightmare alternative world fetch(‘/users’, ‘POST’, null, null, { Accept: ‘application/json’, ‘Content-Type’: ‘application/json’ }, null, JSON.stringify({ first: ‘Guillermo’, last: ‘Rauch’ }));
On the implementation side of things, however, things haven’t looked nearly as pretty. Looking at the function’s declaration signature is no longer descriptive of its input’s possibilities:
function fetch(url, opts){ // … }
Usually followed by the manual assignment of defaults to local variables:
opts = opts || {}; var body = opts.body || ''; var headers = opts.headers || {}; var method = opts.method || 'GET';
And unfortunately for us, despite being common, the
|| practice actually introduces subtle bugs. In this case we’re not admitting that
opts.body could be
0, so robust code would most likely look like:
var body = opts.body === undefined ? '' : opts.body;
Thanks to destructured parameters we can at once clearly define the parameters, properly set defaults and expose them to the local scope:
fetch(url, { body='', method='GET', headers={} }){ console.log(method); // no opts. everywhere! }
As a matter of fact, defaults can also apply to the entire object parameter as well:
fetch(url, { method='GET' } = {}){ // the second parameter defaults to {} // the following will output "GET": console.log(method); }
You can also destructure with the assignment operator as follows:
var { method, body } = opts;
This is reminiscent to me of the expressiveness granted by
with, without the magic or negative performance implications.
# New conventions
Some parts of the language have been altogether replaced with better alternatives that’ll quickly become a new default for how you write JavaScript.
I’ll go over some of them.
# let/const over var
Instead of writing
var x = y you’ll most likely be writing
let x = y.
let scopes your variable definition to the block it’s defined in:
if (foo) { let x = 5; setTimeout(function(){ // x is `5` here }, 500); } // x is `undefined` here
This is especially useful for
for or
while loops:
for (let i = 0; i < 10; i++) {} // `i` doesn't exist here.
When you want to ensure immutability with the same semantics as
let, use
const instead.
# template strings over concatenation
With the lack of
sprintf or similar utilities in the standard JavaScript library, composing strings has always been more painful than it should.
Template strings make the embedding of expressions trivial, as well as support for multiple lines. Simply replace ‘ with `
let str = ` Hello ${first}. We are in the year ${new Date().getFullYear()} `;
# class over prototypes
Defining a class was cumbersome and required a deep understanding of the language internals. Even though it’s still obviously useful to grasp the inner-workings, the barrier to entry to newcomers was unnecessarily high.
class offers syntax sugar for defining a constructor
function, the methods within
prototype and getters / setters. It also provides prototypical inheritance with syntax alone (no utilities or 3rd party modules).
class A extends B { constructor(){} method(){} get prop(){} set prop(){} }
I initially was surprised to learn classes are not hoisted (explanation here). You should therefore think of them translating roughly to
var A = function(){} as opposed to
function A(){}.
# ()=> over function
Not only is
(x, y) => {} shorter to write than
function (x,y) {}, but the behavior of
this within the function body will most likely refer to what you want.
The so-called “fat arrow” functions are lexically bound. Consider the example of a method within a class that launches two timers:
class Person { constructor(name){ this.name = name; } timers(){ setTimeout(function(){ console.log(this.name); }, 100); setTimeout(() => { console.log(this.name); }, 100); } }
To the dismay of newcomers to the language, the first timer (using
function) will log
"undefined". The second one will now correctly log
name.
# First-class support for async I/O
Asynchronous code execution has been around for almost the entire history of the language.
setTimeout, after all, was introduced around the time JavaScript 1.0 came out.
But arguably, the language didn’t really support it. The return value of function calls that scheduled “future work” is normally
undefined, or in the case of
setTimeout a
Number.
The introduction of
Promise addresses this, and by doing so fills a very large hole of interoperability and composability.
On one hand, APIs you’ll encounter become wholly more predictable. As a test of this, consider the new
fetch API. How does it work beyond the signature we described? You guessed right. It returns a
Promise.
If you’ve used Node.JS in the past, you know that there’s an informal contract that callbacks will follow the signature:
function (err, result){}
Also informally specified is the idea that callbacks will fire only once. And that
null will be the value in the absence of an error (and not
undefined or
false). Except, this might not always be the case.
# Towards the future
ES6 is gaining a lot of momentum in the ecosystem. Chrome and io.js have already incorporated some of its features. A lot has already been written about it.
But what’s worth pointing out is that this momentum has been largely enabled by transpilation rather than actual support. Great tools have emerged to enable transpiling and polyfilling, and browsers have over time added proper debugging and error reporting support for them (through source maps).
The evolution of the language and its proposed features are outpacing implementation. As mentioned above,
Promise is genuinely exciting as a building block alone, but consider the benefits of solving the callback problem once and for all.
ES7 is poised to do this by introducing the possibility of
await-ing a promise:
async function uploadAvatar(){ let user = await getUser(); user.avatar = await getAvatar(); return await user.save(); }
Even though the spec is in its early discussions, the same tool that compiles ES6 to ES5 already enables it.
There’s still substantial work left to do to make sure the process of adopting new language syntax and APIs becomes even more friction-less to those getting started.
But one thing is for certain: we must embrace the moving target.
1. ^ I use the word “transpilation” throughout the article on the basis of its popularity to refer to JavaScript source-to-source compilation. That aside, the merits of the term are technically debatable. | http://rauchg.com/2015/ecmascript-6/ | CC-MAIN-2016-26 | refinedweb | 1,781 | 54.83 |
Writing your first local unit test in Android
Hello everyone. I hope you are doing well. This blog post is about writing your first local unit test in Android. Testing is an integral part of the Software Engineering and testing will help you write software that is robust and behaves the way you intend it to. Tests can also serve as future reference documents as to how the software was originally built to work.
Types of testing in Android
While there are different types of testing in the software engineering field, viz. Unit Testing, Integration Testing, White Box Testing, Black Box Testing, etc., we will be focusing on the Android Specific testing. There are two types of testing in Android which are Local Testing and Instrumented Testing. Out of the two, we will be focusing on the Local Testing in this post.
Local Unit Test vs. Instrumented Tests
In Android, Local Unit tests refer to those tests that does not require an Android Device(physical device or emulator) Instrumentation. These tests run on the local JVM(Java Virtual Machine) of your development environment, thus the name Local Unit Test.
Instrumented tests on the other hand are tests that need to utilize on the Android framework thus a device to run.
Hence, local tests run much faster than the instrumented tests as they do not require Android Device instrumentation. Use Instrumented tests only when writing integration and functional UI tests to automate user interaction, or when your tests have Android dependencies that mock objects cannot satisfy. For a complete read follow this link.
Writing your first local unit test.
Now I assume you have your Android studio setup and have some knowledge on Android Development. The first thing I want you to do is create an Android Project, File>New>New Project.
Then choose Empty Activity Template from the next screen. After Android Studio finishes building your Project and Indexing, choose Run>Run App from the toolbar menu or hit F6 key, just to make sure everything is alright.
Lets now write a utility class for this demo. Lets say we need an App that does simple mathematics like addition, subtraction, multiplication and division. We will now write a
SimpleMath.java class. Right click on your main package,
com.example.javaand select New>New java class. Name it
SimpleMath
package com.example.testdemo;
public class SimpleMath {
public int add(int op1, int op2){
return op1 + op2;
}
public int diff(int op1, int op2){
return op1-op2;
}
public double div(int op1, int op2) {
if (op2 == 0) return 0;
return op1 / op2;
}
}
I hope you understand what is going on until this point. The class above is fairly straightforward. Now lets proceed to writing our first unit test. Recall from above section Local Unit Test vs. Instrumented Tests, they are two different things which is also reflected in the directory structure. We have
app\src\androidTest and
app\src\test. We are going to have to put our Local Unit tests in the latter.
Before we begin, I want you to check your app level
build.gradle file to see if you have the correct dependencies for setting up local unit tests. By default the android studio should have put those there but if not copy paste the following:
testImplementation 'junit:junit:4.12' or
testCompile 'junit:junit:4.12' if you are running older gradle plugin.
Right click on your test folder (not androdTest folder) and click New>Java Class. Name it
SimpleMathTest to reflect the unit test is for
SimpleMath.java.
package com.example.testdemo;import org.junit.After;
import org.junit.Before;
import org.junit.Test;import static org.junit.Assert.assertEquals;public class SimpleMathTest {
private SimpleMath simpleMath; @Before //This is executed before the @Test executes
public void setUp(){
simpleMath = new SimpleMath();
System.out.println("Ready for testing");
} @After //This is executed after the @Test executes
public void tearDown(){
System.out.println("Done with testing");
} @Test
public void testAdd() {
int total = simpleMath.add(4, 5);
assertEquals("Simple Math is not adding correctly", 9, total);
//The message here is displayed iff the test fails
} @Test
public void testDiff() {
int total = simpleMath.diff(9, 2);
assertEquals("Simple Math is not subtracting correctly", 7, total);
} @Test
public void testDiv(){
double quotient = simpleMath.div(9,3);
assertEquals("Simple math is not dividing correctly", 3.0, quotient, 0.0);
} //@Ignore //This ignores the test below
@Test
public void testDivWithZeroDivisor(){
double quotient = simpleMath.div(9,0);
assertEquals("Simple math is not handling division by zero correctly", 0.0, quotient, 0.0);
}}
Explanation:
The code
private SimpleMath simpleMath; declares an instance of the SimpleMath class in this test class. The JUnit here is the test framework, it provides us with easy to read Annotations like
@Before,
@After and
@Testand shares similar syntax and/or approach across other programming paradigms like JSUnit for Javascipt, PHPUnit for PHP, etc.
Now as the comments indicate,
@Before is executed before the actual
@Test executes. We have instantiated the
simpleMath object in the
setUpmethod (function).
We have written 4 methods with
@Test annotation, which means we have 4 unit tests that can either pass or fail. The
testAddmethod tests the Addition functionality of
SimpleMath. In this method we have used the
simpleMath instance to add 4 and 5 and store it in an
int total . In the next line
assertEquals("Simple Math is not adding correctly", 9, total);we are checking if
totalis or is not equal to 9, which it should be if our SimpleMath class is functioning correctly. The
assertEquals() is one of the many static methods provided by the junit framework. It takes at least two parameters as the String parameter is optional which is displayed in case of test failure. The second and third parameters of the
assertEquals method are expected and actual value in strict order.
Run the testAdd()
To run the test add either select the Green play button on the right of the line numbers or select the testAdd method right click and select Run testAdd(). If you run into error like
TestSuite Empty which I hope you don’t, just click Run>Edit Configurations and Remove the test configurations under JUnit
If everything goes well you should see something like the following:
And you have now written your first local unit test in Android Application. Also notice how fast the test runs and how we did not need to use any device or emulator as the SimpleMath class does not require any Android Instrumentation or device features. The JVM on your development machine and the JVM on your phone is exactly the same and we can thus ensure the functionality that we just tested will work fine in an Android Device too.
Explore
Please go through all other test methods in the
SimpleMathTest.java . I encourage you run other test methods and see if they pass or fail. Uncomment the
@Igonre and run the
SimpleMathTest.java . Run the whole TestSuite and see what happens. Remove line
if (op2 == 0) return 0; from
div() method of
SimpleMath.java and run the
testDivWithZeroDivisor() method to see if it passes or not. Do some further reading about JUnit here and do some research on your own. Do comment or contact me if you get stuck following this blog post.
Thank you and happy coding. | https://medium.com/android-news/writing-your-first-local-unit-test-in-android-b1256cdc1a7?source=rss----8fca399d4de---4 | CC-MAIN-2021-43 | refinedweb | 1,218 | 56.76 |
Seaweed is harvested on a weekly basis after 6 or 8 weeks of growth depending on the culture method you use.
It is the responsibility of the seaweed farmer to harvest the crop, dry the harvested seaweed and keep it in a well protected place such as a storage shed until the buyer comes to the village to collect it.
The Farmer's Responsibility
Harvesting Seaweed
Harvesting is a simple work to perform. It involves:
removing of the mature seaweed plants from the lines by unfastening the raffia knots, or by breaking off the plants;
spreading the seaweed plants over a drying rack; and then
removing raffia and other unwanted materials.
Drying Seaweed
The drying rack should be built in such a way, keeping in mind that air should easily circulate through the seaweed to assure good ventilation and quick drying.
The simplest way to dry seaweed is by spreading the harvested wet seaweed over a net, a tarpaulin or over coconut leaves on the ground. However, in this way only the seaweed exposed directly to the sun (top portion) will dry efficiently and the remaining (lower portion) will stay wet.
Air ventilation is very limited when seaweed is dried on the ground as shown in the photo below.
Photo of seaweed drying on tarpaulin on the grass in the village backyard.
If you find it convenient and handy to dry the seaweed on the ground, be sure that the seaweed is at least partially dried. Fresh wet seaweed, just harvested, cannot be placed directly on tarpaulin for proper drying.
Moreover, if you dry the seaweed on the ground, sand, soil and other rubbish can mix with the wet seaweed. The seaweed buyers will not buy a dirty product.
The best way to dry seaweed, is to use a drying rack with the drying area made of sarlon netting. The rack can be constructed on shore or near your farm as shown in the figure below.
Photo of a drying rack placed in the shallow waters near the farm site. Note the large amount of posts, braces and timber used.
In some cases it might be difficult to find the netting which can also be expensive. You can use reeds or bamboo instead.
Just to give you an idea, consider that in an area of 100 square meters (20 meter by 5 meter) of sarlon netting, you can dry approximately 80 lines of mature seaweed at one time or more., can be placed in shallow water near your farm.
Drawing of a drying rack placed on shore. Shore racks are easier to construct and also cheaper.
If you weigh your seaweed before and after drying, you will find out that the original weight has decreased by approximately 10 times. So, if you had 10 kg of wet seaweed, after 3 to 5 days of sun drying, it will weigh only 1 kg. This is called 10 to 1 wet to dry ratio.
The minimum moisture content which is required by the seaweed buyers, is about 35 %. Don't worry! You do not need to measure the moisture content of your seaweed before selling it. Just make sure that seaweed is properly dried as suggested.
Well dried seaweed is covered with plenty of salt crystals and have rubber-like texture when you squeeze it and no water should drip from it.
It is wise to cover the drying seaweed during rainy weather or at night with a tarpaulin or anything that can prevent the seaweed from getting wet. You might have unexpected showers at night! If seaweed is rain-washed, it will take more time to dry and you will face a further weight loss. This is a disadvantage for the seaweed farmer as the moisture content will be lower and the seaweed will become partially rigid. Very difficult to pack.
Rain-washed seaweed will give you less money than you might expect.
Storage of Dried Seaweed
After drying and before packing the seaweed, be sure to remove any rubbish material (raffia, pieces of nylon ropes, plastics, other unwanted seaweeds, etc.). The people that come to your village to buy seaweed, will appreciate a clean and well dried product.
Well dried seaweed with salt crystals on its surface can be stored for a long time; up to 2 years without getting spoiled. The salt covering the seaweed prevents the spoilage of the carrageenan. Of course it is important that seaweed is stored properly.
It is always a good idea to pack your seaweed in polypropylene bags soon after drying and store it in a dry area. Polypropylene bags are the best because they do not soak up water as compared with jute bags.
In some cases you can store the packed seaweed in your house, if there is enough space. You can also build a storage shed in the village.
It is the responsibility of the buyer to bale the seaweed and move the bales to the collection centers for consolidation and loading of the containers. He will also take care of transporting the container to the closest harbor with access to international routes for export. He will take charge of dealing with the overseas buyer.
The Buyer's Responsibility
Baling Seaweed
Dried seaweed already packed in bags, should be transported to collection centers in the shortest time to allow the buyer to bale the seaweed.
A quick baling is important because it will prevent moisture being re-absorbed into the seaweed as compared to seaweed kept loosely in bags.
In the next page are listed the advantages of baling seaweed. Do not under estimate the advantages of baling. Even if such operation will require extra costs for material and labor, it is considered of extreme importance for an efficient post-harvest technique.
prolonged storage time without spoilage of seaweed;
reduced space necessary for storage;
easier handling of bales compared to seaweed kept loosely in polypropylene bags;
more efficient way to export seaweed overseas.
The figure below shows a baling machine commonly utilized to pack seaweed at the collection centers.
Photo of a baling machine and of a 100 kg seaweed bale in the foreground. This is a simple machine operated by a screw press.
Shipping Seaweed
Seaweed is moved from the growing areas to the collection points for baling and consolidating enough volume for export. From here, the bales are transported by sea or road to the closest port with access to international shipping routes.
The industrial processing of seaweed is done in overseas countries at present, but if Fiji grows enough seaweed, we can start to process it ourselves.
The bales can be accommodated into containers which can hold up to 20 tonnes or more of dried seaweed.
Eucheuma seaweed can be harvested after 8 weeks of growth. During the year, it is possible to achieve at least 5 harvests; one every 2 months.
If weather remains favorable, an additional harvest can be obtained unless it is necessary to commit some time for social and/or traditional activities.
At maturity (8 weeks), about 30 kg of wet seaweed can be harvested from each line planted with 30 seeds. After drying the seaweed, from each line you can obtain about 3 kg of dried seaweed.
In a realistic situation, as experienced by many seaweed farmers, you will not be able to harvest on a regular basis. Rainy weather sometimes last several days or weeks. In this condition it will a problem to find a shade to protect your harvest form rain and dry the seaweed. You might then decide not to harvest during this time. The seaweed will keep growing and as a result at the harvest, you will be able to obtain more than 3 kg from each line.
Based on the work done by one man only, it is possible to harvest 10 lines of mature seaweed every day. In a week, working only 4 days, 40 lines can be harvested giving you a total of 120 kg of dried seaweed.
If you are able to harvest seaweed 5 times over a 12-month period, you will be able to sell 4,800 kg of dried seaweed.
How can you achieve this result? Carefully read the outline given in the next page.
These estimations are not theoretical. They are the results of observations and studies of real situations.
It should be clear though, what we have outlined applies to the most commonly used culture method: the off-bottom method.
If you wish to culture your seaweed using the raft or floating method because is more appropriate to the area you have selected for farming, a different production is expected.
Using the floating method, Eucheuma seaweed should grow faster because it is cultivated closer to the water surface. Eucheuma might be ready to be harvested in 6 or 7 instead of 8 weeks. Thus, more harvests can be obtained every year.
Each floating frame planted with 225 seeds (150 gram each), can produce about 337 kg of wet seaweed or approximately 33 kg of dried seaweed.
One man alone can harvest up to 4 frames every week (1 every day). Owing to the shorter period necessary to harvest mature seaweed, up to 8 harvests can be achieved in one year giving you a total annual yield of 6,336 kg of dried seaweed.
Carefully read the illustration in the next page to find out how you can achieve this production:
Because the floating frame method has been introduced in Fiji recently, the figures outlined above are only estimations.
The Longline Method has not yet been used in Fiji, but the Fisheries Division will be making trials in future.
As your work progresses and you become a well established seaweed farmer, private companies working in the seaweed business, may come to you and offer their technical expertise and help you to expand your farm.
A few years ago, a New Zealand company set up one of its branches in Fiji. The company helped villagers to start seaweed farming. They also bought all the seaweed produced in Fiji and took care of its export.
Today, a newly established Fiji company, Seaweed (South Pacific) Ltd., is the sole marketing agent in Fiji. They will buy all the seaweed you can produce.
Seaweed (South Pacific) Ltd., already has several collection centers where you can sell your dried seaweed. These are in places like Moturiki, Kiuva, Kasavu, and Lautoka. More collection centers will be located in new areas later on as seaweed farming develops in Fiji.
Photo of a seaweed farmer at a collection center in Kiuva. Here the farmers can sell the dried seaweed.
Photo of the shed built in Kiuva. Here the dried seaweed is consolidated and later baled for export.
In countries such as Indonesia and the Philippines, seaweed farming is very common. These countries produce most of the seaweed used by the industry in the world. In the past years they had some financial problems when the price of seaweed dropped because of an over-production.
Today the situation has changed and the production of seaweed is lower than the amount required by the industry every year. This is because many new products which include carrageenan have been developed. Since the demand for carrageenan exceeds the world production, the processors are willing to pay a higher price for your seaweed.
So, plant and cultivate more seaweed and make some extra money while the opportunity exists.
The opinion of the experts is that, till 1992 the total volume of seaweed produced in the world will not be enough to satisfy the needs of the processing industry and therefore the seaweed market price will not decrease.
By 1992, if enough farmers are growing seaweed, Fiji will be able to set up its own processing plant and this will help guarantee the local price against possible future fluctuations in the world market price. New uses are being found for carrageenan and this will keep the market expanding.
Another prospect for the future is the culture of other types of useful seaweeds, for example agar-producer. The Fisheries Division will be researching the ways of widening and diversifying Fiji's seaweed industry.
Remember: seaweed is a crop which requires low capital investment and has a rapid rate of return (5 or 6 crops per year). With the current high price level, this makes a very good way of making money particularly in places where agricultural land is scarce. | http://www.fao.org/3/ac287e/AC287E04.htm | CC-MAIN-2021-25 | refinedweb | 2,074 | 62.07 |
WebDriver element.text returning lowercase, when css uppercase applied
Confirmed Issue #11322543 • Assigned to Arthur B.
Steps to reproduce
Create a webpage with an element that has a css style of
text-transform: uppercase.
text
(note was trying to give div code, this is rendering in preview properly but not on the issue page)
#myelement { text-transform: uppercase; }
Run a test that checks the
.text of the element. For example, with python selenium bindings:
from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver = webdriver.Edge() driver.get("") element = WebDriverWait(driver, 10).until( EC.text_to_be_present_in_element( (By.ID, "myelement"), 'TEXT' ) )
In Chrome, Firefox, Safari, and IE this test will pass. But in Edge it will fail as the element.text returns as
"text". Whilst that is the dom value, it is not the displayed value. I’m not sure what the standard says about this, but the inconsistent implementation compared to other browsers, including internet explorer, makes cross-browser testing harder.
Update: I took a quick look at the standard: and it says
The Get Element Text command intends to return an element’s text “as rendered”.
Therefore, I believe that this is a bug and the other browser implementations are returning the correct value.
Changed Steps to Reproduce
Changed Steps to Reproduce
Changed Steps to Reproduce
Changed Steps to Reproduce
Changed Steps to Reproduce
Microsoft Edge Team
Changed Assigned To to “Steven K.”
Changed Assigned To to “Mike J.”
Changed Assigned To from “Mike J.” to “John J.”
Changed Assigned To from “John J.” to “Stanley H.”
Changed Status to “Confirmed”
Changed Assigned To from “Stanley H.” to “Sanket J.”
Changed Assigned To from “Sanket J.” to “Sanket J.”
Changed Status from “Confirmed”
Changed Status to “Confirmed”
Changed Assigned To from “Sanket J.” to “Arthur B.”
You need to sign in to your Microsoft account to add a comment. | https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/11322543/ | CC-MAIN-2017-26 | refinedweb | 326 | 60.82 |
On 2012-04-25 11:48, Felix Meschberger wrote:
> Hi,
>
> Am 25.04.2012 um 11:40 schrieb Jukka Zitting:
>
>> Hi,
>>
>> On Wed, Apr 25, 2012 at 9:54 AM, Julian Reschke<julian.reschke@gmx.de> wrote:
>>> Would it make sense to "optimize" the persistence in that we wouldn't store
>>> the primary type when it happens to be nt:unstructured?
>>
>> Yes, though the default type should be something like
>> "oak:unstructured" or "jr3:unstructured" that isn't orderable like
>> "nt:unstructured".
>
> Do we need a namespace ? How about just "Unstructured" ?
a) I wouldn't be surprised if there's code out there assuming that
namespace names are always prefixed.
b) Having "nt:unstructured" and "Unstructured" be different is ...
surprising. So we probably want a different term... | http://mail-archives.apache.org/mod_mbox/jackrabbit-oak-dev/201204.mbox/%3C4F97CB8A.7010006@gmx.de%3E | CC-MAIN-2014-23 | refinedweb | 127 | 67.86 |
It's just an Idea:
Haxe
Haxe is simply a transpiler or source-to-source compiler written in OCaml, and transforms into multiple different codes of C++, Java, PHP, Javascript and...
When you have to use a library in different technologies like a server written in Java and a client written in C++ you can either use message passing or you can use a library which communicates with both. Haxe is the best option to write this cross communication library.
Compiler Idea
Haxe is written in OCaml it has many disadvantages yet it's not that bad. First it's recommended to read the disadvantages here.
Comparing OCaml with C, C is much faster.
hxcpp, hxjava, ...
Yet another haxe library is required to generate C++ code this goes the same for Java but the hxml file is not compatible and I don't like it personally.
Until this point I thought writing in haxe is bearable but the using foreign functions in haxe is not so nice, the point is that assembly is not even on the list. it's not recommended to generate asm code as long as the overhead of generating it is the same as C but at least we need a foreign functioning of assembly.
Write once then add platform specific extensions
I want to design the same thing as Adobe did with AIR but those who have been through writing native extensions have experienced the difficulties of writing native extensions for it.
It's so nice to write the same thing to write a code once and then add it like extension to the compiler.
This is a small part of my thought:
#if java @:extends("java"){ import java.lang.System; public function say_hello_in_java(){ System.out.println("Hello Java"); } } #end
Also extending binaries:
class Point() { private int x; private int y; public norm function new(int x, int y){ this.x = x; this.y = y; } #if asm("x86") @:section("text"){ mov [this.x], eax } #end }
Before I explain another fine idea I would like to show the design of build system.
build.json
{ "bin": { "x86_win": { "out": "./bin/windows.exe", "libs": ["/C/hello.dll"], "includes": ["/C/hello.h"], "signing": { ??? } }, "x86": { "..." }, "machO": { "..." }, "arm": { "..." }, "arm64": { "..." }, "alpha": { "..." } } }
Although this design seems so idealistic but I think it is the best just requires to generate C code and use Cross-Compilers with defined flags.
Also these are for advanced users so they will know how to create that library they are willing to cross compile.
Now let's see how does the idea of foreign function has to work here:
A library of hello with
hello.h and hello.c is written as bellow:
#include <stdio.h> #include <hello.h> void say_hello() { printf("Hello from C!"); }
Also java class:
class Hello { public void say_hello() { System.out.println("Hello from Java!"); } }
If you link it right in the build file if we build it as 'hello.so' and include java class as 'Hello.class' then you can use it like this:
public static class SayHello(){ #if c public function extC():void = @externC("dynamic", "hello", "say_hello"); #end #if java public function extJava():void = @externJava("class", "Hello", "say_hello"); #end }
@externC args are:
$1 -> library type.
$2 -> library name.
$3 -> function name.
@externJava args are:
$1 -> library type. which can be jar or class.
$2 -> class name.
$3 -> function name.
Isn't that nice? It just requires some patience yet haxe proved that this kind of building is possible.
Posted on Dec 12 '19 by:
Amir Mohammad Rezai
I'm a game developer, with performance issues in mind. Live in Nerverland, Iran.
Discussion | https://practicaldev-herokuapp-com.global.ssl.fastly.net/amir_rezai/inspired-to-write-compiler-nej | CC-MAIN-2020-29 | refinedweb | 599 | 66.54 |
<sigh> I wrote a long post for probably half an hour and then because it logged me out the whole thing got eaten. I spent about 5 hours today trying to work through this problem and look for some examples that I could understand to help me grasp it, with no success.
I am looking for a simple, elegant solution to recursive function that takes n, the total amount of people, and k, the number of people per group. Then it displays all the unique combination's of groups. For example, n = 5, k = 3...
543
542
541
532
531
521
432
431
421
321
The problem describes dividing up the problem by the functions calling itself twice with (n - 1, k - 1) and (n - 1, k). My code, which doesn't work, that I will post below, is using a string to store the numbers, I am open to suggestions but I do not want a solution involving a bunch of libraries. I am fairly sure, given the context of the problem, that it shouldn't be to overly complex. The problem describes the base case as being when K or n = 0 or when k > n.
At this point I am looking for any advice or tips on this. If you do post a working example, I am not going to copy it because I actually want to learn this stuff, and I've had plenty of opportunities to copy complicated versions I couldn't make sense of.
Code:include <iostream> using namespace std; string output = ""; int showTeams (long n, long k, string p) { if (k == 0 || n == 0 || k > n) { output += p + " "; return 0; } else { showTeams(n - 1, k - 1, p+= static_cast<char>(n+48)); showTeams(n - 1, k, p); } } int main() { showTeams(5,3, ""); cout << output << endl; } | http://cboard.cprogramming.com/cplusplus-programming/126413-unique-combinations-groups-cplusplus-w-recursion.html | CC-MAIN-2015-40 | refinedweb | 301 | 72.5 |
is intended to be used on a logstash indexer agent (but that
is not the only way, see below.) In the intended scenario, one cloudwatch
output plugin is configured, on the logstash indexer node, with just AWS API
credentials, and possibly a region and/or a namespace. The output looks
for fields present in events, and when it finds them, it uses them to
calculate aggregate statistics. If the
metricname option is set in this
output, then any events which pass through it will be aggregated & sent to
CloudWatch, but that is not recommended. The intended use is to NOT set the
metricname option here, and instead to add a
CW_metricname field (and other
fields) to only the events you want sent to CloudWatch.
When events pass through this output they are queued for background
aggregation and sending, which happens every minute by default. The
queue has a maximum size, and when it is full aggregated statistics will be
sent to CloudWatch ahead of schedule. Whenever this happens a warning
message is written to logstash’s log. If you see this you should increase
the
queue_size configuration option to avoid the extra API calls. The queue
is emptied every time we send data to CloudWatch.
Note: when logstash is stopped the queue is destroyed before it can be processed. This is a known limitation of logstash and will hopefully be addressed in a future version.
There are two ways to configure this plugin, and they can be used in combination: event fields & per-output defaults
Event Field configuration…
You add fields to your events in inputs & filters and this output reads
those fields to aggregate events. The names of the fields read are
configurable via the
field_* options.
Per-output defaults… You set universal defaults in this output plugin’s configuration, and if an event does not have a field for that option then the default is used.
Notice, the event fields take precedence over the per-output defaults.
At a minimum events must have a "metric name" to be sent to CloudWatch.
This can be achieved either by providing a default here OR by adding a
CW_metricname field. By default, if no other configuration is provided
besides a metric name, then events will be counted (Unit: Count, Value: 1)
by their metric name (either a default or from their
CW_metricname field)
Other fields which can be added to events to modify the behavior of this
plugin are,
CW_namespace,
CW_unit,
CW_value, and
CW_dimensions. All of these field names are configurable in
this output. You can also set per-output defaults for any of them.
See below for details.
Read more about AWS CloudWatch, and the specific of API endpoint this output uses, PutMetricData"
How many data points can be given in one call to the CloudWatch API
The default dimensions [ name, value, … ] to use for events which do not have a
CW_dimensions field
The name of the field used to set the dimensions on an event metric
The field named here, if present in an event, must have an array of
one or more key & value pairs, for example…
add_field => [ "CW_dimensions", "Environment", "CW_dimensions", "prod" ]
or, equivalently…
add_field => [ "CW_dimensions", "Environment" ]
add_field => [ "CW_dimensions", "prod" ]
The name of the field used to set the metric name on an event The author of this plugin recommends adding this field to events in inputs & filters rather than using the per-output default setting so that one output plugin on your logstash indexer can serve all events (which of course had fields set on your logstash shippers.)
The name of the field used to set a different namespace per event Note: Only one namespace can be sent to CloudWatch per API call so setting different namespaces will increase the number of API calls and those cost money.
The name of the field used to set the unit on an event metric
The name of the field used to set the value (float) on an event metric
The default metric name to use for events which do not have a
CW_metricname field.
Beware: If this is provided then all events which pass through this output will be aggregated and
sent to CloudWatch, so use this carefully. Furthermore, when providing this option, you
will probably want to also restrict events from passing through this output using event
type, tag, and field matching
The default namespace to use for events which do not have a
CW_namespace field
URI to proxy server if required
How many events to queue before forcing a call to the CloudWatch API ahead of
timeframe schedule
Set this to the number of events-per-timeframe you will be sending to CloudWatch to avoid extra API calls
- Value can be any of:
us-east-1,
us-east-2,
us-west-1,
us-west-2,
eu-central-1,
eu-west-1,
eu-west-2,
ap-southeast-1,
ap-southeast-2,
ap-northeast-1,
ap-northeast-2,
sa-east-1,
us-gov-west-1,
cn-north-1,
ap-south-1,
ca-central-1
- Default value is
"us-east-1"
The AWS Region
The AWS Secret Access Key
The AWS Session token for temporary credential
Constants aggregate_key members Units How often to send data to CloudWatch This does not affect the event timestamps, events will always have their actual timestamp (to-the-minute) sent to CloudWatch.
We only call the API if there is data to send.
See the Rufus Scheduler docs for an explanation of allowed values
- Value can be any
- Default value is
"Count"
The default unit to use for events which do not have a
CW_unit field
If you set this option you should probably set the "value" option along with it
The default value to use for events which do not have a
CW_value field
If provided, this must be a string which can be converted to a float, for example…
"1", "2.34", ".5", and "0.67"
If you set this option you should probably set the
unit option along with outputs.
Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
output { cloudwatch { id => "my_plugin_id" } } | https://www.elastic.co/guide/en/logstash-versioned-plugins/current/v3.0.8-plugins-outputs-cloudwatch.html | CC-MAIN-2018-22 | refinedweb | 1,019 | 58.55 |
Multi requirements we have. Well, all but one: rich:tree is single select only, not multi select. Or is it?
Some theory behind the rich:tree component
The RichFaces rich:tree component is a component that can display hierachical data in two ways. The first way is to display a org.richfaces.model.TreeNode with its children. The RichFaces API also provides a default implementation of the org.richfaces.model.TreeNode interface, which is the org.richfaces.model.TreeNodeImpl class.
The second way is to use a RichFaces rich:recursiveTreeNodesAdaptor to display a java.util.List or array of any kind of object, as long as it has some member that holds a java.util.List or array of child objects. Due to some heavy preprocessing of the data that is displayed in the tree, along with us feeling more comfortable with java.util.List we choose this approach in our project. In this article I’ll use the first approach to show that our solution also works in this case.
Building a simple rich:tree
To be able to display a rich:tree in a JSF page, you need few simple classes. The first class I used is called SelectionBean and it looks like this
import org.richfaces.event.NodeSelectedEvent; import org.richfaces.model.TreeNode; import org.richfaces.model.TreeNodeImpl; public class SelectionBean { private TreeNode rootNode = new TreeNodeImpl(); public SelectionBean() { TreeNodeImpl childNode = new TreeNodeImpl(); childNode.setData("childNode"); childNode.setParent(rootNode); rootNode.addChild("1", childNode); TreeNodeImpl childChildNode1 = new TreeNodeImpl(); childChildNode1.setData("childChildNode1"); childChildNode1.setParent(childNode); childNode.addChild("1.1", childChildNode1); TreeNodeImpl childChildNode2 = new TreeNodeImpl(); childChildNode2.setData("childChildNode2"); childChildNode2.setParent(childNode); childNode.addChild("1.2", childChildNode2); } public void processTreeNodeImplSelection(final NodeSelectedEvent event) { System.out.println("Node selected : " + event); } public TreeNode getRootNode() { return rootNode; } }
It creates a simple TreeNode hierarchy that can be displayed in a tree with the following Facelets JSF page:
<?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns: <body> <h:form <a4j:outputPanel <rich:panel <f:facetTree</f:facet> <rich:tree <rich:treeNode> <h:outputText </rich:treeNode> </rich:tree> </rich:panel> </a4j:outputPanel> </h:form> </body> </html>
The result looks like this:
Quite simple, like I said.
Multi select in the rich:tree
The SelectionBean class has been prepared to catch node selection events. If you modify the rich:tree line in the Facelets JSF page to read like this
<rich:tree
you should see selection events being registered in the log of your application server:
Node selected : org.richfaces.event.AjaxSelectedEvent[source=org.richfaces.component.html.HtmlTree@19eaa86]
Not very helpful information, but at least we know that the node selection events are registered. Now, if you look at the taglib doc for rich:tree you’ll notice that there is no way to configure the tree to accept multiple selections. So let’s modify the SelectionBean to keep track of node selections itself.
We’ll need some Set to hold the selected tree nodes in. We could use a List, but that will create doubles if we click a node more than one time. Remember, rich:tree has no way of knowing if a node recently was selected or not. So everytime we click a node that rich:tree thinks not to be selected, it raises a selection event again! We only want to know which nodes are clicked and keep track of that. we don’t want to know how many times a node is selected. So therefore a Set will do nicely.
There’s one more thing to a rich:tree. Its backing UIComponent is a org.richfaces.component.html.HtmlTree and in the TreeModel of that tree, each node is uniquely identified by a RowKey Object. So, I’ll use a
private Map<Object, TreeNode> selectedNodes = new HashMap<Object, TreeNode>();
Now, everytime a node is selected I’ll add its TreeNode to the Map under the RowKey key. Assuming we have a global Map member as defined above, the processTreeNodeImplSelection method can now be modified to
public void processNodeSelection(final NodeSelectedEvent event) { HtmlTree tree = (HtmlTree)event.getComponent(); Object rowKey = tree.getRowKey(); TreeNode selectedNode = tree.getModelTreeNode(rowKey); selectedNodes.put(rowKey, selectedNode); for (Object curRowKey : selectedNodes.keySet()) { System.out.println("Selected node : " + selectedNodes.get(curRowKey).getData()); } }
If you click the three nodes in the tree in some random order, you’ll get this output:
Selected node : childChildNode1 Selected node : childNode Selected node : childChildNode2
So, we are keeping track of all selected nodes!
Making the selected nodes visible in the tree
Now our bean knows that we have selected multiple nodes, but the tree still displays only one selected node at a time. Using e.g. FireBug it’s easy to determine the difference is CSS class between a selected node and a non-selected node. Using the default Blue skin for RichFaces, a non-selected node has style class
dr-tree-h-text rich-tree-node-text
while a selected node has style class
dr-tree-h-text rich-tree-node-text dr-tree-i-sel rich-tree-node-selected
The border around a selected node is there because of the dr-tree-i-sel style. We only need a way to make all selected nodes (that is, the ones that are stored in the Map in our bean) use that style. One way is to tell each TreeNode that is has been selected. But how can we do that? Well, for instance by introducing a class that holds both the text that will be displayed in the tree as well as a Boolean that holds the selection state of the node. Such a class could be like this
public class NodeData { private String nodeText; private Boolean selected = Boolean.FALSE; public NodeData(String nodeText) { this.nodeText = nodeText; } [getters and setters] }
With this class we need to make a few changes to our SelectionBean. First of all, when building the node hierarchy we need to use the NodeData class instead of a simple String. This means we’ll have to modify the constructor method so it looks like this
public SelectionBean() { TreeNodeImpl childNode = new TreeNodeImpl(); childNode.setData(new NodeData("childNode")); childNode.setParent(rootNode); rootNode.addChild("1", childNode); TreeNodeImpl childChildNode1 = new TreeNodeImpl(); childChildNode1.setData(new NodeData("childChildNode1")); childChildNode1.setParent(childNode); childNode.addChild("1.1", childChildNode1); TreeNodeImpl childChildNode2 = new TreeNodeImpl(); childChildNode2.setData(new NodeData("childChildNode2")); childChildNode2.setParent(childNode); childNode.addChild("1.2", childChildNode2); }
Next, the processNodeSelection method needs to tell a node that it is selected by setting the selected Boolean in NodeData to true. The method becomes
public void processNodeSelection(final NodeSelectedEvent event) { HtmlTree tree = (HtmlTree)event.getComponent(); Object rowKey = tree.getRowKey(); TreeNode selectedNode = tree.getModelTreeNode(rowKey); ((NodeData)selectedNode.getData()).setSelected(Boolean.TRUE); selectedNodes.put(rowKey, selectedNode); for (Object curRowKey : selectedNodes.keySet()) { System.out.println("Selected node : " + ((NodeData)selectedNodes.get(curRowKey).getData()).getNodeText()); } }
Finally, we need to modify our Facelets JSF page in two ways. The first one is to make sure the h:outputText element displays the nodeText of the NodeData. The second modification is to have the rich:treeNode set it’s nodeClass accordingly to the selected NodeData Boolean. The Facelets JSF page lines look like this
<rich:treeNode <h:outputText </rich:treeNode>
Now, if you reload the application in your browser, all of a sudden you can "select" multiple nodes in the tree.
Future enhancements
The above scenario isn’t ideal. First of all, now single selection of nodes doesn’t work anymore. To fix this, you may want to add a checkbox that toggles the selection state from single to multiple and back. Another issue is that accidentically selected nodes cannot be deselected anymore. The selection state checkbox may partially solve that, however. Once you select a node that you didn’t want to select, toggle the checkbox, select a single node, then toggle the checkbox again and start selecting multiple nodes once more. Another way would be to have another checkbox that allows you to deselect any selected node. Finally, users may want to hold a key, e.g. the CTRL key, and then start selecting multiple nodes. I haven’t got a clue how to do that, so if you know please drop me an email
Ideally the RichFaces rich:tree would have native multiple selection support. Perhaps this post will actually make that possible.
Related posts:
- Selecting a 'pruned tree' with selected nodes and all their ancestors – Hierarchical SQL Queries and Bottom-Up Trees
- Migrating the ADF 10g Hierarchical Table Report to JDeveloper & ADF Trinidad and onwards to 11g (RichFaces)
- Dropping trees
- Building ADF Faces Tree based on POJOs (without using the ADF Tree Data Binding)
- Creating Multi-Type Node Children and Child Node labels in ADF Faces Tree Component
This entry was posted by Wouter van Reeven on January 29, 2009 at 9:46 pm, and is filed under General. Follow any responses to this post through RSS 2.0.Both comments and pings are currently closed.
Didn't find any related posts :(
Hi Andriy,
Can you share link? It will be very helpful.
Thank You.
Hi Andriy, i’m really interested in your code.
it would be great if you post the full example!
Thank you.
Hi Andriy, i’m really interested in your code.
it would be great if you post the full example!
Thank you.
Hi!
I’ve implemented the same feature with rich:tree but in a completely different way.
To be able to select multiple I’ve used checkboxes inside each row to display the node’s “selected” property, and processed that data on submit.
If someone needs an example I can post a code here.
Â
Regards
Hi Wouter, as you’re interested in PrimeFaces these days, I’d like to point to PrimeFaces Tree which has built-in support to checkbox based selection, around 5 lines of code, you can implement this.
Demo:
Attach to privious one:
Property “UseJBossWebLoader” should be “true”
Dear cuixf and Krishna,
Change the following property in your jboss-service.xml file.
File:
jboss-4.2.2.GA\server\default\deploy\jboss-web.deployer\META-INF\jboss-service.xml
Should change to:
true
Regards
thanks for your article, sir
you said:”you should see selection events being registered in the log of your application server:”, how could i register the selection event in the example?
help me please thank u
I am getting the same InvocationTargetException, as mentioned by cuixf and other above.
Any solution to that?
Hello,
Thank you for the tutorial.
Unfortunetly i still couldn’t make the first simple tree on the tutorial.
I think i need you help.
First i want to tell you that i made a Seam Web Project with JBoss AS 4.2 Runtime.
When you made those simple tree, what kind of project did u have? A Dynamic Web Project or Seam Web Project?
And in my eclipse, there is no errror as i copied your jsf Page.
But on the browser i didn’t see anything. It’s only a white page.
Could you help me please?
Thank you
The WAR’s project is no problem,but my EAR’s project see InvocationTargetException warning.
ClassLoader ???
_______________________________________________
yukimi Says:
April 3rd, 2009 at 12:01 pm
Hi, thank you for this example. It’s great. However, I have problem with HtmlTree. Using your source code, whenever I click on the node, it shows
18:06:10,354 WARN [lifecycle] /admin/category.xhtml @28,79 nodeSelectListener=â€#{CategoryManager.processNodeSelection}â€: java.lang.reflect.InvocationTargetException
If I removed HtmlTree, I don’t see InvocationTargetException warning. What’s wrong?
Please help.
Thanks.
Hi, thank you for this example. It’s great. However, I have problem with HtmlTree. Using your source code, whenever I click on the node, it shows
18:06:10,354 WARN [lifecycle] /admin/category.xhtml @28,79 nodeSelectListener=”#{CategoryManager.processNodeSelection}”: java.lang.reflect.InvocationTargetException
If I removed HtmlTree, I don’t see InvocationTargetException warning. What’s wrong?
Please help.
Thanks.
hi ı have prolem with HtmlTree ı couldnt find org.richfaces.component.html.HtmlTree library
please help me how to parse this
thanks
Dear Sir,
Many thanks for this type of post. Its self-explanatory. Currently I am working on rich:tree with checkbox. If you have any idea or progress regarding checkbox tree, please share with us.
Thank you
Ismail
=====
Thanks for the post, it helped a lot. I still have a problem, though. I can select multiple nodes but I do not see them selected only after I refresh the page. Do you have the same problem with your implementation? If you know how to solve it, please respond. | http://technology.amis.nl/2009/01/29/multi-select-in-richfaces-trees/ | CC-MAIN-2013-20 | refinedweb | 2,102 | 57.47 |
Last chance to post a question for Frank Wierzbicki...the "Ask Frank"
question and answer session for Jython Monthly will come to a close
this Friday, July 11.
If you are interested in posting a question for Frank to answer,
please visit the following URL or send email to me at the address
below.
Thanks to Frank and to those who have already posted questions.
--
Josh Juneau
juneau001@...
On Sat, Jun 21, 2008 at 9:50 AM, Josh Juneau <juneau001@...> wrote:
> J email them to!
>
> --
> Josh Juneau
> juneau001@...
>
>
>
Since they are all based on the same components, take a look at or
Hth,
Greg.
From: jython-users-bounces@...
[mailto:jython-users-bounces@...] On Behalf Of DOUTCH
GARETH-GDO003
Sent: Monday, July 07, 2008 6:25 AM
To: jython-users@...
Subject: [Jython-users] Hide unwanted commons logging trace?
Hi all,
I am using htmlunit as part of my project and I want to disable the log
messages it prints to my command line whenever I load a web page
(usually ones with Javascript are the most verbose).
The example:
from com.gargoylesoftware.htmlunit import *
w = WebClient(BrowserVersion.FIREFOX_2)
w.getPage('';)
Generates the output:
07-Jul-2008 14:18:44
com.gargoylesoftware.htmlunit.javascript.host.Document jsxSet_cookie
INFO: Added cookie: testcookie=1
07-Jul-2008 14:18:44
com.gargoylesoftware.htmlunit.javascript.host.Document jsxSet_cookie
INFO: Added cookie: testcookie=
07-Jul-2008 14:18:44
com.gargoylesoftware.htmlunit.javascript.host.Document jsxSet_cookie
INFO: Added cookie: khcookie=fzwq2gh2pz1eDnO5bRamCTbugf_q4fmi-ww4Hg
Exception in declaration()
HtmlPage()@5602395
The project uses the commons logging package, and I haven't a clue how
to disable the output. Can anybody help? <>
Regards,
Gareth.
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/jython/mailman/jython-users/?viewmonth=200807&viewday=8 | CC-MAIN-2016-40 | refinedweb | 317 | 59.6 |
The
LaserProxy class is used to control a laser device.
More...
#include <playerc++.h>
Detailed Description
The
LaserProxy class is used to control a laser device.
The latest scan data is held in two arrays:
ranges and
intensity. The laser scan range, resolution and so on can be configured using the Configure() method.
Member Function Documentation
Configure the laser scan pattern.
Angles
min_angle and
max_angle are measured in radians.
scan_res is measured in units of 0.01 degrees; valid values are: 25 (0.25 deg), 50 (0.5 deg) and 100 (1 deg).
range_res is measured in mm; valid values are: 1, 10, 100. Set
intensity to
true to enable intensity measurements, or
false to disable.
scanning_frequency is measured in Hz
Accessor for the pose of the laser with respect to its parent object (e.g., a robot).
Fill it in by calling RequestGeom.
References player_pose3d::px, player_pose3d::py, and player_pose3d::pyaw.
Accessor for the pose of the laser's parent object (e.g., a robot).
Filled in by some (but not all) laser data messages.
References player_pose3d::px, player_pose3d::py, and player_pose3d::pyaw.
- Deprecated:
- Minimum range reading on the left side
- Deprecated:
- Minimum range reading on the right side
Range access operator.
This operator provides an alternate way of access the range data. For example, given an
LaserProxy named
lp, the following expressions are equivalent:
lp.GetRange(0) and
lp[0].
Request the current laser configuration; it is read into the relevant class attributes.
Get the laser's geometry; it is read into the relevant class attributes.
The documentation for this class was generated from the following file: | http://playerstage.sourceforge.net/doc/Player-cvs/player/classPlayerCc_1_1LaserProxy.html | CC-MAIN-2015-40 | refinedweb | 269 | 61.22 |
Introduction
In this tutorial, we’ll take a look at using Python scripts to interact with infrastructure provided by Amazon Web Services (AWS). You’ll learn to configure a workstation with Python and the Boto3 library. Then, you’ll learn how to programmatically create and manipulate:
- Virtual machines in Elastic Compute Cloud (EC2)
- Buckets and files in Simple Storage Service (S3)
- Databases in Relational Database Service (RDS)
Requirements
Before we get started, there are a few things that you’ll need to put in place:
- An AWS account with admin or power user privileges. Since we’ll be creating, modifying, and deleting things in this exercise, the account should be a sandbox account that does not have access to production VMs, files, or databases.
- Access to a Linux shell environment with an active internet connection.
- Some experience working with Python and the Bash command line interface.
Getting Configured
Let’s get our workstation configured with Python, Boto3, and the AWS CLI tool. While the focus of this tutorial is on using Python, we will need the AWS CLI tool for setting up a few things.
Once we’re set up with our tools on the command line, we’ll go to the AWS console to set up a user and give permissions to access the services we need to interact with.
Python and Pip
First, check to see if Python is already installed. You can do this by typing which python in your shell. If Python is installed, the response will be the path to the Python executable. If Python is not installed, go to the Python.org website for information on downloading and installing Python for your particular operating system.
We will be using Python 2.7.10 for this tutorial. Check your version of Python by typing python -V. Your install should work fine as long as the version is 2.6 or greater.
The next thing we’ll need is pip, the Python package manager. We’ll use pip to install the Boto3 library and the AWS CLI tool. You can check for pip by typing which pip. If pip is installed, the response will be the path to the pip executable. If pip is not installed, follow the instructions at pip.pypa.io to get pip installed on your system.
Check your version of pip by typing "pip -V". Your version of pip should be 9.0.1 or newer.
Now, with Python and pip installed, we can install the packages needed for our scripts to access AWS.
AWS CLI Tool and Boto3
Using the pip command, install the AWS CLI and Boto3:
pip install awscli boto3 -U --ignore-installed six
We can confirm the packages are installed by checking the version of the AWS CLI tool and loading the boto3 library.
Run the command "aws --version" and something similar to the following should be reported:
aws-cli/1.11.34 Python/2.7.10 Darwin/15.6.0 botocore/1.4.91
Finally, run the following to check boto3: python -c “import boto3”. If nothing is reported, all is well. If there are any error messages, review the setup for anything you might have missed.
At this point, we’re ready for the last bit of configuration before we start scripting.
Users, Permissions, and Credentials
Before we can get up and running on the command line, we need to go to AWS via the web console to create a user, give the user permissions to interact with specific services, and get credentials to identify that user.
Open your browser and navigate to the AWS login page. Typically, this is.
Once you are logged into the console, navigate to the Identity and Access Management (IAM) console. Select “Users” -> “Add user."
On the “Add user” page, give the user a name and select “Programmatic access." Then click “Next: Permissions." In this example, I’ve named the user “python-user-2." Note that your user name should not have any spaces or special characters.
On the “Permissions” page, we will set permissions for our user by attaching existing policies directly to our user. Click “Attach existing policies directly." Next to “Filter”, select “AWS managed." Now search for
AmazonEC2FullAccess. After entering the search term, click the box next to the listing for “AmazonEC2FullAccess." Repeat this step for S3 and RDS, searching for and selecting AmazonS3FullAccess and AmazonRDSFullAccess. Once you’ve selected all three, click “Next: Review."
On the review screen, check your user name, AWS access type, and permissions summary. It should be similar to the image below. If you need to fix anything, click the “Previous” button to go back to prior screens and make changes. If everything looks good, click “Create user."
On the final user creation screen, you’ll be presented with the user’s access key ID and secret access key. Click the “Download .csv” button to save a text file with these credentials or click the “Show” link next to the secret access key. IMPORTANT: Save the file or make a note of the credentials in a safe place as this is the only time that they are easily captured. Protect these credentials like you would protect a username and password!
Now that we have a user and credentials, we can finally configure the scripting environment with the AWS CLI tool.
Back in the terminal, enter aws configure. You’ll be prompted for the AWS access key ID, AWS secret access key, default region name, and default output format. Using the credentials from the user creation step, enter the access key ID and secret access key.
For the default region name, enter the region that suits your needs. The region you enter will determine the location where any resources created by your script will be located. You can find a list of regions in the AWS documentation. In the example below, I’m using us-west-2.
Options for the default output format are text, json, and table. Enter “text” for now.
AWS Access Key ID [None]: AKIAJFUD42GXIN4SQRKA
AWS Secret Access Key [None]: LLL1tjMJpRNsCq23AXVtZXLJhvYkjHeDf4UO9zzz
Default region name [None]: us-west-2
Default output format [None]: text
Now that your environment is all configured, let’s run a quick test with the AWS CLI tool before moving on. In the shell, enter:
aws ec2 describe-instances
If you already have instances running, you’ll see the details of those instances. If not, you should see an empty response. If you see any errors, walk through the previous steps to see if anything was overlooked or entered incorrectly, particularly the access key ID and secret access key.
Wow! That was a lot to set up but we’re finally ready to do some scripting. Let’s get started with some basic scripts that work with EC2, S3, and RDS.
Scripting EC2
The Elastic Compute Cloud (EC2) is a service for managing virtual machines running in AWS. Let’s see how we can use Python and the boto3 library with EC2.
List Instances
For our first script, let’s list the instances we have running in EC2. We can get this information with just a few short lines of code.
First, we’ll import the boto3 library. Using the library, we’ll create an EC2 resource. This is like a handle to the EC2 console that we can use in our script. Finally, we’ll use the EC2 resource to get all of the instances and then print their instance ID and state. Here’s what the script looks like:
#!/usr/bin/env python
import boto3
ec2 = boto3.resource('ec2')
for instance in ec2.instances.all():
print instance.id, instance.state
Save the lines above into a file named list_instances.py and change the mode to executable. That will allow you to run the script directly from the command line. Also note that you’ll need to edit and chmod +x the remaining scripts to get them running as well. In this case, the procedure looks like this:
$ vi list_instances.py
$ chmod +x list_instances.py
$ ./list_instances.py
If you haven’t created any instances, running this script won’t produce any output. So let’s fix that by moving on to the the next step and creating some instances.
Create an Instance
One of the key pieces of information we need for scripting EC2 is an Amazon Machine Image (AMI) ID. This will let us tell our script what type of EC2 instance to create. While getting an AMI ID can be done programmatically, that's an advanced topic beyond the scope of this tutorial. For now, let’s go back to the AWS console and get an ID from there.
In the AWS console, go the the EC2 service and click the “Launch Instance” button. On the next screen, you’re presented with a list of AMIs you can use to create instances. Let’s focus on the Amazon Linux AMI at the very top of the list. Make a note of the AMI ID to the right of the name. In this example, its “ami-1e299d7e." That’s the value we need for our script. Note that AMI IDs differ across regions and are updated often so the latest ID for the Amazon Linux AMI may be different for you.
Getting an AMI ID
Now with the AMI ID, we can complete our script. Following the pattern from the previous script, we’ll import the boto3 library and use it to create an EC2 resource. Then we’ll call the create_instances() function, passing in the image ID, max and min counts, and the instance type. We can capture the output of the function call which is an instance object. For reference, we can print the instance’s ID.
#!/usr/bin/env python
import boto3
ec2 = boto3.resource('ec2')
instance = ec2.create_instances(
ImageId='ami-1e299d7e',
MinCount=1,
MaxCount=1,
InstanceType='t2.micro')
print instance[0].id
While the command will finish quickly, it will take some time for the instance to be created. Run the list_instances.py script several times to see the state of the instance change from pending to running.
Terminate an Instance
Now that we can programmatically create and list instances, we also need a method to terminate them.
For this script, we’ll follow the same pattern as before with importing the boto3 library and creating an EC2 resource. But we’ll also take one parameter: the ID of the instance to be terminated. To keep things simple, we’ll consider any argument to the script to be an instance ID. We’ll use that ID to get a connection to the instance from the EC2 resource and then call the terminate() function on that instance. Finally, we print the response from the terminate function. Here’s what the script looks like:
#!/usr/bin/env python
import sys
import boto3
ec2 = boto3.resource('ec2')
for instance_id in sys.argv[1:]:
instance = ec2.Instance(instance_id)
response = instance.terminate()
print response
Run the list_instances.py script to see what instances are available. Note one of the instance IDs to use as input to the terminate_instances.py script. After running the terminate script, we can run the list instances script to confirm the selected instance was terminated. That process looks something like this:
$ ./list_instances.py
i-0c34e5ec790618146 {u'Code': 16, u'Name': 'running'}
$ ./terminate_instances.py i-0c34e5ec790618146
{u'TerminatingInstances': [{u'InstanceId': 'i-0c34e5ec790618146', u'CurrentState': {u'Code': 32, u'Name': 'shutting-down'}, u'PreviousState': {u'Code': 16, u'Name': 'running'}}], 'ResponseMetadata': {'RetryAttempts': 0, 'HTTPStatusCode': 200, 'RequestId': '55c3eb37-a8a7-4e83-945d-5c23358ac4e6', 'HTTPHeaders': {'transfer-encoding': 'chunked', 'vary': 'Accept-Encoding', 'server': 'AmazonEC2', 'content-type': 'text/xml;charset=UTF-8', 'date': 'Sun, 01 Jan 2017 00:07:20 GMT'}}}
$ ./list_instances.py
i-0c34e5ec790618146 {u'Code': 48, u'Name': 'terminated'}
Scripting S3
The AWS Simple Storage Service (S3) provides object storage similar to a file system. Folders are represented as buckets and the contents of the buckets are known as keys. Of course, all of these objects can be managed with Python and the boto3 library.
List Buckets and Their Contents
Our first S3 script will let us see what buckets currently exist in our account and any keys inside those buckets.
Of course, we’ll import the boto3 library. Then we can create an S3 resource. Remember, this gives us a handle to all of the functions provided by the S3 console. We can then use the resource to iterate over all buckets. For each bucket, we print the name of the bucket and then iterate over all the objects inside that bucket. For each object, we print the object’s key or essentially the object’s name. The code looks like this:
#!/usr/bin/env python
import boto3
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
print bucket.name
print "---"
for item in bucket.objects.all():
print "\t%s" % item.key
If you don’t have any buckets when you run this script, you won’t see any output. Let’s create a bucket or two and then upload some files into them.
Create a Bucket
In our bucket creation script, let's import the boto3 library (and the sys library too for command line arguments) and create an S3 resource. We’ll consider each command line argument as a bucket name and then, for each argument, create a bucket with that name.
We can make our scripts a bit more robust by using Python’s try and except features. If we wrap our call to the create_bucket() function in a try: block, we can catch any errors that might occur. If our bucket creation goes well, we simply print the response. If an error is encountered, we can print the error message and exit gracefully. Here’s what that script looks like:
#!/usr/bin/env python
import sys
import boto3
s3 = boto3.resource("s3")
for bucket_name in sys.argv[1:]:
try:
response = s3.create_bucket(Bucket=bucket_name)
print response
except Exception as error:
print error
Creating a bucket is easy but comes with some rules and restrictions. To get the complete run-down, read the Bucket Restrictions and Limitations section in the S3 documentation. The two rules that needs to be emphasized for this example are 1) bucket names must be globally unique and 2) bucket names must follow DNS naming conventions.
Basically, when choosing a bucket name, pick one that you are sure hasn’t been used before and only use lowercase letters, numbers, and hyphens.
Because simple bucket names like “my_bucket” are usually not available, a good way to get a unique bucket name is to use a name, a number, and the date. For example:
$ ./create_bucket.py projectx-bucket1-$(date +%F-%s)
s3.Bucket(name='projectx-bucket1-2017-01-01-1483305884')
Now we can run the list_buckets.py script again to see the buckets we created.
$ ./list_buckets.py
projectx-bucket1-2017-01-01-1483305884
OK! Our buckets are created but they’re empty. Let’s put some files into these buckets.
Put a File into a Bucket
Similar to our bucket creation script, we start the put script by importing the sys and boto3 libraries and then creating an S3 resource. Now we need to capture the name of the bucket we’re putting the file into and the name of the file as well. We’ll consider the first argument to be the bucket name and the second argument to be the file name.
To keep with robust scripting, we’ll wrap the call to the put() function in a try; block and print the response if all goes well. If anything fails, we’ll print the error message. That script comes together like this:
#!/usr/bin/env python
import sys
import boto3
s3 = boto3.resource("s3")
bucket_name = sys.argv[1]
object_name = sys.argv[2]
try:
response = s3.Object(bucket_name, object_name).put(Body=open(object_name, 'rb'))
print response
except Exception as error:
print error
For testing, we can create some empty files and then use the put_bucket.py script to upload each file into our target bucket.
$ touch file{1,2,3,4}.txt
$ ./put_bucket.py projectx-bucket1-2017-01-01-1483305884 file1.txt
{u'ETag': '"d41d8cd98f00b204e9800998ecf8427e"', 'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'Host=', 'RequestId': '1EDAE2B1F66C693D', 'HTTPHeaders': {'content-length': '0', 'x-amz-id=', 'server': 'AmazonS3', 'x-amz-request-id': '1EDAE2B1F66C693D', 'etag': '"d41d8cd98f00b204e9800998ecf8427e"', 'date': 'Sun, 01 Jan 2017 21:45:28 GMT'}}}
$ ./put_bucket.py projectx-bucket1-2017-01-01-1483305884 file2.txt
...
$ ./put_bucket.py projectx-bucket1-2017-01-01-1483305884 file3.txt
...
$ ./put_bucket.py projectx-bucket1-2017-01-01-1483305884 file4.txt
...
$ ./list_buckets.py
projectx-bucket1-2017-01-01-1483305884
---
file1.txt
file2.txt
file3.txt
file4.txt
Success! We’ve created a bucket and uploaded some files into it. Now let’s go in the opposite direction, deleting objects and then finally, deleting the bucket.
Delete Bucket Contents
For our delete script, we’ll start the same as our create script: importing the needed libraries, creating an S3 resource, and taking bucket names as arguments.
To keep things simple, we’ll delete all the objects in each bucket passed in as an argument. We’ll wrap the call to the delete() function in a try: block to make sure we catch any errors. Our script looks like this:
#!/usr/bin/env python
import sys
import boto3
s3 = boto3.resource('s3')
for bucket_name in sys.argv[1:]:
bucket = s3.Bucket(bucket_name)
for key in bucket.objects.all():
try:
response = key.delete()
print response
except Exception as error:
print error
If we save this as ./delete_contents.py and run the script on our example bucket, output should look like this:
$ ./delete_contents.py projectx-bucket1-2017-01-01-1483305884
{'ResponseMetadata': {'HTTPStatusCode': 204, 'RetryAttempts': 0, 'HostId': '+A4vDhUEyZgYUGSDHELJHWPt5xmZE2WNI/eqJVjYEsB6wCLU/i6a65sUSPK6x8PcoJXN/2oBmlQ=', 'RequestId': 'A097DD4C0413AF12', 'HTTPHeaders': {'x-amz-id-2': '+A4vDhUEyZgYUGSDHELJHWPt5xmZE2WNI/eqJVjYEsB6wCLU/i6a65sUSPK6x8PcoJXN/2oBmlQ=', 'date': 'Sun, 01 Jan 2017 22:09:05 GMT', 'x-amz-request-id': 'A097DD4C0413AF12', 'server': 'AmazonS3'}}}
...
Now if we run the list_buckets.py script again, we’ll see that our bucket is indeed empty.
$ ./list_buckets.py
projectx-bucket1-2017-01-01-1483305884
---
Delete a Bucket
Our delete bucket script looks a lot like our delete object script. The same libraries are imported and the arguments are taken to be bucket names. We use the S3 resource to attach to a bucket with the specific name and then in our try: block, we call the delete() function on that bucket, catching the response. If the delete worked, we print the response. If not, we print the error message. Here’s the script:
#!/usr/bin/env python
import sys
import boto3
s3 = boto3.resource('s3')
for bucket_name in sys.argv[1:]:
bucket = s3.Bucket(bucket_name)
try:
response = bucket.delete()
print response
except Exception as error:
print error
One important thing to note when attempting to delete a bucket is that the bucket must be empty first. If there are still objects in a bucket when you try to delete it, an error will be reported and the bucket will not be deleted.
Running our delete_buckets.py script on our target bucket produces the following output:
$ ./delete_buckets.py projectx-bucket1-2017-01-01-1483305884
{'ResponseMetadata': {'HTTPStatusCode': 204, 'RetryAttempts': 0, 'Host=', 'RequestId': 'DEBF57021D1AD121', 'HTTPHeaders': {'x-amz-id=', 'date': 'Sun, 01 Jan 2017 22:27:51 GMT', 'x-amz-request-id': 'DEBF57021D1AD121', 'server': 'AmazonS3'}}}
We can run list_buckets.py again to see that our bucket has indeed been deleted.
Scripting RDS
The Relational Database Service (RDS) simplifies the management of a variety of database types including MySQL, Oracle, Microsoft SQL, and Amazon’s own Aurora DB. Let’s take a look at how we can examine, create, and delete RDS instances using Python and boto3.
In this high level example, we’ll just be looking at the instances — the virtual machines — that host databases but not the databases themselves.
List DB Instances
Let’s start with RDS by getting a listing of the database instances that are currently running in our account.
Of course, we’ll need to import the boto3 library and create a connection to RDS. Instead of using a resource, though, we’ll create an RDS client. Clients are similar to resources but operate at a lower level of abstraction.
Using the client, we can call the describe_db_instances() function to list the database instances in our account. With a handle to each database instance, we can print details like the master username, the endpoint used to connect to the instance, the port that the instance is listening on, and the status of the instance. The script looks like this:
#!/usr/bin/env python
import boto3
rds = boto3.client('rds')
try:
# get all of the db instances
dbs = rds.describe_db_instances()
for db in dbs['DBInstances']:
print ("%s@%s:%s %s") % (
db['MasterUsername'],
db['Endpoint']['Address'],
db['Endpoint']['Port'],
db['DBInstanceStatus'])
except Exception as error:
print error
If you run this script but haven’t created any database instances, the output will be an empty response which is expected. If you see any errors, go back and check the script for anything that might be out of order.
Create a DB Instance
Creating a database instance requires quite a bit of input. At the least, the following information is required:
- A name, or identifier, for the instance; this must be unique in each region for the user’s account
- A username for the admin or root account
- A password for the admin account
- The class or type the instance will be created as
- The database engine that the instance will use
- The amount of storage the instance will allocate for databases
In our previous scripts, we used sys.argv to capture the input we needed from the command line. For this example, let’s just prefill the inputs in the script to keep things as simple as possible.
Also in this case, we’ll use the db.t2.micro as our instance class and mariadb as our database engine to make sure our database instances runs in the AWS free tier if applicable. For a complete listing of the database instance classes and the database engines that each class can run, review the RDS documentation for more information.
Proceeding with the script, we’ll import the boto3 library and created an RDS client. With the client, we can call the create_db_instance() function and pass in the arguments needed to create the instance. We save the response and print it if all goes well. Because the create function is wrapped in a try: block, we can also catch and print an error message if anything takes a wrong turn. Here’s the code:
#!/usr/bin/env python
import boto3
rds = boto3.client('rds')
try:
response = rds.create_db_instance(
DBInstanceIdentifier='dbserver',
MasterUsername='dbadmin',
MasterUserPassword='abcdefg123456789',
DBInstanceClass='db.t2.micro',
Engine='mariadb',
AllocatedStorage=5)
print response
except Exception as error:
print error
We can save this as create_db_instance.py and run the script directly without having to pass any arguments. If all goes well, we should see a response similar to the one below (truncated for brevity).
$ ./create_db_instance.py
{u'DBInstance': {u'PubliclyAccessible': True, u'MasterUsername': 'dbadmin', u'MonitoringInterval': 0, u'LicenseModel': 'general-public-license', u'VpcSecurityGroups': [{u'Status': 'active', u'VpcSecurityGroupId': 'sg-2950f150'}], ....
Even though our script may have finished successfully, it takes some time for the database instance to be created up to 5 minutes or more. It's important to give the RDS infrastructure time to get our instance up and running before attempting any further actions, or those follow-on actions might fail.
So after waiting for a moment, we can run out list_db_instances.py script to see the details for the database instance we just created.
$ ./list_db_instances.py
dbadmin@dbserver.co2fg22sgwmb.us-west-2.rds.amazonaws.com:3306 available
In this case, we see our admin account name, our instance identifier along with the RDS endpoint assigned to it, and the port our database instance is listening on. Awesome! These all match up to the input we specified in the script.
Now that we know our database instance is up and running, we can go about connecting to it, creating user accounts, databases, and tables inside databases. Of course, all of those things can be done programmatically with Python but they are also beyond the scope of this tutorial. Take a look at The Hitchhiker’s Guide to Python for more information on using databases with Python.
Delete a DB Instance
Once we no longer need a database instance, we can delete it. Of course, we’ll import the boto3 library and the sys library as well this time; we’ll need to get the name of the instance to be deleted as an argument. After creating an RDS client, we can wrap the call to delete_db_instance() in a try: block. We catch the response and print it. If there are any exceptions, we print the error for analysis. Here’s the script:
#!/usr/bin/env python
import sys
import boto3
db = sys.argv[1]
rds = boto3.client('rds')
try:
response = rds.delete_db_instance(
DBInstanceIdentifier=db,
SkipFinalSnapshot=True)
print response
except Exception as error:
print error
Before we try to delete anything, let’s run list_db_instances.py to get the name of the database instances that are available. Then we can run delete_db_instance.py on one of the names. Like the creation step, deleting a database takes some time. But if we run list_db_instances.py right away, we’ll see the status of the instance change to “deleting." That procedure looks like this:
$ ./list_db_instances.py
dbadmin@dbserver.co2fg22sgwmb.us-west-2.rds.amazonaws.com:3306 available
$ ./delete_db_instance.py dbserver
{u'DBInstance': {u'PubliclyAccessible': True, u'MasterUsername': 'dbadmin', u'MonitoringInterval': 0, u'LicenseModel': 'general-public-license', u'VpcSecurityGroups': [{u'Status': 'active', u'VpcSecurityGroupId': 'sg-2950f150'}],
…
$ ./list_db_instances.py
dbadmin@dbserver.co2fg22sgwmb.us-west-2.rds.amazonaws.com:3306 deleting
Once the instance has been completely deleted, running the list script again should return an empty list.
Wrapping Up and Next Steps
In this tutorial we’ve covered a lot! If you followed from the beginning, you’ve configured an environment to run Python and installed the boto3 library. You created and configured an account to access AWS resources from the command line. You’ve also written and used Python scripts to list, create and delete objects in EC2, S3, and RDS. That’s quite the accomplishment.
Still, there’s more to learn. The examples used this this tutorial just scratch the surface of what can be done in AWS with Python. Also, the examples use just the bare minimum in regards to service configuration and security. I encourage you to explore the documentation for each service to see how these examples can be made more robust and more secure before applying them in your development and production workflows.
If you have feedback or questions on this tutorial, please leave them in the comments below. I also encourage you to share your successes as you explore and master automating AWS with Python. | https://linuxacademy.com/howtoguides/posts/show/topic/14209-automating-aws-with-python-and-boto3 | CC-MAIN-2018-05 | refinedweb | 4,490 | 66.23 |
Contents SQLObject vs SQL.
What works:
What doesn’t work:
There are two main ways to define the model:)
There’s a special way for the controller to access the session and data.
import turbogears as tg from turbogears import controllers, expose, flash from turbogears.database import session from YOURPACKAGE import model class Root(controllers.RootController): @expose(template="YOURPACKAGE.templates.test") def index(self): data = {} entries = [] data['entries'] = entries myTable = model.MyTable() session.save(myTable) # TurboGears handles session creation session.commit() for table in session.query(model.MyTable): entries.append(table.id) return data # session is automatically closed for you
Then in the template you can have something like:
<span py:${table_id}</span>
Don’t use .all() in your queries. The only place where you need to use .all() is with MSSQL server 2000 which does not support offsets..
The TurboGears database module provides the following:.
Before starting, be warned this can be quite a major task. You’ll need to be reasonably familiar with SQLAlchemy. And make sure you test your application thoroughly after the change. | http://www.turbogears.org/1.0/docs/SQLAlchemy/index.html | CC-MAIN-2016-30 | refinedweb | 177 | 52.76 |
Python logging for Humans
Project description
Alog
Python logging for Humans. Your goto logging module without panic on context swtich.
Warning: No more logger = logging.getLogger(__name__) in your every file.
>>> import alog >>> alog.info("Hi.") 2016-12-18 20:44:30 INFO <stdin> Hi. >>> def test(): ... alog.info("Test 1") ... alog.error("Test 2") ... >>> test() 2016-12-18 20:45:19 INFO <stdin:2> Test 1 2016-12-18 20:45:19 ERROR <stdin:3> Test 2 >>> alog.set_level("ERROR") >>> test() 2016-12-18 20:45:41 ERROR <stdin:3> Test 2
If you’re new to logging, see Why should you use logging instead of print.
Installation
pip install alog
Features
Instant logging with expected defaults.
You can do logging instantly by reading a small piece of README. Alog comes with useful defaults:
A default logger.
Logging level: logging.INFO
Logging format:
"%(asctime)s %(levelname)-5.5s [parent_module.current_module:%(lineno)s]%(message)s", "%Y-%m-%d %H:%M:%S"
No more __name__ whenever you start to do logging in a module.
Alog builds the default module names on the fly.
Compatible with default Python logging module.
Alog is built upon default Python logging module. You can configure it by the same way of default Python logging module when it’s needed.
Comparing alog with Python default logging module
Comparing alog :
In [1]: import alog In [2]: alog.info("Hello alog!") 2016-11-23 12:20:34 INFO <IPython> Hello alog!
with logging module:
In [1]: import logging In [2]: logging.basicConfig( ...: level=logging.INFO, ...: format="%(asctime)s %(levelname)-5.5s " ...: "[%(name)s:%(lineno)s] %(message)s") In [3]: # In every file you want to do log, otherwise %(names)s won't work. In [4]: logger = logging.getLogger(__name__) In [5]: logger.info("Hello log!") 2016-11-23 12:16:30 INFO [__main__:1] Hello log!
Why should you use logging instead of print
The main goal of logging is to figure out what was going on and to get the insights. print, by default, does only pure string output. No timestamp, no module hint, and no level control, comparing to a pretty logging record.
Lets start with aproject/models/user.py :
class User: def __init__(self, user_id, username): ... print(username) ...
What you got output of print :
>>> admin = User(1, "admin") "admin"
Now use alog :
import alog class User: def __init__(self, user_id, username): ... alog.info(username) ...
What you got output of alog.info :
>>> admin = User(1, "admin") 2016-11-23 11:32:58 INFO [models.user:6] admin
In the output of hundreds of lines, it helps (a lot).
What if you have used print a log? That’s as easy:
import alog print = alog.info ... # A lot of print code no needed to change
0.9.11 (2017-04-07)
- Add alog.getLogger() for handy replacing logging.getLogger.
0.9.10 (2017-03-27)
- Default logging format asctime to “%Y-%m-%d %H:%M:%S” instead of “%Y-%m-%d,%H:%M:%S.%f”.
- Update package info and usage (setup.py, README, …).
0.9.9 (2016-08-28)
- Update to turn_thread_name and turn_process_id.
0.9.8 (2016-08-27)
- Support showing_thread_name and showing_process_id.
- Support global reset.
0.9.7 (2016-08-17)
- Better paths log for None default root name.
0.9.6 (2016-08-16)
- First public release.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/alog/0.9.11/ | CC-MAIN-2020-29 | refinedweb | 581 | 61.93 |
Alright so basically today I decided I wanted to learn Java as it has been on my to do list for about 6 months now and basically I asked a few friends who knew how to program and they redirected me to thenewboston's tutorials on java programming. Everything was going smooth for awhile I was understanding everything he said and I was taking notes and such and then I got on to his user input tutorial. Now I listened to what he said, took notes on what he said what this and that did, and typed in what he wrote down. Now what happened whenever I ran it on Eclipse is that it gave me this error message.Exception in thread "main" java.lang.Error: Unresolved compilation problem: The method nextline() is undefined for the type Scanner at apples.main(apples.java:6)Could someone please help on what I should do. The code to what I'm trying to use to get user input is this....
import java.util.Scanner;
public class apples{
public static void main(String args[]){
Scanner bucky = new Scanner(System.in);
System.out.println(bucky.nextline());
}
} | http://www.javaprogrammingforums.com/whats-wrong-my-code/30041-user-input-error-help.html | CC-MAIN-2015-27 | refinedweb | 192 | 60.45 |
From the Intellij website:
Scratch Files, a very handy feature that helps you experiment and prototype. With this feature you can sketch something really quick right in the editor, without modifying your project or creating any files.
You can open an infinite number of Scratch Files and easily switch between them. IntelliJ IDEA will provide all of its coding assistance features for these files according to the type you will select for them in a status bar widget.
However, when I create a facelet scratch file the "URI is not registered" error appears, none of the tags are recognized, and tab-completion of tags as I type does not happen.
I create the new facelet scratch file as follows:
Tools -> New Scratch -> XHTML file
When the mouse hovers over one of the red-highlighted xmlns lines a balloon pops up:
URI is not registered (Settings | Project Settings | Schemas and DTDs). When the mouse hovers over the h:outputLabel tag a balloon pops up:
Cannot resolve symbol: 'h:outputLabel'. However, in my project there are no such errors in any of my Facelet files.
According to Jetbrains, "IntelliJ IDEA will provide all of its coding assistance features for these files according to the type you will select for them in a status bar widget." Is this just a broken feature or is there a way to resolve this problem? It's completely not useful to have a XHTML scratch file that does not provide coding assistance.
Changing to
java.sun.comnamespace does not resolve the issue:
| https://intellij-support.jetbrains.com/hc/en-us/community/posts/206820335-IntelliJ-IDEA-Scratch-File-JSF-xmlns-URI-not-registered | CC-MAIN-2020-10 | refinedweb | 255 | 69.01 |
As developers, many of us build breakable toys in various programming languages when learning something new; the goal is to learn the language and not the intricacies of a new product, after all! Some of us reach for todo apps, others a messaging clone. If you’re like me, though, it’s URL shorteners.
I built my most recent URL shortener in Elixir, and decided to see how much raw
speed I could get out of the application by using
GenServer - Generic
Server - to build out a key-value store to bypass the database when
possible.
State in Elixir
A GenServer is one mechanism for leveraging state in Elixir, so it’s the perfect use for a transient, in-memory cache. In this example, I’ll be caching the Struct straight from Ecto and using the slug from the URL to determine the cache key.
Cache Interface
Now that we understand the underlying mechanism we’ll be using, the cache behavior is the next step:
# lib/link_cache/cache.ex defmodule LinkCache.Cache do def fetch(slug, default_value_function) do case get(slug) do {:not_found} -> set(slug, default_value_function.()) {:found, result} -> result end end end
We’ll use
LinkCache.Cache.fetch/2 to provide both a slug for lookup and a
function to execute to populate the cache if no value is found. While
LinkCache.Cache.get/1 and
LinkCache.Cache.set/2 don’t exist yet, we can
infer behavior:
LinkCache.Cache.get/1 will either return a tuple of
{:not_found} (at which point we’ll call
LinkCache.Cache.set/2 to assign the
value and return it), or
{:found, result}, at which point we return the
result directly.
To leverage this, let’s change
RedirectTo.RedirectController:
--- a/web/controllers/redirect_controller.ex +++ b/web/controllers/redirect_controller.ex @@ -3,7 +3,9 @@ defmodule RedirectTo.RedirectController do alias RedirectTo.Queries def show(conn, %{"slug" => slug}) do - link = Queries.Link.by_slug(slug) |> Repo.one + link = LinkCache.Cache.fetch(slug, fn -> + Queries.Link.by_slug(slug) |> Repo.one + end) redirect conn, external: link.long_url end
Digging into our cache
With our usage out of the way, let’s build out the rest of
LinkCache.Cache.
# lib/link_cache/cache.ex defmodule LinkCache.Cache do use GenServer def start_link(opts \\ []) do GenServer.start_link(__MODULE__, [ {:ets_table_name, :link_cache_table}, {:log_limit, 1_000_000} ], opts) end def fetch(slug, default_value_function) do case get(slug) do {:not_found} -> set(slug, default_value_function.()) {:found, result} -> result end end defp get(slug) do case GenServer.call(__MODULE__, {:get, slug}) do [] -> {:not_found} [{_slug, result}] -> {:found, result} end end defp set(slug, value) do GenServer.call(__MODULE__, {:set, slug, value}) end # GenServer callbacks def init(args) do [{:ets_table_name, ets_table_name}, {:log_limit, log_limit}] = args :ets.new(ets_table_name, [:named_table, :set, :private]) {:ok, %{log_limit: log_limit, ets_table_name: ets_table_name}} end end
We’ve already seen
LinkCache.Cache.fetch/2, so let’s move on to
LinkCache.Cache.get/1 and
LinkCache.Cache.set/2.
defp get(slug) do case GenServer.call(__MODULE__, {:get, slug}) do [] -> {:not_found} [{_slug, result}] -> {:found, result} end end
GenServer.call/3 is a synchronous function we use to send a request to the
server (
__MODULE__, which is equivalent to
LinkCache.Cache). The 2-tuple
{:get, slug} (which we’ll pattern-match on) tells the server to retrieve the
value with the key
slug, and will return zero or one results due to the
underlying storage mechanism we’re using.
defp set(slug, value) do GenServer.call(__MODULE__, {:set, slug, value}) end
Here, we’re performing another synchronous action to cache the
value for
the specific
slug, returning the 2-tuple
{slug, value}.
Both
LinkCache.Cache.get/1 and
LinkCache.Cache.set/2 are sending requests
to
__MODULE__, as discussed before; we need to hook into these behaviors,
pattern-matching on the requests we’re sending, to trigger underlying
behavior.
Above, we’d seen
GenServer.call/3 used, with the appropriate tuples being
passed; let’s look at the hooks we’ve written to handle these requests:
GenServer.handle_call/3 manages the request (either
{:get, slug} or
{:set, slug, value}), a 2-tuple of the request sender (PID and descriptor),
which we’re ignoring here (
_from), and the server’s internal state (
state).
Both reply to the request with a 3-tuple
{:reply, link_struct, state} - with
the link Struct available, we can then retrieve the URL and send the user on
their way.
Erlang Term Storage
LinkCache.Cache.get/1 and
LinkCache.Cache.set/2 each reference
:ets, an
Erlang module referencing the underlying ETS (Erlang Term Storage)
architecture, an incredibly performant in-memory store with
O(1) lookups for
“set” storage, which we’re using.
In both definitions, we pattern-match on
state to retrieve the table name
for our link cache and perform both insert (for
LinkCache.Cache.set/2) and
lookup (for
LinkCache.Cache.get/1) operations.
We configure
:ets in
LinkCache.Cache.start_link/1 and
LinkCache.Cache.init/1:
def start_link(opts \\ []) do GenServer.start_link(__MODULE__, [ {:ets_table_name, :link_cache_table}, {:log_limit, 1_000_000} ], opts) end def init(args) do [{:ets_table_name, ets_table_name}, {:log_limit, log_limit}] = args :ets.new(ets_table_name, [:named_table, :set, :private]) {:ok, %{log_limit: log_limit, ets_table_name: ets_table_name}} end
These functions prepare
LinkCache.Cache to be supervised
(
GenServer.start_link/3) with configuration (a table named
:link_cache_table with room for 1 million 2-tuples), as well as create the
table (
:ets.new/2), configured for a named table with “set” storage.
Fault-tolerance with a supervisor
With the underlying cache built, we now want to ensure Phoenix keeps the cache running. Elixir promotes a strategy of letting things crash - and ensuring recovery is quick and straightforward - with Supervisors at the helm.
Let’s build out a supervisor to ensure
LinkCache.Cache is monitored and runs
correctly:
# lib/link_cache/supervisor.ex defmodule LinkCache.Supervisor do use Supervisor def start_link do Supervisor.start_link(__MODULE__, :ok, name: __MODULE__) end def init(:ok) do children = [ worker(LinkCache.Cache, [[name: LinkCache.Cache]]) ] supervise(children, strategy: :one_for_one) end end
Our
LinkCache.Supervisor supervises only
LinkCache.Cache, with a
:one_for_one strategy.
We’ll also need to update our application to include this supervisor:
# lib/redirect_to.ex defmodule RedirectTo do use Application def start(_type, _args) do import Supervisor.Spec, warn: false children = [ supervisor(RedirectTo.Endpoint, []), supervisor(RedirectTo.Repo, []), supervisor(LinkCache.Supervisor, []), ] opts = [strategy: :one_for_one, name: RedirectTo.Supervisor] Supervisor.start_link(children, opts) end # ... end
So, in addition to our
RedirectTo.Endpoint and
RedirectTo.Repo, our
LinkCache.Supervisor will also be managed by
RedirectTo; this ensures if
anything happens to the cache, our application will restart it.
You can find a more thorough walkthrough of supervisors, supervision trees, and how to tie various components together on Elixir’s discussion of OTP topics.
Testing
LinkCache.Cache
To test the cache, I wrote a small test ensuring the cache set the value based on return value of a function, and reused that value on a subsequent lookup.
# test/lib/link_cache/cache_test.exs defmodule LinkCacheTest do use ExUnit.Case test "caches and finds the correct data" do assert LinkCache.Cache.fetch("A", fn -> %{id: 1, long_url: ""} end) == %{id: 1, long_url: ""} assert LinkCache.Cache.fetch("A", fn -> "" end) == %{id: 1, long_url: ""} end end
Again, the cache can store Structs, Maps, or all sorts of data structures. As is the case with any cache, invalidation might be tricky. The architecture of this application is such that the links themselves cannot be updated, so I’m not concerned about invalid data. Your mileage may vary.
Memory usage with 1 million 2-tuples
I’d admittedly chosen a
log_limit of 1 million pairs without knowing the
memory implications of that particular cache size. And while lookups are
O(1) (constant time, regardless of the number of pairs in the cache), I did
want to verify I wouldn’t be running out of memory.
How do we verify this?
First, fire up IEx running the Phoenix application:
iex -S mix phoenix.server
Next, run
:observer.start() within IEx to see a GUI outlining various metrics of the
running application.
Assuming ETS doesn’t make any sort of optimizations around similar values:
link = RedirectTo.Queries.Link.by_slug("A") |> RedirectTo.Repo.one 1..1_000_000 |> Enum.each(fn(i) -> LinkCache.Cache.fetch((i |> to_string), fn -> link end) end)
This inserts one million pairs into the cache, with the keys being string
versions of each integer (
"1",
"1250", etc.) and the values being the
Struct directly from Ecto.
Over the course of ~40s, all 1 million pairs were stored in-memory (insertion of ~25,000/s) with a result size of around 856MB, or just under 1kB per pair.
Storing a subset of the data results in significantly less memory usage:
link = RedirectTo.Queries.Link.by_slug("A") |> RedirectTo.Repo.one |> Map.take([:slug, :long_url])
Benchmarks on Heroku
With the cache in place, I deployed the cache- and database-read versions of the application to Heroku, running on the free server and database.
In both tests, I used Siege against the HTTP version of the application, configured not to follow redirects, against the same slug with 500 concurrent users over 60s. In both cases, the Heroku dyno was running (no spin-up time).
DB read results
Transactions: 29508 hits Availability: 100.00 % Elapsed time: 59.75 secs Data transferred: 2.67 MB Response time: 0.70 secs Transaction rate: 493.86 trans/sec Throughput: 0.04 MB/sec Concurrency: 343.44 Successful transactions: 29508 Failed transactions: 0 Longest transaction: 5.98 Shortest transaction: 0.04
I was impressed by these numbers; almost 30,000 requests per minute on free hardware, with an average response time of roughly 700ms, and this was hitting the database on each request. No connections were dropped.
Cache read results
Transactions: 44829 hits Availability: 100.00 % Elapsed time: 59.38 secs Data transferred: 4.06 MB Response time: 0.38 secs Transaction rate: 754.95 trans/sec Throughput: 0.07 MB/sec Concurrency: 284.19 Successful transactions: 44829 Failed transactions: 0 Longest transaction: 8.39 Shortest transaction: 0.03
The throughput when reading from the cache was significantly better, serving 50% more requests. Additionally, the average response time was about half that of the run hitting the database.
Wrapping up
Even with Elixir’s incredible speed and concurrency support out of the box (30k RPM on free Heroku hardware with sub-second response times is pretty great), there are always ways to leverage certain technologies for what they’re great at. By leveraging Erlang’s ETS, GenServer, and supervisors, we were able to build a fault-tolerant, scalable in-memory cache to improve both performance and throughput. | https://robots.thoughtbot.com/make-phoenix-even-faster-with-a-genserver-backed-key-value-store | CC-MAIN-2018-17 | refinedweb | 1,741 | 59.4 |
On Mon, May 20, 2002 at 07:52:11AM +0200, Marcus Brinkmann wrote: > > "/hurd" isn't needed simply as somewhere to put these translators, > > it's needed as a namespace to reference them. > That is a nice description for /servers actually (oh, no, another new top level > directory. But that was the last one, promised). Hrm? Why are you typing "/hurd/foo" in your settrans command instead of "/servers/foo" then? What's /servers for? (If /hurd *is* just somewhere to store them and isn't meant to be a convenient namespace, then you probably shouldn't be polluting the "/" namespace directly -- after all, it's easy to put the files in /usr/lib/some/strange/place, and then make them referencable from /servers. But that would only apply if you're using /servers as the namespace, in which case you'd be typing /servers/msdos in your settrans commands, which you're not. So what gives?) > > >. > There are some specific problems that both solve, but they are different > enough that it is dangerous to mix them up by presenting them side by side > without explaining the differences thoroughly. Right, but that does make them remotely comparable. > I have heard quite often > terms like "kernel extensions" in this thread when people talked about > Hurd servers, this is not something one should encourage. How about "kernel extensions on acid" ? Hrm, maybe we could make a variant of the Chinese Fortune Cookie game for the Hurd: everytime you try to make an analogy to Linux or similar, you add "on acid" to the end. "Hurd developers are like Linux developers on acid." Fun for the whole family ;) > The Hurd translators are not internal binaries, though. They are not directly > executed by users, but especially in the settrans -a case above they are > explicitely invoked by the user, with the help of settrans. They're referred to by the user, but they're not invoked by the user. Take debootstrap, eg, which uses a bash script with a particular ABI (you source it, it defined a few functions which you then call) to work out how to install different versions of Debian's base system. It has a bunch of scripts in /usr/lib/debootstrap/scripts which can either be referred to longhand: debootstrap unstable ./chroot /usr/lib/debootstrap/scripts/woody or, if your dist has a script of its own, in shorthand: debootstrap woody ./chroot The fact that /usr/lib/debootstrap/scripts/woody is a script (or that its ABI used to make it a program) is basically irrelevant. It shouldn't go in /usr/bin because people don't call it directly, even though they could. The same thing applies to the Hurd servers; with the exception that while they *could* go somewhere under /usr/lib (or /usr/libexec) they need to be much easier to reference. This latter thing could be achieved by having some sort of shorthand notation in all the cases where it matters (which is easy in debootstrap's case, but might be an annoying thing to have to retrofit with the Hurd), or to make a new, short, top-level directory (which you seem to have done twice). Cheers, aj -- Anthony Towns <aj@humbug.org.au> <> I don't speak for anyone save myself. GPG signed mail preferred. ``BAM! Science triumphs again!'' --
Attachment:
pgpoN_jjrvpti.pgp
Description: PGP signature | https://lists.debian.org/debian-hurd/2002/05/msg00505.html | CC-MAIN-2014-15 | refinedweb | 562 | 61.77 |
<<
Guilherme OliveiraCourses Plus Student 2,985 Points
Python Basics - Stage 6 (Challenge Task 2 of 2)
Having a hard time trying to figure it out the right way to do this. I don't really know where the error is:
import random my_list = [1, 2, 3, 4, 5] def random_member(my_list): random_num = randint(0, (len (my_list) - 1)) my_list[random_num]
6 Answers
Peter Szerzo22,661 Points
Guilherme,
These should do the trick:
indent all lines after the def random_member(my_list) one. You need to do this to tell python that those two lines are part of the function definition.
instead of randint, type random.randint so it references the module you are importing in the first line (otherwise Python will not find it).
change the last line to 'return my_list[random_num]'. Otherwise, Python will find the random array member, but will never tell the rest of the code about it. Essentially, it takes the secret to the grave.
Let me know if this works. Happy learning!
Kenneth LoveTreehouse Guest Teacher
You don't need to provide the
my_list variable. That'll get passed in by the test runner (it's OK to provide it, though, in your own scripts.
Your example above isn't indented at all. Is your submission to the CC indented correctly for creating a function or is it just like what you have above?
Also, what error(s) are you getting, if any?
Andrew Molloy37,259 Points
Just to add to Kenneth Love 's answer, even when I completely doubted my code, most of my issues in the Python challenges were just down to the indenting. As an aside I quite like how it's beating some of my bad coding layout habits out of me.
Guilherme OliveiraCourses Plus Student 2,985 Points
Got it! Thank you Kenneth, Peter, Martin and Andrew for your help. It was the indentation. Apparently the biggest mistakes in coding for beginners are in the fundamentals. We get caught up so much with the challenge itself sometimes that we don't see the obvious.
Piraveen PartheepanFull Stack JavaScript Techdegree Student 1,130 Points
U am badly struggling do not get this at all
Kenneth LoveTreehouse Guest Teacher
What problems are you having?
Piraveen PartheepanFull Stack JavaScript Techdegree Student 1,130 Points
I do not understand what it means by int arguement
Kenneth LoveTreehouse Guest Teacher
It means the function will get an argument that is an int.
Piraveen PartheepanFull Stack JavaScript Techdegree Student 1,130 Points
Would I write the arguement as a = 4
Kenneth LoveTreehouse Guest Teacher
No, you don't write the arguments, you just say how many come in.
For example:
def takes_no_args(): pass def takes_one_arg(something): pass def takes_two_args(something, something_else): pass
takes_one_arg() takes only one argument.
takes_two_args() requires two arguments. And
takes_no_args() doesn't accept any arguments.
Piraveen PartheepanFull Stack JavaScript Techdegree Student 1,130 Points
Ah ha I understand thank you so much
Piraveen PartheepanFull Stack JavaScript Techdegree Student 1,130 Points
import random
def random_did(a): a = 4 return random.radiant (1, a)
I did this but it still did not work I indented
Piraveen PartheepanFull Stack JavaScript Techdegree Student 1,130 Points
import random
def random_did(a): a = 4 return random.radiant (1, a)
I did this but it still did not work I indented
Kenneth LoveTreehouse Guest Teacher
That won't pass the challenge because that's not what the challenge is asking for.
Kenneth LoveTreehouse Guest Teacher
Kenneth LoveTreehouse Guest Teacher
Just wait'll we cover PEP 0008 in a later course :D | https://teamtreehouse.com/community/python-basics-stage-6-challenge-task-2-of-2 | CC-MAIN-2022-27 | refinedweb | 591 | 70.23 |
Overview
OS.walk() generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (dirpath, dirnames, filenames). Paths root : Prints out directories only from what you specified dirs : Prints out sub-directories from root. files: Prints out all files from root and directories
walkFileSystem.py
Open an text editor , copy & paste the code below. Save the file as walkFileSystem.py and exit the editor. Run the script: $ python walkFileSystem.py
import os os.system("clear") print "-" * 80 print "OS Walk Program" print "-" * 80 print " " print "Root prints out directories only from what you specified" print "-" * 70 print "Dirs prints out sub-directories from root" print "-" * 70 print "Files prints out all files from root and directories" print "-" * 70 print "This program will do an os.walk on the folder that you specify" print "-" * 70 path = raw_input("Specify a folder that you want to perform an 'os.walk' on: >> ") for root, dirs, files in os.walk(path): print root print "---------------" print dirs print "---------------" print files print "---------------"
More reading can be found here: | https://www.pythonforbeginners.com/code-snippets-source-code/having-fun-with-os-walk-in-python | CC-MAIN-2020-16 | refinedweb | 196 | 65.93 |
Python Testing with pytest: Fixtures and Coverage
Improve your Python testing even more.
In my last two articles, I introduced pytest, a library for testing Python code (see "Testing Your Code with Python's pytest" Part I and Part II). pytest has become quite popular, in no small part because it's so easy to write tests and integrate those tests into your software development process. I've become a big fan, mostly because after years of saying I should get better about testing my software, pytest finally has made it possible.
So in this article, I review two features of pytest that I haven't had a chance to cover yet: fixtures and code coverage, which will (I hope) convince you that pytest is worth exploring and incorporating into your work.
Fixtures
When you're writing tests, you're rarely going to write just one or two. Rather, you're going to write an entire "test suite", with each test aiming to check a different path through your code. In many cases, this means you'll have a few tests with similar characteristics, something that pytest handles with "parametrized tests".
But in other cases, things are a bit more complex. You'll want to have some objects available to all of your tests. Those objects might contain data you want to share across tests, or they might involve the network or filesystem. These are often known as "fixtures" in the testing world, and they take a variety of different forms.
In pytest, you define fixtures using a combination of the
pytest.fixture
decorator, along with a function definition. For example, say
you have a file that returns a list of lines from a file, in which each
line is reversed:
def reverse_lines(f): return [one_line.rstrip()[::-1] + '\n' for one_line in f]
Note that in order to avoid the newline character from being placed at
the start of the line, you remove it from the string before reversing and
then add a
'\n' in each returned string. Also note that although it
probably would be a good idea to use a generator expression rather than a list
comprehension, I'm trying to keep things relatively simple here.
If you're going to test this function, you'll need to pass it a
file-like object. In my last article, I showed how you could use a
StringIO object
for such a thing, and that remains the case. But rather than defining
global variables in your test file, you can create a fixture that'll provide
your test with the appropriate object at the right time.
Here's how that looks in pytest:
@pytest.fixture def simple_file(): return StringIO('\n'.join(['abc', 'def', 'ghi', 'jkl']))
On the face of it, this looks like a simple function—one that returns the value you'll want to use later. And in many ways, it's similar to what you'd get if you were to define a global variable by the name of "simple_file".
At the same time, fixtures are used differently from global variables. For example, let's say you want to include this fixture in one of your tests. You then can mention it in the test's parameter list. Then, inside the test, you can access the fixture by name. For example:
def test_reverse_lines(simple_file): assert reverse_lines(simple_file) == ['cba\n', 'fed\n', ↪'ihg\n', 'lkj\n']
But it gets even better. Your fixture might act like data, in that you don't invoke it with parentheses. But it's actually a function under the hood, which means it executes every time you invoke a test using that fixture. This means that the fixture, in contrast with regular-old data, can make calculations and decisions.
You also can decide how often a fixture is run. For example, as it's
written now, this fixture will run once per test that mentions it.
That's great in this case, when you want to compare with a list or
file-like structure. But what if you want to set up an object and then
use it multiple times without creating it again? You can do that by
setting the fixture's "scope". For example, if you set the scope of the
fixture to be "module", it'll be available throughout your tests but
will execute only a single time. You can do this by passing the
scope
parameter to the
@pytest.fixture decorator:
@pytest.fixture(scope='module') def simple_file(): return StringIO('\n'.join(['abc', 'def', 'ghi', 'jkl']))
I should note that giving this particular fixture "module" scope is a
bad idea, since the second test will end up having a
StringIO whose
location pointer (checked with
file.tell) is already at the end.
These fixtures work quite differently from the traditional setup/teardown system that many other test systems use. However, the pytest people definitely have convinced me that this is a better way.
But wait—perhaps you can see where the "setup" functionality exists in these fixtures. And, where's the "teardown" functionality? The answer is both simple and elegant. If your fixture uses "yield" instead of "return", pytest understands that the post-yield code is for tearing down objects and connections. And yes, if your fixture has "module" scope, pytest will wait until all of the functions in the scope have finished executing before tearing it down.
Coverage
This is all great, but if you've ever done any testing, you know there's always the question of how thoroughly you have tested your code. After all, let's say you've written five functions, and that you've written tests for all of them. Can you be sure you've actually tested all of the possible paths through those functions?
For example, let's assume you have a very strange function,
only_odd_mul,
which multiplies only odd numbers:
def only_odd_mul(x, y): if x%2 and y%2: return x * y else: raise NoEvenNumbersHereException(f'{x} and/or {y} ↪not odd')
Here's a test you can run on it:
def test_odd_numbers(): assert only_odd_mul(3, 5) == 15
Sure enough, the test passed. It works great! The software is terrific!
Oh, but wait—as you've probably noticed, that wasn't a very good job of testing it. There are ways in which the function could give a totally different result (for example, raise an exception) that the test didn't check.
Perhaps it's easy to see it in this example, but when software gets larger and more complex, it's not going to be so easy to eyeball it. That where you want to have "code coverage", checking that your tests have run all of the code.
Now, 100% code coverage doesn't mean that your code is perfect or that it lacks bugs. But it does give you a greater degree of confidence in the code and the fact that it has been run at least once.
So, how can you include code coverage with pytest? It turns out that
there's a package called pytest-cov on PyPI that you can download and
install. Once that's done, you can invoke pytest with the
--cov
option. If you don't say anything more than that, you'll get a
coverage report for every part of the Python library that your program
used, so I strongly suggest you provide an argument to
--cov,
specifying which program(s) you want to test. And, you should indicate
the directory into which the report should be written. So in this case, you would
say:
pytest --cov=mymul .
Once you've done this, you'll need to turn the coverage report into something human-readable. I suggest using HTML, although other output formats are available:
coverage html
This creates a directory called htmlcov. Open the index.html file in this directory using your browser, and you'll get a web-based report showing (in red) where your program still lacks coverage. Sure enough, in this case, it showed that the even-number path wasn't covered. Let's add a test to do this:
def test_even_numbers(): with pytest.raises(NoEvenNumbersHereException): only_odd_mul(2,4)
And as expected, coverage has now gone up to 100%! That's definitely something to appreciate and celebrate, but it doesn't mean you've reached optimal testing. You can and should cover different mixtures of arguments and what will happen when you pass them.
Summary
If you haven't guessed from my three-part focus on pytest, I've been bowled over by the way this testing system has been designed. After years of hanging my head in shame when talking about testing, I've started to incorporate it into my code, including in my online "Weekly Python Exercise" course. If I can get into testing, so can you. And although I haven't covered everything pytest offers, you now should have a good sense of what it is and how to start using it.
Resources
- The pytest website is at.
- An excellent book on the subject is Brian Okken's Python testing with pytest, published by Pragmatic Programmers. He also has many other resources, about pytest and code testing in general, at.
- Brian's blog posts about pytest's fixtures are informative and useful to anyone wanting to get started with them. | https://www.linuxjournal.com/content/python-testing-pytest-fixtures-and-coverage | CC-MAIN-2021-31 | refinedweb | 1,555 | 71.14 |
330018: view does not hide Pythonista in appex mode
Same kind of problem as full_screen...
The window of Pythonista is not entirely hidden by the sheet presentation of the view
The gray part is the up part of Pythonista appex view with Edit and Done buttons
import appex import ui v = ui.View() w, h = (540,620) v.frame = (0,0,w,h) v.background_color='white' v.name = 'test in appex' v.present('sheet')#, hide_title_bar=True)
Hi,
can confirm this behavior, but unfortunately without solution. Seems to be a mathematical problem while calculating the views position.
@OMZ: can you please fix this?
Regards
Tom
More visible without title and a transparent background... | https://forum.omz-software.com/topic/5947/330018-view-does-not-hide-pythonista-in-appex-mode | CC-MAIN-2022-21 | refinedweb | 112 | 68.67 |
Hello all, trying to get my program working. The program is to take input from the user in a hh:mmx format (x being p or a, standing for PM and AM) and tell the user how many minutes from midnight that time is. I was able to make it so it would work in the Shell using the following code snippet:
def timedif(x): h = int(x[0:2]) m = int(x[3:5]) ampm = n[-1] if h != 12 and ampm == 'p': h = h + 12 time = h*60 + m print time n = raw_input('Input a time in hh:mmx format: ') timedif(n)
Now I am trying to make it work in a Tkinter format. The following snippet is what I have so far. Wondering if you all could help me figure out how to get it working. Thanks!
import Tkinter win = Tkinter.Tk() win.title('Time Clock') class Converter(): def __init__ (self,x): self.x = x def convert(self,x): h = int(x[0:2]) m = int(x[3:5]) ampm = n[-1] if h != 12 and ampm == 'p': h = h + 12 diff = h*60 + m conv = Converter(5) label = Tkinter.Label(win,text="Calculate Elapsed Time in Minutes",font=('Courier New',32,'bold')) label.pack() Row3 = Tkinter.Frame(win) mLabel = Tkinter.Label(Row3,text = "Enter Time (hh:mmx) With x = a or p",font=('Courier New',30)) mEntry = Tkinter.Entry(Row3,width=6,font=('Courier New',30)) mLabel.pack(side='left') mEntry.pack(side='right') Row3.pack() Row4 = Tkinter.Frame(win) vLabel = Tkinter.Label(Row4,text="Elapsed Minutes Since Midnight",font=('Courier New',30)) vLabel.pack() Row4.pack() Row5 = Tkinter.Frame(win) qb = Tkinter.Button(Row5,text='Quit',command=win.destroy,font=('Courier New',30)) cb = Tkinter.Button(Row5,text='Convert',font=('Courier New',30)) qb.pack(side='left') cb.pack(side='right') Row5.pack() win.mainloop()
You should send the time value entered to the class, so this statement should be in a function that is called when the "Convert" button is pressed
conv = Converter(5)
and should send the value from the Entry instead of "5" (links to using get to read an entry and callbacks for a button click
Next the class uses "n" which it doesn't have access to
ampm = n[-1]
You should be using "x" or even better self.x
The "Elapsed Minutes Since Midnight" should actually show something which requires a Tkinter variable
Finally you should allow for 2:2P instead of 02:02P which means you should split on the colon instead of the [0:2] and [3:5] slices | http://www.daniweb.com/software-development/python/threads/437884/tkinter-time-converter | CC-MAIN-2014-15 | refinedweb | 432 | 57.77 |
On Thu, 2003-07-31 at 13:42, Glenn Sieb wrote: > Jim Breton said: > > I have set up the above on a FreeBSD 4.7 system with the intention of > > being able to host per-virtual-domain mailing lists. > > Yay! Another FBSD user! :) > > > I want to be able to have: > > > > list01 at domain1.com > > and > > list01 at domain2.com > > > > as completely separate, autonomous lists. > > Can't do it :( It'd have to be: > > list01 at domain1.com > list02 at domain2.com > > If Mailman had a little more (no offense, I just can't think of a better > term to use) intelligence in it, when you set up for virtual domains it'd > do things like create a directorynamespace like > /usr/local/mailman/lists/domain2.com/, etc. so you *could* put lists with > the same name in different domains... Well, technically, you could have two or more parallel installs of Mailman. There are quite a few folks who do that when they have just a few virtual domains to worry about. Also, you can alias list01 at domain2.com ==> list01-domain2 And let the (hidden) real name of the list be list01-domain2. Of course then you have to do a lot of editing of the web-pages to display the domain information that you want. That is the way I normally do it. The webpages all aliases properly as do the email addresses, so the end-user is non the wiser. And yes, it sure would be easier if mailman used only the virtual host information when setting up it's web pages and sending out its emails. Good Luck | https://mail.python.org/pipermail/mailman-users/2003-July/030743.html | CC-MAIN-2016-30 | refinedweb | 270 | 74.59 |
15938/how-to-connect-amazon-redshift-in-apache-spark
I'm trying to connect to Amazon Redshift via Spark, so I can combine data that i have on S3 with data on our RS cluster. I found some a documentation here for the capability of connecting to JDBC:
The load command seems fairly straightforward
df = sqlContext.load(source="jdbc", url="jdbc:postgresql:dbserver", dbtable="schema.tablename")
And I'm not entirely sure how to deal with the SPARK_CLASSPATH variable. I'm running Spark locally for now through an iPython notebook (as part of the Spark distribution). Where do I define that so that Spark loads it?
Any help or pointers to detailed tutorials are appreciated.
It turns out you just need a username/pwd to access Redshift in Spark, and it is done as follows (using the Python API):
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.load(source="jdbc",
url="jdbc:postgresql://host:port/dbserver?user=yourusername&password=secret",
dbtable="schema.table")
Try it out. Hope it was helpful.
I had faced the same problem earlier. ...READ MORE
You should make your backend functions to ...READ MORE
On the CopyTablesActivity, you could set a lateAfterTimeout attribute ...READ MORE
I suspect you are calling the context.done() function before s3.upload() has ...READ MORE
RDD is a fundamental data structure of ...READ MORE
You have to use the comparison operator ...READ MORE
Yes, you can go ahead and write ...READ MORE
You can save the RDD using saveAsObjectFile and saveAsTextFile method. ...READ MORE
Amazon Redshift replicates all your data within ...READ MORE
You need to set the proper privileges ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/15938/how-to-connect-amazon-redshift-in-apache-spark?show=15939 | CC-MAIN-2021-49 | refinedweb | 297 | 60.11 |
This is one of the most used React hook. By reading this post, you will be able to use it properly.
How to use it?
To use this hook, firstly you need to import it from React.
import React, { useState } from 'react';
Usage
In es6, we have something called array destruturing. If you know how to destructure an array. You can use this hook very easily. Let me show you an example.
const [ data, setData ] = useState('');
So in this array the first index is a state variable. And the second index is a function. Which has the power of changing the value of state variable. And under useState you can store any type of data which will store the value under the state variable. And you can change the value of the state variable with the function named 'setData'. You can give any name to the state variable and function.
I know you are a little bit confused. Wait let me show you a simple example.
import React, { useState } from "react"; const App = () => { const [data, setData] = useState("Hello world"); return ( <> {data} // now the value will be hello world and after clicking on the button, the value will be I am a web developer <button onClick={() => { setData("I am a web developer"); }} > Change value </button> </> ); }; export default App;
Look at the code I have written in the top. Firstly I imported React and useState from 'react'. Then I have stored 'Hello world' under a state variable named ' data '. Then I am changing the value of the state variable onclicking on the button by just calling a function which is changing the value of our state varialbe. So that's how it is working.
You can store anything under this useState. You can store a string, object, array, number, boolean whatever. Just write them under the useState. Then you can use them by just calling the state variable . Then you can also change the variable with the function.
import React, { useState } from "react"; const App = () => { const [data, setData] = useState({ name: "Ratul", age: 16 }); return ( <> Name is : {data.name} and age is: {data.age} <button onClick={() => { setData({ name: "Jack", age: 21 }); }} > Chnage Value </button> </> ); }; export default App;
In this code, I just stored an object under the state variable. I am rendering them in my body. And onclicking on the button I am changing values under the object. Which is stored under the state variable. I am changing them very easily by just using the state function. Easy! Now we will create a mini project.
Mini Project
Try to create a project which will have two buttons. One of the button will increment the value of 'x' and another one will decrement the value of 'x'. And the value can't be under 0. It means the lowest value will be 0. So try it yourself using useState() hook.
import React, { useState } from "react"; const App = () => { const [data, setData] = useState(0); return ( <> <button onClick={() => { setData(data + 1); }} > Increament </button> {data} <button onClick={() => { setData(data - 1); if (data === 0) { setData(data); } }} > Decrement </button> </> ); }; export default App;
So I first of all I created a state which has a state variable and a function. Then I have stored a number 0. Which is our initial value of our state variable. And we can update it with the function. Now our condition was one of the button will increment the value of our number. So I just called a function onclicking on the increment button. Which is just incrementing the value of our state variable by 1. And another condition was, a button which will decrement the value of our number by 1 and the value can't be less than 0. So the numbers can't be negative. So we are just decrementing the value of our state variable by 1. And putting a condition that, if the value becomes 0 it will be remain 0. It will not be changed. And that's all.
Thanks for reading that post. Hope you enjoyed that. If you found any confusion or wrong details, please let me know in the discussions. And make sure you follow me to recieve all the informational posts just like that.
Discussion (7)
Simple and awesome intro
Great explanation. Do you know what it means when you switch to useState (true)? For example: const [button, setButton] = useState (true); Could you explain me.
You can do conditional rendering by putting a boolan value to your state variable. This video might help you. :)
can u do useEffect() 🥺🥺🥺
here it is -> dev.to/ratuloss/react-useeffect-ho...
This is the simplest way I have seen so far.
Thanks man
Really helps!!
My pleasure | https://dev.to/ratuloss/learn-usestate-in-5minutes-4j7h | CC-MAIN-2022-21 | refinedweb | 778 | 76.32 |
There are many things that have changed in version 2 of the ASP.NET Coreframework. There have been a lot of improvements in some of its supporting technologies as well. Now is a great time to give it a try, as its code has been stabilized and the pace of change has settled down a bit.
There were significant differences between the original release candidate and version 1 of ASP.NET Core and further alterations between version 1 and version 2. Some of these changes have been controversial, particularly ones related to tooling; however, the scope of .NET Core has grown massively, and this is a good thing.
One of the high-profile differences between version 1 and version 2 is the change (some would say regression) from the new JavaScript Object Notation (JSON)-based project format back to the Extensible Markup Language (XML)-based
csproj format. However, it is a simplified and stripped-down version, compared to the format used in the original .NET Framework.
There has been a move toward standardization between the different .NET Frameworks, and .NET Core 2 has a much larger API surface as a result. The interface specification, known as .NET Standard 2, covers the intersection between .NET Core, the .NET Framework, and Xamarin. There is also an effort to standardize Extensible Application Markup Language (XAML) into the XAML standard, which will work across Universal Windows Platform (UWP) and Xamarin.Forms apps.
C# and .NET can be used on a huge range of diverse platforms and in a large number of different use cases, from server-side web applications to mobile apps and even games (using game engines such as Unity 3D). In this book, we'll focus on web application programming and, in particular, on general ways to make web apps perform well. This means that we will also cover client-side web browser scripting with JavaScript and the performance implications involved.
This book is not just about C# and ASP.NET. It takes a holistic approach to performance and aims to educate you about a wide range of relevant topics. We don't have the space to take a deep dive into everything, so the idea here is to help you discover some useful tools, technologies, and techniques.
In this chapter, we will go through the changes between version 1 and version 2 of both .NET Core and ASP.NET Core. We will also look at some new features of the C# language. There have been many useful additions and a plethora of performance improvements too.
In this chapter, we will cover the following topics:
- What's new in .NET Core 2.0
- What's new in ASP.NET Core 2.0
- Performance improvements
- .NET Standard 2.0
- New C# 6.0 features
- New C# 7.0 features
- JavaScript considerations
There are two main products in the Core family. The first is .NET Core, which is a low-level framework that provides basic libraries. It can be used to write console applications, and it is also the foundation for higher level application frameworks.
The second is ASP.NET Core, which is a framework for building web applications that run on a server and service clients (usually web browsers). This was originally the only workload for .NET Core until it grew in scope to handle a more diverse range of scenarios.
We'll cover the differences in the newer versions separately for each of these frameworks. The changes in .NET Core will also apply to ASP.NET Core, unless you are running it on top of the .NET Framework, version 4.
The main focus of .NET Core 2 is the huge increase in scope. There are more than double the number of APIs included, and it supports .NET Standard 2 (covered later in this chapter). You can also reference .NET Framework assemblies with no recompilation required. This should just work as long as the assemblies only use APIs that have been implemented in .NET Core.
This means that more NuGet packages will work with .NET Core. Finding whether your favorite library was supported or not was always a challenge in the previous version. The author set up a repository listing package compatibility to help with this. You can find the ASP.NET Core Library and Framework Support (ANCLAFS) list at and. If you want to make a change, then please send a pull request. Hopefully, in future, all the packages will support Core, and this list will no longer be required.
Â
There is now support in .NET Core for Visual Basic, and more Linux distributions. You can also perform live unit testing with Visual Studio 2017 (Enterprise Edition only), much like the old NCrunch extension. We'll talk more about tooling in Chapter 3, Setting Up Your Environment, where we will also cover containerization.
Some of the more interesting changes in .NET Core 2.0 are performance improvements over the original .NET Framework. There have been tweaks to the implementations of many framework data structures. Some of the classes and methods that have seen speedy improvements or memory reduction include:
List<T>
Queue<T>
SortedSet<T>
ConcurrentQueue<T>
Lazy<T>
Enumerable.Concat()
Enumerable.OrderBy()
Enumerable.ToList()
Enumerable.ToArray()
DeflateStream
SHA256
BigInteger
BinaryFormatter
Regex
WebUtility.UrlDecode()
Encoding.UTF8.GetBytes()
Enum.Parse()
DateTime.ToString()
String.IndexOf()
String.StartsWith()
FileStream
Socket
NetworkStream
SslStream
ThreadPool
SpinLock
We won't go into specific benchmarks here because benchmarking is hard and the improvements you see will clearly depend on your usage. The thing to take away is that lots of work has been done to increase the performance of .NET Core. Many of these changes have come from the community, which shows one of the benefits of open source development. Some of these advances will probably work their way back to a future version of the regular .NET Framework too.
There have been improvements made to the RyuJIT Just In Time compiler for .NET Core 2 as well. As just one example,
finally blocks are now almost as efficient as not using exception handling at all, which is beneficial in a normal situation where no exceptions are thrown. You now have no excuses not to liberally use
try and
using blocks, for example, by the
checked arithmetic to avoid integer overflows.
ASP.NET Core 2 takes advantage of all the improvements to .NET Core 2, if that is what you choose to run it on. It will also run on .NET Framework 4.7, but it's best to run it on .NET Core if you can. With the increase in scope and support of .NET Core 2, this should be less of a problem than it was previously.
.NET Core 2 includes a new metapackage, so you only need to reference one NuGet item to get all the things. However, it is still composed of individual packages, if you want to pick and choose. They haven't reverted to the bad old days of having one huge
System.Web assembly. A new package-trimming feature ensures that if you don't use a package, then its binaries won't be included in your deployment, even if you use a metapackage to reference it.
There is also a sensible default for setting up a web host configuration. You don't need to add logging, Kestrel, and IIS individually anymore. Logging has also gotten simpler and, as it is built in, you have no excuses not to use it from the start.
A new feature is support for controllerless Razor Pages. This is exactly what it sounds like, and it allows you to write pages with just a Razor template. It is similar to the Web Pages product, not to be confused with Web Forms. There is talk of Web Forms making a comeback; if this happens, then hopefully, the abstraction will be thought out more and it won't carry so much state around with it.
There is a new authentication model that makes better use of dependency injection. ASP.NET Core Identity allows you to use OpenID and OAuth 2 and get access tokens for your APIs. You may also want to investigate the Identity Server 4 project that provides a lot of similar functionality.
A nice time saver is that you no longer need to emit anti-forgery tokens in forms (to prevent Cross-Site Request Forgery) with attributes to validate them on post methods. This is all done automatically for you, which should prevent you from forgetting to do this and leaving a security vulnerability.
There have been additional increases to performance in ASP.NET Core that are not related to the improvements in .NET Core, which also help. The start-up time has been reduced by shipping binaries that have already been through the Just In Time compilation process.
Although not a new feature in ASP.NET Core 2, output caching is now available. In 1.0, only response caching was included, which simply sets the correct HTTP headers. In 1.1, an in-memory cache was added, and today, you can use local memory or a distributed cache kept in SQL Server or Redis.
Standards are important; that's why we have so many of them. The latest version of the .NET Standard is version 2, and .NET Core 2 implements this. A good way to think about .NET Standard is it's an interface that a class would implement. The interface will define an abstract API, but the concrete implementation of this API will be left to the classes that inherit from it. Another way to think about this is like the HTML5 standard that is supported by different web browsers.
Version 2 of the .NET Standard was defined by looking at the intersection of the .NET Framework and Mono. This standard was then implemented by .NET Core 2, which is why it contains more APIs than version 1. Version 4.6.1 of the .NET Framework also implements .NET Standard 2, and there is work to support the latest versions of the .NET Framework, UWP, and Xamarin (including Xamarin.Forms).
There is also the new XAML Standard that aims to find a common ground between Xamarin.Forms and UWP. Hopefully, it will include Windows Presentation Foundation (WPF) in future. As this is a book about web applications, we won't go into XAML and native user interfaces.
If you create libraries and packages that use these standards, then they will work on all the platforms that support them. As a developer who simply consumes libraries, you don't need to worry about these standards. It just means that you are more likely to be able to use the packages that you want on the platforms you are working with.
It's not just the frameworks and libraries that have been worked on. The underlying language also had some nice new features added. We will focus on C# here as it is the most popular language for the Common Language Runtime (CLR). Other options include Visual Basic and the functional programming language F#.
C# is a great language to work with, especially when compared to a language such as JavaScript. Although JavaScript is great for many reasons (such as its ubiquity and the number of frameworks available), the elegance and design of the language is not one of them. We will cover JavaScript later in the book.
Many of these new features are just syntactic sugar, which means they don't add any new functionality. They simply provide a more succinct and easier-to-read way of writing code that does the same thing.
Although the latest version of C# is 7, there are some very handy features in C# 6 that often go underused. Also, some of the new additions in 7 are improvements on features added in 6 and would not make much sense without any context. We will quickly cover a few features of C# 6 here, in case you are unaware of how useful they can be.
String interpolation is a more elegant and easier-to-work-with version of the familiar string format method. Instead of supplying the arguments to embed in the string placeholders separately, you can now embed them directly in the string. This is far more readable and less error-prone.
Let's demonstrate this with an example. Consider the following code that embeds an exception in a string:
catch (Exception e) { Console.WriteLine("Oh dear, oh dear! {0}", e); }
This embeds the first (and in this case only) object in the string at the position marked by zero. It may seem simple, but it quickly gets complex if you have many objects and want to add another at the start. You then have to correctly renumber all the placeholders.
Instead, you can now prefix the string with a dollar character and embed the object directly in it. This is shown in the following code that behaves the same as the previous example:
catch (Exception e) { Console.WriteLine($"Oh dear, oh dear! {e}"); }
The
ToString() method on an exception outputs all the required information, including the name, message, stack trace, and any inner exceptions. There is no need to deconstruct it manually; you may even miss things if you do.
You can also use the same format strings as you are used to. Consider the following code that formats a date in a custom manner:
Console.WriteLine($"Starting at: {DateTimeOffset.UtcNow:yyyy/MM/dd HH:mm:ss}");
When this feature was being built, the syntax was slightly different. So, be wary of any old blog posts or documentation that may not be correct.
The null conditional operator is a way of simplifying null checks. You can now place an inline check for null rather than use an if statement or ternary operator. This makes it easier to use in more places and will hopefully help you avoid the dreaded null reference exception.
You can avoid doing a manual null check, as in the following code:
int? length = (null == bytes) ? null : (int?)bytes.Length;
This can now be simplified to the following statement by adding a question mark:
int? length = bytes?.Length;
You can filter exceptions more easily with the
when keyword. You no longer need to catch every type of exception that you are interested in and then filter it manually inside the
catch block. This is a feature that was already present in VB and F#, so it's nice that C# has finally caught up.
There are some small benefits to this approach. For example, if your filter is not matched, then the exception will still be caught by other catch blocks in the same
try statement. You also don't need to remember to rethrow the exception to avoid it from being swallowed. This helps with debugging, as Visual Studio will no longer break as it would when you use
throw.
For example, you could check to see whether there is a message in the exception and handle it differently, as shown here:
catch (Exception e) when (e?.Message?.Length > 0)
When this feature was in development, a different keyword (
if) was used. So be careful of any old information online.
One thing to keep in mind is that relying on a particular exception message is fragile. If your application is localized, then the message may be in a different language to what you expect. This holds true outside of exception filtering too.
Another small improvement is that you can use the
await keyword inside
catch and
finally blocks. This was not initially allowed when this incredibly useful feature was added to C# 5. There is not a lot more to say about this. The implementation is complex, but you don't need to worry about this unless you're interested in the internals. From a developer's point of view, it just works, as in this simple example:
catch (Exception e) when (e?.Message?.Length > 0) { await Task.Delay(200); }
This feature has been improved in C# 7, so read on. You will see
async and
await used a lot throughout this book. Asynchronous programming is a great way of improving performance and not just from within your C# code.
Expression bodies allow you to assign an expression to a method or getter property using the lambda arrow operator (
=>), which you may be familiar with from fluent LINQ syntax. You no longer need to provide a full statement or method signature and body. This feature has also been improved in C# 7, so see the examples in the next section.
For example, a getter property can be implemented like so:
public static string Text => $"Today: {DateTime.Now:o}";
A method can be written in a similar way, such as the following example:
private byte[] GetBytes(string text) => Encoding.UTF8.GetBytes(text);
The most recent version of the C# language is 7, and there are yet more improvements to readability and ease of use. We'll cover a subset of the more interesting changes here.
There are a couple of minor additional capabilities and readability enhancements when specifying literal values in code. You can specify binary literals, which means you don't have to work out how to represent them using a different base anymore. You can also put underscores anywhere within a literal to make it easier to read the number. The underscores are ignored but allow you to separate digits into convention groupings. This is particularly well suited to the new binary literal as it can be very verbose, listing out all those zeros and ones.
Take the following example that uses the new
0b prefix to specify a binary literal that will be rendered as an integer in a string:
Console.WriteLine($"Binary solo! {0b0000001_00000011_000000111_00001111}");
You can do this with other bases too, such as this integer, which is formatted to use a thousands separator:
Console.WriteLine($"Over {9_000:#,0}!"); // Prints "Over 9,000!"
One of the big new features in C# 7 is support for tuples. Tuples are groups of values, and you can now return them directly from method calls. You are no longer restricted to returning a single value. Previously, you could work around this limitation in a few suboptimal ways, including creating a custom complex object to return, perhaps with a Plain Old C# Object (POCO) or Data Transfer Object (DTO), which are the same thing. You could have also passed in a reference using the
ref or
out keyword, which are still not great although there are improvements to the syntax.
There was
System.Tuple in C# 6, but it wasn't ideal. It was a framework feature, rather than a language feature, and the items were only numbered and not named. With C# 7 tuples, you can name the objects and they make a great alternative to anonymous types, particularly in LINQ query expression lambda functions. As an example, if you only want to work on a subset of the data available, perhaps when filtering a database table with an O/RM, such as Entity Framework, then you could use a tuple for this.
The following example returns a tuple from a method. You may need to add the
System.ValueTuple NuGet package for this to work:
private static (int one, string two, DateTime three) GetTuple() { return (one: 1, two: "too", three: DateTime.UtcNow); }
You can also use tuples in string interpolation and all the values will be rendered, as shown here:
Console.WriteLine($"Tuple = {GetTuple()}");
If you want to pass parameters to a method for modification, then you always need to declare them first. This is no longer necessary, and you can simply declare the variables at the point you pass them in. You can also declare a variable to be discarded, using an underscore. This is particularly useful if you don't want to use the returned value, for example, in some of the try parse methods of the native framework data types.
Here, we parse a date without declaring the
dt variable first:
DateTime.TryParse("2017-08-09", out var dt);
In this example, we test for an integer, but we don't care what it is:
var isInt = int.TryParse("w00t", out _);
You can now return values by reference from a method as well as consume them. This is a little like working with pointers in C but safer. For example, you can only return references that were passed to the method, and you can't modify references to point to a different location in memory. This is a very specialist feature, but in certain niche situations, it can dramatically improve performance.
Consider the following method:
private static ref string GetFirstRef(ref string[] texts) { if (texts?.Length > 0) { return ref texts[0]; } throw new ArgumentOutOfRangeException(); }
You could call this method like so, and the second console output line would appear differently (
one instead of
1):
var strings = new string[] { "1", "2" }; ref var first = ref GetFirstRef(ref strings); Console.WriteLine($"{strings?[0]}"); // 1 first = "one"; Console.WriteLine($"{strings?[0]}"); // one
The other big addition is you can now match patterns in C# 7 using the
is keyword. This simplifies testing for null and matching against types, among other things. It also lets you easily use the cast value. This is a simpler alternative to using full polymorphism (where a derived class can be treated as a base class and override methods). However, if you control the code base and are able to make use of polymorphism properly, then you should still do this and follow good object-oriented programming (OOP) principles.
In the following example, pattern matching is used to parse the type and value of an unknown object:
private static int PatternMatch(object obj) { if (obj is null) { return 0; } if (obj is int i) { return i++; } if (obj is DateTime d || (obj is string str && DateTime.TryParse(str, out d))) { return d.DayOfYear; } return -1; }
You can also use pattern matching in the case of a
switch statement, and you can switch on non-primitive types, such as custom objects.
Expression bodies are expanded from the offering in C# 6, and you can now use them in more places, for example, as object constructors and property setters. Here, we extend our previous example to include the setting up of the value on the property we were previously just reading:
private static string text; public static string Text { get => text ?? $"Today: {DateTime.Now:r}"; set => text = value; }
There have been some small improvements to what
async methods can return and, although small, they could offer big performance gains in certain situations. You no longer have to return a task which can be beneficial if the value is already available. This can reduce the overhead of using
async methods and creating a task object.
You can't write a book on web applications without covering JavaScript. It is everywhere.
If you write a web app that does a full page load on every request and it's not a simple content site, then it will feel slow. However, users expect responsiveness.
If you are a backend developer, then you may think that you don't have to worry about this. However, if you are building an API, then you may want to make it easy to consume with JavaScript, and you will need to make sure that your JSON is correctly and quickly serialized.
Even if you are building a Single-Page Application (SPA) in JavaScript (or TypeScript) that runs in the browser, the server can still play a key role. You can use SPA services to run Angular or React on the server and generate the initial output. This can increase performance as the browser has something to render immediately. For example, there is a project called React.NET that integrates React with ASP.NET, and it supports ASP.NET Core.
If you have been struggling to keep up with the latest developments in the .NET world, then JavaScript is on another level. There seems to be something new almost every week, and this can lead to framework fatigue and a paradox of choice. There is so much to choose from that you don't know what to pick.
We will cover some of the more modern practices later in the book and show the improved performance that they can bring. We'll look at service workers and show how they can be used to move work into the background of a browser to make it feel more responsive to the user.
In this introductory chapter, you saw a brief but high-level summary of what has changed in .NET Core 2 and ASP.NET Core 2, compared to previous versions. Now, you are also aware of .NET Standard 2 and what it is for.
We showed examples of some of the new features available in C# 6 and C# 7. These can be very useful in letting you write more with less and in making your code more readable and easier to maintain.
Finally, we touched upon JavaScript as it is ubiquitous, and this is a book about web apps after all. Moreover, this is a book on general web application performance improvements, and many of the lessons are applicable regardless of the language or framework used.
In the next chapter, you'll see why performance matters and learn how the new .NET Core stack fits together. We will also see the tools that are available and learn about hardware performance with a graph. | https://www.packtpub.com/product/asp-net-core-2-high-performance-second-edition/9781788399760 | CC-MAIN-2020-40 | refinedweb | 4,249 | 65.22 |
One of the coolest capabilities of Apache OpenWhisk is the ability to develop functions with Docker. This allows you to develop functions in languages which are not supported out of the box by the platform.
I’ve open sourced a sample that shows how to develop and debug functions with TypeScript. I’m a big fan of TypeScript since it adds a type system to JavaScript which makes me more productive.
Get the code from GitHub.
Here is a very simple TypeScript function. The /run endpoint is where the actual implementation of the function goes.
import * as express from 'express'; import * as bodyParser from 'body-parser'; const app = express() app.use(bodyParser.json()); app.post('/run', (req, res) => { var payload = (req.body || {}).value; var result = { "result": { "echo": payload } } res.status(200).json(result); }); app.post('/init', function (req, res) { try { res.status(200).send(); } catch (e) { res.status(500).send(); } }); app.listen(8080, () => console.log('Listening on port 8080'))
Based on this recipe I’ve also documented how you can debug TypeScript code running in a Docker container from Visual Studio Code. In order to debug TypeScript code, the same mechanism is used which I explain in this video. A volume is used to share the files between the IDE and the container and VS Code attaches a remote debugger to the Docker container. The functions can be changed in the IDE without having to restart the container. nodemon restarts the Node application in the container automatically when files change.
This is a screenshot of the debugger in VS Code.
If you want to try out OpenWhisk in the cloud, you can get an account on the IBM Cloud. | http://heidloff.net/article/serverless-functions-typescript-openwhisk | CC-MAIN-2021-04 | refinedweb | 279 | 66.44 |
How to code like playing LEGO™
Guillaume Martigny
Updated on
・3 min read
Modularity is a big trend and I'm not the first to hop on this train. Today, I'm going to show you how easy you can build a multi-module app with vanilla Javascript and some awesome tools.
Recipe
Ingredients
First of all, I'm going to assume you know a few things beforehand :
- Object Oriented Programming
- How to write JS
- Basics of NPM
Steps
The ground
Lets start with an empty directory for your project (we'll name it unicorn) and initialize it
npm init
and create the main file
index.js with an old-school JS class
function Unicorn(name) { this.name = name; } Unicorn.prototype = { shine: function() { // All kind of good stuff here 🦄 } } var dazzle = new Unicorn("Dazzle"); dazzle.shine();
Decoupling
Now image that you want to use the
Unicorn class in another project, or just open-source it to the Humanity. You could create another directory with another repo, but let's be smarter. The
Unicorn class is so linked to the Unicorn project that we'll use NPM scoped package name for clarity.
All that reduce
index.js to 3 lines of codes.
import Unicorn from "@unicorn/model"; var dazzle = new Unicorn("Dazzle"); dazzle.shine();
Next, we create a sub-directory for our module.
mkdir packages/model cd packages/model npm init # and name it @unicorn/model
Which will have an
index.js too with the class inside it. Since we left the plain browser JS with import/export statement, why not use the beautiful ES6 class syntax.
export default class Unicorn { constructor(name) { this.name = name; } shine () { // All kind of good stuff here 🦄 } }
At that point, the
import statement is targeted at a package name that should be installed under the
node_modules sub-directory. We could use a relative path like
import Unicorn from "./packages/model/index.js";. What could be better is to create a link between packages.
Thankfully, npm can do that for you with the link command. Here's what it looks in our case.
cd packages/model npm link cd .. npm link @unicorn/model
Wrapping
Ok nice one, but now I can't use it in my browser, you dumbo !
First, how are you calling me ?
Then yeah, I know, for now it's all experimental syntax and stuff, but there's tools to handle it for you. I like to use webpack with babel, of course, it's not the only solution.
Adding some package on project's root.
npm install --save-dev babel-loader babel-core babel-preset-env webpack
The whole webpack configuration could fill another article, so I'll just show one that work. Set a new file called
webpack.config.js with some instructions inside.
module.exports = { entry: "./index.js", // Main file to read module: { rules: [{ test: /\.js$/, // For all file ending with ".js" use: { loader: "babel-loader", // Use babel options: { presets: ["babel-preset-env"], }, }, }], }, output: { filename: "dist/unicorn.js", // Output the result in another file library: "Unicorn", // Under "Unicorn" namespace libraryTarget: "this", libraryExport: "default", }, };
Then, if you run
npx webpack it will build all your sources into one file usable by plain web browser.
Managing
You can now create lots of sub-modules and wrap them all in one file. You can even have sub-sub-modules and so on. Just put them all in the
modules directory.
As your project grows, it'll be harder and harder to manage all this menagerie.
That where lerna come into play.
npm install -save-dev lerna
Think of it as a
npm link on steroids.
Check out the full documentation on the project page, but here's a few useful commands.
npx lerna clean # Remove all node_modules directories npx lerna bootstrap # Install remote dependencies and link local ones npx lerna add package # Install a package to all sub-modules npx lerna add package --scope=sub-module # Install a package to a specific sub-module npx lerna publish # Bump, tag and publish all your modules over NPM
Enjoy
You should now be on track to write the most elegant project possible. I'm counting on you ;)
If you want more in-depth examples, I'm currently building
yet another JS drawing library using the very same techniques.
Next time, we'll talk about automated tests and how to catch lots of bugs and ensure consistency over time.
Software development is a social profession
The software community has lost itself in a maze of frameworks and languages
A personal use I found for JReply's soon to come _.define() feature.
PDS OWNER CALIN (Calin Baenen) -
Functional programming for your everyday javascript: The power of map
Heiker -
Discussion: What is The Best Hosting Out There? And What is Your Favorite?
Yuli -
Just a note though: these are not vanilla JS modules, which it a bit confusing regarding the intro of the article.
Thanks for your comment. The code is pure ES6 syntax (no post-process, no frame-work). The use of webpack + babel is just there to make it work on browser. It feels vanilla to me. How would you have done ?
The thing is, this would not work, either in Node, you would need to name it
.jsmand run it in Node 8 LTS, nor in most browsers.
Node use another syntax to uses modules, and I think only Chrome supports import/export modules.
So, for now, you need to compile now. I think that's what Elarcis means. | https://dev.to/gmartigny/how-to-code-like-playing-lego--40fk | CC-MAIN-2020-10 | refinedweb | 915 | 65.42 |
IRC log of tagmem on 2010-09-02
Timestamps are in UTC.
17:02:36 [RRSAgent]
RRSAgent has joined #tagmem
17:02:36 [RRSAgent]
logging to
17:02:38 [trackbot]
RRSAgent, make logs public
17:02:38 [Zakim]
Zakim has joined #tagmem
17:02:40 [trackbot]
Zakim, this will be TAG
17:02:40 [Zakim]
ok, trackbot, I see TAG_Weekly()1:00PM already started
17:02:41 [trackbot]
Meeting: Technical Architecture Group Teleconference
17:02:41 [trackbot]
Date: 02 September 2010
17:02:48 [Zakim]
+Noah_Mendelsohn
17:02:57 [Yves]
Agenda:
17:03:06 [johnk]
johnk has joined #tagmem
17:03:45 [noah]
zakim, who is here?
17:03:46 [Zakim]
On the phone I see Masinter, DKA, Noah_Mendelsohn
17:03:52 [Zakim]
On IRC I see johnk, Zakim, RRSAgent, DKA, masinter, noah, timbl, Yves, trackbot
17:03:54 [DKA]
Scribe: Dan
17:03:57 [DKA]
ScribeNick: DKA
17:04:00 [Zakim]
+Yves
17:04:05 [ht]
ht has joined #tagmem
17:04:12 [ht]
zakim, code?
17:04:21 [Zakim]
the conference code is 0824 (tel:+1.617.761.6200 tel:+33.4.26.46.79.03 tel:+44.203.318.0479), ht
17:04:32 [noah]
zakim, who is here?
17:04:45 [Zakim]
On the phone I see Masinter, DKA, Noah_Mendelsohn, Yves
17:04:53 [Zakim]
On IRC I see ht, johnk, Zakim, RRSAgent, DKA, masinter, noah, timbl, Yves, trackbot
17:05:58 [Zakim]
+[IPcaller]
17:06:10 [ht]
zakim, [ is me
17:06:10 [Zakim]
+ht; got it
17:06:20 [DKA]
Chair: Noah
17:06:38 [Zakim]
+John_Kemp
17:07:33 [DKA]
DKA has joined #tagmem
17:07:35 [DKA]
I'm back.
17:08:05 [DKA]
Resolved: minutes of 19th approved
17:08:39 [DKA]
Noah: Call for next week is at risk depending on getting agenda / chairing set up...
17:08:51 [jrees]
jrees has joined #tagmem
17:08:57 [Zakim]
+ +1.617.538.aaaa
17:09:49 [DKA]
zakim, aaaa is johnk
17:09:49 [Zakim]
+johnk; got it
17:10:05 [DKA]
zakim, who is here?
17:10:05 [Zakim]
On the phone I see Masinter, DKA, Noah_Mendelsohn, Yves, ht, John_Kemp, johnk
17:10:07 [Zakim]
On IRC I see jrees, DKA, ht, johnk, Zakim, RRSAgent, masinter, noah, timbl, Yves, trackbot
17:10:17 [DKA]
zakim, johnk is actually jrees
17:10:17 [Zakim]
I don't understand 'johnk is actually jrees', DKA
17:10:20 [DKA]
hrm
17:10:35 [ht]
q+ to ask for a session on 3987
17:10:39 [DKA]
Noah: on Webapps - I want to make some progress in September. Any thoughts?
17:10:40 [noah]
ack next
17:10:41 [Zakim]
ht, you wanted to ask for a session on 3987
17:10:55 [noah]
zakim, who is here?
17:10:55 [Zakim]
On the phone I see Masinter, DKA, Noah_Mendelsohn, Yves, ht, John_Kemp, johnk
17:10:57 [Zakim]
On IRC I see jrees, DKA, ht, johnk, Zakim, RRSAgent, masinter, noah, timbl, Yves, trackbot
17:11:09 [DKA]
Henry: I support face-2-faces...
17:12:00 [DKA]
Larry: In the last week, the IETF area directors have got together with the wg chairs to push the work forward.
17:12:46 [jar]
jar has joined #tagmem
17:12:53 [noah]
Let me state a bit more forcefully on WebApps: I don't think our level of investment and rate of progress has been consistent with our agreement that significant writing on WebApps would be one of our major goals for the year.
17:13:02 [jar]
zakim, who is on the call
17:13:02 [Zakim]
I don't understand 'who is on the call', jar
17:13:03 [DKA]
Henry: I feel we can't usefully respond to Roy without knowing if your idea for re-architecting the situation has support.
17:13:09 [Zakim]
-jrees
17:13:20 [noah]
I intend to work with TAG members in Sept to see whether serious writing can be done in time for discussion at the F2F.
17:13:33 [DKA]
Larry: The problem is: there's some work that needs to get done to resolve the differences between what the specs currently say and what really happens and what should happens...
17:13:57 [DKA]
Larry: also there is a venue discussion (w3c-whatwg-ieft).
17:14:01 [Zakim]
+jrees
17:14:08 [DKA]
... or the unicode consortium...
17:14:10 [jar]
zakim, mute jrees
17:14:10 [Zakim]
jrees should now be muted
17:14:17 [ht]
LM, last spring during an MIT TAG meeting we walked together to the pub, and you described your ideas for reworking the whole idea of URI grammar
17:14:18 [Yves]
s/ieft/ietf
17:14:46 [DKA]
Larry: the only rational way of making progress is to start doing some of the work...
17:15:31 [noah]
ACTION-409?
17:15:31 [trackbot]
ACTION-409 -- Henry S. Thompson to run Larry's plan for closing IRIEverywhere by the XML Core WG -- due 2010-06-22 -- PENDINGREVIEW
17:15:31 [trackbot]
17:15:42 [noah]
ACTION-410?
17:15:42 [trackbot]
ACTION-410 -- Larry Masinter to let the TAG know whether and when the IRIEverywhere plan in HTML WG went as planned -- due 2010-11-01 -- OPEN
17:15:42 [trackbot]
17:16:30 [noah]
ACTION-448?
17:16:30 [trackbot]
ACTION-448 -- Noah Mendelsohn to schedule discussion of
on 26 August (followup to 24 June and 12 August discussion) -- due 2010-09-28 -- OPEN
17:16:30 [trackbot]
17:16:34 [DKA].
17:16:55 [DKA]
Larry: Roy's point was slightly different.
17:16:56 [noah]
is about URL processing in HTML
17:17:51 [DKA]
Noah: Happy to see people pushing on substantive work leading up to the f2f. I have put action 409, 410 and 448 into the IRC - Henry can you comment?
17:17:54 [Zakim]
-John_Kemp
17:18:01 [noah]
close ACTION-409
17:18:01 [DKA]
Henry: I think 409 is done.
17:18:01 [trackbot]
ACTION-409 Run Larry's plan for closing IRIEverywhere by the XML Core WG closed
17:18:06 [DKA]
Noah: any objections?
17:18:09 [DKA]
[none heard]
17:18:10 [masinter]
The IETF IRI working group chairs have indicated that they're going to start going through issues .... I hope that will result in making some progress
17:18:14 [noah]
HT: They have replied to us
17:18:22 [Zakim]
+John_Kemp
17:18:41 [DKA]
action-410?
17:18:41 [trackbot]
ACTION-410 -- Larry Masinter to let the TAG know whether and when the IRIEverywhere plan in HTML WG went as planned -- due 2010-11-01 -- OPEN
17:18:41 [trackbot]
17:19:28 [DKA]
Noah: Do we need any new actions?
17:19:44 [DKA]
Henry: No. I want a session at the f2f to hear from Larry about this.
17:20:07 [DKA]
... [of IRI bis status and the status of larry's proposal]
17:20:19 [noah]
ACTION: Noah to schedule F2F discussion of IRIbis status and Larry's proposal due: 2010-10-05
17:20:19 [trackbot]
Created ACTION-459 - Schedule F2F discussion of IRIbis status and Larry's proposal due: 2010-10-05 [on Noah Mendelsohn - due 2010-09-09].
17:20:39 [noah]
zakim, who is here?
17:20:39 [Zakim]
On the phone I see Masinter, DKA, Noah_Mendelsohn, Yves, ht, jrees (muted), John_Kemp
17:20:41 [Zakim]
On IRC I see jar, jrees, DKA, ht, johnk, Zakim, RRSAgent, masinter, noah, timbl, Yves, trackbot
17:20:41 [DKA]
Noah: anything else on oct f2f?
17:20:48 [Zakim]
+TimBL
17:20:55 [DKA]
Topic: Privacy Workshop
17:21:38 [DKA]
Privacy workshop report:
17:22:05 [noah]
DKA: See workshop report link
17:22:27 [noah]
DKA: Workshop headlines: 1) very well attended, some reported it as most comprehensive in 5 years.
17:23:07 [noah]
DKA: Participation from academic groups, good representation from IETF
17:23:41 [noah]
DKA: We discussed need for better coordination between the IAB and the TAG...look for better progress now that summer is over
17:23:41 [Zakim]
-jrees
17:24:02 [Zakim]
+jrees
17:24:32 [noah]
DKA: There was lots of focus on device APIs. What's been learned from geolocation api deployment. Should privacy information be carried along with device data (e.g. location) in the context of an API call. Also a UI dimension.
17:24:59 [jar]
zakim, unmute jrees
17:24:59 [Zakim]
jrees was not muted, jar
17:25:17 [noah]
DKA: Privacy questions may have to be asked at time that user is asked for permission to collect data.
17:26:40 [noah]
DKA: We also discussed "privacy rulesets", presented by ????. Creative commons-like model that allows users to pick from standardized options for privacy settings.
17:27:13 [noah]
DKA: Can link a license for that piece of data, the link being carried along in user agent and onward into the network. Can indicate preference for allowing 3rd party access, etc.
17:29:09 [noah]
DKA: In summary, it was a very good opportunity for discussion. We probably achieved somewhat less consensus than I hoped, but the topics discussed were very pertinent for the DAP F2F that followed immediately after the workshop. Chairs reported it was valuable.
17:29:36 [DKA_]
DKA_ has joined #tagmem
17:30:11 [noah]
DKA: Next steps are that we need to figure out coordination between TAG and Internet Architecture Board. I could take an action.
17:30:50 [noah]
ACTION: Appelquist to coordinate with IAB regarding next steps on privacy policy
17:30:50 [trackbot]
Created ACTION-460 - Coordinate with IAB regarding next steps on privacy policy [on Daniel Appelquist - due 2010-09-09].
17:31:17 [noah]
ACTION-460 due 2010-09-14
17:31:17 [trackbot]
ACTION-460 Coordinate with IAB regarding next steps on privacy policy due date now 2010-09-14
17:31:58 [DKA_]
17:32:11 [DKA_]
ScribeNick: DKA_
17:33:00 [DKA_]
Noah: Are you [dan] or is someone else willing to take on a writing assignment about "what the tag wants to tell the world about API design issues for webapps or something smaller like policy..."
17:34:24 [noah]
. ACTION: Appelquist to draft "finding" on Web Apps API design
17:34:36 [noah]
ACTION: Appelquist to draft "finding" on Web Apps API design
17:34:36 [trackbot]
Created ACTION-461 - Draft "finding" on Web Apps API design [on Daniel Appelquist - due 2010-09-09].
17:34:50 [noah]
ACTION-461 due: 2010-10-11
17:35:48 [Zakim]
-John_Kemp
17:37:10 [DKA_]
Tim: [+1 to Noah's comments on producing substantive findings]
17:37:31 [DKA_]
Noah: Roy's finding on authoritative metadata is a good example of a good finding.
17:37:48 [DKA_]
Topic: Developer Workshop / Camp at F2f
17:38:08 [DKA_]
Noah: I thought the idea was to have it at our f2f on the west coast...
17:38:16 [DKA_]
Noah: I've had some reservations...
17:39:02 [noah]
DKA: I took action to look into logistics. Raman seemed a bit negative, so I went looking for alternate hosts who migh contribute space.
17:39:56 [noah]
DKA: Goal is/was co-location with TAG meeting. Actually spoke with Carnegie Mellon West, but they couldn't manage it either.
17:40:07 [timbl]
Zakim, who is in a car wash?
17:40:07 [Zakim]
I don't understand your question, timbl.
17:40:27 [Zakim]
-jrees
17:40:50 [noah]
DKA: At this point, I'm not sure how much energy I have to push this forward.
17:41:08 [noah]
Chair is curious whether anyone else thinks this is high value?
17:41:19 [noah]
We need to settle soon so travel can be arranged.
17:41:26 [noah]
We also need to reach out to other attendees.
17:41:40 [noah]
I remain somewhat skeptical, but maybe I'm being too conservative.
17:42:35 [johnk]
I still think it's a good idea, but I don't have the time to do anything any more prior to October
17:42:42 [DKA_]
Tim: We could pick well-known established architects and/or people who have been making decisions that we care about...
17:43:02 [DKA_]
Noah: What kind of format and invite list would you have in mind?
17:43:41 [DKA_]
Tim: For format: maybe get people we don't know to present what the most important properties that we haven't mentioned in the architecture document?
17:43:53 [DKA_]
Noah: Invitation-only? Or open to anyone who signs up?
17:44:03 [DKA_]
Noah: Day-long thing? or Smaller meetings?
17:44:19 [DKA_]
... what kind of discussion are we trying to foster with whom?
17:45:37 [noah]
DKA: Could do day long, or just invite experts.
17:45:48 [noah]
DKA: Logistics would be much easier.
17:47:08 [DKA_].
17:47:39 [DKA_]
... option B (invited experts coming to talk to us) is less difficult.
17:47:55 [DKA_]
... Anyone want to push for a full-day developer-camp style thing? If not, I propose we let it go...
17:49:02 [DKA_]
Noah: Anyone else interested in working with Dan on option (A) - a bigger developer camp on a different day?
17:49:31 [DKA_]
Tim: My concern about the "big" one is peoples' time may already be committed.
17:50:03 [Zakim]
+jrees
17:50:16 [DKA_]
Noah: What I'm hearing is it doesn't work... If you have ideas for other things that might fit in the 3 days - but inclined to let go.
17:50:31 [noah]
ACTION-454?
17:50:31 [trackbot]
ACTION-454 -- Daniel Appelquist to take lead in organizing possible Web apps architecture camp / workshop / openday -- due 2010-07-22 -- OPEN
17:50:31 [trackbot]
17:51:20 [noah]
. ACTION: DanA to Take lead in organizing outside contacts for TAG F2f
17:52:22 [noah]
ACTION-455?
17:52:22 [trackbot]
ACTION-455 -- Noah Mendelsohn to schedule discussion on privacy workshop outcomes. -- due 2010-09-07 -- OPEN
17:52:22 [trackbot]
17:52:28 [noah]
close ACTION-455
17:52:28 [trackbot]
ACTION-455 Schedule discussion on privacy workshop outcomes. closed
17:52:35 [DKA_]
Noah: any objection to close 455?
17:52:38 [DKA_]
[none heard]
17:53:36 [DKA_]
Topic: Redirecting from secondary resource to secondary resource
17:53:39 [Zakim]
-jrees
17:53:59 [DKA_]
Larry: I have no opinion on this. I see nothing that I object to. I'd be happy for it to go either way.
17:54:05 [noah]
17:54:48 [DKA_]
Tim: I was very concerned about this. I've got code in tabulator that throws up an error message when it hits this.
17:54:53 [noah]
ACTION-456?
17:54:53 [trackbot]
ACTION-456 -- Yves Lafon to locate past HTTP WG discussion on Location: A#B change, and make the TAG aware of it -- due 2010-08-17 -- PENDINGREVIEW
17:54:53 [trackbot]
17:55:18 [noah]
Tim: I was hoping TAG would promote best practices
17:55:35 [DKA_]
... I'd been hoping that the TAG would say "there are best practices for redirecting from the object to the document about it"
17:56:01 [noah]
Tim: we see PERL site redirects with 302 from A#B to C#D, and then discover that there's no info in that result about C#D, only about A#B
17:56:10 [noah]
Tim: my code is unhappy with this
17:56:21 [Yves]
the redirection is from A to C#D
17:56:34 [Yves]
handling #B is done client-side
17:58:04 [DKA_]
Noah: Can the TAG write on this?
17:59:02 [timbl]
Ok, my current problem is with IIRC dcterms:title
17:59:29 [noah]
LM: I thought HTTP redirect is being addressed in http committee. Thus no need for TAG finding.
17:59:45 [DKA_]
Larry: I thought that the HTTP redirect was being addressed in http bis committee and that the TAG didn't need to write a finding.
17:59:45 [noah]
TBL: The httpbis committee?
17:59:49 [noah]
LM: Yves?
18:00:02 [timbl]
_________________________
18:00:03 [DKA_]
Yves: My impression is that there are more people interested in solving that issue here than in http bis...
18:00:05 [timbl]
$ curl -I
18:00:05 [timbl]
HTTP/1.1 302 Moved Temporarily
18:00:05 [timbl]
Date: Thu, 02 Sep 2010 17:59:31 GMT
18:00:06 [timbl]
Server: 1060 NetKernel v3.3 - Powered by Jetty
18:00:06 [timbl]
Location:
18:00:06 [timbl]
Content-Type: text/html; charset=iso-8859-1
18:00:08 [timbl]
X-Purl: 2.0;
18:00:09 [noah]
YL: I think there's more interest here on the TAG than there, because the focus is on the interpretation.
18:00:10 [timbl]
Expires: Thu, 01 Jan 1970 00:00:00 GMT
18:00:12 [timbl]
Content-Length: 283
18:00:26 [DKA_]
... if not enough people are interested in working on that here then we can say "allow http bis to do what they want."
18:00:53 [timbl]
__________________
18:00:57 [timbl]
$ curl -I
18:00:58 [timbl]
HTTP/1.1 200 OK
18:00:58 [timbl]
Date: Thu, 02 Sep 2010 17:53:42 GMT
18:00:58 [timbl]
Server: Apache/2.0.59 (Unix) DAV/2 mod_ssl/2.0.59 OpenSSL/0.9.8g SVN/1.4.3
18:00:59 [timbl]
Last-Modified: Mon, 30 Jun 2008 03:54:54 GMT
18:00:59 [timbl]
ETag: "8cd2f-133a9-38dddf80"
18:01:00 [timbl]
Accept-Ranges: bytes
18:01:02 [timbl]
Content-Length: 78761
18:01:04 [timbl]
Content-Type: application/rdf+xml
18:01:05 [timbl]
___________________________
18:01:37 [noah]
TBL: I'm looking up to find dcterms:title, that's what I first typed in.
18:01:39 [DKA_]
Tim: The first thing - I'm looking up to see what /dc/terms/title
18:01:59 [noah]
Tim: It's telling me that I need to look at
18:02:12 [noah]
Tim: from that you get:
18:02:12 [DKA_]
Tim: And it's telling me that I need to go look at that local id (#title)... and then if you look at what you get back from
18:02:21 [timbl]
So then you look in what you get from that you find:
18:02:33 [noah]
HT: You get 80K of data, only a bit of which is of interest
18:02:41 [timbl]
<!ENTITY dctermsns '
'>
18:02:53 [DKA_]
... you get - it declares a namespace -
18:03:05 [Zakim]
+jrees
18:03:09 [Yves]
#title seems to be an absolute reference in the redirected content, even if that content got a # in it
18:03:15 [timbl]
... xmlns:dcterms="
"
18:03:32 [timbl]
<rdf:Description rdf:about="
">
18:03:42 [noah]
Tim: the key bit is <rdf:Description rdf:about="
">
18:03:44 [DKA_]
... the information in that document says : <rdf:Description rdf:about="
">
18:04:00 [timbl]
RDF isn't about parts of documents, but things in this case the concept of title
18:04:14 [timbl]
so C has no infor about C#D
18:04:21 [timbl]
C#D basically does not exist
18:04:40 [DKA_]
Tim: RDF doesn't use anchors.
18:05:05 [DKA_]
Tim: If it was a hypertext document you'd be looking for an anchor...
18:05:21 [timbl]
Theer is information about A
18:05:28 [DKA_]
... it didn't define a local name - it used the fully qualified URL which has purl.org in it - about "a"
18:05:30 [timbl]
Ther is no #B in this example
18:06:32 [DKA_]
Tim: [in the current state of affairs you would be forced to write code that ignores these fragment identifiers - which is not good]
18:07:15 [ht]
q+ to ask how the situation would change if the reply had been a 303
18:07:21 [DKA_]
Tim: I've concluded from HTTP that I can use that URI to talk about this document...
18:07:31 [ht]
ack ht
18:07:31 [Zakim]
ht, you wanted to ask how the situation would change if the reply had been a 303
18:08:05 [DKA_]
Henry: What you're saying is that there are two problems here - one is that there a 302 and the other is that there is a hash in the response. What if the first were fixed - a 303 response but still a hash.
18:08:36 [DKA_]
Tim: Then I would have required C#D to identify a document - again, I am expecting a document from the 303.
18:09:23 [timbl]
From the 302 and the 200 my code concludes that C is a document and A is a document
18:09:26 [DKA_]
Tim: Yes, they could have done a 303 but then give me a document about that other document - for example a document that provides me a SPARLQL query...
18:09:57 [DKA_]
Henry: But I though the point of the http range 14 finding was that you get a document that doesn't pretend to be what you requested...
18:10:41 [DKA_]
Henry: it is 303 that we recommend in http range 14 - yes?
18:10:51 [Yves]
I would note that one of the option was to delegate the "fragment combination" to the mime type definition. RDF can tell its story there, like ignoring the #D part (but you know that only when you dereference the URI)
18:10:51 [DKA_]
Tim: Yes where the original is a predicate.
18:11:03 [timbl]
for teh case where the original is a thing like dublin or the concpet of a title.
18:11:51 [DKA_].
18:12:26 [DKA_]
Henry: Next question - I'm not convinced that the range 14 finding envisaged RDF that was not ONLY about Dublin.
18:12:28 [Zakim]
-jrees
18:12:54 [DKA_]
Tim: No; there are lots of cases where people wrote an ontology in one file and they've used a slash in their URIs but all the URIs redirect to the same file...
18:13:29 [DKA_]
Henry: Someone might think - "actually this document contains info on every city in Ireland then I should put a hash on it to direct me to the part of that document about Dublin"
18:13:43 [DKA_]
Tim: But RDF documents don't have parts.
18:14:07 [DKA_]
Henry: But RDF tells me what the semantics of # are ?
18:15:22 [Yves]
semantic of # should be described in the application/rdf+xml type, no ?
18:16:32 [DKA_]
Tim: the RDF spec says: when you get one of these things you parse it - it tells you how to parse it when you get ... (?)
18:17:08 [DKA_]
... the tutorials show you the fragment identifier is a local identifier in a local name space...
18:17:40 [DKA_]
... there could be no other semantics to fragment IDs [than what the RDF spec states].
18:17:50 [DKA_]
Tim: The C#D issue is up the stack a bit.
18:18:33 [DKA_]
Henry: I was trying to see if based on a reasonable reading, someone in the position of the Purl people might think that putting the # on was doing the right thing. I think they did.
18:19:14 [DKA_]
Tim: It may well be that if we got back to [Purl] then they could tweak their system accordingly.
18:19:23 [DKA_]
Tim: Anyone know the webmaster at dublin core?
18:19:36 [DKA_]
Tim: Anyone know anyone else who does this? Redirecting to a #?
18:20:12 [DKA_]
Henry: [points the finger at Yves]
18:20:26 [noah]
ACTION-456?
18:20:26 [trackbot]
ACTION-456 -- Yves Lafon to locate past HTTP WG discussion on Location: A#B change, and make the TAG aware of it -- due 2010-08-17 -- PENDINGREVIEW
18:20:26 [trackbot]
18:20:54 [DKA_]
Yves: I want to continue working on this, with Tim, to understand if the issue is only with RDF documents or if it's a more general one. E.g. part of a video, part of a document, etc...
18:20:56 [noah]
. proposed ACTION: Yves to work with Tim to propose next steps regarding redirection for secondary resources
18:21:07 [noah]
Would close 456
18:21:22 [DKA_]
Tim: We have to know what the semantics are and we have to specify it differently for hypertext and RDF.
18:21:35 [noah]
. proposed ACTION: Yves to write draft of best practices on redirection for secondary resources (with help from Tim)
18:21:53 [noah]
close ACTION-456
18:21:53 [trackbot]
ACTION-456 Locate past HTTP WG discussion on Location: A#B change, and make the TAG aware of it closed
18:21:59 [noah]
ACTION: Yves to write draft of best practices on redirection for secondary resources (with help from Tim)
18:21:59 [trackbot]
Created ACTION-462 - Write draft of best practices on redirection for secondary resources (with help from Tim) [on Yves Lafon - due 2010-09-09].
18:22:16 [noah]
ACTION-462 due: 2010-10-05
18:22:16 [ht]
is the media type registration for rdf+xml
18:22:36 [noah]
ACTION-462: due 2010-10-05
18:22:36 [trackbot]
ACTION-462 Write draft of best practices on redirection for secondary resources (with help from Tim) notes added
18:22:58 [ht]
And as TimBL said, it doesn't really answer the question, but points elsewhere: "More details on RDF's treatment of fragment identifiers can be found
18:22:58 [ht]
in the section "Fragment Identifiers" of the RDF Concepts document
18:22:58 [ht]
[
18:23:17 [ht]
18:23:18 [ht]
]
18:23:30 [timbl]
tracker, help action
18:23:59 [DKA_]
Topic: IETF coordination on MIME
18:24:00 [Zakim]
-ht
18:24:06 [noah]
ACTION-458?
18:24:06 [trackbot]
ACTION-458 -- Noah Mendelsohn to schedule discussion of followup actions for TAG to coordinate with IETF on MIME-type related activities -- due 2010-09-07 -- OPEN
18:24:06 [trackbot]
18:25:05 [noah]
ACTION-447?
18:25:05 [trackbot]
ACTION-447 -- Yves Lafon to coordinate TAG positions on media type related work with IETF, and to represent TAG at IETF meetings in Mastricht Due: 2010-07-20 -- due 2010-07-20 -- CLOSED
18:25:05 [trackbot]
18:25:40 [DKA_]
Noah: Yves said it didn't line up as well as we hoped - we said we would pick it up when Larry is back.
18:26:13 [DKA_]
Larry: I wrote this blog post on it - if you think it's good we could pick it up as a TAG note...
18:26:40 [DKA_]
Noah: if you did want to go forward with it - anything else need to be done?
18:26:55 [DKA_]
Noah: do you view this as "step on".
18:27:09 [jar_]
jar_ has joined #tagmem
18:27:14 [DKA_]
Noah: I think October is a good target for this.
18:27:31 [DKA_]
Noah: Can we take a week or 2 to re-read it.
18:28:03 [Zakim]
+ +1.617.209.aabb
18:28:12 [jar_]
zakim, aabb is jar
18:28:12 [Zakim]
+jar; got it
18:28:24 [DKA_]
Dan: should we bring it into W3C space?
18:28:32 [DKA_]
Larry: I can supply it.
18:28:40 [DKA_]
Noah: I can help to put it up.
18:29:00 [DKA_]
Noah: I think TAG members don't want to see this work lost or just left in Larry's blog.
18:29:08 [DKA_]
Larry: Anything that came up in the June meeting?
18:29:33 [DKA_]
Noah: I don't think we got to the point where we know what success is. In what further ways does the TAG want to engage?
18:30:14 [DKA_]
Larry: What I would like - some of this belongs in changing the ways in which MIME types are registered. So for this to have an effect on the Web it would need to be an IEFT document.
18:30:32 [DKA_]
Noah: Do you have time to help formulate [e.g.] a proposal to the IETF?
18:30:45 [jar_]
rrsagent, pointer
18:30:45 [RRSAgent]
See
18:30:46 [DKA_]
Larry: [Yes I can do that.]
18:30:57 [DKA_]
+1
18:31:18 [DKA_]
Noah: We will re-schedule in a week or 2.
18:31:35 [DKA_]
Larry: I'm willing to turn it into an Internet Draft - I can put it in that format.
18:32:05 [DKA_]
Tim: Regrets for next week.
18:32:17 [Zakim]
-TimBL
18:32:20 [DKA_]
Noah: Adjourned.
18:32:27 [DKA_]
thanks!
18:32:28 [Zakim]
-Masinter
18:32:29 [Zakim]
-Noah_Mendelsohn
18:32:29 [Zakim]
-Yves
18:32:31 [Zakim]
-DKA
18:32:33 [Zakim]
-jar
18:32:35 [Zakim]
TAG_Weekly()1:00PM has ended
18:32:36 [Zakim]
Attendees were Masinter, DKA, Noah_Mendelsohn, Yves, [IPcaller], ht, John_Kemp, +1.617.538.aaaa, jrees, TimBL, +1.617.209.aabb, jar
18:32:44 [johnk]
johnk has left #tagmem
18:32:57 [DKA]
DKA has joined #tagmem
18:32:59 [jar_]
zakim, jar is jrees
18:32:59 [Zakim]
sorry, jar_, I do not recognize a party named 'jar'
18:34:08 [noah]
New note on ACTION-458:
18:34:09 .
18:35:51 [DKA]
rrsagent, draft minutes
18:35:51 [RRSAgent]
I have made the request to generate
DKA
18:47:28 [jar_]
jar_ has joined #tagmem
19:18:28 [jar_]
jar_ has joined #tagmem
20:30:36 [jar_]
jar_ has joined #tagmem
20:38:22 [Zakim]
Zakim has left #tagmem
20:42:19 [jar_]
jar_ has joined #tagmem | http://www.w3.org/2010/09/02-tagmem-irc | CC-MAIN-2016-26 | refinedweb | 5,014 | 75.13 |
Opened 21 months ago
Closed 18 months ago
#28917 closed defect (fixed)
sagemath should not use math names with underscore when generating Latex
Description (last modified by )
seen at.
Using sagemath 8.9, when asking sagemath for latex of a math expression which contains something like
log_integral, it generates in latex
log_integral which does not typeset well due to underscore. Better translation would be
\logintegral, this allows one to make a math operator using
\DeclareMathOperator{\logintegral}{log\_integral}
But it is not possible to do this now as things stands. Here is an example
sage: var('t') sage: result=integrate(1/log(t)^2,t, algorithm="fricas") sage: result (log(t)*log_integral(t) - t)/log(t) sage: latex(result) \frac{\log\left(t\right) log_integral\left(t\right) - t}{\log\left(t\right)}
The latex above would be better as
sage: latex(result) \frac{\log\left(t\right) \logintegral\left(t\right) - t}{\log\left(t\right)}
even though
\logintegral is not known to Latex, it can be made a known math name using
\DeclareMathOperator as shown above.
EDIT: In order to avoid macros that are not known to Latex, we can define the Latex name to be
\operatorname{log\_integral}.
Change History (9)
comment:1 Changed 21 months ago by
- Milestone changed from sage-9.0 to sage-9.1
comment:2 Changed 18 months ago by
- Branch set to public/28917
comment:3 Changed 18 months ago by
- Commit set to 6fda05daf36531b73ddd2df4ed7c9fdd6ca4c51d
- Component changed from translations to symbolics
- Description modified (diff)
- Status changed from new to needs_review
I changed the Latex name of
log_integral to
\operatorname{log\_integral} so that the resulting Latex code can be pasted directly into a Latex file, without needing to add any macro definitions. (This is the same approach that was already used in the Latex name of
exp_polar.) I made a similar fix to the Latex name of
log_integral_offset, which was the only other place that I found this problem in the sage source.
New commits:
comment:4 Changed 18 months ago by
- Reviewers set to Markus Wageringel
- Status changed from needs_review to needs_work
Thank you for fixing this. There is just a small problem with the backslashes in the docstring. These need to be escaped, or (preferably) the docstring should be changed to a raw string:
def __init__(self): - """ + r""" See the docstring for ``Function_log_integral``.
Other than that, this looks good to me. I could not find other instances of this latex problem in Sage, either.
comment:5 Changed 18 months ago by
- Commit changed from 6fda05daf36531b73ddd2df4ed7c9fdd6ca4c51d to 77bc7f493fe82b0eefbcbd0a075040403427b51e
Branch pushed to git repo; I updated commit sha1. New commits:
comment:6 Changed 18 months ago by
Oops. Thanks for the correction. I also fixed a typo in a docstring (
Function_log_integral-offset ->
Function_log_integral_offset).
comment:7 Changed 18 months ago by
- Status changed from needs_work to needs_review
comment:8 Changed 18 months ago by
- Status changed from needs_review to positive_review
Thanks.
comment:9 Changed 18 months ago by
- Branch changed from public/28917 to 77bc7f493fe82b0eefbcbd0a075040403427b51e
- Resolution set to fixed
- Status changed from positive_review to closed
Ticket retargeted after milestone closed | https://trac.sagemath.org/ticket/28917 | CC-MAIN-2021-39 | refinedweb | 518 | 50.46 |
/************************************************************************** * * * * **************************************************************************/ package edu.hws.jcm.awt; import java.util.Vector; import edu.hws.jcm.data.Value; /** * A Tie associates several Tieable objects. When the check() mehtod of the * Tie is called, it determines which of the Tieables has the largest serial number. * It then tells each Tieable to synchronize with that object. Ordinarily, the * Tie is added to a Controller, which is responsible for calling the Tie's * check() method. * *
This. * * @author David Eck */ public class Tie { /** * The Tieables in this Tie. */ protected Vector items = new Vector(2); /** * Create a Tie, initially containing no objects. */ public Tie() { } /** * Create a Tie initally containing only the object item. * item should be non-null. * * @param item the only initial item in this tieable. */ public Tie(Tieable item) { add(item); } /** * Create a Tie initially containing item1 and item2. * The items should be non-null. The items will be * synced with each other at the time the Tie is created. */ public Tie(Tieable item1, Tieable item2) { add(item1); add(item2); } /** * Add item to the tie, and sync it with the items that are * already in the Tie. It should be non-null. Note that synchronization * of the objects is forced even if they all have the same serial number, * since the values might not be the same when they are first added to * the Tie. */ public void add(Tieable item) { if (item != null) { items.addElement(item); forcecheck(); } } /** * If this Tie contains more than one item, find the newest * one and sync all the items with that item. If the serial * numbers of all the items are already the same, nothing is * done. */ public void check() {; } } if (!outOfSync) // if serialnumbers are the same, no sync is necessary. return; Tieable newest = (Tieable)items.elementAt(indexOfMax); for (int i = 0; i < top; i++) ((Tieable)items.elementAt(i)).sync(this, newest); } private void forcecheck() { // Synchronize the items in this Tie, even if serial numbers are the same.; } } Tieable newest = (Tieable)items.elementAt(indexOfMax); for (int i = 0; i < top; i++) ((Tieable)items.elementAt(i)).sync(this, newest); } } // end class Ti | http://math.hws.edu/javamath/jcm1-source/edu/hws/jcm/awt/Tie.java | crawl-003 | refinedweb | 340 | 68.97 |
This is a WinForms project that shows exception information. You can use either the control or a dialog window. The main reason I published this project is due to other projects I have that have a dependency on this code. It's not the prettiest code I've ever written, but it's...
WinForms TextBox with spell checking. This WinForms control encapsulates the WPF textbox which has in built spell-checking. Only uses .net framework so you should be able to use this control without installing any extra libraries.spell-check textbox usercontrols winforms
.NET library that improves your productivity and application performance when performing reflection operations. It allows you to perform metadata lookup and reflection invocation intuitively while achieving greater performance than the built-in .NET Reflection.reflection code-generation dynamicmethod performance
The reflection helper classes over the mirrors based reflection
A simple WinForms control that will display a shape (rectange, triangle, star, curve, polygon, etc.) with a number of different fill and outline effects, including text. Nice for spicing up a form.controls windows-forms winforms
jSkin is .NET Winforms Skin Library that allows you to decorate your Formborder with custom style.jskin skin usercontrol winforms
Winforms sample using Facebook C# SDK
Winforms samples for use with Mono's implementation of System.Windows.Forms
The Compatibility API allows you currently to convert to and from certain formats from WPF and Winforms that do similar jobs but are imcompatible. e.g. the Image UI element in WPF requires BitmapImage format as its 'ImageSource'. On the other hand Winforms' PictureBox requires the regular Bitmap object as its 'Image'. The current alpha version allows you to convert to and from System.Drawing.Bitmap and System.Windows.Media.Imaging.ImageSource or System.Windows.Media.Imaging.BitmapImage.
Example Project Showing Model-View-Presenter in WinForms
Reflection Studio is a development tool that encapsulate all my work around reflection, performance and WPF. It allows to inject performance traces into any NET software, get them back for analyse and reporting.diagram editor injection mono-cecil net performance reflection
Reflection proxy class generator makes it easier for .NET developers to access non-public members of .NET types and use non-public classes with reflection.generator proxy reflection
Reflection is not supported by current c++ standard, This examples demonstrates how to adopt reflection in your c++ code, particularly, string fromClass and classFromString. It relies on templates and a couple of short macros. The Factory functions are indexed in a map against the class string. The map is static, therefore is instantiated before int main() is given a chance to run. Making the reflection available right at the start without any setup code being required to run.
Lightweight include files adding reflection capabilities to C++. Generates SQL queries and serialization routines. STL-style iterations over class members. Compile-time / run-time reflection. Based on templates and meta-programming.
We have large collection of open source products. Follow the tags from
Tag Cloud >> | http://www.findbestopensource.com/product/exceptionviews | CC-MAIN-2017-39 | refinedweb | 494 | 51.14 |
#include "ltwrappr.h"
L_INT LImageViewerCell::SetCellBitmapList (hBitmapList, bCleanImages, uFlags);
Attaches a bitmap list to the cell at the specified index.
Handle to the list of bitmaps, which will be attached to the cell. Pass NULL to remove the current bitmap list.
Removing the current bitmap list will not free the attached image(s). The value in the bCleanImages parameter determines whether the image(s) associated with the removed bitmap list are also freed.
Flag that specifies whether to free the bitmap list that was previously attached to the cell. Possible values are:
Reserved for future use. Pass 0.
It is best not to modify an existing bitmap list attached to a specific cell. However, if it is necessary to modify an attached bitmap list, follow the steps below:
If the cell already has a bitmap list and you call this function with hBitmapList set to another valid list of bitmaps and bCleanImages set to TRUE, this function frees the current attached list and attaches the new list to the cell.
If the cell already has a bitmap list and you call this function with hBitmapList set to another valid list of bitmaps and bCleanImages set to FALSE, this function will just overwrite the current list of bitmap with the new list of bitmaps. and the current list will just take up memory until the application ends.
Required DLLs and Libraries
For an example, refer to LImageViewer::Create.
Direct Show .NET | C API | Filters
Media Foundation .NET | C API | Transforms
Media Streaming .NET | C API | https://www.leadtools.com/help/sdk/v21/imageviewer/clib/limageviewercell-setcellbitmaplist.html | CC-MAIN-2021-25 | refinedweb | 254 | 63.09 |
The prototype version of this is being used to run the skim
production on the converted dataset.
To use the new Task Management system you should check out
BbkTaskManager sjg20031129a.
You need to setup your mysql data as mentioned in the Data Distribution page if you are not
doing this at site (like SLAC) that is already setup.
You need to initialise the tables for the Task Management;
Before running a task you need to set it up. There are a few
ingredients that need to be ready first.
You should create an entry for the releases you will be using. I'll
setup 14.1.3b first. For each release you should give it a precidence
(which defines which release is "newer"). This is generally done by
giving each element of a release two digits and turning the letter in
to the last two digits. So I would do (at SLAC you don't need the
-u bbrora option);
[antonia] ~/reldirs/tst14.1.3b > BbkCreateRelease -u bbrora 14.1.3b 14010302
name : 14.1.3b
created at : 2003/11/30 05:12:37
precedence : 14010302
id : 1
You should do this for all releases you plan to use (each release
needs it done once). Now we should configure a setup for the task we
want to carry out (later on I discover you need to give the path to
the executable (and the tcl file if it won't be in the directory you
do BbkSubmitJobs from);
[antonia] ~/reldirs/tst14.1.3b > BbkCreateSetup -u bbrora miniAnal BetaMiniApp MyMiniAnalysis.tcl
successfully created new configuration
name : miniAnal
maintainer : gowdy
created at : 2003/11/30 05:15:01
id : 1
associated with task : -
task id : 0
release : 14.1.3b
runs : BetaMiniApp MyMiniAnalysis.tcl
job wrapper : RELEASE/bin/<BFARCH>/jobWrapper
default queue : bfobjy
logfile : RELEASE/log/<TASKNAME>/<RUNNUMBER>/<PASS>.LOG
output : RELEASE/results/<TASKNAME>/<RUNNUMBER>/<PASS>/
Kan-configuration file : -
tcl template : RELEASE/BbkTaskManager/tclTemplate
user2 : -
ready : no
It checks to make sure the release is defined (fairly good
idea!). It also uses BFCURRENT, another good idea, if you don't use
the --release option.
Now we should setup a task;
[antonia] ~/reldirs/tst14.1.3b > BbkCreateTask -u bbrora -c miniAnal -i sjg,bbrora,bbrora -o sjg,bbrora,bbrora miniAnal
successfully created new task
name : miniAnal
maintainer : gowdy
created at : 2003/11/30 15:06:11
id : 1
current configuration : miniAnal
input datasets : -
input database : sjg,bbrora,bbrora
output stream(s) : -
output database : sjg,bbrora,bbrora
ready : no
active : no
You may want to use different input and output databases, I'm
testing this on my laptop.
I now need to define the dataset to use as input. From the earlier
import there are three datasets defined;
[antonia] ~/reldirs/tst14.1.3b > BbkDatasetHistory --dbsite sjg --dbuser bbrora
BbkDatasetHistory: 3 datasets found:-
AllEventsRun3Conv
GridKa.32955-36172
Padova.36173-39320
I'll setup this task to processes the last one of these;
[antonia] ~/reldirs/tst14.1.3b > BbkAddDataset -h sjg -u bbrora miniAnal Padova.36173-39320
successfully added dataset(s)
name : miniAnal
maintainer : gowdy
created at : 2003/11/30 15:06:11
id : 1
current configuration : miniAnal
input datasets : Padova.36173-39320
input database : sjg,bbrora,bbrora
output stream(s) : -
output database : sjg,bbrora,bbrora
ready : yes
active : no
Before submiting the job you need to setup a TCL snippet to
configure it correctly, as each application will look for difference
TCL variables to be set prior to invoking the tcl for it. The default
one (as seen in the Setup configuration) is
BbkTaskManager/tclTemplate. See the reference below for
allowed special tokens. If you make your own you should configure the
Setup with BbkEditSetup --tcl <yourFile>. I'll now
create the jobs to run;
[antonia] ~/reldirs/tst14.1.3b > BbkCreateJobs -h sjg -u bbrora miniAnal
Argument "stream|s=s" isn't numeric in numeric gt (>) at ./bin/Linux24RH72_i386_gcc2953/BbkCreateJobs line 24.
Checking local run list and creating lookup table...
11/30 15:34:21
...done... Retrieving dse list for dataset Padova.36173-39320 from the input database...
11/30 15:34:21
..done.. checking list for dublicates...
11/30 15:34:21
..done.. creating jobs...
11/30 15:34:21
..done.
created 0 jobs.
Created 0 nJobs job(s).
successfully created new Jobs
So it figured out that I didn't have any of the files for this
Dataset on my laptop's database. I should change the Task to also use
the Karlsruhe Dataset (with BbkAddDataset -h sjg -u bbrora
miniAnal GridKa.32955-36172). Retrying the above command it setup
1115 jobs;
...
1111 pending - bfobjy 50354
1112 pending - bfobjy 62507
1113 pending - bfobjy 22503
1114 pending - bfobjy 8888
1115 pending - bfobjy 6839
I can check the status thus;
[antonia] ~/reldirs/tst14.1.3b > BbkShowJobs -u bbrora --summary miniAnalsh: qstat: command not found
No jobs in any queues... Can that be right? Try again later.
Taskname prepared submitted done ok failed superceded last update
------------------------------------------------------------------------------
miniAnal 1115 0 0 0 0 0 2003/11/30 15:39:20
Now it is time to try running a job. This will fail though as I
don't have a batch system on my laptop... I'll try submitting a
job;
[antonia] ~/reldirs/tst14.1.3b/workdir > BbkSubmitJobs -u bbrora --jobid 150 miniAnal
11/30 19:25:17 BbkTaskManager::BbkTMJob::jobFiles::1231: Error! Can't execute /home/gowdy/reldirs/tst14.1.3b/workdir/BetaMiniApp. Aborting.
11/30 19:25:17 BbkTaskManager::BbkTMJob::submit::857: Error! Can't prepare output files. Aborting.
11/30 19:25:17 BbkTaskManager::BbkTMTask::submitJobs::1537: Error! Can't submit job.
Submitted 0 job(s).
No jobs submitted.
So it looks like when you create a Task you should give the full
path to the executable and tcl file.
To run locally without a batch system you can use the --local option to BbkSubmitJobs. So, this would look like;
[antonia] ~/reldirs/tst14.1.3b > BbkSubmitJobs -u bbrora --local --jobid 150 miniAnal
12/01 05:50:51 BbkTaskManager::BbkTMJob::jobFiles::1231: Error! Can't execute /home/gowdy/reldirs/tst14.1.3b/BetaMiniApp. Aborting.
12/01 05:50:51 BbkTaskManager::BbkTMJob::submit::857: Error! Can't prepare output files. Aborting.
12/01 05:50:51 BbkTaskManager::BbkTMTask::submitJobs::1537: Error! Can't submit job.
Submitted 0 job(s).
No jobs submitted.
No surprise that there is the same error. I'll make symlinks so it
can find the executable and tcl file. (I lost the output for job 150,
so I'll start using 151 below) Here goes, I'll break up the output
here so this page doesn't get too wide;
[antonia] ~/reldirs/tst14.1.3b/workdir > BbkSubmitJobs -u bbrora --local --jobid 151 miniAnal
/home/gowdy/reldirs/tst14.1.3b/workdir/RELEASE/bin/Linux24RH72_i386_gcc2953/jobWrapper \
--exec /home/gowdy/reldirs/tst14.1.3b/workdir/BetaMiniApp \
--tcl /home/gowdy/reldirs/tst14.1.3b/workdir/RELEASE/log/miniAnal/<RUNNUMBER>/job_151.tcl \
--stats /home/gowdy/reldirs/tst14.1.3b/workdir/RELEASE/log/miniAnal/<RUNNUMBER>/stats_151.dat \
--logs /home/gowdy/reldirs/tst14.1.3b/workdir/RELEASE/log/miniAnal/<RUNNUMBER>/00.LOG \
--out /home/gowdy/reldirs/tst14.1.3b/workdir/RELEASE/results/miniAnal/<RUNNUMBER>/00 \
--jobId 151 --debug
Submitted 1 job(s).
The <RUNNUMBER> element hasn't been substituted here for some
reason. That causes the job to fail straight away... so close...
There is a Quick
Start guide you should probably use for now.
Last Update: 29th November 2003 | http://www.slac.stanford.edu/BFROOT/www/Computing/Documentation/taskmgt.html | crawl-002 | refinedweb | 1,230 | 59.8 |
What is the use of namespace in openstack?
What is the main purpose of a namespace in OpenStack?
What is the main purpose of a namespace in OpenStack?
The same controller or compute node usually implements several Neutron networks. Network Namespaces are one of the methods to separate traffic that flows through those networks from each other. Each namespace has a routing table or a DHCP server (and perhaps other elements). This allows the creation of separate networks with identical or overlapping IP address ranges.
See also....
Asked: 2020-03-28 00:03:49 -0500
Seen: 56 times
Last updated: Mar 28
Router namespace is not able to ping 8.8.8.8 [closed]
what is the concept of partition space in swift
neutron not setting up namespace for floating IPs
Floating IP not pinging externally - Ocata
Is there a way to reference resources in separate stack except by uuid?
Router namespace issue;cannot connect to Openstack instances
unable to ping host from instance - openstack ocata [closed]
I can't ping the external network from namespace
How to spreading SNAT and Floating IP to multiple network nodes ?
Adding additional NAT rule on neutron-l3-agent
OpenStack is a trademark of OpenStack Foundation. This site is powered by Askbot. (GPLv3 or later; source). Content on this site is licensed under a CC-BY 3.0 license. | https://ask.openstack.org/en/question/126754/what-is-the-use-of-namespace-in-openstack/ | CC-MAIN-2020-45 | refinedweb | 225 | 64.61 |
bvar is a set of counters to record and view miscellaneous statistics conveniently in multi-threaded applications. The implementation reduces cache bouncing by storing data in thread local storage(TLS), being much faster than UbMonitor(a legacy counting library inside Baidu) and even atomic operations in highly contended scenarios. brpc integrates bvar by default, namely all exposed bvars in a server are accessible through /vars, and a single bvar is addressable by /vars/VARNAME. Read bvar to know how to add bvars for your program. brpc extensively use bvar to expose internal status. If you are looking for an utility to collect and display metrics of your application, consider bvar in the first place. bvar definitely can't replace all counters, essentially it moves contentions occurred during write to read: which needs to combine all data written by all threads and becomes much slower than an ordinary read. If read and write on the counter are both frequent or decisions need to be made based on latest values, you should not use bvar.
/vars : List all exposed bvars
/vars/NAME:List the bvar whose name is
NAME
/vars/NAME1,NAME2,NAME3:List bvars whose names are either
NAME1,
NAME2 or
NAME3.
/vars/foo*,b$r: List bvars whose names match given wildcard patterns. Note that
$ matches a single character instead of
? which is a reserved character in URL.
Following animation shows how to find bvars with wildcard patterns. You can copy and paste the URL to others who will see same bvars that you see. (values may change)
There's a search box in the upper-left corner on /vars page, in which you can type part of the names to locate bvars. Different patterns are separated by
,
: or space.
/vars is accessible from terminal as well:
$ curl brpc.baidu.com:8765/vars/bthread* bthread_creation_count : 125134 bthread_creation_latency : 3 bthread_creation_latency_50 : 3 bthread_creation_latency_90 : 5 bthread_creation_latency_99 : 7 bthread_creation_latency_999 : 12 bthread_creation_latency_9999 : 12 bthread_creation_latency_cdf : "click to view" bthread_creation_latency_percentiles : "[3,5,7,12]" bthread_creation_max_latency : 7 bthread_creation_qps : 100 bthread_group_status : "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 " bthread_num_workers : 24 bthread_worker_usage : 1.01056
Clicking on most of the numerical bvars shows historical trends. Each clickable bvar records values in recent 60 seconds, 60 minutes, 24 hours and 30 days, which are 174 numbers in total. 1000 clickable bvars take roughly 1M memory.
x-ile (short for x-th percentile) is the value ranked at N * x%-th position amongst a group of ordered values. E.g. If there're 1000 values inside a time window, sort them in ascending order first. The 500-th value(1000 * 50%) in the ordered list is 50-ile(a.k.a median), the 990-th(1000 * 99%) value is 99-ile, the 999-th value is 99.9-ile. Percentiles give more information on how latencies distribute than mean values, and being helpful for analyzing behavior of the system more accurately. Industrial-grade services often require SLA to be not less than 99.97% (the requirement for 2nd-level services inside Baidu, >=99.99% for 1st-level services), even if a system has good average latencies, a bad long-tail area may still break SLA. Percentiles do help analyzing the long-tail area.
Percentiles can be plotted as a CDF or percentiles-over-time curve.
Following diagram plots percentiles as CDF, where the X-axis is the ratio(ranked-position/total-number) and the Y-axis is the corresponding percentile. E.g. The Y value corresponding to X=50% is 50-ile. If a system requires that “99.9% requests need to be processed within Y milliseconds”, you should check the Y at 99.9%.
Why do we call it CDF ? When a Y=y is chosen, the corresponding X means “percentage of values <= y”. Since values are sampled randomly (and uniformly), the X can be viewed as “probability of values <= y”, or P(values <= y), which is just the definition of CDF.
Derivative of the CDF is PDF. If we divide the Y-axis of the CDF into many small-range segments, calculate the difference between X values of both ends of each segment, and use the difference as new value for X-axis, a PDF curve would be plotted, just like a normal distribution rotated 90 degrees clockwise. However density of the median is often much higher than others in a PDF and probably make long-tail area very flat and hard to read. As a result, systems prefer showing distributions in CDF rather than PDF.
Here're 2 simple rules to check if a CDF curve is good or not:
A CDF with slowly ascending curve and small long-tail area is great in practice.
Following diagram plots percentiles over time and has four curves. The X-axis is time and Y-axis from top to bottom are 99.9% 99% 90% 50% percentiles respectively, plotted in lighter and lighter colors (from orange to yellow).
Hovering mouse over the curves shows corresponding values at the time. The tooltip in above diagram means “The 99% percentile of latency before 39 seconds is 330 microseconds”. The diagram does not include the 99.99-ile curve which is usually significantly higher than others, making others hard to read. You may click bvars ended with “_latency_9999” to read the 99.99-ile curve separately. This diagram shows how percentiles change over time, which is helpful to analyze performance regressions of systems.
brpc calculates latency distributions of services automatically, which do not need users to add manually. The metrics are as follows:
bvar::LatencyRecorder is able to calculate latency distributions of any code, as depicted below. (checkout bvar-c++ for details):
#include <bvar/bvar.h> ... bvar::LatencyRecorder g_latency_recorder("client"); // expose this recorder ... void foo() { ... g_latency_recorder << my_latency; ... }
If the application already starts a brpc server, values like
client_latency,
client_latency_cdf can be viewed from
/vars as follows. Clicking them to see (dynamically-updated) curves:
If your program only uses brpc client or even not use brpc, and you also want to view the curves, check here. | https://apache.googlesource.com/incubator-brpc/+/HEAD/docs/en/vars.md | CC-MAIN-2021-39 | refinedweb | 1,018 | 55.44 |
You can configure projects to import files and environment variables from other projects. This allows you to use your team’s existing work as reusable building blocks, and avoid unnecessary repetition.
For example:
A canonical dataset used by multiple projects can be managed in a single place.
Your code can reside in a separate project from your data. No need to duplicate large datasets within multiple projects.
An external data source requiring login credentials (such as a database) may be securely represented through environment variables in a single project, and then used by many projects.
Results from one model (such as trained model files, or R workspaces) can be imported and used by multiple different downstream projects.
If a project’s files are organized as an R or Python package, then you can configure other projects to automatically install them at runtime.
The first step is to configure the exporting project. Projects can export files and environment variables.
Other projects can import content from projects that are configured for export. After you’ve set up import, the content from the exporting projects is accessible when you run code in the importing project.
During runs with imported files, each project directory is located at
/mnt<username>/<project name>, where
<username> is the owner of
that particular project. Imported directories are read-only.
The path of your main project will also change from
/mnt to
/mnt<username>/<project name>. If you have hardcoded any paths in
your projects to
/mnt, we recommend replacing the hardcoded paths to
using the
$DOMINO_WORKING_DIR environment variable. This will ensure
the correct path regardless of whether other projects are imported. See
the support article
on Domino Environment Variables for
more information.
Setup
Set up the projects from which you want to export. You need to have Owner, Collaborator, or Project Importer access to the projects to set them up for export.
To set up export, open the project in Domino and click Settings from the project menu, then open the Exports tab. In the panel on that tab, click the checkbox for files or environment variables to make those types of content available to other projects. To export as a package, select the appropriate language from the Code Package dropdown.
Open the project into which you want to import content. Click Files from the project menu, then open the Other Projects tab. Add projects by filling in the project name field, then clicking Import. You’ll see projects currently being imported listed in a table below.
Only the files from the directly imported project will be viewable when you import. For example if project A is imported into project B, and then your project imports B, only the contents of B will be accessible to your project.
Running python scripts from imported project
When running a Python script from an imported project you might encounter the error message:
FileNotFoundError: [Error 2] No such file or directory:
When a python script runs an import, it executes that code with the
current working directory, so if you have a relative path in the
imported file, it will try to find the file in the current folder and
fail. In this case, you can update your imported script to use an
absolute path based on the current path of the imported file using
os.path, for example.
import os file_name = os.path.join(os.path.dirname(__file__), 'your_referenced_file.dat') | https://admin.dominodatalab.com/en/3.6/user_guide/9872cf/project-dependencies/ | CC-MAIN-2022-27 | refinedweb | 569 | 62.98 |
What is Hadoop Cluster? Best Practices to Build Hadoop Clusters
Eager to learn each and everything about the Hadoop Cluster?
Hadoop is a software framework for analyzing and storing vast amounts of data across clusters of commodity hardware. In this article, we will study a Hadoop Cluster.
In this article you’ll learn the following points:
- What is a Cluster
- What is Hadoop Cluster
- Architecture of Hadoop Cluster
- Single node Hadoop Cluster VS multi-node Hadoop Cluster
- Communication Protocols used in Hadoop Cluster
- Best Practices for building Hadoop Cluster
- Hadoop Cluster Management
- Advantages of Hadoop Cluster
Let us first start with an introduction to Cluster.
What is a Cluster?
A Cluster is a collection of nodes. Nodes are nothing but a point of connection/intersection within a network.
A computer cluster is a collection of computers connected with a network, able to communicate with each other, and works as a single system.
What is Hadoop Cluster?
Hadoop Cluster is just a computer cluster used for handling a vast amount of data in a distributed manner.
It is a computational cluster designed for storing as well as analyzing huge amounts of unstructured or structured data in a distributed computing environment.
Hadoop Clusters are also known as Shared-nothing systems because nothing is shared between the nodes in the cluster except the network bandwidth. This decreases the processing latency.
Thus, when there is a need to process queries on the huge amount of data, the cluster-wide latency is minimized.
Let us now study the Architecture of Hadoop Cluster.
Architecture of Hadoop Cluster
The Hadoop Cluster follows a master-slave architecture. It consists of the master node, slave nodes, and the client node.
1. Master in Hadoop Cluster
Master in the Hadoop Cluster is a high power machine with a high configuration of memory and CPU. The two daemons that are NameNode and the ResourceManager run on the master node.
a. Functions of NameNode
NameNode is a master node in the Hadoop HDFS. NameNode manages the filesystem namespace. It stores filesystem meta-data in the memory for fast retrieval. Hence, it should be configured on high-end machines.
The functions of NameNode are:
- Manages filesystem namespace
- Stores meta-data about blocks of a file, blocks location, permissions, etc.
- It executes the filesystem namespace operations like opening, closing, renaming files and directories, etc.
- It maintains and manages the DataNode.
b. Functions of Resource Manager
- ResourceManager is the master daemon of YARN.
- The ResourceManager arbitrates the resources among all the applications in the system.
- It keeps track of live and dead nodes in the cluster.
2. Slaves in the Hadoop Cluster
Slaves in the Hadoop Cluster are inexpensive commodity hardware. The two daemons that are DataNodes and the YARN NodeManagers run on the slave nodes.
a. Functions of DataNodes
- DataNodes stores the actual business data. It stores the blocks of a file.
- It performs block creation, deletion, replication based on the instructions from NameNode.
- DataNode is responsible for serving client read/write operations.
b. Functions of NodeManager
- NodeManager is the slave daemon of YARN.
- It is responsible for containers, monitoring their resource usage (such as CPU, disk, memory, network) and reporting the same to the ResourceManager.
- The NodeManager also checks the health of the node on which it is running.
3. Client Node in Hadoop Cluster
Client Nodes in Hadoop are neither master node nor slave nodes. They have Hadoop installed on them with all the cluster settings.
Functions of Client nodes
- Client nodes load data into the Hadoop Cluster.
- It submits MapReduce jobs, describing how that data should be processed.
- Retrieve the results of the job after processing completion.
We can scale out the Hadoop Cluster by adding more nodes. This makes Hadoop linearly scalable. With every node addition, we get a corresponding boost in throughput. If we have ‘n’ nodes, then adding 1 node gives (1/n) additional computing power.
Single Node Hadoop Cluster VS Multi-Node Hadoop Cluster
1. Single Node Hadoop Cluster
Single Node Hadoop Cluster is deployed on a single machine. All the daemons like NameNode, DataNode, ResourceManager, NodeManager run on the same machine/host. In a single-node cluster setup, everything runs on a single JVM instance. The Hadoop user didn’t have to make any configuration settings except for setting the JAVA_HOME variable.
The default replication factor for a single node Hadoop cluster is always 1.
2. Multi-Node Hadoop Cluster
Multi-Node Hadoop Cluster is deployed on multiple machines. All the daemons in the multi-node Hadoop cluster are up and run on different machines/hosts.
A multi-node Hadoop cluster follows master-slave architecture. The daemons Namenode and ResourceManager run on the master nodes, which are high-end computer machines. The daemons DataNodes and NodeManagers run on the slave nodes(worker nodes), which are inexpensive commodity hardware.
In the multi-node Hadoop cluster, slave machines can be present in any location irrespective of the location of the physical location of the master server.
Communication Protocols used in Hadoop Cluster
The HDFS communication protocols are layered on the top of the TCP/IP protocol. A client establishes a connection with the NameNode through the configurable TCP port on the NameNode machine.
The Hadoop Cluster establishes a connection to the client through the ClientProtocol. Moreover, the DataNode talks to the NameNode using the DataNode Protocol. The Remote Procedure Call (RPC) abstraction wraps Client Protocol and DataNode protocol. By design, NameNode does not initiate any RPCs. It only responds to the RPC requests issued by clients or DataNodes.
Best Practices for building Hadoop Cluster
The performance of a Hadoop Cluster depends on various factors based on the well-dimensioned hardware resources that use CPU, memory, network bandwidth, hard drive, and other well-configured software layers.
Building a Hadoop Cluster is a non-trivial job. It requires consideration of various factors like choosing the right hardware, sizing the Hadoop Clusters, and configuring the Hadoop Cluster.
Let us now see each one in detail.
1. Choosing Right Hardware for Hadoop Cluster
Many organizations, when setting up Hadoop infrastructure, are in a predicament as they are not aware of the kind of machines they need to purchase for setting up an optimized Hadoop environment, and the ideal configuration they must use.
For choosing the right hardware for the Hadoop Cluster, one must consider the following points:
- The volume of Data that cluster will be going to handle.
- The type of workloads the cluster will be dealing with ( CPU bound, I/O bound).
- Data storage methodology like data containers, data compression techniques used, if any.
- A data retention policy, that is, how long we want to keep the data before flushing it out.
2. Sizing the Hadoop Cluster
For determining the size of the Hadoop Cluster, the data volume that the Hadoop users will process on the Hadoop Cluster should be a key consideration. By knowing the volume of data to be processed, helps in deciding how many nodes will be required in processing the data efficiently and memory capacity required for each node.
There should be a balance between the performance and the cost of the hardware approved.
3. Configuring Hadoop Cluster
Finding the ideal configuration for the Hadoop Cluster is not an easy job. Hadoop framework must be adapted to the cluster it is running and also to the job.
The best way of deciding the ideal configuration for the Hadoop Cluster is to run the Hadoop jobs with the default configuration available in order to get a baseline. After that, we can analyze the job history log files to see if there is any resource weakness or the time taken to run the jobs is higher than expected. If it is so, then change the configuration. Repeating the same process can tune the Hadoop Cluster configuration that best fits the business requirements.
The performance of the Hadoop Cluster greatly depends on the resources allocated to the daemons. For small to medium data context, Hadoop reserves one CPU core on each DataNode, whereas, for the long datasets, it allocates 2 CPU cores on each DataNode for HDFS and MapReduce daemons.
Hadoop Cluster Management
On deploying the Hadoop Cluster in production, it is apparent that it should scale along all dimensions that are volume, variety, and velocity.
Various features that it should be posses to become production-ready are – round the clock availability, robust, manageability, and performance. Hadoop Cluster management is the main facet of the big data initiative.
The best tool for Hadoop Cluster management should have the following features:-
- It must ensure 24×7 high availability, resource provisioning, diverse security, work-load management, health monitoring, performance optimization. Also, it needs to provide job scheduling, policy management, back up, and recovery across one or more nodes.
- Implement redundant HDFS NameNode high availability with load balancing, hot standbys, resynchronization, and auto-failover.
- Enforcing policy-based controls that prevent any application from grabbing a disproportionate share of resources on an already maxed-out Hadoop Cluster.
- Performing regression testing for managing the deployment of any software layers over Hadoop clusters. This is to make sure that any jobs or data would not get crash or encounter any bottlenecks in daily operations.
Benefits of Hadoop Cluster
The various benefits provided by the Hadoop Cluster are:
1. Scalable
Hadoop Clusters are scalable. We can add any number of nodes to the Hadoop Cluster without any downtime and without any extra efforts. With every node addition, we get a corresponding boost in throughput.
2. Robustness
The Hadoop Cluster is best known for its reliable storage. It can store data reliably, even in cases like DataNode failure, NameNode failure, and network partition. The DataNode periodically sends a heartbeat signal to the NameNode. In network partition, a set of DataNodes gets detached from the NameNode due to which NameNode does not receive any heartbeat from these DataNodes. NameNode then considers these DataNodes as dead and does not forward any I/O request to them. Also, the replication factor of the blocks stored in these DataNodes falls below their specified value. As a result, NameNode then initiates the replication of these blocks and recovers from the failure.
3. Cluster Rebalancing
The Hadoop HDFS architecture automatically performs cluster rebalancing. If the free space in the DataNode falls below the threshold level, then HDFS architecture automatically moves some data to other DataNode where enough space is available.
4. Cost-effective
Setting up the Hadoop Cluster is cost-effective because it comprises inexpensive commodity hardware. Any organization can easily set up a powerful Hadoop Cluster without spending much on expensive server hardware.
Also, Hadoop Clusters with its distributed storage topology overcome the limitations of the traditional system. The limited storage can be extended just by adding additional inexpensive storage units to the system.
5. Flexible
Hadoop Clusters are highly flexible as they can process data of any type, either structured, semi-structured, or unstructured and of any sizes ranging from Gigabytes to Petabytes.
6. Fast Processing
In Hadoop Cluster, data can be processed parallelly in a distributed environment. This provides fast data processing capabilities to Hadoop. Hadoop Clusters can process Terabytes or Petabytes of data within a fraction of seconds.
7. Data Integrity
To check for any corruption in data blocks due to buggy software, faults in a storage device, etc. the Hadoop Cluster implements checksum on each block of the file. If it finds any block corrupted, it seeks it form another DataNode that contains the replica of the same block. Thus, the Hadoop Cluster maintains data integrity.
How Hadoop work internally? Let’s figure it out.
Summary
After reading this article, we can say that the Hadoop Cluster is a special computational cluster designed for analyzing and storing big data. Hadoop Cluster follows master-slave architecture. The master node is the high-end computer machine, and the slave nodes are machines with normal CPU and memory configuration. We have also seen that the Hadoop Cluster can be set up on a single machine called single-node Hadoop Cluster or on multiple machines called multi-node Hadoop Cluster.
In this article, we had also covered the best practices to be followed while building a Hadoop Cluster. We had also seen many advantages of the Hadoop Cluster, including scalability, flexibility, cost-effectiveness, etc.
Any queries while working on Hadoop clusters?
Ask TechVidvan Experts.
and Keep Practicing!! | https://techvidvan.com/tutorials/hadoop-cluster/ | CC-MAIN-2020-16 | refinedweb | 2,054 | 56.05 |
Dragging and Placing Holograms With Unity
If you're into VR development, particularly using Unity, it's helpful to know how to interact with, drag, and place holograms you want without a lot of nervous twitching.
Join the DZone community and get the full member experience.Join For Free
I've been trying to create the effect of what you get in the – in the meantime good old – Holograms app: When you pull a hologram out of a menu, it ‘sticks to your gaze’ and follows it. You can air tap, and then it stays hanging in the air where you left it, but you can also put it on a floor, on a table, or next to a wall. You can’t push it through a surface. That is, most of the time. So, like this:
In the video, you can see that it follows the gaze cursor floating through the air until it hits a wall to the left and then stops, then goes down until it hits the bed, and then stops, then up again until I finally place it on the floor.
A New Year, a New Toolkit
As happens often in the bleeding edge of technology, things tend to change pretty fast. This is also the case in HoloLens country. I have taken the plunge to Unity 5.5 and the new HoloToolkit, which has a few big changes. Things have gotten way simpler since the previous iteration. Also, I would like to point out that for this tutorial, I used the latest patch release.
Setting Up the Initial Project
This is best illustrated by a picture. If you have set up the project, we basically only need this. Both Managers and HologramCollection are simply empty game objects meant to group stuff together. They don’t have any other specific function here. Drag and drop the four blue prefabs in the indicated places, then set some properties for the cube
The Cube is the thing that will be moved. Now it’s time for ‘some’ code.
The Main Actors
There are two scripts that play the leading role, with a few supporting roles.
- MoveByGaze
- IntialPlaceByTap
The first one makes an object move, the second one actually ends it. Apropos, the actual moving is done by our old friend iTween, whose usefulness and application was already described in part 5 of the AMS HoloATC series. So, you will need to include this in the project to prevent all kinds of nasty errors. Anyway, let’s get to the star of the show, MoveByGaze.
Moving With Gaze
It starts like this:
using UnityEngine; using HoloToolkit.Unity.InputModule; using HoloToolkit.Unity.SpatialMapping; namespace LocalJoost.HoloToolkitExtensions { public class MoveByGaze : MonoBehaviour { public float MaxDistance = 2f; public bool IsActive = true; public float DistanceTrigger = 0.2f; public BaseRayStabilizer Stabilizer = null; public BaseSpatialMappingCollisionDetector CollisonDetector; private float _startTime; private float _delay = 0.5f; private bool _isJustEnabled; private Vector3 _lastMoveToLocation; private bool _isBusy; private SpatialMappingManager MappingManager { get { return SpatialMappingManager.Instance; } } void OnEnable() { _isJustEnabled = true; } void Start() { _startTime = Time.time + _delay; _isJustEnabled = true; if (CollisonDetector == null) { CollisonDetector = gameObject.AddComponent<DefaultMappingCollisionDetector>(); } } } }
Up above are the settings:
- MaxDistance is the maximum distance from your head the behavior will try to place the object on a surface. Further than that, and it will just float in the air.
- IsActive determines whether the behavior is active (duh).
- DistanceTrigger is the distance your gaze has to be from the object you are moving before it actual starts to move. It kind of trails your gaze. This prevents the object from moving in a very nervous way.
- Stabilizer is the stabilizer made, used, and maintained by the InputManager. You will have to drag the InputManager from your scene on this field to use the stabilizer. It’s not mandatory, but it's highly recommended
- CollisionDetector is a class we will see later – it basically makes sure the object that you are dragging is not pushed through any surfaces. You will need to add a collision detector to the game object that you are dragging along – or maybe a game object that is part of the game object that you are dragging. That collision detector then needs to be dragged on this field by the MoveByGaze. This is not mandatory. If you don’t add one, the object you attach the MoveByGaze to will just simply follow your gaze and move right through any object. That’s the work of the DefaultMappingCollisionDetector, which is essentially a null pattern implementation.
Anyway, in the Update method all the work is done:
void Update() { if (!IsActive || _isBusy || _startTime > Time.time) return; _isBusy = true; var newPos = GetPostionInLookingDirection(); if ((newPos - _lastMoveToLocation).magnitude > DistanceTrigger || _isJustEnabled) { _isJustEnabled = false; var maxDelta = CollisonDetector.GetMaxDelta(newPos - transform.position); if (maxDelta != Vector3.zero) { newPos = transform.position + maxDelta; iTween.MoveTo(gameObject, iTween.Hash("position", newPos, "time", 2.0f * maxDelta.magnitude, "easetype", iTween.EaseType.easeInOutSine, "islocal", false, "oncomplete", "MovingDone", "oncompletetarget", gameObject)); _lastMoveToLocation = newPos; } else { _isBusy = false; } } else { _isBusy = false; } } private void MovingDone() { _isBusy = false; }
Only if the behavior is active, not busy, and the first half second is over we are doing anything at all. And the first thing is – telling the world we are busy indeed. This method, like all Updates, is called 60 times a second and we want to keep things a bit controlled here. Race conditions are annoying.
Then we get a position in the direction the user is looking, and if that exceeds the distance trigger – or this is the first time we are getting here – we start off finding how far ahead along this gaze we can place the actual object by using CollisionDetector. If that’s possible – that is, if the CollisionDetector does not find any obstacles — we can actually move the object using iTween. It's important to note that whenever the move is not possible, _isBusy immediately gets set to false. Also, note the fact that the smaller the distance, the faster the move. This is to make sure the final tweaks of setting the object in the right place don’t take a long time. Otherwise, _isBusy is only reset after a successful move.
Then the final pieces of this behavior:
private Vector3 GetPostionInLookingDirection() { RaycastHit hitInfo; var headReady = Stabilizer != null ? Stabilizer.StableRay : new Ray(Camera.main.transform.position, Camera.main.transform.forward); if (MappingManager != null && Physics.Raycast(headReady, out hitInfo, MaxDistance, MappingManager.LayerMask)) { return hitInfo.point; } return CalculatePositionDeadAhead(MaxDistance); } private Vector3 CalculatePositionDeadAhead(float distance) { return Stabilizer != null ? Stabilizer.StableRay.origin + Stabilizer.StableRay.direction.normalized * distance : Camera.main.transform.position + Camera.main.transform.forward.normalized * distance; }
GetPostionInLookingDirection first tries to get the direction in which you are looking. It tries to use the Stabilizer’s StableRay for that. The Stabilizer is a component of the InputManager that stabilizes your view – and the cursor uses it as well. This prevents the cursor from wobbling too much when you don’t keep your head perfectly still (which most people don’t – this includes me). The stabilizer takes an average movement of 60 samples, and that makes for a much less nervous-looking experience. If you don’t have a stabilizer defined, it just takes your actual looking direction – the camera’s position and your looking direction.
Then it tries to see if the resulting ray hits a wall or a floor – but no further than MaxDistance away. If it sees a hit, it returns this point, if it does not, if gives a point in the air MaxDistance away along an invisible ray coming out of your eyes. That’s what CalculatePositionDeadAhead does – but also trying to use the Stabilizer first to find the direction.
Detect Collisions
Okay, so what is this famous collision detector that prevents stuff from being pushed through walls and floors, using the spatial perception that makes the HoloLens such a unique device? It’s actually very simple, although it took me a while to actually get it this simple.
using UnityEngine; namespace LocalJoost.HoloToolkitExtensions { public class SpatialMappingCollisionDetector : BaseSpatialMappingCollisionDetector { public float MinDistance = 0.0f; private Rigidbody _rigidbody; void Start() { _rigidbody = GetComponent<Rigidbody>() ?? gameObject.AddComponent<Rigidbody>(); _rigidbody.isKinematic = true; _rigidbody.useGravity = false; } public override bool CheckIfCanMoveBy(Vector3 delta) { RaycastHit hitInfo; // Sweeptest wisdom from // return !_rigidbody.SweepTest(delta, out hitInfo, delta.magnitude); } public override Vector3 GetMaxDelta(Vector3 delta) { RaycastHit hitInfo; if(!_rigidbody.SweepTest(delta, out hitInfo, delta.magnitude)) { return KeepDistance(delta, hitInfo.point); ; } delta *= (hitInfo.distance / delta.magnitude); for (var i = 0; i <= 9; i += 3) { var dTest = delta / (i + 1); if (!_rigidbody.SweepTest(dTest, out hitInfo, dTest.magnitude)) { return KeepDistance(dTest, hitInfo.point); } } return Vector3.zero; } private Vector3 KeepDistance(Vector3 delta, Vector3 hitPoint) { var distanceVector = hitPoint - transform.position; return delta - (distanceVector.normalized * MinDistance); } } }
This behavior first tries to find a RigidBody and, failing that, adds it. We will need this to check the presence of anything ‘in the way’. But – this is important – we will set ‘isKinematic’ to true and ‘useGravity’ to false, or else our object will come under the control of the Unity physics engine and drop on the floor. In this case, we want to control the movement of the object.
So, this class has two public methods (its abstract base class demands that). One, CheckIfCanMoveBy (that we don’t use now), just says if you can move your object in the intended direction over the intended distance without hitting anything. The other essentially does the same, but if it finds something in the way, it also tries to find a distance over which you can move in the desired direction. For this, we use the SweepTest method of RigidBody. Essentially, you give it a vector, a distance along that vector, and it has an out variable that gives you info about a hit, should any occur. If a hit does occur, it tries at again at 1/4th, 1/7th, and 1/10th of that initially found distance. Failing everything, it returns a zero vector. By using this rough approach, an object moves quickly in a few steps until it can't any longer.
And then it also moves the object back over a distance you can set from the editor. This keeps the object just a little above the floor or from the wall, show that be desired. That’s what KeepDistance is for.
The whole point of having a base class BaseSpatialMappingCollisionDetector, by the way, is a) enabling null pattern implementation which as implemented by DefaultMappingCollisionDetector and b) making different collision detectors based upon different needs. It's a bit of architectural consideration within the sometimes-bewildering universe of Unity development.
Making It Stop: InitialPlaceByTap
Making the MoveByGaze stop is very simple – set the IsActive field to false. Now we only need something to actually make that happen. With the new HoloToolkit, this is actually very, very simple:
using UnityEngine; using HoloToolkit.Unity.InputModule; namespace LocalJoost.HoloToolkitExtensions { public class InitialPlaceByTap : MonoBehaviour, IInputClickHandler { protected AudioSource Sound; protected MoveByGaze GazeMover; void Start() { Sound = GetComponent<AudioSource>(); GazeMover = GetComponent<MoveByGaze>(); InputManager.Instance.PushFallbackInputHandler(gameObject); } public void OnInputClicked(InputEventData eventData) { if (!GazeMover.IsActive) { return; } if (Sound != null) { Sound.Play(); } GazeMover.IsActive = false; } } }
By implementing IInputClickHandler, the InputManager will send an event to this object when you air tap and it is selected by gaze. But by pushing it as the fallback handler, it will get this event also when it’s not selected. The event processing is pretty simple – if the GazeMover in this object is active, it’s de-activated. Also, if there’s an AudioSource detected, its sound is played. I very much recommend this kind of audio feedback.
Wiring It All Together
On your cube, you drag the MoveByGaze, SpatialMappingCollisionDetector, and InitialPlaceByTap scripts. Then you drag the cube itself again on the CollisionDetector field of MoveByGaze, and the InputManager on the Stabilizer field. Unity itself will select the right component.
So, in this case, I could also have used GetComponent<SpatialMappingCollisionDetector> instead of a field where you need to drag something on. But this way is more flexible – in-app, I did not want to use the whole object’s collider, but only that of a child object. Note that I have set the MinDistance for the SpatialMappingCollisionDetector for 1 cm – it will keep an extra centimeter distance from the wall or the floor.
Concluding Remarks
So this is how you can more or less replicate part of the behavior of the Holograms App, by moving objects around with your gaze and placing them on surfaces using air tap. The unique capabilities of the HoloLens allow us to place objects next to or on top of physical objects, and the new HoloToolkit makes using those capabilities pretty easy.
Full code, as per my MVP ‘trademark’, can be found here.
Published at DZone with permission of Joost van Schaik, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/dragging-and-placing-holograms | CC-MAIN-2022-27 | refinedweb | 2,131 | 56.96 |
Net::Peep::BC - Perl extension for Peep: The Network Auralizer
use Net::Peep::BC; my $bc = new Net::Peep::BC;
Net::Peep::BC is a library for Peep: The Network Auralizer.
None by default.
PROT_MAJORVER PROT_MINORVER PROT_BCSERVER PROT_BCCLIENT PROT_SERVERSTILLALIVE PROT_CLIENTSTILLALIVE PROT_CLIENTEVENT PROT_CLASSDELIM
%Leases - Deprecated %Servers - A hash the keys of which are the servers found by autodiscovery methods (i.e., methods in which clients and servers notify each other of their existence) and the values of which are anonymous hashes containing information about the server, including an expiration time after which if the client has not heard from the server, the server is deleted from the %Servers hash. %Defaults - Default values for options such as 'priority', 'volume', 'dither', 'sound'. $Alarmtime - The amount of time (in seconds) between when the alarm handler (see the handlealarm method) is set and the SIGALRM signal is sent.
Note that this section is somewhat incomplete. More documentation will come soon.
new($client,$conf,%options) - Net::Peep::BC constructor. $client is the name of the client; e.g., 'logparser' or 'sysmonitor' and $conf is a Net::Peep::Conf object. If an option is not specified in the %options argument, the equivalent value in the %Defaults class attributes is used. assemble_bc_packet() - Assembles the broadcast packet. Duh. logger() - Returns a Net::Peep::Log object used for log messages and debugging output. send() - Sends a packet including information on sound, location, priority, volume etc. to each server specified in the %Servers hash. sendout() - Used by send() to send the packet. handlealarm() - Refreshes and purges the server list. Schedules the next SIGALRM signal to be issued in another $Alarmtime seconds. updateserverlist() - Polls to see if any of the servers have sent alive broadcasts so that the server list can be updated. purgeserverlist() - Removes servers from the server list if they have not sent an alive broadcast within their given expiration time. addnewserver($server,$packet) - Adds the server $server based on information provided in the packet $packet. The server is only added if it does not exist in the %Servers hash. The server is pysically added by a call to the addserver method. addserver($server,$leasemin,$leasesec) - Adds the server $server. The server is expired $leasemin minutes and $leasesec seconds after being added if it has not sent an alive message in the meantime. Sends the server a client BC packet. updateserver($server,$packet) - Updates the expiration time for server $server. Sends the server a client still alive message.
initialize(%options) - Net::Peep::BC initializer. Called from the constructor. Performs the following actions: o Sets instance attributes via the %options argument o Loads configuration information from configuration file information passed in through the %options argument o Opens a socket and broadcasts an 'alive' message o Starts up the alarm. Every $Alarmtime seconds, the alarm handler updates the server list.
Michael Gilfix <mgilfix@eecs.tufts.edu> Copyright (C) 2000
Collin Starkweather <collin.starkweather@colorado.edu> Copyright (C) 2000
perl(1), peepd(1), Net::Peep::Dumb, Net::Peep::Log, Net::Peep::Parser, Net::Peep::Log.
You should have received a file COPYING containing license terms along with this program; if not, write to Michael Gilfix (mgilfix@eecs.tufts.edu) for a copy.
This version of Peep is open source; you can redistribute it and/or modify it under the terms listed in the file COPYING.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$Log: BC.pm,v $ Revision 1.10 2001/10/01 05:20:05 starky Hopefully the final commit before release 0.4.4. Tied up some minor issues, did some beautification of the log messages, added some comments, and made other minor changes.
Revision 1.9 2001/09/23 08:53:49.8 2001/08/08 20:17:57 starky Check in of code for the 0.4.3 client release. Includes modifications to allow for backwards-compatibility to Perl 5.00503 and a critical bug fix to the 0.4.2 version of Net::Peep::Conf.
Revision 1.7.6 2001/07/16 22:24:47 starky Fix for bug #439881: Volumes of 0 are now correctly identified.
Revision 1.5 2001/06/05 20:01:20 starky Corrected bug in which wrong type of broadcast constant was being sent by the client; i.e., the PROT_BCCLIENT was being sent when the PROT_CLIENTSTILLALIVE should have been. The clients and servers worked as expected despite the bug, so no changes in functionality will be apparent from the bug fix. It is, however, the right way to do things :-)
Revision 1.4 2001/06/04 08:37:27 starky Prep work for the 0.4.2 release. The wake-up for autodiscovery packets to be sent is now scheduled through Net::Peep::Scheduler. Also modified some docs in Net::Peep slightly.
Revision 1.3 2001/05/07 02:39:19 starky A variety of bug fixes and enhancements: o Fixed bug 421729: Now the --output flag should work as expected and the --logfile flag should not produce any unexpected behavior. o Documentation has been updated and improved, though more improvements and additions are pending. o Removed print STDERRs I'd accidentally left in the last commit. o Other miscellaneous and sundry bug fixes in anticipation of a 0.4.2 release.
Revision 1.2 2001/05/06 08:03:01 starky Bug 421791: Clients and servers tend to forget about each other in autodiscovery mode after a few hours. The client now sends out a "hello" packet each time it goes through a server update/purge cycle.
Revision 1.1 2001/04/23 10:13:19 starky Commit in preparation for release 0.4.1.
o Altered package namespace of Peep clients to Net::Peep at the suggestion of a CPAN administrator. o Changed Peep::Client::Log to Net::Peep::Client::Logparser and Peep::Client::System to Net::Peep::Client::Sysmonitor for clarity. o Made adjustments to documentation. o Fixed miscellaneous bugs.
Revision 1.10 2001/04/18 05:27:04 starky Fixed bug #416872: An extra "!" is tacked onto the identifier list before the client sends out its class identifier string.
Revision 1.9 2001/04/17 06:46:21 starky Hopefully the last commit before submission of the Peep client library to the CPAN. Among the changes:
o The clients have been modified somewhat to more elagantly clean up pidfiles in response to sigint and sigterm signals. o Minor changes have been made to the documentation. o The Peep::Client module searches through a host of directories in order to find peep.conf if it is not immediately found in /etc or provided on the command line. o The make test script conf.t was modified to provide output during the testing process. o Changes files and test.pl files were added to prevent specious complaints during the make process.
Revision 1.8 2001/04/04 05:37:11 starky Added some debugging and made other transparent changes.
Revision 1.7 2001/03/31 07:51:34 mgilfix
Last major commit before the 0.4.0 release. All of the newly rewritten clients and libraries are now working and are nicely formatted. The server installation has been changed a bit so now peep.conf is generated from the template file during a configure - which brings us closer to having a work-out-of-the-box system.
Revision 1.7 2001/03/31 02:17:00 mgilfix Made the final adjustments to for the 0.4.0 release so everything now works. Lots of changes here: autodiscovery works in every situation now (client up, server starts & vice-versa), clients now shutdown elegantly with a SIGTERM or SIGINT and remove their pidfiles upon exit, broadcast and server definitions in the class definitions is now parsed correctly, the client libraries now parse the events so they can translate from names to internal numbers. There's probably some other changes in there but many were made :) Also reformatted all of the code, so it uses consistent indentation.
Revision 1.6 2001/03/30 18:34:12 starky Adjusted documentation and made some modifications to Peep::BC to handle autodiscovery differently. This is the last commit before the 0.4.0 release.
Revision 1.5 2001/03/28 02:41:48 starky Created a new client called 'pinger' which pings a set of hosts to check whether they are alive. Made some adjustments to the client modules to accomodate the new client.
Also fixed some trivial pre-0.4.0-launch bugs.
Revision 1.4 2001/03/27 00:44:19 starky Completed work on rearchitecting the Peep client API, modified client code to be consistent with the new API, and added and tested the sysmonitor client, which replaces the uptime client.
This is the last major commit prior to launching the new client code, though the API or architecture may undergo some initial changes following launch in response to comments or suggestions from the user and developer base.
Revision 1.3 2001/03/19 07:47:37 starky Fixed bugs in autodiscovery/noautodiscovery. Now both are supported by Peep::BC and both look relatively bug free. Wahoo!
Revision 1.2 2001/03/18 17:17:46 starky Finally got LogParser (now called logparser) running smoothly.
Revision 1.1 2001/03/16 18:31:59 starky Initial commit of some very broken code which will eventually comprise a rearchitecting of the Peep client libraries; most importantly, the Perl modules.
A detailed e-mail regarding this commit will be posted to the Peep develop list (peep-develop@lists.sourceforge.net).
Contact me (Collin Starkweather) at
collin.starkweather@colorado.edu
or
collin.starkweather@collinstarkweather.com
with any questions. | http://search.cpan.org/~starky/Net-Peep-0.4.5.1/BC/BC.pm | CC-MAIN-2015-32 | refinedweb | 1,623 | 57.67 |
15.17.
ctypes — A foreign function library for Python
New in version 2.5.
ctypes is a foreign function library for Python. It provides C compatible
data types, and allows calling functions in DLLs or shared libraries. It can be
used to wrap these libraries in pure Python.
15.17.1. ctypes tutorial.
15.17.1.1. Loading dynamic link libraries. ...> >>>
15.17.1.2. Accessing functions from loaded dlls strings or unicode strings
respectively.
Sometimes, dlls export functions with names which aren’t valid Python
identifiers, like
getattr() to retrieve the function:
>>> getattr(cdll.msvcrt, "[email protected] >>>
15.17.1.3. Calling functions)("spam").
15.17.1.4. Fundamental data types
ctypes defines a number of primitive C compatible data types:
- The constructor accepts any object with a truth value.
value.
15.17.1.5. Calling functions, continued
Note that printf prints to the real standard output channel, not to
sys.stdout, so these examples will only work at the console prompt, not
from within IDLE or PythonWin:
>>> printf = libc.printf >>> printf("Hello, %s\n", "World!") Hello, World! 14 >>> printf("Hello, %S\n", u"World!") Hello, World!)) An int 1234, a double 3.140000 31 >>>
15.17.1.6. Calling functions with your own custom data types
available.
15.17.1.7. Specifying the required argument types (function prototypes)\n", "X", 2, 3) X 2 3.000000 13 >>>, its
_as_parameter_ attribute, or whatever you want to
pass as the C function argument in this case. Again, the result should be an
integer, string, unicode, a
ctypes instance, or an object with an
_as_parameter_ attribute.
15.17.1.8. Return types >>> def ValidHandle(value): ... if value == 0: ... raise WinError() ... return value ... >>> >>> GetModuleHandle.restype = ValidHandle >>> GetModuleHandle(None) 486539264 >>> GetModuleHandle("something silly").
15.17.1.9. Passing pointers (or: passing parameters by reference)' >>>
15.17.1.10. Structures and unions. A structure))
Field descriptors can be retrieved from the class, they are useful for debugging because they can provide useful information:
>>> print POINT.x <Field type=c_long, ofs=0, size=4> >>> print POINT.y <Field type=c_long, ofs=4, size=4> >>>
15.17.1.11. Structure/union alignment and byte order
By default, Structure and Union fields are aligned in the same way the C
compiler does it. It is possible to override this behavior.
15.17.1.12. Bit fields in structures and unions> >>>
15.17.1.13. Arrays
Arrays are sequences, containing a fixed number of instances of the same type.
The recommended way to create array types is by multiplying a data type with a positive integer:
TenPointsArrayType = POINT * 10
Here is an example of a somewhat artificial >>>
15.17.1.14. Pointers
invalid non-
NULL pointers would crash Python):
>>> null_ptr[0] Traceback (most recent call last): .... ValueError: NULL pointer access >>> >>> null_ptr[0] = 1234 Traceback (most recent call last): .... ValueError: NULL pointer access >>>
15.17.1.15. Type conversions >>> >>> >>>
15.17.1.16. Incomplete Types
Incomplete Types are structures, unions or arrays whose members are not yet specified. In C, they are specified by forward declarations, which are defined later:
struct cell; /* forward declaration */ struct cell { char *name; struct cell *next; }; >>>
15.17.1.17. Callback functions
ctypes allows creating efficiently, it is doing less comparisons:
>>> qsort(ia, len(ia), sizeof(c_int), cmp_func) is sorted now:
>>> for i in ia: print i, ....
15.17.1.18. Accessing values exported from dlls
Some shared libraries not only export functions, they also export known, it is only used for
testing. Try it out with
import __hello__ for example.
15.17.1.19. Surprises
There are some edge cases!
15.17.1.20. Variable-sized data types
ctypes provides some support for variable-sized arrays and structures..
15.17.2. ctypes reference
15.17.2.3. Foreign functions
As explained in the previous section, foreign functions can be accessed as attributes of loaded shared libraries. The function objects created in this way by default accept any number of arguments, accept any ctypes data instances as arguments, and return the default result type specified by the library loader. They are instances of a private class:
- class
ctypes.
_FuncPtr
Base class for C callable foreign functions.
Instances of foreign functions are also C compatible data types; they represent C function pointers.
This behavior can be customized by assigning to special attributes of the foreign function object.
restype
Assign a ctypes type to specify the result type of the foreign function. Use
Nonefor
void, a function not returning anything.
It is possible to assign a callable Python object that is not a ctypes type, in this case the function is assumed to return a C
int, and the callable will be called with this integer, allowing further processing or error checking. Using this is deprecated, for more flexible post processing or error checking use a ctypes data type as
restypeand assign a callable to the
errcheckattribute.
argtypes
Assign a tuple of ctypes types to specify the argument types that the function accepts. Functions using the
stdcallcalling convention can only be called with the same number of arguments as the length of this tuple; functions using the C calling convention accept additional, unspecified arguments as well.
When a foreign function is called, each actual argument is passed to the
from_param()class method of the items in the
argtypestuple, this method allows adapting the actual argument to an object that the foreign function accepts. For example, a
c_char_pitem in the
argtypestuple will convert a unicode string passed as argument into a byte string using ctypes conversion rules.
New: It is now possible to put items in argtypes which are not ctypes types, but each item must have a
from_param()method which returns a value usable as argument (integer, string, ctypes instance). This allows defining adapters that can adapt custom objects as function parameters.
errcheck
Assign a Python function or another callable to this attribute. The callable will be called with three or more arguments:
callable(result, func, arguments)
result is what the foreign function returns, as specified by the
restypeattribute.
func is the foreign function object itself, this allows reusing the same callable object to check or post process the results of several functions.
arguments is a tuple containing the parameters originally passed to the function call, this allows specializing the behavior on the arguments used.
The object that this function returns will be returned from the foreign function call, but it can also check the result value and raise an exception if the foreign function call failed.
- exception
ctypes.
ArgumentError
This exception is raised when a foreign function call cannot convert one of the passed arguments.
15.17.2.4. Function prototypes
Foreign functions can also be created by instantiating function prototypes. Function prototypes are similar to function prototypes in C; they describe a function (return type, argument types, calling convention) without defining an implementation. The factory functions must be called with the desired result type and the argument types of the function.
ctypes.
CFUNCTYPE(restype, *argtypes, use_errno=False, use_last_error=False)
The returned function prototype creates functions that use the standard C calling convention. The function will release the GIL during the call. If use_errno is set to true, the ctypes private copy of the system
errnovariable is exchanged with the real
errnovalue before and after the call; use_last_error does the same for the Windows error code.
Changed in version 2.6: The optional use_errno and use_last_error parameters were added.
ctypes.
WINFUNCTYPE(restype, *argtypes, use_errno=False, use_last_error=False)
Windows only: The returned function prototype creates functions that use the
stdcallcalling convention, except on Windows CE where
WINFUNCTYPE()is the same as
CFUNCTYPE(). The function will release the GIL during the call. use_errno and use_last_error have the same meaning as above.
ctypes.
PYFUNCTYPE(restype, *argtypes) integer. name is name of the COM method. iid is an optional pointer to the interface identifier which is used in extended error reporting.
COM methods use a special calling convention: They require a pointer to the COM interface as first argument, in addition to those parameters that are specified in the
argtypestuple.
The optional paramflags parameter creates foreign function wrappers with much more functionality than the features described above.
paramflags must be a tuple of the same length as
argtypes.
Each item in this tuple contains further information about a parameter, it must be a tuple containing one, two, or three items.
The first item is an integer containing a combination of direction flags for the parameter:
- 1
- Specifies an input parameter to the function.
- 2
- Output parameter. The foreign function fills in a value.
- 4
- Input parameter which defaults to the integer zero.
The optional second item is the parameter name as string. If this is specified, the foreign function can be called with named parameters.
The optional third item is the default value for this parameter.
This example demonstrates how to wrap the Windows
MessageBoxA function so
that it supports default parameters and named arguments. The C declaration from
the windows header file is this:
WINUSERAPI int WINAPI MessageBoxA( HWND hWnd, LPCSTR lpText, LPCSTR lpCaption, UINT uType);
Here is the wrapping with
ctypes:
>>> from ctypes import c_int, WINFUNCTYPE, windll >>> from ctypes.wintypes import HWND, LPCSTR, UINT >>> prototype = WINFUNCTYPE(c_int, HWND, LPCSTR, LPCSTR, UINT) >>> paramflags = (1, "hwnd", 0), (1, "text", "Hi"), (1, "caption", None), (1, "flags", 0) >>> MessageBox = prototype(("MessageBoxA", windll.user32), paramflags) >>>
The MessageBox foreign function can now be called in these ways:
>>> MessageBox() >>> MessageBox(text="Spam, spam, spam") >>> MessageBox(flags=2, text="foo bar") >>>
A second example demonstrates output parameters. The win32
GetWindowRect
function retrieves the dimensions of a specified window by copying them into
RECT structure that the caller has to supply. Here is the C declaration:
WINUSERAPI BOOL WINAPI GetWindowRect( HWND hWnd, LPRECT lpRect);
Here is the wrapping with
ctypes:
>>> from ctypes import POINTER, WINFUNCTYPE, windll, WinError >>> from ctypes.wintypes import BOOL, HWND, RECT >>> prototype = WINFUNCTYPE(BOOL, HWND, POINTER(RECT)) >>> paramflags = (1, "hwnd"), (2, "lprect") >>> GetWindowRect = prototype(("GetWindowRect", windll.user32), paramflags) >>>
Functions with output parameters will automatically return the output parameter value if there is a single one, or a tuple containing the output parameter values when there are more than one, so the GetWindowRect function now returns a RECT instance, when called.
Output parameters can be combined with the
errcheck protocol to do
further output processing and error checking. The win32
GetWindowRect api
function returns a
BOOL to signal success or failure, so this function could
do the error checking, and raises an exception when the api call failed:
>>> def errcheck(result, func, args): ... if not result: ... raise WinError() ... return args ... >>> GetWindowRect.errcheck = errcheck >>>
If the
errcheck function returns the argument tuple it receives
unchanged,
ctypes continues the normal processing it does on the output
parameters. If you want to return a tuple of window coordinates instead of a
RECT instance, you can retrieve the fields in the function and return them
instead, the normal processing will no longer take place:
>>> def errcheck(result, func, args): ... if not result: ... raise WinError() ... rc = args[1] ... return rc.left, rc.top, rc.bottom, rc.right ... >>> GetWindowRect.errcheck = errcheck >>>
15.17.2.5. Utility functions
ctypes.
addressof(obj)
Returns the address of the memory buffer as integer. obj must be an instance of a ctypes type.
ctypes.
alignment(obj_or_type)
Returns the alignment requirements of a ctypes type. obj_or_type must be a ctypes type or instance.
ctypes.
byref(obj[, offset]).
New in version 2.6: The offset optional argument was added.
ctypes.
cast(obj, type)
This function is similar to the cast operator in C. It returns a new instance of type which points to the same memory block as obj. type must be a pointer type, and obj must be an object that can be interpreted as a pointer.
ctypes.
create_string_buffer(init_or_size[, size])
This function creates a mutable character buffer. The returned object is a ctypes array of
c_char.
init_or_size must be an integer which specifies the size of the array, or a string which will be used to initialize the array items.
If a a unicode string, it is converted into an 8-bit string according to ctypes conversion rules.
ctypes.
create_unicode_buffer(init_or_size[, size])
This function creates a mutable unicode character buffer. The returned object is a ctypes array of
c_wchar.
init_or_size must be an integer which specifies the size of the array, or a unicode string which will be used to initialize the array items.
If a unicode an 8-bit string, it is converted into a unicode string according to ctypes conversion rules.
ctypes.
DllCanUnloadNow()
Windows only: This function is a hook which allows implementing in-process COM servers with ctypes. It is called from the DllCanUnloadNow function that the _ctypes extension dll exports.
ctypes.
DllGetClassObject()
Windows only: This function is a hook which allows implementing in-process COM servers with ctypes. It is called from the DllGetClassObject function that the
_ctypesextension dll exports.
ctypes.util.
find_library(name)
Try to find a library and return a pathname. name is the library name without any prefix like
lib, suffix like
.so,
.dylibor version number (this is the form used for the posix linker option
-l). If no library can be found, returns
None.
The exact functionality is system dependent.
Changed in version 2.6: Windows only:
find_library("m")or
find_library("c")return the result of a call to
find_msvcrt().
ctypes.util.
find_msvcrt()
Windows only: return the filename of the VC runtime library used by Python, and by the extension modules. If the name of the library cannot be determined,
Noneis returned.
If you need to free memory, for example, allocated by an extension module with a call to the
free(void *), it is important that you use the function in the same library that allocated the memory.
New in version 2.6.
ctypes.
FormatError([code])
Windows only: Returns a textual description of the error code code. If no error code is specified, the last error code is used by calling the Windows api function GetLastError.
ctypes.
GetLastError()
Windows only: Returns the last error code set by Windows in the calling thread. This function calls the Windows GetLastError() function directly, it does not return the ctypes-private copy of the error code.
ctypes.
get_errno()
Returns the current value of the ctypes-private copy of the system
errnovariable in the calling thread.
New in version 2.6.
ctypes.
get_last_error()
Windows only: returns the current value of the ctypes-private copy of the system
LastErrorvariable in the calling thread.
New in version 2.6.
ctypes.
memmove(dst, src, count)
Same as the standard C memmove library function: copies count bytes from src to dst. dst and src must be integers or ctypes instances that can be converted to pointers.
ctypes.
memset(dst, c, count)
Same as the standard C memset library function: fills the memory block at address dst with count bytes of value c. dst must be an integer specifying an address, or a ctypes instance.
ctypes.
POINTER(type)
This factory function creates and returns a new ctypes pointer type. Pointer types are cached and reused internally, so calling this function repeatedly is cheap. type must be a ctypes type.
ctypes.
pointer(obj)
This function creates a new pointer instance, pointing to obj. The returned object is of the type
POINTER(type(obj)).
Note: If you just want to pass a pointer to an object to a foreign function call, you should use
byref(obj)which is much faster.
ctypes.
resize(obj, size).
ctypes.
set_conversion_mode(encoding, errors)
This function sets the rules that ctypes objects use when converting between 8-bit strings and unicode strings. encoding must be a string specifying an encoding, like
'utf-8'or
'mbcs', errors must be a string specifying the error handling on encoding/decoding errors. Examples of possible values are
"strict",
"replace", or
"ignore".
set_conversion_mode()returns a 2-tuple containing the previous conversion rules. On windows, the initial conversion rules are
('mbcs', 'ignore'), on other systems
('ascii', 'strict').
ctypes.
set_errno(value)
Set the current value of the ctypes-private copy of the system
errnovariable in the calling thread to value and return the previous value.
New in version 2.6.
ctypes.
set_last_error(value)
Windows only: set the current value of the ctypes-private copy of the system
LastErrorvariable in the calling thread to value and return the previous value.
New in version 2.6.
ctypes.
sizeof(obj_or_type)
Returns the size in bytes of a ctypes type or instance memory buffer. Does the same as the C
sizeofoperator.
ctypes.
string_at(address[, size])
This function returns the string starting at memory address address. If size is specified, it is used as size, otherwise the string is assumed to be zero-terminated.
ctypes.
WinError(code=None, descr=None)
Windows only: this function is probably the worst-named thing in ctypes. It creates an instance of WindowsError. If code is not specified,
GetLastErroris called to determine the error code. If
descris not specified,
FormatError()is called to get a textual description of the error.
ctypes.
wstring_at(address[, size])
This function returns the wide character string starting at memory address address as unicode string. If size is specified, it is used as the number of characters of the string, otherwise the string is assumed to be zero-terminated.
15.17.2.6. Data types
- class
ctypes.
_CData
This non-public class is the common base class of all ctypes data types. Among other things, all ctypes type instances contain a memory block that hold C compatible data; the address of the memory block is returned by the
addressof()helper function. Another instance variable is exposed as
_objects; this contains other Python objects that need to be kept alive in case the memory block contains pointers.
Common methods of ctypes data types, these are all class methods (to be exact, they are methods of the metaclass):
from_buffer(source[, offset])is raised.
New in version 2.6.
from_buffer_copy(source[, offset])
This method creates a ctypes instance, copying the buffer from the source object buffer which must be readable. The optional offset parameter specifies an offset into the source buffer in bytes; the default is zero. If the source buffer is not large enough a
ValueErroris raised.
New in version 2.6.
from_address(address)
This method returns a ctypes type instance using the memory specified by address which must be an integer.
from_param(obj)
This method adapts obj to a ctypes type. It is called with the actual object used in a foreign function call when the type is present in the foreign function’s
argtypestuple; it must return an object that can be used as a function call parameter.
All ctypes data types have a default implementation of this classmethod that normally returns obj if that is an instance of the type. Some types accept other objects as well.
in_dll(library, name)
This method returns a ctypes type instance exported by a shared library. name is the name of the symbol that exports the data, library is the loaded shared library.
Common instance variables of ctypes data types:
_b_base_
Sometimes ctypes data instances do not own the memory block they contain, instead they share part of the memory block of a base object. The
_b_base_read-only member is the root ctypes object that owns the memory block.
_b_needsfree_
This read-only variable is true when the ctypes data instance has allocated the memory block itself, false otherwise.
_objects
This member is either
Noneor a dictionary containing Python objects that need to be kept alive so that the memory block contents is kept valid. This object is only exposed for debugging; never modify the contents of this dictionary.
15.17.2.7. Fundamental data types
- class
ctypes.
_SimpleCData
This non-public class is the base class of all fundamental ctypes data types. It is mentioned here because it contains the common attributes of the fundamental ctypes data types.
_SimpleCDatais a subclass of
_CData, so it inherits their methods and attributes.
Changed in version 2.6: ctypes data types that are not and do not contain pointers can now be pickled.
Instances have a single attribute:
value
This attribute contains the actual value of the instance. For integer and pointer types, it is an integer, for character types, it is a single character string, for character pointer types it is a Python string or unicode string.
When the
valueattribute is retrieved from a ctypes instance, usually a new object is returned each time.
ctypesdoes not implement original object return, always a new object is constructed. The same is true for all other ctypes object instances.
Fundamental.:
- class
ctypes.
c_byte
Represents the C
signed chardatatype, and interprets the value as small integer. The constructor accepts an optional integer initializer; no overflow checking is done.
- class
ctypes.
c_char
Represents the C
chardatatype, and interprets the value as a single character. The constructor accepts an optional string initializer, the length of the string must be exactly one character.
- class
ctypes.
c_char_p
Represents the C
char *datatype when it points to a zero-terminated string. For a general character pointer that may also point to binary data,
POINTER(c_char)must be used. The constructor accepts an integer address, or a string.
- class
ctypes.
c_double
Represents the C
doubledatatype. The constructor accepts an optional float initializer.
- class
ctypes.
c_longdouble
Represents the C
long doubledatatype. The constructor accepts an optional float initializer. On platforms where
sizeof(long double) == sizeof(double)it is an alias to
c_double.
New in version 2.6.
- class
ctypes.
c_float
Represents the C
floatdatatype. The constructor accepts an optional float initializer.
- class
ctypes.
c_int
Represents the C
signed intdatatype. The constructor accepts an optional integer initializer; no overflow checking is done. On platforms where
sizeof(int) == sizeof(long)it is an alias to
c_long.
- class
ctypes.
c_int64
Represents the C 64-bit
signed intdatatype. Usually an alias for
c_longlong.
- class
ctypes.
c_long
Represents the C
signed longdatatype. The constructor accepts an optional integer initializer; no overflow checking is done.
- class
ctypes.
c_longlong
Represents the C
signed long longdatatype. The constructor accepts an optional integer initializer; no overflow checking is done.
- class
ctypes.
c_short
Represents the C
signed shortdatatype. The constructor accepts an optional integer initializer; no overflow checking is done.
- class
ctypes.
c_size_t
Represents the C
size_tdatatype.
- class
ctypes.
c_ssize_t
Represents the C
ssize_tdatatype.
New in version 2.7.
- class
ctypes.
c_ubyte
Represents the C
unsigned chardatatype, it interprets the value as small integer. The constructor accepts an optional integer initializer; no overflow checking is done.
- class
ctypes.
c_uint
Represents the C
unsigned intdatatype. The constructor accepts an optional integer initializer; no overflow checking is done. On platforms where
sizeof(int) == sizeof(long)it is an alias for
c_ulong.
- class
ctypes.
c_uint64
Represents the C 64-bit
unsigned intdatatype. Usually an alias for
c_ulonglong.
- class
ctypes.
c_ulong
Represents the C
unsigned longdatatype. The constructor accepts an optional integer initializer; no overflow checking is done.
- class
ctypes.
c_ulonglong
Represents the C
unsigned long longdatatype. The constructor accepts an optional integer initializer; no overflow checking is done.
- class
ctypes.
c_ushort
Represents the C
unsigned shortdatatype. The constructor accepts an optional integer initializer; no overflow checking is done.
- class
ctypes.
c_void_p
Represents the C
void *type. The value is represented as integer. The constructor accepts an optional integer initializer.
- class
ctypes.
c_wchar
Represents the C
wchar_tdatatype, and interprets the value as a single character unicode string. The constructor accepts an optional string initializer, the length of the string must be exactly one character.
- class
ctypes.
c_wchar_p
Represents the C
wchar_t *datatype, which must be a pointer to a zero-terminated wide character string. The constructor accepts an integer address, or a string.
- class
ctypes.
c_bool
Represent the C
booldatatype (more accurately,
_Boolfrom C99). Its value can be
Trueor
False, and the constructor accepts any object that has a truth value.
New in version 2.6.
- class
ctypes.
HRESULT
Windows only: Represents a
HRESULTvalue, which contains success or error information for a function or method call.
- class
ctypes.
py_object.
15.17.2.8. Structured data types
- class
ctypes.
Union(*args, **kw)
Abstract base class for unions in native byte order.
- class
ctypes.
BigEndianStructure(*args, **kw)
Abstract base class for structures in big endian byte order.
- class
ctypes.
LittleEndianStructure(*args, **kw)
Abstract base class for structures in little endian byte order.
Structures with non-native byte order cannot contain pointer type fields, or any other data types containing pointer type fields.
- class
ctypes.
Structure(*args, **kw)
Abstract base class for structures in native byte order.
Concrete structure and union types must be created by subclassing one of these types, and at least define a
_fields_class variable.
ctypeswill create descriptors which allow reading and writing the fields by direct attribute accesses. These are the
_fields_
A sequence defining the structure fields. The items must be 2-tuples or 3-tuples. The first item is the name of the field, the second item specifies the type of the field; it can be any ctypes data type.
For integer type fields like
c_int, a third optional item can be given. It must be a small positive integer defining the bit width of the field.
Field names must be unique within one structure or union. This is not checked, only one field can be accessed when names are repeated.
It is possible to define the
_fields_class variable after the class statement that defines the Structure subclass, this allows creating data types that directly or indirectly reference themselves:
class List(Structure): pass List._fields_ = [("pnext", POINTER(List)), ... ]
The
_fields_class variable must, however, be defined before the type is first used (an instance is created,
sizeof()is called on it, and so on). Later assignments to the
_fields_class variable will raise an AttributeError.
It is possible to defined sub-subclasses of structure types, they inherit the fields of the base class plus the
_fields_defined in the sub-subclass, if any.
_pack_
An optional small integer that allows overriding the alignment of structure fields in the instance.
_pack_must already be defined when
_fields_is assigned, otherwise it will have no effect.
_anonymous_
An optional sequence that lists the names of unnamed (anonymous) fields.
_anonymous_must be already defined when
_fields_is assigned, otherwise it will have no effect.
The fields listed in this variable must be structure or union type fields.
ctypeswill create descriptors in the structure type that allow accessing the nested fields directly, without the need to create the structure or union field.
Here is an example type (Windows):
class _U(Union): _fields_ = [("lptdesc", POINTER(TYPEDESC)), ("lpadesc", POINTER(ARRAYDESC)), ("hreftype", HREFTYPE)] class TYPEDESC(Structure): _anonymous_ = ("u",) _fields_ = [("u", _U), ("vt", VARTYPE)]
The
TYPEDESCstructure describes a COM data type, the
vtfield specifies which one of the union fields is valid. Since the
ufield is defined as anonymous field, it is now possible to access the members directly off the TYPEDESC instance.
td.lptdescand
td.u.lptdescare equivalent, but the former is faster since it does not need to create a temporary union instance:
td = TYPEDESC() td.vt = VT_PTR td.lptdesc = POINTER(some_type) td.u.lptdesc = POINTER(some_type)
It is possible to defined sub-subclasses of structures, they inherit the fields of the base class. If the subclass definition has a separate
_fields_variable, the fields specified in this are appended to the fields of the base class.
Structure and union constructors accept both positional and keyword arguments. Positional arguments are used to initialize member fields in the same order as they are appear in
_fields_. Keyword arguments in the constructor are interpreted as attribute assignments, so they will initialize
_fields_with the same name, or create new attributes for names not present in
_fields_.
15.17.2.9. Arrays and pointers
- class
ctypes.
Array(*args)
Abstract base class for arrays.
The recommended way to create concrete array types is by multiplying any
ctypesdata type with a positive integer. Alternatively, you can subclass this type and define
_length_and
_type_class variables. Array elements can be read and written using standard subscript and slice accesses; for slice reads, the resulting object is not itself an
Array.
_length_
A positive integer specifying the number of elements in the array. Out-of-range subscripts result in an
IndexError. Will be returned by
len().
_type_
Specifies the type of each element in the array.
Array subclass constructors accept positional arguments, used to initialize the elements in order.
- class
ctypes.
_Pointer
Private, abstract base class for pointers.
Concrete pointer types are created by calling
POINTER()with the type that will be pointed to; this is done automatically by
pointer().
If a pointer points to an array, its elements can be read and written using standard subscript and slice accesses. Pointer objects have no size, so
len()will raise
TypeError. Negative subscripts will read from the memory before the pointer (as in C), and out-of-range subscripts will probably crash with an access violation (if you’re lucky).
_type_
Specifies the type pointed to.
contents
Returns the object to which to pointer points. Assigning to this attribute changes the pointer to point to the assigned object. | https://documentation.help/Python-2.7.13/ctypes.html | CC-MAIN-2020-10 | refinedweb | 4,820 | 56.86 |
Computer Vision and programming go hand in hand. One needs to use programming to materialize the theory so it can be applied to real world problems. Computer Vision is an exciting field where we try to make sense of images. These images could be static or could be retrieved from videos. Making sense could be things like tracking an object, modeling the background, pattern recognition etc. This article is the first of a series of articles that will using C# to educate users in Computer Vision. Being the first article, I intend to introduce some basic concepts used in Computer Vision. I will refer back to these concepts in upcoming articles where I will implement a few state-of-the-art algorithms in Computer Vision, covering areas such as object tracking, background modeling, patter recognition etc.
An image is composed of many dots called Pixels (Picture Elements). More the pixels, higher the resolution of the image. When an image is grabbed by the camera, it is often in RGB (Red Green Blue) format. RGB is one of many colour spaces used in Computer Vision. Other colour spaces include HSV, Lab, XYZ, YIQ etc. RGB is an additive colour space where we get different colours by mixing red, green, and blue values. In a 24-bit RGB image, the individual values of R, G, and B components range from 0 - 255. A 24-bit RGB image can represent 224 different colours, i.e., 16 million. OK, going back to the image, we humans see objects in there, where as the computer sees pixels having RGB values ranging from 0 - 255. So, there is an obvious need to build some kind of intelligence into computers so we can make them make sense of images.
If you want to pursue a career in Computer Vision, you have to understand one thing: Statistics & Probability! Normally, statistics would be used in creating a model, and probability would be used in making sense of the model. So, moving forward, I will try to explain some fundamentals required to understand an image so Computer Vision techniques can be applied to it.
The very first step in modeling an image is to pick an attribute to be modeled. It does not have to be a single attribute - you would normally use a combination of attributes to make your algorithm robust. Some of the primary attributes include edges, colour etc. The attributes are chosen such that they are unique. But in reality, that is not the case. For example, if using colour, many images would share the same colour distribution. So, we need to find an attribute or a combination of attributes that provides a greater degree of uniqueness.
Once an attribute is chosen, the next step is to model it. There are many models available in Computer Vision - each with its pros and cons. But in this article, I will concentrate on histogram. The reason for selecting histogram is because it is very popular in Computer Vision, plus it forms the foundation for articles coming up. From elementary statistics, we know that a histogram is nothing more than a frequency distribution. Hence, a colour histogram is the frequency of different colours in the image.
Using normalisation, we can add scale invariance to a histogram. What that means is that the same object with different scales will have identical histograms. Normalisation is achieved by dividing the value of each bin by the total value of the bins.
To create a colour histogram, we first need to decide on the number of bins of the histogram. Generally speaking, the more bins you have, the more discriminatory power you get. But then, the flip side is that you need more computational resources. The second decision you need to make is how to implement this colour histogram. Remember that you normally would have three colour components, such as Red, Green, and Blue. A popular approach is either to use a 3D array or a single array. Using a 3D array is straightforward, but using a single array for three components require some thought. In the end, it is a matter of liking - I prefer the latter approach.
For a 16x16x16 bin histogram, we have 256/16 = 16 colour components per bin. So, we define a 3D array something like this:
// Declare a 3 dimensional histogram
[,,],float histogram = new float [16, 16, 16];
As an example, if the pixel's RGB value is 13, 232, and 211, then this means you are dealing with RGB bins 0, 14, and 13. These bin numbers are obtained by dividing the colour values by the number of bins - 16, in our case. There you have to increment the histogram [0, 14, 15] by 1. If we do that for all the pixels in an image, we would end up with the colour histogram of the image which tells us about the colour distribution in the image.
For a 16x16x16 bin histogram, we declare a 1D array like this:
// Declare a 1 dimensional histogram
[]]float histogram = new float [16 * 16 * 16];
In order to use a 1D array, we need to define an indexing scheme so we can add and retrieve the values of the bins. The indexing method is given in the code below:
private int GetSingleBinIndex(int binCount1, int binCount2, int binCount3, BGRA* pixel)
{
int idx = 0;
//find the index
int i1 = GetBinIndex(binCount1, (float)pixel->red, 255);
int i2 = GetBinIndex(binCount2, (float)pixel->green, 255);
int i3 = GetBinIndex(binCount3, (float)pixel->blue, 255);
idx = i1 + i2 * binCount1 + i3 * binCount1 * binCount2;
return idx;
}
Again, if the pixel's RGB value is 13, 232, and 211, then this means you are dealing with RGB bins 0, 14, and 13. This points to an index of 0 + 14 x 16 + 13 x 16 x 16 = 3552 in a 1D array. To create a histogram, you will increment the value of this bin by 1.
Once we have represented an image attribute as a histogram, we often need to perform recognition. So, we can have a source histogram and a candidate histogram, and match the histogram to see how closely the candidate object resembles the source object. There are many techniques available, such as Bhattacharyya Coefficient, Earth Movers Distance, Chi Squared, Euclidean Distance etc. In this article, I will describe the Bhattacharyya Coefficient. You can implement your own matching technique bearing in mind that each matching technique has its pros and cons.
The Bhattacharyya Coefficient works on normalised histograms with an identical number of bins. Given two histograms with p and q, the Bhattacharyya Coefficient is given as:
If you are like me and get discouraged by mathematical equations, then, don't worry: I have a worked example for you! Considering the following two histograms, the calculation of Bhattacharyya Coefficient is shown below:
As we can see, it requires us to multiply the bins. Furthermore, we can see that for identical histograms, the coefficient will be 1. The values of Bhattacharyya Coefficient ranges from 0 to 1, i.e., from least similar to exact match.
Native image processing in .NET is slow! Using a Bitmap object with GetPixel() and SetPixel() methods is not the way to do image processing in .NET. We need to access the pixel data by using unsafe processing. I have used the code by Eric Gunnerson.
Bitmap
GetPixel()
SetPixel()
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
for (int y = 0; y < size.Y; y++)
{
pPixel = fastBitmap[0, y];
for (int x = 0; x < size.X; x++)
{
//get the bin index for the current pixel colour
idx = GetSingleBinIndex(numBinsCh1, numBinsCh2, numBinsCh3, pPixel);
hist.Data[idx] += 1;
total += 1;
//increment the pointer
pPixel++;
}
}
the individual values of R, G, and B components range from 0 - 255
return (BGRA*)(pBase + y * width + x * sizeof(BGRA));
int idx = (int)(colourValue * (float)binCount / maxValue);
private int getBinIndex(int binCount, int colourValue) {
int idx = colourValue / binCount;
if (idx >= binCount)
idx = binCount - 1;
return idx;
}
private int GetBinIndex(int binCount, float colourValue, float maxValue)
{
int idx = (int)(colourValue * (float)binCount / maxValue);
if (idx >= binCount)
idx = binCount - 1;
return idx;
}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/35463/Computer-Vision-Applications-with-C-Part-I?msg=4449649&PageFlow=FixedWidth | CC-MAIN-2017-34 | refinedweb | 1,390 | 61.46 |
Верстак электронных таблиц
introduced in version 0.15Верстак электронных таблиц позволяет создать и редактировать электронные таблицы, выполнять вычисления, а так же получать данные из модели и экспортировать данные в другие приложения текстовых таблиц, такие как LibreOffice или Microsoft Excel.
The Spreadsheet Workbench has been available since FreeCAD 0.15.
Cell Expressions
A spreadsheet cell may contain arbitrary text or an expression. Technically, expressions must start with an equals '=' sign. However, the spreadsheet attempts to be intelligent; if you enter what looks like an expression without the leading '=', one will be added automatically.
Cell expressions may contain numbers, functions, references to other cells, and references to properties of the model (But see Current Limitations below). Cells are referenced by their column (CAPITAL letter) and row (number). A cell may also be referenced by its alias-name (below). Example: B4 + A (see Expressions).
Interaction between Spreadsheets and the CAD Model
Data in the cells of a spreadsheet may be used in CAD model parameter expressions. Thus, a spreadsheet may be used as the source for parameter values used throughout a model, effectively gathering the values in one place. When values are changed in the spreadsheet, they are propagated throughout the model.
Similarly, properties from CAD model objects may be used in expressions in spreadsheet cells. This allows use of object properties like volume or area in the spreadsheet. If the name of an object in the CAD model is changed, the change will automatically be propagated to any references in spreadsheet expressions using the name which was changed.
More than one spreadsheet may be used in a document; spreadsheets may be given a user-assigned name (rename) like any other object.
FreeCAD checks for cyclic dependencies. See Current Limitations.
Cell Properties
The properties of a spreadsheet cell can be edited with a right-click on a cell. The following dialog pops up:
As indicated by the tabs, the following properties can be changed:
- Color: Text color and background color
- Alignment: Text horizontal and vertical alignment
- Style: Text style: bold, italic, underline
- Units: Display units for this cell. Please read the Units section below.
- Alias: Define an alias
References To CAD-Data
As indicated above, one can reference data from the CAD model in spreadsheet expressions.
Computed expressions in spreadsheet cells start with an equals ('=') sign. However, the spreadsheet entry mechanism attempts to be smart. An expression may be entered without the leading '='; if the string entered is a valid expression, an '=' is automatically added when the final Enter is typed. If the string entered is not a valid expression (often the result of entering something with the wrong case, e.g. "MyCube.length" instead of "MyCube.Length"), no leading '=' is added and it is treated as simply a text string.
Note: The above behavior (auto insert of '=') has some unpleasant ramifications:
- If you want to keep a column of names corresponding to the alias-names in an adjacent column of values, you must enter the name in the label column before giving the cell in the value column its alias-name. Otherwise, when you enter the alias-name in the label column the spreadsheet will assume it is an expression and change it to "=<alias-name>"; and the displayed text will be the value from the <alias-name> cell.
- If you make an error when entering the name in the label column and wish to correct it, you cannot simply change it to the alias-name. Instead, you must first change the alias-name to something else, then fix the text name in the label column, then change the alias-name in the value column back to its original.
One way to side-step these issues is to prefix text labels corresponding to alias-names with a fixed string, thereby making them different. Note that "_" will not work, as it is converted to "=". However, a blank, while invisible, will work.
The following table shows some examples assuming the model has a feature named "MyCube":
Spreadsheet Data in Expressions.
Units
The Spreadsheet has a notion of dimension (units) associated with cell values. A number entered without an associated unit has no dimension. The unit should be entered immediately following the number value, with no intervening space. If a number has an associated unit, that unit will be used in all calculations. For example, the multiplication of two lengths with the unit mm gives an area with the unit mm².
If a cell contains a value which represents a dimension, it should be entered with its associated unit. While in many simple cases one can get by with a dimensionless value, it is unwise to not enter the unit. If a value representing a dimension is entered without its associated unit, there are some sequences of operations which cause FreeCAD to complain of incompatible units in an expression when it appears the expression should be valid. (This may be better understood by viewing this thread in the FreeCAD forums.)
You can change the units displayed for a cell value using the properties dialog units tab (above). This does not change the value contained in the cell; it only converts the existing value for display. The value used for calculations does not change, and the results of formulas using the value do not change. For example, a cell containing the value "5.08cm" can be displayed as "2in" by changing the units tab value to "in".
A dimensionless number cannot be changed to a number with a unit by the cell properties dialog. One can put in a unit string, and that string will be displayed; but the cell still contains a dimensionless number. In order to change a dimensionless value to a value with a dimension, the value itself must be re-entered with its associated unit.
Occasionally it may be desirable to get rid of a dimension in an expression. This can be done by multiplying.
Current Limitations
FreeCAD checks for cyclic dependencies. By design, that check stops at the level of the spreadsheet object. As a consequence, you should not have a spreadsheet which contains both cells whose values are used to specify parameters to the model, and cells whose values use output from the model. For example, you cannot have cells specifying the length, width, and height of an object, and another cell which references the total volume of the resulting shape. This restriction can be surmounted by having two spreadsheets: one used as a data-source for input parameters to the model and the other used for calculations based on resultant geometry-data.
When cells are copied, only the content (expression/value) is copied. The Cell Properties described above are not copied.
For earlier versions see Spreadsheet legacy.
Scripting Basics
import Spreadsheet sheet = App.ActiveDocument.addObject("Spreadsheet::Sheet") sheet. | https://www.freecadweb.org/wiki/index.php?title=Spreadsheet_Workbench/ru&oldid=458175 | CC-MAIN-2019-43 | refinedweb | 1,130 | 62.68 |
In the previous blog “Smattering of HDFS“, we learnt that “The NameNode is a Single Point of Failure for the HDFS Cluster”. Each cluster had a single NameNode and if that machine became unavailable, the whole cluster would become unavailable until the NameNode is restarted or brought up on a different machine. Now in this blog, we will learn about resolving the failure issue of NameNode.
Issues that arise when NameNode fails/crashes-
The metadata for the HDFS like Namespace Information, block information etc, when in use needs to be stored in main memory, but for persistence storage, it is to be stored in disk. The NameNode stores two types of information:
1. in-memory fsimage – It is the latest and updated snapshot of the Hadoop filesystem namespace.
2. editLogs – It is the sequence of changes made to the filesystem after NameNode started.
The total availablity of HDFS cluster is decreased in two major ways:
1. In the case of a machine crash, the cluster would become unavailable until the machine is restarted.
2. In case of maintenance task to be carried on NameNode machine, cluster downtime would happen.
StandBy NameNode – the solution to NameNode failure
In HDFS High Availability feature, it provides a facility of running two NameNodes in the same cluster. There is an active-passive architecture for NameNode, that is, if NameNode goes down, within a few seconds, the passive NameNode also known as Standby NameNode comes up. At any point in time, over if necessary.
For Namespace Information backup, the fsImage is stored along with the editLog. The editLog is like the journal ledger of NameNode. Through it, the in-memory fsImage can be reconstructed. So, it is needed to make the backup of editLog .
In Gen2 Hadoop architecture, there is a facility of Quorum Journal Manager(QJM) which is a set of atleast 3 machines known as journal nodes, where editLogs are stored for backup. To minimize the time to start the passive NameNode in case of active NameNode crash, the standby machine available is pre-configured and ready to take over the role of NameNode.
The Standby NameNode keeps reading the editLogs from the journal nodes and keeps itself updated. This configuration makes Standby ready to take up the active NameNode role in case of failure. All the DataNodes are configured to send the Block Report to both of the NameNodes. Thus, the Standby NameNode becomes active in case of NameNode failure in a short duration of time.
1 thought on “Resolving the Failure Issue of NameNode”
Reblogged this on akashsethi24. | https://blog.knoldus.com/2017/06/04/resolving-the-failure-issue-of-namenode/ | CC-MAIN-2018-30 | refinedweb | 427 | 53.71 |
Results 1 to 2 of 2
- Join Date
- Mar 2006
- 202
- Thanks
- 27
- Thanked 2 Times in 2 Posts
graphs drawing using coordinates (need to space out what I draw)
Hello,
I am drawing directed graphs using Java.
At the moment, I'm struggling to work out how to space out the circular nodes I am drawing.
I will be drawing on circle in the center, and several circles around it. The number of circles will vary, so my program takes the number of circles to draw as input, and I need the program to decide how to space them out so they don't overlap. (Within reason, obviously if I had 100 nodes to draw it would be impossible to draw them without overlaps!)
I have a class to draw circles, it takes 2 coordinates as input... the x and y of where to start drawing it, and draws the circle.
I just need to know how to write something that will space out the circles drawn depending on how many their are. I think I need some kind of algorithm but I don't know how to do this.
Any help is really appreciated, or links to anywhere that might help.
Here is my circles class incase it helps.
Code:
/* DRAW AND LABEL CIRCLES */ //draw circles. needs input of setName and what the circle is to be labelled as. This is elementLabel. Element is the object returned and might not be needed. xy are Coordinates that say where to start drawing the circle public class draw(Graphics g, String SetName, String elementLabel, element, x, y) { //draw circle MouseXN and YN are both set to 20 to draw a circle of this size g.drawOval(x,y,mouseXN,mouseYN); }
Nicky
assuming
n is the number of circles you want to draw
(x, y) is the center of the central circle which has a radius of cr
dOff is the average space that you want between individual surrounding circles
r is the radius of the surrounding circles
the radius of the external circle on which the surrounding circles will be positioned can be calculated as
Code:
R = ((n * r) + ((n-1) * dOff)) / (2 * Math.PI);
Code:
delta = (2 * Math.PI) / n;You never have to change anything you got up in the middle of the night to write. -- Saul Bellow | http://www.codingforums.com/java-and-jsp/137986-graphs-drawing-using-coordinates-need-space-out-what-i-draw.html | CC-MAIN-2016-07 | refinedweb | 390 | 68.3 |
C# 9.0 - record
Hello Everyone,
Back again after a long back with the new article where we are going to learn about records that are released with C# 9.0.
System Requirement
Please install Microsoft .NET SDK 5.0 + to run these exercises.
Before going into records let’s just look for object initialization process with class and properties,
class Program{public class Employee{public string? EmployeeId { get; set; }public string? EmployeeName { get; set; }}static void Main(string[] args){Employee employee = new Employee { EmployeeId = “123”, EmployeeName = “Vaibhav” };employee.EmployeeName = “Testing Mutable Property”;Console.WriteLine(employee.EmployeeName);}}
The one big limitation is that properties have to be mutable for object initialization to work.
If you want the whole object as immutable and behave like a value then you should declare it as a record. A record is still a class but the record keyword adds several value-like behaviors.
In nutshell, Records is a compact and easy way to define a reference type that automatically gets value-based equality and being immutable.
Let us just see how we can declare the records in c#,
class Program{public record Employee(string EmployeeId,string EmployeeName);static void Main(string[] args){Employee employee = new Employee(“123”,”Vaibhav”);Console.WriteLine(employee.EmployeeName);}}
Value-Based equality check
If you think of class object comparison check it’s going to happen on the reference it is pointing to so if you want to check that two class objects are equal or not having the same content you need to write extra logic to perform a value-based equality check but in case of records you are getting this feature inbuilt let’s see an example of it below,
Immutability
Once you created an instance of records with some values you can not change it later because every instance is immutable in the case of records. For any change, you can create a new instance of records but in the case of a class object, you are allowed to change the value of a particular instance.
With-Expression
When working with immutable data, a common pattern is to create new values from existing ones to represent a new state. For instance, if our employee wants to change their EmployeeId by keeping the same EmployeeName we will represent it as a new object which is a copy of the old one. This technique is often referred to as a non-destructive mutation.
The with-expression works like actually copying the full state of the old object into a new object, then mutating that new object values according to the object initializer.
Inheritance
Records also support inheritance where the new record can be inherited from any base record by having any additional expression or capabilities lets take an example where we want to extend our Employee record and create a new Developer record having an additional expression of Skill.
Reference -
C# 9.0 on the record
C# 9.0 on the record It's official: C# 9.0 is out! Back in May I blogged about the C# 9.0 plans, and the following is…
devblogs.microsoft.com
Thank You, See you in the next article !!
You can reach out or connect to me over here,
If you want some technical topics to be discussed with group of participants please raise a request on following link: | https://vaibhavbhapkarblogs.medium.com/c-9-0-record-c7976e9884f6?responsesOpen=true&source=user_profile---------1---------------------------- | CC-MAIN-2021-43 | refinedweb | 557 | 52.8 |
Created on 2011-01-29 17:10 by Keith.Dart, last changed 2017-11-26 03:04 by ncoghlan. This issue is now closed.
When the uuid.py module is simply imported it has the side effect of forking a subprocess (/sbin/ldconfig) and doing a lot of stuff find a uuid implementation by ctypes. This is undesirable in many contexts. It would be better to perform those tasks on demand, when the first UUID is actually requested. In general, imports should avoid unnecessary system call side effects. This also makes testing easier.
I think launching external tools like ifconfig and ipconfig can be avoided pretty easily. There are many recipes around the net how to use native API's.
About ctypes' horrible logic during find_library call - don't know yet.
With the attached patch the "heavy work" will be done on request, when calling uuid1() or uuid4() not on import.
I am working off from the py3k svn branch. Is it necessary to submit a separate patch for py2 branch?
Kenny, I don't see a problem with uuid is *imported*, it just creates a couple of STANDARD UUID class objects for use later. And this seems to just set the number and validates it. I don't see any subprocess calls performed. Perhaps you were referring to scenarios of using uuid1/uuid5 methods in mac and suggesting improvements to it by your patch?
If you do 'python -c "import uuid" under strace, _posixsubprocess is definitely loaded, and a pipe2 call is made.
Take a look at the code starting at (py3k trunk) line 418 (try:). That's where the weird stuff happens, which is what the patch is addressing.
Ken: thanks for working on this.
Thanks for posting a patch! I have two comments:
- Have you run test_uuid? When I run it, it seems to go into an infinite loop somewhere and I need to kill the process.
- uuid should work even when ctypes is not available, so you can't just put an import statement at the top-level without a fallback
>uuid should work even when ctypes is not available
A bit of offtopic: why can't we assume that ctypes is available?
> A bit of offtopic: why can't we assume that ctypes is available?
Because ctypes (or, actually, the libffi it relies on) needs specific low-level code for each platform it runs on, and not all platforms have such code.
Another reason is that ctypes is dangerous and some administrators might prefer to disable it (especially on shared hosting ala Google App Engine).
Thanks for pointing that out! I guess that is the reason you did the import
in a try block.
Maybe I understood and ctypes ImportError simply must be handled and fallbacked to something else.).
> Maybe I understood and ctypes ImportError simply must be handled and
> fallbacked to something else.
Indeed.
>).
Perhaps, but doing without ctypes should still be possible, otherwise
it's a regression..
>It's also possible using existing wrapped os system calls.
That's right, on linux we can use ioctls but windows would require win api calls like this one:
I'm thinking Python could use a general purpose ifconfig/mac-layer module that uuid.py could then just use.
> I'm thinking Python could use a general purpose ifconfig/mac-layer
> module that uuid.py could then just use.
Perhaps, but that's really out of scope for this issue. Feel free to
open another issue.
The patch hasn’t incorporated Antoine’s comments AFAICT.
Also I don’t see this fit for back porting to bug fix releases. Correct me if I’m wrong.
Hynek, you are right.
The implementation does not actually end up in infinite loop, just repeating the loading of the CDLL is slow.
I added caching to that and fixed the ctypes imports too.
Jyrki, roundup doesn’t seem to recognize you patch so we can’t review it in Rietveld. Could you re-try, maybe using hg?
Here's a second take on the patch
One thing I am wondering about is why we have to use find_library() at all. Wouldn’t it be more robust, and more efficient, to call CDLL() directly? We just have to know the exactly library version we are expecting. On Linux, the full soname is libuuid.so.1. It seems on OS X it is called libc.dylib (but it would be good for someone else to confirm).
# The uuid_generate_* routines are provided by libuuid on at least
# Linux and FreeBSD, and provided by libc on Mac OS X.
if sys.platform == "darwin":
libname = "libc.dylib"
else:
libname = "libuuid.so.1"
_ctypes_lib = ctypes.CDLL(libname)
I cannot comment on uuid directly, but for me, this is yet another example of how assumptions can break things.
imho - if you know the exact version of s shared library that you want, calling cdll directly should be find. Maybe find_library is historic.
However, an advantage to calling find_library is that it should lead to an OSError if it cannot be loaded. But trapping OSError on a call to ctypes.cdll or testing for None (NULL) from find_library() is the option of a developer using the ctypes interface.
As far as libFOO.so.N - do you always want to assume it is going to be version N, or are you expecting to be able to work work version N+X?
Again, find_library() can give you the library name it would load - but a programmer must be aware of the platform differences (e.g., AIX returns not a filename, but a libFOO.a(libFOO.so.N) - as just one example.
p.s. as far as ldconfig being part of the problem on AIX (as it does not exist and can lead to OSError or just long delays, I have worked out a (pending) patch for ctypes/util (and ctypes/cdll) that may address the issues with uuid import on AIX. (see issue26439 for the patch).
It sounds like ctypes is causing you some headache. How about we get rid of ctypes for uuid and replace it with a simple implementation in C instead? Autoconf (configure) can take care of the library detection easily.
There is already Issue 20519 for that, although it looks like the proposed patch keeps ctypes as a fall-back. (My interest here is only theoretical.)
The way I have seen that resolved - in many locations, is to have the
option to specify a specific version, e.g., libFOO.so.1 and then as long
as that version remains available, perhaps as read-only for running with
existing programs,they continue to work even when libFOO.so.2. However,
for some who believes, or expects, to be able to deal with potential ABI
changes - they can specify simply libFOO.so and let the loader decide
(what I see is that libFOO.so is a symbolic link to, or a copy of the
latest version.)
So, I agree wholeheartedly, if a versioned number is requested, that, or
nothing, should be returned. However, if it is generic - try to find a
generic named library (and get the link or copy), and when that is not
available either, take the latest versioned number.
It has been over two months, and I may have read it wrong - but that
appears to be what the current "ldconfig -p" solution implements. (in
ctypes, not uuid, so perhaps this is not the correct place to be
responding. If so, my apologies).
On 08-May-16 05:24, Martin Panter wrote:
> Martin Panter added the comment:
>
>.
>
> ----------
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________
How often uuid1 is used?
I never use it and it looks uuid1 makes uuid.py complicated.
How about split it to _uuid1.py (or uuid/__init__.py and uuid/_uuid1.py)?
I hope PEP 562 is accepted.
It ease splitting out (heavy and slow and dirty) part into submodule without breaking backward compatibility.
Without PEP 562, easy way is making proxy function.
# uuid/__init__.py."""
from . import _uuid1
return _uuid1.uuid1()
Sorry, I reject my idea. It clearly overdone. uuid.py is not so huge.
> It ease splitting out (heavy and slow and dirty) part into submodule without breaking backward compatibility.
Right. I created attached PR #3795 to implement this idea.
> I hope PEP 562 is accepted.
I don't think that this PEP is needed here. IMHO a new _uuid1 module is enoguh to avoid "side effects" on "import uuid".
> Sorry, I reject my idea. It clearly overdone. uuid.py is not so huge.
Can you please elaborate? Do you think that my PR is wrong?
uuid code contains 4 bare "except:" blocks which should be replaced with appropriate exceptions, or at least "except Exception:" to not catch KeyboardInterrupt. But this should be modified in a second time.
I think is a better resolution. It creates an optional _uuid C extension to avoid ctypes if possible *and* also loads system functions lazily.
> I think launching external tools like ifconfig and ipconfig can be avoided pretty easily. There are many recipes around the net how to use native API's.
That's right, but I would prefer to enhance uuid to use native API's in a different issue and focus on avoiding "side effects" on "import uuid" in this issue.
Do you know a native APIs for Windows and Linux? I don't. Do you have links to these "recipes"? If yes, please open a new issue.
> Do you know a native APIs for Windows and Linux?
On Linux, we already use uuid_generate_time(). See _unixdll_get_node().
And on Windows, _windll_getnode() calls into _UuidCreate().
Antoine: "I think is a better resolution. It creates an optional _uuid C extension to avoid ctypes if possible *and* also loads system functions lazily."
Implementing bpo-20519 is a very good idea. But I still like the idea of putting all these ugly functions to get the node and geneate a UUID1 object in a different module, since most of these code is not used on most platforms.
Antoine: Would you mind to modify your PR 3796 to only implement bpo-20519? I would prefer to reorganize uuid.py in a second step. It will be easy to review and easy to discuss what is the best option.
It's nice to see things moving in this 6 years old issue :-)
>> Sorry, I reject my idea. It clearly overdone. uuid.py is not so huge.
> Can you please elaborate? Do you think that my PR is wrong?
I looked and
I wonder if I should recommend to split module before review.
So I meant "I stop requesting split module and I'll review the PR3684."
Now I see your PR and it looks more clean.
> I would prefer to reorganize uuid.py in a second step
I am not reorganizing uuid.py, just making initialization lazy.
Oh. I was told that the PR 3684 of bpo-5885 in another fix for this issue.
PR 3684 seems mostly a subset of what I'm proposing.
I marked bpo-5885 as a duplicate of this issue, even if it's not exactly the same. The main purpose of this issue was to use uuid_generate_time() using a C extension, as bpo-20519.
+static PyObject *
+uuid_uuid1(PyObject *self, PyObject *args)
+{
+ uuid_t out;
+ uuid_generate_time(out);
+ return PyByteArray_FromStringAndSize((const char *) out,sizeof(out));
+}
+static PyObject *
+uuid_uuid4(PyObject *self, PyObject *args)
+{
+ uuid_t out;
+ uuid_generate_random(out);
+ return PyByteArray_FromStringAndSize(out,sizeof(out));
+}
I marked bpo-20519 as a duplicate of this issue, even if it's not exactly the same. The main purpose of this issue was to use uuid_generate_time() using a C extension:
+static PyObject *
+_uuid_generate_random(void)
+{
+ uuid_t out;
+ uuid_generate_random(out);
+ return PyBytes_FromStringAndSize((const char *) out, sizeof(out));
+}
+static PyObject *
+_uuid_generate_time(void)
+{
+ uuid_t out;
+ uuid_generate_time(out);
+ return PyBytes_FromStringAndSize((const char *) out, sizeof(out));
+}
I abandonned my PR and started to review Antoine's PR 3796 which basically combines all previous patches proposed last 6 years :-)
New changeset a106aec2ed6ba171838ca7e6ba43c4e722bbecd1 by Antoine Pitrou in branch 'master':
bpo-11063, bpo-20519: avoid ctypes and improve import time for uuid (#3796)
I ran two benchmarks on my Fedora 26.
* new = master (commit a106aec2ed6ba171838ca7e6ba43c4e722bbecd1)
* ref = commit 8d59aca4a953b097a9b02b0ecafef840e4ac5855
git co master
./python -m perf timeit -s 'import sys, uuid' "del sys.modules['uuid']; import uuid; uuid = None" --inherit=PYTHONPATH -v -o import_new.json
./python -m perf timeit -s 'import uuid; u=uuid.uuid1' "u()" --inherit=PYTHONPATH -v -o uuid1_new.json
git co 8d59aca4a953b097a9b02b0ecafef840e4ac5855
./python -m perf timeit -s 'import uuid; u=uuid.uuid1' "u()" --inherit=PYTHONPATH -v -o uuid1_ref.json
./python -m perf timeit -s 'import sys, uuid' "del sys.modules['uuid']; import uuid; uuid = None" --inherit=PYTHONPATH -v -o import_ref.json
Import:
haypo@selma$ ./python -m perf compare_to import_ref.json import_new.json --table
+-----------+------------+-----------------------------+
| Benchmark | import_ref | import_new |
+===========+============+=============================+
| timeit | 4.04 ms | 430 us: 9.39x faster (-89%) |
+-----------+------------+-----------------------------+
uuid.uuid1():
haypo@selma$ ./python -m perf compare_to uuid1_ref.json uuid1_new.json --table
+-----------+-----------+------------------------------+
| Benchmark | uuid1_ref | uuid1_new |
+===========+===========+==============================+
| timeit | 18.9 us | 15.2 us: 1.24x faster (-20%) |
+-----------+-----------+------------------------------+
Everything is faster. The import time is 9.4x faster, nice!
In practice, the import time is probably even better. My benchmark uses repeated import, it doesn't measure the "first time" import which was more expensive because of the "import ctypes".
Crude import benchmark (Ubuntu):
* before:
$ time ./python -c "import uuid"
real 0m0.074s
user 0m0.056s
sys 0m0.012s
* after:
$ time ./python -c "import uuid"
real 0m0.030s
user 0m0.028s
sys 0m0.000s
* baseline:
$ time ./python -c pass
real 0m0.027s
user 0m0.024s
sys 0m0.000s
uuid fails to build for me on master since that change landed:
cpython/Modules/_uuidmodule.c:13:11: error: implicit declaration of function 'uuid_generate_time_safe' is invalid in C99
[-Werror,-Wimplicit-function-declaration]
res = uuid_generate_time_safe(out);
^
cpython/Modules/_uuidmodule.c:13:11: note: did you mean 'py_uuid_generate_time_safe'?
cpython/Modules/_uuidmodule.c:8:1: note: 'py_uuid_generate_time_safe' declared here
py_uuid_generate_time_safe(void)
^
cpython/Modules/_uuidmodule.c:13:11: warning: this function declaration is not a prototype [-Wstrict-prototypes]
res = uuid_generate_time_safe(out);
^
1 warning and 1 error generated.
This is on macOS Sierra.
It's expected if uuid_generate_time_safe() isn't available on your platform. But test_uuid still passes?
> It's expected if uuid_generate_time_safe() isn't available on your platform. But test_uuid still passes?
I would prefer to avoid compilation errors on a popular platforms like macOS. Would it possible to check if uuid_generate_time_safe() is available, maybe in configure? Or we can "simply" skip _uuid compilation on macOS?
> Would it possible to check if uuid_generate_time_safe() is available, maybe in configure?
That's probably possible.
Though I don't know how to reuse the find_file() logic in configure...
> Though I don't know how to reuse the find_file() logic in configure...
Maybe we could use pkg-config instead?
haypo@selma$ pkg-config uuid --cflags
-I/usr/include/uuid
haypo@selma$ pkg-config uuid --libs
-luuid
pkg-config is a Linux-ism. But Linux already works fine...
$ uname
Darwin
$ pkg-config
-bash: pkg-config: command not found
I proposed PR 3855 to add macOS support to _uuid (and fix the compilation error).
New changeset 4337a0d9955f0855ba38ef30feec3858d304abf0 by Victor Stinner in branch 'master':
bpo-11063: Fix _uuid module on macOS (#3855)
I agree with Barry's comment on PR 3855: "I'd rather see a configure check for the existence of uuid_generate_time_safe() rather than hard coding it to platforms !APPLE for two reasons. 1) If macOS ever adds this API in some future release, this ifndef will be incorrect, and 2) if some other platform comes along that doesn't have this API, it will still use the incorrect function." It's exactly for situations like this that autoconf tests exist; we should not be hardwiring assumptions about the lack of particular platform APIs.
I think the configure check should be this (sets HAVE_LIBUUID in pyconfig.h):
diff --git a/configure.ac b/configure.ac
index 41bd9effbf..90d53c1b7d 100644
--- a/configure.ac
+++ b/configure.ac
@@ -2657,6 +2657,7 @@ AC_MSG_RESULT($SHLIBS)
AC_CHECK_LIB(sendfile, sendfile)
AC_CHECK_LIB(dl, dlopen) # Dynamic linking for SunOS/Solaris and SYSV
AC_CHECK_LIB(dld, shl_load) # Dynamic linking for HP-UX
+AC_CHECK_LIB(uuid, uuid_generate_time_safe)
# only check for sem_init if thread support is requested
if test "$with_threads" = "yes" -o -z "$with_threads"; then
I've followed Stefan's suggestion and opened PR 4287 (tested on 10.10.5)
Berker's latest patch looks good to me.
Unrelated to the patch (same before and after), this looks odd to me:
>>> import uuid
>>> uuid._has_uuid_generate_time_safe is None
True
>>>
>>> import _uuid
>>> _uuid.has_uuid_generate_time_safe
1
[Also, I thought we weren't supposed to use ctypes in the stdlib.]
"""
Unrelated to the patch (same before and after), this looks odd to me:
>>> import uuid
>>> uuid._has_uuid_generate_time_safe is None
True
>>>
>>> import _uuid
>>> _uuid.has_uuid_generate_time_safe
1
"""
None means "not initialized yet". It's initialized on demand, at the first call of uuid1() or get_node():
$ python3
Python 3.7.0a2+ (heads/master:a5293b4ff2, Nov 6 2017, 12:22:04)
>>> import uuid
>>> uuid._has_uuid_generate_time_safe # == None
>>> uuid.uuid1()
UUID('3e5a7628-c2e5-11e7-adc1-3ca9f4650c0c')
>>> uuid._has_uuid_generate_time_safe
1
> [Also, I thought we weren't supposed to use ctypes in the stdlib.]
Antoine's commit a106aec2ed6ba171838ca7e6ba43c4e722bbecd1 avoids ctypes when libuuid is available.
For the other systems without libuuid, well, it was probably simpler to use ctypes. ctypes was more popular a few years ago. The code "just works" and I guess that nobody wants to touch it :-)
New changeset 9a10ff4deb2494e22bc0dbea3e3a6f9e8354d995 by Berker Peksag in branch 'master':
bpo-11063: Add a configure check for uuid_generate_time_safe (GH-4287)
Does this check work? I tried similar checks with other functions and they were falsely passed because calling an undeclared function is not an error in C.
The reliable check of the existence of the uuid_generate_time_safe function is:
void *x = uuid_generate_time_safe. :)
It worked for me on OS X (returns no) and Linux (returns yes after I installed uuid-dev) but I didn't test it on both systems at the same. Travis CI also returned 'no':
In any case, Serhiy's suggestion is better than mine so I've opened PR 4343.
And yes, I'm beginning to regret my decision on not using AC_CHECK_LIB :)
New changeset 0e163d2ced28ade8ff526e8c663faf03c2c0b168 by Berker Peksag in branch 'master':
bpo-11063: Use more reliable way to check if uuid function exists (GH-4343)
The header file check in setup.py incorrectly reported "not found" if `uuid.h` was in one of the standard include directories, so I've submitted a tweak to fix that:
New changeset 53efbf3977a44e382397e7994a2524b4f8c9d053 by Nick Coghlan in branch 'master':
bpo-11063: Handle uuid.h being in default include path (GH-4565) | https://bugs.python.org/issue11063 | CC-MAIN-2021-39 | refinedweb | 3,088 | 68.06 |
In this article, I will show you how to execute SQL queries from your C# applications.
I usually see code samples using SELECT SQL queries but I don't see many articles using other SQL queries. In this article, I will show you how to execute SQL queries from your C# applications.
Before we go to SQL queries, we need to know how to make a connection to a database. I will use Sql data provider to connect to the SQL Server. The Sql data provider classes are defined in the System.Data.SqlCliet namespace and some general database related classes are defined in the System.Data namespace. Hence I import these two namespace in my application before writing any thing else.
using
SqlConnection conn = new SqlConnection("Data Source=computer_name;"+"Initial Catalog=database_name;"+ "User ID=sa;"+"Password=pass;");conn.Open();conn.Close();
I want to show you that it is fairly easy to use SQL queries in your application. In this article, I will explain the CREATE, INSERT, and SELECT queries.
Now if you wonder why do I start with the CREATE SQL query then the simple answer is I want to show you how to create a database and database table before you can select any data. Because to use the SELECT SQL query, you must have a database, database table, and data in the table.
The following source code shows you how to use the CREATE SQL query.
The last thing we have to do is make a Primary key this is needed so that our simple_id can't have the same value twice "CREATE UNIQUE INDEX PrimaryID ON simplesql (simple_id)". Use this instead of the create table query.
NOTE:I expected you to have a database available if you have not use this query instead of ours "CREATE DATABASE csharpcorner". Use this instead of the create table query. You also need to change the connection string delete this part from it "Initial Catalog=database_name;"+
INSERT
The INSERT SQL query is second most used queries. I think you have an idea what it is for. Yes, it's for inserting data into a database table. You will see how the INSERT query works in a momeny. The INSERT query works almost the same as the create statement but to make it more interesting I will make something for you so you get an idea of how you could do it for example a guestbook with asp.net.
And here is the code:
And at last after the though work with creating and inserting something to our database we will read it from it. I guess this is the most used command in SQL this because the query let you show you what's in the database. I will make a application that count the rows in the table this is because we need to know how long our for loop will be. This application will be more difficult then you already saw this article this because we also need a SqlDataReader for reading from the database.
I will first show you the code this code will be commented like the first one so read carefully
cmd.CommandText = "SELECT Count(*) FROM simplesql";
What we do here is count all the rows from simplesql.This must be done to know how long our for loop will be.
cmd.CommandText = "SELECT simple_id, simple_text FROM simplesql ORDER BY simple_id";
What we do here is select simle_id and simple_text from simplesql and order it by simple_id that means like 1, 2, 3,...if you use ORDER BY simple_id DESC it will order reversed like ..., 3, 2, 1Again you could choose to do it like this."SELECT * FROM simplesql"This will make the query shorter but the gives you fewer control.So if we only want to select the text we could do this."SELECT simple_text FROM simplesql ORDER BY simple_id"Or if we want to select text where id equals 3 we would do this."SELECT simple_text FROM simplesql WHERE simple_id=3"There is one more I want to teach you and that is if you want to select text between some id numbers you will use this."SELECT simple_text FROM simplesql BETWEEN simple_id=0 AND simpel_id=5"This would be usefully if you have a news system and want to show the last5 news items this is the way
dr.Read(); Console.WriteLine("ID: {0} \t Text: {1}", dr[0], dr[1]);
Read one row from the databasedr[0] means the first variable you selected so if you had the query like this: "SELECT simple_text, simple_id ..."simple_text was dr[0]But now simple_id is.
Summary
As you can see from this article, it is a very basic SQL article. In this article, I wasn't expecting you to know any SQL. I tried to keep this article very simple. I will be back with more articles on SQL with more SQL queries. | http://www.c-sharpcorner.com/UploadFile/jeradus/UsingSQLP112032005062034AM/UsingSQLP1.aspx | crawl-002 | refinedweb | 820 | 70.53 |
I am having some interesting behaviour with my sense hat. I wrote some code to be invoked on button press with the joystick. I noticed it seems to be being run twice. So, I took some code from teh getting started tutorial and modified it a touch.
If I run the code below. For every button press of the joystick I get the direction printed twice. Is this intended behaviour and I missed something? It seems far too consistent for a hardware fault IMO.
If this is intended, how do I react to the button press but only call the function once and not twice?
Here is the code...
Code: Select all
from sense_hat import SenseHat sense = SenseHat() # Define the functions def up(): print 'up' def down(): print 'down' def left(): print 'left' def right(): print 'right' def middle(): print 'middle' # Tell the program which function to associate with which direction sense.stick.direction_up = up sense.stick.direction_down = down sense.stick.direction_left = left sense.stick.direction_right = right sense.stick.direction_middle = middle while True: pass # This keeps the program running to receive joystick events | https://www.raspberrypi.org/forums/viewtopic.php?f=104&t=240977&p=1470812& | CC-MAIN-2019-26 | refinedweb | 183 | 68.97 |
Simply a global object that act as undefined.
Undefined
Ever needed a global object that act as None but not quite ?
Like for example key-word argument for function, where None make sens, so you need a default value.
One solution is to create as singleton object:
mysingleton = object()
Though it becomes difficult to track the singleton across libraries, and teach users where to import this from.
It’s also relatively annoying use this singleton across library.
Introducing undefined:
>>> import undefined >>> from undefined import Undefined >>> undefined is Undefined True
behavior
It work (for now) mostly like a singleton object
Though it’s neither truthy not falsy
>>> if undefined: print(True) raise NotImplementedError
Casing ?
Because it is a module you can use it lowercase:
import undefined
Because it looks more like a keyword (None, True, False), you can use it upper case:
import undefined as Undefined
or
from undefined import Undefined
I tend to be torn between lowercase, for simplicity, and Uppercase.
Why not None, difference with None
undefined is likely slower, and as it is a regular Python object there are a few on purpose (or not difference).
Unlike None, you can assign to it
>>> None = 3 SyntaxError: can't assign to keyword
>>> undefined = 3 >>> undefned 3
Unlike None, undefined is mutable
>>> undefined.value = 42 >>> undefined.value 42
(might be able to fix that with __get_attr__
Unlike None, undefined is neither true not false.
If you test for boolean value of undefind if will raise. That is to say: the following will fail:
value = undefined if value: pass # will raise before reaching here.
You have to check for identity:
value = undefined other = 1 if value is undefined: pass # will execute
for info, undefined is not True,False, not undefined with respect to identity
>>> undefined is True False >>> undefined is False False >>>: undefined is None False
String form
str(undefined) raises. repr(undefined) is the unicode string 'Undefined'
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/undefined/ | CC-MAIN-2017-43 | refinedweb | 338 | 50.77 |
CodePlexProject Hosting for Open Source Software
Hey folks,
I am creating my fixtures based on a randomly generated platformer level. They are static squares and ramps that are grouped into larger fixtures to increase efficiency. I was having no difficulties creating and placing the rectangles in my world
using FixtureFactory.CreateRectangle.
I am getting positioning errors while when making my ramps with FixtureFactory.CreatePolygon. My ramps are at a 45 degree angle and so I have a uniform grid system with all my squares and ramps (they all have the same width and height). The problem
is that when I position my ramp into the world they are off to the right a few pixels and as I go further away from the world origin they get increasingly further away from where they should be located.
I had read that createpolygon doesn't set the center correctly and in a recent post it was suggested to translate the vertices by half the width and height of the object. This makes the fixture positioned even more incorrectly. (this may also
not matter but...) When setting the center of mass on in debug view the center is in the top left corner instead of the center of the polygon.
Hopefully this screenshot works.
As you can see the rectangles are all aligned on the grid but any polygon with a ramp is offset. What things could I look at to remedy this? Posting code isn't really an option due to the complexities of integrating farseer code with my generator.
If desired I could write up some mock code that would be doing the same approximate thing.
Edit: I'm using farseer 3.0
Rectangles are polygons too. Might be a problem with the calculation of centroid. Could you code up a quick test for the Testbed?
After creating my implementation in the testbed everything is lining up and looks good.
I'll have to scour my code much deeper it seems. I'm starting to suspect a precision error rather than a miscalculation. This will be lots of fun...
EDIT:
I located the issue after a some more debugging. My calculation was incorrect, I was positioning my vertices in world space instead of in local space. This explains why as the polygons got further from the origin, they got more off. Bla....
As I mentioned above, the center of mass appears to be set incorrectly (or not at all). Even in the test bed it was wrong. It seems that it is always at the polygons origin rather then at the center like the polygons from CreateRectangle. Maybe
it isn't supposed to be set, I had just assumed it would be since the other Create calls do this.
Here is the test that I wrote, its a repeating pattern of rectangles and ramps. When you run it and turn on the center of mass debug, you'll see what I'm talking about.
using System;
using FarseerPhysics.Collision;
using FarseerPhysics.Dynamics;
using FarseerPhysics.Factories;
using FarseerPhysics.Common;
using FarseerPhysics.TestBed.Framework;
using Microsoft.Xna.Framework;
namespace FarseerPhysics.TestBed.Tests
{
public class RampTest : Test
{
public const int TILE_SIZE = 2;
public const int TILE_HALF_SIZE = TILE_SIZE / 2;
private Vector2 GlobalPos = new Vector2(-20, 0);
private int Iterations = 10;
private RampTest()
{
int[] map = new int[6];
/* Demo Block
* _________________________
* |___|___|___|___|___|___| no slopes ( 6 x 1 ) Tiles
* |___|___|__\|___|___|___| a right slope (3 x 1) and (3 x 1) Tiles
* |___|___|___|___|/__|___| a left slope (4 x 1) and (2 x 1) Tiles
*
*/
Fixture fixture;
int sizeOfDemoBlockX = 6 * TILE_SIZE;
int sizeOfDemoBlockY = 3 * TILE_SIZE;
for (int repeaterY = 0; repeaterY < Iterations; ++repeaterY)
{
for (int repeaterX = 0; repeaterX < Iterations; ++repeaterX)
{
Vector2 repeaterPosition = new Vector2(repeaterX * sizeOfDemoBlockX, repeaterY * sizeOfDemoBlockY);
//The 1st row
MakeRectangle(0, 2, 6, 1, repeaterPosition);
//The right slope
int x = 0;
int y = 1;
int width = 3;
int height = 1;
Vertices verts = new Vertices(4);
verts.Add(new Vector2(width * TILE_SIZE, 0));
verts.Add(new Vector2((width - 1) * TILE_SIZE, height * TILE_SIZE));
verts.Add(new Vector2(0, height * TILE_SIZE));
verts.Add(new Vector2(0, 0));
Vector2 pos = new Vector2(x * TILE_SIZE, y * TILE_SIZE);
fixture = FixtureFactory.CreatePolygon(World, verts, 5, GlobalPos + repeaterPosition + pos);
fixture.Body.BodyType = BodyType.Static;
MakeRectangle(3, 1, 3, 1, repeaterPosition);
//The 3rd row
MakeRectangle(0, 0, 4, 1, repeaterPosition);
//The left slope
x = 4;
y = 0;
width = 2;
height = 1;
verts = new Vertices(4);
verts.Add(new Vector2(0, 0));
verts.Add(new Vector2(width * TILE_SIZE, 0));
verts.Add(new Vector2(width * TILE_SIZE, height * TILE_SIZE));
verts.Add(new Vector2((width - 1) * TILE_SIZE, height * TILE_SIZE));
pos = new Vector2(x * TILE_SIZE, y * TILE_SIZE);
fixture = FixtureFactory.CreatePolygon(World, verts, 5, GlobalPos + repeaterPosition + pos);
fixture.Body.BodyType = BodyType.Static;
}
}
}
private void MakeRectangle(int x, int y, int width, int height, Vector2 repeaterPosition)
{
Vector2 dimension = new Vector2(width * TILE_SIZE, height * TILE_SIZE);
Vector2 pos = new Vector2(x * TILE_SIZE + (width * TILE_HALF_SIZE),
y * TILE_SIZE + (height * TILE_HALF_SIZE));
Fixture fixture = FixtureFactory.CreateRectangle(World, dimension.X, dimension.Y, 5, GlobalPos + repeaterPosition + pos);
fixture.Body.BodyType = BodyType.Static;
}
public override void Update(GameSettings settings, GameTime gameTime)
{
base.Update(settings, gameTime);
}
internal static Test Create()
{
return new RampTest();
}
}
}
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://farseerphysics.codeplex.com/discussions/232143 | CC-MAIN-2017-30 | refinedweb | 900 | 57.37 |
In this section, we explore advanced topics in object filtering,
supported by the following
Query methods:
public void declareParameters (String parameters); public void declareVariables (String variables); public void declareImports (String imports);
The
declareParameters method defines
the names and types of the query's parameters. Declaring query
parameters is analogous to declaring the parameter signature of
a Java method, and uses the same syntax. Parameters acts as
placeholders in the filter string, allowing you to supply new
values on each execution. Parameters also allow for
query by example, as demonstrated in
Example 11.8, “Query By Example”.
Parameters do not have to be declared. As you will see in the examples below, you can introduce implicit parameters into your queries simply by prefixing the parameter name with a colon in your JDOQL string. Wherever the type of an implicit parameter can't be inferred from the context, you can set the type by casting the parameter in the JDOQL.
You cannot mix implicit and declared parameters in the same query. Each query must declare all of its parameters, or none of them.
declareVariables names the local
variables used in the query filter. It uses standard Java
variable declaration syntax. Query variables are typically
used to place conditions on elements of collections or maps.
Like parameters, variables do not have to be declared. Whenever you use an unrecognized identifier where a variable would be appropriate, the JDO implementation will dynamically define an implicit variable with that name. In the case of an unconstrained variable, you must cast the variable in your JDOQL to supply its type. Unlike parameters, you can mix both declared and implicit variables in the same query.
You can import classes into the query's namespace with the
declareImports method. The method
argument uses standard Java
import syntax.
By default, queries only recognize unqualified class names when
the class is in the package of the candidate class, or when it
is in
java.lang. Using imports, you can
save yourself the trouble of typing out full class names
in your parameter and variable declarations, and in your
your filter string (class names can appear in filter strings
when you use the
instanceof operator,
access a static field, or perform a cast).
The following examples will give you a feel for the query elements described above. They use the object model we defined in the previous section.
Example 11.6. Imports and Declared Parameters
Find the magazines with a certain publisher and title, where the publisher and title are supplied as parameters on each execution.
PersistenceManager pm = ...; Company comp = ...; String str = ...; Query query = pm.newQuery (Magazine.class, "publisher == pub && title == ttl"); query.declareImports ("import org.mag.pub.*"); query.declareParameters ("String ttl, Company pub"); List mags = (List) query.execute (str, comp); for (Iterator itr = mags.iterator (); itr.hasNext ();) processMagazine ((Magazine) itr.next ()); query.close (mags);
There are two things to take away from this example. First, the
import prevents us from having to qualify the
Company
class name when declaring our
pub
parameter (although in this case, it would have been easier to
just write out the full class name in the parameter declaration).
The unqualified
Company name is not
automatically recognized by the query because
Company
isn't in the same package as
Magazine, the candidate class.
Second, notice that we supply values for the parameter
placeholders when we execute the query. Parameters can be of
any type recognized by JDO, though primitive parameters have
to be supplied as instances of the appropriate wrapper type on
execution. The
Query interface includes
several methods for executing queries with various numbers of
parameters. When you use declared parameters, you should supply
the parameter values in the same order that you declare the
parameters.
Example 11.7. Implicit Parameters
The query below is exactly the same as our previous example, except this time we use implicit parameters.
PersistenceManager pm = ...; Company comp = ...; String str = ...; Query query = pm.newQuery (Magazine.class, "publisher == :pub && title == :ttl"); List mags = (List) query.execute (comp, str); for (Iterator itr = mags.iterator (); itr.hasNext ();) processMagazine ((Magazine) itr.next ()); query.close (mags);
Here, we use colon-prefixed names to introduce new parameters without declarations. When we execute the query, we supply the parameter values in the order the implicit parameters first appear in the JDOQL. Later in the chapter, we'll see queries that consist of multiple JDOQL strings. Each string might introduce new implicit parameters. When deciding the proper order to supply the parameter values, start with the parameters in the result string, then those in the filter, then the grouping string, and finally the ordering string.
In the query above, we can infer the type of each parameter based on its context. This is almost always the case. There are times, however, when the context alone is not enough to determine the type of an implicit parameter. In these cases, use a cast to supply the parameter type:
PersistenceManager pm = ...; Company comp = ...; Query query = pm.newQuery (Magazine.class, "publisher.revenue == ((org.mag.pub.Company) :pub).revenue"); List mags = (List) query.execute (comp); for (Iterator itr = mags.iterator (); itr.hasNext ();) processMagazine ((Magazine) itr.next ()); query.close (mags);
Example 11.8. Query By Example
Parameters do not need to be in the datastore to be useful. You can implement query by example in JDO by using an existing "example" object as a query parameter:
PersistenceManager pm = ...; Magazine example = new Magazine (); example.setPrice (100); example.setTitle ("Fourier Transforms"); Query query = pm.newQuery (Magazine.class, "price == ex.price && title == ex.title"); query.declareParameters ("Magazine ex"); List mags = (List) query.execute (example); for (Iterator itr = mags.iterator (); itr.hasNext ();) processMagazine ((Magazine) itr.next ());
Example 11.9. Variables
Find all magazines that have an article titled "Fourier Transforms".
PersistenceManager pm = ...; Query query = pm.newQuery (Magazine.class, "articles.contains (art) " + "&& art.title == 'Fourier Transforms'"; query.declareVariables ("Article art"); List mags = (List) query.execute (); for (Iterator itr = mags.iterator (); itr.hasNext ();) processMagazine ((Magazine) itr.next ()); query.close (mags);
A variable represents any persistent instance of its declared type.
So you can read the filter string above as: "The magazine's
articles collection contains some article
art, where
art's title is
'Fourier Transforms'". Notice how we bind
art
to a particular collection with the
contains
method, then test its properties in an &&'d
expression. This is a common pattern in JDOQL filters, and applies
equally well to placing conditions on the keys and values of maps.
Of course, we don't have to declare
art
explicitly. The same query without
declareVariables
would work just as well:
PersistenceManager pm = ...; Query query = pm.newQuery (Magazine.class, "articles.contains (art) " + "&& art.title == 'Fourier Transforms'"; List mags = (List) query.execute (); for (Iterator itr = mags.iterator (); itr.hasNext ();) processMagazine ((Magazine) itr.next ()); query.close (mags);
In this case the JDO implementation assumes
art
is an implicit variable because it does not match the name of any
field in the candidate class. The new variable's type is set to
the known element type of the collection that contains it.
Example 11.10. Unconstrained Variables
The example above uses a variable to represent any element in a collection. We refer to variables used to test collection or map elements as constrained or bound variables, because the values of the variable are limited by the collection or map involved. Many JDO implementations also support unconstrained variables. Rather than representing a collection or map element, an unconstrained variable represents any persistent instance of its class. Consider the following example:
Query query = pm.newQuery (Article.class, "mag.copiesSold > 10000 " + "&& mag.coverArticle == this"); query.declareVariables ("Magazine mag"); List arts = (List) query.execute (); for (Iterator itr = arts.iterator (); itr.hasNext ();) processArticle ((Article) itr.next ()); query.close (arts);
What does this query do? Let's break it down. The first clause
matches any magazine that has sold more than 10,000 copies. The
second clause requires that the cover article of the magazine is the
candidate instance being evaluated (notice the query's candidate
class is
Article in this example). So the
query returns all articles that are the cover article for a
magazine that has sold more than 10,000 copies. The unconstrained
variable
mag allowed us to overcome the fact that
there was no direct relation from
Article to
Magazine (only the reverse).
Later in this chapter, we'll see how to use a projection to drastically simplify this query.
We can also execute this query without declaring the
mag
variable explicitly. Without a constraining
contains clause, however, the type of an
unconstrained, implicit variable is impossible to infer. Use a cast
to supply the type:
Query query = pm.newQuery (Article.class, "((Magazine) mag).copiesSold > 10000 " + "&& mag.coverArticle == this"); List arts = (List) query.execute (); for (Iterator itr = arts.iterator (); itr.hasNext ();) processArticle ((Article) itr.next ()); query.close (arts);
Up until now, we have focused on how to configure our queries with the right filter, but we have ignored how to actually execute the query. The next section corrects this oversight. | http://docs.oracle.com/cd/E23943_01/apirefs.1111/e13946/jdo_overview_query_advfilter.html | CC-MAIN-2014-15 | refinedweb | 1,493 | 50.63 |
$ cnpm install @atomist/clojure-sdm
Instance of an Atomist Software Delivery Machine that can be used to automate delivery of Atomist automatiom-client projects, like SDMs.
A software delivery machine is a development process in a box.
It automates all steps in the flow from commit to production (potentially via staging environments), and many other actions, using the consistent model provided by the Atomist API for software.
Many teams have a blueprint in their mind for how they'd like to deliver software and ease their day to day work, but find it hard to realize. A Software Delivery Machine makes it possible.
The concept is explained in detail in Rod Johnson's blog Why you need a Software Delivery Machine. This video shows it in action.
Please see the Atomist SDM library for explanation on what an SDM can do. The present document describes how to get yours running.
This delivery machine feeds on the Atomist API. You'll need to be a member of an Atomist workspace to run it. Create your own by enrolling at atomist.com.
Things work best if you install an org webhook, so that Atomist receives events for all your GitHub repos.
If the Atomist bot is in your Slack team, type
@atomist create sdm to have Atomist create a personalized version of this repository for you.
Alternatively, you can fork and clone this repository.
Below are instructions for running locally and on Kubernetes. See integrations for additional prerequisites according to the projects you're building.
This is an Atomist automation client. See run an automation client for instructions on how to set up your environment and run it under Node.js.
The client logs to the console so you can see it go.
You can use the Kubernetes resource files in the kube directory as a starting point for deploying this automation in your Kubernetes cluster.
This SDM needs write access to jobs and read-access to deployments in its namespaces. It uses the Kubernetes "in-cluster client" to authenticate against the Kubernetes API. Depending on whether your cluster is using role-based access control (RBAC) or not, you must deploy slightly differently. RBAC is a feature of more recent versions of Kubernetes, for example it is enabled by default on GKE clusters using Kubernetes 1.6 and higher. If your cluster is older or is not using RBAC, the default system account provided to all pods should have sufficient permissions to run this SDM.
Before deploying either with or without RBAC, you will need to create a namespace for the resources and a secret with the configuration. The only required configuration values are the
teamIds and
token. The
teamIds should be your Atomist team ID(s), which you can get from the settings page for your Atomist workspace or by sending
team as a message to the Atomist bot, e.g.,
@atomist team, in Slack. The
token should be a GitHub personal access token with
read:org and
repo scopes.
$ kubectl apply -f $ kubectl create secret --namespace=sdm generic automation \ --from-literal=config='{"teamIds":["TEAM_ID"],"token":"TOKEN"}'
In the above commands, replace
TEAM_ID with your Atomist team ID, and
TOKEN with your GitHub token.
If your Kubernetes cluster uses RBAC (minikube does), you can deploy with the following commands
$ kubectl apply -f $ kubectl apply -f
If you get the following error when running the first command,
Error from server (Forbidden): error when creating "rbac.yaml": clusterroles.rbac.authorization.k8s.io "sdm-role" is forbidden: attempt to grant extra privileges: [...] user=&{YOUR_USER [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]
then your Kubernetes user does not have administrative privileges on your cluster. You will either need to ask someone who has admin privileges on the cluster to create the RBAC resources or try to escalate your privileges with the following command.
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin \ --user YOUR_USER
If you are running on GKE, you can supply your user name using the
gcloud utility.
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin \ --user $(gcloud config get-value account)
Then run the command to create the
kube/rbac.yaml resources again.
To deploy on clusters without RBAC, run the following commands
$ kubectl apply -f
If the logs from the pod have lines indicating a failure to access the Kubernetes API, then the default service account does not have read permissions to pods and you likely need to deploy using RBAC.
Once this SDM is running, here are some things to do:
If you have any Java or Node projects in your GitHub org, try linking one to a Slack channel (
@atomist link repo), and then push to it. You'll see Atomist react to the push, and the SDM might have some Goals it can complete.
Every organization has a different environment and different needs. Your software delivery machine is yours: change the code and do what helps you.
Atomist is about developing your development experience by using your coding skills. Change the code, restart, and see your new automations and changed behavior across all your projects, within seconds.
The kubernetesSoftwareDevelopmentMachine included here deploys to your Kubernetes cluster, using k8-automation, which you must run in your cluster. To deploy to Kubernetes using this SDM and k8-automation, set the
MACHINE_NAME environment variable to
k8sMachine before starting the SDM.
General support questions should be discussed in the
#support channel in our community Slack team at atomist-community.slack.com.
If you find a problem, please create an issue.
You will need to install [node][] to build and test this project.
Releases are handled via the SDM itself. Just press the release button in Slack or the Atomist dashboard.
Created by Atomist. Need Help? Join our Slack team. | https://developer.aliyun.com/mirror/npm/package/@atomist/clojure-sdm/v/0.1.0-pods.20180804232716 | CC-MAIN-2020-24 | refinedweb | 985 | 55.34 |
The TTbarAnalysis class tries to find a top antitop pair in the final state and books a number of histograms. More...
#include <TTbarAnalysis.h>
The TTbarAnalysis class tries to find a top antitop pair in the final state and books a number of histograms.
It only makes sense if hadronization and decays are switched off. However, if there is is no top quark pair in the final state then a warning is printed and nothing is booked. Some of the histograms will be sensitive to the initial state shower.
Definition at line 34 of file TTbar 83 of file TTbar 89 of file TTbar 108 of file TTbarAnalysis.h. | https://herwig.hepforge.org/doxygen/classHerwig_1_1TTbarAnalysis.html | CC-MAIN-2019-30 | refinedweb | 109 | 70.33 |
6.4: Use a Testing Framework
- Page ID
- 32396
Intent Encourage developers to write and use regression tests by providing a framework that makes it easy to develop, organize and run tests.
Problem
How do you encourage your team to adopt systematic testing?
This problem is difficult because:
Tests are boring to write.
Tests may require a considerable test data to be built up and torn down.
It may be hard to distinguish between test failures and unexpected errors.
Yet, solving this problem is feasible because:
Most tests follow the same basic pattern: create some test data, perform some actions, see if the results match your expectations, clean up the test data.
Very little infrastructure is needed to run tests and report failures and errors.
Solution
Use a testing framework that allows suites of tests to be composed from individual test cases.
Steps
Unit testing frameworks, like JUnit and SUnit [BG98], and various commercial test harness packages are available for most programming languages. If a suitable testing framework is not available for the programming language you are using, you can easily brew your own according to the following principles:
- The user must provide test cases that set up test data, exercise them, and make assertions about the results
- The testing framework should wrap test cases as tests which can distinguish between assertion failures and unexpected errors.
- The framework should provide only minimal feedback if tests succeed.
- Assertion failures should indicate precisely which test failed.
- Errors should result in more detailed feedback (such as a full stack trace).
- The framework should allow tests to be composed as test suites.
Tradeoffs
Pros
A testing framework simplifies the formulation of tests and encourages programmers to write tests and use them.
Cons
Testing requires commitment, discipline and support. You must convince your team of the need and benefits of disciplined testing, and you must integrate testing into your daily process. One way of supporting this discipline is to have one testing coach in your team; consider this when you Appoint a Navigator.
Example
JUnit is a popular testing framework for Java, which considerable enhances the basic scheme described above. Figure \(\PageIndex{1}\) shows that the framework requires users to define their tests as subclasses of
TestCase. Users must provide the methods
setUp(),
runTest() and
tearDown(). The default implementation of
setup() and
tearDown() are empty, and the default implementation of
runTest() looks for and runs a method which is the name of the test (given in the constructor). These user-supplied hook methods are then called by the
runBare() template method.
JUnit manages the reporting of failures and errors with the help of an additional
TestResult class. In the design of JUnit, it is an instance of
TestResult that actually runs the tests and logs errors or failures. In Figure \(\PageIndex{2}\) we see a scenario in which a
TestCase, in its run method, passes control to an instance of
TestResult, which in turn calls the
runBare template method of the
TestCase.
TestResult, which invokes the
runBaretemplate method of a
TestCase. The user only needs to provide the
setUp()and
tearDown()methods, and the test method to be invoked by
runTest().
TestCase additionally provides a set of different kinds of standard assertion methods, such as
assertEquals,
assertFails, and so on. Each of these methods throws an
AssertionFailedError, which can be distinguished from any other kind of exception.
In order to use the framework, we will typically define a new class, say
TestHashtable, that bundles a set of test suites for a given class,
Hashtable, that we would like to test. The test class should extend
junit.framework.TestCase:
import junit.framework.*; import java.util.Hashtable; public class TestHashtable extends TestCase {
The instance variables of the test class will hold the fixture - the actual test data:
private Hashtable boss; private String joe = "Joe"; private String mary = "Mary"; private String dave = "Dave"; private String boris = "Boris";
There should be constructor that takes the name of a test case as its parameter. Its behavior is defined by its superclass:
public TestHashtable(String name) { super(name); }
The
setUp() hook method can be overridden to set up the fixture. If there is any cleanup activity to be performed, we should also override
tearDown(). Their default implementations are empty.
protected void setUp() { boss = new Hashtable(); }
We can then define any number of test cases that make use of the fixture. Note that each test case is independent, and will have a fresh copy of the fixture. (In principle, we should design tests that not only exercise the entire interface, but the test data should cover both typical and boundary cases. The sample tests shown here are far from complete.)
Each test case should start with the characters “test":
public void testEmpty() { assert(boss.isEmpty()); assertEquals(boss.size(), 0); assert(!boss.contains(joe)); assert(!boss.containsKey(joe)); } public void testBasics() { boss.put(joe, mary); boss.put(mary, dave); boss.put(boris, dave); assert(!boss.isEmpty()); assertEquals(boss.size(), 3); assert(boss.contains(mary)); assert(!boss.contains(joe)); assert(boss.containsKey(mary)); assert(!boss.containsKey(dave)); assertEquals(boss.get(joe), mary); assertEquals(boss.get(mary), dave); assertEquals(boss.get(dave), null); }
You may provide a static method
suite() which will build an instance of
junit.framework.TestSuite from the test cases defined by this class:
public static TestSuite suite() { TestSuite suite = new TestSuite(); suite.addTest(new TestHashtable("testBasics")); suite.addTest(new TestHashtable("testEmpty")); return suite; } }
The test case class should be compiled, together with any class it depends on.
To run the tests, we can start up any one of a number of test runner classes provided by the JUnit framework, for instance
junit.ui.TestRunner (see Figure \(\PageIndex{3}\)).
This particular test runner expects you to type in the name of the test class. You may then run the tests defined by this class. The test runner will look for the suite method and use it to build an instance of
TestSuite. If you do not provide a static suite method, the test runner will automatically build a test suite assuming that all the methods named test* are test cases. The test runner then runs the resulting test suite. The interface will report how many tests succeeded (see Figure \(\PageIndex{4}\)). A successful test run will show a green display. If any individual test fails, the display will be red, and details of the test case leading to the failure will be given.
java.ui.TestRunner.
Rationale
A testing framework makes it easier to organize and run tests.
Hierarchically organizing tests makes it easier to run just the tests that concern the part of the system you are working on.
Known Uses
Testing frameworks exist for a vast number of languages, including Ada, ANT, C, C++, Delphi, .Net (all languages), Eiffel, Forte 4GL, GemStone/S, Jade, JUnit Java, JavaScript, k language (ksql, from kbd), Objective C, Open Road (CA), Oracle, PalmUnit, Perl, PhpUnit, PowerBuilder, Python, Rebol, ‘Ruby, Smalltalk, Visual Objects and UVisual Basic.
Beck and Gamma give a good overview in the context of JUnit [BG98]. | https://eng.libretexts.org/Bookshelves/Computer_Science/Programming_and_Computation_Fundamentals/Book%3A_Object-Oriented_Reengineering_Patterns_(Demeyer_Ducasse_and_Nierstrasz)/06%3A_Tests__Your_Life_Insurance/6.04%3A_Use_a_Testing_Framework | CC-MAIN-2022-40 | refinedweb | 1,174 | 55.24 |
advanced OGL and DX shader samples.
-psychopath
This is a discussion on Game Programming Links within the Game Programming forums, part of the General Programming Boards category; advanced OGL and DX shader samples. -psychopath...
M.Eng Computer Engineering CandidateB.Sc Computer Science
Robotics and graphics enthusiast.
I am just learning to do some gaming programming and found to be very helpful in learning a lot of the things that are needed to do the graphics, sound and controll portions in SDL . I haven't made it through all the of the tutorials, but I would recommend it.
Last edited by octavian; 02-17-2006 at 06:46 PM. Reason: more info
I would like to inform you about the new group for 2D Game Making Tutorials for newbies and not only.
These tutorials (I hope) might help and those that are not good in programming..
Anyone is welcome to add he's own tutorials.
All tutorials are in Open Document format, so you might need to download and use the free and excellent OpenOffice Suite.
Not sure if this has been posted here before but I just found it and I'm about to pop out so:
Nice meaty article about all things 3D (almost all). I'll certainly be giving this one a thorough read tonight!
Good class architecture is not like a Swiss Army Knife; it should be more like a well balanced throwing knife.
- Mike McShaffry
out of 100's that i have looked over there is not one that i would recomend because everone who writes a tutorial learns by text and i don't.
Last edited by D00M3D2; 04-10-2006 at 01:39 PM. Reason: miss spell
Tons of links to other sites:
And for terrain:
This site has many free game programming books availible for download. The reason that they are free is most likely because the books are slightly dated (ie 2003 instead of 2006).
The site doesn't just have game programming books, though. It has many other categories too, like php, html, css, regular C++, and much, much, more.....
You use cygwin? You have a problem compiling Nehe's opengl tutorial?
Nehe suggests to use VC++ to follow his tutorial but I searched everywhere for a way to compile it under cygwin when I came up with a solution:
Step1: Make sure you installed the right packages
cygwin has a nice collection of packages you can install and add with nothing but a click. Make sure you have the "opengl: OpenGL -related libraries" checked.
Step2: Make sure you have the opengl dll files in your windows\system32 folder
glut32.dll glu32.dll opengl32.dll or other appropriate ones when you make use of them.
Step3: Nehe's tutorials include an old library - The GLaux Library, which apparently is not supported anymore. An easy fix would be to comment out or delete the current including of the glaux header file and include the GLUT library instead:
#include <gl\glut.h>
Step4: Now to compile it under cygwin we will use the line:.Code:g++ -mwindows -mno-cygwin -o myprog.exe myprog.cpp -lglut32 -lglu32 -lopengl32
Thats it.
Disclaimer: I am merely a beginner - this short solution may not work for you, but I think it should. Anyway hope this helps anyone out there.
Great explanation of the Separating Axis Theorem (2D Collision Detection)..
Very nice utility written in VB that will take any normal bitmap, perform gnomonic projection on it and save as 6 separate bitmaps.
These 6 bitmaps then can be used as cube maps for skyboxes, environment mapping, etc - with no seams if you do it correctly.
Your original bitmap should have a 2:1 ratio.
Thought this may be a good link, i found it great when i was choosing an ide and need to set up directX. Its a tutorial for setting up DirectX with Visual C++ 2005 Express Edition
Anyone know any good 3d math sites that kind of focus on the math you can use practically within 3d rather than all encompassing theory?
hello,
did anyone purchase the tutorial cd from gametutorials ? is it worth 70 dollars? | http://cboard.cprogramming.com/game-programming/33318-game-programming-links-3.html | CC-MAIN-2015-35 | refinedweb | 692 | 71.95 |
I have a basic program that deletes a file. The problem I'm having is passing
the file name to the class. I've tried putting the file in the same directory
as the program, adding the full path to the file and putting it in the root
directory but neither worked. The class is able to read in the file name
but when it gets to the Delete method an exception is thrown when it tries
to create a new file object. I assume I'm not passing the parameter correctly.
Any help would be greatly appreciated.
Regards
Andrew
import java.io.*;
public class Delete {
public static void main(String[] args){
if(args.length != 1){
System.err.print("Usage: java Delete <file or directory>");
System.exit(0);
}
try{ Delete(args[0]); }
catch(IllegalArgumentException e){
System.err.print(e.getMessage());
}
}//end main
public static void Delete(String filename){
//create a file object to represent the file name
File f = new File(filename); /**ERROR HERE**/ | http://forums.devx.com/printthread.php?t=29096&pp=15&page=1 | CC-MAIN-2015-18 | refinedweb | 164 | 66.74 |
In this article, we will build a client-side routing system. Client-side routing is a type of routing where users navigate through an application where no full page reload occurs even when the page’s URL changes — instead, it displays new content.
To build this, we’ll need a simple server that will serve our
index.html file. Ready? Let’s begin.
First, set up a new node.js application and create the project structure:
npm init -y npm install express morgan nodemon --save touch server.js mkdir public && cd public touch index.html && touch main.js file cd ..
The
npm init command will create a
package.json file for our application. We’ll install
Express and
Morgan , which will be used in running our server and logging of our routes.
We’ll also create a
server.js file and a public directory where we will be writing our views. Nodemon will restart our application once we make any change in our files.
Setting up the server
Let’s create a simple server using Express by modifying the
server.js file:
const express = require('express'); const morgan = require('morgan'); const app = express(); app.use(morgan('dev')); app.use(express.static('public')) app.get('*', (req, res) => { res.sendFile(__dirname + '/public/index.html') }) app.listen(7000, () => console.log("App is listening on port 7000"))
Now we can start our application by running
nodemon server.js. Let’s create a simple boilerplate for our HTML:
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> </head> <body> <h1>Javascript Routing</h1> <div id="app"> </div> <script src="main.js"></script> </body> </html>
Here, we’ll link the
main.js file so that we can manipulate the DOM at any point in time.
Implementing the routing system
Let’s head over to the
main.js file and write all of our router logic. All our codes will be wrapped in the
window.onload so that they only execute the script once the webpage has completely loaded all of its content.
Next, we’ll create a router instance that’s a function with two parameters. The first parameter will be the name of the route and the second will be an array which comprises all our defined routes. This route will have two properties: the name of the route and the path of the route.
window.onload = () => { // get root div for rendering let root = document.getElementById('app'); //router instance let Router = function (name, routes) { return { name, routes } }; //create the route instance let routerInstance = new Router('routerInstance', [{ path: "/", name: "Root" }, { path: '/about', name: "About" }, { path: '/contact', name: "Contact" } ]) }
We can get the current route path of our page and display a template based on the route.
location.pathname returns the current route of a page, and we can use this code for our DOM:
let currentPath = window.location.pathname; if (currentPath === '/') { root.innerHTML = 'You are on Home page' } else { // check if route exist in the router instance let route = routerInstance.routes.filter(r => r.path === currentPath)[0]; if (route) { root.innerHTML = `You are on the ${route.name} path` } else { root.innerHTML = `This route is not defined` } }
We’ll use the
currentPath variable to check if a route is defined in our route instance. If the route exists, we’ll render a simple HTML template. If it doesn’t, we’ll display
This route is not defined on the page.
Feel free to display any form of error of your choice. For example, you could make it redirect back to the homepage if a route does not exist.
Adding router links
For navigation through the pages, we can add router links. Just like with Angular, you can pass a
routerLink that will have a value of the path you want to navigate to. To implement this, let’s add some links to our
index.html file :
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> </head> <body> <nav> <button router-Home</button> <button router-About</button> <button router-Contact</button> <button router-Error</button> </nav> <h1>Javascript Routing</h1> <div id="app"> </div> <script src="main.js"></script> </body> </html>
Notice the
router-link attribute that we passed in — this is what we will be using for our routing.
We’ll create a variable store all of
router-links and store it in an array:
let definedRoutes = Array.from(document.querySelectorAll('[router-link]'));
After storing our router-links in an array, we can iterate through them and add a click event listener that calls the
navigate() function:
//iterate over all defined routes definedRoutes.forEach(route => { route.addEventListener('click', navigate, false) })
Defining the navigate function
The navigate function will be using Javascript History API to navigate. The
history.pushState() method adds a state to the browser’s session history stack.
When the button is clicked, we’ll receive the router link attribute of that button and then use the
history.pushState() to navigate to that path, then change the HTML template rendered:
// method to navigate let navigate = e => { let route = e.target.attributes[0].value; // redirect to the router instance let routeInfo = routerInstance.routes.filter(r => r.path === route)[0] if (!routeInfo) { window.history.pushState({}, '', 'error') root.innerHTML = `This route is not Defined` } else { window.history.pushState({}, '', routeInfo.path) root.innerHTML = `You are on the ${routeInfo.name} path` } }
If a nav link has a router link that has not been defined in the
routeInstance, it will set the push state to
error and render
This route is not Defined on the template., you should consider storing routes in a separate file, which makes codes neater and easier to debug if there are any errors. Now, create a
routes.js file and extract the route constructor and router instance into this new file:
//router instance let Router = function (name, routes) { return { name, routes } }; let routerInstance = new Router('routerInstance', [{ path: "/", name: "Root" }, { path: '/about', name: "About" }, { path: '/contact', name: "Contact" } ]) export default routerInstance
Exporting this file makes it accessible to other JavaScript files. We can import it into our main.js file:
import routerInstance from './routes.js'
This will throw an error. To fix it, modify the script tag in the index.html file to this:
<script type="module" src="main.js"></script>
Adding the type of module specifies which variables and functions can be accessed outside the modules.
Conclusion
Understanding how to implement a routing system in Vanilla JavaScript makes it easier for developers to work with a framework routing library such as the Vue.js Router. Our code here can be reused in a single page application, which is perfect when you’re working without a framework. To get the source code, check.
6 Replies to “Building a JavaScript router using History API”
Interesting Read👏🏽
Overly simplified. Awesome
This is very enlightening. Great article Wisdom.
This is very helpful for my current project. Thank you
Awesome content man!
Interesting. Can I also make it reactive in this way, detecting a change in window.location? how would I do that? | https://blog.logrocket.com/building-a-javascript-router-using-history-api/ | CC-MAIN-2022-40 | refinedweb | 1,191 | 66.74 |
Code for this project is on github.
So you want to test in Clojure, huh? Well good thing you found this oasis of awesomeness. Getting a test up and running is pretty easy, but involves a few simple steps that I will stretch out to the point of pain.
First off, start a new project. Again simple. Just let Linegen to do if for you:
Doing this should create a new project space age tubular frame:
Still with me? Good. Now for the test creation.
As you can see, I added a file with a test. What you might notice, or cause panic, is that I don’t have a file to test with this test file test. And that’s ok, because we’re @#$%ing this $@#% up Test First style. Basically, create a test, and then create what it’s supposed to test. I won’t get into Test First here, but just wanted to calm your concerns.
Now to run the test. This is done by opening up a command window, navigating to the root folder for your project (It will have a project.clj file in it), and typing
lein test
Ruh roh. Looks like that file that isn’t in existence can’t be tested. Weird, right? Oh well, time to create it.
There are a couple points at this eh point. You will see that I added a file named “simple_math.clj” to the “test_example” folder.
Something I found out while I was creating this little gem of a tutorial; Apparently the convention for a folder is to use a _ where a class in a file uses a -. So as you can see, the folder “test_example” translates the namespace part “test-example”, and “simple_math.clj” becomes “simple-math”. From what I can tell, since I am pretty new to Clojure, lienigen will try to resolve the namespace of “test-directory.simple-math” to “test_directory/simple_math”. I assume this is part of the “convention over configuration” school of thought. Since I come from a .net background where conventions just don’t exist, it caught me off guard.
Anyways, since that is done, it’s time to run the test again.
One failure? Oh yeah, the junk test created by lienegin. Well, just get rid of that:
And run it again:
And things are looking a lot better. | https://byatool.com/tag/unit-testing/ | CC-MAIN-2021-25 | refinedweb | 392 | 84.57 |
A time series plot is useful for visualizing data values that change over time.
This tutorial explains how to create various time series plots using the seaborn data visualization package in Python.
Example 1: Plot a Single Time Series
The following code shows how to plot a single time series in seaborn:
import pandas as pd import matplotlib.pyplot as plt import seaborn as sns #create DataFrame df = pd.DataFrame({'date': ['1/2/2021', '1/3/2021', '1/4/2021', '1/5/2021', '1/6/2021', '1/7/2021', '1/8/2021'], 'value': [4, 7, 8, 13, 17, 15, 21]}) sns.lineplot(x='date', y='value', data=df)
Note that we can also customize the colors, line width, line style, labels, and titles of the plot:
#create time series plot with custom aesthetics sns.lineplot(x='date', y='value', data=df, linewidth=3, color='purple', linestyle='dashed').set(title='Time Series Plot') #rotate x-axis labels by 15 degrees plt.xticks(rotation=15)
Example 2: Plot Multiple Time Series
The following code shows how to plot multiple time series in seaborn:
import pandas as pd import matplotlib.pyplot as plt import seaborn as sns #create DataFrame df = pd.DataFrame({'date': ['1/1/2021', '1/2/2021', '1/3/2021', '1/4/2021', '1/1/2021', '1/2/2021', '1/3/2021', '1/4/2021'], 'sales': [4, 7, 8, 13, 17, 15, 21, 28], 'company': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']}) #plot multiple time series sns.lineplot(x='date', y='sales', hue='company', data=df)
Note that the hue argument is used to provide different colors to each line in the plot.
Additional Resources
The following tutorials explain how to perform other common functions in seaborn:
How to Add a Title to Seaborn Plots
How to Change Legend Font Size in Seaborn
How to Change the Position of a Legend in Seaborn | https://www.statology.org/seaborn-time-series/ | CC-MAIN-2021-39 | refinedweb | 313 | 60.35 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.