Hi Frederik,

I love how copy-pasting a glyph renders it in different ways in different contexts – like how it prints out the glif data if you try to copy-paste to a text field. I'm wondering about modifying this behavior in special cases. Specifically, looking at improvements for word-o-mat, I was wondering if it might be useful for the user to be able to copy-paste glyphs into the input text fields. (I'm not sure it's actually a good idea, but some people may try to do that.) But then of course I would want the glyph *name* to be pasted, not the XML data (which is what currently happens, as the text field is identified as a text-only context, I guess). So I'm wondering whether it is (a) possible and (b) actually desirable/a good idea/good form to make the glyph paste itself as a name instead of the XML data in such a case?

Thanks for your thoughts,
Nina

Hi Nina,

Space Center already does that, in all three input boxes. You could use that for word-o-mat, since it has support for pasting glif XML:

from lib.UI.spaceCenter.glyphSequenceEditText import GlyphSequenceEditText
from vanilla import *

class Test:

    def __init__(self):
        self.w = Window((200, 100))
        # the only tricky thing is that it is optimized
        # for the lower-level defcon font object...
        self.w.input = GlyphSequenceEditText((10, 10, -10, 22),
            CurrentFont().naked(),
            callback=self.inputChangedCallback)
        self.w.open()

    def inputChangedCallback(self, sender):
        print sender.get()

Test()

FYI: RoboFont writes different representations to the pasteboard; anything that responds to a paste action can read its preferred representation. It is possible to add your own representations if you subscribe to the paste notification: just add or rewrite whatever you like :)
http://doc.robofont.com/forums/topic/modify-glyph-copypaste-behavior-for-custom-contexts/
Abstract class defining the base interface for all GUIs.

#include <StelGuiBase.hpp>

[virtual] Add a new action managed by the GUI. This method should be used to add new shortcuts to the program. Reimplemented in StelGui and StelNoGui.

[pure virtual] Add a new progress bar in the lower right corner of the screen. When the progress bar is deleted, the layout is automatically rearranged. Implemented in StelGui and StelNoGui.

Get a pointer to an action managed by the GUI. Reimplemented in StelNoGui.

Show whether the GUI is currently being used. This can then be used to optimize the rendering to increase reactivity.

Show whether the GUI is visible.
http://stellarium.org/doc/0.11.2/classStelGuiBase.html
Declaration of a function and how to use it

@mrjj Sir, I am improving it again, as follows:

void MainWindow::on_pushButton_6_clicked()
{
    /* Mat src, dst;
    // char* source_window = "Source image";
    // char* equalized_window = "Equalized Image";
    /// Load image
    src = imread( "/home/pi/Desktop/b4.jpg" );
    /// Convert to grayscale
    cvtColor( src, src, CV_BGR2GRAY );
    /// Apply Histogram Equalization
    equalizeHist( src, dst );
    /// Display results
    namedWindow( "source", CV_WINDOW_NORMAL );
    namedWindow( "equalized", CV_WINDOW_NORMAL );
    imshow( "source", src );
    imshow( "equalized", dst );
    /// Wait until user exits the program
    // waitKey(0); */

    Mat src_base, hsv_base;
    Mat src_test1, hsv_test1;
    Mat src_test2, hsv_test2;
    Mat hsv_half_down;

    /// Load three images with different environment settings
    src_base = imread( "/home/pi/Desktop/b4.jpg" );
    src_test1 = imread( "/home/pi/Desktop/b4.jpg" );
    src_test2 = imread( "/home/pi/Desktop/b3.jpg" );

    /// Convert to HSV
    cvtColor( src_base, hsv_base, COLOR_BGR2HSV );
    cvtColor( src_test1, hsv_test1, COLOR_BGR2HSV );
    cvtColor( src_test2, hsv_test2, COLOR_BGR2HSV );

    hsv_half_down = hsv_base( Range( hsv_base.rows/2, hsv_base.rows - 1 ), Range( 0, hsv_base.cols - 1 ) );

    /// Using 50 bins for hue and 60 for saturation
    int h_bins = 50; int s_bins = 60;
    int histSize[] = { h_bins, s_bins };

    // hue varies from 0 to 179, saturation from 0 to 255
    float h_ranges[] = { 0, 180 };
    float s_ranges[] = { 0, 256 };
    const float* ranges[] = { h_ranges, s_ranges };

    // Use the 0-th and 1-st channels
    int channels[] = { 0, 1 };

    /// Histograms
    MatND hist_base;
    MatND hist_half_down;
    MatND hist_test1;
    MatND hist_test2;

    /// Calculate the histograms for the HSV images
    calcHist( &hsv_base, 1, channels, Mat(), hist_base, 2, histSize, ranges, true, false );
    normalize( hist_base, hist_base, 0, 1, NORM_MINMAX, -1, Mat() );

    calcHist( &hsv_half_down, 1, channels, Mat(), hist_half_down, 2, histSize, ranges, true, false );
    normalize( hist_half_down, hist_half_down, 0, 1, NORM_MINMAX, -1, Mat() );

    calcHist( &hsv_test1, 1, channels, Mat(), hist_test1, 2, histSize, ranges, true, false );
    normalize( hist_test1, hist_test1, 0, 1, NORM_MINMAX, -1, Mat() );

    calcHist( &hsv_test2, 1, channels, Mat(), hist_test2, 2, histSize, ranges, true, false );
    normalize( hist_test2, hist_test2, 0, 1, NORM_MINMAX, -1, Mat() );

    /// Apply the histogram comparison methods
    for( int i = 0; i < 4; i++ )
    {
        int compare_method = i;
        double base_base = compareHist( hist_base, hist_base, compare_method );
        double base_half = compareHist( hist_base, hist_half_down, compare_method );
        double base_test1 = compareHist( hist_base, hist_test1, compare_method );
        double base_test2 = compareHist( hist_base, hist_test2, compare_method );

        printf( " Method [%d] Perfect, Base-Half, Base-Test(1), Base-Test(2) : %f, %f, %f, %f \n", i, base_base, base_half, base_test1, base_test2 );
    }
}

void MainWindow::on_onTrackball_clicked()
{
    Mat src, src_gray;
    Mat dst, detected_edges;

    int edgeThresh = 1;
    int lowThreshold;
    int const max_lowThreshold = 100;
    int ratio = 3;
    int kernel_size = 3;
    char* window_name = "Edge Map";

    /// Load image
    src = imread( "/home/pi/Desktop/b4.jpg" );

    /// Create a matrix of the same type and size as src (for dst)
    dst.create( src.size(), src.type() );

    /// Convert the image to grayscale
    cvtColor( src, src_gray, CV_BGR2GRAY );

    /// Create a window
    namedWindow( window_name, CV_WINDOW_NORMAL );

    /// Create a Trackbar for user to enter threshold
    // createTrackbar( "Min Threshold:", window_name, &lowThreshold, max_lowThreshold, CannyThreshold );

    /// Show the image
    CannyThreshold(0, 0);

    /// Separate the image in 3 planes ( B, G and R )
    vector<Mat> bgr_planes;
    split( src, bgr_planes );

    /// Establish the number of bins
    int histSize = 256;

    /// Set the ranges ( for B,G,R )
    float range[] = { 0, 256 };
    const float* histRange = { range };

    bool uniform = true; bool accumulate = false;

    Mat b_hist, g_hist, r_hist;

    /// Compute the histograms:
    calcHist( &bgr_planes[0], 1, 0, Mat(), b_hist, 1, &histSize, &histRange, uniform, accumulate );
    calcHist( &bgr_planes[1], 1, 0, Mat(), g_hist, 1, &histSize, &histRange, uniform, accumulate );
    calcHist( &bgr_planes[2], 1, 0, Mat(), r_hist, 1, &histSize, &histRange, uniform, accumulate );

    // Draw the histograms for B, G and R
    int hist_w = 512; int hist_h = 400;
    int bin_w = cvRound( (double) hist_w/histSize );

    Mat histImage( hist_h, hist_w, CV_8UC3, Scalar( 0,0,0) );

    /// Normalize the result to [ 0, histImage.rows ]
    normalize(b_hist, b_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
    normalize(g_hist, g_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
    normalize(r_hist, r_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );

    /// Draw for each channel
    for( int i = 1; i < histSize; i++ )
    {
        line( histImage, Point( bin_w*(i-1), hist_h - cvRound(b_hist.at<float>(i-1)) ),
              Point( bin_w*(i), hist_h - cvRound(b_hist.at<float>(i)) ),
              Scalar( 255, 0, 0), 2, 8, 0 );
        line( histImage, Point( bin_w*(i-1), hist_h - cvRound(g_hist.at<float>(i-1)) ),
              Point( bin_w*(i), hist_h - cvRound(g_hist.at<float>(i)) ),
              Scalar( 0, 255, 0), 2, 8, 0 );
        line( histImage, Point( bin_w*(i-1), hist_h - cvRound(r_hist.at<float>(i-1)) ),
              Point( bin_w*(i), hist_h - cvRound(r_hist.at<float>(i)) ),
              Scalar( 0, 0, 255), 2, 8, 0 );
    }

    /// Display
    namedWindow("calcHist Demo", CV_WINDOW_AUTOSIZE );
    imshow("calcHist Demo", histImage );
    imwrite("calcHist Demo2.jpg", histImage);
    waitKey(-1);
}
...

This is the new, improved code, but there are errors / build issues:

1. In member function 'void MainWindow::on_onTrackball_clicked()': 'CannyThreshold' was not declared in this scope (a function which I declare and define in the .h and .cpp files). Unused variable 'edgeThresh'.

@gauravsharma0190 said: CannyThreshold, this function? The CannyThreshold that you call with CannyThreshold(0, 0) - and do you include the .h file where it is defined?

///ontrack.h///
#ifndef __ONTRACK_h
#define __ONTRACK_h

void cannyThresold(int, void*);

#endif
......
///ontrack.cpp
#include "ontrack.h"
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
);
}
.....

Thus I include ontrack.h in the mainwindow.cpp code, but an error occurs. Why? I can't understand.

I don't understand it either. Seems fine. Try to define a test function in ontrack.h, void mytest();, and call that from main. If that is still not known, it means that mainwindow does not really include ontrack.h for some reason.

@gauravsharma0190 You call it like this: CannyThreshold(0, 0); and in ontrack.cpp it is defined like void CannyThreshold(int, void*). But in ontrack.h it is void cannyThresold(int, void*); please fix the name of the function.

@gauravsharma0190 "But again error" - why don't you say what error you get now?

@jsulm Yes, it says: undefined reference to 'CannyThresold(int,void*)'. collect2: error: ld returned 1 exit status.

@gauravsharma0190 Is ontrack.cpp part of your project and is it built? "undefined reference to 'CannyThresold(int,void*)'" means the linker cannot find CannyThresold(int, void*).

@gauravsharma0190 But did you add them to your project? They should be in the PRO file, like:

SOURCES += main.cpp \
    mainwindow.cpp \
    ontrack.cpp
HEADERS += mainwindow.h \
    ontrack.h

@gauravsharma0190 After changing the PRO file you need to rerun qmake (right-click on the project in Qt Creator and then "Run qmake"). Then do a rebuild.

@gauravsharma0190 Well, I guess you have many issues there...

@jsulm Yes. Are they function issues or something else? I can't understand it; everything seems all right, so why are all these issues there? I will send you these issues.

@gauravsharma0190 Here are all the errors ...
/home/pi/Desktop/qt/Desktop1/onTrackBall.cpp:9: error: 'blur' was not declared in this scope
    blur(gray, edge, Size(3,3));
/home/pi/Desktop/qt/Desktop1/onTrackBall.cpp:6: error: 'Mat' does not name a type
    Mat image, gray, edge, cedge;
/home/pi/Desktop/qt/Desktop1/onTrackBall.cpp:7: error: 'void CannyThreshold(int, void*)' was declared 'extern' and later 'static' [-fpermissive]
    static void CannyThreshold(int, void*)
/home/pi/Desktop/qt/Desktop1/onTrackBall.cpp:-1: In function 'void CannyThreshold(int, void*)':
/home/pi/Desktop/qt/Desktop1/onTrackBall.cpp:9: error: 'edge' was not declared in this scope
    blur(gray, edge, Size(3,3));
/home/pi/Desktop/qt/Desktop1/onTrackBall.cpp:9: error: 'blur' was not declared in this scope
    blur(gray, edge, Size(3,3));
/home/pi/Desktop/qt/Desktop1/onTrackBall.cpp:12: error: 'Canny' was not declared in this scope
    Canny(edge, edge, edgeThresh, edgeThresh*3, 3);
/home/pi/Desktop/qt/Desktop1/onTrackBall.cpp:13: error: 'cedge' was not declared in this scope
    cedge = Scalar::all(0);
/home/pi/Desktop/qt/Desktop1/onTrackBall.cpp:15: error: 'src' was not declared in this scope
    src.copyTo(cedge, edge);
/home/pi/Desktop/qt/Desktop1/onTrackBall.cpp:16: error: 'imshow' was not declared in this scope
    imshow("Edge map", cedge);
/usr/local/include/opencv2/highgui/highgui.hpp:78: note: 'cv::imshow'
    CV_EXPORTS_W void imshow(const string& winname, InputArray mat);

@jsulm Hey, I have posted the errors. I can't understand why I am getting these errors; everything seems all right.

- jsulm Qt Champions 2018 last edited by jsulm

@gauravsharma0190 Nothing is all right with this code. "was not declared in this scope" - you have to check where those names are declared, or why they are not declared (blur, edge, Mat, ...). For example: cedge = Scalar::all(0); - where is cedge declared?
You're trying to access a variable which was not declared. This one says that you first declared the function extern and later static. Why? Since it is just a function, it has to be neither static nor extern!

error: 'void CannyThreshold(int, void*)' was declared 'extern' and later 'static' [-fpermissive]
static void CannyThreshold(int, void*)

You really have to fix your code! And I really don't understand why you think your code is OK, although there are really basic issues (not related to Qt at all)!

@jsulm I declared all these variables inside the onTrack.cpp file. blur is a function which is in the imgproc.hpp header. Canny is also a function, which is declared in the highgui.h header.

@gauravsharma0190 Can you post onTrack.cpp and onTrack.h?

#include "onTrack.h"
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int edgeThresh = 1;
//Mat image, gray, edge, cedge;

void CannyThreshold(int, void*)
{
    Mat image, gray, edge, cedge;
    blur(gray, edge, Size(3,3));
    // Run the edge detector on grayscale
    Canny(edge, edge, edgeThresh, edgeThresh*3, 3);
    cedge = Scalar::all(0);
    src.copyTo(cedge, edge);
    imshow("Edge map", cedge);
}

.h file:

#define ONTRACKBALL_H_
int CannyThreshold(int, void*);
#endif
//your code here

Mat is an OpenCV function which is declared in the highgui.h header. All these functions are present in the headers which I added to the code.

@gauravsharma0190 What is Mat and where is it declared?

@gauravsharma0190 Didn't you forget using namespace cv; in the cpp file?

@jsulm @gauravsharma0190 Hey jsulm, thanks very much, I did it without error. Now a new problem arises: when I click the push button it says "The program has unexpectedly finished. /home/pi/Desktop/qt/build-Desktop1-GCC Local/Desktop crashed."

@gauravsharma0190 Most probably it crashes because of an invalid pointer. Debug your application step by step to see where exactly it crashes.
@jsulm Hello sir, I debugged the program, but when I press the button it says the inferior stopped because it received a signal from the OS. Signal name: SIGABRT. Signal meaning: Aborted. Now how do I solve it?

- jsulm Qt Champions 2018 last edited by jsulm

@gauravsharma0190 Put a breakpoint in the slot which is called when you press the button and check where exactly it crashes.

It crashes in the definition of the function ontrack(), at: Canny(edge, edge, edgeThresh, edgeThresh*3, 3); A kernel_size issue, I think: int kernel_size = 3; here.

@jsulm It crashes when I press the push button. I debugged it and found it crashes when my function is called. Error found: it receives a signal from the OS. Now how to fix it?

@gauravsharma0190 How to fix what? What do you mean by "Error found->it takes signal from OS" - what OS signal do you mean? Do you mean SIGABRT? I don't know how to fix it because I have no idea what the problem is. But usually such problems are caused by bugs in applications. You have to check your code and see what is wrong at the line where your application crashes.

@gauravsharma0190 Can you post the code and mark the line where it crashes?
void MainWindow::on_onTrackball_clicked()
{
    int threshold_value = 0;
    int threshold_type = 3;
    int const max_value = 255;
    int const max_type = 4;
    // int const max_BINARY_value = 255;

    Mat src, src_gray, dst;
    // char* window_name = "Threshold Demo";
    // char* trackbar_type = "Type: \n 0: Binary \n 1: Binary Inverted \n 2: Truncate \n 3: To Zero \n 4: To Zero Inverted";
    // char* trackbar_value = "Value";

    /// Load image
    src = imread( "/home/pi/Desktop/b4.jpg" );

    /// Convert the image to Gray
    cvtColor( src, src_gray, CV_BGR2GRAY );

    /// Create a window to display results
    namedWindow( "Threshold", CV_WINDOW_AUTOSIZE );

    /// Create Trackbar to choose type of Threshold
    createTrackbar( "Type: \n 0: Binary \n 1: Binary Inverted \n 2: Truncate \n 3: To Zero \n 4: To Zero Inverted", "Threshold", &threshold_type, max_type, Threshold_Demo );
    createTrackbar( "value", "Threshold", &threshold_value, max_value, Threshold_Demo );

    /// Call the function to initialize
    Threshold_Demo( 0, 0 );   // <-- crashes here
}

onTrack.h:

#ifndef ONTRACKBALL_H_
#define ONTRACKBALL_H_

void Threshold_Demo( int, void* );

#endif

ontrack.cpp:

void Threshold_Demo( int, void* )
{
    /* 0: Binary
       1: Binary Inverted
       2: Threshold Truncated
       3: Threshold to Zero
       4: Threshold to Zero Inverted
    */
    threshold( src_gray, dst, threshold_value, max_BINARY_value, threshold_type );   // <-- error here
    imshow( "Threshold", dst );
}

@gauravsharma0190 What are src_gray, dst, threshold_value, max_BINARY_value, threshold_type? Where are they defined and initialized? Please note: these are NOT the same as the ones in MainWindow::on_onTrackball_clicked()!

@jsulm Hey, good morning. These are defined in the same function's .cpp files. Yes, these are not in MainWindow.

@gauravsharma0190 Did you initialize these variables?
Do they have correct values when you call threshold()?
https://forum.qt.io/topic/69239/declaration-of-function-and-use-it/61
Of all the reasons Python is a hit with developers, one of the biggest is its broad and ever-expanding selection of third-party packages. Convenient toolkits for everything from ingesting and formatting data to high-speed math and machine learning are just an import or pip install away. But what happens when those packages don’t play nice with each other? What do you do when different Python projects need competing or incompatible versions of the same add-ons? That’s where Python virtual environments come into play. You can create and work with virtual environments in both Python 2 and Python 3, though the tools are different. Virtualenv is the tool of choice for Python 2, while venv handles the task in Python 3.

What are Python virtual environments?

A virtual environment is a way to have multiple, parallel instances of the Python interpreter, each with different package sets and different configurations. Each virtual environment contains a discrete copy of the Python interpreter, including copies of its support utilities.
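As a minimal sketch of the Python 3 workflow using the built-in venv module (the environment name `myenv` is arbitrary; POSIX shell shown, Windows uses `myenv\Scripts\activate` instead):

```shell
# Create an isolated environment in the directory "myenv"
python3 -m venv myenv

# Activate it; python and pip now resolve inside the environment
. myenv/bin/activate

# Confirm the interpreter comes from the environment, not the system
command -v python

# Leave the environment when done
deactivate
```

Packages installed with pip while the environment is active go into `myenv` only, so two projects with conflicting dependencies can each keep their own environment.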
https://www.infoworld.com/article/3239675/virtualenv-and-venv-python-virtual-environments-explained.html
One of the chief complaints of anyone using XSLT in .NET is that we are stuck on XSLT 1.0. While 1.0 provides a lot of capability, it is easy to run headlong into the shortcomings of the language. For a number of reasons which I’ll never understand, Microsoft has not chosen to support XSLT 2.0 or XPath 2.0 in .NET, which forces developers to either live with the limitations of 1.0 or use a third-party XSLT engine. Neither of those options is really great.

There are a number of modules in DotNetNuke which rely heavily on XSLT for advanced formatting: Reports, Form & List and the XML module are three that come immediately to mind. It would be great if we could break out of the limitations imposed by the reliance on XSLT 1.0.

Well, in fact you can. .NET has supported the concept of XSLT extension objects for quite a long time. Essentially, extension objects are .NET code that you can call from within your XSLT. With this capability you can easily code whatever functionality is missing from XSLT 1.0. This is just what a group of Microsoft MVPs did with the EXSLT.NET module, which is a .NET implementation of EXSLT. Much of the work done in EXSLT was subsequently incorporated in XSLT 2.0.

In order to use the EXSLT.NET module, your application needs to tell the .NET XslCompiledTransform class where the assembly is that contains your custom extensions. This is done using the XsltArgumentList. Implementing this code is rather trivial:

// Create XslCompiledTransform and load the stylesheet.
XslCompiledTransform xslt = new XslCompiledTransform();
xslt.Load("blogaboutme.xsl");

// Create an XsltArgumentList.
XsltArgumentList xslArg = new XsltArgumentList();

// Add an object to calculate the new book price.
Extension obj = new DotNetNuke.Demo.Xslt.Extension();
xslArg.AddExtensionObject("DemoXslt", obj);

using (XmlWriter w = XmlWriter.Create("output.xml"))
{
    // Transform the file.
    xslt.Transform("blog.xml", xslArg, w);
}

Of course, unless you plan to write your own module to handle XSLT transformations, this isn’t really all that useful. The useful part is being able to apply this knowledge to an existing module that supports XSLT extensions. The Reports module is the only module currently distributed with DotNetNuke which supports extension objects. The 6.0 version of the XML module will also support extension objects, and I am hoping that Form & List will also support this feature in its upcoming release.

Let’s look at an example of using extension objects with the Reports module. In my example I am not going to focus on the XML data side of the equation. I have installed the latest version of the Reports module and added it to my page. While logged in as a SuperUser, I created a new dummy query, Select Test=’’, just so it would generate a little bit of XML:

<DocumentElement>
  <QueryResults>
    <Test />
  </QueryResults>
</DocumentElement>

I want to make sure that I am using the XSL Transformation Visualizer. For my particular case I am going to pull some data from a user’s profile and then output this information in my XSLT. I’m also going to generate a URL for displaying the Gravatar associated with a particular email address. The first step is that I’ll need to define a class in .NET that includes a method to get a user's biography and a method to get the Gravatar URL.
Imports DotNetNuke.Entities.Users
Imports System.Text
Imports System.Security.Cryptography

Namespace DotNetNuke.Demo.Xslt

    Public Class Extension

        Public Function GetUserBio(ByVal portalid As Integer, ByVal UserId As Integer) As String
            Dim blogger As UserInfo = UserController.GetUserById(portalid, UserId)
            Dim bio As String = blogger.Profile.GetProperty("Biography").PropertyValue
            Return System.Web.HttpUtility.HtmlDecode(bio)
        End Function

        Public Function GetGravatarImage(ByVal email As String, ByVal size As Integer) As String
            Return String.Format("http://www.gravatar.com/avatar/{0}?s={1}", GetMd5Hash(email.ToLower), size)
        End Function

        Private Function GetMd5Hash(ByVal input As String) As String
            Dim md5 As MD5CryptoServiceProvider = New MD5CryptoServiceProvider()
            Dim bs As Byte() = Encoding.UTF8.GetBytes(input)
            bs = md5.ComputeHash(bs)
            Dim s As StringBuilder = New StringBuilder()
            For Each b As Byte In bs
                s.Append(b.ToString("x2").ToLower())
            Next
            Return s.ToString
        End Function

    End Class

End Namespace

In this example I am returning simple strings. If I wanted to return a node set, I would return an XPathNodeIterator, which is obtained from an XPathNavigator. For my project, I’ve compiled this class into a DemoXSLT assembly and added this assembly to the /bin folder of my DotNetNuke website. At this point I can add this new extension object to my report definition.

It is important to remember what namespace you assigned to the .NET type so that you can call it from your XSLT. We now have all of the pieces in place and just need to create our XSLT:

<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:demo="DemoXslt">
  <xsl:output method="html" />
  <xsl:template match="/">
    <div class="section" id="about-me">
      <h4>About Me</h4>
      <div>
        <xsl:value-of select="demo:GetUserBio(...)" />
      </div>
      <div>
        <img alt="Blogger Gravatar" src="{demo:GetGravatarImage(...)}" />
      </div>
    </div>
  </xsl:template>
</xsl:stylesheet>

To use the extension object, I’ve defined a custom namespace ("demo") which aliases the URN that I defined in the Reports module.
Now if I want to call a method in my extension object, I just prefix the method name with the "demo:" namespace. If I then add a few CSS classes to my portals.css, I can generate a nice-looking author block for my blog using the biography from my user profile and my Gravatar image. This is just one of the many techniques that I’ll be showing in my session "A Closer Look at DotNetNuke Core Modules" at CMS Expo.
http://www.dnnsoftware.com/community-blog/cid/136391
A context manager that creates savepoints

Project description

A context manager that creates savepoints, avoiding recalculating expensive parts of the code. An example:

from savepoint import SavePoint

a = 10
b = 20
# do some expensive calculation here
with SavePoint("stuff.p"):
    print "doing stuff"
    a += 10
    c = 30
print a, b, c

The first time the code is run, the with block is executed and the modified scope is pickled to stuff.p. Subsequent calls will update the global scope from the pickle file and skip the block completely.
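SavePoint skips the whole block by manipulating the calling frame, which is hard to show briefly, but the underlying idea (run once, pickle the result, then reload instead of recomputing) can be sketched with an ordinary decorator. The names `savepoint` and `expensive` below are illustrative and are not part of the savepoint package:

```python
import os
import pickle

def savepoint(path):
    """Run the wrapped function once and pickle its result to `path`;
    later calls load the pickle and skip the computation entirely."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            if os.path.exists(path):
                with open(path, "rb") as f:
                    return pickle.load(f)
            result = fn(*args, **kwargs)
            with open(path, "wb") as f:
                pickle.dump(result, f)
            return result
        return wrapper
    return decorator

@savepoint("stuff.p")
def expensive():
    print("doing stuff")  # only seen while stuff.p does not exist yet
    return 10 + 10, 30

a, c = expensive()   # first call: computes and writes stuff.p
a, c = expensive()   # second call: loaded from the pickle file
print(a, c)          # → 20 30
```

Unlike SavePoint, this caches a return value rather than the block's whole local scope, but the expensive work is skipped on repeat runs in the same way.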
https://pypi.org/project/savepoint/
K-Means Clustering Applied to GIS Data

Tools, Clustering, Machine Learning. Posted by Spencer Norris, ODSC, October 11, 2018.

Here, we use k-means clustering with GIS data. GIS can be intimidating to data scientists who haven't tried it before, especially when it comes to analytics. On its face, mapmaking seems like a huge undertaking. Plus, esoteric lingo and strange datafile encodings can create a significant barrier to entry for newbies. There's a reason why there are experts who dedicate their careers strictly to GIS and cartography. However, that doesn't mean it's completely inaccessible to the layman. In point of fact, most GIS tools make it very easy to create gorgeous maps. I made this map in five minutes using QGIS and public data from the United States Geological Survey's Wind Turbine Database. Each point is a wind turbine, encoded as a GeoJSON object. And all I had to do was drag and drop the GeoJSON file into the QGIS GUI.

GIS Analysis

The extra hurdle for many practitioners is how to apply analysis techniques to GIS data, but it's surprisingly straightforward. The key insight here is that GIS data typically boils down to projections on a transformed space, which you can plot in two dimensions. In other words, it's just two axes and two continuous variables, which makes it incredibly easy to adapt our existing machine learning and data mining methods to this space. If you've built a machine learning pipeline in the past, the hardest part will be setting up the infrastructure that you've already built a hundred times. Your features are just the coordinates and whatever other columns you want to append to your data. To illustrate this point, I ran k-means clustering against the dataset used to create the map above, then plotted the points. Feel free to lift this code!
import json
from sklearn.cluster import KMeans
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure

with open('uswtdb_v1_1_20180710.geojson') as f:
    data = json.load(f)

coordinates = [feature['geometry']['coordinates'] for feature in data['features']]
coordinates = np.array(coordinates)

# Train model
kmeans = KMeans(n_clusters=5)
kmeans.fit(coordinates)

# Plot clusters
figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.scatter(coordinates[:, 0], coordinates[:, 1],
            c=kmeans.predict(coordinates), s=50, cmap='viridis')
centers = kmeans.cluster_centers_
plt.xlim(min(coordinates[:, 0]) - 10, -50)
plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5)

I admittedly committed a cardinal sin in this code snippet; you're never supposed to test an algorithm against the data used to train it. However, since I'm just trying to discover clusters on a limited dataset and am not using this for future predictions (and since this is only a small demo), this is fine for our purposes. Notice that the GeoJSON file can be read as JSON, because that's all it is. GeoJSON is just a particular expression of standard JSON objects that encodes points and shapes. You can use your standard JSON libraries to parse and analyze it, as well as append new attributes to each of the objects.

Be aware of map types

There's one pitfall you should be wary of if you decide to try machine learning with GIS data. The projection of the map for which the data was encoded may be an important factor for accuracy-sensitive applications. Not all maps are the same: the Mercator projection is the view we're most familiar with, since every user-facing mapping application uses it. Whenever you open Google Maps, you're looking at Gerardus Mercator's view of the world. However, if data is encoded using a different projection, it will affect the coordinates of individual points and the geometry of shapes.
This can have dramatic effects on the outcome of machine learning applications in practice. Ultimately, whatever projection you decide to use will depend on your application; just don't expect someone with a different map to get the same results. Machine learning on geographic data is very simple and can open new possibilities for the enterprising practitioner. Give it a shot and see what you can find out using publicly-available datasets from data.gov or other asset collections. Ready to learn more data science skills and techniques in person? Register for ODSC West this October 31 – November 3 now and hear from world-renowned names in data science and artificial intelligence!
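To make the projection point concrete: in unprojected latitude/longitude coordinates, one degree of longitude covers less ground the farther you move from the equator, so the Euclidean distance k-means minimizes on raw coordinates is geographically distorted. A rough spherical-Earth estimate (the 111.32 km-per-degree figure is an equatorial approximation):

```python
import math

def lon_degree_km(lat_deg, km_per_deg_at_equator=111.32):
    """Approximate ground length of one degree of longitude at a given
    latitude, assuming a spherical Earth."""
    return km_per_deg_at_equator * math.cos(math.radians(lat_deg))

for lat in (0, 30, 45, 60):
    print(f"1 degree of longitude at {lat}N spans about {lon_degree_km(lat):.1f} km")
    # at 60N this is roughly half the equatorial value (~55.7 km vs ~111.3 km)
```

For distance-sensitive work you would reproject to an equal-area or locally accurate projection first; for the rough cluster discovery in this demo, raw coordinates are tolerable.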
https://opendatascience.com/k-means-clustering-applied-to-gis-data/
The random forest is an ensemble type, i.e. multiple decision trees, resulting in a forest of trees, hence the name "Random Forest". The random forest algorithm can be used for both regression and classification tasks.

How the Random Forest Algorithm Works

The following are the basic steps involved in performing the random forest algorithm:

- Pick N random records from the dataset.
- Build a decision tree based on these N records.
- Choose the number of trees you want in your algorithm and repeat steps 1 and 2.
- In the case of a regression problem, for a new record, each tree in the forest predicts a value for Y (the output). The final value can be calculated by taking the average of all the values predicted by all the trees in the forest. Or, in the case of a classification problem, each tree in the forest predicts the category to which the new record belongs. Finally, the new record is assigned to the category that wins the majority vote.

Advantages of using Random Forest

As with any algorithm, there are advantages and disadvantages to using it. In the next two sections we'll take a look at the pros and cons of using random forest for classification and regression.

- The random forest algorithm is not biased, since there are multiple trees and each tree is trained on a subset of the data. Basically, the random forest algorithm relies on the power of "the crowd"; therefore the overall bias of the algorithm is reduced.
- This algorithm is very stable. Even if a new data point is introduced in the dataset, the overall algorithm is not affected much, since new data may impact one tree, but it is very hard for it to impact all the trees.
- The random forest algorithm works well when you have both categorical and numerical features.
- The random forest algorithm also works well when data has missing values or has not been scaled well (although we have performed feature scaling in this article just for the purpose of demonstration).
Disadvantages of using Random Forest

- A major disadvantage of random forests lies in their complexity. They require much more computational resources, owing to the large number of decision trees joined together.
- Due to their complexity, they require much more time to train than other comparable algorithms.

Throughout the rest of this article we will see how Python's Scikit-Learn library can be used to implement the random forest algorithm to solve regression, as well as classification, problems.

Part 1: Using Random Forest for Regression

In this section we will study how random forests can be used to solve regression problems using Scikit-Learn. In the next section we will solve a classification problem via random forests.

Problem Definition

The problem here is to predict the gas consumption (in millions of gallons) in 48 of the US states based on petrol tax (in cents), per capita income (dollars), paved highways (in miles) and the proportion of the population with a driving license.

Solution

To solve this regression problem we will use the random forest algorithm via the Scikit-Learn Python library. We will follow the traditional machine learning pipeline to solve this problem. Follow these steps:

1. Import Libraries

Execute the following code to import the necessary libraries:

import pandas as pd
import numpy as np

2. Importing Dataset

The dataset for this problem is available at:

For the sake of this tutorial, the dataset has been downloaded into the "Datasets" folder of the "D" drive. You'll need to change the file path according to your own setup. Execute the following command to import the dataset:

dataset = pd.read_csv('D:\Datasets\petrol_consumption.csv')

To get a high-level view of what the dataset looks like, execute the following command:

dataset.head()

We can see that the values in our dataset are not very well scaled. We will scale them down before training the algorithm.

3. Preparing Data For Training

Two tasks will be performed in this section.
The first task is to divide data into 'attributes' and 'label' sets. The resultant data is then divided into training and test sets.

The following script divides data into attributes and labels:

X = dataset.iloc[:, 0:4].values
y = dataset.iloc[:, 4].values

Finally, let's divide the data into training and testing sets:

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

4. Feature Scaling

We know our dataset is not yet scaled; for instance the Average_Income field has values in the range of thousands while Petrol_tax has values in the range of tens. Therefore, it would be beneficial to scale our data (although, as mentioned earlier, this step isn't as important for the random forest algorithm). To do so, we will use Scikit-Learn's StandardScaler class. Execute the following code:

# Feature Scaling
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

5. Training the Algorithm

Now that we have scaled our dataset, it is time to train our random forest algorithm to solve this regression problem. Execute the following code:

from sklearn.ensemble import RandomForestRegressor

regressor = RandomForestRegressor(n_estimators=20, random_state=0)
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)

The RandomForestRegressor class of the sklearn.ensemble library is used to solve regression problems via random forest. The most important parameter of the RandomForestRegressor class is the n_estimators parameter. This parameter defines the number of trees in the random forest. We will start with n_estimators=20 to see how our algorithm performs.

You can find details for all of the parameters of RandomForestRegressor here.

6. Evaluating the Algorithm

The last and final step of solving a machine learning problem is to evaluate the performance of the algorithm.
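The regression metrics used in this evaluation have one-line definitions, which can be written out in plain NumPy. These are illustrative re-implementations to make the formulas explicit; sklearn.metrics provides the production versions used below.

```python
import numpy as np

# Illustrative re-implementations of the three regression metrics.
def mean_absolute_error(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def mean_squared_error(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def root_mean_squared_error(y_true, y_pred):
    return np.sqrt(mean_squared_error(y_true, y_pred))

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
print(mean_absolute_error(y_true, y_pred))   # 0.5
print(mean_squared_error(y_true, y_pred))    # 0.375
print(root_mean_squared_error(y_true, y_pred))
```

Note that RMSE is just the square root of MSE, which is why it is reported in the same units as the target variable.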
For regression problems the metrics used to evaluate an algorithm are mean absolute error, mean squared error, and root mean squared error. Execute the following code to find these values:

from sklearn import metrics

print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred))
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))

The output will look something like this:

Mean Absolute Error: 51.765
Mean Squared Error: 4216.16675
Root Mean Squared Error: 64.932016371

With 20 trees, the root mean squared error is 64.93, which is greater than 10 percent of the average petrol consumption, i.e. 576.77. This may indicate, among other things, that we have not used enough estimators (trees). If the number of estimators is changed to 200, the results are as follows:

Mean Absolute Error: 47.9825
Mean Squared Error: 3469.7007375
Root Mean Squared Error: 58.9041657058

The following chart shows the decrease in the value of the root mean squared error (RMSE) with respect to the number of estimators. Here the X-axis contains the number of estimators while the Y-axis contains the value for root mean squared error.

You can see that the error value decreases as the number of estimators increases. After 200 the rate of decrease in error diminishes, so 200 is a good number for n_estimators. You can play around with the number of trees and other parameters to see if you can get better results on your own.

Part 2: Using Random Forest for Classification

Problem Definition

The task here is to predict whether a bank currency note is authentic or not based on four attributes, i.e. variance of the wavelet transformed image, skewness, curtosis, and entropy of the image.

Solution

This is a binary classification problem and we will use a random forest classifier to solve this problem.
Steps followed to solve this problem will be similar to the steps performed for regression.

1. Import Libraries

import pandas as pd
import numpy as np

2. Importing Dataset

The dataset can be downloaded from the following link:

The detailed information about the data is available at the following link:

The following code imports the dataset:

dataset = pd.read_csv("D:/Datasets/bill_authentication.csv")

To get a high level view of the dataset, execute the following command:

dataset.head()

As was the case with the regression dataset, the values in this dataset are not very well scaled. The dataset will be scaled before training the algorithm.

3. Preparing Data For Training

The following code divides data into attributes and labels:

X = dataset.iloc[:, 0:4].values
y = dataset.iloc[:, 4].values

The following code divides data into training and testing sets:

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

4. Feature Scaling

As before, feature scaling works the same way:

# Feature Scaling
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

5. Training the Algorithm

And again, now that we have scaled our dataset, we can train our random forest to solve this classification problem. To do so, execute the following code:

from sklearn.ensemble import RandomForestClassifier

classifier = RandomForestClassifier(n_estimators=20, random_state=0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

In case of regression we used the RandomForestRegressor class of the sklearn.ensemble library. For classification, we use the RandomForestClassifier class of the sklearn.ensemble library. The RandomForestClassifier class also takes n_estimators as a parameter. Like before, this parameter defines the number of trees in our random forest. We will start with 20 trees again.
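The classification metrics reported in the next step have compact definitions for the binary case. Here is a plain-NumPy sketch of what a 2x2 confusion matrix, precision, recall and F1 compute (illustrative helpers, not sklearn's implementation):

```python
import numpy as np

def confusion_matrix_2x2(y_true, y_pred):
    """Rows = actual class, columns = predicted class (0 then 1)."""
    m = np.zeros((2, 2), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

def precision_recall_f1(y_true, y_pred, positive=1):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp)           # of predicted positives, how many were right
    recall = tp / (tp + fn)              # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
print(confusion_matrix_2x2(y_true, y_pred))
print(precision_recall_f1(y_true, y_pred))   # precision = recall = f1 = 2/3 here
```

The diagonal of the confusion matrix holds correct predictions, which is why the matrix shown below for the banknote model is dominated by its diagonal.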
You can find details for all of the parameters of RandomForestClassifier here.

6. Evaluating the Algorithm

For classification problems the metrics used to evaluate an algorithm are accuracy, confusion matrix, precision, recall, and F1 values. Execute the following script to find these values:

from sklearn.metrics import classification_report, confusion_matrix, accuracy_score

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print(accuracy_score(y_test, y_pred))

The output will look something like this:

[[155   2]
 [  1 117]]

             precision    recall  f1-score   support

          0       0.99      0.99      0.99       157
          1       0.98      0.99      0.99       118

avg / total       0.99      0.99      0.99       275

0.989090909091

The accuracy achieved by our random forest classifier with 20 trees is 98.90%. Unlike before, changing the number of estimators for this problem didn't significantly improve the results, as shown in the following chart. Here the X-axis contains the number of estimators while the Y-axis shows the accuracy.

98.90% is a pretty good accuracy, so there isn't much point in increasing our number of estimators anyway. We can see that increasing the number of estimators did not further improve the accuracy. To improve the accuracy, I would suggest you play around with other parameters of the RandomForestClassifier class and see if you can improve on our results.

Resources

Want to learn more about Scikit-Learn and other useful machine learning algorithms like random forests? You can check out some more detailed resources, like an online course:

- Data Science in Python, Pandas, Scikit-learn, Numpy, Matplotlib
- Python for Data Science and Machine Learning Bootcamp
- Machine Learning A-Z: Hands-On Python & R In Data Science

Courses like these give you the resources and quality of instruction you'd get in a university setting, but at your own pace, which is great for difficult topics like machine learning.
https://stackabuse.com/random-forest-algorithm-with-python-and-scikit-learn/
TextInputFlag

Since: BlackBerry 10.0.0

#include <bb/cascades/TextInputFlag>

Flags for turning on and off different text features (for example, spell check).

Public Types

Text feature flags used to turn on and turn off text features. If both the on and off flags for a certain feature are set to 1 at the same time, the behavior is undefined. If both flags are set to 0, the default behavior for the given control and input mode is used. Since: BlackBerry 10.0.0

- Default (0x00): Default settings for all features will be used.
- SpellCheck (1<<0): Turns on spell checking. Since: BlackBerry 10.0.0
- SpellCheckOff (1<<1): Turns off spell checking. Since: BlackBerry 10.0.0
- Prediction (1<<2): Turns on word predictions. Since: BlackBerry 10.0.0
- PredictionOff (1<<3): Turns off word predictions. Since: BlackBerry 10.0.0
- AutoCorrection (1<<4): Turns on auto correction. Since: BlackBerry 10.0.0
- AutoCorrectionOff (1<<5): Turns off auto correction. Since: BlackBerry 10.0.0
- AutoCapitalization (1<<6): Turns on auto capitalization. Since: BlackBerry 10.0.0
- AutoCapitalizationOff (1<<7): Turns off auto capitalization. Since: BlackBerry 10.0.0
- AutoPeriod (1<<8): Turns on auto period. Since: BlackBerry 10.0.0
- AutoPeriodOff (1<<9): Turns off auto period. Since: BlackBerry 10.0.0
- WordSubstitution (1<<10): Turns on word substitution. Since: BlackBerry 10.0.0
- WordSubstitutionOff (1<<11): Turns off word substitution. Since: BlackBerry 10.0.0
- VirtualKeyboard (1<<12): Turns on the virtual keyboard. Since: BlackBerry 10.0.0
- VirtualKeyboardOff (1<<13): Turns off the virtual keyboard. Since: BlackBerry 10.0.0
- LatinOnly (1<<14): Forces the keyboard to Latin-1.
This affects the VKB layout and language predictions but does not filter non-keyboard input such as paste or setting the text programmatically. Since: BlackBerry 10.2.0
- LatinOnlyOff (1<<15): Turns off the forced Latin-1 keyboard. Since: BlackBerry 10.2.0
- KeyboardUsageHints (1<<16): Turns on keyboard usage hints to help users learn how to use the features of the keyboard. Since: BlackBerry 10.3.0
- KeyboardUsageHintsOff (1<<17): Suppresses keyboard usage hints. This suppresses instructions to the end user on how to use the keyboard. For example, instructions on how to accept word predictions won't show up when this flag is set. Note that these kinds of instructions are automatically suppressed as the user gains experience with the keyboard, so you rarely need to use this flag. Since: BlackBerry 10.3.0
- Learning (1<<18): Turns on learning. This turns on the learning ability of the keyboard. It is on by default and gives the keyboard the possibility to learn from the user's writing patterns and suggest the next word that might be written. In password fields this is disabled by default. Since: BlackBerry 10.3.2
- LearningOff (1<<19): Suppresses learning. This suppresses the learning ability of the keyboard. Just as in password fields, the user's input patterns won't be learned and hence won't be presented as word completions in the keyboard. Since: BlackBerry 10.3.2
http://developer.blackberry.com/native/reference/cascades/bb__cascades__textinputflag.html
Enum-based ComboBox?

Hi, I'm still very new to GXT so please forgive me if this is a stupid question. I have the following enum:

Code:
public enum UserType { Guest, User, Admin }

You have to convert the enum to use ModelData.

Could it work with the @BEAN annotation + BeanModelMarker interface?

No, as these are two different things.

My quick and dirty solution would be to use a simple ModelData as Sven suggested, then use a custom Converter class to transfer back and forth.

Try this as a model object. This assumes that the toString() method of the enum has been overridden to display something useful (you aren't just displaying the PascalCased words on your otherwise pretty UI, are you?). Alternatively, you could make the constructor take an additional String param if you like...

Code:
public class EnumWrapper<E extends Enum<E>> extends BaseModelData {
    public EnumWrapper(E enumValue) {
        set("enum", enumValue);
        set("value", enumValue.toString());
    }
}

Code:
public class EnumConverter<E extends Enum<E>> extends Converter {
    public Object convertFieldValue(Object value) {
        return new EnumWrapper<E>((E) value);
    }
    public Object convertModelValue(Object value) {
        return ((EnumWrapper) value).get("enum");
    }
}

FWIW, I am finding a lot of cases where a Converter class is essential to get many facets of GXT working well. They are ugly to have to patch in, as you only want a single instance of each one, and Java's syntax makes them rather verbose: at least 6 lines of code before you even start adding custom code to the thing...

Anyone have better ideas on how some of this could be implemented? I am aware that it is a big deal to get the ComboBox working well, especially if you want it to contain complex data, but this whole idea of using a Map<String, Object> (i.e. ModelData) feels like you are trying to write Javascript components in Java. And while yes, it will all be compiled to Javascript in the end, the entire point of using Java is to stay far away from Javascript ...
Converter

This is exactly what I have done in my application. It would be nice to perhaps have an EnumConverter OOTB with GXT. One exception: I would say there should be a public method "toDisplayString( Enum val )" which can convert the enum (or internal name) of the selection to a proper display string using the localized resources. By default this could simply do a val.toString() unless overridden.

I agree. I have been evaluating Gxt for the past month and while I feel that Gxt is a nice library, it suffers from little pieces of complexity like this that make it not only frustrating to use, but also seriously slow down development speed. I'm spending more time figuring out how to get Gxt to do what I want than actually developing my web app. Btw, this is the first time I've even heard about Converters. How would I plug this converter into the grid editor?

Converters are attached to FieldBinding objects, and are only used when binding occurs. While using the CellEditor class, it appears you could subclass it and override the preProcessValue(Object) and postProcessValue(Object) methods to act like a Converter object. In my opinion, the Converter object should be pluggable into the CellEditor using those methods, much like the FieldBinding object accepts an optional Converter. Not doing so is rather inconsistent...

But aside from these inconsistencies, I have to say that using GWT (and the GXT library) yields slightly more rapid initial development, but also far faster subsequent development. It isn't entirely due to _using_ the language, but more to the assurances that Java gives you in terms of types, and so the nice refactoring tricks you can do. Eclipse's 'find all references' and Javadoc have made my work with these libraries much simpler than any AJAX project that I have undertaken before. I agree that GXT is not fully mature yet, but then again, this is a milestone release.
The changes discussed in this thread are API breaking, so I don't know if we will get them with the 2.0 release (can a dev comment on this?), but at least the Converter+CellEditor issue seems to be a pretty obvious one, with very little additional code required. I'm not sure I agree that it actually slows down development time, but it does have a long way to go.

You can also use the BeanModelMarker method:

Code:
@BeanModelMarker.BEAN(MyEnum.class)
public class MyEnumModel implements BeanModelMarker {
}

Code:
enum UserType {
    User("A User"), Admin("An Admin");

    String description;

    UserType(String desc) {
        this.description = desc;
    }

    public String getDescription() {
        return description;
    }
}

comboBox.getValue().getBean() will get you back the Enum.

Ben

Clever - I haven't done any work with the bean code, so it's all sorta magic in my book still ;-). My next project is going to have to include some experimentation with these things. How about working with GWT's localizable constant code to make the interface change text based on language? By this pattern, the Enum is written in the server code, so it can't interact with the GWT.create command for injecting strings... Thanks for the cool suggestion - any thoughts?
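The enum-with-description pattern from the last reply can be exercised outside GXT. Here is a plain-Java sketch (class and method names illustrative, not GXT API): the getter is public so callers can reach it, and a reverse lookup is added for mapping a ComboBox display string back to its enum constant.

```java
public class EnumDisplayDemo {
    enum UserType {
        Guest("A Guest"), User("A User"), Admin("An Admin");

        private final String description;

        UserType(String description) { this.description = description; }

        public String getDescription() { return description; }

        // Reverse lookup: map a display string back to its enum constant,
        // which is what a ComboBox selection handler needs to do.
        static UserType fromDescription(String text) {
            for (UserType t : values()) {
                if (t.getDescription().equals(text)) return t;
            }
            throw new IllegalArgumentException("Unknown description: " + text);
        }
    }

    public static void main(String[] args) {
        System.out.println(UserType.Admin.getDescription());   // An Admin
        System.out.println(UserType.fromDescription("A User")); // User
    }
}
```

For localization, the description strings here would be replaced by lookups into a localized resource, which is exactly the gap the toDisplayString suggestion earlier in the thread is pointing at.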
https://www.sencha.com/forum/showthread.php?67317-Enum-based-ComboBox
Implementation status: to be implemented

Synopsis

#include <stdio.h>

FILE *fmemopen(void *restrict buf, size_t size, const char *restrict mode);

Description

The fmemopen() function associates the buffer given by the buf and size arguments with a stream. The buf argument is either a null pointer or points to a buffer that is at least size bytes long.

Arguments:

buf - the pointer to the buffer.
size - the buffer size.
mode - the mode of the file.

The mode argument points to a string. If the string is one of the following, the stream is opened in the indicated mode. Otherwise, the behavior is undefined. All mode strings allowed by fopen() are accepted.

If a null pointer is specified as the buf argument, fmemopen() allocates size bytes of memory as if by a call to malloc(). This buffer is automatically freed when the stream is closed. Because this feature is only useful when the stream is opened for updating (because there is no way to get a pointer to the buffer), the fmemopen() call may fail if the mode argument does not include a '+'.

The stream maintains a current position in the buffer. This position is initially set to either the beginning of the buffer (for r and w modes) or to the first null byte in the buffer (for a modes). If no null byte is found in append mode, the initial position is set to one byte after the end of the buffer. If buf is a null pointer, the initial position is always set to the beginning of the buffer.

The stream also maintains the size of the current buffer contents; use of fseek() or fseeko() on the stream with SEEK_END seeks relative to this size. For modes r and r+ the size is set to the value given by the size argument. For modes w and w+ the initial size is zero and for modes a and a+ the initial size is:

- Zero, if buf is a null pointer,
- The position of the first null byte in the buffer, if one is found,
- The value of the size argument, if buf is not a null pointer and no null byte is found.
A read operation on the stream does not advance the current buffer position beyond the current buffer size. Reaching the buffer size in a read operation counts as 'end-of-file'. Null bytes in the buffer have no special meaning for reads. The read operation starts at the current buffer position of the stream.

A write operation on the stream starts at the current buffer position. A write operation on the stream does not advance the current buffer size beyond the size given in the size argument. When a stream opened for writing is flushed or closed, a null byte is written at the current position or at the end of the buffer, depending on the size of the contents. If a stream opened for update is flushed or closed and the last write has advanced the current buffer size, a null byte is written at the end of the buffer if it fits.

An attempt to seek a memory buffer stream to a negative position or to a position larger than the buffer size given in the size argument fails.

Return value

Upon successful completion, fmemopen() returns a pointer to the object controlling the stream. Otherwise, a null pointer is returned, and errno is set to indicate the error.

Errors

[EMFILE] {STREAM_MAX} streams are currently opened in the calling process.
[EINVAL] The value of the mode argument is not valid, or the buf argument is a null pointer and the mode argument does not include a '+' character, or the size argument specifies a buffer size of zero and the implementation does not support this.
[ENOMEM] The buf argument is a null pointer and the allocation of a buffer of length size has failed.
[EMFILE] {FOPEN_MAX} streams are currently opened in the calling process.

Implementation tasks

- Implement the fmemopen() function.
http://phoenix-rtos.com/documentation/libphoenix/posix/fmemopen
Hi Kacper,

Just to be clear, is it tri.Triangulation(x, y) that hangs, or is it plt.tricontour(…)?

It's plt.tricontour that hangs; tri.Triangulation properly issues a warning about duplicates.

Cheers,
Kacper

Hi,

I haven't been able to pinpoint it exactly, but the following script:

import matplotlib.pyplot as plt
import matplotlib.tri as tri
import numpy as np
from numpy.random import uniform, seed

seed(0)
npts = 200
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x*np.exp(-x**2 - y**2)
y[1:3] = x[0]  # 4 or more duplicate points make tricontour hang!!!
x[1:3] = y[0]

You should call z = x*np.exp(-x**2 - y**2) before changing the points you're triangulating. Having said that, I see the same behaviour even if I change the vertices before I compute z.

triang = tri.Triangulation(x, y)
plt.tricontour(x, y, z, 15, linewidths=0.5, colors='k')
plt.show()

causes an infinite loop in _tri.so. It happens in matplotlib-1.1.0 as well as git HEAD. I understand that my input is not exactly valid, but I'd rather see MPL die than occupy my box for eternity.

Best regards,
Kacper

I think the reason it's hanging is because you're trying to plot the contours of a function that is defined on an invalid triangulation (edges cross at points that are not in the vertex set). I think the best way to deal with this is to write a helper function to check the triangulation is valid. If it isn't, either tri.Triangulation(x, y) should fail, or the plotter should fail. Anybody else have any suggestions?

···

On Monday, 16 April 2012 at 16:34, Kacper Kowalik wrote:
On 16 Apr 2012 22:31, "Damon McDougall" <D.McDougall@…230…> wrote:
On Monday, 16 April 2012 at 14:28, Kacper Kowalik wrote:

–
Damon McDougall
d.mcdougall@…230…
B2.39 Mathematics Institute
University of Warwick
Coventry
West Midlands
CV4 7AL
United Kingdom
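A helper along the lines suggested in the last reply can flag duplicate points before triangulating. A minimal NumPy-only sketch (the function name is illustrative, not a matplotlib API):

```python
import numpy as np

def find_duplicate_points(x, y):
    """Return indices of points that duplicate an earlier (x, y) point."""
    pts = np.column_stack([np.asarray(x), np.asarray(y)])
    # np.unique with axis=0 returns the index of the first occurrence
    # of each distinct row; everything else is a duplicate.
    _, first_idx = np.unique(pts, axis=0, return_index=True)
    keep = np.zeros(len(pts), dtype=bool)
    keep[first_idx] = True
    return np.flatnonzero(~keep)

x = np.array([0.0, 1.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 1.0, 2.0])
print(find_duplicate_points(x, y))  # [2]
```

Points flagged this way could be dropped (and the corresponding z values dropped with them) before calling tricontour, sidestepping the invalid triangulation entirely.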
https://discourse.matplotlib.org/t/matplotlib-users-bug-in-triangulation-causes-infinite-loop-if-4-or-more-duplicate-points-are-used-in-tricontour/16869
Continuing in our discussion of Silverlight 3 and the update to .NET RIA Services, I have been updating the example from my Mix09 talk "Building Business Applications with Silverlight 3". A customer recently asked about using ASP.NET MVC and Silverlight with RIA Services. Their specific scenario was an application with the complex admin interface in Silverlight, but using ASP.NET MVC for the consumer-facing part of the web to get the maximum reach. The customer wanted to share as much of their application logic as possible. So, to address this, I thought I'd update my Mix 09 demo to have an ASP.NET MVC view as well as a Silverlight view.

You can watch the original video of the full session. You can see the full series here.

The demo requires (all 100% free and always free):
- VS2008 SP1 (which includes Sql Express 2008)
- Silverlight 3 RTM
- .NET RIA Services July '09 Preview
- ASP.NET MVC 1.0

Also, download the full demo files and check out the running application.

The architecture we are going to look at today is focused on building the ASP.NET MVC head on the RIA Services based app logic. As you will see, this is easy to do and shares all the great UI validation support.

To start with, I took the original application, deleted the MyApp.Web project and added a new project that is ASP.NET MVC based. You could have just as easily added another web project to the same solution. Then I associated the Silverlight application with this web project. Be sure to turn on the RIA Services link as well. This is what controls the codegen to the Silverlight client.

Then I copied over the Authentication related DomainServices from the web project. Then I added my northwind.mdf file to App_Data, built my Entity Framework model and finally added my domain service and updated it exactly the way we did in Part 2. The one tweak we need to do is make each of the methods in the DomainService virtual..
This has no real effect on the Silverlight client, but it allows us to build a proxy for the MVC client.

public override void Submit(ChangeSet changeSet)
{
    base.Submit(changeSet);
}

public virtual void UpdateSuperEmployee(SuperEmployee currentSuperEmployee)
{
    this.Context.AttachAsModified(currentSuperEmployee, ChangeSet.GetOriginal(currentSuperEmployee));
}

Running the MyAppTestPage.aspx should show that we have the same Silverlight app up and running..

We can now focus on the ASP.NET MVC part of this. In the Controllers directory, open up HomeController.cs.

1: [HandleError]
2: public class HomeController : Controller
3: {
4:     SuperEmployeeDomainService domainService =
5:         DomainServiceProxy.Create<SuperEmployeeDomainService>();

Line 5 calls a factory method that creates a DomainService wrapper.. this allows you to have a clean, direct calling syntax for the DomainService, but allows the system to run through its standard pipeline of validation, authorization, etc. Note, with the current CTP, you will need to reference DomainServiceExtensions.dll from this sample to get this functionality.

1: public ActionResult Index()
2: {
3:     return View("Index", domainService.GetSuperEmployees());
4: }

The Index action is very easy.. we simply pass the results of calling GetSuperEmployees() as our model. This allows us to share any business logic that filters the results… I can write it once and share it between my Silverlight and ASP.NET MVC app.

Nothing at all remarkable about the view.. Index.aspx in the \Views directory.
1: <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
2:     Inherits="System.Web.Mvc.ViewPage<IQueryable<MyApp.Web.Models.SuperEmployee>>" %>
3:
4: <asp:Content ID="indexTitle" ContentPlaceHolderID="TitleContent" runat="server">
5:     Home Page
6: </asp:Content>
7:
8: <asp:Content ID="indexContent" ContentPlaceHolderID="MainContent" runat="server">
9:
10:     <h2>List of Employees</h2>
11:     <ul>
12:     <% foreach (var emp in Model) { %>
13:
14:         <li>
15:             <%=Html.ActionLink(emp.Name, "Details", new { id = emp.EmployeeID })%>
16:             Origin:
17:             <%=Html.Encode(emp.Origin)%>
18:         </li>
19:     <% } %>
20:
21:     </ul>
22:
23: </asp:Content>

In line 2, we set up the Model type to be IQueryable<SuperEmployee>… notice this is exactly what my DomainService returned. Then in lines 15-18 I get strongly typed access to each SuperEmployee. Here is how it looks:

Next, let's look at the controller action for the Details view..

1: public ActionResult Details(int id)
2: {
3:     var q = domainService.GetSuperEmployees();
4:     var emp = q.Where(e=>e.EmployeeID==id).FirstOrDefault();
5:
6:     return View(emp);
7: }

Again, very simple. Here we do a Linq query over the results of calling our business logic. The cool thing about the composition of Linq is that this where clause passes from here, to the DomainService, to the Entity Framework model and all the way to the database, where it is executed as efficient TSQL.

The Details.aspx view is just as simple as the view above, basically just accessing the model.

Ok – so the read case is easy, what about update? Let's look at the Edit action in the controller..

1: public ActionResult Edit(int id)
2: {
3:     var q = domainService.GetSuperEmployees();
4:     var emp = q.Where(e => e.EmployeeID == id).FirstOrDefault();
5:
6:     return View(emp);
7: }

Again, very similar to the Details action we saw.. this simply populates the fields. Now let's take a look at the Edit.aspx view for this action. We need a simple data entry form..
First, we add a validation summary at the top of the form. This is where we will display all the validation issues for the page.

1: <%= Html.ValidationSummary("Edit was unsuccessful. Please correct the errors and try again.") %>

Next, we include some standard HTML for each field we need filled out:

1: <p>
2:     <label for="Name">Name:</label>
3:     <%= Html.TextBox("Name", Model.Name) %>
4:     <%= Html.ValidationMessage("Name", "*") %>
5: </p>
6: <p>
7:     <label for="Gender">Gender:</label>
8:     <%= Html.TextBox("Gender", Model.Gender) %>
9:     <%= Html.ValidationMessage("Gender", "*")%>
10: </p>

Notice in line 4 and line 9 we are adding validation hooks… If there are any fields in the type that we don't want to display, we still need to include them here so they will be in the postback data…

<p>
    <%= Html.Hidden("EmployeeID", Model.EmployeeID)%>
</p>

The other thing we want in the postback data is the set of unedited "original" data.. this is so that we can do concurrency checks with the database.

<%= Html.Hidden("orginal.EmployeeID", Model.EmployeeID)%>
<%= Html.Hidden("orginal.Gender", Model.Gender)%>
<%= Html.Hidden("orginal.Issues", Model.Issues)%>
<%= Html.Hidden("orginal.LastEdit", Model.LastEdit)%>
<%= Html.Hidden("orginal.Name", Model.Name)%>
<%= Html.Hidden("orginal.Origin", Model.Origin)%>
<%= Html.Hidden("orginal.Publishers", Model.Publishers)%>
<%= Html.Hidden("orginal.Sites", Model.Sites)%>

Notice here the naming convention of "orginal.*".. as you will see later, this matches the name of the argument on the controller action. Let's flip back to the controller action and take a look..
1: [AcceptVerbs(HttpVerbs.Post)]
2: public ActionResult Edit(SuperEmployee emp, SuperEmployee orginal)
3: {
4:
5:     if (ModelState.IsValid)
6:     {
7:         domainService.AssociateOriginal(emp, orginal);
8:         domainService.UpdateSuperEmployee(emp);
9:         return RedirectToAction("Details", new { id = emp.EmployeeID });
10:     }
11:
12:     return View(emp);
13:
14: }

Notice my method takes two arguments: emp, which is the current employee as it has been updated from the form, and orginal, the "original" employee that is mapped back from the hidden HTML fields. Then in line 7, we associate the original value with the employee… this effectively makes it so that calls in the DomainService class to ChangeSet.GetOriginal() will return the right value. Next we call UpdateSuperEmployee() on the business logic. Here we do all the validation defined for the model, including running custom validation code. If validation fails, the error information will be shown on the form. Otherwise the data is eventually saved to the database.

What I have shown is building an ASP.NET MVC application that shares the exact same application logic as my Silverlight client. This includes things such as data validation that shows right through to the UI. To bring it home, here is an example of the exact same validation error, first in the Silverlight client, then in the ASP.NET MVC client…

Looking forward to tonight's LinkedIn talk. You might ask how many participants are experienced/knowledgeable with RIA services already. Many, I assume, will be just now getting into the details and more interested in the way-it-is-now, rather than specific deltas from MIX09+. Thanks!

Yea.. I will focus the talk tonight on just the basics….

Nice – thanks Brad. I didn't realize this option existed. Perhaps this should be called just 'Domain Services' and drop the 'RIA' 🙂 I'm impressed, would like to see more – it's quite powerful. I especially like the validation aspects. Are you going to do the 'DTO' option with Domain Services and MVC next?
🙂 I should ask one question in terms of architecture. Can I have the following: WebServer running MVC, AppServer running domain services?

Putting the original data on the page and relying on it is probably not a very good idea, as it can be easily manipulated.

This is basically how ViewState in WebForms works; you could obfuscate it pretty easily if that helps. I would not depend on the original data for any security-type decisions. This is just about optimistic concurrency.

>I should ask one question in terms of architecture.
>Can I have the following:
>WebServer – mvc
>AppServer – domain services
>?

Steve: So the WebServer and the AppServer are different machines? How do you want to talk between them? A WCF service? That should be workable, but we'd need a way to flow the validation information.. we are still working on that. Did I understand you correctly?

A very nice article. I will try to use them in my site.

Hi Brad, I was wondering if you could explain how the DomainServiceProxy works. I would have thought that you would have to call Submit() after you called UpdateSuperEmployee() to commit the change. Or is this done automatically by the proxy? Typically, the workflow for using a service would be to do some sort of Find or Get, then run some inserts, updates, deletes and other custom operations as required, then call Submit() to commit all the changes and go through the pipeline (Authorize, Validate, Persist, Resolve).

In regards to the question about having the domain services sit on a different server than the MVC server: could you not do something like generate a ServiceContext (similar or exactly the same as the one generated for Silverlight), then just use this context to make calls from a controller exactly like you would from Silverlight? It would then use HTTP to communicate with the application server. Or am I way off base here? Great series Brad… Dave

Great tutorial Brad! Thanks a lot. I have been trying to do exactly this for some time now.

Hi. I have installed VS2008 SP1, RIA 3 July09, WPF, etc., however even when I try to run the default Silverlight application, I get error 2401. Not only that, I am getting the following error message in VS: "Name 'InitializeComponent' is not declared". Please help.

+1 for getting the presentation tier and application tier on separate domains. Many of our customers require us to run in this configuration (with an additional firewall between the web tier and application tier). On the validation front I would like to see validation carried out at [at least] three levels: the client (browser) (a la xVal), the web tier and the application tier. Minimally, on the web tier and the application tier.

Hi. In regards to Steve's post: in our company we have a physical 3-tier application, an MVC front-end web server, a mid-tier web server (being the application server) and the database server. At the moment we use WCF to talk from the front-end to the application server. How can we use .NET RIA Services to map to this physical model???
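The hidden "original" fields exist purely to support an optimistic concurrency check, and that idea is independent of RIA Services. The following is an illustrative sketch in plain Python, not the RIA Services implementation: before applying an update, the values the client originally read are compared against what is currently stored; if they differ, another edit happened in the meantime and the write is rejected.

```python
def apply_update(store, key, original, updated):
    """Optimistic concurrency: reject the write if the stored row no
    longer matches the 'original' snapshot the client posted back."""
    current = store[key]
    if current != original:
        raise RuntimeError("concurrency conflict: row changed since it was read")
    store[key] = updated

store = {1: {"Name": "Ted", "Gender": "M"}}

# Client A read the row, edited it, and posts back original + updated values.
apply_update(store, 1,
             {"Name": "Ted", "Gender": "M"},
             {"Name": "Teddy", "Gender": "M"})

# Client B posts a now-stale 'original' snapshot: the update is rejected.
try:
    apply_update(store, 1,
                 {"Name": "Ted", "Gender": "M"},
                 {"Name": "Theo", "Gender": "M"})
except RuntimeError as e:
    print(e)  # concurrency conflict: row changed since it was read
```

As the comments above note, this is also why the original values must not be trusted for security decisions: the client controls them, and they only serve to detect lost updates.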
https://blogs.msdn.microsoft.com/brada/2009/07/30/business-apps-example-for-silverlight-3-rtm-and-net-ria-services-july-update-part-15-asp-net-mvc/
The QNetworkReply class contains the data and headers for a request sent with QNetworkAccessManager. More...

#include <QNetworkReply>

Note: All functions in this class are reentrant. This class was introduced in Qt 4.4.

NetworkError indicates all possible error conditions found during the processing of the request.

RawHeaderPair is a QPair<QByteArray, QByteArray> where the first QByteArray is the header name and the second is the header value.

Creates a QNetworkReply object with parent parent. You cannot directly instantiate QNetworkReply objects; use QNetworkAccessManager functions to do that.

Disposes of this reply and frees any resources associated with it. If any network connections are still open, they will be closed. See also abort() and close().

Aborts the operation immediately and closes down any network connections still open. Uploads still in progress are also aborted.

This signal is suitable for connecting to QProgressBar::setValue() to update a QProgressBar that provides user feedback.

Returns the error that was found during the processing of this request. If no error was found, returns NoError. See also QNetworkAccessManager::finished() and isFinished().

Returns true if the raw header of name headerName was sent by the remote server.

Returns the value of the known header header, if that header was sent by the remote server. If the header was not sent, returns an invalid QVariant. See also rawHeader(), setHeader(), and QNetworkRequest::header().

See also sslConfiguration(), sslErrors(), and QSslSocket::ignoreSslErrors().

This is an overloaded function. If this function is called, the SSL errors given in errors will be ignored. Note that you can set the expected certificate in the SSL error.

Returns true when the reply has finished or was aborted. This function was introduced in Qt 4.6.

Returns true when the request is still processing and the reply has not finished or was aborted yet. This function was introduced in Qt 4.6. See also isFinished().

Returns the QNetworkAccessManager that was used to create this QNetworkReply object. Initially, it is also the parent object.

Returns the operation that was posted for this reply. See also setOperation().

Returns a list of header fields that were sent by the remote server, in the order that they were sent. Duplicate headers are merged together and take the place of the latter duplicate.

Returns a list of raw header pairs.

Returns the size of the read buffer, in bytes. See also setReadBufferSize().

Returns the request that was posted for this reply. In particular, note that the URL for the request may be different from that of the reply. See also QNetworkRequest::url(), url(), and setRequest().

Sets the attribute code to have value value. If code was previously set, it will be overridden. If value is an invalid QVariant, the attribute will be unset. See also attribute() and QNetworkRequest::setAttribute().

Sets the error condition to be errorCode. The human-readable message is set with errorString. Calling setError() does not emit the error(QNetworkReply::NetworkError) signal. See also error() and errorString().

Sets the known header header to be of value value. The corresponding raw form of the header will be set as well. See also header(), setRawHeader(), and QNetworkRequest::setHeader().

Sets the associated operation for this object to be operation. This value will be returned by operation(). Note: the operation should be set when this object is created and not changed again. See also operation() and setRequest().

Sets the associated request for this object to be request. This value will be returned by request(). Note: the request should be set when this object is created and not changed again. See also request() and setOperation().

Sets the SSL configuration for the network connection associated with this request, if possible, to be that of config. See also sslConfiguration().

Sets the URL being processed to be url. Normally, the URL matches that of the request that was posted, but for a variety of reasons it can be different (for example, a file path being made absolute or canonical). See also url(), request(), and QNetworkRequest::url().

This signal is suitable for connecting to QProgressBar::setValue() to update a QProgressBar that provides user feedback. See also downloadProgress().

Returns the URL of the content downloaded or uploaded. Note that the URL may be different from that of the original request. See also request(), setUrl(), and QNetworkRequest::url().
http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/qnetworkreply.html
[Solved] Sliding Bar Graph

Hello All, I am in the process of developing a user interface, using Qt, for an embedded system. I have to admit I am very new to Qt and C++ but have been reading through some books. I would really appreciate it if you can guide me through the following.

- I need to develop a sliding bar graph that changes (fill colour length) dynamically according to the pressure (of fluid in tubings) at that given time.
- Our core application is written in C. We are planning to read various parameters in the C application. Is it possible to get the data from the C application? I went through a site (can't remember which) saying that we can use QProcess to start a C program that can read data (I might be very wrong here).

I tried searching for the above questions on the Internet, but I could not find anything. Thanks for your help. Bsbala

Hi and welcome! If you want to keep your core application as a stand-alone process then you need some kind of inter-process communication. Some options are:

- QProcess and use stdin/stdout/stderr/intermediate files etc.
- QLocalSocket
- QTcpSocket/QUdpSocket
- dbus
- Shared memory

Or if you would like the GUI to do the actual data acquisition too, then you could refactor your existing application into a shared library that your GUI can link against. It is not difficult to provide GUI and non-GUI versions of an application using Qt in case you still need a headless version.

As for how to draw such an indicator you again have a few options:

- Use an existing widget such as QProgressBar or QSlider
- Use a 3rd party widget such as those from the Qwt project
- Write your own custom widget using QWidget as a base class (using QPainter)
- Write your own custom widget using the QGraphicsView framework (canvas-based API)
- Write your own custom widget using QML (declarative framework)

How many such indicators do you need to display at once?
If you need fluid animations and a custom appearance (maybe with artistic assets from some graphics package) then QML (or QGraphicsView) are probably the way to go. Hope this helps and good luck with your project!

ZapB, Thank you so much for your reply. I will try out your suggestions. With respect to the indicators, we need around 6 of them in the form of sliding graphs, and three of them in the form of normal number displays. I really appreciate your guidance. It would be very helpful in completing this project. I will post again when I am done trying your suggestions. Best regards, Bsbala

Hi, For that sort of thing I would be tempted to use QML. It's quite simple to get started with. Just look at some of the QML examples for inspiration. Good luck!

Thanks a lot ZapB. I will start trying it out in QML. Best regards, Balaji

Take a look at this "simple progress bar" QML item that I knocked together. It should get you going in the right direction. Of course you may well want to improve the visual appearance using some custom images for the border and the fill. You could also apply a ColorAnimation to vary the colour of the filled area according to value. Give us a shout if you want any help on hooking the QML up to the C++ backend.

Thanks ZapB. I had a look at the simple progress bar QML. I will go ahead and see if I can add a different property to it. I have another question: I need to interface Qt with the back-end C application to obtain real-time data using any of the approaches that you suggested (QProcess etc.) above. If I do that, is it possible to integrate Qt with the QML user interface and manipulate the properties of the objects (the sliding bar graph) on the QML user interface? Thanks for your assistance. Apologies if I sound very naive. Balaji

Yes this is possible. Some tags to search for on QtDevNet are things like "qml c++ integration".
The basic principle is that you create a C++ class that inherits QObject, and in this class you declare some properties using the Q_PROPERTY macro. These properties represent the quantities that you wish to display in your QML GUI. You then expose your C++ object(s) to the QDeclarativeContext object using the setContextProperty() function. In your QML GUI you can then bind properties of your QML items to the properties of your C++ object(s). It is up to you how to factor your C++ object(s) and their properties to map onto your data.

This kind of design leads to a nice separation of concerns. You will have a component that retrieves the data from your existing C application somehow and populates the properties of your C++ QObject subclasses. Then you have your GUI in QML, which is only dependent upon the names and types of the objects and properties exposed to it from the C++ side. So you can very easily swap out the GUI for a new one without touching the backend code if needed.

ZapB, I will try it out and let you know. All of your suggestions were very useful. Thanks for your valuable guidance. Bsbala

Hi, I am still trying and I will let you know once I am done. It's moving at a pretty slow rate :-) Balaji

Hi, I tried your progress bar graph illustration and it was working fine when I used it with a single value change from the C++ program. I am now trying to change the graph value with the values from the external C program using QProcess. I am not sure how to proceed with the program structure. Can you please advise? Should the QProcess be part of some class along with the QML? Thanks for your assistance. Balaji

ZapB, I was able to connect the QML and was able to connect to the output of the C program (the program just outputs from 1 to 100) using QProcess. However, the value from the standard output seems to be a stream and the toInt function does not recognize it and returns a value of '0', so the graph displays with no fill.
Thanks a lot for providing your valuable assistance. BsBala

This is the code for the QProcess part.

@
#include <QProcess>
#include <QDebug>
#include "processconnector.h"

ProcessConnector::ProcessConnector(QObject *parent) {
    process = new QProcess(this);
    process->setReadChannel(QProcess::StandardOutput);
    connect(process, SIGNAL(readyReadStandardOutput()), this, SLOT(disp()));
    process->start("./op");
}

void ProcessConnector::disp() {
    while (1) {
        QString t = process->readLine();
        if (t.isEmpty())
            break;
        qDebug() << "got output" << t.toInt();
        emit valueChanged(t.toInt());
    }
}
@

This is the code for the QML object.

@
#include <QDeclarativeView>
#include <QGraphicsObject>
#include "slidinggraph.h"

SlidingGraph::SlidingGraph(QObject *parent) {
    view.setSource(QUrl::fromLocalFile("qml/QMLconnector/main.qml"));
    QObject *object = view.rootObject();
    progressbar = object->findChild<QObject *>("bar");
    view.show();
}

void SlidingGraph::setvalue(int value) {
    if (progressbar)
        progressbar->setProperty("value", value);
}
@

This is main.cpp:

@
#include <QtGui/QApplication>
#include "processconnector.h"
#include "slidinggraph.h"

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    ProcessConnector *Pc = new ProcessConnector();
    SlidingGraph *sl = new SlidingGraph();
    QObject::connect(Pc, SIGNAL(valueChanged(int)), sl, SLOT(setvalue(int)));
    sl->view.show();
    return app.exec();
}
@

I was able to solve the problem. ZapB, thanks for helping me out on this. BSBALA

Ah glad you got this sorted. Sorry I've not had time to look at it. Things have been a bit hectic here the last couple of weeks.
What was the problem and solution in the end?

Hi ZapB, Nice to hear from you. Actually I had my friend code the C part, which had some mistakes. He corrected them, and I got a good sliding graph. Thanks for your help; I got a good idea of how to work with QML. I have some other questions about QML multiple screens and how to make the underlying Qt C++ layer handle them. I will post them in a new thread. Please help me out if you find some time. I appreciate all the help, guidance and time you have taken to reply to my questions. Balaji

No worries. I'll keep an eye out for it but I'll be away on holidays next week. I'm sure somebody else will jump in to help you though.

Thanks a lot ZapB.
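The line-parsing pitfall BsBala ran into (readLine() handing back text that toInt() turns into 0) is language-agnostic: the producer writes one value per line, and the consumer must strip the trailing newline and reject blank lines before converting. A minimal sketch of the same pattern in Python, purely illustrative and not the Qt code above; the spawned child is a stand-in for the `./op` C program:

```python
import subprocess
import sys

# Spawn a stand-in for the C program: it prints 1..5, one value per line.
proc = subprocess.Popen(
    [sys.executable, "-c", "for i in range(1, 6): print(i)"],
    stdout=subprocess.PIPE,
    text=True,
)

values = []
for line in proc.stdout:          # read line-buffered output as it arrives
    line = line.strip()           # drop the trailing newline before parsing
    if line:                      # skip blank lines instead of parsing them
        values.append(int(line))  # now int() sees a clean numeric token
proc.wait()

print(values)  # [1, 2, 3, 4, 5]
```

The same discipline applies on the Qt side: `QString::trimmed()` before `toInt()`, and a check that the line is non-empty, saves exactly this kind of silent-zero bug.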
https://forum.qt.io/topic/8382/solved-sliding-bar-graph
Hello, I've been struggling for a few days now with some crazy issues connecting to my 2012 R2 server via Hyper-V Manager from Windows 10.

Setup:
- Client is a Windows 10 Pro machine running v1903
- Host is a Hyper-V 2012 R2 machine running v9600

Symptoms:
- Hyper-V Manager is able to connect to the remote server successfully, however an "RPC" error is displayed when attempting to load the virtual machine list; all other functionality works, including creating a new virtual machine
- Additionally, all functionality of Server Manager works, including Windows PowerShell for the remote host

Here's what I've done so far:
- Installed Hyper-V management tools via Windows Features on the Win 10 client machine
- Installed RSAT for Win 10 from (tried all three versions)
- Followed the steps outlined in this article:
- Attempted firewall update to allow WMI located here:-...
- As the previous fix did not work, we disabled the Windows firewall on both the client and host; we found there was a strange issue where our router was blocking WmiObject/RPC calls, so I resolved this by connecting to my cell phone hot spot; now Get-WmiObject calls are successful
- Ran WireShark to validate traffic is flowing to and from the host; it does not appear that traffic is being blocked, as we are seeing DCERPC and TCP packets flowing over 49152+ and 135 respectively
- Enabled "Remote Access" per this article:...
- Ran "hvremote.wsf /override /show /target:[FQDN OF HOST]"; all tests pass except the final test: Async notification query to root\virtualization\v2 WMI namespace; I have been unable to resolve this issue
- I've added the user that I'm connecting with to the "Hyper-V Administrators" group on the host and added the login to the credential manager on the Win 10 client

Something important to note: I currently have another server running Windows 2016, v1607 with Hyper-V Manager installed.
Everything works completely fine from this server, so I've also tried to mirror the settings of this machine on my Windows 10 client. This leads me to believe that everything is configured properly on the host, as my Win 2016 client is able to connect and manage the host without any issues. I feel like there is something I am missing in my Windows 10 config, but at the same time, I feel like I've tried almost everything. I'm hoping there is an expert here that can shed some light. Please let me know if there is any additional information that I can provide to help troubleshoot this problem. Thanks in advance, Malik

Problem resolved! After pulling hair and teeth and combing through logs, running dcdiag and all sorts of stuff, I found that the NETLOGON service was paused on the PDC!!! Restarted and now we're back in business. I really inherited a mess of a network. They're using 192.200.x.x as their IP scheme, their AD domain name is a public "real" domain name, but it doesn't belong to us! DHCP/DNS was full of old computers, no scavenging etc. Think it might soon be time to stand up a couple of Server 2012 R2's and make them the DCs, possibly in a new domain, and then migrate everything over. Also need to fix that IP scheme!

DCDIAG is still showing issues. DC1 "is not advertising as a time server" failed advertising; KccEvent failed - ADDS unable to establish a connection with the Global Catalog; Replications failed - destination server is currently rejecting replication requests ... Funday continues :)

[ ... ]

Host IS part of the domain. Goal was to get all VMs on host #1, then do the same upgrade to host #2 and then balance out the VMs and set up some kind of failover/replication/ability to move VMs (possibly using StarWind) between hosts for scheduled downtime or host problems.

You don't need Virtual SAN for planned downtime. Windows' built-in Shared Nothing Live Migration does it (unless you have a hell of a load of VMs, of course).
Virtual SAN does provide virtual shared storage to configure VM HA and guest VM clusters (Windows cannot do it out of the box without physical or virtual shared storage) and also does complete volume replication (built-in Hyper-V Replica does that for VMs only) in VMware SRM style.

Could it be WinRM configuration and trusted host settings? More information along with some other tips here.

The solution has been found! I had to look deeper into RPC connectivity. The issue was quite simple and somewhat idiotic once I was able to identify the problem. I feel quite silly having spent over 20 hours on this issue, but you live and you learn. :-) One word (or acronym): NAT. My wireless router has a feature, "Port Scan and DoS Protection", which apparently inadvertently blocks valid RPC calls over port 135 despite Windows Firewall being disabled on both ends. After disabling this feature, all tests passed using HVRemote, including the one that failed earlier: Async notification query to root\virtualization\v2 WMI namespace. This explains why my off-site Win Server 2016 could connect without issues but my local Win 8.1 and Win 10 clients could not. I hope my findings can help others who have struggled with this issue and save them hours (or days) of frustration.

A quick update: there are some key items that need to be enabled for this to work.
- You must enable "Remote Access" per this article:...
- The host must be able to reach the client; if you are not running your own DNS, this can be achieved by creating an entry in the HOSTS file on your Hyper-V server

If you are still having trouble, it may help to assign your client a static IP on your local network and enable DMZ for that IP address.
https://community.spiceworks.com/topic/2283316-hyper-v-remote-management-windows-10-client-hyper-v-server-2012-r2?source=recommended
Psutil is a Python-based library; it provides an interface to monitor your computer system's resources. You can use this utility and its available APIs to find out all details about currently running processes and their resource consumption, such as memory, network and disk usage. It is a cross-platform library which works on all popular operating systems, including Linux, Microsoft Windows, macOS, and FreeBSD. Currently, you might be using many different utilities/commands to monitor your system processes, but Psutil combines the features of commands like top, ps, netstat, lsof, df etc. into a single place. It runs on both 32- and 64-bit systems and is an extremely optimized library. It uses the most efficient way possible to collect your desired system information. In this tutorial, we will be learning the installation and usage of this utility. If you have a programming or system scripting background, you should be able to learn the workings of this utility with great ease. Grab a cup of coffee and let's get started :-)

How to Install Psutil on Ubuntu 16.10 / 16.04

For the sake of demonstration, we will be using the latest Ubuntu release, 16.10, to install and show the usage of this library. The same set of instructions should work for any older version of Ubuntu and Debian-based systems. The easiest way to get this library installed is using pip. Run the following command on your system terminal to install the pip utility.

sudo apt install python-pip

Once pip has been installed, run the following command to install psutil.

sudo pip install psutil

Congratulations! Psutil has been successfully installed. We will now go ahead and see some example usages.

How to Use Psutil

First of all, let's understand how we can run Python commands on our system terminal. Python offers a native shell: simply run the "python" command and it should take you to the shell where you can execute Python-related commands. The following screenshot will further demonstrate this point.
Now we will be running all commands related to the Psutil library in this shell. In order to find the CPU usage as a percentage, we need to run the following two commands in the Python console. The first command imports the psutil library and the next one returns the current CPU consumption as a percentage.

import psutil
psutil.cpu_percent(interval=1)

The following screenshot should further clarify this concept. The following command will return the total number of CPUs in our Linux system. We have included the example output below as well.

psutil.cpu_count()

>>> psutil.cpu_count()
1

If you want to see the CPU frequency, use the following command in the Python shell.

psutil.cpu_freq()

In order to monitor your system's virtual memory consumption, use the following commands in the console.

import psutil
mem = psutil.virtual_memory()
mem

Here is the sample output for the above snippet.

aun@ubuntu:~$ python
Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import psutil
>>> mem = psutil.virtual_memory()
>>> mem
svmem(total=1022431232, available=315588608, percent=69.1, used=689242112, free=26132480, active=370671616, inactive=366432256, buffers=17944576, cached=289112064, shared=2854912)
>>>

In order to view the swap memory consumption, use the psutil.swap_memory() function in the console.

>>> psutil.swap_memory()
sswap(total=1071640576, used=137801728, free=933838848, percent=12.9, sin=14856192, sout=146563072)

Let's perform some disk-related operations using Psutil. Run the following code snippet to find out which partitions your system's hard disk has.
import psutil
psutil.disk_partitions()

Example output:

>>> import psutil
>>> psutil.disk_partitions()
[sdiskpart(device='/dev/sda1', mountpoint='/', fstype='ext4', opts='rw,relatime,errors=remount-ro,data=ordered')]
>>>

The following snippet will give you the current disk consumption on your system's root partition.

import psutil
psutil.disk_usage('/')

Example output:

>>> import psutil
>>> psutil.disk_usage('/')
sdiskusage(total=19945680896, used=4598263808, free=14310637568, percent=24.3)
>>>

Psutil is also good for monitoring your system's hardware components. For example, you can find details about your system's hardware temperature sensors by using the following function.

psutil.sensors_temperatures()

It will display output as follows; you can further use this output in your bash or programming scripts to generate any kind of trigger.

>>> psutil.sensors_temperatures()
{'coretemp': [shwtemp(label='Physical id 0', current=100.0, high=100.0, critical=100.0), shwtemp(label='Core 0', current=100.0, high=100.0, critical=100.0)]}

Let's now learn a bit more about how to find out details about running processes. The following command will display the process IDs (PIDs) of the processes currently running on our Linux system.

import psutil
psutil.pids()

Here is example output of this command:

>>> import psutil
>>> psutil.pids()
[1, 2, 3, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 28, 29, 30, 31, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 80, 95, 96, 144, 145, 146, 147, 148, 149, 150, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, ...]

psutil.pid_exists(pid) is yet another function, used to check whether a process with the specified ID exists. Similarly, there are many other functions which are extremely helpful in deriving the desired information about system processes.
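All the memory and disk figures shown above are raw byte counts. When printing them, a small formatting helper is handy; the one below is stdlib-only and is not part of psutil's API, just an illustrative companion you can paste next to the calls above.

```python
def bytes2human(n):
    """Render a byte count like 1022431232 as a short human-readable string."""
    symbols = ("K", "M", "G", "T", "P")
    # prefix["K"] = 2**10, prefix["M"] = 2**20, and so on
    prefix = {s: 1 << (i + 1) * 10 for i, s in enumerate(symbols)}
    for s in reversed(symbols):          # try the largest unit first
        if n >= prefix[s]:
            return "%.1f%s" % (n / prefix[s], s)
    return "%sB" % n                     # below 1 KiB, print plain bytes

print(bytes2human(1022431232))  # 975.1M  (the total= value from the output above)
print(bytes2human(2048))        # 2.0K
```

For example, `bytes2human(psutil.virtual_memory().total)` turns the `total=1022431232` field above into a readable figure.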
Psutil is an extremely helpful utility; it offers a large number of functions/APIs which can be used to properly monitor your system. You may be surprised to learn that currently over 4,600 open-source projects use Psutil on the backend to perform system resource monitoring tasks. Psutil has been ported to many other languages too; notable among them are Ruby, C, Go, Node, and Rust. Complete details about its portability can be found on GitHub. Hope you enjoyed this article. Psutil has a lot to offer; you should be able to find many exciting functions/APIs which you can use on a daily basis to improve your system's performance. If you have any questions or feedback, feel free to let us know in the comments section of the article.
https://linoxide.com/psutil-library-fectch-process-information/
I am wondering if there is a way to get the type declared inside the underlying Iterable, as in:

var string: Seq[String]  => I want something to return String
var int: Seq[Int] = _    => I want something to return Int
var genericType: Seq[A]  => I want something to return A

def fromJson[A](jsonString: String)(implicit tag: TypeTag[A]): A = ???

I might not have provided enough information in my question. Anyway, I found the answer. In order to retrieve a class from a generic type I had to do the following:

import scala.reflect.runtime.universe._

val m = runtimeMirror(getClass.getClassLoader)

def myMethod[A](implicit t: TypeTag[A]) = {
  val aType = typeOf[A]
  aType.typeArgs match {
    case x: List[_] if x.nonEmpty => m.runtimeClass(x.head)
    case x: List[_] if x.isEmpty  => m.runtimeClass(aType)
  }
}

scala> myMethod[Seq[String]]
res3: Class[_] = class java.lang.String

scala> myMethod[Seq[Int]]
res4: Class[_] = int

scala> case class Person(name: String)
defined class Person

scala> myMethod[Seq[Person]]
res5: Class[_] = class Person

scala> myMethod[Person]
res6: Class[_] = class Person

I can then provide this class to the underlying library to do its job. Thanks
https://codedump.io/share/pYXIiPHn8tM4/1/how-to-get-type-declared-from-iterable-in-scala
DLFCN(3)                  OpenBSD Programmer's Manual                 DLFCN(3)

NAME
     dlopen, dlclose, dlsym, dlctl, dlerror - dynamic link interface

SYNOPSIS
     #include <dlfcn.h>

     void *
     dlopen(char *path, int mode);

     int
     dlclose(void *handle);

     void *
     dlsym(void *handle, char *symbol);

     int
     dlctl(void *handle, int cmd, void *data);

     char *
     dlerror(void);

DESCRIPTION
     These functions provide an interface to the run-time linker ld.so. They
     allow new shared objects to be loaded into a process's address space
     under program control. The dlopen() function takes a name of a shared
     object as its first argument. The shared object is mapped into the
     address space, relocated and its external references are resolved in
     the same way as is done with the implicitly loaded shared libraries at
     program startup. The argument can either be an absolute pathname or it
     can be of the form ``lib<name>.so[.xx[.yy]]'' in which case the same
     library search rules apply that are used for ``intrinsic'' shared
     library searches.

     The dlsym() function looks up the named symbol; if the symbol cannot be
     resolved, NULL is returned.

     dlctl() provides an interface similar to ioctl(2) to control several
     aspects of the run-time linker's operation. This interface is currently
     under development.

     dlerror() returns a character string representing the most recent error
     that has occurred while processing one of the other functions described
     here.

SEE ALSO
     ld(1), rtld(1), link(5)

HISTORY
     Some of the dl* functions first appeared in SunOS 4.

BUGS
     An error that occurs while processing a dlopen() request results in the
     termination of the program.

OpenBSD 2.6                   September 30, 1995                             1
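The dlopen/dlsym semantics described above can be observed from a Python shell, since Python's ctypes module is layered on this same interface on Unix-like systems. A small sketch (assumes a Unix-like system; `ctypes.CDLL(None)` corresponds to `dlopen(NULL, ...)`, which yields a handle through which the symbols of the running process and its already-loaded libraries, including libc, can be resolved):

```python
import ctypes

# dlopen(NULL, ...) equivalent: a handle to the running process's symbols.
libc = ctypes.CDLL(None)

# dlsym() equivalent: attribute access resolves the symbol by name.
strlen = libc.strlen
strlen.argtypes = [ctypes.c_char_p]   # declare the C signature explicitly
strlen.restype = ctypes.c_size_t

print(strlen(b"dlopen"))  # 6
```

Looking up a symbol that does not exist raises AttributeError in ctypes, the analogue of dlsym() returning NULL with an error retrievable via dlerror().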
http://www.rocketaware.com/man/man3/dlfcn.3.htm
Odoo Help

database not getting updated by on_change event

_columns = {
    'salary': fields.integer("Salary"),
    'increment_date': fields.date('Next increment Date'),
    'department_id': fields.many2one('hr.department', 'Department'),
    'parent_id': fields.many2one('hr.employee', 'Manager'),
    'job_id': fields.many2one('hr.job', 'Job'),
    'coach_id': fields.many2one('hr.employee', 'Coach'),
    'employee_id': fields.many2one('hr.employee', 'HR employee'),
}

def on_change_job(self, cursor, user, field, jobid, context=None):
    res = {}
    if not jobid:
        return res
    job = self.pool.get('hr.job').browse(cursor, user, jobid)
    nb_employees = len(job.employee_ids or [])
    nb_employees = nb_employees + 1
    res[job.id] = {
        'no_of_employee': nb_employees,
        'expected_employees': nb_employees + job.no_of_recruitment,
    }
    return res

The values returned by the on_change_job event look OK, but the effect is not seen in the database; the values of the fields do not get updated. I could use an UPDATE query, but I want to try it using the ORM.

on_change is only meant to update the form in the UI, so by default this is a feature ;) If you want to save data to the database you have to call the write method, but using write inside an on_change is bad design. In a form, the user expects that the data will only be saved on clicking the "Save" button, with no autosave on change events.

I have a button that runs a function on the one2many field objects in the form. I need the on_change event to save it. How can I achieve that?

If you save an object in your form, all related (one2many) records will also be saved. As I mentioned, auto-save with on_change is not what the user expects; it is bad design. Let the user control when he wants to save a form.
I think what your are looking for is wizard: About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
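The distinction in the accepted answer — on_change only returns values for the form, while persisting requires an explicit write (normally triggered by the Save button) — can be illustrated without any Odoo installation. Everything below is a hypothetical mock, not actual Odoo/OpenERP API:

```python
# Framework-free sketch: an on_change-style handler computes and *returns*
# values for the form but never touches the stored record; only an explicit
# write() (what the Save button triggers) persists anything.
class FakeJob:
    def __init__(self):
        # stand-in for the database row
        self.db = {"no_of_employee": 3, "expected_employees": 5}

    def on_change_job(self):
        # Compute new values and hand them back to the UI; no persistence.
        nb = self.db["no_of_employee"] + 1
        return {"value": {"no_of_employee": nb, "expected_employees": nb + 2}}

    def write(self, vals):
        # Persistence happens only through an explicit write().
        self.db.update(vals)

job = FakeJob()
changes = job.on_change_job()
assert job.db["no_of_employee"] == 3   # database untouched by on_change
job.write(changes["value"])            # what the Save button triggers
assert job.db["no_of_employee"] == 4
```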
https://www.odoo.com/forum/help-1/question/database-not-getting-updated-by-on-change-event-22026
CC-MAIN-2018-26
refinedweb
332
59.4
In this weblog I will illustrate how to achieve an ASBO-GBO-ASBO scenario using the SOAP adapter. I will not discuss the Integration Repository and other Integration Directory configurations.

ASBO: Application Specific Business Object (represents application-specific metadata).

GBO: Generic Business Object (for example, sets of objects are available from OAGIS and also from GS1). The use of Generic Business Objects allows us to create decoupled interfaces, as opposed to tightly coupled interfaces.

Let us take an example: System A is sending message ASBO1 to System B (message ASBO2), System C (message ASBO3) and System D (message ASBO4). If we do not use Generic Business Objects, then essentially we are using three mapping programs: A <-> B, A <-> C and A <-> D. The problem with this scenario is that if System A changes its metadata, that is ASBO1, all three mappings need rework. Also, the addition of more systems means higher complexity. An alternative is to use a standard generic object (say a GS1 message). System A maps to the GBO, and Systems B, C and D also map to the GBO. Essentially, System A acts as a publisher of the message, and Systems B, C and D are subscribers to the GBO. Moreover, new systems in the landscape can simply subscribe to the GBO.

What it means for XI: In XI we need to create an intermediate Business Service for the GBO. A will send the message to the GBO, and the GBO in turn will forward it to all the subscribers. For each type of message the GBO will have a pair of SOAP communication channels: SOAP_Receiver and SOAP_Sender. The GBO will receive the message from A through the communication channel SOAP_Receiver, which in turn will push the message to the SOAP_Sender communication channel.

Soap Receiver: For SOAP_Receiver, specify the target URL as :/XISOAPAdapter/MessageServlet?channel=::

Soap Sender: For SOAP_Sender, specify the default interface namespace and interface name. This is the outbound interface from the GBO.

Bang! Now we have a Business Service within XI to act as a middleman. Create the ID configurations for the message flow from System A to the GBO and from the GBO to the destination systems.

I do like your post. Still, could you explain to me the benefits of using the GBO approach? My concern would be a larger development effort to start with, and multiplied messages compared to a point-to-point development. Every message coming in generates another one, data is persisted twice, etc. Thanks for your input. Mathias.

As a matter of fact, I am yet to convince myself of the GBO approach in XI :-)) However, as for your questions — yes, your points are correct: the development effort will be more, and data is persisted twice. The main advantage of the GBO approach is the decoupling of interfaces and the reduction of dependencies. It helps to build plug-and-play interfaces, because the source and the destination systems communicate with the GBO only. Also, think of it this way: most of the mapping requirement can be covered in one interface only, and the other interface might use a simple 1:1 mapping [assuming one interface A-B results in two interfaces A-GBO and GBO-B, you can keep A-GBO 1:1]. So the development effort will increase, but not significantly. Thanks, Himadri

I see your points, and the general approach of decoupling is good as well, even if decoupling may better be done at the configuration level (?). It is a good idea to have one of the involved interfaces act in a simple way with the GBO, for example in a 1:1 relation. There, it only gets difficult to predict which side it should be, meaning which side will encounter fewer changes. Basically, the GBO approach will give you an easier adaptation of your flows once a side needs to change, for example, a message type. The question to raise, of course, is how often such changes will happen. Since we are currently collecting position papers and arguments regarding the GBO approach, your answer and post are much appreciated. Thanks a lot. Mathias.
https://blogs.sap.com/2006/12/14/achieving-asbo-gbo-asbo-scenario-using-soap-adapter/
CC-MAIN-2018-05
refinedweb
671
62.98
This is a C++ program that solves the Maximum Value of Gifts problem using the dynamic programming technique. You are given a matrix of order n*m. Each cell of the matrix has a gift of a certain value. Starting from the top-left cell, you have to reach the bottom-right cell of the matrix, collecting the gifts of the visited cells. But you can only visit either the cell below the current cell or the cell to the right of the current cell. Determine the maximum value of gifts you can collect.

There are two ways to reach cell (i,j): one from above and one from the left. Choose the one which gives more value.

Case-1: rows,columns=(4,4)
Values of gifts in cells
1 10 3 8
12 2 9 6
5 7 4 11
3 7 16 5
maximum attainable value of gifts=53

Here is the source code of the C++ program to solve the Maximum Value of Gifts problem. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.

    #include<iostream>
    #include<vector>
    using namespace std;

    int maxGifts(int n, int m, vector<vector<int> > gifts)
    {
        // vector to store results
        // dp[i][j] = maximum attainable value of gifts for the matrix
        // of i rows and j columns only
        vector<vector<int> > dp(n+1, vector<int>(m+1, 0));
        for(int i=1; i<=n; i++)
        {
            for(int j=1; j<=m; j++)
            {
                // There are two ways to reach cell (i,j):
                // one from above and one from the left.
                // Choose the one which gives more value.
                dp[i][j] = max(dp[i][j-1], dp[i-1][j]) + gifts[i][j];
            }
        }
        return dp[n][m];
    }

    int main()
    {
        int n, m;
        cout<<"Enter the number of rows and columns"<<endl;
        cin>>n>>m;
        vector<vector<int> > gifts(n+1, vector<int>(m+1));
        cout<<"Enter the values of gifts in cells"<<endl;
        for(int i=1; i<=n; i++)
        {
            for(int j=1; j<=m; j++)
            {
                cin>>gifts[i][j];
            }
        }
        cout<<"maximum attainable value of gifts is "<<endl;
        cout<<maxGifts(n, m, gifts);
        cout<<endl;
        return 0;
    }

In the main function, we ask the user to input the number of rows and columns and the values of the gifts in each cell. We pass these values to the function maxGifts as parameters. This function will calculate the expected result and return it. The returned value will be displayed.

Case-1:
$ g++ max_gifts.cpp
$ ./a.out
Enter the number of rows and columns
4 4
Enter the values of gifts in cells
1 10 3 8
12 2 9 6
5 7 4 11
3 7 16 5
maximum attainable value of gifts is
53

Sanfoundry Global Education & Learning Series – Dynamic Programming Problems. To practice all Dynamic Programming Problems, here is a complete set of 100+ Problems and Solutions.
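The same recurrence, dp[i][j] = max(dp[i][j-1], dp[i-1][j]) + gifts[i][j], is easy to check in a few lines of Python. This is an independent sketch for illustration, not part of the Sanfoundry program; it takes a 0-based grid and pads the DP table so the borders are zero:

```python
# Independent sketch of the same dynamic-programming recurrence.
def max_gifts(gifts):
    n, m = len(gifts), len(gifts[0])
    # dp is padded with a zero row and column so dp[i-1][j] / dp[i][j-1]
    # are always defined.
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Reach cell (i,j) either from above or from the left;
            # keep whichever path carries more value.
            dp[i][j] = max(dp[i][j - 1], dp[i - 1][j]) + gifts[i - 1][j - 1]
    return dp[n][m]

grid = [
    [1, 10, 3, 8],
    [12, 2, 9, 6],
    [5, 7, 4, 11],
    [3, 7, 16, 5],
]
print(max_gifts(grid))  # 53, matching the sample run above
```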
http://www.sanfoundry.com/dynamic-programming-solutions-maximum-value-of-gifts-problem/
CC-MAIN-2017-43
refinedweb
474
58.21
.NET Helpers
- 6. Text Processing
- 7. Regular Expressions
- 8. Threading and Async
- 9. Collections
- 10. LINQ
- 11. Networking
- 12. JSON
- 13. Trimming
- 14. New Performance-Focused APIs
- 15. New Performance-Focused Analyzers
- 16. Peanut Butter

Setup

BenchmarkDotNet is a simple tool for measuring the performance of .NET code, making it easy to analyze the throughput and allocation of code snippets. The large majority of the examples in this post are evaluated using microbenchmarks written with that tool. To make it easy to follow along at home (literally, for many of us these days), we start by using the dotnet tool to create a directory and scaffold a console app:

    mkdir Benchmarks
    cd Benchmarks
    dotnet new console

The generated csproj is then augmented to set the output type to Exe, enable a couple of boolean build properties, and multi-target net5.0, netcoreapp3.1 and net48.

Read More: What's New In .NET Productivity?

Garbage Collection

For anyone interested in .NET and performance, garbage collection is always on the mind. A lot of effort goes into reducing allocation, not because the act of allocating is itself particularly expensive, but because of the follow-on costs of the garbage collector (GC) cleaning up after that allocation. No matter how much work goes into reducing allocations, however, the vast majority of workloads will incur them, and so it's important to continually push the boundaries of what the GC is able to accomplish, and how quickly.

Just-In-Time (JIT) Compiler

.NET 5 is an exciting version for the Just-In-Time (JIT) compiler as well, with many improvements of all kinds finding their way into the release. As with any compiler, improvements made to the JIT can have wide-reaching effects. Often, individual changes have a small effect on any one piece of code, but such changes are then magnified by the sheer number of places they apply.

There is an almost unlimited number of optimizations that could be added to the JIT, and given an unlimited amount of time to run such optimizations, the JIT could create the most optimal code for any given scenario. But the JIT does not have an unbounded amount of time. Its "just-in-time" nature means it performs compilation as the application runs: when a method that has not yet been compiled is invoked, the JIT needs to provide assembly code for it on demand. That means the thread cannot make forward progress until compilation has completed, which in turn means the JIT needs to be strategic about which optimizations it applies and how it chooses to use its limited time budget.

Intrinsics

In .NET Core 3.0, over a thousand new hardware intrinsics methods were added and recognized by the JIT to enable C# code to directly target instruction sets such as SSE4 and AVX2 (see the docs). These were then used to great advantage in a set of APIs in the core libraries. However, the intrinsics were limited to the x86/x64 architectures only.

Runtime Helpers

The GC and JIT represent large portions of the runtime, but a significant portion of the functionality remains in the runtime outside of these components, and it has seen similar improvements.

Text Processing

Text-based processing is the bread-and-butter of many applications, and in every release a lot of effort goes into improving the fundamental building blocks on top of which everything else is built. Such changes extend from micro-optimizations in helpers processing individual characters all the way up to overhauls of entire text-processing libraries.

Regular Expressions

A very specific but extremely common form of parsing is via regular expressions. The System.Text.RegularExpressions namespace has been in .NET for many years, all the way back to .NET Framework 1.1. It is used within the .NET implementation itself and directly by thousands upon thousands of applications.

Threading and Async

One of the biggest changes around asynchrony in .NET 5 is actually not enabled by default, but is another experiment to get feedback on. The async/await feature in C# has revolutionized how developers targeting .NET write asynchronous code: sprinkle some async and await around, and change some return types to be tasks.

Collections

Over the years, C# has gained a plethora of valuable features. Many of these features are focused on developers being able to write code more succinctly, with the language/compiler responsible for all the boilerplate. However, some features focus less on productivity and more on performance, and such features are a great boon to the core libraries, which can use them to make everyone's programs more efficient.

LINQ

An earlier release of .NET Core saw a large amount of churn in the System.Linq codebase, in particular to improve performance. That churn has since slowed, but .NET 5 still sees performance improvements in LINQ.

Networking

These days, networking is a crucial component of almost every application, and great networking performance is of paramount importance. As such, every release of .NET now sees a lot of attention paid to improving networking performance, and .NET 5 is no exception.

JSON

Significant improvements were made to the System.Text.Json library for .NET 5, and specifically to the JsonSerializer, but many of those improvements were actually ported back to .NET Core 3.1 and released as part of servicing fixes. Even so, there are some nice improvements that show up only in .NET 5.

Trimming

Until 3.0, .NET Core was primarily focused on server workloads, with ASP.NET Core the preeminent application model on the platform. With .NET Core 3.2, Blazor support for browser applications was released, but it was based on the mono runtime and the mono stack of libraries. With .NET 5, Blazor utilizes the .NET 5 mono runtime and all of the same .NET 5 libraries shared by every other application model. This brings a new twist to performance: size. While code size has always been an important issue (and is important for native applications), the scale required for successful browser-based deployment really brings it to the forefront, as download size becomes a concern in a way it has not been for .NET Core in the past.

New Performance-Focused APIs

This post has highlighted a plethora of existing APIs that simply get better when running on .NET 5. In addition, .NET 5 has many new APIs, some of which are focused on helping developers write faster code.

New Performance-Focused Analyzers

The C# "Roslyn" compiler has a very important extension point called "analyzers", or "Roslyn analyzers". Analyzers plug into the compiler and are given full read access to all of the source the compiler is operating over, as well as the compiler's parsing and modeling of that code, which enables developers to plug their own custom analyses into a compilation. On top of that, analyzers run not only as part of builds but also in the IDE as developers write their code, which enables analyzers to present suggestions, warnings, and errors on how developers can improve their code. Analyzer developers can also author "fixers" that can be invoked in the IDE to automatically replace the flagged code with a "fixed" alternative, and all of these components can be distributed via NuGet packages, making it easy for developers to consume arbitrary analyses written by others.

Conclusion

The .NET 5 release is an advanced open source platform for building applications that work across multiple operating systems. We hope this article has helped you understand the performance improvements in .NET 5.
https://www.ifourtechnolab.com/blog/performance-improvements-in-net-5
CC-MAIN-2022-40
refinedweb
1,230
56.05
13 December 2011 18:37 [Source: ICIS news] WASHINGTON (ICIS)-- Economists surveyed by Dow Jones Newswires had forecast a gain of 0.5% for November’s retail sales, and the modest showing suggests that consumers are becoming more cautious. The narrow 0.2% advance in retail sales for November marks the second month of lower growth and is in contrast to the 0.6% advance seen in October from September and September’s robust 1.2% gain over August. The mediocre November advance is seen as all the more surprising, given that the month typically is one of the busiest shopping periods of the year. Consumer spending usually ramps up sharply in November in advance of the US Christmas holiday in December and its tradition of gift-giving. However, the National Retail Federation (NRF) noted that the November gain in retail sales marked the 17th consecutive month of increases, and last month’s sales were 6.7% ahead of November 2010, according to the department's data. NRF president Matthew Shay said that the November figures show that “though consumers remain concerned about the economy, they are demonstrating an increased willingness to spend this holiday season”. But Shay noted that the real measure of consumer sentiment will come with retail sales data for this month. “While [retail] companies are encouraged by a strong start to the holiday season, the real test will come in December when the majority of holiday spending occurs,” he said. Consumer spending is the principal driving force
http://www.icis.com/Articles/2011/12/13/9515907/us-retail-sales-rise-only-0.2-in-nov-less-than-expected.html
CC-MAIN-2014-52
refinedweb
253
55.24
> > We have rewritten most of the code used for creating text from DOMs.
> > I've cc'ed xml-sig because the check-ins of 4DOM I'll be making
> > today reflect these changes.
>
> Very interesting. Are you following the DOM Level 3 discussions on
> load-and-save interfaces? [I couldn't access the draft right now, so
> I can't check whether it is related to your work]

Not yet. In the first draft, load and save was not covered at all. I haven't perused the second draft, but at any rate it will be somewhat closer to CR before we move to DOM L3. We were burned in terms of wasted effort by moving to the draft DOM L2 namespaces and having them change quite a bit.

> > Using one of the new reader classes is also simple. You create an
> > instance passing in to the constructor any parameters relevant to the
> > state of that class.
>
> While support for customization is a good thing, I think many users
> won't need it, or might get confused by it. So I'd prefer to have some
> guidelines on what the "good for most uses" way of getting a DOM is.

OK. I'll try to get some such doc in before release.

> >.
>
> Can you please bring the fromString interface back? In interactive
> mode, it is a pain to type StringIO.StringIO.

OK.

> Also, what is the complication that makes urllib not work for fromUri?
> In the Python 2 SAX2 interfaces, you can pass a string to parse, and
> it will then consider that as a system identifier. In turn, it will
> pass it to urllib, which will open either a local file or the URL.

Ah, but not all URIs are URLs. What if you have a URN resolution handler? This is something that will be especially relevant with 4Suite Server, which provides URN/UUIDs for XML documents in the repositories, and also provides a relevant URI handler which can easily be plugged into XPath, XSLT, RDF, XPointer, etc.

> > .]
>
> Isn't a validating parser supposed to indicate which elements can have
> their whitespace stripped?
Not directly, but of course one can use the ignorableWhitespace callback if you're using SAX. However, the reader support for stripping is an entirely different matter. XSLT allows you to specify elements to be stripped from source documents. Originally, 4XSLT would create the DOM normally and then strip the relevant WS nodes, but this was horribly inefficient. We sped things up several times over by merely stripping whitespace as we built the DOM. This is why we have the interface, and it is also why it is not recommended for regular use: it's pretty much a hack (but a very important hack) for XSLT performance.

> > Python 1.x users can break circular dependencies by calling the
> > releaseNode method on the reader that was used to create the DOM:
> >
> > reader.releaseNode(xml_doc)
>
> What kind of circularity does that break? The one in the tree? Does
> that mean I have to keep the reader until I release the tree?

Yes, and maybe. You don't have to keep the reader around if you're sure what type of DOM you have. However, if you try to call cDomlette's ReleaseNode on a pDomlette node, it will break, and vice versa. That's why it's also on the instance as a convenience.

--
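The fromString convenience being requested here — parsing straight from a string instead of wrapping it in StringIO — is exactly the pair of entry points that today's stdlib minidom exposes. As a small illustrative sketch for modern readers (not 4DOM itself):

```python
# Two ways to get a DOM from in-memory XML with the stdlib minidom.
from io import StringIO
from xml.dom import minidom

xml_text = "<doc><greeting>hello</greeting></doc>"

# Stream-style parse: the pattern the reader classes above require.
dom1 = minidom.parse(StringIO(xml_text))

# Direct-from-string parse: the fromString-style shortcut requested here.
dom2 = minidom.parseString(xml_text)

# Both roads lead to the same tree.
assert dom1.documentElement.tagName == dom2.documentElement.tagName == "doc"
```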
https://mail.python.org/pipermail/xml-sig/2000-November/003704.html
CC-MAIN-2016-40
refinedweb
565
72.76
JBoss Bean Deployer - JMX Integration
Adrian Brock Jun 6, 2005 11:54 AM

I know we said we weren't going to worry too much about JMX integration in JBoss4. But it should be possible to provide some basic integration at the deployment level, e.g.

    <?xml version="1.0" encoding="UTF-8"?>
    <deployment xmlns:
       <loader-repository> dot.com:loader=test </loader-repository>
       <depends>jboss:service=TransactionManager</depends>
       <bean name="Name1" class="test.Hello">
          <property name="prop">Property</property>
       </bean>
    </deployment>

Where "loader-repository" and "depends" are the standard JMX MC notions. The main issue is how to do this while minimizing the "kludges" we have to support for backwards compatibility when there is a real integration between the JMX and POJO MCs.

1. Re: JBoss Bean Deployer - JMX Integration
Scott Stark Jun 6, 2005 12:58 PM (in response to Adrian Brock)

If we add a full object name alias notion to the jmx mc service layer, the depends notion can be the same as jbas5 in terms of a simple name. The only scenario where I can see needing to specify a loader-repository is if a standalone bean deployment needs to use the scoped namespace of another legacy deployment. This is a rather exotic deployment for most users. If you're just talking about allowing a standalone bean deployment to use a scoped class loader, can't we just use a bean reference to a class loader configuration and embed the configuration in the target bean?

2. Re: JBoss Bean Deployer - JMX Integration
Adrian Brock Jun 6, 2005 1:55 PM (in response to Adrian Brock)

"scott.stark@jboss.org" wrote: If we add a full object name alias notion to the jmx mc service layer, the depends notion can be the same as jbas5 in terms of a simple name.

I'm not sure I like the idea of introducing yet another arbitrary namespace. Though I do agree that logical names are important. I would prefer the "alias notion" to be based on something more concrete, like a contract rather than just an ad hoc notion.
    <bean interface="javax.transaction.TransactionManager">
       <factory bean="jboss:service=TransactionManager" method="getInstance"/>
    </bean>

    <bean name="MyBean">
       <property name="TM"><inject interface="javax.transaction.TransactionManager"/></property>
    </bean>

And ideally, this type of monomorphic (one implementor of the contract) behaviour would be "automatic" without too much need for explicit declaration:

    <bean name="MyBean">
       <property name="TM"><inject/></property> <!-- optional? -->
    </bean>

There are other dependencies that are not directly related to injection, but have some logical component:

    <deployment>
       <depends implementation="javax.jms" version="1.5"/>
       <bean name="MyBean"/>
    </deployment>

3. Re: JBoss Bean Deployer - JMX Integration
Adrian Brock Jun 6, 2005 1:59 PM (in response to Adrian Brock)

If we are to have an alias scheme, I would prefer it to have some sort of "type" that could be used to identify mbeans automatically, e.g.

    <depends type="Queue" name="testQueue"/>

is jboss.mq.destinations:service=Queue,name=testQueue

Otherwise you're going to either need to explicitly declare aliases in the MBeans or get the relevant MBeans to populate the alias database in such a way that they don't conflict.

4. Re: JBoss Bean Deployer - JMX Integration
Scott Stark Jun 6, 2005 2:12 PM (in response to Adrian Brock)

How is the type enabling the namespace management? In your example it would seem that you are carving out a Queue namespace that can be augmented with keys known to be unique in the Queue namespace. So you want a type-based name to ObjectName mapping function on the alias aspect of the SARDeployer?

5. Re: JBoss Bean Deployer - JMX Integration
Adrian Brock Jun 6, 2005 2:25 PM (in response to Adrian Brock)

Yes, I would segment the namespace by type IF I were going to do it that way. The trouble is that it relies on something that does not really exist, i.e. some notion of an abstract Queue MBean or class/grouping of such MBeans.
Such a notion could be retrofitted to the MBeans with a getAliasType()/getAliasName(), but that is pretty intrusive to the implementation, or it requires more xml in the deployment to state it, or it requires another listener on deployments to generate the alias database from known patterns in MBean deployments (often with those patterns being conventions only).

6. Re: JBoss Bean Deployer - JMX Integration
Scott Stark Jun 6, 2005 3:16 PM (in response to Adrian Brock)

So use both an explicit alias attribute on the legacy mbean for complete control, and allow for a mapping function on the SARDeployer that lets ObjectNames meeting understood patterns be transformed into a unique alias. The latter should generally be universally applicable, as one would expect a convention was in place for all object names.

7. Re: JBoss Bean Deployer - JMX Integration
Adrian Brock Jun 6, 2005 3:41 PM (in response to Adrian Brock)

Yes, but surely all that does is introduce a binding to the new alias notion rather than the ObjectName. This might be useful for solving some problems like specifying the default datasource (which already has an alias, in that DefaultDS is already "hardwired" in some places). To solve some other problems you would need to be able to specify an alias based on some other name, e.g. automatically adding dependencies on resource-refs based on jndi-name or ejb-refs based on ejb-name, i.e.

    <depends type="ejb" name="MyEJB"/>

would resolve to jboss.j2ee:service=EJB,jndiName=MyEJB, but this also depends upon context, since MyEJB can exist in multiple deployments. I don't think adding a "trivial" alias notion would solve many problems, and it would likely confuse some users. Users are confused enough when the MBeans are created by deployers, so they don't know what names to use. Instead, we should be working towards defining an extensible dependency notion that understands context, type or anything else that might be important.
This should produce something intuitive and allow users to "write their own dependency" if nothing fits what they need.

8. Re: JBoss Bean Deployer - JMX Integration
Scott Stark Jun 6, 2005 3:48 PM (in response to Adrian Brock)

Ok, but we seem to be mixing legacy vs future naming issues. Just throwing an alias attribute onto the existing mbean would seem to ease integration of non-jmx based stuff into the jmx mc, and avoids the exposure of a jmx name dependency in the new bean deployer.

9. Re: JBoss Bean Deployer - JMX Integration
Adrian Brock Jun 6, 2005 4:12 PM (in response to Adrian Brock)

Yes, but as I said above, I want to "minimiz(e) the kludges". This can be done by introducing something that is based on what we want going forward (even if it is incomplete). E.g. once I've integrated the jmx/pojo MCs, the injection should be "transparent" between the two. The transaction manager factory above doesn't really know that it is using an MBeanProxy on the TransactionManagerService. The problem before that integration (jboss4) is that the lifecycles or configuration spaces aren't linked, except that the whole bean deployment is managed by an MBean known to the SARDeployer. I can hack some things to make it work better, but it isn't going to work as well as the real solution in JBoss5. There is a sort of crossover here, in that we still want the old loader-repository config to work in JBoss5, even when the classloaders are part of the DI framework.

10. Re: JBoss Bean Deployer - JMX Integration
Scott Stark Jun 6, 2005 5:48 PM (in response to Adrian Brock)

So what is the "minimiz(e) the kludges" approach?

11. Re: JBoss Bean Deployer - JMX Integration
Adrian Brock Jun 6, 2005 6:04 PM (in response to Adrian Brock)

"scott.stark@jboss.org" wrote: So what is the "minimiz(e) the kludges" approach?

I don't know yet.
If it was trivial, I probably wouldn't have mentioned the issue in the forums :-) The issue is to spot that a dependency/injection is really an external dependency outside the control of the bean MC and preprocess these into "depends" to attach to the ServiceController registration, i.e. the wrapping container. Anyway, we said we weren't going to worry too much about the JMX integration for JBoss4 (at least in the initial release).

12. Re: JBoss Bean Deployer - JMX Integration
Scott Stark Jun 6, 2005 7:05 PM (in response to Adrian Brock)

So I'm not seeing why it's not as simple as injecting an object name pattern matcher into the sar deployer that returns the string alias:

    interface TheMinimalHackThing {
       String mapObjectNameToAlias(ObjectName name);
       String getAlias(ObjectName name);
       ObjectName getObjectName(String alias);
    }

13. Re: JBoss Bean Deployer - JMX Integration
Adrian Brock Jun 6, 2005 7:54 PM (in response to Adrian Brock)

I'm not sure I follow; I think we have got a bit off-topic :-) Certainly, I could make it a rule that if the depends or inject tag in the bean's xml has a name that is a valid ObjectName, then it is assumed to be a jmx dependency. This logic can be hidden away in the deployer/dependency processing of JBoss4's BeanDeployer. Whether we want to keep the jmx ObjectName as the namespace convention down the road is a different issue. It does have some advantages in terms of the flat namespace and querying, but it fails in its original intention of being a "logical" name for the underlying object (which is probably partly a fault of the way we use it). In many ways, querying on the real attribute/property values is probably better? Like I expressed above, I don't think that just a name is necessarily the best/only way to do this. Though using any other scheme will still require at least an internal unique "id".

14.
Re: JBoss Bean Deployer - JMX Integration
Scott Stark Jun 6, 2005 8:48 PM (in response to Adrian Brock)

I think we are still on topic; it's just a question of how the bean deployer is implementing the depends tag. Maybe I'm confusing the issue with the belief that the bean deployer already has a depends tag. That is what I am thinking, and the question is one of making the jmx mc components available as dependency targets in the bean deployers. As you said, this has a straightforward impl if one uses the jmx mc component object name, and the bean deployer assumes valid jmx names refer to components in the jmx mc. Beyond that is the question of mapping the jmx mc names into a non-jmx object name namespace for a more natural pojo mc naming convention. It's a notion that has validity purely in the jmx mc as well, given that there are more logical names/aliases for many services. For the not-so-logical names, such as that assigned to an ejb local home or mdb deployment, there really is no natural name other than the existing ejb-name, but it's far from being unique. I'm punting on this for now.
https://developer.jboss.org/message/311364
CC-MAIN-2019-26
refinedweb
1,858
56.29
Introduction to property based testing
Giulio Canti

In the last posts about Eq, Ord, Semigroup and Monoid we saw that instances must comply with some laws. So how can we ensure that our instances are lawful?

Property based testing

Property based testing is another way to test your code which is complementary to classical unit-test methods. It tries to discover inputs causing a property to be falsy by testing it against multiple generated random entries. In case of failure, a property based testing framework provides both a counterexample and the seed causing the generation.

Let's apply property based testing to the Semigroup law:

Associativity: concat(concat(x, y), z) = concat(x, concat(y, z))

I'm going to use fast-check, a property based testing framework written in TypeScript.

Testing a Semigroup instance

We need three ingredients:
- a Semigroup<A> instance for the type A
- a property that encodes the associativity law
- a way to generate random values of type A

Instance

As instance I'm going to use the following

    import { Semigroup } from 'fp-ts/lib/Semigroup'

    const S: Semigroup<string> = {
      concat: (x, y) => x + ' ' + y
    }

Property

A property is just a predicate, i.e. a function that returns a boolean. We say that the property holds if the predicate returns true. So in our case we can define the associativity property as

    const associativity = (x: string, y: string, z: string) =>
      S.concat(S.concat(x, y), z) === S.concat(x, S.concat(y, z))

Arbitrary<A>

An Arbitrary<A> is responsible for generating random values of type A. We need an Arbitrary<string>; fortunately fast-check provides many built-in arbitraries

    import * as fc from 'fast-check'

    const arb: fc.Arbitrary<string> = fc.string()

Let's wrap it all together

    it('my semigroup instance should be lawful', () => {
      fc.assert(fc.property(arb, arb, arb, associativity))
    })

If fast-check doesn't raise any error we can be more confident that our instance is well defined.
Testing a Monoid instance

Let's see what happens when an instance is lawless! As instance I'm going to use the following

```ts
import { Monoid } from 'fp-ts/lib/Monoid'

const M: Monoid<string> = {
  ...S,
  empty: ''
}
```

We must encode the Monoid laws as properties:

- Right identity: concat(x, empty) = x
- Left identity: concat(empty, x) = x

```ts
const rightIdentity = (x: string) => M.concat(x, M.empty) === x
const leftIdentity = (x: string) => M.concat(M.empty, x) === x
```

and finally write a test

```ts
it('my monoid instance should be lawful', () => {
  fc.assert(fc.property(arb, rightIdentity))
  fc.assert(fc.property(arb, leftIdentity))
})
```

When we run the test we get

```
Error: Property failed after 1 tests
{ seed: -2056884750, path: "0:0", endOnFailure: true }
Counterexample: [""]
```

That's great, fast-check even gives us a counterexample: ""

```ts
M.concat('', M.empty) = ' ' // should be ''
```

Resources

For a library that makes it easy to test type class laws, check out fp-ts-laws.

Comments

- I was playing a bit with this and wondering if/how it would make sense to test a monoid for tasks. Any ideas? Particularly on generating a setoid for those types?
- I think you have a typo in your Monoid instance. The string should not be empty for the test to fail.
- concat(x, empty) is equal to x + ' ' + empty by definition of concat. If x = '' then x + ' ' + empty is equal to '' + ' ' + '' which is equal to ' ', so concat(x, empty) !== x.
- Ah, yeah, I missed the extra space in the Semigroup instance.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/gcanti/introduction-to-property-based-testing-17nk
CC-MAIN-2019-35
refinedweb
596
53.31
Now that you know the basics, this article explains how to use XML's more advanced constructs to author complex XML documents. Entities, namespaces, CDATA blocks, processing instructions - they're all in here, together with aliens, idiots, secret agents and buried treasure.

```xml
<?xml version="1.0"?>
<headline>Alien Life On Earth, Says IDIOT Official</headline>
<date>July 23, 2001</date>
<place>Alaska</place>
<reporter>Joe Cool</reporter>
<body>
  <!-- who says you can't fool all of the people all of the time -->
  In a not-unexpected turn of events, an IDIOT (I Doubt It's Out There)
  official today confirmed reports of alien sightings in Area -10, the
  coldest part of Northern Alaska, and again called on Pentagon officials
  to either confirm or deny that the sightings were part of a decade-long
  government project to breed alien lifeforms on Earth. IDIOT also claims
  to have a map displaying the exact location of the alien "farm", and
  states that it will be released to the press within the next forty-eight
  hours. However, posing as an IDIOT, this intrepid reporter has
  successfully obtained a copy of said map, reproduced below:
  <!-- thanks, Mom -->
  <map>
  </map>
</body>
```
http://www.devshed.com/c/a/XML/XML-Basics-part-2/7/
CC-MAIN-2014-15
refinedweb
195
53.95
Encryption Using Rotor Module in Python

This tutorial will help you to understand the rotor module in Python. After this tutorial, you will be able to encrypt or decrypt messages, which will help you in your future projects.

Installation of the rotor module

rotor is not a standard module, so you have to install it on your system by running one of the following on the console:

```
pip install rotor
```

or

```
pip3 install rotor
```

Code for encryption or decryption in Python using rotor

This is a very simple module to use to encrypt or decrypt a message. Here is an explanation of the code below. First, import the module, then create the key and the message variables we want to encrypt or decrypt. Next, we create a rotor object, which we will use to call encrypt() or decrypt(). As the names suggest, the encrypt and decrypt methods encrypt or decrypt a message. You can also use the encryptmore() or decryptmore() methods instead; the difference is that encrypt() and decrypt() reset the rotor's state on every call, while the "more" variants carry the state over from the previous call. Finally, we print the messages.

```python
import rotor

KEY = "codespeedy"
msg = "Hi, How are you ?"

rt = rotor.newrotor(KEY)
encrypted_msg = rt.encrypt(msg)
# Decrypt the ciphertext, not the original message
decrypted_msg = rt.decrypt(encrypted_msg)

print("Encrypted message : ", repr(encrypted_msg))
print("Decrypted message : ", repr(decrypted_msg))
```

Also read: RSA Algorithm an Asymmetric Key Encryption in Python
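If the third-party module is unavailable, the underlying idea can still be illustrated in pure Python: a substitution alphabet that shifts after every byte. The sketch below is a toy for teaching purposes only; it is not the C rotor module's actual algorithm, and it is not secure:

```python
import random

def make_rotor(key):
    # Derive a deterministic permutation of the 256 byte values from the key.
    rng = random.Random(key)
    perm = list(range(256))
    rng.shuffle(perm)
    return perm

def encrypt(key, data):
    # Shift each input byte by its position, then substitute through the rotor.
    perm = make_rotor(key)
    return bytes(perm[(b + i) % 256] for i, b in enumerate(data))

def decrypt(key, data):
    # Invert the substitution, then undo the position-dependent shift.
    perm = make_rotor(key)
    inv = {v: i for i, v in enumerate(perm)}
    return bytes((inv[b] - i) % 256 for i, b in enumerate(data))
```

A round trip with `decrypt(key, encrypt(key, data))` returns the original bytes, which is the property the tutorial's example is meant to demonstrate.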
https://www.codespeedy.com/encryption-using-rotor-module-in-python/
CC-MAIN-2022-27
refinedweb
227
66.94
Created on 2015-05-27 18:22 by Paul Hobbs, last changed 2018-11-01 05:41 by benjamin.peterson.

Using pid namespacing it is possible to have multiple processes with the same pid. "semlock_new" creates a semaphore file with the template "/dev/shm/mp{pid}-{counter}". This can conflict if the same semaphore file already exists because another Python process has the same pid.

This bug has been fixed in Python 3. However, that patch is very large (40 files, ~4.4k changed lines) and only incidentally fixes this bug while introducing a large backwards-incompatible refactoring and feature addition. The following small patch to just _multiprocessing/semaphore.c fixes the problem by using the system clock and retrying to avoid conflicts:

```diff
--- a/Modules/_multiprocessing/semaphore.c
+++ b/Modules/_multiprocessing/semaphore.c
@@ -7,6 +7,7 @@
  */

 #include "multiprocessing.h"
+#include <time.h>

 enum { RECURSIVE_MUTEX, SEMAPHORE };

@@ -419,7 +420,7 @@ semlock_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
 {
     char buffer[256];
     SEM_HANDLE handle = SEM_FAILED;
-    int kind, maxvalue, value;
+    int kind, maxvalue, value, try;
     PyObject *result;
     static char *kwlist[] = {"kind", "value", "maxvalue", NULL};
     static int counter = 0;
@@ -433,10 +434,24 @@ semlock_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
         return NULL;
     }

-    PyOS_snprintf(buffer, sizeof(buffer), "/mp%ld-%d", (long)getpid(), counter++);
+    /* With pid namespaces, we may have multiple processes with the same pid.
+     * Instead of relying on the pid to be unique, we use the microseconds time
+     * to attempt to a unique filename. */
+    for (try = 0; try < 100; ++try) {
+        struct timespec tv;
+        long arbitrary = clock_gettime(CLOCK_REALTIME, &tv) ? 0 : tv.tv_nsec;
+        PyOS_snprintf(buffer, sizeof(buffer), "/mp%ld-%d-%ld",
+                      (long)getpid(),
+                      counter++,
+                      arbitrary);
+        SEM_CLEAR_ERROR();
+        handle = SEM_CREATE(buffer, value, maxvalue);
+        if (handle != SEM_FAILED)
+            break;
+        else if (errno != EEXIST)
+            goto failure;
+    }

-    SEM_CLEAR_ERROR();
-    handle = SEM_CREATE(buffer, value, maxvalue);
     /* On Windows we should fail if GetLastError()==ERROR_ALREADY_EXISTS */
     if (handle == SEM_FAILED || SEM_GET_LAST_ERROR() != 0)
         goto failure;
```

At first blush it does appear there is potential for conflict because of how the semaphore filename template was implemented -- that's a cool find. In practice, I wonder how often this has actually bitten anyone in the real world. The Linux world's use of clone() (creating pid namespaces) is relatively new/young. The BSD world's use of jails (bsd-style) takes a bit of a different approach, advocates against the use of shared filesystems across jails where a similar conflict could arise, and has been around longer.

@Paul: Out of curiosity, what inspired your discovery?

Agreed that backporting Richard's work from issue8713 does not appeal. A few concerns:

- Retrying with a modified filename makes sense, but basing it on a timestamp from the system clock is not particularly robust given that the cloned processes can be executing in sufficiently close lock step that they both get the same timestamp (given the granularity/precision of the clock functions). Developers new to high performance computing often learn this the hard way when trying to use the system clock to create uniquely named files.
- Instead, what about using the system's available pseudo-random number generators? Most are implemented to avoid just this problem where two or more processes/threads ask for a random/distinct value at nearly the same moment.
- What about simply using a counter (not the same static int counter but another) and incrementing it when the attempt to create the semaphore file fails?
This avoids a system function call and potentially simplifies things. Would this be faster in the majority of cases?

we hit this problem daily in Chromium OS's build farm because we use pid namespaces heavily

Triggering it regularly in a build farm indeed sounds like genuine pain. @Paul or @vapier: In tracking down this issue, did you already create a convenient way to repro the misbehavior that could be used in testing? Any finalized patch that we make will need some form of test.

the original report on our side w/bunches of tracebacks: with that traceback in hand, it's pretty trivial to write a reproduction :). attached! i couldn't figure out how to make it work w/out completely execing a new python instance. i suspect the internals were communicating and thus defeating the race. the unshare() might need some checks so that it skips on older kernels or ones with pidns support disabled. but i don't have any such system anymore ;). if all goes well, it should fail fairly quickly:

```
$ python3.3 ./test.py
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib64/python3.3/multiprocessing/__init__.py", line 187, in Event
    return Event()
  File "/usr/lib64/python3.3/multiprocessing/synchronize.py", line 293, in __init__
    self._cond = Condition(Lock())
  File "/usr/lib64/python3.3/multiprocessing/synchronize.py", line 174, in __init__
    self._wait_semaphore = Semaphore(0)
  File "/usr/lib64/python3.3/multiprocessing/synchronize.py", line 84, in __init__
    SemLock.__init__(self, SEMAPHORE, value, SEM_VALUE_MAX)
  File "/usr/lib64/python3.3/multiprocessing/synchronize.py", line 48, in __init__
    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
FileExistsError: [Errno 17] File exists
failed
```

and doesn't take that long to pass:

```
$ time python3.4 ./test-issue24303.py
passed!

real    0m1.715s
```

also, it looks like this bug is in python-3.3. not sure what the status of that branch is, but maybe the backport from 3.4 would be easy ?
I think it would be a better idea to partially backport the 2.7 logic that uses the random module.

Here's a patch against 2.7 using _PyOS_URandom(): it should apply as-is to 3.3.

Anyone opposed to me committing the patch I submitted? It solves a real problem, and is fairly straightforward (and conceptually more correct).

i don't feel strongly about either version

@neologix: I second your proposed patch -- looks like a winner to me. Apologies for not following up earlier.

New changeset d3662c088db8 by Charles-François Natali in branch '2.7': Issue #24303: Fix random EEXIST upon multiprocessing semaphores creation with

Any chance this patch was ever applied to Python 3? It looks to me like 3.6 has the old code:

it does seem like the patch was never applied to any python 3 branch :/

The retry logic is implemented at a different layer in Python 3:

that's highlighting the SemLock._make_name func which doesn't have retry logic in it. did you mean to highlight SemLock.__init__ which has a retry loop that looks similar to the C code ?

Yes
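The approach the thread converges on (a random suffix plus an exclusive-create retry loop, rather than trusting the pid to be unique) can be sketched in Python. This is an illustration of the pattern only, not CPython's actual C implementation, and the function name is hypothetical:

```python
import binascii
import errno
import os

def create_unique_file(prefix, tries=100):
    # Append random bytes to the name and retry on EEXIST, instead of
    # assuming that (pid, counter) never collides across pid namespaces.
    for _ in range(tries):
        suffix = binascii.hexlify(os.urandom(4)).decode("ascii")
        path = "%s-%s" % (prefix, suffix)
        try:
            # O_EXCL makes creation fail if the name is already taken.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except OSError as e:
            if e.errno == errno.EEXIST:
                continue  # name collision: pick new random bytes and retry
            raise
        return fd, path
    raise FileExistsError("could not create a unique file for %r" % prefix)
```

Two processes calling this concurrently can race, but the loser of the race simply retries with fresh random bytes, which is exactly what the committed 2.7 fix does with _PyOS_URandom().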
https://bugs.python.org/issue24303
CC-MAIN-2021-17
refinedweb
1,132
57.67
Embedded Linux device drivers: Device drivers in user space

Before you start writing a device driver, pause for a moment to consider whether it is really necessary. There are generic device drivers for many common types of device that allow you to interact with hardware directly from user space without having to write a line of kernel code. User space code is certainly easier to write and debug. It is also not covered by the GPL, although I don't feel that is a good reason in itself to do it this way.

These drivers fall into two broad categories: those that you control through files in sysfs, including GPIO and LEDs, and serial buses that expose a generic interface through a device node, such as I2C.

GPIO

General-Purpose Input/Output (GPIO) is the simplest form of digital interface since it gives you direct access to individual hardware pins, each of which can be in one of two states: either high or low. In most cases you can configure the GPIO pin to be either an input or an output. You can even use a group of GPIO pins to create higher level interfaces such as I2C or SPI by manipulating each bit in software, a technique that is called bit banging. The main limitation is the speed and accuracy of the software loops and the number of CPU cycles you want to dedicate to them. Generally speaking, it is hard to achieve timer accuracy better than a millisecond unless you configure a real-time kernel, as we shall see in Chapter 16, Real-Time Programming.

More common use cases for GPIO are for reading push buttons and digital sensors and controlling LEDs, motors, and relays. Most SoCs have a lot of GPIO bits, which are grouped together in GPIO registers, usually 32 bits per register. On-chip GPIO bits are routed through to GPIO pins on the chip package via a multiplexer, known as a pin mux. There may be additional GPIO pins available off-chip in the power management chip, and in dedicated GPIO extenders, connected through I2C or SPI buses.
All this diversity is handled by a kernel subsystem known as gpiolib, which is not actually a library but the infrastructure GPIO drivers use to expose I/O in a consistent way. There are details about the implementation of gpiolib in the kernel source in Documentation/gpio, and the code for the drivers themselves is in drivers/gpio.

Applications can interact with gpiolib through files in the /sys/class/gpio directory. Here is an example of what you will see in there on a typical embedded board (a BeagleBone Black):

```
# ls /sys/class/gpio
export gpiochip0 gpiochip32 gpiochip64 gpiochip96 unexport
```

The directories named gpiochip0 through to gpiochip96 represent four GPIO registers, each with 32 GPIO bits. If you look in one of the gpiochip directories, you will see the following:

```
# ls /sys/class/gpio/gpiochip96
base label ngpio power subsystem uevent
```

The file named base contains the number of the first GPIO pin in the register and ngpio contains the number of bits in the register. In this case, gpiochip96/base is 96 and gpiochip96/ngpio is 32, which tells you that it contains GPIO bits 96 to 127. It is possible for there to be a gap between the last GPIO in one register and the first GPIO in the next.

To control a GPIO bit from user space, you first have to export it from kernel space, which you do by writing the GPIO number to /sys/class/gpio/export. This example shows the process for GPIO 53, which is wired to user LED 0 on the BeagleBone Black:

```
# echo 53 > /sys/class/gpio/export
# ls /sys/class/gpio
export gpio53 gpiochip0 gpiochip32 gpiochip64 gpiochip96 unexport
```

Now, there is a new directory, gpio53, which contains the files you need to control the pin. The directory gpio53 contains these files:

```
# ls /sys/class/gpio/gpio53
active_low direction power uevent
device edge subsystem value
```

The pin begins as an input. To change it to an output, write out to the direction file. The file value contains the current state of the pin, which is 0 for low and 1 for high.
If it is an output, you can change the state by writing 0 or 1 to value. Sometimes, the meaning of low and high is reversed in hardware (hardware engineers enjoy doing that sort of thing), so writing 1 to active_low inverts the meaning of value such that a low voltage is reported as 1 and a high voltage as 0. You can remove a GPIO from user space control by writing the GPIO number to /sys/class/gpio/unexport.

Handling interrupts from GPIO

In many cases, a GPIO input can be configured to generate an interrupt when it changes state, which allows you to wait for the interrupt rather than polling in an inefficient software loop. If the GPIO bit can generate interrupts, the file called edge exists. Initially, it has the value called none, meaning that it does not generate interrupts. To enable interrupts, you can set it to one of these values:

- rising: Interrupt on rising edge
- falling: Interrupt on falling edge
- both: Interrupt on both rising and falling edges
- none: No interrupts (default)

You can wait for an interrupt using the poll() function with POLLPRI as the event.
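The export/direction/value sequence is easy to wrap in a small helper. Here is an illustrative Python sketch; the sysfs root is parameterized so the logic can be exercised against a fake directory tree, and GPIO numbers are board-specific assumptions:

```python
import os

class SysfsGPIO:
    """Minimal wrapper around the /sys/class/gpio interface."""

    def __init__(self, number, root="/sys/class/gpio"):
        self.number = number
        self.root = root
        self.pin_dir = os.path.join(root, "gpio%d" % number)

    def export(self):
        # Ask the kernel to create the gpioNN directory, if not already done.
        if not os.path.isdir(self.pin_dir):
            with open(os.path.join(self.root, "export"), "w") as f:
                f.write(str(self.number))

    def set_direction(self, direction):
        # direction is "in" or "out"
        with open(os.path.join(self.pin_dir, "direction"), "w") as f:
            f.write(direction)

    def write(self, value):
        with open(os.path.join(self.pin_dir, "value"), "w") as f:
            f.write("1" if value else "0")

    def read(self):
        with open(os.path.join(self.pin_dir, "value")) as f:
            return int(f.read().strip())
```

On a real board, lighting user LED 0 on a BeagleBone Black would look like `g = SysfsGPIO(53); g.export(); g.set_direction("out"); g.write(True)`.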
If you want to wait for a falling edge on GPIO 48, you first enable the interrupts:

```
# echo 48 > /sys/class/gpio/export
# echo falling > /sys/class/gpio/gpio48/edge
```

Then, you use poll(2) to wait for the change, as shown in this code example, which you can see in the book code archive in MELP/chapter_09/gpio-int/gpio-int.c:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <poll.h>

int main(int argc, char *argv[])
{
    int f;
    struct pollfd poll_fds[1];
    int ret;
    char value[4];
    int n;

    f = open("/sys/class/gpio/gpio48/value", O_RDONLY);
    if (f == -1) {
        perror("Can't open gpio48");
        return 1;
    }
    n = read(f, &value, sizeof(value));
    if (n > 0) {
        printf("Initial value=%c\n", value[0]);
        lseek(f, 0, SEEK_SET);
    }
    poll_fds[0].fd = f;
    poll_fds[0].events = POLLPRI | POLLERR;
    while (1) {
        printf("Waiting\n");
        ret = poll(poll_fds, 1, -1);
        if (ret > 0) {
            n = read(f, &value, sizeof(value));
            printf("Button pressed: value=%c\n", value[0]);
            lseek(f, 0, SEEK_SET);
        }
    }
    return 0;
}
```

LEDs

LEDs are often controlled through a GPIO pin, but there is another kernel subsystem that offers more specialized control specific to the purpose. The leds kernel subsystem adds the ability to set brightness, should the LED have that ability, and it can handle LEDs connected in other ways than a simple GPIO pin. It can be configured to trigger the LED on an event such as block device access or just a heartbeat to show that the device is working. You will have to configure your kernel with the option CONFIG_LEDS_CLASS, and with the LED trigger actions that are appropriate to you. There is more information in Documentation/leds/, and the drivers are in drivers/leds/.

As with GPIOs, LEDs are controlled through an interface in sysfs in the directory /sys/class/leds.
In the case of the BeagleBone Black, the names of the LEDs are encoded in the device tree in the form devicename:colour:function, as shown here:

```
# ls /sys/class/leds
beaglebone:green:heartbeat beaglebone:green:usr2
beaglebone:green:mmc0 beaglebone:green:usr3
```

Now, we can look at the attributes of one of the LEDs, noting that the shell requires that the colon characters, ':', in the path name have to be preceded by a backslash escape character, '\':

```
# cd /sys/class/leds/beaglebone\:green\:usr2
# ls
brightness max_brightness subsystem uevent
device power trigger
```

The brightness file controls the brightness of the LED and can be a number between 0 (off) and max_brightness (fully on). If the LED doesn't support intermediate brightness, any non-zero value turns it on. The file called trigger lists the events that trigger the LED to turn on. The list of triggers is implementation dependent. Here is an example:

```
# cat trigger
none mmc0 mmc1 timer oneshot heartbeat backlight gpio [cpu0] default-on
```

The trigger currently selected is shown in square brackets. You can change it by writing one of the other triggers to the file. If you want to control the LED entirely through brightness, select none. If you set the trigger to timer, two extra files appear that allow you to set the on and off times in milliseconds:

```
# echo timer > trigger
# ls
brightness delay_on max_brightness subsystem uevent
delay_off device power trigger
# cat delay_on
500
# cat /sys/class/leds/beaglebone:green:heartbeat/delay_off
500
```

If the LED has on-chip timer hardware, the blinking takes place without interrupting the CPU.

I2C

I2C is a simple low speed 2-wire bus that is common on embedded boards, typically used to access peripherals that are not on the SoC, such as display controllers, camera sensors, GPIO extenders, and so on. There is a related standard known as the system management bus (SMBus) that is found on PCs, which is used to access temperature and voltage sensors. SMBus is a subset of I2C.
I2C is a master-slave protocol, with the master being one or more host controllers on the SoC. Slaves have a 7-bit address assigned by the manufacturer (read the data sheet), allowing up to 128 nodes per bus, but 16 are reserved, so only 112 nodes are allowed in practice. The master may initiate a read or write transaction with one of the slaves. Frequently, the first byte is used to specify a register on the slave, and the remaining bytes are the data read from or written to that register.

There is one device node for each host controller; for example, this SoC has four:

```
# ls -l /dev/i2c*
crw-rw---- 1 root i2c 89, 0 Jan 1 00:18 /dev/i2c-0
crw-rw---- 1 root i2c 89, 1 Jan 1 00:18 /dev/i2c-1
crw-rw---- 1 root i2c 89, 2 Jan 1 00:18 /dev/i2c-2
crw-rw---- 1 root i2c 89, 3 Jan 1 00:18 /dev/i2c-3
```

The device interface provides a series of ioctl commands that query the host controller and send the read and write commands to I2C slaves. There is a package named i2c-tools, which uses this interface to provide basic command-line tools to interact with I2C devices. The tools are as follows:

- i2cdetect: This lists the I2C adapters and probes the bus
- i2cdump: This dumps data from all the registers of an I2C peripheral
- i2cget: This reads data from an I2C slave
- i2cset: This writes data to an I2C slave

The i2c-tools package is available in Buildroot and the Yocto Project, as well as most mainstream distributions. So long as you know the address and protocol of the slave, writing a user space program to talk to the device is straightforward.
The example that follows shows how to read the first four bytes from the AT24C512B EEPROM that is mounted on the BeagleBone Black on I2C bus 0, slave address 0x50 (the code is in MELP/chapter_09/i2c-example):

```c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

#define I2C_ADDRESS 0x50

int main(void)
{
    int f;
    int n;
    char buf[10];

    f = open("/dev/i2c-0", O_RDWR);
    /* Set the address of the i2c slave device */
    ioctl(f, I2C_SLAVE, I2C_ADDRESS);
    /* Set the 16-bit address to read from to 0 */
    buf[0] = 0; /* address byte 1 */
    buf[1] = 0; /* address byte 2 */
    n = write(f, buf, 2);
    /* Now read 4 bytes from that address */
    n = read(f, buf, 4);
    printf("0x%x 0x%x 0x%x 0x%x\n", buf[0], buf[1], buf[2], buf[3]);
    close(f);
    return 0;
}
```

Serial Peripheral Interface (SPI)

The SPI bus is similar to I2C, but is a lot faster, up to tens of MHz. The interface uses four wires with separate send and receive lines, which allow it to operate in full duplex. Each chip on the bus is selected with a dedicated chip select line. It is commonly used to connect to touchscreen sensors, display controllers, and serial NOR flash devices. As with I2C, it is a master-slave protocol, with most SoCs implementing one or more master host controllers. There is a generic SPI device driver, which you can enable through the kernel configuration CONFIG_SPI_SPIDEV. It creates a device node for each SPI controller, which allows you to access SPI chips from user space. The device nodes are named spidev[bus].[chip select]:

```
# ls -l /dev/spi*
crw-rw---- 1 root root 153, 0 Jan 1 00:29 /dev/spidev1.0
```

For examples of using the spidev interface, refer to the example code in Documentation/spi. The next article in this series will discuss the issues related to writing a kernel device driver.

Reprinted with permission from Packt Publishing. Copyright © 2017 Packt Publishing. You can see some of his work on the Inner Penguin blog.
https://www.embedded.com/embedded-linux-device-drivers-device-drivers-in-user-space/
CC-MAIN-2020-40
refinedweb
2,198
56.08
regression test scripting framework for embedded systems developers

Project description

Intro

Using MONK you can write tests like you would write unit tests, except that they are able to interact with your embedded system. Let's look at an example. In the following example we have an embedded system with a serial terminal and a network interface. We want to write a test which checks whether the network interface receives correct information via DHCP.

The test case written with nosetests:

```python
import nose.tools as nt

import monk_tf.conn as mc
import monk_tf.dev as md

def test_dhcp():
    """ check whether dhcp is implemented correctly """
    # setup
    device = md.Device(mc.SerialConn('/dev/ttyUSB1', 'root', 'sosecure'))
    # exercise
    device.cmd('dhcpc -i eth0')
    # verify
    ifconfig_out = device.cmd('ifconfig eth0')
    nt.ok_('192.168.2.100' in ifconfig_out)
```

Even for non-Python programmers it should not be hard to guess that this test will connect to a serial interface on /dev/ttyUSB1, send the shell command dhcpc to get a new IP address for the eth0 interface, and in the end check whether the device received the IP address the tester expects. No need to worry about connection handling, login and session handling. For more information see the API Docs.

Release 0.1.10/0.1.11 (2014-05-05)

Enables devices to use current connections to find their IP addresses. Example use case: you have a serial connection to your device that you know how to access. The device itself uses DHCP to get an IP address and you want to send HTTP requests to it. Now you can use MONK to find its IP address via SerialConnection and then send your HTTP requests.
Release 0.1.9 (2014-04-14)

- add option to set a debug fixture file which overwrites differences between test environment and development environment while developing

Release 0.1.7/0.1.8 (2014-03-27)

- workaround for slower password prompt return times
- there was a problem in the publishing process which led to changes not being added to the 0.1.7 release

Release 0.1.6 (2014-03-06)

- again bugs got fixed
- most important topic was stabilizing the connect->login->cmd process
- error handling improved with more ifs and more user-friendly exceptions
- it is now possible to completely move from Disconnected to Authenticated even when the target device is just booting

Release 0.1.5 (2014-02-25)

- fixed many bugs
- most notably, 0.1.4 was actually not able to be installed from PyPI without a workaround

Release 0.1.4 (2014-01-24)

- fixed some urgent bugs
- renamed harness to fixture
- updated docs

Release 0.1.3

- complete reimplementation finished
- documentation not up to date yet
- Features are:
  * create independent connections with the connection layer
  * example implementation with SerialConnection
  * create complete device abstraction with the dev layer
  * basic device class in layer
  * separate test cases and connection data for reuse with the harness layer
  * example parser for extended INI implemented in harness layer

Release 0.1.2

- added GPLv3+ (source) and CC-BY-SA 3.0 (docs) licenses
- updated coding standards

Release 0.1.1

- rewrote documentation
- style guide
- configured for pip, setuptools, virtualenv
- restarted unit test suite with nosetests
- added development and test requirements

Release 0.1

The initial release.
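As a small extension of the DHCP example in the intro, the substring assertion could be replaced by actually parsing the address out of the ifconfig dump. The helper below is hypothetical (it is not part of MONK's API); only the regular expression does the work, so it can be exercised without any hardware:

```python
import re

def parse_inet_addr(ifconfig_out):
    # Extract the IPv4 address from BusyBox-style `ifconfig eth0` output,
    # e.g. "inet addr:192.168.2.100  Bcast:..." -> "192.168.2.100".
    m = re.search(r"inet addr:(\d{1,3}(?:\.\d{1,3}){3})", ifconfig_out)
    return m.group(1) if m else None
```

In the test, `nt.eq_(parse_inet_addr(device.cmd('ifconfig eth0')), '192.168.2.100')` would then replace the `in` check, giving a clearer failure message when the device got no address at all.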
https://pypi.org/project/monk_tf/0.2.22/
CC-MAIN-2018-47
refinedweb
570
55.54
Strange CS Extension console error

rgartlan, Sep 6, 2012 6:01 AM

Hi, I have a CS Extension for InDesign CS5.5 (Extension Builder 1.5), and now (after installing CS6 but later uninstalling it -- that's another story) I'm seeing a lot of messages like this in my Eclipse console window:

[INFO] CSAWLib.CSLogger TypeConUtils.getAnyPossibleClassDefinitionFor() Failed to find any class definition for com.adobe.indesign::Array

Sometimes it's for the Array type, sometimes for File. There's a LOT of these messages showing up, and I would like to find what happened and snuff this out. Any ideas? This seems directly related to my aborted attempt to get onto CS6, but I can't find anywhere in my own code that I could be calling into the named function.

Cheers,
- Rich

1. Re: Strange CS Extension console error
MSSDedalus, Sep 6, 2012 6:06 AM (in response to rgartlan)

Not sure if this can help, but I get erratic errors in Flash Builder with similar messages (lack of definition for common classes). I have to use Project - Clean to solve this issue.

2. Re: Strange CS Extension console error
rgartlan, Sep 6, 2012 7:34 AM (in response to MSSDedalus)

Thanks, but this didn't help. I didn't rebuild the several libraries my project includes, but then again I hadn't rebuilt them earlier, either. Might try that when I get a chance -- but at this point I'm still looking for ideas.

3. Re: Strange CS Extension console error
Harbs., Sep 6, 2012 9:20 AM (in response to rgartlan)

For some reason, your compiler seems to think that Array and File are InDesign objects. Is it possible you have a bad import? Which swcs are you using for your project?

4. Re: Strange CS Extension console error
rgartlan, Sep 10, 2012 8:04 AM (in response to Harbs.)

Not sure, but I do see that the messages seem to occur when my plug-in calls InDesign.app.doScript().

5. Re: Strange CS Extension console error
Harbs., Sep 10, 2012 11:39 AM (in response to rgartlan)

Ah! Let me guess: You're trying to execute ActionScript code in a doScript. Right? You can only use ExtendScript, AppleScript or Visual Basic in a doScript.

6. Re: Strange CS Extension console error
rgartlan, Sep 10, 2012 12:03 PM (in response to Harbs.)

No, I'm telling it to execute some ExtendScript. It's running fine, but throwing these log messages now as it does so.

7. Re: Strange CS Extension console error
Harbs., Sep 10, 2012 1:00 PM (in response to rgartlan)

How are you creating the string for the doScript?

8. Re: Strange CS Extension console error
rgartlan, Sep 10, 2012 2:12 PM (in response to Harbs.)

Often with something like this: var theScript:String = "//blah blah blah";

9. Re: Strange CS Extension console error
Harbs., Sep 10, 2012 2:15 PM (in response to rgartlan)

Hmm. Without seeing more code, I have no more ideas...

10. Re: Strange CS Extension console error
Shane Smit, Sep 17, 2012 4:27 PM (in response to Harbs.)

This is happening to me too. I'm a CS Extension Builder n00b. I just downloaded 2.0 today, and created an extension targeting Flash Pro CS6, using Flash Builder 4.6. I get lots of these warnings:

[INFO] CSAWLib.CSLogger TypeConUtils.getAnyPossibleClassDefinitionFor() Failed to find any class definition for com.adobe.flashpro::Array

But eventually, I get a similar error that prevents any further progress:

[ERROR] CSAWLib.CSLogger CSHostObject.hostGet() failed on elements - with msg: Failed to find any class definition for com.adobe.flashpro::SymbolInstance

I was just exploring down into a MovieClip timeline. I haven't changed any project settings since it was set up. Here's the code:

```actionscript
package
{
    import com.adobe.csawlib.flashpro.FlashPro;
    import com.adobe.flashpro.*;

    public class ExportFlashPro
    {
        public static function run():void
        {
            var app:Flash = FlashPro.app;
            var doc:Document = app.getDocumentDOM();
            if( doc == null )
            {
                app.outputPanel.trace( "No document opened." );
                return;
            }

            var timeline:Timeline = doc.getTimeline();
            for( var f:int=0; f < timeline.frameCount; ++f )
            {
                for( var l:int=timeline.layerCount-1; l>=0; --l ) // Loop backwards, back to front.
                {
                    var layer:Layer = timeline.layers[ l ] as Layer;
                    if( layer.layerType != "normal" )
                        continue;
                    if( layer.frameCount < f )
                        continue; // This layer doesn't go that far.

                    var frame:Frame = layer.frames[ f ] as Frame;
                    if( frame.elements )
                    {
                        var elementCount:int = frame.elements.length;
                        for( var e:int=elementCount-1; e>=0; --e ) // Loop backwards, back to front.
                        {
                            var element:Element = frame.elements[ e ];
                        }
                    }
                }
            }
        }
    }
}
```

11. Re: Strange CS Extension console error
Joe Tam, Sep 18, 2012 7:18 AM (in response to Shane Smit)

Hi everyone, this is a known issue in the CSAW libraries. The warnings regarding the missing definition of Array can be safely ignored. For other cases, such as trying to iterate through the elements in a frame in Flash, you can define an unused variable of that type to provide a definition to get around this problem. For example, in Shane's case, you can do this:

var unused:SymbolInstance = null;

This would trigger the relevant CSAW library to find the class definition of SymbolInstance, therefore any subsequent methods that use it will work fine as expected. Hope that helps and sorry for any inconvenience caused.

Joe

12. Re: Strange CS Extension console error
rgartlan, Sep 18, 2012 7:34 AM (in response to Joe Tam)

Thanks Joe. Is this something new? My impression is that this began after I'd installed CS6 (which I have since uninstalled). If this is the case, how do I get (or get back) to a configuration that avoids these messages? These come in dozens when I run a debug session and really clog up my console window, not to mention the effect this has on performance.

- Rich

13. Re: Strange CS Extension console error
Shane Smit, Sep 18, 2012 8:53 AM (in response to Joe Tam)

Thanks Joe. Adding the unused SymbolInstance reference got it working.

14. Re: Strange CS Extension console error
rgartlan, Oct 19, 2012 10:54 AM (in response to Joe Tam)

Joe, do you have any thoughts on why this seems to have started happening when I installed CS6? I uninstalled it and the problem persists, so maybe it's my imagination. But something made this problem start out of the blue, and I *really* need to find a way to shut it off. Ideas?
https://forums.adobe.com/message/4688082
“Set” is a card game where a group of players try to identify a Set of cards from those placed face-up on a table. This project uses Semantic Versioning for managing backwards-compatibility.

Each Card has an image on it with 4 orthogonal attributes:

Three Cards are a part of a Set if, for each Property, the values are all the same or all different. For example:

Your task is to model the Game in code, and implement the following methods:

For this last method, there will be multiple correct solutions, but any valid list of Sets is fine.

“Three cards are a part of a set if, for each property, the values are all the same or all different.” This is phrased ambiguously, and the examples given lead me to believe that the following is a more specific description of the rules. (Whereas “Combination” refers to the mathematical concept.)

    # Install from PyPI
    pip install skyzyx-set-game-demo

    # Install from local code
    pip install -e .

And either include it in your scripts:

    from set_game_demo import SetGame

…or run it from the command line.

    # Application help
    set-game-demo -h

From the Python REPL or a Python script…

    from __future__ import print_function
    from set_game_demo import SetGame

    # Initialize the game.
    game = SetGame()

    # Chatty, interactive version of the game.
    game.play()

    # Quiet version of the game. Good for code.
    discovered, sets = game.play_quiet()
    print("Sets discovered: {}".format(discovered))
    for set in sets:
        game.display_cards(set)

From the Terminal…

    # Chatty, interactive version of the game.
    set-game-demo

    # Quiet version of the game.
    set-game-demo --quiet

    pip install virtualenv
    virtualenv .vendor
    source .vendor/bin/activate
    pip install -r requirements.txt

    make lint

We use tox to handle local testing across multiple versions of Python. We install multiple versions of Python at a time with pyenv.

Testing occurs against the following versions:

To begin…

    pyenv install 3.6.0b1 && \
    pyenv install 3.5.2 && \
    pyenv install 3.4.5 && \
    pyenv install 3.3.6 && \
    pyenv install 2.7.12 && \
    pyenv install pypy-5.3.1 && \
    pyenv install pypy3-2.4.0 && \
    pyenv rehash && \
    eval "$(pyenv init -)" && \
    pyenv global system 3.6.0b1 3.5.2 3.4.5 3.3.6 2.7.12 pypy-5.3.1 pypy3-2.4.0

To verify that the installation and configuration were successful, you can run pyenv versions. You should see a * character in front of every version that we just installed.

    $ pyenv versions
    * system (set by ~/.pyenv/version)
    * 2.7.12 (set by ~/.pyenv/version)
    * 3.3.6 (set by ~/.pyenv/version)
    * 3.4.5 (set by ~/.pyenv/version)
    * 3.5.2 (set by ~/.pyenv/version)
    * 3.6.0b1 (set by ~/.pyenv/version)
    * pypy-5.3.1 (set by ~/.pyenv/version)
    * pypy3-2.4.0 (set by ~/.pyenv/version)

    make test
    tox

    make docs
    open docs/set_game_demo/index.html

    make pushdocs

Docs can be viewed at.

Make sure that the CHANGELOG.md is human-friendly. See if you don’t know how.

Running make by itself will show you a list of available sub-commands.

    $ make
    all buildpip clean docs lint pushdocs pushpip
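The all-same-or-all-different rule described above is easy to check mechanically. Here is a minimal sketch that models a card as a tuple of four attribute values, each 0, 1 or 2; this representation is an assumption for illustration, not the package's actual API:

```python
from itertools import combinations

def is_set(cards):
    """Three cards form a Set if, for each of the 4 attributes,
    the values are all the same (1 distinct) or all different (3 distinct)."""
    return all(len({card[i] for card in cards}) in (1, 3) for i in range(4))

def find_sets(cards):
    """Return every 3-card combination that forms a valid Set."""
    return [trio for trio in combinations(cards, 3) if is_set(trio)]

a = (0, 0, 0, 0)
b = (1, 1, 1, 1)
c = (2, 2, 2, 2)
d = (0, 1, 1, 1)
print(is_set((a, b, c)))  # every attribute is all-different -> True
print(is_set((a, b, d)))  # first attribute has values {0, 1} only -> False
```

Any valid list of Sets is acceptable per the task statement, so `find_sets` simply enumerates all qualifying trios.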
https://pypi.org/project/skyzyx-set-game-demo/
Step one, you need to get chromedriver. What is chromedriver? It is a separate binary that you must run to get Selenium and Chrome working. See for a tiny explanation of what chromedriver does:

To download chromedriver (which you must have to use Chrome with Selenium) go to this link:

Mainly note the version of Chrome you are using. I am using Windows 10 and this Chrome: Version 75.0.3770.142 (Official Build) (64-bit). So I will pick this version of chromedriver:

You need to save the chromedriver.exe to a directory that you will remember, or to your working directory. You need to know where you saved chromedriver.exe because you will use the location in the Python script that you are about to write.

    from selenium import webdriver  ## note 1

    driver = None
    try:
        cpath = "e:\\projects\\headless\\chromedriver.exe"  ## note 2
        driver = webdriver.Chrome(cpath)  ## note 3
        driver.get("")  ## note 4
        import time  ## note 5
        time.sleep(3)
    finally:  ## note 6
        if driver is not None:
            driver.quit()

Note 1 - this is where you are loading the Python webdriver binding. This is a fancy way of saying we are telling Python about Selenium. If you do not have this line, none of your Selenium scripts will run.

Note 2 - this is the path of where I put chromedriver.exe; your directory name will be different. The name does not matter either, just pick somewhere on your disk.

Note 3 - this is where Chrome will start up. Chrome and chromedriver.exe are both started on this line. If you looked at your process list at the instant that line executes, you will see a new Chrome instance start along with a chromedriver.exe. If you look closely, chromedriver.exe starts first and it starts Chrome.exe.

Note 4 - this line navigates to google. Not exciting, but at this point you will see your Selenium-driven Chrome navigate to a web page. woooo!!!!

Note 5 - at this point I put in a sleep so you can actually see what is happening. In general sleeps are bad when you are writing scripts. There are times when you are debugging when time.sleep is useful. This is one of those cases.

Note 6 - this is shutting down chromedriver.exe and Chrome. You need this for cleanup. If you did not run that line, Chrome.exe would continue to run until you stopped it manually.

And that is it. Your first Selenium script with Python.
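Because chromedriver releases are matched to Chrome by major version, picking the right download matters. A tiny helper (purely illustrative, not part of the tutorial) can pull that major number out of the About-Chrome string:

```python
import re

def chrome_major_version(about_string):
    """Extract the major version from a Chrome 'About' string, e.g.
    'Version 75.0.3770.142 (Official Build) (64-bit)' -> 75.
    Chromedriver downloads are keyed off this major number."""
    match = re.search(r"(\d+)\.\d+\.\d+\.\d+", about_string)
    if match is None:
        raise ValueError("no Chrome version found in: " + about_string)
    return int(match.group(1))

print(chrome_major_version("Version 75.0.3770.142 (Official Build) (64-bit)"))  # 75
```

This is just string parsing; it does not talk to Chrome or chromedriver.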
https://dev.to/tonetheman/how-to-get-started-with-selenium-and-python-7p
Excel Undo/Redo

We are creating an Excel Add-in using VSTO. We plan to support Excel 2007, 2010 and 2013, but are willing to sacrifice uniformity if newer versions have important features not available in older versions. We would like to implement Undo/Redo, but reading the posts in this forum this seems impossible. Is that still true? Has the situation changed for Excel 2013? As far as I can tell, the Undo button becomes unavailable as soon as one of my (C#) methods is executed. This should mean that there is no way to tie into Excel's Undo stack. Is that true? It seems we have to resort to making our own Undo button so users can at least undo the last steps made by our Add-in, but it seems impossible to make something work with both Excel's own Undo stack (for its own changes) and our Undo stack (for handling whatever state changes we need to keep track of). Any pointers on how to deal with this will be most welcome! Thanks, Lars

Visit John Walkenbach's site:

Hi Lars_DK, would you please clarify your goal in more detail? I will provide a snippet for you; please tell us your additional aim for further research:

    private static Office.CommandBars getCommandBars()
    {
        return (Office.CommandBars)Globals.ThisWorkbook.Application.GetType().InvokeMember("CommandBars",
            System.Reflection.BindingFlags.GetProperty, null, Globals.ThisWorkbook.Application, null,
            System.Globalization.CultureInfo.InvariantCulture);
    }

    private static string getLastUndo()
    {
        string result = string.Empty;
        Office.CommandBars oCommandBars = getCommandBars();
        Office.CommandBar oCommandBar = oCommandBars["Standard"];
        Office.CommandBarControl oCommandBarControl = (Office.CommandBarControl)oCommandBar.Controls[14];
        MessageBox.Show(oCommandBarControl.Caption);
        MessageBox.Show(oCommandBarControl.accChildCount.ToString());
        try
        {
            if (oCommandBarControl is Office.CommandBarComboBox)
            {
                Office.CommandBarComboBox ocbcb = (Office.CommandBarComboBox)oCommandBarControl;
                for (int i = 1; i < oCommandBarControl.accChildCount; i++)
                {
                    MessageBox.Show(ocbcb.get_List(i));
                }
            }
            else
            {
                MessageBox.Show("No");
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
        return result;
    }

Have a good day, Tom
Tom Xu [MSFT] MSDN Community Support | Feedback to us

Hi Tom, our Add-in adds a tab to the Ribbon. About half of the buttons on our tab retrieve or calculate data and insert it in the current Excel sheet. When the data is inserted, the Undo button is disabled by Excel (and the accChildCount in your code example goes to 0). I really can't see why the Undo button would be disabled by my inserting data into Excel. I wish Excel would just behave as if I had typed in all the data using the keyboard. Alternatively, that Excel would give access to the Undo handling from our Add-in (C#) and that the Undo stack would not be reset by Excel. I.e.
I'm perfectly happy about having to store the data that I'm overwriting in the Excel sheet and having to set the data back to what it was upon receiving some event, but since Excel seems to remove everything on the Undo list, that doesn't seem possible. Does anyone know where the current Undo behaviour is documented when it comes to VSTO Add-ins? Thanks, Lars

Hi Lars, I will involve some experts in your thread to see whether they can help you. There might be some time delay; thanks for your patience. Have a good day, Tom
Tom Xu [MSFT] MSDN Community Support | Feedback to us

Hi Sreerenj, isn't the expectation of an add-in to augment the existing functionality of an application? When we develop some kind of a productivity add-in, the last thing we would expect is for it to wipe off the undo stack on every interop call we make, undo being such an indispensable functionality. Isn't there any way at all to work around this, at least to preserve the undo/redo stacks between interop calls? I seriously find it difficult to believe that this is not supported by design. I'm working on an Excel add-in and have so far spent more than two weeks experimenting with adding the required functionality. I noticed that the undo gets cleared when my extended ribbon tab is brought into play, but thought that there was some way to support undo/redo through interop. This now sounds like a show-stopper to me, as it will be seriously hard to push this add-in to users given the loss of undo. Thanks, Ishan.

Maybe instead of modifying object properties directly in VBA or VSTO, couldn't we emulate user input in the worksheet and send events to the Excel UI? Basically, changing the value of a cell should be equivalent to selecting it (Activate, Select), then sending a Click event, then a SelectAll event, then pasting a string. But the really bad thing in Excel is the fact that it zaps the Undo stack. This should NEVER happen when we set properties.
Excel should instead check if the cell is editable or locked. Ideally, we should have a way to create an undoable reference to the original object, to which we can then set properties as if it were the original. Excel would then record the previous value of the property and, if the new value is different, insert the original object reference, the property name, and its current value into its Undo stack, then apply the new value to that property. Something like:

* Application.StartUndoable "description", True 'Flag is used to indicate if we force edits on locked/uneditable/protected cells
* Let ExcelObject.SimpleProperty = newValue or Set ExcelObject.ObjectProperty = newObject (which can be repeated)
* Application.SuspendUndoable
* Application.ResumeUndoable
* Application.EndUndoable (this finalizes all modifications, which can then be undone in one step by pressing CTRL+Z or with the Edit > Undo menu).

Some object methods are utilities that internally just set one or several properties. The undo buffer need not record these methods but rather the result of their actions (e.g. sorting tables, or animating a chart). Some active components (like videos) have internal status that does not change their content (e.g. Play) but just the initial state (start time). But moving a scrollbar should be like moving gauges: they modify some values, but not all of them need to be recorded. That's why we should be able to suspend the recording in undo buffers.

The Excel objects supporting this should be the Workbook properties, the Worksheets collection, the Charts collection, their contents (ranges, cells), and their state (the active cell, view states like divisions of panes, and the protection state, including undoing password changes and restoring the original password or encrypted state, so that the VBA or VSTO code can change them by providing the appropriate credentials to unlock things). Most basic Add-ins (like those with UserForms) should be helpers to create/modify data, but all these modifications should be as easy to undo as if they were done by manual edit (except that the VBA code should probably have an "Apply" button which will perform many actions in one group). The UserForms would also be undoable themselves in the same way, if they are controlled by another add-in modifying the state of another add-in; so all add-ins should also have their own undo buffer for their current state, from the point where they are loaded up to the point where they are terminated. But not everything needs to be recorded in the undo buffer of the worksheet or in the add-in; that's why methods should control what to record. Recording changes in the undo buffer can be compressed: only the initial and final state of the modified list of properties would need to be compared, i.e. only properties that are part of the savable document. This means that each savable document has its own undo/redo buffer. The list of these documents is in the Application object.

Another way to design it:

    Worksheet.StartUndoable "Insert some date"
    With ActiveCell
        .Value = #12/31/2000#
        .NumberFormat = "mm/dd/yyyy"
    End With

finalized by Worksheet.EndUndoable or Worksheet.CancelUndoable. In case of any abort of the VBA code, EndUndoable is called implicitly in the error handler, if it can be. If it cannot save this in the undo buffer (not enough memory, or other errors), it may display an alert like "This action cannot be undone, do you want to continue?" If we press "No", it will call CancelUndoable. This alert may appear at any time, and in that case, as long as the undoable action is running, all other changes to the document are ignored, but an event may be raised to allow aborting the VBA code early (the handler will then call CancelUndoable, or it could propose to save a backup of the document before continuing). Ideally, Excel should expose a method to perform an autosave (just like the autosaved backups performed every few minutes) and to restore the document from this autosave backup. This will keep memory usage low and will allow the execution to continue running, where the undo buffer just records the name of the backup that can be restored, replacing the current corrupted document, to undo. Excel will manage the names of these autosaved backups itself. If Excel crashes, it will be able to restart and restore them automatically, but the user will be able to choose the one to keep to restart the edits, and will clean out the old autosaved backups with the normal Excel interface.

This is a huge bug in Excel VSTO/VBA. Sreerenj, please record it as a bug on Connect and have it submitted for fixing as soon as possible. It really is a massive block to using this technology in a business in any serious way. You design a .NET project to work with Excel, then your users realise that they lose CTRL-Z when they use that technology! Guess what the user says? It is farcical that MS's response is "oh yeah, that's just how it works, this is by design", but sorry, that doesn't cut the mustard. Why is it designed like this? Can we have the technical explanation? We are programmers after all.
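Since the thread converges on the conclusion that an add-in cannot hook Excel's own undo stack, the practical fallback is the custom Undo button mentioned at the top, backed by a command pattern. The pattern itself is language-neutral; a minimal sketch (shown in Python rather than C#, with the sheet modeled as a plain dictionary, which is an obvious simplification):

```python
class SetCellCommand:
    """Records a cell's old value so a write can be reversed later."""
    def __init__(self, sheet, key, new_value):
        self.sheet, self.key, self.new_value = sheet, key, new_value
        self.old_value = sheet.get(key)  # None means the cell was empty before

    def do(self):
        self.sheet[self.key] = self.new_value

    def undo(self):
        if self.old_value is None:
            self.sheet.pop(self.key, None)  # restore "empty cell"
        else:
            self.sheet[self.key] = self.old_value

class UndoStack:
    """Add-in-local undo stack, independent of the host application's."""
    def __init__(self):
        self._done = []

    def execute(self, command):
        command.do()
        self._done.append(command)

    def undo(self):
        if self._done:
            self._done.pop().undo()

sheet = {}
stack = UndoStack()
stack.execute(SetCellCommand(sheet, "A1", 42))
stack.execute(SetCellCommand(sheet, "A1", 99))
stack.undo()
print(sheet["A1"])  # 42
```

In a real VSTO add-in the `do`/`undo` bodies would read and write `Range.Value` through interop; the stack logic is unchanged.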
https://social.msdn.microsoft.com/Forums/windowsapps/en-US/7482a0cc-33c2-45ba-84db-3e18fa3ea539/excel-undoredo?forum=exceldev
Sample problem:

- Construction: The mask is laid out in this phase, preferably only once
- Initialization: In this phase, data is retrieved from a file, database, etc., and the mask's fields are filled with the data
- Activation: Here, the user is given control over the mask

In the real world, many more aspects are usually considered: access, validation, control dependencies, etc.

Phases

In this discussion, I refer to each operational step as a phase, and the basic idea is simple: We mark class methods as phases of a chain of operations and leave invocation of those methods to a service (framework) class. Actually, this approach is not limited to lifecycle management. It can be used for all kinds of invocation control mechanisms needed in a business process. The annotation we use is simply called Phase, and we use it to denote a method as a part of the chain of operations that can be invoked. In the code below, you'll see that the annotation declaration is similar to that of an interface:

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD})
    public @interface Phase {
        int index();
        String displayName() default "";
    }

Let's walk through the code above. In the first two lines, you see the annotation marked with two other annotations. This looks a little confusing at first glance, but those two lines simply specify that the annotation Phase applies to methods only and should be retained after compilation. These annotations are added because other annotations may be used only during compilation and may target classes or members. The @interface is the syntactic description of an annotation. The code that follows (specifically, index and displayName, each of which declares a member and, at the same time, a method) is also new in terms of Java syntax. displayName is given an empty string as a default value, in case it is not provided, and can be used for monitoring purposes, say a progress bar.
index() is required; it tells the framework in which order the phases will execute by default. As I said earlier, we should separate this logic from the object, so we define an interface that must be implemented to take part in the invocation management. This interface can then be implemented by a client object. For management purposes, we define a common marker interface from which all other "phaseable" interfaces will be derived, so the framework has a unified access point to classes that are managed:

    public interface Phased {
    }

A concrete implementation of this interface might look like the code below. Here, the interface defines how a mask, or form, that contains several controls must be properly set up as described above:

    public interface PhasedMask extends Phased {
        @Phase(index=0)
        public void construct();

        @Phase(index=1)
        public void initialize();

        @Phase(index=2,displayName="Activating...")
        public void activate();
    }

You can see how an annotation is used. It is written right before the declaration of the interface method and has an introductory @ sign. Its property index is provided within parentheses. Note that, because an annotation is not a Java statement, no semicolon appears at the end. Now we need a class that brings everything together and evaluates the phases we have defined.

Phaser

The main management class is properly called Phaser. (Hey, don't we all love Star Trek?) It executes all phases and provides a simple monitoring mechanism to the user. The implementation of this class is not included with this article; however, you can look up the code after downloading the framework from Resources. A Phaser is provided with an object that implements some concrete PhasedXxxx interface and manages the invocation of the phases.
Suppose we have a class MyMask like this:

    public class MyMask implements PhasedMask {
        @Phase(index = 0)
        public void construct() {
            // Do the layout
        }

        @Phase(index = 1)
        public void initialize() {
            // Fill the mask with data
        }

        @Phase(index = 2)
        public void activate() {
            // Activate the listeners and allow the user to interact with the mask
        }

        // Business code
    }

Now we can control the invocation of those PhasedMask methods like this:

    Phaser phaser = new Phaser( phasedMask );
    phaser.invokeAll();

This results in the methods construct(), initialize(), and activate() being invoked in that order. How about controlling the phases? Let's omit the construction phase because, when we run the phases a second time, the layout is no longer necessary. This essentially means we don't want the method construct() to be called anymore. Since we've marked this method with the index 0, we can simply omit this index and tell the Phaser specifically which phases should execute:

    Phaser phaser = new Phaser( phasedMask );
    phaser.invoke( new int[] {1,2} );

This is okay, but not explicit. Who can remember what phases the indices actually stand for? Fortunately, we can be a little more verbose like so:

    Phaser phaser = new Phaser( phasedMask );
    phaser.invoke( new String[] {"initialize","activate"} );

Here, we use the method names from the interface. Also, note that we can reorder the phases if need be. So, to switch the sequence, we could write:

    Phaser phaser = new Phaser( phasedMask );
    phaser.invoke( new String[] {"activate","initialize"} );

This hardly makes sense here, but in a setting where we have more phases that are less tightly dependent on each other, that approach proves useful. Since we're using reflection here to call those methods, potentially many exceptions could be thrown. The Phaser will catch the ones it expects and wrap them in the so-called PhaseException.
So, if a method call fails (say, because it is private), the Phaser's invoke() method will throw a PhaseException that contains the originating exception. For those unfamiliar with reflection, see the sidebar "Notes on Reflection." You may also add a PhaseListener to a Phaser to observe what's happening inside and to provide the user with feedback during lengthy phase invocations:

    PhaseListener listener = new PhaseListener() {
        public void before( PhaseEvent e ) {
            // This is called before the Phaser invokes a phase
        }

        public void after( PhaseEvent e ) {
            // This is called after the Phaser has successfully invoked a phase
        }
    };

    Phaser phaser = new Phaser( phasedMask );
    phaser.addPhaseListener( listener );
    phaser.invoke( new String[] {"initialize","activate"} );

Discussion and summary

In this article, you have seen how to utilize annotations to manage the lifecycle of an arbitrary class split into separate phases. In order for classes to be ready for management by an external framework component, they must simply implement an interface derived from the parent interface called Phased, in which the methods are annotated with the annotation Phase. Management is completed with a Phaser class that controls the sequence and invokes the annotated methods of the implemented interface. It is possible to control the sequence in which the operations are invoked, and an event-handling mechanism provides the means for observing Phaser's workings. This approach also shows how annotations can be used for more than just Javadoc enhancements. They can be used not only for lifecycle management, but also for object initialization purposes in general. The implementing classes do not concern themselves with the sequence in which their methods are invoked. If you keep this in mind during design, you can be much more flexible with the use of classes. If the phases must be rearranged or omitted, these actions can occur outside the implementing classes. As with any tool, there are drawbacks.
If the interface must be changed, either new interfaces must be defined to maintain backward compatibility or, if the source code is entirely available, the implementing classes must be altered. This solution lacks parameter support and return-value support. Parameters must be fully provided before the phases are invoked. Also, for performance-hungry systems, a bottleneck might occur since reflection is heavily used. Finally, the invocation chain is not transparently available to IDEs. In other words, for [put your favorite Java IDE here], it is impossible to show the developer at compile time what method is going to be called from where.

Learn more about this topic

- Download the source code that accompanies this article
- Get the Phaser framework (Sorry, the Javadoc is in German only right now. I'll localize the Javadoc soon.)
- For more information on annotations, see Java 5.0 Tiger: A Developer's Notebook, David Flanagan and Brett McLaughlin (O'Reilly, June 2004; ISBN 0596007388)
- For more information on reflection, see The Java Programming Language, 3rd Edition, Ken Arnold, James Gosling, David Holmes (Addison-Wesley Professional, 2000; ISBN 0201704331)
- For more on annotations, read these JavaWorld articles:
  - "An Annotation-Based Persistence Framework," Allen Holub (March 2005)
  - "Taming Tiger: Decorate Your Code with Java Annotations," Tarak Modi (July 2004)
- For more articles on J2SE, browse the Java 2 Platform, Standard Edition section of JavaWorld's Topical Index
- For more articles on programming for application servers, browse the Java Application Servers section of JavaWorld's Topical Index
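The annotation-plus-reflection idea is not specific to Java. As a hypothetical analogue (not part of the article's framework), Python decorators can play the role of the Phase annotation, and a few lines of introspection play the role of the Phaser:

```python
def phase(index, display_name=""):
    """Mark a method as a phase, analogous to the Java @Phase annotation."""
    def mark(method):
        method._phase_index = index
        method._phase_name = display_name or method.__name__
        return method
    return mark

class Phaser:
    """Finds phase-marked methods by introspection and runs them in index order."""
    def __init__(self, target):
        self.target = target

    def invoke_all(self):
        methods = [m for m in vars(type(self.target)).values()
                   if callable(m) and hasattr(m, "_phase_index")]
        for method in sorted(methods, key=lambda m: m._phase_index):
            method(self.target)

class MyMask:
    def __init__(self):
        self.log = []

    @phase(index=0)
    def construct(self):
        self.log.append("construct")

    @phase(index=1)
    def initialize(self):
        self.log.append("initialize")

    @phase(index=2, display_name="Activating...")
    def activate(self):
        self.log.append("activate")

mask = MyMask()
Phaser(mask).invoke_all()
print(mask.log)  # ['construct', 'initialize', 'activate']
```

The same trade-offs the article lists apply: the call chain is invisible to tooling, and everything happens through runtime lookup.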
http://www.javaworld.com/article/2071998/core-java/annotations-to-the-rescue.html
About tapped - Rank: Member - Location: Oslo, Norway

tapped replied to KenWill's topic in For Beginners

For me this looks like a main loop doing rendering and logic (each iteration is what you call a frame). The loop you made is not deterministic, which means how long your 60 frames take depends on how long your logic takes to execute, and will differ between systems and amounts of work. But that is without knowing how you have set up your rendering API; in other words, I am not sure your frames are bound to the screen refresh rate. If VSYNC is enabled (it should be, to prevent screen tearing or jittered pixel movement), your loop could be rewritten to something like:

    stopwatch frameTime;
    setSwapBufferInterval(1); // Swap buffers at an interval of one frame.
    while(window.isOpen())
    {
        frameTime.restart();
        // LOGIC/RENDERING
        swapBuffer(); // Will block until it is time to swap buffers, based on the screen refresh rate.
        frameTime.stop();
        std::cout << "FPS: " << 1.0 / frameTime.seconds << std::endl; // You should probably throttle FPS printing, e.g. by taking the average FPS every second.
    }

Beware that there exist screens with 120 Hz (and more than that too), which may not be suitable for your physics engine or game logic (too low precision per timestep, etc.). Also, having a dedicated rendering thread is what is recommended today. There is unfortunately no standard way of doing it, but there exist per-platform solutions. On OS X and iOS you have something called CADisplayLink (CVDisplayLink on OS X). On Android you have something called Choreographer. In browsers you have requestAnimationFrame. On Windows you make your own thread and use from GDI+ (only works on Vista and upwards). So now you need to sync rendering and logic, since rendering happens in its own thread, and event handling and other things on the main thread.
This can be done by simple communication (in hardware terms) between your main thread and rendering thread, for example by using semaphores. Also, all the per-platform methods above give you the current frame duration, which is very accurate and can be used later in your time-independent code to do proper physics integration or movement. I guess this may seem a bit daunting to you; however, I have seen a lot of weird solutions to this problem, and in my experience the best way to make a main loop is by reading the docs for your target platform.

tapped replied to anders211's topic in General and Gameplay Programming

First of all, your code is not cache friendly at all. You branch your code too much; for example, why check if the force vector is empty? Another thing: you have not made a common solution for the physics update. The same update function should work for all types of shapes, not only spheres. The update function should only be a physics integration. As you may recall:

    a = F / m, v += a * t, s += v * t + 0.5 * a * t * t        (linear physics)
    angularAcc = torque * I^-1, rotation += angularAcc * t, orientation += rotation * t        (angular physics)

I see you are doing this; however, you should separate your integration code from the contact resolving, so that you can have a common solution for all bodies. So what you need to do is clean up your code and separate it, keeping in mind that your solution should not be specific to each shape, but common. Also, to paste code from Visual Studio, turn on "space for tabs" in the editor settings, and then convert your code file before you copy it.

Thanks for pointing that out, I have edited the post.

I can't really see your problem. You can create a function that returns the world client position, and use it when you want. Also, I would have made a struct/class for points/vectors, so that you could easily handle coordinates.

    // Example of how a Vector3 struct may look like
    struct Vector3
    {
        int x, y, z;

        Vector3() : x(0), y(0), z(0) {}
        Vector3(int x, int y, int z) : x(x), y(y), z(z) {}

        void set(int x, int y, int z)
        {
            this->x = x;
            this->y = y;
            this->z = z;
        }

        Vector3 operator+(const Vector3 &a) const
        {
            return Vector3(x + a.x, y + a.y, z + a.z);
        }

        /* ... add some other neat functions ... */
    };

    // Example #1
    Vector3 Object::getWorldCoord()
    {
        Vector3 result;
        if(parent)
        {
            result = parent->getWorldCoord() + m_position; // m_position is a Vector3 member of the Object class.
        }
        else
        {
            result = m_position;
        }
        return result;
    }

    // Example #2 that works out of the box for you
    void Object::getWorldCoord(int &outX, int &outY, int &outZ)
    {
        if(parent)
        {
            int parentX, parentY, parentZ;
            parent->getWorldCoord(parentX, parentY, parentZ);
            outX = parentX + x;
            outY = parentY + y;
            outZ = parentZ + z;
        }
        else
        {
            outX = x;
            outY = y;
            outZ = z;
        }
    }

tapped replied to retsgorf297's topic in General and Gameplay Programming

Did I mention Bullet? Even if I am wrong about Havok, since it has limited GPU support, it is going to have full support on next-gen consoles. Hmm, poorly expressed by me. What I mean is that AAA physics engines are moving towards GPU acceleration, which is state of the art. With the next-gen consoles, we will see CPU-based physics engines slowly being thrown away in favor of GPU-based ones. Why create a CPU-based physics engine, when we are missing a good, open-source GPU-based physics engine that is platform independent? Not that he is going to create an open-source engine, but you see what I mean. The question is what the goal is. Do you want to create a useful engine and push the market, or are you going to create an engine just for education (where parallel programming is a good start, though)?
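The integration formulas quoted above can be written as one shape-agnostic update. A minimal sketch in Python (semi-implicit Euler: velocity first, then position using the new velocity; the body is modeled as a dictionary, which is an assumption purely for illustration):

```python
def integrate(body, force, dt):
    """One shared integration step, regardless of the body's shape."""
    # a = F / m
    acceleration = force / body["mass"]
    # Update velocity first (semi-implicit Euler), then position with the new velocity.
    body["velocity"] += acceleration * dt
    body["position"] += body["velocity"] * dt

body = {"mass": 2.0, "velocity": 0.0, "position": 0.0}
integrate(body, force=4.0, dt=0.5)
print(body["velocity"], body["position"])  # 1.0 0.5  (a = 2.0)
```

The explicit 0.5*a*t*t term in the quoted formula belongs to the variant where position is updated with the old velocity; folding the velocity update in first, as here, is the more common (and more stable) form for games.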
tapped replied to cebugdev's topic in General and Gameplay Programming

Yeah, you can use OpenGL ES on desktop computers. However, both platform- and GPU-dependent approaches are needed for this to work. If you are using Linux, just download the mesa-gles drivers, or even better find the vendor-specific drivers for GLES. I suggest using the SDK from the GPU vendor on Windows, which is for AMD and for Nvidia. Look here for installation instructions for the Nvidia emulator: You may also look at for a platform-independent solution. Good luck, and remember I have not yet found a 100% stable solution for my hardware setup, so don't expect too much, and happy debugging ;)

tapped replied to Taylor Ringo's topic in General and Gameplay Programming

You need to ask yourself: what is your motivation, and what do you want to learn or produce from it? With the answers to these questions, you can find a framework that suits your needs. For instance, if you want to learn how to use a pretty high-level framework, use SFML. However, SFML can also be used for low-level OpenGL programming. If you want to be individual and reinvent the wheel, then create everything yourself using the Windows API and OpenGL/DirectX contexts to set up some nice graphics. When it comes to the decision between OpenGL and DirectX, you need to compare your experience level with your program specifications. Today, DirectX is simple to implement using Visual Studio 2012 and the Windows SDK; there is even a template for it, so you don't need to spend time on setup but can code right away. Unfortunately, it will only work on Windows.
Use OpenGL if you want to support a lot of platforms, but keep in mind that OpenGL is harder to learn, especially in the beginning. Also, if you are not experienced with cross-platform coding, you will quickly find it a nightmare from time to time, since compilers work differently. So your code may look like a mess because of all the compiler-specific stuff. However, it is a good thing to learn how to structure code for more than one platform. And remember, the best way to learn is from your mistakes. But again, what do you want? If you want to learn as much as possible, then you had better reinvent the wheel. And if you want to make a game, and you have a team of artists, then stick to UDK, Unity3D, Torque3D, C4, etc.

tapped replied to firey's topic in General and Gameplay Programming

Why not use the existing tile system as a grid? Here is a snippet that may be useful:

void DrawTilesWithinFrustum(Vec2f worldPos, int windowWidth, int windowHeight)
{
    Vec2f minIndex = worldPos / Vec2f(TILE_SIZE_X, TILE_SIZE_Y);
    Vec2f maxIndex = (worldPos + Vec2f((float)windowWidth, (float)windowHeight)) / Vec2f(TILE_SIZE_X, TILE_SIZE_Y);

    // We want to be sure that we also draw the tiles that are only
    // partially inside the frustum.
    maxIndex.x += 1;
    maxIndex.y += 1;

    int numColumns = (maxIndex.x - minIndex.x) + 1;
    int numRows = (maxIndex.y - minIndex.y) + 1;

    for (int row = 0; row < numRows; ++row)
    {
        for (int column = 0; column < numColumns; ++column)
        {
            // Columns index along x and rows along y; the row stride is the
            // map width in tiles (MAP_WIDTH here), not the tile size in pixels.
            int index = (int)(minIndex.x + column) + (int)(minIndex.y + row) * MAP_WIDTH;
            Tiles[index]->Draw();
        }
    }
}

worldPos is the position of the camera. This approach would run a lot faster than brute-forcing each tile, since it only loops through the tiles that you want to draw. Beware that this code snippet is not tested, so it may include type errors etc. Good luck.

tapped replied to Axiverse's topic in Graphics and GPU Programming

Hmm, why not use the same grid system that is used to represent Earth coordinates? In other words, use spherical coordinates.
Spherical coordinates can easily be worked out from Cartesian coordinates as long as you know the radius and position of the sphere in question. So you create the boundaries virtually by specifying the [θ, φ] extent of each grid cell, as you may be used to when working with normal grids. Then you can calculate a cell index from a coordinate, as follows:

int GetLocationIndex(const Vec2f &sphericalPoint)
{
    // extent holds the [theta, phi] angular size of one grid cell.
    Vec2f indices2D = sphericalPoint / extent;

    // The row stride must be the number of cells per row (the full theta
    // range divided by the cell extent), not the extent itself.
    unsigned int cellsPerRow = (unsigned int)(2.0f * 3.14159265f / extent.x);
    return (unsigned int)(indices2D.x) + cellsPerRow * (unsigned int)(indices2D.y);
}

The x coordinates of the vectors are theta, and the y coordinates are phi. Note, I have not tried this, so it is only an approach that may work. However, good luck.
Or have a look at what I've been doing with mine:
Polargraph website
Polargraph project code
Flickr stuff

Hi everybody, I am having trouble getting the stepper motors to start working. The green light is brightly lit on top and the motors are hooked up. The serial connection doesn't seem to be a problem, seeing as how "polargraph READY!" shows up in green on top of the controller. Wondering if anybody has any possible solutions. The steppers don't even lock when I do the things that are meant to make them wriggle. I am using the Adafruit V2 and I tried the suggested changes to the code in the configuration section to go from v1 to v2, but the code can't be verified. Power is getting to the motor ports (I tested it with a multimeter) but the steppers haven't shown any sign of life yet. Very discouraging. Any and all help is appreciated.

Also, not sure what you mean by the code not being able to be verified - you mean you can't compile / upload it? Nothing will ever work until you can get the code uploaded, and that's not a physical problem. The Arduino IDE will give you a message saying what it thinks is wrong if it fails (sometimes it's even a useful one). Work on getting that done next. Post the code you're using if that helps. The configuration.ino should look like

[code]
*/
// motor configurations for the various electrical schemes
// =================================================================
// 1. Adafruit motorshield

// Using Adafruit Motorshield V2? Comment out this one line underneath.
//#include <AFMotor.h>

// Using Adafruit Motorshield V1? Comment out the three INCLUDE lines below.
#include <Wire.h>
#include <Adafruit_MotorShield.h>
#include "utility/Adafruit_PWMServoDriver.h"
[/code]

and the top part in polargraph_server_a1.ino should look like

[code]
// Program features
// ================
//#define PIXEL_DRAWING
#define PENLIFT
#define VECTOR_LINES

// Specify what kind of motor driver you are using
// ===============================================
// REMEMBER!!! You need to comment out the matching library imports in the 'configuration.ino' tab too.
#define ADAFRUIT_MOTORSHIELD_V2
//#define ADAFRUIT_MOTORSHIELD_V1
[/code]

Hello, thank you for your help. I did mean that the code could not be compiled when I said "verified". Since your suggestion, I have tried the stepper test and that too yielded no results. From what you said, this probably means that there is a physical problem, correct? The power light does come on when I plug it in, and the soldering I did didn't seem to have any visual problems with connectivity. What action should I take next? Thanks so much for the help, I can't wait to get this awesome project up and running.

Hi Euphy, Love the look of this project. I have bought all the bits and pieces but couldn't get hold of a Motorshield v1, so I have a v2 instead. But I'm pretty confused in terms of what software I need to run it. I notice in the comments below you've mentioned writing up a version for the v2, but looking through the various links, I'm completely lost. Are you still planning on doing a write-up? Any help gratefully received! Regards George

Hi, This is a great guide. I have a Kritzler polargraph built; my question now is, can I use the Polargraph controller with my Kritzler hardware, and what firmware should I use? Could someone help me please. Kind Regards

You could have a look at and to see the more recent details about running on other hardware. sandy noble

I made it! I like to upgrade it and try new things, like painting on canvas: Thanks for the instructions.
hello, very nice work. I have a problem: when I click "click to start" the pen doesn't work. I don't know what's wrong with it; please tell me my problem. Thanks very much.

Hi Chenjian, thank you! The bad news is I don't even know what you mean by "click to start" - which step are you reading that from?

everything is ok now, really thanks, and I am sorry I didn't reply in time; I was busy with my graduation these days.

hey, nice instructable! I have a problem here, can you help me? Can I use servos instead of steppers? And I don't have a motorshield; can it work without one, or is it necessary? If so, please help me with the code. Thanks, naman

The Polargraph software is written for stepper motors and a motorshield. However, the principle of this machine is very simple, and a machine like this can be made using any kind of physical actuator. Equally so, you can construct a motor driver from discrete parts if you wanted to, but the shield is easier.

I got a stepper but it has six wires coming from it ??!

Hi Euphy, I've been slowly following this Instructable to completion, in stolen moments here and there, for a while now and have had a lot of fun (and the odd moment of frustration) doing so. I'm tantalisingly close to the end now: I have a machine; I have steppers that respond; I have a functioning gondola; I can set home and move pen to point (now that I have set the stepping to SINGLE - for some reason my steppers don't function well with INTERLEAVE); and I've managed to upload and render an image. The only trouble now is when I hit Render and then Generate commands, I can see the queue gobbling up commands, the pen moves to the start position, then... nothing. The commands continue to get gobbled up but the pen hangs motionless. I'm sure there is something simple happening that I have overlooked, but I'm stumped. Any clues? Dean

Hi Dean, aha, the clues will probably be in the debug console!
If you press ctrl-C in the controller, you'll be able to see the raw communication between the controller and the machine itself - the commands being dispatched from the controller, and then some kind of response from the machine itself. If there's something going wrong, then you should see an error, and it might shed some light on what's afoot. When you changed to SINGLE rather than interleaved, did you also halve your steps per rev? And when you are doing "move pen to point", does it definitely move to the exact right point that you've indicated? And of course you've uploaded your machine spec to your machine! Let's see :) sandy noble

Thx Sandy, I hadn't changed steps per rev so I've now halved that to 100. The pen definitely moves to the point indicated when I move pen to point. What the controller reveals is the following (this is a snippet but you'll get the picture):

Dispatching command: C05,896,544,23,226,END
command:C05,896,544,23,226,END:3190364230
incoming: C05,896,544,23,226,END not recognised.
incoming: READY
incoming: READY
regenerating preview queue.

This basically repeats for each command. So it would seem the Arduino is having problems recognising the coordinates? Any ideas why this would be? I have not had this result using move pen to point or return to home, which seem to be working just fine. Dean

Ok, yes, if you're using SINGLE, then 200 is the proper stepsPerRev. Hmm! Which firmware are you using? The "not recognised" means it doesn't recognise the command (C05) rather than the coordinates... The newest release of polargraph_server_a1 1.7 does not, by default, include the code for pixel shading (because it had to go to make room for the new motorshield library).

Ahhhhhhh! It seems I have not paid due attention to the words of the Kung Fu master and have failed to define pixel drawing! Thanks for the clues and I will try this out as soon as I have a (stolen) moment. D P.S. Your support for this instructable is amazeballs.
Thanks :) The current latest firmware is a bit of a beta version, because it has this ability to load only certain features at compile-time. If you are using the plain adafruit shield v1, then you _should_ probably be able to load pixel shading AND vector functions at once. Give it a go at least - I think I got that working. It's only for the v2 shield that you need to choose which one you need. I need to revise this instructable at some point - things have got a little more complicated since I wrote it up first!

I'm up and running! Still some calibration required (I'm not sure my motors are the best though) but it's definitely drawing things, and the output has a lovely low-fi quality. I'll post some video when I have something worth sharing. Just pre-ordered the SD version to install on a wall in the Fluxx office. Great instructable and great support, long may you prosper!

...apologies, I think the default steps per rev is 400 so I had already halved that (my steppers have a 1.8deg step).

My local hackerspace recently completed this build. I was able to help with some of the build. This was one of the most fun builds I've helped with. I feel less intimidated by motor controls in Arduino IDE and Processing now. We named our bot Pablo. Pablo will be helping out with a local arts festival called the Dogwood Arts Festival. Here's a video: Thanks to our fearless cat herder @samthegiant for sending us on this mission.

I'm having a weird issue where when I click on "set home" I see no purple dot, and only one of the motors will lock. Then when I click "Move pen to point" and select a point, the line from where the machine thinks the pen is starts way off the screen up to the left and the machine does not move. I tried deleting the default config file and starting over, but it still seems to think the pen is there.
Also, when I select "return to home" it will do the same, thinking the pen is up in the far left beyond the screen, and the purple dot is in the correct place, so I believe "home" is still in the right location. My motors are working just fine with the Accelstepper Multistepper example btw, so no issue with the electronics side of it. Like I said in my other comment (page 13), I am using a motorshield v2, so I'm checking to make sure I didn't update the program improperly, but so far I can't find anything that is wrong. I've updated all the Arduino libraries to accommodate the motorshield v2, but per your instructions I am using Processing 1.5 and the libraries that you provided on github for the processing side of things. So, any ideas? I am stumped!

You have already set your machine size, and uploaded it with "upload machine spec"? And just to be sure, you're based on the code from the latest release zip at github: Open the console with ctrl-c, then issue your "set home" command, and see if the numbers that come back are the same as the numbers you send. sn

Seems to me that the issue is probably with Accelstepper. I contacted Adafruit and they told me that Accelstepper is not compatible with the Motorshield v2, unless you use the Adafruit-edited version that you can download from their site. That version doesn't seem to work with the polargraph though.

I got this working with the new motorshield recently (well, moving at least), and didn't have to use Adafruit's own accelstepper fork.

Aha, that's worth knowing. I wonder what is different about Adafruit's fork of accelstepper. I've actually got a motorshield v2 now, so I hope I'll be able to get it put together and have a go myself sometime soon. The work you have already done will be useful, but if I get it working I'll let you know and write it up as part of this instructable, and on the site.

First, great project! This is my first Instructable; you have inspired me.
I purchased the new Adafruit Motorshield v2 and the library has changed. I was able to update your source to accommodate the new library; however, the compiled code is now too large for the Arduino Uno. It looks like the new shield library adds some additional libraries. I'm also assuming the shield's new ability to add additional shields creates a new array variable that is taking up space (just a guess).

Binary sketch size: 32,474 bytes (of a 32,256 byte maximum)

My next step would be to graduate to an Arduino Mega 2560; however, I wanted to ask you if there is anything that could be done to simply reduce the code by 218 bytes. This reminds me of code length issues with my TRS-80 / 4k memory way back when. Here is what I changed in the "configuration" file to update for the new library. I'm assuming this will work; however, it's not tested as I'm unable to load it.

// 1. Adafruit motorshield V2 - testing
#include <Wire.h>
#include <Adafruit_MotorShield.h>
#include "utility/Adafruit_PWMServoDriver.h"

// ADD THIS FOR NEW SHIELD TYPE
// Create the motor shield object with the default I2C address
Adafruit_MotorShield AFMS = Adafruit_MotorShield();
// END ADDED THIS FOR NEW SHIELD TYPE

const int stepType = INTERLEAVE;
Adafruit_StepperMotor *afMotorA = AFMS.getStepper(motorStepsPerRev, 1);
Adafruit_StepperMotor *afMotorB = AFMS.getStepper(motorStepsPerRev, 2);

void forwarda() { afMotorA->onestep(FORWARD, stepType); }
void backwarda() { afMotorA->onestep(BACKWARD, stepType); }
AccelStepper motorA(forwarda, backwarda);

void forwardb() { afMotorB->onestep(FORWARD, stepType); }
void backwardb() { afMotorB->onestep(BACKWARD, stepType); }
AccelStepper motorB(forwardb, backwardb);

void configuration_motorSetup()
{
  // no initial setup for these kinds of motor drivers
}

void configuration_setup()
{
  defaultMachineWidth = 650;
  defaultMachineHeight = 650;
  defaultMmPerRev = 95;
  defaultStepsPerRev = 400;
  defaultStepMultiplier = 1;
  currentlyRunning = true;
  delay(500);
}
// end of Adafruit motorshield definition
// =================================================================

Thanks!

You could empty the function exec_drawBetweenPoints(..) and remove desiredSpeed() to cut out the vector drawing features (drawing straight lines), saving 2k. To remove the pixel shading features, delete the entire "pixel" tab and snip a little out of the exec_executeBasicCommand(..) function, freeing a mighty 3.5k. I think the AccelStepper library has grown a little bit during the last couple of releases too - it all adds up. Good luck! sn

Many thanks!

I just did this with an UNO and MotorShield v2. I replaced all the config code like Squeakychair suggested, but I had to cut down some stuff to get my sketch down from 33,242 bytes. I cut the DC motor code out of Adafruit_MotorShield.h and Adafruit_MotorShield.cpp, but that wasn't enough. Ultimately I had to cut out the penlift code to get the sketch small enough.
At 01:09 PM 7/24/2007 -0400, Stephen Waterbury wrote: >Actually, I wasn't confused. :) I'd suggest a convention that allows >a distribution "title" (e.g., "Zope", "Twisted", etc.) and a >distribution "name" that would simply be the name of the >distribution's top-level package (e.g., "zope", "twisted", etc.), This proposal would rule out namespace packages, in addition to being incompatible with existing distribution names. Note that package != distribution -- a distribution may contain zero or more packages (even top-level), *and* a single package (top-level or otherwise) may be spread over more than one distribution. Also note that this was true even with the distutils, long before setuptools existed.
Arrays are a very important data structure encountered by programmers. An array is basically a collection of related items with a fixed number of entries. The variable declarations we have learned so far each occupy a single location in memory; an array, on the other hand, occupies a consecutive block of memory locations. The elements contained in an array must all be of the same type. In this article we shall demonstrate the ways in which arrays allow programmers to organize and manipulate data.

Declaring and Creating Arrays

Like any other object, arrays are created with the keyword new. The programmer specifies the type of the array elements and the number of elements as part of an array creation expression. Such an expression returns a reference that can be stored in an array variable. Arrays can be created as follows:

int[] myarray = new int[5];

or

int[] myarray;
myarray = new int[5];

In the declaration above, the square brackets [] indicate that myarray is a variable that will refer to an array; the array created holds 5 elements, with indices 0 through 4. When an array is created in Java, each element of the array receives a default value: zero for numeric primitive type elements, false for boolean elements and null for references. One can create several arrays in a single declaration, such as:

String color[] = new String[3], state[] = new String[4];

The above declaration has the same meaning as:

String color[] = new String[3];
String state[] = new String[4];

Note that when the brackets are written after the type, as in String[], they apply to every variable in that declaration, so an extra pair of brackets after a variable name would turn that variable into a two-dimensional array.

When an array is declared, we can place the square brackets at the beginning of the declaration, such as

double[] val1, val2;
val1 = new double[10];
val2 = new double[20];

which is equivalent to the declaration

double val1[];
val1 = new double[10];
double val2[];
val2 = new double[20];

or

double[] val1;
val1 = new double[10];
double[] val2;
val2 = new double[20];

Note: You can declare arrays of any type, but the values stored must all be of the same (homogeneous) type.
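As an aside, the default values described above can be checked with a short program; the class name below is purely illustrative and not part of the original article:

```java
public class DefaultValuesDemo {
    public static void main(String[] args) {
        int[] numbers = new int[3];        // numeric elements default to zero
        boolean[] flags = new boolean[2];  // boolean elements default to false
        String[] names = new String[2];    // reference elements default to null

        System.out.println(numbers[0]);  // prints 0
        System.out.println(flags[0]);    // prints false
        System.out.println(names[0]);    // prints null
    }
}
```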
Every element of an int array is an int value, and every element of a String array is a reference to a String object.

Array creation and initialization

Arrays can be initialized at run time as well as at compile time. At compile time an array can be initialized with an array initializer, which is a comma-separated list of values enclosed within braces, '{' and '}', such as:

int[] numbers = {0,1,2,3,4,5,6,7,8};
String[] colors = {"red", "green", "blue"};

In this case, the array length is determined by the number of elements in the list. Let us write a program to illustrate array initialization at compile time as well as at run time. Run-time array initialization means the values in the array are supplied while the program is running; in our example we shall read the array values from the keyboard.

Example 1: Program to demonstrate array initialization and manipulation

import java.util.Scanner;

public class ArrayDemo {
    public static void main(String[] args) {
        // Array initialized at compile time through an initializer;
        // the values are displayed when the program is running.
        int array[] = { 12, 34, 56, 78, 89, 32, 56, 22 };
        for (int i = 0; i < array.length; i++) {
            System.out.println("Value " + array[i] + " is at array[" + i + "]");
        }

        // Array initialized at run time from keyboard input.
        Scanner input = new Scanner(System.in);
        String colors[] = new String[3];
        for (int j = 0; j < colors.length; j++) {
            System.out.println("Enter color: ");
            colors[j] = input.nextLine();
        }

        // The values are displayed when the program is running.
        for (int k = 0; k < colors.length; k++) {
            System.out.println("Color " + colors[k] + " is at colors[" + k + "]");
        }
    }
}

Note: Every array object in Java knows its own length and maintains this information in a length field. Thus the expression colors.length, for example, accesses the colors array's length field to determine the length of the array.
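One point worth making explicit, since it often trips up beginners: arrays expose length as a field, while String exposes length() as a method. A small illustrative program (not from the original article):

```java
public class LengthDemo {
    public static void main(String[] args) {
        int[] numbers = { 1, 2, 3, 4 };
        String word = "Java";

        System.out.println(numbers.length);  // field access, no parentheses: prints 4
        System.out.println(word.length());   // method call on a String: prints 4
    }
}
```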
Example 2: Program to find the sum of all array elements

public class ArraySum {
    public static void main(String[] args) {
        int sum = 0;
        int array[] = { 12, 34, 56, 78, 89, 32, 56, 22 };
        for (int i = 0; i < array.length; i++) {
            sum += array[i]; // same as writing sum = sum + array[i]
        }
        System.out.println("Sum is :" + sum);
    }
}

Example 3: Program to create a bar chart from the list of array elements

public class ArrayDemo {
    public static void main(String[] args) {
        int array[] = { 10, 0, 5, 7, 1, 0, 12, 22 };
        for (int i = 0; i < array.length; i++) {
            System.out.print(array[i] + "\t|");
            for (int j = 0; j < array[i]; j++) {
                System.out.print("=");
            }
            System.out.println("");
        }
    }
}

Multidimensional arrays

Arrays which have rows and columns are called two-dimensional; similarly there are three-dimensional and four-dimensional arrays, and in fact arrays may be declared as n-dimensional. But practically visualizing arrays of more than two dimensions is difficult, and they are rarely used. Java does not support multidimensional arrays directly, but it does allow the programmer to specify one-dimensional arrays whose elements are also one-dimensional arrays, thus achieving the same effect. A two-dimensional array is, by analogy, much like a table, and to identify a particular table element we must specify two indices. The first identifies the element's row and the second its column. Like one-dimensional arrays, multidimensional arrays can be initialized with array initializers in the declaration. A two-dimensional array with two rows (the first containing three columns and the second four) could be declared and initialized as follows:

int array[][] = { { 11, 22, 66 }, { 33, 44, 99, 77 } };

A multidimensional array with the same number of columns in every row can be created with an array creation expression as follows:

int array[][];
array = new int[3][4];

In this case, we use the literals 3 and 4 to specify the number of rows and columns respectively.
But if we want a multidimensional array in which each row has a different number of columns, we can create it as follows:

int array[][];
array = new int[2][];
array[0] = new int[7];
array[1] = new int[5];

Thus the statements create a two-dimensional array with two rows, where row 0 has seven columns and row 1 has five columns.

Example 3: Program to demonstrate two-dimensional arrays

public class Array2DDemo {
    public static void main(String[] args) {
        int array2d[][] = { { 11, 22 }, { 33 }, { 44, 55, 66 } };
        System.out.println("Values in the array are");
        for (int row = 0; row < array2d.length; row++) {
            for (int col = 0; col < array2d[row].length; col++) {
                System.out.print(array2d[row][col] + " ");
            }
            System.out.println();
        }
    }
}
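The row-by-row allocation described above can also be demonstrated directly; the following short program is an illustrative addition, not part of the original article:

```java
public class JaggedArrayDemo {
    public static void main(String[] args) {
        // Allocate the two rows first, then give each row its own length.
        int array[][] = new int[2][];
        array[0] = new int[7];
        array[1] = new int[5];

        for (int row = 0; row < array.length; row++) {
            System.out.println("Row " + row + " has " + array[row].length + " columns");
        }
    }
}
```

Running it prints "Row 0 has 7 columns" followed by "Row 1 has 5 columns".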
Passing arrays to methods

To pass an array argument to a method, specify the name of the array without any brackets. For example, if the array price is declared as

double price[] = new double[5];

then the method call

addTax(price);

passes the array price to the method addTax. For a method to receive an array reference through a method call, the method's parameter list must specify an array parameter. For example, the method header for addTax might be written as void addTax(double p[]), which indicates that addTax receives the reference of a double array in parameter p. The method call passes price's reference, so when the called method uses the array variable p, it refers to the same array object as price in the calling method.

Example 4: Program to demonstrate passing arrays to methods and modifying the elements

public class ArrayToMethodDemo {
    public static void main(String[] args) {
        double price[] = { 12.20, 45.25, 78.10, 35.40, 78.50 };
        System.out.println("Price before tax imposed");
        display(price);
        addTax(price);
        System.out.println("Price after 12.75% tax imposed");
        display(price);
    }

    public static void addTax(double p[]) {
        for (int i = 0; i < p.length; i++) {
            p[i] = p[i] + p[i] * 0.1275;
        }
    }

    public static void display(double p[]) {
        for (int i = 0; i < p.length; i++) {
            System.out.println(p[i]);
        }
    }
}

On passing arguments to methods

Many programming languages have two ways to pass arguments in method calls: pass-by-value (or call-by-value) and pass-by-reference (call-by-reference). When an argument is passed by value, a copy of the argument's value is passed to the called method. The called method works exclusively with the copy, so any changes made to the copy do not affect the original variable's value in the caller. On the other hand, when an argument is passed by reference, the called method can access the argument's value in the caller directly and modify the data. However, Java does not give the programmer much of a choice: all arguments are passed by value in Java. Note that for arrays, what is passed by value is the reference itself; the method receives a copy of the reference to the same array object, which is why addTax above can modify the caller's elements, while reassigning the parameter inside the method would not affect the caller.

Example 5: Program to demonstrate passing arguments by value

public class PassByValueDemo {
    public static void main(String[] args) {
        float value = 2.5f;
        System.out.println("Value before modification " + value);
        changeValue(value);
        System.out.println("Value after modification " + value);
    }

    public static void changeValue(float f) {
        f *= 2;
        System.out.println("Modified value " + f);
    }
}

About String[] args in main

Java allows arguments to be passed from the command line to an application by including a parameter of type String[] in the parameter list of main.
When an application is executed, Java passes the command line arguments that appear after the class name in the java command to the application's main method as strings in the array args. For example,

java MyApp Hello world

passes two command line arguments to the application MyApp. MyApp's main method receives the two-element array args, in which args[0] contains the String "Hello" and args[1] contains the String "world".

Quizzes

1. Lists and tables of values can be stored in (1) variables (2) table (3) arrays (4) program
2. An array is a group of (1) variables (2) table (3) arrays (4) program
3. The word homogeneous with respect to arrays refers to (1) elements of same size (2) elements of same type (3) data type (4) variables
4. What allows the programmer to iterate through the elements in an array? (1) conditional statements (2) functions (3) loops (4) only the for loop
5. An array that uses two indices is called a (1) one-dimensional array (2) two-dimensional array (3) three-dimensional array (4) four-dimensional array
6. Command line arguments are stored in (1) main (2) String[] args (3) Java (4) the stack
7. Array length is denoted by the field (1) size() (2) length() (3) max() (4) length
8. Can an array index be of type float? (1) yes (2) no
9. Given the command java abc xyz, the first command line argument is (1) java (2) abc (3) xyz (4) none of the above
10. String is a primitive data type (1) true (2) false

Task: Try yourself
- Write a program to copy the elements of one array into another array.
- Write a program to simulate a stack with the help of an array.
- Write a program to reverse the elements in an array.
- Write a program to search for an element in an array.

Previous: Methods | Next: Introduction to classes and objects | Back to Table of content

Edited by mdebnath, 09 March 2013 - 06:26 AM.
/*
 * In a hypothetical real-estate themed board game, a player rolls two dice to
 * move their token around the perimeter of the board by the number of squares
 * shown on the face of the dice. At each spot the player lands, he or she must
 * take an action, either purchasing a property or paying some cash penalty. If
 * the player lands on a property, she or he must purchase it. If the property
 * has already been purchased, the player does nothing. If the player lands on a
 * penalty square, he or she must pay the penalty, no matter how many times the
 * player lands on it. The table below shows each square on the board, and the
 * price or penalty for each square.
 *
 * Each time the player rounds the board (reaches or passes the "Go" square),
 * the player earns $200. There is no other way to earn money. Play continues
 * until the player lands on a property or penalty he or she cannot afford or
 * there have been 1,000 rolls of the dice.
 *
 * Assuming only one player who starts with $1500 per game, and 1,000 games,
 * please answer the following questions:
 * - What is the average number of rolls (turns) in a game?
 * - What is the average number of properties purchased in a game?
 * - As a percentage, in how many games is Indiana Avenue purchased?
 */
package jmonopoly;

/**
 * Beginning...
 */
public class JMonopoly {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        // TODO code application logic here
    }
}

Edited by tbuchli
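As a hedged starting point for the empty main above, the sketch below simulates only the dice rolls, lap counting, and $200 salary that the comment describes. The board layout, property prices, and penalty handling are not given in the snippet, so they are left as TODOs; all names and the 40-square board size are assumptions, not part of the original thread.

```java
import java.util.Random;

public class DiceRollSketch {

    static final int BOARD_SIZE = 40; // assumption: a standard 40-square board
    static final int SALARY = 200;    // earned each time the player rounds the board

    // Roll two dice up to maxRolls times, tracking position and salary only.
    // Purchases, penalties, and the early exit when the player cannot afford
    // a square are left as the next step.
    static int simulate(long seed, int maxRolls) {
        Random dice = new Random(seed); // fixed seed keeps runs repeatable
        int position = 0;
        int cash = 1500;
        for (int rolls = 0; rolls < maxRolls; rolls++) {
            int roll = (dice.nextInt(6) + 1) + (dice.nextInt(6) + 1);
            position += roll;
            if (position >= BOARD_SIZE) { // reached or passed "Go"
                position -= BOARD_SIZE;
                cash += SALARY;
            }
            // TODO: act on the square at 'position': buy the property or
            // pay the penalty, ending the game if the player cannot afford it.
        }
        return cash;
    }

    public static void main(String[] args) {
        System.out.println("Cash after 1000 rolls: " + simulate(42, 1000));
    }
}
```

Running 1,000 such games and averaging the roll counts would then answer the first question in the comment.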
Introduction

Plumbing is a facility in minimega to enable communication between VMs, processes on guests or hosts, and instances of minimega. In short, it allows "plumbing" communication pathways for any element of a minimega ecosystem. Plumbing in minimega is similar in concept to unix pipes and the myriad other IPC mechanisms available in many programming languages and operating systems. minimega's plumber is designed to interact with unix command line tools and provides a number of additional capabilities over unix pipes. The plumber allows for uni- and multi-cast pipelines, supports fan-in (multiple pipe inputs), message delivery modes for each pipe (broadcast, round-robin, random), and per-reader pipelines called vias. The plumber is fully distributed, and works seamlessly across instances of minimega and VMs. This means a VM on node X can, without additional configuration, attach to a pipeline on node Y. VMs and minimega instances can even read or write to pipes from other VMs.

plumbing semantics

The minimega plumber provides two plumbing primitives - pipes and pipelines. Pipes are I/O points, and support a number of delivery and read/write options. Pipelines are compositions of pipes and external programs.

pipes

Pipes are simply named I/O points, similar to a named pipe on a unix system. minimega pipes exchange newline-delimited messages as opposed to a byte stream like unix pipes. By using messages, minimega pipes allow for any number of readers and writers on a single named pipe. Messages are written and delivered to any attached readers according to that pipe's current mode. Message writes are non-blocking, and if no readers are present on the pipe, the message is discarded. No buffering of messages takes place. Named pipes are also unique to the namespace they were created in, so it's safe to reuse pipe names between experiments.
Pipes can be written to or read from on the minimega CLI, on the host command line, attached to miniccc processes via the cc API, and from the VM command line via a miniccc switch. For example, we can read from a minimega pipe foo on the command line:

$ minimega -pipe foo

Invoking a pipe this way will block until standard in is closed. Let's write to the pipe now using the command line as well as the minimega CLI:

$ echo "Hallo von minimega!" | minimega -pipe foo

And from the minimega CLI:

minimega$ pipe foo "Or would you rather use English?"

Meanwhile, back at our reader:

$ minimega -pipe foo
Hallo von minimega!
Or would you rather use English?

This exact same method can be used across distributed instances of minimega - simply attach to a named pipe as you would locally. You can even use pipes from connected miniccc clients running on VMs:

$ miniccc -pipe foo

It's also possible to directly attach named pipes to the standard input, output, or error streams of processes launched by the cc API, by specifying key/value pairs on the exec and background commands:

minimega$ cc exec stdin=foo my_program
minimega$ cc background stdin=foo stdout=bar my_program

The pipe API

Named pipes are created on the first read, write, or mode selection on that pipe. To list current pipes, use the pipe API:

minimega$ pipe
name | mode        | readers | writers | via | last message
bar  | all         | 1       | 1       |     |
foo  | round-robin | 2       | 2       |     | a message!

Using the pipe API, you can set the delivery mode (explained in the next section), write to a pipe, set vias (explained later), and delete pipes. When deleting a pipe, all attached readers will be closed (and receive an EOF).

multiplexing

By default, messages written to a pipe will be delivered to all readers. There are cases, however, where you may want messages to be delivered to only one reader, similar to a load balancer. minimega pipes support three message delivery modes: all (the default one-to-many mode), round-robin, and random.
In round-robin and random modes, messages written to a pipe will be delivered to exactly one reader (including distributed readers). To change the mode on a named pipe, use the pipe API:

minimega$ pipe foo mode round-robin

vias

Vias are single-stage, external programs that are invoked for every read that takes place on a named pipe. They are used where a value written to a pipe needs to be transformed in some way for every reader the message will be forwarded to. For example, say you want readers on pipe foo to each receive a unique, normally distributed floating-point value based on a mean written to the pipe. One approach would be to have the writer count the number of readers and write N unique values based on the mean to a pipe with round-robin delivery. This is problematic, as it requires the agent to check reader and pipe state at every potential write. Instead, we can have the pipe use a via to generate unique values for every reader automatically when a write occurs:

minimega$ pipe foo via "normal -stddev 5.0"

In the above example, "normal" is a program that takes, on standard input, a floating-point value as a mean, and generates a single value on a normal distribution with the given mean and standard deviation. When a value is written to foo, minimega will invoke the "normal" program for every reader on the pipe, sending unique values to each:

# write a value to foo
echo "1.5" | minimega -pipe foo

# on node A
$ minimega -pipe foo
2.35

# on node B
$ minimega -pipe foo
3.44

pipelines

minimega provides the plumb API for creating pipelines of external processes and named pipes. Pipelines are constructed similarly to unix pipelines, and follow the same basic semantics, such as cascading standard I/O and signaling pipeline stages with EOF. However, minimega pipes are message-based and consume and emit newline-delimited messages.
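Before moving on to the pipeline example, the per-reader via behavior described above can be sketched in Python. This is a hypothetical model: the via is represented as a callable invoked once per reader, analogous to minimega running the external via program for each read; `normal_via` mimics the `normal` helper program from the example:

```python
import random

def deliver_with_via(message, readers, via=None):
    """Deliver `message` to every named reader, invoking the via once per
    reader so each one receives its own transformed value."""
    result = {}
    for name in readers:
        result[name] = via(message) if via else message
    return result

def normal_via(stddev):
    """Model of the `normal` via: treats the written value as a mean and
    draws one sample from a normal distribution per reader."""
    def via(value):
        return random.gauss(float(value), stddev)
    return via
```

Writing "1.5" through `normal_via(5.0)` gives each reader a different float near 1.5, matching the node A / node B transcript above.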
Additionally, pipes support multiple readers and writers and delivery modes, so it's possible to construct arbitrary topologies of pipelines using multiple linear pipelines with the plumb API. For example, let's construct a simple, linear pipeline with the unix program "sed":

minimega$ plumb foo "sed -u s/foo/moo/" bar
minimega$ plumb
pipeline
foo sed -u s/foo/moo/ bar
minimega$ pipe foo "the cow says foo"
minimega$ pipe
name | mode | readers | writers | via | last message
bar  | all  | 0       | 1       |     | the cow says moo
foo  | all  | 1       | 0       |     | the cow says foo

In this example, we created a pipeline starting with a named pipe foo, continuing to an external process "sed -u s/foo/moo/", and ending at a named pipe bar. The plumber creates the pipeline and starts any external processes. We can then write to the named pipe foo and see the result with the pipe API. In this example, all readers on foo would see the original message, and all readers on bar would see the message as modified by "sed".

Also in this example, the pipeline stays running until one of the pipeline stages is closed. We can shut down the entire pipeline using the minimega CLI either by clearing the plumber, or by simply closing the first pipe in the pipeline, foo:

minimega$ plumb foo "sed s/foo/moo/" bar
minimega$ plumb
pipeline
foo sed s/foo/moo/ bar
minimega$ clear pipe foo
minimega$ plumb
minimega$

Named pipes in pipelines are distributed as usual, but external programs are invoked on the machine where the command is issued. This means that if you start a pipeline that uses sed and writes to pipe foo on node X, the sed process will be launched only on node X, but readers anywhere in the experiment can read the value written to foo.
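The message-based pipeline composition above can be sketched as ordinary function composition: each stage takes one message and returns a transformed message, the analogue of a process consuming and emitting newline-delimited messages. The stage names are illustrative; `sed_stage` mimics `sed s/foo/moo/`, which substitutes the first occurrence per line:

```python
def make_pipeline(stages):
    """Compose message-transforming stages into a linear pipeline, the
    plumb-API analogue of: plumb foo "sed -u s/foo/moo/" bar"""
    def run(message):
        for stage in stages:
            message = stage(message)
        return message
    return run

def sed_stage(message):
    # Equivalent of `sed s/foo/moo/`: replace the first "foo" in the line.
    return message.replace("foo", "moo", 1)
```

Running `make_pipeline([sed_stage])("the cow says foo")` reproduces the "the cow says moo" result from the pipe table above; additional stages chain the same way a longer plumb command would.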
We introduced Cape in a previous post. In a nutshell, Cape is a framework for enabling real-time asynchronous event processing at a large scale with strong guarantees. It has been over a year since the system was launched. Today Cape is a critical component of Dropbox infrastructure. It operates with both high performance and reliability at a very large scale. Here are a few key metrics. Cape is:

- running on thousands of servers across the continent
- subscribing to over 30 different event domains at a rate of 30K/s
- processing jobs of various sizes at a rate of 150K/s
- delivering 95% of events under 1 second after they are created

Cape has been widely adopted by teams at Dropbox. Currently there are over 70 use cases registered under Cape's framework, and we expect Cape adoption to continue to grow in the future.

In this post, we'll take a deep dive into the design of the Cape framework. First, we'll discuss Cape's architecture. Then we'll look at the core scheduling component of the system. Throughout, we'll focus the discussion on a few key design decisions.

Design principles behind Cape

Before we begin, let's touch on a few of our principles for developing and maintaining Cape. These principles were proposed based on learnings from the development of other systems at Dropbox, especially from Cape's predecessor, Livefill. These principles were critical both for the project's success and for the ongoing maintenance of the system.

Modularization

From the beginning we explicitly took a modular approach to system design; this is critical for isolating complexity. We created modules with clearly defined functionality, and carefully designed the communication protocols between these modules. This way, when building a module we only needed to worry about a limited number of issues. Testing was easy since we could verify each module independently. We also want to highlight the importance of keeping module interfaces to a minimum.
It's easier to reason about interaction between modules when their interfaces are small. What's more, a small interface is more easily adapted to new use cases.

Clear boundaries between system logic and application-specific logic

In Cape it's common for a component to contain procedures for both system logic and application-specific logic. For instance, the dispatcher's tracker queries information from the event source (application-specific), and produces scheduling requests based on query results (system logic). We carefully designed Cape's abstractions to ensure there is a clear boundary between the two categories. As illustrated in the following figure, system logic and application-specific query logic are separated by an intermediate layer. It translates generic queries issued by the tracker into specific queries to different event sources. This boundary ensures that the system is easily extensible, and that logic for different event sources is completely isolated.

Evolution of terminology

The following terms and concepts are necessary for understanding the fundamentals of how Cape functions. During the course of Cape's development, definitions for these key terms and concepts evolved and became further refined. Readers may wish to re-read the previous post for a refresher. Otherwise, a quick recap will frame the following discussion.

In Cape's world, an event is a piece of persisted data. The key for any event has two components. One component is a subject, of type string. The second component is a sequence ID, of type integer. Events with the same subject are ordered by their sequence IDs, and those with different subjects are considered independent. A namespace for events that are constructed in the same way is a domain.

Topology is our notion of a user application. Topologies that subscribe to the same domains can form ordering dependencies. In other words, users can specify a set of other topologies that must run before their own topologies do.
Lambdas carry out execution. Conceptually, lambdas are callbacks that are invoked with a set of events provided as input. Going back to topology, a topology consists of one or more lambdas. Within the scope of a topology, lambdas may form data dependencies. This means a lambda can generate output regarding an event, and the output will become input for one or more other lambdas when processing the same event.

Why have multiple lambdas carry out a topology's logic? At Dropbox one popular motivation is data vicinity. If a topology's workflow can be divided into stages that process data in different datacenters, then it's more efficient to colocate the computation with the data. In this case a user may choose to have multiple lambdas running in different data centers.

For each subject's topology, we maintain the sequence ID of the last successfully processed event. This value is called a cursor. All the cursors of a single subject are included in one protobuf object, which we call cursor info. Cursor info is persisted and can be retrieved with the subject as key.

Architecture: why not just a queue?

Cape is an event delivery system. Conceptually, it can be thought of as a pipe, as below. At one end, event sources notify Cape of new events. At the other end, events are consumed by various topologies. An intuitive solution would be to build Cape with a queue at its core. Sources could publish events to the queue and topologies could consume from it as independent consumer groups.

But Cape is not a passive queue. It's an active and intelligent system that fetches events from sources and delivers them to the topologies. Instead of sending events to Cape, the sources only send pings—lightweight reminders about new events for a particular subject. Upon receiving pings, Cape performs sophisticated analysis and then issues jobs to workers. A refresh component sends pings for backlog processing. This dynamic is captured below.
A queue-based solution isn't enough to meet requirements

Cape's event-processing design is the result of the following requirements for how events should be delivered:

- low latency: events should be delivered as fast as possible
- retry until success: each event is guaranteed to eventually be successfully processed by all subscribing topologies
- subject level isolation: failures in one subject shouldn't impact the processing of events from other subjects
- event source reliability: event sources are external services and therefore can fail from time to time. The system must be able to tolerate failures such as when sources fail to send out events.

Although a queue is the natural solution for event delivery, if we take the above requirements into consideration, it quickly becomes apparent that a queue-based solution isn't enough. Imagine building a queue-based system that satisfies Cape's requirements. For simplicity, we'll limit our discussion to a few common and mature queueing solutions: Kafka, Redis, and SQS. Queue-based solutions require event sources to reliably push each event to the system. Additionally, events must be pushed in the correct order, meaning that for the same subject, events with smaller sequence IDs are pushed first. Now let's go through some of the above requirements.

Low latency

Comparatively speaking, publishing to Kafka is significantly faster than to SQS. Kafka is designed for low-latency publishing. We can set up Kafka clusters inside Dropbox infrastructure, so its network latency is going to be much lower than using SQS. Redis has very low latency when data is only in memory. However, when it's configured to persist snapshot data, there can be a significant negative impact on its availability.

Retry until success

In a queue-based solution, this is equivalent to a requirement for persistence. Both Kafka and SQS support data persistence very well.
For Redis, as mentioned above, persistence can be achieved to some extent by taking periodic snapshots, which impacts availability. Additionally, a Redis cluster with persisted data is usually more difficult to maintain compared to Kafka or SQS.

Subject level isolation

Subject level isolation is the biggest barrier to queueing. It's not practical to create a queue for each subject; there can be billions of them. The problem with using Kafka for queuing is that Kafka requires each consumer to acknowledge each event after it's been successfully processed. This introduces a severe head-of-the-line blocking problem, because it prevents processing any other subjects' events until the one at the head of the queue is successfully processed. The latency is unacceptable—there are thousands of subjects generating events every second, and lambda runtime can vary from milliseconds to tens of minutes.

Delivery latency could be improved by decoupling an event's read and acknowledgement phases. This would allow consumers to keep reading new events while asynchronously acknowledging them after successful processing. But if any event misses getting processed, the consumer will have to rewind and reprocess a potentially large amount of other subjects' events. This rewind is necessary to provide an "at least once" success guarantee for every event. Essentially, failure in one subject can introduce duplicated processing and extra latency to other subjects, which breaks the isolation between subjects.

SQS provides a better solution for this kind of isolation. When an event is being consumed, it is invisible to other consumers until a specified deadline is reached. Once successful processing has finished, the consumer acknowledges it and the event is removed from the queue. This allows consumers to keep fetching new events, without having to worry about rewinding on failures. However, with SQS there is a problem when ordered processing is taken into consideration.
Although SQS has a first-in, first-out (FIFO) option that provides ordered consumption, it comes at the cost of limited throughput. Otherwise, the problem is that, before processing an event, there is no way for a consumer to know whether the event's predecessors were successfully processed. Without knowing that, guarantees of ordered processing cannot be made. Complicating the issue, a record indicating which consumer gets the right to exclusively process which subject would have to be kept (to ensure events of the same subject are processed in order). This kind of bookkeeping can be tricky to maintain, and we would need to build a service for the necessary bookkeeping.

The requirement for subject level isolation is also problematic when using Redis for queuing. In practice, consuming an event from Redis means removing it from the queue. This creates a durability issue. The event will be lost if the consumer crashes, a very common occurrence. While it's possible to let the consumer peek at the event and only remove it after it's successfully processed, that also leads to a head-of-the-line blocking problem, and it doesn't solve the issue of lost events.

Event source reliability

Finally we come back to the initial setup: events must be published by the source in a reliable way. In production this can be very hard to achieve. Often the creation of an event is in the path of a critical Dropbox service (think about syncing a file or signing up as a new user). In a large-scale distributed system, failure happens almost all the time. When there is a publish failure, the rational choice is to give up so as to avoid impacting the critical service's availability. The event goes unpublished.

Why build a dispatcher?

Given the above discussion, using SQS plus a bookkeeping service could provide a possible queuing solution, but we still needed to address scalability and reliability.
Because the queue-based system isn't reliable, and the inefficiency entailed in custom bookkeeping would make scaling a difficult prospect, we chose to build Cape's scheduling component around a dispatcher. Rather than having event sources send every event, in proper order, to a queue, event sources send pings to a dispatcher. This setup imposes a significantly smaller publish workload on event sources. Since Cape has a refresh feature, all pings get tried at least once. Lastly, because all scheduling operations happen inside the dispatcher (avoiding slow communication between servers), we can achieve very low event delivery latency. As a side benefit, having a centralized scheduling component allows Cape to support more advanced processing modes, including scheduling with dependency, ordered processing, and heartbeat.

Dispatcher

Overview

The dispatcher lies at the center of Cape's architecture. It is the system's brain and controls the full lifecycle of scheduling. Let's first take a look at the dispatcher at the top level. The dispatcher's data flow is summarized in the following figure. Event sources send lightweight, subject-related pings to the dispatcher. The ping reminds the dispatcher to check whether there are any new events to be processed for that subject. Upon receiving the ping, the dispatcher makes a few queries to gather relevant information. This includes querying the cursor from the cursor store, and getting event information from the sources. Using these query results, the dispatcher updates its in-memory state and determines which events need to be processed by which lambdas. At this point, jobs—a set of events and a unique job ID—are issued to the corresponding lambda workers. After a worker finishes processing, it reports back to the dispatcher with a job status. This contains the processing results for each event and the same job ID as the corresponding job.
Upon receiving job status results, the dispatcher updates the in-memory state. If the job was a success, the dispatcher may advance the corresponding cursor and schedule more jobs if any new scheduling can be triggered. The scheduling lifecycle for a ping ends when the dispatcher's in-memory state doesn't have any records of running jobs, and there are no more jobs to be issued. An important feature is the ping refresh component. When pings can't make it to the dispatcher, Cape's refresh feature will make sure to resend any lost pings.

Dispatcher internal structures

Above is a graph of the dispatcher's internal process. A highlight of the dispatcher design is modularity, one of the key design principles we started with. Each component carries out relatively independent functionality. Instead of sharing memory state, components coordinate with each other by passing messages, and the communication protocol between components is minimal. This design allows each component to be tested separately with great ease.

Tracker

The tracker is a stateless component that receives pings as input, and produces one or more scheduling requests for the scheduler to consume. It queries the cursor store and event sources for the information necessary to make scheduling decisions, and then compiles scheduling requests. Of all the data inside a request, the most important pieces are the event interval and the event set. The event interval is a closed interval of sequence IDs, and the event set contains all the events within the corresponding interval.

Scheduler

The scheduler is the only stateful component inside the dispatcher, and is the only entity that can write to the cursor store in Cape. Except for reading and writing cursors, the scheduler's operations are all strictly in memory. Additionally, the scheduler owns the in-memory state that keeps the bookkeeping for jobs.
When the scheduler gets a request, it creates new jobs and sends them out to the publisher—a stateless component that receives jobs. The publisher sends jobs to an external buffer which is subscribed to by the corresponding lambda workers. The RPC server gets the job status from the workers and forwards it to the scheduler. Depending on success or failure, the scheduler may issue more jobs. Because it's the only component that owns in-memory state, it does all the related bookkeeping. This reflects our observation about scheduling operations: every scheduling operation consists of expensive but stateless remote queries, plus in-memory logic that requires locking. In our design, stateless procedures are managed by peripheral components, and the scheduler focuses on almost pure in-memory scheduling procedures. Components run in parallel and communicate by sending messages. This provides a clear view of how the different components work together, and the implementation of parallelism is straightforward because the design naturally fits Go's concurrency model.

Deep dive into two important elements

Two key elements deserve a deeper look: the tracker's procedure for producing scheduling requests, and the scheduler's data structures and algorithm.

Tracker operation

The tracker takes a ping as input and produces one or more scheduling requests as output. A scheduling request is an object packed with all the information necessary for making scheduling decisions, including:

- the subject's latest sequence ID
- the event interval [sequenceId_start, sequenceId_target]
- a set of all events within the above interval

The tracker procedure is best demonstrated with an example. Let's say a ping regarding subject S is received, and there are 4 topologies, T1–T4, subscribing to subject S's events. Upon receiving this ping, the tracker first queries the latest sequence ID for S, which is 100. Then it queries S's cursor info.
Let's say the content of the cursor info is:

- T1: 10
- T2: 90
- T3: 99
- T4: 99

Next, the tracker needs to make event queries in order to fetch events for scheduling. An event query takes three arguments: subject, start sequence ID, and max batch size. It returns a sorted list of events beginning with the start sequence ID, capped in length by the max batch size. The max batch size is set based on the capability of the event source, as well as on the data size per event from this source. In this example let's assume the limit is 10. Additionally, it's important to note that for a given topology, an event range can be used to create new jobs for the topology only when the range contains the topology's cursor + 1.

As you can imagine, event queries can be very expensive, both in terms of latency and of load on the event source. This creates an interesting optimization problem: how do you make the minimum number of event queries, and still allow all topologies to make as much progress as possible? The most naive approach would be to make an event query for each topology. But the problem with this approach is also obvious: it doesn't scale. As more topologies subscribe to the same event source, the tracker's workload grows linearly. Because the topologies' cursors are nearly aligned most of the time, most of the tracker's queries will be redundant.

For our first iteration, we adopted a simple heuristic: make one event query for each distinct cursor value. For instance, in the above setting, the tracker will make three queries with the following arguments:

- (S, 11, 10)
- (S, 91, 10)
- (S, 100, 10)

This had very good performance in the beginning, when most topologies had very simple and robust processing logic.
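The per-distinct-cursor heuristic can be sketched in a few lines, along with the vicinity-grouping refinement the article introduces next. Both functions are toy illustrations; in particular, the exact grouping rule (merge cursors whose start falls inside the previous query's batch window) is an assumption, since the article doesn't spell it out:

```python
def event_queries_per_cursor(subject, cursors, max_batch=10):
    """First heuristic: one event query per distinct cursor value.
    Each query starts at cursor + 1."""
    starts = sorted({c + 1 for c in cursors.values()})
    return [(subject, start, max_batch) for start in starts]

def event_queries_grouped(subject, cursors, max_batch=10):
    """Improved heuristic: group cursor values by vicinity, issuing one
    query per group. Assumed rule: a start already covered by the previous
    query's window [prev_start, prev_start + max_batch - 1] is skipped."""
    starts = sorted({c + 1 for c in cursors.values()})
    queries = []
    for start in starts:
        if queries and start <= queries[-1][1] + max_batch - 1:
            continue  # covered by the previous group's query window
        queries.append((subject, start, max_batch))
    return queries
```

With the article's example cursors {T1: 10, T2: 90, T3: 99, T4: 99} and a batch size of 10, the first heuristic yields three queries and the grouped one yields two, matching the lists above.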
However, as we started adding expensive topologies with jobs that take a longer time to execute, or that were sometimes flaky with errors, we observed the system frequently falling into an unstable state where the dispatcher was running hot on CPU and couldn't keep up with the scheduling. A thorough investigation revealed that the problem was with the event query heuristic. It works well when most topologies have their cursor values aligned. However, once a few of them start to experience increased failures, more and more distinct cursor values emerge for a single subject. This causes the tracker's worker pool to become increasingly busy, and then scheduling gets delayed or canceled as the system approaches its capacity limit. This is a vicious cycle. Once the system becomes overwhelmed, things only get worse. The system cannot recover on its own.

Given that the first heuristic wasn't robust, an improved heuristic was proposed and applied. In this new approach, cursor values are grouped by vicinity. Only one event query is made per group. This is an improvement because it generates fewer requests than the first heuristic when some topologies are degraded. For the given example, we only generate the following event queries:

- (S, 11, 10)
- (S, 91, 10)

This heuristic has much better tolerance of small cursor misalignment, and is much more robust. When a few topologies are behaving badly, their impact on the dispatcher is well constrained. This heuristic also proved to be highly scalable. We added tens of topologies for a particular event domain, and Cape continues to run efficiently, with very high stability.

Scheduler: the only stateful component in Cape's system

Now let's talk about the scheduler. The scheduler exclusively owns the dispatcher's in-memory data structure for bookkeeping the jobs that are currently in flight. For this reason, we call this data structure the inflight state.
When new information is received from either the tracker or the RPC server, the scheduler updates the inflight state accordingly, and makes the correct scheduling decisions. Now we'll look at the inflight state and an illustration of how the scheduling algorithm works. First, a look at the basic scheduling workflow.

Inflight state

The inflight state has three components. The first component is a tree structure that allows scheduling information to be stored hierarchically; it's called the state tree. Nodes at the first level of the state tree are called subject states. Each is the root of a subtree that contains all inflight information for a given subject. Information shared by all inflight jobs for the subject, including cursor info, the latest sequence ID, and the ranges of events used by the subject's inflight jobs, is stored in the subject state node. The subject state fans out to topology states, each the root of the subtree corresponding to that topology's inflight jobs. Finally, the leaf node is the lambda state, containing a sorted list of the inflight job records. The following graph shows the layout of the state tree. Note how at the root level there is a subject state table that maps inflight subjects to their corresponding subject state nodes.

The second component is a timeout list. It's a priority queue holding references to all inflight jobs, sorted by their expiration timestamps. A job lookup table, keyed by job ID, is the third component. It maintains the job metadata, which is used to find the corresponding job records in the timeout list and the state tree.

Scheduling algorithm

To show how the scheduling algorithm works, we need to set up some necessary context. Let's first assume there are two topologies, T1 and T2. Both subscribe to the same event domain. For simplicity, we'll assume they each contain a single lambda.
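Before walking through the example, here is a toy sketch of the three inflight-state components just described: a state tree keyed subject → topology → lambda, a timeout list as a priority queue of (expiration, job ID) pairs, and a job lookup table keyed by job ID. The record shapes and method names are simplified assumptions, not Cape's actual structures:

```python
import heapq

class InflightState:
    """Sketch of the dispatcher's inflight bookkeeping."""

    def __init__(self):
        self.state_tree = {}    # subject -> topology -> lambda -> [job ids]
        self.timeout_list = []  # heap of (expires_at, job_id)
        self.job_table = {}     # job_id -> (subject, topology, lam) metadata

    def register(self, job_id, subject, topology, lam, expires_at):
        # Append the job record at the lambda-state leaf of the state tree...
        jobs = (self.state_tree.setdefault(subject, {})
                .setdefault(topology, {})
                .setdefault(lam, []))
        jobs.append(job_id)
        # ...insert it into the timeout list, ordered by expiration...
        heapq.heappush(self.timeout_list, (expires_at, job_id))
        # ...and record its metadata in the job lookup table.
        self.job_table[job_id] = (subject, topology, lam)

    def complete(self, job_id):
        # On a job status update, the metadata locates the job records
        # so they can be removed from the tree; the stale timeout entry is
        # ignored when popped, a simplification of the real cleanup.
        subject, topology, lam = self.job_table.pop(job_id)
        self.state_tree[subject][topology][lam].remove(job_id)
```

The job lookup table is what lets a status update find the right leaf without walking the whole tree, which mirrors the role the article describes for job metadata.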
The scheduler has received a scheduling request with the following content:

- subject: S
- latest sequence ID: 100
- event interval: [91, 100]
- event set: [91, 99, 100] (note that sequence IDs might not be consecutive)

Upon receiving this request, the scheduler initiates a scheduling procedure that is carried out by a Go routine. The scheduling procedure finds all the relevant lambda states in the state tree and tries to generate new jobs inside those state nodes. The procedure begins at the subject state. At the root there is a table that maps subjects to their corresponding subject states. However, before making any attempt to access a subject state, a lock for that particular subject must be held. This guarantees that no race condition can occur under the subject subtree. Holding the subject lock, the procedure checks to see if a subject state exists in the root table. If it doesn't, we need to create the state. During subject state initialization, the cursor info is fetched from the cursor store. Note that even though the tracker has already queried the cursor to generate the request, the scheduler still has to query the cursor in order to initialize a subject state. This is because the tracker and scheduler operations are independent: cursor info obtained by the tracker may be stale if the scheduler updates the cursor while the tracker is preparing the request.

Let's assume the cursor info values obtained by the scheduler are as follows:

- T1: 90
- T2: 99

Inside the subject state, content is updated with the new information from the request, including the latest sequence ID and the events. Next the procedure examines the topologies in sorted topological order, determined by their ordering dependency. For each topology we compare the updated event range, which is [91, 100], with the topology's cursor. Then we start topology-level scheduling with the events after their cursors. In this example, T1 receives events {91, 99, 100}, and T2 receives {100}.
At the topology level, the scheduling procedure selects lambdas for scheduling based on lambda dependency. Here each topology's only lambda is selected. Next, the procedure determines which events should be included in the new jobs. For the T1 lambda, an inflight job with events {91} already exists, so a new job will be created with only {99, 100}. For the T2 lambda, assuming there is no existing inflight job, a new job of {100} is created. Once jobs are created, their records are appended to their lambdas' job lists. The corresponding job records, containing job ID and job expiration time, are inserted into the timeout list. Finally, another set of records containing job metadata is registered in the job lookup table. Once registration is complete, the jobs are issued to the publisher.

After a worker finishes processing a job, a job status update is sent to the scheduler. This triggers a job status update procedure, again carried out by a Go routine. Let's say the job status update shows that T2's job has succeeded. The procedure first finds the metadata in the lookup table. Once the metadata is obtained, we deregister the record from the lookup table, and use the metadata to remove the corresponding record from the timeout list. Finally we track down the lambda state containing this job, and mark it as a success. As there are no pending inflight jobs before this one, T2's cursor should be updated to 100.

Besides handling scheduling request and job status update procedures, the scheduler also periodically purges expired inflight jobs. For each expired job, a timeout procedure is issued to a Go routine. This is essentially the same as a job status update, except it only marks the job as a failure. Let's presume T1's new job, which contains events {99, 100}, has expired. In the lambda state the corresponding job is marked as a failure and is then removed from the inflight state. The cursor won't be updated if the job failed, ensuring that Cape will retry later.
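The two decisions in the walkthrough above (which events go into a new job, and whether a cursor may advance after a success) can be sketched as small pure functions. These are simplified illustrations of the logic described, with invented names; the real scheduler tracks this state in its tree structures:

```python
def events_for_new_job(candidate_events, inflight_events):
    """Events eligible for a new job: those in the request's event set not
    already covered by an existing inflight job for the same lambda."""
    covered = set(inflight_events)
    return [e for e in candidate_events if e not in covered]

def advance_cursor(cursor, succeeded_events, pending_events):
    """Advance the cursor on success only when no earlier inflight job is
    still pending; otherwise leave it unchanged so Cape retries later."""
    if any(p < min(succeeded_events) for p in pending_events):
        return cursor  # an earlier job is still inflight; can't advance yet
    return max(succeeded_events)
```

In the example, T1's new job is built from {91, 99, 100} minus the inflight {91}, giving {99, 100}; T2's success on {100}, with nothing pending before it, advances its cursor from 99 to 100.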
This is certainly an oversimplified description of the scheduler workflow; real scheduler operations are much more sophisticated. The state tree allows us to organize relevant information in a hierarchical structure. Each scheduling procedure, whether it handles a scheduling request, a job status update, or a job timeout, always starts from the top and passes control down one level at a time until the leaf node, updating the relevant state at each level. Once the lambda-level operation is done, control is passed back to the upper level with proper post-processing. This hierarchical approach allows us to effectively modularize the scheduling logic, and thus keep the complexity at a controllable level.

Wrap up

We hope this discussion of Cape's design philosophy and inner workings paints a picture of the issues we face when working with large-scale distributed systems, and that it proves useful when readers make their own design decisions.

It takes a team of great engineers to build such an advanced and capable distributed system. Many thanks to everyone who contributed to this project: Anthony Sandrin, Arun Krishnan, Bashar Al-Rawi, Daisy Zhou, Iulia Tamas, Jacob Reiff, Koundinya Muppalla, Rajiv Desai, Ryan Armstrong, Sarah Tappon, Shashank Senapaty, Steven Rodrigues, Thomissa Comellas, Xiaonan Zhang, and Yuhuan Du.
http://engineeringjobs4u.co.uk/cape-technical-deep-dive-dropbox-tech-blog
Test a wide character to see if it's a whitespace character

Synopsis:

#include <wctype.h>

int iswspace( wint_t wc );

Library:

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The iswspace() function tests if the argument wc is a whitespace wide character of the class space. In the C locale, this class includes the space character, \f (form feed), \n (newline or linefeed), \r (carriage return), \t (horizontal tab), and \v (vertical tab).

Returns:

A nonzero value if the character is a member of the class space, or 0 otherwise. The result is valid only for wchar_t arguments and WEOF.
http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.lib_ref/topic/i/iswspace.html
Tests that come with pyfrc

Note: pyfrc's testing functionality has not yet been updated to work with RobotPy 2020.

pyfrc comes with testing functions that can be used to test basic functionality of just about any robot, including running through a simulated practice match. These generic test modules can be applied to wpilib.IterativeRobot and wpilib.SampleRobot based robots. The primary purpose of these tests is to run through your code and make sure that it doesn't crash. If you actually want to test your code, you need to write your own custom tests to tease out the edge cases.

To use these, add the following to a python file in your tests directory:

from pyfrc.tests import *

pyfrc.tests.basic.test_autonomous(control, fake_time, robot, gamedata)
    Runs autonomous mode by itself

pyfrc.tests.basic.test_operator_control(control, fake_time, robot)
    Runs operator control mode by itself

pyfrc.tests.basic.test_practice(control, fake_time, robot)
    Runs through the entire span of a practice match

Fuzz tests

The purpose of the fuzz 'test' is not exactly to 'do' anything; rather, it mashes the buttons and switches in completely random ways to try to find any possible control situations that would probably never normally come up but, given a bit of bad luck, could totally happen. Keep in mind that the results will be totally different every time you run this, so if you find an error, fix it -- but don't expect that you'll be able to duplicate it with this test. Instead, you should design a specific test that can trigger the bug, to ensure that you actually fixed it.

Docstring tests

pyfrc.tests.docstring_test.ignore_object(o, robot_path)
    Returns true if the object can be ignored
https://robotpy.readthedocs.io/projects/pyfrc/en/stable/testing.html
My article "Run esxcli Command in a Web Browser: Another ESXi Hack" drew quite a bit of interest from the community. Although it works, I am not quite satisfied with the fact that the real esxcfg-info.cgi has to be disabled to run the esxcli.cgi. There is actually another way to run the esxcli command in a browser that is less hacky, even though the user interface is not as good as the esxcli.cgi and there is no central page that links everything together. In other words, you cannot navigate the pages and have to type in individual URLs. Luckily, there is a pretty straightforward pattern in the URLs, so it's not a big deal.

This approach was first discovered by William Lam in one of his posts in 2010 with ESXi 4.1. With 5.1, the exact URLs may no longer work due to changes in the ESXi product, mainly in the esxcli command itself. Because it's hidden, VMware can change it without notifying anyone. The URL pattern, however, remains the same.

In ESXi 4.1, as William found out, the following URL points to the vms/vm namespace or managed object:

The new namespace for the same(?) object in ESXi 5.1/5.5 is:

Once you get the page as below, you can use it the same way as with other managed objects in MOB. Here, for example, you can simply click on a method link and get a popup page on which you can click "Invoke Method" for the return information. The invoke page can also be accessed directly with a URL carrying the method parameter, as follows:

Now, let's cover the URL pattern in a bit more detail. Everything is pretty much the same as in other MOB URLs. The key here is the string after "moid=", which is essentially the value of the ManagedObjectReference. The string has two parts: the first is "ha-cli-handler-", which is fixed for the esxcli-related managed objects. The second part is the so-called namespace. It has to be a leaf namespace (just think of the namespaces as a tree), one that shows no "Available Namespaces:" when you try out the command on ESXi.
The storage-filesystem, for example, is a leaf namespace but storage is not. When in doubt, just type in the esxcli command. Given a leaf namespace from the esxcli command, you can replace each space with "-" and compile a full URL. The URL for the storage filesystem is as follows:

While the type of the managed object isn't particularly important, the type name for the above storage filesystem is "VimEsxCLIstoragefilesystem". I am sure you can easily generalize a pattern out of it for the type names.
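Putting the pattern together mechanically: the moid is the fixed "ha-cli-handler-" prefix plus the leaf namespace with spaces turned into dashes. The helper below is just an illustration of that rule; the host name is a placeholder to adapt to your own host:

```python
def esxcli_mob_url(host, leaf_namespace):
    """Build the MOB URL for an esxcli leaf namespace (illustrative only)."""
    # spaces in the leaf namespace become dashes, appended to the fixed
    # "ha-cli-handler-" prefix that identifies esxcli managed objects
    moid = "ha-cli-handler-" + leaf_namespace.replace(" ", "-")
    return "https://%s/mob/?moid=%s" % (host, moid)

print(esxcli_mob_url("esxi.example.com", "storage filesystem"))
# https://esxi.example.com/mob/?moid=ha-cli-handler-storage-filesystem
```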
http://www.doublecloud.org/2013/12/run-esxcli-command-in-a-browser-hidden-but-probably-better-hack/
[ ] dhruba borthakur commented on HDFS-1435: ---------------------------------------- > time to load the image and the time to do saveNamespace to go up by a lot with this change? It might go up a little, and we can measure it and provide details here. > Provide an option to store fsimage compressed > --------------------------------------------- > > Key: HDFS-1435 > URL: > Project: Hadoop HDFS > Issue Type: Improvement > Components: name-node > Affects Versions: 0.22.0 > Reporter: Hairong Kuang > Assignee: Hairong Kuang > Fix For: 0.22.0 > > > Our HDFS has fsimage as big as 20G bytes. It consumes a lot of network bandwidth when secondary NN uploads a new fsimage to primary NN. > If we could store fsimage compressed, the problem could be greatly alleviated. > I plan to provide a new configuration hdfs.image.compressed with a default value of false. If it is set to be true, fsimage is stored as compressed. > The fsimage will have a new layout with a new field "compressed" in its header, indicating if the namespace is stored compressed or not. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
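If the patch landed as described, turning the feature on would be an ordinary HDFS configuration entry. The property name below is the one quoted in the issue description; treat the snippet as a sketch rather than the committed form:

```xml
<!-- hdfs-site.xml: store the fsimage compressed (per HDFS-1435) -->
<property>
  <name>hdfs.image.compressed</name>
  <value>true</value>
</property>
```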
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201010.mbox/%3C14740090.510471286001994842.JavaMail.jira@thor%3E
Hi Everyone, I’ve been teaching myself Ruby on Rails while trying to build a new website I want to launch in the next few weeks. I’ve gotten pretty far, but I’ve hit a wall this week and I can’t seem to get around it. I’ve been trying to implement a friends feature, allowing people to add, edit and delete friends as they see fit. I’ve been following the example set in the book “Practical Rails, Social Networking Sites” by Alan B., and it’s been a big help, but since the particulars of my application are different, I’ve had to do some adjusting of the code to make it fit. Right now, I have no problem logging in as one user, and viewing other users’ profiles. From there, I can tag a user as a “friend”. I can go to my friends list, and choose to edit their information, which is basically their XFN info. However, when I click on “Edit Friendship”, I get the above error, ActionController::RoutingError. I just can’t seem to get around it. This is the full error I’m receiving: ActionController::RoutingError in Friends#edit Showing app/views/friends/edit.rhtml where line #8 raised: user_friend_url failed to generate from {:user_id=>#<User:0x2380ef8 @attributes={“last_login_at”=>nil, “authorization_token”=>nil, “updated_at”=>“2007-12-12 14:44:02”, “zip”=>“000000”, ”, :friend_id=>#<User: 0x23005f0 @attributes={“last_login_at”=>nil, “authorization_token”=>nil, “updated_at”=>“2007-12-10 18:41:07”, “zip”=>“00000”, ”, {:controller=>“friends”, :action=>“show”}, diff: {:user_id=>#<User: 0x2380ef8 @attributes={“last_login_at”=>nil, “authorization_token”=>nil, “updated_at”=>“2007-12-12 14:44:02”, “zip”=>“00000”, ”, @attributes={“last_login_at”=>nil, “authorization_token”=>nil, “updated_at”=>“2007-12-10 18:41:07”, “zip”=>“02338”, ”, Extracted source (around line #8): 5: 6: <% form_for(:friendship, 7: :url => user_friend_path(:user_id => @logged_in_user, 8: :friend_id => @friend), 9: :html => { :multipart => true, :method => :put}) do |f| %> 10: 11: Define your relationship with <%= 
@friend.username %> if I change line 7 and 8 to read @logged_in_user.id and @friend.id, the error message is much shorter, but still the same: user_friend_url failed to generate from {:user_id=>1, :controller=>“friends”, :action=>“show”, :friend_id=>2}, expected: {:controller=>“friends”, :action=>“show”}, diff: {:user_id=>1, :friend_id=>2} This is the edit function from my friends_controller: def edit @user = User.find(logged_in_user) @friendship = @user.friendships.find_by_friend_id(params[:id]) @friend = @friendship.friend if @friendship if !@friendship redirect_to user_friend_path(:user_id => logged_in_user, :id => params[:id]) end end in my friendship model file, I have this as well: belongs_to :user belongs_to :friend, :class_name => ‘User’, :foreign_key => ‘friend_id’ in my user’s model, I have this: has_many :friendships has_many :friends, :through => :friendships, :class_name => ‘User’ in my routes.rb file, I have this: map.resources :users, :member => {:enable => :put} do |users| users.resources :roles users.resources :friends users.resources :tags, :name_prefix => ‘user_’, :controller => ‘user_tags’ end I did notice, that if in the form_for part of my edit template, I change the user_friend_path to user_friends_path, the page will load, but if I go to save the changes, I get this error message: Unknown action No action responded to 1 with this link in the address bar: and this in the terminal: Processing UsersController#1 (for 127.0.0.1 at 2007-12-14 14:28:48) [PUT] Session ID: 003df0e09ed24a6fe2e93e946a8ae9e7 Parameters: {“commit”=>“Save”, “friendship”=>{“xfn_met”=>“1”, “xfn_friendship”=>“xfn_contact”, “xfn_family”=>“xfn_child”, “xfn_sweetheart”=>“0”, “xfn_geographical”=>“xfn_coresident”, “xfn_crush”=>“0”, “xfn_date”=>“0”, “xfn_muse”=>“1”, “xfn_coworker”=>“1”, “xfn_colleague”=>“0”}, “_method”=>“put”, “action”=>“1”, “id”=>“friends”, “controller”=>“users”, “friend_id”=>“2”} ActionController::UnknownAction (No action responded to 1): now, if above should be 
edit_friends_path instead of edit_friend_path, the issue above is that it’s using the user’s controller instead of the friend’s controller. but I’m not sure how to resolve that. I know this is long, and I really appreciate any help you could provide, if you need any more code posted, I’d be happy to. I’ve been working on this since wednesday, and I just can’t wrap my head around it.
https://www.ruby-forum.com/t/actioncontroller-routingerror-problem/123197
In C programming, the <stdlib.h> calloc function allocates memory on the heap for an array of objects of a specific size. The declaration of the function is given below.

void *calloc( size_t mem, size_t size );

Parameters:

mem - The number of objects for which memory is to be allocated.
size - The size of each object for which memory is to be allocated.

Return type:

void* - A pointer to the memory allocated.

Some points to note:

i) If the memory allocation fails, calloc returns NULL. So whenever you call this function you must test whether the allocation succeeded or failed.
ii) After the allocation, the space is all initialized to 0.
iii) If either argument passed is 0, calloc may return NULL or a pointer that must not be dereferenced; the exact behavior is implementation-defined.
iv) If this function is called, do not forget to call the free function to deallocate the storage; if you forget to do so, the memory will leak.

Code example

char *mem = (char*)calloc( 4, sizeof(char) );  /* works fine */
void *imem = calloc( 9, sizeof(int) );
free(imem);  /* do not forget */
free(mem);   /* do not forget */

In the first line the pointer returned by calloc is cast to char* type, so assigning the returned pointer to the mem pointer is totally fine. In the second line no cast is needed because imem is of void* type.

Link : C free stdlib.h

A more exhaustive code example using calloc is given below.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main( )
{
    int i[4] = { 2, 4, 6, 8 };   /* sample input data */
    char c[] = "chess";          /* sample input string */

    int *imem = (int*)calloc(4, sizeof(int));
    void *cmem = calloc(5, strlen(c) + 1);

    memcpy(imem, i, sizeof(int) * 4);   /* copying the content of 'i' to the memory pointed to by imem */
    memcpy(cmem, c, strlen(c) + 1);     /* copying the content of 'c' to the memory pointed to by cmem */

    printf("%i %i ", imem[0], imem[1]);   /* accessing the content stored in the 'imem' memory */
    printf("\n%c %c ", *((char*)cmem), *(((char*)cmem) + 1));   /* accessing the content stored in the 'cmem' memory */

    free(imem);
    free(cmem);
    getchar();
    return 0;
}

Output in VS,

2 4
c h

Note :

i) imem is of int* type, so we can directly access the content of the memory it points to using the array syntax (the subscript []).
ii) Since cmem is of void* type, we have to cast it to char* first before accessing the content of the memory. This is done by adding (char*) before the cmem pointer.

Link : C what is casting?
https://corecplusplustutorial.com/c-programming-calloc-stdlib-h/
SMIME_read_CMS.3ossl - Man Page

parse S/MIME message

Synopsis

#include <openssl/cms.h>

CMS_ContentInfo *SMIME_read_CMS_ex(BIO *bio, int flags, BIO **bcont, CMS_ContentInfo **cms);
CMS_ContentInfo *SMIME_read_CMS(BIO *in, BIO **bcont);

Description

SMIME_read_CMS() parses a message in S/MIME format.

SMIME_read_CMS_ex() is similar to SMIME_read_CMS(), but optionally a previously created cms CMS_ContentInfo object can be supplied, as well as some flags. To create a cms object use CMS_ContentInfo_new_ex(3). If the flags argument contains CMS_BINARY, then the input is assumed to be in binary format and is not translated to canonical form. If in addition SMIME_ASCIICRLF is set, then the binary input is assumed to be followed by CR and LF characters, else only by an LF character. If flags is 0 and cms is NULL, then it is identical to SMIME_read_CMS().

Notes

If *bcont is not NULL then the message is cleartext signed. *bcont can then be passed to CMS_verify() with the CMS_DETACHED flag set.

Bugs

The MIME parser used by SMIME_read_CMS() is somewhat primitive. While it will handle most S/MIME messages, more complex compound formats may not be handled.

Return Values

SMIME_read_CMS_ex() and SMIME_read_CMS() return a valid CMS_ContentInfo structure, or NULL if an error occurred. The error can be obtained from ERR_get_error(3).

See Also

ERR_get_error(3), CMS_sign(3), CMS_verify(3), CMS_encrypt(3), CMS_decrypt(3)

History

The function SMIME_read_CMS_ex() was added in OpenSSL 3.0.

Licensed under the Apache License 2.0 (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution.

Referenced By

The man page SMIME_read_CMS_ex.3ossl(3) is an alias of SMIME_read_CMS.3ossl(3).
https://www.mankier.com/3/SMIME_read_CMS.3ossl
Question : I have a data frame with alpha-numeric keys which I want to save as a csv and read back later. For various reasons I need to explicitly read this key column as a string format, I have keys which are strictly numeric or even worse, things like: 1234E5 which Pandas interprets as a float. This obviously makes the key completely useless. The problem is when I specify a string dtype for the data frame or any column of it I just get garbage back. I have some example code here: df = pd.DataFrame(np.random.rand(2,2), index=['1A', '1B'], columns=['A', 'B']) df.to_csv(savefile) The data frame looks like: A B 1A 0.209059 0.275554 1B 0.742666 0.721165 Then I read it like so: df_read = pd.read_csv(savefile, dtype=str, index_col=0) and the result is: A B B ( < Is this a problem with my computer, or something I’m doing wrong here, or just a bug? Answer #1: Update: this has been fixed: from 0.11.1 you passing str/ np.str will be equivalent to using object. Use the object dtype: In [11]: pd.read_csv('a', dtype=object, index_col=0) Out[11]: A B 1A 0.35633069074776547 0.745585398803751 1B 0.20037376323337375 0.013921830784260236 or better yet, just don’t specify a dtype: In [12]: pd.read_csv('a', index_col=0) Out[12]: A B 1A 0.356331 0.745585 1B 0.200374 0.013922 but bypassing the type sniffer and truly returning only strings requires a hacky use of converters: In [13]: pd.read_csv('a', converters={i: str for i in range(100)}) Out[13]: A B 1A 0.35633069074776547 0.745585398803751 1B 0.20037376323337375 0.013921830784260236 where 100 is some number equal or greater than your total number of columns. It’s best to avoid the str dtype, see for example here. Answer #2: Like Anton T said in his comment, pandas will randomly turn object types into float types using its type sniffer, even you pass dtype=object, dtype=str, or dtype=np.str. 
Since you can pass a dictionary of functions where the key is a column index and the value is a converter function, you can do something like this (e.g. for 100 columns):

pd.read_csv('some_file.csv', converters={i: str for i in range(0, 100)})

You can even pass range(0, N) for N much larger than the number of columns if you don't know how many columns you will read.

Answer #3: Use a converter that applies to any column if you don't know the columns beforehand:

import pandas as pd

class StringConverter(dict):
    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        return str

    def get(self, key, default=None):
        return str

pd.read_csv(file_or_buffer, converters=StringConverter())

Answer #4: Many of the above answers are fine, but none is very elegant or universal. If you want to read all of the columns as strings you can use the following construct without caring about the number of columns:

from collections import defaultdict
import pandas as pd

pd.read_csv(file_or_buffer, converters=defaultdict(lambda: str))

The defaultdict will return str for every index passed into converters.
https://discuss.dizzycoding.com/pandas-reading-csv-as-string-type/
On 06/03/2011 02:47 PM, Brian C. Lane wrote: > Separate the memory check logic from the act of displaying dialogs or > switching to text mode. This makes it easier to skip it when --nomemcheck > is passed. > > Add --nomemcheck which continues the install even if there isn't enough > memory. > --- > anaconda | 59 +++++++++++++++++++++++++++++++++++++++-------------------- > 1 files changed, 39 insertions(+), 20 deletions(-) > > diff --git a/anaconda b/anaconda > index 21ae8fe..ff6eda1 100755 > --- a/anaconda > +++ b/anaconda > @@ -219,6 +219,7 @@ def parseOptions(argv = None): > op.add_option("--dogtail", + total_ram = int(isys.total_memory() / 1024) > + text_ram = int((isys.MIN_RAM) / 1024) > + gui_ram = text_ram + int(isys.GUI_INSTALL_EXTRA_RAM / 1024) > + > + return (total_ram, > + (text_ram<= total_ram, text_ram), > + (gui_ram<= total_ram, gui_ram) > + ) I'm not a big fan of passing back bare tuples, honestly. Why not return a dict with names here, so it's self descriptive? > + (total_ram, (text_ok, text_ram), (gui_ok, gui_ram)) = how_much_ram() Because that's pretty nauseating. Aside from that, I don't really have a big problem with either patch, though I'm still not convinced this won't produce more bs bug reports. -- Peter
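Peter's suggestion of returning a self-descriptive dict instead of bare tuples might look something like the sketch below. The function name and the shape of the calculation mirror the patch, but the parameterization and concrete values are illustrative, not the committed code:

```python
def how_much_ram(total_kb, min_ram_kb, gui_extra_kb):
    """Return RAM-check results as named fields instead of bare tuples."""
    total_ram = total_kb // 1024               # MiB available
    text_ram = min_ram_kb // 1024              # MiB needed for text mode
    gui_ram = text_ram + gui_extra_kb // 1024  # MiB needed for GUI mode
    return {
        "total": total_ram,
        "text_ok": text_ram <= total_ram, "text_needed": text_ram,
        "gui_ok": gui_ram <= total_ram, "gui_needed": gui_ram,
    }

# a machine with 2 GiB of RAM, a 512 MiB text minimum, 256 MiB GUI extra
info = how_much_ram(2 * 1024 * 1024, 512 * 1024, 256 * 1024)
print(info["gui_ok"])  # True
```

Callers then read `info["gui_ok"]` instead of unpacking a nested tuple, which sidesteps the readability complaint in the review.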
http://www.redhat.com/archives/anaconda-devel-list/2011-June/msg00035.html
Ext 3.3.1 knows nothing of IE9

It has no ability to detect IE9, so it thinks you are running a pre-version-6 browser, to a certain extent. In ext-base-debug.js, if you change the following line it appears to work:

Code:
isIE8 = isIE && (check(/msie 8/) && docMode != 7),

to:

Code:
isIE8 = isIE && (check(/msie [89]/) && docMode != 7),

You obviously need to make sure you are including the debug file, and you will need to re-minify for production.

Also, Ext JS 3.4.0 is released, with IE9-specific issues and fixes. Most of the issues seem to be addressed in there!

Override

Hello, I was facing the same issue on a TreePanel. IE9 builds its DOM the way Firefox and Chrome do, so the search for the tree-node-id (tree elements) doesn't retrieve any elements. This is how I fixed the problem: I overrode the method getAttributeNS of Ext.Element:

Code:
Ext.override(Ext.Element, {
    /**
     * Returns the value of a namespaced attribute from the element's underlying DOM node.
     * @param {String} namespace The namespace in which to look for the attribute
     * @param {String} name The attribute name
     * @return {String} The attribute value
     */
    getAttributeNS : Ext.isIE && !Ext.isIE9 ?
        function(ns, name){
            var d = this.dom;
            var type = typeof d[ns+":"+name];
            if(type != 'undefined' && type != 'unknown'){
                return d[ns+":"+name];
            }
            return d[name];
        } :
        function(ns, name){
            var d = this.dom;
            return d.getAttributeNS(ns, name) || d.getAttribute(ns+":"+name) || d.getAttribute(name) || d[name];
        }
});

Has anyone come up with a solution for this? I am using ExtJS 3.2.1 with IE9 and I hit the same problem. I tried the suggestion above of overriding getAttributeNS of Ext.Element. Any info would be appreciated...
cheers, Pawel

I fixed the problem in the getNode method of Ext.tree.TreeEventModel:

Code:
Ext.isIE9 = Ext.isIE && (/msie 9/).test(navigator.userAgent.toLowerCase());
if (Ext.isIE9) {
    Ext.override(Ext.tree.TreeEventModel, {
        getNode: function (e) {
            var t;
            if (t = e.getTarget('.x-tree-node-el', 10)) {
                //var id = Ext.fly(t, '_treeEvents').getAttribute('tree-node-id', 'ext'); // BUG !!!
                var id = e.getTarget('.x-tree-node-el', 10).getAttribute("ext:tree-node-id"); // FIX !!!
                if (id) {
                    return this.tree.getNodeById(id);
                }
            }
            return null;
        }
    });
}

cheers!

Thank you for reporting this bug. We will make it our priority to review this report.
http://www.sencha.com/forum/showthread.php?115849-OPEN-1434-ExtJS-TreePanel-and-IE9&p=918037&viewfull=1
I would like to calculate NN model certainty/confidence (see What my deep model doesn't know): when the NN tells me an image represents "8", I would like to know how certain it is. Is my model 99% certain it is "8", or is it 51% certain it is "8" but it could also be "6"? Some digits are quite ambiguous and I would like to know for which images the model is just "flipping a coin".

I have found some theoretical writings about this but I have trouble putting it into code. If I understand correctly, I should evaluate a testing image multiple times while "killing off" different neurons (using dropout) and then...?

Working on the MNIST dataset, I am running the following model:

from keras.models import Sequential
# ...
model.add(Dropout(0.20))
# ...

Question: how should I predict with this model so that I also get its certainty about the predictions? I would appreciate some practical examples (preferably in Keras, but any will do). I am looking for an example of how to get certainty using the method outlined by Yurin Gal (or an explanation of why some other method yields better results).

You can implement a dropout approach to measure uncertainty. You should also apply dropout during test time. For example:

import keras.backend as K

f = K.function([model.layers[0].input, K.learning_phase()],
               [model.layers[-1].output])

You can use this function as an uncertainty predictor:

import numpy

def predict_with_uncertainty(f, x, n_iter=10):
    # run the model n_iter times with dropout active (learning_phase = 1)
    results = []
    for _ in range(n_iter):
        results.append(f([x, 1])[0])
    result = numpy.array(results)

    prediction = result.mean(axis=0)
    uncertainty = result.var(axis=0)
    return prediction, uncertainty
https://intellipaat.com/community/8911/how-to-calculate-prediction-uncertainty-using-keras
insque() Insert an element into a doubly linked queue Synopsis: #include <search.h> void insque( void *elem, void *pred); Arguments: - elem - A pointer to the element you want to insert. - pred - A pointer to the previous element in the list, or NULL if you want to initialize the head of the list. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The insque() function inserts the element pointed to by elem into a doubly linked queue immediately after the element pointed to by pred. The queue can be either circular or linear. The first two members of the elements must be pointers to the same type of structure; the names of the members don't matter. The first member of the structure is a forward pointer to the next element in the queue, and the second is a backward pointer to the previous element. The application can define any additional members in the structure. If the queue is linear, the queue is terminated with NULL pointers. If the queue is to be used as a linear list, invoking insque( &element, NULL), where element is the initial element of the queue, initializes the forward and backward pointers of the element to NULL. If you intend to use the queue as a circular list, initialize the forward pointer and the backward pointer of the initial element of the queue to the element's own address. You can use remque() to remove elements from the queue.
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/i/insque.html
When you edit a site in FrontPage and specify the Register tag for a user control as <%@ Register TagPrefix="AO" TagName="Header" Src="~/usercontrol.ascx" %>, you will get the "Assembly, TagPrefix, and Namespace are required on Register directives" error message.

This behavior is by design, a consequence of how FrontPage and SharePoint collaborate. When we edit a SharePoint site through FrontPage, the details are stored in the SharePoint database; this is the so-called ghosting/unghosting issue. When we specify a "virtual directory" path in the Register tag, FrontPage may not be able to fetch the .ascx control file details and store them in the database. So, by design, when you edit the page in FrontPage you cannot specify a "virtual directory" path in the Register tag. In a regular ASP.NET application everything works; the issue appears only when SharePoint comes into the picture. If you edit the default.aspx page (C:\program files\common files\microsoft shared\web server extensions\60\template\1033\sts) in Notepad, this will work, because the details are not stored in the database.

Workaround 1

- Modify the default.aspx page (in the sts folder, or in the other site definition folders where you want to specify the control) in Notepad.
- Place the Register tag and the other tags required to specify the control.

Implication: All sites will be affected, since we are modifying the site definition files. If you want to apply the change to a single site only, this workaround may not be feasible; you may have to edit the page in FrontPage. In that case, you may follow the second workaround.

Workaround 2

- Create a signed web control library, and install the assembly into the GAC.
- Open the site in FrontPage, and add the Register tag with the Assembly, Namespace, TagPrefix, and PublicKeyToken details.
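The Register directive that Workaround 2 calls for would carry the full assembly identity rather than a Src path. All names and the token below are placeholders, not values from a real control library:

```aspx
<%@ Register TagPrefix="AO" Namespace="MyCompany.WebControls"
    Assembly="MyCompany.WebControls, Version=1.0.0.0, Culture=neutral,
              PublicKeyToken=0123456789abcdef" %>
<AO:Header id="PageHeader" runat="server" />
```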
http://blogs.msdn.com/b/karthick/archive/2006/12/05/wss-2-0-getting-assembly-tagprefix-and-namespace-are-required-on-register-directives-error-message.aspx
Isn't it time you learned how to JBehave?

Level: Introductory

Andrew Glover (aglover@stelligent.com), President, Stelligent Incorporated

18 Sep 2007

Test-driven development (TDD) is a great idea in practice, but some developers just can't get over the conceptual leap associated with that word test. In this article, Andrew Glover shows you a more natural way to integrate the momentum of TDD into your programming practice. Get started with behavior-driven development (BDD) (via JBehave) and see for yourself what happens when you focus on program behaviors, rather than outcomes.

Developer testing is clearly a good thing. Testing done early, say, as you write your code, is an even better thing, especially when it comes to code quality. Write your tests first and you win the blue ribbon. The added momentum of being able to examine the behavior of your code and debug it preemptively is undeniably high-speed. Even knowing this, we're nowhere near the critical mass that would make writing tests before writing code a standard practice.

Just as TDD was an evolutionary next step extending from Extreme Programming, which pushed unit-testing frameworks into the limelight, evolutionary leaps are waiting to be made from the foundation that is TDD. This month, I invite you to join me as I make the evolutionary leap from TDD to its slightly more intuitive cousin: behavior-driven development (BDD).

Behavior-driven development

While test-first programming works for some people, it doesn't work for everyone. For every application developer who avidly embraces TDD, there are many others who actively resist it. Even with the abundance of testing frameworks like TestNG, Selenium, and FEST, the reasons for not testing code are manifold. Two common reasons for not doing TDD are "There's not enough time for testing" and "The code is too complex and hard to test."
Another hurdle in test-first programming is the test-first concept itself. Many of us view testing as a tactile activity, more concrete than abstract. Experience tells us that it isn't possible to test something that doesn't already exist. For some developers, given this conceptual framework, the idea of testing first is an oxymoron. But what if, rather than thinking in terms of writing tests and how to test things, you thought about behavior instead? By behavior, I mean how an application should behave, in essence, its specification. As it turns out, you already think this way. We all do. Watch.

Frank: What's a stack?

Linda: It's a data structure that collects objects in a first in, last out (or last in, first out) manner. It usually has an API with methods like push() and pop(). Sometimes you'll see a peek() method, as well.

Frank: What does push() do?

Linda: push() takes an input object, say foo, and places it into an internal container (like an array). push() usually doesn't return anything either.

Frank: What if I push() two things, like foo and then bar?

Linda: The second object, bar, should be on top of the conceptual stack (containing at least two objects), so that if you call pop(), bar should come off instead of the first object, which, in your case, is foo. If you called pop() again, then foo should be returned and the stack should be empty (assuming there wasn't anything in it before you added the two objects).

Frank: So pop() removes the most recent item placed into the stack?

Linda: Yes, pop() should remove the top item (assuming there are items to remove). peek() follows the same rule, but the object isn't removed. peek() should leave the top item on the stack.

Frank: What if I call pop() without having pushed anything?

Linda: pop() should throw an exception indicating that nothing has been pushed yet.

Frank: What if I push() null?

Linda: The stack should throw an exception because null isn't a valid value to push().
Notice anything particular about this conversation (aside from the fact that Frank didn't major in computer science)? Nowhere was the word "test" used. The word "should" slipped in here and there quite naturally, however.

Doing what comes naturally

Annotations make it possible to practice BDD using JUnit and TestNG. I find it more interesting to use a BDD framework like JBehave, which provides features for defining behavior classes, such as an expectation framework that facilitates a more literate style of programming.

BDD isn't anything new or revolutionary. It's just an evolutionary offshoot of TDD in which the word "test" is replaced by the word "should." Semantics aside, many people find the concept of should a much more natural development driver than the concept of testing. Thinking in terms of behavior (shoulds) somehow paves the way into writing specification classes first, which, in turn, can be a very efficient implementation driver. Using the conversation between Frank and Linda as a basis, let's see how BDD drives development in just the way that TDD was intended to promote.

JBehave

JBehave is a BDD framework for the Java™ platform inspired by the xUnit paradigm. As you can probably guess, JBehave stresses the word should rather than test. Just like JUnit, you can run JBehave classes in your favorite IDE and via your preferred build platform, such as Ant. JBehave lets me create a behavior class much like I would in JUnit; however, in the case of JBehave, I don't need to extend from any particular base class, and all my behavior methods need to start with should rather than test, as shown in Listing 1.
Listing 1. The StackBehavior class

public class StackBehavior {
  public void shouldThrowExceptionUponNullPush() throws Exception{}
  public void shouldThrowExceptionUponPopWithoutPush() throws Exception{}
  public void shouldPopPushedValue() throws Exception{}
  public void shouldPopSecondPushedValueFirst() throws Exception{}
  public void shouldLeaveValueOnStackAfterPeep() throws Exception{}
}

The methods defined in Listing 1 all start with should, and they all create a human-readable sentence. The resulting StackBehavior class describes many of the features of the stack in the conversation between Frank and Linda.

For instance, Linda stated that a stack should throw an exception if a user attempted to place null onto it. Check out the first behavior method in the StackBehavior class: It's called shouldThrowExceptionUponNullPush(). The other methods follow the same pattern. This descriptive naming pattern (which is by no means unique to JBehave or BDD) makes it possible to state a failing behavior in a human-readable manner, as you'll see shortly.

Speaking of shouldThrowExceptionUponNullPush(), how would you verify this behavior? It seems to me that you'd first need a push() method on a Stack class, which is easy to define.

Listing 2. The barest beginnings of a Stack class

public class Stack<E> {
  public void push(E value) {}
}

As you can see, I've coded the minimum amount of a stack so I can start fleshing out the required behavior first. As Linda stated, the behavior is simple: If someone calls push() with a null value, the stack should throw an exception. Now look at how I've defined this behavior in Listing 3.

Listing 3. Ensuring that push() rejects null

public void shouldThrowExceptionUponNullPush() throws Exception{
  final Stack<String> stStack = new Stack<String>();
  Ensure.throwsException(RuntimeException.class, new Block(){
    public void run() throws Exception {
      stStack.push(null);
    }
  });
}

Great expectations and overrides

A few things are happening in Listing 3 that are unique to JBehave, so let me explain.
First, I create an instance of the Stack class and limit it to String types (via Java 5 generics). Next, I use JBehave's expectation framework to essentially model my desired behavior. The Ensure class is analogous to JUnit's or TestNG's Assert type; however, it adds a series of methods that facilitate a more readable API (this is often called literate programming). In Listing 3, I've ensured that a RuntimeException will be thrown if push() is called with null.

JBehave also introduces a Block type, which is implemented by overriding the run() method with your desired behavior. Internally, if JBehave finds that your desired exception type isn't thrown (and, therefore, caught), a failure state is generated. You may recall a similar pattern of overriding a convenience class in my previous article about unit testing Ajax with the Google Web Toolkit; in that case, the override was done with GWT's Timer class.

If I run the behavior from Listing 3 now, I should see a failure. As currently coded, the push() method doesn't do anything, so there is no way an exception will be generated, as you can see from the output in Listing 4.

Listing 4. A failing behavior

1) StackBehavior should throw exception upon null push:
VerificationException: Expected: object not null but got: null:

The sentence in Listing 4, "StackBehavior should throw exception upon null push," mimics the behavior's name (shouldThrowExceptionUponNullPush()) along with the name of the class. Essentially, JBehave is reporting that it didn't get anything when it ran the desired behavior. Of course, my next step is to make that behavior pass, which I've done by checking for null in Listing 5.

Listing 5. Checking for null in push()

public void push(E value) {
  if(value == null){
    throw new RuntimeException("Can't push null");
  }
}

When I rerun my behavior, everything is good to go, as shown in Listing 6.

Listing 6. Success

Time: 0.021s
Total: 1. Success!
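To make the failure in Listing 4 less mysterious, it helps to see what an expectation like Ensure.throwsException must do internally: run the block, catch what comes out, and pass only if the expected exception type was actually thrown. The sketch below is illustrative plain Java, not JBehave's actual implementation; the class and method names are my own.

```java
// Illustrative sketch of a throwsException-style expectation: run a
// block of code and fail unless it throws the expected exception type.
// This is NOT JBehave's real code, just the idea behind it.
public class ExpectException {

    public interface Block {
        void run() throws Exception;
    }

    // Returns quietly only if 'block' throws 'expected' (or a subclass);
    // otherwise throws AssertionError, mimicking a verification failure.
    public static void throwsException(Class<? extends Exception> expected, Block block) {
        try {
            block.run();
        } catch (Exception actual) {
            if (expected.isInstance(actual)) {
                return;  // expected exception thrown: behavior verified
            }
            throw new AssertionError("Expected " + expected.getName() + " but got " + actual);
        }
        // the block completed normally, so the expectation failed
        throw new AssertionError("Expected " + expected.getName() + " but nothing was thrown");
    }
}
```

Seen this way, Listing 4's failure is exactly the "nothing was thrown" branch: the empty push() completes quietly, so the expectation reports a verification failure.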
Behavior drives development

Doesn't the output in Listing 6 look similar to JUnit's output? That's probably not a coincidence, is it? As mentioned, JBehave is modeled after the xUnit paradigm and even supports fixtures via setUp() and tearDown(). Given that I'm probably going to be using a Stack instance throughout my behavior class, I might as well push (no pun intended) that logic into a fixture, as I've done in Listing 7. Note that JBehave follows the same fixture contract that JUnit follows; that is, it will run setUp() and tearDown() for every behavior method.

Listing 7. A fixture for the Stack instance

public class StackBehavior {
  private Stack<String> stStack;

  public void setUp() {
    this.stStack = new Stack<String>();
  }
  //...
}

Moving on to the next behavior method, shouldThrowExceptionUponPopWithoutPush() indicates I'll have to ensure behavior similar to that of shouldThrowExceptionUponNullPush() from Listing 3. As you can see in Listing 8, there isn't any particular magic going on, or is there?

Listing 8. Ensuring that pop() without push() throws

public void shouldThrowExceptionUponPopWithoutPush() throws Exception{
  Ensure.throwsException(RuntimeException.class, new Block() {
    public void run() throws Exception {
      stStack.pop();
    }
  });
}

As you've probably figured out, Listing 8 won't actually compile at this point because pop() hasn't been written yet. But before I start to write pop(), let's think a few things through.

Ensuring behavior

Technically, I could just implement pop() to only throw an exception at this point, regardless of calling order. But going down this behavior route encourages me to think about an implementation that supports my desired specification. In this case, ensuring that pop() throws an exception if push() hasn't been called (or, logically, if the stack is empty) means that the stack has state. And as Linda mused earlier, a stack usually has an "internal container" that actually holds items.
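Incidentally, the contract Linda described is the same one the JDK's own collections expose. For comparison (this is standard java.util API, separate from the article's hand-built Stack), java.util.ArrayDeque, the usual modern replacement for the legacy java.util.Stack, behaves like this:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The same LIFO contract, as implemented by the JDK's ArrayDeque:
// push places an item on top, pop removes the most recent item,
// and peek looks at the top item without removing it.
public class DequeDemo {
    public static String demo() {
        Deque<String> stack = new ArrayDeque<String>();
        stack.push("foo");
        stack.push("bar");
        String top = stack.peek();    // "bar": still on the stack
        String first = stack.pop();   // "bar": last in, first out
        String second = stack.pop();  // "foo"
        return top + "," + first + "," + second;  // "bar,bar,foo"
    }
}
```

ArrayDeque even mirrors the decisions this article arrives at: pop() on an empty deque throws (a NoSuchElementException rather than a RuntimeException), push(null) is rejected (with a NullPointerException), and peek() on an empty deque quietly returns null.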
Accordingly, I can create an ArrayList for the Stack class that holds values passed into the push() method, as shown in Listing 9.

Listing 9. An internal container for the Stack class

public class Stack<E> {
  private ArrayList<E> list;

  public Stack() {
    this.list = new ArrayList<E>();
  }
  //...
}

Now I can code the behavior for the pop() method, which ensures that if the stack is logically empty, an exception will be thrown.

Listing 10. A first cut at pop()

public E pop() {
  if(this.list.size() > 0){
    return null;
  }else{
    throw new RuntimeException("nothing to pop");
  }
}

When I run the behavior in Listing 8, things work as expected: Because the stack isn't holding any values (hence, its size isn't greater than zero), an exception is thrown.

The next behavior method is called shouldPopPushedValue(), which turns out to be easy to specify. I simply push() a value ("test") and ensure that when I call pop(), that same value is returned.

Listing 11. Ensuring that pop() returns the pushed value

public void shouldPopPushedValue() throws Exception{
  stStack.push("test");
  Ensure.that(stStack.pop(), m.is("test"));
}

Dial 'M' for Matcher

You might note that the code in Listing 12 isn't exactly elegant. The m in Listing 11 does affect the readability slightly ("ensure that pop's value m (what the?) is test"). You can avoid using the UsingMatchers type by extending a special base class (UsingMiniMock) provided by JBehave. This way, the last line in Listing 11 becomes Ensure.that(stStack.pop(), is("test")), which is a bit more readable.

In Listing 11, I ensure that pop() returns the value "test". In the course of using JBehave's Ensure class, you'll often find that you need a richer way to specify expectations. JBehave meets this need by offering a Matcher type for implementing rich expectations.
In my case, I chose to reuse JBehave's UsingMatchers type (the m variable in Listing 11) so I could use methods like is(), and(), or(), and a host of other neat mechanisms for building a more literate style of expectations. The m variable from Listing 11 is a static member of the StackBehavior class, as shown in Listing 12.

Listing 12. Declaring the m variable

private static final UsingMatchers m = new UsingMatchers(){};

With the new behavior method coded in Listing 11, it's time to run it, but doing so indicates a failure, as shown in Listing 13.

Listing 13. Another failure

Failures: 1.
1) StackBehavior should pop pushed value:
java.lang.RuntimeException: nothing to pop

What happened? It turns out that my push() method wasn't finished. Back in Listing 5, I coded the bare minimum implementation to get my behavior to work. Now it's time to finish the job by actually adding the pushed value into the internal container (if the value isn't null). I do this in Listing 14.

Listing 14. Finishing push()

public void push(E value) {
  if(value == null){
    throw new RuntimeException("Can't push null");
  }else{
    this.list.add(value);
  }
}

But wait: when I rerun the behavior, it still fails!

Listing 15. Still failing

1) StackBehavior should pop pushed value:
VerificationException: Expected: same instance as <test> but got: null:

At least the failure in Listing 15 is different from Listing 13. In this case, rather than an exception being thrown, the "test" value isn't being found; null is being popped. Looking closely at Listing 10 reveals the issue: I initially coded the pop() method to return null if the internal container had anything in it. Well, that's easy to fix.

Listing 16. Fixing pop() to return a value

public E pop() {
  if(this.list.size() > 0){
    return this.list.remove(this.list.size());
  }else{
    throw new RuntimeException("nothing to pop");
  }
}

But now if I rerun the behavior, I get a new failure.
Listing 17. An off-by-one failure

1) StackBehavior should pop pushed value:
java.lang.IndexOutOfBoundsException: Index: 1, Size: 1

A close reading of the information in Listing 17 uncovers the issue: I need to account for zero-based indexing when dealing with an ArrayList.

Listing 18. Fixing the off-by-one error in pop()

public E pop() {
  if(this.list.size() > 0){
    return this.list.remove(this.list.size()-1);
  }else{
    throw new RuntimeException("Nothing to pop");
  }
}

The logic of stacks

Thus far, I've managed to implement push() and pop() in such a manner as to permit a number of behavior methods to pass. I've yet to tackle the meat of the stack, however, which is the logic associated with multiple push()es and pop()s, along with throwing in an occasional peek(). First, I'll ensure that the basic algorithm of my stack (first in, last out) is sound, via the shouldPopSecondPushedValueFirst() behavior.

Listing 19. Ensuring last in, first out

public void shouldPopSecondPushedValueFirst() throws Exception{
  stStack.push("test 1");
  stStack.push("test 2");
  Ensure.that(stStack.pop(), m.is("test 2"));
}

The code in Listing 19 works as planned, so I'll implement another behavior method (in Listing 20) to ensure that using pop() twice shows proper behavior, as well.

Listing 20. Popping values in reverse order

public void shouldPopValuesInReverseOrder() throws Exception{
  stStack.push("test 1");
  stStack.push("test 2");
  Ensure.that(stStack.pop(), m.is("test 2"));
  Ensure.that(stStack.pop(), m.is("test 1"));
}

Moving on, I'd like to ensure that peek() works as intended. As Linda said, peek() follows the same rules as pop(), but "should leave the top item on the stack." Accordingly, I've implemented the behavior for the shouldLeaveValueOnStackAfterPeep() method in Listing 21.

Listing 21. Peeking should leave the value on the stack

public void shouldLeaveValueOnStackAfterPeep() throws Exception{
  stStack.push("test 1");
  stStack.push("test 2");
  Ensure.that(stStack.peek(), m.is("test 2"));
  Ensure.that(stStack.pop(), m.is("test 2"));
}

Because peek() hasn't been defined yet, the code in Listing 21 won't compile.
In Listing 22, I've defined a bare-bones implementation of peek().

Listing 22. A bare-bones peek()

public E peek() {
  return null;
}

Now the StackBehavior class will compile, but it still won't run.

Listing 23. Peek fails

1) StackBehavior should leave value on stack after peep:
VerificationException: Expected: same instance as <test 2> but got: null:

Logically, peek() doesn't remove the item from the internal collection; it basically just passes a pointer to it. Consequently, I use the get() method on ArrayList, rather than remove(), as shown in Listing 24.

Listing 24. Implementing peek() with get()

public E peek() {
  return this.list.get(this.list.size()-1);
}

Nothing to see here

Rerunning the behavior from Listing 21 now yields a passing grade. Doing this exercise has revealed an issue, however: What is the behavior of peek() if nothing is there? If a pop() on an empty stack should throw an exception, should peek() do the same? Linda didn't say anything about this, so apparently, I need to flesh out some new behavior myself. In Listing 25, I've coded the scenario for what happens if peek() is called without a push().

Listing 25. Peeking without pushing

public void shouldReturnNullOnPeekWithoutPush() throws Exception{
  Ensure.that(stStack.peek(), m.is(null));
}

Once again, no surprises here. Things blew up, as you can see in Listing 26.

Listing 26. Peek blows up

1) StackBehavior should return null on peek without push:
java.lang.ArrayIndexOutOfBoundsException: -1

The logic to fix the defect is quite similar to the logic in pop(), as you can see in Listing 27.

Listing 27. Fixing peek()

public E peek() {
  if(this.list.size() > 0){
    return this.list.get(this.list.size()-1);
  }else{
    return null;
  }
}

All my modifications and fixes to the Stack class result in the code you see in Listing 28.
Listing 28. The finished Stack class

import java.util.ArrayList;

public class Stack<E> {
  private ArrayList<E> list;

  public Stack() {
    this.list = new ArrayList<E>();
  }

  public void push(E value) {
    if(value == null){
      throw new RuntimeException("Can't push null");
    }else{
      this.list.add(value);
    }
  }

  public E pop() {
    if(this.list.size() > 0){
      return this.list.remove(this.list.size()-1);
    }else{
      throw new RuntimeException("Nothing to pop");
    }
  }

  public E peek() {
    if(this.list.size() > 0){
      return this.list.get(this.list.size()-1);
    }else{
      return null;
    }
  }
}

At this point, the StackBehavior class runs seven behaviors that ensure the Stack class works according to Linda's (and a bit of my own) specification. The Stack class could probably use some refactoring (maybe the pop() method should call peek() rather than repeating the size() check?), but thanks to my behavior-driven process so far, I have the infrastructure to make such changes in near total confidence. If I break something, I'll be quickly notified.

In conclusion

What you may have noticed about this month's exploration in behavior-driven development is that Linda was, in essence, the customer; you might think of Frank as the developer in this scenario. Take away the domain (in this case, data structures) and replace it with something else (say, a call-center application) and the exercise is similar. Linda, the customer or domain expert, says what the system, feature, or application should do, and someone like Frank uses BDD to ensure he has heard her correctly and implements her requirements.

For many developers, the shift from test-driven development to BDD is a smart move. With BDD, you don't have to think about tests; you can just pay attention to the requirements of your application and ensure that the application's behavior does what it should to meet those requirements. In this case, using BDD and JBehave made it easy for me to implement a working stack based on Linda's specifications.
I just listened to what she was saying and then built the stack accordingly, by thinking in terms of behavior first. In the process, I also managed to discover a few things Linda had forgotten about stacks.
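To close the loop, the finished class can also be exercised end to end without JBehave at all. The check below reproduces Listing 28's Stack (so the snippet is self-contained) and walks it through the behaviors from the specification; the wrapper class and method names are mine, not part of the article.

```java
import java.util.ArrayList;

// Listing 28's Stack, reproduced here so this check is self-contained,
// exercised against the behaviors Linda specified.
public class StackCheck {

    public static class Stack<E> {
        private ArrayList<E> list = new ArrayList<E>();

        public void push(E value) {
            if (value == null) throw new RuntimeException("Can't push null");
            this.list.add(value);
        }

        public E pop() {
            if (this.list.size() > 0) return this.list.remove(this.list.size() - 1);
            throw new RuntimeException("Nothing to pop");
        }

        public E peek() {
            if (this.list.size() > 0) return this.list.get(this.list.size() - 1);
            return null;
        }
    }

    public static boolean allBehaviorsPass() {
        Stack<String> s = new Stack<String>();
        // should return null on peek without push
        if (s.peek() != null) return false;
        // should throw exception upon pop without push
        try { s.pop(); return false; } catch (RuntimeException expected) { }
        // should throw exception upon null push
        try { s.push(null); return false; } catch (RuntimeException expected) { }
        // should pop values in reverse order, with peek leaving the top in place
        s.push("test 1");
        s.push("test 2");
        if (!"test 2".equals(s.peek())) return false;  // peek leaves it there
        if (!"test 2".equals(s.pop()))  return false;  // last in, first out
        if (!"test 1".equals(s.pop()))  return false;
        return true;
    }
}
```

Every branch of the specification holds, which is exactly what the seven JBehave behaviors verify on each run.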
http://www.ibm.com/developerworks/java/library/j-cq09187/
Subject: [Boost-docs] [Invalid] Markup Validation of index.html - W3C Markup Validator_files
From: Paul A. Bristow (pbristow_at_[hidden])
Date: 2013-12-18 19:20:22

I've been checking the output from documentation produced using Quickbook (after getting a clean bill of health from the Boost inspect tool and being Doxygen warnings-free).

There are complaints about the header:

1. Warning.
2. Warning:
 - the MIME Media Type (text/html) can be used for XML or SGML document types
 - No known Document Type could be detected
 - No XML declaration (e.g. <?xml version="1.0"?>) could be found at the beginning of the document.
 - No XML namespace (e.g. <html xmlns="" xml:) could be found at the root of the document.
 As a default, the validator is falling back to SGML mode.
3. Warning: No DOCTYPE found! Checking with default HTML 4.01 Transitional. The document was checked using a default "fallback" Document Type Definition that closely resembles "HTML 4.01 Transitional". Learn how to add a doctype to your document from our FAQ.
4. Info.

Validation Output: 2 Errors

1. Error, Line 11, Column 1: no document type declaration; implying "<!DOCTYPE HTML SYSTEM>"
 <html>
2. Error, Line 14, Column 7: end tag for "HEAD" which is not finished
 </head>

It would be nice to be able to avoid these warnings. Has anyone any suggestions on quieting them?

Paul

(PS I think I've raised this before and issues with some browsers were a problem then - but surely things should be sorted now?)
https://lists.boost.org/boost-docs/2013/12/2214.php
usage of ioctl command TIOCLINUX

I have been reading the Linux Device Drivers 3 book. In Chapter 4, on debugging, an example program for changing the console used for printk output was given, but it is not executing in my terminal. What could be the reason? I have given the source code below. Also, when I googled the TIOCLINUX command I was able to find only 10 subcommands for it. What is subcommand 11 doing in the code? Please direct me to any article or links, or guide me.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/ioctl.h>

Thank you

Reply:

You're getting into grotty territory there. You're not supposed to use TIOCLINUX because it's not portable, at all. But it is fun when you're banging on your linux box and getting your program to work just the way you want it to. I'm not going to do a bunch of research here or anything, but I do recall that TIOCLINUX varies incredibly, takes strange arguments, does strange things, and returns weirdness. Be careful.
http://www.linuxforums.org/forum/newbie/193704-usage-ioctl-command-tioclinux.html
Man Page Manual Section (2) - page: accept

NAME
accept - accept a connection on a socket

SYNOPSIS
#include <sys/types.h> /* See NOTES */
#include <sys/socket.h>

int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen);

#define _GNU_SOURCE
#include <sys/socket.h>

int accept4(int sockfd, struct sockaddr *addr, socklen_t *addrlen, int flags);

RETURN VALUE
On success, these system calls return a nonnegative integer that is a descriptor for the accepted socket. On error, -1 is returned, and errno is set appropriately.

Error handling
For reliable operation the application should detect the network errors defined for the protocol after accept() and treat them like EAGAIN by retrying. In the case of TCP/IP these are ENETDOWN, EPROTO, ENOPROTOOPT, EHOSTDOWN, ENONET, EHOSTUNREACH, EOPNOTSUPP, and ENETUNREACH.

ERRORS
EBADF - The descriptor is invalid.
ENOTSOCK - The descriptor references a file, not a socket.
EOPNOTSUPP - The referenced socket is not of type SOCK_STREAM.
EPROTO - Protocol error.

In addition, network errors for the new socket and as defined for the protocol may be returned.

VERSIONS
The accept4() system call is available starting with Linux 2.6.28; support in glibc is available starting with version 2.10.

CONFORMING TO
accept(): POSIX.1-2001, SVr4, 4.4BSD (accept() first appeared in 4.2BSD). accept4() is a nonstandard Linux extension.

NOTES
The socklen_t type
The third argument of accept() was originally declared as an int * (and is that under libc4 and libc5 and on many other systems like 4.x BSD, SunOS 4, SGI); later POSIX drafts and glibc 2 use socklen_t *.

EXAMPLE
See bind(2).

SEE ALSO
bind(2), connect(2), listen(2), select(2), socket(2), socket(7)

June 11, 2010
http://linux.co.uk/documentation/man-pages/system-calls-2/man-page/?section=2&page=accept
This might be part of another ongoing series, but for this post, right here, RIGHT NOW, I am going to show a really simple but fun (??) way to change an image's… eh, image from a database-stored image using Javascript and an .ashx file. And when I say simple, I mean it took me longer to get test code going than it did to make this work.

First you need a handler (if you don't know what this is for, well, for this example it allows you to create a nonexistent url for an image loaded from the database), which is aptly named Generic Handler when you do the usual Add New Item. Amazing. You should get something like this in the class file:

[WebService(Namespace = "")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class SomeImage : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

Yeah, you like that, don't you? Yeah you do. So now we have something to display the image, right? Well, that's pretty easy too. For me, I have a UserImage class I created with Linq to Sql to mimic my UserImage table. I'm cool like that. I then created a method to return the image bytes based on the ID sent in. That part is up to you to handle. The main thing you need is to get the image bytes somehow.
With that in mind, here is what the class file looks like now:

[WebService(Namespace = "")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class ShowImage : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        Int32 imageID;
        Byte[] imageBytes;

        imageID = Convert.ToInt32(context.Request.QueryString["ImageID"]);
        imageBytes = UserImage.GetImageBytesById(imageID);

        context.Response.ContentType = "image/jpeg";
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.BufferOutput = false;
        context.Response.OutputStream.Write(imageBytes, 0, imageBytes.Length);
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

As you can see, the query string has an id being sent in and I'm retrieving it. From there I get the image bytes, tell the context what content type it is and how to handle the cache, and send the bytes out. What you can't see right now is that somewhere I have something that looks like this:

<img src="ShowImage.ashx?ImageID=1" />

When that url is read, the request is sent off to the handler to get and display the image. (Not taking caching into account, mind you.) Now I know what you're thinking right now: "I'm bored", which I understand, but you're also thinking, "Where's the f@&#ing javascript?". Well my vulgar friend, it's on the way.
<head runat="server">
  <script type="text/javascript" language="javascript">
    function TestThis(name, imageID)
    {
      var test = document.getElementById(name);
      test.src = 'ShowImage.ashx?imageID=' + imageID;
    }
  </script>
</head>
<body>
  <form id="form1" runat="server">
    <div>
      <div style="background-color:Gray;height:20px;width:20px;" onclick="TestThis('hi', 1);"></div>
      <div style="background-color:Red;height:20px;width:20px;" onclick="TestThis('hi', 2);"></div>
      <div style="background-color:Blue;height:20px;width:20px;" onclick="TestThis('hi', 3);"></div>
      <br />
      <img src="" id="hi" name="hi" />
    </div>
  </form>
</body>

So now every time one of the three divs is "clicked", the image is changed depending on the image id sent in, using the "url" of the handler you created. Much like sending information to another page, you send the id to the handler so it can display the correct image. I realize this isn't the best of examples, but it's a push in the right direction. Maybe next time you want something, you won't swear at me.

One thought on "Use Javascript and an HttpHandler to Load an Image from a Database"

why haven't you registered the HttpHandler in web.config?? is it not required???
http://byatool.com/uncategorized/use-javascript-and-an-httphandler-to-load-an-image-from-a-database/
GTK+ for BeOS Update

BugMaster ChuckyD writes to us with the latest info about the GTK+-to-BeOS porting effort. Alpha stage, so it's still crashy-the-crash guy, but the screen shots look sweet. I'm going to have to put this on my Be machine later.

Re:Slashmeat? Freshdot? What gives? (Score:1)

No, slashdot is not freshmeat. No, you dont see news stories on freshmeat. I dont use linux (or any unix) on my desktop - though a "long term goal" is to migrate over. So I dont read freshmeat randomly - there are too many gminsweapers and the like to weed through.. I proably wont randomy go there when I do uses gnome on my desktop, either. Apache is important. I maintain some sites that run linux and use apache. Its nice to know. A Be gtk port is less important (to me) but there are proably lots of Be users who dont run linux and gnome, but like some of what they see. They would be interested in something like this. Besides, its not like there is a hundred stories a day on ./ Do you realy think that these displaced something more important?

Re:You're the one with the problem, dude. (Score:2)

(2) it's not Unix (3) it's partially owned by Microsoft Essentially, OS/2 doesn't exist in the "Linux world" (as you put it), so it's no real shock that Slashdot isn't all over an OS that most people aren't even aware is still in production. (I know it's unfair, but OS/2's fate as a nitch OS has been sealed for a long time. Blame IBM, Microsoft, fate, timing, whatever. Expecting "News for Nerds" to pickup on news which only interests what can be fairly called a legacy user base is a little odd. There's not much AS/400 or VMS news here either, and those plaforms are growing much faster than OS/2 is right now.) --
And the WPS was so unstable that we ended up booting into command prompts on the server, but that's a different story... -- Yuck (Score:1) YUCK, it's not using BeOSes L&F. One of the reasons I left Linux was the lack of the standardised L&F, yeah there is 'GNOME' and 'KDE' etc, but not all apps use it. Mlk Re:Perhaps, but... (Score:1) Plus how many 'themes' are perfect? Re:No other platform uses C for GUI (Score:1) Anyway, it doesn't matter at all what language the GUI source code is in (I'm willing to bet the majority of GUI's are written in C) --- the point being made was that there are bindings for many languages. You don't "link to" C code to get to the GUI, you link to a dynamic library. A dynamic library isn't "C" or ANY language for that matter, its assembly opcodes - the only "C" there may be the calling convention for the functions, but that doesn't restrict anyone linking to the library to C. You could write BASIC code that links to GTK+. Re:You're the one with the problem, dude. (Score:1) Or did you think that $59 price tag was to be donated to the FSF? Hmm? Secondly, since when does an OS need to be Unix to be mentionable? Clue: Linux is not Unix! Its origins are not from the BSD school because the kernel sprouted from the head of Linus. Just because a distribution *looks like* System V at the command line, doesn't make it a descendent of any of the BSD lineage. Thirdly, Microsoft pulled out of OS/2 a looooong time ago and stole a bunch of code (why hasn't IBM sued yet?) in the process to make Win9x and NT. Lastly, as a former OS/2 user/hacker, I must remind you that posting non-facts here can make you look silly. As for who to blame, I place the blame squarely on IBM's shoulders. They refused to fight the FUD with facts and had the WORST advertising campaign known to mankind. To this day, they have no idea what kind of decent OS they had or what kind of challenge they could have given Evil Bill. 
Remember that this was an OS that was full 32 bit, had drag-n-drop capabilities that NO version of Windoze has to this day, a decent scripting/batch language, and better DOS support, and this was back in the days of Win3.11 being the best out of Redmond. OS/2 is dead because IBM eats its children.

Silly? (Score:2)
Note that I was referring to the fetishes of Slashdot editors more than I was talking about OS/2. I should have added [(4) It's not a new OS], which explains Slashdot's fixation on Be, Mac OS X, and so on. But to address your points in particular:
- Nobody here thinks Be or Solaris or OS/400 is free software.
- Linux is a nearly perfect clone of Unix. Apologies if calling it unix offends your delicate sensibilities, but most people consider it a unix variant.
- Microsoft didn't steal any OS/2 code. At the time of the breakup, both MS and IBM got rights to each other's projects - DOS, Windows 3, and OS/2 2.
- I didn't attempt to post any facts - just 100% opinion. Babbling a bunch of nonsense about System V not being BSD and accusing Microsoft of stealing something that they own looks pretty silly too.
--

I've never seen an OS/2 article on Slashdot (Score:2)
That's why all us OS/2 users think that Slashdot is anti-OS/2.

Excellent screenshot (Score:1)

Re:I've never seen an OS/2 article on Slashdot (Score:1)
BTW, not long ago, Jerry Pournelle made the statement that Warp advocates made it so unpleasant for people to write about Warp that eventually no one did. We see the same thing now happening with Linux. Nothing is stopping Warp users from starting up their own site a la Slashdot, BeNews or BeosCentral, is there? Know anyone who would be willing to host and run a site like that (please don't even mention Tim Martin and Warpcity!)?

Re:You're the one with the problem, dude. (Score:1)
Nobody seems to want to even acknowledge that OS/2 exists in the Linux world. I remember seeing ONE good blurb about OS/2's WPS...

Re:I've never seen an OS/2 article on Slashdot (Score:1)
Slashdot is "news for nerds" or so it claims. And nerds are semi-interested in OS developments, yes? I know that at least one or 2 interesting stories about OS/2 have been submitted to Slashdot, yet never get posted. Yet some yahoo goes around asking about modems for Linux and the like and _that_ gets posted.

Re:GTK+ also available on OS/2 (Score:1)
ON THE OTHER HAND, somebody submits a GTK+ on BeOS story and that gets printed. A story on GTK+ on OS/2 was submitted and NOTHING showed up on Slashdot. BeOS is cool too, but there are a heck of a lot more OS/2 users than BeOS right now!

Re:Excellent screenshot (Score:1)

Re:I've never seen an OS/2 article on Slashdot (Score:1)
The subhead at the top of the page says "News for Nerds. Stuff that matters." Unfortunately, it usually ends up "News for kernel hackers. Linux stuff that matters." The only reason this BeOS story got posted is because the subject (GTK+) tied back to Linux. That's the way it is for all BeOS stories. I sometimes think this is how they determine what gets posted:
Keith Russell
OS != Religion

Nice looking fonts. (Score:2)
And... what's the graphing widget?

Re:gimp (Score:1)

Re:Ah, crap. (Score:1)
Others could port Gtk-- to beos, but it's not a priority at all. The BeOS version does run themes, although the only one so far that I've tried is ThinIce. It works wonderfully.

gimp (Score:1)

Re:Ah, crap. (Score:1)
Well, cool. More apps for another cool platform! Get porting, boys!

Ah, crap. (Score:1)
And BeOS likes to be programmed with C++, not C. For porting apps from Linux, it might be useful, somewhat -- at least some of the UI code will carry over. Does the BeOS version support themes?

Does this mean a Mozilla Port? (Score:1)

Hmm, how about Qt? (Score:1)
It is based on C++ and has a much nicer API. Unfortunately, though, its license has been a subject of controversy:
Red Hat's Marc Ewing speaks out on Qt License [slashdot.org]
QT Goes OpenSource [slashdot.org]
It doesn't hurt that the BeOS native compiler is EGCS [which will soon (already?) become the official gcc compiler].

Slashmeat? Freshdot? What gives? (Score:1)
Why do we have Freshmeat.net? Why do we have Slashdot.org? I thought that they were two different places, with two different themes. But now, all we have is the same stuff on both sites! I'm not in a position to question Rob and Jeff's editorial decisions. After all, it is still their site and theirs to do with as they please. But could we get some clarification as to what we can expect to see here? If Slashdot keeps going like this, we might as well autoforward to Freshmeat.net
Just my $.02

Yeah I wish X were that good. (Score:1)

nice looking! (but i'm off-topic, i know) (Score:1)

Oh, get a grip. (Score:1)
Freshmeat, in the last week, has posted over 200 software version updates. Similarly, Slashdot has posted a huge amount of news and information. If you cannot cope with the occasional overlap, you are free to download Slashdot's source code and offer an alternative news site completely devoid of news about software updates. Otherwise, cope.

Re:Slashmeat? Freshdot? What gives? (Score:1)
/me can't imagine someone spending all the time to write that in the first place. get a clue.

Re:Ah, crap. (Score:1)

Re:Cross-platform tooklit (Score:1)

Re:Cross-platform tooklit (Score:2)
Which is why I've used it the last few times I had to do Motif development. Pretty easy to wrap classes around the Motif stuff you need. Heck, I even used C++ to do X Windows 10 stuff, back when. C makes sense in some problem domains, and I'll grant that too many C++ programmers tend to write unreadable code, but graphics and GUIs are domains that practically beg for an OO approach. (And yes, you can do OO in C, but why?)

GTK+ is better than nothing, but not much better (Score:1)
There is a native port going on. I would expect that to be much better than any GTK based port, since GTK applications can't integrate themselves into BeOS properly, let alone use the facilities offered. For one example, threads -- the GTK framework isn't thread safe, not to mention actually having integrated thread support.
John

Re:Hmm, how about Qt? (Score:1)

Talking of nicer API's (Score:1)
John

Re:Cross-platform tooklit (Score:1)
On the other hand, you can't use any C++ features such as namespaces, templates, operator overloading etc. C is an excellent low level systems programming language. C++ is just about usable for GUI's, but it's a lot better than C for that sort of thing.
p.s. you don't need to do thread management and locking manually to program BeOS.
John

GTK+ also available on OS/2 (Score:2)

Cross-platform tooklit (Score:1)

It makes sense. (Score:1)
It can be debated whether this announcement is 'worthy', but I don't care. It's not Freshmeat material, it belongs here if anywhere.

Re:Slashmeat? Freshdot? What gives? (Score:1)
Although the majority of the posters to

Re:Ah, crap. (Score:1)

Re:Hmm, how about Qt? (Score:1)
However, I don't see anything wrong with the existing GUI API - in fact, this kind of setup will provide consistency across all the apps on BeOs - something that Linux lacks.

Re:GTK+ also available on OS/2 (Score:1)
-Shawn

Re:Cross-platform tooklit (Score:1)
C++ fits naturally into the GUI model of programming. I used C with Motif and EZWGL dev. I don't understand people's fascination with C. Right tool for the right job.

Re:Yeah I wish X were that good. (Score:1)
They seem to work very well and look very nice.

Re:Cross-platform tooklit (Score:1)

Re:No other platform uses C for GUI (Score:1)

Re:Be API (docs) (Score:1)
Enjoy.
Brian

Re:Does this mean a Mozilla Port? (Score:1)

BeOS + gtk (Score:1)
----------
Have FreeBSD questions?

Re:nice looking! (but i'm off-topic, i know) (Score:1)
I think it looks crappy as compared to the look of the native BeOS toolkit. Really, people, don't you see that default GTK is butt-ugly??

Re:GTK+ also available on OS/2 (Score:2)
But I wouldn't personally know. Given the common hostility of Linux/X developers towards OS/2, I'm sure a lot of OS/2 programmers haven't even asked. Look at how difficult it's been to get Mozilla for OS/2 into the main tree.

Anti-OS/2?? (Score:1)

Re:First (Score:1)
Mycroft

Re:Slashmeat? Freshdot? What gives? (Score:1)
Mycroft

Re:Cross-platform tooklit (Score:1)
https://slashdot.org/story/99/08/20/1328251/gtk-for-beos-update
Oleg Nesterov <oleg@redhat.com> writes:
> On 06/18, Oleg Nesterov wrote:
>>
>> I only try to discuss the idea to break the circular reference.
>
> I don't know what I have missed, but this looks really right to me.
> Besides, we have yet another problem: proc_flush_task()->mntput()
> is just wrong. Consider the multithreaded execing init.
>
> I am going to simplify, test, and send the fix which moves mntput()
> into free_pid_ns() paths.

free_pid_ns is comparatively late, to release the kern_mount.

> But first of all I think we should cleanup the pid_ns_prepare_proc()
> logic. Imho, this code is really ugly. Please see the patches.

Since I have a patchset that makes it possible to unshare the pid namespace about ready to send, I figure we should combine the two efforts.

This patchset is a prerequisite to my patches for giving namespaces file descriptors and allowing you to join an existing namespace. When I look over my old notes it appears that Daniel managed to hit this proc_mnt reference counting in that context. So that is definitely interesting.

Oleg, take a look. I think I have combined the best of our two patchsets.

Eric
http://lkml.org/lkml/2010/6/20/23
WordPress Question for a new poster and WP coder

for( $i=0; $i<$total_attachments; $i++ ) : ?>
- <?php echo $attachments[$i]; ?>
- <?php echo $attachments[$i]; ?>
- <?php echo $attachments[$i]; ?>
- <?php echo $attachments[$i]; ?>
- <?php echo $attachments[$i]; ?>
- <?php echo $attachments[$i]; ?>
<?php endfor; ?>
<?php endif; ?>
<?php } ?>

Confused! Thanks

@chrisburton – I believe so – I figured out how to attach and show the image in the correct DIV on the template. I can style the image with CSS. Here is my markup in my template:

<?php
$attachments = attachments_get_attachments();
$total_attachments = count($attachments);
?>
<?php if( $total_attachments > 0 ) : ?>
<?php for ($i=0; $i < $total_attachments; $i++) : ?>
<?php
$tmp_attachment_id = $attachments[$i];
$tmp_post_id = get_post($tmp_attachment_id);
$tmp_title = $tmp_post_id->post_title;
$tmp_path_array = explode("/", $attachments[$i]);
$tmp_array_length = sizeof($tmp_path_array) - 1;
$tmp_filename = $tmp_path_array[$tmp_array_length];
$j = $i + 1;
?>
<?php echo wp_get_attachment_image($attachments[$i], $size = 'full', $icon = true); ?>
<?php endfor ?>
<?php endif ?>

Here is the basic CSS for the image:

.attachments-div {
    position: relative;
    float: right;
    width: 35%;
}
.attachments-div img {
    border: #000 solid 1px;
    float: left;
}

And now using this template I can allow the client to upload and attach an image to the right hand column. My PHP is limited, so not totally sure what all the code means on the PHP side, but it worked! Do you see anything wrong with it? Thanks again @chrisburton
zack

Depending on the dimensions for `$size='full'`, you might want to replace 'full' with 'thumbnail'. If they are already sized correctly that code should work.

Yeah I saw that in the code, actually changed it to full from thumbnail – one quick question, is there a PHP call that I can add for a hyperlink? Say I want to use the images and have them link to a specific page?

An example (excuse the funky CSS background color) is this page: using attachments to build the thumbs, but would like them to link to the proper case studies page: If you have a quick answer thanks! If not I have hassled you enough! Thanks again @chrisburton
cheers
zack

@zeech This is what I use:

<?php $attachments = attachments_get_attachments(); ?>
<?php if( function_exists( 'attachments_get_attachments' ) ) {
$attachments = attachments_get_attachments();
$total_attachments = count( $attachments );
if( $total_attachments ) : ?>
<?php for( $i=0; $i<$total_attachments; $i++ ) : ?>
">

@chrisburton thank you for the code! I did a slight modification to this chunk:
">

Sorry not letting me post the code properly… Trying again:
">

@zeech I don't think that's correct. I believe it should be:
">
https://css-tricks.com/forums/topic/wordpress-question-for-a-new-poster-and-wp-coder/page/2/
The purpose of this tutorial is to get an experienced Python programmer up to speed with the basics of the C language and how it's used in the CPython source code. It assumes you already have an intermediate understanding of Python syntax.

That said, C is a fairly limited language, and most of its usage in CPython falls under a small set of syntax rules. Getting to the point where you understand the code is a much smaller step than being able to write C effectively. This tutorial is aimed at the first goal but not the second.

In this tutorial, you'll learn:

- What the C preprocessor is and what role it plays in building C programs
- How you can use preprocessor directives to manipulate source files
- How C syntax compares to Python syntax
- How to create loops, functions, strings, and other features in C

One of the first things that stands out as a big difference between Python and C is the C preprocessor. You'll look at that first.

Note: This tutorial is adapted from the appendix, "Introduction to C for Python Programmers," in CPython Internals: Your Guide to the Python Interpreter.

Free Download: Get a sample chapter from CPython Internals: Your Guide to the Python 3 Interpreter showing you how to unlock the inner workings of the Python language, compile the Python interpreter from source code, and participate in the development of CPython.

The C Preprocessor

The preprocessor, as the name suggests, is run on your source files before the compiler runs. It has very limited abilities, but you can use them to great advantage in building C programs. The preprocessor produces a new file, which is what the compiler will actually process. All the commands to the preprocessor start at the beginning of a line, with a # symbol as the first non-whitespace character. The main purpose of the preprocessor is to do text substitution in the source file, but it will also do some basic conditional code with #if or similar statements.
You'll start with the most frequent preprocessor directive: #include.

#include

#include is used to pull the contents of one file into the current source file. There's nothing sophisticated about #include. It reads a file from the file system, runs the preprocessor on that file, and puts the results into the output file. This is done recursively for each #include directive. For example, if you look at CPython's Modules/_multiprocessing/semaphore.c file, then near the top you'll see the following line:

#include "multiprocessing.h"

This tells the preprocessor to pull in the entire contents of multiprocessing.h and put them into the output file at this position. You'll notice two different forms for the #include statement. One of them uses quotes ( "") to specify the name of the include file, and the other uses angle brackets ( <>). The difference comes from which paths are searched when looking for the file on the file system. If you use <> for the filename, then the preprocessor will look only at system include files. Using quotes around the filename instead will force the preprocessor to look in the local directory first and then fall back to the system directories.

#define

#define allows you to do simple text substitution and also plays into the #if directives you'll see below. At its most basic, #define lets you define a new symbol that gets replaced with a text string in the preprocessor output. Continuing in semaphore.c, you'll find this line:

#define SEM_FAILED NULL

This tells the preprocessor to replace every instance of SEM_FAILED below this point with the literal string NULL before the code is sent to the compiler. #define items can also take parameters as in this Windows-specific version of SEM_CREATE:

#define SEM_CREATE(name, val, max) CreateSemaphore(NULL, val, max, NULL)

In this case, the preprocessor will expect SEM_CREATE() to look like a function call and have three parameters. This is generally referred to as a macro.
It will directly replace the text of the three parameters into the output code. For example, on line 460 of semaphore.c, the SEM_CREATE macro is used like this:

handle = SEM_CREATE(name, value, max);

When you're compiling for Windows, this macro will be expanded so that line looks like this:

handle = CreateSemaphore(NULL, value, max, NULL);

In a later section, you'll see how this macro is defined differently on Windows and other operating systems.

#undef

This directive erases any previous preprocessor definition from #define. This makes it possible to have a #define in effect for only part of a file.

#if

The preprocessor also allows conditional statements, allowing you to either include or exclude sections of text based on certain conditions. Conditional statements are closed with the #endif directive and can also make use of #elif and #else for fine-tuned adjustments. There are three basic forms of #if that you'll see in the CPython source:

- #ifdef <macro> includes the subsequent block of text if the specified macro is defined. You may also see it written as #if defined(<macro>).
- #ifndef <macro> includes the subsequent block of text if the specified macro is not defined.
- #if <macro> includes the subsequent block of text if the macro is defined and it evaluates to True.

Note the use of "text" instead of "code" to describe what's included or excluded from the file. The preprocessor knows nothing of C syntax and doesn't care what the specified text is.

#pragma

Pragmas are instructions or hints to the compiler. In general, you can ignore these while reading the code as they usually deal with how the code is compiled, not how the code runs.

#error

Finally, #error displays a message and causes the preprocessor to stop executing. Again, you can safely ignore these for reading the CPython source code.

Basic C Syntax for Python Programmers

This section won't cover all aspects of C, nor is it intended to teach you how to write C.
It will focus on aspects of C that are different or confusing for Python developers the first time they see them.

General

Unlike in Python, whitespace isn't important to the C compiler. The compiler doesn't care if you split statements across lines or jam your entire program into a single, very long line. This is because it uses delimiters for all statements and blocks. There are, of course, very specific rules for the parser, but in general you'll be able to understand the CPython source just knowing that each statement ends with a semicolon ( ;), and all blocks of code are surrounded by curly braces ( {}). The exception to this rule is that if a block has only a single statement, then the curly braces can be omitted.

All variables in C must be declared, meaning there needs to be a single statement indicating the type of that variable. Note that, unlike Python, the data type that a single variable can hold can't change. Here are a few examples:

/* Comments are included between slash-asterisk and asterisk-slash */
/* This style of comment can span several lines
   - so this part is still a comment. */

// Single-line comments start with two slashes

int x = 13;    // Declare x as an int and initialize it
int y = 42;    // Every variable needs a declared type

if (x == y) {
    printf("x: %d, y: %d\n", x, y);
}

// Single-line blocks do not require curly brackets
if (x == 13)
    printf("x is 13!\n");
printf("past the if block\n");

In general, you'll see that the CPython code is very cleanly formatted and typically sticks to a single style within a given module.

if Statements

In C, if works generally like it does in Python. If the condition is true, then the following block is executed. The else and else if syntax should be familiar enough to Python programmers. Note that C if statements don't need an endif because blocks are delimited by {}. There's a shorthand in C for short if … else statements called the ternary operator:

condition ? true_result : false_result

You can find it in semaphore.c where, for Windows, it defines a macro for SEM_CLOSE():

#define SEM_CLOSE(sem) (CloseHandle(sem) ? 0 : -1)

The return value of this macro will be 0 if the function CloseHandle() returns true and -1 otherwise.

Note: Boolean variable types are supported and used in parts of the CPython source, but they aren't part of the original language. C interprets binary conditions using a simple rule: 0 or NULL is false, and everything else is true.

switch Statements

Unlike Python, C also supports switch. Using switch can be viewed as a shortcut for extended if … else if chains. This example is from semaphore.c:

switch (WaitForSingleObjectEx(handle, 0, FALSE)) {
case WAIT_OBJECT_0:
    if (!ReleaseSemaphore(handle, 1, &previous))
        return MP_STANDARD_ERROR;
    *value = previous + 1;
    return 0;
case WAIT_TIMEOUT:
    *value = 0;
    return 0;
default:
    return MP_STANDARD_ERROR;
}

This performs a switch on the return value from WaitForSingleObjectEx(). If the value is WAIT_OBJECT_0, then the first block is executed. The WAIT_TIMEOUT value results in the second block, and anything else matches the default block. Note that the value being tested, in this case the return value from WaitForSingleObjectEx(), must be an integral value or an enumerated type, and each case must be a constant value.

Loops

There are three looping structures in C:

- for loops
- while loops
- do … while loops

for loops have syntax that's quite different from Python:

for ( <initialization>; <condition>; <increment>) {
    <code to be looped over>
}

In addition to the code to be executed in the loop, there are three blocks of code that control the for loop:

- The <initialization> section runs exactly once when the loop is started. It's typically used to set a loop counter to an initial value (and possibly to declare the loop counter).
- The <increment> code runs immediately after each pass through the main block of the loop. Traditionally, this will increment the loop counter.
- Finally, the <condition> runs after the <increment>. The return value of this code will be evaluated and the loop breaks when this condition returns false.
Here's an example from Modules/sha512module.c:

for (i = 0; i < 8; ++i) {
    S[i] = sha_info->digest[i];
}

This loop will run 8 times, with i incrementing from 0 to 7, and will terminate when the condition is checked and i is 8.

while loops are virtually identical to their Python counterparts. The do … while syntax is a little different, however. The condition on a do … while loop isn't checked until after the body of the loop is executed for the first time. There are many instances of for loops and while loops in the CPython code base, but do … while is unused.

Functions

The syntax for functions in C is similar to that in Python, with the addition that the return type and parameter types must be specified. The C syntax looks like this:

<return_type> function_name(<parameters>) {
    <function_body>
}

The return type can be any valid type in C, including built-in types like int and double as well as custom types like PyObject, as in this example from semaphore.c:

static PyObject *
semlock_release(SemLockObject *self, PyObject *args)
{
    <statements of function body here>
}

Here you see a couple of C-specific features in play. First, remember that whitespace doesn't matter. Much of the CPython source code puts the return type of a function on the line above the rest of the function declaration. That's the PyObject * part. You'll take a closer look at the use of * a little later, but for now it's important to know that there are several modifiers that you can place on functions and variables.

static is one of these modifiers. There are some complex rules governing how modifiers operate. For instance, the static modifier here means something very different than if you placed it in front of a variable declaration. Fortunately, you can generally ignore these modifiers while trying to read and understand the CPython source code. The parameter list for functions is a comma-separated list of variables, similar to what you use in Python.
Again, C requires specific types for each parameter, so SemLockObject *self says that the first parameter is a pointer to a SemLockObject and is called self. Note that all parameters in C are positional.

Let's look at what the "pointer" part of that statement means. To give some context, the parameters that are passed to C functions are all passed by value, meaning the function operates on a copy of the value and not on the original value in the calling function. To work around this, functions will frequently pass in the address of some data that the function can modify. These addresses are called pointers and have types, so int * is a pointer to an integer value and is of a different type than double *, which is a pointer to a double-precision floating-point number.

Pointers

As mentioned above, pointers are variables that hold the address of a value. These are used frequently in C, as seen in this example:

static PyObject *
semlock_release(SemLockObject *self, PyObject *args)
{
    <statements of function body here>
}

Here, the self parameter will hold the address of, or a pointer to, a SemLockObject value. Also note that the function will return a pointer to a PyObject value.

Note: For an in-depth look at how to simulate pointers in Python, check out Pointers in Python: What's the Point?

There's a special value in C called NULL that indicates a pointer doesn't point to anything. You'll see pointers assigned to NULL and checked against NULL throughout the CPython source. This is important since there are very few limitations as to what values a pointer can have, and accessing a memory location that isn't part of your program can cause very strange behavior. On the other hand, if you try to access the memory at NULL, then your program will exit immediately. This may not seem better, but it's generally easier to figure out a memory bug if NULL is accessed than if a random memory address is modified.

Strings

C doesn't have a string type.
There's a convention around which many standard library functions are written, but there's no actual type. Rather, strings in C are stored as arrays of char (for ASCII) or wchar (for Unicode) values, each of which holds a single character. Strings are marked with a null terminator, which has a value 0 and is usually shown in code as \0. Basic string operations like strlen() rely on this null terminator to mark the end of the string.

Because strings are just arrays of values, they cannot be directly copied or compared. The standard library has the strcpy() and strcmp() functions (and their wchar cousins) for doing these operations and more.

Structs

Your final stop on this mini-tour of C is how you can create new types in C: structs. The struct keyword allows you to group a set of different data types together into a new, custom data type:

struct <struct_name> {
    <type> <member_name>;
    <type> <member_name>;
    ...
};

This partial example from Modules/arraymodule.c shows a struct declaration:

struct arraydescr {
    char typecode;
    int itemsize;
    ...
};

This creates a new data type called arraydescr which has many members, the first two of which are a char typecode and an int itemsize. Frequently structs will be used as part of a typedef, which provides a simple alias for the name. In the example above, all variables of the new type must be declared with the full name struct arraydescr x;. You'll frequently see syntax like this:

typedef struct {
    PyObject_HEAD
    SEM_HANDLE handle;
    unsigned long last_tid;
    int count;
    int maxvalue;
    int kind;
    char *name;
} SemLockObject;

This creates a new, custom struct type and gives it the name SemLockObject. To declare a variable of this type, you can simply use the alias SemLockObject x;.

Conclusion

This wraps up your quick walk through C syntax. Although this description barely scratches the surface of the C language, you now have sufficient knowledge to read and understand the CPython source code.
In this tutorial, you learned:

- What the C preprocessor is and what role it plays in building C programs
- How you can use preprocessor directives to manipulate source files
- How C syntax compares to Python syntax
- How to create loops, functions, strings, and other features in C

Now that you're familiar with C, you can deepen your knowledge of the inner workings of Python by exploring the CPython source code. Happy Pythoning!

Note: If you enjoyed what you learned in this sample from CPython Internals: Your Guide to the Python Interpreter, then be sure to check out the rest of the book.
https://realpython.com/c-for-python-programmers/
#include <IpNLP.hpp>

Inheritance diagram for Ipopt::NLP:

Detailed Class Description

Definition at line 31 of file IpNLP.hpp.

Copy Constructor.

Exceptions.

Overload if you want the chance to process options or parameters that may be specific to the NLP. Reimplemented in Ipopt::CompositeNLP, and Ipopt::TNLPAdapter. Definition at line 55 of file IpNLP.hpp.

Method for creating the derived vector / matrix types. The Hess_lagrangian_space pointer can be NULL if a quasi-Newton option is chosen. Implemented in Ipopt::CompositeNLP, and Ipopt::TNLPAdapter.

Method for obtaining the bounds information. Implemented in Ipopt::CompositeNLP, and Ipopt::TNLPAdapter.

Method for obtaining the starting point for all the iterates. ToDo: it might not make sense to ask for initial values for v_L and v_U? Implemented in Ipopt::CompositeNLP, and Ipopt::TNLPAdapter.

Method for obtaining an entire iterate as a warmstart point. The incoming IteratesVector has to be filled. The default dummy implementation returns false. Reimplemented in Ipopt::TNLPAdapter. Definition at line 108 of file IpNLP.hpp.

This method is called at the very end of the optimization. It provides the final iterate to the user, so that it can be stored as the solution. The status flag indicates the outcome of the optimization, where SolverReturn is defined in IpAlgTypes.hpp. Reimplemented in Ipopt::TNLPAdapter. Definition at line 144 of file IpNLP.hpp.

This method is called once per iteration, after the iteration summary output has been printed. It provides the current information to the user to do with it anything she wants. It also allows the user to ask for a premature termination of the optimization by returning false, in which case Ipopt will terminate with a corresponding return status. The basic information provided in the argument list has the quantities' values printed in the iteration summary line. If more information is required, a user can obtain it from the IpData and IpCalculatedQuantities objects. However, note that the provided quantities are all for the problem that Ipopt sees, i.e., the quantities might be scaled, fixed variables might be sorted out, etc. The status indicates things like whether the algorithm is in the restoration phase. In the restoration phase, the dual variables are probably not changing. Reimplemented in Ipopt::TNLPAdapter. Definition at line 169 of file IpNLP.hpp.

Routines to get the scaling parameters. These do not need to be overloaded unless the options are set for user scaling. Reimplemented in Ipopt::TNLPAdapter. Definition at line 187 of file IpNLP.hpp. References THROW_EXCEPTION.

Method for obtaining the subspace in which the limited-memory Hessian approximation should be done. This is only called if the limited-memory Hessian approximation is chosen. Since the Hessian is zero in the space of all variables that appear in the problem functions only linearly, this allows the user to provide a VectorSpace for all nonlinear variables, and an ExpansionMatrix to lift from this VectorSpace to the VectorSpace of the primal variables x. If the returned values are NULL, it is assumed that the Hessian is to be approximated in the space of all x variables. The default instantiation of this method returns NULL, and a user only has to overwrite this method if the approximation is to be done only in a subspace. Reimplemented in Ipopt::TNLPAdapter. Definition at line 216 of file IpNLP.hpp.

Overloaded Equals Operator.
http://www.coin-or.org/Doxygen/CoinAll/class_ipopt_1_1_n_l_p.html
puts() prototype

int puts(const char *str);

The puts() function takes a null terminated string str as its argument and writes it to stdout. The terminating null character '\0' is not written, but a newline character '\n' is added after writing the string. A call to puts() is the same as calling fputc() repeatedly.

The main difference between fputs() and puts() is that the puts() function appends a newline character to the output, while the fputs() function does not.

It is defined in the <cstdio> header file.

puts() Parameters

str: The string to be written.

puts() Return value

On success, the puts() function returns a non-negative integer. On failure it returns EOF and sets the error indicator on stdout.

Example: How puts() function works

#include <cstdio>

int main() {
    char str1[] = "Happy New Year";
    char str2[] = "Happy Birthday";
    puts(str1);
    /* str2 is printed on a new line since '\n' is added */
    puts(str2);
    return 0;
}

When you run the program, the output will be:

Happy New Year
Happy Birthday
https://cdn.programiz.com/cpp-programming/library-function/cstdio/puts
By default, a date ends up serialized as a string in the following format: Thu Jan 01 1970 01:00:00 GMT+0100 (CET). In this tutorial, I'll replace it with an ISO timestamp.

Adjusting the schema

The first thing you have to do is to adjust the schema to introduce the new scalar. This means that you need to define the scalar itself:

scalar ISODate

Next to that, we also have to change the type of createdAt from String to ISODate:

type Post {
  _id: ID!
  content: String
  createdAt: ISODate
  author: User
  votes: [Vote]
  voteCount: Int
  question: Question
  isQuestion: Boolean
}

Defining the scalar type

The next thing to do is to write the scalar itself. Basically, a scalar is a special resolver that is able to map a value to JSON, and the other way around. To do this, you need to specify three functions:

- serialize: This function is called when a value is passed to the client. Within here, you can return anything as long as it can be valid JSON. This means you could serialize to strings, numbers, objects and arrays.
- parseValue: This function is called when an input parameter should be parsed.
- parseLiteral: This function is called when an inline input parameter should be parsed. Rather than returning a value, it will return an AST node. (GraphQL uses an Abstract Syntax Tree for parsing the query.)

Before writing the scalar type itself, I'm going to introduce a new helper:

const returnOnError = (operation, alternative) => {
  try {
    return operation();
  } catch (e) {
    return alternative;
  }
};

After that, you can use the helper to write the custom scalar:

import {Kind} from 'graphql/language';
import {GraphQLScalarType} from 'graphql';
import {returnOnError} from '../helpers';

function serialize(value) {
  return value instanceof Date ? value.toISOString() : null;
}

function parseValue(value) {
  return returnOnError(() => value == null ? null : new Date(value), null);
}

function parseLiteral(ast) {
  return ast.kind === Kind.STRING ? parseValue(ast.value) : null;
}

export default new GraphQLScalarType({
  name: 'ISODate',
  description: 'JavaScript Date object as an ISO timestamp',
  serialize,
  parseValue,
  parseLiteral
});

By using GraphQLScalarType you can define your own scalars, using the functions I mentioned before. The parse functions will basically try to create a new Date object from the ISO timestamp when possible, while the serialize function will use the Date.prototype.toISOString() function.

Including the scalar type as a resolver

The final step to make your custom scalar work is to include it as a resolver. I defined my resolvers in src/schema/index.js, so I'll change my code to do the following:

import ISODate from '../scalars/ISODate';

const resolvers = {Query, Mutation, Question, Post, User, Vote, ISODate};

export default makeExecutableSchema({typeDefs, resolvers});

If you run the application now, and make a query to get a post's creation date, you'll see that it's now formatted as an ISO string:

query AllQuestions($query: Pagination!) {
  questionCount
  questions(query: $query) {
    _id
    title
    firstPost {
      _id
      voteCount
      createdAt
    }
  }
}

If you're interested in the full code, you can find it on GitHub.
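Since serialize and parseValue are plain functions, their round-trip behaviour can be sanity-checked with Node.js alone, outside of GraphQL. The snippet below restates the logic from the scalar above as standalone functions; nothing GraphQL-specific is imported:

```javascript
// Standalone restatement of the scalar's conversion logic
const returnOnError = (operation, alternative) => {
  try {
    return operation();
  } catch (e) {
    return alternative;
  }
};

const serialize = value =>
  value instanceof Date ? value.toISOString() : null;

const parseValue = value =>
  returnOnError(() => (value == null ? null : new Date(value)), null);

// Round trip: Date -> ISO string -> Date
const original = new Date('2018-10-01T12:00:00.000Z');
const iso = serialize(original);
console.log(iso);                                              // 2018-10-01T12:00:00.000Z
console.log(parseValue(iso).getTime() === original.getTime()); // true
console.log(serialize('not a date'));                          // null
```

Anything that is not a Date serializes to null, and a null input parses to null, matching the defensive behaviour of the scalar.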
https://g00glen00b.be/custom-scalar-types-graphql-apollo/
26 May 2009 15:27 [Source: ICIS news]

NEW DELHI (ICIS news)--The Indian government has given environmental approval to Cals Refineries Ltd's (CRL) proposed 5m tonne/year refinery at Haldia in West Bengal State, the company said on Tuesday.

The state government separately approved a special package of incentives for the rupees (Rs) 40.3bn ($852m) project under the West Bengal Incentive Scheme (WBIS), the company added. According to the WBIS, the state government grants a special package of incentives to large projects on a "case by case basis". The projects are eligible for cash incentives such as a state capital investment subsidy, industrial promotion assistance and tax incentives such as the waiver of electricity duty and the refund of stamp duty.

Apart from manufacturing transportation fuels, the refinery would have the capacity to produce 180,000 tonnes/year of benzene, 200,000 tonnes/year of propylene and 100,000 tonnes/year of sulphur, a government official said.

The company did not disclose any additional information on the project in a brief statement to the Mumbai Stock Exchange.

($1 = Rs47
http://www.icis.com/Articles/2009/05/26/9219394/indias-cals-gets-clearance-for-haldia-refinery.html
UnTunes:We will mock you!
From Uncyclopedia, the content-free encyclopedia

One of the best rock ballads ever to be performed by the Uncyclopedia Band, composed and with lyrics written by Oscar Wilde, it is sung to those who would dare invade Uncyclopedia and blank and vandalize pages, or make demands that admins unprotect pages or delete pages that offend them. It is, in fact, sung 38 times daily to people who vandalize this very page you are reading right now. If you haven't heard this song already, you must have never had Internet access.

History

Long ago, Oscar Wilde knew that one day, morons dumb enough to dare take on his creation and followers would need a song sung to them. It is said that music often calms the savage beast, and these invaders to Uncyclopedia are beasties indeed! That one day, an Unsongs namespace might be created to add this song in mp3 and ogg formats, along with the lyrics and vocals by Uncyclopedia admins and members, to be played for these invading sockpuppet hordes.

The Legend

This rock ballad is so powerful that none dare stand it, save for Benson, who is better than all of us anyway. No World Order, no Anonymous user, no dynamic IP dare resist this song of legend. It is played all over the Internet, and it is said that there are those who fear it. There are those who cannot stand it, for it is a righteous song, so righteous that it is truthful and it turns away the undead and the idiotic (who lack brains anyway like the undead) back to whence they came.

The Lyrics

Slashy you're a boy make a big mess
Blankin' in the article gonna be an idiot some day
You got spud on yo face
You big disgrace
Leavin' your slashes all over the place

We will we will mock you
We will we will mock you

Powershot you're a young man 'tard man
Writin' in the forum calling everyone gay
You got blood on yo face
You big disgrace
Whorin' your articles all over the place

We will we will mock you
We will we will mock you

Conspiracy you're an odd man bore man
Threatenin' with your lies gonna make you some cliché some day
You got spud on your face
You big disgrace
Admins better block you back with some mace

We will we will mock you
We will we will mock you
http://uncyclopedia.wikia.com/wiki/UnTunes:We_will_mock_you!?oldid=5295438
Existing books on debugging ? There was a recent thread about testing books on the perl-qa list. (which is not in the archive for some reason but you can see it in the news feed) I'd be glad to hear your opinion about similar books [debugger] break on watch ? Devel-ptkdb 1.1087 Released -- Forwarded message -- Date: 23 Nov 2003 16:24:00 -0500 From: Andrew E. Page aepage AT users.sourceforge.net Subject: Devel-ptkdb 1.1087 Released New Features: Hex Dump option of scalar variables in the Expresion Eval Window. This will dump the contents of scalar variable [debugger] finding deep recursin Any idea how can I use the debugger to find the cause of a deep recursion in my code ? Gabor Re: begginer question Hi Mike, you can either try to run the scripts from the command line perl -d myscript.pl or you try using another debugger: For example install Tk and the Devel::ptkdb module. Then read the CGI section of perldoc Devel::ptkdbd Gabor Re: Watching all variables in a specific namespace? On Sun, 10 Oct 2004, Yuval Yaari wrote: And more importantly, how can I watch array changes, hash changes, etc? Not the perfect solution but for a specific simple array or simple hash I'd put a watch on some expression that represents the whole array or hash: w join :,@a w join :,%h for
https://www.mail-archive.com/search?l=debugger@perl.org&q=from:%22Gabor+Szabo%22
GhostDoc is a Visual Studio 2010 add-on that makes creating useful comments as easy as a few button or mouse clicks. When you're working with existing application code, the chances of finding that code thoroughly documented are slim. Most developers aren't too demanding -- we just need a basic understanding of what an application is supposed to do when confronted with a method (even a quick sentence on what it does and information on its parameters) and away we go. The Visual Studio add-on GhostDoc provides the framework for code comments with a mouse-click or keystrokes. GhostDoc works as advertised I was apprehensive about the promise of GhostDoc. A colleague had been prodding me for quite some time to use it, but I remained resolute (stubborn?) and stuck with my old way of doing things (by hand). I was asked to work with an existing code base and to insert comments as a guide for other developers, so I finally gave in and installed GhostDoc to (hopefully) simplify the task at hand. Now I'm a fan of the tool. The GhostDoc download comes in two flavors: Basic and Pro. GhostDoc Basic is free and provides the basic functionality of the tool, as you can easily insert comments in your code with a few clicks. GhostDoc Pro adds a few more features such as commenting a complete file with one click and automated comment blocks are configurable. GhostDoc Pro is used in the screenshots in this post, but the features covered are available in both versions. At your fingertipsOnce installed, GhostDoc is available via the Tools dropdown menu within the Visual Studio IDE. Figure A shows the menu as it appears with GhostDoc Pro installed. GhostDoc Basic will create the same menu with a few less options like Document File grayed out so it's unavailable. In addition, a basic GhostDoc context menu is available by right-clicking your mouse when you're in your source code. This allows you to document while coding -- imagine that! 
Figure A: The GhostDoc menu within Visual Studio 2010

My favorite approach to commenting is via a predefined shortcut that is configured during installation. The [Ctrl][Shift][D] combination is the default, but you can choose what you like within reason, as [Ctrl][Shift][Delete] is not available for obvious reasons. If you decide that you dislike the chosen key combination, you can change it via the Re-assign Shortcut selection in the GhostDoc menu, as shown in Figure A. You can also choose Run Configuration Wizard to rerun the configuration wizard that runs on first installation.

Inserting comments

An individual portion of code (method, property, class definition, etc.) can be commented by moving the cursor to its location within the source code and choosing Document This or by using your key combination. As an example, I commented an existing method in my code as the next snippet demonstrates:

/// <summary>
/// Populates the team child nodes.
/// </summary>
/// <param name="curNode">The cur node.</param>
/// <returns></returns>
/// <remarks></remarks>
public static System.Windows.Forms.TreeNode populateTeamChildNodes(System.Windows.Forms.TreeNode curNode)

The comment shows that GhostDoc recognized the method's parameters and that it returns a value. In addition, it made a valiant effort to fill in the summary element by processing the method name -- parsing the words and adding "the" -- so populateTeamChildNodes becomes "Populates the team child nodes." Doing the same thing with a class declaration produces similar results:

/// <summary>
/// Summary description for Class1.
/// </summary>
/// <remarks></remarks>
public class Class1

It isn't flashy, but that is the point of GhostDoc -- it only writes XML comments.

You can go through code and insert these comments individually or comment a complete file on the fly (only in the Pro version), but you'll still need to update the comments. Note: The comments generated by GhostDoc are not useful until you add text to the comments and any other pertinent details. Thus, GhostDoc doesn't actually solve anything by itself -- it provides you with a framework for entering useful information in comments.

Wish list

GhostDoc isn't for everybody; the tool does a basic task, and it focuses on that without adding a lot of other features. One common complaint is the lack of support for generating an XML file or even HTML from the comments, but you can use a tool like Doxygen to create HTML output. I would like GhostDoc to add the return type to the returns comment element, as well as a list of raised exceptions to the comment block. If there is something you truly want or need, post a comment on the GhostDoc discussion forum, as they seem to be very responsive.

Leave a trail

Commenting source code has been an issue since developers began building applications. Thankfully, GhostDoc makes creating useful comments as easy as a few button or mouse clicks. There are plenty of great Visual Studio add-ins available today for documentation, as well as for many more tasks. What tools do you prefer in your daily coding? Share your recommendations and thoughts with the TechRepublic community.
https://www.techrepublic.com/blog/software-engineer/jump-start-code-comments-with-ghostdoc/
Ticket #3600 (closed Bugs: fixed) [tribool] BOOST_TRIBOOL_THIRD_STATE produces warning Description #include <boost/logic/tribool.hpp> BOOST_TRIBOOL_THIRD_STATE(ignore) leads to a compiler warning (gcc 4.2.1) warning: unused parameter 'dummy' (Sorry, could not find a suitable component.) Attachments Change History comment:1 Changed 7 years ago by anonymous - Owner set to dgregor - Component changed from None to logic Changed 7 years ago by jewillco - attachment tribool.patch added Patch to fix problem comment:2 Changed 7 years ago by jewillco I added a fix for this problem. Is it OK to commit it to the trunk? comment:3 Changed 7 years ago by jewillco - Status changed from new to closed - Resolution set to fixed comment:4 Changed 6 years ago by danieljames Note: See TracTickets for help on using tickets.
https://svn.boost.org/trac/boost/ticket/3600
Moo - Minimalist Object Orientation (with Moose compatibility) package Cat::Food; use Moo; use strictures 2; use namespace::clean;;. If a new enough version of Class::XSAccessor is available, it will be used to generate simple accessors, readers, and writers for better performance. Simple accessors are those without lazy defaults, type checks/coercions, or triggers. Readers and writers generated by Class::XSAccessor will behave slightly differently: they will reject attempts to call them with the incorrect number of parameters.. Moo provides several methods to any class using it. Foo::Bar->new( attr1 => 3 ); or Foo::Bar->new({ attr1 => 3 }); The constructor for the class. By default it will accept attributes either as a hashref, or a list of key value pairs. This can be customized with the "BUILDARGS" method. if ($foo->does('Some::Role1')) { ... } Returns true if the object composes in the passed role. if ($foo->DOES('Some::Role1') || $foo->DOES('Some::Class1')) { ... } Similar to "does", but will also return true for both composed roles and superclasses. my $meta = Foo::Bar->meta; my @methods = $meta->get_method_list; Returns a Moose metaclass object for the class. The metaclass will only be built on demand, loading Moose in the process. There are several methods that you can define in your class to control construction and destruction of objects. They should be used rather than trying to modify new or DESTROY yourself.. sub FOREIGNBUILDARGS { my ( $class, $options ) = @_; return $options->{foo}; } If you are inheriting from a non-Moo class, the arguments passed to the parent class constructor can be manipulated by defining a FOREIGNBUILDARGS method. It will receive the same arguments as "BUILDARGS", and should return a list of arguments to pass to the parent class constructor.. extends 'Parent::Class'; Declares a base class. Multiple superclasses can be passed for multiple inheritance but please consider using roles instead. 
The class will be loaded but no errors will be triggered if the class can't be found and there are already subs in the class. cannot be composed because they have conflicting method definitions. The roles will be loaded using the same mechanism as extends uses.wp or rw. ro stands for "read-only" and. There is, however, nothing to stop you using lazy and builder yourself with rwp or rw -a is discarded. Only if the sub dies does type validation fail. compatible or MooseX::Types style named types, look at Type::Tiny.. coerce Takes a coderef which is meant to coerce the attribute. The basic idea is to do something like the following: coerce => sub { $_[0] % 2 ? $_[0] : $_[0] + 1 }, Note that Moo will always execute your coercion: this is to permit isa entries to be used purely for bug trapping, whereas coercions are always structural to your code. We do, however, apply any supplied isa check after the coercion has run to ensure that it returned a valid value. If the isa option is a blessed object providing a coerce or coercion method, then the coerce option may be set to just 1. handles Takes a string handles => 'RobotRole' Where RobotRole is a, but not default or built values. The - for that case instead use a code reference that returns the desired value.; The following features come from MooseX::AttributeShortcuts: If you set this to just 1, the builder is automatically named _build_${attr_name}. If you set this to a coderef or code-convertible object, that variable will be installed under $class::_build_${attr_name} and the builder set to the same name.. NOTE: If the attribute is lazy, it will be regenerated from default or builder.. moosify Takes either a coderef or array of coderefs which is meant to transform the given attributes specifications if necessary when upgrading to a Moose role or class. You shouldn't need this by default, but is provided as a means of possible extensibility. 
Sub::Quote; use Moo; use namespace::clean;.); initializer is not supported in core since the author considers it to be a bad idea and Moose best practices recommend avoiding it. Meanwhile trigger or coerce are more likely to be able to fulfill your needs. There is no meta object. If you need this level of complexity you need Moose - Moo is and plain scalars, because passing a hash or array reference as a default is almost always incorrect since the value is then shared between all objects using that default. lazy_build is not supported; you are instead encouraged to use the is => 'lazy' option supported by Moo and MooseX::AttributeShortcuts. auto_deref is not supported since the author considers it a bad idea and it has been considered best practice to avoid it for some time. strict and warnings, in a similar way to Moose. The authors recommend the use of strictures, which enables FATAL warnings, and several extra pragmas when used in development: indirect, multidimensional, and bareword::filehandles.; use strictures 2;. ( make_immutable is a no-op in Moo to ease migration.) An extension MooX::late exists to ease translating Moose packages to Moo by providing a more Moose-like interface. Users' IRC: #moose on irc.perl.org (click for instant chatroom login) Development and contribution IRC: #web-simple on irc.perl.org (click for instant chatroom login) Bugtracker: Git repository: git://github.com/moose/Moo.git Git browser:@cpan.org> mattp - Matt Phillips (cpan:MATTP) <mattp@cpan.org>. This library is free software and may be distributed under the same terms as perl itself. See.
http://search.cpan.org/~haarg/Moo/lib/Moo.pm
Thomas A. Anastasio
22 November 1999

Skip Lists were developed around 1989 by William Pugh of the University of Maryland. Professor Pugh sees Skip Lists as a viable alternative to balanced trees such as AVL trees or to self-adjusting trees such as splay trees. The find, insert, and remove operations on ordinary binary search trees are efficient, O(log N), when the input data is random, but less efficient, O(N), when the input data is ordered. Skip List performance for these same operations and for any data set is about as good as that of randomly-built binary search trees - namely O(log N).

In an ordinary sorted linked list, find, insert, and remove are in O(N) because the list must be scanned node-by-node from the head to find the relevant node. If somehow we could scan down the list in bigger steps ("skip" down, as it were), we would reduce the cost of scanning. This is the fundamental idea behind Skip Lists.

In simple terms, Skip Lists are sorted linked lists with two differences: the nodes in an ordinary list have one "next" reference each, whereas the nodes in a Skip List may have many forward references; and the number of forward references for a given node is determined probabilistically.

We speak of a Skip List node having levels, one level per forward reference. The number of levels in a node is called the size of the node. In an ordinary sorted list, insert, remove, and find operations require sequential traversal of the list. This results in O(N) performance per operation. Skip Lists allow intermediate nodes in the list to be "skipped" during a traversal - resulting in an expected performance of O(log N) per operation.

To introduce the Skip List data structure, let's look at three list data structures that allow skipping, but are not truly Skip Lists. The first of these, shown in Figure 1, allows every other node to be skipped in a traversal. The second, shown in Figure 2, additionally allows every fourth node to be skipped. The third, shown in Figure 3, additionally allows every eighth node to be skipped, and suggests further development of the idea to skipping every 2^i-th node. For all three pseudo-skip-lists, there is a header node and the nodes do not all have the same number of forward references.
Every node has a reference to the next node, but some have additional references to nodes further along the list. Here is pseudo-code for the find operation on each list. This same algorithm will be used with real Skip Lists later.

Comparable & find(const Comparable & X)
{
    node = header node
    for (reference level of node from (nodesize - 1) down to 0)
        while (the node referred to is less than X)
            node = node referred to
    if (node referred to has value X)
        return node's value
    else
        return item_not_found
}

We start at the highest level of the header node and follow the references along this level until the value at the node referred to is equal to or larger than X. At this point, we switch to the next lower level and continue the search. Eventually, we shall be dealing with a reference at the lowest level of a node. If the next node has the value X, then we return that value; otherwise we return a value that signals unsuccessful search.

Figure 1 shows a 16-node list in which every second node has a reference two nodes ahead. The value stored at each node is shown below it (and corresponds in this example to the position of the node in the list). The header node has two levels; it's no smaller than the largest node in the list. Node 2 has a reference to node 4, two nodes ahead. Similarly for nodes 4, 6, 8, etc. - every second node has a reference two nodes ahead.

It's clear that the find operation does not need to visit each node. It can skip over every other node, then do a final visit at the end. The number of nodes visited is therefore no more than ceil(N/2) + 1. For example, the nodes visited in scanning for the node with value 15 would be 2, 4, 6, 8, 10, 12, 14, 16, 15, a total of 9. Follow the algorithm for find given above for this simple example to be sure you understand it thoroughly.

The second example is a list in which every second node has a reference two nodes ahead and additionally every fourth node has a reference four nodes ahead. Such a list is shown in Figure 2.
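The visit count for the Figure 1 structure can be checked with a short simulation. The following sketch (my own, not part of the original notes) walks a list holding the values 1..n, where the header and every even node carry an extra link two nodes ahead, and records every node the generic find algorithm examines:

```python
def find_path(n, x):
    # Nodes hold 1..n in order; level-1 links jump 2 nodes, level-0 links jump 1.
    # Returns the sequence of node values examined by the generic find algorithm.
    path = []
    pos = 0                      # start at the header, in front of node 1
    for step in (2, 1):          # scan the top level first, then the bottom
        while pos + step <= n:
            path.append(pos + step)    # examine the node referred to
            if pos + step < x:
                pos += step            # still less than x: advance to it
            else:
                break                  # otherwise drop down a level
    return path

# Matches the traversal listed in the text: 9 nodes examined for x = 15.
assert find_path(16, 15) == [2, 4, 6, 8, 10, 12, 14, 16, 15]
```

Note that the bound of roughly N/2 + 1 examined nodes holds because the top level covers the list in steps of two, and at most one extra node is examined at the bottom level.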
The header is still no smaller than the largest node in the list. The find operation can now make bigger skips than those for the list in Figure 1. Every fourth node is skipped until the search is confined between two nodes of size 3. At this point, as many as three nodes may need to be scanned. It is also possible that some nodes will be visited more than once using the algorithm given above. The number of nodes visited is no more than ceil(N/4) + 3. As an example, look at the nodes visited in scanning for the node with value 15. These are 4, 8, 12, 16, 14, 16, 15, for a total of 7.

This final example is for a list in which some skip lengths are even larger. Every 2^i-th node (i >= 1) has a forward reference 2^i nodes ahead. For example, every 2nd node has a reference 2 nodes ahead; every 8th node has a reference 8 nodes ahead, etc. Figure 3 shows a short list of this type. Once again, the header is no smaller than the largest node on the list. It is shown arbitrarily large in the Figure.

Suppose the Skip List in Figure 3 contained 32 nodes and consider a search in it. Working down from the highest level, we first encounter node 16 and have cut the search in half. We then search again, one level down, in either the left or right half of the list, again cutting the remaining search in half. We continue in this manner till we find the sought-after node (or not). This is quite reminiscent of binary search in an array and is perhaps the best way to intuitively understand why the maximum number of nodes visited in this list is in O(log N).

This data structure is looking pretty good, but there's a serious problem with it for the insert and remove operations. The work required to reorganize the list after an insertion or deletion is in O(N). For example, suppose that the first element is removed in Figure 3. Since it is necessary to maintain the strict pattern of node sizes, values from 2 to the end must be moved toward the head and the end node must be removed.
A similar situation occurs when a new element is added to the list. This is where the probabilistic approach of a true Skip List comes into play. A Skip List is built with the same distribution of node sizes, but without the requirement for the rigid pattern of node sizes shown. It is no longer necessary to maintain the rigid pattern by moving values around after a remove or insert operation. Pugh shows that with high probability such a list still exhibits O(log N) behavior. The probability that a given Skip List will perform badly is very small.

Figure 4 shows the list of Figure 3 with the nodes reorganized. The distribution of node sizes is exactly the same as that of Figure 3; the nodes just occur in a different pattern. In this example, the pattern would require that 50% of the nodes have just one reference, 25% have just two references, 12.5% have just three references, etc. The distribution of node sizes is maintained, but the strict order is not required. Figure 4 shows one way this might work out. Of course, this is probabilistic, so there are many other possible node sequences that would satisfy the required probability distribution.

When inserting new nodes, we choose the size of the new node probabilistically. Every Skip List has an associated (and fixed) probability p that determines the distribution of nodes. A fraction p of the nodes that have at least r forward references also have r + 1 forward references. The Skip List does not have to be reorganized when a new element is inserted.

Suppose we have an infinitely-long Skip List with associated probability p. This means that a fraction p of the nodes with a forward reference at level i also have a forward reference at level i + 1. Let f_k be the fraction of nodes having precisely k forward references (i.e., f_k is the fraction of nodes of size k). Then f_(k+1) = p * f_k, and therefore f_k = p^(k-1) * f_1. Since the fractions must sum to one,

    f_1 * (1 + p + p^2 + ...) = 1

Recall that 1 + p + p^2 + ... is the sum of the geometric progression with first term 1 and common ratio p. Thus, the sum is 1/(1 - p). Therefore, f_1 = 1 - p. Since f_k = p^(k-1) * f_1, we can write

    f_k = (1 - p) * p^(k-1)

Example: In the situation shown in Figure 4, p = 1/2.
Therefore, one-half of the nodes with at least one reference have two references; one-half of those with at least two references have three references, etc. You should work out the distribution for a SkipList with associated probability of 1/4 to be sure you understand how distributions are computed.

int generateNodeLevel(double p, int maxLevel)
{
    int level = 1;
    while (drand48() < p)
        level++;
    return (level > maxLevel) ? maxLevel : level;
}

Note that the level of the new node is independent of the number of nodes already in the Skip List. Each node is chosen only on the basis of the Skip List's associated probability.

When the associated probability is p, the average number of comparisons that must be done for find is in O(log N). For example, for a list of size 65,536, the average number of nodes to be examined is 34.3 for p = 1/4 and 35 for p = 1/2. This is a tremendous improvement over an ordinary sorted list, for which the average number of comparisons is N/2 = 32,768.

The level of the header node is the maximum allowed level in the SkipList and is chosen at construction. Pugh shows that the maximum level should be chosen as log base 1/p of N. Thus, for p = 1/4, the maximum level for a SkipList of up to 65,536 elements should be chosen no smaller than log base 4 of 65,536, which is 8.

The probability that an operation will take longer than expected is a function of the probability associated with the list. For example, Pugh calculates that for a 4,096-element list, the probability that the actual time will exceed the expected time by a factor of 3 is less than one in 200 million.

The relative time and space performance of a Skip List depends on the probability level associated with the list. Pugh suggests that a probability of 0.25 be used in most cases. If the variability of performance is important, he suggests using a probability of 0.5 (variability decreases with increasing probability). Interestingly, the average number of references per node is only 1.33 when a probability of 0.25 is used.
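A direct translation of generateNodeLevel into Python (with drand48 replaced by Python's random module; the naming and the empirical check are mine) lets us confirm the node-size distribution discussed above:

```python
import random

def generate_node_level(p, max_level, rng):
    # Grow the node while a biased coin keeps coming up "heads".
    level = 1
    while rng.random() < p:
        level += 1
    return min(level, max_level)

rng = random.Random(42)
p = 0.25
levels = [generate_node_level(p, 16, rng) for _ in range(100_000)]

# The fraction of size-k nodes should approach (1 - p) * p^(k-1): 75%, 18.75%, ...
assert abs(levels.count(1) / len(levels) - 0.75) < 0.01
assert abs(levels.count(2) / len(levels) - 0.1875) < 0.01

# The average number of references per node approaches 1 / (1 - p),
# i.e. about 1.33 for p = 0.25, as noted in the text.
assert abs(sum(levels) / len(levels) - 4 / 3) < 0.02
```

The check illustrates the key point: node sizes depend only on the fixed probability p, never on how many nodes the list already holds.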
A binary search tree, of course, has 2 references per node, so Skip Lists can be more space-efficient. We will examine the SkipList methods insert and remove in some detail below. Pseudocode for find was given above. Insertion and deletion involve searching the SkipList to find the insertion or deletion point, then manipulating the references to make the relevant change.

When inserting a new element, we first generate a node that has had its level selected randomly. The SkipList has a maximum allowed level set at construction time. The number of levels in the header node is the maximum allowed. For convenience in searching, the SkipList keeps track of the maximum level actually in the list. There is no need to search levels above this actual maximum.

In Skip Lists, we need pointers to all the see-able previous nodes between the insertion point and the header. Imagine standing at the insertion point, looking back toward the header. All the nodes you can see are the see-able nodes. Some nodes may not be see-able because they are blocked by higher nodes. Figure 5 shows an example. We construct a backLook node that has its forward pointers set to the relevant see-able nodes. This is the type of node returned by the findInsertPoint method. The public insert(const Comparable &) method decides on the new node's size by random choice, then calls the overloaded private method insert(const Comparable &, int, bool &) to do all the work.

The translation was initiated by Thomas Anastasio on 2000-02-19
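Putting the pieces together, here is a minimal, self-contained skip list in Python supporting insert and find. It follows the general scheme the notes describe (search from the top level down, remember the last node seen at each level, then splice the new node in), but the class layout and names are my own simplification, not the C++ interface the notes refer to:

```python
import random

class SkipList:
    def __init__(self, p=0.25, max_level=16, seed=1):
        self.p, self.max_level = p, max_level
        self.rng = random.Random(seed)
        # A node is [value, forward_links]; the header carries every level.
        self.header = [None, [None] * max_level]
        self.level = 1                      # highest level actually in use

    def _random_level(self):
        level = 1
        while self.rng.random() < self.p and level < self.max_level:
            level += 1
        return level

    def insert(self, value):
        # "update" plays the role of the backLook node: the last node
        # see-able at each level, looking back from the insertion point.
        update = [self.header] * self.max_level
        node = self.header
        for i in range(self.level - 1, -1, -1):
            while node[1][i] is not None and node[1][i][0] < value:
                node = node[1][i]
            update[i] = node
        size = self._random_level()
        self.level = max(self.level, size)
        new = [value, [None] * size]
        for i in range(size):               # splice in at each of its levels
            new[1][i] = update[i][1][i]
            update[i][1][i] = new

    def find(self, value):
        node = self.header
        for i in range(self.level - 1, -1, -1):
            while node[1][i] is not None and node[1][i][0] < value:
                node = node[1][i]
        node = node[1][0]
        return node is not None and node[0] == value

sl = SkipList(seed=42)
for v in [31, 4, 15, 9, 26, 5]:
    sl.insert(v)
assert sl.find(15) and sl.find(31)
assert not sl.find(8)
```

Note how insert performs the same top-down search as find while recording the see-able nodes, so the splice at each level costs O(1) once the search is done.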
http://www.csee.umbc.edu/courses/undergraduate/341/fall01/Lectures/SkipLists/skip_lists/skip_lists.html
SQL Tutorial

Table of contents

1. SqlSession, Sql, opening database connection
2. Using global main database, executing statements with parameters, getting resultset info
3. Using SqlExp
4. Schema file
5. Using schema file to define SqlId constants
6. Using structures defined by schema files

SqlSession-derived objects represent database connections. Each SQL database (Sqlite3, Microsoft SQL, Oracle, MySQL, PostgreSQL) has its own session class derived from SqlSession. The Sql class is used to issue SQL statements and retrieve results:

#include <Core/Core.h>
#include <plugin/sqlite3/Sqlite3.h>

using namespace Upp;

CONSOLE_APP_MAIN
{
    Sqlite3Session sqlite3;
    if(!sqlite3.Open(ConfigFile("simple.db"))) {
        Cout() << "Can't create or open database file\n";
        return;
    }

#ifdef _DEBUG
    sqlite3.SetTrace();
#endif

    Sql sql(sqlite3);
    sql.Execute("select date('now')");
    while(sql.Fetch())
        Cout() << sql[0] << '\n';
}

In this tutorial, we are using the Sqlite3 database. The connection method varies with the database; in this case it is done using Open. SetTrace is useful in debug mode - all issued SQL statements and SQL errors are logged in the standard U++ log. Each Sql instance has to be associated with some SqlSession - it is passed as a constructor parameter (the parameter-less Sql constructor uses the global session, more on that in section 2). To execute SQL statements, use Execute. If the executed statement is a select, it may return a result set, which is retrieved using Fetch. Columns of the result set are then accessed by Sql::operator[] using the index of the column (starting with 0). Values are returned as the Value type.

Most applications need to work with just a single database backend, so repeating the SqlSession parameter in all Sql declarations would be tedious. To this end, U++ supports the concept of a "main database", which is represented by the SQL variable. SQL is of Sql type.
When any other Sql variable is created with the default constructor (no session parameter provided), it uses the same session as the one SQL is bound to. To assign a session to the global SQL, use operator=:

SQL = sqlite3;

SQL.Execute("drop table TEST");
SQL.ClearError();
SQL.Execute("create table TEST (A INTEGER, B TEXT)");
for(int i = 0; i < 10; i++)
    SQL.Execute("insert into TEST(A, B) values (?, ?)", i, AsString(3 * i));

Sql sql;
sql.Execute("select * from TEST");
for(int i = 0; i < sql.GetColumns(); i++)
    Cout() << sql.GetColumnInfo(i).name << '\n';
while(sql.Fetch())
    Cout() << sql[0] << " \'" << sql[1] << "\'\n";

As the global SQL is a regular Sql variable too, it can be used to issue SQL statements.

Warning: While it is possible to issue select statements through SQL, based on experience this is not recommended - way too often the result set of a select is canceled by issuing some other command, e.g. in a routine called as part of a Fetch loop. One exception to this rule is using SQL::operator% to fetch a single value, like String txt = SQL % Select(TEXT).From(DOCTEMPLATE).Where(ID == id); (see further tutorial topics for a detailed explanation of this code).

To get information about result set columns, you can use GetColumns to retrieve the number of columns and GetColumnInfo to retrieve information about a column - it returns a SqlColumnInfo reference with information like the name or type of the column.

U++ contains a unique feature, "SqlExp". This is a mechanism where you construct SQL statements as C++ expressions (using heavily overloaded operators). There are three advantages to this approach:

- SQL statements are at least partially checked at compile time
- As such statements are yet to be interpreted, it is possible to hide some differences between DB engines
- It is much easier to create complex dynamic SQL statements

Database entity identifiers (like table or column names) can be defined as the SqlId type. For the complete list of supported SQL statements, see SqlExp in examples.
SqlId A("A"), B("B"), TEST("TEST");

for(int i = 0; i < 10; i++)
    SQL * Insert(TEST)(A, i)(B, AsString(3 * i));

sql * Select(A, B).From(TEST);
while(sql.Fetch())
    Cout() << sql[A] << " \'" << sql[B] << "\'\n";

SqlId identifiers can also be used as the parameter of Sql::operator[] to retrieve particular columns of the result set.

Schema files can be used to describe the database schema. Such schema files can be used to upload the schema to the database, to define SqlId constants and also to work with database records as C++ structures. The following example demonstrates using a schema file to create the database schema in an SQL database server.

MyApp.sch:

TABLE(TEST)
    INT (A)
    STRING (B, 200)
END_TABLE

MyApp.h:

#ifndef _MyApp_h_
#define _MyApp_h_

#define SCHEMADIALECT <plugin/sqlite3/Sqlite3Schema.h>
#define MODEL <Sql04/MyApp.sch>

#include "Sql/sch_header.h"

#endif

main.cpp:

#include "MyApp.h"
#include <Sql/sch_schema.h>
#include <Sql/sch_source.h>

SqlSchema sch(SQLITE3);
All_Tables(sch);
SqlPerformScript(sch.Upgrade());
SqlPerformScript(sch.Attributes());

As the names of columns are present in the database schema, it is natural to recycle them to create SqlId constants. However, due to the C++ one-definition rule (.sch files are interpreted as C++ sources, using a changing set of macros), you have to mark identifiers using an underscore:

TABLE_(TEST)
    INT_ (A)
    STRING_ (B, 200)
TABLE_(TEST2)

MyApp.h:

#define MODEL <Sql05/MyApp.sch>

main.cpp:

Schema files also define structures that can be used to fetch, insert or update database records. The names of such structures are identical to the names of the tables, with an S_ prefix:

S_TEST x;
for(int i = 0; i < 10; i++) {
    x.A = i;
    x.B = AsString(3 * i);
    SQL * Insert(x);
}

sql * Select(x).From(TEST);
while(sql.Fetch(x))
    Cout() << x.A << " \'" << x.B << "\'\n";

Recommended tutorials: If you want to learn more, we have several tutorials that you can find useful:

Skylark - now you know everything about databases - why not use your knowledge to become a web development star?
U++ Core value types - still not very confident with U++? In this tutorial you will learn the basics.

Last edit by cxl on 12/02/2017.
Friday Fun: MicroPython Weather Station This week we build a weather station using the Wemos D1 Mini and MicroPython, but how do we show the weather? Well with neopixels of course! Being British I have two go to conversations in taxis. The first is "Are you busy today?" and the second is "Well the weather is hot/cold/windy/wet/snowy isn't it?" < sarcasm >Both of these conversations taxi drivers love and my questions are totally unique.</ sarcasm > So what are we building? I wonder what the weather is like? pic.twitter.com/ZRnCcTNrbP— biglesp (@biglesp) April 12, 2019 Our project is to build an Internet of Things (IoT) weather station which will get the local weather from an online service and display an icon on a screen to advise us on the weather. This is triggered by a capacitive touch button, and displayed on an 8x8 LED matrix. So what equipment do we need? - A Wemos D1 Mini (About £1.90 from Aliexpress) - A Unicorn HAT from Pimoroni or an 8x8 WS2812 matrix - Capacitive touch button - Something to contain the project, I used a Poundland light up cloud - 2 x Wago 221 connectors - Micro USB breakout - Micro USB lead to cut up - Soldering equipment - Hookup wire - A Micro USB power supply, 5V 2A is best - A computer - All of the code, images and the spreadsheets used for the icons are available via my Github page Getting our computer setup Flashing the Wemos D1 Mini The Wemos D1 Mini does not come with MicroPython pre-installed, so we need to use a tool to install it. In an earlier blog post I covered Vanguard created by Cefn Hoile. This tool makes flashing MicroPython on to an ESP8266 (which the Wemos D1 Mini is) really easy. Take a look at the blog post for full details, in this tutorial we shall do the bare minimum to get MicroPython installed. 
To install Vanguard, open a terminal / command prompt and type:

Linux / Mac:

pip3 install vgkits-vanguard

Windows:

pip3.exe install vgkits-vanguard

To flash MicroPython on to the Wemos D1 Mini, connect your Wemos D1 Mini to your computer using a known good quality USB cable. In the terminal / command line, type the following to flash MicroPython on to the board:

vanguard brainwash micropython

Wait for the brainwash to complete; when it does, control will be returned to you. To test that it is working, type the following to connect to an interactive console that will control your Wemos D1 Mini:

vanguard shell

In the console, you may need to press Enter to get a cursor. Then type in a little Python to check that it is working.

for i in range(10):
    print("Visit for hacks")

If you can see it, then it is working! To exit the console press CTRL + ] and you can now close the terminal / command prompt.

Mu

For this project we shall be using Mu, an easy to use Python editor created by my friend Nicholas Tollervey. In fact we need to use the very latest version of Mu, called an Alpha release, in order to use it with our Wemos D1 Mini. Download the latest Alpha release from the Mu website. Install Mu and, when ready, open it.

Hey fellow Linux users! At the time of writing there is no easy download-and-install version of Mu Alpha, so we need to do a little more work. Install the current stable version of Mu:

sudo pip3 install mu-editor

Now download the latest Alpha of Mu from Github and extract the ZIP into a new folder in your home directory. But don't call it mu, as you have that already; I call mine mu-alpha. To run Mu Alpha we need to run the following command from the terminal:

/mu-alpha/mu/run.py

This will load the alpha version of Mu with support for ESP devices.

Writing the code

With Mu open and our Wemos D1 Mini attached to the computer, we should be prompted to change mode.
Mu is a modal editor, which means it can switch modes depending on the device we are working on, or the choice of the user. If you are asked to confirm using ESP mode, hit Ok / Yes and you are good to carry on. If you have not been asked to change mode, then you can do it yourself by clicking on Mode, selecting ESP MicroPython and clicking Ok. In the bottom right of the screen you should now see Esp; this means we are in ESP MicroPython mode.

We start our code with five imports. These enable us to use libraries of pre-written Python code in our project. The first import is for the Pin class in the machine library, which enables us to use the physical GPIO pins on the Wemos D1 Mini. The second import is for the sleep function found in the time library, which enables us to control the pace of our code. Next we import neopixel to work with the neopixel LEDs on our Unicorn HAT. Then we import network to enable our Wemos D1 Mini to connect to WiFi. Lastly we import urequests, which is MicroPython's version of requests and enables us to use and interact with websites and remote data.

from machine import Pin
from time import sleep
import neopixel
import network
import urequests

The next step is to create an object called pixels, in which we create a connection from the Wemos D1 Mini to the Unicorn HAT. Here we tell the board that our Unicorn HAT will be connected to GPIO14, which is marked D5 on the board. It also instructs the board that GPIO14 is an output pin. Then we tell the neopixel library how many neopixels there are on the Unicorn HAT, which is 64 as it is an 8 x 8 grid.

pixels = neopixel.NeoPixel(Pin(14, Pin.OUT), 64)

To create the weather icons I had to be inventive, and that meant I had to think about the problem. How could I work out which LED to light up to create a simple 8-bit icon? I started with an 8x8 grid drawn in a spreadsheet. Then I coloured in the sections the colour that I wished to see on the Unicorn HAT.
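The spreadsheet step can also be automated. Here is a small helper of my own (not part of the original project) that turns an ASCII sketch of an icon into a flat list of LED numbers. It assumes the LEDs are numbered 0-63 in plain row-major order - check your own matrix, as some boards snake alternate rows:

```python
def icon_to_indexes(rows):
    """Convert an 8x8 ASCII sketch into a list of lit LED numbers.

    '#' marks a lit pixel; anything else is off. Row-major numbering
    is assumed (LED 0 top-left, LED 63 bottom-right).
    """
    lit = []
    for y, row in enumerate(rows):
        for x, cell in enumerate(row):
            if cell == "#":
                lit.append(y * 8 + x)
    return lit

# A tiny 'sun' sketch - purely illustrative, not the icon from the post.
sketch = [
    "........",
    "...##...",
    "..####..",
    "..####..",
    "...##...",
    "........",
    "........",
    "........",
]
print(icon_to_indexes(sketch))
```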
So for the sun I used yellow. You will see the numbers inside each cell. These are the numbers from 0 to 63 and their direction around the Unicorn HAT, so I knew that by using these numbers I could make the icon for the Unicorn HAT. For each main weather condition available on Open Weather Map (by "main" I mean the coarse weather type: Rain, Snow, Clear, Cloudy, Thunderstorm), I created lists that would store the LEDs that I wish to light up. Please note that each list is one really long line, despite what the code formatting looks like on this blog.

So for sun:

sun = [11,12,18,19,20,21,25,26,27,28,29,30,33,34,35,36,37,38,42,43,44,45,51,52]

For cloudy:

cloudy = [4,5,6,9,10,11,12,19,20,21,22,25,26,27,28,29,30,33,34,35,36,37,38,41,42,43,44,52,53,54,57,58,59]

For rain there are two lists, for the rain cloud and the rain itself:

rain_cloud = [4,5,6,10,11,12,19,20,22,26,27,28,29,30,33,34,35,36,38,42,43,44,52,54,57,58,59]
rain = [7,9,21,23,25,37,39,41,53]

For lightning / thunder there are two lists, for the cloud and the lightning:

lightning_cloud = [4,5,6,9,10,11,12,19,21,25,27,28,29,30,33,34,35,36,37,38,42,44,52,54,57,58,59]
lightning = [8,20,22,26,39,41,43,53]

For snow and ice:

snow_ice = [1,4,7,9,11,13,19,20,21,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,42,43,44,50,52,54,56,59,62]

I also added three extra lists to create a pretty green grass, blue sky and sun scene. The intention is to use that as an error alert, but it is not implemented in this project.

clear_sky = [2,3,4,5,10,11,12,13,16,17,18,19,20,21,26,27,28,29,30,31,32,33,34,35,36,37,42,43,44,45,46,47,48,49,50,51,52,53,58,59,60,61,62,63]
clear_sun = [0,1,14,15]
clear_gnd = [6,7,8,9,22,23,24,25,38,39,40,41,54,55,56,57]

Connecting to WiFi

What makes the Wemos D1 Mini so much fun is that we can connect it to WiFi really easily. Using the network library we create an object called sta and use that to start a connection. We then turn on the WiFi chip.
Then we connect to our WiFi using the SSID and password for our network. Lastly we print the status of our connection.

#Setup WiFi
sta = network.WLAN(network.STA_IF)
sta.active(True)
sta.connect("YOUR WIFI SSID", "WiFi Password")
print(sta.isconnected())

Setting up the capacitive touch button

Our capacitive touch button is connected to GPIO12, which is D6 on the Wemos D1 Mini. We create an object called button and tell the board that it is on GPIO12 and that it is an input.

#setup the button
button = Pin(12, Pin.IN)

Creating a delay

To give the user time to see the image on the Unicorn HAT we need to create a universal delay variable. This means that if we need to tweak the timing we can do it in one place rather than edit the same line six or seven times.

#Delay for showing image
delay = 10

Clearing the Unicorn HAT

Each time we light up the Unicorn HAT we need to ensure that the screen is cleared of the last icon, otherwise the two mix and it looks a bit ghastly. Using a function called clear() means we can just call the function and have it perform the task in less than a second. The function works in a for loop that will loop round 64 times, the same as the number of LEDs on the Unicorn HAT. Each time the for loop goes round, the value of i increases by 1. So, starting at 0 and ending at 63, each LED is set to 0x00, 0x00, 0x00, which means turn off that pixel. Then we write the changes to the LEDs to trigger them to turn off.

#Clear the screen
def clear():
    for i in range(64):
        pixels[i] = (0x00, 0x00, 0x00)
    pixels.write()

The main body of code

In order to constantly run our code we need to start a loop, and the best one for this task is a while True: loop. The first task the loop has is to print the current status of the button. If it has not been pressed then it will print 0 to the shell.

while True:
    print(button.value())

But now we ask Python to check "Has the button been pressed?" This checks the status (value) and, if it matches 1, our code is triggered.
if button.value() == 1:

With the button pressed, our code springs into action and the first task is to get the weather data from Open Weather Map. For this we create a dictionary called r, in which we get and store the weather data. Make sure to change <<WHERE DO YOU LIVE?>> to match your town / city and country; for example, I used Blackpool, UK for my test. You will also need to get an API key and change API KEY to your private key.

r = urequests.get("<<WHERE DO YOU LIVE?>>&appid=API KEY").json()

API Key? To use Open Weather Map you need an API key, which is basically a private key that enables you to use their services. Each key is unique and can be traced to the creator. To get an API key you will need to visit Open Weather Map, click on Sign Up and follow the account creation process. Then you will need to sign in and click on API keys from the menu. In the next screen give your key a name, something memorable that links to the project. Then click Generate to create the key. Copy the key into Mu.

So now that we have the weather data, what do we do with it? Well, the r dictionary contains lots of data: temperatures, humidity, wind speed, visibility etc. But we are just interested in the main heading - Sun, Rain, Snow etc. So we create a new variable called weather, in which we store the main heading from the returned data. This can be found using three keys to return the value of "main". These keys are ["weather"][0]["main"], and by using all three we return the specific data that we need. We then print it to the Python shell for debugging.

weather = r["weather"][0]["main"]
print(weather)

Here starts a series of conditional tests that will check the value stored in weather against the values Clear, Clouds, Rain, Thunderstorm, Snow. Whenever a match is made, the corresponding code is activated. Let's use "Clear" as an example. If the weather is clear then we start a for loop that will iterate over the values in the sun list that we created earlier.
This means that each LED (pixel) numbered in the list will be set to a certain colour - 0x4c, 0x99, 0x00, which is yellow for the Unicorn HAT LEDs. We then write the changes to the Unicorn HAT in order for the LEDs to turn on with that colour. Then we sleep using the delay variable (set to 10 seconds by default) before we finally call the clear() function to turn off all the LEDs.

if weather == "Clear":
    for values in sun:
        pixels[values] = (0x4c, 0x99, 0x00)
    pixels.write()
    sleep(delay)
    clear()

Where did I get the colour code info from? When working with neopixels we typically see (255, 0, 0), which are RGB (Red, Green, Blue) colour values; by mixing these values we can make any colour we wish. For the MicroPython neopixels we use hex values, so to recreate the previous value in hex we use 0xff, 0x00, 0x00. But how can we learn this? Well, I use this reference to convert between the values, and then I just need to remember to add 0x before each value.

Here is the rest of the code for the other weather conditions. Note that for rain and thunderstorm there are two for loops, one after the other, as we need to add rain / lightning to the cloud. The last line of code in the project tells the board to wait for one second before repeating the loop.
elif weather == "Clouds":
    for values in cloudy:
        pixels[values] = (0x99, 0x99, 0x99)
    pixels.write()
    sleep(delay)
    clear()
elif weather == "Rain":
    for values in rain_cloud:
        pixels[values] = (0x55, 0x55, 0x55)
    pixels.write()
    for values in rain:
        pixels[values] = (0x00, 0x00, 0x99)
    pixels.write()
    sleep(delay)
    clear()
elif weather == "Thunderstorm":
    for values in lightning_cloud:
        pixels[values] = (0x55, 0x55, 0x55)
    pixels.write()
    for values in lightning:
        pixels[values] = (0x4c, 0x99, 0x00)
    pixels.write()
    sleep(delay)
    clear()
elif weather == "Snow":
    for values in snow_ice:
        pixels[values] = (0x00, 0x00, 0x99)
    pixels.write()
    sleep(delay)
    clear()

sleep(1)

With the code completed, click on Save and call the file main.py. This is very important, as main.py is what the Wemos will look for on boot.

Copy the code to your Wemos D1 Mini

Now this bit is something new for most Mu users. With the main.py code saved to our computer, we now need to flash it onto the Wemos. To do this, click on Files and make sure that your Wemos D1 Mini is connected to your computer. From the right-hand side pane, find the main.py file and drag it to the left pane, which represents your Wemos D1 Mini. Drop the file and it will copy across. That's it! Your code is on the Wemos D1 Mini and now we can build the circuit for the hardware!

The hardware build

For my build I provided power via a micro USB breakout. This breakout enabled me to use standard micro USB power supplies, but break out the 5V and GND connections to two Wago connectors, which provided a common 5V and GND connection. So then I could cut and strip the wires of a micro USB lead, connect the 5V (RED) and GND (BLACK) to the Wago connectors, and then insert the micro USB into the Wemos D1 Mini.

The Unicorn HAT has three pins marked VCC, DIN and GND. Solder wires from VCC to the 5V power supply (Wago connector for me) and GND to your common GND. The DIN pin connects to the Wemos D1 Mini GPIO14 pin (marked D5 on the board), matching the code above.
To add a little mechanical strength I superglued the wires down and used a little tape. The capacitive touch button is powered from the 3V3 pin of the Wemos D1 Mini, so solder wires from the button's VCC and GND to 3V3 and GND accordingly. The output pin of the button goes to GPIO12 (marked D6) of the Wemos D1 Mini, matching the code above. Here is a quick reference for the wiring.

Testing

Wemos D1 Mini retrieving weather day from @OpenWeatherMap and using @pimoroni Unicorn HAT to display icons for weather conditions. Created using #MicroPython #Mu and the new ESP mode.— biglesp (@biglesp) April 12, 2019 Video contains a great fail too! #fridayfun pic.twitter.com/zw3pu2YRlb

Before stuffing it all in a box, TEST IT!! Does it work? Are you happy? Ok, now stuff it in the box! I made sure that my button worked through the plastic case, but I needed to press the button for 2 seconds to make it work. To hold the button in place I used blu tack. The Unicorn HAT is held in place by lots of wire!

I wonder what the weather is like? pic.twitter.com/ZRnCcTNrbP— biglesp (@biglesp) April 12, 2019

So there we have it! We've built a weather station in less than 80 lines of code and managed to reuse some old boards along the way. Happy Hacking!
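If you'd like to try the display logic before the hardware arrives, the neopixel buffer can be faked on a desktop machine. The class below is a stand-in of my own for testing - it only mimics the two calls the project uses (item assignment and write()), and the shortened icon list is just for demonstration:

```python
class FakeNeoPixel:
    """Desktop stand-in for neopixel.NeoPixel: a list of RGB tuples."""
    def __init__(self, n):
        self.n = n
        self.buf = [(0x00, 0x00, 0x00)] * n
        self.writes = 0
    def __setitem__(self, i, colour):
        self.buf[i] = colour
    def write(self):
        # On real hardware this pushes the buffer out to the LEDs.
        self.writes += 1

pixels = FakeNeoPixel(64)

def clear():
    # Same logic as the project's clear(): blank every LED, then write.
    for i in range(64):
        pixels[i] = (0x00, 0x00, 0x00)
    pixels.write()

sun = [11, 12, 18, 19, 20, 21]          # a shortened icon list
for v in sun:
    pixels[v] = (0x4c, 0x99, 0x00)
pixels.write()
print(sum(p != (0, 0, 0) for p in pixels.buf))   # 6 lit pixels
clear()
print(sum(p != (0, 0, 0) for p in pixels.buf))   # 0 after clearing
```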
Again a nice one for perl with its string handling!

Using Factor: The first version finds the "maximum" pair of words. The key depends on whether the intersection of their respective sets of letters is empty. If it is empty, the key is the product of the lengths of the two words. Otherwise, the key is zero. This second version builds a dictionary mapping letters to sets of words containing that letter. Then set operations can be used to calculate pairs of words that have no letters in common.

The second line was a comment; it lost a '#' character somewhere. This works pretty fast for larger numbers of words.

Another Python solution, similar to Paul's (I borrowed some of your code too, thanks Paul), but using a heap to ensure we examine possible solutions in order. Using negative lengths is because Python's heapq uses a fixed less-than ordering, and also to ensure the solutions come out in lexicographic order. This version prints all valid word pairs.

Another method: make a heap of the (-len(word), set(word)) tuples. Then build a sorted list and simultaneously see if any combination of a new item with the already sorted items has no overlap. If so, the solution is found. Otherwise sort further. This method is fast, as it is not necessary to sort all items and the items are tested for large products first.

As usual, simple solutions are also often the fastest. This one is about as fast as g3 and is much simpler.

Here's a solution in Python. It isn't as concise as the posted solution, but the critical difference is that it runs in O(n) time and constant additional space when given n words of input. The trick is to realize that we don't need to keep track of each previously seen word, only the maximum word length seen for each combination of letters, and there's a large-ish but fixed number of possible letter combinations at 2^26-1. This means we can put a constant upper bound on the amount of work done to process each additional word.
In the solution below, each word gets reduced to an "lset", a string identifying the set of letters it contains. For example, "FOOBAR" has the lset "ABFOR"; "FOO" and "FFOO" both have the lset "FO". We store the maximum word length seen for each lset in a dictionary that we iterate over for each new word. The maximum number of lsets in the dictionary is 2^26-1.

@Mark. I do not see that your method is O(n). I see a loop over words and an inner loop over the dict lset_len. I tested the method on 355000 words. After half an hour your method did not finish. g3 and g4 finish in about 5 seconds.

@Paul - I think Mark's point is that the size of the lset dict is bounded, as there are only 2^26-1 possible entries, therefore the time iterating over all the entries is also bounded - so it's O(n), but with a rather large constant factor on the upper bound - a nice illustration of the limitations of big-O notation. Of course, that's assuming 8-bit characters; doing Unicode would be a different kettle of fish (still theoretically O(n) though).

I meant "characters in A-Z", not 8-bit characters.

@matthew and @Mark. I understand (now) that one could say that this is O(n). It is certainly so that the lset_len dict saves space and time, as words with the same characters are collapsed into one item. I printed some sizes of the dict and saw that after 15000 (Dutch) words the number of entries in the dict was 0.88 * 15000. The time and space savings are not very significant. Therefore the loop over lset_len is still over the majority of the words seen so far. If the number of words were much larger than 2^26-1 (about 67,000,000; the number of words in the Bible is about 783,000, with about 12,000 unique words) then the saving would be significant.

Methods g3 and g4 have a bug. Here is a (more) correct version of g3.
Here is a Scala solution:

def googInterview(ls: List[String]) = {
  ls map(x => x.toUpperCase) sorted
  val yy = for {
    xx <- ls
    yy <- ls
    if yy > xx && ((xx.toSet intersect yy.toSet).isEmpty)
  } yield ((xx length) * (yy length), xx, yy)
  yy max
}

Solution in C#, using LINQ and Extension Methods:
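Here's a compact Python take on the bitmask flavour of the letter-set idea discussed above - my own code, with names (`letters`, `best_pair`) that aren't from any of the solutions in this thread:

```python
def letters(word):
    """Encode the set of letters in a word as a 26-bit mask."""
    mask = 0
    for ch in word.upper():
        if "A" <= ch <= "Z":
            mask |= 1 << (ord(ch) - ord("A"))
    return mask

def best_pair(words):
    """Return (product, w1, w2) maximising len(w1)*len(w2) over
    pairs of words that share no letters."""
    # Keep only the longest word seen for each distinct letter set,
    # collapsing e.g. FOO and FFOO into one candidate.
    longest = {}
    for w in words:
        m = letters(w)
        if len(w) > len(longest.get(m, "")):
            longest[m] = w
    best = (0, None, None)
    items = list(longest.items())
    for i, (m1, w1) in enumerate(items):
        for m2, w2 in items[i + 1:]:
            if m1 & m2 == 0:          # disjoint letter sets
                p = len(w1) * len(w2)
                if p > best[0]:
                    best = (p, w1, w2)
    return best

print(best_pair(["ABCW", "BAZ", "FOO", "BAR", "FXYZ", "ABCDEF"]))
```

The bitwise AND makes the disjointness test a single machine operation, which is why this variant scales well on large word lists.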
libpfm_intel_glm — support for Intel Goldmont core PMU

Synopsis

#include <perfmon/pfmlib.h>

PMU name: glm
PMU desc: Intel Goldmont

Description

The library supports the Intel Goldmont core PMU. It should be noted that this PMU model only covers each core's PMU and not the socket-level PMU. On Goldmont, the number of generic counters is 4. There is no HyperThreading support. The pfm_get_pmu_info() function returns the maximum number of generic counters in num_cntrs.

Modifiers

The following modifiers are supported on Intel Goldmont normal events by the library. The extra settings are exposed as regular umasks. The library takes care of encoding the events according to the underlying kernel interface.
William is an assistant professor of computer science at the University of Texas in Austin. Carl is chief software architect at db4objects. They can be contacted at wcook@cs.utexas.edu and carl@db4o.com, respectively.

While today's object databases and object-relational mappers do a great job in making object persistence feel native to developers, queries still look foreign in object-oriented programs because they are expressed using either simple strings or object graphs with strings interspersed. Let's take a look at how existing systems would express a query such as "find all Student objects where the student is younger than 20." This query (and other examples in this article) assume the Student class defined in Example 1. Different data access APIs express the query quite differently, as illustrated in Example 2. However, they all share a common set of problems:

- Modern IDEs do not check embedded strings for syntactic and semantic errors. In Example 2, both the field age and the value 20 are expected to be numeric, but no IDE or compiler checks that this is actually correct. If you mistyped the query code - changing the name or type of the field age, for example - all of the queries in Example 2 would break at runtime, without a single notice at compile time.

- Because modern IDEs will not automatically refactor field names that appear in strings, refactorings cause the class model and query strings to get out of sync. Suppose the field name age in the class Student is changed to _age because of a corporate decision on standard coding conventions. Now all existing queries for age would be broken, and would have to be fixed by hand.

- Modern agile development techniques encourage constant refactoring to maintain a clean and up-to-date class model that accurately represents an evolving domain model. If query code is difficult to maintain, it delays decisions to refactor and inevitably leads to low-quality source code.
- All the queries in Example 2 operate against the private implementation of the Student class - student.age - instead of using its public interface - student.getAge()/student.Age (in Java/C#, respectively). Consequently, they break object-oriented encapsulation rules, disobeying the principle that interface and implementation should be decoupled.

- You are constantly required to switch contexts between the implementation language and the query language. Queries cannot use code that already exists in the implementation language.

- There is no explicit support for creating reusable query components. A complex query can be built by concatenating query strings, but none of the reusability features of the programming language (method calls, polymorphism, overriding) are available to make this process manageable. Passing a parameter to a string-based query is also awkward and error prone.

- Embedded strings can be subject to injection attacks.

Design Goals

Our goal is to propose a new approach that solves many of these problems. This article is an overview of the approach, not a complete specification. What if you could simply express the same query in plain Java or C#, as in Example 3? You could write queries without having to think about a custom query language or API. The IDE could actively help to reduce typos. Queries would be fully typesafe and accessible to the refactoring features of the IDE. Queries could also be prototyped, tested, and run against plain collections in memory without a database back end.

At first, this approach seems unsuitable as a database query mechanism. Naively executing Java/C# code against the complete extent of all stored objects of a class would incur a huge performance penalty because all candidate objects would have to be instantiated from the database. A solution to this problem was presented in "Safe Query Objects" by William Cook and Siddhartha Rai [3].
The source code or bytecode of the Java/C# query expression can be analyzed and optimized by translating it to the underlying persistence system's query language or API (SQL [6], OQL [1,8], JDOQL [7], EJBQL [1], SODA [10], and so on), and thereby take advantage of indexes and other optimizations of a database engine. Here, we refine the original idea of safe query objects to provide a more concise and natural definition of native queries. We also examine integrating queries into Java and .NET by leveraging recent features of those language environments, including anonymous classes and delegates. Therefore, our goals for native queries include:

- 100-percent native. Queries should be expressed in the implementation language (Java or C#), and they should obey language semantics.

- 100-percent object oriented. Queries should be runnable in the language itself, to allow unoptimized execution against plain collections without custom preprocessing.

- 100-percent typesafe. Queries should be fully accessible to modern IDE features such as syntax checking, type checking, refactoring, and so on.

- Optimizable. It should be possible to translate a native query to a persistence architecture's query language or API for performance optimization. This could be done at compile time or at load time by source code or bytecode analysis and translation.

Defining the Native Query API

What should native queries look like? To produce a minimal design, we evolve a simple query by adding each design attribute, one at a time, using Java and C# (.NET 2.0) as the implementation languages. Let's begin with the class in Example 1. Furthermore, we assume that we want to query for "all students that are younger than 20 where the name contains an f."
We can do this by defining a student parameter and returning the result of our expression as a Boolean value; see Example 5. - Now we have to wrap the partial construct in Example 5 into an object that is valid in our programming languages. That lets us pass it to the database engine, a collection, or any other query processor. In .NET 2.0, we can simply use a delegate. In Java, we need a named method, as well as an object of some class to put around the method. This requires, of course, that we choose a name for the method as well as a name for the class. We decided to follow the example that .NET 2.0 sets for collection filtering. Consequently, the class name is Predicate and the method name is match; see Example 6. - For .NET 2.0, we are done designing the simplest possible query interface. Example 6 is a valid object. For Java, our querying conventions should be standardized by designing an abstract base class for queriesthe Predicate class (Example 7). We still have to alter our Java query object slightly by adding the extent type to comply with the generics contract (Example 8). - Although Example 8 is conceptually complete, we would like to finish the derivation of the API by providing a full example. Specifically, we want to show what a query against a database would look like, so we can compare it against the string-based examples given in the introduction. Example 9 completes the core idea. We have refined Cook/Rai's concept of safe queries by leveraging anonymous classes in Java and delegates in .NET. The result is a more concise and straightforward description of queries. Adding all required elements of the API in a step-by-step fashion lets us find the most natural and efficient way of expressing queries in Java and C#. Additional features, such as parameterized and dynamic queries, can be included in native queries using a similar approach [4]. 
We have overcome the shortcomings of existing string-based query languages and provided an approach that promises improved productivity, robustness, and maintainability without loss of performance.

Specification Details

A final and thorough specification of native queries is only possible after practical experience. Therefore, this section is speculative. We would like to point out where we see choices and issues with the native query approach and how they might be resolved. Regarding the API alone, native queries are not new. Without optimizations, we have merely provided "the simplest concept possible to run all instances of a class against a method that returns a Boolean value." Such interfaces are well known: Smalltalk-80 [2, 5], for instance, includes methods to select items from a collection based on a predicate.

Optimization is the key new component of native queries. Users should be able to write native query expressions and the database should execute them with performance on par with the string-based queries that we described earlier. Although the core concept of native queries is simple, the work needed to provide a solution is not trivial. Code written in a query expression must be analyzed and converted to an equivalent database query format. It is not necessary for all code in a native query to be translated. If the optimizer cannot handle some or all code in a query expression, there is always the fallback to instantiate the actual objects and to run the query expression code, or part of it, with real objects after the query has returned intermediate values. Because this may be slow, it is helpful to provide developers with feedback at development time. This feedback might include how the optimizer "understands" query expressions, and some description of the underlying optimization plan created for the expressions.
This will help developers adjust their development style to the syntax that is optimized best and will enable developers to provide feedback about desirable improved optimizations.

How will optimization actually work? At compile or load time, an enhancer (a separate application or a plug-in to the compiler or loader) inspects all native query expressions in source code or bytecode, and will generate additional code in the most efficient format the database engine supplies. At runtime, this substituted code will be executed instead of the original Java/C# methods. This mechanism will be transparent to developers after they add the optimizer to their compilation or build process (or both).

Our peers have expressed doubts that satisfactory optimization is possible. Because both the native query format and the native database format are well defined, and because the development of an optimizer can be an ongoing task, we are very optimistic that excellent results are achievable. The first results that Cook/Rai produced with a mapping to JDO implementations are very encouraging. db4objects () already shows a first preview of db4o with unoptimized native queries today and plans to ship a production-ready Version 5.0 with optimized native queries.

Ideally, any code should be allowed in a query expression. In practice, restrictions are required to guarantee a stable environment, and to place an upper limit on resource consumption. We recommend:

- Variables. Variable declarations should be legal in query expressions.
- Object creation. Temporary objects are essential for complex queries so their creation should also be supported in query expressions.
- Static calls. Static calls are part of the concept of OO languages, so they should be legal.
- Faceless. Query expressions are intended to be fast. They should not interact with the GUI.
- Threads. Query expressions will likely be triggered in large numbers. Therefore, they should not be allowed to create threads.
- Security restrictions. Because query expressions may actually be executed with real objects on the server, there need to be restrictions on what they are allowed to do there. It would be reasonable to allow and disallow method execution and object creation in certain namespaces/packages.
- Read only. No modifications of persistent objects should be allowed within running query code. This limitation guarantees repeatable results and keeps transactional concerns out of the specification.
- Timeouts. To allow for a limit to the use of resources, a database engine may choose to timeout long-running query code. Timeout configuration does not have to be part of the native query specification, but it should be recommended to implementors.
- Memory limitation. Memory limitations can be treated like timeouts. A configurable upper memory limit per query expression is a recommended feature for implementors.
- Undefined actions. Unless explicitly not permitted by the specification, all constructs should be allowed.

It seems desirable that processing should continue after any exception occurs in query expressions. A query expression that throws an uncaught exception should be treated as if it returned false. There should be a mechanism for developers to discover and track exceptions. We recommend that implementors support both exception callback mechanisms and exception logging.

The sort order of returned objects might also be defined using native code. An exact definition goes beyond the scope of this article but, using a Java comparator, a simple example might look like Example 10. This code should be runnable both with and without an optimization processor. Querying and sorting could be optimized to be executed as one step on the database server, using the sorting functionality of the database engine.

Conclusion

There are compelling reasons for considering native queries as a mainstream standard. As we have shown, they overcome the shortcomings of string-based APIs.
The full potential of native queries will be explored with their use in practice. They have already been demonstrated to provide high value in these areas:

- Power. Standard object-oriented programming techniques are available for querying.
- Productivity. Native queries enjoy the benefits of advanced development tools, including static typing, refactoring, and autocompletion.
- Standard. What SQL has never managed to achieve because of the diversity of SQL dialects may be achievable for native queries: Because the standard is well defined by programming-language specifications, native queries can provide 100-percent compatibility across different database implementations.
- Efficiency. Native queries can be automatically compiled to traditional query languages or APIs to leverage existing high-performance database engines.
- Simplicity. As shown, the API for native queries is only one class with one method. Hence, native queries are easy to learn, and a standardization body will find them easy to define. They could be submitted as a JSR to the Java Community Process.

Acknowledgments

Thanks to Johan Strandler for his posting to a thread at TheServerSide that brought the two authors together, Patrick Roomer for getting us started with first drafts of this paper, Rodrigo B. de Oliveira for contributing the delegate syntax for .NET, Klaus Wuestefeld for suggesting the term "native queries," Roberto Zicari, Rick Grehan, and Dave Orme for proofreading drafts of this article, and to all of the above for always being great peers to review ideas.

References

- Cattell, R.G.G., D.K. Barry, M. Berler, J. Eastman, D. Jordan, C. Russell, O. Schadow, T. Stanienda, and F. Velez, editors. The Object Data Standard ODMG 3.0. Morgan Kaufmann, 2000.
- Cook, W.R. "Interfaces and Specifications for the Smalltalk Collection Classes." OOPSLA, 1992.
- Cook, W.R. and S. Rai. "Safe Query Objects: Statically Typed Objects as Remotely Executable Queries." G.C. Roman, W.G. Griswold, and B. Nuseibeh, editors. Proceedings of the 27th International Conference on Software Engineering (ICSE), ACM, 2005.
- db4objects ().
- Goldberg, A. and D. Robson. Smalltalk-80: The Language and Its Implementation. Addison-Wesley, 1983.
- ISO/IEC. Information technology - Database languages - SQL - Part 3: Call-level interface (SQL/CLI). Technical Report 9075-3:2003, ISO/IEC, 2003.
- JDO ( jdo/).
- ODMG ().
- Russell, C. Java Data Objects (JDO) Specification JSR-12. Sun Microsystems, 2003.
- Simple Object Database Access (SODA) ( sodaquery/).
- Sun Microsystems. Enterprise JavaBeans Specification, Version 2.1. 2002 ().

DDJ
http://www.drdobbs.com/database/native-queries-for-persistent-objects/184406432
One of the things you need to do as a developer of an application is track who is logged in, so you can display pages related to that specific user. In Flask you can use sessions for this. First, you need to configure your Flask application:

from flask import Flask, session
from flask_session import Session

app = Flask(__name__)

# Configure session to use the filesystem
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
Session(app)

Then, in your application, you store the desired information. For example, here we store the user id retrieved from the database (fetchone() returns a whole row, so we keep only its first column):

row = db.execute("SELECT id FROM users WHERE username = :username",
                 {"username": username}).fetchone()
session["user_id"] = row[0]

Once we store it, we can retrieve the user_id whenever there is a need for it:

print(session.get("user_id"))

And if our user logs out from the app, we need to make sure the session is cleared:

session.clear()
https://93days.me/how-to-use-sessions-in-flask/
An interpreter is used to interpret languages like Ruby. An interpreter is a program that looks at the code and does whatever the code says. A compiler doesn't do what the code says, but rather converts that code to machine code, so that the computer will do what that code says when it's time for that code to run. In other words, a compiler makes source code more readable for the machine, while an interpreter runs the source code on the spot.

Overview
- Getting Started
- Example Program

Getting Started

The Tools

Let's take a look at what tools we would need.
- Ruby Interpreter (Download Page)
- Text Editor (Notepad++ is a good one)

If you want to use the Ruby Installer, go to RubyInstaller.org Downloads. Under RubyInstallers, use the link with the latest version (as of this writing, it's 'Ruby 1.9.2-p290'). When you're installing Ruby, you might like to install it in the C:\ directory (not in Program Files), for easier access. It might also be a good idea to associate the .rb and .rbw file extensions with the Ruby interpreter.

When you're making web scripts using Ruby, you might have to include the path to ruby. That's done by putting a '#' sign, followed by a '!', followed by the path to ruby; for example, if the path to ruby (in our case) is C:\Ruby\bin\ruby.exe - which can also be written as \Ruby\bin\ruby.exe - then we would type something like this at the beginning of the .rb file:

#!/Ruby/bin/ruby.exe

That tells the server that the interpreter for this script is at that path.
So first of all, there's the 'puts' function; it means 'put string'. There's also the 'gets' function; that means 'get string'. The puts function sends its first argument, followed by a new-line character sequence, to the standard output. So if you call puts() with the "hi" string, the output would actually be "hi\r\n", and if you call puts() with the "abcd" string, the output will be "abcd\r\n". The gets function gets a string from standard input, until the first carriage return character, and returns that string.

Besides puts() there's also print(), which sends its parameter to standard output, but does not append a new-line character sequence to the output. So printing "hi" with print() would really print "hi" and not "hi\r\n".

Standard output is usually directed to the screen, and standard input usually comes from the keyboard; however, in a web environment (with a web server), the standard input comes from the HTTP request that's sent by the client (browser) and the standard output gets sent back to the client (usually as the HTML code for the browser to view).

And another thing before we go on to the code: strings, in Ruby, are written with either single or double quotes. When in double quotes, the string can embed expressions, if you use the '#' character followed by the '{' and '}' characters: "Hello #{username}, how are you? " evaluates to the string 'Hello ', then whatever's in the variable `username`, and then the string ', how are you? '. So if `username` is "user1", the string will evaluate to "Hello user1, how are you? ". You can't do that with single-quote strings. We will go over variables later.

Comments, in Ruby, are denoted by the '#' (sharp) character. The sharp character tells ruby (the interpreter, not the language) to ignore the rest of the line (the rest of the line is for people who read the code, so they can understand what the code does).

The Code

Here's the code:

puts 'Hello World!'
gets

And that's it!
puts 'Hello World!' sends "Hello World!\r\n" to the standard output, and gets waits for the standard input (in this case, for a return key press).

The Output

First Tutorial: This is the first tutorial.
Previous Tutorial: Not Applicable
Next Tutorial: Variables and Functions
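To round off this tutorial's notes on puts, print, and string interpolation, here is a small follow-up sketch (an added illustration, not part of the original tutorial):

```ruby
# Double-quoted strings interpolate #{...}; single-quoted strings do not.
username = "user1"
greeting = "Hello #{username}, how are you?"
literal  = 'Hello #{username}'   # the #{...} stays literal here

puts greeting        # appends a new-line sequence
print "no newline"   # does not append a new-line
puts                 # just prints the new-line
puts literal
```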
http://forum.codecall.net/topic/65459-ruby-hello-world-introduction/
Properly Getting Into Jail: Counting Inmates and Other Hard Problems

We relate the way a database works to the way counting inmates of a prison works. After all, the secret to a successful prison is keeping the people there inside.

...and in this document is the data as it was at the time of this document's creation. Later changes do not apply, by design, since we need to keep it in the state it was at the time. It might be easier to look at things in code:

public class Inmate
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class Staff
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class InmateLogEntry
{
    public Inmate Inmate { get; set; }
    public Staff ResponsibleParty { get; set; }
    public LogStatus Status { get; set; } // checkin / checkout
    public string Notes { get; set; }
}

public class Block
{
    public List<InmateLogEntry> Log { get; set; }
    public List<Inmate> Inmates { get; set; }
    public Staff Sargent { get; set; }
}

...on the block and why. And that is quite enough about the prison's life. This gives us a sufficient level of detail that we can now work with. The next post will talk about the physical architecture and data flow in the prison.

Published at DZone with permission of Oren Eini, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/properly-getting-into-jail-counting-inmates-and-ot?fromrel=true
Question

Based on new information regarding the popularity of basketball, you revise your growth estimate for BBC to 9 percent. What is the maximum P/E ratio you will apply to BBC, and what is the maximum price you will pay for the stock?
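A hedged sketch of the constant-growth (Gordon) valuation this question is driving at. Only the 9 percent growth rate comes from the question itself; the payout ratio, required return, and earnings per share below are hypothetical stand-ins, since the problem's actual figures are not included in this excerpt.

```python
# Constant-growth model: max P/E = payout / (k - g); max price = P/E * EPS.
payout = 0.50   # assumed dividend payout ratio (D/E)
k = 0.14        # assumed required rate of return
g = 0.09        # revised growth estimate from the question
eps = 3.00      # assumed earnings per share

pe = payout / (k - g)   # maximum P/E multiple
price = pe * eps        # maximum price to pay for the stock
print(round(pe, 2), round(price, 2))  # 10.0 30.0
```

With any other inputs the same two formulas apply; the key point is that a higher g shrinks the denominator and raises both the justified P/E and the maximum price.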
http://www.solutioninn.com/based-on-new-information-regarding-the-popularity-of-basketball-you
I saw this code at Cpp Quiz [Question #38]:

#include <iostream>

struct Foo
{
    Foo(int d) : x(d) {}
    int x;
};

int main()
{
    double x = 3.14;
    Foo f( int(x) );
    std::cout << f.x << std::endl;
    return 0;
}

What does the syntax int(x) in the statement Foo f( int(x) ); mean? Is it the same as Foo f( int x );?

The parentheses around x are superfluous and will be ignored, so int(x) is the same as int x here: a parameter named x with type int. So yes, Foo f( int(x) ); is the same as Foo f( int x );. It is a function declaration, declaring a function named f that returns Foo and takes one parameter named x of type int.

Here's the explanation from the standard, §8.2/1 Ambiguity resolution [dcl.ambig.res] (emphasis mine):

The ambiguity arising from the similarity between a function-style cast and a declaration mentioned in [stmt.ambig], the resolution is to consider any construct that could possibly be a declaration a declaration. [ Note: A declaration can be explicitly disambiguated by adding parentheses around the argument. The ambiguity can be avoided by use of copy-initialization or list-initialization syntax, or by use of a non-function-style cast. — end note ] [ Example:

struct S { S(int); };
void foo(double a)
{
    S w(int(a));   // function declaration
    S x(int());    // function declaration
    S y((int(a))); // object declaration
    S y((int)a);   // object declaration
    S z = int(a);  // object declaration
}

— end example ]

So, int(x) will be considered as a declaration (of a parameter) rather than a function-style cast.
https://codedump.io/share/ulNdqTvGI5Gp/1/most-vexing-parse
Migrating to xarray and dask¶

Many Python developers dealing with meteorological satellite data begin with using NumPy arrays directly. This work usually involves masked arrays, boolean masks, index arrays, and reshaping. Due to the libraries used by Satpy these operations can't always be done in the same way. This guide acts as a starting point for new Satpy developers in transitioning from NumPy's array operations to Satpy's operations, although they are very similar.

To provide the most functionality for users, Satpy uses the xarray library's DataArray object as the main representation for its data. DataArray objects can also benefit from the dask library. The combination of these libraries allows Satpy to easily distribute operations over multiple workers, lazily evaluate operations, and keep track of additional metadata and coordinate information.

XArray¶

import xarray as xr

XArray's DataArray is now the standard data structure for arrays in satpy. They allow the array to define dimensions, coordinates, and attributes (that we use for metadata). To create such an array, you can do for example:

my_dataarray = xr.DataArray(my_data, dims=['y', 'x'],
                            coords={'x': np.arange(...)},
                            attrs={'sensor': 'olci'})

where my_data can be a regular numpy array, a numpy memmap, or, if you want to keep things lazy, a dask array (more on dask later). Satpy uses dask arrays with all of its DataArrays.

Dimensions¶

In satpy, the dimensions of the arrays should include:

- x for the x or column or pixel dimension
- y for the y or row or line dimension
- bands for composites
- time can also be provided, but we have limited support for it at the moment. Use metadata for common cases (start_time, end_time)

Dimensions are accessible through my_dataarray.dims.
To get the size of a given dimension, use sizes: my_dataarray.sizes['x'] Coordinates¶ Coordinates can be defined for those dimensions when it makes sense: x and y: Usually defined when the data’s area is an AreaDefinition, and they contain the projection coordinates in x and y. bands: Contain the letter of the color they represent, eg ['R', 'G', 'B']for an RGB composite. This allows then to select for example a single band like this: red = my_composite.sel(bands='R') or even multiple bands: red_and_blue = my_composite.sel(bands=['R', 'B']) To access the coordinates of the data array, use the following syntax: x_coords = my_dataarray['x'] my_dataarray['y'] = np.arange(...) Most of the time, satpy will fill the coordinates for you, so you just need to provide the dimension names. Attributes¶ To save metadata, we use the attrs dictionary. my_dataarray.attrs['platform_name'] = 'Sentinel-3A' Some metadata that should always be present in our dataarrays: areathe area of the dataset. This should be handled in the reader. start_time, end_time sensor Operations on DataArrays¶ DataArrays work with regular arithmetic operation as one would expect of eg numpy arrays, with the exception that using an operator on two DataArrays requires both arrays to share the same dimensions, and coordinates if those are defined. For mathematical functions like cos or log, you can use numpy functions directly and they will return a DataArray object: import numpy as np cos_zen = np.cos(zen_xarray) Masking data¶ In DataArrays, masked data is represented with NaN values. Hence the default type is float64, but float32 works also in this case. XArray can’t handle masked data for integer data, but in satpy we try to use the special _FillValue attribute (in .attrs) to handle this case. If you come across a case where this isn’t handled properly, contact us. 
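The NaN convention described above can be illustrated with plain NumPy (an illustration only, not Satpy code); DataArray.where produces the equivalent result on the wrapped array:

```python
# NaN-masking sketch: keep values where the condition holds, NaN elsewhere.
import numpy as np

arr = np.array([2.0, 4.0, 6.0, 8.0])
masked = np.where(arr > 5, arr, np.nan)  # values <= 5 become NaN
print(masked)  # [nan nan  6.  8.]
```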
Masking data from a condition can be done with:

result = my_dataarray.where(my_dataarray > 5)

Result is then analogous to my_dataarray, with values lower than or equal to 5 replaced by NaNs.

Dask¶

import dask.array as da

The data part of the DataArrays we use in satpy are mostly dask Arrays. That allows lazy and chunked operations for efficient processing.

Creation¶

From a numpy array¶

To create a dask array from a numpy array, one can call the from_array() function:

darr = da.from_array(my_numpy_array, chunks=4096)

The chunks keyword tells dask the size of a chunk of data. If the numpy array is 3-dimensional, the chunk size provided above means that one chunk will be 4096x4096x4096 elements. To prevent this, one can provide a tuple:

darr = da.from_array(my_numpy_array, chunks=(4096, 1024, 2))

meaning a chunk will be 4096x1024x2 elements in size. Even more detailed sizes for the chunks can be provided if needed, see the dask documentation.

From memmaps or other lazy objects¶

To avoid loading the data into memory when creating a dask array, other kinds of arrays can be passed to from_array(). For example, a numpy memmap allows dask to know where the data is, and it will only be loaded when the actual values need to be computed. Another example is an HDF5 variable read with h5py.

Procedural generation of data¶

Some procedural generation functions are available in dask, e.g. meshgrid(), arange(), or random.random.

From XArray to Dask and back¶

Certain operations are easiest to perform on dask arrays by themselves, especially when certain functions are only available from the dask library. In these cases you can operate on the dask array beneath the DataArray and create a new DataArray when done.

Note: dask arrays do not support in-place operations. In-place operations on xarray DataArrays will reassign the dask array automatically.

dask_arr = my_dataarray.data
dask_arr = dask_arr + 1
# ... other non-xarray operations ...
new_dataarr = xr.DataArray(dask_arr,
                           dims=my_dataarray.dims,
                           attrs=my_dataarray.attrs.copy())

Or if the operation should be assigned back to the original DataArray (if and only if the data is the same size):

my_dataarray.data = dask_arr

Operations and how to get actual results¶

Regular arithmetic operations are provided, and generate another dask array.

>>> arr1 = da.random.uniform(0, 1000, size=(1000, 1000), chunks=100)
>>> arr2 = da.random.uniform(0, 1000, size=(1000, 1000), chunks=100)
>>> arr1 + arr2
dask.array<add, shape=(1000, 1000), dtype=float64, chunksize=(100, 100)>

In order to compute the actual data during testing, use the compute() method. In normal Satpy operations you will want the data to be evaluated as late as possible to improve performance, so compute should only be used when needed.

>>> (arr1 + arr2).compute()
array([[  898.08811639,  1236.96107629,  1154.40255292, ...,  1537.50752674,
         1563.89278664,   433.92598566],
       [ 1657.43843608,  1063.82390257,  1265.08687916, ...,  1103.90421234,
         1721.73564104,  1276.5424228 ],
       [ 1620.11393216,   212.45816261,   771.99348555, ...,  1675.6561068 ,
          585.89123159,   935.04366354],
       ...,
       [ 1533.93265862,  1103.33725432,   191.30794159, ...,   520.00434673,
          426.49238283,  1090.61323471],
       [  816.6108554 ,  1526.36292498,   412.91953023, ...,   982.71285721,
          699.087645  ,  1511.67447362],
       [ 1354.6127365 ,  1671.24591983,  1144.64848757, ...,  1247.37586051,
         1656.50487092,   978.28184726]])

Dask also provides cos, log and other mathematical functions, which you can use with da.cos and da.log. However, since satpy uses xarrays as its standard data structure, prefer the xarray functions when possible (they call in turn the dask counterparts when possible).
Note that this requires fully computing all of the dask inputs to the function; they are passed as numpy arrays or, in the case of an XArray DataArray, as a DataArray with a numpy array underneath. You should NOT use dask functions inside the delayed function.

import dask
import dask.array as da

def _complex_operation(my_arr1, my_arr2):
    return my_arr1 + my_arr2

delayed_result = dask.delayed(_complex_operation)(my_dask_arr1, my_dask_arr2)
# to create a dask array to use in the future
my_new_arr = da.from_delayed(delayed_result, dtype=my_dask_arr1.dtype,
                             shape=my_dask_arr1.shape)

Dask Delayed objects can also be computed with delayed_result.compute() if the array is not needed or if the function doesn't return an array.

Map dask blocks to non-dask friendly functions¶

If the complicated operation you need to perform can be vectorized and does not need the entire data array to do its operations, you can use da.map_blocks to get better performance than creating a delayed function. Similar to delayed functions, the inputs to the function are fully computed DataArrays or numpy arrays, but only the individual chunks of the dask array at a time. Note that map_blocks must be provided dask arrays and won't function properly on XArray DataArrays. It is recommended that the function object passed to map_blocks not be an internal function (a function defined inside another function) or it may be unserializable and can cause issues in some environments.

my_new_arr = da.map_blocks(_complex_operation, my_dask_arr1, my_dask_arr2,
                           dtype=my_dask_arr1.dtype)

Helpful functions¶

- atop()
- tokenize()
https://satpy.readthedocs.io/en/stable/dev_guide/xarray_migration.html
rapidsms.contrib.handlers¶

The handlers contrib application provides three classes (BaseHandler, KeywordHandler, and PatternHandler) which can be extended to help you create RapidSMS applications quickly.

Installation¶

To define and use handlers for your RapidSMS project, you will need to add "rapidsms.contrib.handlers" to INSTALLED_APPS in your settings file:

INSTALLED_APPS = [
    ...
    "rapidsms.contrib.handlers",
    ...
]

Then you'll also need to set RAPIDSMS_HANDLERS. The application will load the handler classes listed in RAPIDSMS_HANDLERS, as described in Handler Discovery.

RAPIDSMS_HANDLERS = [
    "rapidsms.contrib.handlers.KeywordHandler",
    "rapidsms.contrib.handlers.PatternHandler",
]

Usage¶

KeywordHandler¶

Many RapidSMS applications operate based on whether a message begins with a specific keyword. By subclassing KeywordHandler, you can easily create a simple, keyword-based application:

from rapidsms.contrib.handlers import KeywordHandler

class LightHandler(KeywordHandler):
    keyword = "light"

    def help(self):
        self.respond("Send LIGHT ON or LIGHT OFF.")

    def handle(self, text):
        if text.upper() == "ON":
            self.respond("The light is now turned on.")
        elif text.upper() == "OFF":
            self.respond("Thanks for turning off the light!")
        else:
            self.help()

Your handler must define three things: keyword, help(), and handle(text). When a message is received that begins with the keyword (case insensitive; leading whitespace is allowed), the remaining text is passed to the handle method of the class. If no additional non-whitespace text is included with the message, help is called instead. For example:

> light
< Send LIGHT ON or LIGHT OFF.
> light on
< The light is now turned on.
> light off
< Thanks for turning off the light!
> light something else
< Send LIGHT ON or LIGHT OFF.

The handler also treats ,, :, and ; after the keyword the same as whitespace. For example:

> light
< Send LIGHT ON or LIGHT OFF.
> light:on
< The light is now turned on.
> light, off < Thanks for turning off the light! > light :,; on < The light is now turned on. Tip Technically speaking, the incoming message text is compared to a regular expression pattern. The most common use case is to look for a single exact-match keyword. However, one could also match multiple keywords, for example keyword = "register|reg|join". However, due to how we build the final regular expression, capturing matches using grouping in the keyword regular expression won’t work. If you need that, use the PatternHandler. All non-matching messages are silently ignored to allow other applications and handlers to catch them. For example implementations of KeywordHandler, see - rapidsms.contrib.echo.handlers.echo.EchoHandler - rapidsms.contrib.registration.handlers.register.RegistrationHandler - rapidsms.contrib.registration.handlers.language.LanguageHandler Here’s documentation from the KeywordHandler class: - class rapidsms.contrib.handlers. KeywordHandler¶ This handler type can be subclassed to create simple keyword-based handlers. When a message is received, it is checked against the mandatory keywordattribute (a regular expression) for a prefix match. For example: >>> class AbcHandler(KeywordHandler): ... keyword = "abc" ... ... def help(self): ... self.respond("Here is some help.") ... ... def handle(self, text): ... self.respond("You said: %s." % text) If the keyword is matched and followed by some text, the handlemethod is called: >>> AbcHandler.test("abc waffles") ['You said: waffles.'] If just the keyword is matched, the helpmethod is called: >>> AbcHandler.test("abc") ['Here is some help.'] All other messages are silently ignored (as usual), to allow other apps or handlers to catch them. PatternHandler¶ Note Pattern-based handlers can work well for prototyping and simple use cases. For more complex parsing and message handling, we recommend writing a RapidSMS application with a custom handle phase. 
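The keyword matching just described can be approximated with a regular expression. The pattern below is a hypothetical illustration, not RapidSMS's actual implementation: a case-insensitive keyword, optional leading whitespace, then whitespace or `,` `:` `;` separators before the remaining text.

```python
# Approximate the KeywordHandler matching behaviour described above.
import re

KEYWORD_RE = re.compile(r"^\s*light[\s,:;]*(.*)$", re.IGNORECASE)

for message in ["light", "light:on", "light, off", "LIGHT :,; on"]:
    match = KEYWORD_RE.match(message)
    text = match.group(1).strip()
    print(repr(text))  # '' would trigger help(); otherwise handle(text)
```

Non-matching messages (where `match` is None) would be silently ignored, letting other handlers catch them.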
The PatternHandler class can be subclassed to create applications which respond to a message when a specific pattern is matched: from rapidsms.contrib.handlers import PatternHandler class SumHandler(PatternHandler): pattern = r"^(\d+) plus (\d+)$" def handle(self, a, b): a, b = int(a), int(b) total = a + b self.respond("%d + %d = %d" % (a, b, total)) Your handler must define pattern and handle(*args). The pattern is case-insensitive, but must otherwise be matched precisely as written (for example, the handler pattern written above would not accept leading or trailing whitespace, but the pattern r"^(\d+) plus (\d+)\s*$" would allow trailing whitespace). When the pattern is matched, the handle method is called with the captures as arguments. As an example, the above handler could create the following conversation: > 1 plus 2 < 1 + 2 = 3 Like KeywordHandler, each PatternHandler silently ignores all non-matching messages to allow other handlers and applications to catch them. Here’s documentation from the PatternHandler class: - class rapidsms.contrib.handlers. PatternHandler¶ This handler type can be subclassed to create simple pattern-based handlers. This isn’t usually a good idea – it’s cumbersome to write patterns with enough flexibility to be used in the real world – but it’s very handy for prototyping, and can easily be upgraded later. When a message is received, it is matched against the mandatory patternattribute (a regular expression). If the pattern is matched, the handlemethod is called with the captures as arguments. For example: >>> class SumHandler(PatternHandler): ... pattern = r'^(\d+) plus (\d+)$' ... ... def handle(self, a, b): ... a, b = int(a), int(b) ... total = a + b ... ... self.respond( ... "%d+%d = %d" % ... (a, b, total)) >>> SumHandler.test("1 plus 2") ['1+2 = 3'] Note that the pattern must be matched precisely (excepting case sensitivity). 
For example, this would not work because of the trailing whitespace: >>> SumHandler.test("1 plus 2 ") False All non-matching messages are silently ignored, to allow other apps or handlers to catch them. BaseHandler¶ All handlers, including the KeywordHandler and PatternHandler, are derived from the BaseHandler class. When extending from BaseHandler, one must always override the class method dispatch, which should return True when it handles a message. All instances of BaseHandler have access to self.msg and self.router, as well as the methods self.respond and self.respond_error (which respond to the instance’s message). BaseHandler also defines the class method test, which creates a simple environment for testing a handler’s response to a specific message text. If the handler ignores the message then False is returned. Otherwise a list containing the text property of each OutgoingMessage response, in the order which they were sent, is returned. (Note: the list may be empty.) For example: >>> from rapidsms.contrib.echo.handlers.echo import EchoHandler >>> EchoHandler.test("not applicable") False >>> EchoHandler.test("echo hello!") ["hello!"] For an example implementation of a BaseHandler, see rapidsms.contrib.echo.handlers.ping.PingHandler. Calling Handlers¶ When a message is received, the handlers application calls dispatch on each of the handlers it loaded during handlers discovery. The first handler to accept the message will block all others. The order in which the handlers are called is not guaranteed, so each handler should be as conservative as possible when choosing to respond to a message. Handler Discovery¶ Handlers may be any new-style Python class which extends from one of the core handler classes, e.g. BaseHandler, PatternHandler, KeywordHandler, etc. The Python package names of the handler classes to be loaded should be listed in RAPIDSMS_HANDLERS. 
Example:

    RAPIDSMS_HANDLERS = [
        "rapidsms.contrib.handlers.KeywordHandler",
        "rapidsms.contrib.handlers.PatternHandler",
    ]

Warning: The behavior described in the rest of this section is the old, deprecated behavior. If RAPIDSMS_HANDLERS is set, the older settings are ignored.

Handlers may be defined in the handlers subdirectory of any Django app listed in INSTALLED_APPS. Each file in the handlers subdirectory is expected to contain exactly one new-style Python class which extends from one of the core handler classes. Handler discovery, which occurs when the handlers application is loaded, can be configured using the following project settings:

- RAPIDSMS_HANDLERS_EXCLUDE_APPS - The application will not load handlers from any Django app included in this list.
- INSTALLED_HANDLERS - If this list is not None, the application will load only handlers in modules that are included in this list.
- EXCLUDED_HANDLERS - The application will not load any handler in a module that is included in this list.

Note: Prefix matching is used to determine which handlers are described in INSTALLED_HANDLERS and EXCLUDED_HANDLERS. The module name of each handler is compared to each value in these settings to see if it starts with the value. For example, consider the rapidsms.contrib.echo application, which contains the echo handler and the ping handler:

- "rapidsms.contrib.echo.handlers.echo" would match only EchoHandler,
- "rapidsms.contrib.echo" would match both EchoHandler and PingHandler,
- "rapidsms.contrib" would match all handlers in any RapidSMS contrib application, including both in rapidsms.contrib.echo.
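The handler contract described above (a case-insensitive regular-expression match, captures passed to handle, and False for ignored messages) can be sketched in plain Python without RapidSMS installed. MiniPatternHandler is a toy stand-in written for this illustration, not the real RapidSMS base class:

```python
import re

class MiniPatternHandler:
    """Hypothetical stand-in for PatternHandler; not the real RapidSMS class."""
    pattern = None  # subclasses set a regular-expression string

    def __init__(self):
        self.responses = []

    def respond(self, text):
        self.responses.append(text)

    @classmethod
    def test(cls, text):
        # Case-insensitive match against the mandatory pattern attribute;
        # non-matching messages are silently ignored (return False).
        match = re.match(cls.pattern, text, re.IGNORECASE)
        if match is None:
            return False
        handler = cls()
        handler.handle(*match.groups())
        return handler.responses

class SumHandler(MiniPatternHandler):
    pattern = r"^(\d+) plus (\d+)$"

    def handle(self, a, b):
        a, b = int(a), int(b)
        self.respond("%d + %d = %d" % (a, b, a + b))

print(SumHandler.test("1 plus 2"))   # ['1 + 2 = 3']
print(SumHandler.test("1 plus 2 "))  # False (pattern matched precisely)
print(SumHandler.test("1 PLUS 2"))   # ['1 + 2 = 3'] (case-insensitive)
```

The trailing-whitespace message is rejected because the pattern ends in $, exactly as the documentation above describes.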
http://docs.rapidsms.org/en/develop/topics/contrib/handlers.html
SYNOPSIS

    #include <unistd.h> /* for libc5 */
    #include <sys/io.h> /* for glibc */

    int ioperm(unsigned long from, unsigned long num, int turn_on);

DESCRIPTION

    ioperm() sets the port access permission bits for the calling process
    for num bits starting from port address from. If turn_on is nonzero,
    then permission for the specified bits is enabled; otherwise it is
    disabled. If turn_on is nonzero, the calling process must be
    privileged (CAP_SYS_RAWIO). Permissions are preserved across
    execve(2); this is useful for giving port access permissions to
    non-privileged programs.

    This call is mostly for the i386 architecture. On many other
    architectures it does not exist or will always return an error.

RETURN VALUE

    On success, zero is returned. On error, -1 is returned, and errno is
    set appropriately.

CONFORMING TO

    ioperm() is Linux-specific and should not be used in programs
    intended to be portable.

SEE ALSO

    iopl(2), capabilities(7)

COLOPHON

    This page is part of release 3.23 of the Linux man-pages project. A
    description of the project, and information about reporting bugs, can
    be found at http://www.kernel.org/doc/man-pages/.
http://www.linux-directory.com/man2/ioperm.shtml
JUnit Interview Questions and Answers

Ques 6. How To Run a JUnit Test Class?

Ans. A JUnit test class usually contains a number of test methods. You can run all test methods in a JUnit test class with the JUnitCore runner class. For example, to run the test class LoginTest.java described previously, you should do this:

    java -cp .;junit-4.4.jar org.junit.runner.JUnitCore LoginTest
    JUnit version 4.4
    Time: 0.015
    OK (1 test)

This output says that 1 test was performed and passed. You can do the same by executing build.xml.

Ques 7. What CLASSPATH Settings Are Needed to Run JUnit?

Ans.
* The JUnit JAR file should be there.
* The location of your JUnit test classes.
* The location of the classes to be tested.
* The JAR files of any class libraries that are required by the classes to be tested.

If you find NoClassDefFoundError in your test results, then something is missing from your CLASSPATH. If you are running your JUnit tests from a command line on a Windows system:

    set CLASSPATH=c:\A\junit-4.4.jar;c:\B\test_classes;c:\C\target_classes

If you are running your JUnit tests from a command line on a Unix (bash) system:

    export CLASSPATH=/A/junit-4.4.jar:/B/test_classes:/C/target_classes

Ques 8. How Do I Run JUnit Tests from a Command Window?

Ans. You need to check the following list to run JUnit tests from a command window:

1. Make sure that the JDK is installed and the "java" command program is accessible through the PATH setting. Type "java -version" at the command prompt; the JVM should report back the version string.
2. Make sure that the CLASSPATH is defined as shown in the previous question.
3. Invoke the JUnit runner by entering the following command:

    java org.junit.runner.JUnitCore <test class name>

Ques 9. How To Write a JUnit Test Method?

Ans.
* You need to mark the method as a JUnit test method with the JUnit annotation @org.junit.Test.
* A JUnit test method must be a "public" method. This allows the runner class to access this method.
* A JUnit test method must be a "void" method. The runner class does not check any return values.
* A JUnit test should perform one JUnit assertion, that is, call an org.junit.Assert.assertXXX() method.

Here is a simple JUnit test method in context:

    import org.junit.Assert;
    import org.junit.Test;

    public class LoginTest {
        @Test
        public void testLogin() {
            String username = "withoutbook";
            Assert.assertEquals("withoutbook", username);
        }
    }

Ques 10. Why Not Just Use a Debugger for Unit Testing?

Ans.
* Automated unit testing requires extra time to set up initially. But it will save you time if your code requires changes many times in the future.
* A debugger is designed for manual debugging and manual unit testing, not for automated unit testing. JUnit is designed for automated unit testing.
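The runner behavior described in Ques 6 and Ques 8 can be imitated with plain reflection, with no JUnit jar on the classpath. MiniRunner below is a made-up sketch of the mechanism (discover public void test methods, invoke them, count failures), not the real JUnitCore:

```java
import java.lang.reflect.Method;

public class MiniRunner {
    // Toy stand-in for org.junit.runner.JUnitCore: invoke every public
    // void no-argument method whose name starts with "test" and count
    // how many of them throw.
    public static int run(Class<?> testClass) throws Exception {
        int failures = 0;
        Object instance = testClass.getDeclaredConstructor().newInstance();
        for (Method m : testClass.getMethods()) {
            if (m.getName().startsWith("test")
                    && m.getReturnType() == void.class
                    && m.getParameterCount() == 0) {
                try {
                    m.invoke(instance);  // a throwing assertion counts as a failure
                } catch (Exception e) {
                    failures++;
                }
            }
        }
        return failures;
    }

    public static class LoginTest {
        public void testLogin() {
            String username = "withoutbook";
            if (!"withoutbook".equals(username)) {
                throw new AssertionError("unexpected username");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("failures=" + MiniRunner.run(LoginTest.class));
    }
}
```

This is why JUnit insists on public void methods: the runner locates and invokes them reflectively and never looks at a return value.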
http://www.withoutbook.com/Technology.php?tech=51&page=2&subject=JUnit%20Interview%20Questions%20and%20Answers
A while back I wrote about how the leading digits of factorials follow Benford's law. That is, if you look at just the first digit of a sequence of factorials, they are not evenly distributed. Instead, 1's are most popular, then 2's, etc. Specifically, the proportion of factorials starting with n is roughly log10(1 + 1/n).

Someone has proved that the limiting distribution of leading digits of factorials exactly satisfies Benford's law. But if we didn't know this, we might use a chi-square statistic to measure how well the empirical results match expectations. As I argued in the previous post, statistical tests don't apply here, but they can be useful anyway in giving us a way to measure the size of the deviations from theory.

Benford's law makes a better illustration of the chi-square test than the example of prime remainders because the bins are unevenly sized, which they're allowed to be in general. In the prime remainder post, they were all the same size.

The original post on leading digits of factorial explains why we compute the leading digits the way we do. Only one detail has changed: the original post used Python 2 and this one uses Python 3. Integer division was the default in Python 2, but now in Python 3 we have to use // to explicitly ask for integer division, floating point division being the new default.

Here's a plot of the distribution of the leading digits for the first 500 factorials.

And here's code to compute the chi-square statistic:

    from math import factorial, log10

    def leading_digit_int(n):
        while n > 9:
            n = n//10
        return n

    def chisq_stat(O, E):
        return sum([(o - e)**2/e for (o, e) in zip(O, E)])

    # Waste the 0th slot to make the code simpler.
    digits = [0]*10
    N = 500
    for i in range(N):
        digits[leading_digit_int(factorial(i))] += 1

    expected = [N*log10(1 + 1/n) for n in range(1, 10)]

    print(chisq_stat(digits[1:], expected))

This gives a chi-square statistic of 7.693, very near the mean value of 8 for a chi-square distribution with eight degrees of freedom. (There are eight degrees of freedom, not nine, because if we know how many factorials start with the digits 1 through 8, we know how many start with 9.)

So the chi-square statistic suggests that the deviation from Benford's law is just what we'd expect from random data following Benford's law. And as we said before, this suggestion turns out to be correct.
https://www.johndcook.com/blog/2016/06/21/benfords-law-chi-square-and-factorials/
Many times, we need to add/change the border of a WinForm control. In WPF, it's very easy and flexible to do so. But in WinForms, it might be tricky sometimes. Most of the WinForms controls have a property BorderStyle that allows a border to be shown around that control. But the look and feel of that border cannot be changed. And there are controls (button, groupbox, etc.) that do not have this property, so there is no direct way of adding a border around them.

OK, enough talk! The following magic lines allow us to add a border around a control:

    using System.Drawing;
    using System.Windows.Forms;

    public class MyGroupBox : GroupBox
    {
        // Border width; not defined in the original snippet, so pick
        // whatever thickness you need.
        private const int BORDER_SIZE = 2;

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            ControlPaint.DrawBorder(e.Graphics, ClientRectangle,
                Color.Black, BORDER_SIZE, ButtonBorderStyle.Inset,
                Color.Black, BORDER_SIZE, ButtonBorderStyle.Inset,
                Color.Black, BORDER_SIZE, ButtonBorderStyle.Inset,
                Color.Black, BORDER_SIZE, ButtonBorderStyle.Inset);
        }
    }

The ControlPaint class allows you to add a border around a control. The above call can also be invoked in a Paint event handler of a control. The ControlPaint class has many other useful methods (along with other overloads of DrawBorder) that you can explore to learn more!
http://www.codeproject.com/Tips/388405/Draw-a-Border-around-any-Csharp-Winform-control.aspx
Understanding Kubernetes API - Part 2 : Pods

This is the second post in the series, which discusses the pods API. You can access all the posts in the series here.

Pods API

A pod is a Kubernetes abstraction that runs one or more containers. All pods in Kubernetes run in a namespace. All the system-related pods run in a namespace called kube-system. By default, all the user pods run in the default namespace.

The pods API is the part of the Kubernetes API which allows users to run CRUD operations on pods.

List Pods

We can list the pods using a GET API call to /api/v1/namespaces/{namespace}/pods. The below curl request lists all the pods in the kube-system namespace (here $APISERVER is the address of your Kubernetes API server):

    curl --request GET \
      --url $APISERVER/api/v1/namespaces/kube-system/pods

The output contains these important fields for each pod:

- metadata - the labels, name, etc.
- spec - the spec of the pod. This contains the image, resource requirements, etc.
- status - the status of the pod.

For example, the output looks as below for the etcd pod:

    "metadata": {
        "name": "etcd-minikube",
        "namespace": "kube-system"
        ...
    }
    "spec": {
        "containers": [
            {
                "name": "etcd",
                "image": "k8s.gcr.io/etcd-amd64:3.1.12",
                "command": [
                    "etcd"
                ...
    }
    "status": {
        "phase": "Running",
        "conditions": [
            {
                "type": "Initialized",
                "status": "True",
                "lastProbeTime": null,
                "lastTransitionTime": "2019-10-21T05:11:11Z"
            }
        ..

Get Details about a Single Pod

We can access the details of an individual pod using /api/v1/namespaces/kube-system/pods/{podname}. The example for the etcd pod looks as below:

    curl --request GET \
      --url $APISERVER/api/v1/namespaces/kube-system/pods/etcd-minikube

Create Pod

Creating a pod requires defining the spec and metadata for it. Usually the user writes a YAML file to define these. Below is the YAML definition for creating an nginx pod.

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

The Kubernetes API accepts JSON rather than YAML. The corresponding JSON looks as below:

    {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": "nginx-pod"
        },
        "spec": {
            "containers": [
                {
                    "name": "nginx",
                    "image": "nginx:1.7.9",
                    "ports": [
                        {
                            "containerPort": 80
                        }
                    ]
                }
            ]
        }
    }

The user needs to send a POST API call to /api/v1/namespaces/default/pods:

    curl --request POST \
      --url $APISERVER/api/v1/namespaces/default/pods \
      --header 'content-type: application/json' \
      --data '{"apiVersion":"v1","kind":"Pod","metadata":{"name":"nginx-pod"},"spec":{"containers":[{"name":"nginx","image":"nginx:1.7.9","ports":[{"containerPort":80}]}]}}'

Delete Pod

We can delete an individual pod using a DELETE API call to /api/v1/namespaces/{namespace}/pods/{podname}:

    curl --request DELETE \
      --url $APISERVER/api/v1/namespaces/default/pods/nginx-pod

Conclusion

A pod is a basic abstraction in Kubernetes to run one or more containers. In this post, we discussed how to create, delete and list pods using the Kubernetes API.
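The same calls can be scripted. Below is a hedged Python sketch (standard library only; the localhost:8001 address assumes a local kubectl proxy is running, and the helper names are made up for illustration) that builds the create-pod request without sending it:

```python
import json
import urllib.request

API = "http://localhost:8001"  # assumed: address of a local `kubectl proxy`

def pod_manifest(name, image, port):
    """Build the same JSON body the curl example sends (helper name is made up)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                {"name": name, "image": image,
                 "ports": [{"containerPort": port}]}
            ]
        },
    }

def create_pod_request(namespace, manifest):
    """Prepare POST /api/v1/namespaces/{namespace}/pods with a JSON body."""
    url = "%s/api/v1/namespaces/%s/pods" % (API, namespace)
    return urllib.request.Request(
        url,
        data=json.dumps(manifest).encode(),
        headers={"Content-Type": "application/json"},
        method="POST")

req = create_pod_request("default", pod_manifest("nginx-pod", "nginx:1.7.9", 80))
print(req.get_method(), req.full_url)
# Actually sending it requires a reachable API server:
#   urllib.request.urlopen(req)
```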
http://blog.madhukaraphatak.com/understanding-k8s-api-part-2/
NAME

    scrl, scroll, wscrl - scroll a Curses window

SYNOPSIS

    #include <curses.h>

    int scrl(int n);
    int scroll(WINDOW *win);
    int wscrl(WINDOW *win, int n);

DESCRIPTION

    The scroll() function scrolls win one line in the direction of the
    first line.

    The scrl() and wscrl() functions scroll the current or specified
    window. If n is positive, the window scrolls n lines toward the
    first line. Otherwise, the window scrolls -n lines toward the last
    line.

    These functions do not change the cursor position.

    If scrolling is disabled for the current or specified window, these
    functions have no effect. The interaction of these functions with
    setscrreg() is currently unspecified.

RETURN VALUE

    Upon successful completion, these functions return OK. Otherwise,
    they return ERR.

ERRORS

    No errors are defined.

SEE ALSO

    <curses.h>.
http://pubs.opengroup.org/onlinepubs/007908775/xcurses/scrl.html
Spec URL:
SRPM URL:

Description: This package contains the R RPM macros, that most implementations should rely on. You should not need to install this package manually as the R-devel package requires it. So install the R-devel package instead.

This is part of the implementation of koji scratch build:

The Change is now accepted by FESCo:

The package looks pretty good to me. I've not checked what the files do, however. There are one or two queries, so please look into them before the import.

XXX APPROVED XXX

Package Review
==============

Legend: [x] = Pass, [!] = Fail, [-] = Not applicable, [?] = Not evaluated,
[ ] = Manual review needed

Issues:
=======
- Package contains the mandatory BuildRequires.
  Note: Missing BuildRequires on tex(latex), R-devel
  See:
  ^ False positive?

===== MUST items =====

Generic:
Note: Licenses found: "Unknown or generated". 4 files have unknown
license. Detailed output of licensecheck in
/home/asinha/dump/fedora-review/1731422-R-rpm-macros

R:
[x]: Package have the default element marked as %%doc :
[x]: Package requires R-core.

===== SHOULD items =====

Generic:
[-]: If the source package does not include license text(s) as a separate
     file from upstream, the packager SHOULD query upstream to include it.
[x]: Final provides and requires are sane (see attachments).
[?]: Package functions as described.
     ^ I've not checked this bit
[x]: Latest version is packaged.
[x]: Package does not include license text files separate from upstream.
[-]:

R:
[-]: The %check macro is present
[-]: Latest version is packaged.
     Note: The package does not come from one of the standard sources

===== EXTRA items =====

Generic:
[x]: Rpmlint is run on all installed packages.
     Note: There are rpmlint messages (see attachment).
[x]: Spec file according to URL is the same as in SRPM.
Rpmlint
-------
Checking: R-rpm-macros-1.0.0-1.fc31~bootstrap.noarch.rpm
          R-rpm-macros-1.0.0-1.fc31~bootstrap.src.rpm

R-rpm-macros.noarch: W: incoherent-version-in-changelog 1.0.0-1 ['1.0.0-1.fc31~bootstrap', '1.0.0-1.fc31~bootstrap']
^ Where does "bootstrap" come from?
R-rpm-macros.noarch: W: only-non-binary-in-usr-lib
R-rpm-macros.src: W: no-%build-section
2 packages and 0 specfiles checked; 0 errors, 3 warnings.

Rpmlint (installed packages)
----------------------------
R-rpm-macros.noarch: W: incoherent-version-in-changelog 1.0.0-1 ['1.0.0-1.fc31~bootstrap', '1.0.0-1.fc31~bootstrap']
R-rpm-macros.noarch: W: invalid-url URL: <urlopen error [Errno -2] Name or service not known>
R-rpm-macros.noarch: W: only-non-binary-in-usr-lib
1 packages and 0 specfiles checked; 0 errors, 3 warnings.

Source checksums
----------------
CHECKSUM(SHA256) this package     : 55a225c038e04889eb4f308d4b722527d5dfe6ec6c1f5f16b4c6e384f4ddca17
CHECKSUM(SHA256) upstream package : 55a225c038e04889eb4f308d4b722527d5dfe6ec6c1f5f16b4c6e384f4ddca17

Requires
--------
R-rpm-macros (rpmlib, GLIBC filtered):
    /usr/bin/Rscript
    R-core
    rpm

Provides
--------
R-rpm-macros:
    R-rpm-macros

Generated by fedora-review 0.7.2 (65d36bb) last change: 2019-04-09
Command line: /usr/bin/fedora-review -b 1731422
Buildroot used: fedora-rawhide-x86_64
Active plugins: Generic, Shell-api, R
Disabled plugins: Java, PHP, Haskell, Ocaml, fonts, Python, SugarActivity, Perl, C/C++
Disabled flags: EPEL6, EPEL7, DISTTAG, BATCH, EXARCH

> - Package contains the mandatory BuildRequires.
>   Note: Missing BuildRequires on tex(latex), R-devel
>   See:
> ^ False positive?

Completely. :-) fedora-review is considering this as an R package (it is not), since latex is required to build R packages' documentation.

(fedscm-admin): The Pagure repository was created at

Thanks Ankur! Yes, that warning is a false positive as José explained. The bootstrap in the version comes from defining the bootstrap conditional:
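A typical RPM bootstrap conditional looks like this (a hypothetical sketch, not the actual R-rpm-macros spec):

```
# hypothetical sketch -- enable with: rpmbuild --with bootstrap ...
%bcond_with bootstrap

Release: 1%{?dist}%{?with_bootstrap:~bootstrap}
```

When the package is built with --with bootstrap, the ~bootstrap suffix is appended to the release tag, which is what rpmlint's incoherent-version-in-changelog warning was reacting to.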
https://bugzilla.redhat.com/show_bug.cgi?id=1731422
Forum:Encyclopedic meme

From Uncyclopedia, the content-free encyclopedia

I've seen this meme on the site ... it has exploded: that articles should be encyclopedic. This is utterly new to me, and I don't know how it started or why it has somehow become a policy. There is a sort of template that is being pasted on new users' pages when they write bad or cheesy stuff. There are three lines that made my eyes bulge out:

- Uncyclopedia is not a place where you do everything just to be funny (if the result is funny writing...it doesn't matter...who invented this rule?)
- Criticism: This is not encyclopedic (there are hundreds of featured articles that are not encyclopedic...there is a featured article about a country that does not exist...this "policy" is totally new to me and has come out of nowhere. Non-existent things do not equal vanity articles.)
- Criticism: It does not interest a general reader (according to whom?...since when have we defined who our general reader is and what they want to read?)

HTBFANJS is the concept by which Uncyclopedia fosters new users. A user can write about whatever they like. They can use any style, as long as it's not disruptive, chronically voted for deletion, or against HTBFANJS. There has never been a strict law that says articles and their content should be as close to Wikipedian ones as possible with clever parody sewn in. Just look around the site. I have never heard of such limits before, and it makes me cringe to think that there should be such limits and that new users should be threatened over them. Not all non-encyclopedic writing is bathroom humour and off-colour randomness. Some of it is the very best writing that ever came out of this site.

My questions here are: When did HTBFANJS become superseded by this "encyclopedic" meme and this "for the general reader" meme? Why do we need any concept other than HTBFANJS?
Should we create an open forum to discuss the essence of this project and what kind of new material, edits and writing style should be encouraged? --ShabiDOO 03:28, August 31, 2013 (UTC) - Shabidoo, this is not a template. You got confused by my words at User talk:Tamerial horror. In this case, he created an article which he called A funny article that consisted of several lines and just told the readers that it would be funny to vandalize admins' talk pages. It is now deleted. - Concerning "encyclopedic meme", as you call it, sometimes, yes, articles don't have to be encyclopedic if they are funny without it but we tell the new users to be encyclopedic if they are creating something neither funny, nor encyclopedic, in which case being encyclopedic would help them to be funny. If you get what I mean. Anton (talk) 10:31, August 31, 2013 (UTC) - Copied from User talk:SPIKE: - "I think Spike just is refering to the fact that as a parody of an encyclopedia we must put on the most decent impression of an encyclopedia and then ruin the encyclopedic nature of an article with jokes in images and writing. - The best impressionists are the ones with the best accents but who say ridiculous things. That means that if the article at first glance doesn't look like something off wikipedia (without reading it) it looks like a poor impression. The only exeption are funny one-joke articles like Enigma Code and such which have a funny enough joke to comprimise the encyclopedic looking nature of the article. Sir ScottPat (talk) VFH UnS NotM WotM WotY 07:59, August 31, 2013 (UTC)" - Does that answer the:05, August 31, 2013 (UTC) - PS - Our general reader comes to Uncyclopedia as defined by google and wikipedia as "A parody of wikipedia." Therefore we must cater for the reader to read a parody of wikipedia.:08, August 31, 2013 (UTC) It seems to me that we are an all-purpose writer's playground. 
The main body of the site is encyclopedic, even though that term can be stretched very thin if the page is good. Then we are a place to write books (at least short stories, I've never seen a real book written here, are there any?) and the HowTo's and Why's provide another couple of categories to get very creative and speculative within. So uncy is many things, pure encyclopedia just one of them, imnho. Aleister 13:46 31-8 - When a new Uncyclopedian builds a story-arc of articles on his personal fantasy country or planet, I teach him about our joint project, which with respect to mainspace is exactly "a parody of Wikipedia"; wherever you want to take it, with whatever your personal biases and humor styles are, but always superficially presenting itself as an encyclopedia article. New Uncyclopedians who write articles overly based on the week's current events, I do guide toward UnNews, and more rarely, those who seem to need to nag the reader I gently push toward HowTo. There are many other ways to have fun, but it is better counsel for a newbie about whom you don't know everything, that he first master the basics. - And when a new Uncyclopedian uses even his userspace to code a reference card to who got voted off the island every week in the current TV season, we give him a sterner warning and delete his work; and this started a year ago with Bizzeebeever, who got the rules changed to reflect this. - The essence here should be to work on this common project and keep the reader in mind. The authors who wanted "an all-purpose writer's playground"--that is, for the focus to be themselves and not the reader--famously quit the site last January. Those who stayed here should carry on with the work for which we stayed. Spıke Ѧ 14:11 31-Aug-13 - Although I respect and agree with what you said Spike, your last comment may have sent two keen supporters of "all-purpose" (and two great writers) packing. 
I think encyclopedic-style stress should be placed on mainspace articles (as stated previously) however I believe HowTos and Whys and such like can continue despite the fact they are unencyclopedic as they are a large part of the community and I quite frankly find them fun to write as 14:55, August 31, 2013 (UTC) - All of this sounds very good. Encyclopedic pages are the bread and butter of the uncy and its mission to be a Wikipedia clone, just with a clown face and a satirical manner. Spike hits another homer when he says that new users should be educated in this, and know that the really really far out stuff can go elsewhere, but we do have a pretty wide limit. Lots of room to play in (reason I use the term "playground" is Funnybony told me a long time ago that uncy is a playground for adults. I've described it as that ever since. And the other writing areas, such as UnNews, UnBooks, UnPoetia, HowTo:s and their bastard children, expand that limit to just enough outside of a pure encyclopedic atmosphere to allow lots and lots of latitude about topics and other adventures. Has anyone ever written an entire book here? That concept should be an interesting playground in itself. Should we have the ultimate contest, writing a book in a given time period (four months. Maybe six months sounds about right, and as rules creator of this contest which shouldn't be held unless five people sign up, from any site. With extensions given when requested. what the fuck am I getting into here, it sounds like too much work but WTF) to be placed in the UnBooks section of uncy. Well, I digressed, so back to topic, Yay!, Spike and ScottPat. Aleister 1:57 31-8-'13 SPIKE...I agree completely that it is a good idea to encourage new users to write about things that are real...to avoid vanity articles and bathroom humour and to encourage satire and in that sense I agree with your goal. You are right. 
I myself wrote embarassing trash at first...but I was encouraged to write better and given direction...NOT LAWS (beyond HTBFANJS)...and a broad canvas to discover and hone in my own style...which all of us are still doing anyways. But it is completely different to tell new users WHAT style of articles WE expect, what WE don't want, what articles WE want them to write based on one's own idiosyncratic view of uncyclopedia. Encouraging good writing and suggesting what will improve it (and your suggestions are usually good) is one thing however...telling users what encyclopedia expects and what not to do...when encyclopedia has never created or envisioned such a policy...and when so much of our material and featured material contradicts this view...is something else. Very confusing. It's not the advice...its often good advice...it's the way it is being communicated as law instead of personal opinion...and communicated quite agressively some times. --ShabiDOO 18:32, August 31, 2013 (UTC) - Shabidoo, want to have an IPod Car? Anton (talk) 20:19, August 31, 2013 (UTC) - But seriously, could you give specific examples of "legalistic communication", along with examples of how that communication could be better communicated? I think everybody here is trying to communicate as best they can, in good faith. -- Simsilikesims(♀UN) Talk here. 07:15, September 2, 2013 (UTC) - I have always considered UnBooks and UnScripts etc as perfect areas to write and place ideas which you wouldn't find in the main encyclopedia parody space. 
Perhaps better would be that new contributors are given links on the welcome message for examples or point directly to that page where all the featured articles are:34, September 2, 2013 (UTC) - I point new Uncyclopedians to alternate namespaces in the very first paragraph of my welcome message: "If you aren't interested in a fake encyclopedia but in writing fake news stories, we have UnNews, and there are other projects for scripts, lyrics, how-to guides, and so on." (Romartus, who welcomes more users than I do, has adopted the same text.) - It's nice to see that we don't have a huge policy disagreement--nor, as the start of this Forum suggests, unprecedented, unilateral, and baseless imposition of will--but merely a tone-of-voice problem. I famously fail to attach to expressions of my opinion the pleasant language identifying it as personal opinion, which has led ScottPat to complain that I have access to a Pirate Code that he does not. But likewise I have never bought into Uncyclopedia as a "community" (a herd speaking with a single voice) and if I ever claim to be the voice, I'd want to rephrase it. Looking at the examples above, it is not so much me claiming to speak for Uncyclopedia but certain editors persistently claiming to speak for others, such as one, nearby, issuing an apology to an editor who did not deserve one, "on behalf of Uncyclopedia," a pronouncement not even an Admin should make. - Oops! the above, likewise, is not Policy but my personal opinion. Spıke Ѧ 15:48 2-Sep-13 - I did not complain, I specifically said that being the Pirate Code was a good thing. I originally entitled Spike that for solving grammar disputes anyway.:30, September 2, 2013 (UTC) Everyone is being very polite and non-confrontational here. This is reducing the level of drama that we have all come to expect from our forums, and means this forum fails our basic standards of communication. 
In order to ensure this forum is not deleted due to not meeting Uncyclopedia standards, I would just like to say fuck you all - you're all wrong! • Puppy's talk page • 03:55 05 Sep 2013
http://uncyclopedia.wikia.com/wiki/Forum:Encyclopedic_meme?oldid=5729484
On 20/09/12 5:23 PM, Derek Buitenhuis wrote:
> On 20/09/2012 3:53 PM, jamal wrote:
>> I think HAVE_WINDOWS_H should work for that.
>
> We already have something like HAVE_MSVCRT or so. There's no reason for
> an extra check. Heck, we may as well just use _WIN32.

HAVE_WINDOWS_H already exists, that's why i mentioned it.

>> Nonetheless, Mingw32's vsnprintf works fine. The buggy one is Mingw64's
>> (both 32 and 64 bits) before rev 4663 as you mentioned, so we should
>> make sure to use avpriv_vsnprintf only with Mingw64 v1.0 and v2.0.
>
> I'd rather just use avpriv_vsnprintf everywhere instead of extra version
> checks. Anyone who still uses pre-xp OSes is not fit to be using a
> computer.
>
>> This is important because Mingw32 is the only remaining toolchain that
>> compiles for Win98/NT4 and Win2K, so forcing the usage of
>> avpriv_vsnprintf with it would mean the end of ffmpeg's support for
>> those versions.
>
> See above.

I personally don't agree with dropping support for Win98/NT4/2K when there's no justified reason for doing so. If we were trying to add some new feature or fix that would bring improvements to current Windows versions, and the only way to achieve that were by dropping support for old versions, then I'd agree, since it would be justified. But what you suggest is dropping support simply to avoid doing some extra checks.

>> And regarding the prototypes, i still think that a new
>> libavutil/os_support.h header for this would be a good idea.
>
> I am certain this will be frowned upon... see all the bikeshedding about
> where the implementation was placed in the first place...

What about adding new functions av_vsnprintf and av_snprintf to avstring.h/c, and replacing every instance of vsnprintf and snprintf with them? With MSVC and Mingw64 < v3.0 they would use the code that's currently in snprintf.c, and with everything else they would simply call the corresponding system function. Check for example the attached patch.

Regards.
-------------- next part --------------
diff --git a/libavutil/avstring.c b/libavutil/avstring.c
index 8b258a4..416309e 100644
--- a/libavutil/avstring.c
+++ b/libavutil/avstring.c
@@ -118,6 +118,49 @@ end:
     return p;
 }
 
+int av_snprintf(char *s, size_t n, const char *fmt, ...)
+{
+    va_list ap;
+    int ret;
+
+    va_start(ap, fmt);
+    ret = av_vsnprintf(s, n, fmt, ap);
+    va_end(ap);
+
+    return ret;
+}
+
+int av_vsnprintf(char *s, size_t n, const char *fmt,
+                 va_list ap)
+{
+#if defined(__MSVC__) || defined(__MINGW64_VERSION_MAJOR) && (__MINGW64_VERSION_MAJOR < 3)
+    int ret;
+    va_list ap_copy;
+
+    if (n == 0)
+        return _vscprintf(fmt, ap);
+    else if (n > INT_MAX)
+        return AVERROR(EFBIG);
+
+    /* we use n - 1 here because if the buffer is not big enough, the MS
+     * runtime libraries don't add a terminating zero at the end. MSDN
+     * recommends to provide _snprintf/_vsnprintf() a buffer size that is
+     * one less than the actual buffer, and zero it before calling
+     * _snprintf/_vsnprintf() to workaround this problem. */
+    memset(s, 0, n);
+    va_copy(ap_copy, ap);
+    ret = _vsnprintf(s, n - 1, fmt, ap_copy);
+    va_end(ap_copy);
+    if (ret == -1)
+        ret = _vscprintf(fmt, ap);
+
+    return ret;
+#else
+    return vsnprintf(s, n, fmt, ap);
+#endif
+}
+
 char *av_d2str(double d)
 {
     char *str= av_malloc(16);
diff --git a/libavutil/avstring.h b/libavutil/avstring.h
index f73d6e7..5dc7ead 100644
--- a/libavutil/avstring.h
+++ b/libavutil/avstring.h
@@ -125,6 +125,9 @@ size_t av_strlcatf(char *dst, size_t size, const char *fmt, ...) av_printf_forma
  */
 char *av_asprintf(const char *fmt, ...) av_printf_format(1, 2);
 
+int av_snprintf(char *s, size_t n, const char *fmt, ...) av_printf_format(3, 4);
+int av_vsnprintf(char *s, size_t n, const char *fmt, va_list ap);
+
 /**
  * Convert a number to a av_malloced string.
  */
http://ffmpeg.org/pipermail/ffmpeg-devel/2012-September/131450.html
A RoseIndia index of related Struts2 questions and examples:

- Struts2 s:select tag — creating a drop-down list in a JSP
- Auto Completer Example — the autocompleter tag and the options shown in its drop-down list
- Struts2 validation — validating a field so it does not accept multiple spaces
- Struts2 tags — UI tags, including the autocompleter
- select tag in Struts2 to handle enums — setting enum values from an s:select tag
- Ajax — creating an autocomplete textbox in Java
- Struts code — displaying query results in a JSP with Update and Delete links for each record
- Struts 2 Tags (UI Tags) examples — form tags such as the autocompleter
- doubleselect tag in Struts2 with dynamic rather than hardcoded values
- Struts 2.2.1 Ajax tags — autocompleter, datetimepicker, div, tabbedpanel, textarea, tree, treenode
- Struts2 email code — SMTPSendFailedException when sending mail
- Struts2 HelloWorld example not showing the current date and time
- Struts2 with Hibernate 3 integration
- Struts2 Ajax validation example — validating login through Ajax
- Struts2 Training — course overview
- Java source code to send group mails using Struts2
- Example code demos — jQuery blur and change events
- Java source code to create a mail server using Struts2
- Reading a properties file in a Struts2 action class
- Spring 3.0 tutorials with example code
- Pre-populating fields from the database in Struts2
- Using .properties file data in tiles.xml of a Struts2 (2.1.8) application
- Struts 2.0.2 release notes — plugin registry and annotations
- Uploading and downloading files from a server in Struts2
- Running a Struts2 Hello World application in Eclipse
- Bean tags in Struts 2
- Handling multiple submit buttons with DispatchAction
- Struts login application example with required jar files
- Shopping cart using Struts with MySQL
- Using a session variable value in a Struts action class
- Generating a PDF in Struts, including inserting an image into the PDF
- Struts 2.1.8 Hello World example using properties files
- Struts 1 tutorials and example programs, including LookupDispatchAction
- Code review request — blank page after submit in a Struts LOGINAction
- Online shopping web application with Struts and JDBC
- Ordering a product in an online shopping application with Struts2
http://roseindia.net/tutorialhelp/comment/85876
This article will teach you how to develop an application for Sugar and the XO, the OLPC project machine. More precisely, in this article, I will use C# and Mono to create a new "activity" which works on the XO and on the Sugar emulator included in the downloadable virtual machine.

The One Laptop per Child association develops a low-cost laptop—the "XO Laptop"—to revolutionize how we educate the world's children. The mission of the OLPC project is to provide educational opportunities for the world's poorest children by providing each child with a rugged, low-cost, low-power, connected laptop with content and software designed for collaborative, joyful, self-empowered learning.

In short, the XO is a netbook. The XO is an innovative hardware design, with a dual-mode display—both a full-color transmissive mode and a black-and-white reflective mode (sunlight-readable). The laptops have a 433 MHz processor and 256 MB of DRAM, with 1 GB of NAND flash memory (there is no hard disk). The laptops have wireless broadband that allows them to work with a standard access point or as a mesh network—each laptop is able to talk to its nearest neighbors, creating an ad hoc local-area network (based on the 802.11s standard). The laptops use a wide range of DC power inputs (including wind-up and solar).

Sugar is the operating system of the XO laptop. The Sugar Learning Platform promotes collaborative and critical-thinking learning. It is already in use worldwide on 800,000 XOs distributed to developing countries. Sugar is supported by a large community of developers and teachers around the world. Sugar is useful not only on the XO laptop but on every other laptop. Technically speaking, Sugar is based on Fedora Core Linux and on the Gnome user interface. However, Sugar can also be used on other Linux distributions (Debian, Fedora, Mandriva, ...).
Sugar has a unique window manager with simplified concepts that every child can understand very quickly. More importantly, most of the interface can be handled by children who haven't yet been taught to read.

The Mono project is an open development initiative to develop an open-source version of the Microsoft .NET development platform. Its objective is to enable Linux developers to build and deploy cross-platform .NET applications. The project implements technologies developed by Microsoft and submitted to the ECMA organization for standardization. Mono is not only a ".NET clone": Mono comes with some unique C#-based components, libraries, and frameworks. The most important one is Gtk#, which allows you to build Gnome applications in C#. Gtk# is a "binding" for the Gtk+ GUI toolkit. The word "binding" means that it allows calls to native libraries directly from C# source code.

The following screen capture shows a very basic sample in Mono using a command-line compiler. If you are a .NET developer, this sample should look familiar to you. The only difference from .NET is that the resulting .EXE file is not directly executable on Linux, so you should prefix the command-line call with mono.

Sugar is mainly written in Python. Python is a scripting language, so it's a good tool to let users customize the system. If you want to create new applications (named "Activities"), Sugar provides a large number of Python APIs for that.

Torello Querci is a contributor to the Mono project. A year ago, Torello wrote a C#/Sugar binding that lets you use Mono to create Sugar Activities. This binding is packaged as a .NET assembly named Sugar.dll. Most Sugar APIs are callable from this assembly. Of course, the Sugar.dll assembly is needed for the sample described here.

To create our first activity, we're going to use MonoDevelop. MonoDevelop is a Visual Studio-like IDE. MonoDevelop lets you edit and package a Mono application.
Let's start: we're using the Gtk# template because Sugar is based on Gtk. Let's now configure this solution to add the Sugar binding. We just need to add the Sugar.dll assembly: right-click on References, choose "Edit References…", open the ".NET Assembly" sheet, and select the right assembly. That's all, the solution is now ready.

Here is the source code to create the main window for the Activity:

using System;
using System.Collections;
using Gtk;
using Sugar;

namespace LabActivity
{
    public class MainWindow : Sugar.Window
    {
        public new static string activityId = "";
        public new static string bundleId = "";

        public MainWindow(string activityId, string bundleId) : base(activityId, bundleId)
        {
            this.DeleteEvent += new DeleteEventHandler(OnMainWindowDelete);

            VBox vbox = new VBox();

            Button _button = new Button();
            _button.Label = "Quit";
            _button.Clicked += new EventHandler(OnClick);
            vbox.Add(_button);

            this.Add(vbox);
            ShowAll();
        }

        void OnMainWindowDelete(object sender, DeleteEventArgs a)
        {
            Application.Quit();
            a.RetVal = true;
        }

        void OnClick(object sender, EventArgs a)
        {
            Application.Quit();
        }
    }
}

If you are a .NET WinForms developer, the Gtk framework should look familiar to you. The MainWindow's constructor is responsible for the window initialization: it creates controls and sets up event handlers. The Sugar binding adds the need to inherit from the Sugar.Window class to access the low-level communication with the Sugar API. More precisely, the main window should handle the Activity's identifier and the instance's identifier. Both values are sent to the window manager and should be retrieved from the Activity entry point.
Here's how:

public static void Main(string[] args)
{
    if (args.Length > 0)
    {
        IEnumerator en = args.GetEnumerator();
        while (en.MoveNext())
        {
            if (en.Current.ToString().Equals("-sugarActivityId"))
            {
                if (en.MoveNext())
                {
                    activityId = en.Current.ToString();
                }
            }
            if (en.Current.ToString().Equals("-sugarBundleId"))
            {
                if (en.MoveNext())
                {
                    bundleId = en.Current.ToString();
                }
            }
        }
    }

    Application.Init();
    new MainWindow(activityId, bundleId);
    Application.Run();
}

We can now build and run the new application: click on "Project/Run". You should obtain this:

It's a first success: we've got our first C# Gtk application which is compliant with Sugar.

A prerequisite to running a .NET application is that the .NET Framework is installed. Of course, it's the same thing for Mono: you should have Mono installed on your machine. Unfortunately, Mono is not a standard package on the XO, and we can't force each user to pre-install it to run our application. Fortunately, Mono provides a feature that .NET lacks: the capacity to package Mono itself into an executable file. This is what we call a "bundle". A bundle allows you to generate a runnable file embedding everything needed to execute it: assemblies for the .NET Framework, user assemblies, and the JIT compiler to translate MSIL into machine code instructions suitable for the processor.

The previous schema shows the bundle for the Hello World application. On the left side, the .EXE file is just a standard assembly. This assembly only contains the MSIL code for the application and uses the JIT compiler and the .NET Framework at runtime. On the right side, we create a Mono bundle. The bundle embeds the JIT compiler along with only those .NET Framework assemblies that are needed. Nothing more is needed at runtime. The Mono bundle for the Hello World application is created with the mkbundle2 command. Of course, both the hello.exe assembly and the hello bundle do the same thing.
Note that the hello bundle no longer needs the mono prefix, but it is much larger (4 MB, compared to 3 KB for the initial assembly!). The Mono bundle for our sample activity is not shown here but can be built the very same way.

On the Activities page of the OLPC wiki, you can find all activities downloadable on the XO laptop. The following screen capture shows a small part of this page. As you can see, downloading a new activity means downloading a .XO file. Let's see what a ".xo" file is and how we can generate this sort of file.

Like other common file formats (.JAR, .DOCX, ...), the .XO file is a zip file containing all the files needed for the activity: binaries and resources. We describe it below.

The most important file in the .XO file is the manifest. The manifest should be located at the root of the file. Here is the content of the manifest (named MANIFEST) for the activity:

activity/activity-labactivity.svg
activity/activity.info
bin/libgdksharpglue-2.so
bin/libgladesharpglue-2.so
bin/libglibsharpglue-2.so
bin/libgtksharpglue-2.so
bin/libMonoPosixHelper.so
bin/libpangosharpglue-2.so
bin/labactivity-activity
bin/labactivity.exe
bin/uiX11Util.so

The manifest file describes all files in the .XO file. The .XO file has two directories: activity, which holds the Activity's properties, and bin, which holds the binary files. The binary files are: the Mono bundle for the activity (labactivity.exe), the Gtk# shared libraries (.so files), and a Python script to start the Activity (labactivity-activity).

The first file in the activity directory is the Activity's icon. This file uses the SVG format. SVG is a vector format based on XML. The SVG file used here just draws a little square:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "" [
    <!ENTITY stroke_color "#666666">
    <!ENTITY fill_color "#FFFFFF">
]>

Another important file is the activity.info file. This file contains all the properties for the Activity.
Here is our file:

[Activity]
name = LabActivity
activity_version = 1
host_version = 1
service_name = org.olpcfrance.LabActivity
icon = activity-labactivity
exec = labactivity-activity
mime_types =

The activity.info file is just a text file containing the values for each property: name, ID, link to the icon file, link to the start script, ...

So, in conclusion, to create the .xo file you should write a MANIFEST listing every file, then zip it together with the activity and bin directories. The deploy script included with the source files of the project automatically builds the .XO file. You should launch it at the end of each build. At the end of the script, a LabActivity-1.xo is generated.

Our Activity can be installed on the Sugar emulator. A shortcut for this emulator is available on the Ubuntu desktop of the virtual machine included with this article. Once launched, the Sugar emulator displays the Sugar home page. To install a new Activity, first click on the Terminal Activity (the square icon with a dollar sign). Then run the sugar-install-bundle command on the ".xo" file. This system command unzips the package and adds the Activity. The Activity's icon is now visible at the top of the desktop (the little white square). A click on the icon launches our Activity.

Note that Gtk controls look slightly different on Sugar than on Ubuntu: buttons are rounded. If you are lucky enough to have an XO machine, you can install the Activity with a USB key where you previously put the .XO file, or use the Browse Activity if you have previously deployed the .XO file on a website. The resulting window is exactly the same as the one in the emulator.

The previous part of this article shows you how you can write a new Activity from scratch. Alternatively, you may be interested in porting an existing application. This is what we call "sugarizing" an application. Torello Querci has done a great job sugarizing the existing Mono Gbrainy application. Gbrainy is a collection of little games based on memory. The following screen capture shows you the application on the XO.
Gbrainy was initially written in pure Gtk. Gbrainy is also an interesting application because it uses two advanced Mono features: Glade and Gettext.

Glade is a window design tool for Gtk. Glade comes with a dedicated design tool and uses a file format based on XML to store resources. Gettext is the localization tool for Gnome. The Mono.Unix namespace allows calls to Gettext from your C# application. Note that both Gettext and the standard localization mechanism of .NET are usable from Mono. The main advantage of Gettext is that it is supported by a large community of developers and is compatible with many common tools, like the Pootle server or the Poedit editor.

The following source code comes from the sugarized Gbrainy application. You can see the Sugar features, the use of Glade properties ([Glade.Widget]), and some calls to localization methods (Catalog.GetString(...)).

public class gbrainy
{
    [Glade.Widget("gbrainy")] Gtk.Window app_window;
    [Glade.Widget] Box drawing_vbox;
    [Glade.Widget] Gtk.Label question_label;
    [Glade.Widget] Gtk.Label solution_label;
    [Glade.Widget] Gtk.Entry answer_entry;
    [Glade.Widget] Gtk.Button answer_button;
    [Glade.Widget] Gtk.Button tip_button;
    [Glade.Widget] Gtk.Button next_button;
    [Glade.Widget] Gtk.Statusbar statusbar;
    [Glade.Widget] Gtk.Toolbar toolbar;

    GameDrawingArea drawing_area;
    GameSession session;
    const int ok_buttonid = -5;
    ToolButton pause_tbbutton;

    string activityId = "";
    string bundleId = "";

    public gbrainy (string [] args, params object [] props)
    {
        Catalog.Init ("gbrainy", Defines.GNOME_LOCALE_DIR);

        IconFactory icon_factory = new IconFactory ();
        AddIcon (icon_factory, "math-games", "math-games-32.png");
        AddIcon (icon_factory, "memory-games", "memory-games-32.png");
        AddIcon (icon_factory, "pause", "pause-32.png");
        AddIcon (icon_factory, "resume", "resume-32.png");
        AddIcon (icon_factory, "endgame", "endgame-32.png");
        AddIcon (icon_factory, "allgames", "allgames-32.png");
        AddIcon (icon_factory, "endprogram", "endprogram-32.png");
        icon_factory.AddDefault ();

        Glade.XML gXML = new Glade.XML (null, "gbrainy.glade", "gbrainy", null);
        gXML.Autoconnect (this);

        Sugar.Activity activity = new Sugar.Activity(app_window, activityId, bundleId);
        activity.SetActiveEvent += activeChanged;
        app_window.Show();

        toolbar.IconSize = Gtk.IconSize.Dnd;

        Tooltips tooltips = new Tooltips ();
        ToolButton button = new ToolButton ("allgames");
        button.SetTooltip (tooltips, Catalog.GetString ("Play all the games"), null);
        button.Label = Catalog.GetString ("All");
        button.Clicked += OnAllGames;
        toolbar.Insert (button, -1);

        // ...
    }

    // ...
}

This article is an introduction to Sugar development using Mono. Mono allows you to use your .NET and C# skills to create software for the XO laptop, to contribute to the OLPC project, and thereby to provide educational material for the world's poorest children. Why not take this chance?

This article comes from a full tutorial on the SugarLabs wiki. A French version is also available on the TechHead Brothers website. Do not hesitate to contact me for more information on the OLPC.
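As a footnote to the packaging section earlier: the MANIFEST-plus-zip structure of a .xo file is simple enough to script yourself. Below is a minimal sketch of such a deploy script in Python; build_xo and the demo file names are illustrative (following the LabActivity layout described above), not the actual deploy script shipped with the article's sources.

```python
import os
import tempfile
import zipfile

def build_xo(root, out_path):
    """Write a MANIFEST listing every file under activity/ and bin/,
    then zip those files (plus the MANIFEST itself) into a .xo bundle."""
    files = []
    for dirname in ("activity", "bin"):
        for base, _, names in os.walk(os.path.join(root, dirname)):
            for name in sorted(names):
                files.append(os.path.relpath(os.path.join(base, name), root))
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as xo:
        xo.writestr("MANIFEST", "\n".join(files) + "\n")
        for rel in files:
            xo.write(os.path.join(root, rel), rel)
    return files

# Demo with a throwaway project tree mirroring the LabActivity layout
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "activity"))
os.makedirs(os.path.join(root, "bin"))
with open(os.path.join(root, "activity", "activity.info"), "w") as f:
    f.write("[Activity]\nname = LabActivity\n")
with open(os.path.join(root, "bin", "labactivity.exe"), "w") as f:
    f.write("")
xo_path = os.path.join(root, "LabActivity-1.xo")
packaged = build_xo(root, xo_path)
print(packaged)  # → ['activity/activity.info', 'bin/labactivity.exe']
```

The resulting archive has exactly the shape that sugar-install-bundle expects: a MANIFEST at the root plus the activity and bin directories.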
http://www.codeproject.com/Articles/35770/Develop-a-Mono-application-for-the-XO-laptop?fid=1539600&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&fr=1&PageFlow=FixedWidth
Creating a Discord welcome bot

In this tutorial, we'll create a welcome bot for our programming discussion Discord server. This bot will welcome users as they join and assign them roles and private channels based on their stated interests.

By the end of this tutorial, you will:

- Have familiarity with the process of creating a Discord bot application.
- Be able to use discord.py to develop useful bot logic.
- Know how to host Discord bots on Replit!

Getting started

Sign in to Replit or create an account if you haven't already. Once logged in, create a Python repl.

Creating a Discord application

Open another browser tab and visit the Discord Developer Portal. Log in with your Discord account, or create one if you haven't already. Keep your repl open – we'll return to it soon.

Once you're logged in, create a new application. Give it a name, like "Welcomer".

Discord applications can interact with Discord in several different ways, not all of which require bots, so creating one is optional. That said, we'll need one for this project. Let's create a bot:

1. Click on Bot in the menu on the left-hand side of the page.
2. Click Add Bot.
3. Give your bot a username (such as "WelcomeBot").
4. Click Reset Token and then Yes, do it!
5. Copy the token that appears just under your bot's username.

The token you just copied is required for the code in our repl to interface with Discord's API. Return to your repl and open the Secrets tab in the left sidebar. Create a new secret with DISCORD_TOKEN as its key and the token you copied as its value. Once you've done that, return to the Discord developer panel. We need to finish setting up our bot.

First, disable the Public Bot option – the functionality we're building for this bot will be highly specific to our server, so we don't want anyone else to try to add it to their server. What's more, bots on 100 or more servers have to go through a special verification and approval process, and we don't want to worry about that.

Second, we need to configure access to privileged Gateway Intents. Depending on a bot's functionality, it will require access to different events and sources of data. Events involving users' actions and the content of their messages are considered more sensitive and need to be explicitly enabled. For this bot to work, we'll need to be able to see when users join our server, and we'll need to see the contents of their messages. For the former, we'll need the Server Members Intent, and for the latter, we'll need the Message Content Intent. Toggle both of these to the "on" position. Save changes when prompted.

Now that we've created our application and its bot, we need to add it to a server. We'll walk you through creating a test server for this tutorial, but you can also use any server you've created in the past, as long as the other members won't get too annoyed about it becoming a bot testing ground. You can't use a server that you're just a normal user on, as adding bots requires special privileges.

Open Discord.com in your browser. You should already be logged in. Then click on the + icon in the leftmost panel to create a new server. Alternatively, open an existing server you own.

In a separate tab, return to the Discord Dev Portal and open your application. Follow these steps to add your bot to your server:

1. Click on OAuth2 in the left sidebar.
2. In the menu that appears under OAuth2, select URL Generator.
3. Under Scopes, mark the checkbox labelled bot.
4. Under Bot Permissions, mark the checkbox labelled Administrator.
5. Scroll down and copy the URL under Generated URL.
6. Paste the URL in your browser's navigation bar and hit Enter.
7. On the page that appears, select your server from the drop-down box and click Continue.
8. When prompted about permissions, click Authorize, and complete the CAPTCHA.

Return to your Discord server. You should see that your bot has just joined. Now that we've done the preparatory work, it's time to write some code.
Return to your repl for the next section.

Writing the Discord bot code

We'll be using discord.py to interface with Discord's API using Python. Add the following code scaffold to main.py in your repl:

import os, re, discord
from discord.ext import commands

DISCORD_TOKEN = os.getenv("DISCORD_TOKEN")

bot = commands.Bot(command_prefix="!")

@bot.event
async def on_ready():
    print(f"{bot.user} has connected to Discord!")

bot.run(DISCORD_TOKEN)

First, we import the Python libraries we'll need, including discord.py and its commands extension. Next we retrieve the value of the DISCORD_TOKEN environment variable, which we set in our repl's Secrets tab above. Then we instantiate a Bot object. We'll use this object to listen for Discord events and respond to them.

The first event we're interested in is on_ready(), which will trigger when our bot logs onto Discord (the @bot.event decorator ensures this). All this event will do is print a message to our repl's console, telling us that the bot has connected.

Note that we've prepended async to the function definition – this makes our on_ready() function into a coroutine. Coroutines are largely similar to functions, but may not execute immediately, and must be invoked with the await keyword. Using coroutines makes our program asynchronous, which means it can continue executing code while waiting for the results of a long-running function, usually one that depends on input or output. If you've used JavaScript before, you'll recognize this style of programming.

The final line in our file starts the bot, providing DISCORD_TOKEN to authenticate it. Run your repl now to see it in action. Once it's started, return to your Discord server. You should see that your bot user is now online.

Creating server roles

Before we write our bot's main logic, we need to create some roles for it to assign. Our Discord server is for programming discussion, so we'll create roles for a few different programming languages: Python, JavaScript, Rust, Go, and C++. For the sake of simplicity, we'll use all-lowercase role names. Feel free to add other languages.

You can add roles by doing the following:

1. Right-click on your server's icon in the leftmost panel.
2. From the menu that appears, select Server Settings, and then Roles.
3. Click Create Role.
4. Enter a role name (for example, "python") and choose a color.
5. Click Back.
6. Repeat steps 3–5 until all the roles are created.

Your role list should now look something like this:

The order in which roles are listed is the role hierarchy. Users who have permission to manage roles will only be able to manage roles lower than their highest role on this list. Ensure that the WelcomeBot role is at the top, or it won't be able to assign users to any of the other roles, even with Administrator privileges.

At present, all these roles will do is change the color of users' names and the list they appear in on the right sidebar. To make them a bit more meaningful, we can create some private channels. Only users with a given role will be able to use these channels. To add private channels for your server's roles, do the following:

1. Click on the + next to Text Channels.
2. Type a channel name (e.g. "python") under Channel Name.
3. Enable the Private Channel toggle.
4. Click Create Channel.
5. Select the role that matches your channel's name.
6. Repeat for all roles.

As the server owner, you'll be able to see these channels regardless of your assigned roles, but normal members will not.

Messaging users

Now that our roles are configured, let's write some bot logic. We'll start with a function to DM users with a welcome message. Return to your repl and enter the following code just below the line where you defined bot:

async def dm_about_roles(member):
    print(f"DMing {member.name}...")

    await member.send(
        f"""Hi {member.name}, welcome to our server!

Which programming languages are you interested in? Reply with all that apply, as names or emojis:

* python (🐍)
* javascript (🕸)
* rust (🦀)
* go (🐹)
* c++ (🐉)
"""
    )

This simple function takes a member object and sends it a private message. Note the use of await when running the coroutine member.send(). We need to run this function when one of two things happens: a new member joins the server, or an existing member types the command !roles in a channel. The second one will allow us to test the bot without constantly leaving and rejoining the server, and let users change their minds about what programming languages they want to discuss.

To handle the first event, add this code below the definition of on_ready:

@bot.event
async def on_member_join(member):
    await dm_about_roles(member)

The on_member_join() callback supplies a member object we can use to call dm_about_roles().

For the second event, we'll need a bit more code. While we could use discord.py's bot commands framework to handle our !roles command, we will also need to deal with general message content later on, and doing both in different functions doesn't work well. So instead, we'll put everything to do with message contents in a single on_message() event. If our bot were just responding to commands, using @bot.command handlers would be preferable. Add the following code below the definition of on_member_join():

@bot.event
async def on_message(message):
    print("Saw a message...")
    if message.author == bot.user:
        return  # prevent responding to self

    # Respond to commands
    if message.content.startswith("!roles"):
        await dm_about_roles(message.author)

First, we print a message to the repl console to note that we've seen a message. We then check if the message's author is the bot itself. If it is, we terminate the function, to avoid infinite loops. Following that, we check if the message's content starts with !roles, and if so we invoke dm_about_roles(), passing in the message's author.

Stop and rerun your repl now. If you receive a CloudFlare error, type kill 1 in your repl's shell and try again. Once your repl's running, return to your Discord server and type "!roles" into the general chat. You should receive a DM from your bot.
Assigning roles from replies

Our bot can DM users, but it won't do anything when users reply to it. Before we can add that logic, we need to implement a small hack to allow our bot to take actions on our server based on the contents of direct messages.

The Discord bot framework is designed with the assumption that bots are generic and will be added to many different servers. Bots do not have a home server, and there's no easy way for them to trace a process flow that moves from a server to private messages like the one we're building here. Therefore, our bot won't automatically know which server to use for role assignment when a user replies to its DM. We could work out which server to use through the user's mutual_guilds property, but it is not always reliable due to caching. Note that Discord servers were previously known as "guilds", and this terminology persists in areas of the API.

As we don't plan to add this bot to more than one server at a time, we'll solve the problem by hardcoding the server ID in our bot logic. But first, we need to retrieve our server's ID. The easiest way to do this is to add another command to our bot's vocabulary. Expand the if statement at the bottom of on_message() to include the following elif:

    elif message.content.startswith("!serverid"):
        await message.channel.send(message.channel.guild.id)

Rerun your repl and return to your Discord server. Type "!serverid" into the chat, and you should get a reply from your bot containing a long string of digits. Copy that string to your clipboard.

Go to the top of main.py. Underneath DISCORD_TOKEN, add the following line:

SERVER_ID =

Paste the contents of your clipboard after the equals sign. Now we can retrieve our server's ID from this variable.

Once that's done, return to the definition of on_message(). We're going to add another if statement to deal with the contents of user replies in DMs. Edit the function body so that it matches the below:

@bot.event
async def on_message(message):
    print("Saw a message...")
    if message.author == bot.user:
        return  # prevent responding to self

    # NEW CODE BELOW
    # Assign roles from DM
    if isinstance(message.channel, discord.channel.DMChannel):
        await assign_roles(message)
        return
    # NEW CODE ABOVE

    # Respond to commands
    if message.content.startswith("!roles"):
        await dm_about_roles(message.author)
    elif message.content.startswith("!serverid"):
        await message.channel.send(message.channel.guild.id)

This new if statement will check whether the message that triggered the event was sent in a DM channel, and if so, will run assign_roles() and then exit. Now we need to define assign_roles(). Add the following code above the definition of on_message():

async def assign_roles(message):
    print("Assigning roles...")

    languages = set(re.findall("python|javascript|rust|go|c\+\+", message.content, re.IGNORECASE))

We can find the languages mentioned in the user's reply using regular expressions: re.findall() will return a list of strings that match our expression. This way, whether the user replies with "Please add me to the Python and Go groups" or just "python go", we'll be able to assign them the right roles. We convert the list into a set in order to remove duplicates.
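The findall-and-set step can be exercised on its own. Note that re.IGNORECASE preserves the original casing of each match, which is why the role lookup later calls lower():

```python
import re

reply = "Please add me to the Python and Go groups, plus c++"
languages = set(re.findall(r"python|javascript|rust|go|c\+\+", reply, re.IGNORECASE))
print(languages)  # matches keep their original casing: Python, Go, c++

normalized = {lang.lower() for lang in languages}
print(normalized)  # python, go, c++
```

Note also that the + in c++ must be escaped, since + is a repetition operator in regular expressions.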
Edit the function body so that it matches the below:

```python
@bot.event
async def on_message(message):
    print("Saw a message...")

    if message.author == bot.user:
        return # prevent responding to self

    # NEW CODE BELOW
    # Assign roles from DM
    if isinstance(message.channel, discord.channel.DMChannel):
        await assign_roles(message)
        return
    # NEW CODE ABOVE

    # Respond to commands
    if message.content.startswith("!roles"):
        await dm_about_roles(message.author)
    elif message.content.startswith("!serverid"):
        await message.channel.send(message.channel.guild.id)
```

This new `if` statement will check whether the message that triggered the event was in a DM channel, and if so, will run `assign_roles()` and then exit.

Now we need to define `assign_roles()`. Add the following code above the definition of `on_message()`:

```python
async def assign_roles(message):
    print("Assigning roles...")

    languages = set(re.findall("python|javascript|rust|go|c\+\+", message.content, re.IGNORECASE))
```

We can find the languages mentioned in user replies using regular expressions: `re.findall()` will return a list of strings that match our expression. This way, whether the user replies with "Please add me to the Python and Go groups" or just "python go", we'll be able to assign them the right role. We convert the list into a set in order to remove duplicates.

The next thing we need to do is deal with emoji responses. Add the following code to the bottom of the `assign_roles()` function:

```python
    language_emojis = set(re.findall("\U0001F40D|\U0001F578|\U0001F980|\U0001F439|\U0001F409", message.content))

    # Convert emojis to names
    for emoji in language_emojis:
        {
            "\U0001F40D": lambda: languages.add("python"),
            "\U0001F578": lambda: languages.add("javascript"),
            "\U0001F980": lambda: languages.add("rust"),
            "\U0001F439": lambda: languages.add("go"),
            "\U0001F409": lambda: languages.add("c++")
        }[emoji]()
```

In the first line, we do the same regex matching we did with the language names, but using emoji Unicode values instead of standard text.
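To see how the language matching behaves on its own, here's the same `re.findall()` call run against a made-up reply (the sample sentence is ours, not from the tutorial):

```python
import re

# Sample reply; matching is case-insensitive, so "Python" and "PYTHON" both hit.
reply = "Please add me to the Python and go groups. PYTHON rocks!"

matches = set(re.findall(r"python|javascript|rust|go|c\+\+", reply, re.IGNORECASE))
# Differently-cased duplicates survive the set, e.g. {'Python', 'PYTHON', 'go'}
print(matches)

# Lowercasing (as the tutorial does later with language.lower()) collapses them:
languages = {m.lower() for m in matches}
print(sorted(languages))  # ['go', 'python']
```

Note that the set only removes exact duplicates; it's the later `lower()` call that makes "Python" and "PYTHON" resolve to the same role name.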
You can find a list of emojis with their codes on Unicode.org. Note that to turn one of those codes into a Python escape sequence, replace the `+` with `000`: for example, `U+1F40D` becomes `\U0001F40D`.

Once we've got our set of emoji matches in `language_emojis`, we loop through it and use a dictionary to add the correct name to our `languages` set. This dictionary has emoji strings as keys and lambda functions as values. The trailing `[emoji]()` selects the lambda function for the provided key and executes it, adding a value to `languages`. This is similar to the switch-case syntax you may have seen in other programming languages.

We now have a full list of languages our users may wish to discuss. Add the following code below the `for` loop:

```python
    if languages:
        server = bot.get_guild(SERVER_ID)
        roles = [discord.utils.get(server.roles, name=language.lower()) for language in languages]
        member = await server.fetch_member(message.author.id)
```

This code first checks that the `languages` set contains values. If so, we use `get_guild()` to retrieve a `Guild` object corresponding to our server's ID (remember, guild means server). We then use a list comprehension and discord.py's `get()` function to construct a list of all the roles corresponding to languages in our list. Note that we've used `lower()` to ensure all of our strings are lowercase. Finally, we retrieve the member object corresponding to the user who sent us the message on our server.

We now have everything we need to assign roles. Add the following code within the body of the `if languages:` statement:

```python
        try:
            await member.add_roles(*roles, reason="Roles assigned by WelcomeBot.")
        except Exception as e:
            print(e)
            await message.channel.send("Error assigning roles.")
        else:
            await message.channel.send(f"""You've been assigned the following role{"s" if len(languages) > 1 else ""} on {server.name}: { ', '.join(languages) }.""")
```

The member object's `add_roles()` method takes an arbitrary number of role objects as positional arguments.
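The dictionary-dispatch trick is easier to see in isolation with plain strings in place of emoji (a toy example, not the bot's code):

```python
# Keys are the values we're switching on; values are zero-argument lambdas.
# Looking up a key and immediately calling the result mimics a switch-case.
languages = set()

for token in ["py", "rs", "py"]:
    {
        "py": lambda: languages.add("python"),
        "rs": lambda: languages.add("rust"),
    }[token]()  # [token] picks the lambda, () calls it

print(sorted(languages))  # ['python', 'rust']
```

A plain mapping of strings to strings (`languages.add(mapping[token])`) would work just as well here; the lambda version generalises to cases where each branch runs different code.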
We unpack our `roles` list into separate arguments using the `*` operator, and provide a string for the named argument `reason`. Our operation is wrapped in a try-except-else block. If adding roles fails, we'll print the resulting error to our repl's console and send a generic error message to the user. If it succeeds, we'll send a message to the user informing them of their new roles, making extensive use of string interpolation.

Finally, we need to deal with the case where no languages were found in the user's message. Add an `else:` block onto the bottom of the `if languages:` block as below:

```python
    else:
        await message.channel.send("No supported languages were found in your message.")
```

Rerun your repl and return to your Discord server. Open the DM channel with your bot and try sending it one or more language names or emojis. You should receive the expected roles. You can check this by clicking on your name in the right-hand panel on your Discord server – your roles will be listed in the box that appears.

## Removing roles

Our code currently does not allow users to remove roles from themselves. While we could do this manually as the server owner, we've built this bot to avoid having to do that sort of thing, so let's expand our code to allow for role removal.

To keep things simple, we'll remove any roles mentioned by the user which they already have. So if a user with the "python" role writes "c++ python", we'll add the "c++" role and remove the "python" role.

Let's make some changes. Find the `if languages:` block in your `assign_roles()` function and change the code above `try:` to match the below:

```python
    if languages:
        server = bot.get_guild(SERVER_ID)

        # <-- RENAMED VARIABLE + LIST CHANGED TO SET
        new_roles = set([discord.utils.get(server.roles, name=language.lower()) for language in languages])
        member = await server.fetch_member(message.author.id)

        # NEW CODE BELOW
        current_roles = set(member.roles)
```

We replace the list of roles with a set of new roles.
We also create a set of the roles the user currently holds. Given these two sets, we can figure out which roles to add and which to remove using set operations. Add the following code below the definition of `current_roles`:

```python
        roles_to_add = new_roles.difference(current_roles)
        roles_to_remove = new_roles.intersection(current_roles)
```

The roles to add will be roles that are in `new_roles` but not in `current_roles`, i.e. the difference of the sets. The roles to remove will be roles that are in both sets, i.e. their intersection.

Now we need to replace the try-except-else block with the code below:

```python
        try:
            await member.add_roles(*roles_to_add, reason="Roles assigned by WelcomeBot.")
            await member.remove_roles(*roles_to_remove, reason="Roles revoked by WelcomeBot.")
        except Exception as e:
            print(e)
            await message.channel.send("Error assigning/removing roles.")
        else:
            if roles_to_add:
                await message.channel.send(f"You've been assigned the following role{'s' if len(roles_to_add) > 1 else ''} on {server.name}: { ', '.join([role.name for role in roles_to_add]) }")
            if roles_to_remove:
                await message.channel.send(f"You've lost the following role{'s' if len(roles_to_remove) > 1 else ''} on {server.name}: { ', '.join([role.name for role in roles_to_remove]) }")
```

This code follows the same general logic as our original block, but can remove roles as well as add them.

Finally, we need to update the bot's original DM to reflect this new functionality. Find the `dm_about_roles()` function and add the following sentence to the end of its message (only the tail of the string is shown here):

```python
        Reply with the name or emoji of a language you're currently using and want to stop and I'll remove that role for you.
        """
    )
```

Rerun your repl and test it out. You should be able to add and remove roles from yourself. Try inviting some of your friends to your Discord server, and have them use the bot as well. They should receive DMs as soon as they join.
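Before moving on, the role-toggling set arithmetic is easy to verify with plain strings standing in for `Role` objects (an illustrative example, not the bot's actual data):

```python
current_roles = {"python", "go"}   # roles the member already holds
new_roles = {"go", "rust"}         # roles mentioned in the latest reply

roles_to_add = new_roles.difference(current_roles)       # mentioned but not held
roles_to_remove = new_roles.intersection(current_roles)  # mentioned and held

print(roles_to_add)     # {'rust'}
print(roles_to_remove)  # {'go'}
```

So a member with "python" and "go" who replies "go rust" gains "rust" and loses "go", which is exactly the toggle behaviour described above.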
## Where next?

We've created a simple Discord server welcome bot. There's a lot of scope for additional functionality. Here are some ideas for expansion:

- Include more complex logic for role assignment. For example, you could have some roles that require users to have been members of the server for a certain amount of time.
- Have your bot automatically assign additional user roles based on behavior. For example, you could give a role to users who react to messages with the most emojis.
- Add additional commands. For example, you might want to have a command that searches Stack Overflow, allowing members to ask programming questions from the chat.

Discord bot code can be hosted on Replit permanently, but you'll need to use an Always-on repl to keep it running 24/7. You can find our repl below:
https://docs.replit.com/tutorials/discord-role-bot