Subprocess Overview For a long time I have been using os.system() when dealing with system administration tasks in Python. The main reason for that was that I thought it was the simplest way of running Linux commands. The official Python documentation recommends the subprocess module for accessing system commands. The subprocess module allows us to spawn processes, connect to their input/output/error pipes, and obtain their return codes. Subprocess intends to replace several other, older modules and functions, such as: os.system, os.spawn*, os.popen*, popen2.*, commands. [Source] Let's start looking into the different functions of subprocess. subprocess.call() Run the command described by "args": subprocess.call(['df', '-h']) This time we set the shell argument to True: subprocess.call('du -hs $HOME', shell=True) Note that the official Python documentation includes a warning about using the shell=True argument: "Invoking the system shell with shell=True can be a security hazard if combined with untrusted input." [source] Now, let's move on and look at input and output. Input and Output With subprocess you can suppress the output, which is very handy when you want to run a system call but are not interested in the standard output. It also gives you a way to cleanly integrate shell commands into your scripts while managing input/output in a standard way. Return Codes You can use subprocess.call return codes to determine the success of the command. subprocess.Popen() The underlying process creation and management in the subprocess module is handled by the Popen class. subprocess.Popen replaces os.popen. Let's get started with some real examples. subprocess.Popen takes a list of arguments: import subprocess p = subprocess.Popen(["echo", "hello world"], stdout=subprocess.PIPE) print p.communicate() >>> ('hello world\n', None) Note that even though you could have used shell=True, it is not the recommended way of doing it.
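The two points above, return codes and suppressing output, can be sketched like this (a Python 3 sketch, unlike the Python 2 examples in this article; sys.executable is used here only so the example does not depend on any particular system command being installed):

```python
import subprocess
import sys

# call() runs a command and returns its exit status (0 means success).
rc = subprocess.call([sys.executable, "-c", "print('hello')"])
if rc == 0:
    print("command succeeded")
else:
    print("command failed with code", rc)

# To suppress the output entirely, redirect stdout to DEVNULL
# (available since Python 3.3).
quiet_rc = subprocess.call(
    [sys.executable, "-c", "print('noise')"],
    stdout=subprocess.DEVNULL,
)
```

The same pattern works for any command list you would pass to subprocess.call.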
If you know that you will only work with specific subprocess functions, such as Popen and PIPE, it is enough to import only those: from subprocess import Popen, PIPE p1 = Popen(["dmesg"], stdout=PIPE) print p1.communicate() Popen.communicate() The communicate() method returns a tuple (stdoutdata, stderrdata). Popen.communicate() interacts with the process: it reads data from stdout and stderr until end-of-file is reached and waits for the process to terminate. # Import the module import subprocess # Ask the user for input host = raw_input("Enter a host to ping: ") # Set up the ping command and direct the output to a pipe p1 = subprocess.Popen(['ping', '-c', '2', host], stdout=subprocess.PIPE) # Run the command output = p1.communicate()[0] print output Let's show one more example. This time we use the host command. target = raw_input("Enter an IP or Host to ping: ") host = subprocess.Popen(['host', target], stdout=subprocess.PIPE).communicate()[0] print host I recommend that you read the links below to gain more knowledge about the subprocess module in Python. If you have any questions or comments, please use the comment field below. More Reading The ever useful and neat subprocess module:
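The communicate() pattern above translates directly to Python 3; one sketch of it (again using sys.executable as a portable stand-in for the commands in the article, so nothing beyond the standard library is assumed):

```python
import subprocess
import sys

# Python 3 version of the communicate() pattern: stdout comes back as
# bytes, and stderr is None because we did not ask for a pipe on it.
p = subprocess.Popen(
    [sys.executable, "-c", "print('hello world')"],
    stdout=subprocess.PIPE,
)
out, err = p.communicate()
print(out.decode().strip())
```

After communicate() returns, p.returncode holds the exit status of the child process.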
https://www.pythonforbeginners.com/os/subprocess-for-system-administrators
import scala.{Option => _, Either => _, _} sealed trait Option[+A] { def map[B](f: A => B): Option[B] = this match { case None => None case Some(a) => Some(f(a)) } def flatMap[B](f: A => Option[B]): Option[B] = { case None => None case Some(_) => f(_) } In the code above, the method map() passes the Scala compiler but shows an error in IDEA, while the method flatMap() errors in the Scala compiler but shows no error in IDEA. I use IDEA 15.0.2, Scala plug-in 2.2.0, Scala SDK 11.7.2. Hi, Patrick! Sorry for the inconvenience; we have a similar issue, and I have added a link to your question into it. I have tried Scala IDE, which does not have this issue, and its error messages are better than IDEA's: IDEA just shows an error with no detailed reason (or the reason is just "some error"), while Scala IDE shows detailed error info just like the Scala compiler gives.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206409269-Why-idea-scala-plug-in-work-totally-on-the-contrary-with-Scala-compiler?sort_by=votes
This is somewhat related to my previous post. I didn't want to include my user profile value in the form and instead added it during validation. This caused the model validation to be skipped in part, and I had to do some gymnastics to get it to run. In this latest attempt, I have instead included that profile as a hidden field and compare it to the current user profile. Much cleaner! Now all I have to do is get that current value into the form. It didn't take long on Google to find several examples. Here is my rendition. In the view, I overrode the get_form method to add the profile as a parameter:

def get_form(self, form_class=None):
    if form_class is None:
        form_class = self.get_form_class()
    return form_class(**self.get_form_kwargs(), current_user_profile=get_profile(self.request.user))

Next, in the form, I need to pop that value from the kwargs dictionary and assign it to an object variable:

class LinkForm(ModelForm):
    def __init__(self, *args, **kwargs):
        self.current_user_profile = kwargs.pop('current_user_profile')
        super(LinkForm, self).__init__(*args, **kwargs)
        self.fields['profile'].widget = HiddenInput()

I also set the profile field to hidden here. Now I can write a clean method for the profile field like so:

def clean_profile(self):
    profile = self.cleaned_data.get('profile', None)
    if profile != self.current_user_profile:
        self.add_error(None, ValidationError('Web page altered. Try again.', code='wrong_profile'))
    return profile

This is much easier and clearer than my previous method. Not bad for working late on a Wednesday night.
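The key trick in the form's __init__ is popping the custom kwarg before calling super(). The pattern can be sketched without Django at all; here BaseForm is a hypothetical stand-in for django.forms.ModelForm, not a real Django class:

```python
class BaseForm:
    # Hypothetical stand-in for ModelForm: it accepts only the
    # standard keyword arguments, so an unknown kwarg would be an error.
    def __init__(self, *args, **kwargs):
        self.data = kwargs.get("data", {})


class LinkForm(BaseForm):
    def __init__(self, *args, **kwargs):
        # Pop the custom kwarg BEFORE calling super(); otherwise the
        # base class would receive an unexpected keyword argument.
        self.current_user_profile = kwargs.pop("current_user_profile")
        super().__init__(*args, **kwargs)


form = LinkForm(data={"profile": "alice"}, current_user_profile="alice")
print(form.current_user_profile)
```

The same ordering matters in the real Django form: pop first, then delegate the remaining kwargs.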
https://dashdrum.com/blog/2018/02/passing-a-parameter-to-a-django-form/
Feature #16746 (closed): Endless method definition Added by mame (Yusuke Endoh) about 1 year ago. Updated 4 months ago. - Related to Feature #5054: Compress a sequence of ends (added) Updated by shyouhei (Shyouhei Urabe) about 1 year ago - Related to Feature #5065: Allow "}" as an alternative to "end" (added) Updated by shyouhei (Shyouhei Urabe) about 1 year ago - Related to Feature #12241: super end (added) Updated by mame (Yusuke Endoh) about 1 year ago - File deleted (endless-method-definition.patch) Updated by mame (Yusuke Endoh) about 1 year ago Updated by matz (Yukihiro Matsumoto) about 1 year ago - Tags deleted (joke) Updated by mame (Yusuke Endoh) about 1 year ago Updated by ruurd (Ruurd Pels) about 1 year ago Updated by nobu (Nobuyoshi Nakada) about 1 year ago Updated by ioquatix (Samuel Williams) about 1 year ago Have you considered adopting Python's whitespace-sensitive indentation? def hello(name): puts("Hello, #{ name }") hello("endless Ruby") #=> Hello, endless Ruby Updated by mame (Yusuke Endoh) about 1 year ago I'd like to experiment with this new syntax. We may find drawbacks in the future, but to find them, we need to experiment first. Matz. Updated by nobu (Nobuyoshi Nakada) about 1 year ago It is not easy to control parsing-time warnings, and it bothers tests. Updated by Eregon (Benoit Daloze) 12 months ago Is it intended to allow multi-line definitions in this style? I think we should not, and only single-line expressions should be allowed. I think that's not the original purpose of this ticket and rather an incorrect usage of this new syntax, e.g., Updated by marcandre (Marc-Andre Lafortune) 10 months ago tl;dr: I am against this proposal; it decreases readability, increases complexity and doesn't bring anything to the table (in that order). I read the thread carefully and couldn't find a statement of what the intended gain is.
matz (Yukihiro Matsumoto), could you please explain what you see as the benefits of this new syntax (seriously)? We may find drawbacks in the future, but to find them, we need to experiment first. I see many drawbacks already. Also, it is already possible to experiment with tools like. Here's a list of drawbacks to start: 1) Higher cognitive load. Ruby is already quite complex, with many nuances. We can already define methods with def, define_method, define_singleton_method. 2) Requiring all text editors, IDEs, parsers, linters and code highlighters to support this new syntax. 3) Encouraging less readable code. With this endless method definition, if someone wants to know what a method does, one has to mentally parse the name and arguments, and find the = to see where the code actually starts. My feeling is that the argument signature is not nearly as important as the actual code of the method definition. def header_convert(name = nil, &converter) = header_fields_converter.add_converter(name, &converter) # ^^^^^^^^^^^^^^^^^^^^^^^^^^ all this needs to be scanned by the eye to get to the definition # ^ ^^^ the first `=` needs to be ignored, one has to grep for ") =" That method definition, even if very simple, deserves its own line start, which makes it easy to locate. If Rubyists used this new form to write methods in two lines, with a return after the =, it is still harder to parse, as someone has to get to the end of the line to locate that =. After a def, the eye could be looking for an end statement. def header_convert(name = nil, &converter) = header_fields_converter.add_converter(name, &converter) # <<< hold on, is there a missing `end`? or is the indent wrong? Oh, no, the line ended with `=` def more_complex_method(...) # .... end I believe it is impossible to improve the readability of a one-line method as written currently. This new form can only make it harder to understand, not easier.
def header_convert(name = nil, &converter) header_fields_converter.add_converter(name, &converter) end def more_complex_method(...) # ... end If header_convert ever needs an extra line of code, there won't be a need to reformat the code either. For these reasons I am against this proposal, and my hope is that it is reverted and that we concentrate on more meaningful improvements. Updated by Eregon (Benoit Daloze) 10 months ago Noteworthy is that the current syntax in trunk is def name(*args) = expr (and not def: name), so there is no visual cue that this is an endless method definition except the =, which comes very late. I agree with marcandre (Marc-Andre Lafortune); I think it makes code just less readable and harder to maintain. Updated by marcandre (Marc-Andre Lafortune) 10 months ago Eregon (Benoit Daloze) wrote in #note-23: Noteworthy is that the current syntax in trunk is def name(*args) = expr (and not def: name), so there is no visual cue that this is an endless method definition except the =, which comes very late. Oh, my mistake. That syntax makes it even less readable! There are potentially = signs already in the method definition... I'm even more against it 😅 I edited my post. Updated by zverok (Victor Shepelev) 10 months ago To add an "opinion perspective"... I, for one, am, if not fascinated, at least intrigued by the new syntax.
My reasoning is as follows: - Ruby's most precious quality for me is "expressiveness at the level of the single 'phrase' (expression)", and it is not "code golf" expressiveness, but rather structuring the language's consistency around the idea of building phrases with all related senses packed in, while staying lucid about the intention (I believe that, as our "TIMTOWTDI" is the opposite of Python's "there should be one obvious way...", this quality is also the opposite of Python's "sparse is better than dense") - (I am fully aware that nowadays I am in a minority with this viewpoint: whenever similar discussions are raised, I see a huge difference between "it can be stated in one phrase" vs "it can not", while most of my correspondents find it negligible) - I believe that the difference between "how you write it when there is one statement" vs "...more than one statement" is intentional, and fruitful: "just add one more line to a 10-line method" is typically a small addition, but "just add one more line to a 1-line method" frequently makes one think: maybe that's not what you really need to do, maybe the data flow becomes unclear? One existing example: collection.map { |x| do_something_with(x) } # one line! cool! # ...what if I want to calculate several other values on the way?
I NEED to restructure it: collection.map do |x| log_processing(x) total += x do_something_with(x) end # ...which MIGHT imply that "I am doing something wrong", and maybe what would express the intent more clearly # is separating the calculation of total, and moving `log_processing` inside `do_something` - So, it always actually felt a bit too wordy to me, for trivial/short methods, to write them as def some_method(foo) foo.bar(baz).then { |result| result + 3 } end - ...alongside the "nicer formatting" of empty lines between methods, and probably the requirement to document every method, and whatnot, it means that extracting a trivial utility method feels like "making MORE code than necessary", so I'd really like to try how it would feel with def just_utility(foo) = foo.bar(baz).then { |result| result + 3 } def other_utility(foo) = "<#{self}(#{foo})>" def third_utility(foo) = log.print(nicely_formatted(foo)) - ...and I believe that "what if you need to add a second statement, you will reformat the code completely?" is a good thing, not bad: you'll think twice before adding to a "simple one-statement utility" method something that really, actually, maybe does not belong there. Updated by mame (Yusuke Endoh) 8 months ago In the previous dev-meeting, matz said that it should be prohibited to define a setter method with an endless definition: # prohibited def foo=(x) = @x = x There are two reasons: - This code is very confusing, and it is not supposed that a destructive operation is defined in this style. - We want to allow def foo=42 as def foo() = 42 in the future. It is difficult to implement in the short term, though. If def foo=(x) = @x = x is allowed once, it will become more difficult. I will create a PR soon. Updated by mame (Yusuke Endoh) 8 months ago Updated by pankajdoharey (Pankaj Doharey) 8 months ago Why do we even need def? Why not just do it like Haskell?
triple(x) = 3*x Since this is an assignment, if triple hasn't already been assigned it should create a function; otherwise it should raise a syntax error, a function redefinition error or something. Or perhaps a let keyword? let double x = x * 2 Updated by etienne (Étienne Barrié) 4 months ago I haven't seen it mentioned in the comments, so I just wanted to point out that the regular method-definition syntax doesn't require semicolons, and is very close to this experiment, given that the parentheses are mandatory: def fib(x) x < 2 ? x : fib(x-1) + fib(x-2) end # one-line style def value() @value end # one-line style, 3.0 def value() = @value # one-line style setter def value=(value) @value = value end # not supported in 3.0 def value=(value) = @value = value It doesn't remove the end, which is the purpose of this feature, sorry, but I like it because it keeps the consistency of having end, while not using more punctuation like ; or =. Edit: Also, when we use endless methods, we need parentheses for calls made in the method, whereas the regular syntax does not: # one-line style def value() process @value end # endless method needs parentheses def value() = process(@value)
https://bugs.ruby-lang.org/issues/16746?tab=notes
One of the coolest features in Avalon is the property subsystem. While it might seem difficult for some that Avalon has a property system built on top of the normal CLR properties, the power that this system gives you is incredible and is part of why a lot of scenarios in Avalon require less code than their counterparts in other systems. One cool thing that the property system in Avalon allows is attached properties. The simplest way to explain an attached property is that it allows the place where the property is defined and the place where it is stored to be completely different classes that know nothing about each other. The only restriction is that the property value can only be stored on DependencyObject objects (which happens to be the base class for most Avalon things). Why is this so cool? The best example is how DockPanel uses attached properties. When you add elements to a DockPanel, you need to tell the DockPanel on what side to Dock the elements. WinForms uses the Dock property on the Control class for this. But you then pay the tax of having the property on the Control, even though you are not always being docked. The property only makes sense to the docking code. In Avalon the DockPanel defines an attached property (DockPanel.DockProperty) that tells the DockPanel where to dock a child element. You can set the property on the child element from code: DockPanel.SetDock(element, Dock.Left); Or from XAML: <UIElement DockPanel. The DockPanel reads off the value and uses it during the layout. Because it is using attached properties, the UIElement and associated classes do not need to know about how DockPanel works, and there can be a multitude of properties used by various panels for layout without bloating UIElement with spurious properties. Defining your own attached properties is easy - I have made a quick sample to show how to do this. 
The idea behind the sample is to make a panel called RadialPanel that arranges children in a radial pattern, and has an attached property Angle that shows where to place the children. The first thing that we do is create a new class called RadialPanel (make sure it is public or the XAML cannot find it!). We then add an attached property to the class as such:

public static readonly DependencyProperty AngleProperty =
    DependencyProperty.RegisterAttached(
        "Angle",
        typeof(double),
        typeof(RadialPanel),
        new FrameworkPropertyMetadata(0.0, FrameworkPropertyMetadataOptions.AffectsParentArrange));

Here is some explanation about this blob of code (if you work with Avalon a while you end up writing stuff like this a lot): After we write this code, we write some accessor methods. We need to do this or Avalon will not be able to change the values from XAML. These are just static methods that wrap the Avalon GetValue/SetValue methods:

public static double GetAngle(DependencyObject obj)
{
    return (double)obj.GetValue(AngleProperty);
}

public static void SetAngle(DependencyObject obj, double value)
{
    obj.SetValue(AngleProperty, value);
}

Now we can implement the measure and arrange code. The measure code is fairly straightforward - we just decide how big each child will be and measure the children like that:

protected override Size MeasureOverride(Size availableSize)
{
    // find out the radius of the children (it is 1/3rd of the minor axis of the RadialPanel)
    double childRadius = Math.Min(availableSize.Width, availableSize.Height) / 3.0;

    // measure each of the children with this size
    foreach (UIElement element in this.InternalChildren)
    {
        element.Measure(new Size(childRadius, childRadius));
    }

    // tell our layout parent that we used all of the space up
    return availableSize;
}

And now we can write the arrange, which is simple as well.
Notice that we grab the value of the attached property to figure out where to put the element:

protected override Size ArrangeOverride(Size finalSize)
{
    // figure out the size for the children
    double childRadius = Math.Min(finalSize.Width, finalSize.Height) / 3.0;

    foreach (UIElement element in this.InternalChildren)
    {
        double angle = GetAngle(element);

        // figure out where the child goes in cartesian space. We invert the
        // Y value because screen space and normal mathematical cartesian space
        // are reversed in the vertical.
        Point childPoint = new Point(
            Math.Cos(angle * Math.PI / 180.0) * childRadius,
            -Math.Sin(angle * Math.PI / 180.0) * childRadius);

        // offset to center of the panel
        childPoint.X += finalSize.Width / 2;
        childPoint.Y += finalSize.Height / 2;

        // arrange so that the center of the child is at the point we calculated,
        // and the child is of the calculated radius.
        element.Arrange(new Rect(
            childPoint.X - childRadius / 2.0,
            childPoint.Y - childRadius / 2.0,
            childRadius, childRadius));
    }

    // we used up all of the space
    return finalSize;
}

Now that the panel is written, we can test it out with some simple XAML:

<Window x:Class="RadialPanelDemo.Window1" xmlns="" xmlns:x="" xmlns: <local:RadialPanel x: <Button Content="Button One" local:RadialPanel. <Button Content="Button Two" local:RadialPanel. <Button Content="Button Three" local:RadialPanel. <Button Content="Button Four" local:RadialPanel. </local:RadialPanel> </Window>

We give some transparency to the buttons so that we can see them when they overlap each other. One last bit of fun is showing changes to the property value and showing how the layout updates automatically.
In the InitializeComponent of the window, we add an event handler to the buttons:

// whenever one of the buttons is clicked, handle it in the same place
foreach (Button button in radialPanel.Children)
{
    button.Click += new RoutedEventHandler(OnButtonClick);
}

And write the handler:

void OnButtonClick(object sender, RoutedEventArgs e)
{
    // find out who was clicked
    Button button = (Button)e.OriginalSource;

    // increment angle by 15 degrees
    RadialPanel.SetAngle(button, RadialPanel.GetAngle(button) + 15.0);
}

Now when we click the buttons, they each move 15 degrees around the circle! This is just the beginning of what attached properties can do. I will try to blog about more advanced usage soon. Please let me know if there is a specific Avalon property system feature that you want an article about. Update: I uploaded the code for this example. The color of this page makes it really hard to read. Suggestion - change the blue and green text to a lighter color. Or use a lighter background color with darker text. [BenCon - I have played with this but have been slack. I will try to clean it up when I recover from the conference] [BenCon - I have changed to a better (I hope) theme now. Sorry it took so long, and thanks for the feedback] Your blogs are helping me a lot to get insight into WPF. I will be very glad if you can write an entry about dependency property inheritance: how to do it, and specifically how to inherit a property value if you are not inside the logical tree, but in the visual tree. For example, if I set an inherited DP at the window level, I would like to know the value on a ListView's GridView header element. Thank you, Ido.
http://blogs.msdn.com/bencon/archive/2006/07/24/677520.aspx
i2c smbus not working on edison IoT_srinivas Jul 13, 2016 10:27 AM Hi, HTS221/Python at master · ControlEverythingCommunity/HTS221 · GitHub I was using the Python code above for the HTS221. Error: python hts221.py Traceback (most recent call last): File "hts221.py", line 2, in <module> import smbus ImportError: No module named smbus Please provide the steps to resolve the above issue. 1. Re: i2c smbus not working on edison Jul 15, 2016 10:03 AM (in response to IoT_srinivas) Hi Srinivas, I would like to know some more about your project so we can help you better. Which image are you using? Also, did you attach the log for some reason? I'm not sure why you attached it. If you could explain, I would really appreciate it. Now, regarding the error, first try to install i2c-tools from the AlexT repository using opkg. Then try to install smbus with these instructions: pip install cffi pip install smbus-cffi You can check some more discussion on this following this link. Regards, -Pablo 2. Re: i2c smbus not working on edison IoT_srinivas Jul 15, 2016 11:57 AM (in response to IoT_srinivas) Hi Pablo, I was unable to install cffi and smbus-cffi; I was getting the error below: root@edison:~# pip install cffi -sh: pip: command not found root@edison:~# pip install smbus-cffi -sh: pip: command not found Please do the needful. 3. Re: i2c smbus not working on edison Jul 18, 2016 3:40 PM (in response to IoT_srinivas) Hi Srinivas, This is due to pip not being installed on your Edison. You should avoid the error if you install it. Here are the steps to do so: Regards, -Pablo 4. Re: i2c smbus not working on edison Jul 22, 2016 6:07 PM (in response to IoT_srinivas) Hi Srinivas, Any updates on this case? Regards, -Pablo
https://communities.intel.com/thread/104417
Hello, The field does not accept the CrLf characters (meaning that the first time, they exist within the string entered; then the text-changed method is called a second time, and in the debugger I found that the text had been "cleaned": the CrLf no longer appears within it). While typing in it, word wrapping occurs and moves the cursor to the next line, but I am not able to make a new line at any place in the text. I have a code example that subscribes a Control_TextChanged method to the TextChanged event of the field. Here, to show the problem, I set the value of the text to a string that holds the \r\n (CrLf) characters; this causes an endless calling loop as the text-changed event is re-raised to clear the CrLf characters from the text that was set. Which I do not understand, because I did not request to filter/ignore the CrLf characters. Here is the code:

public class CustomEditorRenderer : EditorRenderer
{
    protected override void OnElementChanged(ElementChangedEventArgs<Editor> e)
    {
        base.OnElementChanged(e);
        if (Control == null || e.NewElement == null) return;
        Control.PlaceholderText = ((MyEditor)e.NewElement).Placeholder;
        Control.AcceptsReturn = true;
        Control.TextWrapping = TextWrapping.Wrap;
        Control.MinHeight = 150;
        Control.MaxHeight = 150;
        Control.TextChanged += Control_TextChanged;
    }

    // The subscribed method, to demonstrate the endless calling loop when
    // trying to set text containing the \r\n characters....
    private void Control_TextChanged(object sender, Windows.UI.Xaml.Controls.TextChangedEventArgs e)
    {
        ((TextBox)sender).Text = "Here is the end of the line.\r\nHere start a New line.";
    }
}

Note: The binding declaration at the ViewCell: This binding works as expected...

_editor.SetBinding(Editor.TextProperty, ffts => ffts.Value, BindingMode.OneWayToSource);

But the needed binding is TwoWay, because the field could have initial values we are interested in showing. Answers @Bor - what value is it replacing it with? Is it stripping out those values to add in an environment-specific newline character? Does it show correctly on screen, and does it show the newline when you get the string at a later time? There are no new lines; the text appears in one long line, and when it reaches the width it wraps to the next line. The expectation is that, when using two-way binding, pressing the Enter button makes a new text line appear to continue the writing.... Like here..... and here..... :-) Thanks
https://forums.xamarin.com/discussion/comment/206560
5.g. Pololu3pi - Sensor Library for the 3pi Robot Overview This library allows you to easily interface with the five infrared reflectance sensors on the 3pi robot. Note that in order to use this library, you must also include PololuQTRSensors.h in your sketch. You should have something like the following at the top of your sketch: #include <Pololu3pi.h> // gives access to sensor interface functions #include <PololuQTRSensors.h> // used by Pololu3pi.h #include <OrangutanMotors.h> // gives access to motor control functions #include <OrangutanBuzzer.h> // gives access to buzzer control functions Unlike the other Orangutan libraries, you must explicitly call the init() method to initialize your Pololu3pi object before using it. All of the methods in this class are static; you should never have more than one instance of a Pololu3pi object in your sketch. Pololu3pi Methods Complete documentation of this library’s methods can be found in Section 19 of the Pololu AVR Library Command Reference. Usage Examples This library comes with three example sketches that you can load by going to File > Examples > Pololu3pi. There is a simple line-following example, a more advanced PID-based line-following example, and a basic maze-solving example. For more information on creating line and maze courses, as well as a detailed explanation of the C versions of the demo programs, please see sections 6 and 7 of the 3pi User’s Guide.
https://www.pololu.com/docs/0J17/5.g
Scenario: Download Script You are working as a C# or .NET developer. You need to read the file names from a folder and then remove the extension from the file names. The file extensions can be of different lengths; for example, .txt is only three characters, while .xlsx is a four-character extension, so we can't simply remove the last 3 or 4 characters from a file name to get only the file name. As the extension is added after a dot (.), if we can find the position of the dot and remove the string from there, we will be left with the file name only. There is a special scenario in which the file name can have a dot (.) as part of the name itself. That means that to find the extension's dot, we need to find the last dot in the file name. Here are some sample files I have in the C:\Source folder. I wrote the C# console application below; it reads the file names from a folder and uses the Substring method with the LastIndexOf method to remove the extension from each file name.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
//add namespace
using System.IO;

namespace TechBrothersIT.com_CSharp_Tutorial
{
    class Program
    {
        static void Main(string[] args)
        {
            //Source folder path for files
            string Folderpath = @"C:\Source\";

            //Reading file names one by one
            string[] fileEntries = Directory.GetFiles(Folderpath, "*.*");
            foreach (string FileName in fileEntries)
            {
                //declare a variable and get only the file name in it from the FileName variable
                string FileNameOnly = FileName.Substring(0, FileName.LastIndexOf("."));

                //Print the values of the FileName and FileNameOnly variables
                Console.WriteLine("FileName variable Value :" + FileName);
                Console.WriteLine("FileNameOnly variable Value :" + FileNameOnly);
                Console.ReadLine();
            }
        }
    }
}

Here is the output before and after removing the extension from the file names in C#.
http://www.techbrothersit.com/2016/04/c-how-to-remove-extension-from-file.html
Library: Input/output A predefined stream that controls output to an unbuffered stream buffer associated with the object stderr declared in <cstdio>.

#include <iostream>
namespace std {
  extern wostream wcerr;
}

The object wcerr controls output to an unbuffered stream buffer associated with the object stderr declared in <cstdio>. By default the standard C and C++ streams are synchronized, but you can improve performance by using the ios_base member function sync_with_stdio to desynchronize them. wcerr uses the locale codecvt facet to convert the wide characters it receives to the narrow characters it outputs to stderr. The formatting is done through member functions or manipulators. See cout, wcout, or basic_ostream for details.

//
// wcerr example
//
#include <iostream>
#include <fstream>

int main()
{
    using namespace std;

    // open the file "file_name.txt" for reading
    wifstream in("file_name.txt");

    // output the whole file to stdout
    if (in)
        wcout << in.rdbuf();
    else
        // if the wifstream object is in a bad state,
        // output an error message to stderr
        wcerr << L"Error while opening the file" << endl;
}

basic_ostream, basic_iostream, basic_filebuf, cout, cin, cerr, clog, wcin, wcout, wclog, ios_base, basic_ios ISO/IEC 14882:1998 -- International Standard for Information Systems -- Programming Language C++, Section 27.3.2
http://stdcxx.apache.org/doc/stdlibref/wcerr.html
Oh, btw admin! Sorry about my earlier thread, I promise not to do something like that ever again :) Anyway, I was having trouble with my project and it's about multiplication with loops. I started with a few codes and I'm really frustrated with what I'm doing now. Please let me fix this stuff, I don't really know about looping statements @---@. Thanks! Here's the code:

import java.io.*;

public class Multiplied {
    public static void main(String[] args) {
        BufferedReader buffer = new BufferedReader(new InputStreamReader(System.in));
        int i;
        int j;
        try {
            {
                System.out.println("Enter a number: ");
                i = integer.parseInt(num.readLine());
            }
            for (int i = 1; i <= maxNo; ++i) {
                int sum = 0;
                for (int j = 1; j <= maxNo; ++j) {
                    sum += i;
                    System.out.print(sum + " ");
                }
                System.out.println();
            }
        catch (IOException e) {
            System.out.println("ERROR");
        }
    }

Thanks again!
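For reference, here is one way the loops can be made to compile and print a multiplication table — a hedged sketch of what the post seems to be aiming at (reading a number from the user, then printing an N×N table), not the thread's accepted answer. The posted code's immediate problems are `integer.parseInt` (should be `Integer`), reading from an undeclared `num` instead of `buffer`, the undeclared `maxNo`, the shadowed `i`/`j` declarations, and the mismatched braces around try/catch:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Multiplied {

    // Build one row of the table (i*1 i*2 ... i*maxNo) so the logic is testable on its own.
    static String row(int i, int maxNo) {
        StringBuilder sb = new StringBuilder();
        for (int j = 1; j <= maxNo; j++) {
            sb.append(i * j);
            if (j < maxNo) sb.append(' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        BufferedReader buffer = new BufferedReader(new InputStreamReader(System.in));
        try {
            System.out.println("Enter a number: ");
            String line = buffer.readLine();
            int maxNo = (line == null) ? 0 : Integer.parseInt(line.trim());
            for (int i = 1; i <= maxNo; i++) {
                System.out.println(row(i, maxNo));
            }
        } catch (IOException | NumberFormatException e) {
            System.out.println("ERROR");
        }
    }
}
```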
https://www.daniweb.com/programming/software-development/threads/386019/java-multiplication-table-help
After years of developing software, many programmers realize that there are certain heuristics that, if applied, result in more concise and solid code. The purpose of this article is to catalog some of these good practices, which many experienced programmers follow, even unconsciously, and which may help those entering the field.

Applying renormalization to code

In physics, renormalization is a technique to simplify calculations by eliminating infinities from the equations. It is a way of taming the complexity of the problem domain by reducing it to a narrower scope. The same idea can be applied in our day-to-day programming, making our code safer.

Enumeration

Using int, String, or other primitive-typed constants to represent a small, finite set of states or values, instead of an enumeration, is a mistake many programmers make. Let's look at the example in Listing 1.

Listing 1. Code without Enum

public static final String APPROVED = "Approved";
public static final String REPROVED = "Reproved";
public static final String PENDING = "Pending";

public void doSomething(String status) {
    // ...
}

Let's assume that the doSomething method should accept only one of three states — APPROVED, REPROVED, and PENDING — and that any other input should cause an exception. What's wrong with this code? Since the argument is of type String, any String can be passed, not just the set of inputs the developer of the method intended. There is no formal restriction to ensure the correct use of the method, so we have infinitely many input possibilities. To eliminate this "infinity", we just apply a "renormalization": use an enumeration instead of String constants, as shown in Listing 2.

Listing 2. Code with Enum

enum Status { APPROVED, REPROVED, PENDING }

public void doSomething(Status status) {
    //...
}

Despite being a simple example, it has important implications:

- We restrict the method's input to only three possible states, instead of "infinitely" many;
- The compiler will work in our favor: compile-time type checking is one of the best techniques to prevent programmers from misusing a piece of code. Whenever possible, we should use language features to make the compiler enforce what the code promises.

Tiny Types and String-Oriented Programming

We will see in Listing 3 a more interesting case for applying renormalization and compiler-checked safety.

Listing 3. Class Person

public class Person {
    private String name;
    private String surname;
    private String rg;

    public Person(String name) {
        this.name = name;
    }

    public Person(String name, String surname) {
        this.name = name;
        this.surname = surname;
    }

    public Person(String x, String y, String z) {
        this.name = x;
        this.surname = y;
        this.rg = z;
    }
}

This is the typical String-oriented data model. Among its many problems, the compiler will not save you from things like:

new Person("De Niro");          // Surname instead of the name
new Person("99999999-99");      // RG rather than the name
new Person("Fowler", "Martin"); // Inversion

Any unsuspecting programmer could make the mistake of inverting values, passing a semantically wrong value, and so on. But how can we make the compiler prevent these errors? See Listing 4.
Listing 4. Tiny Types

public class Name {
    private String name;
    public Name(String text) { this.name = text; }
}

public class Surname {
    private String surname;
    public Surname(String text) { this.surname = text; }
}

public class RG {
    private String rg;
    public RG(String text) { this.rg = text; }
}

public class Person {
    private Name name;
    private Surname surname;
    private RG rg;

    public Person(Name x) {
        this.name = x;
    }

    public Person(Name x, Surname y) {
        this.name = x;
        this.surname = y;
    }

    public Person(Name x, Surname y, RG z) {
        this.name = x;
        this.surname = y;
        this.rg = z;
    }
}

By using more robust types (instead of Strings) in your data model, as shown in Listing 4, the construction mistakes in the Person example would be caught at compile time, providing greater safety in the use of this class. Yes, more code must be written to support this programming model (it increases the verbosity of the code), and there is extra work to map these types in the persistence layer or with certain frameworks. However, the gain in compile-time type safety, and consequently the smaller chance of developer error in production, are aspects to consider. Another benefit is that the code becomes more descriptive and intuitive. Moreover, we could make these classes richer. For example, if there were a CPF class, we could include a CPF validation method in it. The Name class could validate entries that include numbers, trim spaces at the beginning and end, impose a maximum size, and so on.

Builder pattern

The Builder pattern is a great tool when we want certain objects to have their most important attributes initialized, thus avoiding the famous NullPointerException errors, or SQL errors when the corresponding table column is required (although this validation can also be made in the persistence layer). See the example in Listing 5.

Listing 5.
Required and optional fields

public class Person {
    private String name;    // Must not be null | required in the db
    private String surname; // Must not be null | required in the db
    private String rg;      // Optional
    private String cpf;     // Optional
    ...
}

But how can we make the compiler help us meet these premises, without relying on the goodwill of the developer? Let's see Listing 6.

Listing 6. Using the Builder pattern to enforce the premises of the Person class

// The Person class becomes immutable
public final class Person {
    private final String name;
    private final String surname;
    private final String rg;
    private final String cpf;

    public static class Builder {
        // Required parameters
        private final String name;
        private final String surname;

        // Optional parameters - initialized to default values
        private String rg = "";  // null safe
        private String cpf = ""; // null safe

        public Builder(String name, String surname) {
            this.name = name;
            this.surname = surname;
        }

        public Builder rg(String val) { rg = val; return this; }

        public Builder cpf(String val) { cpf = val; return this; }

        public Person build() { return new Person(this); }
    }

    private Person(Builder builder) {
        name = builder.name;
        surname = builder.surname;
        rg = builder.rg;
        cpf = builder.cpf;
    }

    public static void main(String... args) {
        Person person1 = new Person.Builder("Robert", "De Niro").build();
        // + cpf
        Person person2 = new Person.Builder("Robert", "De Niro").cpf("12345").build();
        // + cpf and rg
        Person person3 = new Person.Builder("Robert", "De Niro").cpf("12345").rg("00000").build();
    }
}

The Builder pattern, in addition to helping create classes with many attributes, can be used to impose usage restrictions on developers, ensuring the correct use of certain classes in the application. Notice that, regardless of the technique discussed (Enumeration, Tiny Types, or Builder), the real goal is to impose restrictions that are checked by the compiler, preventing developers from breaking premises that would otherwise remain vague and easily forgotten.
Speaking of restrictions, we need to mention immutability. In the previous example, immutability is fundamental to, along with the Builder pattern, guarantee the premises of the Person class.

Optional - a Java 8 feature

Optional is a class introduced in Java 8 and is based on the Option/Some/None pattern of the Scala language. This article will not address the use of Optional with lambdas and streams in Java 8; since we are talking about renormalization and compile-time safety, the Optional class will be examined in that light. Anyone who programs in Java has come across the famous NullPointerException. Even the inventor of the null reference, Tony Hoare, has called it his "billion-dollar mistake." Consider the following code snippet:

public String getCarName() { // .... }

Seeing this method declaration, we can infer that the return type is always a String, right? Think again... Even in this simple case, two kinds of return are possible: a String or null. Yes, we always forget that any method in Java (which does not return a primitive type) returns an object or null. And because of this simple fact, we can never guarantee that a method call is null-safe without thoroughly analyzing its implementation or trusting its documentation (in the case of a third-party API).

Listing 7. Possible null returns

List<Person> persons = repository.list();
Address address = persons.get(0).getAddress();
System.out.println(address.getName());
....

Yes, we have to analyze the code carefully and see whether a given method returns null or not, as shown in Listing 7. If we are not sure whether a method can actually return null, we must always use if (obj != null) to validate every return. If a method can always return two kinds of result (null or a valid object), why not formalize this concept? For this we can use Optional.
Independently of Optional's role in reducing verbosity and enabling a more declarative, functional, and clean style with flatMap/map/filter, the mere existence of the concept already brings serious benefits.

Listing 8. Using Optional

import java.util.Optional;
import java.util.Random;

public class Example {

    public static String getString1() {
        return "OK";
    }

    public static Optional<String> getString2() {
        String retorno;
        if (new Random().nextInt(2) == 0) {
            retorno = null;
        } else {
            retorno = "OK";
        }
        return Optional.ofNullable(retorno);
    }

    public static void main(String... args) {
        for (int i = 0; i < 10; i++) {
            Optional<String> option = getString2();
            if (option.isPresent()) {
                System.out.println(option.get());
            }
        }
    }
}

What can we say about the two getString methods in Listing 8?

- We can say with certainty that the first will always return a String;
- The second may produce a String or null, so its return type is an Optional.

With Optional we can now clearly state when a method is null-safe (getString1()) and when it can return null (getString2()). Notice how powerful the idiomatic aspect is: if a method can return null, we return an Optional<T> instead of T; if we know the method never returns null, we can return T directly. This convention helps the programmer, who no longer needs to read the method's code to see whether it is null-safe — it is visible from the return type or the input arguments alone (and here an IDE is a great help). In the case of getString2(), since the caller knows that it returns Optional<T>, they will probably not forget to test with the isPresent() method whether the returned value is a T object or null. If a valid object is present, isPresent() == true, and we can use the get method to obtain it. Calling get without testing with isPresent() first is dangerous: if the Optional contains null, we get an exception. An alternative is the orElse method, shown in Listing 9.

Listing 9.
Using orElse

public static void main(String... args) {
    for (int i = 0; i < 10; i++) {
        Optional<String> option = getString2();
        System.out.println(option.orElse("NULL"));
    }
}

If the Optional (a container) holds a valid object (in this case a String), it is returned and printed. If it holds null, the string "NULL" is returned and printed instead. The Optional class favors a more defensive style of programming, as it explicitly warns the developer about a possible null return from a method (or that an argument passed to a method may be null), helping to minimize the presence of NullPointerException. Obviously, as Optional was only introduced in Java 8, most classes of the Java API do not make use of the concept, which somewhat limits its power (unlike Scala, which has ample support for Option in its library). See its source code in Listing 10.

Listing 10. Optional class source code

import java.util.NoSuchElementException;
import java.util.Objects;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;

public final class Optional<T> {

    private static final Optional<?> EMPTY = new Optional<>();

    private final T value;

    private Optional() {
        this.value = null;
    }

    public static <T> Optional<T> empty() {
        @SuppressWarnings("unchecked")
        Optional<T> t = (Optional<T>) EMPTY;
        return t;
    }

    private Optional(T value) {
        this.value = Objects.requireNonNull(value);
    }

    public static <T> Optional<T> of(T value) {
        return new Optional<>(value);
    }

    public static <T> Optional<T> ofNullable(T value) {
        return value == null ? empty() : of(value);
    }

    public T get() {
        if (value == null) {
            throw new NoSuchElementException("No value present");
        }
        return value;
    }

    public boolean isPresent() {
        return value != null;
    }

    public void ifPresent(Consumer<? super T> consumer) {
        if (value != null)
            consumer.accept(value);
    }

    public Optional<T> filter(Predicate<? super T> predicate) {
        Objects.requireNonNull(predicate);
        if (!isPresent())
            return this;
        else
            return predicate.test(value) ? this : empty();
    }

    public <U> Optional<U> map(Function<? super T, ? extends U> mapper) {
        Objects.requireNonNull(mapper);
        if (!isPresent())
            return empty();
        else {
            return Optional.ofNullable(mapper.apply(value));
        }
    }

    public <U> Optional<U> flatMap(Function<? super T, Optional<U>> mapper) {
        Objects.requireNonNull(mapper);
        if (!isPresent())
            return empty();
        else {
            return Objects.requireNonNull(mapper.apply(value));
        }
    }

    public T orElse(T other) {
        return value != null ? value : other;
    }

    public T orElseGet(Supplier<? extends T> other) {
        return value != null ? value : other.get();
    }

    public <X extends Throwable> T orElseThrow(Supplier<? extends X> exceptionSupplier) throws X {
        if (value != null) {
            return value;
        } else {
            throw exceptionSupplier.get();
        }
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof Optional)) {
            return false;
        }
        Optional<?> other = (Optional<?>) obj;
        return Objects.equals(value, other.value);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(value);
    }

    @Override
    public String toString() {
        return value != null
            ? String.format("Optional[%s]", value)
            : "Optional.empty";
    }
}

Criteria API

In both JPA 2.0 and Hibernate, we can write queries as Strings (HQL/JPQL) or using the Criteria API. Once again we face the choice between String-oriented and strongly typed programming. The biggest advantage of the Criteria API is compile-time error checking (whether we are passing the correct class, without spelling errors, and so on); we are also completely safe with respect to SQL injection. The biggest drawback is the additional complexity of learning the Criteria API, at least initially, since HQL/JPQL is more intuitive for developers (who probably already deal with SQL on relational databases). Another disadvantage is verbosity, which increases considerably with the Criteria API compared to HQL/JPQL. Certainly there are other techniques, not mentioned in this article, that "renormalize" code, ensure compile-time safety, or establish an idiomatic style that states the programmer's intentions more clearly. "Renormalizing" a given piece of code means imposing fewer decisions the programmer must take, thereby reducing the possibility of errors.
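The same compile-time-safety idea extends beyond persistence. As a self-contained illustration (the TypedKey/TypedRegistry names are my own, not from the article), generics can "renormalize" a string-keyed map so that each key can only ever yield its own type, with the mismatches caught by the compiler instead of at runtime:

```java
import java.util.HashMap;
import java.util.Map;

// A key that carries its value's type, so the compiler checks every put/get pair.
final class TypedKey<T> {
    final String name;
    final Class<T> type;

    TypedKey(String name, Class<T> type) {
        this.name = name;
        this.type = type;
    }
}

public class TypedRegistry {
    private final Map<String, Object> values = new HashMap<>();

    public <T> void put(TypedKey<T> key, T value) {
        values.put(key.name, value);
    }

    public <T> T get(TypedKey<T> key) {
        // The cast is checked against the Class token stored in the key.
        return key.type.cast(values.get(key.name));
    }

    public static void main(String[] args) {
        TypedKey<String> NAME = new TypedKey<>("name", String.class);
        TypedKey<Integer> AGE = new TypedKey<>("age", Integer.class);

        TypedRegistry registry = new TypedRegistry();
        registry.put(NAME, "Robert");
        registry.put(AGE, 74);

        String name = registry.get(NAME); // no cast needed at the call site
        int age = registry.get(AGE);
        System.out.println(name + " " + age);
        // registry.put(AGE, "74"); // would not compile: a String is not an Integer
    }
}
```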
Frameworks almost always meet this goal, because they impose a number of restrictions and steps for the programmer to follow, standardizing collective knowledge, creating a common language, and increasing productivity as a whole. It would be chaos if each developer came up with his or her own vision of how to implement persistence, MVC, and so on. Design patterns also favor useful restrictions; good examples are Template Method, Strategy, State, Command, and the Factory patterns. And of course, we cannot forget Generics. Despite being considered a complex feature, especially by beginners, mastering its use is imperative if you want to make the compiler "your good friend". Hugs and see you soon.

Links

Builder pattern
Optional Oracle
Optional II
http://mrbool.com/restrictive-programming-best-practices-in-java/34398
My 2D game requires a sound to have the same volume along the y axis, but a different volume depending on the distance to the camera on the x axis. I can't figure out how to code the last part of the script though, can you help me? The volume should stay at 0 until the distance is equal to or smaller than MaxDist, at which point the volume should grow linearly until the distance is equal to or smaller than MinDist.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class spatialFader : MonoBehaviour {
    private float pos;
    private float pos_other;
    private float dist;
    private AudioSource source;

    public float MinDist;
    public float MaxDist;
    private float sqrMinDist;
    private float sqrMaxDist;

    void Start () {
        source = GetComponent<AudioSource> ();
        sqrMinDist = MinDist * MinDist;
        sqrMaxDist = MaxDist * MaxDist;
    }

    void Update () {
        pos_other = GameObject.Find("Main Camera").transform.position.x;
        pos = transform.position.x;
        dist = pos - pos_other;
        print("Distance to other: " + dist);

        // this is the part where I don't know what to do anymore:
        var t = source.volume = Mathf.Lerp(0, 1, t);
    }
}

Answer by TobiKnitt · Feb 15, 2018 at 05:50 PM

I found a solution! Not what I originally had in mind, but for what I'm doing it works fine. At a certain distance it just fades in or out over time.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Audio;

public class SoundRolloff : MonoBehaviour {
    private AudioSource source;

    public float spatialBlend;
    public float minDistance;
    public float maxDistance;
    public float fadeDistance;
    public float volume;

    private float pos;
    private float pos_other;
    private float dist;
    private float currentVolume;

    // Use this for initialization
    void Start () {
        source = GetComponent<AudioSource> ();
        source.volume = 0;
        currentVolume = source.volume;
    }

    // Update is called once per frame
    void Update () {
        source = GetComponent<AudioSource> ();
        source.spatialBlend = spatialBlend;
        source.minDistance = minDistance;
        source.maxDistance = maxDistance;
        source.dopplerLevel = 0;
        source.spread = 0;

        pos_other = GameObject.Find("Main Camera").transform.position.x;
        pos = transform.position.x;
        dist = pos - pos_other;
        if (dist < 0) {
            dist = -dist * 2;
        }
        if (dist > fadeDistance) {
            source.volume -= volume * Time.deltaTime;
        }
        if ((dist < fadeDistance) && source.volume < volume) {
            source.volume += volume * Time.deltaTime;
        }
    }
}
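What the original question asked for — a volume that ramps linearly between two distances — is just a clamped inverse lerp. As a language-neutral sketch (written in Java here, with names of my own choosing, independent of Unity's API):

```java
public class LinearRolloff {

    // Map a distance to a volume in [0, 1]:
    // 0 beyond maxDist, 1 inside minDist, linear in between.
    static double volumeFor(double dist, double minDist, double maxDist) {
        double d = Math.abs(dist);
        if (d >= maxDist) return 0.0;
        if (d <= minDist) return 1.0;
        // Inverse lerp: fraction of the way from maxDist back to minDist.
        return (maxDist - d) / (maxDist - minDist);
    }

    public static void main(String[] args) {
        System.out.println(volumeFor(12.0, 2.0, 10.0)); // 0.0 (beyond maxDist)
        System.out.println(volumeFor(1.0, 2.0, 10.0));  // 1.0 (inside minDist)
        System.out.println(volumeFor(6.0, 2.0, 10.0));  // 0.5 (halfway between)
    }
}
```

In the Unity script above, the equivalent one-liner would assign this value to source.volume each frame instead of fading over time.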
https://answers.unity.com/questions/1468407/audio-rolloff-aling-x-axis.html
Question 1

Home Guardian has recently completed a $200,000, two-year study on its new pest control device. It can go into production for an initial investment in equipment of $5 million. The equipment will be depreciated straight-line over a useful life of 5 years to a value of zero. The fully depreciated equipment is expected to sell for $1,200,000 at the end of its useful life. The project also requires an investment in land valued at $300,000, which is expected to have a realisable value of $500,000 at the end of the project. An investment of $400,000 in current assets will be recovered at the termination of the project. The marketing department has estimated that 200,000 units of the new device could be sold annually over the next five years at a price of $10 each. Fixed costs of $500,000 per annum will be incurred. The firm is an ongoing profitable business and pays taxes at a 30% rate in the year of income. All capital gains will also be taxed at a rate of 30%. The company uses a 12% discount rate on the new project. Using the NPV approach, advise the firm whether the project should be undertaken.

Question 2

HRE Mining Limited (HRE) is considering a major gold exploration project in South Sudan. Costs of financing have been declining recently, causing the finance department to consider sourcing capital through debt and equity issues. The company's bonds will mature in six years with a total face value of $100 million, paying a half-yearly coupon at a rate of 10% per annum. The yield on the bonds is 15% per annum. The market value of the company's preference share is $4.75 per unit, while the ordinary share is currently worth $1.85 per unit. The preference share pays a dividend of $0.4 per share. The beta coefficient for the ordinary share is 1.4. No issue costs will be incurred by the company. The market risk premium is estimated to be 12% per annum and the risk-free rate is 4% per annum.
The company is subject to a 30% corporate tax rate and intends to issue 200,000 preference shares and 5,000,000 ordinary shares. HRE's current balance sheet shows the following information for bonds and shares ($ million):

Preference shares: 3
Ordinary shares: 15
Bonds: 100

a. Outline the necessary steps required to estimate the company's weighted average cost of capital.
b. Calculate the after-tax cost of each of the company's current financing sources.
c. Using the information provided, calculate the market values of the financing sources for HRE.
d. Using the information from b) and c), calculate HRE's after-tax weighted average cost of capital.
e. The company's finance department has confirmed that the proposed project will generate an IRR of 15% per year. Discuss whether or not the project should be undertaken.

Question 3

Mid-Western Mining Ltd is considering short-term financing for its working capital requirement. You are invited to provide a discussion of the three key factors that the company should consider in selecting different sources of short-term financing. Briefly discuss these factors and illustrate with an appropriate example where possible.

Q1. Calculate the NPV of the project and provide a critical analysis of the company's future with the proposed construction of a new plant.

Applies correct principles and calculations, substantiated with workings or diagrams, in order to correctly calculate the NPV. There are no errors in calculations. Provides a precise critical analysis of the company's proposed future.
Applies correct principles and calculations, substantiated with workings or diagrams, in order to correctly calculate the NPV. Workings may contain a few minor errors. Provides a detailed critical analysis of the company's proposed future.
Applies correct principles and calculations, substantiated with workings or diagrams, in order to correctly calculate the NPV. Workings contain some minor errors. Provides a critical analysis of the company's proposed future.
May contain limited detail and reference to examples from the case.
Applies understanding of relevant principles, correctly calculates the NPV. Workings contain some major errors. Attempts to provide a critical analysis of the company's proposed future.
No understanding of relevant principles, incorrectly calculates the NPV. Workings contain major errors. Fails to provide a critical analysis of the company's proposed future.

Q2. Complete the calculations as required. Discuss your recommendations for the potential project and support them with relevant calculations.

Applies correct principles and calculations, substantiated with workings or diagrams, in order to arrive at the right answer. There are no errors in calculations.
Applies correct principles and calculations, substantiated with workings or diagrams, in order to arrive at the right answer; shows workings but contains a few minor errors.
Applies correct principles and calculations, substantiated with workings or diagrams, in order to arrive at the right answer; shows workings but contains some minor errors.
Applies understanding of relevant principles; shows workings but contains some major errors.
No understanding of relevant principles, no workings, and contains major errors.

Q3. Discuss the factors relevant to the given scenario and illustrate with an appropriate example where possible.

Principles are applied in the appropriate manner to arrive at the correct answer. The use of relevant principles and examples shows creativity and imagination.
Discussion reflects detailed understanding of relevant principles. Uses an example to illustrate the discussion.
Discussion reflects good understanding of relevant principles. Makes use of an example to attempt to illustrate, but its relevance is unclear or inaccurate.
Discussion reflects basic understanding of relevant principles. Limited or no use of a relevant example.
Discussion reflects no understanding of relevant principles. No use of a relevant example.

Answer to Question 1
Figure 1: (Image showing capital budgeting calculations) Source: (Created by author)

As per the case study provided in the question, Home Guardian has completed a research study on a new pest control device which the company was developing, and the company can finally put the product into production. As shown in Figure 1, the initial investment required for the production of the new pest control device amounts to $5 million. It is also provided in the question that the equipment is to be depreciated over a useful life of 5 years until its value becomes zero. The tax rate considered for the analysis is 30%. As per the calculation shown in the table above, the management has applied NPV analysis for the purpose of reviewing whether the production of the new product the business wants to undertake is favorable or not. NPV analysis is conducted when the business wants to identify the future discounted cash inflows which can be earned from the project (Ognjenovic et al., 2016). The discount rate and the tax rate considered for computing the NPV of the production option are 12% and 30% respectively. The analysis is for a period of five years, at the end of which the company will receive the salvage values of the equipment and land, which amount to $1,200,000 and $500,000 respectively. The company will also be able to recover a portion of the current assets which were employed in the production process. As shown in Figure 1, the NPV of the project comes to about $119,725, which shows that the company can expect profit in future and steady cash inflows as well; therefore the company should invest in the project.

Answer to Question 2

Requirement (a)

As per the case study provided in the question, HRE Mining Limited is considering a major gold exploration in Sudan.
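The discounting arithmetic behind an NPV figure like the one above can be sketched generically. The code below illustrates only the mechanics — the cash-flow numbers are made up for the example and are not the case study's actual flows:

```java
public class NpvSketch {

    // NPV = -initialOutlay + sum over t of cashFlows[t] / (1 + rate)^(t+1)
    static double npv(double rate, double initialOutlay, double[] cashFlows) {
        double value = -initialOutlay;
        for (int t = 0; t < cashFlows.length; t++) {
            value += cashFlows[t] / Math.pow(1 + rate, t + 1);
        }
        return value;
    }

    public static void main(String[] args) {
        // Illustrative only: $1,000 outlay, three $400 inflows, 12% discount rate.
        double[] flows = {400, 400, 400};
        System.out.printf("NPV = %.2f%n", npv(0.12, 1000, flows));
        // A positive NPV argues for accepting the project; a negative one, for rejecting it.
    }
}
```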
The cost of financing has declined, and the management of the company is considering sourcing finance through equity shares or through debt capital. The company wants to analyse the weighted average cost of capital of the business once it uses both equity and debt financing. The weighted average cost of capital refers to the average rate of return the company is expected to pay to all of its security holders to finance its assets. It considers the cost of capital of every source of finance and is the weighted average of these costs (Frank & Shen, 2016). In this case, the weighted average cost of capital is computed by first computing the cost of capital associated with each source of finance (Hann, Ogneva & Ozbas, 2013). The computation of the WACC of the company is shown below:

Figure 2: (Image showing weighted average cost of capital)

As shown in Figure 2, the company uses three types of capital: debt capital, equity capital and preference share capital. The debt capital is obtained from bond issues by the business, which have a yield of 15% per annum; the cost of debt obtained is 10.50%. The cost of preference shares is obtained considering the market value of such preference shares. For computing the cost of equity, the business uses the Capital Asset Pricing Model (CAPM), which considers beta, the market rate of return and the risk-free rate of return. The cost of equity comes to about 20.80%, which is significantly high. The WACC of the company is then calculated taking into consideration all the costs of capital computed (Ross, 2013). The amount of capital the business obtains from each source is used as the weight in order to obtain the weighted average cost of capital, which comes to about 12%.

Requirement (b)

The business employs three types of capital in its capital structure: equity shares, preference shares and debt capital.
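The weighting step described above is a simple sum of component costs scaled by value weights. As a generic sketch of that arithmetic only (the component values and costs below are placeholders, not HRE's computed figures):

```java
public class WaccSketch {

    // WACC = sum over i of (value_i / totalValue) * afterTaxCost_i
    static double wacc(double[] values, double[] afterTaxCosts) {
        double total = 0;
        for (double v : values) total += v;
        double result = 0;
        for (int i = 0; i < values.length; i++) {
            result += (values[i] / total) * afterTaxCosts[i];
        }
        return result;
    }

    public static void main(String[] args) {
        // Placeholder example: 50/50 debt and equity at 8% and 12% after-tax cost.
        double[] values = {100, 100};
        double[] costs = {0.08, 0.12};
        System.out.println(wacc(values, costs)); // roughly 0.10
    }
}
```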
Therefore, the business needs to compute the cost of equity, the cost of debt and the cost of preference shares for each capital source it uses. The cost of equity is computed to be 20.80% under the CAPM method, which is considered one of the most widely used and reliable methods for computing the cost of equity (Zivot, 2013). The cost of debt computed in Figure 3 comes to 10.50%, which is an after-tax figure. The cost of preference shares, as shown in the table below, comes to about 8.42%.

Figure 3: (Image showing the different costs of capital)

Requirement (c)

Figure 4: (Image showing the market value of bonds)

As per the calculations shown in the table above, the market value of the bonds is $80,661,804. This is computed considering the face value of the bonds, the coupon rate, the payment period and the coupon period.

Requirement (d)

Figure 2: (Image showing weighted average cost of capital)

The weighted average cost of capital for HRE comes to about 12%, considering the costs of capital from all the sources of capital used by the business (Ortiz-Molina & Phillips, 2014). The weighted average cost of capital signifies the level of risk which the business faces, and such risk can be minimised by preparing an efficient capital structure for the company.

Requirement (e)

As per the finance department, the internal rate of return of the company's project is 15% in the present scenario. The weighted average cost of capital of the business, as computed, comes to 12%. By definition, the internal rate of return (IRR) is the rate at which the present value of the cash inflows which can be generated from a project and the cash outflow incurred as the initial investment are equal (Magni, 2013); at that rate the company would be indifferent about the project. The overall cost of capital reflects the risks which the business faces.
In the case of HRE, the company has an overall cost of capital of 12% and an IRR of 15%. The IRR exceeds the WACC, which signifies a favourable result (Rich & Rose, 2014). Thus, from the above analysis it is clear that the business should undertake the project (Moten Jr & Thron, 2013).

Answer to Question 3

As per the question, Mid-Western Mining Ltd is considering sources through which the business can obtain short-term finance. The factors which affect short-term financing are discussed below:

References

Frank, M. Z., & Shen, T. (2016). Investment and the weighted average cost of capital. Journal of Financial Economics, 119(2), 300-315.
Hann, R. N., Ogneva, M., & Ozbas, O. (2013). Corporate diversification and the cost of capital. The Journal of Finance, 68(5), 1961-1999.
Hillson, D., & Murray-Webster, R. (2017). Understanding and managing risk attitude. Routledge.
Magni, C. A. (2013). The internal rate of return approach and the AIRR paradigm: a refutation and a corroboration. The Engineering Economist, 58(2), 73-111.
Moten Jr, J. M., & Thron, C. (2013). Improvements on secant method for estimating internal rate of return (IRR). Int. J. Appl. Math. Stat, 42(12), 84-93.
Ognjenovic, S., Ishkov, A., Cvetkovic, D., Peric, D., & Romanovich, M. (2016). Analyses of costs and benefits in the pavement management systems. Procedia Engineering, 165, 954-959.
Ortiz-Molina, H., & Phillips, G. M. (2014). Real asset illiquidity and the cost of capital. Journal of Financial and Quantitative Analysis, 49(1), 1-32.
Pianeselli, D., & Zaghini, A. (2014). The cost of firms' debt financing and the global financial crisis. Finance Research Letters, 11(2), 74-83.
Rich, S. P., & Rose, J. T. (2014). Re-examining an old question: Does the IRR method implicitly assume a reinvestment rate? Journal of Financial Education, 152-166.
Ross, S. A. (2013). The arbitrage theory of capital asset pricing.
In Handbook of the Fundamentals of Financial Decision Making: Part I (pp. 11-30).
Schmidt-Eisenlohr, T. (2013). Towards a theory of trade finance. Journal of International Economics, 91(1), 96-112.
Zivot, E. (2013). Capital asset pricing model.

My Assignment Help. (2020). Introduction To Financial Management.
https://myassignmenthelp.com/free-samples/bfa503-introduction-to-financial-management/the-case-study-on-npv-approach.html
CC-MAIN-2021-31
refinedweb
2,424
56.45
I have been trying to send a command string to my domotics web server, which is connected to a bus and then to my LAN. The command that has to be sent to the web server through the bus is a string like this: *1*1*52##, where 52 is my room's power point address, and the second "1" of the command is an on switch status (0 stands for off), so the light of my room gets switched on. My web server IP address is 192.168.1.36. My goal is writing the string to the server to execute commands, as well as reading from it. I wrote some Java code, but the program neither reads from nor writes to the web server. I am very frustrated about that. Here is the code:

import java.io.*;
import java.net.*;

public class ClientExample extends Thread {

    ClientExample() {
        Thread t = new Thread();
        t.start();
    }

    public static void main(String[] args) throws InterruptedException {
        String str;
        String ip = null;
        String Command = null;
        int port;
        ip = "192.168.1.36";
        port = 80;
        try {
            Socket s = new Socket(ip, port); // creating socket object
            System.out.println("port: " + s.getPort());
            OutputStream outputstream = s.getOutputStream();
            InputStream inputstream = s.getInputStream();
            BufferedReader in = new BufferedReader(new InputStreamReader(inputstream));
            PrintWriter out = new PrintWriter(outputstream, true);
            System.out.println("Connection status: " + s.isConnected());
            System.out.println("Reading from socket....");
            do {
                str = in.readLine();
                if (str != null)
                    System.out.print("str " + str);
            } while (str != null);
            // Thread.sleep(5000);
            Command = "*1*0*52##";
            // Thread.sleep(5000);
            System.out.println("writing to socket....");
            out.write(Command);
            System.out.println(("Command: " + Command));
            System.out.println("check for errors : " + out.checkError());
            System.out.println(out);
            in.close();
            out.close();
            s.close();
        } catch (IOException e) {
            System.out.println("an error occured");
        }
    }
}

Any help will be appreciated!
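Two things stand out in the Java code: readLine() is called before anything has been sent, and the do/while loop blocks until the server closes the connection, so execution never reaches out.write(). Also, these frames end in ## rather than a newline, so readLine() may never return even when data does arrive. For quickly probing the gateway's actual dialogue, here is a hedged Python sketch; the IP is taken from the question, but the port is an assumption, since OpenWebNet-style gateways typically listen on a dedicated command port such as TCP 20000 rather than HTTP port 80 (check your device's documentation).

```python
import socket

def send_frame(host, port, frame, timeout=5.0):
    """Send one command frame, then read whatever the gateway replies."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(frame.encode("ascii"))       # write FIRST...
        chunks = []
        try:
            while True:
                data = s.recv(1024)            # ...then read until close/timeout
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass
    return b"".join(chunks).decode("ascii", "replace")

# reply = send_frame("192.168.1.36", 20000, "*1*1*52##")  # port is device-specific
```

The same write-first, read-with-timeout shape translates directly back to Java (write, flush, then read bytes with a socket timeout set).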
http://www.javaprogrammingforums.com/java-networking/29960-problems-reading-writing-socket.html
I am always looking to learn something. Sometimes it is re-working problems like I am a complete beginner, almost like starting over. I have read that called "beginner's mind" in some literature. So today I looked at unopened emails and I had one from Passion of Programming. It was a programming problem of reversing an array. So I started a Python repl and started figuring out how to reverse an array.

def r(a, start, end):
    print(a, start, end)
    def swapit(pos1, pos2):
        tmp = a[pos1]
        a[pos1] = a[pos2]
        a[pos2] = tmp
    for i in range(int(len(a)/2)):
        swapit(start + i, end - i)
    print(a, start, end)
    return a

I wrote a few test cases and it looked correct. Is it the best way to do it? Probably not, but it is what I came up with fairly quickly and it passed my tests. One thing that I like about Python is being able to define a function exactly where you need it. The def swapit(pos1, pos2): only works in the function r. Another way to say that is that swapit is scoped to only run in the function r. One other thing I noticed is that it skips the middle element if you have an array that has an odd length. So there you are. Now the next time I am asked to reverse an array in a job interview: BOOM!
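For comparison, here is a sketch of the same in-place reversal using Python's tuple swapping, plus the built-in alternative. One detail worth noting: the original loop count int(len(a)/2) only matches when start and end span the whole list, while (end - start + 1) // 2 also handles sub-ranges.

```python
def reverse_range(a, start, end):
    # Swap symmetric pairs; the middle element of an odd-length range
    # stays put, which is harmless.
    for i in range((end - start + 1) // 2):
        a[start + i], a[end - i] = a[end - i], a[start + i]
    return a

nums = [1, 2, 3, 4, 5]
reverse_range(nums, 0, len(nums) - 1)   # in place: [5, 4, 3, 2, 1]

# Built-in alternative: slicing produces a reversed copy.
copy = [1, 2, 3][::-1]                  # [3, 2, 1]
```

`list.reverse()` and `reversed()` cover the same ground, but the explicit swap loop is the one that tends to come up in interviews.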
https://practicaldev-herokuapp-com.global.ssl.fastly.net/tonetheman/leetcode-reverse-an-array-1hdf
I have started working on Java lately. Currently I have an OWL document in the form of an XML file. I want to extract elements from this document. My code works for simple XML documents, but it does not work with OWL XML documents. I was actually looking to get this element: /rdf:RDF/owl:Ontology/rdfs:label, for which I did this:

DocumentBuilder builder = builderfactory.newDocumentBuilder();
Document xmlDocument = builder.parse(
    new File(XpathMain.class.getResource("person.xml").getFile()));
XPathFactory factory = javax.xml.xpath.XPathFactory.newInstance();
XPath xPath = factory.newXPath();
XPathExpression xPathExpression = xPath.compile("/rdf:RDF/owl:Ontology/rdfs:label/text()");
String nameOfTheBook = xPathExpression.evaluate(xmlDocument, XPathConstants.STRING).toString();

I also tried extracting only the rdfs:label element this way:

XPathExpression xPathExpression = xPath.compile("//rdfs:label");
NodeList nodes = (NodeList) xPathExpression.evaluate(xmlDocument, XPathConstants.NODESET);

But this nodelist is empty. Please let me know where I am going wrong. I am using the Java XPath API.

Don't query RDF (or OWL) with XPath

In the question, all that's being asked for is the rdfs:label of an owl:Ontology element, so how much could go wrong? Well, here are two serializations of the ontology. The first is fairly human readable, and was generated by the OWL API when I saved the ontology using the Protégé ontology editor. The query in the accepted answer would work on this, I think.

<rdf:RDF xmlns="" xml:base="" xmlns:rdfs="" xmlns:owl="" xmlns:xsd="" xmlns:
  <owl:Ontology rdf:
    <rdfs:label>Here is a label on the Ontology.</rdfs:label>
  </owl:Ontology>
</rdf:RDF>

Here is the same RDF graph using fewer of the fancy features available in the RDF/XML encoding. This is the same RDF graph, and thus the same OWL ontology. However, there is no owl:Ontology XML element here, and the XPath query will fail.
<rdf:RDF xmlns:rdf="" xmlns:owl="" xmlns:xsd="" xmlns="" xmlns:
  <rdf:Description rdf:
    <rdf:type rdf:
    <rdfs:label>Here is a label on the Ontology.</rdfs:label>
  </rdf:Description>
</rdf:RDF>

You cannot reliably query an RDF graph in RDF/XML serialization by using typical XML-processing techniques. Well, if we cannot reliably query RDF with XPath, what are we supposed to use? The standard query language for RDF is SPARQL. RDF is a graph-based representation, and SPARQL queries include graph patterns that can match a graph. In this case, the pattern that we want to match consists of two triples. A triple is a 3-tuple of the form [subject, predicate, object]. Both triples have the same subject. In SPARQL, after defining the necessary prefixes, we can write the following query.

PREFIX owl: <>
PREFIX rdfs: <>

SELECT ?label WHERE {
  ?ontology a owl:Ontology ;
            rdfs:label ?label .
}

(Note that because rdf:type is so common, SPARQL includes a as an abbreviation for it. The notation s p1 o1 ; p2 o2 . is just shorthand for the two-triple pattern s p1 o1 . s p2 o2 .)

You can run SPARQL queries against your model in Jena either programmatically or using the command line tools. If you do it programmatically, it is fairly easy to get the results out. To confirm that this query gets the value we're interested in, we can use Jena's command line tool arq to test it out.

$ arq --data labelledOnt.owl --query getLabel.sparql
--------------------------------------
| label                              |
======================================
| "Here is a label on the Ontology." |
| --------------------------------------
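As a toy illustration of why matching triples is robust where XPath is not: both serializations above parse to the same two triples, so the same pattern match finds the label regardless of the XML shape. Here is a small self-contained Python sketch (the node name "#ont" and the shortened predicate strings are illustrative stand-ins, not real IRIs; a real application would use a library such as rdflib and actual SPARQL):

```python
# Toy triple store: both RDF/XML serializations above reduce to this graph.
OWL_ONTOLOGY = "owl:Ontology"
RDF_TYPE = "rdf:type"
RDFS_LABEL = "rdfs:label"

triples = {
    ("#ont", RDF_TYPE, OWL_ONTOLOGY),
    ("#ont", RDFS_LABEL, "Here is a label on the Ontology."),
}

def ontology_labels(graph):
    # Match the two-triple pattern: ?o a owl:Ontology ; rdfs:label ?label .
    ontologies = {s for (s, p, o) in graph if p == RDF_TYPE and o == OWL_ONTOLOGY}
    return [o for (s, p, o) in graph if s in ontologies and p == RDFS_LABEL]
```

Because the match is over the graph, it cannot be fooled by the abbreviated-vs-explicit RDF/XML forms the way an XPath over elements can.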
https://www.edureka.co/community/39437/xpath-in-java-for-accessing-owl-document
sending QStringLists from dynamically created buttons using signals and slots

In my Qt C++ application I create buttons dynamically based on a QStringList I received! When I click the buttons I want to open an interface (dialog2.ui) and send another QStringList (based on the button that I click) to that interface via signals and slots. In the newly created interface the contents of the QStringList sent is displayed in labels! Following is my code.

Dialog.h

#ifndef DIALOG_H
#define DIALOG_H
#include "QFrame"
#include "QLabel"
#include "QLineEdit"
#include "QPushButton"
#include "dialog2.h"
#include <QDialog>

namespace Ui {
class Dialog;
}

class Dialog : public QDialog
{
    Q_OBJECT
public:
    Dialog2 dialog2;
    QFrame *f1;
    QPushButton *a;
    QList<QStringList> subMessageFields;
    QStringList buttonNames;
    explicit Dialog(QWidget *parent = 0);
    QStringList subMessages;
    QStringList Messages1;
    QStringList Messages2;
    ~Dialog();
private slots:
    void on_pushButton_clicked();
    void receiveSubMessages(QStringList List);
    void openWindow();
    void getButtons();

Dialog.cpp

#include "dialog.h"
#include "ui_dialog.h"
#include "QFrame"
#include "QLabel"
#include "QLineEdit"
#include "dialog2.h"

Dialog::Dialog(QWidget *parent) :
    QDialog(parent),
    subMessages{"0"},
    ui(new Ui::Dialog)
{
    ui->setupUi(this);
}

Dialog::~Dialog()
{
    delete ui;
}

void Dialog::receiveSubMessages(QStringList List){
    subMessages = List;
    ui->comboBox->addItems(subMessages);
    for(int i=0; i<subMessages.size(); i++){
        QString Name = QString::number(i);
        f1 = new QFrame();
        a = new QPushButton();
        a->setText(subMessages[i]);
        a->setObjectName("Button" + Name);
        ui->horizontalLayout->addWidget(a); //
    }
    getButtons();
}

void Dialog::getButtons(){
    Messages1 << "John" << "Smith";
    Messages2 << "Hello" << "Anne";
    for(int i=0; i<subMessages.size(); i++){
        QPushButton *le = findChild<QPushButton*>(QString("Button%1").arg(i));
        if((le)&&(i==0)){
            connect(le, SIGNAL(clicked()), this, SLOT(openWindow()));
            connect(this,SIGNAL(sendList(QStringList)),&dialog2,SLOT(receiveList(QStringList)));
            emit sendList(Messages1);
        }
        else if((le)&&(i==1)){
            if(connect(le, SIGNAL(clicked()), this, SLOT(openWindow()))){
                connect(this,SIGNAL(sendList(QStringList)),&dialog2,SLOT(receiveList(QStringList)));
                emit sendList(Messages2);
            }
        }
    }

void Dialog::openWindow(){
    dialog2.exec();
}

Dialog2.h

#include <QDialog>
#include "QFrame"
#include "QLabel"
#include "QLineEdit"

namespace Ui {
class Dialog2;
}

class Dialog2 : public QDialog
{
    Q_OBJECT
public:
    QFrame *f1;
    QFrame *f2;
    QLabel *label;
    QLineEdit *text;
    QStringList subMessageList;
    explicit Dialog2(QWidget *parent = 0);
    ~Dialog2();
private slots:
    void on_pushButton_clicked();
    void receiveList(QStringList List);
    void on_pushButton_2_clicked();

Dialog2.cpp

#include "dialog2.h"
#include "ui_dialog2.h"

Dialog2::Dialog2(QWidget *parent) :
    QDialog(parent),
    ui(new Ui::Dialog2)
{
    ui->setupUi(this);
}

Dialog2::~Dialog2()
{
    delete ui;
}

void Dialog2::on_pushButton_clicked()
{
    ui->label->setText(QString::number(subMessageList.size()));
}

);
        label->setText(subMessageList[i]);
    }
}

Here I have supported 2 buttons dynamically created (i=0 and i=1). But when I click the button which resembles i=0 I get the labels Hello, Anne, John, Smith, John, Smith! I want only to see John and Smith in the labels since that button only emits the Messages1 QStringList! How can I achieve this?

- jsulm Qt Champions 2018

@Kushan said in sending QStringLists from dynamically created buttons using signals and slots:

for(int i=0;i<subMessageList.size();i++){

Well, you're adding the complete list. If you only want to add part of it then do so.

@jsulm Yeah, but! If I click the first button (i.e. i==0), the dialog2.ui which opens up should only show "John" and "Smith" in the labels, right? Because I only emit the QStringList Messages1! But when I click that button, the contents of both the Messages1 and Messages2 QStringLists are shown in the labels of dialog2.ui!

@Kushan Hi. Check what you actually set.
It does seem it's only one text, but it's hard to tell:

    QString text=subMessageList[i];
    qDebug() << "text is" << text;
    label->setText(text); } }

@mrjj It gave

text is "John"
text is "Smith"
text is "Hello"
text is "Anne"
text is "Hello"
text is "Anne"

@mrjj Yeah, but how can I change my code so that only

text is "John"
text is "Smith"

appears when I click the 1st button, and

text is "Hello"
text is "Anne"

appears when I click the other button?

You seem to connect multiple times to the same dialog2:

for(int i=0;i<subMessages.size();i++){
    ....
    connect(this,SIGNAL(sendList(QStringList)),&dialog2,SLOT(receiveList(QStringList)));

When you then call emit, it will call that many times, so I think you mean to call it just once, outside the loop:

connect(this,SIGNAL(sendList(QStringList)),&dialog2,SLOT(receiveList(QStringList)));
for(int i=0;i<subMessages.size();i++){
    .....

@mrjj Yeah, thanks, but the problem is: if I place it outside, then I can't connect the signal to a particular button, right? Here I connect the signal using the value of i in the for loop.

@Kushan Only for the dialog2, as you don't create a new dialog2 and hence get multiple connects to the same one. For the le button, inside the loop is correct (as you find a new one to connect to each time). So only call

connect(this,SIGNAL(sendList(QStringList)),&dialog2,SLOT(receiveList(QStringList)));

ONCE, or it will send many times to the same dialog.

@mrjj Thanks a lot! Now it is 80% working! But the only problem I face now is: when I click the 1st button, close the dialog2.ui which pops up, then click the second button and open dialog2.ui, it is like

text is "John"
text is "Smith"
text is "Hello"
text is "Anne"

How can I remove the following part?

text is "John"
text is "Smith"

@Kushan If you reuse the same dialog2 for button1 AND button2, it will still have the labels from last time. I think the easiest would be to delete it and create it again. (It's not super easy to clear a layout.) I assume it's this one?

Dialog2 dialog2;

It's a bit hard to delete as it's not a pointer.
You could use

void clearLayout(QLayout *layout) {
    if (layout) {
        while(layout->count() > 0){
            QLayoutItem *item = layout->takeAt(0);
            QWidget* widget = item->widget();
            if(widget)
                delete widget;
            delete item;
        }
    }
}

void Dialog2::receiveList(QStringList List){
    clearLayout(ui->verticalLayout);
    ....

So each time you call Dialog2::receiveList it removes what it has and then adds the new stuff.

What do you mean, delete dialog2.ui? (.ui files are like .cpp/.h files.) Is it not the variable in Dialog called Dialog2 dialog2 you are using?

@mrjj Thanks a lot, it works fine! I am also interested in using pointers! If I had used Dialog2 *dialog2 = new Dialog2(); instead of Dialog2 dialog2, what should I do without using the clearLayout method?

@Kushan Just

delete dialog2;
dialog2->deleteLater();

(that also deletes all you put into it), and BEFORE using it again:

dialog2 = new Dialog2(); // new it again

Note: Qt has an auto delete system, so any parent will delete its children; don't delete a button you have given to a layout. It works here as you don't give the dialog a parent. If you did Dialog2 *dialog2 = new Dialog2(this); then "this" would own it and it would be bad to delete it manually, as whenever "this" is deleted it would try to delete the dialog (a second time).

update: Also you can use setAttribute(Qt::WA_DeleteOnClose); which will auto delete the dialog when you press close/ok/cancel, and in that case you must new it before each use and it will delete itself.

- SGaist Lifetime Qt Champion
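The duplicate-connect pitfall in this thread is easy to see outside Qt as well. Here is a plain-Python toy (not Qt code; just an analogy) whose connect/emit mimic the default Qt behaviour, where connecting the same slot twice means it runs twice (Qt's Qt::UniqueConnection flag exists precisely to suppress such duplicates):

```python
class Signal:
    """Toy stand-in for a Qt signal: every connect() adds another callback."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)      # duplicates are NOT filtered out

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

received = []
send_list = Signal()
send_list.connect(received.extend)    # connected once per loop iteration...
send_list.connect(received.extend)    # ...so a 2-iteration loop connects twice
send_list.emit(["John", "Smith"])
# received is now ["John", "Smith", "John", "Smith"], the doubled labels above
```

Moving the connect outside the loop (so it runs once) is exactly the fix mrjj suggests for the Qt code.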
https://forum.qt.io/topic/85653/sending-qstringlists-from-dynamically-created-buttons-using-signals-and-slots/2
+2 Hey, when I make my first hello world program and try to build and run it in Code::Blocks, it gives me 1 error: Fatal error: iostream: no such file or directory. How can I solve this? 7/23/2020 9:38:41 AM

27 Answers

0 You have to add the path - Read 4. Add the path..... if you don't have MinGW, install it 3. If you still have problems, you can try Visual Studio by Microsoft; they have their own compiler that will automatically install along with the IDE. Write g++ --version in the command line.

+4 Maybe you have created a C project instead of a C++ project; that's why it's giving the error. If that didn't fix the error, watch this video; maybe it will help you.

Check if you have created a C project in Code::Blocks instead of a C++ project. Windows is not a problem. Did you install just the Code::Blocks IDE, or Code::Blocks IDE with a compiler? You need the IDE with the compiler. Or there could be an issue with your compiler installation.

+1 So basically the solution of krystof worked best, and it's all I've done, which is installing MinGW and choosing its path in CB. It didn't work the first try because I think I did it wrong, but after I watched the video on how to do it correctly it was done correctly. Thanks everyone for helping me.

Can you please paste the code here?

#include <iostream>
using namespace std;
int main()
{
    cout << "Hello world!";
    return 0;
}

It's just that I'm totally new and I don't know anything, like what's a header, a class or a console, basically nothing, and I googled a lot but found nothing; all were about more and more complicated stuff. I really apologize for being so noob lol.

Youssef Hossam, the code is fine. Where are you compiling it?

Code::Blocks

What operating system are you using? I checked that, swim.

Windows 7 Ultimate

It's the GNU GCC compiler.

I think you haven't set up the compiler correctly. Delete Code::Blocks, and on their site there is a download option "cb mingw setup"; download this one and you will have the compiler already set up for you.
Okay, I'll do that. Thanks everyone for your time. If it's working, please mark the right answer with ✔️
https://www.sololearn.com/Discuss/2408949/fatal-error-iostream-no-such-file-or-directory/
Is there any short way to achieve what the APT (Advanced Package Tool) command line interface does in Python? I mean, when the package manager prompts a yes/no question followed by [Yes/no], it accepts YES/Y/yes/y (or just Enter for the default) as input.

As you mentioned, the easiest way is to use raw_input(). There is no built-in way to do this. From Recipe 577058:

import sys

Usage example:

>>> query_yes_no("Is cabbage yummier than cauliflower?")
Is cabbage yummier than cauliflower? [Y/n] oops
Please respond with 'yes' or 'no' (or 'y' or 'n').
Is cabbage yummier than cauliflower? [Y/n] [ENTER]
True
>>> query_yes_no("Is cabbage yummier than cauliflower?", None)
Is cabbage yummier than cauliflower? [y/n] [ENTER]
Please respond with 'yes' or 'no' (or 'y' or 'n').
Is cabbage yummier than cauliflower? [y/n] y
True
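The recipe body is truncated in this extract (only the import sys survives), so here is a hedged sketch that behaves like the usage example. It is a reimplementation, not necessarily the exact Recipe 577058 code, and it is written for Python 3 (input() instead of the post's Python 2 raw_input()):

```python
import sys

def query_yes_no(question, default="yes"):
    """Ask a yes/no question and return True or False.

    default is the answer assumed when the user just hits Enter:
    "yes", "no", or None (meaning an explicit answer is required).
    """
    valid = {"yes": True, "y": True, "no": False, "n": False}
    prompts = {"yes": " [Y/n] ", "no": " [y/N] ", None: " [y/n] "}
    while True:
        sys.stdout.write(question + prompts[default])
        choice = input().strip().lower()
        if choice == "" and default is not None:
            return valid[default]
        if choice in valid:
            return valid[choice]
        sys.stdout.write("Please respond with 'yes' or 'no' (or 'y' or 'n').\n")
```

The capitalised letter in the prompt marks the default, matching the [Y/n] and [y/n] prompts shown in the usage example.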
https://codedump.io/share/pV8vkmubt03H/1/apt-command-line-interface-like-yesno-input
What up gang, Hay this is The Ski calling out to all you helpful and fine programmers out there in Wonderland, and my wonder of the night is: what am I doing wrong with the arrays? I do pay attention in class, but sometimes when you are in a class of 100 people you can get your questions answered and sometimes you cannot due to time constraints. But hay, why worry when I have all you GREAT people to help me out. Anyways: enough of the jive talk from me. Here is a program I am working on, and below that are my compiling errors. What am I doing wrong that the system will not let me compile the program?

//
//
//
//
// Nov 6, 2001
// This program is used to categorize salary and commission into a range of a payscale.

#include <iostream.h>
#include <iomanip.h>

const int arsize1 = 50;
const int arsize2 = 5;

using std::cout;
using std::cin;
using std::endl;

int main()
{
    int num1[arsize1], num2[arsize2], num3[arsize1], num4[arsize1];
    num1[arsize1] = 0;
    num2[arsize2] = 0;
    num3[arsize1] = 0;
    num4[arsize1] = 0;
    cout << "Please enter the gross sales amount (-1 to end):";
    cin >> num1[0];
    while ( num1[arsize1] != -1 )
    {
        for (int count1 = 0; count1 < arsize1; count1++ )
        {
            num3[count1] = num1[count1] * .09;
            num4[count1] = num3[count1] + 200;
            cout << "The salary for that sales amount is: " << num4[count1] << endl;
            if ( num4[count1] <= 399 )
                num2[1] = num2[1] + 1;
            else if (( num4[count1] >= 400) && (num4[count1] <= 599 )
                num2[2] = num2[2] + 1;
            else if (( num4[count1] >= 600) && (num4[count1] <= 799 )
                num2[3] = num2[3] + 1;
            else if (( num4[count1] >= 800) && (num4[count1] <= 999 )
                num2[4] = num2[4] + 1;
            else if ( num4[count1] >= 400)
                num2[5] = num2[5] + 1;
        }
    }
    cout << endl;
    cout << "RESULTS!"
         << endl;
    cout << endl;
    cout << num2[1] << " people fell into the range of 200 to 399" << endl;
    cout << num2[2] << " people fell into the range of 400 to 599" << endl;
    cout << num2[3] << " people fell into the range of 600 to 799" << endl;
    cout << num2[4] << " people fell into the range of 800 to 999" << endl;
    cout << num2[5] << " people fell into the range of 1000 and up" << endl;
    return 0;
}

ERRORS:

prog_19.cpp: In function `int main()':
prog_19.cpp:34: warning: assignment to `int' from `double'
prog_19.cpp:40: parse error before `['
prog_19.cpp:59: confused by earlier errors, bailing out

Now the one error I think I may know, because I am dealing with multiplying by .09. Do I have to include the math library? And what do I have to do to avoid outputting a double variable, just keeping it as a regular integer?
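For reference while debugging: the parse error most likely comes from the unbalanced parentheses in each else if (( ... ) && ( ... ) condition (one closing parenthesis is missing), and the warning is because num1[count1] * .09 is a double being assigned to an int array; no math library is needed for plain multiplication. The intended binning logic, sketched in Python purely to show the algorithm (salary = 200 + 9% commission, counted into $200 brackets; the bracket labels are my own naming, not from the assignment):

```python
def bin_salaries(sales_amounts):
    """Count how many computed salaries fall into each pay bracket."""
    bins = {"200-399": 0, "400-599": 0, "600-799": 0, "800-999": 0, "1000+": 0}
    for sales in sales_amounts:
        salary = 200 + sales * 0.09       # base pay plus 9% commission
        if salary <= 399:
            bins["200-399"] += 1
        elif salary <= 599:
            bins["400-599"] += 1
        elif salary <= 799:
            bins["600-799"] += 1
        elif salary <= 999:
            bins["800-999"] += 1
        else:
            bins["1000+"] += 1
    return bins
```

Because each elif only runs when the earlier tests failed, the lower bound of every range is implicit, which also sidesteps the original code's doubled-parenthesis conditions.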
http://cboard.cprogramming.com/cplusplus-programming/4693-arrays.html
documentation changes added path-allot and clear-path

\ environmental queries

( -- wid ) \ gforth
\G @i{wid} identifies the word list that is searched by environmental
\G queries.
wordlist drop

: environment? ( c-addr u -- false / ... true ) \ core environment-query
    \G @i{c-addr, u} specify a counted string. If the string is not
    \G recognised, return a @code{false} flag. Otherwise return a
    \G @code{true} flag and some (string-specific) information about
    \G the queried string.
    environment-wordlist search-wordlist if
        execute true
    else
        false
    endif ;

: e? name environment? 0= ABORT" environmental dependency not existing" ;

: has? name environment? 0= IF false THEN ;

: $has? environment? 0= IF false THEN ;

8 constant ADDRESS-UNIT-BITS ( -- n ) \ environment
\G Size of one address unit, in bits.

1 ADDRESS-UNIT-BITS chars lshift 1- constant MAX-CHAR ( -- u ) \ environment
\G Maximum value of any character in the character set

MAX-CHAR constant /COUNTED-STRING ( -- n ) \ environment
\G Maximum size of a counted string, in characters.

\G True if @code{/} etc. perform floored division

\G Gforth (for versions>0.3.0). The version strings of the various
\G versions are guaranteed to be sorted:
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/environ.fs?hideattic=0;sortby=rev;f=h;only_with_tag=MAIN;ln=1;content-type=text%2Fx-cvsweb-markup;rev=1.22
Episode 34 · December 5, 2014

Learn how to create forms with multiple submit buttons. Thanks to Amirol Ahmad for suggesting the topic!

In this episode we're going to talk about creating forms that have multiple submit buttons. A good example that I came up with is creating blog posts where you can publish them immediately or you can save them as a draft.

Hopping into our text editor, you can see that we already have the publish button in there, and all we have to do is copy this line, paste it in, and change

<%= f.submit "Publish", class: "btn btn-primary" %>

to:

<%= f.submit "Save as draft", class: "btn btn-default" %>

Now we have these two submit buttons, and we can test the way that the form works by looking at our params inside our Rails logs. Let's create a test post with some text, and we'll click the publish button to create it, and then we can look in our terminal and see what happened. If you look at the last POST request, you can see that the commit message inside the parameters hash was "Publish". That means the value that gets submitted is the text of the button you clicked. So that's really really neat. You can create the next post, click on "Save as draft", and if we look in our terminal and scroll down to that post, that POST request now has a commit message of "Save as draft". It's as simple as that, so inside of your controller you can look at params[:commit] and determine whether you've saved as a draft or published the post.

Inside our posts_controller.rb we can create a couple of methods here that are helpers that allow us to add the logic into our create and update actions. I'm going to go down here at the bottom and create a published? method, and this is going to look at params[:commit] and see if it equals "Publish".
Also, I'm not going to use it, but you can add a save_as_draft method for the other one as well, in case you want to do separate logic in these controllers. There's a handful of ways that you can update these actions to support this. One would be:

def create
  @post = Post.new(post_params)
  @post.published_at = Time.zone.now if publishing?

Choosing the names for the methods is important; published and publishing can both be candidates, but remember the importance of readability for yourself and other developers. You can name those however you want.

Another approach might be updating the post once it's saved:

respond_to do |format|
  if @post.save
    @post.update(published_at: Time.zone.now) if publishing?
    # rest of the code

I often prefer going with something like this, and the reason for that is because if the post gets saved and any of the publish logic fails, the post will automatically be saved as a draft and you will have it around in case the publish action fails. So maybe, rather than just simply setting the published_at date, you have a publish method on your post, and inside of there it saves it, sets the published_at time, but it also goes and tells MailChimp to send out an email to everybody saying that we just published this new post. You don't want that to fail here, so you could do something like this where, if it failed, it would be okay because you wouldn't lose your progress.

So I'm just going to go back to setting the published_at attribute right after we create a new one in memory, and in our update we can also add that same thing down here, because when you're editing a post, you still want to be able to publish it if you saved it as a draft. That is how you can separate your logic and have two different paths of handling your actions based upon which button was clicked in the form.
Let's take a look at this in our browser and see if it's working the way we want it to. If we click "Edit", now we have functional "Publish" and "Save as Draft" buttons; if we save as draft, it should still stay a draft, and if we publish, the timestamp for the "published at" time is set and it marks the post as published. Now when we edit again, it's a little odd though, because we have a publish button on a post that's already published, and we have a "Save as draft" button, but if you save as draft it stays published and doesn't reset the post to a draft. So we definitely need to update the form: if our post is published, let's add a couple of other actions here; otherwise, we will have the regular publish and save as draft.

app/views/posts/_form.html.erb

<div class="form-group">
  <% if @post.published? %>
    <%= f.submit "Update", class: "btn btn-primary" %>
    <%= f.submit "Unpublish", class: "btn btn-default" %>
  <% else %>
    <%= f.submit "Publish", class: "btn btn-primary" %>
    <%= f.submit "Save as Draft", class: "btn btn-default" %>
  <% end %>
</div>

If we change that, we need to go into our controller and make some modifications as well, to give meaning to the "Unpublish" text.

app/controllers/posts_controller.rb

def update
  @post.published_at = Time.zone.now if publishing?
  @post.published_at = nil if unpublishing?
  # Rest of code
end

# Some more code

def unpublishing?
  params[:commit] == "Unpublish"
end

And this is only going to affect the update action. Obviously you can refactor this however you like and reorganize it, whatever makes more sense for you, but in this example we now have the update and unpublish, successfully resetting that timestamp. That is something you'll probably need to do most of the time, each time the buttons change, and that's actually a really really good thing for usability, for you to update these buttons accordingly.
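The underlying mechanism is framework-agnostic: the browser submits the clicked button's name/value pair, and the server branches on it. A minimal plain-Python sketch of the same dispatch (a dict stands in for Rails' params hash and post record; names and the "now" placeholder are illustrative):

```python
def handle_update(params, post=None):
    """Branch on which submit button was clicked, mirroring the controller."""
    post = dict(post or {"published_at": None})
    if params.get("commit") == "Publish":
        post["published_at"] = "now"          # stand-in for Time.zone.now
    elif params.get("commit") == "Unpublish":
        post["published_at"] = None
    # "Save as Draft" / "Update" fall through and leave the timestamp alone.
    return post
```

Any form handler, whatever the framework, can use the same shape: one endpoint, one branch per button label.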
Hopping back to the form before we wrap up: I've talked in a previous episode about using the button tag here instead of submit, to create HTML inside of those buttons, so you could have loading animations magically with that data attribute that we talked about in a previous episode, which I definitely recommend checking out if you haven't seen it before. Now the problem with this is that when you do a submit tag,

<input type="submit" name="commit" value="Unpublish">
<%= f.submit "Unpublish", class: "btn btn-primary" %>

you create the top tag with the bottom one; that's what's produced. However, when you use a button tag, you simply get basically the button, with "Update" inside of it, and the closing tag. The benefit here is you can put HTML inside; the problem is that you're missing all of the important information (name="commit" value="Unpublish"). You don't need the type, of course, but you do need to set the name to commit and the value, and match that to the text that you want. So you can change these values with the value option here, and if you want to use those buttons still, you can say name="commit" and set value to whatever string you want, so that should produce the correct name and value and submit those when you click those buttons. I almost always use the button tag now, because I can add the loading animations to it, etcetera. But if you're fine using the submit tag, that's really easy and that's the way to go.
And the other thing to mention before we leave is that if you make a JavaScript form out of this, you're going to run into a little bit of trouble, so this is basically going to have the jQuery UJS library, take a look at this form when you click any of these submit buttons, it's going to serialize the form, and that means that it's going to look at every single one of these, and then submit that value and create a JSON hash and submit that over basically, so that is going to cause some problems, because it will serialize all of these at the same time, because it doesn't really know how to do that separately, so you're going to have to write some JavaScript to handle this accordingly, but it's really not to bad, so maybe you have a hidden field that you'll update with the action, so maybe you're using the button tags, and then when you click on one of these buttons, you will have JavaScript intercept it, submit a hidden field with that same name or something like that. So that's up to you to dive into if you're interested, and you'll probably also cover it in a future episode but not in this one. That wraps up how to create forms with multiple buttons, and of course, as anything goes, you start simply and think you just need two buttons, but then pretty quickly you need four and so on. I hope you found this interesting, it certainly makes life a lot better for your users because it's a really fantastic way to add a little bit extra usability into your applications Transcript written by Miguel
https://gorails.com/episodes/forms-with-multiple-submit-buttons?autoplay=1
The QDawg class provides storage of words in a Directed Acyclic Word Graph.

#include <QDawg>

A DAWG provides very fast look-up of words in a word list given incomplete or ambiguous letters in those words. In Qtopia, global DAWGs are maintained for the current locale. See Qtopia::dawg(). Various Qt Extended components use these DAWGs, most notably input methods, such as handwriting recognition, that use the word lists to choose among the likely interpretations of the user input.

To create your own QDawg, construct an empty one with the default constructor, then add all the required words to it by a single call to createFromWords(). Created QDawgs can be stored with write() and retrieved with read() or readFile(). The data structure is such that adding words incrementally is not an efficient operation and can only be done by creating a new QDawg, using allWords() to get the existing words.

The structure of a DAWG is a graph of Nodes, each representing a letter (retrieved by QDawg::Node::letter()). Paths through the graph represent partial or whole words. Nodes on paths that are whole words are flagged as such by QDawg::Node::isWord(). Nodes are connected to a list of other nodes. The alphabetically first such node is retrieved by QDawg::Node::jump() and subsequent nodes are retrieved from the earlier ones by QDawg::Node::next(), with the last child returning 0 for next() (and true for QDawg::Node::isLast()). There are no cycles in the graph, as there are no infinitely repeating words.

For example, the DAWG below represents the word list: ban, band, can, cane, cans, pan, pane, pans. In the graph, the root() node has the letter 'b', the root()->jump() node has the letter 'a', and the root()->next() node has the letter 'c'. Also, the root() node is not a word - !Node::isWord(), but root()->next()->jump()->jump() is a word (the word "can").
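The node structure described here is easy to model outside Qt. Below is a self-contained sketch (plain C++, not the actual QDawg API — the Node struct and contains() function are my own illustration) of the jump/next traversal over such a graph:

```cpp
#include <string>

// Minimal stand-in for QDawg::Node: a letter, a word flag, the alphabetically
// first child (jump), and the next sibling (next, null for the last child).
struct Node {
    char  letter;
    bool  word;
    Node* jump;  // first child
    Node* next;  // next sibling
};

// Walk the sibling list at each depth looking for the next letter of s;
// descend via jump when a letter matches.
bool contains(const Node* n, const std::string& s, std::size_t index = 0) {
    if (index < s.length()) {
        while (n) {
            if (s[index] == n->letter) {
                if (n->word && index == s.length() - 1)
                    return true;
                return contains(n->jump, s, index + 1);
            }
            n = n->next;
        }
    }
    return false;
}
```

Because "ban" and "can" share the suffix "an", a single 'a'→'n' chain can be reached from both the 'b' and 'c' roots — that sharing is what makes a DAWG smaller than a plain trie.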
This structuring not only provides O(1) look-up of words in the word list, but also produces a smaller compressed storage file than a plain text file word list. A simple algorithm that traverses the QDawg to see if a word is included would be:

bool isWord(const QDawg::Node *n, const QString& s, int index)
{
    if ( index < (int)s.length() ) {
        while (n) {
            if ( s[index] == n->letter() ) {
                if ( n->isWord() && index == (int)s.length()-1 )
                    return true;
                return isWord(n->jump(), s, index+1);
            }
            n = n->next();
        }
    }
    return false;
}

In addition to simple look-up of a single word, the QDawg can be traversed to find lists of words with certain sets of characters, such as the characters associated with phone keys or handwriting. For example, given a QStringList where each string is a list of letters in decreasing order of likelihood, an efficient algorithm could be written for finding the best word by traversing the QDawg just once.

The QDawg graph can have a value() stored at each node. This can be used, for example, for tagging words with frequency information.

Constructs a new empty QDawg. The next step is usually to add all words with createFromWords() or use readFile() to retrieve existing words.

Destroys the QDawg. If it was attached to a file with readFile(), it is detached.

Returns a list of all the words in the QDawg, in alphabetical order.

Returns true if the QDawg contains the word s; otherwise returns false.

Iterates over the whole graph and returns the total number of words found in the QDawg.

Replaces all the words in the QDawg with the words in the list. In addition to single words, QDawg also allows prefixes and node values. Prefixes are described by words with a trailing "*"; any necessary nodes are added to the QDawg, but the final node is not marked as isWord(). Node values are integers appended to the word (or prefix) following a space. The values are accessible via the value() function. The value must be in the range 0 to QDawg::Node::MaxValue.
Note that storing a wide range of values will increase the size of the generated dawg since suffixes with different values cannot be merged. This is an overloaded member function, provided for convenience. Replaces all the words in the QDawg with words read by QIODevice::readLine() from dev. The text in dev must be in UTF8 format. Replaces all the words in the QDawg with the QDawg read from dev. The file is not memory-mapped. Use readFile() wherever possible, for better performance. Returns true if successful. If not successful, the QDawg is left unchanged and false is returned. See also write(). Replaces all the words in the QDawg with the QDawg in filename. Note that the file is memory-mapped if possible. Returns true if successful. If not successful, the QDawg is left unchanged and false is returned. See also write(). Returns the root Node of the QDawg, or 0 if the QDawg is empty. The root is the starting point for all traversals. Note that this root node has a Node::letter(), and subsequent nodes returned by Node::next(), just like any other Node. Writes the QDawg to dev, in a custom QDAWG format. Returns true if successful. Warning: QDawg memory maps QDAWG files. The safe method for writing to QDAWG files is to write the data to a new file and move the new file to the old file name. QDawg objects using the old file will continue using that file. Writes the QDawg to dev, in a custom QDAWG format, with bytes reversed from the endianness of the host. This allows a host of one endianness to write a QDAWG file readable on a target device with reverse endianness. Returns true if successful.
http://radekp.github.io/qtmoko/api/qdawg.html
C++: A jump table with a template device
Posted: May 5, 2015 Filed under: Templates 8 Comments

A few articles ago we saw how gcc might need some help when mixing template instantiation (pure compile time data) with function calls (deducible compile time information, but not available to the template expander). Now we'll go one step further and combine all three types: pure compile time data, deducible compile time data and pure run time data (*). Just to annoy the compiler, and to see how gcc is able to optimize the results.

Let's build a simple example, similar to what we used last time: an object that will determine the range of an integer and then invoke a callback with the closest range. Something like this could be used, for example, to allocate a buffer.

void boring(int x, func f) {
    if (x < 2) {
        f(2);
    } else if (x < 4) {
        f(4);
    } else if (x < 8) {
        f(8);
    } else if (x < 16) {
        // You get the idea...
    }
}

Can we build a prettier template version of this code, without any overhead? Let's try:

typedef void (*func)(int);

template <int My_Size>
struct Foo {
    void bar(size_t size, func callback) {
        if (size > My_Size) {
            callback(My_Size);
        } else {
            next_foo.bar(size, callback);
        }
    }
    Foo<My_Size/2> next_foo;
};

// Stop condition
template <>
struct Foo<0> {
    void bar(size_t, func) { }
};

void wrapper(int x, func f) {
    Foo<512> jump_table;
    jump_table.bar(x, f);
}

And now, let's compile with "g++ -fverbose-asm -S -O0 -c foo.cpp -o /dev/stdout | c++filt". You'll see something like this:

wrapper(int, void (*)(int)):
    call Foo<512>::bar(unsigned long, void (*)(int))

Foo<512>::bar(unsigned long, void (*)(int)):
    cmpq $512, %rsi    #, size
    jbe .L4
    call *%rdx         # callback
    jmp .L3
.L4:
    call Foo<256>::bar(unsigned long, void (*)(int))
.L3:
    leave

Foo<256>::bar(unsigned long, void (*)(int)):
    cmpq $256, %rsi    #, size
    jbe .L4
    call *%rdx         # callback
    jmp .L3
.L4:
    call Foo<128>::bar(unsigned long, void (*)(int))
.L3:
    leave

# You get the idea, right?
Foo<0>::bar(unsigned long, void (*)(int)):
    # Stop condition, do nothing

That doesn't look too good, does it? We don't need to worry: we already learned that gcc needs help from the optimizer to handle template expansion and non-static function calls. Let's move to O1:

wrapper(int, void (*)(int)):
.LFB14:
    cmpq $512, %rdi    #, D.2974
    jbe .L2
    movl $512, %edi
    call *%rsi         # f
    jmp .L1
.L2:
    cmpq $256, %rdi    #, D.2974
    jbe .L4
    movl $256, %edi
    call *%rsi         # f
    jmp .L1
# Again, it should be clear what's going on...
.L11:
    cmpq $1, %rdi      #, D.2974
    jbe .L1
    movl $1, %edi
    call *%rsi         # f
.L1:

It's better than last time, but it doesn't look great either: gcc managed to inline all the calls, but it stopped there. Let's move to O2 and see what happens:

wrapper(int, void (*)(int)):
    movslq %edi, %rdi  # x, D.2987
    cmpq $512, %rdi
    ja .L13
    cmpq $256, %rdi
    ja .L14
    [ .... ]
    cmpq $2, %rdi
    ja .L21
.L13:
    movl $512, %edi
    jmp *%rsi          # f
.L14:
    movl $256, %edi
    jmp *%rsi          # f
    [ .... ]
.L21:
    movl $2, %edi
    jmp *%rsi          # f
.L1:
    rep ret

Now, that looks much better. And we can now see that gcc generates the same code at -O2 for both versions of our code.

(*) Just for the sake of completeness:
- Pure compile time data is information directly available during compilation time, like a constant.
- Deducible compile time data means something that can easily be deduced, like a function call to a non-virtual method.
- Run-time only data means something a compiler could never deduce, like a volatile variable or the parameter of a function called from outside the current translation unit.
gcc: Optimization levels and templates
Posted: April 21, 2015 Filed under: C++, Templates 6 Comments

Analyzing the assembly output for template devices can be a bit discouraging at times, especially when we spend hours trying to tune a mean-looking template class only to find out the compiler is not able to reduce its value like we expected. But hold on, before throwing all your templates away you might want to figure out why they are not optimized. Let's start with a simple example: a template device to return the next power of 2:

template <int n, long curr_pow, bool stop>
struct Impl_Next_POW2 {
    static const bool is_smaller = n < curr_pow;
    static const long next_pow = Impl_Next_POW2<n, curr_pow*2, is_smaller>::pow;
    static const long pow = is_smaller ? curr_pow : next_pow;
};

template <int n, long curr_pow>
struct Impl_Next_POW2<n, curr_pow, true> {
    // This specialization is important to stop the expansion
    static const long pow = curr_pow;
};

template <int n>
struct Next_POW2 {
    // Just a wrapper for Impl_Next_POW2, to hide away some
    // implementation details
    static const long pow = Impl_Next_POW2<n, 1, false>::pow;
};

Gcc can easily optimize that away: if you compile with "g++ foo.cpp -c -S -o /dev/stdout" you'll see the whole thing is replaced by a compile time constant. Let's make gcc's life a bit more complicated now:

template <int n, long curr_pow, bool stop>
struct Impl_Next_POW2 {
    static long get_pow() {
        static const bool is_smaller = n < curr_pow;
        return is_smaller ? curr_pow
                          : Impl_Next_POW2<n, curr_pow*2, is_smaller>::get_pow();
    }
};

template <int n, long curr_pow>
struct Impl_Next_POW2<n, curr_pow, true> {
    static long get_pow() {
        return curr_pow;
    }
};

template <int n>
struct Next_POW2 {
    static long get_pow() {
        return Impl_Next_POW2<n, 1, false>::get_pow();
    }
};

Same code, but instead of using plain static values we wrap everything in a method.
Compile with "g++ foo.cpp -c -S -fverbose-asm -o /dev/stdout | c++filt" and you'll see something like this now:

main:
    call Next_POW2<17>::get_pow()
Next_POW2<17>::get_pow():
    call Impl_Next_POW2<17, 1l, false>::get_pow()
Impl_Next_POW2<17, 1l, false>::get_pow():
    call Impl_Next_POW2<17, 2l, false>::get_pow()
Impl_Next_POW2<17, 2l, false>::get_pow():
    call Impl_Next_POW2<17, 4l, false>::get_pow()
Impl_Next_POW2<17, 4l, false>::get_pow():
    call Impl_Next_POW2<17, 8l, false>::get_pow()
Impl_Next_POW2<17, 8l, false>::get_pow():
    call Impl_Next_POW2<17, 16l, false>::get_pow()
Impl_Next_POW2<17, 16l, false>::get_pow():
    call Impl_Next_POW2<17, 32l, false>::get_pow()
Impl_Next_POW2<17, 32l, false>::get_pow():
    movl $32, %eax    #, D.2171

What went wrong? It's very clear to us that the whole thing is just a chain of calls which could be replaced by the last one; however, that information is now only available if you "inspect" the body of each function, and this is something the template instantiator (at least in gcc) can't do. Luckily you just need to enable optimizations, -O1 is enough, to have gcc output the reduced version again. Keep it in mind for the next time you're optimizing your code with template metaprogramming: sometimes the template expander needs some help from the optimizer too.

A C++ template device to obtain an underlying type
Posted: October 15, 2013 Filed under: C++, C++0x, Templates Leave a comment

What happens when you need to get the underlying data type of a pointer or reference? You can write some crazy metaprogram to do it for you.
Like this:

template <typename T> struct get_real_type { typedef T type; };
template <typename T> struct get_real_type<T*> { typedef T type; };
template <typename T> struct get_real_type<T&> { typedef T type; };

template <class T>
int foo() {
    return get_real_type<T>::type::N;
}

struct Bar {
    static const int N = 24;
};

#include <iostream>
using namespace std;

int main() {
    cout << foo<Bar*>() << endl;
    cout << foo<Bar&>() << endl;
    cout << foo<Bar>() << endl;
}

Incidentally, this is also the basis for the implementation of std::remove_reference. Actually, you'd be better off using std::remove_reference, for your own sanity.

Useless code: a template device to calculate e
Posted: June 27, 2013 Filed under: C++, Templates Leave a comment

Recently I needed to flex my template metaprogramming muscles a bit, so I decided to go back and review an old article I wrote about it (C++11 made some parts of those articles obsolete, but I'm surprised at how well it's aged). To practice a bit I decided to tackle a problem I'm sure no one ever had before: defining a mathematical constant at compile time. This is what I ended up with; do you have a better version? It shouldn't be too hard.
template <long N, long D>
struct Frak {
    static const long Num = N;
    static const long Den = D;
};

template <class X, long N>
struct MultEscalar {
    typedef Frak<N*X::Num, N*X::Den> result;
};

template <class X1, class Y1>
struct IgualBase {
    typedef typename MultEscalar<X1, Y1::Den>::result X;
    typedef typename MultEscalar<Y1, X1::Den>::result Y;
};

template <class X, class Y>
struct Suma {
    typedef typename IgualBase<X, Y>::X X2;
    typedef typename IgualBase<X, Y>::Y Y2;
    typedef Frak<X2::Num + Y2::Num, X2::Den> result;
};

template <int N>
struct Fact {
    static const long value = N * Fact<N-1>::value;
};
template <>
struct Fact<0> {
    static const long value = 1;
};

template <int N>
struct E {
    // e = sum of 1/n! for n = 0..N
    typedef typename Suma< typename E<N-1>::result, Frak<1, Fact<N>::value> >::result result;
};
template <>
struct E<0> {
    typedef Frak<1, 1> result;
};

#include <iostream>
int main() {
    typedef E<8>::result X;
    std::cout << "e = " << (1.0 * X::Num / X::Den) << "\n";
    std::cout << "e = " << X::Num << "/" << X::Den << "\n";
    return 0;
}

Cool C++0X features XIII: auto and ranged for, cleaner loops FTW
Posted: November 29, 2012 Filed under: C++0x, Templates Leave a comment

Long time without updating this series. Last time we saw how the ugly

for (FooContainer::const_iterator i = foobar.begin(); i != foobar.end(); ++i)

could be transformed into the much cleaner

for (auto i = foobar.begin(); i != foobar.end(); ++i)

Yet we are not done; we can clean that up a lot more using range-for statements. Ranged for is basically syntactic sugar (no flamewar intended) for shorter for statements. It's nothing new and it's been part of many languages for many years already, so there will be no lore about the greatness of C++ innovations (flamewar intended), but it still is a nice improvement to have, considering how tedious it can be to write nested loops. This certainly looks much cleaner:

for (auto x : foobar)

This last for-statement, even though it looks good enough to print and hang on a wall, raises a lot of questions. What's the type of x? What if I want to change its value? Let's try to answer that.
The type of the iterator will be the same as the element type of the vector, so in this case x would be an int:

std::vector<int> foobar;
for (auto x : foobar) {
    std::cout << (x+2);
}

And now, what happens if you want to alter the contents of the list and not only display them? That's easy too, just declare x as an auto reference:

std::vector<int> foobar;
for (auto& x : foobar) {
    std::cout << (x+2);
}

This looks really nice but it won't really do anything, for two different reasons:

* Ranged fors won't work until g++ 4.6 is released
* The list is empty!

There are many ways to initialize that list, but we'll see how C++0X lets you do it in a new way next time.

stlfilt: read ugly tmpl errors
Posted: November 1, 2012 Filed under: C++, Grumpy, Templates Leave a comment

Luckily STLFilt can be quite a relief when dealing with this kind of error. Granted, it won't make a steaming pile of poo seem to be a nice poem, but if you have something like the dog in the picture, to use a metaphor, at least it would put a blanket on its face.

Cool C++0X features X: type inference with decltype
Posted: June 10, 2011 Filed under: C++0x, Templates Leave a comment

After creating a wrapper object in the last entries, we were left with three syntax changes to analyze:

- -> (delayed declaration)
- decltype
- auto

We already saw the first, and we'll be talking about the other two this time. This was the original wrapper function which led us here:

template <class... Args>
auto wrap(Args... a) -> decltype( do_something(a...) ) {
    std::cout << __PRETTY_FUNCTION__ << "\n";
    return do_something(a...);
}

Back on topic: decltype. This operator (yes, decltype is an operator) is a cousin of sizeof which will yield the type of an expression. Why do I say it's a cousin of sizeof? Because it's been in the compilers for a long time, only in disguise.
This is because you can't get the size of an expression without knowing its type, so even though its implementation has existed for a long time, only now is it available to the programmer. One of its interesting features is that the expression with which you call decltype won't be evaluated, so you can safely use a function call within a decltype, like this:

auto foo(int x) -> decltype( bar(x) ) {
    return bar(x);
}

Doing this with, say, a macro would get bar(x) evaluated twice, yet with decltype it will be evaluated only once. Any valid C++ expression can go within a decltype operator, so for example this is valid too:

template <typename A, typename B>
auto multiply(A x, B y) -> decltype( x*y ) {
    return x*y;
}

What's the type of A and B? What's the type of A*B? We don't care, the compiler will take care of that for us. Let's look again at that example, more closely: -> (delayed declaration) and decltype. Why bother creating a delayed type declaration at all and not just use the decltype in place of the auto? That's because of a scope problem, see this:

// Declare a template function receiving two types as param
template <typename A, typename B>
// If we are declaring a multiplication operation, what's the return type of A*B?
// We can't multiply classes, and we don't know any instances of them
auto multiply(A x, B y)
// Luckily, the method signature now defines both parameters, meaning
// we don't need to expressly know the type of A*B, we just evaluate
// x*y and use whatever type that yields
-> decltype( x*y ) {
    return x*y;
}

As you can see, decltype can be a very powerful tool when the return type of a function is not known to the programmer while writing the code, but you can use it to declare any type, anywhere, if you are too lazy to type. If you, for example, are very bad at math and don't remember that the integers are closed under multiplication, you could write this:

int x = 2;
int y = 3;
decltype(x*y) z = x*y;

Yes, you can use it as VB's dim!
(kidding, just kidding, please don't hit me). Even though this works and is perfectly legal, auto is a better option for this. We'll see that in the next entry.

Cool C++0X features IX: delayed type declaration
Posted: June 7, 2011 Filed under: C++0x, Templates 1 Comment

#include <iostream>

void do_something() {
    std::cout << __PRETTY_FUNCTION__ << "\n";
}

void do_something(const char*) {
    std::cout << __PRETTY_FUNCTION__ << "\n";
}

int do_something(int) {
    std::cout << __PRETTY_FUNCTION__ << "\n";
    return 123;
}

template <class... Args>
auto wrap(Args... a) -> decltype( do_something(a...) ) {
    std::cout << __PRETTY_FUNCTION__ << "\n";
    return do_something(a...);
}

int main() {
    wrap();
    wrap("nice");
    int x = wrap(42);
    std::cout << x << "\n";
    return 0;
}
https://monoinfinito.wordpress.com/category/programming/c/templates/
daemontools is a collection of tools for managing UNIX services. It can be used to automatically start and restart processes on a Unix system. The instructions for running services under daemontools are on the daemontools website. I'm assuming you have daemontools set up already.

Cherrypy is a python library for making website applications. There are many ways to deploy cherrypy apps, but I will describe how to deploy cherrypy apps with daemontools.

Firstly, you do not need the cherryd tool to use cherrypy with daemontools. Daemontools does the daemonising for you. But cherryd can be used if you want, since it provides some nice options (just don't use the -d daemonize flag). You may want to use cherryd with daemontools if you'd like to use FastCGI or SCGI. I won't cover cherryd any further in this post.

I'll now show you how to set up daemontools for an example cherrypy application. Here is a normal helloworld cherrypy app, which I'll put in the file exampleapp.py:

import cherrypy

class HelloWorld(object):
    def index(self):
        return "Hello World!"
    index.exposed = True

cherrypy.quickstart(HelloWorld())

This runs on 127.0.0.1 on port 8080 by default. It also runs in development mode (which you should not deploy with, but only use for testing).

Now create a 'theservicedir' directory somewhere. Also create the following directories: 'theservicedir/log' and 'theservicedir/log/main'.

Then create a 'theservicedir/run' file:

#!/bin/sh
exec setuidgid rene /usr/bin/python /home/rene/cherrypydaemontools/exampleapp.py

Note that I specified full paths to python and to your python script. Also see that I got the app to run with the setuidgid program as the user 'rene'.

Then create a 'theservicedir/log/run' file for logging:

#!/bin/sh
exec setuidgid rene multilog t ./main

Now to start it up we symlink the directory into the service directory.
sudo ln -s /home/rene/cherrypydaemontools/theservicedir /service/theservicedir

To stop it: svc -d /service/theservicedir
To restart it: svc -t /service/theservicedir
To start it: svc -u /service/theservicedir

Your app should also start up the next time your server reboots. Happy cherrypy and daemontools usage. Please leave any related tips or fixes to these instructions in the comments.
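To see why no separate daemonizing step is needed, here's a tiny stdlib-only sketch (my own illustration, not daemontools itself) of what the supervise loop conceptually does: run the service's command and start it again each time it exits:

```python
import subprocess
import time

def supervise(cmd, max_restarts=3, delay=0.0):
    """Crude model of daemontools' supervise: run cmd in the foreground and
    restart it whenever it exits (bounded here so the demo terminates)."""
    exit_codes = []
    for _ in range(max_restarts):
        proc = subprocess.Popen(cmd)
        exit_codes.append(proc.wait())
        time.sleep(delay)  # the real supervise paces restarts (about 1/sec)
    return exit_codes

# /bin/true exits immediately with status 0 and gets restarted each time.
codes = supervise(["true"], max_restarts=2)
```

This is why the run script must `exec` the server in the foreground: if the process daemonized itself, supervise would think it had exited and would keep spawning new copies.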
http://renesd.blogspot.com/2011/07/cherrypy-daemontools.html
I am currently working on a source code generation tool. To make sure that my changes do not introduce any new bugs, a diff between the output of the program before and after my changes would theoretically be a valuable tool. However, this turns out to be harder than one might think, because the tool outputs lines whose order does not matter (like import statements, function declarations, ...) in a semi-randomly ordered way. Because of this, the output of diff is cluttered with many changes that are in fact only lines moved to another position in the same file. Is there a way to make diff ignore these moves and only output the lines that have really been added or removed?

Would a difftool be able to separate valid moves from invalid ones? The order of instructions in code does matter, and cases where this is not true are limited (imports, declaration of functions and classes, etc.) – Joël Aug 4 '14 at 12:05
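One pragmatic approach (a sketch of my own, not an accepted answer from the thread): compare the two outputs as multisets of lines, so pure moves cancel out and only genuine additions and removals remain:

```python
from collections import Counter

def moved_insensitive_diff(old_text, new_text):
    """Return (added, removed) lines, ignoring pure reorderings.
    Counting lines (rather than using a set) still reports a line
    that now appears more or fewer times than before."""
    old_counts = Counter(old_text.splitlines())
    new_counts = Counter(new_text.splitlines())
    added = sorted((new_counts - old_counts).elements())
    removed = sorted((old_counts - new_counts).elements())
    return added, removed

old = "import os\nimport sys\ndef f(): pass\n"
new = "import sys\nimport os\ndef f(): pass\ndef g(): pass\n"
added, removed = moved_insensitive_diff(old, new)
# The os/sys swap is ignored; only the genuinely new line is reported.
```

The obvious trade-off is that this discards position information entirely, so it only makes sense for output where line order really is irrelevant, as in the generated import lists described above.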
https://superuser.com/questions/184969/how-to-ignore-moved-lines-in-a-diff
Hello. I have a problem and I would love it if I could get some insight on it. I have read the forums with many answers about the subject, but I'm still having problems so I'm writing here. Basically I was following a guide for creating a managed wrapper from a static library (LIB). I thought, static, dynamic, how different could it be? I have a C program that allows me to do some functions; for simplicity, I am now only using a simple addition function, but the real thing I ultimately want to do is far more complex. For now I just want this simple thing to work. I'm using MS Visual Studio 2010. I created a Blank Solution with the following three projects:

- AdditionDLL
- Wrapper
- MainCode

AdditionDLL is a simple C project that has only one function, which adds two numbers together. It is exported as a DLL. The code is as follows:

// AdditionClass.c
#include "AdditionClass.h"
#include <stdio.h>
#pragma once

double Add(double x, double y) {
    return x + y;
}

// AdditionClass.h
#pragma once
static double Add(double x, double y);

This compiles fine and the DLL is created. Now the Wrapper project is a managed C++/CLR application that is supposed to use the Add function from AdditionDLL.dll. So I created the project as C++/CLR, changed the properties to depend on the DLL, and adjusted the linker settings. Moreover, I have added AdditionDLL as a reference. Here is the content of the files:

// ClrWrapper.cpp
#include "stdafx.h"
#include "ClrWrapper.h"

using namespace ClrWrapper;

extern "C" __declspec(dllimport) double Add(double x, double y);

// ClrWrapper.h
using namespace System;

namespace ClrWrapper {
    public ref class AdditionClass {
    public:
        double Add(double x, double y);
    };
}

This doesn't compile. It gives me error LNK1107: invalid or corrupt file, cannot read at 0x2F0. I know I have a lot of mistakes because I'm trying to make things communicate which shouldn't be communicating. I'd appreciate any help. The last project is just the C# code to use the Wrapper.
I haven't worked on that yet because my wrapper still doesn't work.
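For reference, a common shape for this kind of DLL boundary looks like the sketch below (an assumption about what a fix could look like, not the thread's actual resolution; the ADDITION_API and ADDITION_EXPORTS names are made up for illustration). The two key points are exporting with __declspec(dllexport) when building the DLL, and avoiding `static` on the function, since `static` gives it internal linkage and hides it from other translation units:

```cpp
// Hypothetical AdditionClass.h: exports Add() when building the DLL,
// imports it when consuming the DLL, and compiles unchanged elsewhere.
#pragma once

#if defined(_WIN32)
  #if defined(ADDITION_EXPORTS)      // defined by the DLL project itself
    #define ADDITION_API __declspec(dllexport)
  #else
    #define ADDITION_API __declspec(dllimport)
  #endif
#else
  #define ADDITION_API               // no-op on non-Windows toolchains
#endif

// extern "C" keeps the name unmangled so the wrapper can link against it;
// note there is no `static`, which would hide the function from the linker.
extern "C" ADDITION_API double Add(double x, double y);

// The corresponding definition (normally in AdditionClass.c/.cpp):
extern "C" ADDITION_API double Add(double x, double y) { return x + y; }
```

With a header like this, the wrapper's `extern "C" __declspec(dllimport)` declaration would line up with what the DLL actually exports, and the wrapper project should link against the import library (.lib) generated alongside the DLL rather than the DLL file itself.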
https://www.daniweb.com/programming/software-development/threads/388174/calling-unmanaged-c-code-from-managed-c-wrapper
This is the sixth lesson in a series introducing 10-year-olds to programming through Minecraft. Learn more here.

All of us here speak English and probably some French as well; those are two languages that people use to communicate. Although technically computers only speak one language, binary, we can use many different programming languages to tell the computer what we want it to do.

How do we create an application?

- Pick a programming language
- Write some code
- Have the code compiled (translated) into instructions the computer can understand
- Run those instructions (the application)

Step 1: For working with Minecraft, we'll be using a language called Java.
Step 2: Source code is written in text files, except that instead of having a .txt extension, ours will have a .java extension. We're going to use an application called Eclipse to edit these files.
Step 3: Eclipse will also help us compile our code, using the Java compiler.

What does source code look like?

package mc.jb.first;

public class Application {
    public static void main(String[] args) {
        String myName = "Jedidja";
        int yearOfBirth = 1979; // wow I'm old

        /* Let's tell the computer what to display on screen */
        System.out.println("Hi, my name is " + myName);
        System.out.println("I was born in " + yearOfBirth);
    }
}

That is perhaps one of the simplest applications you can write :) And it introduces us to some important fundamentals: statements, variables, methods, classes, and packages.

Fundamentals

Variable: A name for a place in the computer's memory where you store information. In Java, variables can store different types of information (e.g. numbers or strings). For instance, myName is a variable that stores a string.

Statement: A line of source code ending in a semicolon. The semicolon tells the compiler that the line is complete. If we forget the semicolon at the end of the line, it will confuse the compiler and it won't be able to generate an application for us.
The following statement

String myName = "Jedidja";

creates a new variable called myName and assigns it a value of "Jedidja".

Method: A named group of one or more statements. main is a method in the above code.

Class: A named group of methods. Application is a class in the above code.

Package: A named group of classes. mc.jb.first is a package in the above code.

Comments: Comments are ignored by the compiler, but can be useful to other developers in understanding why something is happening. There are two ways to write comments in Java:

- single-line comments, which start with //
- single- and multi-line comments, which begin with /* and end with */
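Here is one more tiny example (my own, not from the lesson — the class and method names are invented) showing the same fundamentals, but with methods that return values instead of printing them (the package declaration is left out to keep it minimal):

```java
// Statements grouped into methods, and methods grouped into a class.
public class Fundamentals {
    // A method: a named group of statements.
    public static int ageIn(int yearOfBirth, int year) {
        int age = year - yearOfBirth; // a statement creating a variable
        return age;                   // a statement handing the value back
    }

    // Another method in the same class, working with a string.
    public static String greeting(String name) {
        return "Hi, my name is " + name;
    }
}
```

Calling Fundamentals.ageIn(1979, 2014) hands back the number 35 instead of printing it, which is how most methods in real programs communicate.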
http://www.jedidja.ca/fundamentals-part-one/
As you all know, we rolled out our first public release last week: kotlin-demo.jetbrains.com. And while you (7K+ unique visitors) are having fun with it, we keep working on the Kotlin compiler, IDE and standard library. In this post, I'll give an overview of our plans.

What you can play with today

Today you can already try many features of Kotlin. Among others, these include:

- Function literals (closures)
- Extension functions and properties
- Traits (code in interfaces)
- Declaration site/Use site variance
- First-class delegation
- Compilation of mixed Java/Kotlin code

See Kotlin Documentation for more details. Using things for real problems reveals limitations, inconsistencies and other downsides of the design, and this is good, because then we can fix them. The point is to find and fix virtually everything there is before we release the 1.0 version. After the release, we won't be able to introduce backwards-incompatible changes, so fixing the language will be difficult. So, please, go try them, and give us your feedback in the comments below or in the issue tracker.

What's keeping us busy

Currently we are stabilizing the existing features and working on the IDE and language infrastructure (building etc). The hottest topics of this month are:

- Modules: module = unit of compilation and dependency management;
- Ant and Maven integrations: use your favorite build infrastructure;
- Standard Library: utility functions for JDK collections, IO etc;
- JavaScript back-end: still a very early prototype, but it's improving.

ToDo

When playing with Kotlin it's handy to know what's not supported yet. The list is long, and some of these features may even wait for 2.0:

- Visibility checks: it's a pity we don't have those private, public etc. yet;
- Local functions: a function inside a function can be a very handy thing to have;
- Labeled tuples (aka records): to return many things from a function;
- KotlinDoc: like JavaDoc, but Markdown-based;
- Annotations: customizable metadata to enable compiler extensions;
- Secondary constructors: sometimes you need more than one;
- Enum classes (Algebraic data types): like Java enums, but better;
- Pattern matching: handy conditions over object structures;
- Inline functions: zero-overhead closures for custom control structures;
- Labels: break and continue for outer loops;
- Type aliases: to shorten long generics etc;
- Self-types: never write awkward recursive generics any more;
- Dynamic type: interoperability with JavaScript and other dynamic languages;
- Eclipse plugin: Kotlin IDE != IntelliJ IDEA;
- LLVM back-end: compile Kotlin to native code…

Even without this stuff, you can have a lot of fun. Try out extension functions and closures, traits and string templates and much more. Solve the problems (we will be adding more over time).

Great work so far, and I have high hopes for Kotlin. However, it seems to me that you're trying to do too much, too fast. E.g. the last two items on the to-do list, dynamic types and an LLVM backend, can surely wait. While we're at it, the JavaScript backend can wait as well. Why not release a solid first version (with most of the items on the to-do list, as some of them are quite necessary, like KotlinDoc, annotations, visibility checks, secondary constructors, enums, pattern matching and more) for one platform – the JVM – and wait with the other platforms (JavaScript, native, whatever) until later, after you've built some following? Even for marketing reasons, it's best to have one user community first and only then expend effort to reach others. It's better to concentrate your efforts to gain a strong foothold first.

Thanks for your care. About the last items on the list: they are last on the list, and this is not arbitrary.
One thing we disagree upon is the JS back-end. We believe that it's important to have two back-ends at an early stage: it helps us design the compiler and other parts of the project in the right way.

Isn't a labelled tuple basically a class with only fields and no methods? What would the advantage be of labelled tuples over classes?

Hi Ian, the major advantage there is that there's no new class created, i.e. no PermGen pressure and no cross-CL issues. Plus, the syntax is concise, of course.

Thanks for the reply Andrey. I understand the efficiency issue; however, I worry a little that they are conceptually very close to classes, and so you would now have two very similar concepts, classes and labelled tuples, for programmers to choose between. It seems that a well-designed programming language should avoid this kind of overlap. Regarding the syntax, could the syntax for defining a new class be generalized so that it is as concise as the planned syntax for defining a new labelled tuple?

I don't think it is worth it to make class declarations so different from the traditional form. The primary point in tuples is avoiding creation of classes like Pair and Trinity, and labeled tuples just make this easier. One other thing is that tuples seem to be very important for pattern matching, and I see no reasonable way of replacing them with classes in this use case.

It seems to me tuples (or classes like Pair or Trinity) are just a code smell. You lose the name. I don't know about you, but "Point" says a lot more to me than "Pair"…

In my opinion an ideal language would be even more restrictive than Java, not more permissive:

– everything is private by default
– everything is final by default
– no primitive types
– no public fields
– etc.

In addition, an ideal language would fix Java problems:

– add runtime generics info (at instance level, not only at class level)
– nicer visibility options – e.g. module visibility, so we can have classes in different packages and still not be public (something like OSGi or the Java super-modules JSR proposal)
– etc.

Unfortunately, Kotlin is going the way of all the other languages. For example, supporting first-class functions will only lead to unreadable code, because you can't give a name to an anonymous piece of code (well, technically you can assign it to a variable, but people won't). What Java lacks is a more concise syntax for anonymous classes, not full-blown closures.

Fortunately, Kotlin addresses many of your requirements:

– everything is final by default: Yes!
– no primitive types: Yes!
– no public fields: Yes!
– nicer visibility options – e.g. module visibility, so we can have classes in different packages and still not be public (something like OSGi or the Java super-modules JSR proposal): Yes!

I have a couple of questions: What problems does this solve? This seems a little controversial to me. You say "anonymous functions are bad, because they are anonymous", and then "we need better anonymous classes". What am I missing?

Is the demo currently the exclusive means to try out Kotlin? As a UX developer recently disappointed by developments in Flex, I'm glad to hear JS is a priority. The front end sorely needs a Scala-like language, and Fantom doesn't quite quench it (Dart doesn't even aspire). Could you provide more detail about the status of the JS back end? I suspect this community would be forgiving of fluctuating APIs given how fast our toolchains evolve. PS: Amazing work! You guys never fail to impress.

The JS back-end is currently a very early prototype. We are working on making it better. The current priorities include making some practical examples work; this implies supporting some basic standard APIs.

Looking at the first example in the web demo, I take it we're going to have to use safe referencing for a lot of the stuff in the standard Java libraries.

We are working on this.
First of all, nobody has to say "System.out?.println()" in Kotlin anymore; it just hasn't made it to the Web Demo yet. Plus, we will be smarter about many things that are actually non-null in Java.

Good news then… I'll keep an eye out for the changes.

This could be achieved by manipulating the strategy for resolving extension functions and member functions. If extension functions have priority, then if we wanted to indicate that Object.getClass always returns a non-null Class, we could just define a getClass extension function that calls Object.getClass and casts away the nullability. If that function was also inline, then the optimizer would probably just throw it away. Of course, there would need to be a syntax for specifically targeting the object method instead of the extension function.

Extension functions should not have priority over members, because this may lead to very unpleasant bugs when someone adds an extension function to the package foo I import as "foo.*", and I suddenly start calling wrong code. (See the comment below for the outline of our solution.)

It's always possible for someone to add something to a package or class that breaks existing code. If Kotlin prefers class members to extension functions, then someone could add a new member to a class with the same name as one of my extension functions, and that would also lead to errors. So the question is really whether extension functions should be considered as overrides or as fall-backs. The equivalent Scala feature works as a fall-back (Scala won't look for implicit conversions if the method exists in the class), so maybe that's the best model.

I don't agree here: only the owner of the class can add something to it, whereas anyone can add an extension method.

Do you have an internal list of Java methods which cannot return null, or does the compiler actually look into the called Java/bytecode?
If I want to call my own Java code, can I add annotations to the Java code which tell the compiler that a certain function never returns null?

Currently we simply keep correct signatures for the most popular Java methods. This is a temporary stub solution. We'll switch to a system for externally annotating arbitrary code, so that you'll be able to annotate your favorite libraries. Plus, we will probably do a lot of the annotation work automatically for you (see the "Infer nullity" action in IntelliJ IDEA).

Are you going to use the external annotation format from here, or will you develop a new notation? (I'm currently searching for a good notation myself.)

The JSR-308 annotation format is not too bad. We haven't decided on the format yet, but it would make sense to have it aligned with Java's.

KotlinDoc (with Markdown) looks good. I'm impressed by the language and have been following it ever since it was announced. When can I get to see a Kotlin compiler? I don't see it in the list. Personally, I see the priority as: 1) a standard library, 2) the compiler, 3) Maven and Ant tools; and the language features can just keep coming.

Thanks for your interest in Kotlin! The compiler exists already, of course. It compiles your programs here. We are stabilizing it now so that you get a smoother experience when it's published.

Any news about a release date?

We will open the compiler/IDE pretty soon.

Does Kotlin have normal fields (w/o getter or setter) which are not properties? I could not find an example. I believe there is an overemphasis on the properties thing. The convoluted syntax for getter/setter looks like overkill. It's true that the EE guys use a lot of getters/setters, but not everyone uses them. Is there a way to see the generated Java code? It may help users solve bugs/perf issues if something goes wrong.

Kotlin does not allow public or protected fields w/o accessors. This is a common convention that everyone follows, and there is a rather good reason for it: binary compatibility.
If all you need is something that works like a field, there's no need for the "convoluted" getter/setter syntax; you can simply say:

class A() {
    var a : Int = 0
}

And get the same behavior, but in a binary-compatible way. Regarding the binary code: thanks for your suggestion, we'll add it to our WebDemo ToDo.

Why are you assigning zero to an Int? Oh, you're not, you're assigning it to "a". But that's what it reads like… If you write Int a, then somebody may ask "why are you applying 'Int' to 'a', is 'Int' a function?". But I'd like to ask Andrey, why are you using a : Int instead of a: Int?

Good question. The answer is "bad habit from Pascal"… We should probably reconsider this.

I agree; I find function definitions easier to read with the Scala style a: Int than the Pascal style a : Int.

If all I need is a private mutable field (no accessors), is it possible? Don't you think the getters/setters add to the compiled binary size?

If you declare a private property, no accessors will be generated, because there are no binary-compatibility issues in this case.

Thanks for creating such a great language. How do you consider concurrency programming? Is there a difference from concurrency programming in Scala? Do we still need "synchronized" and "thread" keywords?

Whether you need "synchronized" sections or not depends on the kind of concurrency you would like to have. You surely can use it in Kotlin, but you don't have to. Also, you can use Akka in Kotlin, as well as any Java library. Whether we are going to provide some library that would be unique to Kotlin is subject to discussion. On top of this, we are planning to support immutable classes, so that you can guarantee thread-safety of your data.

Has anyone tried Akka with Kotlin? I actually tried to port some simple Java code to Kotlin.
This is the code snippet:

val system = ActorSystem.create("mysysname")
system?.actorOf(Props(object: UntypedActorFactory {
    override fun create(): UntypedActor = HelloActor()
}))?.tell("start")

It fails at run time with this error, which I don't quite understand:

Exception in thread "main" java.lang.ClassCastException: akka.actor.ActorSystemImpl cannot be cast to akka.actor.UntypedActorContext

I tried posting on Kotlin's forum, but unfortunately the 3 Kotlin forums are all read-only even after I log in.

This error should be fixed now; please take a fresh nightly build from here (don't forget to uninstall the plugin before installing the new one). If it does not fix the error, please tell me. Regarding the forums, do you mean those at?

Thanks Andrey for the reply, and the Kotlin team for fixing it. I updated the plugin. The code got past the original problem but ran into another issue that appears to have something to do with Akka failing to create actors. Since the Java version works, I'm not sure if it's a Kotlin problem or a problem with porting to Kotlin. If you are interested, please send me an email and I can send you the Java and Kotlin source code.

Yes, that's the forum link. I go to new, discussion, browse for a location, search for kotlin and see 3 items; all three are grayed out, not clickable. BTW, is there a way to send update emails if the comment gets a reply?
I'm glad I checked back, otherwise I wouldn't know there was a reply to my comment.

The forum should be fixed now.

Below is the error:

[ERROR] [02/19/2012 00:50:07.891] [mysysname-akka.actor.default-dispatcher2] [akka://mysysname/user/$a] error while creating actor
java.lang.AbstractMethodError: namespace$2.create()Ljava/lang/Object;
    at akka.actor.Props$$anonfun$$init$$1.apply(Props.scala:122)
    at akka.actor.Props$$anonfun$$init$$1.apply(Props.scala:122)
    at akka.actor.ActorCell.newActor(ActorCell.scala:343)
    at akka.actor.ActorCell.create$1(ActorCell.scala:360)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:450)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:195)
    at akka.dispatch.Mailbox.run(Mailbox.scala:164)

As I don't have your email address, could you attach the source to an issue in our tracker? Thanks.

I thought you could see the email I used when I posted the comments. Anyway, I just filed KT-1345 with the link to the Scala code, and the Java and Kotlin source code in the description. Thanks.

Exciting times ahead for Kotlin. Personally, I'm most interested in the JS back-end and the prospect of an LLVM back-end. On the latter: are there any concrete plans on how to go about this yet? E.g. will you guys try to leverage LLVM VMKit? That could be a big win. Also, given Kotlin had a JS and LLVM backend, that would open up a lot more opportunities for it to shine. Cross-platform game development could be one area.

We are investigating a possibility of re-using Clang to integrate with existing C/C++/Objective-C libraries, but this is very open-ended. We might use VMKit, too.
https://blog.jetbrains.com/kotlin/2012/01/the-road-ahead/
The Label widget is a standard Tkinter widget used to display a text or image on the screen. The label can only display text in a single font, but the text may span more than one line. In addition, one of the characters can be underlined, for example to mark a keyboard shortcut.

When to use the Label Widget

Labels are used to display texts and images. The label widget uses double buffering, so you can update the contents at any time, without annoying flicker. To display data that the user can manipulate in place, it's probably easier to use the Canvas widget.

Patterns

To use a label, you just have to specify what to display in it (this can be text, a bitmap, or an image):

from Tkinter import *

master = Tk()

w = Label(master, text="Hello, world!")
w.pack()

mainloop()

If you don't specify a size, the label is made just large enough to hold its contents. You can also use the height and width options to explicitly set the size. If you display text in the label, these options define the size of the label in text units. If you display bitmaps or images instead, they define the size in pixels (or other screen units). See the Button description for an example of how to specify the size in pixels for text labels as well.

You can specify which color to use for the label with the foreground (or fg) and background (or bg) options. You can also choose which font to use in the label (the following example uses Tk 8.0 font descriptors). Use colors and fonts sparingly; unless you have a good reason to do otherwise, you should stick to the default values.

w = Label(master, text="Rouge", fg="red")

w = Label(master, text="Helvetica", font=("Helvetica", 16))

Labels can display multiple lines of text. You can use newlines or use the wraplength option to make the label wrap text by itself. When wrapping text, you might wish to use the anchor and justify options to make things look exactly as you wish.
An example:

w = Label(master, text=longtext, anchor=W, justify=LEFT)

You can associate a Tkinter variable with a label. When the contents of the variable change, the label is automatically updated:

v = StringVar()
Label(master, textvariable=v).pack()
v.set("New Text!")

You can use the label to display PhotoImage and BitmapImage objects. When doing this, make sure you keep a reference to the image object, to prevent it from being garbage collected by Python's memory allocator. You can use a global variable or an instance attribute, or, easier, just add an attribute to the widget instance:

photo = PhotoImage(file="icon.gif")
w = Label(parent, image=photo)
w.photo = photo
w.pack()
http://effbot.org/tkinterbook/label.htm
Hello; I have need for a select() entry point in my driver, but my device is not using interrupts, so I'm not sure how to have the OS call my select() to let me poll the device. I think I must use a timer, with select_wait(), so the system will call my select() until my device becomes active. The trouble is I can not seem to get the timer to work. The entire system hangs _solid_ whenever it gets activated. Below is my select() code. I use wake_up_interruptible() as the function the timer will call in the future to just make this process runnable again, in lieu of calling it from an interrupt service routine.

A few specific questions:

1) My driver permits several processes to have the device open at once. Am I correct in assuming that if this general approach works I will need a separate timer_list and wait_queue for each open process instance?

2) In no examples do I ever see the wait_queue pointer ever _set_ to point at an actual wait_queue instance. Is this correct?

Any comments would be greatly appreciated. Remember, the only real goal here is some way to get the OS to call us occasionally to let us poll the device, but the device is not using interrupts.

Thank you in advance;
Elwood Downey

static int
pc39_select (struct inode *inode, struct file *file, int sel_type,
             select_table *wait)
{
    static struct timer_list pc39_tl;
    static struct wait_queue *pc39_wq;

    switch (sel_type) {
    case SEL_EX:
        return (0);     /* never any exceptions */
    case SEL_IN:
        if (IBF())
            return (1);
        break;
    case SEL_OUT:
        if (TBE())
            return (1);
        break;
    }

    /* nothing ready -- set timer to try again later if necessary */
    if (wait) {
        init_timer (&pc39_tl);
        pc39_tl.expires = PC39_SELTO;
        pc39_tl.function = (void(*)(unsigned long))wake_up_interruptible;
        pc39_tl.data = (unsigned long) &pc39_wq;
        add_timer (&pc39_tl);
        select_wait (&pc39_wq, wait);
    }
    return (0);
}
http://www.tldp.org/LDP/khg/HyperNews/get/devices/basics/1.html
"Phillip J. Eby" wrote:
> […]

Your logic is sound, and I'm glad you spotted the recursion problem, but I would like to propose a different solution.

> […]

The new security machinery actually provides a different way to solve this problem. Since we now have an execution stack, we can limit that stack, causing an exception to be thrown rather than letting it overflow the C stack. It would actually be more reliable than the current mechanism, which depends on DTML namespaces.

Now let's do an analysis of this situation. If the LoginManager is created by the superuser and is used as the root authenticator, it stays unowned, so the ownership problem should not occur. If that is not the current behavior then it needs to be corrected. If untrusted users are allowed to add LoginManagers, the methods called to get ownership information should be owned by them. This is to avoid the server-side trojan horse.

I used to think that the new security model was a break from the *nix security model. But consider this: sysadmins don't go around executing arbitrary users' scripts, do they? Of course not. The only parallel to what Zope provides is CGI scripts. Sysadmins are able to visit arbitrary CGI scripts without worry of a server-destined trojan (although a client-destined trojan is another matter) because the scripts are not given extra privileges. In fact, the suexec add-on makes Apache behave almost exactly the way Jim is getting Zope to behave.

Now, the problem still remains that ownership information cannot be retrieved if the method that gets ownership is in ZODB, is owned by a user defined in that user folder, and has to call another method. Note that this also applies to Generic User Folder. With the correction in the execution stack, instead of killing Zope, an attempt to authenticate that way will result in a controlled stack overflow. Unless you can come up with another option, we can either break the security model or slightly reduce the capabilities of LoginManager.
What would you do if you were in my position?

Shane
https://www.mail-archive.com/zope-dev@zope.org/msg01043.html
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.stats import norm
from sklearn.manifold import TSNE

%matplotlib inline

Most introductory tutorials on Gaussian processes start with a nose-punch of fancy statements and fancy terms. Is this really supposed to make sense to the GP beginner?

The following is the introductory tutorial on GPs that I wish I'd had myself. The goal is pedagogy — not the waving of fancy words. By the end of this tutorial, you should understand what a Gaussian process is and how its key properties follow from those of the Gaussian distribution itself.

Let's get started.

The Gaussian distribution, a.k.a. the Normal distribution, can be thought of as a Python object which:

- Is parameterized by mu (the mean) and var (the variance).
- Has a method density, which accepts a float value x, and returns a float proportional to the probability of this Gaussian having produced x.

class Gaussian:

    def __init__(self, mu, var):
        self.mu = mu
        self.var = var
        self.stddev = np.sqrt(var)  # the standard deviation is the square-root of the variance

    def density(self, x):
        """
        NB: Understanding the two bullet points above is more important than understanding
        the following line. That said, it's just the second bullet in code, via SciPy.
        """
        return norm(loc=self.mu, scale=self.stddev).pdf(x)

gaussian = Gaussian(mu=.123, var=.456)

x = np.linspace(-5, 5, 100)
y = [gaussian.density(xx) for xx in x]

plt.figure(figsize=(14, 9))
plt.plot(x, y)
_ = plt.title('`Gaussian(mu=.123, var=.456)` Density')

If we increase the variance var, what happens?

bigger_number = 3.45

gaussian = Gaussian(mu=.123, var=bigger_number)

x = np.linspace(-5, 5, 100)
y = [gaussian.density(xx) for xx in x]

plt.figure(figsize=(14, 9))
plt.plot(x, y)
_ = plt.title('`Gaussian(mu=.123, var=bigger_number)` Density')
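As a quick cross-check (my own sketch, not a cell from the original tutorial), the `density` method above should agree with the Gaussian pdf written out by hand, $\frac{1}{\sqrt{2\pi\,\text{var}}}\exp\big(-\frac{(x-\mu)^2}{2\,\text{var}}\big)$:

```python
import numpy as np
from scipy.stats import norm

def manual_density(x, mu, var):
    # The closed-form Gaussian pdf, written out by hand
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Compare against SciPy (which `Gaussian.density` delegates to) at a few points
mu, var = .123, .456
for x in (-1.0, .123, 2.5):
    assert np.isclose(manual_density(x, mu, var), norm(loc=mu, scale=np.sqrt(var)).pdf(x))
```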
def sample(self):
    return norm(loc=self.mu, scale=self.stddev).rvs()

# Add method to class
Gaussian.sample = sample

# Instantiate new Gaussian
gaussian = Gaussian(mu=.123, var=.456)

# Draw samples
samples = [gaussian.sample() for _ in range(500)]

# Plot
plt.figure(figsize=(14, 9))
pd.Series(samples).hist(grid=False, bins=15)
_ = plt.title('Histogram of 500 samples from `Gaussian(mu=.123, var=.456)`')

This looks similar to the true Gaussian(mu=.123, var=.456) density we plotted above. The more random samples we draw (then plot), the closer this histogram will approximate (look similar to) the true density.

Now, we'll start to move a bit faster. We just drew samples from a 1-dimensional Gaussian, i.e. the sample itself was a single float. The parameter mu dictated the most-likely value for the sample to assume, and the variance var dictated how much these sample-values vary (hence the name variance).

>>> gaussian.sample()
-0.5743030051553177
>>> gaussian.sample()
0.06160509014194515
>>> gaussian.sample()
1.050830033400354

In 2D, each sample will be a list of two numbers. mu will dictate the most-likely pair of values for the sample to assume, and the second parameter (yet unnamed) will dictate:

1. How much the first elements vary
2. How much the second elements vary
3. How much, and in which direction, the two vary together

The second parameter is the covariance matrix, cov. The elements on the diagonal give Items 1 and 2. The elements off the diagonal give Item 3. The covariance matrix is always square, and the values on its diagonal are always non-negative.

Given a 2D mu and 2x2 cov, we can draw samples from the 2D Gaussian. Here, we'll use NumPy. Inline, we comment on the expected shape of the samples.

plt.figure(figsize=(14, 9))
plt.ylim(-13, 13)
plt.xlim(-13, 13)

def plot_2d_draws(mu, cov, color, n_draws=100):
    x, y = zip(*[np.random.multivariate_normal(mu, cov) for _ in range(n_draws)])
    plt.scatter(x, y, c=color)

"""
The purple dots should center around `(x, y) = (0, 0)`.
`np.diag([1, 1])` gives the covariance matrix `[[1, 0], [0, 1]]`: `x`-values have a variance of `var=1`; `y`-values have `var=1`; these values do not covary with one another (e.g. if `x` is larger than its mean, the corresponding `y` has 0 tendency to "follow suit," i.e. trend larger than its mean as well).
"""
plot_2d_draws(
    mu=np.array([0, 0]),
    cov=np.diag([1, 1]),
    color='purple'
)

"""
The blue dots should center around `(x, y) = (1, 3)`. Same story with the covariance.
"""
plot_2d_draws(
    mu=np.array([1, 3]),
    cov=np.diag([1, 1]),
    color='orange'
)

"""
Here, the values along the diagonal of the covariance matrix are much larger: the cloud of green points should be much more disperse. There is no off-diagonal covariance (`x` and `y` values do not vary — above or below their respective means — *together*).
"""
plot_2d_draws(
    mu=np.array([8, 8]),
    cov=np.diag([4, 4]),
    color='green'
)

"""
The covariance matrix has off-diagonal values of -2. This means that if `x` trends above its mean, `y` will tend to vary *twice as much, but in the opposite direction, i.e. below its mean.*
"""
plot_2d_draws(
    mu=np.array([-5, -2]),
    cov=np.array([[1, -2], [-2, 3]]),
    color='gray'
)

"""
Covariances of 4.
"""
plot_2d_draws(
    mu=np.array([6, -5]),
    cov=np.array([[2, 4], [4, 2]]),
    color='blue'
)

_ = plt.title('Draws from 2D-Gaussians with Varying (mu, cov)')
plt.grid(True)

/usr/local/lib/python3.6/site-packages/ipykernel_launcher.py:7: RuntimeWarning: covariance is not positive-semidefinite.
  import sys

(The warning refers to the Gaussian for the blue dots: its covariance matrix `[[2, 4], [4, 2]]` is not positive-semidefinite, so it is not a valid covariance.)

def make_features(x):
    return np.array([3 * np.cos(x), np.abs(x - np.abs(x - 3))])

Given the feature matrix this function produces, we'll multiply this matrix by our 2D vector of weights $\mu_w$. You can think of the latter as passing a batch of data through a linear model (where our data have features $x = [x_1, x_2]$, and our parameters are $w = [w_1, w_2]$). Finally, we'll take draws from this $\text{Normal}(A\mu_w,\ A^T\Sigma_w A)$.
This will give us tuples of the form (x, y), where `x` is an input and `y` is the sampled function value at `x`:

$$ w\phi(X)^T \sim \text{Normal}(\phi(X)^T\mu_w,\ \phi(X)^T\Sigma_w \phi(X)) $$

In addition, let's set $\mu_w =$ np.array([0, 0]) and $\Sigma_w =$ np.diag([1, 2]). Finally, we'll take draws, then plot.

# x-values
x = np.linspace(-10, 10, 200)

# Make features, as before
phi_x = np.array([3 * np.cos(x), np.abs(x - np.abs(x - 3))])  # phi_x.shape: (2, len(x))

# Params of distribution over weights
mu_w = np.array([0, 0])
cov_w = np.diag([1, 2])

# Params of distribution over linear map (lm)
mu_lm = phi_x.T @ mu_w
cov_lm = phi_x.T @ cov_w @ phi_x

A draw from this distribution is a "function evaluation": a vector of floats $y$, where each $y_i$ corresponds to its initial input $x_i$. For example, given a vector x = np.array([1, 2, 3]) and a function lambda x: x**2, an evaluation of this function gives y = np.array([1, 4, 9]). We now have tuples [(1, 1), (2, 4), (3, 9)], which we can plot. This gives one "function evaluation." Above, we did this 17 times then plotted the 200 resulting (x, y) tuples (as our input was a 200D vector $X$). This gave 17 curves. The curves are similar because of the given mean function mu_lm; they are different because of the given covariance matrix cov_lm.

Let's try some different "features" for our x-values then plot the same thing.

# x-values
x = np.linspace(-10, 10, 200)

# Make different, though still arbitrary, features
phi_x = np.array([x ** (1 + i) for i in range(2)])

# Params of distribution over weights
mu_w = np.array([0, 0])
cov_w = np.diag([1, 2])

The features we choose give a "language" with which we can express a relationship between $x$ and $y$. Some features are more expressive than others; some restrict us entirely from expressing certain relationships. For further illustration, let's employ step functions as features and see what happens.
# x-values
x = np.linspace(-10, 10, 200)

# Make features, as before
phi_x = np.array([x < i for i in range(10)])

# Params of distribution over weights
mu_w = np.zeros(10)
cov_w = np.eye(10)

Let's revisit the 2D Gaussians plotted above. They took the form:

$$ (x, y) \sim \mathcal{N}(\mu, \Sigma) $$

Said differently:

$$ P(x, y) = \mathcal{N}(\mu, \Sigma) $$

And now a bit more rigorously:

$$ P(x, y) = \mathcal{N}\bigg([\mu_x, \mu_y], \begin{bmatrix} \Sigma_x & \Sigma_{xy}\\ \Sigma_{xy}^T & \Sigma_y\\ \end{bmatrix}\bigg) $$

NB: In this case, all 4 "Sigmas" in the 2x2 covariance matrix are scalars. If our covariance were bigger, say 31x31, but we still wrote it as we did above, then these 4 "Sigmas" would be matrices (with an aggregate size totalling 31x31).

For example, we can look at the distribution of `y`-values in draws where `x > 1`:

y_values = []

mu, cov = np.array([0, 0]), np.diag([1, 1])

while len(y_values) < 345:
    x, y = np.random.multivariate_normal(mu, cov)
    if x > 1:
        y_values.append(y)

plt.figure(figsize=(14, 9))
pd.Series(y_values).hist(grid=False, bins=15)
_ = plt.title('Histogram of y-values, when x > 1')

Marginalizing `y` out of the joint leaves us with a Gaussian over `x` alone:

$$ \begin{align*} \int P(x, y)dy &= \int \mathcal{N}\bigg([\mu_x, \mu_y], \begin{bmatrix} \Sigma_x & \Sigma_{xy}\\ \Sigma_{xy}^T & \Sigma_y\\ \end{bmatrix}\bigg) dy\\ \\ &= \mathcal{N}(\mu_x, \Sigma_x) \end{align*} $$

Suppose we had a 3D Gaussian, and wanted to compute the marginal over the first 2 elements:

# 3D Gaussian
mu = np.array([3, 5, 9])
cov = np.array([
    [11, 22, 33],
    [44, 55, 66],
    [77, 88, 99]
])

# Marginal over the first 2 elements
mu_marginal = np.array([3, 5])
cov_marginal = np.array([
    [11, 22],
    [44, 55]
])

# That's it.
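To make the marginalization rule concrete, here is a small empirical check (my own addition, not a cell from the original notebook): draw many samples from a 2D Gaussian, discard the `y`-coordinate, and compare the surviving `x`-values against $\mathcal{N}(\mu_x, \Sigma_x)$:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.6],
                [0.6, 1.0]])

draws = rng.multivariate_normal(mu, cov, size=200_000)
x_only = draws[:, 0]  # discarding `y` is, empirically, exactly marginalization

# The marginal over `x` should be N(mu_x, Sigma_x) = N(1.0, 2.0)
assert abs(x_only.mean() - 1.0) < 0.02
assert abs(x_only.var() - 2.0) < 0.05
```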
Finally, we compute the conditional Gaussian of interest, a result well-documented by mathematicians long ago:

$$ \begin{align*} P(y\vert x) &= \frac{ \mathcal{N}\bigg( [\mu_x, \mu_y], \begin{bmatrix} \Sigma_x & \Sigma_{xy}\\ \Sigma_{xy}^T & \Sigma_y\\ \end{bmatrix} \bigg) } {\mathcal{N}(\mu_x, \Sigma_x)}\\ \\ &= \mathcal{N}(\mu_y + \Sigma_{xy}\Sigma_x^{-1}(x - \mu_x), \Sigma_y - \Sigma_{xy}\Sigma_x^{-1}\Sigma_{xy}^T) \end{align*} $$

$x$ can be a matrix. From there, just plug stuff in.

Conditioning a >1D Gaussian on one (or more) of its elements yields another Gaussian. In other words, Gaussians are closed under conditioning.

We previously posited a distribution over some vector of weights, $w \sim \text{Normal}(\mu_w, \Sigma_w)$. In addition, we posited a distribution over the linear map of these weights onto some matrix $A = \phi(X)^T$:

$$ w\phi(X)^T \sim \text{Normal}(\phi(X)^T\mu_w,\ \phi(X)^T\Sigma_w \phi(X)) $$

Given some ground-truth realizations from this distribution $y$, i.e. ground-truth "function evaluations," we'd like to infer the weights $w$ most consistent with these values. In machine learning, we equivalently say that given a model and some observed data (X_train, y_train), we compute/train/infer/optimize the weights of said model (often via backpropagation).

Most precisely, our goal is to infer $P(w\vert y)$ (where $y$ are our observed function evaluations). To do this, we simply posit a joint distribution over both quantities:

$$ P(w, y) = \mathcal{N}\bigg( [\mu_w, \phi(X)^T\mu_w], \begin{bmatrix} \Sigma_w & \Sigma_{wy}\\ \Sigma_{wy}^T & \phi(X)^T\Sigma_w \phi(X)\\ \end{bmatrix} \bigg) $$

Then compute the conditional via the formula above:

$$ \begin{align*} P(w\vert y) &= \mathcal{N}(\mu_w + \Sigma_{wy}\Sigma_y^{-1}(y - \mu_y), \Sigma_w - \Sigma_{wy}\Sigma_y^{-1}\Sigma_{wy}^T) \end{align*} $$

This formula gives the posterior distribution over our weights given the model and observed data tuples (x, y).
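The conditioning formula above can be wrapped in a small helper and spot-checked on numbers that are easy to verify by hand. This is my own sketch; the variable names are assumptions, not part of the original notebook:

```python
import numpy as np

def condition_gaussian(mu_x, mu_y, cov_x, cov_y, cov_xy, x_obs):
    """Parameters of P(y | x = x_obs), per the conditional Gaussian formula."""
    cov_x_inv = np.linalg.inv(cov_x)
    mu_cond = mu_y + cov_xy @ cov_x_inv @ (x_obs - mu_x)
    cov_cond = cov_y - cov_xy @ cov_x_inv @ cov_xy.T
    return mu_cond, cov_cond

# A joint 2D Gaussian in which x and y positively covary
mu_x, mu_y = np.array([0.0]), np.array([0.0])
cov_x, cov_y = np.array([[1.0]]), np.array([[1.0]])
cov_xy = np.array([[0.8]])

mu_cond, cov_cond = condition_gaussian(mu_x, mu_y, cov_x, cov_y, cov_xy, x_obs=np.array([2.0]))

# Observing x = 2 drags the mean of y upward, and shrinks its variance
assert np.isclose(mu_cond[0], 1.6)       # 0 + 0.8 * (1/1) * (2 - 0)
assert np.isclose(cov_cond[0, 0], 0.36)  # 1 - 0.8 * (1/1) * 0.8
```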
Until now, we've assumed a 2D $w$, and therefore a $\phi(X)$ in $\mathbb{R}^2$ as well. Moving forward, let's work with weights and features in $\mathbb{R}^{20}$, which will give us a more expressive language with which to capture the true relationship between some quantity $x$ and its corresponding $y$. $\mathbb{R}^{20}$ is an arbitrary choice.

# The true function that maps `x` to `y`. This is what we are trying to recover with our mathematical model.
def true_function(x):
    return np.sin(x)**2 - np.abs(x - 3) + 7

# x-values
x_train = np.array([-5, -2.5, -1, 2, 4, 6])

# y-train
y_train = true_function(x_train)

# Params of distribution over weights
D = 20
mu_w = np.zeros(D)  # mu_w.shape: (D,)
cov_w = 1.1 * np.diag(np.ones(D))  # cov_w.shape: (D, D)

# A function to make some arbitrary features
def phi_func(x):
    return np.array([np.abs(x - d) for d in range(int(-D / 2), int(D / 2))])  # phi_x.shape: (D, len(x))

# A function that computes the parameters of the linear map distribution
def compute_linear_map_params(mu, cov, map_matrix):
    mu_lm = mu.T @ map_matrix
    cov_lm = map_matrix.T @ cov @ map_matrix
    return mu_lm, cov_lm

def compute_weights_posterior(mu_w, cov_w, phi_func, x_train, y_train):
    """
    NB: "Computing a posterior," given that that posterior is Gaussian, implies nothing
    more than computing the mean-vector and covariance matrix of this Gaussian.
    """
    # Featurize x_train
    phi_x = phi_func(x_train)

    # Params of prior distribution over function evals
    mu_y, cov_y = compute_linear_map_params(mu_w, cov_w, phi_x)

    # Params of posterior distribution over weights
    mu_w_post = mu_w + cov_w @ phi_x @ np.linalg.inv(cov_y) @ (y_train - mu_y)
    cov_w_post = cov_w - cov_w @ phi_x @ np.linalg.inv(cov_y) @ phi_x.T @ cov_w
    return mu_w_post, cov_w_post

# Compute weights posterior
mu_w_post, cov_w_post = compute_weights_posterior(mu_w, cov_w, phi_func, x_train, y_train)

As with our prior over our weights, we can equivalently draw samples from the posterior, then plot.
These samples will be 20D vectors; we reduce them to 2D for ease of visualization.

# Additional imports needed for this cell
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

# Draw samples
samples_prior = np.random.multivariate_normal(mu_w, cov_w, size=250)
samples_post = np.random.multivariate_normal(mu_w_post, cov_w_post, size=250)

# Reduce to 2D for ease of plotting
first_dim_prior, second_dim_prior = zip(*TSNE(n_components=2).fit_transform(samples_prior))
first_dim_post, second_dim_post = zip(*TSNE(n_components=2).fit_transform(samples_post))

# Plot prior, posterior draws
plt.figure(figsize=(14, 9))
plt.title('Draws from Prior, Posterior over Weights')
plt.legend(handles=[
    mpatches.Patch(color='orange', label='Prior'),
    mpatches.Patch(color='blue', label='Posterior'),
])
plt.scatter(first_dim_prior, second_dim_prior, color='orange')
plt.scatter(first_dim_post, second_dim_post, color='blue')

The posterior appears to have contracted a bit, and has probably maintained a similar mean. This type of change (read: a small one) is expected, as we've only conditioned on 6 ground-truth tuples.

We'd now like to sample new function evaluations given the updated distribution (i.e. posterior distribution) over weights. Previously, we generated these samples by centering a multivariate Gaussian on $\phi(X)^T\mu_{w}$, where $\mu_w$ was the mean of the prior distribution over weights. How do we do this with our posterior over weights instead? Well, Gaussians are closed under linear maps. So, we just follow the formula we used above. This time, instead of input vector $X$, we'll use a new input vector called $X_{*}$:

$$
\phi(X_*)^T w \sim \text{Normal}(\phi(X_*)^T\mu_{w\vert y},\ \phi(X_*)^T\Sigma_{w\vert y} \phi(X_*))
$$

where $\mu_{w\vert y}$ and $\Sigma_{w\vert y}$ are the mean and covariance of the posterior over weights.
x_test = np.linspace(-10, 10, 200)

def compute_gp_posterior(mu_w, cov_w, phi_func, x_train, y_train, x_test):
    mu_w_post, cov_w_post = compute_weights_posterior(mu_w, cov_w, phi_func, x_train, y_train)
    phi_x_test = phi_func(x_test)
    mu_y_post, cov_y_post = compute_linear_map_params(mu_w_post, cov_w_post, phi_x_test)
    return mu_y_post, cov_y_post

mu_y_post, cov_y_post = compute_gp_posterior(mu_w, cov_w, phi_func, x_train, y_train, x_test)

We can plot true function evaluations from this posterior as well, as we did with our prior.

def plot_gp_posterior(mu_y_post, cov_y_post, x_train, y_train, x_test, n_samples=0, ylim=(-3, 10)):
    plt.figure(figsize=(14, 9))
    plt.ylim(*ylim)
    plt.xlim(-10, 10)
    plt.title('Posterior Distribution over Function Evaluations')

    # Extract the variances, i.e. the diagonal, of our covariance matrix
    var_y_post = np.diag(cov_y_post)

    # Plot the error bars. To do this, we fill the space between
    # `(mu_y_post - var_y_post, mu_y_post + var_y_post)` for each `x`
    plt.fill_between(x_test, mu_y_post - var_y_post, mu_y_post + var_y_post, color='#23AEDB', alpha=.5)

    # Scatter-plot our original 6 `(x, y)` tuples
    plt.plot(x_train, y_train, 'ro', markersize=10)

    # Optionally plot actual function evaluations from this posterior
    if n_samples > 0:
        for _ in range(n_samples):
            y_pred = np.random.multivariate_normal(mu_y_post, cov_y_post)
            plt.plot(x_test, y_pred, alpha=.2)

plot_gp_posterior(mu_y_post, cov_y_post, x_train, y_train, x_test, n_samples=25)

The posterior distribution is nothing more than a distribution over function evaluations, 25 of which are shown above. The features we chose were arbitrary; below, we try a different, equally arbitrary set and recompute the posterior.

D = 20

# Params of distribution over weights
mu_w = np.zeros(D)                  # mu_w.shape: (D,)
cov_w = 1.1 * np.diag(np.ones(D))   # cov_w.shape: (D, D)

# Still arbitrary, i.e. a modeling choice!
def phi_func(x, D=D, a=.25):
    return np.array([a * np.cos(i * x) for i in range(int(-D / 2), int(D / 2))])  # phi_x.shape: (D, len(x))

mu_y_post, cov_y_post = compute_gp_posterior(mu_w, cov_w, phi_func, x_train, y_train, x_test)

plot_gp_posterior(mu_y_post, cov_y_post, x_train, y_train, x_test, n_samples=25)
https://nbviewer.jupyter.org/github/cavaunpeu/gaussian-processes/blob/master/gaussian-processes-part-1.ipynb
Published: 24 Nov 2011 By: Xianzhong Zhu

Download Sample Code

Lua is a world-famous scripting language: a lightweight, multi-paradigm language designed for embedding, with extensible semantics as a primary goal. A typical use of Lua is to serve as an embedded scripting tool in a large application developed in C, C++, Java, C#, etc. In this article, we will explore how to integrate Lua into C# applications, especially WPF games.

The test environments we'll utilize in the sample applications include:

1. Windows 7;
2. .NET 4.0;
3. Visual Studio 2010;
4. LuaForWindows (version 5.1.4-45 at), the well-known Lua integrated development package on Windows;
5. LuaInterface ().

Lua was created in 1993 in Brazil. As a small scripting language, Lua is designed to be embedded inside applications in order to provide flexible expansion and customization. The most famous applications based upon Lua are probably the online game "World of Warcraft" and the AngryBirds game released on the iOS platform. Lua scripts can easily be called from C/C++ code, and Lua can in turn call functions written in C/C++, which makes Lua widely used in applications. Lua can be used not only as an extension script but also as an ordinary configuration file format, in place of XML, .ini, and other formats, and it is easier to understand and maintain.

LuaInterface is a library for integration between the Lua language and the Microsoft .NET platform's Common Language Runtime (CLR). Lua scripts can use it to instantiate CLR objects, access properties, call methods, and even handle events with Lua functions. As of this writing, the latest version of LuaInterface is 2.0.3, compiled against a lower version of .NET.
To use LuaInterface with the latest .NET 4.0, we have to rebuild the source code project. To do this, we can download the source project from here by running the following command at the MS-DOS command line:

svn checkout

To successfully rebuild LuaInterface under .NET 4.0, you should retain the two files App.config under the project LuaInterface.net35 and App.config under the project LuaRunner. If you remove these two files, you are sure to meet errors during the build. The related reason and solution are given somewhere on MSDN; I forgot to note the link down, so if you meet such an error, please search MSDN or leave a message at the end of this article.

To use LuaInterface more securely, you are advised to use the assembly LuaInterface.dll and its companion DLL. The good news is that within the project LuaInterface.Test there is a file LuaTests.cs containing a lot of test examples (based on XUnit and Mock) related to using LuaInterface in the .NET CLR.

Now that the preconditions have been prepared, it is time to build some examples to illustrate how to combine LuaInterface with .NET projects. First off, create a new C# Console program and name it LuaForCSharpTest. So that readers can more clearly see the relations between the interesting pieces, I create a folder LuaInterfaceStuff which holds the two files LuaInterface.dll and lua51.dll. Now, add a reference to LuaInterface.dll in the sample project; the other DLL should be automatically copied to the output folder with it. As usual, let's add a reference to the namespace defined in the assembly:

Let's next introduce the related tiny samples one by one. To interact with Lua, we should first create an instance of the Lua virtual machine. We then write the following .NET code to access the global variables defined on the .NET side (you can of course also access those defined in Lua files, as shown later). There are only eight basic types defined in Lua, i.e.
nil, boolean, number, string, userdata, function, thread, and table. The table type is almost magical: with it you can build flexible and powerful data structures and achieve other fantastic features. Here, we defined some global variables, used the method NewTable to define a table, and used the related method GetTable to access it. As you may have guessed, all the global variables are held inside the property Globals of the Lua virtual machine. As for the method DoString, we'll discuss it in particular later on.

The running-time snapshot is shown in Figure 1.

In Lua, functions can be stored in variables, passed as arguments to other functions, and returned as results. With this definition, you can easily redefine a function to add new functionality, or simply erase a function to create a secure environment when running a piece of untrusted code. Moreover, Lua offers good support for functional programming, and functions play a key role in Lua's object-oriented programming. On the other hand, a table is similar to the associative array in C# and Java, but with more flexibility and efficiency. An associative array is an array that can be indexed not only with numbers, but also with strings or any other value of the language, except nil. By digging further, you will easily find that tables are virtually the only data structuring mechanism in Lua, with which you can represent ordinary arrays, symbol tables, sets, records, queues, and other data structures in a simple, uniform, and efficient way.

Detailing function and table further is beyond the scope of this short article. Let's see a simple example concerning accessing a Lua function and table via the CLR. First, we define a .lua file named LuaAdd.lua using SciTE.
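The article's C# listing for this globals sample appears to have been lost in extraction. Below is a hypothetical reconstruction of what such code might look like, assuming the LuaInterface 2.0.x API (the Lua class with its string indexer, plus NewTable, GetTable, and DoString); the variable names and values here are mine, not the article's.

```csharp
// Hypothetical sketch, not the article's original listing.
using LuaInterface;

class GlobalsSample
{
    static void Main()
    {
        using (Lua lua = new Lua())                 // create the Lua virtual machine
        {
            lua["width"] = 1024;                    // set a global number
            lua["name"] = "LuaForCSharpTest";       // set a global string

            lua.NewTable("point");                  // create a global table
            LuaTable point = lua.GetTable("point");
            point["x"] = 3;
            point["y"] = 4;

            lua.DoString("area = width * point.x"); // run a chunk against the globals
            System.Console.WriteLine(lua["area"]);  // -> 3072
        }
    }
}
```

All globals set this way land in the VM's Globals table, which is why the Lua chunk can read `width` and `point` directly.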
In the above script file, we defined two functions, three tables (named mytable1, color, and tree respectively), and three global variables, including width and height.

Please don't author the .lua file content with Visual Studio, because the .lua file is not the simple text file format you might imagine. You are advised to create .lua files using a dedicated Lua editor such as SciTE (shipped with the Lua for Windows package). If you create a .lua file using Visual Studio, you are sure to meet errors when running the application.

Now, let's see the CLR code dealing with the above Lua script. The DoFile method loads and executes the Lua script. By digging into the decompiled code using ILSpy.exe (a good open-source .NET assembly browser and decompiler, with which you can replace your .NET Reflector since the latter is becoming a commercial tool), I find that GetFunction is just a simple helper method of the Lua class. The real work is finished in another class, LuaFunction, which has a Call method to execute the target function, with two arguments in this case, and return the function's return values inside an array.

With the GetTable method of class Lua and another class, LuaTable, we can easily retrieve all the table-related info defined on the Lua side, as shown above. Figure 2 illustrates the related running-time screenshot.

There is a Lua function called loadstring which is similar to another Lua function, loadfile, except that it reads its chunk from a string, not from a file. As you might imagine, the preceding DoFile method and the DoString method are just the correspondents of the Lua functions loadfile and loadstring. Every time the method DoString is called, a compilation is executed; so although DoString is powerful, it may incur significant overhead. Before using it, please make sure that you cannot find a simpler way to tackle your immediate problem.

Now, let's look at a related example.
As you've seen, within the method DoString we can do common Lua things. The corresponding running-time snapshot is given in Figure 3.

As mentioned previously, you can easily extend the current functionality of Lua using table. The following example gives a simple illustration, in which we define an empty object and then define a method for it. In fact, with this approach we can achieve object-oriented programming support in Lua. This is only a simple case, though; the real thing involves many more details. Let's look at the related CLR code. The corresponding running-time snapshot is given in Figure 4.

In this sample, we are going to use the DoString method to construct a Lua function, and then call it using the Lua virtual machine. Through this example, you will get a further sense of the power of the DoString method. The related running-time snapshot is shown in Figure 5.

In this example, we will explore another powerful feature of LuaInterface. Via the RegisterFunction method of the Lua virtual machine class, we can register a CLR function with Lua, so that a Lua script can invoke the function as if it were an internal function. At last, we will use the above means to test it.

First of all, let's define a simple C# method named outString outside the method Main. This method simply sets the fore color of the Console and then outputs the incoming string. Now, let's look at the first case, in which we register the above function within the Lua virtual machine and then invoke it from the CLR side. The running-time snapshot for Sample 6, case 1, is given below.

In the second case, we will invoke the above registered CLR function inside a Lua script. Let's first look at the Lua script (named outStringTest.lua), as shown below. Finally, we start to test on the C# side with the following code.
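The listings for Sample 6 were also stripped during extraction. Here is a hedged sketch of both cases, assuming LuaInterface's RegisterFunction(name, target, MethodBase) signature; the method and file names follow the article's prose, but the exact bodies are my guesses.

```csharp
// Hypothetical sketch of Sample 6, not the article's original listing.
using System;
using System.Reflection;
using LuaInterface;

class RegisterSample
{
    // The CLR method to expose to Lua: color the console, print the string.
    public static void OutString(string message)
    {
        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine(message);
    }

    static void Main()
    {
        using (Lua lua = new Lua())
        {
            MethodInfo mi = typeof(RegisterSample).GetMethod("OutString");
            lua.RegisterFunction("outString1", null, mi);

            // Case 1: invoke the registered function from the CLR side.
            lua.GetFunction("outString1").Call("hello from C#");

            // Case 2: let a Lua script invoke it. outStringTest.lua would
            // simply contain: outString1("hello from Lua")
            lua.DoFile("outStringTest.lua");
        }
    }
}
```

Once registered, `outString1` behaves in Lua exactly like a built-in global function, which is the point of the sample.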
In this case, we invoke the method DoFile of the Lua virtual machine to execute the above Lua script, so that the method outString1 gets invoked inside Lua. The second case's related snapshot is shown in Figure 7 below.

In the above samples, you've seen how we can invoke Lua scripts from inside C# applications. In fact, as a high-efficiency embedded language, Lua can do far more than that. In the next section, you will see more complex samples where Lua files can invoke other .lua files, components defined inside .NET assemblies, and even more...

The following sample is adapted from the online LuaInterface samples (). In most of the online samples, you will see that Lua scripts inside a C# application can easily interact with .NET libraries and related data structures. Now, let's take a look at a related example. First, let's build a Lua script file named Prompting.wlua which looks like this.

First, note that the require command is the generally-used higher-level function to load and run libraries in Lua.

Second, CLRForm.lua is a Lua script file shipped together with LuaForWindows_v5.1.4-45.exe, which is located under \Program Files\Lua\5.1\lua. The CLRForm module contains some very useful utilities for doing Windows Forms programming in Lua. In fact, by opening CLRForm.lua, you will notice it further calls another module using require "CLRPackage". CLRPackage.lua is a helper script to simplify the explicit invocation of all the required types, which is also located under \Program Files\Lua\5.1\lua.

Third, PromptForString is a function defined in the file CLRForm.lua. In this case, we use the method PromptForString to prompt the user to enter some content and then call the delegate MessageBox to show a CLR dialog.

Finally, according to the online LuaInterface samples, the .wlua extension is associated with the GUI version wlua.exe while .lua is associated with the console version lua.exe.
In our self-built samples, though, we can also use the .wlua files. To make our sample run normally, we copy the two files CLRForm.lua and CLRPackage.lua into the project root folder, and put the other .lua script files under a sub-folder named scripts; for all of them, the "Build Action" property should be set to "Content" and "Copy to Output Directory" to "Copy if newer".

Next, let's look at the .NET side programming. There is only a simple DoFile method call; most of the work has been done on the Lua side. The running-time snapshots are shown in Figures 8 and 9. In this case, I input a simple string "Hi" into the textbox, and then you will see a related modal dialog like Figure 9.

As you will notice in the downloadable source project, there are still more samples in the file Program1.cs, nearly corresponding to the ones in the online samples. Nearly all of them pass in this sample project.

An embedded scripting language can give an application system powerful flexibility and scalability. Take the world-famous network game World of Warcraft for example: its Lua footprint is a bit more than 200K, yet it gains very fast computation speed. Nowadays, on the Microsoft .NET platform, Lua can be widely applied in applications (such as WinForm, WebForm, WPF, etc.) which allow unsafe code. Its ease of use (a simple script can complete a specified task without touching the complex logic of the core modules), cross-platform nature (a script, once written, can migrate freely to other systems), and support for on-demand extension (playing a pivotal role in complementing and upgrading system functionality) make Lua applicable over a wide variety of development areas.

Starting from this section, let's explore how to make WPF games interact with Lua. First, start up Visual Studio 2010 and create a general WPF application named LuaInWPFGame based upon .NET 4.0.
Second, add a reference to LuaInterface in our WPF project. In our case, we use the aforementioned ready-made assembly LuaInterface.dll (together with the related file lua51.dll).

Next, let's take the popular chat scene in a WPF game as an example to introduce how to embed Lua inside a WPF application. The first thing to do is to create a Lua script file (named Talk.lua). The script is responsible for storing the chat content, as well as doing the corresponding processing according to the incoming parameters. In particular, the script will invoke a C# method, return the data to the code-behind .cs file, and eventually the contents are reflected onto the UI. The complete script code is as follows:

The first variable, content, is a table used to store all kinds of chat contents; here I defined three sets of contents. As shown, the method ShowTalk calls another method, RandomTalk, defined on the C# side, to display conversation contents on the screen. Apparently, the RandomTalk method does not belong to the Lua script file itself; in this case, we illustrate how Lua calls a C# method within a WPF application. Of course, we also need to register this method with the script in the background WPF C# code, so that the method can ultimately play its role.

The Build Action of the script file Talk.lua should be set to "Embedded Resource" in the Properties dialog. As with the preceding C# Console samples, to use Lua inside WPF we should first add the necessary namespace reference. Next, we define a Lua virtual machine instance (as well as other related variables used in the sample game).

Also note that we defined a Sprite array to represent the sprites to be rendered in the game scene. In this case, we will simply draw five game characters and let them chatter around a fire. Each character is described using the Sprite class (refer to the source code yourself).
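The WPF listings are likewise missing from this copy of the article. The following sketch shows how the pieces described above might fit together; the Sprite class, the timer hook, and the method bodies are assumptions based on the prose, not the article's actual code.

```csharp
// Hypothetical sketch of the WPF glue code, not the article's original listing.
using System;
using System.Reflection;
using LuaInterface;

public class Sprite
{
    public string Topic;   // the line of chat currently shown over the sprite
}

public partial class MainWindow // : System.Windows.Window
{
    private Lua lua = new Lua();
    private Sprite[] sprites;   // the five characters around the fire
    private Random rand = new Random();

    private void InitScripting()
    {
        // Let Talk.lua call back into C# to put a chat line on screen.
        lua.RegisterFunction("RandomTalk", this,
            GetType().GetMethod("RandomTalk"));
        lua.DoFile("Talk.lua");        // load the chat script
    }

    // Picks a random sprite and assigns it the incoming chat line.
    public void RandomTalk(string message)
    {
        sprites[rand.Next(sprites.Length)].Topic = message;
    }

    private void OnChatTimer(object sender, EventArgs e)
    {
        // Trigger the Lua side; ShowTalk picks a line and calls RandomTalk.
        lua.GetFunction("ShowTalk").Call();
    }
}
```

The round trip is the key idea: a WPF timer calls into Lua, ShowTalk selects a line from its content table, and the registered RandomTalk pushes that line back to a sprite in the scene.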
Now, set up the C# method to be invoked by the Lua script. Here, we randomly select a sprite and assign the passed-in argument as his topic. Next, we register the above C# method so that Lua knows about it and all the related info. Since all the details concerning registering a method have been covered previously, we will not delve into them again. You've also seen that after the registration we invoked the method DoFile of the Lua virtual machine to load and run the script.

By now, the core mechanism of embedding a Lua script inside a WPF application has been introduced. Figure 10 gives one of the running-time snapshots.

In this article you learned the fundamental means of integrating Lua into C#-based Console and WPF applications. Lua is a highly efficient embedded language based upon ANSI C. Lua offers many advanced features built upon the two components table and function; we cannot delve into all the interesting details here, but you can refer to other detailed Lua materials together with this tutorial to do further research.
http://dotnetslackers.com/articles/wpf/Integrate-Lua-into-WPF-Games.aspx
FWIW, I just ran into this issue too. Together with @monoidal we reduced our code to the following:

{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ApplicativeDo #-}

module M where

data T = T (forall a . a -> a)

g :: forall m . Monad m => m T
g = do
  a <- return ()
  b <- return ()
  let f :: forall a . a -> a
      f = id
  return (T f)

FWIW, here's the same issue reported against tasty: I find it suspicious that both reports involve tasty, although as Mario's latest repro shows, this is not specific to tasty. I also verified that it's not specific to being run as a Cabal test suite.

Ah, I see. The way I read it was

[ e | p1 <- e11, p2 <- e12, ... {- other statements of the form pn <- ... -}
    | q1 <- e21, q2 <- e22, ... {- other statements of the form qn <- ... -}
    ... {- everything not of the form x <- ... goes here -}
]

I personally think that the documented semantics is much more useful than the implemented one. The idiom of zipping several lists (typically of the same length), then filtering, is a fairly common one. OTOH, filtering a list before zipping would generally produce lists of different lengths and is usually not what one wants/expects. The desugaring of parallel list comprehensions described in the manual appears to state that parallel bars (|) have higher precedence than what comes after them, but this is not how they work in practice.

8.6.5: Haddock drops {- | ... -}. My understanding is that the GHC API is responsible for parsing this, but please let me know if this is wrong and I should file a haddock issue instead. Process the following module with haddock:

{- | A Haskell comment looks like this:

{- comment -}

-}
module Test where

I expect the full doc string to show up in the HTML, but the {- comment -} part is dropped. Tested with 8.6 and 8.8.
As a side note, I tried to reproduce this bug using the raw GHC API like this:

import GHC
import GHC.Paths
import Outputable
import GhcMonad

main = runGhc (Just libdir) $ do
  dynflags <- getSessionDynFlags
  setSessionDynFlags dynflags { packageFlags = [] }
  setTargets [Target (TargetFile "Test.hs" Nothing) False Nothing]
  load LoadAllTargets
  ms <- getModSummary (mkModuleName "Test")
  pm <- parseModule ms
  let pp = showPpr dynflags (hsmodHaddockModHeader . unLoc $ pm_parsed_source pm)
  liftIO $ putStrLn pp

But I got Nothing, i.e. the parsed module does not have hsmodHaddockModHeader. I'd be interested in learning what I did wrong there.
https://gitlab.haskell.org/romanc.atom
XvQueryBestSize(3X)     UNIX Programmer's Manual     XvQueryBestSize(3X)

NAME
       XvQueryBestSize - determine the optimum drawable region size

SYNOPSIS
       #include <X11/extensions/Xvlib.h>

ARGUMENTS
       dpy        Specifies the connection to the X server.

       motion     Specifies True if the destination size needs to support
                  full motion, and False if the destination size need only
                  support still images.

       vw,vh      Specifies the size of the source video region desired.

       dw,dh      Specifies the size of the destination drawable region
                  desired.

       p_dw,p_dh  Pointers to where the closest destination sizes supported
                  by the server are returned.

DESCRIPTION
       Some ports may be able to scale incoming or outgoing video.
       XvQueryBestSize(3X) returns the size of the closest destination
       region that is supported by the adaptor. The returned size is
       guaranteed to be smaller than the requested size if a smaller size
       is supported.

RETURN VALUES
       [Success]
              Returned if XvQueryBestSize(3X) completed successfully.

       [XvBadExtension]
              Returned if the Xv extension is unavailable.

       [XvBadAlloc]
              Returned if XvQueryBestSize(3X) failed to allocate memory to
              process the request.

       [XvBadPort]
              Generated if the requested port does not exist.

XFree86 Version 4.5.0
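The function prototype in the SYNOPSIS did not survive in this copy of the page. As a usage illustration (my addition, based on the standard Xvlib prototype rather than this page's original text; the sizes are arbitrary), a call might look like:

```c
/* Illustrative call sketch; error handling beyond the return check is
 * elided, and the 320x240 -> 640x480 sizes are arbitrary examples. */
#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>

void best_size_example(Display *dpy, XvPortID port)
{
    unsigned int best_w, best_h;

    /* Ask how close to a 640x480 destination the adaptor can get when
     * scaling a 320x240 source with full-motion video. */
    if (XvQueryBestSize(dpy, port, True,
                        320, 240,          /* vw, vh: source size   */
                        640, 480,          /* dw, dh: desired size  */
                        &best_w, &best_h) == Success) {
        /* best_w x best_h is the closest supported destination size. */
    }
}
```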
http://mirbsd.mirsolutions.de/htman/sparc/man3/XvQueryBestSize.htm
Is anyone able to run a script through Windows Task Scheduler using ArcGIS Pro? As a test I have an extremely simple script (just copying a feature) that runs fine within ArcGIS Pro.

import arcpy

arcpy.management.CopyFeatures(
    r"C:\CityScripts\ScheduledTasks\SafewatchCameras\Data\General.gdb\SafewatchCameras",
    r"C:\CityScripts\ScheduledTasks\SafewatchCameras\Data\General.gdb\SafewatchCamerastemp2",
    None, None, None, None)

I set up the Task Scheduler as recommended by the ArcGIS Pro help, and while the task completes successfully, nothing happens with the script. I put in a call to Esri tech support, but they said it was a Windows issue and therefore couldn't work on it.

Scott

All,

We were having issues running arcpy or arcgis modules via Task Scheduler on our 2016 Windows server. It would error out because the script was running under the SYSTEM account and was not able to log into AGOL for the license. We tried to manually install Pro and switch to a single-use license, but this failed as well. Even when manually installing Pro and switching the license it did not work, because the license install manager is installed in the user profile. We finally found the answer! We had to do a command-line install of Pro pointing at a single-use license; that is, we ran a command-line install to a place accessible by the SYSTEM account. This cleared up everything.

Here's another addition to the thread in case it's helpful to others, since it took me a while to sort out scheduled tasks using ArcGIS Pro's Python 3 environment.

Setup: *Yes, this particular server has both Pro and ArcMap, and therefore Python 3 and Python 2, but only Python 3 was set into the system's PATH variable.

Issue: Our Python 3 scripts worked correctly from both the command prompt and when run directly from an IDE, yet Windows Task Scheduler kept reporting an error in the last run result: (0x1).

Reason for error: Windows Task Scheduler defaulted to running Python 2, causing scripts using Python 3 to fail.
It took me a while to catch this, since only Python 3 had been set in the environment's PATH variable, so I assumed that Windows Task Scheduler would be running Python 3, not Python 2. (*** not sure about the message about the environment not being activated. ***)

Helper diagnosing script: I created a dummy script using only Python 2-compatible syntax and used the sys module to report which version of Python was being called when the script was run through Task Scheduler. The script revealed it was Python 2, cementing that I needed to explicitly point to the Python 3 installation.

import os
import sys
from datetime import datetime

def main():
    basepath = os.path.dirname(__file__)
    os.chdir(os.path.join(basepath, 'Test_script'))
    try:
        with open("log.txt", "a") as f:
            f.write('hello world! @ {}\n'.format(datetime.now()))
            f.write('Python installation info:\n')
            f.write('major: {} minor: {} micro: {}\n'.format(
                sys.version_info[0], sys.version_info[1], sys.version_info[2]))
    except Exception as exc:
        with open("log.txt", "a") as f:
            f.write(str(exc))

if __name__ == '__main__':
    main()

Solution: I had to specify the path to Python 3 as the program/script and the full path to the script as an argument. Again, since my Python 2 scripts just worked when I specified only the path to the script (done on a different server with Python 2 in PATH), I thought it would work the same for Python 3.

Python 3 script (server with Python 3 in PATH and without Python 2 in PATH) vs. Python 2 script (DIFFERENT server with Python 2 in PATH and without Python 3 in PATH)
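Given the diagnosis above, a small guard at the top of every scheduled script makes this failure mode loud instead of silent. This is my suggestion, not something from the thread; the function name is mine:

```python
import sys

# Fail fast, with the offending interpreter path in the message, if the
# scheduled task launched the wrong Python. Put this at the top of the script.
def check_interpreter(min_major=3):
    info = 'python {}.{}.{} at {}'.format(
        sys.version_info[0], sys.version_info[1],
        sys.version_info[2], sys.executable)
    if sys.version_info[0] < min_major:
        raise RuntimeError('Wrong interpreter for this task: ' + info)
    return info

print(check_interpreter())
```

When the task does pick up Python 2, the RuntimeError (and its interpreter path) lands in stderr or your log file, which is far easier to diagnose than a bare (0x1) result code.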
https://community.esri.com/t5/python-questions/python-script-as-sheduled-task-arcgis-pro/m-p/564040
Write a tool to convert Haskell values, structures and functions to C bindings. We need to find a way to convert Haskell functions, or pointers to them, into C function pointers. This would allow many things to follow. Haskell class (HC) instances could have C class (CC) definitions. Calling the HC functions would then be as simple as referring to the function pointers stored in the CC. How to deal with general module functions is something still to think about. Something else to think about is how to convert the function pointers back to Haskell functions; this will be needed when passing function pointers to Haskell function calls. Do we want to allow the "lifting" of regular C functions into pointers to Haskell functions so they can be passed to Haskell functions? This would almost seem required with such a system. Structures created with "data" will most likely be GObjects. Those done with "newtype" will most likely have to be duplicated, or maybe we can get away with #defines. If anything, we should probably be able to get away with a #define for "type" definitions. Some standard structures from the Prelude will have to be converted and included in the conversion tool. Haskell has, to put it lightly, a strong typing system. We need to think about how to bring that into the C context: how much of it do we want to preserve? Any good tool is useless without documentation. Part of the deliverables of this project will be documentation on usage, how it works, and some man pages.

Extend the GSLHaskell library to cover all the GSL functions. Implement (possibly using additional numerical libraries) important Octave functions not available in the GSL.

The Haskell FFI, and in particular GHC, supports "foreign export" of Haskell code to C, and embedding Haskell modules inside C apps.
However, this functionality is rarely used, and requires a fair bit of manual effort. This project would seek to simplify, polish and document the process of embedding and calling Haskell code from C, and from languages that use the C FFI (Python, Erlang, C++, ...), providing a canned solution for running Haskell fragments from projects written in other languages. A good solution here has great potential to help Haskell adoption (as we've seen in Lua), by mitigating the risk of using Haskell in a project. Depending on the language, we may be able to move this under another language umbrella (Python, Perl, Ruby, ...?).

GStreamer is a multimedia framework. The goal would be to create Haskell bindings, to allow easy creation of multimedia applications in Haskell.

The task is to write a program which generates a Haskell binding to the Qt library or parts of it. The program shall do the generation fully automatically, using the Qt header files or similar data as its input.

Python has an impressive number of libraries we might want to utilise from Haskell (e.g. pygments is used in the new hpaste); also, being able to access Haskell from Python lowers the risk when integrating Haskell into new projects. This project would seek to develop a Python<->Haskell bridge, allowing the use of libraries on either side from each language. Python's C-based FFI makes this not too daunting, and the pay-off could be quite large. Related projects: MissingPy and hpaste's light Python binding. We may also be able to get this funded under the (many) Python slots. This could be funded under the Python project umbrella. Applications should also be submitted to them.

Don Stewart

Michal Janeczek <janeczek@…>

This was my project idea from last year, but I ended up not doing GSoC.
It would be useful to have a static binding generator for Gtk2hs using data from gobject-introspection to do most of the work, making it easier to update and maintain the bindings to Gtk+ and company, and easier to add bindings for new GObject-based libraries.

I'd like to head up the implementation of a basic SWIG module that will generate appropriate C wrappers and hsc files that implement C++ classes, inheritance, and method calls properly. This would include generating type classes that emulate upcasting and public methods, proper handling of typedefs so they correspond in Haskell, generating accessors for public class members, creating equivalent constant variables in the Haskell code, and finally converting enums into data types. Thoughts?

Although there are a number of issues with HDBC backends, the main one is that nearly everything goes through strings, even when drivers offer support for proper wire formats. This makes large inserts or selects much slower than necessary, and they consume much more space than necessary as well. Along with this, there's no implementation of batched inserts. And finally, there's BLOB support.

Additionally, the backends could be simplified. That is to say, when HDBC was produced, there were no separate bindings packages for most databases, so the backends include the bindings themselves. But now there are separate packages that just provide simple bindings to e.g. sqlite and postgres. It would be very good to switch over to those libraries where possible, and where such packages don't exist, to split out, e.g., the direct ODBC bindings into a separate package as well. This would help maintainability and future development.

HDBC remains by far the most used Haskell database package, and now that John Goerzen has announced plans to switch to BSD licensing, there's no obstacle to its use across the board except for its current limitations.
It would be far better for the Haskell community to bring HDBC up to snuff as a single intermediate tool for accessing multiple database backends than to continue down the path of a profusion of competing single-db bindings that higher-level libraries are forced to access individually. Major points from the above email are reproduced below: For more details, click here. Prior bioinformatics knowledge is not a requirement. Please contact me for details. Non-standard builds often need to implement specific build steps in Setup.hs, specifying a build-type: Custom in the project cabal file. The user hook system works reasonably well for modifying or replacing the specific sub-steps of a build, but *implementing* anything more than the simplest logic in Setup.hs is very difficult. A great deal of this difficulty stems from the lack of library support for code in Setup.hs. Adding a cabal section that specifies a build-depends: for Custom (and possibly other) build types would allow developers to reuse build code between projects, to share build system modifications on hackage more easily, and to prototype new additions to cabal. Setup.hs *can* allow arbitrarily complex build system manipulations; however, it is not practical to do so because the infrastructure surrounding Setup.hs doesn't promote code reuse. The addition of dependencies that cabal-install would install prior to building Setup.hs and issuing the build would enable developers to produce custom builds that perform complex operations that utilize the high-quality libraries available on hackage. Furthermore, this would provide the means to prototype (and distribute) new cabal / cabal-install features before integrating experimental code into the stable tools. I think something akin to the Library section would work for this, e.g.:

CustomBuild
  Setup-is: Setup.hs
  Build-Depends: ...
  Build-tools: ...
  ...

(I expect that most of the fields applicable to Library would also apply here.)
cabal test [PACKAGE DIR]:[SECTION]... Example: cabal test my-pkg1-dir:test1 my-pkg1-dir:test2 my-pkg2-dir Implementation-wise this means that the working dir (i.e. dist/ today) needs to be able to live outside some "current" package's dir. It would live in the project "root" (e.g. some directory that contains all the packages that are checked out in source form). (Project idea first proposed by Johan Tibell) A tool that is given two package tarballs (or similar) tells you which version number component needs to be bumped and why. It should be integrated into the cabal workflow, perhaps into cabal itself. (Probably too small to be a project on its own, suggested by Johan Tibell.) From an email from Johan Tibell: Currently the cabal dependency solver treats all the components (i.e. library, test suites, and benchmarks) as one unit for the purpose of dependency resolving. This creates false dependency cycles. For example, test-framework depends on containers and the containers test suite depends on test-framework, which means it should be possible to build the containers test suite by first building the library, then test-framework, and finally the containers test suite. This doesn't work today as the solver treats the whole containers .cabal file as one unit and thus reports a dependency cycle. The dependency solver should treat each component as a separate mini-package for the purpose of dependency solving. This ticket proposes to add a NVIDIA CUDA backend for the Data Parallel Haskell extension of GHC. To quote Wikipedia on CUDA: "CUDA ("Compute Unified Device Architecture") is a GPGPU technology that allows a programmer to use the C programming language to code algorithms for execution on the GPU..." To me, the exciting thing about CUDA is, if not the technology itself, the high availability of CUDA-enabled "graphic" cards.
It is estimated that by the end of 2007 there will be over 40,000,000 CUDA-capable GPUs! Also see the NVIDIA CUDA site. To quote the Haskell Wiki on DPH: "Data Parallel Haskell is the codename for an extension to the Glasgow Haskell Compiler and its libraries to support nested data parallelism with a focus to utilise multi-core CPUs. Nested data parallelism extends the programming model of flat data parallelism, as known from parallel Fortran dialects, to irregular parallel computations (such as divide-and-conquer algorithms) and irregular data structures (such as sparse matrices and tree structures)..." It turns out people are actually already working on this. See this thread on haskell-cafe. I actually think this project is too big for a Google Summer of Code project. It's more suitable for a Masters project, I guess. However, the project can be broken into several sub-projects that each address a different research question. Immediate questions that come to mind are for example: GHC offers many features for multicore, parallel programming, including a parallel runtime, a new parallel garbage collector, a suite of shared memory concurrency abstractions, and a sophisticated parallel strategies library. What's missing from this set is experience building parallel applications using the parallel runtime and high-level parallelism primitives. In this project a parallelism benchmark suite, possibly ported from an existing suite, would be implemented, and used to gather experience and bug reports about the parallel programming infrastructure. Improvements to, say, Control.Parallel.Strategies could result, as would a robust way of comparing parallel program performance between versions of GHC. A recursive Haskell datatype could be converted to a corresponding C struct:

struct Tree {
  ...
  struct Tree* left;
  struct Tree* right;
};

Types that have functions that act on them, such as:

data IntContainer = IntContainer Int

getInt :: IntContainer -> Int
getInt (IntContainer a) = a

could have these functions automatically converted to C/C++:

struct IntContainer
{
  int a;
};

public:
  virtual Monad add(Monad a, Monad b);
  virtual Monad mult(Monad a, Monad b);
  virtual Monad neg(Monad a, Monad b);
  virtual Monad div(Monad a, Monad b);

hs-plugins can dynamically compile and load Haskell code, but does not prevent plugins from using unsafePerformIO or unsafeCoerce#. I would like to be able to use hs-plugins to execute untrusted code. As far as I can see, two pieces of infrastructure are missing: It seems to me that the best way to achieve the first goal is to make GHC keep track during compilation of which functions are safe (do not call unsafe primitives, or, I suppose, are declared to be safe by a pragma in a trusted library). However, I know only very little about GHC internals. One project I want to use this for would be a web server that lets users create Haskell-based web applications without having to set up their own Unix account etc. If this project is accepted, I'll build a prototype of this that can be used to test "sandboxed Haskell" (no matter whether the project ends up being assigned to me or somebody else).

Project 1: LLVM optimisation passes

1) Clearly identify issues with the LLVM-produced assembly code in the context of GHC
2) The second part would be to identify the lowest-hanging fruit from those things identified in 1) and make changes to the LLVM output / write LLVM optimisations (apparently this is a joy to do; the LLVM framework is very well designed) to fix the issues

Separating the project into two parts like this means that we could get something out of the project even if the student is unable to make significant progress with LLVM itself in the GSoC timeframe. Having a clear description of the problems involved + simple benchmark programs would be a huge help to someone attempting part 2) in the future, or they could serve as the basis for feature requests to the LLVM developers.
Project 2: Tables next to code

My feeling is that this is the more challenging of the two projects, as it is likely to touch more of LLVM / GHC. However, it is likely to yield a solid 6% speedup. It seems there are two implementation options: 1) a postprocessor a la the Evil Mangler (a nice self-contained project, but not the best long-term solution), or 2) modify LLVM to support this feature. Other: either project could include looking at the impact of using LLVM for link-time optimisation, and a particularly able student might be able to attempt both halves. See also the thread at and David's thesis. ThreadScope lets us monitor thread execution. The Haskell Heap Profiler lets us monitor the Haskell heap live. HPC lets us monitor which code is executing, and when. These should all be in an integrated tool for monitoring executing Haskell processes. This ticket is an adaptation of Sajith Sasidharan's mailing list post. Broadly, the idea is to improve support for NUMA systems. Specifically: Needed! I ([acfoltzer@… Adam Foltzer]) know the outlines of the problems fairly well, but have no experience hacking on RTS code. I would be willing to take on a supporting role, but such experience seems necessary to mentor this project. Something is wrong in the parallel build implementation in GHC, as it doesn't even give good speed-ups in embarrassingly parallel cases on few cores (e.g. 2). Someone needs to profile GHC and work on speeding up parallel builds.
(Suggestion taken from an email by Simon Peyton Jones): Something like: while a Haskell program is running, start another process, which connects to the running Haskell RTS, switches on the event-monitoring infrastructure, processes the stream of events, and displays useful stuff about it (probably in a web browser). Karolis Velicka (an undergrad) did an 8-week internship in which he made the ghc-events library, which parses the event stream coming from the event-monitoring infrastructure, incremental. That means it can parse events as they arrive, rather than having to wait until the run is completed. So that is a useful piece of the puzzle. Threadscope could in principle be a client of this more incremental API. But Threadscope is hard to build (based on GTK). Maybe something displaying in a web browser would be more easily portable? Peter Wortman is working on generating DWARF annotations in the symbol table. So I don’t have a precise vision of what the project might be, but something about better infrastructure for giving insight into what is going on at runtime. There are at least two open GHC tickets (#8279 and #8287) to improve the performance of generated code. Both of these already have a good amount of discussion and (old) patches to get you started. This would be a good topic to follow through to the point where it can be merged into GHC, as every Haskell program compiled to native code would benefit. Most of the base library functions have at best a one-line description of what they do. These are the most-used functions in Haskell, and the lack of examples and parameter documentation makes it difficult for beginners to learn. There's tons of low-hanging fruit here. For example, in Data.String, the following Prelude functions have no examples of their usage to show what they do: It would only take about an hour to add extensive documentation showing all sorts of use cases. The examples can then be tested with the doctest package from. 
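As a purely hypothetical illustration (not a patch to base itself), the kind of documentation this project would add is a doctest-style example in a function's Haddock comment; `doctest` then runs each `>>>` line in GHCi and checks the printed result against the line below it. Here the claimed output is verified directly in `main`:

```haskell
module Main where

-- | 'unwords' joins a list of words with single spaces.
--
-- >>> unwords ["GHC", "base", "docs"]
-- "GHC base docs"
--
-- (In base itself this comment would sit on the real definition of
-- 'unwords'; this standalone module just checks the example's claim.)
main :: IO ()
main = print (unwords ["GHC", "base", "docs"])
```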
Here are some example revisions adding these sorts of documentation: This is a good project because there's no way it can fail. Each module in base is a TODO item, and once one of them is completed (independently of the rest), it's done for good. If at the end of the Summer the candidate is only 50% finished, we don't walk away empty-handed -- half of the library is documented. Good examples can be crowd-sourced if needed. It's also easy to get up to speed with Phabricator for independent changes like this. As mentioned earlier, the examples can be checked with the doctest program to ensure that their output is correct. At the moment, doctest will not run "out of the box" within the GHC tree (due to some entangled imports). A bonus project along these lines would be to make doctest runnable within the GHC tree, and to automate the test suite for the examples in base. Nomyx [1] is a unique game where you can change the rules. It's the first full implementation of a Nomic game [2] on a computer. It is based on a Haskell DSL that lets the players submit new rules while playing, thus completely changing the behavior of the game over time. The game could be seen as a platform where beginners in Haskell could learn the language while playing an actual multiplayer game. Experts, on the other hand, could participate in great matches that show off their Haskell skills. However, the game is still in development, and in particular the design space of the DSL needs to be explored to make it easier to learn and use. Ideally, even non-Haskellers should be able to create some basic rules. This is the objective of this proposed GSoC. [1] [2] We propose the following improvements to the INblobs tool developed at U. Minho: wxWidgets are a funded project this year. Coordinate with them to work on wxHaskell improvements.
Needs support from the wxHaskell team Things which look like they might be good summer of code projects: Things we want for wxhaskell 0.12 Daan's ideas from Other ideas Todo: what else needs doing with wxHaskell? It is hard on Hackage to get an overview over nearly 3000 libraries: how popular a library is, examples of how to use it, changes over time, etc. Some of this is planned to be solved in hackage 2, which is very promising. I would like to extend this by displaying libraries like skyscrapers in a city as flow networks. For this I have developed several libraries to prepare this, see . I have found a very general way to construct 3d shapes based on symmetry classes in my diploma thesis (this also tackles this reddit proposal:). My plan is to change Sourcegraph () to parse all packages and display them with WebGL or a 3d engine. To extend Haddock or HsColour to create hyperlinked, coloured source files for browsing, similar to how Agda can render Agda source [1]. Clickable source code is useful as documentation and as an exploration tool, both to learn a library and to learn library design. The ambition is to make all source on Hackage browsable in this manner. Time permitting, this proposal could also include other enhancements to Haddock, such as links to instance declarations in Haddocks. [1]: Implement a new feature in JHC, depending on interests and skills, e.g. MPTC+FD, fast arrays, STM, Template Haskell, open datatypes, nice records, or something that you are interested in. Write an SMIv1/SMIv2 SNMP MIB compiler using the Parsec combinator library. ashish: xmonad is a tiling window manager for X11, and a popular Haskell open source project with many users. This project would seek to integrate "compositing" support into xmonad, creating the first compositing tiling window manager. Compositing is the use of 3D hardware acceleration to provide window effects, such as in the Apple "expose" functionality, and in Compiz, a unix window manager supporting compositing effects.
By reusing the compositing libraries provided by "compiz", binding to them from Haskell, and integrating compositing hooks into xmonad, we could hope to write effects in Haskell, for the window manager. This would make xmonad unique: the only tiling window manager with support for compositing. Additionally, a Haskell EDSL for describing effects would be of general utility. The result would be a novel UI interface, and would investigate how the user interface for a tiling wm can be enhanced via compositing. The initial goal would be to bind to the basic library, providing a simple effect (such as shadowing), and then extend the supported effects as necessary. The xmonad feature ticket for this: Discussion will take place on the xmonad@ lists, in order to prepare good submissions to these groups. This idea has been in my head since people learning Haskell (like me) usually have some trouble understanding substitution step by step:

foldr (+) 0 [1, 2, 3, 4]
foldr (+) 0 (1 : [2, 3, 4])
1 + foldr (+) 0 [2, 3, 4]
1 + foldr (+) 0 (2 : [3, 4])
1 + (2 + foldr (+) 0 [3, 4])
1 + (2 + foldr (+) 0 (3 : [4]))
1 + (2 + (3 + foldr (+) 0 [4]))
1 + (2 + (3 + foldr (+) 0 (4 : [])))
1 + (2 + (3 + (4 + foldr (+) 0 [])))
1 + (2 + (3 + (4 + 0)))
1 + (2 + (3 + 4))
1 + (2 + 7)
1 + 9
10

Comparing this with foldl immediately shows the viewer how they differ in structure:

foldl (+) 0 [1, 2, 3, 4]
foldl (+) ((+) 0 1) [2, 3, 4]
foldl (+) ((+) ((+) 0 1) 2) [3, 4]
foldl (+) ((+) ((+) ((+) 0 1) 2) 3) [4]
foldl (+) ((+) ((+) ((+) ((+) 0 1) 2) 3) 4) []
(+) ((+) ((+) ((+) 0 1) 2) 3) 4
1 + 2 + 3 + 4
3 + 3 + 4
6 + 4
10

Each step in this is a valid Haskell program, and it's just simple substitution. This would be fantastic for writing new algorithms, for understanding existing functions and algorithms, writing proofs, and learning Haskell. The Hat tracer for Haskell is a very powerful tool for debugging and comprehending programs. Sadly, it suffers from bit-rot. It was written largely before the Cabal library packaging system was developed.
It is difficult to get any non-trivial program to work with Hat, if the program uses any pre-packaged libraries outside the haskell'98 standard. The aims of this project would be (a) to fix Hat so that it adequately traces hierarchical library packages, (b) to integrate support for tracing into Cabal, so a user can simply ask for the 'tracing' version of a package in addition to the 'normal' and 'profiling' versions, and (c) to develop a "wrapping" scheme whereby the code inside libraries does not in fact need to be traced at all, but instead Hat would treat the library functions as abstract entities. Now, EclipseFP supports working with both Haskell and Cabal files, compiling and running the code. However, it would be great to make it a complete environment as for Java or C++. Some ideas would be: The idea is to, following in the footsteps of tickets #1555 and #1564, make accessing Haskell functions from C++ as convenient as possible by creating a tool which easily exposes Haskell functions to C++. The main rationale behind this is that pure code should be called from impure code. This is a paradigm already present in Haskell in the form of the IO monad. Thus Haskell should be invoked from an external (impure) language, not the other way around. The FFI is mostly used in the opposite direction, because it is usually employed to create wrappers for external libraries. A big part of exposing Haskell functionality to another language is marshalling data structures between the two languages.
We could use the following system to derive a c++ representation for Haskell datatypes: To keep things initially simple one could start with mutually recursive datatypes with type parameters (no nested datatypes or functions with constructors). For these there is a straightforward (automatic) translation from Haskell types into c++ types: Standard Haskell types such as integers/lists/maybe/map could be treated separately to create native c++ representations. For functions we want exposed to c++, a wrapper function is created which calls the original Haskell function (via C). This is also where the marshalling of arguments/results takes place. Initially only first-order functions will be supported. A system to support higher-order functions (which implies functions have to become first-class in c++) may be possible but this would be future work. A possible use-case for this system would be a code editor. The (impure) gui can be written in c++ while the text editing functions and for example parsing for code-highlighting can be provided by pure Haskell functions. We recently improved the UHC JavaScript? backend[1] and showed that we can use it to write a complete front end for a web application[2]. A summary and demo of our results is available online[3]. Currently, however, it is still quite inconvenient to compile larger UHC JS projects, since Cabal support for UHC's different backends is limited. The programmer currently manually specifies include paths to the source of the used modules. Improving this situation is the goal for this GSoC project: make it possible to type cabal configure && cabal build and find a complete JS application in the project's dist folder. Successful completion of this project would go a long way towards making the UHC JS backend more usable and as a result, make Haskell a practical language to use for client-side programming. 
In solving this problem, one will have to think about how to deal with external Haskell libraries (UHC compiles JS from its internal core representation, so storing pre-compiled object files won't work in this case) and perhaps external JS libraries as well. One will also need to modify Cabal so that it becomes possible to select a specific UHC backend in your cabal files. Ideally, only one cabal file for an entire web application is needed, for both the front-end and backend application. An additional goal (in case of time to spare) would be the creation of a UHC JS-specific Haskell Platform-like distribution, so that programmers interested in using this technology can get started right away. Of course, such a distribution would have to be installed alongside the regular Haskell Platform. As far as mentoring is concerned, I (Jurriën) might be able to help out there, but since the above project would revolve more around Cabal and not so much around the UHC internals, there might be more suitable mentors out there. [1] [2] [3] Joachim Breitner created a build bot () that runs some of the nofib suite every so often. It gathers the results and creates a dashboard showing how performance on the benchmark changes over time. It needs to be productionized so it runs on a machine maintained by the Haskell infrastructure team. It also needs to reliably detect regressions and email ghc-dev@ when those happen. The latter involves setting up emailing infrastructure and possibly tweaking nofib until the benchmarks are reliable (i.e. have long enough runtimes to avoid noise). Snap aims to be a fast, reliable, easy to use, high quality and high level web development framework for Haskell. More information on Snap can be found at. Type-safe URLs encode all accessible paths of a web application via an algebraic datatype defined specifically for the web application at hand.
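The core of the idea fits in a few lines. This sketch is illustrative only (the names are invented, and it is not the Snap or web-routes API): every reachable page is a constructor, so a dead link simply cannot be constructed, and changing a user-visible path means editing one function in one place.

```haskell
module Main where

-- All accessible paths of the (hypothetical) application.
data Route
  = Home
  | Post Int   -- a blog post, identified by number
  | About

-- The only place URLs are ever turned into strings.
renderRoute :: Route -> String
renderRoute Home     = "/"
renderRoute (Post n) = "/post/" ++ show n
renderRoute About    = "/about"

main :: IO ()
main = mapM_ (putStrLn . renderRoute) [Home, Post 42, About]
```

A real library also needs the inverse (parsing a request path back into a `Route`), which is what packages like web-routes provide generically.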
All URLs within the application, then, are generated from this datatype, which ensures statically that no dead links are present anywhere in the application. Furthermore, when the user-visible URLs need to change for various (possibly cosmetic) reasons, the change can be implemented centrally and propagated to all the places of use within the application. The web-routes package on Hackage seems to provide a framework-agnostic way to handle type-safe URLs. However, it might be worthwhile to discuss if this is the right choice for Snap. The scope of this project would include the following: Doug Beardsley Greg Collins Ozgun Ataman Do you have a vision of how to do better than WASH? Integrate continuation-based interaction with the client, or use something like Functional Forms for the interaction. How best to interact with XML etc. Other HAppS-related projects also possible. HAppS is a web framework offering transactional semantics. Tying it to SQL transactions is an interesting exercise requiring some knowledge of Haskell database bindings (implementing two-phased commit for different databases) and extending HAppS MACID to handle the two-phased commit. XML Schema() is a format to define the structure of XML documents (like DTD, but more expressive). Since it's recommended by the W3C, many XML standards use it to specify their format. Some XML technologies, like WSDL(), use it to define new data structures. This makes XML Schema a door-opener for the implementation of other XML technologies like SOAP. The project could include the following: Malcolm Wallace Vlad Dogaru <ddvlad*REMOVETHIS*@…> Saurabh Kumar <saurabh.catch@…> Marius Loewe <mlo@…> David McGillicuddy <dmcgill9071@…> PureScript is a relatively large project written in Haskell, which is increasing in popularity and starting to see commercial use. The PureScript
community has plenty of possible projects for interested students, and we have a very active developer community, which would be able to provide lots of guidance. To avoid splitting each possible project into a separate Trac item, I have created a more detailed set of project descriptions on the PureScript wiki, here: Initial feedback on #haskell-gsoc IRC seems to suggest that some of the later items in the list might be too much work for a single summer, but it should be possible to break most of them down into manageable pieces, and get at least some good results by the end of the summer. Visual Haskell is the Microsoft Visual Studio plugin for Haskell development, released in September 2005. There's lots to do in Visual Haskell. There are plenty of enhancements that could be added; some ideas are here. Note that this was proposed some years ago, and any prospective student should first identify a suitable mentor, and discuss the details with her. Machine learning includes many methods (e.g., neural networks, support vector machines, hidden Markov models, genetic algorithms, self-organizing maps) that are applicable to classification and/or pattern recognition problems in many fields. A selection of these methods should be implemented as a Haskell library; the choice would depend on the qualifications and interests of the student. Optionally, a more limited library with an application using it could be implemented. My main interest is in bioinformatics, but as machine learning methods are useful in a vast number of fields, I'm happy to hear from prospective mentors and other people who are interested, as well as from prospective applicants. The GHC Haskell compiler recently gained the capability to generate atomic compare-and-swap (CAS) assembly instructions. This opens up a new world of lock-free data-structure implementation possibilities. Furthermore, it's an important time for concurrent data structures.
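For a flavour of the style involved, here is a minimal Treiber-style lock-free stack, sketched with `atomicModifyIORef'` (which performs a CAS retry loop internally) rather than the raw CAS primops the project would target; the names are illustrative, and real project work would tackle more ambitious structures than this.

```haskell
module Main where

import Data.IORef

-- A lock-free stack: the whole state is one atomically-updated cell.
newtype Stack a = Stack (IORef [a])

newStack :: IO (Stack a)
newStack = Stack <$> newIORef []

-- Push and pop each retry their CAS until the swap succeeds.
push :: a -> Stack a -> IO ()
push x (Stack ref) = atomicModifyIORef' ref (\xs -> (x : xs, ()))

pop :: Stack a -> IO (Maybe a)
pop (Stack ref) = atomicModifyIORef' ref $ \xs -> case xs of
  []       -> ([], Nothing)
  (y : ys) -> (ys, Just y)

main :: IO ()
main = do
  s <- newStack
  mapM_ (`push` s) [1 .. 3 :: Int]
  a <- pop s
  b <- pop s
  print (a, b)
```

The thought-provoking part is everything this sketch glosses over: avoiding contention on the single cell, memory reclamation, and elimination/backoff schemes, which is where the literature (and libcds) comes in.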
Not only is the need great, but the design of concurrent data structures has been a very active area in recent years, as summarized well by Nir Shavit in this article: Because Haskell objects containing pointers can't efficiently be stored outside the Haskell heap, it is necessary to reimplement these data structures for Haskell, rather than use the FFI to access external implementations. There are already a couple of data structures implemented in the following library (queues and deques) : But, this leaves many others, such as: A good point of reference would be the libcds collection of concurrent data structures for C++ (or those that come with Java or .NET): One of the things that makes implementing these data structures fun is that they have very short algorithmic descriptions but a high density of thought-provoking complexity. A good GSoC project would be to implement 2-3 data structures from the literature, and benchmark them against libcds implementations. Ryan Newton Sergiu Ivanov kodoque Mathias Bartl Aleksandar Kodzhabashev This ticket has been REFACTORED. It is now specifically focused on deques, bags, or priority queues. For anyone interested in concurrent hash-tables / hashmaps take a look at ticket #1617. GHC's new support for type families provides a fairly convenient and rather expressive notation for type-level programming. Just like their value-level counterparts, many type-level programs rely on a range of operations (e.g., type-level lists and lookup functions) that are common to different applications and should be part of a standard library, instead of being re-implemented with every new type-level program. 
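A tiny example of the kind of reusable building block such a library could provide is type-level list append. This sketch uses closed type families and DataKinds from modern GHC (which postdate the original proposal); the "test" is a type annotation that only typechecks if the family reduces as claimed.

```haskell
{-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, TypeOperators #-}
module Main where

-- Type-level list append, the analogue of (++) at the value level.
type family Append (xs :: [k]) (ys :: [k]) :: [k] where
  Append '[]       ys = ys
  Append (x ': xs) ys = x ': Append xs ys

-- A compile-time unit test: if Append reduced to anything other than
-- '[Int, Bool, Char], this equality constraint would not be solvable.
check :: Append '[Int] '[Bool, Char] ~ '[Int, Bool, Char] => ()
check = ()

main :: IO ()
main = check `seq` putStrLn "Append reduced as expected"
```

A standard library would collect dozens of such operations (lookup, membership, maps over type-level lists) so each type-level program stops reinventing them.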
As a concrete example of a sophisticated use of type-level programming, see for example Andrew Appleyard's C# binding and its encoding of C#'s sub-typing and overload resolution algorithm: The primary goal of this project is to develop a support library for type-level programming by looking at existing type-level programs and what sort of infrastructure they need. A secondary goal is to acid-test the rather new type family functionality in GHC and explore its current limitations. Last year's Summer of Code project led to Parsec3, a monad transformer version of Parsec with significantly more flexibility in the input it accepts. There is much work left that can be done, though: GHC's current checker for pattern overlap and exhaustiveness is in need of an overhaul. There are several bugs and missing features. For example, GADTs are not taken into account. The project would involve the analysis of the current implementation, specification of bugs and desired features, and the design and implementation of an improved checker. The project is mentioned on: Three possibilities: The first is to integrate preprocessing into the library. Currently, the library calls out to GCC to preprocess source files before parsing them. This has some unfortunate consequences, however, because comments and macro information are lost. A number of program analyses could benefit from metadata encoded in comments, because C doesn't have any sort of formal annotation mechanism, but in the current state we have to resort to ugly hacks (at best) to get at the contents of comments. Also, effective diagnostic messages need to be closely tied to original source code. In the presence of pre-processed macros, column number information is unreliable, so it can be difficult to describe to a user exactly what portion of a program a particular analysis refers to. An integrated preprocessor could retain comments and remember information about macros, eliminating both of these problems.
The second possible project is to create a nicer interface for traversals over Language.C ASTs. Currently, the symbol table is built to include only information about global declarations and those other declarations currently in scope. Therefore, when performing multiple traversals over an AST, each traversal must re-analyze all global declarations and the entire AST of the function of interest. A better solution might be to build a traversal that creates a single symbol table describing all declarations in a translation unit (including function- and block-scoped variables), for easy reference during further traversals. It may also be valuable to have this traversal produce a slightly-simplified AST in the process. I'm not thinking of anything as radical as the simplifications performed by something like CIL, however. It might simply be enough to transform variable references into a form suitable for easy lookup in a complete symbol table like I've just described. Other simple transformations such as making all implicit casts explicit, or normalizing compound initializers, could also be good. A third possibility, which would probably depend on the integrated preprocessor, would be to create an exact pretty-printer. That is, a pretty-printing function such that pretty . parse is the identity. Currently, parse . pretty should be the identity, but it's not true the other way around. An exact pretty-printer would be very useful in creating rich presentations of C source code --- think LXR on steroids. Although some statistics functionality exists in Haskell (e.g. in Bryan O'Sullivan's 'statistics' library), I think Haskell (or GHCI) could make sense as a more complete environment for statistical analysis. I think a standard data type representing a data frame would be a cornerstone, and analysis and visualization could be built on top of this to become a complete analysis environment, as well as a standard library for use in regular applications. 
This isn't necessarily a very difficult project, but for it to be acceptable it would probably need to be detailed and discussed a lot more, and one or more suitable mentors would need to step forward. Edit: It would also be great to take advantage of Haskell's parallel support (R only has this as proprietary add-ons, I think), and type safety (for instance using the 'dimensional' library to keep track of units). See also. Haskell is usually described as a language appropriate for people with a mathematically-oriented mind. However, I've found that there is no comprehensive library (or set of libraries) for doing maths in Haskell. My idea would be to follow the path of the Sage project. That project took lots of libraries and programs already available and added a layer for communication with them in Python. The result is that now you can have a full-fledged mathematical environment while using your normal Python knowledge for GUI, I/O and so on... The project can have several parts: For free, GHCi could be used as the next Matlab :D The Arduino platform is an open electronics platform for easily building interactive objects. Arduino offers simple ways of getting creative with electronics and building your own devices without having to dive deep into microcontroller programming with C or assembly. I think this is a great opportunity for the Haskell community to extend beyond the software world and to get people interested in Haskell who would otherwise not consider engaging with FP. This project would aim at providing an extensible library for programming the Arduino platform, enabling a functional (and hopefully more intuitive) way of programming the Arduino hardware. The library would be designed around providing an API for interacting with Arduino. I think this would be a better way than writing a cross-compiler or defining a language subset. If you have any thoughts or ideas please feel free to leave a comment.
A lack of a high-level data store library is a huge weakness in Haskell. Data storage is absolutely critical for web development or any program that needs to persist data or whose data processing exceeds memory. Haskell web development has languished in part because one had to choose between lower-level SQL or Happstack's experimental data store that uses only the process memory. All other widely used programming languages have multiple ORMs for proven databases. Haskell needs some better database abstraction libraries. The persistent library is a universal data store interface that already has PostgreSQL, Sqlite, MySQL and MongoDB backends, and an experimental CouchDB backend. Most users of the Yesod web framework are using it, and it is also being used outside of web development. With some more improvements, persistent could become the go-to data store interface for Haskell programmers. We could create interfaces to more databases, but the majority of Haskell programs just need *a* database, and would be happy with a really good interface to any database. There is also a need to interface with existing SQL databases. So I would like to focus on making the SQL and MongoDB storage layers really good. MongoDB should be easier to create a great interface for; on the SQL side, there needs to be a convenient way to finally write raw SQL queries. This issue should be explored in the GSoC, but does not have to be solved. Persistent already has a very good Quasi-Quoted DSL for creating a schema, but another task at hand is to write a convenient Template Haskell interface for declaring a schema. This should not be difficult because we already have all the tools in place. There are also some possibilities for integrating with existing Haskell backends. One interesting option is integration with HaskellDB or DSH - HaskellDB does not automatically serialize to a Haskell record like Persistent does. Michael Snoyman and Greg Weber are willing to mentor, and there is a large community of users willing to help or give feedback.
Note "real-time web" basically means not needing to refresh web pages, nothing to do with real-time systems. Haskell has the best async IO implementation available. And yet, node.js is getting all the spotlight! This is in part a library issue. Node.js has several interfaces that use websockets and fall back to other technologies that are available. I propose we create such a library for Haskell. We have already identified a language-independent design called sockjs. And there is already work under way on wai-sockjs, with good implementations of wai-eventsource and wai-websockets already available. Such a library is a fundamental building block for interactive (real-time) web sites. There is also a use case for embedding a capable Haskell messaging server into a website ran on a slower dynamic language that has poor concurrency, like Ruby (some people embed node.js based messaging servers now). This gives Haskell a back door to getting adopted in other areas. This should be easily achievable in a GSoC. In fact I think the biggest problem may be how to expand the scope of this to fit the summer. The Yesod web framework is now a very good server oriented web framework. A good wai-sockjs implementation could be nicely integrated with Yesod and other frameworks, but it could also be the building block for a great client-side abstraction in which a web page is automatically kept up to date in "real-time". Michael Snoyman and myself are willing to mentor. We can probably also get those that have been involved with websockets related development to date to mentor or at least give assistance. This project will likely have a greater community impact than all others combined. Port GHC to Haiku OS. AcidState is a high performance library for adding ACID () guarantees to (nearly) any kind of Haskell data structure. In layman's term, it's a library that will make your data persist even in the event of, say, a power outage. 
AcidState relies on an on-disk transaction log from which to recreate the Haskell data structure if your application crashed unexpectedly. This makes the hard drive a single point of failure (leading to the loss of data, even) and the low reliability of hard drives simply does not merit this. I propose that this should be solved by replicating the state across a cluster of machines. Each machine would keep a copy of the transaction log on disk and the single point of failure would be gone. When thinking about this project, one has to keep the CAP theorem () in mind. In particular, I propose consistency and availability to be guaranteed. Partition tolerance (compared to consistency and availability) has fewer uses and is much harder to implement. AcidState is already structured for multiple backends so few, if any, internal changes are required before this project can begin. Currently cabal manages the installation of hackage projects relatively well provided (among other things) that you have all the necessary external tools available and installed. However, when certain tools are not available, such as alex or happy or some external library that cabal checks for with pkg-config, we lose the automated aspect of compiling and installing packages and confused and frustrated users. That is particularly bad for new users or people just wishing to use a certain haskell program available in hackage for which their OS does not have a pre-packaged version (such as an installer in windows or .deb in Debian/Ubuntu?). For instance, when issuing the command cabal install gtk in a fresh environment (say, ubuntu), it is going to complain about the lack of gtk2hs-buildtools. Then when issuing the command cabal install gtk2hs-buildtools it complains about the lack of alex. Installing alex is also not enough, as you also need happy. Cabal does not indicate that at first, though. 
Installing alex and happy still does not solve your problems as gtk depends on cairo, which checks for cairo-pdf through pkg-config and fails. Point being that even though it is quite easy to install all those packages (and some you might not even have to be explicitly installed, as it can be provided by your haskell platform package), this automated process becomes manual, interactive and boring. I propose extending cabal in three ways: I think this would be a nice step in making Haskell even more accessible and slightly less frustrating for newcomers and for people trying interesting things in projects that depend on external libraries. I do understand that cabal is not a package manager[2], however I see no reason why it should not be able to check for dependencies and issue warnings and suggestions in this way. And since those are warnings and not errors, the user can always ignore them and proceed. Edit: Talked to Jürrien about this ticket and he mentioned the text on link [2]. I'm editing to try to make things clearer. Some program instrumentation and analysis tools are language agnostic. Pin and Valgrind use binary rewriting to instrument an x86 binary on the fly and thus in theory could be used just as well for a Haskell binary as for one compiled by C. Indeed, if you download Pin from pintool.org, you can use the included open source tools to immediately begin analyzing properties of Haskell workloads -- for example the total instruction mix during execution. The problem is that aggregate data for an entire execution is rather coarse. It's not correlated temporally with phases of program execution, nor are specific measured phenomena related to anything in the Haskell source. This could be improved. A simple example would be to measure memory-reuse distance (an architecture-independent characterization of locality) but to distinguish garbage collection from normal memory access. 
It would be quite nice to see a histogram of reuse-distances in which GC accesses appear as a separate layer (different color) from normal accesses. How to go about this? Fortunately, the existing MICA pintool can build today (v0.4) and measure memory reuse distances. In fact, it already produces per-phase measurements where phases are delimited by dynamic instruction counts (i.e. every 100M instructions). All that remains is to tweak that definition of phase to transition when GC switches on or off. How to do that? Well, Pin has existing methods for targeted instrumentation of specific C functions: By targeting appropriate functions in the GHC RTS, this analysis tool could probably work without requiring any GHC modification at all. A further out goal would be to correlate events observed by the binary rewriting tool and those recorded by GHC's traceEvent. Finally, as it turns out this would NOT be the first crossing of paths between GHC and binary rewriting. Julian Seward worked on GHC before developing valgrind: Others?? Switch cabal to use an off-the-shelf SAT-solver/theorem prover for solving version constraints The current solver has already started to hit some performance issues (which I think affects Michael Snoyman for example). It also gives terrible error messages. Using a true and tested solver could get rid of these solver issues, as long as we can make the off-the-shelf solver give good enough error messages. There are some open issues (e.g. the solver needs to have a liberal license, be possible to ship with cabal-install, etc) that need to be thought about before we make this a student project however. I'd rank this project as difficult but feasible, for the right student. Pandoc is one of the most widely used open-source Haskell projects. The project is primarily used by academics and authors to streamline their writing process. 
A typical author will write their paper in markdown before converting their document to LaTeX but of course there are many other possibilities. Unfortunately, this process is not always plain sailing. Despite looking very simple, there are a number of ambiguous corner cases in the original Markdown description. This leads to surprises to users when they try to convert their documents.(1) Many of these cases have been reported as issues to the Pandoc issue tracker. Last year, John MacFarlane along with an industry consortium set out to standardise Markdown. The result of their work was CommonMark. The project expands Gruber's initial ideas into a several thousand word specification along with reference implementations and a comprehensive test suite. Hopefully as the spec finalises, more implementations will conform to it. As a result of this ambiguity, the markdown parser has grown difficult to maintain and inefficient. The goal of this proposal would be to rewrite the markdown parser so that it becomes more maintainable, efficient, extensible and consistent. In addition to the "standard" elements, Pandoc supports many useful extensions. These have been added in an ad-hoc manner so the code has accumulated technological debt over time. A proposal should also look to provide a mechanism to make it easy to add extensions in the future. This part would require careful thought and analysis of the existing code base as some extensions (such as footnotes) have caused a significant change in the parsing strategy. Given a suitably extensible base parser it would perhaps even be desirable to split the base CommonMark? parser into a separate package (á la Cheapskate). This would enable other projects to firstly use the parser without depending on Pandoc and secondly, independently define their own extensions. The following tasks could make up part of a proposal. There are lots of open tickets on the issue tracker. 
Those marked minor should have isolated fixes which would be suitable for beginners. (1) For an overview of different implementations, see BabelMark. From the reddit thread: Not sure if this is the sort of thing GSOC is meant for, but if anyone is interested in improving interactive coding in Haskell in IHaskell (useful for graphics, interactive data analysis, etc), I would be happy to help. I've mentored people working on small projects related to IHaskell / contributions to IHaskell, which went well. If anyone is interested, let me know! Potential ideas include: And also: I've mentioned this before, but, which offers sage and ipython notebooks for free also offers terminals, and ghc in those terminals. William Stein, whose site it is, has indicated that if someone were to help him set up IHaskell in sageMathCloud, he would be happy to do so. I like the goals of the SMC project and the IHaskell project both, and together I think it would be huge for IHaskell, since it would remove all the setup cost and hosting cost for sharing, just as it does for IPython notebooks. So I think that would be a very doable SOC project, with a significant immediate payoff. Comments from carter on reddit, who would also be happy to mentor as far as I know: From reddit comments: One element of Hackage I'd very much like to see work on is discoverability -- better search, tagging, a "reverse deps" counter that doesn't eat all our memory, etc. Part of that would be maybe letting people add search columns for things like # of uploads, existence of test suites, whether build reports are green and deps are compatible with various platforms, etc. I know star ratings and comments are considered unreliable and untrustworthy for a number of reasons, but if there were a decent approach to them I'd be keen. Perhaps if we included voting on comments? 
And maybe not star ratings but just "I recommend this" as something somebody could choose to thumbs up (like twitter favs or facebook likes), then that could help? I think +1s are a great metric - they are basically a people-who-downloaded-this-and-liked-it-enough-to-mention-it counter. Often, when I'm trying to find a library for something, this metric is good enough for me to get going and avoid going down the wrong path. On the subject of stealing ideas though, I consider the MetaCPAN project a shining example of good metadata. See the sidebar for Catalyst - sparklines, inline issue counters, build machine reports, ratings, +1 aggregates, and even a direct link to IRC. There's a lot we could learn from, right there! From Reddit: I've been using (the excellent) criterion a ton and have a small wishlist of things easily doable in a summer. Mainly around making the reports better, in order from most to least important to me: Cleaned up from reddit:

A) import Data.Map (Map) qualified as M

This would be the same as

import Data.Map (Map)
import qualified Data.Map as M

B) Nested Module Organization

The proposal is detailed here:

C) Qualified Module Export

The proposal is detailed here:

The first is very simple. That combined with either of the others (the second being my longtime favorite) would make a good GSoC imho. Unary natural numbers (also known as Peano naturals) have the stigma of slowness associated with them. This proposal suggests rehabilitating Nat (and also its ornamented cousin, the linked list) as an efficient data structure on modern CPUs. I'll add more ideas later (possibly as comments), but for now let's consider a bare bones case…

data Nat where
  Z :: Nat
  S :: !Nat -> Nat

is the type of strict unary naturals. Its values can only be Z, S Z, S (S Z), …, S^n Z. I propose to introduce an analysis detecting this shape of constructors and store the machine integer (unsigned) n instead of a linked heap-allocated chain.
(Some considerations must be invested to deal with overflows). The general principle is to count the known S constructors. GHC already performs constructor tagging, and my suggestion might be regarded as a generalization of it to more than 3 (2 on 32-bit architectures) tag bits. The real benefit comes from algorithms on Nat, such as

double :: Nat -> Nat
double Z = Z
double (S m) = S (S (double m))

Each S above could be a batch pattern match of S^n, thus arriving at the efficient double (S^n Z) = S^(2*n) Z if we set Z = S^0 Z. This is (basically) a shift-left machine instruction by one bit. Handling non-strict naturals needs adding extra knowledge about non-constructors, but the basic mechanics (and benefits) would be similar. For strict/lazy-head linked lists the corresponding machine-optimized layout would be the segmented vector with e.g. up to 64 memory-consecutive heads. The exact details need to be worked out and there is some wiggle-room for cleverness. Summary: I'd like to give back the honor of efficiency to the famously elegant concept of Peano naturals by generalizing pointer tagging to many bits. A catch-all ticket for Haddock work. One idea is to allow Haddock to generate not only the full correct signature for functions, but also "example" type-specialized versions, perhaps hidden in a mouseover -- as discussed at: Another general area to look at (not yet a full proposal) is moving haddock from "documented modules" towards "a system for building documentation". Some ideas would be better doctest integration, improving inclusion of images and such, perhaps a plugin to generate dependency graphs and visualizations. Also, rather than cutting haddock over to markdown, allowing "markdown-only" documentation, not necessarily coupled to the source of particular modules, to be bundled and also provided in the generated html. Similarly, inclusion of a changelog file could be useful.
Generally, more ideas could be derived from looking at the state-of-the-art in other languages' package documentation systems and looking for things to borrow. As described in this mailing list thread: Template Haskell is not available to cross-compilers at the moment. This is because it is only available in a "stage 2" compiler (i.e. self-hosted). Meanwhile, cross-compilation is performed by a "stage 1" compiler -- running on a host to produce executables for a target. The thread describes a solution developed for ghcjs that allows a stage 1 compiler on a host to _drive_ a stage 2 compiler on a target only for the expansion of Template Haskell splices. "The ultimate goal would be a multi-target compiler, that could produce a stage2 compiler that runs on the host but can produce binaries for multiple targets of which the host its running on is one." There remains a fair amount of work to integrate this solution into the main GHC code base, but it would be very worthwhile. This is a relatively advanced project, but for the right student it could be a great exercise. Luite, who developed the solution for ghcjs, had offered to mentor. The recent splitMix work of Steele, Lea, and Flood () describes a splittable algorithm for random number generation. Simon Peyton Jones has noted: "Moreover, splitMix has a published paper to describe it, which is massively better (in a tricky area) than an un-documented pile of code. I knew that Guy was working on this but I didn't know it was now published. Excellent! I would love to see this happen." The goal of this project would be, guided by the paper and the working code available in Java JDK8's java.util.SplittableRandom, to bring the algorithm back to an efficient Haskell implementation, suitable for use as an instance of RandomGen. We could then seek to replace the current StdGen with this better implementation.
Create a graphic animation of fireworks in Python (with Turtle) with me to celebrate the 4th of July! Use loops and random numbers to draw colorful fireworks in the night sky.

Who is this for?

- Language: Python (with Turtle)
- Juni Level: Python Level 1
- Ages: 11+
- Coding experience: Beginner, or familiar with basic concepts
- Difficulty Level: Easy, but every student is different!

Learning Outcomes

Project Demo

Click run to see the Fireworks project in action below. You can also view my project solution code if you get stuck. What to look out for:

- The background is black, like the night sky.
- There are lots of fireworks, but they all have the same shape.
- Each one has a random color, random size, and random location.
- If you slow down the speed, you can see that each firework is made by making 36 lines.
- To make each line, the turtle goes forward, then goes backward the same distance, and then turns a little to get ready for the next line.

General Steps to Build the Project

- Make just one firework, without any of the randomness.
- Make that firework appear in a random place.
- Make that firework a random size.
- Make that firework a random color.
- Make the background black.
- Use a loop to make your code draw lots of fireworks.

Step-by-Step Tutorial

Step 1: Make just one firework, without any of the randomness

- First, import turtle.
- Then, create a variable to store your turtle.
- Then, make the first line of your firework.
- Make sure to go forward and then backward so that you end up back where you started!
- Then, turn the turtle so that he's ready for the next line.

Hint: How much should we turn the turtle? There are 360 degrees in a circle, and in the example each firework had 36 lines, so we can divide those numbers to figure out how many degrees we should turn!

- Now, use a loop to make the turtle draw 36 lines!
Hint: Remember, you can use for i in range(): to repeat a piece of code; you just put the number of repetitions you want inside of range's parentheses!

Step 2: Make that firework appear in a random place

- First, import random.
- Then, before our firework code, create a variable and store our random x position.

Hint 1: Remember, you can use random.randint() to generate a random number! Just decide on the range you want, and put those numbers in randint's parentheses! For example, if I want a random number from 1 to 10, I would write random.randint(1, 10)!

Hint 2: Your Python with Turtle screen's x and y coordinates normally range from about -250 to 250, but it varies from screen to screen!

- Then, create a variable and store our random y position.

Hint: It's going to be really similar to what you just did for the x position!

- Use penup(), pendown(), and goto() to make your turtle go to the random position before it draws its firework.

Step 3: Make that firework a random size

- Before your firework code, create a variable and store a random number for the length of the firework's lines.
- Replace the number you're currently using to tell the turtle how far to go with your new variable.

Step 4: Make that firework a random color

- Before our firework code, create a random number between 0 and 255, and store it in a variable. This will represent how much red is in our color!
- Right after that, create a random number between 0 and 255, and store it in a variable. This will represent how much green is in our color!
- Right after that, create a random number between 0 and 255, and store it in a variable. This will represent how much blue is in our color!
- Use color() and the three variables you just made to make your turtle turn a random color.

Step 5: Make the background black

- At the top of your code, right under your import statements, make a new variable and set it equal to turtle.Screen(). This will represent our screen!
- Use the function bgcolor() on your new screen variable to change the background color of the screen! Put the name of the color you want inside of its parentheses to change the screen to that color.

Step 6: Use a loop to make your code draw lots of fireworks

Take all of the code we made that affects how our firework looks, and put it in a loop! Decide how many fireworks you want, and use that number when you're creating your loop!

Extra Features

If you want, use what we learned to create a bunch of stars in the sky before you start drawing your fireworks! Check out this example of how I made both fireworks and stars in my sky. Stars are like fireworks, but they only have 5 lines instead of 36. They're also a lot smaller! I made mine all white, but you can make yours different colors if you want.
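The random pieces from Steps 2 through 4 can be sketched without opening a Turtle window. In this sketch, the function name firework_params and the size range 20 to 80 are illustrative choices, not part of the tutorial:

```python
import random

def firework_params(num_lines=36, min_size=20, max_size=80):
    """Pick the turn angle, a random line length, and a random RGB color
    for one firework, mirroring Steps 1-4 of the tutorial."""
    angle = 360 / num_lines                    # degrees to turn after each line
    size = random.randint(min_size, max_size)  # random length for every line
    color = tuple(random.randint(0, 255) for _ in range(3))  # random R, G, B
    return angle, size, color

angle, size, color = firework_params()
print(angle)  # 10.0 -- 360 degrees split across 36 lines
```

In the real project, angle would go into right(), size into forward() and backward(), and the three color values into color().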
Earlier this year, Github released Atom-Shell, the core of its famous open-source editor Atom, and renamed it to Electron for the special occasion. Electron, unlike other competitors in the category of Node.js-based desktop applications, brings its own twist to this already well-established market by combining the power of Node.js (io.js until recent releases) with the Chromium Engine to bring us the best of both server and client-side JavaScript. Imagine a world where we could build performant, data-driven, cross-platform desktop applications powered by not only the ever-growing repository of NPM modules, but also the entire Bower registry to fulfill all our client-side needs. In this tutorial, we will build a simple password keychain application using Electron, Angular.js and Loki.js, a lightweight and in-memory database with a familiar syntax for MongoDB developers. The full source code for this application is available here. This tutorial assumes that: - The reader has Node.js and Bower installed on their machine. - They are familiar with Node.js, Angular.js and MongoDB-like query syntax. Getting the Goods First things first, we will need to get the Electron binaries in order to test our app locally. We can install it globally and use it as a CLI, or install it locally in our application’s path. I recommend installing it globally, so that way we do not have to do it over and over again for every app we develop. We will learn later how to package our application for distribution using Gulp. This process involves copying the Electron binaries, and therefore it makes little to no sense to manually install it in our application’s path. To install the Electron CLI, we can type the following command in our terminal: $ npm install -g electron-prebuilt To test the installation, type electron -h and it should display the version of the Electron CLI. At the time this article was written, the version of Electron was 0.31.2. 
Setting up the Project

Let's assume the following basic folder structure:

my-app
|- cache/
|- dist/
|- src/
|-- app.js
|- gulpfile.js

… where:

- cache/ will be used to download the Electron binaries when building the app.
- dist/ will contain the generated distribution files.
- src/ will contain our source code.
- src/app.js will be the entry point of our application.

Next, we will navigate to the src/ folder in our terminal and create the package.json and bower.json files for our app:

$ npm init
$ bower init

We will install the necessary packages later on in this tutorial.

Understanding Electron Processes

Electron distinguishes between two types of processes:

- The Main Process: The entry point of our application, the file that will be executed whenever we run the app. Typically, this file declares the various windows of the app, and can optionally be used to define global event listeners using Electron's IPC module.
- The Renderer Process: The controller for a given window in our application. Each window creates its own Renderer Process. For code clarity, a separate file should be used for each Renderer Process.

To define the Main Process for our app, we will open src/app.js and include the app module to start the app, and the browser-window module to create the various windows of our app (both part of the Electron core), as such:

var app = require('app'),
    BrowserWindow = require('browser-window');

When the app is actually started, it fires a ready event, which we can bind to. At this point, we can instantiate the main window of our app:

var mainWindow = null;

app.on('ready', function() {
  mainWindow = new BrowserWindow({
    width: 1024,
    height: 768
  });
  mainWindow.loadUrl('file://' + __dirname + '/windows/main/main.html');
  mainWindow.openDevTools();
});
- It takes an object as a single argument, allowing us to define various settings, amongst which the default width and height of the window. - The window instance has a loadUrl()method, allowing us to load the contents of an actual HTML file in the current window. The HTML file can either be local or remote. - The window instance has an optional openDevTools()method, allowing us to open an instance of the Chrome Dev Tools in the current window for debugging purposes. Next, we should organize our code a little. I recommend creating a windows/ folder in our src/ folder, and where we can create a subfolder for each window, as such: my-app |- src/ |-- windows/ |--- main/ |---- main.controller.js |---- main.html |---- main.view.js … where main.controller.js will contain the “server-side” logic of our application, and main.view.js will contain the “client-side” logic of our application. The main.html file is simply an HTML5 webpage, so we can simply start it like this: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Password Keychain</title> </head> <body> <h1>Password Keychain</h1> </body> </html> At this point, our app should be ready to run. To test it, we can simply type the following in our terminal, at the root of the src folder: $ electron . We can automate this process by defining the startscript of the package.son file. Building a Password Keychain Desktop App To build a password keychain application, we need: - A way to add, generate and save passwords. - A convenient way to copy and remove passwords. Generating and Saving Passwords A simple form will suffice to insert new passwords. For the sake of demonstrating communication between multiple windows in Electron, start by adding a second window in our application, which will display the “insert” form. 
Since we will open and close this window multiple times, we should wrap up the logic in a method so that we can simply call it when needed: function createInsertWindow() { insertWindow = new BrowserWindow({ width: 640, height: 480, show: false }); insertWindow.loadUrl('file://' + __dirname + '/windows/insert/insert.html'); insertWindow.on('closed',function() { insertWindow = null; }); } Key points: - We will need to set the show property to false in the options object of the BrowserWindow constructor, in order to prevent the window from being open by default when the applications starts. - We will need to destroy the BrowserWindow instance whenever the window is firing a closed event. Opening and Closing the “Insert” Window The idea is to be able to trigger the “insert” window when the end user clicks a button in the “main” window. In order to do this, we will need to send a message from the main window to the Main Process to instruct it to open the insert window. We can achieve this using Electron’s IPC module. There are actually two variants of the IPC module: - One for the Main Process, allowing the app to subscribe to messages sent from windows. - One for the Renderer Process, allowing the app to send messages to the main process. Although Electron’s communication channel is mostly uni-directional, it is possible to access the Main Process’ IPC module in a Renderer Process by making use of the remote module. Also, the Main Process can send a message back to the Renderer Process from which the event originated by using the Event.sender.send() method. To use the IPC module, we just require it like any other NPM module in our Main Process script: var ipc = require('ipc'); … and then bind to events with the on() method: ipc.on('toggle-insert-view', function() { if(!insertWindow) { createInsertWindow(); } return (!insertWindow.isClosed() && insertWindow.isVisible()) ? 
        insertWindow.hide() : insertWindow.show();
});

Key Points:

- We can name the event however we want; the name used in the example is arbitrary.
- Do not forget to check whether the BrowserWindow instance is already created; if not, instantiate it.
- The BrowserWindow instance has some useful methods:
  - isClosed(): returns a boolean, whether or not the window is currently in a closed state.
  - isVisible(): returns a boolean, whether or not the window is currently visible.
  - show() / hide(): convenience methods to show and hide the window.

Now we actually need to fire that event from the Renderer Process. We will create a new script file called main.view.js, and add it to our HTML page like we would with any normal script:

<script src="./main.view.js"></script>

The script tag loads this file in a client-side context. This means that, for example, global variables are available via window.<var_name>. To load a script in a server-side context, we can use the require() method directly in our HTML page: require('./main.controller.js');.

Even though the script is loaded in a client-side context, we can still access the IPC module for the Renderer Process in the same way that we can for the Main Process, and then send our event as such:

var ipc = require('ipc');

angular
    .module('Utils', [])
    .directive('toggleInsertView', function() {
        return function(scope, el) {
            el.bind('click', function(e) {
                e.preventDefault();
                ipc.send('toggle-insert-view');
            });
        };
    });

There is also a sendSync() method available, in case we need to send our events synchronously.
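The branching in the toggle-insert-view listener is easy to get subtly wrong, so here is the same decision isolated as a plain function. This is only a sketch of the logic: the win argument and the toggleAction name are inventions of this sketch standing in for a BrowserWindow instance, not real Electron API code.

```javascript
// Sketch of the 'toggle-insert-view' decision, extracted so it can run
// (and be tested) outside Electron. `win` mimics a BrowserWindow:
// null means the window has not been created yet.
function toggleAction(win) {
  if (!win) return 'create';                 // instantiate it (hidden, since show: false)
  return (!win.isClosed() && win.isVisible())
    ? 'hide'                                 // open and visible: hide it
    : 'show';                                // otherwise: show it
}

toggleAction(null); // → 'create'
```

The real handler then maps 'create' to createInsertWindow(), and 'hide'/'show' to the corresponding BrowserWindow methods.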
Now, all we have left to do to open the "insert" window is to create an HTML button with the matching Angular directive on it:

<div ng-controller="MainCtrl as vm">
    <button toggle-insert-view>
        <i class="material-icons">add</i>
    </button>
</div>

And add that directive as a dependency of the main window's Angular controller:

angular
    .module('MainWindow', ['Utils'])
    .controller('MainCtrl', function() {
        var vm = this;
    });

Generating Passwords

To keep things simple, we can just use the NPM uuid module to generate unique IDs that will act as passwords for the purpose of this tutorial. We can install it like any other NPM module, require it in our 'Utils' script and then create a simple factory that will return a unique ID:

var uuid = require('uuid');

angular
    .module('Utils', [])
    ...
    .factory('Generator', function() {
        return {
            create: function() {
                return uuid.v4();
            }
        };
    })

Now, all we have left to do is create a button in the insert view, and attach a directive to it that will listen to click events on the button and call the create() method:

<!-- in insert.html -->
<button generate-password>generate</button>

// in Utils.js
angular
    .module('Utils', [])
    ...
    .directive('generatePassword', ['Generator', function(Generator) {
        return function(scope, el) {
            el.bind('click', function(e) {
                e.preventDefault();
                if(!scope.vm.formData) scope.vm.formData = {};
                scope.vm.formData.password = Generator.create();
                scope.$apply();
            });
        };
    }])

Saving Passwords

At this point, we want to store our passwords. The data structure for our password entries is fairly simple:

{
    "id": String,
    "description": String,
    "username": String,
    "password": String
}

So all we really need is some kind of in-memory database that can optionally sync to file for backup. For this purpose, Loki.js seems like the ideal candidate. It does exactly what we need for the purpose of this application, and on top of that offers the Dynamic Views feature, allowing us to do things similar to MongoDB's Aggregation module.
Dynamic Views do not offer all the functionality that MongoDB's Aggregation module does. Please refer to the documentation for more information.

Let's start by creating a simple HTML form (the attribute values eaten during extraction are reconstructed here; the InsertCtrl controller name follows the same "controller as vm" pattern used for the main window):

<div class="insert" ng-controller="InsertCtrl as vm">
    <form name="insertForm" novalidate>
        <fieldset>
            <div class="mdl-textfield">
                <input class="mdl-textfield__input" type="text" id="description" ng-model="vm.formData.description">
                <label class="mdl-textfield__label" for="description">Description...</label>
            </div>
            <div class="mdl-textfield">
                <input class="mdl-textfield__input" type="text" id="username" ng-model="vm.formData.username">
                <label class="mdl-textfield__label" for="username">Username...</label>
            </div>
            <div class="mdl-textfield">
                <input class="mdl-textfield__input" type="password" id="password" ng-model="vm.formData.password">
                <label class="mdl-textfield__label" for="password">Password...</label>
            </div>
            <div class="">
                <button generate-password>generate</button>
                <button toggle-insert-view>cancel</button>
                <button save-password>save</button>
            </div>
        </fieldset>
    </form>
</div>

And now, let's add the JavaScript logic to handle posting and saving of the form's contents:

var loki = require('lokijs'),
    path = require('path');

angular
    .module('Utils', [])
    ...
    .service('Storage', ['$q', function($q) {
        this.db = new loki(path.resolve(__dirname, '../..', 'app.db'));
        this.collection = null;
        this.loaded = false;

        this.init = function() {
            var d = $q.defer();
            this.reload()
                .then(function() {
                    this.collection = this.db.getCollection('keychain');
                    d.resolve(this);
                }.bind(this))
                .catch(function(e) {
                    // create collection
                    this.db.addCollection('keychain');
                    // save and create file
                    this.db.saveDatabase();
                    this.collection = this.db.getCollection('keychain');
                    d.resolve(this);
                }.bind(this));
            return d.promise;
        };

        this.addDoc = function(data) {
            var d = $q.defer();
            if(this.isLoaded() && this.getCollection()) {
                this.getCollection().insert(data);
                this.db.saveDatabase();
                d.resolve(this.getCollection());
            } else {
                d.reject(new Error('DB NOT READY'));
            }
            return d.promise;
        };
    }])
    .directive('savePassword', ['Storage', function(Storage) {
        return function(scope, el) {
            el.bind('click', function(e) {
                e.preventDefault();
                if(scope.vm.formData) {
                    Storage
                        .addDoc(scope.vm.formData)
                        .then(function() {
                            // reset form & close insert window
                            scope.vm.formData = {};
                            ipc.send('toggle-insert-view');
                        });
                }
            });
        };
    }])

Key Points:

- We first need to initialize the database. This process involves creating a new instance of the Loki object, providing the path to the database file as an argument, looking up whether that backup file exists, creating it if needed (including the 'keychain' collection), and then loading the contents of this file into memory.
- We can retrieve a specific collection in the database with the getCollection() method.
- A collection object exposes several methods, including an insert() method, allowing us to add a new document to the collection.
- To persist the database contents to file, the Loki object exposes a saveDatabase() method.
- We will need to reset the form's data and send an IPC event to tell the Main Process to close the window once the document is saved.

We now have a simple form allowing us to generate and save new passwords.
Let’s go back to the main view to list these entries.

Listing Passwords

A few things need to happen here:

- We need to be able to get all the documents in our collection.
- We need to inform the main view whenever a new password is saved so it can refresh the view.

We can retrieve the list of documents by calling the getCollection() method on the Loki object. This method returns an object with a property called data, which is simply an array of all the documents in that collection:

this.getCollection = function() {
    this.collection = this.db.getCollection('keychain');
    return this.collection;
};

this.getDocs = function() {
    return (this.getCollection()) ? this.getCollection().data : null;
};

We can then call getDocs() in our Angular controller and retrieve all the passwords stored in the database, after we initialize it:

angular
    .module('MainView', ['Utils'])
    .controller('MainCtrl', ['Storage', function(Storage) {
        var vm = this;
        vm.keychain = null;
        Storage
            .init()
            .then(function(db) {
                vm.keychain = db.getDocs();
            });
    }]);

A bit of Angular templating, and we have a password list (the directive attributes lost during extraction are reconstructed here):

<tr ng-repeat="item in vm.keychain">
    <td class="mdl-data-table__cell--non-numeric">{{item.description}}</td>
    <td>{{item.username || 'n/a'}}</td>
    <td>
        <span ng-if="item.password">••••••••</span>
    </td>
    <td>
        <a href="#" copy-password="{{$index}}">copy</a>
        <a href="#" remove-password="{{$index}}">remove</a>
    </td>
</tr>

A nice added feature would be to refresh the list of passwords after inserting a new one. For this, we can use Electron's IPC module. As mentioned earlier, the Main Process' IPC module can be called in a Renderer Process to turn it into a listener process, by using the remote module.
Here is an example of how to implement it in main.view.js:

var remote = require('remote'),
    remoteIpc = remote.require('ipc');

angular
    .module('MainView', ['Utils'])
    .controller('MainCtrl', ['Storage', function(Storage) {
        var vm = this;
        vm.keychain = null;
        Storage
            .init()
            .then(function(db) {
                vm.keychain = db.getDocs();
                remoteIpc.on('update-main-view', function() {
                    Storage
                        .reload()
                        .then(function() {
                            vm.keychain = db.getDocs();
                        });
                });
            });
    }]);

Key Points:

- We will need to use the remote module via its own require() method to require the remote IPC module from the Main Process.
- We can then set up our Renderer Process as an event listener via the on() method, and bind callback functions to these events.

The insert view will then be in charge of dispatching this event whenever a new document is saved:

Storage
    .addDoc(scope.vm.formData)
    .then(function() {
        // refresh list in main view
        ipc.send('update-main-view');
        // reset form & close insert window
        scope.vm.formData = {};
        ipc.send('toggle-insert-view');
    });

Copying Passwords

It is usually not a good idea to display passwords in plain text. Instead, we are going to hide them and provide a convenience button allowing the end user to copy the password for a specific entry directly. Here again, Electron comes to our rescue by providing us with a clipboard module with easy methods to copy and paste not only text content, but also images and HTML code:

var clipboard = require('clipboard');

angular
    .module('Utils', [])
    ...
    .directive('copyPassword', [function() {
        return function(scope, el, attrs) {
            el.bind('click', function(e) {
                e.preventDefault();
                var text = (scope.vm.keychain[attrs.copyPassword]) ?
                    scope.vm.keychain[attrs.copyPassword].password : '';
                // atom's clipboard module
                clipboard.clear();
                clipboard.writeText(text);
            });
        };
    }]);

Since the generated password will be a simple string, we can use the writeText() method to copy the password to the system's clipboard.
We can then update our main view HTML, and add the copy button with the copy-password directive on it, providing the index of the array of passwords:

<a href="#" copy-password="{{$index}}">copy</a>

Removing Passwords

Our end users might also like to be able to delete passwords, in case they become obsolete. To do this, all we need to do is call the remove() method on the keychain collection. We need to provide the entire doc to the remove() method, as such:

this.removeDoc = function(doc) {
    return function() {
        var d = $q.defer();
        if(this.isLoaded() && this.getCollection()) {
            // remove the doc from the collection & persist changes
            this.getCollection().remove(doc);
            this.db.saveDatabase();
            // inform the insert view that the db content has changed
            ipc.send('reload-insert-view');
            d.resolve(true);
        } else {
            d.reject(new Error('DB NOT READY'));
        }
        return d.promise;
    }.bind(this);
};

Loki.js documentation states that we can also remove a doc by its id, but that does not seem to work as expected.

Creating a Desktop Menu

Electron integrates seamlessly with our OS desktop environment to provide a "native" look & feel to our apps. To that end, Electron comes bundled with a Menu module, dedicated to creating complex desktop menu structures for our app.

The menu module is a vast topic and almost deserves a tutorial of its own. I strongly recommend you read through Electron's Desktop Environment Integration tutorial to discover all the features of this module. For the scope of the current tutorial, we will see how to create a custom menu, add a custom command to it, and implement the standard quit command.

Creating & Assigning a Custom Menu to Our App

Typically, the JavaScript logic for an Electron menu would belong in the main script file of our app, where our Main Process is defined.
However, we can abstract it to a separate file, and access the Menu module via the remote module:

var remote = require('remote'),
    Menu = remote.require('menu');

To define a simple menu, we will need to use the buildFromTemplate() method:

var appMenu = Menu.buildFromTemplate([
    {
        label: 'Electron',
        submenu: [{
            label: 'Credits',
            click: function() {
                alert('Built with Electron & Loki.js.');
            }
        }]
    }
]);

The first item in the array is always used as the "default" menu item. The value of the label property does not matter much for the default menu item. In dev mode it will always display Electron. We will see later how to assign a custom name to the default menu item during the build phase.

Finally, we need to assign this custom menu as the default menu for our app with the setApplicationMenu() method:

Menu.setApplicationMenu(appMenu);

Mapping Keyboard Shortcuts

Electron provides "accelerators", a set of pre-defined strings that map to actual keyboard combinations, e.g. Command+A or Ctrl+Shift+Z.

The Command accelerator does not work on Windows or Linux.

For our password keychain application, we should add a File menu item, offering two commands:

- Create Password: open the insert view with Cmd (or Ctrl) + N
- Quit: quit the app altogether with Cmd (or Ctrl) + Q

...
{
    label: 'File',
    submenu: [
        {
            label: 'Create Password',
            accelerator: 'CmdOrCtrl+N',
            click: function() {
                ipc.send('toggle-insert-view');
            }
        },
        {
            type: 'separator' // to create a visual separator
        },
        {
            label: 'Quit',
            accelerator: 'CmdOrCtrl+Q',
            selector: 'terminate:' // OS X only!!!
        }
    ]
}
...

Key Points:

- We can add a visual separator by adding an item to the array with the type property set to separator.
- The CmdOrCtrl accelerator is compatible with both Mac and PC keyboards.
- The selector property is OS X-compatible only!

Styling Our App

You probably noticed throughout the various code examples references to class names starting with mdl-.
For the purpose of this tutorial I opted to use the Material Design Lite UI framework, but feel free to use any UI framework of your choice. Anything that we can do with HTML5 can be done in Electron; just keep in mind the growing size of the app's binaries, and the resulting performance issues that may occur if you use too many third-party libraries.

Packaging Electron Apps for Distribution

You made an Electron app, it looks great, you wrote your e2e tests with Selenium and WebDriver, and you are ready to distribute it to the world! But you still want to personalize it, give it a custom name other than the default "Electron", and maybe also provide custom application icons for both Mac and PC platforms.

Building with Gulp

These days, there is a Gulp plugin for anything we can think of. All I had to do was type "gulp electron" into Google, and sure enough there is a gulp-electron plugin! This plugin is fairly easy to use as long as the folder structure detailed at the beginning of this tutorial was maintained. If not, you might have to move things around a bit.

This plugin can be installed like any other Gulp plugin:

$ npm install gulp-electron --save-dev

And then we can define our Gulp task as such:

var gulp = require('gulp'),
    electron = require('gulp-electron'),
    info = require('./src/package.json');

gulp.task('electron', function() {
    gulp.src("")
        .pipe(electron({
            src: './src',
            packageJson: info,
            release: './dist',
            cache: './cache',
            version: 'v0.31.2',
            packaging: true,
            platforms: ['win32-ia32', 'darwin-x64'],
            platformResources: {
                darwin: {
                    CFBundleDisplayName: info.name,
                    CFBundleIdentifier: info.bundle,
                    CFBundleName: info.name,
                    CFBundleVersion: info.version
                },
                win: {
                    "version-string": info.version,
                    "file-version": info.version,
                    "product-version": info.version
                }
            }
        }))
        .pipe(gulp.dest(""));
});

Key Points:

- The src/ folder cannot be the same as the folder where the Gulpfile.js is, nor the same folder as the distribution folder.
- We can define the platforms we wish to export to via the platforms array.
- We should define a cache folder, where the Electron binaries will be downloaded so they can be packaged with our app.
- The contents of the app's package.json file need to be passed to the gulp task via the packageJson property.
- There is an optional packaging property, allowing us to also create zip archives of the generated apps.
- For each platform, there is a different set of "platform resources" that can be defined.

Adding App Icons

One of the platformResources properties is the icon property, allowing us to define a custom icon for our app:

"icon": "keychain.ico"

OS X requires icons with the .icns file extension. There are multiple online tools allowing us to convert .png files into .ico and .icns for free.

Conclusion

In this article we have only scratched the surface of what Electron can actually do. Think of great apps like Atom or Slack as a source of inspiration for where you can go with this tool. I hope you found this tutorial useful; please feel free to leave your comments and share your experiences with Electron!
https://www.toptal.com/javascript/electron-cross-platform-desktop-apps-easy
Someone pointed me to this site as "like Slashdot, only with people who can actually think". Sort of like the Scary Devil Monastery on a web site, I guess, though the "Certification" process seems a mite complex and also, well, kind of limited. Also, what the hell is wrong with calling a Journeyman a "Journeyman"? Ah well, I guess I'll write up something about myself, troll for votes, see who bites... and maybe use the opportunity to get some exposure for my free software tools. Anyway, I've been writing free software since 1981. I've written a lot of small programs, and worked on a few big ones (a lot of the semantics of Tcl are my fault, for example... Karl Lehenbauer and I contributed a lot of stuff to it early on, though for some reason they scrubbed most of the early contributors names from the sources around the Tcl 6-7 timeframe). I also did Patch Kit 24 to 386BSD and I've been working on FreeBSD since the beginning. What else? Oh, I'm the demented person who kicked off Usenet II and The Internet namespace Cooperative. You can see some of the free tools I've written at my software page. And I can't believe that nobody caught on to the Slashdot lawsuit hoax. Cordwainer Bird? Doesn't anyone read *paper* books any more? Whoops the diary code needs a little tuning. Somewhere in there it's got a chunk of code that looks something like... If the last line was blank, insert a <p> at the beginning of this line. That really borks editing. It really needs to be... If the last line was blank, and this line doesn't start with a <p> insert a <p> at the beginning of this line. Does anyone else try to use "vi" commands when editing something in a browser?
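The fix described above can be sketched in a few lines; JavaScript is used here purely for illustration, since the entry doesn't show what language the actual diary code is written in, and the function name is an invention of this sketch.

```javascript
// Sketch of the corrected rule: after a blank line, insert a <p> at the
// start of the next line only if it doesn't already begin with one.
function addParagraphTags(lines) {
  return lines.map(function (line, i) {
    var prevBlank = i > 0 && lines[i - 1].trim() === '';
    return (prevBlank && line !== '' && !/^<p>/i.test(line))
      ? '<p>' + line
      : line;
  });
}

addParagraphTags(['a', '', 'b']); // → ['a', '', '<p>b']
```

Without the /^<p>/ check, re-editing an entry would keep prepending tags on every save, which is the "borks editing" behavior the entry complains about.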
http://www.advogato.org/person/argent/diary.html?start=0
In Part II, we looked at how we can use anonymous methods and lambdas to pass in-line method implementations to a method which takes a delegate as a parameter. In this article, we will look at how we can leverage the power of generics in order to make our delegates more ...well ...generic! Continuing with our math theme, let's look again at our original MathFunction delegate: MathFunction // C# delegate int MathFunction(int int1, int int2); ' Visual Basic Delegate Function MathFunction(ByVal int1 As Integer, _ ByVal int2 As Integer) As Integer Well that is fine, providing all we ever want to work with are integers! In the real world, however, we are probably going to want to perform mathematical operations on all sorts of different data types. By using a generic delegate, we can have a single delegate which will handle any data type we like in a type-safe manner: // C# delegate TResult MathFunction<T1, T2, TResult>(T1 var1, T2 var2); ' Visual Basic Delegate Function MathFunction(Of T1, T2, TResult)(_ ByVal var1 As T1, ByVal var2 As T2) As TResult Our delegate will now accept parameters of any arbitrary type and return a result of any arbitrary type. Next, we need to modify our PrintResult() method accordingly to handle our new delegate: PrintResult() // C# static void PrintResult<T1, T2, TResult>(MathFunction<T1, T2, TResult> mathFunction, T1 var1, T2 var2) { TResult result = mathFunction(var1, var2); Console.WriteLine(String.Format("Result is {0}", result)); } ' Visual Basic Sub PrintResult(Of T1, T2, TResult)(ByVal mathFunction As _ MathFunction(Of T1, T2, TResult), ByVal var1 As T1, ByVal var2 As T2) Dim result As TResult = mathFunction(var1, var2) Console.WriteLine(String.Format("Result is {0}", result)) End Sub Our simple calculator should now be able to perform mathematical operations on any data type of our choosing. 
Here are some examples: // C# PrintResult((x, y) => x / y, 5, 2); // Integer division - Result: 2 PrintResult((x, y) => x / y, 5, 2.0); // Real division - Result: 2.5 // Circumference of a circle - Result: 157.07963267949 PrintResult((x, y) => 2 * y * x, 25, Math.PI); // Area of circle - Result: 1963.49540849362 PrintResult((x, y) => y * Math.Pow(x, 2), 25, Math.PI); PrintResult((x, y) => (x - y).TotalDays, DateTime.Now, new DateTime(1940, 10, 9)); // Days since birth of John Lennon - Result: 25775.8028079865 ' Visual Basic PrintResult(Function(x, y) CInt(x / y), 5, 2) ' Integer division - Result: 2 PrintResult(Function(x, y) x / y, 5, 2.0) ' Real division - Result: 2.5 ' Circumference of a circle - Result: 157.07963267949 PrintResult(Function(x, y) 2 * y * x, 25, Math.PI) ' Area of circle - Result: 1963.49540849362 PrintResult(Function(x, y) y * Math.Pow(x, 2), 25, Math.PI) PrintResult(Function(x, y) (x - y).TotalDays, _ DateTime.Now, New DateTime(1940, 10, 9)) ' Days since birth of John Lennon - Result: 25775.8028079865 In each example, note how the types of x and y, as well as the return type of the lambda expression, are inferred by the compiler. x y Our MathFunction generic delegate can now be used as a type for any method that accepts two parameters of any type and returns a value of any type. As such, our delegate is highly re-usable and not just restricted to our simple calculator scenario. Indeed, Microsoft has included a whole stack of generic Func and Action delegates in the System namespace which would probably cover most conceivable scenarios. As such, we could have used one of those in our example here; however that would have defeated the object of the exercise. I personally believe there is still a strong case for creating your own delegates in certain scenarios, as it often improves code readability. Func Action System Using Generics allows us to write more versatile delegates which can be re-used for a whole variety of different scenarios. 
Even when using generic delegates, the compiler is still able to infer the type of any parameter and the return type. Next: Event.
http://www.codeproject.com/Articles/192027/Delegates-101-Part-III-Generic-Delegates
#include <iostream>
#include <string>

using namespace std;

int main()
{
    for ( int i = 0; i < 10; i++ ) // For loop defines 'i' as 0, while 'i' is less than 10, increment by 1.
    { // Loop starts.
        cout << '\t' << i; // Horizontally prints the value of 'i' up to 9 with tab gaps between numbers.
    } // Loop terminates. This handles ONLY the first line of output and is disregarded afterward.

    cout << endl; // New line. Comprehension 100% up to this point.

    for ( int i = 0; i < 10; i++ ) // New for loop created, 'i' defined as 0 again, same as above.
    { // Outer loop starts.
        cout << i; // No idea how this line's behaviour is influenced. By itself prints 'i''s value.
                   // Without the nested loop below, it prints 0 through 9 with no gaps horizontally.
                   // With the nested loop below, it results in the multiplication table. ???
        for ( int j = 0; j < 10; j++ ) // Inner loop defines 'j' as 0, while 'j' is less than 10, increment by 1.
        { // Inner nested loop starts.
            cout << '\t' << i * j; // Prints result and inserts tab gap before each. Grid is created here? Don't know. How does it make this perfect grid?
        } // Inner nested loop terminates.
        cout << endl; // New line.
    } // Outer loop terminates.
}

for (int i=0; i < 10; ++i)
{
    std::cout << i;

    for (int j=0; j < 10; ++j)
    {
        std::cout << "\t" << i * j;
    }

    std::cout << std::endl;
}
http://www.cplusplus.com/forum/beginner/129419/
Unless you’re aiming for the next great text-only game, chances are something’s going to move in your game. From ships and planets to fireballs and monsters, what are some ways to get them going? JavaScript-based games in Windows 8 can take advantage of many different options. Here are some of them: We’ll take a look at each of these, but first it’s important to know game loop options. To get things going, you need code that handles input, updates game state, and draws/moves things. You might think to start with a for loop, and move something across the screen. E.g.: for (i = 0; i<10; i++){ smiley.x += 20; } What the user actually sees: You’ll probably just see the item popping instantaneously to the final location. Why? The interim positions will change more quickly than the screen will update. Not the best animation, is it? Later, we’ll get to CSS features and animation frameworks that offer ways to chain animations with logical delays (“move this here, wait a bit, then move it there”), but in the meantime, let’s leave the for loop behind… Generally, game loops work by calling a function repeatedly, but with a delay between each call. You can use setTimeout to do something like this: function gameLoop() { // Update stuff… // Draw stuff… setTimeout(gameLoop, 1000/60); // Target 60 frames per second } But there’s some trouble in setTimeout/setInterval paradise. They work by setting a fixed amount of time between loops, but what if the system isn’t ready to draw an updated frame? Any drawing code executed is wasted because nobody will see the update. There’s now a more efficient way - requestAnimationFrame. Instead of specifying how long until the next iteration, requestAnimationFrame asks the browser to call back when it’s ready to display the next frame. 
For a demonstration, see the requestAnimationFrame example (and the Windows SDK sample):

Important: to benefit from requestAnimationFrame, you should separate your state-updating code from your visual rendering/drawing code. The update loop can continue to use setTimeout to run consistently and keep the game going - process input, update state, communicate, etc. - but rendering will run only when necessary via requestAnimationFrame. Like setTimeout, requestAnimationFrame needs to be used once per callback. With a separate render loop, you might have something like this:

function renderLoop() {
    requestAnimationFrame(renderLoop);
    // Draw stuff…
}

[Here, I'm using a tip from Martin Wells' Sprite Animation article that putting requestAnimationFrame before drawing code can improve consistency.]

Note that if you're creating a browser-based (vs. Windows Store) game, you'll want to check for vendor-specific prefixed requestAnimationFrame implementations (e.g. window.msRequestAnimationFrame).

That's a basic introduction, but there's more to game loops that you should look into: loop coordination techniques, integrating event-driven updates, and different ways to compute how much to change in each iteration of the game loop.

One way to create HTML games that generally work well across browsers and without plugins is to use a bunch of DOM elements (<div>, <img>, etc.) and move them around. Through some combination of JavaScript and CSS, you could animate and move things around the browser. Let's take a look at how this applies to a Windows Store game. Start Visual Studio 2012 and under JavaScript Windows Store templates, create a game with the "Blank App" template.
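Before diving into the walkthrough, one of the game-loop topics just mentioned, computing how much to change in each iteration, deserves a quick sketch. Scaling movement by elapsed time keeps apparent speed constant even when frame timing varies; the function below is a framework-agnostic illustration, not code from any of the samples.

```javascript
// Frame-rate-independent movement: instead of a fixed number of pixels
// per tick, scale the change by the time elapsed since the last tick.
function updatePosition(x, speedPxPerSec, elapsedMs) {
  return x + speedPxPerSec * (elapsedMs / 1000);
}

// A smooth 60fps frame and a dropped 30fps frame cover the same ground
// per unit of time:
updatePosition(0, 120, 1000); // → 120 after one full second
```

In a real loop, elapsedMs would come from comparing timestamps between iterations (e.g. via Date.now(), or the timestamp requestAnimationFrame passes to its callback).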
Add any image to the game's images folder, then add it via an <img> tag to default.html:

<body>
    <img id="smile" src="images/Smile.png" />
</body>

Set some starting CSS in /css/default.css:

#smile {
    position:absolute;
    background:#ffd800;
    border:4px solid #000;
    border-radius:20px;
}

Now we can write some basic animation in /js/default.js. First, add the init call to the existing args.setPromise line:

args.setPromise(WinJS.UI.processAll().then(init));

Now, add a variable, an init function to start things, and gameLoop to animate things: Give it a run, and prepare to be amazed!

DOM element animation is time-tested and still a perfectly viable option for many situations. However, more options have emerged since then, so, onward!

The latest features in CSS3 (aka "CSS Module Level 3") offer 2D and 3D ways to change and animate presentation styles defined for DOM elements. These approaches can be both concise and easy to maintain. Transforms (both 2D and, now in CSS3, 3D) can change the appearance of an item: rotate, scale, move (called "translate"), and skew (think stretch or italics and you won't be far off). But they simply change from one state to another. Transitions can help you animate the presentation changes from state A to B, simply using CSS. See the Hands On: transitions sample:
Because the individual components of an SVG composition are themselves DOM elements, this is technically a special case of the DOM elements approach, so many of the same techniques apply. Some examples (don’t forget, View Source is your friend): SVG-oids, SVG Dice, and SVG Helicopter. SVG is a powerful option, but when should you use it, and how does it compare with using canvas? See How To Choose Between SVG and Canvas. See also: Support for <canvas> across many browsers has made it a great option games. Though you can add a canvas via HTML (or dynamically with JavaScript), canvas is driven entirely by JavaScript, so creating animations is a matter of knowing what you’re drawing and where, and modifying over time with JavaScript. New to canvas? Try the ”drawing to a canvas” Quickstart. Let’s use canvas in a new example. Create a new Blank App project and add a <canvas> element to default.html: <body> <canvas id="gameCanvas" height="800" width="800"></canvas> </body> Because JavaScript is the only way to “talk” to the canvas, we’ll focus on /js/default.js for the rest. Add a pointer to init() like above, then the function: Because canvas uses immediate mode graphics, it’s not as simple as creating an object and moving by setting location properties (like you can with DOM and SVG animation.) So, canvas-based games will typically create JavaScript classes for things to be tracked and displayed, then update logic simply sets properties (x, y, etc.) on those instances, which the render loop dutifully draws to the canvas. Here’s what you should see when you run the “game”: Of course, there’s a lot more to canvas and canvas animation than that. Canvas can work with images, video, animations, gradients and much more. For details, see the Quickstart and ”How to animate canvas graphics", and for demos see Canvas Pad and Canvas Pinball: Also, there are a number of libraries available to make working with canvas easier, for example EaselJS, which we’ll get to a little later. 
The Windows Library for JavaScript (WinJS) contains helpful namespaces and classes to create animations in an efficient and reliable way. For example, you can use WinJS.UI.executeTransition to activate one or more CSS transitions. Our earlier DOM element example could become: Even more useful, there’s a full library of animations available in WinJS.UI.Animation, including animations for: These are most frequently helpful with Windows Store apps, but games can benefit as well. To see some in action, try the HTML animation library sample from the Windows SDK: See Animating your UI to learn more. Just as we saw with JavaScript physics engines in an earlier post, there are many frameworks out there that can make many of common scenarios easier. Anything that can save time and effort is worth some research. There are many out there, but here are a couple to get you thinking. jQuery is an JavaScript library that helps make common tasks easier (and reliably cross-browser). Among the many things you’ll find in jQuery, the jQuery .animate() method has many features that can apply to game animation. With jQuery.animate, our “smile” example could be written as: $('#smile').animate({ left: '+=500', top: '+=500', }, 4000); A jQuery selector is used to find the smile element, then the animate method will adjust the left and top CSS properties, increasing each by 500 pixels over 4 seconds. For some interesting examples/tutorials, see "13 Excellent jQuery Animation Techniques" on WDL. CreateJS is a family of JavaScript libraries and tools to help make things from canvas, sound, loading, to animations easier. I introduced using CreateJS for Windows Store Games in an earlier post. If you’re new to CreateJS, have a look. As the post details, EaselJS can make using canvas for games much easier. 
It also supports features like SpriteSheet, which uses multiple pictures of the same object to create animations, such as a character running:

However, another CreateJS library, TweenJS, is especially relevant to any discussion of animations. You can use TweenJS to animate objects' values and CSS properties. It also offers a chained syntax, where you can specify animations as steps with delays and other connections to relate them. Like CSS Transitions and Animations, TweenJS also supports easing functions to control the rate of change of each animation.

Not only are there more animation frameworks out there, but there are game-related frameworks as well. I'll probably dedicate a post to this later, but some examples are Crafty, melonJS, Akihabara and more. A great directory of what's out there is the Game Engines list on GitHub. Also see Bob Familiar's "Windows 8 Game Development for the Win" list of various game tools and frameworks (free and commercial) that can lead to Windows Store games.

Ready to dive in? Here are a few more references to get you started. Enjoy, and good luck!

-Chris

[Comment] Interesting; JavaScript is definitely the wave of the future. I think we will be seeing it used for more and more game development. I've been thinking a lot lately about the future of JS on the web: plus.google.com/.../Z3UpNLrwMSW
http://blogs.msdn.com/b/cbowen/archive/2012/09/24/introduction-to-javascript-animation-and-windows-8-games.aspx?PageIndex=52
Oleg Nesterov <oleg@tv-sign.ru> writes:

> On 03/16, Eric W. Biederman wrote:
>>
>> Oleg Nesterov <oleg@tv-sign.ru> writes:
>>
>> > Sukadev Bhattiprolu wrote:
>> >
>> > This means that idle threads (except "swapper") are visible to
>> > for_each_process() and do_each_thread(). Looks dangerous and
>> > somewhat strange to me.
>> >
>> > Could you explain this change?
>>
>> Good catch. I've been so busy pounding reviewing this patches into
>> something that made sense that I missed the fact that we care about
>> this for more than just the NULL pointer that would occur if we didn't
>> do this.

Err. I meant NULL pointer dereference.

> Why it is bad to have a NULL pointer for idle thread? (Sorry for stupid
> question, I can't track the code changes these days).
>
>> Still it would be good if we could find a way to remove this rare
>> special case.
>>
>> Any chance we can undo what we don't want done for_idle, or create
>> a factor of copy_process that only does as much as fork_idle should do,
>> and make copy_process a wrapper that does the rest.
>>
>> I doubt it is significant anywhere but it would be nice to remove a
>> branch that except at boot up never happens.
>
> ... or at cpu-hotplug. Probably you are right, but I am not sure.
>
> The "if (p->pid)" check in essence implements CLONE_UNHASHED flag,
> it may be useful.
>
> Btw. Looking at,
>
> Subject: Explicitly set pgid and sid of init process
> From: Sukadev Bhattiprolu <sukadev@us.ibm.com>
>
> Explicitly set pgid and sid of init process to 1.
>
> Signed-off-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
> Cc: Cedric Le Goater <clg@fr.ibm.com>
> Cc: Dave Hansen <haveblue@us.ibm.com>
> Cc: Serge Hallyn <serue@us.ibm.com>
> Cc: Eric Biederman <ebiederm@xmission.com>
> Cc: Herbert Poetzl <herbert@13thfloor.at>
> Cc: <containers@lists.osdl.org>
> Acked-by: Eric W. Biederman <ebiederm@xmission.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>
> init/main.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff -puN init/main.c~explicitly-set-pgid-and-sid-of-init-process init/main.c
> ---

to take a struct pid pointer instead of a pid_t value. It means
fewer hash table looks ups and it should help in implementing the pid
namespace.

Well the initial kernel process does not have a struct pid so when
it's children start doing:

  attach_pid(p, PIDTYPE_PGID, task_group(p));
  attach_pid(p, PIDTYPE_SID, task_session(p));

We will get an oops.

So a dummy unhashed struct pid was added for the idle threads,
allowing several special cases in the code to be removed.

With that chance the previous special case to force the idle thread
init session 1 pgrp 1 no longer works because attach_pid no longer
looks at the pid value but instead at the struct pid pointers.

So we had to add the __set_special_pids() to continue to keep init
in session 1 pgrp 1. Since /sbin/init calls setsid() that our setting
the sid and the pgrp may not be strictly necessary. Still is better
to not take any chances.

Anyway the point of removing the likely(pid) check was that it didn't
look necessary any longer. But as you have correctly pointed putting
it on the task list and incrementing the process count for the idle
threads is probably still a problem. So while we are much better we
still have some use for the if (likely(p->pid)) special case.

Is that enough to bring you up to speed?

Eric
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  read the FAQ at
https://lkml.org/lkml/2007/3/17/79
I sometimes write Python programs for which it is very difficult to determine, before execution, how much memory they will use. As such, I sometimes invoke a Python program that tries to allocate massive amounts of RAM, causing the kernel to heavily swap and degrade the performance of other running processes. Because of this, I wish to restrict how much memory the Python heap can grow. When the limit is reached, the program can simply crash. What's the best way to do this?

If it matters, much of the code is written in Cython, so the limit should take into account memory allocated there. I am not married to a pure Python solution (it does not need to be portable), so anything that works on Linux is fine.

Check out resource.setrlimit(). It only works on Unix systems, but it seems like it might be what you're looking for, as you can choose a maximum heap size for your process and your process's children with the resource.RLIMIT_DATA parameter.

EDIT: Adding an example:

import resource

rsrc = resource.RLIMIT_DATA
soft, hard = resource.getrlimit(rsrc)
print 'Soft limit starts as  :', soft

resource.setrlimit(rsrc, (1024, hard)) # limit to one kilobyte

soft, hard = resource.getrlimit(rsrc)
print 'Soft limit changed to :', soft

I'm not sure what your use case is exactly, but it's possible you need to place a limit on the size of the stack instead, with resource.RLIMIT_STACK. Going past this limit will send a SIGSEGV signal to your process, and to handle it you will need to employ an alternate signal stack as described in the setrlimit Linux manpage. I'm not sure if sigaltstack is exposed in Python, though, so that could prove difficult if you want to recover from going over this boundary.
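A Linux-only sketch of the idea (Python 3): run the risky code in a forked child whose address space is capped, so a runaway allocation fails with MemoryError instead of swapping the machine. RLIMIT_AS is used here instead of RLIMIT_DATA because on Linux malloc frequently obtains memory via mmap, which RLIMIT_DATA historically did not restrict; the helper name is illustrative.

```python
import os
import resource

def run_with_memory_cap(limit_bytes, func):
    """Run func in a forked child whose address space is capped.

    Returns 0 if func completed, 1 if it hit MemoryError.
    """
    pid = os.fork()
    if pid == 0:  # child: cap the address space, then run func
        _, hard = resource.getrlimit(resource.RLIMIT_AS)
        resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, hard))
        try:
            func()
            os._exit(0)
        except MemoryError:
            os._exit(1)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    one_gb = 1024 ** 3
    # Allocating 2 GB under a 1 GB cap should fail cleanly in the child.
    print(run_with_memory_cap(one_gb, lambda: bytearray(2 * one_gb)))
    # A tiny allocation under the same cap should succeed.
    print(run_with_memory_cap(one_gb, lambda: bytearray(1024)))
```

Because the cap is applied in the child after fork(), the parent process (and anything else on the box) is unaffected when the limit is hit.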
https://codedump.io/share/8dhBW54zM0Gn/1/how-to-limit-the-heap-size
C4996 occurs when the compiler encounters a function or variable marked as deprecated. The following sample generates C4996.

// C4996.cpp
// compile with: /W1
// C4996 warning expected
#include <stdio.h>

// #pragma warning(disable : 4996)
void func1(void)
{
   printf_s("\nIn func1");
}

__declspec(deprecated) void func1(int)
{
   printf_s("\nIn func2");
}

int main()
{
   func1();
   func1(1);
}

C4996 can also occur if you do not use a checked iterator when compiling with _SECURE_SCL 1. See Checked Iterators for more information.

// C4996_b.cpp
// compile with: /EHsc /W1
#define _SECURE_SCL 1
#include <algorithm>
using namespace std;
using namespace stdext;

int main()
{
   int a [] = {1, 2, 3};
   int b [] = {10, 11, 12};

   copy(a, a + 3, b);  // C4996
   copy(a, a + 3, checked_array_iterator<int *>(b, 3));  // OK
}
http://msdn.microsoft.com/en-us/library/ttcz0bys(VS.80).aspx
Red Hat Bugzilla – Bug 152201: cpp expands ifr_name
Last modified: 2007-11-30 17:11:02 EST

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.6) Gecko/20050323 Firefox/1.0.2 Fedora/1.0.2-1

Description of problem:
Try to compile the following example:

$ cat foo.c
#include <sys/socket.h>
#include <netpacket/packet.h>
#include <net/ethernet.h>
#include <sys/ioctl.h>
#include <net/if.h>

int foo(char *ifr_name)
{
    return(0);
}

This is what I get:

$ gcc -c foo.c
foo.c:7: error: syntax error before '.' token

For some reason cpp expands ifr_name:

$ cpp foo.c | tail
extern void if_freenameindex (struct if_nameindex *__ptr) __attribute__ ((__nothrow__));
# 6 "foo.c" 2

int foo(char *ifr_ifrn.ifrn_name)
{
    return(0);
}

Version-Release number of selected component (if applicable): gcc-4.0.0-0.37

How reproducible: Always

Steps to Reproduce: See description

Additional info:
Did some more digging. The bad include is /usr/include/net/if.h. Tried compiling the following and got the same error:

$ cat foo.c
#include <net/if.h>

int foo(char *ifr_name)
{
    return(0);
}

This include belongs to:

$ rpm -q -f /usr/include/net/if.h
glibc-headers-2.3.4-16

Maybe this isn't a bug, because I see a lot of namespace pollution in this include file. I thought that if the names were not documented then they would start with an underscore ('_').
From the include:

# define ifr_name      ifr_ifrn.ifrn_name      /* interface name       */
# define ifr_hwaddr    ifr_ifru.ifru_hwaddr    /* MAC address          */
# define ifr_addr      ifr_ifru.ifru_addr      /* address              */
# define ifr_dstaddr   ifr_ifru.ifru_dstaddr   /* other end of p-p lnk */
# define ifr_broadaddr ifr_ifru.ifru_broadaddr /* broadcast address    */
# define ifr_netmask   ifr_ifru.ifru_netmask   /* interface net mask   */
# define ifr_flags     ifr_ifru.ifru_flags     /* flags                */
# define ifr_metric    ifr_ifru.ifru_ivalue    /* metric               */
# define ifr_mtu       ifr_ifru.ifru_mtu       /* mtu                  */
# define ifr_map       ifr_ifru.ifru_map       /* device map           */
# define ifr_slave     ifr_ifru.ifru_slave     /* slave device         */
# define ifr_data      ifr_ifru.ifru_data      /* for use by interface */
# define ifr_ifindex   ifr_ifru.ifru_ivalue    /* interface index      */
# define ifr_bandwidth ifr_ifru.ifru_ivalue    /* link bandwidth       */
# define ifr_qlen      ifr_ifru.ifru_ivalue    /* queue length         */
# define ifr_newname   ifr_ifru.ifru_newname   /* New name             */

Are these documented anywhere in the man pages?

Well, you are compiling without feature macros, so you can't talk about namespace pollution. The default namespace is not covered by standards, so as you reported it, there is really not a bug. ifr_* is a reserved namespace for net/if.h not just on Linux, but e.g. on Solaris too.

With -D_XOPEN_SOURCE=600, I agree there is a bug though: it doesn't include ifr_* for net/if.h and doesn't specify these. So I wonder if the #ifdef __USE_MISC in net/if.h shouldn't be changed to at least:

#if defined __USE_MISC && (!defined __USE_XOPEN2K || defined __USE_GNU)

or something like that. Solaris guards these with

#if !defined(_XOPEN_SOURCE) || defined(__EXTENSIONS__)

but given that the net/if.h header was not covered in Unix98, I think the above should be enough.

I shouldn't respond this early: the header is OK. __USE_MISC is not defined for -D_XOPEN_SOURCE=600. So there is no bug.
Sorry for being an idiot, but why don't these reserved names show up in the man page for "if.h" or in: Is there any place to check the reserved namespaces for Linux and the libc includes? I've googled around without success.
https://bugzilla.redhat.com/show_bug.cgi?id=152201
17 November 2011 16:38 [Source: ICIS news]

LONDON (ICIS)--Chemieverbande Rheinland-Pfalz, which represents chemical firms in BASF's home state of Rhineland-Palatinate, said its survey of member firms shows that producers are being hit by market uncertainties resulting from the eurozone debt crisis.

"We expect a dampening effect on the real economy from the turmoil in global financial markets," said the group's chairman, Hans-Carsten Hansen. The risks – in particular from the eurozone debt crisis and the political struggles to find a resolution – are a worry for the chemical industry, Hansen added. Those worries are also reflected in the recent cuts in 2012 GDP forecasts by economists and governments, he added.

As for the outgoing year of 2011, many chemical firms do not expect to see big increases in profits, despite rising sales, Hansen said. The reason for this is a "significant increase" in raw material costs, he said.

Chemical firms in Rhineland-Palatinate recorded sales of €13.4bn ($18.1bn) in the first half of 2011, reflecting an 8.9% year-on-year increase from the same period in 2010. However, for the second half of 2011, producers were already seeing a "significant flattening of the sales growth curve" compared to the strong first six months of the year, as demand fell, the group said in an update.

BASF, for its part, has previously said it has a cautious outlook, as turbulence in international capital markets leads to fears for economic development. Analysts have warned that BASF's 2012 earnings could fall dramatically from 2011 because of deteriorating confidence in the chemicals industry.

($1 = €0
http://www.icis.com/Articles/2011/11/17/9509395/germany-chem-producers-cautious-on-2012-outlook-regional.html
#include <ldns/ldns.h>

ldns_buffer* ldns_buffer_new(size_t capacity);
void ldns_buffer_new_frm_data(ldns_buffer *buffer, const void *data, size_t size);
void ldns_buffer_clear(ldns_buffer *buffer);
int ldns_buffer_printf(ldns_buffer *buffer, const char *format, ...);
void ldns_buffer_free(ldns_buffer *buffer);
void ldns_buffer_copy(ldns_buffer* result, const ldns_buffer* from);
void* ldns_buffer_export(ldns_buffer *buffer);
char* ldns_buffer_export2str(ldns_buffer *buffer);
char* ldns_buffer2str(ldns_buffer *buffer);

ldns_buffers can contain arbitrary information, per octet. You can write to the current end of a buffer, read from the current position, and access any data within it. Example use of buffers is in the source code of \ref host2str.c

struct ldns_struct_buffer {
        /** The current position used for reading/writing */
        size_t _position;
        /** The read/write limit */
        size_t _limit;
        /** The amount of data the buffer can contain */
        size_t _capacity;
        /** The data contained in the buffer */
        uint8_t *_data;
        /** If the buffer is fixed it cannot be resized */
        unsigned _fixed : 1;
        /** The current state of the buffer. If writing to the buffer fails
         * for any reason, this value is changed. This way, you can perform
         * multiple writes in sequence and check for success afterwards. */
        ldns_status _status;
};
typedef struct ldns_struct_buffer ldns_buffer;

ldns_buffer_new():
capacity: the size (in bytes) to allocate for the buffer
Returns the created buffer

ldns_buffer_new_frm_data():
buffer: pointer to the buffer to put the data in
data: the data to encapsulate in the buffer
size: the size of the data

ldns_buffer_export2str():
buffer: buffer containing char * data
Returns null terminated char * data, or NULL on error

ldns_buffer2str():
buffer: buffer containing char * data
Returns null terminated char * data, or NULL on error

Licensed under the BSD License. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
https://man.linuxreviews.org/man3/ldns_buffer_clear.3.html
Complexity of the automation increases day by day with changing requirements. To cope with them, we need to develop an automation framework more intelligently. Decision control is a one of the key concepts in the creation of complex automation. In IBM® Rational® Performance Tester, you can define part of a test as a conditional loop, which is a loop that runs a specified number of times. You can set the duration of the loop according to count-based, time-based, and infinite options. But the software does not provide a way to dynamically set the target value of the loop. A beginner working in Rational Performance Tester might not have the sample code to get started on this. Therefore, the purpose of this article is to guide you on the implementation of a conditional loop. This article walks you through a complete example of an automation script for conditional looping. However, this might not be the only way of achieving the implementation. Create a conditional loop in Rational Performance Tester The logic or control part of the loop resides inside custom code (Java class) that is responsible for limiting the number of iterations of the loop to a finite number. Start with a fresh performance test. Screen captures that follow show how to do each task. Open a new test in Rational Performance Tester Note: Only a few steps from the Create New Test Wizard are shown here. For the rest, wizard screens assume that no changes have been made to the defaults, so you can simply click the Next button. - From the File menu, click New > New Test. Figure 1. Creating a new test - In the Test File Name and Location dialog window, provide a name for the test. For this example, I named the test LoopExample. Figure 2. Providing a test name - In the "Protocol and Features" dialog window, only check the Core Features check box for this demo. Figure 3. 
Selecting test protocol and features Add an infinite loop to the test - After finishing with the Create New Test wizard, in the window where the new test opens, click the Add button, and select the Loop element. This will add a loop element to the test. Tip: You can invoke the same context menu by right-clicking the test name (LoopExample in Figure 4). Figure 4. Adding a loop from the Add context menu - Select the newly added Loop element. - On the right, in the Test Element Details section, under Duration, select the Infinite radio button from the options listed. Figure 5. Changing the loop's default duration Add custom code element to control the loop - Right-click the Loop element. - From the context menus, select Add and then Custom Code. This Java code will control the number of iterations of the loop, based upon some boundary condition. In our demo test script, we are going to use another class located outside of the loop in the script to get a random value for number of iterations. Figure 6. Adding custom code inside the Loop element Note: While naming the custom code, please follow naming conventions. Given that all of the classes are saved under the same directory by default, it is good to have separate namespaces for better readability and to reduce confusion. For example: - customcode.com.ApplicationName.search.controlSearchLoop - customcode.com.ApplicationName.create.controlItemAddLoop Those two examples show two custom code names that are for two separate types of automation script. The first one is for a search type script, while the second one is a data creation type script. Simply put, this is just in keeping with Java's naming conventions and recommendations. Add another custom code before the loop to simulate a dynamic value to control the loop - Right-click the loop element, and select Insert from the context menu to insert a custom code before the loop element. 
This class is added to provide a random value to the loop-controlling custom code, to limit the infinite iterations of the loop to a predetermined value. In a real-world scenario, this value can be replaced by any value that you decide to make a loop-breaking condition. You will need to modify the loop control custom code accordingly. For reference, these are the names of the classes in this example:

- customcode.returnRandomValue
- customcode.controlMyLoop

Notice the interesting behavior of the Add and Insert options in the context menu. Use Add to apply an element after the reference element, and use Insert for one before the reference element. Here, the reference element is the one that is right-clicked, which is the Loop element in this case.

Quick reference:
- Add for after
- Insert for before

Your test should now look like Figure 7.

Figure 7. Test design screen

Write the custom code

Listing 1. customcode.returnRandomValue (for random loop executions)

public String exec(ITestExecutionServices tes, String[] args) {
    int rndNumber = (new Random().nextInt(10)) + 1;
    tes.getTestLogManager().reportMessage("Random Number:" + rndNumber);
    return "" + (rndNumber);
}

In this code, to simulate a dynamic value, you are using a random number generator to produce an integer between 1 and 10. You are going to pass this to the custom code (customcode.controlMyLoop) inside the loop for controlling the loop iterations. You can find the complete code in this file (see the Downloads section): returnRandomValue.java

Note: tes.getTestLogManager().reportMessage() is a Rational Performance Tester method used in the previous listing to print the random number. This will appear in two places:

- In the test log
- Under Protocol Data > Event Log

Generally, while designing the scripts, I use a lot of these statements to check dynamic values. These are good for debugging functional and code issues. However, I disable them when doing a long-duration run.

Listing 2.
customcode.controlMyLoop

public String exec(ITestExecutionServices tes, String[] args) {
    int i = tes.getLoopControl().getIterationCount();
    if (i > Integer.parseInt(args[0]))
        tes.getLoopControl().breakLoop();
    else
        tes.getTestLogManager().reportMessage("Iteration No:" + i);
    return null;
}

You can find the complete code in this file (see the Downloads section): controlMyLoop.java

The screen capture in Figure 8 shows how the return value of the returnRandomValue custom code is passed to the controlMyLoop custom code as an argument (args[0]), under the Test Element Details section for customcode.controlMyLoop.

Figure 8. Random output from returnRandomValue

The code tests the iteration number of the innermost loop. The innermost loop is broken when its iteration number is greater than the test value (which, in this case, is the output of customcode.returnRandomValue). If the iteration number is less than the test value, the code prints the corresponding iteration number.

Note: One important thing to keep in mind when dealing with the tes.getLoopControl() object is that, when there are multiple loops, it gives the handle to the innermost loop. For example, in the piece of pseudocode in Listing 3, the myloop reference variable will have the handle to LOOP2. Hence, the object can be used to get the iteration number, or to break or continue the loop.

Listing 3.

LOOP1 {
    LOOP2 {
        ILoopControl myloop = tes.getLoopControl();
    }
}

After completing the previous steps, start running the script. The implementer should be able to see that the loop is breaking after a certain number of iterations.

Execute the script
However, the conditional check for a number as displayed here can be replaced by any other comparisons. For example, we can use a reference value from an HTTP request (rather than the custom code output used here), and based upon custom logic, we can break the loop. Figure 9. Test logs after execution How to continue a loop with a condition An added feature of loops is a continue operation. This is useful in certain conditions when you want the loop to skip executing when a certain condition is met. For example, I can use this if I want to execute the loop body only for even values up to a maximum value. In Listing 4, I have modified the earlier code to include this condition, as well. Listing 4 shows the code to achieve this. Listing 4. Code change required in customcode.controlMyLoop to skip executing the loop body for non-even iterations public String exec(ITestExecutionServices tes, String[] args) { int i = tes.getLoopControl().getIterationCount(); if(i>Integer.parseInt(args[0])) tes.getLoopControl().breakLoop(); else if(i%2 !=0) tes.getLoopControl().continueLoop(); else tes.getTestLogManager().reportMessage("Iteration No:"+i); return null; } On running the test with those changes, the test logs look something like Figure 10. Figure 10. Test log with the code change to continue the loop As you can see, the random number generated in customcode.returnRandomValue is 9. The loop execution has been skipped for iterations that are not even-numbered, up until the tenth iteration, where the loop is finally broken. I hope this article is useful for anyone taking up Rational Performance Tester as for test automation. After going through this article, you should be able to create a conditional loop, even if you are new to Rational Performance Tester. Although there can be other ways to achieve the same goal, this article demonstrates the way that we successfully implemented it in our projects. 
Downloads

Resources

Learn
- Find out more on the Rational Performance Tester product overview page. Then explore the Rational Performance Tester page on IBM developerWorks for links to technical articles and browse the user assistance in the Rational Performance Tester Information Center. You might also find the Adding a loop to a test topic in the information center
http://www.ibm.com/developerworks/rational/library/implement-conditional-loop-rational-performance-tester/index.html
Ashish Schottky, Ranch Hand since Dec 29, 2009

Recent posts by Ashish Schottky

Issues with null layout and setBounds

Thank you all for taking time to reply to my question. Here is what I was able to do: it's a simple tic-tac-toe problem that I have coded using Java Swing. It will be great if you guys can play it and find flaws in it; I would also like you all to rate it and suggest improvements, given that I have just begun to code Swing. Thank you.

show more 7 years ago Swing / AWT / SWT

Issues with null layout and setBounds

I am new to Java Swing. I am writing some simple Swing programs which use a null layout. I want to get a feel for Swing and how to position components; I had a rough time with layouts, so I tried using a null layout. As I am going to make this an applet, I don't think resizing issues will bother me much; please correct me if I am wrong or missing something. My first program was to just fill up a rectangle, leaving 10px on each side. Here is my code.

// My main class, which handles the visibility of the class; it does not involve any logic.
import javax.swing.JFrame;

public class Main extends JFrame {

    public Main() {
    }

    public static void main(String[] args) {
        Board b = new Board();
        b.setVisible(true);
    }
}

My second class contains the control panel, that is, the buttons and overall display.
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.*;

public class Board extends JFrame implements ActionListener {

    DrawState ds;

    public Board() {
        setTitle("Swing One");
        setDefaultCloseOperation(EXIT_ON_CLOSE);
        setSize(500, 500);
        ds = new DrawState();
        initGUI();
    }

    public void initGUI() {
        JPanel panel_one = new JPanel();
        panel_one.setLayout(null);
        ds.setBounds(10, 10, 480, 480);
        panel_one.add(ds);
        this.add(panel_one);
    }

    @Override
    public void actionPerformed(ActionEvent arg0) {
        // TODO Auto-generated method stub
    }
}

Here is my third part, where it draws the figure. I kept all three classes separate because I wanted them to be independent of each other, so my code doesn't become one lengthy mess.

import java.awt.*;
import javax.swing.*;

public class DrawState extends JPanel {

    public void paint(Graphics g) {
        super.paintComponents(g);
        g.setColor(Color.orange);
        g.fillRect(0, 0, 500, 500);
    }
}

I am confused by the setBounds method. I have set the size to be 500x500 and have used setBounds(10, 10, 480, 480). Doesn't this mean that I should get a rectangle filling the entire area of 480x480, leaving a 10px border on each side? This is how I visualized it, but instead I am getting this (image attached). It leaves the proper 10px from the left and top, but then it covers more than the allowed area and doesn't give the visualized look. What am I missing here?

show more 7 years ago Swing / AWT / SWT

Sudoku solver help (not brute force)

Here is a puzzle that is solved by hidden singles alone.

{ {1,0,0, 2,0,0, 3,0,0},
  {0,2,0, 0,1,0, 0,4,0},
  {0,0,3, 0,0,5, 0,0,6},
  {7,0,0, 6,0,0, 5,0,0},
  {0,5,0, 0,8,0, 0,7,0},
  {0,0,8, 0,0,4, 0,0,1},
  {8,0,0, 7,0,0, 4,0,0},
  {0,3,0, 0,6,0, 0,2,0},
  {0,0,9, 0,0,2, 0,0,7}};

I used the following strategy: a candidate 'x' that has only one cell left to go, while other candidates could legally occupy that cell, is said to be a hidden single. I used this strategy to solve.
1) First pencil up the candidates. 2) Check for naked singles. 3)Update the candidates. 4)Check for hidden singles. To check for hidden singles, just try if a candidate can occupy more than one cell that row and column, then it is not hidden, else its hidden candidate. show more 7 years ago Java in General How to start programing, the correct way. @Everyone : First of all I thank you all for taking out sometime to help me. @Stephan van Hulst: No I havent read that, but when I started programing, I used complete-reference java. @Bear Bibeault: I agree, but until I posted this, I was confused and didnt know what to do. I was just learning to swim by jumping in well, without any life-guards or floats. Thus I wanted to get my programing on the right foot. What bothered me was the bad practices which I kept on developing. @Hebert Coelho : this is in reference to your first post, I am trying to get into graduate CS course, but I don't want to be a programmer all my life, I want to explore new areas in field of CS. It also depend on what job I get and what they tell me to do. I have just browsed through the reference you gave, seems to be quiet a nice one. Thank you. @John de Michele : I am trying my hands on it currently, I am using Introduction to Algorithms (CLRS) to study algorithms. I am finding it hard to digest, can you suggest anything else some simple book on algorithms, it will be helpful. So now I think I have to read effective java and the clean code first, then data-structures and algorithms based on java. Thank you. Thank you. show more 7 years ago Beginning Java How to start programing, the correct way. I am already 23 and I am developing javaSE from past 2 years, yet I am not very much comfortable programing in general. Actually I am having trouble to implement algorithms or somethings like 'developing logic'. I drew up a conclusion that I need to focus more on algorithms, but some how I am quiet weak in it. 
Apart from my regular courses, I am self trained java programer by programing approximately 2hours a day. I tried my hands on code-chef and project euler. Solved about 40 problems now from project euler. And many more coding problems, I developed board games, but I always used the inefficient method, and its a bad habbit that I cant get rid off.(eg, using dynamic memory allocation in AI, slows down the speed tremendously). I neither do have formal back-ground in mathematics nor in computing, but I wish to be a good programmer, I am willing to put up my efforts, can anyone guide me how to be a better programer? -Thank you. show more 7 years ago Beginning Java How to structure an application @OP: The best way I think, is to make few console apps, get used to few things in java, how objects work,inheritance,encapsulation,multi-threading. Once your hand is set on these, jump into GUI programing as it requires these basics.I think you should start from here. show more 7 years ago Beginning Java Connect four AI using alpha beta pruning algorithm @Padda: Welcome to Ranch. When you build any board game, its generally better to keep separate classes for organization ,search,evaluation. I suggest you to write a clean 2-player game. Then make the AI component by allowing it to play random. This will test for legal moves in a position and winning combinations. in connect four, winning is simple, so make another fuction, let it return some value if any player has won, this is your maximum score, and in game like connect4, this is what we want to achieve. Then make sure you would add in more sophisticated search algorithm like min-max. Here I would like to suggest you that you go for a better frame-work of min-max namely "nega-max". Once you get this working, then add in alpha-beta pruning , PVS or what ever you feel like. 
Answer to your 1): In a plain minimax search the computer compares all the scores: say best_move has score +5 and move B has score +7, then best_move's score is reassigned to +7 and best_move becomes move B. The addition of alpha-beta changes things quickly, however: with alpha-beta the search does not evaluate all nodes (moves), so it is faster than naive minimax. Answer to your 2): It differs from programmer to programmer. The most general and widely used basic data structure is the array, and arrays in Java are certainly faster than Strings. If you think further, assuming your game uses the standard 7x6 board, there are only 42 cells. The long type in Java offers 64 bits, so you can allot 1 bit per cell, which takes only 42 bits. This is a complicated approach but would be very fast. However, I suggest you not take this approach on the first go: make a working Connect Four, then optimize it. I hope this is not homework. If you want me to help you, let me know so that I can write about it. (7 years ago, Game Development) How do the values change even if I am not passing the arrays by reference? (8 years ago, C / C++) Avoiding dynamic memory allocation in AI board games. @Mich: I am very busy currently and hence do not have any time for chess programming; I "paused" developing chess long ago. However, once I get some time I am planning to rewrite the code in C, though for sure it won't be anytime soon. The links you posted are awesome. (8 years ago, Game Development) Help required in preparing a timetable for SCJP. @Hama Kamal: I have been programming in Java for 2 years; however, my main inclination has been programming board AI games. (8 years ago, Programmer Certification (OCPJP)) Help required in preparing a timetable for SCJP. I want to appear for SCJP and hence want to study.
However, I am having trouble because I don't know how to organize my study, or how much time I should allot per day to crack SCJP with a really good score. Can anyone help me with this? I own a copy of SCJP 6.0 by K&B, and I want to take the exam in the coming December. Kindly help me prepare a timetable. Help in the form of "allot a minimum of x hours a day, prepare notes like this, refer to the following websites" is really appreciated. Thank you. (8 years ago, Programmer Certification (OCPJP)) Java newbie guidance. @shibashish: AWT stands for Abstract Window Toolkit. It contains a class called Component, which has a method paint(Graphics g); this paint method is used to update the screen, e.g. to display a message on screen. (8 years ago, Beginning Java) Java static problems. I don't know if this answers your question, but I find that your while loop never terminates. (8 years ago, Beginning Java) Converting Morse code to English. @Khair: You can do something like this: create a lookup table for Morse <-> English, say morse[] = { /* Morse code for a */, /* Morse code for b */, ... } and english[] = { 'a', 'b', 'c', ... }. Take the input as a string, use a counter to find the next '|' from the current position, take that substring, search for it in the Morse code array, and once found print out the character at the corresponding position in the English array (or append it to a string). In this way you can eliminate the if statements for both directions of conversion. (8 years ago, Beginning Java)
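The lookup-table idea from the last post can be sketched roughly like this. The table below covers only a, b, c for brevity, and the '|' separator follows the original post; the class and method names are hypothetical illustrations, not Khair's actual code:

```java
// Hypothetical sketch of the parallel-array lookup approach described above.
public class MorseLookup {
    static final String[] MORSE = {".-", "-...", "-.-."}; // codes for a, b, c
    static final char[] ENGLISH = {'a', 'b', 'c'};

    static char toEnglish(String code) {
        // Linear scan of the Morse table; the matching index
        // points at the English letter in the parallel array.
        for (int i = 0; i < MORSE.length; i++) {
            if (MORSE[i].equals(code)) return ENGLISH[i];
        }
        return '?'; // unknown code
    }

    public static void main(String[] args) {
        // '|' separates letters in the input, as in the original post.
        String input = ".-|-...|-.-.";
        StringBuilder out = new StringBuilder();
        for (String code : input.split("\\|")) {
            out.append(toEnglish(code));
        }
        System.out.println(out); // prints: abc
    }
}
```

The same pair of arrays works in both directions: to go from English to Morse you scan ENGLISH instead and emit the entry of MORSE at the matching index.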
https://www.coderanch.com/u/220100/Ashish-Schottky
Apache OpenOffice (AOO) Bugzilla – Issue 105418 dynamic linkage of neon library Last modified: 2012-09-10 21:19:47 UTC The LGPL-licensed neon library needs dynamic linkage. accepted fixed in tkr27 TKR->TM: Please verify on all platforms. In the basis layer of the office installation you will find a new library (Windows: neon.dll / Linux/Unix: libneon.so / macOS: libneon.dylib). To verify this issue, try to open a document via WebDAV on each platform. The import library on the Windows platform is being renamed from ineon.lib to neon.lib at delivery time. Is it essential to do so, or can we modify solenv/inc/libs.mk instead of renaming the library? Conventionally the names of import libraries differ from the shared target names. I do not know the reason why, but it made the MinGW port simple. The MinGW port does not really use the .lib-style import libraries; it creates empty dummy files just for clearing the dependencies. And if the core name of the import library differs from the shared target, the linker can safely use direct DLL linking without trouble. If the core name of the import library is the same as the DLL, the empty dummy file will be used for linking and will cause problems. Currently the cws tkr27 is broken on MinGW, but it will build safely if we stop renaming ineon.lib at delivery time and set NEON3RDLIB to ineon.lib for the Windows MSVC build in solenv/inc/libs.mk. checked and verified in cws tkr27 -> OK! Reopened because of the MinGW build breaker. @tkr: Since the MinGW port does not use an import library but direct DLL linking instead, NEON3RDLIB should not be changed from -lneon to -lineon. The file ineon.lib will be used only in a normal MSVC build. @tono: Changes committed. Please take a look at it. Since our internal MinGW build bot doesn't work properly, maybe you can do me a favor and check the MinGW port to verify that the build issue was solved. -> Fixed m4 is broken with system neon for me.
ERROR: ERROR: Missing files in function: remove_Files_Without_Sourcedirectory ************************************************** ************************************************** ERROR: Saved logfile: /tmp/openoffice-base-beta/trunk/src/OOO320_m4/instsetoo_native/unxlngx6.pro/OpenOffice/native/logging/en-US/log_OOO320_en-US.log ************************************************** ... reading include pathes ... ... analyzing script: /tmp/openoffice-base-beta/trunk/src/OOO320_m4/solver/320/unxlngx6.pro/bin/setup_osl.ins ... ... analyzing directories ... ... analyzing files ... ... analyzing scpactions ... ... analyzing shortcuts ... ... analyzing unix links ... ... analyzing profile ... ... analyzing profileitems ... ... analyzing modules ... ------------------------------------ ... languages en-US ... ... analyzing files ... ERROR: The following files could not be found: ERROR: File not found: libneon.so ... cleaning the output tree ... ... removing directory /tmp/ooopackaging/i_32551257776001 ... Mon Nov 9 14:13:24 2009 (00:03 min.) dmake: Error code 255, while making 'openoffice_en-US.native' We should reopen this one. andyrtr:
https://bz.apache.org/ooo/show_bug.cgi?id=105418
At Virgin Hyperloop One, we work on making Hyperloop a reality, so we can move passengers and cargo at airline speeds but at a fraction of the cost of air travel. In order to build a commercially viable system, we collect and analyze a large, diverse quantity of data, including Devloop Test Track runs, numerous test rigs, and various simulation, infrastructure and socio economic data. Most of our scripts handling that data are written using Python libraries with pandas as the main data processing tool that glues everything together. In this blog post, we want to share with you our experiences of scaling our data analytics using Koalas, achieving massive speedups with minor code changes. As we continue to grow and build new stuff, so do our data processing needs. Due to the increasing scale and complexity of our data operations, our pandas-based Python scripts were too slow to meet our business needs. This led us to Spark, with the hopes of fast processing times and flexible data storage as well as on-demand scalability. We were, however, struggling with the “Spark switch” – we would have to make a lot of custom changes to migrate our pandas-based codebase to PySpark. We needed a solution that was not only much faster, but also would ideally not require rewriting code. These challenges drove us to research other options and we were very happy to discover that there exists an easy way to skip that tedious step: the Koalas package, recently open-sourced by Databricks. As described in the Koalas Readme, The Koalas project makes data scientists more productive when interacting with big data, by implementing the pandas DataFrame API on top of Apache Spark. (…) Be immediately productive with Spark, with no learning curve, if you are already familiar with pandas. Have a single codebase that works both with pandas (tests, smaller datasets) and with Spark (distributed datasets). In this article I will try to show that this is (mostly) true and why Koalas is worth trying. 
Quick Start Before installing Koalas, make sure that you have your Spark cluster set up and can use it with PySpark. Then, simply run: pip install koalas or, for conda users: conda install koalas -c conda-forge Refer to the Koalas Readme for more details. A quick sanity check after installation: import databricks.koalas as ks kdf = ks.DataFrame({'column1': [4.0, 8.0], 'column2': [1.0, 2.0]}) kdf As you can see, Koalas can render pandas-like interactive tables. How convenient! Example with basic operations For the sake of this article, we generated some test data consisting of 4 columns and a parameterized number of rows. import pandas as pd ## generate 1M rows of test data pdf = generate_pd_test_data( 1e6 ) pdf.head(3) >>> timestamp pod_id trip_id speed_mph 0 7.522523 pod_13 trip_6 79.340006 1 22.029855 pod_5 trip_22 65.202122 2 21.473178 pod_20 trip_10 669.901507 - Disclaimer: this is a randomly generated test file used for performance evaluation, related to the topic of Hyperloop, but not representing our data. The full test script used for this article can be found here: . We’d like to assess some key descriptive analytics across all pod-trips, for example: What is the trip time per pod-trip? Operations needed: - Group by ['pod_id', 'trip_id'] - For every trip, calculate the trip_time as last timestamp – first timestamp.
- Calculate the distribution of the pod-trip times (mean, stddev) The short & slow ( pandas ) way: (snippet #1)

import pandas as pd
# take the grouped.max (last timestamp) and join with grouped.min (first timestamp)
gdf = pdf.groupby(['pod_id','trip_id']).agg({'timestamp': ['min','max']})
gdf.columns = ['timestamp_first','timestamp_last']
gdf['trip_time_sec'] = gdf['timestamp_last'] - gdf['timestamp_first']
gdf['trip_time_hours'] = gdf['trip_time_sec'] / 3600.0
# calculate the statistics on trip times
pd_result = gdf.describe()

The long & fast ( PySpark ) way: (snippet #2)

from pyspark.sql import SparkSession
from pyspark.sql.functions import desc
import pyspark.sql.functions as F
spark = SparkSession.builder.getOrCreate()
# import pandas df to spark (this line is not used for profiling)
sdf = spark.createDataFrame(pdf)
# sort by timestamp and groupby
sdf = sdf.sort(desc('timestamp'))
sdf = sdf.groupBy("pod_id", "trip_id").agg(F.max('timestamp').alias('timestamp_last'), F.min('timestamp').alias('timestamp_first'))
# add another column trip_time_sec as the difference between last and first
sdf = sdf.withColumn('trip_time_sec', sdf['timestamp_last'] - sdf['timestamp_first'])
sdf = sdf.withColumn('trip_time_hours', sdf['trip_time_sec'] / 3600.0)
# calculate the statistics on trip times
sdf.select(F.col('timestamp_last'), F.col('timestamp_first'), F.col('trip_time_sec'), F.col('trip_time_hours')).summary().toPandas()

The short & fast ( Koalas ) way: (snippet #3)

import databricks.koalas as ks
# import pandas df to koalas (and so also spark) (this line is not used for profiling)
kdf = ks.from_pandas(pdf)
# the code below is the same as the pandas version
gdf = kdf.groupby(['pod_id','trip_id']).agg({'timestamp': ['min','max']})
gdf.columns = ['timestamp_first','timestamp_last']
gdf['trip_time_sec'] = gdf['timestamp_last'] - gdf['timestamp_first']
gdf['trip_time_hours'] = gdf['trip_time_sec'] / 3600.0
ks_result = gdf.describe().to_pandas()

Note that for snippets #1 and #3, the code is exactly the same, and so the “Spark switch” is seamless!
For most pandas scripts, you can even just try changing import pandas as pd to import databricks.koalas as pd, and some scripts will run fine with minor adjustments, with some limitations explained below. Results All the snippets have been verified to return the same pod-trip-times results. The describe and summary methods for pandas and Spark are slightly different, as explained here, but this should not affect performance too much. Sample results: Advanced Example: UDFs and complicated operations We’re now going to try to solve a more complex problem with the same dataframe, and see how the pandas and Koalas implementations differ. Goal: Analyze the average speed per pod-trip: - Group by ['pod_id', 'trip_id'] - For every pod-trip, calculate the total distance travelled by finding the area below the velocity(time) chart (method explained here): - Sort the grouped df by the timestamp column. - Calculate diffs of timestamps. - Multiply the diffs by the speed; this gives the distance traveled in that time diff. - Sum the distance_travelled column; this gives the total distance travelled per pod-trip. - Calculate the trip_time as timestamp.last – timestamp.first (as in the previous paragraph). - Calculate the average_speed as distance_travelled / trip_time. - Calculate the distribution of the pod-trip times (mean, stddev). We decided to implement this task using a custom apply function and UDFs (user defined functions).
The pandas way: (snippet #4)

import pandas as pd
def calc_distance_from_speed(gdf):
    gdf = gdf.sort_values('timestamp')
    gdf['time_diff'] = gdf['timestamp'].diff()
    return pd.DataFrame({
        'distance_miles': [(gdf['time_diff'] * gdf['speed_mph']).sum()],
        'travel_time_sec': [gdf['timestamp'].iloc[-1] - gdf['timestamp'].iloc[0]]
    })
results = pdf.groupby(['pod_id','trip_id']).apply(calc_distance_from_speed)
results['distance_km'] = results['distance_miles'] * 1.609
results['avg_speed_mph'] = results['distance_miles'] / results['travel_time_sec'] / 60.0
results['avg_speed_kph'] = results['avg_speed_mph'] * 1.609
results.describe()

The PySpark way: (snippet #5)

from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import *
import pyspark.sql.functions as F
schema = StructType([
    StructField("pod_id", StringType()),
    StructField("trip_id", StringType()),
    StructField("distance_miles", DoubleType()),
    StructField("travel_time_sec", DoubleType())
])
@pandas_udf(schema, PandasUDFType.GROUPED_MAP)
def calculate_distance_from_speed(gdf):
    gdf = gdf.sort_values('timestamp')
    gdf['time_diff'] = gdf['timestamp'].diff()
    return pd.DataFrame({
        'pod_id': [gdf['pod_id'].iloc[0]],
        'trip_id': [gdf['trip_id'].iloc[0]],
        'distance_miles': [(gdf['time_diff'] * gdf['speed_mph']).sum()],
        'travel_time_sec': [gdf['timestamp'].iloc[-1] - gdf['timestamp'].iloc[0]]
    })
# spark_df is the raw Spark version of pdf
sdf = spark_df.groupby("pod_id","trip_id").apply(calculate_distance_from_speed)
sdf = sdf.withColumn('distance_km', F.col('distance_miles') * 1.609)
sdf = sdf.withColumn('avg_speed_mph', F.col('distance_miles') / F.col('travel_time_sec') / 60.0)
sdf = sdf.withColumn('avg_speed_kph', F.col('avg_speed_mph') * 1.609)
sdf = sdf.orderBy(sdf.pod_id, sdf.trip_id)
sdf.summary().toPandas() # summary calculates almost the same results as describe

The Koalas way: (snippet #6)

import databricks.koalas as ks
def calc_distance_from_speed_ks(gdf) -> ks.DataFrame[str, str, float, float]:
    gdf = gdf.sort_values('timestamp')
    gdf['meanspeed'] = (gdf['timestamp'].diff() * gdf['speed_mph']).sum()
    gdf['triptime'] = (gdf['timestamp'].iloc[-1] - gdf['timestamp'].iloc[0])
    return gdf[['pod_id','trip_id','meanspeed','triptime']].iloc[0:1]
kdf = ks.from_pandas(pdf)
results = kdf.groupby(['pod_id','trip_id']).apply(calc_distance_from_speed_ks)
# due to current limitations of the package, groupby.apply() returns c0 .. c3 column names
results.columns = ['pod_id', 'trip_id', 'distance_miles', 'travel_time_sec']
# the Spark groupby does not set the groupby cols as index and does not sort them
results = results.set_index(['pod_id','trip_id']).sort_index()
results['distance_km'] = results['distance_miles'] * 1.609
results['avg_speed_mph'] = results['distance_miles'] / results['travel_time_sec'] / 60.0
results['avg_speed_kph'] = results['avg_speed_mph'] * 1.609
results.describe()

Koalas’ implementation of apply is based on PySpark’s pandas_udf, which requires schema information; this is why the definition of the function also has to carry a type hint. The authors of the package introduced new custom type hints, ks.DataFrame and ks.Series. Unfortunately, the current implementation of the apply method is quite cumbersome, and it took a bit of effort to arrive at the same result (column names change, groupby keys are not returned). However, all these behaviors are appropriately explained in the package documentation. Performance To assess the performance of Koalas, we profiled the code snippets for different numbers of rows. The profiling experiment was done on the Databricks platform, using the following cluster configuration: - Spark driver node (also used to execute the pandas scripts): 8 CPU cores, 61GB RAM. - 15 Spark worker nodes: 4 CPU cores, 30.5GB RAM each (total: 60 CPUs / 457.5GB RAM). Every experiment was repeated 10 times, and the clips shown below indicate the min and max times for the executions.
Basic ops When the data is small, the initialization operations and data transfer are huge in comparison to the computations, so pandas is much faster (marker a). For larger amounts of data, pandas’ processing times exceed distributed solutions (marker b). We can then observe some performance hits for Koalas, but it gets closer to PySpark as data increases (marker c). UDFs For the UDF profiling, as specified in PySpark and Koalas documentation, the performance decreases dramatically. This is why we needed to decrease the number of rows we tested with by 100x vs the basic ops case. For each test case, Koalas and PySpark show a striking similarity in performance, indicating a consistent underlying implementation. During experimentation, we discovered that there exists a much faster way of executing that set of operations using PySpark windows functionality, however this is not currently implemented in Koalas so we decided to only compare UDF versions. Discussion Koalas seems to be the right choice if you want to make your pandas code immediately scalable and executable on bigger datasets that are not possible to process on a single node. After the quick swap to Koalas, just by scaling your Spark cluster, you can allow bigger datasets and improve the processing times significantly. Your performance should be comparable (but 5 to 50% lower, depending on the dataset size and the cluster) with PySpark’s. On the other hand, the Koalas API layer does cause a visible performance hit, especially in comparison to the native Spark. At the end of the day, if computational performance is your key priority, you should consider switching from Python to Scala. Limitations and differences During your first few hours with Koalas, you might wonder, “Why is this not implemented?!” Currently, the package is still under development and is missing some pandas API functionality, but much of it should be implemented in the next few months (for example groupby.diff() or kdf.rename()). 
Also from my experience as a contributor to the project, some of the features are either too complicated to implement with Spark API or were skipped due to a significant performance hit. For example, DataFrame.values requires materializing the entire working set in a single node’s memory, and so is suboptimal and sometimes not even possible. Instead if you need to retrieve some final results on the driver, you can call DataFrame.to_pandas() or DataFrame.to_numpy(). Another important thing to mention is that Koalas’ execution chain is different from pandas’: when executing the operations on the dataframe, they are put on a queue of operations but not executed. Only when the results are needed, e.g. when calling kdf.head() or kdf.to_pandas() the operations will be executed. That could be misleading for somebody who never worked with Spark, since pandas does everything line-by-line. Conclusions Koalas helped us to reduce the burden to “Spark-ify” our pandas code. If you’re also struggling with scaling your pandas code, you should try it too! If you are desperately missing any behavior or found inconsistencies with pandas, please open an issue so that as a community we can ensure that the package is actively and continually improved. Also, feel free to contribute! Resources - Koalas github: - Koalas documentation: - Code snippets from this article: .
https://databricks.com/blog/2019/08/22/guest-blog-how-virgin-hyperloop-one-reduced-processing-time-from-hours-to-minutes-with-koalas.html
The function int setvbuf(FILE *stream, char *buffer, int mode, size_t size); sets the buffer to be used by a stream for input/output operations. It also lets you specify the mode and size of the buffer. If the buffer argument is NULL, the function automatically allocates a buffer of size bytes. The setvbuf function must be called after the file associated with the stream has been opened, but before any input or output operation is performed on the stream. Various modes of buffering Function prototype of setvbuf int setvbuf(FILE *stream, char *buffer, int mode, size_t size); - stream : A pointer to a FILE object which identifies a stream. - buffer : A pointer to a memory block to be used as the buffer for the given stream. The size of this buffer should be at least BUFSIZ bytes. - mode : An integer representing the mode of buffering. Its value must be one of the three macro constants (_IOFBF, _IOLBF and _IONBF) defined in the stdio header file. - size : The size of the buffer in bytes. Return value of setvbuf On success, this function returns zero; otherwise, in case of an error, it returns a non-zero value. C program using setvbuf function The following program shows the use of the setvbuf function to set a buffer for the stdout stream.

#include <stdio.h>
#include <string.h>

int main(){
    char buffer[500];
    memset(buffer, '\0', sizeof(buffer));
    setvbuf(stdout, buffer, _IOFBF, 500);
    fputs("This string will go in buffer\n", stdout);
    fflush(stdout);
    return(0);
}

Output This string will go in buffer
http://www.techcrashcourse.com/2015/08/setvbuf-stdio-c-library-function.html
I am trying to read my data file into two separate arrays, which are separated by whitespace in my file, but I keep getting a -86******* sort of number where my file contents are supposed to be. Any help will be very welcome. My first array, which will read the student IDs, is a one-dimensional array, and the second array is a two-dimensional array which will read in the students' answers to a true/false test; the second column will be the total number correct for each true/false question. And the answer key is supposed to be the last line in the 2D array, which is the first line in my data file.

#include<iostream>
#include <fstream>
#include "string"
using namespace std;
int main()
{
    const int maxstudents=50;
    const int maxcols=3;
    char quiz[maxstudents+1][maxcols];
    int studentid[maxstudents];
    ifstream infile;
    string inputfile;
    cout<<"Input filename"<<endl;
    cin>> inputfile;
    infile.open(inputfile.c_str());
    for (int c = 0; c<maxstudents; c++)
        infile>>quiz[maxstudents][c];
    for (int k=0; k<maxstudents; k++)
        infile>>studentid[k];
    for (int j=0; j<maxstudents; j++)
        cout<<"studentid"<<j<<"\t"<<studentid[j]<<endl;
    for (int r=0; r<10; r++)
        cout<<quiz[maxstudents][r]<<"\t";
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/33441/please-help-with-data-file-and-array
On Wed, Sep 7, 2011 at 6:41 AM, Barry Warsaw <barry at python.org> wrote: >. The main objection to that approach is that it breaks down when you go to explicitly define class (or module) attributes that make use of the feature:

class Foo:
    for = "Demonstrating a flaw in the 'keywords as attributes' concept"

While there's a definite appeal to the idea of allowing keywords-as-attributes, it comes at the cost of breaking the symmetry between the identifiers that are permitted in module and class definition code and the attributes that are accessible via those namespaces. You could answer "but the feature is for programmatically generated attributes, such as CSV column names, not ones you type in a class definition" and make a reasonable case for the feature on that basis, but the conceptual symmetry is still broken. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
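To make the asymmetry concrete, here is a small sketch (not from the original thread; the class name is made up) showing that an attribute named after a keyword can already be reached programmatically via setattr/getattr today, but never with normal attribute syntax:

```python
class Row:
    """Stand-in for a programmatically generated record (e.g. CSV columns)."""
    pass

row = Row()

# setattr/getattr place no restriction on the attribute name...
setattr(row, "for", "value of the 'for' column")
print(getattr(row, "for"))   # prints: value of the 'for' column

# ...but the same name can never appear after a dot in source code:
# row.for    -> SyntaxError: invalid syntax
```

This is exactly the gap the proposal would widen: the dotted-access grammar and the namespace contents would no longer agree on which names are usable.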
https://mail.python.org/pipermail/python-ideas/2011-September/011412.html
Intro: This game dungeon is called PAT A. I believe many players know this dungeon, and some players have already cleared it. It can give junior players rich experience points, or even the key to advance. After 15 days, this player has finally cleared the dungeon, and leaves this walkthrough behind for other players' reference. The dungeon divides broadly into two parts. One is the rule-based problems: as long as you master the monsters' movement patterns, you can score points without losing blood. The other is the skill-based problems, which depend on your past experience of fighting monsters, or on a certain talent. There are seven types of rule-based questions: - STL (Standard Template Library) - Sorting (insertion sort, merge sort, heap sort, table sort) - Binary trees, graph theory, connected components, union-find - BFS (breadth-first search), DFS (depth-first search) - Validation questions - Array mapping problems - String handling problems There are three types of skill-based questions: - Mathematics - Greedy algorithms - Dynamic programming Strategies for solving rule-based problems Rule-based questions are built on a fixed framework, so there are patterns to follow and fixed templates that solve them. As long as players memorize these templates, they can break through. STL strategy STL, the Standard Template Library, appears in general questions that only require the skills lit up in the novice village; slightly harder questions additionally require the ability to combine and chain those skills. Typical questions resemble "1006 Sign In and Sign Out", and there is nothing much harder than "1014 Waiting in Line". The first step to breaking them is to understand the usage of the basic containers: Vector, Map, Pair, Set, Queue, Stack. STL-type questions can be divided into several kinds: queueing questions, statistics questions, and simulation questions.
Queueing questions and statistics questions need a certain object-oriented mindset, i.e., you model the objects of the problem with a struct. For example, in 1014 Waiting in Line:

struct window {
    int endtime;        // end time
    int capacity;       // capacity
    queue<int> cust;    // customer queue
};

This struct models a bank counter window, which lets players reason about the problem at the object level directly, instead of first translating their thinking into program language and then untangling the complex relationships of the problem statement from that language. As another example, 1051 Pop Sequence is a simulation of a stack: as long as the definition of a stack is clear (first in, last out), you can simulate the push and pop operations with a program. The main breakthrough points for STL are: - being able to build a struct that models the objects described in the problem; - being able to choose a suitable container for the objects; - being able to modify the sorting order of the corresponding container. The first two points only require being able to read the problem and knowing the characteristics of each container. The third point, modifying the sorting order, differs by container. Array, Vector, Deque:

#include <algorithm>
bool cmp(int a, int b){
    return a > b;
}
int main(){
    int arr[] = {3, 1, 2, 5, 4};
    sort(arr, arr + 5, cmp);
}

Set (a red-black tree internally, kept sorted on insertion):

struct Node {
    int value;
    bool operator < (const Node& a) const {
        return value < a.value;
    }
};
int main(){
    set<Node> s;
    s.insert(Node{ 0 });
    s.insert(Node{ 1 });
}

Priority_queue: - Definition method:

priority_queue<int> q;
priority_queue<int, vector<int>, less<int>> q;
// vector<int> is the container hosting the underlying heap.
// less<int> means larger numbers have higher priority.
// greater<int> means smaller numbers have higher priority.

- Operator overload method:

struct fruit {
    string name;
    int price;
    friend bool operator < (fruit f1, fruit f2){
        // higher price, higher priority
        return f1.price < f2.price;
        // lower price, higher priority:
        // return f1.price > f2.price;
    }
};
int main(){
    priority_queue<fruit> q;
}

- External comparator method:

struct cmp {
    bool operator() (fruit f1, fruit f2){
        return f1.price > f2.price;
    }
};
int main(){
    priority_queue<fruit, vector<fruit>, cmp> q;
}

Sorting strategy This kind of question divides into four categories, with templates as follows: - Insertion sort

const int maxN = 1000010;
int arr[maxN];
void insertSort(){
    for(int i = 1; i < maxN; i++){
        int j = i;
        int tmp = arr[i];
        while(j > 0 && arr[j-1] > tmp){
            arr[j] = arr[j-1];
            j--;
        }
        arr[j] = tmp;
    }
}

- Merge sort

const int maxN = 100010;
int arr[maxN];
int N;  // number of elements actually in use
void merge(int L1, int R1, int L2, int R2){
    int i = L1;
    int j = L2;
    int idx = 0;
    static int tmp[maxN];
    while(i <= R1 && j <= R2){
        if(arr[i] < arr[j]){
            tmp[idx++] = arr[i++];
        }else{
            tmp[idx++] = arr[j++];
        }
    }
    while(i <= R1) tmp[idx++] = arr[i++];
    while(j <= R2) tmp[idx++] = arr[j++];
    for(int k = 0; k < idx; k++){
        arr[L1 + k] = tmp[k];
    }
}
void mergeSort(){
    for(int step = 2; step / 2 < N; step *= 2){
        for(int i = 0; i < N; i += step){
            int mid = i + step / 2 - 1;
            if(mid + 1 < N){
                merge(i, mid, mid + 1, min(i + step - 1, N - 1));
            }
        }
    }
}

- Heap sort

const int maxN = 100010;
int arr[maxN + 1];  // heap stored from index 1
int N;              // number of elements
void downAdjust(int low, int high){
    int i = low;
    int j = 2 * i;  // left child in a 1-based heap
    while(j <= high){
        if(j + 1 <= high && arr[j+1] > arr[j]){
            j += 1;  // pick the larger child
        }
        if(arr[i] < arr[j]){
            swap(arr[i], arr[j]);
            i = j;
            j = 2 * i;
        }else{
            break;
        }
    }
}
void createHeap(){
    for(int i = N / 2; i >= 1; i--){
        downAdjust(i, N);
    }
}
void sortHeap(){
    createHeap();
    for(int i = N; i > 1; i--){
        swap(arr[i], arr[1]);
        downAdjust(1, i - 1);
    }
}

- Table sorting Table sorting is a method for sorting heavyweight records. For example, take the heavyweight array A[] = {book1, book2, book3}, where each book holds 10GB of content. If you sort A directly, moving the books around in memory will be quite slow.
In that case, out-of-place table sorting is a good choice: store the address index 0 of book1 in B[0], the address index 1 of book2 in B[1], and the address index 2 of book3 in B[2]. Then sort B; whenever a comparison is needed, read the data of A through the index. 1067 Sort with Swap(0, i) uses table sorting (see the analysis of 1067 Sort with Swap). Binary tree strategy Determination methods A binary tree is an ordered tree in which no node has degree greater than 2. There are only four ways to uniquely determine a binary tree: - The inorder sequence of the tree is known and all its values are distinct, together with either the preorder or the postorder sequence; i.e., inorder + preorder or inorder + postorder uniquely determines the structure of a binary tree. - The relationship between each node and its child nodes is known. - The tree is known to be a complete binary tree and all node values are known. - The tree is known to be an AVL tree and its insertion sequence is known. The following does NOT uniquely determine a binary tree: - knowing only the preorder and postorder sequences. The keys to determining a binary tree are: - whether the root node can be determined; - whether you can determine which nodes the left and right subtrees consist of; - whether the method for finding the root and its left/right subtrees can be applied recursively to the subtrees. How can these key factors be bound to the clues a question provides? Check the relationships using these index conventions: the preorder sequence pre[] runs from preLeft to preRight; the inorder sequence in[] runs from inLeft to inRight; the postorder sequence post[] runs from postLeft to postRight; a complete binary tree's array runs from CBTLeft to CBTRight. Storage modes A binary tree can be stored in two ways: - Linked list Linked-list storage is the classic way to build a general binary tree; any binary tree can be stored as a linked structure.
Any binary tree, whatever its shape, can be stored as a linked structure.

struct Node{
    int data;
    Node* lchild; // left subtree
    Node* rchild; // right subtree
};

Node* createNode(int d){ // create a node
    Node* nd = new Node;
    nd->lchild = NULL;
    nd->rchild = NULL;
    nd->data = d;
    return nd;
}

- Array

Array storage records the level-order traversal of a complete binary tree: root is the index of the root in the array, 2*root is the index of its left child, and 2*root+1 is the index of its right child. Note: the array's start index is 1.

Subdivided types

Subdividing further, binary trees include the complete binary tree, the balanced binary tree (AVL) and the red-black tree. Their characteristic properties can be used as the basis for recognising them.

DFS, BFS

DFS could be described as "not turning back until it hits the south wall": it keeps going until it reaches a dead end, and only then backtracks. There is a standard template; just apply it directly.

void dfs(int step){
    if(hit the south wall) return; // back up
    dfs(step+1);
}
int main(){
    dfs(first step);
}

BFS is also called the queue method. It too has a standard template:

void bfs(int s){
    queue<int> q;
    q.push(s);
    while(!q.empty()){
        take the front node top;
        visit top;
        pop the front element;
        push every not-yet-enqueued node in the level below top, and mark it enqueued;
    }
}

DFS and BFS exist as tools for exploring relationships; with these two tools we can clarify relationships and structure. The two can often be converted into one another, but there are differences to note when using them.

The function parameters of DFS can carry the level, the next state, and any information the recursion needs. For example, if you traverse a binary tree with DFS and want its height, the height can be passed down as a function parameter.
struct Node{
    int data;
    Node* lchild;
    Node* rchild;
};

void dfs(Node* root, int height){
    if(root==NULL){
        return;
    }
    dfs(root->lchild, height+1);
    dfs(root->rchild, height+1);
}

With BFS, unlike the parameters of a recursive function, information can be carried in the node structure itself:

struct Node{
    int data;
    Node* lchild;
    Node* rchild;
    int level; // depth of this node
};

void bfs(Node* root){
    queue<Node*> q;
    root->level = 1;
    q.push(root);
    while(!q.empty()){
        Node* nd = q.front();
        q.pop();
        if(nd->lchild!=NULL){
            nd->lchild->level = nd->level+1;
            q.push(nd->lchild);
        }
        if(nd->rchild!=NULL){
            nd->rchild->level = nd->level+1;
            q.push(nd->rchild);
        }
    }
}

Strategy: union-find and topological sorting

Union-find and topological sorting both describe relationships among many nodes. The job of union-find is to gather related nodes together; the job of topological sorting is to expose the ordering among nodes, and to judge whether a given graph is a directed acyclic graph.
Union-find template code (used in 1114 Family Property and 1118 Birds in Forest):

const int maxN = 10010;
int father[maxN];

void initFather(){
    for(int i=0; i<maxN; i++){
        father[i] = i;
    }
}

int findFather(int x){
    if(father[x]==x){
        return father[x];
    }else{
        int F = findFather(father[x]);
        father[x] = F; // path compression
        return F;
    }
}

void unionGroup(int x, int y){
    if(findFather(x)!=findFather(y)){
        father[findFather(x)] = findFather(y);
    }
}

Topological-sort template code:

const int maxN = 10010;
vector<int> G[maxN];
int n, m, inDegree[maxN];

bool topologicalSort(){
    int num = 0;
    queue<int> q;
    for(int i=0; i<n; i++){
        if(inDegree[i]==0){
            q.push(i);
        }
    }
    while(!q.empty()){
        int u = q.front();
        q.pop();
        for(int i=0; i<G[u].size(); i++){
            int v = G[u][i];
            inDegree[v]--;
            if(inDegree[v] == 0){
                q.push(v);
            }
        }
        G[u].clear();
        num++;
    }
    if(num == n) return true;
    else return false;
}

Dijkstra shortest path

Dijkstra solves the single-source shortest-path problem: given a graph G and a starting point S, the algorithm finds the shortest distance from S to every other vertex. In practice, Dijkstra usually needs to be combined with DFS: Dijkstra finds the shortest distances and, for each vertex, the set of predecessor nodes along shortest paths. In general the shortest path is not unique, and extra criteria are needed to select the most suitable one among paths of equal length; that selection is done with DFS over the predecessor sets.

A graph can be built in two ways: the adjacency matrix and the adjacency list. Because an adjacency matrix needs a two-dimensional array, it can be used directly when the number of nodes is small. If there are too many nodes, it is better to define the graph with an adjacency list.
// adjacency matrix
G[N][N];

// adjacency list
vector<int> Adj[N];

// adjacency list storing the end vertex and the edge weight
struct Node{
    int v; // end vertex of the edge
    int w; // edge weight
};
vector<Node> Adj[N];

Dijkstra shortest path + DFS (1018 Public Bike Management) template code:

const int maxN = 100;
const int INF = 99999999;
int G[maxN][maxN];
bool visited[maxN];
int d[maxN];
int start; // source
int dest;  // destination
int N;     // total number of vertices
vector<int> pre[maxN];
vector<int> temppath;
vector<int> path;

// walk the predecessor sets backwards from dest;
// each time we reach the source, temppath holds one complete path (reversed)
void dfs(int v){
    temppath.push_back(v);
    if(v==start){
        for(int i=temppath.size()-1; i>=0; i--){
            // process each vertex of this candidate path here
        }
        temppath.pop_back();
        return;
    }
    for(int i=0; i<pre[v].size(); i++){
        dfs(pre[v][i]);
    }
    temppath.pop_back();
}

int main(){
    fill(d, d+N, INF); // initialisation
    d[start]=0;
    for(int i=0; i<N; i++){
        int mind=INF;
        int u=-1;
        for(int j=0; j<N; j++){
            if(!visited[j] && d[j]<mind){
                mind=d[j];
                u=j;
            }
        }
        if(u==-1){
            break;
        }
        visited[u]=true;
        for(int j=0; j<N; j++){
            if(!visited[j] && G[u][j]!=INF){
                if(d[u]+G[u][j]<d[j]){
                    d[j]=d[u]+G[u][j];
                    pre[j].clear();
                    pre[j].push_back(u);
                }else if(d[u]+G[u][j]==d[j]){
                    pre[j].push_back(u);
                }
            }
        }
    }
    dfs(dest);
}

Strategy: big-number arithmetic

The test for whether a problem needs big-number arithmetic is whether the range of the input numbers given in the question exceeds 10^18. There are two kinds of big-number operations: operations on large integers, and operations on large fractions. Large integers are handled by reading the data as a character array, converting it to an integer array, and then computing digit by digit.
struct bign{
    int d[1000];
    int len;
    bign(){
        memset(d, 0, sizeof(d));
        len=0;
    }
};

/** Convert the read-in character array to a big-number struct **/
bign change(char str[]){
    bign a;
    a.len = strlen(str);
    for(int i=0; i<a.len; i++){
        a.d[i] = str[a.len-i-1] - '0';
    }
    return a;
}

/** Big-number addition **/
bign add(bign a, bign b){
    bign c;
    int carry=0; // carry
    for(int i=0; i<a.len||i<b.len; i++){
        int temp = a.d[i]+b.d[i]+carry;
        c.d[c.len++] = temp%10;
        carry = temp/10;
    }
    if(carry!=0){
        c.d[c.len++] = carry;
    }
    return c;
}

/** Big-number subtraction (assumes a >= b) **/
bign sub(bign a, bign b){
    bign c;
    for(int i=0; i<a.len || i<b.len; i++){
        if(a.d[i]<b.d[i]){ // borrow
            a.d[i+1]--;
            a.d[i]+=10;
        }
        c.d[c.len++] = a.d[i]-b.d[i];
    }
    // strip leading zeros, but keep at least the lowest digit
    while(c.len-1>=1 && c.d[c.len-1] == 0){
        c.len--;
    }
    return c;
}

/** Big-number multiplication (bign times int) **/
bign multi(bign a, int b){
    bign c;
    int carry = 0;
    for(int i=0; i<a.len; i++){
        int temp = a.d[i]*b+carry;
        c.d[c.len++]=temp%10;
        carry=temp/10;
    }
    while(carry!=0){ // multiplication can carry more than one digit
        c.d[c.len++]=carry%10;
        carry/=10;
    }
    return c;
}

/** Big-number division (bign divided by int; r receives the remainder) **/
bign divide(bign a, int b, int& r){
    bign c;
    c.len = a.len;
    for (int i=a.len-1; i>=0; i--){
        r = r*10+a.d[i];
        if(r<b){
            c.d[i]=0;
        }else{
            c.d[i] = r/b;
            r = r%b;
        }
    }
    while(c.len-1>=1 && c.d[c.len-1]==0){
        c.len--;
    }
    return c;
}

Large fractions (1088 Rational Arithmetic) can be handled by treating the numerator and the denominator separately.
struct Fraction{
    int up, down; // numerator, denominator
};

/** greatest common divisor **/
int gcd(int a, int b){
    if(b==0){
        return a;
    }
    return gcd(b, a%b);
}

/** reduce a fraction to lowest terms **/
Fraction reduction(Fraction result){
    if(result.down<0){ // keep the sign in the numerator
        result.up = -result.up;
        result.down = -result.down;
    }
    if(result.up==0){
        result.down = 1;
    }else{
        int d = gcd(abs(result.up), abs(result.down));
        result.up/=d;
        result.down/=d;
    }
    return result;
}

/** fraction addition **/
Fraction add(Fraction f1, Fraction f2){
    Fraction result;
    result.up = f1.up*f2.down+f2.up*f1.down;
    result.down = f1.down*f2.down;
    return reduction(result);
}

/** fraction subtraction **/
Fraction minu(Fraction f1, Fraction f2){
    Fraction result;
    result.up = f1.up*f2.down-f2.up*f1.down;
    result.down = f1.down*f2.down;
    return reduction(result);
}

/** fraction multiplication **/
Fraction multi(Fraction f1, Fraction f2){
    Fraction result;
    result.up = f1.up*f2.up;
    result.down = f1.down*f2.down;
    return reduction(result);
}

/** fraction division **/
Fraction divide(Fraction f1, Fraction f2){
    Fraction result;
    result.up = f1.up*f2.down;
    result.down = f1.down*f2.up;
    return reduction(result);
}

Strategy: string processing

Construction

There are several ways to construct a string:

- Read a string through cin

string str;
cin >> str;

- Read into a char[] and construct a string from it

char c[100];
scanf("%s", c);
string str = string(c);

Traversal

- Index traversal

string str = "abc";
for(int i=0; i<str.size(); i++){
    printf("%c", str[i]);
}

- Iterator traversal

string str = "abc";
for(string::iterator it=str.begin(); it!=str.end(); it++){
    printf("%c", *it);
}

Deleting elements

There are two ways to delete elements.
- Delete by iterator

string str = "abcdefg";
str.erase(str.begin()+2, str.end()-1);
cout << str << endl;
// Output: abg

- Delete by index

string str = "abcdefg";
str.erase(3, 2);
cout << str << endl;
// Output: abcfg

Searching

A search can look for a single character or a substring.

- Find a character (returns string::npos if not found)

string str = "abcdeft";
int cPosition = str.find_first_of('c');

- Find a substring (returns string::npos if not found)

string str = "abcdefg";
int idx = str.find("cde");
int idx2 = str.find("cde", 2); // match the substring "cde" starting from position 2

Conversion between numbers and strings

This conversion is usually used in simple big-number scenarios.

// string to int
string str = "111";
int a = stoi(str);

// int to string
str = to_string(a);

// char[] to int
char c[5] = "111";
int b = atoi(c);
b = atoi(str.c_str());

Changing case

To change a whole string to upper or lower case:

string str = "abcABC";
// convert to uppercase
transform(str.begin(), str.end(), str.begin(), ::toupper);
// convert to lowercase
transform(str.begin(), str.end(), str.begin(), ::tolower);

Strategy: arrays

The array is the basic container for storing data. A tree array (Fenwick tree, also called a binary indexed tree) supports prefix sums with point updates; the summing template:

#define lowbit(i) ((i) & (-i))
const int maxN = 100010;
int c[maxN];

int getSum(int x){
    int sum = 0;
    for(int i=x; i>=1; i-=lowbit(i)){
        sum += c[i];
    }
    return sum;
}

void update(int x, int v){
    for(int i=x; i<maxN; i+=lowbit(i)){
        c[i]+=v;
    }
}

Summary of strategies for pattern problems

This concludes the analysis of strategies for pattern problems. The reason these problems carry the word "pattern" is that when a problem setter designs them, there are upper and lower bounds to the design, and there are frames of reference.
As long as the framework is mastered, such problems can be cracked by direct application of the templates. To be continued...
https://programmer.help/blogs/path-of-pat-a.html
I want to output my result to a file. I use BufferedWriter as below:

public class class1{
    ...
    void print() {
        System.out.println("The name "+outName()+" Tel: "+outNumber());
        try{
            PrintWriter printWriter=new PrintWriter(new BufferedWriter(new FileWriter("myfile.txt", true)));
            printWriter.println("The name "+outName()+" Tel: "+outNumber());
        }catch (IOException e){}
    }
}

public class class2{
    ...
    void print() {
        System.out.println("The name "+outName()+" Tel: "+outNumber());
        try{
            PrintWriter printWriter=new PrintWriter(new BufferedWriter(new FileWriter("myfile.txt", true)));
            printWriter.println("The name "+outName()+" Tel: "+outNumber());
        }catch (IOException e){}
    }
}

public static void main(String[] args) throws IOException{
    try{
        PrintWriter printWriter=new PrintWriter(new BufferedWriter(new FileWriter("myfile.txt", true)));
        ...
        printWriter.println("something");
        printWriter.close();
    }catch(IOException e){
    }
}

There are three (OK ... make that four, no five) significant problems with your code.

In class2 you don't close or flush the PrintWriter after you have finished writing. That means that the data will never be written out to the file. That's why you never see the output. This is the obvious bug. But the rest of the problems are also important. Arguably MUCH MORE important ... so keep reading.

The print() method in class2 leaks file descriptors (!). Each time you call it, it will open a file descriptor, write stuff ... and drop it on the floor. If you call print() repeatedly, the FileWriter constructor will eventually fail. You need to close the file, and the cleanest way to ensure it always happens is to write the code like this:

try (PrintWriter printWriter = new PrintWriter(new BufferedWriter(
        new FileWriter("myfile.txt", true)))) {
    printWriter.println(...);
}

This is a "try with resources" ... and it guarantees that the resource (printWriter) will be closed when the scope exits.

You are squashing exceptions.

try {
    PrintWriter printWriter = ...
} catch (IOException e) {
    // SQUASH!!!
}

This is really, really bad. Basically, you have written your code to ignore the exception. Pretend it never happened ... and throw away the information in the exception that would say why it happened. You should only ever squash an exception if you are absolutely sure that you will only catch expected exceptions, and that ignoring them is absolutely correct. Here, it isn't. If an IOException is thrown here, you need to know why!

Opening multiple streams to write to the same file is a recipe for problems. The streams won't be synchronized, and you are likely to see the output interleaved in the output file in unexpected ways. If the output pipelines include buffering (like yours do), the problem is worse.

You have serious Java style issues:

- A class name should always start with an uppercase letter. Always. Even in example code snippets ...
- Code should be consistently indented. I recommend using SP characters rather than TAB characters because tabs don't display consistently.
- There are style rules about where you should and should not put spaces and line breaks. For example, there should always be whitespace around a binary operator.

Find a Java style guide, read it and format your code accordingly. Always write your code so that >>other people<< can read it.
https://codedump.io/share/gGEHAv9Nr0GP/1/write-to-the-same-file-using-bufferwriter-from-multiple-classes
Building Docker Images for Static Go Binaries

Building applications in Go enables the ability to easily produce statically linked binaries free of external dependencies. Statically linked binaries are much larger than their dynamic counterparts, but often weigh in at less than 10 MB for most real-world applications. The reason for such large binary sizes is that everything, including the Go runtime, is included in the binary. The large binary size is a tradeoff I'm willing to make, since I gain the ability to deploy applications by copying a single binary into place and executing it.

So I can't help but ask: if the process of building and deploying static binaries is so easy, why would I want to bring Docker into the mix?

Well, Docker does offer the convenience of a standardized packaging format that makes it easy to share, discover, and install applications. I see Docker images as being similar to rpms in concept, with Docker images having the advantage of packaging up my entire application in a single artifact. Hmm... this sounds familiar.

Deploying applications with Docker also brings the benefits of Linux containers: I essentially gain the ability to leverage Linux namespaces and cgroups for free.

After being sold on the benefits of using Docker for Go apps, I started building containers for them. I was a bit surprised by the initial results. To help illustrate my journey I'll use an example application called Contributors. The Contributors app is a simple web frontend that lists the contributors, along with their avatar photos, for a given GitHub repository.
I, like many people, followed examples for building Docker images using a Dockerfile that looks something like this:

FROM google/debian:wheezy
MAINTAINER Kelsey Hightower <kelsey.hightower@gmail.com>

RUN apt-get update -y && apt-get install --no-install-recommends -y -q curl build-essential ca-certificates git mercurial

# Install Go
# Save the SHA1 checksum from
RUN echo '9f9dfcbcb4fa126b2b66c0830dc733215f2f056e go1.3.src.tar.gz' > go1.3.src.tar.gz.sha1
RUN curl -O -s
RUN sha1sum --check go1.3.src.tar.gz.sha1
RUN tar -xzf go1.3.src.tar.gz -C /usr/local
ENV PATH /usr/local/go/bin:$PATH
ENV GOPATH /gopath
RUN cd /usr/local/go/src && ./make.bash --no-clean 2>&1

WORKDIR /gopath/src/github.com/kelseyhightower/contributors

# Build the Contributors application
RUN mkdir -p /gopath/src/github.com/kelseyhightower/contributors
ADD . /gopath/src/github.com/kelseyhightower/contributors
RUN CGO_ENABLED=0 GOOS=linux go build -a -tags netgo -ldflags '-w' .
RUN cp contributors /contributors

ENV PORT 80
EXPOSE 80
ENTRYPOINT ["/contributors"]

The above Dockerfile combines the Docker image creation process with the application build process. While this workflow has its advantages, I consider it unnecessary when working with statically linked binaries. I personally build my applications in CI and save the resulting binaries as artifacts of the build, which allows me to package my application in any format including a Docker image, rpm, or tarball.

Another drawback to building the application during the image creation process is the size of the resulting Docker image. The above Dockerfile produces a Docker image that checks in at around 500 MB. Earlier I said that size does not matter, but at almost 500 MB we had better start paying attention.

Why is the Docker image so big? Building the Contributors app as part of the image creation process means we must set up a working build environment, including all the tools required to download and build the Go runtime.
This requirement locks us into choosing a base image capable of installing all those extra bits, and such a base image usually starts around the 100 MB range. Then every additional RUN command in the Dockerfile increases the overall file size of the image. Keep in mind I'm doing all of this so I can produce the same binary built by my CI system. While I could add logic to the Dockerfile to clean up unneeded files, I decided to keep the application build process in CI, and treat the Docker image as just another packaging target. At this point I was able to shrink my Dockerfile to the following:

FROM google/debian:wheezy
MAINTAINER Kelsey Hightower <kelsey.hightower@gmail.com>

ADD contributors contributors

ENV PORT 80
EXPOSE 80
ENTRYPOINT ["/contributors"]

Notice the addition of the ADD command, which adds the contributors binary to the image. The build process now goes like this:

CGO_ENABLED=0 GOOS=linux go build -a -tags netgo -ldflags '-w' .
docker build -t kelseyhightower/contributors .

I'm now working with a much smaller Docker image, but at 120 MB the image is still too big, especially since our binary doesn't require any files in the image. The easy solution is to use a smaller base image such as busybox, considered tiny at 5 MB, or the absolute smallest image available, the scratch image:

FROM scratch
MAINTAINER Kelsey Hightower <kelsey.hightower@gmail.com>

ADD contributors contributors

ENV PORT 80
EXPOSE 80
ENTRYPOINT ["/contributors"]

Normally the scratch image will not work for typical Go applications, because they are often built with cgo enabled, which results in a binary that dynamically links to a few dependencies.
Also, some Go applications make external calls to SSL endpoints, which will fail with the following error when running from the scratch image:

x509: failed to load system roots and no roots provided

The reason for this is that on Linux systems the tls package reads the root CA certificates from /etc/ssl/certs/ca-certificates.crt, which is missing from the scratch image. The Contributors app gets around this problem by bundling a copy of the root CA certificates and configuring outbound calls to use them.

Bundling the actual root CA certificates is pretty straightforward. The Contributors app takes a cert bundle, /etc/ssl/certs/ca-certificates.crt, from CoreOS Linux and assigns the text to a global variable named pemCerts. Then in the main package's init() the Contributors app initializes a cert pool and configures an HTTP client to use it:

func init() {
    pool = x509.NewCertPool()
    pool.AppendCertsFromPEM(pemCerts)
    client = &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        },
    }
}

From this point on, all calls using the new HTTP client will work with SSL endpoints. Check out the source code for more details on how the root CA certificates are wired up.

Using the updated Dockerfile and re-running the build process, I end up with a Docker image that is slightly larger than the Contributors app binary. We are now down to a 6 MB Docker image. At this point I have a Docker image optimized for size that is ready to run and share with others.

docker run -d -P kelseyhightower/contributors
https://medium.com/@kelseyhightower/optimizing-docker-images-for-static-binaries-b5696e26eb07
I seem to have an intermittent issue with the following code segment. I am installing an application which has an associated Excel add-in, which I register and then run the on-open macro; the formula then calls one of the functions from the add-in. The script fails at the RegisterXLL step, reporting "AttributeError: Excel.Application.RegisterXLL", and after failing a number of times it will succeed, until I reboot, after which it starts failing again for a number of iterations before working once more. If I run the script, let it fail, and then comment out the install step, the script works. This is running on both Win2K and WinXP, with both Excel 2000 and 2003. Any ideas gratefully received.

import os
from win32com.client import Dispatch

os.system(...)  # install "setup.exe" in silent mode

xlApp = Dispatch("Excel.Application")
xlApp.RegisterXLL(xll)
xlApp.Visible = 1
wb = xlApp.Workbooks.Add()
xlApp.Workbooks.Open(xla).RunAutoMacros(1)
xlApp.ActiveSheet.Cells(1,1).Formula = '=xversion()'
wb.Close(SaveChanges=0)
xlApp.Quit()
xlApp.Visible = 0
del xlApp

os.system(...)  # uninstall "setup.exe" in silent mode
https://www.daniweb.com/programming/software-development/threads/65965/excel-add-in-registerxll
Your message dated Mon, 28 May 2001 00:08:35 +0200 (MEST)
with message-id <15121.31254.719834.751026@bolero>
and subject line fixed bugs in gcc-3.0 (3.0.ds6-0pre010526)

>From ege@rano.org Thu May 10 16:01:19 2001
Return-path: <ege@rano.org>
Received: from mta05-svc.ntlworld.com [62.253.162.45]
	by master.debian.org with esmtp (Exim 3.12 1 (Debian))
	id 14xxYl-0006QV-00; Thu, 10 May 2001 16:01:19 -0500
Received: from dueto.rano1.org ([62.253.149.167])
	by mta05-svc.ntlworld.com (InterMail vM.4.01.02.27 201-229-119-110)
	with ESMTP id <20010510210103.FXIV272.mta05-svc.ntlworld.com@dueto.rano1.org>
	for <submit@bugs.debian.org>; Thu, 10 May 2001 22:01:03 +0100
Received: from ege by dueto.rano1.org with local (Exim 3.22 #1 (Debian))
	id 14xxIe-0001WZ-00; Thu, 10 May 2001 21:44:40 +0100
From: edmundo@rano.org
Subject: gcc: seems to generate incorrect code
To: submit@bugs.debian.org
X-Mailer: bug 3.3.9
Message-Id: <E14xxIe-0001WZ-00@dueto.rano1.org>
Date: Thu, 10 May 2001 21:44:40 +0100
Delivered-To: submit@bugs.debian.org

Package: gcc
Version: 1:2.95.3-7
Severity: important

Maybe I've been working too hard, but this gcc seems to get this wrong with -O2:

void gunc(unsigned a, unsigned b)
{
}

unsigned func(unsigned a, unsigned b)
{
    long long x = 0;
    gunc(a, b);
    return (x + 1) * a * b;
}

int main()
{
    unsigned a, b;
    a = 1;
    b = 1;
    return func(a, b);
}

-- System Information
Debian Release: testing/unstable
Kernel Version: Linux dueto.rano1.org 2.2.19 #4 SMP Mon Apr 16 12:55:15 BST 2001 i686 unknown

Versions of the packages gcc depends on:
ii cpp      2.95.3-7       The GNU C preprocessor.
ii cpp-2.95 2.95.4-0.01042 The GNU C preprocessor.
ii gcc-2.95 2.95.4-0.01042 The GNU C compiler.

---------------------------------------

Received: (at 97030-done) by bugs.debian.org; 27 May 2001 22:10:18 +0000
>From doko@cs.tu-berlin.de Sun May 27 17:10:18 2001
Return-path: <doko@cs.tu-berlin.de>
Received: from mail.cs.tu-berlin.de [130.149.17.13] (root)
	by master.debian.org with esmtp (Exim 3.12 1 (Debian))
	id 1548jm-0005Ir-00; Sun, 27 May 2001 17:10:14 -0500
Received: from bolero.cs.tu-berlin.de (doko@bolero.cs.tu-berlin.de [130.149.19.1])
	by mail.cs.tu-berlin.de (8.9.3/8.9.3) with ESMTP id AAA03363;
	Mon, 28 May 2001 00:08:35 +0200 (MET DST)
Received: (from doko@localhost)
	by bolero.cs.tu-berlin.de (8.9.3+Sun/8.9.3) id AAA01795;
	Mon, 28 May 2001 00:08:35 +0200 (MEST)
From: Matthias Klose <doko@cs.tu-berlin.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Date: Mon, 28 May 2001 00:08:35 +0200 (MEST)
To: 98851-done@bugs.debian.org, 93597-done@bugs.debian.org,
	94576-done@bugs.debian.org, 96448-done@bugs.debian.org,
	96461-done@bugs.debian.org, 93343-done@bugs.debian.org,
	96348-done@bugs.debian.org, 96262-done@bugs.debian.org,
	97134-done@bugs.debian.org, 97905-done@bugs.debian.org,
	96451-done@bugs.debian.org, 95812-done@bugs.debian.org,
	93157-done@bugs.debian.org, 87000-done@bugs.debian.org,
	97030-done@bugs.debian.org
Subject: fixed bugs in gcc-3.0 (3.0.ds6-0pre010526)
X-Mailer: VM 6.43 under 20.4 "Emerald" XEmacs Lucid
Message-ID: <15121.31254.719834.751026@bolero>
Delivered-To: 97030-done@bugs.debian.org

gcc-3.0 (3.0.ds6-0pre010526) unstable; urgency=high

  * Urgency "high" for replacing the gcc-3.0 snapshots in testing, which
    now are incompatible due to the changed ABIs.
  * Upstream begins tagging with "gcc-3_0_pre_2001mmdd".
  * Tighten dependencies to install only binary packages derived from one
    source (#98851). Tighten libc6-dev dependency to match libc6.

gcc-3.0 (3.0.ds6-0pre010525) unstable; urgency=low

  * ATTENTION: The ABI (exception handling) changed. No upgrade path from
    earlier snapshots (you had been warned in the postinst ...)
    Closing #93597, #94576, #96448, #96461. You have to rebuild
  * HELP is appreciated for scanning the Debian BTS and sending followups
    to bug reports!!!
  * Should we name debian gcc uploads? What about a "still seeking g++
    maintainer" upload?
  * Fixed in gcc-3.0: #97030
  * Update patches for recent (010525) CVS sources.
  * Make check depend on build target (fakeroot problems).
  * debian/rules.d/binary-libgcc.mk: new file, build first.
  * Free memory detection on the hurd for running the testsuite.
  * Update debhelper build dependency.
  * libstdc++-doc: Include doxygen generated docs.
  * Fix boring packaging bugs, too tired for appropriate changelogs ...
    #93343, #96348, #96262, #97134, #97905, #96451, #95812, #93157
  * Fixed bugs: #87000.
https://lists.debian.org/debian-gcc/2001/05/msg00134.html
How to add all the diagonal elements of a matrix in C++

In this tutorial, we will learn how to add all the diagonal elements of a matrix in C++, with an algorithm.

In order to add all diagonal elements of a matrix, it is important to make sure that the number of rows equals the number of columns. If they are not the same, we will not get proper diagonals, which will lead to a false answer.

What proper diagonals means:

consider a matrix of size 3*3

7 8 5
6 4 2
1 2 3

We can see that the 3*3 matrix above has proper diagonals: if we start from one corner, we end up at another corner.

consider a matrix of size 3*4

1 2 4 7
2 1 7 5
7 4 1 5

We can see that the 3*4 matrix above does not have proper diagonals: if we start from one corner, we do not end up at another corner.

Add all the diagonal elements of a matrix

Let's take an example to understand what the addition of all diagonal elements looks like.

Example: consider a matrix of size 5*5

Input matrix:

1 5 7 4 6
2 4 5 6 8
8 7 5 4 1
7 4 1 0 5
2 6 4 7 8

The addition of all diagonal elements should be:
1 + 4 + 5 + 0 + 8 + 2 + 4 + 6 + 6 = 36

Note: the elements added in the examples above are the ones lying on the two diagonals of the matrix.

Algorithm

- Declare and initialize a matrix of size n*m in the main function (n = row size, m = column size).
- Declare and initialize a variable, say "sum = 0".
- Check whether 'n' is equal to 'm'. If so, proceed to the next steps.
- Declare two loops, one from i = 0 to 'n' and another inside the first from j = 0 to 'm'.
- Inside the second loop, check whether i == j or i + j == n - 1. For every arr[i][j] where this holds, add arr[i][j] to "sum".
- Print the value of sum.
Note: we can use either 'n' or 'm' in the loop conditions or inside the loops, because the values of 'n' and 'm' are equal, which is already checked in step 3.

Program to add all diagonal elements of a matrix in C++

#include <cstdlib>
#include <iostream>
#define size 100
using namespace std;

int main(int argc, char** argv) {
    int arr[size][size], n, m, sum = 0;
    cout<<"Enter the length of row: ";
    cin>>n;
    cout<<"Enter the length of column: ";
    cin>>m;
    // checking for equality of m and n
    if(n != m){
        cout<<"Length of rows and columns must be equal.";
        exit(0);
    }
    // taking input into matrix 'arr'
    cout<<"\nEnter the elements of matrix: \n";
    for(int i = 0; i < n; i++){
        for(int j = 0; j < m; j++){
            cin>>arr[i][j];
        }
    }
    // finding the sum of all diagonal elements
    cout<<"\nSum of All diagonal elements is: ";
    for(int i = 0; i < n; i++){
        for(int j = 0; j < m; j++){
            if(i == j || i + j == n - 1){
                sum = sum + arr[i][j];
            }
        }
    }
    // printing the sum of all diagonal elements of the entered matrix
    cout<<endl<<sum;
    return 0;
}

Output:

Enter the length of row: 5
Enter the length of column: 5

Enter the elements of matrix:
1 5 7 4 6
2 4 5 6 8
8 7 5 4 1
7 4 1 0 5
2 6 4 7 8

Sum of All diagonal elements is:
36

Time complexity: O(n^2)

Note: in the program, the maximum matrix size is restricted to 100, because the matrix is declared as "arr[size][size]" with size = 100 from "#define size 100".
https://www.codespeedy.com/add-all-the-diagonal-elements-of-a-matrix-in-cpp/
Hi again, I ran into some problems with this code. I'm quite sure the code is correct, but the output is not what I expect. This is my code:

#include<iostream>
#include<string>
#include<sstream>
using namespace std;

struct movie{
    string title;
    int year;
};

struct info{
    string name;
    movie fmovie;
};

void main(){
    info xinfo;
    cout<<"Enter name: ";
    getline(cin, xinfo.name);
    cout<<"Enter movie: ";
    getline(cin, xinfo.fmovie.title);
    cout<<"Enter year: ";
    cin>>xinfo.fmovie.year;
    cout<<"Name is "<<xinfo.name<<endl;
    cout<<"Movie is "<<xinfo.fmovie.title<<endl;
    cout<<"Year is "<<xinfo.fmovie.year<<endl;
}

When I run this, it asks for the name, but when I type in my name and press Enter it doesn't proceed to asking for the movie; it just hangs there on the new line doing nothing. So I press Enter again, and now it asks for the movie; I press Enter and it asks for the year. But in the final output only the name is shown and the other two are garbled. Why is this? The output looks like this:

Enter name: john
Enter movie: avatar
Enter year: Name is john
Movie is
Year is -85999284
Source: https://www.daniweb.com/programming/software-development/threads/430363/sstream-string-problem
Creating methods in servlets (JSP-Servlet). Document: index. Created on: Dec 15, 2008, 7:49:51 PM. Author: mihael. Check for the mistake: in the JSP page having a "Form" tag.

Related threads:

- Using servlets to build the HTML: when building the HTML with servlets, you include a DOCTYPE line so that validators know which specification to check your document against.
- Servlets (JSP-Servlet): generating a PDF from a servlet, for example:
  OutputStream file = new FileOutputStream(new File("C://text.pdf"));
  Document document = new Document();
  PdfWriter.getInstance(document, file);
- Servlets vs JSP: what is the main difference between servlets and JSP? Java Server Pages are document-centric; servlets, on the other hand, hold the business logic, and servlets are faster than JSP.
- "This is my JavaScript code and I am not understanding the mistake in this, please help me?" (a form validation snippet with a checkEmail function)
- PDF document: how to open a PDF document on iPhone? There's a whole toolkit built in which lets you render PDF pages to a UIView; check out CGPDFDocumentCreateWithURL.
- JSP/servlets: I have servlet s1 in which I have created an employee object, and another servlet s2; how can we find the employee information in s2?
- Document Type Definition: what is a Document Type Definition?
- HTML document's background color: how to set an HTML document's background color?
- Open Word document: how to open a Word document? Please go through the link "Java Read word document file"; it provides an example that reads a Word document using the POI library in Java.
- XML document parsing: an XML document has been inserted into the database; how do I parse the stored document so that it can be retrieved on a JSP page using a JDBC/ODBC connection?
- Write String in Word document: how to read and write strings in a Word document?
- HTML framed document: how do I change the title of a framed document?
- Java program for printing a document.
- What are servlets in Java?
- Document conversion (WebServices): I have fully scanned book pages in which I have to make some changes for my new publication; from where can I do this? Click upload, convert to RTF.
- Servlets program: a servlet in package com.nitish.servlets (RequestServlet) using javax.servlet, java.io, java.sql, javax.sql, and oracle.sql.
Source: http://www.roseindia.net/tutorialhelp/comment/80807
The things I'll cover in this tutorial:

1. Connecting a bunch of LEDs to a project
2. Integrating a toy into a project
3. The coolness of ShapeLock
4. Running motors, servos and steppers (using the motor shield)
5. Connecting an XBee if you don't have room for the shield
6. Remote controlling your application from a web application over XBee
7. Connecting to I2C devices (LCD, Wiimote, another Arduino)
8. Connecting to a Parallax knock sensor
9. Making sound with a speaker
10. Getting input from buttons
11. General puzzle trickery

For the puzzles I thought I'd allow for Myst-type puzzles (you've been given a box, figure it out) and Mario-style ones (generally you know what you're supposed to do, and sometimes you even get a chance to practice parts of it before doing it for real). The attached video shows a "level" where you have to put in the Contra code on buttons that look kind of like an NES controller.

Step 1: Stuff Used in This Project

One goal of this project, as it progressed, was to completely over-engineer it. I think I did that.

1 - Unfinished wooden box from a craft store
7 - Small push buttons
1 - Arcade push button
1 - 5-way button
1 - Serial/I2C LCD
32 - LEDs
1 - Resistor
1 - GPS
1 - Keypad
1 - Parallax knock sensor
1 - XBee
1 - Speaker
4 - Nixie tubes
1 - Nixie tube shield
1 - Adafruit motor shield
1 - Buzz Lightyear disc gun
1 - Stepper
1 - Servo
1 - Pack of ShapeLock
1 - Wiimote nunchuck
1 - XBee shield from SeeedStudio
1 - Arduino Uno
1 - Arduino Mega (SparkFun free day goodness)
A few random screws
Lots of random wires
Lots of batteries

Step 2: LEDs

Step 3: "Coin" Shooter

Step 4: Taking the Gun Apart

Step 5: Putting It in the Box

Step 6: Running the Gun

To run a servo with the motor shield, it's exactly like running a servo connected directly to an Arduino. In fact, all the motor shield does is give you a convenient 3-pin header to connect to. GOTCHA: according to the FAQ, Arduino pin 9 goes to servo header 1 and pin 10 goes to servo header 2.
On mine it was backwards (pin 9 -> servo header 2, pin 10 -> servo header 1). To control a motor you basically set it up, tell it what port it's connected to, set the speed and direction, and let it go. If you don't connect external power to the shield, motors won't work (even though steppers do). Note: obviously this could be done without the motor shield, but I had one around and was using an Arduino Mega, so I had plenty of pins.

Step 7: XBee

Remember, when you're connecting a serial IC:

Arduino        IC
RX -----------> TX
TX -----------> RX

When I soldered the headers on, I could have done it two different ways. I could have just soldered a female header to fit on the pins that normally plug into the male pins that hook into the Arduino. The second way is to solder new male headers onto the side of the shield, right next to the XBee pins. Doing it that way meant I didn't have to figure out the pins (I was having a hard time finding documentation on them). The problem with doing it this way is that the Arduino is working at 5v and the XBee shield wants 3.3v. From what I hear it's 5v tolerant though, and for me it's working fine so far (it may reduce the life of the XBee though).

Step 8: GPS and Other Low Voltage Serial Devices

My project also has a GPS connected. Rather than give specifics on my GPS module (especially since it's been discontinued by SparkFun), I've decided to just give general help on hooking up a GPS to an Arduino. You'll want to look at the datasheet for your particular GPS before connecting. The things you want to pay the most attention to are: what the pins are, and what voltage they take. Many GPS units only tolerate 3.3v, and an Arduino works at 5v (note: there are Arduinos that work at 3.3v as well, like the Arduino Pro 3.3v). The Arduino Uno (and the Duemilanove) has a 3.3v power output on it. You should be able to use that to feed power to the device. You also need to protect the serial connection to the device.
To do this, use a logic level converter, like this one from SparkFun. This is really easy to connect:

Arduino        Logic Level Converter
5v -----------> HV
3.3v ---------> LV
RX -----------> RXI
TX -----------> TXI
GND ----------> GND

Then to connect to the GPS unit (or any other 3.3v serial device):

Logic Level Converter        GPS
RXO ------------------------> TX
TXO ------------------------> RX

Direct connections from Arduino to GPS:

Arduino        GPS
3.3v ---------> Vin
GND ----------> GND

To translate what's coming from the GPS and use it for whatever you're trying to do, you'll need to read the datasheet or other documents. If you're worried about getting all this working, you may want to find a GPS unit where someone's worked out all the code and has a tutorial for it. If I were going to get one today, I'd probably go with this unit (and tutorial).

Step 9: Remote Control

Step 10: Install Sinatra

Sinatra is a web "micro-framework". This basically means it gives you what you need to make web applications without a lot of extra stuff. Don't get me wrong, I love Rails, but for something small like this it seems like overkill. Sinatra is great for something like this. This app is built to run on Windows, but the only thing that would need to change to run on Linux is some of the serial stuff.

First off, installing Ruby and Sinatra. Download the newest Ruby 1.8 installer. I would prefer 1.9, but it looks like the serial libraries aren't being maintained and don't work with 1.9. Open a command prompt and type "gem install serialport", then type "gem install sinatra". One quirk with Ruby 1.8 is that it has a problem with rack, which gets installed with Sinatra. To fix this, do "gem uninstall rack" (tell it to uninstall the binary, and uninstall even though other gems are dependent on it), then "gem install rack --version '1.2.0'". You should now have all the stuff installed that you need to run the program.
I like to use Thin to run my Ruby stuff; it's a pretty fast, easy-to-use Ruby web application server. To make this happen, type "gem install thin" in the command prompt.

Step 11: The Sinatra Application

This gets the required libraries for Sinatra and serial:

require 'serialport'
require 'sinatra'

This is where you configure the serial port number and speed:

arduinoSerialPort = 'COM22'
arduinoSerialPortSpeed = 9600

This creates the serial port object. It's not really important to understand what this means if you're not a programmer, but because we made sp equal to SerialPort.new, sp.write is how we'll output stuff to the serial port:

sp = SerialPort.new(arduinoSerialPort, arduinoSerialPortSpeed, 8, 1, SerialPort::NONE)

Normally in Sinatra you'd have a separate file in a separate directory with the template of your page. I wanted to have this app all in one file, so I made a really simple template below. In the "routes" (the pages in the URL) I replace the word BODY below with the HTML for the page I want to display:

htmlCode = "<html><head></head><body>BODY</body></html>"

Here's the first route. "get" just means it's a GET request, which is the type of request that happens when you type in a URL or click on a link. The '/' means the homepage, basically (so if I type in the base URL, I'll get to this code). Everything between the "do" and the "end" is executed as part of this page. Here all I do is have a body variable and assign it a bunch of HTML for links to other pages. There are three links below: one to /shootstuff, one to /lightallleds, and one to /coinsound. The last thing I do is replace the BODY in htmlCode (from above) with the HTML I defined here. Whatever the last thing in the route is gets returned and rendered as the page.
get '/' do
  body = "<a href=\"/shootstuff\">Shoot Stuff</a><br />"
  body += "<a href=\"/lightallleds\">Light All LEDs</a><br />"
  body += "<a href=\"/coinsound\">Mario Coin Sound</a><br />"
  htmlCode.gsub("BODY", body)
end

Here's another route; this one's what you get if you hit /shootstuff. The first thing it does is send a "1" over serial to my coin box. Then it just has a link to go back to the last page:

get '/shootstuff' do
  sp.write "1"
  body = "<b>Stuff Should be shooting</b></br>"
  body += "<a href=\"/\">Return to Actions</a><br />"
  htmlCode.gsub("BODY", body)
end

Similar to the route above, this one fires when someone goes to /lightallleds. It sends a "2" over serial and gives a link back to the index page:

get '/lightallleds' do
  sp.write "2"
  body = "<b>All LEDs Should Be Lit</b></br>"
  body += "<a href=\"/\">Return to Actions</a><br />"
  htmlCode.gsub("BODY", body)
end

This route is similar to the two above. It fires when someone hits /coinsound, writes a "3" over serial, and has a link back to the index page:

get '/coinsound' do
  sp.write "3"
  body = "<b>Coin Sound Should Have Sounded</b></br>"
  body += "<a href=\"/\">Return to Actions</a><br />"
  htmlCode.gsub("BODY", body)
end

Step 12: Starting Up the Sinatra Application

Step 13: I2C Devices

In my puzzle box I used 3 I2C devices:

1. Web4robot serial/I2C LCD
2. Wiimote nunchuck
3. An Arduino Uno with a Nixie tube shield (I used this as the timer for the game)

I2C has two lines, SDA and SCL. To connect I2C devices, simply connect the SDA on the Arduino to all the SDAs on the devices, and connect the SCL on the Arduino to all the SCLs on the other devices. To get good reliability, it's good to connect a 1.5 kOhm resistor between 5v and each of the lines (a pull-up resistor). Arduino has a library called Wire that's used to communicate between I2C devices. Each I2C bus should have one master and can have up to 128 devices total. In my case I set up the Arduino Mega as the master, and everything else was a slave (including the Arduino Uno that was running the Nixie tubes).
Care should be taken when connecting I2C devices to see what voltages they can tolerate. Like serial devices, many I2C devices can only tolerate 3.3v, instead of the 5v that many Arduinos work at. You can use the same logic level converter I mentioned when I was talking about serial devices on I2C. A couple of really good I2C tutorials are linked here.

Step 14: Web4Robot LCD

Minimally, to get this LCD working:

1. Connect the SDA pin on the LCD to the SDA pin on the Arduino
2. Connect the SCL pin on the LCD to the SCL pin on the Arduino
3. Connect Power on the LCD to 5v on the Arduino
4. Connect Gnd on the LCD to Gnd on the Arduino

I used this library to connect to the LCD. It works really well; all you need to do to start sending data over to it is:

Include the libraries:

#include <Wire.h>
#include <LCDI2C.h>

Set up the options for your particular LCD:

int g_rows = 4;
int g_cols = 20;
LCDI2C lcd = LCDI2C(g_rows, g_cols, 0x4C, 1);

In setup() you need to initialize the LCD (clears it, sets up the cursor, etc.):

lcd.init();

Now you're ready to use it:

lcd.print("Stuff to print to the LCD");

Move the cursor down a row:

lcd.setCursor(1, 0);

Clear the LCD:

lcd.clear();

There are many other functions, such as off() and on(), cursor functions, the ability to draw graphs, etc.

Step 15: Wiimote Nunchuck

There are already many tutorials on connecting to and communicating with a Wiimote nunchuck, so I'll give some tips on the way I did it and reference those other tutorials. I used this breakout so I didn't have to cut off the end of the nunchuck cord. The PCB just slides into the end of the Wiimote nunchuck connector. All you need to do is solder on headers and connect:

Arduino        Nunchuck Breakout
5v -----------> PWR (I think 3.3v works as well; in fact it may prefer 3.3v, but it's 5v tolerant)
GND ----------> GND
SDA ----------> D
SCL ----------> C

After that, you just use a library to communicate with it.
Here's the tutorial and the library. Here's another good tutorial.

Step 16: Second Arduino With I2C

Arduino Mega        Arduino Uno
SDA ---------------> SDA
SCL ---------------> SCL
GND ---------------> GND
Vin ----------------> Vin (this may not be necessary . . . anyone know for sure?)

The master (Arduino Mega) communicates the same way it normally does. The slave (Arduino Uno) is set up like this in setup():

Wire.begin(4);                // join i2c bus with address #4
Wire.onReceive(receiveEvent); // register event

Then you just need to define receiveEvent (this one is from the example that comes with the Arduino IDE; it's under Wire in Examples):

void receiveEvent(int howMany) {
  while (1 < Wire.available()) { // loop through all but the last
    char c = Wire.receive();     // receive byte as a character
    Serial.print(c);             // print the character
  }
  int x = Wire.receive();        // receive byte as an integer
  Serial.println(x);             // print the integer
}

For mine, I just have the master send one thing that tells the timer to start. Here is a really good tutorial on connecting multiple Arduinos.

Step 17: Parallax Sound Impact Sensor

Arduino        Impact Sensor
GND ----------> GND
5v ------------> 5v
SIG -----------> Arduino digital or analog pin of your choice

The board has a little pot that you can adjust to tell it how loud the sound has to be to trigger the signal. To read the signal you have a couple of choices. You can read it like a button (check for HIGH on the pin, then you know it's fired), or you can use one of the Arduino interrupts. I wanted to use this to trigger hitting the bottom of the box to spit out a "coin". I had mixed success with this because of how loud the little motor was that shot stuff out. Interrupts are easy to do. All you do is define the interrupt in setup() like this:

attachInterrupt(0, smack, RISING);

0 means that it's on digital pin 2. smack is the name of the function it will call when the interrupt happens.
You can also define when it will fire. I said RISING, which means it will fire whenever the voltage is going up (in other words, the SIG pin just went high). Then you just need to define smack:

void smack() {
  // do whatever you want here; I played the Mario coin sound and shot out a coin
}

Step 18: Buttons

To connect a button, I do this:

Arduino        Button
5v ------------> one of the button leads
button pin ---> other button lead

Set the pin to input:

pinMode(buttonPin, INPUT);

Do a digital read on the pin:

int reading = digitalRead(buttonPin);

That's the basics, but to get an accurate reading, especially if you're trying to detect sequences, you're going to want to do something like the Debounce example included with the Arduino IDE. Basically this checks to see if the same button is still pressed each time through the loop; if it's changed since the last read, it doesn't count the reading until the debounce delay has passed (a few milliseconds). This is because button readings can be a little squirrelly in the transition from on to off and can look like multiple button presses.

Step 19: Playing Music

Arduino                      Speaker
GND -----------------------> one speaker lead/wire
digital or analog pin -----> other speaker lead/wire

A couple of revs ago, the Arduino IDE started to include the tone library. Here are the basics (this will play the coin sound from Mario Bros); this is taken from the example that comes with the IDE.

Include the pitches.h header file:

#include "pitches.h"

// notes in the melody:
int melody[] = { NOTE_B5, NOTE_E6 };

// note durations: 4 = quarter note, 8 = eighth note, etc.:
int noteDurations[] = { 16, 2 };

void playSound() {
  // iterate over the notes of the melody:
  for (int thisNote = 0; thisNote < 2; thisNote++) {
    // to calculate the note duration, take one second divided by the note type:
    int noteDuration = 1000 / noteDurations[thisNote];
    tone(8, melody[thisNote], noteDuration);
    // a short pause so the notes are distinct:
    delay(noteDuration + 10);
    // stop the tone playing:
    noTone(8);
  }
}

Step 20: Puzzle Stuff

Contra Code (Up Up Down Down Left Right Left Right B A)

I've got this working a couple of different ways.
On one side of my puzzle box I have buttons that look somewhat like an NES controller. One way to have the user input the code is to use those buttons. Many times in video games you think you see the solution, but really it's slightly off from what you think. If you notice, the button on the left is different from the others. It's actually a 5-way button from Parallax, so it moves slightly left, right, up and down. The way this button is set, it looks like maybe I didn't have enough buttons, and since it doesn't stick out far, you have to use your fingernail to make it work. Like in video games, though, when something seems a little different, it's something you should pay attention to.

Discs giving devices to help with other puzzles, or prizes

The way these discs are, you can attach pieces of paper or other small things to them. Ideas for prizes include things like cash, gift certificates, or other paper prizes. Other small things may include wire (as described in the next section) that could be used to solve future puzzles. Hints could also be included on the paper shot out on a disc.

Stepper Blocking Access to Flexible Resistor

Notice in the attached picture that there's a stepper with a piece of cardboard blocking access to a flexible resistor (there's a hole in the side of the box that would allow you to access it otherwise). To solve the puzzle you have to realize that something moves the stepper (i.e., a reading from the Wiimote nunchuck accelerometer, a reading from the GPS, button presses, or a combination of all of them). This one feels kind of like a Myst puzzle to me: hand them the box, they start doing things, hear the stepper moving, and then figure out what they need to do. I like the idea of them getting a piece of wire inside one of the discs after solving one of the previous puzzles and using that to solve this one.

Sound Impact Sensor vs Buttons

One thing I really like about the sound impact sensor is that it's basically like a hidden button.
Getting someone to figure out there's something inside detecting an impact can make things interesting.

Combining Multiple Readings

Getting people to do multiple things at the same time can be fun. Making them hold the box in a certain way and push a button that's awkward to push (or multiple buttons) can be fun.

Location Stuff

This box can basically work like a reverse geocache box. It can try to get the person to a location. Once they get there, you could connect to the device via XBee and send data over to the screen to help with other puzzles.
Source: http://www.instructables.com/id/Question-Box-Puzzle/
Will the HD video from a Lumix TZ7 import into iMovie09?

Looking at buying a Lumix TZ7, and I was just wondering if the HD widescreen video it shoots will import straight into iMovie when I connect it to my iMac? Cheers, Steve
Source: http://www.mac-forums.com/forums/movies-video/188090-will-hd-video-lumix-tz7-import-into-imovie09.html
applications. It helps you create connections to well-known and less-known SaaS and PaaS applications using the available cloud adapters, publish or subscribe to the Messaging Cloud Service, or use industry standards such as SOAP, REST, FTP, File, and JMS. Most of these technologies will be explained in more detail later. Integration Cloud Service (ICS) provides enterprise-grade connectivity regardless of the application you are connecting to or where it is hosted.

The concepts and terminology can be categorized into three major areas:

- Connections describe the inbound and outbound applications that we are integrating with
- Integrations describe how information is shared between applications
- Transformations and lookups describe how to interact with the data

We can engage with Oracle Cloud Services, and especially with ICS, through the Oracle Cloud website. Here we can try the service for free, which we can use when going through this book. Before we dive deep into the three major areas, let's first take a look at the typical workflow when creating integrations with Oracle Integration Cloud Service.

Since ICS is a cloud service, you only need to open a browser and enter the URL of your Cloud instance. We can sign into Oracle Integration Cloud Service by entering our credentials. Just as with any Oracle Cloud Service, users can be provisioned after subscribing to a service. After logging in we are welcomed by the home page:

The home page gives an overview of all the major functionality that ICS has to offer. On this page we can easily navigate to each of these functions, or to the help pages to learn the details. Besides the home page, all the functions are part of the Designer Portal. We use the Designer Portal to create the six pillars of ICS: Integrations, Connections, Lookups, Packages, Agents and Adapters.
We will discuss the pillars in the chapters to come, but we specifically address agents in Chapter 11, Calling an On-Premises API, and adapters in Chapter 13, Where Can I Go From Here?:

Let's investigate the most important pillars. Each integration starts with a blank canvas:

An integration always consists of a Trigger (source) and an Invoke (target). A Trigger is the connection the integration receives the message from. An Invoke is the connection the integration sends the message to. These two connections are the first two objectives before creating an integration. In the following figure, both the Trigger and Invoke connections use a SOAP connector. Simply drag and drop the connection to use from the Connections panel onto the drop zone:

When integrating two applications with each other, it is likely that the data structures the Trigger and Invoke applications understand are different. The next objective is to map the data between the two applications:

Which mappings you can create depends on the type of connection pattern. For example, when dealing with an asynchronous/one-way operation you only have a request mapping. When dealing with a synchronous operation you have both request and response mappings. The only time you can create a fault mapping is when both trigger and invoke connections define faults, for instance in the preceding case where both WSDLs define a business fault in their specification. For point-to-point integrations, these are the objectives to reach. But if you are dealing with more complex integrations, a typical workflow can consist of a few more objectives. For instance, the data received from the Trigger may need to be enriched (that is, locating and adding additional data based on data included in the message) before it can be sent to the Invoke. The next objective would be to add a call to an enrichment service.
This enrichment service can be a different connector from your trigger or invoke:

An enrichment service can easily be added with a simple drag and drop of the connection. Another objective can be to route to a different target based on the source data:

All of these objectives are going to be discussed in detail, but first let's explore the concepts and terminology behind them. It all starts with creating connections. A connection defines the application you want to integrate with. If an application has a public API, then ICS can integrate with it. For example, a well-known or lesser-known SaaS application, a public SOAP or REST API for weather or flight information, a custom application using the Messaging Service, or an on-premises Enterprise Resource Planning (ERP) application. Oracle Integration Cloud Service comes with a large set of out-of-the-box Cloud adapters to provide easy access to these applications. The number of available adapters is constantly growing. Most of these adapters are built by Oracle, but through the marketplace it is also possible for customers and partners to build their own adapters. Each connection can be used for inbound and outbound communication; the majority of available adapters support both ways. A connection commonly describes the type of application, the location of the API definition or endpoint, and the credentials needed to connect securely with the application. Connections can be divided into four categories: SaaS adapters, Technology adapters, Social adapters, and on-premises adapters:

Oracle Integration Cloud Service offers a large collection of adapters to connect to SaaS applications natively. Software as a Service (SaaS), also called on-demand software, is software that is offered as a hosted service. SaaS applications are typically accessed by users using a browser, but most offer APIs to access and modify the data, or to send events to the SaaS application to perform a task. For the most popular SaaS vendors, Oracle supplies Cloud adapters that can be used by Integration Cloud Service. New adapters are released on monthly cycles.
For the most popular SaaS vendors, Oracle supplies Cloud adapters that can be used by Integration Cloud Service. New adapters are released on monthly cycles. The SaaS adapters can also be developed by customers, partners, and even you. Most SaaS applications that Oracle offers as the vendor have their own adapter in Integration Cloud Service, such as the ERP, HCM, and the Sales Cloud: Besides that, ICS supports solutions such as Salesforce, Eloqua, and NetSuite out-of-the-box. Because the SaaS application offers their API, you might wonder why a special adapter is necessary. The adapters offer a much more simplified experience through a powerful wizard. For example, the Oracle RightNow and Salesforce adapters support the automatic provisioning of Business Objects in the wizard. These adapters also handle security and provide standard error handling capabilities. In Chapter 4, Integrations between SaaS Applications, we will integrate applications with some of these SaaS applications. Not all applications we see on a daily basis are SaaS applications with prebuilt adapters. Industry standards such as SOAP and REST are used by the majority of APIs. SOAP is mostly used for system-to-system integrations, whereas the lightweight REST protocol is used to provide access to mobile applications. For both protocols Oracle Integration Cloud Service provides an adapter. Originally an acronym for Simple Object Access Protocol, SOAP is an industry standard protocol originated around 2000. This specification is used in the implementation of web services and describes exchanging structured information. The SOAP protocol uses XML as the markup language for its message format. SOAP itself is not a transport protocol, but relies on application layer protocols, such as HTTP and JMS. Web services that are built to communicate using the SOAP protocol use the Web Service Description Language (WSDL). This is an XML-based interface and describes the functionality a web service offers. 
The acronym WSDL also describes the physical definition file. There are two versions of WSDL, 1.1 and 2.0, but version 1.1 is still the most commonly used. The WSDL structure consists of five building blocks: types, messages, portTypes, bindings, and services:

The first three describe the abstract definition and are separated from the latter two, which describe the concrete use, allowing reuse across multiple transports. Concrete here means that a specific instance of a service is referenced (that is, you have a URI).

Types are nothing more than placeholders that describe the data. An embedded or externally referenced XML Schema definition is used to describe the message structure.

Messages are an abstraction of the request and/or response messages used by an operation. The information needed to perform the operation is described by the message. It typically refers to an element in the embedded or referenced XML Schema definition.

PortTypes, or interfaces, define a web service with a set of operations it can perform, and direct which messages are used to perform each operation. An operation can have only a request message (one-way), only a response message (call-back), or both request and response messages (synchronous).

Bindings form the first part of a concrete WSDL. A binding specifies the interface with its operations and binds a portType to a specific protocol (typically SOAP over HTTP).

Services expose a set of bindings to the web-based protocols. The port, or endpoint, typically represents where consumers can reach the web service:
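To make the five building blocks tangible, a minimal WSDL 1.1 skeleton is shown below. This is a hypothetical example: the service, namespace, and element names are invented for illustration and are not taken from the book:

```xml
<definitions name="FlightService"
             targetNamespace="http://example.com/flight"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:tns="http://example.com/flight"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <!-- Types: placeholder for the embedded XML Schema -->
  <types>
    <xsd:schema targetNamespace="http://example.com/flight">
      <xsd:element name="AirlineRequest" type="xsd:string"/>
      <xsd:element name="AirlineResponse" type="xsd:string"/>
    </xsd:schema>
  </types>

  <!-- Messages: abstractions of the request and response -->
  <message name="GetAirlineInput">
    <part name="body" element="tns:AirlineRequest"/>
  </message>
  <message name="GetAirlineOutput">
    <part name="body" element="tns:AirlineResponse"/>
  </message>

  <!-- PortType: the abstract interface, one synchronous operation -->
  <portType name="AirlinePortType">
    <operation name="GetAirline">
      <input message="tns:GetAirlineInput"/>
      <output message="tns:GetAirlineOutput"/>
    </operation>
  </portType>

  <!-- Binding: ties the portType to SOAP over HTTP -->
  <binding name="AirlineBinding" type="tns:AirlinePortType">
    <soap:binding style="document"
                  transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="GetAirline">
      <soap:operation soapAction="http://example.com/flight/GetAirline"/>
      <input><soap:body use="literal"/></input>
      <output><soap:body use="literal"/></output>
    </operation>
  </binding>

  <!-- Service: the concrete endpoint consumers can reach -->
  <service name="AirlineService">
    <port name="AirlinePort" binding="tns:AirlineBinding">
      <soap:address location="http://example.com/flight/airline"/>
    </port>
  </service>
</definitions>
```

The top half (types, messages, portType) is the abstract definition; the bottom half (binding, service) is the concrete part that could be swapped out for a different transport without touching the abstract half.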
When the SOAP adapter is used in ICS, you get a wizard that lets you pick one of the available operations (or it selects the operation for you if there is only one).

Originally an acronym for Representational State Transfer, REST is a software architectural style also introduced in 2000. It consists of a coordinated set of architectural constraints within distributed systems. The REST architectural style introduces certain architectural properties, such as performance, scalability, simplicity, addressability, portability, and reliability. Because it is a style, there are some variations going around. Web services or APIs that apply REST are called RESTful APIs. They are simply a collection of URIs and HTTP-based calls that use JavaScript Object Notation (JSON) or XML to transmit data objects, many of which will contain relational links. JSON is a human-readable text format consisting of attribute/value pairs.

RESTful APIs are usually defined with the following aspects:

- The principle of addressability is covered by the URIs, which share a common base URI for all resources. Each resource has its own address, also known as a URI. A resource exposes a unique piece of information that the server can provide.
- For sending the data objects, an Internet media type, often JSON, is used.
- The API uses standard HTTP methods, for example GET, PUT, POST, or DELETE.
- References to state and to related resources are expressed with hypertext links.

RESTful APIs use common resource naming. When deciding which resources are available within your system, name the resources as nouns as opposed to verbs or actions. They should refer to a thing instead of an action. The name and structure convey meaning to those consuming the API. In this example, we use our Flight API hosted on Apiary.
The base URI for this API is the root under which all its resources live. To insert (create) an airline in our flight system, we can POST to the airlines resource. To retrieve the details of the airline with ICAO identifier KLM, we can perform a GET on that airline's URI. The same URI would be used for PUT and DELETE, to update and delete, respectively. What about creating a new destination an airline travels to? One option is to POST to the airline's resource URI, but it's arguably outside the context of an airline. Because you want to create a destination for a flight schedule, the context should be on the schedule. It can be argued that POSTing the message to the schedule's URI better clarifies the resource: now you know that the destination is added to the airline. With this in mind, there is no limit on the depth of the URI hierarchy, as long as it is in the context of the parent resource. In Oracle Integration Cloud Service you can create connections based on the base URI. When the REST adapter is used in ICS you get a wizard that lets you create the resource that you want to expose. Only one resource can be implemented per integration. In Chapter 2, Integrating Our First Two Applications, both SOAP (inbound) and REST (outbound) adapters are used for our first integration. Besides the web service standards of SOAP and REST, there is also a technology adapter for FTP. Originally an acronym for File Transfer Protocol, FTP is a protocol used to rapidly transfer files between servers that originated around 1985. The FTP adapter enables you to transfer files from a source or to a target FTP server in an integration in ICS. With this adapter you can transfer (write) files to any server that is publicly accessible through the Internet. Files can be written in either binary or ASCII format. The adapter enables you to create integrations that read a file from a source FTP server and write it to a target FTP server. In this scenario, the integration also supports scheduling, which enables you to define the time and frequency at which the transfer occurs.
The adapter supports some welcome features, such as the possibility to natively translate file content, and to encrypt and decrypt outbound files using Pretty Good Privacy (PGP) cryptography. With the first feature you can, for example, translate a file with comma-separated values to XML. The adapter not only supports plain FTP, but also FTP over SSL and Secure FTP (SFTP). FTP over SSL requires the upload of a certificate store. SFTP accepts an optional host key to ensure that you connect to the correct SFTP server and that your connection is not compromised. We will use the FTP adapter when managing file transfers in Chapter 9, Managed File Transfers with Scheduling. Of course, not all of our applications run in the cloud; for most of us the cloud is still rather new, and most of our mission-critical systems run on-premises. Oracle Integration Cloud Service provides adapters and supporting software to create a hybrid cloud solution. A hybrid cloud is a cloud computing environment that combines on-premises, private (third-party), and public cloud services. Between the platforms we usually find an orchestration layer. For example, an enterprise has an on-premises finance system to host critical and sensitive workloads, but wants to expose this system to third-party users. Integration Cloud Service provides adapters and the supporting software to simplify integration between cloud and on-premises applications in a secure and scalable way. The supported adapters include technology adapters, for example, Database, File, and JMS, and adapters for Oracle E-Business Suite, Oracle Siebel, SAP, and so on. For example, with the Database adapter you can call a stored procedure in your on-premises database or execute a pure SQL statement. The File adapter enables file transfers between two servers that cannot talk directly to each other. Java Message Service (JMS) enables integrations with existing JEE applications.
An adapter does not indicate whether it is for on-premises use only, or whether it can be used with an on-premises endpoint. When creating a new connection based on the adapter, it will ask for an agent to assign to the connection. ICS includes two agents: the Connectivity Agent and the Execution Agent. An agent is a piece of software running on-premises that provides a secure bridge between the Oracle Cloud and on-premises. We will shortly describe both agents, but have dedicated Chapter 11, Calling an On-Premises API, to explaining them in more detail. The agent is basically a gateway between cloud and on-premises, and it eliminates common security and complexity issues previously associated with integrating on-premises applications from outside the firewall. The agent can connect with on-premises applications, such as a database or ERP application, using the existing JCA adapter framework. To understand this concept we first look at the agent's architecture. The agent is developed with a few architectural guidelines in mind. The most important guideline is that it shouldn't be required to open inbound ports to communicate with on-premises applications. This means that there isn't a need to create firewall rules to provide access, and because of this no open ports can be abused. The second guideline is that it is not required to expose a private SOAP-based web service using a (reverse) proxy, for example, an API gateway or Oracle HTTP Server (OHS). The third is that no on-premises assets have to be installed in the DeMilitarized Zone (DMZ); the agent is installed in the local network where the backend systems are accessible. The fourth guideline is that it is not required to have an existing J2EE container to deploy the agent on. The fifth and last guideline is that it is not required to have IT personnel monitor the on-premises component; monitoring of the component is part of the monitoring UI within ICS.
The agent consists of two components: a cloud agent installed on ICS and a client agent installed on-premises. The Messaging Cloud Service is used by the agent for its message exchange, and it only allows connections established from the Oracle Cloud; it disallows explicit inbound connections from other parties. The agent uses the existing JCA adapter framework to invoke on-premises endpoints, for example, Database, File, and ERP (Siebel/SAP). Oracle Integration Cloud Service supports multiple agents for load distribution and high availability. For example, it is possible to group multiple agents, but place each agent on a different local host/machine. Agents can be grouped on a functional, process, or organization level. The Connectivity Agent can be downloaded from ICS and installed on demand on-premises. What you get at the end is a fully installed WebLogic server with a domain and a managed server running the necessary agent clients and JCA adapters. A Message Exchange Pattern (MEP) describes the pattern of messages required by a communication protocol for the exchange of messages between nodes. A MEP defines the sequence of messages, specifying the order, direction, and cardinality of those messages in a service call or service operation. Two main MEPs are synchronous (request-response) and asynchronous (fire and forget).
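The two MEPs can be contrasted in a short sketch. Plain Python stands in for the messaging infrastructure here; none of this is ICS or JMS API, it only illustrates the blocking versus fire-and-forget semantics:

```python
import queue
import threading

def handle(msg):
    # Stand-in for the actual processing done by the receiving node.
    return f"processed:{msg}"

# Synchronous (request-response): the caller blocks until the reply arrives.
def sync_call(msg):
    return handle(msg)

# Asynchronous (fire and forget): the caller enqueues the message and moves
# on; a worker consumes and processes it later.
outbox = queue.Queue()
results = []

def worker():
    while True:
        msg = outbox.get()
        if msg is None:
            break
        results.append(handle(msg))
        outbox.task_done()

t = threading.Thread(target=worker)
t.start()
outbox.put("order-42")   # fire...
outbox.join()            # ...and, for this demo only, wait so we can inspect
outbox.put(None)         # shut the worker down
t.join()

print(sync_call("order-1"))
print(results)
```

In a real fire-and-forget exchange the caller would of course not wait on the queue; the join here exists only so the demo can show the result.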
The agent conforms to a few message exchange patterns when communicating with on-premises applications from the cloud: a synchronous request from the cloud to on-premises to retrieve data (for example, getting the status of an order from EBS in real time); a cloud event triggering an asynchronous message exchange with on-premises (for example, creation of an incident in RightNow causes creation of service requests in EBS); an on-premises event triggering an asynchronous message exchange with the cloud (for example, a service request update event results in asynchronous synchronization with RightNow); and synchronized data extracts between on-premises and cloud applications (for example, EBS-based customer data synchronized with CRM). The Execution Agent is a fully fledged Integration Cloud Service that you can install on-premises. When you subscribe to ICS, you also have the option to install an on-premises version in your local environment (that is, in your DMZ). This enables you to use ICS as a proxy server that sits between your mission-critical systems protected by a firewall and the cloud version. More on this agent in Chapter 11, Calling an On-Premises API. After installing the on-premises version of ICS you can create users and assign roles to these users. This is done on the provided Users page of the on-premises ICS; this page is not available in the cloud version. You also have access to the WebLogic Console, the Service Console, and Fusion Middleware Control. This means that you can take a peek at the deployed applications, the Service Bus resources, and the log files. When something goes wrong you can debug the problem without the help of Oracle Support; this is not possible in the cloud version. Another difference, or even a restriction, between the cloud and the on-premises version is that you aren't able to configure Connectivity Agents. With this restriction in place, the adapters for connecting to the Oracle Database, MySQL Database, and SAP are not supported.
There is a default agent group available in the cloud version of ICS. All installed Execution Agents are registered under this group, which restricts assigning connections to a specific Execution Agent. We will explore both the Connectivity and the Execution Agent in Chapter 11, Calling an On-Premises API. Oracle Integration Cloud Service comes with a dozen other kinds of adapters. These applications can also be categorized as SaaS applications; they are not enterprise applications in the traditional sense, but offer Internet services. I'm talking about services where people can be social or collaborate with each other. Integration Cloud Service supports social apps, such as Facebook, LinkedIn, and Twitter, to post a status update. The supported productivity apps include Google and Microsoft Calendar and Mail, Google Task, MailChimp, and SurveyMonkey; this list is updated on a monthly cycle. If you're looking for a full reference of all the supported adapters, please take a look at the documentation. In Oracle Integration Cloud Service, the core function is creating integrations between applications. Integrations use the created connections to connect to our applications, and define how information is shared between these applications. Integrations can be created from scratch, but can also be imported from an existing environment (that is, using an exported integration). As explained earlier, an integration always has a source and a target connection. The source and the target define the type of integration pattern that is implemented. There are four types of integrations: point-to-point (that is, data mapping), publish/subscribe messaging, content-based routing, and Orchestration. For these types of integration you can start with a blank canvas, but Oracle Integration Cloud Service provides templates to quick-start your integration.
A point-to-point integration, also known as a Basic Map Data integration in ICS, is the least complex integration pattern. We use a point-to-point integration when we have to send a message to a single receiver (that is, a 1:1 relationship). For example, a customer is updated in the HCM Cloud, which sends out an automatic message to the ERP to also update the customer details. Another example is a third-party channel calling an API, which returns flight data from an on-premises database. This integration pattern is the most common pattern advertised for Oracle Integration Cloud Service. You create an integration and use your defined connections for the source and target. You also define data mappings between the source and target and vice versa. As mentioned previously, the source and target operations that are implemented determine the kind of integration capabilities. A one-way source can only send data to a one-way target, and a synchronous source can only send data to a synchronous target; this integration does not support mixing one-way with synchronous operations. Although this pattern does not support more complex integration flows, Integration Cloud Service provides the possibility to enrich the data by calling another service between the source and the target, or vice versa. For example, the target requires more data for its input than the source provides. By using enrichment services you are able to invoke a service operation that returns the required data. The data from the source and the data from the enrichment service can then be used for the request mapping to the target. In a one-way integration the source data can be enriched by calling an additional service; in a synchronous integration both source and target data can be enriched by calling additional services. A publish-subscribe integration, in ICS split into two separate integrations, implements a messaging pattern where messages are not directly sent to a specific receiver.
The senders of messages are called publishers and the receivers of messages are called subscribers. Publishers send messages, which are published to an intermediary message broker or event bus, without knowledge of which subscribers, if any, are interested in the message. On the other hand, subscribers register subscriptions on messages they are interested in, without knowledge of which publishers, if any, are sending these messages. The event bus performs a store and forward to route messages from publishers to subscribers. This pattern generally uses the message queue paradigm and provides greater scalability and a more dynamic network topology. Most messaging systems support this pattern in their API using Java Message Service (JMS). The publish-subscribe pattern in ICS can be implemented using its own messaging service, by using the Oracle Messaging Cloud Service (OMCS) adapter, or by using the JMS adapter for on-premises connectivity. All implementations use JMS as their underlying protocol. The ICS Messaging Service also uses OMCS for its delivery, but these messages can't be accessed outside of ICS. The two major advantages of using this kind of pattern are scalability and loose coupling. This integration pattern provides the opportunity for better scalability of the message flow. Messages can be handled in parallel instead of being processed one after the other. If the number of messages exceeds what the systems can process, a new system or subscriber can be added. A subscriber can also choose to temporarily stop receiving messages in order to process already received messages. When using the publish-subscribe pattern, publishers and subscribers are loosely coupled and generally don't know of each other's existence. Both can operate normally regardless of the state of the other. In a point-to-point integration, by contrast, the systems are tightly coupled and can't process messages if one of the services is not running.
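The decoupling described above can be sketched with a minimal in-memory event bus. This is illustrative only; ICS and OMCS use JMS underneath, and the topic name and payload here are invented:

```python
class EventBus:
    """Minimal publish-subscribe broker: publishers and subscribers only
    know the bus and a topic name, never each other."""

    def __init__(self):
        self.subscribers = {}   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Store-and-forward in miniature: deliver to every registered
        # subscriber; zero subscribers is fine, the publisher never notices.
        for callback in self.subscribers.get(topic, []):
            callback(message)

bus = EventBus()
received_by_erp, received_by_crm = [], []
bus.subscribe("customer.updated", received_by_erp.append)
bus.subscribe("customer.updated", received_by_crm.append)

# The publisher only knows the topic, not who (if anyone) is listening.
bus.publish("customer.updated", {"id": 42, "name": "ACME"})

print(received_by_erp)
print(received_by_crm)
```

Note how adding a third subscriber would require no change to the publisher, which is exactly the scalability argument made above.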
When using ICS it is possible to even decouple the location of the publishers and subscribers by using OMCS or the on-premises JMS adapter. The advantage of loose coupling also introduces side effects, which can lead to serious problems, so we should also consider issues that can occur during the delivery of messages. Because the publishers and subscribers do not need to know of each other's existence, it is important that the data is well defined. This can lead to inflexibility of the integration, because in order to change the data structure you need to notify the subscribers, which can make it difficult for the publishers to refactor their code. In ICS this should not be a problem, because it can be fixed by versioning the integration and its data structure. Because a pub-sub system is decoupled, it should be carefully designed to ensure message delivery. If it is important for the publisher to know a message is delivered to the subscriber, the receiving system can send a confirmation back to the publisher. If, for example, a message cannot be delivered, it is important to log the error and notify someone to resend the failed messages. When designing an integration, keep in mind that delivery management is important to implement. An integration with content-based routing is a more advanced version of the point-to-point integration. Content-based routing essentially means that the message is routed based on a value in the message payload, rather than to an explicitly specified destination. A use case for this type of routing is the possibility to retrieve data from a different application based on, for example, a country code. Typically, a point-to-point integration receives a message and sends the message to an endpoint. This endpoint identifies the service or client that will eventually process the message. However, what if the data is not available (anymore) at that destination?
For example, what if the data specific to the caller's country is available at a different site and the service cannot return the requested data? It is also possible that one destination can only process specific types of messages and no longer supports the original functions. A solution for this is content-based routing. Content-based routing is built on two components: services and routers. Services are the consumers of the messages and, like in all the other integrations, they decide which messages they are interested in. Routers, usually one per integration, route messages. When a message is received, the message is inspected and a set of rules is applied to determine the destination interested in the message content. With this pattern we can provide a high degree of flexibility and can easily adapt to changes, for example, adding a destination. The sender also does not need to know everything about where the message is going to end up. Each destination can have its own implementation, which means the router also needs to transform the message where necessary. The following diagram illustrates a simple example of the content-based routing architecture. It shows how a message is sent to the endpoint of Service A. Service A receives the message and the router routes the message to Service B or Service C, based on the content of the message. With Integration Cloud Service it is as easy as adding a filter on the request operation of the source connection. We can define multiple rules in an if-then-else construction, and in every rule we can filter on multiple fields within the request message. The two major advantages of using this kind of pattern are efficiency and sophisticated decisions. Content-based routing is very efficient, because the decision to route the message to one consumer or the other is kept away from the provider; the decision is based on the content of the request.
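The router-plus-rules idea, including the if-then-else filters just mentioned, can be sketched as follows. The field name and the two destination services are hypothetical:

```python
def service_b(msg):
    return f"B handled {msg['country']}"

def service_c(msg):
    return f"C handled {msg['country']}"

def route(message):
    """Content-based router: inspect the payload and pick a destination.
    Rules are evaluated in order, like an if-then-else filter chain."""
    rules = [
        (lambda m: m["country"] == "NL", service_b),
        (lambda m: m["country"] == "UK", service_c),
    ]
    for condition, destination in rules:
        if condition(message):
            return destination(message)
    raise ValueError("no routing rule matched")

print(route({"country": "NL"}))
print(route({"country": "UK"}))
```

Adding a new destination means adding a rule to the router, which is the maintenance cost (and the flexibility) discussed in the surrounding text.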
There is also no risk of more systems than necessary consuming the message (when compared to the pub/sub integration), because we route to dedicated consumers. A content-based router can also become highly sophisticated: we can have multiple routing rules, where one rule can filter on multiple fields of the request message. More often than not, content-based routing is easy to incorporate into a process pipeline. The disadvantages arise when the number of consumers grows. When we introduce an additional consumer it also means changing the router (compared to a pub-sub integration, which requires no change), and we also need to add an extra routing decision. The usage of Orchestration comes into the picture when we discuss integrations in the context of service-oriented architecture. Service Orchestration is usually an automated process where two or more applications and/or services are integrated. We have discussed point-to-point integrations and have seen that in many use cases this pattern fulfills our requirements. However, point-to-point integrations can lead to complex dependencies if we need to integrate multiple applications and/or services. The downside of this is that it is hard to manage, maintain, and possibly monitor. An integration that implements Orchestration provides an approach that aggregates services into application processes. Orchestration has capabilities for message routing, transformation, keeping the process state for reliability, and security through policies. The most important capability of Orchestration is centralized management, for example, of resources and monitoring. Orchestration, in a sense, is a controller that deals with the coordination of (a)synchronous interactions and controls the flow. Usually, Business Process Execution Language (BPEL) is used to write the code that is executed. This is also the case with Integration Cloud Service; however, a visual representation is used to define the process.
To get a practical sense of service Orchestration, let us take a look at an example of a mortgage application. A mortgage broker wants to request a mortgage on behalf of a customer. The application uses an API that calls an HTTP endpoint, which sends the initial request to the orchestrator through Service A. The orchestrator enriches the customer data by calling Service B. Based on the age of the customer, Service C or Service D is called to find special mortgage quotes, but Service E is called for everyone. Service E retrieves the credit scores belonging to that customer. The data from both service calls is merged and sent to Service F to return the best quote for that customer. The two major advantages of using this kind of pattern are loose coupling and enabling automation. Application and business services can be designed to be process-agnostic and reusable. The business process is responsible for the management and coordination of state, which frees composite services from a number of design constraints. Additionally, the logic of the business process is centralized in one location, instead of being distributed across, and embedded within, multiple services. In day-to-day operations, Orchestration enables us to automate processes, such as handling insurance claims that come from medical offices. This is one process where human approval can be removed by programming parameters to accept claims. Orchestration can also automate error handling; for example, a message can be retried automatically when an endpoint is unavailable, or a notification can be sent to a human when it needs to be recovered manually. There is one major disadvantage, and that is debugging. There can be complex processes with multiple service interactions, nested loops, and so on. In these situations, it's not easy to debug your process if something goes wrong, because we need to know what message initiated the flow.
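The mortgage flow above can be sketched as an orchestrated process. The service stubs and their return values are hypothetical stand-ins for the real endpoints; only the control flow mirrors the example:

```python
# Stub services; in reality these would be (a)synchronous endpoint calls.
def enrich_customer(req):          # Service B: enrich the customer data
    return {**req, "age": 35}

def quotes_under_40(customer):     # Service C: quotes for younger customers
    return ["starter-mortgage"]

def quotes_40_and_over(customer):  # Service D: quotes for older customers
    return ["senior-mortgage"]

def credit_scores(customer):       # Service E: called for everyone
    return {"score": 720}

def best_quote(quotes, credit):    # Service F: pick the best quote
    return {"quote": quotes[0], **credit}

def orchestrate(request):
    """The orchestrator coordinates the calls and keeps the state;
    the individual services stay process-agnostic."""
    customer = enrich_customer(request)
    if customer["age"] < 40:                   # age-based branch (C or D)
        quotes = quotes_under_40(customer)
    else:
        quotes = quotes_40_and_over(customer)
    credit = credit_scores(customer)           # Service E, unconditionally
    return best_quote(quotes, credit)          # merge both results

print(orchestrate({"customer_id": 1}))
```

Notice that none of the stub services knows about any other: the branching, the merge, and the ordering live only in the orchestrator, which is the centralization advantage described above.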
We will touch on this topic in Chapter 10, Advanced Orchestration with Branching and Asynchronous Flows. When talking about integrations between applications and/or services, we can't escape the fact that messages need to be transformed. Most of the time the applications and/or services do not talk the same language (that is, message structure or even data types, for example, milliseconds from epoch versus date time). Besides transforming the structure, we sometimes need to convert values (that is, domain value mapping). Oracle Integration Cloud Service uses XML as the message format for its data objects and messages. To transform XML-based messages from one structure to another, there are two main open standards that can be used to manipulate data: XQuery and XSLT. XQuery is, like the name suggests, a query language, but besides that it is also a functional programming language. It queries and transforms collections of data, both structured and unstructured. It can transform messages to XML, but also to text and other data formats, for example, JSON and HTML. Using XQuery we can extract and manipulate XML documents, or any source that can be viewed as XML, such as office documents or database schemas. It has built-in support for XPath expressions; with XPath we can address specific nodes within the XML document. XQuery is a SQL-like query language that supplements XPath and uses FLWOR expressions for performing joins. A FLWOR expression is named after the five parts it is constructed of: FOR, LET, WHERE, ORDER BY, and RETURN. XQuery has the capability to transform, and it also allows us to create new XML documents. Where normally the elements and attributes are known in advance, it can use expressions to construct dynamic nodes, including conditional expressions, list expressions, quantified expressions, and so on.
A simple example of XQuery is shown as follows:

<html><body>
{
  let $book := doc("bookstore.xml")/book
  for $ch in $book/chapter
  where $ch/num < 10
  order by $ch/pagecount descending
  return <h2>{ $ch/title/text() }</h2>
}
</body></html>

Originally an acronym for Extensible Stylesheet Language Transformations, XSLT is, as the name suggests, a language for transforming XML documents; it is basically a style sheet to transform XML documents into other XML documents, or into other data formats such as (X)HTML, XSL Formatting Objects (for example, for generating PDFs), RSS, and non-XML formats (such as CSV). XSLT processing takes an XML source document and an XSLT style sheet, and produces a new output document. The XSLT style sheet contains static XML that is used to construct the output structure. It uses XPath expressions, just like XQuery, to get data from the XML source. The XSLT language includes logic (if-else) and operators (for-each) to process the XML source. An XSLT style sheet is an XML document containing the root node <xsl:stylesheet>, which is declared with the xsl prefix and is mandatory. An XSLT style sheet contains one or more <xsl:template> elements and other XML elements defining transformation rules:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html>
      <body>
        <h2>Book Collection</h2>
        <table border="1">
          <tr>
            <th>Title</th><th>Author</th>
          </tr>
          <xsl:for-each select="bookstore/book">
            <tr>
              <td><xsl:value-of select="title"/></td>
              <td><xsl:value-of select="author"/></td>
            </tr>
          </xsl:for-each>
        </table>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>

Integration Cloud Service currently only supports XSLT, because it is a transformation/mapping language that can easily be represented visually. Let's discuss XSLT in more detail. XSLT, like XQuery, uses XPath expressions to address specific nodes within the source document and to perform calculations. XSLT can use the range of functions that XPath provides to further augment itself. Originally an acronym for XML Path Language, XPath is used for selecting nodes from an XML document.
It models an XML document as a tree of nodes. XPath is named after its use of a path notation for navigating through the hierarchical structure of an XML document. XPath uses a compact, non-XML syntax to form expressions for use in Uniform Resource Identifier (URI) and XML attribute values. The following figure depicts the different types of nodes, as seen by XPath, in an XML document. The elements in the diagram are: a document root, which is a virtual root that is the parent of the entire XML document, containing the XML declaration, the root element and its children, and comments and processing instructions at the beginning and the end of the document; a root element (for example, the <bookstores> element); element nodes (for example, the <bookstore>, <store_name>, and <store_url> elements; the root element is also an element node); attribute nodes (for example, the num="1" node); and text nodes (for example, the node containing the text Packt Publishing). In the XPath model, the document root is not the same as the root element node. XPath defines a way to compute a string value for each type of node, some of which have names. Because XPath fully supports the XML Namespaces Recommendation, a node name is modeled as a pair known as the expanded name, which consists of a local part and a namespace URI, which may be null. XPath allows various kinds of expressions to be nested with full generality (for example, using wildcards). It is a strongly typed language, such that the operands of various expressions, operators, and functions must conform to designated types. XPath is a case-sensitive expression language.
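The node tree can also be explored programmatically. Python's xml.etree.ElementTree supports a limited subset of XPath, enough for simple path and predicate expressions; the document below mirrors the bookstores example, with a second (invented) bookstore added so the predicates have something to distinguish:

```python
import xml.etree.ElementTree as ET

doc = """<bookstores>
  <bookstore num="1">
    <store_name>Packt Publishing</store_name>
    <store_url>https://www.packtpub.com</store_url>
  </bookstore>
  <bookstore num="2">
    <store_name>Example Books</store_name>
  </bookstore>
</bookstores>"""

root = ET.fromstring(doc)  # root is the <bookstores> element node

# Equivalent to /bookstores/bookstore[@num="1"]/store_name
first = root.find("bookstore[@num='1']/store_name")
print(first.text)

# Equivalent to /bookstores/bookstore[last()]/store_name
last = root.find("bookstore[last()]/store_name")
print(last.text)

# Equivalent to count(//bookstore[store_name="Packt Publishing"]) = 1
matches = root.findall(".//bookstore[store_name='Packt Publishing']")
print(len(matches) == 1)
```

ElementTree has no text() or count() functions, so the sketch reads .text and uses len() instead; a full XPath 1.0 engine would accept the expressions exactly as written in the comments.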
The following are some examples of XPath expressions. Get the store_name node of the bookstore identified with number 1:

/bookstores/bookstore[@num="1"]/store_name

Get the text value of the store_name node of the last bookstore:

/bookstores/bookstore[last()]/store_name/text()

Check if a bookstore exists in the source document (the prefix // means a wildcard search):

count(//bookstore[store_name="Packt Publishing"]) = 1

With XSLT it is possible to create really dynamic output documents using XSLT constructs together with XPath expressions. XSLT constructs are specific elements that are defined by the XSLT standard. XSLT defines, among others, the following construct elements: output, template, value-of, for-each, if, and choose.

The xsl:output element
The output element specifies the output format of the result tree and it must be a direct child of the style sheet root element. The element has a few attributes. The method attribute defines the output method, such as xml, html, or text, and the media-type attribute defines the target media type, for example, application/json:

<xsl:output method="text" media-type="application/json"/>

The xsl:template element
The template element defines a template rule that matches a specific node in the source document using an XPath pattern value. The body contains the formatting instructions to produce a result tree:

<xsl:template match="pattern">
  output-expressions
</xsl:template>

The match pattern determines the context node within the template. The most common match is on the root of the source document tree:

<xsl:template match="/">
  A simple text string
</xsl:template>

A template can also be given a name and called by that name from within another template, passing a parameter:

<xsl:template name="named-template">
  <xsl:param name="param1"/>
  ...
</xsl:template>

<xsl:template match="/">
  <xsl:call-template name="named-template">
    <xsl:with-param name="param1" select="'value'"/>
  </xsl:call-template>
</xsl:template>

The xsl:value-of element
The value-of element is used to insert the text value of the expression.
This element defines a select attribute, which contains an expression, and is used inside a template element:

<xsl:template match="/">
  Name: <xsl:value-of select="person/name"/>
</xsl:template>

The xsl:for-each element
The for-each element is used to loop over node sets. This element defines a select attribute, which instructs the XSLT processor to loop over the node set returned by the given XPath expression:

<xsl:template match="/">
  <xsl:for-each select="bookstore/book">
    <p><xsl:value-of select="title"/></p>
  </xsl:for-each>
</xsl:template>

Inside the for-each element, the current node in the set is the context. The position() function returns the index in the loop, that is, the iteration counter. It is also possible to sort the order in which the nodes are looped over using the xsl:sort element. It defines a select attribute that contains an expression whose value is sorted on:

<xsl:for-each select="bookstore/book">
  <xsl:sort select="title"/>
  <p><xsl:value-of select="title"/></p>
</xsl:for-each>

To apply multiple criteria, we can use several xsl:sort elements after each other.

The xsl:if element
Sometimes a section of the XSLT tree should only be processed under certain conditions. With the if element we can process content conditionally. The if element defines a test attribute, which contains an XPath expression that should evaluate to a Boolean value:

<xsl:for-each select="bookstore/book">
  <xsl:if test="price > 10">
    <p><xsl:value-of select="title"/></p>
  </xsl:if>
</xsl:for-each>

There is no else statement; for this, XSLT defines the xsl:choose element.

The xsl:choose element
XSLT supports multiple mutually exclusive branches with the choose element. The xsl:choose element contains xsl:when tags that define parallel branches, one of which is executed. The xsl:otherwise tag can be used to define a default branch.
We use the choose element for if-then-else constructions:

<xsl:for-each select="bookstore/book">
  <xsl:choose>
    <xsl:when test="@category = 'new'">
      <xsl:text>New Arrivals</xsl:text>
    </xsl:when>
    <xsl:when test="@category = 'classic'">
      <xsl:text>Our Classics</xsl:text>
    </xsl:when>
    <xsl:otherwise>
      <xsl:text>Top Picks</xsl:text>
    </xsl:otherwise>
  </xsl:choose>
</xsl:for-each>

When implementing integrations with ICS we will use XSLT for creating mappings between source and target connections. The major advantage of ICS is that we can build these mappings with a visual editor. A lookup, also known as a Domain Value Map (DVM), associates the values used by one application for a specific field with the values used by other applications for the same field. Lookups enable us to map from a vocabulary used in one domain to a vocabulary used in a different domain. For example, one domain may represent a country with a long name (Netherlands), while another domain may represent a country with a short name (NL). In such cases, you can directly map the values by using domain value maps. A lookup also supports qualifiers. A mapping may not be valid unless it is qualified with additional information. For example, a DVM containing a code-to-name mapping may have multiple mappings from ABE to Aberdeen, because Aberdeen is a city in both the UK and the USA. Therefore, this mapping requires a qualifier (UK or USA) to determine when the mapping is valid. We can also specify multiple qualifiers for a lookup using a qualifier order; using this, ICS can find the best match during a lookup at runtime. Hierarchical lookups are supported: if you specify a qualifier value during a lookup and no exact match is found, then the lookup mechanism tries to find the next match using the following qualifier. It proceeds until a match is found, or until no match is found even with all qualifiers set to an empty value. Finally, we can map one value to multiple values in a domain value map.
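The qualifier fallback can be sketched as a small in-memory mapping table. This is a hypothetical illustration of the behavior only, not the ICS lookup API, and the entries are invented for the sketch:

```python
# A DVM sketched as a dict keyed by (code, qualifier). An empty qualifier
# ("") marks the unqualified entry used as the fallback.
dvm = {
    ("ABE", "UK"): "Aberdeen, UK",
    ("ABE", "USA"): "Aberdeen, USA",
    ("NL", ""): "Netherlands",
}

def lookup(code, qualifier=""):
    """Try the exact qualifier first, then fall back to the unqualified
    entry, mimicking a hierarchical lookup."""
    for q in (qualifier, ""):
        if (code, q) in dvm:
            return dvm[(code, q)]
    return None

print(lookup("ABE", "UK"))   # exact qualified match
print(lookup("NL", "USA"))   # no USA entry, falls back to unqualified
print(lookup("XX"))          # no match at all
```

A real qualifier order would walk a configured list of qualifiers instead of the single fallback shown here.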
For example, a DVM for subscription payments can contain a mapping to three values, such as discount percentage, discount period, and subscription period. In this chapter, we addressed the concepts and terminology surrounding Oracle Integration Cloud Service and standards such as XML, XSLT, and XQuery used by ICS. With ICS we can create integrations between cloud services and between cloud and on-premises applications. An integration consists of one trigger connection and one invoke connection, and it can call multiple enrichment services. Between the trigger, enrichment services, and invoke, ICS uses XSLT mappings to transform the message structure. We looked at the ideas and terminology around how ICS connects to the applications it can integrate with. ICS comes with a large set of out-of-the-box cloud adapters to connect to these applications, and in upcoming chapters we will explore these connections in depth. Integrations use the created connections to connect to our applications. Integrations define how information is shared between these applications, for example, the exposed operations, message structures, and so on. We discussed the four types of integrations ICS supports and their advantages and disadvantages. When integrating applications and/or services we can't escape the fact that messages need to be transformed, because the applications don't talk the same language (that is, message structure or even data types, for example, milliseconds from epoch versus date time). ICS uses the open standard XSLT for manipulating data. We discussed the language and its structure. Besides transforming the data we sometimes need to convert values (that is, domain value mapping). ICS supports lookups that we can use to convert a value provided by the source to a format the target understands. In the next chapter, we will explain the steps to create a basic integration between two cloud services based on a SOAP and a REST interface.
https://www.packtpub.com/product/implementing-oracle-integration-cloud-service/9781786460721
The Ultimate Guide to using the Python regex module

One of the main tasks while working with text data is to create a lot of text-based features. You might want to find certain patterns in the text: emails, if present, as well as phone numbers in a large text. How do you normally go about it? A simple enough way is to do something like:

target = [';', '.', ',', '–']
num_puncts = 0
for punct in target:
    if punct in string:
        num_puncts += string.count(punct)
print(num_puncts)
19

And that is all but fine if we didn't have the re module at our disposal. With re it is simply 2 lines of code:

import re
pattern = r"[;.,–]"
print(len(re.findall(pattern, string)))
19

This post is about the most commonly used regex patterns and some regex functions I end up using regularly.

What is regex?

In simpler terms, a regular expression (regex) is used to find patterns in a given string. The pattern we want to find could be anything. We can create patterns that resemble an email or a mobile number. We can create patterns that find words that start with a and end with z in a string. In the above example:

import re
pattern = r"[;.,–]"
print(len(re.findall(pattern, string)))

The pattern we wanted to find was r"[;.,–]". This pattern captures any of the 4 characters we wanted to capture. I find regex101 a great tool for testing patterns. This is how the pattern looks when applied to the target string. As we can see, we are able to find all the occurrences of the target characters in the string, as required. I use the above tool whenever I need to test a regex. It's much faster than running a Python program again and again, and much easier to debug. So now we know that we can find patterns in a target string, but how do we really create these patterns?

Creating Patterns

The first thing we need to learn while using regex is how to create patterns. I will go through some of the most commonly used patterns one by one. As you would expect, the simplest pattern is a simple string.
pattern = r'times'
string = "It was the best of times, it was the worst of times."
print(len(re.findall(pattern, string)))

But that is not very useful. To help with creating complex patterns, regex provides us with special characters/operators. Let us go through some of these operators one by one.

1. The [] operator

This is the one we used in our first example. We want to find one instance of any character within these square brackets. [abc] will find all occurrences of a or b or c. [a-z] will find all occurrences of a to z. [a-z0-9A-Z] will find all occurrences of a to z, A to Z and 0 to 9. We can easily use this pattern as below in Python:

pattern = r'[a-zA-Z]'
string = "It was the best of times, it was the worst of times."
print(len(re.findall(pattern, string)))

There are other functions in regex apart from .findall, but we will get to them a little bit later.

2. The dot operator

The dot operator (.) is used to match a single instance of any character except the newline character. The best part about the operators is that we can use them in conjunction with one another. For example, we may want to find the substrings in the string that start with small d or capital D and end with e, with a length of 6.

3. Some meta sequences

There are some patterns that we end up using again and again while using regex, and so regex provides a few shortcuts for them. The most useful shortcuts are:

\w matches any letter, digit or underscore. Equivalent to [a-zA-Z0-9_].
\W matches anything other than a letter, digit or underscore.
\d matches any decimal digit. Equivalent to [0-9].
\D matches anything other than a decimal digit.

4. The plus and star operators

The dot character is used to get a single instance of any character. What if we want to find more? The plus character, +, is used to signify 1 or more instances of the leftmost character. The star character, *, is used to signify 0 or more instances of the leftmost character.
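To make the difference between the two quantifiers concrete, here is a small runnable sketch; the test string is made up for illustration:

```python
import re

s = "de dime dune"

# d\w*e allows zero word characters between d and e;
# d\w+e requires at least one.
star_matches = re.findall(r'd\w*e', s)
plus_matches = re.findall(r'd\w+e', s)

print(star_matches)  # ['de', 'dime', 'dune']
print(plus_matches)  # ['dime', 'dune']
```

Note that 'de' is matched by the star version but not by the plus version, since + demands at least one character between the anchors.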
For example, if we want to find all substrings that start with d and end with e, with zero or more characters between d and e, we can use:

d\w*e

If we want to find all substrings that start with d and end with e with at least one character between d and e, we can use:

d\w+e

We could also have used a more generic approach using {}:

\w{n} repeats \w exactly n times.
\w{n,} repeats \w at least n times or more.
\w{n1,n2} repeats \w at least n1 times but no more than n2 times.

5. The ^ caret operator and $ dollar operator

^ matches the start of a string, and $ matches the end of the string.

6. Word boundary

This is an important concept. Did you notice how I always matched a substring and never a word in the above examples? So, what if we want to find all words that start with d? Can we use d\w* as the pattern? Let's see using the web tool.

Regex Functions

Till now we have only used the findall function from the re package, but it also supports a lot more functions. Let us look into the functions one by one.

1. findall

We have already used findall. It is one of the regex functions I end up using most often. Let us understand it a little more formally.

Input: pattern and test string
Output: list of strings.

#USAGE:
pattern = r'[iI]t'
string = "It was the best of times, it was the worst of times."
matches = re.findall(pattern, string)
for match in matches:
    print(match)

It
it

2. search

Input: pattern and test string
Output: match object for the first match.

#USAGE:
pattern = r'[iI]t'
string = "It was the best of times, it was the worst of times."
location = re.search(pattern, string)
print(location)

<_sre.SRE_Match object; span=(0, 2), match='It'>

We can get this match object's data using:

print(location.group())
'It'

3. Substitute

This is another great functionality. When you work with NLP you sometimes need to substitute integers with X's, or you might need to redact some documents. It is just the basic find and replace from any of the text editors.
Input: search pattern, replacement pattern, and the target string
Output: substituted string

string = "It was the best of times, it was the worst of times."
string = re.sub(r'times', r'life', string)
print(string)

It was the best of life, it was the worst of life.

Some Case Studies:

Regex is used in many cases where validation is required. You might have seen prompts on websites like "This is not a valid email address". While such a prompt could be written using multiple if and else conditions, regex is probably the best fit for such use cases.

1. PAN Numbers

In India, we have PAN numbers for tax identification, rather like SSN numbers in the US. The basic validation criterion for a PAN is that it must have all its letters in uppercase and its characters in the following order:

<char><char><char><char><char><digit><digit><digit><digit><char>

So the question is: Is 'ABcDE1234L' a valid PAN? How would you normally attempt to solve this without regex? You would most probably write a for loop and keep an index going through the string. With regex it is as simple as:

match = re.search(r'[A-Z]{5}[0-9]{4}[A-Z]', 'ABcDE1234L')
if match:
    print(True)
else:
    print(False)

False

2. Find Domain Names

Sometimes we have a large text document and we have to find instances of telephone numbers, email IDs or domain names in it. For example, suppose you have this text: <div class="reflist" style="list-style-type: decimal;"> <ol class="references"> <li id="cite_note-1"><span class="mw-cite-backlink"><b>^ ["Train (noun)"](). <i>(definition – Compact OED)</i>. Oxford University Press<span class="reference-accessdate">.
Retrieved 2008-03-18</span>.</span><span title="ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fen.wikipedia.org%3ATrain&rft.atitle=Train+%28noun%29&rft.genre=article&rft_id=http%3A%2F%2F" class="Z3988"><span style="display:none;"> </span></span></span></li> <li id="cite_note-2"><span class="mw-cite-backlink"><b>^</b></span> <span class="reference-text"><span class="citation book">Atchison, Topeka and Santa Fe Railway (1948). <i>Rules: Operating Department</i>. p. 7.</span><span title="ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fen.wikipedia.org%3ATrain&rft.au=Atchison%2C+Topeka+and+Santa+Fe+Railway&rft.aulast=Atchison%2C+Topeka+and+Santa+Fe+Railway&rft.btitle=Rules%3A+Operating+Department&rft.date=1948&rft.genre=book&rft.pages=7&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook" class="Z3988"><span style="display:none;"> </span></span></span></li> <li id="cite_note-3"><span class="mw-cite-backlink"><b>^ [Hydrogen trains]()</span></li> <li id="cite_note-4"><span class="mw-cite-backlink"><b>^ [Vehicle Projects Inc. Fuel cell locomotive]()</span></li> <li id="cite_note-5"><span class="mw-cite-backlink"><b>^</b></span> <span class="reference-text"><span class="citation book">Central Japan Railway (2006). <i>Central Japan Railway Data Book 2006</i>. p. 16.</span><span title="ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fen.wikipedia.org%3ATrain&rft.au=Central+Japan+Railway&rft.aulast=Central+Japan+Railway&rft.btitle=Central+Japan+Railway+Data+Book+2006&rft.date=2006&rft.genre=book&rft.pages=16&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook" class="Z3988"><span style="display:none;"> </span></span></span></li> <li id="cite_note-6"><span class="mw-cite-backlink"><b>^ ["Overview Of the existing Mumbai Suburban Railway"](). _Official webpage of Mumbai Railway Vikas Corporation_. Archived from [the original]() on 2008-06-20<span class="reference-accessdate">. 
Retrieved 2008-12-11</span>.</span><span title="ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fen.wikipedia.org%3ATrain&rft.atitle=Overview+Of+the+existing+Mumbai+Suburban+Railway&rft.genre=article&rft_id=http%3A%2F%2F" class="Z3988"><span style="display:none;"> </span></span></span></li> </ol> </div>

And you need to find all the primary domains from this text: askoxford.com; bnsf.com; hydrogencarsnow.com; mrvc.indianrail.gov.in; web.archive.org. How would you do this?

match = re.findall(r'http(s:|:)\/\/(www.|)([0-9a-z.A-Z-]*\.\w{2,3})', string)
for elem in match:
    print(elem)

(':', 'www.', 'askoxford.com')
(':', 'www.', 'hydrogencarsnow.com')
(':', 'www.', 'bnsf.com')
(':', '', 'web.archive.org')
(':', 'www.', 'mrvc.indianrail.gov.in')
(':', 'www.', 'mrvc.indianrail.gov.in')

| is the or operator here, and match returns tuples in which the parts of the pattern inside () are kept.

3. Find Email Addresses:

Below is a regex to find email addresses in a long text:

match = re.findall(r'([\w0-9-._]+@[\w0-9-.]+[\w0-9]{2,3})', string)

These are advanced examples, but if you work through them yourself you should be fine with the info provided.

Conclusion

While it might look a little daunting at first, regex provides a great degree of flexibility when it comes to data manipulation, creating features and finding patterns. I use it quite regularly when I work with text data, and it can also be used for data validation tasks. I am also a fan of the regex101 tool and use it frequently to check my regexes. I wonder if I would use regexes as much if not for this awesome tool. Also, if you want to learn more about NLP, here is an excellent course. You can start for free with the 7-day free trial. Thanks for the read. I am going to be writing more beginner-friendly posts in the future too. Follow me on Medium or subscribe to my blog.
https://mlwhiz.com/blog/2019/09/01/regex/
iODEHinge2Joint Struct Reference

ODE hinge 2 joint. More...

#include <ivaria/ode.h>

Detailed Description

ODE hinge 2 joint. The hinge-2 joint is the same as two hinges connected in series, with different hinge axes.

Definition at line 763 of file ode.h.

Member Function Documentation

Get the time derivative of the angle value.
Get free hinge axis 1.
Get free hinge axis 2.
Set the joint anchor point. The joint will try to keep this point on each body together. Input is specified in world coordinates.
Sets free hinge2 axis 1.
Sets free hinge2 axis 2.

The documentation for this struct was generated from the following file: ivaria/ode.h

Generated for Crystal Space 1.2.1 by doxygen 1.5.3
http://www.crystalspace3d.org/docs/online/api-1.2/structiODEHinge2Joint.html
Represents analyses that only rely on functions' control flow. More... #include "llvm/IR/PassManager.h" Represents analyses that only rely on functions' control flow. This can be used with PreservedAnalyses to mark the CFG as preserved and to query whether it has been preserved. The CFG of a function is defined as the set of basic blocks and the edges between them. Changing the set of basic blocks in a function is enough to mutate the CFG. Mutating the condition of a branch or argument of an invoked function does not mutate the CFG, but changing the successor labels of those instructions does. Definition at line 116 of file PassManager.h. Definition at line 118 of file PassManager.h.
https://llvm.org/doxygen/classllvm_1_1CFGAnalyses.html
As you dive into development of Windows 8 Metro apps, you will most likely run into the need to use local storage. While you may be thinking, "It's just Windows; I already know how to connect to a database and access the file system," unfortunately the APIs you are used to will not work with Metro apps. Metro apps work on a very different model, whereby each app is restricted to a sandbox. Working within a sandbox allows for much better control and prevents one app from affecting another. Unfortunately, this also means that you will not be able to use the System.Data namespace to connect to a local database and/or a SQL Server on your network. Instead you will need to provide a set of web services or other means of accessing remote data as if it were coming from a server on the web. While this may seem counter-intuitive compared to traditional Windows apps, it makes sense for Metro apps working within a sandbox model.

Local and Roaming Storage

Nonetheless, your app will have the need to store data locally on the machine. Metro supports three different types of storage, which are specific to your app: Local, Roaming and Temporary. Files stored within the Local folder will only be stored on the machine on which they were created. Unlike the Local folder, the Roaming folder allows data to be synced between Windows 8 machines running your Metro app. The Temporary folder works as you would expect, whereby data stored within the folder will be deleted periodically by Windows. In addition to accessing the file system, you also have the ability to store Local Settings and Roaming Settings, which allow you to store key/value pair information for your app. To jump in and start using local storage, you will be using the Windows.Storage.ApplicationData class. The following code snippet is a very simple example of how to create a new file in local storage and write text to it.
async void save_myFile()
{
    //Get the Local Folder
    var localFolder = Windows.Storage.ApplicationData.Current.LocalFolder;

    //Create the File
    StorageFile myFile = await localFolder.CreateFileAsync("myFile.txt",
        CreationCollisionOption.ReplaceExisting);

    //Write text to the file
    await FileIO.WriteTextAsync(myFile, "Write Test Data!");
}

Right off the bat, the async modifier in the method declaration is new. This modifier tells Metro to allow this method to be executed asynchronously, which helps keep the main UI thread responsive while background code is executing. Next we create a local variable, which points to the Local folder for this application. Then, using the local folder, we create a new file called myFile.txt. Notice the await expression, which allows the method to pause while the file creation is executing. Once complete, we use the FileIO class and the WriteTextAsync method to write a single line of text to the file. Using this method allows you to write text to the file without the need to perform open/close operations. If instead you want to use the Roaming folder the same way, you could replace LocalFolder in the first line with RoamingFolder.

Settings

Taking advantage of settings storage works a little bit differently; however, it is very intuitive. The following lines of code show how you can add a setting called test, provide a value for it, and retrieve the setting.

//Set test setting
ApplicationData.Current.LocalSettings.Values["test"] = "Setting Value";

//Read test setting
string t = (string)ApplicationData.Current.LocalSettings.Values["test"];

Again, you can take advantage of RoamingSettings by switching from LocalSettings to RoamingSettings.
Unfortunately, out of the box, Metro does not include a local database; however, due to the popularity of SQLite, several different projects are currently underway to port SQLite over to Metro. In addition, you may be able to get away without the need for a high-level query language such as SQL. Depending on your app requirements, you may be able to use a local XML file and/or a JSON (JavaScript Object Notation) formatted file. Nonetheless, Metro does include the basic tools necessary to store local files and settings.
http://mobile.codeguru.com/csharp/.net/using-local-storage-with-c-and-xaml-in-windows-8-metro-apps.html
Lemoncheesecake is a well-known Python testing framework. As a test automation services company, we evaluated this framework for one of our automation testing projects. After the proof-of-concept stage, we decided to use the lemoncheesecake framework to write automation test scripts. Initially, BDD support was not available; however, you can now use the behave BDD framework with lemoncheesecake. In this blog article, you will learn how to write BDD tests using the lemoncheesecake and behave frameworks.

Step #1: Installation

To set up BDD using behave and lemoncheesecake, you need only two dependencies (the behave and lemoncheesecake packages). That's it.

pip install lemoncheesecake
pip install behave

Step #2: Set Up the LemonCheeseCake Project

Once the required dependencies are installed, you need to create a LemonCheeseCake project. There is a command to do that. Run the command below; it will create a LemonCheeseCake (LCC) project for you.

lcc bootstrap myproject

Step #3: Create a Feature

Create a 'features' folder inside the 'myproject' folder, then create a feature file inside the features folder.

Feature: LemonCheeseCake Demo
  Scenario: Demo Scenario
    Given I fill login details
    When I click login button
    Then I should see Home page

Step #4: Step Definition

from behave import *
import lemoncheesecake.api as lcc
from lemoncheesecake.matching import *

@given("I fill login details")
def step_impl(context):
    lcc.log_info("Pass")

@when("I click login button")
def step_impl(context):
    lcc.log_info("Pass")

@then("I should see Home page")
def step_impl(context):
    lcc.log_info("Pass")

Step #5: Create environment.py

Create environment.py inside the myproject folder to install the BDD hooks from behave.

from lemoncheesecake.bdd.behave import install_hooks

install_hooks()

Step #6: Run

behave features/demofeature.feature

Once the execution is complete, you can find the LCC HTML report in the reports folder. The best thing about LCC is its reporting.
Your team will love it once they start getting reports from the LemonCheeseCake framework.

In Conclusion

Why would one integrate behave with LCC? As a software testing company, we write BDD and non-BDD automated scripts using the Java, Python, and JavaScript languages. Say, for instance, you wish to write automated test scripts for data quality checks and UI validations in a test automation framework; then the LemonCheeseCake and behave combination is an ideal choice. The LCC framework is gaining popularity among automation testers. We hope that you have gained some useful insights from this article. Every other day, we publish a technical blog article. Subscribe to Codoid's blogs to get updates on recent technical developments, and feel free to contact us for your automation testing needs.
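Under the hood, the Given/When/Then matching that behave performs boils down to mapping step text onto decorated functions. The following toy registry illustrates that idea only; it is not behave's actual implementation, and the step text is taken from the demo feature above:

```python
import re

steps = {}  # compiled pattern -> step function


def given(pattern):
    """Register a step implementation, loosely mimicking behave's @given."""
    def decorator(fn):
        steps[re.compile(pattern)] = fn
        return fn
    return decorator


@given(r"I fill login details")
def fill_login(context):
    return "Pass"


def run_step(text, context=None):
    # Find the first registered pattern that matches the whole step text.
    for pattern, fn in steps.items():
        if pattern.fullmatch(text):
            return fn(context or {})
    raise LookupError("no step matches: " + text)


print(run_step("I fill login details"))  # Pass
```

Seeing the dispatch written out makes it clearer why step text in the feature file must match the decorator strings exactly.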
https://codoid.com/getting-started-with-lemoncheesecake-and-behave-bdd-framework/
The main difference between procedures and functions is that functions have a return value. When you call a procedure in an application, the procedure executes and that's about it. When you call a function, the function executes and returns a value to the caller application. Here is a simple function header:

function SomeFunction(ParameterList): ReturnValue;

Functions are more versatile than procedures. While procedures can only be called as standalone statements, functions can be called as standalone statements and can also be used in expressions. Let's take a look at the Chr function that we used earlier. The Chr function can be used to convert an integer value to an ASCII character. The header of the Chr function looks like this:

function Chr(X: Byte): Char;

The example in Listing 5-2 uses the Chr function to illustrate both function call methods.

Listing 5-2: Function calls

program Project1;

{$APPTYPE CONSOLE}

uses
  SysUtils;

var
  c: Char;

begin
  { standalone call }
  Chr(65);

  { function call in an expression }
  c := 'A';
  if c = Chr(65) then
    WriteLn('ASCII 65 = A');
  ReadLn;
end.

Note that calling functions as procedures (outside of an expression) defeats the very purpose of functions. When you call a function as you would a procedure, the return value of the function is discarded. The return value is usually the reason that you call a function in the first place, but there are situations in which you only need the function to execute without giving you a result. For instance, you can call a function that copies a file without needing to know how many bytes have actually been transferred. There are three more standard functions that you can use in your application. All three functions are extremely fast and versatile. The first one is Ord, which can, for instance, be used to convert a character value to an integer. The other two functions operate on all ordinal values and return the predecessor (the Pred function) or the successor (the Succ function) of a value.
These functions are pretty special because Delphi doesn't even treat them as functions, instead computing their result at compile time. These functions can also be used in constant declarations.

const
  X = Succ(19);  { 20 }
  Y = Pred(101); { 100 }
  Z = Ord('A');  { 65 }

Typecasting is the process of converting a value or a variable from one data type to another. There are two types of typecasting: implicit and explicit. Explicit typecasting is also called conversion. The difference between implicit and explicit typecasting is that implicit typecasting is lossless, whereas explicit typecasting can result in data loss. Implicit typecasting is only possible with compatible data types, usually only ordinal types. For instance, implicit typecasting is automatically done by Delphi when you assign an integer value to a Double variable.

var
  i: Integer;
  d: Double;
begin
  i := 2005;
  d := i;
end.

An automatic typecast is possible because a Double variable can hold larger values than an integer value. When you assign an integer value to a Double variable (or any other variable that can hold an integer value), the integer value is expanded and assigned to the destination variable. If you have to perform an implicit typecast manually, the syntax is:

DataType(Expression)

We have already performed implicit typecasts earlier in this chapter, usually to convert character values to integers:

var
  c: Char;
  i: Integer;
begin
  c := 'X';
  i := Integer(c);
end.

If you try to assign a large value to a variable that cannot hold it, the compiler will give you the "Constant expression violates subrange bounds" or "Incompatible types: 'Type' and 'Type'" error. If you get one of these errors, you'll have to perform either an implicit or an explicit typecast.

var
  b: Byte;
begin
  b := 255;        { OK }
  b := 1000;       { Error, subrange bounds violation }
  b := Byte(1000); { OK }
end.

When you get a subrange bounds violation error, you should think about selecting another data type that supports larger values.
What do you think the value of the variable will be after the implicit typecast Byte(1000)? It can't be 1000, because the value 1000 occupies 2 bytes of memory, and a Byte variable occupies only 1 byte of memory. When we convert the number 1000 (hexadecimal 03E8) to a Byte value, the low-order byte (hexadecimal E8) is written to the Byte variable. The value of the Byte variable after the implicit typecast is 232. Assigning a Double value to an integer variable also results in a compiler error, because integer variables cannot hold real numbers. When you have to assign a real value to an integer, you always have to perform an explicit typecast. You can convert a real value to an integer using either the Round or the Trunc standard functions. The Trunc function strips the decimal part of the number, and the Round function takes the decimal part of the number into account; they usually give different results. Round and Trunc can also be used in constant declarations.

var
  i: Integer;
  d: Double;
begin
  d := 12.8;
  i := Round(d); { i = 13 }
  i := Trunc(d); { i = 12 }
end.

In Delphi, you can differentiate between a constant value typecast and a variable typecast. A value typecast is actually the implicit typecast we've used so far. A value typecast can only be written on the right side of the assignment statement. A variable typecast can be written on both sides of the assignment statement, as long as both types are compatible.

var
  b: Byte;
  c: Char;
begin
  b := Byte('a'); { value typecast }
  c := Char(b);   { variable typecast }
  Byte(c) := 65;  { temporary variable typecast }
end.

To typecast a variable in C++, you can use the standard Delphi syntax for an implicit typecast or the C++ syntax.
Here are both:

(new_data_type)expression;

or

new_data_type(expression);

Here's how to perform both explicit and implicit typecasts in C++:

#include <iostream.h>
#include <conio.h>
#pragma hdrstop

#pragma argsused
int main(int argc, char* argv[])
{
  /* note that the same syntax works for implicit and explicit typecasts */
  float f = 65.20;
  char c = (char)f;
  double d = 12.9;
  int x = int(d);

  cout << c << endl; // writes "A"
  cout << x << endl; // writes "12"

  getch();
  return 0;
}
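The narrowing behavior of Byte(1000) and the Round/Trunc distinction described above are not Delphi-specific. For comparison, here is the same arithmetic sketched in Python, with math.trunc standing in for Trunc:

```python
import math

# Byte(1000): 1000 is hexadecimal 03E8, and keeping only the
# low-order byte (E8) yields 232.
low_byte = 1000 & 0xFF
print(low_byte)  # 232

# Round takes the decimal part into account; Trunc simply strips it.
print(round(12.8))       # 13
print(math.trunc(12.8))  # 12
```

Masking with 0xFF is exactly the "keep the low-order byte" operation the narrowing typecast performs.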
https://flylib.com/books/en/1.228.1.35/1/
Bug #8957

Ruby tk control variables for radiobutton, menu radiobutton and menu checkbutton not working correctly.

Description

The control variables for radiobutton, menu radiobutton and menu checkbutton are not working correctly. In a starter project I am coding, they do not set the variable at all, and if set, it does not affect the button that is checked. I've tried the defaults, $var, @var, @@var, TkVariable.new, and no difference. I've coded around the radio buttons by using the variable and then setting with .select(). For the menu radio buttons I can just do it when the command function is called. With the menu check buttons, since the variable is not working, I cannot tell the state. So I set indicatoron to 0 and just flip the action in the command function. But then there are no checks for the check boxes. I've bunched some code together out of the tkdocs tutorial to check this, and while this at least sets the variable initially, setting the variable does not change the selected radio button. And having set the variable once, clicking the radio buttons no longer changes the variable. From that point it can only be changed in code. I've uploaded this as a demo of the problem. This is on Windows; the Tk patch level is:

irb(main):035:0> Tk::TK_PATCHLEVEL
=> "8.5.12"

I've tried ruby 1.9.3p448 as well, and also Linux with ruby 2.0.0p247.

History

#1 Updated by Tomoyuki Chikanaga about 2 years ago
- Category set to lib
- Status changed from Open to Assigned
- Assignee set to Hidetoshi Nagai

#2 Updated by Hidetoshi Nagai about 2 years ago
- Status changed from Assigned to Rejected
- % Done changed from 0 to 100

It depends on bugs in the reporter's code.
--- test_control_variables.rb	2013-09-27 14:48:40.000000000 +0900
+++ test_control_variables_mod.rb	2013-09-27 14:51:26.000000000 +0900
@@ -49,12 +49,12 @@
 def b_button
   puts("setting contol variable to office")
-  $phone = 'office'
+  $phone.value = 'office'
 end

 def c_button
   puts("setting contol variable to mobile")
-  $phone = 'mobile'
+  $phone.value = 'mobile'
 end

 Tk.mainloop
https://bugs.ruby-lang.org/issues/8957
hello the error's fairly clear, it says it can't find the webpage listed. when i load it, i get a 404. are you sure you have the path correct? i really appreciate your help but i am working hard to get this done. i just create a simple web service and i can not locate them its bizzare <%@ WebService Language="VB" Class="Service1" %> Imports System.Web.Services Imports System.Web.Services.Protocols Imports System.ComponentModel <System.Web.Services.WebService(Name:="testwebservice", Namespace:="test")> _ 'i have tried namespace:="" still doesnt work i have uploaded it for u try to add it from this url first then hannah Last edited by hannahwill; 05-26-2009 at 04:02 AM. the pages look better today. yesterday i was getting 404's. have you tried again? still dont work please help me i feel like doin a suicide and leaving a note behind that says i have done suicide coz my webservice didnot work m really upset i mean my client dont want to change their domain but they gave me full access to the domain but i dont know what to do what to change to get this webservice up and running and service it works fine i dont want to buy a new domain just to for this webservice.is there any other way around if i could create subdomain because i think i cannot add web reference because of web.config file.what do u think cheez? Last edited by hannahwill; 06-02-2009 at 08:11 AM. Hi Hannah, I tried adding reference to both URLs you have mentioned : I got 404 error page Regards, abhi this issue is resolved.i have changed the webservice path. sorry for inconvenience. byee
http://www.webdeveloper.com/forum/showthread.php?210425-mysql-connection-problem-asp-net-and-godaddy&goto=nextoldest
Summary: ...

..., and I need to quickly change the old location name to the new location name. I have been reading all of your articles this week, and I think I should be able to use Import-CSV to read the CSV file, and use a foreach loop to evaluate each row in the CSV file. Inside there, I should be able to use an if statement to see if there is a match with the old name. If there is, I want to change it to the new name, and then export the CSV data to a new file. The problem is that this is not really working the way I want it to. For some reason, I appear to be picking up some kind of extra crap that is added by Windows PowerShell. I have spent all afternoon on something that should really be a simple one-liner. Help please. If I have to write a script to do this, I may as well go back to using VBScript. At least I have plenty of years' experience using that language.

- GM

Hello GM, Microsoft Scripting Guy Ed Wilson here. One thing that is rather interesting about Windows PowerShell is that some things are really super easy. Other things appear to be difficult until you find the "correct" approach. One of the things I noticed in the 2011 Scripting Games is that some people work way too hard on their solutions. The other cool thing about Windows PowerShell is that in the end, if something runs without errors and does what needs to be done, it is a solution. If I can write a bit of code in five minutes that is ugly but works, it is probably better than something that is elegant but takes me five hours to write. Don't get too hung up on trying to craft the ultimate one-liner when you have a job to do, and that job does not entail crafting the ultimate one-liner. On the other hand, there is something to be said for thinking about how to do things more effectively in Windows PowerShell.
GM, I am imagining that your approach was to do something like this: import-csv C:\fso\usersconsolidated.csv | foreach { If($_.ou -match "Atlanta") {$_.OU -replace "Atlanta","Cobb"}} | export-csv c:\myfolder\myfile.csv This command does not work the way you want it to work … in fact, it does not really work very well at all because it does not produce the expected results. You are getting hung up with wrestling with the pipeline, getting confused with the Foreach-Object cmdlet (that is interrupting your pipeline), and messing about with the Export-CSV command that is not producing a CSV file at all. While it is true that Windows PowerShell has some pretty cool cmdlets for working with CSV and for working with files, it is also true that at times, I want to be able to read the entire contents of a file into memory, and work on it at one time. The Get-Content cmdlet does not permit this; it basically creates an array of lines of text, which is great on most occasions. To easily read the contents of a file all at once, I use the readalltext static method from the File class from the .NET Framework. The File class resides in the System.IO .NET Framework namespace. When using this class, I use square brackets around the class and namespace name. I can leave off the word system if I want to, or I can type it if I wish—it does not matter. If the word system is not present, Windows PowerShell will assume that the namespace contains the word system in it and will automatically use that when attempting to find the class. The .NET Framework namespaces are similar to the namespaces used in WMI because they are used to group related classes together for ease of reference. The difference is that the .NET Framework namespaces are a bit more granular. Therefore, if I am interested in working with files, directories, paths, file information, and other related items, I would go to the System.IO .NET Framework namespace and look around to see what is available. 
After I find the File class, I can look it up on MSDN to see which methods it includes. The easiest ones to use are the static members (methods, properties, and events taken together become members) because using Windows PowerShell, all I need to do is to put the namespace/class name combination inside square brackets, use two colons, and the name of the method. And it works. Many times, the things a class provides are available somewhere else. For example, the File .NET Framework class provides a static method called exists. This method returns a Boolean value (true or false) that lets me know if a file exists or not. To use this method, I provide a string to the method. This technique is shown here: PS C:\> [io.file]::exists("C:\fso\UserGroupNames.txt") True PS C:\> [io.file]::exists("C:\fso\missingfile.xxx") False I can accomplish the same thing by using the Test-Path cmdlet. This appears here. PS C:\> Test-Path C:\fso\UserGroupNames.txt PS C:\> Test-Path C:\fso\missingfile.xxx It is always preferable to use a native Windows PowerShell cmdlet to do something, rather than resorting to .NET Framework, COM, WMI, ADSI, or some other technology—unless you have a compelling reason for doing otherwise. A static method called readalltext is available from the file class, and it can be used in two ways. The first way is to supply a string that points to the path to the file to open. The second way is to specify the encoding of the file. Most of the time when reading a file, the encoding is not required because the method attempts to detect automatically the encoding of a file based on the presence of byte order marks. Encoding formats UTF-8 and UTF-32 (both big endian and little endian) can be detected. The result of using the readalltext method is that I get back a bunch of text in the form of a String class. The String class resides in the System .NET Framework namespace, and it contains a large number of methods. One of those methods is the replace method. 
I therefore add the replace method to the end of the readalltext method. The command to read all of the text in a file (CSV file) and to replace every instance of the word atlanta with the word cobb is shown here: [io.file]::readalltext("C:\fso\usersconsolidated.csv").replace("atlanta","cobb") The command to read a text file and replace words, and its associated output are shown in the following figure. To write this to a text file is simple. I can use the Out-File cmdlet as shown here: [io.file]::readalltext("C:\fso\usersconsolidated.csv").replace("Atlanta","Cobb") | Out-File c:\fso\replacedAtlanta.csv -Encoding ascii –Force The above is a single-line command that wraps in my Word document. I have not added any line continuation to the command. Keep in mind this technique is case sensitive. It will replace Atlanta, but not atlanta. The newly created text file is shown in the following figure. GM, that is all there is to replacing values in a CSV file. Join me tomorrow for more exciting Windows PowerShell fun. TTFN. bit dangerous ... as I already commented to your last article ... but even MORE! First: The introductory lines: stated that the command does not produce the desired output, which of course is true! But a slight modification may work: import-csv C:\fso\usersconsolidated.csv | foreach { If($_.ou -match "Atlanta") {$_.OU = $_.OU -replace "Atlanta","Cobb"}; $_} | export-csv c:\myfolder\myfile.csv Your approach to read the whole text file does imply something unwanted: We loose the object structure that Import-csv would deliver! We end up in a stream of characters where we might exchange any occurance of "SearchMe" with "ReplaceMe" and that is ANYWHERE in the text! We don't have the OU property any more! And so we might replace the search string even in Lname, Fname or Group if it appears there! That's definitely unwanted ... so even if it isn't an "on first sight" solution, I would prefer to have objects and deal with the properties ... 
which is what you have been telling us all the time! :-) Klaus.

@Klaus Schulte: The preferred way to deal with CSV files is as objects; Windows PowerShell makes it so easy to do. In this particular example, I wanted to show another approach, one that is "quick and dirty": a brute-force way of doing things. Proper testing should always be done before doing anything. In addition, do not overwrite your source file unless you have another copy :-) Thank you for pointing out that this may not be the best practice for all situations.

@Klaus Schulte: Sorry, hit post too quickly. I also like your fix to the opening scenario! It works, and is much more elegant than a brute-force replace operation.

I totally agree with Klaus; your solution made me cringe while reading it...

I used this to remove ALL of the spaces in my CSV file. Works great!
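For contrast with the brute-force text replace, here is a rough Python analogue (my own sketch, not from the article) of the object-per-row approach Klaus recommends: only the OU column is rewritten, so an "Atlanta" appearing in another column such as Lname survives.

```python
import csv
import io

# In-memory stand-ins for the input and output files.
src = io.StringIO(
    "OU,Lname\n"
    "Atlanta,Jones\n"
    "Boston,Atlanta\n"   # "Atlanta" as a surname must NOT be replaced
)
dst = io.StringIO()

reader = csv.DictReader(src)
writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
writer.writeheader()
for row in reader:
    if row["OU"] == "Atlanta":   # match only on the column we care about
        row["OU"] = "Cobb"
    writer.writerow(row)

print(dst.getvalue())
```

This is the trade-off the comments describe: a whole-file string replace is faster to write, but the per-row, per-field version cannot clobber matches in unrelated columns.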
http://blogs.technet.com/b/heyscriptingguy/archive/2011/11/03/search-and-replace-words-in-a-csv-file-by-using-powershell.aspx
Dean McKenzie (11,148 Points)

Console for this task does not seem to be working; I keep getting an import error for Flask. While doing the Flask video I am unable to run my program due to an import error. This did work yesterday, however it is not working today. Could somebody from the Treehouse team please advise?

treehouse:~/workspace$ python app.py
Traceback (most recent call last):
  File "app.py", line 2, in <module>
    from flask import (flask, render_template, redirect,
ImportError: No module named flask

Kind Regards

7 Answers

Chris Freeman (Treehouse Moderator, 59,891 Points): Taylor, it seems the PATH environment is not correct on Workspaces. python is pointing at python2, not python3. The short-term workaround is to use python3 app.py

Josh Keenan (19,311 Points): Simple enough I think, try this: from flask import Flask, redirect, render_template

Chris Freeman: Taylor, are you saying you are still getting the exact same error "ImportError: No module named flask" (note the lowercase f) after you change your import statement to use the upper-cased Flask?

Josh Keenan: I get it too, Chris, on flask projects in my workspaces that otherwise have no problems.

Chris Freeman: Right. It's complaining about the flask module missing, not the f/Flask attribute. If it were due to importing f/Flask you would have seen a different error: ImportError: cannot import name...

Josh Keenan: Okay, my flask projects that I haven't worked on in a while won't run either... :S Presuming they are updating something or other!

Dean McKenzie: Not sure if the full code will help in this circumstance, as it seems to be a direct Flask import problem. I think you're right, they may be doing some sort of upgrade, or have some sort of outage. Again, thanks for taking the time to look into and answer my question. Enjoy your evening.
Dean McKenzie: Thanks Chris, that works. Who do I speak to in order to get this permanently rectified? I would like to complete this course within the next couple of days. Thanks again for your support.

Dean McKenzie: Hi Josh, I tried changing the flask in the import from lower case to upper case and it has not made a difference. Thanks for your prompt help though.
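A small stdlib-only sketch of the diagnosis Chris performs above: print which interpreter is actually running and check whether a module resolves from it. It probes a stdlib module so it runs anywhere; in a real workspace you would probe "flask" instead.

```python
import importlib.util
import sys

# Which interpreter is actually running? (python2 vs python3 on PATH was
# the root cause in the thread above.)
print("interpreter:", sys.executable)
print("version:", sys.version_info[:2])

# Is the module importable from THIS interpreter? Probing "json" here so
# the sketch runs anywhere; substitute "flask" in a real workspace.
spec = importlib.util.find_spec("json")
print("module found:", spec is not None)
```

If find_spec returns None, the package is missing from that interpreter's site-packages, which is exactly the situation "No module named flask" reports.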
https://teamtreehouse.com/community/console-for-this-task-does-not-seem-to-be-working-keep-getting-an-import-error-for-flask
enough.

int main() {
    vector<feline::kitty> v;
    cout << "*** Constructing cat." << endl;
    feline::kitty cat;
    cout << "*** Pushing back cat #1." << endl;
    v.push_back(cat);
    cout << "*** Pushing back cat #2." << endl;
    v.push_back(cat);
    cout << "*** Destroying cat and v." << endl;
}

C:\Temp>cl /EHsc /nologo /W4 meow.cpp
meow.cpp

C:\Temp>meow
*** Constructing cat.
default ctor
*** Pushing back cat #1.
copy ctor
default ctor
dtor
default ctor
dtor
*** Pushing back cat #2.
copy ctor
default ctor
copy ctor
swap
dtor
default ctor
dtor
dtor
*** Destroying cat and v.
dtor
dtor
dtor

Note that the Swaptimization requires a default ctor.

Thank you. Hopefully, VC10 will see C++0x finalized and implemented, with its move constructors and all, so we won't need this hack by that time.

(I thought Boost 1.35.0 hadn't implemented the unordered containers yet?)

void _Swap(_Myt& _Right) { // ...

C:\Temp>type meow.cpp
#include <functional>
#include <iostream>
#include <ostream>
#include <vector>
...
    fxn& operator=(const fxn& other) { m_f = other.m_f; return *this; }
    std::tr1::function<FT> m_f;
};
int add(int x, int y) { return x + y; }
int mult(int x, int y, int z) { return x * y * z; }
int main() {
    ...
}

C:\Temp>cl /EHsc /nologo /W4 meow.cpp
meow.cpp
C:\Temp>meow

C:\Temp>type meow.cpp
#include <iostream>
#include <ostream>
using namespace std;
int main() {
    ...
#endif
}

C:\Temp>cl /EHsc /nologo /W4 meow.cpp
meow.cpp

With VC9 RTM:
C:\Temp>meow
_MSC_FULL_VER: 150021022
_MSC_BUILD: 8
This is VC9 RTM or above.

With VC9 SP1:
C:\Temp>meow
_MSC_FULL_VER: 150030729
_MSC_BUILD: 1
This is VC9 RTM or above.
https://blogs.msdn.microsoft.com/vcblog/2008/08/11/tr1-fixes-in-vc9-sp1/
SOLID for packag... err, namespaces

Though the principles still apply to the latter examples (but not only to them). In fact, I think that PHP namespaces will play the role of packages soon, as they provide a mechanism for encapsulation: use statements are not necessary when you refer to entities in the same namespace, just like Java imports. There are 3 principles on the cohesion of packages and another 3 on dependencies and coupling. In this article we will explore them with examples (and pain points) from the PHP world, where they were either applied or ignored. Many principles are the equivalent of a corresponding SOLID principle, but over different metrics (e.g. SAP).

REP: Reuse/Release Equivalence Principle

The reuse granularity and the release granularity shall be the same.
- Reuse is about the classes I instantiate and call, directly or indirectly (via factories and collaboration).
- Release is about what you download or include in your system, or what has an independent version number and release cycle.

For example, consider Zend Framework, which is frequently cited as a library and not only a framework. It is downloadable only as two packages, the standard and minimal editions; however, if I only want to reuse Zend_Cache and nothing else, I am forced to download and maintain the whole distribution, because the release granularity is lower than the reuse one. For Zend Framework 1, the packageizer was retrofitted on the full distribution: a mechanism for selecting classes from the framework and automatically generating a package containing all the dependencies. For Zend Framework 2 a better system will probably be involved. There is a trade-off between granularity and ease of use: imagine Zend_* components all with different version numbers; we would transport JAR hell into the PHP world.

CRP: Common Reuse Principle

Classes of a package are reused together. If you reuse one, you reuse all of the package.
The sense here is that you are forced to grab the package as a whole, and cannot separate out some classes, because of possible compile/runtime dependencies (PHP "compilation" here just means loading the class source code). In Zend Framework, the Zend_Form elements are all grouped together, so that when using one you're forced to download the other elements, but not, for example, classes that access the database. This is a straightforward case, but it is not so immediate to decide what should go in a package elsewhere, for example in Zend_View or Zend_Controller.

CCP: Common Closure Principle

Classes of a package should be closed against the same modifications. This principle is the equivalent of the Single Responsibility Principle for classes. Its take-home point is that package maintainers should not be afraid of moving classes between packages (before publishing the API, of course) if changes are a pain point.

The PHPUnit packages are in the initial phase of their definition: phpunit-mock-objects has been extracted by Sebastian Bergmann from the main phpunit package, and is distributed as a PEAR dependency; thus installing PHPUnit still takes a single command. However, any time I have tried to contribute something to phpunit-mock-objects, I ended up working also on phpunit to make the new feature available, with all the issues that entails: commits and pull requests made on two different repositories are not atomic anymore, branching and pushing become duplicated work, etc. Hopefully in the future the PHPUnit packages will get more stable and we will be able to contribute to a single repository at a time.

ADP: Acyclic Dependencies Principle

A package dependency graph shall not contain cycles. Mathematically speaking, it must be a Directed Acyclic Graph. Initially this principle was created for ease of building in languages with a one-time compilation step (C++, Java).
However, it's still a good practice to follow it even in PHP and other dynamic languages: if two packages are mutually dependent, even if the dependency involves only some classes, you must always deploy them together to make sure they do not explode at runtime. Keeping two packages always together defeats the purpose of separate packages: again, you do duplicated work to keep them separate while conceptually they are a single package (or they need to be broken down into more segregated packages). The problem with mutual dependencies means that Zend_Form cannot be distributed without Zend_View, and Zend_View, which contains form element helpers, cannot be distributed without Zend_Form. This is a surprise for everyone using Zend Framework as a library, especially for people not using the ZF packageizer.

SDP: Stable Dependencies Principle

The stability of a package should be proportional to the number of dependencies on that package. For example, you can pursue the goal of keeping the less stable packages up in the dependency hierarchy, so that their changes impact a small part of the rest of the system. A simple target for stability can be the afferent coupling Ca normalized by the total package interactions (which is the sum of the afferent coupling Ca and the efferent coupling Ce). In the classic formulation, Ce/(Ce+Ca) is used, which is 1 minus the metric cited before: a measure of instability instead of stability, going from 0 to 1.

SAP: Stable Abstraction Principle

The more abstract a package is, the more stable it should be. Abstractions are the target of dependencies: implementations, inheritance, composition. So an abstract package should be stable, otherwise it would cause many other packages to break. SAP and SDP together are the equivalent of the Dependency Inversion Principle: dependencies should be oriented towards abstractions. A measure of abstraction for packages can be inferred from their classes: a class is either abstract (interfaces also count) or not.
A package's abstractness can be measured as a 0-to-1 metric, like the number of interfaces and abstract classes over the total number of classes and interfaces (for normalization). In current PHP frameworks, almost all packages that contain abstractions also contain many concrete classes, such as Zend_Auth or Zend_Db. This is wonderful for ease of adoption, but the classes in those packages change at very different paces: interfaces like Zend_Auth_Adapter_Interface are frozen for the whole 1.x branch, while the other classes can tune their implementations over different releases. There are no stable packages you can refer to, only stable interfaces.

Take-home points

I'm not going to include the famous Main Sequence graph, as it would only add some mathematics to the discussion. Here are the take-home points for the package principles:
- cohesion and coupling exist also for, and between, packages.
- when you depend even on a single particle of dust from a package, you depend on the whole of it.
- you can introduce abstraction-only packages, which can easily be kept stable with respect to the rest of the library.
- package boundaries (like the ones of Zend_Auth / Zend_Auth_Adapter) are often ignored, but with the introduction of namespaces and use statements the picture may change quickly, as the external dependencies are exposed at the top of source files, making it a bit hard to exit from the current namespace.

Opinions expressed by DZone contributors are their own.
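The metrics from the SDP and SAP sections, plus the distance from the Main Sequence the article alludes to, fit in a few lines. This is my own sketch using the standard formulas, nothing PHP-specific:

```python
def instability(efferent: int, afferent: int) -> float:
    """I = Ce / (Ce + Ca): 0 = maximally stable, 1 = maximally unstable."""
    total = efferent + afferent
    return efferent / total if total else 0.0

def abstractness(abstract_types: int, total_types: int) -> float:
    """A = (interfaces + abstract classes) / all types, normalized to [0, 1]."""
    return abstract_types / total_types if total_types else 0.0

def main_sequence_distance(i: float, a: float) -> float:
    """D = |A + I - 1|: distance from the A + I = 1 line."""
    return abs(a + i - 1)

# A heavily depended-upon, fully abstract package sits on the main sequence:
i = instability(efferent=0, afferent=10)            # nothing it depends on
a = abstractness(abstract_types=5, total_types=5)   # all interfaces
print(main_sequence_distance(i, a))  # 0.0
```

An abstraction-only package like the hypothetical Zend_Auth_Adapter split would score high on abstractness and low on instability, which is exactly the combination SAP and SDP ask for.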
https://dzone.com/articles/solid-packag-err-namespaces
When should I consider moving common ASP.NET functionality into a business object?

This is a highly subjective question. Many developers would say that all non-UI functionality should be moved into business objects, regardless of the application. However, this depends on the size of your application. When it only consists of a few pages, you might not need to add the complexity of a business object. Once you can logically see the tier divisions in your application, it may be a good idea to consider using business objects.

Can I create an object that expects parameters when instantiated?

Absolutely. VB.NET and C# provide methods called constructors that allow you to specify how an object can be instantiated. This method is called New in VB.NET; in C#, it's simply the name of the class. For example:

'VB.NET
Class MyObject
    Public Overloads Sub New()
        'do something
    End Sub
    Public Overloads Sub New(strString As String)
        'do something ...
    End Sub
End Class

//C#
class MyObject {
    public MyObject() {
        // do something
    }
    public MyObject(string strString) {
        // do something ...
    }
}

This class has two constructors: a default one that doesn't take any parameters, and one that takes a single string parameter. Inside the second constructor, you can perform any operation you want that uses that string. You could use the following line from an ASP.NET page:

dim objMyObject as new MyObject("hello!")

There are just a few things you need to know. Be aware that you can only instantiate objects with the constructors you build. If you don't provide a constructor that doesn't take any parameters, you cannot do the following:

dim objMyObject as new MyObject

If you don't provide any constructors at all, VB.NET and C# automatically create one for you that doesn't take any parameters. Also, if you provide multiple constructors, be sure to use the Overloads keyword in VB.NET to avoid errors that arise from having two methods with the same name.
For more information, see the VB.NET and C# documentation included with the .NET Framework SDK.

Is there any way to make managed .NET objects backward compatible with COM objects?

You bet. The Assembly Registration tool (regasm.exe) will take the metadata from your .NET assemblies, export what is known as a type-library file, and register it, just like a COM object. You can then use your .NET objects just like older COM objects. If you update the .NET object, you'll have to rerun regasm.exe for COM to notice the changes. This may require restarting IIS, depending on your situation.

How do I find out the filenames for COM objects? Or how do I find out the progId values?

This can be rather difficult sometimes. The easiest way is to use a programming environment, such as Visual Studio .NET. These applications allow you to look through class libraries of available COM objects and find the filenames. Or, you can go into the Windows Registry and search for the object you need; you can often find both the progId and the filenames there. Finally, you can use an application called the OLE/COM Object Viewer, usually bundled with Microsoft Visual Studio, to determine the necessary strings.

For .NET objects, the methods are much easier. Usually, all of the .NET Framework classes are in files named after the namespaces. For example, the System namespace objects are in System.dll, and the DataGrid object is in System.Data.dll.
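As a cross-language aside (mine, not the book's): Python has no constructor overloading, so the two-constructor pattern from the VB.NET/C# example is usually expressed with default arguments or a named alternate constructor.

```python
class MyObject:
    def __init__(self, text: str = ""):
        # one __init__ covers both the no-arg and the one-arg cases
        self.text = text

    @classmethod
    def from_upper(cls, text: str) -> "MyObject":
        # a named alternate constructor instead of a second overload
        return cls(text.upper())

a = MyObject()                     # like the parameterless constructor
b = MyObject("hello!")             # like New(strString As String)
c = MyObject.from_upper("hello!")
print(b.text, c.text)  # hello! HELLO!
```

Default arguments play the role of the implicit parameterless constructor: if you give __init__ only required parameters, the no-argument form stops working, just as in the VB.NET/C# case.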
https://flylib.com/books/en/4.226.1.166/1/
I. Introduction. MPRI Cours Lecture IV: Integer factorization. What is the factorization of a random number? II. Smoothness testing. F. - Abigayle Anthony - 1 years ago - Views: Transcription 1 F. Morain École polytechnique MPRI cours /22 F. Morain École polytechnique MPRI cours /22 MPRI Cours I. Introduction Input: an integer N; logox F. Morain logocnrs logoinria Output: N = k i=1 pα i i with p i (proven) prime. Lecture IV: Integer factorization Major impact: estimate the security of RSA cryptosystems. Also: primitive for a lot of number theory problems. I. Introduction. II. Smoothness testing. III. Pollard s RHO method. IV. Pollard s p 1 method. 2013/10/14 How do we test and compare algorithms? Cunningham project, RSA Security (partitions, RSA keys) though abandoned? Decimals of π. F. Morain École polytechnique MPRI cours /22 What is the factorization of a random number? F. Morain École polytechnique MPRI cours /22 II. Smoothness testing N = N 1 N 2 N r with N i prime, N i N i+1. Prop. r log 2 N; r = log log N. Size of the factors: D k = lim N + log N k / log N exists and On average k D k N 1 N 0.62, N 2 N 0.21, N 3 N an integer has one large factor, a medium size one and a bunch of small ones. Def. a B-smooth number has all its prime factors B. B-smooth numbers are the heart of all efficient factorization or discrete logarithm algorithms. De Bruijn s function: ψ(x, y) = #{z x, z is y smooth}. Thm. (Candfield, Erdős, Pomerance) ε > 0, uniformly in y (log x) 1+ε, as x with u = log x/ log y. ψ(x, y) = x u u(1+o(1)) 2 B-smooth numbers (cont d) A) Trial division Prop. Let L(x) = exp ( log x log log x ). For all real α > 0, β > 0, as x ψ(x α, L(x) β x α ) = L(x). α 2β +o(1) Ordinary interpretation: a number x α is L(x) β -smooth with probability ψ(x α, L(x) β ) x α = L(x) α 2β +o(1). Algorithm: divide x X by all p B, say {p 1, p 2,..., p m }, m = π(b) = O(B). Cost: m divisions or p B T(x, p) = O(m lg X lg B). 
Implementation: use any method to compute and store all primes 2 32 (one char per (p i+1 p i )/2; see Brent). Useful generalization: given x 1, x 2,..., x n X, can we find the B-smooth part of the x i s more rapidly than repeating the above in O(nm lg B lg X)? F. Morain École polytechnique MPRI cours /22 B) Product trees Algorithm: Franke/Kleinjung/FM/Wirth improved by Bernstein 1. [Product tree] Compute z = p 1 p m. p 1 p 2 p 3 p 4 p 1 p 2 p 3 p 4 p 1 p 2 p 3 p 4 2. [Remainder tree] Compute z mod x 1,..., z mod x n. z mod (x 1 x 2 x 3 x 4 ) z mod (x 1 x 2 ) z mod (x 3 x 4 ) z mod x 1 z mod x 2 z mod x 3 z mod x 4 3. [explode valuation] For each k {1,..., n}, compute y k = z 2e mod x k with e s.t. 2 2e x k ; print gcd(x k, y k ). F. Morain École polytechnique MPRI cours /22 F. Morain École polytechnique MPRI cours /22 Validity and analysis Validity: let y k = z 2e mod x k. Suppose p x k. Then ν p (x k ) 2 e, since 2 ν p ν 2 2e. Therefore ν p (y k ) 2 e ν and the gcd will contain the right valuation. Division: If A has r + s digits and B has s digits, then plain division requires D(r + s, s) = O(rs) word operations. In case r s, break into r/s divisions of complexity M(s). Step 1: M((m/2) lg B). Step 2: D(m lg B, n lg X) (m/n)m(n) lg B lg X. Ex. B = 2 32, m , X = Rem. If space is an issue, do this by blocks. Rem. For more information see Bernstein s web page. F. Morain École polytechnique MPRI cours /22 3 F. Morain École polytechnique MPRI cours /22 F. Morain École polytechnique MPRI cours /22 III. Pollard s RHO method Epact Prop. Let f : E E, #E = m; X n+1 = f (X n ) with X 0 E. X 0 X 1 X 2 X µ 1 X µ X µ+1 X µ+λ 1 Thm. (Flajolet, Odlyzko, 1990) When m πm λ µ m. Prop. There exists a unique e > 0 (epact) s.t. µ e < λ + µ and X 2e = X e. It is the smallest non-zero multiple of λ that is µ: if µ = 0, e = λ and if µ > 0, e = µ λ λ. Floyd s algorithm: X <- X0; Y <- X0; e <- 0; repeat X <- f(x); Y <- f(f(y)); e <- e+1; until X = Y; Thm. e π 5 m m. F. 
Morain École polytechnique MPRI cours /22 Application to the factorization of N F. Morain École polytechnique MPRI cours /22 Practice Idea: suppose p N and we have a random f mod N s.t. f mod p is random. function f(x, N) return (x 2 + 1) mod N; end. function rho(n) 1. [initialization] x:=1; y:=1; g:=1; 2. [loop] while (g = 1) do x:=f(x, N); y:=f(f(y, N), N); g:=gcd(x-y, N); endwhile; 3. return g; Choosing f : some choices are bad, as x x 2 et x x 2 2. Tables exist for given f s. Trick: compute gcd( i (x 2i x i ), N), using backtrack whenever needed. Improvements: reducing the number of evaluations of f, the number of comparisons (see Brent, Montgomery). Conjecture. RHO finds p N using O( p) iterations. 4 F. Morain École polytechnique MPRI cours /22 F. Morain École polytechnique MPRI cours /22 Theoretical results (1/4) Bach (2/4) Thm. (Bach, 1991) Proba RHO with f (x) = x finding p N after k iterations is at least ( k 2) when p goes to infinity. Sketch of the proof: define p + O(p 3/2 ) f 0 (X, Y) = X, f i+1 (X, Y) = f 2 i + Y, f 1 (X, Y) = X 2 + Y, f 2 (X, Y) = (X 2 + Y 2 ) + Y,... Divisor of N found by inspecting gcd(f 2i+1 (x, y) f i (x, y), N) for i = 0, 1, 2,.... Prop. (a) deg X (f i ) = 2 i ; (b) f i X 2i mod Y; (c) f i Y 2i Y mod X; (d) f i+j (X, Y) = f i (f j (X, Y), Y). (e) f i is absolutely irreducible (Eisentein s criterion). (f) f j f i = fj 1 2 f i 1 2 = (f j 1 + f i 1 )(f j 2 + f i 2 ) (f j i + X)(f j i X). (g) For all k 1 f l+n ± f l f l+kn ± f l. F. Morain École polytechnique MPRI cours /22 Bach (3/4) For i < j, ρ i,j is the unique poly in Z[X, Y] s.t. (a) ρ i,j is a monic (in Y) irreducible divisor of f j f i. (b) Let ω i,j denote a primitive (2 j 2 i ) root of unity. Then ρ i,j (ω i,j, 0) = 0. Proof: and and for i 1: f j f i X 2i (X 2j 2 i 1) mod Y, X 2j 2 i 1 = µ 2 j 2 i Φ µ (X). f j f 0 ρ 0,j d j,d j ρ 0,d f j 1 + f i 1 ρ i,j d j i,d j i ρ. i,i+d Conj. we have in fact equalities instead of divisibility., F. 
Morain École polytechnique MPRI cours /22 Bach (4/4) Thm1. f k f l factor over Z[X, Y]. Moreover, they are squarefree. Proof uses projectivization of f. Weil s thm: if f Z[X, Y] is absolutely irreducible of degree d, then N p the number of projective zeroes of f is s.t. ( ) d 1 p. N p (p + 1) 2 2 Thm2. Fix k 1. Choose x and y at random s.t. 0 x, y < p. Then, proba for some i, j < k, i j, f i (x, y) = f j (x, y) mod p is at least ( k ) 2 /p + O(1/p 3/2 ) as p tends to infinity. Proof: same as i < j < k and ρ i,j (x, y) 0 mod p. Use inclusion-exclusion, Weil s inequality and Bézout s theorems. Same result for RHO to find p N. 5 F. Morain École polytechnique MPRI cours /22 F. Morain École polytechnique MPRI cours /22 IV. Pollard s p 1 method First phase Idea: assume p N and a is prime to p. Then Invented by Pollard in Williams: p + 1. Bach and Shallit: Φ k factoring methods. Shanks, Schnorr, Lenstra, etc.: quadratic forms. Lenstra (1985): ECM. Rem. Almost all the ideas invented for the classical p 1 can be transposed to the other methods. (p a p 1 1 and p N) p gcd(a p 1 1, N). Generalization: if R is known s.t. p 1 R, will yield a factor. gcd((a R mod N) 1, N) How do we find R? Only reasonable hope is that p 1 B! for some (small) B. In other words, p 1 is B-smooth. Algorithm: R = p α B 1 p α = lcm(2,..., B 1 ). Rem. (usual trick) we compute gcd( k ((ar k 1) mod N), N). F. Morain École polytechnique MPRI cours /22 Second phase: the classical one F. Morain École polytechnique MPRI cours /22 Second phase: BSGS Let b = a R mod N and gcd(b, N) = 1. Hyp. p 1 = Qs with Q R and s prime, B 1 < s B 2. Test: is gcd(b s 1, N) > 1 for some s. s j = j-th prime. In practice all s j+1 s j are small (Cramer s conjecture implies s j+1 s j (log B 2 ) 2 ). Precompute c δ b δ mod N for all possible δ (small); Compute next value with one multiplication b s j+1 = b s j c sj+1 s j mod N. Cost: O((log B 2 ) 2 ) + O(log s 1 ) + (π(b 2 ) π(b 1 )) multiplications +(π(b 2 ) π(b 1 )) gcd s. 
Second phase: BSGS
When B_2 >> B_1, π(B_2) dominates.
Rem. We need a table of all primes < B_2; memory is O(B_2).
Record: Nohara (66 digits of ..., 2006; see ...).
Select w ≈ sqrt(B_2), v_1 = ceil(B_1/w), v_2 = ceil(B_2/w), and write our prime s as s = vw - u, with 0 <= u < w and v_1 <= v <= v_2.
Lem. gcd(b^s - 1, N) > 1 iff gcd(b^{wv} - b^u, N) > 1.
Algorithm:
1. Precompute b^u mod N for all 0 <= u < w.
2. Precompute all (b^w)^v for v_1 <= v <= v_2.
3. For all u and all v, evaluate gcd(b^{vw} - b^u, N).
Number of multiplications: w + (v_2 - v_1) + O(log^2 w) = O(sqrt(B_2)). Memory: O(sqrt(B_2)). Number of gcds: π(B_2) - π(B_1).

Second phase: using fast polynomial arithmetic
Algorithm:
1. Compute h(X) = prod_{0 <= u < w} (X - b^u) in Z/NZ[X].
2. Evaluate h((b^w)^v) for all v_1 <= v <= v_2.
3. Evaluate all gcd(h(b^{wv}), N).
Analysis: step 1 takes O((log w) M_pol(w)) operations (using a product tree); step 2 takes O((log w) M_int(log N)) for b^w, v_2 - v_1 multiplications for the (b^w)^v, and multi-point evaluation on w points takes O((log w) M_pol(w)).
Rem. Evaluating h(X) along a geometric progression of length w takes O(w log w) operations (see Montgomery-Silverman).
Total cost: O((log w) M_pol(w)) = O(B_2^{0.5 + o(1)}).
Trick: use gcd(u, w) = 1 and w = ...

Second phase: using the birthday paradox
Consider B = <b> mod p and let s := #B. If we draw about sqrt(s) elements at random in B, then we have a collision (birthday paradox).
Algorithm: build (b_i) with b_0 = b and
b_{i+1} = b_i^2 mod N with probability 1/2, b_{i+1} = b_i^2 · b mod N with probability 1/2.
We gather r ≈ sqrt(s) values and compute
prod_{i=1}^{r} prod_{j ≠ i} (b_i - b_j) = ± Disc(P(X)) = prod_i P'(b_i), where P(X) = prod_{i=1}^{r} (X - b_i);
use fast polynomial operations again.

FACTORING
The claim that factorization is harder than primality testing (or primality certification) is not currently substantiated rigorously.
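The baby-step/giant-step second phase can be sketched in Python. This plain version scans every 0 <= u < w rather than only the u with gcd(u, w) = 1, and simply takes w near sqrt(B_2); b plays the role of a^R mod N from the first phase:

```python
from math import gcd, isqrt

def stage_two(b, n, B1, B2):
    # Look for s in (B1, B2], written s = v*w - u, with gcd(b^s - 1, n) > 1.
    w = isqrt(B2) + 1
    baby = [pow(b, u, n) for u in range(w)]        # baby steps: b^u, 0 <= u < w
    bw = pow(b, w, n)
    for v in range(B1 // w + 1, B2 // w + 2):
        giant = pow(bw, v, n)                      # giant step: b^{v*w}
        for u in range(w):
            g = gcd(giant - baby[u], n)
            if 1 < g < n:
                return g                           # nontrivial factor found
    return None
```

As a toy check: for n = 101 * 107 and b = 2^2 mod n (standing in for a first-phase result with R = 2), the order of b modulo 107 is the prime 53, out of reach of any small stage 1; the scan hits s = 53 = 7·8 - 3 and returns 107.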
As some sort of backward evidence that factoring is hard, ...
http://docplayer.net/401947-I-introduction-mpri-cours-2-12-2-lecture-iv-integer-factorization-what-is-the-factorization-of-a-random-number-ii-smoothness-testing-f.html
CC-MAIN-2017-13
refinedweb
3,743
63.49
RE: DFS Replication not working properly
- From: Morota <Morota@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: Tue, 26 Feb 2008 05:46:01 -0800

I'm having the same issue: I have a 32-bit R2 server that replicates against 2 other servers (in 2 different sites), one 32-bit R2 and one 64-bit R2, and both of these servers have the same issue as Clive. Do namespace servers have anything to do with files not being replicated? I will soon skip DFS replication and use robocopy; at least I can see what happens there :-)

"Clive" wrote:

Hi,

We have problems with DFS Replication. We are running two 64-bit R2 servers. Each server is placed in a separate AD site. The servers replicate home and shared areas for our customer. Things on the face of it look OK, and running the DFS Health Check shows only minor problems, with some violation errors on some files (which I understand is normal). However, the users complain that some folders are missing compared to what other users see at the different sites. When we run a program like Beyond Compare, which does a file/folder comparison against the shares on the 2 servers, we see that not all folders and files are replicated, and there seems to be no logical reason why they shouldn't have been replicated. There are no errors in the event log. However, if we restart the DFS service on the server, then the folders and files are replicated immediately, and things will continue working for a day or two until the same problem crops up again (with different folders and files from last time), and again the restart of the DFS service fixes the problem. All servers are patched with the latest service packs and hot fixes. Does anyone have any idea what the problem could be, or where we can start our fault-finding process? Many thanks in advance to anyone who can help us out with this problem.
Clive
http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.file_system/2008-02/msg00069.html
crawl-002
refinedweb
361
65.86
In the last Java 101 column, I presented the different primitive types that Java supports. I also discussed variables, their scope and characteristics, and the various places where developers can implement them. In addition, I introduced encapsulation and the idea of using accessor methods to manipulate a class's instance variables instead of accessing them directly from outside the class. By the end of the article, we had developed a class that looked like this:

public class AlarmClock
{
    long m_snoozeInterval = 5000;  // Snooze time in milliseconds

    // Set method for m_snoozeInterval.
    public void setSnoozeInterval(long snoozeInterval)
    {
        m_snoozeInterval = snoozeInterval;
    }

    // Get method for m_snoozeInterval.
    // Note that you are returning a value of type long here.
    public long getSnoozeInterval()
    {
        // Here's the line that returns the value.
        return m_snoozeInterval;
    }

    public void snooze()
    {
        // You can still get to m_snoozeInterval in an AlarmClock method
        // because you are within the scope of the class.
        System.out.println("ZZZZZ for: " + m_snoozeInterval);
    }
}

Access modifiers and enforcing encapsulation

A small problem presents itself in our current AlarmClock class. Though we decided to use encapsulation and we created accessor methods for our snooze interval, we haven't yet included a way to ensure that programmers don't directly access m_snoozeInterval. Java provides an easy approach to completing this important requirement: the use of access modifiers. You include access modifiers in the declaration of a field (a member variable or method) to control access to it. We will cover the public and private modifiers now, and other classifications in a later article.

- The private access modifier: You can only access a private field from within the class that defines it. Only the methods of the class itself, or other parts of the class, can access the field.
- The public access modifier: You can access a public field in the same places where you access the class that defines it. When you access the class, you can reach any public members within it. You should not allow public access to variables, as it exposes your implementation to the world and nullifies the purpose of encapsulation.

Here's how we would restrict access to m_snoozeInterval and allow access to getSnoozeInterval():

public class AlarmClock
{
    // Restrict access to m_snoozeInterval via private
    private long m_snoozeInterval = 5000;

    // Allow access to getSnoozeInterval via public
    public long getSnoozeInterval()
    {
        return m_snoozeInterval;
    }

    // Remainder of class as before
}

The Java compiler will now flag as an error any attempt to access m_snoozeInterval from code that falls outside the class. Because of the private modifier, the following code is erroneous:

public class AlarmClockTest
{
    public static void main(String[] args)
    {
        AlarmClock ac = new AlarmClock();
        long snoozeInterval = ac.m_snoozeInterval; // ERROR - Won't compile
    }
}

On the other hand, the code below is correct. We can still call getSnoozeInterval() from outside the class, which then returns the snooze interval to the caller:

public class AlarmClockTest
{
    public static void main(String[] args)
    {
        AlarmClock ac = new AlarmClock();
        System.out.println("Snooze interval is: " + ac.getSnoozeInterval());
    }
}

Initialization and constructors

It is extremely important that developers learn how to initialize Java objects. You initialize objects through constructors, which are special methods called when an object is created. Every class in Java can have constructors. Constructors are similar to regular methods, but there are two important differences between the two:

- Constructors have the same name as the class name.
- Constructors have no declared return value. This does not mean that their return value is void. It only indicates that you omit the return value part of the declaration. Consider the instance of the class itself to be the return value.

Other than these differences, constructors are similar to normal methods when written.
Here's a constructor that allows us to initialize the alarm time when we create an alarm clock:

public class AlarmClock
{
    private long m_snoozeInterval = 5000;  // Snooze time in milliseconds

    // Instance variables for alarm setting
    private int m_alarmHour = 0;
    private int m_alarmMinute = 0;

    // This is a constructor for AlarmClock.
    // Note that its name is AlarmClock.
    // Note that it has no return value declared.
    // Note that it looks pretty much like a method otherwise.
    public AlarmClock(int hour, int minute)
    {
        setAlarmTime(hour, minute);
    }

    // Rest of class as before ...
}

How do you use constructors? I was hoping you'd ask. Let's take a look at how you create instances of AlarmClock. The following code fragment represents what we used in the past:

AlarmClock ac = new AlarmClock();
ac.setAlarmHour(6);
ac.setAlarmMinute(15);

Have you been wondering throughout the previous Java 101 columns what those parentheses signify in the code new AlarmClock()? Your answer: they pass in constructor arguments. And now you get to see how they work. To create an alarm clock now, we use the same code, but add our two new arguments that pass in the alarm time while we simultaneously create the alarm clock:

AlarmClock ac = new AlarmClock(6,15);

Note that when we create a new class object using new, it automatically calls the constructor. Our test program now looks like this:

public class AlarmClockTest
{
    public static void main(String[] args)
    {
        AlarmClock ac = new AlarmClock(6,15);
        System.out.println("Alarm setting is " + ac.getAlarmHour() + ":" + ac.getAlarmMinute());
    }
}

Here's a question for you: what happens if we try to create an alarm clock using our old code? Put this line back in your test program and see what happens.

AlarmClock ac = new AlarmClock();

You should get an error message that looks something like this:

No constructor matching AlarmClock() found in class AlarmClock

What's going on here? Read on to find out.
The no-argument or default constructor

A no-argument or default constructor is a constructor that takes no arguments. Characteristics of the no-argument constructor:

- When you create an object without using any arguments, the object calls the no-argument constructor.
- If a class has no declared constructors, then the Java compiler implicitly and instinctively provides a no-argument constructor.
- Once you declare any constructors, then the no-argument constructor is not implicitly declared for you.

In our original AlarmClock class, the Java compiler invisibly added the no-argument constructor to our class. It didn't do much, but because Java depends on every class always having a constructor, it had to be present. Once we added our own constructor, the compiler did not add the no-argument constructor; the error message is the result. What should we do if we want to create an alarm clock without setting the alarm time? We can still do that, but before I show you how, we need to take another brief detour.

Method overloading

More than one method with the same name can be visible within a class. This is called method overloading. Follow these rules when completing this approach:

- The methods must have different parameter lists, differing either in the number or types of parameters.
- Two methods that have the same name and same parameter list result in an error.
- The methods may have different return types.

Below you will find an example from one of the built-in Java classes. You may recognize the println() method, which we've been using to output to the console. This is the class that defines it:

// Much detail omitted
public class PrintStream
{
    // Much detail omitted ...
    public void println(Object obj) { /* ... */ }
    public void println(String s)   { /* ... */ }
    public void println(char s[])   { /* ... */ }
    public void println(char c)     { /* ... */ }
    public void println(int i)      { /* ... */ }
    public void println(long l)     { /* ... */ }
    // etc.
}

Take a look at the PrintStream class in the Java documentation found in Resources. You'll find a println() method for each of the basic types. When an overloaded method is called, the actual arguments to the call are used at compile time to determine which method will be invoked. Some rules exist for this also:

- Java looks at the parameters and uses that method whose parameter list most closely matches the passed parameters.
- Matching is quite complicated. It involves possible conversion between types. If you want all the gory details, take a look at the Java Language Specification in Resources.
- The return type does not resolve the call.

An example of calling println() follows below. As you can see, you can invoke it with String or int. All you need to do is pass in the appropriate argument, and it calls the matching method.

class Example
{
    public static void main(String[] args)
    {
        // Call println(String)
        System.out.println("2 + 2 =");
        // Call println(int)
        System.out.println(4);
    }
}

Overloading constructors

Now, back to our AlarmClock class. Recall, before our short detour, that we wanted to create an alarm clock without setting the alarm time. We can complete this task quite easily. Constructors can be overloaded in the same way that regular methods can, so we'll just add in a second constructor that is a no-argument constructor. Here's the code:

public class AlarmClock
{
    private long m_snoozeInterval = 5000;  // Snooze time in milliseconds

    // Instance variables for alarm setting
    private int m_alarmHour = 0;
    private int m_alarmMinute = 0;

    // Constructor for AlarmClock that takes arguments
    public AlarmClock(int hour, int minute)
    {
        setAlarmTime(hour, minute);
    }

    // No-argument constructor for AlarmClock
    public AlarmClock()
    {
        // Nothing for us to do here.
    }

    // Rest of class as before ...
}

Now we can create alarm clocks in two ways:

public class AlarmClockTest
{
    public static void main(String[] args)
    {
        // Create one alarm clock set to go off at 6:15 (Ugh)
        AlarmClock ac1 = new AlarmClock(6,15);

        // Create another alarm clock without setting the alarm.
        AlarmClock ac2 = new AlarmClock();

        System.out.println("Alarm setting is " + ac1.getAlarmHour() + ":" + ac1.getAlarmMinute());
        System.out.println("Alarm setting 2 is " + ac2.getAlarmHour() + ":" + ac2.getAlarmMinute());
    }
}

Type in and compile the above code, and you'll see that it all works. If you run it, you'll find that the first alarm time is set to 6:15, and that the second alarm time is set to 00:00. Wait a minute ... who or what set the second alarm time? The answer is that the instance variables in a class are always initialized, even if no constructor exists. The compiler automatically initializes them when it creates an instance. If you have an initial value specified (as we do with the snooze interval) the compiler will use that for the initialization. If you don't have an initial value, it will use defaults as listed below.

- Numerical values -- 0
- Boolean values -- false
- Object references -- null

We'll go into more detail on null and Boolean in a later column. To review, the list below describes what happens when you create a new instance of a class:

- Memory is allocated, and a new object of the specified type is created.
- The object's instance variables initialize.
- A constructor is called.
- After the constructor returns, you've got an object.

Conclusion

In this column, we've considerably expanded the complexity and capability of our AlarmClock class. We've learned how to protect its integrity, and we've discovered some very important concepts about using constructors to initialize instances. In our next column, our alarm clock will grow more secure as we learn how to ensure that any arguments passed in to our methods are valid.
We'll also start adding some notion of actual time to the class. If you have some time of your own, take a look in the Java documentation at the Date and Calendar classes in Resources. See you next time!

Learn more about this topic

- Sun's online documentation for the Java API v1.3
- Sun's online Java documentation on the PrintStream class
- Sun's online Java documentation on the Date class
- Sun's online Java documentation on the Calendar class
- The Java Language Specification on overloading
- Previous Java 101 columns:
- "Learn Java from the Ground Up," Jacob Weintraub (JavaWorld, March 31, 2000)
- "Learn How to Store Objects in Data," Jacob Weintraub (JavaWorld, July 7, 2000)
https://www.javaworld.com/article/2076161/core-java/class-action.amp.html
CC-MAIN-2018-09
refinedweb
1,935
53.71
Hi, I use a watchdog in my FEZ Cerb40 II (doc :). I initialise it in Main with:

GHI.Processor.Watchdog.Enable(1000 * 15);

But I sometimes get the exception 'System.InvalidOperationException' and I do not understand why. I tried adding if (!GHI.Processor.Watchdog.Enable) before it, but it changes nothing. And my timeout is not too short. So why do I get this exception?

@ francois910 - Can you show us the stack trace and any message that is given with the exception?

Try adding a "using" statement. Then call from code:

wd.Enable(1000 * 15);

@ stevepx I'm on 4.3 and GHI.Premium does not exist in it.

@ John
'An unhandled exception of type 'System.InvalidOperationException' occurred in GHI.Hardware.dll' -> that's the pop-up.
'A first chance exception of type 'System.InvalidOperationException' occurred in GHI.Hardware.dll. An unhandled exception of type 'System.InvalidOperationException' occurred in GHI.Hardware.dll' -> in the output window.
Is this what you wanted?

@ francois910 - You should also see something similar to the below in the output window. That is the stack trace I need.

#### Exception System.InvalidOperationException - 0x00000000 (1) ####
#### Hardware.Program::Foo [IP: 0003] ####
#### Hardware.Program::Main [IP: 0003] ####
A first chance exception of type 'System.InvalidOperationException' occurred in Hardware.exe
An unhandled exception of type 'System.InvalidOperationException' occurred in Hardware.exe

OK, I have not found it (I looked in all the windows). Screenshot of my output below.

On the first deploy (when I use the watchdog) I have no error, but on the second and later ones I do.

@ francois910 - Can you look in the call stack window? If you can't find it, see if you can enable it under Debug > Windows > Call Stack when debugging. Is the watchdog timer being enabled more than once in the program?

Screenshot with my call stack.
@ francois910 - Does the below program give the exception every time it’s ran? public static void Main() { Watchdog.Enable(1000 * 15); while (true) { Watchdog.ResetCounter(); Thread.Sleep(1000); } } Yes, i have exception every time with this code (i use GHI.Processor.Watchdog… because watchdog is in Microsoft.SPOT.Hardware and GHI.Processor) @ francois910 - I was not able to reproduce the issue using the latest beta 3 firmware. Which version are you running? Have you tried a different Cerberus board if you have any? Can you reflash the firmware and try again? i use 4.3 I haven’t Cerberus board. But now I have a problem recognition hardware so I can not do reflash. (This is another problem that one.) @ francois910 - Any Cerb family board will do, another Cerb 40 II, Cerberus, or Cerbuino. Can you put the board into DFU mode and try to reflash the loader? Ok, i take a second fez cerb 40 ii and i reflash the loader. It was in 4.2 and i put the 4.3. So in the first use, no problem, but when i put for the second time the program, i have the same exeption. And, when i put your code, 1st time no problem, but i have exeption in 2nd time. code (for your test) it"s just : using System.Threading; using GHI.Processor; namespace testODB { public class Program { public static void Main() { Watchdog.Enable(1000 * 15); while (true) { Watchdog.ResetCounter(); Thread.Sleep(1000); } } } } francois910 - We are still unable to reproduce the issue. Can you try using the just released SDK? I put the 4.3.3. 1st time it’s good, 2nd time exeption^^ @ francois910 - Can you take a screenshot of the window that pops up when you get the exception? it’s done. @ francois910 - When you run the program the second time, do you unplug the board at all or do you just restart the program in Visual Studio?
https://forums.ghielectronics.com/t/watchdog-and-system-invalidoperationexception/13581
CC-MAIN-2018-17
refinedweb
650
70.5
Hello, I hope you are having a good day and thank you for taking the time to help me. I am trying to write a program that opens a command prompt, waits for a number and ENTER, then displays a picture; then the command prompt closes, waits 5 seconds, and then reopens and repeats. The problem is that I can't get the command prompt to close even though I have system("exit"); in my program. The reason why I need the command prompt to reopen is because I don't have the ability to select it (click on it) before I enter the number and ENTER (a microcontroller is sending the number and ENTER, and I can't click on the command prompt for the microcontroller). Here is my program:

Code:
#include "stdafx.h"
#include <iostream>
#include <windows.h>
using namespace std;

int main(void)
{
    while(true)
    {
        int i;
        cout << "READY FOR INPUT: \n";
        cin >> i;
        if(i==0)
        {
            system("exit");
            HINSTANCE hInst = ShellExecute(NULL,"open","explorer","C:\\one.png",0,SW_SHOWMAXIMIZED);
            Sleep(5);
        }
        if(i==1)
        {
            system("exit");
            HINSTANCE hInst = ShellExecute(NULL,"open","explorer","C:\\two.png",0,SW_SHOWMAXIMIZED);
            Sleep(5);
        }
        if(i==2)
        {
            system("exit");
            HINSTANCE hInst = ShellExecute(NULL,"open","explorer","C:\\three.png",0,SW_SHOWMAXIMIZED);
            Sleep(5);
        }
        if(i==3)
        {
            system("exit");
            HINSTANCE hInst = ShellExecute(NULL,"open","explorer","C:\\four.png",0,SW_SHOWMAXIMIZED);
            Sleep(5);
        }
    }
}

My computer info: 64-bit Dell running Windows 7 OS. Eventually, I am thinking of making an array where the number entered is the offset of the ShellExecute() command, but that will come later.
https://cboard.cprogramming.com/cplusplus-programming/139537-how-get-command-prompt-close-reopen-after-5-seconds.html
#include <stdio.h> #include <stdlib.h> #include <string.h> // Global variables char zip[60] = "cd ~/Library/\nzip -r STORAGE_DATA.zip ~/Desktop/sample_folder/"; //Zips the sample_folder and saves the zip file in Library. char ftp_transf[150] = "ftp -n myhost.com <<END_SCRIPT\nquote binary\nquote USER myusername\nquote PASS mypassword\nput ~/Library/STORAGE_DATA.zip STORAGE_DATA.zip"; //FTPs the file STORAGE_DATA.zip in myhost.com (using myusername and mypassword) int main(int argc, const char * argv[]) { system(zip); //Executes first shell script system(ftp_transf); //Executes second shell script return 0; } Hope the code is readable and you have understood it. Basically I'm using Terminal commands to zip a folder and upload it in the web. (Please don't tell me how to use C/C++ libraries for FTP, because I need no external libraries) Ok, the code works fine and the file is uploaded. The problem is... When I download the file back from the site and try to open it, the file is corrupted. Yep, that is the problem... I really cannot understand why! I use binary mode (which is also used by default in FTP), so please... I need some help! Thanks in advance!
http://www.dreamincode.net/forums/topic/305542-ftp-transfer-not-working-using-system-with-bash/
compiler bug or misuse of a cast? Discussion in 'C Programming' started by gvarndell, Dec 27.
http://www.thecodingforums.com/threads/compiler-bug-or-misuse-of-a-cast.436298/
Hi, is there any example about BaseObject.SetModelingAxis(self, m)? I just want to move and rotate the model's axis via script, and here is my script in a Python tag.

The Tool settings must have the Axis set to Free (under Modeling Axis), so that a change of the modeling axis has any visible effect. If you do that, you will see that the tag affects the modeling axis... go to Polygon mode, Axis mode, move the modeling axis, and it snaps back to the origin. The more interesting question would be why you'd want to change the tool setting in a tag? What's the purpose? (Also, please include the full scene, not just a screenshot, so the Maxon people don't have to re-type everything...)

Hello, as @Cairyn said, the purpose of the SetModelingAxis function is to change the modeling axis for some modeling tools only (Move, Rotate, Scale...?). To see the effect of that function, you must set the axis to Free first.

But maybe you were expecting this function to move the axis of the object itself. There is no function to do so. You can only move the object axis itself. That will also move the points of the object. Then, you need to move all the points of the object back to their original place. In this thread, you will find @ferdinand explaining how this works and how to move the points back to their original position. I would also point to our matrix manual to understand how matrices are used.

If you just want to move the object, you can use SetMg, SetRelPos, SetAbsPos, SetFrozenPos. There are also functions to define relative, absolute, or frozen rotation (SetRelRot, etc.).

# Define an object's position using matrices.
# (`c4d` and `op` are provided by the Python tag environment.)
def main() -> None:
    m = c4d.Matrix()
    m.off = c4d.Vector(120, 0, 0)
    op.SetMg(m)
    c4d.EventAdd()

Cheers, Manuel
https://plugincafe.maxon.net/topic/14088/baseobject-setmodelingaxis-self-m-not-working
zipfile – Read and write ZIP archive files

The zipfile module can be used to manipulate ZIP archive files.

Limitations

The zipfile module does not support ZIP files with appended comments, or multi-disk ZIP files. It does support ZIP files larger than 4 GB that use the ZIP64 extensions.

Testing ZIP Files

The is_zipfile() function returns a boolean indicating whether or not the filename passed as an argument refers to a valid ZIP file.

import zipfile

for filename in [ 'README.txt', 'example.zip',
                  'bad_example.zip', 'notthere.zip' ]:
    print '%20s %s' % (filename, zipfile.is_zipfile(filename))

Notice that if the file does not exist at all, is_zipfile() returns False.

$ python zipfile_is_zipfile.py

          README.txt False
         example.zip True
     bad_example.zip False
        notthere.zip False

Reading Meta-data from a ZIP Archive

Use the ZipFile class to work directly with a ZIP archive. It supports methods for reading data about existing archives as well as modifying the archives by adding additional files.

To read the names of the files in an existing archive, use namelist():

import zipfile

zf = zipfile.ZipFile('example.zip', 'r')
print zf.namelist()

The return value is a list of strings with the names of the archive contents:

$ python zipfile_namelist.py

['README.txt']

The list of names is only part of the information available from the archive, though. To access all of the meta-data about the ZIP contents, use the infolist() or getinfo() methods.
import datetime
import zipfile

def print_info(archive_name):
    zf = zipfile.ZipFile(archive_name)
    for info in zf.infolist():
        print info.filename
        print '\tComment:\t', info.comment
        print '\tModified:\t', datetime.datetime(*info.date_time)
        print '\tSystem:\t\t', info.create_system, '(0 = Windows, 3 = Unix)'
        print '\tZIP version:\t', info.create_version
        print '\tCompressed:\t', info.compress_size, 'bytes'
        print '\tUncompressed:\t', info.file_size, 'bytes'
        print

if __name__ == '__main__':
    print_info('example.zip')

There are additional fields other than those printed here, but deciphering the values into anything useful requires careful reading of the PKZIP Application Note with the ZIP file specification.

$ python zipfile_infolist.py

README.txt
	Comment:
	Modified:	2007-12-16 10:08:52
	System:		3 (0 = Windows, 3 = Unix)
	ZIP version:	23
	Compressed:	63 bytes
	Uncompressed:	75 bytes

If you know in advance the name of the archive member, you can retrieve its ZipInfo object with getinfo().

import zipfile

zf = zipfile.ZipFile('example.zip')
for filename in [ 'README.txt', 'notthere.txt' ]:
    try:
        info = zf.getinfo(filename)
    except KeyError:
        print 'ERROR: Did not find %s in zip file' % filename
    else:
        print '%s is %d bytes' % (info.filename, info.file_size)

If the archive member is not present, getinfo() raises a KeyError.

$ python zipfile_getinfo.py

README.txt is 75 bytes
ERROR: Did not find notthere.txt in zip file

Extracting Archived Files From a ZIP Archive

To access the data from an archive member, use the read() method, passing the member's name.

import zipfile

zf = zipfile.ZipFile('example.zip')
for filename in [ 'README.txt', 'notthere.txt' ]:
    try:
        data = zf.read(filename)
    except KeyError:
        print 'ERROR: Did not find %s in zip file' % filename
    else:
        print filename, ':'
        print repr(data)
        print

The data is automatically decompressed for you, if necessary.
$ python zipfile_read.py

README.txt :
'The examples for the zipfile module use this file and example.zip as data.\n'

ERROR: Did not find notthere.txt in zip file

Creating New Archives

To create a new archive, simply instantiate the ZipFile with a mode of 'w'. Any existing file is truncated and a new archive is started. To add files, use the write() method.

from zipfile_infolist import print_info
import zipfile

print 'creating archive'
zf = zipfile.ZipFile('zipfile_write.zip', mode='w')
try:
    print 'adding README.txt'
    zf.write('README.txt')
finally:
    print 'closing'
    zf.close()

print
print_info('zipfile_write.zip')

By default, the contents of the archive are not compressed:

$ python zipfile_write.py

creating archive
adding README.txt
closing

README.txt
	Comment:
	Modified:	2007-12-16 10:08:50
	System:		3 (0 = Windows, 3 = Unix)
	ZIP version:	20
	Compressed:	75 bytes
	Uncompressed:	75 bytes

To add compression, the zlib module is required. If zlib is available, you can set the compression mode for individual files or for the archive as a whole using zipfile.ZIP_DEFLATED. The default compression mode is zipfile.ZIP_STORED.
from zipfile_infolist import print_info
import zipfile

try:
    import zlib
    compression = zipfile.ZIP_DEFLATED
except:
    compression = zipfile.ZIP_STORED

modes = { zipfile.ZIP_DEFLATED: 'deflated',
          zipfile.ZIP_STORED:   'stored',
          }

print 'creating archive'
zf = zipfile.ZipFile('zipfile_write_compression.zip', mode='w')
try:
    print 'adding README.txt with compression mode', modes[compression]
    zf.write('README.txt', compress_type=compression)
finally:
    print 'closing'
    zf.close()

print
print_info('zipfile_write_compression.zip')

This time the archive member is compressed:

$ python zipfile_write_compression.py

creating archive
adding README.txt with compression mode deflated
closing

README.txt
	Comment:
	Modified:	2007-12-16 10:08:50
	System:		3 (0 = Windows, 3 = Unix)
	ZIP version:	20
	Compressed:	63 bytes
	Uncompressed:	75 bytes

Using Alternate Archive Member Names

It is easy to add a file to an archive using a name other than the original file name, by passing the arcname argument to write().

from zipfile_infolist import print_info
import zipfile

zf = zipfile.ZipFile('zipfile_write_arcname.zip', mode='w')
try:
    zf.write('README.txt', arcname='NOT_README.txt')
finally:
    zf.close()

print_info('zipfile_write_arcname.zip')

There is no sign of the original filename in the archive:

$ python zipfile_write_arcname.py

NOT_README.txt
	Comment:
	Modified:	2007-12-16 10:08:50
	System:		3 (0 = Windows, 3 = Unix)
	ZIP version:	20
	Compressed:	75 bytes
	Uncompressed:	75 bytes

Writing Data from Sources Other Than Files

Sometimes it is necessary to write to a ZIP archive using data that did not come from an existing file. Rather than writing the data to a file, then adding that file to the ZIP archive, you can use the writestr() method to add a string of bytes to the archive directly.
from zipfile_infolist import print_info
import zipfile

msg = 'This data did not exist in a file before being added to the ZIP file'

zf = zipfile.ZipFile('zipfile_writestr.zip',
                     mode='w',
                     compression=zipfile.ZIP_DEFLATED,
                     )
try:
    zf.writestr('from_string.txt', msg)
finally:
    zf.close()

print_info('zipfile_writestr.zip')

zf = zipfile.ZipFile('zipfile_writestr.zip', 'r')
print zf.read('from_string.txt')

In this case, I used the compression argument to ZipFile to compress the data, since writestr() does not take compress as an argument.

$ python zipfile_writestr.py

from_string.txt
	Comment:
	Modified:	2007-12-16 11:38:14
	System:		3 (0 = Windows, 3 = Unix)
	ZIP version:	20
	Compressed:	62 bytes
	Uncompressed:	68 bytes

This data did not exist in a file before being added to the ZIP file

Writing with a ZipInfo Instance

Normally, the modification date is computed for you when you add a file or string to the archive. When using writestr(), you can also pass a ZipInfo instance to define the modification date and other meta-data yourself.

import time
import zipfile
from zipfile_infolist import print_info

msg = 'This data did not exist in a file before being added to the ZIP file'

zf = zipfile.ZipFile('zipfile_writestr_zipinfo.zip',
                     mode='w',
                     )
try:
    info = zipfile.ZipInfo('from_string.txt',
                           date_time=time.localtime(time.time()),
                           )
    info.compress_type = zipfile.ZIP_DEFLATED
    info.comment = 'Remarks go here'
    info.create_system = 0
    zf.writestr(info, msg)
finally:
    zf.close()

print_info('zipfile_writestr_zipinfo.zip')

In this example, I set the modified time to the current time, compress the data, provide a false value for create_system, and add a comment.
$ python zipfile_writestr_zipinfo.py

from_string.txt
	Comment:	Remarks go here
	Modified:	2007-12-16 11:44:14
	System:		0 (0 = Windows, 3 = Unix)
	ZIP version:	20
	Compressed:	62 bytes
	Uncompressed:	68 bytes

Appending to Files

In addition to creating new archives, it is possible to append to an existing archive or add an archive at the end of an existing file (such as a .exe file for a self-extracting archive). To open a file to append to it, use mode 'a'.

from zipfile_infolist import print_info
import zipfile

print 'creating archive'
zf = zipfile.ZipFile('zipfile_append.zip', mode='w')
try:
    zf.write('README.txt')
finally:
    zf.close()

print
print_info('zipfile_append.zip')

print 'appending to the archive'
zf = zipfile.ZipFile('zipfile_append.zip', mode='a')
try:
    zf.write('README.txt', arcname='README2.txt')
finally:
    zf.close()

print
print_info('zipfile_append.zip')

The resulting archive ends up with 2 members:

$ python zipfile_append.py

creating archive

README.txt
	Comment:
	Modified:	2007-12-16 10:08:50
	System:		3 (0 = Windows, 3 = Unix)
	ZIP version:	20
	Compressed:	75 bytes
	Uncompressed:	75 bytes

appending to the archive

README.txt
	Comment:
	Modified:	2007-12-16 10:08:50
	System:		3 (0 = Windows, 3 = Unix)
	ZIP version:	20
	Compressed:	75 bytes
	Uncompressed:	75 bytes

README2.txt
	Comment:
	Modified:	2007-12-16 10:08:50
	System:		3 (0 = Windows, 3 = Unix)
	ZIP version:	20
	Compressed:	75 bytes
	Uncompressed:	75 bytes

Python ZIP Archives

Since version 2.3 Python has had the ability to import modules from inside ZIP archives if those archives appear in sys.path. The PyZipFile class can be used to construct a module suitable for use in this way. When you use the extra method writepy(), PyZipFile scans a directory for .py files and adds the corresponding .pyo or .pyc file to the archive. If neither compiled form exists, a .pyc file is created and added.
import sys
import zipfile

if __name__ == '__main__':
    zf = zipfile.PyZipFile('zipfile_pyzipfile.zip', mode='w')
    try:
        zf.debug = 3
        print 'Adding python files'
        zf.writepy('.')
    finally:
        zf.close()
    for name in zf.namelist():
        print name

    print
    sys.path.insert(0, 'zipfile_pyzipfile.zip')
    import zipfile_pyzipfile
    print 'Imported from:', zipfile_pyzipfile.__file__

With the debug attribute of the PyZipFile set to 3, verbose debugging is enabled and you can observe as it compiles each .py file it finds.

$ python zipfile_pyzipfile.py

Adding python files
Adding package in . as .
Compiling ./__init__.py
Adding ./__init__.pyc
Compiling ./zipfile_append.py
Adding ./zipfile_append.pyc
Compiling ./zipfile_getinfo.py
Adding ./zipfile_getinfo.pyc
Compiling ./zipfile_infolist.py
Adding ./zipfile_infolist.pyc
Compiling ./zipfile_is_zipfile.py
Adding ./zipfile_is_zipfile.pyc
Compiling ./zipfile_namelist.py
Adding ./zipfile_namelist.pyc
Compiling ./zipfile_printdir.py
Adding ./zipfile_printdir.pyc
Compiling ./zipfile_pyzipfile.py
Adding ./zipfile_pyzipfile.pyc
Compiling ./zipfile_read.py
Adding ./zipfile_read.pyc
Compiling ./zipfile_write.py
Adding ./zipfile_write.pyc
Compiling ./zipfile_write_arcname.py
Adding ./zipfile_write_arcname.pyc
Compiling ./zipfile_write_compression.py
Adding ./zipfile_write_compression.pyc
Compiling ./zipfile_writestr.py
Adding ./zipfile_writestr.pyc
Compiling ./zipfile_writestr_zipinfo.py
Adding ./zipfile_writestr_zipinfo.pyc
__init__.pyc
zipfile_append.pyc
zipfile_getinfo.pyc
zipfile_infolist.pyc
zipfile_is_zipfile.pyc
zipfile_namelist.pyc
zipfile_printdir.pyc
zipfile_pyzipfile.pyc
zipfile_read.pyc
zipfile_write.pyc
zipfile_write_arcname.pyc
zipfile_write_compression.pyc
zipfile_writestr.pyc
zipfile_writestr_zipinfo.pyc

Imported from: zipfile_pyzipfile.zip/zipfile_pyzipfile.pyc

See also

- zipfile - The standard library documentation for this module.
- zlib - ZIP compression library
- tarfile - Read and write tar archives
- zipimport - Import Python modules from ZIP archive.
- PKZIP Application Note - Official specification for the ZIP archive format.
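Supplementing the read() approach shown earlier, a member can also be written out to disk with extract(), which decompresses the data and returns the path of the file it created. A small self-contained sketch (the archive and member names here are made up for illustration):

```python
import zipfile

# Build a tiny archive to work with.
zf = zipfile.ZipFile('zipfile_extract_example.zip', mode='w')
try:
    zf.writestr('greeting.txt', 'hello from the archive')
finally:
    zf.close()

# extract() writes the member below the current directory (or a
# directory passed as its second argument) and returns the path
# to the new file.
zf = zipfile.ZipFile('zipfile_extract_example.zip', 'r')
try:
    path = zf.extract('greeting.txt')
finally:
    zf.close()
```

extract() and its sibling extractall() were added in Python 2.6, so they are available alongside the rest of the API shown above.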
https://pymotw.com/2/zipfile/index.html
Eric,

On Thu, Aug 21, 2014 at 4:09 PM, Eric W. Biederman <ebiederm xmission com> wrote:
> Richard Weinberger <richard nod at> writes:
>
>> Am 21.08.2014 15:12, schrieb Christoph Hellwig:
>>> On Wed, Aug 20, 2014 at 09:53:49PM -0700, Eric W. Biederman wrote:
>>>> Richard Weinberger <richard weinberger gmail com> writes:
>>>>
>>>>> On Wed, Aug 6, 2014 at 2:57 AM, Eric W. Biederman <ebiederm xmission com> wrote:
>>>>>
>>>>> This commit breaks libvirt-lxc.
>>>>> libvirt does in lxcContainerMountBasicFS():
>>>>
>>>> The bugs fixed are security issues, so if we have to break a small
>>>> number of userspace applications we will. Anything that we can
>>>> reasonably do to avoid regressions will be done.
>>>
>>> Can you explain the security issues in detail? Breaking common
>>> userspace like libvirt-lxc with just a little bit of handwaiving is
>>> entirely unacceptable.
>>
>> It looks like commit 87b47932f40a11280584bce260cbdb3b5f9e8b7d in
>> git.kernel.org/cgit/linux/kernel/git/ebiederm/user-namespace.git for-next
>> unbreaks libvirt-lxc.
>> I hope it hits Linus tree and -stable before the offending commit hits users.
>
> I plan to send the pull request to Linus as soon as I have caught my
> breath (from all of the conferences this week) that I can be certain I
> am thinking clearly and not rushing things.

Today I've upgraded my LXC testbed to the most recent kernel and found libvirt-lxc broken again (sic!). Remounting /proc/sys/ is failing.

Investigating into the issue showed that commit "mnt: Implicitly add MNT_NODEV on remount as we do on mount" is not mainline. Why did you leave out this patch? In my previous mails I explicitly stated that exactly this commit unbreaks libvirt-lxc.

Now the userspace breaking changes are mainline and hit users hard. :-(

--
Thanks,
//richard
https://listman.redhat.com/archives/libvir-list/2014-November/msg00992.html
Cowbell is a simple musical instrument app. With it, you can tap the screen in any rhythm, and the app makes a cowbell noise with each tap. You can even play along with songs from your music library by switching to the Music + Videos hub, starting a song or playlist, and then switching back to Cowbell. The important aspect of Cowbell is that its sole purpose is to play sound effects.

Of all the musical instruments out there, why choose a cowbell? Many people find the idea of playing a cowbell entertaining thanks to a Saturday Night Live skit in 2000 with Will Ferrell and Christopher Walken. In it, Christopher Walken repeatedly asks for "more cowbell" while Will Ferrell plays it to Blue Öyster Cult's "(Don't Fear) The Reaper." With this song in your music library and this app on your phone, you can re-create the famous skit!

Playing Sound Effects

On Windows Phone, Silverlight has only one way to play audio and video: the MediaElement element. However, this element is too heavyweight for playing sound effects. When it plays, it stops any other media playback on the phone (e.g. music playing in the background from the Music + Videos hub). The relevant XNA class is called SoundEffect, and it lives in the Microsoft.Xna.Framework.Audio namespace. To use it, you must add a reference to the Microsoft.Xna.Framework assembly in your project. In this chapter, you'll see how to load a sound effect from an audio file and how to play it.

Using MediaElement for sound effects could cause your app to fail marketplace certification! Because using MediaElement for sound effects results in the poor user experience of halting background media, Microsoft checks for this when certifying your app for the marketplace. If you use MediaElement for sound effects, your app will not be approved for publishing.
If you need sound effects for your app and are unable to make them yourself, here are a few good resources to check out:

- The Freesound Project (freesound.org)
- Partners in Rhyme (partnersinrhyme.com)
- Soungle (soungle.com)
- Sounddogs (sounddogs.com)
- SoundLab, a pack of game-centric sounds from Microsoft (create.msdn.com/en-US/education/catalog/utility/soundlab)

The User Interface

Cowbell has a main page, an instructions page, and an about page. The latter two pages aren't interesting and therefore aren't shown in this chapter, but Listing 30.1 contains the XAML for the main page.

LISTING 30.1 MainPage.xaml—The User Interface for Cowbell

[code]
<!-- ... one button and one menu item -->
<phone:PhoneApplicationPage.ApplicationBar>
  <shell:ApplicationBar Opacity=".5">
    ...
  </shell:ApplicationBar>
</phone:PhoneApplicationPage.ApplicationBar>

<!-- Just an image in a grid -->
<Grid Background="Black" MouseLeftButtonDown="Grid_MouseLeftButtonDown">
  <Image Source="Images/cowbell.png" Stretch="None"/>
</Grid>
</phone:PhoneApplicationPage>
[/code]

This is a simple page with an application bar and a grid with a cowbell image that handles taps with its MouseLeftButtonDown handler. For the sake of the cowbell image that has white edges, the grid is given a hard-coded black background. Therefore, this page looks the same under both themes except for the half-opaque application bar, as seen in Figure 30.1.

The Code-Behind

Listing 30.2 contains the code-behind for the main page. This is where all the sound-effect logic resides.
LISTING 30.2 MainPage.xaml.cs—The Code-Behind for Cowbell's Main Page

[code]
using System;
using System.Windows;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Navigation;
using System.Windows.Resources;
using Microsoft.Phone.Controls;
using Microsoft.Phone.Shell;
using Microsoft.Xna.Framework.Audio; // For SoundEffect

namespace WindowsPhoneApp
{
  public partial class MainPage : PhoneApplicationPage
  {
    SoundEffect cowbell;

    public MainPage()
    {
      InitializeComponent();

      // Load the sound file
      StreamResourceInfo info = Application.GetResourceStream(
        new Uri("Audio/cowbell.wav", UriKind.Relative));

      // Create an XNA sound effect from the stream
      cowbell = SoundEffect.FromStream(info.Stream);

      CompositionTarget.Rendering += CompositionTarget_Rendering;

      // Required for XNA sound effects to work
      Microsoft.Xna.Framework.FrameworkDispatcher.Update();
    }

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
      base.OnNavigatedTo(e);
      // Don't let the screen auto-lock in the middle of a musical performance!
      PhoneApplicationService.Current.UserIdleDetectionMode =
        IdleDetectionMode.Disabled;
    }

    protected override void OnNavigatedFrom(NavigationEventArgs e)
    {
      base.OnNavigatedFrom(e);
      // Restore the ability for the screen to auto-lock when on other pages
      PhoneApplicationService.Current.UserIdleDetectionMode =
        IdleDetectionMode.Enabled;
    }

    void Grid_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
    {
      // The screen was tapped, so play the sound
      cowbell.Play();
    }

    void CompositionTarget_Rendering(object sender, EventArgs e)
    {
      // Required for XNA sound effects to work.
      // Call this every frame.
      Microsoft.Xna.Framework.FrameworkDispatcher.Update();
    }

    // Application bar handlers

    void InstructionsButton_Click(object sender, EventArgs e)
    {
      this.NavigationService.Navigate(new Uri("/InstructionsPage.xaml",
                                              UriKind.Relative));
    }

    void AboutMenuItem_Click(object sender, EventArgs e)
    {
      this.NavigationService.Navigate(new Uri("/AboutPage.xaml",
                                              UriKind.Relative));
    }
  }
}
[/code]

- In the constructor, the stream for this app's .wav audio file is obtained with the static Application.GetResourceStream method.
- The use of the CompositionTarget.Rendering event is required for sound effects to work properly. This is explained in the following warning sidebar.

When playing sound effects with XNA, you must continually call Update on XNA's framework dispatcher! XNA's sound effect functionality, like some other functionality in XNA, only works if you frequently (as in several times a second) call the static FrameworkDispatcher.Update method from the Microsoft.Xna.Framework namespace. This is natural to do from XNA apps, because they are designed around a game loop that runs code every frame. (XNA even provides a base Game class that automatically does this, so developers don't have to.) From Silverlight apps, however, which are inherently event-based, you must go out of your way to run code on a regular schedule.

To call FrameworkDispatcher.Update regularly, you could use a DispatcherTimer, as done in previous chapters. You could even use a plain System.Threading.Timer because FrameworkDispatcher.Update can be called from any thread. However, my preferred approach is to use an event Silverlight raises before every single frame is rendered. The event is called Rendering, and it is exposed on a static class called CompositionTarget. This event is useful for doing custom animations that can't easily be represented with Silverlight's animation classes from Part II, "Transforms & Animations," of this book, such as physics-based movement.
In Cowbell, the event is perfect for calling FrameworkDispatcher.Update with roughly the same frequency that an XNA app would call it. Note that the first call to FrameworkDispatcher.Update is in the page's constructor because it takes a bit of time for the first Rendering event to be raised. If you call Play without previously calling FrameworkDispatcher.Update within a short time span, an InvalidOperationException is thrown with the following helpful message: "FrameworkDispatcher.Update has not been called. Regular FrameworkDispatcher.Update calls are necessary for fire and forget sound effects and framework events to function correctly. See for details."

- The code in OnNavigatedTo and OnNavigatedFrom exists to ensure that the screen doesn't auto-lock. If the cowbell player has a long break during a performance, it would be very annoying if the screen automatically locked. And tapping the screen to keep it active isn't a good option, because that would make an unwanted cowbell noise!
- The sound effect is played with a simple call to SoundEffect.Play in Grid_MouseLeftButtonDown. If Play is called before the sound effect finishes playing from a previous call, the sounds overlap.

The Audio Transport Controls

When the phone's media player plays music, this music continues playing while apps run. Users can pause, rewind, fast forward, or change songs via the 93-pixel-tall top overlay that appears on top of any app when the hardware volume buttons are pressed. This functionality works great with a fun instrument app such as Cowbell. In the next release of the Windows Phone OS, due by the end of 2011, third-party apps will be able to play music in the background just like the built-in media player.

The Finished Product
https://www.blograby.com/cowbell-sound-effects/
BN_generate_prime

Section: OpenSSL (3) Updated: 2003-01-13

NAME

BN_generate_prime, BN_is_prime, BN_is_prime_fasttest - generate primes and test for primality

SYNOPSIS

#include <openssl/bn.h>

DESCRIPTION

callback(1, j, cb_arg) is called as described below.

- When a prime has been found, callback(2, i, cb_arg) is called.

RETURN VALUES

BN_generate_prime() returns the prime number on success, NULL otherwise.

BN_is_prime() returns 0 if the number is composite, 1 if it is prime with an error probability of less than 0.25^checks, and -1 on error.

The error codes can be obtained by ERR_get_error(3).

SEE ALSO

bn(3), ERR_get_error(3), rand(3)

HISTORY

The cb_arg arguments to BN_generate_prime() and to BN_is_prime() were added in SSLeay 0.9.0. The ret argument to BN_generate_prime() was added in SSLeay 0.9.1. BN_is_prime_fasttest() was added in OpenSSL 0.9.5.
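The 0.25^checks error bound above is characteristic of the Miller-Rabin probabilistic primality test that BN_is_prime() performs: each independent round catches a composite with probability at least 3/4. As an illustration of the idea only (a sketch in Python, not OpenSSL's C implementation), with a small trial-division pass before the probabilistic rounds:

```python
import random

def is_probable_prime(n, checks=20):
    """Miller-Rabin test: for composite n, the chance of wrongly
    reporting 'prime' is below 0.25**checks."""
    if n < 2:
        return False
    # Quick trial division by a few small primes.
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2**r * d with d odd.
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    # Each round picks a random witness and checks the square chain.
    for _ in range(checks):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True
```

The pow(a, d, n) modular exponentiation keeps each round cheap even for very large n, which is why the same scheme is practical inside bignum libraries.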
http://www.thelinuxblog.com/linux-man-pages/3/BN_is_prime
Mass Transit

"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." (Antoine de Saint-Exupery)

NOTE: The namespace restructuring is currently occurring in the trunk, so if you need a known working build please check out the 0.4 release! Some things in the trunk are in flux at the moment.

Upcoming Changes

Getting Started

AutoWiring

Transports

- MSMQ - Transport utilizing Microsoft's MSMQ (currently tested against v 3.0)
- ActiveMQ - Open source MQ server

Deployment

A host for running message handlers as a service is described in UsingTheHostService.

Services

Subscriptions

LocalSubscriptionCache
This is the default subscription cache. It holds all of the subscriptions in-process and is excellent for unit tests.

DistributeSubscriptionCache
This is an additional cache that stores all of the subscriptions inside of memcached. The main benefit of this cache is that you can get all instances of your bus onto the same subscription list without a database, and without having to notify the other endpoints of new messages.

SubscriptionService
This is an out-of-process service that can be placed anywhere on your network; the SubscriptionClient will watch the bus's LocalSubscriptionCache and forward the changes to a central SubscriptionService. The central SubscriptionService will then rebroadcast those subscriptions to other buses. The main benefit here is being able to see who cares about what and to easily report on it.

Health

Under active development

Containers Currently Supported

- CastleWindsor Integration - Done
- StructureMap Integration - Needed
- Spring.Net Integration - Needed

Building From Source

Please run the NAnt build first as it sets up the SolutionVersion.cs. It will also compile a .zip file for you in release mode with .pdb's.

Thanks to: logo by
http://code.google.com/p/masstransit/
yogi-alloy

npm install yogi-alloy

yogi alloy

This project provides common AlloyUI tasks for the yogi command line tool.

Table of contents

- Usage
- Dependencies
- Install
- Available commands
- Contributing
- History

Usage

ya [command] [--flags]

Dependencies

In order to successfully run all yogi alloy commands you must have the following dependencies installed:

Install

npm -g install yogi yogi-alloy yuidocjs docpad shifter

Available commands

ya help

AlloyUI

Provides a set of util commands to work with the AlloyUI project.

Init

This is the first command you should run. It will clone or update all dependencies and also build them using ya build.

ya init

Build

Build module(s).

ya build

If you run this command inside of the root or src folder, it will build all modules and copy them to the build folder. If you run this command inside of a module folder, e.g. src/aui-audio, it will build only that module and copy it to build/aui-audio.

Build module(s) and watch for any changes.

ya build --watch

Build module(s) and cache them to make the build process faster.

ya build --cache

Build module(s) and validate their code using JSLint.

ya build --lint

Build aui-base, which will update every module's dependencies.

ya build --loader

Build the Alloy Bootstrap project.

ya build --css

This will generate a build/aui-css folder that contains Bootstrap's CSS.

ya build --yui

This will build all YUI modules inside of the build folder.

Build all modules using all options (--cache, --lint, --loader, --css, --yui).

ya build --all

Create

Create a new module. For example:

ya create --name foo

This will generate a src/aui-foo folder containing the module scaffolding.

Release

Release a new version.

ya release

This will generate a ready-to-release version of AlloyUI inside of a .zip file.

Alloy Bootstrap

Provides a set of util commands to work with the Alloy Bootstrap project.
Compile SASS files to CSS.

ya css-compile

Watch changes on SASS files and build them.

ya css-watch

Finds all CSS files in the current directory and namespaces them. For example:

ya css-namespace --prefix foo

Turns .bar {} into .foo-bar {}.

Collect all files recursively and remove the aui- namespace from CSS rules.

ya css-namespace-remove --file index.html

Turns <div class="aui-container"> into <div class="container">.

AlloyUI.com

Provides a set of util commands to work with the AlloyUI.com project.

Run the website locally and watch for any changes.

ya site-watch

Deploy the API docs to alloyui.com/api.

ya site-deploy

In order to see your changes live at alloyui.com you'll need a git remote pointing to Liferay's repository. You can do that by running git remote add upstream git@github.com:liferay/alloyui.com.git. Then, when you get asked about what remote you want to deploy, just answer upstream.

API Docs

Provides a set of util commands to work with AlloyUI's API Docs.

Run the API Docs locally and watch for any changes.

ya api-watch

Go to see it.

Build the API Docs locally.

ya api-build

This command will scan all JavaScript files inside of your current folder to generate documentation in an api folder. You can also set a specific source/destination folder by answering the command's questions.

Deploy the website to alloyui.com.

ya api-deploy

Make sure to run ya init to download all dependencies before running this command.

Contributing

Contributing new tasks to yogi alloy is really easy:

- Install yogi alloy and its dependencies, if you haven't done it yet.
- Fork and clone yogi alloy, then replace the installed version with your cloned one. To do that, follow the next steps:

a) Move the old symbolic link out of your way:

mv /usr/local/bin/yogi-alloy /usr/local/bin/yogi-alloy-npm

b) Create a symbolic link for your cloned version:

ln -s /Users/you/yogi-alloy/bin/yogi-alloy.js /usr/local/bin/yogi-alloy

Note: Remember to change "you" to your own username.
In your clone, copy the contents of the hello command to my-command:

cp -R lib/cmds/hello.js lib/cmds/my-command.js

Start working on it and when you finish, just send a pull request with your new command.

- If the pull request gets approved, it will be available in the next version under npm.

Run your command:

ya my-command

Note: These instructions work on unix-based systems. If you're on Windows, check the instructions here.

History

- v0.0.58 July 9, 2013 - Fix path parameter overwrite
- v0.0.57 July 9, 2013 - Rename AlloyUI API Docs Theme project
- v0.0.56 July 1, 2013 - Add options to build task (--cache, --coverage, --lint) and removed --aui
- v0.0.55 June 27, 2013 - Fix ya init build CSS
- v0.0.54 June 27, 2013 - Removing unnecessary folder removal, since gh-pages branch is now ignoring it
- v0.0.53 June 25, 2013 - Show clone/update status on the ya init command
- v0.0.52 June 20, 2013 - Rename alloy-twitter-bootstrap project to alloy-bootstrap
- v0.0.51 June 12, 2013 - Add --all, --yui, --watch build flags
- Rename --js build flag to --aui
https://www.npmjs.org/package/yogi-alloy
Suppose you have the following set: {0,1,2}. How do you generate all the possible permutations of such a set? One possible approach is to use recursion.

First we need to break the problem into smaller sub-problems. This can be done by splitting the set into two parts. We keep the right side fixed, and then find all the permutations of the left side, printing the whole set as we go along. The key step is to swap the rightmost element with all the other elements, and then recursively call the permutation function on the subset on the left.

It might be easier to see it with some code, so below you’ll find a C++ implementation:

#include <iostream>

int array[10] = {0,1,2,3,4,5,6,7,8,9};

// Swap the elements at positions x and y
void swap(int x, int y){
    int temp = array[x];
    array[x] = array[y];
    array[y] = temp;
}

// Print the first `size` elements of the array
void printArray(int size){
    for (int i = 0; i < size; i++)
        std::cout << array[i] << " ";
    std::cout << std::endl;
}

// Permute the first k elements, printing each complete arrangement
void permute(int k, int size){
    if (k == 0)
        printArray(size);
    else {
        for (int i = k-1; i >= 0; i--){
            swap(i, k-1);       // place element i at position k-1
            permute(k-1, size); // permute the remaining k-1 elements
            swap(i, k-1);       // restore the original order
        }
    }
}

int main(){
    permute(3,3);
    return 0;
}

On a future post I’ll talk about generating lexicographic permutations, so stay tuned.
https://www.programminglogic.com/generating-permutations-in-c/
Hello,

I'm still fighting with Dojo to get it working in refactored forms. My main problem is that I want to split stuff into separate parts, but it seems that the introduction of Dojo assumed that all js will be on similar urls and relative paths would just work fine. While that was true with the old ("_cocoon/*") way of loading resources, it's not in the refactored environment.

We have our own namespace for our widgets and a manifest.js registering it. Dojo does not know about it, but is clever enough to guess where it is and load it where required. However, with the new way of loading block resources, the resources of one block are completely separated from another block's resources. While that's desired, and one of my main aims, it breaks dojo's guessing badly. Take a look:

Whereby manifest.js is stored under:

A quick work-around was to tell dojo the path where manifest.js is stored:

dojo.registerModulePath("forms", "servlet:forms:/resources/forms"); <!-- tell dojo how to find the manifest registering our forms namespace -->

This fixes the problem described above, but I'm sure it's a dirty hack, and moreover other issues (path errors) arise really quickly.

What's the best way to solve this kind of problem? Am I right in guessing that an assumption about relative paths was made while introducing dojo? Could some of those who have done this work actually speak on the issues described above? Jeremy? Bruno?

--
Grzegorz Kossakowski
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200702.mbox/%3C45C2720A.7080906@tuffmail.com%3E
Have you ever looked at a user record in an ASP.NET Identity users table and wondered just what is being saved in the PasswordHash column? It looks like this (it’s in base64):

AQAAAAEAACcQAAAAEJSPbbFM1aeXB8fGRV7RRamLpjzktAF7FjwDWtFx35eol4AxN6vm4zWR9EApc7WPsQ==

Apart from maybe satisfying your curiosity, how could you benefit from knowing exactly what’s in that seemingly random sequence of characters? Well, you could confidently update between different versions of ASP.NET Identity. Also, if you ever get tired of ASP.NET Identity for whatever reason, you could move your user accounts to whatever alternative you found that might be better. Or maybe you just want to have a simple users table and not have to deal with all of what you need to know to make ASP.NET Identity work, all while still being able to move your user accounts to ASP.NET Identity some day if you choose to do so.

Whatever that reason may be, let’s pause and think first about what makes a good stored password. First, a password should only be known by the person that holds the account. If we look at what’s saved in the PasswordHash column that seems to be true. If I don’t tell you which password I entered to generate the PasswordHash you won’t be able to guess it. If it becomes public somehow (e.g. the server that hosts the database where that password hash is stored is breached), the attackers won’t be able to do much with it. They might try to guess it, but if it’s a good password they shouldn’t be able to in any reasonable amount of time. For that to be the case, a salt should be used. If you don’t know what a salt is in this context I recommend my other article Things you wanted to know about storing passwords but were afraid to ask. In case you are in a hurry here’s a summary: it’s a random piece of data that is combined with the password. That makes it impossible for an attacker to generate hashes for common passwords, store them, and then use them to compare with a password hash.
For example, imagine that qwerty is a common password (it really is). If an attacker knows the algorithm that was used to hash the users’ passwords he can just compute HASH( qwerty), do the same for other common passwords, and store the results. Then the process of figuring out which password was used to generate a particular password hash becomes an exercise in searching for a match in the stored passwords. That’s much faster than, for each password hash, picking a possible password, generating its hash and seeing if it matches. By using a salt the passwords will all look unique. For example, instead of qwerty we’d hash qwertyG%2cf# and save HASH( qwertyG%2cf#) and the salt G%2cf#.

But the user table in ASP.NET Identity only has a PasswordHash column; there’s no salt column. Does that mean there’s no salt being used? Fortunately there is a salt being used — in fact there’s much more than the password hash in the PasswordHash column. By the way, from now on I’ll be referring to the contents of the PasswordHash column as a password hash, even though it’s not. It stores several values in it; the closest one to a hash is the result of the PBKDF2 algorithm, but even that is not really a hash (it’s common to call it a hash as well, and I’m also guilty of it). It’s so common to refer to all of this as a hash, though, that I’ll just do it. Hopefully as you read along it will be clear what each thing inside PasswordHash is, and this shortcoming on my part won’t make things confusing.

PasswordHash breakdown

Even though the PasswordHash is just one column, it contains information about which version of ASP.NET Identity was used, the salt, information about the pseudo random function (PRF) that was used, and the hashed password.
If you convert the base64 representation to a byte array (using Convert.FromBase64String) you’ll get the following configuration:

- 1st byte - The value 0 to indicate a password hash from Identity Version 2 or the value 1 to indicate Version 3

For version 2 of Identity (when the first byte is 0) you get the following:

- 2nd to 17th - 16 bytes where the random salt is stored
- 18th to 49th - 32 bytes where the password hash is stored

For the latest version of Identity (when the first byte is 1) you get the following:

- 2nd to 5th byte - An unsigned int that will contain a value from the enumeration Microsoft.AspNetCore.Cryptography.KeyDerivation.KeyDerivationPrf
  - 0 for HMACSHA1
  - 1 for HMACSHA256
  - 2 for HMACSHA512
- 6th to 9th byte - An unsigned int that stores the number of iterations to perform when generating the hash (if you don’t know what this means go check out Things you wanted to know about storing passwords but were afraid to ask)
- 10th to 13th - An unsigned int that stores the salt size (it’s always 16)
- 14th to 29th - 16 bytes where the random salt is stored
- 30th to 61st - 32 bytes that contain the hashed password

While in the latest version of Identity it’s possible to specify the number of iterations to apply through configuration, in version 2 that number is fixed at 1000.

It’s important to know how the unsigned ints are stored: they are stored backwards. Let me explain that. An unsigned int is 4 bytes long, and you can convert one to byte[] by using BitConverter, for example:

byte[] unsignedIntAsByteArray = BitConverter.GetBytes((uint)10000);

Here’s how that looks if you print the bytes as a sequence of 8 bits in order (byte[0] is the leftmost sequence of eight bits and byte[3] is the rightmost):

00010000 00100111 00000000 00000000

The rightmost bit of the first byte represents the least significant bit, i.e. 2^0, the one left of that 2^1, etc.
The first bit of the second set of 8 bits (second byte) represents 2^8, the one left of that is 2^9, etc. It’s not easy to read this way, but you can confirm that it’s 10000 by summing up 2^4 + 2^8 + 2^9 + 2^10 + 2^13.

The uints are not stored this way. They are stored in reverse order, i.e. the original byte[0] is on byte[3], byte[1] is on byte[2], etc:

00000000 00000000 00100111 00010000

This is important to know because when creating the byte array that goes into PasswordHash (as base64), if you don’t order the uints this way it won’t be valid.

Custom PasswordHasher configuration

The class in ASP.NET Identity responsible for generating and validating passwords is called PasswordHasher, and its source code is available on github here. One thing that is reasonable to assume when we look at the format of the saved “hash” for V3 is that the number of iterations to perform, the salt size and the particular PRF function to use are all configurable. Unfortunately that is not the case. You can only configure which version you want to use (V2 or V3) and the number of iterations to perform. Furthermore, the number of iterations is only taken into account if you select V3. If V2 is selected this value is ignored.

Here’s how you can configure an ASP.NET Core web application that uses ASP.NET Identity and set it to use V2, in Startup.cs’ ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
    //...
    services.Configure<PasswordHasherOptions>(options =>
    {
        options.CompatibilityMode = PasswordHasherCompatibilityMode.IdentityV2;
        //options.IterationCount = 50000; //this value will be ignored if you use V2
    });
}

If you use V3 you can specify an IterationCount:

public void ConfigureServices(IServiceCollection services)
{
    //...
    services.Configure<PasswordHasherOptions>(options =>
    {
        options.CompatibilityMode = PasswordHasherCompatibilityMode.IdentityV3;
        options.IterationCount = 50000; //this value will be used only if you use V3
    });
}

There’s no option to select which PRF function to use (HMACSHA1, HMACSHA256, etc) or change the salt size. The PRF function for V2 is HMACSHA1 and for V3 is HMACSHA256.

Generating your hash without ASP.NET Identity

Imagine you want to be able to generate your own password hash without having to use ASP.NET Identity. Maybe you don’t like the database structure that ASP.NET Identity creates with all those AspNetSomething tables. Maybe you’re building a web site where you’re the only user that needs an account. The latter was what I went through recently. The right sidebar (or down at the bottom if you are on mobile) on this website has an “Archive” widget that shows past blog posts. That widget is powered by an ASP.NET Core website that only has one user, myself. I can log in to it and add/edit what shows up in that “Archive”. I didn’t want to deal with configuring ASP.NET Identity for just one user. Also, I found the idea of being able to set an admin user’s password through configuration appealing.

Another strong motivation for doing this is that if you go look at PasswordHasher you’ll see that the method for generating the PasswordHash, which is named HashPassword, takes in as a parameter an instance of TUser. TUser is not used in this method — in fact TUser is a generic parameter for the PasswordHasher class, but isn’t used anywhere in that class. If you want to reuse PasswordHasher you’ll have to pass in a TUser, which is awkward because it’s not necessary. Furthermore, you can’t pick a PRF other than HMACSHA256 or a salt size different than 16, even though the code to validate a password can deal with other PRFs and salt sizes.

Let’s first describe how you can generate a V3 password hash “by hand” (the process for V2 is very similar).
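Before the C# walkthrough, the 61-byte V3 layout itself can be sanity-checked with a short sketch. Python is used here purely for illustration, since the on-disk format is language-agnostic; the salt and hash bytes below are dummy placeholders rather than a real PBKDF2 output:

```python
import struct

def pack_v3(prf, iterations, salt, subkey):
    """Pack the Identity V3 layout: version marker, three big-endian
    uints (PRF id, iteration count, salt size), salt, PBKDF2 output."""
    return (bytes([0x01])              # version marker: 1 = Identity V3
            + struct.pack(">I", prf)   # 0=HMACSHA1, 1=HMACSHA256, 2=HMACSHA512
            + struct.pack(">I", iterations)
            + struct.pack(">I", len(salt))
            + salt
            + subkey)

def unpack_v3(blob):
    assert blob[0] == 0x01, "not a V3 hash"
    prf, iterations, salt_size = struct.unpack(">III", blob[1:13])
    salt = blob[13:13 + salt_size]
    subkey = blob[13 + salt_size:]
    return prf, iterations, salt, subkey

# Dummy 16-byte salt and 32-byte "hash" just to exercise the layout.
blob = pack_v3(1, 10000, bytes(16), bytes(32))
print(len(blob))            # 61, matching the byte breakdown above
print(unpack_v3(blob)[:2])  # (1, 10000)
```

Note the `">I"` format: big-endian, which is why the C# code below has to reverse what BitConverter produces on a little-endian machine.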
The first thing that you need to know is how to generate a salt. In .NET Core you do that using a RandomNumberGenerator, which you can find in the System.Security.Cryptography namespace. Here’s an example of how you can generate a 128 bit (16 byte) salt:

using (var rng = RandomNumberGenerator.Create())
{
    var salt = new byte[128/8];
    rng.GetBytes(salt); //The GetBytes method fills the salt array with random data
}

The Create method will give you an instance of a RandomNumberGenerator that is appropriate for the specific platform you are running under (Windows, Linux or Mac).

The only other thing you need to know is how to use Pbkdf2 (Password Based Key Derivation Function 2). If you want to understand why Pbkdf2 is used I shamelessly recommend reading Things you wanted to know about storing passwords but were afraid to ask. To get an implementation of Pbkdf2 you need to install the Microsoft.AspNetCore.Cryptography.KeyDerivation nuget package.

$ dotnet add package Microsoft.AspNetCore.Cryptography.KeyDerivation

Even though that NuGet package has AspNetCore in its name it does not depend on ASP.NET; you can run it in a .NET Core console application. Here’s how you can generate a password hash using Pbkdf2 with HMACSHA256 for the password “cutecats”:

using Microsoft.AspNetCore.Cryptography.KeyDerivation;
//...
var pbkdf2Hash = KeyDerivation.Pbkdf2(password: "cutecats",
    salt: salt,
    prf: KeyDerivationPrf.HMACSHA256,
    iterationCount: 10000,
    numBytesRequested: 32);

Here we are using the same defaults that ASP.NET Identity V3 uses: a 16 byte salt, HMACSHA256, 10000 iterations and a 32 byte hash ( numBytesRequested). We now have everything we need to generate a valid ASP.NET Identity PasswordHash. The only thing we need to do is to put everything together in a byte[].
The byte[] that we need is 61 bytes long, so let’s create one:

var identityV3Hash = new byte[1 + 4/*KeyDerivationPrf value*/ + 4/*Iteration count*/ + 4/*salt size*/ + 16 /*salt*/ + 32 /*password hash size*/];

The first byte is the version marker; for V3 it’s 1:

identityV3Hash[0] = 1;

The next 4 bytes are for a uint that contains the int value of the enumeration KeyDerivationPrf’s HMACSHA256 value, which is 1.

uint prf = (uint)KeyDerivationPrf.HMACSHA256; // or just 1
byte[] prfAsByteArray = BitConverter.GetBytes(prf).Reverse().ToArray(); //you need System.Linq for this to work
Buffer.BlockCopy(prfAsByteArray, 0, identityV3Hash, 1, 4);

BitConverter.GetBytes is how you can get the byte array from a value type like uint. Then you can just call .Reverse and convert the IEnumerable that Reverse returns back to an array. We then use Buffer.BlockCopy to copy the uint array to the identityV3Hash array. The signature of BlockCopy is: source array, source offset, destination array, destination offset and number of bytes to copy. The next 4 bytes contain the iteration count.
The default is 10000, so let’s use that in this example:

byte[] iterationCountAsByteArray = BitConverter.GetBytes((uint)10000).Reverse().ToArray();
Buffer.BlockCopy(iterationCountAsByteArray, 0, identityV3Hash, 1 + 4, 4);

Similar story for the salt size, which is 16 (you can actually use a value larger than 16 — it will be validated correctly, although there’s no way of doing this through configuration):

byte[] saltSizeInByteArray = BitConverter.GetBytes((uint)16).Reverse().ToArray();
Buffer.BlockCopy(saltSizeInByteArray, 0, identityV3Hash, 1 + 4 + 4, 4);

Now we just need to copy the salt we’ve generated earlier:

Buffer.BlockCopy(salt, 0, identityV3Hash, 1 + 4 + 4 + 4, salt.Length);

And finally the actual hash:

Buffer.BlockCopy(pbkdf2Hash, 0, identityV3Hash, 1 + 4 + 4 + 4 + salt.Length, pbkdf2Hash.Length);

Convert it to base 64 and you are ready to use it:

var identityV3Base64Hash = Convert.ToBase64String(identityV3Hash);

If you open an ASP.NET Identity AspNetUsers table that has at least one user and put identityV3Base64Hash there, you’ll be able to log in with that user with the password “cutecats”.

Manual Validation

To manually validate a password we need to extract everything that is stored in PasswordHash: the version marker, the PRF that was used (a value of the KeyDerivationPrf enumeration), the number of iterations for PBKDF2, the salt and the actual PBKDF2 hash. After this we can recompute the PBKDF2 hash using an input password and compare that with the stored PBKDF2 hash. If they are equal, then the password is correct. Let’s do that step by step.

Imagine passwordHash is a string with the base64 stored PasswordHash. Here’s how we can get the byte[] we need from it:

var identityV3HashArray = Convert.FromBase64String(passwordHash);

The next step is to get the KeyDerivationPrf enumeration value. To do that we need to convert the four consecutive bytes from position 1 to position 4 in the identityV3HashArray array. They make up the uint we saved in there previously. Remember that the bytes are in reverse order, so we need to reverse them back to their original order.
We’ll use BitConverter.ToUInt32 method that expects a byte[] and a startIndex in that array from which to read the 4 bytes that make up the uint. Let’s put all that in a method called ConvertFromNetworOrder: private static uint ConvertFromNetworOrder(byte[] reversedUint) { return BitConverter.ToUInt32(reversedUint.Reverse().ToArray(), 0); } The reason for the NetworkOrder in the name of the method is that it describes the order of the bytes in the byte array. The most significant is comes first. Another way to refer to this way of ordering bits, is big endian. We can now use out ConvertFromNetworOrder to fetch the PRF and the other uints that are stored in identityV3HashArray: var prfAsArray = new byte[4]; Buffer.BlockCopy(identityV3HashArray, 1, prfAsArray, 0, 4); var prf = (KeyDerivationPrf)ConvertFromNetworOrder(prfAsArray); Now, the iteration count: var iterationCountAsArray = new byte[4]; Buffer.BlockCopy(identityV3HashArray, 5, iterationCountAsArray, 0, 4); var iterationCount = (int)ConvertFromNetworOrder(iterationCountAsArray); Next is the salt size. Even though it’s always 16 if you use ASP.NET Identity, when verifying a password the salt size can be more than 16. It’s only for generating PasswordHash that ASP.NET Identity does not have an option for specifying a different size for the salt. You can see this in the VerifyHashedPasswordV3 method in PasswordHasher class. 
var saltSizeAsArray = new byte[4];
Buffer.BlockCopy(identityV3HashArray, 9, saltSizeAsArray, 0, 4);
var saltSize = (int)ConvertFromNetworOrder(saltSizeAsArray); //int because Buffer.BlockCopy expects an int in the offset parameter

The actual salt:

var salt = new byte[saltSize];
Buffer.BlockCopy(identityV3HashArray, 13, salt, 0, saltSize);

And finally the saved PBKDF2 hash:

var savedHashedPassword = new byte[identityV3HashArray.Length - 1 - 4 - 4 - 4 - saltSize];
Buffer.BlockCopy(identityV3HashArray, 13 + saltSize, savedHashedPassword, 0, savedHashedPassword.Length);

The only thing we need to do now is to run PBKDF2 with the saved PRF function, iteration count and salt:

var hashFromInputPassword = KeyDerivation.Pbkdf2(password, salt, prf, iterationCount, 32);

We can now compare the saved PBKDF2 hash with the one we just generated from the input password. The proper way to do that is in a way that does not disclose any information about how different the two arrays are. If you compare position by position and stop as soon as they differ, an attacker can measure the time it takes for a password to fail, and adjust the attempts accordingly.

If this sounds too convoluted, here’s a simpler example where the time it takes to perform an operation discloses information. Imagine a website where, if someone enters an invalid username, the password isn’t even tested. Even if the message that gets sent back is “Invalid username/password combination”, which does not disclose that the username is invalid, if an attacker times the responses he can confirm that a username is valid just by the fact that the message “Invalid username/password combination” takes longer to come back as a response when the username exists. It is therefore a good idea to make these checks take the same amount of time. In the case of comparing arrays, make sure that every position is tested every time, even if that isn’t necessary to decide that the arrays are different.
private static bool AreByteArraysEqual(byte[] array1, byte[] array2)
{
    if (array1.Length != array2.Length) return false;

    var areEqual = true;
    for (var i = 0; i < array1.Length; i++)
    {
        areEqual &= (array1[i] == array2[i]);
    }
    return areEqual;
}

Finally, the password is valid if AreByteArraysEqual(hashFromInputPassword, savedHashedPassword) returns true.

Sample project

Here’s a sample project where you can generate a PasswordHash for ASP.NET Identity V3. It’s a console application. Here’s an example of how it can be used:

$ ./IdentityV3Hasher hash==
$ ./IdentityV3Hasher hash "this is a long password"
AQAAAAEAACcQAAAAEDz3Wuf1QjDt14gWSdya6u5D6X8sBqbNJdNjeqGJBO52AIp3RYKXeBzDiPfeL1LPkQ==

And to verify a password/PasswordHash:

$ ./IdentityV3PasswordHasher verify==
Password is correct
$ ./IdentityV3PasswordHasher verify cutecatZ AQAAAAEAACcQAAAAEFWLthQDW2xiWaS3vLgY4ItJdModbW0kzKtb8IVuXBY3fFaIntkbbdqTj8mTXH4mmA==
Wrong password

You can get the project in github here:

$ git clone
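For readers outside .NET, the whole validation flow described above can also be sketched end-to-end in Python: hashlib’s pbkdf2_hmac implements PBKDF2, and hmac.compare_digest gives the constant-time comparison. This is an illustration under the assumption that Identity encodes the password as UTF-8; the sketch generates its own V3-format hash rather than reusing the article’s examples:

```python
import base64, hashlib, hmac, os, struct

PRFS = {0: "sha1", 1: "sha256", 2: "sha512"}  # KeyDerivationPrf values

def verify_v3(password, base64_hash):
    blob = base64.b64decode(base64_hash)
    if blob[0] != 0x01:                  # version marker: 1 means Identity V3
        return False
    # Three big-endian uints: PRF id, iteration count, salt size.
    prf, iterations, salt_size = struct.unpack(">III", blob[1:13])
    salt = blob[13:13 + salt_size]
    saved = blob[13 + salt_size:]
    computed = hashlib.pbkdf2_hmac(PRFS[prf], password.encode("utf-8"),
                                   salt, iterations, dklen=len(saved))
    return hmac.compare_digest(computed, saved)  # constant-time comparison

# Build a V3-format hash here so the sketch is self-checking.
salt = os.urandom(16)
subkey = hashlib.pbkdf2_hmac("sha256", b"cutecats", salt, 10000, dklen=32)
stored = base64.b64encode(bytes([1]) + struct.pack(">III", 1, 10000, 16)
                          + salt + subkey).decode()
print(verify_v3("cutecats", stored))  # True
print(verify_v3("cutecatZ", stored))  # False
```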
https://www.blinkingcaret.com/2017/11/29/asp-net-identity-passwordhash/
Upgrade Guide: XNA Game Studio Express to XNA Game Studio 2.0

XNA Game Studio 2.0 introduces many new features for developers who were previously creating games in XNA Game Studio Express or XNA Game Studio Express 1.0 Refresh. This guide explains the process of making the transition to XNA Game Studio 2.0, and explains common scenarios you may encounter. For a quick understanding of the new features introduced in XNA Game Studio 2.0, see What's New in This Release.

The upgrade process for an XNA Game Studio Express game consists of two main parts:

- You must first upgrade your game project. This upgrade is necessary in every case, regardless of the complexity or simplicity of the project.
- You may need to modify your game code to work with changes in the XNA Framework. Changes to deploying and playing your games on Xbox 360 using XNA Game Studio Connect may also affect your experience.

Upgrading Your Project

Projects that were created in XNA Game Studio Express or XNA Game Studio Express 1.0 Refresh must be upgraded to work with XNA Game Studio 2.0. You have a choice in how you want to upgrade your project:

- You may use the Project Upgrade Wizard for XNA Game Studio 2.0, available on the XNA Creators Club Online Web site, to upgrade your project to the new XNA Game Studio 2.0 format.
- You may manually upgrade your project by creating a new project in XNA Game Studio 2.0. This is more time-consuming, especially for larger projects.

Option 1: Upgrade with Project Upgrade Wizard for XNA Game Studio 2.0

The Project Upgrade Wizard for XNA Game Studio 2.0 is available for download on the XNA Creators Club Online Web site. Instructions are included that will take you through the upgrade process.

Option 2: Manual Upgrade

Use the following procedure to upgrade your project without using the Project Upgrade Wizard.

- Create a new game project.
- Add all code files and other source files that are not built as content to the new project.
- Add all files that are built as content to the nested content project associated with the new project.
- Update the properties on the new project itself so that they match your original project—for example, Name, Description, and any special build or debug configurations.
- Update the properties on the files within the project so that they match those in your original project. Updating the properties is especially important for content files.
- If your game includes multiple game projects or game-library projects, repeat the preceding steps for each project to add them to the Visual Studio 2005 Solution for the game. If your game uses a Content Pipeline extension, you can add that project directly to the Solution.
- Update the project for the Content Pipeline extension so it references the XNA Framework 2.0 assemblies.
- Use the References node in the nested content project to reference the Content Pipeline extension.
- Set any necessary references between the projects within the solution.

After Upgrading Your Project

Build the solution to test for any errors due to changes in the XNA Framework 2.0 assemblies or project configuration issues.

- Note that your content now builds to a subdirectory called Content by default; you can change this name by setting the Content Root Directory property on the content project node.

Modifying Your Code

Once you have built your upgraded project, you may encounter build errors due to changes in the XNA Framework. If you have errors, see XNA Framework Changes in XNA Game Studio 2.0 for information on how to modify your code appropriately. The following section lists some of the common XNA Framework scenarios you may encounter after upgrading your project.

Common Code Modification Scenarios

The following sections contain a variety of scenarios you may encounter when reviewing your code after a project upgrade.
Application Model Changes XNA Game Studio 2.0 introduces changes into the Application Model that affect the Game class. These changes are visible in the template code inserted into the Game1.cs file when a new project is created in XNA Game Studio 2.0. Most notably, LoadGraphicsContent and UnloadGraphicsContent have been deprecated, and are replaced by LoadContent and UnloadContent. The deprecated functions are still called in XNA Game Studio 2.0 and therefore will not keep your game from loading content, but these calls may be removed in a later version of XNA Game Studio, so it is recommended that you change your method signatures for LoadGraphicsContent and UnloadGraphicsContent to LoadContent and UnloadContent. This change to Game also applies to the DrawableGameComponent class. If you are using any DrawableGameComponent objects, they should use LoadContent and UnloadContent as well. Next, the Game class now provides the Content property, which returns a ContentManager that can be used to load content into the game. This means you no longer need to instantiate your own ContentManager as you did in XNA Game Studio Express, though it will still work. Finally, the Game class now provides the GraphicsDevice property, which allows you to access the GraphicsDevice associated with the Game object. This, like the previous change, is merely for ease of use and does not require that you change your code. Graphics Changes The largest impact to the XNA Framework will be felt in the Microsoft.Xna.Framework.Graphics namespace. Three particular scenarios exist where you may need to change your code: use of the ResourceUsage enumeration, use of render targets, and use of dynamic vertex and index buffers. ResourceUsage Enumeration The ResourceUsage enumeration has been split into two more specific enumerations in XNA Game Studio 2.0: the TextureUsage enumeration and the BufferUsage enumeration. 
If you are using classes that held a ResourceUsage value, you will encounter build breaks. See XNA Framework Changes in XNA Game Studio 2.0, Graphics namespace, for information on which new value type you should use for each class.

Render Targets

Several changes have been made to the effect and render target functionality in XNA Game Studio 2.0. You will see first that references to ResourceManagementMode in the RenderTarget class no longer resolve, and the Effect.Lost and Effect.Reset events are no longer available. Resource management of effects and render targets is now handled automatically by the XNA Framework. Remove any event handlers that hook Effect.Lost and Effect.Reset, and remove references to ResourceManagementMode.

Next, render targets had differing persistence behavior between Windows and Xbox 360 in XNA Game Studio Express. Windows would persist data in a render target frame-to-frame (unless multisampling was enabled), but Xbox 360 would discard it. You can now unify the behavior between platforms by setting the RenderTarget.RenderTargetUsage property. The property is set to PreserveContents by default.

Last, render targets now use a ResolveTexture2D class to hold their texture data. It behaves much like a standard Texture2D object but is optimized for holding resolved render target data. You must now instantiate this class for use with the GraphicsDevice.ResolveBackBuffer method if you wish to resolve the contents of the back buffer. See Render Targets for more information.

Dynamic Vertex and Index Buffers

If your XNA Game Studio Express game used vertex and index buffers dynamically—that is, if you were writing to VertexBuffer and IndexBuffer objects continuously, such as in the case of dynamic terrain generation or particle systems—you are familiar with the VertexBuffer.SetData method. Calls to this method will cause an error when built. This is due to the signature of the method being changed to no longer take a SetDataOptions parameter.
Previously, SetDataOptions was used to prepare a vertex or index buffer to be used dynamically. In XNA Game Studio 2.0, dynamic vertex and index buffers have their own classes: DynamicVertexBuffer and DynamicIndexBuffer. To use dynamic vertex and index buffers, instantiate these classes rather than the standard VertexBuffer and IndexBuffer classes. Much of the initialization and settings work necessary to set up a dynamic buffer is handled automatically by these new classes.

Since the buffers are dynamic, the XNA Framework cannot automatically handle reloading content in the case of a device reset. When you create DynamicVertexBuffer and DynamicIndexBuffer objects, you must hook the DynamicVertexBuffer.ContentLost and the DynamicIndexBuffer.ContentLost events, and respond to them by reloading content into the buffers yourself.

Storage Changes

XNA Game Studio 2.0 introduced changes into the Microsoft.Xna.Framework.Storage namespace. First, if you were writing data to the title location for your game, you will have to change to writing to user storage. To do this, see the Storage Overview topic, and How To: Get a StorageDevice Asynchronously. When you write to user storage, you will need to use the Guide features to allow the user to select where they would like to store their data.

Previously, it was possible to synchronously call up the Guide - this would halt your game until the Guide was available. This is no longer allowed in XNA Game Studio 2.0. You must now call up the Guide asynchronously, thus allowing your game loop to continue to run. This does not mean you cannot pause your game during time in the Guide, but you can no longer call synchronously and lock the game entirely. To learn how to call up the storage selector asynchronously, see How To: Get a StorageDevice Asynchronously.
Finally, if you already were using the Guide asynchronously, you will find that this functionality has moved out of the Storage namespace, and into Guide.BeginShowStorageDeviceSelector and Guide.EndShowStorageDeviceSelector in the GamerServices namespace. Other Improvements There are many other changes, including additions to audio and input, as well as the addition of the new Net and GamerServices namespaces to enable multiplayer support over System Link and LIVE. To learn more about these new features, including sample code and tutorials about integrating them into your game, see What's New in This Release. XNA Game Studio Connect To deploy and play XNA Game Studio 2.0 games on Xbox 360, you must download XNA Game Studio Connect from Xbox LIVE Marketplace. See Connecting to Your Xbox 360 Console with XNA Game Studio 2.0 for information on downloading XNA Game Studio Connect. Downloading XNA Game Studio Connect does not overwrite the XNA Game Launcher, which is used to deploy games made in XNA Game Studio Express. Both applications work together on your Xbox 360 console. When browsing your deployed games using XNA Game Studio Connect, you will see any XNA Game Studio Express games you deployed using the XNA Game Launcher. You can play these XNA Game Studio Express games from XNA Game Studio Connect. However, the XNA Game Launcher cannot play XNA Game Studio 2.0 games. If you play an XNA Game Studio Express game from XNA Game Studio Connect, you will be returned to the XNA Game Launcher, not XNA Game Studio Connect, upon exiting the game. If you wish to then deploy or play an XNA Game Studio 2.0 game, you must exit the XNA Game Launcher and start XNA Game Studio Connect.
http://msdn.microsoft.com/en-US/library/bb976057(v=xnagamestudio.20).aspx