(This post is a part of a tutorial series on Building REST APIs in Django.)

In our last blog post, Getting started with Django REST Framework, we saw how we could use the APIView and accept input from users using request.data. In our example we dealt with strings, so it was pretty straightforward. But consider the case of age or account_balance – one has to be an integer, the other a float / decimal. How do we properly validate the incoming data?

We could manually check every input field and send an error if the field type doesn't match. But soon we would have a problem on our hands – as the number of inputs grows, we can't keep doing this kind of manual validation. In plain Django, we would probably use Django Forms for validation. Does DRF provide us with something similar? Yes, it does. The solution to our problem is serializers.

What can a serializer do for us? Have you ever tried JSON-serializing a Django model? Or a queryset? You can't do it directly, because they are not JSON serializable. So what do we do instead? We convert them to Python's native data structures, which can be serialized into JSON: we can serialize querysets into lists and model instances into dictionaries. But doing that by hand is cumbersome. In the other direction, we saw how we can get incoming data from request.data as key-value pairs. We can't store those in the database directly – we have to transform them into Django's data structures, i.e. model instances. Doing that by hand is also cumbersome.

Serializers help us with both directions. A serializer can serialize complex types into Python native types and then deserialize native types back into those complex types. Besides that, it also does basic validation based on the serializer field types: if a field is defined as an integer field, it will raise an error when we pass it a value that can't be converted to an integer. If we need more advanced validation rules, we can plug in the built-in validators or even write our own.
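To see concretely why this conversion step is needed, here is a small stdlib-only Python sketch (no Django involved; the Subscriber class below is a hypothetical stand-in for a model instance):

```python
import json


class Subscriber:
    """Hypothetical stand-in for a Django model instance."""

    def __init__(self, name, age):
        self.name = name
        self.age = age


subscriber = Subscriber("masnun", 29)

# Trying to serialize the object directly fails:
try:
    json.dumps(subscriber)
except TypeError as err:
    print("not JSON serializable:", err)

# Converting to a native dict first works fine:
data = {"name": subscriber.name, "age": subscriber.age}
print(json.dumps(data))
```

This to-and-from-native-types conversion is exactly the chore serializers automate for us.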
Let's look at code examples to understand the use case better.

Defining a Serializer

Create a file named serializers.py inside the api app directory and put the following code into it:

```python
from rest_framework import serializers


class HelloWorldSerializer(serializers.Serializer):
    name = serializers.CharField(required=True, max_length=6)
    age = serializers.IntegerField(required=False, min_value=10, default=10)
```

We're creating a HelloWorldSerializer which extends serializers.Serializer. We're defining two fields on this serializer:
- name is a CharField, so it accepts strings. It has a max_length of 6.
- age is an optional integer field. The value must be at least 10 if provided. If not provided, it defaults to 10.

With this serializer set up, let's modify our view to use it:

```python
from .serializers import HelloWorldSerializer


class HelloWorldView(APIView):
    def get(self, request):
        return Response({"message": "Hello World!"})

    def post(self, request):
        serializer = HelloWorldSerializer(data=request.data)
        if serializer.is_valid():
            valid_data = serializer.data
            name = valid_data.get("name")
            age = valid_data.get("age")
            return Response(
                {"message": "Hello {}, you're {} years old".format(name, age)}
            )
        else:
            return Response({"errors": serializer.errors})
```

We pass request.data as the data parameter to HelloWorldSerializer so it can read and parse all the request data. Then we check if the serializer is valid. If you have used Django Forms, this will feel very similar. If the serializer is valid, we have a valid set of data available, so we can take the values of name and age and show a pretty message. If the serializer is not valid, we pass serializer.errors back to the client, which will contain elaborate error messages.

Let's try out the API to see what happens. First, an empty request:

```
$ curl -H "Content-Type: application/json" -X POST
{"errors":{"name":["This field is required."]}}
```

The errors say the name field is required. Of course it is!
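For readers curious about what is_valid() does conceptually, here is a toy stdlib-only analogue of the HelloWorldSerializer rules. This is not DRF code – just a sketch that mirrors the required / max_length / min_value / default behavior and the error messages we see from curl:

```python
def validate_hello(data):
    """Toy analogue of HelloWorldSerializer.is_valid() - not DRF itself.

    Returns a (valid_data, errors) pair; errors is empty on success.
    """
    errors = {}
    valid = {}

    # name: required CharField with max_length=6
    name = data.get("name")
    if name is None or name == "":
        errors["name"] = ["This field is required."]
    elif len(str(name)) > 6:
        errors["name"] = ["Ensure this field has no more than 6 characters."]
    else:
        valid["name"] = str(name)

    # age: optional IntegerField with min_value=10, default=10
    age = data.get("age", 10)
    try:
        age = int(age)  # like DRF, coerce "8" -> 8 if possible
    except (TypeError, ValueError):
        errors["age"] = ["A valid integer is required."]
    else:
        if age < 10:
            errors["age"] = ["Ensure this value is greater than or equal to 10."]
        else:
            valid["age"] = age

    return valid, errors


print(validate_hello({}))                  # missing name -> error
print(validate_hello({"name": "masnun"}))  # age falls back to default 10
```

The real serializer does the same bookkeeping for every declared field, which is why the manual approach stops scaling as fields accumulate.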
Let's pass the name:

```
$ curl -H "Content-Type: application/json" -X POST -d '{"name": "masnun"}'
{"message":"Hello masnun, you're 10 years old"}
```

We passed the name but didn't pass the age. Since age is not required and has a default value set, we get the default back. But what if we set a low value?

```
$ curl -H "Content-Type: application/json" -X POST -d '{"name": "masnun", "age": "8"}'
{"errors":{"age":["Ensure this value is greater than or equal to 10."]}}
```

We passed 8 and the serializer is not happy about that. Note that we passed the 8 as a string, but DRF doesn't mind as long as it can convert the value to an integer successfully. What if we pass a value that is not a number?

```
$ curl -H "Content-Type: application/json" -X POST -d '{"name": "masnun", "age": "ten"}'
{"errors":{"age":["A valid integer is required."]}}
```

That is caught too. Cool, okay then, let's give it a rest and pass a valid value:

```
$ curl -H "Content-Type: application/json" -X POST -d '{"name": "masnun", "age": 12}'
{"message":"Hello masnun, you're 12 years old"}
```

Serializer with Model

How do serializers help us in working with models? To understand that, let's first create a model.

Creating the Subscriber Model

Open api/models.py and add the Subscriber model like this:

```python
class Subscriber(models.Model):
    name = models.CharField("Name", max_length=50)
    age = models.IntegerField("Age")
    email = models.EmailField("Email")
```

Now create and run the migrations:

```
python manage.py makemigrations
python manage.py migrate
```

That should set up the table for our new model.

Update The Serializer

We added an email field to our model, and the max length for name is now 50 characters. Let's update our serializer to match these constraints, and rename it to SubscriberSerializer:

```python
from rest_framework import serializers


class SubscriberSerializer(serializers.Serializer):
    name = serializers.CharField(max_length=50)
    age = serializers.IntegerField()
    email = serializers.EmailField()
```

Update The View

And now let's refactor our view.
```python
from .serializers import SubscriberSerializer
from .models import Subscriber


class SubscriberView(APIView):
    def get(self, request):
        return Response({"message": "Hello World!"})

    def post(self, request):
        serializer = SubscriberSerializer(data=request.data)
        if serializer.is_valid():
            subscriber_instance = Subscriber.objects.create(**serializer.data)
            return Response(
                {"message": "Created subscriber {}".format(subscriber_instance.id)}
            )
        else:
            return Response({"errors": serializer.errors})
```

The code is simple and straightforward. If the serializer validation succeeds, we create a new subscriber out of the validated data.

Update URLConf

Let's update urls.py to add the new endpoint:

```python
url(r'^subscriber', SubscriberView.as_view(), name="subscriber")
```

Now let's try it out. We will post the following JSON using curl or Postman:

```json
{"name": "Abu Ashraf Masnun", "email": "[email protected]", "age": 29}
```

And we will get back the following response:

```json
{"message": "Created subscriber 1"}
```

With the serializer, we needed much less code, and we did it in a very clean way.

List All Subscribers

According to REST best practices, a GET call to a resource route (/api/subscriber) should return a list of all the items (subscribers). So let's refactor the get method to return the subscriber list:

```python
def get(self, request):
    all_subscribers = Subscriber.objects.all()
    serialized_subscribers = SubscriberSerializer(all_subscribers, many=True)
    return Response(serialized_subscribers.data)
```

We fetch all subscribers and pass the queryset to the serializer constructor. Since we're passing a queryset (a list of model instances rather than a single instance), we need to set many=True. Also note that we don't need to call is_valid – the data is coming from the database, so it's already valid. In fact, we can't call is_valid unless we pass something as the data parameter (as in SubscriberSerializer(data=request.data)).
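Conceptually, many=True just maps the serializer over every instance in the queryset. A stdlib-only sketch of that idea (the dict "instances" and helper functions here are hypothetical, not DRF internals):

```python
def serialize_subscriber(obj):
    """Toy read-direction analogue of SubscriberSerializer for one instance."""
    return {"name": obj["name"], "age": obj["age"], "email": obj["email"]}


def serialize_many(queryset):
    """What many=True does conceptually: apply the serializer per instance."""
    return [serialize_subscriber(obj) for obj in queryset]


# Stand-in for Subscriber.objects.all()
all_subscribers = [
    {"name": "Abu Ashraf Masnun", "age": 29, "email": "[email protected]"},
]
print(serialize_many(all_subscribers))
```

The result is a plain list of dicts, which is exactly what Response can render as JSON.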
When we pass a queryset or a model instance, the serialized data is automatically available as serializer.data.

What's Next?

We have so far learned how to use APIView and serializers to build beautiful APIs with clean code. But we still have some duplication of effort – for example, we had to define fields on both the model and the serializer. We also have to implement both collection and element resources, which means around five method implementations. Wouldn't it be great if, by some mechanism, we could make things simpler, shorter and cleaner? We'll see, in our next blog post 🙂
I tried to make a simplified version of what I experienced in another project. Essentially I get two warnings in the console. I tried to make a modal that renders if the state is equal to true; otherwise it returns null. I believe my issue is due to returning null, but I don't know how to do this another way. I experimented with the kill() option in GSAP, but I had no luck with it. Here is the reference from the docs that I read.

```jsx
import React, { useRef, useEffect } from "react";
import gsap from "gsap";

export default function TestGsap(props) {
  const box = useRef();

  useEffect(() => {
    gsap.from(box.current, {
      y: "500",
      ease: "expo",
      duration: 2,
    });
  });

  if (props.toggleModal === true) {
    return (
      <div>
        <section
          ref={box}
          style={{ width: "10rem", height: "10rem", backgroundColor: "red" }}
        >
          <p>Hello, I am a red box.</p>
        </section>
      </div>
    );
  } else {
    return null;
  }
}
```

Yahoo SafeFrame
aantaya posted a topic in Banner Animation

Has anyone had any experience with ads running on the Yahoo homepage? I am going back and forth with Yahoo's ad team trying to troubleshoot this issue of ads not loading on their homepage. This is an issue specifically with Firefox and ads not displaying, and possibly an issue with TweenLite/Max. After looking into the inspector in Firefox I get an error similar to this, and any time you get an HTML ad that uses GreenSock you get errors just like this in the inspector window. I am not really sure how to troubleshoot any of this further, so I am posting here to see if anyone has had similar issues with Yahoo or any other site that uses SafeFrames to display ads.
Test Run – Fault Injection Testing with TestApi

By James McCaffrey | August 2010

Fault injection testing is the process of deliberately inserting an error into an application under test and then running the application to determine whether the application deals with the error properly. Fault injection testing can take several different forms. In this month's column, I explain how you can introduce faults into .NET applications at run time using a component of the TestApi library. The best way for you to see where I'm headed in this column is to take a look at the screenshot in Figure 1. The screenshot shows that I'm performing fault injection testing on a dummy .NET WinForm application named TwoCardPokerGame.exe. A C# program named FaultHarness.exe is running in the command shell. It alters the normal behavior of the application under test so the application throws an exception the third time a user clicks on the button labeled Evaluate. In this situation, the Two Card Poker application does not handle the application exception gracefully, and the result is the system-generated message box.

Figure 1 Fault Injection Testing in Action

Let's take a closer look at this scenario to consider some of the details involved. When FaultHarness.exe is launched from the command shell, behind the scenes the harness prepares profiling code that will intercept the normal code execution of TwoCardPokerGame.exe. This is called the fault injection session. The fault injection session uses a DLL to start watching for calls to the application's button2_Click method, which is the event handler for the button labeled Evaluate. The fault injection session has been configured so that the first two times a user clicks on the Evaluate button, the application behaves as coded, but on the third click the fault session causes the application to throw an exception of type System.ApplicationException. The fault session records session activity and logs a set of files to the test host machine.
Notice in Figure 1 that the first two application Deal-Evaluate click pairs work properly, but the third click generated an exception. In the sections that follow, I’ll briefly describe the dummy Two Card Poker Game application under test, present and explain in detail the code in the FaultHarness.exe program shown in Figure 1, and provide some tips about when the use of fault injection testing is appropriate and when alternative techniques are more suitable. Although the FaultHarness.exe program itself is quite simple and most of the difficult work is performed behind the scenes by the TestApi DLLs, understanding and modifying the code I present here to meet your own testing scenarios requires a solid understanding of the .NET programming environment. That said, even if you’re a .NET beginner, you should be able to follow my explanations without too much difficulty. I’m confident you’ll find the discussion of fault injection an interesting and possibly useful addition to your toolset. The Application Under Test My dummy application under test is a simplistic but representative C# WinForm application that simulates a hypothetical card game called Two Card Poker. The application consists of two main components: TwoCardPokerGame.exe provides the UI and TwoCardPokerLib.dll provides the underlying functionality. To create the game DLL I launched Visual Studio 2008 and selected the C# Class Library template from the File | New Project dialog box. I named the library TwoCardPokerLib. The overall structure of the library is presented in Figure 2. The code for TwoCardPokerLib is too long to present in its entirety in this article. The complete source code for the TwoCardPokerLib library and the FaultHarness fault injection harness is available in the code download that accompanies this article. 
```csharp
using System;
namespace TwoCardPokerLib {
  // -------------------------------------------------
  public class Card {
    private string rank;
    private string suit;

    public Card() {
      this.rank = "A"; // A, 2, 3, . . ,9, T, J, Q, K
      this.suit = "c"; // c, d, h, s
    }
    public Card(string c) { . . . }
    public Card(int c) { . . . }
    public override string ToString() { . . . }
    public string Rank { . . . }
    public string Suit { . . . }
    public static bool Beats(Card c1, Card c2) { . . . }
    public static bool Ties(Card c1, Card c2) { . . . }
  } // class Card

  // -------------------------------------------------
  public class Deck {
    private Card[] cards;
    private int top;
    private Random random = null;

    public Deck() {
      this.cards = new Card[52];
      for (int i = 0; i < 52; ++i)
        this.cards[i] = new Card(i);
      this.top = 0;
      random = new Random(0);
    }
    public void Shuffle() { . . . }
    public int Count() { . . . }
    public override string ToString() { . . . }
    public Card[] Deal(int n) { . . . }
  } // class Deck

  // -------------------------------------------------
  public class Hand {
    private Card card1; // high card
    private Card card2; // low card

    public Hand() { . . . }
    public Hand(Card c1, Card c2) { . . . }
    public Hand(string s1, string s2) { . . . }
    public override string ToString() { . . . }
    private bool IsPair() { . . . }
    private bool IsFlush() { . . . }
    private bool IsStraight() { . . . }
    private bool IsStraightFlush() { . . . }
    private bool Beats(Hand h) { . . . }
    private bool Ties(Hand h) { . . . }
    public int Compare(Hand h) { . . . }
    public enum HandType { . . . }
  } // class Hand
} // ns TwoCardPokerLib
```

The Application UI Code

Once I had the underlying TwoCardPokerLib library code finished, I created a dummy UI component. I started a new project in Visual Studio 2008 using the C# WinForm Application template and named the application TwoCardPokerGame.
Using the Visual Studio designer, I dragged a Label control from the Toolbox collection onto the application design surface and modified the control's Text property from "label1" to "Two Card Poker." Next I added two more Label controls ("Your Hand" and "Computer's Hand"), two TextBox controls, two Button controls ("Deal" and "Evaluate"), and a ListBox control. I didn't change the default control names of any of the eight controls – textBox1, textBox2, button1 and so on. Once my design was in place, I double-clicked on the button1 control to have Visual Studio generate an event handler skeleton for the button and load the file Form1.cs into the code editor. At this point I right-clicked on the TwoCardPokerGame project in the Solution Explorer window, selected the Add Reference option from the context menu, and pointed to the file TwoCardPokerLib.dll. In Form1.cs, I added a using statement so that I wouldn't need to fully qualify the class names in the library. Next, I added four class-scope static objects to my application; object h1 is the Hand for the user, and h2 is the Hand for the computer. Then I added some initialization code to the Form constructor. The Deck constructor creates a deck of 52 cards, in order from the ace of clubs to the king of spades, and the Shuffle method randomizes the order of the cards in the deck. Next I added the code logic to the button1_Click method as shown in Figure 3. For each of the two hands, I call the Deck.Deal method to remove two cards from the deck object, then pass those two cards to the Hand constructor and display the value of the hand in a TextBox control. Notice that the button1_Click method handles any exception by displaying a message in the ListBox control.
```csharp
private void button1_Click(object sender, EventArgs e) {
  try {
    ++dealNumber;
    listBox1.Items.Add("Deal # " + dealNumber);

    Card[] firstPairOfCards = deck.Deal(2);
    h1 = new Hand(firstPairOfCards[0], firstPairOfCards[1]);
    textBox1.Text = h1.ToString();

    Card[] secondPairOfCards = deck.Deal(2);
    h2 = new Hand(secondPairOfCards[0], secondPairOfCards[1]);
    textBox2.Text = h2.ToString();

    listBox1.Items.Add(textBox1.Text + " : " + textBox2.Text);
  }
  catch (Exception ex) {
    listBox1.Items.Add(ex.Message);
  }
}
```

Next, in the Visual Studio designer window I double-clicked on the button2 control to auto-generate the control's event handler skeleton. I added some simple code to compare the two Hand objects and display a message in the ListBox control. Notice that the button2_Click method does not directly handle any exceptions:

```csharp
private void button2_Click(object sender, EventArgs e) {
  int compResult = h1.Compare(h2);
  if (compResult == -1)
    listBox1.Items.Add(" You lose");
  else if (compResult == +1)
    listBox1.Items.Add(" You win");
  else if (compResult == 0)
    listBox1.Items.Add(" You tie");
  listBox1.Items.Add("-------------------------");
}
```

The Fault Injection Harness

Before creating the fault injection harness shown in Figure 1, I downloaded the key DLLs to my test host machine. These DLLs are part of a collection of .NET libraries named TestApi and can be found at testapi.codeplex.com. The TestApi library is a collection of software-testing-related utilities. Included in the TestApi library is a set of Managed Code Fault Injection APIs. (Read more about them at blogs.msdn.com/b/ivo_manolov/archive/2009/11/25/9928447.aspx.) I downloaded the latest fault injection APIs release, which in my case was version 0.4, and unzipped the download. I will explain what's in the download and where to place the fault injection binaries shortly. Version 0.4 supports fault injection testing for applications created using the .NET Framework 3.5.
The TestApi library is under active development, so you should check the CodePlex site for updates to the techniques I present in this article. Additionally, you may want to check for updates and tips on the blog of Bill Liu, the primary developer of the TestApi fault injection library, at blogs.msdn.com/b/billliu/. To create the fault injection harness I started a new project in Visual Studio 2008 and selected the C# Console Application template. I named the application FaultHarness and added some minimal code to the program template (see Figure 4):

```csharp
using System;

namespace FaultHarness {
  class Program {
    static void Main(string[] args) {
      try {
        Console.WriteLine("\nBegin TestApi Fault Injection environment session\n");

        // create fault session, launch application

        Console.WriteLine("\nEnd TestApi Fault Injection environment session");
      }
      catch (Exception ex) {
        Console.WriteLine("Fatal: " + ex.Message);
      }
    }
  } // class Program
} // ns
```

I hit the <F5> key to build and run the harness skeleton, which created a \bin\Debug folder in the FaultHarness root folder. The TestApi download has two key components. The first is TestApiCore.dll, which was located in the Binaries folder of the unzipped download. I copied this DLL into the root directory of the FaultHarness application. Then I right-clicked on the FaultHarness project in the Solution Explorer window, selected Add Reference, and pointed it to TestApiCore.dll. Next, I added a using statement for Microsoft.Test.FaultInjection to the top of my fault harness code so my harness code could directly access the functionality in TestApiCore.dll. I also added a using statement for System.Diagnostics because, as you'll see shortly, I want to access the Process and ProcessStartInfo classes from that namespace. The second key component in the fault injection download is a folder named FaultInjectionEngine. This holds 32-bit and 64-bit versions of FaultInjectionEngine.dll.
I copied the entire FaultInjectionEngine folder into the folder holding my FaultHarness executable, in my case C:\FaultInjection\FaultHarness\bin\Debug\. The 0.4 version of the fault injection system I was using requires the FaultInjectionEngine folder to be in the same location as the harness executable. Additionally, the system requires that the application under test binaries be located in the same folder as the harness executable, so I copied files TwoCardPokerGame.exe and TwoCardPokerLib.dll into C:\FaultInjection\FaultHarness\bin\Debug\. To summarize, when using the TestApi fault injection system, a good approach is to generate a skeleton harness and run it so that a harness \bin\Debug directory is created, then place file TestApiCore.dll in the harness root directory, place the FaultInjectionEngine folder in \bin\Debug, and place the application under test binaries (.exe and .dll) in \bin\Debug as well.

Using the TestApi fault injection system requires that you specify the application under test, the method in the application under test that will trigger a fault, the condition that will trigger the fault, and the kind of fault that will be triggered:

```csharp
string appUnderTest = "TwoCardPokerGame.exe";
string method =
  "TwoCardPokerGame.Form1.button2_Click(object, System.EventArgs)";
ICondition condition = BuiltInConditions.TriggerEveryOnNthCall(3);
IFault fault = BuiltInFaults.ThrowExceptionFault(
  new ApplicationException(
    "Application exception thrown by Fault Harness!"));
FaultRule rule = new FaultRule(method, condition, fault);
```

Notice that, because the system requires the application under test to be in the same folder as the harness executable, the name of the application under test executable does not need the path to its location. Specifying the name of the method that will trigger the injected fault is a common source of trouble for TestApi fault injection beginners. The method name must be fully qualified in the form Namespace.Class.Method(args).
My preferred technique is to use the ildasm.exe tool to examine the application under test to help me determine the triggering method's signature. From the special Visual Studio tools command shell I launch ildasm.exe, point it to the application under test, then double-click on the target method. Figure 5 shows an example of using ildasm.exe to examine the signature for the button2_Click method.

Figure 5 Using ILDASM to Examine Method Signatures

When specifying the trigger method signature, you do not use the method return type, and you do not use parameter names. Getting the method signature correct sometimes requires a bit of trial and error; for example, on my first attempt to target button2_Click I got the signature wrong and had to correct it. The TestApi download contains a Documentation folder with a concepts document that provides good guidance on how to correctly construct different kinds of method signatures, including constructors, generic methods, properties, and overloaded operators. Here I target a method located in the application under test, but I could also have targeted a method in the underlying TwoCardPokerLib.dll. After specifying the trigger method, the next step is to specify the condition under which the fault will be injected into the application under test. In my example I used TriggerEveryOnNthCall(3), which as you've seen injects a fault every third time the trigger method is called. The TestApi fault injection system has a neat set of trigger conditions, including TriggerIfCalledBy(method), TriggerOnEveryCall, and others. After specifying the trigger condition, the next step is to specify the type of fault that will be injected into the system under test. I used BuiltInFaults.ThrowExceptionFault. In addition to exception faults, the TestApi fault injection system has built-in return-type faults that allow you to inject erroneous return values into your application under test at run time.
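The trigger/condition/fault idea is easy to sketch outside of .NET. Here is a rough Python analogue of TriggerEveryOnNthCall(3) combined with ThrowExceptionFault, written as a decorator. This is an illustration only – the real TestApi engine injects the fault into a compiled application at run time without touching its source, and the evaluate_hands function below is hypothetical:

```python
import functools


def inject_fault_every_nth_call(n, exc):
    """Raise exc on every nth call of the wrapped function; else run normally."""
    def decorator(func):
        state = {"calls": 0}

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            state["calls"] += 1
            if state["calls"] % n == 0:
                raise exc  # the injected fault
            return func(*args, **kwargs)

        return wrapper
    return decorator


@inject_fault_every_nth_call(3, RuntimeError("Fault injected by test harness!"))
def evaluate_hands():
    return "You win"


print(evaluate_hands())  # 1st call: normal behavior
print(evaluate_hands())  # 2nd call: normal behavior
```

On the third call the wrapper raises, which lets a test observe whether the caller handles the unexpected error gracefully, just as the harness does to button2_Click.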
For example, a built-in return fault can cause the trigger method to return a (presumably incorrect) value of -1. After the fault trigger method, condition, and fault kind have been specified, the next step is to create a new FaultRule and pass that rule to a new FaultSession:

```csharp
FaultRule rule = new FaultRule(method, condition, fault);

Console.WriteLine("Application under test = " + appUnderTest);
Console.WriteLine("Method to trigger injected runtime fault = " + method);
Console.WriteLine("Condition which will trigger fault = On 3rd call");
Console.WriteLine("Fault which will be triggered = ApplicationException");

FaultSession session = new FaultSession(rule);
```

With all the preliminaries in place, the last part of writing the fault harness code is to programmatically launch the application under test in the fault session environment. When you execute the fault harness, it will launch the application under test in your fault session, with FaultInjectionEngine.dll watching for situations where the trigger method is called and the trigger condition is true. The tests here are performed manually, but you can also run test automation in a fault session. While the fault session is running, information about the session is logged into the current directory – that is, the directory that holds the fault harness executable and the application under test executable. You can examine these log files to help resolve any problems that might occur while you're developing your fault injection harness.

Discussion

The example and explanations I've presented here should get you up and running with creating a fault injection harness for your own application under test. As with any activity that's part of the software development process, you will have limited resources and should analyze the costs and benefits of performing fault injection testing.
In the case of some applications, the effort required to create fault injection testing may not be worthwhile, but there are many testing scenarios where fault injection testing is critically important. Imagine software that controls a medical device or a flight system. In situations such as these, applications absolutely must be robust and able to correctly handle all kinds of unexpected faults. There is a certain irony involved with fault injection testing. The idea is that, if you can anticipate the situations in which an exception can occur, you can often programmatically guard against that exception and test for the correct behavior of the guard. However, even in such situations, fault injection testing is useful for generating difficult-to-create exceptions. Additionally, it's possible to inject faults that are very difficult to anticipate, such as System.OutOfMemoryException. Fault injection testing is related to, and sometimes confused with, mutation testing. In mutation testing, you deliberately insert errors into the system under test, but then execute an existing test suite against the faulty system in order to determine whether the test suite catches the new errors. Mutation testing is a way to measure test suite effectiveness and ultimately increase test case coverage. As you've seen in this article, the primary purpose of fault injection testing is to determine whether the system under test correctly handles errors.

Thanks to the following technical experts for reviewing this article: Bill Liu and Paul Newson
create-react-app

📄 Table of contents
- What is CRA (create-react-app)
- Setting up a React project with CRA
- Setting up a Redux project with CRA

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ✦ ✦ ✦ ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

Current Versions
create-react-app: v3.4.1
react: v16.1.1
react-scripts: v1.0.17

◉ Create-React-App

Why
One of the major complaints that was always repeated about React was how difficult it was to get started building an application. So, what's a web developer to do when faced with the cliff of Webpack/Babel/Brunch/NPM/Node/React? Simple, just use create-react-app!

What
create-react-app is, quite simply, an answer to the common complaint of JavaScript fatigue, or the sheer difficulty of getting started in a common React environment, which usually involves configuring a combination of things such as webpack, babel, and react. In a nutshell, it is the best way to start building a new single-page application in React. It sets up your development environment so that you can use the latest JavaScript features, provides a nice developer experience, and optimizes your app for production.

Updates to create-react-app – what's new!
- NPM manages packages but doesn't make it easy to execute any of them.
- NPX is a tool for executing Node packages. NPX comes bundled with NPM version 5.2+. What a package manager does, essentially, is automate the process of installing, updating, configuring, and removing pieces of software (packages) retrieved from a global registry.
- yarn is now the default in create-react-app. yarn is basically a far more predictable and faster version of npm and is absolutely a welcome addition!
- The index.html/favicon.ico files now live in public instead of the root directory!
- create-react-app now ships with Jest by default as its testing harness/framework! This means new React apps will treat testing like a first-class citizen!

Prerequisites
Node.js/npm (install these first of all). Then install the CLI:

```
$ npm install -g create-react-app
```
```
$ create-react-app --version
create-react-app version: 1.4.3

$ create-react-app new-project
# new-project is the directory where you want to develop your application.
# It takes some time here to create the minimal basic folder structure.

$ cd new-project
$ yarn start  # or npm start
```

Till here, the folder structure created so far looks like:

yarn start
This will start up a little development web server and give you a place to start working on your application. Running this will start up a development server at http://localhost:3000 and give you a nice little starter template.

yarn run build
"Bundles the app into static files for production." If you're comfortable with webpack/brunch and the build process for production sites, then you probably know what this all means. However, if all of this is Greek to you, we'll explain a little more. Basically, it's going to take all of the JavaScript code that the browser can't interpret without any help and turn it into a smaller ("minified") version that the browser can read and understand. It shrinks the file down as much as possible to reduce download time (it probably doesn't matter when you're sitting on a good internet connection, but the minute you drop to 3G or worse speeds you'll be very thankful for this!) while keeping your application running.

Exploring Our First App
Let's start by exploring the first application created by create-react-app above. public/index.html is the main HTML file that includes your React code and application and provides a context for React to render to. Specifically, it includes a div that your React app will show up inside. create-react-app also helpfully generates a readme for your project (README.md) in Markdown format, as well as a favicon (the icon that shows up in your browser's address bar and is used as the icon for bookmarks and whatnot).
The <div id="root"> element is the important part: this is where your React application gets rendered in your browser!

Next, we have our package.json file. This is what stores the list of dependencies for your application, as well as what describes your application (the name, the version number, etc.). Similarly, we also have a yarn.lock file, which is basically a locked list of all dependencies of our app.

The node_modules/ directory is where all of the dependencies get built/stored. For the most part, you shouldn't have to fiddle with this too much.

The important directory for us as developers is the src/ directory. This stores all of our modifiable code. We'll explore the contents of that directory really quickly, too.

- index.js stores our main render call from ReactDOM (more on that later). It imports the App.js component that we start off with and tells React where to render it (remember that div with an id of root?).
- index.css stores the base styling for our application.
- App.js is a sample React component called "App" that we get for free when creating a new app. We'll actually be deleting the entire contents of the file and starting over!
- App.css stores styling targeting that component specifically.
- App.test.js is our first set of tests to run against the sample App component that we start off with.
- Finally, logo.svg is just the React logo.

Wait, What Is JSX?

JSX is a templating language that looks VERY similar to HTML. This allows you to write templates for your components in a way that's very comfortable to developers already familiar with HTML, but there are a few extra things that it provides. First, you can embed any JavaScript inside of a JSX template by wrapping it in curly braces (these: {}).
Second, some words are special and reserved, such as class, so there are JSX-specific properties/attributes you need to use instead (such as className). In addition, a React component must only return a SINGLE JSX node at its root, so it's very common to wrap up your components in a single div that might have multiple children underneath it.

◉ Setting Up a Redux Project With Create-React-App

You can set up your project in two ways.

1) Install create-react-app globally (for older versions of npm). This provides a CLI command, create-react-app, to create a project:

npm install -g create-react-app   # Installs create-react-app globally
create-react-app new-project      # Creates a new project
cd new-project
npm start

2) Alternatively, you can use npx, a new binary that comes alongside npm v5.2 and above. To create a new project, just prepend npx before create-react-app redux-cra. This fetches create-react-app (if it has not been installed) and also creates a new project, redux-cra:

npx create-react-app redux-cra
cd redux-cra
npm start

This should start your project on port 3000.

A Glimpse of a CRA Project Folder Structure

Note: for your project to build, the following files must exist with exact filenames: public/index.html is the page template; src/index.js is the JavaScript entry point.

Redux Setup

Redux provides global state management with just a single state tree. Note: a Redux application needs only one store. To set up Redux, three packages are required: redux (the core library and store), react-redux (the React bindings), and redux-thunk (middleware that allows asynchronous action creators). Let's install them:

npm install redux react-redux redux-thunk --save

Action Creators

People often use the terms actions and action creators interchangeably. On the contrary, they are two different entities. Actions are payloads of information which send data to the store. Essentially, they are JavaScript objects.
What makes a particular action unique is the type key defined in it. Yeah, you guessed right: an action must have a type key. The type indicates the type of action being performed. On the other hand, action creators are functions that create actions; they simply return actions. For manageability and ease, redux-thunk allows you to write them as functions that return a function (a callback? 🤔). Moving forward, action creators will be defined in an actions folder in src (you can choose to name yours differently; name it "action-creators", "action-initiators", etc. They are also cool 😎).

/* src/actions/simpleAction.js */
export const simpleAction = () => dispatch => {
  dispatch({
    type: 'SIMPLE_ACTION',
    payload: 'result_of_simple_action'
  })
}

Reducers

These describe how the application state changes with respect to actions dispatched to the Redux store. Create a reducers folder in src and define reducers for your app actions. Here's an example of a reducer:

/* src/reducers/simpleReducer.js */
export default (state = {}, action) => {
  switch (action.type) {
    case 'SIMPLE_ACTION':
      return {
        result: action.payload
      }
    default:
      return state
  }
}

Using the combineReducers utility from redux, you can combine all reducers in the app into a single index reducer. For this project, we'll reference this index reducer as rootReducer.

Provider Store

The Redux store is all set up, but the application has no access to it yet. This is handled using the Provider React binding from react-redux. To make the store available to every component in the app, it makes sense to call this binding in the app's entry point, in this case index.js. Provider takes store and children as props. Calling the configureStore() method from src/store.js creates the Redux store, which is passed as props to Provider.
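The combineReducers utility and the rootReducer it produces are mentioned above without code. As an illustration only (this is a minimal hand-rolled sketch of what redux's combineReducers does, not the library's actual source), here is how a rootReducer built from the simpleReducer behaves, with each reducer owning one slice of the state tree:

```javascript
// Illustration: a minimal re-implementation of combineReducers, so you can
// see how rootReducer delegates each state slice to its own reducer.
const combineReducers = (reducers) => (state = {}, action) => {
  const next = {};
  for (const key of Object.keys(reducers)) {
    next[key] = reducers[key](state[key], action);
  }
  return next;
};

// The simpleReducer from above.
const simpleReducer = (state = {}, action) => {
  switch (action.type) {
    case 'SIMPLE_ACTION':
      return { result: action.payload };
    default:
      return state;
  }
};

// rootReducer manages one slice of state per reducer key.
const rootReducer = combineReducers({ simple: simpleReducer });

let state = rootReducer(undefined, { type: '@@INIT' });
console.log(state); // { simple: {} }

state = rootReducer(state, {
  type: 'SIMPLE_ACTION',
  payload: 'result_of_simple_action'
});
console.log(state); // { simple: { result: 'result_of_simple_action' } }
```

In a real project you would import combineReducers from redux rather than writing it yourself; the sketch only shows the delegation pattern.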
/* src/index.js */
import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux'
import configureStore from './store';
import './index.css';
import App from './App';
import registerServiceWorker from './registerServiceWorker';

ReactDOM.render(
  <Provider store={configureStore()}>
    <App />
  </Provider>,
  document.getElementById('root')
);
registerServiceWorker();

Start the app. All should still work fine 😄

Connect React Component to Store

This will be done using the connect React binding from react-redux:

export default connect()(App);

connect takes in two parameters: mapStateToProps and mapDispatchToProps.

/* src/App.js */
export default connect(mapStateToProps, mapDispatchToProps)(App);
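The excerpt ends before mapStateToProps and mapDispatchToProps are defined. The following is a hedged sketch of what they might look like for the simpleAction/simpleReducer pair; the prop names (simpleResult, onSimpleAction) are assumptions for illustration, not code from the article. Because both are plain functions, they can be exercised without React by feeding them a fake state and a recording dispatch:

```javascript
// Hypothetical mapStateToProps: selects the slice of the redux state tree
// that this component cares about and exposes it as props.
const mapStateToProps = (state) => ({
  simpleResult: state.simple.result
});

// The thunk-style action creator from earlier in the article.
const simpleAction = () => (dispatch) =>
  dispatch({ type: 'SIMPLE_ACTION', payload: 'result_of_simple_action' });

// Hypothetical mapDispatchToProps: wraps action creators so the component
// can call them without touching dispatch directly.
const mapDispatchToProps = (dispatch) => ({
  onSimpleAction: () => dispatch(simpleAction())
});

// Exercise them with a fake state and a recording, thunk-aware dispatch.
const props = mapStateToProps({ simple: { result: 'hello' } });
console.log(props.simpleResult); // 'hello'

const calls = [];
const fakeDispatch = (actionOrThunk) =>
  typeof actionOrThunk === 'function'
    ? actionOrThunk(fakeDispatch)            // redux-thunk-style handling
    : (calls.push(actionOrThunk), actionOrThunk);

mapDispatchToProps(fakeDispatch).onSimpleAction();
console.log(calls[0].type); // 'SIMPLE_ACTION'
```

The point of the design: the component only ever sees props (simpleResult, onSimpleAction) and never touches the store directly, which is what connect wires up for you.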
Describing our sample application concept

The concept is simple: a chat application that uses a standard Internet connection as an alternative to sending text messages. There are several popular applications like this in the Apple App Store, probably due to the cost of text messaging and support for devices such as the iPod Touch or iPad. This should be a neat real-world example that could be useful for users, and will cover specific topics in developing applications for iOS and Android.

Before starting with the development, let's list the set of screens that we'll need:

- Login / sign up: This screen will include a standard login and sign-up process for the user
- List of conversations: This screen will include a button to start a new conversation
- List of friends: This screen will provide a way to add new friends when we start a new conversation
- Conversation: This screen will have a list of messages between you and another user, and an option to reply

A quick wireframe layout of the application would help us get a better understanding of the layout of the app. The following figure shows the set of screens to be included in your app:

Developing our model layer

Since we have a good idea of what the application is, the next step is to develop the business objects or model layer of this application. Let's start out by defining a few classes that would contain the data to be used throughout the app. It is recommended, for the sake of organization, to add these to a Models folder in your project. Let's begin with a class representing a user.
The class can be created as follows:

public class User
{
    public int Id { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
}

Pretty straightforward so far; let's move on to create classes representing a conversation and a message as follows:

public class Conversation
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public string Username { get; set; }
}

public class Message
{
    public int Id { get; set; }
    public int ConversationId { get; set; }
    public int UserId { get; set; }
    public string Username { get; set; }
    public string Text { get; set; }
}

Notice that we are using integers as identifiers for the various objects. UserId is the value that would be set by the application to change the user that the object is associated with.

Now let's go ahead and set up our solution by performing the following steps:

- Start by creating a new solution and a new C# Library project.
- Name the project as XamChat.Core and the solution as XamChat.
- Next, let's set the library to a Mono / .NET 4.5 project. This setting is found in the project options dialog under Build | General | Target Framework.
- You could also choose to use a Portable Library for this project.

Writing a mock web service

Many times when developing a mobile application, you may need to begin the development of your application before the real backend web service is available. To prevent the development from halting entirely, a good approach would be to develop a mock version of the service.

First, let's break down the operations our app will perform against a web server. The operations are as follows:

- Log in with a username and password.
- Register a new account.
- Get the user's list of friends.
- Add friends by their usernames.
- Get a list of the existing conversations for the user.
- Get a list of messages in a conversation.
- Send a message.

Now let's define an interface that offers a method for each scenario.
The interface is as follows:

public interface IWebService
{
    Task<User> Login(string username, string password);
    Task<User> Register(User user);
    Task<User[]> GetFriends(int userId);
    Task<User> AddFriend(int userId, string username);
    Task<Conversation[]> GetConversations(int userId);
    Task<Message[]> GetMessages(int conversationId);
    Task<Message> SendMessage(Message message);
}

As you can see, we're using asynchronous communication with the TPL (Task Parallel Library) technology. Since communicating with a web service can be a lengthy process, it is always a good idea to use the Task<T> class for these operations. Otherwise, you could inadvertently run a lengthy task on the user interface thread, which would prevent user input during the operation. Task is definitely needed for web requests, since users could easily be using a cellular Internet connection on iOS and Android, and it will give us the ability to use the async and await keywords down the road.

Now let's implement a fake service that implements this interface. Place classes such as FakeWebService in the Fakes folder of the project. Let's start with the class declaration and the first method of the interface:

public class FakeWebService : IWebService
{
    public int SleepDuration { get; set; }

    public FakeWebService()
    {
        SleepDuration = 1;
    }

    private Task Sleep()
    {
        return Task.Delay(SleepDuration);
    }

    public async Task<User> Login(
        string username, string password)
    {
        await Sleep();
        return new User { Id = 1, Username = username };
    }
}

We started off with a SleepDuration property to store a number in milliseconds. This is used to simulate an interaction with a web server, which can take some time. It is also useful to change the SleepDuration value in different situations. For example, you might want to set this to a small number when writing unit tests so that the tests execute quickly. Next, we implemented a simple Sleep method that returns a task which introduces a delay of that number of milliseconds.
This method will be used throughout the fake service to cause a delay on each operation. Finally, the Login method merely used an await call on the Sleep method and returned a new User object with the appropriate Username. For now, any username or password combination will work; however, you may wish to write some code here to check specific credentials.

Now, let's implement a few more methods to continue our FakeWebService class as follows:

public async Task<User> Register(User user)
{
    await Sleep();
    return user;
}

public async Task<User[]> GetFriends(int userId)
{
    await Sleep();
    return new[]
    {
        new User { Id = 2, Username = "bobama" },
        new User { Id = 3, Username = "bobloblaw" },
        new User { Id = 4, Username = "gmichael" },
    };
}

public async Task<User> AddFriend(
    int userId, string username)
{
    await Sleep();
    return new User { Id = 5, Username = username };
}

For each of these methods, we kept exactly the same pattern as the Login method: each method delays and then returns some sample data. Feel free to mix in your own values.

Now, let's implement the GetConversations method required by the interface as follows:

public async Task<Conversation[]> GetConversations(int userId)
{
    await Sleep();
    return new[]
    {
        new Conversation { Id = 1, UserId = 2 },
        new Conversation { Id = 2, UserId = 3 },
        new Conversation { Id = 3, UserId = 4 },
    };
}

Basically, we just create a new array of Conversation objects with arbitrary IDs. We also make sure to match up the UserId values with the IDs we've used on the User objects so far.
Next, let's implement GetMessages to retrieve a list of messages as follows:

public async Task<Message[]> GetMessages(int conversationId)
{
    await Sleep();
    return new[]
    {
        new Message
        {
            Id = 1,
            ConversationId = conversationId,
            UserId = 2,
            Text = "Hey",
        },
        new Message
        {
            Id = 2,
            ConversationId = conversationId,
            UserId = 1,
            Text = "What's Up?",
        },
        new Message
        {
            Id = 3,
            ConversationId = conversationId,
            UserId = 2,
            Text = "Have you seen that new movie?",
        },
        new Message
        {
            Id = 4,
            ConversationId = conversationId,
            UserId = 1,
            Text = "It's great!",
        },
    };
}

Once again, we are adding some arbitrary data here, mainly making sure that UserId and ConversationId match our existing data so far.

And finally, we will write one more method to send a message as follows:

public async Task<Message> SendMessage(Message message)
{
    await Sleep();
    return message;
}

Most of these methods are very straightforward. Note that the service doesn't have to work perfectly; it should merely complete each operation successfully with a delay. Each method should also return test data of some kind to be displayed in the UI. This will give us the ability to implement our iOS and Android applications while filling in the web service later.

Next, we need to implement a simple interface for persisting application settings. Let's define an interface named ISettings as follows:

public interface ISettings
{
    User User { get; set; }
    void Save();
}

Note that you might want to set up the Save method to be asynchronous and return Task if you plan on storing settings in the cloud. We don't really need this with our application since we will only be saving our settings locally. Later on, we'll implement this interface on each platform using Android and iOS APIs. For now, let's just implement a fake version that will be used later when we write unit tests.
The fake implementation is just the following few lines of code:

public class FakeSettings : ISettings
{
    public User User { get; set; }

    public void Save() { }
}

Note that the fake version doesn't actually need to do anything; we just need to provide a class that will implement the interface and not throw any unexpected errors. This completes the Model layer of the application. Here is a final class diagram of what we have implemented so far:

Writing the ViewModel layer

Now that we have our model layer implemented, we can move on to write the ViewModel layer. The ViewModel will be responsible for presenting each operation to the UI and offering properties to be filled out by the View layer. Other common responsibilities of this layer are input validation and simple logic to display busy indicators.

At this point, it would be a good idea to include the ServiceContainer class in our XamChat.Core project, as we will be using it throughout our ViewModels to interact with the Model layer. We will be using it as a simple option to support dependency injection and Inversion of Control; however, you may use another library of your preference for this.

Normally, we start off by writing a base class for all the ViewModels within our project. This class always has the functionality shared by all the classes. It's a good place to put code that is used by all the methods in the classes; for example, change notifications, shared methods, or similar members.
Place the following code snippet in a new ViewModels folder within your project:

public class BaseViewModel
{
    protected readonly IWebService service =
        ServiceContainer.Resolve<IWebService>();
    protected readonly ISettings settings =
        ServiceContainer.Resolve<ISettings>();

    public event EventHandler IsBusyChanged = delegate { };

    private bool isBusy = false;

    public bool IsBusy
    {
        get { return isBusy; }
        set
        {
            isBusy = value;
            IsBusyChanged(this, EventArgs.Empty);
        }
    }
}

The BaseViewModel class is a great place to insert any common functionality that you plan on reusing throughout your application. For this app, we only need to implement some functionality to indicate whether the ViewModel layer is busy. We provided a property and an event that the UI will be able to subscribe to in order to display a wait indicator on the screen. We also added some fields for the services that will be needed. Another common feature that could be added would be validation for user inputs; however, we don't really need it for this application.

Implementing our LoginViewModel class

Now that we have a base class for all of the ViewModels, we can implement the ViewModel for the first screen in our application, the Login screen.
Now let's implement a LoginViewModel class as follows:

public class LoginViewModel : BaseViewModel
{
    public string Username { get; set; }
    public string Password { get; set; }

    public async Task Login()
    {
        if (string.IsNullOrEmpty(Username))
            throw new Exception("Username is blank.");
        if (string.IsNullOrEmpty(Password))
            throw new Exception("Password is blank.");

        IsBusy = true;
        try
        {
            settings.User = await service
                .Login(Username, Password);
            settings.Save();
        }
        finally
        {
            IsBusy = false;
        }
    }
}

In this class, we implemented the following:

- We subclassed BaseViewModel to get access to IsBusy and the fields containing common services
- We added the Username and Password properties to be set by the View layer
- We implemented a Login method to be called from the View, with validation on the Username and Password properties
- We set IsBusy during the call to the Login method on IWebService
- We set settings.User by awaiting the result from Login on the web service, and saved the settings once the log-in process completed

Basically, this is the pattern that we'll follow for the rest of the ViewModels in the application. We provide properties for the View layer to be set by the user's input, and methods to call for various operations. If it is a method that could take some time, such as a web request, you should always return Task and use the async and await keywords. Note that we used a try and finally block for setting IsBusy back to false. This will ensure it gets reset properly even when an exception is thrown. We plan on handling the error in the View layer, so we can display a native pop-up to the user showing a message.

Implementing our RegisterViewModel class

Since we have finished writing our ViewModel class to log in, we will now need to create one for the user's registration.
Let's implement another ViewModel to register a new user:

public class RegisterViewModel : BaseViewModel
{
    public string Username { get; set; }
    public string Password { get; set; }
    public string ConfirmPassword { get; set; }
}

These properties will handle inputs from the user. Next, we need to add a Register method as follows:

public async Task Register()
{
    if (string.IsNullOrEmpty(Username))
        throw new Exception("Username is blank.");
    if (string.IsNullOrEmpty(Password))
        throw new Exception("Password is blank.");
    if (Password != ConfirmPassword)
        throw new Exception("Passwords don't match.");

    IsBusy = true;
    try
    {
        settings.User = await service
            .Register(new User
            {
                Username = Username,
                Password = Password,
            });
        settings.Save();
    }
    finally
    {
        IsBusy = false;
    }
}

The RegisterViewModel class is very similar to the LoginViewModel class, but has an additional ConfirmPassword property for the UI to set. A good rule to follow for when to split up the ViewModel layer's functionality is to always create a new class when the UI has a new screen. This helps to keep your code clean and somewhat follow the single responsibility principle for your classes. This concept states that a class should only have a single purpose or responsibility. We'll try to follow this concept to keep our classes small and organized, which can be more important than usual when sharing code across platforms.

Implementing our FriendViewModel class

Next on the list is a ViewModel layer to work with a user's friend list. We will need a method to load a user's friend list and add a new friend. Now let's implement the FriendViewModel as follows:

public class FriendViewModel : BaseViewModel
{
    public User[] Friends { get; private set; }
    public string Username { get; set; }
}

Now we'll need a method to load friends.
This method is as follows:

public async Task GetFriends()
{
    if (settings.User == null)
        throw new Exception("Not logged in.");

    IsBusy = true;
    try
    {
        Friends = await service
            .GetFriends(settings.User.Id);
    }
    finally
    {
        IsBusy = false;
    }
}

Finally, we'll need a method to add a new friend, and then update the list of friends contained locally:

public async Task AddFriend()
{
    if (settings.User == null)
        throw new Exception("Not logged in.");
    if (string.IsNullOrEmpty(Username))
        throw new Exception("Username is blank.");

    IsBusy = true;
    try
    {
        var friend = await service
            .AddFriend(settings.User.Id, Username);

        //Update our local list of friends
        var friends = new List<User>();
        if (Friends != null)
            friends.AddRange(Friends);
        friends.Add(friend);

        Friends = friends
            .OrderBy(f => f.Username)
            .ToArray();
    }
    finally
    {
        IsBusy = false;
    }
}

Again, this class is fairly straightforward. The only thing new here is that we added some logic to update the list of friends and sort them within our client application rather than on the server. You could also choose to reload the complete list of friends if you have a good reason to do so.

Implementing our MessageViewModel class

Our final required ViewModel layer will handle messages and conversations. We need to create a way to load conversations and messages, and to send a new message. Let's start implementing our MessageViewModel class as follows:

public class MessageViewModel : BaseViewModel
{
    public Conversation[] Conversations { get; private set; }
    public Conversation Conversation { get; set; }
    public Message[] Messages { get; private set; }
    public string Text { get; set; }
}

Next, let's implement a method to retrieve a list of conversations as follows:

public async Task GetConversations()
{
    if (settings.User == null)
        throw new Exception("Not logged in.");

    IsBusy = true;
    try
    {
        Conversations = await service
            .GetConversations(settings.User.Id);
    }
    finally
    {
        IsBusy = false;
    }
}

Similarly, we need to retrieve a list of messages within a conversation.
We will need to pass the conversation ID to the service as follows:

public async Task GetMessages()
{
    if (Conversation == null)
        throw new Exception("No conversation.");

    IsBusy = true;
    try
    {
        Messages = await service
            .GetMessages(Conversation.Id);
    }
    finally
    {
        IsBusy = false;
    }
}

Finally, we need to write some code to send a message and update the local list of messages as follows:

public async Task SendMessage()
{
    if (settings.User == null)
        throw new Exception("Not logged in.");
    if (Conversation == null)
        throw new Exception("No conversation.");
    if (string.IsNullOrEmpty(Text))
        throw new Exception("Message is blank.");

    IsBusy = true;
    try
    {
        var message = await service.SendMessage(
            new Message
            {
                UserId = settings.User.Id,
                ConversationId = Conversation.Id,
                Text = Text,
            });

        //Update our local list of messages
        var messages = new List<Message>();
        if (Messages != null)
            messages.AddRange(Messages);
        messages.Add(message);
        Messages = messages.ToArray();
    }
    finally
    {
        IsBusy = false;
    }
}

This concludes the ViewModel layer of our application and the entirety of the shared code used on iOS and Android. For the MessageViewModel class, you could have also chosen to put the GetConversations method and Conversations property in their own class, since they could be considered a separate responsibility, but it is not strictly necessary. Here is the final class diagram of our ViewModel layer:

Summary

In this article, we went over the concept for a sample application called XamChat. We also implemented the core business objects for the application in the Model layer. Since we do not have a server to support this application yet, we implemented a fake web service. This gives us the flexibility to move forward with the app without building a server application. We also implemented the ViewModel layer. This layer will expose operations in a simple way to the View layer.
Next, we’ll look into several methods that show us information about specific threads. First, we’ll examine the currentThread() method:

Gets the Thread object that represents the current thread of execution. The method is static and may be called through the Thread class name.

This is a static method of the Thread class, and it simply returns a Thread object that represents the current thread; the current thread is the thread that called the currentThread() method. The object returned is the same Thread object first created for the current thread. But why is this method important? The Thread object for the current thread may not be saved anywhere, and even if it is, it may not be accessible to the called method. For example, let’s look at a class that performs socket I/O and stores the data it reads into an internal buffer. We’ll show the full implementation of this class in the next chapter, but for now, we’re interested only in its interface:

public class AsyncReadSocket extends Thread {
    StringBuffer result;

    public AsyncReadSocket(String host, int port) {
        // Open a socket to the given host.
    }

    public void run() {
        // Read data from a socket into the result string buffer.
    }

    // Get the string already read from the socket so far.
    // Only allows "Reader" threads to execute this method.
    public String getResult() {
        String reader = Thread.currentThread().getName();
        if (reader.startsWith("Reader")) {
            String retval = result.toString();
            ...
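To see currentThread() in action outside the socket example, here is a small self-contained sketch (the class name and the "Reader-1" thread name are made up for illustration) that mimics the name check performed in getResult():

```java
public class CurrentThreadDemo {
    // Mimics the gatekeeping in getResult(): only threads whose name
    // starts with "Reader" are allowed through.
    static boolean allowedToRead() {
        return Thread.currentThread().getName().startsWith("Reader");
    }

    public static void main(String[] args) throws InterruptedException {
        // The JVM names the initial thread "main", so it is refused.
        System.out.println("main allowed? " + allowedToRead());

        // A thread we explicitly name "Reader-1" passes the check.
        Thread reader = new Thread(
            () -> System.out.println("reader allowed? " + allowedToRead()),
            "Reader-1");
        reader.start();
        reader.join();
    }
}
```

Note that allowedToRead() gives a different answer depending on which thread calls it, even though the Thread object is never stored anywhere. That is exactly why currentThread() matters.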
Well, I have this string in my class, and when I try to type two words or more it will mess up and put those values into my other cin statements. I'm sure there must be a way to allow whitespace, but I have yet to discover it from searching on the net. I removed all the other methods to make it a lot easier to read.

Also, is there any way to limit strings? I have looked on the net about it, but from what I can tell strings predefine their length and I have seen no way to change that.

Code:
#include <iostream>
#include <string>
#include <iomanip>
using namespace std;

class cPRODUCT
{
private:
    long nSerialNum;
    string ProductDesc;
    float fWholesale;
    float fRetailcost;
    int nInStock;
    float fTotalValue;
public:
    cPRODUCT(void)
    {
        nSerialNum = 0;
        ProductDesc = "";
        fWholesale = 0;
        fRetailcost = 0;
        nInStock = 0;
        fTotalValue = 0;
    }

    void enterProdInfo(void)
    {
        cout << "Please enter the product description (max 18 characters): ";
        cin >> ProductDesc;
    }
};

Any help you guys can give would be great.

Thanks,
...Dan

Hopefully this isn't a really obvious answer like my previous two lol.
Hello there. So I'm playing around with Python and pysfml, and I started writing a pong clone. I began with creating the UI, and I've got a little problem with callbacks and buttons. As you will see in the code (link below), in gamestatesettings.py I create a list of buttons, and in each of them I assign a callback method to invoke when pressed. And in this very method, I also invoke code on the same button explicitly, e.g.:

def diff_up(self, engine):
    engine.game_settings.difficulty_up()
    self.butt_list[0].set_text('# ' + engine.game_settings.get_difficulty())

which I find inconvenient. And I don't have any idea how I could make this look nice. So maybe someone here will be so kind and suggest some solution for me?
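One way to tidy this up (a sketch only; the Button and Engine classes below are stand-ins for the poster's code, which we can't see) is to have each button pass itself to its callback. The handler then updates the button it belongs to instead of reaching into butt_list by a hard-coded index:

```python
class Button:
    """Minimal stand-in for a UI button that calls a callback when pressed."""
    def __init__(self, text, callback):
        self.text = text
        self.callback = callback

    def set_text(self, text):
        self.text = text

    def press(self, engine):
        # Hand the button itself to the callback, so the handler never
        # needs to know the button's index in an external butt_list.
        self.callback(self, engine)


class Engine:
    """Stand-in for the poster's engine/game_settings; names are made up."""
    def __init__(self):
        self.difficulty = 1

    def difficulty_up(self):
        self.difficulty += 1

    def get_difficulty(self):
        return str(self.difficulty)


def diff_up(button, engine):
    # No self.butt_list[0] indexing: the pressed button is passed in.
    engine.difficulty_up()
    button.set_text('# ' + engine.get_difficulty())


engine = Engine()
button = Button('# 1', diff_up)
button.press(engine)
print(button.text)  # '# 2'
```

With this shape, adding a second button never requires touching diff_up; and if a callback needs extra arguments, functools.partial can bind them the same way.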
Guards a cursor and restores it when the guard is destroyed. If the guard is destroyed without first calling its cancel method, then the associated cursor's location is reset to its location at the time this guard was constructed.

Definition at line 366 of file util/Sawyer/CommandLine.h.
#include <CommandLine.h>

Construct a guard for a cursor. The guard remembers the cursor's location and restores the location if the guard is destroyed before its cancel method is called.
Definition at line 373 of file util/Sawyer/CommandLine.h.

Cancel the excursion guard. The associated cursor will not be reset to its initial location when this guard is destroyed.
Definition at line 378 of file util/Sawyer/CommandLine.h.

Starting location. This is the location to which the cursor is restored when the guard is destroyed, unless cancel has been called.
Definition at line 382 of file util/Sawyer/CommandLine.h.
Identity Management in Red Hat Enterprise Linux (IdM) supports two different integration options with Active Directory: synchronization and trust. I recently got a question about a comparison of the two. I was surprised to find that I haven't yet covered this topic in my blog. So let us close this gap!

The customer was interested in a comparison of the two. Here is the question he asked:

"To integrate IdM with AD 2016 I want to use winsync rather than trusts.
- We would like to be able to manage the SUDO, SELinux, SSH key and other options that are not in AD.
- I understand the advantages and disadvantages of each of the configurations, and it seems to me that synchronization is the best option to get the maximum of functionalities of IdM.
- But I would like to know the reason why Red Hat does not suggest synchronization. Red Hat documentation states: 'In some integration scenarios, the user synchronization may be the only available option, but in general, use of the synchronization approach is discouraged in favor of the cross-realm trust-based integration.' Is there any special reason why Red Hat recommends trusts (although more complicated) vs. winsync?"

Thank you for asking! We in fact do not recommend synchronization for several reasons that I will lay down below, but we also acknowledge some cases when synchronization might be the only option. So let us dive into the details…

When you have sync, you really have two accounts: one in AD and one in IdM. These would be two different users. In this case you need to keep the passwords in sync too. Keeping passwords in sync requires putting a password-intercepting plugin, passsync, on every AD domain controller, because it is never known which domain controller will be used for a password change operation. After you deploy the plugin to the domain controllers, you need to reset the password for every account so that the plugin can intercept the password and store it in the IdM account.
So in fact there is a lot of complexity related to synchronization. Let us add that this solution works only for a single domain: if you have more than one domain in a forest, or even several forests, you can't use sync. Synchronization is also done against one AD domain controller, so if the connection is down, synchronization stops working, and there is no failover.

Another issue to keep in mind is that with synchronization you have two different places where user authentication happens. For compliance purposes all your audit tools need to be pointed at yet another environment, and they have to collect and merge logs from both IdM and AD. It is usually doable, but it is yet another complexity to keep in mind. A related aspect is account policies: when you have two different accounts, you need to make sure the policies are the same and do not diverge. Finally, synchronization only works for user accounts, not groups. The group structure needs to be recreated on the IdM side.

Benefits of Trust

With trust there are no duplicate accounts. Users always authenticate against AD, so all the audit trails are in a single place. Since there is only one account per user, all the settings that apply to the account (password length, strength, expiration, etc.) are always consistent with the company-wide policy, and you do not need to check and enforce them in more than one place. This makes it easier to pass audits. Trusts are established at the environment-to-environment level, so there is no single point of failure. Trust allows users in all AD domains to access the IdM-managed environment, and since IdM can establish trusts with multiple AD forests if needed, you can cover all the forests in your infrastructure.

With the trust setup, POSIX attributes can either be managed in AD via schema extensions (if they are already there), dynamically created from AD SIDs on the fly by IdM and SSSD, or set on the IdM side as explicit overrides.
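The "dynamically created from AD SIDs" option can be illustrated with a short sketch. This is a deliberately simplified model of algorithmic ID mapping: the real SSSD implementation selects an ID range slot by hashing the domain SID, whereas here the range base is passed in explicitly, and the function name and range size are assumptions for illustration only.

```python
# Simplified illustration of algorithmic ID mapping: each AD domain is
# assigned a POSIX ID range, and the UID is derived from the RID (the
# last component of the user's SID) added to the range base.
def sid_to_uid(sid: str, range_base: int, range_size: int = 200000) -> int:
    """Map an AD SID like S-1-5-21-...-1104 to a POSIX UID."""
    parts = sid.split("-")
    if len(parts) < 4 or parts[0] != "S":
        raise ValueError(f"not a valid SID: {sid}")
    rid = int(parts[-1])  # relative identifier, unique within the domain
    if rid >= range_size:
        raise ValueError("RID falls outside the configured ID range")
    return range_base + rid

print(sid_to_uid("S-1-5-21-3623811015-3361044348-30300820-1104", 1400000))
# -> 1401104
```

Because the mapping is deterministic, every client derives the same UID for the same AD user without any synchronized account existing on the IdM side.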
This capability also allows setting different POSIX attributes for different sets of clients. This is usually needed in complicated environments where the UID and GID namespaces have duplicates due to NIS history or mergers. AD groups are transparently exposed by IdM to the clients without the need to recreate them; IdM groups can be created on top of, or in addition to, AD groups. The information above can be summarized in the following table:

So the promise of the trust setup is to provide a more flexible, reliable and feature-rich solution. But this is a promise, and this is why I put an asterisk in the table. The reality is more complex: in practice there are challenges with the trust setup too. It turns out the trust setup assumes a well-configured and well-behaved AD environment. In multiple deployments, Red Hat consultants uncovered misconfigurations of AD, DNS, firewalls and other elements of the infrastructure that made deployments more painful than we would like them to be. Despite these challenges, some of which are covered in the article Discovery and Affinity published last year, and some of which will be covered in my talk at Red Hat Summit in May, most current deployments find a way to resolve the deficiencies of the existing infrastructure and get to a stable and reliable environment.

So synchronization might be attractive in the case of a small environment, but even in such an environment setting up a trust would not be a big complication. The only case where I would single out synchronization is two-factor authentication (2FA) using one-time password (OTP) tokens. Customers usually want some subset of users to be able to use OTP tokens to log into Linux systems. Since AD does not support 2FA natively, some other system needs to assign a token to the AD user. It can be a 3rd-party solution if the customer has one, or it can be IdM.
In this case, to provide centralized OTP-based authentication for the Linux systems managed by IdM, the accounts that would use OTP need to be created in IdM. This can be done in different ways: by syncing them from AD using winsync, by syncing them from AD using the ipa migrate-ds command, by a script that loads user data from some other source using the IdM CLI or LDAP operations, or just manually. Once the user is created, a password and token can be assigned to them in IdM, or the account can be configured to proxy authentication to an existing 2FA solution via RADIUS.

IdM allows enforcing 2FA for a selected set of systems and services. To learn how, please read the Red Hat documentation about authentication indicators. This is the best approach: it allows the general population of users to access systems with their AD password, while a selected set of special users is required to use 2FA on a specific subset of hosts. The only limitation is that this approach works only on Red Hat Enterprise Linux 7 systems; older systems have limitations in their OTP support. If all users need OTP tokens to log into the Linux systems, then trust does not make sense and syncing accounts might be a more attractive option.

Thank you for reading! Comments and suggestions are welcome!
https://www.redhat.com/zh/blog/sync-vs-trust-red-hat-enterprise-linux-identify-management-integration-active-directory
12 tips to increase the performance of your ASP.NET application drastically – Part 1

DevToolsGuy / Friday, August 07, 2015

This is a guest post by Brij Bhushan Mishra, a Microsoft MVP (ASP.NET/IIS), C# Corner MVP, CodeProject Insider, former CodeProject MVP, CodeProject Mentor, and Platinum Member at CodeProject. He has around 6 years of experience as a Sr. Developer/Architect and a passion for computers since childhood.

1- Kernel mode cache – The response is cached in the kernel by http.sys, which listens for requests directly at the TCP layer. We can save all the time spent in the IIS/ASP.NET pipeline, the page lifecycle, our custom code, the time taken in the DB, etc. Let's see how we can implement it.

a) Go to IIS and select the web site.
b) Click the Output Cache icon on the right, under the IIS section.
c) In the right panel, under Actions, click Add. The following dialog will open.

In the first red encircled area, we define the file extensions that we want to cache at the kernel level. In the second encircled area, we need to select the checkbox. The third encircled area shows the three options provided to invalidate the cache; we can configure them based on our requirements.

Note – there are some limitations on caching at the kernel level. As all the features of IIS are implemented at user level, we will not be able to leverage any of those. Refer to the MSDN article for the complete list of cases where kernel caching cannot be used.

2- Pipeline mode (available on IIS 7+) – At the application pool level, there are two pipeline modes available: classic and integrated. Classic is available to support applications that were migrated from IIS6, so first let's understand these modes. IIS provides many features that are implemented as IIS modules, and in a similar way many ASP.NET features are implemented as HTTP modules that are part of the ASP.NET pipeline. In classic mode, each request goes first through the IIS pipeline and then through the ASP.NET pipeline before being served. Many features, such as authentication, are part of both pipelines.
In integrated mode, these two pipelines are merged into one, and all the modules (IIS and ASP.NET) are invoked from a single event pipeline as the request comes along, which removes the redundancy and helps the performance of the application. To set or update the pipeline mode, select the desired application pool, right-click and choose Properties. As encircled in the picture above, we can set the pipeline mode there.

Note – Don't change it blindly: if your application was migrated from IIS6, there could be some dependency on classic mode. After changing it, test thoroughly before moving ahead.

3- Remove unused modules – Each request goes through the ASP.NET pipeline, which contains many HTTP modules and, at the end, one HTTP handler that serves the request, as shown below. The request passes through each of the modules, is processed by the handler, and the response then travels back out through the same modules. Let's see how many modules are enabled by default in an ASP.NET application. I have added the code below to get all the modules:

HttpApplication httpApps = HttpContext.ApplicationInstance;
// Get list of active modules
HttpModuleCollection httpModuleCollections = httpApps.Modules;
ViewBag.ModulesCount = httpModuleCollections.Count;

This collection can be bound to any control for display. It shows eighteen modules, some of which we may not be using, but each request still has to pass through all of them. We can remove the unused modules from the pipeline. To remove a module, we just need to add configuration in web.config:

<system.webServer>
  <modules>
    <remove name="FormsAuthentication" />
    <remove name="DefaultAuthentication" />
    <remove name="OutputCache" />
    <remove name="AnonymousIdentification" />
    <remove name="RoleManager" />
  </modules>
</system.webServer>

Here we list the modules that we want to remove with the remove tag. As we removed five modules here, the next time we check the active modules it will show thirteen.
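The cost described above is easy to see in a language-agnostic sketch. The code below is not ASP.NET; it is a toy Python model of the same idea: every request flows through each registered module before the handler runs, so unused modules still add work to every single request (only the inbound pass is modeled; the module and handler names are made up for illustration).

```python
# Toy model of a request pipeline: each module transforms the request,
# then the handler produces the response. Removing modules shortens the
# path every request has to take.
def make_pipeline(modules, handler):
    def run(request):
        for module in modules:      # request flows "in" through each module
            request = module(request)
        return handler(request)     # finally the handler serves it
    return run

# Hypothetical modules, for illustration only
def auth(req):  return req + ">auth"
def cache(req): return req + ">cache"
def roles(req): return req + ">roles"

full    = make_pipeline([auth, cache, roles], lambda r: r + ">handler")
trimmed = make_pipeline([auth],               lambda r: r + ">handler")

print(full("req"))     # req>auth>cache>roles>handler
print(trimmed("req"))  # req>auth>handler
```

The trimmed pipeline does strictly less work per request, which is exactly the effect of removing unused HTTP modules in web.config.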
Note – For this demo I used VS 2013; you may get a different number with another version, but the key point is that we should remove all the modules that are not required.

4- runAllManagedModulesForAllRequests – This is another configuration one may have seen in web.config or applicationHost.config, where it is set globally for all the applications on that IIS instance:

<modules runAllManagedModulesForAllRequests="true">

It means all the managed modules run for every request coming to the application, but we normally don't want that: they should run only for ASP.NET requests, not for static files like css, js, jpg, html, etc. With this setting, even the requests for those resources go through the pipeline, which is unnecessary for these files and just adds extra overhead. But we cannot simply set it to false at the application level, so there are two ways:

a) Create a separate application just for serving these static resources, and set this setting to false in its web.config.
b) Or, in the same application, put all the static resources in a folder, add a web.config file specific to that folder, and set it to false there.

5- Do not write anything to the folder c:\inetpub\wwwroot – A file watcher monitors this folder, and if there is any change in it, the corresponding application pool is restarted. This is a feature of IIS: if there is any change in web.config or any other file, it restarts the application pool so that your modified application serves subsequent requests. Now say you write an application log to some text file inside the application folder, making a couple of entries per request; the application pool would then be restarted that many times, which would be hazardous for your application. So do not write or change anything in this folder unless it is part of the application binaries.

6- Remove extra view engines –

a) As we know, view engines are part of the MVC request life cycle and are responsible for finding the view and processing it.
MVC also allows us to add our own custom view engines. Let's create a default MVC application and try to return a view that does not exist in the solution. When we run this application, it shows the following error. It shows that MVC searched for WebForms views (.aspx/.ascx) as well as Razor views, because both view engines are registered by default. So we should remove all the extra view engines. We need to add the following code in the Application_Start method, which is available in Global.asax:

// Remove all the view engines
ViewEngines.Engines.Clear();
// Add back the Razor engine (which we are using)
ViewEngines.Engines.Add(new RazorViewEngine());

Now let's run it again: it is looking only for Razor files.

b) If we look carefully at the above screenshot, we see that it is still looking for both C# and VB files. Say in our solution we have never used VB; then there is no point looking for vbhtml files. To fix this we need to write our own custom view engine. So let's write our custom RazorViewEngine:

public class MyCustomViewEngine : RazorViewEngine
{
    public MyCustomViewEngine()
    {
        base.AreaViewLocationFormats = new string[] { "~/Areas/{2}/Views/{1}/{0}.cshtml", "~/Areas/{2}/Views/Shared/{0}.cshtml" };
        base.AreaMasterLocationFormats = new string[] { "~/Areas/{2}/Views/{1}/{0}.cshtml", "~/Areas/{2}/Views/Shared/{0}.cshtml" };
        base.AreaPartialViewLocationFormats = new string[] { "~/Areas/{2}/Views/{1}/{0}.cshtml", "~/Areas/{2}/Views/Shared/{0}.cshtml" };
        base.ViewLocationFormats = new string[] { "~/Views/{1}/{0}.cshtml", "~/Views/Shared/{0}.cshtml" };
        base.MasterLocationFormats = new string[] { "~/Views/{1}/{0}.cshtml", "~/Views/Shared/{0}.cshtml" };
        base.PartialViewLocationFormats = new string[] { "~/Views/{1}/{0}.cshtml", "~/Views/Shared/{0}.cshtml" };
        base.FileExtensions = new string[] { "cshtml" };
    }
}

Here I have inherited from RazorViewEngine, and in the constructor we define all the possible locations where a view file can exist, including the allowed file extensions. Now let's use this view engine in Global.asax and run the application.
Now it looks only for C# Razor files, which makes sense and is performance friendly.

Conclusion – In this post, we discussed the following six tips, which can easily be applied to any ASP.NET application:

1- Kernel mode cache
2- Pipeline mode
3- Remove unused modules
4- runAllManagedModulesForAllRequests
5- Don't write to wwwroot
6- Remove unused view engines and languages

In the next post of the series we will discuss five more tips that will work as a performance booster for your applications. Cheers, Brij
https://www.infragistics.com/community/blogs/b/devtoolsguy/posts/12-tips-to-increase-the-performance-of-asp-net-application-drastically-part-1
Importing JavaScript Resources in QML

Description of how to import and use JavaScript resources in QML documents.

A JavaScript resource may import another in the following fashion:

import * as MathFunctions from "factorial.mjs";

Or:

.import "filename.js" as Qualifier

The former is standard ECMAScript syntax for importing ECMAScript modules, and only works from within ECMAScript modules as denoted by the mjs file extension. The latter is an extension to JavaScript provided by the QML engine and will also work with non-modules. As an extension superseded by the ECMAScript standard, its usage is discouraged. include() is deprecated and should be avoided; it will be removed in a future version of Qt.

Importing a QML Module from a JavaScript Resource
https://doc.qt.io/qtforpython-6.2/overviews/qtqml-javascript-imports.html
SourceKit is more than just crashes & HUDs! It's a powerful tool that can empower you to be a more productive Swift programmer. In this talk at App Builders 2016, JP Simard looks at a few examples of how you can leverage SourceKit to accomplish powerful tasks simply.

Introduction (00:00)

Raise your hand if you are an Android developer (I have chased them away). This talk is about you, and how you can do things that will empower your development in ways that you could not before, using this tool called SourceKit.

First off, my name's JP. I am a Canadian transplant in San Francisco. I work on the Objective-C and Swift parts of Realm (a mobile database). We have built Realm from the ground up (a data engine, completely built from scratch). We launched it a week after Swift was announced (WWDC 2014). We were about to launch it the week before, but decided to wait a week to see what Apple had. We had no expectation that they would announce this new programming language: we scrambled, and made sure that we supported the language.

That brought us to a set of challenges: we were building a product on this, and we needed API docs, clean code, and a way to enforce code style and consistency across the product. We also dug around the Xcode 6 internals to see whether there was anything we could hack at. Swift was a more complex language to parse than Objective-C. In some ways, Swift is much simpler than Objective-C (it has a certain set of rules, and those rules are modular and flexible). But there are configurations that were not possible in Objective-C; you have constructs and keywords that do not exist in any other language. Also, it is an extremely flexible language, composing multiple simple concepts (e.g. you could have types nested within other types, with functions inside, with a local type declared in your function, with a closure…). You have capabilities that are not possible in Objective-C, which made Objective-C a simpler language to work with.
If you needed to build a custom tool for refactoring or productivity with Objective-C, you could probably get by with a few regular expressions, because you had one level of nesting: class and property declarations, and methods. You cannot have classes in classes (or anything as complex as you can have in Swift). Ultimately, it was hard to do anything right with a constantly evolving language like Swift (and it still is; we heard from Daniel a number of things that are going to change in preparation for Swift 3, and it is not showing any signs of slowing down). If you were building a custom tool using regular expressions, trying to manipulate your Swift source to do anything you need on a large project, you would have a hard time keeping up with all the changes. Then we found SourceKit: the tool that we decided to build our additional tooling on.

The SourceKit You Know (04:28)

SourceKit is more than what you know (aka the thing that crashes and displays a HUD in Xcode whenever you are trying to get your job done): it is a much more powerful tool, and it is hackable. There are many things you could build on top of it that will make your life as a developer more productive. You have interacted with SourceKit many times if you have written Swift or used Xcode. The functionality powered by SourceKit within Xcode includes, for example: syntax highlighting; code completion (when you are writing a chunk of your code and you hit tab and the rest of it fills in magically); code formatting; indentation; and interface generation. When you are using an Objective-C API from Swift and you want to look at the interface for that API, Xcode calls through SourceKit, goes through the Clang importer, and does the required transformations. These transformations are getting more complex, with heuristics that determine if there is any repetition in the API (things that are hard to estimate). SourceKit does this heavy lifting, e.g.
generating documentation: you Option-click on a token in Xcode to see what this type is, what the parameters are, what they take, and what constraints you can pass in. You do not necessarily see that the interface Xcode uses to talk to SourceKit is available to you as a developer. In fact, SourceKit was included in Apple's open sourcing of Swift itself. It is part of the Swift repo: you will see in the tools/SourceKit directory everything that Xcode uses to power its Swift capabilities. The description below is from the README in the repository:

"SourceKit is a framework for supporting IDE features like indexing, syntax-coloring, code-completion, etc. In general it provides the infrastructure that an IDE needs for excellent language support." – Apple

In general, you could build an IDE on top of SourceKit, and that is essentially what Xcode does.

The SourceKit You Do Not See (07:12)

There is a bunch of SourceKit that you are probably not familiar with, even if you interface with it on a daily basis via Xcode. Under the hood, there are a number of components that it combines and offers via an easy-to-use interface:

- libIDE
- libParse
- libFormat
- libXPC (Darwin-only)
- libdispatch (Darwin-only)

These are components of the Swift compiler (the parser, IDE functionality such as code completion, formatting functionality). There are also components that it pulls in to get interprocess communication (XPC) and asynchronous capabilities (libdispatch). Because of these two dependencies, SourceKit is Darwin-only. It is fascinating that it can compile for iOS, which is also a Darwin platform. We may see an IDE for Swift on iOS in the future; the underlying tooling supports it. This is something that a member of the community could build if they were so inclined. Most of the internals are written in C++ (because it interacts with LLVM and other aspects of the Swift compiler).
The SourceKit You Interface With (08:28)

If you want to use SourceKit to accomplish your own tasks, you have a number of options:

- C interface
- sourcekitd.framework
- libsourcekitdInProc.dylib
- Official Python binding
- Unofficial Swift binding: SourceKitten

It exposes a C interface, which makes it nice to interact with via Swift; it would be hard to interact with a C++ API, given Swift's limited support for C++. There are two flavors of the framework that you can use SourceKit with: the out-of-process XPC version (sourcekitd.framework) and libsourcekitdInProc ("in process"). I am willing to say (you might not agree) that the fact that SourceKit lives out of process in Xcode is a major blessing. Swift being a complex language, difficult to parse, with many edge cases to cover, SourceKit often crashes (less these days, and it is getting better). But imagine if it ran in process in Xcode, like all of the previous tooling that Xcode had before Swift: it would have brought the entire IDE down every time you hit an edge case in the Swift syntax. It is a blessing in disguise.

You can also use the official Python binding for SourceKit, included in the repo as part of tools/SourceKit. It is my understanding that Apple uses this Python binding for a number of internal tools; I think they use it for their internal documentation generation, which is a whole stack built on Python.

apple/swift/tools/SourceKit (10:16)

For the rest of the talk, I will be talking about an unofficial binding that I affectionately call SourceKitten, which is a way to interact with SourceKit. Again, you interface with SourceKit via these APIs, and this is what Apple has to say about it:

"The stable C API for SourceKit is provided via the sourcekitd.framework which uses an XPC service for process isolation and the libsourcekitdInProc.dylib library which is in process."
– Apple

I will focus on using SourceKit through SourceKitten, which is an abstraction layer: a Swift binding for SourceKit, which over the years has led to some interesting problems. When you build tooling for a language in that language, and not only does the tooling change over time, but so does the language, you end up with extra work. But it has still been a fun project. It is a little framework and command-line tool that you can use to interact with SourceKit. It is open source and available through Homebrew; this is how you install it:

$ brew install sourcekitten
...
$ sourcekitten version
0.12.2
$ sourcekitten help
Available commands:

   complete    Generate code completion options
   doc         Print Swift docs as JSON or Objective-C docs as XML
   help        Display general or command-specific help
   index       Index Swift file and print as JSON
   structure   Print Swift structure information as JSON
   syntax      Print Swift syntax information as JSON
   version     Display the current version of SourceKitten

The SourceKit You Build On (11:33)

SourceKitten gives you a number of the higher-level features available in the SourceKit C API, both as a programmatic interface in Swift and as a command-line utility: it can do code completion, index generation, extraction of the abstract syntax tree structure, and syntax information. The purpose of this talk is to go through examples of how you can use SourceKit with fairly minimal effort to accomplish daily tasks, and sometimes even augment what Xcode is capable of doing. A few of the examples that we will go through are: code analysis, analyzing the structure and makeup of your code. Most will apply to large projects (several thousand lines; tens, fifty, a hundred thousand line Swift projects are going to become more and more common), and this is where these examples will shine (e.g. large-scale refactoring of Swift code, large-scale migrations of Swift code).
Apple ships Xcode with a migrator for Swift itself: every time there are Swift changes in the standard library or in the language, Xcode can offer you nice shortcuts to update your code. But there are times when you need to update third-party dependencies that do not ship with their own custom migrator, and then you will be able to use these techniques. Code generation is the last example that we will go over.

Code Analysis (13:01)

For code analysis, I have a handful of examples. We will pop over to the command line.

$ cat PublicDeclarations.swift
public struct A {
    public func b() {}
    private func c() {}
}
$ echo $query
[recurse(.["key.substructure"][]?) |
 select(."key.accessibility" == "source.lang.swift.accessibility.public") |
 ."key.name"?]
$ sourcekitten structure --file PublicDeclarations.swift | jq $query
["A", "b()"]

This uses the SourceKitten command-line interface. We are running the structure command on a file (PublicDeclarations.swift). You have this public struct with a public function and a private function. Say you have a massive code base and you want to get a sense of how large your public API is. If we pipe this through SourceKitten and get the structure, we get this pseudo-AST out of it, wrapped in JSON. There is this substructure, which is a recursive child element; every structure has a substructure if there are any nested elements. At the top level, there is the struct declaration, which is public (SourceKit is already telling us this information). Within it, nested, we have an instance method declaration (again, public): b(). We also have another instance method declaration, at a private ACL level: c(). If we wanted to go through this massive code base, this massive API, and estimate over time how large our public API is, or get a consolidated list of all the declarations that we expose, we could parse this. I am only using Swift, SourceKitten, and this command-line JSON parser called jq.
JQ is a command-line Swift parser - pass it through, we can get some syntax highlighting for it. But if we print out queries, you can use something similar to an XML XPath: “give me, via recursion, trying to find my cursor, via recursion, all the substructures with an accessibility of public,” and print out the name”. If we run this, we will get all of the public declarations in our massive API (which I have consolidated to four lines). In a nutshell, this is one of the things that you can do to process this structure. That is one example of code analysis, get all of the public declarations in your API. $ cat FunctionsPerStruct.swift struct A { func one() {} } struct B { let nope = 0; func two() {}; func three() {} } $ echo $query [."key.substructure"[] | select(."key.kind" == "source.lang.swift.decl.struct") | {key: ."key.name", value: [ (."key.substructure"[] | select(."key.kind" == "source.lang.swift.decl.function.method.instance") | ."key.name")]} ] | from_entries $ sourcekitten structure --file FunctionsPerStruct.swift | jq $query {"A": ["one()"], "B": ["two()", "three()"]} If we go further, we can start poking at this structure in different ways, e.g. measure the number of functions per struct. We do not care about classes. We want to look at the structs (hard to do via regular expressions; especially with the Swift syntax constantly changing, you have attributes and modifiers). In this case, this is the entire example consolidated. If we run it through SourceKit, try to do a query on the resulting data set, we can get the functions that are nested within our structs, and this would ignore anything that is within a class. If you are trying to do some large-scale analysis of your code base, this can come in handy. cat LongFunctionNames.swift func okThisFunctionMightHaveAnOverlyLongNameThatYouMightWantToRefactor() {} func nahThisOnesFine() { func youCanEvenFindNestedOnesIfYouRecurse() {} } $ echo $query [recurse(.["key.substructure"][]?) 
| select(."key.kind" | tostring | startswith("source.lang.swift.decl.function")) | select((."key.name" | length) > 20) | ."key.name"] $ sourcekitten structure --file LongFunctionNames.swift | jq $query [ "okThisFunctionMightHaveAnOverlyLongNameThatYouMightWantToRefactor()", "youCanEvenFindNestedOnesIfYouRecurse()" ] Say you are trying to audit your API for long functions - you could reuse this existing structure. These are things that you can do fairly easily without deep understanding of C++, or LLVM, or the Swift compiler. You access this higher level interface and you can do powerful things with this. Code Refactoring (17:53) But, you are a developer: you want to write code, you do not want to read it. We can do is leverage SourceKit to have powerful refactoring tools (refactoring tools have not been far in the last 40, 50, 60 years). With SourceKit, Apple’s provided this higher level tool that we can build to further hack on, or build on. Below we use the SourceKit in framework, not the command-line interface. To be able to programmatically iterate over not just the structure, but the SourceKit index. Whenever you see Xcode freezing up, and you have this long progress bar that says indexing, and you cannot do anything until that completes… this is the indexing process. It is walking through all of your files in your project and building an index of all of the declarations, all of its APIs, that when you are trying to access it from other files, it knows that it can resolve those symbols. We can access this API directly via SourceKit and via SourceKitten. import SourceKittenFramework let arguments = Process.arguments let (file, usr, oldName, newName) = (arguments[1], arguments[2], arguments[3], arguments[4]) let index = (Request.Index(file: file).send()["key.entities"] as! [SourceKitRepresentable]) .map({ $0 as! 
[String: SourceKitRepresentable] }) func usesOfUSR(usr: String, dictionary: [String: SourceKitRepresentable]) -> [(line: Int, column: Int)] { if dictionary["key.usr"] as? String == usr, let line = dictionary["key.line"] as? Int64, let column = dictionary["key.column"] as? Int64 { return [(Int(line - 1), Int(column))] } return (dictionary["key.entities"] as? [SourceKitRepresentable])? .map({ $0 as! [String: SourceKitRepresentable] }) .flatMap { usesOfUSR(usr, dictionary: $0) } ?? [] } Here we have a short example, within the 30 lines of Swift, where we can build a fairly flexible refactoring tool for the command line. We are passing in: the file that we want to refactor its contents, a unique symbol resolution identifier (which is a unique way to represent some token in Swift), and the old and the new variable name. This does not even have to be a variable (as in this case), it can be any token, a function, a class, a struct, an enum. Our command-line tool has two functions. It will find the uses of this USR (this unique symbol resolution identifier). It will recursively reiterate over the structure this potentially arbitrarily nested structure of your AST, and find all of the lines and columns where we are using this USR. This will return a number of these tuples of line and column. The second function in our 30-line refactoring tool is a tool that will go over all of these locations and replace the use of that specific token with our new name. At the end it will print out the result. func renameUSR(usr: String, toName: String) { let uses = index.flatMap({ usesOfUSR(usr, dictionary: $0) }).sort(>) let fileContents = try! 
String(contentsOfFile: file)
    var lines = (fileContents as NSString).lines().map({ $0.content })
    for use in uses {
        lines[use.line] = lines[use.line]
            .stringByReplacingOccurrencesOfString(oldName, withString: newName)
    }
    print(lines.joinWithSeparator("\n"))
}

renameUSR(usr, toName: newName)

To use this, say that the top part below is our input file. Imagine a larger project, where we have two properties on two different types that have the same name, or even the same type signature. If you try to refactor this in a large project via regular expressions, or find and replace, you will have a hard time renaming one but not the other.

$ cat CodeToRefactor.swift
struct A { let prop = 0 }
struct B { let prop = 1 }
print(A().prop)
print(B().prop)

$ ./refactor.swift CodeToRefactor.swift s:vV4file1A4propSi prop newProp
struct A { let newProp = 0 }
struct B { let prop = 1 }
print(A().newProp)
print(B().prop)

With this small tool, we can refactor this in a safe and efficient way, in a way that will continue to adapt as the Swift language changes (we will not have to spend much time keeping this tool up to date with all the language changes). This is one example of how you can incorporate your own ad-hoc, custom-built refactoring tools, where you do not have to conform to some general-purpose refactoring tool that you have downloaded from Xcode plug-ins, for instance (useful, but not the end of the road). You can build your own tooling simply and quickly.

Code Migration (22:14)

Imagine you were using a third-party API and it underwent a major renaming of a bunch of types. You could keep your code working, even if those types conflicted with other imports that you are using - e.g. say you are using the Result type, and you are pulling in a library that used a different Result type; you could refactor, or migrate, all of this without breaking all the code.
Code Generation (22:44)

Code generation is something of increased interest lately on the Swift mailing lists and in the development forums. On Linux, it is currently required that developers duplicate their test functions in some global manifest. The reason is that XCTest does not have the same reflection capabilities on Linux as it does on Darwin on OS X: it cannot automatically detect what all the tests in an XCTestCase subclass are. There has been much discussion as to how to do this. With SourceKit, it is trivial: you can do this without writing any code, by passing the file through the SourceKit structure.

Here we are getting the classes that have an inherited type of XCTestCase. We are requiring that you subclass XCTestCase - we are not going to catch any of the methods called test on any of your other classes. From there, we are getting all of those substructures that are instance methods (we are not going to catch class methods), and they have to start with "test". From there we print all of our results. Here we get this JSON dictionary of classes to an array of functions. You could very easily build an XCTest manifest for Linux using a handful of lines. If you wanted to build your own test runner, you could use the same approach.

The last example I want to go through is more code generation - say you want to generate some Swift code based off of some existing Swift code. One popular use for this is to represent your model classes or your model structs in type-safe and accessible ways. There is a cool tool out there called QueryKit, which allows you to generate NSPredicates from type-safe queries. You can do things like person.age == 10, and it will generate an NSPredicate that is type-safe and will only work on integer properties (you will not be able to try to do that equality on strings).
$ cat GenerateXCTestManifest.swift
class MyTests: XCTestCase {
    func nope() {}; func testYolo() {}
}

$ echo $query
[."key.substructure"[] | select(."key.kind" == "source.lang.swift.decl.class") |
 select(."key.inheritedtypes"[]."key.name" == "XCTestCase") |
 {key: ."key.name", value: [
   (."key.substructure"[] |
    select(."key.kind" == "source.lang.swift.decl.function.method.instance") |
    select(."key.name" | startswith("test")) | ."key.name")]}
] | from_entries

$ sourcekitten structure --file GenerateXCTestManifest.swift | jq $query
{"MyTests": ["testYolo()"]}

We write a very short Swift program using the SourceKittenFramework that allows you to generate this code concisely. First, we represent properties and models. Models can have multiple properties, and each has a swiftSourceRepresentation computed variable, which outputs the equivalent Swift code:

import SourceKittenFramework

struct Property {
    let name: String
    let type: String
    var swiftSourceRepresentation: String {
        return "static let \(name) = Property<\(type)>(name: \"\(name)\")"
    }
}

struct Model {
    let name: String
    let properties: [Property]
    var swiftSourceRepresentation: String {
        return "extension \(name) {\n" +
            properties.map({" \($0.swiftSourceRepresentation)"}).joinWithSeparator("\n") +
            "\n}"
    }
}

In the second part of this code, we walk through the input file and recursively get all of its contents, saying: "for every struct that we have that has properties, let's get its model name, property names, and property types." If we then run this on our initial model, we can generate the equivalent static property extension, where we can do Person.name and get an equivalent representation of the property.

let structure = Structure(file: File(path: Process.arguments[1])!)
let models = (structure.dictionary["key.substructure"] as! [SourceKitRepresentable]).map({ $0 as!
[String: SourceKitRepresentable] }).filter({ substructure in
    return SwiftDeclarationKind(rawValue: substructure["key.kind"] as! String) == .Struct
}).map { modelStructure in
    return Model(name: modelStructure["key.name"] as! String,
        properties: (modelStructure["key.substructure"] as! [SourceKitRepresentable])
            .map({ $0 as! [String: SourceKitRepresentable] })
            .filter({ substructure in
                return SwiftDeclarationKind(rawValue: substructure["key.kind"] as! String) == .VarInstance
            }).map {
                Property(name: $0["key.name"] as! String, type: $0["key.typename"] as! String)
            }
    )
}
print(models.map({ $0.swiftSourceRepresentation }).joinWithSeparator("\n"))

This is a very small example, but you can see how it could expand to a massive code-generation effort, where you have to duplicate a number of types with only slight variations. You can write it in Swift, use the compiler to help you, and then have variants of that generated via code generation.

$ cat QueryKitModels.swift
struct Person {
    let name: String
    let age: Int
}

$ ./generate.swift QueryKitModels.swift
extension Person {
    static let name = Property<String>(name: "name")
    static let age = Property<Int>(name: "age")
}

This highlights some of the things that you could build on top of SourceKit, but there are others (code formatting, completion, syntax highlighting, documentation generation, real-time integrated development environment features). You can gain some inspiration from what other people have built on top of SourceKit. There are a handful of projects that I work on (e.g. documentation generation for Swift with a tool called Jazzy, and a linter called SwiftLint), and a number of other tools that other people in the community have been building (e.g. a refactoring tool, the Swift Refactorator). SourceKittenDaemon is an auto-completion back-end for text editors, used by a number of open source projects for Emacs, Vim, Atom, Sublime Text, and TextMate.
They have extended the Swift tooling to meet their needs when Xcode is not enough, or when Xcode is heavy-handed. The next time you feel that Swift tooling is not up to what you need, remember that you can build your own. Apple has gone out of its way to make sure that SourceKit is open source, and advertises it as a way to build IDEs on top of Swift. There are a number of opportunities out there, yours for the taking.

About the content

This talk was delivered live in April 2016 at App Builders. The video was transcribed by Realm and is published here with the permission of the conference organizers.
https://academy.realm.io/posts/appbuilders-jp-simard-sourcekit/
Hi again,

I carefully read all your suggestions, and I applied them before resubmitting this patch. These are the results:

- fixed configure selection of the right library version
- removed all cosmetic stuff
- removed unrelated changes
- removed personal testing header (oops... why did I leave it here before?)
- changed #ifdef to #if
- changed a check to be type consistent
- refactored the code to be more readable

I think you all may want a more comprehensive explanation for the following complaint.

Roman Shaposhnik wrote:
> The DC1394_VIDEO_MODE_320x240_YUV422 vs MODE_320x240_YUV422 should be
> handled through the macro substitution.

The choice of not handling these macros that way is based on the choice of the libdc1394 developers, who decided to apply a DC1394_ prefix to avoid namespace pollution. I think that keeping these defines this way lets the reader easily understand that the DC1394_* stuff is not ffmpeg-internal, but comes from outside headers. IMHO, they should stay as they are. Sorry.

> are you sure dc1394_camera_free_list (garbage) will work?

There is no way in that line for the list pointer to become garbage: the pointer is initialized by dc1394_camera_enumerate some lines above, with a calloc (which returns NULL if it fails); free(NULL) means "no operation is performed". In any case, I refactored that part to be simpler, also avoiding a free(NULL).

> You know what, #ifdefs suck. Please refactor the code in two
> separate functions.

Refactored and using #if instead of #ifdef. Now the old way and the new one have nothing but the name in common.

>> + if (dc1394_capture_enqueue(dc1394->camera, dc1394->frame) != DC1394_SUCCESS)
>
> This strikes me as wrong.

This puts the pointer to the current frame back into the DMA ring buffer, so a future dc1394_capture_dequeue() can use it again. The form is different, but the underlying code is almost the same as the dc1394_dma_done_with_buffer() used in the old version.
The difference is that the new version exposes the video frame structure directly, while the previous one had the frame inside the camera structure.

By the way (and unrelated to this patch): if the frame rate is not set to an appropriate value while launching ffmpeg (aka using ffplay), you get a division by zero while updating dc1394->packet.pts inside dc1394_read_packet(). I submitted a patch that solves this unrelated issue yesterday. (Message-ID: <47802ABA.7060302 at ngi.it>)

I hope that the attached patch will be good for you all.

In the meanwhile,
Kind regards

Alessandro

-------------- next part --------------
A non-text attachment was scrubbed...
Name: use_libdc1394_version_1_or_version_2.patch
Type: text/x-patch
Size: 9576 bytes
Desc: not available
URL: <>
http://ffmpeg.org/pipermail/ffmpeg-devel/2008-January/046856.html
iBlobManager Struct Reference

The blob manager. More...

#include <ivaria/blobs.h>

Detailed Description

The blob manager.

Definition at line 234 of file blobs.h.

Member Function Documentation

- Check if a moving object collides with another moving object on this layer (and of the same viewport). Return the object.
- Check if two moving objects collide.
- Create a new blob on a layer with a given image (name). If the image cannot be found it will try to load it with the given 'filename'. Note that currently only layers from 0 to 8 are supported.
- Create blob images. These functions are not usually called by users.
- Create a new geom object.
- Find a moving object. Only works on clickable objects.
- Create a modifier that blurs the source image.
- Create a modifier that modifies the color of the source image.
- Create a modifier that combines two other modifiers.
- Create a modifier that converts an image to grayscale.
- Load a texture to be used for blobs.
- Define a smaller texture that represents a part of a bigger texture. When defining blobs or animations you can use small or big textures automatically.
- Move a moving object so it appears in front on its given layer.
- Update all blobs and render them. This will automatically switch the renderer to 2D mode.

The documentation for this struct was generated from the following file:

Generated for Crystal Space 2.1 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/api/structiBlobManager.html
The ISO C++ Committee has closed the feature list for the next C++ standard, dubbed C++20, scheduled to be published by February 2020. C++20 will be a significant revision of C++, bringing modules, coroutines, and concepts, among its major new features.

At its latest meeting in Cologne, the ISO C++ Committee agreed on the last changes to the C++20 draft before submitting it to all the national standard bodies to gather their feedback. Among the latest additions are std::format, the C++20 synchronization library, and better threading. Contracts, on the contrary, have fallen out of the draft and have been postponed to C++23/C++26.

C++20 will introduce std::format, a type-safe text formatting facility:

string message = format("The answer is {}.", 42);

C++20 will also improve synchronization and thread coordination, including support for efficient atomic waiting and semaphores, latches, barriers, lock-free integral types, and more.

In previous meetings, the standard committee had already agreed on the inclusion of a few major features that promise to radically change the way developers use the language, including modules, coroutines, and concepts.

Module support will be orthogonal to namespaces and will enable the structuring of large codebases into logical parts without requiring the separation of header and source files. This is how you can define a simple module exporting a function, and its usage in a different file:

// math.cppm
export module mod1;
export int identity(int arg) {
    return arg;
}

// main.cpp
import mod1;
int main() {
    identity(100);
}

Coroutines are functions that can be stopped and resumed. They are stack-less, meaning they return to the caller when suspended. Coroutines support three new operators: co_await to suspend execution without returning a value to the caller; co_yield to suspend while returning a value; and co_return to finalize the execution and return a value. These three operators enable the creation of asynchronous tasks, generators, and lazy functions.
The following is an example of a generator:

generator<int> iota(int n = 0) {
    while(true)
        co_yield n++;
}

Another major new feature in C++20 will be concepts, which provide the foundation for equational reasoning. For example, concepts can be used to perform compile-time validation of template arguments and to perform function dispatch based on properties of types.

Besides contracts, C++20 will not include a number of additional major language features that have been deferred to C++23/C++26, including reflection metaclasses, executors, properties, and others. There is of course a lot more to C++20 than can be covered in a short post, so make sure you read the full trip report for the complete details.

Community comments

Thank god contracts is out, by Kurt Guntheroth:

Failing a contract caused a program to exit, when programs that need to keep running really wanted it to throw an exception. People need to learn (and I'm talking about the ISO committee now) that the way you terminate a C++ program is by throwing an exception. That way programs can decide not to terminate if terminating is a bad idea, like in a driverless car or a rocket.
https://www.infoq.com/news/2019/07/cpp20-draft-finalized/
CHAPTER 6

Names are used to refer to entities declared in a program. A declared entity (§6.1) is a package, class type, interface type, member (class, interface, field, or method) of a reference type, parameter, or local variable. Every declaration that introduces a name has a scope (§6.3), which is the part of the program text within which the declared entity can be referred to by a simple name.

Packages and reference types (that is, class types, interface types, and array types) have members (§6.4). The meaning of a name depends on the context in which it is used; the context of the occurrence is used to disambiguate among packages, types, variables, and methods with the same name.

Access control (§6.6) governs qualified access to a member: access using a qualified name, a field access expression (§15.11), or a method invocation expression (§15.12) in which the method is not specified by a simple name. The default access is that a member can be accessed anywhere within the package that contains its declaration; other possibilities are public, protected, and private.

Fully qualified and canonical names (§6.7) and naming conventions (§6.8) are also discussed in this chapter.

The name of a field, parameter, or local variable may be used as an expression (§15.14.1). The name of a method may appear in an expression only as part of a method invocation expression (§15.12). The name of a class or interface type may appear in an expression only as part of a class literal (§15.8.2), a qualified this expression (§15.8.4), a class instance creation expression (§15.9), an array creation expression (§15.10), a cast expression (§15.16), or an instanceof expression (§15.20.2), or as part of a qualified name for a field or method. The name of a package may appear in an expression only as part of a qualified name for a class or interface type.

Such an entity is introduced by, among others:

- a package declaration (§7.4)
- the field length, which is implicitly a member of every array type (§10.7)
- a method (abstract or otherwise) declared in a class type (§8.4)
- a method (always abstract) declared in an interface type (§9.4)
- an exception handler parameter declared in a catch clause of a try statement (§14.19)
- a local variable declared in a for statement (§14)

Not all identifiers in programs are a part of a name. Identifiers are also used in the following situations:
- after a "." token, to indicate a member of an object that is the value of an expression, or of the keyword super that appears before the "." token
- after a "." token and before a "(" token, to indicate a method to be invoked for an object that is the value of an expression, or for the keyword super that appears before the "." token
- in break (§14.14) and continue (§14.15) statements that refer to statement labels.

In the example:

class Test {
    public static void main(String[] args) {
        Class c = System.out.getClass();
        System.out.println(c.toString().length() + args[0].length() + args.length);
    }
}

the identifiers Test, main, and the first occurrences of args and c are not names; rather, they are used in declarations to specify the names of the declared entities. The names String, Class, System.out.getClass, System.out.println, c.toString, args, and args.length appear in the example. The occurrences of length in c.toString().length() and args[0].length() are not names, but rather identifiers appearing in method invocation expressions (§15.12).

The identifiers used in labeled statements and their associated break and continue statements are completely separate from those used in declarations. Thus, the following code is valid:

class TestString {
    char[] value;
    int offset, count;
    int indexOf(TestString str, int fromIndex) {
        char[] v1 = value, v2 = str.value;
        int max = offset + (count - str.count);
        int start = offset + ((fromIndex < 0) ? 0 : fromIndex);
    i:
        for (int i = start; i <= max; i++) {
            int n = str.count, j = i, k = str.offset;
            while (n-- != 0) {
                if (v1[j++] != v2[k++])
                    continue i;
            }
            return i - offset;
        }
        return -1;
    }
}

This code was taken from a version of the class String and its method indexOf, where the label was originally called test. Changing the label to have the same name as the local variable i does not obscure (§6.3.2) the label in the scope of the declaration of i.
The identifier max could also have been used as the statement label; the label would not obscure the local variable max within the labeled statement.

The scoping rules for various constructs are given in the sections that describe those constructs. For convenience, the rules are repeated here:

- The scope of the declaration of an observable (§7.4.3) top level package is all observable compilation units (§7.3). The declaration of a package that is not observable is never in scope. Subpackage declarations are never in scope.
- The scope of a type imported by a single-type-import declaration (§7.5.1) or type-import-on-demand declaration (§7.5.2) is all the class and interface type declarations (§7.6) in the compilation unit in which the import declaration appears.
- The scope of a top level type is all type declarations in the package in which the top level type is declared.
- The scope of a label declared by a labeled statement is the statement immediately enclosed by the labeled statement.
- The scope of a parameter of a method (§8.4.1) or constructor (§8.8.1) is the entire body of the method or constructor.
- The scope of a local variable declaration in a block (§14.4.2) is the rest of the block in which the declaration appears, starting with its own initializer (§14.4) and including any further declarators to the right in the local variable declaration statement.
- The scope of a local class declared in a block is the rest of the immediately enclosing block, including its own class declaration.
- The scope of a local variable declared in the ForInit part of a for statement (§14.13) includes the rest of the ForInit part, and the Expression, ForUpdate, and contained Statement parts of the for statement.
- The scope of a parameter of an exception handler declared in a catch clause of a try statement (§14.19) is the entire block associated with the catch.

These rules imply that declarations of class and interface types need not appear before uses of the types.
In the example:

package points;
class Point {
    int x, y;
    PointList list;
    Point next;
}
class PointList {
    Point first;
}

the use of PointList in class Point is correct, because the scope of the class declaration PointList includes both class Point and class PointList, as well as any other type declarations in other compilation units of package points.

A declaration d of a type named n shadows the declarations of any other types named n that are in scope at the point where d occurs, throughout the scope of d.

A declaration d of a field, local variable, method parameter, constructor parameter or exception handler parameter named n shadows the declarations of any other fields, local variables, method parameters, constructor parameters or exception handler parameters named n that are in scope at the point where d occurs, throughout the scope of d.

A declaration d of a label named n shadows the declarations of any other labels named n that are in scope at the point where d occurs, throughout the scope of d.

A declaration d of a method named n shadows the declarations of any other methods named n that are in an enclosing scope at the point where d occurs, throughout the scope of d.

A package declaration never shadows any other declaration.

A single-type-import declaration d in a compilation unit c of package p that imports a type named n shadows the declarations of:

A type-import-on-demand declaration never causes any other declaration to be shadowed.

A declaration d is said to be visible at point p in a program if the scope of d includes p, and d is not shadowed by any other declaration at p. When the program point we are discussing is clear from context, we will often simply say that a declaration is visible.

Note that shadowing is distinct from hiding (§8.3, §8.4.6.2, §8.5, §9.3, §9.5). Hiding, in the technical sense defined in this specification, applies only to members which would otherwise be inherited but are not, because of a declaration in a subclass.
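To make the distinction concrete, here is a minimal sketch of hiding (the class names Base and Derived are hypothetical, not taken from the specification text): the subclass declarations prevent the superclass field and static method of the same name from being inherited, yet the hidden members remain reachable through the superclass type.

public class HidingDemo {
    static class Base {
        String label = "base-field";            // field hidden by Derived.label
        static String kind() { return "base"; } // static method hidden by Derived.kind()
    }

    static class Derived extends Base {
        String label = "derived-field";            // hides Base.label
        static String kind() { return "derived"; } // hides Base.kind()
    }

    public static void main(String[] args) {
        Derived d = new Derived();
        Base b = d; // same object, viewed through the superclass type
        // Field access and static methods are resolved by the static type,
        // so the hidden members are still accessible via a Base reference:
        System.out.println(b.label);         // base-field
        System.out.println(d.label);         // derived-field
        System.out.println(Base.kind());     // base
        System.out.println(Derived.kind());  // derived
    }
}

Contrast this with overriding: an overridden instance method is chosen by the run-time class of the object, whereas a hidden field or static method is chosen by the compile-time type of the reference.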
Shadowing is also distinct from obscuring (§6.3.2). Here is an example of shadowing of a field declaration by a local variable declaration:

class Test {
    static int x = 1;
    public static void main(String[] args) {
        int x = 0;
        System.out.print("x=" + x);
        System.out.println(", Test.x=" + Test.x);
    }
}

This produces the output:

x=0, Test.x=1

This example declares:

- a class Test
- a class (static) variable x that is a member of the class Test
- a class method main that is a member of the class Test
- a parameter args of the main method
- a local variable x of the main method

The declaration of the class variable x would normally be available throughout the entire body of the method main. In this example, however, the class variable x is shadowed within the body of the method main by the declaration of the local variable x.

A local variable has as its scope the rest of the block in which it is declared (§14.4.2); in this case this is the rest of the body of the main method, namely its initializer "0" and the invocations of print and println. This means that:

- the expression "x" in the invocation of print refers to the value of the local variable x
- the invocation of println uses the qualified name (§6.6) Test.x, which uses the class type name Test to access the class variable x, because the declaration of x is shadowed at this point and cannot be referred to by its simple name.

The example:

import java.util.*;
class Vector {
    int val[] = { 1 , 2 };
}
class Test {
    public static void main(String[] args) {
        Vector v = new Vector();
        System.out.println(v.val[0]);
    }
}

compiles and prints:

1

using the class Vector declared here in preference to the class java.util.Vector that might be imported on demand.

Obscuring is distinct from shadowing (§6.3.1) and hiding (§8.3, §8.4.6.2, §8.5, §9.3, §9.5). The naming conventions of §6.8 help reduce obscuring.

This section provides an overview of the members of packages and reference types, as background for the discussion of qualified names and the determination of the meaning of names. For a complete description of membership, see §7.1, §8.2, §9.2, and §10.7.
The members of a package are its subpackages and all the top level (§7.6) class (§8) and top level interface (§9) types declared in all the compilation units (§7.3) of the package. In general, the subpackages of a package are determined by the host system (§7.2). However, the package java always includes the subpackages lang and io and may include other subpackages. No two distinct members of the same package may have the same simple name (§7.1), but members of different packages may have the same simple name. For example, it is possible to declare a package:

package vector;
public class Vector { Object[] vec; }

that has as a member a public class named Vector, even though the package java.util also declares a class named Vector. These two class types are different, reflected by the fact that they have different fully qualified names (§6.7). The fully qualified name of this example Vector is vector.Vector, whereas java.util.Vector is the fully qualified name of the standard Vector class. Because the package vector contains a class named Vector, it cannot also have a subpackage named Vector.

The members of a class type are all of the following: the members inherited from its direct superclass (except that Object has no direct superclass), the members inherited from any direct superinterfaces, and the members declared in the body of the class.

There is no restriction against a field and a method of a class type having the same simple name. Likewise, there is no restriction against a member class or member interface of a class type having the same simple name as a field or method of that class type.

A class may have two or more fields with the same simple name if they are declared in different interfaces and inherited. An attempt to refer to any of the fields by its simple name results in a compile-time error (§6.5.7.2, §8.2).
In the example:

interface Colors {
    int WHITE = 0, BLACK = 1;
}
interface Separates {
    int CYAN = 0, MAGENTA = 1, YELLOW = 2, BLACK = 3;
}
class Test implements Colors, Separates {
    public static void main(String[] args) {
        System.out.println(BLACK); // compile-time error: ambiguous
    }
}

the name BLACK in the method main is ambiguous, because class Test has two members named BLACK, one inherited from Colors and one from Separates.

A class type may have two or more methods with the same simple name if the methods have different signatures (§8.4.2), that is, different numbers of parameters or different parameter types; such a method name is said to be overloaded. A class type may also declare a method with the same signature as a method that would otherwise be inherited; in that case the method is not inherited. If the method not inherited is abstract, then the new declaration is said to implement it; if the method not inherited is not abstract, then the new declaration is said to override it.

In the example:

class Point {
    float x, y;
    void move(int dx, int dy) { x += dx; y += dy; }
    void move(float dx, float dy) { x += dx; y += dy; }
    public String toString() { return "("+x+","+y+")"; }
}

the class Point has two members that are methods with the same name, move. The overloaded move method of class Point chosen for any particular method invocation is determined at compile time by the overloading resolution procedure given in §15.12.

In this example, the members of the class Point are the float instance variables x and y declared in Point, the two declared move methods, the declared toString method, and the members that Point inherits from its implicit direct superclass Object (§4.3.2), such as the method hashCode. Note that Point does not inherit the toString method of class Object because that method is overridden by the declaration of the toString method in class Point.

An interface with no direct superinterfaces implicitly declares a public abstract member method m with signature s, return type r, and throws clause t corresponding to each public instance method m with signature s, return type r, and throws clause t declared in Object, unless a method with the same signature, same return type, and a compatible throws clause is explicitly declared by the interface.
In the example:

interface Colors {
    int WHITE = 0, BLACK = 1;
}
interface Separates {
    int CYAN = 0, MAGENTA = 1, YELLOW = 2, BLACK = 3;
}
interface ColorsAndSeparates extends Colors, Separates {
    int DEFAULT = BLACK; // compile-time error: ambiguous
}

the members of the interface ColorsAndSeparates include those members inherited from Colors and those inherited from Separates, namely WHITE, BLACK (first of two), CYAN, MAGENTA, YELLOW, and BLACK (second of two). The member name BLACK is ambiguous in the interface ColorsAndSeparates.

The members of an array type are all of the following:

The example:

class Test {
    public static void main(String[] args) {
        int[] ia = new int[3];
        int[] ib = new int[6];
        System.out.println(ia.getClass() == ib.getClass());
        System.out.println("ia has length=" + ia.length);
    }
}

produces the output:

true
ia has length=3

This example uses the method getClass inherited from class Object and the field length. The result of the comparison of the Class objects in the first println demonstrates that all arrays whose components are of type int are instances of the same array type, which is int[].

PackageName:
    Identifier
    PackageName . Identifier

TypeName:
    Identifier
    PackageOrTypeName . Identifier

ExpressionName:
    Identifier
    AmbiguousName . Identifier

MethodName:
    Identifier
    AmbiguousName . Identifier

PackageOrTypeName:
    Identifier
    PackageOrTypeName . Identifier

AmbiguousName:
    Identifier
    AmbiguousName . Identifier

The use of context helps to minimize name conflicts between entities of different kinds. Such conflicts will be rare if the naming conventions described in §6.8 are followed. Nevertheless, conflicts may arise unintentionally as types developed by different programmers or different organizations evolve. For example, types, methods, and fields may have the same name.
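As a small sketch of such a conflict (the class SameName below is hypothetical, not taken from the specification text), a field and a method may share a simple name; the context of each use, the presence or absence of "(", tells which one is meant:

public class SameName {
    int size = 3;                   // field named "size"
    int size() { return size * 2; } // method with the same simple name

    public static void main(String[] args) {
        SameName s = new SameName();
        System.out.println(s.size);   // field access: prints 3
        System.out.println(s.size()); // method invocation: prints 6
    }
}

The compiler never confuses the two, because a name followed by "(" can only denote a method, while a name in an ordinary expression position denotes the field.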
It is always possible to distinguish between a method and a field with the same name, since the context of a use always tells whether a method is intended.

A name is syntactically classified according to the context in which it occurs, for example:

- to the left of the "." in a qualified PackageName
- in an extends clause in a class declaration (§8.1.3)
- in an implements clause in a class declaration (§8.1.4)
- in an extends clause in an interface declaration (§9.1.2)
- in a catch clause of a try statement (§14.19)
- in a qualified this expression (§15.8.4)
- in a qualified superclass field access using super (§15.11.2)
- in a qualified superclass method invocation using super (§15.12)
- as the right-hand operand of the instanceof relational operator (§15.20.2)
- to the left of the "(" in a method invocation expression (§15.12)
- to the left of the "." in a qualified ExpressionName
- to the left of the "." in a qualified MethodName
- to the left of the "." in a qualified AmbiguousName

If an AmbiguousName is a qualified name, consisting of a name, a ".", and an Identifier, then the name to the left of the "." is first reclassified, for it is itself an AmbiguousName. There is then a choice:

- If the name to the left of the "." is reclassified as a PackageName, then: if there is a package whose name is the name to the left of the "." and that package contains a declaration of a type whose name is the same as the Identifier, then this AmbiguousName is reclassified as a TypeName. Otherwise, this AmbiguousName is reclassified as a PackageName. A later step determines whether or not a package of that name actually exists.
- If the name to the left of the "." is reclassified as a TypeName, then: if the Identifier is the name of a method or field of the class or interface denoted by TypeName, this AmbiguousName is reclassified as an ExpressionName. Otherwise, if the Identifier is the name of a member type of the class or interface denoted by TypeName, this AmbiguousName is reclassified as a TypeName. Otherwise, a compile-time error results.
- If the name to the left of the "." is reclassified as an ExpressionName, then let T be the type of the expression denoted by ExpressionName. If the Identifier is the name of a method or field of the class or interface denoted by T, this AmbiguousName is reclassified as an ExpressionName. Otherwise, if the Identifier is the name of a member type (§8.5, §9.5) of the class or interface denoted by T, then this AmbiguousName is reclassified as a TypeName.
Otherwise, a compile-time error results.

As an example, consider the following contrived "library code":

    package org.rpgpoet;
    import java.util.Random;
    interface Music { Random[] wizards = new Random[4]; }

and then consider this example code in another package:

    package bazola;
    class Gabriel {
        static int n = org.rpgpoet.Music.wizards.length;
    }

First of all, the name org.rpgpoet.Music.wizards.length is classified as an ExpressionName because it functions as a PostfixExpression. Therefore, each of the names:

    org.rpgpoet.Music.wizards
    org.rpgpoet.Music
    org.rpgpoet
    org

is initially classified as an AmbiguousName. These are then reclassified:

- The name org is reclassified as a PackageName (since there is no variable or type named org in scope).
- Since there is no class or interface named rpgpoet in any compilation unit of package org (and we know that there is no such class or interface because package org has a subpackage named rpgpoet), the qualified name org.rpgpoet is reclassified as a PackageName.
- Since package org.rpgpoet has an interface type named Music, the qualified name org.rpgpoet.Music is reclassified as a TypeName.
- Since org.rpgpoet.Music is a TypeName, the qualified name org.rpgpoet.Music.wizards is reclassified as an ExpressionName.

Given a qualified package name of the form Q.Id, Q must also be a package name. The package name Q.Id names a package that is the member named Id within the package named by Q. If Q does not name an observable package (§7.4.3), or Id is not the simple name of an observable subpackage of that package, then a compile-time error occurs. Otherwise, the PackageOrTypeName is reclassified as a PackageName, and the meaning of the qualified PackageOrTypeName is the meaning of the reclassified name.
It is possible that the identifier occurs within the scope of more than one type with that name, in which case the type denoted by the name is determined as follows. The example:

    package wnj.test;
    class Test {
        public static void main(String[] args) {
            java.util.Date date = new java.util.Date(System.currentTimeMillis());
            System.out.println(date.toLocaleString());
        }
    }

produced the following output the first time it was run:

    Sun Jan 21 22:56:29 1996

In this example the name java.util.Date must denote a type, so we first use the procedure recursively to determine if java.util is an accessible type or a package, which it is, and then look to see if the type Date is accessible in this package.

If the Identifier names a field that is declared final (§8.3.1.2), then the expression name denotes the value of the field. The type of the expression name is the declared type of the field. If the Identifier appears in a context that requires a variable and not a value, then a compile-time error occurs.

If the Identifier names an instance variable but appears within a static method (§8.4.3.2), static initializer (§8.7), or initializer for a static variable (§8.3.1.1, §12.4.2), then a compile-time error occurs.

In the example:

    class Test {
        static int v;
        static final int f = 3;
        public static void main(String[] args) {
            int i;
            i = 1;
            v = 2;
            f = 33;    // compile-time error
            System.out.println(i + " " + v + " " + f);
        }
    }

the names used as the left-hand sides in the assignments to i, v, and f denote the local variable i, the field v, and the value of f (not the variable f, because f is a final variable). The example therefore produces an error at compile time because the last assignment does not have a variable as its left-hand side. If the erroneous assignment is removed, the modified code can be compiled and it will produce the output:

    1 2 3

If an expression name is of the form Q.Id, then Q has already been classified as a package name, a type name, or an expression name. If Q is a type name and the field Id is not a class variable (not declared static), then a compile-time error occurs. If the class variable is declared final, then Q.Id denotes the value of the class variable.
The type of the expression Q.Id is the declared type of the class variable. If Q.Id appears in a context that requires a variable and not a value, then a compile-time error occurs. Otherwise, Q.Id denotes the class variable, and the type of the expression Q.Id is the declared type of the class variable.

If Q is an expression name, let T be the type of the expression Q; Id must name a field of T. If the field is final (either a final field of a class type, which may be either a class variable or an instance variable, or the final field length of an array type), then Q.Id denotes the value of the field. The type of the expression Q.Id is the declared type of the field. If Q.Id appears in a context that requires a variable and not a value, then a compile-time error occurs. Otherwise, Q.Id denotes a variable, the field Id of class T, which may be either a class variable or an instance variable. The type of the expression Q.Id is the declared type of the field.

The example:

    class Point { int x, y; static int nPoints; }
    class Test {
        public static void main(String[] args) {
            int i = 0;
            i.x++;            // compile-time error
            Point p = new Point();
            p.nPoints();      // compile-time error
        }
    }

encounters two compile-time errors, because the int variable i has no members, and because nPoints is not a method of class Point.

If a method name is of the form Q.Id, then Q has already been classified as a package name, a type name, or an expression name. If Q is a package name, then a compile-time error occurs. Otherwise, Id is the method name to be used for method invocation. If Q is a type name, then Id must name at least one static method of the type Q. If Q is an expression name, then let T be the type of the expression Q; Id must name at least one method of the type T. See §15.12 for further discussion of the interpretation of qualified method names in method invocation expressions.
Note that accessibility is a static property that can be determined at compile time; it depends only on types and declaration modifiers. Qualified names are a means of access to members of packages and reference types; related means of access include field access expressions (§15.11) and method invocation expressions (§15.12). All three are syntactically similar in that a "." token appears, preceded by some indication of a package, type, or expression having a type and followed by an Identifier that names a member of the package or type. These are collectively known as constructs for qualified access.

Access control applies to qualified access and to the invocation of constructors by class instance creation expressions (§15.9) and explicit constructor invocations (§8.8.5). Accessibility also affects inheritance of class members (§8.2), including hiding and method overriding (§8.4.6.1).

If a top level class or interface type is declared public, then it may be accessed by any code, provided that the compilation unit (§7.3) in which it is declared is observable. If a top level class or interface type is not declared public, then it may be accessed only from within the package in which it is declared.

If a member or constructor is declared public, then access is permitted. All members of interfaces are implicitly public.

If a member or constructor is declared protected, then access is permitted only when the access occurs from within the package in which the protected member or constructor is declared, or as described under "protected Access" below.

If a member or constructor is declared private, then access is permitted if and only if it occurs within the body of the top level class (§7.6) that encloses the declaration of the member.

protected Access

A protected member or constructor of an object may be accessed from outside the package in which it is declared only by code that is responsible for the implementation of that object.

Access to a protected Member

Let C be the class in which a protected member m is declared. Access is permitted only within the body of a subclass S of C.
In addition, if Id denotes an instance field or instance method, then:

- If the access is by a qualified name Q.Id, where Q is an ExpressionName, then the access is permitted if and only if the type of the expression Q is S or a subclass of S.
- If the access is by a field access expression E.Id, where E is a Primary expression, or by a method invocation expression E.Id(. . .), where E is a Primary expression, then the access is permitted if and only if the type of E is S or a subclass of S.

Access to a protected Constructor

Let C be the class in which a protected constructor is declared and let S be the innermost class in whose declaration the use of the protected constructor occurs. Then:

- If the access is by a superclass constructor invocation super(. . .) or by a qualified superclass constructor invocation of the form E.super(. . .), where E is a Primary expression, then the access is permitted.
- If the access is by an anonymous class instance creation expression of the form new C(. . .){...} or by a qualified class instance creation expression of the form E.new C(. . .){...}, where E is a Primary expression, then the access is permitted.
- Otherwise, if the access is by a simple class instance creation expression of the form new C(. . .) or by a qualified class instance creation expression of the form E.new C(. . .), where E is a Primary expression, then the access is not permitted. A protected constructor can be accessed by a class instance creation expression (that does not declare an anonymous class) only from within the package in which it is defined.

Consider the compilation units:

    package points;
    class PointVec { Point[] vec; }

and:

    package points;
    public class Point {
        protected int x, y;
        public void move(int dx, int dy) { x += dx; y += dy; }
        public int getX() { return x; }
        public int getY() { return y; }
    }

which declare two class types in the package points:

- PointVec is not public and not part of the public interface of the package points, but rather can be used only by other classes in the package.
- Point is declared public and is available to other packages. It is part of the public interface of the package points.
- The methods move, getX, and getY of the class Point are declared public and so are available to any code that uses an object of type Point.
The fields x and y are declared protected and are accessible outside the package points only in subclasses of class Point, and only when they are fields of objects that are being implemented by the code that is accessing them. This illustrates how the protected access modifier limits access.

public and Non-public Classes

If a class lacks the public modifier, access to the class declaration is limited to the package in which it is declared (§6.6). In the example:

    package points;
    public class Point {
        public int x, y;
        public void move(int dx, int dy) { x += dx; y += dy; }
    }
    class PointList {
        Point next, prev;
    }

two classes are declared in the compilation unit. The class Point is available outside the package points, while the class PointList is available for access only within the package. Thus a compilation unit in another package can access points.Point, either by using its fully qualified name:

    package pointsUser;
    class Test {
        public static void main(String[] args) {
            points.Point p = new points.Point();
            System.out.println(p.x + " " + p.y);
        }
    }

or by using a single-type-import declaration (§7.5.1) that mentions the fully qualified name, so that the simple name may be used thereafter:

    package pointsUser;
    import points.Point;
    class Test {
        public static void main(String[] args) {
            Point p = new Point();
            System.out.println(p.x + " " + p.y);
        }
    }

However, this compilation unit cannot use or import points.PointList, which is not declared public and is therefore inaccessible outside package points.

If none of the access modifiers public, protected, or private are specified, a class member or constructor is accessible throughout the package that contains the declaration of the class in which the class member is declared, but the class member or constructor is not accessible in any other package.
If a public class has a method or constructor with default access, then this method or constructor is not accessible to or inherited by a subclass declared outside this package.

For example, if we have:

    package points;
    public class Point {
        public int x, y;
        void move(int dx, int dy) { x += dx; y += dy; }
        public void moveAlso(int dx, int dy) { move(dx, dy); }
    }

then a subclass in another package may declare an unrelated move method, with the same signature (§8.3.2) and return type. Because the original move method is not accessible from package morepoints, super may not be used:

    package morepoints;
    public class PlusPoint extends points.Point {
        public void move(int dx, int dy) {
            super.move(dx, dy);    // compile-time error
            moveAlso(dx, dy);
        }
    }

Because move of Point is not overridden by move in PlusPoint, the method moveAlso in Point never calls the method move in PlusPoint. Thus if you delete the super.move call from PlusPoint and execute the test program:

    import points.Point;
    import morepoints.PlusPoint;
    class Test {
        public static void main(String[] args) {
            PlusPoint pp = new PlusPoint();
            pp.move(1, 1);
        }
    }

it terminates normally. If move of Point were overridden by move in PlusPoint, then this program would recurse infinitely, until a StackOverflowError occurred.

public Fields, Methods, and Constructors

A public class member or constructor is accessible throughout the package where it is declared and from any other package, provided the package in which it is declared is observable (§7.4.3). For example, in the compilation unit:

    package points;
    public class Point {
        int x, y;
        public void move(int dx, int dy) {
            x += dx; y += dy;
            moves++;
        }
        public static int moves = 0;
    }

the public class Point has as public members the move method and the moves field. These public members are accessible to any other package that has access to package points.
The fields x and y are not public and therefore are accessible only from within the package points.

protected Fields, Methods, and Constructors

Suppose that the points package declares:

    package points;
    public class Point {
        protected int x, y;
        void warp(threePoint.Point3d a) {
            if (a.z > 0)    // compile-time error: cannot access a.z
                a.delta(this);
        }
    }

and the threePoint package declares:

    package threePoint;
    import points.Point;
    public class Point3d extends Point {
        protected int z;
        public void delta(Point p) {
            p.x += this.x;    // compile-time error: cannot access p.x
            p.y += this.y;    // compile-time error: cannot access p.y
        }
        public void delta3d(Point3d q) {
            q.x += this.x;
            q.y += this.y;
            q.z += this.z;
        }
    }

which defines a class Point3d. A compile-time error occurs in the method delta here: it cannot access the protected members x and y of its parameter p, because while Point3d (the class in which the references to fields x and y occur) is a subclass of Point (the class in which x and y are declared), it is not involved in the implementation of a Point (the type of the parameter p). The method delta3d can access the protected members of its parameter q, because the class Point3d is a subclass of Point and is involved in the implementation of a Point3d. The method delta could try to cast (§5.5, §) its parameter.

private Fields, Methods, and Constructors

A private class member or constructor is accessible only within the class body in which the member is declared and is not inherited by subclasses. In the example:

    class Point {
        Point() { setMasterID(); }
        int x, y;
        private int ID;
        private static int masterID = 0;
        private void setMasterID() { ID = masterID++; }
    }

the private members ID, masterID, and setMasterID may be used only within the body of class Point. They may not be accessed by qualified names, field access expressions, or method invocation expressions outside the body of the declaration of Point. See §8.8.8 for an example that uses a private constructor.
The fully qualified name of a primitive type is the keyword denoting that primitive type, namely boolean, char, byte, short, int, long, float, or double.

The fully qualified name of a named package that is a subpackage of another named package consists of the fully qualified name of the containing package, followed by ".", followed by the simple (member) name of the subpackage.

The fully qualified name of a top level class or interface declared in a named package consists of the fully qualified name of the package, followed by ".", followed by the simple name of the class or interface.

The fully qualified name of an array type consists of the fully qualified name of the component type of the array type followed by "[]".

Examples:

- The fully qualified name of the type long is "long".
- The fully qualified name of the package java.lang is "java.lang" because it is the subpackage lang of package java.
- The fully qualified name of the class Object, which is defined in the package java.lang, is "java.lang.Object".
- The fully qualified name of the interface Enumeration, which is defined in the package java.util, is "java.util.Enumeration".
- The fully qualified name of the type "array of double" is "double[]".
- The fully qualified name of the type "array of array of array of array of String" is "java.lang.String[][][][]".

In the example:

    package points;
    class Point { int x, y; }
    class PointVec { Point[] vec; }

the fully qualified name of the type Point is "points.Point"; the fully qualified name of the type PointVec is "points.PointVec"; and the fully qualified name of the type of the field vec of class PointVec is "points.Point[]".

Every package, top level class, top level interface, and primitive type has a canonical name. An array type has a canonical name if and only if its element type has a canonical name. A member class or member interface M declared in another class C has a canonical name if and only if C has a canonical name. In that case, the canonical name of M consists of the canonical name of C, followed by ".", followed by the simple name of M. For every package, top level class, top level interface and primitive type, the canonical name is the same as the fully qualified name. The canonical name of an array type is defined only when the component type of the array has a canonical name. In that case, the canonical name of the array type consists of the canonical name of the component type of the array type followed by "[]".

The difference between a fully qualified name and a canonical name can be seen in examples such as:

    package p;
    class O1 { class I {} }
    class O2 extends O1 {}

In this example both p.O1.I and p.O2.I are fully qualified names that denote the same class, but only p.O1.I is its canonical name.
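The fact that two fully qualified names can denote one class can be checked directly with class literals. This is an illustrative sketch (not part of the specification text), using top-level classes O1 and O2 in the default package to mirror the example; because O2 inherits the member class I from O1, the name O2.I resolves to the same class as O1.I:

```java
// I is a member class of O1; O2 inherits that member type.
class O1 {
    class I {}
}

class O2 extends O1 {}

class CanonicalDemo {
    // Both qualified names resolve to the single class declared in O1,
    // so the two class literals are the very same Class object.
    static boolean sameClass() {
        return O2.I.class == O1.I.class;
    }
}
```

Running CanonicalDemo.sameClass() returns true, confirming that the qualified names differ while the class they denote does not.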
We recommend these conventions for use in all programs written in the Java programming language. However, these conventions should not be followed slavishly if long-held conventional usage dictates otherwise. So, for example, the sin and cos methods of the class java.lang.Math have mathematically conventional names, even though these method names flout the convention suggested here because they are short and are not verbs.

The first component of a unique package name is typically com, edu, gov, mil, net, org, or a two-letter ISO country code such as uk or jp. Here are examples of hypothetical unique names that might be formed under this convention:

    com.JavaSoft.jag.Oak
    org.npr.pledge.driver
    uk.ac.city.rugby.game

Names of packages intended only for local use should have a first identifier that begins with a lowercase letter, but that first identifier specifically should not be the identifier java; package names that start with the identifier java are reserved by Sun for naming Java platform packages.

When package names occur in expressions and are obscured by other names, import declarations (§7.5) can usually be used to make available the type names declared in that package. The first component of a package name is normally not easily mistaken for a type name, as a type name normally begins with a single uppercase letter. (The Java programming language does not actually rely on case distinctions to determine whether a name is a package name or a type name.)

Names of class types should be descriptive nouns or noun phrases, not overly long, in mixed case with the first letter of each word capitalized, as in:

    ClassLoader
    SecurityManager
    Thread
    Dictionary
    BufferedInputStream

Likewise, names of interface types should be short and descriptive. The name may be a description of data, as for the interfaces java.io.DataInput and java.io.DataOutput; or it may be an adjective describing a behavior, as for the interfaces Runnable and Cloneable. Obscuring involving class and interface type names is rare.
Names of fields, parameters, and local variables normally do not obscure type names because they conventionally begin with a lowercase letter whereas type names conventionally begin with an uppercase letter.

Some additional conventions for method names:

- Methods to get and set an attribute that might be thought of as a variable V should be named getV and setV. An example is the methods getPriority and setPriority of class Thread.
- A method that returns the length of something should be named length, as in class String.
- A method that tests a boolean condition V about an object should be named isV. An example is the method isInterrupted of class Thread.
- A method that converts its object to a particular format F should be named toF. Examples are the method toString of class Object and the methods toLocaleString and toGMTString of class java.util.Date.

Method names cannot obscure or be obscured by other names (§6.5.7).

Names of fields that are not final should be in mixed case with a lowercase first letter and the first letters of subsequent words capitalized. Note that well-designed classes have very few public or protected fields, except for fields that are constants (final static fields) (§6.8.5). Fields should have names that are nouns, noun phrases, or abbreviations for nouns. Examples of this convention are the fields buf, pos, and count of the class java.io.ByteArrayInputStream and the field bytesTransferred of the class java.io.InterruptedIOException. Obscuring involving field names is rare.
Names of constants should be sequences of uppercase letters, with words separated by underscores, as in:

    interface ProcessStates {
        int PS_RUNNING = 0;
        int PS_SUSPENDED = 1;
    }

Obscuring involving constant names is rare.

Names of local variables and parameters should be short, yet meaningful. They are often short sequences of lowercase letters that are not words, such as:

- acronyms, as in cp for a variable holding a reference to a ColoredPoint
- abbreviations, as in buf holding a pointer to a buffer of some kind
- mnemonic terms, such as in and out, whenever some kind of input and output are involved, patterned after the fields of System
- mnemonic terms, such as off and len, whenever an offset and length are involved, patterned after the parameters to the read and write methods of the interfaces DataInput and DataOutput of java.io

One-character local variable or parameter names should be avoided, except for temporary and looping variables, or where a variable holds an undistinguished value of a type. Conventional one-character names are:

- b for a byte
- c for a char
- d for a double
- e for an Exception
- f for a float
- i, j, and k for integers
- l for a long
- o for an Object
- s for a String
- v for an arbitrary value of some type
http://java.sun.com/docs/books/jls/second_edition/html/names.doc.html
I have to create a copy of some elements of the standard library in C, and one of them is strcat. So I have to create a function that concatenates two strings in C. I know arrays in C can't change their allocated size. The only functions I'm allowed to use are the copies I made of strlen, strstr, and write(). My code looks like this:

    char *my_strcat(char *dest, char *src)
    {
        int dest_size;
        int src_size;
        int current_pos;
        int free_space;
        int pos_in_src;

        src_size = my_strlen(src);
        dest_size = my_strlen(dest);
        while (dest[current_pos] != '\0')
            current_pos = current_pos + 1;
        free_space = dest_size - current_pos;
        if (free_space < src_size)
            return (0);
        while (src[pos_in_src] != '\0')
        {
            dest[current_pos] = src[pos_in_src];
            pos_in_src = pos_in_src + 1;
            current_pos = current_pos + 1;
        }
        return (dest);
    }

    char dest[19] = "epite";
    char *src = "chor42spotted";
    my_strcat(dest, src);

Also, read the man page for strcat(3): the dest string must have enough space for the result. So your function is behaving incorrectly; you do not need to check that you have enough free space in dest.
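For reference, here is one possible corrected version (a sketch, not the only valid answer). It fixes the bugs in the posted code: current_pos and pos_in_src are read before ever being initialized, the result is never re-terminated with '\0', and the free-space check can never pass (my_strlen(dest) always equals the position of dest's terminator, so free_space is always 0). A minimal my_strlen is included so the example is self-contained, and the caller is responsible for sizing dest, as strcat(3) specifies:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal strlen replacement, standing in for the poster's my_strlen. */
size_t my_strlen(const char *s)
{
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

/* Append src to dest. Like strcat(3), the caller must guarantee that
 * dest has room for the combined string plus the terminating '\0'. */
char *my_strcat(char *dest, const char *src)
{
    size_t pos = my_strlen(dest);  /* start writing at dest's terminator */
    size_t i = 0;

    while (src[i] != '\0') {
        dest[pos] = src[i];
        pos++;
        i++;
    }
    dest[pos] = '\0';  /* the posted version forgot to re-terminate */
    return dest;
}
```

With a large enough buffer, `char dest[32] = "epite";` followed by `my_strcat(dest, "chor42spotted");` yields "epitechor42spotted".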
https://codedump.io/share/zcbjPPBPx2Az/1/how-to-create-a-copy-of-strcat
In this project, we'll be working with data from the S&P500 Index. We will be using historical data on the price of the S&P500 Index to make predictions about future prices. Predicting whether an index will go up or down will help us forecast how the stock market as a whole will perform. Since stocks tend to correlate with how well the economy as a whole is performing, it can also help us make economic forecasts. We will be working with a csv file containing index prices. Each row in the file contains a daily record of the price of the S&P500 Index from 1950 to 2015. The dataset is stored in sphist.csv. The columns of the dataset are: Date-- The date of the record. Open-- The opening price of the day (when trading starts). High-- The highest trade price during the day. Low-- The lowest trade price during the day. Close-- The closing price for the day (when trading is finished). Volume-- The number of shares traded. Adj Close-- The daily closing price, adjusted retroactively to include any corporate actions. Read more here. We'll be using this dataset to develop a predictive model. We'll train the model with data from 1950-2012, and try to make predictions from 2013-2015. 
import pandas as pd from datetime import datetime import numpy as np from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error #Let's start by reading in the dataset and converting the `Date` column to datetime format: sp = pd.read_csv('sphist.csv') sp['Date'] = pd.to_datetime(sp['Date']) sp.head() sp.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 16590 entries, 0 to 16589 Data columns (total 7 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Date 16590 non-null datetime64[ns] 1 Open 16590 non-null float64 2 High 16590 non-null float64 3 Low 16590 non-null float64 4 Close 16590 non-null float64 5 Volume 16590 non-null float64 6 Adj Close 16590 non-null float64 dtypes: datetime64[ns](1), float64(6) memory usage: 907.4 KB sp['after'] = sp['Date'] > datetime(year=2015, month=4, day=1) sp.head() #Let's sort the dataframe by the `Date` column in ascending order sp = sp.sort_values(by='Date', ascending = True) sp.head() Datasets taken from the stock market need to be handled differently than datasets from other sectors when it comes time to make predictions. In a normal machine learning exercise, we treat each row as independent. Stock market data is sequential, and each observation comes a day after the previous observation. Thus, the observations are not all independent, and we can't treat them as such. This means we have to be extra careful to not inject "future" knowledge into past rows when we do training and prediction. Injecting future knowledge will make our model look good when we are training and testing it, but will make it fail in the real world. This is how many algorithmic traders lose money. 
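To make "no future knowledge" concrete before computing the indicators, here is a small pure-Python sketch (not part of the notebook) of the trailing mean each row is allowed to see. Day i's feature is the mean of the previous `window` closes, excluding day i itself, which is exactly what pandas' `rolling(window).mean().shift(1)` computes:

```python
def trailing_mean(prices, window):
    """For each day i, return the mean of the *previous* `window` closes
    (days i-window .. i-1), or None when there isn't enough history.
    Excluding day i itself is what prevents lookahead leakage."""
    out = []
    for i in range(len(prices)):
        if i < window:
            out.append(None)  # not enough history yet
        else:
            out.append(sum(prices[i - window:i]) / window)
    return out

closes = [10, 11, 12, 13, 14, 15]
print(trailing_mean(closes, 3))
# [None, None, None, 11.0, 12.0, 13.0]
```

Note that the day-3 feature is the mean of days 0 to 2; the close for day 3 itself never enters its own feature.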
#Calculate the mean for the past 5, 30, 365 days sp['day_5'] = sp['Close'].rolling(5).mean().shift(1) sp['day_30'] = sp['Close'].rolling(30).mean().shift(1) sp['day_365'] = sp['Close'].rolling(365).mean().shift(1) #Calculate the STD for the past 5, 365 days sp['std_5'] = sp['Close'].rolling(5).std().shift(1) sp['std_365'] = sp['Close'].rolling(365).std().shift(1) #Calculate the mean volume for the past 5, 365 days sp['day_5_volume'] = sp['Volume'].rolling(5).mean().shift(1) sp['day_365_volume'] = sp['Volume'].rolling(365).mean().shift(1) #Calculate the STD of the average volume over the past five days sp['5_volume_std'] = sp['day_5_volume'].rolling(5).std().shift(1) sp.tail(10) Since we are computing indicators that use historical data, there are some rows where there isn't enough historical data to generate them. Some of the indicators use 365 days of historical data, and the dataset starts on 1950-01-03. Thus, any rows that fall before 1951-01-03 don't have enough historical data to compute all the indicators. We will need to remove these rows before we split the data. sp = sp[sp['Date'] > datetime(year=1951, month=1, day=2)] sp = sp.dropna(axis=0) sp.head() Let's now generate two new dataframes to use in making our algorithm. train should contain any rows in the data with a date less than 2013-01-01. test should contain any rows with a date greater than or equal to 2013-01-01. train = sp[sp['Date'] < datetime(year=2013, month=1, day=1)] test = sp[sp['Date'] >= datetime(year=2013, month=1, day=1)] lr = LinearRegression() lr.fit(train[['day_5','day_30', 'day_365', 'std_5', 'std_365', 'day_5_volume', 'day_365_volume','5_volume_std']],train['Close']) predictions = lr.predict(test[['day_5','day_30', 'day_365', 'std_5', 'std_365', 'day_5_volume', 'day_365_volume', '5_volume_std']]) mse = mean_squared_error(test['Close'], predictions) mse 494.66054052414876 Let's now make a prediction just one day ahead. 
train_1 = sp.iloc[:-1] test_1 = sp.iloc[-1:] lr = LinearRegression() lr.fit(train_1[['day_5','day_30', 'day_365', 'std_5', 'std_365', 'day_5_volume', 'day_365_volume','5_volume_std']],train_1['Close']) predictions_1 = lr.predict(test_1[['day_5','day_30', 'day_365', 'std_5', 'std_365', 'day_5_volume', 'day_365_volume', '5_volume_std']]) mse_1 = mean_squared_error(test_1['Close'], predictions_1) mse_1 9.629910205317884
https://nbviewer.org/urls/community.dataquest.io/uploads/short-url/fDbXTGNxQuyoHnoNNknQVEzYD0r.ipynb
Error reading from file in Java

I need help fixing this error. I am trying to print the length of a countryNames string array.

    Exception in thread "main" java.util.NoSuchElementException: No line found
        at java.util.Scanner.nextLine(Scanner.java:1516)
        at CSVReader.setUp(CSVReader.java:25)
        at CSVReader.<init>(CSVReader.java:16)
        at TestCSVReader.main(TestCSVReader.java:16)

I am reading from a csv file and I keep getting the above error. Below is my code:

    import java.io.*;
    import java.util.Scanner;

    public class CSVReader {
        //String countryNames;
        String[] countryNames;
        int yearLabels;
        int[] yearNum;
        double cellularDataTable;
        double[][] tables;
        Scanner scan;

        public CSVReader(String filename)// throws FileNotFoundException
        {
            setUp(filename);
        }

        private void setUp(String filename) //throws FileNotFoundException
        {
            File file = new File(filename);
            try{
                String input="";
                scan = new Scanner(file);
                scan.nextLine();
                while((input=scan.nextLine())!=null)
                {
                    String[] countryNames = input.split(",");
                    //int a = Integer.parseInt(countryNames[1]);
                    System.out.println(countryNames[0]);
                    //System.out.println(a);
                }
                scan.close();
            }
            catch(FileNotFoundException e)
            {
                System.out.println(e);
            }
        }

        public String[] getCountryNames()
        {
            return countryNames;
        }

Below is my TestCSVReader class file:

    public class TestCSVReader {
        /**
         * Includes test examples for class CSVReader.
         */
        public static void main(String[] args)
        {
            final String FILENAME = "data/cellular.csv"; // Directory path for Mac OS X
            //final String FILENAME = "data\cellular.csv"; // Directory path for Windows OS (i.e. Operating System)
            CSVReader parser = new CSVReader(FILENAME);
            String [] countryNames = parser.getCountryNames();
            System.out.println(countryNames.length);
        }

My csv file is summarized below, as I didn't want to print all 252 countries. Basically I take in 252 countries and store them in an array of strings, the years in an array of int, and each country's cell statistics in a two-dimensional array.
I am new to programming and would like to know if I am heading in the right direction, and if not, how you would store the country names in an array of strings, the years in an array of int, and the statistics in a two-dimensional array. The csv file looks like this:

    World Development Indicators
    Number of countries,252
    Country Name
    Aruba,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.029310471,0,0,2.138784453,3.605985937,3.98141538,6.16435217,13.48254011,16.50927821,57.05427692,65.05605558,72.10431377,99.64250268,103.3849507,108.1325002,112.2180618,119.2038996,126.2103374,129.72824,0,131.8565401
    Andorra,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1.307211734,1.278625641,1.250259142,4.424155104,8.538444783,13.44671556,22.12730607,32.14530928,35.99902139,43.27794118,45.77115817,68.60251444,73.82494308,79.48487497,84.27763597,78.1171579,80.2836099,82.06181111,84.06818386,83.53432222,81.50204186
    Afghanistan,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.112598381,0.865196277,2.498055472,4.826865367,9.833164022,17.71624331,29.22037376,37.89493697,45.77817474,60.32631999,60.35299258

This line

    while((input=scan.nextLine())!=null)

tries to read individual lines from the file until input becomes null. But nextLine() will throw an exception if there are no more lines available. To avoid this, change your code to the following:

    //String input=""; // this line is obsolete
    scan = new Scanner(file);
    scan.nextLine();
    while(scan.hasNextLine())
    {
        final String input = scan.nextLine(); // read the line after the check (to make sure there is a line available)
        String[] countryNames = input.split(",");
        //int a = Integer.parseInt(countryNames[1]);
        System.out.println(countryNames[0]);
        //System.out.println(a);
    }
Within the loop, you have created a local variable countryNames[] that is not associated with the String[] countryNames declared in the class. I recommend that you change your declaration like this:

public class CSVReader {
    List<String> countryNames;

And initialize it in the class constructor:

public CSVReader(String filename)// throws FileNotFoundException
{
    countryNames = new ArrayList<String>();
    setUp(filename);
}

ArrayList allows you to dynamically add values. So, inside the loop, you have to add the name of each country to this ArrayList:

String[] countryValues = input.split(",");
countryNames.add(countryValues[0]);

Alternatively, the Scanner has a dedicated method for checking whether there is another item to read:

String input="";
scan = new Scanner(file);
scan.nextLine();
while(scan.hasNext())
{
    input=scan.nextLine();
    String[] countryNames = input.split(",");
    //int a = Integer.parseInt(countryNames[1]);
    System.out.println(countryNames[0]);
    //System.out.println(a);
}
scan.close();

I think the countryNames array is null. Try initializing it before adding data. Debug your code and check countryNames...

String[] countryNames = new String[...]

If you don't know the length of the array, you are better off using an ArrayList (when using an ArrayList, you don't need to initialize the size first).
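The "check before you read" pattern from the accepted fix is language-neutral. As a hedged illustration (Python here, with inline stand-in data and my own helper name, not code from the thread), the same task reads the header lines past, then collects the first column until the file runs out, with no equivalent of the NoSuchElementException:

```python
import csv
import io

# Inline stand-in for the cellular.csv file described in the question.
SAMPLE = """World Development Indicators
Number of countries,252
Country Name
Aruba,0,0,2.138784453
Andorra,0,0,1.307211734
Afghanistan,0,0,0.112598381
"""

def read_country_names(fileobj, header_lines=3):
    """Collect the first column, skipping the leading header lines.

    Iterating over csv.reader stops cleanly at end of file, which is
    the role scan.hasNextLine() plays in the Java fix.
    """
    reader = csv.reader(fileobj)
    names = []
    for i, row in enumerate(reader):
        if i < header_lines or not row:
            continue
        names.append(row[0])
    return names

names = read_country_names(io.StringIO(SAMPLE))
print(names)       # ['Aruba', 'Andorra', 'Afghanistan']
print(len(names))  # 3
```

Note that the question's Java code skips only one header line; the sample above has three, so the skip count is passed as a parameter.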
https://daily-blog.netlify.app/questions/2169568/index.html
CC-MAIN-2021-21
refinedweb
877
50.53
Snowplow Unity Tracker 0.1.0 released

We are pleased to announce the release of our much-requested Snowplow Unity Tracker. This Tracker rounds out our support for popular mobile environments, and is an important part of our analytics offering for videogame companies. The Tracker is designed to work completely asynchronously within your Unity code to provide great performance in your games, even under heavy load. In the rest of this post we will cover:

- How to install the tracker
- How to use the tracker
- Core features
- Snowplow Pong
- Roadmap
- Documentation
- Getting help

1. How to install the tracker

The release version of this Tracker as well as all of its dependencies are included in a single UnityPackage available from our BinTray: snowplow_unity_tracker_0.1.0

Simply add this package to your project and it should add all of the required DLLs into your Unity Project. And that's it! You're now ready to start using the Tracker.

2. How to use the tracker

To set up the Tracker you first need to add the following using lines to your Unity Scripts:

using SnowplowTracker;
using SnowplowTracker.Emitters;
using SnowplowTracker.Events;

You can then instantiate and start a new Tracker like so:

// Create Emitter and Tracker
AsyncEmitter e1 = new AsyncEmitter ("com.collector.acme");
Tracker t1 = new Tracker (e1, "Namespace", "AppId");

// Start the Tracker
t1.StartEventTracking ();

You are now ready to track events! Now let's send an event:

t1.Track (new Structured ()
    .SetCategory ("GameScene")
    .SetAction ("Launch")
    .Build ());

And that's all there is to it! Please check out the Unity Tracker documentation on our wiki for the Tracker's full API.

3.
Core features

The core features of the Tracker include:

- Fully asynchronous event storage and sending
- Outbound events are cached in a SQLite database to prevent event loss
- User sessionization with a persistent user ID for the life of the application
- Type and value checking for all custom contexts and events to ensure your events are sent properly

An overview of how the Tracker sends events: The Tracker is set up to ensure that your application will never be blocked while also being completely thread-safe in its operation. It will use as many threads as it needs relative to the amount of events you are tracking. If you are experiencing any performance issues you can either:

- Reduce the size of the C# ThreadPool manually, thus restricting the amount of Threads available to the Tracker
- Reduce the sendLimit variable in the Emitter - this variable controls how many events are sent at any one time, which is very important for GET requests, where every event is sent individually

4. Snowplow Pong

Along with the Tracker release, we are also including a Snowplow interpretation of a classic game, Pong. The game allows you to configure a Snowplow collector endpoint and will then emit a continuous stream of events to this collector as you play through the game. To play the game you will need to:

- Clone the repo: git clone
- Open up snowplow-unity-tracker/DemoGame/DemoGame.sln in Unity.
- You can then play directly from the Unity IDE or build it yourself for a particular platform.

It is currently only configured to run as a desktop application so you will not be able to play it on your iOS or Android device as of yet. We will hopefully port it to a mobile setup in the future! Here are some screens from the game: As you can see in the second screen we have configured the Tracker to point at a local collector but you can put in any valid collector endpoint here.

5.
Roadmap

We have big plans for the Snowplow Unity Tracker, including but not limited to:

- Adding support for the WebPlayer platform (#2)
- Automating the creation of Mobile, Desktop and GeoLocation contexts (#4)
- Automating the detection of background/foreground application state for sessionization (#5)

6. Documentation

You can find the Unity Tracker usage manual on our wiki.

7. Getting help

We hope that you find the Snowplow Unity Tracker useful - of course, this is only its first release, so don't be afraid to get in touch, or raise an issue on the Unity Tracker issue tracker on GitHub!
https://snowplowanalytics.com/blog/2015/10/08/snowplow-unity-tracker-0.1.0-released/
Unmaps and deallocates the region in the current address space at the given address.

#include <sys/types.h>
#include <sys/errno.h>
#include <sys/vmuser.h>
#include <sys/adspace.h>

void io_det (eaddr)
caddr_t eaddr;

The io_det kernel service unmaps the region containing the address specified by the eaddr parameter and deallocates the region. This service then adds the region to the free list for the current address space. The io_det service assumes an address space model of fixed-size I/O objects and address space regions.

The io_det kernel service can be called from either the process or interrupt environment.

The io_det kernel service has no return values.

The io_det kernel service is part of Base Operating System (BOS) Runtime.

The io_att kernel service.

Memory Kernel Services and Understanding Virtual Memory Manager Interfaces in AIX Kernel Extensions and Device Support Programming Concepts.
http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/ktechrf1/iodet.htm
# What's new in Feathers-Vuex 3.0

# Vue Composition API Support

Version 3.0 of Feathers-Vuex is the Vue Composition API release! There were quite a few disappointed (and misinformed :) developers in 2019 when the Vue.js team announced what is now called the Vue Composition API. From my perspective:

- It is the most powerful feature added to Vue since its first release.
- It improves the ability to create dynamic functionality in components.
- It greatly enhances organization of code in components.
- It encourages code re-use. Check out the vue-use-web collection for some great examples.

And now it has become the best way to perform queries with Feathers-Vuex. To find out how to take advantage of the new functionality in your apps, read the Feathers-Vuex Composition API docs.

# Custom Handling for Feathers Events 3.1.0+

Version 3.1 of Feathers-Vuex enables the ability to add custom handling for each of the FeathersJS realtime events. You can read more about it in the Service Plugin: Events docs.

# Breaking Changes

Feathers-Vuex follows semantic versioning. There are two breaking changes in this release:

# Auth Plugin user Not Reactive

New API in 3.2.0+

Due to changes in how reactivity is applied to service state (it's now using Vue.set under the hood), the user state of the auth module is no longer reactive. To fix this issue, two getters have been added to the auth state. They are available when a userService is provided to the makeAuthPlugin options.

- user: returns the reactive, logged-in user from the userService specified in the options.
- isAuthenticated: an easy-to-remember boolean attribute for whether the user is logged in.

If you depend on a reactive, logged-in user in your apps, here is how to fix the reactivity:

- Replace any reference to store.state.auth.user with store.getters['auth/user'].

Because the user state is no longer reactive, it is logical for it to be removed in the next version. It will likely be replaced by a userId attribute in Feathers-Vuex 4.0.
# Server-Side Pagination Support is Off by Default

The makeFindMixin (and the new useFind utility) now have server-side pagination support turned off, by default. Real-time arrays of results are now the default setting. This really improves the development experience, especially for new users. To migrate your app to version 3.0, you need to update any params where you are using server-side pagination. It will work as it has been in version 2.0 once you explicitly set paginate: true in the params, like this:

import { makeFindMixin } from 'feathers-vuex'

export default {
  name: 'MyComponent',
  mixins: [makeFindMixin({ service: 'users', watch: true })],
  computed: {
    usersParams() {
      return {
        query: {},
        paginate: true // explicitly enable pagination, now.
      }
    }
  }
}

This behavior exactly matches the new useFind utility.

# Deprecations

# The keepCopiesInStore Option

The keepCopiesInStore option is now deprecated. This was a part of the "clone and commit" API which basically disabled the reason for creating the "clone and commit" API in the first place. If you're not familiar with the Feathers-Vuex "clone and commit" API, you can learn more about the built-in data modeling API and the section about Working with Forms. The keepCopiesInStore feature is set to be removed in Feathers-Vuex 4.0.

# Auth Plugin State: user

As described earlier on this page, since the Auth Plugin's user state is no longer reactive and has been replaced by a user getter that IS reactive, the user state will be removed in Feathers-Vuex 4.0.
https://vuex.feathersjs.com/3.0-major-release.html
Suppose you want to develop a program to play a lottery. The program randomly generates a lottery as a two-digit number, prompts the user to enter a two-digit number, and determines whether the user wins according to the following rules:

- If the user input matches the lottery in exact order, the award is $10,000.
- If all digits in the user input match all digits in the lottery (in a different order), the award is $3,000.
- If one digit in the user input matches a digit in the lottery, the award is $1,000.

I'm having trouble: for every number I input it prints out "You matched both numbers. You win $3,000". Please help.

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int guessNum1,guessNum2,lotteryNum1,lotteryNum2;
    double lotramd,userg;
    char ans;

    srand(time(NULL));
    do{
        lotramd= rand()%(100)+1;
        printf("Dehkhoda Jackpot Lottery\n");
        do{
            printf("Enter your lottery numbers:");
            scanf("%lf",&userg);
            if (userg == lotramd)
                printf("Exact match! You win $10,000\n");
            else if (guessNum2 == lotteryNum1 && guessNum1 == lotteryNum2)
                printf("You matched both numbers. You win $3,000\n");
            else if (guessNum1 == lotteryNum1 || guessNum1 == lotteryNum2 || guessNum2 == lotteryNum1 || guessNum2 == lotteryNum2)
                printf("You matched one digit! You win $1,000\n");
            else
                printf("Sorry, you did not get any matches!");
        } while(userg != lotramd);
        printf("Play Again? (y/n)");
        scanf(" %c",&ans);
    }while(ans=='y');
    return 0;
}
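One source of the reported behavior is that guessNum1, guessNum2, lotteryNum1, and lotteryNum2 are never assigned: the digits have to be extracted from the two generated/entered numbers before any comparison makes sense. As a hedged sketch of the intended prize logic (my own function name, not code from the post), written in Python for clarity:

```python
def lottery_prize(guess, lottery):
    """Return the prize for a two-digit guess against a two-digit lottery number."""
    if guess == lottery:
        return 10000          # exact match, same order
    g1, g2 = divmod(guess, 10)    # tens digit, ones digit
    l1, l2 = divmod(lottery, 10)
    if g1 == l2 and g2 == l1:
        return 3000           # both digits match, reversed order
    if g1 in (l1, l2) or g2 in (l1, l2):
        return 1000           # one digit matches
    return 0

print(lottery_prize(12, 12))  # 10000
print(lottery_prize(21, 12))  # 3000
print(lottery_prize(13, 32))  # 1000
print(lottery_prize(45, 12))  # 0
```

The same digit extraction translates directly to C with `guess / 10` and `guess % 10` after reading the input as an int rather than a double.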
https://www.daniweb.com/programming/software-development/threads/361127/lottery-game
I'm trying to fetch data from the API and it returns a promise. Does anyone know how to handle such a response in Frontity?

Hi @hugsaf2132

Could you provide a bit more info about your issue? Please see this post about the kind of information to include so that people can better help you: About the 🤗 Dev Talk & Questions category

Hi @hugsaf2132

Here you have a few guides on what promises are and how to work with them:

-
-
-

Hope this helps

Good afternoon. Even after following the guides on promises I still have some difficulty, mainly with Frontity. Is there someone who could help me? I already left the link to my project.

Hi @hugsaf2132

Can you specify the problem you are having fetching data? In the docs you have an example of how to use fetch in a React Component and an explanation of each step:

import React, { useEffect } from "react";
import { connect } from "frontity";

// In a React component that uses "connect":
const CategoryNature = ({ state, actions }) => {
  // 1. fetch data related to a path
  // With this useEffect we make the call to fetch
  // only the first time the component is rendered.
  // When the data is fetched, the state is updated with the new data
  // so the component is re-rendered and "data" will get proper content
  useEffect(() => {
    actions.source.fetch("/category/nature/");
  }, []);

  // 2. get data from frontity state
  const data = state.source.get("/category/nature/");

  // 3. get entities from frontity state
  if (data.isCategory) {
    // the category entity
    const category = state.source.category[data.id];

    // posts from that category
    const posts = data.items.map(({ type, id }) => state.source[type][id]);

    // 4. render!
    return (
      <>
        <h1>{category.name}</h1>
        {posts.map((p) => (
          <a href={p.link}>{p.title.rendered}</a>
        ))}
      </>
    );
  }

  return null;
};

export default connect(CategoryNature);
https://community.frontity.org/t/api-returns-a-promise/2141/2
Given a number N, find the sum of GCD(i, j) over all pairs (i, j) with 1 <= i < j <= N.

Input : 4
Output : 7
// GCD(1,2) + GCD(1,3) + GCD(1,4) + GCD(2,3) + GCD(2,4) + GCD(3,4)
// = 1 + 1 + 1 + 1 + 2 + 1 = 7

Input : 12
Output : 105

Input : 1
Output : 0

Input : 2
Output : 1

A Naive approach is to run two loops one inside the other. Select all pairs one by one, find the GCD of every pair and then find the sum of these GCDs. Time complexity of this approach is O(N^2 * log(N)).

An Efficient Approach is based on the Euler Totient Function. The idea is to convert the given problem into a sum of Euler Totient Functions. The sum of all GCDs where j is the greater element of the pair is:

Sumj = ∑(i=1 to j-1) gcd(i, j)

Our final result is

Result = ∑(j=1 to N) Sumj

The above equation can be written as:

Sumj = ∑ g * count(g)

for every possible GCD 'g' of j. Here count(g) represents the count of pairs having GCD equal to g. For every such pair (i, j), we can write:

gcd(i/g, j/g) = 1

We can re-write our previous equation as

Sumj = ∑ d * phi(j/d)

for every divisor d of j, where phi[] is the Euler Totient function.

Example: j = 12 and d = 3 is one of the divisors of j, so in order to calculate the sum over all pairs having 3 as GCD we can simply write:

=> 3*phi[12/3]
=> 3*phi[4]
=> 3*2
=> 6

Therefore the sum of GCDs of all pairs where 12 is the greater part of the pair and 3 is the GCD is GCD(3, 12) + GCD(9, 12) = 6.

Complete Example:

N = 4
Sum1 = 0
Sum2 = 1 [GCD(1, 2)]
Sum3 = 2 [GCD(1, 3) + GCD(2, 3)]
Sum4 = 4 [GCD(1, 4) + GCD(3, 4) + GCD(2, 4)]
Result = Sum1 + Sum2 + Sum3 + Sum4 = 0 + 1 + 2 + 4 = 7

Below is a C++ implementation of the above idea. We precompute Euler Totient Functions and results for all numbers till a maximum value. The idea used in the implementation is based on this post.

// C++ approach of finding sum of GCD of all pairs
#include<bits/stdc++.h>
using namespace std;
#define MAX 100001

// phi[i] stores euler totient function for i
// result[j] stores result for value j
long long phi[MAX], result[MAX];

// Precomputation of phi[] numbers.
// Refer below link for details:
void computeTotient()
{
    phi[1] = 1;
    for (int i=2; i<MAX; i++)
    {
        if (!phi[i])
        {
            phi[i] = i-1;
            for (int j = (i<<1); j<MAX; j+=i)
            {
                if (!phi[j])
                    phi[j] = j;
                phi[j] = (phi[j]/i)*(i-1);
            }
        }
    }
}

// Precomputes result for all numbers till MAX
void sumOfGcdPairs()
{
    // Precompute all phi value
    computeTotient();

    for (int i=1; i<MAX; ++i)
    {
        // Iterate through all the divisors of i.
        for (int j=2; i*j<MAX; ++j)
            result[i*j] += i*phi[j];
    }

    // Add summation of previous calculated sum
    for (int i=2; i<MAX; i++)
        result[i] += result[i-1];
}

// Driver code
int main()
{
    // Function to calculate sum of all the GCD pairs
    sumOfGcdPairs();

    int N = 4;
    cout << "Summation of " << N << " = " << result[N] << endl;
    N = 12;
    cout << "Summation of " << N << " = " << result[N] << endl;
    N = 5000;
    cout << "Summation of " << N << " = " << result[N];
    return 0;
}

Output:

Summation of 4 = 7
Summation of 12 = 105
Summation of 5000 = 61567426

Time complexity: O(MAX*log(log MAX))
Auxiliary space: O(MAX)
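The precomputed results are easy to cross-check against a brute-force Python version of the naive O(N^2 log N) approach described above (a reference implementation for verification, not part of the original article):

```python
from math import gcd

def sum_of_gcd_pairs(n):
    """Naive reference: sum gcd(i, j) over all pairs 1 <= i < j <= n."""
    return sum(gcd(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1))

print(sum_of_gcd_pairs(4))   # 7
print(sum_of_gcd_pairs(12))  # 105
```

Running it for the article's sample inputs reproduces the outputs of the efficient C++ version.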
http://www.geeksforgeeks.org/summation-gcd-pairs-n/
Problem evaluating a very tiny integral

Hi everyone, I'm pretty new with Sage. I was forced to change from MATLAB to Sage, because I was told Sage approximates very tiny numbers better, as it can work with sqrt(2) as sqrt(2) and not as a rational number approximating it. Approximations are very important for my problem. I need to evaluate this integral

$$\sum_{c=1}^{d}\int_{\min(256c-0.5,\frac{y(y+1)}{2})}^{\min(256c+0.5,\frac{y(y+1)}{2})}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\, dx$$

Suppose d = 1, then this is simply the integral

$$ \int_{255.5}^{256.5}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\, dx$$

To evaluate this integral I wrote the following code

T = RealDistribution('gaussian', 1)
print T.cum_distribution_function(256.5)-T.cum_distribution_function(255.5)

because the integral above is the same as the difference of the cumulative distribution function of a standard Gaussian random variable between the boundaries of the integral. However (and you can check yourselves if you don't believe it), the result I get with Sage is 0. I guess that this is due to some approximation which Sage does. Indeed the exact value of the integral is pretty close (and with pretty I mean a lot) to 0. My problem is that I need to be able to have the exact value, because the sum of integrals I'm working with and my whole work behind this integral require me to be very careful with those little tiny numbers. I tried to use the function integrate to deal with this problem, but something funny and apparently inexplicable happened when I was trying to use it.
To be precise I defined this code:

def VarianceOfvy(y):
    temp1 = 0
    temp2 = 0
    for r in range(0,y+1):
        for x in range(0,r+1):
            temp1 = temp1 + (255/256)^x * 1/256 * (r-x)^2
    for r in range(0,y+1):
        for x in range(0,r+1):
            temp2 = temp2 + ((255/256)^x * 1/256 * (r-x))^2
    sigma = temp1 - temp2
    return sqrt(sigma)

def Integerd(y):
    b = y*(y+1)/2
    d = 1
    c = 0
    while min((c+1)*256-0.5,b) == (c+1)*256-0.5:
        d = d+1
        c = c+1
    return d

def Probabilityvynequiv(y):
    var('c')
    b = (y*(y+1))/2
    sigma = 2
    mu = 1
    d = Integerd(y)
    factor = 1/(integrate(1/sqrt(2*pi)*e^(-(x/sigma)^2/2),x,-oo,(b-mu)/sigma) - integrate(1/sqrt(2*pi)*e^(-(x)^2/2),x,-oo,(-mu)/sigma))
    p = sum(factor*1/sigma*integrate(1/sqrt(2*pi)*e^((-x^2)/(2)),x,c*256+0.5,min((c+1)*256-0.5,b)),c,0,d)
    return p

And if I let it run, the result I get is

1/2*(erf(255.75*sqrt(2)) - erf(128.25*sqrt(2)) + erf(127.75*sqrt(2)) - erf(0.25*sqrt(2)))/(3*erf(1/4*sqrt(2)) + 1)

which I assume is correct, and at least it tells me that Sage is able to read my code and output a result. If I call the function VarianceOfvy(2), the result I get is 3/65536*sqrt(11133895), which is also correct. Now, if I change the command sigma = 2 to sigma = VarianceOfvy(2) and try to let the whole program run again, Sage is no longer able to output a result. I'm really lost and I don't know what to do. Could someone advise me and give me some hints on how to evaluate those tiny integrals, in such a way that I don't lose any precision?

You can enclose the LaTeX code in $ (or $$) to make it readable. The error function is in Sage and can be useful.

@mforets I don't know what you mean. If you tell Sage to compute the difference between erf(256.5) and erf(255.5), with the code print erf(256.5)-erf(255.5), it tells you that it is 0. Is there a way to make it more precise?

Ok, I see. Actually with 10000 bits of precision it's still not enough. I don't know; the exponential of -x^2 goes fast to zero. Can we estimate how much precision is needed here?
With one more order of magnitude of precision, I get something non-zero (long time!).
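The needed precision can be estimated with plain floating point, since only the logarithm of the integrand matters. The integral over [255.5, 256.5] is bounded above by the Gaussian density at x = 255.5 times the interval width 1, so a quick back-of-the-envelope estimate (stdlib Python, not Sage) of the base-10 exponent shows the value is around 10^-14176, far below both IEEE doubles (~10^-308) and the first attempted 10,000 bits (~10^-3010):

```python
import math

x = 255.5
# Base-10 exponent of the Gaussian density (1/sqrt(2*pi)) * exp(-x^2/2) at x = 255.5.
# The integral over [255.5, 256.5] is bounded above by this density (interval width 1).
log10_density = -x * x / (2 * math.log(10)) - math.log10(math.sqrt(2 * math.pi))
print(log10_density)  # about -14175.8, so the integral is below 1e-14175
```

This matches the observation in the thread: roughly 14,200 decimal digits (about 47,000 bits) of working precision are needed before the difference of the two CDF values stops cancelling to zero.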
https://ask.sagemath.org/question/39514/problem-evaluating-a-very-tiny-integral/
Introduction

Specification

To generate nice HTML we must follow some specifications that this wiki will try to describe. The following are 2 classes that we will be using as an example:

<?php
/**
 * short class description.
 * Extended class description
 *
 * @author me
 * @version 0.1
 * @package application.components
 */
class DummyClassA extends CController
{
    /**
     * short method description
     *
     * extended method description
     *
     * @param string $bar param description {@link DummyClassB}
     * @return DummyClassA
     */
    public function foo($bar)
    {
    }
}

Class documentation

Attribute Documentation

Method Documentation

Documenting Modules

<?php
$this->setImport(array(
    'mymodule.components.*',
    'mymodule.models.*',
));

You will have to add this at the beginning of your "MymoduleModule" class.

Documenting Views

If you want to document your views, add a doc block right at the top of the view file with the file description. You can also specify the parameters that your view file receives by using the @uses tag.

Resources

- [Yii Docs Generator on Git]( "Yii Docs Generator")
- Yii Docs Generator on Yii

Good job

Good job Yii Documentor!

Active?

Is this project still active? On GitHub the last modification is from 1 year ago. And I followed the installation instructions, but yiic doesn't respond to the docs command. How do I enable this? Thanks. It's a great project.

Getting an error on Compile..

I'm getting an error when I run the documentation build command... Any ideas???

@tom@ku Check Documenting Modules, it might be something similar.

Only framework and some views are documented

I expected all my components and other files would be documented, but to my surprise only framework-related classes and some views from my project are documented.
I went to my project's protected folder and then tried this command:

.\yiic docs 'C:\Docs\ProjDocs\Documentation'

Exceeds 1GB of memory and breaks

It works pretty well except for a few things: The Git repo seems to be dead; I am not sure if I should fork it and fix issues #1 and #2. Issue #3, I don't think it can be fixed since the project doesn't utilize namespaces.

Private methods and properties

I like that extension. Can somebody tell me how to comment private methods and properties? It is a must-have for internal documentation. phpDocumentor had a flag for this. Other doc generators have it too (doxygen, ...).

If you have any questions, please ask in the forum instead.
https://www.yiiframework.com/wiki/186/how-to-generate-yii-like-documentation
Yegor Bugayenko

27 April 2014

Most of the listed mistakes are related to object-oriented programming in general and to Java in particular.

Class Names

Your class should be an abstraction of a real life entity with no "validators," "controllers," "managers," etc. If your class name ends with an "-er"—it's a bad design. BTW, here are my seven virtues of a good object. Also, this post explains this idea in more details: Don't Create Objects That End With -ER. And, of course, utility classes are anti-patterns, like StringUtils, FileUtils, and IOUtils from Apache. The above are perfect examples of terrible designs. Read this follow up post: OOP Alternative to Utility Classes

Of course, never add suffixes or prefixes to distinguish between interfaces and classes. For example, all of these names are terribly wrong: IRecord, IfaceEmployee, or RecordInterface. Usually, interface name is the name of a real-life entity, while class name should explain its implementation details. If there is nothing specific to say about an implementation, name it Default, Simple, or something similar. For example:

class SimpleUser implements User {};
class DefaultRecord implements Record {};
class Suffixed implements Name {};
class Validated implements Content {};

Method Names

Methods can either return something or return void. If a method returns something, then its name should explain what it returns, for example (don't use the get prefix ever):

boolean isValid(String name);
String content();
int ageOf(File file);

If it returns void, then its name should explain what it does. For example:

void save(File file);
void process(Work work);
void append(File file, String line);

You can read more about this idea in the Elegant Objects book, section 2.4. There is only one exception to the rule just mentioned—test methods for JUnit. They are explained below.

Test Method Names

Method names in JUnit tests should be created as English sentences without spaces.
It's easier to explain by example:

/**
 * HttpRequest can return its content in Unicode.
 * @throws Exception If test fails
 */
@Test
public void returnsItsContentInUnicode() throws Exception {
}

It's important to start the first sentence of your Javadoc with the name of the class you're testing followed by can (or cannot). So, your first sentence should always be similar to "somebody can do something." The method name will state exactly the same, but without the subject. If I add a subject at the beginning of the method name, I should get a complete English sentence, as in the above example: "HttpRequest returns its content in Unicode." Pay attention that the test method doesn't start with can; only the Javadoc sentence does. It's a good practice to always declare test methods as throwing Exception.

Variable Names

Avoid composite names of variables, like timeOfDay, firstItem, or httpRequest. I mean with both—class variables and in-method ones. A variable name should be long enough to avoid ambiguity in its scope of visibility, but not too long if possible. A name should be a noun in singular or plural form, or an appropriate abbreviation. More about it in this post: A Compound Name Is a Code Smell. For example:

List<String> names;
void sendThroughProxy(File file, Protocol proto);
private File content;
public HttpRequest request;

Sometimes, you may have collisions between constructor parameters and in-class properties if the constructor saves incoming data in an instantiated object. In this case, I recommend to create abbreviations by removing vowels (see how USPS abbreviates street names). Another example:

public class Message {
    private String recipient;
    public Message(String rcpt) {
        this.recipient = rcpt;
    }
}

In many cases, the best hint for a name of a variable can be ascertained by reading its class name. Just write it with a small letter, and you should be good:

File file;
User user;
Branch branch;

However, never do the same for primitive types, like Integer number or String string.
You can also use an adjective, when there are multiple variables with different characteristics. For instance:

String contact(String left, String right);

Constructors

Without exceptions, there should be only one constructor that stores data in object variables. All other constructors should call this one with different arguments. For example:

public class Server {
    private String address;
    public Server(String uri) {
        this.address = uri;
    }
    public Server(URI uri) {
        this(uri.toString());
    }
}

More about it in There Can Be Only One Primary Constructor.

One-time Variables

Avoid one-time variables at all costs. By "one-time" I mean variables that are used only once. Like in this example:

String name = "data.txt";
return new File(name);

This above variable is used only once and the code should be refactored to:

return new File("data.txt");

Sometimes, in very rare cases—mostly because of better formatting—one-time variables may be used. Nevertheless, try to avoid such situations at all costs.

Exceptions

Needless to say, you should never swallow exceptions, but rather let them bubble up as high as possible. Private methods should always let checked exceptions go out. Never use exceptions for flow control. For example this code is wrong:

int size;
try {
    size = this.fileSize();
} catch (IOException ex) {
    size = 0;
}

Seriously, what if that IOException says "disk is full?" Will you still assume that the size of the file is zero and move on?

Indentation

For indentation, the main rule is that a bracket should either end a line or be closed on the same line (reverse rule applies to a closing bracket). For example, the following is not correct because the first bracket is not closed on the same line and there are symbols after it.
The second bracket is also in trouble because there are symbols in front of it and it is not opened on the same line:

final File file = new File(directory,
  "file.txt");

Correct indentation should look like:

StringUtils.join(
  Arrays.asList(
    "first line",
    "second line",
    StringUtils.join(
      Arrays.asList("a", "b")
    )
  ),
  "separator"
);

The second important rule of indentation says that you should put as much as possible on one line - within the limit of 80 characters. The example above is not valid since it can be compacted:

StringUtils.join(
  Arrays.asList(
    "first line",
    "second line",
    StringUtils.join(Arrays.asList("a", "b"))
  ),
  "separator"
);

Redundant Constants

Class constants should be used when you want to share information between class methods, and this information is a characteristic (!) of your class. Don't use constants as a replacement of string or numeric literals—very bad practice that leads to code pollution. Constants (as with any object in OOP) should have a meaning in a real world. What meaning do these constants have in the real world:

class Document {
  private static final String D_LETTER = "D"; // bad practice
  private static final String EXTENSION = ".doc"; // good practice
}

Another typical mistake is to use constants in unit tests to avoid duplicate string/numeric literals in test methods. Don't do this! Every test method should work with its own set of input values. Use new texts and numbers in every new test method. They are independent. So, why do they have to share the same input constants?

Test Data Coupling

This is an example of data coupling in a test method:

User user = new User("Jeff");
// maybe some other code here
MatcherAssert.assertThat(user.name(), Matchers.equalTo("Jeff"));

On the last line, we couple "Jeff" with the same string literal from the first line. If, a few months later, someone wants to change the value on the third line, he/she has to spend extra time finding where else "Jeff" is used in the same method.
To avoid this data coupling, you should introduce a variable. More about it here: A Few Thoughts on Unit Test Scaffolding.
http://www.yegor256.com/2014/04/27/typical-mistakes-in-java-code.amp.html
(More precisely, a local module or package can shadow another hanging directly off sys.path.)

For the first problem, it is proposed that parentheses be permitted to enclose multiple names, thus allowing Python's standard mechanisms for multi-line values to apply. For the second problem, it is proposed that all import statements be absolute by default (searching sys.path only), with special syntax (leading dots) for relative imports.

Rationale for Relative Imports

With the shift to absolute imports, the question arose whether relative imports should be allowed at all. Several use cases were presented, the most important of which is being able to rearrange the structure of large packages without having to edit sub-packages. In addition, a module inside a package can't easily import itself without relative imports.

Relative imports use leading dots, as in from .foo import bar and from ...foo import bar. The algorithm proposed is "search up from current package directory until the ultimate package parent gets hit".

An alternative syntax using a searching clause was rejected (too big a change), and allowing relative imports is sufficiently critical that we need something now (given that the standard import will change to absolute import). More than that, this proposed syntax has several open questions:

- What is the precise proposed syntax? (Which clauses are optional under which circumstances?)
- How strongly does the searching clause bind? In other words, do you write:

import foo as bar searching XXX, spam as ham searching XXX

or:

import foo as bar, spam as ham searching XXX

The following are correct usages of the new syntax:

from .moduleY import spam
from .moduleY import spam as ham
from . import moduleY
from ..subpackage1 import moduleY
from ..subpackage2.moduleZ import eggs
from ..moduleA import foo
from ...package import bar
from ...sys import path

Note that while that last case is legal, it is certainly discouraged ("insane" was the word Guido used). Relative imports must always use from <> import; import <> is always absolute.
use a module's __name__ attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__') then relative imports are resolved as if the module were a top level module, regardless of where the module is actually located on the file system.:
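The dot forms above can be exercised with a throwaway package built at runtime. The package and module names below (pkg, subpackage1, moduleY, moduleX) are just the PEP's own examples recreated on disk; nothing else is assumed:

```python
import os
import sys
import tempfile

# Build pkg/subpackage1/{moduleY,moduleX}.py in a temporary directory.
root = tempfile.mkdtemp()
sub = os.path.join(root, "pkg", "subpackage1")
os.makedirs(sub)
for d in (os.path.join(root, "pkg"), sub):
    open(os.path.join(d, "__init__.py"), "w").close()
with open(os.path.join(sub, "moduleY.py"), "w") as f:
    f.write("spam = 'eggs'\n")
with open(os.path.join(sub, "moduleX.py"), "w") as f:
    f.write(
        "from .moduleY import spam          # one dot: the current package\n"
        "from ..subpackage1 import moduleY  # two dots: up to pkg, back down\n"
    )

sys.path.insert(0, root)
from pkg.subpackage1 import moduleX  # relative imports resolve via __name__
print(moduleX.spam)  # eggs
```

Running moduleX.py directly instead of importing it would set its __name__ to '__main__', and the relative imports above would fail, exactly as the paragraph on __name__ describes.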
https://legacy.python.org/dev/peps/pep-0328/
The pseudo device driver presented in this section writes a constant string to a system log when the driver is loaded. This first version of the Quote Of The Day driver (qotd_1) is even more simple than the dummy driver from the previous chapter. The dummy driver includes all functions that are required to drive hardware. This qotd_1 driver includes only the bare minimum functions it needs to make a string available to a user command. For example, this qotd_1 driver has no cb_ops(9S) structure. Therefore, this driver defines no open(9E), close(9E), read(9E), or write(9E) function. If you examine the dev_ops(9S) structure for this qotd_1 driver, you see that no getinfo(9E), attach(9E), or detach(9E) function is defined.
This driver contains no function declarations because all the functions that are defined in this driver are declared in the modctl.h header file. You must include the modctl.h header file in your qotd_1.c file. This qotd_1 driver defines a global variable to hold its text data. The _init(9E) entry point for this driver uses the cmn_err(9F) function to write the string to a system log. The dummy driver also uses the cmn_err(9F) function to display messages. The qotd_1 driver is different from the dummy driver because the qotd_1 driver stores its string in kernel memory. Enter the source code shown in the following example into a text file named qotd_1.c.

Example 3-1 Quote Of The Day Version 1 Source File

#include <sys/modctl.h>
#include <sys/conf.h>
#include <sys/devops.h>
#include <sys/cmn_err.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

#define QOTD_MAXLEN 128

static const char qotd[QOTD_MAXLEN] = "Be careful about reading health books. \
You may die of a misprint. - Mark Twain\n";

static struct dev_ops qotd_dev_ops = {
    DEVO_REV,                /* devo_rev */
    0,                       /* devo_refcnt */
    ddi_no_info,             /* devo_getinfo */
    nulldev,                 /* devo_identify */
    nulldev,                 /* devo_probe */
    nulldev,                 /* devo_attach */
    nulldev,                 /* devo_detach */
    nodev,                   /* devo_reset */
    (struct cb_ops *)NULL,   /* devo_cb_ops */
    (struct bus_ops *)NULL,  /* devo_bus_ops */
    nulldev,                 /* devo_power */
    ddi_quiesce_not_needed,  /* devo_quiesce */
};

static struct modldrv modldrv = {
    &mod_driverops, "Quote of the Day 1.0", &qotd_dev_ops};

static struct modlinkage modlinkage = {
    MODREV_1, (void *)&modldrv, NULL };

int
_init(void)
{
    cmn_err(CE_CONT, "QOTD: %s\n", qotd);
    return (mod_install(&modlinkage));
}

int
_info(struct modinfo *modinfop)
{
    return (mod_info(&modlinkage, modinfop));
}

int
_fini(void)
{
    return (mod_remove(&modlinkage));
}

Enter the configuration information shown in the following example into a text file named qotd_1.conf.
Example 3-2 Quote Of The Day Version 1 Configuration File

name="qotd_1" parent="pseudo" instance=0;

Compile and link the driver. Use the -D_KERNEL option to indicate that this code defines a kernel module. The following example shows compiling and linking for a 32-bit architecture using the Oracle Solaris Studio C compiler:

% cc -D_KERNEL -c qotd_1.c
% ld -r -o qotd_1 qotd_1.o

Note that the name of the driver, qotd_1, must match the name property in the configuration file. Make sure you are user root when you install the driver. Copy the driver binary to the /tmp directory as discussed in Device Driver Testing Tips.

# cp qotd_1 /tmp
# ln -s /tmp/qotd_1 /usr/kernel/drv/qotd_1

Copy the configuration file to the kernel driver area of the system.

# cp qotd_1.conf /usr/kernel/drv

This qotd_1 driver writes a message to a system log each time the driver is loaded. The cmn_err(9F) function writes low priority messages such as the message defined in this qotd_1 driver to /dev/log. The syslogd(1M) daemon reads messages from /dev/log and writes low priority messages to /var/adm/messages. To test this driver, watch for the message in /var/adm/messages. In a separate window, enter the following command:

% tail -f /var/adm/messages

Make sure you are user root when you load the driver. Use the add_drv(1M) command to load the driver:

# add_drv qotd_1

You should see the following messages in the window where you are viewing /var/adm/messages:

date time machine pseudo: [ID 129642 kern.info] pseudo-device: devinfo0
date time machine genunix: [ID 936769 kern.info] devinfo0 is /pseudo/devinfo@0
date time machine qotd: [ID 197678 kern.notice] QOTD_1: Be careful about reading health books. You may die of a misprint. - Mark Twain

This last line is the content of the variable output by the cmn_err(9F) function in the _init(9E) entry point. The _init(9E) entry point is called when the driver is loaded.
http://docs.oracle.com/cd/E19963-01/html/819-3159/fahrt.html
Much Ado About Monads – Reader Edition

In the previous post, we talked a bit about the State Monad, what it is and how you could use it today in your F# application. But, with any new piece of information such as this, it should be taken in context, and there are other patterns as well when dealing with a multi-paradigm language such as F#. We also talked about how the State Monad might not have been the best choice for modeling our web scripting DSL, as our browser state is encapsulated in the Browser class, and once it is set, it doesn't change. With that, we could turn our eyes to using the Reader Monad as we read from our environment.

Reading From Our Environment

If you recall from the previous post, we had a simple example of keeping track of browser state; our ultimate goal was to have the state managed for us underneath the covers. When dealing with the State Monad, each bind call would not only return us our calculated value, but also our new state as well. In this case, this was wasteful due to the fact that once the state was set, it never changed, as the state was fully encapsulated inside the Browser object. So, our ultimate goal would be instead to have our environment set once and then read from it implicitly. We still want to keep what we have here in terms of our script, but change the underlying mechanism for how it happens:

[<Fact>]
let ``Can find CodeBetter on Bing``() =
  reader {
    do! openPage ""
    do! enterText "q" "CodeBetter"
    do! clickButton "go"
    let! result = containsText "CodeBetter"
    isTrue result
    do! closePage }
  |> runScript

As the State Monad allows us to plumb a piece of mutable state through our code, the Reader Monad helps us pass immutable state around underneath the covers. This immutable state could be anything, such as configuration data, environment variables and so on.
Instead of relying on a static dependency like the Environment or ConfigurationManager class within .NET, you might imagine having some abstraction passed around implicitly inside the Reader Monad. This way, our testing scenarios become easier as we don't have to try and mock static calls. Just as well, you can abstract other items behind the scenes such as locks, transactions and so forth. Now that we have defined the problem and, generally speaking, what the Reader Monad is, let's go ahead and define it. To do this, we'll need an overarching container to describe this environment that we're maintaining. In this case, we have the Reader<'r,'a> type, where the 'a is our result type and the 'r is the type of our environment. This Reader type has a constructor that takes a function which takes our environment parameter and returns our result.

type Reader<'r,'a> = Reader of ('r -> 'a)

In addition, we'll need a way to run our Reader so that we can provide it with our environment and return our calculated value. Let's create a function called runReader which takes our Reader and our environment, and returns our calculated value.

// val runReader : Reader<'a,'b> -> 'a -> 'b
let runReader (Reader r) env = r env

Before we get to some of our helper functions, let's get to the monad part. You should recall that our return function must have the following signature, keeping in mind that M defined below is our monadic type.

val returnF : 'a -> M<'a>

In the case of the Reader Monad, it should look like this, where we take in a value and return a Reader of our environment and return value.

val returnF : 'a -> Reader<'r,'a>

Now, let's look at the implementation. In the case of return, we simply return a Reader with the constructed function taking in any value and returning our calculated value. Our return value should be the same no matter what environment value is passed in, so we can safely ignore it.
let returnF a = Reader (fun _ -> a)

Next, we need to define the bind operation. As you may recall, our bind operation must look like the following code, keeping in mind that M defined below is our monadic type.

val bind : M<'a> -> ('a -> M<'b>) -> M<'b>

In the case of our Reader Monad, it should look like this:

val bind : Reader<'r,'a> -> ('a -> Reader<'r, 'b>) -> Reader<'r, 'b>

Let's break this down a little more to show what is really going on underneath the covers.

val bind : ('r -> 'a) -> ('a -> 'r -> 'b) -> ('r -> 'b)

What we see is that the first argument is a function which takes our environment and produces our calculated value. Our second argument is a function that takes a value and our environment and then generates our new calculated value. Our goal is to combine these two functions into a larger function from our environment to our new calculated value.

let bind m k = Reader (fun r -> runReader (k (runReader m r)) r)

What we did above is make our environment available to both the inside and outside execution of our runReader. Taking these two functions together, the bind and return, we can now create a builder which can provide us some syntactic sugar when writing functions using the Reader Monad.

type ReaderBuilder() =
  member this.Return(a) = Reader (fun _ -> a)
  member this.Bind(m, k) = Reader (fun r -> runReader (k (runReader m r)) r)

By no means are these two the only methods that we could implement, and in fact, there are quite a few more we could do, but that's for the next post. Now, we need to revisit some helper functions that are necessary when dealing with the Reader Monad. For example, how can we get our current environment? All we have to do is ask:

// val ask : Reader<'r,'r>
let ask = Reader (id)

What this function simply does is return our environment to us by using the id function, which is to say that you return what you are given.
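For readers who would like to poke at this plumbing outside of F#, the same return/bind/ask trio transliterates into plain functions. This is a hedged stand-in only (F#'s computation-expression sugar has no direct equivalent here), sketched in Python where a Reader is just a function from environment to value:

```python
# A Reader is modeled as a plain function: env -> value.

def return_(a):
    """return: ignore the environment, always yield the value."""
    return lambda env: a

def bind(m, k):
    """bind: run m on env, feed its result to k, run that on the same env."""
    return lambda env: k(m(env))(env)

def ask():
    """ask: the reader that hands back the environment itself (the id trick)."""
    return lambda env: env

def run_reader(m, env):
    """runReader: apply the reader to a concrete environment."""
    return m(env)

# Compose: read a config dict (our 'environment') and greet the configured user.
greet = bind(ask(), lambda cfg: return_("Hello, " + cfg["user"]))
print(run_reader(greet, {"user": "CodeBetter"}))  # Hello, CodeBetter
```

Note how the environment dict is threaded through bind invisibly, which is exactly the behavior the F# builder packages up behind the reader { } syntax.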
As you recall, our Reader has a constructing function of 'r -> 'a, and in this case, the 'a will also be the 'r. Also, instead of just returning our environment, what about also providing a function that applies a function over our environment? Let's implement that as asks:

// val asks : ('r -> 'a) -> Reader<'r,'a>
let asks f = reader {
  let! r = ask
  return (f r) }

One more function to consider is the local function. This function allows us to execute a given function on a modified environment. The local function takes a function which modifies the environment as well as a reader to run, and then returns a reader with the changed environment. This doesn't change the environment globally, but gives you a way to locally change the environment and execute a function against it.

// val local : ('r1 -> 'r2) -> Reader<'r1,'a> -> Reader<'r2,'a>
let local f m = Reader (f >> runReader m)

The function implementation is fairly straightforward as it returns a Reader with the constructing function that first executes the f function on the environment, and then runs our Reader parameter against this new environment value. Now that we've got some basic ideas, let's venture into a scenario or two.

A Scenario

Let's look at a scenario for how we might use the Reader Monad. One interesting example came from Greg Neverov to handle locks as the environment using this monad. Let's look at that a little deeper. In order to support such a scenario, we'd need a few pieces. First, we would need the ability to run a particular reader while handling the locking behind the scenes. Let's take a look at what that would entail:

open System.Threading

let tryRunLock lock m =
  let lock = box lock
  let lockTaken = ref false
  Monitor.Enter(lock, lockTaken)
  match !lockTaken with
  | true ->
      try Some(runReader m lock)
      finally Monitor.Exit lock
  | _ -> None

Our tryRunLock function takes a lock and our reader that we wish to execute.
We call Monitor.Enter with our lock that has been boxed and a flag to determine whether the lock has been taken. We then check whether the lock has been taken, and if so, we return Some of our Reader result, else we return None to indicate a failure. Next, we need the ability to notify all waiting threads that there has been a change in our state. To do that, we must call the Monitor.PulseAll method.

let pulseAll = Reader Monitor.PulseAll

Here we partially applied the environment to the Monitor.PulseAll method call so there was no need to specify the argument explicitly. After this, we need one more function, which is to release the lock and block the current thread until we can reacquire it. To do this, we simply call the Monitor.Wait method.

let wait = Reader (Monitor.Wait >> ignore)

Now we can apply these in a scenario such as moving items between two Stacks in which we pop from one and push to another. Let's turn our attention first to the pop function, which takes our Stack, and then while the Stack count is zero, it blocks and waits. When there is something in the Stack, we simply call Pop and return the value.

open System.Collections.Generic

let pop (stack:Stack<_>) = reader {
  while stack.Count = 0 do
    return! wait
  return stack.Pop() }

Note that there is a while loop here which we will work on implementing in the next post. We can now turn our attention to the push function, which takes our Stack and a value to push. If our Stack count is zero, then we call pulseAll, which indicates there is a state change. After this we simply push the value onto the Stack.

let push (stack:Stack<_>) x = reader {
  if stack.Count = 0 then return! pulseAll
  do stack.Push(x) }

Moving on, we can now define a function called move which takes a value from one Stack and moves it to another using locks behind the scenes.

// Our lock object
let lockObj = new obj()

let move s1 s2 =
  reader {
    let! x = pop s1
    do! push s2 x
    return x }
  |> tryRunLock lockObj

We can run this through to verify our behavior such as the following:

> let s1 = new Stack<int>([1..3])
> let s2 = new Stack<int>()
> let moved = move s1 s2;;
val s1 : Stack<int>
val s2 : Stack<int>
val moved : int option = Some 3
> s2;;
val it : Stack<int> = seq [3]

As you can see, it took the last value from our s1 and moved it to s2, and did so using locks in a composable fashion. Let's look at our original example again.

Back to the Web Example

Revisiting our web example again, we can rewrite much of what we had before using the State Monad as the Reader Monad. To do this takes very little effort. First, instead of calling getState as in the State Monad, we simply ask for the environment and then call the appropriate method on our Browser object. For example, we can implement the openPage function as follows:

let openPage (url:string) = reader {
  let! (browser : Browser) = ask
  return browser.GoTo url }

We could have also implemented openPage using the standard Reader constructor as well:

let openPage (url:string) =
  Reader (fun (browser : Browser) -> browser.GoTo url)

Just as well, we could utilize the asks function to execute a function against our environment. We could rewrite our openPage such as this to take advantage:

let openPage (url:string) = reader {
  return! asks (fun (browser : Browser) -> browser.GoTo url) }

But, in order to make this happen, we need to implement another method on the ReaderBuilder that we implemented above. The method required is called ReturnFrom, which does nothing to our Reader but return it.

type ReaderBuilder() =
  ...
  member this.ReturnFrom((a:Reader<'r,'a>)) = a
  ...

I'll cover the rest of the other methods you can introduce to your builders in order to take advantage of such things as try/catch, try/finally, using, for, while, etc. in the next post.
Following the same pattern on all the other methods from the previous post, we can now execute our script and it will work much as it did for the State Monad.

[<Fact>]
let ``Can find CodeBetter on Bing``() =
  reader {
    do! openPage ""
    do! enterText "q" "CodeBetter"
    do! clickButton "go"
    let! result = containsText "CodeBetter"
    isTrue result
    do! closePage }
  |> runScript

By using this design pattern, we have a nice way of abstracting this environment so that we can create a very flexible syntax. But with any pattern, it's about finding the right applications for it.

Conclusion

Once again, by looking at Monads, we can discover what abstractions can be accomplished with them, but just as well, what they are not about. Certainly, functional languages don't need them, but they certainly can be quite useful. When we find repeated behavior, such as maintaining state, this abstraction can be quite powerful. Practically speaking, in languages such as F# they can be useful, but other patterns and abstractions can be as well. Find the right abstraction and use it. In the next post, we'll cover the additional methods you can add to your builder to enable such things as while loops, for loops, try/catch blocks, and so on before wrapping up this series again.
https://weblogs.asp.net/podwysocki/much-ado-about-monads-reader-edition
So you suggest that I may have a problem because product.template already contains a variant field? The "product_variant_id" field is defined in the "product.template" model. So you should give that _sql_constraints in "product.template".

Hi, I have updated the answer by correcting your code. Please check it.

Hi, you can use _sql_constraints for that, by inheriting the product.template object. For example:

class product_template(models.Model):
    _inherit = 'product.template'

    _sql_constraints = [
        ('name_unique', 'UNIQUE(name)', _('Duplicate Product Name!')),
    ]

Let me correct your code. Please try the following in your .py file:

from openerp import models, fields, api

class product_variant(models.Model):
    _name = 'product.variant'

    name = fields.Char('Variant')
    selector = fields.Selection([
        ('Computer', 'Computer'),
        ('Processor', 'Processor'),
        ('Storage_device', 'Storage device'),
        ('Optical_drive', 'Optical drive'),
        ('RAM_module', 'RAM module'),
        ('Graphics_card', 'Graphics card'),
        ('Storage_controller', 'Storage controller'),
        ('Power_supply', 'Power supply'),
        ('Monitor', 'Monitor'),
        ('Motherboard', 'Motherboard'),
        ('Cable', 'Cable'),
        ('Case_part', 'Case part'),
        ('Peripheral', 'Peripheral'),
        ('Internal_components', 'Internal components')], 'selector')

class product_template(models.Model):
    _inherit = 'product.template'

    product_variant_id = fields.Many2one(
        'product.variant', 'Variant',
        help='Select or add a variant of the IBM or Lenovo product if not exist',
        ondelete='restrict')

    _sql_constraints = [('product_variant_id_uniq', 'unique(product_variant_id)', 'Already exist!')]

May I know why you are trying to create a model for variants, as product variants are already there? Note that at the moment when you add this constraint there should be no duplicates present in the database.
Otherwise setting the constraint will fail without much of a notification. Apparently I've tried to add code which is for the new API into the old one. That's why it won't work, right? The code you showed me is the new API. Because my module uses the old API, how can I do it in the old one? Or should I create another module just to add this _sql_constraints modification?

"May I know why are you trying to create a model for variants, as product variants are already there." Isn't it like that: one creates a field in product_template, and the second adds the name and selector into the variant in product_template? Maybe I just misunderstood something :)

No Robert, no need to create a separate module just for that. I feel like working with the new API is a bit easier. You can also use the old API within the new API, but the reverse is not possible. If you need it in the old API, no problem, I can help you with that.

To have a drop down list with the option "Create/Add" I decided to do it that way. If you watch this film you will see what it is for. The Product module contains a couple of drop down lists to automatically create, based on the same details, a product name, to keep the same form and syntax. Because people quite often name products in their own way (we don't like it) I created something like this to make it easier. Now I'm only improving it.

Oh thank you Akhil, it's very nice of you. To be honest I just started learning how to build modules and how to change things in Odoo, so my knowledge is not... very broad :)
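Under the hood, Odoo installs each _sql_constraints entry as a table-level SQL constraint, so a duplicate insert fails at the database layer, not in Python. A minimal sketch of that behavior using Python's built-in sqlite3 (the table and column names here are illustrative, not Odoo's real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE product_template ("
    "id INTEGER PRIMARY KEY, "
    "name TEXT, "
    "CONSTRAINT name_unique UNIQUE (name))"  # what ('name_unique', 'UNIQUE(name)', ...) becomes
)

conn.execute("INSERT INTO product_template (name) VALUES ('ThinkPad T60')")
try:
    # A second row with the same name violates the UNIQUE constraint,
    # which is exactly how the 'Duplicate Product Name!' case is caught.
    conn.execute("INSERT INTO product_template (name) VALUES ('ThinkPad T60')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

This also illustrates the warning above: if duplicate names already exist in the table, the database will refuse to add the constraint in the first place.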
https://www.odoo.com/forum/help-1/question/how-can-i-check-is-name-already-in-database-84888
Jan 08, 2010 12:50 PM|Rossoneri|LINK

I wanted to check if there is a better way of doing this. In my object classes, I have several properties for storing Id values that ultimately get read from/inserted into the database. For example, my Employees class has a property for SpecializationId. This value gets set when the person selects from a databound dropdown list containing all the Specializations. When we view the record, we just set the selected value of that dropdown list to this Id value. No problems. Now we have to implement a search feature and results grid. We thought about populating a collection of matching Employees, and binding the collection to the grid with the properties from the Employees class. However, the grid needs to show the actual "Name" of the Specialization (Chemical), not the database Id (4). So my question is, for each of the grid cells that have to show the actual name instead of the Id, do I have to add an additional property to my class?

// existing
public int SpecializationId {...}

// new addition?
public string Specialization {...}

This binds to the grid ok and shows the name, but when I have to transform 10+ Id type properties to Names, it seems like it is just bloating the class for just this single purpose. My gut tells me there is a better way of doing this?

Jan 08, 2010 02:30 PM|uid639299|LINK

Normally I would put both fields into the Class. But if your field names are static and no new ones are added, you could use an enum for the values.

public enum Specializations : int
{
    Chemical = 4,
    Mechanical = 5,
}

Then in your code, you can just use the value (4) and it will be understood to be Chemical. You can also convert back to the int value from the name with Enum.Parse(typeof(Specializations), "Chemical", true).

Nick

Star 8012 Points

Jan 08, 2010 02:38 PM|sumitd|LINK

As you have mentioned, you need the ID and Name both.
You need to create both; there is no other way, due to the simple fact that there needs to be a property in the class to hold it.

Jan 08, 2010 02:44 PM|Rossoneri|LINK

True. Unfortunately the lists are subject to change (new ones added, some deleted), so the enum probably won't work. I am going to try setting up a new class with two properties:

public class ValueList
{
    public string Name {... }
    public int DatabaseId { ... }
}

Then in the Employee class I should be able to set up a property:

public ValueList Specialization { ... }

Depending on what I want to use, I can call Employee.Specialization.Name or Employee.Specialization.DatabaseId. But, I don't think this field is going to bind very well without some intervention. Thanks for the suggestion and I'll continue to try some things.

Jan 08, 2010 04:01 PM|Rossoneri|LINK

As I suspected, it fails if you create a BoundField and try to set DataField = "Specialization.Name". I had to follow the approach here using Template Fields:

Seems to work ok thus far. I will probably go this route unless I see a better way. Thanks for all the help.

5 replies Last post Jan 08, 2010 04:01 PM by Rossoneri
https://forums.asp.net/t/1512151.aspx?Class+Design
From: Simon Buchan (simon_at_[hidden]) Date: 2005-09-23 03:40:00 Vladimir Prus wrote: > Simon Buchan wrote: > > [ snip 34 quoted lines, of which only 2 are important] > > >>>Note that the author of the blog post explicitly mentions this FAQ item. >>>Did you read the post at all? > > >>Actually, he mentions a sentence inside the FAQ, and seems to ignore the >>rest of it. Did you notice that the FAQ Item was explicitly about >>working with Qt? And that the library is carefully designed to not be >>clobbered by the Qt preprocceser? (signals is only used for the >>namespace name, and slot(s) is never used, only slot_type). >>Simply put '#define BOOST_SIGNALS_NAMESPACE whatever' before '#include >><boost/signal.hpp>' and everything works nicely. > > > I would not call this "nicely", that's more "klugely". What if I happen to > write a library which used boost::signals in headers? Do you suggest me to > use > > class My { > public: > boost::BOOST_SIGNALS_NAMESPACE::signal<.....> on_whatever; > }; > > ? Do you mean that the current situation is optimal, and any attempts on > improving it are just wasting time? > > - Volodya > > > _______________________________________________ > Unsubscribe & other changes: > Are you serious? You're writing a library! You think someone might use it with Qt! Yes, dumbass, you SHOULD do that! Take a look at some of the boost headers (esp. functional, lambda, bind) and THEN bitch about how hard it is to write a library. If it hurts your poor fingers (or eyes), put this somewhere useful (ie. detail namespace): namespace whatever = ::boost::BOOST_SIGNALS_NAMESPACE; Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2005/09/94068.php
Get the size of a file in Java

In this tutorial, we will learn how to get the size of a file in Java with a simple and easy example. In general, you may need to compute a file's size before performing further operations on it. If you don't know how to compute the file size, then you are at the right place, because in this tutorial we are going to find out how to get it.

Get the size of a file in Java

Let's learn this with an easy example. First, import the File class into your Java program so you can access its pre-defined methods. You can do it by:

import java.io.File;

or,

import java.io.*;

Now, create a text file, write some content in it and save it. In your Java program, after importing the File class, create a File object which will contain the path of the particular text file you want to refer to, so you can perform further operations on the file as per your requirements. You can do it by:

File obj_name = new File("name_of_text_file.txt");

If you want to perform operations on a file only when the file exists, use the exists() method to check whether the file exists or not. You can do it by:

if (obj_name.exists())

After that, use the length() method to determine the size of the file in bytes. It returns a long value:

long bytes = obj_name.length();

Calculate the size of a file

import java.io.File;

class FileSizeDemoExample {
    public static void main(String[] args) {
        File f = new File("Example.txt");
        if (f.exists()) {
            long bytes = f.length();
            System.out.println("File size in bytes=" + bytes);
        } else {
            System.out.println("File does not exist; mention an existing file");
        }
    }
}
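For readers on Java 7 or later, java.nio.file.Files.size() is an alternative to File.length(); it throws an exception when the file is missing instead of silently returning 0. A small self-contained sketch that creates its own temporary file so it runs anywhere (class and method names here are our own, for illustration):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

class FileSizeNio {
    // Returns the size in bytes of the file at the given path.
    static long sizeOf(Path path) throws IOException {
        return Files.size(path);
    }

    public static void main(String[] args) throws IOException {
        // Create a temp file with known content so the example is reproducible.
        Path tmp = Files.createTempFile("size-demo", ".txt");
        Files.write(tmp, "Hello, file!".getBytes(StandardCharsets.UTF_8)); // 12 bytes
        System.out.println("File size in bytes=" + sizeOf(tmp)); // 12
        Files.delete(tmp);
    }
}
```

Unlike the File.length() version above, a typo in the file name surfaces as a NoSuchFileException rather than a silent zero.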
https://www.codespeedy.com/get-the-size-of-a-file-in-java/
The Adoption Program Insights series describes experiences of Microsoft Services consultants involved in the Windows Azure Technical Adoption Program, assisting customers in deploying solutions on the Windows Azure platform. This post is by Patrick Butler Monterde and Tom Hollander.

Many custom-developed applications need to send email, whether as part of the registration process, as a way of notifying users when important events occur, or for something else. If you're a .NET developer you've probably used classes of the System.Web.Mail namespace to accomplish this. However, these classes require access to an SMTP e-mail server to send messages, and Windows Azure does not currently provide such a capability. However, all is not lost. This two-part series describes some patterns for enabling emailing capabilities for applications deployed to the Windows Azure platform.

- Using a custom on-premise Email Forwarder Service: This pattern, described in this post, utilizes an on-premise email server to send emails on behalf of the application running on Windows Azure. This is accomplished by creating a custom service using a distributed asynchronous model that uses Windows Azure storage queues and blobs to deliver emails generated in Windows Azure to the on-premise email server.
- Using the Email Server's Web Services APIs: This pattern, which will be described in Part 2, uses the web services API provided by Microsoft Exchange to send email directly from Windows Azure. This pattern can be applied to other messaging products that provide a similar web services interface.
- Using a third party SMTP Service: This pattern, described in Steve Marx's blog post EmailTheInternet.com: Sending and Receiving Email in Windows Azure, utilizes a 3rd party email service like SendGrid or AuthSMTP to relay emails. The solution described in that post goes one step further and also shows how to receive email from a Windows Azure application by listening for SMTP traffic on port 25.
Pattern 1: Using a Custom On-premise Email Forwarder Service This pattern utilizes your existing on premise email server to send email on behalf of a Windows Azure application. This is accomplished by creating a custom on-premise Email Forwarder Service that uses Windows Azure Storage queues and blobs to deliver emails generated in Windows Azure to an on-premise email server. The pattern is divided into two main sections: - Preparing and sending email work items: This is the implementation of a Windows Azure Web/Worker Role that generates the email. It serializes the email object and creates an email work item in Windows Azure Storage. - Receiving and sending email work items: This is the implementation of an Email Forwarder Service which retrieves the email work items from Windows Azure storage, deserializes the email object and sends it to the email server. For the distribution of emails from Windows Azure to the on premise email servers, we will define a concept of a "work item". A work item is a logical container composed of: - One Queue Item: The queue item stores the reference (URI) of Blob where the email message is stored. It can also hold up to 8k of metadata you may need. - One Blob Item: The Blob item contains the serialized email object. Because Blobs can have up to 1TB of size, the email object could hold multiple large attachments. The following diagram shows the pattern's workflow: This is what happens when an application hosted in Windows Azure needs to send an email message: - A worker/web role generates an email message. This email message is in the form of a System.Net.Mail.MailMessage instance. This mail object could include any number of attachments. - The email object is serialized, and stored into a blob. The blob's URL is then added to a Queue item. The combination of the queue item and the blob become the email work item. You can make use of both the Queue and Blob items metadata to store additional information. 
- On premise, an Email Forwarder Service constantly monitors the queues for emails. Queue items can be retrieved at a rate of up to 32 items at a time. The Email Forwarder Service first retrieves the queue item, then extracts the blob URI and retrieves the serialized email.
- Once the email is deserialized, the Email Forwarder Service uses the on-premise email server information to send it. After delivering the email, it removes the work item from the queue and blob storage.
- The on-premise email server receives the emails. Because it is an on-premise application, authentication and authorization should be straightforward.
- The email server sends the email to the appropriate user.
To better illustrate the pattern, a sample implementation of the Email Forwarder Service and a Windows Azure application that uses it can be downloaded below as a .zip file. The code sample contains the following projects:
- Email Forwarder Service: Implementation of the on-premise Email Forwarder Service. For simple demonstration purposes it is implemented as a Windows Forms application; for a real-world deployment you would implement it as a Windows service. To test the sample service, edit the app.config file to include the details of your on-premise SMTP server.
- Entity: Class library that contains the email message serialization capabilities and the operations to add and remove email work items from Windows Azure storage. Both the Email Forwarder Service and the web/worker roles use this project.
- Email Generator Web Role: Implementation of a simple web role that can send email. The role provides a web user interface that lets you enter the details of the email to be sent.
- Email Generator Worker Role: Implementation of a simple worker role that can send email. The role generates and sends email messages every 10 seconds using details found in the role's app.config file.

Architectural Considerations

It is important to understand the architectural implications of any solution.
Some of the considerations for the custom Email Forwarder Service include:
- Cost: Data storage in blobs and queues and the data flow to the on-premise service add cost to the overall solution. The overall cost impact will vary with the email volume of the individual solution, and must be taken into consideration before implementing this pattern. Compression may be desirable to reduce the size of serialized email objects. To minimize bandwidth costs, the Windows Azure Storage account used for the blobs and queues should be located in the same affinity group as the web/worker roles sending the emails.
- Performance: There are two main observations regarding performance:
  - Serialized email objects that contain large attachments may have some performance impact, since they need to be serialized, moved to storage, and then retrieved and deserialized by the Email Forwarder Service.
  - Due to the asynchronous nature of this pattern, the Email Forwarder Service checks the Windows Azure Storage queues periodically for work items. This introduces a marginal delay in sending the emails to the email server. The polling interval must be chosen carefully and set according to individual needs.
- Management: This service should be monitored. We recommend adding logging and monitoring capabilities to the implementation.
- Reliability: Proper retry mechanisms (including exponential back-off) should be implemented in the Email Forwarder Service to handle any connection failures.
Part 2 will be posted here later this week.
https://azure.microsoft.com/nb-no/blog/adoption-program-insights-sending-emails-from-windows-azure-part-1-of-2/
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.

	}
done:
	user_access_end();
	return (val == 0);

err_fault:
	user_access_end();
	return -EFAULT;
}

note how "mask" carries around from the very beginning all the way to the end, and "align" itself is no longer used after mask has been calculated. That's required because of, say, a 2-byte read at offset 5. You end up with "align=5, size=7" at the beginning, and mask needs to be 0x00ffff0000000000 (on little-endian) for that final access.

Anyway, I checked, and the above seems to generate ok code quality too. Sadly "unsafe_get_user()" cannot use "asm goto" because of a gcc limitation (no asm goto with outputs), so it's not _perfect_, but that's literally a compiler limitation.

But I didn't actually _test_ the end result. You should probably verify that it gets the right behavior exactly for those interesting cases where we mask both the beginning and the end.

Linus
http://sourceware.org/ml/libc-alpha/2019-09/msg00426.html
Is there a particular reason why this and are separate patches?

The Python 2/3 portability is a huge patch. I have split it into several micro-patches, one for each package/feature impacted. The underlying idea is that we can keep the discussions focused. This package should just be a renaming issue, while uses a higher level API than the one before.

LGTM with one minor nit.

Minor nit. Elsewhere (e.g. D55202 and D55200), the pattern:

try:
    import py3name
except ImportError:
    import py2name as py3name

is used, so for consistency I'd suggest doing the same here (it also has the nice property of implying that Python 3 should be the 'real' version).

Thanks @gbedwell for the review o/
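For reference, the compatibility-import pattern the reviewer describes looks like this when written out against a module that really was renamed between Python 2 and 3 (Queue became queue); the rest of the code then only ever uses the Python 3 spelling:

```python
# Try the Python 3 module name first; fall back to the Python 2 name,
# rebinding it so callers always use the Python 3 name.
try:
    import queue          # Python 3 name
except ImportError:
    import Queue as queue  # Python 2 name, aliased

q = queue.Queue()
q.put("patch")
print(q.get())  # → patch
```

This ordering makes the Python 3 name the canonical one, which is the "nice property" the review mentions.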
https://reviews.llvm.org/D55258
Okay, so the premise is that this place contains items for sale, and has five of everything that it sells. The inventory is brought in from a file and with parallel arrays you set up the item names with the inventory in stock. I have that all down, but now I need to get a cin loop going which I'm not sure how to do and if the user enters an item name that's in the array, the stock for that item is decremented one. My current troubles aside from not being sure how to get a cin loop going, is that I can't even figure out how to let the user enter a string and see if it matches any of the item names in the array. Here's the code I have written:

#include <iostream>
#include <fstream>
#include <string>
#include <iomanip>
using namespace std;

const int MAX_ITEMS = 1000;
string filename;
string names[MAX_ITEMS];
int inventory[MAX_ITEMS];
string item;
string item2;
int count;
int count2 = 0;
int inventorycount = 5;

int main()
{
    ifstream inFile;
    cout << " ** 5-of-Everything Department Store ** " << endl;
    cout << "What's the inventory file?" << endl;
    getline(cin, filename);
    inFile.open(filename.c_str());
    count = 0;
    getline(inFile, item);
    while (inFile)
    {
        names[count] = item;
        count++;
        inventory[count2] = inventorycount;
        count2++;
        getline(inFile, item);
    }
    inFile.close();
    cout << "What is being sold?" << endl;
    cin >> item2;
    if (item2 == names[count])
    {
        cout << "Sold!" << endl;
    }
    else
    {
        cout << "Sorry not in stock!" << endl;
    }
    return 0;
}

I'm not sure what to be comparing after something has been entered from the user, to check if that item name exists in the array. Also the user is supposed to be allowed to enter the item name in all lowers even though in the inventory file it may be spelled with capital letters. I've been messing around with tolower, but it says it expects an int where I've tried to use a string. So, not sure how to do a tolower on a string, or if it's even possible.
https://www.daniweb.com/programming/software-development/threads/91734/parallel-array-help
NAME
keyname, key_name - get name of key

SYNOPSIS
#include <curses.h>

char *keyname(int c);
char *key_name(wchar_t c);

DESCRIPTION
The keyname() and key_name() functions generate a character string whose value describes the key c. The c argument of keyname() can be an 8-bit character or a key code. The c argument of key_name() must be a wide character.

The string has a format according to the first applicable row in the following table:

The meta-character notation shown above is used only if meta-characters are enabled.

RETURN VALUE
Upon successful completion, keyname() returns a pointer to a string as described above. Otherwise, it returns a null pointer.

ERRORS
No errors are defined.

APPLICATION USAGE
The return value of keyname() and key_name() may point to a static area which is overwritten by a subsequent call to either of these functions.

Applications normally process meta-characters without storing them into a window. If an application stores meta-characters in a window and tries to retrieve them as wide characters, keyname() cannot detect meta-characters, since wide characters do not support meta-characters.

SEE ALSO
meta(), <curses.h>.
http://pubs.opengroup.org/onlinepubs/007908799/xcurses/key_name.html
mutex

A library for creating locks to ensure mutual exclusion when running critical sections of code.

Purpose

Mutexes can be used to protect critical sections of code to prevent race conditions. Although Dart uses a single thread of execution, race conditions can still occur when asynchronous operations are used inside critical sections. For example,

x = 42;
synchronousOperations(); // this does not modify x
assert(x == 42); // x will NOT have changed

y = 42; // a variable that other asynchronous code can modify
await asynchronousOperations(); // this does NOT modify y, but...
// There is NO GUARANTEE other async code didn't run and change it!
assert(y == 42 || y != 42); // WARNING: y might have changed

An example is when Dart is used to implement a server-side Web server that updates a database (assuming database transactions are not being used). The update involves querying the database, performing calculations on those retrieved values, and then updating the database with the result. You don't want the database to be changed by "something else" while performing the calculations, since the results you would write will not incorporate those other changes. That "something else" could be the same Web server handling another request in parallel.

This package provides a normal mutex and a read-write mutex.

Mutex

A mutex guarantees at most only one lock can exist at any one time. If the lock has already been acquired, attempts to acquire another lock will be blocked until the lock has been released.

import 'package:mutex/mutex.dart';

m = Mutex();

Acquiring the lock before running the critical section of code, and then releasing the lock:

await m.acquire();
// No other lock can be acquired until the lock is released
try {
  // critical section with asynchronous code
  await ...
} finally {
  m.release();
}

The following code uses the protect convenience method to do the same thing as the above code.
Use the convenience method whenever possible, since it ensures the lock will always be released.

await m.protect(() async {
  // critical section
});

Read-write mutex

A read-write mutex allows multiple read locks to exist simultaneously, but at most only one write lock can exist at any one time. A write lock and any read locks cannot both exist together at the same time.

If there are one or more read locks, attempts to acquire a write lock will be blocked until all the read locks have been released. But attempts to acquire more read locks will not be blocked. If there is a write lock, attempts to acquire any lock (read or write) will be blocked until that write lock is released.

A read-write mutex can also be described as a single-writer mutex, multiple-reader mutex, or a reentrant lock.

import 'package:mutex/mutex.dart';

m = MutexReadWrite();

Acquiring a write lock:

await m.acquireWrite();
// No other locks (read or write) can be acquired until released
try {
  // critical write section with asynchronous code
  await ...
} finally {
  m.release();
}

Acquiring a read lock:

await m.acquireRead();
// No write lock can be acquired until all read locks are released,
// but additional read locks can be acquired.
try {
  // critical read section with asynchronous code
  await ...
} finally {
  m.release();
}

The following code uses the protectWrite and protectRead convenience methods to do the same thing as the above code. Use the convenience method whenever possible, since it ensures the lock will always be released.

await m.protectWrite(() async {
  // critical write section
});

await m.protectRead(() async {
  // critical read section
});

When mutual exclusion is not needed

The critical section should always contain some asynchronous code. If the critical section only contains synchronous code, there is no need to put it in a critical section. In Dart, synchronous code cannot be interrupted, so there is no need to protect it using mutual exclusion.
Also, if the critical section does not involve data or shared resources that can be accessed by other asynchronous code, it also does not need to be protected. For example, if it only uses local variables that other asynchronous code won't have access to: while the other asynchronous code could run, it won't be able to make unexpected changes to the local variables it can't access. Features and bugs Please file feature requests and bugs at the issue tracker.
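Putting the snippets above together, a complete program might look like the following sketch (it assumes package:mutex is declared in your pubspec; the delay simply gives other async code a chance to run mid-update):

```dart
// Sketch: two concurrent read-modify-write updates to a shared counter.
// protect() keeps the await in the middle from causing lost updates.
import 'package:mutex/mutex.dart';

int counter = 0;
final m = Mutex();

Future<void> increment() => m.protect(() async {
      final read = counter;
      await Future.delayed(Duration(milliseconds: 1)); // other code may run
      counter = read + 1;
    });

Future<void> main() async {
  await Future.wait([increment(), increment(), increment()]);
  print(counter); // 3 — without the mutex, lost updates could make this < 3
}
```

Removing the protect() wrapper makes all three tasks read counter as 0 before any of them writes, which is exactly the race described in the Purpose section.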
https://pub.dev/documentation/mutex/latest/
Many people are impressed by realistic camera animations in games or multimedia demos. The math behind what is commonly called camera interpolation is actually pretty simple. In this article, I will focus on a simple algorithm that uses a particular class of spline curves called Overhauser or Catmull-Rom splines, and I will show how and why they are superior to other, more or less similar, approaches. You may hate me for this, but math can be really nice. We will brush up our knowledge of vector calculus in this section, which will allow us to understand the sample code better. Let's start with the basics: A curve that passes through its control points is said to interpolate those points. Bezier curves interpolate only 2 out of each 4 control points, while B-splines interpolate none of the specified control points (the curve goes smoothly around those points). The Catmull-Rom splines, also called Overhauser splines, belong to a class of curves known as Hermite splines. They are uniform rational cubic polynomial curves that interpolate between N control points and pass through exactly N-2 control points (all except the first and last one). They are uniform because the control points (also known as knots) are spaced at equal intervals with respect to the curve's parameter t. The interpolation is performed in a piecewise manner: a new cubic curve is defined between each pair of points.

The parametric equation of the Catmull-Rom spline is given by:

Where the vectors V and T and matrix M are:

We could simply use this equation as is, and code up our solution using vector and matrix multiplication. While doable, that would probably not be very efficient. Let us simplify the equation a bit. I encourage you to double-check my math -- it's fun. By multiplying the horizontal vector T with matrix M and factoring in the vertical vector, we get:

Where b1...b4 are cubic polynomials in t.

Figure 3A shows the final equation's members. P1...P4 are the control points.
In 3D, Pn are homogeneous or non-homogeneous vectors (3 or 4 coordinates). In 2D they are 2-coordinate vectors.

What does all of this gibberish mean? Well, it means that if you know N intermediate positions plus possibly axis/angle pairs for a camera at N moments in time, you can produce an accurate and smooth animation of the camera by interpolating between N-2 of those positions and axis/angle pairs using Eq 3A above. The camera will pass through all the middle N-2 points. [Note: If you double the start and end points, the camera will pass through all N positions.]

First, we need a class to provide an abstraction for our control points, Pn. We will write up a minimal 3D vector class with just a couple of operations. Feel free to extend this as necessary. Also please note that the enclosed sample application nullifies the Z coordinate of these vectors and essentially uses them for plotting 2D curves. However, the package is fully capable of computing 3D splines!

Let us review the 3D vector class, vec3:

/// Minimal 3-dimensional vector abstraction
class vec3
{
public:
    // Constructors
    vec3() : x(0), y(0), z(0) {}
    vec3(float vx, float vy, float vz) { x = vx; y = vy; z = vz; }
    vec3(const vec3& v) { x = v.x; y = v.y; z = v.z; }

    // Destructor
    ~vec3() {}

    // A minimal set of vector operations
    vec3 operator * (float mult) const   // result = this * arg
    {
        return vec3(x * mult, y * mult, z * mult);
    }

    vec3 operator + (const vec3& v) const   // result = this + arg
    {
        return vec3(x + v.x, y + v.y, z + v.z);
    }

    vec3 operator - (const vec3& v) const   // result = this - arg
    {
        return vec3(x - v.x, y - v.y, z - v.z);
    }

    float x, y, z;
};

Pretty simple. Now we will introduce a new class for abstracting our spline.
#include "vec3.hpp"
#include <vector>

class CRSpline
{
public:
    // Constructors and destructor
    CRSpline();
    CRSpline(const CRSpline&);
    ~CRSpline();

    // Operations
    void AddSplinePoint(const vec3& v);
    vec3 GetInterpolatedSplinePoint(float t);   // t = 0...1; 0=vp[0] ... 1=vp[max]
    int GetNumPoints();
    vec3& GetNthPoint(int n);

    // Static method for computing the Catmull-Rom parametric equation
    // given a time (t) and a vector quadruple (p1,p2,p3,p4).
    static vec3 Eq(float t, const vec3& p1, const vec3& p2,
                   const vec3& p3, const vec3& p4);

private:
    std::vector<vec3> vp;
    float delta_t;
};

This is, again, pretty intuitive: the CRSpline class is essentially a container for a bunch of control points (represented as a std::vector). It has a static member function for solving the spline equation for a given parameter t and four control points P1...P4. The function returns a 3-coordinate vector, which is the result of the interpolation between the given 4 control points, for the given value of t.

The methods AddSplinePoint and GetInterpolatedSplinePoint allow us to specify the 2D/3D curve's control points and get the smooth curve back.
Let us take a quick look at the latter, which contains one last bit of tricky logic:

vec3 CRSpline::GetInterpolatedSplinePoint(float t)
{
    // Find out in which interval we are on the spline
    int p = (int)(t / delta_t);

    // Compute local control point indices
#define BOUNDS(pp) { if (pp < 0) pp = 0; else if (pp >= (int)vp.size()-1) pp = vp.size() - 1; }
    int p0 = p - 1; BOUNDS(p0);
    int p1 = p;     BOUNDS(p1);
    int p2 = p + 1; BOUNDS(p2);
    int p3 = p + 2; BOUNDS(p3);

    // Relative (local) time
    float lt = (t - delta_t*(float)p) / delta_t;

    // Interpolate
    return CRSpline::Eq(lt, vp[p0], vp[p1], vp[p2], vp[p3]);
}

As the code above shows, function GetInterpolatedSplinePoint divides the spline into 4-point segments, transforms the parameter t with respect to the local segment, and then uses the static equation solver to get the final result. The function assumes that t varies from 0 to 1, where 0 represents the "start" of the spline (first control point) and 1 represents the "end" of the spline (last control point). You can make up your own time scheme here, if this is not appropriate; remember however to adjust the computation of p and lt above.

Any sequence of 2D or 3D vectors can be interpolated in a similar fashion. For example, camera axis/angles, the positions and orientation of various moving objects in a scene, etc. The beauty of this approach compared to using b-splines or Bezier curves is that the resulting curves touch all of their control points (again, make sure to double the first and last control points to make that possible).

Here, I included a very primitive application of the package, implemented in Borland Dev Studio 4. It basically instantiates the CRSpline class and populates it with pseudo-random control points, then uses BDS's TCanvas interface to plot the spline on a regular dialog box's canvas.

I hope you did not find my little math session too boring. Enjoy the code.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/script/Articles/View.aspx?aid=30838
NAME¶
pcap_get_required_select_timeout - get a timeout to be used when doing select() for a live capture

SYNOPSIS¶
#include <pcap/pcap.h>

const struct timeval *pcap_get_required_select_timeout(pcap_t *p);

DESCRIPTION¶
pcap_get_required_select_timeout() returns, on UNIX, a pointer to a struct timeval containing a value that must be used as the minimum timeout in select(2), poll(2), epoll_wait(2), and kevent(2) calls, or NULL if there is no such timeout. If a non-NULL value is returned, it must be used regardless of whether pcap_get_selectable_fd(3PCAP) returns -1 for any descriptor on which those calls are being done. pcap_get_required_select_timeout() should be called for all pcap_ts before a call to select(), poll(), epoll_wait(), or kevent(), and any timeouts used for those calls should be updated as appropriate given the new value of the timeout.

For kevent(), one EVFILT_TIMER filter per selectable descriptor can be used, rather than using the timeout argument to kevent(); if the EVFILT_TIMER event for a particular selectable descriptor signals an event, pcap_dispatch(3PCAP) should be called for the corresponding pcap_t.

On Linux systems with timerfd_create(2), one timer object created by timerfd_create() per selectable descriptor can be used, rather than using the timeout argument to epoll_wait(); if the timer object for a particular selectable descriptor signals an event, pcap_dispatch(3PCAP) should be called for the corresponding pcap_t.
Otherwise, a timeout value no larger than the smallest of all timeouts returned by pcap_get_required_select_timeout() for devices from which packets will be captured and any other timeouts to be used in the call should be used as the timeout for the call, and, when the call returns, pcap_dispatch(3PCAP) should be called for all pcap_ts for which a non-NULL timeout was returned, regardless of whether it's indicated as having anything to read from it or not. All devices with a non-NULL timeout must be put in non-blocking mode with pcap_setnonblock(3PCAP).

Note that a device on which a read can be done without blocking may, on some platforms, not have any packets to read if the packet buffer timeout has expired. A call to pcap_dispatch(3PCAP) or pcap_next_ex(3PCAP) will return 0 in this case, but will not block.

pcap_get_required_select_timeout() is not available on Windows.

RETURN VALUE¶
A pointer to a struct timeval is returned if the timeout is required; otherwise NULL is returned.

BACKWARD COMPATIBILITY¶
This function became available in libpcap release 1.9.0. In previous releases, select(), poll(), epoll_wait(), and kevent() cannot be used on any capture source for which pcap_get_selectable_fd() returns -1.

In libpcap release 1.10.0 and later, the timeout value can change from call to call, so pcap_get_required_select_timeout() must be called before each call to select(), poll(), epoll_wait(), or kevent(), and the new value must be used to calculate timeouts for the call. Code that does that will also work with libpcap 1.9.x releases, so code using pcap_get_required_select_timeout() should be changed to call it for each call to select(), poll(), epoll_wait(), or kevent() even if the code must also work with libpcap 1.9.x.
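The man page has no EXAMPLES section, but the rules above can be sketched as a single select()-based loop iteration. This is not from the manual; error handling is omitted, and the packet handler is a placeholder:

```c
#include <pcap/pcap.h>
#include <sys/select.h>
#include <stddef.h>

static void handle_packet(u_char *user, const struct pcap_pkthdr *h,
                          const u_char *bytes)
{
    (void)user; (void)h; (void)bytes;   /* process one packet here */
}

/* One iteration of a capture loop for a single pcap_t that has been put
 * in non-blocking mode with pcap_setnonblock(). */
static void capture_once(pcap_t *p)
{
    int fd = pcap_get_selectable_fd(p);

    fd_set readfds;
    FD_ZERO(&readfds);
    if (fd != -1)
        FD_SET(fd, &readfds);

    /* Re-fetch before every select(): with libpcap 1.10.0 and later the
     * timeout can change from call to call. */
    const struct timeval *req = pcap_get_required_select_timeout(p);
    struct timeval tv, *tvp = NULL;
    if (req != NULL) {
        tv = *req;          /* copy: select() may modify its argument */
        tvp = &tv;
    }

    (void)select(fd == -1 ? 0 : fd + 1, &readfds, NULL, NULL, tvp);

    /* When a non-NULL timeout was returned, dispatch regardless of
     * whether the descriptor was flagged readable. */
    if (req != NULL || (fd != -1 && FD_ISSET(fd, &readfds)))
        (void)pcap_dispatch(p, -1, handle_packet, NULL);
}
```

With several pcap_ts, the same loop would take the smallest of the required timeouts and dispatch on every handle with a non-NULL timeout, as the DESCRIPTION prescribes.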
SEE ALSO¶ pcap(3PCAP), pcap_get_selectable_fd(3PCAP), select(2), poll(2), epoll_wait(2), kqueue(2)
https://manpages.debian.org/testing/libpcap0.8-dev/pcap_get_required_select_timeout.3pcap.en.html
06-15-2019 08:04 AM
Hi @Jay Kumar SenSharma. I just checked my FQDN and they are all correct:
master.rh.bigdata.cluster
node2.rh.bigdata.cluster
node3.rh.bigdata.cluster
node4.rh.bigdata.cluster
Still got the error!

07-01-2019 06:09 PM
@Adil BAKKOURI @Norbert Werling I am having the same issue with Ambari 3.1. Were you guys able to resolve this issue? Thanks

06-11-2019 10:52 AM
@Adil BAKKOURI We see the following message, which seems to be causing the DataNode startup failure:

2019-06-11 12:30:52,832 WARN common.Storage (DataStorage.java:loadDataStorage(418)) - Failed to add storage directory [DISK]file:/hadoop/hdfs/data java.io.IOException: Incompatible clusterIDs in /hadoop/hdfs/data: namenode clusterID = CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc; datanode clusterID = CID-9a605cbd-1b0e-41d3-885e-f0efcbe54851

Looks like your VERSION file has different cluster IDs present in the NameNode and DataNode that need to be corrected. Please copy the clusterID from the namenode "<dfs.namenode.name.dir>/current/VERSION" and put it in the VERSION file of the datanode "<dfs.datanode.data.dir>/current/VERSION" and then try again. Also please check the following link:

06-11-2019 10:21 AM
@Adil BAKKOURI Based on your latest logs it looks like after changing the permissions the NameNode is starting fine without any issue. For the DataNode not starting issue please open a separate HCC thread and mark this HCC thread as answered by clicking the "Accept" link on the correct answer.

06-07-2019 07:40 PM
@Adil BAKKOURI The below output means your name node is in safe mode, which means it won't allow any changes to the cluster, as it won't be able to record that change.
The return code 1 is like an exit code 1.

A Safemode for Namenode is essentially a read-only mode for the HDFS cluster, where it does not allow any modifications to the file system or blocks. Normally, Namenode disables safe mode automatically at the beginning.

Cause: Your name node has not
FSImage is a point-in-time snapshot of HDFS's namespace. The edit log records every change from the last snapshot. The last snapshot is actually stored in FSImage.

-safemode get | grep 'Safe mode is OFF'' returned 1.

# su - hdfs

Get the current name node status; the output should validate the return code 1:
$ hdfs dfsadmin -safemode get

If the output is safe mode on, then proceed to save the namespace:
$ hdfs dfsadmin -safemode enter

Force the savepoint for the namespace:
$ hdfs dfsadmin -saveNamespace

Quit safe mode:
$ hdfs dfsadmin -safemode leave

Begin normal operation; check if you need to restart any services and do that, but now you should be able to connect to the namenode. HTH

06-06-2019 11:06 AM
Hi @Vinay, @Geoffrey Shelton Okot, any updates or solutions for this problem? Thank you

05-10-2019 10:01 PM
@Adil BAKKOURI Hurrah, we are now there. That's the error I was expecting; now this is a case closed.
Validate the hostname by running:
# hostname -f
This should give you the FQDN.

The error below is very simple; it's a privilege issue between the hive user and the database creation script you ran: you didn't give the correct privileges to the hive user.

"Access denied for user 'hive'@'master.rh.bigdata.cluster' to database 'hive'"

To resolve the above, please do the following (assumptions):
Root password = gr3atman
Hive password = hive
Hostname = master.rh.bigdata.cluster

mysql -uroot -pgr3atman
mysql> GRANT ALL PRIVILEGES ON hive.* to 'hive'@'localhost' identified by 'hive';
mysql> GRANT ALL PRIVILEGES ON hive.* to 'hive'@'master.rh.bigdata.cluster' identified by 'hive';
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'master.rh.bigdata.cluster';
mysql> flush privileges;

All of the above should succeed. Now your hive should fire up. Bravo!!

************ If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer. That would be a great help to Community users to find the solution quickly for these kinds of errors.
https://community.cloudera.com/t5/user/viewprofilepage/user-id/47577/user-messages-feed/participations
Starting With "Hello World"

Here are resources for more information about working with virtual environments, pip and bpython:

00:00 To get started with openpyxl, create a virtual environment in a new folder. I'm going to use venv, but you can use any environment manager.
00:11 Once you have it, go ahead and activate it
00:17 and then pip install openpyxl. I'm also going to grab bpython to clean up some terminal sessions later.
00:29 While that's installing, go ahead and make a new file. I'm going to call mine hello_openpyxl.py. Inside this new file, I'm going to say from openpyxl import Workbook
00:47 and then define workbook as a new Workbook. Then, sheet will equal workbook.active. With sheet, say that "A1" is equal to "hello", and then "B1" is going to be equal to "world!".
01:10 If this doesn't make sense, don't worry. We're going to talk about it in a little bit. Finally, say workbook.save() and pass it the filename, like "hello_world.xlsx".
01:22 Save this, and then go ahead and run it. Okay. If everything went well, you should have a new file called "hello_world.xlsx". If you go to open that up, you should see hello and world! in A1 and B1. All right!
01:42 You've just made your first spreadsheet using openpyxl. If we go back here, we can see that the spreadsheet, or Workbook, is a class inside openpyxl, and then within each Workbook is a number of sheets. Here, by taking the active sheet in the workbook, you've just selected whichever sheet is open when you open the workbook normally. Each sheet is then broken up into a number of cells, which here are "A1" and "B1", and then these cells are organized into rows and columns, which are represented by letters for columns and numbers for the rows.
02:17 Before we finish up in this video, go ahead and download the sample Excel worksheet from the resources (Supporting Material) tab. It contains a number of reviews of watches from Amazon, and you'll use that to practice the different features of openpyxl.
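For reference, here is the complete script the transcript builds up line by line (it assumes openpyxl has been pip-installed into your environment):

```python
# hello_openpyxl.py — the script assembled step by step in the lesson.
from openpyxl import Workbook

workbook = Workbook()          # a new Workbook with one active sheet
sheet = workbook.active        # whichever sheet opens by default
sheet["A1"] = "hello"          # cell A1: column A, row 1
sheet["B1"] = "world!"
workbook.save("hello_world.xlsx")
```

Running it produces hello_world.xlsx with "hello" in A1 and "world!" in B1, exactly as described at the 01:22 mark.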
02:30 Thanks for watching.
https://realpython.com/lessons/start-hello-world/
IRC log of ws-ra on 2011-07-05 Timestamps are in UTC. 19:27:01 [RRSAgent] RRSAgent has joined #ws-ra 19:27:01 [RRSAgent] logging to 19:27:03 [trackbot] RRSAgent, make logs public 19:27:03 [Zakim] Zakim has joined #ws-ra 19:27:05 [trackbot] Zakim, this will be WSRA 19:27:05 [Zakim] ok, trackbot; I see WS_WSRA()3:30PM scheduled to start in 3 minutes 19:27:06 [trackbot] Meeting: Web Services Resource Access Working Group Teleconference 19:27:06 [trackbot] Date: 05 July 2011 19:27:35 [Zakim] WS_WSRA()3:30PM has now started 19:27:41 [Zakim] +??P3 19:27:55 [Bob] zakim, ??P3 is Bob 19:27:55 [Zakim] +Bob; got it 19:28:50 [Dug] Dug has joined #ws-ra 19:29:30 [Zakim] +Doug_Davis 19:32:06 [gpilz] gpilz has joined #ws-ra 19:32:16 [asoldano] asoldano has joined #ws-ra 19:32:57 [Zakim] +Tom_Rutt 19:32:58 [Zakim] +Gil 19:33:21 [Zakim] +Alessio 19:34:10 [trutt] trutt has joined #ws-ra 19:34:15 [Zakim] +Yves 19:36:29 [gpilz] SCRIBE: gpilz 19:36:48 [Bob] agenda: 19:37:10 [gpilz] TOPIC: Agenda 19:37:21 [gpilz] RESOLUTION: Agenda agreed 19:37:33 [gpilz] TOPIC: Approval of Minutes 19:37:41 [gpilz] RESOLUTION: minutes approved 19:37:47 [gpilz] TOPIC: New Issues 19:38:20 [gpilz] Issue-13016: 19:38:20 [trackbot] Sorry... adding notes to ISSUE-13016 failed, please let sysreq know about it 19:38:57 [gpilz] Gil: looks like a typo 19:39:08 [gpilz] Bob: issue accepted 19:39:17 [gpilz] Doug: it's already been fixed 19:39:30 [gpilz] Bob: any objection to just fixing this? 19:39:41 [Dug] 19:39:42 [gpilz] RESOLUTION: Doug's proposal accepted 19:40:10 [gpilz] Issue-13148: 19:40:10 [trackbot] Sorry... adding notes to ISSUE-13148 failed, please let sysreq know about it 19:40:25 [gpilz] Doug: Proposal is to just add 'REQUIRED' 19:40:47 [gpilz] RESOLUTION: proposal for Issue-13148 is accepted as proposed 19:40:49 [Dug] 19:41:05 [gpilz] Issue-13151: 19:41:05 [trackbot] Sorry... adding notes to ISSUE-13151 failed, please let sysreq know about it 19:41:14 [gpilz] Bob: any objections to opening this issue? 
19:41:23 [gpilz] ... is the text in the spec correct? 19:41:29 [gpilz] Doug: yes the text is correct 19:41:43 [gpilz] RESOLUTION: Issue-13151 resolved as proposed 19:41:50 [gpilz] TOPIC: External Comments 19:41:52 [Bob] 19:42:51 [gpilz] Bob: mismatch between namespace in WSDL and the URI of the location of the WSDL 19:43:03 [gpilz] Tom: Do we have a RDDL file for this stuff? 19:43:06 [gpilz] Doug: We do 19:43:51 [gpilz] Tom: It seems this person just needs to be educated (on the difference between the URI and the @targetNamespace) 19:44:07 [trutt] q+ 19:44:16 [gpilz] (confusing dicsussion on possible changes to the RDDL file) 19:44:38 [gpilz] Doug: when you click on the namespace link, you get an HTML page that describes the namespace 19:44:54 [gpilz] Yves: the link to the WSDL is wrong - we have that in the ED copy as well 19:45:00 [gpilz] ... I can do the change 19:45:13 [gpilz] Doug: I don't mind doing it, but I need to know what the correct thing is 19:45:22 [gpilz] Yves: dated WSDL reference is wrogn 19:45:37 [gpilz] Doug: assuming we approve the docs, the dated links all get updated again 19:46:01 [trutt] q- 19:46:08 [gpilz] ... perhaps we could just tell this person that things are out of synch now but will come back into synch on the next publishing event 19:46:17 [Dug] birthing activity! ouch! 19:46:20 [gpilz] Bob: who is going to take care of this? 19:46:29 [gpilz] Yves: me 19:46:53 [gpilz] Bob: we need to respond back to Andy 19:47:02 [gpilz] ... who would like to do that? 19:47:34 [gpilz] ... "we shall correct the RDDL file location at . . . when we publish our PR" 19:47:42 [gpilz] ... Yves can you take care of this? 19:47:49 [gpilz] Yves: ok 19:48:08 [Bob] 19:48:35 [gpilz] Bob: wondering why faults are not declared in the portType's of the WSDLs (mex, eventing, etc.) 19:48:46 [gpilz] ... 
we don't normally do this sort of thing 19:48:52 [gpilz] Doug: and we won't 19:49:28 [gpilz] Bob: something along the lines of "it has not been the custom to define faults in the portTypes of infrastructure specs like . . ." 19:49:37 [gpilz] Tom: is he talking about event notifications? 19:49:47 [gpilz] Bob: no, he's referring to the XSDs 19:49:59 [gpilz] Tom: we don't define any faults in our spec WSDLs? 19:51:26 [gpilz] Gil: if you define faults in your WSDL they don't appear on the wire the way we say the should 19:51:48 [gpilz] ... infrastructure faults versus application faults 19:52:16 [gpilz] Tom: we have a different mapping for our faults than that defined in WSDL 19:52:46 [gpilz] Gil: yes - no WS-* has ever used WSDL-defined faults for error handling 19:53:09 [gpilz] Bob: anyone to volunteer 19:53:12 [gpilz] Gil: I will 19:53:34 [gpilz] Bob: should I create pro-forma issues to track these 19:53:40 [gpilz] Yves: that would be best 19:53:57 [gpilz] TOPIC: Test Status 19:53:57 [Bob] 19:55:01 [gpilz] Bob: look like we have met our criteria for 2 interoperable implementations for each specifications 19:55:22 [gpilz] ... the exceptions are the metadata specifications (SOAP assertion and EventDescriptions) 19:55:37 [gpilz] ... these don't have any direct, on-the wire tests associated with them 19:55:47 . 19:55:50 [gpilz] ... have folks had a chance to take a look at the latest scenario doc? 19:55:58 [Dug] 19:56:18 [gpilz] Bob: is that adequate? 19:56:32 [Zakim] +[Microsoft] 19:56:38 [gpilz] ... is there anyone who finds it inadequate 19:57:57 [Bob] 19:58:08 [gpilz] (pause while Ram is updated on progress of meeting) 20:00:11 [gpilz] Bob: seems like we need to change the docs before we go to PR 20:00:18 [gpilz] Doug: will be done within the hour 20:00:37 [gpilz] Bob: seems unfair to ask people to vote based on documents that they have never seen 20:00:51 [gpilz] ... better to let everyone review the docs as they will appear for PR 20:01:00 [gpilz] ... 
we've passed all of our exit criteria 20:01:13 [gpilz] ... is everyone able to make a meeting on July 12th? 20:01:21 [gpilz] ... and is that enough time? 20:01:27 [gpilz] Ram: a few questions? 20:01:41 [gpilz] ... there hasn't been any substantive changes since the CR 20:01:43 [gpilz] Bob: true 20:02:10 [gpilz] ... people may quibble with things like getting the machine readable artifacts to match with the text of the spec 20:02:24 [gpilz] ... but does any member believe there have been substantive changes? 20:02:27 [gpilz] (silence) 20:02:38 [gpilz] Ram: so all changes have been editorial? 20:02:41 [gpilz] Bob: yes 20:03:09 [gpilz] Ram: assuming that is the case, if the candidate PR drafts are available - i think i may be able to be ready as early as the 12th 20:03:33 [gpilz] Bob: on most of the specs there have been no changes 20:03:52 [gpilz] Doug: i've been doing some spec hygiene 20:04:01 [gpilz] ... a couple of typos in eventing and enumeration 20:04:28 [gpilz] Ram: when you send out the drafts, will you send out a diff-marked version relative to the PRs? 20:04:32 [gpilz] Bob: Yves? 20:04:36 [gpilz] Yves: yes I can do that 20:04:50 [gpilz] Bob: we want to diff between the CR and the proposed PR drafts 20:05:02 [gpilz] ... those will be valuable when we do the progression anouncement 20:05:17 [gpilz] ... Doug, once you have the materials ready - let Yves know 20:05:21 [gpilz] Doug: they are ready now 20:05:28 [Yves] I'll produce them tomorrow morning 20:05:29 [gpilz] Ram: I need roughly 3 days for internal review 20:05:38 [gpilz] ... should have them by next Tuesday 20:05:49 [gpilz] Bob: Yves, today or tomorrow would be good 20:05:57 [gpilz] ... then we can make the decision next week 20:06:06 [gpilz] ... and get to the PR progression before August 20:06:24 [gpilz] Yves: have the diffs ready by my morning (your night) 20:06:55 [gpilz] TOPIC: WS-EVD MIME type status 20:07:00 [gpilz] Bob: Yves - how is this going? 
20:07:08 [gpilz] Yves: I will slap Phillipe 20:07:21 [gpilz] Bob: does this need to be nailed down before we do PR 20:07:23 [gpilz] Yves: we do 20:07:44 [gpilz] Bob: tell Phillipe that we would prefer if we didn't end up waiting on the MIME type assignment 20:07:57 [gpilz] MEETING ADJOURNED 20:08:00 [Zakim] -Tom_Rutt 20:08:03 [asoldano] bye 20:08:05 [Zakim] -[Microsoft] 20:08:06 [Zakim] -Bob 20:08:10 [Zakim] -Yves 20:08:12 [Zakim] -Alessio 20:08:15 [Bob] rrsagent, generate hours 20:08:15 [RRSAgent] I have made the request to generate Bob 20:08:18 [Zakim] -Doug_Davis 20:46:44 [trutt_] trutt_ has joined #ws-ra 21:02:38 [gpilz] gpilz has left #ws-ra 21:05:00 [Zakim] disconnecting the lone participant, Gil, in WS_WSRA()3:30PM 21:05:02 [Zakim] WS_WSRA()3:30PM has ended 21:05:05 [Zakim] Attendees were Bob, Doug_Davis, Tom_Rutt, Gil, Alessio, Yves, [Microsoft] 22:01:24 [Zakim] Zakim has left #ws-ra
http://www.w3.org/2011/07/05-ws-ra-irc
Minor grammatical fix.

Submitted-by: Dave Leimbach <leimySPAM2k@mac.com>

.\" Copyright (c) 2003 Matthew Dillon <dillon@backplane.com>
.\" $DragonFly: src/bin/varsym/varsym.1,v 1.3 2003/12/12 19:50:07 dillon Exp $
.\"
.Dd Nov 5, 2003
.Dt VARSYM 1
.Os
.Sh NAME
.Nm varsym
.Nd get and set user and system-wide variables for variant symlinks
.Sh SYNOPSIS
.Nm
.Op Fl qadsup
.Ar var[=data]
.Sh DESCRIPTION
The
.Nm
program manages user and system-wide variables. These variables are typically
used by the system to resolve variant symlinks but may also be used generally.
.Pp
For each operand set, modify, retrieve, or delete the specified variable.
By default, variables specified without data are retrieved and variables
specified with data are set. Variables may be set to empty.
.Bl -tag -width Ar
.It Fl a
List all variables at the specified level. Note that per-process variables
override per-user variables, and per-user variables override system-wide
variables. By default, per-user variables are listed.
.It Fl q
Quiet mode. When retrieving a variable, only its data is printed.
.It Fl d
Delete mode. The specified variables are deleted. Any specified data is
ignored.
.It Fl s
This option causes variables to be set system-wide and restricts retrievals
to system-specific variables.
.It Fl u
This option causes variables to be set on a per-user-id basis and restricts
retrievals to user-specific variables. This is the default.
.It Fl p
This option causes variables to be set on a per-process basis and restricts
retrievals to process-specific variables. Note that since
.Nm
is run as its own process, using this option to set a variable will not
affect your shell's namespace.
.El
.Sh RETURN VALUES
The
.Nm
utility exits with one of the following values:
.Bl -tag -width Ds
.It 0
No errors occurred.
.It 1
A requested variable could not be found.
.It 2
A requested variable could not be set.
.El
.Sh SEE ALSO
.Xr ln 1 ,
http://www.dragonflybsd.org/cvsweb/src/bin/varsym/varsym.1?f=h;content-type=text%2Fx-cvsweb-markup;ln=1;rev=1.3
Troubleshooting This is a wiki; if you find items not adequately covered below, please consider adding an entry. Contents - 1 download.py - 2 ns-allinone-3.26 build issues - 3 ns-allinone-3.5 build issues - 4 ns-allinone-3.4 build issues - 5 Platform-specific issues - 6 hg merge and resolve problems - 7 Click routing build problems - 8 General build problems - 9 Valgrind errors - 10 Run-time errors - 11 Python-related problems - 12 Debugging tips download.py pybindgen download fails If after running download.py, you get: bzr: ERROR: Invalid http response for: Unable to handle http code 400: Bad Request Check the status of the file ~/.bzr.log. You may see: 0.050 failed to import pycurl: No module named pycurl If so, install the python-curl package and try again. Make sure that your environment variable https_proxy is set if you are behind a web proxy. ns-allinone-3.26 build issues If you have installed Anaconda on a Mac,. 
ns-allinone-3.5 build issues gcc-4.4 issues For Fedora Core 11 and other systems that use gcc-4.4, you may encounter this type of compilation error: [614/702] cxx: build/debug/bindings/python/ns3_module_csma.cc -> build/debug/bindings/python/ns3_module_csma_3.o In file included from /usr/include/python2.6/pyconfig.h:6, from /usr/include/python2.6/Python.h:8, from debug/bindings/python/ns3module.h:3, from ../bindings/python/ns3module_helpers.cc:2: /usr/include/python2.6/pyconfig-64.h:1022:1: error: "_POSIX_C_SOURCE" redefined and gcc " " - f P I C " " " " - D _ _ A S S E M B L 1 Y _ _ - f P I C " " -Ilinux-2.6.18/include -Ilinux-2.6.18/include/asm/mach-default -Isim -Ilinux-2.6.18/nsc -Ilinux-2.6.18/override -c -o linux-2.6.18/arch/x86_64/lib/csum-copy.o linux-2.6.18/arch/x86_64/lib/csum-copy.S gcc: : No such file or directory gcc: f: No such file or directory gcc: P: No such file or directory gcc: I: No such file or directory gcc: C: No such file or directory gcc: : No such file or directory gcc: : No such file or directory ... The first error requires this patch to ns-3.5: The second error is an error in Network Simulation Cradle and requires patching nsc-0.5.0 with the nsc.patch attached to this message: ns-allinone-3.4 build issues Scons NSC error Note: This issue should not appear in ns-allinone-3.5 or later. # Build NSC Entering directory `nsc-0.5.0' => python scons.py linux-2.6.18 scons: Reading SConscript files ... Checking target architecure...(cached) x86 AttributeError: SConsEnvironment instance has no attribute 'CFile': or # Build NSC Entering directory `nsc-0.5.0' => python scons.py linux-2.6.18 scons: Reading SConscript files ... Checking target architecure...(cached) x86 scons: done reading SConscript files. scons: Building targets ... scons: *** While building `['globaliser/lexer.lex.cc']' from ` ['globaliser/lexer.l']': Don't know how to build from a source file with suffix `.l'. Expected a suffix in this list: ['.i']. 
scons: building terminated because of errors. This indicates that you probably don't have flex and bison installed on your system (for compiling NSC). On Ubuntu/Debian systems, you can try: sudo apt-get install flex bison For other systems, consult your package repositories, or install the packages from source. You may optionally edit the .config file of ns-allinone-3.4 to remove the nsc line, since nsc is an optional component: <?xml version="1.0" ?><config> <ns-3 <ns-3-traces <pybindgen dir="pybindgen-0.10.0.630" version="0.10.0.630"/> </config> Platform-specific issues CentOS 5/RHEL 5 CentOS 5 (or Red Hat Enterprise Linux 5) cannot successfully build Python bindings; the build errors out with messages such as: [ 886/1013] cxx: bindings/python/ns3module_helpers.cc -> build/debug/bindings/python/ns3module_helpers_3.o cc1plus: warnings being treated as errors debug/ns3/ptr.h: In instantiation of ‘ns3::Ptr<const ns3::AttributeAccessor>’: debug/ns3/attribute-accessor-helper.h:98: instantiated from here debug/ns3/ptr.h:69: warning: lowering visibility of ‘U* ns3::GetPointer(const ns3::Ptr<>&) [with U = U, T = const ns3::AttributeAccessor]’ to match its type debug/ns3/ptr.h:71: warning: lowering visibility of ‘U* ns3::PeekPointer(const ns3::Ptr<>&) [with U = U, T = const ns3::AttributeAccessor]’ to match its type ... Build failed -> task failed (err #1): {task: cxx ns3module.cc -> ns3module_3.o} -> task failed (err #1): {task: cxx ns3module_helpers.cc -> ns3module_helpers_3.o} -> task failed (err #1): {task: cxx ns3_module_flame.cc -> ns3_module_flame_3.o} -> task failed (err #1): {task: cxx ns3_module_list_routing.cc -> ns3_module_list_routing_3.o} This is a compiler problem with gcc-4.1.2 variant that is shipped on these platforms. One solution is to disable python (if you do not need the bindings) at configure time with: ./waf configure --disable-python On CentOS 5.4 at least, this can be fixed by moving to gcc-4.4, which is distributed as a package "gcc44". 
sudo yum install gcc44 gcc44-c++ Then, set your CXX environment variable as follows: export CXX=g++44 ./waf configure && ./waf SELinux (First reported on Fedora Core 10) cannot restore segment prot after reloc: Permission denied If you see a message such as this when executing an ns-3 script, you have crossed swords with SELinux. You have several alternatives to move forward. The first is to change the library file security context. If, for example, the OS complains about your liblinux2.6.26.so library, you can: cd ns-3-allinone/nsc/linux2.6.26 chcon -t texrel_shlib_t liblinux2.6.26.so If you feel confident about security and are okay with it, you can temporarily disable SELinux by /usr/sbin/setenforce 0 If you don't care about security at all you can permanently disable SELinux by changing "enforcing" to "disabled" in /etc/selinux/config and rebooting. This issue was addressed in the Linux kernel, and has been verified as not happening in Fedora Core 11. SELinux (First reported on Fedora Core 12) liblinux2.6.26.so: cannot enable executable stack as shared object requires: Permission denied This is another variation on the previous SELinux issue. The NSC liblinux* libraries are requesting an executable stack. See this with:. Cygwin Linking fails As of ns-3.10, Cygwin (version 1.7.7-1) fails at the linking stage: 1972 [main] ld 3896 C:\cygwin\bin\ld.exe: *** fatal error - cmalloc would have returned NULL collect2: ld terminated with signal 1 [Hangup] Waf: Leaving directory `/cygdrive/e/temp/ns-allinone-3.10-RC2/ns-3.10-RC2/build' Build failed: -> task failed (err #1): It is not known whether this problem exists with earlier versions of Cygwin. It has been reported elsewhere; e.g. here, and reported on the users mailing list during the 2010 timeframe. Python bindings Python bindings do not build successfully due to a gccxml bug. See the NS-3_Python_Bindings#Caveats page. 
If you see "resolving vtable" messages while linking such as the below when you compile on Cygwin, you can safely ignore them: [530/535] cxx_link: build/debug/examples/wifi-wired-bridging_23.o -> build/debug /examples/wifi-wired-bridging.exe Info: resolving vtable for ns3::EmptyAttributeValueby linking to __imp___ZTVN3ns 319EmptyAttributeValueE (auto-import) Info: resolving vtable for ns3::RandomVariableValueby linking to __imp___ZTVN3ns 319RandomVariableValueE (auto-import) Info: resolving vtable for ns3::RectangleValueby linking to __imp___ZTVN3ns314Re ctangleValueE (auto-import) Info: resolving vtable for ns3::TimeValueby linking to __imp___ZTVN3ns39TimeValu eE (auto-import) Info: resolving vtable for ns3::BooleanValueby linking to __imp___ZTVN3ns312Bool eanValueE (auto-import) etc. NSC The Network Simulation Cradle is not supported under Cygwin. It is advised to use a Unix-based platform instead. Windows users can use a virtual machine to run e.g. Linux. See e.g. for a free virtual machine. hg merge and resolve problems When working with multiple repos, you may encounter a problem when your merge fails, and you get this error: user@ubuntu-desktop:~/hg/ns-3-allinone/temp/ns-3-lr-wpan$ hg pull pulling from searching for changes adding changesets adding manifests adding file changes added 444 changesets with 2410 changes to 1241 files (+1 heads) (run 'hg heads' to see heads, 'hg merge' to merge) user@ubuntu-desktop:~/hg/ns-3-allinone/temp/ns-3-lr-wpan$ hg merge merging doc/models/Makefile merging src/network/helper/trace-helper.h merging src/network/utils/address-utils.cc merging src/network/utils/address-utils.h merging src/network/wscript merging src/wscript warning: conflicts during merge. merging src/wscript failed! 1222 files updated, 5 files merged, 92 files removed, 1 files unresolved use 'hg resolve' to retry unresolved file merges or 'hg update -C .' to abandon However, it is not clear what to do about this. 
This mailing list post raises some issues about the 'hg resolve' command, which haven't been addressed by Mercurial, and which are summarized below: In summary: - Do not use the bare 'hg resolve' command. It will tend to overwrite the changes that you make to fix this problem. - For each file for which the merge failed, hand edit the file in question (unless you use a built-in merge tool for your environment). Merge failures will be delimited such as follows: <<<<<<< local all_modules = [] ======= all_modules = ['core'] >>>>>>> other The above states that in your 'local' copy, you have an 'all_modules' statement which the proposed merged version ('other') has changed, but in such a way that the merge failed. You must edit all of these code snippets to remove the "<<<<<<", "=======", and ">>>>>>" lines and resolve the conflicting code statements by hand. - When you are done, make sure you mark the files as resolved with the 'hg resolve -m FILE' command, such as: user@ubuntu-desktop:~/hg/ns-3-allinone/temp/ns-3-lr-wpan$ hg resolve -m src/wscript - Commit your merge as the final step; e.g.: user@ubuntu-desktop:~/hg/ns-3-allinone/temp/ns-3-lr-wpan$ hg commit -m"merge with ns-3-dev" Click routing build problems click-2.0 release (July 2011) will not build without this patch. General build problems - Q. What is this kind of build error representative of? assert failed. file=../src/core/type-id.cc, line=389, cond="uid != 0" Command ['/ns-3-dev/build/debug/examples/mixed-wireless'] exited with code -5 - A. The system can't find the specified TypeId. Check spelling, and whether it is prefixed with the namespace "ns3::" - Q. What is this kind of build error representative of? collect2: ld terminated with signal 9 [Killed] - A. There is not enough virtual memory for the linker (ld). This type of error is most typically experienced when linking a static build with every ns-3 module enabled. 
Try to either reduce the scope of the build (by enabling only the modules that you care about) or increase your machine's swap size or physical memory. - Q. A required software package is in a non-standard place. How to get waf to find the packages? - A. To compile with non-standard link path, use the environment variable LINKFLAGS. Example for bash shell: LINKFLAGS="-L/tmp" ./waf configure Valgrind errors Valgrind can be used with ns-3 as follows: ./test.py -g // runs tests with valgrind ./waf --command-template="valgrind %s" --run <program-name> or simply by using valgrind within waf shell or with library paths set correctly. Valgrind sometimes reports errors on certain platforms due to some known issues with STL or with glibc. In particular, these problems have been reported: - "still-reachable" errors due to use of STL: - glibc-2.10 problems reported on Gentoo Linux: - glibc-2.11.1 problems on Ubuntu 10.04 LTS x86_64 The following program can be used on Linux to check your glibc version: #include <stdio.h> #include <gnu/libc-version.h> int main (void) { puts (gnu_get_libc_version ()); return 0; } Run-time errors - Q. What is this assertion indicating? assert failed. file=debug/ns3/tags.h, line=105, cond="sizeof (T) <= Tags::SIZE" - A. The packet tag may be greater than Tags::SIZE (16 bytes) - Q. Why does valgrind fail with SIGILL on a simulation using NSC? - A. Valgrind does not work under 64-bit mode when running NSC due to lack of support for all the opcodes used. This has been confirmed to work in 32-bit mode. Check this bug report if you are concerned about a possible valgrind failure with an nsc script and want to see if it is a known issue. - Q. 
NSC does not work on a platform with SELinux enabled; for instance $ ./waf --run tcp-nsc-lfn Entering directory `/hg/ns-3-dev/build' [250/250] build-nsc Compilation finished successfully /hg/ns-3-dev/build/debug/liblinux2.6.26.so: cannot restore segment prot after reloc: Permission denied Command ['/hg/ns-3-dev/build/debug/examples/tcp-nsc-lfn'] exited with code -11 - A. This is logged in the tracker. Workaround: to temporarily disable enforcement on a running system, set /usr/sbin/setenforce 0 To permanently disable enforcement during a system startup change "enforcing" to "disabled" in /etc/selinux/config and reboot. - Q. What happened to my pcap trace files? The files are empty or are missing some packets! - A. This is usually a symptom of a memory leak; the pcap trace files do not close cleanly. Try running your script with the --valgrind option to see if you've introduced a memory leak. See the python page Debugging tips - Q. Is there a way to dereference a Ptr when in gdb? - A. yes: p *ptr->m_ptr Craigdo 22:52, 27 March 2009 (UTC)
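Returning to the merge-conflict markers described in the "hg merge and resolve problems" section above: before marking a file resolved with 'hg resolve -m', it helps to confirm that no markers remain. A small stdlib Python sketch (my own helper, not a Mercurial command):

```python
# Sketch: list lines that still carry Mercurial/Git-style conflict markers.
# This is an illustrative helper, not part of Mercurial itself.

MARKERS = ('<<<<<<<', '=======', '>>>>>>>')

def conflict_lines(text):
    """Return (line_number, line) pairs that begin with a conflict marker."""
    return [(n, line) for n, line in enumerate(text.splitlines(), 1)
            if line.startswith(MARKERS)]

sample = """all_modules = []
<<<<<<< local
all_modules = ['core']
=======
all_modules = ['network']
>>>>>>> other
"""

for n, line in conflict_lines(sample):
    print(n, line)
```

An empty result is a reasonable sanity check before running 'hg resolve -m FILE' and the final merge commit.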
https://www.nsnam.org/wiki/Troubleshooting
We use WTForms, a module for validation of forms. We will start with a simple form containing one field asking for a name. Flask web forms We create this in code: We then create the template hello.html in the /templates/ directory: Start the application and open it in your webbrowser at. If you submit an empty form, you will get an error. But if you enter your name, it will greet you. Form validation is handled by the wtforms module, but because of the validator argument we gave, the field cannot be empty. Other parameters can be given here. CSS with Flask We use Bootstrap to style the form. Bootstrap is a popular HTML, CSS, and JS framework for developing responsive, mobile first projects on the web. It makes front-end web development faster and easier. The output will be: You can get the Bootstrap files from and extract them in a new directory /static/. The code remains almost the same, but the template is changed. Code: We added Bootstrap to the template hello.html: Flask registration form We use the same principle to create a registration form asking for name, email and password. We update the Form class: And we can get the variables passed using: Full code: Update the template hello.html with this code: Output: WTForms can validate email, password, numbers and many more. For a list of validators see: 5 thoughts on “Flask web forms” In the first code example, where are you getting the request object from? Also, what are you doing when you are passing “methods=[‘GET’, ‘POST’]” to app.route? Im a newbie to flask so please, explain it like im 5. 😀 thanks for the awesome tutorials! The request object is imported from flask on top. Your computer communicates with a special computer called a server. This server sends you data such as this website. Both computers need to ‘speak’ a language to communicate, this language is called the HTTP protocol. The language is very simple, there are only a few words such as GET and POST. 
A client (webbrowser) can send the "GET" word to the server, and it will return the file you request. To give data to the server, you use the "POST" command. When you opened this page, your web browser did the GET command. When you posted this comment, your browser did a POST command. Flask is server-side software. Python needs a way to connect the incoming HTTP commands (URLs) to functions, in this case "/", but it may as well be "/connor". This process is called routing. In the first example we define a route @app.route("/"), and map it to a function for both GET and POST commands. If a GET is sent, the function returns a screen with the text input. If a POST is sent, another output is shown. Thanks! Could you explain a bit more about the following line of code?

def reset(self):
    blankData = MultiDict([('csrf', self.reset_csrf())])
    self.process(blankData)

I can't see where it is used and I have never come across it in wtforms comments before. Thanks Hi, thanks for the comment! This method is not used now, you can remove it. For CSRF protection see:
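The post's original code listings did not survive extraction, so here is a stdlib-only sketch of the idea WTForms implements: each field carries validators that inspect the submitted value and record errors. The class and function names below are illustrative, not the real WTForms API.

```python
# Illustrative sketch of WTForms-style validation (names are not WTForms').

class ValidationError(Exception):
    pass

def data_required(value):
    """Fail when the field is empty or whitespace-only."""
    if value is None or not str(value).strip():
        raise ValidationError('This field is required.')

def validate_form(data, rules):
    """Run each field's validators and collect errors, as Form.validate() does."""
    errors = {}
    for field, validators in rules.items():
        for check in validators:
            try:
                check(data.get(field))
            except ValidationError as exc:
                errors.setdefault(field, []).append(str(exc))
    return errors

print(validate_form({'name': ''}, {'name': [data_required]}))
print(validate_form({'name': 'Ada'}, {'name': [data_required]}))
```

In real WTForms the equivalent check ships as a built-in validator, and the form is validated on POST before the view renders a response.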
https://pythonspot.com/en/flask-web-forms/
1. Introduction Here you will see how to use the File Open dialog and folder select dialogs. Also you will see how to use the StreamReader and StreamWriter classes. We will also use the Path utility class, which has a static function to parse the string that comes in the form of a file path. Before we start, have a look at the picture below, which was taken when the form is running: When you click the button, a file open dialog is displayed asking you to select a Bmp or Jpg image file or a text file. If you select a file other than the above-specified three formats, an error will be displayed and the file open dialog stays open so you can select a different file. When a text file is selected it will be displayed in the top portion of the form. This is shown in the above screen shot. When you select a bmp or jpg file the selected image is displayed instead of the text. The Save File button will save whatever text content is displayed in the multiple line textbox. It takes the file name from the Save File Name textbox and saves it in text format. Before saving, it will ask you to select a folder by displaying the Folder Select dialog. Once the file is saved, the saved location is shown below in a label. Save in Temp Path saves the file in the system temporary path. OK. Let us start developing it. 2. Design the Form Start a Visual C# Windows application. Then design the form by taking the guidance from the previous section. From the downloaded solution (Visual Studio 2005) have a look at each control's properties changed from the default (the bold values in the property window). The following is the short guidance: The OpenFileDialog and FolderBrowser dialogs are available under the dialogs group in your toolbox. Drag and drop those controls to the form. These controls are then shown below the form after you drop them. Since they are not visual controls they are not shown in the form. 
Place a text box as shown in the picture, set the multiline property to true, then resize the text box. Place a PictureBox control on top of the text box control and resize it to the same size as the text box. I hope the other controls and their placement can be done easily. Let us go to the coding part: you can explore the coding steps by searching the tag: //File 0

3. The System.IO namespace

This namespace is used to access all the functionality for the Windows file system. This article makes use of the FileStream, StreamReader, StreamWriter and Path classes from the IO namespace. So first include this namespace in the form code:

//File 000: Namespace required for accessing the Path utility and other
// file-related classes.
using System.IO;

4. File open Event Handler

In this event handler, the filters for the File Open dialog are first set using the Filter property. This property controls what file types a user can select. In our case it is bitmap, jpg and text files. However, we have also kept all files as an option. Once the filter is set, the dialog can be opened using the ShowDialog method, and the returned constant tells whether the user selected a file and clicked OK or not. In the Filter property we set the supported file types in pairs. For example, for text files: Text Files(*.txt) | *.txt

//File 001: Use File open dialog to get the file name
private void btnFileOpen_Click(object sender, EventArgs e)
{
    //001: Check the user selected a file and pressed ok button. Before opening the dialog
    //     set the filters
    dlgFileOpen.Filter = "Bitmap Files(*.bmp)|*.bmp|Other Image Files(*.jpg)|*.jpg|" +
        "Text Files(*.txt) | *.txt|All Files(*.*)|*.*";
    if (dlgFileOpen.ShowDialog() == DialogResult.OK)
    {

Note: "All Files" in the filter is intentional to show its use in the FileOpen dialog. Then, using the Path utility class's GetExtension method, the extension of the selected file is extracted. When the extension is not txt, we load the image file using the PictureBox control. 
The Filename property of the file open dialog will have the selected file name, and that property is referenced in the picture box's Load method call. The code is shown below:

The else portion clearly says that the selected file is a text file. Here, construct the StreamReader object by passing the FileStream for the selected text file to the constructor of the stream reader object. The stream object has the ability to read the stream in the form of text strings. Once we have the stream reader, we read each line of text and display it in the multi-line text box. Below is the code:

5. FileOk event handler

I can read your mind thinking, "In the previous step this guy sets all files as part of the filter option… and what happens if I select a file that does not fall into any of the bmp, txt or jpg file type categories?". It will fall in the first portion of the if statement and try to load the file in the picture box, getting an exception complaining that it is not a valid image file. So how do we avoid it? It is simple; remove that last pair in the filter. OK. Anyhow, I kept it to explain the FileOK handler. The FileOK event will be raised once you select the file using the file open dialog. To have a handler on it, select the OpenFileDialog component: Then using the property window, double click on the FileOK event (select the event icon button first in the Property window). The code of the handler can be easily understood and is shown below:

    ("Extension Maaaan... Extension! Open only txt or bmp or jpg");
    e.Cancel = true;
  }
}
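The Filter string's description|pattern pairing described in section 4 can be made concrete with a quick sketch; Python is used here purely for illustration, and parse_filter is my own helper, not part of WinForms.

```python
# The WinForms Filter string alternates "description|pattern" segments.
# Sketch: split such a string into (description, pattern) pairs.

def parse_filter(filter_string):
    parts = filter_string.split('|')
    return list(zip(parts[::2], parts[1::2]))

flt = ("Bitmap Files(*.bmp)|*.bmp|Other Image Files(*.jpg)|*.jpg|"
       "Text Files(*.txt)|*.txt|All Files(*.*)|*.*")

for desc, pattern in parse_filter(flt):
    print(desc, '->', pattern)
```

This mirrors how the dialog itself pairs each description in the drop-down with the wildcard pattern it applies.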
http://www.c-sharpcorner.com/UploadFile/6897bc/file-reader-writer-and-file-open-dialog-folder-open-dialog-in-C-Sharp-part-1/
This is the mail archive of the newlib@sourceware.org mailing list for the newlib project. On 08/29/2014 09:51 AM, Grissiom wrote: >> Yes, it does. The namespace reserved for the implementation is _[_A-Z]. >> The namespace _[a-z] is still available for the user. Which means the >> user can declare their own _printf_float, and WE (as the implementation) >> MUST NOT INTERFERE with it. Since WE are the implementation, we should >> use the namespace reserved for us, namely __printf_float. >> >> > You mean _[_a-z] (lower case) is the namespace reserved for implementation, > right? No, I spoke correctly. The namespace reserved for the implementation is all double underscores, and all single underscore followed by a capital. Single underscore followed by a lower case is NOT reserved for the implementation, and is therefore free for use by the user, and therefore the implementation must not interfere with the user's use of that namespace. -- Eric Blake eblake redhat com +1-919-301-3266 Libvirt virtualization library Attachment: signature.asc Description: OpenPGP digital signature
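The reservation rule stated above is mechanical enough to express as a pattern check. A sketch (the function is mine, and it deliberately ignores the C standard's additional file-scope reservation of all leading-underscore identifiers):

```python
import re

# C reserves for the implementation: identifiers beginning with two
# underscores, or with one underscore followed by an uppercase letter.
# A single underscore followed by lowercase (e.g. _printf_float) is not
# in that set, so the implementation must not claim it.
_RESERVED = re.compile(r'^(__|_[A-Z])')

def reserved_for_implementation(identifier):
    return bool(_RESERVED.match(identifier))

print(reserved_for_implementation('__printf_float'))  # True
print(reserved_for_implementation('_printf_float'))   # False
print(reserved_for_implementation('_Exit'))           # True
```

This is why the thread concludes that the library should use __printf_float rather than _printf_float: only the former sits in the implementation's namespace.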
https://sourceware.org/legacy-ml/newlib/2014/msg00365.html
Hi all, I am having problems finding the correct command to center the origin of my object using Python 2.5.2 for Blender 2.49b. I have a script that imports obj (faces) and then rotates it, taking still images at regular increments. Normally this works fine, but I have received some different objs that come into Blender in a completely different position, i.e. the center of origin is in the correct place but the mesh is way off to the left. The script is designed so that the face rotates around its own center of origin, which is located just behind the face, which has worked for all previous data I have used. So can anyone help me to find the correct command line for either centering the origin in the mesh or, better still, allowing me to define where the center of origin should be in relation to the mesh? I have found commands for later versions but not for 2.49b. Below is the script:

import Blender
import bpy
from Blender import *
import os
import glob
import import_obj
import math

pth1 = "C:\..........."
os.chdir(pth1)
f = glob.glob("*.obj")
myCam = Blender.Object.Get("Camera")
mylamp = Blender.Object.Get("Lamp")

for a in f:
    fn = pth1 + "/" + a
    import_obj.load_obj(fn, 10)
    from Blender.Scene import Render
    scn = bpy.data.scenes.active
    context = scn.getRenderingContext()
    context.extensions = True
    context.renderPath = "//"
    context.imageType = Render.BMP
    context.sizeX = 480
    context.sizeY = 480
    lo = list(scn.objects)
    print(lo)
    myobj = Blender.Object.Get(lo[0].name)
    is_editmode = Window.EditMode()  # Store edit mode state
    if is_editmode:
        Window.EditMode(0)  # Python must get a mesh in object mode.
    mymsh = myobj.getData()
    #print(msh)
    #msh = list(bpy.data.meshes)
    #mymsh = msh[len(msh)-1]
    for face in mymsh.faces:
        face.smooth = 1
    Window.EditMode(1)
    verts = mymsh.verts[:]
    #mymsh.transform(myobj.matrix)
    for v in verts:
        v.co[1] -= 50
    mymsh.verts = verts
    mymsh.update()
    Blender.Redraw()
    mtrl = mymsh.materials[0]
    mtrl.rgbCol = [0.5, 0.5, 0.5]
    mtrl.setSpecShader(1)
    mtrl.setSpec(2)
    mylamp.LocZ = 10
    mylamp.LocY = 10
    mylamp.LocX = 10
    mylamp.RotX = 3*3.14159/2 + math.pi/6
    mylamp.RotZ = 0  # 3.14159/4
    mylamp.RotY = 0
    # TEST stimuli
    myobj.LocX = 0
    myobj.LocY = -1
    myobj.LocZ = 0
    myobj.RotX = math.pi/18
    myobj.RotZ = 3.14159 + 3.14159/2
    myCam.RotX = 3*3.14159/2
    myCam.RotY = 3.14159
    myCam.LocZ = 0
    myCam.LocX = 0
    myCam.LocY = 5
    one2three = range(1, 4)
    #one2seven = range(1, 8)
    one2twenty = range(1, 20)
    one2thirtyeight = range(1, 38)
    one2seventythree = range(1, 73)
    one2onehundredtwentytwo = range(1, 122)
    one2onehundredeightytwo = range(1, 182)
    for i in one2onehundredtwentytwo:
        Blender.Redraw()
        context.render()
        yang = 600 - (i-1)*15
        imname = a[:-4] + "vy" + `yang`
        context.saveRenderedImage(imname)
        myobj.RotZ -= 3.14159/120
        # myobj.RotZ = 3.14159+3.14159/4
    scn.unlink(myobj)
    obs = scn.objects
    del scn

I am a beginner with Python for Blender so any help would be appreciated. Thanks!!!

Center origin using python script for Blender 2.49b (forum: Scripting in Blender with Python, and working on the API)

this relates to 2.63, but maybe gives a hint: ... group_id=9 ... group_id=9

I'm sitting, waiting, wishing, building Blender in superstition...
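One way to approach what the poster asks is to translate the mesh's vertices so that their bounding-box center lands on the object's origin. The vertex math itself is Blender-independent and can be sketched as below; the commented `mymsh.verts` / `v.co` access mirrors the 2.49 API used in the script above and should be treated as an assumption:

```python
def center_offset(coords):
    """Return the bounding-box center of a list of (x, y, z) tuples."""
    xs, ys, zs = zip(*coords)
    return (
        (min(xs) + max(xs)) / 2.0,
        (min(ys) + max(ys)) / 2.0,
        (min(zs) + max(zs)) / 2.0,
    )

def center_at_origin(coords):
    """Shift all vertices so the bounding-box center sits at (0, 0, 0)."""
    cx, cy, cz = center_offset(coords)
    return [(x - cx, y - cy, z - cz) for (x, y, z) in coords]

# Inside Blender 2.49 this would be applied roughly as (untested sketch):
#   offs = center_offset([tuple(v.co) for v in mymsh.verts])
#   for v in mymsh.verts:
#       v.co[0] -= offs[0]; v.co[1] -= offs[1]; v.co[2] -= offs[2]
#   mymsh.update()

if __name__ == "__main__":
    verts = [(10.0, 0.0, 0.0), (14.0, 2.0, 6.0), (12.0, -2.0, 2.0)]
    print(center_at_origin(verts))
```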
https://www.blender.org/forum/viewtopic.php?p=97678
Garbage Collector / Destructor in Java

Java is an object-oriented programming language. As we all know, it supports objects, methods, variables, etc. When we create a new object, Java allocates the space required for the class in the HEAP memory. That space cannot be freed by any function call; it is entirely under the control of the JVM (Java Virtual Machine). So we can conclude that there is no destructor concept in Java. For memory management, however, Java supports a garbage collection mechanism.

What is the Garbage Collector?

- It is a utility of the Java Virtual Machine used to reclaim previously allocated space/memory.
- Compared to C++, in Java the developer does not need to worry about freeing memory.
- Garbage collection is done by a thread named the Garbage Collector.
- The developer cannot invoke the garbage collector directly; it is controlled by the JVM. When the JVM decides that memory needs to be reclaimed, it starts the Garbage Collector thread.
- We can, of course, send a request to the JVM: "Please invoke the garbage collector!" But it is just a request, not a command; the garbage collector may or may not be invoked. The calls to send such a request are: 1) System.gc() 2) Runtime.gc().
- One important detail: when the garbage collector is invoked by the JVM, the finalize() method of our object is invoked, so the developer can run some code at that time.

Which objects will be removed by the Garbage Collector?

- Objects which are not used anywhere further.
- In case an object has multiple references, when none of those references is used further.
- Objects created inside a block, once execution of that block is over.
- When a parent object is no longer used, all of its child objects are also unused.

The kinds of objects listed above will be removed by the garbage collector.

void finalize()

As discussed above, a method named finalize() will be invoked when the garbage collector is going to reclaim the memory (object) of that class, so the developer can write last-moment cleanup code. You can refer to the code below to learn how to write a finalize() method.

import java.io.*;

class OpenAFile {
    FileInputStream aFile = null;
    String filename = "";

    OpenAFile(String filename) {
        this.filename = filename;
        try {
            aFile = new FileInputStream(filename);
        } catch (java.io.FileNotFoundException e) {
            System.err.println("Could not open file " + filename);
        }
    }

    public void finalize() {
        try {
            if (aFile != null) {
                aFile.close();
                aFile = null;
            }
        } catch (Exception e) {
            System.err.println("Could not close " + filename);
        }
    }
}

Let me know in case you have any question.
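The "request, not command" point can be sketched as follows. Whether a collection (and hence finalize()) actually happens after the request is entirely up to the JVM, so the sketch only demonstrates issuing the request; the class name is made up:

```java
public class GcRequestDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();

        // Allocate and immediately drop a large array so that there is
        // something eligible for collection.
        byte[] garbage = new byte[8 * 1024 * 1024];
        garbage = null;

        // Both calls below merely *request* a collection;
        // the JVM is free to ignore them.
        System.gc();
        rt.gc();

        System.out.println("free=" + rt.freeMemory()
                + " total=" + rt.totalMemory());
    }
}
```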
http://ankit.co/tutorials/java-tutorials/garbage-collector/garbage-collector-destructor-in-java
Name: krT82822  Date: 09/23/99

This is in response to the evaluation of bug 4256006. In spite (or because) of what the evaluation said, I remain convinced that this is a bug. (I have included the bug report below, with the test case.)

The evaluator asks: "That's really the design here, if you have ideas for a better design please send it in"

Okay, here goes. I propose you use the existing design. If I edit a cell using the code below, and double-click into the cell, you now get a focused cell, with a blinking caret, into which you can type. If you hit an arrow key, the cursor moves. The selected cell doesn't change. That's the design. That's also the behavior I want to see, and it's already implemented.

However, if you follow the instructions described below, you get entirely different behavior, which is clearly a bug. What you get is a cell that appears to have the focus (it has a black outline) and into which you can type, but the caret doesn't blink. Once I start to type, I want to see exactly the same behavior that I see if I double-click into the cell. The fact that I can get two different behaviors when I enter text is clearly a bug. It is also unquestionably a bug when I can type without a caret. My users are complaining about this issue. If you have any doubts, or if you would like to discuss this issue with me, please call me at 626/685-5661.

I have submitted this bug three times before, and it sometimes gets accepted, then closed with an evaluation that fails to address the issue of the missing cursor. It's very frustrating to submit the same bug over and over again. Here is the original report.

/*
 Program to illustrate the caret bug. Typing into an unfocussed cell fails to transfer the focus to the cell editor.

 To reproduce, run this program. Click into any cell a single time. A caret will not appear. At this point, the cell has a colored outline, indicating that the JTable has the focus, rather than the cell editor. So the caret should not be visible yet. Now, start typing. The cell outline will change to black, indicating that the focus has shifted to the cell editor. The text will appear as I type, but the caret is still missing. By now, it is clear that there is a bug. The caret should always be visible during typing.

 It appears that, even though the cell is outlined in black, and I can type into the cell, it still doesn't have the focus. You can see this by hitting an arrow key. The focus will move to a different cell, rather than changing the caret position. This shows that the cell editor doesn't actually have the focus, as it should. Once I have started typing, I can click into the cell, and the caret will finally appear. Now the arrow keys will move the caret.

 This behavior is discussed in bug 4188907. That bug was erroneously closed. Please reopen that bug, rather than creating a new bug. I don't want to lose the votes this bug has already accumulated.
*/

import javax.swing.*;
import javax.swing.event.*;
import javax.swing.table.*;
import java.awt.event.*;

public class JTableFocusBug {
    public static void main(String[] args) {
        JFrame mf = new JFrame("Caret in JTable Bug");
        WindowListener wl = new WindowAdapter() {
            public void windowClosing(WindowEvent evt) {
                System.exit(0);
            }
        };
        mf.addWindowListener(wl);
        mf.setBounds(10, 10, 400, 400);
        JScrollPane scroller = new JScrollPane(new JTable(new BugModel()));
        mf.getContentPane().add(scroller);
        mf.show();
    }

    private static class BugModel extends AbstractTableModel {
        public int getRowCount() { return 3; }
        public int getColumnCount() { return 3; }
        public Object getValueAt(int row, int col) { return "" /* + row + ", " + col */; }
        BugModel() { super(); }
        public boolean isCellEditable(int row, int col) { return true; }
    }
}

(Review ID: 95618)
======================================================================

CONVERTED DATA
BugTraq+ Release Management Values
COMMIT TO FIX: generic merlin-beta
FIXED IN: merlin-beta
INTEGRATED IN: merlin-beta

EVALUATION
In Merlin-beta we have added a new property to the JTable: "surrendersFocusOnKeystroke" to address this issue. The property is false by default so the behavior of the JTable is backward compatible with previous releases. By setting the property value to true, the JTable is put in a mode where typing in a cell transfers focus to the editor automatically. The key difference in this mode is seen in the behavior of horizontal arrow keys, which will effect intercell movements instead of changing the selected cell in the JTable. The blinking caret is also visible in this mode as the editor component has genuinely received the focus after a key is pressed.
philip.milne@eng 2001-03-28
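The property named in the evaluation can be enabled like this (a minimal sketch; the empty DefaultTableModel is a stand-in for a real model):

```java
import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;

public class SurrenderFocusDemo {
    public static void main(String[] args) {
        JTable table = new JTable(new DefaultTableModel(3, 3));

        // Opt in to the Merlin-beta behavior: typing in a cell hands
        // focus to the editor, so the caret blinks and arrow keys move
        // within the cell instead of changing the table selection.
        table.setSurrendersFocusOnKeystroke(true);

        System.out.println(table.getSurrendersFocusOnKeystroke()); // prints true
    }
}
```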
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4274963
A web application framework that is designed to support realtime web applications simply and effectively. It takes a 'batteries included, but optional' approach; it comes with a template language and an extensible authentication system, but you are not required to use any of that. It also provides a Django-like admin section that, when coupled with the built-in database support, allows you to inspect, modify, and create instances of your models. (Note: this feature is very new, and very experimental; it is not recommended for use in production.)

A basic omega application looks like this:

var path = ;var app = app;approuter;// Start the omega app.app;

This tells omega to serve the ./static folder statically at the url /static/*, and then starts listening for incoming connections. Admittedly, this isn't the most exciting application in the world, but it illustrates the basics of omega; first and foremost: omega is simple. It tries to make whatever you're doing as straightforward as possible, and to hide any complexity from you. Second, this demonstrates a core concept of working with omega: the app is king. The omega application object provides an API into omega's functionality. Really, this should be intuitive for most people, but it's worth repeating.

The basic example also included a basic example of static file serving. Omega supports as many static files as you would like, and handles directories as well as individual files. To expand on the first example, we can also pass a list of directories to serve:

var path = ;var app = app;approuter

The big difference between serving a static file and serving a normal url route is that omega looks for the path key and assumes that anything with path is a static file. As path is not an HTTP verb, I feel this is safe. Currently, the only option supported is autoIndex. If true, and the path requested is a directory, omega looks for an index.html file and serves that instead.

Under the hood omega uses its own custom static file router, which supports streaming. While the intention is for it to be usable in production, a dedicated static file server, like nginx, will always be faster and more secure.

Even though the focus of omega is on realtime web applications, there are reasons you may wish to do things a more traditional way, with server-side processing. Or, perhaps, you need to write a simple REST service. That's easily done with omega:

var app = app;var controllers = ;approuter;

It also supports adding multiple paths, with multiple verbs, at once:

var app = app;var controllers = ;approuter;

The url parameter can be a regular expression, supporting capture groups:

approuter;

In the controllers.blog function, you can get the url parameter like this:

{var slug = request.params.slug; // Your code here
response;}

This is all simply a wrapper around gett/router, with a bit of syntactic sugar. All HTTP verbs supported there are supported by omega.

One of the big things omega provides is socket.io functionality. We expose this in a very straightforward way:

var app = app;appsockets;

Since socket.io has a great, easy to understand API, we don't even bother to wrap it; we just take care of starting the server for you. (We don't expose the io object directly, as there's hardly any need. Should you need to use it, you can access it via app._io.) We also expose socket.io's namespaces as channels:

var app = app;app;

This makes it nice and straightforward to write socket.io code however you wish.

Omega has integration with Passport for authentication. This can be accessed through require('omega-wf').auth. (Note: This is one of the few pieces not wrapped in the omega app. This is because auth is considered optional.) Example to come soon.

Sometimes, you need to do some initialization that depends on the omega app having finished its setup. For these cases, omega provides app.init:

var app = app;app;// It is not safe to work with app.config here.

This is very useful if you want to split your app into several modules, some of which depend on configuration.

It's possible to set the name of your application:

var app = app;app;

This is useful for logging, mostly. (But it might get used later. Suggestions welcome!)

Tests can be run with:

$ npm test

Simply install globally from npm:

$ npm install -g omega-wf

This will get you the omega-admin script, with which you can start a new app:

$ omega-admin startapp my_app

(You can also install it locally, but then you won't get the omega-admin script.)

This is a massive work in progress. Currently, I'm gearing up for a 1.0 release. The API is mostly stable, and I'm using it to develop some projects. As I find issues, I am documenting them with GitHub issues and then fixing them, so the issues list is a good idea of what doesn't work. At the moment, I would call it "beta quality", and wouldn't run it in production without doing some extensive testing. If, however, you are a brave soul and are using it in production, let me know! The more feedback, the better.
https://www.npmjs.com/package/omega-wf-krtv
On Wed, 2006-03-15 at 14:22 -0500, David Zeuthen wrote:
> I'm reiterating all this to give some kind of insight to what kind of
> "external applications" we can expect.

I think this would be a wonderful platform to start with. Miles better than Win32 at any rate, which is the competition.

> Btw, I was wondering whether we really need union mounts to make your
> proposal work? Won't playing tricks with PATH and LD_LIBRARY_PATH work
> instead? If so, what are the constraints of this? Things like IPC and
> other stuff comes to mind.

Union mounts are not totally essential. I proposed them because a _lot_ of software assumes everything is installed under /usr, /etc and so on. You don't realise how deep this assumption goes until you try and let people install software elsewhere - as both autopackage and CodeWeavers products let users do this to some extent, I've experienced this pain directly. For instance, consider the following things that have to be adjusted to enable a new prefix to work correctly:

* Shell PATHs. For every shell, mind, not just bash.
* DBUS config file needs to merge paths
* Fontconfig config file needs to merge paths
* $XDG_DATA_DIRS
* Menu XML definitions sometimes need to merge in extra .desktop directories
* Bonobo activation paths
* Linker paths
* man/info paths
* screensaver paths

The list never seems to end. Certainly adjusting all these paths at runtime would require serious patching action.

Essentially this is a problem of namespace design: without union mounts the filing system can only represent one kind of "query". On Windows/OS X this query is "what files are logically related by software package?", and on UNIX this query is "what files are logically related by purpose?". Both choices are fine, but because every modern OS needs to be able to do both kinds of query, another separate database is maintained. On Windows this is the registry, and this is what is queried to get file association information, COM activation paths, menu definition data etc. On Linux this is the package manager database. And on OS X this is a database held in memory by "LaunchServices".

Union mounts let you drop the separate database because now you can use the filing system to define both groups. Keeping /usr around makes it the 'central file type database' and that is great for backwards compatibility and convenience. Keeping stuff in /applications means you don't need a separate db to find out which files are related by package (file hash lists etc can always be stored in /applications/whatever/.hashes).

> (yes, I really wish myself that we had union mounts, don't Darwin or Mac
> OS X support this btw?)

There is unionfs which works OK and is in widespread deployment on Live CDs, and actually I use it myself a bit. But it's not a part of the mainline kernel. And as pointed out by Alan, the code doesn't seem to mention file notifications anywhere (I've asked on the unionfs list about this). I don't know if OS X supports union mounts but I'd doubt it. They don't use the filing system to do things like "Give me a list of every help document on the system"; instead they do RPCs to LaunchServices, which builds a database at startup and then registers watches to keep it up to date.

> Oh - our image build scripts will make this somewhat easy for you to
> experiment with [1] as the whole OS is prepared there.

Yes, I saw that. It looks nice and easy to play with. For extra bonus points this summer I'll be exiled to Malvern again for two months, which means I'll be using a 600mhz laptop with 128mb of ram and no internet access in the evenings. Apart from resembling the specs of an OLPC laptop, the lack of internet is great for productivity :) Hopefully I'll get time to hack on some of this stuff then. It's still a few months off but I want a binary ABI lint tool for autopackage anyway, so our goals are aligned here.

> Personally I
> think it would be really awesome if you/we could come up with a solution
> (based on autopackage? modified?) that fits both the requirements we
> come up with (cf. you mail about use cases and design which is good) and
> is feasible to implement kernel, user and desktop-space wise.

Yeah, where/if autopackage fits into this is still something I'm looking at. If you assume a closed universe then there's no point to it, as autopackage is at heart an abstraction of Linux distributions. If a goal is to let the wider Linux community package software for the OLPC without necessarily being a part of the project (or even knowing about it) then it could prove handy. Autopackage also currently lacks a good online update system, but it's not too hard to add and I hope to reach it this summer.

The rest of it is feasible IMHO. Bringing unionfs up to scratch is probably the biggest chunk of work; there's a difference between a live CD and a full time OS. Writing the little programs to watch network mounts and manage the unions etc is easy. So there's some work to be done. On the other hand, the requirements are sufficiently different that I don't think "zero coding" is realistic for software management here. One nice thing is that personally, if I can find time and energy, I wanted to try these ideas out anyway. As they're quite experimental they could be tried in parallel and the OLPC project gets them "for free" in effect.

> [1] : I'm not sure if our Mercurial repository for these tools are
> publically available yet, but if it's not it will be as soon when FC5 is
> out - the people controlling servers in our DMZ have been / are busy
> with Fedora and RHEL update releases...

That's cool. Why Mercurial though?

thanks -mike
https://www.redhat.com/archives/olpc-software/2006-March/msg00108.html
Grokkers for plone.tiles

This package provides declarative configuration (grokkers) for the plone.tiles package.

Usage

A basic tile is configured like this:

from five import grok
from plone.directives import tiles
from my.package import MyMessageFactory as _
from my.package.interfaces import IContext, IMyTileSchema

class MyTile(tiles.Tile):
    grok.context(IContext)
    grok.require('zope2.View')
    grok.name('my.tile')
    grok.title(_(u"My title"))
    grok.description(_(u"My description"))
    tiles.add_permission('mypackage.AddMyTile')
    tiles.schema(IMyTileSchema)

Notes

- A persistent tile can be configured by deriving from PersistentTile instead of Tile.
- The context(), require(), name(), and layer() directives are used in the same way as they are for a view derived from grok.View.
- Templates are associated using the same semantics as are used for views. For example, a tile in mytiles.py defined in the class MyTile would be associated with a template mytiles_templates/mytile.pt by default.
- Unlike a view, the name() directive is required and should give a dotted name for the tile by convention.
- Dictionary key access (__getitem__()) is defined to work as it does in a tile. In a standard grokked view, it will retrieve a macro from the template. In a tile, it is used as a traversal hook to set the tile id, and subsequently to look up views on the tile. See plone.tiles for details.
- Similarly, standard grokked views have a url() method which can be used to construct a URL, possibly with query string parameters. For grokked tiles, this is replaced by a url read-only property, which returns the canonical tile URL, as per plone.tiles.

Changelog

1.1 (2012-11-26)

- Added icon directive to support plone.tiles >= 1.1. [datakurre]

1.0 (2012-06-23)

- Initial release [garbas]
https://pypi.org/project/plone.directives.tiles/
On Mon, Feb 15, 2010 at 04:44:48AM -0500, Christoph Hellwig wrote:
> Currently the fsync file operation is divided into a low-level routine doing
> all the work and one that implements the Linux file operation and does minimal
> argument wrapping. This is a leftover from the days of the vnode operations
> layer and can be removed to simplify the code a bit, as well as preparing for
> the implementation of an optimized fdatasync which needs to look at the
> Linux inode state.
>
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>

Looks good, one minor thing:

> Index: xfs/fs/xfs/linux-2.6/xfs_file.c
> ===================================================================
> --- xfs.orig/fs/xfs/linux-2.6/xfs_file.c  2010-02-15 10:18:58.640023657 +0100
> +++ xfs/fs/xfs/linux-2.6/xfs_file.c       2010-02-15 10:28:07.311260422 +0100
> @@ -35,6 +35,7 @@
>  #include "xfs_dir2_sf.h"
>  #include "xfs_dinode.h"
>  #include "xfs_inode.h"
> +#include "xfs_inode_item.h"
>  #include "xfs_bmap.h"
>  #include "xfs_error.h"
>  #include "xfs_rw.h"
> @@ -96,6 +97,120 @@ xfs_iozero(
>       return (-status);
>  }
>
> +/*
> + * We ignore the datasync flag here because a datasync is effectively
> + * identical to an fsync. That is, datasync implies that we need to write
> + * only the metadata needed to be able to access the data that is written
> + * if we crash after the call completes. Hence if we are writing beyond
> + * EOF we have to log the inode size change as well, which makes it a
> + * full fsync. If we don't write beyond EOF, the inode core will be
> + * clean in memory and so we don't need to log the inode, just like
> + * fsync.
> + */
> +STATIC int
> +xfs_file_fsync(
> +     struct file             *file,
> +     struct dentry           *dentry,
> +     int                     datasync)
> +{
> +     struct xfs_inode        *ip = XFS_I(dentry->d_inode);
> +     struct xfs_trans        *tp;
> +     int                     error = 0;
> +     int                     log_flushed = 0;
> +
> +     xfs_itrace_entry(ip);
> +
> +     if (XFS_FORCED_SHUTDOWN(ip->i_mount))
> +             return -XFS_ERROR(EIO);
> +
> +     xfs_iflags_clear(ip, XFS_ITRUNCATED);
> +
> +     /*
> +      * We always need to make sure that the required inode state is safe on
> +      * disk. The inode might be clean but we still might need to force the
> +      * log because of committed transactions that haven't hit the disk yet.
> +      * Likewise, there could be unflushed non-transactional changes to the
> +      * inode core that have to go to disk and this requires us to issue
> +      * a synchronous transaction to capture these changes correctly.
> +      *
> +      * This code relies on the assumption that if the i_update_core field
> +      * of the inode is clear and the inode is unpinned then it is clean
> +      * and no action is required.
> +      */
> +     xfs_ilock(ip, XFS_ILOCK_SHARED);
> +
> +     if (ip->i_update_core) {
> +             /*
> +              * Kick off a transaction to log the inode core to get the
> +              * updates. The sync transaction will also force the log.
> +              */
> +             xfs_iunlock(ip, XFS_ILOCK_SHARED);
> +             tp = xfs_trans_alloc(ip->i_mount, XFS_TRANS_FSYNC_TS);
> +             error = xfs_trans_reserve(tp, 0,
> +                             XFS_FSYNC_TS_LOG_RES(ip->i_mount), 0, 0, 0);
> +             if (error) {
> +                     xfs_trans_cancel(tp, 0);
> +                     return -error;
> +             }
> +             xfs_ilock(ip, XFS_ILOCK_EXCL);
> +
> +             /*
> +              * Note - it's possible that we might have pushed ourselves out
> +              * of the way during trans_reserve which would flush the inode.
> +              * But there's no guarantee that the inode buffer has actually
> +              * gone out yet (it's delwri). Plus the buffer could be pinned
> +              * anyway if it's part of an inode in another recent
> +              * transaction. So we play it safe and fire off the
> +              * transaction anyway.
> +              */
> +             xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
> +             xfs_trans_ihold(tp, ip);
> +             xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
> +             xfs_trans_set_sync(tp);
> +             error = _xfs_trans_commit(tp, 0, &log_flushed);
> +
> +             xfs_iunlock(ip, XFS_ILOCK_EXCL);
> +     } else {
> +             /*
> +              * Timestamps/size haven't changed since last inode flush or
> +              * inode transaction commit. That means either nothing got
> +              * written or a transaction committed which caught the updates.
> +              * If the latter happened and the transaction hasn't hit the
> +              * disk yet, the inode will be still be pinned. If it is,
> +              * force the.

Other than that, the change looks ok.

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
http://oss.sgi.com/archives/xfs/2010-02/msg00274.html
add web reference - access denied

Discussion in 'ASP .Net', started by Steve Rich.

Similar threads:

* shared assembly in the add reference - babu dhayal via .NET 247, Aug 5, 2004, in forum: ASP .Net. Replies: 2. Views: 6,422. Last reply: Nelson Xu, Aug 17, 2004.
* Access Denied ... <add assembly="*"> - Randy Paez, Nov 20, 2003, in forum: ASP .Net. Replies: 8. Views: 3,357. Last reply: fokko, Mar 10, 2010.
* Access Denied to Reference dll - byrd48, Oct 30, 2005, in forum: ASP .Net. Replies: 3. Views: 4,193. Last reply: Patrick.O.Ige, Oct 31, 2005.
* Web Service + Anon Access, but getting 401 Access Denied Error - Alex Washtell via .NET 247, Apr 5, 2005, in forum: ASP .Net Web Services. Replies: 1. Views: 304. Last reply: Mauro Ottaviani [MSFT], Apr 5, 2005.
* procedure to add web reference which will not create new namespace just add class in existing namespace - Deep Mehta via .NET 247, May 28, 2005, in forum: ASP .Net Web Services. Replies: 2. Views: 567. Last reply: Dave A, May 31, 2005.
http://www.thecodingforums.com/threads/add-web-reference-access-denied.519206/
Post Syndicated from Eevee original IndustrialRobot has generously donated in order to inquire: In the last few years there seems to have been a lot of activity with adding emojis to Unicode. Has there been an equal effort to add ‘real’ languages/glyph systems/etc? And as always, if you don’t have anything to say on that topic, feel free to choose your own. :p Yes. I mean, each release of Unicode lists major new additions right at the top — Unicode 10, Unicode 9, Unicode 8, etc. They also keep fastidious notes, so you can also dig into how and why these new scripts came from, by reading e.g. the proposal for the addition of Zanabazar Square. I don’t think I have much to add here; I’m not a real linguist, I only play one on TV. So with that out of the way, here’s something completely different! A brief history of JavaScript JavaScript was created in seven days, about eight thousand years ago. It was pretty rough, and it stayed rough for most of its life. But that was fine, because no one used it for anything besides having a trail of sparkles follow your mouse on their Xanga profile. Then people discovered you could actually do a handful of useful things with JavaScript, and it saw a sharp uptick in usage. Alas, it stayed pretty rough. So we came up with polyfills and jQuerys and all kinds of miscellaneous things that tried to smooth over the rough parts, to varying degrees of success. And… that’s it. That’s pretty much how things stayed for a while. I have complicated feelings about JavaScript. I don’t hate it… but I certainly don’t enjoy it, either. It has some pretty neat ideas, like prototypical inheritance and “everything is a value”, but it buries them under a pile of annoying quirks and a woefully inadequate standard library. The DOM APIs don’t make things much better — they seem to be designed as though the target language were Java, rarely taking advantage of any interesting JavaScript features. 
And the places where the APIs overlap with the language are a hilarious mess: I have to check documentation every single time I use any API that returns a set of things, because there are at least three totally different conventions for handling that and I can’t keep them straight.

The funny thing is that I’ve been fairly happy to work with Lua, even though it shares most of the same obvious quirks as JavaScript. Both languages are weakly typed; both treat nonexistent variables and keys as simply false values, rather than errors; both have a single data structure that doubles as both a list and a map; both use 64-bit floating-point as their only numeric type (though Lua added integers very recently); both lack a standard object model; both have very tiny standard libraries. Hell, Lua doesn’t even have exceptions, not really — you have to fake them in much the same style as Perl.

And yet none of this bothers me nearly as much in Lua. The differences between the languages are very subtle, but combined they make a huge impact. Lua has separate operators for addition and concatenation, so + is never ambiguous. It also has printf-style string formatting in the standard library. Lua’s method calls are syntactic sugar: foo:bar() just means foo.bar(foo). Lua doesn’t even have a special this or self value; the invocant just becomes the first argument. In contrast, JavaScript invokes some hand-waved magic to set its contextual this variable, which has led to no end of confusion. Lua has an iteration protocol, as well as built-in iterators for dealing with list-style or map-style data. JavaScript has a special dedicated Array type and clumsy built-in iteration syntax. Lua has operator overloading and (surprisingly flexible) module importing. Lua allows the keys of a map to be any value (though non-scalars are always compared by identity). JavaScript implicitly converts keys to strings — and since there’s no operator overloading, there’s no way to natively fix this.
These are fairly minor differences, in the grand scheme of language design. And almost every feature in Lua is implemented in a ridiculously simple way; in fact the entire language is described in complete detail in a single web page. So writing JavaScript is always frustrating for me: the language is so close to being much more ergonomic, and yet, it isn’t.

Or, so I thought. As it turns out, while I’ve been off doing other stuff for a few years, browser vendors have been implementing all this pie-in-the-sky stuff from “ES5” and “ES6”, whatever those are. People even upgrade their browsers now. Lo and behold, the last time I went to write JavaScript, I found out that a number of papercuts had actually been solved, and the solutions were sufficiently widely available that I could actually use them in web code.

The weird thing is that I do hear a lot about JavaScript, but the feature I’ve seen raved the most about by far is probably… built-in types for working with arrays of bytes? That’s cool and all, but not exactly the most pressing concern for me. Anyway, if you also haven’t been keeping tabs on the world of JavaScript, here are some things we missed.

let

MDN docs — supported in Firefox 44, Chrome 41, IE 11, Safari 10

I’m pretty sure I first saw let over a decade ago. Firefox has supported it for ages, but you actually had to opt in by specifying JavaScript version 1.7. Remember JavaScript versions? You know, from back in the days when people actually suggested you write stuff like this: Yikes.

Anyway, so, let declares a variable — but scoped to the immediately containing block, unlike var, which scopes to the innermost function. The trouble with var was that it was very easy to make misleading: If you reused the same temporary variable name in a different block, or if you expected to be shadowing an outer foo, or if you were trying to do something with creating closures in a loop, this would cause you some trouble.
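The closures-in-a-loop trouble can be seen in a few lines; this is a sketch, not the post's original code:

```javascript
// With var, every closure shares one function-scoped i, so after the
// loop finishes all of them see the final value.
const varFuncs = [];
for (var i = 0; i < 4; i++) {
  varFuncs.push(function () { return i; });
}
console.log(varFuncs.map(f => f()));  // [ 4, 4, 4, 4 ]

// With let, each run through the loop gets its own binding.
const letFuncs = [];
for (let j = 0; j < 4; j++) {
  letFuncs.push(function () { return j; });
}
console.log(letFuncs.map(f => f()));  // [ 0, 1, 2, 3 ]
```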
But no more, because let actually scopes the way it looks like it should, the way variable declarations do in C and friends. As an added bonus, if you refer to a variable declared with let outside of where it’s valid, you’ll get a ReferenceError instead of a silent undefined value. Hooray!

There’s one other interesting quirk to let that I can’t find explicitly documented. Consider: If this code had used var i, then it would print 4 four times, because the function-scoped var i means each closure is sharing the same i, whose final value is 4. With let, the output is 0 1 2 3, as you might expect, because each run through the loop gets its own i.

But wait, hang on. The semantics of a C-style for are that the first expression is only evaluated once, at the very beginning. So there’s only one let i. In fact, it makes no sense for each run through the loop to have a distinct i, because the whole idea of the loop is to modify i each time with i++. I assume this is simply a special case, since it’s what everyone expects. We expect it so much that I can’t find anyone pointing out that the usual explanation for why it works makes no sense. It has the interesting side effect that for no longer de-sugars perfectly to a while, since this will print all 4s: This isn’t a problem — I’m glad let works this way! — it just stands out to me as interesting.

Lua doesn’t need a special case here, since it uses an iterator protocol that produces values rather than mutating a visible state variable, so there’s no problem with having the loop variable be truly distinct on each run through the loop.

Classes

MDN docs — supported in Firefox 45, Chrome 42, Safari 9, Edge 13

Prototypical inheritance is pretty cool. The way JavaScript presents it is a little bit opaque, unfortunately, which seems to confuse a lot of people.
JavaScript gives you enough functionality to make it work, and even makes it sound like a first-class feature with a property outright called prototype… but to actually use it, you have to do a bunch of weird stuff that doesn’t much look like constructing an object or type. The funny thing is, people with almost any background get along with Python just fine, and Python uses prototypical inheritance! Nobody ever seems to notice this, because Python tucks it neatly behind a class block that works enough like a Java-style class. (Python also handles inheritance without using the prototype, so it’s a little different… but I digress. Maybe in another post.) The point is, there’s nothing fundamentally wrong with how JavaScript handles objects; the ergonomics are just terrible. Lo! They finally added a class keyword. Or, rather, they finally made the class keyword do something; it’s been reserved this entire time. This is all just sugar for existing features: creating a Vector function to act as the constructor, assigning a function to Vector.prototype.dot, and whatever it is you do to make a property. (Oh, there are properties. I’ll get to that in a bit.) The class block can be used as an expression, with or without a name. It also supports prototypical inheritance with an extends clause and has a super pseudo-value for superclass calls. It’s a little weird that the inside of the class block has its own special syntax, with function omitted and whatnot, but honestly you’d have a hard time making a class block without special syntax. One severe omission here is that you can’t declare values inside the block, i.e. you can’t just drop a bar = 3; in there if you want all your objects to share a default attribute. The workaround is to just do this.bar = 3; inside the constructor, but I find that unsatisfying, since it defeats half the point of using prototypes. 
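The Vector class the text refers to might look like the following; this is a sketch reconstructing the shape of the example (a dot method plus a magnitude property), not the original code:

```javascript
class Vector {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }

  // A regular method, ending up on Vector.prototype.
  dot(other) {
    return this.x * other.x + this.y * other.y;
  }

  // A getter property, computed on access.
  get magnitude() {
    return Math.sqrt(this.dot(this));
  }
}

const v = new Vector(3, 4);
console.log(v.dot(v));     // 25
console.log(v.magnitude);  // 5
```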
Properties

MDN docs — supported in Firefox 4, Chrome 5, IE 9, Safari 5.1

JavaScript historically didn’t have a way to intercept attribute access, which is a travesty. And by “intercept attribute access”, I mean that you couldn’t design a value foo such that evaluating foo.bar runs some code you wrote. Exciting news: now it does. Or, rather, you can intercept specific attributes, like in the class example above. The above magnitude definition is equivalent to: Beautiful.

And what even are these configurable and enumerable things? It seems that every single key on every single object now has its own set of three Boolean twiddles:

- configurable means the property itself can be reconfigured with another call to Object.defineProperty.
- enumerable means the property appears in for..in or Object.keys().
- writable means the property value can be changed, which only applies to properties with real values rather than accessor functions.

The incredibly wild thing is that for properties defined by Object.defineProperty, configurable and enumerable default to false, meaning that by default accessor properties are immutable and invisible. Super weird. Nice to have, though. And luckily, it turns out the same syntax as in class also works in object literals. Alas, I’m not aware of a way to intercept arbitrary attribute access.

Another feature along the same lines is Object.seal(), which marks all of an object’s properties as non-configurable and prevents any new properties from being added to the object. The object is still mutable, but its “shape” can’t be changed. And of course you can just make the object completely immutable if you want, via setting all its properties non-writable, or just using Object.freeze(). I have mixed feelings about the ability to irrevocably change something about a dynamic runtime.
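Spelled out with Object.defineProperty, a magnitude accessor like the one in the class example might look like this (a sketch; the Vector shape is assumed from the earlier example):

```javascript
function Vector(x, y) {
  this.x = x;
  this.y = y;
}

Object.defineProperty(Vector.prototype, "magnitude", {
  get: function () {
    return Math.sqrt(this.x * this.x + this.y * this.y);
  },
  // These default to false, so an accessor is invisible and
  // unreconfigurable unless you say otherwise.
  configurable: true,
  enumerable: true,
});

const v = new Vector(3, 4);
console.log(v.magnitude);  // 5

// Object.freeze() makes an object completely immutable.
const frozen = Object.freeze({ answer: 42 });
try { frozen.answer = 0; } catch (e) { /* throws in strict mode */ }
console.log(frozen.answer);  // 42
```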
It would certainly solve some gripes of former Haskell-minded colleagues, and I don’t have any compelling argument against it, but it feels like it violates some unwritten contract about dynamic languages — surely any structural change made by user code should also be able to be undone by user code?

Slurpy arguments

MDN docs — supported in Firefox 15, Chrome 47, Edge 12, Safari 10

Officially this feature is called “rest parameters”, but that’s a terrible name, no one cares about “arguments” vs “parameters”, and “slurpy” is a good word. Bless you, Perl. Now you can call foo with as many arguments as you want, and every argument after the second will be collected in args as a regular array. You can also do the reverse with the spread operator: It even works in array literals, even multiple times: Apparently there’s also a proposal for allowing the same thing with objects inside object literals.

Default arguments

MDN docs — supported in Firefox 15, Chrome 49, Edge 14, Safari 10

Yes, arguments can have defaults now. It’s more like Sass than Python — default expressions are evaluated once per call, and later default expressions can refer to earlier arguments. I don’t know how I feel about that but whatever. Also, unlike Python, you can have an argument with a default and follow it with an argument without a default, since the default default (!) is and always has been defined as undefined. Er, let me just write it out.

Arrow functions

MDN docs — supported in Firefox 22, Chrome 45, Edge 12, Safari 10

Perhaps the most humble improvement is the arrow function. It’s a slightly shorter way to write an anonymous function. An arrow function does not set this or some other magical values, so you can safely use an arrow function as a quick closure inside a method without having to rebind this. Hooray! Otherwise, arrow functions act pretty much like regular functions; you can even use all the features of regular function signatures.
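A few of these signature features in one place; a sketch, with foo and pad as made-up names:

```javascript
// Rest ("slurpy") parameters collect trailing arguments into an array.
function foo(a, b, ...args) {
  return args;
}
console.log(foo(1, 2, 3, 4, 5));  // [ 3, 4, 5 ]

// The spread operator does the reverse, in calls and in array literals,
// even multiple times.
const parts = [2, 3];
const combined = [1, ...parts, 4, ...parts];
console.log(combined);  // [ 1, 2, 3, 4, 2, 3 ]

// Defaults are evaluated per call, and later ones can see earlier arguments.
function pad(text, width = text.length + 2) {
  return text.padStart(width);
}
console.log(pad("hi"));  // "  hi"

// Arrow functions don't rebind `this`, handy for quick closures.
const squares = [1, 2, 3].map(n => n * n);
console.log(squares);  // [ 1, 4, 9 ]
```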
Arrow functions are particularly nice in combination with all the combinator-style array functions that were added a while ago, like Array.forEach.

Symbol

MDN docs — supported in Firefox 36, Chrome 38, Edge 12, Safari 9

This isn’t quite what I’d call an exciting feature, but it’s necessary for explaining the next one. It’s actually… extremely weird. symbol is a new kind of primitive (like number and string), not an object (like, er, Number and String). A symbol is created with Symbol('foo'). No, not new Symbol('foo'); that throws a TypeError, for, uh, some reason. The only point of a symbol is as a unique key. You see, symbols have one very special property: they can be used as object keys, and will not be stringified. Remember, only strings can be keys in JavaScript — even the indices of an array are, semantically speaking, still strings. Symbols are a new exception to this rule. Also, like other objects, two symbols don’t compare equal to each other: Symbol('foo') != Symbol('foo').

The result is that symbols solve one of the problems that plagues most object systems, something I’ve talked about before: interfaces. Since an interface might be implemented by any arbitrary type, and any arbitrary type might want to implement any number of arbitrary interfaces, all the method names on an interface are effectively part of a single global namespace. I think I need to take a moment to justify that. If you have IFoo and IBar, both with a method called method, and you want to implement both on the same type… you have a problem. Because most object systems consider “interface” to mean “I have a method called method“, with no way to say which interface’s method you mean. This is a hard problem to avoid, because IFoo and IBar might not even come from the same library. Occasionally languages offer a clumsy way to “rename” one method or the other, but the most common approach seems to be for interface designers to avoid names that sound “too common”.
You end up with redundant mouthfuls like IFoo.foo_method. This incredibly sucks, and the only languages I’m aware of that avoid the problem are the ML family and Rust. In Rust, you define all the methods for a particular trait (interface) in a separate block, away from the type’s “own” methods. It’s pretty slick. You can still do obj.method(), and as long as there’s only one method among all the available traits, you’ll get that one. If not, there’s syntax for explicitly saying which trait you mean, which I can’t remember because I’ve never had to use it. Symbols are JavaScript’s answer to this problem. If you want to define some interface, you can name its methods with symbols, which are guaranteed to be unique. You just have to make sure you keep the symbol around somewhere accessible so other people can actually use it. (Or… not?) The interesting thing is that JavaScript now has several of its own symbols built in, allowing user objects to implement features that were previously reserved for built-in types. For example, you can use the Symbol.hasInstance symbol — which is simply where the language is storing an existing symbol and is not the same as Symbol('hasInstance')! — to override instanceof: Oh, and those brackets around Symbol.hasInstance are a sort of reverse-quoting — they indicate an expression to use where the language would normally expect a literal identifier. I think they work as object keys, too, and maybe some other places. The equivalent in Python is to implement a method called __instancecheck__, a name which is not special in any way except that Python has reserved all method names of the form __foo__. That’s great for Python, but doesn’t really help user code. JavaScript has actually outclassed (ho ho) Python here. Of course, obj[BobNamespace.some_method]() is not the prettiest way to call an interface method, so it’s not perfect. 
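A sketch of symbols as collision-proof keys, plus the Symbol.hasInstance override mentioned above (the Even class is illustrative):

```javascript
// Symbols are unique even with the same description.
console.log(Symbol("foo") === Symbol("foo"));  // false

// A symbol key is not stringified, so it can't collide with a string key.
const size = Symbol("size");
const obj = { size: "string key" };
obj[size] = 42;
console.log(obj.size, obj[size]);  // string key 42

// Symbol.hasInstance lets a user type override instanceof.
class Even {
  static [Symbol.hasInstance](value) {
    return typeof value === "number" && value % 2 === 0;
  }
}
console.log(4 instanceof Even);  // true
console.log(5 instanceof Even);  // false
```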
I imagine this would be best implemented in user code by exposing a polymorphic function, similar to how Python’s len(obj) pretty much just calls obj.__len__(). I only bring this up because it’s the plumbing behind one of the most incredible things in JavaScript that I didn’t even know about until I started writing this post. I’m so excited oh my gosh. Are you ready? It’s:

Iteration protocol

MDN docs — supported in Firefox 27, Chrome 39, Safari 10; still experimental in Edge

Yes! Amazing! JavaScript has first-class support for iteration! I can’t even believe this. It works pretty much how you’d expect, or at least, how I’d expect. You give your object a method called Symbol.iterator, and that returns an iterator. What’s an iterator? It’s an object with a next() method that returns the next value and whether the iterator is exhausted. Wait, wait, wait a second. Hang on. The method is called next? Really? You didn’t go for Symbol.next? Python 2 did exactly the same thing, then realized its mistake and changed it to __next__ in Python 3. Why did you do this? Well, anyway.

My go-to test of an iterator protocol is how hard it is to write an equivalent to Python’s enumerate(), which takes a list and iterates over its values and their indices. In Python it looks like this: It’s super nice to have, and I’m always amazed when languages with “strong” “support” for iteration don’t have it. Like, C# doesn’t. So if you want to iterate over a list but also need indices, you need to fall back to a C-style for loop. And if you want to iterate over a lazy or arbitrary iterable but also need indices, you need to track it yourself with a counter. Ridiculous.

Here’s my attempt at building it in JavaScript. Incidentally, for..of (which iterates over a sequence, unlike for..in which iterates over keys — obviously) is finally supported in Edge 12. Hallelujah. Oh, and let [i, value] is destructuring assignment, which is also a thing now and works with objects as well.
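An enumerate() written directly against the iteration protocol might look like this; a reconstruction in the spirit of the attempt described above, not the post's original code:

```javascript
// enumerate() returns an object whose Symbol.iterator method wraps the
// inner iterator and yields [index, value] pairs.
function enumerate(iterable) {
  return {
    [Symbol.iterator]() {
      const inner = iterable[Symbol.iterator]();
      let i = 0;
      return {
        next() {
          const result = inner.next();
          if (result.done) {
            return { value: undefined, done: true };
          }
          return { value: [i++, result.value], done: false };
        },
      };
    },
  };
}

const pairs = [];
for (const [i, value] of enumerate(["a", "b", "c"])) {
  pairs.push(i + ":" + value);
}
console.log(pairs);  // [ '0:a', '1:b', '2:c' ]
```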
You can even use the splat operator with it! Like Python! (And you can use it in function signatures! Like Python! Wait, no, Python decided that was terrible and removed it in 3…) It’s a Halloween miracle. 🎃

Generators

MDN docs — supported in Firefox 26, Chrome 39, Edge 13, Safari 10

That’s right, JavaScript has goddamn generators now. It’s basically just copying Python and adding a lot of superfluous punctuation everywhere. Not that I’m complaining. Also, generators are themselves iterable, so I’m going to cut to the chase and rewrite my enumerate() with a generator. Amazing. function* is a pretty strange choice of syntax, but whatever? I guess it also lets them make yield only act as a keyword inside a generator, for ultimate backwards compatibility.

JavaScript generators support everything Python generators do: yield* yields every item from a subsequence, like Python’s yield from; generators can return final values; you can pass values back into the generator if you iterate it by hand. No, really, I wasn’t kidding, it’s basically just copying Python. It’s great. You could now build asyncio in JavaScript! In fact, they did that! JavaScript now has async and await. An async function returns a Promise, which is also a built-in type now. Amazing.

Sets and maps

MDN docs for Map — MDN docs for Set — supported in Firefox 13, Chrome 38, IE 11, Safari 7.1

I did not save the best for last. This is much less exciting than generators. But still exciting. The only data structure in JavaScript is the object, a map where the keys are strings. (Or now, also symbols, I guess.) That means you can’t readily use custom values as keys, nor simulate a set of arbitrary objects. And you have to worry about people mucking with Object.prototype, yikes. But now, there’s Map and Set! Wow. Unfortunately, because JavaScript, Map couldn’t use the indexing operators without losing the ability to have methods, so you have to use a boring old method-based API.
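The generator rewrite of enumerate() mentioned above, plus a taste of the Map/Set method API; a sketch:

```javascript
// Generators are themselves iterable, so enumerate() collapses to this.
function* enumerate(iterable) {
  let i = 0;
  for (const value of iterable) {
    yield [i++, value];
  }
}

const pairs = [];
for (const [i, value] of enumerate(["a", "b", "c"])) {
  pairs.push(i + ":" + value);
}
console.log(pairs);  // [ '0:a', '1:b', '2:c' ]

// Map and Set use a method-based API rather than indexing operators.
const m = new Map([["one", 1], ["two", 2]]);
console.log(m.get("two"));                // 2
console.log(new Set([1, 1, 2, 3]).size);  // 3
```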
But Map has convenient methods that plain objects don’t, like entries() to iterate over pairs of keys and values. In fact, you can use a map with for..of to get key/value pairs. So that’s nice. Perhaps more interesting, there’s also now a WeakMap and WeakSet, where the keys are weak references. I don’t think JavaScript had any way to do weak references before this, so that’s pretty slick. There’s no obvious way to hold a weak value, but I guess you could substitute a WeakSet with only one item.

Template literals

MDN docs — supported in Firefox 34, Chrome 41, Edge 12, Safari 9

Template literals are JavaScript’s answer to string interpolation, which has historically been a huge pain in the ass because it doesn’t even have string formatting in the standard library. They’re just strings delimited by backticks instead of quotes. They can span multiple lines and contain expressions. Someone decided it would be a good idea to allow nesting more sets of backticks inside a ${} expression, so, good luck to syntax highlighters. However, someone also had the most incredible idea ever, which was to add syntax allowing user code to do the interpolation — so you can do custom escaping, when absolutely necessary, which is virtually never, because “escaping” means you’re building a structured format by slopping strings together willy-nilly instead of using some API that works with the structure. It’s a shame this feature is in JavaScript, the language where you are least likely to need it.

Trailing commas

Remember how you couldn’t do this for ages, because ass-old IE considered it a syntax error and would reject the entire script? Well now it’s part of the goddamn spec and if there’s anything in this post you can rely on, it’s this. In fact you can use AS MANY GODDAMN TRAILING COMMAS AS YOU WANT. But only in arrays. Apparently that has the bizarre side effect of reserving extra space at the end of the array, without putting values there.
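Template literals and the trailing-comma quirk in a few lines; a sketch:

```javascript
// Backtick strings can span lines and embed expressions.
const who = "world";
const greeting = `hello, ${who}!
this string spans two lines`;
console.log(greeting.split("\n")[0]);  // hello, world!
console.log(`1 + 1 = ${1 + 1}`);       // 1 + 1 = 2

// Extra commas beyond the single trailing one create empty slots,
// which is why the array's length grows without values being added.
const holes = [1, 2, 3, , ,];
console.log(holes.length);  // 5
```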
And more, probably

Like strict mode, which makes a few silent “errors” be actual errors, forces you to declare variables (no implicit globals!), and forbids the completely bozotic with block. Or String.trim(), which trims whitespace off of strings. Or… Math.sign()? That’s new? Seriously? Well, okay. Or the Proxy type, which lets you customize indexing and assignment and calling. Oh. I guess that is possible, though this is a pretty weird way to do it; why not just use symbol-named methods? You can write Unicode escapes for astral plane characters in strings (or identifiers!), as \u{XXXXXXXX}. There’s a const now? I extremely don’t care, just name it in all caps and don’t reassign it, come on.

There’s also a mountain of other minor things, which you can peruse at your leisure via MDN or the ECMAScript compatibility tables (note the links at the top, too).

That’s all I’ve got. I still wouldn’t say I’m a big fan of JavaScript, but it’s definitely making an effort to clean up some goofy inconsistencies and solve common problems. I think I could even write some without yelling on Twitter about it now. On the other hand, if you’re still stuck supporting IE 10 for some reason… well, er, my condolences.
https://noise.getoto.net/tag/yelling/
#include <hallo.h>
* Tom [Sat, Jul 03 2004, 02:16:11AM]:

> Today, I thought I'd give the Mono/Gtk# tutorial a shot, but
> compiling the very first tiny example yields errors (about not finding
> the assemblies 'gtk-sharp.dll' and 'glib-sharp.dll').

Don't follow deprecated guides. This method did work with alpha versions
of Mono, when all .dll and .so files were thrown into /usr/lib, but it
does not work anymore. Nowadays, run:

mcs helloworld.cs /pkg:gtk-sharp

> By now, I must have installed anything available in sid. The gtk-sharp
> package itself refers to both libgtk-cil and libglib-cil. Those are
> installed just fine.

Yup, better use Sid (and wait till tomorrow, when Mono 1.0 appears on
mirrors).

Regards,
Eduard.

--
If slaughterhouses had glass windows, everyone would be a vegetarian.
-- Paul McCartney
https://lists.debian.org/debian-user/2004/07/msg00338.html
You can modify the types that ProxyGen.exe generates in the proxy code file by changing elements and attributes in the proxy descriptor file. You must specify at least one proxy type as an entry point for the object model of the host application. You can also make other optional changes to proxy types, such as hiding or renaming types or members, changing the type of return values, or applying new attributes. These kinds of changes modify the object model that is exposed to add-in developers without changing the object model source code in the host application. All changes you make to the proxy descriptor file must conform to the XML schema file ProxyGenDescriptorv2.xsd. For more information, see ProxyGen Descriptor Schema Reference. An entry point is a type that is instantiated when the host application loads an add-in. The entry point also contains methods that can be called by the host application to initialize the add-in. Typically, the entry point is a proxy for a host type that lives at the top of the object model hierarchy, such as an Application class that represents the host application, or a Document class that represents a document in the application. For proxies that are generated from managed assemblies, you can specify a class or an interface as an entry point. For proxies that are generated from COM type libraries, you can specify only classes as entry points. If your object model is implemented in multiple assemblies or type libraries, you do not have to specify an entry point in every proxy descriptor file that you generate. However, at least one of the proxy descriptor files must have an entry point (although you can optionally specify entry points in multiple descriptor files). An entry point is also the base class for a host item in the add-in developer's project. 
A host item is a class that provides a starting point for add-in developers to write their code, and is the class that add-in developers use to obtain all other objects that they use to automate the host application. For more information about host items, see Creating Project Templates Using the Project Template Generation Tool (Projectgen.exe).

To specify a class or interface as an entry point, set the isAddInEntryPoint attribute to true for the element in the proxy descriptor file that represents the class or interface. Then, run ProxyGen.exe again and pass in the revised proxy descriptor file as an input parameter to generate the proxy code file. For more information, see Creating Proxies. In the generated proxy code file, ProxyGen.exe generates an entry point class that can be discovered and instantiated by the host application. For more information, see Architecture of Generated Proxy Code. For walkthroughs that demonstrate how to specify an entry point, see Walkthrough: Creating a Proxy Assembly.

You can make the following optional changes to the proxy descriptor file. These changes specify how the proxy types and members are generated in the proxy code file.

- To change the namespace of a type, use the newNamespace attribute of the element that represents the type.
- To hide a type or member from the proxy assembly, use the isExcluded attribute of the element that represents the type or member.
- To rename a type or member, use the newName attribute of the element that represents the type or member.
- To add a custom attribute to a return value, add a new ReturnValueAttribute element to a parent element that represents the member that has the return value. To add an attribute to a type or a member, add a new Attribute element to a parent element that represents the type or member (that is, a class, delegate, enumeration, event, exception, interface, method, parameter, property, or struct).
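Put together, a descriptor fragment using these changes might look like the following. The element names here are illustrative assumptions (only the attributes named in the text, isAddInEntryPoint, isExcluded, newName, and newNamespace, come from the documentation), and a real file must still validate against ProxyGenDescriptorv2.xsd:

```xml
<!-- Hypothetical fragment: element names are guesses, attribute names
     are the documented ones. -->
<Class name="MyApp.Application"
       isAddInEntryPoint="true"
       newNamespace="MyApp.Proxies">
  <!-- Hide an internal method from add-in developers. -->
  <Method name="InternalHelper" isExcluded="true" />
  <!-- Expose the Caption property under a friendlier name. -->
  <Property name="Caption" newName="Title" />
</Class>
```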
http://msdn.microsoft.com/en-us/library/cc175950.aspx
Minor important bug: custom color schemes are not sorted in alphabetical order in menu.

I think this is a bug unless I misunderstand the API: when calling view.show(placeToGo) there is no refresh; as soon as you move the cursor or scroll the view (even a pixel), everything refreshes. The only workaround I've found is to set "scroll past the end" to true and manually scroll the view a little. Below is an example of the problem.

[code]
import sublime_plugin


class NextInterestingThing(sublime_plugin.WindowCommand):
    rex = '(^[ \t](class|def)[ \t]+\w+[ \t](.*$|(?![\w\W])|\Z|\n\n\w)'

    def run(self):
        view = self.window.active_view()
        # get current line
        curPoint = view.line(view.sel()[0].a).b
        found = view.find(self.rex, curPoint)
        placeToGo = view.line(found.b)
        if placeToGo:
            view.sel().clear()
            view.sel().add(placeToGo)
            view.show(placeToGo)
            for delta in [1, -1]:
                x, y = view.viewport_position()
                view.set_viewport_position((x, y + delta))
[/code]

Hey John, any chance you can fix this one here: Thanks.

-- Felipe.

Significant indeed. Are the python console and other panels rendering via the same mechanisms? Maybe it's something theme specific (I'm on Phoenix Dark) but scrolling in the python console and output panels is much slower compared to regular views.

It's still controlled by the same setting, gpu_window_buffer, but the default value of auto now applies to more displays.

I suspect I have no control over the "Look Up" functionality, unfortunately.

Performance
https://forum.sublimetext.com/t/build-3010/8723/10
Delete Row from NumPy Array that contains a specific value

In this tutorial, we will be learning how to delete a row from a NumPy array that contains a specific value in Python. (NumPy stands for Numerical Python.) You may be thinking, "This looks so easy". Yes, you may feel that way, but it is a bit tricky while programming, and for that you need to be aware of some NumPy array functions. If you are not aware, relax, because I will cover those for you. So let's get started.

Remove row from NumPy Array containing a specific value in Python

First of all, we need to import NumPy in order to perform the operations.

import numpy as np

You may or may not write "as your_name"; it is done so that we do not have to write numpy again and again in our code, and your_name can be anything you like.

Next, using NumPy, we need to create a 2D array, which is nothing but multiple lists, and we need to store our array in a variable, let's say arr.

arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

array() is a function used to create an array which contains multiple lists separated by commas. If you do not know about creating 2D lists in Python, learn here.

Now, let's declare a variable val that contains the value whose row is to be deleted.

val = 3

Now we need to iterate over the array arr to find our value inside it, collecting the index of every row that contains it. We break out of the inner loop after the first match so a row is only recorded once, and we do not delete inside the loop, because deleting a row mid-iteration shifts the remaining indices.

rows_to_delete = []
for i in range(arr.shape[0]):
    for x in arr[i]:
        if x == val:
            rows_to_delete.append(i)
            break

After we find the rows, we need to delete them. For that, we use the delete() function, which takes 3 arguments:

- Array_name
- Index (or list of indices) of the containing rows.
- axis (If we do not mention axis then our array gets flattened, i.e. converted into a 1D array.) If axis=0, it means we are choosing row-wise deletion, and if axis=1, then it means column-wise deletion.

Let's see how to do it.

arr = np.delete(arr, rows_to_delete, 0)

We replaced our existing array arr with a new array which does not have the rows containing the value val=3. Alternatively, you can name your new array something else.

So, now we have an array which does not have the row containing the value 3. Finally, we print our array to see the required output.

print(arr)

Let's see how our code looks.

import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
val = 3

rows_to_delete = []
for i in range(arr.shape[0]):
    for x in arr[i]:
        if x == val:
            rows_to_delete.append(i)
            break

arr = np.delete(arr, rows_to_delete, 0)
print(arr)

Output:

[[4 5 6]
 [7 8 9]]
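The loop works, but the same filtering is usually done with a boolean mask in one vectorized step; a sketch of that alternative:

```python
import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
val = 3

# Keep only the rows in which no element equals val.
filtered = arr[~(arr == val).any(axis=1)]
print(filtered)
# [[4 5 6]
#  [7 8 9]]
```

This avoids index bookkeeping entirely and handles any number of matching rows in one shot.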
https://www.codespeedy.com/delete-row-from-numpy-array-that-contains-a-specific-value-python/
This page describes how you can create new apps that receive, process, and respond to events from Google Chat:

- Receive messages and other kinds of events generated by Google Chat
- Send event responses and other messages into Google Chat

Endpoint types

Events from Google Chat are delivered to your app via an endpoint, of which there are different types:

- HTTPS endpoints present your app as a web service. You'll need to set up a web server to use as an interface for your app's implementation. Your app can respond synchronously or asynchronously to these events.
- Google Cloud Pub/Sub endpoints use a topic on Google Cloud Pub/Sub to relay an event to your app's implementation. This is useful when your implementation is behind a firewall. Apps that use pub/sub endpoints can only respond asynchronously.
- DialogFlow endpoints let your app utilize the natural language processing (NLP) capabilities of DialogFlow. Please see DialogFlow documentation for details.

For a simple, straightforward app architecture, try implementing an app using an HTTPS endpoint (a web service, essentially) that responds synchronously, always enclosing its payload in the HTTPS POST response. This approach does not involve authorization, so it doesn't need a service account. See the simple app implementation section below for an example of this style of app. You may need to take a more complex approach if your app is behind a firewall or sends unsolicited messages such as alarms or other notifications to Google Chat.

tl;dr...

A very simple app implementation

The following code implements a simple app in Python using the Flask web framework. Such an app presents an HTTPS endpoint and doesn't need to use Cloud Pub/Sub to relay events to it. And because it always returns its response payload within the JSON response, it doesn't need to authenticate using a service account.
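A minimal sketch of such an app's core logic, kept framework-independent so the Flask wiring is only a comment; the event fields used (type, space.displayName, message.text) follow the Chat event format, and everything else here is illustrative:

```python
def handle_event(event):
    """Build a synchronous JSON response payload for a Chat event."""
    if event.get("type") == "ADDED_TO_SPACE":
        name = event.get("space", {}).get("displayName", "this space")
        return {"text": 'Thanks for adding me to "{}"!'.format(name)}
    if event.get("type") == "MESSAGE":
        return {"text": "You said: {}".format(event["message"]["text"])}
    # REMOVED_FROM_SPACE and anything else: nothing to post back.
    return {}

# The Flask wiring (assuming Flask is installed) would look roughly like:
#
#   from flask import Flask, request, jsonify
#   app = Flask(__name__)
#
#   @app.route("/", methods=["POST"])
#   def on_event():
#       return jsonify(handle_event(request.get_json()))
```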
Handling events from Google Chat

This section describes how to receive and process events that your app receives from Google Chat.

Registering the app

Before your app can receive events from Google Chat, you must specify its endpoint in the Chat API configuration tab when you publish your app. Once you've registered the endpoint and published your app, Google Chat will recognize events addressed to your app and dispatch them to the specified endpoint.

Verifying app authenticity

Once you've registered your HTTPS app, you need a way for your implementation to verify that the request is actually coming from Google. Google Chat includes a bearer token in the Authorization header of every HTTPS request to an app. The audience of the token is the target app's project number from the Google API Console. For example, if the request is for an app with the project number 1234567890, then the audience is 1234567890. You should verify that the request is coming from Google and is intended for the target app. If the token doesn't verify, the app should reject the request.

The Java and Python verification examples both begin by defining the issuer, certificate URL, and audience:

public class JWTVerify {
  // Bearer Tokens received by apps will always specify this issuer.
  static String CHAT_ISSUER = "chat@system.gserviceaccount.com";
  // Url to obtain the public certificate for the issuer.
  static String PUBLIC_CERT_URL_PREFIX = "";
  // Intended audience of the token, which will be the project number of the app.
  static String AUDIENCE = "1234567890";
  // Get this value from the request's Authorization header.
}

# Bearer tokens received by apps will always specify this issuer.
CHAT_ISSUER = 'chat@system.gserviceaccount.com'
# Url to obtain the public certificate for the issuer.
PUBLIC_CERT_URL_PREFIX = ''
# Intended audience of the token, which will be the project number of the app.
AUDIENCE = '1234567890'
# Get this value from the request's Authorization header.

Processing events from Google Chat

When your app receives an event from Google Chat, what it does with that event is completely implementation dependent.
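Returning to the token-verification step: once a JWT library (for example, google-auth) has validated the token's signature, the remaining check reduces to comparing issuer and audience. A sketch over an already-decoded claims dict (the dict shape is an assumption):

```python
# Bearer tokens received by apps will always specify this issuer.
CHAT_ISSUER = "chat@system.gserviceaccount.com"

def claims_are_valid(claims, project_number):
    """Check decoded JWT claims: right issuer, audience == our project number."""
    return (
        claims.get("iss") == CHAT_ISSUER
        and claims.get("aud") == str(project_number)
    )

# Signature verification itself needs a JWT library; only the claim
# comparison is shown here.
```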
The app may look up some information from a data source, record the event information, or just about anything else. This processing behavior is essentially what defines the app. In most cases, an app will not only process the information contained in the event, but will generate a response back to the thread that issued the event. The following diagram describes a typical interaction with an app in a Chat space:

There are three kinds of events shown in the above diagram: ADDED_TO_SPACE, MESSAGE, and REMOVED_FROM_SPACE. An app can't respond after being removed from a space, but it can respond to the other two types.

Responding synchronously

An app can respond to an event synchronously by returning a JSON-formatted message payload in the HTTPS response. The deadline for a synchronous response is 30 seconds. A synchronous response from an app is always posted in the thread that generated the event to the app.

Responding asynchronously

If an app responds asynchronously, it needs to authenticate; apps that don't use service accounts cannot respond asynchronously.

Retry

If an HTTPS request to your app fails (e.g. timeout, temporary network failure, or a non-2xx HTTPS status code), Google Chat will additionally retry delivery twice, with at least a ten-second delay between each retry. As a result, an app may receive the same message up to three times in certain situations. No retry is attempted if the request completes successfully but returns an invalid message payload.

App-initiated messages

This section describes how apps can send arbitrary messages into a space. Many apps send messages only in direct response to an event that they receive from Google Chat. However, some apps send messages on their own initiative, such as notifications. To post into an existing thread, an app can use the thread ID it receives from Google Chat. Keep track of this ID so that the app can inject messages into the thread.

As a new thread

To send a message into Google Chat as a new thread, your app … In many cases, apps may want to post multiple messages related to the same entity into the same thread.
For example, a bug tracker integration may want to post all notification messages related to the same bug into the same thread. To achieve this, apps can specify an arbitrary thread key in each request. Messages posted with the same thread key will be grouped into the same thread. For example, the bug tracker integration above might use the bug ID as part of a consistent thread key. The first notification message for a bug will then create a new thread; all subsequent messages for the same bug will be posted into that same thread. The thread key is specified in the threadKey query parameter in an inbound HTTPS request. For instance: …?threadKey=ARBITRARY_STRING Thread keys are also scoped to a specific app; if two different apps happen to both post messages using the same thread key, those two messages will not be grouped into the same thread.
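The threadKey mechanics for the bug-tracker example can be illustrated by building such a request URL. The base endpoint below is a placeholder (the full messages URL is elided in this page), and the "bug-<id>" scheme is just one possible consistent key:

```python
from urllib.parse import urlencode

def bug_message_url(base, bug_id):
    """Build a message-post URL whose threadKey is derived from the
    bug ID, so every notification for one bug lands in one thread.

    `base` stands in for the elided messages endpoint; the "bug-<id>"
    key scheme is an illustrative choice, not part of the API.
    """
    return base + "?" + urlencode({"threadKey": "bug-{}".format(bug_id)})

# Repeated posts for bug 4711 all carry threadKey=bug-4711 and are
# therefore grouped into the same thread by Google Chat.
print(bug_message_url("<messages-endpoint>", 4711))
```

Because the key is deterministic in the bug ID, the integration never needs to store thread IDs itself: the first post creates the thread, later posts join it.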
https://developers.google.cn/chat/how-tos/apps-develop?hl=ru
What kind of machine do you need? We may have one we can bring. If you need gear I'll need a full spec of what you require.

Simon

On 8/25/06 4:56 PM, "Jim Fehlig" <jfehlig@xxxxxxxxxx> wrote:

> All,
>
> Find my latest work on the summit presentation attached. The diagram on
> slide 11 does not include CIM_VSSD or recorded RASD - it was getting
> cluttered with these components and their associations. I think the
> diagram illustrates the concept of resource allocation as is, but will
> add some of the finer points if desired. Perhaps we should have 1 slide
> containing a busy diagram that shows the model in action for an active
> vm. It would bring together all of the 'component' diagrams on previous
> slides.
>
> Gareth - I don't have source for diagram on slide 8. You will take care
> of this one?
>
> Currently for demo I've been thinking along the lines of:
> - Connect to cimom and look in Interop namespace for our profiles
> - Follow ElementConformsToProfile to find host (OMC) CS.
> - Look at pools associated with host CS
> - Look at host resources contained in pool
> - Look at virtual resources (and RASD) allocated from pool
> - Enumerate virtual CS running on host CS by follow HostedDependency
> - Define a virtual CS (will have to use cli for this one as none of the
>   free GUI clients support embedded instances)
> - Enumerate virtual CS - see new defined but disabled CS
> - Invoke RequestStateChange() on new CS to move to enabled state
> - Pause/unpause via RequestStateChange()
> - ??
>
> Feel free to add suggestion for the demo as well. Still lots of work.
> Would like to have this solidified by the end of next week.
>
> BTW, I do not have a laptop capable of running this demo :-(. My plan
> is to bring along a workstation-sized machine loaded with the demo.
> Hopefully I can get access to a monitor on site so I don't have to drag
> that along as well.
>
> Jim
> _______________________________________________
> Xen-cim mailing list
> Xen-cim@xxxxxxxxxxxxxxxxxxx

Simon

--
Simon Crosby XenSource    m: +1 415 819 1965
2300 Geng Road # 250      o: +1 650 798 5936
Palo Alto, CA 94303       skype: simoncrosby

_______________________________________________
Xen-cim mailing list
Xen-cim@xxxxxxxxxxxxxxxxxxx
http://old-list-archives.xen.org/xen-cim/2006-08/msg00019.html
[1.1.4 PyJnius] How to set 'similar' from default 0.7 to 1.0?

My Script:

import os, sys
# https:/
# https:/
# http://
# https:/
# Windows 10 Screen Scale > 100% : [error] RobotDesktop: checkMousePosition: should be L[2880,540]@S(1)...
# https:/
# Solution: Scale back to 100% and remake all Screenshots if not done with 100% Scale
jarpath = ';' + os.path.
if not 'CLASSPATH' in os.environ:
    os.
else:
    os.
from jnius import autoclass
App = autoclass(
Screen = autoclass(
Pattern = autoclass(
Finder = autoclass(
Settings = autoclass(
Debug = autoclass(
user = 'DOP'
ext_drive = 'E'

class Sikuli:
    def start_app(self, path, app):
        app = App(os.
        app.open()

    def click_image(self, image, timeout=5):
        screen = Screen()
        # Debug.info("Screen: %s", screen)
        # pat = Pattern(image)
        # pat.similar(1.0)  # Throws an error: TypeError: 'float' object is not callable
        # pat.similar = 1.0  # Throws an error: AttributeError: can't set attribute
        # screen.exists(pat, 0)

My Question: How can I set the Similar Sikuli Attribute / Setting from its default 0.7 Value to 1.0?

Question information
- Language: English
- Status: Solved
- For: Sikuli
- Assignee: No assignee
- Solved by: Dirk Schiller
- Solved: 2019-02-06
- Last query: 2019-02-06
- Last reply: 2019-02-06

I want to give a few comments on your approach.

---

# http:// — this link is no longer supported and does not reflect the actual version. The actual javadoc for 1.1.4: https:/

BE aware: it is always a good idea to first look into the docs (https:/

---

from jnius import autoclass

I do not recommend using jnius: it is intended to bridge Java APIs from C-Python. SikuliX is Java based and uses Jython as interpreter (language level 2.7). As such, you have direct access to all Java classes by just using import ... if you use Jython either implicitly via the SikuliX IDE (sikulix.jar) or with plain Jython and sikulixapi.jar on the classpath ...
and if you use Jython, then there is no need for jnius, since the relevant classes are already pre-imported when you run your scripts obeying the SikuliX rules for Python scripting. Come back if you have more questions that might not be obvious when looking into the docs.

One more thing: please make a new question for a new topic. Do not change an existing question to ask different things. In doubt use the comment to add information. I get notified immediately by every change to every post on Launchpad and usually answer within 24 hours. All the best. But finally of course it is your decision how to get on the road.

I exactly tried those things, but as I wrote in my script (see above) it doesn't work. Any other solutions? Cheers, Dirk

Ok, I understand. Thanks for your answers. Unfortunately I cannot use Python 2.7 and for that I cannot use Jython; it doesn't support Python 3.7.x. Thanks for everything. It was a try to use your framework. Cheers, Dirk

ok, understood. Do you know that? https:/

Hey RaiMan, no I don't know it. I will try it as well, thanks a lot. At the moment I am also trying Jython and Sikuli but don't have luck so far.

Found a solution for the '# pat.similar(1.0) # Throws an error: TypeError: 'float' object is not callable': In 'C:\Users\

if name != 'getClass' and bean_getter(name) and len(method.

to:

if name != 'getClass' and bean_getter(name) and len(method.
    if lowername not in methods_name:

see the docs for the Python scripting level (comment on Java API in Python scripts later): https://sikulix-2014.readthedocs.io/en/latest/pattern.html#pattern
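As an aside, the two errors quoted in the question ('float' object is not callable and can't set attribute) are exactly what happens when a Java bean getter is exposed as a read-only Python property that shadows a method of the same name, which is what the jnius mapping did with Pattern's getSimilar()/similar(float). A small pure-Python model (no Sikuli or jnius required) reproduces both errors:

```python
class JniusLikePattern:
    """Models how jnius mapped SikuliX's Pattern: the getSimilar()
    bean getter became a read-only property named `similar`, hiding
    the similar(float) method. Illustrative model, not the binding.
    """

    def __init__(self):
        self._sim = 0.7  # SikuliX's default minimum similarity

    @property
    def similar(self):
        return self._sim

pat = JniusLikePattern()
try:
    pat.similar(1.0)   # property returns 0.7, then 0.7(1.0) fails
except TypeError as e:
    print("call:", e)  # 'float' object is not callable
try:
    pat.similar = 1.0  # property has no setter
except AttributeError as e:
    print("set:", e)
```

This is why the fix quoted above patches jnius's getter-detection logic: once the property no longer shadows the method, `pat.similar(1.0)` works as in plain Jython.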
https://answers.launchpad.net/sikuli/+question/678376
On Wed, 13 Jul 2011 18:05:45 -0300 Mauro Carvalho Chehab wrote:

> Em 10-07-2011 16:51, Randy Dunlap escreveu:
> > ...
>
> Hi Randy,
>
> After applying patch 1/9 and 2/9 over 3.0-rc7+media patches, I'm
> now getting this error:
>
> drivers/media/radio/radio-aimslab.c:52:1: error: invalid suffix "x20f" on integer constant
>
> $ grep 20f .config
> CONFIG_RADIO_RTRACK_PORT=20f
>
> $ gcc --version
> gcc (GCC) 4.4.5 20110214 (Red Hat 4.4.5-6)
>
> Before this patch, this were working (or, at least, weren't producing
> any error).
>
> Perhaps the breakage on your compilation happened due to another
> patch at the tree? If so, the better would be to apply this patch

Do you suspect that? I built this patch series against the latest linux-next (20110707), so it should contain media patches as of that date.

> series together with the ones that caused the breakage, to avoid
> bisect troubles.

Sure, if we know what patch it is (if there indeed is one). Can you do:

$ make drivers/media/radio/radio-aimslab.i

and tell me what this line contains for you? Mine says:

static int io = 0x20f;

> > )
> > +
> > #endif /* !__LINUX_STRINGIFY_H */

--
~Randy
*** Remember to use Documentation/SubmitChecklist when testing your code ***
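For reference, the error comes down to integer-literal lexing: without the 0x prefix, `20f` is read as the integer 20 followed by an invalid suffix `f`, while with the prefix the same digits form a hexadecimal constant. A quick illustration of the value the config intended:

```python
# CONFIG_RADIO_RTRACK_PORT=20f lacks the 0x prefix, so the C compiler
# sees an integer constant with an invalid suffix. With the prefix:
print(0x20f)           # -> 527, a valid hexadecimal literal
print(int("20f", 16))  # -> 527, the same digits parsed as base 16
```

So the Kconfig value needs to be emitted (or written) as 0x20f for the stringified constant to compile.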
https://lkml.org/lkml/2011/7/13/310
NAME

levee - A Screen Oriented Editor.

SYNOPSIS

levee [+address] [file ...]

DESCRIPTION

Levee is a screen oriented editor based on the Unix editor "vi". It provides a terse, powerful way to enter and edit text (however, if you want a word-processor, you're better off with WordStar.) Levee is a moded editor. It operates in 3 modes -- visual, command, and insert. Most of the editing work is done in visual mode, file reading and writing are done in command mode, and insert mode does what you would expect. When you enter Levee, you may specify an address to start editing at. These addresses are in the same format as command mode addresses, except that a naked + will put you at the very end of the file. Levee is copyright (c) 1982-2008 by David L. Parsons. (see the notice at the end of this document for distribution terms)

COMMAND MODE COMMANDS

These commands are used for editing new files, writing modified files, changing options, doing substitutions, and a subset of the visual commands. They take as input whole lines, terminated by return (to execute), or escape (to abort.) Command mode is reached by typing ":" or "Q" from visual mode. If you enter command mode by typing ":", Levee will execute one command, then return to visual mode after prompting you with "[more]". If you type anything except a space or return, Levee will accept another command, and so forth. If, however, you enter command mode via "Q", Levee will remain in command mode until you enter the "visual" command.

A NOTE ON COMMAND SYNTAX

A command may be preceded by an optional line-range. If you do not provide a line-range, Levee will use the default line-range shown by the command. A line-range is one or two address specifications in the following format: - . - current line. - $ - last line. - 'x - the line with mark x on it. - # - line #. For example, ".-5,.+5p" will print every line within ten lines of the current line.
"$-5" is the fifth line from the end of the file, and "/end/+2" is the second line past the next occurrence of the pattern "end". Patterns may be regular expressions (see below.) Also, a naked line-range will set the current line to the first line in the range and print all the lines in that range. "1,10" sets the current line to 1, then prints lines 1 to 10. If you specify a non-existent line in a range, the command will abort and Levee will tell you "bad address".

Command mode commands

- args - show the current argument list, if one exists. The file that you are currently editing will be framed by '[' and ']'. - (.,.)change - delete lines, then enter insert mode. - (.,.)delete - delete lines. Deleted lines are stored in a Yank Buffer for later putback with "put". - edit[!] [file] - Discard the current file and start editing a new one. If changes were made to the current file, you must enter "edit!" to force Levee to discard the changes. If you do not specify a filename, Levee will try to reedit the current filename. When Levee reads in a new file, it will tell you how many bytes it read in, or [overflow] if the file is larger than the internal buffer (256000 bytes on most platforms; 20k on UCSD Pascal.) - execmode - Remain in command mode until you use the "visual" command. - file[name] - Echo what the current filename is, its status, and the current line. If you provide it with a name, it will change the filename to that. - (.)insert - Insert text above the current line. If you specify a line number, Levee will make that the current line, then insert above it.

Insert mode commands

- ^W - back over the last word you entered. - ^H - back over one character. - ^U - back over all input on this line. - ^V - escape the next character typed. (For example, ^V^H will put a ^H into the file.) - ESC - exit insert mode. - ^D - If at start of line, reduce indentation 'shiftwidth' columns. - ^T - If at start of line, increase indentation 'shiftwidth' columns.
When in insert mode, Levee will not allow you to enter any control characters except return and tab. Return ends input on this line and opens a new line for input. - map[!] [key [text]] - Define/list macros. There are 3 forms of map: - map. - This lists all the active macros. - map(key). - This shows the macro associated with (key), if any. - map(key) (text) - This maps (key) to (text). You may map any key except ":" and escape. In the normal form (map), the macro will be effective in visual mode, but in the alternate form, (map!), the macro will be effective in insert and command modes. For example, if you map!ped return to "hello world", every time you entered a return in command or visual mode, the string "hello world" would pop up. - next [file...] - Edit the next file in the arglist, or edit a new arglist. Levee takes its initial arglist off the command line when you execute it. If "autowrite" is set, Levee will write out the changes to the current file before editing the next one. - (.)open - Insert below the current line. Otherwise just like insert. - prev - Edit the previous file in the arglist. Otherwise, like next. - print - Display lines without changing the current line. - (.)put - Put the contents of the yank buffer back on the line below the current line. If you specify a line, it resets the current line, then puts the yank buffer back. The yank buffer is filled by the delete, change, or yank commands. Put does not destroy the yank buffer, so you may put back text multiple times. - quit[!] - Exit Levee. If you want to discard changes, use "quit!" - (.)read [file] - put the contents of 'file' after the current line. - rm file - Delete 'file' from disk. - set [option=value] - Set a tunable variable. Levee has a dozen or so user-definable variables which you can twiddle via this command. There are boolean, integer, and string variables that you can set. A string or integer variable is set by "set xxx=yyy", a boolean variable is set via "set xxx" or "set noxxx".
Here are the settable variables (and abbreviations): - tabsize(ts) - tab stop. - shiftwidth(sw) - columns to shift on ^D, ^T, >>, or << - scroll - number of lines to scroll on ^D, ^U - autoindent(ai) - supply indentation during insert mode. - autowrite(aw) - write out changes before :next, :prev - autocopy(ac) - make backup copies before writing changes. - list - display tabs as ^I, end of line as $. - magic - use regular expressions in searches. - suffix - if the filename does not have a . in it, supply the suffix. (this is the only string variable.) - overwrite(ow) - destroy old file first, then write. - beautify(be) - When set, Levee will not allow insert of any control character except tab and return unless you escape it with ctrl-V. - wrapscan - searches wrap around end of buffer. - ignorecase(ic) - Ignore the case of alphabetic characters during searches. - mapslash - (ST version only) Map "/" in filenames to "\". If the environment contains `mapslash' when levee is called, this variable will default to true, otherwise it defaults to false. (See the documentation for the Teeny-shell on how the teeny-shell interprets `mapslash') - lines(li) - (ST version only) How many lines on the display. This is primarily for running levee through the serial port - put set li=xx into your LVRC for a xx line terminal. - cols(co) - (ST version only) How many columns on the display. Like the lines variable, it's for running levee through the serial port. You may set multiple variables on one line, as in 'set ws noai'. To see the current settings of these variables, :set -- without any arguments -- will show the current settings. At startup, Levee looks in the environment variable LVRC for a list of variables to set (GEMDOS/MS-DOS). LVRC is one line of the form 'option=value ...'. If you have a LVRC defined that is 'ts=4 ow nows', Levee will set tabsize to 4, turn on overwrite, and turn off wrapscan. 
If you are using RMX, Levee looks in the file ":home:r?lvrc" for initialization. If you are using Osy/SWOs, Levee looks in the file "*.lvrc". The format of these files is different from the LVRC variable -- see "source" for more information. - source file - Take command mode commands from 'file'. These commands can be any legal command, except "visual". If an error happens during execution of 'file', Levee abandons that level of source'ing. In Osy/SWOs, there are a few differences in insert mode from within a sourced file. No character has special meaning except a line containing nothing but a period, which terminates insert mode. For example:

:insert
blah blah blah
blah blah blah
blah blah blah
blah blah blah
blah blah blah
blah blah blah
.
:more commands

If you are running Levee under any other operating system, you cannot do an insert from a :source file. - (.,.)substitute(delim)patt(delim)repl(delim)[qcpg] - (.,.)substitute& - Search for patt and replace it with repl. Levee will look for patt once on each line and replace it with repl. The delimiter may be any ascii character. The pattern is a regular expression, just like a search pattern. You may include parts of the pattern in the replacement string; a '&' in the replacement pattern copies in the whole source pattern, so if you do a 'sub/this/& and that/g', every instance of 'this' will be replaced with 'this and that'. Also, you may pull parts of the pattern out by using the \( and \) argument meta-characters. Arguments gotten by \( & \) are put into the replacement string everywhere you do a \1..\9 [ \1 is the first argument you set up with \( & \) ]. So, if you want to reverse the order of two substrings, you can do 'sub/\(string1\)\(string2\)/\2\1/'. substitute& redoes the last substitution. Options: - q,c - before doing the substitute, display the affected line and wait for you to type a character. If you type 'y', it will do the substitution.
'q' aborts the substitute, 'a' does the rest of the change without prompting, and 'n' does not do it. - p - print the affected lines after the change. - g - do the change globally. That is, do it for every occurrence of patt on a line, rather than just once. - undo - Undo the last modification to the file (except :edit, :next, :rm, or :write.) You can only undo the last change to a file -- undo counts as a change. :undo followed by :undo does nothing to the file. - unmap(key) - Undefine a macro (see map). - visual [list] - If you entered command mode by "Q" or "execmode", return to visual mode. If you provide an argument list, it also does a `:next' on that list. - version - Show which version of levee this is. - (.,.)write [file] - Write lines to a file. If you write everything to 'file', the filename is set to 'file', and if you do not specify a file, Levee will write to the filename. - (.,.)wq [file] - Write to a file, then quit. - (.,.)yank - Yank lines from the file into the yank buffer, for later putback with "put". - xit[!] - Write changes to the current file, then exit. If there are more files in the arglist, use "xit!" - ![command] - Execute command. This command is available only under GEMDOS, MSDOS, RMX, and Unix. - ($)= - Give the line number of the addressed line. /end/= gives you the line number of the next line with an 'end' on it.

VISUAL MODE COMMANDS

Visual mode commands move you around and modify the file. There are movement commands to move the cursor by a variety of objects. In the description, a (#) means an optional count. If a command has an optional count, it will tell you what the count does in parentheses. A (*) means that the command can be used in the delete, yank, and change commands. Counts are made up by entering digits. If you type '45', the count will be set to 45. To cancel a count, type ESC. This section discusses 'whitespace' occasionally. Whitespace is tabs, spaces, and end of line.
How the display works

Characters are displayed on the screen as you would expect, except that nonprinting characters are shown as ^x, and tabs expand to spaces (unless you set the option list, then they show as ^I.) When sitting on a control character or tab, the cursor is placed on the FIRST character displayed. If you move the cursor to any other part of them (via j or k -- see below), any changes will start at the next character. Levee does not display an end of file marker, but lines past the end of the file are denoted by ~ lines. If list is set, tabs display as ^I, and the end of line displays as $. If a line is too long for the screen, it will just disappear off the end of the screen. Levee will handle any screen resolution and any monospaced font you hand it (if you are running in low resolution, Levee will give you a 25x40 window, for example.)

Visual mode commands

- ^A - Show a debugging message at the bottom of the screen. This is not at all useful unless you are debugging the editor. Ignore it. - (#)^D - Scroll the screen down a half screen. If a count is specified, scroll down the specified number of lines. - ^E - Scroll down 1 line (shorthand for 1^D). - ^G - Show file statistics. Exactly like ':file'. - (*)(#)^H - Move the cursor left one (count) chars. - ^I - Redraw the screen. - (*)(#)^J - Move down one (count) lines. When you use ^J and ^K (below) to move up or down lines, the cursor will remain in the same column, even if it is in the middle of a tabstop or past the end of a line. - (*)(#)^K - Move up one (count) lines. - (*)(#)^L - Move right one (count) characters. - (*)(#)^M - Move to the first nonwhite space on the next line. If a count is specified, move to the first nonwhite count lines down. - (#)^U - Scroll the screen up a half page. If a count is specified, scroll up count lines. - ^Y - Scroll the screen up 1 line (shorthand for 1^U.) - (#)a - Insert text AFTER the cursor.
If you give a count, the insertion will be repeated count times (40i-ESC will give you a line of 40 dashes). The commands in insert mode are the same for visual and command mode. - (*)(#)b - Move to the beginning of the last word (the count'th word back). A word is a collection of alphanumeric characters (a-z0-9$_#) or any other nonwhite character (i.e. anything but space, tab, eoln). - c - Change an object. Change deletes an object, then enters insert mode without redrawing the screen. When you tell it the object to be changed, Levee puts a '$' on the last character of the object. You cannot change backwards. The object may be any visual mode command marked with a '(*)'. For example, 'c4l' will change the next 4 characters on the line to something else. (4cl does the same thing -- 4c4l changes the next 16 characters on this line.) 'cc' will change whole lines. When changing, deleting, or yanking an object, it will be placed into a yank buffer, where it can be retrieved by the 'p' or 'P' commands. - (#)d - Delete an object. Like 'cc', 'dd' affects whole lines. - (*)(#)e - Move to the end of the current word. - (*)(#)f(x) - Find the next (count'th) occurrence of a character on the current line. For example, if the cursor is sitting on the first character of the line 'abcdef', typing "ff" will put the cursor on the 'f'. - (*)(#)h - Move left one (count) characters. Exactly like ^H. - (#)i - Start inserting characters at the cursor. If you specify a count, the insertion will be duplicated count times. - (*)(#)j - Move down one (count) lines. Exactly like ^J. - (*)(#)k - Move up one (count) lines. Exactly like ^K. - (*)(#)l - Move right one (count) character. Exactly like ^L. - m(x) - Set the marker (x). There are 26 markers available (a-z). You may move to a marker by use of the ' or ` commands. - (*)n - Find the next occurrence of a search pattern. When you do a search with a / or ? command, Levee will remember the pattern and the direction you searched in.
'n' will search in the same direction for the pattern, 'N' searches in the opposite direction. - o - Open a line below the current line for insertion. - p - Put yanked/deleted text back after the cursor. Text is yanked by the delete (d,x,X,D), change (c,C,s,S), and yank (y,Y) commands. - (#)r(x) - Replace characters (up to end of line) with (x). '4ra' will change the next 4 characters after the cursor into 'aaaa'. - (#)s - change one (count) characters. Shorthand for (#)cl. - (*)(#)t(x) - Move up to a character on the current line. If you are on the first character of the line 'abcdef' and you type 'tf', you will end up sitting on the 'e'. - u - Undo last modification. You can undo ANY modification command except :edit, :next, :rm, or :write. (Just like :undo). - (*)(#)v - Move back to the very end of the previous (count'th) word. See 'b' for the definition of a word. - (*)(#)w - Move up to the very beginning of the next (count'th) word. - (#)x - Delete one (count) characters forward. Shorthand for (#)dl. - y - Yank an object for later use by put. 'yy' yanks whole lines. - A - Append text at the end of the line. Shorthand for $a. - (*)(#)B - Move to the beginning of the current word. Exactly like 'b'. NOTE: this is incorrect. the capitalized word movement commands should, and will in the future, be used for movement by space-delimited words. - C - Change to the end of the line. Shorthand for c$. - D - Delete to the end of the line. Shorthand for d$. - (*)(#)F(x) - Move to the first (count'th) previous occurrence of a character on the current line. If you are sitting at the end of the line 'abcdef', typing "Fa" will move you back to the 'a' at the start of the line. - (*)(#)G - Goto line. If you specify a count, Levee will move to that line, and if there is no count, Levee moves to the absolute end of the file. To get to the start of the file, type "1G". To the end, just "G". - (*)H - Move to the first nonwhite character at the top of the screen. 
- I - Insert at the end of the current line. Shorthand for $i. - (#)J - Join two (count+1) lines together. Joining appends the second line at the end of the first, putting a space between them. If the first line ends in whitespace, Levee will not put in a space. - (*)L - Move to the last nonwhite character on the last line of the screen. - (*)M - Move to the first nonwhite character in the middle of the screen. - O - Open a line above the current line. Otherwise works just like 'o'. - P - Put back the yank buffer at the cursor. Otherwise works just like 'p'. - Q - Enter and remain in command mode. Just like the command :exec. To get back to visual mode, you must enter the command ':visual'. - R - Replace mode. A limited subset of insert mode that overwrites characters up to end of line. All of the normal insert mode commands apply. If you overwrite a character, then back over it with ^H,^U, or ^W, it will reappear after you exit Replace mode. Escape exits replace mode. NOTE: due to a bug, entering a <return> in Replace mode will drop you back into visual mode with an error. The replacements you have made will remain. - S - Change characters backwards. Shorthand for (#)ch. - (*)(#)T(x) - Move back to character on current line. If you are on the last character of the line 'abcdef', typing "Ta" will move you back to the 'b'. - (*)(#)W - Move to end of word. Exactly like 'e'. - (#)X - Delete characters backwards. Shorthand for (#)dh. - Y - Yank to end of line. Shorthand for y$. - ZZ - Write changes to current file and exit if last file in arglist. Exactly like :xit. - (*)(#)$ - Move to end of line. If you give a count, move to the end of the (count-1) line down (so 2$ moves you to the end of the next line.). - 0 - Move to the beginning of the current line. Shorthand for 0|. - (#)! - Pipe an object through an external program. Like 'cc', '!!' affects whole lines. - (*)% - Find matching bracket, parenthesis, or squiggly bracket. 
If you are not sitting on a '[]{}()', Levee will search forward for one of them on the current line, then match whatever it finds. - [space] - Move to the first nonwhite character on the current line. - & - Redo last substitution command. - (*)(#){ - Move to the beginning of the count'th paragraph back. A paragraph is delimited by a blank line. - (*)(#)} - Move to the end of the count'th paragraph forward. - (*)(#)( - Move to the beginning of the count'th sentence back. A sentence is delimited by a ., a !, or a ? followed by a space, a tab, or end of line. - (*)(#)) - Move to the end of the count'th sentence forward. - (*)(#)- - Move to the (count'th) previous line, first nonwhite. - (*)(#)+ - Move to the (count'th) next line, first nonwhite. - (#)~ - Change the case of the next count characters. Upper case becomes lowercase, lowercase becomes uppercase. - (*)`(x) - Move to the exact position of mark (x). There is a special mark for some of the visual mode movement commands -- '' will move you to where you were before the last (,),',`,G,/,?,n,N command. - : - Execute one command mode command. When the command is done, it will return to visual mode if it produces one line of output, but if it scrolls the screen, Levee will prompt [more] before returning to visual mode. If you type a : in response to the [more] prompt, Levee will remain in command mode for one more command. - (#)<(#) - Shift one (count) objects left. If you specify a second count, Levee will shift the object left that many columns -- if you do not, they will be shifted shiftwidth columns. This is a nondestructive shift. If the shift would carry past the left margin, the objects will be moved up to the left margin but no farther. Like the other object movement commands, '<<' will affect whole lines. - (#)>(#) - Shift one (count) objects right. Just like <, except it will not shift objects past the right margin of the screen.
If you do shift an object past the right margin of the screen, all of its indent will be removed and it will end up by the left margin. - . - Repeat last modification command. (except undo) - (*)? - Search for pattern backwards. Escape aborts the search pattern, and an empty pattern means search for the last pattern again. - (*)/ - Search for pattern forwards. Otherwise like ?. - (#)| - Move to specified column. If you don't have a count, move to column 0.

REGULAR EXPRESSIONS

Levee gives special meanings to some characters during a pattern match. The character "." will match any one char, the character "*" will match zero or more occurrences of the previous char (so, a* will match 'a','aa','aaa', etc, or it will match nothing at all). If a pattern begins with "^", it will only match at the beginning of a line, and patterns ending with a "$" will only match at the end of a line. Brackets ('[]') have special meaning as well. They mean match any one of the characters inside the brackets. '[abc]' will match 'a', 'b', or 'c'. You may specify a range of characters inside brackets by using a dash (-). '[a-z]' will match any lowercase alphabetic character. If ^ is the first character in the bracket, it means match any character except those in the brackets. '[^abc]' will match anything except 'a','b', or 'c'. Backslash takes away special meaning for these chars, but '\t' specifies a tab, and \( & \) delimit arguments inside a pattern (used only by :substitute.) The patterns \< and \> have special meaning, too; they match the start and end of alpha-numeric tokens. If you turn off the editor variable 'magic', none of the above characters will have special meaning inside of a pattern (see 'set'). Some example patterns: - ^end$ - Find a line that is just 'end'. - [Ee][Nn][Dd] - Find an 'end', ignoring case. - [A-Za-z][A-Za-z0-9]* - Find the next identifier. - ([*].*[*]) - Find the next one-line pascal comment. - \<the\> - Find the next occurrence of `the'.

LIMITATIONS

Levee can only edit files up to 256000 characters long.
^M is used as its internal line separator, so inserting ^M will have interesting consequences.

BUGS

Probably infinite.

AUTHOR

David L. Parsons (orc@pell.chi.il.us)

Testing, suggestions, and impractical design goals by: Jim Bolland, John Tainter, John Plocher.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the above copyright notice and this paragraph are duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use acknowledge that the software was developed by David L. Parsons (orc@pell.chi.il.us). My name may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
https://manpages.debian.org/unstable/levee/vi.1.en.html
import "github.com/ipfs/ipfs-cluster/monitor/pubsubmon"

Package pubsubmon implements a PeerMonitor component for IPFS Cluster that uses PubSub to send and receive metrics.

- Default values for this Config are provided as package variables.
- PubsubTopic specifies the topic used to publish Cluster metrics.

Config allows initializing a Monitor and customizing some parameters.

- ApplyEnvVars fills in any Config fields found as environment variables.
- ConfigKey provides a human-friendly identifier for this type of Config.
- Default sets the fields of this Config to sensible values.
- LoadJSON sets the fields of this Config to the values defined by the JSON representation of it, as generated by ToJSON.
- ToJSON generates a human-friendly JSON representation of this Config.
- Validate checks that the fields of this Config have working values, at least in appearance.

Monitor is a component in charge of monitoring peers, logging metrics and detecting failures.

- New creates a new PubSub monitor, using the given host and config.
- Alerts returns a channel on which alerts are sent when the monitor detects a failure.
- LatestMetrics returns the last known VALID metrics of a given type. A metric is only valid if it has not expired and belongs to a current cluster peer.
- LogMetric stores a metric so it can later be retrieved.
- PublishMetric broadcasts a metric to all current cluster peers.
- SetClient saves the given rpc.Client for later use.
- Shutdown stops the peer monitor. In particular, it will not deliver any alerts.

Package pubsubmon imports 17 packages. Updated 2019-03-08.
https://godoc.org/github.com/ipfs/ipfs-cluster/monitor/pubsubmon
signature SQLITE
structure SQLite : SQLITE

The SQLite structure provides an interface to a database engine using a simple flat-file format. See the SQLite web page for further details. Alice uses version 3 of the library.

import structure SQLite from "x-alice:/lib/sqlite/SQLite"

signature SQLITE =
sig
    type db
    exception SQLError of string
    exception TypeError of string
    val opendb  : string -> db
    val closedb : db -> unit
    val query   : db * string -> 'a[...] list
end

opendb fileName
Opens the SQLite database stored in file fileName. If opening fails, this raises an SQLError exception with a string describing what went wrong.

closedb db
Closes the SQLite database referenced through db. If closing fails, this raises an SQLError exception with a string describing what went wrong.

query (db, sql)
Runs the SQL query sql on database db. The resulting rows are returned as a lazy list. Each row is returned as a tuple or record of strings, integers, or reals, or the corresponding option types. The return type must match the result of your query: it must be either a tuple of the right size or a record with the same labels as the column names. If the result does not match the type, a TypeError exception is raised with a string describing the error.

The fields of the record or tuple must be of type int, real, or string, or an option of these types. Whether a plain type or an option is given determines how the special database value null is treated: it is converted to the empty string or 0 if the field type is a plain type, or to NONE if an option type was given.
If the type does not satisfy these rules, a TypeError exception is raised with a string describing the error.

Assuming a database of users with the following layout:

table users: lastname (text), firstname (text), uid (int), password (text)

the following queries would be legal:

val users : {lastname:string, firstname:string, uid:int option} list =
    query (db, "SELECT lastname,firstname,uid FROM users")

val [pwd:string] = query (db, "SELECT password FROM users WHERE uid=42")

val all : (string * string * int * string) list =
    query (db, "SELECT * FROM users")
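Alice ML is rarely installed these days, so as a rough analogy only, the same users table and the null-handling convention (NONE for option-typed fields, '' or 0 for plain types) can be mimicked with Python's sqlite3 module. The helper name plain below is invented for this sketch and is not part of any API:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (lastname TEXT, firstname TEXT, uid INT, password TEXT)")
db.execute("INSERT INTO users VALUES ('Turing', 'Alan', 42, 's3cret')")
db.execute("INSERT INTO users VALUES ('Lovelace', 'Ada', NULL, NULL)")

def plain(value, default):
    """Mimic Alice's plain field types: SQL null becomes 0 or ''."""
    return default if value is None else value

# Option-typed fields keep None (Alice's NONE)...
rows = db.execute("SELECT lastname, firstname, uid FROM users").fetchall()
print(rows)  # → [('Turing', 'Alan', 42), ('Lovelace', 'Ada', None)]

# ...while plain-typed fields substitute a default for null.
uids = [plain(uid, 0) for (_, _, uid) in rows]
print(uids)  # → [42, 0]
```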
https://www.ps.uni-saarland.de/alice/manual/library/sqlite.html
ZF-8672: Zend_Loader_Autoloader_Resource model example throws exception

Description

The manual provides this example of how to add resource types:

$resourceLoader->addResourceType('acl', 'acls/', 'Acl')
               ->addResourceType('form', 'forms/', 'Form')
               ->addResourceType('model', 'models/');

In fact, the third call would throw an exception, since it needs a namespace argument to define a new 'model' type.

Posted by Michael Ekoka (verysimple) on 2010-08-26T14:10:29.000+0000

I have a media module and I thought I could just name my models Media_Picture if I ignored the namespace argument, as in ->addResourceType('model', 'models/'). This should definitely be fixed in the doc.

Posted by Ramon Henrique Ornelas (ramon) on 2011-02-18T19:36:10.000+0000

Fixed with the issue ZF-10852 in version 1.11.2.
http://framework.zend.com/issues/browse/ZF-8672?actionOrder=desc
Offline Files Very Slow

Sorry to double post but I've no idea how to move a thread from one forum to another, but I think this might get a better response in this performance forum instead of the networking forum. Here's the existing thread (if a moderator could move/merge it with this one that would be appreciated):

Thanks
Chris

My website (free apps I've written for IT Pros): My blog: cjwdev.wordpress.com

All Replies

Hi,

The issue occurs if your network device does not support Remote Differential Compression. Please follow the steps below to disable it.

1. Click Start > Control Panel > Programs > Turn Windows features on or off.
2. Uncheck "Remote Differential Compression" and click OK.
3. Restart the computer and you should see improved performance when copying files.

If the issue persists, temporarily disable antivirus and firewall, then check the result again.

Tracy Cai
TechNet Community Support

(Edited by Tracy Cai, Microsoft contingent staff, Moderator, 18 June 2012 08:33)

Thanks for the response Tracy, unfortunately that didn't help though. We've even tried rebuilding one of our PCs with a completely blank Windows 7 image without any of our software on it (so no anti-virus software or anything) and just joined it to the domain, moved it into an OU that has no GPOs applied at all, then logged on with a test user that has no GPOs applied to it at all other than folder redirection, and we still see the same kind of problems. Here's some example timings I've taken for copying 120 MB worth of files between various locations:

- From C: to C: = 11 seconds
- From C: to network drive (not available offline) = 25 seconds
- From C: to My Documents (available offline, online mode) = 35 seconds
- From C: to My Documents (available offline, offline mode) = 20 seconds

We have repeated this test across multiple machines and the times differ based on the spec of the PC, but they always stand in the same relation - i.e.
copying to offline files in online mode is always considerably slower than anything else, and even copying in offline mode is always a lot slower than just copying locally from one part of the C drive to another (which is all that is really happening anyway when copying to an offline files location in offline mode, right?). There are also some machines where even in offline mode the speed is incredibly slow (over 5 seconds just to load the list of files within a folder), and before you think that it's just a slow PC, bear in mind that opening another folder elsewhere that is not available offline (either on the network or on the C drive) loads the list of files pretty much instantly.

Hi,

Another thing you can try is disabling TCP Auto Tuning. Go to Start and type cmd in the search bar. Right-click on cmd and select "Run as Administrator".

1. Type netsh interface tcp show global
   This will show the current status of auto tuning.
2. Run the command netsh interface tcp set global autotuninglevel=disabled
   This disables auto tuning.
3. Similarly, to enable auto tuning again, use the command netsh interface tcp set global autotuninglevel=normal

After the above steps you may check whether the copying performance improves.

Tracy Cai
TechNet Community Support

Thanks, tried that and rebooted and then tried it again, but it hasn't helped :( If anything that made it a bit slower - copying 120 MB to the offline location (in online mode) this time took 55 seconds.

Hey Chris

What servers are hosting the files? (2008 R2 or 2003?)
If the server is older than 2008 R2, try this related solution. It's basically disabling the auto tuning and receive-side scaling:

netsh interface tcp set global rss=disabled
netsh interface tcp set global autotuninglevel=disabled

Then try this (if no luck): How to re-initialize the offline files cache and database

The servers hosting the files are Server 2008, but like I said, transferring files to the same server on a share that is not made available offline works fine. It is only when working with a share that has been made available offline (either manually or by folder redirection) that things get really slow.

What I have just found though is that on one user's machine where it is really slow, if they disconnect from the VPN then it all works fine. Surely this should be no different to them choosing Work Offline? In both situations it shouldn't be trying to communicate with the server, but when they choose to Work Offline it is still just as slow as when they are in Work Online mode (which is pretty slow, as they are connected via VPN over the internet). Regardless of that though, it still doesn't explain why copying to a network drive available offline would take a lot longer than copying to a normal network drive (when on site connected to the network).

Oh and re-initializing the offline cache isn't going to help, as this is a problem on every PC we have (and even on a brand new PC with none of our software or GPOs on it).

Just done some tests copying a folder from one folder on my desktop to another, and then copying the same folder to My Documents (which is available offline), and used Process Monitor to capture the file activity on each one, with a filter set to only include file activity from explorer.exe.
When copying from local to local (one folder on the desktop to another) there are on average 14 events logged between the creation of each file being copied (a duration of 0.0048502 seconds). When copying from local to offline files (desktop to My Documents) there are 135 events logged between each file being created (a duration of 0.0337028 seconds). I appreciate that there will be some overhead with offline files, but this seems excessive.

It seems to just constantly be reading and writing to the file "C:\Windows\CSC\v2.0.6\pq" between every copy, and it also keeps querying the folders in "C:\Windows\CSC\v2.0.6\namespace\PATH_TO_MY_REDIRECTED_MYDOCUMENTS" a lot before it copies each file to them. Pretty much all of those additional 100+ events that appear in the offline files copy but not in the local copy are just Explorer querying properties of the directory in the CSC folder that it is about to copy a file into... it seems to just do the same thing about 10 times.

No I'm afraid not :( we just eventually abandoned using offline files and gave all laptop users a USB hard drive to back up to. Far from ideal... Funnily enough it got brought up yesterday though, so we might start looking into it again soon - so I'd be really keen to hear how you get on and if you find a solution.

Hi, we are having the same problems in a customer project with Win7 Offline Files. Copying to / deleting in an "Offline Folder" is up to 2 times slower! Microsoft is working on the ticket we created, but meanwhile I think this is "by design", because I have this performance issue with 2 different customers. I think I've tried everything in the last month, but nothing helps. If anyone out there has a clue - you're welcome!

Guido
IncredibleGBT
http://social.technet.microsoft.com/Forums/windows/id-ID/0578dc8b-1dc8-4380-b3a5-60a59ccc4f74/offline-files-very-slow?forum=w7itproperf
Creating SOAP Web Services

This chapter describes the basics of how to create a web service in Caché. It includes the following topics:

- Overview of Caché web services
- Basic requirements for web services
- How to create web services
- How to specify parameters of the web service
- Caché catalog and test pages for the web service
- How to view the generated WSDL

See the first appendix for a table that summarizes the URLs related to your web service.

Overview of Caché Web Services

To create a web service in Caché, you create a class that extends %SOAP.WebService, which provides all the functionality required to make one or more methods callable via the SOAP protocol. In addition, this class automates the management of SOAP-related bookkeeping, such as maintaining a WSDL document that describes a service. This class is derived from the %CSP.Page class. Because of this, every web service class can respond to HTTP requests. Thus, the %SOAP.WebService class implements methods that respond to HTTP events to do the following:

- Publish the WSDL document for the web service as an XML document.
- Publish a human-readable catalog page (using HTML) describing the web service and its methods. The descriptions on this page show the comments included in the class definition.

Basic Requirements

To create and publish a web service in Caché, create and compile a Caché class that meets the following basic requirements:

- The class must extend %SOAP.WebService.
- The class must define the SERVICENAME parameter. Caché does not compile the class unless it defines this parameter.
- The class should define methods or class queries that are marked with the WebMethod keyword.

Important: In most cases, web methods should be instance methods. Within a web method, it is often necessary to set properties of and invoke methods of the web service instance (as described in later chapters) to fine-tune the behavior of the method.
Because a class method cannot do these tasks, a class method is usually not suitable as a web method.

For any web methods, make sure that each value in the method signature has an XML projection. For example, suppose that your method had the following signature:

Method MyWebMethod(myarg as ClassA) as ClassB [ WebMethod ]

In this case, both ClassA and ClassB must have an XML representation. In most cases, this means that their superclass lists must include %XML.Adaptor; see Projecting Objects to XML. Caché SOAP support provides special handling for collections and streams, as noted after this list.

The web method can specify the ByRef and Output keywords in the same way that ordinary methods do. (For information on these keywords, see the chapter "Methods" in Using Caché Objects.)

Consider the values that are likely to be carried within these arguments and return values. XML does not permit nonprinting characters, specifically characters below ASCII 32 (except for carriage returns, line feeds, and tabs, which are permitted in XML). If you need to include any disallowed nonprinting character, specify the type as %Binary, %xsd.base64Binary (which is equivalent), or a subclass. This value is automatically converted to base-64 encoding on export to XML (or automatically converted from base-64 encoding on import).

Do not rely on the method signature to specify the default value for an argument. If you do, the default value is ignored and a null string is used instead. For example, consider the following method:

Method TestDefaults(val As %String = "Default String") As %String [ WebMethod ]

When you invoke this method as a web method, if you do not supply an argument, a null string is used, and the value "Default String" is ignored. Instead, at the start of the method implementation, test for a value and use the desired default if applicable.
One technique is as follows:

if arg="" { set arg="Default String" }

You can indicate the default value in the method signature as usual, but this is purely for informational purposes and does not affect the SOAP messages.

For any required arguments in a web method, specify the REQUIRED property parameter within the method signature. For example:

Method MyWebMethod(myarg as ClassA(REQUIRED=1)) as ClassB [ WebMethod ]

By default, any inherited methods are treated as ordinary methods, even if a superclass marks them as web methods (but see "Subclassing an Existing Caché Web Service," later in this chapter).

Input and Output Objects That Do Not Need %XML.Adaptor

In most cases, when you use an object as input or output to a web method, that object must extend %XML.Adaptor. The exceptions are as follows:

- If the object is %ListOfDataTypes, %ListOfObjects, %ArrayOfDataTypes, %ArrayOfObjects, or a subclass, the Caché SOAP support implicitly treats the object as if it included %XML.Adaptor. You do not need to subclass these classes. However, you must specify ELEMENTTYPE within the method signature, as follows:

  Method MyMethod() As %ListOfObjects(ELEMENTTYPE="MyApp.MyXMLType") [WebMethod]
  {
      //method implementation
  }

  Or, in the case of an input argument:

  Method MyMethod(input As %ListOfObjects(ELEMENTTYPE="MyApp.MyXMLType")) [WebMethod]
  {
      //method implementation
  }

  If the class that you name in ELEMENTTYPE is an object class, it must inherit from %XML.Adaptor.

- If the object is one of the stream classes, the Caché SOAP support implicitly treats the object as if it included %XML.Adaptor. You do not need to subclass the stream class. If it is a character stream, the Caché SOAP tools assume that the type is string. If it is a binary stream, the tools treat it as base-64-encoded data. Thus it is not necessary to supply type information.
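The base-64 conversion mentioned above exists because XML cannot carry most control bytes literally. A language-neutral sketch of the round trip (Python here purely for illustration; Caché performs the equivalent conversion automatically for %xsd.base64Binary):

```python
import base64

# Bytes below ASCII 32 (other than tab, CR, LF) are not legal XML text,
# so binary payloads travel as base-64 instead.
payload = bytes([0, 1, 2, 31]) + b"visible"
encoded = base64.b64encode(payload).decode("ascii")  # safe to embed in an XML element
decoded = base64.b64decode(encoded)                  # lossless round trip
assert decoded == payload
print(encoded)
```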
Using Result Sets as Input or Output

You can use result sets as input or output, but your approach depends on the intended web clients. If both the web service and client are based on Caché, or one is based on .NET, you can use the specialized result set class, %XML.DataSet, which is discussed in the chapter "Using Datasets in SOAP Messages." Or you can use a class query as a web method. The XML representation is automatically the same as for %XML.DataSet.

To output results of a query so that a Java-based web client can work with it, use a %ListOfObjects subclass; there is an example in SOAP.Demo in the SAMPLES namespace.

Simple Example

This section shows an example web service, as well as an example of a request message that it can recognize and the corresponding response message. First, here is the web method (shown without its containing class definition):

/// This method returns tomorrow's price for the requested stock
Method Forecast(StockName As %String) As %Integer [ WebMethod ]
{
    // apply patented, nonlinear, heuristic to find new price
    Set price = $Random(1000)
    Quit price
}

When you invoke this method from a web client, the client sends a SOAP message to the web service. This SOAP message might look like the following (with line breaks and spaces added here for readability):

<?xml version="1.0" encoding="UTF-8" ?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xmlns:s="http://www.w3.org/2001/XMLSchema">
  <SOAP-ENV:Body>
    <Forecast xmlns="http://tempuri.org">
      <StockName xsi:type="s:string">GZP</StockName>
    </Forecast>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

Note that the message body (the <SOAP-ENV:Body> element) includes an element named <Forecast>, which is the name of the method that the client is invoking. The <Forecast> element includes one element, <StockName>, whose name is based on the argument name of the web method that we are invoking. This element contains the actual value of this argument.

The web service performs the requested action and then sends a SOAP message in reply.
The response message might look like the following:

<?xml version="1.0" encoding="UTF-8" ?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <ForecastResponse xmlns="http://tempuri.org">
      <ForecastResult>799</ForecastResult>
    </ForecastResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

These examples do not include the HTTP headers that precede the SOAP message itself.

Creating a Web Service

You can create web services in any of the following ways:

- By creating a new class or editing an existing class to follow the requirements described earlier in this chapter
- By using the Web Service Wizard
- By using the SOAP Wizard with an existing WSDL
- By subclassing one or more Caché web services

Using the Web Service Wizard

The Web Service Wizard generates a simple stub.

1. Click File > New. This displays the New dialog box.
2. Click the General tab.
3. Click New Web Service and then click OK. This displays a wizard.
4. Enter values for the package name, class name, and web service name. These are required.
5. Optionally edit the namespace URI (or change this initial value later). This is the XML namespace, not the Caché namespace.
6. Optionally type a list of method names, on separate lines.
7. Click OK.

Now you have a new web service class that contains stubs for the web methods. For example:

/// TODO: add arguments and implementation.
/// Forecast
Method Forecast() As %String [ WebMethod ]
{
    ;Quit "Forecast"
}

Using the SOAP Wizard with an Existing WSDL

In some cases, the WSDL has been designed already and it is necessary to create a web service that matches the WSDL; this is known as "WSDL-first development." In Caché, there are three steps to this development.

First, use the SOAP Wizard to read the WSDL and to generate the web service and all supporting classes. This wizard can also generate web client classes (which is more common). For information on using this wizard, see "Using the SOAP Wizard," later in this book.
Follow the steps described in that section and also select the Create Web Service option within the wizard. Or use the %SOAP.WSDL.Reader class as described in "Using the %SOAP.WSDL.Reader Class."

Second, examine the generated classes to see if you need to change any %String values in the method signatures. When the wizard reads a WSDL, it assumes that any string-type input or output can be represented in Caché as %String, which is not always true. Some strings might exceed the Caché 32 KB limit for strings. See "Adjusting the Generated Classes for Long Strings," later in this book.

Third, edit the methods in the generated web service so that they perform the desired actions. Each method is initially a stub like the following example:

Method Add(a As Test.ns2.ComplexNumber, b As Test.ns2.ComplexNumber) As Test.ns2.ComplexNumber [ Final, SoapAction = "", SoapBindingStyle = document, SoapBodyUse = literal, WebMethod ]
{
    // Web Service Method Implementation Goes Here.
}

The wizard includes compiler keywords such as Final and SoapBindingStyle. You should not change the values of these keywords.

If the WSDL includes WS-Policy elements, the wizard also generates a configuration class for the web service. The default configuration class name is the web service name, with Config appended to it. For information on WS-Policy, see Securing Caché Web Services.

Subclassing an Existing Caché Web Service

You can create a web service by creating a subclass of an existing Caché web service class and then adding the SOAPMETHODINHERITANCE parameter to your class as follows:

Parameter SOAPMETHODINHERITANCE = 1;

The default for this parameter is 0. If this parameter is 0, your class does not inherit the web methods as web methods. That is, the methods are available as ordinary methods but cannot be accessed as web methods within the web service defined by the subclass.
If you set this parameter to 1, then your class can use web methods defined in any superclasses that are web services.

Specifying Parameters of the Web Service

Make sure that your web service class uses appropriate values for the following parameters. If you use the SOAP wizard to generate a web service from an existing WSDL, do not modify any of these parameters.

SERVICENAME
Name of the web service. This name must start with a letter and must contain only alphanumeric characters. Caché does not compile the class unless the class defines this parameter.

NAMESPACE
URI that defines the target namespace for your web service, so that your service, and its contents, do not conflict with another service. This is initially set to "http://tempuri.org", which is a temporary URI often used by SOAP developers during development. If you do not specify this parameter, the target namespace is "http://tempuri.org". For a Caché web service, there is no way to put request messages in different namespaces. A Caché web client, however, does not have this limitation; see "Namespaces for the Messages," later in this book.

RESPONSENAMESPACE
URI that defines the namespace for the response messages. By default, this is equal to the namespace given by the NAMESPACE parameter. For a Caché web service, there is no way to put response messages in different namespaces. A Caché web client, however, does not have this limitation; see "Namespaces for the Messages," later in this book.

TYPENAMESPACE
Namespace for the schema for the types defined by the web service. If you do not specify this parameter, the schema is in the target namespace of the web service (that is, either NAMESPACE or the default). For a Caché web service, there is no way to put the request message types in different namespaces. A Caché web client does not have this limitation; see "Namespaces for Types," later in this book.

RESPONSETYPENAMESPACE
URI that defines the namespace for types used by the response messages. By default, this is equal to the namespace given by the TYPENAMESPACE parameter.
This parameter is used only if SoapBindingStyle equals "document" (the default). For either a Caché web service or a Caché web client, the types for the response messages must all be in the same namespace.

SOAPVERSION
Specifies the SOAP version or versions advertised in the WSDL of the web service. Use one of the following values:

- "" - Use this value for SOAP 1.1 or 1.2.
- "1.1" - Use this value for SOAP 1.1. This is the default.
- "1.2" - Use this value for SOAP 1.2.

When the web service receives a SOAP request, the SoapVersion property of the web service is updated to equal the SOAP version of that request. See also "Restricting the SOAP Versions Handled by a Web Service," later in this book. For details on how these values affect the WSDL, see the appendix "Details of the Generated WSDLs."

About the Catalog and Test Pages

When you compile a web service class, the class compiler creates a convenient catalog page that you can use to examine the web service. This catalog page provides a link to a simple test page. To see these CSP pages:

1. In Studio, display the web service class.
2. Click View > Web Page. The catalog page is immediately displayed.

Its URL is constructed as follows:

base/csp/app/web_serv.cls

Here base is the base URL for your web server (including port if necessary), /csp/app is the name of the web application in which the web service resides, and web_serv is the class name of the web service. (Typically, /csp/app is /csp/namespace.)

Access to These Pages

These CSP pages are part of a Caché web application, as noted in the previous chapter. If there is no web application for the namespace you are using, you cannot access these pages. Also, by default, these pages are inaccessible.
To enable access to them, open the Terminal, go to the %SYS namespace, and enter the following commands:

set ^SYS("Security","CSP","AllowClass",webapplicationname,"%SOAP.WebServiceInfo")=1
set ^SYS("Security","CSP","AllowClass",webapplicationname,"%SOAP.WebServiceInvoke")=1

Where webapplicationname is the web application name with a trailing slash, for example, "/csp/mynamespace/". This access is enabled by default for the /csp/samples web application.

Also, you can use these pages only if you are logged in as a user who has USE permission for the %Development resource.

Notes on These Pages

The catalog page displays the class name, namespace, and service name, as well as the comments for the class and web methods. The Service Description link displays the WSDL; for information, see the section "Viewing the WSDL," later in this chapter. The page then lists the web methods, with links (if you have the suitable permissions). The link for a given method displays a test page where you can test that method in a limited fashion. Notes about this test page:

- It does not enable you to see the SOAP request.
- It does not test the full SOAP pathway. This means, for example, it does not write to the SOAP log that is discussed later in this chapter.
- It accepts only simple, literal inputs, so you cannot use it to call methods whose arguments are objects, collections, or datasets.

This book does not discuss this page further. To test your web service more fully, generate and use a web client as described later in this book.

Viewing the WSDL

When you use %SOAP.WebService to define a web service, the system creates and publishes a WSDL document that describes this web service. Whenever you modify and recompile the web service, the system automatically updates the WSDL correspondingly.
This section discusses the following:

- Viewing the WSDL and the URL at which the WSDL is published
- Methods you can use to generate the WSDL as a static document

Also see "WSDL Support in Caché" in the first chapter.

By definition, a web service and its web clients are required to comply with a common interface, regardless of their respective implementations (and regardless of any underlying changes in technology). A WSDL is a standards-compliant description of this interface. It is important to note the following:

- In practice, a single SOAP interface can often be correctly described by multiple, slightly different WSDL documents. Accordingly, the WSDL generated by Caché may have a slightly different form depending on the version of Caché. It is beyond the scope of this documentation to describe any such differences. InterSystems can commit only to the interoperability of web services and their respective clients, as required in the W3C specifications.
- The W3C specifications do not require that either a web service or a web client be able to generate a WSDL to describe the interface with which it complies.

The system generates the WSDL document and serves it at a specific URL, for convenience. However, if the containing web application requires password authentication or requires an SSL connection, you may find it impractical to access the WSDL in this way. In such cases, you should download the WSDL to a file and use the file instead.

Also, as noted previously, the generated WSDL does not contain any information about SOAP headers added at runtime. If you need a WSDL document to contain information about SOAP headers added at runtime, you should download the WSDL to a file, modify the file as appropriate, and then use that file.
Viewing the WSDL

To view the WSDL for the web service, use the following URL:

    base/csp/app/web_serv.cls?WSDL

Here base is the base URL for your web server (including port if necessary), /csp/app is the name of the web application in which the web service resides, and web_serv is the class name of the web service. (Typically, /csp/app is /csp/namespace.) Any percent characters (%) in your class name are replaced by underscore characters (_) in this URL.

The browser displays the WSDL document. Note that not all browsers display the schema correctly. You might need to view the page source to see the actual schema. For example, in Firefox, right-click and then select View Source.

Generating the WSDL

You can also generate the WSDL as a static document. The %SOAP.WebService class provides the FileWSDL() method for this purpose.

Suppressing Internal Web Methods from the WSDL

If the web service has web methods that are marked as Internal, by default the WSDL includes these web methods. You can prevent these methods from being included in the WSDL. To do so, do either of the following:

- Use the FileWSDL() method of the web service to generate the WSDL; see the previous section. This method provides an argument that controls whether the WSDL includes internal web methods.
- Specify the SOAPINTERNALWSDL class parameter as 0 in the web service class. (The default for this class parameter is 1.)
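The URL scheme described above is mechanical enough to capture in a short helper. The sketch below is not part of Caché itself, and the host, port, application, and class names in the example are made up for illustration:

```python
def wsdl_url(base, app, cls):
    """Build the URL at which Caché publishes a web service's WSDL.

    base -- base URL of the web server, e.g. "http://localhost:57772"
    app  -- web application name, e.g. "/csp/mynamespace"
    cls  -- class name of the web service

    Percent characters (%) in the class name are replaced by
    underscore characters (_), as the documentation describes.
    """
    return "{}{}/{}.cls?WSDL".format(base, app, cls.replace("%", "_"))

# Illustrative only -- substitute your own server and class names:
print(wsdl_url("http://localhost:57772", "/csp/mysamples", "SOAP.Demo"))
# http://localhost:57772/csp/mysamples/SOAP.Demo.cls?WSDL
```

Fetching that URL (for example with urllib) and saving the response to a file gives you the static WSDL copy recommended above for password-protected or SSL-only web applications.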
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GSOAP_WEBSERVICE
Library Interfaces and Headers - word-expansion types

#include <wordexp.h>

The <wordexp.h> header defines the structures and symbolic constants used by the wordexp() and wordfree() functions. See wordexp(3C).

The structure type wordexp_t contains at least the following members:

    size_t   we_wordc    Count of words matched by words.
    char   **we_wordv    Pointer to the list of expanded words.
    size_t   we_offs     Slots to reserve at the beginning of we_wordv.

The following constants are defined as flags for wordexp():

    WRDE_APPEND   Append words to those previously generated.
    WRDE_DOOFFS   Use we_offs as the number of null pointers to prepend to we_wordv.
    WRDE_NOCMD    Fail if command substitution is requested.
    WRDE_REUSE    The pwordexp argument was passed to a previous successful call to wordexp(), and has not been passed to wordfree(). The result is the same as if the application had called wordfree() and then called wordexp() without WRDE_REUSE.
    WRDE_SHOWERR  Do not redirect stderr to /dev/null.
    WRDE_UNDEF    Report error on an attempt to expand an undefined shell variable.

The following constants are defined as error return values:

    WRDE_BADCHAR  One of the unquoted characters—<newline>, '|', '&', ';', '<', '>', '(', ')', '{', '}'—appears in words in an inappropriate context.
    WRDE_BADVAL   Reference to undefined shell variable when WRDE_UNDEF is set in flags.
    WRDE_CMDSUB   Command substitution requested when WRDE_NOCMD was set in flags.
    WRDE_NOSPACE  Attempt to allocate memory failed.
    WRDE_NOSYS    Reserved.
    WRDE_SYNTAX   Shell syntax error, such as unbalanced parentheses or unterminated string.

The <wordexp.h> header also defines the size_t type, as described in <stddef.h>.

See attributes(5) for descriptions of the following attributes: wordexp(3C), attributes(5), standards(5)
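To illustrate what the expansion performed by wordexp() produces, here is a rough Python approximation. This is only a sketch: it mimics field splitting, tilde/variable expansion, and pathname globbing, but does not reproduce command substitution or the flag-controlled error reporting of the real C interface:

```python
import glob
import os
import shlex

def wordexp_approx(words):
    """Rough Python approximation of POSIX wordexp(3C).

    Performs field splitting, tilde and variable expansion, then
    pathname globbing. Command substitution and the behavior selected
    by flags such as WRDE_NOCMD or WRDE_UNDEF are not reproduced.
    """
    result = []
    for token in shlex.split(words):
        expanded = os.path.expandvars(os.path.expanduser(token))
        matches = sorted(glob.glob(expanded))
        # As in the shell, a pattern that matches nothing is kept literally.
        result.extend(matches if matches else [expanded])
    return result
```

For example, with $DOCS set to /home/me/docs, wordexp_approx("$DOCS/*.txt notes") would expand the variable, glob the first token against the filesystem, and pass "notes" through unchanged, much like the we_wordv list filled in by the C function.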
http://docs.oracle.com/cd/E18752_01/html/816-5173/wordexp.h-3head.html
This is an automated email from the ASF dual-hosted git repository. mmerli pushed a commit to branch master in repository The following commit(s) were added to refs/heads/master by this push: new a72c912 Pulsar FAQ (#1186) a72c912 is described below commit a72c91294aa8ec8546652606a5300ca0b47858f6 Author: XiaoZYang <xiaozy...@users.noreply.github.com> AuthorDate: Sat Feb 24 07:25:00 2018 +0800 Pulsar FAQ (#1186) * 1. add faq.md * 1.add initializeSubscriptionOnLatest in ConsumerConfiguration to initialize cursor when subscription initialized * Revert "1.add initializeSubscriptionOnLatest in ConsumerConfiguration to initialize cursor when subscription initialized" This reverts commit 8747966a5e5a3fea50b4a5ba010ef92790868f2e. --- faq.md | 273 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 273 insertions(+) diff --git a/faq.md b/faq.md new file mode 100644 index 0000000..1ff7093 --- /dev/null +++ b/faq.md @@ -0,0 +1,273 @@ +# Frequently Asked Questions +- Getting Started +- Concepts and Design +- Usage and Configuration + +--- + +## Getting Started + +### What is the minimum requirements for Apache Pulsar ? +You need 3 kind of clusters: bookie, broker, zookeeper. But if not have enough resource, it's ok to run them on same machine. + +--- + +## Concepts and Design + +### Is ack tied to subscription? +Yes, ack is tied to a particular subscription. + +### Where should I look into to tweak load balancing ? +There are few parameters to look at : +1. The topic assignments to brokers are done in terms of “bundles”, that is in group of topic +2. Topics are matched to bundles by hashing on the name +3. Effectively, a bundle is a hash-range where topics falls into +4. Initially the default is to have 4 “bundles” for a namespace +5. When the traffic increases on a given bundle, it will be split in 2 and reassigned to a different broker +6. 
There are some adjustable thresholds that can be used to control when the split happens, based on number of topics/partitions, messages in/out, bytes in/out, etc.. +7. It’s also possible to specify a higher number of bundles when creating a namepsac +8. There are the load-manager threshold that control when a broker should offload some of the bundles to other brokers + +### What is the lifecycle of subscription? +Once it’s created, it retains all messages published after that (minus explicit TTL). Subscriptions can be dropped by explicitly unsubscribing (in `Consumer` API) or through the REST/CLI . + +### What is a bundle? +In Pulsar, "namespaces" are the administrative unit: you can configure most options on a namespace and they will be applied on the topics contained on the namespace. It gives the convenience of doing settings and operations on a group of topics rather than doing it once per topic. + +In general, the pattern is to use a namespace for each user application. So a single user/tenant, can create multiple namespaces to manage its own applications. + +When it comes to topics, we need a way to assign topics to brokers, control the load and move them if a broker becomes overloaded. Rather that doing this operations per each single topic (ownership, load-monitoring, assigning), we do it in bundles, or "groups of topics". + +In practical words, the number of bundles determines "into how many brokers can I spread the topics for a given namespace". + +From the client API or implementation, there's no concept of bundles, clients will lookup the topics that want to publish/consumer individually. + +On the broker side, the namespace is broken down into multiple bundles, and each bundle can be assigned to a different broker. Effectively, bundles are the "unit of assignment" for topics into brokers and this is what the load-manager uses to track the traffic and decide where to place "bundles" and whether to offload them to other brokers. 
+ +A bundle is represented by a hash-range. The 32-bit hash space is initially divided equally into the requested bundles. Topics are matched to a bundle by hashing on the topic name. + +Default number of bundles is configured in `broker.conf`: `defaultNumberOfNamespaceBundles=4` + +When the traffic increases on a given bundle, it will be split in 2 and reassigned to a different broker. + +Enable auto-split: `loadBalancerAutoBundleSplitEnable=true` trigger unload and reassignment after splitting: `loadBalancerAutoUnloadSplitsEnable=true`. + +If is expected to have a high traffic on a particular namespace, it's a good practice to specify a higher number of bundles when creating the namespace: `bin/pulsar-admin namespaces create $NS --bundles 64`. This will avoid the initial auto-adjustment phase. + +All the thresholds for the auto-splitting can be configured in `broker.conf`, eg: number of topics/partitions, messages in/out, bytes in/out, etc... + +### How the design deals with isolation between tenants, which concepts enable that and up to what extent, how huge difference can exist between tenants so that impact on each other is noticeable via degraded latency. +The isolation between tenants (and topics of same tenant) happens at many different points. I'll start from the bottom up. + +#### Storage +You're probably familiar with BookKeeper, but of the main strength is that each bookie can efficiently serve many different ledger (segments of topic data). We tested with 100s of thousand per single node. + +This is because there is a single journal (on its own device) where all the write operations gets appended and then the entries are periodically flushed in background on the storage device. + +This gives isolation between writes and reads in a bookie. You can read as fast as you can, maxing out the IO on the storage device, but your write throughput and latency are going to be unaffected. + +#### Broker +Everything in the broker happens asynchronously. 
The amount of memory that is used is also capped per broker. + +Whenever the broker is marked as overloaded, traffic can be quickly shifted (manually or without intervention) to less loaded brokers. LoadManager component in brokers is dedicated to that. + +There are several points of flow control: +- On the producer side, there are limits on the in-flight message for broker bookies, that will slow down users trying to publish faster that the system can absorb +- On the consumer side, it's possible to throttle the delivery to a certain rate + +#### Quotas +Can configure different storage quotas for different tenants/namespaces and take different actions when the quotas are filled up (block producer, give exception, drop older messages). + +#### Broker level isolation +There is the option to isolate certain tenants/namespaces to a particular set of broker. Typically the reason for using that was to experiment with different configurations, debugging and quickly react to unexpected situations. + +For example, a particular user might be triggering a bad behavior in the broker that can impact performance for other tenants. + +In this case, the particular user can be "isolated" a subset of brokers that will not serve any other traffic, until a proper fix that correctly handles the condition can be deployed. + +This is a lightweight option of having multiple clusters for different users, since most of the other parts are still shared (ZK, BK,...). + + +### Is there "regex" topic in Pulsar? +There is regex subscription coming up in Pulsar 2.0. See [PIP-13](). + +### Does Pulsar have, or plan to have, a concept of log compaction where only the latest message with the same key will be kept ? +Yes, see [PIP-9]() for more details. + +### When I use an exclusive subscription to a partitioned topic, is the subscription attached to the "whole topic" or to a "topic partition"? 
+On a partitioned topic, you can use all the 3 supported subscription types (exclusive, failover, shared), same as with non partitioned topics. +The “subscription” concept is roughly similar to a “consumer-group” in Kafka. You can have multiple of them in the same topic, with different names. + +If you use “exclusive”, a consumer will try to consume from all partitions, or fail if any partition is already being consumer. + +The mode similar to Kafka is “failover” subscription. In this case, you have 1 active consumer per partition, the active/stand-by decision is made at the partition level, and Pulsar will make sure to spread the partition assignments evenly across consumer. + +### What is the proxy component? +It’s a component that was introduced recently. Essentially it’s a stateless proxy that speaks that Pulsar binary protocol. The motivation is to avoid (or overcome the impossibility) of direct connection between clients and brokers. + +--- + +## Usage and Configuration +### Can I manually change the number of bundles after creating namespaces? +Yes, you can split a given bundle manually. + +### Is the producer kafka wrapper thread-safe? +The producer wrapper should be thread-safe. + +### Can I just remove a subscription? +Yes, you can use the cli tool `bin/pulsar-admin persistent unsubscribe $TOPIC -s $SUBSCRIPTION`. + +### How are subscription modes set? Can I create new subscriptions over the WebSocket API? +Yes, you can set most of the producer/consumer configuration option in websocket, by passing them as HTTP query parameters like: +`ws://localhost:8080/ws/consumer/persistent/sample/standalone/ns1/my-topic/my-sub?subscriptionType=Shared` + +see [the doc](). + +### Is there any sort of order of operations or best practices on the upgrade procedure for a geo-replicated Pulsar cluster? +In general, updating the Pulsar brokers is an easy operation, since the brokers don't have local state. 
The typical rollout is a rolling upgrade, either doing 1 broker at a time or some percentage of them in parallel. + +There are not complicated requirements to upgrade geo-replicated clusters, since we take particular care in ensuring backward and forward compatibility. + +Both the client and the brokers are reporting their own protocol version and they're able to disable newer features if the other side doesn't support them yet. + +Additionally, when making metadata breaking format changes (if the need arises), we make sure to spread the changes along at least 2 releases. + +This is to always allow the possibility to downgrade a running cluster to a previous version, in case any server problem is identified in production. + +So, one release will understand the new format while the next one will actually start using it. + +### Since Pulsar has configurable retention per namespace, can I set a "forever" value, ie., always keep all data in the namespaces? +So, retention applies to "consumed" messages. Ones, for which the consumer has already acknowledged the processing. By default, retention is 0, so it means data is deleted as soon as all consumers acknowledge. You can set retention to delay the retention. + +That also means, that data is kept forever, by default, if the consumers are not acknowledging. + +There is no currently "infinite" retention, other than setting to very high value. + +### How can a consumer "replay" a topic from the beginning, ie., where can I set an offset for the consumer? +1. Use admin API (or CLI tool): + - Reset to a specific point in time (3h ago) + - Reset to a message id +2. You can use the client API `seek`. + +### When create a consumer, does this affect other consumers ? +The key is that you should use different subscriptions for each consumer. Each subscription is completely independent from others. 
+ +### The default when creating a consumer, is it to "tail" from "now" on the topic, or from the "last acknowledged" or something else? +So when you spin up a consumer, it will try to subscribe to the topic, if the subscription doesn't exist, a new one will be created, and it will be positioned at the end of the topic ("now"). + +Once you reconnect, the subscription will still be there and it will be positioned on the last acknowledged messages from the previous session. + +### I want some produce lock, i.e., to pessimistically or optimistically lock a specified topic so only one producer can write at a time and all further producers know they have to reprocess data before trying again to write a topic. +To ensure only one producer is connected, you just need to use the same "producerName", the broker will ensure that no 2 producers with same name are publishing on a given topic. + +### I tested the performance using PerformanceProducer between two server node with 10,000Mbits NIC(and I tested tcp throughput can be larger than 1GB/s). I saw that the max msg throughput is around 1000,000 msg/s when using little msg_size(such as 64/128Bytes), when I increased the msg_size to 1028 or larger , then the msg/s will decreased sharply to 150,000msg/s, and both has max throughput around 1600Mbit/s, which is far from 1GB/s. And I'm curious that the throughput between produ [...] +Most probably, when increasing the payload size, you're reaching the disk max write rate on a single bookie. + +There are few tricks that can be used to increase throughput (other than just partitioning) + +1. Enable striping in BK, by setting ensemble to bigger than write quorum. E.g. e=5 w=2 a=2. Write 2 copies of each message but stripe them across 5 bookies + +2. If there are already multiple topics/partitions, you can try to configure the bookies with multiple journals (e.g. 4). 
This should increase the throughput when the journal is on SSDs, since the controller has multiple IO queues and can efficiently sustain multiple threads each doing sequential writes + +- Option (1) you just configure it on a given pulsar namespace, look at "namespaces set-persistence" command + +- Option (2) needs to be configured in bookies + +### Is there any work on a Mesos Framework for Pulsar/Bookkeeper this point? Would this be useful? +We don’t have anything ready available for Mesos/DCOS though there should be nothing preventing it + +It would surely be useful. + + +### Is there an HDFS like interface? +Not for Pulsar.There was some work in BK / DistributedLog community to have it but not at the messaging layer. + +### Where can I find information about `receiveAsync` parameters? In particular, is there a timeout as in `receive`? +There’s no other info about `receiveAsync()`. The method doesn’t take any parameters. Currently there’s no timeout on it. You can always set a timeout on the `CompletableFuture` itself, but the problem is how to cancel the future and avoid “getting” the message. + +What’s your use case for timeout on the `receiveAsync()`? Could that be achieved more easily by using the `MessageListener`? + +### Why do we choose to use bookkeeper to store consumer offset instead of zookeeper? I mean what's the benefits? +ZooKeeper is a “consensus” system that while it exposes a key/value interface is not meant to support a large volume of writes per second. + +ZK is not an “horizontally scalable” system, because every node receive every transaction and keeps the whole data set. Effectively, ZK is based on a single “log” that is replicated consistently across the participants. + +The max throughput we have observed on a well configured ZK on good hardware was around ~10K writes/s. If you want to do more than that, you would have to shard it.. + +To store consumers cursor positions, we need to write potentially a large number of updates per second. 
Typically we persist the cursor every 1 second, though the rate is configurable and if you want to reduce the amount of potential duplicates, you can increase the persistent frequency. + +With BookKeeper it’s very efficient to have a large throughput across a huge number of different “logs”. In our case, we use 1 log per cursor, and it becomes feasible to persist every single cursor update. + +### I'm facing some issue using `.receiveAsync` that it seems to be related with `UnAckedMessageTracker` and `PartitionedConsumerImpl`. We are consuming messages with `receiveAsync`, doing instant `acknowledgeAsync` when message is received, after that the process will delay the next execution of itself. In such scenario we are consuming a lot more messages (repeated) than the num of messages produced. We are using Partitioned topics with setAckTimeout 30 seconds and I believe this issue [...] +PartitionedConsumer is composed of a set of regular consumers, one per partition. To have a single `receive()` abstraction, messages from all partitions are then pushed into a shared queue. + +The thing is that the unacked message tracker works at the partition level.So when the timeout happens, it’s able to request redelivery for the messages and clear them from the queue when that happens, +but if the messages were already pushed into the shared queue, the “clearing” part will not happen. + +- the only quick workaround that I can think of is to increase the “ack-timeout” to a level in which timeout doesn’t occur in processing +- another option would be to reduce the receiver queue size, so that less messages are sitting in the queue + +### Can I use bookkeeper newer v3 wire protocol in Pulsar? How can I enable it? +The answer is currently not, because we force the broker to use v2 protocol and that's not configurable at the moment. + +### Is "kubernetes/generic/proxy.yaml" meant to be used whenever we want to expose a Pulsar broker outside the Kubernetes cluster? 
+Yes, the “proxy” is an additional component to deploy a stateless proxy frontend that can be exposed through a load balancer and that doesn’t require direct connectivity to the actual brokers. No need to use it from within Kubernetes cluster. Also in some cases it’s simpler to have expose the brokers through `NodePort` or `ClusterIp` for other outside producer/consumers. + +### Is there a way of having both authentication and the Pulsar dashboard working at same time? +The key is that with authorization, the stats collector needs to access the APIs that require the credentials. That’s not a problem for stats collected through Prometheus but it is for the “Pulsar dashboard” which is where the per-topic stats are shown. I think that should be quite easy to fix. + +### How can I know when I've reached the end of the stream during replay? +There is no direct way because messages can still be published in the topic, and relying on the `readNext(timeout)` is not precise because the client might be temporarily disconnected from broker in that moment. + +One option is to use `publishTimestamp` of messages. When you start replaying you can check current "now", then you replay util you hit a message with timestamp >= now. + +Another option is to "terminate" the topic. Once a topic is "terminated", no more message can be published on the topic, a reader/consumer can check the `hasReachedEndOfTopic()` condition to know when that happened. + +A final option is to check the topic stats. This is a tiny bit involved, because it requires the admin client (or using REST) to get the stats for the topic and checking the "backlog". If the backlog is 0, it means we've hit the end. + +### How can I prevent an inactive topic to be deleted under any circumstance? I want to set no time or space limit for a certain namespace. +There’s not currently an option for “infinite” (though it sounds a good idea! maybe we could use `-1` for that). 
The only option now is to use INT_MAX for `retentionTimeInMinutes` and LONG_MAX for `retentionSizeInMB`. It’s not “infinite” but 4085 years of retention should probably be enough! + +### Is there a profiling option in Pulsar, so that we can breakdown the time costed in every stage? For instance, message A stay in queue 1ms, bk writing time 2ms(interval between sending to bk and receiving ack from bk) and so on. +There are latency stats at different stages. In the client (eg: reported every 1min in info logs). +In the broker: accessible through the broker metrics, and finally in bookies where there are several different latency metrics. + +In broker there’s just the write latency on BK, because there is no other queuing involved in the write path. + +### How can I have multiple readers that each get all the messages from a topic from the beginning concurrently? I.e., no round-robin, no exclusivity +you can create reader with `MessageId.earliest` + + +### Does broker validate if a property exists or not when producer/consumer connects ? +yes, broker performs auth&auth while creating producer/consumer and this information presents under namespace policies.. so, if auth is enabled then broker does validation + +### From what I’ve seen so far, it seems that I’d instead want to do a partitioned topic when I want a firehose/mix of data, and shuffle that firehose in to specific topics per entity when I’d have more discrete consumers. Is that accurate? +Precisely, you can use either approach, and even combine them, depending on what is more convenient for the use case. The general traits to choose one or the other are: + +- Partitions -> Maintain a single “logical” topic but scale throughput to multiple machines. Also, ability to consume in order for a “partition” of the keys. In general, consumers are assigned a partition (and thus a subset of keys) without specifying anything. 
+ +- Multiple topics -> When each topic represent some concrete existing “concept” in the application and it is “finite” (eg: using a topic per each user when the number of users is unbound and can be in the 100s of millions it’s not a good idea), within 10s or 100s of thousands. Having multiple topics makes it easier for a consumer to consume a specific portion of messages. + +### For subscribing to a large number of topics like that, would i need to call `subscribe` for each one individually, or is there some sort of wildcard capability? +Currently you can only subscribe individually, (though you can automate it by getting the list of topics and going through it), but we’re working on the wildcard subscribe and we’re targeting that for next release. + +### Hi, is the difference between a consumer and a reader documented somewhere? +Main difference: a reader can be used when manually managing the offset/messageId, rather than relying on Pulsar to keep track of it with the acknowledgments +- consumer -> managed subscriptions with acks and auto retention +- reader -> always specify start message id on creation + + +### Hey, question on routing mode for partitioned topics. What is the default configuration and what is used in the Kafka adaptor? +The default is to use the hash of the key on a message. If the message has no key, the producer will use a “default” partition (picks 1 random partition and use it for all the messages it publishes). + +This is to maintain the same ordering guarantee when no partitions are there: per-producer ordering. + +The same applies when using the Kafka wrapper. + +### I'm setting up bookies on AWS d2.4xlarge instances (16 cores, 122G memory, 12x2TB raid-0 hd). Do you have any recommendation for memory configuration for this kind of setup? For configurations like java heap, direct memory and dbStorage_writeCacheMaxSizeMb, dbStorage_readAheadCacheMaxSizeMb, dbStorage_rocksDB_blockCacheSize. 
BTW, I'm going to use journalSyncData=false since we cannot recover machines when they shutdown. So no fsync is required for every message. +Since the VM has lot of RAM you can increase a lot from the defaults and leave the rest page cache. For JVM heap I'd say ~24g. WriteCacheMaxSize and ReadAheadCacheMaxSize are both coming from JVM direct memory. I'd say to start with 16g @ 16g. For rocksdb block cache, which is allocated in JNI so it's completely out of JVM configuration, ideally you want to cache most of the indexes. I'd say 4gb should be enough to index all the data in the 24Tb storage space. + +### When there are multiple consumers for a topic, the broker reads once from bookies and send them to all consumers with some buffer? or go get from bookies all the time for each consumers ? +In general, all dispatching is done directly by broker memory. We only read from bookies when consumer are falling behind. + + -- To stop receiving notification emails like this one, please contact mme...@apache.org.
https://www.mail-archive.com/commits@pulsar.incubator.apache.org/msg04367.html
great animation but great animation and voice acting, but its a complet "Vampire Bullet" ripoff. sorry. Rated 4.5 / 5 stars nice def worth waitin for a second, kinda reminds me of hellsing tho Rated 5 / 5 stars awesome! i love the care you put into your shots and cinematography, its very dynamic and it brings to mind feelings of animes like bebop. really cool - - ps cant go wrong with shock-dingo, he's very skilled Rated 3 / 5 stars that was kind of dumb the animation was choppy, and it just seems like a stylistic ripoff of Hellsing and Cybersix. Even the design of the priest is from that first episode of Hellsing. Rated 4.5 / 5 stars 4/5 9/10 Nice animation, it does seem a bit choppy and bad at some parts, but for the most part its really good. The story seems to have a good build up, especially with this nice intro of yours. It has nice drama, and seems like it'll continue to be a good series.
http://www.newgrounds.com/portal/view/524616/review_page/11
In this post we will build a Python program that will download YouTube videos. We will cover:

- What is the youtube_dl library?
- How to download YouTube videos with the youtube_dl library

What is the youtube_dl Library?

The youtube_dl library is an open-source command line tool that can download YouTube videos. It is a command line tool, meaning you can run it from your terminal. In addition to having a command line interface, it also has a Python library. The Python library allows you to execute the commands normally executed in the command line with Python. The advantage of using it in Python is having a more customizable interface and compacting other options into one command.

To follow this tutorial, you'll need to install the Python library for youtube_dl. You can do so by using the line in the terminal below.

pip install youtube_dl

Downloading a YouTube Video with the youtube_dl Library

As always, we'll begin our program by importing libraries. In this case, the only library we need is the youtube_dl library. Let's create a function that will download a YouTube video from a link. The only parameter it will take is the link itself. We'll start our function by defining some options for youtube_dl. We'll tell it that we want our video in mp4 format and also to save it as the id of the video. Next, we'll strip the link to get the YouTube video ID. We'll use the YoutubeDL object to extract the info from the video, including the metadata. Then we'll print out and return the save location. The extract_info function does the downloading part. Running the download_video function on a link will save the video as a .mp4 file in your current directory.
import youtube_dl

def download_video(link):
    ydl_opts = {
        'format': 'mp4',
        'outtmpl': "./%(id)s.%(ext)s",
    }
    _id = link.strip()
    meta = youtube_dl.YoutubeDL(ydl_opts).extract_info(_id)
    save_location = meta['id'] + ".mp4"
    print(save_location)
    return save_location

Further Reading

- Matrix Multiplication in Basic Python
- Multiprocessing Python Functions with Dependencies
- Build an AI Content Moderation System
- Python High Low Guessing Game
- Python Web Scraping with Selenium and Beautiful Soup
https://pythonalgos.com/download-any-youtube-video-with-python/
The Samba-Bugzilla – Bug 7065
using the security.* namespace for NTACL in vfs_acl_xattr considered improper
Last modified: 2010-01-27 02:24:27 UTC

Quoting simo from samba-technical: >.

There have been arguments on the list that there might be installations which already use the vfs module and use the security.* namespace. On the other hand, I have not heard of a production environment using it. In addition to that, the man page explicitly states that the module is still experimental in the 3.4 series.

There has also been the argument that a future LSM module could be using the security.NTACL EA for storing the ACLs. I think the chances are low that a GNU/Linux system will use the name NTACL to store NFS4 ACLs - the name will more likely be NFS4ACL or so.

In my opinion we should not violate the namespace usage rules. With 3.5 we will have the first ready-to-use version of vfs_acl_xattr and we should use a proper namespace. When an LSM module comes up in the future, there will likely be a conversion to a different namespace anyway.

The security modules have been shipping using the security.* namespace since 3.2.x. This will not be changed.

Jeremy.
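For context, extended attributes like the security.NTACL EA discussed in this bug can be inspected from Python on Linux via os.getxattr. The helper below is only a sketch: reading the security.* namespace normally requires privileges, and the attribute is simply absent on files Samba has never touched, so the function degrades to None in all of those cases:

```python
import os
import tempfile

def read_ntacl(path):
    """Try to read the NT ACL blob that vfs_acl_xattr stores in the
    security.* extended-attribute namespace (the name at issue in
    this bug).

    Returns the raw bytes, or None when the attribute is absent, the
    platform lacks os.getxattr (it is Linux-only), or the filesystem
    refuses access to the security.* namespace.
    """
    if not hasattr(os, "getxattr"):
        return None
    try:
        return os.getxattr(path, "security.NTACL")
    except OSError:
        return None

# A freshly created file carries no NT ACL, so the lookup comes back empty:
with tempfile.NamedTemporaryFile() as f:
    print(read_ntacl(f.name))  # None
```

This also illustrates why the namespace choice matters: security.* attributes are mediated by the kernel's security framework, which is exactly the collision with future LSM modules that the bug is about.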
https://bugzilla.samba.org/show_bug.cgi?id=7065
When React Native was announced, the first reactions were overwhelmingly positive. Traditionally, when we think about web technologies in the mobile space, things like Apache Cordova spring to mind, which allow us to package web sites or web applications as applications for mobile platforms. React Native has been in private beta since the start of the year, and internally used at Facebook for a while before that.

The saying "Rome wasn't built in a day" generally applies to technology as well. Tools like "grunt" and platforms like "node" took years to mature. In the web world things are moving quickly, and with a huge number of frameworks, packages, and tools coming out every day, developers tend to get a little more sceptical.

A few tools are needed in order to develop for React Native. Since iOS is currently the only supported platform, we need Mac OS X and Xcode, at least version 6.3, as well as io.js, a Node.js compatible platform. To create a new React Native application, run react-native init HelloWorld; this creates a folder called "HelloWorld" in which the boilerplate code can be found.

The component is built up step by step. We start with a minimal component that only has a click handler:

var React = require('react-native');
var {View} = React;

class HelloThere extends React.Component {
  clickMe() {
    alert('Hi!');
  }
}

From there, more building blocks are destructured out of the react-native module one at a time: a Text element ({View, Text} = React), then a StyleSheet for styling ({View, Text, StyleSheet} = React), and finally a TouchableOpacity to make the component respond to taps, wiring the handler with onPress={this.clickMe.bind(this)}:

var React = require('react-native');
var {View, Text, StyleSheet, TouchableOpacity} = React;

Registering the Application

When developing with React for the browser, we just need to define a mount point, call "React.render", and let React do its magic.
In React Native, this is a little bit different.

    var React = require('react-native');
    var {View, Text, StyleSheet, TouchableOpacity, AppRegistry} = React;

    class HelloThere extends React.Component {
      clickMe() {
        alert('Hi!');
      }

      render() {
        return (
          <View style={styles.container}>
            <TouchableOpacity onPress={this.clickMe.bind(this)}>
              <View style={styles.box}>
                <Text>Hello {this.props.name}. Please click me.</Text>
              </View>
            </TouchableOpacity>
          </View>
        );
      }
    }

    var styles = StyleSheet.create({
      container: {
        flex: 1,
        justifyContent: 'center',
        alignItems: 'center'
      },
      box: {
        borderColor: 'red',
        backgroundColor: '#fff',
        borderWidth: 1,
        padding: 10,
        width: 100,
        height: 100
      }
    });

Note how React Native handles separation of concerns: the styles live right alongside the component code. After the terminal has popped up, our application will then show up in the simulator. Pressing CMD+D will show a development menu. Clicking on the box will then show an alert.

Tools: web developers are familiar with JSFiddle or JSBin, an online playground for quick web tests. There is a similar environment, RNPlay, which allows us to try out React Native in a web browser and see the result from a simulator streamed in the web browser.

Can I Use React Native? In its current state, a few things are still in fluctuation and not up to the expectation you might have from using other frameworks, mainly advanced gesture recognition. You might want to refrain from completely rewriting your mobile applications in React Native just yet. One of React's big advantages is that it does not impose a particular workflow on you. React Native is not a cross-platform development framework by any means; the philosophy behind it is "Learn once, write anywhere". While you might end up sharing some code across platforms, there will always be code you need to re-implement depending on the platform you are targeting. However, one thing is certain: React Native isn't going away.
Facebook has a massive stake in it, having multiple React Native powered applications in app stores. The community around React Native is growing, and with each new release there are more contributors, issues, and pull requests. If you like this and want to learn more about ReactJS - the key technology behind React Native - join us for a Toptal Academy session on August 29th.
https://www.toptal.com/ios/cold-dive-into-react-native-a-beginners-tutorial/
CC-MAIN-2017-43
refinedweb
672
58.58
On Mon, Mar 01, 2004 at 05:32:39PM -0800, Roland McGrath wrote:
> Here is a new version of the patch. Please let me know how it works for you.

(I've uploaded new versions of the kernel-patch and the kernel-image packages to 'deb ./')

This time, after formatting the partition, extracting my gnu.tar.bz2 with star went fine and I was able to boot into Hurd without any tweaks (except fixing /etc/fstab, as I extracted on a different box). The bootup was complete and successful and I was able to do some stuff like adding a new user or installing a Debian package. If you'd like to see any further testing, please tell me what I could do.

> I hope that was a test filesystem like mine was!

Yes. Marcus and Neal already applauded me for my cautiousness *g*

Two small remarks to your patch:

> Index: fs/ext2/Makefile

Why did you drop the Kconfig data?

> Index: fs/ext2/xattr.h
> ===================================================================
> RCS file: /home/roland/redhat/bkcvs/linux-2.5/fs/ext2/xattr.h,v
> retrieving revision 1.8
> diff -b -p -u -r1.8 xattr.h
> --- fs/ext2/xattr.h 13 May 2003 06:12:52 -0000 1.8
> +++ fs/ext2/xattr.h 13 Feb 2004 03:22:36 -0000
[...]
> @@ -135,6 +136,6 @@ exit_ext2_xattr(void)
> #;
-
> --- /dev/null 2003-06-05 09:19:08.000000000 -0700
> +++ fs/ext2/xattr_hurd.c 2004-02-28 15:11:57.000000000 -0800
> @@ -0,0 +1,260 @@

I had to add this '-' there, or else 'patch -p0' got confused and produced this reject:

**************
*** 135,140 ****
  # endif /* CONFIG_EXT2_FS_XATTR */
  extern struct ext2_xattr_handler ext2_xattr_user_handler;
  extern struct ext2_xattr_handler ext2_xattr_trusted_handler;
  extern struct ext2_xattr_handler ext2_xattr_security_handler;
- -- /dev/null 2003-06-05 09:19:08.000000000 -0700
--- 136,141 ----
  #;

thanks,
Michael

--
<neal> you are crazy to even mount them!!!
<marcus> azeem haha :)
<marcus> azeem: You are a mad man
http://lists.gnu.org/archive/html/bug-hurd/2004-03/msg00029.html
CC-MAIN-2015-35
refinedweb
312
67.45
How to fix “the name ‘InitializeComponent’ does not exist in the current context” in Silverlight or WPF

The error messages: you’ve made some changes to your Silverlight application and all of a sudden your MainPage.xaml will not compile because InitializeComponent does not exist in the current context, or LinksStackPanel.Children doesn’t exist in the current context. This is a frustrating problem until you know what causes it. The root cause of the problem is that the x:Class directive in the “xaml” page somehow got lost or is not in sync with your code-behind file. Did you just copy a MainPage from another project, or a backed-up project? Copying the MainPage.xaml and the code-behind doesn’t seem to fix the problem by itself.

First, fix the x:Class reference in your MainPage.xaml

Make sure that your x:Class is named properly and that the namespace matches that of your application! If you have already checked this then don’t worry, there is one more thing that probably went wrong.

<UserControl x:Class="WordpressManagerPHP.MainPage" xmlns:sdk="" xmlns="" snipped the rest...

Second, fix the “Build Action” of your MainPage.xaml

This one took me several hours, and a few headaches, to find. I compared a working project to the non-working one and noticed that the “Build Action” of MainPage.xaml must have changed when manipulating the window. The Build Action was set to “Content”. The Build Action should be set to “Page”. Simply highlight the MainPage.xaml in your Solution Explorer and in the properties check out the Build Action. Once you fix this, you will no longer have problems with InitializeComponent, and it will fix problems with LinksStackPanel.Children not existing in the context.
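For reference, this Build Action switch can also be made directly in the project file. Below is a sketch of how the relevant item typically looks in a Silverlight/WPF .csproj (standard MSBuild item types; the exact metadata in your project may differ):

```xml
<!-- Broken state: the XAML is treated as plain content, so no
     MainPage.g.cs partial class is generated and InitializeComponent
     "does not exist in the current context". -->
<Content Include="MainPage.xaml" />

<!-- Fixed state: Build Action = Page runs the XAML through the
     markup compiler, regenerating the partial class. -->
<Page Include="MainPage.xaml">
  <Generator>MSBuild:Compile</Generator>
  <SubType>Designer</SubType>
</Page>
```

Editing the property in Solution Explorer, as described above, rewrites this entry for you.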
http://www.displacedguy.com/tech/silverlight-initializecomponent-does-not-exist-in-the-current-context/
CC-MAIN-2014-42
refinedweb
293
66.84
Fl_Widget
  |
  +----Fl_Box, Fl_Browser_, Fl_Button, Fl_Chart, Fl_Clock_Output, Fl_Free, Fl_Group, Fl_Input_, Fl_Menu_, Fl_Positioner, Fl_Progress, Fl_Timer, Fl_Valuator

#include <FL/Fl_Widget.H>

All "property" accessing methods, such as color(), parent(), or argument(), are implemented as trivial inline functions and thus are as fast and small as accessing fields in a structure. Unless otherwise noted, the property setting methods such as color(n) or label(s) are also trivial inline functions, even if they change the widget's appearance. It is up to the user code to call redraw() after these.

Creates a widget at the given position and size. The Fl_Widget constructor is protected, but all derived widgets have a matching public constructor. It takes a value for x(), y(), w(), h(), and an optional value for label().

Destroys the widget. Destroying single widgets is not very common, and it is your responsibility to either remove() them from any enclosing group or destroy that group immediately after destroying the children. You almost always want to destroy the parent group instead, which will destroy all of the child widgets and groups in that group.

Fl_Widget::active() returns whether the widget is active. Fl_Widget::active_r() returns whether the widget and all of its parents are active. Changing the active state sends FL_ACTIVATE or FL_DEACTIVATE to the widget if active_r() is true. Currently you cannot deactivate Fl_Window widgets.

Gets or sets the label alignment, which controls how the label is displayed next to or inside the widget. The default value is FL_ALIGN_CENTER, which centers the label inside the widget. The value can be any of these constants bitwise-OR'd together:

Gets or sets the current user data (long) argument that is passed to the callback function. This is implemented by casting the long value to a void * and may not be portable on some machines.

Gets or sets the box type for the widget, which identifies a routine that draws the background of the widget.
See Box Types for the available types. The default depends on the widget, but is usually FL_NO_BOX or FL_UP_BOX.

Gets or sets the current callback function for the widget. Each widget has a single callback.

Fl_Widget::changed() is a flag that is turned on when the user changes the value stored in the widget. This is only used by subclasses of Fl_Widget that store values, but is in the base class so it is easier to scan all the widgets in a panel and do_callback() on the changed ones in response to an "OK" button. Most widgets turn this flag off when they do the callback, and when the program sets the stored value.

Hides the widget; you must still redraw the parent to see a change in the window. Normally you want to use the hide() method instead.

Disables keyboard focus navigation with this widget; normally, all widgets participate in keyboard focus navigation.

Gets or sets the background color of the widget. The color is passed to the box routine. The color is either an index into an internal table of RGB colors or an RGB color value generated using fl_rgb_color(). The default for most widgets is FL_BACKGROUND_COLOR. See the enumeration list for predefined colors. Use Fl::set_color() to redefine colors. The two color form sets both the background and selection colors. See the description of the selection_color() method for more information.

Returns 1 if b is a child of this widget, or is equal to this widget. Returns 0 if b is NULL.

Sets the current label. Unlike label(), this method allocates a copy of the label string instead of using the original string pointer.

The first version returns non-zero if draw() needs to be called. The damage value is actually a bit field that the widget subclass can use to figure out what parts to draw. The last two forms set the damage bits for the widget; the last form damages the widget within the specified bounding box.

The default callback, which puts a pointer to the widget on the queue returned by Fl::readqueue().
You may want to call this from your own callback. Gets or sets the image to use as part of the widget label. This image is used when drawing the widget in the inactive state. Causes a widget to invoke its callback function, optionally with arbitrary arguments. Handles the specified event. You normally don't call this method directly, but instead let FLTK do it when the user interacts with the widget. When implemented in a new widget, this function must return 0 if the widget does not use the event or 1 if it uses the event. Gets or sets the image to use as part of the widget label. This image is used when drawing the widget in the active state. Returns 1 if this widget is a child of a, or is equal to a. Returns 0 if a is NULL. Get or set the current label pointer. The label is shown somewhere on or next to the widget. The passed pointer is stored unchanged in the widget (the string is not copied), so if you need to set the label to a formatted value, make sure the buffer is static, global, or allocated. The copy_label() method can be used to make a copy of the label string automatically. Gets or sets the label color. The default color is FL_FOREGROUND_COLOR. Gets or sets the font to use. Fonts are identified by small 8-bit indexes into a table. See the enumeration list for predefined typefaces. The default value uses a Helvetica typeface (Arial for Microsoft® Windows®). The function Fl::set_font() can define new typefaces. Gets or sets the font size in pixels. The default size is 14 pixels. Gets or sets the labeltype which identifies the function that draws the label of the widget. This is generally used for special effects such as embossing or for using the label() pointer as another form of data such as an icon. The value FL_NORMAL_LABEL prints the label as plain text. output() means the same as !active() except it does not change how the widget is drawn. The widget will not receive any events. 
This is useful for making scrollbars or buttons that work as displays rather than input devices.

Returns a pointer to the parent widget. Usually this is a Fl_Group or Fl_Window. Returns NULL if the widget has no parent.

Marks the widget as needing its draw() routine called.

Marks the widget or the parent as needing a redraw for the label area of a widget.

Changes the size or position of the widget, which may require extensive recalculations in some widgets. position(x,y) is a shortcut for resize(x,y,w(),h()), and size(w,h) is a shortcut for resize(x(),y(),w,h).

Gets or sets the selection color, which is defined for Forms compatibility and is usually used to color the widget when it is selected, although some widgets use this color for other purposes. You can set both colors at once with color(a,b).

This is the same as (active() && !output() && visible()) but is faster.

Gets or sets a string of text to display in a popup tooltip window when the user hovers the mouse over the widget. The string is not copied, so make sure any formatted string is stored in a static, global, or allocated buffer. If no tooltip is set, the tooltip of the parent is inherited. Setting a tooltip for a group and setting no tooltip for a child will show the group's tooltip instead. To avoid this behavior, you can set the child's tooltip to an empty string ("").

Returns the widget type value, which is used for Forms compatibility and to simulate RTTI.

Returns the position of the upper-left corner of the widget in its enclosing Fl_Window (not its parent if that is not an Fl_Window), and its width and height.

Gets or sets the current user data (void *) argument that is passed to the callback function.

Returns a pointer to the primary Fl_Window widget. Returns NULL if no window is associated with this widget. Note: for an Fl_Window widget, this returns its parent window (if any), not this window.

Makes the widget visible; you must still redraw the parent widget to see a change in the window. Normally you want to use the show() method instead.
Enables keyboard focus navigation with this widget; note, however, that this will not necessarily mean that the widget will accept focus, but for widgets that can accept focus, this method enables it if it has been disabled.

An invisible widget never gets redrawn and does not get events. The visible() method returns true if the widget is set to be visible. The visible_r() method returns true if the widget and all of its parents are visible. A widget is only visible if visible() is true on it and all of its parents. Changing it will send FL_SHOW or FL_HIDE events to the widget. Do not change it if the parent is not visible, as this will send false FL_SHOW or FL_HIDE events to the widget. redraw() is called if necessary on this or the parent.

Modifies keyboard focus navigation; see set_visible_focus() and clear_visible_focus(). The second form returns non-zero if this widget will participate in keyboard focus navigation.

Fl_Widget::when() is a set of bitflags used by subclasses of Fl_Widget to decide when to do the callback. If the value is zero then the callback is never done. Other values are described in the individual widgets. This field is in the base class so that you can scan a panel and do_callback() on all the ones that don't do their own callbacks in response to an "OK" button.

The effect of the align() function is not clear from just the names of the arguments. Pictures (or word pictures) are needed to explain what these names mean. It is particularly unclear how these interact with an image drawn on the button, as adding an image makes the label move.

It is not documented what the second argument in this call is for: void Fl_Widget::callback(Fl_Callback*, void* = 0). I assume that it serves the same purpose as the user_data(void*) function, but there is no indication that this is true nor explanation of how the two might interact (does one take precedence?).
> I assume that it serves the same purpose as the user_data(void*)

Yes.

> does one take precedence?

Whichever method is called last takes precedence. In other words, callback(foo,(void*)data) will stay in effect until changed with user_data(newdata);
https://www.fltk.org/documentation.php/doc-1.1/Fl_Widget.html
CC-MAIN-2018-51
refinedweb
1,755
65.83
cannot operate in near-earth-orbit. Luckily for those guys, Elon Musk can deliver their presents via SpaceX's delivery rocket instead. A large number of the remaining people are not visited by Santa at all, including people of various religious affiliations or beliefs, and we estimate that each person receives four presents on average (some rich ones get much more, some poor just get one if they are lucky). Additionally, we estimate that there is an average distance of 500 m (i.e. 0.5 km or 0.3 miles) between each household, that Santa's average sleigh speed is 4 m/s (i.e. 14 km/h or 9 mph), and that it takes Santa 1 second to drop off each present. It should be noted that the actual procedure of handing over presents differs from country to country. In the US, Santa is more likely to drop the packages down the chimney and curve the packages so that they always backspin and land in socks. Deliveries are spread over the different time zones, so Santa can only work 24 hours.

The Java Code

In Java we can model the problem like this:

public class NumberOfSantas {

    private static final long NUMBER_OF_PERSONS = 7_472_085_518L / 3;
    private static final int PERSONS_PER_HOUSHOLD = 3;
    private static final long NUMBER_OF_HOUSHOLDS = NUMBER_OF_PERSONS / PERSONS_PER_HOUSHOLD;
    private static final int PRESENTS_PER_PERSON = 4;
    private static final int AVERAGE_DISTANCE_BETWEEN_HOUSHOLDS = 500; // m (i.e. 0.5 km or 0.3 miles)
    private static final int AVERAGE_SLEIGH_SPEED = 4; // 4 m/s (i.e. 14 km/h or 9 mph)
    private static final int TIME_TO_DELIVER_PRESENTS = 1; // 1 s
    private static final int WORKING_HOURS = 24;

    public static void main(String[] args) {
        long totalTimeS = NUMBER_OF_HOUSHOLDS * (AVERAGE_DISTANCE_BETWEEN_HOUSHOLDS / AVERAGE_SLEIGH_SPEED)
            + TIME_TO_DELIVER_PRESENTS * NUMBER_OF_PERSONS * PRESENTS_PER_PERSON;
        long noSantas = totalTimeS / (WORKING_HOURS * 3600);
        System.out.format("There are %,d Santas in the world", noSantas);
    }
}

Evidently, there is a large number of Santas out there, because the program above outputs the following:

There are 1,316,455 Santas in the world

(Ho, ho)^1.3 M... Follow the Java Holiday Calendar 2016 with small tips and tricks all the way through the winter holiday season. I am contributing to open-source Speedment, a stream based ORM tool and runtime. Please check it out on GitHub.
http://minborgsjavapot.blogspot.com/2016/12/day-24-java-holiday-calendar-2016-how.html
CC-MAIN-2018-34
refinedweb
363
52.39
On Tue, 18 Nov 2008, Eric Dumazet wrote:
> >  *
> >  * Compare two ethernet addresses, returns 0 if equal
> >  */
> > static inline unsigned compare_ether_addr(const u8 *addr1, const u8 *addr2)
> > {
> > 	const u16 *a = (const u16 *) addr1;
> > 	const u16 *b = (const u16 *) addr2;
> >
> > 	BUILD_BUG_ON(ETH_ALEN != 6);
> > 	return ((a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2])) != 0;

Btw, at least on some Intel CPU's, it would be faster to do this as a 32-bit xor and a 16-bit xor. And if we can know that there is always 2 bytes at the end (because of how the thing was allocated), it's faster still to do it as a 64-bit xor and a mask. And that's true even if the addresses are only 2-byte aligned.

The code that gcc generates for "memcmp()" for a constant-size small data thing is sadly crap. It always generates a "rep cmpsb", even if the size is something really trivial like 4 bytes, and even if you compare for exact equality rather than a smaller/greater-than. Gaah.

Linus
http://lkml.org/lkml/2008/11/17/475
crawl-002
refinedweb
179
73.51
A struct describing heap or stack allocations.

#include <CodeGen_Posix.h>

A struct describing heap or stack allocations. Definition at line 38 of file CodeGen_Posix.h.

The memory. Definition at line 40 of file CodeGen_Posix.h.

Destructor stack slot for this allocation. Definition at line 43 of file CodeGen_Posix.h.

Function to accomplish the destruction. Definition at line 46 of file CodeGen_Posix.h.

Pseudostack slot for this allocation. Non-null for allocations of type Stack with dynamic size. Definition at line 50 of file CodeGen_Posix.h.

The (Halide) type of the allocation. Definition at line 53 of file CodeGen_Posix.h.

How many bytes this allocation is, or 0 if not constant. Definition at line 57 of file CodeGen_Posix.h.

How many bytes of stack space used. 0 implies it was a heap allocation. Definition at line 61 of file CodeGen_Posix.h.

A unique name for this allocation. May not be equal to the Allocate node name in cases where we detect multiple Allocate nodes can share a single allocation. Definition at line 66 of file CodeGen_Posix.h.
https://halide-lang.org/docs/struct_halide_1_1_internal_1_1_code_gen___posix_1_1_allocation.html
CC-MAIN-2021-21
refinedweb
177
63.46
A revised Collection class similar to the VB6 Collection is defined in the Microsoft.VisualBasic namespace. Visual Basic .NET has added several new Abstract Data Types (ADTs) to ultimately replace the Collection class. The names of the new classes were enumerated in Chapter 2, including an example of using the Stack class. The subsections that follow demonstrate ArrayList, HashTable, SortedList, and Queue.

The ArrayList class is a designed replacement for an unadorned array. Although arrays in Visual Basic .NET are classes, they are designed to work similarly to VB6 arrays. This means you have to resize the arrays and preserve elements manually, which is likely to yield code that is littered with ReDim statements. Dynamic array sizing is semantic behavior that belongs to an array. In an object-oriented programming language, you would expect such behavior to be defined as part of an array class. Combining storage and capacity management in a single class is what ArrayList has to offer. You will find ArrayList easier to use than a System.Array or the VB6-style collection. Table 3.1 lists the members of ArrayList. The next section demonstrates some of the characteristics of ArrayList.

The biggest benefit of ArrayList over System.Array is that ArrayList has dynamic capacity management built in. When you use System.Array, you have to make sure there is enough room for an element. If not, you have to add capacity to the array with ReDim. On the other hand, if you use the ArrayList Add, AddRange, Insert, or InsertRange methods, the capacity is adjusted as needed. ArrayList has significant advantages over VB6 arrays but fewer advantages over Visual Basic .NET System.Array; however, capacity management is enough of a reason to prefer ArrayList over System.Array. Many of the methods in ArrayList are similar to methods in Array (see "Using Array Methods"); therefore, we will not repeat examples of those methods here.
Capacity management and adding and managing a range of elements are additional features offered in ArrayList. Let's take a look at examples of using these behaviors. Listing 3.4 demonstrates behaviors of ArrayList that are not found in System.Array.

 1: Sub DemoSetRange()
 2:
 3:   Dim MyArray As New ArrayList()
 4:
 5:   Dim Array1() As Integer = {0, 1, 2, 3, 4, 5}
 6:
 7:   MyArray.InsertRange(0, Array1)
 8:
 9:   Dim I As Integer
10:
11:   For I = 0 To MyArray.Count - 1
12:     Debug.WriteLine(MyArray(I))
13:   Next
14:
15:   Debug.WriteLine("Contains 3? " & MyArray.Contains(3))
16:
17: End Sub

The example declares an ArrayList named MyArray using one of three possible constructors. The constructor on line 3 takes no parameters. Line 5 allocates a System.Array and initializes the members to the integers 0 through 5. Line 7 demonstrates ArrayList.InsertRange. InsertRange takes a start index and an ICollection object. System.Array implements the ICollection interface, so System.Array is a suitable argument for InsertRange. In fact, any class that implements ICollection (HashTable, Stack, Queue, and SortedList are other examples) is a suitable argument for InsertRange. Lines 11 through 13 demonstrate that elements of an ArrayList can be accessed as if it were a simple array. (Of course you can use the new Enumerator behavior that you saw in Listing 3.2, as well.) Line 15 demonstrates the Contains method. Contains takes an object, which can be a literal integer like 3, and returns a Boolean indicating whether or not the object is in the ArrayList. In the example, Option Strict is On so the Boolean returned by Contains is printed using the ToString method of the Boolean type.

Hash tables use key and value pairs. The key is processed through a hashing function that is designed to generate a unique value that is then used as an index into the hash table to the location containing the value. Hash tables strike a balance between resource usage and speed.
Instead of probing each element for equality to determine whether objects are equal, simply processing the key provides an index to the location that contains the associated value. There is a significant amount of research on hash tables, hashing functions, and key-collision avoidance. (You may have studied some of them in college if you were a computer science major, but the .NET Framework provides a HashTable implemented for you.) The System.Collections.HashTable class implements a hash table available for you to use. HashTable works externally much like a data dictionary. Provide a unique key and an associated value, and the HashTable takes care of the rest.

Suppose you were storing personnel records in memory for quick access. You might key each record on the Social Security number, and the value would be the personnel record. (For the demonstration, we will simply store a person's name to represent the personnel record.) Listing 3.5 declares a new instance of a hash table and adds some unique elements to the hash table keyed on pseudo-Social Security numbers. The values stored in the hash table represent the data associated with the keys. (The key is the first argument and the value is the second argument of the Add method.)

 1: Sub DemoHashTable()
 2:   Dim Hash As New Hashtable()
 3:   Hash.Add("555-55-5555", "Frank Arndt")
 4:   Hash.Add("555-55-5556", "Mary Bonnici")
 5:   Hash.Add("555-55-5557", "Paul Kimmel")
 6:
 7:   Dim Enumerator As IDictionaryEnumerator = Hash.GetEnumerator
 8:
 9:   While (Enumerator.MoveNext())
10:     Debug.WriteLine(Enumerator.Key & "=" & _
11:       Enumerator.Value)
12:   End While
13:
14: End Sub

Tip: Enumerator objects are read-only. To modify elements of a collection like a HashTable, you can use a For Next loop, indexing the elements of the collection directly.

HashTable uses an IDictionaryEnumerator object to iterate over elements. Line 7 declares an enumerator and lines 9 through 12 iterate over each element displaying the key and value pairs.
(.Key and .Value were not defined in the IEnumerator interface; they were added in the IDictionaryEnumerator.)

The SortedList ADT is based on the dictionary interface. Recall from the last section that a dictionary is a collection of key (or name) and value pairs. SortedList maintains two internal arrays. One keeps track of keys and the second keeps track of values. As with a hash table, the key values of a sorted list must be unique. SortedList has methods similar to ArrayList, in addition to the key and value pairs introduced with HashTable. SortedList is defined in the System.Collections namespace. For more information on SortedList, look in the MSDN help files.

Queue data structures are also referred to as First In First Out (FIFO) data structures. Think of a queue as a line. There is a first in line, a last in line, and everything in between. Just as Stacks have a language for adding and removing elements (Push and Pop), Queue uses the notion of enqueuing and dequeuing. All of the collection-based ADTs work with Objects; hence to enqueue means to add an Object to the queue and to dequeue means to remove an Object from the queue. Queues are a natural choice when you want the first item put into a collection to be the first item out. Listing 3.6 demonstrates basic queue behavior.

 1: Sub DemoQueue()
 2:
 3:   Dim Q As New Queue()
 4:   Q.Enqueue("One")
 5:   Q.Enqueue("Two")
 6:
 7:   While (Q.Count > 0)
 8:     Debug.WriteLine(Q.Dequeue)
 9:   End While
10:
11: End Sub

The output from Listing 3.6 is One and Two. The elements are dequeued in exactly the same order in which they were enqueued. Queues implement several of the same interfaces as other ADTs defined in the System.Collections namespace, like ICollection, IEnumerable, and ICloneable. For this reason queues have many of the same operations by name as other ADTs. For general purpose in-memory storage, ArrayList will suffice.
For key and value pairs, use HashTable or SortedList. If you want objects stored and retrieved in the same order, use Queue, and if you want the last element put into a collection to be the first one out, use Stack. The fundamental behaviors of the collection classes are identical. The semantic operations are consistent with the type of data structure.
https://flylib.com/books/en/1.488.1.52/1/
CC-MAIN-2019-26
refinedweb
1,384
58.58
As a result, attempting to read variable arguments without reinitializing the va_list can have unexpected behavior. According to the C Standard, 7.16, paragraph 3 [ISO/IEC 9899:2011],

If access to the varying arguments is desired, the called function shall declare an object (generally referred to as ap in this subclause) having type va_list. The object ap may be passed as an argument to another function; if that function invokes the va_arg macro with parameter ap, the value of ap in the calling function is indeterminate and shall be passed to the va_end macro prior to any further reference to ap.253

253) It is permitted to create a pointer to a va_list and pass that pointer to another function, in which case the original function may take further use of the original list after the other function returns.

Noncompliant Code Example

This noncompliant code example attempts to check that none of its variable arguments are zero by passing a va_list to helper function contains_zero(). After the call to contains_zero(), the value of ap is indeterminate.

#include <stdarg.h>
#include <stdio.h>

int contains_zero(size_t count, va_list ap) {
  for (size_t i = 1; i < count; ++i) {
    if (va_arg(ap, double) == 0.0) {
      return 1;
    }
  }
  return 0;
}

int print_reciprocals(size_t count, ...) {
  va_list ap;
  va_start(ap, count);

  if (contains_zero(count, ap)) {
    va_end(ap);
    return 1;
  }

  for (size_t i = 0; i < count; ++i) {
    printf("%f ", 1.0 / va_arg(ap, double));
  }

  va_end(ap);
  return 0;
}

Compliant Solution

The compliant solution modifies contains_zero() to take a pointer to a va_list. It then uses the va_copy macro to make a copy of the list, traverses the copy, and cleans it up. Consequently, the print_reciprocals() function is free to traverse the original va_list.
#include <stdarg.h>
#include <stdio.h>

int contains_zero(size_t count, va_list *ap) {
  va_list ap1;
  va_copy(ap1, *ap);
  for (size_t i = 1; i < count; ++i) {
    if (va_arg(ap1, double) == 0.0) {
      return 1;
    }
  }
  va_end(ap1);
  return 0;
}

int print_reciprocals(size_t count, ...) {
  int status;
  va_list ap;
  va_start(ap, count);

  if (contains_zero(count, &ap)) {
    printf("0 in arguments!\n");
    status = 1;
  } else {
    for (size_t i = 0; i < count; i++) {
      printf("%f ", 1.0 / va_arg(ap, double));
    }
    printf("\n");
    status = 0;
  }

  va_end(ap);
  return status;
}

Risk Assessment

Reading variable arguments using a va_list that has an indeterminate value can have unexpected results.

Automated Detection

Related Vulnerabilities

Search for vulnerabilities resulting from the violation of this rule on the CERT website.

9 Comments

David Svoboda: ap? some variable in the standard?

Unknown User (astromme): I've revised the title and introduction; hopefully they are clearer now. I also changed the CS so that the copy is made within contains_zeroes, because that makes it clearer how contains_zeroes is meant to be used.

David Svoboda:

Martin Sebor: I would recommend avoiding reliance on semantics like those implemented in contains_zero() in the compliant solution. Instead, callers should defensively assume that the callee modifies the va_list argument and pass to it a copy before manipulating the original. (I.e., just as when invoking one of the C standard library functions that take a va_list argument, such as vsprintf().) Incidentally, this guideline covers undefined behavior 129. Besides this problem, UB 131 and UB 134 are common bugs as well. It would be nice to extend the scope of this guideline and discuss them as well.

David Svoboda:

Aaron Ballman: va_copy does not exist in MSVC, so the CCE is not particularly useful there.

David Svoboda: va_copy is in C11 (and probably C99 too). I suppose a Windows-specific CS would be in order here. Then again, how heavily are varargs used in Windows? I'd guess not much.
Aaron Ballman: In C code, I would guess (based on nothing at all) they're used about the same amount as on other platforms. I've seen varargs used in a handful of code bases myself. I'll think about a CS for Windows and see if I can come up with one.

Aaron Ballman: So this is not portably possible in Visual Studio until Visual Studio 2013 is released. On some platforms you can simply assign the va_list, but on other platforms (notably ARM and x64) it may not be safe. Once 2013 is released, we should revise this rule.
https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87152285
01 February 2011 14:05 [Source: ICIS news]

TORONTO (ICIS)--Eastman has completed the divestment of its US polyethylene terephthalate (PET) business to DAK Americas.

Eastman chief financial officer Curt Espeland said the deal closed on Monday, resulting in cash proceeds of about $600m (€438m) before transaction costs. Eastman could use the proceeds for share buy-backs or smaller acquisitions, either in the US or in emerging markets, he told analysts during Eastman's 2010 fourth-quarter results conference call.

CEO Jim Rogers added: "There's more [acquisition] opportunities below $500m than over $500m."

"We want to be really prudent ... looking for the right acquisition," he said.

The acquisition by DAK followed its purchase in 2007 of Eastman's PET operations.

($1 = €0.73)
http://www.icis.com/Articles/2011/02/01/9431357/eastman-completes-pet-divestiture-to-dak-americas.html
I revisited the typechecking of binding groups, partly to fix the shortcomings of SPECIALISE pragmas. On the way, I implemented the refined dependency analysis described by Mark Jones in "Typing Haskell in Haskell". As a result, this "Contexts differ in length" problem has gone away. Robert, would you like to give it a try? Your example below is now part of GHC's test suite. It'll be in the next major release, but not in 6.4.

Simon

| -----Original Message-----
| From: Robert van Herk [mailto:rherk at cs.uu.nl]
| Sent: 24 May 2005 13:31
| To: glasgow-haskell-users at haskell.org; Simon Peyton-Jones
| Subject: Contexts differ in length
|
| Hi all,
|
| A while ago I sent an email to the glasgow haskell users mailing list to
| explain how the "Contexts differ in length" feature (or bug :-))
| restricted me in writing a Haskell application. I was hoping for a
| reply; however, I didn't receive one (yet).
|
| Therefore, I will explain the problem again:
|
| I am writing (for my master's thesis project) a web development framework
| in Haskell, with features somewhat comparable to Apple's WebObjects.
| Amongst others, session state and database interaction is transparent, etc.
|
| In my framework, functions that generate HTML are called WFComponents.
| These functions are monadic, since they can generate IO (because they may
| do database interaction etc.). Also, components can generate links to
| other components. However, since component a may generate a link to
| component b (so that when the user clicks that link component b will be
| evaluated) and component b may link to component a, there will occur
| errors when I try to do this, since the contexts of component a and b
| may not be the same.
|
| A minimal example of this will be something like:
|
| {-# OPTIONS -fglasgow-exts #-}
|
| module Main where
| import Data.IORef
|
| class MyReader r v | r -> v where
|   myRead :: r -> IO v
|
| data R v = R (IORef v)
| instance MyReader (R v) v where
|   myRead (R v) =
|     do v <- readIORef v
|        return v
|
| a :: IO ()
| a =
|   do r <- createReader
|      b r
|
| b :: MyReader r Int => r -> IO ()
| b r =
|   do i <- myRead r
|      if i > 10
|        then a
|        else putStrLn (show i)
|
| createReader :: IO (R Int)
| createReader =
|   do ref <- newIORef 0
|      return (R ref)
|
| A real example will be a bit more complicated, but this is basically
| what I need to do and currently am not able to. Of course, when needed,
| I can show you the real example. Somewhere in the history of this
| mailing list I read that people have had this problem before, but only
| in toy programs. However, I am experiencing this problem currently in
| something that is not a toy program. Therefore, my question is if it
| would be possible to lift this constraint on the language, and also, if
| the developers of GHC are currently planning to do this...
|
| Thanks,
|
| Robert
http://www.haskell.org/pipermail/glasgow-haskell-users/2005-July/008786.html
Marian Marinov <kernel@kyup.com> writes:
> On 04/04/2018 07:02 PM, Eric W. Biederman wrote:
>> Angel Shtilianov <kernel@kyup.com> writes:
>>> Currently the same boot_id is reported for all containers running
>>> on a host node, including the host node itself. Even after restarting
>>> a container it will still have the same persistent boot_id.
>>>
>>> This can cause troubles in cases where you have multiple containers
>>> from the same cluster on one host node. The software inside each
>>> container will get the same boot_id and thus fail to join the cluster,
>>> after the first container from the node has already joined.
>>>
>>> UTS namespace on the other hand keeps the machine specific data, so it
>>> seems to be the correct place to move the boot_id and instantiate it,
>>> so each container will have a unique id for its own boot lifetime, if
>>> it has its own uts namespace.
>>
>> Technically this really needs to use the sysctl infrastructure that
>> allows you to register different files in different namespaces. That
>> way the value you read from proc_do_uuid will be based on who opens the
>> file, not on who is reading the file.
>
> Ok, so would you accept a patch that reimplements boot_id through the sysctl infrastructure?

Assuming I am convinced this makes sense to do on the semantic level.

>> Practically, why does a bind mount on top of boot_id work? What makes
>> this a general problem worth solving in the kernel? Why is hiding the
>> fact that you are running the same instance of the same kernel a useful
>> thing? That is the reality.
>
> The problem is that the distros do not know that they are in a
> container and don't know that they have to bind mount something on top
> of boot_id. You need to tell Docker, LXC/LXD and all other container
> runtimes that they need to do this bind mount for boot_id.

Yes. Anything like this is the responsibility of the container runtime one way or another. Magic to get around fixing the small set of container runtimes you care about is a questionable activity.

> I consider this to be a general issue that lacks a good general
> solution in userspace. The kernel is providing this boot_id
> interface, but it is giving wrong data in the context of containers.

I disagree. Namespaces have never been about hiding that you are on a given machine or a single machine. They are about encapsulating global identifiers so that process migration can happen, and so that processes can be better isolated. The boot_id is not an identifier of an object in the kernel at all, and it is designed to be truly globally unique across time and space, so I am not at all certain that it makes the least bit of sense to do anything with a boot_id.

That said, my memory of boot_id is that it was added so that emacs (and related programs) could create lock files on nfs and could tell if the current machine owns the file, and if so be able to tell if the owner of the lock file is alive.

So there is an argument to be made that boot_id is too coarse. That argument suggests that boot_id is a pid_namespace property.

I have not looked at the users of boot_id, and I don't have a definition of boot_id that makes me think it is too coarse.

If you can provide a clear description of what the semantics are and what they should be for boot_id, showing how boot_id fits into a namespace and making it clear what should happen with checkpoint/restart, we can definitely look at changing how the kernel supports boot_id.

The reason I suggested the bind mount is that there are lots of cases where people want to lie to applications about the reality of what is going on for whatever reason, and we leave those lies to userspace. Things like changing the contents of /proc/cpuinfo.

> Proposing to fix this problem in userspace seems like ignoring the
> issue. You could have said to the Consul guys that they should
> simply stop using boot_id, because it doesn't work correctly on
> containers.

I don't know the Consul guys. From a quick google search I see that Consul is an open source project that aims to be distributed and highly available. It seems a reasonable case to look at to motivate changes to boot_id.

That said, if I want to be highly available I would find every node having the same boot_id to be very worrying, and very useful. It allows detecting if no hardware redundancy is present in a situation. That certainly seems like a good thing.

If you just want to test Consul, then hacking boot_id with a bind mount seems the right thing. If you really want to run Consul in production I am curious to know how removing the ability to detect if you are on the same kernel as another piece of Consul is a good thing.

Eric
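For readers following along, the value under discussion lives at /proc/sys/kernel/random/boot_id. This small sketch (Python for brevity; not from the thread) reads it, and on a stock kernel every container that sees the host's procfs, absent a bind mount over the file, observes the same UUID:

```python
import os
import re

BOOT_ID_PATH = "/proc/sys/kernel/random/boot_id"
UUID_RE = r"[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}"

def read_boot_id(path=BOOT_ID_PATH):
    """Return the kernel's boot_id as a UUID string, or None if
    procfs is unavailable (e.g. on a non-Linux system)."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        boot_id = f.read().strip()
    # boot_id is a random UUID generated once per kernel boot.
    assert re.fullmatch(UUID_RE, boot_id)
    return boot_id

if __name__ == "__main__":
    print(read_boot_id())
```

A container runtime that wants per-container boot_ids without any kernel change can bind-mount a file containing a fresh UUID over this path, which is the userspace workaround Eric suggests.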
https://lkml.org/lkml/2018/4/4/910
Sometimes, suspending execution of a thread is useful. For example, a separate thread can be used to display the time of day. If the user doesn't want a clock, then its thread can be suspended. Whatever the case, suspending a thread is a simple matter. Once suspended, restarting the thread is also a simple matter.

The mechanisms to suspend, stop, and resume threads differ between early versions of Java, such as Java 1.0, and modern versions, beginning with Java 2. Although you should use the modern approach for all new code, you still need to understand how these operations were accomplished for earlier Java environments. For example, you may need to update or maintain older, legacy code. You also need to understand why a change was made. For these reasons, the next section describes the original way that the execution of a thread was controlled, followed by a section that describes the modern approach.

Prior to Java 2, a program used suspend(), resume() and stop(), which are methods defined by Thread, to pause and restart the execution of a thread. They have the forms shown below:

final void suspend()
final void resume()
final void stop()

Once a thread has been stopped, it cannot be restarted using resume(). Example as follows:

class NewThread3 implements Runnable {
    String name; // name of thread
    Thread t;

    NewThread3(String threadname) {
        name = threadname;
        t = new Thread(this, name);
        System.out.println("New thread: " + t);
        t.start(); // Start the thread
    }

    // This is the entry point for the thread.
    public void run() {
        try {
            for (int i = 15; i > 0; i--) {
                System.out.println(name + ": " + i);
                Thread.sleep(200);
            }
        } catch (InterruptedException e) {
            System.out.println(name + " interrupted.");
        }
        System.out.println(name + " exiting.");
    }
}

public class SuspendResume {
    public static void main(String args[]) {
        NewThread3 ob1 = new NewThread3("One");
        NewThread3 ob2 = new NewThread3("Two");
        try {
            Thread.sleep(1000);
            ob1.t.suspend();
            System.out.println("Suspending thread One");
            Thread.sleep(1000);
            ob1.t.resume();
            System.out.println("Resuming thread One");
            ob2.t.suspend();
            System.out.println("Suspending thread Two");
            Thread.sleep(1000);
            ob2.t.resume();
            System.out.println("Resuming thread Two");
        } catch (InterruptedException e) {
            System.out.println("Main thread interrupted.");
        }
        // Wait for the threads to finish.
        try {
            System.out.println("Waiting for threads to finish.");
            ob1.t.join();
            ob2.t.join();
        } catch (InterruptedException e) {
            System.out.println("Main thread interrupted.");
        }
        System.out.println("Main thread exiting.");
    }
}

Output

New thread: Thread[One,5,main]
New thread: Thread[Two,5,main]
One: 15
Two: 15
Two: 14
One: 14
One: 13
Two: 13
Two: 12
One: 12
Two: 11
One: 11
Two: 10
Suspending thread One
Two: 9
Two: 8
Two: 7
Two: 6
One: 10
Resuming thread One
Suspending thread Two
One: 9
One: 8
One: 7
One: 6
One: 5
Two: 5
Resuming thread Two
Waiting for threads to finish.
Two: 4
One: 4
One: 3
Two: 3
One: 2
Two: 2
Two: 1
One: 1
Two exiting.
One exiting.
Main thread exiting.
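The text mentions that suspend(), resume() and stop() were replaced in Java 2 by a cooperative approach, but does not show it. A common replacement (a sketch, not from the original article) is a flag that the thread itself polls, with wait()/notify() handling the pause:

```java
// Cooperative suspend/resume/stop: the thread checks flags inside a
// synchronized block instead of being frozen from the outside, so it
// never stops while holding locks or leaving data half-updated.
class PausableWorker implements Runnable {
    private boolean suspended = false;
    private boolean stopped = false;

    public void run() {
        try {
            for (int i = 15; i > 0; i--) {
                System.out.println("tick: " + i);
                Thread.sleep(20);
                synchronized (this) {
                    while (suspended && !stopped) {
                        wait();        // releases the lock while paused
                    }
                    if (stopped) {
                        return;        // cooperative stop
                    }
                }
            }
        } catch (InterruptedException e) {
            System.out.println("interrupted");
        }
    }

    synchronized void mysuspend() {
        suspended = true;
    }

    synchronized void myresume() {
        suspended = false;
        notify();                      // wake the waiting thread
    }

    synchronized void mystop() {
        stopped = true;
        suspended = false;
        notify();
    }
}
```

The deprecated methods were dropped because suspend() can freeze a thread while it holds a monitor (deadlock risk) and stop() can abandon objects in an inconsistent state; the flag-based version always pauses or exits at a point the thread itself chose.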
http://www.loopandbreak.com/suspending-resuming-and-stopping-threads/
java.io.CharArrayReader class creates a character stream from a character array.

Declaration:

public class CharArrayReader extends Reader

Constructors:

- CharArrayReader(char[] char_array): Creates a CharArrayReader from the specified character array.
- CharArrayReader(char[] char_array, int offset, int maxlen): Creates a CharArrayReader from the specified portion of a character array.

Methods:

- read(): java.io.CharArrayReader.read() reads a single character.
  Syntax: public int read()
  Parameters: none
  Returns: the character read, as an integer in the range 0 to 65535, or -1 when the end of the stream is reached.

- read(char[] char_array, int offset, int maxlen): java.io.CharArrayReader.read(char[] char_array, int offset, int maxlen) reads characters into a portion of an array.
  Syntax: public int read(char[] char_array, int offset, int maxlen)
  Parameters:
    char_array: destination array
    offset: starting position at which to store characters
    maxlen: maximum number of characters to be read
  Returns: the number of characters read, or -1 when the end of the stream is reached.

- ready(): java.io.CharArrayReader.ready() checks whether the stream is ready to be read. A CharArrayReader is always ready to be read.
  Syntax: public boolean ready()
  Returns: true if the CharArrayReader is ready to be read.

- skip(long char_no): java.io.CharArrayReader.skip(long char_no) skips char_no characters. If char_no is negative, this method does nothing and returns 0.
  Syntax: public long skip(long char_no)
  Parameters:
    char_no: number of characters to be skipped
  Returns: the number of characters skipped
  Throws: IOException in case an I/O error occurs

Output:
char_array1 is ready
Use of read() method : G
Characters Skipped : 72
E
Characters Skipped : 70
S
Characters Skipped : 84
char_array2 is ready
Use of read(char[] char_array, int offset, int maxlen) method : EKS

- mark(int readLimit): java.io.CharArrayReader.mark(int readLimit) marks the current position in the stream, up to which characters have been read. Subsequent calls to reset() will reposition the stream to this point.
  Syntax: public void mark(int readLimit)
  Parameters:
    readLimit: the number of characters that may be read while still preserving the mark
  Throws: IOException in case an I/O error occurs

- markSupported(): java.io.CharArrayReader.markSupported() tells whether the stream supports the mark() method.
  Syntax: public boolean markSupported()
  Returns: true if the stream supports mark()

- reset(): java.io.CharArrayReader.reset() resets the stream to the most recent mark, or to the beginning if it has never been marked.
  Syntax: public void reset()
  Throws: IOException in case an I/O error occurs

- close(): java.io.CharArrayReader.close() closes the stream and releases the resources that were allotted to it.
  Syntax: public void close()
  Throws: IOException in case an I/O error occurs

Output:
Char : H
Char : E
Char : L
mark() method comes to play
Char : L
Char : O
Char : G
mark() supported
reset() invoked
Char : L
Char : O

This article is contributed by Mohit Gupta.
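The methods above can be exercised together in one short program (the array contents and class name here are illustrative, not from the article):

```java
import java.io.CharArrayReader;
import java.io.IOException;

public class CharArrayReaderDemo {
    public static void main(String[] args) throws IOException {
        char[] data = {'G', 'E', 'E', 'K', 'S'};
        CharArrayReader reader = new CharArrayReader(data);

        System.out.println(reader.ready());        // a CharArrayReader is always ready
        System.out.println((char) reader.read());  // G

        reader.mark(10);                           // remember position after 'G'
        System.out.println((char) reader.read());  // E
        reader.skip(2);                            // jump over the second 'E' and 'K'
        System.out.println((char) reader.read());  // S
        System.out.println(reader.read());         // -1: end of the stream

        reader.reset();                            // back to the marked position
        System.out.println((char) reader.read());  // E again

        reader.close();                            // release the buffer
    }
}
```

Note that unlike file-backed readers, close() here only releases the internal buffer reference; no OS resources are involved.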
https://www.geeksforgeeks.org/java-io-chararrayreader-class-java/?ref=rp
Securing Network Data Tutorial for Android In this Android tutorial, you’ll learn how to keep your information private by securing network data in transit. Version - Kotlin 1.2, Android 4.4, Android Studio 3 Security is an important part of development. Users expect you to protect their data from unintended prying eyes. Android has mechanisms in place by default for controlling who can view the information that your app collects on devices, but almost every app communicates over a network. You can keep your users’ information private by ensuring that your app is securing network data in transit. In this tutorial, you’ll secure a simple Android app named PetMed for veterinary clinics that exchange medical information over a network. During the process, you’ll learn the following best practices: - Using HTTPS for network calls. - Trusting a connection with certificate pinning. - Verifying the integrity of transmitted data. Note: This tutorial assumes that you’re already familiar with the basics of Android networking. If the concepts of networking are new to you, first read through our Android Networking Tutorial. Getting Started Download and unzip the materials for this tutorial using the Download materials button at the top or bottom of this page. Open the starter project in Android Studio 3.1.3 or higher, and navigate to the PetRequester.kt file. Right now, the retrievePets() method is making a simple call to retrieve JSON data for a list of pets and their medical data. Build and run the project to see what you’ll be working with. Understanding HTTPS Browse through the selection of pets by swiping up on the screen. Tapping the photo of a pet reveals a detailed view of its medical data. Everything looks fine on the surface but, on the first line of the retrievePets() method, you’ll notice that the URL starts with http://. HTTP data is transmitted in the clear. 
This means all the medical information about Pom the Pomeranian, for example, was retrieved unprotected for anyone to view. Many popular tools are available to monitor HTTP traffic. Some examples are Wireshark, mitmproxy, and Charles. Because Pomeranians tend to be fussy about their privacy, you'll change this request to HTTPS.

HTTPS uses TLS, or Transport Layer Security, to encrypt data in transit. All you need to do to change this request is to append "s" to the "http" section of the URL string in retrievePets(), and change the connection class to HttpsURLConnection. As long as the host supports HTTPS, a secure connection will be made. That makes it very difficult to use those previously mentioned tools to monitor the data.

This, of course, not only applies to the example of medical data. Login requests, banking details or anything with personally identifiable information (PII) should be sent over HTTPS. But instead of trying to guess what type of information is personal, it's a better practice to make all requests HTTPS from the beginning. As of Android N, you can enforce this with the Network Security Configuration.

Note: It's also a good practice to limit the amount of data you send from your app to just the essentials.

Enforcing HTTPS

To enforce HTTPS traffic on Android N and higher, right-click on the app/res directory and select New ▸ Directory. Name it xml. Right-click on the newly created directory and select New ▸ File. Name it network_security_config.xml.

In the newly created file, add the following code:

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config cleartextTrafficPermitted="false">
        <domain includeSubdomains="true">github.io</domain>
    </domain-config>
</network-security-config>

Here, you set the cleartextTrafficPermitted attribute to false, which will block any network requests that do not use HTTPS, for specific domains that you specify.
You then added github.io as a domain, setting its includeSubdomains attribute to true, which will require HTTPS for subdomains like collinstuart.github.io.

Now you need to tell the Android system to use this file. In the AndroidManifest.xml file, replace the beginning <application tag with this line:

<application android:networkSecurityConfig="@xml/network_security_config"

Build and debug the project again on an Android N (or newer) emulator or device. You should see an error message in the Debug tab that says java.io.IOException: Cleartext HTTP traffic to collinstuart.github.io not permitted. That's because Android blocked the calls so that no data was retrieved in the clear. Your app should look like this:

Now that you have enabled HTTPS enforcement, it's time to fix the violation. At the beginning of the retrievePets() method in the PetRequester.kt file, replace all of the code up until the doAsync block with this:

val urlString = ""
val url = URL(urlString)
val connection = url.openConnection() as HttpsURLConnection

This code breaks out the connection into separate variables that you will work with later. The main differences here are:

- The URL changed from http:// to https://
- The connection variable is now of the HttpsURLConnection type.

Build and run the app again. The app displays the data again, this time over HTTPS. That was easy!

Note: Often when security vulnerabilities are found in software, a patch is released. It's a good idea to make sure the security provider for HTTPS is patched. If, during your debugging, you see an error such as SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure, this usually means the provider needs to be updated as well. For more information about this procedure, see the Update Your Security Provider page.

Understanding Certificate and Public Key Pinning

Now that you've taken the first step in securing the data, take a step back to talk about how HTTPS works.
When you start an HTTPS connection, the server presents a certificate that verifies it's the real entity. This is possible because a trusted certificate authority signed the certificate. A certificate might be signed by an intermediate certificate authority. That intermediate certificate might in turn be signed by another intermediate authority. The connection is secure as long as the first certificate is signed by a root certificate authority that is trusted by Android. The Android system evaluates that certificate chain. If a certificate is not valid, then it closes the connection. That sounds good, but it's far from foolproof. Many weaknesses exist that can make Android trust an attacker's certificate instead of a legitimately signed one. For example, a hacker can manually instruct Android to accept their own installed certificate. Or a company may have a work device configured to accept their own certificate. This allows the entity in possession of the certificate to be able to decrypt, read and modify the traffic, called a man-in-the-middle attack. Certificate pinning comes to the rescue by preventing connections for these scenarios. It works by checking the server's certificate against a copy of the expected certificate. Fortunately, on Android N, this is easy to implement. Instead of comparing the entire certificate, it compares the hash (more on this later) of the public key, often called a pin. To get the pin for the host you are talking to, head over to SSL Labs. Type in github.io for the Hostname field and click submit: On the next page, select one of the servers from the list: You'll see there are two certificates listed, the second one being a backup. Each entry has a Pin SHA256 section: Those are the hashes of the public keys that you'll add into the app. 
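If you'd rather compute a pin yourself than use SSL Labs, the standard OpenSSL recipe is: extract the certificate's Subject Public Key Info in DER form, hash it with SHA-256, and Base64-encode the digest. A sketch (the self-signed certificate generated here is only a stand-in so the pipeline can run offline; in practice you would fetch the real leaf certificate from the host instead):

```shell
# Throwaway self-signed certificate, used only to demonstrate the pipeline.
# Against a live host you would instead fetch the leaf certificate with:
#   openssl s_client -connect github.io:443 -servername github.io </dev/null \
#     | openssl x509 -outform PEM > cert.pem
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 1 -nodes -subj "/CN=example" 2>/dev/null

# Pin = base64( sha256( DER-encoded SubjectPublicKeyInfo ) )
openssl x509 -in cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64
```

The resulting 44-character Base64 string is exactly the format the pin digest entries below expect.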
Go back into the network_security_config.xml file and add them right after the domain tag for github.io:

<pin-set>
    <!-- Note: These values may change over time, so be sure to use the
         values that you obtained from the ssllabs lookup that you did
         as part of this tutorial -->
    <pin digest="SHA-256">sm6xYAA3V3PtiyWIX6G/FY2kgHCRzR1k9XndcF5A0mg=</pin>
    <pin digest="SHA-256">k2v657xBsOVe1PQRwOsHsw3bsGT2VzIqz5K+59sNQws=</pin>
</pin-set>

Note: There are many ways to get the public key hash. One alternative is to download the certificate directly from the website and run OpenSSL commands on it. If you're developing an app for a company, you might directly bug IT for it. :]

Implementing TrustKit

You've added certificate pinning support for Android N and higher, but what if your app needs to support versions under N? TrustKit is a library that uses the same format in the network_security_config.xml file to add support for versions under Android N. You'll now add the TrustKit library to the project.

Head over to your app module build.gradle file and add this to your list of dependencies:

implementation "com.datatheorem.android.trustkit:trustkit:$trustkit_version"

Next, add the TrustKit version to your project level build.gradle file at the beginning of the buildscript block:

ext.trustkit_version = '1.0.3'

Make sure to sync your Gradle files before proceeding. Then, in the network_security_config.xml file, add this right after the pin-set section:

<trustkit-config enforcePinning="true" />

This tells TrustKit to enable certificate pinning using the existing pins we added above. You need to initialize TrustKit with that security configuration, somewhere near your app startup, before you make any network requests. In the MainActivity.kt file, add the initialization code to the onCreate() method, just before the last line that sets the petRequester variable:

TrustKit.initializeWithNetworkSecurityConfiguration(this)

TrustKit will need to be imported.
You can either use option+return on Mac or Alt+Enter on PC, or manually add it to the list of imports at the top of the file: import com.datatheorem.android.trustkit.TrustKit Now, go back and tell HttpsURLConnection to involve TrustKit when making a connection. In the PetRequester.kt file, add TrustKit to the list of imports, then add the following right before the doAsync block: connection.sslSocketFactory = TrustKit.getInstance().getSSLSocketFactory(url.host) The HttpsURLConnection will now use the TrustKit socket factory, which will take care of making sure the certificates match. Build and run the app. If all went well, the app will display the pets on the screen. To test that everything is working, navigate to the network_security_config.xml file. Change any character besides = for each of the pin digest entries. Here's an example: <pin digest="SHA-256">sm6xYAA3V3PtiyWIX6G/FY2kgHCRzR1k9XndcF5A0mz=</pin> <pin digest="SHA-256">k2v657xBsOVe1PQRwOsHsw3bsGT2VzIqz5K+59sNQwz=</pin> Build and debug the app. You should now see an error that says javax.net.ssl.SSLHandshakeException: Pin verification failed. You just successfully added certificate pinning to your app! Don't forget to undo those changes that cause pin verification to fail. :] There are many other third-party solutions for certificate pinning. For more information about certificate pinning in general, see the OWASP documentation. Understanding Authentication During the second world war, German bombers used Lorenz radio beams to navigate and to find targets in Britain. A problem with this technology was that the British started transmitting their own stronger beams on the same wavelength to mask the German beams. What the Germans needed was some kind of signature to be able to tell the false beams from the authentic ones. We have this today with digital signatures, which verify the integrity of information. 
Digital signatures make sure that it's you accessing your banking data, starting a chat or logging into a service. They also make sure someone else has not altered the data.

At the heart of a digital signature is a hash function. A hash function takes a variable amount of data and outputs a signature of a fixed length. It's a one-way function: given the resulting output, there is no computationally feasible way to reverse it to reveal what the original input was. The output of a hash function will always be the same if the input is the same, and it will be drastically different even if you change only one byte or character. That makes it perfect for verifying that a large amount of data is not corrupted: hash the data, then compare that hash with the expected one.

You'll use the Secure Hash Algorithm (SHA), which is a well-known standard that refers to a group of hash functions.

Public-Key Cryptography

In many cases, when an API sends data over a network, the data will also contain a signature. But how can you use this to know if a malicious user has tampered with the data? Right now, all an attacker needs to do is alter that data and then recompute the signature. What you need is some secret information added to the mix when hashing the data, so that the attacker cannot recompute the signature without knowing that secret. But even if you have a secret, how do both parties let each other know what the secret is without it being intercepted? That's where Public-Key Cryptography comes into the picture.
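The two hash-function properties described above, fixed-length output and a drastically different digest from a one-character change, are easy to observe directly with java.security.MessageDigest (a quick sketch in plain Java; the input strings are arbitrary):

```java
import java.security.MessageDigest;

public class HashDemo {
    // Hash the input with SHA-256 and return the digest as a hex string.
    static String sha256Hex(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(input.getBytes("UTF-8"));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String a = sha256Hex("Pom the Pomeranian");
        String b = sha256Hex("Pom the Pomeranians"); // one character added
        System.out.println(a);
        System.out.println(b);
        // Both digests are 64 hex chars (256 bits) regardless of input
        // length, and the two outputs share no obvious resemblance.
        System.out.println(a.length() == 64 && b.length() == 64 && !a.equals(b));
    }
}
```

Keep in mind that a bare hash only detects accidental corruption; the signature scheme below is what stops an attacker from recomputing it.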
Attackers can't alter a message because they don't have the private key needed to reconstruct the signature. The latest and greatest way to do this is through Elliptic-Curve Cryptography (ECC).

Elliptic-Curve Cryptography

ECC is a new set of algorithms based on elliptic curves over finite fields. While it can be used for encryption, you'll use it for authentication, which is often referred to as ECDSA (Elliptic Curve Digital Signature Algorithm).

Right-click on the com.raywenderlich.android.petmed folder and select New ▸ Kotlin File/Class. Call it Authenticator and select Class for the Kind. At the top of the file, below the package declaration, import the necessary key and factory classes:

import java.security.KeyFactory
import java.security.KeyPairGenerator
import java.security.PrivateKey
import java.security.PublicKey
import java.security.Signature
import java.security.spec.X509EncodedKeySpec

Adding a Public and Private Keypair

Add a public and private keypair to the class so that it looks like the following:

class Authenticator {
    private val publicKey: PublicKey
    private val privateKey: PrivateKey
}

Initializing the Private and Public Keys

You need to initialize these private and public keys. Right after the variables, add the init block:

init {
    val keyPairGenerator = KeyPairGenerator.getInstance("EC") // 1
    keyPairGenerator.initialize(256) // 2
    val keyPair = keyPairGenerator.genKeyPair() // 3
    // 4
    publicKey = keyPair.public
    privateKey = keyPair.private
}

What did you do here?

1. Created a KeyPairGenerator instance for the Elliptic Curve (EC) type.
2. Initialized the object with the recommended key size of 256 bits.
3. Generated a key pair, which contains both the public and private key.
4. Set the publicKey and privateKey variables of your class to those newly generated keys.

Adding the Sign and Verify Methods

To complete this class, add the sign and verify methods.
Put this code right after the init block:

fun sign(data: ByteArray): ByteArray {
    val signature = Signature.getInstance("SHA1withECDSA")
    signature.initSign(privateKey)
    signature.update(data)
    return signature.sign()
}

This method takes in a ByteArray. It initializes a Signature object with the private key that is used for signing, adds the ByteArray data and then returns a ByteArray signature.

Now, add the verify method to your class:

fun verify(signature: ByteArray, data: ByteArray): Boolean {
    val verifySignature = Signature.getInstance("SHA1withECDSA")
    verifySignature.initVerify(publicKey)
    verifySignature.update(data)
    return verifySignature.verify(signature)
}

This time, the Signature object is initialized with the public key that is needed for verification. The signature object is updated with the data to be verified and then the verify method is called to do the verification. The method returns true if the verification was successful.

You'll also need a way to verify data for a public key that is sent to you. Create a second verify method that accepts an external public key:

fun verify(signature: ByteArray, data: ByteArray, publicKeyString: String): Boolean {
    val verifySignature = Signature.getInstance("SHA1withECDSA")
    val bytes = android.util.Base64.decode(publicKeyString, android.util.Base64.DEFAULT)
    val publicKey = KeyFactory.getInstance("EC").generatePublic(X509EncodedKeySpec(bytes))
    verifySignature.initVerify(publicKey)
    verifySignature.update(data)
    return verifySignature.verify(signature)
}

This code is similar to the previous verify method, except that it converts a Base64 public key string into a PublicKey object. Base64 is a format that allows raw data bytes to be easily passed over the network as a string.

Now that you have an Authenticator class, you'll make use of it inside PetRequester.

Verifying a Signature

In one scenario, apps could be required to register with a service where the public key is passed back; this is often called a token or secret.
For a chat app, for example, each user might exchange public keys upon initiating a chat session. In this example, the public key for the Github server that you're communicating with will be included in the code. It will be used to verify the pet data that comes from the items JSON list.

Open PetRequester.kt and add the public key to the top of the file, just under the import statements:

private const val SERVER_PUBLIC_KEY = "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEP9M/My4tmNiaZRcQtYj58EjGN8N3uSnW/s7FpTh4Q+T3tNVkwVCjmDN+a2qIRTcedQyde0d8CoG3Lp2ZlnPhcw=="

Next, create an authenticator instance in the retrievePets() method, right under the first three lines that define the URL:

val authenticator = Authenticator()

Then, replace the contents inside the uiThread block with the following:

// Verify received signature
// 1
val jsonElement = JsonParser().parse(json)
val jsonObject = jsonElement.asJsonObject
val result = jsonObject.get("items").toString()
val resultBytes = result.toByteArray(Charsets.UTF_8)

// 2
val signature = jsonObject.get("signature").toString()
val signatureBytes = android.util.Base64.decode(signature, android.util.Base64.DEFAULT)

// 3
val success = authenticator.verify(signatureBytes, resultBytes, SERVER_PUBLIC_KEY)

// 4
if (success) {
    // Process data
    val receivedPets = Gson().fromJson(json, PetResults::class.java)
    responseListener.receivedNewPets(receivedPets)
}

Here's what's going on in the updated block:

- You are taking all the JSON content for items and turning it into a ByteArray.
- You're also retrieving the signature string that is returned and you are turning that into a ByteArray.
- Now, you are using authenticator to verify the data bytes with the signature bytes, given the server's public key.
- If the data is verified, it is passed to the response listener.

Build and run the app to check that it worked.
Set a breakpoint on the if (success) { line to check that success is true.

To test what happens when there are problems, you'll alter the received data. Add the following right after val resultBytes = result.toByteArray(Charsets.UTF_8):

resultBytes[resultBytes.size - 1] = 0

That code will replace the last byte of the received data with 0. Build and run the app again. This time, no data will be displayed because success will be false. Don't forget to remove that test after you're done.

Another common scenario is when you're connecting to a server with a back-end API. Often, you'll need to register by sending your public key before being able to access a specific endpoint, such as /send_message. A PublicKey's bytes can be retrieved by calling publicKey.encoded. The app then needs to sign its request to the /send_message endpoint in order to successfully use it.

When signing a request, it's common practice to take selected parts of the request, such as HTTP headers, GET or POST parameters and the URL, and join them together into a string. That string is used to create the signature. On the back-end side, the server repeats the process of joining the strings and creating a signature. If the signatures match, it proves that the user must have possession of the private key. No other users are able to impersonate the user because they do not have that private key.

Since specific parameters of the request were part of the string to be signed, it also guarantees the integrity of the request; it prevents attackers from altering the request parameters to their liking. For example, a bank wouldn't be happy if attackers could alter the destination account number for a money transfer, or be able to alter the mailing address to receive the victim's credit statements in the mail.

You'll create a simple signature for the pets' request.
Back in PetRequester.kt, add the following code to the retrievePets() method, just under the line that sets the authenticator value:

val bytesToSign = urlString.toByteArray(Charsets.UTF_8) // 1
val signedData = authenticator.sign(bytesToSign) // 2
val requestSignature = android.util.Base64.encodeToString(signedData, android.util.Base64.DEFAULT) // 3
Log.d("PetRequester", "signature for request : $requestSignature")

Here:

- You take the request string and turn it into a ByteArray.
- The bytes get signed using the internal private key and the signature bytes are returned.
- You turn the signature bytes into a Base64 string so that it can easily be sent over the network.

Now, add the following lines to verify that the signature works:

val signingSuccess = authenticator.verify(signedData, bytesToSign)
Log.d("PetRequester", "success : $signingSuccess")

Build and run the app to see the result in the Debug tab.

Now, you'll alter the request data to see what happens. Add the following code right before the authenticator.verify() call:

bytesToSign[bytesToSign.size - 1] = 0

Build and run the app. This time, success will be false in the Debug tab.

Congratulations! You just secured the data with a signature. While you have been verifying the integrity of the data, it's not a replacement for regular data validation checks such as type and bounds checking. For example, if your method expects a string of 128 characters or less, you should still check for this.

You should also be aware of a few other standards:

- RSA is a popular and accepted standard. Its key sizes must be much larger (such as 4096 bits), and key generation is slower. You might use it if the rest of your team is already familiar with or using the standard.
- HMAC is another popular solution that, instead of using public-key cryptography, relies on a single, shared key. The secret key must be exchanged securely. HMAC is used when speed considerations are very important.

Where to Go From Here?
You've just secured an app for dealing with sensitive medical data. Download the final project using the Download materials button at the top or bottom of this tutorial.

While you have secured your connection to a server, the traffic is decrypted once it arrives. Usually, it's a requirement for a company to be able to see this information, but there is a recent trend towards end-to-end encryption. The best way to explain end-to-end encryption is a chat app where only the sender and receiver have the keys to decrypt each other's messages. The service or company has no way of knowing what the content is. This is a great way to avoid being liable in the event of a server-side data breach or compromise. To learn more about implementing this approach, a good place to start is the open-source Signal App GitHub repo.

To learn more about Android security in general, check out the Security For Android Developers page. And, of course, to check out more about Pom The Pomeranian, you can find him on Instagram. :]

If you have any questions about what has been covered, please join the discussion below!
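One more thing worth exploring is the HMAC standard mentioned above. The following sketch is an addition to this tutorial, written in plain Java (rather than the tutorial's Kotlin) so it runs anywhere on the JVM; the key and request strings are made up for illustration. The key difference from ECDSA is that signer and verifier share a single secret key:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class HmacDemo {

    // Compute an HMAC-SHA256 tag over the data with a shared secret key.
    // Unlike ECDSA, both signer and verifier hold the SAME key, so the
    // key must have been exchanged securely beforehand.
    static String hmacSha256(byte[] key, byte[] data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return Base64.getEncoder().encodeToString(mac.doFinal(data));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] key = "shared-secret".getBytes(StandardCharsets.UTF_8);
        byte[] request = "GET /pets?limit=10".getBytes(StandardCharsets.UTF_8);

        String tag = hmacSha256(key, request);
        // The verifier recomputes the tag over the same data and compares
        System.out.println(tag.equals(hmacSha256(key, request)));
    }
}
```

Any change to either the key or the data produces a completely different tag, which is how the verifier detects tampering.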
Topological sorting for a Directed Acyclic Graph (DAG) is a linear ordering of vertices such that for every directed edge uv, vertex u comes before vertex v in the ordering. For example, a topological sorting of the following graph is “5 4 2 3 1 0”. There can be more than one topological sorting for a graph. For example, another topological sorting of the following graph is “4 5 2 3 1 0”. The first vertex in topological sorting is always a vertex with in-degree as 0 (a vertex with no in-coming edges). For example, a DFS of the above graph is “5 2 3 1 0 4”, but it is not a topological sorting.

Algorithm to find Topological Sorting: We recommend to first see implementation of DFS here. We can modify DFS to find topological sorting of a graph. In DFS, we start from a vertex, print it and then recursively call DFS for its adjacent vertices. In topological sorting, we use a temporary stack. We don't print the vertex immediately; we first recursively call topological sorting for all its adjacent vertices, then push it to the stack. Finally, print the contents of the stack. Note that a vertex is pushed to stack only when all of its adjacent vertices (and their adjacent vertices and so on) are already in stack.

Following are C++ and Java implementations of topological sorting. Please see the code for Depth First Traversal for a disconnected Graph and note the differences between the second code given there and the below code.

C++

// A C++ program to print topological sorting of a DAG
#include <iostream>
#include <list>
#include <stack>
using namespace std;

// Class to represent a graph
class Graph
{
    int V;    // No. of vertices

    // Pointer to an array containing adjacency lists
    list<int> *adj;

    // A function used by topologicalSort
    void topologicalSortUtil(int v, bool visited[], stack<int> &Stack);
public:
    Graph(int V);   // Constructor

    // function to add an edge to graph
    void addEdge(int v, int w);

    // prints a Topological Sort of the complete graph
    void topologicalSort();
};

Graph::Graph(int V)
{
    this->V = V;
    adj = new list<int>[V];
}

void Graph::addEdge(int v, int w)
{
    adj[v].push_back(w); // Add w to v's list.
}

// A recursive function used by topologicalSort
void Graph::topologicalSortUtil(int v, bool visited[], stack<int> &Stack)
{
    // Mark the current node as visited.
    visited[v] = true;

    // Recur for all the vertices adjacent to this vertex
    list<int>::iterator i;
    for (i = adj[v].begin(); i != adj[v].end(); ++i)
        if (!visited[*i])
            topologicalSortUtil(*i, visited, Stack);

    // Push current vertex to stack which stores result
    Stack.push(v);
}

// The function to do Topological Sort.
// It uses recursive topologicalSortUtil()
void Graph::topologicalSort()
{
    stack<int> Stack;

    // Mark all the vertices as not visited
    bool *visited = new bool[V];
    for (int i = 0; i < V; i++)
        visited[i] = false;

    // Call the recursive helper function to store Topological
    // Sort starting from all vertices one by one
    for (int i = 0; i < V; i++)
        if (visited[i] == false)
            topologicalSortUtil(i, visited, Stack);

    // Print contents of stack
    while (Stack.empty() == false)
    {
        cout << Stack.top() << " ";
        Stack.pop();
    }
}

// Driver program to test above functions
int main()
{
    // Create a graph given in the above diagram
    Graph g(6);
    g.addEdge(5, 2);
    g.addEdge(5, 0);
    g.addEdge(4, 0);
    g.addEdge(4, 1);
    g.addEdge(2, 3);
    g.addEdge(3, 1);

    cout << "Following is a Topological Sort of the given graph \n";
    g.topologicalSort();

    return 0;
}

Java

// A Java program to print topological sorting of a DAG
import java.io.*;
import java.util.*;

// This class represents a directed graph using adjacency
// list representation
class Graph
{
    private int V;   // No. of vertices
    private LinkedList<Integer> adj[]; // Adjacency List

    //Constructor
    Graph(int v)
    {
        V = v;
        adj = new LinkedList[v];
        for (int i=0; i<v; ++i)
            adj[i] = new LinkedList();
    }

    // Function to add an edge into the graph
    void addEdge(int v,int w) { adj[v].add(w); }

    // A recursive function used by topologicalSort
    void topologicalSortUtil(int v, boolean visited[],
                             Stack stack)
    {
        // Mark the current node as visited.
        visited[v] = true;
        Integer i;

        // Recur for all the vertices adjacent to this
        // vertex
        Iterator<Integer> it = adj[v].iterator();
        while (it.hasNext())
        {
            i = it.next();
            if (!visited[i])
                topologicalSortUtil(i, visited, stack);
        }

        // Push current vertex to stack which stores result
        stack.push(new Integer(v));
    }

    // The function to do Topological Sort.
    // It uses recursive topologicalSortUtil()
    void topologicalSort()
    {
        Stack stack = new Stack();

        // Mark all the vertices as not visited
        boolean visited[] = new boolean[V];
        for (int i = 0; i < V; i++)
            visited[i] = false;

        // Call the recursive helper function to store
        // Topological Sort starting from all vertices
        // one by one
        for (int i = 0; i < V; i++)
            if (visited[i] == false)
                topologicalSortUtil(i, visited, stack);

        // Print contents of stack
        while (stack.empty() == false)
            System.out.print(stack.pop() + " ");
    }

    // Driver method
    public static void main(String args[])
    {
        // Create a graph given in the above diagram
        Graph g = new Graph(6);
        g.addEdge(5, 2);
        g.addEdge(5, 0);
        g.addEdge(4, 0);
        g.addEdge(4, 1);
        g.addEdge(2, 3);
        g.addEdge(3, 1);

        System.out.println("Following is a Topological " +
                           "sort of the given graph");
        g.topologicalSort();
    }
}
// This code is contributed by Aakash Hasija

Output:

Following is a Topological Sort of the given graph
5 4 2 3 1 0

Time Complexity: The above algorithm is simply DFS with an extra stack. So time complexity is the same as DFS, which is O(V+E).
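As a supplementary sketch not from the original article, the same ordering can also be computed iteratively with Kahn's algorithm, which repeatedly removes vertices of in-degree 0 (recall the note above that the first vertex in a topological sorting always has in-degree 0). The class name and edge-list representation here are just for illustration:

```java
import java.util.*;

public class KahnTopoSort {

    // Iterative topological sort: repeatedly emit a vertex whose
    // in-degree is 0, then decrement the in-degrees of its neighbors.
    public static List<Integer> sort(int v, List<int[]> edges) {
        List<List<Integer>> adj = new ArrayList<>();
        int[] indeg = new int[v];
        for (int i = 0; i < v; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); indeg[e[1]]++; }

        // Start with every vertex that has no incoming edges
        Deque<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < v; i++) if (indeg[i] == 0) ready.add(i);

        List<Integer> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            int u = ready.poll();
            order.add(u);
            for (int w : adj.get(u)) if (--indeg[w] == 0) ready.add(w);
        }
        // If order.size() < v here, the graph contains a cycle
        // and has no topological ordering.
        return order;
    }

    public static void main(String[] args) {
        // Same graph as the example above
        List<int[]> edges = Arrays.asList(new int[]{5,2}, new int[]{5,0},
            new int[]{4,0}, new int[]{4,1}, new int[]{2,3}, new int[]{3,1});
        System.out.println(sort(6, edges));
    }
}
```

This variant has the same O(V+E) time complexity and, as a bonus, detects cycles: a cyclic graph leaves some vertices with non-zero in-degree, so fewer than V vertices are emitted.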
char *strtok(char *s1, const char *s2);

Great, a very complicated way to do s1.split(). It has its merits though if you need to process a huge buffer of text that you don't mind destroying and you don't want to pay for all the copies that are done implicitly by split(). Anyway, the main point here is how to make ctypes call this function from Python. The process with any function is similar. The following sample code creates a character array from the Python string '123 456 789', sets the return type to be ctypes.c_char_p and then calls strtok() repeatedly and prints each token until it returns NULL/None:

p = ctypes.create_string_buffer('123 456 789')
libc.strtok.restype = ctypes.c_char_p
x = libc.strtok(p, ' ')
while x:
    print x
    x = libc.strtok(None, ' ')

123
456
789

Pointers are really easy with ctypes. You can create a pointer type for any ctypes type using the ctypes.POINTER factory function. To create a pointer from an existing variable, use the pointer function. To access the value of a pointer px you can use px.contents.value or simply px[0]. Note that ctypes variables are always mutable.

x = ctypes.c_int(888)
px = ctypes.pointer(x)
print 'x.value=', x.value
print 'px[0]=', px[0]
px.contents.value = 444
print 'x.value=', x.value
print 'px[0]=', px[0]

x.value= 888
px[0]= 888
x.value= 444
px[0]= 444

garbled_song = ('mo', 'mini', 'ini', 'miny')
# Create an array type of char pointers the size of the garbled song
StringArrayType = ctypes.c_char_p * len(garbled_song)
# Create an instance of this array and assign it the garbled song
strings = StringArrayType(*garbled_song)
print ' '.join(strings)
# Modify an element of the array
strings[1] = 'used_to_be_mini'
print ' '.join(strings)

mo mini ini miny
mo used_to_be_mini ini miny

qsort is very convenient since it's available in libc and it operates on arrays using pointers.
Here is the C signature of qsort:

void qsort(void *base, size_t length, size_t width,
           int (*compare)(const void *, const void *));

This is pretty complicated, but ctypes comes through. You already know how to define pointers and arrays, so the only unknown is how to define a callback function. ctypes provides the CFUNCTYPE factory function. You pass in as first argument the result type and then the types of the arguments in order. Here is the definition of the comparison function type:

CmpFuncType = ctypes.CFUNCTYPE(ctypes.c_int,
                               ctypes.POINTER(ctypes.c_char_p),
                               ctypes.POINTER(ctypes.c_char_p))

def string_compare(s1, s2):
    return cmp(s1[0], s2[0])

garbled_song = ('mo', 'mini', 'ini', 'miny')
StringArrayType = ctypes.c_char_p * len(garbled_song)
strings = StringArrayType(*garbled_song)
print ' '.join(strings)

libc.qsort.restype = None
libc.qsort(strings, len(strings), ctypes.sizeof(ctypes.c_char_p), CmpFuncType(string_compare))
print ' '.join(strings)

mo mini ini miny
ini mini miny mo
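One pitfall worth noting (this example is an addition to the article, written for Python 3): a CFUNCTYPE callback object must be kept referenced in a Python variable for as long as C code may call it, or the garbage collector is free to reclaim it. A minimal sketch sorting integers with qsort:

```python
import ctypes
import ctypes.util

# Load the C library (assumes a Unix-like system; CDLL(None) is the
# fallback when find_library can't locate it by name)
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Comparison callback type: qsort passes pointers to the two elements
CmpFunc = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def int_compare(a, b):
    # Dereference the element pointers with [0]
    return a[0] - b[0]

# Keep the wrapped callback in a variable. If you passed
# CmpFunc(int_compare) inline and C held on to the pointer,
# the wrapper could be garbage-collected while still in use.
cmp_callback = CmpFunc(int_compare)

numbers = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(numbers, len(numbers), ctypes.sizeof(ctypes.c_int), cmp_callback)
print(list(numbers))  # [1, 2, 3, 4, 5]
```

The same rule applies to the strtok buffer above: ctypes objects own their memory, so keep them alive as long as C code references them.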
Hello,

With the code below, I am trying to replace all regex matches for Visa cards within a given text file. My first test was with a text file "new3.txt" exclusively containing the Visa test card 4111111111111111. My objective was to replace the card with "xxxx-xxxx-xxxx-xxxx". This was successful.

However, when modifying the text file to include other characters and text before and after (ex: " qwerty 4111111111111111 adsf zxcv"), it gives mixed results. Although it successfully validates the match, it replaces the whole text in the file, rather than replacing solely the match. When trying this search and replace with words (rather than a regex match), it does not have this behavior. What am I missing?

import java.io.*;
import java.util.regex.*;

public class BTest {

    //VISA Test
    private final static String PATTERN = "(?s).*\\b4[0-9]{12}(?:[0-9]{3})?\\b.*";

    public static void main(String args[]) {
        try {
            File file = new File("c:/eclipse/new3.txt");
            BufferedReader reader = new BufferedReader(new FileReader(file));
            String line = "", oldtext = "";
            while ((line = reader.readLine()) != null) {
                oldtext += line + "\r\n";
            }
            reader.close();

            String newtext = oldtext.replaceAll(PATTERN, "xxxx-xxxx-xxxx-xxxx");

            FileWriter writer = new FileWriter("c:/eclipse/new4.txt");
            writer.write(newtext);
            writer.close();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
    }
}
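For readers hitting the same problem: the leading and trailing .* in the pattern are themselves part of the match, so the whole input matches and replaceAll() replaces all of it. A sketch of the likely fix (the class name and test strings here are made up for illustration): drop the (?s).* wrappers so only the card number itself matches.

```java
import java.util.regex.Pattern;

public class CardMasker {

    // Same Visa pattern as in the post, but WITHOUT the surrounding
    // "(?s).*" ... ".*" wrappers, so each match covers only the card
    // number, not the entire text.
    private static final Pattern VISA =
            Pattern.compile("\\b4[0-9]{12}(?:[0-9]{3})?\\b");

    public static String mask(String text) {
        return VISA.matcher(text).replaceAll("xxxx-xxxx-xxxx-xxxx");
    }

    public static void main(String[] args) {
        // Surrounding text is preserved; only the match is replaced
        System.out.println(mask(" qwerty 4111111111111111 adsf zxcv"));
        // prints " qwerty xxxx-xxxx-xxxx-xxxx adsf zxcv"
    }
}
```

The "mixed results" in the original test were masked by the one-card file: when the whole file is the card number, replacing the whole match and replacing the card look identical.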
Re: How to make downloadable source code available to website visitors? - From: "Swaregirl" <swaregirl@xxxxxxxxx> - Date: Thu, 26 May 2005 21:46:51 GMT "Kevin Spencer" <kevin@xxxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message news:u9FqYWjYFHA.2884@xxxxxxxxxxxxxxxxxxxxxxx > > Second, learn > >> and apply logic with discipline. Third, learn programming. Last, teach > >> others as others have taught you. And always continue to learn. > > > > > > We can agree on this, and, believe it or not, I do this. Your lecture was > > unnesseccary. > > I would have to say a waste of my precious little time, and I'll waste no > more on you. You are unteachable. If what you have to teach me is how to be rude and full of self-righteousness, I will consider that a compliment. > Now, as you no doubt feel you must, have > your last word (if possible), and leave me alone (if possible). If you do > not leave me alone, I'll simply put you on my blocked senders list, and you > can rant all you like about how evil and stupid I am to the rest of the > uncaring world. May you have an interesting life, and attract the attention > of important people. > I wouldn't call you evil. You probably think that you are doing the world a favor by giving instructions but so blinded by your own false sense of superiority that you yourself can't seem to follow you own instruction. I am posting to a public forum. I am not breaking any legal or moral code. I have not e-mailed you directly, so do what you must. Block me or whatever. I don't care. I never asked for you to post your trite, mean-spritited code or to give me your hypocritical lectures in the first place. And don't expect that you can talk to anyone in any fashion that you choose to and that people will have no say in the matter. > Yours in arrogance and condescension, On this, we can agree. But don't forget "self-righteous". > > Kevin Spencer > Microsoft MVP > .Net Developer > Sometimes the elephant eats you. 
> > "Swaregirl" <swaregirl@xxxxxxxxx> wrote in message > news:SXple.2645$3D6.78@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx > > > > "Kevin Spencer" <kevin@xxxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message > > news:#w5O8MeYFHA.980@xxxxxxxxxxxxxxxxxxxxxxx > >> Thank you for your attempt at belittling me. Every attack is an > > opportunity > >> to grow, sensai. However, it should be noted that: > >> > > > > Normally, I would never have posted such statements. I much prefer to > > seek > > common ground. I have used newsgroups, especially technical ones back > > when > > there was no graphical interface on the Internet, and I have never seen > > any > > posts that matched yours in response to someone seeking help. > > > > However, you took the time and made a great effort to be snide and > > insulting. So I just had to know more about you. And I found out that you > > were human after all. Real human. Those who live in glass houses should > > throw no stones. > > > > If you learn anything from this exchange, I would hope that it would be > > that > > you are not so much above the fray as you might think. I hope that you > > might gain some humility and do away with your need and desire to insult > > people. I hope that you may see that the contempt that you have shown > > regarding your perceived shortcomings in others can be directed right back > > at you. I hope that you feel the need to direct your judgement more > > towards > > yourself and less towards others. > > > > I have no contempt for you, personally. However, I have no problem > > holding > > up the mirror that was your judgement and to place it squarely in your > > face. > > > >> 1. I was not the primary author of any of those books. I contributed > > several > >> chapters in each. The primary author was John Kauffman, who you should be > >> insulting for the opinions you read, I suppose. > >> > > > > You failed the test which was to take your own advice regarding > > responsibility. 
And taking true responsibility means never blaming > > others. > > You went on and on about what your mother taught you, but you did not > > learn > > your lesson well. > > > > Here's a heads up. When you put your name on a paper, manuscript or book, > > ESPECIALLY when it is published, you don't have the luxury of palming its > > failures or shortcomings off on the other authors. > > > > Cowboy up, dude! > > > >> 2. You haven't actually READ any of the books, but are simply quoting the > >> opinions of others in a rather lame attempt to belittle me. I must > >> compliment your ability to research, but you need to direct it towards > > that > >> which is useful. > >> > > Are you kidding? Why would I buy a book that had such consistently > > horrible > > reviews? They didn't just say that the book sucked. They went into > > painful > > detail regarding coding errors. > > > > The review of your book was among the more scathing that I have seen > > relative to a techncial book, and I've looked up many technical book > > review. > > > > > >> 3. Opinions are like a**holes. Not only does everybody have one, but they > >> all stink. Only a dog would find one interesting. > >> > > So why did you put your stinking two cents worth in the first place? Do > > you > > think that you are Jesus, the only man with the right to cast the first > > stone? > > > >> 4. I was attempting to help you. I speak from experience. You speak from > >> bitterness. > > > > You hit someone over the head with a frying pan and then say that you were > > trying to kill a mosquito? Try again. > > > > I harbor no bitterness towards you. I can't be tempted or bribed into it. > > Why should I when you have shown yourself to be unworthy of your own > > advice? > > > >> Perhaps you might like to check out my web site, which is full > >> of code, all of which I wrote myself. 
It also contains articles, > > tutorials, > >> and quite a bit of other free material regardin ASP, ASP.Net and > > programming > >> in general, all of which I wrote myself.. > >> > > > > We know, we know. You would NEVER look at any code in the Microsoft ..NET > > namespaces. You would never, ever use it to jumpstart yourself. And I > > just > > can't figure out why Bill Gates would ever allow any code samples in ..NET > > help in the first place? We lazy morons should be able to figure out how > > to > > write the code from scratch, from the namespace class descriptions alone! > > > >> 5. You misunderstood my attempt to make a point with a bit of humor. > > > > I misunderstood nothing. Try again. Before my last two posts, I checked > > with you to see if I was misunderstanding you. In a subsequent post, you > > made it clear to me that I wasn't and proceeded to give me a lecture on > > work > > ethic, your mother, etc. > > > > And I am quite sure that you found humor in what you said. Statements are > > always funny to the guy who is dishing out contempt and insults. > > > >> I was > >> not implying that you are ignorant, although I'm beginning to think that > > may > >> be the case. I was making several points. > > > > There is nothing wrong with igonrance. Ignorance can be cured with > > knowledge. And I am on the road to curing my ignorance on the specific > > problem that I was trying to solve in ASP.NET > > > > Stupidity is a totally different matter and is incurable without a change > > of > > mind and heart. > > > > And for someone who takes the time to point out the real or perceived > > shortcomings of others especially when the accuser himself has obvious and > > glaring ones of his or her own , that is, well, you figure it out. > > > >> One cannot teach what one does not > >> know. 
Regardless of whether one has knowledge or not, one does not need > > code > >> written by other people (which is likely to be unreliable, as is one's > > own, > >> unless checked extremely carefully), and one needs to continually educate > >> one's self. All of these points apply to myself as well as everyone else. > >> > > > > There you go assuming again. You assume that I am not taking it upon > > myself > > to go through the related namespaces and to really understand it. You > > assume that all programmers who ask for help won't do that. You are > > wrong. > > But of course, if you hold on to the belief that every software developer > > who learns from sample code would never take it upon themselves to > > understand it or even make it better, then you get to maintain your false > > sense of superiority. > > > >> In conclusion, you have not made me look bad, as hard as you have tried. > > You > >> have made yourself look like a petty child, and proven my point in the > >> process. > > > > No, Kevin, you've besmirched your own credibility because your words do > > not > > match your actions in a published manuscript. You made it real easy. > > > > Programming is a discipline that requires a relentless logic. First > >> learn discipline. Develop an anal-retentive habit of criticism, > >> especially > >> of yourself. > > > > Although what you say is true here, you make yourself look really bad by > > stating that fact and having a book out there that got complaints about > > code > > errors. > > > >> Trust no one, and no code, especially your own. > > > > So maybe you saw it coming? > > > > Second, learn > >> and apply logic with discipline. Third, learn programming. Last, teach > >> others as others have taught you. And always continue to learn. > > > > > > We can agree on this, and, believe it or not, I do this. Your lecture was > > unnesseccary. 
> > > >> > >> -- > >> HTH, > >> > >> Kevin Spencer > >> Microsoft MVP > >> .Net Developer > >> Sometimes you eat the elephant. > >> Sometimes the elephant eats you. > >> > >> "Swaregirl" <swaregirl@xxxxxxxxx> wrote in message > >> news:QTcle.1238$4N2.472@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx > >> > Kevin, > >> > > >> > You have a lot of nerve to be so condescending. Before you write > >> > little > >> > snide code snippets, you need to clean up around your own front door. > > Or > >> > better yet, clean up your own books and code. Especially when anyone > > can > >> > go > >> > to Amazon.com and see the reviews of your books. > >> > > >> > Especially after you helped write one book with that squeaked out a 3 > > star > >> > rating after you bolstered it with your own 5 star rating. Especially > > when > >> > you helped to write another that bombed outright. Especially when and > > you > >> > are the only common denominator between the two. > >> > > >> > Especially when the complaints were that the code for BOTH books was > >> > riddled > >> > with errors. One review said that 80% of the code was incorrect in the > >> > Beginning SQL Programming book was incorrect. > >> > > >> > > >> > Beginning SQL Programming (Programmer to Programmer) > >> > by John Kauffman, Brian Matsik, Kevin Spencer, Ian Herbert, Sakhr > > Youness, > >> > Julian Skinner > >> > > >> > > >> > 3 Star rating based on 11 reviews - pumped up with your own glowing > >> > "review" > >> > of your work. > >> > > >> > "Overpriced for meager information", July 27, 2001 > >> > > >> > "Rubbish", November 22, 2002 > >> > > >> > "Absolutely Appalling!!!", October 18, 2002 > >> > Reviewer: MR M J O'Sullivan "gameshaven" (London) - See all my > >> > reviews > >> > "This has got to be the worst book on any computing subject ever > >> > published. > >> > The try it out sections and code snippets throughout the book are about > >> > 80% > >> > incorrect, honestly the level of errors and poor coding is disgraceful. 
> >> > Wrox > >> > books are normally of a very high quality but this one is absolutely > >> > terrible, I can't emphasised that enough. I begun this book with high > >> > enthusiasm for this subject but now my enthusiasm has totally > > diminished. > >> > Apart from the appalling level of errors there is also a high level or > >> > repetition, which can only be down to poor communication among its > >> > authors. > >> > Then in other areas you'll find exceptional weak explanations or > >> > explanations lacking altogether." > >> > > >> > "Typos, inconsistent examples, sidetracked, advocate for Micr," August > > 16, > >> > 2001 > >> > > >> > Here you are, Kevin, gushing your opinion of your book and trying like > >> > hell > >> > to get that perfect 5 star rating: > >> > > >> > Beginning to End SQL Programming, April 11, 2001 > >> > Reviewer: Kevin Spencer (Pell City, AL USA) - See all my reviews > >> > I am one of the authors of this book, so of course my rating is going > >> > to > >> > be > >> > a little prejudiced. However, I must say that I'm very proud of this > > work, > >> > and after reading the complete work, I thought it should be called > >> > "Beginning to End SQL Programming." There isn't a single aspect of SQL > >> > that > >> > is not covered in this book. While it is designed to cover using SQL > > from > >> > the beginning, and written in a style that is easy for novices to > >> > understand, it covers more about the actual use of SQL than any other > > SQL > >> > book that I've seen. I actually use it as a reference myself! > >> > > >> > > >> > > >> > > > > >> > -2/ref=sr_1_2/102-9867468-7270568?v=glance&s=books > >> > > >> > ----------------------------------- > >> > > >> > > >> > Beginning Mac Programming > >> > by Jeff Thompson, Kevin Spencer > >> > > >> > > >> > 1 Star rating - 1 reviewer. Come on Kevin, why didn't you give > >> > yourself > > a > >> > 5 > >> > star rating on this one. Too embarrassed? 
> >> > Comment: > >> > "poor tutorial, but good overview of cocoa and carbon", February 24, > > 2002 > >> > "...Magic code appears in the project without ever being introduced in > > the > >> > tutorial. There are multiple errors in the tutorial as well. I guess it > > is > >> > good practice to be able to debug on your own." > >> > > >> > > > > >> > -3/ref=sr_1_3/102-9867468-7270568?v=glance&s=books > >> > > >> > --------------------- > >> > All of this and you have the nerve to bleat about on about the > > grammatical > >> > and spelling errors that posters make and how so many programmers are > > dumb > >> > and lazy. > >> > > >> > You are a genius in your own mind only. No true genius would be > > satisfied > >> > with mediocrity. > >> > > >> > > >> > > >> > > >> > "Kevin Spencer" <kevin@xxxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message > >> > news:#7rBHJRYFHA.3320@xxxxxxxxxxxxxxxxxxxxxxx > >> >> Here's a Freebie: > >> >> > >> >> ' Conditional, depends on Knowledge > >> >> If (IsNothing(Me.Knowledge)) > >> >> Me.ShareKnowledge = False > >> >> Else > >> >> Me.ShareKnowledge = True > >> >> End If > >> >> > >> >> ' Non-Conditional > >> >> Me.NeedCode = False > >> >> Me.Educate() > >> >> > >> >> -- > >> >> HTH, > >> >> > >> >> Kevin Spencer > >> >> Microsoft MVP > >> >> .Net Developer > >> >> Sometimes you eat the elephant. > >> >> Sometimes the elephant eats you. > >> >> > >> >> > >> >> "Swaregirl" <swaregirl@xxxxxxxxx> wrote in message > >> >> news:dLWke.2361$TJ2.1967@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx > >> >> > Hello, > >> >> > > >> >> > I would like to build a website using ASP.NET. I would like website > >> >> > visitors to be able to download code that I would like to make > >> >> > available > >> >> > to > >> >> > them and that would be residing on my personal server. > >> >> > > >> >> > Are there any code samples or books that someone can recommend so > > that > >> >> > I > >> >> > can > >> >> > implement this. 
I would prefer VB.NET code, but I am willing to > >> > convert > >> > from C# if necessary. > >> > > >> > Thanks > >> > > >> > > >> >> > >> >> > >> > > >> > > >> > >> > > > > > > .