In Java, the if statement is a conditional branch statement. It can be used to route program execution through two different paths. Here is the general form of the if statement in Java: if(condition) statement1; else statement2; Here, each statement may be a single statement or a compound statement enclosed in curly braces (a block). The condition is any expression that returns a boolean value. The else clause is optional. The if statement works as follows: if the condition is true, statement1 is executed. Otherwise, statement2 (if it is present) is executed. In no case are both statements executed. For example, consider the following: int a, b; //... if(a < b) a = 0; else b = 0; Here, if a is less than b, then a is set to zero. Otherwise, b is set to zero. In no case are both set to zero. Usually, the expression used to control the if statement involves the relational operators. However, this is not technically necessary. It is possible to control the if with a single boolean variable, as shown in the following code fragment: boolean dataAvailable; //... if(dataAvailable) processData(); else waitForMoreData(); Remember, only one statement can appear directly after the if or the else. If you want to include more statements, you need to create a block, as shown in the following code fragment: int bytesAvailable; //... if(bytesAvailable > 0) { processData(); bytesAvailable -= n; } else waitForMoreData(); Here, both statements in the if block execute if bytesAvailable is greater than zero. Some programmers find it helpful to include the curly braces when using the if statement, even when there is only one statement in each clause. It makes it easy to add another statement later, and you don't have to worry about forgetting the braces. A nested if is an if statement that is the target of another if or else. Nested ifs are very common in programming. 
When you nest ifs, the important thing to remember is that an else statement always refers to the nearest if statement that is in the same block as the else and that is not already associated with an else. Here is an example: if(i == 10) { if(j < 20) a = b; if(k > 100) c = d; // this if is associated else a = c; // with this else } else a = d; // this else refers to if(i == 10) Don't be confused by the curly braces: use them whenever a clause contains more than one statement; otherwise they are optional. Here is an improved version of the fragment above using curly braces: if(i == 10) { if(j < 20) { a = b; } if(k > 100) { c = d; } else { a = c; } } else { a = d; } As the comments indicate, the final else is not associated with if(j < 20), because it is not in the same block (even though it is the nearest if without an else). Instead, the final else is associated with if(i == 10). The inner else refers to if(k > 100), because it is the closest if within the same block. A common programming construct based on a sequence of nested ifs is the if-else-if ladder. Here is its general form in Java: if(condition) statement; else if(condition) statement; else if(condition) statement; ... else statement; The if statements are executed from the top down. As soon as one of the conditions controlling an if is true, the statement associated with that if is executed, and the rest of the ladder is bypassed. If none of the conditions is true, the final else statement is executed. The final else acts as a default condition; that is, if all the other conditional tests fail, the final else statement is performed. If there is no final else and all the other conditions are false, no action takes place. 
Following is a program that uses the if-else-if ladder to determine which season a particular month is in. When the program is compiled and executed, you will find that no matter what value you give month, one and only one assignment statement within the ladder is executed.
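The program listing arrived garbled in this copy; what follows is a reconstruction of the classic season-ladder example. The class name matches the fragment that survived, while the season helper method and the sample month value are our additions for illustration:

```java
/* Java Program Example - Java if Statement
 * This program demonstrates the if-else-if statements */
public class JavaProgram {
    static String season(int month) {
        String s;
        if (month == 12 || month == 1 || month == 2)
            s = "Winter";
        else if (month >= 3 && month <= 5)
            s = "Spring";
        else if (month >= 6 && month <= 8)
            s = "Summer";
        else if (month >= 9 && month <= 11)
            s = "Autumn";
        else
            s = "Bogus month";  // default: no valid month matched
        return s;
    }

    public static void main(String[] args) {
        int month = 4; // April; an assumed sample value
        System.out.println("April is in the " + season(month) + ".");
    }
}
```

Exactly one branch of the ladder assigns to s, so the default "Bogus month" fires only when every earlier test fails.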
http://codescracker.com/java/java-if-statement.htm
Machine learning is a continuous process that involves data extraction, cleaning, picking important features, model building, validation, and deployment to test the model on unseen data. While the initial data engineering and model building phase is a fairly tedious process that requires a lot of time spent with the data, model deployment may seem simple, but it is a critical process and depends on the use case you want to target. You can cater the model to mobile users, websites, smart devices, or any other IoT device. One can choose to integrate the model into the main application, include it in the SDLC, or host it in the cloud. There are various strategies to deploy and run the model on a cloud platform, which is the better option in most cases because of the availability of tools such as Google Cloud Platform, Azure, Amazon Web Services, and Heroku. While you can opt to expose the model in a Pub/Sub fashion, an API (Application Programming Interface) or REST wrapper is more commonly used to deploy the model in production. As model complexity increases, dedicated teams, commonly known as Machine Learning Engineers, are assigned to handle such situations. With that much introduction, let's look at how to deploy a machine learning model as an API on the Heroku platform. What is Heroku? Heroku is a Platform-as-a-Service tool that allows developers to host their serverless code. What this means is that one can develop scripts that serve one specific purpose or another. The Heroku platform is itself hosted on AWS (Amazon Web Services), which is an Infrastructure-as-a-Service tool. Heroku is a free platform, but limited to 500 hours of uptime. Apps are hosted as dynos, which go into sleep mode after 30 minutes of inactivity. This ensures that your app does not consume all of the free time while idle. The platform supports Ruby, Java, PHP, Python, Node, Go, and Scala. 
Most data science beginners use this platform to get experience running and deploying a model in the cloud. Preparing the Model Now that you are aware of the platform, let's prepare the model. When a machine learning model is trained, the corresponding parameters are stored in memory. The model needs to be exported to a separate file so we can load it directly, pass in unseen data, and get outputs. Different export formats are commonly used: Pickle or joblib, which convert the Python model object into a bytestream; ONNX; PMML; or MOJO, which is an H2O.ai export format that also allows the model to be integrated into Java applications. For simplicity, suppose we want to export the model via pickle; then you can do it like this: import pickle Pkl_Filename = "model.pkl" with open(Pkl_Filename, 'wb') as file: pickle.dump(model_name, file) The model is now stored in a separate file and ready to be integrated into an API. The Server Logic To provide access to this model for predictions, we need server code that can redirect and handle all client-side requests. Python supports several web development frameworks, and a famous one is Flask. It is a minimalistic framework that allows you to set up a server with a few lines of code. As it is a minimal package, a lot of functionality, such as authentication and RESTful behavior, is not supported out of the box; it can be added with extensions. Another option is the newly released framework FastAPI. It is much faster, scalable, well documented, and comes with a lot of integrated packages. For now, let's continue with Flask and set up a simple prediction route. 
from flask import Flask import pickle app = Flask(__name__) with open('model.pkl', 'rb') as file: model = pickle.load(file) @app.route('/predict', methods = ['GET', 'POST']) def pred(): # implement the logic to get parameters either through query or payload prediction = model.predict([parameters obtained]) return {'result': prediction} This is rough code to show how to proceed with the server logic; there are various strategies you can adopt for a better implementation. Setting up Deployment Files Heroku requires a list of all the dependencies required by our application. This is called the requirements file. It is a text file listing all the external libraries the application uses. In this example, the file contents would be: flask sklearn numpy pandas gunicorn The last library, gunicorn, sets up the WSGI server implementation that forms the interface between the client and the server, handling the HTTP traffic. Heroku also demands another file, known as the Procfile, that specifies the entry point of the app. If the server logic file is saved under the name main.py, then the command to put in this file is: web: gunicorn main:app Here "web" is the type of dyno we are deploying, and "gunicorn" acts as the mediator that passes requests to the server code in "main" and looks for "app" in "main". The app handles all the routes. Final Deployment All the preparations are done, and now it's time to run the app in the cloud. Create a Heroku account if you don't have one, click on create an app, and choose any region. After that, connect your GitHub account and choose the repo that contains these files: the server code, model.pkl, requirements.txt, and the Procfile. Then simply hit deploy branch! If it's successful, visit the link generated and your app should be live. You can now make requests to the appname.herokuapp.com/predict route and it should give out predictions. 
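Before any Heroku setup, the pickle export-and-reload cycle described above can be sanity-checked locally. This is a minimal sketch using a stand-in object in place of a trained model; a real project would dump the trained estimator instead:

```python
import pickle

class StubModel:
    """Stand-in for a trained model; predict() doubles each input."""
    def predict(self, xs):
        return [2 * x for x in xs]

# Export, exactly as in the snippet above.
with open("model.pkl", "wb") as file:
    pickle.dump(StubModel(), file)

# Reload, as the Flask app would do at startup.
with open("model.pkl", "rb") as file:
    model = pickle.load(file)

print(model.predict([1, 2, 3]))  # → [2, 4, 6]
```

If the reloaded object predicts correctly here, the same load-then-predict pair can be dropped into the Flask route.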
Conclusion This was an introduction to what Heroku is, why it is useful, and how to deploy a model with the help of Flask. There are a lot of hosting platforms that offer more advanced features such as data pipelines and streaming, but Heroku, being free, is still a good choice for beginners who just want to get a taste of deployment.
https://www.upgrad.com/blog/deploying-machine-learning-models-on-heroku/
Investors eyeing a purchase of Proto Labs Inc (Symbol: PRLB) stock, but cautious about paying the going market price of $70.10/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular is the October put at the $60 strike, which has a bid at the time of this writing of $3.30. Collecting that bid as the premium represents a 5.5% return against the $60 commitment, or a 10.1% annualized rate of return (at Stock Options Channel we call this the YieldBoost). Selling a put does not give an investor access to PRLB's upside potential the way owning shares would. If Proto Labs Inc sees its shares fall 15.2% and the contract is exercised (resulting in a cost basis of $56.70 per share before broker commissions, subtracting the $3.30 from $60), the only upside to the put seller is from collecting that premium, for the 10.1% annualized rate of return. Below is a chart showing the trailing twelve month trading history for Proto Labs Inc, highlighting in green where the $60 strike is located relative to that history: The chart above, and the stock's historical volatility, can be a helpful guide in combination with fundamental analysis for judging whether selling the October put at the $60 strike for the 10.1% annualized rate of return represents good reward for the risks. We calculate the trailing twelve month volatility for Proto Labs Inc (considering the last 252 trading day closing values as well as today's price of $70.10) to be 38%. For other put option contract ideas at the various available expirations, visit the PRLB options page.
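The quoted figures can be reproduced with a few lines of arithmetic. The days-to-expiration value below is an assumption (the article does not state the exact October expiration date), chosen as roughly the span from early April to mid-October:

```python
price = 70.10      # current share price
strike = 60.00     # October put strike
premium = 3.30     # bid collected by the put seller

cost_basis = strike - premium      # 56.70 per share if assigned
static_return = premium / strike   # the 5.5% figure

days_to_expiration = 199           # assumed; not stated in the article
annualized = static_return * 365 / days_to_expiration

print(f"{static_return:.1%} static, {annualized:.1%} annualized")
```

With those inputs the simple annualization lands on the article's 10.1% figure.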
https://www.nasdaq.com/articles/commit-buy-proto-labs-60-earn-101-annualized-using-options-2015-04-01
Hi, developers: I have a large training dataset packed in a zip file. In train.py, I load it once and then pass it into the dataloader; here is the code: import zipfile # load zip dataset zf = zipfile.ZipFile(zip_path) # read the images of the zip via the dataloader train_loader = torch.utils.data.DataLoader( DataSet(zf, transform), batch_size = args.batch_size, shuffle = True, num_workers = args.workers, pin_memory = True) If num_workers is set to a number larger than 1, the dataloader gets stuck in the procedure that reads images from the zip file; it never stops reading. How can I fix this? Thanks.
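One likely culprit (a guess based on common reports, not a confirmed diagnosis of this exact code) is that the open ZipFile handle is shared with the forked worker processes, and zipfile handles are not safe to use across processes. A common fix is to pass only the path and open the archive lazily inside each worker. A sketch, kept free of torch so it runs standalone; in real code the class would subclass torch.utils.data.Dataset and decode/transform the bytes:

```python
import zipfile

class ZipDataset:
    """Zip-backed dataset that is safe with num_workers > 1.

    The ZipFile handle is NOT opened in __init__, because a handle
    shared across forked workers can hang. Each process opens its
    own handle on first use instead."""

    def __init__(self, zip_path):
        self.zip_path = zip_path
        self._zf = None  # opened lazily, once per process
        with zipfile.ZipFile(zip_path) as zf:
            self.names = zf.namelist()

    def _zip(self):
        if self._zf is None:
            self._zf = zipfile.ZipFile(self.zip_path)
        return self._zf

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        # real code: decode the image bytes and apply the transform
        return self._zip().read(self.names[i])
```

Then construct the loader with DataLoader(ZipDataset(zip_path), ...) instead of handing it an already-open ZipFile.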
https://discuss.pytorch.org/t/dataloader-stucks/14087/2
Some languages look like the key data structures used to implement them. A common example is lisp—all those parentheses build structures that look like parse trees. Languages like Forth, and like the little language we’ll build here, look like stacks. “Concatenative” just means putting things together by putting them next to each other. A nice slogan for that might be “composition by juxtaposition”. In our language, for example, the program 1 2 + means “push a 1 to the stack, push a 2 to the stack, then pop the first two elements off the stack and push their sum on the stack”. We’ll add debugging output that looks like this: : [1] 1 : [2] 2, 1 : [+] 3 We put : in front of each line just to separate this debug output from normal printing output. The thing in square brackets is the current word (or command) that we’ve just added to the stack. On each line, we’ll then write out the stack separated by commas, which is essentially the entire state of the program at that time. With a language this simple, you really can’t go wrong with your choice of implementation language, but we’ll use Python for extra simplicity. The first thing we need to do is read in each word from standard input. Any connected string of non-whitespace characters should be treated as a word, and we’ll just skip over whitespace. Simple enough. The next thing we’ll want to do is add an empty stack. In this case, it’s just an empty Python list. We’ll choose to add new words to the beginning of that list. We also want to provide the debugging output we saw above so we can follow the trace of our programs. That can be done with a formatted string that takes the current word and the current stack and prints them out. import sys stack = [] for line in sys.stdin: for word in line.split(): stack.insert(0, word) word, *rest = stack # ... print(f': [{word}] {", ".join(stack)}') Now we can begin adding commands to our language. This will be a simple match on the current word. 
Usually, a word’s behavior involves manipulating the stack in some way and then calling through to some primitive from Python. We implement addition this way: deconstruct the stack by peeling off its first two elements, then add them and put the result back on the stack. Now we can run programs like this! 1 2 3 + + print The debugging output shows us what our program is really doing at every step. It reads almost like a deductive proof, where each step follows from the previous one. : [1] 1 : [2] 2, 1 : [3] 3, 2, 1 : [+] 5, 1 : [+] 6 6 : [print] 6 We can also add features that are usually “baked in” to a lexer or parser, but we can do it with only the stack. Let’s add a flag for whether we’re currently in a comment. When we see the word (, we enter “comment mode” by setting the is_comment flag. We continue reading in words until we see the “end comment” word ). Then we reset the is_comment flag and continue on to the next word. We can now correctly read programs like this: 1 2 ( 3 + commented out ) + print Note that while in comment mode, there’s no debug output, because the state of the stack remains unchanged throughout the entire comment. : [1] 1 : [2] 2, 1 : [(] 2, 1 : [+] 3 3 : [print] 3
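Putting the pieces described above together, here is one way the whole interpreter might look. This is a sketch, not the post's exact code: the post pushes every word before matching and prints a debug trace, both of which are simplified away here.

```python
def run(program):
    """Tiny concatenative interpreter: numbers are pushed, '+' pops
    two values and pushes their sum, 'print' prints the stack top,
    and '( ... )' is skipped as a comment. Stack top is index 0."""
    stack = []
    is_comment = False
    for word in program.split():
        if is_comment:
            if word == ")":
                is_comment = False  # end of comment
            continue
        if word == "(":
            is_comment = True       # start skipping words
        elif word == "+":
            a, b, *rest = stack     # pop the first two elements
            stack = [int(a) + int(b)] + rest
        elif word == "print":
            print(stack[0])
        else:
            stack.insert(0, word)   # any other word is pushed
    return stack
```

Running run("1 2 3 + + print") prints 6 and leaves [6] on the stack, matching the trace above.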
https://vitez.me/concatenative-implementation
First we need to start the Visual Studio environment and create a new project. To do this we will go to File, then navigate to New Project and click it. A dialog box will appear and ask which project you would like to include in the solution that will be automatically created for your project. We need to use the Console Application. Next we need to replace the text in the box at the bottom where it says ConsoleApplication1 with HelloWorldConsole, and then, after the project and solution are created, press CTRL-S to change the name of the solution file to HelloWorld in the box under the project name box and press OK. This will create a project inside a solution file. The solution file acts like the glue that binds all the projects included in it together. Later on we will discover how this is beneficial for creating projects and making class files that reference DLLs that we will code. Once the project is created we are going to edit the program.cs file. After you have opened program.cs we are going to add the text necessary to have the program output “Hello World” to the console. To do this we need to add the line Console.Out.WriteLine("Hello World!"); inside the static void Main curly brackets. After this is complete we can attempt to build our solution. To build the solution we press CTRL + SHIFT + B and the build process will begin. After the build succeeds we can run the console application by pressing CTRL + F5. This will display a command prompt with “Hello World!”. Here is the source code for program.cs: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace HelloWorldConsole { class Program { static void Main(string[] args) { Console.Out.WriteLine("Hello World!"); } } } We can now move on to the Windows Forms version of Hello World. To do this, we need to go to the solution and right-click, then go to Add, then click New Project. 
For the project we will use the name HelloWorldForms. After the project is created we are going to delete Form1.cs and create a new form by right-clicking the HelloWorldForms project, navigating to Add, then New Item, and when the dialog box appears we pick Windows Form. The name we are going to use is main.cs; press Add. We now edit program.cs to change the Form1 found in the file to main. After the Windows Form is created we can start adding controls from the Toolbox. We are going to drag a label and a button onto the form shown in the middle of the window. We will edit the properties here to make the text inside the label blank and change the name of the label from label1 to lbHelloWorld. After this is done we edit the button we dropped onto the form earlier. We will change the name of the button to btnHelloWorld and the Text of the button to Click Me!. Next we are going to use an event handler to tie the button and the label together, so that when the button is clicked “Hello World!” will appear in the label. To make an event handler for the button, go to the Properties panel and click the button at the top that looks like a lightning bolt. This will take us to all of the event handlers that this button can handle. We want the Click event handler; selecting it will create the code required to handle a click event in main.cs. Now that the wrapper is there we can code the output to the label when the button is clicked. 
Inside the curly brackets of private void btnHelloWorld_Click in main.cs, input the following line of code to link the two controls: lbHelloWorld.Text = "Hello World!"; This will make main.cs look like this: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; namespace HelloWorldForms { public partial class main : Form { public main() { InitializeComponent(); } private void btnHelloWorld_Click(object sender, EventArgs e) { lbHelloWorld.Text = "Hello World!"; } } } The program.cs should look like this: using System; using System.Collections.Generic; using System.Linq; using System.Windows.Forms; namespace HelloWorldForms { static class Program { /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new main()); } } } After all of this is completed we run the program by pressing CTRL + F5 again. The screen that appears should look something like this: The screen after the button has been pressed should look like this: Now that we have completed this tutorial you should be able to move through the Visual Studio IDE to make multiple projects under one solution, delete files within a project, create new forms and classes, and modify source code within event handlers. The next tutorial will go more in depth with the Visual Studio Toolbox and make a form with controls on it with minimal backbone code, as well as review some of the common files created and what is automatically included for you. For more information on Toolbox controls you can check out the Microsoft MSDN article on Toolbox Controls. If you are having any trouble with this project feel free to comment and I will help try to resolve the issue. Until next tutorial, Happy Hacking! 
55 thoughts on “C Sharp Development 101 – Part 1: Hello World” Thanks for the tutorial, I was worried, having previous done a few things in C# it would take a few of these before it got to anything new for me, but the explanation of Solutions and Projects, was very helpful, wasn’t quite sure what the difference was. Are you covering Windows Forms Designer too or just console-type work? The .NET suite really shines when you get to the rapid prototyping steps, specifically the forms designer. If you’re covering UI development at all, please introduce people to WPF! WPF and databinding is fantastic. Soo… Remind me what this is doing on Hack A Day exactly? Thank you for posting this! More! C# is a perfectly serviceable language for IT grunt work. Has fuckall to do with hacking though. Please can you do a section on Web Services and SOAP calls?) Guys! There are plenty of newbie C# tutorials elsewhere, we don’t need them on our favourite hardware hacking websites! Never used it for hacking exactly, but I have used it to create a dll that interfaced with a micro and an LCD. Which was a super easy way to create a couple other apps in C# which used the LCD screen. As code behind in ASP.NET, web interfaces should be pretty easy to build also. I also built a conversion program, that goes between ascii,hex,binary,decimal which has come in as a handy tool. Not exactly hacking, but useful in the process. Nice but I gotta echo others: what’s this have to do with hacking? Wouldn’t C be a little more appropriate than C# for a hacking blog? Or Java…something we can port to hardware… @Trollicus C# is an open standard, and it does have portability. These aspects are most prominantly displayed in the Mono Project and Silverlight. The .NET Framework is what is locked down, but it has some very broad application (ASP.NET, Silverlight, Desktop applications, Windows Phone 7). Also, Microsoft open sourced the .NET Micro Framework which is perfect for hacking with C#. 
The Netduino is a great example of this in practice. I would agree, I use C or Assembly for micro controller programming, more than I use C# for anything resembling what would be considered HaD material. But there is the C# micro framework, for this purpose, never used it myself, but its out there. Not trying to be a fan boy, I’ve just found it useful and easy to use for certain things. I’d love to see a C tutorial for micro controllers too as I’ve recently started using it, and its more complicated to understand. IMHO, nothing this editor has posted is worth reading. A list of Twitter clients and junk regurgitated from the _very_ comprehensive Android/C#+.NET docs. How useful. :/ Even if this stuff did belong on HaD, which it doesn’t, it has nothing to add over the official sources.) ———————————– That comment was worth repeating. (Some emphasis mine). I can’t understand the choice of C# for this either. It’s actually quite bizarre that someone would think C# was a better choice for a tutorial series than C/C++. “”Guys! There are plenty of newbie C# tutorials elsewhere”” yea I know, after 27 days of having extra stuff installed on my machine I went elsewhere to learn how to make it do something cant wait for Halloween for number 2 in the series (if you cant tell by the tone in my text… pick up the pace if you insist on doing this please) “Proprietary versiion of C” Someone has clearly never used C. C# has as much to do with C as software “engineering” has to do with engineering. The similarities are only in the names. I am sorry to say that this is the worst article I’ve ever read on this site. If Hack a Day continues marketing proprietary technology and encourages (many first-time) programmers to become part of this culture, this website will suffer bad from decreasing visitors. 
(Still I admit that the language and the IDE provide a practical “click-and-play” methodology that serves well for prototyping and have used it professionally, but I think people can read about this elsewhere.) Hack a Day: listen to your faithful readers or you will make a big mistake. Stop bitchin, guys… C# is the Arduino of C, and is really usefull to quickly test some code, mostly for the awesome ability to edit code while debugging and moving around the pointer as you please. It makes a lot of tasks completly trivial, and is very easy to learn and use. I’m pretty sure HaD will add a nice tutorial on hacking c# programs with Refractor too, right? Damn, everyone is hating today. Apply for a job @ HAD and write the articles you think should be on hear. Stop bitchin! Hacking is not only hardware. You can use C# to create windows hacks. Or even talk to hacked hardware. A good hacker should understand the need for multiple programming languages and skill sets to be a great one man hacker. (cough cough, like me) :) Okay my turn to gripe, I think an Assembly 101 lesson would be great. It can be used on many micro processors to get the speed you need out of them. The Arduino may be easy to use but can you get it to do time sensitive control? digitalwrite does take some time to do. @M4CGYV3R: I have to agree with you. Rapid prototyping is really the only reason I use C# I would like to see this series transition into something more useful (hardware interfacing), instead of repeating the hundreds of other C# tutorials that are around. response.write “Hello World” I prefer this. People have to start programming somewhere, and I guess C# is better than C/C++. Beginners that start with those tend to get locked into a certain mindset… The folks screaming in rage have a point though. Python or Perl or Lisp or assembler wouldn’t dilute the spirit of hacking the way this does. 
(C# could be justified due to its ease of GUI creation, but this tutorial doesn’t make a GUI so…) To heck with the debbie-downers. Keep posting tutorials. One needs to learn in order to hack. If you hate C#, use C++ CLI instead. The latter allows you to use std libraries as well as .NET shit. But whatever. .NET is fucking awesome for Windows GUI programming, which I think is essential if you want to make a decent front-end for your Windows projects. Of course, it’s completely useless for portability, which in that case you should be looking at other options anyway. I wish Microsoft would ditch MSIL and its JIT compiler, and compile .NET directly to native code instead. I am currently using a combination of MS VS10/.Net and MonoDevelop/Mono to write a Winforms based application that talks to x0xb0xes via the serial port. .Net & C# are great for productivity (I achieve more with less code), and port well (with a few gotchas) to Mac OS X and Linux. @Tachikoma, you haven’t done much coding with anything besides .net have you. windows GUIs are simple to do in ASM or C. i have to agree with r_d.. nothing this editor has posted is worth reading. on top of that almost everything posted has been *borrowed* and reworked from other sites. the original content posted is is at such a low level i doubt 5th grader might find intellectually challenging. This is HAD one of the few sites that actually posts about hardware hacks, if i wanted poorly done hello world examples i would go to one of the thousands of programming tutorials on the web. C# can be used for embedded development. Scott Hanselman blog has some details on it. NetDuino (runs tiny CLR) .NET micro framework Not tried it myself, but it is probably worth investigating. Wow, this I didn’t expect. Did you get sponsored by Microsoft to spread this disease? Please make some more, so I can finally shorten my RSS-Feed list. @Tachikoma( a fellow GITS fan myself, so I love the name.) 
It would be ill-advised to give Microsoft any suggestion that a direct compile of .NET to native code( even if it’s through an expansion of commands using .dll files) is a good thing. As far as all things are considered, much of the code itself is very unstable/disruptive to many internet-based protocols. All you’ll end up with is another broken language that allows Microsoft to point the finger at various internet protocols while they tried to showcase a “better” solution to a non-existent problem. @Mathew We get it, you don’t like tutorial on HAD. The great thing is nobody care what you think. Hmm, I’m not sure if I’m following you… A great portion of .net libs in C# and C++ CLI has nothing to do with networking. I just don’t see how an executable in MSIL or x86 binary has anything to do with standardisation of protocols? My gripe is that the whole thing appears to be a wrapper for WIN32 and MFC APIs, which get called via JIT compiled code. If you are going to develop a standalone windows application, why not compile it to native x86 binary straight away? @everyone asking why: Hacking isn’t just robotics and microcontroller projects. We feel that C# has its place in our tutorial line up. However, we plan on doing much more. C# is what we have available right now. We are currently putting together other tutorial series as well as hiring to expand our capabilities. stay tuned, or even better, contribute! What languages would you like to see in the software development tutorials? Email your suggestions to me caleb@ or the tip line instead of filling this thread please. —-edit— re-reading that, it seems like I’m being short and lecturing… I’m not. in short: great feedback guys, we’re giving you what we can, let us know what else you want! > using System.Collections.Generic; > using System.Linq; If you’re going to write a tutorial, please at least understand the source material – none of these libraries are required for the code you’ve written! 
It would be a much more valuable exercise to explain some basics as you go, such as what these USING commands are for, and how about some of the additional tools you can use (most for free!) in Visual Studio, such as refactoring programs (JetBrains ReSharper, for example) or even MS’s own FX Cop – all of these can underpin basic, important principles in writing good code and are worthy of using from the start. I’m conflicted on this one. I earn my living developing in C# so this is of little use to me, but I’m all for sharing information. I have to agree with the posts that indicate that are 1000’s of C# starter tutorials. Perhaps instead of writing their own Hello World tutorial, HAD should find some good ones and link to them for the beginning stuff (to get people up to speed), and then write a tutorial on how we can apply .Net to some hardware projects. For example integrating with a microcontroller. Click a GUI button to start/stop some blinkenlights or something. Guys lets see how the series pans out before getting worked up. In a few tutorials time the author might have us communicating with our AVR projects over USB interface or something similar. This is only beginning and a lot of people don’t know how to do this “simple” stuff already. Personally it is no use to me as I work with this kind of thing daily, but I just chose to not look at the posts (unless it is a slow Friday afternoon!). Give the author a break man! I code in C/C++ but not in this object oriented crap. C# is another language what we don’t need. I don’t know it’s story and I don’t care about it either but they probably made it to compete with Java. Good C coders dont use lame shits like boost library either but code their own stuff, especially not C#… Theres no point of it anyway because everyone is developing apps for the web now, java is perfect for that. 
I love C#, I use it every day; it's my own personal b**ch when I'm messing around in Windows. Although I think an Assembly 101 (asm, if you will) would suit HaD better, and is something everyone should know. I'm not a fanboy, but I see many hateboys around here. Face it: if Windows wasn't heavy like a truck, the world wouldn't know what multi-cores were. It's heavy software that propels the industry to go further, and you can stick with your Linux or whatever suits you, but it's counter-productive to tear down Windows if you want an awesome machine in your room.

WOW, quit crying. If you don't want to read it, DON'T! It's the internet, not a newspaper. The article isn't taking up space that could be used by another story. And C# is great for hacking. I've used it to make dozens of little apps to interface with my embedded systems. I don't care whether or not it's proprietary; it's quick and easy. Good coders use libraries like Boost because they know they will have fewer bugs than reinventing the wheel, and it saves time.

I'm currently in a computer science program in cegep (college) in Canada, and our programming classes are all about C#. I hate it with a passion. I've coded in C for quite a long time, more recently in Haskell, but C# makes me wanna cry. Especially when you're running GNU/Linux. Even more when you have to use XNA on top of that, which has zero compatibility with *nix.

Why not take a course without C#? Most universities offer courses with C, C#, Java, and others. Crazy that this is on Hack a Day, even if it can be used in embedded systems.

@terribledamage, I just want to say this: I HATE WEB APPS! They are slow and I don't want to have to be on the internet to use them. That said, I've seen the new web-based game streaming service and it's awesome. Maybe in a couple more years, when everyone is downloading at 20MB/s, not 20Mb/s, it will be okay. Still, you rely on the web service being up and running to use it.

C# over C++. You can write code faster in C#.
Garbage collection in C#. Yes, C++ is fast, but will you notice the difference? Not that I use C# – I've only played with it. I get paid to use LabVIEW :) and of course RUBY!

Mhhh, due to my professional field I've only done some embedded programming, during my BSc and privately. In my current company we are developing server programs mainly with Java and Spring… it's a pain in the ass; if you compare it to C#, it's just bullshit. Yes, C# is not intended for use on embedded hardware. For use in WCF services or desktop applications, it's the most modern language and framework. Yes, you can write GUIs in C++ or even in asm, but in professional development, speed of development is more honoured (time is money) than a C program which is 10 percent faster than its C# counterpart. Don't misunderstand me, I love C(++), but in most cases it's not applicable in (very) fast-evolving business. Your boss will fire you if you tell him in the daily scrum "sorry, yesterday I lost a pointer and this morning I found it…" That's not acceptable in most run-of-the-mill business applications. C# has the big advantage that there is one framework. Have a look at 10 Java job offers: you'll see 20 frameworks. .NET doesn't have that problem. Don't curse everything from M$ – the OS is trash, but be open to new things…

Perhaps the author should take a look at the Console.WriteLine function. There is nothing wrong with writing directly to the Out stream, but the Console.WriteLine function provides some nice overloads for parameters/writing things other than strings.

@k4l: This is a hacking site. We shouldn't give a fsck what's "acceptable in most business applications". We care about what's useful to _us_, and what's different, interesting, and/or clever. Hello World in C# is not useful to me, and it is not different/interesting/clever.

Wow! Now everyone can learn to develop for the new Microsoft Kin phones! Oh wait – they're already discontinued.
Microsoft constantly targets inexperienced programmers and tries to hook them into proprietary backwaters. Consider all the time and effort squandered learning other proprietary MS stuff like Visual InterDev, Visual J++ and Visual Test, only to have them discontinued. Consider products like Microsoft Money and Microsoft Office Accounting. Did you use them for your business? They were also discontinued, and the file formats are proprietary and undocumented. Of course, the IRS still holds you liable to provide the information regardless of the MS shrink-wrap weasel words. You can lose your business and go to prison because MS won't support their own old software and won't release the code. The long list of discontinued proprietary platforms from Microsoft won't even fit in this space. All the .NET products are just another dead-end proprietary backwater. JavaScript runs on every major platform, faces the net and scales. .NET does none of those things. .NET is a waste of time – a strategic distraction by MS – and hosting such infomercials is a terrible disservice to your readers.

At first I thought this would be some nice programming 101… but after going over this post I'm starting to doubt that. Here are some things you should consider when posting the next part of this:

1. Don't mess with the tabs! They are there for a reason: C# is one of those languages which needs those brackets pretty much everywhere. If you mess with the indentation the IDE does, people who don't know C# have to put time into understanding which open bracket belongs to which close bracket. Even I myself was a bit confused when I saw the first code example here.

2. Don't make simple things complicated! You used Console.Out.WriteLine when you could easily have used Console.WriteLine… for a newbie, less code means he has less to learn. And you don't need to learn much with the simplest things ever.

3. Don't make simple things complicated!
(I know it's the same as 2.) Why did you use Ctrl+F5 to run the program? And why did you build it before that (and used Ctrl+Shift+B)? There are some nice one-button shortcuts for this! F6 will build the project and F5 will run it. Oh, and for purposes like learning a new language you don't need to mess with building, because if you try to run the program with F5 it will be built automagically!

PS: To all those "hackers" who are trolling around because they don't like C#, .NET or whatsoever: where is your problem? Did someone make you read this article against your will? The goal of such a series of tutorials is to show the people who can't do anything how they can do something. And I wouldn't even think about starting a tutorial about assembler targeted at people who know nothing about programming. And people who already know a programming language wouldn't read tutorials, but would start programming in the new language and look for the things they need.
http://hackaday.com/2010/09/30/c-sharp-development-101-part-1-hello-world/
Welcome to Part 2 of the Python How to Program series. I'm going to mainly focus on variable types specific to Python in this article. I will also go over many basic topics that make Python different from other programming languages.

How Python Handles White Space

If you have programmed in other languages, you are probably used to enclosing your code between brackets { }. Python instead separates its code segments through the use of white space. So if you were creating a function in Python, you would indent all of the code that lies in that function (normally 4 spaces). You don't have to indent 4 spaces, but you must always indent using the same number of spaces, or the interpreter will throw an error. Here is an example program, in which I demonstrate the use of white space for the function called main:

#!/usr/bin/python3
# Messing around with whitespace in Python

'''
This is a multiple line comment,
for when you get kind of wordy
'''

def main():
    print("Hello World!")

if __name__ == "__main__":
    main()

What Does this Program Do

If you didn't read the previous tutorial, Python How to Program Tutorial, you should probably check it out before moving on. I'll explain every line above either way:

#!/usr/bin/python3
# Messing around with whitespace in Python

def main():

Here I'm defining a function named main. Unlike in many languages, the main function holds no special significance in Python; you define it the same way you create any other function. It is, however, common for programmers to organize a program's functionality in a function named main. The parentheses () signify where values would be bound to specific parameter names, if any were passed. None are expected in this program, and I'll talk more about functions in a future article. You then signify the beginning of the function body with a colon : . As I explained previously, every line thereafter that is indented is considered to be part of the function main.
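As a brief aside before continuing the walkthrough (this sketch is not from the original article), the rule that a block must be indented consistently can be checked directly with the built-in compile() function, which raises an IndentationError when indent widths are mixed:

```python
# A function body indented with 4 spaces throughout compiles fine.
good = "def f():\n    x = 1\n    return x\n"
compile(good, "<example>", "exec")

# Mixing 4-space and 2-space indentation in the same block does not.
bad = "def f():\n    x = 1\n  return x\n"
try:
    compile(bad, "<example>", "exec")
except IndentationError as err:
    print("IndentationError:", err.msg)
```

Using compile() here just lets us catch the error as an exception; writing the badly indented code directly in a script would stop it from running at all.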
The print function will print to the screen any characters enclosed in both the parentheses and quotes.

Special Note: You may also have noticed that statements don't end with a semicolon, like in most other programming languages.

How a Python Script is Processed

When a Python script is executed by the interpreter, it skips over all of the functions that are defined and goes directly to the first statement that is not indented and not a function definition. In this case it goes to this line of code:

if __name__ == "__main__":
    main()

Wrapping Up the Basics

Those are some of the basic differences between Python and other languages. Now I'll get into the data types that are available in Python. Before we start, I'd like to explain some jargon you may hear about data types in Python.

What Does Mutable and Immutable Mean

These are big strange words that aren't very complicated. Whenever you create a variable of a specific data type in Python, it is either mutable or immutable. These words describe how the data is stored in memory. When you assign a new value to an immutable variable, that new value is stored in a different location in memory. Beginning Python programmers are often confused into thinking that if a variable is immutable, it can't be changed. While this is technically true, in reference to how data is stored in memory, it isn't true that you can't assign a new value to an immutable variable. For example, it is completely legal to assign a new value to an integer named age, even if it already had a value.

Naming Rules for Variables

When you are naming variables in Python, follow these rules:

The Protected Python Keywords

You cannot name your variables any of these names:

and, continue, except, global, lambda, pass, while, as, def, False, if, None, raise, with, assert, del, finally, import, nonlocal, return, yield, break, elif, for, in, not, True, class, else, from, is, or, try

Boolean Data Type

An integer is just a number with no decimal point.
There are two types of integers in Python: the integer (int) and the boolean (bool). Here I'll go over boolean variables first. You can assign True or False to a bool. If you assign any other number to a bool, it will automatically become whatever type that number would be. Here is some code (type() will tell you the data type of the variable it is passed):

a = True  # a is considered a boolean type now
b = 1     # b is considered to be of type integer
print(type(a), type(b))  # The output would be: <class 'bool'> <class 'int'>

a = 0  # After this assignment, a is now considered to be an integer
print(type(a))  # The output would be: <class 'int'>

You can compare boolean types with the and and or operators:

a = True
b = False
print(a and b)  # The output would be False
print(a or b)   # The output would be True

You could also compare booleans with the following operators, which I'll cover later: <, >, <=, >=, etc.

The Integer Type

Integers in Python can be as big as the computer's memory allows. You define variables in Python based on the value assigned. Hence, with a = 1, a becomes an integer by default. An integer can be assigned in decimal, binary, octal or hexadecimal. So as not to confuse some people, I'll skip over this topic; it probably won't be of interest to 95% of the people reading this.

Floating Point Type

A floating point number is just a number with a decimal value. There are 3 types of floats in Python: float, complex, and decimal. I'll skip over complex types, since they also will be of little interest to most people. If you assign a value to a variable that contains a decimal, it will become of type float by default. The values assigned to a float will normally remain accurate up to 17 decimal places.
This won't be true on most machines, however:

c = 1.234567891011121314
print(c)  # The output on my computer is 1.23456789101 (accuracy to 11 decimal places)

The Decimal Type

If you demand that your floating point number be more accurate than 11 places, define a decimal type. To use this special data type, you must import the decimal module, or library, by starting your Python program with the line import decimal. Here is how you load a value into a decimal variable:

d = decimal.Decimal("1.234567891011121314")
print(d)  # The output would be 1.234567891011121314

You must assign a decimal value by using the function decimal.Decimal(). The value inside the parentheses can be of type integer or string, but not float. After these values are assigned, you can treat them as if they were any other variable. For example:

d = decimal.Decimal("1.234567891011121314")
e = decimal.Decimal("1.234567891011121314")
print(d)      # The output would be 1.234567891011121314
print(d + e)  # The output would be 2.469135782022242628

The String Type

Strings contain a sequence of characters. You can assign a string value with single quotes ', double quotes ", or triple quotes """. You would use the triple quotes when your string is multiple lines long. I also show another way to create multiple line strings below:

f = """This is a
multiple line string"""
print(f)

g = ("Another multiple" + " " "line string")
print(g)

The Output

This is a
multiple line string
Another multiple line string

There are more ways to mess with strings, but I'll leave that for another article.

What is a Tuple

A tuple in Python provides you with a way to collect multiple values in one variable. Tuples are immutable, meaning once they are created you cannot add additional values to them. You can, however, create an entirely new tuple and assign it to the same variable.
Here is some example code:

h = ("Maine", "Pennsylvania", 1, 2.345)
print(h)

The Output

('Maine', 'Pennsylvania', 1, 2.345)

If you want to create a 1-item tuple, you must end that 1 item with a comma. If you don't, it will be considered a string. For example:

i = ("New York",)

I'll go over the many things you can do with tuples in a future article.

What is a Python List

Lists are similar to tuples except for the fact that they are mutable. This means you can add additional values to them after they are created. They are created with square brackets, as you can see here:

j = [1, 2, "Happy", "Sad"]  # Create a list with 4 values
print(j)

j.append("Dog")  # Add the value "Dog" to the end of the list
print(j)

print(j[0])  # Print the first item in the list

j.remove(j[1])  # Remove the first occurrence of the value found at index 1
print(j)

The Output

[1, 2, 'Happy', 'Sad']
[1, 2, 'Happy', 'Sad', 'Dog']
1
[1, 'Happy', 'Sad', 'Dog']

Here I'm showing you how to edit values in a list. You may be a bit confused by what you're seeing here. What you have to understand is that everything in Python is what we call an object. To keep it simple, that means that every variable we create in these examples has access to functions that can be called by using the dot operator ".". So, I can perform the action of adding an item to the list named j by just calling a function prebuilt into all list variables. That function is called append, and you can see how I called it above. If you find all of this object stuff to be confusing, don't worry, I'll cover it in detail later. For now, just remember how to manipulate lists.

What is a Dictionary in Python

Dictionary types are similar to tuples and lists in that they contain a collection of values. One difference is that there is a key associated with each value in a dictionary.
Here is some sample code to help you understand:

k = {"Age": 35, "Height": "6'3", "Weight": 170}  # Creates a dictionary; notice the key/value pairs

print(k.get("Age"))   # get() outputs the value associated with the key named "Age"
print(k.items())      # items() outputs all of the keys and values for the dictionary
print(k.copy())       # copy() outputs the keys and values as well
print(k.values())     # values() only outputs the values

k.pop("Height")       # pop() removes the value and key associated with the key "Height"
print(k.items())

The Output

35
dict_items([('Age', 35), ('Weight', 170), ('Height', "6'3")])
{'Age': 35, 'Weight': 170, 'Height': "6'3"}
dict_values([35, 170, "6'3"])
dict_items([('Age', 35), ('Weight', 170)])

That pretty much explains how dictionaries work in Python. I'll cover them in much more detail in a future article.

That's All Folks

I covered a bunch of things in this article. You now know a lot about the quirkiness of the Python programming language. You also know about all of the main data types available in Python. If you have any questions or comments, leave them in the comment section below.

Till Next Time

Think Tank
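To wrap up, here is a short recap sketch (not part of the original article) exercising the behaviours described above: float vs. decimal accuracy, tuple immutability, and list/dictionary mutability:

```python
import decimal

# Floats accumulate tiny binary rounding errors; Decimal does exact decimal math.
print(0.1 + 0.2)                                        # 0.30000000000000004
print(decimal.Decimal("0.1") + decimal.Decimal("0.2"))  # 0.3

# Tuples are immutable: item assignment is rejected outright.
h = ("Maine", "Pennsylvania", 1, 2.345)
try:
    h[0] = "Vermont"
except TypeError:
    print("tuples don't support item assignment")

# Lists and dictionaries are mutable: they can be changed in place.
j = [1, 2, "Happy", "Sad"]
j[0] = "Vermont"
print(j)                                                # ['Vermont', 2, 'Happy', 'Sad']

k = {"Age": 35, "Weight": 170}
k["Age"] = 36          # update an existing entry
k["Eyes"] = "brown"    # add a brand-new key/value pair
print(k["Age"], sorted(k))                              # 36 ['Age', 'Eyes', 'Weight']
```

Running this on any recent Python 3 should reproduce the comments shown; the specific values are just examples reused from the article above.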
http://www.newthinktank.com/2010/08/python-how-to-basics-pt-2/
Configuration as Code, Part 6: Testing Configuration Scripts

In this blog post, we are going to look at how to test TeamCity configuration scripts.

- Getting started with Kotlin DSL
- Working with configuration scripts
- Creating build configurations dynamically
- Extending Kotlin DSL
- Using libraries
- Testing configuration scripts

Given that the script is implemented with Kotlin, we can simply add a dependency on a testing framework of our choice, set a few parameters and start writing tests for different aspects of our builds. In our case, we're going to use JUnit. For this, we need to add the JUnit dependency to the pom.xml file:

<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.12</version>
</dependency>

We also need to define the test directory:

<testSourceDirectory>tests</testSourceDirectory>
<sourceDirectory>settings</sourceDirectory>

In this example, we have redefined the source directory as well, so it corresponds with the following directory layout.

Once we have this in place, we can write unit tests as we would in any other Kotlin or Java project, accessing the different components of our project, build types, etc. However, before we can start writing any code we need to make a few adjustments to the script. The reason is that our code for the configuration resides in the settings.kts file. The objects that we declared in the kts file are not visible in the other files. Hence, to make these objects visible, we have to extract them into a file (or multiple files) with a kt file extension.

First, instead of declaring the project definition as a block of code in the settings.kts file, we can extract it into an object:

version = "2018.2"

project(SpringPetclinic)

object SpringPetclinic : Project({
    …
})

The SpringPetclinic object then refers to the build types, VCS roots, etc. Next, to make this new object visible to the test code, we need to move this declaration into a file with a kt extension: settings.kts now serves as an entry point for the configuration where the project { } function is called. Everything else can be declared in the other *.kt files and referred to from the main script.
After the adjustments, we can add some tests. For instance, we could validate that all the build types start with a clean checkout:

import org.junit.Assert.assertTrue
import org.junit.Test

class StringTests {

    @Test
    fun buildsHaveCleanCheckOut() {
        val project = SpringPetclinic
        project.buildTypes.forEach { bt ->
            assertTrue("BuildType '${bt.id}' doesn't use clean checkout", bt.vcs.cleanCheckout)
        }
    }
}

Configuration checks as part of the CI pipeline

Running the tests locally is just one part of the story. Wouldn't it be nice to run validation before the build starts? When we make changes to the Kotlin configuration and check them into source control, TeamCity synchronizes the changes and will report any errors it encounters. The ability to add tests allows us to add an extra layer of checks to make sure that our build script doesn't contain any scripting errors and that certain things are validated, such as the correct VCS checkout (as we've seen above), that the appropriate number of build steps is defined, etc.

We can define a build configuration in TeamCity that will execute the tests for our Kotlin scripts prior to the actual build. Since it is a Maven project, we can apply a Maven build step – we just need to specify the correct path to pom.xml, i.e. .teamcity/pom.xml. A successful run of the new build configuration is a prerequisite for the rest of the build chain: if there are any JUnit test failures, the rest of the chain will not start.

8 Responses to Configuration as Code, Part 6: Testing Configuration Scripts

Jakub Podlešák says:
November 8, 2019

Hi Anton, I find the post very useful overall. Many thanks for it. I am only wondering why there is this mixture of Kotlin-based config code and a UI-based build step example (this .teamcity/pom.xml Maven invocation). When you already have your project configured as Kotlin code, it would be useful to also show the rest as code.
A more generic question is related to a pull request workflow with a TeamCity-based gate (as described e.g. in). There, ideally (IMHO), if you make a pull request that contains a gate update (you touch anything in this .teamcity subdirectory), it would be nice if the TC-related change was reflected in what is actually built as part of the pull request build process. Is it possible at all? Is there any blueprint for such a configuration as code? Thanks a lot in advance for any comments and thoughts! Cheers, ~Jakub

Anton Arhipov says:
November 8, 2019

Hello Jakub!

> I am only wondering why there is this mixture of Kotlin based config code and UI based build step example

In this blog post, I show how you use the Kotlin DSL to configure the build and how it's reflected in the UI. It is not about editing the config in the UI.

Anton Arhipov says:
November 8, 2019

For the pull-request-related question, you should take a look at this blog post: Although this blog post explains it all from the UI perspective, you can serialize all the config to Kotlin DSL as described in the first post about Kotlin DSL:

Jakub Podlešák says:
November 12, 2019

Thanks for the quick response and the links. I have successfully configured a pull request gate build project as code in TeamCity, but that was not the issue. The issue that I am trying to solve now is that gate build updates (changes to .teamcity stuff, which could be part of any PR) are not tested as part of the gate build (the corresponding Kotlin code is taken from master and not from the actual PR branch). I know that this is kind of a chicken-and-egg problem, but I hoped it should be solvable somehow (a PR gate build should be based on the corresponding branch). The thing is that the gate-related Kotlin code is taken from the master branch (maybe a configuration issue, but I do not know how to configure it otherwise).
The gate is run, and if it succeeds and the branch is then merged with master, the following build could break (because the gate program changed with the previous commit, and the change was not tested). Any thoughts on that?

Anton Arhipov says:
November 13, 2019

Yeah, it is a chicken-and-egg problem; you just have to choose which should come first 🙂 In fact, what's under .teamcity is a Maven project. So if we think of it not as configuration but as application code, then we can just set up a build configuration that will invoke the Maven goals and run the tests, and then merge the changes from the branch into master. Maybe it is even worth using a dedicated repository for the .teamcity configs.

Jakub Podlesak says:
November 12, 2019

Hi Anton, Many thanks for the quick response. I think I know how to configure the gate build as code (as I have it running already); the thing is that I do not know how to make sure that build definition updates (that might be part of a pull request) are then also taken into account by TeamCity when the build is run for an actual pull request. I.e. currently the build definition (.teamcity content) is always taken from the master branch, the build then runs against the pull request branch (all content outside of the .teamcity space), and the branch is then merged to master. This way people could break the build definition if they incorporate new incompatible changes to the .teamcity space (which would pass the gate build for the actual pull request, but could fail the next one). My question is whether the above-mentioned scenario is possible due to some misconfiguration on my side, or if it is something that cannot be avoided in principle (I know the situation described here resembles the chicken-and-egg problem). That is the reason why I asked for a blueprint. I can share an example project as requested. Cheers, ~Jakub

Anton Arhipov says:
November 13, 2019

Please share the example!
Maybe I could come up with a solution that is generic enough and post it as a follow-up article to this blog post series – everyone would benefit!

Jakub Podlesak says:
November 12, 2019

I apologise for posting the same question twice; I got a web proxy error initially and thought that my first post was lost, so I then submitted the other one, only to realise the first post was not actually lost…
https://blog.jetbrains.com/teamcity/2019/05/configuration-as-code-part-6-testing-configuration-scripts/
Pages: 1

I have very little experience in programming GTK2; nevertheless I wanted to try it on Arch Linux. After a 'pacman -S gtk2' (there seems to be no gtk2-devel package) I thought I was all set to compile a hello world example... alas. While the pkg-config output for "pkg-config --cflags gtk+-2.0" and "pkg-config --libs gtk+-2.0" seems correct, I get a huge amount of errors. It looks pretty much like this: …69167.html This was the only similar problem I could find online; unfortunately I don't know how to apply the answer in that thread. (It suggests the linker can't find the GTK libraries.) Is there some configuration required after installing the gtk2 package? If so, I didn't find any documentation about it in the forum/wiki. Since no one else posted this problem, I suppose normally it works out of the box? All help is very appreciated. I'd love to see that hello world GTK2 example pop up.

arch has no separate devel packages as you already found out, so I guess it should work. I haven't coded native GTK yet, only PyGTK, and that has always worked for me without problems.

arch + gentoo + initng + python = enlisy

I also would assume it works without problems; it does on both SUSE and Ubuntu. That's why I wonder if I'm the only one with this problem. Can anybody test if they can compile this hello world example?

I followed your links and copy-pasted helloworld.c into a text editor, saved it and compiled it without problem. I used the following to compile it:

gcc -Wall -g helloworld.c -o helloworld `pkg-config --cflags gtk+-2.0` `pkg-config --libs gtk+-2.0`

I did make sure that everything was on one line in the console, however.

starthis, can you post the exact code and errors you're getting... usually when you get spam like that from header files, it's something like a missing semicolon or some odd syntax error that mucks everything up.

The problem seems to be that many of the /usr/include dirs are not readable for the users group.
It seemed that this was the problem. As root it compiled fine. I started changing the permissions in /usr/X11R6/include and /usr/include, but it seems there is still more to change. I wonder how these perms can be set so wrong? I didn't alter them manually (e.g. gtk2 was freshly installed). And they appear like this:

[starthis@morpheus X11R6]$ ls -al
total 28
drwxr-xr-x  6 root root 4096 2005-03-27 21:59 .
drwxr-xr-x 13 root root 4096 2002-09-04 18:07 ..
drwxr-xr-x  2 root root 4096 2005-03-31 17:19 bin
drwx------  6 root root 4096 2005-03-27 21:59 include
drwxr-xr-x  6 root root 8192 2005-04-09 18:52 lib
drwxr-xr-x  7 root root 4096 2005-02-18 17:41 man

After changing the permissions to 755 on the entire /usr/include & /usr/X11R6/include all went fine 8)

Well, I don't get any error related to permissions, but pure errors in code:

[tobias@buran learn_c]$ gcc -Wall -g 01_gtk2_test.c -o 01_gtk2_test `pkg-config --cflags gtk+-2.0` `pkg-config --libs gtk+-2.0`
01_gtk2_test.c: In function 'main':
01_gtk2_test.c:10: warning: passing argument 2 of 'gtk_init' from incompatible pointer type
01_gtk2_test.c:12: error: incompatible types in assignment
01_gtk2_test.c:13: error: 'hauptfenster' undeclared (first use in this function)
01_gtk2_test.c:13: error: (Each undeclared identifier is reported only once
01_gtk2_test.c:13: error: for each function it appears in.)
01_gtk2_test.c:16: warning: control reaches end of non-void function

This is how I compiled it:

gcc -Wall -g 01_gtk2_test.c -o 01_gtk2_test `pkg-config --cflags gtk+-2.0` `pkg-config --libs gtk+-2.0`

I get the same as root.

-neri

That's an error in your code then. Could you post the main function that doesn't compile?

> That's an error in your code then. Could you post the main function that doesn't compile?
#include <gtk/gtk.h>

#define WID GtkWidget

void killall();

int main(int argc, char *argv[])
{
    WID p_MainWindow;

    gtk_init(&argc, argv);
    p_MainWindow = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_signal_connect(GTK_OBJECT(hauptfenster), "destroy", GTK_SIGNAL_FUNC(killall), NULL);
    gtk_main();
}

void killall()
{
    gtk_main_quit();
}

Well, I have no clue about native GTK2 stuff, but 'hauptfenster' should be 'p_MainWindow' I guess. Anyway, it gives me an error about something wrong in gtk_window_new() then. As I said, no idea about the GTK API.

-neri

I will compile this example when I get home to be sure, but I think the code should compile fine with the following changes:

WID *p_MainWindow;
...
gtk_signal_connect(GTK_OBJECT(p_MainWindow), "destroy", GTK_SIGNAL_FUNC(killall), NULL);

gtk_window_new() returns an address, so you need a pointer.
https://bbs.archlinux.org/viewtopic.php?pid=91832
Hi all, I'm glad to announce the release of IPython 0.6.6.

IPython's homepage is at: and downloads are at:

I've provided RPMs (Py2.

Release notes
-------------

This release was made to fix a few crashes recently found by users, and also to keep compatibility with matplotlib, whose internal namespace structure was recently changed.

* Adapt to matplotlib's new name convention, where the matlab-compatible module is called pylab instead of matlab. The change should be transparent to all users, so IPython 0.6.6 will work both with existing matplotlib versions (which use the matlab name) and the new versions (which will use pylab instead).

* Don't crash if pylab users have a non-threaded PyGTK and they attempt to use the GTK backends. Instead, print a decent error message and suggest a few alternatives.

* Improved printing of docstrings for classes and instances. Now class, constructor and instance-specific docstrings are properly distinguished and all printed. This should provide better functionality for matplotlib.pylab users, since matplotlib relies heavily on class/instance docstrings for end-user information.

* New timing functionality added to %run. '%run -t prog' will time the execution of prog.py. Not as fancy as Python's timeit.py, but quick and easy to use. You can optionally ask for multiple runs.

* Improved (and faster) verbose exceptions, with proper reporting of dotted variable names (this had been broken since IPython's beginnings).

* The IPython.genutils.timing() interface changed; now the repetition number is not a parameter anymore, fixed at 1 (the most common case). timings() remains unchanged for multiple repetitions.

* Added ipalias(), similar to ipmagic(), and simplified their interface. They now take a single string argument, identical to what you'd type at the ipython command line.
These provide access to aliases and magics through a python function call, for use in nested python code (the special alias/magic syntax only works on single lines of input). * Fix an obscure crash with recursively embedded ipythons at the command line. * Other minor fixes and cleanups, both to code and documentation. The NEWS file can be found at, and the full ChangeLog at. Enjoy, and as usual please report any problems. Regards, Fernando.
https://mail.python.org/pipermail/python-list/2004-December/249943.html
There are times when you get the below warning message in the Visual Studio XAML designer when using the MapControl:

The name “MapControl” does not exist in the namespace “using:Windows.UI.Xaml.Controls.Maps”

I got this error when trying to open the Universal Windows Apps MapControl sample project in Visual Studio 2015. When I built the project and ran it on the Windows 10 Mobile emulator, it would work fine, but the designer showed the error “Invalid Markup”.

The fix to the problem was pretty simple: just change the build configuration from x64 to x86, and that fixed the issue in the designer.
https://developerpublish.com/visual-studio-2015-the-name-mapcontrol-does-not-exist-in-the-namespace-usingwindows-ui-xaml-controls-maps/
On Sat, Jan 24, 2009 at 6:19 PM, Andrew Morton <akpm@linux-foundation.org> wrote:
> On Fri, 16 Jan 2009 23:51:35 -0800 (PST) Paul Turner <pjt@google.com> wrote:
>> (Specifically) Several interfaces under /proc have been migrated to use
>> seq_files. This was previously observed to be a problem with VMware's
>> reading of /proc/uptime. We're now running into the same problem on
>> /proc/<pid>/stat; we have many consumers performing preads on this
>> interface which break under new kernels.
>>
>> Reverting these migrations presents other problems and doesn't scale with
>> everyones' pet dependencies over an abi that's been broken :(
>
> We changed userspace-visible behaviour and broke real applications.
> This is a serious matter. So serious in fact that your report has
> languished without reply for a week.
>
> Reverting those changes until we have a suitable reimplementation which
> doesn't bust userspace is 100% justifiable.
>
> In which kernel versions is this regression present?

Commit is ee992744ea53db0a90c986fd0a70fbbf91e7f8bd, merged with 2.6.25

> What would a revert look like? Big and ugly or small and simple? Do
> the original commits (which were they?) still revert OK?

There is a race on namespaces that would have to be resolved. From commit:

"... Currently (as pointed out by Oleg) do_task_stat has a race when calling
task_pid_nr_ns with the task exiting. In addition do_task_stat is not
currently displaying information in the context of the pid namespace that
mounted the /proc filesystem. So "cut -d' ' -f 1 /proc/<pid>/stat" may not
equal <pid>. ..."

>> Part of the problem in implementing pread in seq_files is that we don't
>> know whether the read was issued by pread(2) or read(2). It's not
>> nice to shoehorn this information down the stack. I've attached a
>> skeleton patch which shows one way we could push it up (although something
>> like a second f_pos would be necessary to make it maintain pread
>> semantics against reads).
>>
>> One advantage of this style of approach is that it doesn't break on
>> partial record reads. But it's a little gross at the same time.
>
> Yes, that is a bit gross.
>
> Does this patch actually 100% solve the problem, or is it a precursor
> to some other fix or what? It's hard to comment sensibly if it's a
> partial thing with no sign how it will be used.

It's not fully robust; the case of two simultaneous preaders on the
same fd isn't something I have any nice answer for given the nature of
seq_files. Apart from that, as long as we maintain a separate f_pos for
the preads it should be ok.

>> diff --git a/fs/read_write.c b/fs/read_write.c
>> index 2fc2980..744094a 100644
>> --- a/fs/read_write.c
>> +++ b/fs/read_write.c
>> @@ -407,6 +407,16 @@ asmlinkage ssize_t sys_pread64(unsigned int fd, char __user *buf,
>>  	ret = -ESPIPE;
>>  	if (file->f_mode & FMODE_PREAD)
>>  		ret = vfs_read(file, buf, count, &pos);
>> +	else if (file->f_mode & FMODE_SEQ_FILE) {
>> +		/*
>> +		 * We break the pread semantic and actually make it
>> +		 * seek, this prevents inconsistent record reads across
>> +		 * boundaries.
>> +		 */
>> +		vfs_llseek(file, pos, SEEK_SET);
>> +		ret = vfs_read(file, buf, count, &pos);
>> +		file_pos_write(file, pos);
>> +	}
>
> Well yes, that's a userspace-visible wrong change too.

Yes -- I mentioned above that to make this into a 'real' patch and not
perturb reads we'd need to maintain a second file position for the
preader. But this is getting farther up the ugly tree; I was hoping
someone else might have a more palatable idea :)

>>  	fput_light(file, fput_needed);
>> }
>>
>> diff --git a/fs/seq_file.c b/fs/seq_file.c
>> index 3f54dbd..f8c5379 100644
>> --- a/fs/seq_file.c
>> +++ b/fs/seq_file.c
>> @@ -50,6 +50,8 @@ int seq_open(struct file *file, const struct seq_operations *op)
>>
>>  	/* SEQ files support lseek, but not pread/pwrite */
>>  	file->f_mode &= ~(FMODE_PREAD | FMODE_PWRITE);
>> +	file->f_mode |= FMODE_SEQ_FILE;
>> +
>>  	return 0;
>> }
>> EXPORT_SYMBOL(seq_open);
>>
>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>> index 5f7b912..c3b5916 100644
>> --- a/include/linux/fs.h
>> +++ b/include/linux/fs.h
>> @@ -76,6 +76,8 @@ extern int dir_notify_enable;
>>     behavior for cross-node execution/opening_for_writing of files */
>> #define FMODE_EXEC 16
>>
>> +#define FMODE_SEQ_FILE_PREAD 32
>
> -EWONTCOMPILE, btw.
http://lkml.org/lkml/2009/1/24/119
Modules

Module Loading

By default JavaScript doesn't have a module system like other languages, e.g. Java or Python. This means that if you wanted to call a function in some other file, you had to remember to explicitly load that file via script tags before you called the function. If you tried to use code that you forgot to add via a script tag, then JavaScript would complain.

Other languages have a module loading system, e.g. in Python if you wanted to use some code from another file you would type something like

import foo from bar;
foo();

The language itself figured out where bar was, loaded it up from the filesystem, extracted the function foo and made it available to you in your file to use.

This feature was missing in JavaScript, so the community developed their own solutions, such as CommonJS which is used in node.

ES6 Modules

ES6 took the best of the existing module systems and introduced this concept on a language level. Although it's made it into the ES6 standard, it's up to the JavaScript engine makers to actually implement it natively, and they haven't... yet. So until that happens we code using the ES6 module syntax in TypeScript. When TypeScript transpiles the code to ES5 it uses the CommonJS module loading system which we touched on above.

Exporting

// utils.ts
function square(x) {
  return Math.pow(x,2)
}

function cow() {
  console.log("Mooooo!!!")
}

export {square, cow};

We declare some functions in a file. By using the export keyword we say which of those functions can be exported, and therefore imported and used in other modules.

Note: {square, cow} is just destructuring syntax and is short for {square: square, cow: cow}.

Importing

// script.ts
import {square, cow} from './utils';

console.log(square(2));
cow();

Tip: tsc -t ES5 -w utils.ts script.ts

We again use that destructuring syntax to import the functions we want from the utils module; we provide a path relative to this module.

Aliases

We may want to import a function with one name but then use it via another name, perhaps to avoid name collisions or just to have a more convenient naming, like so:

import {square as sqr} from './utils';
sqr(2);

Or we can import everything in a file like so:

import * as utils from './utils';
console.log(utils.square(2));
utils.cow();

Alternative export syntax

As well as describing the exports by using the export keyword, like so:

export {square, cow};

We can also export functions or variables as they are defined, by prepending the word export to the front of their declarations:

export function square(x) {
  return Math.pow(x,2)
}

Default exports

If a module defines one export which is the most common, we can take advantage of the default export syntax, like so:

export default function square(x) {
  return Math.pow(x,2)
}

And then when we import it we don't need to use { }, like so:

import square from './utils';

Or, if we want to import the default export as well as some other exports, we can use:

import square, { cow } from './utils';

Summary

With ES6 modules we finally have a mechanism for letting the language deal with loading of dependent files for us. This isn't baked into JavaScript engines yet, so to solve this problem in Angular we still use the ES6 module loading syntax but leave it to TypeScript to transpile to CommonJS.

Listing

// script.ts
import * as utils from './utils';
console.log(utils.square(4));
utils.cow();

// utils.ts
export function square(x) {
  return Math.pow(x, 2)
}

export function cow() {
  console.log("Mooooo!!!")
}
https://codecraft.tv/courses/angular/es6-typescript/modules/
Unhandled exception at QMap

Meysam Hashemi:

Greetings. I'm using Qt 4.8.5 with Visual Studio 2008, and I use Qt WebKit in my project. In some functions I need to pass JavaScript objects into C++ and vice versa. So I stringify the JavaScript object to JSON and pass it as a QString into C++; C++ builds JSON, and in JavaScript I parse it back into a JavaScript object. Since QtJSON is not provided in Qt 4.8.5, I downloaded Qt Json in order to handle it. This library takes a QString and parses it into a QVariantMap.

The library works fine, but in one case, whenever I want to return from a function, I get an unhandled exception. This happens when there is a QVariantMap inside another QVariantMap (in JavaScript, an object within another object). I checked where exactly it happens, and it happens in the following code in QMap:

#if defined(_MSC_VER)
#pragma warning(push)
#pragma warning(disable:4189)
#endif
template <class Key, class T>
Q_OUTOFLINE_TEMPLATE void QMap<Key, T>::freeData(QMapData *x)
{
    if (QTypeInfo<Key>::isComplex || QTypeInfo<T>::isComplex) {
        QMapData *cur = x;
        QMapData *next = cur->forward[0];
        while (next != x) {
            cur = next;
            next = cur->forward[0];
            Node *concreteNode = concrete(reinterpret_cast<QMapData::Node *>(cur));
            concreteNode->key.~Key();
            concreteNode->value.~T();   // <<< This line !!!
        }
    }
    x->continueFreeData(payload());
}
#if defined(_MSC_VER)
#pragma warning(pop)
#endif

What's wrong here? Thanks in advance.

kshegunov (Moderator):

@Meysam-Hashemi Hello, don't jump the gun. QMap has been quite thoroughly tested and I, for one, have never found anything wrong with it in years.

concreteNode->key.~Key();
concreteNode->value.~T();   // <<< This line !!!

If I were to guess, QMap makes some in-place allocations and these lines are just the cleanup. What is your code that triggers this error?

Kind regards.

Meysam Hashemi:

As I mentioned, my code uses this library. I believe what causes this exception is inside the library; I'm checking it to find out!

kshegunov (Moderator):

@Meysam-Hashemi Hello, right, but I have not used that library and there might be a bug in it. In any case, you should try and debug the code that uses the aforementioned library first, then the library itself, and only then think about a bug in Qt. My advice is, since you've already found the end point of the stack where the problem occurs, to go up the stack trace and look in the debugger at what is called where/when, with what arguments. As you've put your original question, unfortunately, I'm very doubtful anyone can provide adequate assistance. Please try to find the relevant code in the library itself and in your application, so we have more complete information on the issue.

Kind regards.

Meysam Hashemi:

Will do, thanks a lot.
https://forum.qt.io/topic/63625/unhandled-exception-at-qmap
I took a look at fanotify to see if it would be a better fit for afilesystem indexer (like tracker or beagle), as inotify is pretty bad.I think it is a better fit in general, but it needs some additions.Lets first define the requirements. By "indexer" I mean a userspaceprogram that keeps a database that stores information about files onsome form of pathname basis. You can use this to do a query forsomething and reverse-map back to the filename. As a minimally complex,yet sufficient model we have locate/updatedb that lets you quicklysearch for files by filename. However, instead of indexing everythingeach night we want to continuosly index things "soon" after they change,for some loose definition of soon (say within a few minutes in thenormal case). Its not realistic to imagine the indexer handling each file change asthey happen, as modern machines can dirty a lot of files in a shorttime which would immediately result in change event queuesoverflowing. It as also not really what isis desired. Many kinds ofactivities produce a lot of filesystem activity with creation oftemporary files and changing of files multiple times over some time(for instance a compile). What we really want is to ignore all thesetemporary files and the flury of changes and wait for a more quiescentstate to reindex.One of the core properties of the indexer is that it knows what thefilesystem looked like last time it indexed, so a more adequate modelfor changes would be to get told on a per-directory basis that"something changed in this directory" with a much more coarse-grainedtime scale, say e.g. at most once every 30 seconds. The indexer couldthen schedule a re-indexing of that directory, comparing mtimes withwhat is stored in its db to see which files changed. This is how theMacOS FSEvents userspace framework[1] works, and it seems reasonable. updatedb keeps track of the full filesystem tree, based on the"current" mounts in it (at the time updatedb ran). 
While this wasprobably valid when it was designed it is not really adequate forcurrent use which is much more dynamic in how things get plugged in andout. A more modern way to look at this is to consider the full set ofmounted filesystems being a forrest of trees, with the current processnamespace being composed of a subset of these mounted in various placesin the namespace.So, in order to handle a filesystem being unmounted, and then latere.g. mounted in another place or another filesystem mounted in thesame location we shouldn't index based on how things are mounted, butrather keep an index per filesystem. The kernel identifier for afilesystem is the major:minor of the block device its mounted on. Thisis not a persistent identifier, but given such an identifier auserspace app could use a library like libvolume_id to make up apersistent identifier for use as the key in its index. It would thenstore each item in its database by a volume id + relative path pair,which can be mapped to a pathname in the current namespace by usinge.g. /proc/self/mountinfo.In order to write an app using the fanotify API satisfying the aboveneeds)* A file handle that was written to was closed* optionally: A file handle was written to (this is somewhat expensiveto track as there are a lot of these events)For these events we need some form of identifier that references thefile that was affected. There are two types of changes above, purename changes (link/unlink/rename) and inode changes(close/write). fanotify currently only gives "inode changes" kind ofevents, and it uses a file descriptor as the identifier.Using an fd as an identifier is interesting, because it avoids theproblems with absolute pathnames and namespaces. The user of the APIcan use readlink on /proc/self/fd/<fd> to get at the pathname of thefile that was opened (in its namespace), we can also use fstat to getthe block device of the file and /proc/self/mountinfo to calculate thefilesystem relative path. 
Additionally, by using a fd like this we'rebasically given a userspace reference to a dentry. This means that thelink in /proc will be updated as the filename changes. So we can relyon the paths gotten from the events to be up to date wrt any namespacechanges during the time of the change to the time we're handling theevent. We don't have to manually update events due to e.g. laterrename events. However, this is somewhat of a problem in the name change events. Forinstance, for rename if we have an fd to the moved file we can'treally know its original position. For these types of changes we wantthe fd of the parent directory and the filename that changed. With these events we should be able to track any directory that haschanged files in it, with these exceptions:* Sometimes we can only say "everything might have changed" (queueoverflow)* We only track locally originating changes* If a hardlinked file is updated in-place we only know of the changein the filename used to open the file.* If we chose not to pick up every write event (for performancereasons) we won't know of writes to files that weren't closed (likee.g. logfiles)I think these exceptions are reasonable for most usecases.Its unlikely that users actually want to index all files in thesystem. In practice its more likely that they want to index theirhomedirs, removable media and maybe a few other directories. So, inorder to lower the total system load due to changes on areas where we'renot interested in changes it would be nice to be able to set up eitherblacklists like the current fastpath, or even better subscriptions,where we ignore everything not specifically requested. I don't reallythink the fastpaths that are currently in fanotify are good enoughtfor file indexing, as they are per file, and there are potentiallymillions of files that we want to ignore.Instead I would like a form of subscription based on block major+minorand dentry prefix. 
So, you'd say "I want everything on block 8:1affecting the subtree under the dentry specified by this fd Iopened". The fd should be optional, and probably the minor nr too. In fact, even the major nr should probably be optional too if you reallywant events for every change in the system. In an indexer this wouldbe used by reading the set of paths that the user specified aswanting indexed, looking up in /proc/self/mountinfo what thiscorresponds to wrt devices and registering the required subscriptions.---[1]
http://lkml.org/lkml/2009/3/27/166
mbrtowc()

Convert a multibyte character into a wide character (restartable)

Synopsis:

#include <wchar.h>

size_t mbrtowc( wchar_t * pwc,
                const char * s,
                size_t n,
                mbstate_t * ps );

Since: BlackBerry 10.0.0

Arguments:

- pwc - A pointer to a wchar_t object where the function can store the wide character.
- s - A pointer to the multibyte character that you want to convert.
- n - The maximum number of bytes in the multibyte character to convert.
- ps - An internal pointer that lets mbrtowc() be a restartable version of mbtowc(); if ps is NULL, mbrtowc() uses its own internal variable. You can call mbsinit() to determine the status of this variable.

Library: libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The mbrtowc() function converts single multibyte characters pointed to by s into wide characters pointed to by pwc, to a maximum of n bytes (not characters). This function is affected by LC_CTYPE.

Returns:

- (size_t)-2 - After converting all n characters, the resulting conversion state indicates an incomplete multibyte character.
- (size_t)-1 - The function detected an encoding error before completing the next multibyte character; the function sets errno to EILSEQ and leaves the resulting conversion state undefined.
- 0 - The next completed character is a null character; the resulting conversion state is the same as the initial one.
- x - The number of bytes that completed the next multibyte character, in which case the resulting conversion state indicates that x bytes have been converted.

Errors:

- EILSEQ - Invalid character sequence.
- EINVAL - The ps argument points to an invalid object.

Last modified: 2014-06-24
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/m/mbrtowc.html
There are two main concepts that have to be explained in order for us to fully understand the differences between global and local variables in C programs:

- Scope – determines the region of the program where a variable can be accessed
- Storage Duration – determines when a variable is created and destroyed (lifetime of the variable)

Scope of C Variables

The scope of a variable depends primarily on its place of declaration. There are three types of scope a variable can have:

- Global (Program) Scope – Variables with global scope are visible (accessible) within all source files, functions and blocks (code enclosed by curly brackets {}) of the program. These types of variables are declared outside of blocks and functions (including main()) and are called global variables.

- Block Scope – Variables with block scope are visible only within the block they are declared in. These variables are called local variables. Example:

#include <stdio.h>

void main()
{
    int i = 34; // This variable is local to this block (block scope)
    {
        int i = 43; // This variable is local to this block (block scope)
        printf("\nValue of i : %d", i); // Will print value 43
    }
    printf("\nValue of i : %d", i); // Will print value 34
}

- File Scope – Variables with file scope are visible from the point of their declaration to the end of the source file they are declared in. These variables are global variables declared with the static specifier.

The scope of C variables is an important concept because it supports the creation of large embedded programs where more than one person is writing the source code. Limiting the scope of variables prevents eventual conflicts between variables of the same name used in different parts of the program.

Storage Duration of C Variables

Storage Class Specifiers

- auto – This is the default storage class of local variables and is usually not explicitly declared. The storage duration of auto local variables is equal to the duration of the block they are declared in. For example, a local auto variable declared inside a function (a C function is in fact a block) is created when the function is entered and is destroyed when the function is exited.

- static – A local variable declared as static retains its value between different function calls and its storage duration is equal to the program duration. In other words, the static local variable exists during the program lifetime and not only during the function lifetime. It however keeps its block scope (accessible only by the function/block it is declared in). When a global variable is declared as static, its scope is limited to the file it is declared in (file scope).

- register – This specifier requests that the compiler store the variable in a CPU register instead of RAM memory. The compiler can ignore the request depending on register availability or other hardware implementation restrictions.

- extern – This specifier tells the compiler that the variable exists and is declared somewhere else (e.g. in a different source file). Its practical usage is for accessing the same global variable across different source files.

Summary

- Global variables are declared outside of all functions and blocks of a program. They are accessible from the whole program (global scope), unless they are declared with the static specifier, which limits their accessibility to the file they are declared in (file scope).

- Local variables are declared inside blocks. They are accessible only from the block they are declared in (block scope). They are created on block/function entry and destroyed on block/function exit, unless they are declared with the static specifier.

Example:

#include <stdio.h>

int var_1;        // Global variable with global scope
static int var_2; // Global variable with file scope
extern int var_3; // Global variable declared/defined in another source file

void main()
{
    static int var_4 = 34; // Local variable with lifetime until the end of the program
    {
        int var_5 = 43;      // Local variable with lifetime until the end of this block
        auto int var_6 = 46; // Same as the above
        printf("\nValue of var_5 : %d", var_5); // Will print value 43
    }
}
https://open4tech.com/global-local-variables-embedded-c-programs/
Excel saves these values differently depending on which other characters are in the cell, which means if it only says (CASE 1): ÜÖÄüöä and you try to read the cell:

HSSFCell cell = (HSSFCell)cellI.next();
if(cell.getCellType() == HSSFCell.CELL_TYPE_STRING ) {
    cell_value = cell.getStringCellValue();
}

then the resulting String is comprised of chars, all with the integer value 65533. On the other hand, if you write other characters into the cell (CASE 2), sometimes they are read correctly. This is because normally the chars (as read in hexadecimal values) are separated by 00 chars (CASE 2), but sometimes not. And when they are not (CASE 1), the usermodel and eventmodel can't read them. I hope I missed out on something about character sets or what do I know; if not, this is a nasty bug.

What is the value of HSSFCell.getEncoding() on the Short.MAX_VALUE returning cells? In the case where you have umlauts it should be 1 and HSSF should read it as 16-bit characters. If not... that is a most nasty bug in 1.5.1. Glen, correct me if I'm wrong.

Hello again, I made the good old:

System.out.println( "encoding: " + cell.getEncoding() );

and got the result for every single cell (well, all 7 anyway):

encoding: 0

I may have left out an important piece of information. Basically, I:

1. Create my Excel sheets using HSSF on a WebLogic server
2. Download them
3. Change the sheets
4. Upload them
5. Read them using HSSF

Actually, if I DON'T make any changes, they are read just fine, but as soon as I press the save button in Excel, the format changes a bit, and HSSF is not able to read the cells with the characters with ascii values above 128. I use Microsoft Excel 2000. /Tom

PS. This change is what I tried to explain in the initial bug report, except then I didn't know exactly what happened.

Correction: the cells are read ok, I meant it's the characters IN the cells that are wrong.

eeenteresting... So it sounds like Excel is not updating the encoding flag when it changes to 16 bit. Can you attach to this bug:

1. A sheet created with Excel containing the characters
2. An identical sheet created with HSSF containing those characters
3. A sheet created with HSSF and modified with Excel containing those characters (the broken case)

All 3 sheets should be identical. This will help myself or someone else try and figure out the problem. If you'd like to give it a go, what we'll do is run org.apache.poi.hssf.BiffViewer on them and compare the results via "diff" (or whatever the windoze equivalent is). -Andy

Created attachment 2551 [details] The file made with HSSF and saved in Excel
Created attachment 2552 [details] The file made with HSSF; has not been opened in Excel, notice it is smaller.
Created attachment 2553 [details] File made with Excel

I think what I've observed is that Excel 2000 treats fields like <Über> as normal text, but if there's a trademark in there, Excel changes from the 8 to the 16 bit flag, so <Über™> would be treated as 16 bit. Whereas HSSF reads both <Über> (Excel 8 bit) and <Über™> (Excel 16 bit) as 16 bit fields, and so it reads out a REALLY big value for the Ü in <Über> (Excel 8 bit). Correct? /Tom

PS. Hope HTML can show "™"... if not, then it's supposed to be a trademark (ascii 0153).

By the way, this bug also occurs in your latest releases.

QUESTION 1: Is there any way for me to tell if there is any progress in tracking/fixing this bug?
QUESTION 2: Can I help speed up the process of fixing this bug in some way, more than I already have?

It would be nice if someone who cares would simply write: "Yes, my friend, we are working on this bug." or: "No we are not working on your bug, so stick it up your **s." Because I'd like to know how YOU feel...!? Thanks in advance, /Tom

One thing I realize I didn't ask for: can you give me the source you used to generate the HSSF version? If you can supply it in the form of a junit test it would be perfect. (I'd apply that in advance of fixing it!)
The fact that I haven't closed the bug or marked it as fixed implies that I will look at it when I have time. (If you'd like it fixed faster, I can give you a mailing address and you can contribute to paying my mortgage.)

Thanks, that's all I wanted to know. The source is very simple:

HSSFWorkbook wb = new HSSFWorkbook();
HSSFSheet sheet = wb.createSheet();
wb.setSheetName( sheetcounter, sheetName );
HSSFRow row = sheet.createRow((short)rowcounter);
HSSFCell cell = row.createCell((short)0);
cell.setEncoding( HSSFCell.ENCODING_UTF_16 );
cell.setCellType( HSSFCell.CELL_TYPE_STRING );
cell.setCellValue(iETI.getLanguage());
/* multiple cells... */
byte[] bytes = wb.getBytes();
POIFSFileSystem fs = new POIFSFileSystem();
fs.createDocument(new ByteArrayInputStream(bytes), "Workbook");
ByteArrayOutputStream byteos = new ByteArrayOutputStream();
fs.writeFilesystem(byteos);
byteos.close();
return byteos;

The thing is, which I've explained, the error occurs when you write out the file, manually change it, and read the file again with HSSF. Then the above mentioned errors occur. HSSF has problems recognizing whether the cells are saved as 8 or 16 bit, depending on whether the characters in the cell are between ascii 0-128, 129-159 or 160-255.

So basically, the error that should be fixed is the following. You have 3 different ascii sets:

A) 0-128
B) 129-159
C) 160-255

Cells containing A should be read as an 8 bit cell.
Cells containing A & B should be read as 16 bit.
Cells containing A & C should be read as 16 bit.
Cells containing A & B & C should be read as 16 bit.
Cells containing B & C should be read as 16 bit.

But HSSF reads cells containing A & C as an 8 bit cell, which is wrong, because Excel handles these as 16 bit.

cool, can you submit a patch changing this behavior? Start at org.apache.poi.hssf.record.SSTRecord

Hi Thomas and Andy, I have looked at a recent CVS snapshot. I do not believe that there is a problem when reading in the strings from the attached files. I have traced through the SSTDeserializer class using the BiffViewer and the source code attached below to read the workbook, and have found that I can correctly read all cells. Both the 1st and last attachments above correctly read the Uber cell as 8 bit and the tmUber as 16 bit. It is only the second attachment where the Uber is read as 16 bit. Interestingly, the tm character is unicode \u2122 rather than ascii 0153 (which you mention in the bug report); I guess the character set that the sheet was originally created in is something other than ISOLatin-1.

I postulate that the only problem here is the fact that a supposed 8 bit string has been written out as 16 bit (i.e. the second attachment). As such we would need to look at the exact code that created the second attachment (the code that is attached to the bug doesn't have the values that were being allocated to the cell values). I think that the problem would become evident quickly.

Jason

<source code>
import java.io.*;
import org.apache.poi.hssf.usermodel.*;

public class Tester {
    public Tester() {
        try {
            HSSFWorkbook wb = new HSSFWorkbook(new FileInputStream("c:/at1.xls"));
            HSSFSheet sheet = wb.getSheetAt(0);
            HSSFRow row = sheet.getRow(0);
            for (int i = 0; i < row.getPhysicalNumberOfCells(); i++) {
                HSSFCell cell = row.getCell((short)i);
                System.out.println("Cell " + i + "=" + cell.getStringCellValue());
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public static void main(String[] args) {
        Tester tester1 = new Tester();
    }
}
</source code>

Just a quick comment: thank you very much for your help. I know I've been annoying, but being an end user, I feel it is my duty. I appreciate how Andy carefully and fondly responds to every one of my blundering e-mails in particular.

To the point: I've run a few test programs myself and you're right, Jason, it does write out the correct values on NT. Well, actually it writes out Ö▄Í─³÷õ, not ÜÖÄüöä, in my cmd prompt, but I'm not picky. In the ascii table the values are the correct ones.
BUT, on my BEA WebLogic Application Server 6.1, the separate "case C" chars are read as 65533, which is something I'll look into, and report back to you if I find out what the ... is going on. Guess I'll begin with char sets. I really do appreciate the help, and the brilliant software. Who knows, maybe I'll contribute something constructive yet.

By the way, I ALWAYS set the useUTF16 value to true, but I'd like to know where you decide what's READ, because it doesn't matter how you save it, Excel changes the format when you're playing around...

I'm looking into it myself right now. The values are wrong when read from the binary tree, so the next thing I'll try to figure out is whether they're wrong when they're put() into it. I'll need a bit more time for that though, since I have to read up on the filesystem.

What I've narrowed it down to is that in the SSTDeserializer, when you call processString, and say:

    UnicodeString string = new UnicodeString(UnicodeString.sid,
            (short) unicodeStringBuffer.length, unicodeStringBuffer);
    String chars = string.getString();
    for (int i = 0; i < chars.length(); i++) {
        System.out.print((int) chars.charAt(i) + " ");
    }
    chars = null;

then if there are chars between 160-255 (keep in mind the bytes actually have the int values -1 & -64), they all get converted to the value 65533. I don't know exactly where the conversion takes place, and I don't have time to look any further today, but if you know, please tell me. I'd like to solve this bug. Tomorrow I'll have a good look at the UnicodeString class.

Wooops, decided to snoop around just a little bit longer, and now I think I've located the error, so now I guess I just have to solve it. In UnicodeString the field_2_optionflags is 0, and the toString conversion is carried out in fillFields(byte[] data, short size), which results in the 65533 chars on my WebLogic server.

PS. Sorry to anyone who feels I'm spamming their e-mail account.
Thank you for your patience and insights. I've narrowed it down to one line now, in the 1.5.1 class UnicodeString, in the function fillFields. The line:

    field_3_string = new String(data, 3, getCharCount());

makes a new String regardless of the ISO character set standard, and since I use "ISO-8859-1" it doesn't work. Adding:

    field_3_string = new String(data, 3, getCharCount(), "ISO-8859-1");

makes it work, though. Guess I'll have to make a few tests now, to see if anything else is messed up. Thanks again, Thomas

Try this with a recent nightly build. I think it's fixed.

Is it fixed? Can I close this bug? Thomas?

no response
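The one-line fix above hinges on the fact that new String(data, off, len) decodes with the platform default charset, which differs between the NT command prompt and the WebLogic JVM. A minimal sketch of the difference (the byte values are my own, chosen from the 160-255 "case C" range discussed in the report):

```java
// Demonstrates why new String(byte[], ...) without a charset can yield the
// U+FFFD (65533) replacement character on one JVM and the intended Latin-1
// characters on another, while an explicit charset is deterministic.
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        // 0xDC is 'Ü' and 0xD6 is 'Ö' in ISO-8859-1 (both in the 160-255 range)
        byte[] data = { (byte) 0xDC, (byte) 0xD6 };

        // Platform-default decoding: on a JVM whose default charset treats
        // these bytes as an invalid sequence (e.g. UTF-8), each becomes the
        // replacement character U+FFFD, i.e. the 65533 seen on WebLogic.
        String platformDefault = new String(data);

        // Explicit-charset decoding gives the same result everywhere.
        String latin1 = new String(data, StandardCharsets.ISO_8859_1);
        for (int i = 0; i < latin1.length(); i++) {
            System.out.print((int) latin1.charAt(i) + " "); // 220 214
        }
        System.out.println();
    }
}
```

This is why pinning the charset (as the fix does) behaves consistently across machines, where the no-charset constructor silently depends on the environment.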
https://bz.apache.org/bugzilla/show_bug.cgi?id=11322
What is the difference between >> and >>>? For example, let's say you have: int x = 64; What is x >> 4? What is x >>> 4?

Answer: Thanks for your questions. It's interesting for us to see the types of questions on Sun's own exam.

First question: 64 >> 4 = 64 >>> 4 = 4.

Explanation: x >> y is an arithmetic shift right of x by y bits; x >>> y is a logical shift right of x by y bits. Since 64 > 0, the answer is the same. But in the eight-bit 2's complement representation, -64 = 11000000. Hence -64 >> 4 = 11111100 = -4, while -64 >>> 4 = 00001100 = 12.

What about left shift? Fortunately, the same fill bit, 0, is used for unsigned and signed multiplication by two. Hence x << y corresponds to x * 2^y no matter what the sign of x is.

Garbage Collection

a. I'm going to answer true to a. It would be hard to write a program that gobbled up the entire heap without releasing any of it. Reading a giant file into a hash table could do the trick.

b. No. The Java garbage collector runs in a low-priority thread. In some systems the thread is scheduled when System.gc() is explicitly called by a program, or when the heap runs low. On other systems gc() runs whenever the system is idle, and is interrupted whenever a higher-priority thread becomes runnable.

c. An object is implicitly flagged for collection as soon as there are no more active references to the object. Explicitly flagging an object for collection would imply there was an active reference to the object (otherwise, how would one explicitly flag it?); hence, when the object is collected, we would be left with a dangling reference.

d. Programs can directly call the garbage collector: System.gc();

Thread scheduling

a. The Java thread scheduler is preemptive in the sense that it will preempt a running thread if a higher-priority thread becomes runnable. But don't depend on this; the Java scheduler may run a lower-priority thread to prevent starvation.

b. Yes.
I tried the following simple experiment:

    class MyThread extends Thread {
        int id;

        MyThread(int i) {
            id = i;
        }

        public void run() {
            for (int j = 0; j < 3; j++) {
                System.out.println("Thread " + id + " running");
                setPriority(1); // 5 = normal
            }
        }
    }

    public class Test {
        public static void main(String[] args) {
            MyThread t1 = new MyThread(1);
            MyThread t2 = new MyThread(2);
            t1.start();
            t2.start();
        }
    }

    Thread 1 running
    Thread 2 running
    Thread 2 running
    Thread 2 running
    Thread 1 running
    Thread 1 running

c. My example shows the answer to c is no. Thread 2 stopped running, but that didn't stop thread 1. Of course, threads in the same ThreadGroup can be terminated simultaneously.

Breaking an active reference to an object: ob = null;
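The 8-bit pictures in the shift explanation above can be checked directly in Java, remembering that Java ints are 32 bits wide, so the logical shift of -64 fills four zero bits at the top of a 32-bit word rather than an 8-bit one (a quick sketch of my own, not from the original Q&A):

```java
// >> copies the sign bit in from the left; >>> always shifts in zeros.
public class ShiftDemo {
    public static void main(String[] args) {
        System.out.println(64 >> 4);   // 4  (same as 64 / 16)
        System.out.println(64 >>> 4);  // 4  (identical for non-negative values)
        System.out.println(-64 >> 4);  // -4 (sign bit replicated)
        // -64 is 0xFFFFFFC0 as a 32-bit int; shifting in 4 zero bits gives
        // 0x0FFFFFFC = 268435452, not the small value the 8-bit picture suggests.
        System.out.println(-64 >>> 4);
        System.out.println(-64 << 1);  // -128: left shift doubles either sign
    }
}
```

The 8-bit illustration still holds in spirit: the low byte of -64 >>> 4 is the same bit pattern, only with 24 extra bits above it.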
http://www.devx.com/tips/Tip/23652
    #include <a.out.h>
    #include <stab.h>
    #include <nlist.h>

Sun-2, Sun-3, and Sun-4 systems only. For Sun386i systems refer to coff.5.

program was loaded with the -s option of ld or if the symbols and relocation:

The macro N_BADMAG takes an exec structure as an argument; it evaluates to 1 if the a_magic field of that structure is invalid, and evaluates to 0 if it is valid.

information. The N_DATOFF macro returns the absolute file position of the beginning of the data segment when given an exec structure as argument. The relocation information appears after the text and data segments. The N_TRELOFF macro returns the absolute file position of the relocation.

If relocation information is present, it amounts to eight bytes per relocatable datum as in the following structure:

    struct reloc_info_68k {
        long r_address;             /* address which is relocated */
        unsigned int r

relocation (for instance, N_TEXT meaning relative to segment text origin.) offset, and the sum is inserted into the bytes in the text or data segment.

        int r_address;              /* relocation addr (offset in segment) */
        unsigned int r_index :24;   /* segment index or symbol index */
        unsigned int r_extern : 1;  /* if F, r_index==SEG#; if T, SYM idx */
        int : 2;                    /* <unused> */
        enum reloc_type r_type : 5; /* type of relocation to perform */
        long int r_addend;          /* addend for relocation value */
    };

If r_extern is 0, then r_index is actually an n_type for the relocation (for instance, N_TEXT meaning relative to segment text origin). The N_SYMOFF macro returns the absolute file position of the symbol table when given an exec structure as argument. Within this symbol table,

            n_strx;             /* index into file string table */
        } n_un;
        unsigned char n_type;   /* type flag, that is, N_TEXT etc; see below */
        char n_other;
        short n_desc;           /* see <stab.h> */
        unsigned n_value;       /* value of this symbol (or adb

of the N_STAB bits set.
 * These are given in <stab.h> */
    #define N_STAB 0

common region whose size is indicated by the value of the symbol.
Created by unroff & hp-tools. © by Hans-Peter Bischof. All Rights Reserved (1997). Last modified 21/April/97
http://www.vorlesungen.uni-osnabrueck.de/informatik/shellscript/Html/Man/_Man_SunOS_4.1.3_html/html5/a.out.5.html
Hello, I'd like to set keyframes for the changed attributes of an object using scripting. How should I go about doing this? Here are the two approaches I've tried:

Autokey Method: SetMl()

    obj = doc.GetActiveObject()
    doc.AddUndo(c4d.UNDOTYPE_CHANGE, obj)
    obj.SetMl(myMatrix)
    # Should this be turning Autokeying on for this object? Am I using undo correctly with this?
    # There are no examples I could find on this forum, in the documentation, or online.
    doc.AutoKey(obj, obj, recursive=True, pos=True, scale=True, rot=True, param=True, pla=False)
    c4d.EventAdd()

Setting Attributes One-by-One

If Autokeying isn't an option, I could get all of the Desc IDs for the attributes and set keys manually, which is probably what will need to happen, but I didn't know if there was a faster, more flexible way. Thank you!

Hi @blastframe, find an example of AutoKey below. Note that the AutoKey command needs to be enabled to make it work. Instead of GetUndoPtr, you could try op.GetClone, but it may fail if you need to AutoKey Parameter/PLA, so it's preferred to use it as I demonstrate in the code example.
    import c4d

    def main():
        ID_AUTOKEYING_COMMAND = 12425

        # If autokey is not enabled, enable it
        previousAutokeyEnableState = c4d.IsCommandChecked(ID_AUTOKEYING_COMMAND)
        if not previousAutokeyEnableState:
            c4d.CallCommand(ID_AUTOKEYING_COMMAND)

        # Starts the undo process
        doc.StartUndo()

        # Defines that we will change the current object
        doc.AddUndo(c4d.UNDOTYPE_CHANGE, op)

        # Retrieves the object we put in the undo stack with AddUndo
        undoObj = doc.GetUndoPtr()

        # Modifies the X position of the object; here do any modification
        op[c4d.PRIM_CUBE_SUBX] = 10
        op[c4d.ID_BASEOBJECT_REL_POSITION, c4d.VECTOR_X] = 100

        # Adds a keyframe by comparing the stored obj in the undo and
        # the obj with the position modified in the scene
        doc.AutoKey(op, undoObj, False, True, True, True, True, True)

        # Finalizes the undo process
        doc.EndUndo()

        # If the command was disabled, CallCommand again to disable it
        if not previousAutokeyEnableState:
            c4d.CallCommand(ID_AUTOKEYING_COMMAND)

        # Pushes an update event to Cinema 4D
        c4d.EventAdd()

    # Execute main()
    if __name__ == '__main__':
        main()

Cheers, Maxime.

@m_adam Thank you for this quick response! It works.

Note: I'm not sure if this is just for me, but the script does not work on Frame 0 with AutoKey on unless there are existing Animation Tracks for the attributes (in this case op[c4d.PRIM_CUBE_SUBX] and op[c4d.ID_BASEOBJECT_REL_POSITION, c4d.VECTOR_X]). It does work for me if the frame is higher than 0 and there are no existing Animation Tracks. Thanks again!

Hi @blastframe, this is also the behavior of AutoKey in the editor, so I would say it is not an API issue. But you can use BaseDocument.Record to simulate the click of AddKeyFrame (but of course the correct parameters have to be checked).
https://plugincafe.maxon.net/topic/12178/creating-keyframes-with-autokey
On Sonntag 08 Januar 2006 14:48, Klaus Schmidinger wrote:
> I personally would rather trust the original EPG data. But to be on the
> conservative side, you could use the smaller value.

for 1.3.36, something like this (untested, I don't understand all of this source code):

    --- timers.c	2005-09-09 17:22:33.000000000 +0200
    +++ timers.c.new	2006-01-08 16:14:01.000000000 +0100
    @@ -364,9 +364,10 @@ bool cTimer::Matches(time_t t, bool Dire
       if (HasFlags(tfActive)) {
          if (HasFlags(tfVps) && !Directly && event && event->Vps() && schedule && schedule->PresentSeenWithin(30)) {
    +        time_t orgstopTime = stopTime;
             startTime = event->StartTime();
             stopTime = event->EndTime();
    -        return event->IsRunning(true);
    +        return (stopTime > t && orgstopTime > t) || event->IsRunning(true);
             }
          return startTime <= t && t < stopTime; // must stop *before* stopTime to allow adjacent timers

-- 
Wolfgang
http://www.linuxtv.org/pipermail/vdr/2006-January/007041.html
Introduction to the Vue Composition API

The Vue Composition API is useful for creating components in a complex app. It's organized in a way that reduces name clashes and provides better type inference in our components.

Vue.js is an easy-to-use web app framework that we can use to develop interactive front-end apps. In this article, we'll take a look at the Vue Composition API plugin for Vue 2.x to create our components in ways that are closer to the Vue 3.x way.

Vue Composition API

The Vue Composition API lets us move reusable code into composition functions, which any component can use with the setup component option. With it, we don't have to worry about name clashes between mixin members and component members, because all the members are encapsulated in their own function and we can import them with new names. Also, we don't have to recreate component instances to reuse logic from different components.

The Composition API is built with large projects in mind, where there's lots of reusable code. In addition to the benefits listed above, we also get better type inference. It's also more readable, because we can trace all the code back to the composition function where it is declared in the setup method. The Vetur VS Code extension, which helps with developing Vue apps in VS Code, provides type inference when it's used with the Composition API.

How to Use It?

With Vue 2.x projects, we can use it by installing the Vue Composition API package, as follows for Vue CLI projects:

    npm install @vue/composition-api

It's also available for projects that don't use the Vue CLI as a library that we can add via a script tag as follows:

    <script src=""></script>

The rest of the steps assume that we're building a Vue CLI project.
Once we've installed it, we can use it by first registering the plugin as follows in main.js:

    import Vue from "vue";
    import App from "./App.vue";
    import VueCompositionApi from "@vue/composition-api";

    Vue.use(VueCompositionApi);
    Vue.config.productionTip = false;

    new Vue({
      render: h => h(App)
    }).$mount("#app");

Next, we'll actually use the Vue Composition API plugin to build our components. First, we create a file called Search.vue in the components folder as follows:

    <template>
      <div class="hello">
        <form @submit.
          <label>{{label}}</label>
          <input type="text" v-
          <input type="submit" value="search">
        </form>
      </div>
    </template>

    <script>
    import { reactive } from "@vue/composition-api";

    export default {
      name: "Search",
      props: {
        label: String
      },
      setup({ label }, { emit }) {
        const state = reactive({ name: "" });
        return {
          handleSubmit(event) {
            emit("search", state.name);
          },
          state
        };
      }
    };
    </script>

In the code above, we have created a component that takes a prop called label, which is a string. Then we can get the value of the prop from the first argument of the setup method, instead of getting the prop as a property of this as in the original Vue API. Also, the emit method is retrieved from the object that's passed into setup as its 2nd argument, instead of as a property of this.

To hold states, we call the reactive function from the Composition API package, passing in an object with the state properties. state and reactive are equivalent to the object that we return in the data method with the regular Vue API. The state constant is returned as a property of the object so that we can reference the states as properties of state in our template.

To add methods with the Vue Composition API, we add them to the object that we return, instead of as properties of the methods property as we did without the Vue Composition API. We did that with the handleSubmit method. In our template, we bind v-model to state.name rather than just name.
But we reference the handleSubmit method as we did with methods before. The search event is emitted when we type something into the input and click search.

Next, in App.vue, we write the following code:

    <template>
      <div id="app">
        <Search label="name" @
        <div>{{state.data.name}}</div>
      </div>
    </template>

    <script>
    import Search from "./components/Search";
    import { reactive } from "@vue/composition-api";

    export default {
      name: "App",
      components: {
        Search
      },
      setup() {
        const state = reactive({ data: {} });
        return {
          state,
          async search(ev) {
            const res = await fetch(`{ev}`);
            state.data = await res.json();
          }
        };
      }
    };
    </script>

In the code above, we have a similar structure to Search.vue. Something that we didn't have in that file is listening to events. We have the search method, which listens to the search event emitted from Search.vue. In the search method, we get some data, assign it to the state, and display it in the template. Also, we assigned the retrieved data to state.data instead.

We can add computed properties with the computed function from the Vue Composition API package. For instance, we can replace App.vue with the following to create a computed property and use it:

    <template>
      <div id="app">
        <Search label="name" @
        <div>{{state.name}}</div>
      </div>
    </template>

    <script>
    import Search from "./components/Search";
    import { computed, reactive } from "@vue/composition-api";

    export default {
      name: "App",
      components: {
        Search
      },
      setup() {
        const state = reactive({
          data: {},
          name: computed(() =>
            state.data.name ? `The name is: ${state.data.name}` : ""
          )
        });
        return {
          state,
          async search(ev) {
            const res = await fetch(`{ev}`);
            state.data = await res.json();
          }
        };
      }
    };
    </script>

In the code above, we changed App.vue slightly by importing the computed function to create a computed property. To create the property, we added:

    computed(() => state.data.name ? `The name is: ${state.data.name}` : "")

Then in the template, we referenced it by writing:

    <div>{{state.name}}</div>

Now when we type in something and click search, we get 'The name is (whatever you typed in)' displayed.

Conclusion

The Vue Composition API is useful for creating components in a complex app. It's organized in a way that reduces name clashes and provides better type inference in our components. We still have everything that we're used to, like states, methods, events, templates, and computed properties. It's just that they're in different places.
https://geekwall.in/p/Di_qCLPrQ/introduction-to-the-vue-composition-api
i18n of React with Lingui.js #1

stereobooster ・1 min read

i18n of React with Lingui.js (4 Part Series)

Everybody is talking about React Hooks after React Conf. Other talks didn't get that much attention. It's a pity, because there was an absolutely brilliant talk about i18n/l10n of React applications - Let React speak your language by Tomáš Ehrlich.

In this post, I want to show how to use Lingui.js to do i18n/l10n of React applications. I will use Node 10.10 and yarn, but I guess npm and other versions of Node would work too. The full source code is here. Each step of the tutorial is done as a separate commit, so you can trace all changes of the code.

Install

Follow the Create React App documentation for more info. Bootstrap your project with the following commands:

    npx create-react-app react-lingui-example
    cd react-lingui-example

Install @lingui/cli, @lingui/macro and Babel core packages as development dependencies and @lingui/react as a runtime dependency.

    npm install --save-dev @lingui/cli@next @lingui/macro@next @babel/core babel-core@bridge
    npm install --save @lingui/react@next

    # or using Yarn
    yarn add --dev @lingui/cli@next @lingui/macro@next @babel/core babel-core@bridge
    yarn add @lingui/react@next

Create a .linguirc file with the LinguiJS configuration in the root of your project (next to package.json):

    {
      "localeDir": "src/locales/",
      "srcPathDirs": ["src/"],
      "format": "lingui",
      "fallbackLocale": "en"
    }

This configuration will extract messages from source files inside the src directory and write them into message catalogs in src/locales.
Add the following scripts to your package.json:

    {
      "scripts": {
        "start": "lingui compile && react-scripts start",
        "build": "lingui compile && react-scripts build",
        "add-locale": "lingui add-locale",
        "extract": "lingui extract",
        "compile": "lingui compile"
      }
    }

Run npm run add-locale (or yarn add-locale) with the locale codes you would like to use in your app:

    npm run add-locale en
    # or using Yarn
    yarn add-locale en

Check the installation by running npm run extract (or yarn extract):

    npm run extract
    # or using Yarn
    yarn extract

There should be no error and you should see output similar to the following:

    yarn extract
    Catalog statistics:
    ┌──────────┬─────────────┬─────────┐
    │ Language │ Total count │ Missing │
    ├──────────┼─────────────┼─────────┤
    │ en       │      0      │    0    │
    └──────────┴─────────────┴─────────┘

    (use "lingui add-locale <language>" to add more locales)
    (use "lingui extract" to update catalogs with new messages)
    (use "lingui compile" to compile catalogs for production)

Congratulations! You've successfully set up a project with LinguiJS.

Basic usage (based on the example project)

Create src/i18n.js:

    import { setupI18n } from "@lingui/core";

    export const locales = {
      en: "English",
      cs: "Česky"
    };
    export const defaultLocale = "en";

    function loadCatalog(locale) {
      return import(/* webpackMode: "lazy", webpackChunkName: "i18n-[index]" */
      `./locales/${locale}/messages.js`);
    }

    export const i18n = setupI18n();
    i18n.willActivate(loadCatalog);

Add src/locales/*/*.js to .gitignore.

Add <I18nProvider> to the App:

    import { I18nProvider } from "@lingui/react";
    import { i18n, defaultLocale } from "./i18n";

    i18n.activate(defaultLocale);

    class App extends Component {
      render() {
        return <I18nProvider i18n={i18n}>{/* ... */}</I18nProvider>;
      }
    }

Use the <Trans> macro to mark text for translation:

    import { Trans } from "@lingui/macro";
    // ...
    <Trans>Learn React</Trans>;

Run npm run extract (or yarn extract):

    yarn extract
    Catalog statistics:
    ┌──────────┬─────────────┬─────────┐
    │ Language │ Total count │ Missing │
    ├──────────┼─────────────┼─────────┤
    │ en       │      2      │    2    │
    └──────────┴─────────────┴─────────┘

Now you can start your development environment with npm run start (or yarn start). You can edit src/locales/*/messages.json to change translations or upload those files to a translation service.

Hi, thanks for the simple yet powerful tutorial! I have one question about how it works: if you define a default value (for the default locale, en here), why would you also add-locale for the same language you define in the Trans component?

I didn't understand your question. Can you provide a code example? It seems you use different terminology.

I am trying your example code (github.com/stereobooster/react-lin...). The point is: if I run npm run extract, the language which is the default (en) has all messages "Missing" (because it is nonsense to give translations to these as well if that is the source of truth).

Yes, they are missing, because there is no "sourceLocale": "en"; otherwise they would be there. What is your question?

Catalog statistics are used to determine whether there are some non-translated messages; therefore, if you use it for any kind of automation (as a test) it wouldn't pass unless you translate the source messages as well. I am asking whether, in your opinion, this is good practice, or whether I am missing some point where you simply set "sourceLocale": "en" and the problem solves itself (because this isn't included in your example, nor the official one).

No, "sourceLocale": "en" is not a solution. You can ask the author for this feature.
https://dev.to/stereobooster/i18n-of-react-with-linguijs-1-24oi
original source :

Socket Programming in Java

    // A Java program for a Client
    import java.net.*;
    import java.io.*;

    public class Client {
        // initialize socket and input output streams
        private Socket socket = null;
        private DataInputStream input = null;
        private DataOutputStream out = null;

        // constructor to put ip address and port
        public Client(String address, int port) {
            // establish a connection
            try {
                socket = new Socket(address, port);
                System.out.println("Connected");

                // takes input from terminal
                input = new DataInputStream(System.in);

                // sends output to the socket
                out = new DataOutputStream(socket.getOutputStream());
            } catch (UnknownHostException u) {
                System.out.println(u);
            } catch (IOException i) {
                System.out.println(i);
            }

            // string to read message from input
            String line = "";

            // keep reading until "Over" is input
            while (!line.equals("Over")) {
                try {
                    line = input.readLine();
                    out.writeUTF(line);
                } catch (IOException i) {
                    System.out.println(i);
                }
            }

            // close the connection
            try {
                input.close();
                out.close();
                socket.close();
            } catch (IOException i) {
                System.out.println(i);
            }
        }

        public static void main(String args[]) {
            Client client = new Client("127.0.0.1", 5000);
        }
    }
    // A Java program for a Server
    import java.net.*;
    import java.io.*;

    public class Server {
        // initialize socket and input stream
        private Socket socket = null;
        private ServerSocket server = null;
        private DataInputStream in = null;

        // constructor with port
        public Server(int port) {
            // starts server and waits for a connection
            try {
                server = new ServerSocket(port);
                System.out.println("Server started");
                System.out.println("Waiting for a client ...");

                socket = server.accept();
                System.out.println("Client accepted");

                // takes input from the client socket
                in = new DataInputStream(
                    new BufferedInputStream(socket.getInputStream()));

                String line = "";

                // reads message from client until "Over" is sent
                while (!line.equals("Over")) {
                    try {
                        line = in.readUTF();
                        System.out.println(line);
                    } catch (IOException i) {
                        System.out.println(i);
                    }
                }
                System.out.println("Closing connection");

                // close connection
                socket.close();
                in.close();
            } catch (IOException i) {
                System.out.println(i);
            }
        }

        public static void main(String args[]) {
            Server server = new Server(5000);
        }
    }

This article is contributed by Souradeep Barua. If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.

Another explanation of socket programming in Java

original source :

Java Socket Programming. The client in socket programming must know two pieces of information:

- the IP address of the server, and
- the port number.

Here, we are going to make one-way client and server communication. In this application, the client sends a message to the server; the server reads the message and prints it. Here, two classes are being used: Socket and ServerSocket. The Socket class is used to communicate between client and server. Through this class, we can read and write messages. The ServerSocket class is used on the server side. The accept() method of the ServerSocket class blocks the console until a client is connected.
After the successful connection of the client, it returns an instance of Socket on the server side.

Socket class

A socket is simply an endpoint for communications between machines. The Socket class can be used to create a socket.

Important methods:
1) public InputStream getInputStream() - returns the InputStream attached to this socket.
2) public OutputStream getOutputStream() - returns the OutputStream attached to this socket.
3) public synchronized void close() - closes this socket.

ServerSocket class

The ServerSocket class can be used to create a server socket. This object is used to establish communication with the clients.

Important methods:
1) public Socket accept() - returns a socket and establishes a connection between server and client.
2) public synchronized void close() - closes the server socket.

Example of Java Socket Programming

Creating Server: To create the server application, we need to create an instance of the ServerSocket class. Here, we are using port number 6666 for communication between the client and server. You may also choose any other port number. The accept() method waits for the client. If a client connects with the given port number, it returns an instance of Socket.

    ServerSocket ss = new ServerSocket(6666);
    Socket s = ss.accept(); // establishes connection and waits for the client

Creating Client: To create the client application, we need to create an instance of the Socket class. Here, we need to pass the IP address or hostname of the server and a port number. Here, we are using "localhost" because our server is running on the same system.

    Socket s = new Socket("localhost", 6666);

Let's see a simple example of Java socket programming where the client sends a text and the server receives and prints it.
File: MyServer.java

    import java.io.*;
    import java.net.*;
    public class MyServer {
        public static void main(String[] args) {
            try {
                ServerSocket ss = new ServerSocket(6666);
                Socket s = ss.accept(); // establishes connection
                DataInputStream dis = new DataInputStream(s.getInputStream());
                String str = (String) dis.readUTF();
                System.out.println("message= " + str);
                ss.close();
            } catch (Exception e) { System.out.println(e); }
        }
    }

File: MyClient.java

    import java.io.*;
    import java.net.*;
    public class MyClient {
        public static void main(String[] args) {
            try {
                Socket s = new Socket("localhost", 6666);
                DataOutputStream dout = new DataOutputStream(s.getOutputStream());
                dout.writeUTF("Hello Server");
                dout.flush();
                dout.close();
                s.close();
            } catch (Exception e) { System.out.println(e); }
        }
    }

To execute this program, open two command prompts and execute each program at each command prompt as displayed in the below figure. After running the client application, a message will be displayed on the server console.

Example of Java Socket Programming (Read-Write both sides)

In this example, the client will write first to the server, then the server will receive and print the text. Then the server will write to the client, and the client will receive and print the text. The step goes on.
File: MyServer.java

    import java.net.*;
    import java.io.*;
    class MyServer{
    public static void main(String args[])throws Exception{
    ServerSocket ss=new ServerSocket(3333);
    Socket s=ss.accept();
    =din.readUTF();
    System.out.println("client says: "+str);
    str2=br.readLine();
    dout.writeUTF(str2);
    dout.flush();
    }
    din.close();
    s.close();
    ss.close();
    }}

File: MyClient.java

    ();
    }}

CSS display and visibility properties vs Android visibility

    p.ex1 {display: none;}
    p.ex2 {display: inline;}
    p.ex3 {display: block;}
    p.ex4 {display: inline-block;}

    h2.a { visibility: visible; }
    h2.b { visibility: hidden; }

Visible
XML file: android:visibility="visible"
Java code: view.setVisibility(View.VISIBLE);

Invisible
XML file: android:visibility="invisible"
Java code: view.setVisibility(View.INVISIBLE);

Hiding (GONE)
XML file: android:visibility="gone"
Java code: view.setVisibility(View.GONE);

This is material I found while studying the T academy Android content provider course.

original source :

Content providers manage access to a structured set of data. They encapsulate the data, and provide mechanisms for defining data security. Content providers are the standard interface that connects data in one process with code running in another process.

When structured data is encapsulated in Android and accessed from elsewhere, a Uri is used. A content provider runs in its own process; that is, this is about communication between different apps.

Google doc on content providers (it's not long, and good for grasping the concepts):

Content providers

Figure 2. Illustration of migrating content provider storage.

- How to access and update data using an existing content provider.
- Creating a content provider: How to design and implement your own content provider.
- How to access the Calendar provider that is part of the Android platform.
- How to access the Contacts provider that is part of the Android platform.

For sample code related to this page, refer to the Basic Sync Adapter sample application.

original source :

If you go to the link above, there is a walkthrough of building an actual custom content provider, with real example code.
ContentProvider

Sometimes it is required to share data across applications. This is where content providers become very useful: they expose an application's data to other applications through a standard set of methods. In most cases this data is stored in an SQLite database.

A content provider is implemented as a subclass of the ContentProvider class and must implement a standard set of APIs that enable other applications to perform transactions.

public class MyApplication extends ContentProvider {
}

Content URIs

To query a content provider, you specify the query string in the form of a URI which has the following format −

<prefix>://<authority>/<data_type>/<id>

Here is the detail of the various parts of the URI −

Create Content Provider

Creating your own content provider involves a number of simple steps:

- First of all, you need to create a Content Provider class that extends the ContentProvider base class.
- Second, you need to define your content provider URI address, which will be used to access the content.
- Next you will need to create your own database to keep the content. Usually, Android uses an SQLite database, and the framework needs to override the onCreate() method, which will use the SQLite Open Helper method to create or open the provider's database. When your application is launched, the onCreate() handler of each of its Content Providers is called on the main application thread.
- Next you will have to implement the Content Provider queries to perform the different database-specific operations.
- Finally, register your Content Provider in your AndroidManifest.xml using the <provider> tag.

Here is the list of methods which you need to override in the Content Provider class to have your Content Provider working −

ContentProvider

- onCreate() This method is called when the provider is started.
- query() This method receives a request from a client. The result is returned as a Cursor object.
- insert() This method inserts a new record into the content provider.
- delete() This method deletes an existing record from the content provider.
- update() This method updates an existing record in the content provider.
- getType() This method returns the MIME type of the data at the given URI.

While watching the T Academy videos my understanding fell short, so I looked for other material.

original source : Copying data to the clipboard means creating a clip object and then putting that object into the system-wide clipboard. In order to use the clipboard, you need to instantiate an object of ClipboardManager by calling the getSystemService() method. Its syntax is given below −

ClipboardManager myClipboard;
myClipboard = (ClipboardManager) getSystemService(CLIPBOARD_SERVICE);

Copying data

The next thing you need to do is to instantiate the ClipData object by calling the method of the ClipData class for the respective type of data. In the case of text data, the newPlainText method will be called. After that, you have to set that data as the clip of the ClipboardManager object. Its syntax is given below −

ClipData myClip;
String text = "hello world";
myClip = ClipData.newPlainText("text", text);
myClipboard.setPrimaryClip(myClip);

The ClipData object can take three forms, and the following functions are used to create those forms.

Pasting data

In order to paste the data, we will first get the clip by calling the getPrimaryClip() method. From that clip we will get the item as a ClipData.Item object, and from that object we will get the data. Its syntax is given below −

ClipData abc = myClipboard.getPrimaryClip();
ClipData.Item item = abc.getItemAt(0);
String text = item.getText().toString();

Apart from these methods, there are other methods provided by the ClipboardManager class for managing the clipboard framework. These methods are listed below −

original source : A very simple example implementing ClipboardManager and ClipData

original source : Here are some definitions:

original source :

original source : In one sentence, a Window is a rectangular area which has one view hierarchy. The colored rectangles in the image below are windows. As you can see, there can be multiple windows on one screen, and WindowManager manages them.
According to the Android Developer Documentation, "Each activity is given a window in which to draw its user interface." Dianne Hackborn, who is an Android framework engineer, gave some definitions here (about an hour long). She said:

Also, I found some other info in Romain Guy's presentation (you can watch his talk at the San Francisco Android user group here, and download the full slides from here).

So, in a nutshell:

- An Activity has a window (in which it draws its user interface),
- a Window has a single Surface and a single view hierarchy attached to it,
- a Surface includes a ViewGroup, which holds Views.

original source : An explanation of the Android Canvas, Bitmap, and Paint; watch from 1:00.
http://jacob-yo.net/2019/11/
I have some pandas code running for 9 different files each day. Currently, I have a scheduled task to run the code at a certain time, but sometimes the files have not been uploaded to the SFTP by our client on time, which means that the code will fail. I want to create a file-checking script.

import os, time

filelist = ['file1', 'file2', 'file3']

while True:
    list1 = []
    for file in filelist:
        list1.append(os.path.isfile(file))
    if all(list1):
        # All elements are True. Therefore all the files exist. Run %run commands
        break
    else:
        # At least one element is False. Therefore not all the files exist.
        # Run FTP commands again
        time.sleep(600)  # wait 10 minutes before checking again

all() checks if all the elements in the list are True. If at least one element is False, it returns False.
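The same polling loop can be wrapped in a reusable function. Here is a sketch using pathlib; the function name and the max_attempts cut-off are my additions, not part of the original snippet:

```python
import time
from pathlib import Path


def wait_for_files(paths, poll_seconds=600, max_attempts=None):
    """Block until every path in `paths` exists as a file.

    Polls every `poll_seconds` seconds. Returns True once all files are
    present; returns False if `max_attempts` checks are exhausted first,
    so the caller can alert instead of looping forever.
    """
    attempts = 0
    while True:
        # Collect the paths that are still missing on this pass.
        missing = [p for p in paths if not Path(p).is_file()]
        if not missing:
            return True
        attempts += 1
        if max_attempts is not None and attempts >= max_attempts:
            return False
        time.sleep(poll_seconds)
```

With max_attempts=None it behaves like the original while True loop, blocking until the client finally uploads the files.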
https://codedump.io/share/3z0owijPQktg/1/checking-if-a-list-of-files-exists-before-proceeding
Apr 05, 2012 11:07 AM

hc1: They are doing some common tasks, but they still need to interact with the rest of the application. In these files, how do I use routes or call the controller actions?

Please elaborate your needs.

Apr 05, 2012 11:46 AM

Hi, welcome to the world of MVC. You'd better start from the following sample: Have fun

Apr 05, 2012 12:30 PM.

Apr 05, 2012 12:59 PM

HttpContext.Current.Response.Redirect("....") — you just need to add the namespace the HttpContext class is contained in, and then calling HttpContext.Current.Response.Redirect("....") will work. However, it is not good practice to call web-environment functions in business classes.

Apr 05, 2012 01:41 PM

hc1: in one of my methods in Class1.cs I need to do a HttpContext.Current.Response.Redirect("....").

The class should announce (raise an event?) that the web layer should perform a redirect, not perform the redirect itself...

Apr 05, 2012 03:12 PM

hc1: What is announce?

It could also be something written into a property that tells the controller to perform a redirect.

Apr 05, 2012 03:50 PM?

13 replies. Last post Apr 07, 2012 03:42 AM by hc1.
http://forums.asp.net/p/1789567/4919502.aspx/1?Re+What+MVC+features+available+to+normal+class+files+
I'm learning C# on my own. Here is the code that is in error:

Code:

using System;

public struct Person
{
    public Person(string name, int age)
    {
        this.Name = name;
        this.Age = age;
    }

    public string Name { get; set; }
    public int Age { get; set; }

    public override string ToString()
    {
        return String.Format("{0} is {1} years old", Name, Age);
    }
}

public class CSharpApp
{
    static void Main()
    {
        Person p1 = new Person("Beky", 18);
        Person p2 = p1;
        Console.WriteLine(p2);
        p2.Name = "Jane";
        p2.Age = 17;
        Console.WriteLine(p2);
        Console.WriteLine(p1);
    }
}

The error is:

"Backing field for automatically implemented property 'CSharpApp.Person.Age' must be fully assigned before control is returned to the caller. Consider calling the default constructor from a constructor initializer."

The same error is reported for the other property.

The problem is that I really cannot understand it! haha. What I mean to say is, there should be nothing wrong with the get or set methods. Hmm. I really can't figure it out. Please tell me how I should get at the correct answer or something. Really appreciated.
http://forums.devshed.com/net-development-87/consider-calling-default-constructor-939544.html
Type.GetType Method (String, Boolean, Boolean)

Updated: July 2009.

- throwOnError
  Type: System.Boolean
  true to throw an exception if the type cannot be found; false to return null.

If the type is not found, the throwOnError parameter specifies whether null is returned or an exception is thrown. In some cases, an exception is thrown regardless of the value of throwOnError. See the Exceptions section.

You can use the GetType method to obtain a Type object for a type in another assembly, if you know its namespace-qualified name. GetType causes loading of the assembly specified in typeName. You can also load an assembly using the Load method, and then use the GetType or GetTypes methods of the Assembly class to get Type objects. If a type is in an assembly known to your program at compile time, it is more efficient to use typeof in C#, GetType in Visual Basic, or typeid in C++.

GetType only works on assemblies loaded from disk. If you call GetType to look up a type defined in a dynamic assembly defined using the System.Reflection.Emit services, you might get inconsistent behavior. The behavior depends on whether the dynamic assembly is persistent, that is, created using the RunAndSave or Save access modes of the System.Reflection.Emit.AssemblyBuilderAccess enumeration. If the dynamic assembly is persistent and has been written to disk before GetType is called, the loader finds the saved assembly on disk, loads that assembly, and retrieves the type from that assembly. If the assembly has not been saved to disk when GetType is called, the method returns null. GetType does not understand transient dynamic assemblies; therefore, calling GetType to retrieve a type in a transient dynamic assembly returns null.

The throwOnError parameter does not suppress every failure, as described in the Exceptions section: some exceptions are thrown regardless of its value. For example, if the type is found but cannot be loaded, a TypeLoadException is thrown even if throwOnError is false.
Arrays or COM types are not searched for unless they have already been loaded into the table of available classes.

typeName can be the type name qualified by its namespace, or an assembly-qualified name that includes an assembly name specification. See AssemblyQualifiedName.

If typeName includes the namespace but not the assembly name, this method searches only the calling object's assembly and Mscorlib.dll, in that order. If typeName is fully qualified with the partial or complete assembly name, this method searches in the specified assembly. If the assembly has a strong name, a complete assembly name is required.

The AssemblyQualifiedName property returns a fully qualified type name including nested types, the assembly name, and type arguments. All compilers that support the common language runtime will emit the simple name of a nested class, and reflection constructs a mangled name when queried, in accordance with the following conventions. For example, the fully qualified name for a class might look like this:

If the namespace were TopNamespace.Sub+Namespace, then the string would have to precede the plus sign (+) with an escape character (\) to prevent it from being interpreted as a nesting separator.

The name of a generic type ends with a backtick (`) followed by digits representing the number of generic type arguments. The purpose of this name mangling is to allow compilers to support generic types with the same name but with different numbers of type parameters, occurring in the same scope. For example, reflection returns the mangled names Tuple`1 and Tuple`2 from the generic types Tuple(Of T) and Tuple(Of T0, T1) in Visual Basic, or Tuple<T> and Tuple<T0, T1> in Visual C#.

For generic types, the type argument list is enclosed in brackets, and the type arguments are separated by commas. For example, a generic Dictionary<TKey, TValue> has two type parameters.
A Dictionary<TKey, TValue> of MyType with keys of type String might be represented as follows:

To specify an assembly-qualified type within a type argument list, enclose the assembly-qualified type within brackets. Otherwise, the commas that separate the parts of the assembly-qualified name are interpreted as delimiting additional type arguments. For example, a Dictionary<TKey, TValue> of MyType from MyAssembly.dll, with keys of type String, might be specified as follows:

Nullable types are a special case of generic types. For example, a nullable Int32 is represented by the string "System.Nullable`1[System.Int32]".

The following table shows the syntax you use with GetType for various types.

Windows Mobile for Pocket PC, Windows Mobile for Smartphone, Windows CE Platform Note: The ignoreCase parameter is not supported and should be set to false.
https://msdn.microsoft.com/en-us/library/a87che9d(v=vs.90).aspx
A pet project of mine has flung me into the exciting though less-than-firm territory of web-backed geographical information systems. Since I don’t have the thousands of dollars it costs to get a commercial server like those provided by ESRI, I’ve had to check out the open-source alternatives. And there are some out there. I’m using GeoServer, and it works great! I can send all the web-feature service transactions (WFS-T) in XML I want and it works every time. Not bad if you want to make a GoogleMap of your house on your own—so long as you’re content to hard-code everything by hand. Should you want to jazz things up a bit (i.e., make minimally useful dynamic maps), like me, then you have to do a little more work. Actually, you need to do a lot more work. GeoServer is built on top of a gargantuan set of Java libraries, collectively packaged under the name GeoTools. Now I appreciate that this thing exists, all two hundred and fifty megs of source code and all the functionality that comes with it. However, navigating the mountain of documentation for this thing is, at least for me, a little daunting. It took me a few days (and some serious help from my friend Matt) to figure out how to write a simple update transaction using their API. (Compare that to the forty-two seconds it takes me to type up the XML.) Since other people might want to know what they have to do update an attribute field using WFS with GeoTools, and since I couldn’t easily find out how to do it elsewhere, I’ve decided to post a short snippet of code right here on my blog. That’s right: my charity knows no bounds. In this example I’m going to update the value of all the features (polygons, lines, points, whatever) that match a simple filter. Here I’m going to change the value of propertyToUpdate to updatedValue using a filter to get all the features with the attribute called constraintProperty with a value of constraintValue. 
I’ve marked them in red, so that it’s as easy as possible to customize this example to fit your needs. Let’s start with the XML that the Open Geospatial Consortium standards expect to see.

<wfs:Transaction service="WFS" version="1.0.0"
    xmlns:myns=""
    xmlns:ogc=""
    xmlns:wfs="">
  <wfs:Update typeName="myns:LayerToUpdate">
    <wfs:Property>
      <wfs:Name>propertyToUpdate</wfs:Name>
      <wfs:Value>updatedValue</wfs:Value>
    </wfs:Property>
    <ogc:Filter>
      <ogc:PropertyIsEqualTo>
        <ogc:PropertyName>constraintProperty</ogc:PropertyName>
        <ogc:Literal>constraintValue</ogc:Literal>
      </ogc:PropertyIsEqualTo>
    </ogc:Filter>
  </wfs:Update>
</wfs:Transaction>

Now let’s rock out the Java. Like I said, GeoTools is mammoth. To make life easy, we’re going to import a whole bunch of classes for this example. So many, in fact, that their number really warrants my displaying them here in their own list. What’s more, the names of some of the classes (like Filter) show up in more than one package, and you need to keep track of which is used where. So keep an eye out for things from org.opengis.

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Level;
import org.geotools.data.DataStore;
import org.geotools.data.DefaultTransaction;
import org.geotools.data.FeatureStore;
import org.geotools.data.Transaction;
import org.geotools.data.wfs.WFSDataStoreFactory;
import org.geotools.feature.AttributeType;
import org.geotools.feature.FeatureType;
import org.geotools.filter.FilterFactoryFinder;
import org.geotools.xml.XMLSAXHandler;
import org.opengis.filter.Filter;
import org.opengis.filter.FilterFactory;
import org.opengis.filter.expression.Expression;

In our constructor we’ll set up a connection to the WFS server using a URL. If you’re tinkering with GeoServer, then the URL you’re looking for probably looks something like.
Since we know that we’ll want to filter our responses, it’s not a terrible idea to make a filter factory now and save it for later. In GeoTools everything is made using a factory. For filters, though, we need to make a factory using the new keyword. Here goes.

public class WFSUpdater {
    private DataStore wfs;
    private FilterFactory filterFactory;

    public WFSUpdater(String url) {
        try {
            URL endPoint = new URL(url);
            XMLSAXHandler.setLogLevel(Level.OFF); // turns off logging for XML parsing.

            // Parameters to connect to the WFS server, namely the URL.
            // You could have others, say if you had to authenticate your
            // connection with a username and password.
            Map params = new HashMap();
            params.put(WFSDataStoreFactory.URL.key, endPoint);
            wfs = (new WFSDataStoreFactory()).createNewDataStore(params);

            filterFactory = FilterFactoryFinder.createFilterFactory();
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

Now that we have a connection, it’s time to make the transaction. As a first timer to GeoTools, I found it difficult to crawl through the documentation. Lots of their classes and methods have been deprecated with no clear hints about what to use instead. I had a hard time finding the right constructors. In what follows everything is hard-wired into the code, but it shouldn’t be all that bad to tweak things so that it works the way you like.

    public void updateProperty(String propertyToUpdate, String updatedValue) {
        // This is the layer name on the server.
        String layer = "myns:LayerToUpdate";
        // The handle/ID of this transaction is called "update." It's required.
        Transaction update = new DefaultTransaction("update");

        try {
            // Make the filter.
            Expression property = filterFactory.property("constraintProperty");
            Expression value = filterFactory.literal("constraintValue");
            Filter filter = filterFactory.equals(property, value); // This is an org.opengis.filter.Filter.
            FeatureStore features = (FeatureStore) wfs.getFeatureSource(layer);
            // Set the transaction. Otherwise, we can't commit our changes later on.
            features.setTransaction(update);

            // Fetch the property from the FeatureType schema so that we can
            // update it with the new value.
            FeatureType schema = features.getSchema();
            AttributeType attributeToUpdate = schema.getAttributeType(propertyToUpdate);
            features.modifyFeatures(attributeToUpdate, updatedValue, (org.geotools.filter.Filter) filter); // There's that casting again.

            // Record the modifications.
            update.commit();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Anyway, I hope this saves some people the hassle of tearing through the Javadocs for GeoTools. Also, if there’s a better way to do what I did, please let me know. Happy GIS-ing.

Technorati Tags: geotools, geoserver, opengis, ogc, java, wfs, wfs-t, shapefile, update, request, javadoc, gis, esri, xml

Nice post :-) So can we add your helpful information to the documentation for the WFS DataStore? As far as I know there is not very much documentation yet. For now I am linking to this blog post from the wiki: WFS Plugin. If you are able to contribute documentation (and example code) please add it to this user guide, as it will be easier for others to discover.

I’m only too glad to be of service. Thanks for the link.

Hi, I’m new to this. I read your text and code, but where do I put this code for one particular layer so that it will not affect my other layers? Thanks

Thanks for this post. By the way… I’m using GeoTools (I tried several versions) to connect to an Oracle 11.1.0 schema via GeoServer WFS. When using GeoServer 2.1.2 it works fine until you perform a commit (or an auto-commit). At that point GeoServer sticks on some problem causing the client connection to time out. Once you shut down the server the commit is executed (but it’s too late…). My workaround was to move to version 2.1.0 of GeoServer.

I wish I could remember what my development environment was.
But I wrote this post back in 2007 and haven’t done much with anything GIS since.
https://blogs.harvard.edu/jreyes/2007/08/03/geotools-wfs-t-update-request/?replytocom=6653
You might have learnt the intersection of subsets in mathematics; similarly, we have intersection types in Scala 3.0 as well. Let's see how we can use intersection types while programming in Scala.

What is Intersection or Conjunction?

The intersection of two or more things under consideration is the collection of things common among them. Let's understand it with respect to set theory:

Set s1 = {2, 3, 5}, the set of the first three primes
Set s2 = {2, 4, 6}, the set of the first three evens

Now, the intersection of s1 and s2 will be:

Set s = {2}

What we can infer from it is: s (the intersection of s1 and s2) is a set which belongs to s1 as well as s2, i.e. 's' is the set of prime numbers which are also even.

Now, we can see what it means in terms of Scala 3.0. Consider the above-mentioned sets as types and their elements as their members. So, here's what we can infer about s (the intersection of types s1 and s2): s is a type which is s1 and also s2 at any given point in time.

A complete example would look something like this:

trait Respiration {
  def breathe(): Unit
}

trait Lungs extends Respiration
trait Gills extends Respiration

trait Move {
  def move(x: Int, y: Int): Unit
}

Now, if we have a method run() which takes an object that can move and respire, it would be:

def run(obj: Move & Respiration): Unit = {
  println("I ran 100 miles")
}

Here, in the method signature we have specified that this method takes a parameter which can move and respire. The & operator is available in Scala 3.0 and can only be compiled by the dotc compiler. So, a sample class whose objects can be passed to run() for successful execution is:

class Human extends Move with Lungs {
  override def move(x: Int, y: Int): Unit = {}
  override def breathe(): Unit = {}
}

So, in the run() method, "obj" should be a sub-type of both the Move and Respiration traits for it to execute successfully.

I hope you now understand what intersection types in Scala 3.0 (Dotty) are all about. For more details, visit the official documentation.

Thank you.
https://blog.knoldus.com/intersection-types-in-scala-3-0/
Chapter 5. Modeling Your Programs with UML

In This Chapter

- Using some UML extras, such as packages, notes, and tags
- Taking advantage of the freedom UML gives you
- Creating C++ enumerations in UML
- Using static members in UML
- Notating templates with UML

In this chapter, we give you some miscellaneous details about using UML. After you understand how to use the diagrams and have a feel for a methodology or process, read this chapter for interesting details about UML. For example, you can use several symbols in any of your diagrams to make them more descriptive; we discuss those here. We also talk about how to show various C++ features in UML.

Using UML Goodies

The UML specification is huge. We're talking big. So in this section we give you some additional information you can use when creating UML diagrams.

Packaging your symbols

In computer programming, a common term is namespace. When you have functions and classes and variables, you can put them into their own namespace, which is nothing more than just a grouping. When you do so, the names of the functions, classes, and variables must be unique within the namespace. But if you create another namespace, you can reuse any of the names from the other namespace. In technical terms, identifiers must be unique within a namespace. To make this clear, let us show you a C++ example. In C++, you can create a namespace by using none other than the namespace keyword. Have a gander at Listing 5-1 — and bear in mind that using namespace std; line you see ...

Get C++ All-In-One For Dummies®, 2nd Edition now with O'Reilly online learning.
https://www.oreilly.com/library/view/c-all-in-one-for/9780470317358/ch13.html
Hi Dan,

I have tried the -mlong-calls option to the compiler, but it still gives "unable to resolve printk" when I insmod the LKM. When I 'cat' /proc/ksyms, it shows printk_Rcxyz.... When I disable MODVERSIONING using 'make menuconfig' during the kernel configuration process and build the kernel again for the MIPS platform, /proc/ksyms still shows printk_Rcxyz..., i.e., the function name with a suffix. Can you elaborate on the steps needed to build the kernel with MODVERSIONING disabled, etc.?

Regards,
Adeel

-----Original Message-----
From: Dan Aizenstros [mailto:daizenstros@quicklogic.com]
Sent: Thursday, August 28, 2003 8:43 PM
To: AdeelM@avaznet.com
Subject: Re: RE: Information required

Hello Adeel,

You need to pass -mlong-calls to the compiler. You can add it to the CFLAGS.

Regards,
Dan Aizenstros
Software Engineering Manager
QuickLogic Canada

>>> Adeel Malik <AdeelM@avaznet.com> 08/28/03 08:22 AM >>>

Hello Jun,

Thanks for the reply. I have checked for the unresolved function symbols like "printk" and "register_chrdev", and found that they are present in /proc/ksyms. So it appears to me that I may be compiling the module with incorrect parameters. Below is the Makefile for the loadable module:

/*****************************************************************************/
CROSS_COMPILE= /backup/buildroot-QuickMIPS/build/staging_dir/bin/mipsel-uclibc-
TARGET = example_driver
INCLUDE = /backup/buildroot-QuickMIPS/build/linux-2.4.20/include
CC = $(CROSS_COMPILE)gcc -I${INCLUDE}
CFLAGS = -DMODVERSIONS -I${INCLUDE}/linux/modversions.h

${TARGET}.o: ${TARGET}.c

.PHONY: clean
clean:
	rm -rf ${TARGET}.o
/*****************************************************************************/

Do you think that I need to modify the Makefile or add some more options to CFLAGS? I have ensured that the kernel is compiled with the "module" option turned on.
Also, my module uses symbol versioning (sometimes called module versioning). I use the following lines of code at the start of the header file to accomplish this:

/*****************************************************************************/
#if defined(CONFIG_MODVERSIONS) && !defined(MODVERSIONS)
#include <linux/modversions.h>
#define MODVERSIONS
#endif
/*****************************************************************************/

I have successfully cross-compiled user-space applications on the target platform. Only when I do the kernel work does this unresolved-symbol phenomenon (with printk, register_chrdev, etc.) happen.

ADEEL MALIK,

-----Original Message-----
From: linux-mips-bounce@linux-mips.org [mailto:linux-mips-bounce@linux-mips.org] On Behalf Of Jun Sun
Sent: Wednesday, August 27, 2003 9:51 PM
To: Adeel Malik
Cc: linux-mips@linux-mips.org; jsun@mvista.com
Subject: Re: Information required

On Wed, Aug 27, 2003 at 07:30:26PM +0500, Adeel Malik wrote:
> Hi All,
> I am involved in Embedded Linux Development for MIPS Processor. I
> need to write a device driver for a MIPS Target Platform. When I insmod the
> driver.o file, the linux bash script running on the target hardware gives me
> the error message like;
> 1. unable to resolve the printk function
> 2. unable to resolve the register_chardev function
> etc.
> Can you plz give me the direction as to how to proceed to tackle this
> situation.

Make sure your kernel is compiled with the module option turned on. Does it use module versions (CONFIG_MODVERSIONS)? If so, add

-DMODVERSIONS -include $(KERNEL_PATH)/include/linux/modversions.h

to your CFLAGS.

Jun
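Folding Jun's -DMODVERSIONS/-include flags and Dan's -mlong-calls suggestion into the Makefile from the first message gives roughly the following. This is a sketch, not a tested build file; the paths are the ones from the original thread, and the original's -I${INCLUDE}/linux/modversions.h (which only adds an include search path) is replaced by -include, which actually forces the header into every compilation unit:

```make
CROSS_COMPILE = /backup/buildroot-QuickMIPS/build/staging_dir/bin/mipsel-uclibc-
TARGET  = example_driver
INCLUDE = /backup/buildroot-QuickMIPS/build/linux-2.4.20/include
CC      = $(CROSS_COMPILE)gcc -I$(INCLUDE)

# -include (not -I) pulls in modversions.h so the module references the
# versioned symbol names (e.g. printk_Rxxxxxxxx) that the kernel exports;
# -mlong-calls avoids out-of-range jumps when the module is loaded far
# from the kernel image on MIPS.
CFLAGS = -DMODVERSIONS -include $(INCLUDE)/linux/modversions.h -mlong-calls

$(TARGET).o: $(TARGET).c

.PHONY: clean
clean:
	rm -rf $(TARGET).o
```

The object rule relies on make's implicit %.o: %.c rule, which compiles with $(CC) and $(CFLAGS).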
http://www.linux-mips.org/archives/linux-mips/2003-08/msg00152.html
The driver requires Swift 4.2+.

macOS and Linux

Installation on macOS and Linux is supported via Swift Package Manager.

Step 1: Install the MongoDB C driver, following the instructions in libmongoc's documentation. Note that the versions provided by your package manager may be too old, in which case you can follow the instructions for building and installing from source. See the example installation from source on Ubuntu in Docker.

Step 2: Install MongoSwift

Please follow the instructions in the previous section on installing the MongoDB C driver before proceeding. Add MongoSwift to your dependencies in Package.swift:

// swift-tools-version:4.2
import PackageDescription

let package = Package(
    name: "MyPackage",
    dependencies: [
        .package(url: "", from: "VERSION.STRING.HERE"),
    ],
    targets: [
        .target(name: "MyPackage", dependencies: ["MongoSwift"])
    ]
)

Then run swift build to download, compile, and link all your dependencies.

iOS, tvOS, and watchOS

Installation is supported via CocoaPods. The pod includes as a dependency an embedded version of the MongoDB C driver, meant for use on these OSes. Note: the embedded driver currently does not support SSL. See #141 and CDRIVER-2850 for more information.

Add MongoSwift to your Podfile as follows:

platform :ios, '11.0'
use_frameworks!

target 'MyApp' do
    pod 'MongoSwift', '~> VERSION.STRING.HERE'
end

Then run pod install to install your project's dependencies.

Example Usage

Note: You should call cleanupMongoSwift() exactly once at the end of your application to release all memory and other resources allocated by libmongoc.

Connect to MongoDB and Create a Collection

import MongoSwift

let client = try MongoClient("mongodb://localhost:27017")
let db = client.db("myDB")
let collection = try db.createCollection("myCollection")

// free all resources
cleanupMongoSwift()

Note: we have included the client connectionString parameter for clarity, but if connecting to the default "mongodb://localhost:27017" it may be omitted.

let doc: Document = ["a": 1, "b": 2, "c": 3, "d": 4]

// Using functional methods like map, filter:
let evensDoc = doc.filter { elem in
    guard let value = elem.value as? Int else { return false }
    return value % 2 == 0
}
print(evensDoc) // prints `{ "b" : 2, "d" : 4 }`

let doubled = doc.map { elem -> Int in
    guard let value = elem.value as? Int else { return 0 }
    return value * 2
}
print(doubled) // prints `[2, 4, 6, 8]`

Note that Document conforms to Collection, so useful methods from Sequence and Collection are all available. However, runtime guarantees are not yet met for many of these methods.

Usage With Kitura and Vapor

The Examples/ directory contains sample projects that use the driver with Kitura and Vapor.

Development Instructions

See our development guide for instructions for building and testing the driver.

Latest podspec

{
    "name": "MongoSwift",
    "version": "0.1.3",
    "summary": "The Swift driver for MongoDB",
    "homepage": "",
    "license": "Apache License, Version 2.0",
    "authors": {
        "Matt Broadstone": "[email protected]",
        "Kaitlin Mahar": "[email protected]",
        "Patrick Freed": "[email protected]"
    },
    "source": {
        "git": "",
        "tag": "v0.1.3"
    },
    "platforms": {
        "ios": "11.0",
        "tvos": "10.2",
        "watchos": "4.3"
    },
    "requires_arc": true,
    "source_files": "Sources/MongoSwift/**/*.swift",
    "dependencies": {
        "mongo-embedded-c-driver": [
            "~> 1.13.0-4.0.0"
        ]
    }
}

Sun, 09 Jun 2019 10:26:13 +0000
https://tryexcept.com/articles/cocoapod/mongoswift
Rod Johnson Discusses Spring, OSGi, Tomcat and the Future of Enterprise Java

1. Hi, my name is Ryan Slobojan and I am here with Rod Johnson at QCon. How are you doing?

Pretty good. I am enjoying the conference, I am enjoying being back in London.

2. Excellent, glad to hear it. One of the first things that I wanted to ask you about is the upcoming group of releases on March 20th related to the Spring portfolio. Can you tell us a little bit more about that?

Yes, we've got quite a lot of new functionality coming across the Spring portfolio, and this includes, for example, the Spring Security version 2.0 release. This is the evolution of Acegi Security for Spring, so it now makes very clear that this is very much a core part of the Spring portfolio, and it also makes Acegi Security much easier to use. One of the problems with Acegi Security, historically, was that it was very powerful but it wasn't simple to use. Our motto with Spring has always been "simplicity and power," and I think we've finally gotten to the point where we have greatly simplified configuration through using Spring 2 namespaces and made Spring Security a real pleasure to use.

Some of the other releases that are upcoming include Spring Web Flow 2.0. There are a number of major enhancements in Spring Web Flow, also geared around ease of use, and additional functionality with respect to JSF and AJAX. So we want to make sure that, for users who are using JSF, Spring Web Flow is the most natural choice of framework for them. We also have a variety of SpringSource product releases coming in the next month or two, including the SpringSource Tool Suite, which is a value-added distribution of Eclipse with many new features available to our subscription customers.
Also the SpringSource Application Management Suite, which is a management and operational monitoring product that can provide detailed insight into Spring applications in production. And the SpringSource Advanced Pack for Oracle database which provides value-added connectivity to Oracle RAC, Oracle AQ and some other Oracle products. 3. Interesting. One of the things that you had mentioned was Oracle. Now one of the recent changes in the software development arena has been that Oracle has bought BEA and Sun has bought MySQL. How do you think that's changed the landscape for both open source and Java? I think that's a very interesting question. I don't think that the Oracle acquisition of BEA makes so much difference for open source because obviously neither of those companies is an open source company. We have an excellent relationship with BEA and also a good relationship with Oracle so we actually think that it will continue to be a very good story around working with Spring and WebLogic, in particular, together. So with respect to open source, BEA, I guess, has used more open source than Oracle, so BEA for example has used Spring in the core of WebLogic 10. Oracle on the other hand is also increasing its use of open source so I'd expect it maybe to push that trend a little further within Oracle. I think there's some pretty significant implications for the JCP, because interestingly the app server market looks more and more like a two horse race now. So if you look at the conventional app server market it is now going to be totally dominated by IBM and Oracle, when Oracle acquires the WebLogic customer base. Meanwhile the challenge to BEA, in particular from JBoss, seems to have receded since Red Hat acquired JBoss. So, I think that is going to have some very interesting fallout. Essentially the full-blown Java EE server market is going to be largely a two horse race. 
And I think that may well encourage even more people to look towards lighter-weight solutions like Tomcat. With respect to the acquisition of MySQL by Sun, I think this is going to be a very good thing for Java. I think it is going to clearly help to drive Sun's commitment to open source, and I would expect to see, for example, and hope to see that the JCP should become more open, more like an open source process through this. 4. Excellent, and you had also mentioned that you believe there is going to be a move towards a lighter-weight solution such as Tomcat. Now do you think that the Java EE 6 specification, with its profiles idea, is going to help with that or is this something that is going to happen independent of that? I think it is something that is going to happen in any case. I personally see Java EE 6 as being more a question of whether Java EE 6 is going to get with the program and ensure that Java EE remains relevant. I think we are long past the point where the Java EE specifications really shape the market. I think that there are a number of things in Java EE 6 that are very positive. One of them is profiles. I think it is very clear that people do value JCP standards, they value some a lot more than others, and I think it is very clear that users want to have standardized and coherent subsets. So I think that profiles is a very good idea, I also think the extensibility goal is a very good idea and recognizing that part of the role for Java EE is to create a level playing field on which innovation can flourish rather than necessarily delivering everything, from soup to nuts, that people need in enterprise Java. Of course SpringSource is involved in the Java EE 6 effort. I am on the Java EE 6 expert group and I do think in particular the profiles move is very good. 
Interestingly, if you look at the statistics in terms of Tomcat take-up -- this is something I blogged about a couple of months ago -- I think Tomcat has gotten to a dominant position more or less without anyone noticing. There is no question now that on every set of numbers that I have seen, for share of different application servers, Tomcat is in first place. It is significantly ahead of WebSphere, which shows up in second place on most of those surveys. So I think it is very interesting that while Java EE has progressed in a particular direction, at least up until Java EE 6, very large parts of the market have just voted with their feet and decided "Hey, this is the subset that we want". 5. And one of the other products which has come out recently is Spring Dynamic Modules 1.0. Do you see that as being a strategic component of the Spring portfolio? Absolutely. Spring Dynamic Modules, we think, is an extremely important project because we believe that the combination of Spring and OSGi is the future of Java middleware. OSGi has definitely reached the point where it is going to help to reshape the server side in the same way that, for example, it enabled the highly successful Eclipse plugin architecture. OSGi gives you an excellent dynamic modularization solution and solves problems like versioning, hot deployment, etc. However, it doesn't give you a component model, it doesn't give you enterprise services, it doesn't really give you the kind of experience that enterprise developers want. You put OSGi together with the Spring component model and enterprise services, and then you get something that really is both very easy to use and extremely capable. So we see Spring Dynamic Modules as something that is very important to our strategy for that reason. 6. I have one question. In the last few years we have seen a lot of pulling back from enterprise technologies; POJO is probably more popular than EJB.
Now there are profiles for J2EE 6, so I wonder what is your estimation about the future. Would Tomcat and Spring maybe be the mainstream enterprise server? Or do you think that there will be still place for heavy and complex, full-blown J2EE servers? Well that is certainly a question that got to the point. I think the pretty clear answer to that question as to will Spring and Tomcat become the mainstream enterprise Java platform? The answer is, Spring and Tomcat is the mainstream enterprise Java platform. I can't actually share the numbers, but we have actually commissioned some research from analyst firms, and basically what all those studies have shown is that Spring and Tomcat is more prevalent than for example WebSphere, WebLogic, so I think it is pretty clear that a change has happened in the market. There are a number of data points to verify this. I mean, rather than just take my word for it, you can for example look at job listings and you will find more jobs requiring Spring than requiring EJB. With respect to the J2EE or Java EE platform I don't think it is dead, I think the profiles can keep it alive. I think the full blown application server is dead and frankly I think that's a good thing. That's not to say for example that there isn't value in products like WebSphere and WebLogic. There is value in those products, but I think people want to consume that value in different ways. People don't want to have something that really is pretty big and bloated shoved down their throat -- they want to be able to pick and choose. And indeed if you look for example at some of the initiatives that those vendors are taking, like for example BEA's MSA or Micro Services Architecture which is using OSGi. They are actually changing the products to make them more modular. 
I think it is pretty clear that the Java EE application server was something that was designed by committee; it never really solved the problems that it was intended to solve, and the world is changing in a way that makes it even less relevant than it used to be. So for example with the rise of SOA there are fewer and fewer applications that kind of fit that stovepipe architecture. With respect to specific component technologies of Java EE, I think some of them have a bright future, and I think others of them don't. Some that clearly have a strong future and remain very relevant include the servlet API; it's still very fundamental. A lot of technologies or APIs like JMS and JTA define the fundamental middleware contracts; I think those will continue to play a very important role. JPA I think is likely to be successful. With respect to EJB, I think my views are pretty well known on it. I mean EJB is just a bad technology. I can't quite understand the desire that at least Sun and some of the vendors seem to have to keep EJB alive. I think it has reached the point where it has a negative brand value. So for example a surprising number of large organizations like banks ban the use of EJB now. And realistically I think there were two choices: one was to decide that maybe we should just end-of-life it; the other was to accept that there was no point in backward compatibility with failed previous versions and start from scratch, since something could obviously be more successful without the baggage. 7. The benefits of OSGi for servers or for building IDEs are clear, but what is the opportunity for application developers with OSGi? So for example the problem of version conflict in enterprise Java is gradually getting worse and worse. I think pretty much everyone has experienced the problem where they have seen a clash between different projects that are using different open source libraries.
As an example of that, Hibernate 3, the first time it ran a query would cause a WebLogic 8.1 or 9 server to quit, literally quit, with a console message saying: "CharScanner; panic". The reason that happened was because WebLogic used ANTLR and so did Hibernate; but they used different versions of ANTLR. Those kind of problems are getting worse and worse as even commercial application servers use more and more open source. Basically the solution to that in Java EE is "Oh, we haven't gotten to that yet, so just use whatever duct tape the vendor gave you". So there is a variety of pretty horrible hacks that you can use to invert class loading order, and they are not portable and they are really not a strategic solution. It is a band-aid solution. If you look at OSGi, it really solves that problem cleanly. It solves it in a standard way that is portable between different environments. It's predictable, it is not a duct tape solution. And that also allows you to do things like concurrently running the same application, different versions of the same class, if you need to do that; and also to give you the ability to do fine-grained redeployment of components of your business applications. So I think the really clear benefit will end up being improved uptime. You won't need to take down your application to replace a few classes in the billing system; you will be able to put in that subsystem at runtime. I think there is a lot of ease-of-use challenges that need to be solved to get there. So let me be perfectly clear that if you go and take Spring and Spring Dynamic Modules and OSGi right now, you're still going to find life fairly complex. I think we need to gradually see more integrated products that pull all of this together into a single solution. But definitely I think the technology capabilities are there, and that OSGi will solve some problems like versioning and maintainability that Java EE has not done terribly well. 
I think in the future we'll see benefits of isolation, so for example integration between AspectJ load-time weaving and OSGi. So once you have a well-defined deployment model in terms of a bundle, which is independent of any particular environment like a Java EE server, you can start to do some really interesting things with load-time weaving such as automatic policy enforcement. I think it raises some very, very interesting possibilities. 8. So going back to the discussion of acquisitions, SpringSource has recently made its own acquisition with Covalent. Can you tell us a little more about that? Yes. For those who may not be familiar with Covalent, they are the leading provider of support and services for Tomcat and for the Apache web server. So when you see the fact that, as I mentioned, the combination of Spring and Tomcat is really the most popular platform now for enterprise Java deployment, it really made a lot of sense for us to bring together the ability to provide the highest quality services for that combination. Some other factors were that there are a lot of shared values between the two companies. So obviously I have been pretty vocal in the past about my very strong belief that you can only provide high-quality support for open source projects if you are actively involved in contributing to those projects and contributing intellectual property. Covalent have exactly the same attitude. So for example the Covalent engineers include the most active committer on Tomcat; and also Bill Rowe, who wrote very large parts of Apache 2 and continues to be incredibly involved in Apache development. So it has actually been a very smooth acquisition in terms of shared attitudes and people who have a lot of mutual respect.
Finally it was a pretty natural step in terms of our ambitions to be the leading company in enterprise Java open source, and it was kind of a no-brainer to have an opportunity to bring together support for the components that seem to be becoming a de facto standard for enterprise Java. 9. Speaking a little bit more about the future, what's in store for the Spring Framework 3.0? The Spring Framework 3.0 will continue the substantial enhancements that were already introduced in Spring MVC in 2.5. Probably the biggest change from the end-user perspective in 2.5 was pretty extensive use of annotations, so rolling out the ability to use annotations across both the Spring IoC container and Spring MVC. In Spring 3.0 we'll be furthering that with hopefully a convergence in the programming model between MVC and Web Flow. So we are looking toward providing a single programming model that scales through web MVC classic request-response navigation through to the directed navigation that Spring Web Flow provides, with a consistent programming model. We are also looking at introducing support for REST, so for example processing in Spring MVC RESTful URLs, and we are also likely to be dropping support for Java versions before Java 5. I mean that won't produce a… We already have extensive support for Java 5 and indeed Java 6, so that will only have a modest impact, but it certainly will make it easier for us, to maintain a codebase that is purely Java 5 and above. 10. One last question that I have is, are there any new projects planned for the Spring portfolio? The most recent addition was Spring Integration which was announced in December at The Spring Experience. This provides a model for enterprise integration built of course on the Spring component model. It also has what we believe to be a fairly innovative programming model that extensively uses annotations to very concisely handle patterns like aggregation, transformation and routing. 
We don't have any more new projects that we expect to add to the Spring portfolio in the near term, but as you have seen with a number of projects that are going to go in final releases in the next couple of months, we have been extremely active across the Spring portfolio overall. SpringSource will certainly continue to announce new products and certainly by the JavaOne time scale we will have some pretty significant new products that we'll be demoing.
https://www.infoq.com/interviews/johnson-spring-osgi-tomcat
I've been testing out jQuery plugins and CakePHP plugins, and it gets really tedious adding the required libraries to the <head> tag individually. I'm gonna make a bootstrapper that loads it all automatically, but what happens if there are functions and classes with the same name? They'll end up overriding each other. Is there a bootstrapper app available which will check for conflicting function names and stuff like that? What about using namespacing? I know how to namespace PHP classes, but I'm talking about loading .js and .css files. I don't know if there's a way to wrap those in namespaces without modifying the code of each file.

RequireJS maybe? Understanding RequireJS for Effective JavaScript Module Loading
http://community.sitepoint.com/t/way-to-bootstrap-all-your-js-and-css-files/38613
I'm looking to take the default ID key from the Django model, turn it into hexadecimal, and display it on a page when the user submits the post. I've tried several methods with no success; can anyone point me in the right direction?

views.py

    def post_new(request):
        if request.method == "POST":
            form = PostForm(request.POST)
            if form.is_valid():
                post = form.save(commit=False)
                post.published_date = timezone.now()
                post.save()
                return redirect('post_detail', pk=post.pk)
        else:
            form = PostForm()
        return render(request, 'books_log/post_edit.html', {'form': form})

Python's hex function is all you need here, but the problem is you can't call it directly from your template. So the solution is to add a method to your model:

    class MyModel(models.Model):
        def to_hex(self):
            return hex(self.pk)

Then in your template (note that Django templates call argument-less methods without parentheses):

    {{ my_object.to_hex }}
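A minimal plain-Python sketch of what the method above renders; no Django is required here, and FakePost is a hypothetical stand-in for a saved model instance whose auto-generated primary key is `pk`:

```python
# FakePost is a hypothetical stand-in for a saved Django model instance;
# hex() formats the integer primary key as a hexadecimal string.
class FakePost:
    def __init__(self, pk):
        self.pk = pk

    def to_hex(self):
        # same body as the model method in the answer above
        return hex(self.pk)

print(FakePost(255).to_hex())  # 0xff
```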
https://codedump.io/share/kvcJ6Dxwqj8q/1/python-django-id-to-hex
Robot Game Testing Kit

Project description

Please see this link for the instructions to the game. Here are some excellent tools made by fellow players! Robot Game was originally started by Brandon Hsiao.

Package Installation

pip

The easiest way to install the kit is with pip. From the terminal, run:

    pip install rgkit

Or if you want the development version:

    pip install git+

Note: This will install rgkit system-wide. It is recommended to use virtualenv to manage development environments.

virtualenv

Installing with virtualenv requires the following steps:

    mkdir my_robot
    cd my_robot
    virtualenv env
    source env/bin/activate
    pip install rgkit

setup.py

You can also manually install directly from the source folder. Make a local copy of the git repository or download the source files. Then, using the terminal, run the following from the root directory of the source code:

    python setup.py install

Note: This will install rgkit system-wide. It is recommended to use virtualenv to manage development environments.

Running the game

After installing the package, the script is executable from the command line (if you're using virtualenv, remember to activate the environment). There are two entry points provided: rgrun and rgmap. The general usage of rgrun is:

    usage: rgrun [-h] [-m MAP] [-c COUNT] [-A] [-q] [-H | -T | -C]
                 [--game-seed GAME_SEED]
                 [--match-seeds [MATCH_SEEDS [MATCH_SEEDS ...]]] [-r]
                 player1 player2

    Robot game execution script.

    positional arguments:
      player1               File containing first robot class definition.
      player2               File containing second robot class definition.

    optional arguments:
      -h, --help            show this help message and exit
      -m MAP, --map MAP     User-specified map file.
      -c COUNT, --count COUNT
                            Game count, default: 1, multithreading if >1
      -A, --animate         Enable animations in rendering.
      -q, --quiet           Quiet execution.
                            -q  : suppresses bot stdout
                            -qq : suppresses bot stdout and stderr
                            -qqq: suppresses all rgkit and bot output
      -H, --headless        Disable rendering game output.
      -T, --play-in-thread  Separate GUI thread from robot move calculations.
      -C, --curses          Display game in command line using curses.
      --game-seed GAME_SEED
                            Appended with game count for per-match seeds.
      --match-seeds [MATCH_SEEDS [MATCH_SEEDS ...]]
                            Used for random seed of the first matches in order.
      -r, --random          Bots spawn randomly instead of symmetrically.

So, from a directory containing your_robot.py, you can run a game against the default robot and suppress GUI output with the following command:

    rgrun -H your_robot.py defaultrobots.py

Developing in the source directory

rgkit is packaged as a module, so you can just check out the module directory and import/run as usual.

    ./rgkit
    |--- rgkit
    |    |--- __init__.py
    |    |--- game.py
    |    |--- run.py
    |    |--- ...
    |    |--- your_robot.py
    |--- setup.py
    ...
    /path/your_other_robot.py

Running the game

To run the game with the source configured this way, use the terminal and execute the following from the inner rgkit folder (i.e., in the same directory as run.py):

    python run.py your_robot.py /path/your_other_robot.py

Importing

Once installed, you should only need the rg module (which is itself optional) to develop your own robots. The package can be imported like any other module:

    import rg

    class Robot:
        def act(self):
            return ['guard']
https://pypi.org/project/rgkit/0.4.1/
How to Reduce Next.js Bundle Size

How we analyzed and reduced 26.5% of the js payload of an e-commerce website built with Reactjs, Webpack & Next.js

At NE Digital, we are continuously working to provide a faster and smoother user experience irrespective of the internet connection or the device type. In order to achieve that, shipping a smaller JavaScript payload is one of our key focus areas.

"Byte-for-byte, JavaScript is still the most expensive resource we send to mobile phones, because it can delay interactivity in large ways." (Addy Osmani)

We are using WebPageTest to do our performance benchmark on the homepage. With the device type as MotoG4 and the connection type as 3GFast, we observed the following results. From the above image, we can observe that around 76.4% of processing time is consumed by scripting, and among that around 35.5% of the time is taken by EvaluateScript alone. If we can reduce the amount of js we ship, those smaller scripts can be downloaded faster, and it will reduce CPU execution time. As a result, the main thread will be able to respond to user interaction much earlier, providing a much better user experience.

Tools to Analyze Javascript Bundles 🛠

Let's open our toolbox and see a few of the tools we can use to analyze js bundles and find scope for optimization.

1) Webpack Bundle Analyzer

Webpack Bundle Analyzer is a popular tool to analyze js bundles; here are a few of the key use cases:

- Analyze which components and libraries are part of a bundle.
- Discover if some library got included multiple times.
- Check if a library is showing up unexpectedly in a bundle.
- Check if the tree shaking for a specific dependency library is working properly or not.

Here is a sample interactive treemap visualization generated using Webpack Bundle Analyzer. We can easily add Webpack Bundle Analyzer to our next.config.js using the below code.
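A minimal sketch of that next.config.js wiring, assuming the official @next/bundle-analyzer plugin (the exact config used in the article may differ):

```javascript
// next.config.js (sketch, assuming @next/bundle-analyzer is installed)
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  // only run the analyzer when the ANALYZE env variable is set to 'true'
  enabled: process.env.ANALYZE === 'true',
})

module.exports = withBundleAnalyzer({
  // ...the rest of your existing Next.js config goes here
})
```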
After that, we can generate the visualization using the below command:

    ANALYZE=true npm run build

The ANALYZE flag ensures that Webpack Bundle Analyzer runs only for those builds where the flag is set to true.

2) Source Map Explorer

Source-map-explorer is a useful tool for analyzing each bundle and finding any bloat in it. Once the npm package is installed, we can run the below command to generate a treemap visualization as shown in the below image.

    source-map-explorer bundle.min.js bundle.min.js.map

3) Bundle Wizard

Bundle Wizard is a key tool to get an overview of all bundles loaded for a specific page. The best part about bundle-wizard is that it color-codes coverage for all components.

    npx bundle-wizard

Running this command will generate a color-coded visualization similar to the one shown below. From this, we can easily analyze which component/library in a bundle has less coverage and needs further investigation for optimizing the bundle.

4) Chrome DevTools Coverage

We can type CTRL+SHIFT+P on Windows or CMD+SHIFT+P on Mac inside Chrome DevTools and then select "Show Coverage" to open the coverage drawer. Once we click on the reload button, it reloads all files and shows coverage for all js files. We can filter by URL to show coverage for first-party js files. Once we click on a file inside the coverage drawer, it will be opened in the Sources panel, and line numbers highlighted in red indicate the parts of the code in a js bundle that are not executed. Below we can see how to discover unused code using the Chrome DevTools coverage drawer.
But the adoption of these new features takes time across different browsers. Polyfills let us write the latest js functionality without waiting for it to be available natively in all browsers.

"A polyfill is a piece of code (usually JavaScript on the Web) used to provide modern functionality on older browsers that do not natively support it." (MDN)

a) Remove explicit core-js import

Our application supports IE11 along with other commonly used desktop and mobile browsers. So even though we write code using newer features of js, we still need to provide polyfills for older browsers. To resolve this, we had earlier explicitly imported core-js in our code as shown below.

    import 'core-js'

Because of this, polyfills were being served even to the browsers that already supported the given js features, and therefore those browsers were unnecessarily getting huge amounts of polyfills. From the below Webpack Bundle Analyzer visualization, we can see that a roughly 30.83kb gzipped js footprint was added by the explicit core-js import. To provide only the necessary polyfills to browsers, Next.js introduced the inbuilt module/nomodule pattern in v9.1. The module/nomodule pattern provides a reliable mechanism for serving modern JavaScript to modern browsers while allowing older browsers to fall back to polyfilled ES5 code. After migrating to Next.js v9.1, we took advantage of the inbuilt module/nomodule pattern and removed our explicit import of core-js without facing any issues in IE 11 or other older browsers.

b) Use feature detection to load polyfills

core-js supports ECMAScript and closely related ECMAScript features. There are some browser-only features, such as IntersectionObserver, that core-js currently does not support. For those cases, we use feature detection and load polyfills only in the browsers that don't support the feature. Below we can see how to load the intersection-observer polyfill using feature detection.
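A sketch of that feature detection, assuming the npm intersection-observer polyfill package; the helper name and the split into a small testable predicate are illustrative, not from the original:

```javascript
// Returns true when the polyfill is needed: a real window object exists
// (i.e. we are in a browser, not server-side rendering) but it lacks
// IntersectionObserver support.
function needsIntersectionObserverPolyfill(win) {
  return typeof win !== 'undefined' && !('IntersectionObserver' in win)
}

// In the app, the polyfill chunk would then be loaded on demand, e.g.:
// if (needsIntersectionObserverPolyfill(window)) {
//   await import('intersection-observer')
// }
```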
2) Use dynamic import

Next.js comes preconfigured with a great code-splitting strategy, and it has been further improved since v9.2. It already does route-based code splitting, so when we load a page, only the js chunks specific to that page are loaded. On top of that, Next.js also provides next/dynamic to let us implement component-based code splitting. For our website, we have lots of popups that get triggered by user action. We decided to use next/dynamic to load all such popups on demand to remove them from the initial payload. Below we can see that after clicking on the search input box, the search suggestion chunk is dynamically loaded and the search suggestion popup is shown to users. We also have some components that only load when some specific condition is fulfilled, and we decided to use next/dynamic for those cases too. Some sample code for this pattern can be found in the Next.js Github repo examples. Milica Mihajlija has written an excellent in-depth article on this topic which I highly recommend reading.

3) Remove duplicate packages

As our project grows and we keep adding npm packages, there is a possibility of several different versions of the same package getting included in our Webpack bundles. To find out if this is happening in our project, we are going to use duplicate-package-checker-webpack-plugin. Next.js makes it very easy and straightforward to add custom Webpack config. After setup, once we generate a build, we can see a result similar to the one below. Here we have the fast-deep-equal package included twice in our bundle, since we are directly using version 2.0.1 of it and react-image-magnify is also using v1.0.0 of it as a sub-dependency. There are several ways to solve this issue; depending on our tool (npm or yarn) and the specific library, we can take different approaches. Below we are going to use the resolve.alias feature of Webpack to resolve the duplicate fast-deep-equal npm package issue in the bundle.
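A sketch of that de-duplication via webpack's resolve.alias in next.config.js; the path assumes fast-deep-equal sits in the project's own node_modules (adjust for your layout):

```javascript
// next.config.js (sketch): force every import of fast-deep-equal,
// including the one inside react-image-magnify, to resolve to the
// single top-level copy.
const path = require('path')

module.exports = {
  webpack(config) {
    config.resolve.alias['fast-deep-equal'] = path.resolve(
      __dirname,
      'node_modules/fast-deep-equal'
    )
    return config
  },
}
```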
Word of caution ⏰ ⏰ Some libraries introduce breaking changes between major versions, so before resolving to a higher version, we should check whether any feature breaks.

4) Load library based on user interaction

There are a few scenarios where we can avoid loading an npm package at initial load time and trigger it on demand based on user interaction. Let's see an example of this pattern. On our website, we have a feature called ScrollToTop where users can click a specific icon to scroll to the top of the page. We are using the react-scroll package to implement this feature, but unless the user clicks on the icon, we don't need to load the library upfront. So in this case, when the user clicks on the ScrollToTop icon, we import the react-scroll library inside our handleScrollToTop function, and then it scrolls the user to the top of the page. From the below image it can be seen that when the user clicks on the ScrollToTop icon, the additional js chunk is loaded on demand, and the user is scrolled to the top of the page.

5) Webpack Specific Optimization

A few libraries can be optimized using different webpack plugins. We have found many useful Webpack-specific optimizations in this GitHub repo maintained by Ivan Akulov. One of the great things about the repo is that it clearly mentions which changes are safe to make and which ones we need to be cautious about. For libraries like lodash and date-fns, we can easily reduce bundle size by importing only the specific functions we need instead of doing a full import.

6) Remove inline big images from js chunks

We are using next-optimized-images for the build-time optimization of our images. While analyzing our bundles via webpack-bundle-analyzer, we discovered many images were getting inlined 😭😭 in our js chunks, increasing the overall bundle size.
Upon further investigation, we found that next-optimized-images has a config option inlineImageLimit with a default value of 8192 bytes, making all images below this size inlined as a data-uri by url-loader. For our use case, we wanted to disable this image inlining feature, so we set inlineImageLimit to -1. After making this change, we reran webpack-bundle-analyzer and verified that images are not getting inlined anymore.

Let's See The Results 🎊🎊

After implementing the patterns mentioned above, these are the significant changes we noticed in our frontend performance dashboard. The below results are based on data collected from the PageSpeed Insights API. Our overall js size (brotli compressed) reduced from 923 kb to 831 kb (~10% reduction), and our first-party js size reduced from 347 kb to 255 kb (~26.5% reduction). Along with the js size reduction, we also noticed improvement across different key performance metrics:

- Performance metrics for the desktop homepage before changes.
- Performance metrics for the desktop homepage after changes.
- Performance metrics for the mobile-web homepage before changes.
- Performance metrics for the mobile-web homepage after changes.

Please read the below article (a little self-promotion 🙈🙈) to find out how to set up a similar frontend performance dashboard: "Build Frontend Performance Monitor Dashboard Using PageSpeed Insights" (how we built our frontend performance monitor dashboard using PageSpeed, Apps Script, Google Sheets & Data Studio) on medium.com.

Bonus Tip

These are the tools we often use while adding a new npm package or finding out how to import a package optimally in our code.

Bundle Phobia

Whenever we need to add an npm package to our project, we always check the import cost of that package using Bundle Phobia. It gives several key details about the package, such as the minified and gzipped bundle size, download time, and the composition of the package.
A cool feature of Bundle Phobia is that it also recommends several similar packages with a lower footprint. The image below shows several similar package suggestions for momentjs. Import Cost Extension This editor extension is available for both Sublime Text and VSCode. We found this extension useful since it shows the import cost upfront at the time of adding an import statement in our code. This helps us prevent accidental full imports and avoid imports with a larger footprint. Below we can see an example where the import cost extension shows that if we import only the specific function of the date-fns library instead of the whole package, we can reduce the js payload. Final Thoughts Many of the patterns shared above can also be applied to sites built with Angularjs or Vuejs. We are continuously monitoring and looking for ways to reduce our javascript payload even further. In the next article, I will share how we improved the LCP of the homepage of our website. That's all. If you liked this article, please give it a clap and follow me on Medium or Twitter for more updates on similar topics.
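The load-on-interaction pattern from section 4 can be sketched in plain JavaScript. This is an illustrative stand-in, not the site's actual handler: a Node builtin (`node:path`) takes the place of react-scroll so the snippet runs anywhere without the package installed; in the real handler the argument to `import()` would be `'react-scroll'`, and a bundler would split it into its own chunk.

```javascript
// Sketch of loading a library on demand instead of at initial page load.
// 'node:path' stands in for react-scroll so this runs without extra packages.
async function handleScrollToTop() {
  // The module is only fetched the first time the handler fires;
  // in the real app this would be: await import('react-scroll')
  const mod = await import('node:path');
  // Real app would then call e.g. mod.animateScroll.scrollToTop().
  return typeof mod.join === 'function';
}

handleScrollToTop().then((loaded) => {
  console.log('module loaded on demand:', loaded);
});
```

Because the `import()` call sits inside the handler, nothing is downloaded until the user actually clicks.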
https://medium.com/ne-digital/how-to-reduce-next-js-bundle-size-68f7ac70c375
So I think many of the more inexperienced programmers will be roaming around with this question. I can pass structs to a function successfully like the snippet below: void printMonster(MonsterStats slime, MonsterStats spider, MonsterStats orc, MonsterStats ogre, MonsterStats dragon, int charLevel) { /* ... */ } What you tend to call "members" (e.g. ogre, spider, etc...) are in fact instances of the MonsterStats structure, i.e. variables. In C++, the "members" are the variables or functions that are inside the structure (e.g. name, health, level, etc...). Of course, using many instances like in your snippet, and retyping the same things over and over, would be a nightmare. Fortunately, in chapter 5 you'll see loops, and in chapter 6 you'll learn about arrays and vectors to handle a bundle of instances. This would look like: #include <iostream> #include <string> #include <vector> struct MonsterStats { ... }; ... int main() { using namespace std; vector<MonsterStats> monsters { {"Ogre","Grumble",345, 16}, {"Dragon","Spyro", 890, 21}, {"Orc","Rakanishu", 165, 11}, {"Giant Spider", "Arachnid", 80, 6}, {"Slime", "Blurp", 35, 1} }; ... printMonster(monsters, charlevel); ... } And you would then access the i-th item with monsters[i].
https://codedump.io/share/tVcnatjLKDDU/1/passing-struct-to-function-without-typing-every-member-c-no-pointers
If you're using ReactJS and accessing some API endpoints, you may have come across environment variables. In this tutorial, I will take you through how to use them. Assumption: You're familiar with Create React App and you are using it to create your React application. Why You Need Environment Variables To customize variables based on your environment, such as whether it is production, development, or staging, etc. To store sensitive information like API keys and passwords, which should never be pushed to version control. Create React App supports custom environment variables without installing other packages. Two ways of adding environment variables - Using the .env file (this is the approach we are going to use). - Through the shell (temporary, lasts only as long as the shell session, and varies depending on the OS type). Defining environment variables using .env Step 1: In your application's root folder, create a text file called .env. Your working directory will look like this: my-react-app/ |-node_modules/ |-src/ |-public/ |-.env |-.gitignore |-package.json |-package-lock.json |-README.md Step 2: Create your custom variables. Create React App (CRA) enforces the prefix REACT_APP on every custom variable. Note that variables without the prefix REACT_APP are ignored during bundling. If you want to know more about how CRA does this, check the documentation here. Variables should look like this: REACT_APP_CLIENT_ID=12345 REACT_APP_KEY=aaddddawrfffvvvvssaa Step 3: Access them in your React app. You can print them and also assign them to other variables, but they are read-only in your application (meaning their values can't be changed).
import React from 'react'; function App() { console.log(process.env.REACT_APP_CLIENT_ID); // printing it to the console console.log(process.env.REACT_APP_KEY); // printing it to the console return ( <div className="app"> <p>{process.env.REACT_APP_KEY}</p> {/* printing it to the screen */} </div> ); } export default App; Step 4: There is a built-in environment variable called NODE_ENV. You can access it from process.env.NODE_ENV. This variable changes based on what mode you are currently in. When you run npm start it is equal to "development", when you run npm test it is equal to "test", and when you run npm run build it is equal to "production". More on its use cases can be found in this great tutorial. Step 5: Open the .gitignore file. I like to put .env and other environment variables under #misc as seen below. # dependencies /node_modules # testing /coverage # production /build # misc .DS_Store .env #<--------Put the custom env files here .env.local .env.development.local .env.test.local .env.production.local npm-debug.log* yarn-debug.log* yarn-error.log* Why Isn't It Working Even After Following These Steps 🤔? - Make sure you used the prefix REACT_APP on every variable. - Confirm that the variable names in the .env file match the ones in your js file. For example, REACT_APP_KEY in .env versus process.env.REACT_APP_KEY. - If the development server was running, stop it, then rerun it using npm start. I really struggled with this ("variable is undefined" error). - Every time you update the .env file, you need to stop the server and rerun it, as the environment variables are only read during build ("variable is undefined" error). - Remove quotation marks from the values of the variables. The official documentation for using environment variables in Create React App can be found here. Discussion (7) If we don't push it to version control, then do we need to set those env variables on the deployment server in PROD? well, it varies with your production environment.
You may consider reading this article. dev.to/rajatetc/configure-environm... Let me know if you need any help with your deployment. nope, but it helps when testing, as you can just switch to the local env variables instead of deleting and hard-coding them. Plus it's good practice, because it's easy to forget and push these changes, and almost lose all your new changes after panicking and running some git script you found on Stack Overflow to remove the keys from your public repo. Thanks A Lot! you're welcome. Thanks a lot! One question: if I set an env variable as an integer value, will it automatically be converted to a string when I access it using process.env.REACT_APP_SOME_VARIABLE? Yes, everything from the environment variables is cast to the string data type.
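The NODE_ENV behaviour described in Step 4 is often used to pick configuration per environment. A minimal sketch (the URLs and the function name are illustrative, not from the tutorial):

```javascript
// Choose an API base URL from the build mode. CRA sets NODE_ENV to
// "development" (npm start), "test" (npm test) or "production" (npm run build).
function apiBase(nodeEnv) {
  return nodeEnv === 'production'
    ? 'https://api.example.com'   // live endpoint (illustrative)
    : 'http://localhost:3000';    // local dev server (illustrative)
}

// In a component you would call: apiBase(process.env.NODE_ENV)
console.log(apiBase('development'));
```

Keeping the switch in one small function avoids scattering `process.env` checks through the codebase.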
https://practicaldev-herokuapp-com.global.ssl.fastly.net/felixdusengimana/environment-variables-in-reactjs-139l
C++ Programming Sum of natural numbers in C++ We will take a number as input from the user and then determine the sum of the natural numbers less than or equal to that number. Suppose a user gives the number 7. Then we will determine the sum of 1, 2, 3, 4, 5, 6 and 7. We can write the program to determine the sum in several ways. We will see some programs related to determining the sum of natural numbers. Sum of natural numbers using the formula We can easily make a program to find the sum of natural numbers using the formula below: sum = n × (n + 1) / 2 Let's see the C++ code for determining the sum of numbers using this formula // C++ program to find sum of natural numbers #include <iostream> using namespace std; int main(){ int num, result; cout <<"Enter an integer here : "; cin >>num; result = (num * (num + 1)) / 2; cout <<"\nSum is = " << result <<endl; return 0; } Output of the sum of natural numbers program Sum of natural numbers using a loop The program below will calculate the sum of natural numbers using a for loop. You can use another loop, where the logic may be the same or different if you have your own algorithm. Now, let's see the program. // sum of natural numbers using for loop #include <iostream> using namespace std; int main(){ int c, num, result = 0; cout <<"Enter the number here : "; cin >>num; for(c = 1; c <= num; ++c){ result = result + c; } cout <<"\nSum is = " <<result <<endl; return 0; } Output of this program: Enter the number here : 30 Sum is = 465
https://worldtechjournal.com/cpp-programming/cpp-sum-of-natural-numbers/
Installing Jupyter notebooks on a Centos 7 server Tweet The server The place that I work recently launched a pilot Infrastructure as a Service, uh-iaas, which is based on OpenStack. To try this out I wanted to set up an instance of Jupyter notebook. I will not document the server setup, but I used a plain Centos 7 image from the provided GOLDEN images in the iaas. This will work on pretty much any standard Centos 7 installation. The packages These are the packages we are going to need. epel-release is needed for certbot, and bzip2 is needed by the Anaconda installer we are going to download later. # yum update # yum install epel-release # yum install bzip2 # yum install certbot Read more about EPEL here: Jupyter stuff We need a non-root user; this is good practice. # useradd jupyter For this guide we also grant the jupyter user full sudo without a password. You should consider whether you want to do this or not. # echo 'jupyter ALL = (ALL) NOPASSWD: ALL' > /etc/sudoers.d/20-jupyter Letsencrypt and certbot We are going to enable password-based authentication on our Jupyter notebook installation. To protect the password from prying eyes we need to encrypt our traffic. SSL certificates are provided to you for free, and with automatic renewal, by the great Let's Encrypt project. # certbot certonly --standalone -d redbook.yourdomain.no -d # certbot renew --dry-run You now have a valid certificate, and you have tested that it is renewable. Let's add it to our crontab. # crontab -e Put the following line into the crontab config and save it.
0 5,11 * * * certbot renew --quiet Read more about certbot here: Read more about letsencrypt here: Read more about crontab here: The Jupyter configuration … now it's time to switch to the jupyter user # su jupyter … this notebook folder will be the home for everything that you create in the notebook $ mkdir ~/notebooks … download the anaconda installer, install anaconda and generate a default config file $ curl -o anaconda.sh $ bash anaconda.sh $ jupyter notebook --generate-config Now we need to generate the login password for our Jupyter notebook installation. $ python … enter the following code in the python shell import notebook.auth.security as ne ne.passwd() Enter password: Verify password: Out[2]: 'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed' Exit the python shell and open the Jupyter notebook configuration file. $ vi ~/.jupyter/jupyter_notebook_config.py … find the following setting and insert your generated sha1 value as shown below. Remember to uncomment the c.NotebookApp.password setting too. c.NotebookApp.password = 'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed' And here is some more configuration that might be a good starting point. # Set options for certfile and keyfile c.NotebookApp.certfile = u'/etc/letsencrypt/live/redbook.domain.no/fullchain.pem' c.NotebookApp.keyfile = u'/etc/letsencrypt/live/redbook.domain.no/privkey.pem' # Set ip to '*' to bind on all interfaces (ips) for the public server c.NotebookApp.ip = '*' # It is a good idea to set a known, fixed port for server access c.NotebookApp.port = 8888 # The password that provides access to the notebook c.NotebookApp.password = u'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed' # There is no point opening a browser on localhost when running on a server like ours c.NotebookApp.open_browser = False # Set the directory that we want to serve notebooks from c.NotebookApp.notebook_dir = 'notebooks' Start your server and start playing with your new Jupyter notebooks.
$ sudo ~/anaconda3/bin/jupyter notebook --config ~/.jupyter/jupyter_notebook_config.py Point your web browser at the server on port 8888 and log in with the password you created in the python shell.
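One common follow-up, not covered in the original guide, is to run the notebook as a systemd service instead of a foreground shell. This is only a minimal sketch: the paths assume the Anaconda install location used above, and since the guide starts the server via sudo so it can read the Let's Encrypt keys, this unit runs as root too.

```
# /etc/systemd/system/jupyter.service -- minimal sketch, adjust paths to your setup
[Unit]
Description=Jupyter notebook server
After=network.target

[Service]
# Runs as root, like the sudo invocation above, so the letsencrypt keys are readable.
ExecStart=/home/jupyter/anaconda3/bin/jupyter notebook --config=/home/jupyter/.jupyter/jupyter_notebook_config.py --allow-root
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload` followed by `systemctl enable --now jupyter`.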
https://visualisere.no/installing-jupyter-notebooks-on-a-centos-7-server.html
Opened 3 years ago Closed 3 years ago #19100 closed Bug (invalid) @cache_page argument parsing is misleading Description Yesterday we moved: @cache_page(max_age=60*30) Which was working fine. To: @cache_page(max_age=60*30, cache="alternative_cache") Which triggered the following error: The only keyword arguments are cache and key_prefix Which is not true since @cache_page(max_age=60*30) worked fine before. Of course, after fiddling a bit, we realized that: @cache_page(60*30, cache="alternative_cache") Worked. And we were a bit confused. The source code of cache_page shows this: def cache_page(*args, **kwargs): ... cache_alias = kwargs.pop('cache', None) key_prefix = kwargs.pop('key_prefix', None) assert not kwargs, "The only keyword arguments are cache and key_prefix" I understand this is for legacy reasons, as I can see the deprecation warning below these lines: def warn(): import warnings warnings.warn('The cache_page decorator must be called like: ' 'cache_page(timeout, [cache=cache name], [key_prefix=key prefix]). ' 'All other ways are deprecated.', PendingDeprecationWarning, stacklevel=3) But then it should accept max_age as a keyword argument and raise the deprecation warning OR not accept it at all and have an explicit mandatory first positional argument being max_age. But not one behavior in one case, and another in the other case. Change History (1) comment:1 Changed 3 years ago by lukeplant - Needs documentation unset - Needs tests unset - Patch needs improvement unset - Resolution set to invalid - Status changed from new to closed I cannot reproduce this. @cache_page(max_age=60*30) throws an exception for me on 1.4 and master, which matches the documented warning and our intention. The docs are also clear on this.
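The behaviour the maintainer describes can be reproduced with a small standalone sketch of the argument parsing quoted above (simplified and not the actual Django source; the returned tuple is illustrative):

```python
def cache_page(*args, **kwargs):
    """Simplified model of the decorator's argument parsing from the ticket."""
    cache_alias = kwargs.pop('cache', None)
    key_prefix = kwargs.pop('key_prefix', None)
    # Any keyword argument other than cache/key_prefix (including max_age)
    # is left in kwargs and trips this assertion.
    assert not kwargs, "The only keyword arguments are cache and key_prefix"
    timeout = args[0] if args else None
    return (timeout, cache_alias, key_prefix)

# The positional timeout works, with or without the optional keywords:
print(cache_page(60 * 30))
print(cache_page(60 * 30, cache="alternative_cache"))

# Passing the timeout as max_age= fails, matching the maintainer's note
# that the keyword form throws an exception on 1.4 and master:
try:
    cache_page(max_age=60 * 30)
except AssertionError as exc:
    print(exc)
```

Under this model `cache_page(max_age=60*30)` can never have "worked fine", which is consistent with the ticket being closed as invalid.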
https://code.djangoproject.com/ticket/19100
Tutorial How To Automate Deployments to DigitalOcean Kubernetes with CircleCI The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program. Introduction Having an automated deployment process is a requirement for a scalable and resilient application, and GitOps, or Git-based DevOps, has rapidly become a popular method of organizing CI/CD with a Git repository as a “single source of truth.” Tools like CircleCI integrate with your GitHub repository, allowing you to test and deploy your code automatically every time you make a change to your repository. When this kind of CI/CD is combined with the flexibility of Kubernetes infrastructure, you can build an application that scales easily with changing demand. In this article you will use CircleCI to deploy a sample application to a DigitalOcean Kubernetes (DOKS) cluster. After reading this tutorial, you’ll be able to apply these same techniques to deploy other CI/CD tools that are buildable as Docker images. Prerequisites To follow this tutorial, you’ll need to have: A DigitalOcean account, which you can set up by following the Sign up for a DigitalOcean Account documentation. Docker installed on your workstation, and knowledge of how to build, remove, and run Docker images. You can install Docker on Ubuntu 18.04 by following the tutorial on How To Install and Use Docker on Ubuntu 18.04. Knowledge of how Kubernetes works and how to create deployments and services on it. It’s highly recommended to read the Introduction to Kubernetes article. The kubectl command line interface tool installed on the computer from which you will control your cluster. An account on Docker Hub to be used to store your sample application image. A GitHub account and knowledge of Git basics. You can follow the tutorial series Introduction to Git: Installation, Usage, and Branches and How To Create a Pull Request on GitHub to build this knowledge.
For this tutorial, you will use Kubernetes version 1.13.5 and kubectl version 1.10.7. Step 1 — Creating Your DigitalOcean Kubernetes Cluster Note: You can skip this section if you already have a running DigitalOcean Kubernetes cluster. In this first step, you will create the DigitalOcean Kubernetes (DOKS) cluster from which you will deploy your sample application. The kubectl commands executed from your local machine will change or retrieve information directly from the Kubernetes cluster. Go to the Kubernetes page on your DigitalOcean account. Click Create a Kubernetes cluster, or click the green Create button at the top right of the page and select Clusters from the dropdown menu. The next page is where you are going to specify the details of your cluster. On Select a Kubernetes version pick version 1.13.5-do.0. If this one is not available, choose a higher one. For Choose a datacenter region, choose the region closest to you. This tutorial will use San Francisco - 2. You then have the option to build your Node pool(s). On Kubernetes, a node is a worker machine, which contains the services necessary to run pods. On DigitalOcean, each node is a Droplet. Your node pool will consist of a single Standard node. Select the 2GB/1vCPU configuration and change to 1 Node on the number of nodes. You can add extra tags if you want; this can be useful if you plan to use DigitalOcean API or just to better organize your node pools. On Choose a name, for this tutorial, use kubernetes-deployment-tutorial. This will make it easier to follow throughout while reading the next sections. Finally, click the green Create Cluster button to create your cluster. After cluster creation, there will be a button on the UI to download a configuration file called Download Config File. This is the file you will be using to authenticate the kubectl commands you are going to run against your cluster. Download it to your kubectl machine. 
The default way to use that file is to always pass the --kubeconfig flag and the path to it on all commands you run with kubectl. For example, if you downloaded the config file to Desktop, you would run the kubectl get pods command like this: - kubectl --kubeconfig ~/Desktop/kubernetes-deployment-tutorial-kubeconfig.yaml get pods This would yield the following output: OutputNo resources found. This means you accessed your cluster. The No resources found. message is correct, since you don’t have any pods on your cluster. If you are not maintaining any other Kubernetes clusters you can copy the kubeconfig file to a folder on your home directory called .kube. Create that directory in case it does not exist: - mkdir -p ~/.kube Then copy the config file into the newly created .kube directory and rename it config: - cp current_kubernetes-deployment-tutorial-kubeconfig.yaml_file_path ~/.kube/config The config file should now have the path ~/.kube/config. This is the file that kubectl reads by default when running any command, so there is no need to pass --kubeconfig anymore. Run the following: - kubectl get pods You will receive the following output: OutputNo resources found. Now access the cluster with the following: - kubectl get nodes You will receive the list of nodes on your cluster. The output will be similar to this: OutputNAME STATUS ROLES AGE VERSION kubernetes-deployment-tutorial-1-7pto Ready <none> 1h v1.13.5 In this tutorial you are going to use the default namespace for all kubectl commands and manifest files, which are files that define the workload and operating parameters of work in Kubernetes. Namespaces are like virtual clusters inside your single physical cluster. You can change to any other namespace you want; just make sure to always pass it using the --namespace flag to kubectl, and/or specifying it on the Kubernetes manifests metadata field. 
They are a great way to organize the deployments of your team and their running environments; read more about them in the official Kubernetes overview on Namespaces. By finishing this step you are now able to run kubectl against your cluster. In the next step, you will create the local Git repository you are going to use to house your sample application. Step 2 — Creating the Local Git Repository You are now going to structure your sample deployment in a local Git repository. You will also create some Kubernetes manifests that will be global to all deployments you are going to do on your cluster. Note: This tutorial has been tested on Ubuntu 18.04, and the individual commands are styled to match this OS. However, most of the commands here can be applied to other Linux distributions with little to no change needed, and commands like kubectl are platform-agnostic. First, create a new Git repository locally that you will push to GitHub later on. Create an empty folder called do-sample-app in your home directory and cd into it: - mkdir ~/do-sample-app - cd ~/do-sample-app Now create a new Git repository in this folder with the following command: - git init . Inside this repository, create an empty folder called kube: - mkdir ~/do-sample-app/kube/ This will be the location where you are going to store the Kubernetes resources manifests related to the sample application that you will deploy to your cluster. Now, create another folder called kube-general, but this time outside of the Git repository you just created. Make it inside your home directory: - mkdir ~/kube-general/ This folder is outside of your Git repository because it will be used to store manifests that are not specific to a single deployment on your cluster, but common to multiple ones. This will allow you to reuse these general manifests for different deployments. 
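To keep the two locations straight, the resulting layout can be sketched like this (a reconstruction from the steps above, not output from the tutorial):

```
~/do-sample-app/    # Git repository for the sample application
├── .git/
└── kube/           # manifests specific to this deployment
~/kube-general/     # shared manifests, kept outside the repository
```

Keeping `kube-general/` outside the repository is what lets the same Service Account and Role manifests be reused by other deployments later.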
With your folders created and the Git repository of your sample application in place, it’s time to arrange the authentication and authorization of your DOKS cluster. Step 3 — Creating a Service Account It’s generally not recommended to use the default admin user to authenticate from other Services into your Kubernetes cluster. If your keys on the external provider got compromised, your whole cluster would become compromised. Instead you are going to use a single Service Account with a specific Role, which is all part of the RBAC Kubernetes authorization model. This authorization model is based on Roles and Resources. You start by creating a Service Account, which is basically a user on your cluster, then you create a Role, in which you specify what resources it has access to on your cluster. Finally, you create a Role Binding, which is used to make the connection between the Role and the Service Account previously created, granting to the Service Account access to all resources the Role has access to. The first Kubernetes resource you are going to create is the Service Account for your CI/CD user, which this tutorial will name cicd. Create the file cicd-service-account.yml inside the ~/kube-general folder, and open it with your favorite text editor: - nano ~/kube-general/cicd-service-account.yml Write the following content on it: apiVersion: v1 kind: ServiceAccount metadata: name: cicd namespace: default This is a YAML file; all Kubernetes resources are represented using one. In this case you are saying this resource is from Kubernetes API version v1 (internally kubectl creates resources by calling Kubernetes HTTP APIs), and it is a ServiceAccount. The metadata field is used to add more information about this resource. In this case, you are giving this ServiceAccount the name cicd, and creating it on the default namespace. 
You can now create this Service Account on your cluster by running kubectl apply, like the following: - kubectl apply -f ~/kube-general/ You will receive output similar to the following: Outputserviceaccount/cicd created To make sure your Service Account is working, try to log in to your cluster using it. To do that you first need to obtain its access token and store it in an environment variable. Every Service Account has an access token which Kubernetes stores as a Secret. You can retrieve this secret using the following command: - TOKEN=$(kubectl get secret $(kubectl get secret | grep cicd-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode) Some explanation on what this command is doing: $(kubectl get secret | grep cicd-token | awk '{print $1}') This is used to retrieve the name of the secret related to our cicd Service Account. kubectl get secret returns the list of secrets on the default namespace, then you use grep to search for the lines related to your cicd Service Account. Then you return the name, since it is the first thing on the single line returned from the grep. kubectl get secret preceding-command -o jsonpath='{.data.token}' | base64 --decode This will retrieve only the secret for your Service Account token. You then access the token field using jsonpath, and pass the result to base64 --decode. This is necessary because the token is stored as a Base64 string. The token itself is a JSON Web Token. You can now try to retrieve your pods with the cicd Service Account. Run the following command, replacing server-from-kubeconfig-file with the server URL that can be found after server: in ~/.kube/config.
This command will give a specific error that you will learn about later in this tutorial: - kubectl --insecure-skip-tls-verify --kubeconfig="/dev/null" --server=server-from-kubeconfig-file --token=$TOKEN get pods --insecure-skip-tls-verify skips the step of verifying the certificate of the server, since you are just testing and do not need to verify this. --kubeconfig="/dev/null" is to make sure kubectl does not read your config file and credentials but instead uses the token provided. The output should be similar to this: OutputError from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:cicd" cannot list resource "pods" in API group "" in the namespace "default" This is an error, but it shows us that the token worked. The error you received is about your Service Account not having the necessary authorization to list the resource secrets, but you were able to access the server itself. If your token had not worked, the error would have been the following one: Outputerror: You must be logged in to the server (Unauthorized) Now that the authentication was a success, the next step is to fix the authorization error for the Service Account. You will do this by creating a role with the necessary permissions and binding it to your Service Account. Step 4 — Creating the Role and the Role Binding Kubernetes has two ways to define roles: using a Role or a ClusterRole resource. The difference between the former and the latter is that the first one applies to a single namespace, while the other is valid for the whole cluster. As you are using a single namespace in this tutorial, you will use a Role. Create the file ~/kube-general/cicd-role.yml and open it with your favorite text editor: - nano ~/kube-general/cicd-role.yml The basic idea is to grant access to do everything related to most Kubernetes resources in the default namespace.
Your Role would look like this: kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cicd namespace: default rules: - apiGroups: ["", "apps", "batch", "extensions"] resources: ["deployments", "services", "replicasets", "pods", "jobs", "cronjobs"] verbs: ["*"] This YAML has some similarities with the one you created previously, but here you are saying this resource is a Role, and it’s from the Kubernetes API rbac.authorization.k8s.io/v1. You are naming your role cicd, and creating it on the same namespace you created your ServiceAccount, the default one. Then you have the rules field, which is a list of resources this role has access to. In Kubernetes resources are defined based on the API group they belong to, the resource kind itself, and what actions you can do on them, which is represented by a verb. Those verbs are similar to the HTTP ones. In our case you are saying that your Role is allowed to do everything, *, on the following resources: deployments, services, replicasets, pods, jobs, and cronjobs. This also applies to those resources belonging to the following API groups: "" (empty string), apps, batch, and extensions. The empty string means the root API group. If you use apiVersion: v1 when creating a resource it means this resource is part of this API group. A Role by itself does nothing; you must also create a RoleBinding, which binds a Role to something, in this case, a ServiceAccount. Create the file ~/kube-general/cicd-role-binding.yml and open it: - nano ~/kube-general/cicd-role-binding.yml
roleRef is the Role you want to bind to something; in this case it is the cicd role you created earlier. subjects is the list of resources you are binding your role to; in this case it’s a single ServiceAccount called cicd. Note: If you had used a ClusterRole, you would have to create a ClusterRoleBinding instead of a RoleBinding. The file would be almost the same. The only difference would be that it would have no namespace field inside the metadata. With those files created you will be able to use kubectl apply again. Create those new resources on your Kubernetes cluster by running the following command: - kubectl apply -f ~/kube-general/ You will receive output similar to the following: Outputrolebinding.rbac.authorization.k8s.io/cicd created role.rbac.authorization.k8s.io/cicd created serviceaccount/cicd created Now, try the command you ran previously: - kubectl --insecure-skip-tls-verify --kubeconfig="/dev/null" --server=server-from-kubeconfig-file --token=$TOKEN get pods Since you have no pods, this will yield the following output: OutputNo resources found. In this step, you gave the Service Account you are going to use on CircleCI the necessary authorization to do meaningful actions on your cluster like listing, creating, and updating resources. Now it’s time to create your sample application. Step 5 — Creating Your Sample Application Note: All commands and files created from now on will start from the folder ~/do-sample-app you created earlier. This is becase you are now creating files specific to the sample application that you are going to deploy to your cluster. The Kubernetes Deployment you are going to create will use the Nginx image as a base, and your application will be a simple static HTML page. This is a great start because it allows you to test if your deployment works by serving a simple HTML directly from Nginx. 
As you will see later on, you can redirect all traffic coming to a local address:port to your deployment on your cluster to test if it’s working. Inside the repository you set up earlier, create a new Dockerfile file and open it with your text editor of choice: - nano ~/do-sample-app/Dockerfile Write the following on it: FROM nginx:1.14 COPY index.html /usr/share/nginx/html/index.html This will tell Docker to build the application container from an nginx image. Now create a new index.html file and open it: - nano ~/do-sample-app/index.html Write the following HTML content: <!DOCTYPE html> <title>DigitalOcean</title> <body> Kubernetes Sample Application </body> This HTML will display a simple message that will let you know if your application is working. You can test if the image is correct by building and then running it. First, build the image with the following command, replacing dockerhub-username with your own Docker Hub username. You must specify your username here so when you push it later on to Docker Hub it will just work: - docker build ~/do-sample-app/ -t dockerhub-username/do-kubernetes-sample-app Now run the image. Use the following command, which starts your image and forwards any local traffic on port 8080 to the port 80 inside the image, the port Nginx listens to by default: - docker run --rm -it -p 8080:80 dockerhub-username/do-kubernetes-sample-app The command prompt will stop being interactive while the command is running. Instead you will see the Nginx access logs. If you open localhost:8080 on any browser it should show an HTML page with the content of ~/do-sample-app/index.html. 
In case you don't have a browser available, you can open a new terminal window and use the following curl command to fetch the HTML from the webpage:

- curl localhost:8080

You will receive the following output:

Output
<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
  Kubernetes Sample Application
</body>

Stop the container (CTRL + C in the terminal where it's running), and submit this image to your Docker Hub account. To do this, first log in to Docker Hub:

- docker login

Fill in the required information about your Docker Hub account, then push the image with the following command (don't forget to replace dockerhub-username with your own):

- docker push dockerhub-username/do-kubernetes-sample-app

You have now pushed your sample application image to your Docker Hub account. In the next step, you will create a Deployment on your DOKS cluster from this image.

Step 6 — Creating the Kubernetes Deployment and Service

With your Docker image created and working, you will now create a manifest telling Kubernetes how to create a Deployment from it on your cluster. Create the YAML deployment file ~/do-sample-app/kube/do-sample-deployment.yml and open it with your text editor:

- nano ~/do-sample-app/kube/do-sample-deployment.yml

Write the following content to the file, making sure to replace dockerhub-username with your Docker Hub username:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: do-kubernetes-sample-app
  namespace: default
  labels:
    app: do-kubernetes-sample-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: do-kubernetes-sample-app
  template:
    metadata:
      labels:
        app: do-kubernetes-sample-app
    spec:
      containers:
      - name: do-kubernetes-sample-app
        image: dockerhub-username/do-kubernetes-sample-app:latest
        ports:
        - containerPort: 80
          name: http

Kubernetes Deployments are from the API group apps, so the apiVersion of your manifest is set to apps/v1. In metadata you added a new field you have not used previously, called metadata.labels.
This is useful to organize your deployments. The field spec represents the behavior specification of your deployment. A deployment is responsible for managing one or more pods; in this case it's going to have a single replica, per the spec.replicas field. That is, it's going to create and manage a single pod.

To manage pods, your deployment must know which pods it's responsible for. The spec.selector field is the one that gives it that information. In this case the deployment will be responsible for all pods with the label app=do-kubernetes-sample-app. The spec.template field contains the details of the Pod this deployment will create. Inside the template you also have a spec.template.metadata field. The labels inside this field must match the ones used in spec.selector. spec.template.spec is the specification of the pod itself. In this case it contains a single container, called do-kubernetes-sample-app. The image of that container is the image you built previously and pushed to Docker Hub.

This YAML file also tells Kubernetes that this container exposes port 80, and gives this port the name http.

To access the port exposed by your Deployment, create a Service. Make a file named ~/do-sample-app/kube/do-sample-service.yml and open it with your favorite editor:

- nano ~/do-sample-app/kube/do-sample-service.yml

Next, add the following lines to the file:

apiVersion: v1
kind: Service
metadata:
  name: do-kubernetes-sample-app
  namespace: default
  labels:
    app: do-kubernetes-sample-app
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: http
    name: http
  selector:
    app: do-kubernetes-sample-app

This file gives your Service the same labels used in your deployment. This is not required, but it helps to organize your applications on Kubernetes.

The Service resource also has a spec field. The spec.type field is responsible for the behavior of the service.
In this case it's a ClusterIP, which means the service is exposed on a cluster-internal IP and is only reachable from within your cluster. This is the default spec.type for services. spec.selector is the label selector criteria that should be used when picking the pods to be exposed by this service. Since your pod has the label app: do-kubernetes-sample-app, you used it here. spec.ports are the ports exposed by the pod's containers that you want to expose from this service. Your pod has a single container which exposes port 80, named http, so you are using it here as targetPort. The service exposes that port on port 80 too, with the same name, but you could have used a different port/name combination than the one from the container.

With your Service and Deployment manifest files created, you can now create those resources on your Kubernetes cluster using kubectl:

- kubectl apply -f ~/do-sample-app/kube/

You will receive the following output:

Output
deployment.apps/do-kubernetes-sample-app created
service/do-kubernetes-sample-app created

Test if this is working by forwarding one port on your machine to the port that the service is exposing inside your Kubernetes cluster. You can do that using kubectl port-forward:

- kubectl port-forward $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') 8080:80

The subshell command $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') retrieves the name of the pod matching the label you used. Alternatively, you could have retrieved it from the list of pods by using kubectl get pods.
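The command substitution in that port-forward invocation is worth unpacking: the inner kubectl call prints the pod name, and the shell splices that output into the outer command line. A self-contained sketch of the mechanism, with a hypothetical function (and a made-up pod name) standing in for kubectl:

```shell
#!/bin/sh
# Stand-in for:
#   kubectl get pod --selector=... --output jsonpath='{.items[0].metadata.name}'
# The pod name below is hypothetical; on a real cluster kubectl prints the actual one.
fake_get_pod_name() {
  echo "do-kubernetes-sample-app-5c9b7f-xyz12"
}

# $( ... ) substitutes the inner command's stdout into the outer command,
# exactly as the tutorial splices the pod name into `kubectl port-forward`.
echo "would run: kubectl port-forward $(fake_get_pod_name) 8080:80"
```

The same pattern works with any command that prints exactly one value, which is why jsonpath output (a bare pod name, no table headers) is used here.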
After you run port-forward, the shell will stop being interactive, and will instead output the requests redirected to your cluster:

Output
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Opening localhost:8080 on any browser should render the same page you saw when you ran the container locally, but it's now coming from your Kubernetes cluster! As before, you can also use curl in a new terminal window to check if it's working:

- curl localhost:8080

You will receive the following output:

Output
<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
  Kubernetes Sample Application
</body>

Next, it's time to push all the files you created to your GitHub repository. To do this you must first create a repository on GitHub called digital-ocean-kubernetes-deploy. In order to keep this repository simple for demonstration purposes, do not initialize the new repository with a README, license, or .gitignore file when asked on the GitHub UI. You can add these files later on.

With the repository created, point your local repository to the one on GitHub. To do this, press CTRL + C to stop kubectl port-forward and get the command line back, then run the following commands to add a new remote called origin:

- cd ~/do-sample-app/
- git remote add origin

There should be no output from the preceding command.

Next, commit all the files you created up to now to the GitHub repository. First, add the files:

- git add --all

Next, commit the files to your repository, with a commit message in quotation marks:

- git commit -m "initial commit"

This will yield output similar to the following:

Output
[master (root-commit) db321ad] initial commit
 4 files changed, 47 insertions(+)
 create mode 100644 Dockerfile
 create mode 100644 index.html
 create mode 100644 kube/do-sample-deployment.yml
 create mode 100644 kube/do-sample-service.yml

Finally, push the files to GitHub:

- git push -u origin master

You will be prompted for your username and password.
Once you have entered this, you will see output like this:

Output
Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 907 bytes | 0 bytes/s, done.
Total 7 (delta 0), reused 0 (delta 0)
To github.com:your-github-account-username/digital-ocean-kubernetes-deploy.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.

If you go to your GitHub repository page you will now see all the files there. With your project up on GitHub, you can now set up CircleCI as your CI/CD tool.

Step 7 — Configuring CircleCI

For this tutorial, you will use CircleCI to automate deployments of your application whenever the code is updated, so you will need to log in to CircleCI using your GitHub account and set up your repository.

First, go to their homepage and press Sign Up. You are using GitHub, so click the green Sign Up with GitHub button.

CircleCI will redirect you to an authorization page on GitHub. CircleCI needs some permissions on your account to be able to start building your projects. This allows CircleCI to obtain your email, deploy keys and permission to create hooks on your repositories, and add SSH keys to your account. If you need more information on what CircleCI is going to do with your data, check their documentation about GitHub integration.

After authorizing CircleCI you will be redirected to their dashboard.

Next, set up your GitHub repository in CircleCI. Click on Set Up New Projects from the CircleCI Dashboard, or as a shortcut, open the following link, replacing the highlighted text with your own GitHub username:. After that press Start Building. Do not create a config file in your repository just yet, and don't worry if the first build fails.

Next, specify some environment variables in the CircleCI settings.
You can find the settings of the project by clicking on the small button with a cog icon in the top right section of the page and then selecting Environment Variables, or you can go directly to the environment variables page by using the following URL (remember to fill in your username):. Press Add Variable to create new environment variables.

First, add two environment variables called DOCKERHUB_USERNAME and DOCKERHUB_PASS, which will be needed later on to push the image to Docker Hub. Set the values to your Docker Hub username and password, respectively.

Then add three more: KUBERNETES_TOKEN, KUBERNETES_SERVER, and KUBERNETES_CLUSTER_CERTIFICATE.

The value of KUBERNETES_TOKEN will be the value of the local environment variable you used earlier to authenticate on your Kubernetes cluster with your Service Account user. If you have closed the terminal, you can always run the following command to retrieve it again:

- kubectl get secret $(kubectl get secret | grep cicd-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode

KUBERNETES_SERVER will be the string you passed as the --server flag to kubectl when you logged in with your cicd Service Account. You can find this after server: in the ~/.kube/config file, or in the file kubernetes-deployment-tutorial-kubeconfig.yaml downloaded from the DigitalOcean dashboard when you made the initial setup of your Kubernetes cluster.

KUBERNETES_CLUSTER_CERTIFICATE should also be available in your ~/.kube/config file. It's the certificate-authority-data field on the clusters item related to your cluster. It should be a long string; make sure to copy all of it.

Those environment variables must be defined here because most of them contain sensitive information, and it is not secure to place them directly in the CircleCI YAML config file.

With CircleCI listening for changes on your repository and the environment variables configured, it's time to create the configuration file.
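The KUBERNETES_CLUSTER_CERTIFICATE value is Base64-encoded text, and the deploy script you will write later turns it back into a certificate file with base64 --decode. A self-contained sketch of that round trip, using stand-in data rather than a real certificate:

```shell
#!/bin/sh
# Hypothetical stand-in for the certificate-authority-data value
# (a real one is a long Base64 blob copied from ~/.kube/config).
CERT_B64=$(printf 'stand-in certificate bytes' | base64)

# What the deploy script will do with $KUBERNETES_CLUSTER_CERTIFICATE:
# decode the Base64 text back into the original bytes.
echo "$CERT_B64" | base64 --decode
```

Because encoding and decoding are exact inverses, whatever you paste into the CircleCI variable comes back byte-for-byte when the script writes cert.crt.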
Make a directory called .circleci inside your sample application repository:

- mkdir ~/do-sample-app/.circleci/

Inside this directory, create a file named config.yml and open it with your favorite editor:

- nano ~/do-sample-app/.circleci/config.yml

Add the following content to the file, making sure to replace dockerhub-username with your Docker Hub username:

version: 2.1
jobs:
  build:
    docker:
    - image: circleci/buildpack-deps:stretch
    environment:
      IMAGE_NAME: dockerhub-username/do-kubernetes-sample-app
    working_directory: ~/app
    steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Build Docker image
        command: |
          docker build -t $IMAGE_NAME:latest .
    - run:
        name: Push Docker Image
        command: |
          echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
          docker push $IMAGE_NAME:latest
workflows:
  version: 2
  build-master:
    jobs:
    - build:
        filters:
          branches:
            only: master

This sets up a Workflow with a single job, called build, that runs for every commit to the master branch. This job uses the image circleci/buildpack-deps:stretch to run its steps, which is an image from CircleCI based on the official buildpack-deps Docker image, but with some extra tools installed, like the Docker binaries themselves.

The job has four steps:

- checkout retrieves the code from GitHub.
- setup_remote_docker sets up a remote, isolated environment for each build. This is required before you use any docker command inside a job step. It is necessary because, as the steps are running inside a Docker image, setup_remote_docker allocates another machine to run the commands there.
- The first run step builds the image, as you did previously locally. For that you are using the environment variable you declared in environment:, IMAGE_NAME (remember to replace the highlighted section with your own information).
- The last run step pushes the image to Docker Hub, using the environment variables you configured in the project settings to authenticate.
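One detail of the Push Docker Image step deserves a note: echo "$DOCKERHUB_PASS" | docker login ... --password-stdin pipes the password into docker login's standard input instead of passing it as a command-line argument, where it could show up in process listings. The general pattern, feeding a secret to a command via stdin, can be sketched without Docker:

```shell
#!/bin/sh
# Hypothetical secret; on CircleCI this would come from the DOCKERHUB_PASS
# environment variable you configured in the project settings.
SECRET="s3cr3t"

# The consumer reads the secret from stdin, as `docker login --password-stdin` does;
# the secret never appears as an argument of the consuming command.
printf '%s\n' "$SECRET" | { read -r pw; echo "received ${#pw} characters on stdin"; }
```

The same reasoning is why the tutorial stores the password as a CircleCI environment variable rather than writing it into config.yml.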
Commit the new file to your repository and push the changes upstream:

- cd ~/do-sample-app/
- git add .circleci/
- git commit -m "add CircleCI config"
- git push

This will trigger a new build on CircleCI. The CircleCI workflow is going to correctly build and push your image to Docker Hub.

Now that you have created and tested your CircleCI workflow, you can set your DOKS cluster to retrieve the up-to-date image from Docker Hub and deploy it automatically when changes are made.

Step 8 — Updating the Deployment on the Kubernetes Cluster

Now that your application image is being built and sent to Docker Hub every time you push changes to the master branch on GitHub, it's time to update your deployment on your Kubernetes cluster so that it retrieves the new image and uses it as a base for deployment.

To do that, first fix one issue with your deployment: it currently depends on an image with the latest tag. This tag does not tell us which version of the image you are using. You cannot easily lock your deployment to that tag because it's overwritten every time you push a new image to Docker Hub, and by using it like that you lose one of the best things about having containerized applications: reproducibility. You can read more about that in this article about why depending on Docker's latest tag is an anti-pattern.

To correct this, you first must make some changes to your Push Docker Image build step in the ~/do-sample-app/.circleci/config.yml file. Open up the file:

- nano ~/do-sample-app/.circleci/config.yml

Then add the highlighted lines to your Push Docker Image step:

...
    - run:
        name: Push Docker Image
        command: |
          echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
          docker tag $IMAGE_NAME:latest $IMAGE_NAME:$CIRCLE_SHA1
          docker push $IMAGE_NAME:latest
          docker push $IMAGE_NAME:$CIRCLE_SHA1
...

Save and exit the file.

CircleCI has some special environment variables set by default.
One of them is CIRCLE_SHA1, which contains the hash of the commit it's building. The changes you made to ~/do-sample-app/.circleci/config.yml will use this environment variable to tag your image with the commit it was built from, always tagging the most recent build with the latest tag. That way, you always have specific images available, without overwriting them when you push something new to your repository.

Next, change your deployment manifest file to use that tag. This would be simple if inside ~/do-sample-app/kube/do-sample-deployment.yml you could set your image as dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1, but kubectl doesn't do variable substitution inside the manifests when you use kubectl apply. To account for this, you can use envsubst.

envsubst is a CLI tool, part of the GNU gettext project. It allows you to pass some text to it, and if it finds any variable in the text that has a matching environment variable, it's replaced by the respective value. The resulting text is then returned as its output.

To use this, you will create a simple bash script which will be responsible for your deployment. Make a new folder called scripts inside ~/do-sample-app/:

- mkdir ~/do-sample-app/scripts/

Inside that folder create a new bash script called ci-deploy.sh and open it with your favorite text editor:

- nano ~/do-sample-app/scripts/ci-deploy.sh

Inside it write the following bash script:

#! /bin/bash
# exit script when any command ran here returns with non-zero exit code
set -e

COMMIT_SHA1=$CIRCLE_SHA1

# We must export it so it's available for envsubst
export COMMIT_SHA1=$COMMIT_SHA1

# since the only way for envsubst to work on files is using input/output redirection,
# it's not possible to do in-place substitution, so we need to save the output to
# another file and overwrite the original with that one.
envsubst <./kube/do-sample-deployment.yml >./kube/do-sample-deployment.yml.out
mv ./kube/do-sample-deployment.yml.out ./kube/do-sample-deployment.yml

echo "$KUBERNETES_CLUSTER_CERTIFICATE" | base64 --decode > cert.crt

./kubectl \
  --kubeconfig=/dev/null \
  --server=$KUBERNETES_SERVER \
  --certificate-authority=cert.crt \
  --token=$KUBERNETES_TOKEN \
  apply -f ./kube/

Let's go through this script, using the comments in the file. First, there is the following:

set -e

This line makes sure any failed command stops the execution of the bash script. That way if one command fails, the next ones are not executed.

COMMIT_SHA1=$CIRCLE_SHA1
export COMMIT_SHA1=$COMMIT_SHA1

These lines export the CircleCI $CIRCLE_SHA1 environment variable under a new name. If you had just declared the variable without exporting it using export, it would not be visible to the envsubst command.

envsubst <./kube/do-sample-deployment.yml >./kube/do-sample-deployment.yml.out
mv ./kube/do-sample-deployment.yml.out ./kube/do-sample-deployment.yml

envsubst cannot do in-place substitution. That is, it cannot read the content of a file, replace the variables with their respective values, and write the output back to the same file. Therefore, you redirect the output to another file and then overwrite the original file with the new one.

echo "$KUBERNETES_CLUSTER_CERTIFICATE" | base64 --decode > cert.crt

The environment variable $KUBERNETES_CLUSTER_CERTIFICATE you created earlier in CircleCI's project settings is in reality a Base64-encoded string. To use it with kubectl you must decode its contents and save it to a file. In this case you are saving it to a file named cert.crt inside the current working directory.

./kubectl \
  --kubeconfig=/dev/null \
  --server=$KUBERNETES_SERVER \
  --certificate-authority=cert.crt \
  --token=$KUBERNETES_TOKEN \
  apply -f ./kube/

Finally, you are running kubectl. The command has similar arguments to the one you ran when you were testing your Service Account.
You are calling apply -f ./kube/, since on CircleCI the current working directory is the root folder of your project. ./kube/ here is your ~/do-sample-app/kube folder.

Save the file and make sure it's executable:

- chmod +x ~/do-sample-app/scripts/ci-deploy.sh

Now, edit ~/do-sample-app/kube/do-sample-deployment.yml:

- nano ~/do-sample-app/kube/do-sample-deployment.yml

Change the tag of the container image value to look like the following:

# ...
      containers:
      - name: do-kubernetes-sample-app
        image: dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1
        ports:
        - containerPort: 80
          name: http

Save and close the file.

You must now add some new steps to your CI configuration file to update the deployment on Kubernetes. Open ~/do-sample-app/.circleci/config.yml in your favorite text editor:

- nano ~/do-sample-app/.circleci/config.yml

Write the following new steps, right below the Push Docker Image one you had before:

...
    - run:
        name: Install envsubst
        command: |
          sudo apt-get update && sudo apt-get -y install gettext-base
    - run:
        name: Install kubectl
        command: |
          curl -LO(curl -s)/bin/linux/amd64/kubectl
          chmod u+x ./kubectl
    - run:
        name: Deploy Code
        command: ./scripts/ci-deploy.sh

The first two steps install some dependencies: first envsubst, then kubectl. The Deploy Code step is responsible for running your deploy script.

To make sure the changes are really going to be reflected in your Kubernetes deployment, edit your index.html. Change the HTML to something else, like:

<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
  Automatic Deployment is Working!
</body>

Once you have saved the above change, commit all the modified files to the repository, and push the changes upstream:

- cd ~/do-sample-app/
- git add --all
- git commit -m "add deploy script and add new steps to circleci config"
- git push

You will see the new build running on CircleCI, and it will successfully deploy the changes to your Kubernetes cluster.
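The ci-deploy.sh script leans on three shell mechanisms: set -e, export, and envsubst-style variable substitution. Each can be demonstrated in a self-contained way; sed stands in for envsubst below so the sketch runs even without the gettext-base package, and the commit hash is a made-up placeholder:

```shell
#!/bin/sh
# 1) set -e: the first failing command aborts the (child) script,
#    so later commands never run against a half-finished state.
sh -c 'set -e; false; echo "never printed"' || echo "child script aborted at the failing command"

# 2) export: only exported variables are visible to child processes such as envsubst.
UNEXPORTED=1
export EXPORTED=2
sh -c 'echo "unexported=[$UNEXPORTED] exported=[$EXPORTED]"'

# 3) envsubst-style substitution: replace the literal $COMMIT_SHA1 token in a
#    manifest line with the variable's value (emulated here with sed).
COMMIT_SHA1=2f3a9b1   # placeholder; on CircleCI this comes from CIRCLE_SHA1
printf 'image: dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1\n' \
  | sed "s/\$COMMIT_SHA1/$COMMIT_SHA1/"
```

Seeing these in isolation makes the deploy script easier to audit: it is just substitution, decoding, and a guarded kubectl apply.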
Wait for the build to finish, then run the same command you ran previously:

- kubectl port-forward $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') 8080:80

Make sure everything is working by opening your browser on the URL localhost:8080 or by making a curl request to it. It should show the updated HTML:

- curl localhost:8080

You will receive the following output:

Output
<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
  Automatic Deployment is Working!
</body>

Congratulations, you have set up automated deployment with CircleCI!

Conclusion

This was a basic tutorial on how to do deployments to DigitalOcean Kubernetes using CircleCI. From here, you can improve your pipeline in many ways. The first thing you can do is create a single build job for multiple deployments, each one deploying to different Kubernetes clusters or different namespaces. This can be extremely useful when you have different Git branches for development/staging/production environments, ensuring that the deployments are always separated.

You could also build your own image to be used on CircleCI, instead of using buildpack-deps. This image could be based on it, but could already have the kubectl and envsubst dependencies installed.

If you would like to learn more about CI/CD on Kubernetes, check out the tutorials for our CI/CD on Kubernetes Webinar Series, or for more information about apps on Kubernetes, see Modernizing Applications for Kubernetes.
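As a concrete sketch of that custom-image suggestion, such an image could be described with a Dockerfile along these lines. Note that the pinned kubectl version and download URL are illustrative assumptions, not part of the tutorial; adjust them to match your cluster:

```dockerfile
# Hypothetical custom CI image: buildpack-deps plus the two tools the
# Deploy Code step needs, so they are not reinstalled on every build.
FROM circleci/buildpack-deps:stretch

# envsubst comes from the gettext-base package.
RUN sudo apt-get update && sudo apt-get -y install gettext-base

# Example kubectl release pin (version and URL are placeholders to adapt).
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/linux/amd64/kubectl \
 && chmod +x ./kubectl \
 && sudo mv ./kubectl /usr/local/bin/kubectl
```

Pointing the build job's docker: image at the published result would then let you drop the two Install steps from config.yml.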
https://www.digitalocean.com/community/tutorials/how-to-automate-deployments-to-digitalocean-kubernetes-with-circleci
Google::ProtocolBuffers - simple interface to Google Protocol Buffers

    ##
    ## Define structure of your data and create serializer classes
    ##
    use Google::ProtocolBuffers;
    Google::ProtocolBuffers->parse("
        message Person {
            required string name  = 1;
            required int32 id     = 2; // Unique ID number for this person.
            optional string email = 3;

            enum PhoneType {
                MOBILE = 0;
                HOME = 1;
                WORK = 2;
            }

            message PhoneNumber {
                required string number = 1;
                optional PhoneType type = 2 [default = HOME];
            }

            repeated PhoneNumber phone = 4;
        }
        ",
        { create_accessors => 1 }
    );

    ##
    ## Serialize Perl structure and print it to file
    ##
    open my($fh), ">person.dat";
    binmode $fh;
    print $fh Person->encode({
        name  => 'A.U. Thor',
        id    => 123,
        phone => [
            { number => 1234567890 },
            { number => 987654321, type => Person::PhoneType::WORK() },
        ],
    });
    close $fh;

    ##
    ## Decode data from serialized form
    ##
    my $person;
    {
        open my($fh), "<person.dat";
        binmode $fh;
        local $/;
        $person = Person->decode(<$fh>);
        close $fh;
    }
    print $person->{name}, "\n";
    print $person->name, "\n";    ## ditto

Google Protocol Buffers is a data serialization format. It is binary (and hence compact and fast for serialization) and as extendable as XML; its nearest analogues are Thrift and ASN.1. There are official mappings for the C++, Java and Python languages; this library is a mapping for Perl.

Protocol Buffers is a typed protocol, so work with it starts with some kind of Interface Definition Language named 'proto'. For the description of the language, please see the official page ().

The methods 'parse' and 'parsefile' take the description of a data structure as a text literal or as the name of a proto file, respectively. After successful compilation, Perl serializer classes are created for each message, group or enum found in the proto. In case of error, these methods will die. On success, a list of names of the created classes is returned.
Options are given as a hash reference. The recognizable options are:

One proto file may include others; this option sets where to look for the included files.

Compilation of proto source is a relatively slow and memory-consuming operation; it is not recommended in a production environment. Instead, with this option you may specify a filename or filehandle where to save the Perl code of the created serializer classes for future use. Example:

    ## in helper script
    use Google::ProtocolBuffers;
    Google::ProtocolBuffers->parse(
        "message Foo {optional int32 a = 1; }",
        { generate_code => 'Foo.pm' }
    );

    ## then, in production code
    use Foo;
    my $str = Foo->encode({ a => 100 });

If this option is set, then the result of 'decode' will be a blessed structure with accessor methods for each field; look at Class::Accessor for more info. Example:

    use Google::ProtocolBuffers;
    Google::ProtocolBuffers->parse(
        "message Foo { optional int32 id = 1; }",
        { create_accessors => 1 }
    );
    my $foo = Foo->decode("\x{08}\x{02}");
    print $foo->id;    ## prints 2
    $foo->id(100);     ## now it is set to 100

This option is from Class::Accessor too; it has no effect without 'create_accessors'.
If set, the names of getters (read accessors) will start with get_ and the names of setters with set_:

    use Google::ProtocolBuffers;
    Google::ProtocolBuffers->parse(
        "message Foo { optional int32 id = 1; }",
        { create_accessors => 1, follow_best_practice => 1 }
    );
    ## Class::Accessor provides a constructor too
    my $foo = Foo->new({ id => 2 });
    print $foo->get_id;
    $foo->set_id(100);

If this option is set, then extensions are treated as if they were regular fields in messages or groups:

    use Google::ProtocolBuffers;
    use Data::Dumper;
    Google::ProtocolBuffers->parse(
        "
        message Foo {
            optional int32 id = 1;
            extensions 10 to max;
        }
        extend Foo {
            optional string name = 10;
        }
        ",
        { simple_extensions => 1, create_accessors => 1 }
    );
    my $foo = Foo->decode("\x{08}\x{02}R\x{03}Bob");
    print Dumper $foo;        ## { id => 2, name => 'Bob' }
    print $foo->id, "\n";
    $foo->name("Sponge Bob");

This option is off by default because extensions live in a separate namespace and may have the same names as fields. Compilation of such a proto with the 'simple_extensions' option on will result in a die. If the option is off, you have to use special accessors for extension fields - setExtension and getExtension, as in the C++ Protocol Buffers API. Hash keys for extended fields in plain-old-data structures will be enclosed in brackets:

    use Google::ProtocolBuffers;
    use Data::Dumper;
    Google::ProtocolBuffers->parse(
        "
        message Foo {
            optional int32 id = 1;
            extensions 10 to max;
        }
        extend Foo {
            optional string id = 10;    // <-- id again!
        }
        ",
        {
            simple_extensions => 0,     ## <-- no simple extensions
            create_accessors  => 1,
        }
    );
    my $foo = Foo->decode("\x{08}\x{02}R\x{05}Kenny");
    print Dumper $foo;                       ## { id => 2, '[id]' => 'Kenny' }
    print $foo->id, "\n";                    ## 2
    print $foo->getExtension('id'), "\n";    ## Kenny
    $foo->setExtension("id", 'Kenny McCormick');

By default, the names of created Perl classes are taken from the "camel-cased" names of proto's packages, messages, groups and enums.
First characters are capitalized, all underscores are removed and the characters following them are capitalized too. An example: the fully qualified name 'package_test.Message' will result in the Perl class 'PackageTest::Message'. The option 'no_camel_case' turns this name-mangling off. The names of fields, extensions and enum constants are not affected anyway.

This method may be called as a class or instance method. 'MessageClass' must already be created by the compiler. Input is a hash reference. Output is a scalar (string) with the serialized data. Unknown fields in the hashref are ignored. In case of errors (e.g. a required field is not set and there is no default value for the required field) an exception is thrown. Examples:

    use Google::ProtocolBuffers;
    Google::ProtocolBuffers->parse(
        "message Foo {optional int32 id = 1; }",
        { create_accessors => 1 }
    );

    my $string = Foo->encode({ id => 2 });

    my $foo = Foo->new({ id => 2 });
    $string = $foo->encode;    ## ditto

Class method. Input: a serialized data string. Output: a data object of class 'MessageClass'. Unknown fields in serialized data are ignored. In case of errors (e.g. the message is broken or partial), or if the data string is a wide-character (utf-8) string, an exception is thrown.
Or, do the import inside a BEGIN block:

    use Foo;    ## Foo.pm was generated in previous example
    BEGIN { Foo->import(":constants") }
    print FOO, "\n";    ## ok, Perl compiler knows about FOO here

Though groups are considered deprecated, they are supported by Google::ProtocolBuffers. They are like nested messages, except that the nested type definition and the field definition go together:

    use Google::ProtocolBuffers;
    Google::ProtocolBuffers->parse(
        "
        message Foo {
            optional group Bar = 1 {
                optional int32 baz = 1;
            }
        }
        ",
        { create_accessors => 1 }
    );
    my $foo = Foo->new;
    $foo->Bar( Foo::Bar->new({ baz => 2 }) );
    print $foo->Bar->baz, ", ", $foo->{Bar}->{baz}, "\n";    # 2, 2

A proto file may specify a default value for a field. The default value is returned by the accessor if there is no value for the field or if this value is undefined. The default value is not accessible via the plain-old-data hash, though. Default string values are always byte-strings; if you need a wide-character (Unicode) string, use "decode_utf8" in Encode.

    use Google::ProtocolBuffers;
    Google::ProtocolBuffers->parse(
        "message Foo {optional string name=1 [default='Kenny'];} ",
        { create_accessors => 1 }
    );

    ## no initial value
    my $foo = Foo->new;
    print $foo->name(), ", ", $foo->{name}, "\n";    # Kenny, (undef)

    ## some defined value
    $foo->name('Ken');
    print $foo->name(), ", ", $foo->{name}, "\n";    # Ken, Ken

    ## empty, but still defined value
    $foo->name('');
    print $foo->name(), ", ", $foo->{name}, "\n";    # (empty), (empty)

    ## undef value == default value
    $foo->name(undef);
    print $foo->name(), ", ", $foo->{name}, "\n";    # Kenny, (undef)

From the point of view of serialized data, there is no difference whether a field is declared as a regular field or as an extension, as long as the field number is the same. That is why there is an option 'simple_extensions' (see above) that treats extensions like regular fields.
From the point of view of named accessors, however, extensions live in a namespace different from the namespace of fields; that's why their simple (i.e. not fully qualified) names may conflict. (And that's why this option is off by default.) The names of extensions are obtained from their fully qualified names, from which the leading part most in common with the name of the class to be extended is stripped. Names of hash keys are enclosed in brackets; arguments to the methods 'getExtension' and 'setExtension' are not. Here is a self-explanatory example of the rules:

    use Google::ProtocolBuffers;
    use Data::Dumper;
    Google::ProtocolBuffers->parse(
        "
        package some_package;

        // message Plugh contains one regular field and three extensions
        message Plugh {
            optional int32 foo = 1;
            extensions 10 to max;
        }

        extend Plugh {
            optional int32 bar = 10;
        }

        message Thud {
            extend Plugh {
                optional int32 baz = 11;
            }
        }

        // Note: the official Google's proto compiler does not allow
        // several package declarations in a file (as of version 2.0.1).
        // To compile this example with the official protoc, put the lines
        // above into some other file, and import that file here.
        package another_package;
        // import 'other_file.proto';

        extend some_package.Plugh {
            optional int32 qux = 12;
        }
        ",
        { create_accessors => 1 }
    );

    my $plugh = SomePackage::Plugh->decode(
        "\x{08}\x{01}\x{50}\x{02}\x{58}\x{03}\x{60}\x{04}"
    );
    print Dumper $plugh;
    ## {foo=>1, '[bar]'=>2, '[Thud.baz]'=>3, '[another_package.qux]'=>4}
    print $plugh->foo, "\n";                          ## 1
    print $plugh->getExtension('bar'), "\n";          ## 2
    print $plugh->getExtension('Thud.baz'), "\n";     ## 3
    print $plugh->getExtension('Thud::baz'), "\n";    ## ditto

Another point is that an 'extend' block doesn't create a new namespace or scope, so the following proto declaration is invalid:

    // proto:
    package test;
    message Foo { extensions 10 to max; }
    message Bar { extensions 10 to max; }
    extend Foo { optional int32 a = 10; }
    extend Bar { optional int32 a = 20; }  // <-- Error: name 'a' in package
                                           //     'test' is already used!
Well, extensions are the most complicated part of the proto syntax, and I hope that you either got it or you don't need it.

You don't like to mess with proto files? The structure of your data is known at run-time only? No problem, create your serializer classes at run-time too, with the method Google::ProtocolBuffers->create_message('ClassName', \@fields, \%options). (Note: the order of field description parts is the same as in a proto file. The API is going to change to accept named parameters, but backward compatibility will be preserved.)

    use Google::ProtocolBuffers;
    use Google::ProtocolBuffers::Constants(qw/:labels :types/);
    ##
    ## proto:
    ##    message Foo {
    ##        message Bar {
    ##            optional int32 a = 1 [default=12];
    ##        }
    ##        required int32 id = 1;
    ##        repeated Bar bars = 2;
    ##    }
    ##
    Google::ProtocolBuffers->create_message(
        'Foo::Bar',
        [
            ## optional int32 a = 1 [default=12]
            [LABEL_OPTIONAL, TYPE_INT32, 'a', 1, '12']
        ],
        { create_accessors => 1 }
    );
    Google::ProtocolBuffers->create_message(
        'Foo',
        [
            [LABEL_REQUIRED, TYPE_INT32, 'id', 1],
            [LABEL_REPEATED, 'Foo::Bar', 'bars', 2],
        ],
        { create_accessors => 1 }
    );
    my $foo = Foo->new({ id => 10 });
    $foo->bars( Foo::Bar->new({a=>1}), Foo::Bar->new({a=>2}) );
    print $foo->encode;

There are methods 'create_group' and 'create_enum' also. The following constants are exported: labels (LABEL_OPTIONAL, LABEL_REQUIRED, LABEL_REPEATED) and types (TYPE_INT32, TYPE_UINT32, TYPE_SINT32, TYPE_FIXED32, TYPE_SFIXED32, TYPE_INT64, TYPE_UINT64, TYPE_SINT64, TYPE_FIXED64, TYPE_SFIXED64, TYPE_BOOL, TYPE_STRING, TYPE_BYTES, TYPE_DOUBLE, TYPE_FLOAT).

All proto options are ignored except default values for fields; extension numbers are not checked. Unknown fields in serialized data are skipped, and no stream API (encoding to/decoding from file handles) is present. Ask for what you need most. An introspection API is planned. Declarations of RPC services are currently ignored, but their support is planned (btw, which Perl RPC implementation would you recommend?)
Official page of Google's Protocol Buffers project.

Protobuf-PerlXS project: creates an XS wrapper for the C++ classes generated by Google's official compiler, protoc. You have to compile the XS files every time you've changed the proto description; however, this is the fastest way to work with Protocol Buffers from Perl.

Protobuf-Perl project: someday it may be part of Google's official compiler.

Thrift

ASN.1, JSON and YAML

Author: Igor Gariev <gariev@hotmail.com>

Proto grammar is based on work by Alek Storm.

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.10.0 or, at your option, any later version of Perl 5 you may have available.
http://search.cpan.org/~gariev/Google-ProtocolBuffers-0.08/lib/Google/ProtocolBuffers.pm
Chrome back-end code is all C++ and we want to leverage many C++ features, such as stack-based classes and namespaces. As a result, all front-end Bling files should be .mm files, as we expect eventually they will contain C++ code or language features.

While there are no smart pointers in Objective-C, Chrome has scoped_nsobject<T> and WeakNSObject<T> to automatically manage (and document) object ownership. Under ARC, scoped_nsobject<T> and WeakNSObject<T> should only be used for interfacing with existing APIs that take these, or for declaring a C++ member variable in a header. Otherwise use __weak variables and strong/weak properties. Note that scoped_nsobject and WeakNSObject provide the same API under ARC, i.e. scoped_nsobject<T> foo([[Bar alloc] init]); is correct both under ARC and non-ARC.

scoped_nsobject<T> should be used for all owned member variables in C++ classes (except the private classes that only exist in implementation files) and Objective-C classes built without ARC, even if that means writing dedicated getters and setters to implement @property declarations. The same goes for WeakNSObject: always use it to express weak ownership of an Objective-C object, unless you are writing ARC code. We'd rather have a little more boilerplate code than a leak.

As the C++ style guide tells you, we never use C casts and prefer static_cast<T> and dynamic_cast<T>. However, for Objective-C casts we have two specific casts: base::mac::ObjCCast<T>(arg) is similar to dynamic_cast<T>, and base::mac::ObjCCastStrict<T>(arg) DCHECKs against that class.

We follow Google style for blocks, except that historically we have used 2-space indentation for blocks that are parameters, rather than 4. You may continue to use this style when it is consistent with the surrounding code.

NOTREACHED: This function should not be called. If it is, we have a problem somewhere else.

NOTIMPLEMENTED: This isn't implemented because we don't use it yet. If it's called, then we need to figure out what it should do.
When something is called but doesn't need an implementation, just comment that rather than using a logging macro.

Sometimes we include TODO comments in code. Generally we follow C++ style, but here are some more specific practices we've agreed upon as a team:

    // TODO(crbug.com/######): Something that needs doing.
https://chromium.googlesource.com/chromium/src/+/25eb9e4d58709999fcb50219a4dc1b44b49b7256/docs/ios/style.md
In languages like C or C++, the programmer is responsible for dynamic allocation and deallocation of memory on the heap. But in Python the programmer does not have to preallocate or deallocate memory. Python uses the following garbage collection algorithms for memory management:

- Reference counting
- Cycle-detecting algorithm (circular references)

Reference counting

Reference counting is a simple procedure where referenced objects are deallocated when there is no reference to them in a program. In short, when the reference count becomes 0, the object is deallocated (its allocated memory is freed). Let's have a look at the example below:

    def calculate_sum(num1, num2):
        total = num1 + num2
        print(total)

In the above example, we have three local references: num1, num2 and total. Here total is different from num1 and num2 because it is only referenced inside the function, so its reference count is 1; num1 and num2 may also be referenced outside the function, so their reference counts may be greater than one. So, when the function has finished execution, the reference count of total drops to 0. The garbage collector finds out that total is no longer referenced (its reference count field has reached 0) and frees its allocated memory.

Variables declared outside of functions do not get destroyed even after the function has finished execution. We can also delete manually, using the del statement. The del statement removes a variable and its reference. When the reference count reaches 0, the object will be collected by the garbage collector.

The reference counting algorithm has some issues, though, such as circular references.

Circular References

A reference cycle occurs when one or more objects are referencing each other. For example, a list object may point to itself, and two objects may point to each other. The reference count for such objects is always at least 1.
Let's go to the practical side:

    import gc

    gc.set_debug(gc.DEBUG_SAVEALL)

    lst = []
    lst.append(lst)
    lst_address = id(lst)
    del lst

    object_1 = {}
    object_2 = {}
    object_1['obj2'] = object_2
    object_2['obj1'] = object_1
    obj_address = id(object_1)
    del object_1, object_2

In the above example, the del statement removes the variables and their references to the objects. Let's inspect the deleted objects using gc.collect(); because of DEBUG_SAVEALL, gc saves unreachable objects to gc.garbage instead of deleting them.

    >>> gc.collect()
    3

When we delete a variable, we only delete the __main__ reference. Now we don't have access to lst, object_1 and object_2 at all, but each of these objects still has 1 reference (from the cycle), so the reference counting algorithm will not collect them. Check the reference count as below:

    import sys

    print(sys.getrefcount(obj_address))
    print(sys.getrefcount(lst_address))

    2
    2
    # 1 from the variable and 1 from getrefcount

Multiply this by a million objects and you may have an absolutely serious memory leak issue. For this kind of reference cycle, Python has another algorithm specially dedicated to discovering and destroying circular references. It is also the only controllable part of Python's GC.

Summary

Python has 2 garbage collection algorithms. One deals with reference counts: when the reference count reaches 0, it removes the object and frees its allocated memory. The other is the cycle-detecting algorithm, which discovers and destroys circular references.

I hope that you now have a fair understanding of the garbage collection algorithms in Python. If you have any suggestions on your mind, please let me know in the comments.

Reference

Discussion

If you are interested to play with GC, using Nim you can play with 6 GCs, and choose and customize, including a Rust-like one, a real-time one, the Go one, and no GC (manual).
For Python it works like Cython, basically. It has a --gc CLI param that can choose the GC, and also a GC_statistics() that prints info; you can turn the GC on and off on the fly from the code, and more. Some GCs that books say are slow are actually pretty fast in practice.

Good post, interesting.

@juan Thanks for sharing. Sure, I will play with GC.
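Coming back to the article's main point: the division of labor between the two collectors can be verified directly in CPython. This sketch (the Node class is illustrative) shows that reference counting alone cannot reclaim a cycle, while the cycle detector can:

```python
import gc
import weakref

class Node:
    """Illustrative node type that can participate in a reference cycle."""
    def __init__(self):
        self.other = None

gc.disable()                 # keep the automatic cycle detector out of the demo
a, b = Node(), Node()
a.other, b.other = b, a      # a reference cycle: refcounts never drop below 1
probe = weakref.ref(a)       # observe 'a' without keeping it alive

del a, b                     # only the cycle itself keeps the objects alive now
assert probe() is not None   # reference counting alone could not reclaim them

gc.collect()                 # the cycle detector finds and frees the cycle
assert probe() is None
gc.enable()
```

The weak reference goes dead only after gc.collect() runs, which is exactly the behavior the article describes.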
https://dev.to/sharmapacific/garbage-collection-in-python-1d4g
Both MPI and OpenMP are portable across a wide variety of platforms, from laptops to the largest supercomputers on the planet. MPI codes are usually written to dynamically decompose the problem among MPI processes. These processes run their piece of the problem on different processors, usually having their own (distributed) memory. OpenMP codes, on the other hand, have directives that describe how parallel sections or loop iterations are to be split up among threads. The team of threads shares a single memory space, but threads may also have some of their own private stack variables.

MPI programs should be written to generate the same answers no matter what decomposition is used, even if that decomposition is the entire problem space running in a single process on a single processor (in other words, serially). OpenMP programs should be written to generate the same answers no matter how many threads are used, and should yield the same answers even when compiled without OpenMP enabled.

Hybrid MPI/OpenMP codes present additional challenges over developing with just one or the other. Hybrid programs should generate the same answers no matter what decomposition is used for the distributed memory MPI parallelism and no matter how many OpenMP threads are used. Some hybrid codes, usually those that are carefully designed, are written so that they may be compiled as serial programs without even requiring the use of the MPI library. The example code included here is designed to run with any combination of MPI or OpenMP enabled.

Show Me the Code!

Since OpenMP spawns (or forks) a team of threads at the beginning of a parallel region and joins (or kills) them at the end of the parallel region, OpenMP parallelism should, in general, be exploited at as high a level as possible in the parallel code.
Likewise, MPI should be used in a manner that minimizes communications, typically by decomposing the problem so that most of the work is done independently, if possible, by MPI processes over a subset of data. In some cases it's most effective to use MPI and OpenMP to parallelize processing at the same level. And that's exactly what's done in the hybrid.c code shown in Listing One.

Listing One: hybrid.c, an OpenMP and MPI hybrid

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <math.h>

    #ifdef USE_MPI
    #include <mpi.h>
    #endif /* USE_MPI */

    #ifdef _OPENMP
    #include <omp.h>
    #endif /* _OPENMP */

    int read_slab_info() {
        /* This should read info from a file or something, but we fake it */
        return 80;
    }

    double process_slab(int snum) {
        int i, j;
        double x;

        for (i = 0; i < 10000; i++)
            for (j = 0; j < 10000; j++)
                x += sqrt((i-j)*(i-j) / (sqrt((i*i) + (j*j)) + 1));
        return x;
    }

    void exit_on_error(char *message) {
        fprintf(stderr, "%s\n", message);
    #ifdef USE_MPI
        MPI_Finalize();
    #endif
        exit(1);
    }

    int main(int argc, char **argv) {
        int i, j, p, me, nprocs, num_threads, num_slabs, spp;
        int *my_slabs, *count;
        double x, sum;
    #ifdef _OPENMP
        int np;
    #endif /* _OPENMP */
    #ifdef USE_MPI
        int namelen;
        char processor_name[MPI_MAX_PROCESSOR_NAME];
    #endif /* USE_MPI */

    #ifdef USE_MPI
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        MPI_Comm_rank(MPI_COMM_WORLD, &me);
        MPI_Get_processor_name(processor_name, &namelen);
    #else /* USE_MPI */
        nprocs = 1;
        me = 0;
    #endif /* USE_MPI */

    #ifdef _OPENMP
        np = omp_get_num_procs();
        omp_set_num_threads(np);
        num_threads = omp_get_max_threads();
    #else /* _OPENMP */
        num_threads = 1;
    #endif /* _OPENMP */

        printf("Process %d of %d", me, nprocs);
    #ifdef USE_MPI
        printf(" running on %s", processor_name);
    #endif /* USE_MPI */
    #ifdef _OPENMP
        printf(" using OpenMP with %d threads", num_threads);
    #endif /* _OPENMP */
        printf("\n");

        /* Master process reads slab data */
        if (!me)
            num_slabs = read_slab_info();

    #ifdef USE_MPI
        if (MPI_Bcast(&num_slabs, 1, MPI_INT, 0, MPI_COMM_WORLD)
            != MPI_SUCCESS)
            exit_on_error("Error in MPI_Bcast()");
    #endif /* USE_MPI */

        if (num_slabs < nprocs)
            exit_on_error("Number of slabs may not exceed number of processes");

        /* maximum number of slabs per process */
        spp = (int)ceil((double)num_slabs / (double)nprocs);

        if (!me)
            printf("No more than %d slabs will assigned to each process\n", spp);

        /* allocate list and count of slabs for each process */
        if (!(my_slabs = (int *)malloc(nprocs*spp*sizeof(int)))) {
            perror("my_slabs");
            exit(2);
        }
        if (!(count = (int *)malloc(nprocs*sizeof(int)))) {
            perror("count");
            exit(2);
        }

        /* initialize slab counts */
        for (p = 0; p < nprocs; p++)
            count[p] = 0;

        /* round robin assignment of slabs to processes for better potential
         * load balancing */
        for (i = j = p = 0; i < num_slabs; i++) {
            my_slabs[p*spp+j] = i;
            count[p]++;
            if (p == nprocs-1)
                p = 0, j++;
            else
                p++;
        }

        /* each process works on its own list of slabs, but OpenMP threads
         * divide up the slabs on each process because of OpenMP directive */
    #pragma omp parallel for reduction(+: x)
        for (i = 0; i < count[me]; i++) {
            printf("%d: slab %d being processed", me, my_slabs[me*spp+i]);
    #ifdef _OPENMP
            printf(" by thread %d", omp_get_thread_num());
    #endif /* _OPENMP */
            printf("\n");
            x += process_slab(my_slabs[me*spp+i]);
        }

    #ifdef USE_MPI
        if (MPI_Reduce(&x, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD)
            != MPI_SUCCESS)
            exit_on_error("Error in MPI_Reduce()");
    #else /* USE_MPI */
        sum = x;
    #endif /* USE_MPI */

        if (!me)
            printf("Sum is %lg\n", sum);

    #ifdef USE_MPI
        printf("%d: Calling MPI_Finalize()\n", me);
        MPI_Finalize();
    #endif /* USE_MPI */
        exit(0);
    }

hybrid.c does nothing useful, but it does demonstrate the framework one might use for developing a hybrid code. As usual, the OpenMP parts of the code, except for the actual #pragma omp directive, are wrapped with #ifdefs. As a result, OpenMP components are compiled only if the appropriate flag is provided to the compiler.
In addition, the MPI parts are wrapped so that the code may be compiled without MPI. The USE_MPI macro is set manually during compilation to enable the use of MPI. The _OPENMP macro should never be defined manually; it's automatically set when the code is compiled with OpenMP enabled.

hybrid.c decomposes "slabs" into chunks that are assigned to each MPI process. The slabs are processed independently, so this is the level at which MPI parallelism should be employed. Each MPI process then loops over each slab in its chunk or list of slabs, processing each slab in turn. This high level loop is an excellent candidate for employing OpenMP parallelism as well. In this way, each thread in the team of threads associated with an MPI process independently processes its own slab. The code provided here does not actually process slabs of data; instead it has some loops that just waste time to simulate work for each slab.

Just inside main(), MPI is initialized if MPI was enabled during compilation by defining the USE_MPI preprocessor macro. Otherwise, nprocs is set to 1 and me is set to 0. Next, when OpenMP is enabled, the number of threads to use is set to the number of processors on the node. The number of processors on the node is obtained by the call to omp_get_num_procs(), and the number of threads is set by the call to omp_set_num_threads(). A call to omp_get_max_threads() checks that the number of threads was set correctly. When compiled with OpenMP disabled, num_threads is simply set to 1. A diagnostic message is printed by each process describing the configuration used for running the code.

Next, the master process calls read_slab_info() to obtain the number of slabs (num_slabs), and this value is subsequently broadcast to all MPI processes by calling MPI_Bcast(). Then the number of slabs is checked to be sure that the number of MPI processes does not exceed the number of slabs to be processed.
In preparation for decomposing these slabs into chunks, the maximum number of slabs per process is determined by dividing the total number of slabs by the number of MPI processes. This value is rounded up to an integer by ceil(). Next, memory is allocated for arrays containing the list of slabs for each process (my_slabs) and the number of slabs contained in each list (count). Then the slab count for each process is initialized to 0.

In many scientific codes, slabs or columns or blocks distributed among processes in this way may take slightly different amounts of time to process. Often there is a spatial correlation between slabs and processing time; that is to say, slabs near each other tend to take about the same amount of time to process. In an attempt to balance the load, the chunks/lists are filled in round robin fashion. This should help even out the load among processes, thereby providing better load balancing. The loop over slabs assigns one slab to each process (by adding one slab to the list for each process), then cycles through again until all slabs are assigned to a process.

Following this assignment loop, each slab is processed in a loop that runs from 0 to count[me], the number of slabs in the list for this process. The processing loop has an OpenMP parallel for directive above it, so trips through the loop are distributed among OpenMP threads that are spawned at this point in the code. The slab number, obtained from the slab list for this process, is passed to the process_slab() routine. The slab number, MPI process rank, and OpenMP thread number are printed for diagnostic purposes for every trip through the loop. The process_slab() routine merely performs meaningless calculations before returning a double precision value. The returned values are accumulated into the variable x on each process.
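The round-robin assignment is the heart of the load-balancing scheme, and it is easy to check on its own. Here is a small Python sketch of the same loop (the function name round_robin is illustrative, not from hybrid.c):

```python
def round_robin(num_slabs, nprocs):
    """Assign slab indices to processes one at a time, cycling through the
    processes, just like the assignment loop in hybrid.c."""
    slab_lists = [[] for _ in range(nprocs)]
    for i in range(num_slabs):
        slab_lists[i % nprocs].append(i)
    return slab_lists

# 80 slabs over 4 processes: process 0 gets slabs 0, 4, 8, ..., which is
# why the diagnostic output shows "0: slab 0", "0: slab 4", and so on.
lists = round_robin(80, 4)
assert lists[0][:3] == [0, 4, 8]
assert lists[1][:3] == [1, 5, 9]
assert all(len(l) == 20 for l in lists)
```

With 80 slabs and 4 processes each list holds exactly 20 slabs, matching the "No more than 20 slabs" diagnostic in the program output.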
Recall that the reduction clause in the omp parallel for directive causes each thread to have its own stack variable for x, and then the values are reduced (summed in this case) across all threads in each process before the loop exits. These values are then summed across all MPI processes by the MPI_Reduce() call when MPI is enabled. Without MPI, the sum is merely the x value, irrespective of whether OpenMP is used. The sum is printed by the first MPI process (or the single serial process), MPI is finalized, and the program ends.

Building and Running the Hybrid Code

The Portland Group compiler was used along with the MPICH implementation of MPI in the cases below. The -mp flag on the compiler line enables OpenMP, and the USE_MPI preprocessor macro, set on the compile line, causes the MPI components to be compiled into the program.

Output One contains partial results from building and running the hybrid program with both MPI and OpenMP enabled. The program is run on four dual-processor nodes. As you can see from the diagnostic output, the slabs are distributed round robin fashion among MPI processes, and the loop iterations are distributed among threads on each process. The sum is printed at the end. This version of the code ran in 1 minute, 6 seconds.
Output One: Building and running the hybrid code with MPI and OpenMP

    [forrest@node01 hybrid]$ make hybrid
    pgcc -O -mp -DUSE_MPI -I/usr/local/mpich_pgi/include -c -o hybrid.o hybrid.c
    pgcc -mp -L/usr/local/mpich_pgi/lib -o hybrid hybrid.o -lm -lmpich
    [forrest@node01 hybrid]$ time mpirun -nolocal -np 4 -machinefile machines hybrid
    running hybrid
    Process 2 of 4 running on node03 using OpenMP with 2 threads
    2: slab 2 being processed by thread 0
    2: slab 42 being processed by thread 1
    2: slab 6 being processed by thread 0
    2: slab 46 being processed by thread 1
    ...
    2: Calling MPI_Finalize()
    Process 1 of 4 running on node02 using OpenMP with 2 threads
    1: slab 1 being processed by thread 0
    1: slab 41 being processed by thread 1
    1: slab 45 being processed by thread 1
    1: slab 5 being processed by thread 0
    ...
    1: Calling MPI_Finalize()
    Process 3 of 4 running on node04 using OpenMP with 2 threads
    3: slab 3 being processed by thread 0
    3: slab 43 being processed by thread 1
    3: slab 47 being processed by thread 1
    3: slab 7 being processed by thread 0
    ...
    3: Calling MPI_Finalize()
    Process 0 of 4 running on node01 using OpenMP with 2 threads
    No more than 20 slabs will assigned to each process
    0: slab 0 being processed by thread 0
    0: slab 40 being processed by thread 1
    0: slab 4 being processed by thread 0
    0: slab 44 being processed by thread 1
    ...
    Sum is 3.09086e+11
    0: Calling MPI_Finalize()

    real    1m6.847s
    user    0m0.113s
    sys     0m0.178s

Output Two contains the output from building and running the same code with OpenMP disabled. In this case, the -mp flag is not passed to the compiler, but the USE_MPI macro is defined. This program is again run on the same four dual-processor nodes. However, since OpenMP is not enabled, only one processor is effectively used in each node. The same sum is printed, and the program ends. With MPI only, the code ran in 1 minute, 25 seconds.
Output Two: Building and running the hybrid code with MPI only

    [forrest@node01 hybrid]$ make hybrid_mpi_only
    pgcc -O -DUSE_MPI -I/usr/local/mpich_pgi/include -c -o hybrid_mpi_only.o hybrid.c
    pgcc -L/usr/local/mpich_pgi/lib -o hybrid_mpi_only hybrid_mpi_only.o -lm -lmpich
    [forrest@node01 hybrid]$ time mpirun -nolocal -np 4 -machinefile machines hybrid_mpi_only
    Process 1 of 4 running on node02
    1: slab 1 being processed
    1: slab 5 being processed
    1: slab 9 being processed
    ...
    1: Calling MPI_Finalize()
    Process 2 of 4 running on node03
    2: slab 2 being processed
    2: slab 6 being processed
    2: slab 10 being processed
    ...
    2: Calling MPI_Finalize()
    Process 3 of 4 running on node04
    3: slab 3 being processed
    3: slab 7 being processed
    3: slab 11 being processed
    ...
    3: Calling MPI_Finalize()
    Process 0 of 4 running on node01
    No more than 20 slabs will assigned to each process
    0: slab 0 being processed
    0: slab 4 being processed
    0: slab 8 being processed
    ...
    Sum is 3.09086e+11
    0: Calling MPI_Finalize()

    real    1m25.025s
    user    0m0.086s
    sys     0m0.109s

Next, the same code is compiled and run without MPI, but with OpenMP enabled. Output Three contains the results from this OpenMP-only run. In this case, all slabs are assigned to the single process, and each slab is processed by one of the two threads running on a single dual-processor node. The same sum is computed, but the time required to complete has risen to 4 minutes and 43 seconds.

Output Three: Building and running the hybrid code with OpenMP only

    [forrest@node01 hybrid]$ make hybrid_omp_only
    pgcc -O -mp -c -o hybrid_omp_only.o hybrid.c
    pgcc -mp -o hybrid_omp_only hybrid_omp_only.o -lm
    [forrest@node01 hybrid]$ time ./hybrid_omp_only
    Process 0 of 1 using OpenMP with 2 threads
    No more than 80 slabs will assigned to each process
    0: slab 0 being processed by thread 0
    0: slab 40 being processed by thread 1
    0: slab 1 being processed by thread 0
    0: slab 41 being processed by thread 1
    ...
    Sum is 3.19123e+11

    real    4m43.153s
    user    9m26.291s
    sys     0m0.000s

Finally, the hybrid code is compiled without either MPI or OpenMP. Output Four contains the results. The slabs are assigned to the single process, and each is processed one at a time. The same answer is generated. A sample run completed in 5 minutes, 37 seconds.

Output Four: Building and running the hybrid code without MPI or OpenMP

    [forrest@node01 hybrid]$ make hybrid_serial
    pgcc -O -c -o hybrid_serial.o hybrid.c
    pgcc -o hybrid_serial hybrid_serial.o -lm
    [forrest@node01 hybrid]$ time ./hybrid_serial
    Process 0 of 1
    No more than 80 slabs will assigned to each process
    0: slab 0 being processed
    0: slab 1 being processed
    0: slab 2 being processed
    ...
    0: slab 79 being processed
    Sum is 3.09086e+11

    real    5m37.036s
    user    5m37.025s
    sys     0m0.000s

Hybrids Are Go

As you can see from these results, adding OpenMP parallelism to the MPI parallel code reduced total run time. With only a little additional design effort, it's fairly easy to write a hybrid MPI/OpenMP code. Moreover, with careful attention to coding, it's often possible to develop your code so that it can be run in any combination. Go ahead and give it a try on your own cluster.
http://www.linux-mag.com/id/1631
Is there a way to ignore the NaN and do the linear regression on remaining values?

    values = ([0, 2, 1, float('nan'), 6], [4, 4, 7, 6, 7], [9, 7, 8, 9, 10])
    time = [0, 1, 2, 3, 4]

    slope_1 = stats.linregress(time, values[1])  # This works
    slope_0 = stats.linregress(time, values[0])  # This doesn't work

Yes, you can do this using statsmodels:

    import statsmodels.api as sm
    from numpy import NaN

    x = [0, 2, NaN, 4, 5, 6, 7, 8]
    y = [1, 3, 4, 5, 6, 7, 8, 9]

    model = sm.OLS(y, x, missing='drop')
    results = model.fit()

    In [2]: results.params
    Out[2]: array([ 1.16494845])
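If you prefer to stay with scipy.stats.linregress, you can simply drop the NaN pairs before fitting. The sketch below does the filtering in plain Python and computes the least-squares slope in closed form, which is what linregress would return on the filtered data (the helper names drop_nan_pairs and slope are illustrative):

```python
def drop_nan_pairs(xs, ys):
    """Keep only the (x, y) pairs where y is not NaN (NaN != NaN)."""
    pairs = [(x, y) for x, y in zip(xs, ys) if y == y]
    return [p[0] for p in pairs], [p[1] for p in pairs]

def slope(xs, ys):
    """Closed-form least-squares slope: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

time = [0, 1, 2, 3, 4]
values = [0, 2, 1, float('nan'), 6]

xs, ys = drop_nan_pairs(time, values)
assert xs == [0, 1, 2, 4]          # the NaN pair at t=3 was dropped
assert abs(slope(xs, ys) - 1.4) < 1e-12
```

With numpy the same filtering is one line, `mask = ~np.isnan(vals)`, followed by `stats.linregress(time[mask], vals[mask])`.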
https://www.edureka.co/community/47281/ignore-the-nan-and-the-linear-regression-on-remaining-values?show=47284
import "upspin.io/user"

Package user provides tools for parsing and validating user names.

Clean returns the user name in canonical form as described by the comments for the Parse function.

Parse splits an upspin.UserName into user and domain and returns the pair. It also returns the "+" suffix part of the user name, if it has one. For example, given the user name

    ann+backup@example.com

it would return the strings

    "ann+backup" "backup" "example.com"

Parse validates the name as an e-mail address and lower-cases the domain so it is canonical. The rules are:

    <name> := <user name>@<domain name>

    <domain name> :=
    - each . separated token < 64 characters
    - character set for tokens [a-z0-9\-]
    - final token at least two characters
    - whole name < 254 characters
    - characters are case insensitive
    - final period is OK, but we remove it

We ignore the rules of punycode.

    <user name> :=

Names are validated and canonicalized by the UsernameCasePreserved profile of RFC 7613, "Preparation, Enforcement, and Comparison of Internationalized Strings", also known as PRECIS. Further restrictions are added here. The only ASCII punctuation characters that are legal are "!#$%&'*+-/=?^_{|}~", and a name that is only ASCII punctuation is rejected. As a special case for use in Access and Group files, the name "*" is allowed. Case is significant and spaces are not allowed.

The username suffix is tightly constrained: it uses the same character set as domains, but of course the spacing of periods is irrelevant. Facebook and Google constrain usernames to [a-zA-Z0-9+-.], ignoring the period and, in Google only, ignoring everything from a plus sign onwards. We accept a superset of this but do not follow the "ignore" rules.

ParseDomain parses the component of a user name after the '@', that is, the domain component of an email address. The rules are defined in the documentation for Parse except the domain name itself must be less than 255 bytes long.
ParseUser parses the component of a user name before the '@', that is, the user component of an email address. The rules are defined in the documentation for Parse except that "*" is not a valid user and the user name itself must be less than 255 bytes long.

Package user imports 4 packages and is imported by 17 packages. Updated 2019-02-15.
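The splitting and canonicalization that Parse performs can be sketched outside of Go as well. This illustrative Python function mirrors the documented behavior for the simple ASCII case only; it is not the real implementation and skips the PRECIS profile, character-set, and length checks:

```python
def parse(user_name):
    """Split 'user@domain' into (user, suffix, domain), lower-casing the
    domain and stripping a trailing period, per the documented rules."""
    user, sep, domain = user_name.partition("@")
    if not sep or not user or not domain:
        raise ValueError("user name must have the form user@domain")
    domain = domain.lower().rstrip(".")   # domains are case insensitive
    # The "+" suffix is the part of the user component after the first '+'.
    _, plus, suffix = user.partition("+")
    return user, suffix if plus else "", domain

# Mirrors the example from the documentation:
assert parse("ann+backup@example.com") == ("ann+backup", "backup", "example.com")
# Domain is lower-cased and a final period is removed:
assert parse("ann@Example.COM.") == ("ann", "", "example.com")
```

Note that the user component keeps its case (UsernameCasePreserved), while only the domain is canonicalized.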
https://godoc.org/upspin.io/user
astropy.io.fits FAQ

Contents

- astropy.io.fits FAQ
- General Questions
- Usage Questions
- Something didn't work as I expected. Did I do something wrong?
- Astropy crashed and output a long string of code. What do I do?
- Why does opening a file work in CFITSIO, ds9, etc. but not in Astropy?
- How do I turn off the warning messages Astropy keeps outputting to my console?
- What convention does Astropy use for indexing, such as of image coordinates?
- How do I open a very large image that won't fit in memory?
- How can I create a very large FITS file from scratch?
- How do I create a multi-extension FITS file from scratch?
- Why is an image containing integer data being converted unexpectedly to floats?
- Why am I losing precision when I assign floating point values in the header?
- Why is reading rows out of a FITS table so slow?
- I'm opening many FITS files in a loop and getting OSError: Too many open files
- Comparison with Other FITS Readers

General Questions

What is PyFITS and how does it relate to Astropy?

PyFITS is a library written in, and for use with, the Python programming language for reading, writing, and manipulating FITS formatted files. It includes a high-level interface to FITS headers, with the ability for high- and low-level manipulation of headers, and it supports reading image and table data as Numpy arrays. It also supports more obscure and non-standard formats found in some FITS files.

The astropy.io.fits module is identical to PyFITS but with the names changed. When development began on Astropy it was clear that one of the core requirements would be a FITS reader. Rather than starting from scratch, PyFITS, being the most flexible FITS reader available for Python, was ported into Astropy. There are plans to gradually phase out PyFITS as a stand-alone module and deprecate it in favor of astropy.io.fits. See more about that in the next question.
Although PyFITS is written mostly in Python, it includes an optional module written in C that's required to read/write compressed image data. However, the rest of PyFITS functions without this extension module.

What is the development status of PyFITS?

PyFITS was written and maintained by the Science Software Branch at the Space Telescope Science Institute, and is licensed by AURA under a 3-clause BSD license (see LICENSE.txt in the PyFITS source code). It is now exclusively developed as a component of Astropy (astropy.io.fits) rather than as a stand-alone module. There are a few reasons for this. The first is simply to reduce development effort; the overhead of maintaining both PyFITS and astropy.io.fits in separate code bases is non-trivial. The second is that there are many features of Astropy (units, tables, etc.) from which the astropy.io.fits module can benefit greatly. Since PyFITS is already integrated into Astropy, it makes more sense to continue development there rather than make Astropy a dependency of PyFITS.

PyFITS' past primary developer and active maintainer was Erik Bray. There is a GitHub project for PyFITS, but PyFITS is not actively developed anymore, so patches and issue reports should be posted on the Astropy issue tracker. There is also a legacy Trac site with some older issue reports still open, but new issues should be submitted via GitHub if possible. The current (and last) stable release is 3.4.0.

Usage Questions

Something didn't work as I expected. Did I do something wrong?

Possibly. But if you followed the documentation and things still did not work as expected, it is entirely possible that there is a mistake in the documentation, a bug in the code, or both. So feel free to report it as a bug. There are also many, many corner cases in FITS files, with new ones discovered almost every week. astropy.io.fits is always improving, but does not support all cases perfectly.
There are some features of the FITS format (scaled data, for example) that are difficult to support correctly and can sometimes cause unexpected behavior. For the most common cases, however, such as reading and updating FITS headers, images, and tables, astropy.io.fits is very stable and well-tested. Before every Astropy release it is ensured that all its tests pass on a variety of platforms, and those tests cover the majority of use-cases (until new corner cases are discovered).

Astropy crashed and output a long string of code. What do I do?¶

This listing of code is what is known as a stack trace (or in Python parlance a "traceback"). When an unhandled exception occurs in the code, causing the program to end, this is a way of displaying where the exception occurred and the path through the code that led to it.

As Astropy is meant to be used as a piece in other software projects, some exceptions raised by Astropy are by design. For example, one of the most common exceptions is a KeyError when an attempt is made to read the value of a non-existent keyword in a header:

>>> from astropy.io import fits
>>> h = fits.Header()
>>> h['NAXIS']
Traceback (most recent call last):
...
KeyError: "Keyword 'NAXIS' not found."

This indicates that something was looking for a keyword called "NAXIS" that does not exist. If an error like this occurs in some other software that uses Astropy, it may indicate a bug in that software, in that it expected to find a keyword that didn't exist in a file.

Most "expected" exceptions will output a message at the end of the traceback giving some idea of why the exception occurred and what to do about it. The more vague and mysterious the error message in an exception appears, the more likely that it was caused by a bug in Astropy. So if you're getting an exception and you really don't know why or what to do about it, feel free to report it as a bug.

Why does opening a file work in CFITSIO, ds9, etc.
but not in Astropy?¶

As mentioned elsewhere in this FAQ, there are many unusual corner cases when dealing with FITS files. It's possible that a file should work, but isn't supported due to a bug. Sometimes it's even possible for a file to work in an older version of Astropy, but not in a newer version, due to a regression that isn't tested for yet.

Another problem with the FITS format is that, as old as it is, there are many conventions that appear in files from certain sources that do not meet the FITS standard. And yet they are so commonplace that it is necessary to support them in any FITS reader. CONTINUE cards are one such example. There are non-standard conventions supported by Astropy that are not supported by CFITSIO, and possibly vice versa. You may have hit one of those cases.

If Astropy is having trouble opening a file, a good way to rule out whether or not the problem is with Astropy is to run the file through the fitsverify program. For smaller files you can even use the online FITS verifier. These use CFITSIO under the hood, and should give a good indication of whether or not there is something erroneous about the file. If the file is malformed, fitsverify will output errors and warnings. If fitsverify confirms no problems with a file, and Astropy is still having trouble opening it (especially if it produces a traceback), then it's possible there is a bug in Astropy.

How do I turn off the warning messages Astropy keeps outputting to my console?¶

Astropy uses Python's built-in warnings subsystem for informing about exceptional conditions in the code that are recoverable, but that the user may want to be informed of. One of the most common warnings in astropy.io.fits occurs when updating a header value in such a way that the comment must be truncated to preserve space:

Card is too long, comment is truncated.

Any console output generated by Astropy can be assumed to be from the warnings subsystem.
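Such warnings can be quieted with the standard warnings module. A short sketch, assuming you want to silence every Astropy warning at once via the AstropyWarning base class (a narrower warning category can be substituted):

```python
import warnings
from astropy.utils.exceptions import AstropyWarning

# Ignore every warning that derives from AstropyWarning
warnings.simplefilter('ignore', category=AstropyWarning)
```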
See Astropy's documentation on the Python warnings system for more information on how to control and quiet warnings.

What convention does Astropy use for indexing, such as of image coordinates?¶

All arrays and sequences in Astropy use a zero-based indexing scheme. For example, the first keyword in a header is header[0], not header[1]. This is in accordance with Python itself, as well as C, on which Python is based. This may come as a surprise to veteran FITS users coming from IRAF, where 1-based indexing is typically used, due to its origins in FORTRAN.

Likewise, the top-left pixel in an N x N array is data[0,0]. The indices for 2-dimensional arrays are in row-major order: the first index is the row number and the second index is the column number. Or, put in terms of axes, the first axis is the y-axis and the second axis is the x-axis. This is the opposite of column-major order, which is used by FORTRAN and hence FITS. For example, the second index refers to the axis specified by NAXIS1 in the FITS header.

In general, for N-dimensional arrays, row-major order means that the right-most axis is the one that varies the fastest while moving over the array data linearly. For example, the 3-dimensional array:

[[[1, 2], [3, 4]], [[5, 6], [7, 8]]]

is represented linearly in row-major order as:

[1, 2, 3, 4, 5, 6, 7, 8]

Since 2 immediately follows 1, you can see that the right-most (or inner-most) axis is the one that varies the fastest.

The discrepancy in axis ordering may take some getting used to, but it is a necessary evil. Since most other Python and C software assumes row-major ordering, trying to enforce column-major ordering in arrays returned by Astropy is likely to cause more difficulties than it's worth.

How do I open a very large image that won't fit in memory?¶

astropy.io.fits.open has an option to access the data portion of an HDU by memory mapping using mmap. In Astropy this is used by default.
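For example (a self-contained sketch; a small file stands in for a genuinely large one, and the filename is made up):

```python
import numpy as np
from astropy.io import fits

# A small demo file standing in for a very large image
fits.PrimaryHDU(np.zeros((300, 300))).writeto('demo_image.fits', overwrite=True)

hdul = fits.open('demo_image.fits')   # memmap=True is the default
rows = hdul[0].data[100:200]          # only these rows are read into memory
print(rows.shape)                     # (100, 300)
hdul.close()
```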
What this means is that accessing the data as in the example above only reads portions of the data into memory on demand. For example, if I request just a slice of the image, such as hdul[0].data[100:200], then just rows 100-200 will be read into memory. This happens transparently, as though the entire image were already in memory. This works the same way for tables. For most cases this is your best bet for working with large files.

To ensure use of memory mapping, just add the memmap=True argument to fits.open. Likewise, using memmap=False will force the data to be read entirely into memory. The default can also be controlled through a configuration option called USE_MEMMAP. Setting this to 0 will disable mmap by default.

Unfortunately, memory mapping does not currently work as well with scaled image data, where BSCALE and BZERO factors need to be applied to the data to yield physical values. Currently this requires enough memory to hold the entire array, though this is an area that will see improvement in the future.

An alternative, which currently only works for image data (that is, not tables), is the sections interface. It has largely been replaced by the better support for mmap, but may still be useful on systems with more limited virtual-memory space, such as 32-bit systems. Support for scaled image data is flaky with sections too, though that will be fixed. See the documentation on image sections for more details on using this interface.

How can I create a very large FITS file from scratch?¶

See Create a very large FITS file from scratch. This method may also be used for creating very large tables, though it can be difficult to determine ahead of time how many rows a table will need.

In general, use of the astropy.io.fits module is currently discouraged for the creation and manipulation of large tables. The FITS format itself is not designed for efficient on-disk or in-memory manipulation of table structures.
For large, heavy-duty table data it might be better to look into using HDF5 through the PyTables library. The Astropy Table interface can also provide an abstraction layer between different on-disk table formats (for example, for converting a table between FITS and HDF5). PyTables makes use of Numpy under the hood, and can be used to write binary table data to disk in the same format required by FITS. It is then possible to serialize your table to the FITS format for distribution. At some point this FAQ might provide an example of how to do this.

How do I create a multi-extension FITS file from scratch?¶

See Create a multi-extension FITS (MEF) file from scratch.

Why is an image containing integer data being converted unexpectedly to floats?¶

If the header for your image contains non-trivial values for the optional BSCALE and/or BZERO keywords (that is, BSCALE != 1 and/or BZERO != 0), then the raw data in the file must be rescaled to its physical values according to the formula:

physical_value = BZERO + BSCALE * array_value

As BZERO and BSCALE are floating point values, the resulting value must be a float as well. If the original values were 16-bit integers, the resulting values are single-precision (32-bit) floats. If the original values were 32-bit integers, the resulting values are double-precision (64-bit) floats.

This automatic scaling can easily catch you off guard if you're not expecting it, because it doesn't happen until the data portion of the HDU is accessed (to allow things like updating the header without rescaling the data).
For example:

>>> fits_scaledimage_filename = fits.util.get_testdata_filepath('scale.fits')
>>> hdul = fits.open(fits_scaledimage_filename)
>>> image = hdul[0]
>>> image.header['BITPIX']
16
>>> image.header['BSCALE']
0.045777764213996
>>> data = image.data  # Read the data into memory
>>> data.dtype.name  # Got float32 despite BITPIX = 16 (16-bit int)
'float32'
>>> image.header['BITPIX']  # The BITPIX will automatically update too
-32
>>> 'BSCALE' in image.header  # And the BSCALE keyword removed
False

The reason for this is that once a user accesses the data they may also manipulate it and perform calculations on it. If the data were forced to remain as integers, a great deal of precision is lost. So it is best to err on the side of not losing data, at the cost of causing some confusion at first.

If the data must be returned to integers before saving, use the scale method:

>>> image.scale('int32')
>>> image.header['BITPIX']
32
>>> hdul.close()

Alternatively, if a file is opened with mode='update' along with the scale_back=True argument, the original BSCALE and BZERO scaling will be automatically re-applied to the data before saving. Usually this is not desirable, especially when converting from floating point back to unsigned integer values. But this may be useful in cases where the raw data needs to be modified corresponding to changes in the physical values.

To prevent rescaling from occurring at all (good for updating headers; even if you don't intend for the code to access the data, it's good to err on the side of caution here), use the do_not_scale_image_data argument when opening the file:

>>> hdul = fits.open(fits_scaledimage_filename, do_not_scale_image_data=True)
>>> image = hdul[0]
>>> image.data.dtype.name
'int16'
>>> hdul.close()

Why am I losing precision when I assign floating point values in the header?¶

The FITS standard allows two formats for storing floating-point numbers in a header value.
The "fixed" format requires the ASCII representation of the number to be in bytes 11 through 30 of the header card, and to be right-justified. This leaves a standard number of characters for any comment string. The fixed format is not wide enough to represent the full range of values that can be stored in a 64-bit float with full precision. So FITS also supports a "free" format in which the ASCII representation can be stored anywhere, using the full 70 bytes of the card (after the keyword).

Currently Astropy only supports writing the fixed format (it can read both formats), so all floating point values assigned to a header are stored in the fixed format. There are plans to add support for more flexible formatting. In the meantime it is possible to add or update cards by manually formatting the card image from a string, as it should appear in the FITS file:

>>> c = fits.Card.fromstring('FOO = 1234567890.123456789')
>>> h = fits.Header()
>>> h.append(c)
>>> h
FOO = 1234567890.123456789

As long as you don't assign new values to 'FOO' via h['FOO'] = 123, Astropy will maintain the header value exactly as you formatted it (as long as it is valid according to the FITS standard).

Why is reading rows out of a FITS table so slow?¶

Underlying every table data array returned by astropy.io.fits is a Numpy recarray, which is a Numpy array type specifically for representing structured array data (i.e. a table). As with normal image arrays, Astropy accesses the underlying binary data from the FITS file via mmap (see the question "What performance differences are there between astropy.io.fits and fitsio?" for a deeper explanation of this). The underlying mmap is then exposed as a recarray, and in general this is a very efficient way to read the data.

However, for many (if not most) FITS tables it isn't all that simple. For many columns there are conversions that have to take place between the actual data that's "on disk" (in the FITS file) and the data values that are returned to the user.
For example, FITS binary tables represent boolean values differently from how Numpy expects them to be represented, so "Logical" columns need to be converted on the fly to a format Numpy (and hence the user) can understand. This issue also applies to data that is linearly scaled via the TSCALn and TZEROn header keywords.

Supporting all of these "FITS-isms" introduces a lot of overhead that might not be necessary for all tables, but they are still common nonetheless. That's not to say it can't be faster even while supporting the peculiarities of FITS; CFITSIO, for example, supports all the same features but is orders of magnitude faster. Astropy could do much better here too, and there are many known issues causing slowdown. There are plenty of opportunities for speedups, and patches are welcome. In the meantime, for high-performance applications with FITS tables some users might find the fitsio library more to their liking.

I'm opening many FITS files in a loop and getting OSError: Too many open files¶

Say you have some code like:

from astropy.io import fits
for filename in filenames:
    with fits.open(filename) as hdul:
        for hdu in hdul:
            hdu_data = hdu.data
            # Do some stuff with the data

The details may differ, but the qualitative point is that the data of many HDUs and/or FITS files are being accessed in a loop. This may result in an exception like:

Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
OSError: [Errno 24] Too many open files: 'my_data.fits'

As explained in the note on working with large files, because Astropy uses mmap by default to read the data in a FITS file, even if you correctly close a file with HDUList.close a handle is kept open to that file so that the memory-mapped data array can continue to be read transparently. The way Numpy supports mmap is such that the file mapping is not closed until the overlying ndarray object has no references to it and is freed from memory.
However, when looping over a large number of files (or even just HDUs) rapidly, this may not happen immediately. Or in some cases, if the HDU object persists, the data array attached to it may persist too. The easiest workaround is to manually delete the .data attribute on the HDU object so that the ndarray reference is freed and the mmap can be closed:

from astropy.io import fits
for filename in filenames:
    with fits.open(filename) as hdul:
        for hdu in hdul:
            hdu_data = hdu.data
            # Do some stuff with the data
            # ...
            # Don't need the data anymore; delete all references to it
            # so that it can be garbage collected
            del hdu_data
            del hdu.data

In some extreme cases files are opened and closed fast enough that Python's garbage collector does not free them (and hence free the file handles) often enough. To mitigate this, your code can manually force a garbage collection by calling gc.collect() at the end of the loop. In a future release it will be easier to automatically perform this sort of cleanup when closing FITS files, where needed.

Comparison with Other FITS Readers¶

What is the difference between astropy.io.fits and fitsio?¶

The astropy.io.fits module (originally PyFITS) is a "pure Python" FITS reader in that all the code for parsing the FITS file format is in Python, though Numpy is used to provide access to the FITS data via the ndarray interface. astropy.io.fits currently also uses the CFITSIO library to support the FITS Tile Compression convention, but this feature is optional. It does not use CFITSIO outside of reading compressed images.

fitsio, on the other hand, is a Python wrapper for the CFITSIO library. All the heavy lifting of reading the FITS format is handled by CFITSIO, while fitsio provides an easier to use object-oriented API, including a Numpy interface to FITS files read from CFITSIO. Much of it is written in C (to provide the interface between Python and CFITSIO), and the rest is in Python.
The Python end mostly provides the documentation and user-level API. Because fitsio wraps CFITSIO it inherits most of its strengths and weaknesses, though it has the added strength of providing an easier to use API than if one were to use CFITSIO directly.

Why did Astropy adopt PyFITS as its FITS reader instead of fitsio?¶

When the Astropy Project was first started it was clear from the start that one of its core components should be a submodule for reading and writing FITS files, as many other components would be likely to depend on this functionality. At the time, the fitsio package was in its infancy (it goes back to roughly 2011), while PyFITS had already been established (going back to before the year 2000). It was already a mature package with support for the vast majority of FITS files found in the wild, including outdated formats such as "Random Groups" FITS files still used extensively in the radio astronomy community.

Although many aspects of PyFITS' interface have evolved over the years, much of it has also remained the same, and is already familiar to astronomers working with FITS files in Python. Most if not all existing training materials were also based around PyFITS.

PyFITS was developed at STScI, which also put forward significant resources to develop Astropy, with an eye toward integrating Astropy into STScI's own software stacks. As most of the Python software at STScI uses PyFITS, it was the only practical choice for making that transition.

Finally, although CFITSIO (and by extension fitsio) can read any FITS files that conform to the FITS standard, it does not support all of the non-standard conventions that have been added to FITS files in the wild. While it does have some support for some of these conventions (such as CONTINUE cards and, to a limited extent, HIERARCH cards), it is not easy to add support for other conventions in a large and complex C codebase.
PyFITS' object-oriented design makes supporting non-standard conventions somewhat easier in most cases, and as such PyFITS can be more flexible in the types of FITS files it can read and return useful data from. This includes better support for files that fail to meet the FITS standard, but still contain useful data that should be readable at least well enough to correct any violations of the FITS standard. For example, a common error in non-English-speaking regions is to insert non-ASCII characters into FITS headers. This is not a valid FITS file, but it should still be readable in some sense. Supporting structural errors such as this is more difficult in CFITSIO, which assumes a more rigid structure.

What performance differences are there between astropy.io.fits and fitsio?¶

There are two main performance areas to look at: reading/parsing FITS headers and reading FITS data (image-like arrays as well as tables).

In the area of headers, fitsio is significantly faster in most cases. This is due in large part to the (almost) pure C implementation (due to the use of CFITSIO), but also due to the fact that it is more rigid and does not support as many local conventions and other special cases as astropy.io.fits tries to support in its pure Python implementation. That said, the difference is small, and only likely to be a bottleneck either when opening files containing thousands of HDUs, or reading the headers out of thousands of FITS files in succession (in either case the difference is not even an order of magnitude).

Where data is concerned the situation is a little more complicated, and requires some understanding of how astropy.io.fits is implemented versus CFITSIO and fitsio. First it's important to understand how they differ in terms of memory management. astropy.io.fits uses mmap, by default, to provide access to the raw binary data in FITS files.
Mmap is a system call (or in most cases these days a wrapper in your libc for a lower-level system call) which allows user-space applications to essentially do the same thing your OS is doing when it uses a pagefile (swap space) for virtual memory: it allows data in a file on disk to be paged into physical memory one page (or in practice usually several pages) at a time, on an as-needed basis. These cached pages of the file are also accessible from all processes on the system, so multiple processes can read from the same file with little additional overhead.

In the case of reading over all the data in the file, the performance difference between using mmap and reading the entire data into physical memory at once can vary widely between systems and hardware, and depending on what else is happening on the system at the moment, but mmap is almost always going to be better. In principle it requires more overhead, since accessing each page will result in a page fault and the system requires more requests to the disk. But in practice the OS will optimize this pretty aggressively, especially for the most common case of sequential access; in reality, reading the entire thing into memory is still going to result in a whole lot of page faults too.

For random access, having all the data in physical memory is always going to be best, though with mmap it's usually going to be pretty good too (one doesn't normally access all the data in a file in totally random order; usually a few sections of it will be accessed most frequently, and the OS will keep those pages in physical memory as best it can). So for the most general case of reading FITS files (or most large data on disk) this is the best choice, especially for casual users, and it is hence enabled by default.

CFITSIO/fitsio, on the other hand, doesn't assume the existence of technologies like mmap and page caching.
Thus it implements its own LRU cache of I/O buffers that store sections of FITS files read from disk in memory, in FITS' famous 2880 byte chunk size. The I/O buffers are used heavily, in particular, for keeping the headers in memory. For large data reads (for example, reading an entire image from a file) it does bypass the cache and instead reads directly from disk into a user-provided memory buffer.

However, even when CFITSIO reads directly from the file, this is still generally less efficient than using mmap. Normally when your OS reads a file from disk, it caches as much of that read as it can in physical memory (in its page cache) so that subsequent access to those same pages does not require another expensive disk read. This happens when using mmap too, since the data has to be copied from disk into RAM at some point. The difference is that when using mmap to access the data, the program is able to read that data directly out of the OS's page cache (so long as it's only being read). On the other hand, when reading data from a file into a local buffer such as with fread(), the data is first read into the page cache (if not already present) and then copied from the page cache into the local buffer. So every read performs at least one additional memory copy per page read (requiring twice as much physical memory, and possibly lots of paging if the file is large and pages need to be dropped from the cache).

The user API for CFITSIO usually works by having the user allocate a memory buffer large enough to hold the image/table they want to read (or at least the section they're interested in). There are some helper functions for determining the appropriate amount of space to allocate. Then you just pass it a pointer to your buffer and CFITSIO handles all the reading (usually using the process described above), and copies the results into your user buffer. For large reads it reads directly from the file into your buffer.
Though if the data needs to be scaled, it makes a stop in CFITSIO's own buffer first, then writes the rescaled values out to the user buffer (if rescaling has been requested). Regardless, this means that if your program wishes to hold an entire image in memory at once, it will use as much RAM as the size of the data. For most applications it's better (and sufficient) to write it to work on smaller sections of the data, but this requires extra complexity. Using mmap, on the other hand, makes managing this complexity simpler and more efficient.

A very simple and informal test demonstrates this difference. This test was performed on four simple FITS images (one of which is a cube) of dimensions 256x256, 1024x1024, 4096x4096, and 256x1024x1024. Each image was generated before the test and filled with randomized 64-bit floating point values. A similar test was performed using both astropy.io.fits and fitsio: a handle to the FITS file is opened using each library's basic semantics, and then the entire data array of the file is copied into a temporary array in memory (for example, as if we were blitting the image to a video buffer). For Astropy the test is written:

def read_test_astropy(filename):
    with fits.open(filename, memmap=True) as hdul:
        data = hdul[0].data
        c = data.copy()

The test was timed in IPython on a Linux system with kernel version 2.6.32, a 6-core Intel Xeon X5650 CPU clocked at 2.67 GHz per core, and 11.6 GB of RAM, using:

for filename in filenames:
    print(filename)
    %timeit read_test_astropy(filename)

where filenames is just a list of the aforementioned generated sample files.
The results were:

256x256.fits
1000 loops, best of 3: 1.28 ms per loop
1024x1024.fits
100 loops, best of 3: 4.24 ms per loop
4096x4096.fits
10 loops, best of 3: 60.6 ms per loop
256x1024x1024.fits
1 loops, best of 3: 1.15 s per loop

For fitsio the test was:

def read_test_fitsio(filename):
    with fitsio.FITS(filename) as f:
        data = f[0].read()
        c = data.copy()

This was also run in a loop over all the sample files, producing the results:

256x256.fits
1000 loops, best of 3: 476 µs per loop
1024x1024.fits
100 loops, best of 3: 12.2 ms per loop
4096x4096.fits
10 loops, best of 3: 136 ms per loop
256x1024x1024.fits
1 loops, best of 3: 3.65 s per loop

It should be made clear that the sample files were rewritten with new random data between the Astropy test and the fitsio test, so they were not reading the same data from the OS's page cache. Fitsio was much faster on the small (256x256) image because in that case the time is dominated by parsing the headers. As already explained, this is much faster in CFITSIO. However, as the data size goes up and the header parsing no longer dominates the time, astropy.io.fits using mmap is roughly twice as fast. This discrepancy would be almost entirely due to it requiring roughly half as many in-memory copies to read the data, as explained earlier. That said, more extensive benchmarking could be very interesting. This is also not to say that astropy.io.fits does better in all cases. There are some cases where it is currently blown away by fitsio. See the subsequent question.

Why is fitsio so much faster than Astropy at reading tables?¶

In many cases it isn't; there is either no difference, or it may even be a little faster in Astropy, depending on what you're trying to do with the table and what types of columns, or how many columns, the table has.
There are some cases, however, where fitsio can be radically faster, mostly for reasons explained above in “Why is reading rows out of a FITS table so slow?” In principle a table is no different from, say, an array of pixels. But instead of pixels each element of the array is some kind of record structure (for example two floats, a boolean, and a 20 character string field). Just as a 64-bit float is an 8 byte record in an array, a row in such a table can be thought of as a 37 byte (in the case of the previous example) record in a 1-D array of rows. So in principle everything that was explained in the answer to the question “What performance differences are there between astropy.io.fits and fitsio?” applies just as well to tables as it does to any other array. However, FITS tables have many additional complexities that sometimes preclude streaming the data directly from disk, and instead require transformation from the on-disk FITS format to a format more immediately useful to the user. A common example is how FITS represents boolean values in binary tables. Another, significantly more complicated example, is variable length arrays. As explained in “Why is reading rows out of a FITS table so slow?”, astropy.io.fits does not currently handle some of these cases as efficiently as it could, in particular in cases where a user only wishes to read a few rows out of a table. Fitsio, on the other hand, has a better interface for copying one row at a time out of a table and performing the necessary transformations on that row only, rather than on the entire column or columns that the row is taken from. As such, for many cases fitsio gets much better performance and should be preferred for many performance-critical table operations. Fitsio also exposes a microlanguage (implemented in CFITSIO) for making efficient SQL-like queries of tables (single tables only though–no joins or anything like that). 
This format, described in the CFITSIO documentation, can in some cases perform more efficient selections of rows than might be possible with Numpy alone, which requires creating an intermediate mask array in order to perform row selection.
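For comparison, a sketch of the Numpy-side row selection that the last sentence refers to, using a small structured array in place of a real FITS table (the column names are made up):

```python
import numpy as np

# A structured array standing in for table data already in memory
table = np.array([(1, 0.5), (2, 1.5), (3, 2.5)],
                 dtype=[('id', '<i4'), ('x', '<f8')])

# Selecting rows requires building an intermediate boolean mask array
mask = table['x'] > 1.0
selected = table[mask]
print(selected['id'])    # [2 3]
```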
http://docs.astropy.org/en/stable/io/fits/appendix/faq.html
On 06/15/2012 11:14 AM, Luiz Capitulino wrote: On Fri, 15 Jun 2012 11:04:16 -0400 Corey Bryant <coreyb linux vnet ibm com> wrote:On 06/15/2012 10:32 AM, Luiz Capitulino wrote:On Thu, 14 Jun 2012 11:55:02 -0400 Corey Bryant <coreyb linux vnet ibm com> wrote:This patch adds the pass-fd QMP command using the QAPI framework. Like the getfd command, it is used to pass a file descriptor via SCM_RIGHTS. However, the pass-fd command also returns the received file descriptor, which is a difference in behavior from the getfd command, which returns nothing. The closefd command can be used to close a file descriptor that was passed with the pass-fd command. Note that when using getfd or pass-fd, there are some commands (e.g. migrate with fd:name) that implicitly close the named fd. When this is not the case, closefd must be used to avoid fd leaks. Signed-off-by: Corey Bryant <coreyb linux vnet ibm com> --- v2: -Introduce new QMP command to pass/return fd (lcapitulino redhat com) -Use passfd as command name (berrange redhat com) v3: -Use pass-fd as command name (lcapitulino redhat com) -Fail pass-fd if fdname already exists (lcapitulino redhat com) -Add notes to QMP command describing behavior in more detail (lcapitulino redhat com, eblake redhat com) -Add note about fd leakage (eblake redhat com) monitor.c | 33 +++++++++++++++++++++++++++++++++ qapi-schema.json | 19 +++++++++++++++++++ qmp-commands.hx | 34 ++++++++++++++++++++++++++++++++++ 3 files changed, 86 insertions(+) diff --git a/monitor.c b/monitor.c index 1a7f7e7..6d99368 100644 --- a/monitor.c +++ b/monitor.c @@ -2182,6 +2182,39 @@ static void do_inject_mce(Monitor *mon, const QDict *qdict) } #endif +int64_t qmp_pass_fd(const char *fdname, Error **errp) +{ + mon_fd_t *monfd; + int fd; + + fd = qemu_chr_fe_get_msgfd(cur_mon->chr); + if (fd == -1) { + error_set(errp, QERR_FD_NOT_SUPPLIED); + return -1; + } + + if (qemu_isdigit(fdname[0])) { + error_set(errp, QERR_INVALID_PARAMETER_VALUE, "fdname", + "a name not 
starting with a digit"); + return -1; + } + + QLIST_FOREACH(monfd, &cur_mon->fds, next) { + if (strcmp(monfd->name, fdname) == 0) { + error_set(errp, QERR_INVALID_PARAMETER_VALUE, "fdname", + "a name that does not already exist"); + return -1; + } + }Returning the same error class for two different errors is not a good idea. I think you have two options here. You could return QERR_INVALID_PARAMETER for the "already exists" case or introduce QERR_FD_EXISTS. The later is certainly nicer, but we were trying to avoid having too specific errors...I'm not clear on what the problem is with returning the same error class for two different errors. Could you explain? I don't have a problem changing it if it's an issue.Because an mngt app/user won't know if what has happened was that the fd name is invalid or if the fd name already exists. I agree, it makes sense so the management app can program based on the error class. It just seems like the error classes in general should be able to be used multiple times if it fits, especially if we're trying to avoid having too specific errors.I agree, it makes sense so the management app can program based on the error class. It just seems like the error classes in general should be able to be used multiple times if it fits, especially if we're trying to avoid having too specific errors. Maybe (in the future) something like a sub-class could be used? In other words to allow using the same error class with unique sub-classes.Maybe (in the future) something like a sub-class could be used? In other words to allow using the same error class with unique sub-classes. I'll plan on updating to use a different error class for this. + + monfd = g_malloc0(sizeof(mon_fd_t)); + monfd->name = g_strdup(fdname); + monfd->fd = fd;Maybe you could try to move this to a separate function to share code with qmp_getfd()?Sure, no problem. 
I can do that.+ + QLIST_INSERT_HEAD(&cur_mon->fds, monfd, next); + return fd; +} + void qmp_getfd(const char *fdname, Error **errp) { mon_fd_t *monfd; diff --git a/qapi-schema.json b/qapi-schema.json index 26a6b84..ed99f23 100644 --- a/qapi-schema.json +++ b/qapi-schema.json @@ -1864,6 +1864,25 @@ { 'command': 'netdev_del', 'data': {'id': 'str'} } ## +# @pass-fd: +# +# Pass a file descriptor via SCM rights and assign it a name +# +# @fdname: file descriptor name +# +# Returns: The QEMU file descriptor that was received +# If file descriptor was not received, FdNotSupplied +# If @fdname is not valid, InvalidParameterType +# +# Since: 1.2.0 +# +# Notes: If @fdname already exists, the command will fail. +# The 'closefd' command can be used to explicitly close the +# file descriptor when it is no longer needed. +## +{ 'command': 'pass-fd', 'data': {'fdname': 'str'}, 'returns': 'int' } + +## # @getfd: # # Receive a file descriptor via SCM rights and assign it a name diff --git a/qmp-commands.hx b/qmp-commands.hx index e3cf3c5..c039947 100644 --- a/qmp-commands.hx +++ b/qmp-commands.hx @@ -869,6 +869,40 @@ Example: EQMP { + .name = "pass-fd", + .args_type = "fdname:s", + .params = "pass-fd name", + .help = "pass a file descriptor via SCM rights and assign it a name", + .mhandler.cmd_new = qmp_marshal_input_pass_fd, + }, + +SQMP +pass-fd +------- + +Pass a file descriptor via SCM rights and assign it a name. + +Arguments: + +- "fdname": file descriptor name (json-string) + +Return a json-int with the QEMU file descriptor that was received. + +Example: + +-> { "execute": "pass-fd", "arguments": { "fdname": "fd1" } } +<- { "return": 42 } + +Notes: + +(1) If the name specified by the "fdname" argument already exists, + the command will fail. +(2) The 'closefd' command can be used to explicitly close the file + descriptor when it is no longer needed. + +EQMP + + { .name = "getfd", .args_type = "fdname:s", .params = "getfd name", -- Regards, Corey
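For readers unfamiliar with the mechanism under discussion, here is a small standalone Python sketch (my own illustration, not QEMU code) of how SCM_RIGHTS moves a file descriptor across a UNIX socket - the same kernel facility that getfd/pass-fd rely on:

```python
import array
import os
import socket

# Minimal SCM_RIGHTS demo: pass a file descriptor over a UNIX socketpair.
# The fd arrives as ancillary data; the kernel duplicates it into the
# receiving end, so the received fd number may differ but refers to the
# same open file description.

def send_fd(sock, fd):
    # One byte of ordinary payload plus the fd as ancillary data.
    fds = array.array("i", [fd])
    sock.sendmsg([b"F"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds.tobytes())])

def recv_fd(sock):
    itemsize = array.array("i").itemsize
    msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(itemsize))
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            fds = array.array("i")
            fds.frombytes(data[:itemsize])
            return fds[0]
    raise RuntimeError("no fd received")

if __name__ == "__main__":
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    r, w = os.pipe()
    send_fd(a, r)                 # hand the pipe's read end to the "peer"
    received = recv_fd(b)
    os.write(w, b"hello")
    assert os.read(received, 5) == b"hello"  # the duplicated fd really works
    for fd in (r, w, received):
        os.close(fd)
    a.close()
    b.close()
```

This only demonstrates the transport; QEMU additionally names the received fd so later commands (e.g. migrate with fd:name) can look it up.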
https://www.redhat.com/archives/libvir-list/2012-June/msg00644.html
dns.k8g8.com - this is a dynamic DNS address that will point at your machine's public IP address! Very useful for hosting games, SSH servers, and anything non-http that the Gateway tunneling service doesn't address for you. Let me know if you have any questions! Thanks everyone!

1000 monkeys and 1000 typewriters... Didn't really know (nor still know) what I was doing when I tried to connect my RPi to KubeSail and ended up with some persistent errors that I am trying to get rid of so I can start over. Any idea how to clear these red warning boxes every time I log in?

Permission denied from Games games.xxxxxx.usw1.k8g8.com / Deployment
Invalid clusterConfig from db - no credentials found for cluster: games.xxxxxx.usw1.k8g8.com
Unable to fetch namespaces from games.xxxxxx.usw1.k8g8.com
games.xxxxxx.usw1.k8g8.com might be offline?

@/all New features this week! Dynamic DNS, plain-Dockerfile support in Repo-Builder, lots of improvements and bug-fixes around the site, and lots more coming soon! Also excited to announce the launch of - a customer of ours built on KubeSail's platform. Thanks everyone and have a great week!

Wow Beeper looks amazing and can't believe it's self-hosted on a cluster somewhere. Super cool, thanks for sharing this gem!

Hey @flynmoose ! Thanks for reaching out - I sent you an email about that error - we're trying to nail it down and eliminate it but it's a hard one to recreate! If you go to and click "Settings", you should be able to delete the cluster from the dashboard entirely. Once that's done, you can try re-installing the agent (add-cluster), and it should do the trick.

Emailed you back with some screenshots and a commandline history of my buffoonery. Hope it helps.

--advertise-address, for example. But KubeSail does aim to make this easier by automatically assigning you an address for your cluster, and automatically setting a DNS name for your home-IP address. I hope this answers that question.
There is no ETA for GitLab on the site just yet, but you can install the GitLab runner on your cluster yourself and things should work normally (just won't be on the KubeSail website). That's all a bit complex, so let me know if I misunderstood your question and I'd be happy to help :)

kubesail-agent namespace, it might print that it couldn't find any ingress controller. You may try restarting the agent and seeing if the ingress starts to work.

an Nginx message now - so one step forward! I assume restarting the agent did the trick there...

I'll look into that asap, that's not a good bug :crying_cat_face:

cert-manager pod in that namespace and the logs should say what's happening - I plan on surfacing this in the UI much better in the future)

/resources tab which should show the raw ingress documents more clearly. I'd be happy to do a Zoom call as well if you need a hand getting things sorted out :)

Thanks for keeping at it!
https://gitter.im/KubeSail/community?at=60128d479fa6765ef8e0c21a
Cut vtkDataSet with user-specified implicit function.

#include <vtkCutter.h>

vtkCutter is a filter to cut through data using any subclass of vtkImplicitFunction. That is, a polygonal surface is created corresponding to the implicit function F(x,y,z) = value(s), where you can specify one or more values used to cut with. In VTK, cutting means reducing a cell of dimension N to a cut surface of dimension N-1. For example, a tetrahedron when cut by a plane (i.e., vtkPlane implicit function) will generate triangles. (In comparison, clipping takes a N dimensional cell and creates N dimension primitives.)

vtkCutter is generally used to "slice-through" a dataset, generating a surface that can be visualized. It is also possible to use vtkCutter to do a form of volume rendering. vtkCutter does this by generating multiple cut surfaces (usually planes) which are ordered (and rendered) from back-to-front. The surfaces are set translucent to give a volumetric rendering effect.

Note that data can be cut using either 1) the scalar values associated with the dataset or 2) an implicit function associated with this class. By default, if an implicit function is set it is used to clip the data set, otherwise the dataset scalars are used to perform the clipping.

Definition at line 63 of file vtkCutter.h.

Definition at line 66 of file vtkCutter.h.

Return 1 if this class is the same type of (or a subclass of) the named class. Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h. Reimplemented from vtkPolyDataAlgorithm. Reimplemented in vtkCompositeCutter.

Reimplemented from vtkPolyDataAlgorithm. Reimplemented in vtkCompositeCutter.

Methods invoked by print to print information about the object including superclasses. Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes.
Reimplemented from vtkPolyDataAlgorithm.

Construct with user-specified implicit function; initial value of 0.0; and generating cut scalars turned off.

Set a particular contour value at contour number i. The index i ranges between 0<=i<NumberOfContours. Definition at line 79 of file vtkCutter.h.

Get the ith contour value. Definition at line 84 of file vtkCutter.h.

Get a pointer to an array of contour values. There will be GetNumberOfContours() values in the list. Definition at line 90 of file vtkCutter.h.

Fill a supplied list with contour values. There will be GetNumberOfContours() values in the list. Make sure you allocate enough memory to hold the list. Definition at line 97 of file vtkCutter.h.

Set the number of contours to place into the list. You only really need to use this method to reduce list size. The method SetValue() will automatically increase list size as needed. Definition at line 104 of file vtkCutter.h.

Get the number of contours in the list of contour values. Definition at line 109 of file vtkCutter.h.

Generate numContours equally spaced contour values between specified range. Contour values will include min/max range values. Definition at line 115 of file vtkCutter.h.

Generate numContours equally spaced contour values between specified range. Contour values will include min/max range values. Definition at line 124 of file vtkCutter.h.

Override GetMTime because we delegate to vtkContourValues and refer to vtkImplicitFunction. Reimplemented from vtkObject.

Specify the implicit function to perform the cutting.

If this flag is enabled, then the output scalar values will be interpolated from the implicit function values, and not the input scalar data.

If this is enabled (by default), the output will be triangles; otherwise, the output will be the intersection polygons. WARNING: if the cutting function is not a plane, the output will be 3D polygons, which might be nice to look at but hard to compute with downstream.
Specify a spatial locator for merging points. By default, an instance of vtkMergePoints is used.

Set the sorting order for the generated polydata. There are two possibilities:

- Sort by value = 0 - This is the most efficient sort. For each cell, all contour values are processed. This is the default.
- Sort by cell = 1 - For each contour value, all cells are processed. This order should be used if the extracted polygons must be rendered in a back-to-front or front-to-back order. This is very problem dependent.

For most applications, the default order is fine (and faster). Sort by cell is going to have a problem if the input has 2D and 3D cells. Cell data will be scrambled because with vtkPolyData output, verts and lines have lower cell ids than triangles.

Definition at line 194 of file vtkCutter.h.

Definition at line 195 of file vtkCutter.h.

Return the sorting procedure as a descriptive character string. Definition at line 258 of file vtkCutter.h.

Create default locator. Used to create one when none is specified. The locator is used to merge coincident points. Normally I would put this in a different class, but since this is a temporary fix until we convert this class and contour filter to generate unstructured grid output instead of poly data, I am leaving it here.

Set/get the desired precision for the output types. See the documentation for the vtkAlgorithm::DesiredOutputPrecision enum for an explanation of the available precision settings. Definition at line 235 of file vtkCutter.h.

Definition at line 236 of file vtkCutter.h. Definition at line 238 of file vtkCutter.h. Definition at line 239 of file vtkCutter.h. Definition at line 240 of file vtkCutter.h. Definition at line 241 of file vtkCutter.h. Definition at line 243 of file vtkCutter.h. Definition at line 244 of file vtkCutter.h. Definition at line 245 of file vtkCutter.h. Definition at line 246 of file vtkCutter.h. Definition at line 247 of file vtkCutter.h.
https://vtk.org/doc/nightly/html/classvtkCutter.html
Recently I had to port some C# code to C++. The C# code made limited use of the .NET framework, so most of the porting turned out to be fairly simple. (I only had to pay attention to the difference between value and reference types and to fix small issues with uninitialized data members, where the code relied on the default initialization in C#.) But part of the code was also using some of the features of C# 3.0: implicit types, lambda expressions, iterator blocks and LINQ. Some of this code is not difficult to port to C++, if we target the new C++11 standard. For example, C# ‘var’ maps to C++ ‘auto’, and C# lambda expressions can be translated into C++ lambdas. But there is nothing in C++ that matches the elegance of IEnumerable<T> sequences and LINQ, so for the problem at hand I just ended up rewriting the LINQ-based code using STL containers and algorithms. That worked fine, but it made me wonder: is there a way to implement C# iterators in C++, and, consequently, to have something similar to LINQ-to-Objects in unmanaged code? The answer is… kind of. In this short series of posts I’ll try to present a possible solution, based on co-routines implemented with Win32 Fibers.

Caveats

A few caveats: this is really just an experiment, a proof of concept, a toy project. The best way to deal with algorithms and containers in C++ is certainly to use the STL. I found that using Fibers has many drawbacks in Windows and makes it very difficult to write correct code. Also, the absence of a garbage collector makes things much trickier, as we will see. But still, I think that emulating the behavior of C# iterators in C++ is a very interesting exercise, and a great way to learn how LINQ actually works. In many ways it also made me appreciate even more the simplicity and elegance of C#. It’s not the destination, but the journey.

C# iterator blocks

Iterators are objects that allow you to traverse a sequence of elements.
Iterators in .NET are the basis of the foreach statement and of the LINQ-to-Objects framework. A very good introduction to iterators, iterator blocks and data pipelines can be found here, or in Jon Skeet’s “C# In Depth”. To summarize, the iterator pattern in .NET is encapsulated in the IEnumerable/IEnumerator interfaces, which are defined as follows (the generic type parameters were lost in the original formatting and are restored here):

public interface IEnumerable<T> : IEnumerable
{
    IEnumerator<T> GetEnumerator();
}

public interface IEnumerator<T> : IEnumerator, IDisposable
{
    void Reset();
    bool MoveNext();
    T Current { get; }
    void Dispose();
}

An object that wants to enumerate a sequence of data will implement the IEnumerable interface. Data consumers ask the IEnumerable for an IEnumerator by calling GetEnumerator(), and then iterate over the sequence calling MoveNext and getting the Current property. MoveNext returns false when there is no more data to return. A foreach statement is transformed by the C# compiler into code like this:

IEnumerator<int> it = source.GetEnumerator();
while (it.MoveNext())
{
    int value = it.Current;
    // use ‘value’
}

Note that the Reset() method is never called.

C# 2.0 added support for iterator blocks, with the yield keyword. An iterator block is a function that contains the yield keyword and returns an IEnumerable interface. For example, this function returns the sequence of the first 10 Fibonacci numbers:

class Test
{
    public static IEnumerable<long> Fibonacci()
    {
        long a = 0;
        long b = 1;
        yield return a;
        yield return b;
        for (int i = 2; i < 10; i++)
        {
            long tmp = a + b;
            a = b;
            b = tmp;
            yield return b;
        }
    }
}

Coroutines

It is interesting to understand how the function returns the sequence of numbers. A function like this is completely different from normal methods of a class. When normal methods are invoked, execution begins at the beginning and once the method exits, it is finished; an instance of a method only returns once. In the case of iterators, instead, the execution is paused each time a value is yielded.
Therefore, C# iterators behave like coroutines. More precisely, coroutines can be defined as generalized subroutines with the following three properties:

- Local data to a coroutine persists between successive calls.
- Execution is suspended at the point where control leaves a coroutine (perhaps to allow another coroutine to produce something for it). When control reenters the coroutine, execution is resumed at the point where it previously left off.
- Control can be passed from one coroutine to another, causing the current coroutine to suspend its execution and transfer control to the new coroutine.

C# iterators can be considered a basic form of coroutines. In this article Joe Duffy explains how they can be used to implement a kind of co-routine scheduler, which can be used to schedule lightweight tasks for simulated concurrent execution.

State machine

However, C# iterators are not really coroutines. They are implemented by the C# compiler by generating the code of a state machine. More precisely, for a function like the one above, the compiler generates a class that represents the closure for the block of code in the function, and that exposes both the IEnumerable and the IEnumerator interfaces. The effect of this compiler magic is that an instance of this iterator function will suspend (or “yield”) itself and can be resumed at well-defined points. The core of the implementation is in the MoveNext method: this is where the state machine is carefully crafted to produce the sequence of values according to the sequence of yield statements. When a yield statement is met, MoveNext exits, returning the corresponding value, and the state machine keeps track of the exact state of the iterator at that point. When MoveNext is called again, the state machine restarts the execution from the instruction that follows the last issued yield. This implementation has one drawback: it is possible to yield only from one stack frame deep.
The C# compiler state-machine can only generate the code required to transform one function, but obviously cannot do that for the entire stack. Real coroutines can yield from an arbitrarily nested call stack, and have that entire stack restored when they are resumed. For example, C# iterators cannot be used to easily implement in-order traversal of a binary tree. The following code implements a recursive iterator. It works, but with poor performance: if the tree has N nodes, it builds O(N) temporary iterator objects, which is not ideal.

class TreeNode<T>
{
    public TreeNode<T> left;
    public TreeNode<T> right;
    public T value;

    public static IEnumerable<T> InOrder(TreeNode<T> t)
    {
        if (t != null)
        {
            foreach (var i in InOrder(t.left))
            {
                yield return i;
            }
            yield return t.value;
            foreach (var i in InOrder(t.right))
            {
                yield return i;
            }
        }
    }
}

// build tree
TreeNode<int> root = new TreeNode<int> { … };

// in-order traversal
var query = TreeNode<int>.InOrder(root);
foreach (int i in query)
{
    Console.WriteLine("{0}", i); // prints all values in order, but inefficiently
}

We’ll see how this limitation can be overcome with an implementation of coroutines based on Fibers. I will not delve into more implementation details, since a very detailed explanation can be found here and here. Also, it can be very interesting to look at the generated code using the Reflector tool.

In the next posts we’ll see a possible way to reproduce in C++ the behavior of C# iterators (deferred execution, lazy evaluation, “yield” semantic and so on) using Win32 Fibers. And we’ll use iterators as the base for reimplementing LINQ-style queries in C++. Of course, I can’t think of a way to reproduce the declarative query syntax in C++, with keywords like from, where, select.
But we should be able to write code like the following, with expressions defined as the pipe of query operators:

auto source = IEnumerable<int>::Range(0, 10);
auto it = source->Where([](int val) { return ((val % 2) == 0); })
                ->Select([](int val) -> double { return (val * val); });
foreach(it, [](double& val) {
    printf("%.2f\n", val);
});
https://paoloseverini.wordpress.com/2012/01/
base-feature-macros: Semantic CPP feature macros for base

This provides a set of feature macros describing features of base in a semantic way. See <base-feature-macros.h> for the set of currently provided macros.

In order to use the CPP header provided by this package, add this package as a dependency to your .cabal file, i.e.

  build-depends: base-feature-macros >= 0.1 && < 0.2

while making sure that the version specified as lower bound defines the feature-macros your code tests for. This is particularly important as CPP will implicitly treat undefined CPP macros as having the value 0. See also GNU CPP/GCC's -Wundef warning to detect such errors; or starting with GHC 8.2, -Wcpp-undef can be used:

  if impl(ghc >= 8.2)
    ghc-options: -Wcpp-undef

Then in your code, you can include and use the <base-feature-macros.h> header like so

  module M where

  #include <base-feature-macros.h>

  #if !HAVE_FOLDABLE_TRAVERSABLE_IN_PRELUDE
  import Data.Foldable (Foldable (..))
  import Prelude hiding (foldr, foldr1)
  #endif

  #if !HAVE_MONOID_IN_PRELUDE
  import Data.Monoid hiding ((<>))
  #endif

This package is inspired by the blogpost "Make macros mean something – readable backwards compatibility with CPP".
https://hackage.haskell.org/package/base-feature-macros
Hello,

Thanks in advance for anyone's help and/or advice, it is greatly appreciated on what, I am sure, is a stupid question. Nevertheless, here is my problem: I have a template class called "MyArray" and I want it to be able to accept an int or char. The contents of the array and the size of the array is taken as a command line argument.

Here is the code in the main function:

Code:
#include "MyArrayDefs.h"
using namespace std;

void main( int argc, char* argv[] )
{
    MyArray<char> charArr( strlen( argv[ 1 ] ) );
    MyArray<int> intArr( strlen( argv[ 1 ] ) );
}

Here is the header file:

Code:
template < class Type >
class MyArray
{
public:
    MyArray( const int );
    ~MyArray();
private:
    char* base;
    int size;
};

Here is the template header file:

Code:
#include <iostream>
#include "MyArray.h"
using namespace std;

template < class Type >
MyArray < Type > :: MyArray( const int n )
{
    base = new Type [ n ];
    size = n;
}

When I create an array like this:

Code:
MyArray<char> charArr( strlen( argv[ 1 ] ) );

it works fine. But when I do this:

Code:
MyArray<int> intArr( strlen( argv[ 1 ] ) );

it does not work fine. It comes up with the following error:

Code:
MyArrayDefs.h: In method `MyArray<int>::MyArray(int)':
main.cc:15: instantiated from here
MyArrayDefs.h:14: assignment to `char *' from `int *'

Any help in solving this error would be massively appreciated.

Best regards,
global
http://cboard.cprogramming.com/cplusplus-programming/64760-assignment-%60char-%2A%27-%60int-%2A%27.html
Simple user-oriented graphics drawing and image manipulation.

Project description

PyAgg is a precompiled Python library for lightweight, easy, and convenient graphics rendering based on the aggdraw module.

Motivation

There are several ways to create high quality 2D drawings and graphics in Python. The widely used Matplotlib library is a favorite for advanced scientific visualizations, but for simpler drawing uses, its size, dependencies, and steep learning curve can be a bit overkill. PyCairo is another library, but unfortunately it suffers somewhat from slow drawing and a stateful API that leads to longer code. Finally, Aggdraw is simple and intuitive, but has slightly limited functionality, is no longer maintained, and precompiled binaries are hard to come by.

The current library, PyAgg, aims to solve some of these problems. By building on the lightweight aggdraw module and including the necessary pre-compiled binaries for multiple Python and architecture versions in the same package, PyAgg is ready to use out of the box, no installation or compiling required. It is very fast and produces high quality antialiased drawings. Most importantly, PyAgg wraps around and offers several convenience functions for easier no-brain drawing and data visualization, including flexible handling of coordinate systems and length units.

Here is to at least another 5 years of beautiful and lightweight AGG image drawing. Cheers! After that, Python 2x will no longer be maintained, and while people can still continue to use it, it is likely that most people will be using Python 3x. At that point someone will have to update the C++ Aggdraw wrapper so it can be compiled for Python 3x if Python users are to continue to enjoy the power of the Agg graphics library.

Features

The main features of PyAgg include:

- A coordinate aware image.
No need for the intricacies of affine matrix transformations, just define the coordinates that make up each corner of your image once, and watch it follow as you resize, crop, rotate, move, and paste the image. Zoom in and out of your coordinate system by bounding box, factor, or units, and lock the aspect ratio to avoid distortions.
- Oneliners to easily draw and style high quality graphical elements including polygons with holes, lines with or without smooth curves, pie slices, and point symbols like circles, squares, and triangles with an optional flattening factor.
- Style your drawing sizes and distances in several types of units, including pixels, percentages, cm, mm, inches, and font points, by specifying the real world size of your image.
- Smart support for text writing, including text anchoring, and automatic detection of available fonts.
- Instantly view your image in a Tkinter pop up window, or prepare it for use in a Tkinter application.

There is also support for common domain specific data visualization:

- Partial support for geographical plotting, including a lat-long coordinate system, and automatic drawing of GeoJSON features.
- Easily plot statistical data on graphs with a syntax and functionality that is aimed more at data analysts and laymen than computer scientists, including linegraph, scatterplot, histogram, etc, although these are still a work in progress.

Platforms

Python 2.6 and 2.7

PyAgg relies on the aggdraw Python C++ wrapper, and as a convenience comes with aggdraw precompiled for the following platforms:

- Python 2.6 (Windows: 32-bit only, Mac and Linux: No support)
- Python 2.7 (Windows: 32 and 64-bit, Mac and Linux: 64-bit only)

Note: Mac and Linux support has not been fully tested, and there are some reports of problems on Linux. You can get around these limitations by compiling aggdraw on your own, in which case PyAgg should work on any machine that you compile for.
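The "coordinate aware image" feature described above boils down to one linear map per axis between the user's coordinate bounding box and the pixel bounding box. A minimal pure-Python sketch of the idea (a hypothetical helper, not PyAgg's actual API; it also ignores the y-axis flip images usually need):

```python
def make_transform(coordspace, pixelspace):
    """Map (x, y) from a user coordinate bbox to a pixel bbox.

    Each bbox is (xmin, ymin, xmax, ymax); the mapping is one
    independent linear transform per axis.
    """
    cx0, cy0, cx1, cy1 = coordspace
    px0, py0, px1, py1 = pixelspace
    sx = (px1 - px0) / float(cx1 - cx0)
    sy = (py1 - py0) / float(cy1 - cy0)

    def transform(x, y):
        return px0 + (x - cx0) * sx, py0 + (y - cy0) * sy

    return transform

# A 100x100 "percent space" mapped onto an 800x600 image:
t = make_transform((0, 0, 100, 100), (0, 0, 800, 600))
assert t(0, 0) == (0.0, 0.0)
assert t(50, 50) == (400.0, 300.0)
assert t(100, 100) == (800.0, 600.0)
```

Once every drawing call routes its coordinates through such a transform, resizing or zooming the image is just a matter of swapping in a new pixel or coordinate bbox.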
Dependencies

PIL/Pillow (used for image loading, saving, and manipulation. Also used for text-rendering, which means if you compile PIL/Pillow on your own, make sure FreeType support is enabled.)

Installing it

PyAgg is installed with pip from the commandline:

  pip install pyagg

It also works to just place the "pyagg" package folder in an importable location like "site-packages".

Example Usage

Begin by importing the pyagg module:

  import pyagg

To begin drawing, create your canvas instance and define its coordinate system, in this case based on percentages to easily draw using relative positions. In our case we give our image the size of an A4 paper, and specify that all further drawing in real world units should use 96 pixels per inch:

  canvas = pyagg.Canvas("210mm", "297mm", background=(222,222,222), ppi=96)
  canvas.percent_space()

Next, draw some graphical elements:

  canvas.draw_line([10,10, 50,90, 90,10], smooth=True, fillcolor=(222,0,0), fillsize="2cm")
  canvas.draw_triangle((50,50), fillsize="30px", fillcolor=(0,0,0, 111))

And some text:

  canvas.draw_text((50,50), "PyAgg is for drawing!", textanchor="center", textfont="Segoe UI", textsize=42)

Once you are done, view or save your image:

  canvas.save("test.png")
  canvas.view()

More Information

The above was just a very small example of what you can do with PyAgg. But until I get around to making the full tutorial, just check out the API documentation.

License

This code is free to share, use, reuse, and modify according to the MIT license, see license.txt

Credits: Karim Bahgat (2016)

Changes

0.2.0 (2016-06-22)
- Plenty of (undocumented) feature additions, including some unfinished ones
- Replaced heavy fontTools dependency with a more lightweight font locator
- Fixed some bugs improving platform support for Mac and Linux (though not fully tested)

0.1 (2016-03-28)
- First basic release
https://pypi.org/project/PyAgg/
pythoscope crash on test generation

Bug Description

I'm trying pythoscope on iTrade to get the hang of both. While pythoscope starts generating tests, it seems to go fine for a while, then it crashes when it reaches a specific module. Removing the module fixes the crash.

cd project
pythoscope --init
pythoscope *.py

ERROR: Oops, it seems internal Pythoscope error occured. Please file a bug report at https:/

Traceback (most recent call last):
File "C:\Python25\ load_
File "c:\python25\ generate_
File "c:\python25\ add_
File "c:\python25\ generator.
File "c:\python25\ self.
File "c:\python25\ for test_case in self._generate_
File "c:\python25\ test_case = self._generate_
File "c:\python25\ method_
File "c:\python25\ return sorted(
File "c:\python25\ yield self._generate_
File "c:\python25\ method.
File "c:\python25\ return ('equal_stub', 'expected', call_with_
File "c:\python25\ return "%s(%s)" % (callable, ', '.join(args))
TypeError: sequence item 0: expected string, tuple found

This bug is related to itrade_logging.py using nested arguments, for example in the formatException method of the myColorFormatter class:

> class myColorFormatte
> def formatException

Work on this bug is being done in the itrade-fixes branch.

Fix committed into the itrade-fixes branch. Successfully tested on all Python files from the iTrade distribution. Both inspection and generation of test stubs work OK now.

Fix released in 0.4.1.

Confirming bug in current trunk. Pythoscope fails during test generation for module itrade_logging.py taken from iTrade release 0.4.6.
11 February 2011 20:39 [Source: ICIS news] HOUSTON (ICIS)--Strong demand from Asia for polycarbonate (PC) resins turned the ?xml:namespace> US imports of PC resins in 2010 fell 92%, to 56,353 tonnes from 739,996 tonnes in 2009. Meanwhile, exports increased by 44% to 378,863 tonnes in 2010 from 262,361 tonnes a year earlier. Improved demand in Asia was the main reason for the shift in Sources also said that despite typically higher domestic prices for PC resins, US buyers and distributors did not want to deal with shipping hassles for material. ITC data for the month of December 2010 reflected overall trends for the year. Compared with December 2009, imports fell to 3,201 tonnes from 47,846 tonnes a year ago, a decrease of 93%. Exports increased by 104% to 35,782 tonnes in December 2010 from 17,534 tonnes a year ago. PC prices are assessed by ICIS at 156-175 cents/lb ($3,461-3,882/tonne, €2,545-2,855/tonne) for moulding- or extrusion-grade material FOB (free on board) in the US Gulf. Major ($1 = €0.74) For more on
Source: http://www.icis.com/Articles/2011/02/11/9434838/2010-us-polycarbonate-imports-fall-92-from-2009-itc.html
This question is a follow up of "Moving a member function from base class to derived class breaks the program for no obvious reason" (this is a prime example of why one shouldn't use using namespace std;):

#include <iostream>
#include <bitset>
using namespace std;

template<class T>
struct B
{
    T bitset{};
};

template<class T>
struct D : B<T>
{
    bool foo()
    {
        return this->bitset < 32;
    }
};

int main(){}

Here this->bitset should refer to the member B<T>::bitset, but because of using namespace std; the name also collides with the class template std::bitset, and compilation fails:

In member function 'bool D<T>::foo(T, std::__cxx11::string)':
cpp/scratch/minimal.cpp:24:22: error: invalid use of 'class std::bitset<1ul>'
     return this->bitset == 32;

tl;dr: it looks like this is a deliberate decision, specifically to support the alternate syntax you already used. An approximate walkthrough of the standardese below, for an expression like this->B < ...:

- this->B does name something, but it's a template, B<T>, so keep going
- B on its own also names something, a class template B<T>
- so this->B<T> is treated as a qualifier, and the < isn't a less-than after all

In the other case, this->bitset proceeds identically until the third step, when it realises there are two different things called bitset (a template class member and a class template), and just gives up.

This is from a working draft I have lying around, so not necessarily the most recent, but:

3.4.5 Class member access [basic.lookup.classref]

1 In a class member access expression (5.2.5), if the . or -> token is immediately followed by an identifier followed by a <, the identifier must be looked up to determine whether the < is the beginning of a template argument list (14.2) or a less-than operator. The identifier is first looked up in the class of the object expression. If the identifier is not found, it is then looked up in the context of the entire postfix-expression and shall name a class template. If the lookup in the class of the object expression finds a template, the name is also looked up in the context of the entire postfix-expression and
- if the name is not found, the name found in the class of the object expression is used, otherwise
- if the name is found in the context of the entire postfix-expression and does not name a class template, the name found in the class of the object expression is used, otherwise
- if the name found is a class template, it shall refer to the same entity as the one found in the class of the object expression, otherwise the program is ill-formed.

So, in any expression like this->id < ..., it has to handle cases where id<... is the start of a template-id (like this->B<T>::bitset). It still checks the object first, but if this->id finds a template, further steps apply. And in your case, this->bitset is presumably considered a template as it still depends on T, so it finds the conflicting std::bitset and fails at the third bullet above.
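A way out, then, is the qualified form the answer alludes to: naming the base class explicitly with this->B<T>::bitset means the parser never has to decide whether "bitset <" begins a template argument list. A minimal sketch, using the same class layout as the question:

```cpp
#include <bitset>
using namespace std;  // pulls std::bitset into scope, as in the question

template<class T> struct B {
    T bitset{};  // data member whose name collides with std::bitset
};

template<class T> struct D : B<T> {
    bool foo() {
        // qualifying the member with its base class sidesteps the
        // template-argument-list lookup described above
        return this->B<T>::bitset < 32;
    }
};

inline bool check() {
    D<int> d;        // the bitset member is value-initialized to 0
    return d.foo();  // 0 < 32
}
```

Renaming the member, or dropping using namespace std;, would of course avoid the collision entirely.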
Source: https://codedump.io/share/2Uw7j2KgJwdI/1/template-dependent-base-member-is-not-resolved-properly
A mirrors helper library for the Dart language.

This is how you can scrape the metadata info from an object:

MetaDataHelper<MetaData, MethodMirror> mirrorHelper = new MetaDataHelper<MetaData, MethodMirror>();
List<MetaDataValue<MetaData>> mirrorModels = mirrorHelper.from(new Anno());

The annotated class:

class Anno {
  @MetaData("test")
  void test() {}
}

MetaDataValue has the following fields:

Symbol memberName;
InstanceMirror instanceMirror;
T object;

Search for classes that implement or extend, for example, the class Anno:

ClassSearcher<Anno> searcher = new ClassSearcher<Anno>();
List<Anno> searchResult = searcher.scan();

If you'd like to contribute back to the core, you can fork this repository and send us a pull request when it is ready. If you are new to Git or GitHub, please read this guide first.

Realtime web framework for dart that uses forcemvc & mirrorme & wired source code

This file contains highlights of what changed in each version of the mirrorme package.

- Renaming to mirrorme.
- Add a method on MetadataValue to get the return type typeOfOwner.
- Extend the MetaDataHelper with method fromMirror, so we can look at it without instantiating it.
- Add iml to gitignore.
- Rename AnnotationChecker to AnnotationScanner.
- Scan all libraries instead of just the root library.
- Add documentation.
- Change logic of annotation_checker and class_scanner to reduce size.
- Skip Dart core libraries when scanning for annotations.
- Add method 'getOtherMetadata' in the class 'MetaDataValue'.
- Extend MetaDataHelper so it can search for methods and variables.
- Adding name to get the string name out of the mirrors in the class 'MetaDataValue'.
- Adding functionality to check if an annotation is available on a class.
- Provide methodMirror in MetaDataValue and add a property to access parameters.
- Adding dev dependencies for unit tests.
- Creating the correct instance of the correct class in the class searcher.
- Adding a way to search for classes that implement or extend a certain class with mirror operations.
- Solving a bug in metadata helpers to return a value.
- Adding a class scanner to get a list of all classes with a certain annotation.
- Adding an invoke method to force metadata value.
- Adding some minor changes.
- Adding tests for mirror helpers and classes for annotation handling.

Add this to your package's pubspec.yaml file:

dependencies:
  mirrorme2: ^0.1.0

You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:mirrorme2/mirrorme2.dart';

We analyzed this package on Jun 3, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:

- Detected platforms: web, other
- Primary library: package:mirrorme2/mirrorme2.dart with components: mirrors.

Fix lib/helpers/metadata_value.dart. (-2.96 points)
Analysis of lib/helpers/metadata_value.dart reported 6 hints, including:
- line 3 col 1: Prefer using /// for doc comments.
- line 16 col 3: Prefer using /// for doc comments.
- line 23 col 3: Prefer using /// for doc comments.
- line 28 col 3: Prefer using /// for doc comments.
- line 39 col 3: Prefer using /// for doc comments.

Fix lib/helpers/metadata_helpers.dart. (-1.99 points)
Analysis of lib/helpers/metadata_helpers.dart reported 4 hints:
- line 3 col 1: Prefer using /// for doc comments.
- line 8 col 3: Prefer using /// for doc comments.
- line 18 col 3: Prefer using /// for doc comments.
- line 26 col 3: Prefer using /// for doc comments.

Fix lib/helpers/annotation_checker.dart. (-1.49 points)
Analysis of lib/helpers/annotation_checker.dart reported 3 hints:
- line 4 col 3: Prefer using /// for doc comments.
- line 11 col 4: Prefer using /// for doc comments.
- line 16 col 8: DO use curly braces for all flow control structures.

Fix additional 3 files with analysis or formatting issues. (-1 points)
Additional issues in the following files:
- lib/helpers/class_scanner.dart (1 hint)
- lib/helpers/class_searcher.dart (1 hint)
- lib/mirrorme2.dart (Run dartfmt to format lib/mirrorme2.dart.)

Maintain an example. (-10 points)
Create a short demo in the example/ directory to show how to use this package. Common filename patterns include main.dart, example.dart, and mirrorme2.dart. Packages with multiple examples should provide example/README.md. For more information see the pub package layout conventions.
Source: https://pub.dev/packages/mirrorme2
Astropy provides functionality for reading in and manipulating tabular data through the astropy.table subpackage. An additional set of tools for reading and writing ASCII data is provided with the astropy.io.ascii subpackage, which fundamentally uses the classes and methods implemented in astropy.table. We'll start by importing the ascii subpackage:

from astropy.io import ascii

For many cases, it is sufficient to use the ascii.read('filename') function as a black box for reading data from table-formatted text files. By default, this function will try to figure out how your data is formatted/delimited (by default, guess=True). For example, if your data are:

# name,ra,dec
BLG100,17:51:00.0,-29:59:48
BLG101,17:53:40.2,-29:49:52
BLG102,17:56:20.2,-29:30:51
BLG103,17:56:20.2,-30:06:22
...

(see simple_table.csv) ascii.read() will return a Table object:

tbl = ascii.read("simple_table.csv")
tbl

The header names are automatically parsed from the top of the file, and the delimiter is inferred from the rest of the file -- awesome! We can access the columns directly from their names as 'keys' of the table object:

tbl["ra"]

If we want to then convert the first RA (as a sexagesimal angle) to decimal degrees, for example, we can pluck out the first (0th) item in the column and use the coordinates subpackage to parse the string:

import astropy.coordinates as coord
import astropy.units as u

first_row = tbl[0]  # get the first (0th) row
ra = coord.Angle(first_row["ra"], unit=u.hour)  # create an Angle object
ra.degree  # convert to degrees
267.75

Now let's look at a case where this breaks, and we have to specify some more options to the read() function. Our data may look a bit messier:

,,,,2MASS Photometry,,,,,,WISE Photometry,,,,,,,,Spectra,,,,Astrometry,,,,,,,,,,,
,00 04 02.84 -64 10 35.6,1.01201,-64.18,15.79,0.07,14.83,0.07,14.01,0.05,13.37,0.03,12.94,0.03,12.18,0.24,9.16,null,L1γ,,Kirkpatrick et al. 2010,,,,,,,,,,,Kirkpatrick et al. 2010,,
PC 0025+04,00 27 41.97 +05 03 41.7,6.92489,5.06,16.19,0.09,15.29,0.10,14.96,0.12,14.62,0.04,14.14,0.05,12.24,null,8.89,null,M9.5β,,Mould et al. 1994,,0.0105,0.0004,-0.0008,0.0003,,,,,Faherty et al. 2009,Schneider et al. 1991,,
,00 32 55.84 -44 05 05.8,8.23267,-44.08,14.78,0.04,13.86,0.03,13.27,0.04,12.82,0.03,12.49,0.03,11.73,0.19,9.29,null,L0γ,,Cruz et al. 2009,,0.1178,0.0043,-0.0916,0.0043,38.4,4.8,,,Faherty et al. 2012,Reid et al. 2008,,
...

(see Young-Objects-Compilation.csv) If we try to just use ascii.read() on this data, it fails to parse the names out, and the column names become col followed by the number of the column:

tbl = ascii.read("Young-Objects-Compilation.csv")
tbl.colnames
['col1', 'col2', ..., 'col33', 'col34']

What happened? The column names are just col1, col2, etc., the default names if ascii.read() is unable to parse out column names. We know it failed to read the column names, but also notice that the first row of data are strings -- something else went wrong!

tbl[0]

A few things are causing problems here. First, there are two header lines in the file and the header lines are not denoted by comment characters. The first line is actually some meta data that we don't care about, so we want to skip it. We can get around this problem by specifying the header_start keyword to the ascii.read() function. This keyword argument specifies the index of the row in the text file to read the column names from:

tbl = ascii.read("Young-Objects-Compilation.csv", header_start=1)
tbl.colnames

Great -- now the columns have the correct names, but there is still a problem: all of the columns have string data types, and the column names are still included as a row in the table. This is because by default the data are assumed to start on the second row (index=1).
We can specify data_start=2 to tell the reader that the data in this file actually start on the 3rd (index=2) row:

tbl = ascii.read("Young-Objects-Compilation.csv", header_start=1, data_start=2)

Some of the columns have missing data; for example, some of the RA values are missing (denoted by -- when printed):

print(tbl['RA'])
    RA
---------
  1.01201
  6.92489
  8.23267
  9.42942
 11.33929
       --
       --
       --
 21.19163
  21.5275
      ...
300.20171
       --
303.46467
   321.71
       --
       --
332.05679
333.43715
342.47273
       --
350.72079
Length = 64 rows

This is called a Masked column because some missing values are masked out upon display. If we want to use this numeric data, we have to tell astropy what to fill the missing values with. We can do this with the .filled() method. For example, to fill all of the missing values with NaN's (with numpy imported as np):

import numpy as np
tbl['RA'].filled(np.nan)

Let's recap what we've done so far, then make some plots with the data. Our data file has an extra line above the column names, so we use the header_start keyword to tell it to start from line 1 instead of line 0 (remember Python is 0-indexed!). We then had to specify that the data start on line 2 using the data_start keyword. Finally, we note that some columns have missing values.

data = ascii.read("Young-Objects-Compilation.csv", header_start=1, data_start=2)

Now that we have our data loaded, let's plot a color-magnitude diagram. Here we simply make a scatter plot of the J-K color on the x-axis against the J magnitude on the y-axis. We use a trick to flip the y-axis: plt.ylim(reversed(plt.ylim())). Called with no arguments, plt.ylim() will return a tuple with the axis bounds, e.g. (0,10). Calling the function with arguments will set the limits of the axis, so we simply set the limits to be the reverse of whatever they were before. Using this pylab-style plotting is convenient for making quick plots and interactive use, but is not great if you need more control over your figures.

import matplotlib.pyplot as plt

plt.scatter(data["Jmag"] - data["Kmag"], data["Jmag"])  # plot J-K vs. J
plt.ylim(reversed(plt.ylim()))  # flip the y-axis
plt.xlabel("$J-K_s$", fontsize=20)
plt.ylabel("$J$", fontsize=20)

<matplotlib.text.Text at 0x111d448d0>

As a final example, we will plot the angular positions from the catalog on a 2D projection of the sky. Instead of using pylab-style plotting, we'll take a more object-oriented approach. We'll start by creating a Figure object and adding a single subplot to the figure. We can specify a projection with the projection keyword; in this example we will use a Mollweide projection. Unfortunately, it is highly non-trivial to make the matplotlib projection defined this way follow the celestial convention of longitude/RA increasing to the left.

The axis object, ax, knows to expect angular coordinate values. An important fact is that it expects the values to be in radians, and it expects the azimuthal angle values to be between (-180º,180º). This is (currently) not customizable, so we have to coerce our RA data to conform to these rules! astropy provides a coordinate class for handling angular values, astropy.coordinates.Angle. We can convert our column of RA values to radians, and wrap the angle bounds using this class.
import astropy.coordinates as coord

ra = coord.Angle(data['RA'].filled(np.nan)*u.degree)
ra = ra.wrap_at(180*u.degree)
dec = coord.Angle(data['Dec'].filled(np.nan)*u.degree)

fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111, projection="mollweide")
ax.scatter(ra.radian, dec.radian)

<matplotlib.collections.PathCollection at 0x1121deda0>

By default, matplotlib will add degree ticklabels, so let's change the horizontal (x) tick labels to be in units of hours, and display a grid:

fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111, projection="mollweide")
ax.scatter(ra.radian, dec.radian)
ax.set_xticklabels(['14h','16h','18h','20h','22h','0h','2h','4h','6h','8h','10h'])
ax.grid(True)

We can save this figure as a PDF using the savefig function:

fig.savefig("map.pdf")

Make the map figures as just above, but color the points by the 'Kmag' column of the table.

Try making the maps again, but with each of the following projections: 'aitoff', 'hammer', 'lambert', and None (which is the same as not giving any projection). Do any of them make the data seem easier to understand?
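As a plain-Python cross-check of the sexagesimal conversion shown earlier (RA 17:51:00.0 in hours corresponds to 267.75 degrees), the arithmetic that coord.Angle performs can be done by hand:

```python
def hms_to_degrees(hms):
    """Convert a sexagesimal RA string like '17:51:00.0'
    (hours:minutes:seconds) to decimal degrees.
    One hour of right ascension is 15 degrees."""
    hours, minutes, seconds = (float(part) for part in hms.split(':'))
    return (hours + minutes / 60 + seconds / 3600) * 15

ra_deg = hms_to_degrees('17:51:00.0')  # ~267.75, matching the Angle result
```

This is only a sanity check; in practice coord.Angle also handles units, signs, and wrapping, so prefer it for real data.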
Source: http://www.astropy.org/astropy-tutorials/plot-catalog.html
I have a fairly complex Python object that I need to share between multiple processes. I launch these processes using multiprocessing.Process. When I share an object with multiprocessing.Queue and multiprocessing.Pipe in it, they are shared just fine. But when I try to share an object with other non-multiprocessing-module objects, it seems like Python forks these objects. Is that true?

I tried using multiprocessing.Value. But I'm not sure what the type should be? My object class is called MyClass. But when I try multiprocess.Value(MyClass, instance), it fails with:

TypeError: this type has no size

Any idea what's going on?

After a lot of research and testing, I found that "Manager" does this job at a non-complex object level. The code below shows that object inst is shared between processes, which means property var of inst is changed outside when the child process changes it.

from multiprocessing import Process, Manager
from multiprocessing.managers import BaseManager

class SimpleClass(object):
    def __init__(self):
        self.var = 0

    def set(self, value):
        self.var = value

    def get(self):
        return self.var

def change_obj_value(obj):
    obj.set(100)

if __name__ == '__main__':
    BaseManager.register('SimpleClass', SimpleClass)
    manager = BaseManager()
    manager.start()

    inst = manager.SimpleClass()

    p = Process(target=change_obj_value, args=[inst])
    p.start()
    p.join()

    print inst        # <__main__.SimpleClass object at 0x10cf82350>
    print inst.get()  # 100

Okay, the above code is enough if you only need to share simple objects.

Why no complex?
Because it may fail if your object is nested (object inside object):

from multiprocessing import Process, Manager
from multiprocessing.managers import BaseManager

class GetSetter(object):
    def __init__(self):
        self.var = None

    def set(self, value):
        self.var = value

    def get(self):
        return self.var

class ChildClass(GetSetter):
    pass

class ParentClass(GetSetter):
    def __init__(self):
        self.child = ChildClass()
        GetSetter.__init__(self)

    def getChild(self):
        return self.child

def change_obj_value(obj):
    obj.set(100)
    obj.getChild().set(100)

if __name__ == '__main__':
    BaseManager.register('ParentClass', ParentClass)
    manager = BaseManager()
    manager.start()

    inst2 = manager.ParentClass()

    p2 = Process(target=change_obj_value, args=[inst2])
    p2.start()
    p2.join()

    print inst2                   # <__main__.ParentClass object at 0x10cf82350>
    print inst2.getChild()        # <__main__.ChildClass object at 0x10cf6dc50>
    print inst2.get()             # 100  -- good!
    print inst2.getChild().get()  # None -- bad! you need to register the child
                                  # class too, but there's almost no way to do it;
                                  # even if you did register the child class,
                                  # you may get a PicklingError :)

I think the main reason for this behavior is that Manager is just a candybar built on top of low-level communication tools like pipe/queue.

So, this approach is not well recommended for the multiprocessing case. It's always better if you can use low-level tools like lock/semaphore/pipe/queue, or high-level tools like a Redis queue or Redis publish/subscribe, for a complicated use case (only my recommendation lol).

You can do this using Python's multiprocessing "Manager" classes and a proxy class that you define. See Proxy Objects in the Python docs. What you want to do is define a proxy class for your custom object, and then share the object using a "Remote Manager" -- look at the examples in the same linked doc page in the "Using a remote manager" section, where the docs show how to share a remote queue.
You’re going to be doing the same thing, but your call to your_manager_instance.register() will include your custom proxy class in its argument list. In this manner, you’re setting up a server to share the custom object with a custom proxy. Your clients need access to the server (again, see the excellent documentation examples of how to setup client/server access to a remote queue, but instead of sharing a Queue, you are sharing access to your specific class). here’s a python package I made just for that (sharing complex objects between processes). git: The idea is you create a proxy for your object and pass it to a process. Then you use the proxy like you have a reference to the original object. Although you can only use method calls, so accessing object variables is done threw setters and getters. Say we have an object called ‘example’, creating proxy and proxy listener is easy: from pipeproxy import proxy example = Example() exampleProxy, exampleProxyListener = proxy.createProxy(example) Now you send the proxy to another process. p = Process(target=someMethod, args=(exampleProxy,)) p.start() Use it in the other process as you would use the original object (example): def someMethod(exampleProxy): ... exampleProxy.originalExampleMethod() ... But you do have to listen to it in the main process: exampleProxyListener.listen() Read more and find examples here: In Python 3.6 the docs say: Changed in version 3.6: Shared objects are capable of being nested. For example, a shared container object such as a shared list can contain other shared objects which will all be managed and synchronized by the SyncManager. As long as instances are created through the SyncManager, you should be able to make the objects reference each other. Dynamic creation of one type of object in the methods of another type of object might still be impossible or very tricky though. Edit: I stumbled upon this issue Multiprocessing managers and custom classes with python 3.6.5 and 3.6.7. 
Need to check python 3.7 Edit 2: Due to some other issues I can’t currently test this with python3.7. The workaround provided in works fine for me I tried to use BaseManager and register my customized class to make it happy, and get the problem about nested class just as Tom had mentioned above. I think the main reason is irrelevant to the nested class as said, yet the communication mechanism that python take in low level. The reason is python use some socket-alike communication mechanism to synchronize the modification of customized class within a server process in low level. I think it encapsulate some rpc methods, make it just transparent to the user as if they called the local methods of a nested class object. So, when you want to modify, retrieve your self-defined objects or some third-party objects, you should define some interfaces within your processes to communicate to it rather than directly get or set values. Yet when operating the multi-nested objects in the nested objects, one can ignore the issues mentioned above, just as what you do in your common routine because your nested objects in the registered class is not a proxy objects any longer, on which the operation will not go through the socket-alike communication routine again and is localized. Here is the workable code I wrote to solve the problem. 
from multiprocessing import Process, Manager, Lock
from multiprocessing.managers import BaseManager
import numpy as np

class NestedObj(object):
    def __init__(self):
        self.val = 1

class CustomObj(object):
    def __init__(self, numpy_obj):
        self.numpy_obj = numpy_obj
        self.nested_obj = NestedObj()

    def set_value(self, p, q, v):
        self.numpy_obj[p, q] = v

    def get_obj(self):
        return self.numpy_obj

    def get_nested_obj(self):
        return self.nested_obj.val

class CustomProcess(Process):
    def __init__(self, obj, p, q, v):
        super(CustomProcess, self).__init__()
        self.obj = obj
        self.index = p, q
        self.v = v

    def run(self):
        self.obj.set_value(*self.index, self.v)

if __name__ == "__main__":
    BaseManager.register('CustomObj', CustomObj)
    manager = BaseManager()
    manager.start()
    data = [[0 for x in range(10)] for y in range(10)]
    matrix = np.matrix(data)
    custom_obj = manager.CustomObj(matrix)
    print(custom_obj.get_obj())
    process_list = []
    for p in range(10):
        for q in range(10):
            proc = CustomProcess(custom_obj, p, q, 10*p+q)
            process_list.append(proc)
    for x in range(100):
        process_list[x].start()
    for x in range(100):
        process_list[x].join()
    print(custom_obj.get_obj())
    print(custom_obj.get_nested_obj())

To save some headaches with shared resources, you can try to collect data that needs access to a singleton resource in a return statement of the function that is mapped by e.g. pool.imap_unordered, and then further process it in a loop that retrieves the partial results:

for result in pool.imap_unordered(process_function, iterable_data):
    do_something(result)

If it is not much data that gets returned, then there might not be much overhead in doing this.
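A runnable sketch of that collect-via-return pattern; multiprocessing.pool.ThreadPool is used here only so the snippet works without a __main__ guard, but multiprocessing.Pool exposes the identical imap_unordered API:

```python
from multiprocessing.pool import ThreadPool

def process_function(item):
    # stand-in for work whose result is handed back via the return value
    # instead of touching any shared state
    return item * item

# ThreadPool shares the Pool API (including imap_unordered); swap in
# multiprocessing.Pool (under an `if __name__ == "__main__":` guard)
# for real worker processes
with ThreadPool(4) as pool:
    # results arrive in completion order, so sort them for a stable view
    results = sorted(pool.imap_unordered(process_function, range(5)))
```

Because nothing shared is mutated inside the workers, no Manager, proxy, or lock is needed at all.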
Source: https://techstalking.com/programming/python/sharing-a-complex-object-between-processes/
Do any of you happen to know how I can reload the content of the current file from a plugin? In my plugin I update the file contents using an external program, and at the end I want the view to reflect the updated file contents. I tried adding the following to my plugin:

f = view.fileName()
wnd = view.window()
wnd.reloadFile(f, 0, 0)

or:

view.runCommand('revert')

but neither seems to work; any suggestions?

I found a workaround:

f = view.fileName()
s = open(f, 'r').read()
region = sublime.Region(0L, view.size())
view.replace(region, s)

If you think of a more elegant solution please let me know. cheers!

Try this:

import sublime, sublimeplugin
import functools

class ThingyCommand(sublimeplugin.TextCommand):
    def run(self, view, args):
        # ...code to write out the file here
        sublime.setTimeout(functools.partial(view.runCommand, 'revert'), 0)

revert is somewhat byzantine, due mostly to file loading being async: it won't work correctly unless it's in the top level of an undo group, so you have to run it via setTimeout, rather than within a command.

great! thanks
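The functools.partial trick in that answer can be seen in isolation: it freezes the command name into a zero-argument callable that a scheduler like sublime.setTimeout can invoke later. The names below are stand-ins, not the actual Sublime API:

```python
import functools

executed = []

def run_command(name):
    # stand-in for view.runCommand
    executed.append(name)

# freeze the argument now; the result can be called later with no arguments
deferred = functools.partial(run_command, 'revert')

# ...later, e.g. from a timeout/scheduler callback:
deferred()
```

Passing the partial (rather than calling run_command directly) is what lets the revert happen outside the current command's undo group.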
Source: https://forum.sublimetext.com/t/reload-current-file-from-a-plugin/117/4
Created on 2013-03-12 01:36 by leim, last changed 2013-11-03 12:37 by ncoghlan. This issue is now closed. Currently: ipaddress.IPv4Network('100.64.1.0/24').is_private == False Given RFC6598, 100.64.0.0/10 is now approved for use as CGN space, and also for rfc1918-like private usage. Could the code be altered so that is_private will return true for 100.64.0.0/10 as well?? According to Wikipedia [1] even more address ranges are reserved and non-routable. But only three address ranges are marked as private. So 100.64.0.0/10 is reserved and non-routable but not considered a private address range. [1] I don't see anyway to actually assign this bug to myself, but I'll get a patch for this. Thanks Peter. On 13 March 2013 03:35, pmoody <report@bugs.python.org> wrote: > > pmoody added the comment: > > I don't see anyway to actually assign this bug to myself, but I'll get a > patch for this. > > ---------- > nosy: +pmoody > > _______________________________________ > Python tracker <report@bugs.python.org> > <> > _______________________________________ >? Peter, 'Assigned To' is a developer who intends to push (or has pushed) a patch. Anyone can write and attach one. And it is nice to give notice that you intend to. is_private should return true for all prefixes that are intended for *private* use, hence it should include rfc1918 and rfc6598. rfc6598 stipulates 100.64.0.0/10 On 16 March 2013 06:34, pmoody <report@bugs.python.org> wrote: > > pmoody added the comment: > >? > > ---------- > > _______________________________________ > Python tracker <report@bugs.python.org> > <> > _______________________________________ > So I'm not convinced that 6598 space should be treated like 1918 space. Specifically, the second paragraph of the rfc states:. which I read as, "It's not private like rfc1918 space, but sometimes certain people can treat it similarly." Are there more convincing arguments for treating 6598 like 1918? I was about to make the same suggestion as the OP. 
Most users think of "private IP" addresses as NATed IP addresses. I think the technical term is "forwardable, but not globally unique". Thus, the method of least surprise would be that indeed the is_private() method returns True for 100.64.0.0/10. As for the RFC, these addresses are indeed the same, that they are both NATted. They are different that for RFC 1918 addresses, it is the end-site (home network, or office network) that does the NATing, while for RFC 6598, it is the ISP that does the NATing. I think the confusing comes from the term is_private(). Formally, this only applies to RFC 1918 addresses, but it seems that this library does not take a formal but pragmatic approach. Otherwise, they would have added the methods is_forwardable(), is_global() and is_reserved() in line with what is the official specification at. I prefer a pragmatic approach, and the term is_natted() or is_private() because that is what most programmers are interested in. Those few programmers that truly understand the difference between all these IP ranges (e.g. those who write bogon filter software), will simply avoid these methods and just use the rest of the library. So +1 for this request. I'm still not convinced. The rfc still says in essence "It's not private like rfc1918 space, but sometimes certain people can treat it similarly." and I think it would be pretty surprising for ipaddress to return True if it's not a network operator running the query. Since we have no way of knowing that, I'm extremely disinclined to make this change. A more formal solution would be all of the possible "is_RFCXXXX" methods, but that doesn't seem to be worth the effort. I don't understand your remark "I think it would be pretty surprising for ipaddress to return True if it's not a network operator running the query." Edit: could you rephrase? A bit odd questions: What is the is_private() function intended to accomplish? 
I have been wondering what is_private() means, and how users of the library are going to use this function. I've actually failed to come up with any sensible use-case with the current implementation. So in the current state, without the modifications, my vote would be to remove the method, as it is more likely to add confusion than the be of a particular use. The most useful method (which I originally thought it was meant to do) is a function that indicates if a certain IP range is NATted (only an indication, since it requires a network to reliable test). However, that's not what the current function entails: it misses the 100.64.0.0/10 range, which is NATted, but includes the fc00::/7 unique local block, which is not NATted. I would be all in favour of such a is_natted() function, but that's not what this is. The is_private() function also does not simply list "private" IP addresses, when looking at the formal IETF definitions, since it includes fc00::/7, which are unique local addresses, which in practice used in a rather different way: IPv4 private IP addresses are often NATted (and routed after translation), while IPv6 unique local addresses are typically used for local non-routed networks. Also, the is_private() function does not list all non-globally assigned addresses. That should includes a lot more ranges, as listed on and. So far, the is_private() function seems to return True for addresses which are: * non-globally assigned (as opposed to regular unicast and multicast addresses) * available for manual assignment (as opposed to link-local addresses) * may be assigned by end-sites, but not by ISPs (as opposed to shared IP addresses and the small DS-Lite block) * is not intended for benchmarking purposes (as opposed to benchmarking IP addresses) Frankly, I wonder if this very particular information of enough interest to warrant a function on its own. 
In that case, I much rather see more generic (and hopefully more useful) functions such as is_natted() and for is_global(), is_forwardable() and is_reserved(), as defined by IANA. is_private was, as you note, basically shorthand for is_RFC1918 (and is_RFC4193 for v6). It's not a particularly well-named method, but at the time that I wrote it (~5 years ago?), it did what I needed it to do. I'm not sure what you mean by an 'is_natted()' method; there's nothing in particular preventing someone from natting a globally unique address. is_global() makes some sense to me, and it appears that I most likely have to update is_reserved, but I don't understand is_forwardable(). Peter, first of all, thanks for your library. I didn't mention that before, but should have. I'm in favour of a pragmatic approach. I've only come across NATing for RFC 1918 and RFC 6598 addresses. While it can technically be done for other addresses, and is allowed by RFC 3022 section 5.1, I have never seen that in practice. is_natted() (or perhaps: is_nattable()?) could be used by a SIP client to decide if it should include a VIA header or not, without need to do a resource-expensive NAT check at all times. To be clear: I'm not a great fan of is_natted() myself, but I fear that keeping is_private() the way it is, people will use it as if it meant is_natted() and will end up with unintended errors in their code. is_forward and is_global -if deemed useful- should just follow the columns at and. Perhaps functions like is_valid_source(), is_valid_destination() and is_reserved() may be included too. The meaning of these columns is explained in [ RFC 6890]. I interpret forwardable as "should be forwarded by a router" and global as "may be seen on the Internet / should be forwarded beyond administrative domain boundaries". For example, private IP addresses or benchmarking IP addresses may be routed just fine, as long as they're never seen on the global Internet. PS: There is a typo in the documentation. 
is_unspecified mentions RFC 5375, but that should be RFC 5735, which in turn is obsoleted by RFC 6890. I'll see if I can make a patch, but that will be after my holiday.

Reopening this - rewording the issue title to cover the problem to be solved (i.e. accounting for RFC 6598 addresses) rather than a specific solution (which isn't appropriate, since the RFC *explicitly* states that shared addresses and private addresses aren't the same thing). It seems to me that the simplest solution would be to use the terminology from RFC 6598 and add a new "is_shared" attribute to addresses.

The problem is that 'shared' describes exactly one network, unless you mean that we should try to treat 'private' as 'shared'. That's something I really don't want to do because it leads to confusion like this. Do you not think that is_global or is_forwardable (per the IANA registry) is worthwhile?

I'd also be fine with "is_carrier_private", or, as you say, the inverse "is_global" for "not is_private and not is_carrier_private and not (any of the other private addresses)" (assuming I understood that suggestion correctly). I guess the "is_global" one is the most useful, since that lets you know if you can send or store that address directly, or if you need to translate it to a persistent global address somehow.

ok, here's an is_global/is_private patch using the IANA special registry for ipv4 and ipv6.

New changeset 2e8dd5c240b7 by Peter Moody in branch 'default':
#17400; ipaddress should make it easy to identify rfc6598 addresses

About 2e8dd5c240b7: It might be a good idea to cache the two lists in a class or module variable in order to speed things up. It might also be a good idea to move the most common networks like 192.168.0.0/16 to the top of the list.

New changeset 07a5610bae9d by Peter Moody in branch 'default':
#17400; NEWS and ipaddress.rst change

I have a change that needs to be submitted for the parser, then I'll come back to the caching.
The pedant in me would like to keep the addresses ordered because that makes it clear where to add new networks as IANA changes classifications, but it may just make more sense to put RFC 1918 at the top.

The docs patch doesn't look quite right - Peter, did you mean to copy the "is_private" docs before modifying them? As far as caching goes, perhaps we can just drop functools.lru_cache into the relevant property implementations?

@property
@lru_cache()
def is_global(self):
    """Test if this address is allocated for public networks."""

(also in copying that example method header, I noticed the docstring for the IPv4 is_global currently still says "private" instead of "public")

Sorry for chiming in a bit late, but what's the rationale for including 100.64.0.0/10 in the "is_private" set, rather than *only* excluding it from the "is_global" set? The rationale for RFC 6598 is precisely that 100.64.0.0/10 is *not* private in the common sense, so it would deserve a different treatment in the ipaddress module as well.

New changeset 365fd677856f by Peter Moody in branch 'default':
#17400: fix documentation, add cache to is_global and correctly handle 100.64.0.0/10

antoine, quite right. I've updated is_global.
> Nick, I've added lru_cache() to is_private and updated the docs (hope it's right this time). Mmmh... I actually meant the reverse. IIUC, 100.64.0.0/10 isn't global (i.e. globally routable) and isn't private either. New changeset b9623fa5a0dd by Peter Moody in branch 'default': #17400: correct handling of 100.64.0.0/10, fixing the docs and updating NEWS Just updating the issue state to reflect the fact Peter committed this a while ago.
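As a sketch of the caching pattern discussed in the thread, stacking functools.lru_cache under @property memoizes the classification per address object (the network list here is an abridged stand-in for the IANA-derived table, not the module's real one):

```python
from functools import lru_cache
from ipaddress import IPv4Network, ip_address

# Abridged stand-in list; the real module consults many more ranges.
_PRIVATE_V4 = [IPv4Network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

class CachedAddress:
    def __init__(self, text):
        self._ip = ip_address(text)

    @property
    @lru_cache()  # caches per instance, keyed on self
    def is_private(self):
        return any(self._ip in net for net in _PRIVATE_V4)

print(CachedAddress("10.1.2.3").is_private)
print(CachedAddress("8.8.8.8").is_private)
```

One caveat with this pattern: the lru_cache keeps a reference to every instance it has seen, so it can pin objects in memory; Python 3.8 later added functools.cached_property, which stores the computed value on the instance instead.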
https://bugs.python.org/issue17400
Mini-tutorial on responsive layouts, part 3.

A word on HTML tables

As mentioned in the paragraph above, there are some layout problems that can be solved either by flexbox or by table layout. What is table layout and why is it considered a bad practice? Table layout is a way of arranging things on the page without any styles, and it was commonly used before CSS came into this world. The idea of using table tags was natural, as each layout is mostly an arranged grid, and a table is a powerful tool for customizing grids. There are several problems with table layouts, though:

- semantics; the table tag is meant for displaying tabular data, not for arranging elements
- cumbersome code, cluttered with tables inside tables inside tables, is very hard to read and support
- the table tag doesn't allow for rearranging cells for different resolutions, which makes it, against your initial intuition, not suitable for creating truly responsive layouts (three beautiful columns on large screens will become three uncomfortably narrow columns on small screens)

When is it okay to use tables?

On webpages the table tag should be used for one purpose only - displaying tables. When building responsive layouts, there are almost always more efficient and elegant ways of doing it. That said, there are times you're left with two choices: flexbox and tables. And if you're reluctant to use the former, CSS provides a method of table-like layout with just the styles - the table- values of the display property (see all values here). It's preferable to the use of HTML tables. And finally, tables can and should be used for email body layouts. The reason is that none of the mail clients support enough CSS for it to be used effectively, and different clients support different subsets of it, which makes it impossible to build email templates that work cross-client any other way.

Media queries

The point of responsive layouts is to look good on any (supported) device, including tablets and mobile phones.
Sometimes, in order to look good on smaller screens, the content has to be rearranged on the page, and although flexbox gives us some means for rearrangement, using media queries is the preferred way of adjusting element styles depending on screen sizes and types.

@media (max-width: 800px) {
  #sidebar {
    display: none;
  }
}

This code, appended to the end of the CSS file, will make sure that an element with id='sidebar' stays hidden on screens with a width of 800px and smaller. It's important to add @media queries to the bottom of the CSS files, so they will be loaded after the rest of the styles (and override them).

Complex queries

To create media queries with multiple conditions, use the logical operators and, not and only.

@media (min-width: 30em) and (orientation: landscape) { ... } /* landscape-oriented screens of min width 30em */
@media not screen and (color) { ... } /* negates the whole query: not (screen and (color)) */

The only keyword has no effect on modern browsers and is used for preventing the old ones from applying styles; it won't be discussed here.

Read more

Although media queries are mostly used for adjusting layouts, there are other uses. For a better overview of the @media rule and detailed syntax see the MDN docs.

Cross-browser support

It's important to stay mindful of people using different browsers when surfing the Web. One may prefer the conventional Google Chrome or Firefox, and another is used to Safari for Windows. The problem with this diversity is that rule interpretation can vary, to the point of some rules not being supported at all. Some development time can be saved by using tools like caniuse.com, which has updated charts of CSS feature support by different browsers and versions. Generally, if some of the modern browsers don't support a rule, it's best to avoid it.

Best Practices (by Gil Meir)

- Keep your layout as nested contained boxes; if a parent collapses or a child overflows, fix it!
- Use namespaces via a preprocessor (i.e. SCSS) or simple dashes to avoid global namespace collisions.
- Use calc to show where the numbers are coming from. Vars or custom properties (from a preprocessor) are also great.
- Prefer REMs and percentages for most stuff, EMs for margin/padding of elements with custom sized fonts, and pixels as little as possible.
- Escalate positioning: static -> relative -> absolute -> fixed.
- Don't add rules to fix things; try to understand why the browser decided to draw things the way it did. Most likely you need to change rules or even delete them.
- Don't use negative values to position things unless you're using some well defined pattern explicitly.
- Display table-cell and floats work really well for positioning things. Flex is also great, with caution.
- Devtools are part of the process; use them to gain insight into how the browser derives properties and to see things like reflows and repaints.
- Always develop on more than one browser; it will mostly expose your mistakes.
- Don't just rip things off of the web and paste them in your code; retrofit things to the style of the project.
- Be very wary of properties with global significance such as z-index. Prefer selecting by class and never use inline styles or !important. These make your code harder to manage.
- Don't bother too much with abstractions, CSS does not compose well. Focus on separation, locality, cross-browser support and minimal rule-sets.

Further reading

- MDN docs - for API reference and examples
- Learn Enough CSS & Layout to Be Dangerous by Lee Donahoe and Michael Hartl
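The display: table approach mentioned above can be sketched like this (class names are illustrative):

```css
/* Two equal-height columns without <table> markup. */
.layout        { display: table; width: 100%; }
.layout > .row { display: table-row; }
.sidebar,
.content       { display: table-cell; vertical-align: top; }
.sidebar       { width: 30%; }
```

The markup stays semantic (plain divs), while the boxes gain table-cell behaviour such as equal-height columns.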
https://www.spectory.com/blog/Mini-tutorial%20on%20responsive%20layouts,%20part%203
The Two Sides of Group Policy Script Extension Processing, Part 2

By Judith Herman, Microsoft Corporation

In Part 2 of this two-part series, Judith Herman explains how to troubleshoot problems with Group Policy logon, logoff, startup, and shutdown scripts.

Have you ever deployed a script using Group Policy and wondered why it didn't run? Part 1 of this article began to show you how to go about discovering the reason, focusing on how to determine if the Group Policy scripts client-side extension (CSE) processed the scripts extension correctly. Part 1 presented a way to read the registry keys representing Group Policy startup, logon, logoff, and shutdown scripts. This enables you to quickly see a list of the scripts deployed by Group Policy.

For Microsoft Windows Server 2003, Windows XP, and Windows 2000 operating systems, the short answers to these questions are that there could be many reasons why the script didn't run, but the Group Policy scripts CSE isn't running the script and therefore can't report the script failure through the event log. But there is a process responsible for running the Group Policy startup, logon, logoff, and shutdown scripts that reports script failures: the USERINIT process.

The USERINIT process

When Group Policy has processed the Group Policy scripts CSE, Group Policy's job is done. The USERINIT process owns the job of running the Group Policy scripts for each startup, logon, logoff, and shutdown event. For each of these events, the USERINIT process will perform the following tasks:

- Based on the type of event, the USERINIT process will cycle through the corresponding Group Policy registry key representing the startup, logon, logoff, or shutdown scripts associated with each Group Policy object (GPO) applied either to the machine for startup or shutdown scripts or to the user for logon or logoff scripts.
- The USERINIT process will launch the script along with any parameters using the appropriate privileges associated with either the machine or the user. This information is read from the registry keys.
- The USERINIT process launches the scripts using calls to the ShellExecuteEx method. This limits the type of script that can be run to what ShellExecuteEx will process. Check out the documentation on MSDN on the ShellExecuteEx method.
- The USERINIT process will launch all scripts for an event either synchronously, waiting for each script to finish before launching the next, or asynchronously, depending on the type of event and the policy settings affecting this event. For example, all startup scripts will be launched synchronously, by default, on Microsoft Windows Server 2003, Windows XP, and Windows 2000 operating systems. A Group Policy setting can be enabled to change all startup scripts to run asynchronously. The full path of this policy setting, in the Group Policy Object Editor, is Computer Configuration \ Administrative Templates \ System \ Scripts \ Run startup scripts asynchronously. Unsure how to set this policy setting? Check out the documentation about setting administrative template policy settings.
- If a script fails to run, USERINIT will log an error. A failed script will not necessarily cause all scripts to fail. The exception to the rule is in the synchronous case when a script fails and hangs. In this situation, the script processing will wait for either the hung script to complete or the Group Policy scripts processing to time out. The default timeout for Group Policy scripts processing is 10 minutes. This timeout value can be modified through Group Policy for each event. Be aware that this timeout value applies to the time it takes to run all scripts for an event. If you set this value too low, the USERINIT process may time out before all the scripts for an event have had time to complete.

Did my Group Policy script run?
Besides the obvious approach of checking for the effect of the script, you can use the Resultant Set of Policy (RSoP) information for the Group Policy scripts CSE to find out whether the script ran. This will tell you if the script ran by reporting the time the script ran. If the script has never executed successfully, you will see either a time of "0" or a blank entry in the Last Executed column. Figure 1 shows the RSoP display for a script that has never executed successfully. If it ran successfully at some point but then failed, you will see an old date and time of execution.

Note: The script execution time is reset to 0 if gpupdate /force is run on a machine. Administrators tend to force a Group Policy update when troubleshooting a problem. If you are looking for Group Policy script execution problems, you should reboot instead of forcing the update.

Figure 1 The RSoP display for a failed script.

You can also display this information via a script. There are two important issues to keep in mind when creating a script to gather the scripts extension information from RSoP. First, the information about the scripts extension is only populated in the RSoP namespace up to the time when the RSoP provider was activated. In the example scripts provided below, the RSoP provider is activated using the following lines in the script. An array, stored in the ScriptList property, is populated with one entry for each Group Policy script that the USERINIT process launches. Each entry will provide information on the script arguments, execution time and script type of each called script in the ScriptList array.

The second issue is that the execution time is stored in WMI time format (UTC time) instead of normal human-readable time. The WMIDateToString function is used to convert WMI time to the execution time you expect to see.
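A typical VBScript implementation of such a WMIDateToString helper (CIM datetime strings look like 20140315093000.000000+000; this sketch ignores the trailing UTC offset) is:

```vbscript
' Convert a WMI/CIM datetime string (yyyymmddHHMMSS.mmmmmm+UUU) to a VBScript Date.
Function WMIDateToString(dtmDate)
    WMIDateToString = CDate(Mid(dtmDate, 5, 2)  & "/" & _
                            Mid(dtmDate, 7, 2)  & "/" & _
                            Left(dtmDate, 4)    & " " & _
                            Mid(dtmDate, 9, 2)  & ":" & _
                            Mid(dtmDate, 11, 2) & ":" & _
                            Mid(dtmDate, 13, 2))
End Function
```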
An example script to display computer Group Policy script execution times is provided here; the script connects to the computer half of the RSoP namespace (namespaceLocation & "\Computer") and then loops over each script entry:

        Select Case objScript.scriptType
        Case 3
            WScript.Echo vbTab & "Type: Startup"
        Case 4
            WScript.Echo vbTab & "Type: Shutdown"
        End Select
        WScript.Echo vbTab & "Arguments: " & objScript.arguments
        WScript.Echo vbTab & "Execution Time: " & WMIDateToString(objScript.executiontime)
        WScript.Echo
    Next
Next
provider.RsopDeleteSession namespaceLocation, hResult

You can create a similar script to check the user Group Policy scripts that the USERINIT process launches. In this case, you need to connect to the RSoP provider for the user namespace (namespaceLocation & "\user"). The values for the logon and logoff script types will be 1 and 2, respectively. And now you get the script to display launched user Group Policy scripts:

        Select Case objScript.scriptType
        Case 1
            WScript.Echo vbTab & "Type: Logon"
        Case 2
            WScript.Echo vbTab & "Type: Logoff"
        End Select
        WScript.Echo vbTab & "Arguments: " & objScript.arguments
        WScript.Echo vbTab & "Execution Time: " & WMIDateToString(objScript.executiontime)
        WScript.Echo
    Next
Next
provider.RsopDeleteSession namespaceLocation, hResult

The script didn't run - what do I do now?

Understanding that the USERINIT process, and not Group Policy, launches the Group Policy scripts provides the first step in troubleshooting why a Group Policy script didn't run. You will want to look for any error messages in the application event viewer with USERINIT as the source. The error message will provide the path and name of the script along with the reason for the failure to run the script. The typical failures you will see with Group Policy scripts are a bad script path, a hung script, or access to the script being restricted via ACLs. An example of an event viewer error message from a failed logon script is shown in Figure 2.

Figure 2 The event message for a failed logon script

This same information can be displayed via a script that displays the application event viewer messages with USERINIT as a source.
Below is an example of a script that provides this information, using the WMI service to query the Application log file for events with a SourceName of USERINIT:

Set colLoggedEvents = objWMIService.ExecQuery _
    ("Select * from Win32_NTLogEvent Where Logfile = 'Application' and " _
    & "SourceName = 'USERINIT'")
For Each objEvent in colLoggedEvents
    WScript.Echo objEvent.TimeWritten & vbTab & objEvent.Message
Next

So now you have some information about which script failed and possibly why the script failed. But say you get an "Access Denied" message for the script. You log on as administrator, and it runs just fine. What do you do now? Be aware that the USERINIT process will try to launch the script using the appropriate privileges. For startup or shutdown scripts, the USERINIT process will impersonate system privileges. For logon or logoff scripts, the USERINIT process will impersonate the user logging on. To test if the script is running under the appropriate privileges, make sure you are logged on either as that user, if they do not have full local administrator rights on the machine, or from a command window with system privileges. One scenario usually not tested correctly is verifying a script with machine privileges. You can use the AT command to open a command window with system or machine privileges. To do so, type the following in a command window:

This sample AT command will cause a new window to open in one minute. The time must be in the future for the job to run. Also, the time is specified in 24-hour format. For information about the AT command, type the following text in a command window:

What else could go wrong?

We've looked at a way to start troubleshooting the Group Policy scripts CSE when you suspect a script has failed. There is one scenario in which you can avoid having to troubleshoot later if you do some extra planning ahead of time when deploying scripts through Group Policy.
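As a sketch, the classic AT invocations for this look as follows (the time shown is arbitrary; pick any time a minute or two in the future, in 24-hour format):

```
at 13:05 /interactive cmd.exe
at /?
```

The first line schedules an interactive cmd.exe window that opens with the privileges of the Task Scheduler service (that is, system privileges); the second displays the built-in help for the AT command.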
When you have mobile devices, such as laptops, you must take into account that users will want to take their devices off the corporate network, such as to their homes. As we mentioned, when you set up a script to run synchronously, the script could fail to launch in such a way that it will hang until the timeout period has expired. Remember this timeout is set to 10 minutes by default. In the case of startup scripts, the machine will not fully boot until all the startup scripts have run or the Group Policy scripts extension has reached its timeout period for executing all startup scripts. And 10 minutes can be a very long time to wait for the machine to finish booting when you don't know why it's taking so long. For this reason, it is best to be cautious when deploying startup scripts to mobile devices.

When administrators create a Group Policy script policy setting, they typically will locate the script on the SYSVOL directory of the domain controller for their domain. This works fine as long as the machine can get to a domain controller on startup. However, when the laptop is not on the corporate network, it will not be able to do this. In the case of mobile devices, it works best to either not deploy startup scripts or place the script in a standard location on each machine.

Final thoughts

To sum up, we've looked at how the USERINIT process actually launches Group Policy scripts and how you can start thinking about troubleshooting the problem of Group Policy scripts not running.
http://technet.microsoft.com/en-us/library/ff404239.aspx
New-SmbShare

Syntax

New-SmbShare [...]

Creates a new Server Message Block (SMB) share. To delete a share that was created by this cmdlet, use the Remove-SmbShare cmdlet.

Examples

EXAMPLE 1

PS C:\>New-SmbShare -Name VMSFiles -Path C:\ClusterStorage\Volume1\VMFiles -FullAccess Contoso\Administrator, Contoso\Contoso-HV1$

Name ScopeName Path Description
---- --------- ---- -----------
VMSFiles Contoso-SO C:\ClusterStorage\Volume1\...

This example creates a new SMB share.

EXAMPLE 2

PS C:\>New-SmbShare -Name Data -Path J:\Data -EncryptData $true

Name ScopeName Path Description
---- --------- ---- -----------
Data Contoso-FS J:\Data

This example creates a new encrypted SMB share.

Parameters

Specifies the continuous availability timeout for the share.

Specifies which users are granted access to the new SMB share.

A description of the share is displayed by running the Get-SmbShare cmdlet. The description may not contain more than 256 characters. The default value is no description (an empty description).

Indicates that the share is encrypted.

Specifies a name for the new SMB share. The name may be composed of any valid file name characters, but must be less than 80 characters in length. The names pipe and mailslot are reserved for use by the computer.

Specifies which accounts are denied access to the share. Multiple accounts can be specified by supplying a comma-separated list.

Specifies the path to the location of the folder to share. The path must be fully qualified; relative paths or paths that contain wildcard characters are not permitted.

Specifies whether the share continues to exist after a reboot of the computer. By default, new SMB shares are persistent, and non-temporary.

Outputs

Microsoft/Windows/SMB/MSFT_SmbShare

The Microsoft.Management.Infrastructure.CimInstance object is a wrapper class that displays Windows Management Instrumentation (WMI) objects. The path after the pound sign (#) provides the namespace and class name for the underlying WMI object. The MSFT_SmbShare object represents the new SMB share.
https://docs.microsoft.com/en-us/powershell/module/smbshare/new-smbshare?view=winserver2012r2-ps
The aggregate() function is utilized to combine outcomes. First a sequence operation, the first parameter of the aggregate() function, is applied, and then a combine operation is utilized to combine the solutions generated by the sequence operation. This function can be applied to all the collection data structures in Scala, both mutable and immutable. It belongs to the TraversableOnce trait in Scala.

Syntax:

def aggregate[B](z: => B)(seqop: (B, A) => B, combop: (B, B) => B): B

Where,

- B is the type of the aggregated result, and z is the initial value for the aggregated result.
- seqop is an operator for the sequence operation and is utilized to figure out the sum of each of the elements of the stated collection and also enumerates the total number of elements in the collection.
- combop is a combine operator which is utilized to combine the outcomes obtained by the parallel computation of the collection.

Parallel computation:

Let, List = (2, 3, 4, 5, 6, 7, 8)

Suppose you have three threads over the list stated above: let the first thread be (2, 3, 4), the second thread (5, 6) and the third thread (7, 8). Now, let's perform the parallel computation.

First thread  = (2, 3, 4) = (2+3+4, 3) = (9, 3)
// It is evaluated like this:
// (sum of all the elements, total number of elements)
Second thread = (5, 6)    = (5+6, 2)   = (11, 2)
Third thread  = (7, 8)    = (7+8, 2)   = (15, 2)

Finally, after the parallel computation, we have (9, 3), (11, 2), and (15, 2), and now the combine operator is applied to combine the outcomes of each thread, i.e. (9+11+15, 3+2+2) = (35, 7).

Now let's see an example.

Example:

(Sum of all the elements, total number of elements) = (10, 4)

Here, par implies parallel, which is utilized for the parallel computation of the list. We will discuss three parts in detail.
aggregate(0, 0)

This is the first part, where the aggregate() function has two zeroes, which are the initial values of the accumulator s. So s._1 is at first zero, and is utilized to figure out the sum of all the elements in the list, and s._2 is also zero in the beginning, which helps in enumerating the total number of elements in the list.

(s._1 + r, s._2 + 1)

This is the second part; it performs the sequence operation for the list stated above. The first component of this code evaluates the sum and the second component counts the total elements. Now, let's see the evaluation step by step. Here, List = (1, 2, 3, 4).

(s._1 + r, s._2 + 1)    // initially, s._1 and s._2 = 0
= (0+1, 0+1) = (1, 1)   // r is the current element of the list
= (1+2, 1+1) = (3, 2)
= (3+3, 2+1) = (6, 3)
= (6+4, 3+1) = (10, 4)

This shows how the evaluation is done exactly.

(s._1 + r._1, s._2 + r._2)

This is the last part; it is utilized in the combine operation, as stated above in parallel computation. Suppose, during parallel computation of the list (1, 2, 3, 4), it is broken into two threads, i.e. (1, 2) and (3, 4); then let's evaluate it step by step.

First thread  : (1, 2) = (1+2, 2) = (3, 2)
Second thread : (3, 4) = (3+4, 2) = (7, 2)

Now, let's combine the two threads, i.e. (3, 2) and (7, 2), using the combine operator as stated above.

(s._1 + r._1, s._2 + r._2)   // s._1 and s._2 = 0
= (0+3, 0+2) = (3, 2)        // r._1 = 3 and r._2 = 2
= (3+7, 2+2) = (10, 4)       // r._1 = 7 and r._2 = 2

Thus, this part works like this. Let's see one more example.

Example:

The total number of letters used are: 17

Here, the initial value of the aggregate function is zero, which is utilized to compute the total number of letters in the strings used here. The method length is used to enumerate the length of each string. Let's discuss the code below, used in the above program, in detail.

(_ + _.length, _ + _)

Here, Seq = ("nidhi", "yes", "sonu", "Geeks"). Let's perform the sequence operation first.
(0 + "nidhi".length ) // (0+5) = 5 (0 + "yes".length) // (0+3) = 3 (0 + "sonu".length) // (0+4) = 4 (0 + "Geeks".length) // (0+5) = 5 Therefore, we have (5), (3), (4), (5) from the sequence operation. Now, lets perform combine operation. (5+3) = 8 (4+5) = 9 // Now lets combine it again (8+9) = 17 Thus, total number of letters are 17. Recommended Posts: - Scala Tutorial – Learn Scala with Step By Step Guide - Scala | reduce() Function - Scala | Function Composition - Program to transform an array of String to an array of Int using map function in Scala - Pure Function In Scala -.
https://www.geeksforgeeks.org/scala-aggregate-function/?ref=leftbar-rightbar
How many chocolates can be bought by Charlie?

Charlie has Rs. 45 and each chocolate costs Rs. 3. So he buys only 15. But there is a scheme. He will return 15 wrappers and get 5 chocolates free. Then he will return 3 out of 5 and get one free. And he will again use one wrapper with the remaining 2 to get one more. So 15+5+1+1 = 22.

22

Answer should be 15. The question is how many chocolates can he buy, right? 22 is so wrong!!!

It would help if the question were rephrased to read "How many chocolates can he GET?" First he buys 15, then ...

22

IT WAS VERY SIMPLE WATCH THIS

22 and 1 wrapper left. 15 chocolates from 45 Rs. generates 15 wrappers => 5 more chocolates generates 5 wrappers => 1 more chocolate + 2 wrappers generates 1 more wrapper => 3 wrappers => 1 more chocolate, 1 wrapper left

Urvesh, there's a Show Solution button to check if your solution is correct!

yep! forgot to see that. . . :P

Nice to find a formula for the general case where there's 'n' instead of '45'...

Sum of (n/3^i) from i = 0 to i = ∞

But it gives 22.5 for 45

The formula is incorrect ... It assumes that money and wrappers can be used together to buy new chocolate

22

Total: 22

22

21

22 actually.. didn't realize in the end he will have 2 wrappers + 1 chocolate => 3 chocolates.. its -: 15+5+1+1

22 Chocolates And 1 wrapper is left

22

22 chocolates and 1 wrapper left

How about this? good one just needs patience.

Actually, he can only buy 15 chocolates...the others are free.

A Ruby solution. You could also build in different wrapper trade-in ratios or different costs of chocolate bars.

class Buyer
  attr_reader :choc_bars

  def initialize(cash = 45, choc_bars = 0)
    @cash = cash
    @choc_bars = choc_bars
    @wrappers = 0
  end

  def get_dat_chocolate
    spend_monies
    trade_in_wrappers
    choc_bars
  end

  private

  def trade_in_wrappers
    while @wrappers >= 3
      @wrappers -= 2     # hand in 3 wrappers, the new bar adds 1 back
      @choc_bars += 1
    end
  end

  def spend_monies
    bought = @cash / 3
    @choc_bars += bought
    @wrappers += bought
  end
end

x = Buyer.new
x.get_dat_chocolate   # => 22

The real answer should be 15... He can only BUY 15.. the last 7 he got free for trading in wrappers.

22

He can only BUY 15. The other 7 he gets are free as per the parameters of the question.

Rs.
45 = Rs.3 * 15 Chocolates
15 Chocolates = 15 Wrappers
3 Wrappers = 1 Chocolate
15 Wrappers = 5 Chocolates.. so totally 20 CHOCOLATES...!!!

Rs. 45 = Rs.3 * 15 Chocolates
15 Chocolates = 15 Wrappers
3 Wrappers = 1 Chocolate
15 Wrappers = 5 Chocolates (3 Wrappers = 1 Chocolate)..
15+5+1+1 = 22
so totally 22 CHOCOLATES...!!!

22

22

22
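The geometric-series formula proposed above over-counts because wrappers can only be redeemed in whole batches; an iterative sketch of the general case (function and parameter names are illustrative):

```ruby
# General case: `money` to spend, `price` per bar, `rate` wrappers per free bar.
def total_chocolates(money, price = 3, rate = 3)
  bars = money / price                  # integer division: whole bars only
  wrappers = bars
  while wrappers >= rate
    free = wrappers / rate
    bars += free
    wrappers = wrappers % rate + free   # leftovers plus the new wrappers
  end
  bars
end

puts total_chocolates(45)   # => 22 (with one wrapper left over)
```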
http://www.mindcipher.com/puzzles/146
Leeds Latitude : 53.7880258
Leeds Longitude : -1.5472628
Leeds Population : 1529000

Who called me from Leeds ???

Your information about who called from number 011332811, and whose telephone number this is, for free. Someone could have contacted you from a hotel regarding a vacation or holiday, from a service company regarding a service you ordered, or a courier could have called about a parcel purchased on Allegro or another online store. From what network did they call me?

Non stop calling
hospitals alexandra rehabilitation centre midhurst
wayne beckett builders coventry cv2 1ny
this number has called three times in the last 24 hours latest at 21.09
phone rings continuously if you pick up all you get is loud burr.. cannot cut connection goes on for abo...
baker bro mckay sustainable designs
I am not well and this is getting beyond a joke. My family and friends know not to call me early because I am trying to recover from cancer but although I could Ignore the calls and let it go to answerphone it wakes me. readman associates guisborough architectural and engineering activities and relat westminster health care ltd home care services warwick cv34 4de Glasgow powder coatings p t finishings ltd haywards heath SCAM Indian lady asking about mobile phone from a loud call centre. p p finishers ltd print finishers west bromwich b70 6as r d o discount kitchen appliances reigate estate agents woolwich property services (London) sam heffer caterham surrey Fraud bit coin chancer leon jaeggi and sons ltd ashford manufacture of machinery for food, beverage and to flooring services city flooring ltd (London) auto dealers west kent kia tunbridge wells palace hotel inverness h hotels and restaurants ian garrett building design lowestoft k business services debt collection Fraud royal dragon - take away Bradford carpets, rugs and matting bradleys carpets newton-le-willows merseyside pharmacy 4 u Pembridge import and export agents asgm shipping ltd (London) unisex hairdressers hairgraphics hair designs wolverhampton femcare ltd nottingham 3310 hill oldridge ltd london 6523012 financial advisers nursing homes the jasmine centre westgate on sea Premium APP Say goodbye to No Caller ID. Unmask all withheld / no caller id calls instantly. SCAM - Automated voice saying my National Insurance number has been compromised. DO NOT reply to this call!! Claims to be Carphone Warehouse. Hung up when I adked for my number to be removed. SCAM SCAM - mastercard/visa recorded message. 0014505322460 - BT internet scam Left voicemail no talking called number back from voicemail person answered denied calling me Possible Fraud Text 24 March 2021: Royal Mail: Your Package has been held and will not be delivered due to a £1.99 unpaid shipping fee. 
To pay this now, visit: SCAM Fraud SCAM SCAM SCAM SPAM Computer generated revenue investigation. SPAM SCAM Silent call SCAM SCAM SCAM Silent call You will see opinions if that are hidden.
https://whocalledmeuk.co.uk/phone-number/01133281114
Injection woes, by texan (Aug 15, 2006 2:41 PM)

How about adding a "@Singleton" annotation to Seam? Right now, I use this pattern:

@In(create = true)
@Out(scope = APPLICATION)
private SomeClass myClass;

Obviously, the target class itself isn't implemented as a singleton in this case, since it needs a public constructor. My point is, why perform an "outjection" every time when I'll only ever create it once? That is, the "@Out" (and create=true) is only truly needed once. I have no interest in reassigning "myClass" to a new instance and having that one pushed into the context. In fact, I'd like an error if I tried, preferably even a compiler error. [Side note: can an annotation cause an instance variable to be redefined as "final"?]

I'm being picky, but it would be nice to be able to do this:

@Singleton
private SomeClass myClass;

or even

@Singleton(method = "getInstance")
private SomeClass myClass;

or possibly

@Singleton
private final SomeClass myClass;

(if the annotation itself can't enforce the "final" notion)

I would be quite content to do this myself, if I knew how. I'm assuming that, in addition to creating the annotation interface, I'd need to create an interceptor that is always called when my session beans are instantiated, and that interceptor would need to scan the member variables for this annotation and do the work of checking for an existing instance in the app context, etc. Presumably it would use the fully qualified class name as the context key rather than the instance variable name. Sadly, I haven't delved into interceptor creation just yet... Besides, this seems (in my selfish opinion) to be a Seam-worthy annotation.

1. Re: Injection woes, by ssilvert (Aug 15, 2006 7:38 PM, in response to texan)

If it is in application scope then it isn't really a singleton. If it is in application scope that doesn't mean there is only one instance per web app. It just means that there is only one instance at a time. There is nothing to stop code from changing the value of "myClass" that is stored in application scope.

What might be interesting is a new scope called SINGLETON. The rule would be that you can add to the SINGLETON scope but you can't remove or replace anything. So you would have:

@In(create = true)
@Out(scope = SINGLETON)
private SomeClass myClass;

Outjection would only happen if the myClass instance got created. Would that satisfy your use case?

Stan

2. Re: Injection woes, by texan (Aug 16, 2006 12:23 AM, in response to texan)

It would get the job done, but my goal also was to reduce it to one annotation, since @Singleton really says it all. :-)

BTW, I had a terrible time with the following pattern today when referencing a simple Java object that doesn't have a Seam @Name tag:

@In(create = true)
@Out(scope = SESSION)
private DocumentManager docMgr;

I'm not including the DocumentManager code, as it's just a Java class with a simple public constructor. I kept getting the error where it was complaining about the docMgr attribute being required but not being initialized (I don't have the log file handy right now). Anyway, the only way I could make it work was to add a @Name tag to the DocumentManager class. This was even more frustrating because directly beneath those lines was the following block of code:

@In(create = true)
@Out(scope = SESSION)
private QCCache qcCache;

And this worked fine without any changes! (And QCCache has no @Name annotation either.) Very odd that it worked for one case and not another... I was sort of assuming that Seam would be following the "use an intuitive default" approach and would use the variable's name as the context key, without having to instrument the referenced object. After all, I might want to reference a java.util.Date object, and I can't add a @Name annotation to that without subclassing it. Any ideas? Am I just abusing the bijection feature?

3. Re: Injection woes, by pmuir (Aug 16, 2006 9:38 AM, in response to texan)

AFAIK POJOs without a @Name annotation cannot be injected. I'm surprised it worked for qcCache. I would suggest you look at @Unwrap and the component manager pattern, which allows Seam to manage the lifecycle of a non-Seam component (which is how quite a lot of the Seam core components work). So yes, I would say you are abusing bijection :p

4. Re: Injection woes, by ssilvert (Aug 16, 2006 10:26 AM, in response to texan)

"ptmain" wrote: It would get the job done, but my goal also was to reduce it to one annotation, since @Singleton really says it all. :-)

I don't think @Singleton really says it all. There is a difference between:

Inject from any scope, create if needed, and outject to SINGLETON scope:

@In(create = true)
@Out(scope = SINGLETON)
private DocumentManager docMgr;

Inject from any scope, outject to SINGLETON scope:

@In
@Out(scope = SINGLETON)
private DocumentManager docMgr;

Inject from any scope, one of which could be SINGLETON:

@In
private DocumentManager docMgr;

Don't inject, but outject to SINGLETON scope:

@Out(scope = SINGLETON)
private DocumentManager docMgr;

I don't think this can/should be reduced to one annotation. Just my humble opinion.

The more I think about it, the more I like the idea of a SINGLETON scope. Most of the time when you use application scope you really want a web app singleton. But application scope allows the singleton instance to be replaced or removed, which could be a source of errors.

Stan

5. Re: Injection woes, by texan (Aug 16, 2006 10:48 AM, in response to texan)

I really have gone through the Seam Reference, but the @Unwrap and @Factory methods still confuse me. I'm just not clear on what they do. I think that @Factory is just an initializer method, so that the first time an instance variable is accessed and is null, if there is a @Factory method it will be called. (Is that right?) @Unwrap baffles me, though. The description in the reference implies that if I have an @Unwrap method in a BlogService component named "blog", then whenever I reference "blog" I'm actually getting an instance of a Blog instead (I'm using an example from the tutorial). In a later example, the @Unwrap method returns a list of BlogEntity objects, which is even stranger.

Tell me if I have this right: the "component manager pattern" is being used in the Blog example as a way to magically initialize data without explicitly doing so. So, if I reference the component "blog", the BlogService actually fetches the "real" component from the database and that is what is returned. Similarly with the search: rather than the client getting a reference to the SearchService, it's actually getting a reference to the search results List. Hmm, I think I just answered all of my questions about that.

In my original example, I think I will edit those classes that are within my project to make them Seam objects (@Name) and just move on with my life, and those outside of my project I may just subclass since they're not entity beans. Or maybe I'll give up on my quest for the @Singleton annotation (which I still say would be really useful) and just use MyClass.getInstance().whatever(). I was mostly trying to see whether the injection hammer would be a good tool for banging the singleton nail (which turned out to be a staple and was just mangled). However, the @Unwrap annotation is pretty slick! Somehow I passed right over it on my initial pass through the tutorial.

6. Re: Injection woes, by pmuir (Aug 16, 2006 11:17 AM, in response to texan)

@Name("myComponent")
public class MyComponentManager {
    private MyComponent myComponent;

    public MyComponentManager() {
        myComponent = MyComponentFactory.getInstance();
    }

    @Unwrap
    public MyComponent getMyComponent() {
        return myComponent;
    }
    ...
}

@In(create = true)
private MyComponent myComponent;
...

Where MyComponent is a third-party non-Seam-managed interface (e.g. something from your reporting library) and MyComponentFactory retrieves an implementation of MyComponent. HTH

7. Re: Injection woes, by texan (Aug 16, 2006 11:40 AM, in response to texan)

Looks pretty straightforward. Thanks!
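Stan's proposed SINGLETON scope (add-only, never replace or remove) can be illustrated outside of Seam with a write-once registry. This is a plain-Java sketch of the proposed semantics only; it is not Seam API, and the class and method names here are invented for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Models the proposed SINGLETON scope: values may be added once, but any
// attempt to replace an existing entry is rejected, unlike APPLICATION scope.
public class SingletonScope {
    private final Map<String, Object> entries = new ConcurrentHashMap<>();

    // Returns the existing value if one is already bound, otherwise binds the
    // supplied value. Either way, the caller gets the one true instance.
    public Object bind(String name, Object value) {
        Object existing = entries.putIfAbsent(name, value);
        return existing != null ? existing : value;
    }

    // Replacement is an error: this is the guarantee APPLICATION scope lacks.
    public void rebind(String name, Object value) {
        if (entries.containsKey(name)) {
            throw new IllegalStateException(name + " is already bound in SINGLETON scope");
        }
        entries.put(name, value);
    }

    public Object lookup(String name) {
        return entries.get(name);
    }
}
```

Under these rules the first outjection wins and later ones either return the existing instance or fail loudly, which is the behavior the thread is asking for.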
https://developer.jboss.org/message/456266
- You need to do your drawing in the WM_PAINT message handler.
- Okay, I found a solution. I use FillRectangle of the Graphics class: using (Graphics g = e.Graphics) { Pen WhitePen = new Pen(Color.White, 1); ...
- I'm new to C#. In C++ I have made a window and painted it with dots and concentric circles, like a radar PPI screen. Trying to do this in C#, I can't find how to draw a single pixel on the window. ...
- You do realize that your input expression is not just numbers. It's numbers and operators (+, -, ...). You can't just add expr[i] to a sum.
- What?! You mean that is still the case? I thought they changed this after I was out of school? :D Get a government or military job. Your head will explode. :D
- Without seeing what your code looks like, it will be almost impossible to answer.
- I don't know the reason it doesn't work, but your code will work if you change the '+' and '-' to a different char. I changed + to p and - to m and it worked. You might try reading one line at a...
- I solved it. It was pretty easy. :)
- There is no outputting of any kind in that code, so an empty output file would be expected.
- For one, if the function is needed multiple times, you save on code size (no repeated code) by using a function. Which makes for easier maintainability of the code.
- A problem I see: #define MAX_N 10  double integral_yn[MAX_N]  for(n=1; n<10; n++)
- The C version of atoi is used just like the C++ version. I'm not sure what you mean by using it as an alternative to printf.
- You can define them as floats. It just depends on the size of number you expect to fill it.
- How much are you paying?
- Discount1 and Discount2 are not ints, but you declared them as ints.
- From localtime - C++ Reference: /* localtime example */ #include <stdio.h> #include <time.h> int main () {
- Your scanf line is waiting for user input. It's not adding elements to the array, which I think is what you were wanting. But just fixing that isn't going to solve all your problems. I think you...
- On line 123 and 123 within your loop you set max and min = gpa. This gpa will be the second student on the second iteration of the loop. Try initializing min and max outside the loop.
- For one, declare your floats dtl, has, and dbb above the switch block. As in: float dtl; float has; float dbb;
- Obviously you don't yet understand about classes in C++. You should go through some simple examples (actually typing in the code and compiling and running, even modifying) to thoroughly understand...
- I'll bet your final_hours is not > 1300. That's a lot of hours.
- Look at strcat.
- Sorry, I don't have a microscope so I can't read the code. Have you tried setting break points and debugging the code?
https://cboard.cprogramming.com/search.php?s=a1382b1b188d4c36ee3f084297be847a&searchid=3601235
Posted by Kelvin on 16 Sep 2013 | Tagged as: Lucene / Solr / Elastic Search / Nutch

Solr has a number of autocomplete implementations which are great for most purposes. However, a client of mine recently had some fairly specific requirements for autocomplete:

1. phrase-based substring matching
2. out-of-order matches ('foo bar' should match 'the bar is foo')
3. fallback matching to a secondary field when substring matching on the primary field fails, e.g. 'windstopper jac' doesn't match anything on the 'title' field, but matches on the 'category' field

The most direct way to model this would probably have been to create a separate Solr core and use ngram + shingle indexing and Solr queries to obtain results. However, because the index was fairly small, I decided to go with an in-memory approach. The general strategy was:

1. For each entry in the primary field, create ngram tokens, adding entries to a Guava Table, where the key was the ngram, the column was the string, and the value was a distance score.
2. For each entry in the secondary field, create ngram tokens and add entries to a Guava Multimap, where the key was the ngram and the value was the term.
3. When an autocomplete query is received, split it by space, then do lookups against the primary Table.
4. If no matches are found, look up against the secondary Multimap.
5. Return results.

The scoring for the primary Table was a simple one based on the length of the word and the distance of the token from the start of the string.

Posted by Kelvin on 09 Sep 2013 | Tagged as: Lucene / Solr / Elastic Search / Nutch

In this post, I'll show you what you need to do to implement a custom Solr QueryParser.

Extend QParserPlugin:

@Override
public QParser createParser(String s, SolrParams localParams, SolrParams params, SolrQueryRequest req) {
    return new TestQParser(s, localParams, params, req);
}

This is the class you'll define in solrconfig.xml, informing Solr of your query parser. Define it like so:

Extend QParser:

@Override
public Query parse() throws SyntaxError {
    return null;
}

Actually implement the parsing in the parse() method. Suppose we wanted to make a really simple parser for term queries, which are space-delimited. Here's how I'd do it:

In your query, use the nested query syntax to call your query parser, e.g.

Maybe in a follow-up post, I'll post the full code, jars and all.

Posted by Kelvin on 09 Sep 2013 | Tagged as: Lucene / Solr / Elastic Search / Nutch, programming

I've just spent the last couple of days wrapping my head around implementing Latent Semantic Analysis, and after wading through a number of research papers and quite a bit of linear algebra, I've finally emerged on the other end, and thought I'd write something about it to lock the knowledge in. I'll do my best to keep it non-technical, yet accurate.

Input: documents
Output: term-document matrix

Latent Semantic Analysis has the same starting point as most Information Retrieval algorithms: the term-document matrix. Specifically, columns are documents and rows are terms. If a document contains a term, then the value of that row-column is 1, otherwise 0. If you start with a corpus of documents, or a database table or something, then you'll need to index this corpus into this matrix: lowercasing, removing stopwords, maybe stemming etc. The typical Lucene/Solr analyzer chain, basically.

Input: term-document matrix
Output: 3 matrices, U, S and V

Apply Singular Value Decomposition (SVD) to the matrix. This is the computationally expensive step of the whole operation. SVD is a fairly technical concept and quite an involved process (if you're doing it by hand). If you do a bit of googling, you're going to find all kinds of mathematical terms related to this, like matrix decomposition, eigenvalues, eigenvectors, PCA (principal component analysis), random projection etc.
The 5-second explanation of this step is that the original term-document matrix gets broken down into 3 simpler matrices: a term-term matrix (also known as U, or the left matrix), a matrix comprising the singular values (also known as S), and a document-document matrix (also known as V, or the right matrix).

Something which usually also happens in the SVD step for LSA, and which is important, is rank reduction. In this context, rank reduction means that the original term-document matrix gets somehow "factorized" into its constituent factors, and the k most significant factors or features are retained, where k is some number greater than zero and less than the original size of the term-document matrix. For example, a rank-3 reduction means that the 3 most significant factors are retained. This is important for you to know because most LSA/LSI applications will ask you to specify the value of k, meaning the application wants to know how many features you want to retain.

So what's actually happening in this SVD rank reduction is basically an approximation of the original term-document matrix, allowing you to compare features in a fast and efficient manner. Smaller k values generally run faster and use less memory, but are less accurate. Larger k values are more "true" to the original matrix, but take longer to compute. Note: this statement may not be true of the stochastic SVD implementations (involving random projection or some other method), where an increase in k doesn't lead to a linear increase in running time, but more like a log(n) increase.

Input: query string
Output: query vector

From here, we're on our downhill stretch. The query string needs to be expressed in terms that allow for searching.
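As a concrete illustration of that step, here is a minimal sketch that folds a query string into a binary vector over a fixed vocabulary, using the same 0/1 weighting as the term-document matrix described earlier. The whitespace tokenization is deliberately naive, and none of this is the original post's code:

```java
import java.util.Arrays;
import java.util.List;

public class QueryVectorizer {
    private final List<String> vocabulary;

    public QueryVectorizer(List<String> vocabulary) {
        this.vocabulary = vocabulary;
    }

    // Produces a binary query vector: 1.0 where the query contains the
    // vocabulary term, 0.0 elsewhere, mirroring the 0/1 weighting of the
    // term-document matrix. Real analyzers would also stem, strip stopwords, etc.
    public double[] vectorize(String query) {
        List<String> tokens = Arrays.asList(query.toLowerCase().split("\\s+"));
        double[] vector = new double[vocabulary.size()];
        for (int i = 0; i < vocabulary.size(); i++) {
            vector[i] = tokens.contains(vocabulary.get(i)) ? 1.0 : 0.0;
        }
        return vector;
    }
}
```

In a full LSA implementation this raw vector would additionally be projected into the reduced k-dimensional space before comparison.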
Input: query vector, document matrix
Output: document scores

To obtain how similar each document is to the query, aka the doc score, we go through each document vector in the matrix and calculate its cosine distance to the query vector. Voila!

Posted by Kelvin on 05 Sep 2013 | Tagged as: Lucene / Solr / Elastic Search / Nutch

Solr makes spellcheck easy. Super-easy, in fact. All you need to do is change some stuff in solrconfig.xml, and voila: spellcheck suggestions! However, that's not how Google does spellchecking. What Google does is determine if the query has a misspelling, and if so, transparently correct the misspelled term for you and perform the search, while also giving you the option of searching for the original term via a link. Now, whilst it'd be uber-cool to have an exact equivalent in Solr, you'd need some statistical data to be able to perform this efficiently. A naive version is to use spellcheck corrections to transparently perform a new query when the original query returns fewer than x hits, where x is some arbitrarily small number.

Here's a simple SearchComponent that does just that:

import org.apache.solr.common.util.NamedList;
import org.apache.solr.handler.component.QueryComponent;
import org.apache.solr.handler.component.ResponseBuilder;

import java.io.IOException;

public class AutoSpellcheckResearcher extends QueryComponent {
    // if fewer than *threshold* hits are returned, a re-search is triggered
    private int threshold = 0;

    @Override
    public void init(NamedList args) {
        super.init(args);
        this.threshold = (Integer) args.get("threshold");
    }

    @Override
    public void prepare(ResponseBuilder rb) throws IOException {
    }

    @Override
    public void process(ResponseBuilder rb) throws IOException {
        long hits = rb.getNumberDocumentsFound();
        if (hits <= threshold) {
            final NamedList responseValues = rb.rsp.getValues();
            NamedList spellcheckresults = (NamedList) responseValues.get("spellcheck");
            if (spellcheckresults != null) {
                NamedList suggestions = (NamedList) spellcheckresults.get("suggestions");
                if (suggestions != null) {
                    final NamedList collation = (NamedList) suggestions.get("collation");
                    if (collation != null) {
                        String collationQuery = (String) collation.get("collationQuery");
                        if (responseValues != null) {
                            responseValues.add("researched.original", rb.getQueryString());
                            responseValues.add("researched.replaced", collationQuery);
                            responseValues.remove("response");
                        }
                        rb.setQueryString(collationQuery);
                        super.prepare(rb);
                        super.process(rb);
                    }
                }
            }
        }
    }

    @Override
    public String getDescription() {
        return "AutoSpellcheckResearcher";
    }

    @Override
    public String getSource() {
        return "1.0";
    }
}

Posted by Kelvin on 23 May 2013 | Tagged as: Lucene / Solr / Elastic Search / Nutch

Just got my hands on a review copy of PacktPub's ElasticSearch Server book, which I believe is the first ES book on the market. Review to follow shortly.

Posted by Kelvin on 30 Apr 2013 | Tagged as: Lucene / Solr / Elastic Search / Nutch

Lucene 4.1 introduces new files in the index. Here's a link to the documentation:

The different types of files are:

.tim: Term Dictionary
.tip: Term Index
.doc: Frequencies and Skip Data
.pos: Positions
.pay: Payloads and Offsets

Posted by Kelvin on 03 Apr 2013 | Tagged as: Lucene / Solr / Elastic Search / Nutch

For an app I'm working on, the permissions ACL is stored in a string of category=level pairs (see the example below). Both users and documents have an ACL string. The number represents the access level for that category. Bigger numbers mean higher access.

In the previous Lucene-based iteration, to perform permission filtering, I just loaded the entire field into memory and did quick in-memory lookups. In this current iteration, I'm trying something different. I'm creating one field per category, and populating the field values accordingly. Then when searching, I need to search across all the possible categories using range queries, including specifying empty fields where applicable. Works pretty well. The main drawback (and it's a severe one) is that I need to know a priori all the categories. This is not a problem for this app, but might be for other folks.

Here's an example of how it looks.

Document A: user=300|moderator=100

maps to:

acl_user:300 acl_moderator:100

User A: moderator=300

Filter query to determine if User A can access Document A:

-acl_user:[* TO *] acl_moderator:[0 TO 300]

Grrrr... keep forgetting the Solr DateField date format, so here it is for posterity: 1995-12-31T23:59:59Z (the ISO 8601 pattern yyyy-MM-dd'T'HH:mm:ss'Z', always UTC).
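For the permissions post above, the mapping from an ACL string to a Solr filter query can be sketched as a small helper. It assumes the pipe-delimited category=level format shown in the Document A example and a known, fixed list of categories (the post's stated limitation); it is illustrative, not the author's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AclFilterBuilder {
    // Parses "user=300|moderator=100" into {user=300, moderator=100}.
    static Map<String, Integer> parseAcl(String acl) {
        Map<String, Integer> levels = new LinkedHashMap<>();
        for (String pair : acl.split("\\|")) {
            String[] kv = pair.split("=");
            levels.put(kv[0], Integer.parseInt(kv[1]));
        }
        return levels;
    }

    // Builds the filter query for a user ACL against the known categories:
    // categories the user lacks must be empty on the document, and categories
    // the user has are matched with a [0 TO level] range.
    static String buildFilterQuery(String userAcl, String[] allCategories) {
        Map<String, Integer> levels = parseAcl(userAcl);
        StringBuilder fq = new StringBuilder();
        for (String category : allCategories) {
            if (fq.length() > 0) fq.append(' ');
            Integer level = levels.get(category);
            if (level == null) {
                fq.append("-acl_").append(category).append(":[* TO *]");
            } else {
                fq.append("acl_").append(category).append(":[0 TO ").append(level).append(']');
            }
        }
        return fq.toString();
    }
}
```

buildFilterQuery("moderator=300", new String[]{"user", "moderator"}) reproduces the post's example filter query.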
http://www.supermind.org/blog/category/lucene-solr-elasticsearch-nutch
In this codelab, you'll learn how to use intents to make your application a better "citizen" of the device, one that can perform functions for other applications and respond to system alerts. You'll do this by taking a basic media player application and gradually making it better. Import the begin directory from the sample code folder (Quickstart > Import Project... > begin).

Imagine: There you are, walking down the street, music from your swanky new Android phone pumping through your earbuds, experiencing sheer sonic euphoria as the world around you seems to thump and clang in time to the beat. You pass by a library and remember those books you need to drop off. As you step inside, even through the music, even through your headphones, you can hear the stillness of the place. The silence is a powerful thing, in a place where knowledge is revered. You tread carefully. Well, you try to tread carefully. As you round a corner, a pencil sharpener rebelliously hooks the cable of your earbuds and yanks them out of your phone. Sound leaks out, BLASTS out, and that beautiful silent stillness is washed away in the cacophony of the latest Skrillex/Celine Dion mashup to top the charts. Embarrassment turns to terror as your eyes meet those of The Librarian, who, at this point, is singular in both rage and purpose. Children cry out in fear at that look of fury (or perhaps your ludicrous taste in music). Helpless, ashamed, and afraid, you run, vowing never to return.

The problem? Your media player is a bad app, and it has failed you. What it should have done is recognize when a headset was unplugged, and pause the music before it blasted out your speakers at a much higher and less considerate volume. Alas! It doesn't know how. Let's fix that.

The first thing your app needs to do is listen for an "Audio Becoming Noisy" event. In the Android Framework, these are sent by a Broadcast Intent. A Broadcast Intent is just a message that any app on the system can pick up.
Broadcast intents are used by the system to alert apps to system-level events, like the device running low on memory or being plugged into a charger. They're different from implicit intents because multiple apps can respond to the same event. While you don't want 8 camera apps opening every time you fire an intent to take a picture, you do want 8 currently running apps to quiet down when you receive a call, or shrink their memory footprint when the device is low on memory.

In this case, the media player needs to listen for an AudioManager.ACTION_AUDIO_BECOMING_NOISY intent, which indicates that things are about to get loud thanks to a change in audio output. The first thing to do is write a BroadcastReceiver, which can listen for this intent. Since this BroadcastReceiver will only be interacting with the ArtistActivity class, open that up and let's add an inner class near the bottom.

/**
 * Listens for situations where the audio is about to become suddenly noisy, like headphones
 * being unplugged from the device.
 */
public class AudioEventReceiver extends BroadcastReceiver {
    public AudioEventReceiver() {
    }

    @Override
    public void onReceive(Context context, Intent intent) {
    }
}

Excellent! Now let's fill in that "onReceive" method. This method is where you analyze the broadcast intent and react to it in some useful way. To inspect the intent, first access the "action" of that intent. This receiver should react to an AudioManager.ACTION_AUDIO_BECOMING_NOISY action, so compare the action you extract from the intent against that.

@Override
public void onReceive(Context context, Intent intent) {
    if (AudioManager.ACTION_AUDIO_BECOMING_NOISY.equals(intent.getAction())) {
        onReallyGoodReasonToPauseMusic();
    }
}

That onReallyGoodReasonToPauseMusic method, brilliantly named as it is, doesn't actually exist yet in the parent Activity. We need to add a quick method to the ArtistActivity so that it can look up the fragment that handles playback, and send the pause command along to it. Add the following method to your Activity to fix our little plumbing issue.

/**
 * Callback for the broadcast receiver to pause music when necessary. The Activity should know
 * to punt this responsibility to a specific fragment, but the broadcast receiver shouldn't
 * have to worry about such details.
 */
public void onReallyGoodReasonToPauseMusic() {
    // A really good reason to pause music has occurred. How should we react?
    // Impromptu dance party? Pause the music? Heckle innocent codelab authors? Wait, let's
    // go back one. Pause the music! Yeah! First get a reference to the fragment that manages
    // playback.
    PlaylistFragment playlistFragment = (PlaylistFragment) getSupportFragmentManager()
            .findFragmentById(R.id.playlist_fragment);
    // Now... what to do? WHAT TO DO?
    playlistFragment.pause();
}

At this point, you're probably itching to try out all this awesome new code you just wrote. Right there with you, reader. I admire your attitude. If you go ahead and open the app now, and plug/unplug your earbuds from the device, you'll notice that IT DOES NOTHING. Why? Why does it do nothing? Because we haven't registered the BroadcastReceiver yet. The system has to be aware that there's a BroadcastReceiver in place, and what kind of broadcast intents should go to it. Otherwise it's a lump of dead code sitting in your application, waiting for a purpose that will never come. Depressing, right? Let's fix that!

In ArtistActivity's onResume method, add a couple of lines that create a new AudioEventReceiver and register it with the system. The registering method also takes an IntentFilter as a parameter, which lets us specify what KIND of events we want to receive. Which is great, because it means the code doesn't have to fire up every time we get a low battery warning, receive an SMS message, or anything else happens that we have absolutely no interest in. In onResume, add this:

mReceiver = new AudioEventReceiver();
registerReceiver(mReceiver, new IntentFilter(AudioManager.ACTION_AUDIO_BECOMING_NOISY));

Correspondingly, we want to unregister this receiver when ArtistActivity is no longer active, because the way this media player is set up, that means music isn't playing anymore. You'll want to unregister by overriding onPause and calling unregisterReceiver, like so:

@Override
protected void onPause() {
    unregisterReceiver(mReceiver);
    super.onPause();
}

Now compile your app and run it. Plug some headphones into your device. When you unplug them, the music should pause. Vicious library scenario averted! Hooray!

So now you have a media player that not only knows how to perform its basic purpose in life, but also how not to be totally selfish and irritating. Congratulations, your app is... adequate. A solid 3 stars out of 5. If you were to publish at this point, you would get such glowing reviews as "did not offend me with its painful mediocrity" or "I didn't uninstall it until a couple days after I found something better." Deep down you'd know it could be better. Deep down, you'd know that 5-star rating with the review ".r@@ " was a toddler who got ahold of his parent's phone and slapped the touch screen a few times to see what noise it would make before hurling it across the room. You didn't earn those 5 stars. Not... Not really. Not yet.

What's missing? Well, there are two parts to good app citizenship. In the previous section we covered listening for broadcast intents to make sure we're not being a public nuisance to the "community" (whatever, we're stretching the analogy, just go with it). The second part is giving back. Helping little old apps cross the street. Pitching in. Doing your part. Letting your app be a resource.

Imagine for a moment what it would be like if your media player could respond to voice commands. "Play Dual Core" would kick off some tunes by your favorite Nerdcore artists.
"Play something awesome" would pick music that your app had decided was just really great. But your media player doesn't respond to media related intents, let alone voice commands, so it wallows in it's own siloed feature set. And for a media player, that's just really sad. Let's fix that. An intent filter tells the system what types of intents an app can handle. Intent filters are declared in the app's AndroidManifest.xml file. Open that file and add an "intent-filter" tag. It should go after the existing intent-filter tag, as a child of the first Activity tag. Here's what it should look like. <activity android: ... <intent-filter> <action android: <category android: <category android: </intent-filter> ... </activity> Okay, what did we just do? That intent filter specifies that your application's MainActivity can respond to an intent of the type "android.media.action.MEDIA_PLAY_FROM_SEARCH". Which, according to this list of common Android intents, is an intent to play music based on a search query. Adding an intent to your manifest like that tells the system, "This app is ready and able to respond to this intent". It's a promise that we'll do something useful and relevant to the request. Not just throw up an ad for your completely unrelated cheese shop. Now that we've promised the Android Framework that we know how to play media from search, we should probably write some code to handle playing media from search. Open up MainActivity.java in Android Studio. Add code to the onCreate method that ArtistActivity This is how you'd do that. // Get the intent that launched this Activity. Intent intent = getIntent(); // Check the intent for an artist name String artistName = intent.getStringExtra(MediaStore.EXTRA_MEDIA_ARTIST); // If the artist isn't null, that means this activity was started with the intention of // playing music by this specific artist. Don't wait around for the user, just start // playing stuff by this artist. 
if (artistName != null) { MyMusicStore.Artist artist = mMyMusicStore .getArtist(artistName); if (artist != null) { // create another intent to launch ArtistActivity Intent artistIntent = new Intent(MainActivity.this, ArtistActivity.class); // get stored artist URL artistIntent.setData(Uri.parse(artist.getUrl())); startActivity(artistIntent); } } Ready to ship? Easy there code cowboy! Let's make sure this thing actually works first. Build the app and install it on an Android device. Then in the Google Search bar, type "play dual core". Or tap the microphone and SAY "play dual core", if you want to feel like a mad scientist. There should be a play icon with a small gray arrow in your search results. That's because there are probably multiple apps on your device that can handle this request. Pick one! No, you know what? Pick yours. If all has gone according to plan, your phone should be playing music, because you told it to. From outside the app. A high quality Android app doesn't exist in a vacuum. It can respond to the needs of the system it inhabits, whether that means knowing when to get out of the user's way or extending the functionality of other applications. You took a basic application and made it a thoughtful one. By taking this mindset of good citizenship and extending it to your own applications, you'll find that user satisfaction and the longevity of an app install will increase over time. When you have a spare moment, we'd really appreciate if you could fill out some feedback on your codelab experience. We'll use this information to iterate and improve the codelab over time.
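The system behavior this codelab leans on, matching an intent's action string against each registered filter and delivering the broadcast to every match, can be modeled without the Android framework. The following is a toy, framework-free sketch of that matching rule only; none of these classes or methods are Android API, and the names are invented:

```java
import java.util.ArrayList;
import java.util.List;

public class IntentDispatchSketch {
    // A registered receiver: the action its filter accepts, plus a label so
    // we can see who would have been invoked.
    static class Registration {
        final String action;
        final String label;
        Registration(String action, String label) {
            this.action = action;
            this.label = label;
        }
    }

    private final List<Registration> registrations = new ArrayList<>();

    void register(String action, String label) {
        registrations.add(new Registration(action, label));
    }

    // Mirrors broadcast delivery: EVERY registration whose filter matches the
    // action receives the intent. (Implicit activity intents differ: there the
    // user picks a single handler from the matching set, as in the codelab.)
    List<String> deliver(String action) {
        List<String> delivered = new ArrayList<>();
        for (Registration r : registrations) {
            if (r.action.equals(action)) {
                delivered.add(r.label);
            }
        }
        return delivered;
    }
}
```

The contrast between "deliver to all matches" (broadcasts) and "let the user choose one match" (the play icon with the gray arrow) is the key distinction the codelab draws between its two halves.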
https://codelabs.developers.google.com/codelabs/app-citizenship-with-intents/index.html?index=..%2F..%2Findex
#include <TimerEvent.h>

List of all members. See EventRouter's class documentation for discussion of how to request and use timers. Definition at line 12 of file TimerEvent.h.

[inline] assignment operator, does a shallow copy (copies pointer value, doesn't try to clone target!). Definition at line 21 of file TimerEvent.h.

[inline, virtual] from EventBase. Definition at line 23 of file TimerEvent.h.

Definition at line 25 of file TimerEvent.h.

returns target. Definition at line 27 of file TimerEvent.h.

assigns tgt to target. Definition at line 28 of file TimerEvent.h.

[virtual] generates a description of the event with variable verbosity. Definition at line 11 of file TimerEvent.cc.

should return the minimum size needed if using binary format (i.e. not XML). Definition at line 25 of file TimerEvent.cc.

load from binary format. Definition at line 36 of file TimerEvent.cc.

save to binary format. Definition at line 47 of file TimerEvent.cc.

load from XML format. Definition at line 55 of file TimerEvent.cc.

save to XML format. Definition at line 81 of file TimerEvent.cc.

[protected] indicates the listener for which the timer was created. Definition at line 39 of file TimerEvent.h. Referenced by getBinSize(), getDescription(), getTarget(), loadBinaryBuffer(), loadXML(), operator=(), saveBinaryBuffer(), saveXML(), and setTarget().

[static, protected] causes class type id to automatically be registered with EventBase's FamilyFactory (getTypeRegistry()). Definition at line 42 of file TimerEvent.h. Referenced by getClassTypeID().
http://www.tekkotsu.org/dox/classTimerEvent.html
use a class which inherits the PlasmaScripting::Applet class. You can think of this as the main class for your applet. The class will always have at least two methods, initialize and init. While these two are very similar, there is an important distinction. The initialize method is Ruby's default constructor. It will be called by the Ruby interpreter when an object of your class gets initialized (hence the name). The init method, on the other hand, gets called by Plasma. Plasma calls this method after the applet has been loaded. You can therefore assume that everything is set up when init is called, while with initialize, you only know your applet class is ready.

Before you can open your main class, require 'plasma_applet' and open a module named after your applet. A name in metadata.desktop of ruby-test translates to a module name of RubyTest. The only line in the init method for now is set_minimum_size. This, as the name suggests, sets the minimum size for the applet.

To run your applet, you need to install it. This is where plasmapkg comes in. It's a small tool which installs and upgrades Plasma packages. To put your code in a package, create a new folder and put a metadata.desktop file in it (as shown above). Now put your code in a file called main.rb in contents/code. cd out of the folder and run plasmapkg -i <foldername>.

To see your applet running, there is a tool called plasmoidviewer. As an argument it takes the name of your Plasma applet specified with X-KDE-PluginInfo-Name in your metadata.desktop file. Launch plasmoidviewer and see your still empty Plasma applet running.

plasmoidviewer ruby-test

As described earlier, plasmapkg can install and upgrade Plasma packages. Since you've now installed your applet, you need to upgrade it after making changes. This isn't much different than installing; just use the -u commandline switch instead of -i.

Now that you have a very basic applet running, you can go two ways. You can put QWidgets on your applet, or draw your applet yourself by implementing paintInterface.
I like to use standard widgets most of the time, so we're going to place some QWidgets on the Plasma applet. Plasma has a couple of themed widgets. A list can be found in the Plasma API [1]. To place these widgets on your applet, you need a layout. The layout you'll be using most of the time will be the GraphicsLinearLayout, which basically puts your widgets in a horizontal or vertical line. Let's start by putting a Label on the applet and putting it in a GraphicsLinearLayout.

require 'plasma_applet'

module RubyTest
  class Main < PlasmaScripting::Applet
    def initialize parent
      super parent
      @parent = parent
    end

    def init
      set_minimum_size 150, 150
      label = Plasma::Label.new @parent.applet
      label.text = 'This is a label on a plasmoid, hello Plasma!'
      layout = Qt::GraphicsLinearLayout.new @parent.applet
      @parent.applet.layout = layout
      layout.add_item label
    end
  end
end

In order to place widgets on the applet, we need the applet's parent. This parent has a method called applet, which gives us a reference to the 'real' applet. To get to this applet, we save the parent in an instance variable [2]. In the init method we create a Plasma::Label and assign it a text. Before we add more widgets, let's do a small clean up. In the previous example we used @parent.applet a lot. We can clean this up by defining a method called applet which returns @parent.applet.

def applet
  @parent.applet
end

(Note that with Ruby, it's not required to specify the return keyword. This can be a bit confusing. It can also make a method very clear; I think this method is a good example of such a case. You are, of course, free to put a return in front of the single line in the method.)

To add a line edit widget we can use the Plasma::LineEdit class. This class is a basic KLineEdit (a text field of one line) themed for Plasma. To add it to our applet, we do the same as with the label. But since we want the line edit to be empty, we don't set a text. Next up is a push button. We can use the Plasma::PushButton class for that.
Adding it is exactly the same as the label. Now comes the interesting part: doing something with the button [3]. The syntax for this is easy:

button.connect(SIGNAL(:clicked)) do
  # do something
end

Now, when the button gets clicked, the code in the block gets executed. In this tutorial we will do something simple when the button gets pressed. Qt makes it very easy to put some text on the clipboard.

def initialize parent
  super parent
  @parent = parent
end

def applet
  @parent.applet
end

def init
  set_minimum_size 150, 150
  layout = Qt::GraphicsLinearLayout.new Qt::Vertical, applet
  applet.layout = layout

  label = Plasma::Label.new applet
  label.text = 'This plasmoid will copy the text you enter below to the clipboard.'
  layout.add_item label

  line_edit = Plasma::LineEdit.new applet
  layout.add_item line_edit

  button = Plasma::PushButton.new applet
end

The official Plasma API [1] is unfortunately written for C++ usage. But with a little imagination and some logic you should be able to make use of it. You could also try looking at some Ruby Plasma examples [4]. These are written a bit differently than the example described above, but they should still be useful. If you have any questions about Plasma development there are several ways to ask for help. First of all there is the Plasma mailinglist [5]. Secondly you can hop by on IRC, #plasma on FreeNode. As a third option you could try asking your question on the KDE forums [6]. Good luck, and don't forget to publish your Plasma applet on kde-look.org!

[1]: [2]: In Ruby, instance variables always begin with an '@' character. All variables without an '@' are local variables. Variables with two @ signs are class (a.k.a. static) variables. [3]: Explain blocks [4]: [5]: [6]:
http://techbase.kde.org/index.php?title=Development/Tutorials/Plasma/Ruby/SimplePasteApplet&oldid=37515
I am a beginner in C4C development. I was trying to reverse a number. In the business object I have created an element with data type IntegerValue (code snippet is below) and defined an action called Reverse (please refer to the Action Logic below). Every time the program enters the loop, the value of num (num/10) is getting stored as a float. Kindly suggest how to convert this float value into an integer.

*************LOGIC*******************

BO:
[Label("Enter Number:")] element num1 : IntegerValue;

Action Reverse:
var num = this.num1;
var revnum = 0;
while (num >= 1) {
  revnum = revnum * 10;
  revnum = revnum + num % 10;
  num = (num / 10);
}
this.result = revnum;

Hello Sumit,

Looks like the compiler is defining the "num" variable as a float. Please use this code:

var num : IntegerValue;

HTH, Horst

Hello Horst, Now this is working fine. Thank you very much for your prompt response. Cheers! Thanks. Regards, Sumit.

Hi Horst & All,

I have found one more issue while implementing the logic of an Armstrong number (a number for which the sum of the cubes of its digits is equal to the number itself; for example, 371 is an Armstrong number). When the program enters the loop it works fine initially, but in iteration 3 (please refer to the implemented logic below), when num = 37/10 = 3.7, since the data type is IntegerValue it should be stored as 3 (ideally, there should not be any rounding off), not 4. However, this value is getting rounded off and stored as 4. Hence the result shows "Given number is not armstrong". Kindly suggest how to proceed on this. Thanks in advance.
******My Logic:************************

import ABSL;
import AP.Common.GDT as apCommonGDT;

var num : IntegerValue;
num = this.num1;
var temp = num;
var remainder;
while (num != 0) {
  remainder = num % 10;
  this.result = this.result + (remainder * remainder * remainder);
  num = num / 10;
}
if (temp == this.result) {
  raise IsArmStrong.Create("E", temp);
}
else
  raise NotArmstarong.Create("E", temp);

Hello Sumit,

We have "%" as the modulo operator, but if you read the documentation in section 7.2.4.4 Arithmetic Expressions (Business Logic) you will not find the division mentioned. Therefore you need to calculate the dividend in another way. Sorry, Horst

Hi Horst,

Thanks for your response. I have gone through section 7.2.4.4 and found that it supports division as well (please refer below). The issue here is that the result is getting rounded off. Request you to kindly suggest if there is any other way to keep the value from getting rounded off.

******************7.2.4.4 Arithmetic Expressions (Business Logic)**********************

Syntax:
literal | <varName> | <path expression> [ + | - | * | / | % ] <arithmetic expression>;

Description:
The arithmetic expressions support the common mathematical operators for addition, subtraction, multiplication, division, and modulo calculations. Operands can be atomic expressions, such as literals, variable identifiers, path expressions, or other arithmetic expressions. You can overwrite the precedence order of the operators by using parentheses. The operands have to evaluate to exactly the same type. The compiler does not implicitly convert incompatible types. The plus sign (+) operator is overloaded to allow string concatenation.

Example:
var result = (-0.234e-5 + this.Total) * currentValue;
var resultString = "Hello " + "world!";
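The rounding behavior described above is exactly the difference between truncating integer division and round-to-nearest division. In languages with true integer division the digit-by-digit loop works, because 37/10 yields 3, not 4. A small Python sketch of both algorithms from this thread, using the floor-division operator // to force truncation (the function names are mine, not part of the C4C scripting language):

```python
def reverse_number(n):
    """Reverse the decimal digits of a non-negative integer."""
    rev = 0
    while n >= 1:
        rev = rev * 10 + n % 10
        n = n // 10  # truncating division: 37 // 10 == 3, never 4
    return rev

def is_armstrong(n):
    """True if the sum of the cubes of n's digits equals n itself."""
    total, temp = 0, n
    while temp != 0:
        total += (temp % 10) ** 3
        temp //= 10
    return total == n

print(reverse_number(123))  # 321
print(is_armstrong(371))    # True: 3**3 + 7**3 + 1**3 == 371
print(round(37 / 10))       # 4, the rounded value the poster was seeing
print(37 // 10)             # 3, the truncated value the algorithm needs
```

With rounding division, the 37 becomes 4 on the next pass and the digit sum no longer matches, which is precisely the failure reported for 371.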
https://answers.sap.com/questions/72942/how-to-typecast-float-value-to-integer-in-c4c.html
From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.4.6 i686)

Description of problem: Duplex printing fails on PostScript files, but works on ASCII files.

Version-Release number of selected component (if applicable):

How reproducible: Always

Steps to Reproduce:
1. I have set up my printer as a Samba printer, using a driver available in printtool. I selected HP LaserJet 5M and accepted all default values, except that I changed the default paper size and set the default to duplex mode. All test pages can be printed.
2. Text files can be printed in duplex; simply "lpr any-text-file" prints the text file in duplex.
3. Printing ps files always prints in simplex.

Actual Results:
1. "lpr -Z Duplex" will always print simplex.
2. I tried "a2ps -s2" but the result is not duplex; instead two pages are printed on the same side.

Expected Results: ps files should be printed in duplex.

Additional info: I will attach the output of "zcat /etc/alchemist/namespace/printconf/local.adl".

Created attachment 31981 [details] output of "zcat /etc/alchemist/namespace/printconf/local.adl"

A fix for -Z option handling is in rawhide, and that may very well fix this problem. Please re-open if this isn't fixed in the next release (or beta). Thanks.
https://bugzilla.redhat.com/show_bug.cgi?id=53773
- Public Folders are widely used by our customers for sharing document, calendar, contact, and tasks; and for archiving distribution lists.
- Exchange 12 (aka "E12") and Outlook 2007 (aka "Outlook 12") include public folder support, with great new investments in manageability and improved storage costs. Both of these products will be fully supported for 10 years from the availability of E12, through at least 2016. You can continue to use Public Folders with full support throughout this time.
- Windows SharePoint Services is another option from Microsoft for, amongst other things: sharing document, calendar, contact, and tasks; and for archiving distribution lists. DL archiving and Outlook synchronization are new in Windows SharePoint Services v3 (WSS v3), which ships at the same time as Office 12 and E12.
- For all new collaborative application development, we recommend WSS v3 and the new E12 web services as your platform, both of which are designed with .NET development in mind. We recommend maintaining existing Public Folder applications in-place.
- All current versions of Outlook (from 97 through 2003) and Exchange (4.0 through 2003) require Public Folders to be deployed. Public Folders are required within an organization until all Outlook clients are upgraded to Outlook 2007, all mailboxes have been migrated to E12, and of course, no Public Folder applications are still used.

Details below…

What are they?

What are they used for?

What did we introduce in Exchange 2003 SP2?

What is new in 12?

As mentioned in the summary, Outlook 2007 and E12 will include support for Public Folders. Of course, there are some details worth noting:

- E12's 64-bit storage optimizations accrue to Public Folders, providing a 70% reduction in IOPS requirements.
- E12's MONAD command line experience can be used to manage E12 public folder deployments.
- Outlook 2007 and E12 have greatly improved connection and synchronization logic, which accrues to both mailboxes and Public Folders.
- E12 is the first version of Exchange that will enable customers to turn off Public Folders. When Public Folders are not available, Outlook 2007 is the only version of Outlook that supports:
  - Offline address book distribution through a BITS http connection to the E12 client access server.
  - Free/busy lookups through the new E12 availability web service
  - Outlook Security settings from the registry deployed through Group Policy
  - Organizational forms will be unavailable (Infopath forms are so much better, I recommend checking them out)
- WSS v3 adds support for DL archiving and Outlook synchronization.
- E12 OWA will support viewing documents shared through SharePoint sites.
- E12 Public Folders continue to support replication across geographically distributed sites. WSS v3 has limited support for these topologies.

What is the plan for future versions of Exchange?

- Plan on migrating to Outlook 2007 and Exchange 12.
- Develop all new applications with .NET.
- Watch this blog for detailed information on:
  - IT data management (i.e. setting expiration limits on data, deleting unneeded data)
  - End-user data management (to what extent can Outlook 2007 help end-users manage their Public Folder data)
  - Partner solutions for data management

Thanks,

I think one thing that is majorly overlooked by the WSS team is the "replication across geographically distributed sites". I use public folders for this purpose alone, and though I like WSS, there is no way to use it over slow links, and no good way to replicate the data. I hope that whenever Exchange drops support for PF, the WSS team adds replication to the product, or else customers will be stuck.
Given that PFs will be officially supported until at least 2016, I think it's a safe bet that the SharePoint folks will provide a distributed multi-master replication facility sometime before PFs fade off into the sunset. My experience has generally been that many of the organizations now using PFs have lots of PF content that they don't need and aren't using, so PF cleanup is always one of the first things included in any supportability review I perform for customers.

Terry Myerson of the Exchange team clears up a lot of misconceptions about Exchange Public Folders and…

First, I noticed that you're still calling the product E12, and that it wasn't announced as part of Office 2007 last week. Does this mean it's not a part of Office anymore (again)?

Second, two things we also use public folders for here in addition to sharing messages with a workgroup: 1) Shared/generic e-mail addresses that we don't want to set up as a distribution list. It seems like creating a public folder has lower overhead than maintaining a separate user account for a generic mailbox. 2) An easy way to move e-mail messages between users when a user is retiring or taking a new position. Before they leave, they can move any e-mails they wish to pass along to a public folder, and the new employee can claim them and move them back to their inbox.

We use public folders for scripts (Events.exe). Will our scripts work with Exchange 12?

The Exchange Team has a great new post on the future of Public Folders in Exchange 12 and understanding…

Mail-enabled public folders are often used as shared group mailboxes, centralized fax mailboxes, and a variety of other group collaboration needs in which direct SMTP support is necessary. I haven't seen any promise that WSS will provide the capability of a shared mail-enabled repository in which permissions can be controlled as easily as public folders can. Are there plans to allow SharePoint to offer the simplicity and flexibility of mail-enabled public folders?
Many serious companies store their documents in Exchange. But the internal structure…

Will E12 integrate with Groove Server?

My customer is providing access to public folders for non-Microsoft clients through NNTP under Exchange 2003. Will this still work?

As many above, I also have concerns about mail-enabled public folders. We use a number of them as shared generic accounts such as info@domain.com etc. My other question is about "Free/busy lookups through the new E12 availability web service". Can you elaborate more on this? Will this be based on the CalDAV standard?

I also want to register my concern about mail-enabled public folders. We use them, and as a Gold Partner we have advised our clients to use them as they simplify management of info@domain.com mailboxes. Another common use for our organization and our customers is centralizing external contacts into a public folder. This works extremely well, as that contact folder can be set up as an Outlook Address Book. Although I understand that contact folders can be handled in WSS v3, what about making that contact folder in WSS v3 an Outlook Address Book? If you can provide a simple migration roadmap for these two issues, you will make adoption of Outlook 2007 and E12 that much easier for many organizations!

K. Jung, To answer your question: the EVENTS.EXE aka Events Service is gone from Exchange 12. This service was "de-emphasized" in the Exchange 2000/2003 releases (meaning it was included to allow time to migrate Event Service scripts to other solutions).

Hello Everybody, Terry Myerson wrote about the future of Public Folders in Exchange Server. The text…

I don't understand why MS is so determined to move away from something a lot of people like. I want to have that sportscar in red, but it will soon be available only in blue because we don't want to sell red ones anymore. I see many organizations making use of PFs even besides SharePoint.
Because it's handy and simple inside your mailbox, it's a simple folder structure, and people are free in how to use it. For example, a lot of companies use it to collect emails, for example info@company.com: share it and it's available. The nice thing about it is the simplicity: easy, fast solutions. Installing another BackOffice product for those simple tasks is often overkill. Well, in the end it's basically just simple mail. And there are a lot of sportscar brands; when people look at cars they buy the ones with the most gadgets (wrong, those are the people who buy computers), car buyers prefer red sportscars.

Hurray, 2 week vacation starting today ;-)

How to enforce Microsoft Outlook cached mode via…

Well, maybe this post is just a bit out of scope, but still about Exchange 12. First of all, the Exchange dev roadmap seems to be really nice. IMHO, E2k/E2k3 has too many different management technologies… But the best news about Ex12 is management based on Monad. Here is my question: where can I find more technical information on Monad & Exchange? For example, which cmdlets will be available? Will Ex12 include namespace providers (for example, an AD provider)? And how is the GUI interacting with Monad? Does it use cmdlets directly or via Monad hosting? If via hosting, could it be possible to view GUI-generated Monad scripts? I understand, of course, this stuff would be in the Ex12 SDK, but maybe you have some information to share now? In general, it would be great to see more blog posts about Monad & Exchange.

I love it that you will support this for 10 years. However, I have a concern that you will withdraw support for features before that 10 year period. Case in point: during the launch of Exchange 2000, much ado was made about the IFS and how you could now put information directly into the Exchange store.
My company built an Intranet solution for a client based on this technology, which worked fine even on Exchange 2003, until Office 2003 SP1 "broke" their ability to save information to the data store. MS's response was "oh, that's no longer supported"… Hence the need to redesign the application at cost to the customer. Therefore I take the "10 year support" cycle very cynically.

IMAP and OWA don't fully play with PFs in E12. So what happens to the Entourage clients? Can they no longer use PFs?

What computer hardware should I buy if I want to run Exchange Server 2003 now, but hope to migrate to Exchange 12 as soon as it comes out? Will the new Exchange 12 get along with the Public Folders technology?

No IMAP or OWA access to Public Folders in E12? This is not a good thing. It is bad enough that our Mac user base is screwed regarding freedocs in Public Folders, but under E12 they'll have NO access?

Since we often get asked about building collaborative applications, I thought I would highlight a few…

I'd like to join the voices of those disappointed in the lack of IMAP and OWA support for E12 Public Folders. I think it's horribly misleading to suggest that Public Folders AS WE KNOW THEM TODAY will be supported beyond the EOL of Exchange 2003. It sounds to me like Public Folders are being reluctantly retained for the sole purpose of appeasing the Outlook user population, without regard to the rest of the Exchange customer base. This only reinforces the industry misconceptions of Exchange being a closed, proprietary platform with poor interoperability options. Please… if you can't RTM with IMAP and OWA support for E12 Public Folders, then at least keep them on the list to catch for SP1.

As stated during the BGC, the answer is yes, fortunately… Public Folders will be supported…

Really cool homepage you have.
PingBack from …

I wanted to give you an initial heads up as to what to expect in E12; this post is just a very high level…

Already looking forward to using the availability web service you mention. OWA not supporting public folders: is there a technical explanation for this change?

Well, as I promised last week at the Roadshow in Nottingham, I said I'd blog about all of the links

This question has come up several times recently, so I thought I'd share some good references and tools for

Hi all, many times I follow and participate in discussions on "File Servers versus Sharepoint", and

I promised to give links to everything I talked about at this morning's TechNet event on Exchange 2007
https://blogs.technet.microsoft.com/exchange/2006/02/20/exchange-12-and-public-folders/
How to automatically play an mp4 uploaded in a Flask app

I have a Flask app with an upload button for videos (mp4), and I am trying to figure out how to get the video to play on the screen after upload. Currently, it opens a generic gray video screen, but no video plays. I am including portions of my app.py and index.html files below. I also have a predict button that I would like to be able to interact with while the video continues to play. I would appreciate any suggestions for how to accomplish this.

@app.route('/', methods=['GET'])
def index():
    # Main page
    return render_template('index.html')

@app.route('/predict', methods=['GET', 'POST'])
def upload():
    if request.method == 'GET':
        return render_template('index.html')
    if request.method == 'POST':
        # Get the file from post request
        f = request.files['file']
        # Save the file to ./uploads
        basepath = os.path.dirname(__file__)
        file_path = os.path.join(
            basepath, 'static', secure_filename(f.filename))
        f.save(file_path)
        result = model_predict(file_path, model)
        return jsonify(result)
    return None

if __name__ == '__main__':
    # app.run(port=5002, debug=True)
    # Serve the app with gevent
    http_server = WSGIServer(('', 5000), app)
    http_server.serve_forever()

<div>
  <h4><center></center></h4>
  <br></br>
  <center>
    <form id="upload-file" method="post" enctype="multipart/form-data">
      <center>
        <label for="imageUpload" class="upload-label"> Upload (mp4) </label>
      </center>
      <input type="file" name="file" id="imageUpload" accept=".mp4">
    </form>
  </center>
  <div class="image-section" style="display:none;">
    <div class="img-preview">
      <div id="imagePreview">
        <html>
        <head>
        </head>
        <center>
        <body>
        <center>
        <video controls
          <source src="{{url_for('static', filename=f)}}" type="video/mp4" autoplay>
          Sorry, your browser doesn't support embedded videos.
        </video>
        </center>
        </body>
        </html>
        <input type="file" name="file[]" class="file_multi_video" accept="video/*">
      </div>
    </div>
    <div>
      <button type="button" class="btn btn-primary btn-lg" id="btn-predict">Identify</button>
    </div>
  </div>
  <div class="loader" style="display:none;"></div>
  <h3 id="result"><span> </span></h3>
</div>
</center>

- Is there a way where I can pass a value to a webpage using VBA? I am using SAP BEx where, after login, I need to fill in the text boxes to get the data. How can I pass the data to the web using Excel VBA? I was able to do the same to pass the ID and password, but it didn't work for the other boxes on the next page.

Sub test()
    Set ie = CreateObject("InternetExplorer.Application")
    my_url = ""
    With ie
        .Visible = True
        .navigate my_url
        Do Until Not ie.Busy And ie.readyState = 4
            DoEvents
        Loop
    End With
    ' Input the userid and password
    ie.document.getElementById("j_username").Value = "myID"
    ie.document.getElementById("j_password").Value = "myPassword"
    ie.document.getElementById("uidPasswordLogon").click
    ie.document.getElementById("DLG_VARIABLE_vsc_cvl_VAR_2_INPUT_inp")

---- Below is the html source -----

<INPUT onchange="sapUrMapi_InputField_change('DLG_VARIABLE_vsc_cvl_VAR_1_INPUT_inp',event);" tabIndex=0 onkeyup="sapUrMapi_InputField_KeyUp('DLG_VARIABLE_vsc_cvl_VAR_1_INPUT_inp',event);" onfocus="sapUrMapi_InputField_focus('DLG_VARIABLE_vsc_cvl_VAR_1_INPUT_inp',event);" onblur="sapUrMapi_InputField_Blur('DLG_VARIABLE_vsc_cvl_VAR_1_INPUT_inp',event);" id=DLG_VARIABLE_vsc_cvl_VAR_1_INPUT_inp class="urEdf2TxtEnbl urEdf2TxtHlp" onselectstart="sapUrMapi_InputField_onselectstart('DLG_VARIABLE_vsc_cvl_VAR_1_INPUT_inp',event);" onkeydown="sapUrMapi_InputField_keydown('DLG_VARIABLE_vsc_cvl_VAR_1_INPUT_inp',event);" name=DLG_VARIABLE_vsc_cvl_VAR_1_INPUT_inp ct="I" f4always="false" ti="0" tp="STRING" st="" autocomplete="off" oldvalue>

The input name is "DLG_VARIABLE_vsc_cvl_VAR_1_INPUT_inp".
- Display calendar in modal form popup

I have a modal form to pick a time, but the calendar is behind the modal form when the modal form displays. I want the calendar to display on top of the modal form.

<script src=""></script>
<script src=""></script>
<link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">

<div class="modal fade" id="myModal" tabindex="-1" role="dialog" aria-
  <div class="modal-dialog">
    <div class="modal-content">
      <div class="modal-header">
        <h4 class="modal-title" id="myModalLabel">Chọn thời gian</h4>
        <button type="button" class="close" data-×</button>
      </div>
      <div class="modal-body">
        <div class="form-inline">
          <label style="padding-right: 10px">Từ</label>
          <div class="input-group date" data-
            <input type="text" id="datetimepicker1" class="form-control datetimepicker-input">
          </div>
          <br>
      <div class="modal-footer">
        <button class="btn btn-success" id="btn_ok">OK</button>
        <button class="btn" data-Close</button>
      </div>
    </div>
  </div>
</div>

$('#datetimepicker1').datepicker();
$('#datetimepicker2').datepicker();
$(document).on('click', '#btn_ok', function () {
    alert($('#datetimepicker1').val());
})

- How to access Flask's config inside a Celery worker?

I use the recommended factory pattern to give Celery tasks the Flask app's context in extensions.py.
class FlaskCelery(Celery):
    def __init__(self, *args, **kwargs):
        super(FlaskCelery, self).__init__(*args, **kwargs)
        self.patch_task()
        if "app" in kwargs:
            self.init_app(kwargs["app"])

    def patch_task(self):
        TaskBase = self.Task
        _celery = self

        class ContextTask(TaskBase):
            abstract = True

            def __call__(self, *args, **kwargs):
                if flask.has_app_context():
                    return TaskBase.__call__(self, *args, **kwargs)
                else:
                    with _celery.app.app_context():
                        return TaskBase.__call__(self, *args, **kwargs)

        self.Task = ContextTask

    def init_app(self, app):
        self.app = app
        self.config_from_object(app.config)

auth = HTTPBasicAuth()
celery = FlaskCelery()
db = SQLAlchemy()

However, I have been thinking about how this actually gets used in my tasks.py, and it doesn't actually seem to ever grab the Flask context. It seems like FlaskCelery is useless, then.

from extensions import celery, db

@celery.task(bind=True, name="a_nice_task")
def a_nice_task(self, arg_1, arg_2):
    # Do stuff I don't feel like showing you on here. Deals with the db.
    return {"status": "Done"}

Why even bother using the FlaskCelery object? It doesn't even have access to the Flask app's config, because over in extensions.py, app was not passed as a keyword argument (which makes sense...). And how do I get access to the Flask app's config? In app.py, the config is set.

def create_app():
    app = Flask(__name__)
    app.config.from_object("config")
    db.init_app(app)
    celery.init_app(app)
    return app

app = create_app()

But that obviously does not affect tasks.py when the Celery worker is run with celery -A app:celery worker --loglevel=info.

TL;DR How do I get access to the Flask config in tasks.py? Am I doing something very wrong?
**problem I can't able to access all 3list data at once so I read in somewhere in StackOverflow to link my list into dictionary and dictionary to the list an_item = dict(name=names, department= departments, position=positions) itema.append(an_item) ** it works fine if the name is string, not a list FLASK SIDE: names = ["Alice", "Mike", ...] department = ["CS", "MATHS", ... ] position = ["HEAD", "CR", ....] an_item = dict(name=names, department= departments, position=positions) itema.append(an_item) HTML: <tbody> {% for item in items %} <tr> <td>{{item.name}}</td> #printing the list ["alice", "mike", ...] <td>{{item.department}}</td> <td>{{item.position}}</td> </tr> {% endfor %} </tbody> I want a simple table for name department and position Name Department Position Alice CS Head Mike MATHS CR - Sliding Transition for switching div content I'm building a website (in Flask and Bootstrap) with a form that outputs some result on the spot. I have a div for the form content; the intended result is that when the user submits the form, the form slides out of the screen to the left and the content of the div is replaced with the results from the form. I want all the other elements to appear to stay still, such as the navbar, title of the website, etc. I've looked a bit into some JQuery and AJAX to approach this problem but am a bit lost? Any suggestions on how I can achieve this effect? - Encrypt mp4 file from stealing I want to protect online mp4 file (moov before mdat) from stealing. I encrypt the first 1024 bytes using des (the key is unknown to the user). Is this enough for protect? Or there is a better solution? - Android Gesture VideoView seek? Is there any API that allows to change VideoView frame using gesture(video isn't playing and frames are changing using swipe gesture like in small rect in YouTube app while swipe on seekbar) Or only way is to get frames from video and change ImageView source? - How to have Javascript download a specific media file type? 
The following script is built to one-click download .mp4 audio files that have just played in a player. The only problem is that the version of the player I use provides audio in lossless .flac format, so it doesn't do anything when I use it. How might I change it so that it outputs flac instead of just nothing? I'm totally clueless.

(function() {
    var songs = document.getElementsByTagName('video');
    for (var i = 0; i < songs.length; i++) {
        if (songs[i].getAttribute("jw-played") == "") {
            window.open(songs[i]["src"]);
        }
    }
})();

When I use it with the version of the player that's using .mp4 audio, it scrapes just fine. I'm not sure how to change it to make it see .flac files. It just does nothing when I use it.
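For the nested-list Jinja2 question above, one approach that avoids stuffing whole lists into a single dict is to zip the three parallel lists into one dict per row on the Flask side, before passing them to the template. A minimal sketch (the variable names mirror the question; the template loop itself needs no change):

```python
# Three parallel lists, as in the question.
names = ["Alice", "Mike"]
departments = ["CS", "MATHS"]
positions = ["HEAD", "CR"]

# Build one dict per row instead of one dict holding three whole lists;
# zip() pairs up the i-th element of each list.
items = [
    dict(name=n, department=d, position=p)
    for n, d, p in zip(names, departments, positions)
]

print(items[0])  # {'name': 'Alice', 'department': 'CS', 'position': 'HEAD'}
```

With `items` shaped this way, the `{% for item in items %}` loop renders one `<tr>` per person, and `{{ item.name }}` is a plain string rather than a list.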
http://quabr.com/56746070/how-to-automatically-play-an-mp4-uploaded-in-flask-app
CC-MAIN-2019-30
refinedweb
1,450
51.04
Thread: c++ efficiency subject

This is a subject related to copy constructors and efficiency in C++. The following code snippet shows three functions: 'example1', 'example2' and 'example3'. I have been discussing with a workmate which one is the most efficient:

Code:
#include <iostream>
#include <string>

// just an example class
class MyClass {
public:
    std::string toString() {
        return "just an example";
    }
};

static MyClass myObj;

bool example1() {
    std::string str = myObj.toString(); // executing copy constructor of std::string
    //... some code
    std::cout << str << std::endl;
    //.. some other code
    std::cout << str << std::endl;
    return true;
}

bool example2() {
    const std::string& str = myObj.toString(); // avoid copy constructor of std::string, but more verbose.
    //... some code
    std::cout << str << std::endl;
    //.. some other code
    std::cout << str << std::endl;
    return true;
}

bool example3() {
    // avoid copy constructor of std::string, but we get two call stacks.
    //... some code
    std::cout << myObj.toString() << std::endl;
    //.. some other code
    std::cout << myObj.toString() << std::endl;
    return true;
}

int main() {
    for (size_t i = 0; i < 1000000; i++) {
        //example1();
        example2();
        //example3();
    }
}

What is your opinion about that? My opinion is that efficiency issues should not be left to the compiler; they should always be fixed in the C++ code. The code is aimed to run on all platforms (Windows, Linux and Mac), and different compilers may behave in different ways. Even different versions of the same compiler may or may not perform optimizations. I would choose 'example2' or even 'example3', because a call on the stack will always be faster than a system call to allocate memory. Here I used std::string as an example, but it could be any user-defined complex type. Thanks a lot!
http://www.linuxforums.org/forum/programming-scripting/210955-c-efficiency-subject-post994665.html?s=58efe09f09722d52049530a84b03035d
Introducing XML::SAX::Machines, Part One

XML::SAX::Machines is a high-level wrapper class that allows its various Machine classes (which may also be used as standalone libraries) to be easily chained together to create complex SAX filtering systems. XML::SAX::Machines currently installs and knows about several Machines by default.

One of the more interesting ideas to emerge in the Web development world in recent years is the notion of custom tag libraries (or taglibs, for short). In a taglib implementation one or more custom tags are defined, and the server application evaluates and expands or replaces those tags with the result of running some chunk of code on the server. This allows document authors to add reusable bits of server-side functionality to their pages without the hair loss associated with embedding code in the documents. For this month's example we will write a mod_perl handler that allows us to create our own custom taglibs. We will do this by creating SAX filters that transform the various tags in our library into the desired results. And we'll use SAX::Machines within our Apache handler to manage the filter chain.

First, we need to define our taglib. To keep the example simple we start off with only two tags: an <include> tag that provides a way to insert the contents of an external document defined by the uri attribute at the location of the tag, and a <fortune> tag that inserts a random quote. To avoid possible collision with the elements allowed in the documents that will contain the tags from our taglib, we will quarantine them in their own XML namespace and bind that namespace to the prefix "widget". Here is an example of a simple XHTML document containing our custom tags:

<?xml version="1.0"?>
<html xmlns:
  <head>
    <title>My Cool Taglib-Enabled Page</title>
  </head>
  <body>
    <widget:include
    <p>
      Today's quote is:
    </p>
    <pre><widget:fortune/></pre>
    <p>
      Thanks for stopping by.
    </p>
    <widget:include
  </body>
</html>

Now let's create our SAX filters to expand our custom tags. We'll write the filter that includes an external XML document first.

package Widget::Include;
use strict;
use vars qw(@ISA $WidgetURI);
@ISA = qw(XML::SAX::Base);
$WidgetURI = '';

After a bit of initialization we get straight to the SAX event handlers. In the start_element handler we examine the current element's NamespaceURI and LocalName properties to see if we have an "include" element in our widgets namespace. If it finds one, it further checks for a uri attribute, and, if it finds one, it passes that file name on to a new parser instance using the current filter as the handler.

sub start_element {
    my ( $self, $el ) = @_;
    if ( $el->{NamespaceURI} eq $WidgetURI && $el->{LocalName} eq 'include' ) {
        if ( defined $el->{Attributes}->{'{}uri'} ) {
            my $uri = $el->{Attributes}->{'{}uri'}->{Value};
            my $parser = XML::SAX::ParserFactory->parser( Handler => $self );
            $parser->parse_uri( $uri );
        }
    }

If we did not get an element with the right name in the right namespace, we forward the event to the next filter in the chain.

    else {
        $self->SUPER::start_element( $el );
    }
}

We do a similar test in the end_element event handler, forwarding the events that we are not interested in.

sub end_element {
    my ( $self, $el ) = @_;
    $self->SUPER::end_element( $el )
        unless $el->{NamespaceURI} eq $WidgetURI
           and $el->{LocalName} eq 'include';
}

That's it. Since this filter inherits from XML::SAX::Base we need only implement the event handlers that are required for the task at hand. All other events will be safely forwarded to the next filter/handler.

The filter that implements the <widget:fortune> tag is very similar. We check to see if the current element is named "fortune" and is bound to the correct namespace. If so, we replace the element with the text returned from a system call to the fortune program. If not, the events are forwarded to the next filter.
package Widget::Fortune;
use strict;
use vars qw(@ISA $WidgetURI);
@ISA = qw(XML::SAX::Base);
$WidgetURI = '';

sub start_element {
    my ( $self, $el ) = @_;
    if ( $el->{NamespaceURI} eq $WidgetURI && $el->{LocalName} eq 'fortune' ) {
        my $fortune = `/usr/games/fortune`;
        $self->SUPER::characters( { Data => $fortune } );
    }
    else {
        $self->SUPER::start_element( $el );
    }
}

sub end_element {
    my ( $self, $el ) = @_;
    $self->SUPER::end_element( $el )
        unless $el->{NamespaceURI} eq $WidgetURI
           and $el->{LocalName} eq 'fortune';
}

With the filters out of the way, we turn to the Apache handler that will make our filters work as expected for the files on our server. The basic Apache handler module that makes our taglibs work is astonishingly small considering what it provides. We simply create a new instance of XML::SAX::Pipeline; inside the required handler subroutine, we create a Pipeline machine, passing in the names of the widget filter classes we just created. Then we send the required HTTP headers and call parse_uri on the file being requested by the client.

package SAXWeb::MachinePages;
use strict;
use XML::SAX::Machines qw( :all );

sub handler {
    my $r = shift;
    my $machine = Pipeline(
        "Widget::Include" => "Widget::Fortune" => \*STDOUT
    );
    $r->content_type('text/html');
    $r->send_http_header;
    $machine->parse_uri( $r->filename );
}

Finally, we need to upload the XML documents to the server and add a small bit to one of our Apache configuration files so our handler is called appropriately. I used

<Directory /www/sites/myhostdocroot >
  <FilesMatch "\.(xml|xhtml)">
    SetHandler perl-script
    PerlHandler SAXWeb::MachinePages
  </FilesMatch>
</Directory>

After restarting Apache, a request to the XML document we created earlier will look something like the following:

<html xmlns:
  <head>
    <title>My Cool Page</title>
  </head>
  <body>
    <div class='header'>
      <h2>MySite.tld</h2>
      <hr />
    </div>
    <p>
      Today's quote is:
    </p>
    <pre>The faster we go, the rounder we get.
      -- The Grateful Dead
    </pre>
    <p>
      Thanks for stopping by.
    </p>
    <div class='footer'>
      <hr />
      <p>Copyright 2000 MySite.tld, Ltd. All rights reserved.</p>
    </div>
  </body>
</html>

No Webby awards here, to be sure, but the basic foundation is sound, and implementing new tags for our tag library is a matter of creating new SAX filter classes and adding them to the Pipeline in the Apache handler. We've only touched the surface of what XML::SAX::Machines can do. Tune in next month when we will delve deeper into the API and show off some of its advanced features.

XML.com Copyright © 1998-2006 O'Reilly Media, Inc.
http://www.xml.com/lpt/a/921
Published by Mariam Polly, modified over 2 years ago

1 B. Ramamurthy, 4/17/2015

2 Overview of EC2 Components (fig. 2.1)

3 S3: accessing S3, working with S3: command line as given in the blue book, or through the AWS console. You can create directories: your own namespace. You can transfer data in and out of S3 with the click of a button (let's check it out). You can simply continue to shove data into Amazon S3 without having to worry about ever running out of space! Short-term and long-term backup facility. Access to S3 is via web services and not via a file system, so it is relatively slow. See demo.

4 You want to create an "instance" of a server from an already established "AMI: Amazon Machine Image". It has an elastic IP for the whole world to interact with it. How to control access to this? While creating the instance, create a new security group that will specify the policy or "rules" about the access methods. A security group is somewhat similar to a network segment protected by a firewall. Once the server is started you cannot change the security group, so plan ahead. See demo.

5 Availability zones are analogous to a physical data center. Amazon keeps adding availability zones: US East (Virginia), US West (N. California), EU West (Ireland), Asia (Singapore). An important feature is that zones have different characteristics, such that no two zones will be down at the same time. You can spread your data between two zones, or replicate it in two or more zones for survivability. Traffic (bandwidth) between zones costs money. You may want to launch in the same zone if bandwidth is your concern; but if redundancy is your quest, you may need different zones.

6 Access to an instance (for ssh) is through a key pair. Another enabling technology is PKI. You generate a private-public key pair and store the private key on your local hard drive; the public key is passed to the instance when it is launched. The EC2 instance is configured such that the root account is accessible to any user with the private key.

7 By default, when you launch a new instance Amazon dynamically assigns a private and a public IP. While this is fine for development purposes, for a real launch of a web-accessible service we need a static IP. Amazon makes available what are called elastic IPs for this purpose. Up to 5 elastic IPs can be assigned to an instance. Elastic IPs cost money even if you don't use them; assigning and reassigning strains the system, so it costs money. Allocate an elastic IP and associate it with an instance.

8 Snapshots, for saving a volume (of storage), are a feature of Amazon's Elastic Block Storage. You can take a snapshot as often as needed. EC2 automatically saves the snapshots to S3, thus enabling a quick and powerful backup scheme. You can replay one by creating a volume from a snapshot. See demo.

9 You can use an already existing AMI to bootstrap your system. The AMI contains the root file system for your image. You will have to clean up all your files and store them in S3, or snapshot them so that they can be replayed. Stop the SQL server. Name it and create it with a description.

10 1. Data-intensive data structures: HDFS. 2. Data-intensive algorithms: MapReduce; use it in a scenario given below. 3. Cloud architectures: AWS: EC2, S3, + Google App Engine, AWS (Chapters 1, 2 in the blue book). 4. Enabling technologies: infrastructure realization: virtualization, infrastructure management. 5. Project 1 and Lucene.
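Slide 6's key-pair flow can be tried locally: generate the pair yourself with ssh-keygen, keep the private half, and hand only the public half to EC2 at launch time. A sketch (the key file name and the aws CLI import step are illustrative; the import needs real AWS credentials, so it is left commented out):

```shell
# Generate a 4096-bit RSA key pair; the private key (ec2-demo-key) never
# leaves your machine, and -N "" sets an empty passphrase for the demo.
ssh-keygen -t rsa -b 4096 -f ec2-demo-key -N "" -q

# Two files now exist: the private key and the public key.
ls ec2-demo-key ec2-demo-key.pub

# Hand only the public half to EC2 (illustrative; requires credentials):
# aws ec2 import-key-pair --key-name demo --public-key-material fileb://ec2-demo-key.pub
```

After the import, launching an instance with --key-name demo wires the public key into the instance's root account, exactly as the slide describes.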
http://slideplayer.com/slide/3405717/
Uncyclopedia:QuickVFD/archive14 From Uncyclopedia, the content-free encyclopedia The New September Family old versions Image:Redandblue.jpg Paul O'Connell Ganguly Viva Piñata Adverbs Life 2: Countdown To Death Suwannabhumi Government of India Unporn Template:Infobox_India - redundant. no links. Template:Infobox_Pakistan - redundant. no links. MiR Template:Infobox Bangladesh - redundant. no links. Template:Infobox Nepal - redundant. no links. Template:Infobox_Sri_Lanka - redundant. no links. Logan "the greatest school ever" Park Coarse adjustment knob Trigger the gun South park mexican Hack Telnet Georgia (country) - abs. identical to Georgia. November 23, 2012 Facepunch Studios La Cucaracha Dyer Kalyan kumar Image:Superman11.jpg Image:Supermango.jpg Unintelligent Nargis Jonathan rhys-meyers The ledgend of dan brown Nintenteachers The Confounded Penknife Company Velma from Scooby Doo Arno Vleiskuif Rademeyer Reynard de Jager Hampshire Image:Hattrickmagic.jpg - orphan image. Category:Soprt_in_Australia - typo. User:FreeMorpheme/OriginalStripClub Kyle clark Jean Paul Tascon Cynthia Old versions Image:Davidsketch.jpg 16:54, 19 September 2006, 13:59, 20 September 2006 & 14:02, 20 September 2006 Sep 30 Phantasmagoria Mspaint The one day I fell down the stairs A Stop Sign Satoru Iwata A certain German ruler from 1933-1945 Vida Guerra Marcus Belcher Deek velagandula Repair/El Presidente de Andromeda The Juggernaut Ysgol eifionydd Gfuuusurhdsaeaegiaaqv3 User talk:86.31.22.166 Lair (video game) Ganguly Purply Æ Expanse Of Empires Tootsie roll Bahaind Dink hickey First Great Depression Apple bottoms Guadalcanal 5th dimension Penis! Pinoak middle school Darth uncyclopedia Guadalcanal Drama Mousse Robotmonkeyologists Oprah Winfrey's Hair Apple bottoms Sep 29 Chris stewart Matt brooks 0000 Sep 28 DJ Jazzy Jeff Going commando Marcus Belcher Ukrainian girls Cyborg Napoleon South Park's Chef Richard Clinton T.O. 
Henry Cullen Sisters Boner-tech Dumbledore's dead body - good for me... Alexis Young Dave loh xie hui - ugly slandanity Edward Qiu Kofi Wayo Led Zeppelin (christian band) Jason David Frank Final Fantasy XII Malice Mizer Cynthia Sep 27 Captain Ishmael Emma scofield Vanity? Truth? QVFD! Diarea Canal Reincarnation Kumquat AC Milan Moderation RC-135 Alcy gremlin HELP MOMMY SANTA CLAUS IS TRYING TO TOUCH MY PENIS Instant spontaneous castration Sean Hannity Sam claxon Yanks eh? Gogol Bordello uncyclopedia.wikia.com/index.php?title=Primate&redirect=no AAAA creates double redirect! Babe Ruth Cerial spoon Political incorrectness Sam claxton Epidemic Second Council of Lyon Impotency Mountain Jew Bluevale AC Milan Sir Sean Connery Dcv2 - good for us... C'mon, delete 'em. I unlinked them from the main sequence and everything! --L Blærum Ramsgate Jesus Christohper Reeves - Oh come on... & not even spelt correctly Nathan Allen Christopher Sittin Garland High School Edward bell nyaa Moderation Vandal box ...I think there are other pages for this, eh? C-ism Sean mclaughlin Ehud Omelette Taipei YPOS1 "Tigerlily" Lilian Metz Nord Sep 26 David Scott Pee-Wee Herman Evil telekinetic ducky James duncan OG Simpson Arlo Tomato man frenzy Wesley Murray Limavady Kyla Fareham Straits of Panama Mämmi Christian Lesley J Van Asdasd Hunchie IT Drumcore Southend-on-Sea Stan prezhegodsky Tokyo ska paradise orchestra Conrad vickers Template:YourmomjokeNo. –H. UnNews:Foxtons Mind Control Anal Sex Milhouse Van Houten Kim Borgersen Mad Ferrits Dr. Cox Leeroy Jenkins Oooooooooooooooooooooooooo Seoul International School cuntpaste from Wikipedia. (yes, I said it.) Sling blade Pedro Páramo Gurte Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo I disagree, don't huff it Aaadddaaammm 09:32, 26 September 2006 (UTC) Chain of Command Verbose Eric mohlin Erik Skux Ultimate Guitar Archive J-Pop Bat Shit Crazy - what the heck? 
Journey Government Techno Glover - no comment Chelman Ishmael - vanity; created by user with same name Military Techno Jesus and Mike Howes Cyborg pirate ninja jesus, sigh... Wizet Worse than NRV, surely? John Harvard VANITY? Justin herzig VANITY Vr troopers Word for word en.wp rip-off. Didn't even bother to change anything. Clay Aiken Third non-World War Battlefield 2: Very Speshul Forces Tortilla chip old versions Osama bin Grinchen 20:04, 25 September 2006, 21:36, 25 September 2006 & 21:54, 25 September 2006 old version Opie mugshot 23:45, 16 September 2006 Sep 25 Marco Biagi Cade Skywalker Mmotales The End of the Internet Some idiot NRV'd this. Words per minute Right Side EDEC Juan rulfo Bambi A page that doesn't exist yet Daddy Yankee Ede Gazillion - a somewhat small article Hanso Foundation Manfred Bernie Quill Category:Open Source is like sooo kewl Words per minute Boules Alex Rarog Bronx High School of Science Gay people Ishmael Fair trial Abu badali - unfunny slandanity Lee Stevens' Knee Jenje Chelman Noodles Redirect Peal University High School Playdoh more like doh-doh. Bishop Guertin High School Vanity. Mayo Relish! Duck language Quack! Quack!- redirected Magnificent silver tortoise Sep 24 Strong Mad UnSpace Clueless Squirrle Jesus William Tennent High School -Vanityshite, blanked by author. Khram IPot Interracial Not funny, likely copyright images, pure porn. Himesh Owner removed NRV on 12th Sep without change in content. Delete. Incredibly overweight odourous fuckfaces who just won't die Rachael McHaffie Ken carlo User:213.237.66.155 ZRXLQLPRMKQBZ Danzilla' Yes, with a '. I've no idea why. █ I've no idea who in their right mind would type a █. Machine Gun Tyrnävä Fergus Burnett Guerra de Malvinas Zzzzzzzzzzzzzz International Page Blanking Day Has potential, maybe... Bah who am I kidding. Peter Frampton Gene Shalit Blanked by author. 
Wierd al yankovich In 2-D Dare To Be Smart People who add pages to Uncyclopedia half-assedly UnNews talk:NSA may be wagging the dog with new spying policies Rober egert Roger Ebert Molly Meldrum Ken carlo Betty Bush Minor threat Sep 23 Babel:Zh-hant/煤體 - moved to Traditional Chinese sister Bloody French bastard template:Welles Goregosis Was Homocracy Desolator UnNews:Anus Crak Andrew the Midget Shiite Talk:Lebanon Carjacking Kubb Milton Keynes Dons Netto Billis Bob Brute the one headed Cop Chris jones Newman "Tom Ellard" Pedro Hanssen Blackwood- Much expanded no longer QVFD'able How to be retarded and not just stupid Mole Vice Versapedia Scarlett Johansson's Boobs Angry dude Nigger poop Prehistoric man Atlantic records Sep 22 Yokostan Icant-sleep-at-night-alitys Coinflip says this doesn't work. Page 77790 Game:Page 263 Page 1567 Still no idea. Dean koontz Levi nellis Game:Page 538 I've no idea what this is about, but it ain't good. Tim Horton Chris Jones Encyclopedia_Dramatica_sucks Nice sentiment but execution lacking. Prehistoric man Atlantic records Too injoke. Biloxi high school Think this falls under the new vanity clause.. Oratory prep Rose Drøbak EMP Aryan Jacques Shirac The Great Excommunication of Everything Evan Smirnow Morris Anisette User:Hanoi4 Corner Bakery I didn't mean to put this here. Damn Oli and his odd talk page methods. Ms. Patak David Wendl Eric Williams Smegma Powderfinger Rule!!! Awesometown Ask a ninja Frandom X4 Lifehouse LoS -1 Guiseley School Pure vanity. Again. Right behind you Random "praise" vandal.. Liam W. AAAAGGGGHHHGHHGHGHGHHHHGH The shit Image:ScreenShot0266.jpg vanity image for vanity page (which was deleted) Zippy (SWG) If even the spoof site Wikipedia banned this.. 
(aka pure vanity) Hindu Kush - created by me Motte and bailey castle old versions Image:Quandaribegum.jpg old versions Image:Tajbreast.jpg old versions Image:Empiree.jpg old versions Image:Mr lanka.jpg old versions Image:Rockp.jpg old versions Image:Dar.jpg old versions Image:Supercat.jpg Rock Lee Metamorphic rocks Splash Mountain Tim coffey Speedracer Oslama Wiki Edit War (blanked by author) 'ZIG' Scott vanderlind -Vanityshite Ultimate turkey lizards Johannes Guttenberg Buck Satan Benjamin elliott Fred and Ethel Satan Year Without A Summer Your girlfriend Michael Jackson's other glove Rocompsagnathanus -- follow directions of writer: it wasn't any better b4--Shandon 03:35, 22 September 2006 (UTC) Talk:Resistance - No corresponding article. Sep 21 Kurt Cobain Nigger poop Spam. (Are you bat-fuck insane? Keep by all means!) Microsoft's Attempt To Take Over The World Pointless articles do not deserve stub statud. Fletcher Quite sure this is vanity. Respiratory system Kitty Snuffing Lechia White men South Auckland Ricky Clinton Omaha, Nebraska --worthy topic, worthless article. Aragorn - copy from WIkipedia article of Indiana Jones, with "Indiana Jones" replaced with "Aragorn" Wilf greene - apparently vanity, although some say slandanity. Afrikaans Alan beem Quackzilla FUCKING LEGEND- CVP'd 00456 Hi-5 Anthony nido - Oh god I thought you were being serious then! HERE Nigger poop 400+ K of the words "NIGGER POOP". Ban the author if he's not already banned. Poopoo blowjob SFR Yugoslavia User:Modusoperandi/The Andy Griffith Show...done and moved... Deep Impact Probe American Culture Luke pearce Eskilstuna Bas rutten No, not funny. Microsoft Mouse Chuck Norris Guide To Better Peacekeeping Either move to a "How To:" or delete outright. I'm voting for the latter. The Evil Penguin Civil War Coker Nagy Wilkinson The Peoples Republic of China Image:DSC00008.JPG Vanity image linked in vanity pages. 
Automail Dark cave Impotency Second Council of Lyon Sir Thomas Coffee Toasters Albodanbotross Suresh veeragoni vanity. The_Ultimate_Battle Either delete or move it to gamespace... though it's probably vanity. Murloc Surigadu Rahmaniac -vanityshite Calarts Cal arts California Institute of the Arts Template:Please explain Ents Twinkys Fletchers and cokers Ultraviolet Strickland Sir Thomas Coffee -vanityshite HexatЯidecimal pointless redirect. Selling mith Spam is bad for your health. Kurt Cobain Haha! Testicles!-- 13:39, 21 September 2006 (UTC) Sep 20 Canadian Idiot (Song) Working hard generally requires you to have more than one line of text. Guitarphile Ricky_Clinton vanity Kevin Bragg Alex De Hayr Facey old versions Image:Davidsketch.jpg 16:54, 19 September 2006 & 13:59, 20 September 2006 OMGWTFLOLROFLMFAO This is just a stupid one liner. Burvi Is there a way to make sure people post in english? (not that it matters, since this is too short) MC Spanner No, not funny. Foot Fetish Spartanburg, South Carolina Game:Zork/party Game:Zork/SupDen Ana leuca N.E. Klaas Jan Huntelaar Alexa Feyenoord Unquotable:Hamlet So atrociously formatted that it deserves a quick killing.Not QVFDable. • Spang • ☃ • talk • User:Sir Cornbread/The Putz Who Stole Hannukah -Eventually gonna be a double redirect, so lets kill it now... Sir C 04:13, 20 September 2006 (UTC) Stupid maths Hermione Does Hogwarts Gazelle Super Slayin' Jesus Lennon Emo sexual Zonic Sebastian Lopez Cynthia McKinney Nick barker Brants "Electric Jack" McMaster =Dx62 Hippopotomonstrosesquippedaliphobia : I'm going to disagree with this one, it does have potential. 
(but only if someone with more head writes it) --Quira 02:32, 20 September 2006 (UTC) SputinBr Annie Mills - recreated vanity Gabby old versions Image:Davidsketch.jpg 22:00, 17 September 2006 & 16:50, 19 September 2006 Sep 19 One huge link 919K of "E"s Kegg Barker Alejandro Good Eddie Royal Mail Jolly good fun Tsubasa Chronicle The war on terror World War VIII Great Pyramid of Giza Granola Bars User talk:Sbluen/I am Dying I don't need the copied discussion anymore. Secret Page - Secret page is enough Editing bRisbane sTate hIgh Kenn-eth vanity Crab Dip War Brisbane state high too ugly and terrible to live Dread Pirate Wesley Content consists of "BOOGA BOOGA BOOGA" Penus Scream Chris Catmul Davids german shepherd NRV expanding into Vanity page. Lose/lose situation. Gabby Only content: got milk Simple Plan Too unfactual. Belongs to Wikipedia. Sep 18 Exploding Wales its funny, but... International School of Port of spain Tits McGee Inprobability generator Ian Smith bit sad this one, but he certainly doesnt want an article here. Wannabe punk bitch Biloxi high school Poonscape Supercat, i've already moved contents to Super Cat Mr. Flibble BsKaiyan@email.com Image:403.jpg uploaded for Jake page Jake "manly pants" L Ken Livingstone (en.wikipedia word for word copy paste) Image:Halfsc.jpg What does Will Young and a washing machine have in common? OMGWTFLOLROFLMFAO UnNews:I hate niggers, this is not gonna be expanded. Old versions Image:Redandblue.jpg 22:32, 12 September 2006, 21:20, 13 September 2006, 22:41, 13 September 2006, 20:18, 14 September 2006, 12:08, 15 September 2006 & 07:37, 16 September 2006 Darth Hitler I_Think_Satan_Likes_cheese South_Auckland Sep 17 Robert Gordon's Colledge Chris carrano Casio XR-118 Wal-Mart* Leggggo Eh Elizabeth Corona Beer Image:Al&super.jpg Talk:Lilo - orphaned talk pageno longer This page does not exist Longinus Babel:Zh-hant/死亡筆記 User:Sbluen/I am Dying No need to keep a copy now. Mentiqa GhandiBurger -Hinoa NRV'd, but its vanity. 
Look at the history.I would have deleted it if I had cared. –H.00:13 UTC, 09.17.2006 Lowtax Happy! Happy! Joy! Joy! Sep 16 Image:Superal.jpg Superfunk Bill Goldberg Round screw die Pakistan and it's umbilical connection with STAR WARS Perfect dark Takapuna Bricemanning Stanaway Ball Image:Bushkeys.jpg (By request of uploader) Image:Crawhusv11.jpg Sep 15 Corona_Beer F._Scott_Fitzgerald Template:Notinmouth I built this out of a Wikipedia template, but since Wikipedia uses a different license than we do (something I only just noticed) I think it should be deleted. Stewart Elkins DUMB / RACIST Porcupine Ellie Robinson vanity Real Madrid CF Ishaan patodia Animal bladders Flava' flave Darth Hitler Joseph perez Charles Wong slander Ceres Spousal tyranny Dogpoop Catpoop Worst 100 Ways to Die of All Time recreated crap Aiden galea NEN: No explanation necessary Aaron ledbetter Mindless Self IndulgenceNRV'd; be patient... –H.02:09 UTC, 09.15.2006 Sep 14 …·°º•ø®@» Balamory Tom petty Marla brock Theeves -- Both Lookingfor Articles. Theives Bradon hargraves-wall, dim-witted personal attack Sep 13 Babel:Zh-hant/真紅衛兵 Babel:Zh-hant/水銀黨 moved to Traditional Chinese sister Brendon hargaves-wall SLANDANITY Abanana Shashlyk UnNews:Upskirt User:Rataube/Portada. Not need for it anymore.--:59, 13 September 2006 (UTC) Nick SandbergNope, NRV-level suckage –H. Black man's penis - Vanity ad for shock newsgrounds vid masquarading as article. User:Modusoperandi/Das Love Boot User:Modusoperandi/Solid Gold spring cleaning: all 3 of these are in mainspace now American analog set Sep 12 27 (number) blank article Hoserland, crap Charles Martel The House of Yahweh Playcow Trust Kgchilds Mind Control Caffe` Universities in Sweden Template:User Jesus loves Tom bruise Annie Sep 11 Enforced Hair Styling Act Lemonmuncher [1] Wrong namespace Bible camps August 23 Marianna How to become rich in Malaysia Pendatang asing Great Bolehland Teh tarik Mr. 
Scruff XviD Newry Filedeletion.exe Here 5000 times Octillion File Deletion Program Gamecube Andriy Shevchenko (moved to Uncyclopedia) Freebird Cheesedude - someone really needs to block this guy. Cheesedude781 UnNews:The Tom UnNews:The tom Image:Padlock.svg Owsley Orange, crap Planet earth Red Planet White on rice Ali-Ben Jaafar Sep 10 Moyado -Blatant Vanity [2], [3], [4]. Silt, two word "article" Mother Russia Charlie parker Silly Bitch Pools closed Lamar high school Tunza fun Ukrainian girls Baked beans Kappa Mikey Chris Farley Yuri's Revenge, crap Worst 100 Ways to Die of All Time, crap Fagball UnNews:Romania John Techno craptastic UnNews talk:Senate report concludes Saddam had no al-Gayda links after all, ugh... the author actually signed this nasty one liner^ not QVFD material - User:Guest/sig 06:35, 10 September 2006 (UTC) Cabaret - cut 'n paste from WP article on euro common agro policy Sep 9 A random zombie, stupid article John Techno's Blitzkrieg Allstars lame one liner Image:Hentaixxx.jpeg Oi Kekkonen, sä suurin kaikista... I NRV'd this, and then realized that I could have put it here ZB Neko Old versions Image:Twistending.jpg 07:18, 17 July 2006, 14:35, 21 July 2006, 14:45, 21 July 2006 & 14:46, 21 July 2006 "RaRaRa! Oooh! FANGORIOUS!!" Lowtax Fuchsia Slim Thug Snozeberry Carlow Crap Kallil Chebaro Rajput Charles V Midnight Natanijuani this is not NRV material but rather a speedy delition... Beyblade same for this one Monkey disease this too Dysentry this doesn't even make sense. It's just a few nonsensical words. 
Mike, from down the street and this Editing Kim Deal and Heiwa no justification necessary Heiwa same here Kyle Busch same here, but dumber than above Kurt Busch same creator/vein as Kyle Chao short, unfunny crap Rastafari *shakes head sadly* Sep 8 Image:Yep.jpg Image used by previous crappy youtube band vanity page Alec Sydlow And The Remedials vanity for crappy youtube band Image:Seany's a weeney.jpg Beyblade Elisha Cuthbert's boobs Prothegated (moved to Undictionary) Determinism Fat Tony old versions IVC arifact old versions faria alam & corbett old versions laden in bamiyan old versions tull on thin ice old versions bono is bonzai old versions bill gates with king of bhutan old versions prehistoric Nepal under sea old versions coat of arms of Nepal Uber nooblet Jerry Stiller Michael broderick Alex ball Adolf Hitler's red-headed step child Jon robin That gay emo kid at your school Kubashnubadogiepoo God is a Girl Sep 7 Miller Time Shcool Zug-zug Bulimic mafia Killer origami The world acording to the USA Symbolism Todd's Comb Zug-zug Shcool Chickenhawk Mumbles Lismá Bulimic mafia Dave the Chameleon Symbolism The world acording to the USA Killer Oragami Harry The Hamster Neil Warnock Wheels Government of India Plezzo Battle of Campbell’s Chunky Soup The Anal Probe Image:LOL.jpg apparently created for sarah hooper attack page Sarah Hooper attack page Bo Hu Jacob landers vanity attack Halo 2 Fuck You Bulimic mafia Sep 6 Battle of guilford courthouse Tibia You spell it with an s Sep 5 Trevor Linden Toilet Duck James allen Death machine (moved to Undictionary) Smack is Back Caveolli Lolcakes Sep 4 Bad Mat hislop Mitch dwyer 47 (number) Oooooooooooooooo Oogry-Moogric languages John mcglynn Matleena peltonen Scissor Sisters Five Point Someone Gayzilla Category:Dumb asses who end up like Kenny McCormick This is worse that Bluto's category. ☻ This is blatantly idiotic. Sep 3 USAID Nellie Fox Aditya Kumar Tiwari 2 Sep The Legend Of Zelda: OH SHIIT! 
UnNews:Lithuania beat Italy 1:1 Stillblade Tazers Victoria University Arkan - copied from Wikipedia Page 28 Game:Zork/xyzzyzzyx/palindrome %CE%98 Erik Satie Adaptation. Taylor Trump Lewin old version burqa 23:52, 31 August 2006 new versions Image:Button sig.png someone though it would be funny to- New change the sig button to a turd (14:12, 1 September 2006)...I overwrote it w/a copy of the original turdless pic (18:19, 1 September 2006) versions cannot be deleted, only old versions. Ok, old versions then. Image:Button sig.png 08:52, 15 August 2005 + 14:12, 1 September 2006. I said "new versions" before because now it'll look like I make the pic... 1 Sep old versions Solid Gold 00:27, 1 September 2006 + 10:58, 1 September 2006 + 10:59, 1 September 2006 Miles Tails Prower The Roeper Android - copy from Wikipedia UnNews:GM shows off new green assembly plant not humourous... at all Batrakna BIKECAT Winner Ethan sindler Matt Harrington MSN Man Cocktipuss - recreated page, not improved Liam Meatwad - I added in a few bits just to humor the author Tokoroa Disco - I though I'd use it, used a more appropriate pic instead David S. Pyman - vanity? Template:Satanic - unused, and should be better anyway Homer the Gluttonous Led Zeppelins Center of the universe Gilang Primordial pasta Tiny Vikings from Heikansjor UnNews:Ernesto blows away Carolinas Kross - lame ass kiddie attack page that someone brought from the Wikipedia Wiccapedia
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:QuickVFD/archive14
On 10/10/05, Gale, David <David.Gale@hypertherm.com> wrote:
> I got past my earlier problem (I hadn't realized I needed to explicitly
> enable mod_dav when I compiled apache), and am now trying to set up
> ViewCVS to look at my subversion repositories, following the
> instructions here: <>.
> Everything goes smoothly until I actually try to hit the /viewcvs
> directory, at which point I get:
>
> This was generated from the snapshot supplied on that site; I also tried
> the latest nightly ViewCVS, and got the same error. Any thoughts on how
> I can fix this/what I'm doing wrong?

I assume that you have already installed the SWIG Python bindings. If not, build and install them first (read the subversion/bindings/swig/INSTALL file). After installing the SWIG Python bindings, make sure the Python path sees these SWIG modules. Add the following two lines to the top of the file viewcvs/www/cgi/viewcvs.cgi:

import sys
sys.path.insert(0, '/usr/local/lib')
sys.path.insert(0, '/usr/local/lib/svn-python')

regards,
--
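The suggested fix works because Python resolves imports by searching the directories in sys.path in order. A minimal self-contained sketch of the same technique follows; it uses a throwaway temp directory and a made-up module name, since the /usr/local/lib paths above are specific to that installation:

```python
import os
import sys
import tempfile

# Stand-in for /usr/local/lib/svn-python: a directory the interpreter does
# not know about yet, containing a module we want to be able to import.
bindings_dir = tempfile.mkdtemp()
with open(os.path.join(bindings_dir, "fake_svn_binding.py"), "w") as f:
    f.write("BOUND = True\n")

# The same two-line technique the post adds to viewcvs.cgi: prepend the
# directory so the interpreter can find modules inside it.
sys.path.insert(0, bindings_dir)

import fake_svn_binding
print(fake_svn_binding.BOUND)  # True
```

Prepending (index 0) rather than appending means the new directory wins if a module of the same name exists elsewhere on the path.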
https://svn.haxx.se/users/archive-2005-10/0323.shtml
The QRubberBand class provides a rectangle or line that can indicate a selection or a boundary. More...

#include <QRubberBand>

The QRubberBand class provides a rectangle or line that can indicate a selection or a boundary.

This enum specifies what shape a QRubberBand should have. This is a drawing hint that is passed down to the style system, and can be interpreted by each QStyle.

Destructor.

Reimplemented from QWidget::changeEvent().

Reimplemented from QObject::event().

Initialize option with the values from this QRubberBand. This method is useful for subclasses when they need a QStyleOptionRubberBand, but don't want to fill in all the information themselves. See also QStyleOption::initFrom().

Moves the rubberband to point (x, y).

This is an overloaded function. Moves the rubberband to point p.

Reimplemented from QWidget::moveEvent().

Reimplemented from QWidget::paintEvent().

Resizes the rubberband so that its width is width, and its height is height.

This is an overloaded function. Resizes the rubberband so that its new size is size.

Reimplemented from QWidget::resizeEvent().

Sets the geometry of the rubber band to rect, specified in the coordinate system of its parent widget. See also QWidget::geometry.

This is an overloaded function. Sets the geometry of the rubberband to the rectangle whose top-left corner lies at the point (x, y), and with dimensions specified by width and height. The geometry is specified in the parent widget's coordinate system.

Returns the shape of this rubber band. The shape can only be set upon construction.

Reimplemented from QWidget::showEvent().
http://doc.qt.nokia.com/main-snapshot/qrubberband.html#changeEvent
Python - Insert a new node at a given position in the Linked List In this method, a new element is inserted at the specified position in the linked list. For example - if the given List is 10->20->30 and a new element 100 is added at position 2, the Linked List becomes 10->100->20->30. First, a new node with given element is created. If the insert position is 1, then the new node is made to head. Otherwise, traverse to the node that is previous to the insert position and check if it is null or not. In case of null, the specified position does not exist. In other case, assign next of the new node as next of the previous node and next of previous node as new node. The below figure describes the process, if the insert node is other than the head node. The function push_at is created for this purpose. It is a 6-step process. def push_at(self, newElement, position): #1. allocate node to new element newNode = Node(newElement) #2. check if the position is > 0 if(position < 1): print("\nposition should be >= 1.") elif (position == 1): #3. if the position is 1, make next of the # new node as head and new node as head newNode.next = self.head self.head = newNode else: #4. Else, make a temp node and traverse to the # node previous to the position temp = self.head for i in range(1, position-1): if(temp != None): temp = temp.next #5. If the previous node is not null, make # newNode next as temp next and temp next # as newNode. if(temp != None): newNode.next = temp.next temp.next = newNode else: #6. When the previous node is null print("\nThe previous node is null.") The below is a complete program that uses above discussed concept to insert new node at a given position in the linked list. 
# node structure
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    #Inserts a new element at the given position
    def push_at(self, newElement, position):
        newNode = Node(newElement)
        if(position < 1):
            print("\nposition should be >= 1.")
        elif (position == 1):
            newNode.next = self.head
            self.head = newNode
        else:
            temp = self.head
            for i in range(1, position-1):
                if(temp != None):
                    temp = temp.next
            if(temp != None):
                newNode.next = temp.next
                temp.next = newNode
            else:
                print("\nThe previous node is null.")

    #display the content of the list
    def PrintList(self):
        temp = self.head
        if(temp != None):
            print("The list contains:", end=" ")
            while(temp != None):
                print(temp.data, end=" ")
                temp = temp.next
            print()
        else:
            print("The list is empty.")

#create a linked list with elements 10, 20 and 30
MyList = LinkedList()
node1 = Node(10)
node2 = Node(20)
node3 = Node(30)
MyList.head = node1
node1.next = node2
node2.next = node3
MyList.PrintList()

#Insert an element at position 2
MyList.push_at(100, 2)
MyList.PrintList()

#Insert an element at position 1
MyList.push_at(200, 1)
MyList.PrintList()

The above code will give the following output:

The list contains: 10 20 30
The list contains: 10 100 20 30
The list contains: 200 10 100 20 30
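The edge cases of position-based insertion are worth seeing in isolation. The sketch below is a self-contained reimplementation (the names Node, push_at and to_list are illustrative, not the tutorial's class) showing that position 1 replaces the head and that a position past the end leaves the list unchanged:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def push_at(head, new_element, position):
    """Return the (possibly new) head after inserting at a 1-based position."""
    new_node = Node(new_element)
    if position < 1:
        return head                      # invalid position: no change
    if position == 1:
        new_node.next = head             # new node becomes the head
        return new_node
    temp = head
    for _ in range(1, position - 1):     # walk to the node before `position`
        if temp is not None:
            temp = temp.next
    if temp is None:
        return head                      # position past the end: no change
    new_node.next = temp.next
    temp.next = new_node
    return head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.data)
        head = head.next
    return out

head = None
for value in (30, 20, 10):               # build 10 -> 20 -> 30
    node = Node(value)
    node.next = head
    head = node

head = push_at(head, 100, 2)
print(to_list(head))                     # [10, 100, 20, 30]
head = push_at(head, 999, 99)            # too far past the end: unchanged
print(to_list(head))                     # [10, 100, 20, 30]
```

Returning the head from the function (rather than mutating a wrapper object) is just an alternative style; the traversal logic is the same six-step process described above.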
https://www.alphacodingskills.com/python/ds/python-insert-a-new-node-at-a-given-position-in-the-linked-list.php
This feature is called “reputation”, where participation earns you points. As you earn points, you earn achievements that are reflected as your reputation. The settings for reputation allow you to define your own points system, defining the points earned for creating a post, replying to a post, having a post marked with 4 or 5 stars or “liked”, and having a post marked as “Best Reply”. You further define the number of points per level, and whether the achievements are represented as an image or as text. The second feature to point out is called a Gifted Badge. The settings let the site owner define a set of badges and give each one a name. The owner can then assign these badges to community members. For instance, you might want to create a badge that calls out if someone has achieved a particular certification level. Just highlight a community site member and the “Give Badge” button will be enabled in the ribbon in the Moderation tab. Once users have been assigned badges, they appear on the front page of the Community site in the “Top contributors” web part. When I first saw the new Community Sites in SharePoint 2013, I was excited. Of course, there are a ton of things that I would like to see improved, but this is a decent start. To me, the ability to assign badges seemed really cool and I saw a bunch of opportunities such as assigning badges when employees are on-boarded and they complete training courses. Out of the box, all you can do with them is either earn them or assign them. What would be really cool is if you could somehow use them to protect content such that only people who have a specific badge are allowed to use a site. For instance, I could create a badge called “Microsoft Certified Professional” and only users who have that badge are allowed into the “MCPs Only” web site. That would allow me to do this: That is exactly what we are going to build. Wanna Come In? Show Me Your Badge!
Badges are simply items in a list, so how would you use that in order to allow or deny someone access to a site? The solution is a custom claims provider. A claims provider is used to augment a user’s claims and to provide name resolution. Augmenting claims simply means adding more claims into the user’s token. For instance, I can add a claim that says what the user’s favorite basketball team is, what their hair color is, whatever information I want to add to their user token. Here is a screen shot where I added a new claim type “” with a value “Microsoft Certified Master”. When the user logs in, our custom claim provider will augment the user’s claim set by adding an additional claim indicating if the user has been assigned a badge. If the user has not been assigned a badge, then no claim is added. This scenario could be adapted for many other scenarios. Perhaps you have a document library that contains your company’s intellectual property, and only people who have authored intellectual property have access to a special site. Maybe you require people to complete training courses before allowing them access to content. There are many ways that you can adapt this scenario. The Badge Custom Claims Provider There are a number of claim providers already registered out of the box, and you can register more than one claim provider to augment claims. Using the Get-SPClaimProvider cmdlet in PowerShell, we can see the claim providers that have been registered. Start by creating a new SharePoint 2013 farm solution using the “SharePoint 2013 – Empty Project” template. Next, add a class that derives from Microsoft.SharePoint.Administration.Claims.SPClaimProvider. Here is a class diagram of what we will implement. Before we go into the claims provider, let’s get our data access layer stuff out of the way. Reading Badges From a Community Site The first step is to create a few classes that store the data we want to work with. 
The first class is Badge that just holds the ID and Title for the badge. namespace Microsoft.PFE.ClaimsProviders { public class Badge { public int ID { get; set; } public string Title { get; set; } } } The next class is BadgeUser, which holds information about the user and the badges that they have been assigned. using System.Collections.Generic; namespace Microsoft.PFE.ClaimsProviders { public class BadgeUser { public string Name { get; set; } public string LoginName { get; set; } public List<Badge> Badges { get; set; } } } Now that we’ve defined some classes to hold the data, let’s create a helper class that can fill them with data. The first method allows us to search for any badges that start with a set of characters. 1: using Microsoft.SharePoint; 2: using System.Collections.Generic; 3: 4: namespace Microsoft.PFE.ClaimsProviders 5: { 6: public class BadgeHelper 7: { 8: /// <summary> 9: /// Get the list of badges that start with the partial name. 10: /// </summary> 11: /// <param name="partialBadgeName">The characters the badge 12: /// name should start with. 13: /// </param> 14: /// <returns>A list of badges that match the criteria.</returns> 15: public static List<Badge> GetBadgesByPartialName(string partialBadgeName) 16: { 17: 18: List<Badge> badges = new List<Badge>(); 19: 20: //Use RWEP here because the user may not have 21: //access to the site. 
22: SPSecurity.RunWithElevatedPrivileges(delegate() 23: { 24: using (SPSite site = new SPSite("")) 25: { 26: SPWeb web = site.OpenWeb("Community"); 27: 28: SPQuery query = new SPQuery(); 29: query.Query = "<Where>" + 30: "<BeginsWith>" + 31: "<FieldRef Name='Title' />" + 32: "<Value Type='TEXT' >" + 33: partialBadgeName + 34: "</Value>" + 35: "</BeginsWith>" + 36: "</Where>"; 37: 38: query.ViewFields = "<FieldRef Name='ID'/>" + 39: "<FieldRef Name='Title'/>"; 40: 41: SPList badgeList = web.Lists["Badges"]; 42: SPListItemCollection items = badgeList.GetItems(query); 43: 44: 45: if (null != items) 46: { 47: foreach (SPListItem item in items) 48: { 49: badges.Add(new Badge 50: { 51: ID = item.ID, 52: Title = item.Title 53: }); 54: } 55: } 56: 57: } 58: }); 59: return badges; 60: } The call on line 22 is necessary because the current user may not have access to the community site that we are obtaining data from. You can see this is a pretty simple method that just gets data from a SharePoint list. The next method looks nearly identical in that it gets a list of data based on an exact match rather than a partial match. /// <summary> /// A list of badges that match the exact name. /// </summary> /// <param name="name">The name of the badge to match.</param> /// <returns>A list of badges that have the exact name.</returns> public static List<Badge> GetBadgesByExactName(string name) { List<Badge> badges = new List<Badge>(); //Use RWEP here because the user may not have //access to the site. 
SPSecurity.RunWithElevatedPrivileges(delegate() { using (SPSite site = new SPSite("")) { SPWeb web = site.OpenWeb("Community"); SPQuery query = new SPQuery(); query.Query = "<Where>" + "<Eq>" + "<FieldRef Name='Title' />" + "<Value Type='TEXT' >" + name + "</Value>" + "</Eq>" + "</Where>"; query.ViewFields = "<FieldRef Name='ID'/>" + "<FieldRef Name='Title'/>"; SPList badgeList = web.Lists["Badges"]; SPListItemCollection items = badgeList.GetItems(query); if (null != items) { foreach (SPListItem item in items) { badges.Add(new Badge { ID = item.ID, Title = item.Title }); } } } }); return badges; } The last method bears a little more explanation. Now we want to get the badges for a particular user. The user may not have access to the site, so we have to use RunWithElevatedPrivileges. In fact, the user may not even be in the site users list, so we have to use EnsureUser in order to obtain a valid reference to the SPUser object for that user. 1: /// <summary> 2: /// Gets the badges for a particular user 3: /// </summary> 4: /// <param name="loginName">The login name in encoded claims 5: /// format. For instance, "i:0#.w|contoso\administrator".</param> 6: /// <returns>The user information and badges for that user.</returns> 7: public static BadgeUser GetBadgesForUser(string loginName) 8: { 9: BadgeUser badgeUser = null; 10: 11: //Use RWEP here because the user may not have 12: //access to the site. 13: SPSecurity.RunWithElevatedPrivileges(delegate() 14: { 15: using (SPSite site = new SPSite("")) 16: { 17: SPWeb web = site.OpenWeb("Community"); 18: 19: //The encoded claim will be in the format 20: //0#.f|ldapmember|kirkevans. Add the i: prefix 21: //to indicate this is an identity claim. 22: if (!loginName.StartsWith("i:")) 23: { 24: loginName = "i:" + loginName; 25: } 26: 27: //EnsureUser will only work if the same login provider 28: //is available in this site. 
For example, if you are
 29:     //calling this from a user authenticated in another web
 30:     //application, and this site does not have FBA configured,
 31:     //the call to EnsureUser will fail.
 32:     SPUser user = web.EnsureUser(loginName);
 33:
 34:     SPQuery query = new SPQuery();
 35:     query.Query = "<Where>" +
 36:         "<And>" +
 37:             "<Eq>" +
 38:                 "<FieldRef Name='Member' LookupId='TRUE' />" +
 39:                 "<Value Type='USER' >" + user.ID + "</Value>" +
 40:             "</Eq>" +
 41:             "<IsNotNull>" +
 42:                 "<FieldRef Name='GiftedBadgeLookup'/>" +
 43:             "</IsNotNull>" +
 44:         "</And>" +
 45:         "</Where>";
 46:
 47:     query.ViewFields = "<FieldRef Name='Title'/>" +
 48:         "<FieldRef Name='Member'/>" +
 49:         "<FieldRef Name='GiftedBadgeLookup'/>";
 50:     SPList memberList = web.Lists["Community Members"];
 51:     SPListItemCollection items = memberList.GetItems(query);
 52:
 53:     if (items.Count > 0)
 54:     {
 55:         badgeUser = new BadgeUser
 56:         {
 57:             LoginName = user.LoginName,
 58:             Name = items[0].Title
 59:         };
 60:         List<Badge> badges = new List<Badge>();
 61:
 62:         foreach (SPListItem item in items)
 63:         {
 64:             SPFieldLookupValue badgeName =
 65:                 new SPFieldLookupValue(item["GiftedBadgeLookup"].ToString());
 66:             //The current data structure only allows
 67:             //one badge, but we may want to add badges
 68:             //from another system in the future.
 69:             badges.Add(new Badge
 70:             {
 71:                 ID = badgeName.LookupId,
 72:                 Title = badgeName.LookupValue
 73:             });
 74:         }
 75:         badgeUser.Badges = badges;
 76:     }
 77: }
 78: });
 79: return badgeUser;
 80: }
 81: }
 82: }

The call to EnsureUser requires that the provider used to authenticate the user is available to this web application. If we are in one web application that uses FBA claims and then we make a call to another web site (in this case), then both web applications must provide the same authentication capabilities. If one web application uses FBA, the site you are calling EnsureUser for must also be configured to use the same FBA provider.
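Stripped of the SharePoint types, the data-access and augmentation flow is easy to model in miniature. The sketch below is purely illustrative (plain Python with invented names and an assumed claim URI, not the SPClaimProvider API): a provider looks up the user's gifted badges and appends one extra claim per badge to the token.

```python
# Toy model of claims augmentation. Everything here is an assumption for
# illustration: the claim URI, the store contents, and the tuple-based token.
BADGE_CLAIM_TYPE = "http://example.com/claims/badge"   # assumed URI

# Pretend backing store: the Community site's gifted-badge assignments.
BADGE_STORE = {"contoso\\kirkevans": ["Microsoft Certified Master"]}

def augment_claims(identity_claim, claims):
    """Mimics the role of FillClaimsForEntity: append one claim per badge."""
    for badge in BADGE_STORE.get(identity_claim, []):
        claims.append((BADGE_CLAIM_TYPE, badge))
    return claims

# At sign-in the token starts with just the identity claim...
token = [("identity", "contoso\\kirkevans")]
# ...and the provider augments it with a badge claim.
augment_claims("contoso\\kirkevans", token)
print(token)
```

A user with no entry in the store simply gets no extra claim, which mirrors the real provider's behavior when no badge has been gifted.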
Implementing the Badge Claim Provider Now that we have the data access stuff out of the way, let’s focus on the really cool part of the solution. We’ll start by implementing the properties as shown in the previous class diagram. 1: using System; 2: using System.Collections.Generic; 3: using Microsoft.SharePoint.Administration; 4: using Microsoft.SharePoint.Administration.Claims; 5: using Microsoft.SharePoint.WebControls; 6: 7: namespace Microsoft.PFE.ClaimsProviders 8: { 9: public class BadgeClaimProvider : SPClaimProvider 10: { 11: #region Constructor 12: public BadgeClaimProvider(string displayName) 13: : base(displayName) 14: { 15: } 16: #endregion 17: 18: #region Properties 19: internal static string ProviderInternalName 20: { 21: get { return "BadgeClaimsProvider"; } 22: } 23: 24: public override string Name 25: { 26: get { return ProviderInternalName; } 27: } 28: 29: internal static string ProviderDisplayName 30: { 31: get { return "Badges for User"; } 32: } 33: 34: private static string BadgeClaimType 35: { 36: get { return ""; } 37: } 38: 39: private static string BadgeClaimValueType 40: { 41: get { return Microsoft.IdentityModel.Claims.ClaimValueTypes.String; } 42: } 43: 44: public override bool SupportsEntityInformation 45: { 46: get { return true; } 47: } 48: 49: public override bool SupportsHierarchy 50: { 51: get { return false; } 52: } 53: 54: public override bool SupportsResolve 55: { 56: get { return true; } 57: } 58: 59: public override bool SupportsSearch 60: { 61: get { return true; } 62: } 63: 64: #endregion The ProviderDisplayName property on line 29 can be seen in the DisplayName property when we use the Get-SPClaimProvider cmdlet. The BadgeClaimType (line 34) defines the URI for the claim, and the BadgeClaimValueType (line 39) defines the type of value the claim will contain. The four properties starting with “Support” indicate to SharePoint what capabilities this provider has. 
- EntityInformation – provides claims augmentation for a specific user. - Hierarchy – used in the People Picker to create a hierarchical representation of claims. - Resolve – Find an exact match for the claim value. - Search – Search for a value based on a partial word or search pattern. Our provider will support entity information, search, and resolve. Claims Augmentation – Providing Entity Information We need to add a claim to the user’s token that shows a badge that they might have earned. To do that, we first tell SharePoint that we are going to fill a claim with a specific type (“”) and what type its value will be (Microsoft.IdentityModel.Claims.ClaimValueTypes.String). We then use the FillClaimsForEntity method to augment the claims for the current user. This method accepts a parameter, “entity”, the value of which is the user’s identity claim (for instance, “0#.f|ldapmember|kirkevans”). We then pass that value to our helper method GetBadgesForUser. 1: protected override void FillClaimTypes(List<string> claimTypes) 2: { 3: if (claimTypes == null) 4: throw new ArgumentNullException("claimTypes"); 5: 6: // Add our claim type. 7: claimTypes.Add(BadgeClaimType); 8: } 9: 10: protected override void FillClaimValueTypes(List<string> claimValueTypes) 11: { 12: if (claimValueTypes == null) 13: throw new ArgumentNullException("claimValueTypes"); 14: 15: // Add our claim value type. 
16: claimValueTypes.Add(BadgeClaimValueType); 17: } 18: 19: 20: protected override void FillClaimsForEntity(Uri context, SPClaim entity, List<SPClaim> claims) 21: { 22: if (entity == null) 23: throw new ArgumentNullException("entity"); 24: 25: if (claims == null) 26: throw new ArgumentNullException("claims"); 27: 28: 29: var badgeUser = BadgeHelper.GetBadgesForUser(entity.Value); 30: 31: 32: if (null != badgeUser) 33: { 34: if (null != badgeUser.Badges) 35: { 36: foreach (var badge in badgeUser.Badges) 37: { 38: claims.Add(CreateClaim(BadgeClaimType, badge.Title, BadgeClaimValueType)); 39: } 40: } 41: } 42: } 43: 44: 45: 46: protected override void FillEntityTypes(List<string> entityTypes) 47: { 48: if (null == entityTypes) 49: { 50: throw new ArgumentNullException("entityTypes"); 51: } 52: entityTypes.Add(SPClaimEntityTypes.FormsRole); 53: } Enabling the People Picker So far, all we have done is add a claim to the user’s token. We want to enable the scenario where someone can create a site, select Users and Groups, and use the People Picker to select our badge, allowing any users with that badge access to the site. To do this, we need to implement a few more methods: FillSchema, FillSearch and FillResolve. FillSchema tells the PeoplePicker what information we will display and in what format. FillSearch accepts a string value that we use to find any badges that start with those characters. 
protected override void FillSchema(SPProviderSchema schema)
{
    if (null == schema)
    {
        throw new ArgumentNullException("schema");
    }
    schema.AddSchemaElement(new SPSchemaElement(PeopleEditorEntityDataKeys.DisplayName,
        "Display Name", SPSchemaElementType.TableViewOnly));
}

protected override void FillSearch(Uri context, string[] entityTypes, string searchPattern,
    string hierarchyNodeID, int maxCount, SPProviderHierarchyTree searchTree)
{
    if (!EntityTypesContain(entityTypes, SPClaimEntityTypes.FormsRole))
    {
        return;
    }
    List<Badge> badges = BadgeHelper.GetBadgesByPartialName(searchPattern);
    //create a picker entity for each matching badge
    foreach (Badge badge in badges)
    {
        PickerEntity entity = CreatePickerEntity();
        entity.Claim = CreateClaim(BadgeClaimType, badge.Title, BadgeClaimValueType);
        entity.Description = badge.Title;
        entity.DisplayText = badge.Title;
        entity.EntityData[PeopleEditorEntityDataKeys.DisplayName] = badge.Title;
        entity.EntityType = SPClaimEntityTypes.FormsRole;
        entity.IsResolved = true;
        searchTree.AddEntity(entity);
    }
}

Another way you can use the People Picker is to type a value in. When you type the value instead of using search, this calls the FillResolve method of the registered claims providers.

protected override void FillResolve(Uri context, string[] entityTypes, SPClaim resolveInput,
    List<PickerEntity> resolved)
{
    FillResolve(context, entityTypes, resolveInput.Value, resolved);
}

protected override void FillResolve(Uri context, string[] entityTypes, string resolveInput,
    List<PickerEntity> resolved)
{
    List<Badge> badges = BadgeHelper.GetBadgesByExactName(resolveInput);
    //create a picker entity for each exact match
    foreach (Badge badge in badges)
    {
        PickerEntity entity = CreatePickerEntity();
        entity.Claim = CreateClaim(BadgeClaimType, badge.Title, BadgeClaimValueType);
        entity.Description = badge.Title;
        entity.DisplayText = badge.Title;
        entity.EntityData[PeopleEditorEntityDataKeys.DisplayName] = badge.Title;
        entity.EntityType = SPClaimEntityTypes.FormsRole;
        entity.IsResolved = true;
        resolved.Add(entity);
    }
}

protected override void FillHierarchy(Uri context, string[] entityTypes, string hierarchyNodeID,
    int numberOfLevels, SPProviderHierarchyTree hierarchy)
{
    throw new NotImplementedException();
}
}
}

Registering Your Custom Claim Provider

In Visual Studio 2012, add a new farm-scoped feature and a feature receiver. Change the type of the feature receiver to SPClaimProviderFeatureReceiver. You only need to override a few properties here, no need to override FeatureActivating or FeatureDeactivating.
using System;
using System.Runtime.InteropServices;
using Microsoft.SharePoint.Administration.Claims;

namespace Microsoft.PFE.ClaimsProviders.Features.BadgeClaimFeature
{
    /// <summary>
    /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade.
    /// </summary>
    /// <remarks>
    /// The GUID attached to this class may be used during packaging and should not be modified.
    /// </remarks>
    [Guid("99871a2c-f81a-443c-9625-d2465d2ec29c")]
    public class BadgeClaimFeatureEventReceiver : SPClaimProviderFeatureReceiver
    {
        public override string ClaimProviderAssembly
        {
            get { return typeof(BadgeClaimProvider).Assembly.FullName; }
        }

        public override string ClaimProviderDescription
        {
            get { return "A sample provider written by Kirk Evans"; }
        }

        public override string ClaimProviderDisplayName
        {
            get { return BadgeClaimProvider.ProviderDisplayName; }
        }

        public override string ClaimProviderType
        {
            get { return typeof(BadgeClaimProvider).FullName; }
        }
    }
}

Right-click the project and choose Publish. This will package the solution into a WSP. Once packaged into a WSP, you can add the solution and deploy it using PowerShell.

Add-SPSolution -LiteralPath C:\temp\claims\Microsoft.PFE.ClaimsProviders.wsp
Install-SPSolution microsoft.pfe.claimsproviders.wsp -GACDeployment

Securing Content Based on a Badge

Now let's put our new claim provider through its paces. I create a new site titled "MCM Only" with unique permissions. I then determine who to add to the group "MCM Only Members". When I search for "Microsoft" I see a list of the badges that start with the search term (this calls our FillSearch method). When we hover over a result, a popup shows which claim provider the information came from. Now let's try our FillResolve method by typing the value in. When we hover over the underlined value, we choose the value that matches. Once we choose a value, only users that have that claim will be granted access.
Users who match a claim are granted access to the site. Notice that it shows in the top nav bar for a user who has the badge. When I try to access the site, it succeeds. Further, SharePoint is smart enough to security trim away the site that a user doesn't have access to based on our claim provider! Now I am logged in as Dan Jump, who does not have a badge. And when I try to access the site by typing it in the URL, it fails just as we would expect.

Summary

Custom claim providers address a large number of scenarios that people commonly face with SharePoint, enabling the ability to add additional information into the user's token and then using that information to secure content or assign work to users via the people picker. There are quite a few interesting ideas on using custom claim providers:

Securing SharePoint Content using Profile Attributes
Leveraging Facebook for SharePoint Security
Creating a Claims Provider Based on SharePoint Audiences
Creating a Hierarchical Claim Provider Based on Favorite Basketball Teams

As I survey many of the implementations that have been done for various customers using a mix of HttpModules and code in master pages to provide less than elegant solutions to similar problems, I can see more and more scenarios that could be solved using custom claim providers.

Microsoft.PFE.ClaimsProviders.zip
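The end-to-end behavior just shown (the badged account gets in, Dan Jump's does not) reduces to a claim check. A toy model, again with invented names rather than SharePoint's actual authorization pipeline:

```python
# Illustrative sketch only: once badges are claims in the token, an access
# check is just an intersection between the user's claims and the claims
# granted access to the site. All names and values here are made up.
def has_access(user_claims, site_acl):
    """True if any of the user's claims appears in the site's ACL."""
    return any(claim in site_acl for claim in user_claims)

mcm_only_acl = {("badge", "Microsoft Certified Master")}

kirk = [("identity", "contoso\\kirkevans"),
        ("badge", "Microsoft Certified Master")]   # augmented at sign-in
dan = [("identity", "contoso\\danjump")]           # no badge claim

print(has_access(kirk, mcm_only_acl))  # True
print(has_access(dan, mcm_only_acl))   # False
```

Security trimming falls out of the same check: a navigation provider can simply hide any site for which has_access is false.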
https://blogs.msdn.microsoft.com/kaevans/2013/04/22/how-to-allow-only-users-who-have-a-community-badge-to-your-sharepoint-2013-site/
Working with Images

In this section, you will see how to use two of the most common graphic types a programmer interacts with: bitmaps and icons. As stated earlier in the chapter, all user-interface objects in Windows are some form of a bitmap; a bitmap is simply a collection of pixels set to various colors.

Images

The System.Drawing namespace gives us three classes for working with images: Image, Bitmap and Icon. Image is simply the base class from which the others inherit. Bitmap allows us to convert a graphics file into the native GDI+ format (bitmap). This class can be used to define images as fill patterns, transform images for display, define the look of a button; its uses are many. Although the bitmap format is used to manipulate images at the pixel level, GDI+ can actually work with the following image types:

Bitmaps (BMP)
Graphics Interchange Format (GIF)
Joint Photographic Experts Group (JPEG)
Exchangeable Image File (EXIF)
Portable Network Graphics (PNG)
Tag Image File Format (TIFF)

Creating an instance of Bitmap requires a filename, stream, or another valid Image instance. For example, the following line of code will instantiate a Bitmap object based on a JPEG file:

Dim myBitmap As New System.Drawing.Bitmap(fileName:="Sample.jpg")

Once instantiated, we can do a number of things with the image. For instance, we can change its resolution with the SetResolution method or make part of the image transparent with MakeTransparent. Of course, we will also want to draw our image to the form. We use the DrawImage method of the Graphics class to output the image to the screen. The DrawImage method has over 30 overloaded parameter sets. In its simplest form, we pass the method an instance of Bitmap and the upper-left coordinate of where we want the method to begin drawing. For example:

myGraphics.DrawImage(image:=myBitmap, point:=New Point(x:=5, y:=5))

Scaling and Cropping

It is often helpful to be able to scale or crop an image to a different size.
Suppose you need a 100 x 100 image to fit in a 20 x 20 space, or you want to give your users the ability to zoom in on a portion of an image. You use a variation of the DrawImage method to scale images. This overloaded method takes a Rectangle instance as the destination for drawing your image. However, if the rectangle is smaller or larger than your image, the method will automatically scale the image to match the bounds of the rectangle. Another version of the DrawImage method takes both a source rectangle and a destination rectangle. The source rectangle defines the portion of the original image to be drawn into the destination rectangle. This, effectively, is cropping. The source rectangle defines how the image gets cropped when applied to the destination. Of course, you can crop to the original size or scale the cropped portion to a new size. Listing 9.8 provides a detailed code example of both scaling and cropping an image.

Listing 9.8 Scale and Crop

Protected Overrides Sub OnClick(ByVal e As System.EventArgs)
    'local scope
    Dim myBitmap As System.Drawing.Bitmap
    Dim myGraphics As Graphics
    Dim mySource As Rectangle
    Dim myDestination As Rectangle

    'create an instance of bitmap based on a file
    myBitmap = New System.Drawing.Bitmap(fileName:="dotnet.gif")

    'return the current form as a drawing surface
    myGraphics = Graphics.FromHwnd(ActiveForm().Handle)

    'define a rectangle as the size of the original image (source)
    mySource = New Rectangle(x:=0, y:=0, Width:=81, Height:=45)

    'draw the original bitmap to the source rectangle
    myGraphics.DrawImage(image:=myBitmap, rect:=mySource)

    'create a destination rectangle
    myDestination = New Rectangle(x:=90, y:=0, Width:=162, Height:=90)

    'output the image to the dest. rectangle (scale)
    myGraphics.DrawImage(image:=myBitmap, rect:=myDestination)

    'output a cropped portion of the source
    myGraphics.DrawImage(image:=myBitmap, _
        destRect:=New Rectangle(x:=0, y:=100, Width:=30, Height:=30), _
        srcRect:=New Rectangle(x:=0, y:=35, Width:=14, Height:=14), _
        srcUnit:=GraphicsUnit.Pixel)
End Sub

Notice that we actually drew the image to the form three times. The first time, we drew the image into a rectangle (mySource) based on its original size. The second time, we scaled the image to two times its original size (myDestination) by creating a larger rectangle and outputting the image accordingly. Finally, we cropped a portion of the original output and put it in a new, larger rectangle. Figure 9.5 shows the code's output to the form.

Figure 9.5 Scale and crop output.

Icons

An icon in Windows is a small bitmap image that represents an object. You cannot go far in Windows without seeing and working with icons. For example, the File Explorer uses icons to represent folders and files; your desktop contains icons for My Computer, Recycle Bin, and My Network Places. We use the Icon class to work with icons in .NET. We can instantiate an Icon instance in much the same way we created Bitmap objects. The following code creates an icon based on a file name:

Dim myIcon as New Icon(fileName:="myIcon.ico")

The DrawIcon method of the Graphics class is used to draw the icon to the form. To it, you can pass the icon and either just the upper-left x and y coordinates or a bounding rectangle. If you pass a Rectangle instance, the icon will be scaled based on the bounding rectangle. The following line of code draws an icon object into a bounding rectangle.

myGraphics.DrawIcon(icon:=myIcon, _
    rectangle:=New Rectangle(x:=5, y:=5, width:=32, height:=32))

The Graphics class also gives us the DrawIconUnstretched method that allows us to specify a bounding rectangle without actually scaling the icon.
In fact, if the icon is larger than the bounding rectangle, it will be cropped to fit, from the upper-left corner down and to the right.

Suggestions for Further Exploration

To animate images, take a look at the ImageAnimator class.

Check out the SmoothingMode property of the Graphics class and the SmoothingMode enumeration members. This property allows you to set options like antialiasing to make your graphics look "smoother."
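Although the listing above is VB.NET and GDI+, the same scale-then-crop sequence can be sketched cross-platform with Java 2D by rendering off-screen into a BufferedImage. The solid two-color source image, the class name, and the coordinates below are stand-ins for the listing's dotnet.gif, not the book's code:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class ScaleCropSketch {
    public static void main(String[] args) {
        // Stand-in for the 81x45 dotnet.gif: left half red, right half blue
        BufferedImage src = new BufferedImage(81, 45, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = src.createGraphics();
        g.setColor(Color.RED);
        g.fillRect(0, 0, 40, 45);
        g.setColor(Color.BLUE);
        g.fillRect(40, 0, 41, 45);
        g.dispose();

        BufferedImage dest = new BufferedImage(300, 200, BufferedImage.TYPE_INT_RGB);
        Graphics2D dg = dest.createGraphics();

        // Scale: draw the whole source into a rectangle twice its size
        dg.drawImage(src, 0, 0, 162, 90, null);

        // Crop + scale: copy the 14x14 patch at (0, 35) into a 30x30 box at (0, 100),
        // mirroring the listing's destRect/srcRect call
        dg.drawImage(src, 0, 100, 30, 130, 0, 35, 14, 49, null);
        dg.dispose();

        // The scaled copy preserves the source's colors
        System.out.println(dest.getRGB(5, 5) == Color.RED.getRGB());
    }
}
```

As in GDI+, the nine-argument drawImage overload takes destination corners followed by source corners, so cropping and scaling happen in one call.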
# bookshare web app option

You'll be prompted to type the name of your new app. Go with something simple, like 'roomlist'. A dropdown will show an option to create a new app. Put Education as the category. Skip over 'namespace': that's used for specifically defining the app's name via Facebook's API. For the line labeled 'Site URL', type 'localhost:8000'. Hit 'next' and then click the link above the listed functionality that says 'Skip to Developer Dashboard'. (For the record, we're implementing the Login functionality.) After you click 'create app ID' and successfully complete a captcha, you'll land in your new app's dashboard.

Back on the Django admin page, start filling in fields: App Name to Name, App ID to Client id, App Secret to Secret Key. The 'Sites' section is a security measure you can have in place. Something you'll need to carefully specify is the callback (or redirect) url. This is where your user is sent once they have successfully authenticated with the outside social media site. Let's add localhost to our Available sites.

Now, to Instagram. Instagram actually asks you what you want to build and then will choose to give you access or not. Cross your fingers and let's begin the process. Sign up as a developer. You'll need your website (localhost:8000), a phone number (if you haven't already given it to Instagram, see above), and what you want to build with the API: 'Verify users are linking to social media accounts they in fact control.'

Register Your Application, then Manage clients, then Register new client ID. Fill in:

- Application name
- Description
- Company name(?): SRCT
- Website URL: localhost:8000
- Valid redirect urls: localhost:8000 (tab after to make it a link)
- Contact email
- Captcha

Submission errors to watch for: all fields are required, and 'You must enter an absolute URI that starts with...'.

Next, to Twitter. The app name must actually be unique. Use Permissions: Read only.

Finally, to Google.
Google's auth setup process is unquestionably the most confusing of the bunch, and yet we proceed, even though the number of students who will link their Google+ page will likely be no more than four.

- Project name: Google gives you the project name.
- Go to 'enable and manage apis', then Google+, then credentials.
- Add credentials: an OAuth2 client id for a web application.
- Name: roomlist (or the generated roomlist name?).
- Authorized javascript origins and authorized redirect urls: localhost, with a trailing slash. You'll see them in a popup.
- Project consent screen: set the product name shown to users.
- Domain verification: your site must be registered on the Search Console.
- Watch out for trailing spaces!!
It's that time of the week again. For this week's Friday Q&A, Mike Shields has suggested that I talk about method replacement and method swizzling in Objective-C.

Overriding Methods

Overriding methods is a common task in just about any object-oriented language. Most of the time you do this by subclassing, a time-honored technique. You subclass, you implement the method in the subclass, you instantiate the subclass when necessary, and instances of the subclass use the overridden method. Everybody knows how to do this.

Sometimes, though, you need to override methods that are in objects whose instantiation you don't control. Subclassing doesn't suffice in that case, because you can't make that code instantiate your subclass. Your method override sits there, twiddling its thumbs, accomplishing nothing.

Posing

Posing is an interesting technique but, alas, is now obsolete, since Apple no longer supports it in the "new" (64-bit and iPhone) Objective-C runtime. With posing, you subclass, then pose the subclass as its superclass. The runtime does some magic and suddenly the subclass is used everywhere, and method overrides become useful again. Since this is no longer supported, I won't go into details.

Categories

Using a category, you can easily override a method in an existing class:

@implementation NSView (MyOverride)

- (void)drawRect: (NSRect)r
{
    // this runs instead of the normal -[NSView drawRect:]
    [[NSColor blueColor] set];
    NSRectFill(r);
}

@end

This approach has two big problems:

- It's impossible to call through to the original implementation of the method. The new implementation replaces the original, which is simply lost. Most overrides want to add functionality, not completely replace it, but that's not possible with a category.
- The class in question could implement the method in question in a category too, and the runtime doesn't guarantee which implementation "wins" when two categories contain methods with the same name.

Using a technique called method swizzling, you can replace an existing method from a category without the uncertainty of who "wins", and while preserving the ability to call through to the old method. The secret is to give the override a different method name, then swap them using runtime functions. First, you implement the override with a different name:

@implementation NSView (MyOverride)

- (void)override_drawRect: (NSRect)r
{
    // call through to the original, really
    [self override_drawRect: r];

    [[NSColor blueColor] set];
    NSRectFill(r);
}

@end

Despite appearances, the call to [self override_drawRect: r] is not infinite recursion: by the time this code runs, the implementations have been swapped, so override_drawRect: is actually the original!

To swap the method, you need a bit of code to move the new implementation in and the old implementation out:

void MethodSwizzle(Class c, SEL origSEL, SEL overrideSEL)
{
    Method origMethod = class_getInstanceMethod(c, origSEL);
    Method overrideMethod = class_getInstanceMethod(c, overrideSEL);

For the case where the method only exists in a superclass, the first step is to add a new method to this class, using the override as the implementation. Once that's done, then the override method is replaced with the original one. The step of adding the new method can also double as a check to see which case is actually present. The runtime function class_addMethod will fail if the method already exists, and so can be used for the check:

    if(class_addMethod(c, origSEL,
                       method_getImplementation(overrideMethod),
                       method_getTypeEncoding(overrideMethod)))
    {
        class_replaceMethod(c, overrideSEL,
                            method_getImplementation(origMethod),
                            method_getTypeEncoding(origMethod));
    }
    else
    {
        method_exchangeImplementations(origMethod, overrideMethod);
    }
}

The method_exchangeImplementations call just uses the two methods that the code already fetched, and you might wonder why it can't just go straight to that and skip all of the annoying stuff in the middle. The reason the code needs the two cases is because class_getInstanceMethod will actually return the Method for the superclass if that's where the implementation lies. Replacing that implementation will replace the method for the wrong class!
As a concrete example, imagine replacing -[NSView description]. If NSView doesn't implement -description (which is probable) then you'll get NSObject's Method instead. If you called method_exchangeImplementations on that Method, you'd replace the -description method on NSObject with your own code, which is not what you want to do! (When that's the case, a simple category method would work just fine, so this code wouldn't be needed. The problem is that you can't know whether a class overrides a method from its superclass or not, and that could even change from one OS release to the next, so you have to assume that the class may implement the method itself, and write code that can handle that.) Finally we just need to make sure that this code actually gets called when the program starts up. This is easily done by adding a +load method to the MyOverride category: + (void)load { MethodSwizzle(self, @selector(drawRect:), @selector(override_drawRect:)); } This is a bit complicated, though. The swizzling concept is a little weird, and especially the way that you call through to the original implementation tends to bend the mind a bit. It's a pretty standard technique, but I want to propose a way that I believe is a little simpler, both in terms of being easier to understand and easier to implement. It turns out that there's no need to preserve the method-ness of the original method. The dynamic dispatch involved in [self override_drawRect: r] is completely unnecessary. We know which implementation we want right from the start. 
Instead of moving the original method into a new one, just move its implementation into a global function pointer:

void (*gOrigDrawRect)(id, SEL, NSRect);

Then in +load you can fill that global with the original implementation:

+ (void)load
{
    Method origMethod = class_getInstanceMethod(self, @selector(drawRect:));
    gOrigDrawRect = (void *)method_getImplementation(origMethod);

(I like to use void * for these things just because it's so much easier to type than long, weird function pointer types, and thanks to the magic of C, the void * gets implicitly converted to the right pointer type anyway.)

Next, replace the original. Like before, there are two cases to worry about, so I'll first add the method, then replace the existing one if it turns out that there is one:

    if(!class_addMethod(self, @selector(drawRect:),
                        (IMP)OverrideDrawRect,
                        method_getTypeEncoding(origMethod)))
        method_setImplementation(origMethod, (IMP)OverrideDrawRect);
}

The override itself is then just a plain C function that calls through the saved function pointer:

static void OverrideDrawRect(NSView *self, SEL _cmd, NSRect r)
{
    gOrigDrawRect(self, _cmd, r);

    [[NSColor blueColor] set];
    NSRectFill(r);
}

The Obligatory Warning

Overriding methods on classes you don't own is a dangerous business. Your override could cause problems by breaking the assumptions of the class in question. Avoid it if it's at all possible. If you must do it, code your override with extreme care.

Conclusion

That's it for this week. Now you know the full spectrum of method override possibilities in Objective-C, including one variation that I haven't seen discussed much elsewhere. Use this power for good, not for evil! Come back in seven days for the next edition. Until then, keep sending in your suggestions for topics. Friday Q&A is powered by reader submissions, so if you have an idea for a topic to cover here, send it in!
The runtime function class_replaceMethod() takes care of most of the mindless busywork and edge cases: if the method is defined in the class, the replacement is done; if it's only defined in a superclass, the function adds the method to the class. So you simply need to retrieve the "old" implementation (that might be the superclass's implementation) and call it in your function replacement. The +load method can be cut down to just this:

Method origMethod = class_getInstanceMethod(self, @selector(drawRect:));
gOrigDrawRect = (void *)class_replaceMethod(self, @selector(drawRect:),
                    (IMP)OverrideDrawRect, method_getTypeEncoding(origMethod));

Beware of swizzling methods like mouseEntered:, mouseExited:, and mouseMoved:. If I remember correctly, these methods all use an IMP that actually determines what to do based on the _cmd argument. Therefore, when you swizzle, you end up passing override_mouseEntered: as _cmd instead of a value that it knows how to handle. The direct override should not suffer from this problem, since it's passing on _cmd correctly.

I've wondered about implementations that depend on _cmd, but never came across a place where it mattered in practice. Interesting!

Warning: extreme hacking inside! But yes, the supersequent method stuff is a hack for the same reason any "undocumented" stuff is: it can be gone at a moment's notice. It relies on the way that things just happen to be done. As for multiple categories though... the way the ObjC 1.0/2.0 runtimes just happen to be written, all categories are preserved in the order that they are loaded. Technically load order is deterministic but it's fragile -- generally, the system libraries will be loaded before your code, but not always. But don't ship code with it unless you want to get burned. Sorry to have gotten in the way. I'll see myself out...

Your macro looks like this:

#define invokeSupersequent(...) \
    ([self getImplementationOf:_cmd \
        after:impOfCallingMethod(self, _cmd)]) \
    (self, _cmd, ##__VA_ARGS__)

Here -getImplementationOf: is defined to return an IMP, which takes variable arguments after the self and _cmd parameters.
This macro does not cast the IMP to a different function pointer type (and indeed could not, as it doesn't have enough information to do so). This means that the IMP is being called with variable-argument calling conventions. Or did I miss some place where everything gets cast to the right function pointer type before calling?

However, this is the calling convention used by objc_msgSend, and by extension all methods except those dispatched through the objc_msgSend_(st/fp/fp2)ret variants. But yes, the (st/fp/fp2)ret methods require a correct cast of the IMP or other special handling to work. Fortunately, the compiler is smart enough to give a hard error if you try to do this without a cast -- it doesn't slip through unnoticed. I find the bigger problem is that without a signature, all regular parameters need to be correctly typed or they won't be passed correctly (since the compiler can infer the wrong register or stack size). This can cause problems without so much as a warning if you're not careful.

You can't safely pass a float, short, or char (or the unsigned counterparts of the last two) through a vararg function, because they get promoted to double and int. This code illustrates the problem:

void Tester(int ign, float x, char y)
{
    printf("float: %f char: %d\n", x, y);
}

int main(int argc, char **argv)
{
    float x = 42;
    float y = 42;
    Tester(0, x, y);

    void (*TesterAlt)(int, ...) = (void *)Tester;
    TesterAlt(0, x, y);

    return 0;
}

On my computer, the second invocation prints float: 0.000000 char: 0.

objc_msgSend doesn't use vararg calling conventions. The convention it "uses" is any convention which is compatible with a pointer return value, and placing the first two arguments in a place where they can be expected. objc_msgSend completely ignores all remaining arguments, and lets them pass through unhindered. The caller and the eventual callee (after objc_msgSend looks it up and jumps to it) still have to agree on how those work, and if the caller thinks they're varargs and the callee doesn't, they won't get along.
You must cast all calls to objc_msgSend and its variants in order to have the compiler generate the correct code. Failing to do so will work for many cases, but only because you're getting lucky. The same goes for casting IMPs.

In main, y should be of type char. Still fails as described with that change made.

I forgot to mention: even if none of your parameters are of the offending types, there's still nothing which guarantees that the calling conventions will match between a vararg call with certain argument types and a non-vararg receiver with those same types. It's far more likely to work (and I think the ABIs of the platforms that OS X runs on may guarantee it on those particular platforms) but it's still unsafe.

In 32-bit, the code will run as I expected, but in 64-bit I get this error:

Error loading XXX: dlopen(XXX, 123): Symbol not found: _OBJC_CLASS_$_SomeClass
  Referenced from: XXX
  Expected in: flat namespace
  in XXX

I googled for solutions and got some answers, but when I go and look at the code, they do not fix the error. I use JRSwizzle's

+ (BOOL)jr_swizzleClassMethod:(SEL)origSel_ withClassMethod:(SEL)altSel_ error:(NSError**)error_

thx :)

One caveat: the direct override technique does not compose with other hooking techniques. Example: an application has classes Dog and Mammal, and Mammal has a reproduceWith: method. MadScientist uses your direct override technique to hook -[Dog reproduceWith:] and inject his additional code. EvilGeneticist uses any other hook technique to hook -[Mammal reproduceWith:]. Later, -[Dog reproduceWith:] is called; only MadScientist's hook runs, instead of both MadScientist's and EvilGeneticist's hooks. This is not a theoretical problem -- the iPhone jailbreak community has encountered this issue numerous times and has standardized on two libraries: MobileSubstrate emits ARM bytecode at runtime to avoid it, and CaptainHook avoids it through macro trickery.

class_getInstanceMethod will return the superclass's implementation if the class in question doesn't have one of its own, so everything still works as desired.
I think, though, that you should have mentioned that in order to use the class_...() and method_...() functions, one needs to include the libobjc.A.dylib library in the Xcode target and then #import <objc/runtime.h> in the source file.

Method origMethod = class_getInstanceMethod(self, @selector(drawRect:));
gOrigDrawRect = (void *)method_getImplementation(origMethod);
class_replaceMethod(self, @selector(drawRect:),
                    (IMP)OverrideDrawRect, method_getTypeEncoding(origMethod));

- (void)copyToPrivatePasteboard:(id)sender
{
    UIPasteboard *privatePasteboard = [self getPrivatePasteboard];
    [privatePasteboard setString:@""]; // How to get the copied string to store in pasteboard?
}

How can I write the copied string to the pasteboard? The parameter I am getting is of type id. If I convert it to NSString, it won't be proper, because it is the sender (UIMenuController) that is calling this method.

I'm sure you are aware that nothing gets "converted" here; in C a pointer is a pointer, 4 or 8 bytes are copied, that's all. (Not to be confused with the real magic Objective-C can do converting types on the fly when setting them -- setting a BOOL or float from an NSNumber, for example, using setValueForKey. In that case, the type gets converted, even if the value remains the same.)
* Joerg Roedel <joro@8bytes.org> wrote:

> On Fri, Nov 21, 2008 at 06:43:48PM +0100, Ingo Molnar wrote:
> >
> > * Joerg Roedel <joerg.roedel@amd.com> wrote:
> >
> > > +static struct list_head dma_entry_hash[HASH_SIZE];
> > > +
> > > +/* A slab cache to allocate dma_map_entries fast */
> > > +static struct kmem_cache *dma_entry_cache;
> > > +
> > > +/* lock to protect the data structures */
> > > +static DEFINE_SPINLOCK(dma_lock);
> >
> > some more generic comments about the data structure: it's main purpose
> > is to provide a mapping based on (dev,addr). There's little if any
> > cross-entry interaction - same-address+same-dev DMA is checked.
> >
> > 1)
> >
> > the hash:
> >
> > +	return (entry->dev_addr >> HASH_FN_SHIFT) & HASH_FN_MASK;
> >
> > should mix in entry->dev as well - that way we get not just per
> > address but per device hash space separation as well.
> >
> > 2)
> >
> > HASH_FN_SHIFT is 1MB chunks right now - that's probably fine in
> > practice albeit perhaps a bit too small. There's seldom any coherency
> > between the physical addresses of DMA - we rarely have any real
> > (performance-relevant) physical co-location of DMA addresses beyond 4K
> > granularity. So using 1MB chunking here will discard a good deal of
> > random low bits we should be hashing on.
> >
> > 3)
> >
> > And the most scalable locking would be per hash bucket locking - no
> > global lock is needed. The bucket hash heads should probably be
> > cacheline sized - so we'd get one lock per bucket.
>
> Hmm, I just had the idea of saving this data in struct device. How
> about that? The locking should scale too and we can extend it
> easier. For example it simplifys a per-device disable function for
> the checking. Or another future feature might be leak tracing.

that will help with spreading the hash across devices, but brings in
lifetime issues: you must be absolutely sure all DMA has drained at the
point a device is deinitialized.

Dunno ...
i think it's still better to have a central hash for all DMA ops that is
aware of per device details.

The moment we spread this out to struct device we've lost the ability to
_potentially_ do inter-device checking. (for example to make sure no
other device is DMA-ing on the same address - which is a possible bug
pattern as well although right now Linux does not really avoid it
actively)

Hm?

Btw., also have a look at lib/debugobjects.c: i think we should also
consider extending that facility and then just hook it up to the DMA
ops. The DMA checking is kind of a similar "op lifetime" scenario - with
a few extras to extend lib/debugobjects.c with. Note how it already has
pooling, a good hash, etc.

	Ingo
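To make the shape of the data structure under discussion concrete, here is a hedged sketch (written in Java rather than kernel C, with illustrative names and sizes, not the actual dma-debug code): the bucket index mixes the device identity into the address hash, the shift keys on 4K rather than 1MB granularity, and each bucket carries its own lock so lookups never contend on a global spinlock.

```java
import java.util.ArrayList;
import java.util.List;

public class DmaEntryHash {
    static final int HASH_SIZE = 1024; // must be a power of two

    static final class Entry {
        final long dev;     // stand-in for the struct device pointer
        final long devAddr; // DMA address
        Entry(long dev, long devAddr) { this.dev = dev; this.devAddr = devAddr; }
    }

    static final class Bucket {
        final Object lock = new Object();        // per-bucket lock, no global lock
        final List<Entry> entries = new ArrayList<>();
    }

    final Bucket[] buckets = new Bucket[HASH_SIZE];

    public DmaEntryHash() {
        for (int i = 0; i < HASH_SIZE; i++) buckets[i] = new Bucket();
    }

    // Mix dev into the address hash, as suggested in the thread; the 12-bit
    // shift keys on 4K granularity instead of the patch's 1MB chunks.
    int hash(long dev, long devAddr) {
        return (int) (((devAddr >>> 12) ^ dev) & (HASH_SIZE - 1));
    }

    public void add(long dev, long devAddr) {
        Bucket b = buckets[hash(dev, devAddr)];
        synchronized (b.lock) { b.entries.add(new Entry(dev, devAddr)); }
    }

    public boolean contains(long dev, long devAddr) {
        Bucket b = buckets[hash(dev, devAddr)];
        synchronized (b.lock) {
            for (Entry e : b.entries)
                if (e.dev == dev && e.devAddr == devAddr) return true;
            return false;
        }
    }
}
```

Because the hash stays central rather than living in each device, a checker built on it could still, in principle, spot two devices mapping the same address, which is the inter-device check argued for above.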
An object called a layout manager determines the way that components are arranged in a container. All containers have a default layout manager, but you can choose a different layout manager when necessary. There are many layout manager classes provided in the java.awt and javax.swing packages, so we will introduce those that you are most likely to need. It is possible to create your own layout manager classes, but creating layout managers is beyond the scope of this book. The layout manager for a container determines the position and size of all the components in the container: you should not change the size and position of such components yourself. Just let the layout manager take care of it. Since the classes that define layout managers all implement the LayoutManager interface, you can use a variable of type LayoutManager to store any of them if necessary.

We will look at six layout manager classes in a little more detail. The names of these classes and the basic arrangements that they provide are as follows:

- FlowLayout – places components sequentially in rows, spilling onto a new row when a row is full
- BorderLayout – places components against any of the four borders of the container and in the center
- CardLayout – stacks components one on top of the other, like a deck of cards
- GridLayout – places components in the cells of a rectangular grid, with all cells the same size
- BoxLayout – arranges components either in a single row or in a single column
- SpringLayout – positions components according to constraints, called springs, attached to their edges

The BoxLayout, SpringLayout, and Box classes are defined in the javax.swing package. The other layout manager classes in the list above are defined in java.awt. One question to ask is why do we need layout managers at all? Why don't we just place components at some given position in a container? The basic reason is to ensure that the GUI elements for your Java program are displayed properly in every possible Java environment. Layout managers automatically adjust components to fit the space available. If you fix the size and position of each of the components, they could run into one another and overlap if the screen area available to your program is reduced. To set the layout manager of a container, you can call the setLayout() method for the container.
For example, you could change the layout manager for the container object aWindow of type JFrame to flow layout with the statements:

FlowLayout flow = new FlowLayout();
aWindow.getContentPane().setLayout(flow);

Remember that we can't add components directly to a JFrame object – we must add them to the content pane for the window. The same goes for JDialog and JApplet objects. With some containers you can set the layout manager in the constructor for that container, as we shall see in later examples. Let's look at how the layout managers work, and how to use them in practice.

The flow layout manager places components in a row, and when the row is full, it automatically spills components onto the next row. The default positioning of the row of components is centered in the container. There are actually three possible row-positioning options that you specify by constants defined in the class. These are FlowLayout.LEFT, FlowLayout.RIGHT, and FlowLayout.CENTER – this last option being the default. The flow layout manager is very easy to use, so let's jump straight in and see it working in an example. As we said earlier, this layout manager is used primarily to arrange a few components whose relative position is unimportant.
Let's implement a TryFlowLayout program based on the TryWindow example:

import javax.swing.JFrame;
import javax.swing.JButton;
import java.awt.Toolkit;
import java.awt.Dimension;
import java.awt.Container;
import java.awt.FlowLayout;

public class TryFlowLayout {
  // The window object
  static JFrame aWindow = new JFrame("This is a Flow Layout");

  public static void main(String[] args) {
    Toolkit theKit = aWindow.getToolkit();       // Get the window toolkit
    Dimension wndSize = theKit.getScreenSize();  // Get screen size

    // Set the position to screen center & size to half screen size
    aWindow.setBounds(wndSize.width/4, wndSize.height/4,   // Position
                      wndSize.width/2, wndSize.height/2);  // Size
    aWindow.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

    FlowLayout flow = new FlowLayout();            // Create a layout manager
    Container content = aWindow.getContentPane();  // Get the content pane
    content.setLayout(flow);                       // Set the container layout mgr

    // Now add six button components
    for(int i = 1; i <= 6; i++)
      content.add(new JButton("Press " + i));      // Add a Button to content pane

    aWindow.setVisible(true);                      // Display the window
  }
}

Since it is based on the TryWindow class, only the new code is highlighted. The new code is quite simple. We create a FlowLayout object and make this the layout manager for aWindow by calling setLayout(). We then add six JButton components of a default size to aWindow in the loop. If you compile and run the program you should get a window similar to the following:

The Button objects are positioned by the layout manager, flow. As you can see, they have been added to the first row in the window, and the row is centered. You can confirm that the row is centered, and see how the layout manager automatically spills the components onto the next row once a row is full, by reducing the size of the window. Here the second row is clearly centered.

Each button component has been set to its preferred size, which comfortably accommodates the text for the label. The centering is determined by the alignment constraint for the layout manager, which defaults to CENTER. It can also be set to RIGHT or LEFT by using a different constructor. For example, you could have created the layout manager with the statement:

FlowLayout flow = new FlowLayout(FlowLayout.LEFT);

The flow layout manager then left-aligns each row of components in the container.
If you run the program with this definition and resize the window, it will look like: Now the buttons are left aligned. Two of the buttons have spilled from the first row to the second because there is insufficient space across the width of the window to accommodate them all. The flow layout manager in the previous examples applies a default gap of 5 pixels between components in a row, and between one row and the next. You can choose values for the horizontal and vertical gaps by using yet another FlowLayout constructor. You can set the horizontal gap to 20 pixels and the vertical gap to 30 pixels with the statement: FlowLayout flow = new FlowLayout(FlowLayout.LEFT, 20, 30); If you run the program with this definition of the layout manager, when you resize the window you will see the components distributed with the spacing specified. You can also set the gaps between components and rows explicitly by calling the setHgap() or the setVgap() method. To set the horizontal gap to 35 pixels, you would write: flow.setHgap(35); // Set the horizontal gap Don't be misled by this. You can't get differential spacing between components by setting the gap before adding each component to a container. The last values for the gaps between components that you set for a layout manager will apply to all the components in a container. The methods getHgap() and getVgap() will return the current setting for the horizontal or vertical gap as a value of type int. The initial size at which the application window is displayed is determined by the values we pass to the setBounds() method for the JFrame object. If you want the window to assume a size that just accommodates the components it contains, you can call the pack() method for the JFrame object. Add the following line immediately before the call to setVisible(): aWindow.pack(); If you recompile and run the example again, the application window should fit the components. 
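The alignment and gap settings described above are plain properties of the FlowLayout object, so they can be checked without displaying any window at all. A minimal sketch (the class name is mine):

```java
import java.awt.FlowLayout;

public class FlowLayoutGaps {
    public static void main(String[] args) {
        // Left-aligned layout with 20-pixel horizontal and 30-pixel vertical gaps
        FlowLayout flow = new FlowLayout(FlowLayout.LEFT, 20, 30);
        System.out.println(flow.getAlignment() == FlowLayout.LEFT); // true
        System.out.println(flow.getHgap());                         // 20
        System.out.println(flow.getVgap());                         // 30

        flow.setHgap(35);                   // change the horizontal gap later
        System.out.println(flow.getHgap()); // 35
    }
}
```

As the text notes, the last values set apply to all components in the container; the setters change the layout manager's state, not any individual component.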
As we've said, you add components to an applet created as a JApplet object in the same way as for a JFrame application window. We can verify this by adding some buttons to an example of an applet. We can try out a Font object and add a border to the buttons to brighten them up a bit at the same time. We can define the class for our applet as follows:

import javax.swing.JButton;
import javax.swing.JApplet;
import java.awt.Font;
import java.awt.Container;
import java.awt.FlowLayout;
import javax.swing.border.BevelBorder;

public class TryApplet extends JApplet {
  public void init() {
    Container content = getContentPane();                 // Get content pane
    content.setLayout(new FlowLayout(FlowLayout.RIGHT));  // Set layout

    JButton button;                                       // Stores a button
    Font[] fonts = { new Font("Arial", Font.ITALIC, 10),  // Two fonts
                     new Font("Playbill", Font.PLAIN, 14) };
    BevelBorder edge = new BevelBorder(BevelBorder.RAISED); // Bevelled border

    // Add the buttons using alternate fonts
    for(int i = 1; i <= 6; i++) {
      content.add(button = new JButton("Press " + i));    // Add the button
      button.setFont(fonts[i%2]);                         // One of our own fonts
      button.setBorder(edge);                             // Set the button border
    }
  }
}

Of course, to run the applet we will need an .html file containing the following:

<APPLET CODE="TryApplet.class" WIDTH=300 HEIGHT=200>
</APPLET>

This specifies the width and height of the applet – you can use your own values here if you wish. You can save the file as TryApplet.html. Once you have compiled the applet source code using javac, you can execute it with the appletviewer program by entering the following command from the folder the .html file and class are in:

appletviewer TryApplet.html

You should see the AppletViewer window displaying our applet. The arrangement of the buttons is now right justified in the flow layout. We have the button labels alternating between the two fonts that we created. The buttons also look more like buttons with a beveled edge.
How It Works

As we saw in Chapter 1, an applet is executed rather differently from a Java program, and it is not really an independent program at all. The browser (or appletviewer in this case) initiates and controls the execution of the applet. An applet does not require a main() method. To execute the applet, the browser first creates an instance of our applet class, TryApplet, and then calls the init() method for it. This method is inherited from the Applet class (the base for JApplet) and you typically override this method to provide your own initialization.

We need the import statement for java.awt in addition to that for javax.swing because our code refers to the Font, Container, and FlowLayout classes. Before creating the buttons, we create a BevelBorder object that we will use to specify the border for each button. In the loop that adds the buttons to the content pane for the applet, we select one or other of the Font objects we have created, depending on whether the loop index is even or odd, and then set edge as the border by calling the setBorder() member. This would be the same for any component. Note how the size of each button is automatically adjusted to accommodate the button label. Of course, the font selection depends on the two fonts being available on your system, so if you don't have the ones that appear in the code, change it to suit what you have.

The buttons look much better with raised edges. If you wanted them to appear sunken, you would specify BevelBorder.LOWERED as the constructor argument. You might like to try out a SoftBevelBorder too. All you need to do is use the class name, SoftBevelBorder, when creating the border.

The border layout manager is intended to place up to five components in a container. Possible positions for these components are on any of the four borders of the container and in the center. Only one component can be at each position.
If you add a component at a position that is already occupied, the previous component will be displaced. A border is selected by specifying a constraint that can be NORTH, SOUTH, EAST, WEST, or CENTER. These are all final static constants defined in the BorderLayout class. You can't specify the constraints in the BorderLayout constructor since a different constraint has to be applied to each component. You specify the position of each component in a container when you add it using the add() method. We can modify the earlier application example to add five buttons to the content pane of the application window in a border layout.

Make the following changes to TryFlowLayout.java to try out the border layout manager and exercise another border class:

import javax.swing.JFrame;
import javax.swing.JButton;
import java.awt.Toolkit;
import java.awt.Dimension;
import java.awt.Container;
import java.awt.BorderLayout;
import javax.swing.border.EtchedBorder;

public class TryBorderLayout {
  // The window object
  static JFrame aWindow = new JFrame("This is a Border Layout");

  public static void main(String[] args) {
    Toolkit theKit = aWindow.getToolkit();       // Get the window toolkit
    Dimension wndSize = theKit.getScreenSize();  // Get screen size
    aWindow.setBounds(wndSize.width/4, wndSize.height/4,   // Position
                      wndSize.width/2, wndSize.height/2);  // Size
    aWindow.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

    BorderLayout border = new BorderLayout();     // Create a layout manager
    Container content = aWindow.getContentPane(); // Get the content pane
    content.setLayout(border);                    // Set the container layout mgr
    EtchedBorder edge = new EtchedBorder(EtchedBorder.RAISED); // Button border

    // Now add five JButton components and set their borders
    JButton button;
    content.add(button = new JButton("EAST"), BorderLayout.EAST);
    button.setBorder(edge);
    content.add(button = new JButton("WEST"), BorderLayout.WEST);
    button.setBorder(edge);
    content.add(button = new JButton("NORTH"), BorderLayout.NORTH);
    button.setBorder(edge);
    content.add(button = new JButton("SOUTH"), BorderLayout.SOUTH);
    button.setBorder(edge);
    content.add(button = new JButton("CENTER"), BorderLayout.CENTER);
    button.setBorder(edge);

    aWindow.setVisible(true);                     // Display the window
  }
}

If you compile and execute the example, you will see the window shown below.
You can see here how the raised EtchedBorder edges look on the buttons.

How It Works
Components laid out with a border layout manager are extended to fill the space available in the container. The "NORTH" and "SOUTH" buttons are the full width of the window, and the "EAST" and "WEST" buttons occupy the height remaining unoccupied once the "NORTH" and "SOUTH" buttons are in place. It always works like this, regardless of the sequence in which you add the buttons – the "NORTH" and "SOUTH" components occupy the full width of the container and the "CENTER" component takes up the remaining space. If there are no "NORTH" and "SOUTH" components, the "EAST" and "WEST" components will extend to the full height of the container. The width of the "EAST" and "WEST" buttons is determined by the space required to display the button labels. Similarly, the heights of the "NORTH" and "SOUTH" buttons are determined by the height of the characters in the labels. You can alter the spacing between components by passing arguments to the BorderLayout constructor – the default gaps are zero. For example, you could set the horizontal gap to 20 pixels and the vertical gap to 30 pixels with the statement:

content.setLayout(new BorderLayout(20, 30));

As with the flow layout manager, you can also set the gaps individually by calling the methods setHgap() and setVgap() for the BorderLayout object. For example:

BorderLayout border = new BorderLayout();   // Construct the object
content.setLayout(border);                  // Set the layout
border.setHgap(20);                         // Set horizontal gap

This sets the horizontal gap between the components to 20 pixels and leaves the vertical gap at the default value of zero. You can also retrieve the current values for the gaps with the getHgap() and getVgap() methods.

The card layout manager generates a stack of components, one on top of the other. The first component that you add to the container will be at the top of the stack, and therefore visible, and the last one will be at the bottom.
You can create a CardLayout object with the default constructor, CardLayout(), or you can specify horizontal and vertical gaps as arguments to the constructor. The gaps in this case are between the edge of the component and the boundary of the container. We can see how this works in an applet. Because of the way a card layout works, we need a way to interact with the applet to switch from one component to the next. We will implement this by enabling mouse events to be processed, but we won't explain the code that does this in detail here. We will leave that to the next chapter. Try the following code:

import javax.swing.JApplet;
import javax.swing.JButton;
import java.awt.Container;
import java.awt.CardLayout;
import java.awt.event.ActionEvent;          // Classes to handle events
import java.awt.event.ActionListener;

public class TryCardLayout extends JApplet implements ActionListener {
  CardLayout card = new CardLayout(50, 50); // Create layout

  public void init() {
    Container content = getContentPane();
    content.setLayout(card);                // Set card as the layout mgr
    JButton button;                         // Stores a button
    for(int i = 1; i <= 6; i++) {
      content.add(button = new JButton("Press " + i), "Card" + i);  // Add a button
      button.addActionListener(this);       // Add listener for button
    }
  }

  // Handle button events
  public void actionPerformed(ActionEvent e) {
    card.next(getContentPane());            // Switch to the next card
  }
}

If you run the program the applet should be as shown below. Click on the button – and the next button will be displayed.

How It Works
The CardLayout object, card, is created with horizontal and vertical gaps of fifty pixels. In the init() method for our applet, we set card as the layout manager and add six buttons to the content pane. Note that we have two arguments to the add() method. Using card layout requires that you identify each component by some Object. In this case we pass a String object as the second argument to the add() method.
We use an arbitrary string for each, consisting of the string "Card" with the sequence number of the button appended to it. Within the loop we call the addActionListener() method for each button to identify our applet object as the object that will handle events generated for the button (such as clicking on it with the mouse). When you click on a button, the actionPerformed() method for the applet object will be called. This just calls the next() method for the layout object to move the next component in sequence to the top. We will look at event handling in more detail in the next chapter. The argument to the next() method identifies the container as the TryCardLayout object that is created when the applet starts. The CardLayout class has other methods that you can use for selecting from the stack of components: first(), last(), next(), and previous() each take the parent container as their argument, and show() selects a component by the object that identifies it. Using the next() or previous() methods you can cycle through the components repeatedly, since the next component after the last is the first, and the component before the first is the last. The String object that we supplied when adding the buttons identifies each button and can be used to switch to any of them. For instance, you could switch to the button associated with "Card4" before the applet is displayed by adding the following statement after the loop that adds the buttons:

card.show(content, "Card4");   // Switch to button "Card4"

This calls the show() method for the layout manager. The first argument is the container and the second argument is the object identifying the component to be at the top.

A grid layout manager arranges components in a rectangular grid within the container. There are three constructors for creating GridLayout objects: GridLayout() creates a grid with a single row; GridLayout(int rows, int cols) lets you specify the numbers of rows and columns; and GridLayout(int rows, int cols, int hgap, int vgap) also specifies the horizontal and vertical gaps between grid positions in pixels. In the second and third constructors, you can specify either the number of rows, or the number of columns, as zero (but not both).
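The navigation methods can be tried out without a browser by driving a CardLayout directly on an ordinary JPanel. The class and card names in this short sketch are our own, not part of the book's example:

```java
import java.awt.CardLayout;
import java.awt.Component;
import javax.swing.JButton;
import javax.swing.JPanel;

public class CardNavigation {
    // Build a deck of three buttons managed by the given card layout
    public static JPanel buildDeck(CardLayout card) {
        JPanel deck = new JPanel(card);
        deck.add(new JButton("One"), "Card1");
        deck.add(new JButton("Two"), "Card2");
        deck.add(new JButton("Three"), "Card3");
        return deck;
    }

    // Return the label of the component currently at the top of the stack
    public static String visibleLabel(JPanel deck) {
        for (Component c : deck.getComponents()) {
            if (c.isVisible()) {
                return ((JButton) c).getText();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        CardLayout card = new CardLayout();
        JPanel deck = buildDeck(card);
        System.out.println(visibleLabel(deck)); // One - the first card added is on top
        card.last(deck);                        // Jump to the last card
        System.out.println(visibleLabel(deck)); // Three
        card.next(deck);                        // Cycles back around to the first card
        System.out.println(visibleLabel(deck)); // One
        card.show(deck, "Card2");               // Select a card by its identifying name
        System.out.println(visibleLabel(deck)); // Two
    }
}
```

Note how next() wraps around from the last card to the first, which is what lets the applet above cycle through its six buttons indefinitely.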
If you specify the number of rows as zero, the layout manager will provide as many rows in the grid as are necessary to accommodate the number of components you add to the container. Similarly, setting the number of columns as zero indicates an arbitrary number of columns. If you fix both the rows and the columns, and add more components to the container than the grid will accommodate, the number of columns will be increased appropriately. We can try out a grid layout manager in a variation of a previous application. Make the highlighted changes to TryWindow.java:

import javax.swing.JFrame;
import javax.swing.JButton;
import java.awt.*;
import javax.swing.border.EtchedBorder;

public class TryGridLayout {
  // The window object
  static JFrame aWindow = new JFrame("This is a Grid Layout");

  public static void main(String[] args) {
    GridLayout grid = new GridLayout(3, 4, 30, 20);             // Create a layout manager
    Container content = aWindow.getContentPane();               // Get the content pane
    content.setLayout(grid);                                    // Set the container layout mgr
    EtchedBorder edge = new EtchedBorder(EtchedBorder.RAISED);  // Button border

    // Now add ten Button components
    JButton button;                                             // Stores a button
    for(int i = 1; i <= 10; i++) {
      content.add(button = new JButton("Press " + i));          // Add a Button
      button.setBorder(edge);                                   // Set the border
    }
    aWindow.setVisible(true);                                   // Display the window
  }
}

We create a grid layout manager, grid, for three rows and four columns, and with horizontal and vertical gaps between components of 30 and 20 pixels respectively. With ten buttons in the container, the application window will be as shown below.

The BoxLayout class defines a layout manager that arranges components in either a single row or a single column. You specify whether you want a row-wise or a columnar arrangement when creating the BoxLayout object. The BoxLayout constructor requires two arguments.
The first is a reference to the container to which the layout manager applies, and the second is a constant value that can be either BoxLayout.X_AXIS for a row arrangement, or BoxLayout.Y_AXIS for a column arrangement. Components are added from left to right in a row, or from top to bottom in a column. Components do not spill onto the next row or column when the current row or column is full. Instead, the layout manager will reduce the size of components, or even clip them if necessary, to keep them all in a single row or column. With a row of components, the box layout manager will try to make all the components the same height, and with a column of components it will try to make them all the same width. The container class, Box, is particularly convenient when you need to use a box layout since it has a BoxLayout manager built in. It also has some added facilities providing more flexibility in the arrangement of components than other containers, such as JPanel objects, provide. The Box constructor accepts a single argument that specifies the orientation as either BoxLayout.X_AXIS or BoxLayout.Y_AXIS. The class also has two static methods, createHorizontalBox() and createVerticalBox(), that each return a reference to a Box container with the orientation implied by the method name. As we said earlier, a container can contain another container, so you can easily place a Box container inside another Box container to get any arrangement of rows and columns that you want. Let's try that out. We will create an application that has a window containing a column of radio buttons on the left, a column of checkboxes on the right, and a row of buttons across the bottom.
Here's the code:

import javax.swing.*;
import java.awt.Toolkit;
import java.awt.Dimension;
import java.awt.Container;
import java.awt.BorderLayout;
import javax.swing.border.Border;

public class TryBoxLayout {
  // The window object
  static JFrame aWindow = new JFrame("This is a Box Layout");

  public static void main(String[] args) {
    // Create left column of radio buttons
    Box left = Box.createVerticalBox();
    ButtonGroup radioGroup = new ButtonGroup();            // Create button group
    JRadioButton rbutton;                                  // Stores a button
    radioGroup.add(rbutton = new JRadioButton("Red"));     // Add to group
    left.add(rbutton);                                     // Add to Box
    radioGroup.add(rbutton = new JRadioButton("Green"));
    left.add(rbutton);
    radioGroup.add(rbutton = new JRadioButton("Blue"));
    left.add(rbutton);
    radioGroup.add(rbutton = new JRadioButton("Yellow"));
    left.add(rbutton);

    // Create right column of checkboxes
    Box right = Box.createVerticalBox();
    right.add(new JCheckBox("Dashed"));
    right.add(new JCheckBox("Thick"));
    right.add(new JCheckBox("Rounded"));

    // Create top row to hold left and right
    Box top = Box.createHorizontalBox();
    top.add(left);
    top.add(right);

    // Create bottom row of buttons
    JPanel bottomPanel = new JPanel();
    Border edge = BorderFactory.createRaisedBevelBorder(); // Button border
    JButton button;
    Dimension size = new Dimension(80, 20);
    bottomPanel.add(button = new JButton("Defaults"));
    button.setBorder(edge);
    button.setPreferredSize(size);
    bottomPanel.add(button = new JButton("OK"));
    button.setBorder(edge);
    button.setPreferredSize(size);
    bottomPanel.add(button = new JButton("Cancel"));
    button.setBorder(edge);
    button.setPreferredSize(size);

    // Add top and bottom panel to content pane
    Container content = aWindow.getContentPane();          // Get content pane
    content.setLayout(new BorderLayout());                 // Set border layout manager
    content.add(top, BorderLayout.CENTER);
    content.add(bottomPanel, BorderLayout.SOUTH);

    aWindow.setVisible(true);                              // Display the window
  }
}

When you run this example and try out the radio buttons and checkboxes, it should produce a window
something like that shown below. It's not an ideal arrangement, but we will improve on it.

How It Works
The shaded code is of interest – the rest we have seen before. The first block creates the left column of radio buttons providing a color choice. A Box object with a vertical orientation is used to contain the radio buttons. If you tried the radio buttons you will have found that only one of them can ever be selected. This is the effect of the ButtonGroup object that is used – to ensure radio buttons operate properly, you must add them to a ButtonGroup object, which ensures that only one of the radio buttons it contains can be selected at any one time. Note that a ButtonGroup object is not a component – it's just a logical grouping of radio buttons – so you can't add it to a container. We must still independently add the buttons to the Box container that manages their physical arrangement. The Box object for the right-hand group of JCheckBox objects works in the same way as that for the radio buttons. Both the Box objects holding the columns are added to another Box object that implements a horizontal arrangement to position them side-by-side. Note how the vertical Box objects adjust their width to match that of the largest component in the column. That's why the two columns are bunched towards the left side. We will see how to improve on this in a moment. We use a JPanel object to hold the buttons. This has a flow layout manager by default, which suits us here. Calling the setPreferredSize() method for each button sets the preferred width and height to that specified by the Dimension object, size. This ensures that, space permitting, each button will be 80 pixels wide and 20 pixels high. We have introduced another way of obtaining a border for a component here. The BorderFactory class (defined in the javax.swing package) contains static methods that return standard borders of various kinds.
The createRaisedBevelBorder() method returns a reference to a BevelBorder object as type Border – Border being an interface that all border objects implement. We use this border for each of the buttons. We will try some more of the methods in the BorderFactory class later. To improve the layout of the application window, we can make use of some additional facilities provided by a Box container. The Box class contains static methods to create an invisible component called a strut. A vertical strut has a given height in pixels and zero width. A horizontal strut has a given width in pixels and zero height. The purpose of these struts is to enable you to insert space between your components, either vertically or horizontally. By placing a horizontal strut between two components in a horizontally arranged Box container, you fix the distance between the components. By adding a horizontal strut to a vertically arranged Box container, you can force a minimum width on the container. You can use a vertical strut in a horizontal box to force a minimum height. Note that although vertical struts have zero width, they have no maximum width, so they can expand horizontally to take up any excess space. Similarly, the height of a horizontal strut will expand when there is excess vertical space available. A vertical strut is returned as an object of type Component by the static createVerticalStrut() method in the Box class. The argument specifies the height of the strut in pixels. To create a horizontal strut, you use the createHorizontalStrut() method.
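You can verify these dimensions directly, since a strut is simply a component with fixed preferred sizes. This small check program is our own, not part of the book's example:

```java
import java.awt.Component;
import java.awt.Dimension;
import javax.swing.Box;

public class StrutCheck {
    public static void main(String[] args) {
        // A vertical strut: zero width, fixed 30 pixel height
        Component vStrut = Box.createVerticalStrut(30);
        Dimension pref = vStrut.getPreferredSize();
        System.out.println(pref.width + "x" + pref.height);   // 0x30

        // Its maximum width is unlimited, which is why a vertical strut
        // can expand horizontally to take up excess space
        System.out.println(vStrut.getMaximumSize().width);    // 32767

        // A horizontal strut: fixed 20 pixel width, zero height
        Component hStrut = Box.createHorizontalStrut(20);
        pref = hStrut.getPreferredSize();
        System.out.println(pref.width + "x" + pref.height);   // 20x0
    }
}
```

The unlimited maximum width (Short.MAX_VALUE) of a vertical strut is exactly what causes the columns to spread apart in the modified example that follows.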
We can space out our radio buttons by inserting struts between them:

// Create left column of radio buttons
Box left = Box.createVerticalBox();
left.add(Box.createVerticalStrut(30));              // Starting space
ButtonGroup radioGroup = new ButtonGroup();         // Create button group
JRadioButton rbutton;                               // Stores a button
radioGroup.add(rbutton = new JRadioButton("Red"));  // Add to group
left.add(rbutton);                                  // Add to Box
left.add(Box.createVerticalStrut(30));              // Space between
radioGroup.add(rbutton = new JRadioButton("Green"));
left.add(rbutton);
left.add(Box.createVerticalStrut(30));              // Space between
radioGroup.add(rbutton = new JRadioButton("Blue"));
left.add(rbutton);
left.add(Box.createVerticalStrut(30));              // Space between
radioGroup.add(rbutton = new JRadioButton("Yellow"));
left.add(rbutton);

The extra statements add a 30 pixel vertical strut at the start of the column, and a further strut of the same size between each radio button and the next. We can do the same for the checkboxes:

// Create right column of checkboxes
Box right = Box.createVerticalBox();
right.add(Box.createVerticalStrut(30));             // Starting space
right.add(new JCheckBox("Dashed"));
right.add(Box.createVerticalStrut(30));             // Space between
right.add(new JCheckBox("Thick"));
right.add(Box.createVerticalStrut(30));             // Space between
right.add(new JCheckBox("Rounded"));

If you run the example with these changes the window will look like this:

It's better, but far from perfect. The columns are now equally spaced in the window because the vertical struts have assumed a width to take up the excess horizontal space that is available. The distribution of surplus space vertically is different in the two columns because the number of components is different. We can control where surplus space goes in a Box object with glue. Glue is an invisible component that has the sole function of taking up surplus space in a Box container.
While the name gives the impression that it binds components together, it in fact provides an elastic connector between two components that can expand or contract as necessary, so it acts more like a spring. Glue components can be placed between the actual components in the Box and at either or both ends. Any surplus space that arises after the actual components have been accommodated is distributed between the glue components added. If you wanted all the surplus space to be at the beginning of a Box container, for instance, you would add a single glue component as the first component in the container. You create a component that represents glue by calling the createGlue() method for a Box object. You then add the glue component to the Box container in the same way as any other component, wherever you want surplus space to be taken up. You can add glue at several positions in a row or column, and spare space will be distributed between the glue components. We can add glue after the last component in each column to make all the spare space appear at the end of each column of buttons. For the radio buttons we can add the statement:

// Statements adding radio buttons to left Box object
left.add(Box.createGlue());   // Glue at the end

and similarly for the right box. The glue component at the end of each column of buttons will take up all the surplus space in each vertical Box container. This will make the buttons line up at the top. Running the program with added glue will result in the following application window. It's better now, but let's put together a final version of the example with some additional embroidery. We will use some JPanel objects with a new kind of border to contain the vertical Box containers.

import javax.swing.*;
import java.awt.*;
import javax.swing.border.*;

public class TryBoxLayout {
  // The window object
  static JFrame aWindow = new JFrame("This is a Box Layout");

  public static void main(String[] args) {
    // Set up the window as before...
    // Create left column of radio buttons with struts and glue as above...

    // Create a panel with a titled border to hold the left Box container
    JPanel leftPanel = new JPanel(new BorderLayout());
    leftPanel.setBorder(new TitledBorder(
                            new EtchedBorder(),          // Border to use
                            "Line Color"));              // Border title
    leftPanel.add(left, BorderLayout.CENTER);

    // Create right column of checkboxes with struts and glue as above...

    // Create a panel with a titled border to hold the right Box container
    JPanel rightPanel = new JPanel(new BorderLayout());
    rightPanel.setBorder(new TitledBorder(
                             new EtchedBorder(),         // Border to use
                             "Line Properties"));        // Border title
    rightPanel.add(right, BorderLayout.CENTER);

    // Create top row to hold left and right
    Box top = Box.createHorizontalBox();
    top.add(leftPanel);
    top.add(Box.createHorizontalStrut(5));   // Space between vertical boxes
    top.add(rightPanel);

    // Create bottom row of buttons
    JPanel bottomPanel = new JPanel();
    bottomPanel.setBorder(new CompoundBorder(
        BorderFactory.createLineBorder(Color.black, 1),        // Outer border
        BorderFactory.createBevelBorder(BevelBorder.RAISED))); // Inner border

    // Create and add the buttons as before...

    Container content = aWindow.getContentPane();

    // Set the container layout mgr
    BoxLayout box = new BoxLayout(content, BoxLayout.Y_AXIS);  // Vertical for content pane
    content.setLayout(box);                                    // Set box layout manager
    content.add(top);
    content.add(bottomPanel);
    aWindow.setVisible(true);                                  // Display the window
  }
}

The example will now display the window shown below.

How It Works
Both vertical boxes are now contained in a JPanel container. Since JPanel objects are Swing components, we can add a border, and this time we add a TitledBorder border that we create directly using the constructor. A TitledBorder is a border specified by the first argument to the constructor, plus a title that is a String specified by the second argument to the constructor.
We use a border of type EtchedBorder here, but you can use any type of border. We introduce space between the two vertically aligned Box containers by adding a horizontal strut to the Box container that contains them. If you wanted space at each side of the window, you could add struts to the container before and after the components. The last improvement is to the panel holding the buttons along the bottom of the window. We now have a border that is composed of two types, one inside the other: a LineBorder and a BevelBorder. A CompoundBorder object defines a border that is a composite of two border objects, the first argument to the constructor being the outer border and the second being the inner border. The LineBorder class defines a border consisting of a single line of the color specified by its first constructor argument, with a thickness in pixels specified by the second. There is also a static method defined in the LineBorder class, createBlackLineBorder(), that creates a black line border one pixel wide, so we could have used that here.

The GridBagLayout manager is much more flexible than the other layout managers we have seen and, consequently, rather more complicated to use. The basic mechanism arranges components in an arbitrary rectangular grid, but the rows and columns of the grid are not necessarily the same height or width. A component is placed at a given cell position in the grid specified by the coordinates of the cell, where the cell at the top-left corner is at position (0, 0). A component can occupy more than one cell in a row and/or column in the grid, but it always occupies a rectangular group of cells. Each component in a GridBagLayout has its own set of constraints. These are defined by an object of type GridBagConstraints that you associate with each component before adding the component to the container.
The location of each component, its relative size, and the area it occupies in the grid, are all determined by its associated GridBagConstraints object. A GridBagConstraints object has no fewer than eleven public instance variables that may be set to define the constraints for a component. Since they also interact with each other, there's more entertainment here than with a Rubik's cube. Let's first get a rough idea of what these instance variables in a GridBagConstraints object do: gridx and gridy fix the position of the component in the grid; gridwidth and gridheight determine how many cells it occupies; weightx and weighty determine how surplus space is shared out; anchor positions the component within its display area; fill determines whether the component is stretched to fill its display area; insets defines external padding around the component; and ipadx and ipady define internal padding that enlarges the component itself. That seems straightforward enough. We can now explore the possible values we can set for these and then try them out. A component will occupy at least one grid position, or cell, in a container that uses a GridBagLayout object, but it can occupy any rectangular array of cells. The total number of rows and columns, and thus the cell size, in the grid for a container is variable, and determined by the constraints for all of the components in the container. Each component will have a position in the grid plus an area it is allocated defined by a number of horizontal and vertical grid positions. The top-left cell in a layout is at position (0, 0). You specify the position of a component by defining where the top-left cell that it occupies is, relative to either the grid origin, or relative to the last component that was added to the container. You specify the position of the top-left cell that a component occupies in the grid by setting values of type int for the gridx and gridy members of the GridBagConstraints object. The default value for gridx is GridBagConstraints.RELATIVE – a constant that places the top-left grid position for the component in the column immediately to the right of the previous component. The same value is the default for gridy, which places the next component immediately below the previous one.
You specify the number of cells occupied by a component horizontally and vertically by setting values for the gridwidth and gridheight instance variables for the GridBagConstraints object. The default value for both of these is 1. There are two constants you can use as values for these variables. With a value of GridBagConstraints.REMAINDER, the component will be the last one in the row or column. If you specify the value as GridBagConstraints.RELATIVE, the component will be the penultimate one in the row or column. If the preferred size of the component is less than the display area, you can control how the size of the component is adjusted to fit the display area by setting the fill and insets instance variables for the GridBagConstraints object. If you don't intend to expand a component to fill its display area, you may still want to enlarge the component from its minimum size. You can adjust the dimensions of the component by setting the ipadx and ipady instance variables, which specify internal padding in pixels to be added on each side of the component in the x and y directions respectively. If the component is still smaller than its display area in the container, you can specify where it should be placed in relation to its display area by setting a value for the anchor instance variable of the GridBagConstraints object. Possible values are NORTH, NORTHEAST, EAST, SOUTHEAST, SOUTH, SOUTHWEST, WEST, NORTHWEST, and CENTER, all of which are defined in the GridBagConstraints class. The last GridBagConstraints instance variables to consider are weightx and weighty, which are of type double. These determine how space in the container is distributed between components in the horizontal and vertical directions. You should always set a value for these, otherwise the default of 0 will cause the components to be bunched together adjacent to one another in the center of the container. The absolute values for weightx and weighty are not important. It is the relative values that matter.
If you set all the values the same (but not zero), the space for each component will be distributed uniformly. Space is distributed in the proportions defined by the values. For example, if three components in a row have weightx values of 1.0, 2.0, and 3.0, the first will get 1/6 of the total in the x direction, the second will get 1/3, and the third will get half. The proportion of the available space that a component gets in the x direction is the weightx value for the component divided by the sum of the weightx values in the row. This also applies to the weighty values for allocating space in the y direction. We'll start with a simple example of placing two buttons in a window, and introduce another way of obtaining a standard border for a component. Make the following changes to the previous program and try out the GridBagLayout manager:

import javax.swing.*;
import java.awt.*;
import javax.swing.border.Border;

public class TryGridBagLayout {
  // The window object
  static JFrame aWindow = new JFrame("This is a GridBag Layout");

  public static void main(String[] args) {
    GridBagLayout gridbag = new GridBagLayout();          // Create a layout manager
    GridBagConstraints constraints = new GridBagConstraints();
    aWindow.getContentPane().setLayout(gridbag);          // Set the container layout mgr

    // Set constraints and add first button
    constraints.weightx = constraints.weighty = 10.0;
    constraints.fill = constraints.BOTH;                  // Fill the space
    addButton("Press", constraints, gridbag);             // Add the button

    // Set constraints and add second button
    constraints.gridwidth = constraints.REMAINDER;        // Rest of the row
    addButton("GO", constraints, gridbag);                // Create and add button

    aWindow.setVisible(true);                             // Display the window
  }

  static void addButton(String label, GridBagConstraints constraints,
                        GridBagLayout layout) {
    // Create a Border object using a BorderFactory method
    Border edge = BorderFactory.createRaisedBevelBorder();
    JButton button = new JButton(label);                  // Create a button
    button.setBorder(edge);                               // Add its border
    layout.setConstraints(button, constraints);           // Set the constraints
    aWindow.getContentPane().add(button);                 // Add button to content pane
  }
}

The program window will look like that shown below. As you see, the left button is slightly wider than the right button. This is because the length of the button label affects the size of the button.

How It Works
Because the process will be the same for every button added, we have implemented the helper method addButton(). This creates a JButton object, associates the GridBagConstraints object with it in the GridBagLayout object, and then adds it to the content pane of the frame window. After creating the layout manager and GridBagConstraints objects, we set the values for weightx and weighty to 10.0. A value of 1.0 would have the same effect. We set the fill constraint to BOTH to make the component fill the space it occupies. Note that when the setConstraints() method is called to associate the GridBagConstraints object with the button object, a copy of the constraints object is stored in the layout – not the object we created. This allows us to change the constraints object and use it for the second button without affecting the constraints for the first. The buttons are more or less equal in size in the x direction (they would be exactly the same size if the labels were the same length) because the weightx and weighty values are the same for both. Both buttons fill the space available to them because the fill constraint is set to BOTH. If fill were set to HORIZONTAL, for example, the buttons would be the full width of the grid positions they occupy, but just high enough to accommodate the label, since they would have no preferred size in the y direction.
If we alter the constraints for the second button to:

// Set constraints and add second button
constraints.weightx = 5.0;                       // Weight half of first
constraints.insets = new Insets(10, 30, 10, 20); // Left 30 & right 20
constraints.gridwidth = constraints.RELATIVE;    // Rest of the row
addButton("GO", constraints, gridbag);           // Add button to content pane

the application window will be as shown. Now the second button occupies one third of the space in the x direction – that is, a proportion of 5/(5+10) of the total – and the first button occupies two thirds. Note that the buttons still occupy one grid cell each – the default values for gridwidth and gridheight of 1 apply – but the weightx constraint values have altered the relative sizes of the cells for the two buttons in the x direction. The second button is also inset within the space allocated to it – by ten pixels at the top and bottom, thirty pixels on the left, and twenty on the right (set with the insets constraint). You can see that, for a given window size, the size of a grid position here depends on the number of objects. The more components there are, the less space they will each be allocated. Suppose we wanted to add a third button, the same width as the Press button, and immediately below it. We could do that by adding the following code immediately after that for the second button:

// Set constraints and add third button
constraints.insets = new Insets(0, 0, 0, 0);     // No insets
constraints.gridx = 0;                           // Begin new row
constraints.gridwidth = 1;                       // Width as "Press"
addButton("Push", constraints, gridbag);         // Add button to content pane

We reset the gridx constraint to zero to put the button at the start of the next row. It has a default gridwidth of 1 cell, like the others. The window would now look like that shown below. Having seen how it looks now, clearly it would be better if the GO button were the height of Press and Push combined. To arrange them like this, we need to make the height of the GO button twice that of the other two buttons.
The height of the Press button is 1 by default, so by making the height of the GO button 2, and resetting the gridheight constraint of the Push button to 1, we should get the desired result. Modify the code for the second and third buttons to:

// Set constraints and add second button
constraints.weightx = 5.0;                       // Weight half of first
constraints.gridwidth = constraints.REMAINDER;   // Rest of the row
constraints.insets = new Insets(10, 30, 10, 20); // Left 30 & right 20
constraints.gridheight = 2;                      // Height 2x "Press"
addButton("GO", constraints, gridbag);           // Add button to content pane

// Set constraints and add third button
constraints.gridx = 0;                           // Begin new row
constraints.gridwidth = 1;                       // Width as "Press"
constraints.gridheight = 1;                      // Height as "Press"
constraints.insets = new Insets(0, 0, 0, 0);     // No insets
addButton("Push", constraints, gridbag);         // Add button to content pane

With these code changes, the window will be as shown below. We could also see the effect of padding the components out from their preferred size by altering the button constraints a little:

// Set constraints and add first button
constraints.weightx = constraints.weighty = 10.0;
constraints.fill = constraints.NONE;
constraints.ipadx = 30;                          // Pad 30 in x
constraints.ipady = 10;                          // Pad 10 in y
addButton("Press", constraints, gridbag);        // Add button to content pane

// Set constraints and add second button
constraints.weightx = 5.0;                       // Weight half of first
constraints.fill = constraints.BOTH;             // Expand to fill space
constraints.ipadx = constraints.ipady = 0;       // No padding
constraints.gridwidth = constraints.REMAINDER;   // Rest of the row
constraints.gridheight = 2;                      // Height 2x "Press"
constraints.insets = new Insets(10, 30, 10, 20); // Left 30 & right 20
addButton("GO", constraints, gridbag);           // Add button to content pane

// Set constraints and add third button
constraints.gridx = 0;                           // Begin new row
constraints.fill = constraints.NONE;
constraints.ipadx = 30;                          // Pad component in x
constraints.ipady = 10;                          // Pad component in y
constraints.gridwidth = 1;                       // Width as "Press"
constraints.gridheight = 1;                      // Height as "Press"
constraints.insets = new Insets(0, 0, 0, 0);     // No insets
addButton("Push", constraints, gridbag);         // Add button to content pane

With these constraints for the buttons, the window will look like that shown below. Both the Push and the Press buttons occupy the same space in the container but, because fill is set to NONE, they are not expanded to fill the space in either direction. The ipadx and ipady constraints specify by how much the buttons are to be expanded from their preferred size – by thirty pixels on the left and right, and ten pixels on the top and bottom. The overall arrangement remains the same. You need to experiment with using GridBagLayout and GridBagConstraints to get a good feel for how the layout manager works, because you are likely to find yourself using it quite often.

You can set the layout manager for the content pane of a JFrame object, aWindow, to be a SpringLayout manager like this:

SpringLayout layout = new SpringLayout();      // Create a layout manager
Container content = aWindow.getContentPane();  // Get the content pane
content.setLayout(layout);

The layout manager defined by the SpringLayout class determines the position and size of each component in the container according to a set of constraints that are defined by Spring objects. Every component within a container using a SpringLayout manager has an object associated with it of type SpringLayout.Constraints that can define constraints on the position of each of the four edges of the component. Before you can access the SpringLayout.Constraints object for a component, you must first add the component to the container.
For example:

JButton button = new JButton("Press Me");
content.add(button);

Now we can call the getConstraints() method for the SpringLayout object to obtain the object encapsulating the constraints:

SpringLayout.Constraints buttonConstr = layout.getConstraints(button);

To constrain the location and size of the button object, we will call methods of the buttonConstr object to set individual constraints. The top, bottom, left, and right edges of a component are referred to by their compass points: north, south, west, and east. When you need to refer to a particular edge in your code – for setting a constraint, for instance – you use constants that are defined in the SpringLayout class: NORTH, SOUTH, WEST, and EAST respectively.

As the diagram shows, the position of a component is determined by a horizontal constraint on the x-coordinate of the component and a vertical constraint on the y-coordinate. These obviously also determine the location of the WEST and NORTH edges of the component, since the position determines where the top-left corner is located. The width and height are determined by constraints that relate the position of the EAST and SOUTH edges to the positions of the WEST and NORTH edges respectively. Thus the constraints on the positions of the EAST and SOUTH edges are derived from the others, as follows:

EAST-constraint = X-constraint + width-constraint
SOUTH-constraint = Y-constraint + height-constraint

You can set the X, Y, width, and height constraints independently, as we shall see in a moment, and you can also set a constraint explicitly for any edge. If you set a constraint on the SOUTH or EAST edge of a component, the Y or X constraint will be adjusted if necessary to ensure the relationships above still hold. The Spring class in the javax.swing package defines an object that represents a constraint.
A Spring object is defined by three integer values that relate to the notional length of the spring: the minimum value, the preferred value, and the maximum value. A Spring object will also have an actual value that lies between the minimum and the maximum, and that will determine the location of the edge to which it applies. You can create a Spring object like this:

Spring spring = Spring.constant(min, pref, max);

The static constant() method creates a Spring object from the three arguments that are the minimum, preferred, and maximum values for the object. If all three values are equal, the object is called a strut because its value is fixed at the common value you set for all three. There's an overloaded version of the constant() method for creating struts:

Spring strut = Spring.constant(40); // min, pref, and max all set to 40

The Spring class also defines static methods, such as sum(), minus(), and max(), that operate on Spring objects.

The setX() and setY() methods for a SpringLayout.Constraints object set the constraints for the WEST and NORTH edges of the component respectively. For example:

Spring xSpring = Spring.constant(5, 10, 20);  // Spring we'll use for X
Spring ySpring = Spring.constant(3, 5, 8);    // Spring we'll use for Y
buttonConstr.setX(xSpring);                   // Set the WEST edge constraint
buttonConstr.setY(ySpring);                   // Set the NORTH edge constraint

The setX() method defines a constraint between the WEST edge of the container and the WEST edge of the component. Similarly, the setY() method defines a constraint between the NORTH edge of the container and the NORTH edge of the component. This fixes the location of the component in relation to the origin of the container.
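Because the minimum/preferred/maximum triple lives entirely in the Spring object, you can inspect and combine springs without putting any window on screen. A quick check of constant(), struts, and sum():

```java
import javax.swing.Spring;

// Demonstrates the three values that define a Spring, plus struts
// and spring arithmetic via Spring.sum().
public class SpringValuesDemo {
    public static void main(String[] args) {
        Spring spring = Spring.constant(5, 10, 20);        // min, preferred, max
        System.out.println("min=" + spring.getMinimumValue());     // 5
        System.out.println("pref=" + spring.getPreferredValue());  // 10
        System.out.println("max=" + spring.getMaximumValue());     // 20

        // A strut: all three values are equal, so its length is fixed
        Spring strut = Spring.constant(40);
        System.out.println("strut=" + strut.getPreferredValue());  // 40

        // Spring.sum() adds two springs value by value
        Spring total = Spring.sum(spring, strut);
        System.out.println("sumPref=" + total.getPreferredValue()); // 50
    }
}
```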
To set the width and height of the component, you call the setWidth() and setHeight() methods for its SpringLayout.Constraints object:

Spring wSpring = Spring.constant(30, 50, 70); // Spring we'll use for width
Spring hSpring = Spring.constant(15);         // Strut we'll use for height
buttonConstr.setWidth(wSpring);               // Set component width constraint
buttonConstr.setHeight(hSpring);              // Set component height constraint

The width constraint is applied between the WEST and EAST edges, and the height constraint applies between the component's NORTH and SOUTH edges. Since we have specified a strut for the height, there is no leeway on this constraint; its value is fixed at 15.

If you want to explicitly set an edge constraint for a component, you call the setConstraint() method for the component's SpringLayout.Constraints object:

layout.getConstraints(newButton)
      .setConstraint(SpringLayout.EAST, Spring.sum(xSpring, wSpring));

This statement ties the EAST edge of the newButton component to the WEST edge of the container by a Spring object that is the sum of xSpring and wSpring. You can also set constraints between pairs of vertical or horizontal edges where one edge can belong to a different component from the other. For instance, we could add another button to the container like this:

JButton newButton = new JButton("Push");
content.add(newButton);

We can now constrain its WEST and NORTH edges by tying the edges to the EAST and SOUTH edges of button. We use the putConstraint() method for the SpringLayout object to do this:

SpringLayout.Constraints newButtonConstr = layout.getConstraints(newButton);
layout.putConstraint(SpringLayout.WEST, newButton, xSpring, SpringLayout.EAST, button);

The first two arguments to the putConstraint() method for the layout object are the edge specification and a reference to the dependent component respectively. The third argument is a Spring object defining the constraint.
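The derived-edge identities (EAST = X + width, SOUTH = Y + height) can be verified directly: a SpringLayout.Constraints object computes the derived edges itself, and no GUI is needed. The values below are the illustrative struts used in this section, not anything special:

```java
import javax.swing.Spring;
import javax.swing.SpringLayout;

// Checks that a Constraints object derives EAST from x + width
// and SOUTH from y + height, as described in the text.
public class DerivedEdgeDemo {
    public static void main(String[] args) {
        SpringLayout.Constraints c = new SpringLayout.Constraints(
            Spring.constant(10),    // x
            Spring.constant(20),    // y
            Spring.constant(50),    // width
            Spring.constant(15));   // height

        // EAST = 10 + 50, SOUTH = 20 + 15
        System.out.println("EAST="  + c.getConstraint(SpringLayout.EAST).getValue());
        System.out.println("SOUTH=" + c.getConstraint(SpringLayout.SOUTH).getValue());
    }
}
```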
The fourth and fifth arguments specify the edge and a reference to the component to which the dependent component is anchored. Obviously, since constraints can only be horizontal or vertical, both edges should have the same orientation. There is an overloaded version of the putConstraint() method where the third argument is a value of type int that defines a fixed distance between the edges.

Let's look at a simple example using a SpringLayout object as the layout manager. Here's the code for an example that displays six buttons in a window:

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SpringLayout;
import javax.swing.Spring;
import java.awt.Container;
import java.awt.Dimension;
import java.awt.Toolkit;

public class TrySpringLayout {
  // The window object
  static JFrame aWindow = new JFrame("This is a Spring Layout");

  public static void main(String[] args) {
    SpringLayout layout = new SpringLayout();      // Create a layout manager
    Container content = aWindow.getContentPane();  // Get the content pane
    content.setLayout(layout);                     // Set the container layout mgr

    JButton[] buttons = new JButton[6];            // Array to store buttons
    for(int i = 0; i < buttons.length; i++) {
      buttons[i] = new JButton("Press " + (i+1));
      content.add(buttons[i]);                     // Add a Button to content pane
    }

    Spring xSpring = Spring.constant(5, 15, 25);   // x constraint for 1st button
    Spring ySpring = Spring.constant(10, 30, 50);  // y constraint for first button
    Spring wSpring = Spring.constant(30, 80, 130); // Width constraint for buttons

    // Connect x,y for first button to left and top of container by springs
    SpringLayout.Constraints buttonConstr = layout.getConstraints(buttons[0]);
    buttonConstr.setX(xSpring);
    buttonConstr.setY(ySpring);

    // Position each remaining button relative to its predecessor
    for(int i = 1; i < buttons.length; i++) {
      buttonConstr = layout.getConstraints(buttons[i]);
      buttonConstr.setWidth(wSpring);              // Set width constraint
      layout.putConstraint(SpringLayout.WEST, buttons[i], xSpring,
                           SpringLayout.EAST, buttons[i-1]);
      layout.putConstraint(SpringLayout.NORTH, buttons[i], ySpring,
                           SpringLayout.SOUTH, buttons[i-1]);
    }

    Toolkit theKit = aWindow.getToolkit();         // Get the window toolkit
    Dimension wndSize = theKit.getScreenSize();    // Get screen size
    aWindow.setBounds(wndSize.width/4, wndSize.height/4,    // Position
                      wndSize.width/2, wndSize.height/2);   // Size
    aWindow.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    aWindow.setVisible(true);                      // Display the window
  }
}

When you compile and run this you should get a window with the buttons laid out as shown below.
How It Works

After adding six buttons to the content pane of the window, we define two Spring objects that we will use to position the first button:

Spring xSpring = Spring.constant(5, 15, 25);   // x constraint for 1st button
Spring ySpring = Spring.constant(10, 30, 50);  // y constraint for first button

We also define a spring we will use to determine the width of each button:

Spring wSpring = Spring.constant(30, 80, 130); // Width constraint for buttons

We then set the location of the first button relative to the container:

// Connect x,y for first button to left and top of container by springs
SpringLayout.Constraints buttonConstr = layout.getConstraints(buttons[0]);
buttonConstr.setX(xSpring);
buttonConstr.setY(ySpring);

This fixes the first button. We can define the position of each of the remaining buttons relative to its predecessor. We do this by adding constraints between the NORTH and WEST edges of each button and the SOUTH and EAST edges of its predecessor. This is done in the for loop, after setting the width constraint for each button:

for(int i = 1; i < buttons.length; i++) {
  buttonConstr = layout.getConstraints(buttons[i]);
  buttonConstr.setWidth(wSpring);
  layout.putConstraint(SpringLayout.WEST, buttons[i], xSpring,
                       SpringLayout.EAST, buttons[i-1]);
  layout.putConstraint(SpringLayout.NORTH, buttons[i], ySpring,
                       SpringLayout.SOUTH, buttons[i-1]);
}

This places each component after the first relative to the bottom right corner of its predecessor, so the buttons are laid out in a cascade fashion. Of course, the size of the application window in our example is independent of the components within it. If you resize the window, the springs have no effect. If you call pack() for the aWindow object before calling its setVisible() method, the window will shrink to a width and height just accommodating the title bar, so you won't see any of the components. This is because SpringLayout does not adjust the size of the container by default, so the effect of pack() is as though the content pane were empty. We can do much better than this. We can set constraints on the edges of the container using springs that will control its size.
We can therefore place constraints on the height and width of the container in terms of the springs that we used to determine the size and locations of the components. This will have the effect of relating all the springs that determine the size and position of the buttons to the size of the application window. Try adding the following code to the example immediately preceding the call to setVisible() for the window object:

SpringLayout.Constraints constr = layout.getConstraints(content);
constr.setConstraint(SpringLayout.EAST,
    Spring.sum(buttonConstr.getConstraint(SpringLayout.EAST), Spring.constant(15)));
constr.setConstraint(SpringLayout.SOUTH,
    Spring.sum(buttonConstr.getConstraint(SpringLayout.SOUTH), Spring.constant(10)));
aWindow.pack();

This sets the constraint on the EAST edge of the container to be the Spring constraining the EAST edge of the last button plus a strut 15 units long. This positions the right edge of the container 15 units to the right of the right edge of the last button. The bottom edge of the container is similarly connected by a fixed link, 10 units long, to the bottom edge of the last button. If you recompile with these additions and run the example again, you should find that not only is the initial size of the window set to accommodate all the buttons, but also when you resize the window the sizes and positions of the buttons adapt accordingly. Isn't that nice?

The SpringLayout manager is extremely flexible and can do much of what the other layout managers can do if you choose the constraints on the components appropriately. It's well worth experimenting to see the effect of various configurations of springs on your application.
Follow me on Twitter, happy to take your suggestions on topics or improvements /Chris

This is an intro to Dependency Injection, also called DI. I plan a follow-up post covering more advanced scenarios. For now, I want to explain what it is, how you can use it, and finally show how much it helps with testing.

What is Dependency Injection

It's a programming technique that makes a class independent of its dependencies. We don't rely on a concrete implementation of our dependencies, but rather on interfaces. This makes our code more flexible, and we can easily switch out a concrete implementation for another while maintaining the same logic.

References

- Overview of Dependency Injection
- DI MVC example
- DI in Controllers
- Dependency Injection pattern by Martin Fowler
- Overview of Dependency Injection namespace

Why use it

There are many advantages:

- Flexible code: we can switch out one implementation for another without changing the business logic.
- Easy to test: because we rely on interfaces over implementations, we can more easily test our code without worrying about side-effects. We will show this later in the article.

DI in .NET Core - it's built-in

There's a built-in Dependency Injection container that's used by a lot of internal services like:

- Hosting Environment
- Configuration
- Routing
- MVC
- ApplicationLifetime
- Logging

The container is sometimes referred to as an IoC container, for Inversion of Control. The overall idea is to Register at application startup and then Resolve at runtime when needed.

Container responsibilities:

- Creating
- Disposing
- IServiceCollection, Register services: lets the IoC container know of a concrete implementation. It is used to record which interface belongs to which implementation. How something is created can be as simple as just instantiating an object, but sometimes we need more data than that.
- IServiceProvider, Resolve service instances: actually looking up what interface belongs to what concrete implementation and carrying out the creation. It lives in Microsoft.Extensions.DependencyInjection.

What to register

There are some telltale signs:

- Lifespan outside of this method? Are we new-ing the service? Can the services live within the scope of the method, i.e. are they a dependency or not?
- More than one version: can there be more than one version of this service?
- Testability: ideally you only want to test a specific method. If you have code that does a lot of other things in your method, you probably want to move that to a dedicated service. This moved code would then become a dependency of the method in question.
- Side-effect: this is similar to the point above, but it stresses the importance of having a method that does only one thing. If a side-effect is produced, i.e. accessing a network resource, doing an HTTP call or interacting with I/O, then it should be placed in a separate service and be injected in as a dependency.

Essentially, you will end up moving out code to dedicated services and then injecting these services as dependencies via a constructor. You might start out with code looking like so: an Action method that news up a PaymentService and a ShippingService itself, charges the payment, then ships the order, and takes raw primitives (card number, amount, address) as parameters. An Action like that has many problems:

- Unwanted side-effects when testing: the first problem is that we control the lifetime of PaymentService and ShippingService, thus risking firing off a side-effect, an HTTP call, when trying to test.
- Can't test all paths: we can't ask the PaymentService to respond differently, so we can't test all execution paths.
- Hard to extend: will this PaymentService cover all the possible means of payment, or would we need to add a lot of conditional code in this method to cover different ways of taking payment if we added, say, support for PayPal or a new type of card?
- Unvalidated Primitives: there are primitives like double and string.
Can we trust those values – is the address a valid address, for example?

From the above, we realize that we need to refactor our code into something more maintainable and more secure. Turning a lot of the code into dependencies and replacing primitives with more complex constructs is a good way to go. The result could look something like this:

class Controller
{
    private readonly IPaymentService _paymentService;
    private readonly IShippingService _shippingService;

    public Controller(
        IPaymentService paymentService,
        IShippingService shippingService)
    {
        _paymentService = paymentService;
        _shippingService = shippingService;
    }

    public void Action(IPaymentInfo paymentInfo, IShippingAddress shippingAddress)
    {
        var successfullyCharged = _paymentService.Charge(paymentInfo);
        if (successfullyCharged)
        {
            _shippingService.Ship(shippingAddress);
        }
    }
}

Above, we have turned both the PaymentService and ShippingService into dependencies that we inject in the constructor. We also see that all the primitives have been collected into the complex structures IShippingAddress and IPaymentInfo. What remains is pure business logic.

Dependency Graph

When you have a dependency, it might itself rely on another dependency being resolved first, and so on and so forth. This means we get a hierarchy of dependencies that need to be resolved in the right order for things to work out. We call this a Dependency Graph.

DEMO - registering a Service

We will do the following:

- create a .NET Core solution
- add a webapi project to our solution
- fail, see what happens if we forgot to register a service.
It's important to recognize the error message so we know where we went wrong and can fix it.
- registering a service: we will register our service, and we will now see how everything works.

Create a solution

mkdir di-demo
cd di-demo
dotnet new sln

This will create the following structure:

-| di-demo
---| di-demo.sln

Create a WebApi project

dotnet new webapi -o api
dotnet sln add api/api.csproj

The above will create a webapi project and add it to our solution file. Now we have the following structure:

-| di-demo
---| di-demo.sln
---| api/

Fail

First, we will compile and run our project, so we type:

dotnet run

The first time you run the project, the web browser might tell you something like your connection is not secure – you have a dev cert that's not trusted. Fortunately, there's a built-in tool that can fix this, so you can run a command like this:

dotnet dev-certs https --trust

You should have something like this running:

Ok then, we don't have an error, but let's introduce one. Let's do the following:

- Create a controller that supports getting products; this should inject a ProductsService
- Create a ProductsService; this should be able to retrieve Products from a data source
- Create an IProductsService interface; inject this interface in the controller

Add a ProductsController

Add the file ProductsController.cs with the following content:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Services;

namespace api.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class ProductsController : ControllerBase
    {
        private readonly IProductsService _productsService;

        public ProductsController(IProductsService productsService)
        {
            _productsService = productsService;
        }

        [HttpGet]
        public IEnumerable<Product> GetProducts()
        {
            return _productsService.GetProducts();
        }
    }
}

Note how we inject the IProductsService in the constructor.
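The payoff of injecting an interface rather than new-ing the class shows up the moment you write a test: you can hand the controller a fake. The idea is language-neutral, so here is the same shape sketched in Java (the names mirror the demo project but are illustrative):

```java
import java.util.List;

// Constructor injection: the controller depends on an interface,
// so a test can substitute a fake implementation with canned data.
interface ProductsService {
    List<String> getProducts();
}

class FakeProductsService implements ProductsService {
    public List<String> getProducts() {
        return List.of("DVD player", "TV", "Projector"); // canned data, no I/O
    }
}

class ProductsController {
    private final ProductsService productsService;

    ProductsController(ProductsService productsService) { // injected dependency
        this.productsService = productsService;
    }

    int productCount() {
        return productsService.getProducts().size();
    }
}

public class DiTestabilityDemo {
    public static void main(String[] args) {
        // "Registering" here is simply choosing which implementation to pass in
        ProductsController controller = new ProductsController(new FakeProductsService());
        System.out.println("products: " + controller.productCount()); // 3
    }
}
```

Swapping FakeProductsService for a real database-backed implementation requires no change to the controller – exactly the flexibility described above.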
This file should be added to the Controllers directory.

Add a ProductsService

Let's create a file ProductsService.cs under a directory Services, with the following content:

using System;
using System.Collections.Generic;
using System.Linq;

namespace Services
{
    public class Product
    {
        public string Title { get; set; }
    }

    public class ProductsService : IProductsService
    {
        private readonly List<Product> Products = new List<Product>
        {
            new Product { Title = "DVD player" },
            new Product { Title = "TV" },
            new Product { Title = "Projector" }
        };

        public IEnumerable<Product> GetProducts()
        {
            return Products.AsEnumerable();
        }
    }
}

Create an interface IProductsService

Let's create the file IProductsService.cs under the Services directory, with the following content:

using System;
using System.Collections.Generic;

namespace Services
{
    public interface IProductsService
    {
        IEnumerable<Product> GetProducts();
    }
}

Run

Let's run the project with:

dotnet build
dotnet run

We should get the following response in the browser:

It's failing, just like we planned. Now what? Well, we fix it, by registering it with our container.

Registering a service

Ok, let's fix our problem. We do so by opening up the file Startup.cs in the project root. Let's find the ConfigureServices() method. It should have the following implementation currently:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
}

Let's change the code to the following:

public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IProductsService, ProductsService>();
    services.AddControllers();
}

The call to services.AddTransient() registers IProductsService and associates it with the implementing class ProductsService. If we run our code again:

dotnet run

Now your browser should be happy and look like this:

Do we know all we need to know now? No, there's lots more to know.
So please read on in the next section to find out about the different lifetimes; transient is but one of them.

Service lifetimes

The service lifetime determines how long the service will live before it is garbage collected. There are currently three different lifetimes:

- Transient, services.AddTransient(): the service is created each time it is requested
- Singleton, services.AddSingleton(): created once for the lifetime of the application
- Scoped, services.AddScoped(): created once per request

So when to use each kind? Good question.

Transient

Transient makes sense when you have mutable state and it's important that the service consumer gets their own copy of the service, and when thread-safety is not a requirement. This is a good default choice when you don't know what lifetime to go with.

Singleton

Singleton means that we have one instance for the lifetime of the application. This is good if we want to share state, or if creating the service is considered expensive and we want to create it only once. This can boost performance, as it's only created once and garbage collected once. Because it can be accessed by many consumers, thread-safety needs to be considered. A good use case here is a memory cache – but ensure you are making it thread-safe. Read up on how to make something thread-safe, especially the lock keyword.

Scoped

Scoped means it's created once per request, so all calling consumers within that request will get the same instance. An example of a scoped service is the DbContext for Entity Framework – the class we use to access a database. It makes sense to make it scoped: we are likely to do more than one call to it during our request, and the resource should be scoped to that specific request/user.

Here be dragons

There's such a thing as captured dependencies. This means that a service lives longer than expected. So why is that bad?
Well, you want services to live according to their lifetime; otherwise, we take up unnecessary space in memory. How does it happen? When you start depending on a service with a shorter lifetime than yourself, you are effectively capturing it, forcing it to stay around according to your lifetime.

Example: you register a ProductsService with a scoped lifetime and an ILogService with a transient lifetime. Then you inject the ILogService into the ProductsService constructor and thereby capture it.

class ProductsService
{
    ProductsService(ILogService logService) { }
}

Don't do that! If you are going to depend on something, ensure that what you inject has an equal or longer lifetime than yourself. So either change what you depend on or change the lifetime of your dependency.

Summary

We have explained what Dependency Injection is and why it's a good idea to use it. Additionally, we have shown how the built-in container helps us register our dependencies. Lastly, we've discussed the different lifetimes for a dependency and which one we should be choosing. This was the first part on the built-in container. I hope you are excited about a follow-up post talking about some of its more advanced features.

Discussion

Hi Chris. Excellent post. Thank you. I have a question => you mentioned "it makes sense to use when you have a mutable state and it's important that the service consumer gets their own copy of this service." Can you explain what that means, and also explain how that works with other scopes like Scoped and Singleton?

Hey... so the whole idea of DI, at least in my mind, is about three things: 1) reuse services that are expensive to create; 2) be able to share state; 3) make testing easier / code needs modification less often – you rely on contracts, i.e. interfaces. Singleton pattern is a clear case where we can share state.
Because it's created only once, you can have fields on that service that you write to, e.g. a CartService can be a singleton with an Items field that you keep adding items to, and you can ensure that no more than one CartService is created.

Excellent post, well done!

Thank you for that Jeffrey :)

Thanks for the article. The last link at the beginning routes to a 404 page.

Hi Peter. Thanks for that. The link should be fixed now.

Loved the article! I found a small typo for correction in the controller: private readonly _shippingService IShippingService;

Thanks, Michael, glad you liked it. Appreciate the typo correction as well :)

Wonderful article on dependency injection. Just a small question: the heading says "Create an interface IProductsController" and the code actually created is IProductService – is the heading a typo?

yes it was, thank you :)

Seriously, first article that finally got me to understand Dependency Injection! (BTW, you need to fix your 'e' key. It's missing in many places. Are you using one of those Mac laptops?)

thanks. and yes my Mac has seen better days :)

Hi Chris, Great article, good refresher!
So I am new to Swift. I am learning it via video tutorials, step by step. With each tutorial I attempt to make very simple apps to reinforce what I am learning, and I am moving slowly so I can retain it. Down to my question!

I have created a very simple app that is no more than a button laid over an image, and when the button is pressed the image changes. You then press the reset button and it goes back to the original image. What I want is for the user to press the button, have the image change, and 5 seconds later have it auto-change back to the original image – no reset button for the user to have to press. How can I do this, in a very simple way?

FYI I have several buttons on the screen, so I will want to do this for each button individually at this time. As I learn more I will revisit this project and learn to create a single function that I can just call anytime I want to do this, that way creating cleaner code. But I've got to walk before I run.

Ok, so here is what my current code looks like. How do I add this into it?

@IBOutlet weak var bkgrdImage: UIImageView!
@IBOutlet weak var yellowDesktopImage: UIImageView!
@IBOutlet weak var greenDesktopImage: UIImageView!
@IBOutlet weak var yellowExpanded: UIImageView!
@IBOutlet weak var greenExpanded: UIImageView!
@IBOutlet weak var yellowButton: UIButton!
@IBOutlet weak var greenButton: UIButton!

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
}

override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
    // Dispose of any resources that can be recreated.
}

@IBAction func learnYellow(sender: AnyObject) {
    bkgrdImage.hidden = false
    yellowDesktopImage.hidden = true
    greenDesktopImage.hidden = true
    yellowExpanded.hidden = false
    greenExpanded.hidden = true
    yellowButton.hidden = true
    greenButton.hidden = true
}

Something like this in your button press function should do the job (above the class, import Dispatch):

imageView.image = newImage // change to the new image
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (Int64)(5 * NSEC_PER_SEC)), dispatch_get_main_queue(), {
    imageView.image = originalImage // change back to the old image after 5 sec
});
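Stripped of UIKit, the answer above is just the "change, then revert after a delay" pattern: swap in the new value, schedule a task that restores the original. The same shape sketched in Java (the image is stood in for by a string, and the 5 seconds are shortened so the demo finishes quickly; names are illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Set a temporary value, then schedule a revert to the original after a delay.
public class DelayedRevertDemo {
    static volatile String image = "original";

    // Runs one press/revert cycle and returns the value after the revert.
    static String pressButton(long delayMillis) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        try {
            image = "expanded";                            // button pressed: swap image
            scheduler.schedule(() -> image = "original",   // revert after the delay
                               delayMillis, TimeUnit.MILLISECONDS);
            scheduler.shutdown();                          // no new tasks; delayed task still runs
            scheduler.awaitTermination(5, TimeUnit.SECONDS); // wait for the revert (demo only)
            return image;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return image;
        }
    }

    public static void main(String[] args) {
        System.out.println("after press+delay: " + pressButton(100)); // original
    }
}
```

In a real UI you would not block waiting for the revert; the scheduled task alone does the job, just as dispatch_after does in the Swift answer.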
J. Winterbottom
Andrew
February 2008

Revised Civic Location Format for Presence Information Data Format Location Object (PIDF-LO)

Table of Contents

1. Introduction
2. Terminology
3. Changes from PIDF-LO
   3.1. Additional Civic Address Types
   3.2. New Thoroughfare Elements
        3.2.1. Street Numbering
        3.2.2. Directionals and Other Qualifiers
   3.3. Country Element
   3.4. A1 Element
   3.5. Languages and Scripts
        3.5.1. Converting from the DHCP Format
        3.5.2. Combining Multiple Elements Based on Language Preferences
   3.6. Whitespace
4. Civic Address Schema
5. Example
6. Security Considerations
7. IANA Considerations
   7.1. URN sub-namespace registration for 'urn:ietf:params:xml:ns:pidf:geopriv10:civicAddr'
   7.2. XML Schema Registration
   7.3. CAtype Registry Update
8. References
   8.1. Normative References
   8.2. Informative References
Appendix A. Acknowledgements

The figure below shows a fictional arrangement of roads where the new thoroughfare elements are applicable.

[Figure: a fictional road arrangement in which Alice Pde. runs west to east, divided into numbered sections Sec.1 through Sec.5 and crossed by Bob St., with Carol La. and Alley 2 adjoining it.]

3.2.1. Street Numbering

The introduction of new thoroughfare elements affects the interpretation of several aspects of street numbering.

3.5. Languages and Scripts

The XML schema defined for civic addresses allows for the addition of the "xml:lang" attribute to all elements except "country" and "PLC", which both contain language-neutral values. The range of allowed values for "country" is defined in [ISO.3166-1]; the range of allowed values for "PLC" is described in the IANA registry defined by [RFC4589]. The "script" field defined in [RFC4776] is omitted in favor of the "xml:lang" attribute.

7. IANA Considerations

7.1. URN sub-namespace registration for 'urn:ietf:params:xml:ns:pidf:geopriv10:civicAddr'

This document defines a new XML namespace (as per the guidelines in [RFC3688]) that has been registered with IANA. The registration includes a reference of the form:

<p>See <a href="">RFC 5139</a>.</p>

7.3. CAtype Registry Update

This document updates the civic address type registry established by [RFC4776]. The "PIDF" column of the CAtypes table has been updated to include the types shown in the first column of Table 1.

8. References

8.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[W3C.REC-xmlschema-2-20041028] Biron, P. and A. Malhotra, "XML Schema Part 2: Datatypes Second Edition", W3C Recommendation, October 2004.

Appendix A. Acknowledgements

The authors would like to thank Henning Schulzrinne for his assistance in defining the additional civic address types, particularly his research into different addressing schemes that led to the introduction of the thoroughfare elements. Rohan Mahy suggested the ISO 3166-2 recommendation for A1. In addition, we would like to thank Jon Peterson for his work in defining the PIDF-LO.
https://pike.lysator.liu.se/docs/ietf/rfc/51/rfc5139.xml
Gatling is an open-source load testing framework based on Scala, Akka and Netty. Load testing is performed to determine a system's behaviour under both normal and peak conditions. Gatling's highlights include:
- High performance
- Ready-to-present HTML reports
- Scenario recorder and developer-friendly DSL

Setup:

1. In project/plugins.sbt, add:

    addSbtPlugin("io.gatling" % "gatling-sbt" % "2.1.6")

2. In build.sbt, enable the plugin and add these two dependencies:

    enablePlugins(GatlingPlugin)

    libraryDependencies += "io.gatling.highcharts" % "gatling-charts-highcharts" % "2.1.6" % "test"
    libraryDependencies += "io.gatling" % "gatling-test-framework" % "2.1.6" % "test"

DSL:

A Simulation is a real Scala class containing 4 different parts:

http: where we define the baseURL, which will be prepended to all the relative paths in the scenario definition. We can also define other common configuration, such as headers and the user agent, which will be added to each request.

header: where we define the headers that will be sent with each request to the server.

scenario: the main part of the test. It is a sequence of exec commands, each of which executes an action (GET, POST, etc.) in order, to simulate a user.

simulation: usually the last part of the test script. This is where we define the load we want to inject into the server.

Example code:

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._
    import scala.concurrent.duration._

    /** Load test on the scalaJobz REST API for finding all jobs. */
    class TestScript extends Simulation {

      val baseurl = "" // set to the base URL of the service under test

      val httpConf = http.baseURL(baseurl)

      val scn = scenario("Finding all jobs")
        .exec(
          http("Request")
            .get("/") // relative path; the baseURL is prepended
            .check(status.is(200)))

      setUp(scn.inject(atOnceUsers(5))).protocols(httpConf)
    }

Running the scenario:

    $ sbt test

Result:

Gatling produces meaningful HTML reports through which we can easily understand the results. The report is generated under the target folder.
https://blog.knoldus.com/2015/07/14/gatling-sbt-for-load-testing/