text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
How to create report using ReportViewer in ASP.NET
by GetCodeSnippet.com • May 6, 2014 • Microsoft .NET C# (C-Sharp), Microsoft ASP.NET • 0 Comments
There are few tools and APIs used to generate and display reports in ASP.NET. Crystal Report is one of them and it is a good reporting tool. Hopefully, I will write few articles on Crystal Reports. Today, I will show you the use of ReportViewer control in ASP.NET.
ASP.NET has provided a server control in Visual Studio for reporting that is called ReportViewer. ReportViewer control is provided so that you can process and display reports in your applications. It is a freely redistributable control that enables embedding reports in applications developed is ASP.NET. Reports can be designed with drag-and-drop simplicity using report designer included in Visual Studio 2010.
You can process your data efficiently using ReportViewer and it can perform sorting, filtering, aggregation and grouping using ReportViewer reporting engine. ReportViewer supports a variety of ways to display data such as list, table and chart. You can also create reports which are visually good by setting fonts, colors, border styles, background images. ReportViewer also supports multiple other things like conditional formatting, collapsible sections, interactive sorting and document maps. You can also export your reports into Word, Excel and PDF formats.
ReportViewer control can work in two modes. It can process and render reports using built-in engine in local mode or it can work in server mode. ReportViewer control displays one report at a time but you can combine multiple ReportViewer controls to display multiple reports simultaneously. You can use smart tags to configure ReportViewer control and you can also write code if you want to configure it programmatically.
Now let’s see how to create a report using ReportViewer control.
- Create a web application in visual studio 2010 and add a webform into it.
- Add a report file from reporting tab in add new item window.
- Create a XML schema file. I have used following code and run the project to create it.
- Add this XML schema file to your web application as an existing file.
- Open Report1.rdlc file and click on new > Dataset from Report Data tab on left to add new DataSet
- Select Dataset and click ok
- Now insert textbox in Report1.rdlc file and drag and drop columns from Report Data tab
- Add reference of Microsoft.ReportViewer.Webforms in your application
- Add a ScriptManager and a ReportViewer control in your webform
- Add following namespaces in code file
- Now add following method to generate report
- Add following code in page load event to get data for one ID
- If you want to convert your report in PDF format and save it to specific location, you can write following lines of code at the end of GenerateReport() method
- Now you can see report in your browser and you can also download complete code sample from below link
| http://getcodesnippet.com/2014/05/06/how-to-create-report-using-reportviewer-in-asp-net/ | CC-MAIN-2017-43 | refinedweb | 490 | 55.44 |
Screen resolution with retina display
Is it possible to set the screen resolution from 1024 x 748 to 2048 x 1496?
scene.get_screen_scale()=> 2.0
Setting self.size and self.bounds to a higher resolution changed the values but the resolution is still the same.
def setup(self): self.size = scene.Size(2048.0,1496.0) self.bounds = scene.Rect(0,0,2048.0,1496.0)
Why do you want to change the resolution? You can use 0.5 points to refer to physical pixels on a retina device.
On one hand it would be easier to use the real resolution, but on the other hand it makes sense to use a scale factor for different screen resolutions. | https://forum.omz-software.com/topic/438/screen-resolution-with-retina-display | CC-MAIN-2017-47 | refinedweb | 118 | 68.87 |
: InlineBlockParent.java 426576 2006-07-28 15:44:37Z jeremias $ */19 20 package org.apache.fop.area.inline;21 22 import org.apache.fop.area.Area;23 import org.apache.fop.area.Block;24 25 26 /**27 * Inline block parent area.28 * This is an inline area that can have one block area as a child29 */30 public class InlineBlockParent extends InlineArea {31 32 /**33 * The list of inline areas added to this inline parent.34 */35 protected Block child = null;36 37 /**38 * Create a new inline block parent to add areas to.39 */40 public InlineBlockParent() {41 }42 43 /**44 * Override generic Area method.45 *46 * @param childArea the child area to add47 */48 public void addChildArea(Area childArea) {49 if (child != null) {50 throw new IllegalStateException ("InlineBlockParent may have only one child area.");51 }52 if (childArea instanceof Block) {53 child = (Block) childArea;54 //Update extents from the child55 setIPD(childArea.getAllocIPD());56 setBPD(childArea.getAllocBPD());57 } else {58 throw new IllegalArgumentException ("The child of an InlineBlockParent must be a"59 + " Block area");60 }61 }62 63 /**64 * Get the child areas for this inline parent.65 *66 * @return the list of child areas67 */68 public Block getChildArea() {69 return child;70 }71 72 }73
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/apache/fop/area/inline/InlineBlockParent.java.htm | CC-MAIN-2017-30 | refinedweb | 223 | 65.52 |
The more you do programming, the more you will here about how you should test your code. You will hear about things like Extreme Programming and Test Driven Development (TDD). These are great ways to create quality code. But how does testing fit in with Jupyter? Frankly, it really doesn’t. If you want to test your code properly, you should write your code outside of Jupyter and import it into cells if you need to. This allows you to use Python’s unittest module or py.test to write tests for your code separately from Jupyter. This will also let you add on test runners like nose or put your code into a Continuous Integration setup using something like Travis CI or Jenkins.
However all is now lost. You can do some testing of your Jupyter Notebooks even though you won’t have the full flexibility that you would get from keeping your code separate. We will look at some ideas that you can use to do some basic testing with Jupyter.
Execute and Check
One popular method of “testing” a Notebook is to run it from the command line and send its output to a file. Here is the example syntax that you could use if you wanted to do the execution on the command line:
jupyter-nbconvert --to notebook --execute --output output_file_path input_file_path
Of course, we want to do this programmatically and we want to be able to capture errors. To do that, we will take our Notebook runner code from my exporting Jupyter Notebook article and re-use it. Here it is again for your convenience:
# notebook_runner.py import nbformat import os from nbconvert.preprocessors import ExecutePreprocessor def run_notebook(notebook_path): nb_name, _ = os.path.splitext(os.path.basename(notebook_path)) dirname = os.path.dirname(notebook_path) with open(notebook_path) as f: nb = nbformat.read(f, as_version=4) proc = ExecutePreprocessor(timeout=600, kernel_name='python3') proc.allow_errors = True proc.preprocess(nb, {'metadata': {'path': '/'}}) output_path = os.path.join(dirname, '{}_all_output.ipynb'.format(nb_name)) with open(output_path, mode='wt') as f: nbformat.write(nb, f) errors = [] for cell in nb.cells: if 'outputs' in cell: for output in cell['outputs']: if output.output_type == 'error': errors.append(output) return nb, errors if __name__ == '__main__': nb, errors = run_notebook('Testing.ipynb') print(errors)
You will note that I have updated the code to run a new Notebook. Let’s go ahead and create a Notebook that has two cells of code in it. After creating the Notebook, change the title to Testing and save it. That will cause Jupyter to save the file as Testing.ipynb. Now enter the following code in the first cell:
def add(a, b): return a + b add(5, 6)
And enter the following code into cell #2:
1 / 0
Now you can run the Notebook runner code. When you do, you should get the following output:
[{'ename': 'ZeroDivisionError', 'evalue': 'integer division or modulo by zero', 'output_type': 'error', 'traceback': ['\x1b[0;31m\x1b[0m', '\x1b[0;31mZeroDivisionError\x1b[0mTrace']}]
This indicates that we have some code that outputs an error. In this case, we did expect that as this is a very contrived example. In your own code, you probably wouldn’t want any of your code to output an error. Regardless, this Notebook runner script isn’t enough to actually do a real test. You need to wrap this code with testing code. So let’s create a new file that we will save to the same location as our Notebook runner code. We will save this script with the name “test_runner.py”. Put the following code in your new script:
import unittest import runner class TestNotebook(unittest.TestCase): def test_runner(self): nb, errors = runner.run_notebook('Testing.ipynb') self.assertEqual(errors, []) if __name__ == '__main__': unittest.main()
This code uses Python’s unittest module. Here we create a testing class with a single test function inside of it called test_runner. This function calls our Notebook runner and asserts that the errors list should be empty. To run this code, open up a terminal and navigate to the folder that contains your code. Then run the following command:
python test_runner.py
When I ran this, I got the following output:
F ====================================================================== FAIL: test_runner (__main__.TestNotebook) ---------------------------------------------------------------------- Traceback (most recent call last): File "test_runner.py", line 10, in test_runner self.assertEqual(errors, []) AssertionError: Lists differ: [{'output_type': u'error', 'ev... != [] First list contains 1 additional elements. First extra element 0: {'ename': 'ZeroDivisionError', 'evalue': 'integer division or modulo by zero', 'output_type': 'error', 'traceback': ['\x1b[0;31m---------------------------------------------------------------------------\x1b[0m', '\x1b[0;31mZeroDivisionError\x1b[0m ' 'Trace']} Diff is 677 characters long. Set self.maxDiff to None to see it. ---------------------------------------------------------------------- Ran 1 test in 1.463s FAILED (failures=1)
This clearly shows that our code failed. If you remove the cell that has the divide by zero issue and re-run your test, you should get this:
. ---------------------------------------------------------------------- Ran 1 test in 1.324s OK
By removing the cell (or just correcting the error in that cell), you can make your tests pass.
The py.test Plugin
I discovered a neat plugin you can use that appears to help you out by making the workflow a bit easier. I am referring to the py.test plugin for Jupyter, which you can learn more about here.
Basically it gives py.test the ability to recognize Jupyter Notebooks and check if the stored inputs match the stored outputs and also that Notebooks run without error. After installing the nbval package, you can run it with py.test like this (assuming you have py.test installed):
py.test --nbval
Frankly you can actually run just py.test with no commands on the test file we already created and it will use our test code as is. The main benefit of adding nbval is that you won’t need to necessarily add wrapper code around Jupyter if you do so.
Testing within the Notebook
Another way to run tests is to just include some tests in the Notebook itself. Let’s add a new cell to our Testing Notebook that contains the following code:
import unittest class TestNotebook(unittest.TestCase): def test_add(self): self.assertEqual(add(2, 3), 5)
This will test the add function in the first cell eventually. We could add a bunch of different tests here. For example, we might want to test what happens if we add a string type with a None type. But you may have noticed that if you try to run this cell, you get to output. The reason is that we aren’t instantiating the class yet. We need to call unittest.main to do that. So while it’s good to run that cell to get it into Jupyter’s memory, we actually need to add one more cell with the following code:
unittest.main(argv=[''], verbosity=2, exit=False)
This code should be put in the last cell of your Notebook so it can run all the tests that you have added. It is basically telling Python to run with verbosity level of 2 and not to exit. When you run this code you should see the following output in your Notebook:
test_add (__main__.TestNotebook) ... ok ---------------------------------------------------------------------- Ran 1 test in 0.003s OK
You can do something similar with Python’s doctest module inside of Jupyter Notebooks as well.
Wrapping Up
As I mentioned at the beginning, while you can test your code in your Jupyter Notebooks, it is actually much better if you just test your code outside of it. However there are workarounds and since some people like to use Jupyter for documentation purposes, it is good to have a way to verify that they are working correctly. In this chapter you learned how to run Notebooks programmatically and verify that the output was as you expected. You could enhance that code to verify certain errors are present if you wanted to as well.
You also learned how to use Python’s unittest module in your Notebook cells directly. This does offer some nice flexibility as you can now run your code all in one place. Use these tools wisely and they will serve you well.
Related Reading
- Testing Jupyter Notebooks
- The nbval package
- Testing and Debugging Jupyter Notebooks
- StackOverflow: Unit tests for functions in a Jupyter notebook? | https://www.blog.pythonlibrary.org/2018/10/16/testing-jupyter-notebooks/ | CC-MAIN-2020-10 | refinedweb | 1,378 | 66.44 |
Delegates and lambdas
Delegates define a type, which specify a particular method signature. A method (static or instance) that satisfies this signature can be assigned to a variable of that type, then called directly (with the appropriate arguments) or passed as an argument itself to another method and then called. The following example demonstrates delegate use.
public class Program { public delegate string Reverse(string s); static string ReverseString(string s) { return new string(s.Reverse().ToArray()); } static void Main(string[] args) { Reverse rev = ReverseString; Console.WriteLine(rev("a string")); } }
- On line 4 we create a delegate type of a certain signature, in this case a method that takes a string parameter and then returns a string parameter.
- On line 6, we define the implementation of the delegate by providing a method that has the exact same signature.
- On line 13, the method is assigned to a type that conforms to the
Reversedelegate.
- Finally, on line 15 we invoke the delegate passing a string to be reversed.
In order to streamline the development process, .NET includes a set of delegate types that programmers can reuse and not have to create new types. These are
Func<>,
Action<> and
Predicate<>, and they can be used in various places throughout the .NET APIs without the need to define new delegate types. Of course, there are some differences between the three as you will see in their signatures which mostly have to do with the way they were meant to be used:
Action<>is used when there is a need to perform an action using the arguments of the delegate.
Func<>is used usually when you have a transformation on hand, that is, you need to transform the arguments of the delegate into a different result. Projections are a prime example of this.
Predicate<>is used when you need to determine if the argument satisfies the condition of the delegate. It can also be written as a
Func<T, bool>.
We can now take our example above and rewrite it using the
Func<> delegate instead of a custom type. The program will continue running exactly the same.
public class Program { static string ReverseString(string s) { return new string(s.Reverse().ToArray()); } static void Main(string[] args) { Func<string, string> rev = ReverseString; Console.WriteLine(rev("a string")); } }
For this simple example, having a method defined outside of the Main() method seems a bit superfluous. It is because of this that .NET Framework 2.0 introduced the concept of anonymous delegates. With their support you are able to create "inline" delegates without having to specify any additional type or method. You simply inline the definition of the delegate where you need it.
For an example, we are going to switch it up and use our anonymous delegate to filter out a list of only even numbers and then print them to the console.
public class Program { public static void Main(string[] args) { List<int> list = new List<int>(); for (int i = 1; i <= 100; i++) { list.Add(i); } List<int> result = list.FindAll( delegate(int no) { return (no%2 == 0); } ); foreach (var item in result) { Console.WriteLine(item); } } }
Notice the highlighted lines. As you can see, the body of the delegate is just a set of expressions, as any other delegate. But instead of it being a separate definition, we’ve introduced it ad hoc in our call to the
FindAll() method of the
List<T> type.
However, even with this approach, there is still much code that we can throw away. This is where lambda expressions come into play.
Lambda expressions, or just "lambdas" for short, were introduced first in C# 3.0, as one of the core building blocks of Language Integrated Query (LINQ). They are just a more convenient syntax for using delegates. They declare a signature and a method body, but don’t have an formal identity of their own, unless they are assigned to a delegate. Unlike delegates, they can be directly assigned as the left-hand side of event registration or in various Linq clauses and methods.
Since a lambda expression is just another way of specifying a delegate, we should be able to rewrite the above sample to use a lambda expression instead of an anonymous delegate.
public class Program { public static void Main(string[] args) { List<int> list = new List<int>(); for (int i = 1; i <= 100; i++) { list.Add(i); } List<int> result = list.FindAll(i => i % 2 == 0); foreach (var item in result) { Console.WriteLine(item); } } }
If you take a look at the highlighted lines, you can see how a lambda expression looks like. Again, it is just a very convenient syntax for using delegates, so what happens under the covers is similar to what happens with the anonymous delegate.
Again, lambdas are just delegates, which means that they can be used as an event handler without any problems, as the following code snippet illustrates.
public MainWindow() { InitializeComponent(); Loaded += (o, e) => { this.Title = "Loaded"; }; } | https://docs.microsoft.com/en-us/dotnet/standard/delegates-lambdas | CC-MAIN-2018-13 | refinedweb | 827 | 64.2 |
Code. Collaborate. Organize.
No Limits. Try it Today.
Frygreen wrote:Do you have any idea?
Frygreen wrote:I moved (copied it to C#)
Frygreen wrote:Sorry,for
Frygreen wrote:(built .net4.0)with
Frygreen wrote:Verything
Frygreen wrote:I get an exception from NUnit and I can not see any code as expected
wizardzz wrote:Misleading title. I thought you were talking about bj's from the wife.
wizardzz wrote:Well, that's what girlfriends are for.
public class Naerling : Lazy<Person>{
public void DoWork(){ throw new NotImplementedException(); }
}
*pre-emptive celebratory nipple tassle jiggle* - Sean Ewington
"Mind bleach! Send me mind bleach!" - Nagy Vilmos
CPallini wrote:don't mind, being so happy,
Nish Sivakumar wrote:Days 22 to ∞ - Mock every other language on this planet*.
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Lounge.aspx?msg=4466558 | CC-MAIN-2014-23 | refinedweb | 154 | 58.38 |
Tips and Tricks to Ace the Certified Kubernetes Application Developer
I recently passed the Certified Kubernetes Application Developer exam and thought to share some tips and tricks that might come in handy if you are also planning to take the exam in the future.
📔 Background
About a month ago, I decided to learn more about Kubernetes as it would be really useful for the stuff I’m working at GitHub daily. Prior to that, I was always fascinated by Kubernetes but never got the chance to work on an actual system that used it. I knew how it worked from a 10,000 feet view, but didn’t have an idea of core components, basic constructs and literally to be able to do anything with it.
Having taken the exam, I’m quite comfortable navigating through Kubernetes and now it makes sense when I’m doing something with it, rather than merely following some commands.
CKAD is a hands-on exam and managing your time is absolutely crucial. I hope you find the following tips useful✌️
🗒️ Summary of the exam
To summarize the key facts about the CKAD exam,
- Passing score is 66%
- 2 hours duration, comprised of 19 questions
- Questions will have varying weights (from 2% - 13%)
- You can also open only one tab to browse Kubernetes documentation
- Remotely proctored
💻 Aliases and bash tricks
This is a really important first that I can’t recommend enough. I was using the full
kubectl command during the study phase but later started using just
k by setting up an alias when I was practising simply to cut down the time when typing commands.
alias k=kubectl
Initially, it will take a few seconds to type this out but it will pay dividends throughout the exam. Here are a few more if you are interested. You don’t need to use everything in here though. In fact, I only used the above alias.
Feel free to mix and match the commands you are comfortable with 👍
alias kd='kubectl describe' alias kr='kubectl run' alias kc='kubectl create' alias ke='kubectl explain' alias kgp='kubectl get pods' alias kgs='kubectl get svc'
You don’t need to be a Linux guru to take the exam, but, remember you will do it in some Linux env. (potentially Ubuntu). So it helps to know a few basic Bash commands if you are coming from Windows.
cp- Copy files
mv- Move/Rename files
mkdir- Create new folder
ls- List files
rm- Remove/Delete files
grep- Search through text. Useful when you want to filter a list of pods. Eg:
kubectl get pods | grep -i status:
Ctrl+R- To do a reverse search to find a command you have previously run
Extra tip: Use short names of resources whenever possible.
Not sure what are the short names? You can check it with
kubectl api-resources command.
⌨️ Get a good grasp of VIM
I found having previous experience in VIM came in handy. However, you don’t need to be a master at it. Using nano would be fine too if you are good.
Take the time to set the following to your VIM profile before attempting any questions.
vi ~/.vimrc
Add the following lines and save it.
set expandtab set tabstop=2 set shiftwidth=2
These commands will save you from having indentation issues and weird syntax issues while working with YAML files during the exam.
Here are some other commands that may be of help if you are not familiar with VIM.
/- Search through text. Also, use
nto go to the next result.
dd- Delete a line
u- Undo
Shift+A- Go to the end of the line and enter the INSERT mode
gg- Go to the beginning of the file
G- Go to the end of the file
o- Go to the next line and enter INSERT mode
v- Enter VISUAL mode. You can select a block of lines with arrow keys or
jand
kkeys. You can copy with
yand paste with
p. Also, you can indent a block with
Shift + >to right and
Shift + <to indent to the left
And finally, while you are in NORMAL mode you can type
ZZ to quickly save and go back to the terminal without having to type
:wq How cool is that? ⚡
☄️ Mastering the imperative commands
You would come across many questions where you would have to create pods, deployments, services etc. In such cases, don’t bother writing up YAML definitions from scratch - or even finding the relevant reference in the k8s docs.
You can save a lot of time by using imperative commands. For instance, if you are tasked to create a pod with
nginx as the image,
tier:frontend as labels with the port
80 exposed:
kubectl run tmp --image=nginx --labels tier=frontend --port 80
Say you are asked to expose a deployment
nginx with a
NodePort service called
nginx-svc,
kubectl expose deploy nginx --name=nginx-svc --port=80 --target-port=80 --type=NodePort
But what if you can’t get everything included in a single command you can use the
--dry-run=client -o yaml > tmp.yaml to export it to a file before creating the resource.
Oh btw, if you need to delete a pod quickly you can use the
--grace-period=0 --force command to quickly delete them without waiting.
kubectl delete po <pod name> --grace-period=0 --force
🤔 When in trouble
Pay attention to the weightage of the question and a rough idea of how long it will take you to solve it. I can remember, I was looking at a question that was quite long and had a fair bit of configuration to be done. But the weightage was only 2% 😆 I marked it down on the provided notepad and skipped it (you can also Flag a question). The next question was 4% and was really really easy! I hope you get the point.
💡 Don’t be afraid to skip and revisit questions.
If you forgot how something is placed in a resource definition, you can use
kubectl explain <resource name> --recursive | less to find what you are looking for.
Another useful tip I can give you is, the
kubectl <resource name> -h command. You can use it like so.
k run -h
☝️ A note on clusters & namespaces
This is also a very important point you should pay attention to. At the top of each question, if you will be given a command to set the current context. Make sure to run it for each question as different questions will be in different clusters.
Another point is, pay attention to any namespaces in the given question text. Sometimes it will be worded within the question. Sometimes it will be at the bottom of the question as a separate note!
In a question where you will have to
ssh into servers please make sure to remember (or note it down) which cluster and server you are in. And remember to
exit out of it before going to the next question.
📄 Leverage the docs
In certain cases, it’s better to visit the docs than to spend time to figure out what needs to be done. For instance, if there’s a question on setting up a Persistent Volume, the question will also have a section to create a Persistent Volume Claim and to create a Pod to use that.
Go to the docs, type
pv at the search bar and click on the link that says “Configure a Pod to Use a PersistentVolume for Storage”. And yes, you need to know where things are at within the K8S docs!
👟 Practice, practice, practice
Speed is key to the exam. Although you get 2 hours, it will just fly! 🦅
When you pay for the exam you will get 2 free mock exam sessions before sitting the real exam.
Here are some more exercises I used.
- [Free]
- [Free]
- [Paid]
- [Free]
👋 Conclusion
Do you know what’s the hardest thing to do after the exam? waiting for the results! 🤣 It might take up to 24 - 36 hours to get your result. Here’s my certificate if you are interested.
I hope you found these tips helpful. Feel free to comment below if you have got any tips and tricks too! Good luck with your exam!!! 🎉 | https://sahansera.dev/tips-and-tricks-for-acing-ckad/ | CC-MAIN-2022-33 | refinedweb | 1,383 | 69.01 |
PH…
18 thoughts on “Santa Knows If Your Contact Form Uses PHPMailer < 5.2.18”
looks like it was maybe possible to insert arbitrary entries into the php ini file, overwriting previous statements .. ?
No ini_set is only a runtime modification, that’s fine and expected.
The problem they are trying to “solve” is that the -f argument added into params is passed onto the sendmail command line by PHP’s mail() function (phpMailer can use either mail() or it can communicate over SMTP directly, mail() is more common).
By not verifying the Sender address is an actual email address it can allow you to inject arbitrary options to the sendmail command line, obviously that’s not great but it’s really not that big of a deal in this case because these additional parameters are passed internally to escapeshellcmd, so there is protection there against allowing arbitrary execution of other commands than sendmail.
( )
“This parameter is escaped by escapeshellcmd() internally to prevent command execution. escapeshellcmd() prevents command execution, but allows to add additional parameters. For security reasons, it is recommended for the user to sanitize this parameter to avoid adding unwanted parameters to the shell command.”
> it’s really not that big of a deal in this case because these additional parameters are passed internally to escapeshellcmd, so there is protection there against allowing arbitrary execution of other commands than sendmail.
However, if you’re actually running sendmail (as opposed to Exim or Postfix or similar with a sendmail wrapper), escapeshellcmd alone won’t protect you here so this *would* become a big deal – as you can coerce sendmail into dumping the mail to an arbitrary file, which in most configurations leads to RCE – basically, the bug in Roundcube from a month or so ago, but in PHPMailer now.
So yeah, if you’re running sendmail, worth going into work on Boxing Day for.
Ahhh, I remember PhP (3.x.x – 4.x.x for me). Once you can create a singleton closure that actually works then the whole language is no longer interesting because you have solved the last of PhP’s problems and that is – it has no scope, everything (functions) is normally in the global scope.
Then you write API’s to white list commands and functions and put the API on a different server and use Apache directives to IP lock the whole thing so that you API is secure, problem is that the server it’s running on has holes in the operating system so hackers get to it anyway.
PhP has probably done more for the internet (www) as it is today than any other language but it has become a workhorse in that security takes second place to new refinements and features. It is basically a OOP that based on procedural code.
Thanks for your very bad and almost illegible rough draft of what PHP is, even though no one asked for it and you would assume most people here know what it is. It’s almost like you just like hearing yourself talk about technical subjects,do you have a lot of IRL friends/acquaintances ?
Well if you don’t like it then offer a *good* and *intelligible* response or is it just that *you* “like hearing yourself talk …”
Just to give you some idea of what you called “almost illegible” as it *seams* that you don’t php.
function wrapper()
{
function operator()
{
return 3;
}
}
operator(); // returns error as function has been created yet
wrapper();
$var = operator(); // assigns the value 3 to the variable $var
What this means is that once created the function named operator() exists in the global scope and is not limited to the scope of the function named wrapper() so at *any* point (or depth) of scope code can litter the global function namespace.
To fix this problem you need to create a function that is –
1) A singleton. It can’t be duplicated, cloned, or re-instantiated
2) A closure. It cannot get any access to or modify the global scope. It is completely limited to it’s own specific scope and cannot return anything except a specified parameter of a specified type.
Within a singleton closure you can even allow code execution because of it’s isolation (by scope) from other code.
API- written in php
An Application Programming Interface (API) essentially provides a means of white-listing operations/functions/commands as opposed to the normal and far less secure method of black-listing activities. Most of the more secure websites use an API and hide or obfuscate the address (IP or namespace) of the actual (Application) server. The web site that you actually visit is just a front end and doesn’t contain any sensitive data etc. On an even more secure system there is another step that separates the API server from the database server.
While these techniques hugely increase security, they are still vulnerable at the operating system level which is how hackers break things anyway.
Quote: “PhP has probably done more for the internet (www) as it is today than any other language”
www: World Wide Web, HTTP, websites
internet: TCP/IP, www, instant messaging , video conferencing, email etc.
I deliberately misused the the incorrect term internet and then put the correct term www in brackets as most people today uses these two terms interchangeably and are unable to discern the two. This was not intended to be in anyway unintelligible.
PHP should never really have been born. It has this feeling that it was developed by kids reading a text book without really understanding anything behind what they do used by other kids that saw PHP for Dummies on a shelf somewhere.
Every damn time I’m asked to audit PHP, there’s always something insanely wrong. The problems exist with every language I ever worked with, but PHP (and MS Visual Basic) seem to amplify the problems.
I once had a senior PHP developer give me a blank look when I asked him why he wasted nearly a year writing and tweaking over 2k lines of validation code on shitty data stored in a database (to give an idea of the scope here, the SD was storing phone numbers as 32bit ints but unique keys as ASCII then truncating the 7 characters down to 5 before searching) Apparently the concept of uh… validating data *before* storing it seemed to have escaped him entirely.
P.H.P.
Pretty Hard to Protect
u kiidin i hope
Note that something being a valid email address does not mean it can’t contain something bad.
It is even possible to write x86 binary code using only alpha-numerical characters.
Given that the x86 architecture will attempt to execute just about any opcode, it’s not a huge stretch to get strings to execute. Probably why more and more programs and websites are incorrectly filtering emails. Easier to blindly filter emails than it is to just encapsulate and avoid executing.
PHP…
The PHPMailer < 5.2.18 Remote Code Execution Proof of Concept exploit is on GitHub Now
Amazing how hard it is to get details on what the actual vulnerability is…
And if it’s really the diff you show here, then it’s really not as big of a deal as the fuss it’s getting. The “SMTP sender” address is usually entered by admins and not just any users. And anywhere that it was entered by users, failure to work with spaces should have been an obvious indication of a problem even without precise example of how it can be exploited, so a validation or escaping of some sort was probably added in the calling code. And anywhere that it was not added, deserves to be hacked for stupidity.
These sorts of problems are most often found by sys-admins or sys-ops and if you are one of these people then you won’t make any friends by publicly releasing the details about a vulnerability before most others have patched the problem.
Sure, on a dedicated server, the sys-admin will fill in things like sender address but 99% of websites (and SMTP servers) are shared servers today so the from address has to be under the control of the hosting account holder who is probably not as aware of the traps.
Once a SPAMer gets a hold of a problem like this they will hammer the server resource hard and that effects all the clients on a shared server. Firstly the SMTP server will be black listed and that effects all users on a shared server and not just one hosting account. Then there is the (probably) less significant resource burdening.
I don’t know how it is today but the ‘SPAMMer vs sys-op’ scenario played out for a decade and last I knew the latest weapon the sys-op had was SPF records attached to the DNS zone info.
As usual this exploit is being oversold in this new world of security branding as people try to make names for themselves. This exploit requires the specific condition of the attacker having access to unfiltered input to the setFrom() command provided by PHP Mailer. | https://hackaday.com/2016/12/25/santa-knows-if-your-contact-form-uses-phpmailer-5-2-18/ | CC-MAIN-2019-39 | refinedweb | 1,534 | 53.55 |
int paint_primitive_rgb ( char *path, int id, RGB rgb, int mode )
char *path; // path to object containing primitive
int id; // ID number of primitive
RGB rgb; // rgb color to use
int mode; // parameters for painting
Synopsis
#include "silver.h"
The paint_primitive_rgb function draws the specified primitive in the specified rgb. The primitive is designated by path , which is the name of the object that contains the primitive, and id , which is the ID number of the primitive primitive.
Parameters
path is a null-terminated string containing the full path name of the object containing the primitive. id is the ID number of the primitive in the object at path . rgb is the rgb color used to draw the primitive. Mode may have the following or’ed values:
Value
Meaning
COLOR_WIDE
The specified primitive is drawn in wide lines.
COLOR_INVERSE
The specified primitive is drawn by xor'ing the screen with the color (and width) lines specified, and the lines drawn are saved away, where they may be xor'ed back by refresh_lines .
Return Value
paint_primitive_rgb returns SS_SUCCESS (0) if the spcified primitive exists, and SS_NOTFOUND (-1) otherwise.
See Also
paint_primitive, paint_entity, paint_entity_rgb
Help URL: | http://silverscreen.com/paint_primitive_rgb.htm | CC-MAIN-2021-21 | refinedweb | 193 | 52.39 |
In this post we will explain how to insert data into a table with Flask postgreSQL application. We will use Insert SQL statement to do this.
Installing PostgreSQL
To create a Flask PostgreSQL web application you need to install PostgreSQL in your system. You can click here and download PostgreSQL and install. You need to download a stable version which is compatible with FLASK version you are using to develop application. In the example we have used PostgreSQL (postgresql-10.13-1-windows-x64) with Flask 1.1.2. After installing PostgreSQL you have to set it up by defining a password. This password is required when you will connect the Flask Website with PostgreSQL database.
Create the local Website
Prepare your website folder following the steps in this post.
Installing SQLAlchemy
To install SQLAlchemy in windows, use pip install SQLAlchemy. Open command window using cmd. Open your project folder using CD DOS command and run the pip command.
Create the Database Table
Once you install PostgreSQL you can create tables in two ways- command line or GUI interface launched with pgAdmin available when installed PostgreSQL as shown in the diagram below.
If you have used Microsoft Access, SQL Server, Oracle or MySQL you will be easily able to create the database.
Here you can create a new database for your application. On Databases right click and select Create Database and specify name and other details of your database. Once you create database, open the schemas group and create table there. You can create tables in two ways-
- Right click on Tables and open table creation dialog box by choosing Create Table
- Create it by using the Query Tool . Right click on tables and write the create table statement.
CREATE TABLE public.booklist ( bookid integer NOT NULL DEFAULT nextval('booklist_bookid_seq'::regclass), isbn character varying COLLATE pg_catalog."default" NOT NULL, title character varying COLLATE pg_catalog."default" NOT NULL, author character varying COLLATE pg_catalog."default" NOT NULL, year integer NOT NULL )
Creating Application.py
Once you have created the database, next step is to set up the database information in Python Flask application.py file so that data can be stored and accessed data from DB tables.
Include these import statements at the top of this file
from sqlalchemy import create_engine from sqlalchemy.orm import scoped_session, sessionmaker
Set up database engine in application.py with these statements-
engine = create_engine("postgresql://postgres:yourpassword@localhost:5432/yourdatabasename") db = scoped_session(sessionmaker(bind=engine))
In the first statement the username is postgres. Colon (:) is followed by the password you gave while installing PostgreSQL. After @ you need to specify the server and port. Since you are creating a web application at local server, most of the times the value given here will work. If it doesn’t, check the browser address bar of pgAdmin to get the database server details and change it in application.py. Lastly replace yourdatabasename with the name of the database your have created.
In the application.py file after app = Flask(__name__) statement
app.secret_key = '12345678' ' this key is used to communicate with database. #Configure session to use filesystem app.config["SESSION_PERMANENT"] = False app.config["SESSION_TYPE"] = "filesystem"
Create the Data Entry Form
The data entry form is created by using HTML form tags. We have created a table with following fields to store booklist table details.
This form is created (addbook.html)
<form action="{{ url_for('bookadd')}}" method="post"> <h5>Enter the book Details</h5> <div class="form-group"> <input class="form-control" type="text" name="isbn" placeholder="ISBN"> </div> <div class="form-group"> <input class="form-control" type="text" name="title" placeholder="Title"> </div> <div class="form-group"> <input class="form-control" type="text" name="author" placeholder="Author"> </div> <div class="form-group"> <input class="form-control" type="text" name="year" placeholder="Year"> </div> <div class="form-group"> <button class="btn btn-primary">Submit</button> </div> </form>
Add a route in application.py to access this HTML form in browser
@app.route("/addbook") def addbook(): return render_template("addbook.html")
Add route and code to Insert data in Table
The last step is to add the code that saves the data in the booklist table. INSERT statement is created using the data submitted from the addbook page with POST method.
@app.route("/bookadd", methods=["POST"]) def bookadd(): isbn=request.form.get("isbn") title=request.form.get("title") author=request.form.get("author") year=request.form.get("year") db.execute("INSERT INTO booklist (isbn, title,author, year) VALUES (:isbn, :title,:author,:year)", {"isbn": isbn, "title": title, "author":author,"year":year}) db.commit() return render_template("addbook.html")
In the next post you will learn how to display the saved data in a tabular form.
Be First to Comment | https://csveda.com/flask-postgresql-insert-data-into-table/ | CC-MAIN-2022-33 | refinedweb | 783 | 50.12 |
Created on 2013-01-03 16:41 by wdanilo, last changed 2016-06-04 22:36 by berker.peksag. This issue is now closed.
Hi! I think this behaviour is bug. Lets concider the following code:
import inspect
class X(object):
def a(self):pass
def b(self):pass
def c(self):pass
print(inspect.getmembers(X, predicate=inspect.ismethod))
print(inspect.getmembers(X, predicate=inspect.isfunction))
In python 2.7, the results are:
[('a', <unbound method X.a>), ('b', <unbound method X.b>), ('c', <unbound method X.c>)]
[]
and in Python 3.2:
[]
[('a', <function a at 0x1b0fd10>), ('b', <function b at 0x1b0fe20>), ('c', <function c at 0x1b160d8>)]
I think, the results from python 2.7 are correct.
I think that's expected and by design. In Python 3 there are no unbound methods, but simply functions:
>>> class X:
... def add(a, b): return a+b
...
>>> add = X.add
>>> add
<function add at 0xb740d26c>
>>> add(3, 4)
7
>>> def add(a, b): return a+b
...
>>> add
<function add at 0xb740d22c>
>>> add(3, 4)
7
As you can see there's no real difference between the two "add".
It's different though with bound methods (obtained from an instance rather than a class):
>>> add = X().add
>>> add
<bound method X.add of <__main__.X object at 0xb740e0ec>>
The documentation is also clear that ismethod() "Return true if the object is a bound method written in Python.". Maybe an additional note can be added to state that "unbound methods" are not included, and that are instead recognized by isfunction().
In my opinion, the Python 2.7 results are wrong.
In Python 2.7, inspect.ismethod returns True for both bound and unbound methods -- ie., is broken according to the documentation. As a workaround, I'm using:
def is_bound_method(obj):
return hasattr(obj, '__self__') and obj.__self__ is not None
is_bound_method also works for methods of classes implemented in C, e.g., int:
>>> a = 1
>>> is_bound_method(a.__add__)
True
>>> is_bound_method(int.__add__)
False
But is not very useful in that case because inspect.getargspec does not work for functions implemented in C.
is_bound_method works unchanged in Python 3, but as noted above, in Python 3, inspect.ismethod properly distinguishes between bound and unbound methods, so it is not necessary.
I agree that the docs for inspect.ismethod() for Python 2 are wrong.
The docs say: "Return true if the object is a bound method written in Python."
However, it also returns True for an unbound method:
>>> class A:
... def meth(self):
... pass
...
>>> A.meth
<unbound method A.meth>
>>> import inspect
>>> inspect.ismethod(A.meth)
True
I checked the tests on 2.7 and found this:
# contrary to spec, ismethod() is also True for unbound methods
# (see #1785)
self.assertIn(('f', B.f), inspect.getmembers(B, inspect.ismethod))
#1785 also has some discussion about this.
This patch fixes Python 2.7.
New changeset 813a0e0934ce by Victor Stinner in branch '2.7':
Fix inspect.ismethod() doc
Thanks Anna, I pushed your doc fix.
Can you please sign the Python Contributor Agreement?
New changeset a90b39aa6af4 by Victor Stinner in branch '2.7':
Issue #16851: Add Anna Koroliuk to Misc/ACKS
Sounds like this can be closed? | https://bugs.python.org/issue16851 | CC-MAIN-2018-47 | refinedweb | 531 | 70.7 |
Tim Berners-Lee on the Web 224
notmyopinion writes "In a wide-ranging interview with the British Computer Society, Sir Tim Berners-Lee criticizes software patents, speaks out on US and ICANN control of the Internet, proposes browser security changes, and says he got domain names backwards in web addresses all those years ago."
Finally! (Score:5, Funny)
Re:Finally! (Score:2)
Sir Tim (Score:5, Insightful)
I found this amusing, along the lines of "there are those who call me.... Tim."
Seriously though, I thought he had some great things to say about professionalism in IT. We all need to absorb and remember this:
Re:Sir Tim (Score:2)
Re:Sir Tim (Score:2)
I found this amusing,
I found it saddening against the recent UK Honour scandal [bbc.co.uk].
If Sir Tim [w3.org] was viewed as a member of traditional sphere such as Law, Economics, Education he would be Lord Tim [parliament.uk].
His work [w3.org] has changed the world in all of those traditional spheres.
The whole interview content, our agenda would, would gain real traction in the second house of a G8 Nation.
Looking back... (Score:4, Funny)
I would have skipped on the double slash - there's no need for it. Also I would have put the domain name in the reverse order - in order of size so, for example, the BCS address would read: [org.bcs]. The last two terms of this example could both be servers if necessary."
He could do anything differently and he would drop a slash?
Re:Looking back... (Score:2)
Har har.
C//
Re:Looking back... (Score:5, Interesting)
Re:Looking back... (Score:2)
Re:Looking back... (Score:2)
Re:Looking back... (Score:5, Insightful)
Re:Looking back... (Score:5, Insightful)
Re:Looking back... (Score:2)
Please tell me no-one actually writes like that.
Re:Looking back... (Score:2)
Very confusing for people from other countries (although when the middle date is >12 at least you get to wonder something might be wrong).
Another problem I've had is signs with things like "Parking prohibited between Nice Friday and President's Holiday" (or whatever vacation days they have over there and expect that everyone have comitted to memory). Apparently using plain dates is a big no no, even in middle-endian format.
Re:Looking back... (Score:2)
Re:Looking back... (Score:2)
Re:Looking back... (Score:2)
road signs (Score:2)
Each state has its own convention for road signs and traffic systems (loosely based on federal standards.) Some states are downright awful (Massachusetts, New Jersey) and other states are really good. (My Ohio for instance goes out of its way to make road signage detailed and clear.)
Depends where you go.
Re:Looking back... (Score:2)
You can't use dates for many holidays because they're not observed on the same date every year.
Re:Looking back... (Score:2)
Ah well, I guess every country is entitled to its little weirdnesses...
Re:Looking back... (Score:2)
So it's even more convenient to use those as delimiters for a time period instead of dates ! Briliant !
Well, I suppose in some cases the reason for having the restrictions is related to the holidays, so using specific dates wouldn't work. I can't think of an example, but that's okay, because I've never seen any signs like that, either.
Re:Looking back... (Score:3, Interesting)
Re:Looking back... (Score:3, Funny)
Re:Looking back... (Score:2, Funny)
Re:Looking back... (Score:4, Insightful)
I wish more apps had a "web ordering" mode for sorting directories, files, or bookmarks. I think there was a version of Firefox with that, but the current build I'm using doesn't seem to have it.
One reason is that it's easier to sort, since right now the server name goes from most detailed to least, while the directory structure behind it goes from least detailed to most. If you're a programmer, it's much easier to work with consistent ordering.
Another is that it makes organization of sites with many subdomains easier, especially sub-sub-domains. Imagine sorting through
africa.news.search.com
americas.news.search.com
art.some.edu
asia.news.search.com
cs.some.edu
europe.news.search.com
linux.cs.some.edu
linux.search.com
ms.cs.some.edu
news.search.com
news.some.edu
physics.some.edu
search.com
store.search.com
store.some.edu
As
edu.some.art
edu.some.cs
edu.some.cs.linux
edu.some.cs.ms
edu.some.store
edu.some.store
edu.some.physics
com.search
com.search.store
com.search.linux
com.search.news
com.search.news.africa
com.search.news.americas
com.search.news.asia
com.search.news.europe
Re:Looking back... (Score:2)
the key problem (Score:2)
HTTP urls are essentially formatted as a file path with a dns name as one component so the top level name ends up somewhere in the middle and if the hostname is long potentially quite hard to spot.
Re:Looking back... (Score:2)
org.dotslash? org.slash?
Re:Looking back... (Score:3, Funny)
Re:Looking back... (Score:5, Funny)
There is a reason for the double slash. The double slash says it's the traditional format. The single slash signifies the domain name extension should go first. In the new-Berners-Lee format...
For example.
http:/org.slashdot
Should both be allowed addresses. They aren't. But, because he did a double slash in the beginning we could actually flip the extention order and drop the slash and it wouldn't be confused with the original format. See, Sir Tim is such a foward thinker he added a worthless slash to save the day years later!
Least specific to most specific (Score:2)
Why not no slash? http:org/slashdot [org]. Much like mailto:foo@example.org [mailto] (or would that be mailto:foo@org/example [mailto]). Or aim:do_something_really_annoying, bittorrent:linux.iso.torrent, irc:freenode.org/#debian.
The good thing about going from least specific to most specific is that it's easy to chop off unnecessary data. In dates for example, "the 25th of March, 2006" is a mouthful to say. But saying just "the 25th" is sufficient because one can assume the month is March. Or if not, "the 25th of March" is enoug
domain name... (Score:2)
From the Article... (Score:2, Funny)
But how could you make a jingle out of ... (Score:5, Insightful)
But how could you make an advertising jingle out of
"com dot expediAAAAAAHHH!"
A true Brit. (Score:5, Informative)
So the idea that he started off having trouble with the Berkeley naming convention doesn't surprise me at all.
(I'd prefer a more heirarchical system, myself, where an organization can ONLY have one domain name and have all their actual addresses inside of that. It would make the namespace a lot less cluttered and would reduce trademark abuses. On the other hand, names would be a lot longer. However, if you're using a search engine, a portal or bookmarks most of the time anyway, that's no big deal.)
Re:A true Brit. (Score:3, Interesting)
If you're going to use bookmarks, portals and search engines anyway, why not leverage them fully and make all names/identifiers collision-free cryptographic names. Trademark problem: solved permanently.
Re:A true Brit. (Score:5, Funny)
In fact, every machine on the internet could be given a unique 32 bit number. Then you could connect to it using that number as the name. That would be awesome!
Re:A true Brit. (Score:3, Funny)
*gasp* 128-bits? Is that wise?
What's the matter, Colonel Sanders? Chicken?
(No, I have no idea why that popped into my head)
Re:A true Brit. (Score:2)
the trouble with using ip addresses directly is they are too close to the physical network infrastructure and as such not very portable (unless you own a very large private block.....).
also combined with name based virtual hosting using domain names allows sites to be combined onto one server and later split up again if nessacery without huge wastage
Re:A true Brit. (Score:2)
You do. My email address used to be [user]@uk.ac.swan.pyr
That was a boon to to us mudders, though. You could connect directly from the PAD in each terminal room to a MUD on JANet, without having to log on to an i
Re:A true Brit. (Score:3, Interesting)
The X25/X29 PAD addressing thing was very much akin to using the Internet without a DNS, that's all. A PAD was merely a terminal server which gave you a command line access. I've used TCP/IP terminal servers which were very similar.
The naming convention used in the UK for e-mail (which was supported long after the transition to TCP/IP) was purely that, an e-mail address convention. At the time it was decided upon the ARPAnet were making their own deci
Heh! (Score:2)
DNS often
Re:Heh! (Score:3, Insightful)
Who remembers it? Just Google the movie title. If it doesn't come up in the first 5 hits, add "IMDB" or "Tomatoes" to the search string, which should get you the IMDB and Rotten Tomatoes pages on the film respectively, either of which will have the link to the "official" site. The whole reason Google is successful is that the name of the most relevant website is rare
Re:But how could you make a jingle out of ... (Score:2, Interesting)
Re:But how could you make a jingle out of ... (Score:5, Funny)
Re:But how could you make a jingle out of ... (Score:5, Funny)
It's fun, it's naughty! Catholic.org! Nun.org! Starving-Panda.org!
Re:But how could you make a jingle out of ... (Score:2)
Re:But how could you make a jingle out of ... (Score:2)
But how would:
com.ebay/
com.amazon/
org.slashdot/
have been easier to remember? Or really easier technically overall?
On a second thought, it would have been:
org.dotslash/
But still.
TLDs (Score:5, Insightful)
at least someone realises this.
If i had my way i'd redo the whole domain system; the distinctions between TLDs are totally irrelevent these days.
That or enforce the distinctions, so that only ISPs can have
Won't work (Score:5, Insightful)
at least someone realises this.
If i had my way i'd redo the whole domain system; the distinctions between TLDs are totally irrelevent these days.
That or enforce the distinctions, so that only ISPs can have
The purpose of a domain name is to make it easy for poeple. Computers don't care, they use IP addresses and the DNS is simpy a way to make easy to rememeber names that are automatically converted to IP addresses by software.
There is no taxonomy or more correctly, ontology, behind domain names. They're arbitrary strings of characters. There is no meaning whatsoever in the TLD, that's sad articfact of the way things were; they should not ideally have any meaning.
NSI under the original Internic cooperative agreement tried for many years to enforce the
TLDS should be meaningful, but arbitrary. And pretending any sort of classification system can me made out of it belies two decades of expereince with the way we name computers on the network.
Sir Tim may be a Sir but he's dead wrong about this expansion of tld space. Would you find it easier to remember (and yes, there are times you'll rememeber and type in, instead of looking something up in a search engine) company.biz or perhaps company.info because that was available when perhapes the only thing available in
Typically the internet solves problems of scarcity (.com names) by creating new resources, not by regulating old ones.
Re:Won't work (Score:2)
? um, yes, there is. it's just that no-one adheres to it (myself included).
Before you answer, wonder if there's any non-arbitrary relationship between the proposed
"Would you find it easier to remember...."
I think what STBL is suggesting is a complete rewrite of the way DNS works, according to his semantic web vision. Perhaps search engines would be a thing of the past, perhaps URLs would (though that's unlikel
Re:TLDs (Score:2)
I wouldn't want it taken away because I'm not able to use it for its intended purpose at this time. There's no guarantee someone else would be as nice about it.
Re:TLDs (Score:2)
Re:TLDs (Score:2)
Re:TLDs (Score:2)
Re:TLDs (Score:2)
Oh, man. My boss got sucked up in the hype around that and had me (over my objections) enter the lotteries, sometimes several times through different domain name services, for a dozen variations on our company name, plus a bunch of other words somewhat related, for both
.info and .biz. All of the ones that we won he has now let lapse.
Thousands of dollars spent for nothing.
The onl
A sad case of marketing anti-genius (Score:5, Interesting)
The following story is true, though extraordinarily sad.
At the company where I used to work, they registered all TLDs for their name. We had
.com, .net, .org, .biz, etc.
One day, our chief marketing goober decided that
.biz was going to be the next "in" thing on the Internet, and we would be one of the first companies to capitalize on it. So we had all of our business cards chaged, our mailers, our letterhead... everything. We were explicitly told never to use the .com domain name in our business dealings, it was .biz. We, the IT gurus, begged and implored them not to do this, that it would cause more trouble in the end than it was worth, and that the only companies that use .biz are fly-by-night companies that grab the .biz equivalent of famous .com names so that they can rip people off.
Who do you think they listened to?
Long story short: Within a few months, after our customers, suppliers, vendors, and lots of other really, really important people started complaining that their e-mails to us were bouncing back and e-mails from us were not being received because spam blockers were automatically assuming that our
.biz address either weren't valid, our chief marketing goober decided to "spend more time with his family," our old business cards, letterhead, etc. was dug out, and we were instructed never to use the .biz domain name again.
Re:A sad case of marketing anti-genius (Score:3, Interesting)
Sorry, I forgot, they're management. Survival of the skinniest and hardest-working, then. Yanno, like the fable of the ant and the grasshopper.
Already exists (Score:2, Funny)
Re:TLDs (Score:2, Funny)
Aw shucks, they noticed!
just a question here (Score:2)
Do joo mean the symantec web? (Score:2, Funny)
startkeylogger
Re:just a question here (Score:2)
Re:just a question here (Score:2)
Re:just a question here (Score:2)
Oh yeah, and tag spam.
'Duh' Browser security (Score:5, Interesting)
"Most browsers have certificates set up and secure connections, but the browser view only shows a padlock - it doesn't tell you who owns the certificate."
I still can't believe that, to this very day, there is no major browser that displays the right information about a certificate by default! This is the whole point of a certificate: it tells you that paypal.com actually belongs to a real-world entity named "PayPal Inc."
At the very least, when connected via SSL to a site with a valid cert, the browser address bar should have an extra line that names the real-world entity. A yellow padlock and location bar tell you nothing about who you're really talking to. You shouldn't have to manually examine the certificate to find out this information.
Does anyone have any idea why even Firefox, with all its other great usability and security innovations, still gets this basic thing wrong??
Re:'Duh' Browser security (Score:2, Informative)
Re:'Duh' Browser security (Score:2)
Re:'Duh' Browser security (Score:2, Informative)
Re:'Duh' Browser security (Score:5, Insightful)
This makes it a major pain when you just want to encrypt data without claiming to be anyone in particular, since you have to jump through a lot of hoops both on server and client side to get it working. The browser gets bitchy about a certificate that isn't signed by any of its roots, even though it may very well be the case that nobody cares.
If we clearly thought about these two aspects, and separated them, it would become clear that A: we need a better way to just say "secure the damn connection" without claiming to be anybody and B: When a site is claiming to be somebody, it hardly makes sense to not show the claim clearly to the user. But since the concepts are all mushed up, you get a lock icon that sort of covers half the situation, mostly, and few people really realize there's a problem.
Re:'Duh' Browser security (Score:3, Insightful)
It is no point in having a secure connection to a person you do not know who is.
You cannot know if you are talking to a man in the middle or you are actually talking to the man you want to be communicating with.
To get the ww2 version of this:
You got an ubersecure connection with a german spy which got an ubersecure connection to the man you think you are communicating with. Then the german spy can listen in and you nor the person you
yea, you need a secure path of transmission (Score:2)
But the parent is somewhat right too, because actually you would first have to make sure that you have correctly established the identity of the root key-signing enitity over a secure handshake, which often is not the case.
On the other hand, with an extended web of trust, man in the middle attacks
Re:'Duh' Browser security (Score:3, Informative)
We made a mistake back in the day.
We made many mistakes, but this wasn't one of them.
Certificates are serving two purposes: One is to encrypt the data, one is to verify identity.
Those two purposes are the *same* purpose. There is a distinction here, but you're drawing it in the wrong place.
SSL-sytle secure connections do two things: Encrypt data and authenticate data. After establishing session keys, the data that is sent both directions is encrypted and has cryptographic authentication codes (
Re:'Duh' Browser security (Score:5, Insightful)
That's a good explanation and it's accurate. It does have a hidden assumption though.
A lot of security analysis takes as an axiom that the threat is an intelligent and determined adversary who will crawl in through any weakness. That axiom may seem self-evident because of infosec's military heritage: if your opponent is willing to hire Alan Turing and invent the digital computer in order to read your ciphertext, you daren't leave any chink in your armor.
If you're a civilian and willing to gamble that you'll only be a random target and that your opponents will always go for the softest targets, then you might decide on a self-signed certificate. You might believe that sniffing Internet traffic is so much easier than running a man-in-the-middle attack that you could just take your chances on MiTM.
You'd be wrong in today's environment, though. Phishing means you really have to worry about who a public key really belongs to. Not that certs are helping very much.
Quite a few people are proposing a compromise trust model like ssh has, where the browser UI would change so as to warn you when you're about to encrypt to an unexpected public key.
Re:'Duh' Browser security (Score:2)
Quite a few people are proposing a compromise trust model like ssh has, where the browser UI would change so as to warn you when you're about to encrypt to an unexpected public key.
This model has some good things going for it, but I don't see it as very useful for stopping phishing.
Phishers don't use the same domain name as the legitimate site. So the browser won't warn you "the key for paypal.com has changed! danger!" If the phisher bothers to self-sign at all, at most the browser will say "you're talki
Re:'Duh' Browser security (Score:2)
Re:'Duh' Browser security (Score:2)
mod down -1 wrong (Score:2)
right so you've got yourself a nice encrypted connection to the man in the middle. You need some mechanism to tell you that the person you think you are linked with is
Re:'Duh' Browser security (Score:2)
Yeah, but that's the job of the CA: to filter out bogus domain names and entity names. Granted, the CAs don't do their jobs too too well, but I think they would refuse such a blatant fraud.
Re:'Duh' Browser security (Score:2)
Bloody spreadsheets (Score:2)
Tell me about it.
well, he got it wrong again (Score:5, Interesting)
if the server name isn't going to be the name of a server, then you can do this:
and now everything is a hierarchical pathname that is resolved to a fqdn internally and nobody needs to worry that bcs.org.uk is a node on the network and members is a service on that node...
add it to the pile of big-woops! ideas along with ken thompson's anally elided 'e' in "creat()"...
Re:well, he got it wrong again (Score:2)
And now the browser can't figure out which server to contact to get the content without recursing down the tree asking stupid questions. And you can't contact the subsidiary sites if the top-level site is down.
-scott
Re:well, he got it wrong again (Score:2)
Basically that proposal means you're not sure whether
means you're looking for
Or
So what do you query the DNS server for? Or do you make multiple queries to the DNS servers?
You can disambiguate things by adding some stuff, but really its a waste of time.
As for Tim's proposal, while that could work, I like being able to copy part of a hostname, modifying it a bit and then using ping, ssh etc on the res
Re:well, he got it wrong again (Score:2)
And it would be extremely useful. You could add a new server for each directory with very little effort.
With Slashes we could drill down... (Score:2)
(p.s. please ignore that slashdot finds links for these
examples) [co] [tld]
for example, calcula might be found at: ex.html [us]
(and as it is currently "/index.html" is the default and might be omitted)
Mix of Complex and overly simple (Score:2)
... which is? (Score:3, Insightful)
Just think - there would be no "Dotcom Industry" (Score:2)
Re:Just think - there would be no "Dotcom Industry (Score:2)
Re:JACK ASS (Score:5, Funny)
Re:JACK ASS (Score:5, Funny)
Doesn't make Henry Ford a good driver...
Re:JACK ASS (Score:2)
Re:JACK ASS (Score:2)
Ya know, I just had this discussion with someone. The person was basically trying to justify something an elementary school teacher said to the students that she knew was incorrect. I was really offended by the fact that a teacher knowingly misled her
Re:JACK ASS (Score:5, Insightful)
Re:JACK ASS (Score:2, Funny)
Re:JACK ASS (Score:2)
Re:Finally! (Score:2)
Sorry, for that address, I'm either thinking of walmart or electronics first, and com is definitely last.
Search is a better match for how we think. Which is to say that we don't think in an hierarchy like "Hmm, a company
Re:Finally! (Score:2)
Good example, and that is why I disagree (Score:2)
Then [walmart.com] makes a lot more sense, because you see that you are where you are looking for.
Now: [com.walmart.electronics]
Most people don't really care whether it is com or org. It also doesn't play nice with autocompletion
Moreover, | https://slashdot.org/story/06/03/24/1920230/tim-berners-lee-on-the-web | CC-MAIN-2017-22 | refinedweb | 4,101 | 63.49 |
Don't Tell Them its REST
- |
-
-
-
-
-
-
Read later
My Reading List
Don't advertise your API as as being RESTful. This is one piece of advice from API developers on a recent episode of the Nodeup podcast - an APIs show. The podcast featured a conversation between Daniel Shaw engineer at Voxer and host of Nodeup, James Halliday founder of Browserling and guests Mark Cavage and Andrew Sliwinksi. Cavage is a software engineer at cloud infrastructure provider Joyent and author of the Node.js package Restify. Joyent owns Node.js and uses it extensively. Sliwinski is Co-founder and CTO of children's experience website DIY.
Kicking off with API design, the conversation meanders through key concerns of API development with Node.js, including the value of REST, security, testing, documentation, schemas and streaming. The favourite API design method is to "start with the README". Approach API design like you are building a user interface. Cavage refers to the way Amazon.com approaches product development by "writing the press release first." He adds that you want to build a minimal initial API and then evolve it based on usage experience. Don't attempt to include too many features up front.
The value of REST is regarded as mixed. The problem with advertising your API as being RESTful is that it "acts as troll bait" according to Shaw. People get too hung up and pedantic about REST conformance. More important instead is to be pedantic about HTTP because that is the common underlying protocol. Cavage's advice is that HTTP is "the interface you want to use at the firewall" but internally you can revert to whatever more efficient data formats and transports make sense, noting that "HTTP comes with a cost." He cited dnode and fast as example rpc frameworks that they use in back-end infrastructure. add origin authentication, message integrity and replay resistance to HTTP requests. Developed at Joyent, HTTP Signatures has recently been submitted to the IETF.
All agreed that testing should be based on real systems and data and that "mocking" is a bad practice. The problem with mocking is that your tests are detached from reality, explains Halliday. Sliwinski describes how at DIY they populate a local database with live data to perform continuous integration testing without resorting to mocks. Some of the test tools used include Nodeunit, Tap and Travis CI.
On the topic of documentation, Cavage jokes that WSDL earns "demerit points". At DIY they write the API documentation first in a JSON specification format that drives an interactive documentation webpage. Cavage describes how Joyent uses Restdown to document their APIs and keep that documentation in the repo where they host the API. Keeping documentation in-line with the API is an important challenge and all agree that they prefer to have interactive documentation with sample data.
Schemas are another problematic area. Two main problems with schema validators are that they are slow and they throw obtuse errors, explains Cavage. But on the other hand doing things by hand without formal grammers is risky. DIY uses JSON Schema but that comes with trade-offs. The standard is still evolving and there are no stand-out Node.js implementations of JSON Schema validators. Sliwinski says they've had some luck with JSV but have had to write their own wrappers to catch and reinterpret those obtuse error messages.
The podcast includes a lot of discussion around streaming APIs which take advantage of Node.js' non-blocking I/O architecture. Halliday explained that streaming JSON in APIs makes a lot of sense to avoid buffering and latency. Cavage notes that support for streaming JSON is one of the main reasons to use Restify over the alternative package Express. Caution was expressed about the use of Node.js middleware with streaming JSON. Cavage likes to use middleware such as Connect to handle "userland complexity" such as sessions and client vagaries but says you pay for that with the loss of streams. Core Node.js without middleware is best for native handling of streams.
But back to the question raised at the begining of this item. Is it safe to advertise your API as being RESTful, or does that just encourage bad behaviour from the pedants? Is REST a standard to be followed or is it a set of constraints? Where do you draw the line between RESTful and not? We'd like to hear just JSON API
by
Arturo Hernandez
An API centric Uniform Interface
by
Jean-Jacques Dubray
APIs have won, pragmatically, overwhelmingly. Resource orientation was just a dream, like distributed objects in their time. Even Swagger has started to "generate" some code, wait a year or two and everyone will use a json-schema (don't forget the namespaces either), and we would have done what we have done for decades, standing still.
We don't hear much from the original RESTafarians these days, they have all moved on to greener pastures. Maybe it's time to close that debate and get some rest (real rest). | https://www.infoq.com/news/2013/07/dont-tell-them-its-rest/ | CC-MAIN-2017-09 | refinedweb | 841 | 65.22 |
import "android.googlesource.com/platform/tools/gpu/memory"
Package memory contains types used for representing and simulating memory observed in the capture.
type Data interface { Get(db database.Database, logger log.Logger) ([]byte, error) Size() uint64 }
Data is the interface for a data source that can be resolved to a byte slice with Get.
Size returns the number of bytes that would be returned by calling Get.
type DataSliceWriter interface { Data Slice(r Range) DataSliceWriter Write(d Data) }
DataSliceWriter is similar to DataSlicer, but also has the Write method for modifying the data that the DataSliceWriter refers to.
Write will replace the data in this DataSliceWriter with d. If d is shorter in length than the slice, then only the range [0, d.Size()-1] bytes will be replaced. A sliced DataSliceWriter shares the same data from which it was sliced
type DataSlicer interface { Data Slice(r Range) DataSlicer }
DataSlicer extends the Data interface with aditional support for slicing.
Slice returns a new DataSlicer referencing a subset range of the data. The range r is relative to the base of the DataSlicer. For example a slice of [0, 4] would return a DataSlicer referencing the first 5 bytes of this DataSlicer. Attempting to slice outside the range of this DataSlicer will result in a panic.
func ResourceData(resId binary.ID, size uint64) DataSlicer
ResourceData returns a DataSlicer that wraps a resource. resId is the identifier of the resource and size is the size in bytes of the resource.
type Memory struct { binary.Generate `disable:"true"` }
Memory represents an unbounded and isolated memory space. Memory can be used to represent the application address space, or hidden GPU memory.
Memory can be sliced into smaller regions which can be read or written to. All writes to Memory or its slices do not actually perform binary data copies, but instead all writes are stored as lightweight records. Only when a Memory slice has Get called will any resolving, loading or copying of binary data occur.
func (m *Memory) Slice(rng Range) DataSliceWriter
Slice returns a DataSliceWriter referencing the subset of the Memory range.
func (m *Memory) Write(d Data)
Write copies d to the Memory slice [0, d.Size()-1].
type Pointer uint64
Pointer is the type representing a memory pointer.
type Range struct { binary.Generate Base Pointer // A pointer to the first byte in the memory range. Size uint64 // The size in bytes of the memory range. }
Range represents a region of memory.
func (*Range) Class() binary.Class
func (i Range) Contains(p Pointer) bool
Contains returns true if the pointer p is within the Range.
func (i Range) Expand(p Pointer) Range
Expand returns a new Range that is grown to include the pointer p.
func (i Range) First() Pointer
First returns a Pointer to the first byte in the Range.
func (i Range) Intersect(other Range) Range
Intersect returns the Range that is common between this Range and other. If the two memory ranges do not intersect, then this function panics.
func (i Range) Last() Pointer
Last returns a Pointer to the last byte in the Range.
func (i Range) Span() interval.U64Span
Span returns the Range as a U64Span.
func (i Range) String() string
type RangeList []Range
func (l *RangeList) Copy(to, from, count int)
Copy performs a copy of ranges within the RangeList.
func (l *RangeList) GetSpan(index int) interval.U64Span
GetSpan returns the span of the range with the specified index in the RangeList.
func (l *RangeList) Length() int
Length returns the number of ranges in the RangeList.
func (l *RangeList) Resize(length int)
Resize resizes the RangeList to the specified length.
func (l *RangeList) SetSpan(index int, span interval.U64Span)
SetSpan adjusts the range of the span with the specified index in the RangeList. | https://android.googlesource.com/platform/tools/gpu/+/refs/heads/studio-1.3-dev/memory/ | CC-MAIN-2022-27 | refinedweb | 628 | 66.84 |
When a large amount of data is to be dealt with, the most efficient way is to store it in an optimized manner. As programmers often face lots of problems to solve on daily basis, it is quite easy to solve this problem in minimum time if your data is arranged in a specialized format, i.e. either ascending or descending format. Therefore, every programming language provides a class of algorithm known as Sorting which enables you to arrange your data in a specific format and sort them in ascending or descending manner.
There are many sorting techniques available, all with their own advantage and disadvantage, and can be used in different situations. In this article, we will study one such sorting algorithm i.e., bubble sort, and its algorithm. We will also learn the C++ code for sorting data using bubble sort and conclude with its applications. So, let's get started!
What is Bubble Sort?
The Bubble Sort, also called Sinking Sort, is a sorting algorithm that compares each pair of adjacent elements. Bubble sort is not an efficient sorting algorithm compared to others but it provides useful insights to beginners on what sorting algorithm is and how it works behind the scene. The basic technique of bubble sort is that the first element of the list is compared to the second, the second to the third element, and so on. Each iteration moves each element of the array closer to the end, similar to how air bubbles rise to the surface of the water. This is how bubble sort gets its name. Here, the sorting is done in the form of passes or iteration. As a result, the largest element is placed in its rightful place in the list at the end of each iteration. You can also say that the list's largest element rises to the top after the completion of each iteration/pass.
Algorithm for Bubble Sort
Step1: for k = 0 to n-1 repeat Step 2 Step2: for j = k + 1 to n – k repeat Step3: if A[j] > A[k] Swap A[j] and A[k] [end of inner for loop] [end if outer for loop] Step4: end
How Does Bubble Sort Work?
- If the order of adjacent elements is incorrect, we compare them ( a[k] > a[j] ) and swap them.
- Assume we have an array of length 'n'. Therefore, to sort 'n' elements using the previous step, we require an 'n-1' pass.
- After following these steps, the largest element goes at the end of the array. The next largest is one place behind the last. The 'kth' largest element is swapped to its rightful place in the array in the 'kth' pass.
- So basically, at least one element from 'n - k + 1' will appear at its right place at the end of the ith pass. It will be the 'kth' largest element (for 1 *= k *= n - 1) of the array. Because in the 'kth' pass of the array, in the 'jth' iteration (for 1 <= j <= n - k), we are checking if 'a[j] > a[j + 1]', and 'a[j]' will always be greater than 'a[j + 1]' when it is the largest element in range '[1, n - k + 1]'. We'll now swap it. The process will continue until the largest element in the array is at the '(n - k + 1)th' position.
Pseudocode for Bubble Sort
begin bubbleSort(array): N <- length(array) for j = 0 to N: for i = 0 to N-1: if array[i] > array[i+1] temp <- array[i] array[i+1] <- array[i] array[i] <- temp end if end for end for return array end bubbleSort begin modifiedbubbleSort(array): N <- length(array) for j = 0 to N: flag = False for i = 0 to N-1: if array[i] > array[i+1] temp <- array[i] array[i+1] <- array[i] array[i] <- temp flag = True end if end for if flag == False: break end for return array end modifiedbubbleSort
Here, we traverse the array using two iterative loops.
We begin with the '0th' element in the first loop, then we begin with an adjacent element in the second loop. We compare each of the neighboring components in the inner loop body and swap them if they are out of order. The heaviest element bubbles up at the end of each iteration of the outer loop.
Bubble Sort Example
Let us consider the below array of elements to sort.
As we can see from the above illustration, the greatest element bubbles up to the last with each pass, sorting the list. Each element is compared to its adjacent element and switched with one another if they are not in order.
If the array is to be sorted in ascending order at the end of the first pass, the largest element is placed at the end of the list. The second largest element is placed at the second last position in the list for the second pass, and so on.
At last, we will have sorted the entire list when n-1 (where n is the total number of entries in the list) passes. The final output of the bubble sort will be as shown below.
C++ Code for Bubble Sort
#include<bits/stdc++.h> #define swap(x,y) { x = x + y; y = x - y; x = x - y; } using namespace std; /** * Function to Sort the array using Modified Bubble Sort Algorithm * @param arr: Array to be Sorted * @param n: Size of array * @return : None */ void bubbleSort(int arr[], int n) { int i, j; bool flag; // Outer pass for(i = 0; i < n; i++) { flag = false; // Set flag as false for(j = 0; j < n-i-1; j++) { // Compare values if( arr[j] > arr[j+1]) { swap(arr[j],arr[j+1]); flag = true; } } // If no to elements are swapped then // array is sorted. Hence Break the loop. if(!flag) { break; } } } int main(int argv, char* argc[]) { int arr[] = {1,5,6,8,3,4,7,2,9}; int n = sizeof(arr)/sizeof(int); cout<<"Unsorted Array :"; for(int i=0;i<n;i++) // Print the Original Array cout<<arr[i]<<" "; cout<<endl; bubbleSort(arr,n); // Call for Bubble Sort Function cout<<"Sorted Array :"; for(int i=0;i<n;i++) // Print the Sorted Array cout<<arr[i]<<" "; return(0); }
Output
Unsorted Array :1 5 6 8 3 4 7 2 9 Sorted Array :1 2 3 4 5 6 7 8 9
Time Complexity
The time complexity of the bubble sort algorithm is O(n) for the best-case scenario when the array is completely sorted. Considering the average case and worst-case scenarios, the time complexity of bubble sort is O(n^2) where n is a total number of elements in the array. It is because we have to make use of two loops traversing through the entire array for sorting the elements.
Applications
- Bubble sorts are used to teach students the foundations of sorting.
- Bubble sort is used when the complexity of the algorithm does not matter
- It is quite preferred by the programmers because of being short and simple
Conclusion
Sorting is one of the most used techniques by programmers to store and optimize a huge amount of data. By sorting the elements, You can provide a better visual to your data collection and reduce the time complexity of any program efficiently. In this article, we study one such sorting technique i.e. bubble sort which is the base to learn the sorting and its works. Bubble Sort is not commonly used compared to other sorting algorithms but still have its own importance in the technical world. Learning bubble sort is well worth the effort because it acts as a bridge to understanding higher-level concepts and makes it easier for you to understand more difficult topics. | https://favtutor.com/blogs/bubble-sort-cpp | CC-MAIN-2022-05 | refinedweb | 1,303 | 65.76 |
Ever wanted to have your own C/C++ preprocessor? Or maybe you are curious
about how this invisible everyday helper of your toolbox works? If yes, you may
want to read further. If no - before hitting the 'back' button of your browser
consider to learn something new and read further too .
The C++ preprocessor is a macro processor that under normal circumstances is
used automatically by your C++ compiler to transform your program before the
actual compilation. It is called a macro processor because it allows you to
define macros, which are brief abbreviations for longer constructs. The C++
preprocessor provides four separate facilities that you can use as you see fit:
These features are greatly underestimated today, even more, the preprocessor
has been frowned on for so long that its usage just hasn't been effectively
pushed until the Boost preprocessor library [1]) C++ compiler,
which has a bug free implementation of the rather simple preprocessor
requirements mandated therein. This may be a result of the mentioned
underestimation or even banning of the preprocessor from good programming style
during the last few years or may stem from the somewhat awkward standardized
dialect of English used to describe it.
So the Wave preprocessor library is an attempt to:
To simplify the parsing task of the input stream (which is most of the time,
but not restricted to, a file) the Spirit parser construction library [4]
is used.
these two iterators will return the preprocessed tokens, which are to be built
on the fly from the given input stream.
The C++ preprocessor iterator itself is feeded by a C++ lexer iterator, which
implements an unified interface. By the way, the C++ lexers contained within the
Wave library may be used standalone too and are not tied to the C++
preprocessor iterator at all. As a lexer I'll understand a piece of code, which
combines several consecutive characters in the input stream into a stream of
objects (called tokens) more suitable for subsequent parsing. These tokens carry
around not only the information about the matched character sequence, but
additionally the position in the input stream, where a particular token was
found. In other words the lexer removes all this so-needed-by-human garbage like
spaces, newlines, etc. (i.e. performs some lexical transformation) leaving the
structural transformation for parser.
To make the Wave C++ preprocessing library modular, the C++ lexer is
held completely separate and independent from the preprocessor. To proof this
concept, there are two different C++ lexers implemented and contained within the
library by now, which are functionally completely identical. The C++ lexers
expose the mentioned unified interface, so that the C++ preprocessor iterator
may be used with both of them. The abstraction of the C++ lexer from the C++
preprocessor iterator library was done to allow to plug in different other C++
lexers too, without the need to re-implement the preprocessor. This will allow
for benchmarking and specific finetuning of the process of preprocessing
itself.
During the last weeks Wave got another field of application: testing
the usability and applicability of different Standards proposals. A new C++0x
mode was implemented, which allows to try out and help to establish some ideas,
which are designed to overcome some of the known limitations of the C++
preprocessor.
The actual preprocessing is a highly configurable process, so obviously you
have to define a couple of parameters to control this process, such as:
#include <...>
#include "..."
You can access all these processing parameters through the
wave::context object. So you have to instantiate at least one
object of this type to use the Wave library. For more information about
the context template please refer to the class reference as included in the
downloadable file or as may be found here.
The context object is a template class, for which you have to supply at least
two template parameters: the iterator type of the underlying input stream to use
and the type of the token to be returned from the preprocessing engine. The type
of the used input stream is defined by you, so may the token type, but as a
starting point I would recommend to use the token type predefined as the default
inside the Wave library - the wave::cpplexer::lex_token<>
template class. A full reference of this class you can find inside the
downloadable file or here.
wave::context
wave::cpplexer::lex_token<>
The main preprocessing iterators are not to be instantiated directly, but
should be generated through this context object too. The following code snippet
preprocesses a given input file and outputs the generated text into
std::cout.
std::cout
// Open the file and read it into a string variable
std::ifstream instream("input.cpp");
std::string input(
std::istreambuf_iterator<char>(instream.rdbuf());
std::istreambuf_iterator<char>());
// The template wave::cpplexer::lex_token<> is the default
// token type to be used by the Wave library.
// This token type is one of the central types throughout
// the library, because it is a template parameter to many
// of the public classes and templates and it is returned
// from the iterators itself.
typedef wave::context<std::string::iterator,
wave::cpplexer::lex_token<> >
context_t;
// The C++ preprocessor iterators shouldn't be constructed
// directly. These are to be generated through a
// wave::context<> object. Additionally this wave::context<>
// object is to be used to initialize and define different
// parameters of the actual preprocessing.
context_t ctx(input.begin(), input.end(), "input.cpp");
context_t::iterator_t first = ctx.begin();
context_t::iterator_t last = ctx.end();
// The preprocessing of the input stream is done on the fly
// behind the scenes during the iteration over the
// context_t::iterator_t based stream.
while (first != last) {
std::cout << (*first).get_value();
++first;
}
This sample shows, how the input may be read into a string variable, from
where it is fed into the preprocessor. But the parameters to the constructor
of the wave::context<> object are not restricted to this type
of input stream. It can take a pair of arbitrary iterator types (conceptually at
least forward_iterator type iterators) to the input stream, from
where the data to be preprocessed should be read. The third parameter supplies a
filename, which is subsequently accessible from inside the preprocessed tokens
returned from the preprocessing to indicate the token position inside the
underlying input stream. Note though, that this filename is used only as long no
#include or #line directives are encountered, which in
turn will alter the current filename.
wave::context<>
forward_iterator
#include
#line
The iteration over the preprocessed tokens is relatively straight forward.
Just get the starting and the ending iterators from the context object (maybe
after initializing some include search paths) and you are done! The
dereferencing of the iterator will return the preprocessed tokens, which are
generated on the fly from the input stream.
As you may have seen, the complete library resides in a C++ namespace
wave. So you have to explicitly specify this while using the different
classes. The other way around is certainly to place a using namespace
wave; somewhere at the beginning of your source files.
namespace
wave
using namespace
wave;
If explicitly enable/disable the
tracing for the macro in question only. This may be done with the help of a
special #pragma:
#pragma
#pragma wave trace(enable) // enable the tracing
// the macro expansions here will be traced
// ...
#pragma wave trace(disable) // disable the tracing
To see, what the Wave driver generates while expanding a simple macro,
I suggest, that you try to compile the following with 'wave -t test.trace
test.cpp':
//)
After executing this command the file test.trace will contain the generated
trace output. The generated output is relatively straightforward to understand,
but you can find a thorough description of the trace output format in the
documentation included with the downloadable file.
In order to prepare and support a proposal for the C++ Standards committee,
which will describe certain new and enhanced preprocessor facilities, the
Wave preprocessor library has implemented experimental support for the
following features:
Variadic macros and placemarker tokens are known already from the C99
Standard. Its addition to the C++ Standard would help to make C99 and C++ less
different.
Token-pasting of unrelated tokens (i.e. token-pasting resulting in multiple
preprocessing tokens) is currently undefined behaviour for no substantial
reason. It is not dependent on architecture nor is it difficult for an
implementation to diagnose. Furthermore, retokenization is what most, if not
all, preprocessors already do and what most programmers already expect the
preprocessor to do. Well-defined behavior is simply standardizing existing
practice and removing an arbitrary and unnecessary undefined behavior from the
Standard.
One of the major problems of the preprocessor is that macro definitions do
not respect any of the scoping mechanisms of the core language. As history has
shown, this is a major inconvenience and drastically increases the likelihood of
name clashes within a translation unit. The solution is to add both a named and
unnamed scoping mechanism to the C++ preprocessor. This limits the scope of
macro definitions without limiting its accessibility.
The proposed scoping mechanism is implemented with the help of three new
preprocessor directives: #region, #endregion and
#import (note that the actual names for the directives may change
during the standardization process). Additionally it changes minor details of
some of the existing preprocessor directives: #ifdef,
#ifndef and the operator defined().
#region
#endregion
#import
#ifdef
#ifndef
operator defined()
To avoid overly detailed descriptions of the new features in this article, a
simple example is provided here (taken from the experimental version of the
preprocessor library written by Paul
Mensonides), which demonstrates the proposed extensions:
# ifndef ::CHAOS_PREPROCESSOR::chaos::WSTRINGIZE_HPP
# region ::CHAOS_PREPROCESSOR::chaos
#
# define WSTRINGIZE_HPP
#
# include <chaos/experimental/cat.hpp>
#
# // wstringize
#
# define wstringize(...) \
chaos::primitive_wstringize(__VA_ARGS__) \
/**/
#
# // primitive_wstringize
#
# define primitive_wstringize(...) \
chaos::primitive_cat(L, #__VA_ARGS__) \
/**/
#
# endregion
# endif
# import ::CHAOS_PREPROCESSOR
chaos::wstringize(a,b,c) // expands to: L"a,b,c"
The macro scope syntax is resembled after the namespace scoping already known
from the core C++ language. There is a significant difference though. The
#region and #endregion directives are opaque for any
macro definition from outside or inside the spanned region, respective. This way
macros defined inside a specific region are visible from outside this region
only, if these are imported (by the #import directive) or if these
are qualified (as for instance the argument to the #ifndef
directive above).
For more details about the new experimental features please refer to the
documentation included with the downloadable file.
The described features are enabled by the --c++0x command line
option of the Wave driver. Alternatively you can enable these features by
calling the wave::context<>::set_language() function with the
wave::support_cpp0x value.
--c++0x
wave::context<>::set_language()
wave::support_cpp0x
To see, how you may write a full blown preprocessor, you may refer to the
Wave driver sample, included in the downloadable file. This Wave
driver program fully to stderr
--variadics: enable variadics and placemarkers in C++ mode
--c99: enable C99 mode (implies variadics)
--c++0x: enable experimental C++0x support (implies
variadics)
To allow the tracing output, the Wave driver now has a special command
line option -t (--trace), which should be used to specify a file, to which the
generated trace information will be put. If you use a single dash ('-') as the
file name, the output goes to the std::cerr stream.
std::cerr
There is left one caveat to mention. To use the Wave library or to
compile the Wave driver yourself you will need at least the VC7.1
compiler (the C++ compiler included in the VS.NET 2003 release). Alternatively
you may compile it with a recent version of the gcc compiler (GNU Compiler
Collection) or the Intel V7.0 C++ complier. Sorry, for now no VC6 and no VC7 -
these are to far away from C++ Standard conformance. But I will eventually try
to alter parts of the Wave library to make it compilable with this
compilers too - it depends on your response.
Wave depends on the Boost library (at least V1.30.2) and the Program Options library from Vladimir Prus (at least rev. 160, recently adopted to Boost, but not included yet) , so please be sure to install these libraries, before trying to recompile Wave.
Despite the fact, that the Wave library is quite complex and heaviliy
uses advanced C++ idioms, as templates and template based metaprogramming, it is
farely simple to be used in a broad spectrum of applications. It nicely fits
into well known paradigms used over years by the C++ Standard Template Library
(STL).
The Wave driver program is the only known to me C++ preprocessor,
which
therefore it may be an invaluable tool for the development of modern C++
programs.
As recent developments like the Boost Preprocessor Library show [1],
we will see in the future a lot of applications for advanced preprocessor
techniques. But these need a solid base - a Standard conformant preprocessor. As
long as the widely available compilers do not fit into these needs, the
Wave library may fill this gap.
__INCLUDE_LEVEL__
operator _Pragma()
_Pragma wave system()
true
false
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
#define COMMENT_MACRO /##/
void test() {
COMMENT_MACRO comment
}
void test() {
// comment
}
void test() {
comment
}
Evgeniy Ryzhkov wrote:If I pass "--c99" or "—variadics" parameters to wave then I have this problem.
Evgeniy Ryzhkov wrote:P.S. Where is a good place for questions like this?
pj1000 wrote:I'm evaluating wave as a cpp replacement. I'm having a problem at the moment in that the #line output shows the name of the source file as an absolute path, and my regressions are expecting a gcc-style relative path.
Before I have a go at hacking this, can anyone tell me if there's some configuration that can be set to output relative paths? Or to make the output more gcc/cpp-compatible?
pj1000 wrote:Or to make the output more gcc/cpp-compatible? There are a couple of other minor differences to gcc and mcpp: wave produces '#line n', for example, and the others produce '# n'.
Rowan S-B wrote:1. I don't need/want #line directives - they confuse my assembler, and I don't want to have to run another pass with some other sort of processor to remove them. MCPP has an option to turn these off.
Rowan S-B wrote:2. I would like to preserve white space, since the code is pretty illegible without it, and this code may have to be maintained manually. MCPP has an option for this.
Rowan S-B wrote:3. Wave inserts a space inside tokens like 20d or 5ah (meaning decimal or hex constants), converting these to 20 d and 5 ah. Can I prevent this happening? MCPP doesn't do this...
Vincent_RICHOMME wrote:First I have created a config file (wave.cfg) with my system include but it seems wave doesn't know how to interpret windows PATH.
Vincent_RICHOMME wrote:and when I execute wave.exe I got nothing.
talewisx wrote:I found that include statements which include relative paths (such as "..\..\INCLUDE\PlatformBinding.h" generates an error such as:
(1): exception caught: boost::filesystem::path: invalid name "..\..\INCLUDE\PlatformBinding.h" in path: "..\..\INCLUDE\PlatformBinding.h"
This seems to be a problem with Boost's path parsing code. However, this passes other preprocessors that I have. Any thoughts?
#include "test.h"
int i;
-D Enable
command line(0): command line error: invalid macro definition: Enable
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/3853/Wave-a-Standard-conformant-C-preprocessor-library?fid=14979&df=90&mpp=25&sort=Position&spc=Relaxed&select=3406320&noise=3&prof=True&view=None | CC-MAIN-2016-36 | refinedweb | 2,635 | 52.6 |
33866/mapreduce-in-python
mapper.py
#!/usr/bin/python
import sys
#Word Count Example
# input comes from standard input STDIN
for line in sys.stdin:
line = line.strip() #remove leading and trailing whitespaces
words = line.split() #split the line into words and returns as a list
for word in words:
#write the results to standard output STDOUT
print'%s\t%s' % (word,1) #Emit the word
reducer.py
#!/usr/bin/python
import sys
from operator import itemgetter
# using a dictionary to map words to their counts
current_word = None
current_count = 0
word = None
# input comes from STDIN)
Hi
As you write mapper and reducer program ...READ MORE
Problem has been solved by zipimport.
Then I zip chardet to ...READ MORE
You have to override isSplitable and see if it works:
public ...READ MORE
#!/usr/bin/python
from subprocess import Popen, PIPE
cat = Popen(["hadoop", ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/33866/mapreduce-in-python | CC-MAIN-2020-34 | refinedweb | 152 | 66.44 |
How can I get a compiler for C++ like Dev-Cpp to support chinese character and pinyin input? Like if I wanted to make a chinese program that would allow chinese input and output in the console window... Example:
#include <iostream> using namespace std; int main() { wchar_t input; char loop; loop='a'; while(loop = 'a') { system("CLS"); cout << "Input Chinese Characters or pinyin to translate to english: "; cin >> input; if(input = "你") { cout << "nǐ = you" << endl; } if(input = "您") { cout << "nín = you (with respect)" << endl; } if(input = "我") { cout << "wǒ = I; me; my; (anything that has to do with yourself)" << endl; } if(input = "星") { cout << "xīng = star;" << endl; } if(input = "期") { cout << "qī = period of time; " << endl; } if(input = "子") { cout << "zi = word; character;" << endl; } if(input = "好") { cout << "hǎo = to be good/well;" << endl; } if(input = "姓") { cout << "xìng = family name; last name;" << endl; } if(input = "今") { cout << "jīn = now;" << endl; } if(input = "天") { cout << "tiān = today; sky;" << endl; } } return 0; }
etc.
So pinyin is like 'xìng' and the characters are like '姓'. So some how im wondering how I could create a program that would be like a dictionary type thing.
BONUS:
你教什么名字 = nǐ jiáo shén me míng zi? = whats your full name?
您好 = nín hǎo! = hello! (Respectfully).
今天星期二 = jīn tiān xīng qī èr = today is tuesday.
星期 = xīng qī = week. | https://www.daniweb.com/programming/software-development/threads/151192/how-to-get-dev-cpp-to-support-chinese-characters-and-pinyin | CC-MAIN-2018-43 | refinedweb | 223 | 75.74 |
After stumbling with the latest install of Orcas, I loaded up the Silverlight 1.1 chess demo and was blown away with the poor quality of the coding style used in the demo. I'm at a total loss as to how a public demo comes out like this. It truly saddens me because for every 1 shitty demo Microsoft puts out, it probably takes 100 good ones to even out.
Am I being too harsh? Judge for yourself:
1 -Multiple public types per file which is a direct violation of Microsoft's INTERNAL Design Guidelines.
2 -The namespace is "Chess", which also violates Microsoft's Design Guidelines.
3 -The boardui.cs, browser.xaml and default.html.js files are all lowercase - all other files are PascalCased
4 -Some simple getters are written in a single line, others over multiple lines
5 - Some types and members rely on implicit visibility (don't specify private/internal/public), others specify it. I don't even know what the default type is...isn't it internal for classes and private for fields?
6 -Fields are spread all over...most are at the top of the class definition, some are defined above a method half way down.
7 -Single line case statement which violates Microsoft's Internal design guidelines
8 -Classes with only static members that aren't sealed and don't have a private constructor (or, in 2.0, aren't marked as static)
9 -An unbelievable amount of SET-only properties (again, like most of these, violates Microsoft's own design guidelines)
10 -Within the same method, some fields are accessed with "this." others aren't
11 -Some 1 line IF's/Loops have braces...some don't
12 -Properties that call fairly expensive methods
13 -Some usings are declared inside the namespace, some declared outside
14 -Many if statements would benefit from Guard Clauses and Decompose Conditional refactoring
15 -Poor use of #regions and very little of them
There's certainly more. Many of the items above (especially #1 and #6) make the sample very hard to read. Running the project through FxCop came up with 143 issues.
I want to cry.
[Advertisement]
I think just maybe you take demo downloads a pinch too seriously.
That's been the case for Microsoft example code for as long as I remember (Windows 2.0). I guess that if you're skilled enough to write good code, Microsoft would rather you developed products, so demos get done by interns and poor developers.
Mark, you don't find that if the style makes the demo hard to read (some of the classes have 5+ public types and 3+ different places where private fields are declared), that it somewhat defeats the purpose of the demo?
Karl I completely agree, it's one of my personal bug-bears. I used to get really annoyed with my developers if they checked in what I thought was "messy" code. I still haven't got a decent argument about why I think it's important to have "clean" code though, other than vague ones about it being easier for other developers to read. I think quite a few people thought I was just being obsessive-compulsive about it all (and I used to worry that perhaps I was).
Demo code I think has the ultimate requirement to be both easy and pleasant to read - afterall it serves no other purpose other than to be read. Sloppy layout to me implies sloppy code overall.
I'm sure that one day source code in text form will be a thing of the past - I think eventually it will be stored as parse trees and we can all format it how we like - an end to the religious syntax wars.
(Aside #1 - I'd like to see more demo code written in the style of Fraser and Hanson - literate code written as code and prose intertwined.)
(Aside #2 - I'd also like to see more stuff like Hackety Hack - MS could learn a thing or two there).
I would agree with your gripes, other than the regions thing. I HATE regions - they're usually abused to hide massive amounts of code.
For me, I use demo code as a learning tool and if you think the demo is crap, well...
See what happens when "designers" code! :-)
Thanks for your comments! You are right that the code for this demo needs some work... we will invest in getting it cleaned up for a future release...
Feel free to drop me an email about any other issues you see and keep the comments comming...
Your right but all that can be cleaned up. The most important thing to me is that bad coders were able to do RAD and get it working as proof of concept and it worked. Others will surely disagree.
Come down off the ledge, man. It'll all be ok. It's a demo of silverlight, not a demo of the design guidelines which are primarily intended for class libraries and public API's anyway.
I write pretty good code when I want to (for the larger applications). But I don't take the time to make sure everything is neat and consistent when I'm writing quick utilities and sample code. That's life. Time is money.
I agree with you Karl and also the commenter Stu - Demo code should be the most readable code of all. If I was putting out code for public viewing I would spend the hour or two required to pretty it up including conforming to the styleguide, adding extra comments, adjusting whitespace to visually separate components (if putting multiple elements in one file), etc. That stuff doesn't take long, but it makes a big difference to a stranger looking at it.
I prefer to obvuscate my code....Much better job security when no one else understands your code. I guess the SilverLight developers want the same :-)
Regards
Lee
Regions are a mess to begin with, and it isn't surprising that people have subjective opinions about how they *should* be done.
Most everything else you mentioned are superficial and completely ignorable. With all due respect, when people throw up their arms over such trivia of code, I immediately suspect their skill, and it casts them more as a "coding assistant" than an actual professional.
Your post lost credibility to me when you mentioned
"Poor use of #regions and very little of them"
Give me a break. Regions are a disaster to coding. They shouldnt be anywhere. If you find yourself needing to use them, its a sign that your class has become an unmanagable mess.
for you region haters, the main complaint is how inconsistent they are. I'm hopeful that you would agree consistent regions are better than inconsistent ones.
JC:
If out of 14 points I make 1 you don't agree with and I've lost all credibility, then you're high horse is higher than mine :)
John:
It's a demo to learn how to code. If it was a demo of just what silverlight can do, they wouldn't give the source. Releasing poor source code reflects bad on the product and the company, but more importantly wastes my time.
I was at MIX07 and as I recall more than a few of the demos were put together within a two-week timeframe. The Ruby-Silverlight REPL running on a Mac is a good example. Two weeks. Please take these examples for what they actually are: Demo code to intended to demonstrate BETA TOOLS.
Incidentally, orcas crashed in virtually every session that I was in -- even the keynote iirc. My point is that these aren't finished products by any stretch so don't expect the demo-code to be a shining beacon of the Microsoft Method.
Garren and to the others defending Microsoft....don't you just naturally code properly? You can rush me all you want, but I'll still only put 1 public type per file, and make sure my fields aren't spread out everywhere.
I think Brad Abrams has it wrong. Yes it's great that they'll revisit it, but there's a root problem that ought to be fixed.
I'm sure the code will get fixed in a couple months, but why is making code readable something that doesn't happen as you're writing it? Even the crap I write in Code Snippet follows some order.
I think you guys here are missing the point. There should be someone doing Quality Control and what MS puts out should be reviewed by someone senior is the assumption that poor coders and interns are used for these purposes is true. Other than than, it makes pretty good sense why its hard for MS to resolve issues in their products and why its also very important that they maintain the closed source stance.
I bet most of the brilliant coders have issues with doing all these aesthetics to code
man, this evening go to your local bar and have a couple of drinks, really
I guarantee the demos were rushed out the door in order to demonstrate Silverlight. Microsoft never said that the code reflects best practices. It's asking a little too much, in my opinion, to have proof-of-concept demo written in two weeks to demonstrate alpha software be well-written -- and I'm a standards nazi by nature. :)
Hmm....I agree with the above, when you're rushing something out the door for demo purposes and the technology behind the scenes is changing fast, there's a lot more important things to be thinking about than cleaning up your code, like how does that new Brush class that got thrown into last night's build work?
For file's being named in a mixture of all lower-case and some upper case. What if the very beta version of the tools Created the boiler plate files in a Camel Case way, but then the developer was given a helper file from another developer that was lower-case? I see so many reason's for some sloppy things on tight deadlines with changing infastructure. I just don't see the importance or need to care about how files are named for something like this.
Now, if it's an application being pushed into production with hundreds of classes and multiple-people editing the files...that's a different story.
And namespace "Chess" violating guidlines.
Also, this is a Chess application, it's not like there's some good boiler-plate code in there that you're going to copy out and use in your application.
But if you are cursorily and messy while writing code and lots of you are argumenting to spend some time later to clean it up, then the question is just: why are you not writing it "clean" from the beginning, it would take you just a trickle longer than doing it afterwards. This is a principle of organization (not only regarding coders)!
André you're absolutely right. This goes into being a professional at what you do, rather than a Mort.
Yes there's a reality...there are tight deadlines, but all that does is [partially] excuse the individual directly responsible for the code...it doesn't excuse the PM or the VP who's impossible-to-meet deadline compromised the system.
Note: this sample is now updated to work with Silverlight 1.0 I decided to catch up with the | http://codebetter.com/blogs/karlseguin/archive/2007/05/07/microsoft-s-own-silverlight-demo-showcases-how-not-to-code.aspx | crawl-002 | refinedweb | 1,928 | 70.53 |
#include <deal.II/grid/grid_reordering.h>
This class reorders the vertices of cells such that they meet the standard requirements of the Triangulation class when creating grids, i.e. all lines have a unique orientation with respect to all neighboring cells. This class is mainly used when reading in grids from files and converting them to deal.II triangulations.
* 2 * 3--->---2 * | | * 3^ ^1 * | | * 0--->---1 * 0 *the vertices in 3d:
* 7-------6 7-------6 * /| | / /| * / | | / / | * / | | / / | * 3 | | 3-------2 | * | 4-------5 | | 5 * | / / | | / * | / / | | / * |/ / | |/ * 0-------1 0-------1 *and the faces in 3d:
* *-------* *-------* * /| | / /| * / | 1 | / 4 / | * / | | / / | * * | | *-------* | * | 5 *-------* | | 3 * * | / / | | / * | / 2 / | 0 | / * |/ / | |/ * *-------* *-------* *After calling the GridReordering::reorder_cells() function the CellData is still in this old numbering scheme. Hence, for creating a Triangulation based on the resulting CellData the Triangulation::create_triangulation_compatibility() (and not the Triangulation::create_triangulation()) function must be used. For a typical use of the reorder_cells() function see the implementation of the GridIn
read_*()functions.
Triangulations in deal.II have a special structure, in that there are not only cells, but also faces, and in 3d also edges, that are objects of their own right. Faces and edges have unique orientations, and they have a specified orientation also with respect to the cells that are adjacent. Thus, a line that separates two cells in two space dimensions does not only have a direction, but it must also have a well-defined orientation with respect to the other lines bounding the two quadrilaterals adjacent to the first line. Likewise definitions hold for three dimensional cells and the objects (lines, quads) that separate them.
For example, in two dimensions,.
As a sidenote, we remark that if one adopts the idea that having directions of faces is useful, then the orientation of the four faces of a cell as shown above is almost necessary. In particular, it is not possible to orient them such that they represent a (counter-)clockwise sense, since then we couldn't already find a valid orientation of the following patch of three cells:
* o * / \ * o o * | \ / | * | o | * | | | * o---o---o *
(The reader is asked to try to find a conforming choice of line directions; it will soon be obvious that there can't exists such a thing, even if we allow that there might be cells with clockwise and counterclockwise orientation of the lines at the same time.)
One might argue that the definition of unique directions for faces and edges, and the definition of directions relative to the cells they bound, is a misfeature of deal.II. In fact, it makes reading in grids created by mesh generators rather difficult, as they usually don't follow these conventions when generating their output. On the other hand, there are good reasons to introduce such conventions, as they can make programming much simpler in many cases, leading to an increase in speed of some computations as one can avoid expensive checks in many places because the orientation of faces is known by assumption that it is guaranteed by the triangulation.
The purpose of this class is now to find an ordering for a given set of cells such that the generated triangulation satisfies all the requirements stated above. To this end, we will first show some examples why this is a difficult problem, and then develop algorithms that finds such a reordering. Note that the algorithm operates on a set of CellData objects that are used to describe a mesh to the triangulation class. These objects are, for example, generated by the GridIn class, when reading in grids from input files.
As a last question for this first section: is it guaranteed that such orientations of faces always exist for a given subdivision of a domain into cells? The linear complexity algorithm described below for 2d also proves that the answer is yes for 2d. For 3d, the answer is no (which also underlines that using such orientations might be an – unfortunately uncurable – misfeature of deal.II). A simple counter-example in 3d illustrates this: take a string of 3d cells and bend it together to a torus. Since opposing lines in a cell need to have the same direction, there is a simple ordering for them, for example all lines radially outward, tangentially clockwise, and axially upward. However, if before joining the two ends of the string of cells, the string is twisted by 180 degrees, then no such orientation is possible any more, as can easily be checked. In effect, some meshes could not be used in deal.II. In order to overcome this problem, the
face_rotation,
face_flip and
line_orientation flags have been introduced. With these, it is possible to treat all purely hexahedral meshes. However, in order to reduce the effect of possible bugs, it should still be tried to reorder a grid. Only if this procedure fails, the original connectivity information should be used.
As noted, reordering the vertex lists of cells such that the resulting grid is not a trivial problem. In particular, it is often not sufficient to only look at the neighborhood of a cell that cannot be added to a set of other cells without violating the requirements stated above. We will show two examples where this is obvious.
The first such example is the following, which we will call the ``four cells at the end'' because of the four cells that close of the right end of a row of three vertical cells each (in the following picture we only show one such column of three cells at the left, but we will indicate what happens if we prolong this list):
* 9---10-----11 * | | / | * 6---7---8 | * | | | | * 3---4---5 | * | | \ | * 0---1------2 *
Assume that you had numbered the vertices in the cells at the left boundary in a way, that the following line directions are induced:
* 9->-10-----11 * ^ ^ / | * 6->-7---8 | * ^ ^ | | * 3->-4---5 | * ^ ^ \ | * 0->-1------2 *
(This could for example be done by using the indices
(0 1 4 3),
(3 4 7 6),
(6 7 10 9) for the three cells). Now, you will not find a way of giving indices for the right cells, without introducing either ambiguity for one line or other, or without violating that within each cells, there must be one vertex from which both lines are directed away and the opposite one to which both adjacent lines point to.
The solution in this case is to renumber one of the three left cells, e.g. by reverting the sense of the line between vertices 7 and 10 by numbering the top left cell by
(9 6 7 10):
* 9->-10-----11 * v v / | * 6->-7---8 | * ^ ^ | | * 3->-4---5 | * ^ ^ \ | * 0->-1------2 *
The point here is the following: assume we wanted to prolong the grid to the left like this:
* o---o---o---o---o------o * | | | | | / | * o---o---o---o---o---o | * | | | | | | | * o---o---o---o---o---o | * | | | | | \ | * o---o---o---o---o------o *
Then we run into the same problem as above if we order the cells at the left uniformly, thus forcing us to revert the ordering of one cell (the one which we could order as
(9 6 7 10) above). However, since opposite lines have to have the same direction, this in turn would force us to rotate the cell left of it, and then the one left to that, and so on until we reach the left end of the grid. This is therefore an example we we have to track back right until the first column of three cells to find a consistent ordering, if we had initially ordered them uniformly.
As a second example, consider the following simple grid, where the order in which the cells are numbered is important:
* 3-----2-----o-----o ... o-----7-----6 * | | | | | | | * | 0 | N | N-1 | ... | 2 | 1 | * | | | | | | | * 0-----1-----o-----o ... o-----4-----5 *
We have here only indicated the numbers of the vertices that are relevant. Assume that the user had given the cells 0 and 1 by the vertex indices
0 1 2 3 and
6 7 4 5. Then, if we follow this orientation, the grid after creating the lines for these two cells would look like this:
* 3-->--2-----o-----o ... o-----7--<--6 * | | | | | | | * ^ 0 ^ N | N-1 | ... | 2 v 1 v * | | | | | | | * 0-->--1-----o-----o ... o-----4--<--5 *
Now, since opposite lines must point in the same direction, we can only add the cells 2 through N-1 to cells 1 such that all vertical lines point down. Then, however, we cannot add cell N in any direction, as it would have two opposite lines that do not point in the same direction. We would have to rotate either cell 0 or 1 in order to be able to add all the other cells such that the requirements of deal.II triangulations are met.
These two examples demonstrate that if we have added a certain number of cells in some orientation of faces and can't add the next one without introducing faces that had already been added in another direction, then it might not be sufficient to only rotate cells in the neighborhood of the the cell that we failed to add. It might be necessary to go back a long way and rotate cells that have been entered long ago.
From the examples above, it is obvious that if we encounter a cell that cannot be added to the cells which have already been entered, we can not usually point to a cell that is the culprit and that must be entered in a different orientation. Furthermore, even if we knew which cell, there might be large number of cells that would then cease to fit into the grid and which we would have to find a different orientation as well (in the second example above, if we rotated cell 1, then we would have to rotate the cells 1 through N-1 as well).
A brute force approach to this problem is the following: if cell N can't be added, then try to rotate cell N-1. If we can't rotate cell N-1 any more, then try to rotate cell N-2 and try to add cell N with all orientations of cell N-1. And so on. Algorithmically, we can visualize this by a tree structure, where node N has as many children as there are possible orientations of node N+1 (in two space dimensions, there are four orientations in which each cell can be constructed from its four vertices; for example, if the vertex indices are
(0 1 2 3), then the four possibilities would be
(0 1 2 3),
(1 2 3 0),
(2 3 0 1), and
(3 0 1 2)). When adding one cell after the other, we traverse this tree in a depth-first (pre-order) fashion. When we encounter that one path from the root (cell 0) to a leaf (the last cell) is not allowed (i.e. that the orientations of the cells which are encoded in the path through the tree does not lead to a valid triangulation), we have to track back and try another path through the tree.
In practice, of course, we do not follow each path to a final node and then find out whether a path leads to a valid triangulation, but rather use an inductive argument: if for all previously added cells the triangulation is a valid one, then we can find out whether a path through the tree can yield a valid triangulation by checking whether entering the present cell would introduce any faces that have a nonunique direction; if that is so, then we can stop following all paths below this point and track back immediately.
Nevertheless, it is already obvious that the tree has
4**N leaves in two space dimensions, since each of the N cells can be added in four orientations. Most of these nodes can be discarded rapidly, since firstly the orientation of the first cell is irrelevant, and secondly if we add one cell that has a neighbor that has already been added, then there are already only two possible orientations left, so the total number of checks we have to make until we find a valid way is significantly smaller than
4**N. However, the algorithm is still exponential in time and linear in memory (we only have to store the information for the present path in form of a stack of orientations of cells that have already been added).
In fact, the two examples above show that the exponential estimate is not a pessimized one: we indeed have to track back to one of the very first cells there to find a way to add all cells in a consistent fashion.
This discouraging situation is greatly improved by the fact that we have an alternative algorithm for 2d that is always linear in runtime (discovered and implemented by Michael Anderson of TICAM, University of Texas, in 2003), and that for 3d we can find an algorithm that in practice is usually only roughly linear in time and memory. We will describe these algorithms in the following.
The algorithm uses the fact that opposite faces of a cell need to have the same orientation. So you start with one arbitrary line, choose an orientation. Then the orientation of the opposite face is already fixed. Then go to the two cells across the two faces we have fixed: for them, one face is fixed, so we can also fix the opposite face. Go on with doing so. Eventually, we have done this for a string of cells. Then take one of the non-fixed faces of a cell which has already two fixed faces and do all this again.
In more detail, the algorithm is best illustrated using an example. We consider the mesh below:
* 9------10-------11 * | | /| * | | / | * | | / | * 6------7-----8 | * | | | | * | | | | * | | | | * 3------4-----5 | * | | \ | * | | \ | * | | \| * 0------1---------2 *
First a cell is chosen ( (0,1,4,3) in this case). A single side of the cell is oriented arbitrarily (3->4). This choice of orientation is then propagated through the mesh, across sides and elements. (0->1), (6->7) and (9->10). The involves edge-hopping and face hopping, giving a path through the mesh shown in dots.
* 9-->--10-------11 * | . | /| * | . | / | * | . | / | * 6-->--7-----8 | * | . | | | * | . | | | * | . | | | * 3-->--4-----5 | * | . | \ | * | X | \ | * | . | \| * 0-->--1---------2 *
This is then repeated for the other sides of the chosen element, orienting more sides of the mesh.
* 9-->--10-------11 * | | /| * v.....v.......V | * | | /. | * 6-->--7-----8 . | * | | | . | * | | | . | * | | | . | * 3-->--4-----5 . | * | | \. | * ^..X..^.......^ | * | | \| * 0-->--1---------2 *
Once an element has been completely oriented it need not be considered further. These elements are filled with o's in the diagrams. We then move to the next element.
* 9-->--10->-----11 * | ooo | . /| * v ooo v . V | * | ooo | . / | * 6-->--7-->--8 | * | | . | | * | | . | | * | | . | | * 3-->--4-->--5 | * | ooo | . \ | * ^ ooo ^ X ^ | * | ooo | . \| * 0-->--1-->------2 *
Repeating this gives
* 9-->--10->-----11 * | ooo | oooooo /| * v ooo v ooooo V | * | ooo | oooo / | * 6-->--7-->--8 | * | | | | * ^.....^..X..^...^ * | | | | * 3-->--4-->--5 | * | ooo | oooo \ | * ^ ooo ^ ooooo ^ | * | ooo | oooooo \| * 0-->--1-->------2 *
and the final oriented mesh is
* 9-->--10->-----11 * | | /| * v v V | * | | / | * 6-->--7-->--8 | * | | | | * ^ ^ ^ ^ * | | | | * 3-->--4-->--5 | * | | \ | * ^ ^ ^ | * | | \| * 0-->--1-->-------2 *
It is obvious that this algorithm has linear run-time, since it only ever touches each face exactly once.
The algorithm just described in the two-dimensional case is implemented for both 2d and (in generalized form) for 3d in this class. The 3d case uses sheets instead of strings of cells to work on. If a grid is orientable, then the algorithm is able to do its work in linear time; if it is not orientable, then it aborts in linear time as well.
Both algorithms are described in the paper "On orienting edges of unstructured two- and three-dimensional meshes", R. Agelek, M. Anderson, W. Bangerth, W. L. Barth, ACM Transactions on Mathematical Software, vol. 44, article 5, 2017. A preprint is available as arxiv 1512.02137.
Prior to the implementation of the algorithms described above (originally implemented by Michael Anderson in 2002, and re-implemented by Wolfgang Bangerth in 2016), we used a branch-and-cut algorithm initially implemented in 2000 by Wolfgang Bangerth. Although it is no longer used, here is how it works, and why it doesn't always work for large meshes since its run-time can be exponential in bad cases.
The first observation is that although there are counterexamples, problems are usually local. For example, in the second example mentioned above, if we had numbered the cells in a way that neighboring cells have similar cell numbers, then the amount of backtracking needed is greatly reduced. Therefore, in the implementation of the algorithm, the first step is to renumber the cells in a Cuthill-McKee fashion: start with the cell with the least number of neighbors and assign to it the cell number zero. Then find all neighbors of this cell and assign to them consecutive further numbers. Then find their neighbors that have not yet been numbered and assign to them numbers, and so on. Graphically, this represents finding zones of cells consecutively further away from the initial cells and number them in this front-marching way. This already greatly improves locality of problems and consequently reduced the necessary amount of backtracking.
The second point is that we can use some methods to prune the tree, which usually lead to a valid orientation of all cells very quickly.
The first such method is based on the observation that if we fail to insert one cell with number N, then this may not be due to cell N-1 unless N-1 is a direct neighbor of N. The reason is obvious: the chosen orientation of cell M could only affect the possibilities to add cell N if either it were a direct neighbor or if there were a sequence of cells that were added after M and that connected cells M and N. Clearly, for M=N-1, the latter cannot be the case. Conversely, if we fail to add cell N, then it is not necessary to track back to cell N-1, but we can track back to the neighbor of N with the largest cell index and which has already been added.
Unfortunately, this method can fail to yield a valid path through the tree if not applied with care. Consider the following situation, initially extracted from a mesh of 950 cells generated automatically by the program BAMG (this program usually generates meshes that are quite badly balanced, often have many – sometimes 10 or more – neighbors of one vertex, and exposed several problems in the initial algorithm; note also that the example is in 2d where we now have the much better algorithm described above, but the same observations also apply to 3d):
* 13----------14----15 * | \ | | * | \ 4 | 5 | * | \ | | * | 12-----10----11 * | | | | * | | | 7 | * | | | | * | 3 | 8-----9 * | | | | * | | | 6 | * | | | | * 4-----5-----6-----7 * | | | | * | 2 | 1 | 0 | * | | | | * 0-----1-----2-----3 *
Note that there is a hole in the middle. Assume now that the user described the first cell 0 by the vertex numbers
2 3 7 6, and cell 5 by
15 14 10 11, and assume that cells 1, 2, 3, and 4 are numbered such that 5 can be added in initial rotation. All other cells are numbered in the usual way, i.e. starting at the bottom left and counting counterclockwise. Given this description of cells, the algorithm will start with cell zero and add one cell after the other, up until the sixth one. Then the situation will be the following:
* 13----->---14--<--15 * | \ | | * | > 4 v 5 v * | \ | | * | 12->--10--<--11 * | | | | * ^ | | 7 | * | | | | * | 3 ^ 8-->--9 * | | | | * | | ^ 6 ^ * | | | | * 4-->--5-->--6-->--7 * | | | | * ^ 2 ^ 1 ^ 0 ^ * | | | | * 0-->--1-->--2-->--3 *
Coming now to cell 7, we see that the two opposite lines at its top and bottom have different directions; we will therefore find no orientation of cell 7 in which it can be added without violation of the consistency of the triangulation. According to the rule stated above, we track back to the neighbor with greatest index, which is cell 6, but since its bottom line is to the right, its top line must be to the right as well, so we won't be able to find an orientation of cell 6 such that 7 will fit into the triangulation. Then, if we have finished all possible orientations of cell 6, we track back to the neighbor of 6 with the largest index and which has been added already. This would be cell 0. However, we know that the orientation of cell 0 can't be important, so we conclude that there is no possible way to orient all the lines of the given cells such that they satisfy the requirements of deal.II triangulations. We know that this can't be, so it results in an exception be thrown.
The bottom line of this example is that when we looked at all possible orientations of cell 6, we couldn't find one such that cell 7 could be added, and then decided to track back to cell 0. We did not even attempt to turn cell 5, after which it would be simple to add cell 7. Thus, the algorithm described above has to be modified: we are only allowed to track back to that neighbor that has already been added, with the largest cell index, if we fail to add a cell in any orientation. If we track back further because we have exhausted all possible orientations but could add the cell (i.e. we track back since another cell, further down the road couldn't be added, irrespective of the orientation of the cell which we are presently considering), then we are not allowed to track back to one of its neighbors, but have to track back only one cell index.
The second method to prune the tree is that usually we cannot add a new cell since the orientation of one of its neighbors that have already been added is wrong. Thus, if we may try to rotate one of the neighbors (of course making sure that rotating that neighbor does not violate the consistency of the triangulation) in order to allow the present cell to be added.
While the first method could be explained in terms of backtracking in the tree of orientations more than one step at once, turning a neighbor means jumping to a totally different place in the tree. For both methods, one can find arguments that they will never miss a path that is valid and only skip paths that are invalid anyway.
These two methods have proven extremely efficient. We have been able to read very large grids (several ten thousands of cells) without the need to track back much. In particular, the time to find an ordering of the cells was found to be mostly linear in the number of cells, and the time to reorder them is usually much smaller (for example by one order of magnitude) than the time needed to read the data from a file, and also to actually generate the triangulation from this data using the Triangulation::create_triangulation() function.
Definition at line 632 1044 of file grid_reordering.cc.
Grids generated by grid generators may have an orientation of cells which is the inverse of the orientation required by deal.II.
In 2d and 3d this function checks whether all cells have negative or positive measure/volume. In the former case, all cells are inverted. It does nothing in 1d.
The inversion of cells might also work when only a subset of all cells have negative volume. However, grids consisting of a mixture of negative and positively oriented cells are very likely to be broken. Therefore, an exception is thrown, in case cells are not uniformly oriented.
Note, that this function should be called before reorder_cells(). | https://www.dealii.org/developer/doxygen/deal.II/classGridReordering.html | CC-MAIN-2017-43 | refinedweb | 3,981 | 62.31 |
index
count_wl...
error_wl...
<Timestamp: 2011-03-25 11:09:00>...
<Timestamp: 2011-03-25 11:10:00>...
All columns consist of either floats or integers, so the only "mixed" part is in the index. Within my dataframe I have invalid points flagged as -999. I want these removed before I start doing my stats work, but this fails:
- Code: Select all
from numpy import*
import pandas as pd
tmp = pd.load(filename)
tmp[tmp==-999] = NaN
Though this works:
- Code: Select all
tmp = tmp.T
tmp[tmp==-999] = NaN
I get the following error when I do not transpose:
ValueError: Cannot do boolean setting on mixed-type frame
I guess this is from the mixed index, but I can't imagine why I would get flagged for this.
I have another problem. Normally, the .describe() function will return a list of stats such as:
- Code: Select all
tmp = pd.load(filename)
>>> tmp.pixel_0.describe()
count 859.000000
mean 3388.121071
std 99045.175315
min -999.000000
25% 1.000000
50% 12.000000
75% 21.000000
max 2902894.000000
dtype: float64
But, after replacing my -999 values:
- Code: Select all
tmp = pd.load(filename).T
tmp[tmp==-999] = NaN
tmp = tmp.T
>>> tmp.pixel_0.describe()
count 858
unique 119
top 9
freq 32
dtype: int64
Again, this is indicative of a mixed type, but i cannot imagine what the "mixed" portion is. Everything within the pixel_0 column is of the same type. Anyone who can shed light on this will be very helpful. | http://www.python-forum.org/viewtopic.php?p=7780 | CC-MAIN-2015-06 | refinedweb | 252 | 78.45 |
pysense: lots of invalid temp/hum values
Hi,
I am running the following code on a pysense board (with a LoPy):
from pysense import Pysense
from SI7006A20 import SI7006A20
import time
import pycom
pycom.heartbeat(False)
py = Pysense()
tempHum = SI7006A20(py)
for i in range(1, 100):
print("{}/{}".format(tempHum.temp(), tempHum.humidity()))
time.sleep(60)
Below is the output. It seems that most of the values are invalid. Is there any issue with my code or any known issue with the board ?
35.33643/48.2554
35.30425/174.0031
5181.098/2693.725
35.34715/16.50949
5181.088/48.24777
35.2828/2756.156
5181.077/48.27829
35.27207/47.85104
35.27207/2662.483
1260.164/174.0183
5181.152/2662.49
5181.152/2662.49
5181.077/670.1731
1260.186/47.98837
1260.164/47.76712
5181.035/2662.445
Thanks !.
- chumelnicu
@bucknall Thank you !!! Just let me know when is ready ...i'm pretty sure a lot of people will apreciate mbus library :)
@syncope Unfortunately not yet, we're still to implement the internal temperature sensor from Espressif so are waiting to implement that code before releasing a library for the offset!
Thanks for your patience,
Alex
@chumelnicu This is on the roadmap but unfortunately I can't give you an estimate of when it will arrive as we have a number of important firmware features/changes that are taking priority.
Thanks for your patience!
@bucknall said in pysense: lots of invalid temp/hum values:!).
do you have any news about the offset?
- chumelnicu
What are you referring to? Are you asking about the internal temperature sensor and offset that I mentioned below?
@bucknall Hi Alex.Do you have any librery scheduled?
Just to know if it's matter of weeks or months.
@thomas, @bucknall
Thanks Thomas for checking thoroughly, thanks Bucknall for explaining the cause.
I managed reduce the offset to < 1 °C by just putting my WiPy on Break-away headers like this one.
The distance is makes clearly a difference. I did the test outdoor, without housing or enclosure, and compared to my simple outdoor thermometer. If you need exact values you certainly should elaborate more, but for my simple need +- 1° is good enough.
@livius
Thank you for the hint.
In fact we work with pymodbus, or I should say we use it. I'm not very deep in Modbus and new to Micropython, this is why I currently do not want to go into it deeper. But there are quite a few devices that talk modbus.!).
This is in the works and should be arriving soon!
I've just tested and it's also much better: no more errors so far. Only thing is a ~7° offset with the actual temperature. Any idea ?
Thanks anyway for the quick changes, great products, great team !
Thomas
@albert_c
As start point - here is modbus python(not micro python) lib
Maybe this can help somehow
@albert_c That's great to hear! We're pleased you're enjoying using our devices! :)
We have a MicroPython Modbus library on the roadmap for internal development; many people in the community have been requesting this library!
@bucknall Currently I'm evaluating my first WiPy's / PySenses around my house and garden. I am a Geologist in Germany and work in a small company focused on environmental measurements. We always check promising new products. The PyCom product family seems like a dream come true. Easy to use, cheap, if it now turns out to be also reliable we will certainly use Pycoms for our business. I wish you success!
Btw: a great thing to have would be a MicroPython Modbus library (in case you have spare time and dont know what to do with it :-) ).
I did an overnight test: no errors. Works perfectly!
Thanks a lot for the quick fix.
I've made some changes to the micropython libraries which should increase stability. Could you try them again from and let me know if you're still having the same issues?
Thanks! | https://forum.pycom.io/topic/1373/pysense-lots-of-invalid-temp-hum-values | CC-MAIN-2018-09 | refinedweb | 677 | 74.59 |
Suppose we have an array called points with some points in the form (x, y). Now the cost of connecting two points (xi, yi) and (xj, yj) is the Manhattan distance between them, the formula is |xi - xj| + |yi - yj|. We have to find the minimum cost to make all points connected.
So, if the input is like points = [(0,0),(3,3),(2,10),(6,3),(8,0)], then the output will be 22 because
so here total distance is (6+5+3+8) = 22.
To solve this, we will follow these steps −
Let us see the following implementation to get better understanding −
import heapq def solve(points): points_set = set(range(len(points))) heap = [(0, 0)] visited_node = set() total_distance = 0 while heap and len(visited_node) < len(points): distance, current_index = heapq.heappop(heap) if current_index not in visited_node: visited_node.add(current_index) points_set.discard(current_index) total_distance += distance x0, y0 = points[current_index] for next_index in points_set: x1, y1 = points[next_index] heapq.heappush(heap, (abs(x0 - x1) + abs(y0 - y1), next_index)) return total_distance points = [(0,0),(3,3),(2,10),(6,3),(8,0)] print(solve(points))
[(0,0),(3,3),(2,10),(6,3),(8,0)]
22 | https://www.tutorialspoint.com/program-to-find-minimum-cost-to-connect-all-points-in-python | CC-MAIN-2021-49 | refinedweb | 195 | 53.1 |
table of contents
NAME¶
s390_runtime_instr - enable/disable s390 CPU run-time instrumentation
SYNOPSIS¶
#include <asm/runtime_instr.h>
int s390_runtime_instr(int command, int signum);
DESCRIPTION¶. This argument was used to specify a signal number that should be delivered to the thread if the run-time instrumentation buffer was full or if the run-time-instrumentation-halted interrupt had occurred. This feature was never used, and in Linux 4.4 support for this feature was removed; thus, in current kernels, this argument is ignored.
RETURN VALUE¶.
ERRORS¶
-.
VERSIONS¶
This system call is available since Linux 3.7.
CONFORMING TO¶
This Linux-specific system call is available only on the s390 architecture. The run-time instrumentation facility is available beginning with System z EC12.
NOTES¶
Glibc does not provide a wrapper for this system call, use syscall(2) to call it.
The asm/runtime_instr.h header file is available since Linux 4.16.
Starting with Linux 4.4, support for signalling was removed, as was the check whether signum is a valid real-time signal. For backwards compatibility with older kernels, it is recommended to pass a valid real-time signal number in signum and install a handler for that signal.
SEE ALSO¶
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://manpages.debian.org/bullseye/manpages-dev/s390_runtime_instr.2.en.html | CC-MAIN-2021-49 | refinedweb | 235 | 57.98 |
2 posts in this topic
You need to be a member in order to leave a comment
Sign up for a new account in our community. It's easy!
Register a new account
Already have an account? Sign in here.
Similar Content
- By
- By ur
To get the current time stamp, I got the below code.
#include <Date.au3> #include <MsgBoxConstants.au3> #include <WindowsConstants.au3> #RequireAdmin ; Under Vista the Windows API "SetSystemTime" may be rejected due to system security $td = _Date_Time_GetSystemTime() $td = _Date_Time_SystemTimeToDateTimeStr($td) $td = StringReplace($td, " ", "_") $td = StringReplace($td, ":", "_") MsgBox(0,"",$td) But it is not giving the date or time of the timezone where the system is there.
Please suggestt
- By SkysLastChance
I am not sure what is happing at all, unfortunatlly there is no way I can put a full running code. When I enter the first and last name it works fine, However when I get to the date of birth it puts in '19760703000000'
I can't figure out why "7/3/1976" is the value before the formant and "07031976" is after the format.
I want it to pull the value after the format. "07031976"
$r = 1 Local $aArray = _Excel_RangeRead($oExcel, Default, Default,Default,False) For $i = 1 To UBound($aArray) - 1 ;$i =0 Start from row A $sR1 = $aArray[$i][0] ;status $sR2 = $aArray[$i][1] ;first name $sR3 = $aArray[$i][2] ;Last name $sR4 = $aArray[$i][4] ;DOB $sR5 = $aArray[$i][5] ;Email Address WinWaitActive ("[CLASS:Notepad]") ControlSend("[CLASS:Notepad]", "", "Edit1", $sR3 & ',' & $sR2 & @CR) Sleep (2000) ControlSend("[CLASS:Notepad]", "", "Edit1",("{TAB}")) Sleep (3000) ControlSend("[CLASS:Notepad]", "", "Edit1", $sR4 & @CR) ControlSend("[CLASS:Notepad]", "", "Edit1",("{ENTER}")) ControlSend("[CLASS:Notepad]", "", "Edit1",("{F12}") $r = $r + 1 If $r > $sBox Then Exit Endif Next auto it demo.xlsx - excel that I am using.
Edit: I also want to mention I have tried
Local $aArray = _Excel_RangeRead($oExcel, Default, Default,3) When I do this not even the name first and last name will write.
- By ioripalm!!!
- By Jefrey
Hi folks!
I bring this little executable file and an AutoIt wrapper for it to generate documents (bills and incomes) by sending variables (strings, numbers or even arrays) to a PHP script you wrote.
This is the syntax:
themecli.exe <input> <output> <vars>
input: The input PHP file (the extension does not need to be .php, it can be anything, like .html, .txt, .bin, .wtf...), relative to @ScriptDir output: The output HTML file to save, relative to @ScriptDir vars: A base64-encoded JSON with all the variables (hard? Don't worry! See below:) Looks confusing? Well, don't worry. That's why we have a wrapper!
Example, if we have this file "page.php":
Hello, <?=$user?>! How are you? And do this with AutoIt (using my JSONgen UDF [ ] which is already included in AutoPHP):
#include 'inc\autophp.au3' $ojson = New_JSON() JSON_AddElement($ojson, 'user', 'John Doe') $json = JSON_GetJSON($ojson) AutoPHP('page.php', 'hello.txt', $json) Note that you don't need to base64-encode it. After running it, we will see a file named "hello.txt", and the content is:
Hello, John Doe! How are you? On the PHP file you can also use loops and everything else supported in PHP4 (since it uses Bambalam PHP Compiler, it does not support PHP5 yet).
On the zip file there is an example of how to use it with loops.
License: same Bambalam PHP Compiler license:
Tip: If you use wkhtmltox, you'll be able to convert the generated HTML file into PDF with just a few lines! See here:
Download:
AutoPHP.zip full script, dependencies and examples (everything you need to start using AutoPHP) phpthemecli-src.zip Themecli binary and source (Themecli is the tool that AutoPHP uses to run PHP code - since it's already included on AutoPHP UDF, you don't need to download it, unless you want to use it with another programming language) | https://www.autoitscript.com/forum/topic/162259-ole2-compound-document-format/ | CC-MAIN-2017-43 | refinedweb | 647 | 63.49 |
Refresh Open GL in GLUT
I'm going through the NeHe tutorials, just started really. Anyway, I'm on the tutorial where you first actually have some rotating going on, unfortunately my view only refreshes when I resize the window. Here's all my code:
How can I make it refresh regularly. I guess this would be the beginning of an 'engine'?
Code:
#include <GLUT/glut.h>
// Constants -----------------------------------------------------------------
#define kWindowWidth 400
#define kWindowHeight 300
// Function Prototypes -------------------------------------------------------
GLvoid InitGL(GLvoid);
GLvoid DrawGLScene(GLvoid);
GLvoid ReSizeGLScene(int Width, int Height);
// Main ----------------------------------------------------------------------
GLfloat rtri; // Angle For The Triangle ( NEW )
GLfloat rquad; // Angle For The Quad ( NEW )
int main(int argc, char** argv)
{
glutInit(&argc, argv);
glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize (kWindowWidth, kWindowHeight);
glutInitWindowPosition (100, 100);
glutCreateWindow (argv[0]);
InitGL();
glutDisplayFunc(DrawGLScene);
glutReshapeFunc(ReSizeGLScene);
glutMainLoop();
return 0;
}
// Init ----------------------------------------------------------------------
GLvoid InitGL(GLvoid)
{MatrixMode(GL_PROJECTION);
glLoadIdentity(); // Reset The Projection Matrix
gluPerspective(45.0f,(GLfloat)kWindowWidth/(GLfloat)kWindowHeight,0.1f,100.0f); // Calculate The Aspect Ratio Of The Window
glMatrixMode(GL_MODELVIEW);
}
// DrawGLScene ---------------------------------------------------------------
GLvoid DrawGLScene(GLvoid)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear The Screen And The Depth Buffer
glLoadIdentity(); // Reset The View
glTranslatef(-1.5f,0.0f,-6.0f); // Move Left 1.5 Units And Into The Screen 6.0
glRotatef(rtri,0.0f,1.0f,0.0f); // Rotate The Triangle On The Y axis ( NEW )
glBegin(GL_TRIANGLES); // Drawing Using Triangles
glColor3f(1.0f,0.0f,0.0f); // Set The Color To Red
glVertex3f( 0.0f, 1.0f, 0.0f); // Top
glColor3f(0.0f,1.0f,0.0f); // Set The Color To Green
glVertex3f(-1.0f,-1.0f, 0.0f); // Bottom Left
glColor3f(0.0f,0.0f,1.0f); // Set The Color To Blue
glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom Right
glEnd(); // Finished Drawing The Triangle
glLoadIdentity(); // Reset The Current Modelview Matrix
glTranslatef(1.5f,0.0f,-6.0f); // Move Right 1.5 Units And Into The Screen 6.0
glRotatef(rquad,1.0f,0.0f,0.0f); // Rotate The Quad On The X axis ( NEW )
glColor3f(0.5f,0.5f,1.0f); // Set The Color To Blue One Time Only(); // Done Drawing The Quad
rtri+=0.2f; // Increase The Rotation Variable For The Triangle ( NEW )
rquad-=0.15f; // Decrease The Rotation Variable For The Quad ( NEW )
glFlush();
}
// ReSizeGLScene ------------------------------------------------------------
GLvoid ReSizeGLScene(int Width, int Height)
{
glViewport (0, 0, (GLsizei) Width, (GLsizei) Height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, (GLfloat) Width / (GLfloat) Height, 0.1, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
How can I make it refresh regularly. I guess this would be the beginning of an 'engine'?
Member
Posts: 70
Joined: 2004.06
Joined: 2004.06
Code:
// Function Prototype
GLvoid IdleFunc(GLvoid);
// Callback registration (along with glutDisplayFunc etc)
glutIdleFunc(IdleFunc);
// this is the actual idle function
GLvoid IdleFunc(GLvoid)
{
glutPostRedisplay();
}
First, if you've requested a double buffered context (GLUT_DOUBLE) then you need to swap the buffers to display anything. So use glutSwapBuffers() instead of glFlush(). The only reason you are seeing anything in your window is because GLUT forces a swap during window resize (the window is resized once upon creation.)
Then, to make this animate, you need to tell GLUT that you want to call DrawGLScene repeatedly. You could use glutIdleFunc(), or just add glutPostRedisplay() after glutSwapBuffers().
Then, to make this animate, you need to tell GLUT that you want to call DrawGLScene repeatedly. You could use glutIdleFunc(), or just add glutPostRedisplay() after glutSwapBuffers().
Thank you, very helpful, unfortunately glutSwapBuffers and post redisplay weren't yet mentioned in the tutorial, but now I know.
You can find an example on how to setup a window, an OpenGL context, capture Carbon events, all without GLUT, and load textures from JPG, PNG, TGA files, and play AIFF sounds with CoreAudio on the source code at:
All our games that used GLUT were converted not to use it, as we received quite a few complaints that the games were not rendering on certain hardware configurations running the lastest versions of OS X.
What, specifically, is broken? Full screen mode? On what hardware?
GLUT is just NSGL inside.
GLUT is just NSGL inside.
It seems to be something related to the fullscreen mode. The screen would show entirely black or just partially rendered. It was discussed on another thread a few weeks ago. I don't remember exactly what the cause is.
I didn't keep track of the hardware either. I just know that Danlabgames received quite a few complaints via e-mail or on Macupdate and VersionTracker about this issue and it was gone once we updated the games and got rid of GLUT.
I didn't keep track of the hardware either. I just know that Danlabgames received quite a few complaints via e-mail or on Macupdate and VersionTracker about this issue and it was gone once we updated the games and got rid of GLUT.
Possibly Related Threads... | http://idevgames.com/forums/thread-6297.html | CC-MAIN-2016-07 | refinedweb | 802 | 57.57 |
The 25 Most Dangerous Programming Errors 534
Hugh Pickens writes "The Register reports that experts from some 30 organizations worldwide have compiled 2010's list of the 25 most dangerous programming errors along with a novel way to prevent them: by drafting contracts that hold developers responsible when bugs creep into applications. The 25 flaws are the cause of almost every major cyber attack in recent history, including the ones that recently struck Google and 33 other large companies, as well as breaches suffered by military systems and millions of small business and home users. The top 25 entries are prioritized using inputs from over 20 different organizations, who evaluated each weakness based on prevalence and importance. Interestingly enough the classic buffer overflow ranked 3rd in the list while Cross-site Scripting and SQL Injection are considered the 1-2 punch of security weaknesses in 2010. the list states and includes a draft contract with the terms customers should request to enable buyers of custom software to make code writers responsible for checking the code and for fixing security flaws before software is delivered."
Yeah, right. (Score:5, Insightful)
I'll sign such a contract, but the project will take twice as long and my hourly rate will go up 300%.
People like to draw the comparison with civil engineering, where an engineer may be liable (even criminally) if, say, a bridge collapsed. But this isn't really the same thing. We're not talking about software that simply fails and causes damage. We're talking about software that fails when people deliberately attack it. This would be like holding a civil engineer responsible when a terrorist blows up a bridge -- he should have planned for a bomb being placed in just such-and-such location and made the bridge more resistant to attack.
The fault lies with two parties -- those who wrote the insecure code, and those who are attacking it. I'll start taking responsibility for my own software failures when the justice system starts tracking down these criminals and prosecuting them. Until then, I'll be damned if you're going to lay all the blame on me.
Re:Yeah, right. (Score:5, Insightful)
I could lock down any system and make 100% hacker proof - i'd unplug their server.
it's a ratio of risk to reward like most things, if you want zero risk there won't be any reward.
re:zero risk (Score:5, Insightful)
"Insisting on absolute safety is for people who don't have the balls to live in the real world."
- Mary Shafer, NASA Dryden Flight Research Center
Re: (Score:3, Funny)
Re: (Score:3, Insightful)
More to the point, the astronauts explicitly agreed to the risk. They knew what they were doing.
It's really not the same thing as bridge building at all. xD
Re: (Score:3, Interesting)
They wanted Feynman's name on the cover up but he wasn't going to go along with it after NASA engineers went around management and the rigged enquiry procedure and showed him the evidence.
Re:zero risk (Score:5, Funny)
is it me or is americans in love with absolutes?
You are 100% correct. Anything less would be un-American.
Re: (Score:3)
If simple common sense coding techniques can plug most of the avenues of attack, then any developer worth his salt SHOULD be responsible for plugging them...especially in this day and age where security holes are making headlines.
For the bridge analogy, I'd consider a buffer overflow equivalent to missing a rivet. If you know what you're doing it shouldn't be possible. Trusting user-generated input is one of the first taboos you learn about in computer science.
Re:Yeah, right. (Score:5, Interesting)
I once coded a program for my own use that downloaded images from binary newsgroups, decoded them, and inserted them into a PostgreSQL database, with keywords extracted from the message. It was a nice program, handled multipart messages, and only stored each image once, using SHA1 hashes to check for dublicates - I even took the possibility of a hash collision into account and only used them as an index. No buffer overruns, no SQL injections, no nothing. Yet it crashed. So why did it crash?
Some moron included the same image twice in a single message..
Re:Yeah, right. (Score:5, Funny)
Re:Yeah, right. (Score:5, Insightful).
It is nearly impossible if you want to enumerate all possible ways input can be wrong. Thus you should just enumerate all ways input is right. If you expecting for instance numerical input, don't look for ";" or ")" or anything that could have been inserted maliciously. Just throw everything away that doesn't fit [0-9]*.
You know the input your program can work with. So instead of trying to formulate rules how input may differ from the input you want to catch errors, write down the rules the input has to follow and reject everything else. This is straightforward.
Re: (Score:3, Informative)
Re:Yeah, right. (Score:4, Insightful)
I once coded a program for my own use that downloaded images from binary newsgroups, decoded them, and inserted them into a PostgreSQL database, with keywords extracted from the message.
So, I'm guessing you were building a porn search engine? For "research" purposes of course.
Re:Yeah, right. (Score:5, Funny)
Your ideas are intriguing to me and I wish to subscribe to your pr0n scraper.
Re: (Score:3, Insightful)
Ah, the ol' Reject-Known-Bad or Sanitise-All-Input paradigm. It is indeed impossible to anticipate all the bone-headed ways in which input can be botched, maliciously or not. Thus, it is then more secure to prepare a discreet list of valid input and only accept that, rejecting anything that does not conform to what you expect. This rejection is not based on a black-list of bad input to compare against, but on a sort of white-list what you assert your program can handle.
If you would have considered this a
Re:Yeah, right. (Score:5, Funny)
His code didn't expect two girls and one bucket
Re:Yeah, right. (Score:4, Insightful)
"Guess what, princess: coding is a hell of a lot easier to do, is simpler to test, and has less inherent risk than any other kind of engineering."
False, false, and true, in that order. The non-inherent risk can be pretty high though, but this is also true of other types of engineering.
"You can't put out a patch to fix a collapsed bridge"
...do you understand that the word "patch" did not originate in programming? I think the closest analogue to a collapsed bridge is unrecoverable data loss (or possibly hardware failure, which is rarely software-induced, at least on the desktop).
"or release a service pack for a unbalanced rotor shaft that destroys a generator"
...do you know what you're talking about? You also can't reinforce a bad password-negotiation algorithm with tempered steel, but that doesn't mean ANYTHING.
"You can't do destructive testing on a completed project in the real world."
Sure you can. The computer equivalent would be doing it on a live system. Stupid, in either case.
"You can't tell a client that it's going to take twice as long and he's going to have to pay you three times as much for you to do it properly."
Up until this point I could see where you were coming from, but this is divorced from reality. Cost and schedule overruns are not disproportionately present in the computer field, and anyway the GP didn't seem to be talking about overruns, he was talking about what it costs compared to the jackass who wants to use crazy glue and popsicle sticks to build his bridge. Where some of what you said before was ignorant or hyperbole, this is just a stupid statement. I realize you're trying to turn the GP's words on him, but you failed.
"You might have a program that takes two numbers and adds them together and spits out the answer. I type in a number and a letter, your program crashes because you're retarded and didn't sanitise the inputs. My bridge doesn't fall down because a bus drives over it instead of a car."
Do you realize that you just made an argument that building a bride that can support a bus is easier than input-validation? It isn't, of course. I can imagine extremely stupid ways to make a bad bridge.
"The programming errors identified are repeated, endemic errors. The attack vectors are the same every time. You have the tools to check and protect your code. Your software still breaks. That's why everyone's pissed at you. You have a mindset that any errors in the program will be trivial to correct, so you don't do it properly the first time."
True, false, inherently imperfect tools, true, true, strawman, misdirected anger.
"Your'e paid to produce software that performs a given function, not software that might work under some circumstances. Harden the fuck up, and do your job properly."
Not two paragraphs ago you argued that he should be paid the going rate for software that might work under some circumstances.
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
That day has already arrived in the form of recognised quality assurance standards (eg: ISO-9000). Such standards in both software and civil engineering are concerned with prevention, detection and remedy of faults rather than the individual's skill at bolting things together.
Re: (Score:2, Insightful)
Yep this isn't about removing vulnerabilities or improving quality - this is about making someone accountable.
Having a countract where the developer is made liable? This is management blame-storming at it's finest.
Re:Yeah, right. (Score:5, Insightful)
Anybody who honestly thinks that scary looking contracts are going to keep the engineers in line should read up on the sorts of things that happen in workplaces with real hazards: heavy machinery, toxic materials(and not the chickenshit "recognized by the state of california to cause reproductive harm" type, the "Schedule 2, Part A, per CWC" type), molten metal, exposed high voltages, and the like. Even when their lives are on the line, when the potential for imminent gruesome death is visible to even the least imaginative, people fuck up from time to time. They slip, they make the wrong motion, they get momentarily confused, some instinct that was real useful back when lions were the primary occupational hazard kicks in and the adrenalin shuts down their frontal lobe. Happens all the time, even in countries with some degree of industrial hygiene regulation.
Re:Yeah, right. (Score:5, Insightful)
y drafting contracts that hold developers responsible when bugs creep into applications.
Arguably the stupidest thing I've ever heard, and I'm old enough to have heard a lot of stupid shit.
Anybody who honestly thinks that scary looking contracts are going to keep the engineers in line
Is a moron who would have been a candidate for early-term abortion if we could only predict such things. The reality here is this: if you try to put engineers (especially software engineers) into a situation where every line of code they produce might put them in court, you're going to find yourself with a severe shortage of engineers. There are many things that creative minds can do, and if you make a particular line of work too personally dangerous nobody will enter that field.
More to the point however, only completely drain-bamaged organizations actually ship alpha code, which is obviously what we are talking about in this case. Because if we're not, if we're discussing production code that was overseen by competent management, conceived by competent designers, coded by competent software engineers and tested by competent QC engineers (you do have those, don't you?) then blaming the programmer alone is absolutely batshit insane, and will serve no legitimate purpose whatsoever.
Modern software development, much like the production of a motion picture, is a complex team effort, and singling out one sub-group of such an organization for punishment when failures occur (as it happens, the ones least responsible for such failures in shipping code) is just this side of brain-dead.
And I mean that about the programmers being the least responsible. Unless management has no functioning cerebral cortex material, they will understand and plan for bugs. You expect them, and you deal with them as part of your quality control process. Major failures can most frequently be attributed to a defective design and design review process: that sort of high-level work that happens long before a single developer writes one line of code. The reason that engineers who build bridges are not put in jail when a bridge fails and kills someone is because there are layers and layers and layers of review and error-checking that goes on before a design is approved for construction. It's no different in a well-run software team.
If your team is not well run, you have a management failure, not a programmer problem.
I had stupid people, I really do. And people that propose to punish programmers for bugs are fundamentally stupid.
Re:Yeah, right. (Score:4, Insightful)
The reality here is this: if you try to put engineers (especially software engineers) into a situation where every line of code they produce might put them in court, you're going to find yourself with a severe shortage of engineers.
I'd be happy to do the job. However, I'll probably never actually finish it, as I'll be checking it over and over for bugs until they get tired of me refusing to release it and sign off on it and fire me, at which point I'll move on to the next sucker^Wemployer and continue to collect a paycheck.
Re: (Score:3, Insightful)
The reality here is this: if you try to put engineers (especially software engineers) into a situation where every line of code they produce might put them in court, you're going to find yourself with a severe shortage of engineers.
Actually, you're going to end up paying a fortune for software in order to cover the developers' litigation insurance premiums. Most customers prefer to have cheaper software and carry the risk themselves.
Re:Yeah, right. (Score:4, Insightful)
I think the main reason we have so many bugs in software is quite simply that no one really cares. Of course everyone complains about it, but when you look past the words towards the actions, you can see it more clearly.
Everyone still buys the cheap software with tons of features. A simple bridge with a few modifications to an almost cookie cutter design costs a lot more than a very complex piece of custom business software with far more potential points of failure. And that's about right. If the bridge fails there's a good chance someone will die. If business software fails, someone might lose some money. So when you're looking at the risk of bugs in business software, paying for a lot of people to do detailed design, design reviews, code, code reviews, QA testing etc. etc. Well it just doesn't add up. The cost of getting it right is higher than the cost of dealing with the bugs.
The reason this contract is fundamentally stupid is because a vendor following it will have to increase the contract cost by an order of magnitude. Probably some more as well to cover the risk of litigation. Then the customer will have to weigh up the costs and risks, and realise their older contract might actually be more sensible in the real world.
Re:Yeah, right. (Score:5, Insightful)
Except what really happens is that American coders won't sign the documents. That's where Indian and Chinese agencies will sign "whatever", cash the check, and farm it out to low paid code monkeys. Legally, they're not in the USA so your contract is Worthless.
Re: (Score:3, Insightful)
Exactly. Why do you see that as a bad thing? Suppose instead of "contract" we say "these are the design/coding standards at this company and as an employee of this company you are required to follow them. If you don't then we will penalize you." What exactly is wrong with that?
For the last umpteen years, in all sorts of venues social and professional, I've been seeing accountability become more and more denigrated and dismissed. "Oh let's not play the blame gam
Re:Yeah, right. (Score:5, Insightful)
For "accountability" to be properly applied, it must always be connected to power. The relationship goes both ways. Nobody with power should ever lack accountability, lest their power degenerate into mere tyranny, and nobody with accountability should ever lack power, lest they merely be made the scapegoat. This is the real problem with the false "accountability" commonly found in organizational contexts:
If, for example, you have a "release engineer" who must sign off on a software product, or a team of mechanics that must get a 747 ready for passenger flight, those people must have the power to halt the release, or the flight, if they believe that there is a problem. If they do no have this power, they aren't actually "accountable" they are merely scapegoats, and the one who does have this power is truly accountable; but is dodging accountability by assigning it to subordinates. The trouble is, in real world situations, being the person proximately responsible for expensive delays is, at best, thankless. Unless the organization as a whole is invested in the importance of that role, the person filling it will be seen as an obstruction. Obstructions have a way of being circumvented. Assigning blame under those circumstances is actually the opposite of accountability; because punishing the person who didn't make the decision will mean letting the person who did off the hook(in the same way that falsely convicting the innocent isn't "tough on crime" because it implies releasing the guilty). The second issue is the belief that being made accountable will make humans behave fully responsibly. This isn't the abusive mess that the first issue is; but it is contrafactual and tends to distract attention away from the more valuable task of building systems that are (at least somewhat) resistant to human error. Even when accountability is correctly apportioned to power, humans are imperfect instruments. If you want to build systems of complexity unprecedented in human evolutionary history, you will have to learn to build systems that are tolerant of some amount of error. Checklists, automated interlocks, automated fuzz testing, etc, etc. must all be employed; because, ultimately, "accountability" and punishment, while they have their virtues, cannot remediate failure. Executing murderers doesn't resurrect their victims. Suing programmers doesn't recover data stolen in some hack attack. There isn't anything wrong with punishing the guilty; but its utility in accomplishing people's actual objectives is surprisingly tepid. People don't want to sue programmers, they want high-quality software. People don't want to fire mechanics, they want planes that don't crash. People don't want to incarcerate criminals, they want to be free of crime. "Accountability" is one tool that can be used to build the systems that people actually want(and there are arguments to be made that it is ethically obligatory in any case); but single minded focus on it will not achieve the ultimate objectives that people are actually seeking.
Re: (Score:2, Insightful)
Re:Yeah, right. (Score:5, Insightful)
Re:Yeah, right. (Score:5, Insightful).
Software is designed by humans. It won't be perfect. Unfortunately, software is targeted by miscreants because of its wide deployment, homogeneity, and relative invisibility, which are concepts that are still quite new to human society. I'd be willing to take responsibility for security failures in my products, but I'm sure as hell not going to do so when I'm subjected to your every idiotic whim as a client, nor will I do so at your currently pathetic pay rates. If you want me to take the fall for security failures, then I reserve the right not to incorporate inherently unsecure technologies into my solutions. In fact, I reserve the right to veto just about any god damned thing you can come up with. After all, I'm a security expert, and that's why you hired me, right? And I'm going to charge you $350 an hour. Don't like it? Go find somebody dumber than me to employ.
Re:Yeah, right. (Score:4, Interesting).
I mostly agree. My pet peeve is SQL injection attacks, because they are so frickin' easy to avoid. Any developer that leaves their code open to SQL injection attacks should be held liable (unless their employer insists they use a language that doesn't have prepared statements, in which case the company should be held liable).
Re: (Score:3, Interesting)
In my time, I have seen several instances of SQL injection-vulnerable code, and 99% of the time it comes from junior level developers, who obviously have had no security training.
Should the developer be liable, or the company that let them code without being trained?
Re: (Score:3, Funny)
even then, a decent DBA will prevent even the crappest program from being a problem.
When you find one of these elusive DBAs can you send me a reference, because so far I have yet to meet one even remotely tolerable, let alone "decent"
Re:Yeah, right. (Score:5, Interesting)
Unfortunately, many of these errors are _not_ subtle. Let's take Subversion as an example. It is filled with mishandling of user passwords, by storing them in plaintext in the svnserve "passwd" file or in the user's home directory. Given that it also provides password based SSH access, and stores those passwords in plaintext, it's clear that it was written by and is maintained by people who simply _do not care_ about security. Similarly, if you read the code, you will see numerous "case" statements that have no exception handling: they simply ignore cases that the programmer didn't think of.
This is widely spread, popular open source software, and it is _horrible_ from a security standpoint. Collabnet, the maintainers of it, simply have no excuse for this: they have been selling professional services for this software for years, and could have at least reviewed if not accepted outright the various patches for it. The primary patch would be to absolutely disable the password storage features at compilation time, by default, especially for SSH access. There was simply never an excuse to do this.
I urge anyone with an environment requiring security that doesn't have the resources to enforce only svn+ssh access to replace Subversion immediately with git, which is not only faster and more reliable but far more secure in its construction.
Re: (Score:3, Insightful)
I agree writing password to the disk is bad, but have you ever used CVS/SVN/etc. without stored passwords? You end up typing your password a thousand times a day, which is simply unusable.
So there needs to be *some* way to store passwords, or no one will use the system. On some systems there's a wallet/keychain/etc. available for secure password storage, but on most there is not, and there's certainly not a universal one among Win/Mac/linux/BSD/etc., so you pretty much have to write your own if you intend t
Re: (Score:3, Informative)
_Allow_ svn+ssh. Too many sites have managers unwilling to do so, and insistent on using HTTP with the user's own system passwords used in clear-text via HTTP, and stored in clear-text on build servers, because they cannot be bothered to allow competent managers to _enable_ svn+ssh.
The problem isn't that servers won't allow svn+ssh access, it's that managers for such sites won't allow the "svn" user to be configured and used in such circumstances. I've run into exactly that situation on several occastions:
Re:Yeah, right. (Score:5, Insightful)
There is a big difference between "not perfect" and "damn sloppy" and buffer overflows fall into the latter category. For decades we've been teaching students to make sure a buffer has enough space for a chunk of data before writing the data to the buffer. Any so-called programmer who does this is lazy or stupid, or both, and doesn't deserve the title of programmer or a job trying to do what real programmers do for a living. Good gravy, the quality of most software I encounter (and by that I mean software that I use) is so poor it's amazing! I find myself thinking with discouraging frequency "didn't anybody at Widget Co. even try this software out before shipping?"
Re: (Score:3, Insightful)
Actually, software is applied mathematics, which can be perfect to solve a discreet problem, indeed. However, there are many factors involved in the development of software, most unrelated to mathematics and algorithms, that hasten the design and implementation, and require compromising such perfect solutions.
It would then be more accurate to say Software is subject to business pressures, and as such it won't be perfect.
-dZ.
And who would buy such a car? (Score:3, Insightful)
There's another problem here which we seem to be forgetting: The user.
Users continue to buy systems with inferior security -- every dollar spent on Windows is a dollar telling Microsoft that you're OK with them taking months to fix known security bugs, and Apple is no better. Maybe this "contract" would help, though it will kill Easter Eggs, among other things, and that makes me sad.
But even if you design the most secure system ever, it's useless if the users aren't educated about security. This was specifi
Re: (Score:3, Funny)
Oh please... what's with this "Window" customer requirement? It's trivial for a thief to break it with a rock. So what exactly is the point of doors and locks????
Apparently all car makers are aiding and abetting by including windows.
Re:Yeah, right. (Score:5, Interesting)
Even a car with no locks shouldn't be responsible, you bought the car knowing full well there was no locks, if you want cars with locks, pressure those who make cars and take your business to one with locks.
Exactly. At a previous job I was in charge of maintaining a system for tracking clients' assets. By which I mean realtime GPS coordinates and telemetry of big frikkin trucks full of DVD players or plasma screens or whatever. Information which would actually be very dangerous Fast-and-the-Furious style if it were accessible by the wrong people. When I inherited this system I went to my manager and went "this could be cracked in about a hundred different ways simply by changing some numbers in the URL" and they said "why would anyone do that?" Later, the internal client who owned the data asked about security and I just said "it would basically need a rewrite, it has enough trouble not showing users each others' data let alone standing up to a deliberate attack" and so they swept it under the rug. I could probably still log on to one of their accounts today and find out where the truckload of free plasma screens was, if I was a bad person.
I almost stole a motorcycle by accident (Score:3, Interesting)
I figured it was simply a 1 in several thousand chance and was mildly amused.
Re: (Score:3, Interesting)
Re:Yeah, right. (Score:5, Informative)
Holding programmers accountable for their coding errors should happen inside of the corporation as they are working on the project. I don't remember which company had this, but if a developer broke the build it failed to pass a test a lava lamp at their cubicle would turn on, and until the developer fixed the build the lava lamp would stay on, which generally meant you had a certain amount of time to fix the issue before it would actually start bubbling. This way there is an incentive not to break the build, and a bit of competition between the various programmers to have the least amount of bugs or build breakages.
Having programmers imagine every way that their program may be attacked is impossible. There will always be new attacks that take advantage of that one that the programmer had not thought of. Just like the security systems that are in place at airports around the world. If the good guys could come up with every single scenario that an attacker could take airports would be much safer, as every single scenario had already been thought about.
I agree with you, don't put all the blame on me as a programmer.
Oh, if I had mod points, you sir would have them!
Re: (Score:3, Funny)
> Holding programmers accountable for their coding errors
We used to have a board where we would note "bozo the clown points" for anybody involved in the project, even managers !
;-)) [wikipedia.org]
Re:Yeah, right. (Score:4, Funny)
We used to have the "Diaper of Shame". That started when one of the engineers said "If my code is broken, I will wear a diaper around the office all day tomorrow". Sure enough, it was broken and sure enough, some one went out and got a package of adult diapers.
We let him wear it over his pants and afterwards it would just migrate to your cubicle.
I wonder if we could still do that today....I smell a harassment suit being stirred up somewhere.
Secure software: not about imagining every attack (Score:3, Insightful)
Having programmers imagine every way that their program may be attacked is impossible.
Fortunately, that's typically not required for software security. In a lot of cases, you can prove that for all inputs, the software does the intended thing.
For instance, if you know that the username variable will always be properly escaped, you don't care whether the user is called "bobby" or "bobby tables" ( [xkcd.com]).
It takes a lot of discipline, though, to always consider who the origin of a particular piece of data is, to decide (based on that) exactly what amount of trust to place in it,
Re:Yeah, right. (Score:5, Insightful)
Not only that, but civil/mechanical/other engineers usually know exactly what they are dealing with - a Civil engineer may specify the type of concrete used, car engineer may specify the alloy of steel.
Most of the time, software engineers don't have that luxury. Video Game consoles (and still are, mostly) used to be nice that way and it was the reason they had fewer problems than PCs.
Tell a bridge engineer that he has no absolutely control over the hardware he has to work with and that it may have a billion variations, and see if he signs his name to it.
Re:Yeah, right. (Score:4, Insightful)
Not only that, but civil/mechanical/other engineers usually know exactly what they are dealing with - a Civil engineer may specify the type of concrete used, car engineer may specify the alloy of steel.
But other engineers can't specify all the variables. They have to deal with the real world - rock mechanics, soil mechanics, wind, corrosion, etc. - so they too can never know exactly what they're dealing with. Many of the worst engineering disasters occured because some aspect of the natural world was poorly understood or not accounted for. However, it remains the engineer's responsibility to understand and mitigate those uncertainties.
Re:Yeah, right. (Score:5, Insightful)
Let's see. The top programming errors are:
Let people inject code into your website through cross site scripting.
Let people inject code into your database by improperly sanitizing your inputs.
Let people run code by not checking buffer sizes.
Granting more access than necessary.
Granting access through unreliable methods.
Geez, #7 is fricking directory traversal. DIRECTORY TRAVERSAL. In 2010! It's not like your drawbridge is getting nuked by terrorists here. Generally bridges are built to withstand certain calamaties, like small bombs, fires, hurricanes, earthquakes, etc. Being successfully assaulted through a directory traversal attack is like someone breaking into the drawbridge control room because you didn't install locks on the doors and left it open in the middle of the night. Why not leave out cookies and milk for the terrorists with a nice note saying "please don't kill us all [smiley face]" and consider that valid security for a major public-facing application.
Further down the list: Failing to encrypt sensitive data. Array index validation. Open redirects. Etc, etc, etc. These aren't super sophisticated attacks and preventative measures we're talking about here. Letting people upload and run PHP scripts! If you fall for THAT one, that's like a bridge that falls because some drunk highschooler hits it with a broken beer bottle. Forget contractual financial reprisals. If your code falls for that, the biggest reprisal should be an overwhelming sense of shame at the absolute swill that you've stunk out.
And yes, security takes longer than doing it improperly. It always does, and that has to be taken seriously. And it is still cheaper than cleaning up the costs of exposing your customer's banking information to hackers, or your research to competitors in China. Stop whining, man up, and take your shit seriously.
Re: (Score:3, Insightful)
While that (and a good many of these "bugs") sounds really really obvious, consider that many apps vulnerable to such attacks started as strictly single-user locally-running versions. Yes, you want to take basic steps to make sure your users don't accidentally overwrite system files (though any "real" OS does this for you), but for the most part, you trust a local user not to trash their own files. Permissions? If a (local) program
Re: (Score:3, Insightful)
You know, that's what modern operating systems with hardware abstraction layers and APIs, and high-level development toolkits are for.
Just because it was designed with that task doesn't mean it works as designed. Not all security issues are deterministic SQL injection prone scripts and can actually be affected by timing issues amongst other things.
Re: (Score:3, Insightful)
It's funny those nazi assholes are trying to pin all the problems on developers. A background check? Are they fucking crazy?
A lot less software bugs would exist if PHBs weren't trying to cut costs all the time, assigning to the lowest bidder, establishing stupid project timelines and disregarding training, planning, documentation and tests as useless time-wasters.
Re:Yeah, right. (Score:4, Insightful)
Software bugs are logic typos. Have you ever made a grammatical error? Reading your post, I can say yes. Bugs are like that. In projects with tens of thousands of lines of code, it is unreasonable and completely unrealistic to expect every line to be a pinnacle of perfection, just like it is unreasonable to expect that every sentence in a book is completely without error.
Security holes tend to be failures to predict the way that things might "align" as to allow unforeseen things to happen. Working to specification is in no way, shape, or form a guarantee that something is secure. It is impossible to predict new security holes - if it were, the vast majority wouldn't exist to begin with. Further, when dealing with other libraries and programs (like every application on the planet), there are variables beyond the programmer's control, which might not be totally as they should be. If you know of somebody who can write specs that compensate totally for unknowns, I think you should shut up and go ask them for lottery numbers.
Come back when you have even a marginal understanding of what is involved in programming.
Duty of care NOT perfect code (Score:3, Interesting)
...it is unreasonable and completely unrealistic to expect every line to be a pinnacle of perfection, just like it is unreasonable to expect that every sentence in a book is completely without error.
And every lawyer you ever talk to will agree with you AND then tell you that what you just said is irrelevant. Nobody really cares if the code is perfect. What they care about is if the code failed and someone got hurt (financially, physically, etc) as a result. If the code is designed and/or implemented such that a reasonably common and foreseeable attack (say a buffer overflow) can and does occur and harm results, then the programmer has failed in their duty of care [wikipedia.org]. Doctors, (civil) engineers, lawye
Re: (Score:3, Interesting)
T
Re: (Score:3, Insightful)
You're probably still in school, but I'll give you a break. Allow me to quote Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."
Anyway... back to the Ivory Tower with you. The hour is getting late, and I think your faculty advisor has a cup of warm milk and a cozy set of jammies ready for you.
Re: (Score:3, Interesting)
No. Software is like locks or cryptography. There is not a lock in the world that can _never_ be broken. There's not a crypto system ever invented that will _never_ be broken - at least not if people give a damn about getting in. That's why locks are rated based on how long it would likely take to crack them. And crypto systems are continually upgraded - because it can be practical on today's computers and secure for the next 20 years or so, or it can be secure for maybe 100 years but take three days to enc
Re: (Score:3, Interesting)
Common misconceptions. It is indeed possible to have perfect cryptography ( [wikipedia.org]), encryption time does not scale with encryption strength as you might expect, and modern ciphers with reasonably long keys are very fast, and are not expected to be broken anytime in the near future. Brute forcing a 256bit key is impossible in this universe with the laws of physics as we know them, and DES has been around for over 30 years now with no major breakthroughs. Using AES-256 yo
Re: (Score:3, Informative)
Brute forcing a 256bit key is impossible in this universe with the laws of physics as we know them
RSA keys up to 768bit have been broken: [wikipedia.org]
The RSA-100 challange, which had a strength of 300 bits: "It takes four hours to repeat this factorization using the program Msieve on a 2200 MHz Athlon 64 processor." [wikipedia.org]
Re: (Score:3, Interesting)
Bad analogy. There are, in fact, several crypto systems which have never been broken, and aren't likely to be broken until quantum computing is practical, and maybe not even then.
Quick summary of my position: Software can be made invincible. The cost of doing so is prohibitive, especially given the amount of legacy code we have to work with. New tools like garbage-collected languages and ORMs which properly abstract SQL away can help, a lot, to reach that middle ground (instantly eliminating two of the top
Re: (Score:3, Interesting)
Look, I am not going to talk down to or insult you as most of the replies have, but you have to realize a couple of things:
Re:Yeah, right. (Score:4, Insightful)
Specification -- what specification ?
When engineers make a new airline/bridge/circuit, they model the entire thing on a computer first. The CAD model is an unambiguous model of the plane. Important subsystems in it are modelled and analysed independently and in conjunction with the components around it.
So, if writing software was similar, we would first model the software on a computer. Oh, er, wait a moment. In an important sense, software is the specification. The only unambiguous specification is the actual software [otherwise we could make whatever was used for the specification be the programming language].
When someone designs a bad aircraft, the design is modelled, flaws are found and the design is improved. Nobody builds the thing until they feel pretty sure the design is right. However, software is often bad for the same reason that an initial design of anything else is bad. If it was equivalent to an airplane, windows 95 for instance, once designed, would never have been built. However, once the design for a piece of software is complete, one has created the software. With software there is no meaningful way to separate the specification and the implementation.
So, the question boils down to: how much time/money do you want to spend ? The answer from the client is generally 'as little as we can get away with'
Re:Yeah, right. (Score:5, Insightful)
How many clients have you ever met that actually ~know~ what they want?
:-)
Re: (Score:3, Insightful)
Formally correct software does not fail in the sense that it 'suddenly' stops working. If it has a 'bug', then the 'bug' has always been there. That's what I mean with failing, because the parent of my post made an analogy between bridges and computer programs. An
Errors, Schmerrors (Score:5, Funny)
A real programmer can do all 25 in one line of code.
Re:Errors, Schmerrors (Score:5, Funny)
#include "win32.h"
/* :p */
Alanis ? (Score:5, Funny)
Kind of ironic the report is a PDF file, when another report stated that PDF accounts for 9/10 (or something like that) exploits last year.
I bookmarked this immediately (Score:3, Interesting)
Some of the errors are not relevant, mainly having my code in a managed (i.e.
.NET) environment. The SQL injection and XSS potential vulnerabilities are still very relavent to me. Although most of my responsibility lies in code which is only reached via a https authenticated connection, as with any other web programmer, a "trusted" user can still -especially- find exploits.
This is even more true in inherited code. If you inherited code from a previous employee, I recommend a rigorous audit of the input and output validation. You just don't know what was missed in something you didn't write.
And Number 26 ... (Score:3, Funny)
Bad Idea (Score:5, Insightful)
Holding a gun to somebody's head won't make them a better developer.
I don't understand why well-known and tested techniques can't be used to catch these bugs. There are many ways to help ensure code quality stays high, from good automated and manual testing to full-on code reviews. The problem is that most companies aren't willing to spend the money on them and most open source projects don't have the manpower to dedicate to testing and review.
TFA seems like it's just looking for somebody to blame when the axe falls. If your method of preventing bugs is to fire everybody that makes a programming mistake pretty soon you won't have any developers left.
Re: (Score:3, Insightful)
Re: (Score:3, Interesting)
Holding a gun to somebody's head won't make them a better developer.
I don't understand why well-known and tested techniques can't be used to catch these bugs. soft
Re:Bad Idea (Score:4, Insightful) software, and security implications of config options, is an excellent idea.) For most of the documentation requirements, I don't really need to hear how you plan to do it: I just need to know that, if you screw up, you're going to be (at least partially) financially liable. And yet, the contract fails to specify that. What happens when there *is* a security breach, despite all the documentation saying the software is secure? If the procedures weren't followed, then that's obviously a breach of contract — but what if there was a problem anyway?
I actually like designating a single person in charge of security. Finding someone to blame after the fact is a horrible idea. However, having someone who's job it is to pay attention early, with the authority to do something about it is an excellent way to make sure it doesn't just fall through the cracks. By requiring their personal signoff on deliverables, you give them the power they need to be effective. (Of course, if management inside your vendor is so bad that they get forced into just rubber-stamping everything, that's a different problem. But if you wanted to micromanage every detail of how your vendor does things internally, why are you contracting to a vendor?)
I don't agree there re the lazy comment. The reason poor coders release insecure code is because they are lazy. For the rest of us, it is generally because we are told we MUST release X features by go-live date. Go-live date will not slip under any circumstances. X features are non-negotiable for go-live date. The project manager (not the development PM, the project owner) has assigned a certain period for testing, however this testing is never SIT or the such, it is usually UAT of a very non-technical nature and the devs time is spent on feedback from UAT. Development itself has virtually no proper regression / SIT / design time factored in. The development team are never asked how long realistically something will take, instead some non-technical person will tell them how long they have and then tells them to figure out how to make it happen. Specs will change continuously throughout the project so a design approach at the beginning will be all but useless at the end after numerous fundamental changes (got this one on a project I'm working on now - had my part finished, fully tested and ready to deploy about 3 months ago, then change after change after change and I'm still doing dev and if I mention that I need time to conduct SIT / regression testing I'm told "but I thought you fully tested it already a few months ago?"). This leads a dev with a fast approaching deadline, who doesn't have the authority to say "no, this won't give us enough time to test properly" and the emphasis on being feature complete rather than a few features down but fully tested and secure.
This of course does not even touch on the subject of what happens if a third party library or other software sourced externally has vulnerabilities. Can you in good faith sign off that you guarantee a piece of software is totally secure without knowing how third party libraries, runtime environments or whatever were developed? This is not just isolated to open source, try holding MS liable for a security vulnerability that was uncovered after you deployed and see how far you get. This then starts taking us out of the realm of absolutes and into the realm of "best practices" etc. So how good is a contract that expects the signatory to follow "best practi
and yet they do not mention COBOL (Score:2)
The obvious follow-up question... (Score:2)
Oh, you mean VENDORs, not DEVELOPERs (Score:2)
When you say "developer", I think individual employee. However, the individual employee isn't around long enough, the project validation will more than likely happen after the majority of them have finished, taken their final pay and left.
As for the actual contract? It reads like lawyer bait.
Consistent with the provisions of this Contract, the Vendor shall use the highest applicable industry standards for sound secure software development practices to resolve critical
security
They missed one (Score:2)
Just Show Me the List!! (Score:5, Informative)
So much shit. So much commentary. Just gimme the list? Here it is:
Re: (Score:3, Informative)
What's sad is that SQL injection and good old primitive buffer overflow still top the list...
Regarding #2, I'm inclined to blame PHP for that thing being so high up there. Its standard library way of handling parameters in SQL statements has long been lacking - and while it's definitely possible to get right, and there are frameworks which make it easier, too much "HOWTO" and "learn in 24 seconds" PHP code out there is written without any regard to injection possibility, and it gets blindly copied over and
Re: (Score:3, Informative)
Typically such a flaw appears in web applications. As such, the attacker does not have access to the local machine, and such an attack gives him information would could aid him in gaining access (usernames).
Sanitization is a worrying term to use. (Score:3, Informative)
Improper Sanitization of Special Elements used in an OS Command
The best solution is not "sanitization" (which people usually perform by blocking or editing out what THEY think are dangerous metacharacters) but proper encapsulation. In addition, there's a misleading section here:
Execl() is not a "C" API, it's a UNIX API. It doesn't involve quoting. On a UNIX system, you can safely take advantage of this mechanism to pass parameters and bypass either shell or application quoting inconsistencies. On Windows, even if your program is in Perl and you pass system() an array of arguments, Perl is still at the mercy of the called program to correctly parse the quoted string it gets from CreateProcess()... *unless* you are operating under the POSIX subsystem or a derivitive like Interix.
In addition, whether you quote your arguments, use execl(), or use a smart wrapper like Perl's system(), you still need to ensure that COMMAND level metacharacters (like the leading dash (on UNIX) or slash (on Windows) of an option string) are properly handled.
This latter problem may remain even if you pass the command arguments through a configuration file to avoid the possibility of shell metacharacters being exploited.
Whitelists can't be simplistic. You can't ban the use of "-" in email addresses, for example. Encoding is better.
Re:Sanitization is a worrying term to use. (Score:4, Informative)
Use prepared statements. A prepared "INSERT INTO FOO (BAR, BAZ, BIFF) VALUES (?, ?,?)", along with parameters from the user, is safe from SQL injection attacks, no matter the parameters.
Unfortunately there are a few cases you can't do that. No way to use a prepared statement for an "IN" clause, for instance.
Lol @ Dangerous (Score:3, Informative)
I work as a software developer in the avionics industry.
This list is ridiculous.
There's nothing any website programmer could do that is even remotely dangerous compared to what we could screw up yet all I see in the list are website programming bugs.
Actions speak louder than words (Score:3, Insightful)
"As a customer, you have the power to influence vendors to provide more secure products by letting them know that security is important to you,"
And, as a consumer, you have the power to influence vendors to provide better employment and buying practices by letting them know that they are important to you.
Meanwhile, the vast majority of America continues to shop at Walmart whilst every competitor goes out of business.
"Does it get the job done? Now what's the cheapest I can get it for?" is most people's primary motivation.
Sellers, who listen to them saying, "I want security!" and deliver that, at the expense of greater cost, are then left wondering why the competitor who did just enough to avoid standing out on security but otherwise kept their product slightly cheaper is selling many times more copies.
So, yes, people can influence sellers with their actions. The problem is, it needs to be their actions, not their words. Even worse, they're already successfully doing just that - unfortunately, their actions are screaming something quite different to any words about, "Security is truly important to me."
What it's like to do software like that (Score:5, Interesting)
Been there, done that, in an aerospace company. Here's what it's like.
First, the security clearance. There's the background check before hiring, which doesn't mean much. Then, there's the real background check. The one where the FBI visits your neighbors. The one where, one day, you're sent to an unmarked office in an anonymous building for a lie detector test.
Programming is waterfall model. There are requirements documents, and, especially, there are interface documents. In the aerospace world, interface documents define the interface. If a part doesn't conform to the interface document, the part is broken, not the document. The part gets fixed, not the documentation. (This is why you can take a Rolls Royce engine off a 747, put on a GE engine, and go fly.)
Memory-safe languages are preferred. The Air Force used to use Jovial. Ada is still widely used in flight software. Key telephony software uses Erlang.
Changes require change orders, and are billable to the customer as additional work. Code changes are tied back to change orders, just like drawing changes on metal parts.
In some security applications, the customer (usually a 3-letter agency) has their own "tiger teams" who attack the software. Failure is expensive for the contractor. NSA once had the policy that two successive failures meant vendor disqualification. (Sadly, they had to lighten up, except for some very critical systems.)
So that's what it's like to do it right.
A real problem today is that we need a few rock-solid components built to those standards. DNS servers and Border Gateway Protocol nodes would be a good example. They perform a well-defined security-critical function that doesn't change much. Somebody should be selling one that meets high security standards (EAL-6, at least.) It should be running on an EAL-6 operating system, like Green Hills Integrity.
We're not seeing those trusted boxes.
The most dangerous C programming error (Score:5, Funny)
launch_missles ();
Re: (Score:3, Funny)
//Fixed.
void le_nap(void)
{
sleep 500;
}
if (alert_code = red)
{
if (le_tired) le_nap;
launch_missles ();
}
Wrong approach. (Score:3, Interesting)
By all means, accountability is great.
But saying the developer is at fault is ridiculous. It opens the door for companies to mismanage projects as per usual, and clueless HR departments to hire people who don't know what they're doing, and fire people arbitrarily every time they have a complaint from someone that the software doesn't work.
Start the responsibility with the company. If the company sends a flawed product and are to be made accountable, then the organisation needs to prove:
* It has proper QA processes in place to test the product, and that the staff are suitably qualified.
* The project management was performed to allow for proper specification, design and development within the normal working hours of a day, taking holidays and time lost due to usual unforeseen circumstances.
* Training, or self learning time is allocated to enable staff to keep current with developments and issues with languages/compilers/methods they use.
If you're going to hold a developer responsible, then it should be absolutely certain that everyone in the dependancy chain for that person is responsible. Did HR hire someone who wasn't fit for purpose? Their job is to ensure that doesn't happen. They're the start of the problem chain.
Did management provide the logistics necessary to complete the job to a quality? If not, they should be liable.
Did the sales team (if it's bespoke software) make impossible promises (if so, and developer opinion was overturned such that a 'broken' system was arrived at from spec, then the salesman should be accountable).
Did the producer of the spec introduce a design flaw that resulted in the error? If it wasn't the developer, then the specifier/designer was at fault.
Pretty much whichever way you look at it, management and HR should carry the can first, leaking down to the developer, if you're going to be sensible about it. If a place is well run, well managed then sure, have developer liability, but expect to have raised costs to cover developer's professional liability insurance.
The 25 most dangerous programming errors (Score:3)
1. PHP.
...better stop there before I get modded into oblivion.
2. Visual BASIC.
3. Perl.
4. C.
5. C++.
Blame HTML and the browser for XSS. (Score:3, Insightful)
Re: (Score:3, Interesting)
And lets not forget to put some blame on the OS. If the OS would provided a framework to properly isolate applications from each other most exploids would simply turn into a harmless denial of service. I couldn't care less if a broken PDF crashes the PDF reader, but I if that broken PDF can get access to my whole system something is seriously wrong with the underlying OS. There is no reason why a PDF reader, webbrowser or most other tools should ever need access to my whole system. Access to a window to dra
Re:Background checks are awful and stupid (Score:4, Insightful)
Child molesters are really a special case; they have a mental disorder. However, even there the system is fucked. A guy who screws a 16-year-old girl when he's 18 is NOT a child molester. The only people who should be guilty of true child molestation are those who molest pro-pubescent children, like 12 and under. That's where you someone is truly sick in the head, because no normal man would ever be attracted to a pre-pubescent child. But lots of men will admit to being attracted to a 17-year-old girl. Lots of female movie stars aren't much older than this.
Re: (Score:3)
Sorry, no, because then that would require criminalizing relationships between 60-year-old men and 30-year-old women, and that's just wrong. Or, it would create a harsher sentence for a 60-year-old raping a 30-year old than a 20-year-old raping a 12-year-old, and that's very wrong (one's a prepubescent child, and the other is not).
Re: (Score:3)
Even there, I don't think anyone should be considered a child molester if the victim is 17. There's no significant difference between a 17-year-old and an 18-year-old, yet one is adult and the other isn't.
The dividing line should be at 12 or 13, which is pubescence. Child molesters aren't even interested in 17-year-old girls, they're too old for them. They'd rather rape someone who's 5.
Re:Background checks are awful and stupid (Score:4, Insightful)
Actually, I have to say I can't blame the guy. There's some freaks on this site who think it's funny to "out" someone. Someone did it to me a while ago, calling me by my real name, even though there's no references I know of in my profile to my real identity. I have no idea how he did it. It's why I never say much specifically about my employment here, or if I say a little too much, I post anonymously, even though I hate doing that because it makes it impossible for me to read any responses.
So if Mr. Anonymous gives enough information about his crime, some freak could very well go to the trouble of spending a day digging through government websites to try to find his real identity and post it here.
Re: (Score:3, Insightful)
Thank you for an enjoyable half hour wandering through your website. You're a total nutter, but it pleased me to see that my Internet Kook detectors are properly calibrated.
Re: (Score:3, Informative)
Yes and no. The problem was that the software could get into an inconsistent state - this shouldn't happen, but it shouldn't be a problem. And it wasn't, because the first generation had hardware interlocks that made it fail safely (beam wouldn't activate).
Cutting corners was the biggest problem. Had they not removed the interlocks for cost reasons, nothing bad could have happened. It would have been physically impossible.
Another couple of deaths due to the profit motive. I don't mean to suggest that the pr
Re: (Score:3, Interesting)
I have mod points so I would mod you up.
However you're an AC, and lots of people browse
/. with all AC's automatically downmodded to -1 so there's probably not much point. But I agree with much of what you say - with more to add.
Most of the arguments against this article boil down to one single thing.
"It's too hard."
You know something? That's a lousy argument. If "It's too hard" was a real argument against reliable software, the airline industry would never have developed modern autopilots without planes | http://developers.slashdot.org/story/10/02/17/2327253/the-25-most-dangerous-programming-errors?sdsrc=next | CC-MAIN-2015-22 | refinedweb | 10,239 | 61.56 |
#include <stdio.h> #include <string.h> #include <stdlib.h> //CONSTANTS #define wrdlen 48 #define linelen 1024 // A struct representing a node in a binary search tree struct treenode { // The contents of the node char word[wrdlen]; // Links to the node's left and right children struct treenode* left; struct treenode* right; }; // Adds a new node to the tree. Duplicates are disallowed. Returns 1 if a // new node was added, returns 0 if newdata was already in the tree int insert(struct treenode** root, char newword[wrdlen]) { // If we've reached the right place to insert, create a new node and add it in if( (*root) == NULL) { (*root) = (struct treenode*)malloc(sizeof(struct treenode)); strcpy((*root)->word,newword); (*root)->left = NULL; (*root)->right = NULL; return 1; } // Otherwise, search for the correct place to insert if(strcmp(newword,(*root)->word)<0) { return insert( &((*root)->left), newword); } else if(strcmp(newword,(*root)->word)>0) { return insert( &((*root)->right), newword); } // If the new data is neither less than nor greater than the the data at // the current node, it must be equal, and duplicates are not allowed else return 0; } // Returns 1 if target is in the tree and 0 otherwise int search(struct treenode* root, char target[wrdlen]) { // An empty tree contains nothing, much less target if(root == NULL) return 0; // If the current node is what we're looking for, we've found it if(strcmp(root->word,target) == 0) return 1; // If what we're looking for is smaller than this node, it must be in // the left subtree if it exists if(strcmp(target,root->word) < 0) return search(root->left, target); // Similarly, if the target is greater than this node, it can only be in // the right subtree else return search(root->right, target); } // An iterative version of the search algorithm int searchiterative(struct treenode* root, char target[wrdlen]) { struct treenode* crnt = root; // Keep descending through the tree until we reach the bottom or find what // we're looking for while(crnt != NULL && strcmp(crnt->word,target)!=0) { if(strcmp(target,crnt->word)>0) crnt = crnt->left; else crnt = crnt->right; } // If we reached the bottom of the tree, then the target isn't present, // otherwise we found what we're looking for if(crnt == NULL) return 0; else return 1; } void spellcheck(struct treenode* root, char *token, int line) { //capital letters are from 65 -> 90 //lowercase letters are from 97-122 if(search(root, token))//if you find it normally then return; else if(token[0] >= 65 && token[0] <= 90) { token[0] = token[0] + 32; if(search(root, token)) return; else { token[0] = token[0] - 32; printf("Line %d: %s\n", line, token); return; } } else { printf("Line %d: %s\n", line, token); return; } } int main(void) { FILE *ifp; //dictionary file FILE *scfp; //file to be spellchecked char infile[wrdlen]; //"words.txt"; char Tinfile[wrdlen]; //"test.txt"; //test file int valid = 0; struct treenode* root; char newword[wrdlen]; root = NULL; int i = 0; char str[linelen]; char delims[] = ("~!@#$%^&*()-_=+[]{}\\|;:\'\",.<>/?\n\r\t "); root = NULL; //ask the user for the dictionary file while(!valid) { printf("Please enter the name of the dictionary file you wish to access\n"); scanf("%s", infile); ifp = fopen(infile, "r"); if(ifp == NULL) printf("sorry, could not find that file!\n"); else { valid = 1; printf("Reading file now.........\n"); } } valid = 0; //read in all the words and place them into a BST while(!feof(ifp)) { fscanf(ifp, "%s ", newword); insert(&root, newword); } //ask the user for the file to be spellchecked while(!valid) { printf("what is the name of the file you would like to spellcheck?\n"); scanf("%s", Tinfile); scfp = fopen(Tinfile, "r"); if(scfp == NULL) printf("sorry, could not find that file!\n"); else { valid = 1; printf("Reading file now.........\n"); } } printf("The following words were not recognized: \n"); for(i = 1; !feof(scfp); i++) { // start my spellchecking algorithm here fgets(str, linelen, scfp); char *token; token = strtok(str, delims); while( token != NULL ) { spellcheck(root, token, i); token = strtok( NULL, delims ); } } system("PAUSE"); return; }
Are you are experiencing a similar issue? Get a personalized answer when you ask a related question.
Have a better answer? Share it in a comment.
From novice to tech pro — start learning today.
You have a double "" at the end of :
char delims[] = {"~!@#$%^&*()-_=+[]{}\\|;:
The correct initialization is what you had earlier :
char delims[] = "~!@#$%^&*()-_=+[]{}\\|;:\.
>> 1> c:\program files\microsoft visual studio 8\vc\include\string.h(74) : see declaration of 'strcpy'
This is just VC being weird. strcpy is not deprecated by the standard - it's perfectly fine to keep using it (as long as you use it correctly) :)
>> 1>c:\users\mike regas\documents\cop3205\pr
in C, all variable declarations have to be at the start of a block. Line 133 :
root = NULL;
is followed by more declarations - that's not allowed in C. Just remove line 133.
You initialize root to NULL a few lines further anyway, and you could also consider doing the initialization the moment root is declared :
struct treenode* root = NULL;
It's always good to initialize your variables as soon as they're declared.
char delims[] = "~!@#$%^&*()-_=+[]{}\\|;:\
Open in new window
>> char delims[] = "~!@#$%^&*()-_=+[]{}\\|;:\
That line looks fine.
I am going to post the code again so that you can look I will also post the errors below:
1>------ Rebuild All started: Project: assignment 408, Configuration: Debug Win32 ------
1>Deleting intermediate and output files for project 'assignment 408', configuration 'Debug|Win32'
1>Compiling...
1>test4.c
1>c:\users\mike regas\documents\cop3205\pr
1> c:\program files\microsoft visual studio 8\vc\include\string.h(74) : see declaration of 'strcpy'>Build log was saved at ":\Users\Mike Regas\Documents\COP3205\Pr
1>assignment 408 - 3 error(s), 6 warning(s)
========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========
Open in new window
this is my dictionary file below and it is crashing while it is reading it
Open in new window
Note that you didn't initialize root to NULL - you really have to do that, since your insert function relies on it being root.
where do I put the NULL statement that you are talking about?
where do I put the NULL statement that you are talking about?
where do I put the NULL statement that you are talking about?
See http:#22926176
May I ask why you gave a B grade ? That usually means that something was missing in the answer and/or that something is still unclear. If that's the case, then do not hesitate to ask for clarification where needed. | https://www.experts-exchange.com/questions/23892281/spell-checker-with-BST.html | CC-MAIN-2018-22 | refinedweb | 1,096 | 57.61 |
I've updated the article and code based on reader comments—thank you very much for reading and voting! The original event tracer class would trace only events that followed the .NET Framework design pattern for events—that's most of the events you'll encounter but not all of the ones that C# allows. I've added a second event tracer class that will trace many of the rest of the events you might find. I've also improved the robustness of the class.
The events of any object can be traced, by use of a single class, through the use of .NET Reflection.
Have you tried to understand the patterns of raised events of a complicated object? I needed to work with a very capable, very full-featured grid control from a ISV's popular control collection. It wasn't very long before I was very puzzled trying to figure out which of the many events provided by the control were the ones I wanted, and in which order they were raised. I started to write individual event handlers for the events I was interested in, but there were so many I thought I'd be better off generating tracing event handlers for all the events using a nice editor macro. But it turned out there were over 260 events for this grid control, and they were declared at various levels of the class hierarchy. I looked for a better way.
.NET Reflection turned out to be a very easy way to get access to the definition of the class I was interested in—or any other class for that matter. It is a simple process to get all the events that a class declares, and then to attach handlers to any or all of the events.
This article presents the simple-but-general event hooking/tracing class I built, which is provided (with a demo) as a download.
To use the
Tracer class, you need three things:
Tracerinstance
The event handler must match the following
delegate:
public delegate void OnEventHandler( object sender, object target, string eventName, EventArgs e );
The
sender and the
EventArgs e are the parameters to the event that was raised. The
target is the object being traced. (The target may be different from the sender if the event is a
static event.) And the
eventName is the name of the event, of course.
So, for example, a simple trace routine may look like this:
private void OnEvent( object sender, object target, string eventName, EventArgs e ) { string s = String.Format( "{0} - args {1} - sender {2} - target {3}", eventName, e.ToString( ), sender ?? "null", target ?? "null" ); System.Diagnostics.Trace.TraceInformation( s ); }
Now all that is left is to create an event
Tracer on your object of interest, and to hook the events:
EventTracer.Tracer tracer = new EventTracer.Tracer( targetObject, OnEvent ); tracer.HookAllEvents( );
Alternatively, you could just hook specific events by name:
tracer.UnhookAllEvents( ); tracer.HookEvent( "Click" ); tracer.HookEvent( "DoubleClick" );
The
Tracer class has methods for hooking a specific event, by name, or hooking all events, and likewise for unhooking events. It has a property that will return a collection of the names of all the events that can be raised by the target object. The events considered are all the
public events, instance or
static, that are defined by the target object's class, or superclasses.
It is probably the case that at least some of the events you hook with the tracer you're already handling in your application. In that case, does the tracer's event handler execute before or after your event handler? For events without user-specified accessors (
add and
remove) and where the class lets event itself call the handlers normally, the subscribing handlers are called in the order in which they are added to the event. If you subscribe to the event in designer-generated code but create the event
Tracer later, during a Form's
Load event, for example, then the call to your handler will occur before the call to the trace routine.
(If the class implements its own
add or
remove accessors for the event, or if it takes over calling the subscribed handlers itself (by calling
GetInvocationList on the event and processing the delegates returned) then the order may be different: it depends on the classes' code.)
Well, actually, no. The
Tracer class just described traces only those events which match the .NET Framework event design pattern. All .NET Framework event
delegates take two parameters, the first is of type
object (and is the object that is raising the event), and the second is of type
System.EventArgs or a class derived from
System.EventArgs:
namespace System { public delegate void EventHandler( object sender, EventArgs e ); }
The event's
delegate type must also return
void. The .NET Framework 2.0 introduced the generic
delegate type
System.EventHandler<TEventArgs> so you can easily declare standard events like so:
public class MyEventArgs : EventArgs { ... } public event EventHandler<MyEventArgs> MyEvent;
System.EventHandler<TEventArgs> has a constraint that
TEventArgs derives from
System.EventArgs, so that's how it matches the .NET Framework event design pattern.
But even though all of the events in the .NET Framework meet this pattern, and thus can be hooked by the
Tracer, the C# language doesn't restrict events to this pattern. In fact, an event can be declared using any
delegate type. The
Tracer class checks each event exposed by the target object and only lets you hook the ones which match the pattern. Those events are listed in the collection returned by the
EventNames property. If the class has any non-traceable events, their names are returned by the
UntraceableEvents property.
I've implemented a second class,
TracerEx that can handle more event types, though still not all. It is very similar to
Tracer except that the event handler you pass to it must match a different
delegate type:
public delegate void OnEventHandler( object target, string eventName, object[] parameters );
When this
delegate is called on an event raised by the target object, it is given the target object, the event's name, and an array of the parameters of the event. (If it is in fact a .NET Framework standard event type, then
parameters[0] will be the sender and
parameters[1] will be the
EventArgs or class derived from
EventArgs.)
TracerEx can trace any event that has up to 10 parameters, where each parameter is of a reference type, and is not passed as a
ref parameter or as an
out parameter.
There are two important aspects of the implementation of the
Tracer class:
Getting the events couldn't be easier. The
Type object for the target object, describing the target object's class, has a
GetEvents method. That method takes a set of flags that describe the events you're interested in, and it returns an array of
System.Reflection.EventInfo objects. Those objects (which inherit from
System.Reflection.MethodInfo) provide all that is necessary to hook an event: The event's name, and methods to add or remove event handler (a
delegate) to the event's list of handlers. (An
EventInfo contains lots of other information too, of course.)
Since we're interested in all of the
public events,
static or instance, declared by the target object's class or any of its superclasses, we call
GetEvents like this:
EventInfo[] events = m_target.GetType().GetEvents( BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.FlattenHierarchy );
A second call to
GetEvents 'without' the flag
BindingFlags.Instance will return an array of only the
static events. (It is an interesting detail that a call to
Type.FindMembers specifying
MemberTypes.Event and
BindingFlags.Static returns 'all' events,
static 'and' instance, instead of just the
static events. At least, this is the case in .NET 2.0. So, to get
static events only, use
GetEvents rather than
FindMembers.)
Now, to actually hook the event, one only needs to create a
delegate—the event handler—and call
AddEventHandler on the
EventInfo for the event. But what to use for the
delegate? The event itself, as raised by the target object, will only have the
sender and
EventArgs e parameters, as usual. But from that information, there is no way for the user to get the name of the event. (Unless the user wants to walk the stack.) So we want to pass the event name along with the event args. We'll need an object to hold that name.
Therefore, the
Tracer class has a
private class declared within it,
EventProxy. The purpose of
EventProxy is to hold the event name, and to have a method to use as an event handler for an arbitrary event. To hook an event of the target object, we create a new instance of
EventProxy, create a
delegate to its handler method, and attach that
delegate as the event handler of the target object's event.
Here's how that works, where
this is the
Tracer instance,
m_target is the target object whose events we're hooking, and
eventInfo is the
EventInfo for the event we're hooking:
EventProxy proxy = new EventProxy( this, m_target, eventInfo.Name ); MethodInfo handler = typeof( EventProxy ).GetMethod( "OnEvent" ); Delegate d = Delegate.CreateDelegate( eventInfo.EventHandlerType, proxy, handler ); eventInfo.AddEventHandler( m_target, d );
Note that when creating the
delegate, I'm passing in the event handler type of the event I'm hooking,
eventInfo.EventHandlerType, not the signature of
EventProxy.OnEvent. In order to add the
delegate to the event, it must be of the event's handler's type, which is why it is specified as the first argument to
Delegate.CreateDelegate, but that type might not match the signature of the
EventProxy.OnEvent which is the method the
delegate is for. Why does this work? Well, before .NET 2.0, it didn't work. Before .NET 2.0 when you made a
delegate from a method, the
delegate's signature and the method's signature had to match exactly. This was somewhat painful. For example, Martin Carolan created a class to do event tracing on arbitrary objects that worked in .NET 1.1 (see this CodeProject article). He had to generate code for each event he hooked, generating a method with a signature exactly matching the event.
In .NET 2.0, the CLR changed so that methods are matched to signatures with looser rules called contravariance, which allows a "more general" method to be made into a
delegate with a "more specific" signature. Because of this change,
EventProxy.OnEvent's signature...
public void OnEvent( object sender, EventArgs e );
... will match any event that implements the .NET Framework's standard design pattern for events.
In the actual implementation, the
delegate to
EventProxy.OnEvent is also saved in a dictionary keyed by the event's name, so that it can be used to unhook the event via the
EventInfo.RemoveEventHandler method.
TracerExto handle more kinds of events than just those that match the .NET Framework event design pattern. Contravariance is pretty nice when matching methods to
delegates, but it doesn't give you arbitrary wildcards. The number of parameters of the method and
delegatemust be the same, none of the
delegateparameters can be value types, and none of the
delegateparameters can be passed as
refparameters or
outparameters. (These are limitations defined by the CLR.) To handle events like that, you need to generate code on the fly—and for two different approaches see articles and code by Martin Carolan and "Death_Child". I wanted to avoid that so I was willing to accept some restrictions on the kinds of events that
TracerExcould handle. I implemented 11 event handlers, for events with zero to 10 parameters, where all of the event handlers accept parameters of type
object:
public void OnEvent0( ) { ... } public void OnEvent1( object p1 ) { ... } public void OnEvent2( object p1, object p2 ) { ... } public void OnEvent3( object p1, object p2, object p3 ) { ... } ... public void OnEvent10( object p1, object p2, object p3, object p4 , object p5, object p6, object p7, object p8, object p9, object p10 ) { ... }
Then, I choose which of these event handlers to pass to
Delegate.CreateDelegate based on the number of parameters in the event's
delegate type.
Both
Tracer and
TracerEx need to check each of the target's events to figure out if they are traceable or not.
Tracer has a method called
IsGoodNetFrameworkEvent that gets the event's
delegate type from the event's
EventInfo, then gets the
delegate's
Invoke method's
MethodInfo. From that, it can check that the
delegate has a return type of
void, that it takes exactly two parameters, and that the two parameters are of the proper types.
TracerEx has a method called
IsTraceableEvent which gets the event
delegate's type
Invoke method's
MethodInfo and then checks each parameter to make sure it is not a value type and not passed as
ref or
out. See the code for the details.
The only other implementation detail that seems worth mentioning is that the
Tracer class derives from
IDisposable. (Also, of course, the
TracerEx class.) It isn't strictly necessary since
Tracer instances don't own unmanaged resources, but it does provide a reasonable way for the class to remember to unhook any events and disconnect itself from the target object.
The class
Tracer runs 250 lines of code (for a generous interpretation of "line of code" that includes braces standing on lines by themselves). But the key aspects are centered on just a few APIs provided by .NET Reflection:
System.GetType,
Type.GetEvents,
Type.GetMethod,
EventInfo.AddEventHandler,
EventInfo.RemoveEventHandler (and also
System.CreateDelegate, which strictly speaking is not part of Reflection).
namespace EventTracer { public sealed class Tracer : IDisposable { public delegate void OnEventHandler( object sender, object target, string eventName, EventArgs e ); // Create an event tracer on a particular target object, using a // a particular delegate to handle the traced events. public Tracer( object target, OnEventHandler handler ); // Property: The type of the target object. public Type TheType { get; } // Property: The target object. public object TheTarget { get; } // Property: A collection of all the // traceable events raised by the target object. public ReadOnlyCollection<string> EventNames { get; } // Predicate: Returns true iff the parameter is a valid event name. public bool IsValidEvent( string eventName ); // Property: A collection of all the untraceable events // raised by the target object. // These are the events that don't conform to the .NET Framework's event design // pattern. public ReadOnlyCollection<string> UntraceableEventNames { get; } // Property: The number of events currently hooked. public int EventsHookedCount { get; } // Predicate: Returns true iff the parameter names a hooked event. public bool IsHookedEvent( string eventName ); // Action: Hook the named event of the target object. Your event // handler will be called whenever the target object raises the event. // (It is harmless to hook an event that is already hooked.) public void HookEvent( string eventName ); // Action: Hook all events raised by the target object. public void HookAllEvents( ); // Action: Unhook the named event. Your event handler will no longer // be called when the target object raises the event. (It is harmless // to unhook an event that is not hooked.) public void UnhookEvent( string eventName ); // Action: Unhook all events. public void UnhookAllEvents(); public void Dispose(); } }
Reflection.Emitwith
DynamicMethod.
This class is very useful, even though it is so simple. I've tested it on .NET 2.0 and .NET 3.0. Try it yourself—and be sure to let me know if you find any problems. In fact, let me know if you like or dislike this article—it's my first for CodeProject. (I appreciate the comments I've already received.) I thank Giorgi Dalakishvili for pointing out the article by Martin Carolan, which I was not aware of, and "Death_Child" for reminding me that the
Tracer only traces events matching the .NET Framework design pattern.
delegatesignature matching
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/cs/eventtracingviareflection.aspx | crawl-002 | refinedweb | 2,630 | 64.1 |
On Oct 3, 2006, at 10:12 AM, Paul McMahan wrote:
> OK I should have guessed that windows was the culprit. I jumped over
> to debian and now all is fine. GShell looks very promising!! I love
> the idea of being able to telnet or ssh into the server and run
> commands remotely. As a matter of fact this appears to provide a
> vital improvement that Geronimo users have been asking for: the
> ability to remotely administer a running server (see the "Swing
> console?" thread from 9/17/2006).
Yup, that is the general idea. :-)
> A few questions :
> - what's the security model?
Mostly this is TBD. I plan on adding some sort of login context to
allow users to login (for ssh this will be the ssh auth callback, for
telnet it will have to be a custom login handler). Once credentials
are established, then it should be possible to limit the set of
commands that are enabled for a user. I am sure there is much more
than can be done here too. But at the moment, I've only planned to
get a simple login implemented and have not done much more design of
a richer model.
> - will GShell share a common code path with the console and the CLI
> for handling things like deployment?
Yes, well... yes for the CLI, as I hope to eventually replace the
existing CLI tools with command plugins into gshell, so they will be
one and the same. I am not really sure what the web console is
doing, but we should definitely share as much as possible.... where
possible.
It should be possible to define a simple interface (or set o
interfaces) for each admin bit in the console... and provide a
portlet and gshell command (or set of commands) to adapt to the
interface.
I think this will be easy(ish) to do... more so once the webconsole
is more xtensible/pluggbale, and when admin portlets are split up
into modules that are specific to the function they provide.
It certainly would be nice to have one-to-one mappings for admin
functionality between the webconsole and gshell commands... but there
is still work on both sides before that would be possible... but it
is kind of a longer term goal.
> - which subsystems of geronimo will GShell depend on and how will it
> access them? e.g. will it be wrappered as a gbean and use the kernel
> to get access to them? mainly I'm wondering if (unlike the console)
> it will be able to administer components running in the server without
> a having a run-time dependency against them.
Um, well I am in the process of writing a few simple GBeans to run/
manage the server components of GShell (the telnet/ssh server and the
config and monitoring aspects that they bring in). Short of that
there are no dependencies on Geronimo.
I am thinking that a simple GBean to mange the start/stop of the
server (ie, start/stop the telnet/ssh connector), which will manage
the basic port config as well as the more advanced ssh config
needed. And then a simple portlet to control the gbean, to list who
is logged in, maybe even allow sessions to be terminated and such.
May need to introduce some extra command sets which are G specific...
or augment the script command to know more about the G namespace, so
that we can register some more helpful variables to allow some
meaningful/useful scripts to be written.
But, so far these are just ideas in my head... some with TODO
comments in code... I have not had enough time to get anymore
significant work done on gshell after the initial flurry of code I
dropped in over a week or so.
I hope that once all this build muck is over and done with that I can
get back into the GShell groove... clean it up and get it integrated
into the server.
Anyways... ideas and suggestions are welcome :-)
--jason | http://mail-archives.apache.org/mod_mbox/geronimo-dev/200610.mbox/%3C39572CF5-C828-4065-BAD0-0980B134399C@planet57.com%3E | CC-MAIN-2014-23 | refinedweb | 673 | 70.84 |
Multi-Tenancy¶
If a site built with the django-SHOP framework shall be used by more than one vendor, we speak about a multi-tenant environment.. Each new product, he adds to the site, is assigned to him. Later on, existing products can only be modified and deleted by the vendor they belong to.
Product Model¶
Since we are free to declare our own product models, This can be achieved by adding a foreign key onto the User model:
from shop.models.product import BaseProduct class Product(BaseProduct): # other product attributes merchant = models.ForeignKey( User, verbose_name=_("Merchant"), limit_choices_to={'is_staff': True}, )
Note
unfinished docs | http://django-shop.readthedocs.io/en/latest/howto/multi-tenancy.html | CC-MAIN-2017-47 | refinedweb | 104 | 55.54 |
This is because the C++ standard does not make available (yet) any library to shield the different approaches taken by different operating system to the file system, and so the programmers have to directly take care of the system's low level details.
Alternatively, we could use the Boost Filesystem library. By the way, this library is a candidate to be included in Technical Report 2, and so we could expect that something close to it would be eventually included in the standard.
The typical issues we have to tackle when writing code that has to do with file system are:
Are the file names written accordingly Windows or UNIX conventions (or both)?
Is the passed filename correct? I mean, does it refer to what could be an object in the referred file system?
Does that file actually exists?
Accessing to a file system is always a dangerous matter: how we could avoid the application to crash in case of an unforeseen problem?
The Boost::filesystem answer to these questions using a custom class for storing the filename and throwing exceptions (or returning an error code, if for any reason we don't want our code to use the exception facility) in case of errors.
Let see as example a possible implementation for the function we said above: it gets as input a filename, and it prints to standard console it length:
#include <iostream>
#include <boost/filesystem.hpp>
// ...
void fileSize(boost::filesystem::path filename)
{
try
{
std::cout << filename << " size: "
<< boost::filesystem::file_size(filename) << std::endl;
}
catch(const boost::filesystem::filesystem_error& ex)
{
std::cout << ex.what() << std::endl;
}
}
Amazingly concise and effective, I'd say. We try to call the Boost Filesystem file_size() function, if it succeeds, it returns a uintmax_t (the biggest unsigned integer type available for the current platform), otherwise it throws an exception.
Here is a few test calls I performed for this function:
fileSize(argv[0]); // 1.
std::string s("/dev/pl.sql"); // 2.
fileSize(s);
fileSize("/dev"); // 3.
fileSize("W:\\not\\existing.com"); // 4.
1. We pass to our function the first argument coming from the environment to the main() function, that is the filename used to invoke the application. Being compiled for Windows, it is in that format.
2. We can pass to our function C and C++ strings. Notice that here we are using the UNIX convention (forward slashes)
3. On the current drive I have a directory named in this way. Asking for the size of a directory leads to an exception.
4. There is no "W" drive on my environment. So here I expect another exception.
The example code is based on what you can find in the official boost filesystem tutorial. For more background information, you could have a look at the Boost filesystem version 3 home page. | http://thisthread.blogspot.com/2011/06/boost-filesystem-library.html | CC-MAIN-2018-26 | refinedweb | 467 | 62.27 |
.3: Decisions
About This Page
Questions Answered: How can I make my program select between alternatives? How can I execute a piece of code only if a specific condition is met?
Topics: Expressions that select a value.
if and
else. Conditional
effects on program state. Sequencing and nesting
ifs.
What Will I Do? Read and work on small assignments.
Rough Estimate of Workload:? One hour.
Points Available: A75.
Related Projects: GoodStuff, FlappyBug, Odds Miscellaneous (new) features in one optional assignment.
The Need to Select
Nearly all programs select between alternatives. For example: to determine the new favorite experience, we need to select between the old favorite experience and a newly added experience. Or: given a user input or “yes” or “no”, we want to select whether or not the program performs an operation of some sort. And so forth.
One of the things we need, then, is a way to formulate a condition: Did the user select
“yes”; is this experience better than that one? You already know what we can use for
this part: the
Boolean type.
The other thing we need is a way to mark commands as conditional so that the computer executes them only if a particular condition is met. It would be nice if we could also provide an alternative command to be executed in case the condition is not met.
Scala offers several tools for executing a command conditionally. In this chapter, we’ll
look at the most straighforward of these tools: the
if command.
Selecting Between Two Alternatives:
if
you can use the words
if and
else to form an expression whose value depends on a
particular condition — that is, the value depends on whether a particular
Boolean
expression evaluates to
true or
false.
Here’s the basic idea as pseudocode:
if (condition) value in case condition is met else value otherwise
It’s easy to experiment with this in the REPL:
if (10 > 100) "is bigger" else "is not bigger"res0: String = is not bigger if (100 > 10) "is bigger" else "is not bigger"res1: String = is bigger
Booleanso that it evaluates to either
trueor
false. In this example, the condition has been formed with a comparison operator. When the computer runs the
ifcommand, it first evaluates the conditional expression, which determines what happens next.
false, the code that follows
elseis executed. The entire
ifexpression evaluates to the value produced by evaluating the “
elsebranch”.
true, the code that immediately follows the condition is executed. The entire
ifexpression evaluates to the value of this “then branch”. (In Scala, we don’t actually spell out the word “then”, but you can read an
ifexpression as “if X then A else B”.)
Examples
Any expression of type
Boolean is a valid condition. It can be the name of a
Boolean
variable, for example:
val theDeathStarIsFlat = falsetheDeathStarIsFlat: Boolean = false if (theDeathStarIsFlat) "Yeah right" else "No kidding"res2: String = No kidding
Or a
Boolean literal, namely
true or
false (although this isn’t too useful):
if (true) "This was chosen." else "This was not."res3: String = This was chosen.
In all the above examples, the “then” and “else” branches were expressions of type
String,
but an
if certainly admits other subexpressions, too. Numbers work, for instance:
val number = 100number: Int = 100 if (number > 0) number * 2 else 0res4: Int = 200 if (number < 0) number * 2 else 0res5: Int = 0
ifexpression is determined by what you write in the conditional branches.
Using an
if as an Expression
An
if expression has a type, and you can use an
if expression in any context that
calls for an expression of that type. For example, you can use an
if in a function’s
parameter expression, you can assign an
if’s value to a variable, or you can use an
if as part of an arithmetic expression. Here are some more examples of valid
if
commands:
println(if (number > 100) 10 else 20)20 val chosenAlternative = if (number > 100) 10 else 20chosenAlternative: Int = 20 (if (number > 100) 10 else 20) * (if (number <= 100) -number else number) + 1res6: Int = -1999
That being said, the last of those commands is branchy enough that you’d do better to store the intermediate values in a temporary variables before multiplying them.
Practice on
if expressions
Formatting an
if expression
Where a single-line
if expression would be long and hard to read, you can split it
across multiple lines as shown here:
val longerRemark = if (animal == favePet) "Outstanding! The " + animal + " is my favorite animal." else "I guess the " + animal + " is nice enough."
ifto a deeper level than the lines that start with
ifand
else. Once you’re used to this convention, it makes the
ifexpression faster to read as a whole.
This style convention is worth following when you write multi-line
ifs, despite the
fact that, in Scala, indentations don’t affect the program’s behavior.
Mini-assignment: describing an image, part 1 of 2
In project Miscellaneous, file
misc.scala, add an effect-free
function that:
- has the name
describe;
- takes a single parameter, of type
Pic; and
- returns the string "portrait" in case the given picture’s height is greater than its width, and the string "landscape" otherwise (i.e., if the image is square or wider than it’s high).
A+ presents the exercise submission form here.
Finishing class
Experience
You can use
if in a class, thus enabling objects to make decisions when their methods
are invoked.
The
chooseBetter method
From Chapter 3.2, we already have a partial implementation for class
Experience. The
one part still missing is the
chooseBetter method that compares two experiences and
returns the higher-rated one.
We’d like the method to work as follows:
val wine1 = new Experience("Il Barco 2001", "okay", 6.69, 5)wine1: o1.goodstuff.Experience = o1.goodstuff.Experience@1b101ae val wine2 = new Experience("Tollo Rosso", "not great", 6.19, 3)wine2: o1.goodstuff.Experience = o1.goodstuff.Experience@233b80 val betterOfTheTwo = wine1.chooseBetter(wine2)betterOfTheTwo: o1.goodstuff.Experience = o1.goodstuff.Experience@1b101ae betterOfTheTwo.nameres7: String = Il Barco 2001
In essence,
chooseBetter is similar to the familiar
max function (Chapter 1.6) that
picks the larger of two given numbers.
Here is class
Experience with a pseudocode implementation for
chooseBetter:
class Experience(val name: String, val description: String, val price: Double, val rating: Int) { def valueForMoney = this.rating / this.price def isBetterThan(another: Experience) = this.rating > another.rating def chooseBetter(another: Experience) = { Determine if this experience is rated higher than the experience given as * another*. If so, return a reference to the this object. Otherwise, return the reference stored in * another*. } }
And here is the method in Scala:
def chooseBetter(another: Experience) = { val thisIsBetter = this.rating > another.rating if (thisIsBetter) this else another }
thisalone constitutes an expression whose value is a reference to the object whose method is being called.
chooseBetter’s execution ends in an
ifexpression; the method returns the value of that expression. Depending on whether
thisIsBetterstores
trueor
false, the
ifexpression’s value will be a reference to either the active object or the parameter object, respectively.
Improving code by calling an object’s own methods
That implementation of
chooseBetter works. But class
Experience is now needlessly
repetitive. Both
isBetterThan and
chooseBetter do the same comparison
this.rating >
another.rating, so we have a double definition of what counts as “better” in our application.
In addition to being inelegant, this is unhelpful to anyone who wants to modify the application. For instance, if we wanted GoodStuff to compare experiences by their value for money rather than their rating, we’d need to modify the code in two places. In this small-scale program, the problem is isn’t too terrible, but redundant code can make larger programs very difficult to work with.
Let’s improve our code.
You’ve already seen that an object can call its own method to “send itself a message”:
this.myMethod(params). Let’s use this to compose a new version of
chooseBetter:
def chooseBetter(another: Experience) = { val thisIsBetter = this.isBetterThan(another) if (thisIsBetter) this else another }
Experienceobject asks itself: “Are you better than this other experience?”
Now
chooseBetter relies on whichever kind of comparison is defined in
isBetterThan.
We have eliminated the redundancy.
A more compact solution: a method call as a conditional
A method call, too, can serve as an
if’s conditional expression, as long as the method
returns a
Boolean. We can use that fact to simplify
chooseBetter further. This is
illustrated in the REPL session below (which assumes that
wine1 and
wine2 are defined
as above):
if (wine1.isBetterThan(wine2)) "yes, better!" else "no, wasn’t better"res8: String = "yes, better!" if (wine2.isBetterThan(wine1)) "yes, better!" else "no, wasn’t better"res9: String = "no, wasn’t better" if (wine1.isBetterThan(wine1)) "yes, better!" else "no, wasn’t better"res10: String = "no, wasn’t better"
Here, evaluating the
if entails calling the method; the value of the conditional is
whichever
Boolean the method returns. After calling the method, execution continues into
one of the
if’s two branches.
We’re now equipped to write a more compact implementation for
chooseBetter. We don’t
need to use the temporary local variable
thisIsBetter as we did above.
def chooseBetter(another: Experience) = if (this.isBetterThan(another)) this else another
Our
Experience class is now ready. Later, in Chapter 4.1, we’ll turn our attention to
GoodStuff’s other key class,
Category.
Assignment: Odds (Part 7 of 9)
Let’s return again to the Odds project and add a method that reports an event’s odds in a format called moneyline, which is popular among North American betting agencies. This slightly curious format works as follows:
(Below, P and Q refer to the numbers that make up the
fractionalreprestentation P/Q of an
Odds.)
In case the event’s estimated probability is at most 50%, its moneyline number is positive and equals 100 * P / Q. For instance, the moneyline number for 7/2 odds is 350, because 100 * 7 / 2 = 350. This positive number indicates that if you bet 100 monetary units and win, you profit 350 units in addition to getting your bet back. A fifty–fifty scenario (1/1 odds) has a moneyline number of 100.
In case the event’s estimated probability is over 50%, its moneyline number is negative and equals -100 * Q / P. For instance, the moneyline number for 1/5 Odds is -500, because -100 * 5 / 1 = -500. This negative number indicates that if you want to make a profit of 100 units, you have to place a bet of 500 units.
Task description
In class
Odds, add a
moneyline method that returns
Odds object’s moneyline
representation as an
Int:
val norwayWin = new Odds(5, 2) norwayWin: Odds = o1.odds.Odds@171c36b norwayWin.moneylineres11: Int = 250
In
OddsTest1, add a command that prints out the first
Odds object’s moneyline
number. The program’s output should now look like this:
Please enter the odds of an event as two integers on separate lines. For instance, to enter the odds 5/1 (one in six chance of happening), write 5 and 1 on separate lines. 11 13 The odds you entered are: In fractional format: 11/13 In decimal format: 1.8461538461538463 In moneyline format: -118 Event probability: 0.5416666666666666 Reverse odds: 13/11 Odds of happening twice: 407/169 Please enter the size of a bet: 200 If successful, the bettor would claim 369.2307692307692 Please enter the odds of a second event as two integers on separate lines. 10 1 The odds of both events happening are: 251/13 The odds of one or both happening are: 110/154
Instructions and hints
- Use an
ifexpression.
moneylinemust return an integer. Drop any decimals from the result; always round towards zero. Scala’s integer division drops the decimals for you (Chapter 1.3), so the easiest solution is simply to do the arithmetic in the right order: multiply first, divide second.
- It’s a single word, so write
moneylinenot
moneyLine.
Submission form
A+ presents the exercise submission form here.
Apocalyptic programming
According to Finnish folklore, God pushished the hazel grouse (for reasons that are disputed) and condemned it to grow smaller until the world ends. People will know that the end is nigh when the grouse is vanishingly small.
This story has given rise to an odd Finnish idiom: a Finn may say that something “shrinks like the grouse before the apocalypse”.
Let’s model this programmatically, because why not. Here’s a class that represents grouses:
class Grouse { private var size = 400 private val basePic = Pic("bird.png") def foretellsDoom = this.size <= 0 def shrink() = { if (this.size > 0) { this.size = this.size - 1 } } def toPic = this.basePic.scaleTo(this.size) }
You can find the class and a GUI that uses it within the IntroApps project, in
package
o1.grouse. The GUI uses techniques from Chapter 2.8 to shrink the
image of a grouse against a white background until the grouse vanishes from sight.
Your task is to read the given code and modify
makePic so that it turns the
entire view black at the end. The method should therefore return:
- the bird pic against a white background (as per the given code) only if calling
foretellsDoomon the grouse returns
false; and
- a fully black
endOfWorldPicif the return value is
true.
In practical terms, the only thing you need to add is an
if expression in
makePic.
A+ presents the exercise submission form here.
Affecting Program State with an
if
The branches of a selection command may specify effects on program state. They may print
text onscreen and assign to
vars, for example. Like here:
if (number > 0) { println("The number is positive.") println("More specifically, it is: " + number) } else { println("The number is not positive.") } println("I have spoken.")
ifcommand but follows it. After executing one of the two branches above, that
printlnwill be executed no matter the value of the conditional.
The animations below detail how the code works, first for a positive
number, then for
a negative.
Note: Earlier in this chapter, we used the
if command to form expressions, that is,
chunks of code that evaluate to a (meaningful) value. We used
ifs to select between
two alternative values. On the other hand, in the code animated above, the
if selects
between different effects on program state. Here’s the general notion summarized as
pseudocode:
if (condition) { commands to be executed in case the condition evaluates to true } else { commands to be executed in case the condition evaluates to false }
When an
if affects program state, convention dictates that you use line breaks,
indentations, and curly brackets as in the examples above. We’ll follow this custom in
O1.
Some Scala programmers...
... do omit the curly brackets from branches that contain just
a single command, even if the
if is effectful. We’ll
consistently use curly brackets in effectful
if s, however.
ifs that produce
Unit
The following assignment to
meaninglessResult doesn’t make a lot of sense but
is worth a moment of consideration:
val meaninglessResult = if (number > 1000) { println("Pretty big") } else { println("No so big") }Not so big meaninglessResult: Unit = ()
Here, the
if expression has no meaningful value. The code does print one of the two
strings depending on the value in
number, but it doesn’t assign anything useful in
meaninglessResult, just the
Unit value (Chapter 1.6). This is because the
ifs
branches end in print commands that don’t return anything beyond
Unit. It makes
little sense to assign the value of such an
if to a variable.
Perhaps you’ll also find it instructive to compare the above code with these two:
val result = if (number > 1000) "Pretty big" else "Not so big"result: String = Not so big println(result)Not so big
println(if (number > 1000) "Pretty big" else "Not so big")Not so big
if without
else
When you use an
if to affect program state, you don’t always need an
else branch.
You may wish to execute one or more commands if a condition is met but do nothing
otherwise.
Of course, you could write something like this:
if (condition) { commands to be executed in case the condition was true } else { }
elsebranch does nothing. But we don’t even have to write it; we can simply omit this part.
If there’s no
else branch and the conditional evaluates to
false, the computer simply
disregards the rest of the
if:
if (condition) { commands to be executed in case the condition was true and skipped otherwise }
Here’s a more concrete example:
if (number != 0) { println("The quotient is: " + 1000 / number) } println("The end.")
elsebranch has been omitted.
ifcommand is effectful (it prints stuff), which is why we use line breaks and curly brackets.
ifbut follows it. This piece of code invariably finishes by printing "The end." no matter if
numberwas zero or not. If it was zero, that’s all the code prints out.
If you wish, you can also view an animation of that example:
Practice reading
ifs
Assignment: FlappyBug (Part 11 of 16: Minor Adjustments)
Task description
Add two
if commands to the FlappyBug game so that:
- the bug darts upwards only in case the key pressed by the user is the space bar (rather than any old key, as in the current version); and
- the bug accelerates (i.e., the value of its instance variable
yVelocityincreases) only if the bug is in the air (i.e., located above the ground level).
Instructions and hints
- You need to modify the
onKeyDownmethod in
FlappyBugAppand the
fallmethod in class
Bug. Both are described in Chapter 2.8.
- The parameter of the event handler
onKeyDownindicates which key was pressed. You can use the equality operator
==to compare the parameter value with
Key.Space, a constant that corresponds to the
Spacekey.
- You won’t need any
elsebranches.
Submission form
A+ presents the exercise submission form here.
Combining
ifs
else if chains
If you want to select among more than two alternatives, you can write an
if command
within the
else branch of another
if:
val description = if (number < 0) "negative" else if (number > 0) "positive" else "zero"
Brackets may clarify the structure of the code:
val description = if (number < 0) "negative" else (if (number > 0) "positive" else "zero")
Perhaps the best way to highlight the multiple branches is to split the code across multiple lines and indent it:
val description = if (number < 0) "negative" else if (number > 0) "positive" else "zero"
A similar chain of “else ifs” also works for selecting among multiple effectful commands:
if (number < 0) { println("The number is negative.") } else if (number > 0) { println("The number is positive.") } else { println("The number is zero.") }
Mini-assignment: describing an image, part 2 of 2
Edit the
describe function you wrote in
misc.scala earlier so that it returns:
- the string "portrait" if the picture’s height is greater than its width;
- the string "landscale" if the picture’s width is greater than its height; and
- the string "square" if the picture’s height and width are exactly equal.
A+ presents the exercise submission form here.
Nesting
ifs
You’re free to nest
if commands within each other. When you do, you need to be
especially careful about which
else goes with which
if. Take a look at this REPL
session:
val number = 100number: Int = 100 if (number > 0) { println("Positive.") if (number > 1000) { println("More than a thousand.") } else { println("Positive but no more than a thousand.") } } Positive. Positive but no more than a thousand.
if’s contents are executed. This includes the first
println.
ifis another
if. Since the inner
if’s condition isn’t met, the program jumps to the
elsebranch.
if’s branches are indented to a still deeper level.
ifis vertically aligned with the curly brackets that delimit the corresponding branches. In our example, this applies both to the inner...
if. Do keep in mind, however, that whereas the curly brackets actually affect how the commands nest within one another, the indentations are there only for readability.
ifhas no
elsebranch.
In the above example, the
else matches the “closer” of the two
ifs. The
else branch
was executed because the outer
if’s condition was met but the inner one’s wasn’t. By
adjusting the curly brackets, we can attach the
else to the outer
if instead:
if (number > 0) { println("Positive.") if (number > 1000) { println("More than a thousand.") } } else { println("Zero or negative.") }Positive.
ifhas two branches and its
elsebranch would have been executed only if the number hadn’t been positive.
elsebranch connects with the nearest complete “then branch” that precedes it. In this case, the outer
ifs “then branch” is the latest one to close.
ifhas no
elsebranch. As a result, the program produces just the single line of output.
Practice on nested
ifs
Assignment: Odds (Part 8 of 9)
Task description
The app object
OddsTest2 from Chapter 3.2 creates a couple of
Odds objects and
reports selected facts about them. Edit it:
- Remove the
printlncommands that print the return values of
isLikelyand
isLikelierThan(the ones whose output begins with the word “The”). This is because you’re about to replace these lines with new ones that produce a different printout than before.
- The revised
OddsTest2should check whether the event represented by the first
Oddsobject is more likely than the second. Based on this check, the program should print either "The first event is likelier than the second." or "The first event is not likelier than the second." as appropriate.
- Next, the program should print the line "Each of the events is odds-on to happen." in case both of the events are likely. If neither event is likely, the program should print nothing at this point.
- As per our earlier definition, an event counts as likely in case the chances of it occurring are greater than those of it not occurring, that is, in case
isLikelyreturns
true.
- The final line of output, which thanks the user, should appear no matter which odds the user entered.
The example runs below detail the expected output:
Please enter the odds of the first event as two integers on separate lines. 5 1 Please enter the odds of the second event as two integers on separate lines. 1 2 The first event is not likelier than the second. Thank you for using OddsTest2. Please come back often. Have a nice day!
Please enter the odds of the first event as two integers on separate lines. 1 1 Please enter the odds of the second event as two integers on separate lines. 2 1 The first event is likelier than the second. Thank you for using OddsTest2. Please come back often. Have a nice day!
Please enter the odds of the first event as two integers on separate lines. 1 2 Please enter the odds of the second event as two integers on separate lines. 2 3 The first event is likelier than the second. Each of the events is odds-on to happen. Thank you for using OddsTest2. Please come back often. Have a nice day!
Instructions and hints
Submission form
A+ presents the exercise submission form here.
Summary of Key Points
- You can use an
ifcommand to select which of two commands or sequences of commands is executed.
- You can also use an
ifto indicate that the execution of a piece of code is conditional: the code is executed only if specific circumstances apply.
- When you write an
if, you need to specify the selection condition you want. Any expression of type
Booleanis valid for that purpose.
- You can combine
ifcommands by sequencing them one after the other or by nesting them within each other.
- Links to the glossary:
if;
Boolean; expression; DRY.
Optional assignment: a game of precision
Write a new program where the user is supposed to click the exact center of the window with a mouse click. This simple game is over when the user manages to click, say, within three pixels of the center. At that point, the program should display an image of your choosing to signal victory.
To model the problem domain, you may wish to create a simple object that keeps track
of whether the game is over in a
Boolean variable. Also write a GUI that
displays the game’s state.
No automatic feedback is available for this optional assignment..
ifand
else. Also notice the round brackets around the conditional expression, which are mandatory. | https://plus.cs.aalto.fi/o1/2018/w03/ch03/ | CC-MAIN-2020-24 | refinedweb | 4,094 | 64.51 |
month I promised to eventually discuss the use of schemas with XSLT 2.0 — that is, XSLT 2.0's ability to read a W3C schema to discover additional information about a source tree, result tree, or interim temporary tree, and to use that information when processing a document. This month I'll talk about the use of schemas with XSLT, but not schemas for the documents you're processing. Schemas for the stylesheets themselves, when those available are a good fit for your tools, can add a lot to your XSLT development. (While I'm on the topic, though, it's great to see that one new addition to the 11 February XSLT 2.0 Working Draft is that "A non-schema-aware processor now allows all the built-in types defined in XML Schema to be used; previously only a subset of the primitive types plus xs:integer were permitted." This will allow even more type-aware XSLT processing without requiring the use of a W3C schema.)
The XSLT 1.0 Recommendation included an appendix with a non-normative DTD Fragment for XSLT Stylesheets. It's non-normative because namespaces play such an important role in XSLT stylesheets, and DTDs don't understand namespaces; it's a fragment because several extra declarations are necessary to allow for the use of literal result elements.
Literal result elements are the elements in a stylesheet from outside of the XSLT namespace that an XSLT processor will add to the result document just the way they are. For example, in the following template rule, the h2 element is a literal result element that will be wrapped around the contents of each subtitle element from the source tree that gets added to result tree:
h2
subtitle
<xsl:template
<h2><xsl:apply-templates/></h2>
</xsl:template>
It's easy enough for a DTD's xsl:template element declaration to list the elements such as xsl:apply-templates, xsl:choose, and xsl:element that are allowed inside of an xsl:template element, but a DTD has no way to say "and any other elements from outside of the namespace." The appendix mentioned above describes some contortions that use parameter entity redefinition to allow this, but it's enough trouble that I've never heard of anyone doing it for a production environment. One simpler alternative, which has been used in production environments, is to avoid all use of literal result elements and to use the xsl:element element to insert any new elements into the result tree. Using this approach, the template rule above might be written like this:
xsl:template
xsl:apply-templates
xsl:choose
xsl:element
<xsl:template
<xsl:element
<xsl:apply-templates/>
</element>
</xsl:template>
This use of the xsl:element instead of literal result elements allowed the use of a DTD-driven editor to edit stylesheets. It also allowed the addition of a valuable quality-control step to a system responsible for the maintenance of a large number of stylesheets, because validating each edited stylesheet before checking it into the repository greatly reduced the possibility of the runtime system choking on a bad stylesheet.
Stylesheet quality control and the use of intelligent editors are still worth pursuing beyond DTDs. An important reason that XSLT stylesheets are XML documents is to let us take advantage of our favorite XML tools on the stylesheets themselves. Just because DTDs aren't a good fit with these goals, though, we don't have to give up.
I've written before about how Schematron lets us fill some of the gaps of a DTD-driven system. While it may not help us edit XSLT stylesheets with most popular XML editors, it can help the quality control goal of keeping certain mistakes out of stylesheets. When a co-worker asked me about the possibility of doing this, I replied that it would be a good idea, but that he'd still have some Schematron rules to write and test. Then I remembered: buried in the distribution of nxml, an Emacs mode that uses RELAX NG Compact schemas to turn Emacs into a context-sensitive XML editor, is a RELAX NG schema for XSLT. (It's a RELAX NG Compact version created from a non-compact version with trang.) So, I told my co-worker that instead of writing a set of Schematron rules, he could take advantage of work already done by James Clark.
Among Clark's many other contributions to the XML world, he helped to invent XSLT, so he knows the syntax pretty well. He didn't write up some content models based on the DTD fragment and his own knowledge of XSLT, though; according to a header comment in his xslt.rng schema file, it "was mostly generated from the syntax summary in the XSLT Recommendation." Being generated right from the spec itself (I assume that, by "syntax summary," he meant the p elements with a class value of element-syntax in the XSLT 1.0 Recommendation) automates the process enough that we can assume that nothing was missed.
p
class
element-syntax
It's ironic that I had forgotten about this schema, because I had been using it all along. I've used Emacs with nxml to edit XSLT stylesheets and other XML documents as long as nxml has been available, but I never had to configure it to use the xslt.rnc schema when editing files with an ".xsl" extension because that's its default behavior.
If you don't know RELAX NG syntax, you can still use Emacs with nxml to edit documents based on your DTDs by using trang to convert your DTDs to RELAX NG Compact versions and pointing the nxml mode at those. If you're more familiar with XSLT than with RELAX NG Compact syntax, xslt.rnc is a great way to learn about RELAX NG Compact syntax. For example, to see how it addresses the issue of allowing certain XSLT elements and literal result elements from other namespaces in the content of an xsl:template element, see how it declares this element:
element template {
extension.atts,
attribute match { pattern.datatype }?,
attribute name { qname.datatype }?,
attribute priority { number.datatype }?,
attribute mode { qname.datatype }?,
(param.element*, template.model)
}
First, note how it's declaring an element called "template" and not "xsl:template". Because RELAX NG is namespace-aware, you can assign any namespace prefix you want to the namespace and use that in your stylesheets. (Use of the XSLT 1.0 DTD fragment requires you to hardcode a prefix such as "xsl:" in the element declaration and then use that for all "documents" — in this case, stylesheets — that you check against that DTD.)
The template.model pattern referenced in the template.element declaration is declared near the beginning of the schema:
template.model
template.element
template.model =
(instruction.category | literal-result-element | text)*
The instruction.category pattern names all the XSLT elements that a template rule can contain, and the literal-result-element pattern has this declaration:
instruction.category
literal-result-element
literal-result-element =
element * - xsl:* { literal-result-element.atts, template.model }
It shows the declaration of an element with any name, as long as it's outside of the namespace assigned to the "xsl" prefix in this stylesheet. Along with its attributes, it can contain anything that can go into a template rule.
Norm Walsh has created an alternative version of the RELAX NG XSLT schema for the creation and editing of XSLT 2.0 stylesheets. Norm had a puzzle to solve along the way, though. He wondered what he could do to have nxml automatically use the original schema for 1.0 stylesheets when he edited a stylesheet whose xsl:stylesheet element had a version attribute value of "1.0" and the schema for 2.0 stylesheets when he used nxml to edit an XSLT 2.0 stylesheet. The solution turned out to be an elegant bit of RELAX NG syntax that he hadn't used before; read about what he did on his weblog.
xsl:stylesheet
version
There are several W3C Schemas for XSLT 1.0 out there, so I decided to try a few. My first step with each was to use the Xerces Java ASBuilder utility, which I wrote about in the O'Reilly book XML Hacks, to check the integrity of each schema. If it couldn't parse the schema, there wasn't much point in trying to validate a stylesheet against that schema. (To use ASBuilder to check a W3C schema's integrity, make sure that xercesSamples.jar is in your classpath, add the -f option to the command line, and don't include the -i option if you're not adding a document to validate against the schema.)
The "XSLT v1.1 XSD Schema for Visual Studio.NET" available on gotdotnet.com failed this first test, but according to a Kathleen Dollard weblog entry it works with Microsoft's Visual Studio editor. (The gotdotnet.com web page describes it as an XSLT v1.0 schema, and the only 1.1-like feature I saw was the msxsl:script element, which I suppose corresponds to the xsl:script element in the aborted 1.1 version of XSLT. This undeclared "msxsl" prefix was one of the things that Xerces choked on.) I didn't see anything in this schema's declaration for the xsl:template element that looked like it would allow literal result elements, and without a copy of Visual Studio, I couldn't test my hypothesis that this schema would not allow them to be used.
msxsl:script
xsl:script
When I heard of an XSLT 1.0 schema from webMethods, I couldn't find it on their web site, but I did find a copy on the web site for Austria's University of Klagenfurt. It was written before W3C Schemas became a Recommendation, and after I tried changing its namespace URL from to the URL specified by the Recommendation, the Xerces ASBuilder utility complained enough about it to convince me that it wasn't a robust option for checking stylesheet integrity. I had a similar experience with an xslt.xsd schema developed by Don Box in early April 2000.
A RELAX NG devotee would say "You have a RELAX NG schema that works and you need a W3C schema version of the same schema? Just use trang to create one!" trang couldn't convert the original xslt.rng schema to xslt.xsd because of a nested grammar, so I took out the definition of and reference to the top-level-extension pattern. trang then converted the schema with a few warnings, but no errors. Before the ASBuilder utility approved of the xsd version, I had to change some quotation marks that were part of an attribute value into " entity references. Then, Xerces parsed a stylesheet that included literal result elements against this schema with no complaints, and it gave the right error message when I added an illegal xsl:whatever element to the stylesheet.
top-level-extension
"
xsl:whatever
Also in Transforming XML
Automating Stylesheet Creation
Appreciating Libxslt
Push, Pull, Next!
Seeking Equality
The Path of Control
I don't know exactly what I gave up by removing the top-level-extension pattern, but I would look further into it before using the W3C schema created from xslt.rng in a production environment. I would be more likely to just use xslt.rng and a RELAX NG validator in the production environment, but if you really need to use an XSLT schema with a tool that doesn't know about RELAX NG, a W3C XSLT schema created from a RELAX NG one is one option. (Make sure to mention to the tool's developers that full RELAX NG support would be easier to implement than full W3C Schema support.)
There's one more option: the XSL Working Group has made an XSLT 2.0 W3C schema available on the W3C's web site. It's non-normative, and XSLT 2.0 is not quite finished, but ASBuilder had no complaints with the schema, Xerces had no problem with an error-free XSLT 1.0 stylesheet that included literal result elements when I checked it against this schema, and Xerces found my errant xsl:whatever element in a stylesheet when I parsed it against this schema.
Because XSLT 2.0 is mostly backward-compatible with 1.0, using the W3C's 2.0 schema to edit and validate your XSLT 1.0 stylesheets is better than using an editor that's completely unaware of XSLT stylesheet structure. It would be an interesting project for someone who wanted to learn a lot about XSLT 2.0 and W3C Schema to revise this schema to truly reflect XSLT 1.0 structure. Myself, I'll stick with James' RELAX NG schema for XSLT 1.0 stylesheets and Norm's RELAX NG schema for XSLT 2.0 stylesheets.
The link to the W3C beta XML Schema for XSLT 2.0 produced the file schema_for_xslt20.xsd. This schema, loaded into Altova's XMLSpy produces a validation error under the xs:restriction of the xs:complexType named "text-element-base-type". Specifically, xs:simpleType shouldn't have xs:restriction as it's parent.
I'm not saying that that's not an error, but after validating with XML Spy it's always good to get a second opinion. I've heard too many stories about validation problems from XML Spy.
Thanks for a helpful article.
Don't forget the xsl: prefix on the close tag in your second code sample...
Lars
(While I'm on the topic, though, it's great to see that one new addition to the 11 February XSLT 2.0
Thanks. Bummer, though.
Bob
Bob
© , O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://www.xml.com/pub/a/2005/04/06/tr-xml.html?page=last&x-showcontent=off | CC-MAIN-2015-32 | refinedweb | 2,321 | 62.38 |
Can't import geocoder
(sorry, this is probably simple)
I"m trying to use the geocoder package in Drawbot, and wihle it works with Geany for example, I can't get the geocoder module to import. Not sure if there's anything I'm doing wrong that I don't know of?
This is the code I'm trying to use:
import geocoder g = geocoder.ip('me') print(g.latlng)
I figured out the problem: I was using the wrong version of Python when installing the package. Just so this isn't useless, I used this to help me check if the module was installed and accessible in Drawbot:
help("modules") | https://forum.drawbot.com/topic/68/can-t-import-geocoder/1 | CC-MAIN-2019-13 | refinedweb | 111 | 68.5 |
I have downloaded the Qt SDK 1.1 BETA version today at Forum.Nokia website , March 2, 2011.
This SDK contains:
Qt 4.7.2 and Qt Mobility 1.1.
This is stated in the Forum.Nokia website
What I want to do and what I did so far:
1. I want to develop an application that utilizes the built-in camera of a Symbian^3 phone.
2. I have researched and found out that the latest release of Qt Mobility which is Qt Mobility 1.1.1 APIs SHOULD include the Camera APIs which is under the Multimedia API -
3. I also found a QCamera application example here -
4. I downloaded the example and tried to build it in Qt Creator using the new Qt SDK 1.1
5. Now the first problem is that it cannot find the QCamera APIs specifically these two in the cameraexample.h header file:
// QtMobility API
#include <qcamera.h>
#include <qstillimagecapture.h>
My questions now:
Now is it possible that the latest Qt SDK 1.1 that includes the Qt Mobility 1.1.1 APIs did not really include the Camera APIs?
Did the developers forgot to include it into the Qt Mobility 1.1.1 APIs? | http://developer.nokia.com/community/discussion/showthread.php/220273-Camera-API-missing-from-Qt-SDK-1-1-s-Qt-Mobility-APIs | CC-MAIN-2014-10 | refinedweb | 204 | 70.09 |
:
Sockets and Internet Protocols
socket output writing problem
Jeff Yan
Ranch Hand
Posts: 42
posted 9 years ago
hey,
im having trouble getting my
java
server to write to the socket to get a browser to read the html it has been sent, the browser just waits for a response!!
i dont know what else to do, i need to be able to have html the server sends readable by 127.0.0.1:PORT
here is the code:
package exercise_1; import java.io.*; import java.net.*; public class JavaHTMLServer { ServerSocket serverSocket; Socket connection = null; PrintWriter out; BufferedReader in; String message; static String clientIP = ""; JavaHTMLServer() { } void run() throws IOException { try { // creates a server socket try { serverSocket = new ServerSocket(8080); } catch (IOException e) { System.err.println("Could not listen on port: 8080."); System.exit(1); } System.out.println("have opened port 8080 locally"); // Waits for connection System.out.println("Waiting for connection"); connection = serverSocket.accept(); System.out.println("Connection received from " + connection.getInetAddress().getHostName() + "\n"); clientIP = connection.getInetAddress().getHostName(); } catch (IOException ioException) { ioException.printStackTrace(); } // Gets Input and Output streams try { in = new BufferedReader(new InputStreamReader( connection.getInputStream())); out = new PrintWriter(connection.getOutputStream(), true); } catch (IOException e) { System.out.println("Cannot connect streams"); } // Reads lines as they get received through socket String text = "."; while (!text.equals(" ")) { text = in.readLine(); System.out.println(text); } // Response that is sent back to client sendMessage("HTTP/1.1 200 OK\nContent-Type: text/html\n\n<html><p>Text</a></html>"); // Closes connection try { in.close(); out.close(); serverSocket.close(); } catch (IOException ioException) { System.out.println("Error - sockets could not be closed!"); } } // Sends response back to client private void sendMessage(String msg) { out.println(msg); // out.flush(); System.out.println("SERVER SENT> " + msg); } public static void main(String args[]) throws UnknownHostException { Server server = new Server(); try { while (true) { server.run(); } } catch (NullPointerException n) { System.out.println("Connection Closed by client " + clientIP); } } }
any help appreciated
~ Jeff Yan ~
Lester Burnham
Rancher
Posts: 1337
posted 9 years ago
The server design is a bit odd. I suggest to work through the
Java Networking Tutorial
, especially the section
Writing a Client/Server Pair
.
Jeff Yan
Ranch Hand
Posts: 42
posted 9 years ago
in what sense is it odd?? i need some constructive critasism. the problem with that tutorial is that i do not want to implement that knock knock protocol! i want it to be able to receive a GET request and process it accordingly, either getting a requested webpage or directory / file.
~ Jeff Yan ~
Lester Burnham
Rancher
Posts: 1337
posted 9 years ago
Ah - HTTP. That's a bit different from most other TCP/IP protocols (in that it closes the connection after the response, unless Keep-Alive is used), and the code you have accordingly more "normal". One issue that often leads to "hangs" is
Don't println to a Socket
(which this code does, via the "readLine" call).
A second issue is that HTTP ends lines using "\r\n" - this code uses just a newline.
Lastly, HTTP (even just GET) is a lot more complex than what this code does, but I assume you have read the relevant specification, so you know that already.
Satya Maheshwari
Ranch Hand
Posts: 368
posted 9 years ago
You may also want to put in a packet sniffer on your client and see how is the server response differing from an actual GET response. That may reveal if anything else is missing.
Thanks and Regards
I'm sure glad that he's gone. Now I can read this tiny ad in peace!
Thread Boost feature
reply
Bookmark Topic
Watch Topic
New Topic
Boost this thread!
Similar Threads
Why won't my program compile?
Writing to files using socket server
a total newbie in Sockets
Server/Client Socket Connection
accessing a server using socket connection
More... | https://www.coderanch.com/t/515666/java/socket-output-writing | CC-MAIN-2020-40 | refinedweb | 635 | 57.37 |
ICANN To Allow
.brandname Top-Level Domains
300
AndyAndyAndyAndy sends in this excerpt from a Reuters report: . ... The move is seen as a big opportunity for brands to gain more control over their online presence and send visitors more directly to parts of their sites — and a danger for those who fail to take advantage."
Funny That (Score:5, Insightful)
"As a big brand, you ignore it at your peril," says Theo Hnarakis, chief executive of Australian domain name-registration firm Melbourne IT DBS, which advises companies and other organizations worldwide about how to do business online.
And it only costs $185,000 USD.
Funny, that.
Re:Funny That (Score:5, Interesting)
There's a HUGE glaring hole with this notion. As someone who's filed for a trademark before, trademarks are only limited within a particular field of business. So, for example, you could have a car company named Shiny, a spatula manufacturer named Shiny, a metal alloy named Shiny, whatever.
But there's only one TLD.
So, not only is this messing over individuals, but it's *really* messing over smaller businesses or businesses who came later to the game -- even if they hold a legitimate trademark on that name. I own a small software company that happens to have the same name as a larger, established trucking company. This could happen to me.
(Oh, and if your answer to anyone is, "Just pick another name"... do you have any clue how thoroughly picked through the trademark filings are? The Futurama "popplers" joke about there only being two product names in existence left untrademarked isn't that far off. Oh, and if you use a foreign word, you have to not overlap on both the foreign word *and* its translation)
Re: (Score:2)
OK, so what we really need is each "particular field of business" getting a TLD. So there's one apple.computer; one apple.records, and no apple.fruit as that would be too generic.
What could possibly go wrong?
Re: (Score:2)
Domain names aren't trademarks, they're ways to address a node on a network.
The network works just fine even though both Delta Airlines and Delta faucets have a trademark for "Delta".
Nobody turns on their tap and expects to hear airline schedules.
You miss the point. (Score:2)
Who gets to own (and run) the
.delta TLD?
Re: (Score:2)
No one, we create a ".airline" and a ".plumbing" TLD.
Re: (Score:2)
Re: (Score:2)
ICANN employees, some of you read this site, surely?
Re: (Score:2)
That's how Chinese Internet Keywords [cnnic.cn] work, and there is a "keyword registry" for them, just like a "domain registry".
It's just like DNS though, you're still mapping a name to a number (IP) somehow.
In your example, assuming *.apple was a wildcard top level domain, would go directly to their website. (And perhaps even "apple" if they used a glue record at the root servers.)
Re:Funny That (Score:4, Interesting)
It goes beyond that... it's also about recognition.
When I see "blablabla.com" I'm pretty sure that's a website. When toplevel domains are fully customizable and some companies will presumably start using [microsoft] or [apple]
... recognition will be gone, which is very annoying and slightly confusing. Most annoying for me personally (and many others I gather) will be I can no longer use the top bar for both searching and entering a webaddress. If I enter one word right now, it searches for it and if I enter a word+".com" (or similar) it goes to the web page. How will it be able to know once we go "keyword"-ing our TLDs? (Without either having a current list of ALL TLD's (which can become a huge list) or looking it up online (which introduces lag, especially on mobiles)?
But it was bound to happen I guess... ICANN wasn't going to ignore this huge amount of money that they can make from this just because it might make sense.
Re: (Score:3)
Re: (Score:2)
Only assholes fail to set a CNAME for www...
Re:Funny That (Score:4, Insightful)
It's the other way around. Assholes not setting the actual domain to the www server.
Australian government departments are classic for this. [govtdept.gov.au] will work, drop the www and you get time outs.
Re: (Score:2)
How about we say "ignorant people" rather than assholes. "Asshole" implies they are being deliberately obnoxious.
This changes or improves NOTHING (Score:5, Insightful)
In fact, I think it just makes it worse.
Not only will there continue to be trademark and other fights over
.com, .net and all the rest, there will now be a new level of fighting over a huge rush of TLDs.
Next up, rapid filing for trademarks in small island nations and squatting on TLDs. If I thought of it that easily, so did a thousand scum-bags out there.
Re: (Score:2)
They can fight over 'em if they want, but I doubt that anyone would actually *use* them.
Re:This changes or improves NOTHING (Score:4, Insightful)
I see the exact same thing - it was bad enough when a company went after (anythingclosetomytrademark).(anyTLD), now that second part goes from one-in-100 to a wildcard.
Buy
.georgejetson and then try to use pepsi.georgejetson and watch the fireworks. this is just going to create a mess. Look at how crazy they go now if you try to register pepsii.com or a TLD they didn't think to register like pepsi.co
Now companies have to be thinking about unlimited TLDs, not just a handful.
Re:This changes or improves NOTHING (Score:5, Funny)
Now companies have to be thinking about unlimited TLDs, not just a handful.
Due to the hierarchical nature of DNS, there is no difference between adding one more TLD and allowing any domain as a TLD (. vs
.com).
I propose registering '.sucks' and then mirroring all of DNS inside it so resolving icann.org.sucks resolves to icann's website. Extra props for doing so recursively so that so does icann.org.sucks.sucks.sucks.
Re: (Score:2)
I propose registering '.sucks' and then mirroring all of DNS inside it so resolving icann.org.sucks resolves to icann's website. Extra props for doing so recursively so that so does icann.org.sucks.sucks.sucks.
Congratulations, you've recreated the alt.* hierarchy on usenet!
Re: (Score:2)
Re: (Score:2)
Actually.. I do think it's an improvement, in a way.
There's plenty of non-commercial entities on
.com domains. .org domains sometimes have commercial entities .net could be anything from raindows and ponies to hardcore porn .us site may well be run by a company on the Seychelles acting for a business in Georgia
a
Given that there's really very little meaning to the a TLD anyway, I welcome its further dilution to the point where we realize that really it doesn't matter whether we access [coca-cola.com] o
Re:This changes or improves NOTHING (Score:4, Insightful)
Re: (Score:3)
I agree completely. It solves nothing. In fact, it just makes things needlessly complicated. For instance, does some nature conservatory body about the amazon river get dibs on
.amazon if they front the cash or does the internet giant get it? Does that infringe on Amazons copyright? The classic excuse regarding similarly named companies is that it confuses consumers, e.g. Facebook sueing all "___Book" companies. So now both big and small companies can spend more time sueing each other than making products o
Re: (Score:2)
The case law regarding squatting on trademarks in domain names is pretty cut-and-dried by now.
One case extending all of that to TLDs would be sufficient, and would take about 5 minutes in the court of a judge who isn't a total dunce.
Dear ICANN (Score:3, Insightful)
F U!
Sincerely,
The Internet
Dear Internet (Score:3)
Yes,we are aware you are one of many petitioning for
.FU, but now you must convince us you are not violating the .FUBU trademark.
Sincerely,
ICANN
Oh no (Score:2, Funny)
Re: (Score:2)
That's what I was going to say. Beat to the registration, dammit.
A bad idea. (Score:3)
This does nothing but muddy the waters further as to what a top level domain is for. The original purpose was to help distinguish the class of site one was dealing with. Branding was already a clear part of the domain. The second part.
This will make web browsers less useful too. As it stands now, if you type apple in your browser bar, it uses a search engine and locates the cloest match to that idea. This would make it ambiguous with a TLD and make it impossible for your browser to easily tell when to search.
Re: (Score:3, Funny)
It should search never. The address bar is for typing in addresses. If you want to search, type something in the search bar.
Re:A bad idea. (Score:5, Funny)
Re: (Score:2)
That is the search bar. A dedicated text field you have to put your cursor in depending on whether you want to visit a URL or search for keywords is a waste of screen space and of user time.
Re: (Score:2, Insightful)
Utter nonsense. Unless you want to give all your searching over to whatever search service (read: "corporation") happens to have control of your address bar at any point in time. If you're a Google fanboi, maybe you like that idea. I don't.
There is perfectly good rationale for having your searches separate from your explicit addresses. When I want to go to a site, I want to go to the site I typed in, not some search engine's idea of what site I was looking for.
It may be a "90s" idea (as someone else called it), but it's a damned good one. I'll keep my searches separate, thank you very f*ing much.
Re: (Score:3)
Search engines today are simply not smart enough to guess what I want... and I know, because I have given them lots of tries in that context and they get it wrong far too often. I was royally pissed when Comcast usurped my 404 errors and directed me to their favorite search engine instead. That should be illegal. As it is, I had to look up how to turn that very seriously UNWANTED feature "off".
Maybe in another 10 y
Re: (Score:3, Insightful)
We've reached the stage where
Do TLDs and Urls actually matter to users? (Score:5, Insightful)
Re: (Score:2)
My impression is that most folks don't type addresses, they get to sites through google. If I want to go to say Ford's website I open google, type ford, and click on the first link. I usually never type urls unless I have no other way to get there. I don't really need to care if their site is ford.com or cars.ford or whatevever.
This is basically it. Top level domains are getting to be virtually irrelevant to the average user.
Re: (Score:2)
The solution to what problem, exactly? TLDs are already semantic defunct. They weren't really necessary in the first place. As others here have pointed out, there's no point in artificially limiting TLDs to meaningless TLAs like "com" or "net".
Re: (Score:3)
Re:Do TLDs and Urls actually matter to users? (Score:5, Funny)
What's bad is that I have seen people who, when I say, "Go to Google," actually go to Google, type in "google" in the search bar, and click the first link to get to it.
Re:Do TLDs and Urls actually matter to users? (Score:5, Funny)
That's stupid, they should be using the "I'm feeling lucky" button.
Re: (Score:2)
wait.. how do you send emails to people then?
That's a WONDERFUL idea (Score:5, Insightful)
Now Apple Computers, Apple Corp, and assorted apple grower associations can all go to legal war with each over who has the most right to the one, the only, the singular ".apple" vanity TLD.
Protip: Trademarks don't all share the same namespace, and only have to be unique within a general field of commercial endeavor.
Re:That's a WONDERFUL idea (Score:4, Informative)
Since when has ICANN given a single thought to what is good for the internet, what makes sense, or what the users of the internet want? This is all about money... they intend to charge huge $$$ for your own TLD. I'm sure they will award themselves big fat bonuses for being so innovative.
The problem is I can't think of anything better to replace ICANN; Giving the UN control over the internet is certain to be worse. Letting idiots with no idea how the internet works vote on its architecture is equally as awful. As soon as national governments get involved, you have their ridiculous petty disputes and nationalism injecting themselves into every issue (go read up on why MS had to disable the timezone map in Windows... India threatened to kick them out of the country because one or two pixels weren't properly highlighted due to conflicting claims over a certain region.)
Re: (Score:2)
Re:That's a WONDERFUL idea (Score:5, Informative)
Re: (Score:3)
Re: (Score:2, Funny)
Think we'll be able to go to Microsoft.apple?
This is stupid (Score:2)
Proliferating the TLDs with all the
.com domain names is just plain asinine.
Someone take these morons out back and have them shot, please.
TLDs are almost worthless (Score:3)
Proliferating the TLDs with all the
.com domain names is just plain asinine.
Someone take these morons out back and have them shot, please.
As a counter-argument, I'd say that TLDs themselves (as they currently stand) are pretty worthless these days.
Consider: If you have a site that's not on ".com", and there's another domain with the same name, except it's in ".com", there's a pretty good chance site visitors will screw up and go to the ".com" on instead. If the same name is in different TLDs and these domains are not run by the same organization, confusion is bound to result.
So one solution would be to go to a fully flat namespace. Ditch TL
Re:TLDs are almost worthless (Score:4, Insightful)
Around here, almost every major site uses
.nl (our country TLD). Why American companies that only trade in America use .com I don't understand.
Registrar greed at its finest... (Score:3)
This is just plain stupid...but then again, how many ideas birthed from pure greed aren't. I'll believe it's not an act of greed when they only charge $5/year to register these "uber-premium" names. Fat chance of THAT happening.
And when they advertise it's "dangerous" for companies to NOT register ALL relevant TLDs related to their business? I can hear the registrar salesperson now..."What?!?, you mean you don't have yourcompany.com/.net/.org/.info/.biz/.me/.mobi/.us/.biz/(and now)
.brandname!?! You MUST register ALL of these NOW or your brand will surely be ruined!"
Yeah, good luck with SEO too...All their damn TLDs won't even fit on the first page of hits.
Why? (Score:2)
Re: (Score:2)
I doubt anyone is going to use brandname.brandname URLs
How about boing.boing and pizza.pizza?
Heh, I can't wait to see how this messes up form input validation. "dave@hal really is my email address, goddammit!"
Re: (Score:2)
Other than shiny marketing-speak, what is the practical difference between something like computers.apple.com and computers.apple? I doubt anyone is going to use brandname.brandname URLs, so are we just waving goodbye to the first section of the domain?
We're waving good-bye to the last part, too. If ".com" is the first TLD people think of when trying to figure out the domain name for a site - and if that expectation (that it's dot-com) is so ingrained that people get confused when it's anything else, then having that "dot-com" on there is worthless and only introduces confusion. So TLDs become the new (premium-price, anti-squatter) domain names for big organizations, and everyone who can't afford one must accept a domain under an ordinary, "old-fashion"
Corporations and countries (Score:2)
Just one more step for Corporations to be considered Sovereign Entities. Soon they will be considered the same as a country.
Re: (Score:2)
Good. The we can cut off CEOs head.
Re: (Score:2)
Without the Shiawase Decision how are we going to be running around with Panther Assault Canons and more chrome than a Harley?
Monetization of what should be neutral (Score:5, Interesting)
The internet is damaged by commercial interests. I don't think I'm speaking from nostalgia about 'the good old days' but large commercial interests have only weakened the utility of the internet.
The top level domains should be neutral. The internet is no longer neutral if every company can buy out the namespace.
I envy biological scientists and ecologists with their highly organized binomial classification systems. They're neutral. They organize information how it should be organized.
I reckon we have difficulty classifying and namespacing the internet is because we don't really know what it is. I guarantee that the information architecture will have at least one massive restructuring in our lifetimes. One day it will be called something different, like 'the link' or the 'exchange'. You know the 'omniscient' like information system that you see alien races mention in Star Trek.
Re: (Score:2)
I envy biological scientists and ecologists with their highly organized binomial classification systems. They're neutral. They organize information how it should be organized.
That's why there should only be an "asshat" TLD with all of the brands going under that.
Re:Monetization of what should be neutral (Score:5, Insightful)
The top level domains should be neutral.
Why?
Re: (Score:2)
large commercial interests have only weakened the utility of the internet.
Damn you Google for indexing the Internet and providing information at our fingertips! Damn you Amazon and other sellers for letting us comparison shop and buy things from the convenience of our homes! Damn you...etc.
I was around in the "good old days". The Internet boom has been, overall, a huge net benefit. I was skeptical at the time when it was starting to become commercialized, but it turned out all right.
i-cann-has-tld? (Score:2)
corporate dystopia is here (Score:5, Insightful)
Hmmmmm, until recently, only countries and groups got TLDs. Now, corporations have been elevated to the level of countries.
Yet another sign that the dystopia is upon us.
Re: (Score:2)
Hmmmmm, until recently, only countries and groups got TLDs. Now, corporations have been elevated to the level of countries.
Yeah, but on the bright side, I've got this great gig, working with the Ravens' Ark as an independent contractor...
Last one out please turn off the lights (Score:2)
This is stupid on so many levels. What's next? Religions and cults? Political parties? Hobbies?
Man, who will be the registration authority? How will domains be impacted when/if companies are prohibited from doing business in some location?
IPv6, first. (Score:2)
We need IPv6 to be fully supported by everyone, first. More domains and sub-domains means more SSL certificates and exchange servers, etc. Which means more IP addresses.
I know, I know, name based hosting and all that. Unfortunately large corps don't think that way, they think in terms of IP blocks. They will see this as a reason for more IP block thus diminishing the already relatively low number of IPv4 addresses.
So in conclusion, focus on IPv6 first.
My personal opinion on this is it's a stupid gimmick by
Too late? (Score:2)
It seems like almost anyone can register almost any TLD, so I doubt that this would cause the current situation to deteriorate. However, most of the people who are online have been online for over a decade. It is going to be very hard to change people's habits.
Besides, what is the merit of this? Even from a marketing perspective, most people identify "brand.com" as the address to a website so you can just plop that onto any piece of advertising. How would you identify an address in this new scheme? Add
ICANN did not weigh the costs vs. benefits (Score:5, Insightful)
ICANN has really dropped the ball on new TLDs. Folks like Tim Berners-Lee were explicitly against new top level domains. The W3 even wrote a position paper New Top Level Domains Considered Harmful [w3.org]. They used the examples of
.xxx and .mobi, but the reasoning applied to all new TLDs. [doc.gov] regarding the renewal. I would encourage people to send comments, to voice their concerns about the bad policymaking from ICANN.
ICANN is also about to renew the
.NET agreement with VeriSign [icann.org] despite numerous comments [icann.org]..
Re: (Score:2)
To be fair, a large part of the argument was the problem of domain owners having to buy a whole new domain for every TLD issued. Example corp. had to get example.com, example.net, and example.org. Adding
.mobi and .xxx 'forces' them to get example.xxx and example.mobi. The premise here is that in practice, the TLD became a meaningless thing, and adding another just causes another clone of the other TLDs. In this line of thinking, the TLD being deprecated for a single '.' TLD actually alleviates this syndrom
Pure Economics (Score:2)
This could be FUN! (Score:5, Funny)
I'd like to register my company domains, we are Local Domain, Inc. Our leading product is our LocalHost operating system. Please register to us:
localdomain
localhost.localdomain
Thank you,
Root User of Local Domain
Re: (Score:2)
My company is "#1" and I would like to register "1" as our TLD. Our nameservers will be located at 10.0.0.1, 127.0.0.1, 172.16.0.1 and 192.168.0.1.
If they can do this... (Score:2)
ICANN deserves to rot in .hell (Score:2)
ICANN appears to be well on its way to loosing legitimacy. A poster child for what happens when an organization tasked with helping the network is rotted out from the inside out by money.
Fuck these retards. The only acceptable response should be for DNS, network operators and governments to take a stand and disallow queries to arbitrary TLDs. If these new TLDs can't actually be used they will have no value.
Who names the namers? (Score:2)
Fortunately we live in a representative democracy, where I can write to my legislator and object to this action by the agency that governs the internet.
But I'm trying to remember... which level of the government has authority over ICANN? I know it's isn't the state or provincial government, and it isn't the federal or national government.... Surely someone must have authority over them?
Sounds like a phishers paradise... (Score:2)
Although I suppose the startup costs will keep a lot of them away. Or have them fighting over abandoned TLD domains...
Still, this seems like a 'clarification' that will only muddy the waters further for most people.
This also will change SPAM as we know it (Score:2)
At that point, it is game over, the spammers have won. There will be no wa
Protest Domains (Score:3)
Great. Now we can have one stop shopping for protest domains.
riaa.go.fuck.yourself
sony.go.fuck.yourself
mpaa.go.fuck.yourself
Anybody got a spare 185k kicking around?
Sponsored by Go Daddy (Score:2)
The only thing this will do is give Go Daddy another gimmicky TLD to upsell
This is brilliant! (Score:3)
Now we can have: [slashdot.slashdot]
Re: (Score:2) [slash.dotslash]
Hosts file (Score:2)
And one more step on the way from DNS to
/etc/hosts. Next they'll be complaining that a flat namespace has management and scaling issues.
iCANN TLD (Score:2)
Please put me down for the iCANN TLD.
I intend to throw it open to the public, first-come, first-serve.
Re:Why? (Score:5, Insightful)
Anyone else here old enough to remember the Great Renaming on Usenet? It was just before my time there, actually, but this sounds like the exact same thing... in reverse. They took a whole bunch of newsgroups which were turning into an unwieldy flat file (under the net.* prefix), and sorted them into a hierarchy with a small batch of broad top-level nodes: (comp.*, misc.*, news.*, rec.*, sci.*, soc.*, talk.*) which could be further subdivided, etc. In the process net.comics became rec.arts.comics, and so on. What it built was a lot like the internet domain name hierarchy (but opposite-endian). It added structure and organization, which are Very Useful Things to have when dealing with Something Very Large. (Such as the Internet.) All this move by ICANN would do is to chop the last four characters off every
.com in the database, and move that whole damn thing to the root level. If I can think of a business name that hasn't already been squatted, I can still register ____.COM for a few bucks, but I have to write up a proposal and take it to ICANN if I want to also claim .____? Bad policy, bad engineering, bad idea.
Re:Why? (Score:5, Informative)
Yeah, I was there. This is where brian reid got pissed off at gene spafford so brian and jon gilmore created alt.*
Speaking of alt, this also applied to dns...
Ironically it was Eugene Kashpureff that came up with the
.brandnam idea in 1997 and was universally reviled by the very poeple who are doing it now. Turns out it wasn't that it was a bad idea, it was just they they wern't making any money off it. Now they can.
Re: (Score:2)
So now, Apple will have to have Apple.com and Apple.apple? Or store.apple?
All of them, obviously. Let the money flow.
Now Internet == TV (Score:2)
I guess I can logout now.
I haven't done so, since 1997
Re: (Score:2)
I would assume you'd just go to "apple". Hypothetically:
nickuj@work:~$ host apple
apple has address 17.149.160.49
apple has address 17.172.224.47
apple mail is handled by 10 mail-in14.apple.
apple mail is handled by 20 mail-in2.apple.
apple mail is handled by 20 mail-in6.apple.
apple mail is handled by 100 mail-in3.apple.
apple mail is handled by 10 mail-in11.apple.
apple mail is handled by 10 mail-in12.apple.
apple mail is handled by 10 mail-in13.apple.
Re: (Score:2)
Re:Why? (Score:4, Insightful)
How about?
Even better, the The American Society for Microbiology [asm.org] could change their URL to. I imagine that'd get them a few extra page hits.
Re: (Score:3)
Even better, the The American Society for Microbiology [asm.org] could change their URL to. I imagine that'd get them a few extra page hits.
*facepalm*
My first though was "You mis-spelled 'organism'".
Re: (Score:2)
I was thinking more like
.Mac.
Re: (Score:2)
I'm with you and all but, seriously, it's not like Google will stop working. So a company has a brand domain, bfd.
Re: (Score:2)
Why does every comment here have a score of exactly two minutes after the article was posted? Is it standard practice to mod up your own comments here. FUCK THIS.
Calm down. People who have been moderated up enough times in the past have +1 to their comments' scores as a Karma Bonus Modifier. You can change this for your profile, which would drop a bunch of the +2s you see down to +1.
Re: (Score:2)
Re: (Score:2)
Good boys start at 4.
;-)
Re: (Score:2)
Re: (Score:2)
Quick: start a company called Localdomain and trademark the name.
Once you get your new TLD, your website, of course, will be hosted on the machine "localhost".
Re: (Score:2)
Took them long enough
Can't wait to see which spammer registers free.viagra
A spammer with deep pockets and a very strong legal team?
$185k to apply, and if the TLD you're applying for is trademarked, you have to show it's yours. | http://tech.slashdot.org/story/11/06/17/202245/icann-to-allow-brandname-top-level-domains | CC-MAIN-2015-48 | refinedweb | 4,833 | 75.81 |
Geo::WebService::Elevation::USGS - Elevation queries against USGS web services.
use Geo::WebService::Elevation::USGS; my $eq = Geo::WebService::Elevation::USGS->new(); print "The elevation of the White House is ", $eq->elevation( 38.898748, -77.037684 )->{Elevation}, " feet above sea level.\n";
The GIS data web service this module was originally based on has gone the way of the dodo. This release uses the NED service, which is similar but simpler. I have taken advantage of the new service's ability to provide output in JSON to simplify processing, and have added a compatibility mode to make the output from the new service as much like the output of the old as possible, but there are still differences:
* The new service does not expose data source selection. Therefore all functionality related to selecting data from a particular source now does nothing.
* The new service does not support retrieval of data from more than one source. Therefore any functionality that used to do this now returns data from a single source.
* The new service does not return the
{Data_ID} information in any way whatsoever. So, at least in compatibility mode, this is set to the value of
{Data_Source}.
* The new service does not report extent errors. Instead it appears to return an elevation of
0.
* The structure returned by the new service is similar to that returned by the old service, but the top-level hash key name is different. This module will attempt to hide this difference in compatibility mode.
Because this module attempts to hide the differences from the data returned by the old service, a new attribute,
compatible, is added to manage this. If this attribute is true (the default) returned data will be as close to the old server's data as possible.
With release 0.100, the following functionality is deprecated:
* Attributes
default_ns,
proxy,
source, and
use_all_limit.
* Methods
getElevation() and
getAllElevations(). The
elevation() method will remain.
Starting with release 0.104_01, all deprecated functionality will warn every time it is used. Six months after that it will become fatal. After a further six months, all code related to the deprecated functionality will be removed.
In the meantime you can suppress the warnings with
no warnings qw{ deprecated };
At the point where the deprecated functionality warns on every use, the
compatible attribute will also become deprecated. Six months after that, its default will become false, and it will warn when set true. Six months after that it will become a fatal error to use it.
This module executes elevation queries against the United States Geological Survey's web NAD server. You provide the latitude and longitude in degrees, with south latitude and west longitude being negative. The return is typically a hash containing the data you want. Query errors are exceptions by default, though the object can be configured to signal an error by an undef response, with the error retrievable from the 'error' attribute.
For documentation on the underlying web service, see.
For all methods, the input latitude and longitude are documented at the above web site as being WGS84, which for practical purposes I understand to be equivalent to NAD83. The vertical reference is not documented under the above link, but correspondence with the USGS says that it is derived from the National Elevation Dataset (NED; see). This is referred to NAD83 (horizontal) and NAVD88 (vertical). NAVD88 is based on geodetic leveling surveys, not the WGS84/NAD83 ellipsoid, and takes as its zero datum sea level at Father Point/Rimouski, in Quebec, Canada. Alaska is an exception, and is based on NAD27 (horizontal) and NAVD29 (vertical).
Anyone interested in the gory details may find the paper Converting GPS Height into NAVD88 Elevation with the GEOID96 Geoid Height Model by Dennis G. Milbert, Ph.D. and Dru A. Smith, Ph.D helpful. This is available at. This paper states that the difference between ellipsoid and geoid heights ranges between -75 and +100 meters globally, and between -53 and -8 meters in "the conterminous United States."
The following public methods are provided:
This method instantiates a query object. If any arguments are given, they are passed to the set() method. The instantiated object is returned.
This method returns a list of the names and values of all attributes of the object. If called in scalar context it returns a hash reference.
This method queries the data base for the elevation at the given latitude and longitude, returning the results as a hash reference. This hash will contain the following keys:
{Data_Source} => A text description of the data source;
{Elevation} => The elevation in the given units;
{Units} => The units of the elevation (
'Feet' or
'Meters');
{x} => The
$lon argument;
{y} => The
$lat argument.
For compatibility with versions of this module before
0.100, this method behaves slightly differently if the compatible attribute is true:
* The
{Data_ID} key of the return will be set to the value of the {Data_Source} key;
* The
{Units} key will be converted to upper case;
* If called in scalar context the return will be a reference to an array whose single element is the results hash.
You can also pass a
Geo::Point,
GPS::Point, or
Net::GPSD::Point object in lieu of the
$lat and
$lon arguments. If you do this,
$valid becomes the second argument, rather than the third.
If the optional
$valid argument is specified and the returned data are invalid, nothing is returned. This means an empty array if compatible is true. The NAD source does not seem to produce data recognizable as invalid, so you will probably not see this.
The NAD server appears to return an elevation of
0 if the elevation is unavailable.
This method returns the value of the given attribute. It will croak if the attribute does not exist.
Starting with version 0.100, this method is essentially a wrapper for the
elevation() method. For compatibility with the prior version of this module it returns an array reference in scalar context.
Starting with version 0.100, this method is essentially a wrapper for the
elevation() method.
The
$source argument is both optional and ignored, but must be present for backwards compatibility if the
$elevation_only argument is used. Feel free to pass
undef.
The
$elevation_only argument is optional. If provided and true (in the Perl sense) it causes the return on success to be the numeric value of the elevation, rather than the hash reference described below.
This method (which can also be called as a static method or as a subroutine) returns true if the given datum represents a valid elevation, and false otherwise. A valid elevation is a number having a value greater than -1e+300. The input can be either an elevation value or a hash whose {Elevation} key supplies the elevation value.
This method sets the value of the given attribute. Multiple attribute/value pairs may be specified. The object itself is returned, to allow call chaining. An attempt to set a non-existent attribute will result in an exception being thrown.
This boolean attribute determines whether the data acquisition methods carp on encountering an error. If false, they silently return undef. Note, though, that the croak attribute trumps this one.
If retry is set to a number greater than 0, you will get a carp on each failed query, provided croak is false. If croak is true, no retries will be carped.
This attribute was introduced in Geo::WebService::Elevation::USGS version 0.005_01.
The default is 0 (i.e. false).
This boolean attribute determines whether this object attempts to make returned data consistent with the old GIS server.
The default is
1 (i.e. true) for the moment, but see the NOTICE above for plans to change this.
This attribute determines whether the data acquisition methods croak on encountering an error. If false, they return undef on an error.
If retry is set to a number greater than 0, the data acquisition method will not croak until all retries are exhausted.
The default is 1 (i.e. true).
This attribute is deprecated. See the NOTICE above for the deprecation schedule.
This attribute records the XML namespace used by the SOAP query. This must agree with the targetNamespace value records the error returned by the last query operation, or undef if no error occurred. This attribute can be set by the user, but will be reset by any query operation.
The default (before any queries have occurred) is undef.
If this attribute is set to a non-negative integer, elevation results will be rounded to this number of decimal places by running them through sprintf "%.${places}f".
The default is undef.
This attribute is deprecated. See the NOTICE above for the deprecation schedule.
This attribute specifies the actual url to which the SOAP query is posted. It must agree with the soap:address location value given for wsdl:port name "Elevation_ServiceSoap" specifies the number of retries to be done by
getAllElevations() and
getElevation() when an error is encountered. The first try is not considered a retry, so if you set this to 1 you get a maximum of two queries (the try and the retry).
Retries are done only on actual errors, not on bad extents. They are also subject to the "throttle" setting if any.
The default is 0, i.e. no retries.
This attribute specifies a piece of code to be called before retrying. The code will be called before a retry takes place, and will be passed the Geo::WebService::Elevation::USGS object, the number of the retry (from 1), the name of the method being retried (
'getAllElevations' or
'getElevation'), and the arguments to that method. If the position was passed as an object, the hook gets the latitude and longitude unpacked from the object. The hook will not be called before the first try, nor after the last retry.
Examples:
# To sleep 5 seconds between retries: $eq->set( retry_hook => sub { sleep 5 } ); # To sleep 1 second before the first retry, 2 seconds # before the second, and so on: $eq->set( retry_hook => sub { sleep $_[1] } ); # To do nothing between retries: $eq->set( retry_hook => sub {} );
The default is the null subroutine, i.e.
sub {}.
This attribute is deprecated. See the NOTICE above for the deprecation schedule.
This attribute specifies the ID of the source layer to be queried by the elevation() method. Valid layer IDs are documented at.
A legal value is a scalar, or an ARRAY, CODE, HASH, or Regexp reference. Please see the elevation() method's documentation for how these are used.
The default is '-1', which requests a response from the 'best' data source for the given point.
Geo::WebService::Elevation::USGS->set( throttle => 5 );
This attribute, if defined and positive, specifies the minimum interval between queries, in seconds. This attribute may be set statically only, and the limit applies to all queries, not just the ones from a given object. If Time::HiRes can be loaded, then sub-second intervals are supported, otherwise not.
This functionality, and its implementation, are experimental, and may be changed or retracted without notice. Heck, I may even go back to
$TARGET, though I don't think so.
This attribute specifies the timeout for the SOAP query in seconds.
The default is 30.
If true, this attribute requests that network requests and responses be dumped to standard error. This should only be used for troubleshooting, and the author makes no representation about and has no control over what output you get if you set this true.
The default is undef (i.e. false).
This attribute specifies the desired units for the resultant elevations. Valid values are
'Feet' and
'Meters'. In practice these are not case-sensitive, and any value other than case-insensitive
'Meters' will be taken as
'Feet'.
The default is 'FEET', but this will become
'Feet' when the compatibility code goes away.
This attribute is deprecated. See the NOTICE above for the deprecation schedule.
This attribute is used to optimize the behavior of the elevation() method when the 'source' attribute is an array or hash reference. If the number of elements in the array or hash is greater than or equal to this, elevation() gets its data by calling getAllElevations() and then dropping unwanted data. If the number of elements is less than this number, elevation() iterates over the elements of the array or the sorted keys of the hash, calling getElevation() on each.
Note that setting this to 0 causes getAllElevations() to be used always. Setting this to -1 (or any negative number) is special-cased to cause getElevation() to be used whenever the 'source' array or hash has any entries at all, no matter how many it has.
The default is 5, which was chosen based on timings of the two methods.
The author wishes to acknowledge the following individuals and groups.
The members of the geo-perl mailing list provided valuable suggestions and feedback, and generally helped me thrash through such issues as how the module should work and what it should actually be called.
Michael R. Davis provided prompt and helpful feedback on a testing problem in my first module to rely heavily on Test::More.. | http://search.cpan.org/~wyant/Geo-WebService-Elevation-USGS-0.106/lib/Geo/WebService/Elevation/USGS.pm | CC-MAIN-2017-39 | refinedweb | 2,193 | 56.25 |
Calling a method
First of all, Pike calculates the values of all the arguments. In the call
average(19.0 + 11.0, 10.0);
the two arguments are calculated, giving the values 30.0 and 10.0. Then the argument values are sent to the method. If we look at the method head,
float average(float x1, float x2)
we see that it has two formal parameters.
The argument values will be put
in the two parameter variables
x1 and
x2,
which work as local variables
but with the argument values as initial values.
Execution will then continue with the body of the method. In this case, the body is
{ return (x1 + x2) / 2; }
The value of
(x1 + x2) / 2 will be calculated,
giving 20.0.
This value is then returned to the point where the method was called,
and is used as the value of the method-call expression.
Note that Pike uses “call by value”. This means that Pike always calculates the value of the arguments before calling a method, and then it sends those values (or, more precisely: copies of those values) to the called method. This means that in the example
average(1.0, average(2.0, 3.0));
Pike will first call
average with the two values 2.0 and 3.0,
and when
average has returned the value 2.5,
it will send the two values 1.0 and 2.5 to
average.
This second call of
average will return 1.75,
and this is the value of the entire expression. | http://pike.lysator.liu.se/docs/tut/methods/invocation.md | CC-MAIN-2017-30 | refinedweb | 257 | 76.93 |
On Wed, 27 Aug 2008, Jeffrey E. Care <carej@us.ibm.com> wrote:
> Stefan Bodewig <bodewig@apache.org> wrote on 08/27/2008 09:29:02 AM:
>
>> > I'm using META-INF/services from the JAR spec,
>>
>> would it be possible to somehow use antlib.xml instead? Right now
>> antlibs don't need to do anything to META-INF at all.
>
> I would have preferred this but to my knowledge there's no standard
> location for antlib.xml that I could use to load it from.
This is true. To make things worse an antlib with several antlib.xml
files would be perfectly legal (one per namespace the antlib wants to
expose).
> I agree that it's a little clumsy to be using META-INF/services here
> but this is exactly the kind of application that the services stuff
> was added to support.
OK. Maybe the antlib would just point to the antlib.xml file(s) there
and we add the description and versioning stuff into the antlib
descriptor?
Stefan
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org | http://mail-archives.eu.apache.org/mod_mbox/ant-dev/200808.mbox/%3Cy1umyixrwev.fsf@v30161.1blu.de%3E | CC-MAIN-2020-10 | refinedweb | 189 | 77.13 |
Introduction
Writing tests is an integral part of application development. Testing results in software that has fewer bugs, more stability, and is easier to maintain. In this article, we'll look at how to test a React application using the Jest testing framework.
Jest is a JavaScript test runner maintained by Facebook. A test runner is software that looks for tests in your codebase, runs them and displays the results (usually through a CLI interface).
The following are some of the features that Jest offers.
- Performance - Jest run tests in parallel processes thus minimizing test runtime.
- Mocking - Jest allows you to mock objects in your test files. It supports function mocking, manual mocking and timer mocking. You can mock specific objects or turn on automatic mocking with automock which will mock every component/object that the component/object test depends on.
- Snapshot testing -.
- Code coverage support - This is provided with Jest with no additional packages or configuration.
- Test isolation and sandboxing - With Jest, no two tests will ever conflict with each other, nor will there ever be a global or module local state that is going to cause trouble. Sandboxed test files and automatic global state resets for every test.
- Integrates with other testing libraries - Jest works well with other testing libraries (e.g. Enzyme, Chai).
Jest is a Node-based runner which means that it runs tests in a Node environment as opposed to a real browser. Tests are run within a fake DOM implementation (via jsdom) on the command line.
You should note though that while Jest provides browser globals such as
window by using jsdom, their behavior is only an approximation of their counterparts on a real browser. Jest is intended for unit testing an application's logic and components rather than for testing it for any DOM quirks it might encounter. For this, it is recommended that you use a separate tool for browser end-to-end tests. This is out of scope of this article.
Setting up the Sample Project
Before looking at how tests are written, let's first look at the application we'll be testing. It can be downloaded here. In the downloaded folder, you will find three projects - one named
starter with no test files, another named
completed with the test files included and another named
completed_with_auth0 which contains test files and also adds authentication to the application. In this article, we'll start with the
starter project and proceed to add tests to it.
The sample application is a simple countdown timer created in React. To run it, first navigate to the root of the starter project:
$ cd path/to/starter/CountdownTimer
Install the necessary libraries:
$ npm install
Run Webpack:
$ webpack
Then run the application with:
$ npm start
Navigate to in you browser. You should see the following.
You can set a time in seconds and start the countdown by clicking on the Start Countdown button.
The functionality of the countdown timer has been separated into three components stored in the
app/components folder namely
Clock.jsx,
Countdown.jsx and
CountdownForm.jsx.
The Clock component is responsible for rendering the clock face and formatting the user's input to an
MM:SS format. The CountdownForm component contains a form that takes the user input and passes it to the Countdown component which starts decrementing the value every second, passing the current value to the Clock component for display.
Having looked at the sample application, we'll now proceed with writing tests for it.
Writing Tests
Let's start by installing and configuring Jest.
Run the following command to install Jest and the
babel-jest library which is a Jest plugin for Babel. The application uses Babel for transpiling JSX and ES6 so the plugin is needed for the tests to work.
$ npm install --save-dev jest babel-jest
With
babel-jest added, Jest will be able to work with the Babel config file
.babelrc to know which presets to run the code through. The sample application already has this file. You can see its contents below.
{ "presets": ["es2015", "react"] }
The
react preset is used to transform JSX into JavaScript and
es2015 is used to transform ES6 JavaScript to ES5.
With that done, we are now ready to write our first test.
Jest looks for tests to run using the following conventions:
- Files with .test.js suffix.
- Files with .spec.js suffix.
- Files with .js suffix inside a folder named tests.
Other than
.js files, it also automatically considers files and tests with the
jsx extension.
For our project, we'll store the test files inside a tests folder. In the
app folder, create a folder named
__tests__.
For the first test, we'll write a simple test that ensures that Jest was set up correctly and that it can run a test successfully.
Create a file inside the
app/__tests__ folder, name it
app.test.jsx and add the following to the file.
describe('App', () => { it('should be able to run tests', () => { expect(1 + 2).toEqual(3); }); });
To create a test, you place its code inside an
it() or
test() block, including a label for the test. You can optionally wrap your tests inside
describe() blocks for logical grouping.
Jest comes with a built-in
expect() global function for making assertions. The above test checks if the expression
1 + 2 is equal to
3. Read this for a list of assertions that can be used with
expect()
Next, modify the
test property of the
package.json file as shown.
"test": "jest"
You can now run the added test with
npm test and see the results in the Terminal.
You can also run Jest in watch mode which will keep it running in the terminal and save you from having to start the tests yourself when you make changes to the code. For this use the
npm test -- --watch command. Anything that is placed after the first
-- is passed to the underlying command, therefore
npm test -- --watch is similar to
jest --watch.
We only have one test so far, but as you go further into testing your application and add more tests, you might want to exclude some from running. Jest allows you to either exclude some tests from running or focus on specific tests. To exclude a test from being executed, use
xit() instead of
it(). To focus on a specific test without running other tests, use
fit().
Now that we know the application can run tests, let's move on to testing its components.
Testing Components
In the
__tests__ folder, add another folder named
components and inside that folder, add a file named
Clock.test.jsx. Then add the following to the file.
import React from 'react'; import ReactDOM from 'react-dom'; import Clock from 'Clock'; describe('Clock', () => { it('renders without crashing', () => { const div = document.createElement('div'); ReactDOM.render(<Clock/>, div); }); });
This test mounts a component and checks that it doesn't throw an exception during rendering.
If you run the test, it will fail with the error message
Cannot find module 'Clock' from 'Clock.test.jsx'.
In the application we specify aliases for some files so that we don't have to write their full path every time we import them in another file. The aliases and the files they represent are specified in the
webpack.config.js file.
resolve: { root: __dirname, alias: { applicationStyles: 'app/styles/app.scss', Clock: 'app/components/Clock.jsx', Countdown: 'app/components/Countdown.jsx', CountdownForm: 'app/components/CountdownForm.jsx' }, extensions: ['', '.js', '.jsx'] }
Other test runners like karma are able to pick up the application's setting from the Webpack config file, but this is not the case with Jest. Jest doesn't automatically work with Webpack. In the above case, Jest doesn't know how to resolve the aliases specified in the Webpack config file.
To solve this, you can either use a third party tool like jest-webpack-alias or babel-plugin-module-resolver, or you can add the aliases in Jest's configuration settings. I prefer the latter solution as it is easier to setup and it requires the least modification to the app. With this, Jest settings are separate from the app's settings. If I ever wanted to change the test runner used, I would just need to delete Jest settings from package.json (or from the Jest config file) and won't have to edit the Webpack config file and Babel config file.
You can define Jest's configuration settings either in the
package.json or create a separate file for the settings and then add the
--config <path/to/config_file> option to the
jest command. In the spirit of separation of concerns, we'll create a separate file. Create a file at the root of the project named
jest.config.js and add the following to it.
{ "moduleFileExtensions": [ "js", "jsx" ], "moduleNameMapper": { "Clock": "<rootDir>/app/components/Clock.jsx", "CountdownForm": "<rootDir>/app/components/CountdownForm.jsx", "Countdown": "<rootDir>/app/components/Countdown.jsx" } }
moduleFileExtensions specifies an array of file extensions your modules use. By default it includes
["js", "json", "jsx", "node"] (if you require modules without specifying a file extension, Jest looks for these extensions) so we don't really need the setting in the above file as
js and
jsx are included. I wanted to include it so you know that it is necessary if your project consists of files with other extensions e.g. if you are using TypeScript, then you would include
["js", "jsx", "json", "ts", "tsx"].
In
moduleNameMapper, we map different files to their respective aliases.
rootDir is a special token that gets replaced by Jest with the root of the project. This is usually the folder where the
package.json file is located, unless you specify a custom
rootDir option in your configuration.
If you are interested in finding out other options that you can set for Jest, check out the documentation.
In
package.json modify the value of
test as shown.
"test": "jest --config jest.config.js"
Run the test again with
npm test and it should now pass. We now know that the component tested renders without throwing an exception.
Testing Business Logic
We've written a test that assures us that our component renders properly. This however is not an indicator that the component behaves as it should and produces the correct output. To test for this, we'll test the component's functions and make sure they are doing what they should be doing.
For this we'll use the Enzyme library to write the tests. Enzyme is a JavaScript Testing utility for React created by Airbnb that makes it easier to assert, manipulate, and traverse a React Component's output. It is unopinionated regarding which test runner or assertion library used, and is compatible with the major test runners and assertion libraries available.
To install Enzyme, run the following command:
$ npm install --save-dev enzyme react-addons-test-utils
Then modify
Clock.test.jsx as shown.
import React from 'react'; import ReactDOM from 'react-dom'; import { shallow } from 'enzyme'; import Clock from 'Clock'; describe('Clock', () => { it('renders without crashing', () => { const div = document.createElement('div'); ReactDOM.render(<Clock/>, div); }); describe('render', () => { it('should render the clock', () => { const clock = shallow(<Clock timeInSeconds={63}/>); const time = <span className="clock-text">01:03</span>; expect(clock.contains(time)).toEqual(true); }); }); describe('formatTime', () => { it('should format seconds', () => { const clock = shallow(<Clock/>); const seconds = 635; const expected = '10:35'; const actual = clock.instance().formatTime(seconds); expect(actual).toBe(expected); }); it('should format seconds when minutes or seconds are less than 10', () => { const clock = shallow(<Clock/>); const seconds = 65; const expected = '01:05'; const actual = clock.instance().formatTime(seconds); expect(actual).toBe(expected); }); }); });
The first test remains the same, but since we are using Enzyme you could simplify it by using
shallow() or
mount() to render it, like so:
import { mount } from 'enzyme'; it('renders without crashing', () => { mount(<Clock/>); });
The difference between
shallow() and
mount() is that
shallow() tests components in isolation from the child components they render while
mount() goes deeper and tests a component's children. For
shallow() this means that if the parent component renders another component that fails to render, then a
shallow() rendering on the parent will still pass.
The remaining tests test the
Clock.jsx
render() and
formatTime() functions, placed in separate
describe blocks.
The Clock component's
render() function takes a props value of
timeInSeconds, passes it to
formatTime() and then displays the returned value inside a
<span> with a
class of
clock-text. In the test with the
describe label of
render, we pass in the time in seconds to Clock and assert that the output is as expected.
The
formatTime
describe contains two tests. The first checks to see if the
formatTime() function returns a formatted time if given a valid input and the second ensures that the function prefixes the minutes or seconds value with
0 if the value is less than
10.
To call the component's function with Enzyme, we use
clock.instance().formatTime(seconds).
instance() returns the instance of the component being rendered as the root node passed into
mount() or
shallow().
Run the tests and they should all pass.
Next we'll add tests for the Countdown component.
Create a file named
Countdown.test.jsx in the
app/__tests__/components folder. Add the following to the file.
import React from 'react'; import ReactDOM from'react-dom'; import TestUtils from 'react-addons-test-utils'; import Countdown from 'Countdown'; describe('Countdown', () => { it('renders without crashing', () => { const div = document.createElement('div'); ReactDOM.render(<Countdown/>, div); }); describe('handleSetCountdownTime', () => { it('should set countdown time and start countdown', (done) => { const countdown = TestUtils.renderIntoDocument(<Countdown/>); countdown.handleSetCountdownTime(10); expect(countdown.state.count).toBe(10); expect(countdown.state.countdownStatus).toBe(1); setTimeout(() => { expect(countdown.state.count).toBe(9); done(); }, 1001) }); it('should never set countdown time to less than zero', (done) => { const countdown = TestUtils.renderIntoDocument(<Countdown/>); countdown.handleSetCountdownTime(1); setTimeout(() => { expect(countdown.state.count).toBe(0); done(); }, 3000) }); }); });
The first test is similar to what we had in
Clock.test.jsx, it just checks that the Countdown component rendered okay. The rest of the tests test the
handleSetCountdownTime() function of this component. This function is called when the form is submitted and is passed the number of seconds entered (if valid). It then uses this to set the component's state which consists of two values - the
count and the
countdownStatus.
componentDidUpdate() checks if the
countdownStatus was changed and if so calls the
tick() function which starts decrementing the value of
count every second.
In the above we use
TestUtils to test the component. We could have used Enzyme functions here as well, but we wanted to showcase another great tool that makes testing React components easier. Facebook recommends both Enzyme and TestUtils, so you can decide which you prefer; or you can use them both (in fact, when using Enzyme, you are essentially using TestUtils as well since Enzyme wraps around the
react-addons-test-utils library).
With TestUtils, components are rendered with
TestUtils.renderIntoDocument().
The first test in the block ensures that the
countdownStatus of the component is changed when a valid time is passed to
handleSetCountdownTime() and that the
count has been decremented by
1 after a second.
The second test ensures that
handleSetCountdownTime() stops counting down at
0.
Testing Events
The last component remaining to test is CountdownForm. This contains a form that the user uses to enter the time to be count down. We'll test it to make sure that when a user submits the forms, the listener will call
onSetCountdownTime() only if the input is valid.
Create a file named
CountdownForm.test.jsx in the
app/__tests__/components folder. Add the following to the file.
import React from 'react'; import ReactDOM from 'react-dom'; import TestUtils from 'react-addons-test-utils'; import CountdownForm from 'CountdownForm'; describe('CountdownForm', () => { it('renders without crashing', () => { const div = document.createElement('div'); ReactDOM.render(<CountdownForm/>, div); }); it('should call onSetCountdownTime if valid seconds entered', () => { const spy = jest.fn(); const countdownForm = TestUtils.renderIntoDocument(<CountdownForm onSetCountdownTime={spy}/>); const form = TestUtils.findRenderedDOMComponentWithTag(countdownForm, 'form'); countdownForm.refs.seconds.value = '109'; TestUtils.Simulate.submit(form); expect(spy).toHaveBeenCalledWith(109); }); it('should not call onSetCountdownTime if invalid seconds entered', () => { const spy = jest.fn(); const countdownForm = TestUtils.renderIntoDocument(<CountdownForm onSetCountdownTime={spy}/>); const form = TestUtils.findRenderedDOMComponentWithTag(countdownForm, 'form'); countdownForm.refs.seconds.value = '1H63'; TestUtils.Simulate.submit(form); expect(spy).not.toHaveBeenCalled(); }); });
In the above we use
TestUtils to simulate the form
submit event.
Jest comes with spy functionality that enables us to assert that functions are called (or not called) with specific arguments.
A test spy is a function that records arguments, return value, the value of
this and exception thrown (if any) for all its calls. Test spies are useful to test both callbacks and how certain functions are used throughout the system under test. To create a spy in Jest, we use
const spy = jest.fn(). This provides a function we can spy on and ensure that it is called correctly.
We then render the CountdownForm component and pass in the spy as the value of the
onSetCountdownTime props. We then set the form's
seconds value and simulate a submission. If the value for
seconds is valid, the spy will be called, otherwise it won't.
Run the tests and everything should pass.
Coverage Reporting
As mentioned earlier, Jest has an integrated coverage reporter that works well with ES6 and requires no further configuration. You can run it with
npm test -- --coverage. Below you can see the coverage report of our tests.
Snapshot Testing
Snapshot testing is another feature of Jest which automatically generates text snapshots of your components and saves them to disk so if the UI output changes later on, you will get notified without manually writing any assertions on the component output.
When running a snapshot test for the first time, Jest renders the component and saves the output as a JavaScript object. Each time the test is run again, Jest will compare its output to the saved snapshot and if the component's output is different from the snapshot, the test will fail. This may be an indicator that the component has a bug somewhere and you can go ahead and fix it until its output matches the snapshot, or you might have made the changes to the component on purpose and so it is the snapshot that will need updating. To update a snapshot you run jest with the
-u flag.
With snapshot testing, you will always know when you accidentally change a component's behaviour and it also saves you from writing a lot of assertions that check if your components are behaving as expected.
We'll include one snapshot test for the Clock component in the sample app. You can include the snapshot test in the
Clock.test.js file, but I prefer to have my snapshot tests in separate files.
Create a file named
Clock.snapshot.test.jsx in the
app/__tests__/components folder. Add the following to the file.
import React from 'react'; import Clock from 'Clock'; import renderer from 'react-test-renderer'; describe('Clock component renders the clock correctly', () => { it('renders correctly', () => { const seconds = 63; const rendered = renderer.create( <Clock timeInSeconds={seconds}/> ); expect(rendered.toJSON()).toMatchSnapshot(); }); });
The above renders the Clock (with a value of 63 seconds passed into it) and saves the output to a file.
Before running the test, install the following package. It provides a React renderer that can be used to render React components to pure JavaScript objects.
$ npm install --save-dev react-test-renderer
Run your tests and the output will show that a snapshot has been added.
When you look at your project, there will be a
__snapshots__ folder inside the
app/__tests__/components folder with a file named
Clock.snapshot.test.jsx.snap inside it. The following are its contents.
exports[`Clock component renders the clock correctly renders correctly 1`] = ` <div className="clock"> <span className="clock-text"> 01:03 </span> </div> `;
As you can see, it shows the expected result of having passed
63 to the Clock component.
With the snapshot test that we just added, we don't need the test in
Clock.test.jsx that checks if the rendered output contains a
<span> with a certain string in it.
You should include the
__snapshots__ folder in your versioning system to ensure that all team members have a correct snapshot to compare with.
Aside: Using React with Auth0
Before concluding the article, let's take a look at how you can add authentication to the React app and ensure the tests work with this. We'll change the app so that it requires the user to be logged in before they can start the countdown timer. In the process, we'll take a look at a caveat that Jest has as a Node-based test runner that runs its tests on jsdom.
To get started, first sign up for an Auth0 account, then navigate to the Dashboard. Click on the New Client button and fill in the name of the client (or leave it at its default. Select Single Page Web Applications from the Client type list. On the next page, select the Settings tab where the client ID, client Secret and Domain can be retrieved. Set the Allowed Callback URLs and Allowed Origins (CORS) to and save the changes with the button at the bottom of the page.
Auth0 offers a generous free tier to get started with modern authentication.
We'll add the Auth0 Lock widget to our app, which provides an interface for the user to login and/or signup.
Create a folder named
utils in the
app folder and add a
AuthService.js file to it. Add the following to the file.
import React from 'react'; import Auth0Lock from 'auth0-lock'; import decode from 'jwt-decode'; export default class AuthService { constructor() { // Configure Auth0 this.clientId = 'YOUR_CLIENT_ID'; this.domain = 'YOUR_CLIENT_DOMAIN'; this.lock = new Auth0Lock(this.clientId, this.domain, {}); // Add callback for lock `authenticated` event this.lock.on('authenticated', this._doAuthentication.bind(this)); // binds login functions to keep this context this.login = this.login.bind(this); } _doAuthentication(authResult){ // Saves the user token this.setToken(authResult.idToken); } getLock() { // An instance of Lock return new Auth0Lock(this.clientId, this.domain, {}); } login() { // Call the show method to display the widget. this.lock.show(); } loggedIn() { // Checks if there is a saved token and it's still valid const idToken = this.getToken(); return idToken && !this.isTokenExpired(idToken); } setToken(idToken){ // Saves user token to localStorage localStorage.setItem('id_token', idToken); } getToken(){ // Retrieves the user token from localStorage return localStorage.getItem('id_token'); } logout(){ // Clear user token and profile data from localStorage localStorage.removeItem('id_token'); }(); } }
Authentication will be handled by this class. The code contains comments that explain what is happening at each step, so I won't go over it here.
Replace
YOUR_CLIENT_ID and
YOUR_CLIENT_DOMAIN in the above code with your Auth0 client details.
Install the following two packages.
$ npm install --save auth0-lock jwt-decode
auth0-lock provides the Lock widget while
jwt-decode is used in the code to decode a JSON Web token before checking if its expiration date has passed.
Modify
CountdownForm.jsx as shown:
import React from 'react'; import AuthService from '../utils/AuthService' class CountdownForm extends React.Component { constructor(props) { super(props); this.state = { loggedIn: false }; } componentDidMount() { this.auth = new AuthService(); this.setState({ loggedIn: this.auth.loggedIn() }); // instance of Lock this.lock = this.auth.getLock(); this.lock.on('authenticated', () => { this.setState({ loggedIn: this.auth.loggedIn() }); }); } login() { this.auth.login(); } logout() { this.auth.logout(); this.setState({ loggedIn: this.auth.loggedIn() }); } onSubmit(e) { e.preventDefault(); if (this.state.loggedIn) { var secondsStr = this.refs.seconds.value; if (secondsStr.length > 0 && secondsStr.match(/^[0-9]*$/)) { this.refs.seconds.value = ''; this.props.onSetCountdownTime(parseInt(secondsStr, 10)); } } else { alert("You need to log in first"); } } render() { const authButton = this.state.loggedIn ? <div><button className="button expanded" onClick={this.logout.bind(this)}>Logout</button></div> : <div><button className="button expanded" onClick={this.login.bind(this)}>Login</button></div>; return ( <div> <form ref="form" onSubmit={this.onSubmit.bind(this)} <input type="text" ref="seconds" placeholder="Enter time in seconds"/> <input type="submit" className="button success expanded" value="Start Countdown"/> </form> { authButton } </div> ); } } export default CountdownForm;
In the above, we add a
loggedIn state to the component that will keep track of the user's authentication status. We instantiate a
AuthService object and use this to make an instance of the
lock widget. We set a callback function that will be called after authentication with
this.lock.on('authenticated', cb) and in this function we change the
loggedIn state to
true. On log out, this will be set to
false.
In the render button, we check the
loggedIn state and add a
Login button if its value is
false and a
Logout button otherwise. These buttons are bound to the
login() and
logout() functions respectively.
When the form is submitted, we first check if the user is authenticated before proceeding with the countdown. If they aren't, an
alert is displayed that lets them know they need to be logged in..
Run Webpack to process and bundle the JavaScript files and then start the app.
$ webpack $ npm start
When you navigate to, you will see the added Login button.
On clicking the button, the Lock widget will be displayed.
Use its Sign Up tab to create an account. After signing up, you will be automatically logged in, therefore you will be able to perform a countdown and the bottom button will now be the
Logout button.
That works fine, but if you run the tests, there will be several failing ones.
If you take a look at the error messages, you will see
ReferenceError: localStorage is not defined several times.
We mentioned earlier that Jest is a Node-based runner that runs its tests in a Node environment, simulating the DOM with jsdom. jsdom does a great job in replicating a lot of DOM features, but it lacks some browser features; for example, at the time of writing this, the current version of jsdom doesn't support localStorage or sessionStorage. This is a problem for us because our app saves the authentication token it gets back from Auth0 to localStorage.
To get around this limitation, we can either create our own implementation of localStorage or use a third party one like node-localstorage. Since we only require a simple version of localStorage, we'll create our own implementation. To be able to save, retrieve and remove a token to localStorage, we only require the
setItem(key, value),
getItem(key) and
removeItem(key) functions of the the Storage interface. If your application requires other localStorage features, it's better to use the third party option.
Create a file in the
utils folder named
localStorage.js and add the following to it.
module.exports = { setLocalStorage: function() { global.localStorage = { getItem: function (key) { return this[key]; }, setItem: function (key, value) { this[key] = value; }, removeItem: function (key) { delete this[key]; } }; const jwt = require('jsonwebtoken'); const token = jwt.sign({ foo: 'bar', exp: Math.floor(Date.now() / 1000) + 3000 }, 'shhhhh'); localStorage.setItem('id_token', token); } };
In the above, we create an object with the three required functions and assign it to
global.localStorage. We then create a token, set an expiration date to it and save it in localStorage as the value of the
id_token key. The token will be decoded in
AuthService and its
exp attribute checked to determine if it has expired. You should note that
jwt-decode doesn't validate tokens; any well formed JWT will be decoded. If your app uses tokens to authorize API calls, you should validate the tokens in your server-side logic by using something like express-jwt, koa-jwt, Owin Bearer JWT, etc.
You can create a test account and perform a real Login during testing, but I prefer to not make unneccesary network calls during testing. Since we aren't testing the Login functionality, I deem it unnecessary to perform authentication with the Auth0 server, therefore we create a 'fake' token with a
exp attribute that will be checked by the app.
Install the following package.
$ npm install --save-dev jsonwebtoken
Add the following to the
CountdownForm.test.jsx and
Countdown.test.jsx components inside their outer
describe() blocks before all the
it() and inner
describe() blocks.
beforeAll(() => { const ls = require("../../utils/localStorage.js"); ls.setLocalStorage(); });
Run the tests with
npm test and they should all pass.
Auth0 provides the simplest and easiest to use User interface tools to help administrators manage user identities including password resets, creating and provisioning, blocking and deleting users.
Conclusion
We've looked at how to use Jest as a test runner when testing a React application. For more on Jest, be sure to check its documentation.
- Auth0 Docs
Implement Authentication in Minutes
- Auth0 Community
Join the Conversation | https://auth0.com/blog/testing-react-applications-with-jest/?utm_source=reactnl&utm_medium=email | CC-MAIN-2019-18 | refinedweb | 4,806 | 56.96 |
java.io.Serializable; 22 23 import org.apache.shiro.authz.Permission; 24 25 26 /** 27 * An all <tt>AllPermission</tt> instance is one that always implies any other permission; that is, its 28 * {@link #implies implies} method always returns <tt>true</tt>. 29 * 30 * <p>You should be very careful about the users, roles, and/or groups to which this permission is assigned since 31 * those respective entities will have the ability to do anything. As such, an instance of this class 32 * is typically only assigned only to "root" or "administrator" users or roles. 33 * 34 * @since 0.1 35 */ 36 public class AllPermission implements Permission, Serializable { 37 38 /** 39 * Always returns <tt>true</tt>, indicating any Subject granted this permission can do anything. 40 * 41 * @param p the Permission to check for implies logic. 42 * @return <tt>true</tt> always, indicating any Subject grated this permission can do anything. 43 */ 44 public boolean implies(Permission p) { 45 return true; 46 } 47 } | http://shiro.apache.org/static/1.2.2/xref/org/apache/shiro/authz/permission/AllPermission.html | CC-MAIN-2014-42 | refinedweb | 163 | 55.13 |
Posted 26 Jan 2011
Link to this post
Posted 27 Jan 2011
Link to this post
In our online documentation you can find how to create your own custom theme for RadControls/WPF using the Telerik approach, please follow this link.
If you need to modify the default appearance of TreeListViewRow you have to edit its template and follow the approach demonstrated here.
If you have any difficulties feel free to send us a small application in a new support ticket which can help us to provide you with an appropriate solution.
Posted 10 Feb 2011
Link to this post
using
System;
Telerik.Windows.Controls;
namespace
Silverlight.Help.RadCommon
{
[ThemeLocation( ThemeLocation.BuiltIn )]
public
class
MyTheme : Theme
MyTheme()
this
.Source =
new
Uri(
"/MyTheme;component/themes/Generic.xaml"
, UriKind.RelativeOrAbsolute );
}
Please find attached a working custom theme following the pattern described in our online help article "Creating a Custom Theme Project - Using the Telerik Approach".
Try this one and let me know if you need any further assistance.
Posted 11 Feb 2011
Link to this post
Posted 15 Feb 2011
Link to this post
<Style x:Key="{telerik:ThemeResourceKey ThemeType=telerik:Windows7Theme
<Style x:Key="{telerik:ThemeResourceKey ThemeType=customTheme:MyCustomTheme
xmlns:customTheme="clr-namespace:MyTheme"
Unfortunately, we are not able to see what is wrong in your project. I have updated the referred project for RadTreeListView/TreeListViewRow and everything works as expected.
Please check the attached project and let me know if you need any further assistance.
When you are creating a custom theme, you need to implement the same approach that is used in our Themes. The previous sample demonstrates how to apply this custom theme in RadTreeListView using the Telerik approach. If you need fully functional theme, you have to change the GridViewHeaderCell, GridViewCell, GridViewHeaderRow in the same way as it was implemented in our Themes. You can get the xaml files for all Telerik Themes from the Themes folder of your personal installation (you may also find this file attached here). Within Telerik.Windows.Controls.GridView.xaml file you need to get the styles for the rest of its parts and everything will work as expected. In addition to this I think the following online help article that shows how to "Modify Built-In Theme" would be helpful on that matter.
Please find attached your updated project.
Posted 16 Feb 2011
Link to this post
App():
base
()
Telerik.Windows.Controls.StyleManager.ApplicationTheme =
Telerik.Windows.Controls.Windows7Theme();
Please accept my apology for the initial misleading.
Posted 17 Feb 2011
Link to this post
If you need to change this background you have to check the GridViewCell.xaml file and find the brush named GridView_CellBackground_Edited, which paints the background of a GridViewCell when it is in edit mode. Thinking in this way you may change the TextBox control in Windows7 theme as a default editor of GridViewDataColumn.
Posted 21 Mar 2011
Link to this post
Posted 22 Mar 2011
Link to this post
May you please share with us the exact problems you are currently experiencing? Is the sample project attached previously not helpful ?
On the other hand, you may also send us a small application in a new support ticket where we can see what is going on in your custom theme. Any additional information about the grid version you use will be highly appreciated as well.
Posted 11 | http://www.telerik.com/forums/apply-our-custom-theme-to-treelistview | CC-MAIN-2017-17 | refinedweb | 559 | 53.31 |
I have a sample file here:
#include <stdio.h>
#include <math.h>
int main(){
printf("%f\n", log(10));
}
gcc sample.c -o a
./a
2.302585
#include <stdio.h>
#include <math.h>
int main(){
double a = 10;
printf("%f\n", log(a));
}
gcc sample.c -o a
gcc sample.c -o a -lm
Check the disassembly, and you'll likely find that the compiler is optimizing the call to
log() out entirely in the first case (so there's nothing to link), but not in the second. In this particular case, glibc defines:
# define M_LN10 2.30258509299404568402
in
math.h, for instance, and any standard library function can be implemented as a macro, so it can calculate some of these things without a function call. | https://codedump.io/share/aAGiwlJxita1/1/why-is--lm-not-necessary-in-some-cases-when-compiling-and-linking-c-code | CC-MAIN-2017-13 | refinedweb | 126 | 79.26 |
I created a function that took a list as a parameter and removed either a space or a number. Code below:
def cleaner(List, filter_out=' '):
if filter_out == ' ':
for i in List:
if i == ' ':
List.remove(' ')
if filter_out == 'int':
for i in List:
if type(i) == int:
List.remove(i)
myList = ['h' ' ', 'g', 1, 2, 3, 4, 5, 'p']
print(cleaner(myList, filter_out='int'))
['h' ' ', 'g', 'p']
['h ', 'g', 2, 4, 'p']
1
2
Removing items from a list while iterating it results in bad consequences:
>>> nums = [1, 2, 3, 4] >>> for x in nums: ... print x ... nums.remove(x) ... 1 3
You start at index 0. You print
nums[0],
1. You then remove it. The next index is
1. Well,
[nums[1] is
3 because now the list is
[2, 3, 4]. You print that, and remove it. The list is now
[2, 4], and you are at the third index. Since
nums[2] does not exist, the loop ends, skipping two numbers. What you should do is take advantage of the builtin functions:
myList = ... myList = filter(lambda x: not isinstance(x, int), myList)
For the example of
' ', it would be:
myList = ... myList = filter(str.strip, myList)
or
myList = filter(lambda x: x != ' ', myList)
Note: The Python 3
filter() function returns a
filter object, not a list. That makes it more efficient if you are just iterating, but if you truly need a list, you can use
list(filter(...)).
All of these make a copy of the list instead of doing their work in place. If you want it in place, use
myList[:] = ... instead of
myList = ... (in the
filter() line). Note that a Python 3
filter object does not need to be converted to a list for this to work. | https://codedump.io/share/TrXSM2z6ZMk1/1/how-to-get-the-type-of-a-varible | CC-MAIN-2018-22 | refinedweb | 291 | 84.27 |
JimS 0 Posted September 6, 2017 I am writing a script that will be a command-line wrapper for launching programs from shortcuts. The purpose of the script is to make sure only one instance of a program gets launched. It finds the process based on the program name. If it doesn't find a process, it launches the program with run(). If it finds a process for that exe, it uses the PID to find the handle of the window for that PID. All the rest of the activity is done using the handle. My initial testing is using osk.exe. The script works, sort of, but has problems with certain states and making the window show up again on subsequent runs. The problem is that I am using WinGetState() to get the window state, again using the handle but the state returned is inconsistent with expectations. I'm using a mix of WinSetState and WinActivate to make the window show up, but that also is not behaving as expected. For example, I launch the osk.exe program using the script, then manually minimize the program using the minimize button. When I run the script again, it finds the state of the window as (5) - it doesn't have the bitwise 16 state for minimized, or the 2 state for visible. Then, when my function sets the state to 15 (exists, visible, enabled, active) it doesn't display the window. Running it again, it says the state is (7), and even though I am displaying the state (15) before the autoit script closes, it still says state is 7 on subsequent runs of the script before it executes my function. If I restore the window manually by clicking the taskbar icon, it still claims the window state is 7. If I minimize again with the minimize button, it still says its state is 7. If I float over the minimized taskbar icon, then float over the Aero Glass image of the window, it shows the window in the position it's supposed to be, which is the normal Windows 7 response. When I run the script again when it is in this state, it says the state is 7, or exists, visible and enabled but not active. Then it says it sets the window state to 15 but the program window is still not showing. If I restore it ,manually by clicking on the taskbar icon, then minimize it again by clicking again on the taskbar icon, when running the script it now says the state is 23, and the window properly activates and displays when it sets the state to 15. Minimizing again by using the minimize button then running the script it says the window state is 7. Attempting to fix this, I have added some _winapi commands to my function but that didn't change a thing. Does the minimize button do something to the window other than minimize it? Or is it a flaw in my function? My function looks like this at the moment: Func _FixWinState($wstate) If Not BitAND($wstate, 2) Then WinSetState($hwnd, "", @SW_SHOW) EndIf If Not BitAND($wstate, 4) Then WinSetState($hwnd, "", @SW_ENABLE) EndIf If Not BitAND($wstate, 8) Then WinActivate($hwnd) EndIf If BitAND($wstate, 16) Or BitAND($wstate, 32) Then WinSetState($hwnd, "", @SW_RESTORE) EndIf If not WinActive($hwnd) Then WinActivate($hwnd) endif _winapi_setForegroundWindow($hwnd) _winapi_setActiveWindow($hwnd) _winapi_switchtothiswindow($hwnd) EndFunc ;==>_FixWinState $hwnd is global, $wstate is the window state from $wstate= WinGetState($hwnd) before this function is called. Any ideas? Also, if I use the script to launch MSTSC.EXE with a command-line of a saved .rdf file to configure the remote desktop session, it launches it just fine, but if I run a second pass of the script instead of restoring the minimized remote desktop window or doing what I described above with the OSK test, it creates a new remote desktop "window" but the new one is not a new session, it is a window with no size, just a blank header bar and frame about an inch long. I can't help but think the issues are related somehow. Share this post Link to post Share on other sites | https://www.autoitscript.com/forum/topic/190234-troubles-with-winsetstate-and-restoring-a-minimized-window/ | CC-MAIN-2018-43 | refinedweb | 703 | 66.27 |
grokcore.component 2.5
Grok-like configuration for basic components (adapters, utilities, subscribers)
This package provides base classes of basic component types for the Zope Component Architecture, as well as means for configuring and registering them directly in Python (without ZCML).
Contents
- How to set up grokcore.component
- Examples
- Changes that file is what we’ll be editing.
In order to register the components that you wrote using the base classes and directives available from grokcore.component, we’ll use the <grok:grok /> ZCML directive. But before we can use it, we need to make sure it’s available to the ZCML machinery. We do this by including the meta configuration from grokcore.component:
<include package="grokcore.component" file="meta.zcml" />
Put this line somewhere to the top of site.zcml, __init__(self, context, request, view): pass)
Global adapter
Sometimes, you may have an object that should be registered as an adapter factory. It may have come from some other framework that configured that adapter for you, say, or you may have a class that you instantiate many times to get different variations on a particular adapter factory. In these cases, subclassing grokcore.component.Adapter or MultiAdapter is not possible. Instead, you can use the global_adapter() directive. Here is an example drawing on the z3c.form library, which provides an adapter factory factory for named widget attributes:
import zope.interface import zope.schema import grokcore.component import z3c.form.widget import ComputedWidgetAttribute class ISchema(Interface): """This schema will be used to power a z3c.form form""" field = zope.schema.TextLine(title=u"Sample field") ... label_override = z3c.form.widget.StaticWidgetAttribute( u"Override label", field=ISchema['field']) grokcore.component.global_adapter(label_override, name=u"label")
In the example above, the provided and adapted interfaces are deduced from the object returned by the StaticWidgetAttribute factory. The full syntax for global_adapter is:
global_adapter(factory, (IAdapted1, IAdapted2,), IProvided, name=u"name")
The factory must be a callable (the adapter factory). Adapted interfaces are given as a tuple. You may use a single interface instead of a one-element tuple for single adapters. The provided interface is given as shown. The name defaults to u”” (an unnamed adapter).
Handling events
Here we see an event handler much like it occurs within Zope itself. It subscribes to the modified event for all annotatable objects (in other words, objects that can have metadata associated with them). When invoked, it updates the Dublin Core ‘Modified’.subscribe(IAnnotatable, IObjectModifiedEvent) def updateDublinCoreAfterModification(obj, event): """Updated the Dublin Core 'Modified' property when a modified event is sent for an object.""" IZopeDublinCore(obj).modified = datetime.datetime.utcnow()
Subscriptions
Subscriptions look similar to Adapter, however, unlike regular adapters, subscription adapters are used when we want all of the adapters that adapt an object to a particular adapter.
Analogous to MultiAdapter, there is a MultiSubscription component that “adapts” multiple objects.
Changes
2.5 (2012-05-01)
- Introduce provideUtility, providerAdapter, provideSubscriptionAdapter, provideHandler and provideInterface in grokcore.component. These by default delegate the registration of components to the global site manager like was done before, but provide the possibility for custom registries for the grokked components.
- Fix the global_adapter to properly use information annotated by grok.adapter, and using the IContext object if it was not specified. (Fix Launchpad issue #960097).
- Add a key option to sort_components that behave like key options available on standard Python sort methods.
2.4 (2011-04-27)
- Fix the global_adapter directive implementation to accept an explicit “empty” name for nameless adapter registrations (as it used to be that providing an empty name in the registration would actually result in registering a named adapter in case the factory has a grok.name).
2.3 (2011-02-14)
- Implement the generic (Multi)Subscriptions components.
2.2 (2010-11-03)
The default values computation for the context directive and the provides directive is now defined in the directives themselves. This means that where the values for these directives is being retrieved, the “default_context” function does not need to be passed along anymore for general cases.
Analogous to this, when getting values for the provides directive the “default_provides” function does not need to be passed along in the general case.
2.1 (2010-11-01)
- Made package comply to zope.org repository policy.
- Moved directives ‘order’ from grokcore.viewlet and ‘path’ from grokcore.view to this very package.
- Tiny dependency adjustment: moved zope.event to test dependencies.
- Port from 1.x branch exclude parameter to the Grok ZCML directive.
- Port from 1.x branch the ignore of testing.py modules.
2.0 (2009-09-16)
- Use a newer version of Martian that has better support for inheritance. This is demonstrated in tests/inherit.
- The ContextGrokker and the scan.py module have gone away thanks the newer Martian.
- Directive implementations (in their factory method) should not bind directives. Directive binding cannot take place at import time, but only at grok time. Binding directives during import time (when directives are executed) can lead to change problems. (we noticed this during our refactoring to use the new Martian).
- Use 1.0b1 versions.cfg in Grok’s release info instead of a local copy; a local copy for all grokcore packages is just too hard to maintain.
1.7 (2009-06-01)
- Add missing provider, global_adapter, implementsOnly, classProvides() to the module interface so that they are included in __all__
1.6 (2009-04-10)
Add convenience imports for implementsOnly() and classProvides() class declarations form zope.interface.
Add support for registering global adapters at module level:
grok.global_adapter(factory, (IAdapted1, IAdapted2,), IProvided, name=u"name")
Only ‘factory’ is required. If only a single interface is adapted, the second argument may be a single interface instead of a tuple. If the component has declared adapted/provided interfaces, the second and third arguments may be omitted.
Add support for an @provider decorator to let a function directly provide an interface:
@grok.provider(IFoo, IBar) def some_function(): ...
This is equivalent to doing alsoProvides(some_function, IFoo, IBar).
Add support for named adapters with the @adapter decorator:
@grok.adapter(IAdaptedOne, IAdaptedTwo, name=u"foo") def some_function(one, two): ...
1.5.1 (2008-07-28)
- The IGrokcoreComponentAPI interface was missing declarations for the title and description directives.
1.5 (2008-07-22)
Fix: grokcore.component contains old-style test setup. There is no register_all_tests method in grokcore.component.testing anymore. Use z3c.testsetup instead.
Allow functions that have been marked with @grok.subscribe also be registered with zope.component.provideHandler() manually. This is useful for unit tests where you may not want to grok a whole module.
Document grokcore.component’s public API in an interface, IGrokcoreComponentAPI. When you now do:
from grokcore.component import *
only the items documented in that interface will be imported into your local namespace.
1.4 (2008-06-11)
- Ported class grokkers to make use of further improvements in Martian. This requires Martian 0.10.
1.3 (2008-05-14)
- Ported class grokkers to make use of the new declarative way of retrieving directive information from a class. This requires Martian 0.9.6..
- Downloads (All Versions):
- 118 downloads in the last day
- 551 downloads in the last week
- 603 downloads in the last month
- Author: Grok Team
- Download URL:
- License: ZPL
- Categories
- Package Index Owner: jw, ctheune, philikon, faassen, thefunny42
- DOAP record: grokcore.component-2.5.xml | https://pypi.python.org/pypi/grokcore.component | CC-MAIN-2015-11 | refinedweb | 1,216 | 51.04 |
Petrol Disabled Three Wheel Motorcycle 110CC 125CC Engine Single Cylinder
US $340-360 / Piece
1 Piece (Min. Order)
single cylinder 4 stroke gasoline three wheel rubbish motorcycle for sale in Ghana
US $1280.0-1380.0 / Acre
1 Acre (Min. Order)
alibaba China wholesale Single cylinder trike cargo three wheel motorcycle
US $720.0-820.0 / Set
1 Set (Min. Order)
motor engine/water cooled single cylinder engine/China three wheel motorcycle
US $600-1100 / Unit
20 Units (Min. Order)
250cc Chinese Three Wheel Racing Motorcycle Single Cylinder 4 Stroke Chinese Motorcycle Engine
US $790-900 / Piece
15 Pieces (Min. Order)
single cylinder engine three wheel motorcycle
US $1050-1090 / Unit
35 Units (Min. Order)
ENGINE TYPE 4-STROKE SINGLE CYLINDER THREE WHEEL ELECTRIC MOTORCYCLE 2016
US $800-1000 / Set
10 Sets (Min. Order)
import from china single cylinder engine water cooled three wheel cargo motorcycle on sale
US $840-1300 / Set
22 Sets (Min. Order)
2017 new 110cc gasoline three wheel motorcycle with EEC for disabled people
US $620-650 / Unit
60 Units (Min. Order)
150CC Three Wheel Motorcycle /Motor Tricycle/air cooling engine Cargo Tricycle
US $700-1600 / Unit
1 Unit (Min. Order)
Fully enclosed chinese three wheel motorcycle
US $2300-2600 / Set
1 Set (Min. Order)
Chongqing China 2016 New Cargo Tricycle 300cc 3 Wheel Motorcycle for Business Use
US $400-500 / Unit
48 Units (Min. Order)
Tuk tuk taxi rickshaw/motorcycles/three wheel motorcycle /keke bajaj motorized tricycle 21000040
US $1280.0-1320.0 / Sets
40 Sets (Min. Order)
50cc 150cc 200cc gas powered adult motorcycle three wheel scooter
US $1000.0-1250.0 / Unit
1 Unit (Min. Order)
Hot selling Three Wheel Motorcycle with low price
US $1000-1600 / Set
9 Sets (Min. Order)
110CC 125CC 150CC 175CC 200CC Three Wheel Cargo Motorcycles
US $500-800 / Piece
10 Pieces (Min. Order)
Chongqing suzuki truck cargo three wheel motorcycle for india
US $980.0-1000.0 / Units
12 Units (Min. Order)
thailand market motorcycle 110cc three wheel cargo
US $1000-1200 / Piece
1 Piece (Min. Order)
Lifan brand 250cc powerful engine three wheel motorcycle /Famous brand 250cc powerful engine Cargo Motor tricycle
US $1150-1300 / Unit
5 Units (Min. Order)
motorized gas powered cabin cargo tricycle/three wheel motorcycle with handbar/steering wheel for adult
US $1540-1780 / Unit
1 Unit (Min. Order)
chinese best new motorized type open body three wheel motorcycle for sale
US $700-800 / Set
1 Set (Min. Order)
Open body three wheel motorcycle for sale
US $1150-1200 / Unit
10 Units (Min. Order)
250cc Cargo Motor Tricycle Three Wheel Motorcycle
US $1500-3000 / Unit
1 Unit (Min. Order)
Factory sale KAVAKI motor 150cc 200cc 250cc tri motorcycle
US $750-900 / Piece
5 Pieces (Min. Order)
Famous brand KAVAKI sale three wheel motorcycle for cargo transportation with good quality and reasonable price in China 2015
US $720-1000 / Piece
15 Pieces (Min. Order)
foton 200cc three wheel motorcycle with steering wheel
US $2387.8-2407.38 / Piece
1 Piece (Min. Order)
Top Quality Adults Antirust China Three Wheel Motorcycle For Cargo
US $1130-1230 / Set
50 Sets (Min. Order)
Comfortable Cargo Three Wheel Motorcycle For Loncin Engine Tricycles
US $320-1000 / Unit
15 Units (Min. Order)
Professional electric three wheel motorcycle for wholesale
US $459-759 / Unit
10 Units (Min. Order)
2014 new portable three wheel motorcycle 50cc
US $500-600 / Set
30 Sets (Min. Order)
three wheel large cargo motorcycle
US $750-900 / Set
15 Sets (Min. Order)
3wheel closed passenger tricycle/three wheel passenger tricycle made in china/ chinese closed body tricycle 150cc
US $2000-2300 / Unit
4 Units (Min. Order)
Good quality model gasoline three- row Bajaj three wheel motorcycle tricycle
US $1000-1200 / Unit
36 Units (Min. Order)
three wheel 150cc motorized tricycles for cargo
US $1200-2000 / Piece
TVS King Electric Three Wheel Rickshaw Tricycle Taxi
US $1200-1800 / Piece
27 Pieces (Min. Order)
single row bajaj three wheel motorcycle taxi
US $1300-1600 / Unit
8 Sets (Min. Order)
Buying Request Hub
Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE
Do you want to show single cylinder three wheel motorcycle or other products of your own company? Display your Products FREE now! | http://www.alibaba.com/showroom/single-cylinder-three-wheel-motorcycle.html | CC-MAIN-2017-34 | refinedweb | 703 | 65.42 |
0
import java.util.Scanner; public class Highest{ public static void main (String [] args){ Scanner kb = new Scanner(System.in); int scores [] = new int[3]; String names [] = new String [3]; int highest = scores [0]; String names1 = names [0]; for (int i = 0; i<3; i++){ System.out.println("enter name and score: "); names[i] = kb.next(); scores[i] = kb.nextInt(); if (scores[i] > highest) highest = scores[i]; names1 = names[i]; } System.out.println(highest + names1); } }
i'm stuck on the part where i have to extract the name that associates with the highest score. it always prints out the last name that i enter instead of the one with the highest score. | https://www.daniweb.com/programming/software-development/threads/315087/extract-strings-from-an-array | CC-MAIN-2016-40 | refinedweb | 112 | 68.26 |
> From: address@hidden > > On Mon, Oct 22, 2001 at 12:08:45PM -0400, Brett Viren wrote: > > > > Currently, if one of those functions divide by zero I catch the error > > (via evaluating the function with scm_catch()) and just ignore the > > result. However, the users want their functions to return (and plot) > > zero in the case of divide by zero error. The users are just a little confused. The proper thing to do is to catch the error just before plotting, and plot a vertical line. Doing further arithmetic on an undefined value is useless, the "correct" result could still be anything. 0/0 = 5 + 0/0 = cos(1/0) > > I don't want the users to have to explicitly check for divide by zero > > in their Scheme functions, but rather handle it implicitly. If being equal to zero at certain points that would otherwise be undefined is part of the definition of the function, then the user should say that; otherwise don't put words in eir mouth. Maybe somebody wants to define cos(1/0) = 0, because that's the average value of cos(1/x) as x->0. (it would be 1 under the luser proposal). > > Is it possible to reference the real "/" inside this > > overloaded "/"? Brett Viren wrote: > What about the following: guile> (define real-/ /) guile> (define (/ a b) (+ 1 (real-/ a b))) guile> (list (/ 25 5) (real-/ 25 5)) ==> (6 5) The Scheme works, and even real-/ continues to work after the definition of the new / (the original example did not show that). If you don't want to clutter the global name space with real-/, you can do it this way: guile> (define / (let ((real-/ /)) (lambda (a b) (+ 1 (real-/ a b))))) guile> (/ 25 5) ==> 6 This works because the let binding is done before the new value of / is set!, so the new value is a procedure with the value of real-/ already determined. For maximal enigmatographic effect: (define / (let ((/ /)) (lambda (a b) (+ 1 (/ a b))))) > Ralf Mattes > > > Or is there any other good way to turn a divide by zero into a zero? There is never a good way to do a wrong thing. But there are lots of fun ways. -- -- Keith Wright <address@hidden> Programmer in Chief, Free Computer Shop <> --- Food, Shelter, Source code. --- | https://lists.gnu.org/archive/html/guile-user/2001-10/msg00145.html | CC-MAIN-2021-17 | refinedweb | 381 | 66.67 |
By Peter Drayton, Ben Albahari, Ted Neward
Second Edition August 2003
Pages: 924
Series: In a Nutshell
ISBN 10: 0-596-00526-1 | ISBN 13: 9780596005269
(Average of 4 Customer Reviews)
This book has been updated—the edition you're requesting is OUT OF PRINT. Please visit the catalog page of the latest edition.
The latest edition is also available on Safari Books Online.
The heart of C# in a Nutshell is a succinct but detailed reference to the C# language and the .NET types most essential to C# programmers. Each chapter in the API reference begins with an overview of a .NET namespace and a diagram of its types, including a quick-reference entry for each type, with name, assembly, category, description, member availability, class hierarchy, and other relevant information, such as whether the type is part o the ECMA CLI specification. Newly updated for .NET Framework version 1.1, the second edition also adds a CD that allows you to integrate the book's API Quick Reference directly into the help files of Visual Studio .NET 2002 & 2003, giving you direct access to this valuable information via your computer.
Full Description
-.
Cover | Table of Contents | Colophon
Featured customer reviews
I never seen before book like this, August 15 2006
I buy more than 15 books for c# language , this book is number 1 .
Discovering the C #, September 30 2005
This book is of basic importance for who intends to give courses in university or also to learn the concepts and rules of the language, really was valid the penalty to study this time c all # with this book, I also indicate for all the programmers who are starting to study c # and for that he has a good notion of the language. The book approaches of interesting clear form and needs many itens, and the measure that the desenvolvedor will be if going deep the studies will see that its learning will be impressive.... The book is show!
SEQGRAPH review, May 03 2004
This book is not a teach yourself C#; it is only a reference text. If you are experienced in another programming language, however, you can use this reference to easily find the syntax required by C#. The book is a quick reference useful for programmers who like to have a paper copy reference on their desktops. By the publishers own admission, it is not an exhausted reference.
I like having a hard copy reference when Im programming, so this book suits me fine. There are numerous example code snippets throughout the book to help you learn C#. In addition, the second edition also adds a CD that allows you to incorporate the book's Quick Reference directly into the help files of Visual Studio .NET. This gives you, the programmer, more options when you need help. It is also handy when you have left the book at home.
Im an intermediate Java programmer who needed to make the conversion to C# for a particular project. C# in a Nutshell has assisted me in this aim, and as a result, I would recommend this book to anyone as a useful reference text.
C# in a Nutshell, 2nd Edition Review, April 02 2004
I think it is a good book for me and every one
Media reviews
"In my opinion O'Reilly continually puts out the best technical books and 'C# in a Nutshell' further supports their excellent reputation. As usual with O'Reilly's other offerings in their 'in a Nutshell' series they leave out the fluff and provide just the facts. This approach makes 'C# in a Nutshell' easy to recommend if youve already gotten your feet wet in C#." 4.5 out of 5 stars
--L. Todd Knudsen, Salt Lake Area Student Programmers' Group, January 2004 | http://www.oreilly.com/catalog/9780596005269/ | crawl-001 | refinedweb | 632 | 59.64 |
C# | Object Class
The Object class is the base class for all the classes in .Net Framework. It is present in the System namespace. In C#, the .NET Base Class Library(BCL) has a language-specific alias which is Object class with the fully qualified name as System.Object. Every class in C# is directly or indirectly derived from the Object class. If a Class does not extend any other class then it is the direct child class of Object class and if extends other class then it is an indirectly derived. Therefore the Object class methods are available to all C# classes. Hence Object class acts as a root of the inheritance hierarchy in any C# Program. The main purpose of the Object class is to provide the low-level services to derived classes.
There are two types in C# i.e Reference types and Value types. By using System.ValueType class, the value types inherit the object class implicitly. System.ValueType class overrides the virtual methods from Object Class with more appropriate implementations for value types. In other programming languages, the built-in types like int, double, float etc. does not have any object-oriented properties. To simulate the object-oriented behavior for built-in types, they must be explicitly wrapped into the objects. But in C#, we have no need of such wrapping due to the presence of value types which are inherited from System.ValueType that are further inherited from System.Object. So in C#, value types also work similar to reference types. Reference types directly or indirectly inherit the object class by using other reference types.
Explanation of the above Figure: Here, you can see the Object class at the top of the type hierarchy. Class 1 and Class 2 are the reference types. Class 1 is directly inheriting the Object class while Class 2 is Indirectly inheriting by using Class 1. Struct1 is value type that implicitly inheriting the Object class through the System.ValueType type.
Example:
Ouput:
For Object obj1 = new Object(); Object System.Object System For String str System.ValueType Int32 System.Int32 System
Constructor
Methods
There are total 8 methods present in the C# Object class as follows:
Important Points:
- C# classes don’t require to declare the inheritance from Object class as the inheritance is implicit.
- Every method defined in the Object class is available in all objects in the system as all classes in the .NET Framework are derived from Object class.
- Derived classes can and do override Equals, Finalize, GetHashCode and ToString methods of Object class.
- The process of boxing and unboxing a type internally causes a performance cost. Using the type-specific class to handle the frequently used types can improve the performance cost.
Recommended Posts:
- C# | Class and Object
- C# | Check if an array object is equal to another array object
- C# | Add an object to the end of the ArrayList
- C# | Add an object to the end of Collection<T>
- Object and Dynamic Array in C#
- Object.ReferenceEquals() Method in C#
- C# | Method returning an object
- C# | Getting the key at the specified index of a SortedList object
- C# | Getting the keys in a SortedList object
- C# | Getting the Values in a SortedList object
- C# | Get an IDictionaryEnumerator object in OrderedDictionary
- Object and Collection Initializer in C#
- C# | Search in a SortedList object
- C# | Getting the value at the specified index of a SortedList object
- Different ways to create an. | https://www.geeksforgeeks.org/c-sharp-object-class/ | CC-MAIN-2019-35 | refinedweb | 575 | 55.84 |
When I use Sphinx autodoc to document a class, the values for the attributes are always reported, (as it says it should here, under #437) but always as "= None"
Attribute = None
Some Documentation
.. autoclass:: core.SomeClass
class SomeClass(object):
def __init__(self):
self.attribute = "value" #: Some Documentation
I am pretty sure this has to do with the fact that your attribute is an instance attribute. It does not get a value until the class is instantiated. Sphinx imports modules in order to inspect them, but it does not instantiate any classes.
So the "real value" is not known by Sphinx, and
None is output. I don't think you can make it go away easily (but I suppose anything is possible if you are prepared to patch the Sphinx source code...). If you don't like this, you could document attributes in the docstring of the class instead.
Class attributes that are documented using the same markup scheme (described here) do get their values displayed in the rendered output. But there is no clear indication that makes it easy for the reader to distinguish between class and instance attributes. Maybe Sphinx could be a little more helpful here. | https://codedump.io/share/Bt7xQOU1hDSY/1/sphinx-values-for-attributes-reported-as-none | CC-MAIN-2017-34 | refinedweb | 198 | 70.63 |
This week’s post is brought to us by Suraj Guarav, our EDI/AS2 development lead for R2:
A new feature coming up in Beta 2 (also in Feb CTP but with limited test coverage) is the ability to dynamically modify the list of allowed values in envelope ID fields.
A typical customer scenario could be that there is a unique sender and receiver qualifier that two trading partners use which is outside the set of values defined by the X12 standards body. In such a case, the allowable values that appear in the drop-down lists in the EDI Properties in the Partner Agreement Manager (PAM) for ISA05 and ISA07 need to be extended to incorporate the additional acceptable values.
Customization Using Extended Service Schema
The EDI system pulls the list of allowed values from static service schemas in the Microsoft.BizTalk.Edi.BaseArtifacts dll that ships with R2. There is one schema each for X12 and Edifact. To extend the base set of values, a service schema extension needs to be developed and deployed. Service schema extension templates for X12 and Edifact ship with the product and are located at Microsoft BizTalk Server 2006\XSD_Schema\EDI. They are called X12_ServiceSchemaExtension.xsd and Edifact_ServiceSchemaExtension.xsd respectively.
To extend ISA01 field, create a Biztalk project and add the X12 Service schema to it. In case a service schema has been deployed previously, it should be used as it may contain extensions to other envelope fields.
Add values to the ISA01 collection. Similarly, other fields in the schema can be modified as well. Once the changes are done, deploy the schema in the current BizTalk group. It should have the same namespace and root node name as the original extension schema that came with the product. After the schema is deployed, new values would be recognized throughout the system. They would show up in Partner Agreement Manager and runtime processing would also employ these values during validation. In addition, XML tools (e.g. validate instance) would also make use of these values for validation.
Only the list of fields that can be customized are present in the extension schemas. Addition of any new field in the schemas would not have any impact on system’s behavior, they would simply be ignored. Additionally, it is not possible to delete a value from the base service schemas. Customizing envelopes is additive only!
Thanks Suraj!
Cheers,
Tony
Hello Suraj, Nice to see the Customization of Env. Fields. I am coming from a bts 2002 background, so if you can point out the new way of doing it in bts2k6 will appreciate.
In 2002 we load a particular docdef for a partner via the selection crtieria keys on the document definition,thereafter create a new channel/port for him which has this docdef tied to it and finally message gets serialized via a custom envelope which has the right schema tied to it.
So I installed r2 and I do not see any of the new property pages for EDI?
Please ensure that you have configured Microsoft EDI/AS2 in the BizTalk configuration tool after installation.
Greetings!
In these days I have been experimenting an EDI Error regarding ISA12 (interchange control version number)
Let me explain:
When an incoming message has the following ISA Values, Biztalk is throwing an error regarding “Invalid versionId”
ISA*04*SW426 *00* *02*RRDC *02*KCS *080512*1247*U*00503*001651461*0*P*;~
Error: 1 (Miscellaneous error)
17: Invalid VersionId
Error: 2 (Field level error)
SegmentID: ISA
Position in TS: 1
Data Element ID: ISA12
Position in Segment: 12
Data Value: 00503
7: Invalid code value
I have created the corresponding parties and the messages are processed successfully as long as the messages have an ISA12 value prior to 00503.
I was looking to modify the service extension schema () however the ISA12 is not available on that schema.
I actually have begun to create a Custom Pipeline Component and modify the EDI envelope in order to replace the ISA12 value for 00502 version but I do not know if it is the best way to accomplish this.
Do you know if I can modify some setting to solve this?
Thanks in advance
JPablo | https://blogs.msdn.microsoft.com/biztalkb2b/2007/03/02/customizing-envelope-fields/ | CC-MAIN-2016-44 | refinedweb | 703 | 59.74 |
As problems go, It grows every time you turn around. Just how are you going to deal with all of these new devices? Suddenly everything is internet ready and your site has to accommodate them all. Figuring out how to deal with Netscape and IE were nothing compared to what is coming up. People will soon be viewing your site with a text browser that only show 6 lines at a time. Forget graphics. Your site has to be reduced to the basics. Html is not good for these new browsers so you have to learn a new markup language. When web services and SOAP become mainstream you may not be presenting your data in a browser at all. The worst part is that you have to write all of these So, how do you support all of these new devices, manage to keep your data consistent and actually keep your sanity?
One of your largest problems is html. Html is is presentation oriented. When you write html the focus is on how the page will look. The actual data in the page is secondary to the layout. There are now many tools to help make web pages look nice but they do not help make sense of the data contained in the pages.
XML, in contrast, is data oriented. When you create an XML document you focus on what the data is without concern for layout or presentation. Once you have the data and its structure down, you then write stylesheets and focus on how the data will be presented. The great thing about this approach is that you can present the same data in different ways. XML can greatly reduce the amount of work you need to do ensure that your data stays consistent no matter what media your users are using. This may not be particularly useful for your home page but it can do amazing things to help manage volumes of company data, documentation, contacts, schedules or orders.
This tutorial will guide you through setting up Tomcat and Cocoon to serve XML pages, then you will create a DTD, XML file and three XSL stylesheets so that you can view your data in your desktop browser, a cell phone browser and a pdf file. Before getting started you should be warned that writing pages in XML requires more time up front than HTML. By the end of the tutorial you will see the value in taking the extra time.
At the end of this tutorial you will have:
1. Installed and configured Tomcat to serve up xml documents
2. Installed Cocoon to process xml documents and format it according to your xsl documents.
3. Created a dtd to define the structure of you xml document.
4. Created an xml document containing an address book entry.
5. Created three xsl files to format the xml document in HTML,WML and pdf formats
{mospagebreak title=Getting the tools} If you do not already have java installed you will need jdk 1.1 or higher. You can get the Linux jdk from
For this tutorial you will need:
Tomcat 3.2.1
For this tutorial we will use the binary build. Get it at. In a production environment you would also want to download the Apache web server and configure Apache and Tomcat to work together. In the interest of keeping this tutorial a reasonable length we will not be configuring Tomcat and Apache to work together. So you will be accessing Tomcat directly using port 8080 instead of the usual port 80. If you want to configure Apache to pass requests on to Tomcat see the documentation at the jakarta web site.
UP SDK from phone.com (now openware.com)
Unfortunately you must register before downloading the sdk and it only works on Windows. If you are going through this tutorial in Linux you will need another Windows machine to use the phone browser. Get it at
{mospagebreak title=Installing Tomcat} Tomcat is the Apache Group’s Java Servlet and JavaServer Page server. We need Tomcat for the Cocoon servlet to run.
Installing Tomcat is straight forward: all you need to do is unpack the file you downloaded, set JAVA_HOME and TOMCAT_HOME variables and start Tomcat.
The first step is to unpack Tomcat:
$ cd /usr/local/ $ gunzip jakarta-tomcat-3.2.1.tar.gz $ tar -xvf jakarta-tomcat-3.2.1.tar
Then set up your environment:
$ JAVA_HOME=/path/to/jdk $ TOMCAT_HOME=/usr/local/jakarta-tomcat-3.2.1 $ export TOMCAT_HOME JAVA_HOME
You can add these lines to the file $TOMCAT_HOME/bin/tomcat.sh just above the first ‘if’ statement if you do not feel like setting these variable everytime.
That is all you need to do, Tomcat is ready to run. To start Tomcat cd to $TOMCAT_HOME/bin
and run:
$ ./startup.sh
After a few moments you should see two lines that look like this:
2001-01-26 12:25:02 - PoolTcpConnector: Starting HttpConnectionHandler on 8080 2001-01-26 12:25:02 - PoolTcpConnector: Starting Ajp12ConnectionHandler on 8007
When you see these line Tomcat is running and ready to serve. Open up a browser
and go to and run a couple samples to make sure that Tomcat
is working properly. Run at least one servlet and one jsp sample. If you get any
errors check your environment and restart Tomcat.
{mospagebreak title=Installing Cocoon} Cocoon is Apache’s XML publishing engine. Cocoon will read your xml file and format it according to the layout defined in your XSL files. The Cocoon distribution includes the cocoon.jar file and a set of jar files that help cocoon read and format xml files. Installing Cocoon is not much more complicated than installing Tomcat. You need to copy all of the jar files to a place where Tomcat knows to find them and update Tomcat’s configuration.
First unpack Cocoon and change to the Cocoon directory:
$ tar -xzf Cocoon-1.8.2.tar.gz $ cd cocoon-1.8.2
Next copy all of the jar files in lib/ to $TOMCAT_HOME/lib
$ cp lib/*.jar $TOMCAT_HOME/lib $ cp bin/cocoon.jar $TOMCAT_HOME/lib
Tomcat 3.2.1 will automatically load all of the jar files found in $TOMCAT_HOME/lib so you do not have to add these to your CLASSPATH. However, if you are using java 1.2 or higher you will also need to add $JAVA_HOME/lib/tools.jar to your CLASSPATH. Cocoon uses tools.jar for page compilation. If you do not want to add tools.jar to your profile add this line to $TOMCAT_HOME/bin/tomcat.sh:
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib/tools.jar
The next step is to add a Context for Cocoon to Tomcat’s server.xml file. This tells Tomcat to create a context for Cocoon. It also tells Tomcat where to find the cocoon files and what url path cocoon should be launched from. Change to $TOMCAT_HOME/conf and use an editor to open server.xml
At the bottom of the file you will see this tag </ContextManager>. Above that
tag add these two lines:
<Context path="/cocoon" docBase="webapps/cocoon" debug="0" reloadable="true"> </Context>
This tells Tomcat to create a context for cocoon with the path /cocoon. Whenever
/cocoon is called Tomcat will look for the files in $TOMCAT_HOME/webapps/cocoon.
Of course at this point there is no webapps/cocoon directory so let’s create that
$ mkdir $TOMCAT_HOME/webapps/cocoon $ mkdir $TOMCAT_HOME/webapps/cocoon/WEB-INF
The WEB-INF directory is part of the servlet 2.2 specification. According to the specification the WEB-INF directory should contain a file called web.xml which defines your servlets and any parameters the serlvets need. Most servlet engines will not strictly enforce these standards but trying to work around it will usually give you more headaches than solutions.
The next step is to copy the cocoon web.xml file and cocoon.properties files
to your WEB-INF directory and copy the sample files to the webapps directory.
$ cd /path/to/cocoon-1.8 $ cp src/WEB-INF/web.xml $TOMCAT_HOME/webapps/cocoon/WEB-INF/ $ cp conf/cocoon.properties $TOMCAT_HOME/webapps/cocoon/WEB-INF/ $ cp -R samples/ $TOMCAT_HOME/webapps/cocoon/samples
The last thing you need to do is edit the web.xml file you just copied. Open
$TOMCAT_HOME/webapps/cocoon/WEB-INF/web.xml in an editor. You will see the line:
<param-value>[path-to-cocoon]/conf/cocoon.properties</param-value>
Change it to:
<param-value>WEB-INF/cocoon.properties</param-value>
Now stop and start Tomcat.
$ $TOMCAT_HOME/bin/shutdown.sh $ $TOMCAT_HOME/bin/startup.sh
While Tomcat is loading you will see:
2001-01-26 01:35:04 - ContextManager: Adding context Ctx( /cocoon )
If you see this then everything is good. Open a browser and go to
and try some of the samples. Not all of the samples are meant to be viewed in
a browser so don’t worry if you get errors with some of them. If you can see hello-page.xml
then everything is working.
{mospagebreak title=Defining your document} Your server environment is ready to go. Now we can actually start creating xml files. The first step is to create a Document Type Definition (dtd). The dtd tells the xml parser what information is allowed in the xml file and defines the format of the data. If the xml file does not fit into the constraints of the dtd the xml parser will give you an error and stop parsing the document. In our example we are creating a ‘contact’ element which has a name, address, phone number and email address. The dtd will define which tags must be part of a contact element and what kind of data will be in each tag.
It is not actually necessary to create a dtd for such a simple xml file, but it is important to use dtds when you are creating more sophisticated documents. Advantages of using dtds are that they ensure that your xml files are valid and well-formed and that you have all of the required data in the document. Using dtds also helps create standardization. Once you have created a dtd for a ‘contact’ it can be used in any place where contact information is required. This helps ensure that your data is consistent through out the company. Dtds also make it easier to communicate your data structures to others. If someone wants to interact with your applications or web site, for example, you could use dtds to show them your data structures.
To create your dtd, change directories to $TOMCAT_HOME/webapps/cocoon and create a directory called ‘address’. Then change directories to address, create a file called contact.dtd and open it up in a text editor
The first step is to define you root element:
<!ELEMENT contact (name,address,phone,e-mail)>
In the line above we defined an element called contact which contains the elements name, address, phone and e-mail. Any time we use this dtd we must have a contact element which contains all of the child elements.
Next we will define the name element:
<!ELEMENT name (first-name,last-name)> <!ELEMENT first-name (#PCDATA)> <!ELEMENT last-name (#PCDATA)>
The first line defines the name element as containing first-name and last-name elements. The next two lines define first-name and last-name elements as text elements.
Following the same logic we can fill in the address, phone and e-mail elements.
<!ELEMENT address (street,city,state,country)> <!ELEMENT street (#PCDATA)> <!ELEMENT city (#PCDATA)> <!ELEMENT state (#PCDATA)> <!ELEMENT country (#PCDATA)> <!ELEMENT phone (#PCDATA)> <!ELEMENT e-mail (#PCDATA)>
The entire dtd looks like this:
<!-- contact.dtd --> <!ELEMENT contact (name,address,phone,e-mail)> <!ELEMENT name (first-name,last-name)> <!ELEMENT first-name (#PCDATA)> <!ELEMENT last-name (#PCDATA)> <!ELEMENT address (street,city,state,country)> <!ELEMENT street (#PCDATA)> <!ELEMENT city (#PCDATA)> <!ELEMENT state (#PCDATA)> <!ELEMENT country (#PCDATA)> <!ELEMENT phone (#PCDATA)> <!ELEMENT e-mail (#PCDATA)>
{mospagebreak title=Creating your xml file} Now that we know what our data should look like we create the xml file. According to the dtd our xml file must contain all of these elements in (more or less) this structure:
<contact> <name> <first-name></first-name> <last-name></last-name> </name> <address> <street></street> <city></city> <state></state> <country></country> </address> <phone></phone> <e-mail></e-mail> </contact>
To begin create a file called homer.xml and open it in a text editor. The first
line of the xml file is the XML declaration:
<?xml version="1.0"?>
Every xml and xsl document should begin with the XML declaration. This declares that this is an xml document and which version of xml you are using.
Tags of the form <?…?> are processing instructions (PI). Processing instructions are instructions that are passed to the application that will be using the xml document. The first word after the <? is called the target which is the application that the instructions will be passed to. The rest of the PI contains the instructions to be passed to the target. In this case the target is the xml application and we are telling it that we are using xml version 1.0.
The next is our document type declaration. This is where we associate our xml
file with the dtd we created. All declarations use the tag <!…>. The document
type declaration looks like:
<!DOCTYPE contact SYSTEM "contact.dtd">
Next we want to add a PI for cocoon so that Cocoon knows that we want it to perform
stylesheet transformations for us:
<?cocoon-process type="xslt"?>
Finally we can populate our file with data. At this point filling in the data is pretty simple, just add data values between the tags:
<contact> <name> <first-name>Homer</first-name> <last-name>Simpson</last-name> </name> <address> <street>122 West 1st Avenue</street> <city>Springfield</city> <state>ZZ</state> <country>USA</country> </address> <phone>1-555-555-1111</phone> <e-mail>h.simpson@springfieldnuclear.com</e-mail> </contact>
With properly defined dtds and an xml template filling in data should be a relatively painless task.{mospagebreak title=Viewing your document in an HTML browser} We now have our xml file and a dtd to verify the validity of the document. You could view the xml file in an HTML browser but it would be quite uninteresting. The browser will load the xml file exactly as it appears in the text file. Your data will be presented correctly but users will want it to be more readable. Our next step is to create a stylesheet that contains HTML formatting information so that you can view the xml document as an html document.
To begin create a file called ‘address-html.xsl’ and open it using a text editor.
As always the first line of the file is:
<?xml version="1.0"?> Next we declare our document element and namespace by adding this line: <xsl:stylesheet xmlns:
The first part of the tag (xsl:stylesheet) is called the document element. Basically
it tells the xml processor that the following document is an xsl:stylesheet. To
close the document element we must end our document with:
<./xsl:stylesheet>
All data in the document must be enclosed within these tags.
The next part of the tag (xmlns:xsl=””) declares
the namespace. XMLNamespaces (xmlns) are used to provide a way for the xml parser
to make sense out of colliding vocabularies. Trying to create a namespace that
is guaranteed to be unique in any and every context would be impossible. Without
xmlns it is likely that your tags will collide with other tags at some point.
For example suppose in our contact book we wanted to add the contact’s job title.
To do this we would add:
<title>Nuclear Technician</title>
to our xml file and update the dtd. We now have a problem because HTML also uses the tags <title> </title> but it means something totally different. The XMLNamespace is used to sort out which tags are which. As an aside, the link (“) does not actually point to anything.
Next we create our first xsl template. A template is a set of instructions and literal results that are executed when certain conditions are met by the processor. The first template we will create lays out the HTML page:
<xsl:template <html> <head> <title> <xsl:value-of <xsl:text>, </xsl:text> <xsl:value-of </title> </head> <body bgcolor="#ffffff"> <xsl:apply-templates/> </body> </html> </xsl:template>
The match=”contact” attribute of the template tag tells the processor to use the following template when it comes across a contact element.
The statement <xsl:value-of select=” “> is where we actually select data from the xml document. In this case we select name/last-name, name/first-name and use that for the title of page. You must also use <xsl:text> some text </xsl:text> where you want regular text in the HTML. With this template the title of the page will be “Simpson, Homer”. Otherwise the template contains standard HTML that will layout the page. In the body of the HTML page we use xsl:apply-templates to tell the processor to apply the templates for the other elements of the xml document.
To fill in the body of the html document we will create templates for each of the elements of the xml document.
<xsl:template <h1 align="left"> <xsl:apply-templates/> </h1> </xsl:template> Matches the name elements and formats the name with an h1 tag. <xsl:template <i><xsl:text>Address: </xsl:text></i><br/> <xsl:value-of<br/> <xsl:value-of<br/> <xsl:value-of<br/> <xsl:value-of<br/> </xsl:template>
This simply matches the address element and prints out each address element line
by line.
<xsl:template<br/> <xsl:text>Phone number: </xsl:text> <xsl:apply-templates/> </xsl:template> <xsl:template<br/> <xsl:text>Email: </xsl:text> <a> <xsl:attribute <xsl:text>mailto:</xsl:text><xsl:apply-templates/> </xsl:attribute> <xsl:apply-templates/> </a> </xsl:template>
The last two templates match phone and e-mail and print the data out. For each element we grab the data, add some text and add it to the result tree. In the template that matches e-mail we also add html formatting to add a mailto: link to the e-mail address. The user only has to click on the email address link to create an email addressed to the contact. Since this page will be viewed in a web browser it seems likely that the user will want to send an email to the contact so why not make is easier?
Now that we have an xml and xsl document we need to tell the xml document to
use the xsl stylesheet. To do this open up homer.xml in a browser and add the
following processing instruction under the cocoon-process PI (<?cocoon-process
type=”xslt”?>)
<?xml-stylesheet href="address-html.xsl? type="text/xsl"?>
Everything is ready for viewing your first xml/xsl page in a browser. If tomcat is not running start it up and point your browser to the following URL:
If you see any Cocoon errors read it carefully. The error messages are generally quite good at telling you what is wrong. Xml is much more strict than html. Among other things case if followed strictly and all tags must be properly terminated. If you have xml syntax problems, Cocoon will usually tell you what is wrong.
{mospagebreak title=Viewing your document in a WAP browser} One of the key features of xml is that you can view the same data in a variety of formats. Once you have created a contact book wouldn’t it be nice to be able to use the same contact book whenever and wherever you need to look up a contact? Rather than writing conversion tools to reformat your contact for each media you can use different stylesheets on the same data.
As the use of web enabled cell phone and PDAs grows this approach can save you a lot of headaches. The standard for cell phone browsers is (for better or worse) wireless markup language (wml). We want to make our contact accessible from a cell phone browser. To do this we must now create a stylesheet for wml browsers. Create a file called ‘address-wml.xsl’ and open it in an editor.
The xsl instructions are essentially the same but we are now using wml constructs.
The top of the stylesheet adds a processing-instruction for the type text/wml
so the wml can be interpreted and formatted.
<?xml version="1.0"?> <xsl:stylesheet <xsl:template <xsl:processing-instruction type="text/wml" </xsl:processing-instruction>
The body is a mix of xsl and wml instead of html but is quite similar otherwise. Because users will be viewing this page on a cell phone browser we should rethink the presentation of the data. Taking the small screen and intended use of the page into consideration the first page will only show the name and phone number. If the cell phone user wants more contact information there is a link to the full contact info.
<wml> <card id="index" title="Your Contacts"> <p align="center"> <a href="WP_BOOKMARK#contact">Phone Book</a><br/> </p> </card> <card id="contact" title="Phone Book"> <p> <b><xsl:value-of <xsl:text> </xsl:text> <xsl:value-of</b><br/> <xsl:value-of <do type="accept" label="More"> <go href="#Address"/> </do> </p> </card> <card id="Address" title="address"> <p> <b> <xsl:value-of <xsl:text> </xsl:text> <xsl:value-of </b><br/> <xsl:value-of<br/> <xsl:value-of<br/> <xsl:value-of<br/> <xsl:value-of<br/> <a href="#index">Main</a> <do type="prev"> <prev/> </do> </p> </card> </wml> </xsl:template> </xsl:stylesheet>
As with the html stylesheet you need to tell the xml processor to use addres-wml.xsl
when a phone browser accesses the page. To do this open up homer.xml in an editor
and add the following line:
<?xml-stylesheet href="address-wml.xsl? type="text/xsl? media="wap"?>
Simply put, this line tells cocoon to use address-wml.xsl for formatting instructions when the media requesting the page is a wap browser.
Open up your phone.com browser and point it to the same URL you used to view the html page.
The data is the same but we have changed the document layout to make it more
useful in a cell phone browser. We do not have all of the information in one page.
Instead the first page shows the contacts name and phone number only. Assuming
that a cell phone user is most likely to look up a phone number lets show the
phone number first and not clutter up the small screen.
{mospagebreak title=Viewing your file as a pdf} The last thing we do in this tutorial is convert the xml to a pdf file. Creating a pdf file is great when your users might want to keep or print a copy of the data. Since pdfs are not editable they are a great way to provide users with a document that they will not be able to accidentally (or purposely) modify later.
For pdf generation we will use FOP from Apache. FOP takes in an xml document,
an FO stylesheet and outputs a pdf. FOP is actually a separate project from cocoon
but the FOP jar is included in the cocoon distribution so you do not need to add
anything new to the project. There are a lot of similarities with html or wml
stylesheets but you will notice that the formatting is much more specific. You
need to create a file called address-pdf.xsl and open it in a text browser. Begin
the file with the usual xsl:stylesheet PI but you also need to say that you will
be using the xml namespaces for xml and fo:
<xsl:stylesheet
Use an xsl template to match the contact item and add a processing-instruction for the type text/xslfo.
<xsl:template <xsl:processing-instruction type="text/xslfo"</xsl:processing-instruction>
For any fo file you first need an fo:root element. The fo:root element is essentially
the same as the root xsl:template. It defines page layouts, page sequences and
host of optional text formatting instructions.
<fo:root xmlns:
Every fo:root must have one fo:layout-master-set. The fo:layout-master-set defines
the layout specification including page margins.
<fo:layout-master-set> <fo:simple-page-master <fo:region-body <fo:region-after </fo:simple-page-master> </fo:layout-master-set>
Next add an fo:page-sequence to define how to create the pages within the document.
Your entire document can be within one fo:page-sequence or you can use different
page-sequence for sections.
<fo:page-sequence> <fo:sequence-specification> <fo:sequence-specifier-alternating </fo:sequence-specification>
Next you need to add fo:flow to output the text. In our file we are applying
the templates from the xml result tree.
<fo:flow> <xsl:apply-templates/> </fo:flow> </fo:page-sequence> Close the fo:root and xsl:template </fo:root> </xsl:template>
Finally we create xsl:templates for the elements of our contact item. In each
template we have an fo:block which is used to define font, font size, alignment,
etc.
<xsl:template <fo:block <xsl:apply-templates/> </fo:block> </xsl:template> <xsl:template <fo:block <xsl:apply-templates/> </fo:block> </xsl:template> <xsl:template <fo:block <xsl:apply-templates/> </fo:block> </xsl:template> <xsl:template <fo:block <xsl:apply-templates/> </fo:block> </xsl:template> </xsl:stylesheet>
You will notice that the xsl syntax is similar to previous xml files but with
fo formatting instruction. Now all that needs to be done is to tell your xml file
to use address-pdf.xsl. For simplicity open homer.xml and replace:
<?xml-stylesheet href="address-html.xsl" type="text/xsl"?>
with:
<?xml-stylesheet href="address-pdf.xsl" type="text/xsl"?>
It is possible to have both the pdf and html file generated at the same time and have the pdf downloaded from a link in the html page but it is not a straight forward as you might think. I will save that for another day. All that is left for you now is to point your browser to. If you have an Acrobat plugin the pdf should load automatically. If you do not have a plugin you should be prompted to download the file. Once downloaded open Acrobat to view the file.
That covers this tutorial. If everything has gone well you have a good idea of
how powerful xml can be. Hopefully it can help you manage your data and make your
life a little easier | http://www.devshed.com/c/a/xml/introduction-to-cocoon-xml-xsl/2/ | CC-MAIN-2017-34 | refinedweb | 4,467 | 65.42 |
Docs |
Forums |
Lists |
Bugs |
Planet |
Store |
GMN |
Get Gentoo!
Not eligible to see or edit group visibility for this bug.
View Bug Activity
|
Format For Printing
|
XML
|
Clone This Bug
compiling gcc-config on Solaris gives a warning of an implicitly defined
function `alloca' conflicts with built-in function. This is simply resolved by
including <alloc.h>. I haven't found any repercussions of this include on
Linux, Darwin and AIX.
(inlining patch because it's trivial)
Index: files/wrapper-1.4.7.c
===================================================================
--- files/wrapper-1.4.7.c (revision 5251)
+++ files/wrapper-1.4.7.c (working copy)
@@ -20,6 +20,7 @@
#include <string.h>
#include <stdarg.h>
#include <errno.h>
+#include <alloca.h>
#define GCC_CONFIG "/usr/bin/gcc-config"
#define ENVD_BASE "/etc/env.d/05gcc"
i dont like the sound of that at all ... why not just remove the usage of
alloca()
after all, this seems pretty frivolous:
int main(int argc, char *argv[]) {
struct wrapper_data *data;
data = alloca(sizeof(*data));
might as well just write it:
int main(int argc, char *argv[]) {
struct wrapper_data data;
Your rewrite makes sense to me. It's up to you.
I just patched ldwrapper (which is based on (gcc) wrapper), do you want me to
come up with a patch which punts the alloca usage?
feel free to post ... the worst that could happen is that i ignore it and write
it myself ;)
Created an attachment (id=115893) [edit]
wrapper non-alloca patch
Ok, I'll take that challenge :)
that's pretty much how i would have written it, thanks
added to gcc-config-1.3.16 | http://bugs.gentoo.org/173998 | crawl-002 | refinedweb | 267 | 67.45 |
Provides threading utilities for QGIS. More...
#include <qgsthreadingutils.h>
Provides threading utilities for QGIS.
Definition at line 35 of file qgsthreadingutils.h.
Guarantees that func is executed on the main thread.
If this is called from another thread, the other thread will be blocked until the function has been executed. This is useful to quickly access information from objects that live on the main thread and copying this information into worker threads. Avoid running expensive code inside func. If a feedback is provided, it will observe if the feedback is canceled. In case the feedback is canceled before the main thread started to run the function, it will return without executing the function.
Definition at line 56 of file qgsthreadingutils.h. | https://qgis.org/api/classQgsThreadingUtils.html | CC-MAIN-2022-05 | refinedweb | 121 | 67.96 |
I have the following named scope on
class RentableItem
< ActiveRecord::Base named_scope :available_at, lambda{
|starts_at, ends_at| { :select => "t.*", :from
=> "(SELECT ri.*, COALESCE(c1.start_date, '#{starts_at}') AS
EarliestAvailable, COALESCE(c2.end_date,
'#{ends_at}') AS LatestAvailable
View Replies
Say i have a controller with index and doesn't have any other RESTful
actions. I want to call different actions as a parametr from the same
controller which has a named route as follows.
Example,
I have TargetsController and i should be able to call
targets_path(any_action, params)
Can i call
resources :targets do
View Replies
Rails
Named
Routes
Calling
different
actions
I want to build a macro that connects our Excel-Data-Sheet with our
Reporting-Powerpoint-Presentation.So I have this named Range ("A")
selected and copied.Then I want to paste the data into a shape in
Powerpoint which has the same name as my Range ("A").
Sub
SyncWithPPT()Dim pptApp As PowerPoint.ApplicationDim pptPres As
PowerPoint.PresentationDim p
I have a webapp that lists all of my artists, albums and songs when the
appropriate link is clicked. I make extensive use of generic views
(object_list/detail) and named urls but I am coming across an annoyance. I
have three templates that pretty much output the exact same html that look
just like this:
{% extends "base.html" %}{% block
content %}<div id="content
I have a program that references a strongly named assembly which is in
the gac. I have a debug build of this assembly which I want to debug into,
but it isn't strongly named. Can I use a policy file or something to force
the program to use the weakly named assembly? Or do I have to recompile
the program to reference the weakly named assembly? (These assembles are
purchased from a 3rd-party.
Can anyone explain the difference in behaviour between Internet Explorer
and Firefox in regards to the below:
var myNamespace =
(function () { var exposed = {}; exposed.myFunction =
function myFunction () { return "Works!"; }
console.log(myFunction()); // IE: "Works!" // Firefox:
ReferenceError: myFunction is not defined
Unable to create named pipe w/err 0x0000007b
I am getting
the above error when I try to connect the server and client on different
machines.
The code I got from MSDN Link:
I am using Windows 7 machines to communicate.
#define
FULL_PIPE_NAME L".pipeSamplePipe"
What might be the problem?
@Entity@NamedQueries(
{ @NamedQuery(name = User.ALL, query =
"SELECT u FROM User u")})public class User { public
static final String ALL = "User.all";}public class Service { find... with ... User.ALL}
Stacktrace:
Caused by: org.hiberna
So I want to write a... well... not-so-simple parser with
boost::spirit::qi. I know the bare basics of boost spirit, having gotten
acquainted with it for the first time in the past couple of hours.
Basically I need to parse this:
# comment# other
i 0 0 0 i 1 2 5
I'm adding some custom logging functionality to a bash script, and can't
figure out why it won't take the output from one named pipe and feed it
back into another named pipe.
Here is a basic version of the
script ():
#!/bin/bashPROGNAME=$(basename $(readlink -f $0))LOG="$PROGNAME.log"PIPE_LOG="$PROGNAME-$$-log"PIPE_E | http://bighow.org/tags/Named/1 | CC-MAIN-2017-22 | refinedweb | 529 | 57.57 |
Closed Bug 692922 Opened 9 years ago Closed 7 years ago
QCMS looks for HAS
_POSIX _MEMALIGN s/b HAVE _POSIX _MEMALIGN
Categories
(Core :: GFX: Color Management, defect)
Tracking
()
mozilla32
People
(Reporter: swsnyder, Assigned: shashank)
Details
(Whiteboard: [good first bug][mentor=bgirard@mozilla.com])
Attachments
(1 file, 4 obsolete files)
There is code in QCMS that is #ifdef'd to symbol HAS_POSIX_MEMALIGN. This symbol isn't defined anywhere. Maybe someone overlooked defining this symbol. More likely is that the intent was to use HAVE_POSIX_MEMALIGN, which actually is defined and used elsewhere. In any case, that call to posix_memalign() will never be enabled with the #ifdef in its current state.
Thanks for the report. We should fix this. Willing to take a patch or mentor someone. The problem is under this file:
Whiteboard: [good first bug][mentor=bgirard@mozilla.com]
Assignee: nobody → swsnyder
Attachment #565665 - Flags: review?(jmuizelaar)
Comment on attachment 565665 [details] [diff] [review] Fix use of posix_memalign() when appropriate >--- mozilla8/gfx/qcms/transform.c.original 2011-10-06 20:59:02.000000000 -0400 >+++ mozilla8/gfx/qcms/transform.c 2011-10-07 18:04:34.146821044 -0400 >@@ -27,10 +27,11 @@ > #include <string.h> //memcpy > #include "qcmsint.h" > #include "chain.h" > #include "matrix.h" > #include "transform_util.h" >+#include "mozilla-config.h" I'd rather this be included by the build system instead of directly into transform.c >-#ifdef HAS_POSIX_MEMALIGN >+#ifdef HAVE_POSIX_MEMALIGN > static qcms_transform *transform_alloc(void) > { > qcms_transform *t; >- if (!posix_memalign(&t, 16, sizeof(*t))) { >+ if (!posix_memalign((void **)&t, 16, sizeof(*t))) { >+ memset(t, 0, sizeof(*t)); I'd rather we do something like: void *allocated_memory; posix_memalign(&allocated_memory,...) t = allocated_memory This lets us avoid the cast.
Attachment #565665 - Flags: review?(jmuizelaar) → review-
(In reply to Jeff Muizelaar [:jrmuizel] from comment #3) > I'd rather this be included by the build system instead of directly into > transform.c That's why I didn't submit a patch in the original report. It's a judgement call as to where this symbol is defined and how to communicate its presence to the intended source file(s). > I'd rather we do something like: > > void *allocated_memory; > posix_memalign(&allocated_memory,...) > t = allocated_memory > > This lets us avoid the cast. Sounds reasonable to me. I was trying to keep changes to a minimum to increase the chances of the patch being approved and applied. I can modify the patch to avoid the casting, but I'm not sure how to get the build system to put HAVE_POSIX_MEMALIGN on the compiler command line.
One more thing. (In reply to Jeff Muizelaar [:jrmuizel] from comment #3) > I'd rather we do something like: > > void *allocated_memory; > posix_memalign(&allocated_memory,...) > t = allocated_memory > > This lets us avoid the cast. Your proposed change doesn't actually eliminate a cast, it just moves it from posix_memalign() to return(). Given the function's return type the last line in the function would end up being: return (qcms_transform *)allocated_memory; That's probably cleaner (as the patch probably should have put that "void *" cast on the memset() call as well), but it doesn't leave the function cast-free.
Sorry Steve for the late response. I'm going to ping Jeff for a response for the second comment. The first problem should be fixed however before checking in.
Could someone please tear themselves away from the SocialAPI long enough to apply this 4-line fix (or equivalent) to an obvious bug? Thanks.
Can you confirm that you're still working on this bug?
Flags: needinfo?(swsnyder)
(In reply to Mike Hoye [:mhoye] from comment #8) > Can you confirm that you're still working on this bug? I can confirm that I am *not* working on it. Beyond the creation of the patch 18 months ago, I don't know what there is for me to do. The last meaningful comment is comment #3 in which Jeff expresses the desire to have the build system provide the HAVE_POSIX_MEMALIGN def instead of including mozilla-config.h. I don't know how to coax that behavior out of the build system.
Flags: needinfo?(swsnyder)
Assignee: swsnyder → nobody
(In reply to Benoit Girard (:BenWa) from comment #6) > Sorry Steve for the late response. I'm going to ping Jeff for a response for > the second comment. The first problem should be fixed however before > checking in. Twenty-nine months and counting. This must set some sort of record for ping RTT. Could someone please pull an intern off Social API to fix this single #ifdef?
Benoit Girard (:BenWa), I have looked into the patch sublitted earlier by @Steve Snyder. I would work on this but share the same apprehension as @Steve Snyder -- "I'm not sure how to get the build system to put HAVE_POSIX_MEMALIGN on the compiler command line." Are you willing to direct me or point me to someone else?
Flags: needinfo?(bgirard)
Build system questions? Mr. Szorc, I choo-choo-choose you!
Flags: needinfo?(gps)
Build command for transform.c: clang -o transform.o -c -fvisibility=hidden -DNO_NSPR_10_SUPPORT -I/Users/bgirard/mozilla/mozilla-central/tree/gfx/qcms -I. -I../../dist/include -I/Users/bgirard/mozilla/mozilla-central/builds/obj-ff-64gdb/dist/include/nspr -I/Users/bgirard/mozilla/mozilla-central/builds/obj-ff-64gdb/dist/include/nss -I/Users/bgirard/mozilla/mozilla-central/builds/obj-ff-64gdb/dist/include -I/Users/bgirard/mozilla/mozilla-central/tree/modules/zlib/src -fPIC -Qunused-arguments -include ../../mozilla-config.h -DMOZILLA_CLIENT -MD -MP -MF .deps/transform.o.pp -Qunused-arguments -Qunused-arguments -Wall -Wpointer-arith -Wdeclaration-after-statement -Werror=return-type -Werror=int-to-pointer-cast -Wtype-limits -Wempty-body -Wsign-compare -Wno-unused -Wno-error=uninitialized -Wno-error=deprecated-declarations -std=gnu99 -fno-strict-aliasing -fno-math-errno -pthread -DNO_X11 -pipe -DNDEBUG -DTRIMMED -g -O3 -fno-omit-frame-pointer -Wno-missing-field-initializers /Users/bgirard/mozilla/mozilla-central/tree/gfx/qcms/transform.c Looks like mozilla-config.h is included for c files so that #include shouldn't be required.
Flags: needinfo?(bgirard)
The HAVE_* defines are typically provided by mozilla-config.h. And, mozilla-config.h is typically silently included as part of all compilations (via -include on the command line). Comment #13 implies this is working as expected! The underlying problem might be HAVE_POSIX_MEMALIGN not being defined in the build configuration? Is it in objdir/config.status or objdir/mozilla-config.h?
Flags: needinfo?(gps)
Looks like chromium already patched this: Too bad they didn't upstream it :(
Comment on attachment 8419755 [details] [diff] [review] Fix use of posix_memalign() avoiding typecasts Review of attachment 8419755 [details] [diff] [review]: ----------------------------------------------------------------- LGTM, I'll land the revised patch. ::: gfx/qcms/transform.c @@ +929,5 @@ > qcms_transform *t; > + > + void *allocated_memory; > + if (!posix_memalign(&allocated_memory, 16, sizeof(qcms_transform))) { > + t = allocated_memory; Just one more thing. Notice how the non HAVE_POSIX_MEMALIGN version does a memset? The two version should be consistent here. They should either both return 0'ed memory or both not. Since this change is about fixing this we should make sure that for platforms that have memalign we don't suddenly start returning non 0'd memory. Please add a memset here and I think we're good to land the patch. Thanks!
Attachment #8419755 - Flags: review+
Initialising memory block to all zeroes for consistency with non HAVE_POSIX_MEMALIGN verion of the code
Attachment #8419755 - Attachment is obsolete: true
Attachment #8419804 - Flags: review?(bgirard)
Corrected the previous Bug692922_2.patch
Attachment #8419804 - Attachment is obsolete: true
Attachment #8419804 - Flags: review?(bgirard)
Attachment #8419822 - Flags: review?(bgirard)
Flags: needinfo?(jmuizelaar)
Comment on attachment 8419822 [details] [diff] [review] Fix use of posix_memalign() avoiding typecasts with proper initialisations Review of attachment 8419822 [details] [diff] [review]: ----------------------------------------------------------------- It's fine but next time we don't need a comment to explain a memset. It would be useful however to have a comment 'memset here to be consistent with the other transform_alloc'.
Attachment #8419822 - Flags: review?(bgirard) → review+
Fixed up the patch. Ready to land.
Attachment #565665 - Attachment is obsolete: true
Attachment #8419822 - Attachment is obsolete: true
Attachment #8419829 - Flags: review+
Assignee: nobody → shashank16392
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla32
Flags: needinfo?(jmuizelaar) | https://bugzilla.mozilla.org/show_bug.cgi?id=692922 | CC-MAIN-2021-04 | refinedweb | 1,343 | 50.94 |
:
The purpose of these documents is to record the disposition of issues which have come before the Core Language Working Group of the ANSI (J16) and ISO (WG21) C++ Standard Committee.
Issues represent potential defects in the ISO/IEC IS 14882:2003 document and corrected defects in the earlier ISO/IEC 14882:1998 document; they.:
Unless a class template specialization has been explicitly instantiated (14.7.2 temp.explicit) or explicitly specialized (14.7.3 temp.expl.spec), the class template specialization is implicitly instantiated when the specialization is referenced in a context that requires a completely-defined object type or when the completeness of the class type affects the semantics of the program.).
Section 9.6 class.bit paragraph 4 needs to be more specific about the signedness of bit fields of enum type. How much leeway does an implementation have in choosing the signedness of a bit field? In particular, does the phrase "large enough to hold all the values of the enumeration" mean "the implementation decides on the signedness, and then we see whether all the values will fit in the bit field", or does it require the implementation to make the bit field signed or unsigned if that's what it takes to make it "large enough"?
]
Furthermore, names of member templates shall not be prefixed by the keyword template if the postfix-expression or qualified-id does not appear in the scope of a template.[Note: just as is the case with the typename prefix, the template prefix is allowed in cases where it is not strictly necessary; i.e., when the { } access checking of base-specifiers must be deferred until the entire base-specifier-list has been seen. —end example]
In 11.4 class.friend paragraph 2, replace the following text:] discussed at some length. While there was widespread agreement that such inclusion is inherently implementation-dependent, we agreed to try to add wording that would make it clear that implementations are permitted (but not required) to allow inclusion of files using the <...> form of #include.
Proposed resolution (April, 2005):
Change 16.2 cpp.include paragraph 7 from:
[Example: The most common uses of #include preprocessing directives are as in the following:#include <stdio.h> #include "myprog.h"—end example]
to:
[Note: Although an implementation may provide a mechanism for making arbitrary source files available to the < > search, in general programmers should use the < > form for headers provided with the implementation, and the " " form for sources outside the control of the implementation. For instance:#include <stdio.h> #include <unistd.h> #include "usefullib.h" #include "myprog.h"—end note]
Notes from October, 2005 meeting:
Some doubt was expressed as to whether the benefit of this non-normative clarification outweighs the overall goal of synchronizing clause 16 with the corresponding text in the C99 Standard. As a result, this issue is being left in “review” status to allow further discussion.]
5.4 expr.cast paragraph 6 says,.
The wording seems to allow the following:
casting from void pointer to incomplete type
struct A; struct B; void *v; A *a = (A*)v; // allowed to choose reinterpret_cast
variant application of static or reinterpret casting
B *b = (B*)a; // compiler can choose static_cast here A *aa = (A*)b; // compiler can choose reinterpret_cast here assert(aa == a); // might not hold
ability to somehow choose static_cast
It's not entirely clear how a compiler can
choose static_cast as 5.4
expr.cast paragraph 6
seems to allow. I believe the intent of 5.4
expr.cast
paragraph 6 is to force the use of reinterpret_cast when
either are incomplete class types and static_cast iff the
compiler knows both types and there is a non-ambiguous
hierarchy-traversal between that cast (or maybe not, core issue 242 talks about this). I cannot see any
other interpretation because it isn't intuitive, every compiler I've
tried agrees with me, and neither standard pointer conversions
(4.10
conv.ptr paragraph 3) nor static_cast
(5.2.9
expr.static.cast paragraph 5) talk about incomplete
class types. If the committee agrees with me, I would like to see
Proposed resolution (April, 2006):
Change 5.4 expr.cast paragraph 6 as indicated:.If both the operand and destination types are class types and one or both are incomplete, it is unspecified whether the static_cast or the reinterpret_cast interpretation is used, even if there is an inheritance relationship between the two classes. [Note: For example, if the classes were defined later in the translation unit, a multi-pass compiler would be permitted to interpret a cast between pointers to the classes as if the class types were complete at that point. —end note]
Are string literals from default arguments used in extern inlines supposed to have the same addresses across all translation units?
void f(const char* = "s") inline g() { f(); }
Must the "s" strings be the same in all copies of the inline function?
Steve Adamczyk: The totality of the standard's wisdom on this topic is (7.1.2 dcl.fct.spec paragraph 4):
A string literal in an extern inline function is the same object in different translation units.
I'd hazard a guess that a literal in a default argument expression is not "in" the extern inline function (it doesn't appear in the tokens of the function), and therefore it need not be the same in different translation units.
I don't know that users would expect such strings to have the same address, and an equally valid (and incompatible) expectation would be that the same string literal would be used for every expansion of a given default argument in a single translation unit.
Notes from April 2003 meeting:
The core working group feels that the address of a string literal should be guaranteed to be the same only if it actually appears textually within the body of the inline function. So a string in a default argument expression in a block extern declaration inside the body of a function would be the same in all instances of the function. On the other hand, a string in a default argument expression in the header of the function (i.e., outside of the body) would not be the same.
Proposed resolution (April 2003):
Change the last sentence and add the note to the end of 7.1.2 dcl.fct.spec paragraph 4:
A string literal in the body of an extern inline function is the same object in different translation units. [Note: A string literal that is encountered only in the context of a function call (in the default argument expression of the called function), is not “in” the extern inline function.]
Notes from October 2003 meeting:
We discussed ctor-initializer lists and decided that they are also part of the body. We've asked Clark Nelson to work on syntax changes to give us a syntax term for the body of a function so we can refer to it here. See also issue 452, which could use this term.
(October, 2005: moved to “review” status in concert with issue 452. With that resolution, the wording above needs no further changes.)
Proposed resolution (April, 2006):
Change the last sentence and add the note to the end of 7.1.2 dcl.fct.spec paragraph 4:
A string literal in the body of an extern inline function is the same object in different translation units. [Note: A string literal appearing in a default argument expression is not considered to be “in the body” of an inline function merely by virtue of the expression’s use in a function call from that inline function. —end note]
The current wording of 8.5.1 dcl.init.aggr paragraph 8 requires that
An initializer for an aggregate member that is an empty class shall have the form of an empty initializer-list {}.
This is overly constraining. There is no reason that the following should be ill-formed:
struct S { }; S s; S arr[1] = { s };
Mike Miller: The wording of 8.5.1 dcl.init.aggr paragraph 8 is unclear. “An aggregate member” would most naturally mean “a member of an aggregate.” In context, however, I think it must mean “a member [of an aggregate] that is an aggregate”, that is, a subaggregate. Members of aggregates need not themselves be aggregates (cf paragraph 13 and 12.6.1 class.expl.init); it cannot be the case that an object of an empty class with a user-declared constructor must be initialized with {} when it is a member of an aggregate. This wording should be clarified, regardless of the decision on Nathan's point.
Proposed resolution (October, 2005):
This issue is resolved by the resolution of issue 413.
In 9 class paragraph 4, the first sentence says "A structure is a class definition defined with the class-key struct". As far as I know, there is no such thing as a structure in C++; it certainly isn't listed as one of the possible compound types in 3.9.2 basic.compound. And defining structures opens the question of whether a forward declaration is a structure or not. The parallel here with union (which follows immediately) suggests that structures and classes are really different things, since the same wording is used, and classes and unions do have some real differences, which manifest themselves outside of the definition. It also suggests that since one can't forward declare union with class and vice versa, the same should hold for struct and class -- I believe that the intent was that one could use struct and class interchangeably in forward declaration.
Suggested resolution:
I suggest something like the following:
If a class is defined with the class-key class, its members and base classes are private by default. If a class is defined with the class-key struct, its members and base classes are public by default. If a class is defined with the class-key union, its members are public by default, and it holds only one data member at a time. Such classes are called unions, and obey a number of additional restrictions, see 9.5 class.union.
Proposed resolution (April, 2006):
This issue is resolved by the resolution of issue 538.
The proposal says that value is true if "T is an empty class (10)". Clause 10 doesn't define an empty class, although it has a note that says a base class may "be of zero size (clause 9)" 9/3 says "Complete objects and member subobjects of class type shall have nonzero size." This has a footnote, which says "Base class subobjects are not so constrained."
The standard uses the term "empty class" in two places (8.5.1 dcl.init.aggr), but neither of those places defines it. It's also listed in the index, which refers to the page that opens clause 9, i.e. the nonzero size stuff cited above.
So, what's the definition of "empty class" that determines whether the predicate is_empty is true?
The one place where it's used is 8.5.1 dcl.init.aggr paragraph 8, which says (roughly paraphrased) that an aggregate initializer for an empty class must be "{}", and when such an initializer is used for an aggregate that is not an empty class the members are default-initialized. In this context it's pretty clear what's meant. In the type traits proposal it's not as clear, and it was probably intended to have a different meaning. The boost implementation, after it eliminates non-class types, determines whether the trait is true by comparing the size of a class derived from T to the size of an otherwise-identical class that is not derived from T.
Howard Hinnant: is_empty was created to find out whether a type could be derived from and have the empty base class optimization successfully applied. It was created in part to support compressed_pair which attempts to optimize away the space for one of its members in an attempt to reduce spatial overhead. An example use is:
template <class T, class Compare = std::less<T> > class SortedVec { public: ... private: T* data_; compressed_pair<Compare, size_type> comp_; Compare& comp() {return comp_.first();} const Compare& comp() const {return comp_.first();} size_type& sz() {return comp_.second();} size_type sz() const {return comp_.second();} };
Here the compare function is optimized away via the empty base optimization if Compare turns out to be an "empty" class. If Compare turns out to be a non-empty class, or a function pointer, the space is not optimized away. is_empty is key to making this work.
This work built on Nathan's article:.
Clark Nelson: I've been looking at issue 413, and I've reached the conclusion that there are two different kinds of empty class. A class containing only one or more anonymous bit-field members is empty for purposes of aggregate initialization, but not (necessarily) empty for purposes of empty base-class optimization.
Of course we need to add a definition of emptiness for purposes of aggregate initialization. Beyond that, there are a couple of questions:
Notes from the October, 2005 meeting:
There are only two places in the Standard where the phrase “empty class” appears, both in 8.5.1 dcl.init.aggr paragraph 8. Because it is not clear whether the definition of “empty for initialization purposes” is suitable for use in defining the is_empty predicate, it would be better just to avoid using the phrase in the language clauses. The requirements of 8.5.1 dcl.init.aggr paragraph 8 appear to be redundant; paragraph 6 says that an initializer-list must have no more initializers than the number of elements to initialize, so an empty class already requires an empty initializer-list, and using an empty initializer-list with a non-empty class is covered adequately by paragraph 7's description of the handling of an initializer-list with fewer initializers than the number of members to initialize.
Proposed resolution (October, 2005):]
Delete.
This resolution also resolves issue 491.
Additional note (October, 2005):
Deleting 8.5.1 dcl.init.aggr paragraph 8 altogether may not be a good idea. It would appear that, in its absence, the initializer elision rules of paragraph 11 would allow the initializer for a in the preceding example to be written { 3 } (because the empty-class member s would consume no initializers from the list).
Proposed resolution (April, 2006):
(Drafting note: this resolution also cleans up incorrect references to syntactic non-terminals in the nearby paragraphs.)
Change 8.5.1 dcl.init.aggr paragraph 4 as indicated:
An array of unknown size initialized with a brace-enclosed initializer-list containing n
initializersinitializer-clauses, where n shall be greater than zero... An empty initializer list {} shall not be used as the initializerinitializer-clause for an array of unknown bound.]
Change 8.5.1 dcl.init.aggr paragraph 6 as indicated:
An initializer-list is ill-formed if the number of
initializersinitializer-clauses exceeds the number of members...
Change 8.5.1 dcl.init.aggr paragraph 7 as indicated:
If there are fewer
initializersinitializer-clauses in the list than there are members...
Replace.
with:]
Change 8.5.1 dcl.init.aggr paragraph 10 as indicated:
When initializing a multi-dimensional array, the
initializersinitializer-clauses initialize the elements...
Change 8.5.1 dcl.init.aggr paragraph 11 as indicated:
Braces can be elided in an initializer-list as follows. If the initializer-list begins with a left brace, then the succeeding comma-separated list of
initializersinitializer-clauses initializes the members of a subaggregate; it is erroneous for there to be more initializersinitializer-clauses than members. If, however, the initializer-list for a subaggregate does not begin with a left brace, then only enough initializersinitializer-clauses from the list are taken to initialize the members of the subaggregate; any remaining initializersinitializer-clauses are left to initialize the next member of the aggregate of which the current subaggregate is a member. [Example:...
Change 8.5.1 dcl.init.aggr paragraph 12 as indicated:
All implicit type conversions (clause 4 conv) are considered when initializing the aggregate member with an
initializerassignment-expression from an initializer-list. If the initializerassignment-expression can initialize a member, the member is initialized. Otherwise, if the member is itself a non-emptysubaggregate, brace elision is assumed and the initializerassignment:... Braces are elided around the initializerinitializer-clause for b.a1.i...
Change 8.5.1 dcl.init.aggr paragraph 15 as indicated:
When a union is initialized with a brace-enclosed initializer, the braces shall only contain an
initializerinitializer-clause for the first member of the union...
Change 8.5.1 dcl.init.aggr paragraph 16 as indicated:
[Note:
asAs described above, the braces around the initializerinitializer-clause for a union member can be omitted if the union is a member of another aggregate. —end note]
(Note: this resolution also resolves issue 491.)
There are several problems with the terms defined in 9 class paragraph 4:
A structure is a class defined with the class-key struct; its members and base classes (clause 10 class.derived) are public by default (clause 11 class.access). A union is a class defined with the class-key union; its members are public by default and.
Although the term structure is defined here, it is used only infrequently throughout the Standard, often apparently inadvertently and consequently incorrectly:
5.2.5 expr.ref paragraph 4: the use is in a note and is arguably correct and helpful.
9.2 class.mem paragraph 11: the term is used (three times) in an example. There appears to be no reason to use it instead of “class,” but its use is not problematic.
17.1.8 defns.iostream.templates: the traits argument to the iostream class templates is (presumably unintentionally) constrained to be a structure, i.e., to use the struct keyword and not the class keyword in its definition.
B limits paragraph 2: the minimum number of declarator operators is given for structures and unions but not for classes defined using the class keyword.
B limits paragraph 2: class, structure, and union are used as disjoint terms in describing nesting levels. (The nonexistent nonterminal struct-declaration-list is used, as well.)
There does not appear to be a reason for defining the term structure. The one reference where it is arguably useful, in the note in 5.2.5 expr.ref, could be rewritten as something like, “'class objects' may be defined using the class, struct, or union class-keys; see clause 9 class.”
Based on its usage later in the paragraph and elsewhere, “POD-struct” appears to be intended to exclude unions. However, the definition of “aggregate class” in 8.5.1 dcl.init.aggr paragraph 1 includes unions. Furthermore, the name itself is confusing, leading to the question of whether it was intended that only classes defined using struct could be POD-structs or if class-classes are included. The definition should probably be rewritten as, “A POD-struct is an aggregate class defined with the class-key struct or the class-key class that has no...
In most references outside clause 9 class, POD-struct and POD-union are mentioned together and treated identically. These references should be changed to refer to the unified term, “POD class.”
Noted in passing: 18.1 lib.support.types paragraph 4 refers to the undefined terms “POD structure” and (unhyphenated) “POD union;” the pair should be replaced by a single reference to “POD class.”
Proposed resolution (April, 2006):
Change 9 class paragraph 4 as indicated:
A structure is a class defined with the class-key struct; its members and base classes (clause 10 class.derived) are public by default (clause 11 class.access).A union is a class defined with the class-key union; its members are public by default and.A POD class is an aggregate class that has no non-static data members of non-POD type (or array of such a type) or reference, and has no user-declared copy assignment operator and no user-declared destructor. A POD-struct is a POD class defined with the class-key struct or the class-key class. A POD-union is a POD class defined with the class-key union.
Drafting note: The “before” wording does not appear to support classes that contain scalar non-static data members. That is certainly not the intention. Moreover, C99 supports const and volatile data members, see sections 6.2.5 and 6.7.2.1 of the C99 standard. Access for members is described in clause 11 (note the redundancy in the old wording), access to base classes is described in clause 11.2 (see below).
Change 11.2 class.access.base paragraph 2 as indicated:
In the absence of an access-specifier for a direct base class, public is assumed when the derived class is
declareddefined with the class-key struct and private is assumed when the class is declareddefined with the class-key class. [Example:...
In 5.2.5 expr.ref paragraph 4, replace the note
[Note: “class objects” can be structures (9.2 class.mem) and unions (9.5 class.union). Classes are discussed in clause 9 class. —end note]
with
[Note: Classes (clause 9 class) can be defined with the class-keys struct, class, or union. —end note]
Change the commentary in the example in 9.2 class.mem paragraph 11 as indicated:
...an integer, and two pointers to
similar structuresobjects of the same type. Once this definition...
...the count member of the
structureobject to which sp points; s.left refers to the left subtree pointer of the structureobject s; and...
Change 17.1.8 defns.iostream.templates as indicated:
...the argument traits is a
structureclass which defines additional characteristics...
Change 18.4 lib.support.dynamic paragraph 4 as indicated:
If type is not a
POD structure or a POD unionPOD class (clause 9), the results are undefined.
Change the third bullet of B limits paragraph 2 as indicated:
Pointer, array, and function declarators (in any combination)
modifying
an a class, arithmetic, structure,
union, or incomplete type in a declaration [256].
Change the nineteenth bullet of B limits paragraph 2 as indicated:
Data members in a single class
, structure, or union [16 384].
Change the twenty-first bullet of B limits paragraph 2 as indicated:
Levels of nested class
, structure, or union definitions in a
single struct-declaration-list
member-specification [256].
Change C.2 diff.library paragraph 6 as indicated:
The C++ Standard library provides 2 standard
structuresstructs from the C library, as shown in Table 126.
Change the last sentence of 3.9 basic.types paragraph 10 as indicated:
Scalar types,
POD-struct types, POD-union typesPOD classes (clause 9 class), arrays of such types and cv-qualified versions of these types (3.9.3 basic.type.qualifier) are collectively called POD types.
Drafting note: Do not change 3.9 basic.types paragraph 11, because it's a note and the definition of “layout-compatible” is separate for POD-struct and POD-union in 9.2 class.mem.
(This resolution also resolves issue 327.):
The potential scope of a declaration that extends to or past the end of a class definition also extends to the regions defined by its member definitions, even if the members are defined lexically outside the class (this includes static data member definitions, nested class definitions, member function definitions (including the member function body
and,.
According to 14.1 temp.param paragraph 3, the following fragment is ill-formed:
template <class T> class X{ friend void T::foo(); };
In the friend declaration, the T:: part is a nested-name-specifier (8 dcl.decl paragraph 4), and T must be a class-name or a namespace-name (5.1 expr.prim paragraph 7). However, according to 14.1 temp.param paragraph 3, it is only a type-name. The fragment should be well-formed, and instantiations of the template allowed as long as the actual template argument is a class which provides a function member foo. As a result of this defect, any usage of template parameters in nested names is ill-formed, e.g., in the example of 14.6 temp.res paragraph 2.
Notes from 04/00 meeting:
The discussion at the meeting revealed a self-contradiction in the current IS in the description of nested-name-specifiers. According to the grammar in 5.1 expr.prim paragraph 7, the components of a nested-name-specifier must be either class-names or namespace-names, i.e., the constraint is syntactic rather than semantic. On the other hand, 3.4.3 basic.lookup.qual paragraph 1 describes a semantic constraint: only object, function, and enumerator names are ignored in the lookup for the component, and the program is ill-formed if the lookup finds anything other than a class-name or namespace-name. It was generally agreed that the syntactic constraint should be eliminated, i.e., that the grammar ought to be changed not to use class-or-namespace-name.
A related point is the explicit prohibition of use of template parameters in elaborated-type-specifiers in 7.1.5.3 dcl.type.elab paragraph 2. This rule was the result of an explicit Committee decision and should not be unintentionally voided by the resolution of this issue.
Proposed resolution (04/01):
Change 5.1 expr.prim paragraph 7 and A.4 gram.expr from
to
This resolution depends on the resolutions for issues 245 (to change the name lookup rules in elaborated-type-specifiers to include all type-names) and 283 (to categorize template type-parameters as type-names).
Notes from 10/01 meeting:
There was some sentiment for going with simply identifier in front of the "::", and stronger sentiment for going with something with a more descriptive name if possible. See also issue 180.
Notes from April 2003 meeting:
This was partly resolved by the changes for issue 125. However, we also need to add a semantic check in 3.4.3 basic.lookup.qual to allow T::foo and we need to reword the first sentence of 3.4.3 basic.lookup.qual.
Proposed resolution (October, 2004): class or namespace, the program is ill-formed. [...]
Notes from the April, 2005 meeting:
The 10/2004 resolution does not take into account the fact that template type parameters do not designate class types in the context of the template definition. Further drafting is required.
Proposed resolution (April, 2006): namespace or a class or dependent type, the program is ill-formed. [...].)
C.)> };
The original intent of the Committee when Koenig lookup was added to the language was apparently something like the following:
This approach is not reflected in the current wording of the Standard. Instead, the following appears to be the status quo:
John Spicer: Argument-dependent lookup was created to solve the problem of looking up function names within templates where you don't know which namespace to use because it may depend on the template argument types (and was then expanded to permit use in nontemplates). The original intent only concerned functions. The safest and simplest change is to simply clarify the existing wording to that effect.
Bill Gibbons: I see no reason why non-function declarations should not be found. It would take a special rule to exclude "function objects", as well as pointers to functions, from consideration. There is no such rule in the standard and I see no need for one.
There is also a problem with the wording in 3.4.2 basic.lookup.argdep paragraph 2:
If the ordinary unqualified lookup of the name finds the declaration of a class member function, the associated namespaces and classes are not considered.
This implies that if the ordinary lookup of the name finds the declaration of a data member which is a pointer to function or function object, argument-dependent lookup is still done.
My guess is that this is a mistake based on the incorrect assumption that finding any member other than a member function would be an error. I would just change "class member function" to "class member" in the quoted sentence.
Mike Miller: In light of the issue of "short-circuiting" Koenig lookup when normal lookup finds a non-function, perhaps it should be written as "...finds the declaration of a class member, an object, or a reference, the associated namespaces..."?
Andy Koenig: I think I have to weigh in on the side of extending argument-dependent lookup to include function objects and pointers to functions. I am particularly concerned about [function objects], because I think that programmers should be able to replace functions by function objects without changing the behavior of their programs in fundamental ways.
Bjarne Stroustrup: I don't think we could seriously argue from first principles that [argument-dependent lookup should find only function declarations]. In general, C++ name lookup is designed to be independent of type: First we find the name(s), then, we consider its(their) meaning. 3.4 basic.lookup states "The name lookup rules apply uniformly to all names ..." That is an important principle.
Thus, I consider text that speaks of "function call" instead of plain "call" or "application of ()" in the context of koenig lookup an accident of history. I find it hard to understand how 5.2.2 expr.call doesn't either disallow all occurrences of x(y) where x is a class object (that's clearly not intended) or requires koenig lookup for x independently of its type (by reference from 3.4 basic.lookup). I suspect that a clarification of 5.2.2 expr.call to mention function objects is in order. If the left-hand operand of () is a name, it should be looked up using koenig lookup.
John Spicer: This approach causes otherwise well-formed programs to be ill-formed, and it does so by making names visible that might be completely unknown to the author of the program. Using-directives already do this, but argument-dependent lookup is different. You only get names from using-directives if you actually use using-directives. You get names from argument-dependent lookup whether you want them or not.
This basically breaks an important reason for having namespaces. You are not supposed to need any knowledge of the names used by a namespace.
But this example breaks if argument-dependent lookup finds non-functions and if the translation unit includes the <list> header somewhere.
namespace my_ns { struct A {}; void list(std::ostream&, A&); void f() { my_ns::A a; list(cout, a); } }
This really makes namespaces of questionable value if you still need to avoid using the same name as an entity in another namespace to avoid problems like this.
Erwin Unruh: Before we really decide on this topic, we should have more analysis on the impact on programs. I would also like to see a paper on the possibility to overload functions with function surrogates (no, I won't write one). Since such an extension is bound to wait until the next official update, we should not preclude any outcome of the discussion.
I would like to have a change right now, which leaves open several outcomes later. I would like to say that:
Koenig lookup will find non-functions as well. If it finds a variable, the program is ill-formed. If the primary lookup finds a variable, Koenig lookup is done. If the result contains both functions and variables, the program is ill-formed. [Note: A future standard will assign semantics to such a program.]
I myself are not comfortable with this as a long-time result, but it prepares the ground for any of the following long term solutions:
The note is there to prevent compiler vendors to put their own extensions in here.
(See also issues 113 and 143.)
Notes from 04/00 meeting:
Although many agreed that there were valid concerns motivating a desire for Koenig lookup to find non-function declarations, there was also concern that supporting this capability would be more dangerous than helpful in the absence of overload resolution for mixed function and non-function declarations.
A straw poll of the group revealed 8 in favor of Koenig lookup finding functions and function templates only, while 3 supported the broader result.
Notes from the 10/01 meeting:
There was unanimous agreement on one less controversial point: if the normal lookup of the identifier finds a non-function, argument-dependent lookup should not be done.
On the larger issue, the primary point of consensus is that making this change is an extension, and therefore it should wait until the point at which we are considering extensions (which could be very soon). There was also consensus on the fact that the standard as it stands is not clear: some introductory text suggests that argument-dependent lookup finds only functions, but the more detailed text that describes the lookup does not have any such restriction.
It was also noted that some existing implementations (e.g., g++) do find some non-functions in some cases.
The issue at this point is whether we should (1) make a small change to make the standard clear (presumably in the direction of not finding the non-functions in the lookup), and revisit the issue later as an extension, or (2) leave the standard alone for now and make any changes only as part of considering the extension. A straw vote favored option (1) by a strong majority..5.3 lib.7..6.4 lib.uncaught).
Change 18.6.4 lib.
Section 1.3.10 lib.
The standard defines “signature” in two places: 1.3.10 defns.signature and 14.5.10 defns.signature words “the information about a function that participates in overload resolution” isn't quite right either. Perhaps, “the information about a function that distinguishes it in a set of overloaded functions?”
Eric Gufford:
In 1.3.10 defns.signature the definition states that “Function signatures do not include return type, because that does not participate in overload resolution,” while 14.5. lib.containers) of pointers show undefined behaviour, e.g. 23.2.2.3 lib.list.modifiers requires to invoke the destructor as part of the clear() method of the container.
If any other meaning was intended for 'using an expression', that meaning should be stated explicitly.static ,.
Split off from issue 315.
Incidentally, another thing that ought to be cleaned up is the inconsistent use of "indirection" and "dereference". We should pick one..4.1.3 lib; } int main() { printf("short: %d\n", conv_int<short>::value); printf("int *: %d\n", conv_int<int *>::value); printf("short: %d\n", conv_int2<short>()); printf("int *: %d\n", conv_int2.
7.1?..7n,.
The wholesale replacement of the phrase “template function” by the resolution of issue 105 seems to have overlooked the similar phrase “template conversion function.” This phrase appears a number of times in 13.3.3.1.2 over.ics.user paragraph 3, 14.5.2 temp.mem paragraphs 5-8, and 14.8.2 temp.deduct paragraph 1. It should be systematically replaced in similar fashion to the resolution of issue 105.?
7.1.3 dcl.typedef paragraph 1 says,
The typedef specifier shall not be used in a function-definition (
8.4 dcl.fct.def)...
Does this mean that the following is ill-formed?
void f() { typedef int. | http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2029.html | CC-MAIN-2014-10 | refinedweb | 5,894 | 53.51 |
We have seen the evolution of WCF in our previous article: WCF Part 1: Why WCF?. Next, we will go through the fundamentals of WCF. I will try to make it more worthy than an interview question puff.
Windows Communication Foundation. Easy. Although, I was not able to find as why it is called a foundation but I was able to relate it to the fact that WCF was the first attempt to bring all the communication protocols and messaging formats under one roof. Refer to the last article for details. With WCF in hand,
no developer has to go anywhere or learn a.
Communication between server and client happens by exchanging messages over a network which must be following a transport protocol.
.
We hear a term SOAP, it is,
A channel is a component that operates on messages and message headers. There are two types of channels: transport channels and protocol channels.
Now we know about the message and their transmissions, we will move to the components of WCF required to implement all this.
Major components of WCF are ABC. We will concentrate on the flow between client and server machine to identify the components of WCF..
Now client knows where the service is located, then for transmission of this service from server to client; transport, encoding, and protocol details are required. These details should be agreed upon between server and client before starting the transmission. A combination of these elements will form our binding.
Table - Combination of encoding and transport protocol for each binding
Additional Bindings:
WSDualHttpBinding, WSFederationHttpBinding, WSFederationHttpBinding,
MsmqIntegrationBinding.
WSDualHttpBinding
WSFederationHttpBinding
MsmqIntegrationBinding
Developers can use built-in bindings plus they can tweak and configure as per the requirements or can try and write their own custom bindings. WCF gives full freedom to developers in terms of customization. ;
}
}
string
int
;
}
[MessageContract]
public class Employee
{
[MessageHeader]
public int EmpID;
[MessageBodyMember]
public string Name;
MessageBodyMember]
public int Salary;
}.
Related posts:
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | http://www.codeproject.com/Articles/509629/WCFplusPartplus2-3aplusFundamentals | CC-MAIN-2014-52 | refinedweb | 339 | 55.13 |
The hvPlot NetworkX plotting API is meant as a drop-in replacement for the
networkx.draw methods. In most cases the existing code will work as is or with minor modifications, returning a HoloViews object rendering an interactive bokeh plot, equivalent to the matplotlib plot the standard API constructs. First let us import the plotting interface and give it the canonical name
hvnx:
import hvplot.networkx as hvnx import networkx as nx import holoviews as hv
In this user guide we will follow along with many of the examples in the NetworkX tutorial on drawing graphs.
The
hxnx namespace provides all the same plotting functions as
nx, this means in most cases one can simply be swapped for the other. This also includes most keywords used to customize the plots. The main difference is in the way multiple plots are composited, like all other hvPlot APIs the networkX functions returns HoloViews objects which can be composited using
+ and
* operations:
G = nx.petersen_graph() spring = hvnx.draw(G, with_labels=True) shell = hvnx.draw_shell(G, nlist=[range(5, 10), range(5)], with_labels=True, font_weight='bold') spring + shell
H = nx.triangular_lattice_graph(1, 20) hvnx.draw_planar(H, node_color='green', edge_color='brown')
The most common
layout functions have dedicated drawing methods such as the
draw_shell function above, which automatically computes the node positions.
However layout algorithms are not necessarily deterministic, so if we want to plot and overlay subsets of either the nodes or edges using the
nodelist and
edgelist keywords the node positions should be computed ahead of time and passed in explicitly:
pos = nx.layout.spring_layout(G) hvnx.draw(G, pos, nodelist=[0, 1, 2, 3, 4], node_color='blue') *\ hvnx.draw_networkx_nodes(G, pos, nodelist=[5, 6, 7, 8, 9], node_color='green')
The
hvnx namespace also makes
save and ``show utilities available to save the plot to HTML or PNG files or display it in a separate browser window when working in a standard Python interpreter.
G = nx.dodecahedral_graph() shells = [[2, 3, 4, 5, 6], [8, 1, 0, 19, 18, 17, 16, 15, 14, 7], [9, 10, 11, 12, 13]] shell = hvnx.draw_shell(G, nlist=shells) pos = nx.nx_agraph.graphviz_layout(G) graphviz = hvnx.draw(G, pos=pos) layout = shell + graphviz hvnx.save(layout, 'graph_layout.png')
The full set of options which are inherited from networkx's API are listed in the
hxnx.draw() docstring. Using these the more common styling of nodes and edges can easily be altered through the common set of options that are inherited from networkx. In addition common HoloViews options to control the size of the plots, axes and styling are also supported. Finally, some
layout functions also accept special keyword arguments such as the
nlist argument for the shell layout which specifies the shells.
options = { 'node_color': 'black', 'node_size': 100, 'edge_width': 3, 'width': 300, 'height': 300 } random = hvnx.draw_random(G, **options) circular = hvnx.draw_circular(G, **options) spectral = hvnx.draw_spectral(G, **options) shell = hvnx.draw_shell(G, nlist=[range(5,10), range(5)], **options) (random + circular + spectral + shell).cols(2)
In addition to being able to set scalar style values hvPlot also supports the HoloViews concept of style mapping, which uses so called
dim transforms to map attributes of the graph nodes and edges to vary the visual attributes of the plot. For example we might construct a graph with edge weights and node sizes as attributes. The plotting function will extract these attributes which means they can be used to scale visual properties of the plot such as the
edge_width,
edge_color or
node_size:) G.add_node('a', size=20) G.add_node('b', size=10) G.add_node('c', size=12) G.add_node('d', size=5) G.add_node('e', size=8) G.add_node('f', size=3) pos = nx.spring_layout(G) # positions for all nodes hvnx.draw(G, pos, edge_color='weight', edge_cmap='viridis', edge_width=hv.dim('weight')*10, node_size=hv.dim('size')*20)
The full set of options that are supported can be accessed on the
hvnx.draw function (note this does not include some bokeh specific option to control the styling of selection, nonselection and hover nodes and edges which may also be supplied and follow a pattern like
hover_node_fill_color or
selection_edge_line_alpha).
For reference here is the docstring listing the main supported option:
print(hvnx.draw.__doc__)
The main difference to the networkx.draw API are a few options which are not supported (such as
font_weight and
arrowsize) and the renaming of
width (which controls the edge line width) to
edge_width since
width and
height are reserved for defining the screen dimensions of the plot.
Compute some network properties for the lollipop graph.
URL:
# Copyright (C) 2004-2018 by # Aric Hagberg <[email protected]> # Dan Schult <[email protected]> # Pieter Swart <[email protected]> # All rights reserved. # BSD license. # Adapted by Philipp Rudiger <[email protected]> G = nx.lollipop_graph(4, 6) pathlengths = [] print("source vertex {target:length, }") for v in G.nodes(): spl = dict(nx.single_source_shortest_path_length(G, v)) print('{} {} '.format(v, spl)) for p in spl: pathlengths.append(spl[p]) print('') print("average shortest path length %s" % (sum(pathlengths) / len(pathlengths))) # histogram of path lengths dist = {} for p in pathlengths: if p in dist: dist[p] += 1 else: dist[p] = 1 print('') print("length #paths") verts = dist.keys() for d in sorted(verts): print('%s %d' % (d, dist[d])) print("radius: %d" % nx.radius(G)) print("diameter: %d" % nx.diameter(G)) print("eccentricity: %s" % nx.eccentricity(G)) print("center: %s" % nx.center(G)) print("periphery: %s" % nx.periphery(G)) print("density: %s" % nx.density(G)) hvnx.draw(G, with_labels=True)
Draw a graph with hvPlot.
URL:
G = nx.path_graph(8) hvnx.draw(G)
Draw a graph with hvPlot, color by degree.
URL:
# Author: Aric Hagberg ([email protected]) # Adapted by Philipp Rudiger <[email protected]> G = nx.cycle_graph(24) pos = nx.spring_layout(G, iterations=200) # Preferred API # hvnx.draw(G, pos, node_color='index', node_size=500, cmap='Blues') # Original code hvnx.draw(G, pos, node_color=range(24), node_size=500, cmap='Blues')
Draw a graph with hvPlot, color edges.
URL:
# Author: Aric Hagberg ([email protected]) # Adapted by Philipp Rudiger <[email protected]> G = nx.star_graph(20) pos = nx.spring_layout(G) colors = range(20) hvnx.draw(G, pos, node_color='#A0CBE2', edge_color=colors, edge_width=4, edge_cmap='Blues', with_labels=False)
Draw a graph with hvPlot.
URL:
# Author: Aric Hagberg ([email protected]) # Adapted by Philipp Rudiger <[email protected]> G = nx.house_graph() # explicitly set positions pos = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1), 4: (0.5, 2.0)} hvnx.draw_networkx_nodes(G, pos, node_size=2000, nodelist=[4], padding=0.2) *\ hvnx.draw_networkx_nodes(G, pos, node_size=3000, nodelist=[0, 1, 2, 3], node_color='black') *\ hvnx.draw_networkx_edges(G, pos, alpha=0.5, width=6, xaxis=None, yaxis=None)
try: import pygraphviz # noqa from networkx.drawing.nx_agraph import graphviz_layout except ImportError: try: import pydot # noqa from networkx.drawing.nx_pydot import graphviz_layout except ImportError: raise ImportError("This example needs Graphviz and either " "PyGraphviz or pydot") G = nx.balanced_tree(3, 5) pos = graphviz_layout(G, prog='twopi', args='') hvnx.draw(G, pos, node_size=20, alpha=0.5, node_color="blue", with_labels=False, width=600, height=600)
The spectral layout positions the nodes of the graph based on the eigenvectors of the graph Laplacian L=D−A,.
URL:
options = { 'node_size': 100, 'width': 250, 'height': 250 } G = nx.grid_2d_graph(6, 6) spectral1 = hvnx.draw_spectral(G, **options) G.remove_edge((2, 2), (2, 3)) spectral2 = hvnx.draw_spectral(G, **options) G.remove_edge((3, 2), (3, 3)) spectral3 = hvnx.draw_spectral(G, **options) G.remove_edge((2, 2), (3, 2)) spectral4 = hvnx.draw_spectral(G, **options) G.remove_edge((2, 3), (3, 3)) spectral5 = hvnx.draw_spectral(G, **options) G.remove_edge((1, 2), (1, 3)) spectral6 = hvnx.draw_spectral(G, **options) G.remove_edge((4, 2), (4, 3)) spectral7 = hvnx.draw_spectral(G, **options) (hv.Empty() + spectral1 + hv.Empty() + spectral2 + spectral3 + spectral4 + spectral5 + spectral6 + spectral7).cols(3)
Draw a graph with hvPlot.
URL:
# Author: Aric Hagberg ([email protected]) # Adapted by Philipp Rudiger <[email protected]> # Copyright (C) 2004-2018 # Aric Hagberg <[email protected]> # Dan Schult <[email protected]> # Pieter Swart <[email protected]> # All rights reserved. # BSD license. G = nx.grid_2d_graph(4, 4) # 4x4 grid pos = nx.spring_layout(G, iterations=100) g1 = hvnx.draw(G, pos, font_size=8) g2 = hvnx.draw(G, pos, node_color='black', node_size=0, with_labels=False) g3 = hvnx.draw(G, pos, node_color='green', node_size=250, with_labels=False, edge_width=6) H = G.to_directed() g4 = hvnx.draw(H, pos, node_color='blue', node_size=20, with_labels=False) (g1 + g2 + g3 + g4).cols(2)
Example using the NetworkX ego_graph() function to return the main egonet of the largest hub in a Barabási-Albert network.
URL:
# Author: Drew Conway ([email protected]) # Adapted by Philipp Rudiger <[email protected]> from operator import itemgetter # Create a BA model graph n = 1000 m = 2 G = nx.generators.barabasi_albert_graph(n, m) # find node with largest degree node_and_degree = G.degree() (largest_hub, degree) = sorted(node_and_degree, key=itemgetter(1))[-1] # Create ego graph of main hub hub_ego = nx.ego_graph(G, largest_hub) # Draw graph pos = nx.spring_layout(hub_ego) g = hvnx.draw(hub_ego, pos, node_color='blue', node_size=50, with_labels=False) # Draw ego as large and red gnodes = hvnx.draw_networkx_nodes(hub_ego, pos, nodelist=[largest_hub], node_size=300, node_color='red') g * gnodes
URL:
G = nx.random_geometric_graph(200, 0.125) # position is stored as node attribute data for random_geometric_graph pos = nx.get_node_attributes(G, 'pos') # find node near center (0.5,0.5) dmin = 1 ncenter = 0 for n in pos: x, y = pos[n] d = (x - 0.5)**2 + (y - 0.5)**2 if d < dmin: ncenter = n dmin = d # color by path length from node near center p = nx.single_source_shortest_path_length(G, ncenter) hvnx.draw_networkx_edges(G, pos, nodelist=[ncenter], alpha=0.4, width=600, height=600) *\ hvnx.draw_networkx_nodes(G, pos, nodelist=list(p.keys()), node_size=80, node_color=list(p.values()), cmap='Reds_r')
An example using Graph as a weighted network.
URL:
# Author: Aric Hagberg ([email protected]) # Adapted by Philipp Rudiger <[email protected]> import networkx as nx) elarge = [(u, v) for (u, v, attr) in G.edges(data=True) if attr['weight'] > 0.5] esmall = [(u, v) for (u, v, attr) in G.edges(data=True) if attr['weight'] <= 0.5] pos = nx.spring_layout(G) # positions for all nodes # nodes nodes = hvnx.draw_networkx_nodes(G, pos, node_size=700) # edges edges1 = hvnx.draw_networkx_edges( G, pos, edgelist=elarge, edge_width=6) edges2 = hvnx.draw_networkx_edges( G, pos, edgelist=esmall, edge_width=6, alpha=0.5, edge_color='blue', style='dashed') labels = hvnx.draw_networkx_labels(G, pos, font_size=20, font_family='sans-serif') edges1 * edges2 * nodes * labels
Draw a graph with directed edges using a colormap and different node sizes.
Edges have different colors and alphas (opacity). Drawn using matplotlib.
URL:
# Author: Rodrigo Dorantes-Gilardi ([email protected]) # Adapted by Philipp Rudiger <[email protected]> G = nx.generators.directed.random_k_out_graph(10, 3, 0.5) pos = nx.layout.spring_layout(G) node_sizes = [3 + 10 * i for i in range(len(G))] M = G.number_of_edges() edge_colors = range(2, M + 2) edge_alphas = [(5 + i) / (M + 4) for i in range(M)] nodes = hvnx.draw_networkx_nodes(G, pos, node_size=node_sizes, node_color='blue') edges = hvnx.draw_networkx_edges(G, pos, node_size=node_sizes, arrowstyle='->', arrowsize=10, edge_color=edge_colors, edge_cmap='Blues', edge_width=2, colorbar=True) nodes * edges
This example illustrates the sudden appearance of a giant connected component in a binomial random graph.
# Copyright (C) 2006-2018 # Aric Hagberg <[email protected]> # Dan Schult <[email protected]> # Pieter Swart <[email protected]> # All rights reserved. # BSD license. # Adapted by Philipp Rudiger <[email protected]> import math try: import pygraphviz # noqa from networkx.drawing.nx_agraph import graphviz_layout layout = graphviz_layout except ImportError: try: import pydot # noqa from networkx.drawing.nx_pydot import graphviz_layout layout = graphviz_layout except ImportError: print("PyGraphviz and pydot not found;\n" "drawing with spring layout;\n" "will be slow.") layout = nx.spring_layout n = 150 # 150 nodes # p value at which giant component (of size log(n) nodes) is expected p_giant = 1.0 / (n - 1) # p value at which graph is expected to become completely connected p_conn = math.log(n) / float(n) # the following range of p values should be close to the threshold pvals = [0.003, 0.006, 0.008, 0.015] region = 220 # for pylab 2x2 subplot layout plots = [] for p in pvals: G = nx.binomial_graph(n, p) pos = layout(G) region += 1 g = hvnx.draw(G, pos, with_labels=False, node_size=15) # identify largest connected component Gcc = sorted([G.subgraph(c) for c in nx.connected_components(G)], key=len, reverse=True) G0 = Gcc[0] edges = hvnx.draw_networkx_edges( G0, pos, with_labels=False, edge_color='red', edge_width=6.0) # show other connected components other_edges = [] for Gi in Gcc[1:]: if len(Gi) > 1: edge = hvnx.draw_networkx_edges(Gi, pos, with_labels=False, edge_color='red', alpha=0.3, edge_width=5.0 ) other_edges.append(edge) plots.append((g*edges*hv.Overlay(other_edges)).relabel("p = %6.3f" % (p))) hv.Layout(plots).cols(2) | https://nbviewer.org/github/holoviz/hvplot/blob/v0.8.0/examples/user_guide/NetworkX.ipynb | CC-MAIN-2022-40 | refinedweb | 2,082 | 52.76 |
I have updated the wiki page at with these ideas. If you have further thoughts on all of this, please update that page and send an email out so we know to look at the changes! My timeline for implementing all of this (not hard, but it needs to be done) is around the end of the month. Thanks, Richard On Apr 4, 2013, at 11:11 AM, Edward A Kmett <ekmett at gmail.com> wrote: > Note the eq lib and the type-eq/(:~:) GADT-based approach are interchangeable. > > You can upgrade a Leibnizian equality to a type equality by applying the Leibnizian substitution to an a :~: a. > > lens also has a notion of an Equality family at the bottom of the type semilattice for lens-like constructions, which is effectively a naked Leibnizian equality sans newtype wrapper. > > The only reason eq exists is to showcase this approach, but in practice I recommend just using the GADT, modulo mutterings about the name. :) > > That said those lowerings are particularly useful, if subtle -- Oleg wrote the first version of them, which I simplified to the form in that lib and they provide functionality that is classically not derivable for Leibnizian equality. > > Sent from my iPhone > > On Apr 4, 2013, at 3:01 AM, Erik Hesselink <hesselink at gmail.com> wrote: > >> On Wed, Apr 3, 2013 at 6:08 PM, Richard Eisenberg <eir at cis.upenn.edu> wrote: >>> Comments? Thoughts? >> >> Edward Kmett 'eq' library uses a different definition of equality, but >> can also be an inspiration for useful functions. Particularly, it >> includes: >> >> lower :: (f a :~: f b) -> a :~: b >> >> Another question is where all this is going to live? In a separate >> library? Or in base? And should it really be in a GHC namespace? The >> functionality is not bound to GHC. Perhaps something in data would be >> more appropriate. >> >> Erik >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> > | http://www.haskell.org/pipermail/libraries/2013-April/019701.html | CC-MAIN-2014-35 | refinedweb | 316 | 64.41 |
... :
It sits there with these errors and doesn't display a login prompt.) ...
These are the only changes I made to free UART:
From the command prompt enter the following command:
And changed:
Code: Select all
$ sudo nano /boot/cmdline.txt
to:
Code: Select all
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 $
Step Two: Edit /etc/inittab
Code: Select all
dwc_otg.lpm_enable=0 console=tty1 $
From the command prompt enter the following command:
And changed:
Code: Select all
$ sudo nano /etc/inittab
to:
Code: Select all
#Spawn a getty on Raspberry Pi serial line T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100
Step Three: Reboot your Pi.
Code: Select all
#Spawn a getty on Raspberry Pi serial line #T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100
The problem is that now that I have this slice of pi breakout board and I'm not sure what TTY device to use.
This is the current python code I was using:
I wasn't sure what device to set while using the Slice of Pi breakout instead.
Code: Select all
import socket import RPi.GPIO as GPIO import wiringpi device = "/dev/ttyUSB0" message = "hello world" serial = wiringpi.Serial(ttyDevice, 9600) serial.puts(message)
Is it /dev/ttyAMA0?
According to this tutorial that's what they were using: ... ensor.html | https://www.raspberrypi.org/forums/viewtopic.php?t=20660 | CC-MAIN-2019-04 | refinedweb | 223 | 55.44 |
Managing snapshot policies
You must enable an HDFS directory for snapshots to allow snapshot policies to be created for that directory. To designate a HDFS directory as snapshottable, follow the procedure in
To create a snapshot policy:
- In Cloudera Manager, select.
Existing snapshot policies are shown in a table.
- To create a new policy, click Create Snapshot Policy.
- From the drop-down list, select the service (HDFS or HBase) and cluster for which you want to create a policy.
- Provide a name for the policy and, optionally, enter a description.
- Specify the directories, namespaces, the relevant box. The fields display where you can edit the time and number of snapshots to keep.
- Specify whether Alerts should be generated for various state changes in the snapshot workflow. You can alert on failure, on start, on success, or when the snapshot workflow is aborted.
- Click Save Policy.
The new policy appears on the Snapshot Policies page.
To edit or delete a snapshot policy:
- From Cloudera Manager, select.
Existing snapshot policies are shown in a table.
- Locate the policy and click the Actions menu, and select Edit or Delete options. | https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/tools-and-methods/topics/rm-dc-managing-snapshot-policies.html | CC-MAIN-2021-49 | refinedweb | 187 | 58.89 |
Before you can begin working with Google Maps on iOS, you need to download the Google Maps SDK for iOS and ensure that you have an API key.
Complete release notes are available for each release.
Step 1: Get the latest version of Xcode
To build a project using the Google Maps SDK for iOS, you need version 6.3 or later of Xcode.
Step 2: Get CocoaPods
The Google Maps SDK Maps SDK, and is commonly referred to as a Podspec.
Edit the
Podfileand add your dependencies. Here is a simple Podspec, including the name of the pod you need for the Google Maps SDK for iOS:
source '' platform :ios, '8.1' pod 'GoogleMaps'
Save the
Podfile.
Open a terminal and go to the directory containing the
Podfile:
$ cd <path-to-project>: Enable the required APIs on the Google Developers Console
You need to activate the Google Maps SDK for iOS, and optionally the Google Places API for iOS, in your project on the Google Developers Console.
If you want to be guided through the process and activate both the APIs automatically, click this link.
Alternatively, you can activate the APIs yourself in the Developers Console, by doing the following:
- Go to the Google Developers Console.
- Select a project, or create a new one.
- Enable the Google Maps SDK for iOS, and optionally the Google Places API for iOS:.
Step 5: Get an iOS API key
Using an API key enables you to monitor your application's API usage, and ensures that Google can contact you about your application if necessary. The key is free, you can use it with any of your applications that call the Google Maps SDK for iOS, and it supports an unlimited number of users. You obtain an API key from the Google Developers Console by providing your application's bundle identifier.
If your project doesn't already have a key for iOS applications, follow these steps to create an API key from the Google Developers Console:
- In the sidebar on the left, select Credentials.
- Click Create New Key and then select iOS key.
- In the resulting dialog, enter your app's bundle identifier. For example:
com.example.hellomap.
Click Create.
The Google Developers Console displays a section titled Key for iOS applications with a character-string API key. Here's an example:
AIzaSyBdVl-cTICSwYKrZ95SuvNw7dbMuDt1KG0
Add your API key to your
AppDelegate.mas follows:
- Add the following import statement:
@import GoogleMaps;
- Add the following to your
application:didFinishLaunchingWithOptions:method, replacing API_KEY with your API key:
[GMSServices provideAPIKey:@"API_KEY"];
Step 6: Add a map
The code below demonstrates how to add a simple map to an existing
ViewController. If you're creating a new app to iOS, first follow the
installation instructions above, and create a new Single View Application.
Add or update a few methods inside your app's default
ViewController to create
and initialize an instance of
GMSMapView, as shown in the sample below.
Objective-C
#import "YourViewController.h" @import GoogleMaps; @implementation YourViewController { GMSMapView *mapView_; } - (void)viewDidLoad { //
Swift
import UIKit class YourViewController: UIViewController { override func viewDidLoad() { super.viewDidLoad() var camera = GMSCameraPosition.cameraWithLatitude(-33.86, longitude: 151.20, zoom: 6) var mapView = GMSMapView.mapWithFrame(CGRectZero, camera: camera) mapView.myLocationEnabled = true self.view = mapView var marker = GMSMarker() marker.position = CLLocationCoordinate2DMake(-33.86, 151.20) marker.title = "Sydney" marker.snippet = "Australia" marker.map = mapView } }
Run your application. You should see a map with a single marker centered over Sydney, Australia. If you see the marker, but the map is not visible, confirm that you have provided your API key.
Experiment with the Google Maps SDK demo project
Try the SDK demos using
pod try GoogleMaps. For more details,
see the guide to code samples.
Upgrade from an earlier version
Follow these instructions to upgrade an existing project to the most recent version of the Google Maps SDK for iOS.
If you previously installed the Google Maps SDK for iOS from a zip file containing a static framework:
- Remove all references to the previous framework from your Xcode project.
- Follow the instructions above to install the Google. | https://developers.google.com/maps/documentation/ios/start?hl=es | CC-MAIN-2015-32 | refinedweb | 680 | 56.35 |
Is this already known at ATMEL's datasheet editorial office?
In several (all?) datasheets the USART receive example needs in the assemly part a 'ret':
Assembly Code Example(1) USART_Receive: ; Wait for data to be received sbis UCSRnA, RXCn rjmp USART_Receive ; Get status and 9th bit, then data from buffer in r18, UCSRnA in r17, UCSRnB in r16, UDRn ; If error, return -1 andi r18,(1<<FEn)|(1<<DORn)|(1<<UPEn) breq USART_ReceiveNoError ldi r17, HIGH(-1) ldi r16, LOW(-1) *** here I would expect a return *** USART_ReceiveNoError: ; Filter the 9th bit, then return lsr r17 andi r17, 0x01 ret C Code Example(1) unsigned int USART_Receive( void ) { unsigned char status, resh, resl; /* Wait for data to be received */ while ( !(UCSRnA & (1<<RXCn)) ) ; /* Get status and 9th bit, then data */ /* from buffer */ status = UCSRnA; resh = UCSRnB; resl = UDRn; /* If error, return -1 */ if ( status & (1<<FEn)|(1<<DORn)|(1<<UPEn) ) return -1; /* Filter the 9th bit, then return */ resh = (resh >> 1) & 0x01; return ((resh << 8) | resl); }
Datasheet ATmega...P, page 189:...
In the assembly example in error case a 511 is returned.
The example code is a normal subroutine, not an interrupt routine, so the use of "ret" is correct. (note the use of "wait for data to be received"... you NEVER do that in an ISR)
But yes there should be a return after the loading of -1.
Writing code is like having sex.... make one little mistake, and you're supporting it for life.
Top
- Log in or register to post comments
Yes, glitch you are correct: my subject line was buggy, too. ;-)
Probably it comes, because similar code is more often used as an ISR rather than in polling mode.
But on the other hand, I must say, that the Atmel's datasheets are really good and clearly arranged.
Michael
In the beginning was the Word, and the Word was with God, and the Word was God.
Top
- Log in or register to post comments | https://www.avrfreaks.net/comment/329672 | CC-MAIN-2020-34 | refinedweb | 328 | 68.81 |
Bill Davidsen <davidsen@tmr.com> wrote: > > This strange and basically useless /proc/sys/dev/cdrom/info has not been > > a default interface. > > > > > It is present on a lot more machines than SCSI CD/DVD drives these days. > While it's true that the user can suppress it, it's present in the major > distributions, and I can't remember seeing any Linux system with /proc > disabled in years. 1) Things like this do not beling into /proc 2) This kind of "interface" was subject to many incompatible changes during the past years. If it stays stable for at least 2-3 years, we may talk again about a pissibility of using it... > > Looks like you loose connectivity to reality. > > > The reality is that (a) almost no one uses SCSI optical devices, even if > SCSI disk is in use, (b) there is a lot more use and support for Linux > than any (or possibly all) of the legacy systems you mention as > standard, and (c) the systems you like so well now support use of device > names. That's the reality. Come back to reallity! No optical drive will allow you to write whithout SCSI. Linux is the only platform that tries to remove a formerly present unique SCSI namespace. Other platforms try the converse.... > > Cdrtools are based on generic SCSI (libscg) and recent Linux versions lack > > ONE orthogonal interface to access all SCSI devices. It does not help to have > > an infinite number of overlapping interfaces that are even implemented in a > > malicius way to prevent tools from detecting and identifying drives that are > > visible via more than one interface. > > > The library was a wonderful thing... once. But it persists in a decade > old view of peripherals. A valid path to a device can be determined from The Linux people are trying to use a 30+ year old view of peripherals, this is worse! You seem to forget, that a memory stick is SCSI, a CD/DVD writer is SCSI and many other devices are SCSI..... a properly designed OS would implement a proper integration of SCSI devices instead of breaking an existing integration. Jörg -- EMail:joerg@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin js@cs.tu-berlin.de (uni) schilling@fokus.fraunhofer.de (work) Blog: URL: | http://lists.debian.org/cdwrite/2006/11/msg00068.html | CC-MAIN-2013-20 | refinedweb | 382 | 64.3 |
Hi All,
How do I bring an animated Flash File .swf into PE7? Is there a free converter?
Thanks, Stan
Go to the SEARCH FORUMS box to the upper-right, and enter
import swf
There are many previous discussions
I use Moyea's FLV Importer for Premiere which works with PRE and does a great job with PrPro.
I liked it so much, that I bought a second license for my other NLE computer.
Good luck,
Hunt
Bill, I downloaded the trial version, but can't figure out how to insatll it in
PE7. Any suggestions?
For me, it was just an installation, and it found both PrPro and PrE, or maybe I should say that it found them. There was no real interaction, other than the install, just after that action, I could Import FLV into either NLE program - like magic.
Now, I have an earlier version, and it was the full-paid version, but I would expect that not much has changed. Install, and reboot. Then, the Import is transparent and seamless, just as though Premiere could support FLV natively.
Good luck,
Hunt
workin now, thanks!
Great news! So far, I am very happy with that plug-in in both PrE and PrPro 2.0.
Good luck,
Hunt | https://forums.adobe.com/message/2709869?tstart=0 | CC-MAIN-2018-30 | refinedweb | 210 | 82.04 |
Fix typos in net/sched/Kconfig. And yes, its "queuing" in the dictionary.com. :)
-
Matt LaPlante
CCNP, CCDP, A+, Linux+, CQS
laplam@rpi.edu
--
--- a/net/sched/Kconfig 2006-06-20 05:31:55.000000000 -0400
+++ b/net/sched/Kconfig 2006-06-29 02:26:33.000000000 -0400
@@ -2,14 +2,14 @@
# Traffic control configuration.
#
-menu "QoS and/or fair queueing"
+menu "QoS and/or fair queuing"
config NET_SCHED
- bool "QoS and/or fair queueing"
+ bool "QoS and/or fair queuing"
When the kernel has several packets to send out over a network
device, it has to decide which ones to send first, which ones to
- delay, and which ones to drop. This is the job of the queueing
+ delay, and which ones to drop. This is the job of the queuing
disciplines, several different algorithms for how to do this
"fairly" have been proposed.
@@ -98,12 +98,12 @@
endchoice
-comment "Queueing/Scheduling"
+comment "Queuing/Scheduling"
config NET_SCH_CBQ
- tristate "Class Based Queueing (CBQ)"
+ tristate "Class Based Queuing (CBQ)"
- Say Y here if you want to use the Class-Based Queueing (CBQ) packet
+ Say Y here if you want to use the Class-Based Queuing (CBQ) packet
scheduling algorithm. This algorithm classifies the waiting packets
into a tree-like hierarchy of classes; the leaves of this tree are
in turn scheduled by separate algorithms.
@@ -111,7 +111,7 @@
See the top of <file:net/sched/sch_cbq.c> for more details.
CBQ is a commonly used scheduler, so if you're unsure, you should
- say Y here. Then say Y to all the queueing algorithms below that you
+ say Y here. Then say Y to all the queuing algorithms below that you
want to use as leaf disciplines.
To compile this code as a module, choose M here: the
@@ -155,7 +155,7 @@
module will be called sch_atm.
config NET_SCH_PRIO
- tristate "Multi Band Priority Queueing (PRIO)"
+ tristate "Multi Band Priority Queuing (PRIO)"
Say Y here if you want to use an n-band priority queue packet
scheduler.
@@ -175,9 +175,9 @@
module will be called sch_red.
config NET_SCH_SFQ
- tristate "Stochastic Fairness Queueing (SFQ)"
+ tristate "Stochastic Fairness Queuing (SFQ)"
- Say Y here if you want to use the Stochastic Fairness Queueing (SFQ)
+ Say Y here if you want to use the Stochastic Fairness Queuing (SFQ)
packet scheduling algorithm .
See the top of <file:net/sched/sch_sfq.c> for more details.
@@ -189,7 +189,7 @@
tristate "True Link Equalizer (TEQL)"
Say Y here if you want to use the True Link Equalizer (TLE) packet
- scheduling algorithm. This queueing discipline allows the combination
+ scheduling algorithm. This queuing discipline allows the combination
of several physical devices into one virtual device.
See the top of <file:net/sched/sch_teql.c> for more details.
@@ -305,7 +305,7 @@
tristate "Universal 32bit comparisons w/ hashing (U32)"
select NET_CLS
- Say Y here to be able to classify packetes using a universal
+ Say Y here to be able to classify packets using a universal
32bit pieces based comparison scheme.
To compile this code as a module, choose M here: the
@@ -485,7 +485,7 @@
tristate "IPtables targets"
depends on NET_CLS_ACT && NETFILTER && IP_NF_IPTABLES
- Say Y here to be able to invoke iptables targets after succesful
+ Say Y here to be able to invoke iptables targets after successful
classification.
To compile this code as a module, choose M here: the
@@ -537,8 +537,8 @@
Say Y here to allow using rate estimators to estimate the current
rate-of-flow for network devices, queues, etc. This module is
- automaticaly selected if needed but can be selected manually for
- statstical purposes.
+ automatically selected if needed but can be selected manually for
+ statistical purposes.
endif # NET_SCHED
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at
Please read the FAQ at | http://lkml.org/lkml/2006/6/29/26 | crawl-002 | refinedweb | 641 | 54.32 |
Vim undo breaks with auto-close plugins
Prelude
If you've used IDEs or other heavy editors ever in your life, you'd know how nice it is to have parentheses and brackets to get auto-closed. If you don't know what I'm talking about, its a feature usually present in IDEs like eclipse and easily recreated in vim with mappings like
inoremap ( ()<Left>
Of course, that's just a simple taste. There are vastly complicated plugins that achieve this.
Now, what's really super annoying about these plugins is that they tend to break vim's amazingly powerful undo functionality. In other words, if you are using an auto-close plugin, chances are, you can't rely on vim's undo anymore.
Debugging this and finding the cause has been on my todo list for quite some time and a few days ago, I finally sat down to explore. I am writing my experience here. First, a simple test case to see if the auto-close plugin you use breaks undo, open vim (a blank file) and hit the following keys:
iabc{<CR><ESC>u
Where instead of
<CR> you'd hit the return key and instead of
<ESC> you'd hit the Escape key. Decent knowledge of vim should tell you that after the above keys, you should end up with a blank file again. Right?
If instead, you see a closing brace dangling in the second line, your undo is broken. MUHAHAHAHAHA! You can't rely on undo anymore until you get rid of that one plugin!
What's going on?
So, experimenting with many auto-close plugins and reading the source of at least 3 of those, I say there are basically two different implementations of this functionality, which all these plugins use. The first one is pretty much what was shown at the start of this article,
inoremap ( ()<Left> " or inoremap ( <C-r>="()\<Left>"
I'm going to call this class of plugins, the critters. These do not break your undo. The next class of implementations, that do break your undo, the beasts, do a bit of dark sorcery with stuff like
inoremap ( <C-r>=MyAwesomePairInseter()<CR>
There is no dark sorcery here that is immediately apparent. The real sorcery is inside that function, where a call to
setline() function is made to replace your current line to contain the parentheses text at the cursor. Doesn't make sense? Don't worry, you'll get it soon enough.
Which plugins? Name them!
Here are a few ones that break undo:
Beasts
-
-
-
and these don't break undo
Critters
-
-
-
An initial look at them and you can tell, the ones that break undo are actually more popular and have a relatively larger code base. So why doesn't anyone complain about breaking undo? I think they do and I believe the root cause is a bug with vim itself.
The main difference in usability among these classes is again to do with undo. In the beasts, typing a brace does not start a new undo action, but it does in the critters (like hitting a
<C-g>u). This might actually be playing a role in why undo breaks in beasts only, but the exact reason escapes me.
A reproducible test case
I wanted to reproduce this problem with a vanilla vim with no custom configuration (except for
nocompatible). So, I checked out the latest version (vim73-353) from the mercurial repository, compiled (with python, ruby and usual shit) and opened it, with no plugins and a simple vimrc as the following:
set nocompatible inoremap <buffer> <silent> ( <C-R>=<SID>InsertPair("(", ")")<CR> inoremap <buffer> <silent> [ <C-R>=<SID>InsertPair("[", "]")<CR> inoremap <buffer> <silent> { <C-R>=<SID>InsertPair("{", "}")<CR> function! s:InsertPair(opener, closer) let l:save_ve = &ve set ve=all call s:InsertStringAtCursor(a:closer) exec "set ve=" . l:save_ve return a:opener endfunction function! s:InsertStringAtCursor(str) let l:line = getline('.') let l:column = col('.')-2 if l:column < 0 call setline('.', a:str . l:line) else call setline('.', l:line[:l:column] . a:str . l:line[l:column+1:]) endif endfunction
Which is a stripped down version of the auto-close functionality implemented in townk's auto-close plugin. And opened vim
vim -u undo-breaker-vimrc
and did the test here. Boom, a dangling brace character.
For all I know, its the call to
setline() that's making all the difference. But I could be entirely wrong with that. I say this because that is the major difference between the two classes of implementations.
I use persistent-undo in vim73 and heavily depend on it. Combined with the gundo plugin by Steve Losh, I get a kind of nicely visualized version history that is centric to every file, which is quite handy in its own right.
So, if there are others who have faced this, have a fix for it, perhaps a patch to vim, or if there is already a bug in vim's bug database on this, let me know.
Thanks for reading. | http://sharats.me/vim-undo-breaks-with-auto-close-plugins.html | CC-MAIN-2014-42 | refinedweb | 844 | 61.87 |
When writing React, should you use the
React.createClass syntax or the ES6
class syntax? Or maybe neither? This post will explain some of the differences and help you decide.
React can be written perfectly well in either ES5 or ES6.
Using JSX means that you’ll need a “build” step already, where Babel transpiles the JSX into
React.createElement calls. Many people take advantage of this and just tack on
es2015 to Babel’s list of transpilers, and then the entire world of ES6 is made available.
If you’re using something like Quik or React Heatpack, this is already set up for you (read quick start React if you don’t have an environment set up yet).
Compare: createClass vs class
Here’s the same component written using
React.createClass and ES6
class:
var InputControlES5 = React.createClass({ propTypes: { initialValue: React.PropTypes.string }, defaultProps: { initialValue: '' }, // Set up initial state getInitialState: function() { return { text: this.props.initialValue || 'placeholder' }; }, handleChange: function(event) { this.setState({ text: event.target.value }); }, render: function() { return ( <div> Type something: <input onChange={this.handleChange} value={this.state.text} /> </div> ); } });
class InputControlES6 extends React.Component { constructor(props) { super(props); // Set up initial state this.state = { text: props.initialValue || 'placeholder' }; // Functions must be bound manually with ES6 classes this.handleChange = this.handleChange.bind(this); } handleChange(event) { this.setState({ text: event.target.value }); } render() { return ( <div> Type something: <input onChange={this.handleChange} value={this.state.text} /> </div> ); } } InputControlES6.propTypes = { initialValue: React.PropTypes.string }; InputControlES6.defaultProps = { initialValue: '' };
Here are the key differences:
Binding of Functions
This is probably the biggest tripping point.
With
createClass, it’s easy: every property that’s a function is automatically bound by React. Refer to them as
this.whateverFn wherever you need to, and
this will be set correctly whenever they’re called.
With ES6
class, it’s trickier: functions are not autobound. You must manually bind them. The best place to do this is in the constructor, as in the example above.
If you want to save yourself from having to type out all those bindings manually, check out react-autobind or autobind-decorator.
Another way is to bind them inline, where you use them:
// Use `.bind`: render() { return ( <input onChange={this.handleChange.bind(this)} value={this.state.text} /> ); } // --- OR --- // Use an arrow function: render() { return ( <input onChange={() => this.handleChange()} value={this.state.text} /> ); }
Either of these will work, but they’re not as efficient. Every time
render is called (which could be pretty often!) a new function will be created. It’s a little slower than binding the functions once in the constructor.
One final option is to replace the function itself with an arrow function, like this:
// the normal way // requires binding elsewhere handleChange(event) { this.setState({ text: event.target.value }); } // the ES7 way // all done, no binding required handleChange = (event) => { this.setState({ text: event.target.value }); }
With this method, you don’t need to do any binding. It’s all taken care of, through the magic of arrow functions. The
this inside the function will refer to the component instance, as expected.
The only caveat is that this is an “experimental” feature, meaning it’s not in the official ES6 spec. But it is supported by Babel when you enable the “stage-0” preset. If you like the syntax (which reads as “set handleChange to an arrow function that takes an event”), try it out.
Constructor Should Call super
The ES6 class’s constructor needs to accept
props as an argument and then call
super(props). It’s a little bit of boilerplate that
createClass doesn’t have.
class vs createClass
This one is obvious. One calls
React.createClass with an object, and the other uses
class extending
React.Component.
Pro tip: Import
Component directly to save some typing if you have multiple components in one file:
import React, {Component} from 'react'.
Initial State Configuration
createClass accepts an
initialState function that gets called once when the component is mounted.
ES6
class uses the constructor instead. After calling
super, set the state directly.
Location of propTypes and defaultProps
With
createClass, define
propTypes and
defaultProps as properties on the object you pass in.
With ES6
class, these become properties of the class itself, so they need to be tacked on to the class after it is defined.
There’s a shortcut if your build has ES7 property initializers turned on:
class Person extends React.Component { static propTypes = { name: React.PropTypes.string, age: React.PropTypes.string }; static defaultProps = { name: '', age: -1 }; ... }
The Ninja Third Option
In addition to
createClass and
class, React also supports what it calls “stateless functional components.” Basically, it’s just a function, and it can’t have
state, and it can’t use any of the lifecycle methods like
componentWillMount or
shouldComponentUpdate.
Stateless functional components are great for simple components when all they do is take some props and render something based on those props. Here’s an example:
function Person({firstName, lastName}) { return ( <span>{lastName}, {firstName}</span> ); }
That uses ES6 destructuring to pull apart the props that are passed in, but it could also be written like this:
function Person(props) { var firstName = props.firstName; var lastName = props.lastName; return ( <span>{lastName}, {firstName}</span> ); }
Which is the Right One to Use?
Facebook has stated that React.createClass will eventually be replaced by ES6 classes, but they said “until we have a replacement for current mixin use cases and support for class property initializers in the language, we don’t plan to deprecate React.createClass.”
Wherever you can, use stateless functional components. They’re simple, and will help force you to keep your UI components simple.
For more complex components that need state, lifecycle methods, or access to the underlying DOM nodes (through refs), use class.
It’s good to know all 3 styles, though. When it comes time to look up a problem on StackOverflow or elsewhere, you’ll probably see answers in a mix of ES5 and ES6. The ES6 style has been gaining in popularity but it’s not the only one you’ll see in the wild.
Wrap Up
I hope this overview helped clear up some confusion about the different ways to write components in React.
If you’re feeling overwhelemed by all there is to learn, and looking for a path to follow, sign up below to get a downloadable Timeline for Learning | https://daveceddia.com/react-es5-createclass-vs-es6-classes/ | CC-MAIN-2019-35 | refinedweb | 1,058 | 59.4 |
In the first post of this series, we discussed why object-oriented programming (OOP) was helpful for game development, and learned how to identify objects, their states, and their behaviors. In this article, we'll look at the specific OOP principle of cohesion and how it applies to games.
Note: Although this tutorial is written using Java, you should be able to use the same techniques and concepts in almost any game development environment.
What Is Cohesion?
Cohesion is the principle of being or doing one thing well. In other words, cohesion means grouping together code that contributes to a single task.
A great non-programming example of this principle was covered in one of the first Gamedevtuts+ articles which talked about the Covert Action Rule:
Don't try to do too many games in one package ... Individually, those each could have been good games. Together, they fought with each other.
The same rule applies to object-oriented programming. Each object should only have one responsibility. Every behavior of that object should only do one task. Any more than that and you'll have a much harder time making changes to the code.
Why Is It Helpful?
Code that is organized by functionality and does only one task is said to have high cohesion. Highly cohesive code is reusable, simple, and easy to understand. It also creates objects that are small and focused.
Code that is organized arbitrarily and has multiple tasks is said to have low cohesion. Such code is difficult to understand, maintain, and reuse, and is often complex. It also creates objects that are large and unfocused.
Having high cohesion is generally good, while having low cohesion is generally bad. When writing code, always strive to write highly cohesive code.
How to Apply It
So how do we apply this to object-oriented programming? Well for starters, organizing code into objects helps increase cohesion of the game in general. However, each individual object should also have high cohesion. Let's refer back to our three examples to see how this works.
Asteroids
Recall from the last article that we defined the ship object as having behaviors of turning, moving, and firing.
If we were to write a single piece of code that did all three behaviors at once, it would get pretty messy. Instead, we should separate each behavior into what are known as functions. Functions allow us to separate functionality and group similar code together, thus helping to create highly cohesive code.
In programming, an object is defined by creating a class. In Java, a class is coded as follows:
/** * The Ship Class */ public class Ship { /** * Function – performs the behavior (task) of turning the Ship */ public void rotate() { // Code that turns the ship } /** * Function – performs the behavior (task) of moving the Ship */ public void move() { // Code that moves the ship } /** * Function – performs the behavior (task) of firing the Ship's gun */ public void fire() { // Code that makes the ship fire a bullet } }
As you can see, each behavior gets its own function, and the code is pretty well organized just in this skeleton structure.
Tetris
For Tetris, recall that the behaviors of a tetromino were falling, moving (sideways), and rotating. The basic class structure is as follows:
/** * The Tetromino Class */ public class Tetromino { /** * Function – update a Tetromino's position */ public void fall() { // Code that updates the Tetromino's position } /** * Function - move a Tetromino */ public void move() { // Code that moves the Tetromino sideways } /** * Function – rotate a Tetromino */ public void rotate() { // Code that rotates the Tetromino by 90 degrees } }
Again, the behaviors are separated into their own functions. For the
fall method, though, notice that the task is to update the tetromino's position. This is because the tetromino is always falling, so we can't just make the task "cause the tetromino to fall".
Instead, a falling tetromino just moves down the screen a certain number of rows at a time - so we have to update the position of the tetromino to reflect this falling speed.
Pac-Man
For the ghost object with behaviors of moving and changing state, we have to do a bit more work to get it to be highly cohesive.
/** * The Ghost Class */ public class Ghost { /** * Function – moves the Ghost */ public void move() { // Code that moves the ghost in the current direction } /** * Function - change Ghost direction */ public void changeDirection() { // Code that changes the Ghost's direction } /** * Function – change Ghost speed */ public void changeSpeed() { // Code that changes the Ghost's speed } /** * Function – change Ghost color */ public void changeColor() { // Code that changes the Ghost's color } /** * Function – change Ghost state */ public void changeState() { // Code that changes the Ghost's state // This function also will call the three functions of changeDirection, changeSpeed, and changeColor } }
The Ghost state has three extra functions added to it:
changeDirection,
changeColor, and
changeSpeed. These weren't in our original behavior list because they aren't behaviors. Instead, these functions are what are known as helper functions and are there to help us maintain high cohesion.
The behavior of changing state (what happens when Pac-Man eats a power pellet) requires three different tasks to be performed: turn deep blue, reverse direction, and move more slowly. To maintain cohesion, we don't want one function to do all three of these tasks, so we divide them up into three subtasks that the function will call upon to complete its one main task.
The use of the word and when describing what a behavior/function does usually means we should create more than one function.
Conclusion
Cohesion is the principle of grouping like code together and ensure each function performs only a single task. Cohesion helps to create code that is maintainable and reusable.
In the next Quick Tip, we'll discuss the principle of coupling and how it relates to cohesion. Follow us on Twitter, Facebook, or Google+ to keep up to date with the latest posts. | http://gamedevelopment.tutsplus.com/tutorials/quick-tip-the-oop-principle-of-cohesion--gamedev-1811 | CC-MAIN-2013-48 | refinedweb | 985 | 59.64 |
def Test():
print 't = ' + `t`
def test2():
print 'calling Test()'
Test()
I read in the file and want to execute test2().
So I do the following:
f = open('file','r')
a = f.read()
f.close()
a_c = compile(a,'<string>','exec')
local_ns = {}
global_ns = {}
exec(a_c,global_ns,local_ns)
Now if I dump local_ns, it shows 't', 'Test' and 'test2' as being
members of local_ns. global_ns is still empty.
Now how can I call test2? I have tried local_ns['test2']()
but it says it cannot find 'Test'
I CANNOT use 'import' as this code is actually part of a file
that cannot be run..
Any ideas?????
Thanks!
Lance Ellinghouse
lance@markv.com | http://www.python.org/search/hypermail/python-1993/0236.html | CC-MAIN-2013-48 | refinedweb | 109 | 74.49 |
I am trying to use the headcount hook () with my main develop repository.
I pulled the code and placed it in C:\Python26\Lib\site-packages. I made the following entries into my hgrc file:
C:\Python26\Lib\site-packages
[hooks]
pretxnchangegroup.headcount = python:headcount.headcount.hook
[headcount]
push_ok = *
commit_ok = *
warnmsg = %(headcount)d new heads detected. You may not push new heads to this repository.
debug = False
All this is as per the install instructions.
I then cloned the repository, created a branch, committed a change to that branch, and then
issued:
hg push -f
as a test. However, this fails with:
C:\junk\htmlwriter>hg push -f
pushing to c:\code\htmlwriter
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files
transaction abort!
rollback completed
abort: pretxnchangegroup.headcount hook is invalid (import of "headcount.headcou
nt" failed)
I then ran this:
C:\Python26>python c:\Python26\Lib\site-packages\headcount\headcount.py
Traceback (most recent call last):
File "c:\Python26\Lib\site-packages\headcount\headcount.py", line 2, in <modul
e>
import mercurial.node
ImportError: No module named mercurial.node
I'm far from a python expert, so can someone help me figure out how to get the headcount hook to run inside my mercurial environment?
Details: Windows 7, Mercurial 1.7.2, TortoiseHg 1.1.7
Test add and commit, test push from remote location with user in acl push
5 months ago | http://serverfault.com/questions/221203/mercurial-hook-fails-on-windows | crawl-003 | refinedweb | 245 | 60.11 |
.
Using Roles or Capabilities to Limit Visibility
WordPress uses a concept of Roles, designed to give the site owner the ability to control what users can and cannot do within the site. Each role is allowed to perform a set of tasks called Capabilities. We can customize the role and its capabilities with add_roles and add_cap functions.
Our plugin will create a new capability call
servermetric. Only the user who has that capability can load our dashboard widgets. We will add this capability for administrator role so all administrator users will see it by default.
For the other users, you can use plugin User Role Editor to manage the capabilities of any particular user, and assign the
servermetic capability for the user.
We use add_cap to add a new capability, however, this function writes to the database, so we should only do it when activating plugin. Once deactivating, we should clean up the database by removing the role with remove_cap. ); } //... }
We create a new constant call
CAP_METRIC and set its value to
server_metric so we can easily change the capability name later easily. We modify our
run method to add two hooks.
The register_activation_hook runs when activating plugin. It accepts two parameters:
- (string) file name: path to the main plugin file
- (callback) (required) The function to be run when the plugin is activated
register_deactivation_hook runs when deactivating plugin. It accepts same parameter as register_activation_hook.
administratorand call
add_capor
remove_capon the roles object.
Next, we will modify our
add_dashboard_widgetsmethod to only register the widgets if the current user has the
servermetriccap.
/** * Register dashboard widget proider to show up on dashboard */ function add_dashboard_widgets() { if (!current_user_can(self::CAP_METRIC)) { return false; } $widget = Widget::instance(); foreach ($widget->get_provider() as $name=>$provider) { $widget->register($name); } }Next, we use current_user_can to check whether or not the current user has the request capability.
Now, only the administrator will see the widget when loading the dashboard. If you want to enable the server status widget for other users, you can install plugin User Role Editor to manage roles and capabilities for any users.
Then on capabilities screen, we can assign the
server_metric cap.
By utilizing roles and capabilities, we enhanced our plugin security to make our widget available to only user we trust.
Caching the Server Metric
WordPress use the Transients API as a cache API. The data is serialized and stored into the
wp_option table of WordPress with an expiration time of the cache..
We use mostly
get_transient and
set_transient when working with the API. According to the WordPress documentation:
get_transient( $transient ): retrieve the transient name as string and return its data. If the data has expired, it returns false. We should use
===operator to check because we can store an empty value for the transient.
set_transient( $transient, $value, $expiration ): retrieves three parameters: the transient name, its value, and its expiration time in second. Note that the transient name should not be longer than 45 characters.
Our two options are to considering caching the metric data or caching the generated HTML data. Caching the HTML data can make our site very fast, but it puts a load on the database. To that end, we could do benchmark to decide which is best.
For our tutorial, let's just cache the metric data. Besides, we should have a way of invalidating cache - like an anchor - that will allow us to reload the dashboard data and force loading the data instead of from cache.
Caching Data for the Widget
We can directly use the function
get_transient or
set_transient to work with Transient API. However, if we decide to change the way we used Transient API, we have to go over
every place we use it and modify it for every widget.
Let's add one more layer to abstract the cache mechanism. We will design a simple cache class for our widget that has three methods:
set: set cache data for a widget
get: get cache data for a widget
load: try to loading from cache, if not existed, calculate data, set the cache and return
Let's compose the file
widget/cache.php in the following way. Note that, as our auto loading convention, the class name will be
Cache and its namespace is
AX\StatBoard\Widget
<; } }First, notice that we've marked our caching methods as static. Our
setand
getmethods are just wrappers for
get_transientand
set_transient. The
loadmethod sits on top of
setand
get. All of these methods expect to retrieve the widget provider object; therefore, inside of
loadmethod we can invoke
get_metricmethod to get the real data.
Time for using our
Cacheclass. We will try to implement
Cachefor
widget/software.php. Change our original
get_contentmethod to:
<?php namespace AX\StatBoard\Widget; class Software implements Provider { //... public function get_content() { $cmds = Cache::load($this, 3600 * 24); $content = ''; foreach ($cmds as $cmd=>$info) { $content .= "<p><strong>$cmd</strong> $info</p>"; } echo $content; } //... }You can see that we get rid of
$cmds = $this->get_metric()and simply replace it with
Cache::loadwhich will load data from cache, or will load it from system if no cache existed.
Now that you've got the idea, you can repeat with every other widget that you would like to cache. Just replace
get_metricinside
get_contentwith:
Cache::load($this, $time_in_second);to have it take care of its caching..
We have one final example with
widget/ethernet.php. We can add cache ability as following:
<?php namespace AX\StatBoard\Widget; class Ethernet implements Provider { //...Again, we only need to replace
public function get_content() {
$interfaces = Cache::load($this, 3600 * 24 * 7);
$html = '<table class="wp-list-table widefat"><thead><tr>
<th>Interface</th>
<th>IP</th>
</tr></thead><tbody>';
foreach ($interfaces as $interface=>$ip) {
$html .= "<tr>
<td>{$interface}</td>
<td>{$ip}</td>
</tr>";
}
$html .= '</tbody></table>';
echo $html;
}
//.... }
get_metricwith
Cache::load. The ethernet information and its IP address probably never changes so I set a very long cache life time to one week: 3600 seconds * 24 hours * 7 days.
Force Loading Real Data
Once we've added a cache ability, we should support a mechanism so that the administrator can pull the widget without it being cached. The easiest way to do this is to use a special query parameter to indicate that we want real data.
How about small parameter like
nocache for this? So instead of the default WordPress dashboard URL with
domain.com/wp-admin/ we can use
domain.com/wp-admin/?nocache.
Sound easy? Let's do it.
Edit our method
get in widget/cache.php
static function get(Provider $provider) { if (isset($_GET['nocache'])) { return false; } $cache_id = get_class($provider); if (false !== $data = get_transient($cache_id)) { return $data; } return false; }As long as the
nocachequery parameter existed, we return false instantly and therefore force the real data is fetched instead of cached data.
Now, let's think about adding this feature without the Cache class. We may have to go to every line of
get_transientand check for the query parameter there. Therefore, consider breaking things down into many layer when designing your plugin. Don't put everything in same file or copy paste code over and over.
Now, let's try to visit
domain.com/wp-admin and domain.com/wp-admin?nocacheand notice the different speed.
Here is the result with
?nocache=1 appended to the URL.
Using cronjob to Generate Cache
wp_schedule_event. Ideally, we can use
wp_schedule_event to schedule a hook which will be executed at a specific interval.
Looking at this example, our plugin can schedule a hook to invoke every three minutes, the hook, in turn will invoke another function to fetch the metric data. The data is always available in cache and fresh enough.
Open our main plugin file,
serverdashboard.php, and update the run method to include new hook as well as new hook handler.
<(); } } }First, the wp_schedule_event method only support three recurrence type: daily, hourly, and twicedaily. We have to add a new kind of recurrence with wp_get_schedules filter.
$schedules['3min'] = array( 'interval' => 3 * 60, 'display' => __( 'Once every 3 minutes' ) ); return $schedules;We can customize the interval value to how many second we want the job to be repeated. Next we setup a
metric_generate_every_3minhook.
add_action( 'metric_generate_every_3min', array($this, 'generate_metric') );This is our custom hook , it doesn't exist in WordPress. We register an handle with method
generate_metricfor that hook. Whenever
metric_generate_every_3minhook is invoked,
generate_metricwill be executed.
In next statement, we hook into action
initwith
setup_schedulemethod to check for existence of the next scheduled event of the hook
metric_generate_every_3min. If it is not yet defined, we will schedule an event with
wp_schedule_event, using our custom recurrence for every three minutes for that hook.
Inside the
generate_metricmethod, we loop through all available widget provide, and call their
get_contentmethod. By doing that, we trigger
Cache::loadprocess for that metric.
WordPress will automatically run those scheduled events whenever someone visits your WordPress site. It will try to find the scheduled event that need to be run and invoke it.
However, you can also run them manually. WordPress runs cronjob via visiting the file
wp-content.phpwith the URL
yourdomain.com/wp-cron.php?doing_wp_cron.
You may want to update your cronjob to add a new job that ping the above URL every minutes
Let's open your crontab on server with
crontab -eand append this line at the end of it:
0 * * * *wget domain.com/wp-cron.php?doing_wp_cron > /dev/null 2>&1We used wget to make a HTTP request to the wp-cron.php file. Since we don't care about output and any error thing, we redirect all the output to
/dev/null.
You can read more about setting up these cronjob in the following articles:
-
-...
Conclusion<< | https://code.tutsplus.com/tutorials/the-privacy-and-optimization-of-a-wordpress-dashboard-widget--cms-20749 | CC-MAIN-2021-04 | refinedweb | 1,606 | 56.86 |
Seiji Ralph Villafranca
Creating a CRUD React App with Next.JS and Redux
Updated: Aug 8
React Applications are very fast and reliable, as a famous SPA framework, It is widely known as lightweight, easy to plugin and very easy to learn framework but there are times that our projects get bigger and bigger and this might lead to a messy and hard to maintain code, as projects gets complicated, the data we need to maintain is also getting complex, another downside is that React Apps are Client Side Rendered (CSR) like Angular and this might not be SEO friendly and it would take away the chance to be on top of Search engines like Google.
With these downsides for React, we are fortunate that several technologies now exists to make our developer life easier and we have our Redux and Next.JS.
Redux is a state management pattern that is used to manage data in a React application with the use of stores, this provides a single source of truth and allows the data to behave consistently. Redux is also used in other frameworks such as Angular in the form of Ngrx and this is very useful especially if we have large scale applications. I will explain this deeper as we move on to the development.
Next.JS by Zeit is framework based by React, Webpack and Babel, it is used to create React Applications with Server Side Rendering. to know more about Server Side Rendering (SSR) you can visit my tutorial here /post/what-is-ssr-and-why-we-need-it-on-spa.
Setting up the Application
1.Installing the dependencies
now, let's move on in setting up our application, the first step is to create a folder on your terminal.
mkdir <folder-name>
then lets initialize our root folder with a new npm package with the command
npm init
then in the root of your created folder, execute the command:
npm install --save react react-dom next
this will install react dependencies and next js framework.
now we will install our dependencies in redux with the following command:
npm install redux react-redux next-redux-wrapper redux-thunk --save
We will also install axios as this will be responsible for calling http requests
npm install axios --save
And lastly, let's install react bootstrap for UI
npm install react-bootstrap bootstrap
After successfully installing all our dependencies, we will see in our folder project that there is a new package.json file where all downloaded dependencies with their versions are listed.
package.json
{ "name": "next-app", "version": "1.0.0", "description": "nextjs tutorial", "main": "index.js", "dependencies": { "axios": "^0.19.2", "bootstrap": "^4.4.1", "next": "^9.3.4", "next-redux-wrapper": "^5.0.0", "react": "^16.13.1", "react-bootstrap": "^1.0.0", "react-dom": "^16.13.1", "react-redux": "^7.2.0", "redux": "^4.0.5", "redux-thunk": "^2.3.0" }, "devDependencies": {}, "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "author": "seiji villafranca", "license": "ISC" }
we will add scripts here that we will use in running and building our react applications
"scripts": { "dev": "next", "build": "next build", "start": "next start" },
2. Creating folder structure
the basic folder structure of next js is really simple, there are two vital folders in next which are pages and components but in our case we are using state management so we will also add a store folder in our root folder, we will also add assets folder that will hold our global styles, images, and other static files.
Our folder structure should now look like this.
we have now our main folder structure, our next step is to create the files that we need to for out application to have a configuration and entry point.
3. Setting up _app.js and index.js and store
under pages folder, lets create two files named _app.js and index.js, we will also create styles.css under assets folder for our global css and lastly, we will create a store.js file under the store folder.
Our structure would now look like the following
now we we will create our store, in the store.js file lets copy the following code:
import {createStore, applyMiddleware, combineReducers} from 'redux'; import thunkMiddleWare from 'redux-thunk'; const bindMiddleWare = (middleware) => { return applyMiddleware(...middleware); } export const initStore = () => { return createStore( // this is where all reducers go combineReducers({ }), bindMiddleWare([thunkMiddleWare]) ) }
in this snippet we have created two functions, the first function is the bindMiddleware() where it returns the applyMiddleware where it will apply thunk that will allow redux to handle actions which our not plain object.
the second function is the initStore() which will create our store, it returns the createStore() function that accepts two parameters, the first one is the cpmbineReducers() which will accept the list of reducers that we will create in the process of development, the second paramater is the bindMiddleware() function we have created, this is where we will apply thunk as our middleware.
Our next step is to copy this code in our _app.js
import '../styles/styles.css'; import 'bootstrap/dist/css/bootstrap.min.css'; import React from 'react'; import { Provider } from 'react-redux'; import withRedux from 'next-redux-wrapper' import { initStore } from '../store/store'; const MyApp = (props) => { const {Component, pageProps, store } = props; return ( <Provider store={store}> <Component {...pageProps}/> </Provider> ) } export default withRedux(initStore)(MyApp);
the _app.js file is like our configuration file, this is where we import our global css files and other dependencies like bootstrap, we can see that the <Component.. has been wrapper with a Provider tag, this means that we are making the store available to all the components in our application and with the code export default withRedux (initStore) (MyApp); we are initializing what store that will be used in our application.
Now let's copy this code in our index.js file
import React from 'react'; class Index extends React.Component { render() { return ( <div> <span>Hello World</span> </div> ) } } export default Index;
this will be the entry point of our app, we will just display a hello world as our app is loaded. we created a class Index that returns a simple DOM.
now let's try running our app, in our terminal pointing at the root of our project execute the following command:
npm run dev
this will run the devscript which we have added earlier (next start) and this will run our application under port 3000.
let's visit and we will see the following page:
and that's it! we have successfully created our running Next.JS application, our next goal is to create a CRUD application with our initialized store, in this example, we will create a simple Blog application that will allow users to create update delete and view details of each Blog.
Developing a Simple CRUD with Redux
1. Creating Reusable Components
in this part we need to consider, what are the reusable components in our application and what are the pages or the smart components, in this case we need only two pages which is the list of all blogs (Index Page) and the Blog Form (use for creating, viewing and updating blog details), on the other hand, we can consider a Blog Item, Text Input and Header and Layout of the application as reusable components.
Components
Blog Item
TextInput
Header
Layout
Pages
Blog Form
Blog List (Index)
with these classification let us create BlogItem.js, Header.js and TextInput.js files under components folder and blogform.js in the pages folder, we don't need to create bloglist as index page will cater this feature.
We will now create our Header Component (Header.js)
import Link from "next/link"; const linkStyle = { marginRight : 15 } const Header = () => ( <div> <Link href="/"> <a style={linkStyle}>Home</a> </Link> <Link href="/about"> <a style={linkStyle}>About</a> </Link> </div> ) export default Header;
we can see that components are simple functions and does not extend React components as this will be only be used to display data in our application. in our Header component, we haves used a Link from next js that allow us to navigate from one page to another, the url name will also be the filename of the page.
Now we will create our Layout component (Layout.js)
import Header from "./Header"; const layoutStyle = { margin: 20, padding: 20, border: '1px solid #DDD' } const Layout = props => ( <div style={layoutStyle}> <Header/> {props.children} </div> ) export default Layout;
the layout component will be our standard layout on every page, this means that this will be used in every page will navigate, we have used Header component because .header should be present anywhere in our app. thee props.children is any component that will be a child of the layout component.
next is to create our TextInput Component (TextInput.js)
const TextInput = ({disabled, name, value, label, onChange }) => { return ( <div> <label>{label}</label> <input name={name} className="form-control" value={value} onChange= {onChange} disabled={disabled} > </input> </div> ) } export default TextInput;
this will be later used in our blog form page, we have initialized several props such as disabled, name, value and label so we can pass values in our input component and an onChange event to allow other components to be aware of the TextInput event.
the last component we need to create is the BlogItem Component (BlogItem.js)
import Link from "next/link"; import { Button, Container, Row, Col } from 'react-bootstrap'; const BlogItem = ( {blog, onClick} ) => { return ( <div> <span> <Container> <Row> <Col sm={4}><h3>{blog.title}</h3></Col> <Col sm={4}><span>{blog.description}</span></Col> <Col sm={4}> <Button variant="primary" onClick={() => onClick({blog, action:"view"})} > View Blog</Button> <Button variant="primary" onClick={() => onClick({blog,action: "edit"})} > Edit Blog</Button> <Button variant="danger" onClick={() => onClick({blog, action "delete"})}>Delete</Button> </Col> </Row> </Container> </span> </div> ) } export default BlogItem;
We used a simple bootsrap grid to display the blogs details in each column and w have provided a blog property where a single instance of blog should be passed and an onClick event that will be triggered if one action button will be clicked.
2. Creating actions and reducers for store
we have successfully created our reusable components, now our next step is to create the necessary actions and reducers to communicate with our endpoints and store, we know that we are creating a CRUD application, this means we need actions for the create, read update and delete we would also need an action that would store the list of all blogs available in our database and an action that would set the state of the form if it is being used for viewing, creating or updating a blog.
suppose we have the following endpoint:
GET - get list of all blogs POST - create a new blog PUT{id} - update details of blog DELETE{id} - delete blog
create a config.js file in your root folder and this is where we will place the url of our endpoint.
export const HTTP_ENDPOINT = "";
then we will create our actions. create a folder named blog under store and create a file named actions.js under the created blog folder.
import axios from 'axios'; import { HTTP_ENDPOINT } from '../../config'; export const blogActionTypes = { GET_ALL_BLOGS: 'GET_ALL_BLOGS', SET_FORM_STATE: 'SET_FORM_STATE', SET_SELECTED_BLOG: 'SET_SELECTED_BLOG' } export const getAllBlogs = () => { return dispatch => {return axios.get(`${HTTP_ENDPOINT}blogs`) .then(({data}) => data) .then(blogs => dispatch({type: blogActionTypes.GET_ALL_BLOGS, data: blogs})) } } export const saveBlog = (blog) => { return dispatch => { return axios.put(`${HTTP_ENDPOINT}blogs/${blog.id}`, blog) } } export const setSelectedBlog = (blog) => { return dispatch => dispatch({type: blogActionTypes.SET_SELECTED_BLOG, data: blog}) } export const createBlog = (blog) => { return dispatch => { return axios.post(`${HTTP_ENDPOINT}blogs`, blog) } } export const deleteBlog = (id) => { return dispatch => { return axios.delete(`${HTTP_ENDPOINT}blogs/${id}`) } } export const setFormState = (state) => { return dispatch => dispatch({type: blogActionTypes.SET_FORM_STATE, data: state}) }
we have created a blogActionTypes object which will be used by our reducer later on to determine what to do with our state after an action has been dispatched.
The getAllBlogs() calls the http request GET that will get the list of blogs using axios and will dispatch the retrieved list of blogs with a type of GET_ALL_BLOGS.
the saveBlog() only calls an http request PUT{id} that will update the specific blog with the provided id, it will not call any dispatch because we don't need to update any state after updating, this behavior is also the same with createBlog() and deleteBlog() function.
the setSelectedBlog() and setFormState() functions only calls a dispatch action, this means that we are only updating the state, we do not require any data from our database as we need to update what blog is selected and what is the current action being done in the form respectively.
Now lets create our reducers, create a reducer.js under the blog folder.
import { blogActionTypes } from "./actions"; const blogInitialState = { allBlogs: [], form: "", selectedBlog: null } export default (state = blogInitialState, action) => { switch(action.type) { case blogActionTypes.GET_ALL_BLOGS: return { ...state, allBlogs: action.data } case blogActionTypes.SET_SELECTED_BLOG: return { ...state, selectedBlog: action.data } case blogActionTypes.SET_FORM_STATE: return { ...state, form: action.data } default: return state; } }
in the code above, we have created our initial state which is the blogInitialState, we can see three properties (allBlogs, form and selectedBlog) which there are initial values declared, this is where all our data will be stored if an action will be dispatched and the reducers are the only capable to edit the state of these properties.
on the previous creation of actions, the getAllBlogs() function dispatches an object with a type of GET_ALL_BLOGS, this will be redirected to the reducer and will fall on the first case which has a type of GET_ALL_BLOGS, the { ...state, allBlogs: action.data} simply updates the value of the allBlogs state with the list of blogs retrieved in the database.
After creating the reducer, we should not forget to add the reducer in our store.js
import {createStore, applyMiddleware, combineReducers} from 'redux'; import thunkMiddleWare from 'redux-thunk'; import blogs from './blog/reducer' const bindMiddleWare = (middleware) => { return applyMiddleware(...middleware); } export const initStore = () => { return createStore( // this is where all reducers go combineReducers({ blogs }), bindMiddleWare([thunkMiddleWare]) ) }
And we have successfully created our reducers and actions, our next step is to connect our store with our components.
in this step, will now create our view where we will list all the available blogs, we will place our code in the index.js so as the app is loaded in the home page, we will see a list of blogs in the page.
the first thing we need to do is connect the props of the component to the value of our states in store, in order to achieve this we will add the mapStateToProps() and mapDispatchToProps() function
import { connect } from 'react-redux' import { bindActionCreators } from 'redux' class Index extends React.Component { render() { return ( <Layout> ... </Layout>) } } //mapStateToProps and mapDispatchToProps here);
The mapStateProps() simply maps the value of the allBlogs state in he blogs property of the Index components while the mapDispatchToProps() uses bindActionCreators() which binds all the available actions in the blogActions.
Our next step is to call the getAlloBlogs() action and display the list.
import { Button } from 'react-bootstrap'; import Layout from "../components/Layout" import BlogItem from '../components/BlogItem'; class Index extends React.Component {(); } } }
the componentDidMount() is a lifecycle where it states that component is already rendered in the DOM, we will call the this.props.actions.getAllBlogs() here to retrieve the list of blogs, remember that we have binded the actions in blogActions to the actions props of Index using bindActionCreators, this means that we can call the getAllBlogs() action in the this.props.actions.
on the render() function we have added a blogs.map which will iterate the list of blogs from the blogs property and each blog instance will passed in our BlogItem component that we have created, we have created an onClick event which call a selectBlog() that we will create in a while.
our last step is to create our selectBlog() function that will redirect us to the blogform page and determine which action is to be done with the blog.
import Router from 'next/router' class Index extends React.Component { constructor(props, context) { super(props, context); this.selectBlog = this.selectBlog.bind(this); } render() { const { blogs } = this.props; return ( <Layout>... <') } } }
the selectBlog() will receive an object {action: <action.type>, blog: <bloginstance>}, if the action is delete, we will call the deleteBlog() action and this will call the delete endpoint and we need to refresh the list having to call getAllBlogs() action to update our list, if the action is create view or update, we will set the state of the selectedBlog with the blog instance and the form state with the passed action and will redirect the application in the blog form page.
we should also bind in the selectBlog() function in the constructor so the component will be able to access the function.
Now our code in index.js should look like this.
import React from 'react'; import Layout from "../components/Layout"; import { connect } from 'react-redux' import { bindActionCreators } from 'redux' import * as blogActions from '../store/blog/actions'; import BlogItem from '../components/BlogItem'; import Router from 'next/router' import { Button } from 'react-bootstrap'; const button = { marginRight: "0px", } class Index extends React.Component { constructor(props, context) { super(props, context); this.selectBlog = this.selectBlog.bind(this); });
run our application and visit localhost:3000 and we should see our page with the list of blogs.
and we have successfully connected our components with Redux! Our last goal is to make our blogform working for updating viewing and creating a blog.
We will repeat the same step as we did in creating the index.js view, we will connect the blogform page into our store but in this case, we need the selectedBlog and form state.
const mapStateToProps = (state, ownProps) => { return { selectedBlog: state.blogs.selectedBlog, form: state.blogs.form } } const mapDispatchToProps = (dispatch, ownProps) => { return { action: bindActionCreators(Object.assign({}, blogActions), dispatch) } } export default connect(mapStateToProps, mapDispatchToProps)(BlogForm)
then we will create the form using he TextInput component we created.> {form !== 'view'?<Button onClick={() => this.submit()}>Save Blog</Button> : ''} </div> ) } submit() { if(this.props.form == 'create') { this.props.action.createBlog(this.state.selectedBlog).then(() => { Router.push("/"); }); } else if(this.props.form == 'edit') { this.props.action.saveBlog(this.state.selectedBlog).then(() => { Router.push("/"); }); } } updateState(event) { let field = event.target.name; let selectedBlog = Object.assign({}, this.state.selectedBlog) selectedBlog[field] = event.target.value; return this.setState({selectedBlog}); }
In the constructor, we have connected the selectedBlog property which holds the value of the selectedBlog state in the store with the own state of the component, we have used form state to determine if the fields are disabled, if it is in view state, we know that the blog should not be edited and all fields cannot be modified.
We have provided the onChange event on TextInput component which will trigger the updateState() function, this would update the value of the selectedBlog state in our component using the setState(), remember to use Object.assign as we cannot edit the component state directly as this is immutable.
And lastly the submit() function will call the saveBlog() or createBlog() action depending if the state of the form is create or update.
And now, our blogform.js should look like this:
import React from 'react'; import { connect } from 'react-redux' import TextInput from '../components/TextInput'; import { bindActionCreators } from 'redux'; import * as blogActions from '../store/blog/actions'; import { Button } from 'react-bootstrap'; import Router from 'next/router'; class BlogForm extends React.Component {>< | https://www.seijivillafranca.com/post/creating-a-crud-react-app-with-next-js-and-redux | CC-MAIN-2021-49 | refinedweb | 3,232 | 53.41 |
-2015 01:36 PM
I have built my own Microblaze configuration using EDK 14.7. The Mircoblaze runs on a Spartan 3 model xc3s1000 on a Digilent starter board rev E. The Microblaze I setup to use the onboard SRAM as extra data memory. After the Microblaze configuration I have exported the hardware to SDK. In SDK I have started a C project. In my C project my program did not behave like intended. The program is:
#include <stdio.h>
#include "platform.h"
#include "xparameters.h"
int main()
{
init_platform();
int i = 0, j = 0;
for(i=0; i <100; i++)
{
j++;
}
return 0;
}
Here the “for loop” does not count right, as you can see on the picture below from the debugger where the variable j count faster than i. So far I am aware i and j should follow in count. Does anyone have a clue about what is happing here?
Here is a link to the hardware design rapport:
05-17-2015 02:17 AM
05-17-2015 11:04 AM
The loop does something? It counts a variable?
I have also seen that when it checks the statement of a while-loop, it counts the variable up in the statement.
We have tried to use volatile, it worked for some time, but them stopped again, when the code got longer and more complicated. | https://forums.xilinx.com/t5/Embedded-Processor-System-Design/For-loops-does-not-count-as-intended/td-p/603902 | CC-MAIN-2018-17 | refinedweb | 226 | 83.36 |
Online C++ FAQ/Tutorial and Advanced Questions
- Darren Daniel
- 2 years ago
- Views:
Transcription
1 Online C++ FAQ/Tutorial and Advanced Questions Alexis Angelidis January 11, Quick notes to C programmers instead of macros use const or enum to define constants inline to prevent function call overload template to declare families of type and families of functions use new/delete instead of free/malloc (use delete[] for arrays) don t use void* and pointer arithmetic an explicit type conversion reveals an error of conception. avoid to use C style tables, use vectors instead. don t recode what is already available in the C++ standard library. variables can be declared anywhere: initialization can be done when variable is required. whenever a pointer cannot be zero, use a reference. when using derived class, destructors should be virtual. 2 What is C++ C++ is C with classes. It was designed for one person to manage large amounts of code, and so that a line of C++ express more thing than a line of C. The main functionalities C++ adds to C are: control of the accessibility to code within the source files (namespace, struct and class). mechanisms that make a line of code more expressive (constructors, destructors, operators,...). object programming (class derivation). generic programming (templates). 1
2 3 How do I use namespaces? Namespace allow to code different entities without bothering about unicity of names. A namespace is a logical entity of code. A namespace can be included into another namespace (and so on). If a namespace is anonymous, its code is only accessible from the compilation unit (.cpp file), thus equivalent to using static and extern in C. // in file1.cpp namespace name1 void function1() /*... */ namespace name2 void function2() /*... */ namespace // anonymous namespace void function() /*... */ void function3() function(); // ok // in file2.cpp void function4a() function1(); // error void function4c() name1::function2(); // error void function4c() function(); // error void function4b() name1::function1(); // ok void function4d() name1::name2::function2(); // ok using namespace name1; //makes accessible the entire namespace name1 void function4e() function1(); // ok void function4f() name2::function1(); // ok using name1::name2::function2; //makes accessible function2 only void function3g() function2(); // ok 4 How do I use references? A reference is merely an initialized pointer. This reduces significantly zero pointer and un-initialized pointers errors. Prefer references to pointers. Although the two following codes look equivalent, the second implementation prevents invalid compilation. // in C // in C++ int i; int i; int *pi = &i; int &ti = i; int *pj; int &rj; // g++ refuses to compile that 2
3 *pi = 0; ri = 0; *pj = 0; // segfault rj = 0; 5 Functions 5.1 How do I declare, assign and call a pointer to a function? int f(int, char) //... //... int (*pf)(int, char); // pointer to a function pf = f; // assignment pf = &f; // alternative int r = (*pf)(42, a ); 5.2 How do I declare a type of a pointer to a function? typedef int (*pf)(int, char); 5.3 How do I declare an array of pointers to a function? typedef int (*pf)(int, char); pf pfarray[10]; // array of 10 pointers to a function 6 class and struct 6.1 How do I use class and struct? A C++ struct (or class) is almost like a struct in C (i.e. a set of attributes), but has two additional kinds of members: methods and data types. A class method has to be used with an instance of that class. The important is that methods have always access to their instance s attributes and data types. typedef char t_byte; // type unsigned char i; // attribute void m() // method t_byte b = 1; // m can access type t_byte 3
4 i = b; // m can access attribute i void function() A a; a.m(); // an instance is required to call a method. 6.2 When do I use a class or a struct? In struct, the members are public by default. In class, the members are private by default (Question 6.3 explains private and public). Since in C++ privacy is a vertue, prefer class to struct. 6.3 Who has access to class members? There are three levels of access to members private : access is granted only to the class methods and to friend functions and classes (Question 6.17 explains friend). protected : access is granted only to the methods and to derived classes methods. public : access is granted to everyone. Restricting access to members is usefull for detecting illicit use of the members of a class when compiling, as shown in the following code. class A private: int a0; void f0() /*... */ protected: int a1; void f1() f0(); // ok public: int a2; void f2() /*... */ void function() A a; a.a0 = 0; a.f0(); a.a1 = 0; // error // error // error 4
5 a.f1(); a.a2 = 0; a.f2(); // error // ok // ok 6.4 How do I use private, protected or public? Restricting access to member is important to prevent illicit use. Use them in this order of increasing preference: public, protected, private. 6.5 How do I create an instance of a class? The methods called when a class is created are called contructors. There are four possible ways of specifying constructors; the fifth method is worth mentioning for clarifying reasons: default constructor copy constructor value constructor conversion constructor copy assignment (not a constructor) A() /*... */ // default constructor A(const A &a) /*... */ // copy constructor A(int i, int j) /*... */ // value constructor A &operator=(const A &a) /*... */ // copy assignment struct B B() /*... */ // default constructor B(const A &a) /*... */ // conversion constructor void function() A a0(0, 0); // shortcut, value constructor A a1(a0); // shortcut, copy constructor B b1(a1); // shortcut, conversion constructor B b; // shortcut, default constructor b1 = a0; a0 = a1; // conversion contructor // copy assignment 5
6 6.6 How do I initialize members of a class? There are two ways. int a; int b; A(): a(0) b = 0; // attribute a and b are initialized to How do I initialize a const member? Since the value of a const member cannot be assigned with operator =, its value must be initialized as follows: const int id; A(int i): id(i) // attribute id is initialized to the value of parameter i 6.8 How do I call a parent constructor? A() /*... */ struct B B(): A() /*... */ // call to parent s constructor. 6.9 What is a destructor? See Question How do I free the memory allocated to a class? The method called when the memory occupied by a class is freed is called the destructor. With derived classes, destructor should always be virtual. If a class is destroyed through a base class pointer whose destructor isn t virtual, the result is undefined (only part of the destructors will be called). A() virtual ~A() 6
7 struct B: public A B() ~B() void function() B *b = new B(); A *a = b; delete a; // calls ~A and ~B. If ~A wasn t virtual, only ~A would be called How do I put the code of methods outside classes? It is possible to put in a class only the prototype of the methods, and to put all the algorithms outside. This is recommended, because it allows the programmer to read a short prototype of the class, and makes re-usability easier: // in a header file (.h file) int a; A(); virtual ~A(); void f(); // in a compilation unit (.cpp file) A::A() /*... */ A::~A() /*... */ void A::f() /*... */ 7
8 6.12 How do I put the code of methods outside classes in a header file? The code of a method specified in a header file must be declared inline. It means that when compiling, the calls to the method will all be replaced by the code of that method. int a; A(); inline A::A() /*... */ 6.13 How do I declare, assign and call a pointer to a method? Note that a pointer to a method does not hold a second pointer to an instance of a class. To use a pointer to a method, you need an instance of the class onto which this method can be called (possibly a second pointer). void m(int) void function() void (A::*pm)(int) = &A::m; // pointer on method A a; // instance of a class (a.*m)(1); // calling the method with parameter value How can I handle easily a pointer to a method? If you use templates, you won t have to write the annoying type of a pointer to a method (at the cost of using a template). You can also use typedef to declare the type. void m() template <class PMETHOD> void f(a &a 8
9 (a.*pm)() PMETHOD pm) void function() A a; f(a, &A::m); 6.15 How do I declare a method that returns a pointer to a function? The following method takes parameter a char and returns a pointer to a function. To avoid this heavy syntax, you may use a typedef. void (*m(char))(int) /*... */ //... A a; void (*pf)(int); pf = a.m( a ); 6.16 How do I specify a pointer to a static method? It works exactly like pointers to functions in C. static void sm(int) void function() void (*psm)(int) = A::sm; // assignment void (*psm)(int) = &A::sm; // alternative (*psm)(1); // calling the method with parameter value 1. 9
10 6.17 How can I access the private part of a class from another class or function? A class can define friends: functions or classes. The friends will have access to the private part of that class. If a function is friend, it has to be prototyped or defined before specifying its friendship. class A void f(); // f prototyped before; class B; friend A; // A can access private part of B friend void f(); // f can access private part of B void f() /*... */ 6.18 How do I prevent the compiler from doing implicit type conversions? Use the keyword explicit on the constructor. Forcing explicit conversions is usefull to make the programmers aware of the conversion. This is especially interesting for time consuming conversions. A() struct B B() B(const A &a) struct C C() explicit C(const A &a) void fb(b b) /*... */ 10
11 void fc(c c) /*... */ void function() A a; B b; C c; fb(a); // ok, conversion is implicit fc(a); // error fc(c(a)); // ok, conversion is explicit 6.19 When should I use const methods? When a method isn t going to modify a class, it should be const. This prevents from modifying attributes unwantedly, and reduces significantly errors: int a; bool f(int i) const if (a = i) // error. f shouldn t modify a. return true; return false; 6.20 How do I modify attributes in a const method? When you have no other choice (which happens), use the keyword mutable: int a; mutable int b; void f() const a = 0; // error b = 0; // ok 6.21 What are static members? Static members are members that exist independently from an instantiation of a class. They are shared by all instances of that class, and can be used without requiring an 11
12 instance. Only methods and attributes can be static members 1. A static attribute must be defined in a.cpp file. static int id; static int genid() return id++; int A::id = 0; // defined in a.cpp file 6.22 When should I use a static method or a function? A static method has full access to the members of a class. If this isn t required, the method should be a function. class A private: int i; public: static void f(a &a) a.i = 0; 6.23 When should I use a static method or a friend function? A static method has full access to the members of a single class. A function can be the friend of more than one class, and therefore can have access to the members of one of more classes When should I use a global variable or a static attribute? If possible, avoid global variables How do I call a static member outside a class? static int id; static int genid() return id++; 1 This adds a third meaning to the keyword static: the first is a local definition of a function or variable inside a compilation unit, the second is a variable instantiated only once inside a function or method. 12
13 int function() A::id = 0; // call to a static attribute return A::gendId(); // call to a static method 6.26 How do I derive classes? The purpose of deriving classes is to factor code: if two classes derive from a parent class, the members defined in the parent will be accessible by both, and have to be coded only once. The level of access to a parent s member is specified with public, private /*... */ 6.27 How do I avoid ambiguities with multiple class derivation? Consider the two following valid examples: the left one if non-ambiguous, the right one is. void a() void a() struct B: public virtual A struct B: public A 13
14 struct C: public virtual A struct C: public A struct D: public B, public C struct D: public B, public C // D has two a void function() void function() D d; D d; d.a(); d.a(); // error, ambiguous d.b::a(); // ok d.c::a(); // ok 6.28 What is a virtual method? A virtual method in a parent allows children to have a different implementation for it. A pure virtual method in a parent forces children to have an implementation for it (interface in Java). A class with a pure virtual method is called virtual. virtual void f1() = 0; virtual void f2() /*... */ struct B: public A void f1() /*... */ struct C: public A void f1() /*... */ void f2() /*... */ 6.29 What is a pure virtual method? See Question What are templates? Template allow the programmers to implement algorithms once for various data types. Contrarily to macros, the compiler checks the syntax. Functions, methods and classes can be templated. Template parameters are of two kinds: data types or integers. 14
15 How do I specify a function template? In a header (.h file) : template <class T> T max(t a, T b) return (a < b)? b: a; How do I specify a class template? In a header (.h file). The template parameter can be used for defining any member of the class. template <class T, int N> class Vector T array[n]; void method(t t, int i) array[i] = T; How do I specify a template method? In a header (.h file) : class int Vector array[3]; template <class TVECTOR2> void eqadd(tvector2 v2); template <class TVECTOR2> void Vector::eqAdd(TVECTOR2 a2) for (int i(0); i < 3; ++i) array[i] += a2[i]; How do I put the code of template methods outside classes? template <class T, int N> class Vector T array[n]; void reset(); template <class T, int N> void Vector<T, N>::reset() 15
16 for (int i(0); i < N; ++i) array[i] = 0; How do I write a template method of a template class? The syntax is a bit heavy. There is no point of using it unless you realy need to. template <class T, int N> class Vector T array[n]; template <class F> void apply(f f); template <class T, int N> template <class F> void Vector<T, N>::apply(F function) for (int i(0); i < N; ++i) array[i] = f(array[i]); How do I specify a friend template class? Like that: class A template<class> friend class B; friend class B <class T>; How do I write different code for different template parameters? Use specializations: template <class T, int N> class Vector T a[n]; public: Vector(const T v) for (unsigned i(0); i < N; ++i) a[i] = v; template <> class Vector <double, 3> double x, y, z; public: 16
17 Vector(const double v): x(v), y(v), z(v) 6.31 How do I write to the standard output in a C++ way? Include iostream and use the operator <<. std::cout << "Hello!" << std::endl; 6.32 How do I read from the standard input in a C++ way? Include iostream and use the operator >>. getline. float f; char str[255]; std::cin >> f; std::cin.getline(str, 255); For a string of undefined length, use 6.33 How do I specify a copy constructor for a class whose code is not accessible? Use the keyword operator. // suppose class A s code is not accessible A::A(const int i) /*... */ // This does not prevent you to make a B to A convertor struct B int b; B(const int i): b(i) operator A() return A(b); 6.34 How do I redefined arithmetic operators? There are two ways of redefining an operator: with a method or with a function (usually friend function, for most operators need to access the private members of a class). Some operators can only be redefined with a method Method/function operators The following operators can be redefined with a method or a function: binary +, unary +, binary, unary,, /, %, ==,! =, &&,,!, <, >, <=, >=, + =, =, =, / =, % =, &,,,, <<, >>, & =, =, ++,. The advantage of a function 17
18 operator over a method operator is the possibility to define it independently from the class. For example, to use a method: struct Number Number operator+(const Number &n) const /*... */ Number operator-() const /*... */ bool operator==(const Number &n) const /*... */ Number &operator+=(const Number &n) /*... */ Number &operator++() /*... */ // postfixed Number operator++(int) /*... */ // prefixed double &operator*() /*... */ // prefixed double &operator->() /*... */ // prefixed struct Stream Stream &operator<<(const Number &n) /*... */ They can also be redefined with a function (possibly friend). For example: Number operator+(const Number &n0, const Number &n1) /*... */ Number operator-(const Number &n) /*... */ bool operator==(const Number &n0, const Number &n1) /*... */ Stream &operator<<(stream &is, const Number &n) /*... */ Number &operator+=(number &n0, const Number &n1) /*... */ Number &operator++(const Number &n) /*... */ // postfixed Number operator++(const Number &n, int) /*... */ // prefixed double &operator*(const Number &n) /*... */ // prefixed double &operator->(const Number &n) /*... */ // prefixed Method-only operators The following operators can be redefined with a non-static method only: (), []. For example: struct Matrix double &operator()(int i, int j) /*... */ struct Vertex double &operator()(int i) /*... */ double &operator[](int i) /*... */ 6.35 When should I use operator [] or ()? Use () instead of []: calling [] on a pointer to the array (instead of the array) will compile too, but will have a different effect. 18
19 6.36 When should I use operator i++ or ++i? Since i++ returns a copy of i, it is preferable to use ++i How do I cast types? Remember that an explicit type conversion often reveals an error of conception. To cast a variable a into another type T, use one of the following: static_cast<t>(a) // explicit and standard conversion dynamic_cast<t>(a) // validity of object checked at run-time reinterpret_cast<t>(a) // binary copy const_cast<t>(a) // changes the constness 19
C++ Overloading, Constructors, Assignment operator
C++ Overloading, Constructors, Assignment operator 1 Overloading Before looking at the initialization of objects in C++ with constructors, we need to understand what function overloading is In C, two functions
C++ Keywords. If/else Selection Structure. Looping Control Structures. Switch Statements. Example Program
C++ Keywords There are many keywords in C++ that are not used in other languages. bool, const_cast, delete, dynamic_cast, const, enum, extern, register, sizeof, typedef, explicit, friend, inline, mutable,
SYSTEMS PROGRAMMING C++ INTRODUCTION
Faculty of Computer Science / Institute of Systems Architecture / Operating Systems SYSTEMS PROGRAMMING C++ INTRODUCTION Alexander Warg WHY C++? C++ is the language that allows to express ideas from
C++ INTERVIEW QUESTIONS
C++ INTERVIEW QUESTIONS Copyright tutorialspoint.com Dear readers, these C++ Interview Questions have been designed specially to get
Introduction to C++ Programming Vahid Kazemi
Introduction to C++ Programming Vahid Kazemi Overview An overview of C/C++ - Basic types, Pointers, Arrays, Program control, Functions, Arguments, Structures, Operator overloading, Namespaces, Classes,
C++ Introduction to class and data abstraction
C++ Introduction to class and data abstraction 1 Data abstraction A data abstraction is a simplified view of an object by specifying what can be done with the object while hiding unnecessary details In
Advanced C++ Programming
Advanced C++ Programming Course ID CPP110 Course Description The comprehensive, five-day course consists of three modules. A preliminary module reviews topics, including inheritance, the ANSI C++ Standard
C++ Programming: From Problem Analysis to Program Design, Fifth Edition. Chapter 2: Basic Elements of C++
C++ Programming: From Problem Analysis to Program Design, Fifth Edition Chapter 2: Basic Elements of C++ Objectives In this chapter, you will: Become familiar with the basic components of a C++ program,
Advanced Systems Programming
Advanced Systems Programming Introduction to C++ Martin Küttler September 23, 2016 1 / 21 About this presentation 2 / 21 About this presentation This presentation is not about learning to program 2 / 21
C++ for Game Programmers
C++ for Game Programmers Course Description C++ has become one of the favourite programming language for game programmers. Reasons for wide spread acceptability of C++ are plenty, but primary reasons are,
CS201- Introduction to Programming Latest Solved Mcqs from Final term Papers July 11,2011
CS201- Introduction to Programming Latest Solved Mcqs from Final term Papers July 11,2011 Mc100401285 moaaz.pk@gmail.com Moaaz Siddiq Bc100400662 bc100400662asad@gmail.com Asad Ali Latest Mcqs FINALTERM
KITES TECHNOLOGY COURSE MODULE (C, C++, DS)
KITES TECHNOLOGY 360 Degree Solution info@kitestechnology.com technologykites@gmail.com Contact: - 8961334776 9433759247 9830639522.NET JAVA WEB DESIGN PHP SQL, PL/SQL
Summary. Pre requisition. Content Details: 1. Basics in C++
Summary C++ Language is one of the approaches to provide object-oriented functionality with C like syntax. C++ adds greater typing strength, scoping and other tools useful in object-oriented programming
C++ CLASSES C/C++ ADVANCED PROGRAMMING
C++ CLASSES C/C++ ADVANCED PROGRAMMING GOAL OF THIS LECTURE C++ classes Dr. Juan J. Durillo 2 (C++) CLASSES: BASIC CONCEPTS Fundamentals of classes data abstraction data encapsulation Data abstraction:
C++ Programming Language
C++ Programming Language Lecturer: Yuri Nefedov 7th and 8th semesters Lectures: 34 hours (7th semester); 32 hours (8th semester). Seminars: 34 hours (7th semester); 32 hours (8th semester). Course abstract
C++ tutorial. C++ tutorial
Introduction I I will assume that you know some basics of C++: # include < iostream > int main ( void ) { std :: cout
Adjusted/Modified by Nicole Tobias. Chapter 2: Basic Elements of C++
Adjusted/Modified by Nicole Tobias Chapter 2: Basic Elements of C++ Objectives In this chapter, you will: Become familiar with functions, special symbols, and identifiers in C++ Explore simple data types
C++ Crash Kurs. C++ Object-Oriented Programming
C++ Crash Kurs C++ Object-Oriented Programming Dr. Dennis Pfisterer Institut für Telematik, Universität zu Lübeck C++ classes A class is user-defined type
Proposal to add an absolute difference function to the C++ Standard Library
Proposal to add an absolute difference function to the C++ Standard Library Document number: N4318 Date: 2014-09-21 Project: Programming Language C++, Library Evolution Working Group Reply-to: Jeremy Turnbull
Advanced C++ Exception Handling Topic #5
Advanced C++ Exception Handling Topic #5 CS202 5-1 CS202 5-2 Exception Handling Throwing an Exception Detecting an Exception Catching an Exception Examine an Example using Classes and Operator Overloading
Operator Overloading; String and Array Objects
11 Operator Overloading; String and Array Objects The whole difference between construction and creation is exactly this: that a thing constructed can only be loved after it is constructed; but a thing
C++ Language Tutorial
cplusplus.com C++ Language Tutorial Written by: Juan Soulié Last revision: June, 2007 Available online at: The online version is constantly revised and may contain
7 Introduction to C++
7 Introduction to C++ 7.1 Introduction C++ is an extension to C Programming language. It was developed at AT&T Bell Laboratories in the early 1980s by Bjarne Stroustrup. It is a deviation from traditional
OBJECT ORIENTED PROGRAMMING IN C++
OBJECT ORIENTED PROGRAMMING IN C++ For Off Campus BSc Computer Science Programme UNIT 1 1. The goal of programmers is to develop software that are. A. Correct B. Reliable and maintainable C. Satisfy all
Punctuation in C. Identifiers and Expressions. Identifiers. Variables. Keywords. Identifier Examples
Identifiers and Expressions CSE 130: Introduction to C Programming Spring 2005 Punctuation in C Statements are terminated with a ; Groups of statements are enclosed by curly braces: { and } Commas separate
C Primer. Fall Introduction C vs. Java... 1
CS 33 Intro Computer Systems Doeppner C Primer Fall 2016 Contents 1 Introduction 1 1.1 C vs. Java.......................................... 1 2 Functions 1 2.1 The main() Function....................................,
An Incomplete C++ Primer. University of Wyoming MA 5310
An Incomplete C++ Primer University of Wyoming MA 5310 Professor Craig C. Douglas C++ is a legacy programming language, as is other languages
C++FA 5.1 PRACTICE MID-TERM EXAM
C++FA 5.1 PRACTICE MID-TERM EXAM This practicemid-term exam covers sections C++FA 1.1 through C++FA 1.4 of C++ with Financial Applications by Ben Van Vliet, available at. 1.) A pointer
Chapter 2: Problem Solving Using C++
Chapter 2: Problem Solving Using C++ 1 Objectives In this chapter, you will learn about: Modular programs Programming style Data types Arithmetic operations Variables and declaration statements Common
El Dorado Union High School District Educational Services
El Dorado Union High School District Course of Study Information Page Course Title: ACE Computer Programming II (#495) Rationale: A continuum of courses, including advanced classes in technology is needed.
An introduction to C++
An introduction to C++ C++ concepts C++ = C concepts + bigger library + classes + namespaces + some additional gear C concepts: syntax, data types, control structures, operators, pointer semantic etc.
Course Name: ADVANCE COURSE IN SOFTWARE DEVELOPMENT (Specialization:.Net Technologies)
Course Name: ADVANCE COURSE IN SOFTWARE DEVELOPMENT (Specialization:.Net Technologies) Duration of Course: 6 Months Fees: Rs. 25,000/- (including Service Tax) Eligibility: B.E./B.Tech., M.Sc.(IT/ computer
A First Book of C++ Chapter 2 Data Types, Declarations, and Displays
A First Book of C++ Chapter 2 Data Types, Declarations, and Displays Objectives In this chapter, you will learn about: Data Types Arithmetic Operators Variables and Declarations Common Programming Errors
Sources: On the Web: Slides will be available on:
C programming Introduction The basics of algorithms Structure of a C code, compilation step Constant, variable type, variable scope Expression and operators: assignment, arithmetic operators, comparison, C Programming Language course syllabus associate level
TECHNOLOGIES The C Programming Language course syllabus associate level Course description The course fully covers the basics of programming in the C programming language and demonstrates fundamental programming
COURSE CONTENTS. 3 -months 8:30 am - 3:30 pm Mon - Fri. [Admissions strictly through written test based on Basic C and Aptitude]
COURSE CONTENTS 3 -months 8:30 am - 3:30 pm Mon - Fri [Admissions strictly through written test based on Basic C and Aptitude] Subhash Programming Classes Revision: January, 2016 All rights reserved Call:
Constructors and Assignment. Mike Precup
Constructors and Assignment Mike Precup (mprecup@stanford.edu) Administrivia Actually important this time! Assignment 2 is due on Tuesday at midnight. I ll be holding LaiR hours from 8 PM - 10 PM on Sund
Visual C++ Object-Oriented Programming
Visual C++ Object-Oriented Programming A Mark Andrews SAMS PUBLISHING A Division of Prentice Hall Computer Publishing 201 West 103rd Street, Indianapolis, Indiana, 46290 USA Contents Introduction xxvii
CS11 Advanced C++ FALL LECTURE 1
CS11 Advanced C++ FALL 2015-2016 LECTURE 1 Welcome! 2 Advanced C++ track is a deeper dive into C++ More advanced language features More use of the C++ standard library and the STL Development tools
Arrays. Arrays, Argument Passing, Promotion, Demotion
Arrays Arrays, Argument Passing, Promotion, Demotion Review Introduction to C C History Compiling C Identifiers Variables Declaration, Definition, Initialization Variable Types Logical Operators Control
CS.
Coding conventions and C++-style
Chapter 1 Coding conventions and C++-style This document provides an overview of the general coding conventions that are used throughout oomph-lib. Knowledge of these conventions will greatly facilitate
Tutorial 9 Income Tax Calculator Application: Introducing the switch Multiple-Selection Statement
Tutorial 9 Income Tax Calculator Application: Introducing the switch Multiple-Selection Statement Outline 9.1 Test-Driving the Income Tax Calculator Application 9.2 Introducing the switch Multiple1020E: DATA STRUCTURES AND ALGORITHMS I
CS1020E: DATA STRUCTURES AND ALGORITHMS I Tutorial 1 Basic C++, OOP Problem Solving (Week 3, starting 22 August 2016) 1. Evaluation Order (Note: You can use any other C++ code editor/compiler). Examine
Chapter 2: Basic Elements of C++
Chapter 2: Basic Elements of C++ Objectives In this chapter, you will: Become familiar with functions, special symbols, and identifiers in C++ Explore simple data types Discover how a program evaluates
GETTING STARTED WITH C++ C++ BASICS - 1 -
- 1 - GETTING STARTED WITH C++ Programming is a core activity in the process of performing tasks or solving problems with the aid of a computer. An idealised picture is: PROBLEM COMPUTER SOLUTION Unfortunately
Variable Base Interface
Chapter 6 Variable Base Interface 6.1 Introduction Finite element codes has been changed a lot during the evolution of the Finite Element Method, In its early times, finite element applications were developed
Moving from Java to C++
Moving from Java to C++ This appendix explains how to transfer your Java programming skills to a substantial subset of C++. This is necessary for students who take their first programming course in Java
In this lecture you will learn:
Data Types and Variables Imed Hammouda Department of Software Systems Tampere University of Technology Objectives In this lecture you will learn: What is a data type and how types are represented in C++.
Programming for MSc Part I
Herbert Martin Dietze University of Buckingham herbert@the-little-red-haired-girl.org July 24, 2001 Abstract The course introduces the C programming language and fundamental software development techniques.
Introduction to STL (Standard Template Library)
Introduction to STL (Standard Template Library) Rajanikanth Jammalamadaka A template is defined as something that establishes or serves as a pattern Websters In C++, a template
Java Interview Questions and Answers
1. What is the most important feature of Java? Java is a platform independent language. 2. What do you mean by platform independence? Platform independence means that we can write and compile the java
C++ for Scientific Computing
C++ for Scientific Computing Ronald Kriemann MPI MIS Leipzig 2012-10-01 R. Kriemann,»C++ for Scientific Computing«1/316 1. Introduction 2. Variables and Datatypes 3. Arithmetic Operators 4. Type Casting
Java Review (Essentials of Java for Hadoop)
Java Review (Essentials of Java for Hadoop) Have You Joined Our LinkedIn Group? What is Java? Java JRE - Java is not just a programming language but it is a complete platform for object oriented programming.
Introduction to C++ Programming
Introduction to C++ Programming 1 Outline Introduction to C++ Programming A Simple Program: Printing a Line of Text Another Simple Program: Adding Two Integers Memory Concepts Arithmetic Decision Making:.
Exam objectives. Java Certification - Week 2. Operators and Assignments. Chris Harris. Overview. Exam objectives (continued)
Exam objectives Java Certification - Week 2 Operators and Assignments Chris Harris Determine the result of applying any operator,including assignment operators,instance of,and casts to operands of any
Abstract Data types Ivor Page 1
CS 3345 module 2, Abstract Data Types 1 Abstract Data types Ivor Page 1 2.1 Introduction An abstract data type (ADT) is a class specification that enables objects to be created at run time that (1) hold
Theory Assignments-3. Theory Assignments-4
Prof. Ramkrishna More Arts, Commerce & Science College S.Y.B.Sc.(Computer Science) Subject Object oriented concepts & programming in C++. Theory Assignments-1 Q.1) Answer the following.(1m) 1. List any
Variables and Constants
HOUR 3 Variables and Constants What You ll Learn in This Hour:. How to declare and define variables and constants. How to assign values to variables and manipulate those values. How to write the value
Object Oriented Programming With C++(10CS36) Question Bank. UNIT 1: Introduction to C++
Question Bank UNIT 1: Introduction to C++ 1. What is Procedure-oriented Programming System? Dec 2005 2. What is Object-oriented Programming System? June 2006 3. Explain the console I/O functions supported
Dept. of CSE, IIT KGP
Programming in C: Basics CS10001: Programming & Data Structures Pallab Dasgupta Professor, Dept. of Computer Sc. & Engg., Indian Institute of Technology Kharagpur Types of variable We must declare the
Polymorphism. Problems with switch statement. Solution - use virtual functions (polymorphism) Polymorphism
Polymorphism Problems with switch statement Programmer may forget to test all possible cases in a switch. Tracking this down can be time consuming and error prone Solution - use virtual functions (polymorphism)
(e) none of the above.
1 The default value of a static integer variable of a class in Java is, (a) 0 (b) 1 (c) Garbage value (d) Null (e) -1 2 What will be printed as the output of the following program? public class testincr
Binary storage of graphs and related data
EÖTVÖS LORÁND UNIVERSITY Faculty of Informatics Department of Algorithms and their Applications Binary storage of graphs and related data BSc thesis Author: Frantisek Csajka full-time student Informatics | http://docplayer.net/25316650-Online-c-faq-tutorial-and-advanced-questions.html | CC-MAIN-2018-47 | refinedweb | 5,484 | 55.95 |
class Node: # This is the Class Node with constructor that contains data variable to type data and left,right pointers. def __init__(self, data): self.data = data self.left = None self.right = None def depth_of_tree(tree): #This is the recursive function to find the depth of binary tree. if tree is None: return 0 else: depth_l_tree = depth_of_tree(tree.left) depth_r_tree = depth_of_tree(tree.right) if depth_l_tree > depth_r_tree: return 1 + depth_l_tree else: return 1 + depth_r_tree def is_full_binary_tree(tree): # This functions returns that is it full binary tree or not? if tree is None: return True if (tree.left is None) and (tree.right is None): return True if (tree.left is not None) and (tree.right is not None): return (is_full_binary_tree(tree.left) and is_full_binary_tree(tree.right)) else: return False def main(): # Main func for testing. tree = Node(1) tree.left = Node(2) tree.right = Node(3) tree.left.left = Node(4) tree.left.right = Node(5) tree.left.right.left = Node(6) tree.right.left = Node(7) tree.right.left.left = Node(8) tree.right.left.left.right = Node(9) print(is_full_binary_tree(tree)) print(depth_of_tree(tree)) if __name__ == '__main__': main() | http://python.algorithmexamples.com/web/binary_tree/basic_binary_tree.html | CC-MAIN-2020-24 | refinedweb | 189 | 74.05 |
Introduction:
In this article I will explain how to show time in facebook/ twitter formats like minute ago, hour ago, week ago in asp.net using JQuery.
In this article I will explain how to show time in facebook/ twitter formats like minute ago, hour ago, week ago in asp.net using JQuery.
Description:To implement this concept I am using previous post Repeater example in asp.net. After bind data to repeater control our date and time would be like this
In previous posts I explained many articles relating to JQuery. Now I will explain another article relating to JQuery. If we share anything on facebook or twitter we will get time in different format like 1 minute ago, 1 hour ago, 1 week ago etc.
In previous posts I explained many articles relating to JQuery. Now I will explain another article relating to JQuery. If we share anything on facebook or twitter we will get time in different format like 1 minute ago, 1 hour ago, 1 week ago etc.
We can change date and time format like facebook / twitter style (minute ago, hour ago, month ago etc) by using JQuery timeago plugin. This plugin automatically update our datetime without having any postback of page you can get this JQuery timeago plugin by downloading attached folder.
Before use this plugin I used other methods to change date and time format like facebook/twitter but I got one problem that is now I opened website during that time that is showing 1 minute ago. I stayed on that website upto 5 minutes but still that is showing 1 minute only if I refresh website then datetime value is updating but this JQuery timeago plugin automatically update our datetime without having any postsbacks.
To implement this one check previous post Repeater example in asp.net I am using same concept just adding new logic to change date and time format
After design data table and insert data in database by using above repeater post write the following code in your aspx page to bind data o repeater control
If you observe above code in header section I added some of script files by using those files we can change date and time. To get those files download attached sample code and use it in your application.
Another thing here we need to know is label link
JQuery timeago plugin automatically change time format for label elements with class="timeago" and title.
After completion of aspx page design add following namespaces in code behind
C# Code
After add namespace write the following code
VB.NET Code
Now run your application check the output that would be like this
After open your website stay on that website for 1 or 2 minutes and check your time in website that will be updated automatically.
Download sample code attached
25 comments :
excellent
fabulous
Suresh Sir
AAp ek code post kar sakte hai kya?
Google Map (API) Jis city ko dikna chaye use page load par dal kar dekha sakte hai in Asp.net code
Plz post this I need This Code ........................
thanks.....
kya aap facebook type comment box ka codding bata sakte he....agar he to kya name he vo bata ye ..
plz i need your help......
excellent code..
Sir,
Jquery of to show time in facebook/ twitter formats like minute ago, hour ago, week ago in asp.net using JQuery.
not working properly.....
@Pankaj Kumar..
it's working perfectly. Please check your code
JQuery display time in facebook/ twitter formats like minute ago, hour ago, week ago in asp.net is not working properly.when i run this code its show Created Date: 4 months ago for all.
@Santosh Pal...
Check your created dates range based on that only it will show..
Suresh Sir i have created 1 table and i have some data like
Id UserName Subject Comment PostedDate
11 sarfaraz Check ssjjfj 2012-06-01 16:26:00.000
12 santoshpal Pal good 2012-06-01 16:52:00.000
13 minal Check hi 2012-06-01 17:13:00.000
when i run my code its show Created Date: 4 months ago for all.
what i have to do extra things to get result
Plz help me sir.
@Santhosh Pal...
Please change your date format to mm/date/year like 06/01/2012 then try it will work for you.
Thanks to his help Suresh sir
Its working perfectly but when we add new comment in a post all dates automatically hidden becoz at that moment ur jquery is not working properly becoz of data binding so i think
var ts = new TimeSpan(DateTime.UtcNow.Ticks - dt.Ticks);
double delta = Math.Abs(ts.TotalSeconds);
if (delta < 60)
{
return ts.Seconds == 1 ? "one second ago" : ts.Seconds + " seconds ago";
}
if (delta < 120)
{
return "a minute ago";
}
if (delta < 2700) // 45 * 60
{
return ts.Minutes + " minutes ago";
}
if (delta < 5400) // 90 * 60
{
return "an hour ago";
}
if (delta < 86400) // 24 * 60 * 60
{
return ts.Hours + " hours ago";
}
if (delta < 172800) // 48 * 60 * 60
{
return "yesterday";
}
if (delta < 2592000) // 30 * 24 * 60 * 60
{
return ts.Days + " days ago";
}
if (delta < 31104000) // 12 * 30 * 24 * 60 * 60
{
int months = Convert.ToInt32(Math.Floor((double)ts.Days / 30));
return months <= 1 ? "one month ago" : months + " months ago";
}
int years = Convert.ToInt32(Math.Floor((double)ts.Days / 365));
return years <= 1 ? "one year ago" : years + " years ago";
this is much better thn ur code
@Moneysukh...
Whenver you enter comment and hidding date means that's the problem with your code. Please check it that's not the problem with this plugin.
Thanks in a million!!!!!! it really works i will observe this if the date of the client pc is not update to date
Hello Sir,
I have used this code and set date time format (MM/dd/YYYY) but it shows 14 hours ago ,17 hours ago like this for all records. please help thanks
how display twitter user comments in asp.net????
hi suresh garu
it is not working server side because label have no runat="server" wat we do.......
vcn bvn
Nice one..
Hello Suresh Dasari
I have added timeago plugin in my web application.It is similar post and comment system like facebook.After each post and comment i do a partial post back with ajax and bind the data back to repeater.But after the partial post back the time ago do not bind .If I remove update panel and use full post back it works fine.Pls help,seeking your help.
Thanks
Nitin Goswami
Hi Suresh,
Thanks for your code,i have one doubt that can we implement this same concept in the datalist. If yes,How to You..
Hi,
I got an answer for this.,thnks Suresh..how to use this same concept in data list..thnks for your code in advance.
hello Suresh
Its not working if we do partial post.I mean if we put some code with in updatepanel and doing operation then its not working.all the previous data is hidden.
I have created 3 linkbutton and each link button binded with listview,but on every click of linkbutton the time ago is not working. | http://www.aspdotnet-suresh.com/2012/02/jquery-display-time-in-facebook-twitter.html | CC-MAIN-2017-17 | refinedweb | 1,200 | 73.17 |
Chrome 74 has arrived, and while there’s not much exciting from a user-facing perspective, there are some goodies for the developer minded. The new version comes complete with new private class fields for Javascript, a media query that allows users to reduce animation, dark mode for Windows, and more.
Public class fields, meet private class fields
You might remember that Chrome 72 added support for Javascript’s new public class field syntax back in January. This is a nifty new way to simplify your syntax by allowing you to define class fields directly in the class definition, no constructor necessary.
Now in Chrome 74, private class fields join their public cousins. The private class fields function more or less the same but make use of a # to denote that they’re private vs. public, and of course, they’re only accessible inside the class.
For a refresher, a public class field looks like this:
class IncreasingCounter { // Public class field _publicValue = 0; get value() { return this._publicValue; } increment() { this._publicValue++; } }
And a private class field has that added #:
class IncreasingCounter { // Private class field #privateValue = 0; get value() { return this.#privateValue; } increment() { this.#privateValue++; } }
Not so fast
As it turns out, some people aren’t huge fans of the flashy animations found on some modern websites. In fact, parallax scrolling, zooming, and jumpy motion effects can make some motion sick. Imagine getting car sick while browsing the website. Not fun. Operating systems have started adding options to reduce this motion and now with Chrome 74, you can use a media query, prefers-reduced-motion, to design with this group of people in mind.
How does this work? Say you have an animated button.
You can use @media (prefers-reduced-motion: reduce) like so:
button { animation: vibrate 0.3s linear infinite both; } @media (prefers-reduced-motion: reduce) { button { animation: none; } }
And now, when someone has a reduced motion preference turned on in MacOS or another operating system, they won’t see the animation.
Listen for CSS transition events
Good news, everyone! You can now listen for CSS transition events like
transitionrun,
transitionstart,
transitionend, and
transitioncancel. Other browsers have supported this for quite a while, but better late than never. Listening to these events can come in handy if you want to track or change behavior when a transition runs.
Just a little code…
element.addEventListener(‘transitionstart’, () => { console.log(‘Started transitioning’); });
and voilà! You’re logging transitions on your website.
What can you do with this? Well, maybe you have an eye-catching animation on your website to, well, catch the user’s attention. After it runs and they’re captivated, you want to deliver an important message. How can you do that? Transition events (transitionend)!
Take control with feature policy APIs
Chrome’s new feature policies make it easy to enable, disable or modify the behavior of APIs and other website features. With them, you can do things like allow iframes to use the fullscreen API or change the default behavior of autoplay on mobile and third-party videos. You can take advantage of this new functionality with the Feature-Policy header or with an iframe’s allow attribute:
HTTP Header: Feature-Policy: geolocation ‘self’ <iframe … allow=”geolocation self”></iframe>
For a deeper dive on feature policies, take a look at Google’s article on the subject.
Embrace the dark mode
Or don’t. The point is, now you can choose. In Chrome 73, dark mode was added for Mac users, but not for Windows. Chrome 74 starts the roll out of it for Windows as well. Like with the Mac version, dark mode in Windows looks a bit like incognito mode with a different theme applied to new tabs, the bookmarks bar, and more.
According to Google, this will be happening gradually so if you can’t do it quite yet, don’t worry. Dark mode is coming.
What else?
These are just some of the highlights for Chrome 74. If you’re looking for the nitty-gritty, check out chromestatus.com, Google’s official site for all Chrome updates. They get into the weeds on these features and even give you a sneak peek into future releases.. | https://blog.logrocket.com/whats-new-in-chrome-74-6f8b82919c68/ | CC-MAIN-2019-39 | refinedweb | 698 | 65.12 |
On Fri, Jul 13, 2007 at 02:21:19PM +0100, Christoph Hellwig wrote:> On Fri, Jul 13, 2007 at 06:17:55PM +0530, Amit K. Arora wrote:> > /*> > + * sys_fallocate - preallocate blocks or free preallocated blocks> > + * @fd: the file descriptor> > + * @mode: mode specifies the behavior of allocation.> > + * @offset: The offset within file, from where allocation is being> > + * requested. It should not have a negative value.> > + * @len: The amount of space in bytes to be allocated, from the offset.> > + * This can not be zero or a negative value.> > kerneldoc comments are for in-kernel APIs which syscalls aren't. I'd say> just temove this comment, the manpage is a much better documentation anyway.Ok. I will remove this entire comment.> > + * <TBD> Generic fallocate to be added for file systems that do not> > + * support fallocate.> > Please remove the comment, adding a generic fallback in kernelspace is a> very dumb idea as we already discussed long time ago.>> > --- linux-2.6.22.orig/include/linux/fs.h> > +++ linux-2.6.22/include/linux/fs.h> > @@ -266,6 +266,21 @@ extern int dir_notify_enable;> > #define SYNC_FILE_RANGE_WRITE 2> > #define SYNC_FILE_RANGE_WAIT_AFTER 4> > > > +/*> > + * sys_fallocate modes> > + * Currently sys_fallocate supports two modes:> > + * FALLOC_ALLOCATE : This is the preallocate mode, using which an application> > + * may request reservation of space for a particular file.> > + * The file size will be changed if the allocation is> > + * beyond EOF.> > + * FALLOC_RESV_SPACE : This is same as the above mode, with only one difference> > + * that the file size will not be modified.> > + */> > +#define FALLOC_FL_KEEP_SIZE 0x01 /* default is extend/shrink size */> > +> > +#define FALLOC_ALLOCATE 0> > +#define FALLOC_RESV_SPACE FALLOC_FL_KEEP_SIZE> > Just remove FALLOC_ALLOCATE, 0 flags should be the default. I'm also> not sure there is any point in having two namespace now that we have a flags-> based ABI.Ok. Since we have only one flag (FALLOC_FL_KEEP_SIZE) and we do not wantto declare the default mode (FALLOC_ALLOCATE), we can _just_ have thisflag and remove the other mode too (FALLOC_RESV_SPACE).Is this what you are suggesting ?> Also please don't add this to fs.h. fs.h is a complete mess and the> falloc flags are a new user ABI. Add a linux/falloc.h instead which can> be added to headers-y so the ABI constant can be exported to userspace.Should we need a header file just to declare one flag - i.e.FALLOC_FL_KEEP_SIZE (since now there is no point of declaring the twomodes) ? If "linux/fs.h" is not a good place, will "asm-generic/fcntl.h"be a sane place for this flag ?Thanks!--Regards,Amit Arora-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2007/7/13/224 | CC-MAIN-2014-42 | refinedweb | 445 | 57.16 |
so now that I am using the computer at home with eclipse, I see the error "Can't read input file" should i try making a folder for the image or something else, cuz just putting in the image did not...
so now that I am using the computer at home with eclipse, I see the error "Can't read input file" should i try making a folder for the image or something else, cuz just putting in the image did not...
sorry was not near computer for while, and no it did not :/ still have the white frame screen
image = ImageIO.read(new File("ges.jpg"));
JLabel picLabel = new JLabel(new ImageIcon(image));
this adds the image to the label, oh, should i do a
add(picLabel); after that?
ah okay, will do that, thanks for that tip, and i am adding the Label when in the Frame class I say
BackgroundPanel panel = new BackgroundPanel();
frame.add(panel);
should I make a panel inside...
no, just the program wanted a try and catch
Hello all, I am making a 2D Scroller game, and I know how to accomplish it, however, having trouble setting an image to the background, no amount of videos or tutorials is helping. The image does not...
I have figured it out, the snake game is up and running, everything works, all the collides and all, now to customize it how i want, thanks so much for your help both times Norm and anyone else who...
yea, i tested it out, the problem is in my follow method, what i do is i set the x and y of the bodyparts to the x and y of the snake, so now i need to make it so that it appears behind the snake,...
I think it is this code
if(go)
{
int i=0;
for(SnakeBody s: snake)
{
if(snake.indexOf(s)>=1)
{
temp1=s;
they are drawn separately, with different x's and y's, but then the next time they move they all become the original snake's x and y and then are drawn upon each other, so it is all the bodyparts in...
okay, yea, my assumption was correct, they do draw, but then their x and y are changed to that of the original snake, thus merging back into one snake
no, they still disappear, are the coordinates getting changed to the first snake's, or is the SnakeBody not being actually drawn, it says it is, trying to figure this out now
well i put the debugger here
for(SnakeBody s: snake)
{
s.draw(g);
System.out.println(s);
}
and each time i got a food it created a new SnakeBody, and then also was drawing them...
well i debugged the paintComponent, and whenever i collide with a food, a new SnakeBody is created and read every single tick, however, it is not being drawn
as an update, i worked on my class and now the snake body parts get created and drawn, but only for a second, so you see the body part and then it disappears, any help?
Snake class:
import...
So if I were to make many snake objects, would i just create an:
ArrayList <SnakeBody> snake = new ArrayList <SnakeBody>();
and then call them one by one in a for loop based on that?
...
The food class did not copy all over, here is the code again
import java.awt.*;
public class Food
{
double x = 0, y = 0;
private double myX; // x and y coordinates of center
...
Hello all, this is godlynom again, thanks to those who helped me last time. Everything is working, now I just need the body parts to be added after I collide with the food. Essentially everytime you...
yea sorry, was at soccer practice
ugh dang small problems....thank you so much for your time, have now got it working and everything, thanks so much
well a fix to my response, now the coordinates of x and y change when keys are pressed, (questions 1 and 3) and the draw method is called also, but the shapes themselves are not drawing, it is not...
The timer printed out the values of the x and y coordinates, following the timer specified at the top, but it does not change the x or y coordinates
I did :
System.out.println(x + " " + y);
...
that was a dumb question, the KeyListener should be in the Snake class, still trying to figure out what is not working, it should be working, mep
I am having trouble with the debugging, it is not helping me. If i want to control the SnakeBody object, should i put the KeyListener in the SnakeBody class, or the Snake class?
yea, one of the end brackets did not copy over for some reason in the SnakeBody class, and it is not really a problem, I just cannot control the SnakeBody object that is created, like I need to. The... | http://www.javaprogrammingforums.com/search.php?s=b791e9da05794db5b6a10a53e63b3e23&searchid=1075910 | CC-MAIN-2014-41 | refinedweb | 827 | 68.13 |
These postings are provided "AS IS" with no warranties, and confer no rights. The use of any script / code samples is subject to the terms specified here.
One of the demos I've been doing at the MSDN Roadshow is to take the new ASP.NET routing capabilities (currently in preview) and show them in isolation from MVC / Dynamic Data. Although the new routing capabilities were developed for the MVC framework, they've been factored out and are now shared with Dynamic Data and, of course, you can use them in your own ASP.NET applications as well.
What do these new routing capabilities give you? Flexibility for one. Each request is intercepted and matched against a pre-populated RouteTable for a pattern match. Assuming a match is found, the designated RouteHandler is invoked to handle the request. The job of the RouteHandler is to take whatever action is required and ultimately return an IHttpHandler which will be used to render the response to the client. So we have something like:
Let's try and build the simplest of ASP.NET applications taking advantage of the new routing capabilities. To do this you will need Visual Studio 2008 / ASP.NET 3.5 and I am using the 0408 release of Dynamic Data to provide the routing capabilities. There are various flavours around (from MVC and Dynamic Data), you might just have to change things slightly to get them working.
<%@ Application Language="C#" %>
<%@ Import Namespace="System.Web.Routing" %>
<script RunAt="server">
void Application_Start(object sender, EventArgs e)
{
RegisterRoutes(RouteTable.Routes);
}
private void RegisterRoutes(RouteCollection Routes)
{
Route r = new Route("{Parameter}", new MEOSimpleRouteHandler());
Routes.Add(r);
}
</script>
using System.Web.Routing;
using System.Web;
using System.Web.UI;
using System.Web.Compilation;
public class MEOSimpleRouteHandler : IRouteHandler
{
public IHttpHandler GetHttpHandler(RequestContext requestContext)
{
string pageName
= requestContext.RouteData.GetRequiredString("Parameter");
pageName =
pageName.ToLower() == "home" ? "default" : pageName;
string virtualPath
= string.Format("~/RoutingDemoPages/{0}.aspx", pageName);
return (Page)BuildManager.CreateInstanceFromVirtualPath(virtualPath, typeof(Page));
}
}
Our implementation of the GetHttpHandler() method firstly extracts a parameter (I've called it Parameter) from the URL. We do this by calling GetRequiredString() on the RouteData object in the requestContext. By virtue of the fact that we enclosed "Parameter" in curly braces in the Route pattern, it is treated as a parameter in the URL rather than a static string to be matched.
Having done that I do one simple manipulation (you get as clever / fancy as you like here) which is to say that if the user has requested "home" as the URL then it gets converted to "default". ie requesting "home" or "default" will give the same result. I then create a virtualPath to the aspx page I'm going to serve up as the IHttpHandler instance.
The virtual path is set to ~/RoutingDemoPages/[Parameter].aspx. So whatever is requested as a URL, our RouteHandler tries to serve up as an aspx page from the RoutingDemoPages folder. Finally, the BuildManager class allows us to create an instance of the Page class at that virtual path.
So we need to add a RoutingDemoPages folder to our website and populate it with a few aspx pages. Let's say we add Page1.aspx, Page2.aspx, Page3.aspx and default.aspx. Add some unique content to each page so you know when it's being served.
There's one last thing we need to do to get this site working - we need to add the UrlRoutingModule into the httpModules section in web.config (in the system.web section).
We should be good to go. Set the start page for the website to / (in website properties Start Options tab) and hit F5 to start debugging.
In your browser, from the site root, add "default" to the URL, for me that looks like (I'm using the Visual Studio built-in webserver) and you'll find that the default.aspx page in the RoutingDemoPages folder is served up. Change default to home, Page1, Page2 etc and confirm these pages are also served. Maybe place a breakpoint in the RouteHandler code just to confirm that the code is executed on each request.
Hello,
What's the main difference between Url Routing vs. Url Rewriter?
One thing I've yet to see answered about the new routing capabilities is if it is supported on IIS6 / Win2K3? If so, does it require configuring wildcard mapping to be able to handle extension-less URLs?
Thanks.
Anthony - extensibility. You have scope for doing a lot more with the request using System.Web.Routing.
Donnie - it is supported and yes, you need to configure wildcard mappings so all requests are passed to ASP.NET.
Mike
Hi Mike,
Just posted an entry that takes this a little further to include passing parameters to the page:
Cheers
Ian
While I've been sunning myself on a Greek Island (Kefalonia - very nice, especially the baclava), Ian
Bit slow on this having just got back from holiday but Scott has blogged about the latest "official"
I got a great question on Tuesday night at the .NET Developer Network where I was talking about some
Presentation: What's new for Presentation Presenter: Mike Ormond ( ) ASP.NET ASP.NET
Presentation: What's new for Presentation Presenter: Mike Taulty ASP.NET ASP.NET website
Back here I gave an example of setting up and ASP.NET webforms site to use ASP.NET routing. A few people
asp.net 3.5 has already supported routing. if you need great flexibility on how your url should pass
Mike Ormond's Blog : Using ASP.NET Routing Independent of MVC
Following on from my post on VS2010 Enhancements for ASP.NET I thought I’d spend a little time describing
Thank you for submitting this cool story - Trackback from DotNetShoutout
Trademarks |
Privacy Statement | http://blogs.msdn.com/mikeormond/archive/2008/05/14/using-asp-net-routing-independent-of-mvc.aspx | crawl-002 | refinedweb | 960 | 57.27 |
I'm using Resharper 6 and I've got this enum and class:
public enum MyEnum { Value1, Value2 } public class Test { public Test() { this.MyEnum = MyEnum.Value1; } public MyEnum MyEnum { get; set; } }
If I then copy the Test class to another namespace and click on the popup over the MyEnum type on the property to add the using statement, I find that it has changed to
this.MyEnum = this.MyEnum.Value1;
Which doesn't even compile. Is resharper doing this? How can I get this to stop?
Hello,
This seems to be a bug. You can track the progress here:.
Meanwhile, you can use the 'Import types for pasted code' quick-fix after
pasting the Test class which works fine. Thank you!
Andrey Serebryansky
Senior Support Engineer
JetBrains, Inc
"Develop with pleasure!"
Thanks for taking a look at it. | https://resharper-support.jetbrains.com/hc/en-us/community/posts/206034399-Resharper-adding-this-before-enum-values?sort_by=created_at | CC-MAIN-2019-22 | refinedweb | 139 | 75.2 |
CRUD Unit Testing in Laravel 5
Recently, I started an open source e-commerce application called LARACOM made with Laravel together with Rob Gloudemans’ shopping cart and other related e-commerce packages.
When starting this project, I have to think of the long term so never in my mind that I will not use the TDD (test driven development) approach, it is a MUST. So in doing it, I need my tests to be separated in two different groups, one for Unit testing and one for Feature testing.
Unit testing is testing your classes — Models, Repositories etc. while Feature testing is testing your code if it hits the controllers and asserting the actions it will throw, say a redirect, returns a view or redirect with error flash messages.
Enough with the introduction. If you haven’t tried TDD before, I have written a basic TDD approach in here. I will not talk about the basic already since there is already an article for it.
What I will do today is doing a Carousel for my e-commerce project.
Edit: Hey! I created a base repository package you can use for your next project :)
Part I: Positive Unit Testing
Let’s start with the CREATE test.
Create the file
/tests/Unit/Carousels/CarouselUnitTest.php
What are we trying to do in this file:
- See if our repository class can take this data parameters and actually create the carousel in the database
- See if after creating the carousel, the response have the same values as the parameters we feed it in
Now, in your terminal, run
vendor/bin/phpunit (you must be in the root directory of your application)
Is it an error? Yes! It should be since we have not created the files it is trying to use.
PHPUnit 6.5.7 by Sebastian Bergmann and contributors.E 1 / 1 (100%)Time: 700 ms, Memory: 26.00MBThere was 1 error:1) Tests\Unit\Carousels\CarouselUnitTest::it_can_create_a_carousel
Error: Class 'Tests\Unit\Carousels\CarouselRepository' not found
So we need to solve the error by creating the
CarouselRepository file.
Let’s create it in
app/Shop/Carousels/Repositories/CarouselRepository.php
Obviously, you cannot find the
Shopand
Carouselsfolder in the default Laravel installation. You can do your own way on how to structure your folder but I’d love to do it this way. You just have to namespace it properly.
We also created a specific error exception file here.
Create it in,
app/Shop/Carousels/Exceptionsfolder otherwise it will throw an error.
In this repository, we use dependency injection. We inject the class to the repository so it can use it. The
Carousel model would throw also an error here since it is not yet existing.
We create it first:
app/Shop/Carousels/Carousel.php
After the repository is created, import it to our test file like so:
and run again
vendor/bin/phpunit
Error? Yes? Yup. You guessed it right.
PHPUnit 6.5.7 by Sebastian Bergmann and contributors.E 1 / 1 (100%)Time: 898 ms, Memory: 26.00MBThere was 1 error:1) Tests\Unit\Carousels\CarouselUnitTest::it_can_create_a_carousel
App\Shop\Carousels\Exceptions\CreateCarouselErrorException: PDOException: SQLSTATE[HY000]: General error: 1 no such table: carousels in /Users/jsd/Code/shop/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php:77
It is trying to insert the data into the database but the table is not yet created. What do we need now? Migration file.
In your terminal, run:
php artisan make:migration create_carousels_table --create=carousels
Once it is created, open it and configure the columns you need.
The link is nullable but definitely the title and the image src are required.
Once this is done, run
vendor/bin/phpunit again.
Well done mate, well done.
PHPUnit 6.5.7 by Sebastian Bergmann and contributors.. 1 / 1 (100%)Time: 696 ms, Memory: 26.00MBOK (1 test, 6 assertions)
You already have one test passing. So, as long as this test passes whenever you run
vendor/bin/phpunit, you can be confident that the application can create a carousel slider always.
Show the carousel after it is created
Now, lets try to test the READ.
See if we can see the carousel once it is created.
Again, run the test. Every test we newly created, we always expect a failed test. Why? It is because we have not yet done any implementation. If we get a success after we create the test, then you are doing it WRONG.
Note: Every test we create should be at the top as we want it to be run first
PHPUnit 6.5.7 by Sebastian Bergmann and contributors.E 1 / 1 (100%)Time: 688 ms, Memory: 26.00MBThere was 1 error:1) Tests\Unit\Carousels\CarouselUnitTest::it_can_show_the_carousel
InvalidArgumentException: Unable to locate factory with name [default] [App\Shop\Carousels\Carousel].
With this error, it is looking for the model factory for the Carousel.
So, we need to create it in:
database/factories/CarouselModelFactory.php
Run the phpunit again.
PHPUnit 6.5.7 by Sebastian Bergmann and contributors.E 1 / 1 (100%)Time: 708 ms, Memory: 26.00MBThere was 1 error:1) Tests\Unit\Carousels\CarouselUnitTest::it_can_show_the_carousel
Error: Call to undefined method App\Shop\Carousels\Repositories\CarouselRepository::findCarousel()
Yup. The factory error is now gone. Meaning it can now persist in the database. Others, they want persistence to be separate so you can put in your test instead of :
$carousel = factory(Carousel::class)->create();
then
$carousel = factory(Carousel::class)->make();
But we still have an error, it can’t find the
findCarousel() method in the repository class. We create that method.
Self explanatory file. Run the phpunit and see the output.
PHPUnit 6.5.7 by Sebastian Bergmann and contributors.. 1 / 1 (100%)Time: 932 ms, Memory: 26.00MBOK (1 test, 6 assertions)
Pat on the back mate. Good job.
Update Carousel test
Now, see if we can UPDATE the carousel
In here, we use again the model factory to create the carousel the we pass
$data parameters to update the created carousel and assert if we can get the same values in the update
$data parameters.
This will fail as you know because the
updateCarousel method is not yet created so we create it.
Once this method is created, run phpunit again.
PHPUnit 6.5.7 by Sebastian Bergmann and contributors.. 1 / 1 (100%)Time: 932 ms, Memory: 26.00MBOK (1 test, 6 assertions)
It just work out of the box!
Finally, we test the delete
Create a test if we can delete the carousel we created.
Then create the
deleteCarousel() method
When this method is hit, it should return a
boolean response.
True if successfully delete else
null. Then run phpunit again.
➜ git: phpunit --filter=CarouselUnitTest::it_can_delete_the_carousel
PHPUnit 6.5.7 by Sebastian Bergmann and contributors.. 1 / 1 (100%)Time: 916 ms, Memory: 26.00MBOK (1 test, 1 assertion)
WOW. You have made it here! CONGRATULATIONS!
If you want to see tests live in action, heads up to my Unit Tests here
###
As a PART II, let us do NEGATIVE testing!
/jsd | https://jsdecena.medium.com/crud-unit-testing-in-laravel-5-ac286f592cfd | CC-MAIN-2021-39 | refinedweb | 1,172 | 67.04 |
A Quick Look at Qt Quick
Qt 4.7 has not been released yet, but the curious can download the beta or even grab a snapshot from git. The big news in this point seven release is Qt Quick - a new approach to user interfaces.
Before I get carried away and start throwing screenshots at you, let's step back and look at today's user interfaces. On the desktop, standard buttons, sliders, text fields and windows still serve a purpose. However, on all gadgets running Linux these days, gray buttons are considered boring. Instead, the users want fluid, animated, glossy user interfaces that go with the branding of the device. This is where Qt Quick fits in.
So, back to Qt Quick. The solution can be said to consist of two parts: the QtDeclarative module that runs and integrates Qt Quick in Qt applications, and the language QML. Using QtDeclarative it is possible to share objects between C++ and QML, as well as inserting C++ classes into QML and such. The QML language is a brand new language, primarily for creating user interfaces, but useful in other applications as well.
So, how does QML look, smell and feel? Let's have a look at a minimal example:
import Qt 4.6 Rectangle { width: 200 height: 200 Text { text: "Linux Journal" font.pointSize: 10 font.bold: true anchors.centerIn: parent } }
Reading the code and looking at the screenshot that comes with this article pretty much explains it all. Each word starting with a capital letter (Rectangle and Text) creates a new object. The outer Rectangle forms the area that we have to play with. Then properties are set. The anchors property asks the Text to stay centered in the parent (i.e. the Rectangle) so when the user resizes the window, the text will stay positioned.
So, I talked about fluid, animated, glossy user interfaces and I've shown you a simple text string on a white surface. Well, add the following property assignment to the Text and you will be amazed.
rotation: NumberAnimation { to: 360; duration: 1500; running: true; repeat: true; }
Now the text is spinning madly (as shown in the screenshot)! Actually, what you have done is that you have bound a property to the value generated by a NumberAnimation object. There is far more to Qt Quick - states, effects and cool elements. Also, there is the whole question of integrating QML with a C++ back-end. I've shown you spinning text but the possibilities are unlimited. They really are.
Johan Thelin is a consultant working with Qt, embedded and free
software. On-line, he is known as e8johan.
>
Like it
I very much like JavaFX and it is great to see that similar concepts are comming to C++ with Qt. I think I will like it even more since it will be completely open source and faster and more efficient.
JFX
Looks like JavaFX, but I think that it will be better with Qt because JFX is proprietary (or at least the JFX SDK). Qt Quick will be FOSS.
Reminds me of Java FX
The concept reminds me of Java FX. I wonder if it will take off with Qt.
Looks very nice :). I see
Looks very nice :). I see many similarities with JavaFX.
Same here, JavaFX all over
Same here, JavaFX all over again. Did not notice your comment since page was open for a couple of days in the browser. | http://www.linuxjournal.com/content/quick-look-qt-quick?quicktabs_1=1 | CC-MAIN-2015-18 | refinedweb | 576 | 75.4 |
hi so fare i have this but it doesnt seem to be working:
in one class i have:
and then in another class i want to call arrayA:and then in another class i want to call arrayA:Code :
public class siwtchMenu{ public String[][] arrayA(int x, int y){ String arrayTest[][] = {{"0", "1", "2"},{"0","1", "2"}}; return arrayTest[x][y]; //error is here } }
Code :
public class case1{ void case1(){ switchMenu obj = new switchMenu(); System.out.println("Array is:"); for(int i=0; i<3; i++){ System.out.println(obj.arrayTest[i][i]+ ", "); } } }
when i do this it comes up with an error (at error is here) saying:
Please help me on this as i have know idea what im doing wrong!Please help me on this as i have know idea what im doing wrong!Quote:
incompatible types
required: java.lang.String[][]
found: java.lang.String
thanks in advance! =] | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/3601-returning-2d-array-printingthethread.html | CC-MAIN-2015-14 | refinedweb | 150 | 70.23 |
There are so many ways to extract or remove stop words from Text. But these stop words are pre-defined in NLP libraries. Now if you are searching How to use custom stopwords python NLP, You will get the answer step by step in this article. Let’s Add stopwords python-
1. Create a custom stopwords python NLP –
It will be a simple list of words(string) which you will consider as a stopword. Let’s understand with an example –
custom_stop_word_list=['you know', 'i mean','yo','dude']
2. Extracting the list of stop words NLTK corpora (optional) –
This is optional because if you want to go ahead with the above custom list of stopwords then This is not required. Usually, developers/data scientists merge the custom stopword list with a predefined stop stopword list. There are so many Libraries for extracting the stopwords list. Here we are achieving it with NLTK-
# importing Nltk stopword package import nltk nltk.download('stopwords') from nltk.corpus import stopwords #Loading Stopwords into a list NLTK_stop_words_list=stopwords.words('english') print(NLTK_stop_words_list) print("Total numbers of stop words are ") print(len(NLTK_stop_words_list))
Here we have generated the list which contains predefined stop words from the NLTK package.
How to use custom stop word list in python NLP -1
3. Add the custom stop words NLP list into NLTK library’s stop words (nltk add stopwords) –
Let’s see how to Add the custom stop words NLP list with NLTK library’s stop words –
final_stopword_list = custom_stop_word_list + NLTK_stop_words_list
Here final_stopword_list contains stop words from both sources. Here is the output screenshot-
Here the final_stopwords_list can be custom alone as well.
4. How to remove stop words python NLTK?
Here is the way to remove stopwords. Here will use the custom stopwords list.
from nltk.corpus import stopwords from nltk.tokenize import word_tokenize nltk.download('punkt') input = "'you know Lets see the examples in DataScienceLearner" tokens = word_tokenize(input) sentence_without_stopword = [k for k in tokens if not k in final_stopword_list] print(tokens) print(sentence_without_stopword)
Once you run this code, It will remove the custom stopword and predefine the stopword. That’s it.
Conclusion –
I hope this article must clear your doubt about removing standard stopwords as well as custom user define. If you have any thoughts and suggestions on this topic, please comment in the comment box.
Thanks
Data Science Learner Team
Join our list
Subscribe to our mailing list and get interesting stuff and updates to your email inbox. | https://www.datasciencelearner.com/custom-stopwords-python-nlp/ | CC-MAIN-2021-39 | refinedweb | 410 | 64.51 |
Database plays a very important role in maintaining a web site or a organization details.It will be exciting if we are able to use the data present in Database in our program.One of the important features of Java is that we can connect to database using it.Connecting Java to Db is called as Java connectivity.With help of Java connectivity we can retrieve, update the data present in the Database.
For connecting the database to the Java program we need to install the drivers.In our tutorial we going to use Microsoft Access as the database.For installing the drivers we need jdbc-odbc files.Jdbc drivers are available with java api kit.Microsoft provides the odbc drivers to the users defaulty…but we need to install them.To install them follow these steps.
Step 1: Go to control Panel
Step 2: Select administrative tools.
Step 3: Select Data Sources(odbc).
Step 4: There select User DSN…..and click on Add
Step 5: Then select the ms access(*.mdb)
Step 6: Now give the name of the data source which you are going to use.
Step 7: Then click on select button and then find your database…..and then click on ok
With this you installed the drivers for connecting the java the to Ms Access.
Now we shall learn how to write the java code for connecting to the database.The main important steps in any one the database connecting program is:
Step 1: specify the rivers which you are going to use in the program.For this we need to invoke a method called as forName() which is present in the class “Class”.
Class.forName(“sun.jdbc.odbc.JdbcOdbcDriver”);
Step 2: Now we need to start the connection to the Database.For this we use the method getConnection() which is present in the class DriverManager.Which return an object of connection class.We need to pass the protocol,user id and password for the database.The url consists of three parts protocol,subprotocol,sourcename.
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); Connection con; con = DriverManager.getConnection(url,userid,pswd);
Step 3:Now we need to create an statement to execute the queries.For this we use the method createStatement() which is present in the Connection class and we use the con object to call the method,Which returns an object of statement class.
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); Connection con; con = DriverManager.getConnection(url,userid,pswd); Statement st = con.createStatement();
Step 4:Now we shall execute the queries on the table.For this we use the method executeQuery() which is present in the class Statement and we use the object st.It returns the set of results of the operation and we need to store them in an object of ResultSet.
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); Connection con; con = DriverManager.getConnection(url,userid,pswd); Statement st = con.createStatement(); Resultset res = st.executeQuery("select * from emp");
Now the entire code for the program is:
import java.sql.*; public class msaccessdemo { public static void main(String arg[]) { ResultSet res; //To store the result of the query String url="jdbc:odbc:company"; String id=""; String pd=""; try { Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); Connection con; con = DriverManager.getConnection(url.id,pd); Statement st = con.createStatement(); res = st.executeQuery("select * from emp"); while(res.next()) { System.out.print("res.getString(1)+" "+res.getString(2)); } } catch(Exception e) { System.out.print(e); } } }
OUTPUT:
To download the program click here:download
Pingback: Connecting Java to Mysql Database | letusprogram...!!!
Pingback: Java program for Inserting rows using preparedstatement | letusprogram...!!! | https://letusprogram.com/2013/11/23/connecting-java-to-microsoft-access-database/ | CC-MAIN-2021-17 | refinedweb | 597 | 51.95 |
[C#][NativeUI] How to create a mod menu using NativeUI(Part 1)
Hello guys, with my last tutorial done and dusted, I was requested to make a mod menu tutorial. I thought it was a very good idea, and so I am creating a mod menu tutorial, using NativeUI. It is very simple, you will get to know how to create menus more than just simple mod menu functions.
REQUIREMENTS:
ScriptHookVDotNet
NativeUI
First, lets make a new project:
Lets add our references to ScriptHookVDotNet.dll and NativeUI.dll:
Also add a reference to System.Windows.Forms and System.Drawing:
With that done, lets add our using namespaces:
Awesome, now lets make our class inherit GTA.Script and be done with setting everything up:
Awesome! Now, lets talk a bit about NativeUI. NativeUI contains all its menus in a MenuPool, and we can put all our menus in it. Right now, we need one mainmenu, which can be than divided into many other submenus, and one MenuPool. Let's do that:
Let's make a script constructor, which can instantiate and start all these classes:
Lets go over this one... the first line,
modMenuPool = new MenuPool();
instantiates the menuPool class so we can add it to the pool. The second line,
mainMenu = new UIMenu("Mod Menu", "SELECT AN OPTION");
instantiates a new UIMenu, whose title, is Mod Menu and the subtitle is SELECT AN OPTION. You can of course edit these to whatever you want, but I will be keeping these throughout the tutorial. In the last line, we add our mainMenu to our menuPool. Simple, right? Now lets also add the basic events(OnKeyDown, OnTick):
One thing we need to do is in our tick we need to add a menuPool.ProcessMenu line, if you dont do that, the menus wont show up:
This is just something to do with every NativeUI menu you create. Now, let's make a little keypress to open/close our nativeUI menu, which surprisingly is very easy:
This little piece of code checks if we press the key F10, and if we do, no menus are open. If they arent, we open/close our menu using the visible property. Let's check if this code works ingame:
As you can see, we don't have any options, but this does work! Next, we'll add some simple items which you can select. But before, we need to set everything up. I would like to do that in a little private function which we can call later on. This is how I do it:
First, I declare a global variable resetWantedLevel of type UIMenuItem, which I set its value in the Setup function. Than, I add the item to our mainMenu. Now this will give us an Item which we can click, but nothing will happen. For that, we need another event, which is built-in NativeUI's UIMenu class:|
Fairly easy to understand, just another event. This event gets called whenever we select an item, and it also passes through which Item was selected. This way, we can expand our menu easily by checking the returned item with other items, and make more and more functions. Let's however implement our basic reset wanted level script:
Easy to understand, as explained before. Let's test this script ingame!
As you can see, this script works! Now, you can add as much to it as you want, but I am also going to show you other basics of how you can use this. For example, why not make a little weapon selector? You select a weapon, and press a button and you get the desired weapon? Well, I am going to make this in a submenu. And this is how you create a submenu:
This creates a submenu inside our mainMenu, and its name is Weapon Selector Menu. Now that we have the menu, lets get to work on it. For lists, we use something called UIMenuListItem, which can contain a dynamic list and return the index of what is selected. We will be using this. Lets first make a dynamic list:
Now, this is where the most complicated part begins. We need to get all the values from the WeaponHash array, and cast it into a WeaponHash:
Basically what is happening is we get all the weapon hashes and cast it into a weaponhash array. Now we need to add all those weapon hashes into our list:
Now we have all the weapon hashes. Let's setup our other half of the menu:
I will show you another way of creating an event, if you dont want another function. It would be very good, since all the code could come into one function. This is how you do it:
Now, lets put our button code in:
We need to do two things now, get the index of our List item, and get the WeaponHash from our our whole array of weapons. This is pretty easy. We get the index by list.Index and the other by just allWeaponHashes[index]. Easy:
I believe this code is straightforward, so now lets put this function in:
Now, lets test this!
as you can see I have no weapons. But now I want the heavy shotgun:
And I got the heavy shotgun!
I think this wraps up the first part of the tutorial, in the second half, we will be doing more of these types of functions, like car spawning, car god mode, stuff like that. And I will also move the Reset Wanted Level into its own Player submenu. Until then, bye!
Second tutorial:
@GTAVModder4Life perfect, good explanation. i am very interested in spawning cars by name. so im looking forward to your next tutorial. thank you again for all your tutorials!!
@TobsiCred Thanks dude! And nice idea, will cover in the next one.
@GTAVModder4Life Thank you for this tutorial
i want to know how to create Pickups.. For Example: When the player drive with a car in this pickup he can sell it..
@GTAVModder4Life Hey man i was trying to install VS but that thing wont install for me kept getting errors (package failed) ,so can you please convert these two source code files to a working .asi ? please ?
files
@MK sorry dude, I need the whole project, I dont have most of the files needed. Like IniWriter, you didnt give me all the files to build the project.
@GTAVModder4Life The files that i sent you are the source code for this mod
So wait a second,the source code isn't for editing an stuff like what I'm trying to do here ?
@GTAVModder4Life What i did was edited the codes to include ADDON vehicles.
I been wanting to get the police package how do you get it? I have Xbox 360 HELP!!!!
@GTAVModder4Life oh how do you do mods on Xbox?
@Xxkc95xX I'm sorry but I dont make or have ever made mods for Xbox 360, and like I said this website is only for PC.
@GTAVModder4Life Nevermind i found a solution.
@GTAVModder4Life okay don't get stupid I was just asking smh
- GTATerminal
@Xxkc95xX Drop the attitude.
@GTAVModder4Life was in no way "stupid" towards you.
You come here and hijack a thread, ask for mods for consoles on a PC modding website,
and then proceed to get hostile when you don't get your way. That is not how we do things here.
@yeahhmonkey you start in the class, like I mentioned in the start. If you dont know how to create a normal GTA 5 script, refer to my How to create a Basic script tutorial.
@GTAVModder4Life Awesome explanations, Now I am gonna implement this with my script for too many options I was thinking to add to.
Thank you so much.
@GTATerminal I could be late at party, but very nicely said.
- GTATerminal
This post is deleted!
- HeadShotWillDo0
Can't Get this to work
Please Help
- HeadShotWillDo0
This post is deleted!
Hello,
So i get the Issue that my "KeyCode is not in an EventArgs Implemented... Does anyone know how to fix that?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Drawing;
using GTA;
using GTA.Native;
using GTA.Math;
using NativeUI;
namespace GTA_Script1
{
public class Class1 : Script
{
MenuPool modMenuPool;
UIMenu mainMenu;
public Class1() { modMenuPool = new MenuPool(); mainMenu = new UIMenu("Mod Menu", "Select an Option"); modMenuPool.Add(mainMenu); Tick += onTick; KeyDown += onKeyDown; } void onTick(object sender, EventArgs e) { if (modMenuPool != null) modMenuPool.ProcessMenus(); } void onKeyDown(object sender, EventArgs e) { if (e.KeyCode == Keys.F10 && !modMenuPool.IsAnyMenuOpen()) { mainMenu.Visible = !mainMenu.Visible; } } }
}
- TehSynapse
So the menu does not pop up when I press the button, the code is 100% the same. Do I have to install nativeUI into the GTA folder or something? It's not mentioned in this post...
NativeUI needs to be in you Scripts folder, yes. | https://forums.gta5-mods.com/topic/1023/c-nativeui-how-to-create-a-mod-menu-using-nativeui-part-1 | CC-MAIN-2018-09 | refinedweb | 1,491 | 82.75 |
timeit — Time the execution of small bits of Python code.¶.
Module Contents¶
timeit defines a single public class,
Timer. The
constructor for
Timer takes a statement to be timed and a
“setup” statement (used to initialize variables, for example). The
Python statements should be strings and can include embedded newlines.
The
timeit() method runs the setup statement one time, then
executes the primary statement repeatedly and returns the amount of
time that passes. The argument to
timeit() controls how many
times to run the statement; the default is 1,000,000.
Basic Example¶
To illustrate how the various arguments to
Timer are used,
here is a simple example that prints an identifying value when each
statement is executed.
import timeit # using setitem t = timeit.Timer("print('main statement')", "print('setup')") print('TIMEIT:') print(t.timeit(2)) print('REPEAT:') print(t.repeat(3, 2))
When run, the output shows the results of the repeated calls to
print().
$ python3 timeit_example.py TIMEIT: setup main statement main statement 3.7070130929350853e-06 REPEAT: setup main statement main statement setup main statement main statement setup main statement main statement [1.4499528333544731e-06, 1.1939555406570435e-06, 1.1870870366692543e-06]
timeit() runs the setup statement one time, then calls the
main statement
count times. It returns a single floating point value
representing the cumulative amount of time spent running the main
statement.
When
repeat() is used, it calls
timeit() several
times (3 in this case) and all of the responses are returned in a
list.
Storing Values in a Dictionary¶
This more complex example compares the amount of time it takes to
populate a dictionary with a large number of values using a variety of
methods. First, a few constants are needed to configure the
Timer. The
setup_statement variable initializes a
list of tuples containing strings and integers that will be used by
the main statements to build dictionaries using the strings as keys
and storing the integers as the associated values.
# A few constants range_size = 1000 count = 1000 setup_statement = ';'.join([ "l = [(str(x), x) for x in range(1000)]", "d = {}", ])
A utility function,
show_results(), is defined to print the
results in a useful format. The
timeit() method returns the
amount of time it takes to execute the statement repeatedly. The
output of
show_results() converts that into the amount of time
it takes per iteration, and then further reduces the value to the
average amount of time it takes to store one item in the dictionary.
def show_results(result): "Print microseconds per pass and per item." global count, range_size per_pass = 1000000 * (result / count) print('{:6.2f} usec/pass'.format(per_pass), end=' ') per_item = per_pass / range_size print('{:6.2f} usec/item'.format(per_item)) print("{} items".format(range_size)) print("{} iterations".format(count)) print()
To establish a baseline, the first configuration tested uses
__setitem__(). All of the other variations avoid overwriting
values already in the dictionary, so this simple version should be the
fastest.
The first argument to
Timer is a multi-line string, with
white space preserved to ensure that it parses correctly when run. The
second argument is a constant established to initialize the list of
values and the dictionary.
# Using __setitem__ without checking for existing values first print('__setitem__:', end=' ') t = timeit.Timer( textwrap.dedent( """ for s, i in l: d[s] = i """), setup_statement, ) show_results(t.timeit(number=count))
The next variation uses
setdefault() to ensure that values
already in the dictionary are not overwritten.
# Using setdefault print('setdefault :', end=' ') t = timeit.Timer( textwrap.dedent( """ for s, i in l: d.setdefault(s, i) """), setup_statement, ) show_results(t.timeit(number=count))
This method adds the value only if a
KeyError
exception is raised when looking for the
existing value.
# Using exceptions print('KeyError :', end=' ') t = timeit.Timer( textwrap.dedent( """ for s, i in l: try: existing = d[s] except KeyError: d[s] = i """), setup_statement, ) show_results(t.timeit(number=count))
And the last method uses “
in” to determine if a dictionary has a
particular key.
# Using "in" print('"not in" :', end=' ') t = timeit.Timer( textwrap.dedent( """ for s, i in l: if s not in d: d[s] = i """), setup_statement, ) show_results(t.timeit(number=count))
When run, the script produces the following output.
$ python3 timeit_dictionary.py 1000 items 1000 iterations __setitem__: 91.79 usec/pass 0.09 usec/item setdefault : 182.85 usec/pass 0.18 usec/item KeyError : 80.87 usec/pass 0.08 usec/item "not in" : 66.77 usec/pass 0.07 usec/item
Those times are for a MacMini, and will vary depending on what
hardware is used and what other programs are running on the
system. Experiment with the
range_size and
count variables, since
different combinations will produce different results.
From the Command Line¶
In addition to the programmatic interface,
timeit provides a
command line interface for testing modules without instrumentation.
To run the module, use the
-m option to the Python
interpreter to find the module and treat it as the main program:
$ python3 -m timeit
For example, to get help:
$ python3 -m timeit -h Tool for measuring execution time of small code snippets. This module avoids a number of common traps for measuring execution times. See also Tim Peters' introduction to the Algorithms chapter in the Python Cookbook, published by O'Reilly. ...
The
statement argument works a little differently on the command
line than the argument to
Timer. Instead of using one long
string, pass each line of the instructions as a separate command line
argument. To indent lines (such as inside a loop), embed spaces in the
string by enclosing it in quotes.
$ python3 -m timeit -s \ "d={}" \ "for i in range(1000):" \ " d[str(i)] = i" 1000 loops, best of 3: 306 usec per loop
It is also possible to define a function with more complex code, then call the function from the command line.
def test_setitem(range_size=1000): l = [(str(x), x) for x in range(range_size)] d = {} for s, i in l: d[s] = i
To run the test, pass in code that imports the modules and runs the test function.
$ python3 -m timeit \ "import timeit_setitem; timeit_setitem.test_setitem()" 1000 loops, best of 3: 401 usec per loop
See also
- Standard library documentation for timeit
profile– The
profilemodule is also useful for performance analysis.
- Monotonic Clocks – Discussion of the monotonic clock from the
timemodule. | https://pymotw.com/3/timeit/index.html | CC-MAIN-2017-43 | refinedweb | 1,044 | 57.16 |
I intended to write a post like this long ago as a sort of follow-up in spirit to my old(er) post on C. Unfortunately since I code most of the day my brain is too beat out for writing articles later in the day. I must admit, however, that I am quite lazy when it comes to writing blog posts. Writing a fairly technical blog post takes a lot of effort and research, so it’s not as easy as it may seem. Regardless, I’ve finally got into my blog-post-writing-spirit. 🙂
It seems since my January post there has been a big inflation in Go and Rust users. Some brave souls even call Go a replacement for C when writing more low-level programs. My opinions have certainly changed since my original post on C, and I’ve formed some interesting views on the whole “Go vs C” debate. If you read my original article, you’ll know I claimed C was the best programming language. Although it may have seemed I was trying to prove how C was the absolute best, it was intended to be taken a bit more lightly. I don’t think C is absolutely perfect. It has flaws like every other programming language in existence. Moreover my intention was to say that C was still just as legit as any other language that exists. A lot of people seem to view C as some archaic nightmare used by hackers and code gurus, but it’s not like that. I tried to say that C is still pretty awesome, and stated some reasons for it being awesome. With that off my chest, let’s explore the weird world of the C and Go programming languages!
What is Go?
To say it simply, Go is a language with the low-level capabilities and simplicity of C but with tons of modern programming language features. The language itself was developed inside Google by Robert Griesemer, Rob Pike and Ken Thompson. The people who created Go make it even more interesting. Ken Thompson was the creator of the original B programming langauge, which was a very early compiled language that inspired Dennis Ritchie to create C!
Although Go comes with all the typical features of a modern language (garbage collection, package management, powerful object-oriented features, etc) the syntax resembles that of C and C++. Here is a Hello World program in Go:
package main import ( "fmt" ) func main() { fmt.Printf("Hello world, or こんにちは世界!") }
As you can see, Go has a
main function with a
Printf function much like C. Unlike C, however, are two of Go’s required statements at the beginning of a program. One is the
package statement, which defines what package the program is part of. This is used in Go libraries so you can import each package nicely. The second is the
import statement, where a list of packages for the program to use is defined. There are a ton of default Go packages you can import, but Go has unique features like importing packages from GitHub URLs. Go’s Hello World program likes to show off its amazing Unicode support by including a “hello world!” translation in Japanese.
The Go compiler itself compiles code directly to machine code. Unlike languages like Java, Go compiles completely optimized machine code binaries for each platform. Lucky for us, Go supports almost every relevant operating system today. Even operating systems like NetBSD and Plan 9 are supported!
Go’s compiler is itself written in C (I’m guessing Rob Pike, Ken Thompson and co. had a lot of experience writing compilers in C), but the developers are trying to make Go’s compiler self-hosted. A self-hosted compiler is a compiler written in the same language it compiles. In other words, Go’s compiler would be completely written in Go. This is a major step for Go, since it’s the ultimate proof that Go is fast and versatile if it can implement itself.
Will Go Replace C?
It’s hard to tell whether newer low-level languages like Go will replace classic languages like C anytime soon, but I can give a prediction: not anytime in the near future.
Imagine C is a bike and Go is a modern car. Although a car is much more powerful, it completely relies on gasoline. Anyone can power a bike with two working feet. By setting up a full Go environment you can make amazing, easy to read programs, but anyone with a C compiler can create some decent C programs. Even though we have modern cars does that mean we need to stop selling bikes? Of course it doesn’t. Just because we have Go doesn’t mean C is just going to vanish. For goodness sakes, the military is still running on code written in Ada!
That said, we may see a lot more code written in Go. We won’t see stuff like the Linux kernel converted to Go for a long time, so it’s safe to say C isn’t leaving us completely. Whatever happens there’s no need to worry. Sometimes we focus too much on the tools we use to make something rather than what we want to make. Use C or Go, they’re both perfectly relevant languages for the time being. | http://www.woohooitsbacon.com/is-go-the-c-killer/ | CC-MAIN-2017-13 | refinedweb | 898 | 72.76 |
Hello,
Apart from the major support for Unicode that we all would like to see, I would like to see support for Namespaces being a priority, but in regards to type hints I too would like to see that we could use the package structure as type hints as well
Another feature I would like to see would be inner classes. There are a few other things I'd like to see but first, what's on your wish list?
The Doctor...
Anonymous code blocks. create_function gives me a headache.
ICC
Intelligent Code Creation
i type
"make a 3 field form for name , email address and password and make sure all three field are filled in for it to be submitted then add the 3 details into the database and email the address with confirmation"
and it spits out the necassary code the other end.
may take a little working out but im sure you code gods could do it
Not only that, but make them closures as well.
Support for properties, ala how asp.net has adopted that standardSupport for overloading.I'll be another vote for namespaces and also type hints for return values.Pie in the sky, but a persistent data layer would be nice.
overloading please
namespaces would be nice
also maybe an option to compile to executable on linux (i dont care about windows), php5.2 its fast already with APC but it still common sense tells me a binary will be much faster, also its handy in asp.net being able to just destribute a dll without sharing the code
also asp.net style widgets. for example i dont wish to spend all day messing around with javascript when lest say i want a calendar control! in asp.net u just drag it and it works no mucking around with 3rd party libraries
improving parser to allow expressions wherever a variable can be used: foo()[x], new (foo()) etc.
cleanup of standard library by creating new sets of string, array and file functions with consistent naming and arguments order, collating similar functions using optional flag params i.e. a_sort($ary, SORT_KEYS | SORT_DESC) etc
regexp literals
perl || semantics
goto
compiler, ability to mix bytecode with php: if(xyz) asm { 234098adef... }
deprecating errors in favor of exceptions
Lexical scope for code blocks would be really nice, but that is a major change to the language, so I don't think it'll happen.
+1 for that.
And allow any callback to be called as a function. Eg. :
$fn = Array($this, 'foo');
$fn();
Hi...
My preferences in order for PHP6...
Anything else would just delay adoption and increase incompatibility.
For PHP7 I would like in order...* Namespaces so that foreign code can be imported without clashing.* Dumping the SPL and it's bloat in favour of operator methods.* Making Array a class.* Namespaces* Removing gotchas, such as givesArray()[23], (new Thing())->do()* Removal of dead features and libraries* Namespaces* Documenting the C interface and PECL* Decent lambda function syntax without all that quoting or heredoc stuff (that would deal with blocks and inner classes in a reasonably PHP way).* Did I mention namespaces?
yours, Marcus
Namespaces Namespaces Namespaces.
I'm gutted that the last I read on the matter was an argument on the dev mailing list about them being not needed in PHP and then progressing to "well maybe the best we can do is allow ":::" to be used as a namespace separator" which was essentially allowing class names to include those characters. They said they'd not be adding any "import" feature to bounce between namespaces because it would slow PHP down too much and PHP was never written in a way which can be easily adapted to support it.
I'll have to find the conversation and link to it... not seen any progress since then
Yeah, and anonymous functions.
What's with this too?
function global_one()
{
function closed_one() {}
function closed_two() {}
}
function global_two()
{
function closed_one() {} //Fatal error: Function closed_one previously declared
}
I don't know if this is still valid (it's over 1 year old), however, here's a link sent to me by a colleague of mine last year.
I've come up with a whole load of ideas over the months, I'm sure many of you will have reasons I haven't thought of why they might be bad:[list][*]Unicode[*]Namespaces ( using :: )[*]Type hinted return types (only on interfaces for now)[*]Namespace visibility?[*]Class visibility[*]Proper getter/setter methods ala C# et al, maybe something like this:
public $someProp {
get {
}
set ($value) {
}
}
[*]Closures[*]Deprecate, where possible, badly named functions such as call_user_func_array(), keep them on as aliases but come up with better standardized names. is_string() should be isString() for a start; inline with isSet().[*]Built in support for improving code testability? Mocks as part of SPL perhaps.[*]Completely remove @[*]Parametrized cloning
$a = new SomeClass();
$b = clone $a('Foo', 'Bar');
[*]const keyword work in any scope and deprecate define()[*]isset augmented to replace defined()[*]method and function overloading[*]'object' type hint - manual claims it exist; it don't [*]Late binding statics[*]Might be nice to disallow declaration of non-uppercase constants[*]augment list() to get from keys
function returnsAssoc()
{
return array('foo' => 1, 'bar' => 2);
}
list('foo' => $gir, 'bar' => $zim) = returnsAssoc();
$gir; // 1
$bar; // 2
[*]Type hinting arrays of things
public function someMethod(stdClass array $param) {
// $param must be an array of stdClasses
}
[*]New decorates keyword, dynamically add behaviour to an object with another class using late binding.[*]static variables inside methods aren't static to the class as well[*]Pipe dream: Remove dollar syntax for everything dynamic variables/constants. Use {} in heredoc and double-quoted strings. e.g.
foo = 1;
const BAR = 2;
echo "foo has value of {foo} and BAR has value of {BAR}";
zim = 'foo';
echo $zim; // 1
class SomeClass
{
static public foo = 1;
public gir = 3;
const BAR = 2;
}
SomeClass::foo; // 1
SomeClass::$zim; // 1
SomeClass::BAR; // 2
dynamicConst = 'BAR';
SomeClass::$dynamicConst; // 2
SomeClass::gir; // illegal non-static
inst = new SomeClass();
inst->gir; // 3
zim = 'gir';
inst->$zim; // 3
inst->{$zim}; // 3
inst->{'gir'}; // 3
{'zi' . 'm'}; // gir
inst->BAR; // illegal static
[/list]
Dump public/private/protected. Keep hints but make them trigger E_STRICT errors: "are you sure you really need to do this?" I'm tempted to say dump exceptions too but perhaps I just haven't learned how to use them properly. At the least allow $this to be thrown - or indeed any other object you like.
You can throw other objects
I don't think visibility should be removed... perhaps made optional, but not removed. You get PHP4 behaviour by simply using "public" anyway. I'd actually like class visibility like in Java, but for that to have any meaning, you need namespaces first
throw new StdClass();
Fatal error: Exceptions must be valid objects derived from the Exception base class in C:\\apache\\htdocs\\foo.php
What version of PHP are you using?
No: make everything private, which means that no object but the one it belongs to may access it (and it has no effect on inheritance; every property and method is always inherited). Methods only may be designated public, which means that they may be accessed by any object. That's it!
I posted about three items on my blog before seeing this post: late static binding, declared accessor methods, and closures.
Inner Classes? Not necessary with anonymous functions or closures.
How about anonymous objects? Think PHON as in JSON.
PHP's builtin arrays are hashmaps already - so no need for that, although a prettier syntax for literal arrays would be nice. It would be nice though if arrays could be treated transparently as objects without explicit typecasting, a la SPL ArrayObject. Eg. making the following syntax valid:
$a = Array('foo' => "bar");
echo $a->foo; // yields bar
And in that case, perhaps object properties should be available with array syntax as well:
$o = new StdClass();
$o->foo = "bar";
echo $o['foo']; // yields bar
My point is not to represent the JS data types in PHP, but to have a literal notation for each of the PHP data types, including object. (see __set_state) | https://www.sitepoint.com/community/t/what-features-do-you-want-to-see-in-php6/45124 | CC-MAIN-2017-09 | refinedweb | 1,361 | 59.64 |
Tweet vibration anomalies detected by Azure IoT services on data from an Intel Edison running Node.js
If you want to try out Azure IoT Hub and other Azure IoT services, this sample will walk you through the configuration of services and the setup of an Intel Edison running Node.js. The device will log vibration sensor data that will be analyzed by Stream Analytics and a worker role will send an alert message back to the device via IoT Hub as well as tweet the alert on twitter.
Running this sample
Hardware prerequisites
In order to run this sample you will need the following hardware: - An Intel Edison board - In the sample we are using the Seeed Xadow wearable kit for Intel Edison but you can use another kit and sensor adapting the code for the device. - These are the sensors and extensions we are using in the sample (all included in the Seeed Xadow wearable kit mentioned above): - Xadow Expansion board - Xadow Programmer - Xadow 3-Axis accelerometer - Xadow OLED screen
Software prerequisites
- Visual Studio 2015 with Azure SDK for .Net
- A Serial terminal, such as PuTTY, so you monitor debug traces from the devices.
- Intel XDK
Services setup
In order to run the sample you will need to do the following (directions for each step follow): - Create an IoT hub that will receive data from devices and send commands and notifications back to it - Create an Event hub into which we will post alerts triggered by the Stream Analytics job - Create a Stream Analytics job that will read data from the IoT hub and post alerts into the Event hub - Create a Storage account that will be used by the worker role - Deploy a simple worker role that will read alerts from the Event hub and tweet alerts on Twitter and forward alerts back to devices through the IoT hub
Create an IoT Hub
Log on to the Azure Preview Portal.
In the jumpbar, click New, then click Internet of Things, and then click IoT Hub.
In the New IoT Hub blade, specify the desired configuration for the IoT Hub.
- In the Name box, enter a name to identify your IoT hub such as myiothubname. When the Name is validated, a green check mark appears in the Name box.
- Change the Pricing and scale tier as desired. This tutorial does not require a specific tier.
- In the Resource group box, create a new resource group, or select and existing one. For more information, see Using resource groups to manage your Azure resources.
- Use Location to specify the geographic location in which to host your IoT hub.
Once the new IoT hub options are configured, click Create. It can take a few minutes for the IoT hub to be created. To check the status, you can monitor the progress on the Startboard. Or, you can monitor your progress from the Notifications section.
After the IoT hub has been created successfully, open the blade of the new IoT hub, take note of the Hostname, and select the Key icon on the top.
Select the Shared access policy called iothubowner, then copy and take note of the connection string on the right blade. Also take note of the Primary key
Your IoT hub is now created, and you have the Hostname and connection string you need to complete this tutorial.
For the creation of the Stream Analytics job Input, you will need to retreive some informations from the IoT Hub: - From the Messaging blade (found in the settings blade), write down the Event Hub-compatible name - Look at the Event-hub-compatible Endpoint, and write down this part: sb://thispart.servicebus.windows.net/ we will call this one the IoTHub EventHub-compatible namespace - For the key, you will need the Primary Key read in step #6
Create an Event Hub
Log on to the Azure Management Portal.
In the lower left corner of the page, click on the + NEW button.
Select App Services, Service Bus, Event Hub, Quick Create
Enter the following settings for the Event Hub (use a name of your choice for the event hub and the namespace):
- Event Hub Name: "myeventhubname"
- Region: your choice
- Subscription: your choice
- Namespace Name: "mynamespacename-ns"
Click on Create a new Event Hub
Select the mynamespacename-ns and go in the Event Hub tab
Select the myeventhubname event hub and go in the Configure tab
in the Shared Access Policies section, add a new policy:
- Name = "readwrite"
- Permissions = Send, Listen
Write down the Primary Key for the readwrite policy name
Click Save, then go to the event hub Dashboard tab and click on Connection Information at the bottom
Write down the connection string for the readwrite policy name.
Create a Stream Analytics job
Log on to the Azure Preview Portal.
In the jumpbar, click New, then click Internet of Things, and then click Azure Stream Analytics.
Enter a name for the job, a prefered region, choose your subscription. At this stage you are also offered to create a new or to use an existing resource group. This is usefull to gather several Azure services used together. To learn more on resource groups, read this.
Once the job is created, click on the Inputs tile in the job topology section. In the Inputs blade, click on Add
Enter the following settings:
- Input Alias = "accel"
- Type = "Data Stream"
- Source = "IoT Hub"
- IoT Hub = "myiothubname" (use the name for the IoT Hub you create before
- Shared Access Policy Name = "iothubowner"
- Shared Access Policy Key = "iothubowner Primary Key" (That's the key you wrote down when creating the IoT Hub)
- IoT Hub Consumer Group = "" (leave it to the default empty value)
- Event serialization format = "JSON"
- Encoding = "UTF-8"
Back to the Stream Analytics Job blade, click on the Query tile. In the Query settings blade, type in the below query and click Save
SELECT * INTO accel4twitter FROM accel WHERE (accel.z < 0)
Back to the Stream Analytics Job blade, click on the Outputs tile and in the Outputs blade, click on Add
Enter the following settings then click on create:
- Output Alias = "accel4twitter"
- Source = "Event Hub"
- Service Bus Namespace = "mynamespacename-ns
- Event Hub Name = "myeventhubname"
- Event Hub Policy Name = "readwrite"
- Event Hub Policy Key = "Primary Key for readwrite Policy name" (That's the one you wrote down after creating the event hub)
- Partition Key Column = "4"
- Event Serialization format = "JSON"
- Encoding = "UTF-8"
- Format = "Line separated"
Back in the Stream Analytics blade, start the job by clicking on the Start button at the top
Create a storage account
Log on to the Azure Preview Portal.
In the jumpbar, click New and select Data + Storage then Storage Account
Choose Classic for the deployment model and click on create
Enter the name of your choice (i.e. "mystorageaccountname" for the account name and select your resource group, subscription,... then click on "Create"
Once the account is created, find it in the resources blade and write down the Primary Key for it as well as the storage account name you chose to configure the worker role
Get a twitter app consumer and access information
In a browser, go to
Login with your twitter account
Click on "Create New App" button and follow instructions
Go to the Keys and Access Tokens tab and generate an access token and access token secret by clicking the "Generate My Access Token and Token Secret" button
Write down the Consumer Key (API Key), Consumer Secret (API Secret), Access Token, and Access Token Secret.
Deploy the worker role
The sample uses a worker role to trigger alerts back on devices through IoT Hub. To build an deploy the worker role here are the few simple steps:
Clone the repository on your machine (see the links on top of this tutorial)
Note that the project depends on the Azure SDK for .Net. If you have not done so yet, install the SDK.
Open the solution events_to_device_service\events_to_device_service.sln in Visual Studio 2015
Open the file app.config and replace the fields below with the connection strings from the Event Hub, the IoT Hub, the storage account and your Twitter account
<add key="Microsoft.ServiceBus.ConnectionString" value="<<Enter your EH connection string>>" /> <add key="Microsoft.ServiceBus.EventHubName" value="<<Enter your EH name string>>" /> <add key="AzureStorage.AccountName" value="<<Enter your Storage account name>>" /> <add key="AzureStorage.Key" value="<<Enter your storage account key>>" /> <add key="Twitter.ConsumerKey" value="<<Enter your Twitter consumer key>>" /> <add key="Twitter.ConsumerSecret" value="<<Enter your Twitter consumer secret>>" /> <add key="Twitter.AccessToken" value="<<Enter your Twitter access token>>" /> <add key="Twitter.AccessSecret" value="<<Enter your Twitter access secret>>" /> <add key="AzureIoTHub.ConnectionString" value="<<IoT Hub Connection String>>" />
Compile the project
From here you can whether run the service locally or publish it on Azure. To run locally, right click on the events_to_device_service project and select "Set As Startup Project" then hit F5. To publish to Azure, right click on the events_to_device_service project, select "Publish..." and follow the prompts.
Create a new device identity in the IoT Hub
To connect your device to the IoT Hub instance, you need to generate a unique identity and connection string. IoT Hub does that for you. To create a new device identity, you have the following options: - Use the Device Explorer tool (runs only on Windows for now) - Use the node.js tool - For this one, you need to have node installed on your machine () - Once node is installed, in a command shell, type the following commands:
```bash npm install -g iothub-explorer ```
You can type the following to learn more about the tool
iothub-explorer help
Type the following commands to create a new device identity (replace
in the commands with the connection string for the iothubowner that you retreive previously in the portal.
iothub-explorer <connectionstring> create mydevice --connection-string
This will create a new device identity in your IoT Hub and will display the required information. Copy the connectionString
Prepare the device
For deploying the application on the Intel Edison, You have a couple of options. The first one is to use the Intel XDK IDE, while the second one uses a simple deployment script (batch script for Windows users).
Deploy the app using the Intel XDK
Connect your Intel Edison to your development machine over USB (see Intel Edison getting started instructions)
Clone the project repository on your machine (see the links on top of this tutorial)
Start Intel XDK
Create a new project, choose a Blank Template in the Templates tab and click Continue
Enter the project name of your choice, for example: tweetmyvibe, click Create
Open the package.json file and cpy paste the content from the js\package.json file from the repository of the project you cloned at step 1.
Open the main.js file and cpy paste the content from the js\main.js file from the repository of the project you cloned at step 1.
In the>>'; // must match the deviceID in the connection string var deviceID = '<<Enter your deviceID>>';
In the Intel XDK tool, compile the project (using the icon with a hammer in it) then click on the Play button to start runnig the app on the device
Deploy the app using the Windows batch script
Clone the project repository on your machine (see the links on top of this tutorial)
In the js>>'; var deviceID = '<<Enter your deviceID>>'; // must match the deviceID in the connection string
In order to use the deployment script, you need to install PuTTY as the file transfer uses PuTTY SCP, one of the tools coming with the full install of PuTTY. You can consider adapting the script if you prefer using a different SSH client and SCP tool.
Connect your Intel Edison to your development machine over USB (see Intel Edison getting started instructions)
Determine which COM port the device is showing up as on your development machine. Open a command prompt and type the following command:
mode
Connect PuTTY to the COM port for the device at 115200 Bauds.
Type in user name root and your device's password. Note that in order for the SCP tool to work, you need to device to be password protected. If you want to set a password for the user, type the following command in the Serial Terminal prompt:
passwd
Retreive the device's IP address typing the below command in the serial termninal.
ifconfig
Open the /js/tools/deploy.cmd script and edit the board_ip, and board_pw values with your device's IP address and password
Run the script on your Windows machine. this will download the app script and package.json files in a specific folder (./node_app_slot) on the device. This folder is monitored by a Daemon running on the Intel Edison which will ensure the node.js application that is stored in this specific folder is started at boot time and kept alive. The script will also install all the modules the app depends on.
Once the application is deployed, you can whether reboot the device, or type the following commands in the serial terminal:
cd /node_app_slot node .
App is now running on the device!
At this point your device is connected to IoT Hub and sends telemetry data. The Stream Analytics Job will detect when the acceleration.z value is negative and will post an alert in the event hub. The worker role will pick up the alerts, will notify the device that an alert was triggered and will tweet the alert.
More information
To learn more about Azure IoT Hub check out the Azure IoT Dev center. In the IoT dev center you will also find plenty simple samples for connecting many sorts of devices to Azure IoT Hub. | https://azure.microsoft.com/zh-tw/resources/samples/iot-hub-nodejs-intel-edison-vibration-anomaly-detection/ | CC-MAIN-2017-43 | refinedweb | 2,283 | 56.08 |
During initialization of a shared object, the object must be accessible only to the thread constructing it. However, the object can be published safely (that is, made visible to other threads) once its initialization is complete. The Java memory model (JMM) allows multiple threads to observe the object after its initialization has begun but before it has concluded. Consequently, programs must prevent publication of partially initialized objects.
This rule prohibits publishing a reference to a partially initialized member object instance before initialization has concluded. It specifically applies to safety in multithreaded code. TSM01-J. Do not let the this reference escape during object construction prohibits the
this reference of the current object from escaping its constructor. OBJ11-J. Be wary of letting constructors throw exceptions describes the consequences of publishing partially initialized objects even in single-threaded programs.
Noncompliant Code Example
This noncompliant code example constructs a
Helper object in the
initialize() method of the
Foo class. The
Helper object's fields are initialized by its constructor.
class Foo { private Helper helper; public Helper getHelper() { return helper; } public void initialize() { helper = new Helper(42); } } public class Helper { private int n; public Helper(int n) { this.n = n; } // ... }
If a thread were to access
helper using the
getHelper() method before the
initialize() method executed, the thread would observe an uninitialized
helper field. Later, if one thread calls
initialize() and another calls
getHelper(), the second thread could observe one of the following:
- The
helperreference as
null
- A fully initialized
Helperobject with the
nfield set to 42
- A partially initialized
Helperobject with an uninitialized
n, which contains the default value
0
In particular, the JMM permits compilers to allocate memory for the new
Helper object and to assign a reference to that memory to the
helper field before initializing the new
Helper object. In other words, the compiler can reorder the write to the
helper instance field and the write that initializes the
Helper object (that is,
this.n = n) so that the former occurs first. This can expose a race window during which other threads can observe a partially initialized
Helper object instance.
There is a separate issue: if more than one thread were to call
initialize(), multiple
Helper objects would be created. This is merely a performance issue—correctness would be preserved. The
n field of each object would be properly initialized and the unused
Helper object (or objects) would eventually be garbage-collected.
Compliant Solution (Synchronization)
Appropriate use of method synchronization can prevent publication of references to partially initialized objects, as shown in this compliant solution:
class Foo { private Helper helper; public synchronized Helper getHelper() { return helper; } public synchronized void initialize() { helper = new Helper(42); } }
Synchronizing both methods guarantees that they cannot execute concurrently. If one thread were to call
initialize() just before another thread called
getHelper(), the synchronized
initialize() method would always finish first. The
synchronized keywords establish a happens-before relationship between the two threads. Consequently, the thread calling
getHelper() would see either the fully initialized
Helper object or an absent
Helper object (that is,
helper would contain a null reference). This approach guarantees proper publication both for immutable and mutable members.
Compliant Solution (Final Field)
The JMM guarantees that the fully initialized values of fields that are declared final are safely published to every thread that reads those values at some point no earlier than the end of the object's constructor.
class Foo { private final Helper helper; public Helper getHelper() { return helper; } public Foo() { // Point 1 helper = new Helper(42); // Point 2 } }
However, this solution requires the assignment of a new
Helper instance to
helper from Foo's constructor. According to The Java Language Specification, §17.5.2, "Reading Final Fields During Construction" [JLS 2015]:.
Consequently, the reference to the
helper instance should remain unpublished until the
Foo class's constructor has completed (see TSM01-J. Do not let the this reference escape during object construction for additional information).
Compliant Solution (Final Field and Thread-Safe Composition)
Some collection classes provide thread-safe access to contained elements. When a
Helper object is inserted into such a collection, it is guaranteed to be fully initialized before its reference is made visible. This compliant solution encapsulates the
helper field in a
Vector<Helper>.
class Foo { private final Vector<Helper> helper; public Foo() { helper = new Vector<Helper>(); } public Helper getHelper() { if (helper.isEmpty()) { initialize(); } return helper.elementAt(0); } public synchronized void initialize() { if (helper.isEmpty()) { helper.add(new Helper(42)); } } }
The
helper field is declared final to guarantee that the vector is always created before any accesses take place. It can be initialized safely by invoking the synchronized
initialize() method, which ensures that only one
Helper object is ever added to the vector. If invoked before
initialize(), the
getHelper() avoids the possibility of a null-pointer dereference by conditionally invoking
initialize(). Although the
isEmpty() call in
getHelper() is made from an unsynchronized context (which permits multiple threads to decide that they must invoke
initialize) race conditions that could result in addition of a second object to the vector are nevertheless impossible. The synchronized
initialize() method also checks whether
helper is empty before adding a new
Helper object, and at most one thread can execute
initialize() at any time. Consequently, only the first thread to execute
initialize() can ever see an empty vector and the
getHelper() method can safely omit any synchronization of its own.
Compliant Solution (Static Initialization)
In this compliant solution, the
helper field is initialized statically, ensuring that the object referenced by the field is fully initialized before its reference becomes visible:
// Immutable Foo final class Foo { private static final Helper helper = new Helper(42); public static Helper getHelper() { return helper; } }
The
helper field should be declared final to document the class's immutability.
According to JSR-133, Section 9.2.3, "Static Final Fields" [JSR-133 2004]:
The rules for class initialization ensure that any thread that reads a
staticfield will be synchronized with the static initialization of that class, which is the only place where static final fields can be set. Thus, no special rules in the JMM are needed for static final fields.
Compliant Solution (Immutable Object - Final Fields, Volatile Reference)
The JMM guarantees that any final fields of an object are fully initialized before a published object becomes visible [Goetz 2006a]. By declaring
n final, the
Helper class is made immutable. Furthermore, if the
helper field is declared volatile in compliance with VNA01-J. Ensure visibility of shared references to immutable objects,
Helper's reference is guaranteed to be made visible to any thread that calls
getHelper() only after
Helper has been fully initialized.
class Foo { private volatile Helper helper; public Helper getHelper() { return helper; } public void initialize() { helper = new Helper(42); } } // Immutable Helper public final class Helper { private final int n; public Helper(int n) { this.n = n; } // ... }
This compliant solution requires that
helper be declared volatile and that class
Helper is immutable. If the
helper field were not volatile, it would violate VNA01-J. Ensure visibility of shared references to immutable objects.
Providing a public static factory method that returns a new instance of
Helper is both permitted and encouraged. This approach allows the
Helper instance to be created in a private constructor.
Compliant Solution (Mutable Thread-Safe Object, Volatile Reference)
When
Helper is mutable but thread-safe, it can be published safely by declaring the
helper field in the
Foo class volatile:
class Foo { private volatile Helper helper; public Helper getHelper() { return helper; } public void initialize() { helper = new Helper(42); } } // Mutable but thread-safe Helper public class Helper { private volatile int n; private final Object lock = new Object(); public Helper(int n) { this.n = n; } public void setN(int value) { synchronized (lock) { n = value; } } }
Synchronization is required to ensure the visibility of mutable members after initial publication because the
Helper object can change state after its construction. This compliant solution synchronizes the
setN() method to guarantee the visibility of the
n field.
If the
Helper class were synchronized incorrectly, declaring
helper volatile in the
Foo class would guarantee only the visibility of the initial publication of
Helper; the visibility guarantee would exclude visibility of subsequent state changes. Consequently, volatile references alone are inadequate for publishing objects that are not thread-safe.
If the
helper field in the
Foo class is not declared volatile, the
n field must be declared volatile to establish a happens-before relationship between the initialization of
n and the write of
Helper to the
helper field. This is required only when the caller (class
Foo) cannot be trusted to declare
helper volatile.
Because the
Helper class is declared public, it uses a private lock to handle synchronization in conformance with LCK00-J. Use private final lock objects to synchronize classes that may interact with untrusted code.
Exceptions
TSM03-J-EX0: Classes that prevent partially initialized objects from being used may publish partially initialized objects. This could be implemented, for example, by setting a volatile Boolean flag in the last statement of the initializing code and checking whether the flag is set before allowing class methods to execute.
The following compliant solution shows this technique:
public class Helper { private int n; private volatile boolean initialized; // Defaults to false public Helper(int n) { this.n = n; this.initialized = true; } public void doSomething() { if (!initialized) { throw new SecurityException( "Cannot use partially initialized instance"); } // ... } // ... }
This technique ensures that if a reference to the
Helper object instance were published before its initialization was complete, the instance would be unusable because each method within
Helper checks the flag to determine whether the initialization has finished.
Risk Assessment
Failure to synchronize access to shared mutable data can cause different threads to observe different states of the object or to observe a partially initialized object.
45 Comments
David Svoboda
dvariable should be private, according to OBJ00-J. Declare data members as private and provide accessible wrapper methods
Fooclass immutable; it just puts stricter controls on the mutability.
dobject, but also has the setDate() and getDate() methods synchronized also be a valid CS?
dvolatile if it is final?
David Svoboda
As if I haven't criticized this rule enough, one more problem: Declaring a mutable thread-unsafe object volatile is not a security flaw per se, it just doesn't make the same guarantees that a volatile primitive type gets. By that argument, you could also insist that arrays never be volatile (CON11-J).
I would guess there might still be reasons to have a mutable or thread-unsafe volatile object, so I'm not (yet) convince that we should restrict volatile to primitive types. What do you think?
Dhruv Mohindra
f.d == nulleven though f.d has already been set by another thread.
Wrt the other point, I am narrowing the title. Last but not the least, this guideline is only a couple of hours old so I'll let it evolve. Meanwhile, comments are welcome.
David Svoboda
Dhruv, allowing your NCCE to implicitly violate OBJ00-J indicates a lack of faith in OBJ00-J...it's a good idea, but only if you feel like it. We want these rules to be more rigorous and precise and we have to convince people that following these rules is critical for their security, even if they don't feel like it. That means the rules need to be tolerable to coders...and that includes ourselves. The phrase "eat our own dog food" is appropriate here. Your choices are:
I'd prefer the latter, but I'll leave the choice up to you. Note that this applies to all fields in all codes in all the rules, not just this one. (but we can fix those later.)
That's a problem. The
java.util.Date.setDate()method is deprecated since JDK 1.1. (and I know we have a rule against using deprecated methods.) Given the code samples you've got, I suggest you use another class.
OK, but that is not particularly bad as far as the code is concerned. The code needs to at least show both the f.d == null and the setting of f.d by that other thread. Right now there is no condition possible b/c no other thread has access to the f object.
I reviewed the conversation we had about concrete vs. general examples, and the conclusion was "there is no black and white answer". So I'm saying you need a more concrete example for this rule. (without saying you need more concrete examples in general).
Huh? This is squishy thinking. In this case s/effectively immutable/mutable, but under strictly controlled circumstances/.
OK. I think it's worthwhile to have my CS listed before yours (that is, a CS with non-volatile
dand synchronized set/get methods). Then your CS that uses volatile d and sync set method appears next and is noted to be faster than mine.
That's a start, but I'm not sure what "facilitating safe publication" means. I just know that declaring an object volatile doesn't provide the same guarantees as a volatile primitive type. Let me rephrase my earlier assertion as a question: Is there any good reason to declare an object volatile, provide it is mutable or thread-safe? It might be best to forbid any such declaration (which was your rule's original intent)...I just don't know.
Finally, whatever we decide for volatile objects would also apply to volatile arrays, hence CON11-J. I still think CON11-J and this rule should be merged together.
Dhruv Mohindra
I have eliminated the idea of "safe publication" and generalized this guideline and explained some terms such as "effectively immutable". I hope these changes address your concerns.
There may be. But this guideline has been renamed to not disallow declaring mutable variables as
volatile, but to synchronize the setters. Bottom line: this guideline shows how synchronization is preferable over the use of
volatilealone, to ensure visibility of objects, and only visibility. This is slightly different from CON01 whose main purpose is atomicity and a secondary purpose - visibility. It is different from CON00 because this one shows the pitfalls of
volatilew/o synchronization when used with mutable objects. The advice of this guideline is not even an exception to CON00 because synchronization can be used in addition to
volatile. These guidelines are complementary. I plan to name this one CON01 and the current CON01 as CON02. Comments?
David Svoboda
Personally, I'd keep all the IDs the same until we do one great renumbering, prob after we are done making our other changes to the Concurrency section.
Dhruv Mohindra
Dhruv Mohindra
From [JPL 06], 14.10.2. Final Fields and Security:
Points to note:
Foo) that have
finalfields (such as
Helper) that are published to other threads before their construction is over are not immutable.
staticfields (
Helper) may be vulnerable if they are published before object construction is over.
I think these are similar to TSM01-J. Do not let the (this) reference escape during object construction. Just that you need an example with class
Foowhich has a constructor that initializes the
helperfield.
Foocan be accessed by multiple threads. It is prematurely published.
I also think the problem is with the words "fully constructed". The "full construction" should also mean that a reference to the object (
Helper) was not published before Helper's own construction concluded.
Problems with current guideline:
helperwhile another thread attempts to retrieve its instance, it may see a default value. Additional synchronization may be required or declaring the variable
volatilemay suffice.
Stringexample). I think the field should be
finaland/or additional synchronization may be required.
Dhruv Mohindra
From, [JSR-133 04]:
One way to have an NCE could be the following but it cannot be fixed. Publishing "this" before or after construction is immaterial.
Can you think of something less contrived in which Foo is non-null but helper is null? I think the quote refers to:
Yet, another example:
Also, static initializers are an exception:
Dhruv Mohindra
In Compliant Solution (volatile):
volatilecan be used to safely publish only when the
Helperobject is fully constructed or when it is immutable. It may not be a CS by itself for this situation and the text does not indicate this currently.
David Svoboda
Agreed. We need the quote in order to 'invalidate' the code. But it should probably become a NCCE.
No, the JPL quote is not relevant. The CS is good not because Helper is static, but rather because it is statically initialized. The JMM guarantees that static initialization ensures safe publication fully constructed objects.
Not sure how this code is relevant. Foo.f is never initialized, and that trumps any other problems with this code.
I've been led to believe that volatile can be used to publish the object, without worrying about a partially-initialized object. Unfortunately, my copy of Goetz is not available right now...if I'm right, the CS clearly needs a citation that it is good.
I still believe the CS would be good if we used an AtomicReference instead.
Dhruv Mohindra
A thread can call
getHelper()and see the field with default values. The CS cannot be insecure.
This comment says that you must not publish the object prematurely. This ruins the guarantees provided by even
final. This is similar to yet different from TSM01-J. Do not let the (this) reference escape during object construction because we are not letting the
thisreference of the current object escape, but the reference of the sub-object under construction. Perhaps this discussion belongs to CON14. This guideline should refer to COn14 in that case (esp. the CS
finalbecause its guarantees depend on whether the sub-object was prematurely published before initialization or not).
Offhand, you can look at [Goetz 07] in the references, pattern #2. Also, see the last NCE of VNA06-J. Do not assume that declaring an object reference volatile guarantees visibility of its members. You'll spot the missing text that should go into CS
volatile.
Dhruv Mohindra
Regarding my first point, is there a reason why the field is not declared
final? It would be fully compliant in that case.
According to JSR-133, 9.2.3 Static Final Fields -
This is safer than using just
staticfields. For example, if the CS had a setter then it would clearly be not thread-safe. The use of
finaltells us exactly that we cannot have a setter.
David Svoboda
I believe I have addressed these comments.
Dhruv Mohindra
It appears that Compliant Solution (thread-safe composition) violates VNA03-J. Do not assume that a group of calls to independently atomic methods is atomic. Between creating the vector and adding an element to it, a thread could call
getHelper()and see the default value of the vector instead of the one after adding Helper(42). It should be possible to declare the vector
final, initialize in the constructor and let the current initialize method just add the helper to the vector.
Dhruv Mohindra
I think you can just remove the hard-coded number 42 and let the constructor accept the num. No need to check if helper is null, anywhere.
David Svoboda
OK, I think the code is ready for review.
WRT your comment, the code would be best with either the null checks & non-hardcoded number, or w/o the null chekcs & with 42. The issue with the null checks is to prevent double initialization, which is a problem distinct from what this rule is talking about. So I opted for simplicity, and removed the null checks. While the rule mentions the double init'n, it should focus on the partial-construction problem.
David Svoboda
Dhruv has created a new CS, nicknamed 'volatile flag' that raises two problems:
initializeddefaults to false (according to the JLS) and is set to true when the constructor completes. Being volatile prevents it from being reordered by the compiler. My question is, if a second thread can see a partially-constructed Helper object, could it see
initializedbefore it is initialized to
false? I know a JVM would have to (1) allocate memory for the object, (2) initialize the fields (3) set the type of the object (to Helper) and (4) call the constructor. At what point could a partially-initialized object be seen? What guarantees do the JLS give us here?
For instance, this would be a problem in C/C++ with pthreads...it is quite possible for a spying thread to see allocated-but-not-initialized memory. But this is mainly b/c C & C++ don't require initialization like Java does.
IME Java concurrency rules always seem to have safe code that violates the rule, and trying to catch all corner cases tends to make for extremely complex rules. So I'm not in favor of modifying this rule. Offhand, I'd personally prefer that this be an exception to the rule, but what do others think?
Dhruv Mohindra
For the record, even though I added it as a CS I have a slight preference to list it as an exception (this code may be hard to maintain and not suitable for a CS in general cases). But what do others think?
According to the description of the issue. a partially constructed object is when a thread observes
nto be 0. This is the default value. Similarly, a thread can only see the default value of the boolean flag which will be
false. The security of the solution lies in the fact that even if you obtain an unintialized
Helperinstance, you cannot invoke any methods on it because of the check.
From JLS, 17.4.5 Happens-before Order:
I think this is a fairly strong statement.
Dhruv Mohindra
We might want to list the public static factory method approach of CON14's Compliant Solution (public static factory method), as a solution/exception in this guideline.
David Svoboda
You're welcome to try adding the example. I don't think it will help, though.
The problem is the initialize() method, whereas the factory method is suppsoed to wrap around a constructor.
Dhruv Mohindra
Foo will be unable to construct the object using its constructor because the constructor is private. So, it has to use the newInstance() method to get an instance of Helper. And this method guarantees that an object will be fully initialized before it is returned.
David Svoboda
That's what the initialize() method is for...to create the Helper object. In the NCCE it doesn't properly guarantee the object is fully initialized, and in the CS's it does. The most we could do is mention that this is similar to a factory method. Again, if you want to take a crack at it, go ahead.
Dhruv Mohindra
Done. I found one source: [Goetz 06]. See Listing 3.8.
David Svoboda
Jaroslav Sevcik says:
David Svoboda
Dhruv Mohindra
nis
final. This was duly fixed before the discussion. However,
helpercan be declared volatile as suggested by everyone. This also means that it fits into Compliant Solution (immutable object, volatile reference). I think we should add to the latter CS that a factory method can be used as well instead of a constructor and completely get rid of the factory method CS.
finalis broken anyway. I think we should get rid of it.
David Svoboda
I got rid of it & merged its text into the following CS. Also beefed up that CS to handle a call to getHelper() preceding initialize().
Dhruv Mohindra
Suggest you change:
to: check if the vector is empty. If not, return element(0). Catching runtime exceptions = questionable practice.
David Svoboda
Done. now getHelper() returns null if called b/f initialize().
Dhruv Mohindra
I think now you violate MET10-J. For methods that return an array or collection prefer returning an empty array or collection over a null value. How about -
Two threads may still see helper as empty but only one will win while trying to add the vector element because
initialize()is synchronized.
David Svoboda
agreed, done.
Dhruv Mohindra
David,
The title Compliant Solution (immutable object, factory method)
is not a very good fit. I had combined it with CS (immutable object, volatile reference) because using a public static factory method was used to guarantee immutability of the object. You still need the reference to be declared volatile. This CS is not distinct from Compliant Solution (immutable object, volatile reference).
Also, the quote:
should occur where it was because this was there to support the sentence "The reference to the helper field should not be published before class Foo's constructor has finished its initialization (see CON14-J. Do not let the "this" reference escape during object construction).".
Also, some of your edits in EX1 do not mean what I want them to mean. We can discuss next time if you like.
Dhruv Mohindra
I struck a compromise. Where I couldn't, I left hidden comments. Feel free to go over them and leave me a comment or edit accordingly. Thanks!
Dhruv Mohindra
A condition can be added to some of the CS classes'
getHelper()method such that
initialize()is called when
helperis null. Subsequently, a non-null Helper can be returned.
Dhruv Mohindra
Another possibility is to add an example where the object is published before deserialization has fully constructed the object. For example, in a custom deserialized form using
writeObject(), the
writeObject()method should be synchronized.
David Svoboda
This rule cannot be violated if Helper is immutable; perhaps we can use this fact to replace some of the CS's here with just one saying "make Helper immutable". (This comment arose from a discussion on VNA01-J. Ensure visibility of shared references to immutable objects)
Dhruv Mohindra
I replied in CON09.
Robert Seacord (Manager)
It seems to me that the Compliant Solution for final field and thread-safe composition is a solution to a different problem than the one represented by the NCE.
Masaki Kubo
To be consistent with the title, I prefer the term "partially-constructed" be changed to "partially initialized".
David Svoboda
Changed.
David Svoboda
Thomas Hawtin sez:
James Ahlborn
In the "CS (Mutable Thread Safe Object, Volatile Reference)", the synchronized block in setN() is completely unnecessary since the Helper "n" field is volatile.
Andrea
In "Compliant Solution (Final Field)" it is said: "the JMM guarantees that the fully initialized values of fields that are declared final are safely published to every thread that reads those values at some point no later than the end of the object's constructor."
From what I understand, I would have written "... at some point no EARLIER than the end of the object's constructor".
I cannot understand what it would otherwise mean.
David Svoboda
Oops, you're right. I adopted your suggestion. | https://wiki.sei.cmu.edu/confluence/display/java/TSM03-J.+Do+not+publish+partially+initialized+objects?focusedCommentId=88884089 | CC-MAIN-2019-39 | refinedweb | 4,458 | 55.64 |
No, really. C++ not only lets you declare a ``static extern "C"'' function, it even requires you to do so in some circumstances. This is another tale of woe from the battles of hapless programmers to write compliant code and stay a part of the C++ programming language freakshow.
static extern "C"
C++ is just like C. And in C, ``static'' is the antonym of ``extern'', at least as far as function declarations go. Saying
static
extern
static extern int f(int);
Actually, the compiler requires it in certain circumstances. Suppose you have some function written in C, which takes a callback argument. For concreteness, I'll talk about qsort. STL has its own sort, so this situation might still appear a bit far-fetched. But not every project can use STL. And when your project has a mature, working, debugged C library, you use it, even from C++.
You tell your C++ compiler about this function:
extern "C" qsort(void *base, size_t nmemb, size_t size,
int (*compar)(const void *, const void *));
qsort
Now you want to write a comparison function for pointers to members of some POD class A. It's just a static function -- you only need it in the one source file (sorry, translation unit) where you call the C function:
static int cmp_A(const void* a, int void* b)
{
const A* aa = const_cast<const A*>(a);
const B* bb = const_cast<const A*>(b);
return aa > bb ? -1 : bb < aa ? +1 : 0;
}
cmp_A
Errm, no, it doesn't. Sun Workshop's CC warns of "anachronism", and asks that you declare cmp_A as extern "C". Why?
extern "C"
Well, it's not the name mangling issue, or anything to do with external linkage. After all, cmp_A is invisible outside this source file (sorry, translation unit) -- it's static!
But ``recall'' from extern "C" that this declaration affects not just linkage, but also calling convention (ABI). Not that any C++ compiler on the face of this God-forsaken planet actually uses a different calling convention for its functions than the platform's ABI guarantees for C. But it might. In effect, C++ regards the extern "C" declaration as also affecting any function pointers taken by that function!
Now, the function qsort is already written. In C. And it expects a function pointer to a function with C calling convention. So to pass it a function from C++, that function must have the C calling convention -- extern "C". And since we don't want it visible outside its source file (translation unit, sorry), it must also be static. So we declare cmp_A to be ``static extern "C"''. And another syntactic monstrosity is born.
premchai21 notes (correctly, of course!) that modern ISO C++ deprecates this use of "static", preferring instead the use of an unnamed namespace. The true modern style would instead use
namespace {
extern "C" int cmp_A(const void* a, int void* b)
{
const A* aa = const_cast<const A*>(a);
const B* bb = const_cast<const A*>(b);
return aa > bb ? -1 : bb < aa ? +1 : 0;
}
}
Note the pretty "extern" in there. It's still not an external function -- the nameless namespace ensures it's invisible outside the filetranslation unit.
namespace
Log in or register
to write something here or to contact authors. | http://everything2.com/title/C%252B%252B%253A+static+extern+%2522C%2522 | CC-MAIN-2013-48 | refinedweb | 542 | 64.51 |
IE8 May Be End of the Line For Internet Explorer 380.
Misleading headline, and ActiveX (Score:5, Insightful)
1. Headline should read, IE8 May Be End of the Line for Internet Explorer Engine .
2. I don't see any reason why ActiveX apps couldn't be sandboxed like anything else. Granted, it has deep hooks into the OS-- but if nothing else, given how beefy computers are going to be by the time IE9 comes out, you could give each ActiveX app its own perfectly compatible virtual copy of XP+IE8 to run on, and just parse the result into IE9 format. Destroy the virtualized OS+browser when the app closes.
Moore's Law makes some problems easy, yay.
:)
Re:Misleading headline, and ActiveX (Score:5, Insightful)
Moore's Law be damned. People have been using this excuse for years to write bloated, crappy software. How about for once we don't try to predict the future. Instead, lets write the code for todays hardware. People seem to forget that we have sold way more computers than people in the world... no reason to replace them all to run IE9.
Re: (Score:2, Interesting)
Mod parent insightful.
A browser designed for a netbook ought to run just fine on my aging laptop.
Re:Misleading headline, and ActiveX (Score:4, Insightful)
Exactly..and Moore's law isn't exatly as reliable as it was 15 years ago when talking about a direct improvement to the desktop computers speed.
Re:Misleading headline, and ActiveX (Score:5, Informative)
Exactly..and Moore's law isn't exatly as reliable as it was 15 years ago when talking about a direct improvement to the desktop computers speed.
Especially since it never was about speed, only the density of transistors on a chip. Which, through clever architecture, smart compilers, and good programming can result in more speed.
Re:Misleading headline, and ActiveX (Score:5, Funny)
Yes, especially since the emancipation proclamation was nearly 130 years ago.
Re: (Score:3, Insightful)
Writing software to force people to buy new PC's has been an integral part of Microsoft's strategy for years, it's only recently begun to bite them on the ass with Vista and the credit crunch happening at the same time. People keep forgetting that around 80% of Windows sales come from new PCs pre-installed with the current version of Windows that Microsoft are giving customers the choice of.
Mobile computing educates them (Score:5, Informative)
Do you know what hit them very seriously? I mean the coders laughing to vendors like Opera for struggling not to code CPU and speed dependent stuff?
Mobile computing. It is like ultimate punishment for them. Do you remember those fanatics calling people to ''buy more RAM'' no matter what their issue with memory is? Top of the line smart phone comes with 512MB RAM or something and 400 Mhz ARM CPU. Opera ships 9.5 beta which runs the exact same engine as Desktop version to 256MB RAM having, 200Mhz CPU UIQ3 devices with zero vendor support.
I know some professional OS X developers keeping a G4 Mac Mini no matter how many xeons they have, just to make sure their application runs on low end computers fine. So far, thanks to their wise decision, their software gets good feedback not just from low end but very high end computers too. If it works on low end, it will rock on high end. Trust me, some of the ''cool guys'' out there still couldn't figure this basic rule.
When Webkit proved to work on Nokia S60 Symbian devices and got very good feedback from users, I said Webkit is the future. What mattered was, can the code run under 128MB RAM, completely alien OS? S60 browser proved it.
Re:Mobile computing educates them (Score:5, Interesting)
I always kept saying that every developer should be forced to use a slow machine, at least where compilation and automated tests are not involved. If you sit your butt at a fast box, you simply never notice anything is unacceptable slow.
I've personally caught myself ignoring complaints that a piece of my code is slow and noticing it only after seeing it crawl on a slow machine myself.
Re:Misleading headline, and ActiveX (Score:5, Interesting)
Or even better: let's write code for yesterday's hardware. Not everyone has a computer of today, and the more computers that can use your software, the better.
Re: (Score:3, Interesting)
I see this argument occasionally on
/. and always find it more than a bit puzzling: if software that you think is "bloated" continues to be used (and to be sold to people willing to pay for it), then it must be of more value to its users than whatever hypothetical small and beautiful software that you're imagining. In fact, Joel Spolsky wrote a pretty good article called Bloatware and the 80/20 myth [joelonsoftware.com] attacking the very line of th
Re: (Score:2)
Microsoft will rebrand and slap a bunch of blinkenlights on their next browser, and then pay other strategic entities huge sums of money so that those others can shove it down our throats like they're trying to do with Silverlight.
Re: (Score:3, Informative)
When my daughter came home from first day of computer class in kindergarten, she sat down at her computer (iMac G4) she poked around for a few minutes and then burst in to tears. She had a new website she wanted to show us but couldn't find the 'blue e' to get to it. I explained how web sites could be viewed by any web browser. She already had Firefox and Safari in the dock and once I showed her how to type in the web addy, she was good to go. Only have to explain a concept once to a kid, if you catch them
Re: (Score:3, Interesting)
Then again it could be much worse. One girl I tutored used to use the File|Open dialog box in MS Word for ALL her file management. Just goes to show that if you make it possible, someone will do it.
Re: (Score:2)
, you could give each ActiveX app its own perfectly compatible virtual copy of XP+IE8 to run on, and just parse the result into IE9 format
Sure, but what about the initial download + updates. Already in America not everyone can get high speed internet, even a 700 MB ISO file takes a while to download on many DSL connections so how are you going to download this Gigabyte+ browser compatibility package? It takes up about 1/4ths of a DVD so even including it on Windows 8 install media isn't going to really fly unless there is some rapid migration to Blu-Ray which I just don't see hapening.
Moore's Law makes some problems easy, yay. :) (Score:5, Funny)
Re: (Score:3, Insightful)
A company named Apple tried to save itself from the amazingly huge work and tried to modernise and secure MacOS. It took years and a top of the line IT director to admit it won't happen.
Their plan was exactly the same, sandboxed MacOS virtual machines.
They accepted that sad fact, (probably) mailed to their software vendors saying ''We are going with NeXT''
As MS is known for not admitting such facts and keep shipping that biggest PR disaster of all times named IE (I mean it), they may go with your method. Th
Re:Misleading headline, and ActiveX (Score:5, Insightful)
Really? Because it's not clear that you do. Seriously, would it kill people to bring the issue to the surface in an intelligent manner that might benefit those of us who are outside the loop on this? I'm not asking for a thesis but rather a simple dialog that can be researched by people who are interested in learning more about the issue at hand.
Re:Misleading headline, and ActiveX (Score:5, Funny)
Re: (Score:2)
i LOL'ed
Re:Misleading headline, and ActiveX (Score:5, Informative)
Re:Misleading headline, and ActiveX (Score:5, Insightful)
Needing information and having full control over the system are two different things. If all activex needs is the information, then let it have read only access. Now since most activex programs want a lot more then read only access, this will not work. The question is was it lazy programming that required full root/admin access in order to work or something else?
Some programmers feel that unless they have complete control they cannot get anything done. In development this is fine. Once in testing and production stages why do people insist that they still need to run as root/admin? Run as the least privileged level as you can.
Re:Misleading headline, and ActiveX (Score:4, Informative)
Needing information and having full control over the system are two different things. If all activex needs is the information, then let it have read only access.
Which is already enough to be a humongous security breach.
Re: (Score:3, Interesting)
This full admin lazy programming thing drives me nuts.
I did some part time IT work at an agency, and I was severely annoyed when I found out that their booking system REQUIRES local admin privileges to run.
It needed local admin... TO INTERFACE WITH AN SQL DATABASE ON A SERVER.
I intended for all the users to run with limited local rights, since they had a high intern turnover rate and interns can't be trusted... but screw security, some program originally written in the Win98 days still has this idiocy in a
Re:Misleading headline, and ActiveX (Score:5, Funny)
Re:Misleading headline, and ActiveX (Score:5, Insightful)
Re: (Score:3, Interesting)
Sorry, but when have you seen the last ActiveX anything?
The only plug-ìns that are widely spread are Flash and Java. They both can run as NSplugin. So if IE9 adopts that interface, and maybe another new one, they are good.
Korean websites use TOOOOOOOOOOONS of ActiveX. If you break ActiveX, then you basically break the entire Korean-language internet.
Re:Misleading headline, and ActiveX (Score:5, Insightful)
You can do the same thing with a signed Java Applet. OMG! Java is tightly integrated to the OS!
Re: (Score:3, Funny)
OMG! Java is tightly integrated to the OS!
Yanno, spilling coffee on your computer is generally _not_ a good thing.
Re: (Score:3, Insightful)
Using quasi-mystical language like "deep connections" in a technical discussion is a good sign the person doesn't know what he's talking about.
ActiveX applications have no more "connections" than any other Win32 app.
Re:Misleading headline, and ActiveX (Score:5, Funny)
ActiveX applications have no more "connections" than any other Win32 app.
But I was looking at ActiveX's facebook page and it had like a million friends in common with Windows - isn't that a deep connection?
Re:Misleading headline, and ActiveX (Score:4, Funny)
ActiveX applications have no more "connections" than any other Win32 app.
But I was looking at ActiveX's facebook page and it had like a million friends in common with Windows - isn't that a deep connection?
You're probably thinking of eHarmony.
Re:Misleading headline, and ActiveX (Score:4, Funny)
Yeah, but it's also in an "it's complicated" with Trojans. Kind of a problem, really.
Re:Misleading headline, and ActiveX (Score:5, Funny)
A better technical explanation would be that ActiveX can lick Windows' bellybutton from the inside.
Re:Misleading headline, and ActiveX (Score:5, Informative)
Ever been to Windows update? That's an ActiveX control. How does it get so much information about your computer? By it's deep connection to the OS. ActiveX CANNOT be sandboxed because it needs too many things to be accessible in the OS. Almost all ActiveX components make use of that integration.
XP has not relied on the browser-based Windows Update for several years. I imagine the OS-side Windows Update/Microsoft Update may very well be based on the same code; but it's certainly not being triggered by a visit in a web browser to an external website for goodness sake.
ActiveX needs to die, plain and simple - the past decade has shown how fundamentally flawed the ActiveX concept is. Just think about all the horrible security exploits that wouldn't have happened over the past decade if ActiveX had never existed.
Re:Misleading headline, and ActiveX (Score:4, Interesting)
Even the decade before it existed it was known how stupid an idea it was. Remember this was the time when one of the main talking points about java was it running in a sandbox.
Even a librarian warned me about the danger of ActiveX just proir to it's release (training session on using search engines for academics). I have never understood why it was released. Just when everyone had learned how to disable it they had to turn it back on to get OS updates.
Re: (Score:3, Interesting)
MS have already made and released a sandboxable and verifiable COM.
They called it COM2+ for a while, and then released it as
.Net.
Re:Misleading headline, and ActiveX (Score:5, Informative)
A lot of people seem to have little-to-no understanding as to what ActiveX is. It is a plug-in infrastructure based on COM, nothing more, nothing less. It allows for a library to provide a visual component that can be loaded by another application to display content. That plug-in infrastructure was used in Internet Explorer to load browser plug-ins. Those plug-ins run within the browser process under the current user security context. There is absolutely no functional difference between ActiveX in Internet Explorer on Windows or an XPCOM plug-in for Firefox on Linux.
The problem is that in both cases those plug-ins have to have a fairly wide amount of functionality. If that plug-in is intended to display video then it has to be able to work with the video API of the platform in question. As such these plug-ins generally cannot be sandboxed too tightly otherwise they would no longer be able to function and their usefulness of being able to extend the functionality of the browser is lost. [mozilla.org]
This website lists the XPCOM plug-ins available for Firefox. There are quite a few more if you follow the link to the bottom. If a vulnerability is identified in ANY of those plug-ins a successful exploit will be fully capable of trashing the profile of the current user and there is nothing that Firefox can do to stop it, even on Linux.
Re:Misleading headline, and ActiveX (Score:4, Interesting)
Re:Misleading headline, and ActiveX (Score:4, Insightful)
There is absolutely no functional difference between ActiveX in Internet Explorer on Windows or an XPCOM plug-in for Firefox on Linux.
Except for one crucial thing: IE provides content authors with the ability to advertise ActiveX plugins required to view the content, which pops up the window on the client asking the user whether he wants to install the plugin. And it's damn easy to trick a user into clicking "yes". In a technical sense, it's secure. In practice, because of social and psychological factors, it is a very convenient attack vector.
Re: (Score:3, Interesting)
And Firefox does the same thing. If I don't have Shockwave installed and I navigate to a website that contains Flash content I will be presented with a little yellow information bar telling me that there is content on the page that requires a plug-in and asks me if I want to install that plug-in. Is there any browser that doesn't do this by default?
There's still a difference. In Firefox, if you click "yes", it will send you to Adobe's download page for Flash; but you still need to initiate the download manually, and then run the downloaded installer. In IE, if you click "yes", it immediately downloads the ActiveX binary and executes it, all by itself.
Re:Misleading headline, and ActiveX (Score:4, Insightful)
And then you'll realize that you just reinvented
Re:Misleading headline, and ActiveX (Score:4, Funny)
Last Post! (Score:3, Funny)
Oh wait...
Nope, not webkit... (Score:5, Funny)
...they're going to buy Mozilla. Mark my words.
:P
Re: (Score:2)
What a wasted buy. Firefox is dead, long live Iceweasel!
Can't kill an OSS project by buying it.
Re: (Score:3, Informative)
It was renamed IceCat
Re:Nope, not webkit... (Score:5, Funny)
Those fucking weasels. At least they didn't call it LOLcat.
Re: (Score:2)
It's not as bad as "gNewSense [gnewsense.org]". On many levels...
Re: (Score:3, Interesting)
Nokia basically knew (a good guess) that Apple will enter smart phone market and become the ultimate rival to their smart phone business but that didn't stop them from implementing Webkit S60 Browser to near hundred million phones giving Apple the ultimate credibility.
Of course, Nokia is a company which is run by market rules. If there is an opportunity, no matter where it comes from, they will pick it.
Somehow, MS can keep acting like a spoiled kid and keep pushing a technological and PR disaster since firs
Please kill ActiveX (Score:5, Insightful)
The sticking point will be what Microsoft does about compatibility for ActiveX apps.
KILL IT!!!
Seriously. Since IE8 does it, people will just keep using that for the next decade...
If they don't kill ActiveX after IE8, we'll be stuck with it even longer than that. Since it's going to take 10 years to actually die, please start the process now, Microsoft.
Re: (Score:2)
Although I agree with your KILL IT sentiments on principle, in what way are we stuck with it even today?
You don't have to use IE, and if you use windows you can't uninstall it, but you can lock it down so it's less of a security hole.
That just leaves developers...but I don't remember the last time I saw a site that used ActiveX.
I heard that some banks do, though that would be one ghetto bank. And apparently a load of South Korean websites use it, so that's pretty limited damage if it goes the same way as ev
legacy hardware (Score:2)
I for example have a couple of panasonic IP cameras that use it in their internal webserver to display motion video to the end user.
Re:Please kill ActiveX (Score:4, Informative)
In my experience ActiveX seems to be used most often in internal business applications (intranets). When you're on a homogeneous environment it's easy to build for the specific platform. Using ActiveX often allowed for continual updates without deployment issues. Thankfully it doesn't appear to be popular for new projects, but there's a lot of old business systems out there.
Re: (Score:2)
I think the main users of active x in IE are intranet sites/applications.
Re: (Score:2)
I don't remember the last time I saw a site that used ActiveX.
Windows Update?
Re: (Score:2, Informative)
Re: (Score:2)
KILL IT!!!
How can you kill that which does not live?
Re:Please kill ActiveX (Score:5, Funny)
How can you kill that which does not live?
By using sudo:
...
sudo kill -9
Re:Please kill ActiveX (Score:5, Funny)
You've clearly never tried to kill a zombie process.
Re:Please kill ActiveX (Score:5, Funny)
> > How can you kill that which does not live?
> By using sudo:
...
> sudo kill -9
Nope. A process that isn't alive is a zombie. And kill -9 won't kill a zombie. We need a grenade_launcher command. After all, to quote the old Quake manual:
"Thou can not kill that with doth not live. But you can blow it to chunky kibbles."
Plays for Sure (Score:4, Insightful)
The sticking point will be what Microsoft does about compatibility for ActiveX apps.
How sticky are we talking? Sticky like trying to make PlaysForSure compatible with the Zune? [slashdot.org] Sticky like ongoing support for MSN Music? [slashdot.org]
If Microsoft has taught us anything, it's that today's lockin is tomorrow's lockout. The day MS decides that ActiveX no longer serves their purposes is the day that every site requiring ActiveX is out of luck.
ActiveX won't matter (Score:5, Insightful)
Given the compatibility issues that ActiveX has in IE8, then it probably won't matter what Microsoft will do in the future. In all reality no site should be depending on ActiveX. If it breaks without it, then fix the site.
Re:ActiveX won't matter (Score:5, Insightful)
Given the compatibility issues that ActiveX has in IE8, then it probably won't matter what Microsoft will do in the future. In all reality no site should be depending on ActiveX.
No external public facing site should rely on activeX. There is really nothing wrong with internal enterprise apps using it.
If it breaks without it, then fix the site.
You mean build the enterprise intranet application from scratch? When its working perfectly fine exactly the way it is? That will be a pretty tough sell.
Re:ActiveX won't matter (Score:5, Insightful)
> No external public facing site should rely on activeX. There is really nothing wrong with internal enterprise apps using it.
Um, yes there most certainly is a MAJOR problem with internal enterprise apps using it. It means that everyone is chained to running MS-Windows and IE *only* on the desktops and every possible device that connects to that internal enterprise application. Just because you might not have a choice with what is running on the server doesn't necessarily mean you want to have no choice for the client.
Perhaps a company might want some additional choice.
Re: (Score:2)
Actually every single IE plugin uses ActiveX (Flash, QuickTime, Java, etc.) Any future version of IE will likely have some ActiveX support for legacy plugins.
This is also the reason Google Chrome also supports ActiveX.
Thinks like an os, eh? (Score:5, Funny)
Given their history, this could be pretty funny.
Re: (Score:2, Funny)
They're doing it for the lulz.
Re:Thinks like an os, eh? (Score:5, Funny)
WebKit?! (Score:5, Insightful)
"Some are still claiming that Microsoft will go with WebKit"
Microsoft will never allow the browser that ships with Windows to become a commodity. They will go with Gazelle or whatever they develop that's as incompatible to official standards as possible while still being called a web browser engine.
Their goal is lock-in. A standards-based engine would negate that.
Re: (Score:3, Insightful)
If, for instance, MS decided to use webkit; but push Silverlight, you could easily end up with an equivalent situation.
Re: (Score:2)
But they have failed to do lock in, and if they try they will get shut down.
They show signs of learning to keep at when they do well and sell to that market instead of trying to lock in at the puny application layer. By putting an OS on almost every box, they are getting paid to be the gate keeper.
the 1000 year view MS had isn't panning out, and all the people that bought into it when the document was created are leaving MS.
Re: (Score:3, Insightful)
>But they have failed to do lock in, and if they try they will get shut down.
Wrong. They have failed to lock in PUBLIC facing web sites. But they have done a MARVELLOUS job of lock-in for corporate web applications and inside apps with IE. Trust me, I have fought that monster over and over again.
Re: (Score:2)
More likely Gazelle is a ruse, as is interest in WebKit. I wouldn't be surprised if MS attempted a hostile takeover of Opera. Opera doesn't have that much usage share among desktops/laptops, but its share on cell phones and other mobile devices is huge.
They might.... (Score:2)
Their goal is lock-in. A standards-based engine would negate that.
Honestly, I've agreed with you up until now. Spending resources to play catch-up with what Webkit and Gecko have been able to do for years doesn't make any sense at all... unless your goal is to depart from those implementations.
However, I've wondered if someday, the resource logic wouldn't occur to Microsoft, or the trident codebase wouldn't become such a problem that it'd become stronger. They don't need to have their own rendering engine t
Re: (Score:3, Interesting)
> Their goal is lock-in. A standards-based engine would negate that.
True enough, but they are learning of late. They were so hellbent on pushing OOXML they perverted the ISO. But enough people stood firm and resisted so they are putting ODF support into the next Office service pack. We will see if they manage to put a sting into it. I'd bet they won't make it possible to set ODF as the default save format. Or ensure subtle conversion errors force large instituitions to not use ODF as their primary i
Coming full circle? (Score:2)
First, Microsoft tried to make the browser part of their operating system, without paying much attention to security. Now, they're trying to make a browser into an operating system with security first in mind?
Looks like an about-face if you ask me...
Funny how the vendor of one of the world's most insecure operating systems now considers that they're going to one-up the competition with the most secure browser / operating system? I guess they'd have an excellent track record of finding out what not to do...
Re:Coming full circle? (Score:5, Insightful)
``Funny how the vendor of one of the world's most insecure operating systems now considers that they're going to one-up the competition with the most secure browser / operating system?''
I wonder if Windows is still one of the world's most insecure operating systems. Microsoft have certainly been working hard to improve things, which is more than I can say for many other operating system vendors. Meanwhile, Linux user seem to be content pointing and laughing at Microsoft's efforts and pointing out that Linux is so much more secure.
I won't make any claims about which operating system is more secure than another operating system (because I think it is fundamentally impossible to measure, let alone to know), but if I see that Microsoft is introducing things like address space layout randomization and non-executable stacks, I have to wonder why those features aren't in other mainstream operating systems yet. OpenBSD has done a lot of pioneering work already, but when will we see the day that all of Debian is compiled with -fstack-protector and ships with PaX enabled?
Re:Coming full circle? (Score:5, Informative)
Meanwhile, Linux user seem to be content pointing and laughing at Microsoft's efforts and pointing out that Linux is so much more secure.
Because it is. There. I said it.
The relatively simple, understandable Unix security model has a very long history, and has grown gracefully as the strength, power, speed, and ability of the individual computers have. Everything is a file, and all files have the three permissions: Users, Groups, and Other. Each of these can have read, write, and execute permissions. Simple, understandable, easy to enforce. It's so taken for granted as such that it's routinely used in embedded devices (such as routers) where updates are few and far between, yet they are rarely, if ever, compromised.
Compare/contrast that with the Windows security model, where there are actually alternate file spaces within the existing file system. With the Windows API, it's trivial to save a file that's in an alternate namespace and thus cannot be found with *any* normal Windows system call. There are many examples of strangeness like this!
There was a recent article I read about the confessions of a grey-hat programmer... he describes Windows as incredibly complex, labyrinthine, and basically impossible to secure well. He laughed at so-called "security vendors" like anti-virus.
Re:Coming full circle? (Score:5, Informative)
Everything is a file, and all files have the three permissions: Users, Groups, and Other.
Don't forget the sticky bit! Much as one might like to, let's not forget that the "simple Unix permissions" included one Hell of an egregious security flaw.
there are actually alternate file spaces within the existing file system. With the Windows API, it's trivial to save a file that's in an alternate namespace and thus cannot be found with *any* normal Windows system call..
he describes Windows as incredibly complex, labyrinthine, and basically impossible to secure well.
Vista clearly lost the thread, going for security through complexity, but any OS that doesn't have a read-only kernel is impossible to secure. Any OS that does have a read-only kernel is impossible to patch. No OS can secure itself. Scanning for modifications to kernel bits from a hardware-protected hypervisor is the only way, but as long as "Trusted Computing" is used for evil, we can't get there.
Re:Coming full circle? (Score:5, Interesting).
This... is actually not the whole story.
NTFS is actually a case-sensitive file system. You can illustrate this by installing Services for Unix. This is an alternative subsystem that doesn't go through the normal Windows API (or the DLLs implementing it) and collection of Unix programs that have been "ported" to it. Once you install this, programs that are part of SFU are able to create files with the same case-sensitive name but different case.
Instead, the reason you normally can't do this is because the DLLs that are part of the Windows subsystem (the one providing the normal Windows API) hides this case-sensitivity in concert with the file system driver. (IIRC, open commands in the driver get a flag saying whether to be case-sensitive or not.) Instead of making calls through the Windows API, you can either use another subsystem like SFU or make native system calls directly (though that interface isn't supported).
Finally, the implementation of the Windows API is such that if you create two files with different case but the same name, only one will be visible through the Windows API, at least with NTFS's implementation of all of this.
This means that if you want to write security software for Windows, to catch malware written by people who know about this hole, you need to make API calls to an undocumented interface if you don't want to require people to install SFU. (Of course, security software does so much other stuff that's even worse that's hardly a drop in the bucket.)
Doesn't microsoft say this about everything? (Score:2).
Doesn't Microsoft scream "This one's WAAAAAY more secure than the last one!" about everything they release? When has that actually meant anything? Sure, I'd take Windows XP over Windows 95, but it's not very hard to do better than their old lousy products. Making the claim that it'll be more secure than Firefox or even Chrome, that's a bold statement and I doubt they'll be able to back it up. Plus all the security in the world is useless if the thing doesn't conform to any web standards.
Also, are they chang
Re: (Score:3, Informative)
Gazelle [microsoft.com] con
IE8 may be end of the line for Trident (Score:5, Insightful)
Re: (Score:3)
And when they click on "the internet", a window pops up that says Internet Explorer on the top, and probably takes them straight to MSN, where they can check their email through Hotmail. It's all part of the MS brand, and they're not about to toss any piece of it.
Good marketing is a lot like whale hunting - you might not notice one or two small harpoons/eleme
Netscape do over (Score:2)
It may contain a core of truth (Score:2)
IE has failed to do what it was designed for, dominate the standards. Internet Explorer's aim was to change the standard from the open w3HTML to MSHTML and use it to bind "The Internet" to Windows and Microsoft as its Autocrat. Now with the rise of Firefox and open standards another attempt to control the standards will only break old (IE-only) sites therefore MS has decided to throw in the towel(or so is the theory) and stop working on its rendering engine. The use of Webkit is probably because it's a wide
Re: (Score:2)
Gecko is licensed under MPL, GPL, and LGPL -- two of these allow you to use it in closed-source software.
Browser as a milli-application (Score:2, Interesting)
I am obsessed with microkernels. This idea's been in my head for years, since I looked at how KDE sandboxes Flash and thought, "Hey, this should be for every piece of the whole application!"
Web Brower Like as OS? (Score:2)
>Microsoft will instead adopt Gazelle, Microsoft Research's brand-new engine that thinks like an OS.
Great, just what we need, a single-platform browser that thinks like an OS- something that will further guaranteed web "sites" designed in a manner that will only work with MS-Windows and their own browser. Been there, done that.
Nobody Will Use IE By Version 9 (Score:5, Interesting)
I would also argue that a lot more 'dumb consumers' (people like my parents) are buying Macs now to be trendy which will help IEs market share drop.
Also has anyone used IE8 yet and tested sites out on it? I've used it and it rendering engine is pretty terrible, even when set in emulate IE7 mode which then introduces a complete new set of rendering bugs.
Hypothetical news? (Score:5, Informative)
The author is effectively saying his story is not credible! Slashdot is supposed to run with a hypothetical situation about IE8 demise instead of commenting on real news? It should be fun scanning through these comments to find out who bites (not the big one
Russian Roulette with a Fully Loaded Gun (Score:4, Interesting)
This is obviously a dream, but it would be nice to have some sort of standard system for Internet Cloud and Browser software and hardware not unlike the telco and cellular market. There would still be billions to make for all of the Tech companies.
Re:Russian Roulette with a Fully Loaded Gun (Score:4, Interesting)
The
.Net framework is very closely tied to the IE engine
In what way is
.NET tied to IE? WPF doesn't use Trident at all, and that's the only thing I can really think of that might be in .NET that could be tenuously tied to IE. So what am I missing?
ActiveX Must Die (Score:4, Insightful)
The sticking point will be what Microsoft does about compatibility for ActiveX apps.
No sticking point... ActiveX needs to die.
What will they do with ActiveX? (Score:2)
Hopefully they'll do the right thing: deprecate it as of IE8's release, so people have plenty of warning, and start releasing tools for those still stuck with it to migrate it something perhaps not quite so fundamentally flawed.
"myriad plug-ins" Heh, yeah right (Score:3, Insightful)
"This new engine will supposedly be more secure than Firefox or even Chrome, making copious use of sandboxing to keep its myriad plug-ins isolated and the overall browser process model protected.'
IE doesn't have any plugins, does it? At least, if it does, they're nagware garbage compared to the truly myriad plugins for Firefox. Really, if it wasn't for FF add-ons, I doubt it would have even a half percent share.
How to make 30% of planet hate a browser? (Score:4, Interesting)
Have a stupid blogger who could say things like ''This new engine will supposedly be more secure than Firefox or even Chrome''
That is 30% of entire Web browser market, you have guaranteed that they will do everything to joke about your code without being even released to public.
Also very advanced coders who are talented enough to work on Mozilla or Google will come up with real information debunking your allegations. They may ask a very basic question: ''How can people review your code?''. Mozilla, Google and even Apple has answer, you don't.
Clippy? (Score:3, Interesting)
Enterprise pipe dreams (Score:3, Insightful)
By the time IE8 is EOL'ed, I hope ActiveX will be long gone.
Just like COBOL is. | http://tech.slashdot.org/story/09/03/10/1942232/ie8-may-be-end-of-the-line-for-internet-explorer | CC-MAIN-2015-27 | refinedweb | 6,194 | 62.88 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.